entry_id: http://arxiv.org/abs/2307.01764v1
published: 20230704150542
title: Knowledge-Aware Audio-Grounded Generative Slot Filling for Limited Annotated Data
authors: Guangzhi Sun, Chao Zhang, Ivan Vulić, Paweł Budzianowski, Philip C. Woodland
primary_category: cs.CL
categories: cs.CL
Knowledge-Aware Audio-Grounded Generative Slot Filling for Limited Annotated Data
Guangzhi Sun et al.
Guangzhi Sun^1 (gs534@cam.ac.uk), Chao Zhang^1,3 (cz277@tsinghua.edu.cn), Ivan Vulić^1,2 (ivan@poly-ai.com), Paweł Budzianowski^2 (pawel@poly-ai.com), Philip C. Woodland^1 (pw117@cam.ac.uk)
^1 University of Cambridge Department of Engineering, Trumpington Street, Cambridge, CB2 1PZ, United Kingdom
^2 PolyAI Ltd., 10 York Rd, London, SE1 7ND, United Kingdom
^3 Tsinghua University, 30 Shuangqing Rd, Beijing, 100190, China
Corresponding author: Guangzhi Sun, supported by a Cambridge International Scholarship from the Cambridge Commonwealth, European & International Trust.
Manually annotating fine-grained slot-value labels for task-oriented dialogue (ToD) systems is an expensive and time-consuming endeavour. This motivates research into slot-filling methods that operate with limited amounts of labelled data. Moreover, the majority of current work on ToD is based solely on text as the input modality, neglecting the additional challenges of imperfect automatic speech recognition (ASR) when working with spoken language. In this work, we propose a Knowledge-Aware Audio-Grounded generative slot filling framework, termed KA2G, that focuses on few-shot and zero-shot slot filling for ToD with speech input. KA2G achieves robust and data-efficient slot filling for speech-based ToD by 1) framing it as a text generation task, 2) grounding text generation additionally in the audio modality, and 3) conditioning on available external knowledge (e.g. a predefined list of possible slot values). We show that combining both modalities within the KA2G framework improves the robustness against ASR errors. Further, the knowledge-aware slot-value generator in KA2G, implemented via a pointer generator mechanism, particularly benefits few-shot and zero-shot learning. Experiments, conducted on the standard speech-based single-turn SLURP dataset and a multi-turn dataset extracted from a commercial ToD system, display strong and consistent gains over prior work, especially in few-shot and zero-shot setups.
* A knowledge-aware audio-grounded (KA2G) generative slot-filling framework is proposed for use with limited annotated data for task-oriented dialogue (ToD). KA2G formulates slot filling as a language generation task with natural language prompts. Slot value generation (SVG) is also grounded based on speech input via an ASR module in addition to a pre-trained language model (PLM) in order to give robustness to recognition errors.
* KA2G integrates external contextual knowledge by using two tree-constrained pointer generator (TCPGen) components, one for ASR and one for SVG, with shared prefix-tree encoding networks. The use of TCPGen greatly benefits the KA2G slot-filling performance, especially on rare and unseen entities and unseen slot types.
* Experiments on the SLURP dataset with speech input showed that the proposed KA2G framework can produce state-of-the-art slot-filling results. KA2G was further evaluated on an in-house multi-turn ToD dataset, CONCIERGE, to validate the effectiveness of KA2G in real-world applications.
* Large and consistent improvements with the full KA2G framework were obtained over a standard pipeline-based ToD baseline on both datasets. The improvements were most prominent for rare and unseen entities on both datasets, with a 4.6% absolute SLU-F1 increase for few-shot entities, an 11.2% increase for zero-shot entities and 13.6% increase for unseen slot types in SLURP. Meanwhile, KA2G improved more than 20 joint goal accuracy (JGA) points in multi-turn evaluation on CONCIERGE. The importance and contributions of the two TCPGen components were verified in a series of ablation studies and other analyses.
* The proposed KA2G framework has a number of key differences in the use of TCPGen in ToD compared to the previous conference paper <cit.>:
* Rather than formulating SLU as a sequence-tagging problem which is an audio-grounded extension of text-based methods that are only able to handle predefined slot types, KA2G handles slot-filling as a generative task that depends on both audio input and a knowledge base. It is able to handle an open set of slot types using natural language queries and generates natural language slot values. We also demonstrate that KA2G is more robust to ASR errors and achieves better performance.
* While TCPGen and structured knowledge were mainly considered from the ASR perspective in <cit.>, in KA2G, a stacked TCPGen structure with a shared tree encoding network is adopted. Notably, TCPGen is applied to SVG to guide the generation process using the most relevant knowledge from multiple ASR alternatives.
Keywords: slot filling, spoken language understanding, audio-grounding, contextual biasing, knowledge base, generative model, limited data, few-shot, zero-shot
§ INTRODUCTION
Slot filling, as a crucial natural language understanding component of task-oriented dialogue (ToD) systems, aims at filling in the correct value for predefined slots (e.g. restaurant and hotel names) <cit.>. As manual fine-grained annotation for slot labels is expensive, time-consuming, and usually requires domain expertise <cit.>, increasing demands have been put on the performance of slot-filling systems under few-shot or even zero-shot learning setups <cit.>. Following the now-prevalent use of large Transformer-based pretrained language models (PLM) <cit.> for transfer learning across many NLP tasks, PLMs have also been widely adopted in ToD for slot-filling tasks with limited labelled data <cit.>.
More recently, other text-based approaches have reformulated slot filling as a question-answering (QA) or a sequence generation task, in order to further exploit the power of QA-oriented and generative PLMs <cit.>, especially in low-data scenarios <cit.>.
However, all these approaches operate directly on `perfect' text input, thus overestimating the performance of speech-based ToD systems where a loss in performance might occur due to imperfect automatic speech recognition (ASR) <cit.>. Imperfect ASR output can especially harm slot filling that deals with entities infrequent in the general language (e.g., atypical personal, restaurant or hotel names) and is even more pronounced in situations with limited annotated data.
While very recent research has started to explore end-to-end slot-filling tasks for ToD with speech input <cit.>, in this work we focus on a particularly challenging situation which is typically met in production: limited annotated data with many rare entities. Therefore, we propose KA2G, a Knowledge-Aware Audio-Grounded generative slot-filling framework which is tailored towards improving the robustness and performance of slot filling with spoken input.
KA2G integrates information from both audio and text as input to a slot-value generator (SVG) which then generates textual fillers for each slot. Note that the final generation is also grounded in the audio modality. This mitigates the issues arising from noisy ASR-generated transcriptions. KA2G particularly boosts the performance of rare and unseen entities by learning to exploit the available external knowledge (e.g., predefined lists of possible values for slots) via two tree-constrained pointer generator (TCPGen) components <cit.>. TCPGen builds a neural shortcut between the biasing list, which is a list of entities likely to appear in a given context, and the model output via a pointer generator. Biasing lists are extracted from an external knowledge base (KB) containing possible entities for each slot type and are structured as subword-based prefix trees to be searched.
The first TCPGen is applied on the ASR side to reduce ASR errors on high-value biasing entities based on the available context. The second TCPGen is applied on the SVG side to bias the generator's output using sub-trees which contain branches on the prefix-trees that are traversed during ASR beam search. The entire KA2G model is jointly optimised in an end-to-end fashion from the input-speech-`end' to the generated slot-value-`end'. The code for KA2G is available at <https://github.com/the-anonymous-bs/espnet/tree/master/egs/slurp/asr1>.
Although our previous conference paper <cit.> explored TCPGen in SLU tasks, the proposed KA2G framework is fundamentally different in the following two key aspects:
* Rather than formulating SLU as a sequence-tagging problem which is an audio-grounded extension of the text-based methods that are only able to handle predefined slot types, KA2G handles slot-filling as a generative task that depends on both audio input and a knowledge base. It handles an open set of slot types using natural language queries and generates natural language slot values. We also demonstrate that KA2G is more robust to ASR errors and achieves better performance.
* While TCPGen and structured knowledge were mainly considered from the ASR perspective in <cit.>, KA2G adopts a stacked TCPGen structure with a shared tree encoding network. Notably, TCPGen is applied to SVG to guide the generation process using the most relevant knowledge from multiple ASR alternatives.
The main experiments were conducted on two structurally different datasets with speech input, with a focus on few-shot and zero-shot learning scenarios: 1) the single-turn SLURP dataset <cit.>, and 2) an in-house multi-turn ToD dataset extracted from a commercial concierge/booking system (henceforth termed CONCIERGE).
While the zero-shot setup stretches the abilities of the tested systems to the extreme, the few-shot learning scenario is more pragmatic and suitable for industry research <cit.>, as a small number of labels for each entity can usually be made available.
Large and consistent improvements with the full KA2G framework were found over a standard pipeline-based ToD baseline on both datasets, e.g., improving by more than 20 joint goal accuracy (JGA) points in multi-turn evaluations on CONCIERGE. The improvements were most prominent for rare and unseen entities on both datasets. The importance and contributions of the two TCPGen components were verified in a series of ablation studies and other analyses.
The rest of this article is organised as follows. Section <ref> reviews related studies. Section <ref> introduces the KA2G framework, with a detailed explanation of TCPGen and how it can be applied to slot-filling. Section <ref> describes the experimental setup, and Section <ref> discusses the results. Finally, conclusions are provided in Section <ref>.
§ RELATED WORK
§.§ Slot Filling as a Text Generation Task
Recent research has seen increased interest in reformulating the slot-filling task beyond standard sequence labelling and classification paradigms <cit.>. <cit.> and <cit.> recast slot filling as a QA task and a reading comprehension task, respectively, with both studies focusing on applications with limited data. More recently, <cit.> performed a comprehensive analysis of the QA approach for slot filling and provided both efficient and effective fine-tuning methods for domain-specific slot-filling models.
Formulating slot filling as a text generation task has recently also become an active research area. <cit.> proposed a generative slot-filling framework that leverages PLMs fine-tuned on specific tasks and domains to improve task-/domain-specific generation. Another research stream focused on framing dialogue state tracking (DST) for multi-turn ToD as a text generation task. In particular, the T5DST model <cit.> utilised different slot descriptions as the prompt for generation for cross-domain DST. However, previous approaches have only dealt with text input and do not use external knowledge, whereas our proposed KA2G framework is audio-grounded and efficiently leverages external knowledge.
§.§ Knowledge Integration for Slot Filling
Research has also been performed on leveraging external knowledge bases or the ontology of a dialogue system for slot-filling.
In <cit.>, domain-slot relations from the dialogue ontology were encoded using a graph neural network (GNN) to guide the system, while <cit.> further extended the use of the GNNs to capture correlations between slots and values in different domains. For slot filling for ToD with speech input, <cit.> used a Transformer encoder to encode external knowledge into hidden representations, while <cit.> built a neural shortcut from the external knowledge base directly to the slot filling output. While both methods focused on zero-shot learning setups, they relied on the standard sequence labelling formulation of the slot-filling task. In contrast, KA2G adopts a more flexible generative framework, which yields improved performance in few-shot and zero-shot scenarios.
§.§ Contextual Knowledge Integration in ASR
Previous studies on contextual biasing have been focused on either shallow-fusion-based score-level interpolation <cit.> or deep neural encoders or representations <cit.>. Recent work also explored the combination of deep and shallow approaches for contextual biasing. Specifically, <cit.> proposed to apply shallow fusion and deep biasing together in the end-to-end ASR model. More recently, pointer-generator-style shortcuts <cit.> or neural-FST <cit.> approaches that directly modify the final ASR output distribution have been investigated which allowed joint optimisation of the entire network in an end-to-end fashion. Meanwhile, TCPGen <cit.> also achieved high efficiency by using a symbolic prefix-tree search to handle biasing lists of thousands of words. Further research into TCPGen <cit.> used a graph neural network (GNN) to encode the prefix tree in TCPGen, which achieved further improvements in the recognition accuracy of biasing words. TCPGen with powerful GNN encodings acts as the backbone in both our previous work <cit.> and the proposed KA2G framework.
§ METHODOLOGY
The KA2G framework is illustrated in Fig. <ref>. It comprises three key components as follows:
(A) The audio-grounded SVG module combines output representations from the ASR module and the text-only PLM to generate values based on the slot prompt. As explained in Section <ref>, this module acts as the foundation of KA2G, to which the two proposed TCPGen components for knowledge integration are added.
(B) The knowledge-aware ASR component integrates external knowledge into KA2G via the first TCPGen component (TCPGen_ASR). This component and its integration into the ASR module of KA2G are explained in Section <ref>, together with a slot shortlist prediction mechanism dedicated to slot-filling tasks, which is used to obtain a more focused biasing list.
(C) The knowledge-aware SVG further integrates knowledge explored during the ASR beam search via the second TCPGen component (TCPGen_SVG). TCPGen_SVG extends the scope of TCPGen-based contextual knowledge integration from ASR to general natural language generation tasks; it is explained in detail in Section <ref>.
§.§ Audio-Grounded SVG
The audio-grounded SVG module comprises (i) an ASR module, (ii) a causal/autoregressive PLM, (iii) an alignment module, and (iv) the SVG itself; this is illustrated on the right side of Fig. <ref>. The SVG is implemented as a single-layer unidirectional LSTM which takes the concatenated vectors from the ASR module and the PLM as the representation of the context and predicts the value for a given slot query. The LSTM architecture is used for simplicity and increased stability in low-resource setups, and to avoid over-parameterisation, since both the ASR model and the PLM are complex models with millions of parameters. The ASR model is an attention-based encoder-decoder (AED) whose decoder hidden states, 𝐡^dec, are sent to the SVG.
As the label space of the PLM is too sparse for the ASR module, the ASR module instead uses a much smaller subword token vocabulary than the PLM, and hence the sequence of ASR hidden states and the sequence of PLM output vectors 𝐡^PLM are asynchronous. To resolve this misalignment, the output of the SVG is first set to use the same subword tokens as the ASR module, which helps to make the best use of the acoustic information.
The PLM outputs are then taken once per (full) word rather than per subword, and each word-level PLM vector is aligned with 𝐡^dec at the word-final subword before concatenation. For non-terminal subwords, zero-vector padding is used as a placeholder for the PLM output.
An example of this alignment is shown in Fig. <ref>. The alignment of 𝐡^dec and 𝐡^PLM is therefore achieved at word ends, and using the same subwords for both ASR and PLM is not necessary. Moreover, for a slot query which prompts the generation (e.g. or ), as there is no corresponding input audio, the embedding of the preceding wordpiece is used in place of 𝐡^dec. Note that this alignment mechanism allows slot value generation to use the PLM as well: whenever a new word is generated, a new 𝐡^PLM for that word is obtained and concatenated with the embedding of the preceding wordpiece to generate the next token.
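To make the alignment concrete, the sketch below pads the word-level PLM vectors up to the subword level and concatenates them with the ASR decoder states, as described above. The tensor shapes, the `is_word_end` flags and the function name are illustrative assumptions rather than the paper's actual implementation.

```python
import torch

def align_plm_to_asr(h_dec, h_plm, is_word_end):
    """Concatenate ASR decoder states with word-level PLM vectors.

    h_dec:       (n_subwords, d_asr) ASR decoder hidden states.
    h_plm:       (n_words, d_plm)    one PLM output vector per full word.
    is_word_end: list of n_subwords booleans, True at word-final subwords.

    PLM vectors are attached only at word-final subword positions;
    non-terminal subwords receive zero-vector padding.
    """
    n = h_dec.shape[0]
    d_plm = h_plm.shape[1]
    padded = torch.zeros(n, d_plm)
    word_idx = 0
    for i, end in enumerate(is_word_end):
        if end:
            padded[i] = h_plm[word_idx]
            word_idx += 1
    return torch.cat([h_dec, padded], dim=-1)  # (n_subwords, d_asr + d_plm)
```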
The SVG is trained end-to-end by jointly optimising the ASR and slot-value generation criteria as shown below:
ℒ_joint = ℒ_ASR + ℒ_SVG,
where the respective sub-losses are defined as
ℒ_ASR = log P(𝐲_1:n|𝐱_1:T),
ℒ_SVG =log P(𝐬_1:m|𝐪_1:k,𝐡^dec,𝐡^PLM).
Here, 𝐲_1:n is the subword ASR sequence, 𝐬 is the generated slot value sequence, 𝐪 represents the query sequence (e.g. ) and 𝐱_1:T is the sequence of acoustic features. Note that 𝐡^PLM in the SVG loss covers not only the context but also the slot query and value using the aforementioned alignment mechanism. In order to allow the model to also handle predictions for slots not present in the utterance, N_n randomly sampled slots that are not mentioned in the context are incorporated in training as negative examples, where N_n is a hyper-parameter; the model thus learns what to generate when it is queried with a `not-present' slot. The entire SVG, together with the PLM and the two TCPGen components, is optimised using the SVG loss.
During inference, 𝐡^PLM is obtained by feeding the 1-best ASR hypothesis to the PLM. The same context is prompted with all possible slot types, and only the slots for which a value is generated are kept. For multi-turn ToD, the dialogue history is encoded by the PLM before the start of the current context.
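As a rough illustration of how training queries could be assembled with the negative `not-present' slots described above, consider the following sketch; the `NOT_PRESENT` target string and the helper name are assumptions for illustration, not the paper's code.

```python
import random

NOT_PRESENT = "none"  # assumed placeholder target for slots absent from the utterance

def build_svg_examples(gold_slots, all_slot_types, n_negative):
    """Build (slot_query, target_value) pairs for one utterance.

    gold_slots:     dict mapping slot type -> reference value string.
    all_slot_types: list of every slot type known to the system.
    n_negative:     number N_n of randomly sampled 'not-present' slots.
    """
    examples = [(slot, value) for slot, value in gold_slots.items()]
    candidates = [s for s in all_slot_types if s not in gold_slots]
    for slot in random.sample(candidates, min(n_negative, len(candidates))):
        examples.append((slot, NOT_PRESENT))
    return examples

# The joint training objective is then simply the sum of the two sub-losses:
#   loss = loss_asr + loss_svg
```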
§.§ Knowledge-Aware ASR
External knowledge is organised as a dictionary of slots along with the possible values per slot: see the left blue block in Fig. <ref>.
It conditions the ASR via contextual biasing using TCPGen_ASR (as shown in Fig. <ref>).
Contextual biasing is an effective method to boost the recognition of rare words or entities in end-to-end trainable ASR systems; it represents the knowledge as a biasing list <cit.>. The biasing list contains biasing entities that are likely to appear in a given context, such as a particular restaurant name or the name of an artist in a playlist, and the recognition accuracy can be improved if they are included in the biasing list. In slot filling, possible named entities for each slot type can be collected to form a structured KB, and the biasing list can be extracted from the KB as explained in Section <ref>.
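A biasing list can be organised as a subword-level prefix tree with a few lines of Python; the dictionary-based trie and the tokenizer interface below are assumptions made purely for illustration.

```python
def build_prefix_tree(entities, tokenizer):
    """Build a subword-level prefix tree (trie) over one slot's biasing list.

    entities:  list of entity strings for one slot type.
    tokenizer: any object with an encode(text) -> list[str] method producing
               the ASR wordpiece sequence (assumed interface).
    """
    root = {}
    for entity in entities:
        node = root
        for piece in tokenizer.encode(entity):
            node = node.setdefault(piece, {})
        node["<leaf>"] = True  # marks the end of a complete entity
    return root

def valid_next_pieces(tree, prefix):
    """Return the set of subwords that extend `prefix` along valid tree paths."""
    node = tree
    for piece in prefix:
        if piece not in node:
            return set()
        node = node[piece]
    return {k for k in node if k != "<leaf>"}
```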
§.§.§ TCPGen
TCPGen <cit.> is a neural component combining symbolic prefix-tree search with a neural pointer generator for contextual biasing, which enables end-to-end optimisation with ASR systems. Although TCPGen is described in this section in terms of TCPGen_ASR, TCPGen_SVG, which is presented later in Section 3.3, is based on the same mechanism. At each output step of ASR, TCPGen_ASR calculates a distribution over all valid subwords, referred to as the TCPGen_ASR distribution, constrained by a subword-level prefix-tree built from the biasing list. TCPGen_ASR also predicts a generation probability P^gen indicating how much contextual biasing is needed at a specific step. If valid paths are found in the tree, the set of valid subwords is copied to the ASR output by interpolating the TCPGen_ASR distribution and the original ASR model output distribution, weighted by the generation probability.
An illustration of the computation of TCPGen using the same example prefix tree as Fig. <ref> is shown in Fig. <ref>. During ASR decoding, a set of valid subwords is obtained by searching the prefix tree with the decoded preceding subwords. Then, scaled dot-product attention is performed to obtain the TCPGen_ASR distribution P^ptr(y_i) (omitting dependencies on y_1:i-1 and 𝐱_1:T for presentation clarity) as follows:
P^ptr(y_i) = Softmax(Mask(𝐪_i𝐊^T/√(d))),
where d is the dimensionality of 𝐪_i and Mask(·) sets the probabilities of subwords that do not form valid paths at the current step to zero. The query vector 𝐪_i is computed from the context vector and the previously decoded token embedding. The key and value vectors are node encodings of corresponding subwords on the prefix tree. To enable lookahead functionality and obtain more powerful node representations, a graph convolutional network (GCN) <cit.> was used to encode the nodes on the tree. This node encoding can be done efficiently, as the tree only needs to be encoded once before decoding rather than at every step.
The generation probability is calculated using the decoder hidden states and the weighted combination of node encodings from the attention mechanism. Then, the final output can be calculated as follows:
P(y_i) = P^mdl(y_i)(1-P^gen_i) + P^ptr(y_i)P^gen_i,
where P^mdl(y_i) represents the output distribution from the standard end-to-end model, and P^gen_i is the generation probability.
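The interpolation above, together with the masked attention that produces P^ptr, can be sketched as follows; treating the node encodings as a dense (vocab, d) matrix with a validity mask is a simplification for illustration, not the paper's implementation.

```python
import torch

def tcpgen_distribution(query, keys, valid_mask):
    """Masked scaled dot-product attention giving P^ptr (a sketch).

    query:      (d,)        query from the context vector and previous token embedding.
    keys:       (vocab, d)  GCN node encodings of the prefix-tree subwords.
    valid_mask: (vocab,)    bool, True for subwords on valid tree paths at this step
                            (assumes at least one valid subword exists).
    """
    d = query.shape[-1]
    scores = keys @ query / d ** 0.5
    scores = scores.masked_fill(~valid_mask, float("-inf"))
    return torch.softmax(scores, dim=-1)

def tcpgen_output(p_model, p_ptr, p_gen):
    """Final distribution P(y_i) = P^mdl (1 - P^gen) + P^ptr P^gen."""
    return p_model * (1.0 - p_gen) + p_ptr * p_gen
```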
§.§.§ Slot Shortlist Prediction for TCPGen in SLU
For slot filling, possible entities for each slot are structured as prefix trees separately as shown in Fig. <ref>.
In order to have a more focused biasing list, instead of using all slots, a shortlist of slots is predicted at the start of decoding for each word using a class language model (CLM) <cit.> which takes the decoded word-level history as input. The top K slot types predicted by the CLM are used as contextual knowledge, where one TCPGen distribution is calculated for each slot type to model the joint distribution of slot types and wordpieces. The TCPGen distribution used by the pointer generator is obtained by marginalising with respect to the slot types, i.e. summing up the probabilities of the same wordpiece across all shortlisted slots, as shown in Eqn. (<ref>):
P^ptr(y_i) = ∑_s∈𝒮 P^ptr(s,y_i)
Note that the top K slot list is updated with the current decoded word history when there is no valid path found on any of the prefix trees.
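A minimal sketch of the shortlist selection and the marginalisation over slot types described above; the CLM is assumed, purely for illustration, to expose its per-slot scores as a plain dictionary.

```python
def shortlist_slots(clm_scores, k):
    """Top-K slot types predicted by the class LM for the current word history.

    clm_scores: dict mapping slot type -> CLM probability (assumed interface).
    """
    return sorted(clm_scores, key=clm_scores.get, reverse=True)[:k]

def marginalise_over_slots(p_ptr_per_slot):
    """P^ptr(y_i) = sum_s P^ptr(s, y_i): marginalise the per-slot distributions.

    p_ptr_per_slot: dict mapping slot type -> (vocab,) tensor of P^ptr(s, y_i).
    """
    return sum(p_ptr_per_slot.values())
```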
Moreover, as the generation probability P^gen provides an indication of how much contextual biasing is needed to decode each subword token, it is concatenated with 𝐡^dec and sent to the SVG to further indicate where in the context the knowledge has been used. TCPGen_ASR is jointly optimised with the SVG module.
§.§ Knowledge-Aware SVG
Alternative hypotheses are an essential resource that can be obtained from the ASR system, especially for low-frequency named entities. To exploit the knowledge available in additional ASR hypotheses, the knowledge-aware SVG module is proposed here: it extracts branches on each prefix-tree during ASR beam-search decoding and forms sub-trees to be used by TCPGen_SVG (as shown in Fig. <ref>) to integrate knowledge into the SVG. In particular, as each prefix tree used in the ASR beam search decoding is searched, a valid path that leads from the root node to a leaf node will be saved, which corresponds to a valid named entity belonging to that slot type. After decoding, the lists of entities corresponding to the valid paths found for each slot are gathered and organised into prefix trees. These prefix trees are essentially sub-trees of the original prefix trees for each slot type.
Next, sub-trees are encoded using the same GCN as used in Section <ref> and are searched when generating slot values in the same way as with TCPGen_ASR. In contrast to TCPGen_ASR, for TCPGen_SVG, the query comes from the SVG hidden state at each decoding step, while the keys and values are taken from the node encodings on the sub-trees. In the example shown in Fig. <ref>, the beam search traverses the entities “rihanna” and “soha” in the slot, and hence the sub-tree of the slot is constructed using these two entities. When the system is prompted with that slot, this sub-tree is subsequently used to generate values.
In this manner, possible entities that are not covered by the 1-best hypothesis but are explored in the ASR beam search can be effectively used via the copy mechanism in the SVG. In addition to the benefit of exploring other hypotheses, TCPGen_SVG on the generation side also improves the performance on rare and unseen entities even if they are correctly recognised, as the SVG might still be unable to pick them out due to insufficient training samples. TCPGen_SVG, as a pointer generator mechanism, enables the SVG to directly copy entities from the relevant knowledge that is also filtered by the ASR system, even if they are not seen in training.
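Collecting the entities whose full root-to-leaf paths were traversed during beam search and rebuilding them into per-slot sub-trees could look like the sketch below, reusing the hypothetical build_prefix_tree helper from the earlier sketch.

```python
def collect_subtrees(traversed_entities, tokenizer):
    """Rebuild per-slot prefix trees from entities found during ASR beam search.

    traversed_entities: dict mapping slot type -> set of entity strings whose
                        full root-to-leaf paths were traversed in the beam.
    Reuses the (hypothetical) build_prefix_tree helper defined earlier.
    """
    return {slot: build_prefix_tree(sorted(entities), tokenizer)
            for slot, entities in traversed_entities.items()}
```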
§ EXPERIMENTAL SETUP
§.§ Training and Evaluation Data
Experiments were performed on two structurally different speech-based English ToD datasets as described below.
SLURP <cit.> is a collection of 72K audio recordings of single-turn user interactions with a home assistant, annotated with scenarios, actions and entities. We ran experiments in two data setups. 1) The official training, validation and test splits were used and, following <cit.>, synthesised audio was used for training. 2) A simulated zero-shot setup following <cit.> was used: all utterances containing entities of five randomly selected `unseen' slots were held out from training, and the held-out utterances were then used for testing.
External knowledge was organised as a simple dictionary for SLURP, referred to as the knowledge base (KB), where keys were slot types and values were lists of possible named entities for that type. It was created for experimental purposes by gathering named entities that appeared in the entire SLURP data for each slot type (including train, validation and test sets), as a simulation of a real-world task environment. The average size of these lists was 106: the largest list was which contained 872 entities, and the smallest list was , which only contained 2 entities.
CONCIERGE is a multi-turn dialogue dataset obtained from a commercial system, and it represents a standard and challenging few-shot learning setup typically met in production. The dataset contains a collection of 8 kHz noisy phone-call conversations of real customer interactions with a concierge voice bot covering the , and domains. The audio was upsampled to 16 kHz to match the ASR model. Example dialogues from CONCIERGE are shown in Fig. <ref>. Some dataset statistics are given in Table <ref>. Each entity in the test set had only 1 to 5 occurrences in the training portion of the dataset and hence can be considered a `few-shot' scenario.
§.§ KA2G: Main Setup
The ASR model used an encoder with 16 Conformer blocks <cit.> with a 512-dim hidden state and 4-head attention, a 1024-dim single-head location-sensitive attention and a 1024-dim single-layer unidirectional LSTM decoder. Each 10 ms frame of the input audio was represented by an 80-dim mel-scale filterbank feature. A suffix-based unigram WordPiece model <cit.> with 600 distinct WordPieces was used for the output.
GPT-2 <cit.> with a 768-dim output representation was used as the autoregressive PLM in all experiments. Both TCPGen components adopt one 256-dim single-head attention layer. A two-layer 256-dim GCN was used with the input node encodings set to the WordPiece embeddings of the ASR decoder, based on the default suggestions from prior work <cit.>. The dimensions of the slot-value generator and projection layers in TCPGen were all determined by the stated dimensionalities. The CLM used to predict a shortlist of slots for SLURP experiments was a single-layer 2048-dim LSTM.
§.§ Main Baseline
The proposed KA2G framework was compared to a pipeline system which used the same ASR model to generate the 1-best hypothesis. The 1-best hypothesis was used as input text to the GPT-2 model for slot-value generation. As for KA2G, the GPT-2 model was also finetuned by generating slot values given slot prompts. In addition, the pipeline system can also be equipped with TCPGen_ASR, acting as a stronger baseline, which achieves better recognition accuracy on rare entities. The purpose of comparing to this system is to showcase the improvement attained by KA2G beyond just the improvement in recognition accuracy.
§.§ KA2G: Training and Inference
KA2G was implemented in ESPNet <cit.>. The ASR AED models, together with the ASR-TCPGen component, were pretrained for 20 epochs on the Librispeech 960-hour English audiobook data <cit.>.
When training the SVG (see Section 3.1), N_n=10 negative `not-present' slots were randomly chosen and added for each utterance in SLURP, and N_n=3 for CONCIERGE. The training was run on a single Nvidia A100 GPU.
Both the TCPGen_SVG and TCPGen_ASR components were trained in the same way as in <cit.>, where the full biasing list was first defined for each dataset, and the biasing list for each utterance was organised by selecting biasing entities in the reference transcription and adding a certain number of distractors following <cit.>. For SLURP, the full biasing list was defined by selecting entities in the KB with fewer than 30 examples in the training set, including unseen entities. There were altogether 5,000 biasing entities. For TCPGen_ASR, the number of distractors was randomly picked between 100 and 200, whereas for TCPGen_SVG, 20 distractors from the same slot type were used, which was close to the size of the prefix-tree the model would see during inference. A random drop of 30% of the reference biasing entities was applied to both TCPGen_ASR and TCPGen_SVG. The same full-biasing-list selection criterion, as well as the training procedure for TCPGen, was also applied to the CONCIERGE data.
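The per-utterance biasing-list construction for TCPGen_ASR described above (reference entities, a random number of distractors, and a 30% random drop of the reference entities) can be sketched as follows; the function name and default arguments are illustrative assumptions.

```python
import random

def training_biasing_list(reference_entities, full_biasing_list,
                          n_distractors_range=(100, 200), drop_rate=0.3):
    """Assemble the per-utterance biasing list used to train TCPGen_ASR.

    reference_entities: biasing entities appearing in the reference transcription.
    full_biasing_list:  the dataset-level biasing list (e.g. KB entities with
                        fewer than 30 training examples).
    """
    # randomly drop 30% of the reference entities
    kept = [e for e in reference_entities if random.random() > drop_rate]
    # add a random number of distractors drawn from the full list
    pool = [e for e in full_biasing_list if e not in reference_entities]
    n_distractors = random.randint(*n_distractors_range)
    distractors = random.sample(pool, min(n_distractors, len(pool)))
    return kept + distractors
```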
During inference, a beam size of 30 was used for ASR decoding. For SLURP, entities in the KB with fewer than 30 examples in training were used as biasing entities, and the CLM predicted the top 2 slot types at each word boundary. For experiments using the SLURP zero-shot learning split, a biasing list with 2k entities incorporated all entities in the unseen slots since they were all unseen entities, and was used as a whole during inference, as the CLM could not predict unseen slot types. For few-shot learning on CONCIERGE, all entities in the test set were included in the biasing list since they all appeared fewer than 5 times, which formed a biasing list of 105 entities. Since this was a much smaller biasing list, the entire list was used without CLM prediction. In the experiments, entities that appeared fewer than 5 times are referred to as few-shot entities. For the CONCIERGE data, all biasing entities were few-shot entities. Unless stated otherwise, greedy decoding was used for SVG.
§.§ Evaluation Metrics
The ASR output is evaluated using the standard word error rate (WER) measure. For slot filling, the SLU-F1 <cit.> and the micro Entity-F1 scores are used to measure performance, offering insights into both word-level and character-level F1 scores. Moreover, for multi-turn dialogues, joint goal accuracy (JGA) is reported: JGA counts a turn as correct only if all slots in the dialogue state are correctly filled in that turn.
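For reference, joint goal accuracy as defined here can be computed in a few lines; dialogue states are assumed to be represented as slot-to-value dictionaries, one per turn.

```python
def joint_goal_accuracy(predicted_states, reference_states):
    """Joint goal accuracy: a turn is correct only if every slot in the
    reference dialogue state is filled with the correct value at that turn.

    predicted_states, reference_states: lists (one entry per turn) of
    dicts mapping slot type -> value.
    """
    correct = sum(pred == ref for pred, ref in zip(predicted_states, reference_states))
    return correct / len(reference_states)
```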
§ RESULTS AND DISCUSSION
§.§ Experiments on SLURP
This section describes the set of experiments performed on the SLURP dataset. It begins with a summary of the main results, followed by a discussion of key aspects: a comparison to prior work, a practical discussion of few-shot versus zero-shot learning for slot filling, an ablation study, the impact of training data size, beam search and external LM incorporation, as well as performance under the zero-shot setup.
§.§.§ Main Results
The results on SLURP are summarised in Table <ref>. Both the SLU-F1 and Entity-F1 scores reveal that, compared to the pipeline system, using the audio-grounded SVG model (Row 2 in the table) achieved better performance overall, with much better F1 scores on biasing entities (occurrence frequency 0<f<30) and few-shot entities (0<f<5). Similar improvements from audio-grounding were observed when comparing the full KA2G framework to the pipeline system with a contextual ASR using TCPGen. Overall, KA2G achieved a 1.4% absolute SLU-F1 increase compared to the baseline pipeline system, with a 4.6% SLU-F1 improvement on few-shot entities and an 11.2% SLU-F1 improvement on unseen entities. This indicates that audio grounding helped the system to better deal with few-shot entities as long as it was exposed to at least some examples. Similar trends but larger improvements were found in Entity-F1 than in SLU-F1 using KA2G, especially on entities in the biasing list, as TCPGen is able to guide the slot-value generation to complete the entire entity correctly by following a specific path on the prefix-tree. Furthermore, similar to <cit.>, the model was also able to correctly generate entities that were incorrectly recognised by the ASR: for the audio-grounded system, 10% of the correctly filled entities were not correctly recognised by the ASR system, whereas that ratio was only 3% for the pipeline system.
In order to further investigate the influence of audio-grounding in slot filling, a case study was performed on SLURP. Two examples where the pipeline system and the audio-grounded SVG had the same ASR output are shown in Fig. <ref>. In Case 1, while the semantic meanings of “launch” and “lunch” are completely different, their pronunciations are similar. The ASR error “lunch” therefore confused the pipeline system, which produced the wrong slot type and value; this was not recoverable at the SVG end since the audio information had been lost in the pipeline system. With audio-grounding, however, the pronunciation similarity of “launch” and “lunch” could be taken into account by the SVG together with the strong GPT-2-based PLM, which successfully directed the model to the correct entity. In Case 2, although “abs” was in the ASR output, the ASR was uncertain about this prediction and “abbas” also had a good ASR score. Therefore, while both systems were able to fill the correct slot type, the audio-grounded SVG was able to replace the entity with one seen in the training set that had a similar pronunciation. This also explains why the pipeline-based baseline performed slightly better than the audio-grounded SVG baseline on unseen entities (see the last column of the first two rows in Table <ref>).
When TCPGen_ASR was used (Row 3 in the table) with the pipeline system, the main performance boost of the system was attributed to a better WER. The improvements in WER and both F1 scores were mainly on biasing and unseen entities as those were included in the biasing list for TCPGen. Finally, the full KA2G framework achieved higher overall SLU-F1 and Entity-F1 scores compared to the pipeline system and particularly large improvements were observed on few-shot entities and unseen entities. Moreover, due to the benefit of multi-task training, KA2G achieved a lower overall WER: this reflected the fact that the slot-filling task can positively impact the ASR performance. We further noted that the WER on words in the biasing entities was similar to that of the pipeline system: this indicated that the gain with KA2G on low-frequency entities did not only come from improved ASR.
§.§.§ Comparison to Baselines from Prior Work
The baseline pipeline system using contextual ASR, with an overall SLU-F1 score of 79.7 in Table <ref>, already achieved better performance than the best system proposed in <cit.>. This was mainly due to the formulation of slot filling as a sequence generation task. <cit.> adopted a sequence labelling approach and reported an overall SLU-F1 score of 78.0 on SLURP (cf. 80.6 reported with KA2G in Table <ref>). Note that the work of <cit.> used the more powerful WavLM pretrained speech representations <cit.>, which resulted in a much lower WER of 9%. Concerning generation-based slot-filling systems, <cit.> achieved a score of 78.9 using a sequence generator with wav2vec 2.0 representations. We also found that our pipeline system, which was used as the main baseline, was usually much stronger than other reported pipeline systems, as our pipeline adopted the pre-trained GPT2 for slot filling whereas others usually compared against an NLU network of a similar size to the NLU modules in their end-to-end systems <cit.>.
§.§.§ Few-Shot versus Zero-Shot
The preliminary comparison of results between few-shot and unseen entities from Table <ref> showed that, even when only a handful of examples were provided, the model was able to achieve a sizable jump in performance. To provide more insight, a finer-grained frequency bin analysis was conducted on SLURP, as shown in Fig. <ref>. The results show that there is a very large increase in SLU-F1 after only providing a single sample for an entity (i.e., moving from zero-shot to one-shot), with SLU-F1 scores increasing from ∼30 to ∼70: this corroborated the major benefits of few-shot learning over zero-shot learning <cit.>. Fig. <ref> again validates that KA2G provides improvements over the baseline system in such low-resource scenarios.
§.§.§ Ablation Study
The results of the ablation experiments for KA2G are given in Table <ref>, where the last row in the table corresponds to the system which used the audio-grounded SVG (i.e., the second row in Table <ref>). When TCPGen_SVG was removed, the most significant change was the decrease in performance on unseen entities, especially in terms of Entity-F1. When TCPGen_SVG was included, the SVG was fully guided by the biasing entities, and hence the model was more likely to predict complete entities. The same observation was made when comparing the system without TCPGen_ASR to the system without TCPGen_SVG. Moreover, since the WERs for unseen entities were much higher for the 1-best hypothesis, extracting information from alternative hypotheses was much more useful for unseen entities.
While TCPGen_ASR contributed to the performance improvement, passing the copy probability P^gen to the SVG also contributed, especially for rare and unseen entities, as it indicated where in the context knowledge had been used. This was particularly useful when an entity was correctly recognised but the SVG was unable to generate it because it had not seen enough examples.
§.§.§ Impact of Training Data Size
The impact of the training data size on the performance of few-shot entities was also analysed. Specifically, the 2.6k utterances which contain few-shot entities were retained while the rest of the training set was sub-sampled. These subsets were then used to train the pipeline system and the full KA2G framework. Unlike other sequence-to-sequence slot-filling frameworks <cit.> with speech input, the ASR component in KA2G can potentially benefit from training as a standalone module on domain-specific ASR data. Since ASR annotation is usually easier to obtain than SLU annotation, the ASR component in both the pipeline system and KA2G could benefit from the ASR annotation of the full SLURP training data. To this end, two sets of experiments were conducted to investigate the impact of the training data size, where the first experiment trained both the ASR and the SLU on the same selected subset with TCPGen, whereas the second experiment used the full SLURP training set to train the ASR with TCPGen and the selected subset to train the SLU.
The results are shown in Fig. <ref>. Reducing other training data indeed had a strong impact on SLU-F1 scores of few-shot entities despite the fact that the utterances covering those entities were retained. With the ASR component trained on the full SLURP data (i.e. the second set), the reduction in SLU-F1 became much smaller. The full KA2G again consistently outperformed the baseline system across different training data sizes, with a larger difference when the ASR module was trained on the full data.
§.§.§ Impact of Beam Search
Beam search can be performed for both ASR and SVG. For ASR, it was found that a larger beam size only yielded very marginal improvements in WER while taking significantly more decoding time. From the perspective of the biasing lists for TCPGen_SVG, only 50% of entities were covered when using the 1-best hypothesis alone. With a beam size of 30, 70% of entities were covered in the biasing lists for TCPGen_SVG, which was the main source of improvement for TCPGen_SVG. Coverage only improved by a further 2% with a beam size of 100, whereas the average size of the biasing lists increased from 10 to 15, which introduced more noise into the biasing lists. Therefore, a beam size of 30 was used for the rest of the experiments. On the SVG side, beam search only provided a marginal improvement, but decoding took 4–5 times longer as the lengths of alternative slot values were in general longer than .
§.§.§ Impact of External Language Model (LM) Fusion
ASR models usually integrate external language information effectively via LM fusion to further boost their performance; hence it is worthwhile studying the effect of using an external LM in the SLU context for both pipeline systems and KA2G. As the AED models used in this paper already contain an implicit internal LM trained on the SLURP data, an external LM has to contain richer language information to be effective for ASR. Therefore, a powerful GPT2 PLM finetuned on the text of the SLURP training set was employed. As the modelling unit of GPT2 differs from that of the AED model, the finetuned GPT2 was used to rescore the 30-best list from the AED model. For the pipeline system, the re-ranked 1-best hypothesis was used by the text-based SVG, and for KA2G, the hidden states of the 1-best hypothesis were cached and used by the audio-grounded SVG module. The results for both systems are shown in Table <ref>.
Although rescoring with GPT2 further reduced the WER by 0.4% absolute for both systems, it did not improve the SLU performance. The external LM tended to reinforce common correlations in the text that were already well modelled by the SVG module, whereas it was the performance on low-frequency entities that needed to be improved. As a result, external LM integration had a very limited or even negative influence on slot filling.
§.§.§ SLURP: Zero-Shot Setup
The results of the zero-shot setup are summarised in Table <ref>. The WER for the zero-shot learning test set was 18.0% for ASR without TCPGen, which decreased to 16.9% for KA2G. Compared to the pipeline system, KA2G achieved worthwhile improvements in both SLU-F1 and Entity-F1 scores. Therefore, KA2G provided an effective way of leveraging knowledge for zero-shot slot filling by bridging the SVG and the ASR alternative hypotheses via the neural shortcut provided by TCPGen. Moreover, while zero-shot slot filling in <cit.> relied on manually tuned hyper-parameters, KA2G removes the requirement for TCPGen-related hyper-parameter tuning during inference, which also improves the robustness and reliability of KA2G for zero-shot slot filling.
§.§ Experiments on CONCIERGE
This section discusses the results of the CONCIERGE data under single-turn and multi-turn evaluation metrics.
§.§.§ Single-Turn Evaluation
The proposed KA2G framework was also validated on a real-world use case with the CONCIERGE dataset. The WER on this test set was ∼35% due to limited audio data for training, which made the slot-filling task with speech input on CONCIERGE even more challenging. Single-turn evaluation was first performed, with the same metrics as used with SLURP. The results are provided in Table <ref>.
In this challenging setup, external knowledge played an even more important role, which led to much larger performance improvements with the KA2G framework. As with SLURP, having a few examples of an entity yielded much better performance than zero-shot learning: providing even a handful of examples in the training set resulted in very large performance improvements. This again corroborates the fact that although zero-shot learning is an attractive research problem, few-shot learning is often more pragmatic for industrial applications, and this is also the case when using generative systems such as KA2G.
§.§.§ Multi-Turn Evaluation
For multi-turn experiments, entity mapping was applied in order to group different expressions of the same entity together. Further, the ASR 1-best hypothesis from the history of user inputs was included as input to the PLM. The JGA scores are summarised in Table <ref>. There were large improvements with the full KA2G, which were mostly due to the use of TCPGen_SVG. As JGA is more closely related to Entity-F1, and the TCPGen_SVG module in the KA2G framework provides particular benefits to Entity-F1, the KA2G framework resulted in a clear improvement in JGA.
§ CONCLUSIONS
A novel knowledge-aware audio-grounded generative slot-filling framework for speech-based ToD, called KA2G, has been proposed. The framework is especially suited for low-resource slot-filling tasks and for handling rare and unseen entities/values. KA2G comprises an audio-grounded SVG, together with two TCPGen components.
The first TCPGen integrates knowledge from an external knowledge base containing possible entities for all slots into the ASR module, while the second TCPGen exploits entities found in alternative ASR hypotheses. A comprehensive evaluation has been performed on two different datasets with speech input: i) single-turn SLURP data and ii) multi-turn CONCIERGE data obtained from a commercial ToD system. The usefulness of KA2G has been experimentally validated on both datasets, with clear performance gains over current state-of-the-art systems. KA2G was especially useful in few-shot and zero-shot setups.
KA2G, as a prompt-based SLU framework with speech input, also possesses potential for future investigation with the advent of large language models (LLMs) and prompt-based generative AI.
We believe KA2G can serve as a promising speech front-end for LLMs. The explicit knowledge integration component that allows dynamic contextual knowledge to be incorporated in a prompt-based system may also potentially benefit the performance of LLMs on factual and domain-specific enquiries.
|
http://arxiv.org/abs/2307.00671v1
|
20230702213655
|
Leveraging Multi-modal Sensing for Robotic Insertion Tasks in R&D Laboratories
|
[
"Aaron Butterworth",
"Gabriella Pizzuto",
"Leszek Pecyna",
"Andrew I. Cooper",
"Shan Luo"
] |
cs.RO
|
[
"cs.RO"
] |
Leveraging Multi-modal Sensing for Robotic Insertion Tasks in R&D Laboratories
Aaron Butterworth et al.
Performing a large volume of experiments in chemistry labs creates repetitive actions that cost researchers time, so automating these routines is highly desirable. Previous work in robotic chemistry has performed large numbers of experiments autonomously; however, these processes rely on automated machines in all stages, from solid or liquid addition to analysis of the final product. In these systems, every transition between machines requires the robotic chemist to pick and place glass vials, yet this is currently performed using open-loop methods which require all equipment used by the robot to be in well-defined, known locations. We seek to begin closing the loop in this vial-handling process in a way which also fosters human-robot collaboration in the chemistry lab environment. To do this, the robot must be able to detect valid placement positions for the vials it is collecting and reliably insert them into the detected locations. We create a single-modality visual method for estimating placement locations to provide a baseline, before introducing two additional methods of feedback (force and tactile feedback). Our visual method uses a combination of classic computer vision methods and a CNN discriminator to detect possible insertion points; a vial is then grasped and positioned above an insertion point, and the multi-modal methods guide the final insertion movements using an efficient search pattern. Through our experiments we show that the baseline insertion rate of 48.78% improves to 89.55% with the addition of our `force and vision' multi-modal feedback method.
§ INTRODUCTION
Automating a laboratory designed for humans poses several interesting challenges for robotics: a wide variety of transparent objects are employed and must be detected and handled in a safety-conscious manner. Creating collaborative environments for robots and human scientists is an important problem, as such collaborative lab spaces will significantly increase the speed at which experiments can be performed. A major challenge for these spaces is allowing robots to safely interact with glass labware, since poor interactions can cause numerous hazards to human collaborators and will disrupt the experiments in progress.
Therefore, equipping a laboratory robot with the ability to both see and touch when placing vials is a highly desirable requirement.
Although there has been an increase in the usage of laboratory robotics <cit.>, such workflows are still carried out in open loop. This hinders overall robot deployment for carrying out long-term laboratory experiments: such systems are vulnerable to disruption caused by a prior failure and pose a significant safety threat to human scientists should glassware break. Therefore, a robot in a chemistry lab, much like its human collaborators, should aim to use all possible sensory information to ensure a safe workflow and increased reliability.
There exist different manual and tedious tasks across laboratory workflows that would greatly benefit from being automated; however, all of these would require manipulation of glass vials.
To date, tactile sensing has not been employed in laboratory automation. Existing robotic tasks rely on pre-programmed motions such as the mobile robotic chemist <cit.>. This rigidity presents an unnatural environment for a human collaborator; however, it provides the high reliability that is desired for long experiments with many interactions. Multi-modal sensing has been introduced to object manipulation tasks outside of the laboratory setting to create a more sensor-rich environment and improve the reliability of object handling <cit.>. By introducing multi-modal sensing we aim to allow the robot to operate in a dynamic environment more suitable for human-robot collaboration, while maintaining a high level of reliability suited to longer experiments.
In this paper, we introduce a novel multi-modal sensing method for robotic vial insertion, a common task in research laboratories. Our aim is to investigate the role of multi-modal sensory feedback in this task and how it can improve the performance of the robot from the visual baseline. In this task, we can represent almost any interaction between the robot and the labware as vials are loaded into and out of racks for transportation, from machines to racks, and from racks to machines.
As shown in Fig. <ref>, in the simulated laboratory automation environment, we leverage the camera at the robot's wrist, the intrinsic force sensors of the end-effector, and two camera-based tactile sensors <cit.> mounted onto its 2-finger gripper to improve the robotic insertion task. The experimental results show that our multi-modal approach boosts the success rate of vial insertion to 89.55%, compared to only 48.78% with a single visual modality. The results show that multi-modal sensing can provide more cues about the interaction with the vial and the racks, and has the potential to improve the reliability of a robotic system in the laboratory automation environment.
§ RELATED WORKS
§.§ Laboratory Automation
To better suit a changing laboratory workflow, more versatile robotic systems are required.
Laboratory automation for material discovery has already taken advantage of different robotic platforms to carry out different workflows, predominantly in the areas of materials for clean energy and pharmaceuticals.
Burger et al. <cit.> demonstrated that a mobile manipulator exceeds human-level performance for photocatalysis, where the robot carried out 688 experiments over seven days.
The mobile robotic chemist freely moved around the lab space and required much less modification to the existing human-oriented lab.
However, this system still requires many assumptions as it operates in open loop, which reduces its generality. For example, the sample vials have to be of a set size and held in a specific rack which is then moved between fixed holders in the workspace; and the system is not capable of detecting these vials.
As a result, a failed interaction may go unnoticed by the robot itself and thus jeopardises the remaining operations, while posing a safety hazard to human scientists.
Robotic manipulators have also been used for autonomously carrying out workflows to accelerate material discovery, for example, <cit.> and <cit.>.
While both used perception to understand the sample contents, they still operated the robotic manipulator in an open-loop manner, without any force, visual or tactile feedback.
Lim et al. <cit.> also used a robotic manipulator for autonomous chemical synthesis, where they demonstrated how the robot could successfully carry out a Michael reaction with a yield of 34%, comparable to that obtained by a junior chemist.
Nonetheless, the robotic operations were yet again simplified to pick-and-place tasks without sensory feedback.
The automation of laboratories for life sciences has perhaps a longer history of using robotics and automated platforms.
As a result of stricter protocols when compared to other fields, several works have demonstrated the usage of dual-arm robots <cit.>, <cit.>, <cit.> for sample preparation and measurement.
While the robots manipulate complex tools such as pipettes, the researchers focus more on automating the instrument and hence, leave the robotic control in open-loop.
Alternatively, using multi-modal sensory feedback could potentially have not required as much modification of the instruments while allowing the robot to have increased functionality such as failure detection if something goes wrong.
Existing laboratory automation for pharmaceutical applications normally operates using a dedicated, specific method; for example, during the COVID-19 pandemic, automated workflows for sample testing allowed a site to move from 180 tests per day to over 1,000 tests per day after switching to a fully automated workflow <cit.>.
However, this type of automation uses specialist equipment to focus on a single task which is very different to the general use of an R&D laboratory in which small changes will be made between batches and the workflow may rapidly change from day to day.
Another application of laboratory automation is for drug discovery where Pickles et al. <cit.> demonstrate how a mobile manipulator can be used for crystallisation workflows.
The robotic platform mainly transports samples between different stations and relies on onboard sensory information such as LIDAR for navigation, while operating the robotic manipulator without visual or tactile feedback.
From all of the aforementioned works, it is evident that a vital process in laboratory automation is manipulation of vials, whether it is for sample preparation, measurement or transportation.
This is a common manipulation task across the different fields of materials discovery, life sciences, drug discovery, amongst others.
Our work addresses this task since it is fundamental across all domains and success here would scale to any workflow.
In the following sections, we will demonstrate the role multi-modal sensory feedback plays in this task, and our goal is to then transfer this knowledge to other manipulation operations in laboratory automation.
§.§ Multi-modal Object Interaction
Recent works also highlight the ability of a multi-modal system to significantly outperform a single-modality baseline on grasping tasks; one such work shows that vision-guided tactile sensing can improve the grasping success rate from 38.9% to 85.2% <cit.>. Multi-modal sensing has also been found to be a significant factor in improving a robotic cable-following task <cit.>, where the success rate of a single modality was at most 77%, whereas multi-modal sensing reached a maximum of 92%. This sensor fusion will be important in an environment where human-robot collaboration takes place, as it allows the robots to form a clearer model of the environment <cit.>, better recover from failures, and avoid unwanted contact. However, this field is still developing and, to the best of our knowledge, there are no works studying multi-modal feedback in the context of laboratory automation.
§ METHOD
In this work we evaluate two multi-modal methods for completing a routine laboratory vial insertion task against a single-modality baseline; an overview of each modality is shown in Figure <ref>. We divide the vial insertion task into two sub-problems:
i. Goal Position Detection: we locate the target rack, which may have been placed anywhere in the workspace. We use the circular Hough Transform to generate a large number of possible locations; a CNN then filters for vacant locations belonging to the target rack.
ii. Vial Insertion: a vial is collected from a known location and moved above a point detected by the first part of the pipeline; we then use an efficient search method and multi-modal feedback to place it into the rack.
§.§ Goal Position Detection
§.§.§ Candidate Detection
The workspace is imaged top-down using the robot-mounted camera; the circular Hough Transform (CHT) <cit.> is applied to the image and a set of possible centre locations and radii is generated. Where the rack is viewed obliquely, the CHT may fail to detect the now elliptical slots; we therefore select overly sensitive detection parameters for the CHT so that a centre point is still detected in these oblique cases. As a consequence, many image features are now detected as possible placement locations.
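As an illustration, a minimal OpenCV sketch of this deliberately over-sensitive candidate detection is given below; the specific parameter values (blur size, accumulator threshold, radius range) are placeholders rather than the exact values used in our system.

import cv2
import numpy as np

def detect_candidate_slots(image_bgr):
    """Return (u, v, r) circle candidates from a top-down workspace image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)          # suppress sensor noise before the Hough transform
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT,
        dp=1.2, minDist=15,                 # accumulator resolution and minimum centre spacing
        param1=100, param2=18,              # low accumulator threshold -> over-sensitive detection
        minRadius=8, maxRadius=30)
    if circles is None:
        return []
    return [tuple(c) for c in np.round(circles[0]).astype(int)]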
§.§.§ Filtering
To filter the candidates we introduce a CNN classifier. Input data are created by scaling each candidate's radius by a margin factor to 110% of the original and cropping this region from the image, centred on the candidate location. The network itself consists of a convolutional layer with a 5 × 5 kernel and a stride of 2, followed by 2 × 2 max pooling; these two layers are repeated twice with independent weights and followed by a flattening layer and three fully connected layers with 512, 128, and 2 output neurons respectively. The network first predicts whether the cropped image belongs to the rack and then, when this first output is sufficiently high, whether the rack slot is occupied.
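A minimal PyTorch sketch with this layout is shown below; the input crop resolution (64 × 64), the channel widths, and the reading of "repeated twice" as two convolution/pooling blocks in total are our assumptions, and the two sigmoid outputs are interpreted as independent rack-membership and vacancy scores.

import torch
import torch.nn as nn

class SlotClassifier(nn.Module):
    """Scores a cropped candidate region: [p(in rack), p(slot unoccupied)]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(),   # input size inferred from the crop resolution
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))

scores = SlotClassifier()(torch.rand(4, 3, 64, 64))   # example: a batch of four 64x64 RGB crops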
Training data are manually labelled from frames extracted from a video feed of the robot-mounted camera moving around the workspace; labels are generated quickly via a button prompt where three options are presented (1. Not In Rack, 2. In Rack but Occupied, 3. In Rack and Unoccupied).
We select an insertion target from the filtered candidates by choosing the highest CNN classification score for `In Rack' and `Unoccupied'; in case of a draw we select randomly from the tied candidates.
The selected target is represented by the image coordinate (u,v). The camera's calibrated intrinsic parameters (focal length (f_x, f_y) and principal point (c_x, c_y)), the height of the rack from the table (r_z), and the camera's position w.r.t. the robot base frame (CAM) are then used to calculate the real-world position (x, y, z) w.r.t. the robot base frame:
\[
\begin{bmatrix} x \\ y \\ z \end{bmatrix} =
\begin{bmatrix}
CAM_x + (u - c_x)(CAM_z - r_z)/f_x \\
CAM_y + (v - c_y)(CAM_z - r_z)/f_y \\
r_z
\end{bmatrix}
\]
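A minimal sketch of this pixel-to-world conversion is given below; it assumes, as in our setup, that the camera looks straight down at the workspace, and the variable names are ours.

def pixel_to_world(u, v, cam_xyz, fx, fy, cx, cy, r_z):
    """Convert an image coordinate (u, v) of a rack slot into a robot-base-frame position."""
    cam_x, cam_y, cam_z = cam_xyz      # camera position w.r.t. the robot base frame
    depth = cam_z - r_z                # distance from the camera to the top plane of the rack
    x = cam_x + (u - cx) * depth / fx
    y = cam_y + (v - cy) * depth / fy
    return x, y, r_z                   # the target height is the rack height itself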
Finally, the vial is grasped from above (perpendicular to the workspace plane) from a second rack in a known position and lifted above the calculated target position. The multi-modal nature of the following methods relies on using this initial target position and refining it using another sensory input, either force feedback or tactile feedback. The visual single modality baseline takes a second imaging step to refine the initial estimate rather than relying on another sensor modality.
§.§ Vial Insertion
The second sub-task begins with the vial grasped in a two-finger gripper and held above the predicted insertion location; at this point one of the three following modalities is used to insert the vial.
§.§.§ Visual Baseline
To aid visual-only insertion we first move the camera in the -z direction, bringing it closer to the rack; a second image is captured, the image processing from the previous stage is repeated, and the insertion point closest to the centre of the new image is selected. This process enlarges the rack in the image and ensures any skew distortion is minimal, minimising the error in the visual insertion-point prediction. The vial is aligned with the revised insertion point and moved directly down the z axis until it is below the rack height, at which point the gripper release command is sent and success is evaluated.
§.§.§ Force and Visual Feedback
To assess the forces experienced by the vial we use the robotic arm's internal force sensors; however, the method makes minimal assumptions regarding the source of this data. We first initialise a first-in first-out (FIFO) buffer with length equal to one second of sensor data while the robotic arm is stationary; the average of this buffer then serves as a zero point to counteract the static force experienced by the sensor. This static force will vary with robot pose and payload; therefore, taking a practical measurement is preferred and does not significantly impact the speed at which the task is performed.
The robot then attempts to insert the vial by moving along the -z axis; low acceleration values reduce jerk forces, lowering extraneous noise in the force sensor. The FIFO buffer is monitored against the recorded static value; a deviation of more than 20% causes the robot to stop moving and assess the vial's state. We have now reached the `Vial placed?' condition in Figure <ref>; this is assessed using the position of the vial at the tip of the gripper w.r.t. the robot base frame (GRIP) and the height at which the vial is gripped (V_h). We evaluate GRIP_z >= r_z + V_h to indicate whether the vial has impacted the top surface of the rack, creating a force which stopped the robot. In this case we move to the search algorithm detailed in Section <ref>; if the condition is not met (i.e. the vial is lower than the rack height, and thus must be in the slot) we send the gripper release command and return the robot to the starting position clear of the rack.
We select the stop condition as a 20% deviation because the vials being handled are not excessively fragile and the robot's internal force sensor can be noisier than a dedicated sensor. This parameter can be tuned based on the force tolerance of the glassware being handled and the noise floor of the sensor being used. The buffer length can also be varied: longer buffers exert more force on the objects, as the robot will press for longer before stopping, but are less prone to noise caused by jerk or joint movements; shorter buffers are more prone to false positives but will exert a smaller force on the objects involved as the system reacts faster.
A second safety condition will also stop the robot in the case GRIP_z < (1/2) r_z, to prevent the vial impacting the table should the robot misidentify a placement location. This condition is ultimately optional, as the large force generated when the vial presses onto the table should already trigger a stop; however, considering the environment and the hazards of unexpected contact, this extra condition helps protect delicate glassware from impacting an unknown object.
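A minimal sketch of the force-monitoring loop described above is given below; the read_force() callable and the 100 Hz polling rate are placeholders standing in for the arm's internal force interface rather than part of our actual control stack.

from collections import deque
import time

SAMPLE_RATE_HZ = 100        # assumed polling rate of the internal force sensor
DEVIATION_THRESHOLD = 0.20  # stop when the buffered force deviates >20% from the static value

def calibrate_static_force(read_force):
    """Fill a one-second FIFO buffer while the arm is stationary and average it."""
    buffer = deque(maxlen=SAMPLE_RATE_HZ)
    while len(buffer) < buffer.maxlen:
        buffer.append(read_force())
        time.sleep(1.0 / SAMPLE_RATE_HZ)
    return buffer, sum(buffer) / len(buffer)

def contact_detected(buffer, static_force, read_force):
    """Poll once and report whether the buffered force deviates beyond the threshold."""
    buffer.append(read_force())
    mean_force = sum(buffer) / len(buffer)
    return abs(mean_force - static_force) > DEVIATION_THRESHOLD * abs(static_force)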
§.§.§ Tactile and Visual Feedback
For tactile feedback we rely on a pair of DIGIT <cit.> visual tactile sensors; however, this method should be applicable to any pair of visual-elastomer-style tactile sensors. During the initial setup phase we capture a set of reference tactile images R with the gripper open and not in contact with any object. During the insertion attempt we poll the sensor at 60 Hz, denoting each received image i, and apply the following processing to extract the object location. We first calculate the absolute difference between i and each image in R, then average this set of differences to produce a lower-noise difference image δ:
\[
\delta = \frac{1}{|D|} \sum_{d_i \in D} d_i, \qquad \text{where } D = \{\, |s - i| : s \in R \,\}.
\]
The difference image is then normalised before a threshold (0 ≤ t ≤ 1) is applied, which results in a binary image b where each pixel b_(x,y):
\[
b_{(x,y)} =
\begin{cases}
0, & \hat{\delta}_{(x,y)} < t \\
1, & \hat{\delta}_{(x,y)} \ge t
\end{cases}
\qquad \text{where } \hat{\delta} = \frac{\delta - \min(\delta)}{\max(\delta) - \min(\delta)}.
\]
Contact regions are extracted from the binary image using the border following method detailed by Suzuki et al. <cit.> which produces a set of polygons P enclosing the contact regions.
Green's theorem <cit.> is applied to calculate the area enclosed by each polygon and any excessively small regions are filtered out giving the set of contact regions P'.
The centre point (g_x, g_y) of each region is used to track its general position in the frame; p represents a single polygon, and v a vertex within the polygon p:
\[
\begin{bmatrix} g_x \\ g_y \end{bmatrix} = \frac{1}{|p|} \begin{bmatrix} \sum v_x \\ \sum v_y \end{bmatrix}, \qquad \text{where each } v \in p \text{ and } p \in P'.
\]
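A minimal OpenCV/NumPy sketch of this contact-extraction pipeline is given below; the threshold value and the minimum contact area used for filtering are illustrative placeholders.

import cv2
import numpy as np

def extract_contact_centres(image, references, t=0.5, min_area=50.0):
    """Return the centre (g_x, g_y) of each contact region in a tactile image."""
    diffs = [cv2.absdiff(ref, image) for ref in references]       # D = {|s - i| : s in R}
    delta = np.mean(np.stack(diffs), axis=0).astype(np.float32)   # averaged difference image
    if delta.ndim == 3:
        delta = cv2.cvtColor(delta, cv2.COLOR_BGR2GRAY)
    norm = (delta - delta.min()) / (delta.max() - delta.min() + 1e-9)
    binary = (norm >= t).astype(np.uint8)                         # thresholded binary image b
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for poly in contours:
        if cv2.contourArea(poly) < min_area:                      # drop excessively small regions
            continue
        centres.append(poly.reshape(-1, 2).mean(axis=0))          # mean of the polygon vertices
    return centres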
Before we move the vial, its neutral position is calculated in the tactile image space; we can then use the tactile-physical mapping obtained with the method in Section <ref> to correct for a vial which has not been grasped centrally in the gripper. Initially this is generally not important, as the vial is collected from a known location; however, the tactile sensor's surface has significantly lower friction than the rubber grippers. Failed placement attempts may twist the vial, and the mapping then gains importance as we can correct for this cumulative error.
We then move the vial down the -z axis and monitor the vial's position for deviation from the neutral state. A fixed threshold triggers the robot to stop, and the position is assessed using the same method as in the force feedback modality, again testing the `Vial placed?' condition in Figure <ref>.
§.§ Search Algorithm
The initial stage locates multiple placement positions. Using the knowledge of our selected target (r_x, r_y), we attempt to detect neighbours by matching a set of up to 8 other detected regions, disregarding occupancy. These neighbours provide bounds for the search: we fit a bounding box centred on the selected target, with width and height denoted (r_w, r_h), therefore spanning r_x ± (1/2)r_w and r_y ± (1/2)r_h.
It is assumed that the correct placement coordinate is in close proximity to the estimate; if we deviate beyond this perimeter while searching we may accidentally insert the vial into a different slot in the rack.
Generally, the search creates an envelope around the initial placement position at a distance represented by the search spacing. This creates possible trial locations to attempt to insert the vial into the originally targeted slot. If the vial fails to be inserted after searching the current envelope we expand it by the search spacing. Each time only the edge of the envelope is searched as its contents have been searched by previous iterations. We repeat this until either the vial is successfully inserted, or the search region is entirely beyond the bounds calculated previously.
Mathematically, we use the initial expansion factor E = 1, search spacing S = 2.5 mm, and occupancy set V = ∅. The expansion along the x and y axes is represented by e_x and e_y respectively; trial positions (t_x, t_y) are generated as follows:
\[
\begin{bmatrix} t_x \\ t_y \end{bmatrix} =
\begin{bmatrix} r_x \\ r_y \end{bmatrix} +
S \cdot \begin{bmatrix} e_x \\ e_y \end{bmatrix},
\qquad \text{where } e_x, e_y \in \mathbb{Z}, \; e_x, e_y \in [-E, E], \; \begin{bmatrix} e_x \\ e_y \end{bmatrix} \notin V.
\]
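A minimal sketch of this expanding-envelope search is shown below; the attempt_insertion callable is a placeholder for a force- or tactile-guided insertion attempt, and the bounding box (r_w, r_h) is assumed to come from the neighbour detection described above.

def envelope_search(r_x, r_y, r_w, r_h, attempt_insertion, spacing=2.5):
    """Search outward from (r_x, r_y) in rings of width `spacing` until insertion succeeds."""
    visited = set()     # offsets already tried (occupancy set V)
    expansion = 1       # initial expansion factor E
    while True:
        ring = [(ex, ey)
                for ex in range(-expansion, expansion + 1)
                for ey in range(-expansion, expansion + 1)
                if max(abs(ex), abs(ey)) == expansion and (ex, ey) not in visited]
        in_bounds = [(ex, ey) for ex, ey in ring
                     if abs(ex * spacing) <= r_w / 2 and abs(ey * spacing) <= r_h / 2]
        if not in_bounds:
            return None  # search region entirely beyond the bounds: give up
        for ex, ey in in_bounds:
            visited.add((ex, ey))
            if attempt_insertion(r_x + spacing * ex, r_y + spacing * ey):
                return (r_x + spacing * ex, r_y + spacing * ey)
        expansion += 1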
§.§ Tactile Sensor Calibration
Variations between tactile sensors necessitate a practical method for calculating the transformation between a detected feature in the tactile image space and the physical position on the sensing surface.
In our experiments we have only considered the position in the plane of the tactile sensor's contact surface; the equal pressure applied by both sides of the parallel gripper keeps any error in the gripping plane to a minimum.
A vial is placed into a tight-fitting rack at a known location; the robot arm is then positioned above the vial in the same top-down grasp that will be used in later experiments. At known offsets from the vial's position the gripper is closed and a tactile image is processed to find a position in the image space corresponding to a known offset on the physical sensing surface. Given the varying resolutions of tactile sensors we normalise the in-image coordinate to the range [0…1], with 0 representing the left extreme of the sensor and 1 the right.
The ordinary least squares method is then applied and its parameters recorded for inference during the later experiments.
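A minimal NumPy sketch of this calibration fit is shown below, assuming paired samples of normalised image coordinates and the corresponding known physical offsets (in mm) have been collected as described; the sample values are made up.

import numpy as np

def fit_tactile_mapping(image_coords, physical_offsets_mm):
    """Fit offset = a * coord + b by ordinary least squares and return (a, b)."""
    x = np.asarray(image_coords, dtype=float)          # normalised in-image positions in [0, 1]
    y = np.asarray(physical_offsets_mm, dtype=float)   # known offsets on the sensing surface
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

a, b = fit_tactile_mapping([0.1, 0.4, 0.6, 0.9], [-6.0, -1.5, 1.5, 6.0])
predicted_offset = a * 0.5 + b   # physical offset (mm) of a feature at image coordinate 0.5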
§ EXPERIMENTS
In this section, we conduct three experiments to evaluate the three modalities defined above. The goal of these experiments is to investigate the performance of multi-modal sensing on the vial insertion task compared to the single-modality baseline.
§.§ Metrics
We consider average success rate (successes/attempts), average number of attempts before success (Σ placement attempts/successes), and the average time elapsed before success (Σ time taken/successes). An attempt is marked successful if the vial is fully inserted into the rack before being released.
§.§ Experiment Setup
As shown in Figure <ref>, our system comprises a UR5 robotic arm with a Robotiq 2F-85 gripper mounted on the end-effector plate and an Intel D415 camera mounted on the `bottom' of the wrist, as shown in Figure <ref>. The gripper's standard rubber fingertips are used for both the visual and force feedback experiments, and they are replaced with a pair of Digit sensors <cit.> for the tactile experiment.
The tactile-physical mapping and the D415 camera are individually calibrated before any experiment begins, and the same calibration is used throughout all experiments; our process of calibrating the Digit sensor is described in Section <ref>.
Control of the UR5 is performed via PID in both target position and target velocity modes; inverse kinematics are provided by the built-in URScript methods.
Each experiment begins with the robot in a home pose above the workspace and a timestamp is recorded. The imaging step is then performed, and a single modality is used to insert the vial into the rack. A second timestamp is recorded when the gripper releases the vial, marking the end of that trial in the experiment. Success of the trial is evaluated, the vial is reset to the collection point, the rack is moved in the workspace, and a new trial begins.
§.§ Results
Table <ref> shows numerical results for each modality, Figure <ref> shows the distribution of successful placements against number of attempts, and Figure <ref> shows the cumulative probability of the vial being placed successfully given a number of attempts.
§.§.§ Visual Modality
Our visual modality only makes a single placement attempt due to the limitations of our rack detection method; the transparent vial causes significant problems for visual servoing and is a difficult problem in itself <cit.>. For this reason we cannot detect a failed placement attempt before releasing the vial and re-imaging the workspace.
The baseline does not show the high reliability needed for a high-throughput automation environment, with 48.78% success. This modality also generally takes longer to execute; however, its constant runtime does make it faster than the alternative modalities once they require more than seven placement attempts. The target velocity of the robot arm is constant between modalities to allow this direct comparison; however, the force feedback modality does lower the allowed acceleration to this velocity to minimise jerk forces.
§.§.§ Visual & Force Feedback
Table <ref> demonstrates the reduction in setup time from avoiding the additional visual estimate refinement step and the related reduction in first-time placement success. However, even with the additional placement attempts required, the average run time is significantly lower. This modality also displays the highest success rate of all tested modalities at 89.55%, and as shown in Figure <ref> it plateaus at a higher number of placement attempts, showing only minimal improvement in placement success after 4 attempts.
The decrease in first-time placement accuracy shows the effectiveness of the second imaging step in the visual-only modality: up until the first force feedback interruption, the only difference between the force feedback and visual-only modalities is the second imaging step. Therefore, the 8% first-time success rate lost by the force feedback method must come from the missing second imaging stage.
As this method relies on the robot's inbuilt force sensor there is potential for large forces to be generated while placing the vial: while requiring minimal modification to the robot, these sensor readings also contain the static holding forces the robot is experiencing, creating a significant noise floor. A larger contact force is then required to avoid false positives; this is undesirable in a laboratory setting, as accidentally disturbing the rack could lead to a chemical spill from the sample vials contained within. In human-robot collaboration there is also always cause for concern due to the injury risk posed by excessive force. Nevertheless, this method should be directly applicable to external force sensors mounted on the gripper rather than strictly using the robot's inbuilt sensing capacity; with a more accurate sensor the risks can be greatly reduced.
§.§.§ Visual & Tactile Feedback
With an average runtime similar to the force feedback modality, we can see that the initial reference image collection and tactile image processing introduce minimal overhead. Surprisingly, however, the success rate is significantly lower than that of the force feedback modality, although still an improvement over the baseline.
We have identified several factors which may contribute to this result: the Digit sensor cannot grasp the vial with as much force as the hard rubber gripper used in the other modalities, and its surface creates less friction due to the silicone membrane. This leads to the vial moving in the gripper when a placement attempt fails, making subsequent attempts less likely to succeed even with our correction attempts.
However, this weakness also helps eliminate the safety concerns of the previous modality, as the forces exerted on the vial and the surfaces in contact are significantly lower. We also have more insight into the orientation of the vial: spills can be avoided, as additional constraints can be introduced during planning to keep the vial's orientation within tolerance.
§.§.§ Search Method
If the tolerance between the vial and rack is minimal the search method becomes a failure case in our experiments. Consider the case where the robot undershoots the slot on the first attempt but, due to an overly large S after an envelope expansion, now overshoots it. With this algorithm we may fail to place the vial, as the search method does not dynamically adjust S, creating a trade-off between overall placement success rate and search speed. However, in our experiment using lab hardware we found the tolerance between the rack and vial to be comparatively large, which helps a human chemist handle vials quickly without them becoming stuck in the rack, and allows an S which favours placement speed.
§.§.§ Discussions
The additional time per placement attempt is linear, and the visual method only becomes faster after 7 placement attempts; however, this is highly dependent on the acceptable speed or safety parameters for the robot. The visual method also requires more initial robot movements and will therefore be comparatively slower than the other methods on the same platform.
Both feedback-based systems show potential for error recovery inside a robotic chemistry system. Despite the undesirable forces exerted, the force feedback method still shows significantly improved reliability, which is an important step toward closing the control loop in laboratory automation. A robotic system with multi-modality for error detection alone is a step beyond a completely open-loop system, which may be prone to undetected failures. However, some methods of achieving multi-modality may also inadvertently introduce new failure cases. For example, the tactile system significantly lowers the first-time success rate. This may be caused by the change in material properties of the grippers: the smoother contact surface of a vision-based tactile sensor may allow the vial to slide or rotate, causing a collision at the entrance to the rack.
§ CONCLUSION
In this paper we have shown the effectiveness of multi-modal sensing for increasing reliability in a common laboratory vial handling task. We have also introduced an effective filtering method for identifying valid placement locations by combining existing classic computer vision methods and machine learning, and a bounded search method for recovering from a failed placement attempt. Both our multi-modal approaches show significant improvement over the single modality approach and the increased richness of the sensory data offers additional attractive features in the laboratory automation context.
Despite offering a smaller improvement over the baseline, the inclusion of tactile sensors allows for improved safety in the movement planning stage via additional insight into the grasped object's orientation. However, the inclusion of a standalone force sensor provides greater insight into the external contact forces being placed on the grasped object, as the tactile approach generally requires slippage to occur before force is detected using our method.
Although it shows the best placement reliability, the force feedback system without a standalone sensor still presents some concerning properties for the human-robot collaborative environment, such as excessive force being applied before a stop is triggered.
A possible improvement to the search algorithm would be to explore from the maximum bounds inward, dynamically modifying the search spacing in a manner similar to a binary search, thereby avoiding the failure cases caused by an excessively large S. Accounting for previous search results may also allow the search to be biased towards an area, over time offsetting any calibration error between the gripper and the camera or tactile sensors.
In future work the combination of all three modalities could be explored, combining the greater placement ability of the force feedback system with the increased insight into the grasped object's pose from the tactile system. We also believe machine learning could further increase the placement success rate, for example by employing a reinforcement learning approach that potentially uses all three modalities simultaneously in a sim2real setting.
|
http://arxiv.org/abs/2307.02841v1
|
20230706081453
|
Excitation of Wannier-Stark states in a chain of coupled optical resonators with linear gain and nonlinear losses
|
[
"A. Verbitskiy",
"A. Yulin"
] |
physics.optics
|
[
"physics.optics",
"nlin.PS"
] |
School of Physics and Engineering, ITMO University, Kronverksky Pr. 49, bldg. A, St. Petersburg, 197101, Russia
School of Physics and Engineering, ITMO University, Kronverksky Pr. 49, bldg. A, St. Petersburg, 197101, Russia
In this paper we theoretically study the nonlinear dynamics of Wannier-Stark states in the dissipative system consisting of interacting optical resonators, whose resonant frequencies depend linearly on their number. It is shown that the negative losses in some resonators can switch the system into a lasing regime with Wannier-Stark states serving as working modes.
It is shown by extensive numerical simulations that there may be single-frequency stationary regimes as well as multi-frequency regimes. In the latter case Bloch oscillations can appear in the system.
The possibility of selective excitation of Wannier-Stark states by the appropriate choice of the dissipation profile is investigated. A simple perturbation theory describing the quasi-linear regimes is developed and compared against the numerical results.
Excitation of Wannier-Stark states in a chain of coupled optical resonators with linear gain and nonlinear losses
A. Yulin
August 1, 2023
=================================================================================================================
§ INTRODUCTION
Wannier-Stark ladders (WSL) continue to be of great interest to scientists in different fields of physics, such as solid-state physics, condensed matter and quantum magnets <cit.>. The WSL effect consists in the presence of equidistant lines in the spectrum, which correspond to the eigenmodes of the system (Wannier-Stark states) <cit.>. The beating between these states in time may result in periodic motion - Bloch oscillations (BOs) <cit.>.
BOs were initially predicted in solid-state physics. However, the experimental observation of this phenomenon in solids is a challenging problem, which explains why it took so many years to confirm the effect experimentally <cit.>. It is worth noting that BOs are a very common phenomenon, and they have been found in a large variety of physical systems such as atomic systems <cit.>, lasers <cit.>, coupled LC circuits <cit.>, mechanical systems <cit.>, and plasmonic <cit.> or exciton-polariton systems <cit.>.
The advantage of optical systems is that the experiments allowing one to observe the aforementioned effects are less involved compared to experiments in solid-state physics. Thus the theoretical prediction of optical WSLs and BOs <cit.> was quickly followed by experimental demonstrations.
One of the first experimental studies observing optical WSLs was reported in <cit.>, where, using a chirped Moire grating, a system concept similar to the electronic one was implemented. No less intriguing evidence of the existence of WSLs is presented in <cit.>. BOs have also been experimentally detected in the optical range <cit.>. For a complete review of the research on BOs and related phenomena see <cit.>.
The presence of dissipation, pumping and nonlinear effects in optical systems (for example in arrays of interacting nonlinear optical cavities) calls for a generalization of the theory of BOs to the case of nonlinear dissipative systems. Let us remark that optical systems such as arrays of microlasers can be seen as promising sources of coherent radiation <cit.>. Thus the study of these systems is not only of fundamental but also of practical interest.
In this paper we aim to study the regimes of radiation generation in one-dimensional systems of coupled optical cavities, where each resonator sustains only one mode whose structure is determined by the material and geometry of the resonator. A sketch of the described system is shown in Fig. 1. To make our system of the Bloch kind we introduce a linear dependence of the cavity frequency on the resonator index. The resonators become microlasers if they possess positive linear gain caused by the population inversion of their electronic excitations. This can normally be achieved by either optical or electrical pumping.
However, the interactions of different Wannier-Stark (WS) states can make the dynamics complicated, resulting in multi-frequency and, possibly, even chaotic behaviour. Below we consider in detail different regimes of WS lasers, the switching from single-frequency to multi-frequency regimes, and the appearance of BOs. To explain the behaviour of the system in the vicinity of the lasing threshold we develop a perturbation theory. We consider this work as a proof of concept rather than a discussion of the optimal experimental system; therefore we utilize the simplest model of the lasing cavities. We would like to mention that, for a real experiment, the scheme and, correspondingly, the theoretical model may need to be elaborated.
To describe the dynamics of light in the microresonators we use a well-known discrete model for the slowly varying in time complex amplitudes U_n(t) of the modes of the individual resonators <cit.>. For the sake of mathematical convenience we use dimensionless variables:
i dU_n/dt + σ (U_n+1+U_n-1-2U_n) + μ n U_n + i γ_n U_n + iβ_n|U_n|^2 U_n = 0,
where t is normalized time, n is the index enumerating the resonators, σ is the coupling strength between the resonators, which can be set to σ=1 without loss of generality, μ accounts for the dependence of the resonant frequency on the resonator index, and γ_n and β_n are the linear and nonlinear losses correspondingly. Both γ_n and β_n can be different for different resonators. Let us remark that here we consider a simple but physically meaningful case, assuming that the nonlinear effects change the effective losses but not the resonant frequencies of the individual resonators. We acknowledge that the effect of the nonlinear correction to the resonant frequencies can be of importance, but it requires special consideration and will be addressed elsewhere.
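As an illustration, a minimal Python sketch of how this discrete model can be integrated numerically is given below; the chain length, parameter values and single-site pump profile are illustrative and not those used in our simulations.

import numpy as np
from scipy.integrate import solve_ivp

N, sigma, mu, gamma, beta, a = 101, 1.0, 0.2, 0.01, 0.05, 0.05
n = np.arange(N) - N // 2
gamma_n = np.full(N, gamma)
gamma_n[N // 2] -= a                       # linear gain in the central resonator only
beta_n = np.zeros(N)
beta_n[N // 2] = beta                      # nonlinear losses in the pumped resonator only

def rhs(t, y):
    U = y[:N] + 1j * y[N:]
    lap = np.roll(U, 1) + np.roll(U, -1) - 2 * U
    lap[0] = U[1] - 2 * U[0]               # open (non-periodic) ends of the chain
    lap[-1] = U[-2] - 2 * U[-1]
    dU = 1j * (sigma * lap + mu * n * U) - gamma_n * U - beta_n * np.abs(U)**2 * U
    return np.concatenate([dU.real, dU.imag])

U0 = 1e-3 * (np.random.randn(N) + 1j * np.random.randn(N))   # weak noise initial condition
sol = solve_ivp(rhs, (0, 2000), np.concatenate([U0.real, U0.imag]), max_step=0.5)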
A strong enough incoherent pump can not only change the linear losses but make them negative. This means that such a pump can turn an individual cavity into a laser. However, considering the system of resonators, we have to calculate the effective gain of the supermodes of the system rather than the effective gain of an individual resonator. Very roughly, we can consider the stationary states as a balance between the effective gain and the effective nonlinear losses calculated for the WS state. Let us remark here that there may be a case where the nonlinear losses are present only in the pumped cavities, and a case where the nonlinear losses are present in all resonators. Below we will show that in these two cases the dynamics of the WS modes is different.
The paper is structured as follows. To do a systematic study of the problem we start with the simplest case where only one resonator is pumped. This is done in Section II of the paper.
In Section III we show that simultaneous excitation of several resonators makes the dynamics richer, giving rise to multi-frequency regimes including self-sustained BOs. In Section IV the mode selection is considered. It is shown that the efficiency of mode excitation depends on the pump profile and that, by controlling the shape of the pump, it is possible to extend the range of intensities where the single-frequency regime takes place. The main findings of the work are briefly discussed in the Conclusion.
§ SYSTEMS EXCITED BY LINEAR GAIN IN ONLY ONE RESONATOR
We start our consideration with a simple case where γ_n is negative in only one resonator, n=0; everywhere else γ_n is a positive constant. This means that we have linear amplification in the resonator n=0 and linear losses in all other resonators.
We choose the linear losses in the form γ_n=γ for n ≠ 0 and γ_0=γ - a, where a is the pump amplitude, and study the dynamics of the system by numerical simulations.
The numerical simulations reveal that only the trivial solution U_n=0 is possible while the linear gain a is below a lasing threshold that depends on the parameters of the system γ and μ. If the threshold is exceeded then the growth of eigenmodes of the system is observed. If the dissipative and nonlinear terms are small then the growing modes can be very accurately approximated by the WS states known analytically for equation (<ref>) in the conservative limit γ_n → 0 <cit.>. The eigenvalues of the WS states form an equidistant spectrum ω_m=μ m with the eigenfunctions W_n-m=J_n-m(2σ/μ), where the index m enumerates the eigenstates. We use the WS states normalized so that ∑_n W_n-m^2=1.
A simple perturbation theory can be developed if the dissipation is so small that it does not affect the spatial structure of the eigenstates. The quantity E=∑_n |U_n|^2 (the energy of the field in the system) is a conserved quantity if γ_n=0 and β_n=0. If γ_n and β_n are nonzero but small then the field in the system can be found in the form U_n^(m)=A_m(t) W_n-mexp(iμ m t), where A_m(t) is the time-dependent complex amplitude of the m-th WS state. Substituting this into (<ref>), multiplying by W_n-m and calculating the sum over n, we obtain the ordinary differential equations for A_m:
dA_m/dt = - Γ_m A_m - B_m |A_m|^2 A_m,
where Γ_m=∑_n γ_n W_n-m^2 and B_m=∑_n β_n W_n-m^4 are the effective linear and the nonlinear losses for the m-th mode.
For pure dissipative nonlinearity (the nonlinearity affects only the effective losses but not the resonant frequency of the cavities) the equations (<ref>) can be re-formulated as a set of equations for the intensities I_m = |A_m|^2:
dI_m/dt = 2 (- Γ_m I_m - B_m I_m^2).
For our choice of γ_n = γ - a δ_0n (δ_ij is the Kronecker symbol) the sum in the expression for the effective linear losses Γ_m can easily be calculated analytically:
Γ_m=γ - a W_-m^2.
The intensity distributions of the WS states are symmetric and have two main maxima situated symmetrically with respect to the center of the mode. This fact means that if the system is excited by linear gain in only one resonator, then there are two fastest growing modes with the same increment. For the parameters used in our numerical simulations the indices of the fastest growing modes are m_max=± 8.
Now let us compare the results of the perturbation theory against the direct numerical simulations of the master equation (<ref>). It is natural to introduce an effective linear gain of the mode as -Γ_m. The effective linear gains extracted from the numerical simulations and calculated by formula (<ref>) are shown in Fig. 2(a) as functions of the pump amplitude a. It is seen that in the vicinity of the lasing threshold, where the dissipative terms can be considered as small corrections, the results of the perturbation theory fit the numerical simulations very well.
Let us remark that the complex frequencies of the modes can be found by analysing the linearized equation for the amplitudes U_n:
i dU_n/dt + σ (U_n+1+U_n-1-2U_n) + μ n U_n + i γ_n U_n = 0.
Then looking for a solution in the form U_n (t)= V_n exp (i ω t) we obtain an eigenvalue problem:
ω V_n = σ (V_n+1+V_n-1-2V_n) + μ n V_n + i γ_n V_n.
The real part of ω is the frequency of the eigenmode, the imaginary part is its dissipation rate, and the eigenvector V_n describes the structure of the eigenmode. In the absence of the dissipative term the eigenstates are the conservative WS states discussed above. The solution of the spectral problem allows one to find the exact eigenstates in the dissipative case. We solved the spectral problem numerically to confirm that the dissipative terms do not affect the structure of the eigenmodes much.
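A minimal NumPy sketch of this spectral problem is given below; the chain length and parameter values are again illustrative.

import numpy as np

N, sigma, mu, gamma, a = 101, 1.0, 0.2, 0.01, 0.05
n = np.arange(N) - N // 2
gamma_n = np.full(N, gamma)
gamma_n[N // 2] -= a                              # gain in the central resonator

H = np.diag(mu * n - 2 * sigma + 1j * gamma_n)    # on-site terms of the non-Hermitian matrix
H += np.diag(np.full(N - 1, sigma), 1)            # coupling to the right neighbour
H += np.diag(np.full(N - 1, sigma), -1)           # coupling to the left neighbour

omega, V = np.linalg.eig(H)       # Re(omega): eigenfrequency, Im(omega): dissipation rate
growing = V[:, omega.imag < 0]    # modes with net gain (negative dissipation rate)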
It is also instructive to compare the effective linear gains of different WS states. The numerically found -Γ_m for the six fastest growing modes are shown in Fig. 2(b) as functions of the pump amplitude a. One can see that the fastest growing modes with the lowest lasing threshold are, for our choice of parameters, the modes with m=± 8; the second and third fastest growing modes have the indices m=± 9 and m=± 2 correspondingly.
The intensity of the stationary states I_m forming in the system can easily be found from (<ref>):
I_m=-Γ_m/ B_m.
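A minimal sketch of evaluating these quantities for the single-site pump, using the Bessel-function form of the WS states, is given below; the parameter values are illustrative.

import numpy as np
from scipy.special import jv

sigma, mu, gamma, a, beta = 1.0, 0.2, 0.01, 0.05, 0.05
n = np.arange(-60, 61)                         # resonator indices

def ws_state(m):
    """Conservative WS state W_{n-m} = J_{n-m}(2*sigma/mu), normalised to unit power."""
    W = jv(n - m, 2 * sigma / mu)
    return W / np.sqrt(np.sum(W**2))

for m in range(-15, 16):
    W = ws_state(m)
    W0 = W[n == 0][0]                          # value of the state at the pumped site n = 0
    Gamma_m = gamma - a * W0**2                # effective linear losses for the single-site pump
    B_m = beta * W0**4                         # nonlinear losses present only in the pumped site
    if Gamma_m < 0:                            # above threshold: nontrivial stationary state
        print(m, Gamma_m, -Gamma_m / B_m)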
The dependencies on the pump a of the stationary intensities of three pairs of WS states having the highest effective linear gains are shown in Fig. 3 for two cases: (a) for the case where the nonlinear losses are nonzero only in the excited resonator with n=0: β_0=β and (b) for the case of spatially uniform nonlinear losses: β_n=β. It is interesting to note that the stationary intensities can be higher for modes with lower effective linear gains, see Fig. 3(a). This can be explained by the fact that in the case where the nonlinear losses are nonzero only in the excited resonator the modes with the highest effective linear gains have the highest nonlinear losses so that their ratio (<ref>) is less compared to that of the modes with the lower effective linear gains.
We performed numerical simulations that reveal that only one pair of the WS states with the highest effective linear gain is dynamically stable for small pump intensities. The dependencies of the stationary intensities of the WS states extracted from the numerical simulations are shown in Fig. 3. The comparison of the analytics and the numerics shows that the agreement between the perturbation theory and the numerical simulations is good for the low intensities of the pump.
It is instructive to study the dynamics when the initial conditions are taken in the form of noise of small intensity. As it is mentioned above the modes start growing when the pump exceeds some threshold. We choose the pump exceeding only the threshold for the fastest growing modes. So, for our choice of the parameters, only the modes m=± 8 grow. The numerical simulations show that in the case of the nonlinear losses present only in the pumped resonator we see the formation of a single frequency stationary state in the form of WS state with m=8 or m=-8. The probability of the formation of each of the states is 1/2. The formation of the stationary states is illustrated in Fig. 4(a), (b), (d) and (e).
Different regimes of the stationary states formation occur if the nonlinear losses are distributed evenly in the system. The excitation thresholds, of course, remain the same, but the stationary state forming from a weak noise varies periodically in time. Very close to the excitation threshold the stationary state can be seen as a superposition of the WS states with m=8 and m=-8, consequently the stationary state contains the temporal harmonics with the frequencies equal to the eigenfrequencies of the WS states. The formation of such a state is illustrated in Fig. 4(c),(f).
To explain this behavior of the system we extended the perturbation theory described above by writing the equations for the amplitudes A_± of the two interacting modes m=±m̃ having the highest effective linear gains. So we sought the field in the form U_n=A_+W_n-m̃exp(iμm̃ t) + A_-W_n+m̃exp(-iμm̃ t). Substituting this ansatz into (<ref>) and projecting onto the eigenstates, we obtain the equations for A_±.
These equations can be reduced to the equations for the intensities I_± in the same way as was done for (<ref>):
dI_+/dt = -2 (Γ + B I_+ + B̃ I_-)I_+,
dI_-/dt = -2 (Γ + B I_- + B̃ I_+)I_-,
where B̃=2∑_n β_n W_n-m̃^2W_n+m̃^2 and Γ=Γ_±m̃. In deriving these equations we assumed that the difference between the eigenfrequencies of the states is large, so that we can safely disregard the rapidly oscillating terms.
Let us analyse the fixed points of the dynamical system (<ref>)-(<ref>). For Γ>0 there is only a trivial solution I_±=0.
For negative losses (and, correspondingly, positive gain) there are four solutions: I_±=0; I_+=0, I_-=-Γ/B; I_-=0, I_+=-Γ/B; and I_±=-Γ/(B+B̃).
It is straightforward to study the stability of these states by writing the linearized equations for small perturbations ξ_± of the intensities I_± and finding the eigenvalues governing the evolution of the perturbations. The trivial state is, of course, always unstable, with λ_1,2=-2Γ. The second and third states have the eigenvalues λ_1=-2Γ(1-B̃/B) and λ_2=2Γ. λ_2 is always negative for Γ<0, while λ_1 is negative for B<B̃ and positive otherwise. This means that each of these states can be either a stable node for B<B̃ or a saddle for B>B̃. The last state, I_±=-Γ/(B+B̃), has the eigenvalues λ=2Γ(B±B̃)/(B+B̃). From this we can conclude that this state is stable (a stable node) for B>B̃ and unstable (a saddle) for B<B̃.
So the stability analysis tells us that if B>B̃ there is only one stable stationary state, I_±=-Γ/(B+B̃), for the system (<ref>)-(<ref>). For B<B̃ there are two stable states: I_+=0, I_-=-Γ/B and I_+=-Γ/B, I_-=0.
Now let us estimate the values of B̃ and B. For the case when only β_0 ≠ 0, B=∑_n β_n W_n-m̃^4=β_0 W_m̃^4 and B̃=2∑_n β_n W_n-m̃^2W_n+m̃^2=2β_0 W_m̃^4. This means that B̃=2B and, as our stability analysis shows, in this case the stable stationary states are I_+=0, I_-=-Γ/B and I_+=-Γ/B, I_-=0. Only a stable state can be observed as a stationary state in numerical simulations, and this explains why for this choice of β_n we see the formation of either one or the other WS state.
In the case when β_n=β the ratio of B and B̃ can be different. The coefficient B̃ depends on the overlap of the intensity distributions of the states W_n±m̃, and it is easy to see that this overlap decreases with the increase of the width of the WS state, defined as H=√(∑_n W_n^2 (n-n_c)^2), where n_c is the center of the WS state. Fig. 5 shows the dependencies of B and B̃ on the width of the states H (the width H of the state is determined by μ).
For μ = 0.2, used in our direct modelling, the coefficients are B=0.07 and B̃=0.04. This means that in this case there is only one stable stationary state, I_±=-Γ/(B+B̃).
So one can anticipate that in this case the final state consists of two WS states of the same intensity oscillating at different frequencies. This fits the results of our numerical simulations perfectly, see Fig. 4(c). It is also worth noting that for μ>0.6 bands of μ where B̃>B appear, see Fig. 5(a). This means that in these bands there should be two stable states, I_+=0, I_-=-Γ/B and I_+=-Γ/B, I_-=0, instead of the previously observed single state. This fact is confirmed by numerical calculations.
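A minimal sketch of how the coefficients B and B̃ can be evaluated for the uniform-loss case β_n = β, again using the Bessel-function WS states, is given below; β = 1 and m̃ = 8 are illustrative choices.

import numpy as np
from scipy.special import jv

def overlap_coefficients(mu, m_tilde=8, sigma=1.0, beta=1.0, n_max=200):
    """Return (B, B_tilde) for uniform nonlinear losses beta_n = beta."""
    n = np.arange(-n_max, n_max + 1)
    W_minus = jv(n - m_tilde, 2 * sigma / mu)    # W_{n - m~}
    W_plus = jv(n + m_tilde, 2 * sigma / mu)     # W_{n + m~}
    W_minus /= np.sqrt(np.sum(W_minus**2))       # normalise each state to unit power
    W_plus /= np.sqrt(np.sum(W_plus**2))
    B = beta * np.sum(W_minus**4)
    B_tilde = 2 * beta * np.sum(W_minus**2 * W_plus**2)
    return B, B_tilde

B, B_tilde = overlap_coefficients(mu=0.2)
regime = "two-mode state stable" if B > B_tilde else "single-mode states stable"
print(B, B_tilde, regime)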
We would like to note that the developed perturbation theory gives not only a qualitative explanation of the observed effect, but also allows the intensities of the two-component states to be determined with good precision. The dependencies of the intensities I_±(t) extracted from numerical simulations are overlaid with those calculated by formulas (<ref>)-(<ref>). It is seen that for low values of the linear gain a the agreement is good.
§ LASING WITH THE LINEAR GAIN IN SEVERAL RESONATORS
To increase the radiation power it makes sense to create linear gain in several resonators. Let us first consider the case where the nonlinear losses are present only in the pumped resonators.
First we note that if the gain is evenly distributed over the pumped resonators then, as one can anticipate, the lasing starts at lower pump amplitudes for a larger number of pumped resonators, see Fig. 6(a), which shows the total energy E of the single-mode stationary state as a function of the pump amplitude a for different numbers of pumped neighbouring resonators M. It can be seen from this figure that the stationary energy values E obtained via the perturbation method (solid line) and by numerical simulations (circles) are in good agreement with each other for different M.
The single-frequency state is the only possible solution within the range of pumps a_th1<a<a_th2, where a_th1 is the excitation threshold of the fastest growing pair of WS modes and a_th2 is the excitation threshold of the second fastest growing pair. The simulations show that, with an increase in the number of pumped resonators M, the range of existence of a single-frequency solution decreases, see Fig. 6(a).
It is obvious that, at sufficiently large pump amplitudes a, the single-frequency stationary state becomes unstable and collapses. It is important to study how the maximum pump amplitude a_max providing the single-frequency regime depends on the number of pumped sites M. This dependence is shown in Fig. 6(b). It can be seen that the critical pump amplitude decreases with an increase of M. This behaviour can be explained by the fact that for wide pumps the overlap integrals defining the effective gain of the modes depend only weakly on the mode indices. This means that the modes have very similar excitation thresholds. Thus the single-frequency regime exists only in a tiny range of pumps between the excitation thresholds of the fastest and the second fastest growing modes.
Let us now consider in more detail the dynamics of the system with the pump in three neighbouring resonators. The numerical simulations show that in the vicinity of the threshold (a ≈ 0.043) the radiation is monochromatic, see Fig. 6(a). The stationary states similar to those shown in Fig. 4(a) and (b) form with equal probabilities.
When the pump amplitude exceeds the threshold value for the next pair of WS states (a ≈ 0.044), a multi-frequency regime appears in the system. For a pump slightly above this threshold the stationary state can be seen as a superposition of two WS states having different frequencies. Let us note that due to the nonlinearity the temporal spectrum of the stationary state contains the whole set of combination frequencies, but in the case of weak nonlinearity there are two dominating frequencies corresponding to the eigenfrequencies of the modes. So this case is very similar to the case where the gain is present in only one resonator but the nonlinear losses are present in all resonators of the system, see Fig. 4(c).
At higher levels of the pump the multimode regime changes. The numerical simulations reveal that complex states resembling BOs appear, see Fig. 7(a), (b). Their temporal spectra are shown in panels (e), (f) of the same figure. The spectra contain four harmonics for the pump a=0.06, corresponding to the WS states with indices m = ± 8 and m = ± 9, see Fig. 7(e). For higher pumps more temporal harmonics appear, see Fig. 7(f) showing the spectrum for a=0.2.
It is interesting to note that for pumps exceeding a threshold level (a ≈ 0.13 for our parameters) the intensity distribution becomes asymmetric, compare Fig. 7(a) and Fig. 7(b). After the symmetry breaking one pair of frequencies survives without serious changes, but the other transforms into several spectral lines, signalling the excitation of many WS states. So the right snaking pattern in panel (f) contains more harmonics, and thus the BOs produced by the eigenstates belonging to this part of the spectrum become smoother.
For higher numbers of excited resonators the BOs become smoother and have a wide temporal spectrum, see Fig. 7(c), (d), (g), (h) showing the evolution of the field amplitude and the temporal spectra for M=7 and M=21 excited resonators. To prove that the snaking patterns seen in panels (a)-(d) are related to BOs, we calculated analytically the trajectory of BOs for the linear conservative system and superimposed this curve on panel (c). It is seen that the amplitude and the period of the BOs in the conservative counterpart of the considered system are very similar to those observed in the direct numerical simulations.
Let us remark here that we verified that the multi-frequency regimes are qualitatively the same regardless of whether the nonlinear losses are present in all resonators or only in the excited ones. That is why in this paper we do not discuss the case of evenly distributed nonlinear losses.
§ MODE SELECTION
For practical purposes it can be useful to increase the range of pumps supporting the single-frequency regime. This problem seems especially important when the linear gain is created in many resonators, which allows the output power of the working mode to be increased. To stabilize the single-frequency regime we suggest profiling the pump. The profiling allows the effective gain seen by the different modes to be controlled, and thus it is possible to ensure that one of the modes has an increment significantly higher than the increments of the other modes.
We start with the case where only one of the resonators has positive linear gain and the nonlinear losses are present only in this resonator. As discussed above, in this case two different WS states may form with equal probability. Let us show that, by choosing the appropriate pump, it is possible to ensure that the stationary state is predefined by the shape of the gain and thus does not depend on the initial conditions. For this we modify the effective losses in the individual resonators by adding some additional losses in the appropriate resonator, see Fig. 8(a, c), showing the distribution of the effective losses.
Without the added losses there are two eigenmodes with the same effective linear gain but different frequencies and field distributions. We then increase the losses in a resonator where one of the modes has an intensity maximum but the intensity of the second mode is small. This means that the added dissipation suppresses the effective linear gain of the first mode but does not change the growth rate of the second mode much. This way it is possible to achieve controllable excitation of the desired WS state. The evolution of the mode growing from weak noise is shown in Fig. 8(a, c).
In Sec. II it is shown that, in the case of nonlinear losses present in all resonators, the stationary state is a combination of the two fastest growing WS states having different frequencies. By adding additional losses in one of the resonators it is possible to suppress one of the WS states. So in this case the modification of the pump profile can provide single-frequency lasing. This is illustrated by Fig. 8(b, d), showing the evolution of the field with weak noise taken as the initial condition. For t<4 · 10^4 there are no additional losses and one can see the formation of a stationary state in the form of a superposition of two WS states. At t=4· 10^4 we switch on the additional losses in the resonator n=16. This suppresses one of the WS states immediately and one can observe a stable single-frequency WS state.
To increase the power of the lasing mode it is natural to increase the area and the intensity of the pump. However, as discussed above, this makes the single-mode regime difficult to observe. The question of single-frequency generation is most acute when the nonlinear losses are located in the excited resonators. This issue can be overcome by profiling the gain: placing the pump in the resonators where the working mode has intensity maxima and adding losses in the resonators where the other modes have high intensity. In numerical simulations we consider the case of 7 pumped oscillators and different distributions of the effective losses, see Fig. 9(a-c).
The effective gain created by these pumps depends on the index of the WS state. These dependencies are shown in panels (d-f) of Fig. 9. It is seen there that the effective gain for one of the modes is much greater than the gain for the other modes, as demonstrated in panels (e, f). Moreover, it is seen that in a wide range of pump amplitudes only one mode has a positive increment. So it can be anticipated that the fastest growing mode will define the stationary state.
We checked by numerical simulations the hypothesis that if only one mode has a positive increment then the final stationary state is a single-frequency one. The numerical simulations fully confirm this prediction. At the same time, as expected, for the pump profile shown in Fig. 9(a) the single-frequency generation range is extremely small (E_max=0.04).
However, in the numerics we see that, due to gain profiling, single-frequency lasing regimes continue to exist at sufficiently high pumps where there is more than one growing mode. So if the pump is located in the resonators where the working mode has intensity maxima, see Fig. 9(b), the existence range of the single-frequency regime increases by two orders of magnitude (E_max≈ 4). This behavior can be explained by the fact that this pump profiling provides a better selection of the working mode in terms of the effective gain difference, see Fig. 9(e).
It is possible to increase the single-frequency range even further by adding additional losses in the resonators where the intensity of the working mode has minima, see the pump profile shown in Fig. 9(c). In this case it is interesting to study how the maximum values of the pump amplitude and the energy of the one-mode stationary state depend on the level of additional losses γ̃. The numerical findings are presented in Fig. 10. It is seen that the range of the pump where the lasing is monochromatic increases drastically (by an order of magnitude). So the maximum achievable energy of the working mode is much higher if the additional losses are added. Besides, a comparison of panels (a) and (b) shows that the energy of the single-frequency state depends almost linearly on the pump amplitude. It is also important to stress that for pump amplitudes a>3 the single-mode regime is still supported, but the dissipative terms become comparable to the conservative ones, and therefore the modes in the system are no longer pure Wannier-Stark states.
§ CONCLUSION
In this paper we considered the dynamics of Wannier-Stark (WS) states in a discrete system consisting of an array of interacting optical resonators. The presence of linear gain in some of the resonators can switch the system into a lasing regime, provided that the gain exceeds a threshold value depending on the structure of the WS state and the spatial distribution of the pump.
The linear gain is saturated by the nonlinear losses so that stationary states can form in the system. It is shown that in the suggested system the lasing modes are the WS states. We considered the system with the nonlinear losses present only in the pumped resonators having linear gain (only these resonators are appropriately doped) and the system with the nonlinear losses present in all resonators (all resonators are doped, but the external pump profile makes the linear gain a function of the resonator index).
To study the dynamics of the system analytically, a simple perturbation theory is developed. The theory allows the excitation thresholds and the stationary amplitudes to be found and explains the competition between the modes. The comparison of the perturbation theory with the results of direct numerical simulations shows that the perturbation theory works well if the dissipative terms are small in the sense that they do not affect the structure of the eigenmodes much but only govern the dynamics of the complex amplitudes of the modes.
It is found that the distribution of the nonlinear losses can play a crucial role in the dynamics of the excited WS states. In particular, it is shown that if linear gain exists in only one resonator then, in the vicinity of the lasing threshold, a single-frequency lasing regime takes place if the nonlinear losses exist in the pumped resonator only. However, if the nonlinear losses are present in all resonators then, even in the vicinity of the lasing threshold, the stationary state is a complex one with two dominating spectral lines corresponding to the frequencies of the two WS states having the maximum linear growth rate. This can be explained by calculating the coefficients determining the nonlinear interaction between the modes. This analysis shows that in the first case the mode of higher amplitude successfully suppresses its competitor, but in the second case this suppression is insufficient to prevent excitation of the other mode.
The case of linear gain present in several neighboring resonators is also considered. It is shown that single-frequency lasing is still possible in the vicinity of the lasing threshold. However, this regime occurs within a small range of pumps and the energy of the lasing mode is low. If the pump amplitude grows then a complex multi-frequency regime sets in. Using such a pump it is possible to reproduce Bloch oscillations (BOs) in the dissipative system. It is interesting to note that at relatively low pump amplitudes the snaking pattern manifesting the BOs is symmetric. For higher pumps symmetry breaking occurs and the dynamics looks more like two different coexisting BOs.
The problem of the stabilization of single-frequency lasing is also studied. It is shown that by choosing an appropriate profile of the pump it is possible to extend the single-frequency regime and to increase the energy of the lasing mode significantly. By adding additional losses to the system it is possible to achieve even better selection of the working mode and thus realise single-frequency lasing of the chosen WS mode over a significant pump range.
The use of WS states as working modes can be advantageous because WS modes of different frequencies have the same spatial structure, just shifted in space. This fact allows the frequency of the lasing mode to be tuned by shifting the excitation spot. At the same time, the lasing mode is wide (it includes many excited resonators), which gives an opportunity to pump it by creating gain in many resonators. So it is possible to increase the maximum energy of the excited mode. This effect may be useful for practical purposes, especially considering the fact that the power range of monochromatic WS lasing can be significantly extended by appropriate shaping of the pump.
This work was supported by the Ministry of Science and Higher Education of Russian Federation, goszadanie no. 2019-1246.
|
http://arxiv.org/abs/2307.02097v1
|
20230705081742
|
Standalone, Descriptive, and Predictive Digital Twin of an Onshore Wind Farm in Complex Terrain
|
[
"Florian Stadtmann",
"Adil Rasheed",
"Tore Rasmussen"
] |
eess.SP
|
[
"eess.SP"
] |
^1 Department of Engineering Cybernetics, Norwegian University of Science and Technology, Trondheim, Norway
^2 Department of Mathematics and Cybernetics, SINTEF Digital, Trondheim, Norway
^3 Aneo, Norway
florian.stadtmann@ntnu.no
In this work, a digital twin with standalone, descriptive, and predictive capabilities is created for an existing onshore wind farm located in complex terrain. A standalone digital twin is implemented with a virtual-reality-enabled 3D interface using openly available data on the turbines and their environment. Real SCADA data from the wind farm are used to elevate the digital twin to the descriptive level. The data are complemented with weather forecasts from a microscale model nested into Scandinavian meteorological forecasts, and wind resources are visualized inside the human-machine interface. Finally, the weather data are used to infer predictions on the hourly power production of each turbine and the whole wind farm with a 61 hours forecasting horizon. The digital twin provides a data platform and interface for power predictions with a visual explanation of the prediction, and it serves as a basis for future work on digital twins.
§ INTRODUCTION
The importance of wind energy production efficiency cannot be overstated in the context of combating climate change and achieving a net-zero emissions target by 2050 <cit.>. With the proliferation of cheaper sensors and the growing trend of the Internet of Things, the potential for extracting data from wind farms has increased significantly. However, it is not sufficient to store collected information in data silos. Instead, real-time data analysis and visualization can be leveraged to enable optimal control and informed decision-making and to unlock the full potential of the data.
The concept of the digital twin has emerged as a promising solution to address these challenges. A digital twin utilizes available data in real-time to monitor the current state of an asset and its environment, predict future states, detect faults, perform what-if scenario analysis, provide decision support, and ultimately enable autonomous control of the asset <cit.>. The use of a suitable human-machine interface enhances the interpretation of analysis results and allows for effective communication with stakeholders.
A survey conducted with industry partners of the Norwegian Research Centre on Wind Energy “FME NorthWind” indicates that the wind industry is keenly interested in utilizing digital twins to reduce the cost of wind energy <cit.>. However, several challenges must be addressed before the full potential can be unlocked in wind energy applications. These challenges relate to both the implementation and acceptance of digital twins within the industry <cit.>. Overcoming these challenges will be critical to advancing the development and adoption of digital twins in the wind energy sector.
To this end the current work attempts to realize the following:
* Introducing readers to the concept of digital twins within the context of wind energy applications and providing a scale to rank digital twins based on their capabilities.
* Demonstrating a digital twin of an onshore wind farm with standalone, descriptive, and predictive capabilities. This will provide a practical illustration of the potential benefits of digital twins in wind energy applications, as well as offer insights into the challenges of developing such models.
* Discussing the potential for further research on digital twins for wind farm applications. By highlighting areas where additional research is needed, we hope to catalyze progress in this field and drive innovation in the wind energy sector.
The article is structured as follows:
First, the definition of the term digital twin used in this work is clarified in section <ref>. The capability level scales are explained briefly. In section <ref>, the implementation of the standalone digital twin is given with a focus on terrain and visual interface. The onshore Bessakerfjellet wind farm is used as a demonstration site. It is operated by Aneo and is located at (64°13' N, 10°23' E) on the Norwegian coastline. Section <ref> explains the integration and visualization of data measured at the turbines. Predictive capabilities are added in section <ref> by implementing weather forecasts and performing predictions of the wind turbines' power production.
The work is discussed in section <ref> and an outlook into future work is given. Finally, the work is summarized in section <ref>.
§ DEFINITION AND CAPABILITY LEVELS
The term digital twin is being used for different concepts. Here, the digital twin is "a virtual representation of a physical asset or a process enabled through data and simulators for real-time prediction, optimization, monitoring, control, and informed decision making" <cit.>.
The concept of a digital twin with all capabilities is shown in figure <ref>.
Since this definition still leaves some room, we use the capability level scale from <cit.> to specify the digital twin's exact capabilities. As such, a digital twin can be ranked on a scale from 0 to 5 as a standalone, descriptive, diagnostic, predictive, prescriptive, or autonomous digital twin as shown in figure <ref>. Here, a brief overview of the capability level scale is given. A more detailed description can be found in <cit.> and <cit.>.
* The standalone digital twin is a virtual representation of a wind farm that lacks a real-time connection to the physical wind farm. It can be utilized in the design, planning, and construction stages before the wind farm is operational.
* In the descriptive digital twin, measurements from the wind farm are being streamed into the digital twin. The descriptive digital twin mirrors the state of the real wind farm at each point in time and provides a platform on which data can be bundled, enhanced (e.g. through virtual sensing), processed, and visualized to the human operators and other stakeholders.
* The diagnostic digital twin uses the data gathered in the descriptive digital twin as input for analysis such as condition-based maintenance. The condition of components is tracked through e.g. vibration and temperature measurements, and anomalies are diagnosed. This way, minor deficits can be detected early and resolved before they result in major faults like turbine damage and unexpected downtime.
* A predictive digital twin does not only use current and historical data but also forecasts parameters to predict future asset states. The predictive capabilities can be used for predictive maintenance or through power forecasts for the energy market.
* In a prescriptive digital twin, recommendations are provided through what-if scenario analysis and risk assessment. Such prescriptions can include a balancing of component wear against power production based on current electricity prices and demand, or optimal maintenance scheduling based on component wear, estimated remaining useful lifetime, data anomalies, and weather forecasts.
* The autonomous digital twin acts on the prescriptions on its own. Autonomous digital twin capabilities can range from farm-wide wake steering and component wear balancing over inspection through the usage of autonomous drones to automated operation and maintenance of the wind farm.
§ STANDALONE DIGITAL TWIN
In this work, a standalone digital twin of the onshore wind farm has been implemented following a similar approach as is explained in <cit.> for a floating offshore wind turbine, including a user interface using virtual reality. In contrast to the single-turbine implementation in <cit.>, a whole wind farm is implemented here. Additionally, the local terrain around the wind farm is included.
§.§ Terrain
As an input to the terrain generation, height maps of the local terrain are downloaded from <cit.> on a 1 m × 1 m resolution grid. Since the wind farm is located at a shoreline and the LIDAR-based height measurements cannot penetrate the water surface, the height maps are complemented with information on ocean depth contour lines available from <cit.>. All information on onshore and offshore terrain height is combined in a single terrain map. The height is then binned into int16 and the map is split into equal chunks to improve computational efficiency during rendering. Next, aerial images are downloaded from <cit.> with a 1 m × 1 m resolution. The images are combined and split into chunks matching the chunks of the terrain height. Terrain height and texture are then imported into the Unity game engine, where they are combined. As evident from figure <ref>, a top-down view of the 3D terrain inside the game engine (center) can only be distinguished from aerial images from <cit.> (surrounding) by its improved resolution, 3D terrain, animated water, and dynamic lighting. Note that the terrain is not implemented for visual realism alone; it also contains information on logistical access through roads and nearby villages, as well as information on terrain height, water bodies, and forestation relevant for understanding wind flow.
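To make the preprocessing step concrete, the sketch below shows how combined height data could be quantized to int16 and split into equal chunks. This is a minimal Python example, not the actual implementation: the array names, the NaN-over-water convention, and the chunk size are illustrative assumptions, and the onshore and offshore data are assumed to have already been resampled onto the same 1 m grid.

```python
import numpy as np

def preprocess_terrain(onshore_height, ocean_depth, chunk_size=1024):
    """Combine onshore LIDAR heights and offshore bathymetry, quantize to int16,
    and split the map into equal chunks for efficient rendering.

    onshore_height: 2D float array (NaN over water), 1 m x 1 m grid
    ocean_depth:    2D float array of (negative) depths on the same grid
    """
    # Merge: use LIDAR heights on land, bathymetry over water
    combined = np.where(np.isnan(onshore_height), ocean_depth, onshore_height)

    # Bin the height into int16 to reduce the memory footprint
    combined = np.round(combined).astype(np.int16)

    # Split into equal chunks so the game engine can load/cull them independently
    chunks = []
    rows, cols = combined.shape
    for r in range(0, rows, chunk_size):
        for c in range(0, cols, chunk_size):
            chunks.append(combined[r:r + chunk_size, c:c + chunk_size])
    return chunks
```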
§.§ Turbines
Since no CAD model of the turbines was available at the start of the project, a model was created in Blender. Tower height and rotor diameter are based on data sheets, while the nacelle and blades are based on pictures of the Enercon E70-4. The 3D CAD model of the turbine is shown in figure <ref>.
The horizontal position of each turbine is known, while the vertical position is inferred from the terrain height.
§ DESCRIPTIVE DIGITAL TWIN
The digital twin is enhanced with descriptive capabilities by including SCADA data from each wind turbine. At this stage, the digital twin mirrors the state of the physical wind turbine. Only minor changes were made to the implementation in <cit.>. Namely, the data structure gained an additional hierarchical level to advance from turbine to farm-level and the interactable components of the turbine were adjusted to the new turbine type. Additionally, the data input format changed, which required rebuilding the data reading module. Finally, two visualization methods were added to depict the current power production, as it cannot be directly seen on the turbine models.
§.§ Data
The available data consist of wind speed, wind direction, nacelle direction, and active power from each turbine. The measurement intervals vary from 3 to 10 minutes. Since the data are non-equidistant in time, an updating function constantly checks for new measurements. For real-time operation, the persistence method is used to bridge time spans without measurements. If the digital twin is instead used to inspect historic data, the data are interpolated between measurements. The feasibility of real-time data streaming is demonstrated as explained in section <ref>.
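The two strategies for bridging gaps between non-equidistant measurements can be sketched as follows; this is an illustrative Python helper (not the actual implementation), assuming timestamps are given in seconds and sorted in ascending order.

```python
import numpy as np

def value_at(times, values, t, mode="realtime"):
    """Return the signal value at query time t from non-equidistant samples.

    times:  sorted 1D array of measurement timestamps (seconds)
    values: 1D array of measurements (e.g. active power) at those times
    mode:   "realtime" -> persistence (hold the last measurement),
            "historic" -> linear interpolation between measurements
    """
    idx = np.searchsorted(times, t, side="right") - 1
    if idx < 0:
        return np.nan            # query lies before the first measurement
    if mode == "realtime" or idx == len(times) - 1:
        return values[idx]       # persistence: bridge the gap with the last value
    # historic inspection: interpolate between neighbouring measurements
    t0, t1 = times[idx], times[idx + 1]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * values[idx] + w * values[idx + 1]
```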
§.§ Visualization
The yaw angle of each turbine is directly visible from the orientation of the turbine model. The active power can be shown in text above each turbine, or alternatively through gauges with dial and color indications as shown in figure <ref>.
§ PREDICTIVE DIGITAL TWIN
First predictive capabilities are added to the digital twin by streaming publicly available weather forecast data. These external forecasts are then used to predict the theoretical power production at the turbine- and farm-levels.
§.§ Wind Field
A vector field for wind speed and direction is implemented by streaming weather forecasts from the Norwegian Meteorological Institute's Thredds service <cit.> in real-time. The MetCoOp Ensemble Prediction System (MEPS) <cit.> provides forecasts every 6 h up to 61 h ahead with a frequency of 1 h. Parameters of interest are wind speed, wind direction, air pressure, air temperature, and relative air humidity. However, the MEPS model has a resolution of only 2.5 km. For this reason, the SIMRA microscale model nested into the HARMONIE mesoscale model is used around the wind farm to increase the lateral and vertical resolution of the forecast and include effects induced through the complex terrain. More information on the HARMONIE and SIMRA models can be found in <cit.>. The SIMRA model evaluated around the Bessakerfjellet wind farm is available at <cit.>. It includes wind speed, wind direction, air pressure, and air temperature in 1 h intervals for 6 h to 18 h ahead and has been evaluated every 12 h for this particular data set. There is no technical reason preventing the SIMRA model from being evaluated more frequently and with a longer forecasting horizon apart from saving on computational resources. The wind is visualized in the digital twin through wind trails moving through the vector field, or by showing parts of the vector field directly as can be seen in figure <ref>. Vector direction matches wind direction, vector length represents wind speed, and color indicates the turbulence index.
§.§ Physics-based power prediction
In the next step, the weather forecast is used to estimate the power production at each turbine. In <cit.>, weather forecasts such as the MEPS forecast were used with support vector machines, clustering methods, and random forest algorithms to map from wind to power production in flat terrain. Here, the weather forecast is used as input, but the mapping from weather to produced power is done through physics-based models (PBMs) only to circumvent the black-box problem of data-driven methods (DDMs).
A data sheet for the turbine type is used that contains the direct mapping from wind speed to power production, as well as the power coefficient as a function of wind speed, with 1 m/s intervals. The power coefficient can be used in the well-known relation
P(v) = (1/2) ρ C_P(v) A v^3
where v is the wind speed, P(v) is the produced power, ρ is the density of the air, C_P(v) is the turbine-specific power coefficient, and A is the area swept by the blades.
The blade sweeping area A is known to be 3959 m^2.
In the first approach, the air density ρ is assumed to be constant with ρ_s=1.225 kg/m^3. However, air density depends on temperature and pressure. Treating air as an ideal gas, the density of dry air ρ(T,p) can be calculated as
ρ(T,p) = p M_d / (R T)
where T is the air temperature, p is the pressure of the air, M_d=0.0289652 kg/mol is the molar mass of dry air, and R=8.31446 J/K mol is the universal gas constant. Furthermore, humidity can be included through
ρ(T,p,ϕ) = [(p - ϕ p_sat) M_d + ϕ p_sat M_v] / (R T)
with M_v=0.018016 kg/mol as the molar mass of water vapour, ϕ as the relative humidity, and
p_sat = 6.11 hPa × 10^[7.5 (T+273.15 K)/(T+510.45 K)]
as the saturation pressure, calculated with the Tetens equation as done in <cit.>.
The air density can be used to modify the power curve by
P(v,T,p,ϕ) = P(v) ρ(T,p,ϕ)/ρ_s
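The chain of equations above maps a forecast of wind speed, temperature, pressure, and humidity to a density-corrected power estimate. The sketch below implements it directly, with the constants and the saturation-pressure exponent taken as printed in the text; the `power_curve` argument is assumed to be an interpolating function built from the manufacturer's data sheet (e.g. linear or cubic), and the cap at rated power is the optional extra step mentioned below.

```python
R = 8.31446        # J/(K mol), universal gas constant
M_D = 0.0289652    # kg/mol, molar mass of dry air
M_V = 0.018016     # kg/mol, molar mass of water vapour
RHO_S = 1.225      # kg/m^3, standard air density assumed in the data sheet

def saturation_pressure(T):
    """Tetens-type saturation pressure in Pa (T in K); constants as printed in the text."""
    return 611.0 * 10.0 ** (7.5 * (T + 273.15) / (T + 510.45))

def air_density(T, p, phi=0.0):
    """Density of dry (phi=0) or humid air from the ideal gas law; T in K, p in Pa."""
    p_sat = saturation_pressure(T)
    return ((p - phi * p_sat) * M_D + phi * p_sat * M_V) / (R * T)

def corrected_power(v, T, p, phi, power_curve, rated_power=None):
    """Density-corrected power: P(v, T, p, phi) = P(v) * rho(T, p, phi) / rho_s."""
    P = power_curve(v) * air_density(T, p, phi) / RHO_S
    if rated_power is not None:
        P = min(P, rated_power)   # optional cap at the turbine's rated power
    return P
```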
The quality of the power prediction is calculated on a one-year training set with an hourly resolution for each combination of
* wind speed v, air temperature, and air pressure based on the MEPS or SIMRA model,
* air density ρ constant, dry air, or humid air (for SIMRA-based models only),
* calculation from the power curve or through the power coefficient from the turbine manufacturer and equation <ref>,
* linear or cubic interpolation of the power curve or power coefficient,
* with or without imposing an upper limit on power output according to turbine specifications.
§.§ Data-Driven Predictions
Purely data-driven predictions using dense neural networks (DNN) and long-short-term-memory (LSTM) neural networks are implemented for measurement-based time series prediction and compared with the results from the PBMs.
In the DDMs, two years of data are used to train the neural networks (NNs), where 10% are split off for validation. The NNs are trained for one-step-ahead prediction of the power production and are evaluated iteratively on their own output to obtain a forecast with the full 61 h forecasting horizon. Therefore, the NN output is of size 1. The architecture of the NNs is kept simple, with three layers of 5, 3, and 1 units, respectively. The input lag is chosen to be 4 h for the DNN based on the partial autocorrelation. In contrast, the cells of the LSTM keep information from previous evaluations in memory. Therefore, only one input is given at a time, but the NN is evaluated on a sequence of previous data points. The NNs are trained with the Adam optimizer with a default learning rate, a batch size of 64, and the mean squared error as the loss metric. Validation-loss-based early stopping is used to avoid overfitting.
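For illustration, a minimal Keras-style sketch of the DNN variant and the iterative multi-step evaluation is given below. The hyperparameters follow the description above (5-3-1 units, 4 h input lag, Adam, batch size 64, MSE loss, early stopping); the activation functions, epoch count, and data scaling are assumptions, since they are not specified in the text.

```python
import numpy as np
import tensorflow as tf

def build_dnn(lag=4):
    """Dense network for one-step-ahead power prediction (5, 3 and 1 units)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(lag,)),
        tf.keras.layers.Dense(5, activation="relu"),
        tf.keras.layers.Dense(3, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def train(model, X, y):
    """Fit with a 10% validation split and validation-loss-based early stopping."""
    stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                            restore_best_weights=True)
    model.fit(X, y, batch_size=64, epochs=200, validation_split=0.1,
              callbacks=[stop], verbose=0)

def iterative_forecast(model, history, horizon=61, lag=4):
    """Obtain a multi-hour forecast by feeding one-step predictions back as inputs."""
    window = list(history[-lag:])
    preds = []
    for _ in range(horizon):
        x = np.asarray(window[-lag:], dtype="float32").reshape(1, lag)
        y_hat = float(model.predict(x, verbose=0)[0, 0])
        preds.append(y_hat)
        window.append(y_hat)
    return preds
```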
Since the partial autocorrelation suggests that the last measured hour has by far the most substantial contribution to the short-term prediction, the NNs are compared to the persistence model, which always predicts the last measured value.
§.§ Results
The PBMs and DDMs are compared against the measured power production for every single turbine and for the whole farm production by using the normalized root mean squared error (NRMSE) for 3 years of available data. The best-performing model in each category is determined. The NRMSE across turbines is shown in figure <ref>, and the NRMSE on farm-level prediction in figure <ref>. Note that the farm-level predictions are more accurate as prediction errors of different turbines can cancel each other. The DDMs perform best for small forecast horizons, but their accuracy decays quickly. For one-hour-ahead forecasts, the DNN performs marginally better than the LSTM and persistence model with <0.2% NRMSE.
The SIMRA models outperform the DNN after 2 h on the farm-level and after 5 h on the turbine level. All predictions using the SIMRA model as input achieve similar accuracy, but a dynamic air density does improve the forecast by 0.4% NRMSE. The improvement from dry to humid air density and differences between the interpolation method, the cap on maximum power production, and the difference between using the calculated power curve or power coefficient as input are much smaller with <0.1% NRMSE.
The best-performing SIMRA-based model uses the turbine's power curve with cubic interpolation, a limit on the maximum power production as the rated power, and a correction for air density that accounts for humid air.
The micro-scale SIMRA model gives significantly better results than the MEPS model. Here the SIMRA model was only available up to 18 h ahead, but it is expected that the SIMRA-based models will keep outperforming the MEPS-based models also for longer forecasting horizons as the decay of accuracy with increasing prediction horizon is slow.
Differences within the MEPS models are small (<0.12% NRMSE). Like the SIMRA-based models, the best-performing MEPS-based model uses the power curve directly with cubic interpolation.
The different models can be combined in a simple hybrid analysis and modeling (HAM) approach for optimal wind farm power prediction on all forecasting horizons by using the DNN for 1 h to 2 h ahead predictions, the SIMRA-based model for 3 h to 18 h ahead predictions, and the MEPS-based model for 19 h to 61 h ahead predictions.
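A minimal sketch of this horizon-dependent model selection is shown below; the three input forecasts are assumed to be hourly lists indexed from the first lead hour, and the thresholds follow the NRMSE comparison above.

```python
def hybrid_power_forecast(horizon_hours, dnn_forecast, simra_forecast, meps_forecast):
    """Pick the best model per lead time: DNN for 1-2 h, SIMRA-based PBM for
    3-18 h, MEPS-based PBM for 19-61 h."""
    forecast = []
    for h in range(1, horizon_hours + 1):
        if h <= 2:
            forecast.append(dnn_forecast[h - 1])
        elif h <= 18:
            forecast.append(simra_forecast[h - 1])
        else:
            forecast.append(meps_forecast[h - 1])
    return forecast
```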
Deriving the farm-level power forecast from the turbine level forecasts makes it possible to assess the impact of each turbine on the farm power production separately inside the virtual-reality-enabled interface and visually trace reasons for fluctuations between turbines back to wind speed, direction, and turbulence, as well as to terrain geometry and surface roughness. Therefore, the digital twin can be used to explain the farm-level power forecast.
§ DISCUSSION AND FUTURE WORK
In this work, a functional digital twin of a wind farm was presented with standalone, descriptive, and predictive capabilities.
There is much room for further research and demonstration on all capability levels. Hence, this section discusses potential improvements and future work.
§.§ Standalone
The modular nature of the implementation allowed upscaling the standalone digital twin from the turbine level presented in <cit.> to the farm-level without any difficulties, but a terrain had to be added for the onshore wind farm. In this work, the aerial images were directly mapped onto the terrain, resulting in 81 big textures with 1 m resolution and ca. 1 km^2 area. This did not compromise the real-time execution of the digital twin, but it may be advantageous to use one smaller, high-resolution texture per land cover class (grass/moss/rock/forest/etc.) instead. In future work, their placement can be automated by compiling the color channels of the aerial images into arrays with texture information. The software could be extended to include automated placing of forestation and houses. This way, the detail in the visual component of the digital twin can be increased, and the project size can be reduced.
§.§ Descriptive
The upgrade to the farm-level required an additional level in the data hierarchy. A new visualization method was included in the digital twin to ease the assessment of data such as power production across the whole wind park.
The different data formats required a new interface between the raw data and the digital twin. Standardization will play an important role in the commercialization of digital twins, and more efforts are needed to establish common standards throughout the whole wind energy industry. Both wind turbine operators and original equipment manufacturers are already collaborating on standardization efforts <cit.>.
§.§ Predictive
For predictions in this work, three models have been combined in a pipeline where the best model is used depending on the time to be predicted ahead.
More sophisticated HAM approaches could potentially improve the predictions further by combining information from measured data and numerical weather models. In the broader context of digital twins for wind energy, PBMs, DDMs, and HAMs have been discussed in <cit.>.
In the context of wind power predictions, an example of a HAM approach includes a data-driven regression from mesoscale weather forecasts to power production, as has been evaluated for flat and open terrain in <cit.>. The microscale weather model could be replaced by a resolution-enhancing generative adversarial network, as has been attempted in <cit.>, but the results were criticized in <cit.>.
Finally, ensemble methods with secondary models for combining DDM and PBM outputs may extract additional value from measurements and numerical models. This approach will be investigated in future work with a more thorough investigation of different DDMs.
§.§ Diagnostic, Prescriptive, and Autonomous
In addition to the standalone, descriptive, and predictive capabilities explored here, diagnostic modules could be evaluated on the measured data for condition monitoring and component wear tracking including weather effects and turbine load. Prescriptive modules could include weather- and power-aware maintenance scheduling. Finally, the turbine state could be used for autonomous farm optimization and control to balance power production and turbine wear <cit.>.
§ CONCLUSION
In this work, a digital twin of an onshore wind farm in complex terrain with standalone, descriptive, and predictive capabilities was built.
The standalone digital twin was implemented into a game engine by creating a 3D CAD model of the turbines and combining it with height maps and aerial images of the surrounding terrain. It includes a human-machine interface capable of interaction through virtual reality and simultaneously contains meta-data about the wind turbines, the farm layout, and the environment. The digital twin was elevated to the descriptive level by including SCADA data measured at each of the wind turbines. Some data were visualized directly on the 3D CAD model, while other information was shown with animated gauges and text. Predictive capabilities were implemented to forecast the power production of each turbine in the wind farm and the results were visualized in the interface. The predictions were performed by combining existing weather forecasts and physics-based models. They were made intuitively understandable by showing wind as vector fields and trails on the terrain. Finally, the results were discussed, and an outlook on future work was given. On top of the continuation of current research, the digital twin can be extended with additional modules to cover more aspects and evolve throughout the whole life-cycle of a wind farm.
Such full-fledged digital twins will have the potential to substantially contribute towards cheaper and more sustainable wind energy for a greener future.
This publication has been prepared as part of NorthWind (Norwegian Research Centre on Wind Energy) co-financed by the Research Council of Norway (project code 321954), industry, and research partners. Read more at <cit.>.
Data for the project were provided by Aneo. Mock data were used for the creation of figure <ref> and figure <ref> to comply with confidentiality agreements.
§ REFERENCES
|
http://arxiv.org/abs/2307.02729v1
|
20230706022831
|
Text Alignment Is An Efficient Unified Model for Massive NLP Tasks
|
[
"Yuheng Zha",
"Yichi Yang",
"Ruichen Li",
"Zhiting Hu"
] |
cs.CL
|
[
"cs.CL"
] |
Large language models (LLMs), typically designed as a function of next-word prediction, have excelled across extensive NLP tasks. Despite the generality, next-word prediction is often not an efficient formulation for many of the tasks, demanding an extreme scale of model parameters (10s or 100s of billions) and sometimes yielding suboptimal performance. In practice, it is often desirable to build more efficient models—despite being less versatile, they still apply to a substantial subset of problems, delivering on par or even superior performance with much smaller model sizes.
In this paper, we propose text alignment as an efficient unified model for a wide range of crucial tasks involving text entailment, similarity, question answering (and answerability), factual consistency, and so forth. Given a pair of texts, the model measures the degree of alignment between their information. We instantiate an alignment model through lightweight finetuning of RoBERTa (355M parameters) using 5.9M examples from 28 datasets. Despite its compact size, extensive experiments show the model's efficiency and strong performance:
(1) On over 20 datasets of aforementioned diverse tasks, the model matches or surpasses FLAN-T5 models that have around 2x or 10x more parameters; the single unified model also outperforms task-specific models finetuned on individual datasets; (2) When applied to evaluate factual consistency of language generation on 23 datasets, our model improves over various baselines, including the much larger GPT-3.5 (ChatGPT) and sometimes even GPT-4; (3) The lightweight model can also serve as an add-on component for LLMs such as GPT-3.5 in question answering tasks, improving the average exact match (EM) score by 17.94 and F1 score by 15.05 through identifying unanswerable questions.[Code will be available at <https://github.com/yuh-zha/Align>]
§ INTRODUCTION
Recent large language models (LLMs) have demonstrated exceptional generalizability in a wide range of natural language processing (NLP) tasks.
As the underlying formulation of these LLMs, next-word prediction is proven to be a general function applicable to diverse language problems. However, it is often not an efficient solution for many tasks. LLMs often need to scale up to over tens of billions of parameters to achieve meaningful performance <cit.>, with popular models like GPT-3 boasting as many as 175B parameters <cit.>.
Additionally, even with their extreme scale, LLMs sometimes still find themselves outperformed by smaller models. For example, ChatGPT/GPT-3.5 falls behind existing finetuned baselines on most classical natural language understanding tasks <cit.>.
As a result, in many cases it is desirable to navigate the spectrum of generality-vs-efficiency tradeoff, for example, by developing smaller but general-purpose models that excel in a substantial subset of tasks.
Despite being less versatile than the extreme-scale LLMs, these models are more efficient and provide superior performance, making them more usable on the set of tasks that they are designed to handle.
Previous work has attempted to build natural language inference (NLI) models as an efficient solution for broad tasks <cit.>. But with limited NLI data (e.g., MNLI <cit.>) for training, the models exhibit limited performance and applicability across diverse domains. Another related line of research trains general text representation models with pretraining and multi-task learning <cit.>. Those models need to be specifically finetuned (with task-specific head) for each downstream task, instead of functioning as ready-to-use solutions.
In this paper, we investigate the underlying commonalities among a broad range of NLP tasks that concern the relationship between two texts, and propose a text alignment model as an efficient unified solution, following <cit.>.
Given an arbitrary pair of texts, the model measures the degree of alignment between the content in the texts.
We show the formulation subsumes a substantial set of popular tasks, ranging from NLI, fact verification, semantic textual similarity, question answering, coreference resolution, paraphrase detection, to factual consistency evaluation of diverse language generation tasks, question answerability verification, and so forth (Figure <ref>).
The generality, in turn, presents an opportunity for us to use diverse data to learn the alignment model.
Specifically, we adapt and aggregate 28 datasets from the above tasks, resulting in 5.9M training examples with diverse characteristics. We then use the data to finetune a small-scale LM (e.g., RoBERTa <cit.>), yielding a model that directly applies to and excels in diverse problems and domains.
We evaluate the model with extensive experiments. First, we test it on 25 seen and unseen datasets of the aforementioned tasks, and show our alignment model based on RoBERTa (355M parameters) achieves on par or even better performance than the FLAN-T5 models (780M and 3B) that are 2x or 8.5x as large and trained with substantially more data. In addition, the single alignment model outperforms RoBERTa specifically finetuned on each of the datasets. Second, we use the model to evaluate factual consistency of natural language generation systems (e.g., for summarization, dialog, paraphrasing, etc.). On 23 datasets, the small alignment model achieves substantially higher correlation with human judgements than recent metrics, including those based on GPT-3.5 and even GPT-4. Third, we use the model as a question answerability verifier and incorporate it as an add-on component to existing LLMs (e.g., GPT-3.5 and FLAN-T5). It significantly enhances the LLMs' performance on three question answering datasets, improving the average exact match score by 17.94 and F1 score by 15.05.
§ RELATED WORK
Recent work has shown that LLMs are few-shot learners capable of generalizing to diverse tasks <cit.>. These LLMs are designed based on the principle of next-word-prediction, where the joint probability distribution of text sequences are factored into a product of conditional probabilities <cit.>. The performance of LLMs is highly correlated with their scales. <cit.> show that a scale of more than 10^22 training FLOPs (around 10B model parameters) is required for several different LLM designs to achieve above-random performance on many language tasks.
Another line of research has tried to design models that can either learn from multiple tasks or handle many downstream tasks. <cit.> propose to use a BERT model <cit.> with task-specific heads to learn four types of tasks, including single-sentence classification, pairwise text classification, text similarity scoring, and relevance ranking. <cit.> pre-finetune language models on 50 datasets to encourage learning more general representations, and show that the process improves model performance and data-efficiency in the finetuning stage. While we also use diverse datasets to train our model, we 1) unify language tasks into a single text pair alignment problem, 2) share all model components across multiple tasks and do not use dataset-specific heads, and 3) our model can be directly applied to a wide range of tasks without additional finetuning.
§ TEXT ALIGNMENT MODEL
In this section, we introduce the text pair alignment formulation. We first formally define the concept of text pair alignment, and then discuss how the alignment function can be used to solve a set of popular language tasks. Additionally, we cover the split-then-aggregate method used to handle long inputs. In Section <ref>, we discuss the training process of the alignment model.
Given a text pair (x_1, x_2),
we define text x_2 to be aligned with text x_1 if all information in x_2 is supported by information in x_1, following <cit.>. For example, a sentence that merely restates or summarizes part of x_1 is aligned with x_1, whereas a sentence that contradicts x_1, or one that states something that cannot be inferred from x_1, is not aligned with it. Formally, we model alignment as a function that maps the text pair (x_1, x_2) to a label y describing the level of alignment:
f:(x_1, x_2) → y .
In practice, the language tasks we wish to solve with the alignment function can be broadly categorized into two groups: one that uses discrete labels, and the other that uses continuous labels (e.g., semantic textual similarity). More specifically, tasks with discrete labels are typically formulated as either binary classification (e.g., paraphrase detection) or three way classification (e.g., fact verification).
In order to make the alignment function more general, such that it accommodates all the above cases, our alignment model produces three outputs: (y_bin), (y_3way), and y_reg. Here, (y_bin) and (y_3way) are probability distributions
over the binary (aligned, not-aligned) and 3-way (aligned, contradict, neutral) classification labels, respectively; y_reg∈ [0,1] is a real-valued score for regression tasks.
This formulation allows us to apply the alignment function to diverse tasks:
* For tasks that naturally fit into the text pair alignment format, such as NLI <cit.>, fact verification <cit.>, paraphrase detection <cit.>, and semantic textual similarity <cit.>,
depending on the nature of the task, we simply map one of the alignment labels to the desired label. For example, for most NLI tasks, we interpret the corresponding y_3way labels as "entailment","contradiction", and "neutral".
* In information retrieval tasks <cit.>, the goal is to find documents that can answer a given query from a large set of candidate documents.
Since relevant documents contain information related to respective queries, we use candidate documents as text x_1, and queries as text x_2. Then, a higher (y_bin=aligned) indicates the candidate document is more likely to be useful in answering the query.
* In multiple choice QA tasks <cit.>, the inputs are a context, a question, and several choices (with one of them being the correct answer). In extractive QA tasks (including ones with unanswerable questions <cit.>), the inputs only consist of a context and a question.
In either case, the expected output (the correct answer) can be inferred from the question and the context, while a wrong answer either contradicts the context or is not supported by the context. Therefore, we use the context as text x_1 and the concatenation of the question and a candidate answer as text x_2. Here, a higher (y_bin=aligned)
indicates the candidate answer is more likely to be correct.
* In coreference resolution tasks <cit.>, each sample includes a context containing a pronoun, and a list of candidate entities. The goal is to find the correct entity that the pronoun is referring to. As the pronoun and the associated entity is equivalent in this context, we consider the context with the pronoun replaced with the correct entity to be aligned with the original context. To solve coreference resolution problems, we simply replace the pronoun with candidate entities and compare the resulting contexts (x_2) with the original context (x_1). We pick the candidate that produces the highest (y_3way=aligned) or (y_bin=aligned) as the correct answer.
* For generative tasks like machine summarization, dialog, and paraphrasing, the alignment function can be used to evaluate the factual consistency of generated outputs. We use the generation context (e.g., input document) as text x_1, and candidate system output (e.g., generated summary) as text x_2. In this case, the probability of (y_3way=aligned) or (y_bin=aligned) indicates if the candidate output faithfully reflects information in the context, without introducing hallucinations or contradictions.
One specific challenge of applying the alignment function to downstream tasks is that text x_1 in some datasets (e.g., contexts in QA or summarization datasets) tends to be significantly longer than the input length limit of typical language models (e.g., 512 tokens for RoBERTa). As a result, naively truncating oversized inputs could throw away important information. To alleviate this problem, inspired by <cit.>,
at inference time, instead of truncating the inputs, we split text x_1 into a set of chunks {x_1^(i)} and text x_2 into a set of sentences {x_2^(j)} such that the combined length of a chunk-sentence pair is slightly below that length limit.
Then, we evaluate each pair and aggregate the results as
f(x_1, x_2) = mean_j max_i f(x_1^(i), x_2^(j)) ,
where the max operation selects the output with the highest aligned probability or regression score.
Since in most downstream applications, text x_2 tends to be succinct (e.g., summaries) and consists of self-contained sentences, this aggregation method can be interpreted as first finding the text x_1 chunk that most strongly supports each text x_2 "fact", and then taking the average across all text x_2 "facts".
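This chunk-sentence aggregation can be sketched as follows; `align_fn` is a placeholder for a single forward pass of the alignment model returning the aligned probability (or regression score) for one text pair, and the chunking and sentence splitting are assumed to have been done beforehand.

```python
def aligned_score(align_fn, chunks, sentences):
    """Long-input aggregation: for each sentence of text x2, take the maximum
    alignment score over all chunks of text x1, then average over sentences.

    align_fn(chunk, sentence) -> probability of 'aligned' (or a regression score)
    """
    per_sentence = [max(align_fn(c, s) for c in chunks) for s in sentences]
    return sum(per_sentence) / len(per_sentence)
```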
§.§ Training
Our formulation not only allows us to solve the above tasks with a single alignment function, but also learn the alignment function from these tasks.
By adapting text pair understanding tasks into a uniform alignment format as above, we can naturally model these tasks
as simple classification and regression, allowing us to train a small model while achieving strong performance.
Specifically, we use RoBERTa <cit.> as a lightweight backbone language model, and attach three individual linear layers to predict the three types of alignment outputs, (y_bin), (y_3way), and y_reg, respectively. The two classification heads are trained with cross entropy loss, while the regression head is trained with mean squared error loss. The losses are aggregated as a weighted sum,
ℒ_total = λ_1ℒ_3way + λ_2ℒ_bin + λ_3ℒ_reg ,
where we set λ_1=1/log3, λ_2=1/log2, and λ_3=1, following <cit.>.
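For illustration, a simplified PyTorch sketch of the three-headed model and the weighted loss is given below. It uses the RoBERTa representation at the first token position and the loss weights stated above; details such as pooling, label encoding, and the fact that each training example typically carries only one of the three label types (so only the corresponding loss term would be active for it) are simplified here.

```python
import math
import torch
import torch.nn as nn
from transformers import AutoModel

class AlignmentModel(nn.Module):
    """RoBERTa backbone with three heads: 3-way and binary classification
    plus a regression head (a sketch of the architecture described above)."""
    def __init__(self, backbone="roberta-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.head_3way = nn.Linear(hidden, 3)
        self.head_bin = nn.Linear(hidden, 2)
        self.head_reg = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Use the first-token representation of the encoded text pair
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.head_3way(h), self.head_bin(h), torch.sigmoid(self.head_reg(h)).squeeze(-1)

def total_loss(logits_3way, logits_bin, y_reg_pred, y_3way, y_bin, y_reg):
    """Weighted sum of the three losses with lambda_1=1/ln3, lambda_2=1/ln2, lambda_3=1."""
    ce = nn.functional.cross_entropy
    mse = nn.functional.mse_loss
    return (ce(logits_3way, y_3way) / math.log(3)
            + ce(logits_bin, y_bin) / math.log(2)
            + mse(y_reg_pred, y_reg))
```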
Besides the aforementioned downstream datasets, we also include synthetic data to increase the diversity of the training set. Specifically, for QA datasets without wrong options (e.g., extractive QA datasets like SQuAD v2 <cit.>), we first remove the ground truth answer from the context, and then use a QA model <cit.> to generate wrong answers that can be used to create not-aligned samples. Additionally, we create synthetic paraphrase samples by back translating the WikiText-103 corpus <cit.> using a neural machine translation model <cit.>. For the WikiHow summarization dataset, we use an extractive summarizer <cit.> to generate synthetic summaries in addition to ground truth summaries. Following <cit.>, we create negative samples for both WikiText-103 and WikiHow samples by randomly masking 25% of the tokens in text x_2 and infilling with a small masked language modeling model <cit.>. In total, we collect 5.9M examples from 28 datasets to train our alignment model.
§ EXPERIMENTS
In this section, we experiment with applying to multiple downstream tasks, including language pair understanding tasks (Section <ref>), factual consistency evaluation (Section <ref>), and question answering with unanswerable questions (Section <ref>).
§.§ Natural Language Understanding Tasks
Natural Language Understanding (NLU) is a major category of tasks for language models, and our formulation allows us to directly use the alignment model to solve these tasks. Specifically, we include NLI, fact verification,
paraphrase detection, multiple-choice QA, STS, and coreference resolution datasets in the experiments. We also include unseen datasets to demonstrate the generalizability of the model. Experiments show the alignment model is on par with FLAN-T5 models that have 8.5x as many parameters. Additionally, without further task-specific finetuning, our model outperforms finetuned language models of a similar size.
§.§.§ Experiment Setup
Datasets We first evaluate the model on test sets of the 20 datasets used during training (in-domain setting; see Table <ref>).
Then, we use 9 unseen datasets for evaluation (zero-shot setting; see Table <ref>).
For more details about the datasets, please refer to appendix <ref>. If a dataset does not have a public test set, we use its validation set instead. For datasets that require binary, 3-way classification or regression, we use the associated output heads, respectively, as discussed in Section <ref>.
Baselines
To demonstrate the efficiency of the alignment model, we compare it with FLAN-T5 <cit.> and FLAN-Alpaca[<https://github.com/declare-lab/flan-alpaca>] with model sizes ranging from 220M (FLAN-Alpaca-base) to 3B (FLAN-Alpaca-xlarge and FLAN-T5-xlarge). For both models, we use the same prompts as <cit.>. We also include RoBERTa finetuned on individual datasets as a baseline and show that our alignment model works well out-of-the-box, without further finetuning. Lastly, in the zero-shot setting, we also compare with a RoBERTa model finetuned on the MNLI, ANLI and SNLI datasets (RoBERTa-NLI) to show the generalizability of our proposed formulation.
§.§.§ Results
We report average Pearson Correlation coefficient for the STS tasks <cit.>, and average accuracy for the other tasks.
As shown in Table <ref>, the alignment model outperforms instruction-finetuned FLAN-T5 that is 2x as large (780M) and has comparable performance to the version 8.5x as large (3B). In the zero-shot setting (see Table <ref>), it achieves comparable performance with similarly sized variants of FLAN-T5, even on datasets that exist in FLAN-T5's training set. We report results on FLAN-Alpaca in Appendix <ref>.
Furthermore, the alignment model is on par with RoBERTa finetuned on individual datasets (Figure <ref>). In the zero-shot setting, it has stronger performance on average than the RoBERTa-NLI model (Figure <ref>), indicating that our formulation leads to better generalizability.
§.§ Factual Consistency Evaluation for Language Generation
Studies have shown that natural language generation (NLG) systems are prone to generating text that is not consistent with the source material <cit.>. As a result, many automatic metrics have been developed with the goal of detecting factual consistency errors. As factual consistency is closely related to our definition of text pair alignment, we can directly apply the alignment model for this purpose, using the NLG input context as x_1, and system outputs as x_2. We consider a system output with higher (y_3way=aligned) to be more factually consistent.
§.§.§ Experiment Setup
Dataset
Following <cit.>, we use two popular factual consistency evaluation benchmarks, TRUE (containing 11 datasets, including dialog, fact verification, and paraphrase detection) <cit.> and SummaC (consisting of 6 summarization datasets) <cit.>. We also include other popular meta-evaluation datasets, namely XSumFaith <cit.>, SummEval <cit.>, QAGS-XSum <cit.>, QAGS-CNNDM <cit.>, FRANK <cit.> and SamSum <cit.>. This results in 23 datasets in total for our study.
Baselines
We compare the alignment model with the latest LLM-based automatic metrics: GPTScore <cit.>, G-EVAL <cit.> and a ChatGPT-based metric <cit.>. These metrics achieve the best performance when using the GPT family of LLMs, which are significantly larger than our alignment model (e.g., GPT-3 has 175B parameters).
GPTScore evaluates texts based on the probability of a LLM generating the target text, while G-EVAL augments its prompt using chain-of-thoughts techniques and asks the LLM to score the input by form-filling. <cit.> design a prompt that asks ChatGPT to score the faithfulness of the summary on a five point scale. Additionally, we include strong, smaller-scale (similar with our alignment model) baselines, including BERTScore <cit.>, BLEURT <cit.>, BARTScore <cit.>, CTC <cit.>, UniEval <cit.> and QAFactEval <cit.>, following <cit.>.
Metrics
Both the TRUE and SummaC benchmarks formulate factual consistency evaluation as binary classification (i.e., identifying factual consistency errors). Following common practice, we report ROC AUC <cit.>, treating each model as a classifier. For the rest of the datasets, we report instance-level Pearson, Spearman, and Kendall-τ correlation coefficients between automatic metric scores and human-annotated scores.
§.§.§ Results
For the LLM-based metrics, we use the results reported by <cit.>, so results for some model-dataset combinations are unavailable. Despite being much smaller than ChatGPT/GPT-3.5 or GPT-4, our alignment model achieves comparable performance on SummEval (see Table <ref>). When evaluated on the QAGS-XSum and QAGS-CNNDM datasets, even our 125M alignment model outperforms both G-EVAL and GPTScore based on GPT-3.5, while the 355M alignment model beats G-EVAL based on GPT-4.
When compared with similarly sized metrics, our method consistently outperforms the strong baselines across the factual consistency benchmarks and datasets (see Figure <ref>). We include detailed results in Appendix <ref>.
§.§ Question Answering with Unanswerable Questions
In question answering tasks, a system must find the correct answer to a question from a context. When the question cannot be answered with information in the context, the system must indicate the question is not answerable. Despite being a well-studied task, predicting whether a question is answerable remains challenging, especially in a zero-shot setting.
A common approach to improve a system's ability to handle unanswerable questions is to introduce a verifier model in addition to the QA model <cit.>. Given a context and question pair, the QA model first predicts a candidate answer. Then, the verifier model independently predicts whether the question is answerable by comparing the candidate answer to the question and the context. Lastly, the outputs of the two models are aggregated to form the final prediction. In our experiments, we use the alignment model as the verifier.
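To make the aggregation concrete, the following is a minimal Python sketch of the verifier add-on, not the authors' implementation: qa_model, alignment_model, and f1_fn are hypothetical callables standing in for the actual systems, and the threshold search mirrors the tuning on a validation split described in the experiment setup below.

# Illustrative sketch only (not the authors' code): a QA model plus an
# alignment-based verifier for unanswerable-question detection.
def answer_with_verifier(question, context, qa_model, alignment_model, threshold=0.5):
    """Return the QA model's answer, or None if the verifier deems it unanswerable."""
    candidate = qa_model(question=question, context=context)        # candidate answer string
    # the verifier compares the candidate answer (plus question) against the context
    p_aligned = alignment_model(text_a=context, text_b=question + " " + str(candidate))
    if 1.0 - p_aligned > threshold:                                  # unanswerable probability
        return None
    return candidate

def tune_threshold(dev_examples, qa_model, alignment_model, f1_fn):
    """Grid-search the unanswerable threshold that maximizes F1 on a dev split."""
    best_t, best_f1 = 0.5, -1.0
    for t in [i / 100 for i in range(1, 100)]:
        preds = [answer_with_verifier(ex["question"], ex["context"],
                                      qa_model, alignment_model, threshold=t)
                 for ex in dev_examples]
        f1 = f1_fn(preds, [ex["answer"] for ex in dev_examples])
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t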
§.§.§ Experiment Setup
Datasets
We experiment with two existing QA datasets with unanswerable questions, SQuAD v2 <cit.> and ACE-whQA <cit.>. Additionally, we construct a third dataset, Simplified Natural Questions (Simplified NQ), based on Natural Questions <cit.>. To build the dataset, for samples in Natural Questions with both short and long answers, we use the long answer as the context and the short answer as the ground-truth answer; for samples without short or long answers, we select random paragraphs from the articles as contexts and consider them to be unanswerable. Neither ACE-whQA nor Simplified NQ is seen by the alignment model during training (i.e., this is a zero-shot experiment). We use the validation splits of SQuAD v2 and Simplified NQ, as their test splits are not publicly available.
Baselines
We include FLAN T5 <cit.> and GPT-3.5[see <https://platform.openai.com/docs/models/gpt-3-5>]
to represent large sequence-to-sequence language models. In addition, we experiment with using the alignment model as a verifier add-on for FLAN T5 and GPT-3.5. Here, we use one minus the probability of y_bin=aligned as the unanswerable probability, and we use the SQuAD v2 validation split to find the unanswerable threshold that maximizes the F1 score. The prompts we use and other experiment details are discussed in the appendix (Section <ref>).
Metrics
We follow <cit.> and report exact match and macro-averaged F1 scores. To evaluate each model's ability to identify unanswerable questions, we also formulate the problem as a binary classification task (predicting whether the sample is answerable) and report the ROC AUC <cit.>. A higher ROC AUC indicates that the model is better at identifying unanswerable questions. For GPT-3.5 and FLAN T5, we take the unanswerable classifier output to be 0 if the model predicts an answer, and 1 otherwise.
§.§.§ Results
As shown in Table <ref>, using the alignment model as a verifier add-on significantly improves GPT-3.5 and FLAN T5 in most cases (increasing the exact match score by 17.94 on average and the F1 score by 15.05), suggesting that it is effective at identifying unanswerable questions. For the Simplified NQ dataset, adding the alignment verifier to GPT-3.5 degrades exact match and F1 scores but improves AUC. This indicates that while the verifier produces meaningful unanswerable probabilities on Simplified NQ, the threshold found on the SQuAD v2 validation split is not ideal for this dataset. Repeating the experiment with the best threshold selected on the Simplified NQ validation split (see the numbers in parentheses in Table <ref>) shows the potential for further improvements in exact match and F1 scores, although this can no longer be considered a zero-shot setting.
§.§ Ablation Study
As discussed in Section <ref>, the alignment model is trained on datasets from a wide set of language understanding tasks. To understand their contributions to the performance of the alignment model, we conduct an ablation study by incrementally adding subsets of tasks to the training set. Specifically, we start with only the NLI datasets, and then add the remaining tasks in the following order: 1) paraphrase detection (para) and fact verification (FV) datasets; 2) coreference resolution (coref), summarization (sum), information retrieval (IR), and STS datasets; and lastly 3) QA datasets. For simplicity, we use RoBERTa-base as the backbone in this experiment. As shown in Table <ref>, each added subset improves the overall performance of the alignment model, suggesting that our training tasks are compatible and all contribute to the model's performance.
§ CONCLUSION
We propose to unify diverse language tasks into a single text pair alignment problem. This framework yields an alignment model that, despite being less versatile than LLMs, solves a wide range of language problems efficiently and with superior performance. We show that our alignment model outperforms task-specific models finetuned on several NLU tasks while achieving performance comparable to LLMs that are orders of magnitude larger. Additionally, it excels at factual consistency evaluation and can be used as an add-on to augment LLMs in QA tasks by identifying unanswerable questions.
Limitations
Our alignment framework uses splitting and aggregation to handle long inputs (see Section <ref>), under the assumption that text x_2 is short and its sentences are self-contained. While we show empirically that this method works well on diverse datasets, violating this assumption has a few implications. First, if the sentences of x_2 are highly interrelated, splitting them discards document-level semantic information, which could degrade performance. Second, since we need to evaluate every sentence of x_2 individually, inference is slow when x_2 is long.
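For concreteness, the snippet below sketches one generic way to implement such splitting and aggregation in Python; the naive sentence splitter, the chunk size, and the max/mean aggregation are assumptions made purely for illustration and are not necessarily the exact procedure used by our model.

# Illustrative sketch only: score a long pair (x1, x2) by splitting x2 into
# sentences and aggregating per-sentence alignment scores over chunks of x1.
def split_sentences(text):
    return [s.strip() for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]

def chunk(text, max_words=300):
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def long_pair_score(x1, x2, score_fn):
    """score_fn(context_chunk, sentence) -> alignment probability in [0, 1]."""
    sent_scores = []
    for sent in split_sentences(x2):
        # an x2 sentence counts as supported if some chunk of x1 supports it
        sent_scores.append(max(score_fn(c, sent) for c in chunk(x1)))
    # the pair score is the average over x2 sentences
    return sum(sent_scores) / len(sent_scores) if sent_scores else 0.0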
We use a wide collection of NLU datasets to learn the alignment function, under the assumption that these datasets, after being adapted into the text pair alignment format, accurately reflect our definition of alignment. However, as with all datasets, they may contain biases that are subsequently learned by our alignment model. Additionally, we augment the training set with synthetic data. While this improves performance in our experiments, synthetic data likely do not perfectly model real-world data distributions.
Appendix
§ ETHICS STATEMENT
While our text pair alignment model achieves state-of-the-art performance on many downstream tasks, like all models, it makes mistakes. For example, when used for fact verification or factual consistency evaluation, it could misidentify factually correct statements as incorrect and vice versa. Additionally, as we use publicly available datasets to train the alignment model, it might have learned biases inherent in those datasets. Thus, one should proceed with caution when using the model for purposes other than NLP research.
§ COMPARISON WITH OTHER MODEL TYPES
We illustrate the major differences between our approach, LLMs, multitask learning models, and task-specific finetuned models in Table <ref>. Compared with LLMs, our alignment function is more efficient but less versatile. In contrast to task-specific finetuned models, the alignment function is more general and can handle more types of tasks. Unlike multitask learning models, we unify language tasks into a single text pair alignment problem and share model components across all tasks (as opposed to using dataset-specific prediction heads). As a result, our alignment function can be directly applied to a wide range of tasks out-of-the-box, without any finetuning.
§ TRAINING DETAILS
§.§ Training Setup
We choose RoBERTa <cit.> as the backbone of the alignment model; the base and large variants are built on RoBERTa-base and RoBERTa-large, respectively. For the experiments in Section <ref>, we train for 3 epochs with a batch size of 32, following common practice <cit.>. Other hyperparameters are listed in Table <ref>. For the finetuned RoBERTa and RoBERTa-NLI models in Section <ref>, we set the batch size to 16 and 32, respectively.
§.§ Training Datasets
We collect datasets that fall into the scope of alignment as described in Section <ref>. Table <ref> lists the datasets we use for training the alignment model. The sizes of these datasets range from 4k to 5M samples. Most of the datasets are used for binary classification, except for some NLI, fact verification and STS datasets.
We only use the first 500k samples of each dataset due to limited computational resources, which results in 5.9M training samples in total. During training, samples are drawn randomly from the entire collection of adapted training sets.
§ ADDITIONAL EXPERIMENT DETAILS
§.§ Natural Language Understanding Tasks
Prompts For FLAN models, we use the same prompts as <cit.>. For datasets that do not appear in <cit.>, we use prompts of similar tasks. The prompt used for each dataset is listed below.
MNLI, NLI-FEVER, VitaminC:
ANLI:
SNLI:
SICK, STSB:
PAWS, PAWS-QQP:
QQP:
RACE, QuAIL, SciQ:
Multi-RC:
BoolQ:
GAP:
Results We provide additional results on the in-domain datasets. We show the performance of finetuned RoBERTa and FLAN-Alpaca on these datasets in Table <ref>; the comparison with finetuned RoBERTa is shown in Figure <ref>. When comparing FLAN-T5 and FLAN-Alpaca, we notice that FLAN-T5 consistently outperforms FLAN-Alpaca at all scales. Thus, we compare our alignment model with FLAN-T5 in Table <ref>.
§.§ Factual Consistency Evaluation for Language Generation
In this section, we report the detailed results associated with Figure <ref>. We list the performance of each metric on the SummaC and TRUE benchmarks in Table <ref> and Table <ref>, respectively. We also report the Pearson, Spearman and Kendall's tau correlation coefficients on the remaining datasets in Tables <ref>, <ref> and <ref>, respectively.
§.§ Question Answering with Unanswerable Questions
Simplified Natural Questions
For this experiment, we construct a new SQuAD-style QA dataset with unanswerable questions, Simplified Natural Questions (Simplified NQ), based on Natural Questions <cit.>. For each sample (a search query and a Wikipedia article) in Natural Questions, <cit.> ask five annotators to find 1) an HTML bounding box (e.g., a paragraph, table, list item, or whole list) containing enough information to infer the answer to the query (the long answer), and 2) a short span within the long answer that answers the question (the short answer). For both the short and long answers, the annotators can alternatively indicate that an answer could not be found.
For the purpose of constructing Simplified NQ, we consider a sample to be answerable if at least 2 annotators identified both long and short answers. In this case, we use the most popular (among the annotators) short answer as the ground-truth answer, and the most popular long answer containing the selected short answer as the context. Conversely, if fewer than 2 annotators identified any long answer and fewer than 2 annotators identified any short answer, we consider the sample to be unanswerable and use a random paragraph from the article as the context. We discard all remaining samples to avoid ambiguity (e.g., some samples might have only long answers but not short answers). This results in a total of 3336 answerable samples and 3222 unanswerable samples in the validation set.
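The answerability rules above can be summarized by the following Python sketch; the field names (annotations, short, long, article_paragraphs) are hypothetical stand-ins for the actual Natural Questions schema, and data loading and cleaning are omitted.

# Sketch of the Simplified NQ construction rules; not the exact preprocessing code.
import random
from collections import Counter

def build_simplified_nq_sample(sample):
    """Return (context, answer, answerable) or None if the sample is discarded."""
    both = [a for a in sample["annotations"] if a["short"] and a["long"]]
    has_long = [a for a in sample["annotations"] if a["long"]]
    has_short = [a for a in sample["annotations"] if a["short"]]

    if len(both) >= 2:                                      # answerable
        answer = Counter(a["short"] for a in both).most_common(1)[0][0]
        # prefer the most popular long answer that contains the chosen short answer
        longs = [a["long"] for a in has_long if answer in a["long"]] or [a["long"] for a in has_long]
        context = Counter(longs).most_common(1)[0][0]
        return context, answer, True
    if len(has_long) < 2 and len(has_short) < 2:            # unanswerable
        return random.choice(sample["article_paragraphs"]), None, False
    return None                                             # ambiguous, discard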
Prompts and QA Inference
For FLAN T5, we follow <cit.> and use the following prompt:
For GPT-3.5, we use a prompt with additional instructions:
At inference time, we truncate the context if necessary such that the entire input is at most around 2000 tokens long (2000 for FLAN T5, 2040 for GPT-3.5 to account for the longer prompt). We use greedy decoding for FLAN T5 and the default chat completion settings for GPT-3.5. When FLAN T5 outputs the designated unanswerable string, we interpret it as predicting that the sample is not answerable. Similarly, if GPT-3.5's output contains any of the designated unanswerable phrases, we consider the prediction to be unanswerable.
Additional Results
In addition to FLAN T5 and GPT-3.5, we also experiment with Electra <cit.>, one of the top-performing single models on the SQuAD v2 leaderboard, for reference. Specifically, we reproduce the design of <cit.>, which uses a QA prediction head to jointly predict the answer span and the unanswerable probability. As shown in Table <ref>, while Electra is a strong performer on SQuAD v2 and Simplified NQ, adding the alignment verifier to GPT-3.5 and FLAN T5 greatly reduces the performance gap. Additionally, on ACE-whQA, our design (both FLAN T5 and GPT-3.5 with alignment verifiers) outperforms Electra.
§.§ Ablation Study
We present additional ablation results on the factual consistency evaluation tasks in Table <ref>. This part follows Section <ref>, where we use the same checkpoints trained on incrementally added tasks. The results show that the training tasks are generally compatible and effective, though we notice that adding the fact verification and paraphrase detection tasks leads to a slight performance drop. We speculate that this is due to the paraphrase detection task, where a text pair is expected to convey exactly the same information. The base variant of our alignment model, which uses all of the available training data, achieves the best performance on every factual consistency evaluation task.
|
http://arxiv.org/abs/2307.00844v1
|
20230703083823
|
Rotationally symmetric momentum flow produced by scattering on an anisotropic random medium
|
[
"Yi Ding"
] |
physics.optics
|
[
"physics.optics"
] | |
http://arxiv.org/abs/2307.03336v2
|
20230707003840
|
DIG: The Data Interface Grammar
|
[
"Yiru Chen",
"Jeffery Tao",
"Eugene Wu"
] |
cs.DB
|
[
"cs.DB",
"cs.HC",
"H.2; H.5.2; H.2.3"
] |
Columbia University
New York
NY
USA
yiru.chen@columbia.edu
Columbia University
New York
NY
USA
jat2164@columbia.edu
Columbia University
New York
NY
USA
ewu@cs.columbia.edu
Building interactive data interfaces is hard because
the design of an interface depends on the data processing needs for the underlying analysis task,
yet we do not have a good representation for analysis tasks.
To fill this gap, this paper advocates for a Data Interface Grammar (DIG) as an intermediate representation of analysis tasks.
We show that DIG is compatible with existing data engineering practices, compact enough to represent any analysis,
simple to translate into an interface design,
and amenable to offline analysis.
We further illustrate the potential benefits of this abstraction, such as automatic interface generation,
automatic interface backend optimization, tutorial generation, and workload generation.
DIG: The Data Interface Grammar
Eugene Wu
August 1, 2023
§ INTRODUCTION
Interactive data interfaces are essential for data exploration and analysis <cit.>. However, designing a new interface is a multi-step process that includes determining the queries that are appropriate for the analysis task, as well as the parts of the queries that users should be able to change. Once the analysis has been determined, the interface designer can now choose the appropriate visualizations, interactions, and layouts to design the interface. These two steps are closely related and need to be kept in sync, yet require distinctly specialized skill sets: to write complex queries over complex data sources, and to design and implement a usable and effective interface. Further expertise in system optimization is needed to ensure that the resulting interface is responsive as the data grows.
Is there an intermediate abstraction of an interface's underlying data needs that can decouple these tasks, so that data practitioners can focus on expressing complex analysis tasks, visualization designers can focus on interface design, and backend engineers can focus on optimization?
Let us first examine the different ways a developer might build an interface today. <Ref> is a simplified subset of a drought insurance design tool used to protect rural farmers <cit.>. The simplest approach is to predefine all possible queries in the application ahead of time, and when the user interacts with the interface, we identify which query to execute. Although the queries can be optimized ahead of time, this requires enumerating a combinatorial number of possible queries (e.g., 1332 = 2 for the dropdown * 666 for the slider). Parameterized queries allow literals in the WHERE clause to be wildcards. This compactly expresses the slider interaction and can be optimized offline <cit.>, but cannot express arbitrary structure changes in the query (e.g., the dropdown).
In short, there exist tools to create and optimize very simple data interfaces where the interactions largely correspond to filters or where the user simply cannot express very much.
Beyond this, the developer must resort to constructing query strings in the application, which is highly flexible but not amenable to analysis.
What criteria should an intermediate abstraction satisfy? We believe that it should (C1) compactly represent any analysis task that a developer may wish to express, (C2) have a well-defined correspondence to interactive interfaces composed of charts, widgets, and interactions, and (C3) be amenable to offline analysis for e.g., optimization, interface synthesis.
It is easy to see that existing approaches do not satisfy these criteria. Predefining every query is neither compact nor expressive, parameterized queries are compact but only express simple query transformations, and programmatically constructing SQL is expressive but not analyzable. Other works on interactive data interface benchmarks <cit.> model the interface in order to generate query workloads but are limited to SPJA queries and cross-filter interactions. Business Intelligence (BI) and visualization tools (e.g., Tableau <cit.>, Power BI <cit.>, Vega-lite <cit.>) are primarily focused on data cube-like operations.
In this paper, we examine two observations.
First, the queries that an interface expresses can be compactly represented as a grammar. A grammar is a set of production rules that define a valid program; each production rule defines a set of choices that encode the allowable program variations. That grammar may be a single production rule that chooses from a small enumeration of predefined queries, the entire language (e.g., SQL), or a language subset specific to an analysis.
Second, the design of a data interface has a direct correspondence to the grammar: interactions make choices in the grammar, and when all choices in the grammar have been made, the grammar is equivalent to a syntactically valid query string that the database executes. In other words, interactions navigate the space of syntactically valid queries expressible by the interface.
To this end, we propose DIG, a Data Interface Grammar that extends Parsing Expression Grammars (PEG) with annotations specific to data programs.
For instance, the program for <Ref> is a query string (gray text) where nonterminals encode program variations: t chooses the relation name, and val chooses an integer between 1 and 36. t is annotated as a relation name, and the constraint s≤e must hold. In the interface, these choices are respectively bound by the dropdown and range slider.
DIG satisfies our desired criteria.
Since it extends a formal grammar, it can compactly express any set of queries useful for a task (C1), and it defines a direct correspondence to interactive interface designs (C2).
Finally, since a DIG encodes the entire space of possible programs, it is amenable to offline analysis; <Ref> outlines examples such as interface synthesis and physical optimization (C3).
In the rest of this paper, we first introduce the Data Interface Grammar (DIG) and illustrate its correspondence with interactive interfaces. We further comment on its connections with existing data pipeline and analysis representations (<Ref>).
We then describe how DIG simplifies interface creation via real-world examples (<Ref>), and finally highlight the benefits of the representation for solving a number of challenging data interface problems (<Ref>).
§ DATA INTERFACE GRAMMAR
A data interface helps the user navigate a space of useful data programs (e.g., SQL) through interactive controls. This section first presents , a Data Interface Grammar, to express this set of data programs in a simple, analyzable manner, and then defines the set of valid interfaces that express a given program. These definitions form the basis for useful applications like interface synthesis, physical visualization optimization that we describe in <Ref>.
§.§ Definition
DIG is a grammar that defines the syntactic structure of the queries
that an interface wishes to express: the set of queries parsable by the grammar.
Given that existing data query languages such as SQL, PRQL <cit.>, and Pandas have well-established grammar definitions, DIG is a superset of the widely used Parsing Expression Grammar (PEG). By extending PEG, we both build on decades of research and tooling and simplify the task of porting existing PEG-based languages to DIG.
We formally define a DIG as {N, Σ, P, e_S, C}, where the sub-grammar rooted at each starting rule parses a set of queries:
* a finite set of nonterminals N;
* a finite set of terminals Σ that is disjoint from N;
* a finite set of parsing rules P;
* a finite set of starting rules e_S, each not referenced by any other rule;
* a set of constraints C.
Terminals.
Similar to typical grammars, DIG matches terminals to valid strings expressible by regular expressions. Although regular expressions are useful for matching string literals, most interactions (e.g., sliders, dropdowns, visualization selections) are typed and limited to a domain of valid values that regular expressions cannot distinguish. Thus, DIG also supports domain terminals that may reference the underlying database.
* Predicate Domain: A = {var:type | <predicate>}.
* Query Domain: A = {SELECT QUERY}.
A predicate domain specifies a typed variable along with a boolean expression that must evaluate to true for a value to be valid. For instance, val = {x:int | x∈[1,36]} specifies the terminal as an integer between 1 and 36. Note that a regular expression pattern p is expressible as a predicate domain {s:str | s matches p}.
A query domain specifies a query over the database; the terminal must be an element of the query result. For instance, prods = {SELECT name FROM products} ensures that the terminal is a valid product name. The data types may be structured as well; for instance, X = { SELECT fname, lname FROM users } would choose from the first and last names of existing users. These domains serve as hints for the interface to choose a good interaction for the rule, and as input validation rules that guarantee syntactically correct programs.
Following second-order languages like SchemaSQL <cit.>, we additionally support special string types to express relation names (rel) and attributes of a relation (attr[str:rel]), where the latter is optionally parameterized by a relation name. Thus, the following restricts names to attribute names in two relations:
Rules.
Each rule in P is structured as A = e, where A is a non-terminal and e is a parsing expression composed of a reference to a non-terminal, a terminal (e.g., a string literal), or an expression built with a sequence e_1 e_2, selection e_1 | ... | e_n, or zero-or-more e^* operator[PEG operators like AND and NOT can be omitted since DIG is not used for parsing.]. Selection implicitly has a domain [1,n] that specifies which subexpression is selected, and zero-or-more's domain is the natural numbers, which specifies the number of repetitions.
Other patterns, such as e^+ and e?, are reducible to these operators.
A non-terminal A on the left side of a rule can optionally be typed by adding the suffix :type. For instance, t:rel in <Ref> specifies that `chirps` and `evi` are relation names. Type violations result in a parsing error.
Naming.
Naming is necessary for defining constraints and interface mappings next, thus we now introduce annotations and choice variables.
An annotation assigns a variable name to a non-terminal reference by appending :$varname to the reference. For instance, <Ref> assigns the two val references to s and e. If a reference is not annotated, DIG assigns a unique name by appending a unique number to the non-terminal name (e.g., val1).
Unfortunately, variables alone are not sufficient because the same non-terminal can be referenced multiple times. Consider the following rules:
The variable v3 is ambiguous because both v1 and v2 reference it. Thus, we define a variable's fully qualified name as the path from the root of the grammar to the variable, where each element in the path is a non-terminal reference.
All variability in a DIG is expressed by non-terminals that expand to a predicate or query domain, a selection expression, or a zero-or-more expression. We use the term choice variable to refer to the fully qualified reference to such a non-terminal.
For instance, v1, v2, v1/v3, v2/v3 are the choice variables in the above example. Further, let D_c be the domain of a given choice variable, as defined in the Terminals and Rules paragraphs above.
Constraints.
The developer can specify boolean expressions over choice variables; these constraints are evaluated when the user performs an interaction to determine its validity. For instance, s≤e in <Ref> ensures that the start of the range is less than or equal to its end. Two terms assigned to the same variable name imply an equality constraint. DIG handles equality constraints between variables (e.g., s = e) in a special way: if one variable is bound to a value v, then the other is updated to v as well; if both are updated, then we check whether they are equal.
§.§ Valid Interfaces for a DIG
An interface renders query results and lets the user navigate the space of valid queries.
Since choice variables encapsulate all variability, a valid interface is one whose interactions can bind values to the set of choice variables. Once they are all bound, the grammar reduces to an executable query string, and the interface renders its evaluation result(s). We will first define interactions and how they cover choice variables, and then define the set of valid interfaces for a grammar. In practice, each starting rule in a grammar represents a separate query, and the interface will render the results of each query; this extension is straightforward and we assume a single root for clarity.
Interfaces and Interactions.
An interface UI=(V, I) consists of a view V (e.g., a table, a visualization, a paragraph) that renders the output of the starting rule and a set of interactions I. We model an interaction i = (T_i, D_i)∈ I by the state it can express. T_i is its type (e.g., dropdown, slider) and D_i(a_1,...) is its domain with schema (a_1,...). For instance, the domain for a dropdown with n options is [0,n]; for a text box is the set of all strings (perhaps up to a specified length); for a slider is the set of numbers between the min and max; and for a 2-D brush interaction in a scatter plot is the set of bounding boxes in the chart. An interaction's developer is responsible for defining its domain. Note that our interface model supports arbitrary layout because layout does not affect the interface's expressiveness[Chart layout (e.g., faceting/small multiples) may affect the set of interactions the chart can express, but is encapsulated by the chart.]
Let a mapping M_i,c = {a_i→ a_c | a_i∈ schema(D_i) a_c∈ schema(D_c) } map attributes in the interaction's domain to attributes in the choice variable's domain, and let the mapping's projection π_M(D_i) be the subset of attributes in the interaction's domain that have a mapping. An interaction i is said to cover a choice variable c if 1) every attribute in D_c is mapped to in M_i,c, and 2) the interaction's domain is a superset of the choice variable's domain: π_M(D_i)⊇ D_c. These conditions ensure that all possible assignments to c can be expressed in the interface. Given these definitions, we are now ready to define interface validity.
An interface UI is valid for a grammar G if every choice variable in G is covered by at least one interaction in UI, and every root rule is rendered by at least one view.
<Ref> contains two interactions and two mappings.
The dropdown maps its selected index to the choice variable t; since the dropdown is initialized with the set of choices in t (e.g., “chirps”, “evi”), their domains will be identical: [1,2].
The range slider maps the left slider handle to s and the right slider handle to e; the slider's domain is {(l,r) | l∈[1,36] r∈[1,36]}, which matches the predicate domain and constraints over s and e.
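The coverage test can be sketched as follows. This is a simplification in Python that assumes finite, explicitly enumerated domains and checks multi-attribute domains attribute by attribute; real DIG domains (e.g., query domains) would instead be evaluated against the database.

# Minimal sketch of the coverage check for the dropdown / range-slider example.
def covers(interaction_domain, mapping, choice_domain):
    """interaction_domain / choice_domain: dict attr -> set of allowed values;
    mapping: dict from interaction attributes to choice-variable attributes."""
    if set(choice_domain) - set(mapping.values()):       # (1) every choice attr is mapped to
        return False
    for i_attr, c_attr in mapping.items():               # (2) mapped domain is a superset
        if c_attr in choice_domain and not choice_domain[c_attr] <= interaction_domain[i_attr]:
            return False
    return True

dropdown = {"selected": {1, 2}}                           # index over the two table choices
slider = {"left": set(range(1, 37)), "right": set(range(1, 37))}
assert covers(dropdown, {"selected": "t"}, {"t": {1, 2}})
assert covers(slider, {"left": "s", "right": "e"},
              {"s": set(range(1, 37)), "e": set(range(1, 37))})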
Text Inputs and Parsing.
Text inputs are a special type of interaction because they can, in principle, produce arbitrary strings that are interpreted as query substrings rather than string literals. For instance, <Ref> is a query builder where the user types in predicate expressions, and clicks on “add pred” to add additional conjunctive clauses. The text input is parsed by the pred rule, which implicitly binds the attribute, operator, and value.
For these reasons, a text input[In general, this can be any interaction whose domain is all strings. ] can map to any term t in a grammar. Any input string will first be parsed and validated by the subgrammar rooted at t. The parsing process implicitly binds all of the choice variables in the subgrammar, and all parsing errors or constraint violations are passed to the interaction in order to surface as error messages.
This functionality is helpful for several reasons. First, DIG can automatically perform parsing and validation, so any text input is guaranteed to be syntactically correct, which naturally prevents issues such as SQL injection. Second, every grammar is guaranteed at least one valid interface: one where a text input maps to the root of the grammar, which is equivalent to a typical console-based interface. Third, it enables a progressive interface design process where, starting from the default text-based interface, more specialized interactions are added to the interface to “carve out” more and more choice variables.
Recursive Rules.
So far, we have implicitly assumed that the grammar is hierarchical and non-recursive. However, allows recursion in the rule set. For instance, SQL allows nested queries anywhere a value or relation is expected. How does recursion map to a valid interface? We outline three categories of approaches.
The first approach is to simply map the first external reference of the recursive rule set to a text input. However, this may reduce the interface to a single text box. The second approach is to enforce a maximum recursion depth and “unroll” the recursion; this effectively produces a new grammar without this recursion that we can map to a valid interface.
The third approach is inspired by existing query builder interfaces that support nested queries: they typically instantiate a “new” query builder interface to represent the nested query. In DIG, we first ignore the expression e_r that introduces the recursion and determine a valid interface UI for the remaining rules in the recursive rule set. We then map a button to e_r that, when clicked, instantiates a new instance of UI in the interface.
<Ref> illustrates recursion in the src rule, which specifies a table name or a subquery with the same structure. The interface lets the user choose between the options using a radio button. If the user chooses table name, they fill a text box with the relation name (e.g., “profits”), which is validated by the table rule. Otherwise, we instantiate a nested interface containing the radio buttons and set of predicates.
§.§ Cross-filter Example
We now use the popular cross-filter visualization <cit.> to illustrate DIG end-to-end on a real-world example. Each bar chart in cross-filter renders an aggregation grouped on one attribute of the underlying dataset. For instance, <Ref> shows three bar charts grouped on arrival time, airtime, and date, respectively. Brushing over a chart grouped on attr adds a predicate that filters over attr to the other charts; the filtered aggregates are rendered as an overlay, while the unfiltered results are shown in gray in the background.
The following grammar describes the rules to render the arrival (q1) and airtime (q2) charts; it omits constraints and the rules for the date chart.
q1_bg = 'SELECT arrival, count() FROM flights GROUP BY arrival '
q2_bg = 'SELECT airtime, count() FROM flights GROUP BY airtime '
...
q1 = 'SELECT arrival, count() FROM flights WHERE ' p_airtime:pair ' AND ' p_date:pd ' GROUP BY arrival'
q2 = 'SELECT airtime, count() FROM flights WHERE ' p_arrival:parr ' AND ' p_date:pd ' GROUP BY airtime'
...
p_arrival = true | 'arrival BETWEEN ' arr:arrs ' AND ' arr:arre
p_airtime = true | 'airtime BETWEEN ' air:airs ' AND ' air:aire
p_date = true | 'date BETWEEN ' date:s ' AND ' date:e
arr = { SELECT arrival FROM flights }
air = { SELECT airtime FROM flights }
date = { SELECT date FROM flights }
The _bg starting rules define the background unfiltered results, while q1 and q2 define the overlay filtered queries. Each query is filtered by a conjunction of predicate rules (e.g., p_airtime); and each predicate such as p_airtime either evaluates to true, meaning the airtime chart is not brushed, or a BETWEEN clause, meaning that the airtime chart is brushed and the start and end of the brush range map to airs and aire. Notice that p_date in q1 and q2 are both named pd to ensure that their bindings are identical.
§.§ Tool Compatibility
A benefit of DIG is that it is compatible with existing data engineering practices. For instance, data pipelines and analyses are increasingly expressed as a DAG of SQL views using tools like DBT <cit.>. This is useful because data engineers can define these DAGs, while business analysts and data consumers can use the resulting views in visualization and data science tools.
Each DAG node is called a model and expressed as a Jinja template that, when evaluated, returns a SQL string. The template can reference custom variables and call logic to change the query by assigning values to the variables in a configuration file.
For instance, the following model uses the variable region to choose the input table (specified using the ref() function) and the variable age to change the filter. region may be set to USA or EUR, which are themselves the names of other models.
SELECT cty, sum(profit) FROM ref(var("region"))
WHERE age > var("age")
DBT models that use ref(), variables, and branching logic can be automatically translated into grammars. ref() translates into a non-terminal reference to either a base relation/view or the starting rule of a grammar translated from another model; a variable translates into a terminal rule; and branching logic translates into a selection rule (e.g., e1 | e2 | ..) with one option for each branch. If the condition expression references a variable, we evaluate the expression dynamically to decide which branch to choose.
For instance, the above model translates into the following, where we assume usa and eur are the starting rules for their respective DBT models.
q = 'SELECT cty, sum(profit) FROM ' t ' WHERE age > ' age
t = usa | eur
usa = ...
eur = ...
age = { n:int | n > 0 }
§ VISION: DIG-BASED INTERFACE CREATION
So far, we have described DIG as a compact and expressive abstraction that naturally maps to interactive interfaces.
How can such an abstraction change how we design, implement, and use new data-oriented interfaces?
Here, we sketch a potential development cycle that DIG can enable.
The next section sketches our progress towards this vision.
Design. Barb wants to create a new data interface to analyze user signup flows and decides to use DIG. One option is to manually write a grammar. Alternatively, she might induce a grammar from existing user signup analyses by extracting queries from, e.g., Jupyter notebooks, DBMS query logs, or other database-backed applications, or by translating a natural language description of her analysis goals into a grammar.
Her design tool then automatically synthesizes a custom interactive interface. She likes the overall design, but resizes the canvas to fit a smaller screen, and specifies that the interface should be more expressive. The synthesized interface updates, and she re-positions the charts and widgets to fine-tune the layout.
Implementation. Barb now connects the design tool to the user signups database. If her dataset is small, the design tool can load the database into memory and either run the interface or export it to a web application. However, if the dataset might grow over time or if it resides in a cloud database (which optimizes for throughput rather than query latency), then Barb potentially needs to engineer an entire client-server system. However, Barb does not have the time, desire, nor expertise to make all of the decisions about which DBMS, data structures, and optimization techniques to employ so that the interface is responsive.
Instead, Barb gives the design tool her budget and specifies her desired responsiveness for the different interactions. The tool uses metadata about the underlying database to estimate how many resources are needed to meet her responsiveness goals. The proposed architecture requires materializing and caching 7GB of data structures <cit.> in server-side memory, which costs $35/month. Barb thinks this is too expensive, and moves the part of the interface related to post-signup actions to a separate page; this relaxes some of the interactivity constraints, reducing the size of the data structures to 2GB and the cost to $15/month. When she accepts the recommendation, the design tool allocates a cloud server, instantiates the data structures and execution plans, and hosts an endpoint for the new interface.
Use.
Barb knows that learning to use the new interface can be hard for users, so she records herself performing some example analyses. A new user plays with the interface for a bit, gets confused, and then watches a recording. Half way through, he wonders how he can get to that point without reloading the interface and starting from scratch. He clicks a “show me how” button, and the interface dynamically creates a tutorial from where he currently is to the point in the recording. After following the tutorial, he asks “show user flows for only adults above 50” in natural language; it automatically aligns this with the grammar's structure, translates the natural language input into the appropriate choice variable bindings, and the interface walks through the interactions needed to perform this request.
§ PROGRESS SO FAR
DIG introduces novel problems for improving how interfaces are created, optimized, and used. We now outline four example problems that we have explored in current or prior work.
§.§ Automatic Interface Generation
<Ref> defined the set of valid interfaces that can be mapped from a DIG, which opens the potential to automatically explore and generate valid interfaces for a given grammar.
It is also possible to transform the grammar to induce new sets of valid interfaces.
Consider the following example based on our recent work <cit.>:
[Interface Generation]
<Ref>(a) is an initial grammar and a corresponding valid interface. The grammar expresses four queries that each differs in the filter predicate string; the interface simply selects one of the predicate strings using radio buttons. We can rewrite the grammar to the equivalent grammar in <Ref>(b) by factoring out the “=” character from each predicate and creating separate rules for the left and right sides. The corresponding interface has two sets of radio buttons, one to choose the attribute and one to choose the value. Although this appears trivially similar, we might now apply generalization rules to e.g., let var match any number, or to lift attr to an attribute type. These rules increase the expressiveness of the resulting grammar, and consequently, the set of valid interfaces that a cost model might pick from.
Where Do DIGs Come From? There are many ways to generate a grammar. In our prior work, we have explored inducing grammars from sequences of SQL queries in database logs <cit.>, analyses in notebooks <cit.>, and query models in DBT <cit.>. Alternatively, a grammar can be generated by a large language model <cit.>, as LLMs are proficient at generating text. For instance, in <Ref>, the grammar could be generated from a natural language query such as "How is the total count for different p when a is one versus when b is two?"
§.§ Automatic Backend Optimization
Users care about interactivity, and can detect even milliseconds of interaction delay <cit.>. As a result, designers must make complex trade-offs between the interface design, levels of responsiveness for different interactions, and the systems and resource implications to guarantee those levels of responsiveness.
Ideally, a designer could label different interactions with their latency constraints and allow an automated tool to check their feasibility and resource requirements. This is not straightforward today. Physical database advisors <cit.> take a sample of queries as input, but neither individual queries nor sets of queries map directly to interactions because, as we have shown, interactions transform targeted portions of a query.
In contrast, DIG naturally models interactions based on the non-terminals they bind; annotating each interaction with latency expectations is therefore straightforward.
This further offers a complete picture of the interface's data processing requirements, as this annotated grammar expresses the universe of possible queries along with their latency requirements.
Given this annotated grammar, we can identify visualization-specific physical data structures to materialize and maintain, along with a placement and query execution plan spanning the client, server, and cloud DBMS, that guarantees these latency requirements. We term this the physical visualization design problem.
[Physical Visualization Design]
Consider the interface in <Ref>. The designer specifies that the slider should respond in 10ms and the dropdown in 100ms, and that the client and server memory constraints are 5GB and 50GB, respectively.
<Ref> shows two potential physical designs.
The first suggestion (a) might be to materialize BTree data structures over the chirps and evi datasets on the client in order to execute the slider range interactions as index lookups. If the estimated data structure sizes are less than 5GB, then this option is desirable.
If their sizes grow too large, then moving their placement to the server may be preferable (b). This incurs network communication latencies, but the index lookups may be faster due to a faster server CPU.
§.§ Tutorial Generation
When encountering a new interface, the user must both learn how the interface works and use it to achieve different tasks <cit.>. DIG offers the potential to automatically generate interactive tutorial walkthroughs because it manages all of the interface state and explicitly represents its correspondence to interactions in the UI.
Thus, given a start and end interface state—expressed as the states of the UI interactions and their corresponding set of bindings in the grammar—we can automatically identify the sequence of user interactions necessary to go from start to end state, and use this sequence to generate an interactive, static, or video tutorial.
[Tutorial Generation]
Consider again the interface in <Ref> as the starting state and the following end state:
SELECT year, payout1(*), ... FROM evi WHERE dekad BETWEEN 1 AND 2
To transition to the end state, we simply need to re-bind the choice variables t (using the dropdown) and s,e (using the slider). The order of interactions may be determined by e.g., a user cost model that estimates the amount of effort to perform different sequences.
More complex interfaces may contain data dependencies, where one choice variable v_d is a descendant of another v_a. Given the grammar, we can easily infer that the user must interact with v_a before v_d.
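A toy Python sketch of this step-generation logic is shown below; the widget descriptions and the dependency map are hypothetical, and a user cost model for ordering independent interactions is omitted.

# Sketch: diff the choice-variable bindings of the start and end states and
# order the re-bindings so a variable is set after the variables it depends on.
from graphlib import TopologicalSorter   # Python 3.9+

def tutorial_steps(start_bindings, end_bindings, widget_of, depends_on=None):
    depends_on = depends_on or {}
    changed = {v: end_bindings[v] for v in end_bindings
               if start_bindings.get(v) != end_bindings[v]}
    graph = {v: set(depends_on.get(v, ())) & set(changed) for v in changed}
    order = TopologicalSorter(graph).static_order()
    return ["Set %s so that %s = %r" % (widget_of[v], v, changed[v]) for v in order]

steps = tutorial_steps(
    start_bindings={"t": "chirps", "s": 10, "e": 20},
    end_bindings={"t": "evi", "s": 1, "e": 2},
    widget_of={"t": "the dataset dropdown", "s": "the slider's left handle",
               "e": "the slider's right handle"})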
§.§ Workload Generation
Visualization benchmarks <cit.> are designed to help evaluate the data processing systems that power interactive data interfaces by providing sequences of queries that simulate what an interface would produce during a user's analysis process. However, existing benchmarks are limited in expressiveness: they support only SPJA query structures and parameterized filters. Even simple transforms, such as changing the input relation (<Ref>), are not supported.
In contrast, DIG can express arbitrary query structures and arbitrary transformations, and it models a direct correspondence between user interactions and their query transformations. As such, simply developing different user models (say, training a Markov model or using a large language model to simulate an agent) makes it easy to generate diverse query workloads and timings that reflect real data interfaces, queries, and user needs.
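As a minimal illustration, the Python sketch below uses a purely random user model over the dropdown/slider grammar from the running example to emit a small query workload; a learned user model (e.g., a Markov model over interactions) would replace the random choices.

# Sketch of workload generation from a tiny grammar; the query template follows
# the drought-insurance example used earlier in this paper.
import random

TEMPLATE = "SELECT year, payout1(*) FROM {t} WHERE dekad BETWEEN {s} AND {e}"

def random_binding():
    s = random.randint(1, 36)
    return {"t": random.choice(["chirps", "evi"]), "s": s, "e": random.randint(s, 36)}

def generate_workload(n_interactions=5, seed=0):
    random.seed(seed)
    binding = random_binding()
    workload = [TEMPLATE.format(**binding)]
    for _ in range(n_interactions):
        var = random.choice(["t", "s", "e"])      # simulate one interaction: re-bind one variable
        binding[var] = random_binding()[var]
        if binding["s"] > binding["e"]:           # keep the constraint s <= e satisfied
            binding["s"], binding["e"] = binding["e"], binding["s"]
        workload.append(TEMPLATE.format(**binding))
    return workload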
§ CONCLUSIONS
In this paper, we propose DIG, a Data Interface Grammar that extends Parsing Expression Grammars (PEG) with annotations specific to data programs.
DIG satisfies all three desired criteria: (C1) it can compactly express any set of queries useful for a task; (C2) it has a well-defined correspondence to interactive interfaces composed of charts, widgets, and interactions; and (C3) it is amenable to offline analysis.
We also demonstrate its compatibility with existing data engineering practices through DBT <cit.>.
We further illustrate the potential benefits of this abstraction, such as automatic interface generation, automatic interface backend optimization, tutorial generation, and workload generation.
Additionally, we describe how DIG simplifies interface creation via real-world examples.
Thanks to Miles Hong for helpful feedback.
This material is based upon work supported by NSF grants 1845638, 2008295, 2106197, 2103794; Amazon and Adobe. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funders.
|
http://arxiv.org/abs/2307.00907v1
|
20230703101034
|
Enhancing the Robustness of QMIX against State-adversarial Attacks
|
[
"Weiran Guo",
"Guanjun Liu",
"Ziyuan Zhou",
"Ling Wang",
"Jiacun Wang"
] |
cs.LG
|
[
"cs.LG",
"cs.CR",
"cs.MA"
] |
Enhancing the Robustness of QMIX against State-adversarial Attacks
1st Weiran Guo
Department of Computer Sciences
Tongji University
Shanghai, China
azureeeeeguo@gmail.com
2nd Guanjun Liu
Department of Computer Sciences
Tongji University
Shanghai, China
liuguanjun@tongji.edu.cn
3rd Ziyuan Zhou
Department of Computer Sciences
Tongji University
Shanghai, China
ziyuanzhou@tongji.edu.cn
4th Ling Wang
College of Transportation Engineering
Tongji University
Shanghai, China
wang_ling@tongji.edu.cn
5th Jiacun Wang
Department of Computer Science and Software Engineering
Monmouth University
West Long Branch, USA
jwang@monmouth.edu
Deep reinforcement learning (DRL) performance is generally impacted by state-adversarial attacks, i.e., perturbations applied to an agent's observations. Most recent research has concentrated on robust single-agent reinforcement learning (SARL) algorithms against state-adversarial attacks, and there has been little work on robust multi-agent reinforcement learning. Using QMIX, a popular cooperative multi-agent reinforcement learning algorithm, as an example, we discuss four techniques for improving the robustness of SARL algorithms and extend them to multi-agent scenarios. To increase the robustness of multi-agent reinforcement learning (MARL) algorithms, we train models using a variety of attacks, and we then test models trained with one attack by subjecting them to the other attacks used during the training phase. In this way, we organize and summarize techniques for enhancing robustness when applied to MARL.
multi-agent reinforcement learning, robustness, state-adversarial attacks
§ INTRODUCTION
DRL has made significant progress on tasks that require maximizing rewards. However, many studies indicate that agents trained with DRL remain vulnerable to attacks during the testing phase. The state-adversarial attack is one such attack; it perturbs the agent's observation without altering the underlying environment. In practice, it reflects the discrepancy between an idealized setting and the real world. For instance, if small disturbances caused by sensor limitations or malicious attacks are added to the state input in an autonomous driving task, they may cause the car to make unexpected decisions, leading to negative outcomes. Thus, strengthening the robustness of DRL algorithms is crucial.
Several studies propose techniques to increase the robustness of SARL, but there has been little discussion of improving the robustness of MARL. The latter, however, is a topic worth exploring. One key reason is that multi-agent environments are common in various domains, from games to transportation. Moreover, most multi-agent problems are harder to solve than single-agent ones: the agents are not independent individuals but are interconnected, so the total return of the entire system may drop if even one of the agents is attacked.
The above factors motivate us to research multi-agent reinforcement learning algorithms that are resistant to state-adversarial perturbations. The main contributions of this paper are as follows:
* We describe four training strategies, namely the gradient-based adversary, policy regularization, alternating training with learned adversaries (ATLA), and policy adversarial actor director (PA-AD), all of which have previously been applied to SARL algorithms, and adapt them to the multi-agent scenario.
* The cooperative multi-agent reinforcement learning (c-MARL) algorithm QMIX<cit.> is used as an example, and the four methods are applied to improve its robustness in a MARL environment. We then attack the multi-agent system trained with each of these methods.
* Conclusions are drawn from the experiments, and a comparison of these four methodologies reveals their respective benefits and drawbacks.
The paper is structured as follows. Section II reviews related work on enhancing the robustness of both SARL and MARL. Section III introduces the background of the problem we discuss. Section IV describes four existing methods applied to SARL and transfers them to MARL. Section V presents the experimental results. Finally, conclusions and analysis are offered.
§ RELATED WORK
§.§ Adversarial Attack and Adversarial Training in SARL
There are mainly three types of adversarial attack and training methods for bolstering the robustness of SARL. 1) Modifying the loss function. Zhang et al.<cit.> alter the training loss function with a regularizer, making it more consistent with the underlying mathematical structure of the reinforcement learning problem. Oikarinen et al.<cit.> propose RADIAL-RL to derive the adversarial loss. 2) Applying heuristic attacks. Pattanaik et al.<cit.> apply attacks originally designed for image recognition, e.g., FGSM<cit.> and PGD<cit.>, to the state observation of the agent. 3) Training a network for the adversary. Tretschk et al.<cit.> attack the agent sequentially using the Adversarial Transformer Network (ATN)<cit.>, which learns to create the attack and is simple to integrate into the policy network. Zhang et al.<cit.> propose ATLA, and Sun et al.<cit.> propose PA-AD, both of which use an opponent that generates the optimal adversary to teach the agent to be resilient to attacks of various strengths.
§.§ Adversarial Attack in MARL
Much of the existing research on MARL robustness has focused on attacks against multi-agent systems and on evaluation frameworks. Guo et al.<cit.> propose MARLSafe, a robustness testing framework for c-MARL algorithms that evaluates three aspects of attacks, including attacks on state observations. Pham et al.<cit.> propose the first model-based adversarial attack framework for c-MARL. Lin et al.<cit.> and Hu and Zhang<cit.> respectively attack only one of the agents in the multi-agent system and only a subset of the timesteps, showing that the system still performs poorly even if not all agents are attacked all of the time.
§.§ Adversarial Training in MARL
Zhang et al.<cit.> propose robust Markov games that consider model uncertainty and improve model performance using function approximation and mini-batch updates. To overcome the difficulty of training resilient policies under adversarial state perturbations, Han et al.<cit.> offer the Robust Multi-Agent Adversarial Actor-Critic (RMA3C) algorithm, based on a gradient descent ascent technique. Zhou and Liu<cit.> propose a new objective function and a repetitive regularization method to enhance MARL's defensive ability. Shi et al.<cit.> consider generalizability and use random noise to bridge real and virtual settings.
§ BACKGROUND
§.§ c-MARL Algorithm: QMIX
QMIX is a value-based method with centralized, end-to-end training and decentralized execution. QMIX estimates the joint action value as a complex nonlinear combination of per-agent action values, each of which is computed from local information only. To ensure consistency between the centralized and decentralized policies, a constraint is placed on the structure: the joint action-value function is monotonic with respect to each agent's action-value function.
Fig. <ref> illustrates the overall structure of the QMIX network, whose two primary components are the action-value networks of the individual agents and the joint action-value (mixing) network. Each single-agent action-value network receives the corresponding agent's partial observation o^i and action a^i as inputs and produces the agent's action-value function. The mixing network of QMIX takes the Q-values of all the agents' selected actions and computes the joint Q-value Q^tot, with hypernetworks generating the necessary weights and biases.
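For reference, the following is a minimal PyTorch sketch (in Python) of such a monotonic mixing network; layer sizes are illustrative, and details such as recurrent agent networks and target networks are omitted. Taking the absolute value of the hypernetwork outputs keeps the mixing weights non-negative, which enforces the monotonicity constraint.

# Sketch of a QMIX-style mixing network with hypernetworks (assumes PyTorch).
import torch
import torch.nn as nn

class QMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)   # weights of mixing layer 1
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)              # bias of mixing layer 1
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)              # weights of mixing layer 2
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        bs = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.nn.functional.elu(torch.bmm(agent_qs.view(bs, 1, -1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2                            # (batch, 1, 1)
        return q_tot.view(bs, 1)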
§.§ State-adversarial Stochastic Game
A stochastic game is a game involving one or more players and probabilistic state transitions, in which the players' actions yield rewards and determine the distribution over the next state.
The state-adversarial stochastic game matches the multi-agent setting under perturbation that we discuss in this paper. A state-adversarial stochastic game (SaSG) is defined in <cit.> as a stochastic game with state perturbations. We assume that the state adversary depends only on the current state and is unaffected by time.
§.§ Dec-POMDP with State Adversary
In most scenarios, the agents can only observe part of the environment, and the problem is therefore modeled as a decentralized partially observable Markov decision process (Dec-POMDP)<cit.>. With perturbations involved, we use a state-adversarial Dec-POMDP to model the setting studied here. It can be defined as a tuple <S, {A^i}^N, P, {R^i}^N, {O^i}^N, {B^i}^M, γ>, where S is the global state space, N is the number of agents, M is the number of attacked agents, A^i is the action space of agent i with A the set of joint actions, P gives the state transition probabilities, i.e., the probability distribution over next states given the current state and actions, R^i is the reward function of agent i with R the global reward function of the multi-agent system, O^i is the observation space of agent i with O the joint observations of all agents, B^i is the set of adversarial states applied to agent i, and γ∈[0,1] is the discount factor.
Dec-POMDP in c-MARL is a Markov decision process in which a group of agents cooperate to maximize their total team reward while only obtaining local information. To be more specific, when N agents participate in the cooperative task, each agent i (1≤ i≤ N) receives a partial observation o_t^i as its input at timestep t. Each agent i performs an action a_t^i according to the observation it collects, trying to get closer to the maximal total team reward R.
The observation inputs that the agents receive in reality and those that they are meant to receive differ when state perturbations are present in the c-MARL environment. Because the observations are perturbed, the observed state that agent i takes in changes from o^i to ô^i. Since the agent's output action depends on its observation, the chosen action may change as well. The agent's Q-value consequently changes from Q^i(o^i,a^i) to Q^i(ô^i,â^i), which also changes the joint value Q^tot. The c-MARL algorithm operates under the assumption that the obtained Q^tot represents the current optimal solution, so a change in Q^tot results in a reduction of the overall return.
§ METHODS
In deep learning, adversarial training uses adversarial attacks to optimize the model's worst-case performance, i.e. to solve a max-min problem, which increases the robustness of the model. This concept also applies in MARL, where the effect of adversarial training is measured and the max-min problem is solved. Since in QMIX a monotonic constraint holds between the individual Q^i and the global Q^tot, we derive the objective function of the expected reward in terms of the attacked single agents:
max_π^imin_ô^i∈ B^i∑_a^iπ^i(a^i|ô^i)Q_φ^i(o^i,a^i)
where π^i is the policy of agent i, ô^i is the perturbed state observation received by agent i, and φ^i denotes the parameters of agent i's RNN network. The inner minimum is crucial because we need to identify a worst-case situation; hence, we must choose an optimal adversary.
To choose the strongest opponent to impose on the observations, the adversary aims to minimize the multi-agent system's overall reward. This is the main idea behind the attacking policy on observations during both training and testing, with the objective function:
argmin_π∑_t=0^∞γ^tR(s_t,a_t^1,...,a_t^M,s_t+1)
where π:=(π^1,...π^M) is the set of victim agents' policies. The goal of the objective function is to make the total reward R as low as possible.
§.§ Gradient-based Adversary in MARL
Gradient-based adversarial examples were first proposed for image classification problems. In the MARL case, the goal of the gradient-based adversarial approach is to reduce the total reward of the multi-agent system by generating adversarial samples and attacking M agents in the training phase, so that QMIX learns to maximize the overall reward under the worst such reduction. Fig. <ref> illustrates how a gradient-based adversary is used in a MARL instance.
A gradient-based adversary crafts the optimal max-norm constrained perturbation based on the chosen actions and the ones with the highest probability, which provide the most rewards in QMIX. Therefore we can define the loss function as the cross-entropy loss L(θ^i,o^i,u^i) where u^i is the target action with the highest probability of agent i and θ^i is the parameters of the gradient-based adversary that attacks agent i.
Popular techniques for producing adversarial examples include FGSM, PGD, etc. We use FGSM when training robust QMIX, with ô^i=o^i+δ^i, where δ^i is the perturbation imposed on agent i. The perturbation can be described as:
δ^i=ϵ sign(∇_o^iL(θ^i,o^i,u^i))
where ϵ is the l_∞-norm perturbation budget. FGSM produces perturbations that attempt to push the chosen action away from the action with the highest gain; however, because these perturbations are untargeted, the attacked agents do not necessarily move towards the action with the lowest reward. In <cit.>, it is proven that gradient-based techniques may fail to generate optimal attacks.
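A minimal sketch of this attack is given below. It assumes a single-agent Q-network interface `agent_net(obs, hidden)` that returns per-action Q-values and the recurrent hidden state; the function and variable names are illustrative only.

```python
import torch
import torch.nn.functional as F


def fgsm_observation(agent_net, obs, hidden, epsilon):
    """Craft an FGSM perturbation of one agent's observation (a sketch).

    The target action u^i is the greedy action on the clean observation; the
    perturbation follows delta^i = epsilon * sign(grad_obs L(theta, o, u)).
    """
    obs = obs.clone().detach().requires_grad_(True)
    q_values, _ = agent_net(obs, hidden)
    target = q_values.argmax(dim=-1)                  # u^i: clean greedy action
    loss = F.cross_entropy(q_values, target)          # L(theta^i, o^i, u^i)
    loss.backward()
    delta = epsilon * obs.grad.sign()                 # untargeted FGSM step
    return (obs + delta).detach()                     # perturbed observation o_hat^i
```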
§.§ Policy Regularization in MARL
The policy regularization method uses a robust, hinge-like policy regularizer as a defensive strategy to keep the optimal action unchanged after the disturbance. <cit.> shows that, in a multi-agent environment, the difference between the value function before and after the perturbation can be bounded provided the agents' action distributions do not differ too much.
Since every agent in the QMIX algorithm picks the action with the largest Q-value, we have π(a^i|o^i)=1 if a^i is the best action and 0 otherwise. We use the total variation distance to measure the discrepancy between the action distributions:
D_TV(π(·|o^i),π(·|ô^i)) =
[argmax_a^iπ(a^i|o^i)≠argmax_a^iπ(a^i|ô^i)]
where [argmax_a^iπ(a^i|o^i)≠argmax_a^iπ(a^i|ô^i)] is the Iverson bracket, whose value is 1 if the statement inside is true and 0 otherwise. The total variation is zero if the action chosen in the perturbed state is the same as the optimal action in the clean state.
We add a regular term to the loss function that minimizes the difference in action distributions, indirectly reducing the interference with the value function. In QMIX, the loss function calculates the loss of the whole multi-agent system and therefore calculates the overall regular term:
ℒ_reg(φ):=
∑_imax{∑_o^imax_ô^i∈ B^imax_a^i≠ a^i_*Q_φ^i(ô^i,a^i)-Q_φ^i(ô^i,a_*^i),-c}
where φ:=(φ^1,..., φ^N) is the set of the parameters of all agents' RNN networks, a_*^i is the action that reaches the largest value function of agent i, and c is a positive constant to constrain the regular term.
The total training loss then reads
ℒ_tot(φ)=ℒ(φ)+κℒ_reg(φ)
where κ is the weighting factor that balances the two components. This objective simultaneously bounds QMIX's TD error and the perturbation of the value functions.
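A sketch of the regularization term for one agent is given below. It assumes that the Q-values on clean and (worst-case) perturbed observations have already been computed, e.g. with FGSM or PGD over B^i; the tensor names are illustrative. The total loss is then formed as the TD loss plus κ times this term, as in the equation above.

```python
import torch


def hinge_regularizer(q_clean, q_perturbed, c=10.0):
    """Hinge-like policy regularizer for one agent, over a batch.

    q_clean:     (batch, n_actions) Q-values on clean observations.
    q_perturbed: (batch, n_actions) Q-values on worst-case perturbed
                 observations (e.g. obtained with FGSM/PGD over B^i).
    Encourages the greedy action on the perturbed observation to stay a_*.
    """
    a_star = q_clean.argmax(dim=-1, keepdim=True)            # clean greedy action a_*
    q_star = q_perturbed.gather(-1, a_star).squeeze(-1)      # Q(o_hat, a_*)
    allowed = torch.ones_like(q_perturbed, dtype=torch.bool)
    allowed.scatter_(-1, a_star, False)                      # exclude a_*
    q_other = q_perturbed.masked_fill(~allowed, float('-inf')).max(dim=-1).values
    gap = q_other - q_star                                   # max_{a != a_*} Q(o_hat,a) - Q(o_hat,a_*)
    return torch.clamp(gap, min=-c).sum()                    # hinge at -c, summed over the batch
```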
§.§ ATLA in MARL
According to <cit.>, the joint optimal adversarial perturbation exists and is unique. The idea of ATLA in MARL is to train a network that outputs the optimal perturbed states and adds them to the agents' observations. The objective of this network is to minimize the total reward of the multi-agent system; the attack process is shown in Fig. <ref>.
The problem can be formulated as a stochastic game and solved by multi-agent reinforcement learning. As mentioned above, a multi-agent algorithm that can output continuous actions, such as MAPPO or MADDPG, can be used to update such a network. To make the rewards of the victim agents as small as possible, the idea is similar to building another multi-agent team in which each agent outputs a state perturbation as its action and earns as reward the negative of the reward of the attacked multi-agent group. The training process of the adversary, denoted by f, can be seen in Algorithm <ref>.
To train a robust QMIX, we use the MAPPO algorithm as the adversary. For each agent i, the adversary generates a perturbation δ^i and adds it to o^i, storing the original observations, perturbations, negated rewards, and the following observations in its replay buffer. It is then trained to produce optimal attacks that minimize the total reward of the whole multi-agent system. This is an effective way to construct a worst-case scenario, equivalent to two groups of agents cross-training against each other.
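The alternating scheme can be summarised by the schematic loop below; `env`, `qmix` and `adversary` are assumed wrappers around the environment, the victim learner and the MAPPO adversary, not actual library APIs, and the loop only sketches the cross-training structure of Algorithm <ref>.

```python
# Schematic ATLA-style alternating training loop (illustrative sketch).
def atla_training(env, qmix, adversary, n_iterations, victim_episodes, adv_episodes):
    for _ in range(n_iterations):
        # Phase 1: fix the adversary, train the victim QMIX under attack.
        for _ in range(victim_episodes):
            obs, done = env.reset(), False
            while not done:
                perturbed = [o + adversary.perturb(i, o) for i, o in enumerate(obs)]
                actions = qmix.act(perturbed)
                next_obs, reward, done = env.step(actions)
                qmix.store(perturbed, actions, reward, next_obs, done)
                obs = next_obs
            qmix.update()
        # Phase 2: fix the victim, train the adversary on the negated reward.
        for _ in range(adv_episodes):
            obs, done = env.reset(), False
            while not done:
                deltas = [adversary.perturb(i, o) for i, o in enumerate(obs)]
                perturbed = [o + d for o, d in zip(obs, deltas)]
                actions = qmix.act(perturbed)
                next_obs, reward, done = env.step(actions)
                adversary.store(obs, deltas, -reward, next_obs, done)  # negated reward
                obs = next_obs
            adversary.update()
```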
§.§ PA-AD in MARL
<cit.> demonstrates how perturbations to policies are analogous to evasive and action attacks, which is the underlying idea of PA-AD. Therefore, we can transfer the perturbations on states to the attacks on the agents' policies. The fundamental methods of PA-AD and ATLA are comparable; both train a neural network to manufacture an adversary. However, PA-AD uses a director and an actor to carry out the attacking objective instead of producing state perturbations directly. The director denoted by v points out the optimal adversarial direction of the perturbation while the actor acts toward this direction using gradient-based adversaries. Using an RL algorithm (e.g., MAPPO, QMIX), the director resolves an RL problem where its actions are the adversarial directions and rewards are the negative values of the agents' rewards. The actor denoted by g deals with the optimization problem, finding the optimal adversaries according to the directions the director points at with a supervised learning optimization method (e.g., FGSM, PGD). How the director and the actor perform is shown in Fig. <ref>. The training process of the director can be seen in Algorithm <ref>.
To train a robust QMIX, another QMIX network can be used directly as the director. To reduce the overall return of the multi-agent system, its replay buffer receives the negated rewards of the victim system. The perturbing directions are â∼ v(·|s_t), where v is the director's policy. FGSM is then applied with respect to the actions generated by the adversarial QMIX, with the loss function:
L_tar(ϕ^i,o^i,u^i,â^i)=-L_1(ϕ_1^i,o^i,â^i)+L_2(ϕ_2^i,o^i,u^i)
where ϕ^i=(ϕ_1^i,ϕ_2^i) collects the parameters of the two loss terms, and â^i is the action of the adversarial QMIX, i.e. the perturbation direction imposed on agent i. Both L_1 and L_2 can be cross-entropy losses.
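The actor step of PA-AD can be sketched as follows, assuming the director has already sampled a perturbation direction (a target action index) from the adversarial QMIX; the interface names are illustrative and the step performs gradient ascent on L_tar.

```python
import torch
import torch.nn.functional as F


def pa_ad_perturbation(agent_net, obs, hidden, director_action, epsilon):
    """PA-AD actor step (sketch): given the director's adversarial direction
    `director_action`, craft a single FGSM-style step that pulls the victim
    towards that action and away from its own greedy action (ascent on L_tar).
    """
    obs = obs.clone().detach().requires_grad_(True)
    q_values, _ = agent_net(obs, hidden)
    greedy = q_values.argmax(dim=-1)                           # u^i
    l_tar = (-F.cross_entropy(q_values, director_action)      # -L_1: towards a_hat^i
             + F.cross_entropy(q_values, greedy))              # +L_2: away from u^i
    l_tar.backward()
    return (obs + epsilon * obs.grad.sign()).detach()          # perturbed observation
```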
Theoretically, PA-AD overcomes the drawbacks of ATLA and of gradient-based approaches: it addresses the computational complexity caused by the large state space and provides the optimization targets that purely gradient-based methods lack.
§ EXPERIMENTS
We put the four adversarial training techniques into practice and compare how well they resist state perturbations.
§.§ Environments
We implement our adversarial attacks and adversarial training on the StarCraft Multi-Agent Challenge (SMAC). StarCraft II is a widely used multi-agent training environment in MARL. We use four SMAC maps as the environments of our experiments: 2m_vs_1z (2 Marines vs. 1 Zealot), 3m (3 Marines vs. 3 Marines), 3s_vs_3z (3 Stalkers vs. 3 Zealots) and 2s3z (2 Stalkers & 3 Zealots vs. 2 Stalkers & 3 Zealots). More details about SMAC can be found in <cit.>. We consider the most extreme case, in which dense attacks disturb all agents at each timestep (M=N). We use the mainstream c-MARL algorithm QMIX and train it for robustness with the gradient method, policy regularization, ATLA, and PA-AD, respectively.
§.§ Adversarial Attacks and Trainings
Training. In the training phase, we attack M agents to increase QMIX's resilience. For the 2m_vs_1z, 3m, and 3s_vs_3z maps, we set the perturbation magnitude to 0.2, and for the 2s3z map, to 0.05. The vanilla QMIX models (trained on clean states) are trained for 205,000 timesteps, while the robustly trained QMIX models are trained for 405,000 timesteps. The gradient-based optimization method we choose is FGSM. In the policy regularization model, we first use the already-trained QMIX model as a pre-trained model, and then continue training with the regularization term under FGSM interference, where κ is set to 0 in the first half of training and to 0.1 in the second half. The hinge value c is set to 10. In the ATLA and PA-AD training, the trained network and the adversarial network are cross-trained at separate intervals to continuously improve both. We use the MAPPO algorithm as the adversary during ATLA training, with the learning rates of both actor and critic set to 1e-6 and a gradient clip value of 10. The adversarial algorithm used for PA-AD training is QMIX.
Testing.
In the testing phase, we attacked each of the five trained models on the four maps with all the attack methods used during training, except for policy regularization (which is a defense method rather than an attack method). Each test configuration was played 32 times, with the same perturbation magnitude as in the training phase, and again attacking all agents. Table <ref> shows the results of the cross-attack of these methods.
§.§ Results Analysis
Based on the experiments above, we evaluate the pros and cons of these four strategies for improving the robustness of MARL. Each of the four has benefits and drawbacks in terms of training difficulty and the magnitude of the improvement.
* Gradient-based adversary is an easy and effective way to boost the robustness of MARL during training. However, it does not provide the strongest attack, so the trained QMIX is not robust enough to resist stronger attacks.
* Policy Regularization exploits the relationship between the values of the clean and attacked states by adding regularization terms to the loss function. Unfortunately, this approach performs poorly under stronger interference, such as an optimal adversary. Furthermore, it does not behave stably even on clean states.
* ATLA generates optimal state perturbations by training a network, which in theory performs well in SARL scenarios with small state spaces and yields a significant improvement in robustness. However, in the MARL scenario the state-adversarial network must produce numerous state perturbations for multiple agents, which multiplies the network's state and action spaces. This is especially problematic in environments where input images are used as states, which makes training even harder. We ran into gradient explosion during training, and even when training completes, the outcomes are highly disappointing.
* PA-AD is similar in spirit to ATLA, training a state-adversarial network to attack the agents' observed states. The distinction is that PA-AD only outputs the perturbation direction and then applies the state interference along that direction, thus reducing the action space and simplifying network training.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we migrate robustness training from the single-agent case to the multi-agent scenario. We describe four approaches to enhance the robustness of the widely used MARL algorithm QMIX and discuss their theoretical foundations in the multi-agent setting. Based on these, we put each of these training techniques into practice and determine how effectively it improves robustness. In future work, we will optimize the adversarial networks used in the training process, explore more robust training methods that combine high efficiency with good results, and apply them to MARL algorithms other than QMIX.
§ ACKNOWLEDGMENT
b1 T. Rashid, M. Samvelyan, C. S. De Witt, G. Farquhar, J. Foerster, and S. Whiteson, “Monotonic value function factorisation for deep multi-agent reinforcement learning”, The Journal of Machine Learning Research, vol. 21, no. 1, pp. 7234–7284, 2020.
b2 H. Zhang et al., “Robust deep reinforcement learning against adversarial perturbations on state observations”, Advances in Neural Information Processing Systems, vol. 33, pp. 21024–21037, 2020.
b3 T. Oikarinen, W. Zhang, A. Megretski, L. Daniel, and T.W. Weng, “Robust deep reinforcement learning through adversarial loss”, Advances in Neural Information Processing Systems, vol. 34, pp. 26156–26167, 2021.
b4 A. Pattanaik, Z. Tang, S. Liu, G. Bommannan, and G. Chowdhary, “Robust deep reinforcement learning with adversarial attacks”, arXiv preprint arXiv:1712.03632, 2017.
b5 I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples”, arXiv preprint arXiv:1412.6572, 2014.
b6 A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks”, arXiv preprint arXiv:1706.06083, 2017.
b7 E. Tretschk, S. J. Oh, and M. Fritz, “Sequential attacks on agents for long-term adversarial goals”, arXiv preprint arXiv:1805.12487, 2018.
b8 S. Baluja and I. Fischer, “Learning to attack: Adversarial transformation networks”, in Proceedings of the AAAI Conference on Artificial Intelligence, 2018, vol. 32.
b9 H. Zhang, H. Chen, D. Boning, and C.-J. Hsieh, “Robust reinforcement learning on state observations with learned optimal adversary”, arXiv preprint arXiv:2101.08452, 2021.
b10 Y. Sun, R. Zheng, Y. Liang, and F. Huang, “Who is the strongest enemy? towards optimal and efficient evasion attacks in deep rl”, arXiv preprint arXiv:2106.05087, 2021.
b11 J. Guo, Y. Chen, Y. Hao, Z. Yin, Y. Yu, and S. Li, “Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 115–122.
b12 N. H. Pham, L. M. Nguyen, J. Chen, H. T. Lam, S. Das, and T.W. Weng, “Evaluating Robustness of Cooperative MARL: A Model-based Approach”, arXiv preprint arXiv:2202.03558, 2022.
b13 J. Lin, K. Dzeparoska, S. Q. Zhang, A. Leon-Garcia, and N. Papernot, “On the robustness of cooperative multi-agent reinforcement learning”, in 2020 IEEE Security and Privacy Workshops (SPW), 2020, pp. 62–68.
b14 Y. Hu and Z. Zhang, “Sparse adversarial attack in multi-agent reinforcement learning”, arXiv preprint arXiv:2205.09362, 2022.
b15 K. Zhang, T. Sun, Y. Tao, S. Genc, S. Mallya, and T. Basar, “Robust multi-agent reinforcement learning with model uncertainty”, Advances in neural information processing systems, vol. 33, pp. 10571–10583, 2020.
b16 S. Han, S. Su, S. He, S. Han, H. Yang, and F. Miao, “What is the Solution for State Adversarial Multi-Agent Reinforcement Learning?”, arXiv preprint arXiv:2212.02705, 2022.
b17 Z. Zhou and G. Liu, “Romfac: A robust mean-field actor-critic reinforcement learning against adversarial perturbations on states”, arXiv preprint arXiv:2205.07229, 2022.
b18 H. Shi, G. Liu, K. Zhang, Z. Zhou and J. Wang, “MARL Sim2real Transfer: Merging Physical Reality With Digital Virtuality in Metaverse”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 4, pp. 2107–2117, April 2023, doi: 10.1109/TSMC.2022.3229213.
b19 F. A. Oliehoek and C. Amato, A concise introduction to decentralized POMDPs. Springer, 2016.
b20 M. Samvelyan et al., “The starcraft multi-agent challenge”, arXiv preprint arXiv:1902.04043, 2019.
|
http://arxiv.org/abs/2307.01571v1
|
20230704085509
|
Two-dimensional simulation of the spin-flip in the Kapitza-Dirac effect
|
[
"Ping Ge",
"Sven Ahrens",
"Baifei Shen"
] |
quant-ph
|
[
"quant-ph"
] |
ahrens@shnu.edu.cn
bfshen@shnu.edu.cn
Shanghai Normal University, Shanghai 200234, China
Many calculations in strong field quantum field theory are carried out by using a simple field geometry, often neglecting the spatial field envelope. In this article, we simulate the electron diffraction quantum dynamics of the Kapitza-Dirac effect in a Gaussian beam standing light wave. The two-dimensional simulation is computed in a relativistic framework, by solving the Dirac equation with the fast Fourier transform split-operator method. Apart from the numerical propagation method, our results are obtained without applying approximations and demonstrate that a spin-flip in the Kapitza-Dirac effect is possible.
Two-dimensional simulation of the spin-flip in the Kapitza-Dirac effect
Baifei Shen
August 1, 2023
§ INTRODUCTION
Spin effects for free electrons can be facilitated in present day strong laser fields <cit.>. One particular variation of spin-laser interaction of electrons is the Kapitza-Dirac effect <cit.>, for which spin effects are predicted <cit.>, in a scenario similar to Bragg scattering <cit.>. The setup of Kapitza-Dirac scattering, in which an electron traverses a standing light wave formed by two counterpropagating beams, can be tailored to be sensitive to the spin polarization of the incoming electron <cit.>. The effect thereby allows for a laser-based Stern-Gerlach type spin observation <cit.>, in the form of an induced Compton scattering process <cit.>, a fundamental photon-only interaction. Experiments in the Bragg regime exist <cit.>, even observing the cancellation of the interaction at parameters where spin effects are expected <cit.>.
Most theoretical descriptions of the Kapitza-Dirac effect implement the standing wave potential of the external field by two counterpropagating plane waves, where the field's width and longitudinal polarization component are neglected. As Gaussian beam solutions <cit.> can be considered more realistic than a plane wave approach, we accounted for the Gaussian beam influence in a recent study on spin dynamics in Kapitza-Dirac scattering <cit.>. Still, in order to solve the problem analytically, rough approximations were imposed on the external field. In order to solve the relativistic equations of motion of the Dirac equation in the perturbative approach, a superposition of a discrete set of plane waves was used in <cit.>. Naturally, the question arises whether the approximations of the standing wave vector potential within a perturbative solution technique are sufficiently accurate. In this article, we solve the quantum dynamics of the electron wave function on a two-dimensional grid, by using a fast Fourier transform (FFT) split-operator method <cit.>. Within this method, the Gaussian beam potential can be implemented exactly, such that no approximations need to be applied to the external field. This work is thus a demonstration of spin-flip dynamics of an electron in the Kapitza-Dirac effect on the basis of a relativistic, two-dimensional simulation, in which the Dirac equation is evolved numerically.
Our article is organized as follows. In section <ref>, we discuss the simulation setup, by introducing the Gaussian laser beam (section <ref>), the relativistic quantum description (section <ref>) and the initial condition of the electron quantum state (section <ref>). We also mention details about the simulation parameter configuration, as well as the numerical procedure of the Q-Wave library, in section <ref>. We then present the simulation results in section <ref>. The results include the demonstration of electron diffraction dynamics as in the Kapitza-Dirac effect (section <ref>) and a display of the spin properties of the quantum dynamics in section <ref>. We finally summarize our investigation and give an outlook on possible, related topics in section <ref>.
§ SETUP OF OUR INVESTIGATION
For our computer simulation we make use of the Q-Wave utility <cit.>. Q-Wave is an advanced computer code, available as C++ library, which implements the FFT split-operator method, among other numerical algorithms <cit.>. It provides the building blocks for numerically propagating wave functions in time. In the following, we describe the physical setup which we investigate by using Q-Wave. Regarding the units in our article, we write m for the electron rest mass, c for the vacuum speed of light, ħ for the reduced Planck constant and q for the elementary charge in a Gaussian unit system.
§.§ Gaussian beam configuration
We first describe the vector potential of our simulation. A Gaussian beam shaped standing light wave can be formed from two Gaussian beams <cit.>, where reference <cit.> builds on a solution based on an angular spectrum representation of plane waves. The laser beam is propagating along the x-axis, in our two-dimensional simulation, where the simulation area is aligned in the x-y plane. For the geometry in this article, the Gaussian beam is denoted as
A_x,d=-2dA_0w_0/wϵy/wexp(-r^2/w^2)cos(ϕ^(1)_G,d)
for the longitudinal polarization component and
A_y,d=-A_0w_0/wexp(-r^2/w^2)sin(ϕ_G,d)
for the transverse polarization component of the vector potential in Coulomb gauge [Further details about adjusting the fields in <cit.> can be found in the appendix of reference <cit.>.]. The potentials in Eq.(<ref>) further contain the phases
ϕ_G,d =ω t-dk_Lx+tan^-1(dx/x_R)-dxr^2/x_Rw^2
ϕ_G,d^(1) =ϕ_G,d+tan^-1(dx/x_R)
and the symbol w represents the x-dependent beam waist
w(x)=w_0√(1+x^2/x^2_R) .
We choose the Gaussian beam to oscillate with frequency ω=0.1 mc^2/ħ, corresponding to the wave number k_L=0.1 m c/ħ and wavelength λ=2π/k_L. Reference <cit.> introduces the beam waist as w_0=1/(k_Lϵ), where we set the beam divergence as ϵ=0.02. The Rayleigh length of the Gaussian beam is given by x_R=k_Lw_0^2/2.
The index d in Eqs. (<ref>) parameterizes the propagation direction of the laser beam, where the two possible directions d ∈ {-1,1} correspond to the left or right moving direction, respectively. In the Kapitza-Dirac effect, the standing wave vector potential can be formed from two counterpropagating beams as
A⃗ = ∑_d ( A_x,de⃗_x + A_y,de⃗_y ) .
In Fig. <ref> we display the field A⃗ as it appears after a quarter laser period t=π/(2ω). In contrast to previous theoretical investigations, transverse and longitudinal polarization are both computed without applying approximations here, with a finite beam width and a longitudinal polarization component.
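The standing-wave potential of Eqs. (<ref>) can be evaluated directly on the simulation grid. The sketch below uses natural units (ħ=m=c=q=1), the parameter values quoted above, and assumes r^2=y^2 in the two-dimensional geometry; it is an illustration of the formulas, not the Q-Wave implementation.

```python
import numpy as np

# Parameters from the text, in natural units (hbar = m = c = q = 1).
k_L = 0.1                      # laser wave number
omega = 0.1                    # laser angular frequency
eps = 0.02                     # beam divergence
A0 = 0.1                       # field amplitude (value as quoted in the text)
w0 = 1.0 / (k_L * eps)         # beam waist
x_R = k_L * w0**2 / 2.0        # Rayleigh length


def standing_wave_potential(x, y, t):
    """Vector potential (A_x, A_y) of the two counterpropagating Gaussian
    beams, following the expressions for A_{x,d} and A_{y,d} above."""
    r2 = y**2                                          # transverse radius^2 in 2D
    w = w0 * np.sqrt(1.0 + x**2 / x_R**2)              # x-dependent beam waist
    A_x, A_y = 0.0, 0.0
    for d in (+1, -1):                                 # the two propagation directions
        phi = omega * t - d * k_L * x + np.arctan(d * x / x_R) - d * x * r2 / (x_R * w**2)
        phi1 = phi + np.arctan(d * x / x_R)            # additional phase of A_{x,d}
        envelope = (w0 / w) * np.exp(-r2 / w**2)
        A_x += -2.0 * d * A0 * envelope * eps * (y / w) * np.cos(phi1)
        A_y += -A0 * envelope * np.sin(phi)
    return A_x, A_y
```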
§.§ Relativistic quantum theory
Since the laser field in our simulation is strong and the initial momentum of the electron is 1 mc, we use a relativistic spin-1/2 quantum theory for the description of our simulation, given by the electromagnetically coupled Hamiltonian of the Dirac equation
H = c ( p⃗ - q/cA⃗) ·α⃗+qϕ+β mc^2 .
The gauge potential A⃗ has been introduced in subsection A, and we set the scalar potential ϕ=0 in our code. The objects α⃗ and β are the 4×4 Dirac matrices in standard representation (also called Dirac representation). We write the energy eigenvalue relations in momentum space as
H ψ^s_z(p⃗)= E_pψ^s_z (p⃗) ,
with the positive plane wave solutions of the Dirac equation
ψ^s_z (p⃗) =u_z^s(p⃗) e^ir⃗·p⃗/ ħ ,
where we denote the bi-spinors u_z^s(p⃗) as
u_z^s(p⃗)=√(E_p+mc^2/2mc^2)[ χ^s; c σ⃗·p⃗/E_p+mc^2 ] .
In Eqs.(<ref>)-(<ref>), the parameter s∈{+,- } indexes the state of the electron spin, where the index z indicates that the quantization axis of the spin orientation is along the z-axis. We also write E_p=√(m^2c^4+c^2p⃗^2) for the relativistic energy, p⃗=p_x·e⃗ _x+ p_y·e⃗_y for the momentum vector and σ⃗ for the vector of Pauli matrices.
§.§ The initial electron quantum state
According to the Q-Wave simulation package <cit.>, the wave packet of the electron is initialized as a Gaussian wave packet, in our two-dimensional simulation, with the density distribution
ρ(p⃗) = 1/√(2 π)σ_pexp[- ( p⃗-p⃗_0/2σ_p)^2 -ir⃗_0 ·p⃗/ħ]
in momentum space. The Gaussian distribution is centered at momentum p⃗_0, with wave packet size parameter σ_p. The second term in the exponential implies the particle's position at r⃗_0. The wave function in momentum space is set up as
φ^s_z(p⃗,0)=u^s_z(p⃗)ρ(p⃗) ,
on the basis of the distribution (<ref>). In position space, the wave function Ψ_z is then implied by the two-dimensional Fourier transformations
φ^s_z(p⃗,t) =1/2πħ∫Ψ^s_z(r⃗,t) exp(-i r⃗·p⃗/ħ) d^2r
Ψ^s_z(r⃗,t) =1/2πħ∫φ^s_z(p⃗,t) exp(i r⃗·p⃗/ħ) d^2p .
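For illustration, the scalar envelope ρ(p⃗) of Eq. (<ref>) can be sampled on a momentum grid as below; the bi-spinor factor u_z^s(p⃗) of the full initial state is omitted, and a discrete normalization replaces the analytic prefactor. The position-space wave function then follows from an inverse FFT according to Eq. (<ref>).

```python
import numpy as np

hbar = 1.0


def gaussian_envelope(px, py, p0, r0, sigma_p):
    """Scalar momentum-space envelope rho(p): a Gaussian of width sigma_p
    centered at p0, with a phase factor placing the particle at r0."""
    PX, PY = np.meshgrid(px, py, indexing='ij')
    gauss = np.exp(-((PX - p0[0])**2 + (PY - p0[1])**2) / (4.0 * sigma_p**2))
    phase = np.exp(-1j * (r0[0] * PX + r0[1] * PY) / hbar)
    rho = gauss * phase
    return rho / np.sqrt(np.sum(np.abs(rho)**2))       # normalize on the grid
```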
§.§ Numerical propagation and simulation parameters
The Q-Wave library provides numerical algorithms to solve the time evolution of the quantum wave function in multiple time steps. We make use of the Fourier split-operator method <cit.>, for which a time step with time stepping Δ t can be denoted as a mapping of the wave function Ψ(r,t) to the wave function Ψ(r,t + Δ t) at a later point in time by
Ψ(r⃗,t+Δ t)=U⃗(t+Δ t,t)Ψ(r⃗,t) .
The solution is implemented such that the time evolution operator U⃗(t + Δ t, t) corresponds to the computation of the exponential
U⃗(t_i+Δ t,t_i)=T̂exp[ -i/ħ∫_t_i^t_i+Δ tH(t')dt']
with time ordering operator T̂ and Hamiltonian Eq. (<ref>) of our system.
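As an illustration of the split-operator idea, the sketch below performs one Strang-split step for a scalar Schrödinger-type Hamiltonian. This is a simplified stand-in: the actual simulation propagates the full four-component Dirac spinor of Eq. (<ref>) with the Q-Wave library.

```python
import numpy as np


def split_step(psi, V, px, py, dt, mass=1.0, hbar=1.0):
    """One Strang-split step exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2) for
    H = p^2/(2m) + V(x, y).  px, py are the momentum grids in FFT ordering,
    e.g. 2*np.pi*hbar*np.fft.fftfreq(N, d=dx)."""
    PX, PY = np.meshgrid(px, py, indexing='ij')
    T = (PX**2 + PY**2) / (2.0 * mass)                 # kinetic energy on the p-grid
    psi = np.exp(-0.5j * V * dt / hbar) * psi          # half potential step (position space)
    psi_p = np.fft.fft2(psi)                           # to momentum space
    psi_p = np.exp(-1j * T * dt / hbar) * psi_p        # full kinetic step
    psi = np.fft.ifft2(psi_p)                          # back to position space
    psi = np.exp(-0.5j * V * dt / hbar) * psi          # second half potential step
    return psi
```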
In the following we will introduce specific values of parameters, which are set in the simulation. We carry out our simulation on a grid with 2048× 128 grid points, with simulation area width 80λ and height 40λ, in the x and y direction, respectively. Along the x-axis we set the minimum and maximum simulation box limits x_min=-40λ and x_max=40λ. For the y-axis, we require the electron wave packet to be centered in our simulation area, initially and during the simulation, as sketched in Fig. <ref>. We choose the initial simulation box limits as y_min(0)=-160λ and y_max(0)=-120λ, approximately 15 half beam waists w_0 away from the laser beam center.
The electron's initial position along the y-direction is in the simulation box center at y=-140 λ. Regarding the electron's momentum, we set the momentum p_x = - ħ k_L along x-axis, to meet the Bragg condition for the two-photon Kapitza-Dirac effect <cit.>. The y-component of the electron momentum is implied by the requirement for spin effects in the Kapitza-Dirac effect <cit.> to be p_y = 1 mc.
The momentum parameter p⃗_0 for the initial electron state in Eq. (<ref>) is therefore assuming the value
p⃗_0 =
[ -ħ k_L; m c ] ,
with inclination angle of the Bragg condition
ϑ = arctan(|p_x|/|p_y|) .
Requiring that the electron needs to move through the coordinate origin, this also implies the initial electron position along the laser beam propagation direction to be x=14λ, such that the initial position vector in Eq. (<ref>) reads
r⃗_0 =
[ 14 λ; -140 λ ] .
The momentum spread of the electron is set to σ_p = ħ k_L/200, corresponding to an electron wave function extension on the order of 100 laser wavelengths. Concerning the simulation time, we mention that the significant y-component of the electron momentum (<ref>) implies the approximate classical electron velocity v_y=c/√(2), with the corresponding y-component of the classical electron trajectory
y_CET(t)=-140 λ + 1/√(2) ct .
This implies the traveling time T̃=280λ√(2)/c=2.5·10^4ħ/(mc^2) for the electron to reach y=140 λ. Further, we choose the time stepping Δ t=0.05 ħ/m c^2, for resolving the oscillation of the mass term β m c^2 in the Dirac equation. Therefore, regarding the electron velocity component v_y, we need to shift the simulation area by 5 grid points along the y-direction every 2805 time steps, to keep the electron wave function centered in the simulation box, as illustrated in Fig. <ref>. After 177 grid rearrangements we reach the total simulation time T=Δ t × 2805 × 177 ≈T̃.
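The quoted numbers can be checked for mutual consistency with a few lines of arithmetic (natural units ħ=m=c=1):

```python
import numpy as np

k_L = 0.1
lam = 2.0 * np.pi / k_L                   # laser wavelength
T_travel = 280.0 * lam * np.sqrt(2.0)     # time to cover 280 lambda at v_y = c/sqrt(2)
print(T_travel)                           # ~2.49e4, i.e. ~2.5e4 hbar/(m c^2)

dt = 0.05
T_total = dt * 2805 * 177                 # time step x steps per shift x number of shifts
print(T_total)                            # ~2.48e4, approximately T_travel

dy = 40.0 * lam / 128                     # grid spacing along y
v_shift = 5 * dy / (2805 * dt)            # effective speed of the moving simulation box
print(v_shift, 1.0 / np.sqrt(2.0))        # ~0.70 c, matching v_y = c/sqrt(2)
```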
Regarding the external field, we mention again the Gaussian beam parameters of Eqs. (<ref>) and (<ref>) for the vector potential from section <ref>, as ω=0.1 mc^2/ħ for the laser angular frequency, with wave number k_L=0.1mc/ħ, A_0=0.1mc/q for the field amplitude and ϵ=0.02 for the beam divergence. We remark, that for technical reasons, we introduce a shift between the kinetic and canonical momentum of the wave packet by employing a gauge with constant vector potential A_m = 1.0 mc/q in the y-axis of Eq. (<ref>) in our numeric implementation.
§ SIMULATION RESULTS AND ANALYSIS
After the introduction of the external field (section <ref>), relativistic quantum theory (section <ref>), initial quantum state (section <ref>) and simulation parameters (section <ref>) we are now turning to the discussion of the simulation results and its analysis.
§.§ Motion of the electron probability density
We display the electron probability density
|Ψ^+_z(r⃗,t)|^2 = [Ψ^+_z(r⃗,t) ]^†Ψ^+_z(r⃗,t)
at initial time t=0 in Fig. <ref>(a) and after propagation for the simulation time T in Fig. <ref>(b), where all parameters are used as described in section <ref>. For illustration of the process, which takes place between the situation in Fig. <ref>(a) and Fig. <ref>(b), we compute the y-averaged density
Φ(x,t)=∫_y_min(t)^y_max(t)|Ψ(x,y,t)|^2 dy
and display it in Fig. <ref>. We observe in Fig. <ref> that the electron is moving from the right to the left, corresponding to the initially set and the expected electron positions along the x-axis at x=14 λ and x=-14 λ, respectively. However, due to the interaction of the electron with the laser beam at time T/2, a diffracted part appears in the central region of Fig. <ref>, which moves from the center to the right, displaying the Kapitza-Dirac effect. The dynamics in Fig. <ref> explains how the electron, initially located on the right in Fig. <ref>(a), ends up on the left in Fig. <ref>(b). Accordingly, the gray peak at the right of Fig. <ref>(b) corresponds to the diffracted electron beam. Note that the electron does not show any significant motion along the y-axis in Fig. <ref>, as we move the simulation box together with the electron along the y-direction, corresponding to the sketch in Fig. <ref>. An animation of the position space dynamics of the electron density |Ψ(x,y,t)|^2 as in Fig. <ref> is provided in the supplemental material.
§.§ Investigation of spin resolved quantum dynamics
Having demonstrated quantum dynamics as predicted by the Kapitza-Dirac effect, we want to further present spin effects as discussed in <cit.>, where the references imply that the spin is rotating around the magnetic field direction of the linearly polarized standing light wave. In our case, the magnetic field is pointing in the z-direction, implying only a phase change for spin-z polarized electrons. We therefore construct the x-polarized wavefunction
φ_x^s(p⃗,t) =1/√(2)[φ_z^+(p⃗,t) +s φ_z^-(p⃗,t)]
from the z-polarized solutions, with s ∈{+,-}. Further, with the use of x-polarized spinors
u_x^s(p⃗)=1/√(2)[u_z^+(p⃗)+s u_z^-(p⃗)] ,
we compute the transition amplitude
c^s_f,s_i(p⃗,t) = ⟨u_x^s_f(p⃗)|φ_x^s_i(p⃗,t)|⟩
from initial spin s_i to final spin s_f with respect to the x-polarization axis. Note, that the quantities (<ref>) and (<ref>) are given in momentum space, since one can easily specify the spin in terms of the bi-spinors (<ref>). The momentum space wave function (<ref>) is implied from the simulated position space wave function Ψ(r⃗,t) by the Fourier transform (<ref>). The absolute value squared of the transition (<ref>) is displayed in Fig. <ref> at time T, the end of the simulation period. Thus, Fig. <ref> corresponds to the momentum space situation of the position space density in Fig. <ref>(b).
The prominent peak on the left in Fig. <ref>(a) corresponds to the initial condition (<ref>) with momentum coordinates (<ref>) and remains merely unchanged during the course of the simulation. It corresponds to the electron's motion from the right to the left in Fig. <ref>. In contrast, the right peak in Fig. <ref>(a) and the peak in Fig. <ref>(b) arise due to the interaction of the electron with the laser, and correspond to the right-moving Bragg peak in Fig. <ref>. The initial condition and the appearance of the Bragg peak over time can be viewed in detail in the animations of Fig. <ref> in the supplemental material. The figure allows for the association of spin polarization with the moving and diffracted portions of the electron wave function. While the left-moving electron beam is purely polarized in the positive x direction (as implied by the initial condition), the diffracted beam shows a contribution with s^f=+ as well as s^f=-. Note that the peak of the negative spin-x polarization appears to be more pronounced than the peak of the positive spin-x polarization.
For quantifying the spin amplitude, we plot the wave function's probability density in momentum space
Υ^0(p_x)=|φ^+_z(p_x,mc,T)|^2
= [φ^+_z(p_x,mc,T) ]^†φ^+_z(p_x,mc,T)
together with the spin projections
Υ^s_f(p_x)=|c^s_f,+(p_x,mc,T)|^2
at y-axis position p_y=mc in Fig. <ref>. We observe the initial beam on the left and the diffracted beam on the right, as we have identified them already in Fig. <ref>. One can see that the projection onto the spin +x polarization, Υ^+, coincides with the probability density Υ^0 for the initial beam. On the contrary, it is the projection onto the spin -x polarization, Υ^-, which matches the probability density Υ^0 in the diffracted beam. In numbers, the diffracted beam's spin +x polarization amplitude is Υ^+(0.1mc)=4.6× 10^-5, whereas the spin -x polarization amplitude Υ^-(0.1mc)=3.9× 10^-3 is larger by about two orders of magnitude. We thus conclude clear spin-flip dynamics along the x spin-polarization axis from our simulation, which agrees with the predictions in references <cit.> and <cit.>. A time evolution of |φ^+_z(p_x,mc,t)|^2 and |c^s_f,+(p_x,mc,t)|^2 of the wave function's in-field dynamics, in a similar fashion as in Fig. <ref>, is provided in the supplemental material of this article.
§ CONCLUSION AND OUTLOOK
In this article, we have carried out a two-dimensional, relativistic simulation of the Kapitza-Dirac effect, by using an FFT split-operator method. The standing wave laser beam is modelled by two counterpropagating Gaussian beams and thus goes beyond the plane wave ansatz of previous investigations. Likewise, the electron wave function is implemented as a finite-size Gaussian wave packet. Within the used parameters, we are able to show a Bragg peak in the Bragg regime, which is the characteristic aspect of the Kapitza-Dirac effect. Further, we have demonstrated a spin-flip along the x-polarization axis of the electron spin, implying that formerly discussed spin effects are theoretically possible in Kapitza-Dirac scattering, which we conclude without applying approximations.
It will be interesting to see how spin effects in the Kapitza-Dirac effect change with the parameters, in particular with regard to the photon momentum ħ k_L and the beam divergence ϵ. The recently raised question about the influence of the longitudinal beam polarization component of the Gaussian beam on the spin dynamics will be of relevance <cit.>. Further questions of interest in two-dimensional Kapitza-Dirac scattering are the role of the negative solutions in the relativistic quantum dynamics and their behavior when additional external fields are added.
We thank Heiko Bauke for providing the Q-Wave library for implementing quantum computations. The work was supported by the National Natural Science Foundation of China (Grants No. 11975155 and 11935008).
|
http://arxiv.org/abs/2307.01011v2
|
20230703134340
|
On a chemotaxis-hapotaxis model with nonlinear diffusion modelling multiple sclerosis
|
[
"S. Fagioli",
"E. Radici",
"L. Romagnoli"
] |
math.AP
|
[
"math.AP",
"35K65, 35B45, 35Q92, 35K57, 92C17"
] |
Simone Fagioli - DISIM - Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, Via Vetoio 1 (Coppito)
67100 L'Aquila (AQ) - Italy
simone.fagioli@univaq.it
Emanuela Radici - DISIM - Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, Via Vetoio 1 (Coppito)
67100 L'Aquila (AQ) - Italy
emanuela.radici@univaq.it
Licia Romagnoli - Dipartimento di Matematica e Fisica - Facoltà di Scienze Matematiche, Fisiche e Naturali,
Università Cattolica del Sacro Cuore di Brescia,
Sede del Buon Pastore
Via Musei 41
25121 Brescia (BS) - Italy
licia.romagnoli1@unicatt.it
On a chemotaxis-hapotaxis model with nonlinear diffusion modelling multiple sclerosis
We investigate the existence of global weak solutions for a system of chemotaxis-haptotaxis type with nonlinear degenerate diffusion, arising in the modelling of the Multiple Sclerosis disease. The model consists of three equations describing the evolution of macrophages (m), cytokine (c) and apoptotic oligodendrocytes (d).
The main novelty in our work is the presence of a nonlinear diffusivity D(m), which is more appropriate from the modelling point of view. Under suitable assumptions and for sufficiently regular initial data, adapting the strategy in <cit.>, we show the existence of global bounded solutions for the model analysed.
S. Fagioli, E. Radici, L. Romagnoli
§ INTRODUCTION
Multiple Sclerosis (MS) is a chronic inflammatory disease that affects the central nervous system, including the brain and spinal cord. It can lead to progressive disability. The disease is caused by an abnormal response of the immune system, resulting in inflammation and damage to myelin and neurons. Myelin is a lipid-rich sheath that surrounds the axons of neurons and facilitates the transmission of nerve impulses. It is produced by the oligodendrocytes. The immune system, specifically the macrophages, attacks and destroys the oligodendrocytes and the myelin sheath around the nerves. This demyelination process leads to the formation of lesions (referred to as plaques in 2D sections) in the white matter of the brain <cit.>.
Demyelination in multiple sclerosis (MS) patients is a heterogeneous process that gives rise to various clinical variants. In the classical study <cit.>, four different types of lesions (Type I - IV) were identified, each corresponding to a different clinical variant. The presence of different lesion types reflects the stage-dependent nature of the pathology, with the evolution of lesional pathology contributing to the heterogeneity. Specifically, Type III lesions are typical in the early stages of the disease, followed by Type I and Type II lesions <cit.>.
One clinical variant of particular interest is Baló MS, described in the literature as a form of MS that exhibits an acute fulminant disease course, leading to rapid progression and death within a few months. Baló MS is characterized by the presence of large demyelinated lesions displaying a distinct pattern of concentric layers, alternating between areas of preserved and destroyed myelin. These lesions are classified as Type III lesions according to the classification system mentioned earlier <cit.>.
In the papers <cit.>, the authors propose a reaction-diffusion-chemotaxis model to capture the dynamics of early-stage multiple sclerosis, with a specific focus on describing Baló's sclerosis, which is a rare and aggressive form of the disease. The model consists of three equations that govern the evolution of macrophages, cytokines, and apoptotic oligodendrocytes.
The evolution of macrophages in the model is influenced by three mechanisms. Firstly, macrophages undergo random movement, which is described by a linear isotropic diffusion term. Secondly, they exhibit chemotactic motion in response to a chemical gradient provided by the cytokines. Lastly, the production and saturation of activated macrophages are taken into account. These various factors contribute to the evolution of activated macrophages, denoted by the variable m in the model. The above considerations produce the following evolution of the activated macrophages m
∂_t m = DΔ m - χ∇·(m/(1+δ m)∇ c)+ m(1-m).
The reaction term in the equation mentioned above captures the production and saturation of activated macrophages. It is hypothesized that the activation of microglia, a type of macrophage in the central nervous system, plays a role in the development of early multiple sclerosis (MS) lesions <cit.>. However, the exact underlying mechanism behind this activation is still unknown.
In type II lesions, it is suggested that activated T-lymphocytes may be responsible for inducing macrophage activation. On the other hand, in type III lesions, there is a prominent activation of macrophages observed, accompanied by relatively mild infiltration of T-cells <cit.>. These observations highlight the heterogeneity in the underlying mechanisms and cellular interactions involved in different types of MS lesions.
The pro-inflammatory cytokines, which are signaling molecules involved in the immune response, are assumed to be produced by both the damaged oligodendrocytes and activated macrophages. In the model, they are described by an equation that takes into account their linear diffusion (possibly occurring at a different scale compared to the macrophages) and degradation. The equation governing the evolution of pro-inflammatory cytokines c can be written as follows:
τ∂_t c = αΔ c -c+ λ d +β m.
The destroyed oligodendrocytes, which are the target cells of the immune response, are assumed to be immotile. As a result, there is no spatial dynamics associated with them, and their evolution is governed by the following equation:
∂_t d = r m/(1+δ m)(1-d).
The parameter r in the equation balances the speed of the front and the intensity of the macrophages in damaging the myelin. It determines the relative contribution of the damaging term m/(1+δ m), which has been chosen to be positive and increasing with saturation for high values of the macrophage density.
The coefficient χ represents the chemotactic sensitivity, indicating how sensitive the macrophages are to the chemical gradient provided by the pro-inflammatory cytokines. The parameters τ and α are positive constants, and λ, β, and δ are nonnegative constants.
It is worth noting that when r=0, the model (<ref>) reduces to a parabolic-elliptic chemotaxis system with a volume-filling effect and a logistic source.
The system introduced in <cit.> has been studied in recent years through several contributions. Authors in <cit.> investigated various issues related to the structure, stability of stationary states, and radial solutions. Furthermore, see also <cit.> for a different term describing the production and saturation of activated macrophages. The global existence of strong solutions to this system was proven in one dimension in <cit.> and later extended to any dimension in <cit.>, where it was also shown that the solution remains uniformly bounded in time. Additionally, we would like to mention the extension of the model introduced in <cit.>, where the authors introduced a multi-species system to describe the activity of various pro- and anti-inflammatory cells and cytokines in the plaque, and quantified their effect on plaque growth.
The above-mentioned model considered the random motility of the cells, denoted by D, as a constant, resulting in linear isotropic diffusion. As emphasized in the classical references <cit.>, from a physical perspective, cell migration through biological tissues should be modeled as movement in a porous medium. Thus, we are led to consider the cell motility D as a nonlinear function of the macrophage density, denoted as D(m). Several possible choices have been presented in the literature to model different types of movement, including volume filling effects and saturation. However, in the present work, we will focus on power-law type nonlinearities for D(m), specifically D(m)≅ m^γ-1, where γ>1. More precisely, in this paper, we will investigate the following generalization of the model introduced in <cit.>:
∂_t m = ∇·(D(m) ∇ m) - χ∇·(f(m) ∇ c) + M(m),
τ∂_t c = αΔ c + λ d - c + β m,
∂_t d = r h(m) (1-d),
where the system is posed in Ω⊂ℝ^n, with n=1,2,3, as a bounded domain with a smooth boundary ∂Ω. Our approach will be based on the strategy presented in <cit.>, where the existence of global classical solutions is established through a regularisation argument on the degenerate diffusivity.
Since Keller and Segel <cit.> introduced the classical chemotaxis model in 1970, the Keller-Segel model and its modified versions have been widely studied by many researchers over the years. References such as <cit.> provide an extensive overview of these studies. It is well known that the formation of a cell aggregate can lead to finite-time blow-up phenomena. For instance, for the Keller-Segel model in the following form:
u_t=Δ u-∇·(u∇ v)
τ v_t-Δ v+v=u,
researchers have investigated global solutions and blow-up solutions, as documented in <cit.>.
System (<ref>) can be regarded as a chemotaxis-haptotaxis model, with the first such model introduced in <cit.> to describe the invasion process of cancer cells into surrounding normal tissue. In this model, the random diffusion of activated macrophages is characterized by linear isotropic diffusion, which corresponds to the model (<ref>) with γ=1. For this specific case, global classical solutions have been obtained in two-dimensional space by <cit.>, while for the three-dimensional case, global classical solutions are only obtained for large values of μ/χ, as shown in <cit.>. In <cit.>, the authors considered the case of nonlinear diffusion without degeneracy, where the standard porous medium diffusivity Δ m^γ is replaced by Δ((m+ϵ)^γ-1m). Global classical solutions are established for this case, subject to certain restrictions on the possible porous medium exponents related to the problem dimension.
Further results in this direction have been obtained in <cit.> and <cit.>, where the existence of global and bounded classical solutions is shown for any γ > 2n-2/n. Recently, Zheng <cit.> extended these results to the cases γ > 2n/n+2. However, the cases 1<γ≤2n/n+2 remain unknown. On the other hand, for the fast diffusion cases, i.e., 0<γ<1, to the best of our knowledge, there have been no relevant studies. In <cit.>, the author presents further improvements in the optimality conditions for the nonlinear diffusion exponent.
§.§ Structure of the paper
The paper is organized as follows. In Sect. <ref> we clarify the notation and we list all the assumptions. In Sect. <ref> are stated the main results of this paper. In Sect. <ref> we collect some known technical results which will be useful for our analysis throughout the paper. In Sect. <ref> a detailed investigation of the regularised problem associated to (<ref>) is performed in order to show some useful a-priori estimates and boundedness of the solutions. The local existence in time of the weak solutions to the regularised problem is shown in the Appendix <ref>.
In Sect.<ref> we derive the existence of global weak solutions of (<ref>) by showing suitable compactness of the solutions of the regularised problems. Final remarks on the fundamental role of the nonlinear diffusion adopted in model (<ref>), endorsed by some numerical simulations on 1D and 2D domains, are listed in Sect.<ref> together with possible developments of the problem that are not further investigated in the present work.
In Appendix <ref> we describe the employed finite volume numerical scheme.
§ PRELIMINARIES AND MAIN RESULT
§.§ Assumptions
Let Ω⊂^n be a bounded domain with smooth boundary and t>0. Without loss of generality we fix the constants τ=α=λ=β=r=1 and we consider the following form for system (<ref>)
∂_t m = ∇·( D(m) ∇ m - χ f(m)∇ c)+M(m),
∂_t c = Δ c - c +d+ m,
∂_t d = h(m)(1-d),
x∈Ω, t>0,
endowed with the Neumann boundary conditions
∂ m/∂𝐧|_∂Ω=∂ c/∂𝐧|_∂Ω=∂ d/∂𝐧|_∂Ω=0,
and subject to the initial conditions
m(x,0)=m_0(x), c(x,0)=c_0(x), d(x,0)=d_0(x),
where the functions m_0, c_0 and d_0 satisfy a standard compatibility condition in the sense that
m_0∈ W^1,∞(Ω), m_0≥0 in Ω, m_0≢0,
c_0∈ W^1,∞(Ω), c_0≥0 in Ω, ∂ c_0/∂ν=0 on ∂Ω,
d_0∈ C^2+ϑ(Ω̅), 0≤ d_0≤ 1 in Ω̅, ∂ d_0/∂ν=0 on ∂Ω,
for some ϑ∈(0,1).
In system (<ref>) we consider a nonlinear diffusion function D under the following assumptions
A-D1
D∈ C^2([0,∞)), D(s)≥ 0 for all s≥0,
A-D2
D(s)≥ k_D s^γ-1 for all s≥0, for some k_D>0,
A-D3 Φ(u)=∫_0^u D(s) ds.
Concerning the reaction term in the first equation of (<ref>) we assume that
A-M
M∈ C^1([0,∞)), M(s)/s≤μ(1-s) for all s>0.
We finally assume that
A-h
h∈ C^1([0,∞)), h'(s)≤ k_h for all s≥0.
and
A-f
f∈ C^1([0,∞)), f(s)≤ k_f s for all s≥0.
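For instance, the prototypical choices of the original model, M(s)=μ s(1-s), h(s)=s/(1+δ s) and f(s)=s/(1+δ s), fulfil the three assumptions above with k_h=k_f=1, since M(s)/s=μ(1-s), h'(s)=1/(1+δ s)^2≤1 and f(s)≤ s.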
§.§ Main result
We now state the notion of solutions we are dealing with
Let T∈(0,+∞]. Consider Ω⊂ℝ^n a bounded domain with smooth boundary ∂Ω. A triple (m, c, d) of non-negative functions is called a weak solution of (<ref>) endowed with the Neumann boundary conditions
(<ref>) on Ω×[0,T) and initial conditions (m_0,c_0,d_0), if
(m, c, d)∈ L_loc^2(0,T;L^2(Ω)) × L_loc^2(0, T;W^1,2(Ω)) × L_loc^2(0,T;W^1,2(Ω)),
D(m)∇ m∈ L_loc^1(0,T;L^1(Ω)),
and for any ϕ∈ C_0^∞(Ω×[0,T))
-∫_0^T∫_Ω mϕ_t dx dt -∫_Ω m_0ϕ(x,0) dx = -∫_0^T∫_Ω D(m)∇ m·∇ϕ dx dt
+χ∫_0^T∫_Ω f(m)∇ c·∇ϕ dx dt +∫_0^T∫_Ω M(m)ϕ dx dt ,
-∫_0^T∫_Ω cϕ_t dx dt -∫_Ω c_0ϕ(x,0) dx =-∫_0^T∫_Ω∇ c·∇ϕ dx dt +∫_0^T∫_Ω(m+d-c)ϕ dx dt ,
-∫_0^T∫_Ω dϕ_t dx dt -∫_Ω d_0ϕ(x,0) dx=∫_0^T∫_Ω mh(m)(1-d)ϕ dx dt .
If T=∞ the weak solution is called global.
In the following we will assume that the diffusion exponent γ satisfies the following restrictions
cond-γ γ>max{2-2/n,1}, n=1,2,3.
The main result of the paper is contained in the following
Let Ω⊂ℝ^n be a bounded domain with smooth boundary ∂Ω and χ,μ>0. Assume that D satisfies (<ref>)-(<ref>)-(<ref>), M, f and h are under assumptions (<ref>), (<ref>) and (<ref>) respectively. Consider a triple (m_0,c_0,d_0) satisfying (<ref>). Then, for any γ>1 that satisfies (<ref>), there is C>0 such that system (<ref>) endowed with the Neumann boundary conditions
(<ref>) has a weak solution (m,c,d) in the sense of Definition <ref> that exists globally in time and satisfies
m(·,t)_L^∞(Ω)+c(·,t)_W^1,∞(Ω)+d(·,t)_W^1,∞(Ω)≤ C,
for all t∈ (0,∞).
§.§ Collection of useful inequalities
In what follows, we list some well-known results which will be used intensively throughout the paper. Some of the results below may not appear in the form the reader is most familiar with; however, we always cite the original source for precise references. We do not provide the proofs of the following results since, in most cases, they would not add anything to the paper. Most of the results are taken from <cit.>. Firstly, we recall the Gagliardo-Nirenberg interpolation inequality <cit.> in the form <cit.>
Let Ω⊂ℝ^n be a bounded domain with smooth boundary ∂Ω. Let R ≥ 1, 0<Q≤ P≤∞, S>0 be such that
1/R≤1/n+1/P.
Then there exists a positive constant C>0 such that
u_L^P(Ω)≤ C(∇ u^A_L^R(Ω)u_L^Q(Ω)^1-A+u_L^S(Ω)),
for all u∈ W^1,R∩ L^Q(Ω), where
A=1/Q-1/P/1/Q+1/n-1/R.
The following lemma ensures that the conditions in Lemmas <ref> and <ref> below are simultaneously satisfied.
Let n≥ 2 and let γ satisfy condition (<ref>). Then there exist unbounded sequences {p_k}_k∈ℕ and {q_k}_k∈ℕ such that for each k∈ℕ we have q_k, p_k>1 and
p_k>max{γ-n-2/nq_k,2q_k(n-2)/2q_k+n-2},
q_k(p_k-γ+1)/q_k-1(n-nq_k-n+2/q_k(p_k-γ+1)/(p_k+γ-1)n+2-n)<1,
1/q_k>2/p_k+γ-1(p_k+γ-1/2-(p_k+γ-1)(2q_k+n-2)/4nq_k/p_k+γ-1/2+1/n-1/2).
The following lemma furnishes a proper estimate of a boundary integral and was proved in <cit.> using previous results presented in <cit.>.
Let Ω⊂^n be a bounded domain with smooth boundary. Let q∈[1,+∞) and M≥ 0. Then, for any η>0 there is C_η>0 such that for any u∈ C^2(Ω̅) with
∂ u/∂ν=0, ∂Ω, |∇ u| ≤ M,
the inequality
∫_∂Ω|∇ u|^2q-2∂ |∇ u|^2/∂ν dσ≤η∫_Ω |∇|∇ u|^q|^2 dx +C_η,
holds.
The following Poincaré-type inequality is useful for our purpose. A detailed proof can be found in <cit.>.
Let Ω⊂^n be a bounded smooth domain. Let α>0 and p∈(1,+∞). Then there exists C > 0 such that
u_W^1,p(Ω)≤ C (∇ u_L^p(Ω)+(∫_Ω |u|^α dx)^1/α),
for all u∈ W^1,p(Ω).
In what follows we will make use of the notion of the Neumann heat semigroup. Let Ω be an arbitrary domain in ^n and let -Δ denote the Neumann-Laplacian in L^2(Ω), that is the Laplacian on L^2(Ω) subject to Neumann boundary conditions (see <cit.> for its precise definition and properties). Then -Δ is a nonnegative self-adjoint operator and it generates a C^0-semigroup e^-tΔ on L^2(Ω). Note that u = e^-tΔf solves the heat equation u_t -Δ u = 0 in Ω×(0,+∞), u∈ C(Ω×(0,+∞)) and
∂ u/∂ν = 0 ∂Ω×(0,+∞).
There exists a positive C^∞-function G : Ω×Ω×(0,+∞)→ such that
e^-tΔf(x) = ∫_Ω G(x,y,t)f(y) dy
for any f ∈ L^p(Ω), 1≤ p≤ +∞. Moreover we recall a standard regularity result concerning the second equation of the system.
Suppose k∈(1,+∞) and g∈ L^k((0,T);L^k(Ω)). Consider the following evolution problem:
c_t-Δ c+c=g, (x,t)∈Ω×(0,T),
∂ c/∂ν=0, (x,t)∈∂Ω×(0,T),
c(x,0)=c_0(x), (x,t)∈Ω.
For each c_0∈ W^2,k(Ω) such that ∂ c_0/∂ν=0 and any g∈ L^k((0,T); L^k(Ω)), there exists a unique solution c∈ W^1,k((0,T); L^k(Ω))∩ L^k((0,T); W^2,k(Ω)).
§ REGULARISED NON-DEGENERATE SYSTEM
Existence and boundedness of global weak solutions to system (<ref>) will be proved by introducing a proper regularised (non-degenerate) problem for which we are able to construct a global classical solution. Moreover, the regularised system admits enough regularity to allow passing to the limit in the regularisation parameter.
In order to do so, for ϵ∈ (0, 1), we introduce the function D_ϵ defined by
D_ϵ (s) = D(s+ϵ).
Note that, according to Assumptions (<ref>)-(<ref>) we have that D_ϵ (s) ≥ k_D (s+ϵ)^γ-1 and D_ϵ(0)>0. Moreover, we can introduce the primitive of D_ϵ, which turns out to be
Φ_ϵ (s) =∫_-ϵ^s-ϵ D_ϵ(σ) dσ =∫_0^s D(σ) dσ.
Given a triple (m_0,c_0,d_0) satisfying (<ref>) we introduce (m_0ϵ,c_0ϵ,d_0ϵ) such that, for some ϑ∈(0,1),
m_0ϵ, c_0ϵ, d_0ϵ∈ C^2+ϑ(Ω), 0≤ d_0ϵ≤ 1, ∂ m_0ϵ/∂𝐧|_∂Ω=∂ c_0ϵ/∂𝐧|_∂Ω=∂ d_0ϵ/∂𝐧|_∂Ω=0,
and
m_0ϵ→ m_0, c_0ϵ→ c_0, d_0ϵ→ d_0, as ϵ→ 0.
For T>0, we consider the following regularised system
∂_t m_ϵ = ∇·(D_ϵ(m_ϵ)∇ m_ϵ) - χ∇·(f(m_ϵ)∇ c_ϵ)+M(m_ϵ),
∂_t c_ϵ = Δ c_ϵ + d_ϵ - c_ϵ + m_ϵ,
∂_t d_ϵ = h(m_ϵ)(1-d_ϵ),
with (x,t)∈Ω_T, under the initial conditions
m_ϵ(x,0)=m_0ϵ(x), c_ϵ(x,0)=c_0ϵ(x), d_ϵ(x,0)=d_0ϵ(x),
System (<ref>) is endowed with the Neumann boundary conditions
∂ m_ϵ/∂𝐧|_∂Ω=∂ c_ϵ/∂𝐧|_∂Ω=∂ d_ϵ/∂𝐧|_∂Ω=0.
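For illustration only, the following is a minimal one-dimensional explicit finite-difference sketch of the regularised system, with the prototypical choices D(s)=s^γ-1, f(s)=h(s)=s/(1+δ s) and M(s)=μ s(1-s); the parameter values are arbitrary, and this is not the finite volume scheme used for the simulations described in the appendix.

```python
import numpy as np

# Minimal 1D explicit sketch of the regularised system with no-flux boundaries.
gamma, chi, mu, delta, eps = 2.0, 1.0, 1.0, 1.0, 1e-3
L, N, dt, n_steps = 10.0, 200, 1e-4, 5000
dx = L / N
x = np.linspace(0.5 * dx, L - 0.5 * dx, N)

m = 0.5 + 0.4 * np.exp(-((x - L / 2) ** 2))   # smooth nonnegative initial datum
c = np.zeros(N)
d = np.zeros(N)

def neumann_flux(u, coeff):
    """Flux coeff * u_x at the interior cell interfaces, zero at the boundary."""
    F = coeff * (np.diff(u) / dx)
    return np.concatenate(([0.0], F, [0.0]))          # no-flux boundary conditions

for _ in range(n_steps):
    m_face = 0.5 * (m[:-1] + m[1:])                   # interface values of m
    D_eps = (m_face + eps) ** (gamma - 1.0)           # regularised diffusivity D_eps
    f_m = m_face / (1.0 + delta * m_face)             # chemotactic sensitivity f(m)
    Fm = neumann_flux(m, D_eps) - neumann_flux(c, chi * f_m)   # diffusive - chemotactic flux
    Fc = neumann_flux(c, 1.0)
    m = m + dt * (np.diff(Fm) / dx + mu * m * (1.0 - m))
    c = c + dt * (np.diff(Fc) / dx - c + d + m)
    d = d + dt * (m / (1.0 + delta * m)) * (1.0 - d)
```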
Before going further, we first state the local existence in time of classical solutions to (<ref>), which can be attained by employing well-known fixed point arguments and standard parabolic regularity, similar to <cit.>, <cit.> and <cit.>. The proof can be found in Appendix <ref>.
Let ∈(0,1) and χ>0. Assume that the nonnegative functions _0, _0 and _0 satisfy (<ref>) for some ϑ∈(0,1). Consider D_ as in (<ref>). Then there exists a maximal existence time ∈(0, ∞] and a triple of nonnegative functions
∈ C^0(Ω̅×[0,)) ∩ C^2,1(Ω̅×(0, )),
∈ C^0(Ω̅×[0,)) ∩ C^2,1(Ω̅×(0, )),
∈ C^2,1(Ω̅ ×(0, ))
that solves (<ref>) classically on Ω× (0,) and satisfies 0≤≤_0_L^∞(Ω), ≥0 and ≥ 0 in Ω×(0, ). Moreover, either = +∞, or
(·, t)_+(·, t)_+(·, t)_→∞ t↗.
According to the above existence theory, for any s∈(0,), ((·, s),(·, s),(·, s))∈ C^2(Ω̅). Without loss of generality, we can assume that there exists a positive constant K such that
_0_C^2(Ω̅)+_0_C^2(Ω̅)+_0_C^2(Ω̅)≤ K.
§.§ A-priori estimates
In this section we establish a series of a-priori estimates on system (<ref>) which are uniform in the regularisation parameter ϵ. Firstly, we provide a bound on the total mass of .
There exists a positive constant K_0 only depending on Ω and m_0_ such that the solution (, , ) of (<ref>) satisfies
(·, t)_≤ K_0 t∈.
Moreover, if M=0
_L^1(Ω)=_0_L^1(Ω) t∈.
The regularity stated in Proposition <ref>, together with the Neumann boundary conditions in (<ref>), allows a direct integration of (<ref>). Then (<ref>) follows by an ODE comparison argument. Finally, if the reaction term M is zero, then the equation for is in divergence form, and conservation of mass is a straightforward consequence of the previous integration.
We start showing that solutions to (<ref>) remain bounded.
For any p≥1 the solution (, , ) to (<ref>) satisfies
≤ 1, t∈ (0, ), x∈Ω.
Consider η_-,δ a smooth approximation of the negative part function, for some δ >0. Then,
/η_-,δ(1-) =-η'_-,δ(1-)∂_t
=-η'_-,δ(1-)h()(1-)≤ 0.
By multiplying equation (<ref>) by ^p-1, using assumptions on h in (<ref>) and Young's inequality we are able to produce the following estimate.
For any p≥1 the solution (, , ) to (<ref>) satisfies
1/p/^p ≤k_h^p/p^p + p-1/p^p ,
for all t∈ (0, ). In particular
_L^1(Ω)≤ k_h K_0,
for all t∈ (0, ), where K_0 is the constant introduced in (<ref>).
At this point we want to derive a uniform upper bound for , which represents the key to obtain all the higher-order estimates and thus to extend the classical solution globally.
Let μ,χ>0 and γ>1 be given constants. Then for any p>1 the solution (, , ) to (<ref>) satisfies
1/p/^p + p-1/2 D_()^p-2|∇|^2 + k_D(p-1)/(p+γ-1)^2 |∇ ^p+γ-1/2|^2
≤k_f^2/ k_Dχ^2(p-1)^p-γ+1|∇|^2 + μ^p - μ^p+1
for all t∈ (0, ).
We start multiplying equation in (<ref>) by ^p-1 and integrating in space, that yields
1/p/^p =- (p-1) D_()^p-2|∇|^2
-χ^p-1∇· (f()∇) +^p-1 M_ () := I_1 + I_2 + I_3,
on t∈(0, ). We first estimate the term involving the nonlinear diffusion as follows
I_1 ≤ -p-1/2 D_()^p-2|∇|^2-k_D(p-1)/2^p-2(+)^γ-1|∇|^2,
where we have used the bound from below on D_. By an integration by parts on I_2, applying Young's inequality and assumption (<ref>), we derive the following estimate
I_2 ≤k_D(p-1)/4^γ+p-3|∇|^2
+ k_f^2/k_Dχ^2(p-1)^p-γ+1|∇|^2 .
Invoking the bound (<ref>) in order to control I_3, and putting together (<ref>), (<ref>) and (<ref>) we easily obtain (<ref>).
Let μ,χ>0 and γ>1 be given constants. Then, for any q∈[1,+∞) the solution (, , ) to (<ref>) satisfies
1/q/ |∇|^2q + 2 |∇|^2q + q-1/q^2 |∇|∇|^q|^2
≤(2(q-1)+n/2) ( + )^2|∇|^2q-2 + C_1,
for all t∈ (0, ) and for some positive constant C_1 independent of ϵ.
The proof of estimate (<ref>) is based on the one in <cit.>, see also <cit.> and <cit.>. We first recall the following identities
2∇ f·∇Δ f = Δ|∇ f|^2 - 2|D^2f|^2,
and
∇ |∇ f|^2q-2 = (q-1)|∇ f|^2q-4∇ |∇ f|^2 ,
that hold for all smooth functions f. From (<ref>), a direct computation shows that
1/q/ |∇|^2q = 2 |∇|^2q-2∇·∇Δ -2 |∇|^2q-2∇·∇
+2 |∇|^2q-2∇·∇ ( +).
Applying identity (<ref>) and integrating by parts we have
2 |∇|^2q-2∇·∇Δ = |∇|^2q-2(Δ|∇|^2 - 2|D^2|^2)
= -(q-1) |∇|^2q-4|∇|∇|^2|^2 + ∫_∂Ω|∇|^2q-2∂ |∇|^2/∂νσ
-2 |∇|^2q-2 |D^2|^2,
while an integration by parts, (<ref>) and Young's inequality give
2 |∇|^2q-2∇·∇ ( +) = -2(q-1) (+ )|∇|^2q-4∇|∇|^2·∇
-2 (+ )|∇|^2q-2Δ
≤ q-1/2 |∇|^2q-4|∇|∇|^2|^2
+2(q-1) (+ )^2|∇|^2q-2
+ 2/n |∇|^2q-2n|D^2|^2
+n/2 (+ )^2|∇|^2q-2,
where we used |Δ c_|^2≤ n|D^2 c_|^2. Adding together the two estimates above we obtain
1/q/ |∇|^2q+2 |∇|^2q≤ -q-1/2 |∇|^2q-4|∇|∇|^2|^2
+(2(q-1)+n/2) (+ )^2|∇|^2q-2
+ ∫_∂Ω|∇|^2q-2∂ |∇|^2/∂νσ.
Estimate (<ref>) can be deduced once we properly estimate the boundary integral. Note that, by denoting with e^Δ t the Neumann-heat semigroup, see <cit.>, and using the Duhamel formula we can estimate
∇(.,t)_L^1(Ω)≤ ∇ e^Δ (t-1)_0_L^1(Ω)+∫_0^t∇ e^Δ (t-τ)((·,τ)+(·,τ))_L^1(Ω)τ.
Calling λ_1>0 the first positive eigenvalue of -Δ and using the L^p-L^q-estimates for the Neumann-heat semigroup in the spirit of <cit.>, see also <cit.> for a more general version of the following estimate, we have that, for all t∈ (0,T), there exist two positive constants k_1 and k_2 such that
∇(.,t)_L^1(Ω)≤ k_1∇_0_L^1(Ω)
+k_2∫_0^t[((1+(t-τ)^-1/2)e^-λ_1(t-τ)((·,τ)_L^1(Ω)+(·,τ))_L^1(Ω))]dτ
≤ k_1∇ c_0_L^1(Ω)+k_2∫_0^t[((1+(t-τ)^-1/2)e^-λ_1(t-τ)(1+k_h)K_0]dτ≤ k_3,
where we used Lemmas <ref> and <ref>. The above estimate, together with the regularity in Proposition <ref> ensures that is an eligible function for Lemma <ref>. Thus, there exists a constant C̃:=C̃(q)>0 such that
∫_∂Ω|∇|^2q-2∂ |∇|^2/∂νσ≤q-1/q^2 |∇|∇|^q|^2 +C̃,
and using the identity |∇|^2q-4|∇|∇|^2|^2=4/q^2|∇|∇|^q|^2 we get (<ref>).
Summing up the estimates obtained in Lemmas <ref> and <ref> we easily obtain the following.
Let μ,χ>0 and γ>1 be given constants. Then, for any p>1 and for any q∈[1,+∞) the solution (, , ) to (<ref>) satisfies
/ (^p + |∇|^2q) + p(p-1)/2 D_()^p-2|∇|^2
+ k_Dp(p-1)/(p+γ-1)^2 |∇ ^p+γ-1/2|^2+ 2q |∇|^2q + (q-1) |∇|∇|^q|^2
≤k_f^2/k_Dχ^2 p(p-1)^p-γ+1|∇|^2 + q(2(q-1)+n/2) ( + )^2|∇|^2q-2+C_1,
for all t∈ (0, ), for some positive constant C_1 independent from ϵ given in (<ref>).
The following Lemmas concern the control of the terms on the r.h.s. of (<ref>). The conditions we are going to impose on the exponents p and q are guaranteed by Lemma <ref>, providing the restriction (<ref>).
Assume that γ satisfies (<ref>). Let p,q>1 such that for n≥ 2
p>γ-n-2/nq, q(p-γ+1)/q-1(n-nq-n+2/q(p-γ+1)/(p+γ-1)n+2-n)<1,
and, for n=1
q(p-γ)+1>0, (1+q(p-γ))/(q-1)(p+γ)<1.
Then for any η>0 there exists a constant C>0 depending only on η, p and q such that
^p-γ+1|∇|^2≤η|∇^p+γ-1/2|^2+η|∇|∇|^q|^2+C,
for all t∈(0,).
We consider first the case n≥2. Let θ>0 be a fixed small constant such that the following Hölder exponents
α=nq/nq-n+2-θ β=nq/n-2+θ,
remain strictly positive. Then we have
^p-γ+1|∇|^2≤(^(p-γ+1)α)^1/α( |∇|^2β)^1/β.
Invoking the Poincaré-type inequality in the form of Lemma <ref>, and the Sobolev embedding
W^1,2(Ω)↪ L^2β/q(Ω),
we can perform the following estimates
( |∇|^2β)^1/β = |∇|^q_L^2β/q(Ω)^2/q≤ k_1|∇|^q_W^1,2(Ω)^2/q
≤ k_2(∇|∇|^q_L^2(Ω)^2/q+ |∇|^q_L^s/q(Ω)^2/q)
≤ k_2∇|∇|^q_L^2(Ω)^2/q+k_3,
where s∈[1,n/n-1) and the last inequality holds because of the regularity in Proposition <ref>. The term involving can be bounded by using the Gagliardo–Nirenberg inequality in Lemma <ref> with the choices
R=2, Q=S=2/p+γ-1 P=2αp-γ+1/p+γ-1,
which are admissible thanks to the smallness of θ and the first of (<ref>). We recall that assumption (<ref>) of Lemma <ref> is satisfied for n=2,3. Then we can compute
(^(p-γ+1)α)^1/α ≤ k_4( ∇ ^p+γ-1/2_L^2(Ω)^2a(p-γ+1)/p+γ-1^p+γ-1/2_L^2/p+γ-1(Ω)^2(1-a)(p-γ+1)/p+γ-1+^p+γ-1/2_L^2/p+γ-1(Ω)^2(p-γ+1)/p+γ-1)
≤ k_4( ∇ ^p+γ-1/2_L^2(Ω)^2a(p-γ+1)/p+γ-1_L^1(Ω)^(1-a)(p-γ+1)+_L^1(Ω)^p-γ+1)
≤ k_5 ∇ ^p+γ-1/2_L^2(Ω)^2a(p-γ+1)/p+γ-1+k_6,
with
a=p+γ-1/2-(p+γ-1)(nq-n+2-θ)/2nq(p-γ+1)/p+γ-1/2+1/n-1/2,
and the last inequality holds because of Lemma <ref>. Note that the previous definition for the constant a, together with (<ref>) and the smallness of θ ensure that
a(p-γ+1)/p+γ-1<q-1/q,
thus, for any η>0, we can merge together (<ref>) and (<ref>) and, using twice the Young's inequality, we get
^p-γ+1|∇|^2≤η|∇^p+γ-1/2|^2+η|∇ |∇|^q|^2+C(η).
In order to conclude the proof we need to tackle the one-dimensional case. In this case, using the Hölder inequality on the l.h.s. of (<ref>) with exponents q and q/(q-1) we can easily reproduce an analogue of (<ref>). On the other hand, using again Lemma <ref>
(^(p-γ+1)q/q-1)^q-1/q ≤ k_7( ∂_x ^p+γ-1/2_L^2(Ω)^2ã(p-γ+1)/p+γ-1^p+γ-1/2_L^2/p+γ-1(Ω)^2(1-ã)(p-γ+1)/p+γ-1+^p+γ-1/2_L^2/p+γ-1(Ω)^2(p-γ+1)/p+γ-1)
≤ k_8 ∂_x ^p+γ-1/2_L^2(Ω)^2ã(p-γ+1)/p+γ-1+k_9,
with
ã = p+γ-1/2-(p+γ-1)(q-1)/2q(p-γ+1)/p+γ-1/2+1/2=(q(p-γ)+1)(p+γ-1)/q(p+γ)(p-γ+1).
Then, (<ref>) follows as in the multidimensional case thanks to (<ref>).
Assume that γ satisfies (<ref>). Let p,q>1 such that for n≥ 2
p>max{1,2q(n-2)/2q+n-2}, 1/q>2/p+γ-1(p+γ-1/2-(p+γ-1)(2q+n-2)/4nq/p+γ-1/2+1/n-1/2),
and for n=1
2q-1/p+γ<1.
Then for any η>0 there exists a constant C>0 such that
( + )^2|∇|^2q-2≤η∇^p+γ-1/2_L^2(Ω)^2+η∇|∇|^q_L^2(Ω)^2+C,
for all t∈.
We start dealing with the case n≥ 2. By expanding the square we are in the position of having to estimate three integrals separately:
A_1:= ^2 |∇|^2q-2 ,
A_2:= 2 |∇|^2q-2,
A_3:= ^2 |∇|^2q-2.
Fix 0<θ<2; for each of the above integrals we apply Hölder's inequality with exponents
α = nq/2q+n-2-θ(q-1) β = nq/(q-1)(n-2+θ).
Concerning A_1 we have
A_1 ≤(^2α)^1/α( |∇|^2(q-1)β)^1/β.
Similarly to what we did in Lemma <ref>, we can use Sobolev embedding, Poincaré's inequalities and Proposition <ref> to deduce the bound
( |∇|^2(q-1)β)^1/β =|∇|^q_L^2n/n-2+θ(Ω)^2(q-1)/q≤ k_1|∇|^q_W^1,2(Ω)^2(q-1)/q
≤ k_2 (∇|∇|^q_L^2(Ω)^2(q-1)/q+|∇|^q_L^s/q(Ω)^2(q-1)/q)
≤ k_2 (∇|∇|^q_L^2(Ω)^2(q-1)/q+k_3)
with s∈[1,n/n-1) and for all t∈. Note that (<ref>) holds true for each of the three estimates we are going to perform. Taking θ small enough and applying the Gagliardo–Nirenberg inequality with exponents
P=4nq/(p+γ-1)(2q+n-2-θ(q-1)), R=2, Q=2/p+γ-1=S,
and
a̅_1 =p+γ-1/2-(p+γ-1)(2q+n-2-θ(q-1))/4nq/p+γ-1/2+1/n-1/2,
we have
(^2α)^1/α = ^p+γ-1/2_L^P(Ω)^4/p+γ-1
≤ k_4(∇^p+γ-1/2_L^2(Ω)^4a̅_1/p+γ-1^p+γ-1/2_L^q̅(Ω)^4(1-a̅_1)/p+γ-1+^p+γ-1/2_L^s̅(Ω)^4/p+γ-1)
≤ k_5(∇^p+γ-1/2_L^2(Ω)^4a̅_1/p+γ-1+k_6).
where in the last inequality we used the bound on the L^1-norm in Lemma <ref>. Thus, performing twice the Young's inequality and invoking condition (<ref>) that ensure
2a̅_1/p+γ-1<1/q,
we can conclude that, given η>0
A_1 ≤ k_2 k_5(∇^p+γ-1/2_L^2(Ω)^4a̅_1/p+γ-1+k_6) (∇|∇|^q_L^2(Ω)^2(q-1)/q+k_3)
≤η∇^p+γ-1/2_L^2(Ω)^2+η∇|∇|^q_L^2(Ω)^2+k_7
We turn now to the estimate of A_2. As already mentioned, the term involving can be treated as in (<ref>). Thus, using the Hölder and Young inequalities we have
2( ()^α)^1/α ≤ 2(^2α)^1/2α(^2α)^1/2α
≤(^2α)^1/α+(^2α)^1/α
≤ k_5(∇^p+γ-1/2_L^2(Ω)^4a̅_1/p+γ-1+k_6)+k_8,
where the last inequality holds because of (<ref>) and Lemma <ref>.
The term A_3 can be estimated similarly to A_1, indeed
A_3 ≤(^2α)^1/α( |∇|^2(q-1)β)^1/β≤ k_8( |∇|^2(q-1)β)^1/β,
and then applying (<ref>). Thus (<ref>) follows after several applications of Young's inequality, similarly to what we did in (<ref>).
In the one-dimensional case we apply Hölder's inequality with
α = q β = q/q-1,
then the equivalent of (<ref>) is straightforward and we have
(^2q)^1/q = ^p+γ-1/2_L^4q/p+γ-1(Ω)^4/p+γ-1
≤ k_9(∂_x^p+γ-1/2_L^2(Ω)^4ã_1/p+γ-1+k_10),
where the exponent ã_1 is given by
ã_1=p+γ-1/2-p+γ-1/4q/p+γ-1/2+1/2.
Thus, condition (<ref>) allows to perform Young's inequality in the spirit of what we did in the multidimensional case.
§.§ Boundedness for the regularised system
The bounds gained in Lemmas <ref> and <ref> allow to produce the following estimate.
Let μ,χ≥ 0 and γ under condition (<ref>). Let p,q∈ (1,∞) under the assumptions of Lemmas <ref> and <ref>. Then there exists a constant C:=C(p,q)>0 independent from such that
^p + |∇|^2q≤ C,
for all t∈.
Consider p and q large enough to satisfy the conditions in Lemmas <ref> and <ref>. From (<ref>), we can set the constant η in (<ref>) and (<ref>) such that we can deduce the existence of certain constants k_i, i=1,…,4
/ (^p + |∇|^2q) + k_1 |∇ ^p+γ-1/2|^2+ k_2 |∇|^2q≤ k_3,
for all t∈. Moreover,
^p = ^p+γ-1/2_L^2p/p+γ-1(Ω)^2p/p+γ-1 ≤ k_4 (∇^p+γ-1/2_L^2(Ω)^2pa/p+γ-1^p+γ-1/2_L^2/p+γ-1(Ω)^2p(1-a)/p+γ-1+^p+γ-1/2_L^2/p+γ-1(Ω)^2p/p+γ-1)
≤ k_5 (∇^p+γ-1/2_L^2(Ω)^2pa/p+γ-1K_0^2p(1-a)/p+γ-1+K_0^2p/p+γ-1),
where K_0 is the constant introduced in Lemma <ref>. Observing that by construction
pa/p+γ-1=p/p+γ-1(p+γ-1)(1-1/p)/p+γ-1+2/n-1<1,
we can apply Young's inequality in order to deduce
^p ≤η |∇ ^p+γ-1/2|^2 +C(η),
for any η>0. Upon a rearrangement in the constants we can combine the above estimates in order to conclude
/ (^p + |∇|^2q) + k_6(^p + |∇|^2q) ≤ k_7,
and an ODE comparison argument yields (<ref>).
Let μ,χ≥ 0 and γ under condition (<ref>). Let p,q∈ (1,∞) under the assumptions of Lemmas <ref> and <ref>. Then there exists a constant C:=C(p)>0 independent from such that
(·,t)_L^p(Ω)≤ C, ∇(·,t)_L^p(Ω)≤ C ,
for all t∈.
Let μ, χ>0 and γ under condition (<ref>). There exists a constant C>0 independent from such that
(·,t)_L^∞(Ω)≤ C, (·,t)_W^1,∞(Ω)≤ C (·,t)_L^∞(Ω)≤ C
for all t∈.
Fix T∈. The bound on can be derived in the same spirit as in the proof of Lemma <ref>, by using Duhamel's formula and L^p-L^q-estimates for the Neumann heat semigroup in the spirit of <cit.>, see also <cit.>. More precisely, for all t∈ (0,T) we define
(t)=1/|Ω|(x,t) , (t)=1/|Ω|(x,t) ,
and, fixing p>n, we can estimate
(.,t)_L^∞(Ω)≤ e^t(Δ-1)c_0_L^∞(Ω)+∫_0^te^(t-τ)(Δ-1)((·,τ)+(·,τ))_L^∞(Ω)dτ
≤ c_0_L^∞(Ω)+∫_0^t[C_1((1+(t-τ)^-n/2p)e^-λ_1(t-τ)(·,τ)-(τ)_L^p(Ω)]dτ
+∫_0^t[C_1((1+(t-τ)^-n/2p)e^-λ_1(t-τ)(·,τ)-(τ)_L^p(Ω)]dτ
+∫_0^t e^-(t-τ)((τ)_L^∞(Ω)+(τ)_L^∞(Ω))dτ
≤ c_0_L^∞(Ω)+∫_0^t[C_1((1+(t-τ)^-n/2p)e^-λ_1(t-τ)4C(p)]dτ
+∫_0^t e^-(t-τ)|Ω|^-1((τ)_L^1(Ω)+(τ)_L^1(Ω))dτ≤ C_2,
since p>n, the L^1-norms of the average functions are bounded because of Lemmas <ref> and <ref>, and C(p) is the constant given by (<ref>). Similarly, we can bound
∇(.,t)_L^∞(Ω)≤ C_3∇ c_0_L^∞(Ω)
+C_4∫_0^t[((1+(t-τ)^-1/2-n/2p)e^-λ_1(t-τ)((·,τ)_L^p(Ω)+(·,τ))_L^p(Ω))]dτ
C_3∇ c_0_L^∞(Ω)+C_4∫_0^t[((1+(t-τ)^-1/2-n/2p)e^-λ_1(t-τ)2C(p)]dτ≤ C_5.
In order to derive the estimates for we adopt a by now classical iteration procedure on the exponent p, see <cit.>. Fix p_0∈ such that p_0>3/2(γ-1) and such that
p(p-1)/(p+γ-1)^2∈(1/2,3/2), p∈[p_0,∞).
We first perform the following estimates on _L^p(Ω)^p and ^p+γ-1_L^1(Ω). By Young's inequality we have
p(μ+1)_L^p(Ω)^p ≤μ p _L^p+1(Ω)^p+1+(μ+1)^p+1/μ^p(p/p+1)^p+1|Ω|,
while, in combination with the Gagliardo–Nirenberg inequality, we get
^p+γ-1 = ^p+γ-1/2_L^2(Ω)^2≤ C_6 ∇^p+γ-1/2_L^2(Ω)^2n/n+2^p+γ-1/2_L^1(Ω)^4/n+2+C_7^p+γ-1/2_L^1(Ω)^2
≤ nK_1/n+2∇^p+γ-1/2_L^2(Ω)^2+(C_6^n+2/22K_1^-n/2/n+2+C_7)^p+γ-1/2_L^1(Ω)^2,
where K_1 is a positive constant that will be fixed later. Remember that (<ref>) gives
/_L^p(Ω)^p + k_Dp(p-1)/(p+γ-1)^2∇ ^p+γ-1/2_L^2(Ω)^2
≤χ^2 p(p-1)/k_D^p-γ+1|∇|^2 + μ p_L^p(Ω)^p - μ p_L^p+1(Ω)^p+1
≤χ^2 p(p-1)C_5^2/k_D^p-γ+1 + μ p_L^p(Ω)^p - μ p_L^p+1(Ω)^p+1.
Using Young's inequality on the term involving ^p-γ+1 and (<ref>) we have
/_L^p(Ω)^p +_L^p(Ω)^p + k_D/2∇ ^p+γ-1/2_L^2(Ω)^2
≤χ^2 p^2 C_5^2/k_D^p+γ-1 +(χ^2 p^2 C_5^2/k_D+(μ+1)^p+1/μ^p(p/p+1)^p+1)|Ω|.
Applying (<ref>) we obtain
/_L^p(Ω)^p +_L^p(Ω)^p + k_D/2∇ ^p+γ-1/2_L^2(Ω)^2
≤χ^2 p^2 C_5^2/k_D[ nK_1/n+2∇^p+γ-1/2_L^2(Ω)^2+(C_6^n+2/22K_1^-n/2/n+2+C_7)^p+γ-1/2_L^1(Ω)^2]
+(χ^2 p^2 C_5^2/k_D+(μ+1)^p+1/μ^p(p/p+1)^p+1)|Ω|.
Setting
K_1 = k_D^2(n+2)/2nχ^2 p^2 C_5^2,
we deduce the following estimate
/_L^p(Ω)^p +_L^p(Ω)^p ≤ χ^2 p^2 C_5^2/k_D[(2nχ^2 p^2 C_5^2/k_D^2(n+2))^n/22C_6^n+2/2/n+2+C_7]^p+γ-1/2_L^1(Ω)^2
+(χ^2 p^2 C_5^2/k_D+(μ+1)^p+1/μ^p(p/p+1)^p+1)|Ω|.
Given p_0∈ as above, we then recursively define p_k=2p_{k-1}-(γ-1) for k∈ℕ. It is easy to check that
(1/2+1/2^k)(γ-1)≤p_k/2^k≤ p_0,
for all k∈ℕ, which ensures uniform-in-k boundedness from above and below of the ratio p_k/2^k. Introducing
M_k ≡sup_t∈(0,T) ^p_k,
and integrating in time (<ref>) with p=p_k we obtain
M_k ≤ m_0_L^p_k(Ω)^p_k+χ^2 p_k^2 C_5^2/k_D[(2nχ^2 p_k^2 C_5^2/k_D^2(n+2))^n/22C_6^n+2/2/n+2+C_7]M_k-1^2
+(χ^2 p_k^2 C_5^2/k_D+(μ+1)^p_k+1/μ^p_k(p_k/p_k+1)^p_k+1)|Ω|
≤ p_k^n+2χ^2 C_5^2/k_D(2nχ^2 C_5^2/k_D^2(n+2))^n/22C_6^n+2/2/n+2M_k-1^2.
For k sufficiently large, we can deduce the existence of a constant C_8 that depends on the upper bound for p_k/2^k such that
M_k ≤ C_8^k M_k-1^2,
that inductively leads to
M_k ≤ C_8^{k+∑_{j=1}^{k-1}2^j(k-j)} M_0^{2^k}≤ C_8^{2^{k+1}} M_0^{2^k},
where we used that k+∑_{j=1}^{k-1}2^j(k-j)=2^{k+1}-k-2≤ 2^{k+1} and assumed, without loss of generality, that C_8≥ 1.
Thanks to the lower bound of p_k/2^k, by sending k→∞ we deduce the existence of C_9>0 such that
sup_t∈(0,T)(·,t)_L^∞(Ω)≤lim sup_k→∞ M_k^1/p_k≤lim sup_k→∞C_8^{2^{k+1}/p_k} M_0^{2^k/p_k}≤ C_9.
We are left with the estimate on , that can be easily deduced from the combination of (<ref>) and (<ref>) together with the above L^∞ estimate.
Let μ, χ>0 and γ under condition (<ref>). Suppose that the initial condition (_0,_0, _0) satisfies (<ref>). Then there exist a constant C>0 such that system (<ref>) has a classical solution
(,,)∈(C^0(Ω̅×[0,∞))∩ C^2,1(Ω̅×(0,∞)) )^3
which exists globally in time and satisfies
(·,t)_L^∞(Ω)+(·,t)_W^1,∞(Ω)+(·,t)_L^∞(Ω)≤ C
for all t∈(0,∞).
A direct integration on (<ref>), together with the bounds in (<ref>), gives the following bounds.
Let μ,χ≥ 0 and γ under condition (<ref>). Let p∈ (1,∞). Then there exists a constant C>0 independent from such that
∫_0^t D_()^p-2|∇|^2≤ C(1+t),
and
∫_0^t |∇^p+γ-1/2|^2≤ C(1+t),
for all t∈ (0,∞).
§ CONVERGENCE TO GLOBAL WEAK SOLUTIONS
In the present section we finally prove the existence of global weak solutions to system (<ref>). The nonlinearity in the equation for requires a stronger notion of convergence than the one used for and . For this purpose, we establish the following dual estimate for the time derivative of .
Fix χ,μ>0 and assume that γ is under condition (<ref>). Let θ>max{1,γ/2}. Then for r>1 and for any T>0 there exists a constant C>0 such that
∂_t ^θ_L^1(0,T;(W_0^1,r(Ω))^∗)≤ C,
for any >0.
Let ζ∈ C_0^∞(Ω) be such that ζ_W_0^1,r(Ω)≤ 1. We denote with C_∞,f the -independent L^∞-bound of a generic function f. An integration by parts gives
1/θ∂_t ^θζ = -(θ-1)^θ-2 D_ () |∇|^2 ζ -^θ-1 D_ () ∇·∇ζ
+k_fχ(θ-1)^θ-1∇·∇ζ+k_fχ^θ∇·∇ζ
+^θ-1 M() ζ.
For T>0, the density of C_0^∞(Ω) in W_0^1,r(Ω) and an integration of the above equality in time, give
1/θ∂_t ^θ_L^1(0,T;(W_0^1,r(Ω))^∗)≤ (θ-1)∫_0^T^θ-2 D_ () |∇|^2 |ζ|
+∫_0^T^θ-1 D_ () |∇·∇ζ|
+k_fχ(θ-1)∫_0^T^θ-1 |∇·∇ζ|
+k_fχ∫_0^T^θ| ∇·∇ζ|
+∫_0^T^θ-1 |M() ζ| .
We now estimate each term on the r.h.s. separately. We start fixing p>1 such that
θ≥max{p,p+γ-1/2}.
Thanks to (<ref>) and the boundedness of ζ and we can deduce that
∫_0^T^θ-2 D_ () |∇|^2 |ζ| ≤ C_∞,ζ C_∞,^θ-p∫_0^T^p-2 D_ () |∇|^2
≤ C_∞,ζ C_∞,^θ-p C(1+T) .
The second term in (<ref>) can be similarly estimated by considering (<ref>) and the fact that D_ as well D is a continuous function. Indeed, by Young's inequality
∫_0^T^θ-1 D_ () |∇·∇ζ| ≤ 1/2C_∞,∇ζ∫_0^T^p-2 D_ () |∇|^2
+1/2C_∞,∇ζ∫_0^T^2θ-p D_ () |∇|^2
≤ 1/2C_∞,∇ζ C(1+T)
+1/2C_∞,∇ζC_∞,D C_∞,^2θ -p-γ+1∫_0^T^γ-1 |∇|^2
≤ 1/2C_∞,∇ζ C(1+T)
+1/2C_∞,∇ζC_∞,D C_∞,^2θ -p-γ+1 C(1+T),
where in the last inequality we use (<ref>) with p=2. Using Young's inequality, the bound in (<ref>) and the bound for in (<ref>) we have
∫_0^T^θ-1 |∇·∇ζ| ≤ 1/2 C_∞,ζ∫_0^T^2θ-2 |∇|^2
+1/2 C_∞,ζ∫_0^T |∇|^2
≤ 1/2 C_∞,ζC_∞,^2θ -p-γ-1C(1+T)+1/2 C_∞,ζ C_∞,∇^2 |Ω|T.
Invoking again the bound for in (<ref>) we can easily bound
∫_0^T^θ| ∇·∇ζ| ≤ C_∞,∇ζC_∞,^θ C_∞,∇^2|Ω|T.
Finally, by (<ref>)
∫_0^T^θ-1 |M() ζ| ≤μ C_∞,ζC_∞,^θ(1+C_∞,)|Ω|T.
Putting together (<ref>)-(<ref>), we can deduce the existence of a constant C>0, independent of ϵ, such that (<ref>) reduces to
∂_t ^θ_L^1(0,T;(W_0^1,r(Ω))^∗)≤
C(1+T).
Let μ,χ≥ 0 and γ under condition (<ref>). For any T∈ (0,∞] there exist c and d belonging to L^2_loc(0,T;W^1,2(Ω))∩ L^∞(Ω× (0,T)) and m∈ L^2_loc(0,T;L^2(Ω))∩ L^∞(Ω× (0,T)) such that, up to a non-relabelled subsequence,
⇀ c, ⇀ d L^2_loc(0,T;L^2(Ω)),
∗⇀ c, ∗⇀ d L^∞(Ω× (0,T)),
∇∗⇀∇ c, L^∞(Ω× (0,T)),
→ m Ω× (0,T),
→ m L^2(0,T;L^2(Ω)),
D_()∇⇀ D(m)∇ m L^2(0,T;L^2(Ω)),
as → 0.
The desired convergences for and in (<ref>)-(<ref>) are direct consequences of the L^∞-estimates in (<ref>), combined with (<ref>) and (<ref>). For T>0 and any p>1 the bounds for in (<ref>) and (<ref>) give that the sequence {^p+γ-1/2}_ for ∈(0,1) is bounded in L^2(0,T;W^1,2(Ω)). In addition, Lemma <ref> gives that {∂_t^p+γ-1/2}_ for ∈(0,1) is bounded in L^1(0,T;W^1,2(Ω)^∗). Thus, we are in a position to apply the Aubin–Lions lemma <cit.> in order to deduce the relative compactness of the sequence {^p+γ-1/2}_∈(0,1) in L^2(0,T;L^2(Ω)) and the existence in the space L^2(0,T;L^2(Ω)) of a function that we can directly identify with m^p+γ-1/2 such that, along a non-relabelled subsequence, ^p+γ-1/2→ m^p+γ-1/2 in L^2(0,T;L^2(Ω)) as → 0. The uniform L^∞-boundedness of and Lebesgue's dominated convergence theorem allow us to deduce (<ref>) and (<ref>).
We are left with proving (<ref>). Invoking again the uniform bound for in (<ref>), the regularity of D_ and (<ref>) with p=2, we have that D_()∇ is uniformly bounded in L^2(0,T;L^2(Ω)); thus, up to a subsequence, it converges weakly in L^2(0,T;L^2(Ω)). Notice that the weak convergence of D_()∇ implies the weak convergence of ∇Φ_ϵ() to some function F, where Φ_ is defined in (<ref>). In order to identify F=D(m)∇ m we notice that the convergence in (<ref>) ensures that
Φ_ (+) →Φ (m) as → 0 a.e. in Ω× (0,T), with Φ as in <ref>. The strong convergence and Lebesgue's dominated convergence theorem allow us to identify the weak limit F with ∇Φ(m)=D(m)∇ m, concluding the proof.
We are now in a position to prove our main result.
Let T∈ (0,∞] and let (,,) be a global and bounded classical solution to (<ref>) in the sense of Corollary <ref> on [0,T).
Let ϕ∈ C_0^∞(Ω×[0,T)) be a test function. Multiplying each equation of (<ref>) by ϕ and integrating by parts we get
-∫_0^Tϕ_t -∫_Ω m_,0(x,0)ϕ(x,0) = -∫_0^T D_()∇∇ϕ
+χ∫_0^T f()∇∇ϕ +∫_0^T M()ϕ ,
-∫_0^Tϕ_t -∫_Ωc_,0(x,0)ϕ(x,0) =-∫_0^T∇∇ϕ +∫_0^T(+-)ϕ ,
-∫_0^Tϕ_t -∫_Ωd_,0(x,0)ϕ(x,0)=∫_0^T h()(1-)ϕ .
The convergences obtained in (<ref>)-(<ref>), together with (<ref>), allow us to pass to the limit in all the terms in the above equalities.
Then we can conclude that the limiting triple (m,c,d) obtained in Lemma <ref> is a weak solution to (<ref>) in the sense of Definition <ref>. Finally, the boundedness is a direct consequence of (<ref>), (<ref>) and (<ref>).
§ CONCLUSION AND PERSPECTIVES
We conducted an investigation into the existence of global weak solutions for a system of chemotaxis-haptotaxis type with nonlinear degenerate diffusion, which arises in the modeling of Multiple Sclerosis. Our approach involved employing a regularization procedure on the degenerate diffusion coefficient, enabling us to obtain appropriate a priori estimates on the regularized classical solutions. It is worth noting that the system we studied is a modification of a model originally designed with linear diffusion.
Intriguingly, as highlighted in Figures <ref> and <ref>, the incorporation of nonlinear diffusion leads to the formation of more intricate patterns. These patterns can potentially represent various types of plaque formation observed in Multiple Sclerosis. Numerical simulations are performed according to the scheme sketched in Appendix <ref>.
Moving forward, an interesting extension of our current work would involve exploring possible stationary states and radially symmetric solutions. By investigating these aspects, we can gain further insights into the long-term behavior and spatial characteristics of the system under consideration.
§ ACKNOWLEDGMENTS
The research of SF is supported by the Ministry of University and Research (MIUR), Italy under the grant PRIN 2020- Project N. 20204NT8W4, Nonlinear Evolutions PDEs, fluid dynamics and transport equations: theoretical foundations and applications and by the INdAM project N. E55F22000270001 “Fenomeni di trasporto in leggi di conservazione e loro applicazioni”. SF is also supported by University of L'Aquila 2021 project 04ATE2021 - “Mathematical Models For Social Innovations: Vehicular And Pedestrian Traffic, Opinion Formation And Seismology.” ER and SF are supported by the INdAM project N.E53C22001930001 “MMEAN-FIELDSS”.
§ LOCAL EXISTENCE OF WEAK SOLUTIONS FOR THE REGULARISED SYSTEM
This appendix is devoted to proving the existence of local classical solutions to the regularised system (<ref>), as stated in Proposition <ref>.
Fix ϵ∈ (0,1) and let T>0 be a small final time to be fixed later. Assume that there exists K>0 such that _0_L^∞(Ω)≤ K. Consider the space
X = { f∈ L^∞(Ω× (0,T)) | f≤ K+1 Ω× (0,T) }.
Fix m̅∈ X and consider the following truncated functions
D̂_ϵ(z)=
D_ϵ(0) z<0,
D_ϵ(z) 0≤ z≤ K+1,
D_ϵ(K+1) z> K+1,
and
m̂(x,t)=m̅(x,t) x∈Ω t∈(0,T),
0 x∈Ω t≥ T,
Standard ordinary differential equations theory ensures that the following equation
∂_t d =- h(m̂)d + h(m̂), x∈Ω, t>0
d(x,0)=_0(x), x∈Ω.
admits a globally defined solution d that is bounded since h(m̂) is bounded. By introducing the additional truncated function
d̂(x,t)=
d(x,t) x∈Ω t∈(0,T),
0 x∈Ω t≥ T,
invoking the standard existence theory for linear parabolic problems, see <cit.>, we can argue that the problem
∂_t c=Δ c-c+m̂+d̂ x∈Ω, t>0,
∂ c/∂ν = 0 x∈∂Ω, t>0,
c(x,0)=_0(x) x∈Ω,
admits a globally defined weak solution. Moreover, for any q>n+2/2 there exists k_1>0 such that
∇ c_L^q(Ω×(0,1))≤ k_1.
Given c solution to (<ref>), we can consider the following equation
∂_t m=∇(D̂_ϵ(m̅)∇ m) -χ∇(f(m̅)∇ c)+M(m̅) x∈Ω, t>0,
∂ m/∂ν = 0 x∈∂Ω, t>0,
m(x,0)=_0(x) x∈Ω.
Since f(m̅)∇ c∈ L^q(Ω×(0,1)), invoking again standard parabolic theory, we may deduce that there exist k_2,k_3>0 such that
m_L^∞(Ω×(0,1))≤ k_2 m_C^α,α/2(Ω̅×[0,1)≤ k_3.
The combination of the two bounds above allows us to control the L^∞-norm of m as follows
m_L^∞(Ω×(0,1))≤_0_L^∞(Ω)+k_3t^α/2, t∈(0,1).
By choosing T∈(0,1) such that k_3 T^α/2≤ 1 we have that m_L^∞(Ω×(0,T))≤ K+1, thus we can easily deduce that the solution operator associated to (<ref>), S(m̅)=m, maps X into itself, is continuous, and S(X) is compact in L^∞(Ω×(0,T)). Then Schauder's fixed point theorem provides the existence of a fixed point m for S in X. Since m∈ X we have that actually D̂_ϵ(m)≡ D_ϵ(m), and recalling the definitions for the truncated functions m̂ and d̂ we can deduce that the triple (m,c,d) actually solves (<ref>) on Ω×(0,T) classically. This triple depends on the fixed parameter ϵ and the nonnegativity can be deduced from the parabolic comparison principle. Finally, (<ref>) is a consequence of a standard extensibility argument, since T only depends on the initial data.
§ A FINITE-VOLUME NUMERICAL SCHEME
The method we use is a modification of the finite-volume method introduced in <cit.> and extended to systems in <cit.>, and it yields a positivity-preserving finite-volume scheme for system (<ref>). We sketch the scheme in the one-dimensional setting. We consider a partition of the computational domain into finite-volume cells U_i=[x_i-1/2,x_i+1/2] of uniform size Δ x with x_i = iΔ x, i ∈{-s, …,s }, and we define
m_i(t) := 1/Δ x∫_U_im(x,t), c_i(t) := 1/Δ x∫_U_ic(x,t), d_i(t) := 1/Δ x∫_U_id(x,t),
the averages of the solutions m, c and d computed at each cell U_i. Those averages are obtained as solutions of the semi-discrete scheme described by the following system of ODEs
m_i(t)/ = -F_i+1/2^m(t) - F_i-1/2^m(t) /Δ x+m_i(t)(1-m_i(t)),
c_i(t)/ = -F_i+1/2^c(t) - F_i-1/2^c(t) /Δ x+λd_i(t) - c_i(t) +βm_i(t),
d_i(t)/ = rm_i(t)m_i(t)/1+δm_i(t)(1-d_i(t)),
where the numerical fluxes F_i+1/2^l, l=m,c, are considered as an approximation of the transport parts in equation (<ref>). More precisely, defining
(m_x)^i = minmod( 2(m_i+1 - m_i)/Δ x, (m_i+1 - m_i-1)/(2Δ x), 2(m_i - m_i-1)/Δ x),
(c_x)^i = minmod( 2(c_i+1 - c_i)/Δ x, (c_i+1 - c_i-1)/(2Δ x), 2(c_i - c_i-1)/Δ x).
where the minmod limiter in (<ref>) has the following definition
minmod(a_1, a_2, …) :=
min (a_1, a_2, …), if a_i > 0 ∀ i,
max (a_1, a_2, …), if a_i < 0 ∀ i,
0, otherwise,
and
ϑ_m^i+1 = - γ/(γ-1)Δ x( m_i+1^γ-1 - m_i^γ-1) +1/2χ((c_x)^i/1+m_i+(c_x)^i+1/1+m_i+1),
the expression for F_i+1/2^m is given by
F_i+1/2^m = max (ϑ_m^i+1,0)[ m_i + Δ x/2(m_x)^i] + min (ϑ_m^i+1,0)[ m_i+1 - Δ x/2(m_x)^i+1],
Similarly we can recover the numerical flux for c. Finally, we integrate the semi-discrete scheme (<ref>) in time using the third-order strong stability preserving Runge–Kutta (SSP-RK) ODE solver used in <cit.>.
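To make the above fully concrete, the following is a minimal Python/NumPy sketch (our own illustration, not part of the original scheme description) of the semi-discrete right-hand side for the m-component, transcribing the formulas above. The names rhs_m, limited_slopes and the parameters gamma, chi, dx are placeholders, and the homogeneous Neumann boundary is realised here by reflection ghost cells and zero boundary fluxes, which is one standard choice not prescribed by the text.

import numpy as np

def minmod(*slopes):
    # Componentwise minmod: min of the slopes if all are positive, max if all are negative, else 0.
    s = np.stack(slopes)
    pos = np.all(s > 0, axis=0)
    neg = np.all(s < 0, axis=0)
    return np.where(pos, s.min(axis=0), np.where(neg, s.max(axis=0), 0.0))

def limited_slopes(u, dx):
    # Limited cell slopes (u_x)^i; reflection ghost cells mimic the homogeneous Neumann condition.
    ug = np.concatenate(([u[0]], u, [u[-1]]))
    return minmod(2.0 * (ug[2:] - ug[1:-1]) / dx,
                  (ug[2:] - ug[:-2]) / (2.0 * dx),
                  2.0 * (ug[1:-1] - ug[:-2]) / dx)

def rhs_m(m, c, dx, gamma, chi):
    # Semi-discrete right-hand side for m: -(F_{i+1/2} - F_{i-1/2})/dx + m(1 - m).
    mx, cx = limited_slopes(m, dx), limited_slopes(c, dx)
    # interface velocities theta_{i+1/2}, transcribing the formula for theta above
    theta = (-(gamma / (gamma - 1.0)) * (m[1:] ** (gamma - 1.0) - m[:-1] ** (gamma - 1.0)) / dx
             + 0.5 * chi * (cx[:-1] / (1.0 + m[:-1]) + cx[1:] / (1.0 + m[1:])))
    east = m[:-1] + 0.5 * dx * mx[:-1]   # reconstruction at the interface from the left cell
    west = m[1:] - 0.5 * dx * mx[1:]     # reconstruction at the interface from the right cell
    F = np.maximum(theta, 0.0) * east + np.minimum(theta, 0.0) * west  # upwind flux
    F = np.concatenate(([0.0], F, [0.0]))  # zero-flux boundaries
    return -(F[1:] - F[:-1]) / dx + m * (1.0 - m)

The flux for c is assembled analogously, and the resulting system of ODEs can then be advanced in time with the SSP-RK solver mentioned above.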
10
bellomo2015
N. Bellomo, A. Bellouquid, Y. Tao, and M. Winkler.
Toward a mathematical theory of Keller-Segel models of pattern formation
in biological tissues.
M3AS, 25:1663–1763, 2015.
Ali792
N.D. Alikakos.
An application of the invariance principle to reaction–diffusion
equations.
J. Differential Equations, 33:201–225, 1979.
Ali791
N.D. Alikakos.
l^p-bounds of solutions of reaction diffusion equations.
Comm. Partial Differential Equations, 4:827– 868, 1979.
Balo
J. Balò.
Encephalitis periaxialis concentrica.
Archiv. Neurol. Psychiatr., 19:242–264, 1928.
barnett
M. Barnett and J. Prineas.
Relapsing and remitting multiple sclerosis: pathology of the newly
forming lesion.
Ann Neurol, 55:458 – 468, 2004.
Barnett09
M.H. Barnett, J.D. Parratt, J.D. Pollard, and J.W. Prineas.
MS: is it one disease?
Int. M.S. J., 16:57–65, 2009.
barresi16
R. Barresi, E. Bilott, F. Gargano, M.C. Lombardo, P. Pantano, and
M. Sammartino.
Wavefront invasion for a chemotaxis model of Multiple Sclerosis.
Ricerche Mat., 65:423–434, 2016.
bellomo22
N. Bellomo, N. Outada, J. Soler, and M. Winkler,
Chemotaxis and cross-diffusion models in complex environments: Models and analytic problems toward a multiscale vision
Mathematical Models and Methods in Applied Sciences, 32:713–792, 2022.
bellomo2020
N. Bellomo, and Y. Tao,
Stabilization in a chemotaxis model for virus infection.
Discrete Contin. Dyn. Syst. Ser. S, 13:105–117, 2020.
bellomo2017
N. Bellomo and M. Winkler.
A degenerate chemotaxis system with flux limitation: Maximally
extended solutions and absence of gradient blow-up.
Comm. Partial Differential Equations, 42:436–473, 2017.
bilotta19
E. Bilotta, F. Gargano, V. Giunta, M. C. Lombardo, P. Pantano, and
M. Sammartino.
Axisymmetric solutions for a chemotaxis model of multiple sclerosis.
Ricerche di Matematica, 68:281–294, 2019.
bisi2023chemotaxis
M. Bisi, M. Groppi, G. Martalò, and C. Soresina.
A chemotaxis reaction-diffusion model for multiple sclerosis with
allee effect, 2023.
calvez
V. Calvez and R. Khonsari.
Mathematical description of concentric demyelination in the human
brain: self-organization models, from liesegang rings to chemotaxis.
Math. Comput. Model., 47:726–742, 2008.
cao
X. Cao.
Boundedness in a three-dimensional chemotaxis-haptotaxis model.
Zeitschrift Angewandte Mathematik und Physik, 67:11, 2016.
CCH
J.A. Carrillo, A. Chertock, and Y. Huang.
A finite-volume method for nonlinear nonlocal equations with a
gradient flow structure.
Communications in Computational Physics, 17:233–258, 2015.
CHS
J.A. Carrillo, Y. Huang, and M. Schmidtchen.
Zoology of a nonlocal cross-diffusion model for two species.
SIAM J. Appl. Math., 78:1078–1104, 2018.
chap
M.A.J. Chaplain and G. Lolas.
Mathematical modelling of cancer invasion of tissue: Dynamic
heterogeneity.
Networks and Heterogeneous Media, 1:399 – 439, 2006.
Davies
E.B. Davies.
Heat kernels and spectral theory.
Cambridge University Press, 1989.
DevGiu21
L. Desvillettes and V. Giunta.
Existence and regularity for a chemotaxis model involved in the
modeling of multiple sclerosis.
Ricerche di Matematica, 70(1):99–113, 2021.
desvillettes_giunta_morgan_tang_2022
L. Desvillettes, V. Giunta, J. Morgan, and B.Q. Tang.
Global well-posedness and nonlinear stability of a chemotaxis system
modelling multiple sclerosis.
Proceedings of the Royal Society of Edinburgh Section A:
Mathematics, 152(4):826–856, 2022.
gott
S. Gottlieb, C. Shu, and E. Tadmor.
Strong stability-preserving high-order time discretization methods.
SIAM Review, 43:89–112, 2001.
horst1
H. Horstmann and G. Wang.
Blow-up in a chemotaxis model without symmetry assumptions.
European J. Appl. Math., 12:159–177, 2001.
HU20206875
X. Hu, S. Fu, and S. Ai.
Global asymptotic behavior of solutions for a parabolic-parabolic-ode
chemotaxis system modeling multiple sclerosis.
Journal of Differential Equations, 269(9):6875–6898, 2020.
ishida2014
S. Ishida, K. Seki, and T. Yokota.
Boundedness in quasilinear keller–segel systems of parabolic–
parabolic type on non-convex bounded domains.
J. Differential Equations, 256:2993–3010, 2014.
jin
C. Jin.
Boundedness and global solvability to a chemotaxis model with
nonlinear diffusion.
J. Differential Equations, 263:5759–5772, 2017.
ke
Y. Ke and J. Zheng.
A note for global existence of a two-dimensional
chemotaxis-haptotaxis model with remodeling of non-diffusible attractant.
Nonlinearity, 31:4602 – 4620, 2018.
keller
E. Keller and A. Segel.
Initiation of slime mold aggregation viewed as an instability.
J. Theoret. Biol., 26:399–415, 1970.
khonsari
R. Khonsari and V. Calvez.
The origins of concentric demyelination: self-organization in the
human brain.
PLos ONE, 2:2007, e150.
ladyzhenskaia1988linear
O. A. Ladyzhenskaia, V. A. Solonnikov, and N. N. Ural'tseva.
Linear and quasi-linear equations of parabolic type, volume 23.
American Mathematical Soc., 1988.
Lass18
H. Lassmann.
Multiple sclerosis pathology.
Cold Spring Harb Perspect Med., 8:a028936, 2018.
li2016
Y. Li and J. Lankeit.
Boundedness in a chemotaxis-haptotaxis model with nonlinear
diffusion.
Nonlinearity, 29:1564–1595, 2016.
liu2020
L. Liu, J. Zheng, Y. Li, and W. Yan.
A new (and optimal) result for the boundedness of a solution of a
quasilinear chemotaxis-haptotaxis model (with a logistic source).
J. Math. Anal. Appl., 491:124231, 2020.
lombardo
M.C. Lombardo, R. Barresi, E. Bilotta, F. Gargano, P. Pantano, and
M. Sammartino.
Demyelination patterns in a mathematical model of multiple sclerosis.
J. Math. Biol., 75:373–417, 2017.
Lucchinettietal00
C. Lucchinetti, W. Brück, J. Parisi, B. Scheithauer, M. Rodriguez, and
H. Lassmann.
Heterogeneity of multiple sclerosis lesions: implications for the
pathogenesis of demyelination.
Ann. Neurol., 47:707–17, 2000.
marik
C. Marik, P.A. Felts, J. Bauer, H. Lassmann, and K.J. Smith.
Lesion genesis in a subset of patients with multiple sclerosis: a
role for innate immunity?
Brain, 130:2800–2815, 2007.
MoFr21
N. Moise and A. Friedman.
A mathematical model of the multiple sclerosis plaque.
J Theor Biol., 512:110532, 2021.
niremberg
L. Nirenberg.
On elliptic partial differential equations.
Ann. Scuola Norm. Sup. Pisa, 13:115–162, 1959.
ponomarev
E. Ponomarev, L. Shriver, K. Maresz, and B. Dittel.
Microglial cell activation and proliferation precedes the onset of
cns autoimmunity.
J Neurosci Res, 81:374–389, 2005.
QuiSou
P. Quittner and P. Souplet.
Superlinear Parabolic Problems. Blow-up, Global Existence and
Steady States.
Birkhä user Advanced Texts, 2007.
Simon86
J. Simon.
Compact sets in the space L^p(0,T;B).
Annali di Matematica Pura ed Applicata, 146(1):65–96, 1986.
SzMRLaCh09
Z. Szymanska, C. Morales-Rodrigo, M. Lachowicz, and M. Chaplain.
Mathematical modelling of cancer invasion of tissue: The role and
effect of nonlocal interactions.
Math. Models Methods Appl. Sci., 19:257–281, 2009.
tao
Y. Tao.
Boundedness in a two-dimensional chemotaxis-haptotaxis system.
arXiv:1407.7382v1, 2014.
tao2012
Y. Tao and M. Winkler.
Boundedness in a quasilinear parabolic–parabolic keller–segel
system with subcritical sensitivity.
J. Differ. Equ., 252:692–715, 2012.
tao2008
Y. Tao and M. Winkler.
Global existence and boundedness in a keller-segel-stokes model with
arbitrary porous medium diffusion.
Discrete Contin. Dyn. Syst., 32:1901–1914, 2008.
tao2011
Y. Tao and M. Winkler.
A chemotaxis-haptotaxis model: The roles of nonlinear diffusion and
logistic source.
SIAM J. Math. Anal., 43:685–704, 2011.
taowin11
Y. Tao and M. Winkler.
A chemotaxis–haptotaxis model: the roles of nonlinear diffusion and
logistic source.
SIAM J. Math. Anal., 43:685–704, 2011.
wang
Y. Wang.
Boundedness in the higher-dimensional chemotaxis-haptotaxis model
with nonlinear diffusion.
J. Differ. Equ., 260:1975–1989, 2016.
winkler2002
M. Winkler.
A critical exponent in a degenerate parabolic equation.
Math. Methods Appl. Sci., 25:911–25, 2002.
Winkler2010JDE
M. Winkler.
Aggregation vs. global diffusive behavior in the higher-dimensional
keller–segel model.
J. of Differential Equations, 248:2899–2905, 2010.
winkler2013
M. Winkler.
Finite-time blow-up in the higher-dimensional parabolic-parabolic
keller-segel system.
J. Math. Pures Appl., 100:748–767, 2013.
winkler2008
M. Winkler.
Chemotaxis with logistic source: Very weak global solutions and their
boundedness properties.
J. Math. Anal. Appl., 348:708–729, 2008.
winkler2010
M. Winkler and K.C. Djie.
Boundedness and finite-time collapse in a chemotaxis system with
volume-filling effect.
Nonlinear Anal., 72:1044–1064, 2010.
zheng
J. Zheng.
Boundedness of solutions to a quasilinear higher-dimensional
chemotaxis-haptotaxis model with nonlinear diffusion.
Discrete Contin. Dyn. Syst., 37:627–643, 2017.
|
http://arxiv.org/abs/2307.03044v1
|
20230706150944
|
Asymptotics in finite monoidal categories
|
[
"Abel Lacabanne",
"Daniel Tubbenhauer",
"Pedro Vaz"
] |
math.RT
|
[
"math.RT",
"math.CT",
"math.RA",
"Primary: 11N45, 18M05, Secondary: 16T05, 18M20, 26A12"
] |
Asymptotics in finite monoidal categories
Abel Lacabanne, Daniel Tubbenhauer and Pedro Vaz
A.L.: Laboratoire de Mathématiques Blaise Pascal (UMR 6620), Université Clermont Auvergne, Campus Universitaire des Cézeaux, 3 place Vasarely, 63178 Aubière Cedex, France, http://www.normalesup.org/~lacabanne, ORCID 0000-0001-8691-3270, abel.lacabanne@uca.fr
D.T.: The University of Sydney, School of Mathematics and Statistics F07, Office Carslaw 827, NSW 2006, Australia, http://www.dtubbenhauer.com, ORCID 0000-0001-7265-5047, daniel.tubbenhauer@sydney.edu.au
P.V.: Institut de Recherche en Mathématique et Physique, Université Catholique de Louvain, Chemin du Cyclotron 2, 1348 Louvain-la-Neuve, Belgium, https://perso.uclouvain.be/pedro.vaz, ORCID 0000-0001-9422-4707, pedro.vaz@uclouvain.be
We give explicit formulas for the asymptotic growth rate of
the number of summands in tensor powers in certain monoidal categories
with finitely many indecomposable objects, and related structures.
Mathematics Subject Classification 2020. Primary: 11N45, 18M05; Secondary: 16T05, 18M20, 26A12.
Keywords. Tensor products, asymptotic behavior, monoidal categories.
August 1, 2023
§ INTRODUCTION
Let R=(R,C) be a finite based -algebra with basis C={1=c_0,…,c_r-1} (recalled in <ref> together with some other notions used in this introduction).
Recall that we thus have
c_ic_j=∑_km_i,j^k· c_k with
m_i,j^k∈.
Iterating this gives us coefficients m_i,j,…,l^k∈, and summing
gives us coefficients for all c∈ R.
Fix c∈ R.
We write m_n^∗(c) for these coefficients as
they appear in c^n where ∗∈{0,…,r-1}.
Define
b_n^R,c:=#total sum of coefficients m_n^∗(c).
Moreover, we define the function
b^R,c(n)→,n↦ b_n^R,c.
We are interested in the asymptotic behavior of the function
b^R,c(n). The main question we address is:
Find an explicit formula (n) such that
b^R,c(n)∼(n),
where we write ∼ for asymptotically equal.
We answer <ref> as follows.
For a_i∈, the (transposed) action matrix of c=a_0· c_0+…+a_r-1· c_r-1∈ R is the matrix (∑_ia_im_i,j^k)_k,j. Abusing language, we
will call the
submatrix of it corresponding to the connected component of 1
also the action matrix and use this below.
Assume that the Perron–Frobenius theorem holds, that is
the action matrix of c∈ R has a leading eigenvalue λ_0= c of multiplicity one that we call
the Perron–Frobenius dimension of c.
Moreover, the action matrix
has some period h∈ such that there are precisely h-1 other eigenvalues
λ_i=ζ^i c, and all of these are of multiplicity one, where
ζ=exp(2π i/h). We will drop this assumption in <ref> below.
Let us denote the right (the one with for Mv=λ_i· v_i) and left
(the one with w^T_iM=λ_i· w^T_i) eigenvectors by
v_i and w_i, normalized such that w^T_iv_i=1.
Let v_iw_i^T[1] denote taking the sum of the first column of the matrix
v_iw_i^T.
Define
(n)=
(v_0w_0^T[1]· 1+v_1w_1^T[1]·ζ^n+v_2w_2^T[1]·(ζ^2)^n+…+v_h-1w_h-1^T[1]·(ζ^h-1)^n)
·( c)^n∈.
Let λ^sec be the second largest eigenvalue of the action matrix of c.
We will prove (see <ref> below):
We have
b^R,c(n)∼(n),
and the convergence is geometric with ratio |λ^sec/ c|. In particular,
β^R,c:=lim_n→∞√(b_n^R,c)= c.
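As a quick numerical sanity check (our own illustration; the concrete matrix below is a placeholder, namely the path graph that appears as the action matrix in the k=3 Verlinde example of the next section), one can compare b_n, i.e. the sum of the first column of the n-th power of the action matrix, with the prediction of the theorem:

import numpy as np

def b_exact(M, n):
    # b_n = total number of summands = sum of the column of M^n indexed by the unit object (index 0).
    return int(round(np.linalg.matrix_power(M, n)[:, 0].sum()))

def a_predicted(M, n, tol=1e-9):
    # Sum of v_i w_i^T[1] * lambda_i^n over the eigenvalues of maximal absolute value.
    lam, V = np.linalg.eig(M)    # right eigenvectors (columns)
    mu, W = np.linalg.eig(M.T)   # right eigenvectors of M^T = left eigenvectors of M
    rho = np.max(np.abs(lam))
    total = 0.0 + 0.0j
    for i, l in enumerate(lam):
        if abs(abs(l) - rho) > tol * rho:
            continue                 # keep only the pseudo-dominant eigenvalues
        j = int(np.argmin(np.abs(mu - l)))
        v, w = V[:, i], W[:, j]
        v = v / (w @ v)              # normalise so that w^T v = 1
        total += np.outer(v, w)[:, 0].sum() * l ** n
    return total.real

M = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # placeholder action matrix
for n in (5, 10, 20):
    print(n, b_exact(M, n), a_predicted(M, n))

For this placeholder matrix the Perron–Frobenius eigenvalue is √2 and the period is 2, so both pseudo-dominant eigenvalues ±√2 contribute to the prediction.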
The reason why <ref> is interesting from the
categorical point of view is the following.
For us a finite monoidal category
is a category such that:
* It is monoidal.
* It is additive Krull–Schmidt.
* It has finitely many (isomorphism classes of) indecomposable objects.
Here are a few examples:
* Let G be a finite group and consider (G)=(G,) the category of finite dimensional complex representations of G. This is a prototypical example
of a finite monoidal category.
* More generally, all fusion categories are finite monoidal.
* For a finite group G, and arbitrary field, we can consider
projective (G) or injective (G) representations. These are
finite monoidal categories. More generally, one can take any finite dimensional Hopf algebra instead of a finite group.
* If we assume that a Hopf algebra H is of finite type, then we can even consider (H). An explicit example is the Taft algebra by <cit.>.
* In any additive Krull–Schmidt monoidal category one can take
, the additive idempotent completion of the full subcategory generated by an object X, as long as this has finitely
many indecomposable objects. Explicitly, for a finite group G one can take any two dimensional G-representation for X, which follows from
<cit.>. There are
many more examples, see <cit.>.
* Consider Soergel bimodules (W) as in <cit.>.
These are finite monoidal categories if W=(W,S) is of finite Coxeter type.
There are of course many more examples.
The proof of the following is very easy and hence omitted:
The additive Grothendieck ring of a finite monoidal category
is a finite based -algebra with basis given
by the classes of indecomposable objects.
Fix a finite monoidal category
C and an object X∈C.
Following <cit.>, we define
b_n^C,X:=#indecomposable summands in X^⊗ n counted with multiplicities.
Note that (n) has an analog in this context, denoted by the same symbol, obtained for
the (transposed) action matrix for left tensoring. Similarly as before we also have λ^sec.
We then get:
Under the same assumption as in <ref>, we have
b^C,X(n)∼(n),
and the convergence is geometric with ratio |λ^sec/X|.
In particular,
β^C,X:=lim_n→∞√(b_n^C,X)=X.
From <ref> and <ref>.
In the next section we will discuss examples of <ref>,
and then we will prove <ref>.
We also generalize these two theorems in <ref> by getting rid of
the assumption on the action matrix.
Before that,
let us finish the introduction with some (historical) remarks.
* To study asymptotic properties of tensor powers is a rather new
subject and most things are still quite mysterious.
Let us mention a few facts that are known.
An early reference we know is <cit.>, which studies
questions similar to the one in this note but for Lie algebras, and this was carried
on in several works such as <cit.>.
As another example, the paper <cit.> studies the
growth rate of the dimensions of the non-projective part of tensor powers
of a representation of a finite group. More generally, the paper <cit.>
studies, working in certain tensor categories, the growth rates of summands of categorical dimension prime to the underlying characteristic. The paper <cit.>
studies the growth rate of all summands, while <cit.>
studies the Schur–Weyl dual question.
* <ref> and <ref>
generalize <cit.>. For us, one of the main features of that proposition is its simplicity, having a
simple statement and proof. As we will see, the same is true
for <ref> and <ref> as well: Clearly,
the statements themselves are (surprisingly) simple yet general. Moreover,
the proof of <ref>, and therefore the proof
of <ref> as well, is rather straightforward
as soon as the key ideas are in place.
* The second statements in <ref> and <ref> were already
observed in <cit.>, but the (finer) asymptotic behavior appears to be new.
Finally, let us mention that similar
questions have been studied much earlier, see for example <cit.> for
a related notion involving length of projective resolutions, or <cit.>
for counting and Young diagrams.
Acknowledgments.
We like to thank Kevin Coulembier,
Pavel Etingof and Victor Ostrik for
very helpful email exchanges. DT thanks randomness for
giving them/us the key idea underlying this note.
This project was in part sponsored by Université Clermont Auvergne
and Université Catholique de Louvain, which is gratefully acknowledged.
DT was supported by the Australian research council, and
PV was supported by the Fonds de la Recherche Scientifique-FNRS under Grant no. J.0189.23.
§ EXAMPLES
Let us refer to <ref> and <ref> as our main theorems for short.
To underpin the explicit nature of these theorems, we now list examples
of them and also add that all of the below
of and also add that all the below
can be double checked using the code on <cit.>.
That page also contains a (potentially empty) Erratum.
§.§ Finite groups
Let G be a finite group. Given a finite dimensional complex G-representation
V, we denote its character by χ_V. Denote by Z_V(G)⊂ G the
subgroup consisting of elements of g that acts as a scalar on V and by
ω_V(g)∈ the corresponding scalar. If V is simple,
then ω_V is known as the central character of V.
Suppose that V is a faithful G-representation.
Since V is faithful we get that Z_V(G) is a subgroup of Z(G)
and also that the action graph of tensoring with V is connected (in the oriented sense). Then the main theorem implies:
(n)=
(1/#G∑_g∈ Z_V(G)(∑_L∈Gω_L(g)_L)·ω_V(g)^n)·(_V)^n,
where G={simple G-representations}/≅.
This follows directly from the main theorem after recalling the
connection between Perron–Frobenius theory and character theory,
as explained in <cit.>.
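As a worked instance of this formula (our addition; it only uses standard facts about the quaternion group and is consistent with the extraspecial example below): take G=Q_8 and V its unique two dimensional simple representation, which is faithful. Then Z_V(G)=Z(G)={± 1}, ω_V(-1)=-1, and every one dimensional simple L is trivial on -1, so that
∑_L∈Gω_L(1)·dim L=4· 1+2=6, ∑_L∈Gω_L(-1)·dim L=4· 1-2=2,
and therefore (n)=1/8(6+2·(-1)^n)· 2^n, that is, (n)=2^n for n even and (n)=2^{n-1} for n odd.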
If V is not faithful, then the action graph of tensoring with V needs not to be connected,
but that is not an issue in our main theorems.
Thus, the assumption that V is faithful can be easily relaxed.
Alternatively one can prove <ref> using character theory, similarly to
<cit.>. <ref> still generalizes
<cit.>.
Let us give a few explicit examples.
[Dihedral groups]
Let m∈_≥ 3 and let G be
the dihedral group of order 2m.
Let m^'=m/2, if m is even, and
m^'=(m-1)/2, if m is odd. Choose V any faithful
representation of dimension 2 of G. Then <ref>
gives the formulas
(n) =
m+1/2m· 2^n if m is odd,
m+2/2m· 2^n if m is even and m^' is odd,
((m+2)/2m· 1+1/m·(-1)^n)· 2^n if m is even and m^' is even.
Two explicit examples are m∈{4,5} and V is the G-representation corresponding to
rotation by 2π/m. Then:
[Plots of b(n)/(n) (log scale) for m=4 and m=5.]
Here and throughout, we display the graphs of b(n)/(n) in the usual way but log plotted. Moreover, for m=4 we have b(n)=(n) and we will omit plots in case that happens.
The next example can be seen as a m=4 and p>2 version of <ref>.
[Extraspecial groups]
Let p be a prime and m∈_≥ 1.
Recall that a p-group of order p^1+2m is called
extraspecial if its center Z(G) is of order
p and the quotient G/Z(G) is a p-elementary
abelian group. For each p and m, there exists
two isomorphism classes of extraspecial groups of
order p^1+2m, and they have the same character table. Thus,
by <ref> we can take any of these two without difference.
In the special case p=2 and
m=1, we recover the dihedral group and the
quaternion group of order 8.
Fix now an extraspecial group G of order p^1+2m. The simple G-representations are given as follows:
* There are p^2m nonisomorphic one dimensional representations that arise from the representations of G/Z(G).
* There are p-1 nonisomorphic irreducible representations of dimension p^m which are characterized by their central character.
Choose V any of the simple G-representation of dimension p^n. Then Z_V(G)=Z(G) and
<ref> gives
(n)=
(p^m)^n if p| n,
(p^m)^n-1 otherwise.
It turns out that this formula is not only asymptotic: we have b(n)=(n).
This is due to the fact that the character of V vanishes outside of Z(G).
[Imprimitive complex reflection groups]
Let d and m be integers in _≥ 1 and consider the imprimitive complex reflection group G=G(d,1,m).
Choose V the standard representation given by the matrix description of G.
Then <ref> gives a formula akin to
<cit.> which we decided not to write down as it is a bit tedious.
In any case, for the special cases (d,m)=(1,3), (d,m)=(2,3) and (d,m)=(2,4) we get
(n)=2/3· 3^n for (d,m)=(1,3), (n)=5/12· 3^n for (d,m)=(2,3), and (n)=(19/96· 1+1/32· (-1)^n)· 4^n for (d,m)=(2,4).
We get the corresponding plots: [Plots of b(n)/(n) (log scale) for (d,m)=(2,3) and (d,m)=(2,4).]
Moreover, the formula (n)=2/3· 3^n is exact for d=1 and m=3.
§.§ Fusion categories
This section discusses Fusion categories over different from (G).
[Fibonacci category]
Let F be the Fibonacci category, see for example
<cit.> where
F is denoted 𝒴ℒ_+
(or 𝒴ℒ_-, depending on conventions). All we need to know is that F
is ⊗-generated by one object X with action matrix
M(X)=
[ 0 1; 1 1 ].
We want to estimate b^F,X(n). To this end,
the eigenvalues of M(X) are the two roots of x^2=x+1, in particular,
X=ϕ, the golden ratio. Its Perron–Frobenius eigenvectors
are v=w=((√(5)-1)/√(10-2√(5)), √((√(5)+5)/10))^T and we therefore get
(n)=
1/10(√(5)+5)·ϕ^n=1/√(5)ϕ^{n+1},
from the main theorem. [Plot of b(n)/(n) (log scale).]
Note that the classical asymptotic for the Fibonacci numbers is 1/√(5)·ϕ^n and not 1/√(5)·ϕ^{n+1}, but
b^F,X(n) is equal to the (n+1)th
Fibonacci number, hence the off-by-one in the exponent.
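As a quick check of this closed form (our own snippet, not from the paper), one can compare the summand count with ϕ^{n+1}/√5 numerically:

import numpy as np

M = np.array([[0, 1], [1, 1]], dtype=np.int64)  # action matrix of X in the Fibonacci category
phi = (1 + 5 ** 0.5) / 2
for n in (5, 10, 30):
    b_n = int(np.linalg.matrix_power(M, n)[:, 0].sum())  # summands in the n-th tensor power of X
    print(n, b_n, phi ** (n + 1) / 5 ** 0.5)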
[Verlinde category]
We now consider the Verlinde category for k∈_≥ 2, see for example <cit.> (denoted differently therein). This fusion category has k simple objects, and we
take the generating object X of categorical dimension
2cos(π/(k+1)). The case k=2 compares to super vector spaces.
The action matrix for X has the type A Dynkin diagram as its associated graph, and the eigenvalues and eigenvectors of this graph are well-known, see
for example <cit.>. In particular, X=2cos(π/(k+1)).
Let q=exp(π i/(k+1)).
Then the main theorem gives us
(n)=
[1]_q+…+[k]_q/[1]_q^2+…+[k]_q^2·(2cos(π/(k+1)))^n if k is even,
([1]_q+…+[k]_q/[1]_q^2+…+[k]_q^2· 1 + [1]_q-[2]_q+…-[k-1]_q+[k]_q/[1]_q^2+…+[k]_q^2· (-1)^n)·(2cos(π/(k+1)))^n if k is odd.
Here [a]_q denotes the ath quantum number evaluated at q. We get, for example:
[Plots of b(n)/(n) (log scale) for k=4, k=6, k=7 and k=9.]
Moreover, for k∈{3,5} the formula (n) is spot on.
[Higher rank Verlinde categories]
Verlinde categories can be defined for all simple Lie algebras as
quotients of representations of quantum groups at a root of unity
as explained in <cit.>.
Let us focus in this example on , the one for the special
linear group of rank three (with k determined as e in <cit.>).
For we take the generating object X corresponding to
the vector representation of SL_3(). Its action matrix
is the oriented version of the graph displayed in <cit.> with the orientation as in <cit.>. Using this, and omitting k=1 since this is trivial, gives
k=2: (n)=1/10(√(5)+5)·ϕ^n, k=3: (n)=1/2· 2^n, k=4: (n)=1/7(2+2cos(3π/7))·(1+2cos(2π/7))^n.
[Plots of b(n)/(n) (log scale) for k=2 and k=4.]
For k=3 the displayed formulas are exact.
Moreover, one can find (n) explicitly in general using the formulas
in <cit.> or <cit.>.
§.§ Nonsemisimple examples
We now discuss to two nonsemisimple examples.
[ in defining characteristic]
Let our ground field be for some prime p>2. We consider
the finite group and its representations over .
Take V=^2 to be the vector representation of
. In this case the action matrices are exemplified by
p=3[ 0 1 0 0 0; 1 0 1 0 0; 0 0 0 1 0; 0 0 3 0 1; 0 0 0 1 0; ]
,
p=5[ 0 1 0 0 0 0 0 0 0; 1 0 1 0 0 0 0 0 0; 0 1 0 1 0 0 0 0 0; 0 0 1 0 1 0 0 0 0; 0 0 0 0 0 1 0 0 0; 0 0 0 0 2 0 1 0 0; 0 0 0 0 0 1 0 1 0; 0 0 0 0 1 0 1 0 1; 0 0 0 0 0 0 0 1 0; ]
,
p=7[ 0 1 0 0 0 0 0 0 0 0 0 0 0; 1 0 1 0 0 0 0 0 0 0 0 0 0; 0 1 0 1 0 0 0 0 0 0 0 0 0; 0 0 1 0 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 1 0 0 0 0 0 0 0; 0 0 0 0 1 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 2 0 1 0 0 0 0; 0 0 0 0 0 0 0 1 0 1 0 0 0; 0 0 0 0 0 0 0 0 1 0 1 0 0; 0 0 0 0 0 0 0 0 0 1 0 1 0; 0 0 0 0 0 0 1 0 0 0 1 0 1; 0 0 0 0 0 0 0 0 0 0 0 1 0; ]
.
These can be described as follows.
The matrix is the one obtained as a (2p-1)-by-(2p-1) cut-off of the matrix
for the infinite group over that can be obtained from <cit.>, together with an extra entry 1 in position (2p-2,p).
Then the main theorem gives
(n)=(1/2p-2· 1+1/2p^2-2p·(-1)^n)· 2^n.
Explicitly, for p∈{3,5} we get
[Plots of b(n)/(n) (log scale) for p=3 and p=5.]
The convergence is rather slow (but still geometric).
[Dihedral Soergel bimodules]
We now look at the
category of dihedral Soergel bimodules
as studied in details in, for example, <cit.>,
<cit.> or <cit.>.
In particular, <cit.> lists all the formulas
relevant for . To get a finite based -algebra we collapse the grading, meaning
we specialize <cit.> at q=1.
Fix ⟨ s,t|s^2=t^2=(st)^m=1⟩ as the presentation
for the dihedral group of order 2m where m∈_≥ 3.
Let us take X to be the Bott–Samelson generator for st.
By the explicit
formulas in <cit.>,
the action graph of X is almost the same as the action graph
of tensoring with ^3 as a SO_3()-representation.
The first ones are (read from left to right):
[Action graphs for m∈{3,5,7} and m∈{4,6,8}.]
The pattern generalizes. It is then easy to show that
the leading eigenvalue is always 4 and
the absolute values of all other eigenvalues
are strictly smaller. Moreover, the main theorem gives:
(n)=1/2m· 4^n,
[Plots of b(n)/(n) (log scale) for m=3 and m=7.]
The rate of convergence is rather slow for m≫ 0.
§ GENERALIZATIONS AND PROOFS
We will prove several versions of <ref>.
§.§ Perron–Frobenius theory
We start with the main player, the Perron–Frobenius theorem.
To this end, recall that one can associate
an oriented and weighted graph, its adjacency graph, to an m-by-m matrix
M=(m_ij)_1≤ i,j≤ m∈ as follows:
* The vertices are {1,…,m}.
* There is an edge with weight m_ij from i to j.
We call a nonzero matrix
M∈irreducible if its associated graph
is connected in the oriented sense (this is called strongly connected).
Recall that in this note a right eigenvector satisfies Mv=λ· v,
and a left eigenvector satisfies wM=λ· w.
[Perron–Frobenius theorem part I]
Let M∈ be irreducible.
* M has a Perron–Frobenius eigenvalue, that is,
λ∈_>0 such that λ≥|μ| for all other eigenvalues μ.
This eigenvalue appears with multiplicity one, and all other eigenvalues
with λ=|μ| also appear with multiplicity one.
* There exists h∈_≥ 1, the period, such that all eigenvalues
μ with λ=|μ| are exp(k2π i/h)λ for k∈{0,…,h-1}.
We call these pseudo-dominant eigenvalues.
* The eigenvectors, left and right, for the Perron–Frobenius eigenvalues can be normalized to have values in .
Well-known. See for example, Frobenius' paper 92 in Band 3 of
<cit.>. (This is the paper “Über Matrizen aus nicht negativen Elementen”.)
Fix a function f→. We say that f(n) converges geometrically
to a∈ with ratio β∈[0,1) if for all γ∈(β,1) we have that
{(f(n)-a)/γ^n}_n∈ is bounded.
The following accompanies <ref>:
[Perron–Frobenius theorem part II]
Let M∈ be irreducible, λ be its Perron–Frobenius eigenvalue and h be its period. Let ζ=exp(2π i/h).
For each k∈{0,…,h-1}, choose a right eigenvector v_k and a left eigenvector w_k with eigenvalue ζ^kλ, normalized such that w_k^Tv_k=1.
Then we have:
M^n∼ v_0w_0^T·λ^n
+v_1w_1^T·(ζλ)^n
+v_2w_2^T·(ζ^2λ)^n+…
+v_h-1w_h-1^T·(ζ^h-1λ)^n.
Moreover, the convergence is geometric with ratio |λ^sec/λ|, where
λ^sec is any second largest (in the sense of absolute value) eigenvalue.
This is known, but proofs are a bit tricky to find in the literature, so we give one.
The proof also shows where the vectors v_i and w_j come from.
For any μ∈, let V_μ be the generalized eigenspace associated to the eigenvalue μ. Then we have
^m≅⊕_k=0^h-1V_ζ^kλ⊕⊕_μ,|μ|<λV_μ.
By <ref>, the space V_ζ^kλ is the eigenspace associated to the eigenvalue ζ^kλ and v_kw_k^T is the projection onto that subspace.
This implies that we have
M^n=v_0w_0^T·λ^n
+v_1w_1^T·(ζλ)^n
+v_2w_2^T·(ζ^2λ)^n+…
+v_h-1w_h-1^T·(ζ^h-1λ)^n
+R(n),
where R(n) is the multiplication action of M^n onto the rest. Since the eigenvalues μ of M on the rest satisfy |μ|<λ, we have R(n)/λ^n→_n→∞0 geometrically with ratio |λ^sec/λ|.
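As an illustration (not taken from the original text), the statement is easy to check numerically for a concrete irreducible matrix: diagonalize, keep only the pseudo-dominant eigenvalues, and watch the relative error of the truncation decay geometrically. A minimal numpy sketch, assuming for simplicity that M is diagonalizable:

```python
import numpy as np

def pf_approximation(M, n, tol=1e-9):
    """Sum of v_k w_k^T (zeta^k lambda)^n over the pseudo-dominant eigenvalues,
    as in the statement above; assumes M is irreducible and diagonalizable."""
    evals, V = np.linalg.eig(M)
    Vinv = np.linalg.inv(V)              # row k of Vinv is the left eigenvector w_k
    lam = np.max(np.abs(evals))          # Perron-Frobenius eigenvalue
    approx = np.zeros(M.shape, dtype=complex)
    for k, mu in enumerate(evals):
        if abs(abs(mu) - lam) < tol:     # keep only |mu| = lambda
            # np.outer(V[:, k], Vinv[k]) is v_k w_k^T with w_k^T v_k = 1.
            approx += np.outer(V[:, k], Vinv[k]) * mu ** n
    return approx.real, lam

# A small irreducible example with one strictly subdominant eigenvalue.
M = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
for n in (5, 10, 20):
    approx, lam = pf_approximation(M, n)
    err = np.abs(np.linalg.matrix_power(M, n) - approx).max()
    print(n, err / lam ** n)             # relative error decays geometrically
```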
For a general matrix M∈ things change, but not too much:
[Perron–Frobenius theorem part III]
Let M∈.
* M has a Perron–Frobenius eigenvalue, that is,
λ∈ such that λ≥|μ| for all other eigenvalues μ.
* Let s be the multiplicity of the Perron–Frobenius eigenvalue.
There exists (h_1,…,h_s)∈_≥ 1^s, the periods, such that all eigenvalues μ with λ=|μ| are exp(k2π i/h_i)λ for k∈{0,…,h_i-1}, for some period. We call these pseudo-dominant eigenvalues.
* The eigenvectors, left and right, for the Perron–Frobenius eigenvalues can be normalized to have values in .
* Let h=lcm(h_1,…,h_s), and
let ν be the maximal dimension of the Jordan blocks of M containing λ.
There exist matrices
S^i(n) with polynomial entries of degree ≤(ν-1) for i∈{0,…,h-1} such that
lim_{n→∞}|(M/λ)^{hn+i}-S^i(n)| = 0
∀ i∈{0,…,h-1},
and the convergence is geometric with ratio |λ^sec/λ|^h.
There are also explicit formulas for the matrices S^i(n), see <cit.>.
This can be found in <cit.>. See also
<cit.> (in the second version) for a useful list of properties of nonnegative matrices.
§.§ Three versions of the
We recall based algebras. These algebras originate in work of Lusztig on
so-called special representations of Weyl groups <cit.>.
We follow <cit.> with our definition.
Let ⊂ be a unital subring. A -algebra R with a finite
-basis C={1=c_0,…,c_r-1} is called a finite
based -algebra if all structure constants are in
with respect to the basis C. That is, <ref> holds.
Examples include:
* The Grothendieck rings of all the examples in <ref>.
* Group or more general semigroup algebras for finite groups or semigroups.
* There are many interesting infinite examples coming from
skein theory, see <cit.>.
Decategorifications are our main examples where
can be replaced by .
A finite based -algebra is actually a pair (R,C), but we will
write R for short. Next, fix such an R and c∈ C.
In this setting we can define the (pre)action matrix M^'(c)_k,j=∑_i a_i m_i,j^k∈.
The action matrixM(c) is then the adjacency matrix for the
connected component, in the nonoriented sense, of the identity 1∈ C in the adjacency graph of M^'(c).
Note that M(c)∈ is a submatrix of M^'(c)∈[r]
for some 1≤ m≤ r.
We give three versions of , stated in terms of finite based -algebras.
The categorical version then follows immediately from <ref>.
[Version 1]
Fix a finite based -algebra R, and c a -linear combination of elements
from C. Assume that the action matrix M(c) is irreducible.
Then <ref> holds with (n) as in <ref>.
Consider the following matrix equation:
M(c)c(n-1)=c(n),
where c(k)=(c_0(k),…,c_r-1(k))∈^r are
vectors such that their ith entry is the
multiplicity of c_i in c^k, and c(0)=(1,0,…,0)^T with the
one in the slot of c_0=1. This equation holds by
the definition of the action matrix.
Iterating this process, we get
M(c)^nc(0)=c(n).
Note that M(c)^nc(0) is the same as taking the first column of M(c)^n.
Hence,
b^R,c(n)=M(c)^n[1]
in the notation of the introduction. Thus, <ref> implies the result.
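In code, the proof is nothing but repeated matrix-vector multiplication. The following small helper (illustrative only, not from the original text) produces the vectors c(n) for any action matrix:

```python
import numpy as np

def multiplicity_vectors(M, n_max):
    """Return [c(0), c(1), ..., c(n_max)] with c(n) = M^n c(0), using the
    recursion M c(n-1) = c(n) from the proof above; c(0) = e_1 sits in the
    slot of the identity c_0 = 1, and b(n) is then read off from c(n)
    following the notation of the introduction."""
    c = np.zeros(M.shape[0])
    c[0] = 1.0
    out = [c.copy()]
    for _ in range(n_max):
        c = M @ c
        out.append(c.copy())
    return out
```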
<ref> is sufficient for many examples. Explicitly,
<ref> works for all transitive finite based -algebras.
Examples include all finite monoidal categories that are rigid by
<cit.>.
We say that M∈ has the Perron–Frobenius property
if its Perron–Frobenius eigenvalue has multiplicity one.
[Version 2]
Fix a finite based -algebra R, and c an -linear combination of elements
from C. Assume that the action matrix M(c) has the Perron–Frobenius property.
Then <ref> holds with (n) as in <ref>.
The iteration works as in the proof of <ref>, so let us focus on the growth rate. We will use <ref> for s=1. This implies
that ν=1, by its definition. In particular, we only have
S^i(n) with entries of degree zero; these matrices do not depend on n, so we simply write S^i. We will argue that they
are essentially the matrices v_iw_i^T.
Precisely, as follows from <cit.>, we have
S^i =
v_0w_0^T
+v_1w_1^T·ζ^i
+v_2w_2^T·ζ^2i+…
+v_h-1w_h-1^T·ζ^(h-1)i
=
v_0w_0^T
+v_1w_1^T·ζ^nh+i
+v_2w_2^T·ζ^2(nh+i)+…
+v_h-1w_h-1^T·ζ^(h-1)(nh+i)
.
Now we apply <ref>.(c).
<ref> is the version we used in <ref>.
Recall that the polynomials S^i(n) are explicitly given in
<cit.> and define:
(n)=
1/h∑_i=0^h-1∑_j=0^h-1S^j(⌊ n/h⌋)·ζ^i(n-j)
.
[Version 3]
Fix a finite based -algebra R, and c an -linear combination of elements
from C. Then <ref> holds with (n) as in <ref>.
Observing that 1+ζ^i+…+ζ^(h-1)i=0 if i≢0 (mod h), this follows as for the previous theorems.
[Alp79]Al-proj-sl2
J.L. Alperin.
Projective modules for SL(2, 2n).
J. Pure Appl. Algebra, 15(3):219–234, 1979.
URL: https://doi.org/10.1016/0022-4049(79)90017-3doi:10.1016/0022-4049(79)90017-3.
[AE81]AlEv-representations-quillen
J.L. Alperin and L. Evens.
Representations, resolutions and Quillen's dimension theorem.
J. Pure Appl. Algebra, 22(1):1–9, 1981.
URL: https://doi.org/10.1016/0022-4049(81)90079-7doi:10.1016/0022-4049(81)90079-7.
[AP95]AnPa-fusion-lie-algebras
H.H. Andersen and J. Paradowski.
Fusion categories arising from semisimple Lie algebras.
Comm. Math. Phys., 169(3):563–588, 1995.
URL: https://doi.org/10.1007/BF02099312doi:10.1007/BF02099312.
[BS20]BeSy-non-projective-part
D. Benson and P. Symonds.
The non-projective part of the tensor powers of a module.
J. Lond. Math. Soc. (2), 101(2):828–856, 2020.
URL: <https://arxiv.org/abs/1902.02895>, https://doi.org/10.1112/jlms.12288doi:10.1112/jlms.12288.
[Bia93]Bi-asymptotic-lie
P. Biane.
Estimation asymptotique des multiplicités dans les puissances
tensorielles d'un 𝔤-module.
C. R. Acad. Sci. Paris Sér. I Math., 316(8):849–852, 1993.
[CVOZ14]ChVaOyZh-green-taft
H. Chen, F. Van Oystaeyen, and Y. Zhang.
The Green rings of Taft algebras.
Proc. Amer. Math. Soc., 142(3):765–775, 2014.
URL: <https://arxiv.org/abs/1111.1837>, https://doi.org/10.1090/S0002-9939-2013-11823-Xdoi:10.1090/S0002-9939-2013-11823-X.
[CEO23a]CoEtOs-growth-mod-p
K. Coulembier, P. Etingof, and V. Ostrik.
Asymptotic properties of tensor powers in symmetric tensor
categories.
2023.
URL: <https://arxiv.org/abs/2301.09804>.
[CEO23b]CoEtOs-frobenius-exact
K. Coulembier, P. Etingof, and V. Ostrik.
On Frobenius exact symmetric tensor categories.
Ann. of Math. (2), 197(3):1235–1279, 2023.
With Appendix A by Alexander Kleshchev.
URL: <https://arxiv.org/abs/2107.02372>, https://doi.org/10.4007/annals.2023.197.3.5doi:10.4007/annals.2023.197.3.5.
[COT23]CoOsTu-growth
K. Coulembier, V. Ostrik, and D. Tubbenhauer.
Growth rates of the number of indecomposable summands in tensor
powers.
2023.
URL: <https://arxiv.org/abs/2301.00885>.
[Cra13]Cr-tensor-simple-modules
D.A. Craven.
On tensor products of simple modules for simple groups.
Algebr. Represent. Theory, 16(2):377–404, 2013.
URL: <https://arxiv.org/abs/1102.3447>, https://doi.org/10.1007/s10468-011-9311-5doi:10.1007/s10468-011-9311-5.
[Eli16]El-two-color-soergel
B. Elias.
The two-color Soergel calculus.
Compos. Math., 152(2):327–398, 2016.
URL: <https://arxiv.org/abs/1308.6611>, https://doi.org/10.1112/S0010437X15007587doi:10.1112/S0010437X15007587.
[EGNO15]EtGeNiOs-tensor-categories
P. Etingof, S. Gelaki, D. Nikshych, and V. Ostrik.
Tensor categories, volume 205 of Mathematical Surveys and
Monographs.
American Mathematical Society, Providence, RI, 2015.
URL: https://doi.org/10.1090/surv/205doi:10.1090/surv/205.
[Fro68]Fr-werke
F.G. Frobenius.
Gesammelte Abhandlungen. Bände I, II, III. Springer-Verlag, Berlin-New York, 1968.
[Hog07]Ho-handbook-linear-algebra
L. Hogben, editor.
Handbook of linear algebra.
Discrete Mathematics and its Applications (Boca Raton). Chapman &
Hall/CRC, Boca Raton, FL, 2007.
Associate editors: R. Brualdi, A. Greenbaum and R. Mathias.
URL: https://doi.org/10.1201/b16113doi:10.1201/b16113.
[KM16]KiMa-based-algebras
T. Kildetoft and V. Mazorchuk.
Special modules over positively based algebras.
Doc. Math., 21:1171–1192, 2016.
URL: <https://arxiv.org/abs/1601.06975>.
[KST22]KhSiTu-monoidal-cryptography
M. Khovanov, M. Sitaraman, and D. Tubbenhauer.
Monoidal categories, representation gap and cryptography.
2022.
To appear in Trans. Amer. Math. Soc.
URL: <https://arxiv.org/abs/2201.01805>.
[LTV23]LaTuVa-code-growth
A. Lacabanne, D. Tubbenhauer, and P. Vaz.
Code and erratum on GitHub for the paper Asymptotics in finite
monoidal categories.
2023.
URL: <https://github.com/dtubbenhauer/growth-pfdim>.
[LS77]LoSh-random-young
B.F. Logan and L.A. Shepp.
A variational problem for random Young tableaux.
Advances in Math., 26(2):206–222, 1977.
URL: https://doi.org/10.1016/0001-8708(77)90030-5doi:10.1016/0001-8708(77)90030-5.
[Lus79]Lu-irreps-weyl-I
G. Lusztig.
A class of irreducible representations of a Weyl group.
Nederl. Akad. Wetensch. Indag. Math., 82(3):323–335, 1979.
URL: https://doi.org/10.1016/1385-7258(79)90036-2doi:10.1016/1385-7258(79)90036-2.
[MMMT20]MaMaMiTu-trihedral
M. Mackaay, V. Mazorchuk, V. Miemietz, and D. Tubbenhauer.
Trihedral Soergel bimodules.
Fund. Math., 248(3):219–300, 2020.
URL: <https://arxiv.org/abs/1804.08920>, https://doi.org/10.4064/fm566-3-2019doi:10.4064/fm566-3-2019.
[MT19]MaTu-soergel
M. Mackaay and D. Tubbenhauer.
Two-color Soergel calculus and simple transitive
2-representations.
Canad. J. Math., 71(6):1523–1566, 2019.
URL: <https://arxiv.org/abs/1609.00962>, https://doi.org/10.4153/CJM-2017-061-2doi:10.4153/CJM-2017-061-2.
[PR20]PoRe-mult-large-tensor-powers
O. Postnova and N. Reshetikhin.
On multiplicities of irreducibles in large tensor product of
representations of simple Lie algebras.
Lett. Math. Phys., 110(1):147–178, 2020.
URL: <https://arxiv.org/abs/1812.11236>, https://doi.org/10.1007/s11005-019-01217-4doi:10.1007/s11005-019-01217-4.
[Rot81]Ro-expansion-sums-matrix-powers
U.G. Rothblum.
Expansions of sums of matrix powers.
SIAM Rev., 23(2):143–164, 1981.
URL: https://doi.org/10.1137/1023036doi:10.1137/1023036.
[Smi70]Sm-ADE
J.H. Smith.
Some properties of the spectrum of a graph.
In Combinatorial Structures and their Applications (Proc.
Calgary Internat. Conf., Calgary, Alta., 1969), pages 403–406.
Gordon and Breach, New York, 1970.
[Soe92]So-hcbim
W. Soergel.
The combinatorics of Harish-Chandra bimodules.
J. Reine Angew. Math., 429:49–74, 1992.
URL: https://doi.org/10.1515/crll.1992.429.49doi:10.1515/crll.1992.429.49.
[STWZ23]SuTuWeZh-mixed-tilting
L. Sutton, D. Tubbenhauer, P. Wedrich, and J. Zhu.
Sl2 tilting modules in the mixed case.
Selecta Math. (N.S.), 29(3):39, 2023.
URL: <https://arxiv.org/abs/2105.07724>, https://doi.org/10.1007/s00029-023-00835-0doi:10.1007/s00029-023-00835-0.
[Thu14]Th-positive-skein
D.P. Thurston.
Positive basis for surface skein algebras.
Proc. Natl. Acad. Sci. USA, 111(27):9725–9732, 2014.
URL: <https://arxiv.org/abs/1310.1959>, https://doi.org/10.1073/pnas.1313070111doi:10.1073/pnas.1313070111.
[Tub22]Tu-sandwich-cellular
D. Tubbenhauer.
Sandwich cellularity and a version of cell theory.
2022.
To appear in Rocky Mountain J. Math.
URL: <https://arxiv.org/abs/2206.06678>.
[Zub98]Zu-gen-dynkin-diagrams
J.-B. Zuber.
Generalized Dynkin diagrams and root systems and their folding.
In Topological field theory, primitive forms and related topics
(Kyoto, 1996), volume 160 of Progr. Math., pages 453–493.
Birkhäuser Boston, Boston, MA, 1998.
URL: <https://arxiv.org/abs/hep-th/9707046>.
https://doi.org/10.1007/978-1-4612-0705-4_16doi:10.1007/978-1-4612-0705-4_16.
|
http://arxiv.org/abs/2307.01971v1
|
20230705005120
|
Lifshitz model in the presence of spin-orbit coupling
|
[
"M. E. Raikh"
] |
cond-mat.dis-nn
|
[
"cond-mat.dis-nn"
] |
Department of Physics and
Astronomy, University of Utah, Salt Lake City, UT 84112
The wave function of a localized state created by a
short-range impurity in two dimensions falls off with distance, r,
from the impurity as 1/r^1/2exp(-r/a), where a
is the localization radius. With randomly positioned identical impurities
with low concentration, n ≪ a^-2, the level smears into a band
due to the overlap of the impurity wave functions. This is the essence
of the Lifshitz model. We demonstrate that, upon incorporation of the spin-orbit coupling, the impurity wave functions acquire oscillating
factors which, subsequently, modify their overlap.
As a result of such modification,
the density of states develops singularities at certain energies.
73.50.-h, 75.47.-m
Lifshitz model in the presence of spin-orbit
coupling
M. E. Raikh
August 1, 2023
========================================================
§ INTRODUCTION
The conventional Lifshitz model<cit.> describes single-electron states in a system of randomly positioned identical short-range
impurities. It is assumed that each impurity creates a localized level with a radius of wave
function, a, much smaller than a typical distance between the neighboring impurities.
In two dimensions this distance is n^-1/2,
where n is the impurity concentration. Due to
overlap of the wave functions of neighboring
impurities, individual levels smear into an impurity band.
The presence of a small parameter na^2≪ 1 allows one to establish
the form of the density of states as a function of E-E_0,
which is the electron energy measured from the level position, E_0.
The key argument, summarized in the book of Ref. <cit.>, for why this density can be found analytically is that the eigenstates
of the disordered system are composed of pairs of impurities which
hybridize the respective wave functions. As a result of this hybridization
one of the levels shifts up from E_0, while the other level shifts down.
Thus, the density of states in the Lifshitz model has the form of two peaks.
Assume now that the bare Hamiltonian of the 2D system contains a spin-orbit
term e.g. of the Rashba type.<cit.> This certainly does not have an effect on the
density of states if the spin splitting of the bare band is much smaller
than E_0. In the opposite limit, the localized states
are classified according to helicity. Importantly,
the wave functions of the short-range impurities get modified dramatically by spin-orbit coupling compared to a
simple exponential decay.<cit.>
The main message of the present
paper is that in the limit of strong spin-orbit coupling the shape of the density of states in the Lifshitz model changes dramatically, namely, it develops singularities at certain energies.
§ A SINGLE SHORT-RANGE IMPURITY IN THE PRESENCE OF SPIN-ORBIT COUPLING
The density of states of free electrons with a quadratic Hamiltonian
Ĥ_0=ħ^2 k^2/2m, where m is the effective mass and
k is the wave vector, is energy-independent.
Adding a spin-orbit term
Ĥ_SO= α[ k×σ̂]· n,
where σ is the vector of the Pauli matrices, α is the spin-orbit
constant, and n is the normal to the 2D plane, to Ĥ_0
modifies the density of states qualitatively.<cit.>
Indeed, the spectrum of the full Hamiltonian Ĥ_0+Ĥ_SO consists
of two branches
E_1,2( k)=ħ^2k^2/2m∓α| k|.
The corresponding wave functions have the form
Ψ_ k^(1,2)( r)=e^i krχ_ k^(1,2),
where the spinors χ_ k^(1,2) are defined as
χ_ k^(1)=1/√(2)[ e^iΦ_ k; -1 ], χ_ k^(2)=1/√(2)[ 1; e^-iΦ_ k ].
Here Φ_ k is the azimuthal angle of the vector k.
It is crucial that the lower branch of the spectrum
Eq. (<ref>) has a minimum at
k=k_0= mα/ħ^2.
Near this minimum the spectrum can be simplified as
E_1( k)=-Δ +ħ^2(k-k_0)^2/2m,
with the depth of the minimum
Δ=mα^2/2ħ^2=ħ^2k_0^2/2m.
As a result, the wave function of a localized state created by a
point-like impurity is composed of the free-electron state with
energies close to -Δ. The density of these states behaves
as (E+Δ)^-1/2, i.e. it has a one-dimensional character.<cit.>
The calculation of the level position in this setting was carried out in Ref. <cit.>.
We reproduce it below for pedagogical reasons and in order to
generalize it later to the impurity pairs.
Denote with U( r) the impurity potential. The solution of the Schrödinger equation
[Ĥ_0 +Ĥ_SO+ U( r)]Ψ( r)
=E_0Ψ( r)
for the level, E_0, close to -Δ
can be searched in the form of the combination of
the states of the lower branch only
Ψ( r)=∫ d kA( k)χ_ k^(1) e^-i kr.
Substitution of this form into Eq. (<ref>) yields
∫ d kA( k)χ_ k^(1) e^-i kr[E_1( k)-E_0] =-U( r)Ψ(0).
In Eq. (<ref>) we took into account that U( r)
is short-ranged.
Multiplying this equation by χ_ q^(1)*e^i qr
and integrating d r, we get
(2π)^2A( q)(E_1 ( q)-E_0)
=
-(Ψ(0)χ_ q^(1)*)
∫ d rU( r)e^i qr.
Expressing A( q), substituting it into
Eq. (<ref>) and setting r=0, we arrive at
the self-consistency equation
Ψ(0)=-1/(2π)^2∫ d qχ_ q^(1)(Ψ(0)χ_ q^(1)*)/E_1( q)-E_0∫ d rU( r)e^i qr.
This equation defines the position of the level, E_0.
For isotropic potential, the dependence on the direction
of q disappears from the integral d r, yielding J_0(qr), where J_0 is the Bessel function. Then
the angular integration over q can be easily performed. Concerning the integral over | q|, it comes from the domain (| q|-k_0)≪ k_0.
Finally, the solution of Eq. (<ref>) takes the
form<cit.>
ε_0=-Δ-E_0=π^2mk_0^2/2ħ^2[ ∫_0^∞ dr r U( r)J_0(k_0r) ]^2.
The solution is doubly degenerate with respect to the components of respective spinors. The
meaning of ε_0 is the binding energy
measured from the minimum of the spectrum of the
lower branch.
To estimate the binding energy predicted by
Eq. (<ref>), we assume that the magnitude of
the potential is U_0, while the radius of the potential is a. To ensure that the wave function does not change within the interval
r<a, the condition k_0a≪ 1 should be met,
which is equivalent to the replacement of the
Bessel function in the integrand by 1.
Then, within a numerical factor, we get
ε_0∼ (k_0a)^2U_0^2/ħ^2/ma^2.
To test the assumptions made in the course of solving Eq. (<ref>), this binding energy must be much smaller
than the depth ħ^2k_0^2/2m of the minimum in the spectrum
(to justify the integration over q). Also, this binding
energy should be much smaller than U_0 (to replace Ψ( r) by Ψ(0)).
The first requirement leads to the usual condition U_0≪ħ^2/ma^2. The second requirement yields a complementary condition (k_0a)^2U_0≪ħ^2/ma^2, which is
weaker.
The form of the wave function at distances
r≫ a is established from Eqs. (<ref>) and (<ref>). Introducing a wave vector
k_c=(2mε_0/ħ^2)^1/2
and the dimensionless variable z defined as
z= q-k_0/k_c,
and performing the angular integration, we get for
a nonzero component of a spinor
Ψ(r)∝∫_0^∞ dz J_0(k_0r+k_crz)/(z^2+1).
Since characteristic z is ∼ 1, the
localization length of Ψ (r) is r_c∼ k_c^-1.
For r<r_c, the z-dependence in
the argument of the Bessel function can be neglected, so that Ψ(r) ∝ J_0(k_0r). For r>r_c
the Bessel function can be replaced by a large-argument
asymptote: J_0(z)≈( 2/π z)^1/2cos(z-π/4). This yields
Ψ(r)∝cos(k_0r-π/4)/r^1/2exp(-k_c r ).
We see that, unlike the conventional impurities,
the wave function, in addition to the exponential decay, contains an oscillating factor with a period 2π/k_0. This behavior
is illustrated in the figure.
In the next section we translate these oscillations into the
splitting of the levels of two impurities.
§ TWO IMPURITIES
Let the impurities be located at ±1/2ρ, so that the net potential has the
form
Ũ( r)=U( r-ρ/2)+
U( r+ρ/2).
It is straightforward to generalize Eq. (<ref>) to the
case of two impurities
(2π)^2A( q)(E_1 ( q)-E_0)=
-∫ d rU( r)e^i qr×
{ e^i qρ/2(Ψ(ρ/2)χ_ q^(1)*)
+e^-i qρ/2(Ψ(-ρ/2)χ_ q^(1)*)}.
Expressing A( q) from Eq. (<ref>), substituting
it into Eq. (<ref>) and setting r=ρ/2 and r=-ρ/2,
we arrive at the system of equations for
Ψ(ρ/2) and
Ψ(-ρ/2).
To cast this system in a concise form we take
advantage of the fact that the distance, ρ,
between the impurities is much bigger than a,
so that the splitting of the levels due to
the overlap of the impurity wave functions is
smaller than ε_0 given by
Eq. (<ref>). We introduce the
following notations for the elements of the spinors Ψ(ρ/2)
and Ψ(-ρ/2)
Ψ(ρ/2)=
[ a_1; b_1 ], Ψ(-ρ/2)=
[ a_2; b_2 ].
Then the generalization of Eq.
(<ref>) to the case of two impurities
takes the form
κ a_1 =[J_0(k_0ρ)a_2 -iJ_1(k_0ρ)b_2]exp(-k_c ρ),
κ b_1 = [iJ_1(k_0ρ)a_2 +J_0(k_0ρ)b_2]exp(-k_c ρ),
κ a_2 =[J_0(k_0ρ)a_1 -iJ_1(k_0ρ)b_1]exp(-k_c ρ),
κ b_2 = [iJ_1(k_0ρ)a_1 +J_0(k_0ρ)b_1]exp(-k_c ρ),
where the parameter κ is related to the
energy, ε, and to the single-impurity binding energy ε_0 as
κ=
(ε/ε_0)^1/2
-1.
The four solutions of the
system Eq. (<ref>) can be easily found, namely
κ^+(ρ)=±[J_0(k_0ρ) +J_1(k_0ρ)]
exp(-k_c ρ),
κ^-(ρ)=±[J_0(k_0ρ) -J_1(k_0ρ)]
exp(-k_c ρ).
Concerning the eigenvectors, their structure can be illustrated e.g. for positive κ^+ when it is symmetric,
namely, a_2=a_1 and b_2=b_1=ia_1.
Eq. (<ref>) allows one to trace how the overlap-induced splitting of single-impurity
levels evolves with the increase of the distance between the impurities. For
k_0ρ≪ 1 the Bessel function
J_0(k_0ρ) should be replaced by 1,
while J_1(k_0ρ) should be replaced by 0.
Then the splitting is of 1D-type and does not
depend on the spin-orbit parameter, k_0.
In the opposite limit k_0ρ≫ 1, substituting the large-argument asymptotes of the Bessel functions into Eq. (<ref>) yields
κ^+(ρ)=±2sin (k_0ρ)/(k_0ρ)^1/2exp(-k_c ρ),
κ^-(ρ)=±2cos( k_0ρ)/(k_0ρ)^1/2exp(-k_c ρ).
The 1/ρ^1/2 behavior of the prefactor
confirms that, at large ρ, the splitting is of the 2D-type and oscillates with distance
due to the spin-orbit coupling. Since k_c ≪ k_0,
the splitting of the impurity levels strongly oscillates with distance, ρ, between the impurities.
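To make this oscillatory suppression concrete, the expressions for κ^±(ρ) can be evaluated numerically. The sketch below is illustrative only: the values of k_0 and k_c are not taken from the paper, and only one sign branch of each solution is shown.

```python
import numpy as np
from scipy.special import j0, j1

# Illustrative parameters (not from the paper): strong spin-orbit coupling, k_c << k_0.
k0 = 10.0   # spin-orbit wave vector
kc = 1.0    # inverse localization length

def kappa_plus(rho):
    return (j0(k0 * rho) + j1(k0 * rho)) * np.exp(-kc * rho)

def kappa_minus(rho):
    return (j0(k0 * rho) - j1(k0 * rho)) * np.exp(-kc * rho)

rho = np.linspace(0.05, 5.0, 2000)
# In the large-(k0 rho) limit, kappa_plus vanishes near rho_N = pi N / k0 and
# kappa_minus near pi (N + 1/2) / k0, so the splitting oscillates with distance.
for f, name in ((kappa_plus, "kappa+"), (kappa_minus, "kappa-")):
    vals = f(rho)
    zeros = rho[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
    print(name, "first few sign changes:", np.round(zeros[:5], 3))
```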
§ DENSITY OF STATES
Assume now that the density, n, of the short-range impurities is finite. As a result of the overlap of
single-impurity wave functions, a level with binding energy
ε_0 gets smeared into a band.
A basic assumption of the conventional Lifshitz model,<cit.> is that the width of the band
is much smaller than ε_0. This amounts to
the smallness of the parameter
nk_c^-2≪ 1. For such low densities the overlap-induced splitting Eq. (<ref>) of two levels at a typical distance ρ∼ n^-1/2 contains an exponential factor, exp[-k_cn^-1/2], which is small by virtue of this parameter.
A simplified argument which allows one to establish the shape of the density of states, g(ε), in the vicinity of ε =ε_0 goes as follows.<cit.> A shift of a given single-impurity level will be anomalously small if it has no neighboring impurities within a circle of a certain radius, ρ_ε. This
radius is anomalously large compared to the typical distance between the impurities, i.e. ρ_ε≫ n^-1/2.
With sub-logarithmic accuracy, the condition for ρ_ε reads
exp(-k_cρ_ε)=|ε-ε_0|/ε_0,
where the right-hand side is κ defined by Eq. (<ref>) and taken at
|ε-ε_0|≪ε_0. The probability that the neighboring impurities are absent in the circle with a
radius ρ_ε is equal to
exp(-π n ρ_ε^2 ). Substitution of
ρ_ε found from Eq. (<ref>) into this probability,
yields the exponent in the density of states
g(ε)∝exp[-π n/k_c^2ln^2(ε_0/|ε-ε_0|) ].
Since the ratio n/k_c^2 is small,
Eq. (<ref>) describes a sharp minimum
in a parametrically narrow domain
|ε -ε_0| ∼ε_0exp[-k_c/(π n)^1/2].
To incorporate the spin-orbit coupling, we use the modified splitting Eq. (<ref>) and present the
density of states as a sum
g(ε)=g^+(ε)+g^-(ε),
where g^+ and g^- are defined as
g^+(ε)=∫_0^∞ dρF(ρ)
δ[|ε-ε_0|/2ε_0-4|sin k_0ρ|/(k_0ρ)^1/2 e^-(k_cρ )],
g^-(ε)=∫_0^∞ dρF(ρ)
δ[|ε-ε_0|/2ε_0-4|cos k_0ρ|/(k_0ρ)^1/2 e^-(k_cρ )].
Here F(ρ) is the nearest neighbor distribution
F(ρ)=2π nρexp(-π nρ^2 ).
Our key point is that, due to the spin-orbit coupling,
the level splitting defined by κ^+
turns to zero at certain distances,
ρ_N=π N/k_0, while κ^-
turns to zero at ρ_N+1/2=π (N+1/2)/k_0. Substituting
ρ=ρ_N+δρ_N
into the first equation of Eq. (<ref>), we present
g^+(ε) as a sum
g^+(ε)=∑_N(π N)^1/2e^k_cρ_N/4F(ρ_N)
×∫ dδρ_N δ[ |ε-ε_0|/8ε_0
(π N)^1/2e^k_cρ_N -|sin (k_0δρ_N)| ].
In deriving Eq. (<ref>) we made two assumptions:
δρ_N ≪ρ_N and k_cδρ_N ≪ 1.
As is seen from Eq. (<ref>), the characteristic δρ_N is ∼ k_0^-1, so that the first assumption is valid for N≫ 1, while the second condition is ensured by the relation k_c ≪ k_0, which was assumed above.
The "memory" about the spin-orbit coupling in Eq. (<ref>) is
encoded in the combination sin (k_0δρ_N). Taking the limit k_0→ 0 and replacing the summation
over N by integration, one recovers the result Eq. (<ref>).
To reveal the role of spin-orbit coupling we
perform the integration in Eq. (<ref>) with the help of the δ-function, and find
g^+(ε)=1/4∑_NF(ρ_N)/[exp(-2k_cρ_N)/π N -(ε -ε_0/8ε_0)^2 ]^1/2.
The expression for g^-(ε) has a
similar form with replacement of N by
N+1/2.
Note that each term in Eq. (<ref>) exhibits a square-root
singularity near energies
ε_N^±=ε_0[ 1±8/(π N)^1/2exp(-2π k_c/k_0N ) ].
On the other hand, by virtue of the small parameter k_c/k_0, the intervals {ε_N^+,ε_N^-} for different N overlap. Thus, the overall shape of the density of states is smooth. To uncover the role of the discreteness of N, we transform Eq. (<ref>) using the Poisson summation
∑_NS(N)=∫ dx S(x)+2∑_l∫ dx S(x)
cos(2π lx).
In our case, the function S(x) has a form
S(x)=1/4F(π x/k_0)/[exp(-2π k_cx/k_0)/π x -(ε -ε_0/8ε_0)^2 ]^1/2.
The denominator in Eq. (<ref>) turns to zero
at
x=x_ε≈k_0/2π k_cln(8ε_0/ε-ε_0)^2.
The first term in Eq. (<ref>)
reproduces the standard result Eq. (<ref>)
for the density of states in the absence of spin-orbit coupling. To evaluate the terms with l ≥ 1 one should expand the integrand around x_ε.
This leads to the oscillating spin-orbit component in the density
of states ∝cos(2π lx_ε).
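A rough numerical evaluation of the sums for g^±(ε) makes the square-root singularities and their overlap visible. The parameter values in the sketch below are purely illustrative and not taken from the paper; energies are measured in units of ε_0.

```python
import numpy as np

# Illustrative parameters (not from the paper): k_c << k_0 and n k_c^{-2} << 1.
k0, kc, n_imp, eps0 = 10.0, 1.0, 0.05, 1.0

def F(rho):
    # Nearest-neighbour distribution, as defined above.
    return 2 * np.pi * n_imp * rho * np.exp(-np.pi * n_imp * rho ** 2)

def g_branch(eps, half=0.0, n_max=2000):
    """Evaluate the sum for g^+ (half = 0) or g^- (half = 0.5) at energy eps."""
    N = np.arange(1, n_max + 1) + half
    rho_N = np.pi * N / k0
    radicand = np.exp(-2 * kc * rho_N) / (np.pi * N) - ((eps - eps0) / (8 * eps0)) ** 2
    ok = radicand > 0                    # only these terms contribute
    return 0.25 * np.sum(F(rho_N[ok]) / np.sqrt(radicand[ok]))

for eps in eps0 * (1 + np.linspace(-0.5, 0.5, 11)):
    print(round(eps, 3), g_branch(eps), g_branch(eps, half=0.5))
```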
§ CONCLUDING REMARKS
The prime observation of the present paper is that, in the presence of spin-orbit coupling,
the pairs of impurities located at certain distances from each other do not hybridize their levels. This leads to singularities
in the density of states at certain energies.
There is a certain similarity between the impurity state in a 2D electron gas with strong spin-orbit coupling and the in-gap state created by a magnetic impurity
in a superconductor.<cit.>
Similarly to Eq. (<ref>), the wave function
of the in-gap state oscillates with distance as
cos k_Fr, where k_F is the Fermi momentum.
However, the limit of a low density of impurities, considered in the present paper, is not
relevant for superconductors. Rather, in the relevant regime, there are many impurities within the localization radius determined by the coherence length. In this limit, the self-consistent Born approximation applies, and the two-peak structure of
the density of the impurity states transforms into a
semicircle.<cit.>
20
Lifshitz
I. M. Lifshitz, "Energy spectrum structure and quantum states of disordered condensed
systems," Usp. Fiz. Nauk 83, 617 (1964) [English transl.: Sov. Phys.-Usp. 7, 549
(1964-1965) ].
Gredeskul
I. M. Lifshitz, S. A. Gredeskul, and L. A. Pastur, "Introduction to the Theory of Disordered
Systems," (Willey, New York, 1988).
book
B. I. Shklovskii and A. L. Efros, "Electronic Properties of Doped Semiconductors,"
(Springer, Berlin, 1984).
Rashba Yu. A. Bychkov and E. I. Rashba, "Properties of a 2D electron gas with lifted spectral degeneracy,"
Pis’ma Zh. Eksp. Teor. Fiz. 39, 64 (1984) [JETP Lett.
39, 78 (1984)].
Galstyan
A. G. Galstyan and M. E. Raikh,
"Disorder-induced broadening of the density of states for two-dimensional electrons with strong spin-orbit coupling,"
Phys. Rev. B 58, 6736 (1998).
Chaplik
A. V. Chaplik and L. I. Magarill,
"Bound States in a Two-Dimensional Short Range Potential Induced by the Spin-Orbit Interaction,"
Phys. Rev. Lett. 96, 126402 (2006).
Mkhitaryan
V. V. Mkhitaryan and M. E. Raikh,
"Disorder-induced tail states in gapped bilayer graphene,"
Phys. Rev. B 78, 195409 (2008).
Voloshin B. Skinner, B. I. Shklovskii, and M. B. Voloshin,
"Bound state energy of a Coulomb impurity in gapped bilayer graphene,"
Phys. Rev. B 89, 041405(R) (2014).
Hutchinson
J. Hutchinson and J. Maciejko,
"Unconventional transport in low-density two-dimensional Rashba systems,"
Phys. Rev. B 98, 195305 (2018).
Yu L. Yu, "Bound state in superconductors with paramagnetic impurities," Acta Phys. Sin. 21, 75 (1965).
Shiba
H. Shiba, "Classical Spins in Superconductors,"
Progress of Theoretical Physics, 40, 435 (1968).
Rusinov
A. I. Rusinov, "Superconductivity near a paramagnetic impurity,"
Zh. Eksp. Teor. Fiz. 56, 2047 (1969) [Sov. Phys.
JETP 29, 1101 (1969)].
Glazman
Ya. V. Fominov, M. Houzet, and L. I. Glazman,
"Surface impedance of superconductors with weak magnetic impurities,"
Phys. Rev. B 84, 224517 (2011).
|
http://arxiv.org/abs/2307.03182v1
|
20230706175819
|
Long-term follow-up observations of extreme coronal line emitting galaxies
|
[
"Peter Clark",
"Or Graur",
"Joseph Callow",
"Jessica Aguilar",
"Steven Ahlen",
"Joseph P. Anderson",
"Edo Berger",
"Thomas Brink",
"David Brooks",
"Ting-Wan Chen",
"Todd Claybaugh",
"Axel de la Macorra",
"Peter Doel",
"Alexei Filippenko",
"Jamie Forero-Romero",
"Sebastian Gomez",
"Mariusz Gromadzki",
"Klaus Honscheid",
"Cosimo Inserra",
"Theodore Kisner",
"Martin Landriau",
"Lydia Makrygianni",
"Marc Manera",
"Aaron Meisner",
"Ramon Miquel",
"John Moustakas",
"Tomás E. Müller-Bravo",
"Matt Nicholl",
"Jundan Nie",
"Francesca Onori",
"Antonella Palmese",
"Claire Poppett",
"Thomas Reynolds",
"Mehdi Rezaie",
"Graziano Rossi",
"Eusebio Sanchez",
"Michael Schubnell",
"Gregory Tarlé",
"Benjamin A. Weaver",
"Thomas Wevers",
"David R. Young",
"WeiKang Zheng",
"Zhimin Zhou"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
We present new spectroscopic and photometric follow-up observations of the known sample of extreme coronal line emitting galaxies (ECLEs) identified in the Sloan Digital Sky Survey (SDSS). With these new data, observations of the ECLE sample now span a period of two decades following their initial SDSS detections. We confirm the non-recurrence of the iron coronal line signatures in five of the seven objects, further supporting their identification as the transient light echoes of tidal disruption events (TDEs). Photometric observations of these objects in optical bands show little overall evolution. In contrast, mid-infrared (MIR) observations show ongoing long-term declines. The remaining two objects had been classified as active galactic nuclei (AGN) with unusually strong coronal lines rather than being TDE related, given the persistence of the coronal lines in earlier follow-up spectra. We confirm this classification, with our spectra continuing to show the presence of strong, unchanged coronal-line features and AGN-like MIR colours and behaviour. We have constructed spectral templates of both subtypes of ECLE to aid in distinguishing the likely origin of newly discovered ECLEs. We highlight the need for higher cadence, and more rapid, follow-up observations of such objects to better constrain their properties and evolution. We also discuss the relationships between ECLEs, TDEs, and other identified transients having significant MIR variability.
transients: tidal disruption events, galaxies: active
§ INTRODUCTION
Tidal disruption events (TDEs) are luminous flaring transients produced by the gravitational shredding of a star that passes too close to its galaxy's central supermassive black hole (SMBH). This leads to a portion of the star's mass being accreted onto the disrupting SMBH via an accretion disk, with the remaining material becoming unbound and ejected from the system. Whilst around half of the disrupted star's mass is initially gravitationally bound to the black hole following the disruption <cit.>, the actual amount of material accreted is significantly less as more material becomes unbound as the event evolves <cit.>. It is thought that either the circularisation of the accretion disk or collisions within the infalling material streams (or a combination of both) releases the energy observed as the flaring TDE (e.g., ), though the specifics of the processes are still under debate. SMBHs of <10^8 are expected to be responsible for producing TDEs, as at larger SMBH masses the Roche limit (the radius within which a star will be tidally disrupted) is within the event horizon and so the star is absorbed whole prior to disruption, thereby not producing a transient <cit.>.
TDEs have been observed with a wide range of properties and have been detected through numerous methods across the electromagnetic spectrum. The first events were identified in the 1990s by X-ray surveys, at energies where overall TDE emission is predicted to peak <cit.>. TDEs are now routinely detected by wide-field optical surveys. Examples of such surveys, from which we utilise data in this work, are the Asteroid Terrestrial-impact Last Alert System <cit.>, Pan-STARRS1 <cit.>, and the Zwicky Transient Facility <cit.>. Subsequent follow-up observations have also detected TDEs at radio and infrared wavelengths — for example, <cit.> and <cit.>, respectively.
A literature search reveals that upward of 100 TDE candidates have been identified thus far (e.g., ). However, given the wide range of properties observed, and the varied methods used in their detection, it is still debated whether all candidates identified so far are genuine TDEs or are the result of more than one kind of accretion activity onto an SMBH (e.g., flares from temporary increases in the accretion rate of active galactic nuclei (AGN)).
A small subset of TDE candidates have been identified from residual signatures in the spectra of their host galaxies. Nuclear-focused spectra of these galaxies exhibit strong, narrow emission lines of ionic species more commonly associated with the high-temperature environment of the Solar corona, most notably emission lines produced by high-ionisation states of iron (–). As a result, these objects have been given the classification of `extreme coronal line emitters' or `extreme coronal line emitting galaxies' (ECLEs) <cit.>.
The first ECLE (SDSS J095209.56+214313.3, which we refer to as SDSS J0952) was identified by <cit.>, who noted that the object changed in brightness and overall spectral energy distribution (SED) between photometric observations by the Sloan Digital Sky Survey <cit.> in 2004 and subsequent spectroscopic observations the following year. During this time, the object dimmed to be more consistent with that of near-infrared (NIR) photometry obtained in 1998 by the Two Micron All-Sky Survey <cit.>, whilst displaying a continuum best described by a combination of underlying starlight and an additional power-law component. This spectrum also presented the strong emission lines of highly ionised Fe that subsequently became the hallmark spectral features for the identification of ECLEs. These Fe emission lines had both broad and narrow components and were accompanied by multipeaked Balmer emission lines.
Additionally, ultraviolet (UV) observations obtained two months after the SDSS spectrum by the Galaxy Evolution Explorer <cit.> were found to be significantly brighter than would be expected from host-galaxy starlight alone yet consistent with an extrapolation of the power-law component identified in the continuum of the SDSS spectrum. Follow-up photometry and spectroscopy tracked a decline in luminosity across the electromagnetic spectrum and fading of the observed Fe coronal lines, with the higher ionisation state lines fading more quickly.
<cit.> identified a second similar object (SDSS J074820.67+471214.3 : SDSS J0748). A systematic survey of the seventh data release of the SDSS <cit.> conducted by <cit.> recovered five additional objects showing similar, though not identical, properties, bringing the total number of known ECLEs to seven.
The connection between the appearance of the Fe coronal lines and TDE light echoes was first made by <cit.> through their observations of SDSS J0952. The high ionisation potentials of the highly ionised states of Fe (358 eV for ) require the presence of a soft X-ray continuum. Whilst the process that generates this X-ray continuum in a TDE remains somewhat unclear, modelling indicates the likely source is the resulting accretion disk after the material removed from the disrupted star has circularised around the SMBH <cit.>. This continuum may be obscured by the presence of dense circumnuclear material, which once ionised generates the observed coronal lines.
Several other possible explanations for ECLEs have been suggested, including a new form of AGN variability or an exotic form of supernova (SN). The TDE light-echo explanation for ECLEs has been supported by the long duration of the events. ECLEs have been seen to leave detectable emission-line signatures in their host spectra for at least several years post discovery and continue to display mid-infrared (MIR) evolution over the course of more than a decade, longer than would be expected of other forms of astrophysical transients, such as supernovae <cit.>. The spectroscopic and MIR photometric evolution of ECLEs are both less erratic and larger in amplitude than what is observed in most AGN variability, which is normally seen to be ∼ 0.1 mag in amplitude on timescales of weeks to months <cit.>.
Previously, the most clear connection between ECLEs and TDEs was seen in the discovery spectrum of SDSS J0748. This object was first observed with a broad, strong feature along with broad Hα emission commonly associated with conventional, optically selected TDEs <cit.>. Additionally, a further two objects (SDSS J0952 and J1350) were also initially observed with clearly broad and complex Hα emission features comprised of multiple components, with the broad components fading over time <cit.>.
Recently, the connection between ECLEs and the wider group of optically selected TDEs has become much more evident through observations of the TDE AT 2017gge. This object was classified as a centrally located optical transient with broad H and He spectral features consistent with a TDE <cit.>. AT 2017gge was observed to display a delayed X-ray flare (∼ 200 d post optical discovery) coincident with the emergence of coronal Fe lines (–) that have persisted (with altering line ratios) until at least 1700 d post-discovery <cit.>. It has also been seen to display an outburst in the MIR followed by an ongoing, multiyear decline. AT 2017gge was identified as a `Mid-infrared Outburst in a Nearby Galaxy' (MIRONG) in the sample compiled by <cit.>, with its MIR behaviour found to be consistent with a TDE by <cit.>. The recently classified TDE AT 2022upj has also been observed with Fe coronal emission lines. However, as opposed to AT 2017gge, these lines are present in spectra obtained near maximum light <cit.>.
The long duration of the ECLE spectroscopic signatures, as well as their occurrence not being limited to a specific type of galaxy <cit.>, allows them to serve as a window into the long-term behaviour of the environments surrounding both active and quiescent black holes. This includes cases where the initial TDE was not directly observed — the coronal line signatures of ECLEs can persist long after the TDE is no longer photometrically detectable and after any broad H or He features have faded <cit.>.
Despite the limited sample size, two spectroscopic subclasses of ECLE were suggested by <cit.>: those objects showing emission features (4/7) and those without (3/7). Two scenarios for this subclassification were proposed, with either being collisionally de-excited in some objects having a higher density of circumnuclear material (with the higher ionisation states of Fe not being affected owing to their significantly higher critical densities) or that the soft X-ray radiation field was of sufficient intensity to overionise the line-emitting material, which prevented the formation of the lines.
Follow-up spectra obtained by <cit.> up to 9 yr after the initial SDSS observations found that four objects displayed significant evolution over this period, with the remaining three being spectroscopically non-variable. We note here that this classification does not divide the sample into the same two groups as the initial detection/nondetection of put forward by <cit.>. <cit.> suggested that the three non-variable objects were the result of AGN activity or the tidal disruption of giant stars rather than the disruption of a main-sequence star as is usually suggested for observed TDEs. A subset of AGN have been observed to display spectra with Fe coronal lines, though these lines in AGN are observed at lower intensities than in ECLEs <cit.>. ECLEs have also been observed with line ratios expected of star-forming galaxies rather than those of typical AGN. However, these ratios have been seen to shift to more AGN-like values as the ECLEs evolve.
X-ray observations of one of the original objects, SDSS J134244.41+053056.1 (SDSS J1342), obtained with the Neil Gehrels Swift Observatory <cit.> and XMM-Newton <cit.>, revealed a long-term decline consistent with the t^-5/3 power law expected from accretion events. The authors concluded that this object was consistent with a long-duration TDE by a 10^5 SMBH <cit.>.
Here we present new spectroscopic follow-up observations of all seven of the ECLEs in the <cit.> sample with the time-span between initial observation and these new spectra now approaching two decades. Summary information for all seven objects in this work (including the abbreviated names used and coordinates) is shown in Table <ref>.
This paper is organised in the following manner. Section <ref> outlines the observations and reduction techniques. In Section <ref> we detail the analysis of the new set of follow-up spectra, including updated Baldwin-Phillips-Terlevich <cit.> diagnostics for each object. We use the SDSS discovery spectra of the sample to construct template spectra of both the variable and non-variable ECLEs, and we use these to compare against other SDSS galaxy templates. Additionally, we present updated optical and MIR photometric analyses of the evolution of all ECLEs. Whilst there has been little overall evolution across the sample at optical wavelengths, the majority of ECLEs with variable coronal lines show ongoing MIR declines. In Section <ref>, we discuss the links between ECLEs and other types of transient identified with coronal lines. Finally, in Section <ref>, we present a summary of our main findings.
Throughout, we assume a Hubble constant H_0 = 70 Mpc^-1 and adopt a standard cosmological model with Ω_M=0.27 and Ω_Λ=0.73.
§ OBSERVATIONS AND DATA REDUCTION
§.§ Optical Spectroscopy
We obtained optical spectra with a combination of the 6.5 m MMT <cit.> using the Binospec spectrograph <cit.>; the European Southern Observatory (ESO) 4 m New Technology Telescope (NTT) using the EFOSC2 instrument <cit.> as part of the advanced Public ESO Spectroscopic Survey of Transient Objects <cit.>; the Shane 3 m telescope at Lick Observatory making use of the Kast Double Spectrograph <cit.>; and the Dark Energy Spectroscopic Instrument (DESI) mounted on the Mayall 4 m telescope <cit.>.
The MMT spectrum was reduced using a He-Ne-Ar comparison lamp and flat-field taken immediately after the spectrum, and flux-calibrated using a standard star observed during the night.
NTT + EFOSC2 spectra were obtained through the ePESSTO+ collaboration and reduced using a custom pipeline, applying bias-subtraction, flat-fielding, wavelength and flux calibration, and telluric correction, as described by <cit.>.
The Kast observations utilised a 2”-wide slit, 600/4310 grism, and 300/7500 grating. This instrument configuration has a combined wavelength range of ∼ 3500–10,500 Å, and a spectral resolving power of R ≈ 800. To minimise slit losses caused by atmospheric dispersion <cit.>, the Kast spectra were acquired with the slit oriented at or near the parallactic angle. The Kast data were reduced followed standard techniques for CCD processing and spectrum extraction <cit.> utilising IRAF <cit.> routines and custom Python and IDL codes.[<https://github.com/ishivvers/TheKastShiv>] Low-order polynomial fits to comparison-lamp spectra were used to calibrate the wavelength scale, and small adjustments derived from night-sky lines in the target frames were applied. The spectra were flux-calibrated using observations of appropriate spectrophotometric standard stars observed on the same night, at similar airmasses, and with an identical instrument configuration.
The DESI spectrum of SDSS J1342 was obtained as part of survey validation <cit.>, whilst those of SDSS J0938 and J0952 were taken as part of the bright galaxy survey (BGS) during main survey operations <cit.>. All DESI spectra were processed by the custom DESI spectroscopic pipeline, which includes a full suite of processing and correction steps to provide fully flux- and wavelength-calibrated spectra <cit.>.
The details of the full set of optical spectra obtained for all ECLE candidates are given in Appendix Table <ref>.
As these spectra were obtained over long durations from multiple instruments, there are a few caveats to be aware of. The SDSS and DESI spectra were obtained via fibres placed on the nuclei of the galaxies, whereas the MMT, NTT, and Shane 3 m telescopes obtained long-slit spectra. Furthermore, the SDSS fibres had diameters of 3”, while DESI fibres are smaller at 1.5” in diameter <cit.>. Consequently, DESI spectra will contain less light from the outer regions of the host galaxies despite being centred on the same location. This may act to introduce artificial changes in line fluxes and ratios depending on the line-emitting regions included or excluded by the fibres. The same is true for the long-slit spectra that have been extracted using apertures smaller in area than the SDSS fiber spectra. As described by <cit.>, this will primarily affect starlight contributions and low-ionisation (narrow) lines from any extended star-forming regions rather than the centrally located coronal lines.
The varying resolutions of instruments (in particular the lower resolution of the NTT and Kast spectra) also leads to the artificial broadening of narrow lines which must be considered when making comparisons between the spectra.
§.§ Optical Photometry
Whilst there has been no dedicated long-term photometric follow-up program of the ECLEs, all-sky surveys provide an opportunity to obtain repeated coincidental observations across multiple filters and over an extended period of time. We have explored observations of our sample obtained by the ATLAS, Catalina Real-Time Transient Survey <cit.>, PS1, and ZTF sky surveys. Their photometry is presented largely `as-is' without significant additional processing or filtering unless otherwise stated.
The ATLAS data were retrieved using the ATLAS forced-photometry server <cit.>.[<https://fallingstar-data.com/forcedphot/>] The ATLAS dataset comprises all available observations at the location of each ECLE, averaged to a cadence of 14 days following additional data cleaning to remove any detections of < 3σ significance or other quality issues. Observations were made using the ATLAS broad-band filters `cyan' (c; approximately equivalent to g + r) and `orange' (o; approximately equivalent to r + i). ATLAS observations are available in the range MJD 57230–59974. Whilst the forced-photometry server can provide template-subtracted difference photometry, we do not make use of this option, instead using the direct source photometry to allow for like-to-like comparisons between photometry from other sources for which difference photometry is not available. Additionally, as ATLAS-specific photometric extinction coefficients are unavailable, the photometry has been corrected using a mean of the corresponding SDSS filter pairs that cover the same approximate filter range of the ATLAS broad-band filters.
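For illustration, the cleaning and binning described above can be reproduced with a short pandas script. This is a sketch rather than the exact code used in this work: the input file name is hypothetical, and the column names (MJD, uJy, duJy, F) follow the ATLAS forced-photometry server output and should be checked against its current documentation.

```python
import numpy as np
import pandas as pd

# Hypothetical file containing the server output saved as CSV.
phot = pd.read_csv("atlas_forced_photometry.csv")

# Keep only >= 3 sigma flux measurements, as described above.
phot = phot[phot["uJy"] / phot["duJy"] >= 3.0]

# Average each filter onto a 14-day cadence (simple means; a full treatment
# would propagate the uncertainties properly).
phot["bin"] = (phot["MJD"] // 14).astype(int)
binned = (phot.groupby(["F", "bin"])
              .agg(mjd=("MJD", "mean"), uJy=("uJy", "mean"), duJy=("duJy", "mean"))
              .reset_index())

# Convert the averaged fluxes (microjanskys) to AB magnitudes.
binned["mag"] = -2.5 * np.log10(binned["uJy"] * 1e-6) + 8.90
print(binned.head())
```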
The CRTS dataset was compiled from the second public data release,[<http://nesssi.cacr.caltech.edu/DataRelease/>] and consists of CRTS V-band observations, covering an MJD range of 53464–56656 <cit.>.
PS1 observations were retrieved from the second public data release <cit.> available through the Mikulski Archive for Space Telescopes (MAST)[<https://archive.stsci.edu/>] across all available filters (grizy) and cover an MJD range of 54996–57009.
ZTF observations were made using the gri filters, and retrieved from the fifteenth public data release accessed through the NASA/IPAC infrared science archive (IRSA).[<https://irsa.ipac.caltech.edu/>] These observations cover an MJD range of 58198–59889 <cit.>.
In addition to these datasets available for the full ECLE sample, we make use of Lincoln Near-Earth Asteroid Program <cit.> observations of SDSS J0952 first published by <cit.>. These data were obtained without a specific photometric filter, with the instrument's response covering the approximate range of the SDSS griz filters.
A summary of all photometric datasets used here is provided in Table <ref>. Throughout this work, unless stated otherwise, apparent magnitudes are listed as observed, with no additional corrections. Wherever we note absolute magnitudes, a correction for Milky Way extinction has been applied using the appropriate photometric extinction coefficient. Unless specified otherwise, coefficients have been retrieved from <cit.>. To match the preferred extinction parameters of <cit.>, we apply the extinction law of <cit.> throughout this paper and assume R_V = 3.1.
§.§ Infrared Photometry
To explore the behaviour of each ECLE well before its initial outburst, we retrieve near-infrared (NIR) JHK photometry for each candidate obtained through the Two Micron All-Sky Survey <cit.> from IRSA. This analysis is described in Section <ref>.
In a similar manner to <cit.>, we also retrieve the available MIR photometry obtained by the Wide-field Infrared Survey Explorer (WISE), from both the ALLWISE <cit.> and NEOWISE Reactivation Releases (NEOWISE-R) <cit.> from IRSA.
Given the time between this work and that of <cit.>, an additional ∼ 6 yr of NEOWISE-R data are available, providing a means to further explore the long-term evolution in the W1 and W2 bands. The start of WISE observations is several years following the initial detection of the ECLEs and so cannot be used to infer their early-time behaviour.
As WISE obtains several images of each object during each observing phase (once every six months), we process the data using a custom Python script. This script filters out any observation that is marked as an upper limit, or that was obtained when the spacecraft was close to the South Atlantic Anomaly (saasep < 5.0) or to the sky position of the Moon. Additionally, any observation with a low frame quality or that suffered from potential `contamination or confusion' as flagged by the WISE pipeline was also removed, with the exception of W1 observations flagged as potentially contaminated but not dominated by a nearby bright star halo. This choice was made to prevent the removal of the vast majority of W1 observations of SDSS J1350, which visual inspection showed to be unlikely to be significantly affected by the presence of a nearby star.
A weighted average is then used to provide a single magnitude value per filter for each observation block. <cit.> previously explored whether the variable ECLEs displayed variability during each observation block, with no such variability being detected. As such, combining the individual observations allows for any long-term trends to be seen more easily.
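The per-visit filtering and averaging described above can be sketched as follows. This is illustrative rather than the exact script used here: the input file name is hypothetical, the column names (mjd, w1mpro, w1sigmpro, saa_sep, moon_masked, qual_frame) follow the NEOWISE-R single-exposure source table served by IRSA and should be verified against the table documentation, and grouping exposures into fixed six-month windows is a simplification of grouping by observation block.

```python
import numpy as np
import pandas as pd

# Hypothetical file of single-exposure W1 photometry retrieved from IRSA.
obs = pd.read_csv("neowise_single_exposures.csv")

good = (
    (obs["saa_sep"] >= 5.0)                            # away from the South Atlantic Anomaly
    & (obs["moon_masked"].astype(str).str[0] == "0")   # W1 not affected by the Moon
    & (obs["qual_frame"] > 0)                          # drop low-quality frames
    & obs["w1sigmpro"].notna()                         # drop upper limits (no uncertainty)
)
# (The cc_flags contamination flags would also be checked here, per the text above.)
obs = obs[good]

# Group exposures into six-monthly visits and take an inverse-variance weighted mean.
obs["visit"] = (obs["mjd"] // 182.5).astype(int)

def weighted_mean(group):
    w = 1.0 / group["w1sigmpro"] ** 2
    mag = np.sum(w * group["w1mpro"]) / np.sum(w)
    return pd.Series({"mjd": group["mjd"].mean(), "w1": mag, "w1_err": 1.0 / np.sqrt(np.sum(w))})

epochs = obs.groupby("visit").apply(weighted_mean).reset_index(drop=True)
print(epochs)
```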
§.§ Search for Additional Transient Activity
As well as retrieving archival photometric data, we performed a search of the Transient Name Server (TNS)[<https://www.wis-tns.org/>] at the coordinates of each ECLE to confirm that no other survey (i.e., those whose data are not explored in detail here) had reported new transient activity of any of the ECLEs over the last several years. No such reports were found for the five TDE-related ECLEs. The lack of such reports supports the assumption that members of the variable subclass of ECLE are produced by a single-epoch event, rather than a recurring process.
One report was located at the position of SDSS J1055 - AT 2023jke <cit.>. Whilst this newly reported transient lacks any spectroscopic follow-up, given the nature of the host galaxy we attribute it to AGN variability.
§ ANALYSIS AND RESULTS
§.§ Overall Optical Spectral Evolution
We now explore the observed spectroscopic evolution of each ECLE in turn. In all of the following figures, the spectra are shown with the earliest at the top of the plot with progressively more recent spectra displayed below. All spectra are colour coded based on the telescope and instrument with which they were obtained. The spectroscopic sequences of the ECLE sample are shown in Figure <ref>.
§.§.§ SDSS J0748
The initial 2004 SDSS spectrum of SDSS J0748 displayed - emission lines along with broad and Balmer lines that typify the H+He subclass of optically selected, active TDEs <cit.>. All of these features had faded prior to the 2011 <cit.> MMT spectrum and are also absent in our 2019 MMT spectrum. The spectral shapes of both MMT spectra are consistent. This indicates that the initial flaring event was a single epoch rather than a recurring transient, with the optical spectrum having now most likely returned to a quiescent state.
§.§.§ SDSS J0938
SDSS J0938 was reclassified by <cit.> as a Seyfert 2 AGN with star-forming regions rather than being related to a transient TDE. This reclassification was based on the lack of variability in the coronal emission lines between the original 2006 SDSS spectrum and their 2011 MMT follow-up spectrum. Our 2021 NTT and 2022 DESI spectra show no detectable changes in any of the coronal lines (beyond the expected width changes as a result of instrumental resolution) or in the overall spectral shape. Based on these findings, we concur with this AGN classification. However, the processes involved in generating such strong coronal lines relative to the rest of the AGN population over timescales of at least two decades are still unclear.
§.§.§ SDSS J0952
Between the 2005 SDSS spectrum and the 2011 <cit.> MMT spectrum, the Fe coronal lines displayed by SDSS J0952 faded significantly though remained detectable. These features have continued to fade and are no longer present in either our 2021 NTT or 2022 DESI spectra. A broad Hα component was also seen in the initial SDSS spectrum which, like the Fe lines, had faded between the SDSS and MMT follow-up spectra. Whilst challenging observing conditions mean the NTT spectrum is of limited quality (with a low signal-to-noise ratio (S/N)), the most recent DESI spectrum confirms the presence of only narrow Hα. Likewise, a narrow feature was visible in the initial SDSS spectrum but is absent from the follow-up spectra.
§.§.§ SDSS J1055
In a similar manner to SDSS J0938, SDSS J1055 was reclassified by <cit.> as a Seyfert 1 AGN based on its spectral invariance between 2002 and 2011. Our 2021 Kast spectrum confirms this lack of evolution, supporting the AGN reclassification.
§.§.§ SDSS J1241
This object was originally identified by <cit.> as non-variable. In their analysis they lacked the red component of the spectrum owing to observational issues, with the blue component showing that the 3759 Å and 3896 Å emission lines remained prominent and that there were no significant changes to the continuum or overall spectral shape within the blue region of the spectrum. Our follow-up spectrum of SDSS J1241 covers the full range of the original SDSS observation and reveals that the object has in fact displayed spectral variability consistent with the other variable candidates.
Specifically, the coronal lines can now be seen to have faded, with none detected in our 2021 Kast spectrum. lines have been seen to persist or develop with time in other ECLE candidates relative to the other Fe coronal lines, so it is possible that the higher ionisation lines had faded at the time of the 2011 MMT spectrum, though this is not possible to confirm. Whilst the lower resolution of the Kast spectrum makes it difficult to confirm, the 3896 Å emission line also appears to have reduced in strength significantly compared to both the 2004 SDSS and 2011 MMT spectra.
§.§.§ SDSS J1342
The initial 2002 SDSS spectrum of SDSS J1342 displayed , , and but lacked any lines. By the time of the MMT spectrum in 2011, the higher ionisation lines were no longer detectable, but lines were now clearly observable.
The higher-resolution 2022 DESI spectrum reveals a persistence of the emission features first seen in the 2011 MMT follow-up spectrum, with no indication of recurrence of the higher ionisation state lines. The NTT spectrum of SDSS J1342 obtained around one month after the DESI spectrum does display some apparent coronal emission features, though this spectrum is of too low resolution and S/N for any additional confirmation. This highlights the necessity of high-S/N and high-resolution follow-up spectra to fully capture the evolution of ECLEs.
This object is most interesting for the very large increase in the line flux of λλ4959, 5007 observed in both the DESI and NTT spectra. Whilst <cit.> note the increase in emission strength in all four of the ECLEs they identify as variable between the initial SDSS spectra and their 2011 MMT spectra, the increase displayed by SDSS J1342 after 2011 is much more extreme, and unique among the current ECLE sample. We discuss this further in Section <ref>.
§.§.§ SDSS J1350
SDSS J1350 initially exhibited – emission lines which faded between the 2006 SDSS spectrum and the follow-up 2011 MMT spectrum obtained by <cit.>, with emission lines developing over the same period. Like the higher ionisation state lines before them, these lines have now faded; no remaining coronal lines are present in our recent 2021 NTT follow-up spectrum, with the possible exception of a low-S/N feature. Given its low S/N and the lack of lower ionisation state lines, we do not claim its detection.
§.§ BPT Diagnostics and Line-Ratio Analysis
In the original SDSS observations analysed by <cit.>, the ECLE candidates were seen to display emission-line intensity ratios consistent with star-forming galaxies and did not meet the diagnostic thresholds of AGN activity when plotted on the usual set of BPT line-diagnostic diagrams <cit.>. As the objects evolved, their emission-line ratios were seen to change over time. Follow-up observations by <cit.> revealed a tendency for these ratios to drift to values more indicative of AGN. They note that this evolution is largely due to the increasing line strength and an observational effect resulting from the smaller aperture size of the MMT spectra obtained by <cit.> compared to the original SDSS observations — that is, the spectra were more restricted to the nuclear region with a reduction in the light obtained from more distant star-forming regions.
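For reference, the classification itself reduces to comparing two measured line ratios against standard demarcation curves. The minimal sketch below (in Python) illustrates this for the commonly used [N II]/Hα diagram, adopting the Kauffmann et al. (2003) and Kewley et al. (2001) boundaries; the function name and the assumption that dereddened line fluxes are already available are illustrative and not part of the original analysis.

```python
import numpy as np

def bpt_class(nii_ha, oiii_hb):
    """Classify a spectrum on the [N II] 6583 / Halpha BPT diagram.

    nii_ha  : flux ratio [N II] 6583 / Halpha
    oiii_hb : flux ratio [O III] 5007 / Hbeta
    Boundaries: Kauffmann et al. (2003) and Kewley et al. (2001).
    """
    x, y = np.log10(nii_ha), np.log10(oiii_hb)
    # Empirical star-forming envelope (Kauffmann 2003), valid for x < 0.05
    kauffmann = 0.61 / (x - 0.05) + 1.30
    # Theoretical maximum-starburst line (Kewley 2001), valid for x < 0.47
    kewley = 0.61 / (x - 0.47) + 1.19
    if x < 0.05 and y < kauffmann:
        return "star-forming"
    if x < 0.47 and y < kewley:
        return "composite"
    return "AGN"
```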
We retrieve the data used to construct these BPT diagrams from Table 3 of <cit.> and Table 2 of <cit.> to produce a full comparison of the behaviour of the ECLE sample given in Figure <ref>. Table 3 of <cit.> does not include flux measurements for the diagnostic line which we measure here. Likewise, <cit.> do not include line fluxes for the two objects with non-variable coronal lines, which we include using our own measurements of the 2013 MMT spectral dataset.
Furthermore, as the lower resolution of the NTT spectra obtained using the EFOSC2 instrument makes accurate emission-line flux measurements very difficult, we opt to use only the higher resolution spectra from Kast and DESI, with the exception of SDSS J1350, for which only an NTT spectrum is available.
While the non-variable objects show some changes in the measured line ratios between observations — likely the result of differences in the exact regions of the host galaxy explored in each observation and measurement differences introduced by the varying resolutions of the instruments — they remain within the star-forming or composite region in each set of spectra.
We further explore the spectral evolution of the objects by showing a comparison of each of these line regions in Figure <ref>. As with the full spectral comparisons previously presented, each spectrum has been scaled to have the same mean flux density in the range 5925–6000 Å as the original SDSS spectrum; however, in this case the spectra are directly over-plotted to highlight relative changes rather than offset to display an evolutionary sequence.
<cit.> observed strengthening of the lines in all of their variable objects, with a proportion of this strengthening attributed to the more nuclear-focused nature of their spectra compared to the original SDSS observations. These smaller spectral footprints reduced the contribution of starlight from the outskirts of each galaxy, increasing the proportion of the spectrum contributed by the narrow-line region.
Between the spectra obtained by <cit.> in 2011 and our recent spectra, continued strengthening of line emission (relative to the continuum flux) is observed in two objects: SDSS J0748, and most significantly SDSS J1342.
The sharp increase in λ5007 emission strength in SDSS J1342 (it is now the spectrum's dominant feature) may be the result of either the TDE triggering AGN activity by increasing the accretion rate onto the SMBH in the form of a `turn-on' event (e.g., ), or due to the delayed response of more distant low-density gas to the TDE flare as proposed by <cit.>.
Balmer emission has also increased in strength, though without the associated increase in Doppler-broadened line velocities that has been seen in other such events. Further observations will be required to determine if this behaviour is a temporary change, or indicative of a more permanent alteration in the behaviour of the galaxy's SMBH.
The low S/N ratio of the NTT spectrum of SDSS J0952 makes line measurements impossible, but the DESI spectrum obtained at approximately the same time shows the relative strength of the , Balmer, and lines to be largely unchanged when compared to the 2011 MMT spectrum.
Several of the objects appear to display changes in line velocities over time. We investigated whether these apparent changes in peak velocity could be confirmed as physical; however, at the velocity resolution of the available spectra no such changes could be confirmed. Based on these measurements, we conclude that the material responsible for producing the narrow emission lines in these objects has displayed a flat velocity profile over the observation period.
§.§ Spectral Template Construction and Comparison
In order to look for observational signatures in the spectra of ECLEs that could be used to better distinguish between TDE and AGN related ECLEs based on a single spectroscopic observation, we have used the original SDSS spectral sample to produce two median-combined ECLE template spectra.
One is composed of those objects showing variable coronal lines (those related to transient events rather than AGN activity), though excluding SDSS J0748 as it is the only object with significant contamination from broad features produced by the still active TDE, with the resulting spectrum (and the spectra utilised in its construction) shown in Figure <ref>. Similarly, we have constructed a second template spectrum from the SDSS sample of ECLEs with non-variable coronal lines (i.e., SDSS J0938 and J1055) with the comparison between the two ECLE templates shown in Figure <ref>.
The template spectra have been corrected for redshift and Milky Way extinction, but have not had additional underlying spectral components (e.g., nonthermal AGN activity) removed as we are most interested in comparisons between directly observed spectra for future candidate classification purposes. Each included spectrum is weighted equally: the spectra are normalised so that their mean flux densities in the clean rest-frame region 5925–6000 Å agree, and are then median-combined.
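As an illustration of the combination procedure, the following sketch assumes the input spectra have already been corrected for redshift and Milky Way extinction and resampled onto a common rest-frame wavelength grid; the array layout and function name are illustrative rather than a description of the pipeline actually used.

```python
import numpy as np

def build_template(wavelengths, spectra, norm_range=(5925.0, 6000.0)):
    """Median-combine rest-frame spectra scaled to a common continuum level.

    wavelengths : 1-D array, common rest-frame wavelength grid (Angstroms)
    spectra     : 2-D array of shape (n_spectra, n_wavelengths), already
                  corrected for redshift and Milky Way extinction
    """
    in_window = (wavelengths >= norm_range[0]) & (wavelengths <= norm_range[1])
    # Scale each spectrum so its mean flux density in the clean
    # 5925-6000 A window is unity, giving every object equal weight.
    scales = spectra[:, in_window].mean(axis=1)
    normalised = spectra / scales[:, None]
    # Median combination suppresses features unique to a single object
    # while boosting the S/N of recurrent features.
    return np.median(normalised, axis=0)
```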
As the known sample of ECLEs is limited, the template spectra are composed of spectra from objects at different stages of evolution. The process of template construction will become more robust as more ECLEs are identified and observed with a faster cadence. We also note that our combined template has a broadened Hα feature due to the inclusion of objects with residual broad TDE features. We consider the inclusion of these objects to be an acceptable compromise given the small number of objects overall, and the lack of more general continuum contamination.
Whilst the two template spectra have similar overall spectral shapes, the variable ECLE template spectrum is redder than the non-variable template; this difference in shape is clear at wavelengths blueward of ∼ 5500 Å.
Narrow oxygen emission lines are of comparable relative strength in both template spectra. In contrast, Balmer emission features are seen to be both stronger and broader in objects with non-variable coronal lines.
Whilst there is no clear distinction in the behaviour of the emission lines between the spectral categories (evidenced by the minimal residual profiles at these line locations), and emission are much more pronounced in objects with variable coronal lines. The same could also be said for emission when the continuum difference between the two template spectra in this region is considered, though the difference is not as clear.
We also use these template spectra to compare both ECLE categories to the SDSS cross-correlation template spectra of a range of galaxy classes, including quiescent galaxies, quasistellar objects (QSOs), and star-forming galaxies. We note here that Galaxy templates 1–3 represent increments on the continuum between fully quiescent galaxies (the `Early-Type Galaxy' template) and those with high star-formation rates (the `Late-Type Galaxy' template).[The templates used in this analysis were obtained from <https://classic.sdss.org/dr7/algorithms/spectemplates/index.html>] The best-matching comparison was determined using the Akaike information criterion <cit.>.
We explore the fit in the `blue' and `red' spectral regions, separated at 6000 Å, to provide a more nuanced comparison. We present the comparison for the best overall match to the non-variable coronal line ECLE template spectrum in Figure <ref> and the corresponding comparison for the variable ECLE template spectrum in Figure <ref>. This comparison was made using the templates rebinned to a range of resolutions (2, 5, 10, and 20 Å) to explore how the use of lower resolution spectra would affect the comparisons and to determine if coronal lines would be observable in such spectra. We find that even at 20 Å resolution the coronal lines are still clearly distinguishable in our template spectra and in the residual patterns resulting from the comparisons. Note that the construction of the template boosts the S/N ratio of recurrent spectral features. Hence, the unambiguous presence of coronal-line signatures in similar low-resolution spectra of single objects is much less certain.
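The comparison statistic can be computed as in the sketch below, which assumes each galaxy template has been resampled onto the ECLE template's wavelength grid and fits only a multiplicative scale factor; the Gaussian-residual form of the AIC and the helper name are illustrative simplifications rather than the exact implementation.

```python
import numpy as np

def aic_for_template(obs_flux, template_flux, n_params=1):
    """Gaussian-residual AIC for a scaled template fit.

    Both spectra are assumed to share the same wavelength sampling; the
    single free parameter is the multiplicative scale of the template.
    """
    # Least-squares scale factor for obs ~ scale * template.
    scale = np.sum(obs_flux * template_flux) / np.sum(template_flux ** 2)
    residuals = obs_flux - scale * template_flux
    rss = np.sum(residuals ** 2)
    n = obs_flux.size
    return n * np.log(rss / n) + 2 * n_params

# The galaxy template with the lowest AIC, computed either over the full
# wavelength range or restricted to the blue (< 6000 A) or red (> 6000 A)
# region, is taken as the best match.
```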
The full comparison results for the 2 Å version of the ECLE template analysis are provided in Tables <ref>–<ref> and Figure <ref> for the non-variable template, and likewise for the variable template comparison in Table <ref>–<ref> and Figure <ref>.
Both ECLE templates are found to have the best overall matches and blue-region matches to `intermediate' galaxies between the `Early' and `Late-type' galaxy spectra. The best overall and blue match to the non-variable ECLE template is found to be `Galaxy 2,' whilst the variable ECLE template is found to be most similar to `Galaxy 3.'
The best comparison match in the red spectral region for both ECLE templates is the `Late-type' comparison template. This difference is driven by the improved match to the broad Hα complex and better match to the red region's continuum shape.
The poorest fits to both ECLE templates (both overall and when subdivided) are found with the `Luminous Red Galaxy' (LRG) and `Early-type' galaxy spectra. These galaxy types have significantly lower relative fluxes in the bluest and reddest regions whilst also lacking the strong Balmer and oxygen features observed in the ECLE spectra.
Whilst the differences observed in the best-matching template spectra could be taken to suggest a difference in the underlying stellar populations present in both groups of ECLE, the differences in overall spectral shape will also be influenced by the presence or absence of an AGN-generated spectral component and any residual broad features from the TDE flare. However, the difference between the generated templates does provide an additional tool for the classification of these objects as TDE- or AGN-driven based on a single spectroscopic observation, rather than through long-term follow-up observations. Additional analysis will be required to expand on this, and to refine the ECLE templates themselves with new observations of objects at different, and well-defined, stages of evolution.
As expected when rebinned to lower resolution, it becomes more difficult to identify the best-matching template, as distinguishing features are blurred by the lowered resolution. Additionally, whilst the best-matching templates remain `intermediate' galaxies, there is some variation as to what specific spectrum is preferred at the varying resolutions, highlighting the need for spectra to be obtained at as high a resolution as possible.
§.§ Optical Photometric Evolution
As described in Section <ref>, we have used data from a number of all-sky surveys to explore the optical photometric evolution of each ECLE. The combined optical light curves for each are shown in Figure <ref>. In these plots, the ECLE sample has been divided into the same three groupings used to present the optical spectra and is shown in the same order. Where eight or more epochs of data are available in a given filter from the same source, a cubic polynomial fit is also shown as a visual aid.
The current ECLE sample has been observed over a period of ∼ 20 yr, which whilst invaluable for monitoring their long-term behaviour, presents a number of issues. Photometric surveys do not tend to operate for such extended durations, which in turn causes the sources of photometry (and the filters available) to change over time. As such, providing a fully consistent picture across the full range of observations is impossible, and we instead focus on long-term trends.
As a result of the large period over which the original SDSS spectra were collected, the time between the start of spectral coverage and when regularly spaced photometric observations began varies significantly. For three ECLEs (SDSS J0938, J0952, and J1350), the SDSS spectrum was obtained during CRTS V-band observations, whilst for the other four ECLEs, a period of at least a year separates the initial spectrum from such photometric observations. The start time of each ECLE flare is also poorly constrained, making their evolutionary phases uncertain.
We now describe the optical photometric evolution of each ECLE, which is also shown in Figure <ref>.
§.§.§ SDSS J0748
<cit.> found that SDSS J0748 had brightened between the SDSS photometric and spectroscopic observations, indicating that the TDE likely occurred in the gap between the two sets of observations. This is supported by the SDSS spectrum being the only ECLE spectrum observed thus far with a distinct broad feature typical of conventional optically selected TDEs. The presence of such a feature indicates that SDSS J0748 was likely spectroscopically observed during the active TDE phase of its evolution. Unfortunately, there is no photometry of SDSS J0748 during this period to provide additional context for its early evolution.
Over the full period of observation, it has displayed a largely stable brightness. However, in more recent ATLAS observations it has shown a long-term decline of ∼ 0.2 mag in the c band and an undulation in its o-band light curve, first fading by ∼ 0.2 mag from the start of observations until ∼ MJD 59000, before rebrightening by ∼ 0.15 mag over the following period to its current value. ZTF photometry covers the later period observed by ATLAS, with SDSS J0748 observed to be stable in brightness in these gri observations.
The only PS1 data available for SDSS J0748 are two epochs of g-band observations, which do appear to show a significant decline between the two observations. This is not observed in the CRTS data during the same time period, which (whilst a much broader filter) do cover the full PS1 g-band wavelength range.
§.§.§ SDSS J0938
SDSS J0938 has shown multiple long-term fading and brightening episodes with amplitudes of several tenths of a magnitude, consistent with expected long-term AGN variability.
§.§.§ SDSS J0952
SDSS J0952 was noted by <cit.> to have faded between SDSS observations. This fading is supported by the contemporaneous CRTS observations which also show a slow decline for the first few years of observations (approximate MJD range: 53460–56660). The LINEAR observations over this period also capture the TDE flaring behaviour, though as described by <cit.> the time of peak flare brightness was not observed. Following this initial decline, SDSS J0952 remained at a stable brightness in the remaining CRTS observations. Whilst the PS1 observations of SDSS J0952 display some level of scatter, it is important to note that this level (up to 0.2 mag) is similar to the range of scatter observed in the CRTS observations. Given the lack of clear trends in the PS1 observations, we attribute this scatter to stochastic variability between observations.
More recent ATLAS observations of SDSS J0952 have shown a 0.15 mag decline in the c band between the start of observations and MJD 58750 before stabilising at the current value of ∼ -19.9 mag. Similar to SDSS J0748, SDSS J0952 has displayed an undulation in its o-band light curve with an amplitude of ∼ 0.05 mag. ZTF observations of SDSS J0952 have shown no evolution, with the only observed variation being stochastic in nature.
§.§.§ SDSS J1055
In the study conducted by <cit.>, SDSS J1055 was observed to have faded between the SDSS photometric and spectroscopic observations. This behaviour is consistent with both its AGN classification and its more recent photometric evolution, which, like SDSS J0938, has consisted of long-term undulations.
§.§.§ SDSS J1241
SDSS J1241 was not found to have varied in brightness between the two epochs of SDSS observations. Likewise, no photometric evolution was observed during CRTS observations, with a stable brightness measured over the full time-span of the survey. The two epochs of PS1 photometry from this period are from two different filters so reveal nothing further about its evolution.
Interestingly, during the first three years of ATLAS observations, SDSS J1241 faded from ∼ -19.25 mag to ∼ -18.5 mag in the c (with significant scatter among individual observations) and o bands before stabilising. The most recent set of ATLAS observations, obtained after MJD 59500, appear to show the object increasing in brightness once more in the o band though remaining flat in c-band data. In contrast, ZTF gri observations of this object, whilst not covering the time period of the decline observed by ATLAS, are stable across all three bands for the full duration of available observations.
We note again here that the ATLAS photometry used in this work is based on observed images rather than difference imaging for direct comparison with other surveys. When difference imaging is used, this decline is not observed in the ATLAS data. As such, we do not view this decline as physical and treat the late-time optical behaviour of SDSS J1241 as largely constant.
§.§.§ SDSS J1342
<cit.> found the brightness of SDSS J1342 to be unchanged between its photometric and spectroscopic observations. Likewise, CRTS V-band observations of SDSS J1342 exhibit a stable brightness across the survey, with PS1 observations during the same period also showing no overall evolution (beyond stochastic variability, as seen in SDSS J0952).
The ATLAS observations of SDSS J1342 reveal a decline of 0.15 mag between the start of ATLAS observations and MJD 58250 in the c band, after which the object stabilised in brightness, as well as a similar decline of ∼ 0.2 mag in ATLAS o-band observations. ZTF gi data obtained over the same period display consistent brightness.
§.§.§ SDSS J1350
In contrast to the previously described objects, SDSS J1350 has shown more significant optical variability.
An increase in brightness was observed by <cit.> to have occurred between the SDSS spectroscopic and photometric observations.
A sharp increase in brightness in the CRTS V band by ∼ 1.5 mag was observed around MJD 54600 before plateauing for several years. We note that there are significantly fewer CRTS observations of this object, 33 compared to a mean of 389 observations for the other objects in the ECLE sample. As such, the CRTS light curve of SDSS J1350 is much less well sampled than the other objects. Two epochs of CRTS data have been removed from this light curve, with both being ∼ 3 mag brighter than the previous and subsequent observations. We attribute these anomalies to a nearby bright (r = 8.75 mag) star rather than astrophysical behaviour.
SDSS J1350's later behaviour in PS1, ATLAS, and ZTF observations is more stable, with no additional such changes observed, though early ATLAS observations in both bands have larger uncertainties.
§.§ MIR Photometric Evolution
The MIR evolution of four of the seven ECLEs first identified by <cit.> was previously described by <cit.> using data obtained with WISE <cit.>. The objects included in this analysis had been identified by <cit.> as showing long-term variability (i.e., SDSS J0748, J0952, J1342, and J1350), with the remaining three objects (SDSS J0938, J1055, and J1241) excluded. The WISE observations were obtained between early 2010 and late 2015 (∼ MJD 55200–57350).
Here we present a continuation of this analysis using the more recent data obtained through the NEOWISE Reactivation survey <cit.>[<https://wise2.ipac.caltech.edu/docs/release/neowise/neowise_2022_release_intro.html>], and extend this to include the remaining three objects in the original <cit.> sample. A summary of the MIR evolution of the full sample of ECLEs is given in Figure <ref>.
The MIR evolution of the four objects previously identified as displaying long-term variation all showed declines in their W1 and W2 luminosities over timescales of years, with all also trending toward bluer W1–W2 colours. All the ECLEs studied by <cit.> were seen to share a similar overall behaviour, despite the original flaring events occurring at different times, with declines of ∼ 0.5–1.1 mag over the course of ∼ 5.5 yr of WISE observations. This MIR evolution is also shared by SDSS J1241, which our follow-up optical spectroscopy has revealed to also display fading coronal-line emission.
All of the ECLEs with variable coronal lines, with the exception of SDSS J0748, continue to show declines in one or both W1 and W2 bands, along with colour evolution. This ongoing evolution indicates that these objects have not yet completely faded back to their quiescent pre-event states.
This behaviour is at odds with the assumption used by <cit.> to determine the host-galaxy contributions to the MIR transient light curves. Their models (with caveats) were constructed assuming the objects had reached a plateau in the final epoch of NEOWISE data available to them, consistent with the flaring transient event having faded and the light of the host galaxy now dominating. With the additional ∼ 7 yr of data, we can see that this was not in fact the case, with the objects all showing continuing MIR declines in the intervening years. As such, the true galactic contributions differ from their values, which were, by and large, overestimated.
Whilst the trend toward bluer W1–W2 colours with time is seen across the full sample of variable objects, this trend has not been as smooth in recent years, with the variable objects displaying some scatter around the overall evolutionary trend in both bands. SDSS J0952 was noted by <cit.> as potentially displaying `non-monotonous [sic] variability' which they attributed to complexities in the object's dust formation. Whilst this variability is not clear in its W1–W2 colour evolution, it is apparent in the individual filter light curves, with several instances of observations being several standard deviations above or below the smoothed overall trend.
SDSS J0748 and J0952 have displayed smooth evolution over the course of observations in both bands, with the exception of some stochastic variability in the case of SDSS J0952, and a single epoch of significant colour deviation for SDSS J0748 at MJD 57674. The W1-band evolution of J1241 has also been remarkably consistent, though its W2-band curve displays a two-phase decline, with a reduction in the rate of decline after MJD 58000.
The most significant deviations from smooth overall evolution amongst the variable coronal line objects are seen in W2-band observations of SDSS J1342 and SDSS J1350, both of which display shoulders. During these shoulders their W2 luminosity remains constant or even rises slightly before the long-term overall decline resumes. SDSS J1350 has displayed one such shoulder beginning around MJD 57000 and lasting until ∼ MJD 58000, whereas SDSS J1342 has shown two shoulders, at MJD 57200–57770 and 58500–59020.
A W1–W2 colour cut can be used to effectively differentiate between AGN hosting and nonhosting galaxies. This cut was developed specifically for WISE observations by <cit.>, with AGN activity indicated by W1–W2 ≥ 0.8 mag. When applied to the ECLE sample, all variable ECLEs with the exception of SDSS J1241 are initially observed to be at, or above, this colour cut indicating AGN-like activity. SDSS J1241, which was not included in the <cit.> analysis, had an initial W1–W2 colour index of 0.63 ± 0.01 mag in the first epoch of ALLWISE observations, suggesting no dominant AGN activity. This AGN activity colour cut was initially selected as it was shown to have both good completeness (78%) and reliability (95%) in dividing AGN from non-AGN in WISE data. Evolution is observed in the W1–W2 colours of all five variable objects, with all trending toward bluer colour indices over time and now falling well below the W1–W2 AGN colour cut with values in the range 0.10–0.52 mag.
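In practice the cut amounts to a simple threshold on the catalogue photometry, as in the brief illustrative sketch below (the function name and inputs are ours, not part of the original analysis).

```python
import numpy as np

def agn_like(w1_mags, w2_mags, threshold=0.8):
    """Flag epochs whose W1-W2 colour meets the MIR AGN colour cut.

    w1_mags, w2_mags : per-epoch WISE magnitudes (arrays or scalars).
    A drift from True to False over time mirrors the blueward colour
    evolution described for the variable ECLEs.
    """
    return (np.asarray(w1_mags) - np.asarray(w2_mags)) >= threshold
```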
If the continued MIR flux evolution is taken as the result of the accretion of residual material from the TDE, the observed shoulders in brightness could arise from periods where the accretion rate has stabilised. This may in turn be related to the density of the material being accreted, with the long-term decline reflecting the finite mass of material that a single TDE can supply to the SMBH. Alternatively, if there is an underlying, weak, or obscured AGN within the galaxy, these light-curve features could be the result of increases in the accretion rate from material not necessarily produced in the initial transient flare, as AGN themselves are known to display MIR variability <cit.>.
These trends in MIR brightness and W1–W2 colour evolution in the variable objects are in stark contrast to the two objects with non-variable coronal lines (i.e., the AGN-related SDSS J0938 and J1055), which do not display such long-duration fading or colour evolution.
SDSS J0938 has displayed multiyear undulations in both bands with an overall brightness range of ∼ 0.1 mag, and has on average been slightly fainter than in its first observation. It brightens slightly more in the W1 band during these undulations, so its colour evolves blueward during the brightening periods, though this change is small overall and remains well within the expected AGN colour region.
SDSS J1055 has brightened in both bands by 0.3–0.4 mag over the course of the WISE observations, though this evolution has included several episodes of brightening and fading; the object is fading in the most recent observations while remaining significantly brighter than when first observed. The W1–W2 colour of SDSS J1055 has also shown variability, though at a lower level than the individual bands, with a value close to 0.9 mag maintained across the observation period.
It is clear that the ECLE sample with variable coronal lines has continued to decline in the MIR during the time period covered by the NEOWISE observations. <cit.> fitted the available data in flux space using both a power-law and exponential model, finding power-law decay to be preferable. We extend this modelling to include SDSS J1241 and utilise the new NEOWISE photometry employing the same power-law model, given by
f(t)=A t^B+C .
The results of this fitting are shown in Figure <ref>.
Fitting was initially conducted independently for each band, with the times of outburst (which are poorly constrained for the ECLE sample) being the same as those used by <cit.>. The exception is SDSS J1241, included here for the first time, for which we adopt an approximate outburst time of 1 yr prior to its SDSS photometric observation, since the object was already declining by the time of its SDSS spectrum. The most important values in this fitting are the power-law index, given by B, and the quiescent flux of the galaxy, given by C. The remaining term, A, is a constant scaling factor.
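A minimal sketch of this fitting procedure, using SciPy's least-squares optimiser, is given below; the initial guesses, the option of holding the power-law index fixed (used in the constrained refit described below), and the variable names are illustrative choices rather than the exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, A, B, C):
    # t is the time since the (assumed) outburst epoch, t > 0
    return A * np.power(t, B) + C

def fit_decline(t, flux, flux_err, fixed_index=None):
    """Fit f(t) = A t^B + C to a single band of MIR photometry."""
    if fixed_index is None:
        popt, pcov = curve_fit(
            power_law, t, flux, sigma=flux_err, absolute_sigma=True,
            p0=(flux[0] - flux[-1], -5.0 / 3.0, flux[-1]), maxfev=10000)
    else:
        # Hold the index at the best-fitting W1 value to better constrain
        # the quiescent host flux C in the other band.
        model = lambda t, A, C: power_law(t, A, fixed_index, C)
        popt, pcov = curve_fit(
            model, t, flux, sigma=flux_err, absolute_sigma=True,
            p0=(flux[0] - flux[-1], flux[-1]), maxfev=10000)
    return popt, np.sqrt(np.diag(pcov))
```

Fixing the index in the second branch mirrors the constrained W2 refit used to recover a host contribution when the free fit leaves C poorly determined.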
In all cases, the power-law index B is well constrained, though the quiescent galaxy flux C is poorly constrained in the W2 band for objects other than SDSS J0952 and SDSS J1350, likely owing to the presence of more deviation from a smooth decline in the W2 band compared to W1. The quiescent flux of SDSS J1241 is also poorly constrained in the W1 band, with the decline of this object in both bands significantly shallower than the rest of the variable coronal-line ECLEs. When compared, the measured power-law indices for each object are found to be consistent between the W1 and W2 bands with the exception of SDSS J1342, where the decline is significantly steeper in the W1 band (B = -2.54 ± 0.15) than in the W2 band (B = -0.74 ± 0.23).
Following the initial fitting, the power-law index of the W2 data was set to match the best-fitting W1 index to explore if the host quiescent flux in W2 could be better constrained. For all but SDSS J1241 (which has a poorly constrained host component in W1), a W2 host contribution can now be obtained, with the overall fitted light-curve shapes remaining largely unchanged.
The results of this fitting are presented in Figure <ref> with the parameters of the fits given in Table <ref>. Additionally, a comparison of the power-law indices obtained in this fitting is shown in the left panel of Figure <ref>.
Using the weighted average of the W1 and freely fitted W2 results, we compare to the power-law decline indices measured by <cit.> for a sample of TDEs in X-rays and present this comparison in the right panel of Figure <ref>. For our sample, SDSS J0952 and J1342 fall within the region expected of standard fallback accretion, which, as modelled by <cit.>, extends to values steeper than -5/3. SDSS J0748 and J1350 have shallower declines consistent with disk accretion. The shallowest declining object in our sample is SDSS J1241, whose individual-band values are consistent with disk emission.
We provide the observed W1–W2 and W2–W3 colours obtained during the initial ALLWISE sky survey in Table <ref>. The use of a second colour index expands the parameter space and allows for a better identification of different classes of object.
Owing to the limited duration of the ALLWISE mission, most objects have only one observation epoch where all data from all three filters are available. SDSS J1342 and J1350 do have two epochs of data available with both included in Figure <ref>, though these observations are consistent for each object within the observational uncertainties.
<cit.> noted that the measured values of W2–W3 vs. W1–W2 for the four objects they examined fell within the region of the parameter space from <cit.> expected of Seyfert galaxies, QSOs, and luminous infrared galaxies (LIRGs), and removed from the part of the parameter space occupied by elliptical and spiral galaxies not hosting AGN. We can extend this to the remaining three objects and confirm that these are also found within the same region of parameter space.
Whilst no later W2–W3 observations are available, the observed evolution of the W1–W2 colour of all five variable objects moves them into parameter space occupied by non-AGN hosting star-forming galaxies (assuming no change in W2–W3 colour), with the two nonspectroscopically variable ECLEs remaining within the Seyfert/QSO/LIRG region.
The MIR and optical spectroscopic evolution of the variable ECLEs thus appear to be in conflict. In the years after the initial flare, optical spectra reveal line ratios trending from non-AGN regions of BPT diagrams to those consistent with Seyfert-type AGN — in particular, SDSS J1342 with the drastic increase in λ5007 emission. Yet, at the same time, their MIR evolution is seen to trend away from AGN-like colours. A possible explanation for this conflict would be the contribution of the delayed response of more distant low-density gas to the initial TDE flare responsible for the generation of the increased emission. This would then not require an ongoing elevation in the accretion rate onto the SMBH, which has in fact been returning to quiescent values shown through the long-term MIR decline. Differences in the line evolution across the sample would therefore indicate differences in the environments close to the SMBHs.
§.§ Pre-Outburst NIR Analysis
Six of the ECLEs (the exception being SDSS J1241) were observed as part of the 2MASS All-Sky Survey in the JHK bands (we include this photometry here in Table <ref>). These observations were obtained between January 1998 and January 2001, well before the expected time of the initial ECLE flaring activity. This presents the opportunity to explore the quiescent behaviour of the ECLE galaxies. Figure <ref> shows the objects with available data in J–H vs. H–K parameter space. This allows for the separation of objects where IR luminosity is primarily the result of starlight from those where the IR flux is driven by AGN activity <cit.>. We note that given the wide range of NIR colours displayed by galaxies of the same spectroscopic classification, they cannot be distinguished effectively using these NIR colours alone.
<cit.> made use of these observations of SDSS J0952 and found it to be located in the region expected of quiescent galaxies, as would be expected prior to a transient flaring event. We now extend this to the remaining five objects with available data. Similarly to the results of <cit.>, SDSS J1350 is found in this region of nonactive galaxies. However, the picture for the remaining four is more complicated.
The non-variable SDSS J0938 and variable ECLE SDSS J1342 are located in the region consistent with a combination of starlight and AGN activity. The final two objects, SDSS J0748 and J1055, also have H–K colours indicative of a combination of starlight and AGN activity, but they are separated from the rest of the sample by their bluer than expected J–H colours. The weighted mean J–H colour of SDSS J0748 and J1055 is 0.35 ± 0.11 mag compared to 0.80 ± 0.09 mag for the other four ECLEs. Given the large uncertainties in each object's photometry (∼ 0.17 mag) and in the weighted means, the statistical significance of this offset is small.
§ DISCUSSION
ECLE behaviour is complex, with even the limited sample of seven objects comprising two subpopulations. Because the known objects were discovered when already in their declining phases, their early-time evolution has not yet been well observed. Here we explore the connections between ECLEs and other related classes of objects: optically selected TDEs, which have been seen to develop coronal emission lines, and galaxies identified as showing flares/outbursts at MIR wavelengths.
§.§ Coronal-Line TDEs
Until the identification of coronal emission lines in the > 200 d spectra of the optically selected TDE AT 2017gge <cit.>, SDSS J0748 was the most clear example of an object with both conventional TDE features and the ECLE-defining iron coronal lines. Now that a second optically selected TDE has been observed to develop Fe coronal lines during its active phase of evolution, AT 2022upj <cit.>, the link between the two groups is unambiguous. It remains to be determined what percentage of the overall TDE population displays coronal lines at some phase of their evolution, a determination which has not been helped by the lag between discovery and follow-up observations of the original ECLE sample. It is already clear from these two examples, where the time between the triggering TDE event and the development of the coronal lines can be well constrained (∼ 200 d for AT 2017gge and ∼ 2 months in the case of AT 2022upj), that the timescales of such events and thus their environments are varied. This will provide the opportunity to utilise the techniques developed in AGN reverberation mapping <cit.> to better model physical properties of the SMBH systems involved in these events. Further study into both newly identified and existing TDE host galaxies (the spectra of which may now be displaying residual or delayed coronal-line signatures) will be key in furthering our understanding of these events.
§.§ Comparison with Mid-InfraRed Outbursts in Nearby Galaxies
Further analysis of the MIR evolution of the four objects identified as variable by <cit.> was conducted by <cit.> and revealed long-term declines in all of them. The study Mid-infrared Outbursts in Nearby Galaxies (MIRONG) <cit.> — galaxies displaying flaring behaviour of at least 0.5 mag in the MIR that is not necessarily associated with observed optical variability — revealed that several objects displayed transient Fe coronal lines. However, the timescales of the coronal line and MIR evolution in MIRONG and ECLEs appear to differ.
Both display increases in luminosity via outbursts, followed by long-term declines <cit.>.
MIRONG differ from ECLEs in that their observed MIR outburst from a quiescent state was the primary selection criterion for their initial identification, whereas the quiescent state of ECLEs was not observed in the MIR prior to their flaring events.
Owing to the timing of the WISE mission, the available MIR light curves for the ECLE sample begin 5–9 yr following the outburst event <cit.>. Of the 137 galaxies in the MIRONG sample, 53 have had high-quality follow-up spectroscopy described by <cit.>. Of this subset, 22 (42%) displayed emission-line variability, and most interestingly nine have been detected with variable Fe coronal lines (17% of the overall MIRONG sample and 42% of those with emission-line variability).
All but one of these objects have also shown reductions in their Hα line flux over the course of the follow-up spectroscopy, with two having Hα fluxes now consistent with a quiescent state. The coronal lines in each of these objects were weak and short-lived, fading after the first follow-up spectrum.
The exception to both of these behaviours is SDSS J1442+5558. This object has maintained strong Hα flux consistent with an AGN state-change (specifically a `turn-on' event) with consistently increased Hα flux for at least 5 yr, with the Fe coronal lines developing in the most recent spectra available (years post state-change).
One spectroscopically variable ECLE (SDSS J1342) still displays coronal emission lines (though only of ) over more than a decade following its discovery spectrum, in contrast to the short-duration coronal lines observed in some, but not all, TDE-associated MIRONG. The differences between ECLE and MIRONG MIR behaviour could be the result of the differing local environments (e.g., dust content and composition), with MIRONG observed to have much larger dust covering fractions than optically selected TDEs. Differences in the mass and the structure of stars undergoing disruption between both groups could also play a role in their differing timescales.
The two groups of objects could be related — a large subset of the MIRONG sample have been identified as TDE candidates — with the differences in observed properties associated with the environments in which they occur (e.g., local dust mass and composition). As described above, ECLEs are themselves composed of two subpopulations (TDE- and AGN-produced), a similarly mixed combination of populations.
§ CONCLUSIONS
We have explored the long-term evolution, both spectroscopically and photometrically, of the ECLE sample of seven objects first identified in the SDSS by <cit.>. Through this analysis, we conclude the following.
* The coronal-line persistence of two objects within the sample, first described by <cit.>, is confirmed, showing that the coronal lines in these two objects (SDSS J0938 and J1055) are persistent over a time-span of two decades.
* The third object classified by <cit.> as invariable (SDSS J1241) does in fact exhibit diminishing coronal-line emission. It also displays MIR evolution consistent with the other previously identified variable coronal-line ECLEs.
* Follow-up spectroscopy of objects where coronal lines have previously faded shows that these lines have not recurred (subject to caveats on the limited spectroscopic cadence), supporting their generation in single transient events rather than ongoing or recurring processes.
* We demonstrated a significant increase in the flux of SDSS J1342 since the previous follow-up spectrum in 2011, with the line having evolved to be the most dominant spectral feature.
* The long-duration MIR fading displayed by those objects with variable coronal-line emission as first identified by <cit.> has continued for at least an additional 6 yr. The fading of all the variable coronal-line objects remains consistent with ongoing power-law declines.
* Spectral templates of variable and non-variable ECLEs were constructed. Those objects with non-variable coronal-line signatures appear to be bluer overall than those with variable coronal lines (subject to poorly constrained phases). Whilst all Fe coronal lines can be observed in both the variable and non-variable objects, those with variable coronal lines display relatively stronger and lines at early phases of their evolution. The same is likely also true of though harder to confirm owing to differences in the underlying continuum.
* The optical evolution of the variable ECLEs appears to indicate AGN-like activity. BPT line-ratio diagnostics of the most recent spectra continue to be more indicative of AGN values than was observed in the initial SDSS spectra. In contrast, the MIR colour evolution of these objects displays a continued trend away from the values expected of AGN. More modelling will be required to conclusively understand the behaviour of this class of object, though the delayed response of gas more distant from the SMBH could be used to explain the altered line ratios without the requirement of increased accretion activity not indicated by their MIR evolution. The range of behaviour displayed also highlights the importance of observing ECLEs over a wide wavelength range.
* High-resolution and high-S/N spectra are necessary to confirm the presence of weak and narrow coronal lines - - which have persisted in SDSS J1342 for two decades. These features would have been missed or gone unconfirmed if relying on a lower resolution spectrum.
The analysis undertaken here has strengthened the identification of five of the seven currently identified ECLEs as the light echoes of nonrecurring TDEs; no ECLE with variable coronal lines shows a resurgence in coronal-line emission. The work also highlights the importance of observing TDEs and their hosts across a large wavelength regime (i.e., optical observations alone are insufficient) so that a complete picture of their behaviour, which has been seen to be conflicting between optical and MIR evolution, can be developed. Identification and monitoring of new ECLEs will be required to explore the full range of parameters displayed by the group and provide more rigid constraints for physical modelling. Given the recent discovery of coronal-line TDEs, and the varied, but long, duration of ECLE behaviour, additional late-time observations of known TDE host galaxies are clearly required. Such observations are needed to determine how common ECLE behaviour is following a TDE, along with placing better constraints on the timescales for both the onset and duration of such behaviour. In turn, this will improve our understanding of the local environments of SMBHs; the diversity of behaviour observed in ECLEs is likely to be strongly linked to the location and composition of material close to the SMBH responsible for the initial stellar disruption.
§ ACKNOWLEDGEMENTS
This work was supported by the Science & Technology Facilities Council [grants ST/S000550/1 and ST/W001225/1].
It was also funded by ANID, Millennium Science Initiative, ICN12_009.
T.E.M.B. acknowledges financial support from the Spanish Ministerio de Ciencia e Innovación (MCIN), the Agencia Estatal de Investigación (AEI) 10.13039/501100011033, and the European Union Next Generation EU/PRTR funds under the 2021 Juan de la Cierva program FJC2021-04.
M.N. is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 948381) and by funding from the UK Space Agency.
A.V.F.'s group at U.C. Berkeley has received financial assistance from the Christopher R. Redlich Fund, Alan Eustace (W.Z. is a Eustace Specialist in Astronomy), Frank and Kathleen Wood (T.G.B. is a Wood Specialist in Astronomy), and many other donors.
This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of High-Energy Physics, under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF’s National Optical-Infrared Astronomy Research Laboratory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: <https://www.desi.lbl.gov/collaborating-institutions>. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U. S. National Science Foundation, the U. S. Department of Energy, or any of the listed funding agencies.
The authors are honored to be permitted to conduct scientific research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation.
Based in part on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of ePESSTO+ (the advanced Public ESO Spectroscopic Survey for Transient Objects Survey). ePESSTO+ observations were obtained under ESO program IDs 1103.D-0328 and 106.216C (PI Inserra).
Some of the observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.
The Kast red CCD detector upgrade on the Shane 3 m telescope at Lick Observatory, led by B. Holden, was made possible by the Heising–Simons Foundation, William and Marina Kast, and the University of California Observatories. Research at Lick Observatory is partially supported by a generous gift from Google.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration (NASA) and operated by the California Institute of Technology.
This publication also makes use of data products from NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of NASA.
The CRTS survey is supported by the U.S. National Science Foundation (NSF) under grants AST-0909182 and AST-1313422.
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The ATLAS project is primarily funded to search for near-Earth objects through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogues from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen’s University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, NASA under grant NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, NSF grant AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
Based in part on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the NSF under grants AST-1440341 and AST-2034437, and a collaboration including current partners Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
The authors thank Chenwei Yang for providing the spectra used in the 2013 analysis for comparative use in this work, and for providing clarification on the previous BPT analysis.
§ DATA AVAILABILITY
The data underlying this work are available in the article and in its online supplementary material available through Zenodo <cit.>. Previously unpublished spectra will be made available through the Weizmann Interactive Supernova Data Repository (WISeREP) online archive.
§ OBJECT SUMMARY INFORMATION
§ SPECTRAL TEMPLATE COMPARISON DIFFERENCE MATRICES
Here we present difference matrices outlining the comparisons between the SDSS template galaxy spectra and our template ECLE spectra.
§ MIR POWER-LAW FITTING PARAMETERS
Here we present the results of the power-law fits to the MIR data of each of the objects with variable coronal lines.
entry_id: http://arxiv.org/abs/2307.00541v1
published: 20230702110900
title: Collaborative Policy Learning for Dynamic Scheduling Tasks in Cloud-Edge-Terminal IoT Networks Using Federated Reinforcement Learning
authors: Do-Yup Kim, Da-Eun Lee, Ji-Wan Kim, Hyun-Suk Lee
primary_category: cs.LG
categories: cs.LG, cs.AI, cs.DC, eess.SP, 68M20, 68T05, 68T07, C.2.1; C.2.4; I.2.8; I.2.11
Collaborative Policy Learning for Dynamic Scheduling Tasks in Cloud-Edge-Terminal IoT Networks Using Federated Reinforcement Learning
Do-Yup Kim, Member, IEEE, Da-Eun Lee, Ji-Wan Kim, and Hyun-Suk Lee
D.-Y. Kim is with the Department of Information and Communication AI Engineering, Kyungnam University, Changwon-si, Gyeongsangnam-do 51767, South Korea (e-mail: doyup09@kyungnam.ac.kr).
D.-E. Lee, J.-W. Kim, and H.-S. Lee are with the School of Intelligent Mechatronics Engineering, Sejong University, Seoul, South Korea (e-mail: kjs990516@naver.com, jiwan1228@naver.com, hyunsuk@sejong.ac.kr).
August 1, 2023
In this paper, we examine cloud-edge-terminal IoT networks, where edges undertake a range of typical dynamic scheduling tasks.
In these IoT networks, a central policy for each task can be constructed at a cloud server.
The central policy can be then used by the edges conducting the task, thereby mitigating the need for them to learn their own policy from scratch.
Furthermore, this central policy can be collaboratively learned at the cloud server by aggregating local experiences from the edges, thanks to the hierarchical architecture of the IoT networks.
To this end, we propose a novel collaborative policy learning framework for dynamic scheduling tasks using federated reinforcement learning.
For effective learning, our framework adaptively selects the tasks for collaborative learning in each round, taking into account the need for fairness among tasks.
In addition, as a key enabler of the framework, we propose an edge-agnostic policy structure that enables the aggregation of local policies from different edges.
We then provide the convergence analysis of the framework.
Through simulations, we demonstrate that our proposed framework significantly outperforms the approaches without collaborative policy learning. Notably, it accelerates the learning speed of the policies and allows newly arrived edges to adapt to their tasks more easily.
Agnostic policy, cloud computing, edge networks, federated learning, IoT networks, reinforcement learning, dynamic scheduling.
§ INTRODUCTION
With the recent explosive development of internet-of-things (IoT) applications, a hierarchical architecture for IoT networks has been widely studied to ensure agility, flexibility, and scalability <cit.>.
In this hierarchical architecture, IoT networks can be decomposed into edges and a cloud-edge network, as illustrated in Fig. <ref>.
Each edge forms its own network, called an edge network, comprising an access point (AP) and IoT terminal devices, while the cloud-edge network consists of a cloud server and edge networks.
Such hierarchical IoT networks are typically referred to as cloud-edge-terminal IoT networks, as they consist of a cloud server, edges, and IoT terminal devices.
In this hierarchical architecture, edges in IoT networks carry out numerous tasks, such as inference, prediction, planning, and scheduling, to support various IoT applications and services.
In particular, a variety of dynamic scheduling tasks have been widely considered as major tasks in IoT networks.
Dynamic scheduling tasks typically involve a problem in which an item is chosen from multiple items to achieve a goal; such problems have been widely considered in various applications, from recommendation <cit.> to resource scheduling <cit.> to queueing <cit.>.
In IoT networks, different edge functionalities, such as radio resource management <cit.>, data gathering <cit.>, and wireless power transfer <cit.>, correspond to this problem, in which an edge selects an IoT terminal device from multiple IoT terminal devices for the corresponding functionalities.
It is worth emphasizing that in typical IoT networks, multiple edges share common dynamic scheduling tasks since these functionalities are generally used in IoT networks.
For example, most edge nodes should conduct radio resource management tasks to serve IoT terminal devices. Additionally, in most sensor applications, each edge node carries out data aggregation scheduling tasks, which schedule IoT terminal devices to effectively aggregate data from each IoT terminal device.
To efficiently address dynamic scheduling tasks in IoT networks, deep learning, especially deep reinforcement learning (DRL), has been widely applied <cit.>.
DRL is one of the representative methods for solving complex stochastic problems, thanks to the large representational capability of deep learning.
Specifically, in DRL-based approaches, an agent directly learns a policy represented by a deep neural network (DNN) model to address its task using data or experiences obtained from interactions with environments.
Consequently, these approaches allow each edge to find policies for its tasks without the formulation and optimization of complex scheduling task problems based on hand-crafted mathematical models, as in traditional approaches.
In cloud-edge-terminal IoT networks, a cloud server can play the role of coordinator to manage policies for tasks, thanks to the hierarchical architecture.
Therefore, it may be possible that a central policy for each dynamic scheduling task can be constructed at a cloud server.
Then, newly arrived edges can avoid performance deterioration due to an initial learning phase by using the central policy instead of learning their own policies from scratch.
Besides, with the coordination of the cloud server, the edges that conduct the task can cooperate in learning the central policy so as to learn the policy more efficiently.
One intuitive way for such cooperation is to directly collect data (i.e., experiences) from the edges to the cloud server.
The cloud server then learns a policy to solve the problem using the collected data and redistributes the policy to the edges.
However, this approach is impractical since directly uploading the data from edges to the cloud server causes privacy and security issues <cit.>.
Moreover, it incurs unaffordable communication costs due to the transmission of an enormous amount of data from all the edges to the cloud server <cit.>.
As a viable solution to address these issues, federated learning has been widely studied <cit.>.
In federated learning, a cloud server and local learners sharing an identical task can cooperate to efficiently train a central DNN model to address the task.
Specifically, in each round, each local learner trains its local DNN model using its local training data and uploads its trained local DNN model to the cloud server instead of its local data.
The cloud server can then improve the central model by aggregating the received local models and redistributing it to the local learners.
This enables the central model to be trained in a distributed manner while avoiding privacy issues.
By applying this procedure to DRL, federated reinforcement learning (FRL) has also been studied <cit.>.
We refer the reader to comprehensive surveys of federated learning in <cit.> for more details.
The hierarchical architecture of cloud-edge-terminal IoT networks is suitable for applying federated learning to collaboratively learn a policy for each task.
Since multiple edges share an identical task, each edge can act as a local learner for the policy of the task, and the cloud server can aggregate the local policies of the edges.
Thus, federated learning in cloud-edge-terminal IoT networks has been studied for tasks such as mobile keyboard prediction, cyberattack detection, and energy demand prediction <cit.>.
However, there is as yet no FRL framework that enables edges to collaboratively address their dynamic scheduling tasks, even though a variety of DRL-based approaches to such tasks have been studied.
This is because conventional DRL-based approaches that have been studied so far are inapplicable to FRL.
Specifically, they are developed to learn a policy focused on achieving its goal only in a target edge.
Consequently, the policy focuses on addressing the characteristics of the target edge, such as the number of IoT terminal devices and the statistics of system uncertainties, rather than generalizing them for application to all edges.
Furthermore, the conventional DRL-based approaches make the corresponding policies for different target edges have different structures, even if their tasks are identical.
As a result, it is difficult to aggregate the policies learned from different edges via FRL due to their dependency on edge-specific characteristics.
Therefore, to enable edges to collaboratively learn policies for dynamic scheduling tasks, a novel policy structure is needed that can be used across different edges while avoiding such edge-specific characteristics.
Even if FRL can be applied for collaborative policy learning for dynamic scheduling tasks in cloud-edge-terminal IoT networks, it is difficult to simply use it because of the scarcity of cloud resources, such as computing power, memory, and network bandwidth <cit.>.
The larger the number of edges participating in FRL for collaborative policy learning, the greater the usage of cloud resources, making it harder to aggregate local policies for all tasks.
Moreover, while the larger number of edges participating in FRL generally improves the efficiency of FRL due to the increased amount of experiences <cit.>, some edges may not be available to participate in FRL in each round.
Hence, to maximize the effectiveness of collaborative policy learning, the tasks whose local policies are to be aggregated in each round should be carefully selected to effectively utilize limited cloud resources while considering the following factors: the number of available edges for each task in the round and the number of edges that have participated in FRL for each task so far.
However, there is no such work on collaborative policy learning frameworks for tasks in cloud-edge-terminal IoT networks yet.
In this paper, we study collaborative policy learning for dynamic scheduling tasks in cloud-edge-terminal IoT networks.
In these IoT networks, edges share a variety of dynamic scheduling tasks, as illustrated in Fig. <ref>.
Specifically, each edge conducts its own dynamic scheduling tasks and learns the policies for the corresponding tasks using DRL.
Meanwhile, a cloud server trains the central policy for each task by aggregating the local policies that are learned at different edges via FRL.
To this end, we propose a collaborative policy learning framework for dynamic scheduling tasks in cloud-edge-terminal IoT networks.
In this framework, a cloud server manages the policy for each dynamic scheduling task, which is commonly conducted across multiple edges, and learns it by aggregating the local policies for the task from the edges.
This collaborative learning process accelerates the learning speed of the policy for each task in the cloud-edge-terminal IoT networks.
Additionally, when new edges arrive in the network, they can easily adapt to conducting their tasks by utilizing the central policies for the tasks.
The contributions of this paper are summarized as follows:
* We propose a novel collaborative policy learning framework for dynamic scheduling tasks in IoT networks using FRL. It learns the central policy for each task, which is edge-agnostic, by effectively utilizing limited cloud resources and considering the uncertainties in the availability of participating edges for FRL. We provide a convergence analysis of the proposed framework.
* In the proposed framework, we develop a task selection algorithm that adaptively selects the tasks for which local policies are to be federated. This enhances the effectiveness of learning the central policies. Specifically, it aims to maximize the total average number of edges that participate in FRL, while considering fairness among tasks. As a result, this approach facilitates the effective learning of central policies for all tasks.
* As an enabler of the proposed framework, we propose an edge-agnostic policy structure for a given dynamic scheduling task, which is applicable to collaborative policy learning. It possesses the capability to generalize the edge-specific characteristics of the policy for the task. Consequently, local policies based on this edge-agnostic policy structure can be well aggregated by FRL.
* Through extensive experiments, we demonstrate that the proposed framework enables cloud-edge-terminal IoT networks to learn the policies for dynamic scheduling tasks in a distributed manner. Thanks to this, it achieves significant performance improvement compared with the approaches that do not utilize collaborative policy learning. In addition, our framework provides adaptability for newly arrived edges and accelerates the learning speed of the policy.
The rest of this paper is organized as follows. Section <ref> presents the system model and problem formulation for dynamic scheduling tasks. Section <ref> discusses some key challenges in the context of collaborative policy learning, and Section <ref> presents a collaborative policy learning framework designed to address these challenges. In Section <ref>, we present experimental results to validate the effectiveness of the proposed framework. Finally, Section <ref> provides the conclusion of the paper.
§ CLOUD-EDGE-TERMINAL IOT NETWORKS WITH MULTIPLE DYNAMIC SCHEDULING TASKS
§.§ System Model of Cloud-Edge-Terminal IoT Networks
We consider a cloud-edge-terminal IoT network[For brevity, we will henceforth refer to “cloud-edge-terminal IoT network” simply as “IoT network” throughout the paper.] that consists of a cloud server and multiple edges, each with multiple IoT devices, as illustrated in Fig. <ref>.
We denote the set of edges by 𝒩={1,2,…,N}, where N is the number of edges, and edge n∈𝒩 is composed of one access point (AP) and M_n IoT devices.
The set of IoT devices in edge n is denoted by ℳ_n={1,2,…,M_n}.
Each edge carries out one of several general dynamic scheduling tasks,[For brevity, we will interchangeably use “dynamic scheduling task” and “task” throughout the paper if there is no confusion.] such as radio resource management <cit.>, data gathering <cit.>, and wireless power transfer <cit.>.[It is worth noting that this system model offers a straightforward extension to scenarios where an edge carries out multiple tasks. This can be accomplished by conceptualizing the edge as a collection of distinct virtual edges, with each one representing an individual task.]
We postulate the existence of L distinct types of tasks, and we denote the set of these tasks by ℒ={1,2,…,L}.
Here, each element l∈ℒ signifies a unique individual task.
We proceed to denote the task of edge n as l(n)∈ℒ.
Additionally, we define the set of edges involved in task l as 𝒩(l) = {n:l(n)=l}.
Lastly, we consider the maximum network bandwidth B, memory resource O, and computing resource C of the IoT network for performing FRL in the cloud server.
§.§ Dynamic Scheduling Tasks in Edges
We now describe various types of dynamic scheduling tasks, as provided in the previous subsection, using the following common procedure.
Each edge selects an IoT device and makes decisions relevant to scheduling (e.g., transmission power in wireless network scheduling and the number of jobs to be serviced in job scheduling) to achieve the goal of the respective task.
Additionally, each edge considers each IoT device's conditions relevant to scheduling (e.g., the current queue length in queue scheduling and the channel conditions in wireless network scheduling) for effective scheduling.
From this procedure, we formulate a generic dynamic scheduling problem structure for edges that can represent various types of dynamic scheduling tasks.
To this end, we first provide a system model for each edge n performing its corresponding task l(n).
Each edge is assumed to perform its task over a discrete time horizon t∈{1,2,…}.
It is worth noting that the time horizon is defined individually for each edge to describe its task, and it does not imply a global time horizon that spans across multiple edges.
Continuing, we define the state information vector of IoT device m∈ℳ_n in time slot t by s_n,m^t=(s_n,m,1^t, …, s_n,m,K(l(n))^t), where s_n,m,k^t is the kth state information of IoT device m in time slot t, and K(l) is the number of types of state information for task l.
Then, we can define a state of edge n in time slot t as
s_n^t=(s_n,1^t,…,s_n,M_n^t)∈𝒮_n,
where 𝒮_n is the state space.
We also define an action of edge n in time slot t as
a_n^t=(m_n^t,g_1^t,…,g_G(l(n))^t)∈𝒜_n,
where m_n^t∈ℳ_n is the IoT device scheduled by edge n in time slot t, and {g_1^t, … ,g_G(l(n))^t} represent the decision set relevant to scheduling, where G(l) is the number of relevant decisions for task l.
Next, we let u_l(s,a) be the reward function for task l, which represents the goal of task l.
We then define the transition probabilities ℙ(s_n^t+1|s_n^t,a_n^t) in accordance with the system uncertainties present in the corresponding edge.
Subsequently, we define a policy, π_n:𝒮_n→𝒜_n, that maps states into actions.
With these definitions in place, the dynamic scheduling problem of edge n can be formulated as a Markov decision process (MDP), expressed as follows:
max_π_n:𝒮_n→𝒜_n U_l(n)^π_n(s_n) ≜𝔼[∑_t=0^∞γ^t u_l(n)(s_n^t,π_n(s_n^t)) | s_n^0=s_n],
where γ is a discount factor.
For this problem, the optimal value function can be defined by
J^*_n(s_n) = max_π_n U^π_n_l(n)(s_n), ∀ s_n∈𝒮_n,
and its corresponding optimal policy is given by
π^*_n = arg max_π_n U^π_n_l(n)(s_n), ∀ s_n∈𝒮_n.
The problem formulation presented in (<ref>) is widely used in the literature to represent a diverse range of dynamic scheduling tasks <cit.>.
This popularity is due to the fact that most scheduling systems, including those in IoT networks, manage and identify their corresponding items/devices using indexing, as in the formulation.
In Appendix <ref>, we provide several examples of representative tasks in IoT networks.
These include wireless power transfer, data gathering, and radio resource scheduling, all of which are modeled using this formulation.
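To make the formulation concrete, the following minimal Python sketch estimates the discounted objective above for one edge by a truncated Monte-Carlo rollout; the `env.step` and `policy` interfaces are illustrative assumptions and not part of the formulation.

def discounted_return(env, policy, s0, gamma=0.95, horizon=200):
    # Monte-Carlo estimate of U^pi(s0) = E[ sum_t gamma^t u(s^t, pi(s^t)) ],
    # truncated after `horizon` slots.  `policy(s)` is assumed to return an
    # action (m, g_1, ..., g_G); `env.step(s, a)` is assumed to sample the
    # next state from the edge's transition dynamics and return it together
    # with the reward u(s, a).
    s, ret, disc = s0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)
        s, u = env.step(s, a)
        ret += disc * u
        disc *= gamma
    return ret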
§ CONCEPT AND KEY CHALLENGES ON COLLABORATIVE POLICY LEARNING FOR DYNAMIC SCHEDULING TASKS
In this section, we provide the concept of collaborative policy learning for dynamic scheduling tasks and discuss the key challenges involved in implementing it.
§.§ Concept of Collaborative Policy Learning
In the IoT network, each edge can optimally solve the dynamic scheduling problem in (<ref>) by finding its optimal policy π_n^* in (<ref>).
To this end, standard dynamic programming (DP) approaches, such as value iteration and policy iteration, and traditional reinforcement learning (RL) approaches, such as SARSA and Q-learning, can be used.
However, DP approaches are generally impractical for practical applications, as they require perfect prior information on system uncertainties.
Also, both DP and RL suffer from high computational complexity due to the curse of dimensionality.
To overcome these practical limitations, DRL has been widely used recently to solve such problems <cit.>.
In DRL, an agent constructs a DNN that can represent the policy of the problem.
The agent then trains the DNN to approximate the optimal policy to solve the problem.
Consequently, each edge can solve its dynamic scheduling problem by training a policy represented by a DNN.
Since the policy is represented as a DNN with such an approach based on DRL, a central DNN (i.e. a central policy) for each task may be collaboratively learned at the cloud server.
To this end, we can use FRL to learn the central DNN by effectively aggregating the local DNNs (i.e., the local policies) from all edges conducting the task.
We now describe the FRL procedure for collaborative policy learning for dynamic scheduling tasks in the IoT network.
This process takes place over a discrete time horizon, which consists of multiple rounds denoted by ℛ={1,2,…}.
The index of rounds is denoted by r.
The time horizon of FRL typically spans a larger time scale than that of each task.
As a result, FRL aggregates the DNNs, which are locally trained by the edges over multiple time slots, in each round.
Since we consider L tasks, FRL is applied to L DNNs in the cloud network.
The central parameters of the DNN for task l at the cloud server are denoted by 𝐰_l, and the local parameters of the DNN at edge n are denoted by 𝐰_n.
We define the vector of the parameters of the DNNs of all edges as 𝐰=(𝐰_1,…,𝐰_N).
With these definitions, we can formally define the problem of the collaborative policy learning framework as follows:
min_𝐰 l(𝐰) ≜∑_l∈ℒ1/K̅_l∑_n∈𝒩(l)∑_k=1^K_n f_n(𝐰_n,k),
where K_n is the number of experiences from edge n, K̅_l=∑_n∈𝒩(l)K_n, and f_n(𝐰_n,k) is an empirical loss function with 𝐰_n at the kth experience of edge n.
To solve the problem, the cloud server broadcasts the central parameters, 𝐰_l^r, for task l in round r to the edges in 𝒩(l).
Then, in round r, each edge n∈𝒩(l) substitutes its local parameters, 𝐰_n^r, with 𝐰_l^r.
After this substitution, each edge trains its local parameters using its local experiences.
These trained parameters are then uploaded to the cloud server.
The cloud server updates its central parameters for task l by aggregating the received parameters from edges in 𝒩(l), using
𝐰_l^r+1=𝐰_l^r-∑_n∈𝒩(l)c_n ∇ g_n^r,
where ∇ g_n^r is the local gradient of edge n in round r, and c_n is the central learning weight of edge n.
Here, ∇ g_n^r reflects the disparity between the central parameter, 𝐰_l^r, in round r and the local parameters, 𝐰_n^r', of edge n following local training in round r.
Meanwhile, c_n is established based on the contribution of edge n to the central parameter updates for task l(n), defined as
c_n=K_n/∑_n'∈𝒩(l(n))K_n'.
Once the central parameters are updated, the current round is completed.
The process then proceeds to the next round.
By repeating this process, FRL solves the problem in (<ref>).
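As an illustration, a minimal Python sketch of this central update for a single task is given below; parameters are treated as flat NumPy arrays, and the dictionaries of gradients and experience counts are hypothetical bookkeeping structures rather than part of the original formulation.

import numpy as np

def aggregate_task_policy(w_central, local_gradients, num_experiences):
    # One central update for a single task l: the cloud subtracts the weighted
    # local gradients, with each edge weighted by its share of experiences,
    # c_n = K_n / sum over n' of K_n'.
    total = sum(num_experiences.values())
    w_new = np.array(w_central, dtype=float, copy=True)
    for n, grad in local_gradients.items():
        c_n = num_experiences[n] / total
        w_new -= c_n * np.asarray(grad, dtype=float)
    return w_new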
§.§ Key Challenges on Collaborative Policy Learning
§.§.§ Limited Cloud Resources for Collaborative Policy Learning on Multiple Tasks
FRL operates in an IoT network to handle multiple tasks.
However, as described in Section <ref>, it must do so using only limited cloud resources, such as computing power, memory, and network bandwidth.
This limitation implies that if there are not enough cloud resources to proceed with FRL for all tasks in each round, only a subset of tasks may be selected for FRL.
Specifically, the amount of cloud resources required to conduct FRL for each task depends on the number of edges participating in FRL.
In each round, some edges may be unable to participate in FRL due to various reasons, such as other higher-priority jobs or network shutdowns for energy-saving purposes.
However, in typical FRL, once the central parameters are updated by aggregating the local parameters of participating edges, the local parameters of non-participating edges are abandoned.
All edges' local parameters are then substituted by the central parameters.
This is because using outdated local parameters in FRL may negatively affect the convergence of central parameters <cit.>.
Additionally, according to the convergence analysis of FRL, the effectiveness of FRL improves as the number of participants in FRL and corresponding data increases <cit.>.
This implies that even if tasks are selected uniformly, the effectiveness of FRL for each task may vary significantly based on the number of participants.
Therefore, to ensure that all tasks benefit fairly from FRL, it is essential to consider fairness in terms of the number of participants, rather than the number of times they are selected.
In conclusion, to effectively utilize cloud resources for FRL, tasks should be carefully selected to maximize the number of participants while maintaining fairness among tasks in terms of the number of participants.
This issue will be addressed in Section <ref>.
§.§.§ Collaborative Learning-Inapplicability of Conventional Policy Structures
In this subsection, we explain why conventional policy structures are inapplicable to FRL for collaborative policy learning.
From the problem in (<ref>), we can see that the edges with task l share an identical problem structure, which is defined by the state and decisions relevant to K(l) and G(l), respectively, and the reward function u_l(s,a).
Accordingly, it seems feasible to collaboratively learn the policy for the task l by using FRL (i.e., simply aggregating the DNNs from which edges with identical tasks locally train via DRL).
However, in practice, it is challenging to adopt FRL if DRL is directly applied to solve the problem in (<ref>), as in conventional works <cit.>.
This is because the problems have different dynamics due to the varying number of IoT devices (i.e., M_n) and the transition probabilities.
For example, for edges n_1 and n_2 with varying numbers of IoT devices and system uncertainties, their state and action spaces can be different (i.e., 𝒮_n_1≠𝒮_n_2 and 𝒜_n_1≠𝒜_n_2), and their transition probabilities differ as well.
This implies that the DNNs for the policies in edges n_1 and n_2, based on the conventional works, have different structures (e.g., the DNNs may have different numbers of input and output units).
Besides, even though the state and action spaces are identical, they cannot be simply aggregated via FRL due to the different underlying statistical characteristics on the edges.
Therefore, one of the key challenges is that conventional dynamic scheduling policies are inapplicable to collaborative policy learning.
To overcome this issue, we need a policy structure that has a generalization capability over different edges, which implies that the policy for task l learned from one edge can be used at other edges in 𝒩(l).
Hence, such a policy structure allows us to collaboratively learn the DNN (i.e., a central policy) for task l at the cloud server by effectively using the DNNs (i.e., the local policies) from all edges in 𝒩(l).
This issue will be addressed in Section <ref>.
§ COLLABORATIVE POLICY LEARNING FOR DYNAMIC SCHEDULING TASKS IN IOT NETWORKS
In this section, we introduce two key enablers of collaborative policy learning for dynamic scheduling tasks in IoT networks.
First, we present a task selection algorithm tailored for efficient FRL in resource-limited IoT networks.
Second, we propose a policy structure suitable for collaborative learning in dynamic scheduling tasks.
These enablers address the key challenges outlined in Section <ref>, laying the groundwork for a collaborative policy learning framework for dynamic scheduling tasks in IoT networks leveraging FRL.
§.§ Opportunistic Task Selection for Effective Collaborative Policy Learning
In this subsection, we address the issue of FRL for multiple tasks due to limited cloud resources raised in Section <ref>.
Firstly, we define the required resources for each participant (i.e., edge) with task l as the required network bandwidth B_l, the required memory resources O_l, and the required computing resources C_l.
We then model the availability of each edge to participate in FRL in each round as a stationary process.
To represent the availabilities of all edges concisely, we define an availability state that corresponds to a combination of the availability conditions of all edges in a round and denote it by p∈𝒫, where 𝒫 is the availability state space.
The availability indicator of edge n in availability state p is represented by x_n^p∈{0,1}, where 1 indicates that edge n is available to participate in FRL, and 0 indicates that it is not.
The vector of the availability indicator of edges in availability state p is defined as 𝐱^p=(x_n^p)_∀ n∈𝒩.
The number of available edges with task l in a round with availability state p can be given as x_l^p=∑_n∈𝒩(l)x_n^p.
Then, the required bandwidth for task l in a round with availability state p is given by B_l^p=x_l^pB_l.
Similarly, the required memory resources and computing resources are given by O_l^p=x_l^pO_l and C_l^p=x_l^pC_l, respectively.
For a task selection problem, we define a task selection indicator, q_l^p, for task l in availability state p as
q_l^p =
1, if task l is selected for FRL in a round with availability state p,
0, otherwise.
For convenience, we additionally define the vector of task selection indicators in availability state p as 𝐪^p=(q_l^p)_∀ l∈ℒ, and subsequently, the vector of all task selection indicators as 𝐪=(𝐪^p)_∀ p∈𝒫.
Given that the required network bandwidth, memory resources, and computing resources for selected tasks must not exceed their corresponding maximum resources allowed for FRL in the cloud server, we consider the following constraints:
∑_l∈ℒ q_l^p B_l^p ≤ B, ∀ p∈𝒫,
∑_l∈ℒ q_l^p O_l^p ≤ O, ∀ p∈𝒫,
∑_l∈ℒ q_l^p C_l^p ≤ C, ∀ p∈𝒫.
As discussed in Section <ref>, effective FRL necessitates strategic task selection.
This strategy aims to maximize the number of participating edges in FRL and ensure that all tasks derive benefits.
To achieve this goal, we adopt a fairness concept in terms of the number of participating edges.
By taking fairness in the average number of participating edges into account, we can guarantee that all tasks, including those conducted at a smaller number of edges, benefit from FRL.
We calculate the average number of participants for task l as ∑_p∈𝒫ϕ^p q_l^p x_l^p, where ϕ^p is the probability of the availability state being in p.
We then define the constraint of the minimum average number of participants for task l as
∑_p∈𝒫ϕ^p q_l^p x_l^p ≥ X_l, ∀ l∈ℒ,
where X_l is the required minimum average number of participants for task l.
Furthermore, we define the utility function for task l as a function of its average number of participants, given by V_l(∑_p∈𝒫ϕ^p q_l^p x_l^p).
The utility function for task l here is different from the reward function for task l defined in Section <ref>. The former is used in the task selection problem formulated in (<ref>), while the latter is used to represent the goal of task l.
Finally, we present the formulation of the task selection problem as follows:
max_𝐪 ∑_l∈ℒ V_l (∑_p∈𝒫ϕ^p q_l^p x_l^p)
s.t. (<ref>), (<ref>), (<ref>), (<ref>).
It is important to note that the task selection problem can accommodate various fairness definitions over tasks, such as proportional fairness and minmax fairness, by appropriately choosing the utility function and constraint parameters (i.e., X_l's).
For example, we can achieve weighted proportional fairness if we choose the utility function as V_l(·)=w_l log(·), ∀ l∈ℒ, where w_l is the weight of task l, and the constraint parameters as X_l=-∞, ∀ l∈ℒ (i.e., no constraints for the minimum average number of participants).
Hence, the choice of the utility function and constraint parameters can depend on the network characteristics, such as the size of DNNs and the complexity of tasks.
We now develop an algorithm to optimally solve the task selection problem in (<ref>).
To this end, we first relax the integer variables into the continuous ones and introduce auxiliary variables y_l, ∀ l∈, which represent the average numbers of participants (i.e., ∑_p∈ϕ^p q_l^p x_l^p).
We then apply the Lagrangian approach and a stochastic subgradient algorithm, as in the opportunistic framework <cit.>.
Due to the page limit, the details of the task selection algorithm are omitted.
The task selection algorithm in round r determines the task selection, 𝐪^(r)=(q_l^(r))_∀ l∈ℒ, in round r, using[To explicitly denote the round, we use a superscript (·)^(r) instead of (·)^p. This is justified because the availability state in each round is determined based on the system conditions in that specific round, such as the number of participants, the channel conditions, etc.]
𝐪^(r)=arg max_(q_l)_∀ l∈ℒ:(<ref>),(<ref>),(<ref>){∑_l∈ℒ(λ_l^(r)+μ_l^(r)) q_l x_l^(r)},
where λ_l^(r) is the Lagrange multiplier of task l in round r with respect to the auxiliary variable y_l, μ_l^(r) is the one with respect to the constraint in (<ref>), and x_l^(r) is the number of available edges with task l in round r.
At the end of round r, the Lagrange multipliers are updated, using
λ_l^(r+1) = [λ_l^(r)-α^(r)( q_l^(r)x_l^(r)-y_l^(r)) ]^+,
μ_l^(r+1) = [μ_l^(r)-α^(r)( q_l^(r)x_l^(r)-X_l ) ]^+,
where [·]^+ = max{0,·}, α^(r) is the positive step size in round r, and y_l^(r)=arg max_y_l≥ 0{ V_l(y_l)-λ_l^(r)y_l }.
We can demonstrate the optimality of this algorithm using the following theorem.
The task selection algorithm described in (<ref>), (<ref>), and (<ref>) optimally solves the dynamic scheduling task selection problem in (<ref>).
In the interest of brevity, we refer readers to <cit.> for the proof.
Moving forward, implementing the algorithm necessitates solving the task selection problem in (<ref>) for each round.
By denoting the weight of task l in round r as w^(r)_l=(λ_l^(r)+μ_l^(r))x_l^(r), we can recast the problem as
max_(q_l)_∀ l∈ℒ:(<ref>),(<ref>),(<ref>)∑_l∈ℒ w_l^(r) q_l.
We denote the solution to this problem as 𝐪^(r).
Notably, the problem in (<ref>) is a typical multidimensional knapsack problem <cit.>, which can be solved efficiently using dynamic programming or branch-and-bound methods <cit.>.
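For illustration, the following Python sketch implements one round of this selection rule together with the stochastic subgradient multiplier updates under a proportional-fairness utility V_l(y)=w_l log(y); the exhaustive subset search stands in for a proper multidimensional knapsack solver and is practical only because the number of tasks is small. All function and variable names are illustrative assumptions.

import itertools

def select_tasks(avail, lam, mu, B_l, O_l, C_l, B, O, C):
    # Per-round task selection: maximize sum of (lambda_l + mu_l) * x_l over
    # selected tasks, subject to the bandwidth/memory/computing budgets, where
    # avail[l] = x_l^(r) is the number of available edges for task l.
    tasks = list(avail)
    best, best_val = set(), -1.0
    for subset in itertools.chain.from_iterable(
            itertools.combinations(tasks, k) for k in range(len(tasks) + 1)):
        if (sum(avail[l] * B_l[l] for l in subset) <= B and
                sum(avail[l] * O_l[l] for l in subset) <= O and
                sum(avail[l] * C_l[l] for l in subset) <= C):
            val = sum((lam[l] + mu[l]) * avail[l] for l in subset)
            if val > best_val:
                best, best_val = set(subset), val
    return best

def update_multipliers(selected, avail, lam, mu, X_min, step, weight=1.0):
    # Stochastic subgradient updates of the Lagrange multipliers; with
    # V_l(y) = weight * log(y), the auxiliary variable maximizing
    # V_l(y) - lambda_l * y is y = weight / lambda_l.
    for l in avail:
        q_x = avail[l] if l in selected else 0
        y = weight / lam[l] if lam[l] > 0 else 1e6
        lam[l] = max(0.0, lam[l] - step * (q_x - y))
        mu[l] = max(0.0, mu[l] - step * (q_x - X_min[l]))
    return lam, mu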
§.§ Collaborative Learning-Applicable Edge-Agnostic Policy Structure for General Dynamic Scheduling Tasks
As we discussed in Section <ref>, the diverse range of dynamic scheduling tasks, such as wireless power transfer, data gathering, and radio resource scheduling, have an identical problem structure in (<ref>).
Hence, if multiple edges share such identical tasks, for each task, they also share the identical problem structure, which is determined by the types of state information, decisions, and reward function for the task.
However, as emphasized in Section <ref>, the edges typically have different dynamics, due to the varying number of IoT devices and system uncertainties.
This renders conventional dynamic scheduling policies inapplicable to collaborative policy learning because of their lack of generalization capability, as discussed in Section <ref>.
To ensure that a policy for each task is capable of generalization over different edges that conduct the task, it should be designed to represent states and actions in a way that is independent of the dynamics of the edges.
Furthermore, the policy should be able to learn a scheduling principle, capable of identifying which condition (i.e., state information) of IoT devices is more favorable for effective scheduling.
For example, suppose that a policy represents such a principle for task l.
Then, any edge that conducts task l could identify its best IoT device to schedule by comparing the current conditions of all IoT devices based on the policy.
If each edge learns a DNN that represents such a policy using DRL, the DNNs from all edges can be aggregated via FRL thanks to the generalization capability across the edges.
Consequently, it would enable collaborative policy learning.
Here, we propose a collaborative learning-applicable edge-agnostic policy structure that satisfies the aforementioned features by borrowing the concept of the circumstance-independent (CI) policy structure in <cit.>.
The CI policy structure represents a policy for the radio resource scheduling problem in a single-cell wireless network, regardless of the network's dynamic circumstances.
For an edge-agnostic policy structure, we generalize the concept of the CI policy structure for dynamic scheduling tasks and extend it to be used in multiple edges for FRL.
Next, we present edge-agnostic state and action structures. These structures focus on the conditions of the IoT devices in each edge, rather than on each IoT device itself, as in (<ref>) and (<ref>).
§.§.§ Structure of Edge-Agnostic State, Action, and Policy
We first define an edge-agnostic state that represents the conditions of IoT devices in any edges.
Specifically, it indicates whether an IoT device with a specific state information condition exists or not in each time slot.
To achieve this, the space of each kth state information of task l, where k∈{1,2,…,K(l)}, is partitioned into H_k,l disjoint intervals.
The intervals in the partitions for kth state information are indexed by h_k,l∈{1,2,…,H_k,l}.
The condition of each IoT device in the edge with task l can then be represented as a combination of the intervals of each state information, as illustrated in Fig. <ref>.
The structure of the edge-agnostic state for task l is defined as a K(l)-dimensional matrix whose size is given by ∏_k∈{1,…,K(l)}H_k,l.
Each element of the state is indexed by a tuple h=(h_1,l,…,h_K(l),l).
Formally, we denote the edge-agnostic state for task l by s̅_l and define it as
s̅_l(h) =
1, if there exists any IoT device in condition h,
0, otherwise,
where s̅_l(h) denotes the element of state s̅_l whose index is given by h.
The edge-agnostic state space for task l can be defined by 𝒮̅_l={0,1}^∏_k∈{1,…,K(l)}H_k,l.
It is noteworthy that the edge-agnostic state for each task can describe the conditions of IoT devices in any edges with the task, regardless of the number of IoT devices.
Based on the edge-agnostic state, we can define an edge-agnostic action that indicates the condition to be scheduled rather than a specific IoT device.
Specifically, the edge-agnostic action for task l can be defined using the index of the element of the edge-agnostic state and relevant scheduling decisions as
a̅_l =(h_1,l,…,h_K(l),l,g_1,l,…,g_G(l),l)∈𝒜̅_l,
where 𝒜̅_l is the edge-agnostic action space for task l.
Note that not all combinations of conditions in the edge-agnostic state may be feasible for scheduling, as there may not be any IoT device satisfying a particular condition.
Thus, we define the feasible edge-agnostic action space with state s̅_l as
𝒜̅_l(s̅_l)={a̅_l∈𝒜̅_l | s̅_l(h) = 1}.
With the aforementioned elements, an edge-agnostic policy for task l can be defined as π̅_l:𝒮̅_l→𝒜̅_l.
In line with the general scheduling principle, the edge-agnostic state and action for task l focus on the conditions of IoT devices, and the edge-agnostic policy represents the selection of a specific condition in scheduling rather than the selection of the index of a specific IoT device.
This implies that when the edge-agnostic policy for each task is learned through DRL, its corresponding DNN is trained to approximate the optimal general scheduling principle for the task that has a generalization capability over different edges.
Therefore, the edge-agnostic policy can be utilized in any edges with task l, even if the edges have different dynamics such as the numbers of IoT devices.
Consequently, FRL can be applied to the DNN for collaborative learning thanks to the generalization capability.
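A minimal Python sketch of this construction is given below: the raw per-device conditions of an edge are binned into intervals to form the binary edge-agnostic state, and an occupied condition is later mapped back to a concrete device. The helper names and the use of NumPy are illustrative choices, not part of the proposed structure.

import random
import numpy as np

def to_edge_agnostic_state(device_states, bin_edges):
    # Bin each device's raw condition vector into intervals and mark the
    # occupied conditions with 1 in the K-dimensional binary state tensor.
    # Also return a map from each occupied condition index tuple h to the
    # devices currently in that condition, used for action translation.
    shape = tuple(len(edges) + 1 for edges in bin_edges)
    s_bar = np.zeros(shape, dtype=np.int8)
    devices_in = {}
    for m, cond in enumerate(device_states):
        h = tuple(int(np.digitize(cond[k], bin_edges[k]))
                  for k in range(len(bin_edges)))
        s_bar[h] = 1
        devices_in.setdefault(h, []).append(m)
    return s_bar, devices_in

def translate_action(h, devices_in):
    # Map an edge-agnostic action (a condition index h) back to a concrete
    # device; when several devices share condition h, pick one arbitrarily.
    return random.choice(devices_in[h])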
§.§.§ DRL for Learning Edge-Agnostic Policy
We propose a procedure for learning the edge-agnostic policy via DRL in edge n, based on the system model described in Section <ref>.
Specifically, using the reward function u_l(n) defined in the section, edge n can learn the edge-agnostic policy π̅_l(n) that solves the dynamic scheduling problem of edge n (and can also be used for other edges with task l(n)) via DRL methods.
For the sake of simplicity, we describe the proposed approach based on the well-known deep Q-network (DQN) in <cit.>, but other methods can also be employed.
In DQN, a DNN is employed to approximate the optimal action-value function, based on the edge-agnostic states and actions.
The DNN structure for the edge-agnostic policy is determined based on task l(n), and all edges associated with the same task have an identical DNN structure.
We denote the parameters of the DNN for task l by 𝐰_l and those in edge n by 𝐰_n.
Consequently, the DNN in edge n, 𝐰_n, has a structure identical to that of 𝐰_l(n).
The optimal action-value function with a given s̅_l(n) and a̅_l(n) is denoted by Q̅^*_l(n)(s̅_l(n),a̅_l(n)), while its Q-approximation derived from the DNN is denoted by Q̅_l(n)(s̅_l(n),a̅_l(n);𝐰_n).
In time slot t, the observed state s_n^t in accordance with (<ref>) is translated into the edge-agnostic state s̅_l(n)^t as per (<ref>).
Based on s̅_l(n)^t, the edge-agnostic policy chooses the edge-agnostic action a̅_l(n)^t from 𝒜̅_l(n)(s̅_l(n)^t), according to its exploration-exploitation strategy (for instance, an ϵ-greedy method).
Subsequently, the selected edge-agnostic action a̅_l(n)^t in line with (<ref>) is translated into the action a_n^t as per (<ref>).
When more than one IoT device fulfills the condition indicated by the edge-agnostic action, one of these IoT devices is arbitrarily selected as the scheduled IoT device for that time slot.
The translation of states and actions is illustrated in Fig. <ref>.
After scheduling, the reward u_l(n)^t(s_n^t,a_n^t) and the next state s_n^t+1 are observed.
Then, an edge-agnostic experience sample for time slot t is generated as (s̅_l(n)^t,a̅_l(n)^t,u_l(n)^t,s̅_l(n)^t+1).
Using these experience samples, the DNN is trained in line with standard DQN methods, incorporating experience replay and fixed-target Q-network.
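The following Python sketch summarizes one scheduling step of this procedure, reusing the state-translation helpers from the previous listing; the `q_net`, `env.observe`, and `env.schedule` interfaces are hypothetical stand-ins for the Q-approximation and the edge environment rather than the actual implementation.

import random

def dqn_step(q_net, env, bin_edges, decisions, epsilon, replay):
    # One edge-agnostic scheduling step: observe raw device conditions,
    # translate them, pick a feasible edge-agnostic action epsilon-greedily,
    # translate it back to a device, and store the edge-agnostic experience.
    raw_state = env.observe()
    s_bar, devices_in = to_edge_agnostic_state(raw_state, bin_edges)

    # Feasible edge-agnostic actions: occupied conditions x decision tuples.
    feasible = [(h, g) for h in devices_in for g in decisions]
    if random.random() < epsilon:
        a_bar = random.choice(feasible)                       # exploration
    else:
        a_bar = max(feasible, key=lambda a: q_net(s_bar, a))  # exploitation

    h, g = a_bar
    m = random.choice(devices_in[h])        # arbitrary device in condition h
    reward, next_raw = env.schedule(m, g)   # apply the action, observe reward
    next_s_bar, _ = to_edge_agnostic_state(next_raw, bin_edges)
    replay.append((s_bar, a_bar, reward, next_s_bar))
    return reward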
§.§ Collaborative Policy Learning Framework for Dynamic Scheduling Tasks Using FRL
In this subsection, we propose a collaborative policy learning framework for dynamic scheduling tasks in IoT networks leveraging FRL.
The framework is built upon the task selection algorithm and the collaborative learning-applicable scheduling policy discussed in previous subsections.
Initially, both the cloud server and each edge initialize their DNNs to learn the edge-agnostic policy applicable to dynamic scheduling tasks.
In the cloud server, the central parameters of the DNNs for all tasks are initialized as 𝐰_l^1, ∀ l∈ℒ, to facilitate FRL across all tasks.
Concurrently, each edge n initializes the local parameters, 𝐰_n, of its DNN as 𝐰_l(n)^1 to maintain identical DNN structures for the same task.
Moreover, the edge sets its local parameters, 𝐰_n^1, at the onset of the first round to be 𝐰_n.
It is crucial to understand that 𝐰_n, without the round index, represents the local parameters trained in the DQN algorithm at edge n.
Subsequently, edge n begins executing its DQN algorithm with its local parameters, 𝐰_n, as described in Section <ref>.
Notably, these DQN algorithms operate concurrently and can be temporarily suspended to accommodate FRL.
During round r, the cloud server evaluates the availability of the edges to engage in FRL, denoted as 𝐱^(r).
Based on this assessment, it makes a task selection decision, denoted as 𝐪^(r), in accordance with (<ref>).
This selection ensures the convergence of FRL for tasks, as we will demonstrate later.
For each selected task l, the cloud server and the available edges conduct FRL for the task through FedDS in parallel.
During this process, every available edge n associated with task l (i.e., n∈{n':n'∈𝒩(l) and x_n'^(r)=1}) temporarily suspends its DQN algorithm to maintain the current local parameters 𝐰_n.
Edge n calculates the local gradients, ∇ g_n^r, utilizing the local parameters of its DNN at the onset of round r, denoted as 𝐰_n^r, and the current ones, denoted as 𝐰_n.
Following this, edge n uploads the local gradients to the cloud server.
Upon receiving the local gradients from the available edges during round r, the cloud server computes the central parameters of the DNN for task l, denoted as 𝐰_l^r+1, using
𝐰_l^r+1=𝐰_l^r-∑_n∈𝒩(l)c_n^r x_n^(r)∇ g_n^r,
where c_n^r=N_lc_n/x_l^(r) with N_l being the number of edges associated with task l.
This procedure trains the edge-agnostic policy for task l by gathering experiences from all available edges associated with task l.
Following this, the cloud server broadcasts the updated central parameters, 𝐰_l^r+1, to all edges with task l.
Each edge n with task l substitutes its locally trained parameters from its DQN algorithm with 𝐰_l^r+1.
It then sets its local parameters at the onset of round r+1, denoted as 𝐰_n^r+1, to be 𝐰_n.
Once each edge resumes its previously paused DQN algorithm, the FRL process concludes.
Subsequent to the FRL process, the cloud server updates the Lagrange multipliers for tasks as depicted in (<ref>) and (<ref>) to ensure fairness across them, as defined in relation to the task selection problem.
The entire framework is summarized in Algorithm <ref>.
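A compact cloud-side sketch of one round of this framework is shown below; it assumes that the gradients of the available edges for the selected tasks have already been uploaded, that parameters are NumPy-like arrays, and that all container names are illustrative rather than taken from the actual implementation.

def cloud_round(central, edges_of, available, selected, grads, c):
    # Cloud-side processing of one round: for each selected task, aggregate
    # the gradients of the edges available in this round, using the
    # availability-aware weight c_n^r = N_l * c_n / x_l^(r), then record the
    # updated central parameters to broadcast back to all edges of the task.
    broadcast = {}
    for l in selected:
        participants = [n for n in edges_of[l] if n in available]
        x_l = len(participants)
        if x_l == 0:
            continue
        N_l = len(edges_of[l])
        w_new = central[l].copy()
        for n in participants:
            w_new -= (N_l * c[n] / x_l) * grads[n]
        central[l] = w_new
        broadcast[l] = w_new
    return central, broadcast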
§.§ Convergence Analysis of Collaborative Policy Learning
In this subsection, we provide a convergence analysis of the proposed collaborative policy learning framework.
To this end, we first introduce the following assumptions which are typical ones in the literature <cit.>:
The objective function of FL l(𝐰) is L-smooth, which means it has a Lipschitz continuous gradient with a constant L>0. Symbolically, this can be written as, for any two points 𝐰_1 and 𝐰_2, l(𝐰_1)-l(𝐰_2)≤⟨∇ l(𝐰_2),𝐰_1-𝐰_2 ⟩+L/2‖𝐰_1-𝐰_2‖^2.
The objective function of FL l(𝐰) is ξ-strongly convex with ξ>0, which means that for any 𝐰_1 and 𝐰_2, the following inequality holds: l(𝐰_1)-l(𝐰_2)≥⟨∇ l(𝐰_2),𝐰_1-𝐰_2 ⟩+ξ/2‖𝐰_1-𝐰_2‖^2.
The variance of the gradients at each edge is bounded for all rounds, i.e., 𝔼‖ g_n^r-g̅_n^r‖^2≤σ_n^2, ∀ n, r, where g̅_n^r denotes the mean of the gradients at edge n in round r.
The expected squared norm of the gradients at each edge is uniformly bounded for all rounds, i.e., 𝔼‖ g_n^r‖^2≤ V^2, ∀ n, r.
To capture and quantify the non-independent and identically distributed (non-i.i.d.) experiences among edges, we introduce a parameter to represent the degree of experience distribution difference for edge n, expressed as Γ_l(n)^n=f_n(𝐰_l(n)^*)-f_n^*, where 𝐰_l^* denotes the minimizer of the loss function for task l, and f_n^* represents the minimum value of f_n.
Subsequently, we define Γ_l=∑_n∈𝒩(l)c_nΓ_l^n.
We proceed under the assumption that each edge participates in FRL during each round with equal probability.
Given this assumption, we can demonstrate the convergence of FRL for dynamic scheduling tasks using the forthcoming theorem.
The collaborative policy learning framework for dynamic scheduling tasks in Algorithm <ref> achieves the following convergence rate of the target DNN for task l:
O((N_l^2+σ̅_l^2+Γ_l)T^-1),
where T is the number of rounds, and σ̅_l^2=∑_n∈𝒩(l)(c_nσ_n)^2.
See Appendix <ref>.
Theorem <ref> takes into consideration the opportunistic task selection in Section <ref>, in contrast to the analysis in <cit.>.
Consequently, it clearly shows the convergence of the proposed collaborative policy learning framework.
§ EXPERIMENTAL RESULTS
In this section, we showcase experimental results evaluating the performance of our proposed collaborative policy learning framework for dynamic scheduling tasks.
To achieve this, we have created a dedicated Python-based simulator and run simulations on a simulated IoT network composed of multiple edges.
Each edge is assigned one of the following three tasks:
* Task A: Wireless power transfer task – This task aims to minimize power outages of IoT devices attributable to low battery levels <cit.>.
In each time slot, an AP wirelessly transfers power to a selected IoT device, with the charging rate being dependent on the device's channel condition.
If an IoT device is in an active state, its battery is discharged at a given rate.
The active state stochastically changes based on a Markov model.
The battery level of each IoT device is updated according to its active state and the amount of wireless power transferred from the AP.
The active state, battery level, and charging rate of each IoT device are used as state information.
The cost in each time slot is determined by the number of IoT devices whose battery level is below a threshold and those whose battery is empty.
The negative cost is taken as a reward.
* Task B: Data gathering task – This task aims to maximize the number of gathered data samples while minimizing dropped data samples in an IoT network <cit.>.
In each time slot, an AP selects an IoT device to transmit its data samples to the AP.
The transmission capacity of each IoT device to gather the data sample is time-varying, and the data samples randomly arrive at the buffer of each IoT device.
If the buffer overflows, the exceeded data samples are dropped.
The remaining buffer size and transmission capacity of each IoT device are used as state information.
The reward in each time slot is defined as the number of gathered data samples minus the number of dropped data samples in that time slot.
* Task C: Radio resource scheduling task – This task aims to minimize the transmission power at an AP while ensuring the minimum average data rate requirements of IoT devices <cit.>.
In each time slot, an AP selects an IoT device to serve and the corresponding transmission power.
The determined transmission power then impacts the achievable data rate for the IoT device, following the Shannon capacity.
The data rate depends on the IoT device's channel gain, which varies over time based on a channel model with a log-normal shadowing.
IoT devices also update their degree of dissatisfaction regarding the data rate requirements (DoD).
The channel gain and DoD of each IoT device are used as state information.
The reward for each time slot is calculated as the achieved data rate weighted by the DoD minus the transmission power.
We consider three distinct scenarios for each task to demonstrate the edge-agnostic feature.
Take the radio resource scheduling task as an example, where we examine three different scenarios with varying numbers of users and data rate requirements.
For a more comprehensive understanding of each task, please refer to Appendix <ref>.
Furthermore, to facilitate comparative analysis of each task's performance, we normalize the reward in the subsequent results.
We now present the basic simulation settings, which form a base setup and are consistently applied unless otherwise specified.
For each of the three tasks, we consider a total of twenty edges: seven for scenario A, seven for scenario B, and six for scenario C, which altogether constitute sixty edges in the IoT network.
We set the arrival rates for the edges with tasks A, B, and C to 0.7, 0.4, and 0.4, respectively.
The maximum network bandwidth, memory, and computing resources of the IoT networks for federated learning are all set to 21.
The DQN algorithm employs a fully-connected DNN with three hidden layers of 300 units across all tasks.
We set the learning rate, batch size, train interval, and target update interval to 10^-5, 32, 50, and 100, respectively.
Given that all tasks share the same DNN structure, the bandwidth, memory, and computing resources required at each edge are identical for all tasks.
Consequently, without loss of generality, we assign a unit value to these parameters across all tasks.
The number of time slots per round for federated learning is set to 250, and the simulation is run over 200 rounds.
To assess the performance of our collaborative policy learning framework, we compare it to both an ideal benchmark and a baseline that excludes FRL. The algorithms utilized in this comparison are defined as follows:
* Bench represents an ideal benchmark algorithm that is founded on our framework, but it neglects the maximum resource constraints for FRL as indicated in (<ref>), (<ref>), and (<ref>).
This is a theoretical model and cannot be practically achieved.
In each round, Bench always conducts FRL for all tasks, thereby delivering a performance upper bound.
* FL-PF represents our framework with a proportional fair task selection. It is implemented by setting the utility function of tasks to a logarithm function (i.e., V_l(x)=log(x), ∀ l∈ℒ), and the required minimum average number of participants for task l to 5 (i.e., X_l=5).
* FL-Greedy represents our framework with a greedy task selection.
In each round, tasks are selected as much as possible in a decreasing order of the number of available edges.
* FL-RR represents our framework that employs a round-robin task selection.
In each round, tasks are selected as much as possible in a round-robin way (i.e., in a circular order of tasks).
* No-FL represents an algorithm without FRL.
In this algorithm, each edge learns its policy independently and individually.
This is implemented by setting the task selection indicators for all tasks and rounds to 0 (i.e., q_l^(r)=0, ∀ l∈ℒ, ∀ r∈ℛ).
§.§ Participants of Collaborative Policy Learning
We first present the sum of the average numbers of participants (i.e., participating edges) for all tasks in Fig. <ref>.
Note that Bench and No-FL are excluded from the figure as Bench reaches the maximum number of participants without resource constraint, while No-FL consistently achieves zero participants.
As observed from the figure, FL-PF attains a greater total number of participants than FL-Greedy and FL-RR, while ensuring fairness among the tasks in terms of participant numbers.
Conversely, FL-Greedy and FL-RR lead to more edges participating in task A than tasks B and C.
This imbalance creates unfairness among the tasks and may result in tasks B and C not achieving enough performance improvement from collaborative policy learning.
These observations suggest that FL-PF selects tasks in a manner that promotes effective collaborative policy learning, considering the time-varying availability conditions of edges and limited resources.
We delve into a more detailed comparison of the performances of the different algorithms in the following subsections.
To illustrate the achievement of the minimum average number of participants, we depict the average number of participants for each task in Fig. <ref>.
As shown in Fig. <ref>, all collaborative policy learning algorithms (i.e., Bench, FL-PF, FL-Greedy, and FL-RR) successfully meet the minimum number of participants requirement.
Due to task A having the highest arrival rate, FL-Greedy excessively selects task A in nearly every round, which results in a participant count close to that of Bench.
However, as indicated in Figs. <ref> and <ref>, only FL-PF fulfills the minimum number of participants for tasks B and C.
FL-Greedy falls short of the minimum because of its skewed selection towards task A.
Conversely, while FL-RR selects tasks in a circularly fair manner, it does not take the number of participants into account, leading to fluctuating participant counts that depend on the arrival rate of each task.
From these figures, it is clear that FL-PF consistently meets the minimum number of participants across all tasks.
§.§ Rewards of Dynamic Scheduling Tasks
In Fig. <ref>, we provide the sum of the average rewards of all edges.
As observed from the figure, all collaborative policy learning algorithms exhibit superior performance compared to No-FL.
Notably, FL-PF outperforms FL-RR and FL-Greedy and closely matches the performance of Bench.
This evidently demonstrates that FL-PF selects tasks in a more effective manner for collaborative policy learning compared to FL-RR and FL-Greedy.
To delve deeper, we present the average reward of edges for each task in Fig. <ref>.
Fig. <ref> reveals that all collaborative policy learning algorithms yield similar rewards, significantly exceeding that of No-FL.
Interestingly, FL-Greedy secures an average reward almost identical to that of Bench.
In Figs. <ref> and <ref>, FL-PF surpasses both FL-RR and FL-Greedy, attaining a reward close to Bench.
While FL-Greedy only marginally outperforms No-FL, the other collaborative policy learning algorithms significantly surpass it.
Fig. <ref> also reflects the relationship between the performance of collaborative policy learning and the number of participants, as demonstrated in Fig. <ref>.
From the figures, it is clear that the number of participants and the reward follow similar trends.
For tasks B and C, FL-PF reaps larger rewards compared to FL-RR, while also achieving a higher number of participants.
Moreover, FL-Greedy secures rewards nearly equal to Bench for task A, which boasts a large number of participants.
Conversely, it achieves rewards comparable to No-FL for tasks B and C, which have very few participants.
These findings strongly suggest that fairness among tasks, in terms of the number of participants, should be considered to ensure performance improvement from collaborative policy learning across all tasks.
§.§ Effectiveness to Unseen Edge Arrivals
Here, we take into consideration newly arrived edges to demonstrate the efficacy of our collaborative policy learning framework when dealing with unforeseen edge arrivals.
For each task, we simulate the arrival of four edges after 25,000 time slots; two edges are associated with scenario D and two with scenario E, as defined in Table <ref> in Appendix <ref>.
It is important to note that these scenarios are novel to the IoT network, and as such, the policies for each task have no prior experience with them.
Fig. <ref> illustrates the moving average rewards of FL-PF and No-FL for each task, employing a 2,500-time slot average window for the moving average operation.
It should be noted that FL-PF is chosen as the representative algorithm among the collaborative policy learning algorithms for this comparison.
As seen in Figs. <ref>, <ref>, and <ref>, FL-PF does not experience reward degradation due to the arrival of new edges, in contrast to No-FL.
Specifically, our collaborative policy learning framework can immediately utilize the task policy located at the cloud server when a new edge with a task arrives.
On the other hand, No-FL necessitates learning a new policy for the newly arrived edge, leading to performance degradation during the initial learning phase.
These results distinctly underscore the effectiveness of our collaborative policy learning framework in managing dynamic edge arrivals in realistic IoT networks.
§.§ Impact of Number of Edges
We explore the impact of the number of edges on our collaborative policy learning framework.
To this end, we show the total average rewards of FL-PF and No-FL with varying numbers of edges for each task in Fig. <ref>.
We adjust the number of edges from 10 to 30.
As federated learning involving a larger number of edges necessitates more resources, we proportionally set the maximum network bandwidth, memory, and computing resources relative to the basic setting.
In Fig. <ref>, we contrast the learning speeds of FL-PF and No-FL as a function of the number of edges.
It is clear from the figure that FL-PF learns significantly faster than No-FL, which attests to the efficacy of collaborative policy learning.
Moreover, FL-PF's learning speed increases as the number of edges grows.
This suggests that our proposed framework can more rapidly learn an edge-agnostic policy when there are more edges, by capitalizing on their collective experiences through collaborative policy learning.
Conversely, no discernable trend exists in No-FL's learning speed relative to the number of edges, as each edge in No-FL must rely solely on its own experience to learn a policy.
Fig. <ref> compares the total average rewards of FL-PF and No-FL at the conclusion of the simulation.
The total average reward of FL-PF increases as the number of edges grows, suggesting that a larger number of edges is beneficial for achieving greater rewards, due to the quicker learning speed.
However, no discernable trend is observed in the total average reward of No-FL in relation to the number of edges, given the absence of collaborative policy learning.
These results clearly demonstrate that our proposed collaborative policy learning framework can effectively leverage experiences from a larger number of edges.
§ CONCLUSION
In this paper, we proposed a collaborative policy learning framework for the dynamic scheduling tasks in IoT networks using FRL.
This framework effectively utilizes limited cloud resources while ensuring fair local policy aggregation across tasks.
To achieve this, we developed a task selection algorithm that maximizes the average number of participants (i.e., participating edges) in collaborative policy learning while satisfying the minimum number of edges required for each task.
We also investigated the convergence of collaborative policy learning based on this task selection algorithm.
A key enabler of the proposed framework is the edge-agnostic policy structure that we proposed for dynamic scheduling tasks, which is applicable to collaborative learning.
Our experimental results demonstrate that the proposed framework offers significant performance improvements compared to algorithms without collaborative policy learning.
Notably, the collaborative policy learning approach, when combined with our proposed task selection algorithm, achieves the best performance.
Furthermore, our results clearly illustrate the framework's adaptability to newly arrived edges and its ability to accelerate the learning speed of the policy.
§ PROOF OF THEOREM 2
We can show the convergence rate of the DNN for task l similarly to the theoretical results in <cit.>. First, we derive the following theorem from Theorem 3.1 in <cit.>:
By choosing the learning rate, η_r, as η_r = 16E/ξ𝔼[∑_n∈(l)c_n^r]1/rE+γ, we can obtain
𝔼‖_l^r-_l^*‖^2≤G/rE+γ,
where E is the number of local epochs,
γ = max{32E(1+N_l)L/ξ𝔼[∑_n∈(l)c_n^rE],4E^2N_l/𝔼[∑_n∈(l)c_n^rE]},
G = max{γ^2𝔼‖_l^0-_l^*‖^2,( 16E/ξ𝔼[∑_n∈(l)c_n^rE])^2𝔼[B_r]/E},
B_r = 2(2+N_l)L∑_n∈(l)c_n^rEΓ_l^n+2EV^2∑_n∈(l)(c_n^r)^2/c_nE
+(4(1+N_l)L+ξ/2(1+N_l)L)E(E-1)V^2(∑_n∈(l)(2+N_l)c_n^r-2N_l)
+∑_n∈(l)(c_n^r)^2Eσ_n^2.
From the assumptions, we have 𝔼[B]=O(N_l^2𝔼[1/X_l^r|X_l^r≠0]+∑_n∈(l)(c_nσ_n)^2+Γ_l), and γ=O(N_l), where X_l^r denotes the number of participants for task l in round r.
Thus, G=O(N_l^2𝔼[1/X_l^r|X_l^r≠0]+∑_n∈(l)(c_nσ_n)^2+Γ_l).
Since these equations can be derived in similar steps to Corollary 4.0.1 in <cit.>, we omit the proofs here and refer to <cit.> for more details.
Given that 𝔼[1/X_l^r|X_l^r≠0]≤ 1, we can derive the theorem.
§ DETAILED DESCRIPTION AND SCENARIOS OF EACH TASK IN EXPERIMENTS
In this appendix, we provide a detailed description of each task used in the experimental result section.
We also provide three different scenarios of each task in Table <ref>.
Task A: Wireless Power Transfer Task –
This task aims to minimize the power outages of IoT devices caused by low battery levels <cit.>.
For simplicity of presentation, we assume, without loss of generality, that each time slot lasts for one second, and that an AP wirelessly transfers power to an IoT device in each time slot.
The charging rate of each IoT device through wireless power transfer in time slot t, denoted by P_m^ch,t, depends on its wireless channel condition at that time slot, which typically varies with time.
If an IoT device is in an active state, its battery is discharged at a given rate.
We denote the active state of IoT device m in time slot t by x_m^t, where 1 represents active and 0 represents inactive.
The active state probabilistically changes based on a Markov model.
The state transition probability of IoT device m from active to inactive is denoted by p_m^ai, and that from inactive to active is denoted by p_m^ia, both of which are set to 0.5 in this task.
The battery level of IoT device m in time slot t is denoted by b_m^t.
Then, the battery level of IoT device m is updated according to its active state and wireless power transfer from the AP as follows:
b_m^t+1=min[max[0,b_m^t-x_m^tP_m^dch+q_m^tP_m^ch,t],B_m],
where P_m^dch is the discharging rate of IoT device m in the active state, q_m^t is the scheduling indicator of IoT device m in time slot t, and B_m is the maximum battery level.
In this task, the battery level b_m^t, active state x_m^t, and charging rate P_m^ch,t of each IoT device in each time slot are used as state information.
The reward in each time slot is defined by the number of IoT devices whose battery levels are low (e.g., under 10 % of the maximum battery level) and that experience an outage as follows:
r^t=-∑_m∈[1{b_m^t ≤ B_m^low} + C_m1{b_m^t=0}],
where B_m^low is the threshold for the low battery state of IoT device m, and C_m is the cost parameter for the outage of IoT device m.
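To make the dynamics concrete, the following minimal Python sketch simulates one time slot of Task A; all constants (battery capacities, rates, costs, and the channel model behind P_m^ch,t) are illustrative placeholders rather than the settings used in the experiments, and the reward is evaluated on the updated battery levels in this sketch.

import numpy as np

rng = np.random.default_rng(0)
M = 4                                   # number of IoT devices
B_max = np.full(M, 100.0)               # maximum battery levels B_m
B_low = 0.1 * B_max                     # low-battery thresholds B_m^low
P_dch = np.full(M, 5.0)                 # discharging rates P_m^dch
C_out = np.full(M, 10.0)                # outage cost parameters C_m

def task_a_step(b, x, q, p_ai=0.5, p_ia=0.5):
    """One time slot: battery update, Markov active-state transition, reward.
    b -- battery levels b_m^t, x -- active states x_m^t (0/1),
    q -- scheduling indicators q_m^t (one device scheduled per slot)."""
    P_ch = rng.uniform(2.0, 20.0, size=M)                  # charging rates P_m^ch,t
    b_next = np.clip(b - x * P_dch + q * P_ch, 0.0, B_max)
    flip = rng.random(M) < np.where(x == 1, p_ai, p_ia)    # Markov state-transition model
    x_next = np.where(flip, 1 - x, x)
    reward = -np.sum((b_next <= B_low).astype(float) + C_out * (b_next == 0.0))
    return b_next, x_next, reward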
Task B: Data Gathering Task –
This task aims to maximize the number of gathered data samples while minimizing the dropped data samples in an IoT network <cit.>.
In each time slot, an IoT device is selected to transmit its data samples to the AP.
We denote, for each IoT device m in time slot t, the buffer size by b_m^t, the transmission capacity by c_m^t, and the number of arrived data samples by d_m^t.
The transmission capacity of IoT device m in each time slot is determined by applying a floor function to a number sampled from a Gaussian random variable with mean C_m and variance 9, and the number of arrived data samples at IoT device m in each time slot is sampled from a Poisson distribution with a mean of D_m, where D_m is the arrival rate of IoT device m.
Then, the buffer size of IoT device m is updated using b_m^t+1=min[max[0,b_m^t+d_m^t-q_m^tc_m^t],B_m], where q_m^t is the scheduling indicator of IoT device m in time slot t, and B_m is the maximum buffer size of IoT device m.
The remaining buffer size of IoT device m in time slot t is defined as b̅_m^t=B_m-b_m^t.
If the buffer overflows, the exceeded data samples are dropped, and the number of dropped data samples of IoT device m in time slot t is denoted by e_m^t.
The state information for the problem includes the remaining buffer size, b̅_m^t, and transmission capacity, c_m^t, of each IoT device in each time slot.
The reward in each time slot is defined as the difference between the number of gathered data samples and the number of dropped data samples in the time slot, which is computed as:
r^t=∑_m∈q_m^tmin[c_m^t,b_m^t+d_m^t]-e_m^t.
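The buffer dynamics and reward of this task can be sketched in the same style; again, the concrete capacities, arrival rates, and buffer sizes below are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)
M = 4                             # number of IoT devices
B_max = np.full(M, 50.0)          # maximum buffer sizes B_m
C_mean = np.full(M, 10.0)         # mean transmission capacities C_m
D_rate = np.full(M, 3.0)          # data arrival rates D_m

def task_b_step(buf, q):
    """One time slot: buffer update and reward (gathered minus dropped samples).
    buf -- buffer sizes b_m^t, q -- scheduling indicators q_m^t."""
    c = np.floor(rng.normal(C_mean, 3.0)).clip(min=0.0)    # floored Gaussian capacity (variance 9)
    d = rng.poisson(D_rate).astype(float)                  # Poisson arrivals with mean D_m
    gathered = q * np.minimum(c, buf + d)                  # samples actually transmitted
    raw = buf + d - q * c
    dropped = np.maximum(raw - B_max, 0.0)                 # overflowed samples e_m^t
    buf_next = np.clip(raw, 0.0, B_max)
    return buf_next, float(np.sum(gathered - dropped))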
Task C: Radio Resource Scheduling Task –
This task aims to minimize the transmission power at an AP while satisfying the minimum average data rate requirements of IoT devices <cit.>.
We refer the readers to <cit.> for more details of this task.
|
http://arxiv.org/abs/2307.01351v1
|
20230703205548
|
A geometric framework for discrete time port-Hamiltonian systems
|
[
"Karim Cherifi",
"Hannes Gernandt",
"Dorothea Hinsen",
"Volker Mehrmann"
] |
math.OC
|
[
"math.OC",
"math.DS"
] |
A geometric framework for discrete time port-Hamiltonian systems
Karim CherifiInstitut für Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany (). Hannes GernandtFraunhofer IEG, Fraunhofer Research Institution for Energy Infrastructures and Geothermal Systems IEG, Cottbus, Gulbener Straße 23, 03046 Cottbus, Germany ().
Dorothea Hinsen Institut für Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany (). Volker Mehrmann Institut für Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany ().
August 1, 2023
Port-Hamiltonian systems provide an energy-based formulation with a model class that is closed under structure preserving interconnection.
For continuous-time systems these interconnections are constructed by geometric objects called Dirac structures. In this paper, we derive this geometric formulation and the interconnection properties for scattering passive discrete-time port-Hamiltonian systems.
§ INTRODUCTION
For some major classes of continuous-time (linear) control systems, in particular port-Hamiltonion (pH) systems, it is well-established that staying close to the underlying physics requires a general geometric framework. This has lead to the definition of continuous-time port-Hamiltonian systems via Dirac, Lagrange and monotone structures, see e.g. <cit.>. In this paper this framework will be extended to discrete-time descriptor systems <cit.> of the form
Ex_k+1 =Ax_k+Bu_k, y_k=Cx_k+Du_k, k≥ 0, x_0=x^0,
with sequences of vectors x_k∈^n, u_k,y_k∈^m, k=0,1,2,…, E,A∈^n× n, B∈^n× m, C∈^m× n. Here ∈{ℝ,ℂ}, and we allow a possibly singular E.
Discrete-time descriptor systems of the form (<ref>) are called scattering passive if there exists a storage function V:^n→[0,∞) satisfying V(0)=0 and the dissipation inequality
V(Ex_k+1)-V(Ex_k)≤u_k^2-y_k^2 for all k≥ 0
for all combinations of u_k∈^m and x_0∈^n for which a solution of (<ref>) exists.
In <cit.> a definition of discrete-time scattering pH descriptor systems was presented for the case that the system is causal, i.e. the solution x_k at index k does not depend on future inputs. (This has been characterized in <cit.> by the Kronecker index of the pair (E,A) being at most one). In this case the system can be transformed to a reduced standard discrete-time system
x̃_k+1 =Ãx̃_k+ B̃u_k, ỹ_k=C̃x̃_k+ D̃u_k, k≥ 0, x̃_0=x̃^0,
with sequences of vectors x̃_k∈^r, u_k,ỹ_k∈^m, k=0,1,2,…, Ã∈^r× r, B̃∈^r× m, C̃∈^m× r,
together with an algebraic equation that uniquely specifies the remaining n-r components of x_k. A causal system of the form (<ref>) is called a discrete-time scattering pH system if for the reduced standard system (<ref>) (denoted by (I_r,Ã,B̃,C̃,D̃)), there exists a Hermitian matrix 𝒳=𝒳^H>0 such that for the norm weighted with diag(𝒳,I_m) the inequality
‖[ Ã B̃; C̃ D̃ ]‖_𝒳≤ 1
holds.
In (<ref>) the weight 𝒳∈^n× n defines the Hamiltonian of the reduced system as the quadratic function x̃↦1/2 x̃^H𝒳x̃, where x̃^H denotes the complex conjugate transpose of x̃.
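As a small numerical illustration (not part of the original derivation), the weighted-norm condition can be checked through the equivalent matrix inequality diag(𝒳,I_m) - T^H diag(𝒳,I_m) T ≥ 0 for T = [Ã B̃; C̃ D̃]; the matrices in the sketch below are random placeholders scaled to form a contraction.

import numpy as np

def is_scattering_ph(A, B, C, D, X, tol=1e-10):
    """Check the weighted-norm condition via the matrix inequality
    Xhat - T^H Xhat T >= 0 with Xhat = diag(X, I_m) and T = [[A, B], [C, D]]."""
    n, m = A.shape[0], B.shape[1]
    T = np.block([[A, B], [C, D]])
    Xhat = np.block([[X, np.zeros((n, m))],
                     [np.zeros((m, n)), np.eye(m)]])
    gap = Xhat - T.conj().T @ Xhat @ T
    return np.linalg.eigvalsh((gap + gap.conj().T) / 2).min() >= -tol

rng = np.random.default_rng(0)
n, m = 3, 2
T = rng.standard_normal((n + m, n + m))
T /= 1.01 * np.linalg.norm(T, 2)          # scale to a strict contraction
print(is_scattering_ph(T[:n, :n], T[:n, n:], T[n:, :n], T[n:, n:], np.eye(n)))  # True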
In this paper, we extend and define the geometric system formulation to scattering passive discrete-time pH systems of the form (<ref>). Once this is defined, we discuss the interconnection of multiple discrete-time pH systems in analogy to the continuous-time case, where it is known that the interconnection of multiple pH systems results again in a pH system.
As special case we also discuss norm preserving interconnections.
It is shown in <cit.> that interconnections using Dirac structures not only preserve the passivity of the interconnected system but also the pH system structure. With these results in mind, it is interesting to analyze whether this property holds for the
scattering pH formulation of discrete-time systems using contractive interconnections.
Here the difficulty is that we have considered only scattering pH formulations of causal descriptor systems. However, even the interconnection of two standard state-space discrete-time systems may lead to a non-causal discrete-time descriptor system. We therefore restrict ourselves to the case that in the interconnection causality is preserved.
§ PRELIMINARIES
The geometric formulation of discrete-time pH systems will be based on using subspaces ℳ of ^n×^n with the following additional structural properties.
A subspace ℳ⊆^n×^n is called contractive if ‖w‖≤‖v‖ holds for all (v,w)∈ℳ.
Furthermore, ℳ is called maximally contractive if ℳ is not a proper subspace of a contractive subspace of ^n×^n. A subspace ℳ⊆^n×^n is called norm preserving if ‖v‖=‖w‖ holds for all (v,w)∈ℳ.
Furthermore, ℳ is called maximally norm preserving if ℳ is not a proper subspace of a norm preserving subspace of ^n×^n. A subspace ℳ= [ M_1^H; M_2^H ], M_1,M_2∈^n× n, is called monotone if
M_2M_1^H+M_1M_2^H≥ 0,
and maximally monotone if ℳ is not a proper subspace of a monotone subspace of ^n×^n.
Recently, monotone subspaces were used in the continuous-time pH context <cit.>, as an extension of results given in <cit.>, where subspaces which satisfy (<ref>) with equality were considered. In this special case, ℳ is called a Dirac structure. Closely related to this are
Lagrangian subspaces ℒ = [ L_1; L_2 ], for some L_1,L_2∈^n× n which satisfy L_2^HL_1=L_1^HL_2.
In the following, we will focus on contractive subspaces to describe discrete-time pH systems which can be seen as a counterpart of monotone structures for the continuous-time case. More precisely, the relation between monotone and contractive subspaces is established by a subspace variant of the well-known Cayley transformation, see also <cit.>.
For a subspace ℳ of ^n×^n and for some α,β∈∖{0}, the Cayley transform is defined by
𝒞_α,β(ℳ) :={(v, w) | (α(v+w),- β(v-w))∈ℳ}.
With this, we can give the following characterization of contractive and monotone subspaces.
Let ℳ=[ M_1^H; M_2^H ] for some M_1,M_2∈^n× r be given. Then the following assertions hold.
(a) ℳ is contractive if and only if M_2M_2^H-M_1M_1^H≤ 0.
(b) For all α,β∈∖{0} one has 𝒞_α,β(ℳ)=[ β M_1^H+α M_2^H; β M_1^H-α M_2^H ].
(c) Let αβ>0. If ℳ is monotone then 𝒞_α,β(ℳ) is contractive. Conversely, if ℳ is contractive and if |α|/|β|≤ 1, then 𝒞_α,β(ℳ) is monotone.
(d) ℳ is maximally contractive if and only if dim ℳ=n.
(a) For all (v,w)∈ℳ, there exists some z∈^r such that (v,w)=(M_1^Hz,M_2^Hz) holds. Hence, ℳ is contractive if and only if for all z∈^r the inequality
z^HM_2M_2^Hz=‖M_2^Hz‖^2=‖w‖^2≤‖v‖^2=‖M_1^Hz‖^2=z^HM_1M_1^Hz
holds.
(b) By definition, we have (v,w)∈𝒞_α,β(ℳ) if and only if
α(v+w)=M_1^Hz, -β(v-w)=M_2^Hz
holds for some z∈^r. Hence
v=1/2(α^-1M_1^H+β^-1M_2^H)z, w=1/2(α^-1M_1^H-β^-1M_2^H)z
and after multiplication with 2αβ we obtain the formula in (b).
(c)
If ℳ is monotone, then αβ>0 yields
(β M_1^H-α M_2^H)^H(β M_1^H-α M_2^H)-(β M_1^H+α M_2^H)^H(β M_1^H+α M_2^H)
=-βα M_1M_2^H-αβ M_2M_1^H-βα M_1M_2^H-αβ M_2M_1^H
=-2αβ(M_1M_2^H+M_2M_1^H)
≤0.
Hence, (a) and (b) imply that 𝒞_α,β(ℳ) is contractive. Conversely, if ℳ is contractive, then αβ>0 and |α|/|β|≤ 1 imply
(β M_1^H+α M_2^H)^H(β M_1^H-α M_2^H)+(β M_1^H-α M_2^H)^H(β M_1^H+α M_2^H)
=2|β|^2(M_1M_1^H-|α|^2/|β|^2 M_2M_2^H)
≥ 2|β|^2(1-|α|^2/|β|^2)M_2M_2^H
≥ 0.
(d) Observe that the image representation of the Cayley transformed relation is given by
[ β M_1^H+α M_2^H; β M_1^H-α M_2^H ]=[ β I_n α I_n; β I_n -α I_n ][ M_1^H; M_2^H ].
Since the block matrix on the right-hand side of (<ref>) is invertible, the dimension of the subspace is preserved under the Cayley transformation. Furthermore, we know from (c) that the Cayley transform relates monotone and contractive subspaces. It is also known that a monotone subspace of ^n×^n is maximal, i.e. it is not contained in a monotone subspace of larger dimension, if and only if its dimension equals n, see <cit.>. Hence, maximally contractive subspaces are mapped to maximally monotone subspaces via the Cayley transformation, where the latter subspaces are n-dimensional. Thus assertion (d) follows.
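Assertions (a)-(c) can also be checked numerically; the following small sketch (an illustration, not part of the paper) builds a monotone subspace in image representation from a positive semidefinite part R and a skew-symmetric part J and verifies that its Cayley transform with α = β = 1 passes the contractivity test of (a). Real matrices are used so that the conjugate transpose is the plain transpose.

import numpy as np

rng = np.random.default_rng(1)
n = 4
M1 = np.eye(n)
R = rng.standard_normal((n, n)); R = R @ R.T          # R >= 0
J = rng.standard_normal((n, n)); J = J - J.T          # skew-symmetric
M2 = R - J            # M2 M1^T + M1 M2^T = 2 R >= 0, so the subspace is monotone

# image representation of the Cayley transform for alpha = beta = 1 (assertion (b))
W1 = M1 + M2
W2 = M1 - M2

# contractivity test of assertion (a): W2 W2^T - W1 W1^T <= 0
gap = W2 @ W2.T - W1 @ W1.T
print(np.linalg.eigvalsh(gap).max() <= 1e-10)         # True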
In this section we have introduced the concept of contractive and monotone subspaces and their coordinate representations. In the next section we introduce the geometric formulation of discrete-time port-Hamiltonian systems.
§ GEOMETRIC FORMULATION OF DISCRETE-TIME PORT-HAMILTONIAN SYSTEMS
In this section, we introduce a geometric coordinate-free formulation of discrete-time port-Hamiltonian systems. As a motivation, we consider first the special case of pH systems without input and output variables in continuous time. The case of continuous-time dissipative Hamiltonian (dH) descriptor systems
ddt Ex(t)=(J-R)Qx(t), J=-J^H, R=R^H≥ 0, Q^HE=E^HQ
was investigated in <cit.>. Based on <cit.>, it was shown in <cit.> that a geometric formulation of (<ref>) is given by
(z(t),-ż(t))∈ℳ^-1ℒ={(z,w) | (w,v)∈ℳ, (z,v)∈ℒ},
where z:[0,∞)→^n is a continuously differentiable function, ℳ is a monotone subspace and ℒ is a Lagrange subspace.
An immediate approach to obtain a discrete-time formulation is to integrate in (<ref>) using the trapezoidal rule.
This leads to the following geometric formulation of a discrete-time scattering dH system
(h/2(z_k+z_k+1),-(z_k+1-z_k))∈ℳ^-1ℒ ⟺ (z_k,z_k+1)∈𝒞_h/2,1(ℳ^-1ℒ).
If we consider for simplicity ℒ={(x,x) | x∈^n}, then ℳ^-1ℒ is a monotone subspace, and, invoking Proposition <ref>, we find that the subspace 𝒞_h/2,1(ℳ^-1ℒ) in (<ref>) is contractive.
After this motivation, we give a geometric definition based on the analogy to continuous-time pH descriptor systems in <cit.>.
A geometric representation of a pH system with state space ^n and external dimension m is given by a pair (𝒩, 𝒞) consisting of
* a maximal norm-preserving subspace 𝒩⊆^n+r+m×^n+r+m,
* a maximal contractive subspace 𝒞⊆^r×^r.
By a solution of the discrete-time pH system (𝒩, 𝒞) we understand an input-state-output trajectory ((u_k)_k≥ 0,(x_k)_k≥ 0, (y_k)_k≥ 0) for which there exist sequences (f_k,R)_k≥ 0 and (e_k,R)_k≥ 0, and such that for all k ≥ 0 we have
(x_k+1, f_k,R, y_k, x_k, e_k,R, u_k)∈𝒩,
(f_k,R, e_k,R) ∈𝒞.
The solutions of a geometric discrete-time pH system satisfy the following power balance equation
x_k+1^2+f_k,R^2+y_k^2=x_k^2+e_k,R^2 +u_k^2, k≥ 0.
Rearranging terms and employing the contractivity implies that
x_k+1^2-x_k^2≤u_k^2-y_k^2, k≥ 0,
which shows that the geometric definition of discrete-time scattering pH systems leads to scattering passive systems.
The geometric definition introduced in <cit.> also includes an additional Lagrange structure to generalize the concept of the Hamiltonian. An extension of Definition <ref> in this direction is rather straightforward, but left for future work. Here one introduces Lagrangian effort variables e_k,L which must fulfill (x_k+1,e_k,L)∈𝒞_h/2,1(ℒ) for some h>0, and one has to replace x_k in the left-hand side expression in (<ref>) by e_k,L.
§ CONTRACTIVE INTERCONNECTION OF SCATTERING PASSIVE SYSTEMS
It is well known that the loss-less interconnection of continuous-time pH systems using Dirac subspaces preserves the pH system structure <cit.>. In this section, we present an analogous interconnection result for discrete-time pH systems. A related approach describing the interconnection of Dirac subspaces of scattering passive systems was given in <cit.> and it is based on the use of the Redheffer star product and wave variable representations of effort and flow variables.
To describe the system interconnection, we restrict ourselves to the interconnection of two scattering passive systems given by
E_1x_1,k+1 =A_1x_1,k+B_1 u_1,k, y_1,k =C_1x_1,k+D_1u_1,k,
E_2x_2,k+1 =A_2x_2,k+B_2 u_2,k y_2,k =C_2x_2,k+D_2u_2,k
for coefficient matrices of appropriate sizes and where the inputs and outputs of the systems are partitioned as
u_i,k=[ u_i,k^1; u_i,k^2 ], y_i,k=[ y_i,k^1; y_i,k^2 ], B_i=[B_i^1,B_i^2], C_i=[ C_i^1; C_i^2 ], D_i=[ D_i^11 D_i^12; D_i^21 D_i^22 ], i=1,2,
where the components u_i,k^1 and y_i,k^1 are available for coupling.
In the following we consider a contractive interconnection of the scattering passive systems (<ref>) and (<ref>) given by
(u_1,k^1,u_2,k^1,y_1,k^1,y_2,k^1)∈, [ y_1,k^1; y_2,k^1 ]≤[ u_1,k^1; u_2,k^1 ], k≥ 0.
A special case of such an interconnection (<ref>) is given by
y_1,k^1=u_2,k^1, u_1,k^1=y_2,k^1
and was used in <cit.> for the interconnection of Dirac structures. It is closely related to the Redheffer star product which was studied in <cit.>.
Then, we have the following main result on structure-preserving interconnection. To show that the scattering pH structure is preserved, we restrict ourselves to the case E_1=E_2=I_n due to space limitations. However, the result can be generalized to descriptor systems having index at most one.
Consider scattering passive systems (<ref>) and (<ref>), i.e. (<ref>) holds, together with an interconnection via (<ref>). Then the following holds:
(a) The interconnected system given by (<ref>), (<ref>) and (<ref>) is scattering passive.
(b) If the systems (<ref>) and (<ref>) are discrete-time scattering pH, i.e. (<ref>) holds, E_1=E_2=Id holds, and I-D_1^11D_2^11 is invertible, then the interconnected system (<ref>) is equivalent to a discrete-time pH system.
(c) Let (<ref>) and (<ref>) be scattering pH, i.e. (<ref>) holds for some positive definite 𝒳_1,𝒳_2>0, and fulfill E_1=E_2=Id. If u_i,k=u_i,k^1 and y_i,k=y_i,k^1 holds for i=1,2 and all k≥ 0, then the interconnection (<ref>) leads to a contractive closed loop system of the form
[ x_1,k+1; x_2,k+1 ]=Â[ x_1,k; x_2,k ], ‖Â z‖_𝒳̂≤‖z‖_𝒳̂, where ‖z‖_𝒳̂^2:=z^H[ 𝒳_1 0; 0 𝒳_2 ]z.
If we consider a kernel representation =[M_1^1,M_1^2,M_2^1,M_2^2] for some matrices of appropriate sizes, then the interconnected system can be written as
𝐀 =[ A_1 0 B_1^1 0 0 0; 0 A_2 0 B_2^1 0 0; C_1^1 0 D_1^11 0 -I 0; 0 C_2^1 0 D_2^11 0 -I; 0 0 M_1^1 M_1^2 M_2^1 M_2^2 ], 𝐁 =[ B_1^2 0; 0 B_2^2; D_1^12 0; 0 D_2^12; 0 0 ], 𝐱_k =[ x_1,k; x_2,k; u_1,k^1; u_2,k^1; y_1,k^1; y_2,k^1 ], 𝐄 =[ E_1 0 0; 0 E_2 0; 0 0 0 ]
𝐂 =[ C_1^2 0 D_1^21 0 0 0; 0 C_2^2 0 D_2^21 0 0 ], 𝐃 =[ D_1^22 0; 0 D_2^22 ], 𝐮_k =[ u_1,k^2; u_2,k^2 ] , 𝐲_k =[ y_1,k^2; y_2,k^2 ].
Since the systems (<ref>) and (<ref>) are assumed to be scattering passive, there exists storage functions V_1 and V_2 such that the following holds
V_i(E_ix_i,k+1)-V_i(E_ix_i,k)≤u_i,k^2-y_i,k^2, i=1,2.
Furthermore, by the contractive interconnection (<ref>) we have
y_1,k^1^2+y_2,k^1^2=[ y_1,k^1; y_2,k^1 ]^2≤[ u_1,k^1; u_2,k^1 ]^2=u_1,k^1^2+u_2,k^1^2.
Adding the inequalities (<ref>) and using 𝐕(𝐄𝐱_k):=V_1(E_1x_1,k)+V_2(E_2x_2,k) yields
𝐕(𝐄𝐱_k+1)-𝐕(𝐄𝐱_k)≤∑_i,j=1^2u_i,k^j^2-y_i,k^j^2 ≤u_1,k^2^2+u_2,k^2^2-y_1,k^2^2-y_2,k^2^2
=𝐮_k^2-𝐲_k^2,
which is the dissipation inequality for the interconnected system with respect to the scattering supply rate and the quadratic storage function 𝐕. Hence, the interconnected system is scattering passive, which proves (a).
We continue with the proof of (b) and consider the interconnection (<ref>) of the systems (<ref>) and (<ref>) which is given by
𝐀 =[ A_1 0 B_1^1 0; 0 A_2 0 B_2^1; C_1^1 0 D_1^11 -I; 0 C_2^1 -I D_2^11 ], 𝐁 =[ B_1^2 0; 0 B_2^2; D_1^12 0; 0 D_2^12 ], 𝐱_k =[ x_1,k; x_2,k; u_1,k^1; u_2,k^1 ], 𝐲_k=[ y_1,k^2; y_2,k^2 ],
𝐂 =[ C_1^2 0 D_1^21 0; 0 C_2^2 0 D_2^21 ], 𝐃 =[ D_1^22 0; 0 D_2^22 ], 𝐄 =[ E_1 0 0; 0 E_2 0; 0 0 0 ], 𝐮_k=[ u_1,k^2; u_2,k^2 ].
It was shown in <cit.> that the invertibility of I-D_1^11D_2^11 is equivalent to that of I-D_2^11D_1^11 and also to the following condition
ker[ D_1^11 -I; -I D_2^11 ]={0}.
Therefore, we can rewrite
[ u_1,k^1; u_2,k^1 ]=-[ D_1^11 -I; -I D_2^11 ]^-1[ C_1^1 0; 0 C_2^1 ][ x_1,k; x_2,k ].
Hence, the interconnected system is equivalent to
𝐀̂ :=[ A_1 0; 0 A_2 ]-[ B_1^1 0; 0 B_2^1 ][ D_1^11 -I; -I D_2^11 ]^-1[ C_1^1 0; 0 C_2^1 ]
, 𝐄̂: =[ E_1 0; 0 E_2 ],
𝐁̂ :=[ B_1^2 0; 0 B_2^2 ]-[ D_1^11 -I; -I D_2^11 ]^-1[ D_1^12 0; 0 D_2^12 ], 𝐱̂_k =[ x_1,k; x_2,k ],
𝐂̂ :=[ C_1^2 0; 0 C_2^2 ]-[ D_1^21 0; 0 D_2^21 ][ D_1^11 -I; -I D_2^11 ]^-1[ C_1^1 0; 0 C_2^1 ], 𝐲̂_k =[ y_1,k^2; y_2,k^2 ],
𝐃̂ :=[ D_1^22 0; 0 D_2^22 ]-[ D_1^21 0; 0 D_2^21 ][ D_1^11 -I; -I D_2^11 ]^-1[ D_1^12 0; 0 D_2^12 ], 𝐮̂_k =[ u_1,k^2; u_2,k^2 ].
Assertion (a) yields that the interconnected system is scattering passive and since (<ref>) is valid for each of the systems, the following holds
𝐀̂𝐱̂_k+𝐁̂𝐮̂_k_𝒳̂^2-𝐱̂_k_𝒳̂^2=𝐱̂_k+1_𝒳̂^2-𝐱̂_k_𝒳̂^2≤𝐮̂_k^2-𝐲̂_k^2=𝐮̂_k^2-𝐂̂𝐱̂_k+𝐃̂𝐮̂_k^2,
and therefore
[ 𝐀̂ 𝐁̂; 𝐂̂ 𝐃̂ ][ 𝐱̂_k; 𝐮̂_k ]_𝒳̂^2≤[ 𝐱̂_k; 𝐮̂_k ]_𝒳̂^2 or, equivalently, [ 𝐀̂ 𝐁̂; 𝐂̂ 𝐃̂ ]_𝒳̂≤ 1,
which means that the interconnected system (𝐀̂,𝐁̂,𝐂̂,𝐃̂) is scattering pH according to (<ref>).
We proceed with the proof of (c). In this case the scattering pH systems are given by
[ x_1,k+1; y_1,k^1 ]=[ A_1 B_1; C_1 D_1 ][ x_1,k; u_1,k^1 ], [ x_2,k+1; y_2,k^1 ]=[ A_2 B_2; C_2 D_2 ][ x_2,k; u_2,k^1 ],
and since ‖[ A_i B_i; C_i D_i ]‖_𝒳_i≤ 1 holds for i=1,2, we obtain the closed loop system
[ x_1, k+1; x_2,k+1 ]=[ A_1+B_1D_2(I-D_1D_2)^-1C_1 B_1C_2+B_1D_2(I-D_1D_2)^-1D_1C_2; B_2(I-D_1D_2)^-1C_1 A_2+B_2(I-D_1D_2)^-1D_1C_2 ][ x_1,k; x_2,k ].
Adding the dissipation inequalities of the systems implies
x_1,k+1_𝒳_1^2+x_2,k+1_𝒳_2^2 ≤x_1,k_𝒳_1^2+u_1,k^2-y_1,k^2+x_2,k_𝒳_1^2+u_2,k^2-y_2,k^2
≤x_1,k_𝒳_1^2+x_2,k_𝒳_2^2,
which implies that the coefficient matrix of the closed loop system (<ref>) is a contraction.
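The closed-loop formula and the contraction property can also be illustrated numerically; in the following sketch (an illustration under the assumption 𝒳_1 = 𝒳_2 = I, so that the weighted norm reduces to the spectral norm) both subsystems are random strict contractions.

import numpy as np

rng = np.random.default_rng(2)

def random_contraction(n, m):
    T = rng.standard_normal((n + m, n + m))
    T /= 1.01 * np.linalg.norm(T, 2)       # strict contraction, so I - D1 D2 is invertible
    return T[:n, :n], T[:n, n:], T[n:, :n], T[n:, n:]

n, m = 3, 2
A1, B1, C1, D1 = random_contraction(n, m)
A2, B2, C2, D2 = random_contraction(n, m)

S = np.linalg.inv(np.eye(m) - D1 @ D2)     # (I - D1 D2)^{-1}
Ahat = np.block([
    [A1 + B1 @ D2 @ S @ C1, B1 @ C2 + B1 @ D2 @ S @ D1 @ C2],
    [B2 @ S @ C1,           A2 + B2 @ S @ D1 @ C2],
])
print(np.linalg.norm(Ahat, 2) <= 1.0)      # the closed loop is a contraction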
§ CONCLUSION
In this paper, we presented a geometric approach to scattering passive discrete-time pH systems which is based on contractive subspaces. We also discussed the interconnection of discrete-time pH systems and showed that the interconnection of two pH systems results again in a pH system. Finally, we took a closer look at the Redheffer interconnection and showed that this interconnection also results in a pH system.
§ ACKNOWLEDGEMENT
The work of K. Cherifi has been supported by ProFIT (co-financed by the Europäischen Fonds für regionale Entwicklung (EFRE)) within the WvSC project: EA 2.0 - Elektrische Antriebstechnik (project No. 10167552).
The work of H. Gernandt has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the Priority Programme 1984 “Hybrid and multimodal energy systems” (Project No. 361092219) and the Wenner-Gren Foundation. The work of D. Hinsen and V. Mehrmann has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) CRC 910 Control of self-organizing nonlinear systems: Theoretical methods and concepts of application: Project No. 163436311 and by Bundesministerium für Bildung und Forschung (BMBF) EKSSE: Energieeffiziente Koordination und Steuerung des Schienenverkehrs in Echtzeit (grant no. 05M22KTB).
|
http://arxiv.org/abs/2307.00938v1
|
20230703112425
|
Interpolation of Point Distributions for Digital Stippling
|
[
"Germán Arroyo",
"Domingo Martín",
"Tobias Isenberg"
] |
cs.GR
|
[
"cs.GR",
"97R60"
] |
arroyo@ugr.es
0000-0001-7229-5029
University of Granada
Spain
dmartin@ugr.es
0000-0002-4088-0554
University of Granada
Spain
tobias.isenberg@inria.fr
0000-0001-7953-8644
Université Paris-Saclay, CNRS, Inria, LISN
France
We present a new way to merge any two point distribution approaches using distance fields. Our new process allows us to produce digital stippling that fills areas with stipple dots without visual artifacts as well as includes clear linear features without fuzziness. Our merging thus benefits from past work that can optimize for either goal individually, yet typically by sacrificing the other. The new possibility of combining any two distributions using different distance field functions and their parameters also allows us to produce a vast range of stippling styles, which we demonstrate as well.
[500]Computing methodologies Non-photorealistic rendering
Interpolation of Point Distributions for Digital Stippling
Tobias Isenberg
August 1, 2023
§ INTRODUCTION
Digital stippling <cit.> is a non-photorealistic rendering technique <cit.> that emulates hand-made stippling by means of distributing stipple dots based on target images. The traditional art form has been and continues to be used for illustrations in several scientific domains such as biology, entomology, or archeology. For artists, while it seemingly only requires placing dots on paper using an ink pen, the traditional technique is an arduous and extremely repetitive task. It requires a lot of time to produce even relatively small drawings (e.g., A4) because the artist must place thousands of dots and has to correctly reproduce tone, shape, and texture. Ultimately, this illustration technique is thus increasingly being replaced by others due to its high cost.
Research in NPR, however, has led to numerous approaches for producing digital stippling without such long production times <cit.>, taking into account not only the actual dot distribution but also the image resolutions <cit.>, the shape of the dots <cit.>, and properties such as the introduced abstraction <cit.>. One of the main driving forces of digital stippling has been the stipple dot placement without visual artifacts, as early approaches based on Centroidal Voronoi Diagrams <cit.> lead to unintended chains of dots which stipple artists are trained to avoid <cit.>. While several approaches have since addressed this artifact issue (see the overview in 's [] survey), sometimes it is still important that the dots are, in fact, aligned to intentionally represent dedicated features. Consequently, some approaches (, <cit.>) focus specifically on placing dots in a structure-aware fashion.
The fundamental problem is now that any given point distribution technique usually either produces nice point distributions for filling areas of an image, or it produces adequate feature stippling. More generally, to produce a digital stippling one may want to combine any two dot distribution techniques that one chooses for a given visual goal. In this article we thus describe a new approach to realize such a smooth interpolation between two point distributions such that their respective properties are maintained. For this purpose we consider stippling algorithms as discrete probabilistic functions, so we can create a new algorithm based on the interpolated function of their distributions. Our approach to combine two distributions is general enough such that it not only allows us to solve our particular problem. Instead, it also allows us to produce a great variety of results by only selecting different distributions and/or adjusting the distance field, which we see as a novel way of controlling the stylization for digital stippling.
§ PREVIOUS WORK
A complete survey of digital stippling is beyond the scope of this section and has recently been presented elsewhere <cit.>, so here we only briefly mention some main related work and those techniques that specifically apply to our approach. The reproduction of stippling using computers began in the late 1990s. Researchers initially tried to place dots appropriately to represent a picture. These initial solutions were based on the concept of Centroidal Voronoi Diagrams (also called Lloyd's method <cit.>); the main idea is to equalize the energy between Voronoi regions by moving the Voronoi sites to the cells' centroids. The first implementation was a painting system with brushes to modify the dot distribution <cit.>. <cit.> later extended this approach by using the tone of the original image as a weight to control the distance metric.
Original CVDs and their extensions have the mentioned problem of producing patterns due to the regularity of the distribution, which must be avoided for realistic reproduction <cit.>. Several solutions have been investigated to reduce or solve this problem. <cit.>, for instance, used autonomous agents, the Renderbots, to place stipples with some random processing. Another idea is to use distributions with blue noise properties. <cit.>, for example, presented a method based on Wang tiles and Poisson disk sampling, which has blue noise characteristics and thus avoids patterns and, moreover, allows them to smoothly and indefinitely zoom into stipple images. <cit.> presented a capacity-constrained way to create point distributions based on Lloyd's method that also possess blue noise characteristics. A generalization of the original CVD but using an optimization of the energy was described by <cit.>. Another blue noise approach that is also fast was presented by <cit.>. Finally, <cit.> used a Monte Carlo technique for sampling an adaptive probability density function.
Another way to avoid visible patterns is to use example-based techniques and hand-made stippling as input. For example, <cit.> synthesized different hatching and stippling styles using techniques from texture synthesis. Later, <cit.> synthesized dot distributions using gray-level co-occurrence matrices (GLCM) for a statistical analysis of hand-made samples. Finally, <cit.> showed that the use of resolution-dependent halftoning for dot distributions combined with scanned dots can also achieve satisfactory results.
In contrast to the approaches discussed thus far that aim to avoid visual details, others attempt to use stipples that line up to form linear structures on purpose. One form is called hedcut stippling <cit.> and is somewhat related to hatching. More related to our own work, however, are structure-aware techniques that aim to highlight particular sparse features in an image using aligned stipples, while reproducing the remainder of the image without such artifacts. For example, <cit.> used a graph whose edge weights recorded the magnitude of the local image gradient. He then used a version of Dijkstra's algorithm to determine stipple dot positions, particularly emphasizing edges in an image. Later, <cit.> based another structure-aware stippling technique on an approach for contrast-aware halftoning <cit.>.
None of the techniques are ideal, however. For example, while <cit.> are able to nicely emphasize linear structures, the regularity of their dot placement is sometimes noticeable, in particular, in dark regions. In contrast, example-based approach such as those by <cit.> and <cit.> can produce convincing distributions for relatively dark or relatively light regions, but fail to properly reproduce linear structures.
§ OVERVIEW
Our visual goal is that of hand-made stippling. Here, artists avoid any visible artifact in dot placement for most of the stippled regions <cit.>. To represent features, edges, or borders, however, they align the dots or even use proper lines instead of chained dots (, see the examples in 's [] book or in Figure 1 in 's [] survey). To be able to achieve a similar effect with digital stippling we need to address the mentioned problem of recent related work which either focuses on artifact avoidance or on feature representation. In this paper we thus demonstrate how we can interpolate between any given two distributions to be able to use them for those parts where they shine, and use others where they would not be ideal. We base our approach on a dedicated distance field that we use to determine which of the two distributions to use. Usually we derive this field using some edge detector, but other fields are possible. With this approach we can smoothly transition between any two given point distributions, without introducing new patterns to the stippling.
We show a graphical overview of our approach in
<ref>. In general terms, our method takes two stippling
algorithms and a distance field such as one that is computed from an edge detector applied to the source image.
We then construct a stochastic function from this information to get the Probability Density Functions (PDFs) of both algorithms. We then interpolate the two PDFs according to the distance field and render the result to obtain the final illustration.
Next, we first explain the general method to combine any two point distributions, before we demonstrate how this approach allows us to solve the artifact issue for stippling.
§ DISTRIBUTIONS AND INTERPOLATION FUNCTIONS
Any stippling algorithm can be considered as a function that
takes a source image (as well as certain parameters) as input and then
determines dot positions on a virtual paper (ignoring the dot aspects
<cit.> for now). We can consider the
process of placing the dots as a random function that places a dot at a
particular location. In the case of those stippling algorithms that use
random numbers (, <cit.>), the probability of placing the dots is given by the techniques' respective Probability Density Functions (PDFs) directly. For the rest of deterministic stippling and point distribution algorithms, we can simulate a corresponding stochastic version simply by sampling the stippling result according to the density of stippling dots per area and deriving a respective PDF.
A PDF is a function that makes use of a random variable X and has an associated density
function f_X, where f_X is a non-negative Lebesgue-integrable function. It can thus be defined as
Pr[a≤ X≤ b]=∫ _a^bf_X(x) dx .
The 2D nature of the virtual stippling canvas makes it convenient to redefine the
PDF in terms of area instead of some interval as
Pr[X ∈ A]=∫ _Af_X(x) dx .
In our case, X represents some random stippling dot so, intuitively, one can think of f_X(x) dx as being the probability of some random stippling dot X falling within some infinitesimal area A.
The linear interpolation of two different PDFs (f and g) for the same area A can be written <cit.> as
I_X(α) = (1-α)f_X + α g_X, with X ∈ A
for some scalar value α∈ [0,1]. We thus have
Pr[X ∈ A] = (1-α) ∫_A f_X(x) dx + α∫_A g_X(x) dx , with X ∈ A .
For those stippling algorithms that directly use continuous, integrable functions it is enough to compute the interpolation of the respective two PDFs to merge both algorithms. But even those algorithms that use PDFs for the dot placement typically redefine the PDF in terms of time: the PDF is constructed while the dots are placed. Therefore, such algorithms are terribly slow due to their need to re-adjust the PDF in each iteration, changing the probabilities every time a new dot is placed on the canvas (normally by reducing the probability in the area where the dot was placed).
For this reason, and for these particular cases, the interpolation
between the PDFs cannot be performed until all dots have been placed and
the PDF is finalized. An additional downside of this approach
is that it increases the algorithm's computational complexity.
Moreover, it is impossible to combine such techniques with approaches
that lack a well-defined PDF such as Voronoi-based algorithms. To
address this problem, we propose to use a discrete probability function
(DPF) instead of a regular PDF, as we detail next.
§.§ Discrete Probability Functions
One possibility of converting the PDF into a DPF is the discretization of the canvas, which would give us a random variable X that specifies cells of the resulting grid instead of the continuous positions in the 2D space of the stippling canvas.
We can go a step further, however, and directly consider the grid cells to contain the respective probability value of the cell receiving the next stipple dot or not.
This representation has the advantage that we can model any purely deterministic stippling algorithm, guiding the probabilities for each cell of the grid as if it was a DPF. Moreover, with a sufficiently fine resolution of the grid, for deterministic algorithms we produce virtually the same result as before the discretization.
Let us assume such a sufficiently fine-grained grid placed over the stippled image, with N stipple points to be placed. We then mark any cell that has no stipple point location in it as a white cell (W), and any cell that has one or more stipple point locations in it as a black cell (B). White cells have no probability to be stippled, so we assign them a probability of 0%. Black cells (their total number is M with M ≤ N) have some probability greater than 0% of being stippled, so they get a non-zero probability. With the probability of a cell Pr[B_i] we refer to the chance that the next stipple dot that is placed is assigned to the given cell. We can thus state that ∑_i=1^M Pr[B_i] = 1 because the next stipple has a 100% probability to be placed into one of the remaining black cells. We can derive the real initial probability of each black cell by sampling the PDF for non-deterministic algorithms, by using some heuristic (, considering the amount of dots per cell), or simply by assigning the same probability to all black cells (Pr[B_i] = 1/M).
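A small sketch of this construction (the function name, grid-coordinate convention, and the uniform-probability choice are our own assumptions): given the dot positions of any stippling result, already expressed in grid coordinates, we can obtain a DPF grid as follows.

import numpy as np

def dpf_from_dots(dots, shape):
    """Build a DPF grid from (row, col) stipple dot positions.
    Cells containing at least one dot become black cells; every black
    cell receives the same probability 1/M (a per-cell dot count could
    be used instead as a heuristic)."""
    counts = np.zeros(shape)
    for r, c in dots:
        counts[int(r), int(c)] += 1
    black = counts > 0
    dpf = np.zeros(shape)
    dpf[black] = 1.0 / black.sum()
    return dpf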
This way, a simple stochastic algorithm gives us a dot distribution virtually identical to that of the original algorithm as follows:
* Cast a single stipple dot to the grid. Randomly determine one black cell into which it is placed, considering the probabilities of each black cell.
* For the chosen cell, reduce the probability of this black cell getting future stipples by subtracting the probability of the current stipple 1/N', N' being the number of remaining stipples to be placed including the current one, from the cell, and distribute this probability equally to the rest of the remaining black cells. This means that these remaining cells now have a higher probability to get the next stipple point than before.
* If a black cell reaches 0% probability it turns into a white cell. If it would become negative, only subtract the remaining positive amount in the previous step and distribute it.
* Repeat from step (1) until there is only one black cell (with probability equal to 1), the last stipple dot is placed into this cell and the algorithm ends.
This algorithm produces a stipple result equivalent and virtually identical to that of a deterministic technique—up to the chosen grid resolution and with the possibility that the number of dots in a given grid cell may differ occasionally due to the probabilistic approach. Moreover, if we had some way of computing the actual probability (as opposed to deriving it from an input stipple distribution), we would be able to maintain the exact same distribution as described by the PDF. Our goal, however, is not to run the algorithm. Instead, we want to derive a DPF for any given stippling technique, which is simply the grid with black and white cells and their associated discrete probabilities. We now use this DPF to be able to smoothly interpolate between two arbitrary dot distributions.
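A minimal Python sketch of this casting procedure (the function name and the flattened-grid representation are ours):

import numpy as np

def cast_stipples(prob, n_dots, rng=None):
    """Cast n_dots stipples onto a DPF grid following steps (1)-(4) above.
    prob -- 2D array of cell probabilities summing to 1 (white cells are 0).
    Returns an integer grid with the number of dots placed per cell."""
    rng = rng or np.random.default_rng()
    p = prob.astype(float).ravel().copy()
    counts = np.zeros(p.size, dtype=int)
    for k in range(n_dots):
        idx = rng.choice(p.size, p=p)            # step 1: pick a black cell
        counts[idx] += 1
        dec = min(1.0 / (n_dots - k), p[idx])    # steps 2-3: subtract at most the remaining probability
        p[idx] -= dec
        others = np.flatnonzero(p > 0)
        others = others[others != idx]
        if others.size:                          # step 2: redistribute equally among black cells
            p[others] += dec / others.size
        s = p.sum()
        if s == 0:                               # stop early if all probability mass is consumed
            break
        p /= s                                   # guard against rounding drift
    return counts.reshape(prob.shape)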
§.§ Linear Interpolation
Given two DPFs f and g corresponding to two different dot distributions, we can express the interpolation of both DPF as
interpolated_DPF(α) = f · (1 - α) + g ·α ; α∈ [0,1] .
We just derived the input DPFs as a set of cells with probabilities (denoted Ω as the set of all cells in the grid) and we assume that both DPFs use the same resolution and alignment. To be able to compute <ref> we thus need to do a pair-wise interpolation of the respective probabilities in the cells in both DPFs.
We do this interpolation also in a probabilistic way, such that the chance that the cell value from f has influence is 1 - α (let us call this event F) and the chance that the value from g takes effect is α (which we call event G). We model the event of a cell's probability taking effect using a uniform/rectangular distribution U_cell in the range [0, 1], which we can compare to 1 - α or to α and which we need to define for each cell in the grid. We are interested in dots being placed, so we want to know the union of events F and G, so we can specify the interpolation of a given cell as
Pr[cell ∈Ω, α∈ [0,1]] =
[(f_cell∩ (U_cell≥α)) ∪(g_cell∩ (U_cell < α))] .
In probability theory, given two different probability events A and B, and if A and B are independent, the probability of having the two events happen at the same time is Pr(A ∩ B) = Pr(A) · Pr(B). The probability that events A or B occur is the probability of the union of A and B and can be written as Pr(A ∪ B) = Pr(A) + Pr(B) - Pr(A ∩ B), with the last term Pr(A ∩ B) being 0 in our case.
In an example where f_cell has a 50% probability of placing a dot and g_cell has 10% and for an α of 0.1, we would thus get an interpolated probability Pr[cell, α =0.1] = 0.5 · (1 - 0.1) + 0.1 · 0.1 = 0.46.
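The per-cell event can be realised with one uniform sample per cell, as in the following sketch (ours); in expectation it reproduces the interpolated probability of the example above.

import numpy as np

def interpolate_dpf(f, g, alpha, rng=None):
    """Per-cell probabilistic interpolation of two DPF grids f and g:
    each cell keeps its value from f with probability 1 - alpha (event F)
    and takes the value from g with probability alpha (event G)."""
    rng = rng or np.random.default_rng()
    u = rng.random(f.shape)                  # one uniform draw U_cell per cell
    return np.where(u >= alpha, f, g)

# Expected value per cell: (1 - alpha) * f + alpha * g, e.g.
# f_cell = 0.5, g_cell = 0.1, alpha = 0.1  ->  0.5 * 0.9 + 0.1 * 0.1 = 0.46.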
We can thus apply the interpolation of the grid of probabilities by means of <ref>, leading to results as demonstrated in <ref>. <ref>–fig:iterpolation:d show the direct application for the interpolation of two DPFs each, with a very coarse grid size to demonstrate the effect. <ref> shows the interpolation between DPFs derived from a random distribution and a normal distribution, while <ref> interpolates from the DPF of the normal distribution to the DPF of a procedural figure. Notice that, due to the coarse grid size of the example, the single stipple of each cell is placed in the resulting regular grid pattern. Next, <ref>–fig:iterpolation:d show the DPF-based interpolation for equivalent distributions at a high resolution (sampled from the original PDFs). For comparison, <ref>–fig:iterpolation:f then demonstrate the PDF-based interpolation for the same distribution (, before their discretization to DPF) by following <ref> and then sampling it directly (, using a Monte Carlo algorithm <cit.>). As one can see, the resulting DPF-based interpolation and the PDF-based interpolation produce visually similar results, with the exception of the density of the coverage of the ring image that we explain below.
This last example shows that two PDFs can be interpolated. However, as argued at the beginning of this section we do not actually have a PDF for all input distributions that we could use, in particular when we want to represent an arbitrary input image such as in stippling. Here the best approach is to represent the input resolutions as DPFs and to interpolate between them, as shown for a toy example in <ref>: while the distribution on the right is derived from a PDF, the distribution on the left is derived from an input image and <ref> shows the DPF-based interpolation between them. The argument holds even more for the interpolation between two image-based distributions as shown in the realistic example in <ref>, in which we interpolate using DPFs between two different stochastic algorithms (a halftoning algorithm on the left and a weighted Voronoi algorithm <cit.> on the right).
One important property of the DPF-based interpolation, in contrast to the use of PDFs, is that it is computed in terms of cells, not in terms of samples. This means that, while in the case of the sampling of a PDF the number of samples (or stipple points) can remain the same during the interpolation, in DPF-based interpolation we cannot guarantee that the number of samples remains constant during the interpolation process. We cannot even guarantee a constant sample count if the two algorithms to be interpolated each have the same amount of samples to start. Such a situation is rare when dealing with real algorithms, however, so this constraint has few practical implications when applying the DPF-based interpolation to digital stippling. Moreover, PDF sampling cannot guarantee that completely black areas are fully covered by the samples, as shown in the last image on the right of <ref>: we instead get noisy images. The grid of the DPF, in contrast, forces us to always sample all the cells of the grid, which in turn allows us to get close to the original as shown in the last image on the right of <ref>.
§.§ DPF Interpolation based on a Distance Field
The problem with the linear interpolation of DPFs (or, similarly, PDFs) as discussed thus far is that the interpolation is applied globally for the entire canvas. We cannot yet control the interpolation depending on the original picture's composition to combine different distributions as we motivated in <ref>. We thus now define a new function Δ that controls the interpolation based on the 2D distance between a specific cell and a given subset of cells as
Δ(x,∂Ω ):= inf_y∈∂Ω d(x,y)/max(Ω) ; x ∈Ω .
Here, Ω represents the complete space, , the set of all the cells in the grid. This set can be considered to be a partially ordered set because we can order the cells based on some arbitrary function such as the distance to a given, selected cell x. We can thus use the infimum to get the smallest value of possible distances d(x,y) between x and any cell in a subset of cells ∂Ω.
The distance d(x,y) could use any distance metric; we use the simple Euclidean distance.
While ∂Ω could be equal to Ω such that it contains all its cells, in practice it will be just a part of the cells. The reason is that our goal is to construct a distance field from some mask that could guide the interpolation of the two distributions. We thus construct a mask by marking some of the cells of Ω as black cells, constructing the subset ∂Ω. With the Euclidean distance d(x,y), inf _y∈∂Ωd(x,y) gives us the distance from x to the closest black cell. We then normalize the result based on the maximum distance in Ω, namely max(Ω). Overall, Δ_cell thus calculates the normalized distance field from each cell x to the closest black cell in the grid (Ω).
<ref> shows the effect of three different masks (, different subsets ∂Ω⊂Ω) and their resulting distance fields on the interpolation between a uniform random distribution (g) and a deterministic algorithm drawing a 2D torus (f). In these examples the masks are independent from the used point distributions. Ultimately, however, we want to control the interpolation based on features in the original image, which also led to the two point distributions (in the case of digital stippling). We thus make the interpolated distribution I depend on Δ instead of α and rewrite <ref> as
I(Δ_cell) = f_cell· (1 - Δ_cell) + g_cell·Δ_cell .
In addition to this distance-based interpolation we also want to provide users with control for when f or g become dominant. We thus add a bias b ∈ [-1,1] to Δ_cell, so <ref> finally becomes
I(Δ_cell, b) = f_cell· (1 - Δ_cell-b) + g_cell· (Δ_cell + b) .
<ref> shows the effect of this bias b on the interpolation between a uniform random distribution (g) and a deterministic algorithm drawing a 2D torus (f). When b = 0 we get function f, whereas when b = 1 the returned values correspond to g.
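Both steps can be sketched compactly (an illustration; clipping the weight to [0,1] after adding the bias is our own assumption, as are all names): the distance field is obtained from a Euclidean distance transform of the mask and then drives the cell-wise mixing of the two DPF grids.

import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_controlled_interpolation(f, g, mask, bias=0.0):
    """Interpolate DPF grids f and g cell-wise, steered by a distance field.
    mask -- boolean grid, True on the guiding subset of cells (the mask's
    black cells); bias -- the bias b in [-1, 1]."""
    dist = distance_transform_edt(~mask)             # distance to the closest mask cell
    delta = dist / dist.max() if dist.max() > 0 else dist
    w = np.clip(delta + bias, 0.0, 1.0)              # per-cell weight of g
    return f * (1.0 - w) + g * w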
So three functions are involved in the process of the interpolation. On the one hand, we have the two DPFs that can be implemented as simple masks (for deterministic algorithms these masks are binary). On the other hand, we have the distance field mask. This is another binary mask that represents how the interpolation is performed. As noted before, one advantage of using DPFs instead of PDFs is that we can directly use the result of any stippling algorithm. We can thus take advantage of the parallelism of modern GPUs due to the cells in the masks being independent of each other. Another advantage is the simplicity of the computation of the distance field mask. The problem with the use of DPFs is the regularity of the resulting cells, but for that we can use some post-processing in the render stage, as we explain later.
§ ARTIFACT-CONTROLLED DIGITAL STIPPLING
As we had discussed, many existing techniques are successful in shading areas in images. While some early techniques based CVDs created unwanted linear features, most recent techniques produce artifact-free distributions. With our new distribution interpolation technique we could thus select any of these techniques and use it to shade larger regions, and combine it with another technique that can represent borders and features well. Next, however, we discuss specific choices for the algorithms and show how they efficiently facilitate artifact-controlled stippling.
§.§ Border Stippling
We begin with the stippling of borders, for which we could now use a dedicated structure-aware stippling technique (e.g., by <cit.> or <cit.>). With our new ability to combine any two dot distributions, however, we can use the easier early stippling techniques, which are simpler to implement. 's WCVD [], for instance, produces results with chained stipples even in shaded areas, but as long as we only use the feature lines it produces, for which we do want stipple chaining, this does not pose a problem.
The key insight for the stippling approach we describe next is that, using our previously described interpolation technique for dot distributions, we would first express both as DPFs, assuming that each resulting black pixel represents only one dot of the stippling result. For some of them, including 's WCVD [], this process involves taking an image as an input, doing some image processing such as contrast adjustment or edge detection, and then running the actual dot distribution process, before extracting the DPF from the generated dot distribution. Second, we would interpolate the two generated DPFs as described in <ref>, and then place stipple dots for the interpolated DPF as described in <ref>. It turns out, however, that we can optimize the conversion from input image to filtered image to analytic dot positions to DPF (which is also essentially an image) by using the filtered image, after some additional (image) processing, directly as the input to our interpolation. Next we thus explain this process for generating the edge-representing DPF for our hybrid stippling process, before we discuss the shading of areas and the mixing of both approaches using a distance field.
One of our main goals is that we want those stipple dots that represent single lines in an image or that are borders of a stippled region to be aligned. Our first step is thus to compute an adjusted input image that emphasizes these edges. For this purpose we can employ established edge detection filters (e.g., by <cit.>, by <cit.>, by <cit.>, the Laplacian of Gaussians—LoG <cit.>, or the Difference of Gaussians—DoG <cit.>), and past NPR work <cit.> has discussed that some of these filters need to be combined to better cover those edges typically used in hand-drawn line images. Nonetheless, these filters are not always able to cover all lines a human artist would draw <cit.>, and they also often require difficult fine-tuning of parameters to the chosen input image. In recent years, researchers have tried to address these problems by using deep neural networks (DNNs) (e.g., <cit.>). Although these solutions can detect edges successfully, they need many training images and expensive human input to annotate the valid edges of the images. In addition, they require powerful hardware to provide interactive response times. In this work we thus restrict ourselves to using traditional image filters as they are quite successful and because we combine them with stippled areas; yet they could be easily replaced by another technique in the future.
One essential prerequisite is to obtain edges of one-pixel thickness. We need this constraint to align stipple points later-on along the edge. For example, lines with a width of two pixels may end up being stippled in a zig-zag fashion as shown in <ref>, for an average stipple distance of √(2) pixels.
For these reasons we use 's [] filter for the examples in the remainder of our work to guide the placement of stipples along one-dimensional feature edge paths.
We thus now have a filtered input image for the first of the stippling techniques ('s [] WCVD stippling) that we use in our hybrid approach. As indicated above, similar to the binary image we ultimately need in our DPF-based interpolation of stippling distributions, this input image also essentially is a binary image, containing roughly the same information: black pixels that indicate locations for placing (much larger) stipple dots after the interpolation. The use of WCVD stippling just to then extract the DPF from the resulting distribution thus seems like an unnecessary detour, even with a fast GPU-based WCVD implementation.
In our approach for producing the edge DPF we thus do not actually run 's WCVD but use image processing techniques to generate a similar result, one that also emphasizes edges and ensures an even distribution of stipple dots along them. Nonetheless, the result of first stippling and then extracting the DPF is not identical to the edge image we used as the input: the DPF image would not contain a consecutive series of pixels for every edge, in contrast to the filtered edge image. We thus need to add a controllable amount of space in-between the aligned stipples, a gap that would normally be ensured by the WCVD stippling, which limits the overall number of stipple dots in the image and thus also the number of black pixels that represent an edge in the DPF. Yet we can use a similar ink-constraining process and apply it directly to the edge image—in a way a simplified 1D case of WCVD stippling to distribute the remaining pixels evenly along edges.
For this purpose we determine all edge pixels and treat them as paths. Starting with a given black pixel, we find all its black neighbors in the 8-neighborhood. Given the direction that we came from the previous pixel, we then select that black pixel among the current pixel's neighbors that best continues the path (or a random neighbor if we started a new path). We continue this stepping along pixel paths until a path cannot be continued anymore (, there are no more black neighbors).
Once this is completed, we start a new path tract by selecting a new random pixel that was not visited yet—until all pixels have been visited. To best emphasize long, outline strokes, we begin by searching the image from top-to-bottom and left-to-right until we find the first black pixel—our initial starting pixel. We also use the heuristic of using the left-most pixel (w.r.t. our incoming direction) in the 8-neighborhood as we select the next pixel
to ensure walking around shapes first. We also tested the addition of a backward search after the forward search to follow paths in both directions, but the results were visually equivalent to only doing a forward search, so we kept this simpler approach.
The process as we have described it so far simply yields all black pixels, and the result is identical to the edge buffer. To simulate the effect of WCVD stippling, we introduce sections of white pixels between the black ones. We thus only output a new pixel once we have covered a user-specified distance d along the path (counting Δ_d=1 for horizontal or vertical steps and Δ_d=√(2) for diagonal steps). To avoid regular patterns, we add noise to the distance. While we could have used a normally distributed noise offset around an average distance d_0, our tests have shown that we can produce visually satisfying results by adding noise to d_0 in the form of a random offset from [-d_n, d_n]. We demonstrate the effect of this noise being added to d_0 in <ref>, for different amounts of noise.
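The spacing step alone can be sketched as follows (an illustration; the 8-neighbour path extraction is assumed to have already produced an ordered list of (row, col) pixels).

import numpy as np

def space_along_path(path, d0, dn, rng=None):
    """Keep a pixel roughly every d0 steps along an ordered pixel path,
    with a random offset in [-dn, dn] added to the target distance."""
    rng = rng or np.random.default_rng()
    kept, travelled = [], 0.0
    target = d0 + rng.uniform(-dn, dn)
    prev = None
    for px in path:
        if prev is None:
            kept.append(px)                          # always keep the path start
        else:
            travelled += np.hypot(px[0] - prev[0], px[1] - prev[1])  # 1 or sqrt(2)
            if travelled >= target:
                kept.append(px)
                travelled = 0.0
                target = d0 + rng.uniform(-dn, dn)
        prev = px
    return kept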
One issue with this process is that the Canny filter, which we use, produces combinations of three black pixels (see <ref>) that resemble a zig-zag line or a small block in the final image instead of a straight line. We thus remove these cases prior to stepping along the pixel paths. We show a more realistic example in <ref>.
For an example edge input as the one in <ref>, this process then yields results as we show in <ref>. Compared to the WCVD stippling result for the same input shown in <ref>, we argue that the result is visually equivalent. Figures <ref> and <ref> show detail sections for this comparison, respectively.
§.§ Stippling shaded regions
For shaded areas, it is well established that CVD and WCVD distributions are counterproductive due to their regularity. As we discussed in <ref>, other approaches produce similar blue-noise distributions to avoid visual patterns and artifacts. To achieve our goal of realism in digital stippling, however, we base our solution on 's [] use of scanned grayscale dots and halftoning-based placement because their process considers the target image's physical size. The packing factor and the noise added to dot locations in this process produce good results for shaded areas (i.e., areas with a low gradient). Moreover, it is fast to compute because its dot distribution is based on a halftoned image <cit.>, which also has blue-noise properties, and its results resemble hand-made distributions according to the statistical analysis by <cit.>.
The main problem of this approach is that it uses a grid to distribute points and adds random noise to remove the regularity. While this leads to good results in shaded regions, it makes stippled edges appear fuzzy and unclear; with our distribution interpolation, however, we can now combine it with the edge stippling discussed before. We thus only need to derive the needed mixing function α to control the interpolation, as we describe next.
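As a rough illustration of the kind of grid-based placement with noise that we build on, the following Python sketch places candidate dots on a jittered regular grid and keeps them with a probability given by the local tone. The function shaded_area_distribution and its parameters are our own simplification; it does not reproduce the scanned-dot handling or the packing-factor computation of the original process.

```python
import numpy as np

def shaded_area_distribution(gray, spacing=4.0, jitter=1.5, rng=np.random.default_rng(2)):
    """Jittered-grid placement for shaded regions: candidate positions lie on
    a regular grid, are perturbed by noise, and are kept with a probability
    given by the local darkness of `gray` (values in [0, 1], 0 = black).
    Returns a binary DPF-like image."""
    H, W = gray.shape
    dpf = np.zeros((H, W), dtype=bool)
    for gy in np.arange(spacing / 2, H, spacing):
        for gx in np.arange(spacing / 2, W, spacing):
            y = int(np.clip(gy + rng.uniform(-jitter, jitter), 0, H - 1))
            x = int(np.clip(gx + rng.uniform(-jitter, jitter), 0, W - 1))
            darkness = 1.0 - gray[y, x]
            if rng.random() < darkness:   # darker tones yield denser stipples
                dpf[y, x] = True
    return dpf
```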
§.§ Mixing functions
Now that we have the two distributions in place, we need to define the specific function that controls the interpolation between them. This function needs to encode our initial goal of using the edge-emphasizing stippling from <ref> for the edges and the distribution mentioned in <ref> for representing filled areas. In <ref> we initially controlled the interpolation by means of a parameter α, but later improved upon this concept in <ref> to an interpolation based on a distance field Δ and with a bias b.
The distance field Δ intuitively appears to be well suited to achieve our goal, provided that we use the initial Canny-filtered edge image generated in <ref> as the distance mask ∂Ω. We may, however, want to achieve a number of different goals as follows:
* show both border and the area stipples at the same time, with different densities depending on the distance from the edge,
* leave some white space around the border to enhance it,
* produce an inversion effect in which edges are represented by the lack of stipples, etc.
Our current linear computation of the distance Δ is too inflexible to achieve these goals. We thus extend <ref>, which we used for the interpolation, with a general function Γ that maps the distance field from its domain to the same codomain but with a different shape, to fulfill the desired goal:
I(Δ_cell, b) = f_cell·(1 - Γ(Δ_cell)-b) + g_cell·(Γ(Δ_cell) + b) .
By default, Γ is defined as a linear function (Γ(Δ_cell) = Δ_cell). The possibilities of the Γ function can be shown with an example: if we want, for instance, to control the density of pixels so that borders and area stipples are combined at the same time and a band surrounds the borders, we could use the following Γ function, given parameters L_1 and L_2 with L_1, L_2 ∈ [0,1]:
Γ(Δ_cell) =
  0                               if Δ_cell ≤ L_1,
  (Δ_cell − L_1)/(L_2 − L_1)      if L_1 ≤ Δ_cell ≤ L_2,
  1                               if Δ_cell > L_2.
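A minimal numpy sketch of this banded Γ together with the biased interpolation from <ref> could look as follows; the names gamma_band and interpolate_dpf, the default values of L_1, L_2, and b, and the final stochastic sampling of the blended probabilities are our own illustrative choices.

```python
import numpy as np

def gamma_band(delta, L1=0.1, L2=0.4):
    """Piecewise-linear remapping of the normalized distance field that
    produces a band around the borders, as in the example above."""
    return np.clip((delta - L1) / (L2 - L1), 0.0, 1.0)

def interpolate_dpf(f, g, delta, b=0.0, gamma=gamma_band, rng=np.random.default_rng(3)):
    """Blend the edge distribution f and the area distribution g (both given
    as per-cell probabilities in [0, 1]) with the remapped distance field and
    bias b, then sample a binary result."""
    w = np.clip(gamma(delta) + b, 0.0, 1.0)
    prob = f * (1.0 - w) + g * w        # I(Delta_cell, b) from the equation above
    return rng.random(prob.shape) < prob
```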
§.§ Stipple Rendering
We can now render the resulting interpolated distribution, which is represented as a grid of black and white pixels in which the black pixels receive dots. Following our stochastic algorithm from <ref>, we then place dots into black-pixel regions until all black pixels have been treated. In contrast to recent approaches that freely re-arrange stipple dots based on initial positions (e.g., <cit.>), we thus have to place dots based on the black pixels and consider the dot sizes and shapes. For a realistic capture of dot distributions such as those in traditional stippling, in particular, the grid of the DPF has to be highly detailed, while a placed dot is ultimately much larger than a grid pixel.
As we based the stippling of shaded regions on 's [] halftoning-based dot placement, we also use it for the overall rendering of stipple dots. This allows us to take care of the target resolution of the output, the physical size of the stipple points, and the overlap of stipple dots. The original approach removed the regularity of the halftoning-based dot placement by adding noise, which worked well for areas but made the stippled linear features look fuzzy. In our new method, we resolve this problem by controlling the added noise individually for areas and linear features. In most cases, the noise added to linear features is zero, but this depends on the needs of the artist and the configuration of the pixels.
We use the same approach to control the range of dot sizes in a discrete or continuous way. The discrete option allows us to simulate the use of real technical pens, which have a limited set of tip sizes, and allows us to use scanned dots (i.e., textures). Both discrete and continuous control can be used to produce vector results in SVG format. The size of each individual dot can be chosen in a uniformly random way or by modulating it according to the tone of the corresponding pixel in the original image. This control provides us with a great level of flexibility in the appearance of the results, as we discuss and showcase next.
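The following small Python sketch illustrates this size selection; the pen-tip sizes and ranges are placeholder values, and the tone-based mapping is only one possible modulation.

```python
import numpy as np

def dot_size(tone, sizes=(0.3, 0.4, 0.5), continuous=False,
             s_min=0.25, s_max=0.55, rng=np.random.default_rng(4)):
    """Pick a stipple-dot size (e.g., a pen-tip diameter in mm) for a pixel
    with gray tone in [0, 1] (0 = black): discrete sizes mimic real technical
    pens, the continuous variant interpolates within [s_min, s_max], and
    darker tones yield larger dots."""
    darkness = 1.0 - tone
    if continuous:
        return s_min + darkness * (s_max - s_min)
    # map darkness to one of the available pen sizes, with a little randomness
    idx = int(round(darkness * (len(sizes) - 1) + rng.uniform(-0.5, 0.5)))
    return sizes[int(np.clip(idx, 0, len(sizes) - 1))]
```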
§ RESULTS
To be able to illustrate the spectrum of possibilities with our new interpolated point distributions (henceforth IPD), we first compare them with established digital stippling approaches and, second, discuss examples that showcase the effect of different parametrizations.
§.§ Comparison
For the comparison we selected three techniques from the literature that placed particular emphasis on either representing features or areas with digital stippling. First, we selected 's [] weighted centroidal Voronoi stippling (henceforth WVS) because it is able to capture linear features well, despite creating line artifacts in areas due to being based on centroidal Voronoi diagrams. Second, we compare with 's [] structure-preserving stippling (henceforth SPS) which placed particular emphasis on linear structures. Third, we compare with 's [] example-based grayscale stippling (henceforth EBG) because it managed to avoid linear structures in areas and came close to artistic examples, yet at the expense of only being able to produce fuzzy linear structures. These techniques differ from each other as follows:
Color: WVS and SPS are B&W only processes. EBG is a grayscale process that is inspired by the process of ink accumulation.
Dot shape: WVS and SPS use circles. EBG uses scanned dots, i.e., irregular shapes recorded as textures.
Dot size: WVS can use dots with a constant size, but can also modulate the dot size depending on the Voronoi region's area and the tone of the input image <cit.>. This modulation maximizes the perceived effect of each dot. SPS produces dots of constant size. EBG is based on scanning original hand-made stippling artwork and extracting individual dots, and thus uses random stipple sizes within a given range.
Output size: EBG is a raster method with the goal of producing realistic stipple output for a given physical output size and pixel resolution, using matching scans of dots from stipple artwork.
WVS and SPS are vector methods; their dot size can be controlled independently and continuously.
We selected four target images for our tests: City and Headlamp from 's [] benchmark set, which showcase straight and curved linear features. We also use Lenna and Secord's [] Plant because they have been used in the past for studying structure-preserving stippling. For a fair comparison between the resolution-dependent EBG and the resolution-independent WVS, SPS, and IPD techniques, we set a physical target equivalent to A5 output size at 300 ppi resolution. Based on these constraints, we used the original EBG code to produce stipple images, noting the stipple count. We then used 's [] original implementation of WVS to produce results with the same target stipple counts (Figures <ref> and <ref>), and with the help of 's original implementation we also obtained stippled versions for SPS (<ref>), again with the same target stipple count per image.[City: 171,172, Headlight: 137,327, Lenna: 1,134,741, and Secord's Plant: 149,327.] For IPD we used the combination of our WCVD-like edge stippling with EBG as described in <ref>.
With all four techniques, we created several versions of the four benchmark images shown in Figures <ref>–<ref>. To save space we do not show the whole A5 images (find them in the additional material in Figures <ref>–<ref>), but only use a representative section. Specifically, we show vector images with constant, small, black, and circular dots in <ref> to show the distributions without any dot overlap. In <ref> we still use black circles and vector output, but modulate the dot size by the tone color of the corresponding pixel in the original image such that darker tones produce bigger and brighter tones produce smaller dots. Finally, <ref> shows a raster output with random dot sizes and scanned dots, with the actual dot size range being derived for all images using the EBG approach.
By using small circular dots with constant sizes, <ref> allows us to analyze the spatial character of the different dot distributions. We can observe, e.g., that SPS uses a regular grid alignment of the dots somewhat reminiscent of halftoning, which is not visible in its actual results because there the larger dot size leads to overlapping stipples (Figures <ref> and <ref>). WVS, in contrast, is not bound by a grid but exhibits chain artifacts, even in dense regions where dot-size attenuation would normally hide this fact (<ref>). Nonetheless, the chain artifacts also play to WVS's advantage for stipples that should line up, such as the example in <ref>. EBG shows fewer chain artifacts in areas, but its edges and borders are not as clear due to the halftoning-based placement and the added randomness. EBG's halftoning grid itself is not visible because, for <ref>, we used the EBG process as if with large dots and then used the final dot positions without rounding them. Finally, we see that IPD combines the overall good distribution of EBG in areas with the better edges of WVS, evident, e.g., in <ref>.
In the next comparison, we added variable, more realistic dot sizes that are also modulated based on the source image's brightness <cit.> (<ref>). We computed the range of the actual dot sizes according to the considerations behind EBG, which aimed to simulate realistic dot sizes. This change of dot sizes naturally makes all images much darker. The modulated dot sizes also make some artifacts, such as the grid arrangement of SPS, less apparent, but they do not completely hide them. Rather, in SPS the modulation leads to completely black regions, while for WVS, EBG, and IPD we see more realistic stipple clusters forming. Note that this is not the same modulation as used by <cit.> (which we show in <ref>); that approach does not rely only on the local image gray level to decide the modulation and also uses a range of dot sizes for a given gray level. For the size of the line features in IPD we used a dedicated way to specify the dot size because we could not apply modulation—for linear features extracted by some edge detection mechanism there is no meaningful brightness to use. To avoid having to use a constant size, we instead chose the mean value of the dot-size range used for the areas, with an additional random scaling of ± 25% to provide some variability. In comparing the line features, we see that they come out well for WVS and SPS and that, in contrast to EBG, IPD also shows much better aligned features. Unfortunately, however, IPD has problems with single lines of dots, such as the highlight on the car headlamp, as can be seen in <ref>. The root cause of these double lines is our use of the Canny filter to control the interpolation. We discuss this effect and how to address it below in <ref>.
Finally, <ref> is based on the same modulated dot sizes as <ref>, but with realistic stipple dot textures applied. For this figure, we used the normal scale-dependent process for EBG, and for WVS, SPS, and IPD we placed dot textures at the vector positions of <ref>, with rounding of the resulting scale-dependent texels to avoid texture resampling. We enlarged the modulated dot sizes of <ref> by doubling their size to accommodate the fact that a texture of an irregularly shaped dot is not perceived as a black circle of the same size, but as a smaller one, due to the intensity change toward the center of the dot and the overall irregular shape.[We determined the scaling factor of 2× empirically. In fact, the sizes of the dots in <ref> are mandated by the EBG process. In practice we thus adjusted the dot sizes in <ref> to be half of the sizes of the textured EBG dots in <ref>.]
As we can see from the comparison of the images in <ref>, the irregular shapes of the real dots hide the regularity artifacts of SPS even further, yet the areas of SPS still appear very dark and the images appear to have less tonal variety than those of the other techniques. Moreover, we see that the interpolation of IPD allowed us to create edges of similar quality to those of SPS—both for isolated lines and for edges of areas. This result is visible, in particular, in <ref> vs. fig:results_random_ebg_scanned_dots:h vs. fig:results_random_ebg_scanned_dots:l. Some specific edges, however, are still best portrayed by WVS, such as the lines on top of the headlight of the car—neither SPS nor IPD capture them quite as well as WVS, yet both are about the same and much better than EBG. With respect to the area portrayal, WVS shows some unnatural clusters, and EBG and IPD both seem better than the other two—with IPD showing some more tonal variation than EBG. The reason for this effect is that IPD distributes its dots slightly differently and places more points along borders than EBG.
These examples show that our new IPD approach indeed combines high-quality stippled areas with a good reproduction of edges, addressing the problem of EBG's fuzzy lines. As noted before, however, the issue of isolated lines remains, as we discuss next. Moreover, our comparisons used two specific source techniques with a specific parametrization, while IPD can combine the results from any two techniques and with a flexible parametrization, as we also show next.
§.§ Alternative results
For other results we can drop the limitations that we applied in the previous comparison, such as the fixed number of stipple dots used in Figures <ref>–<ref>. These constraints led to certain artifacts for IPD, such as the duplicated rows of dots for isolated lines and the removal of dots around the isolated linear features of the plant example, which may or may not be desired. We thus first explore the use of alternative edge detection filters for controlling the interpolation as well as different parameterizations for the dots, depending on which of the two input distributions they arose from.
<ref> shows the result of using Canny, Difference of Gaussians (DoG), and Laplacian of Gaussian (LoG) filters, with additional filters applied to address the issue of double edges for single lines. To better highlight the properties of the generated edge stipples—and to produce an alternative visual result—in <ref> we use a constant small dot size of 2 for the stipples generated for areas, while showing edge stipples larger, with size 4. For this purpose we manually used a combination of Gaussian blur as well as contrast and brightness adjustments before applying the respective edge filters. We show the resulting edge images in Figures <ref>–<ref>. With this process we can remove the double edges even for the Canny filter, as shown in Figures <ref>–fig:filter_comparison:d. The DoG filter is able to create single edges from single lines even without the use of other filters, but has the problem that its edges are somewhat noisier and may lead to zig-zags in places, as can be seen in Figures <ref>–fig:filter_comparison:h. Finally, the results for LoG (Figures <ref>–fig:filter_comparison:l) show a slightly different portrayal of the edges, with some cleaner (e.g., Secord's Plant) and some noisier (e.g., Headlight) than DoG. Ultimately, it is thus up to the artist to select the most suitable edge detection process and to potentially apply additional filters to get the best results for a given visual goal, depending on the chosen input image.
We can also adjust the interpolation function to increase the distance around the borders such that area dots are less likely to be drawn, as shown in <ref>–fig:effect_results:d. This leads to a white border around edges, which may be desired in some cases. Alternatively, we can change the border filter to use, for example, a DoG and remove the block that tries to align the strokes, so that every black pixel is drawn as a dot, as shown in <ref>–fig:effect_results:h. This effect emphasizes the linear edges particularly well through dark clusters of stipple dots, as can be seen in the full-size versions in <ref> in the additional material. If the process results in large, essentially black regions, then a process to replace the stipples with solid polygons could be added <cit.>.
Finally, we can combine our process also with additional masks as shown in <ref> to adjust the way the distance function is computed in <ref>. For example, we can use the original edge image (for a chosen edge detection process) to generate offset lines similar to 's [] hedcut stippling, and then use this offset line image as the input to compute the distance function in <ref>. This process generates an effect as shown in <ref>. We can also use dedicated masks to emphasize specific parts of the image as shown in <ref>.
Here, we masked the region of the lamp in the image in white, left the rest black, and then used this image as the input to compute the distance Δ. By then adjusting b such that edge stipples are always drawn, we achieve the emphasis effect in <ref> because, in the white region of the mask (for the area distribution), we define Δ to be 0 and thus ensure that the region's area stipples are always shown.
Finally, with the same approach we can also use image-independent masks such as wavy lines to generate the background effects as we show in <ref>.
§ CONCLUSION
In summary, we described a method to smoothly interpolate between two point distributions. The fundamental contribution of our approach is that we do not interpolate the resulting images but the underlying dot distributions, by means of Probability Density Functions (PDFs). Our approach is general: by relying on the discrete version of the PDFs, it can handle any input distribution, including distributions that are not expressed as analytic PDFs but only as discrete sets of dots. Moreover, the discrete approach also allows us to take advantage of GPU-based implementations and raster image filters.
This new distribution interpolation allows us to combine traditional stippling techniques for different regions in an input image—those that excel in shaded areas with those that focus on structure preservation. We thus can now generate stippling images that express the best of these two worlds and smoothly interpolate both point distributions—without the artifacts that would arise from a purely pixel-based interpolation of final stipple image results. For this interpolation between the distributions we made use of a rasterized distance field, based on an edge image that expresses the features that one wants to preserve. Our overall process provides a lot of flexibility in that individual stages can be adjusted, to create additional visual effects and feature emphasis.
The main limitation of our method is its inherent use of discrete raster representations during the interpolation process. For example, this approach can produce “stair” effects, in particular for linear structures. Typical ways to address these problems are to increase the resolution of the image or to apply anti-aliasing, the former of which we can also use in our case. The discrete approach also leads to another limitation: our results are probabilistic, meaning that we only reproduce the input distribution up to the resolution of the interpolation raster of the discrete probability function. With a sufficiently fine interpolation raster, however, we can address this limitation because the resolution of the interpolation raster is largely independent of that of the input images, provided that we can compute the respective distances from edge features.
In the future we thus want to focus on ways to reduce rasterization artifacts and to better control the interpolation output. For instance, we plan to investigate the mentioned higher-resolution interpolation processing (e.g., using vectorization of the edge image), explore better edge detection for isolated linear features, and ultimately come up with a fully analytic way of interpolating. Independent of these efforts, we also want to explore ways in which our tool can best be given to illustrators, to provide them with detailed control yet without the need to deal with complex parameter settings.
We thank the author of WVS for his tool and the authors of SPS for their comparison images. This work was partially funded by the Spanish Ministry of Economy and Competitiveness with FEDER funds through project TIN 2017-85259-R.
§ ADDITIONAL MATERIAL
Below we provide additional material to support our discussion.
§.§ Original images
Figures <ref>–<ref> show the input to our algorithm comparison in <ref>. Figures <ref>–<ref> show the original results produced by the WVS and SPS techniques as they were produced by the original tools.
§.§ Large, complete results
In Figures <ref>–<ref> we show the complete versions of the images in Figures <ref>–<ref> that we created for our comparison in <ref>. In addition, Figures <ref>–<ref> show the complete versions of the examples in <ref>. Finally, <ref> shows larger versions of the images in <ref> so that details are better visible. Note that we were not able to include them at their intended target size of A5 as this would have taken too much space. Instead, we show them at 2/5 of A5 size.
§.§ Additional results worthy of note
A few additional results can further illustrate our approach. <ref> compares the approach from <ref> with a different filter setup that places more emphasis on the edges. We also show the possible control of emphasis in <ref> which compares the previous result in <ref> or <ref> with a version that uses less emphasis by showing more area stipples.
§.§ Demo program and video figure
We provide a demo implementation of our approach that we submit as additional material. We based it on StippleShop (https://github.com/dmperandres/StippleShop), which uses a block-based paradigm and already provides basic methods that are necessary to construct our solution. We added the necessary blocks to implement our new solution including the distance field, the interpolation, and the rendering. We illustrate the use of this demo for IPD in an additional video figure that serves as a tutorial. The tool can also be used to create images for other stippling approaches for a comparison.
entry_id: http://arxiv.org/abs/2307.03257v1
published: 20230706192613
title: Stable, entropy-consistent, and localized artificial-diffusivity method for capturing discontinuities
authors: Suhas S. Jain, Rahul Agrawal, Parviz Moin
primary_category: physics.flu-dyn
categories: physics.flu-dyn, physics.comp-ph
Corresponding author: sjsuresh@stanford.edu
All authors: Center for Turbulence Research, Stanford University, California, United States of America-94305
In this work, a localized artificial-viscosity/diffusivity method is proposed for accurately capturing discontinuities in compressible flows.
There have been numerous efforts to improve the artificial diffusivity formulation in the last two decades, through appropriate localization of the artificial bulk viscosity for capturing shocks. However, for capturing contact discontinuities, either a density or internal energy variable is used as a detector. An issue with this sensor is that it not only detects contact discontinuities, but also falsely detects the regions of shocks and vortical motions. Using this detector to add artificial mass/thermal diffusivity for capturing contact discontinuities is hence unnecessarily dissipative.
To overcome this issue, we propose a sensor similar to the Ducros sensor (for shocks) to detect contact discontinuities, and further localize artificial mass/thermal diffusivity for capturing contact discontinuities.
The proposed method contains coefficients that are less sensitive to the choice of the flow problem. This is achieved by improved localization of the artificial diffusivity in the present method. A discretely consistent dissipative flux formulation is presented and is coupled with a robust low-dissipative scheme, which eliminates the need for filtering the solution variables.
The proposed method also does not require filtering for the discontinuity detector/sensor functions, which is typically done to smear out the artificial fluid properties and obtain stable solutions. Hence, the challenges associated with extending the filtering procedure to unstructured grids are eliminated, thereby making the proposed method easily applicable to unstructured grids.
Finally, a straightforward extension of the proposed method to two-phase flows is also presented.
Keywords: compressible flows, turbulent flows, large-eddy simulation, artificial-viscosity method
§ INTRODUCTION
Simulations of high-Mach-number compressible flows require a stable and accurate discontinuity-capturing method. As the Reynolds number increases, the methods are expected to capture the turbulent structures near and far from the shock, the flow discontinuities such as shocks, contacts, and material interfaces, and the interactions between the two.
In incompressible flows, it is well known that low/non-dissipative schemes such as central schemes are required for accurate coarse-grained simulations such as the large-eddy simulation paradigm <cit.>. However, for capturing discontinuities, these standard central schemes introduce non-physical oscillations. Hence, the challenge associated with accurate simulations of compressible flows lies in the different needs for simulating turbulence and capturing discontinuities.
Methods for capturing shocks/discontinuities can be broadly classified into three classes:
(a) flux limiters and nonlinear schemes - these act at the scheme level, and therefore can be termed "implicit methods".
Both flux limiters <cit.> and nonlinear schemes, such as essentially non-oscillatory (ENO) <cit.>, weighted ENO (WENO) <cit.>, weighted nonlinear compact (WCN) <cit.> and targeted ENO (TENO) <cit.> schemes, have been extensively used for capturing shocks in compressible flows.
(b) artificial viscosity and artificial diffusivity - these act at the partial-differential equation (PDE) level, and therefore can be termed "explicit methods".
(c) hybrid methods - they are a combination of implicit methods with central schemes that act in different regions of the domain.
Some examples of hybrid approaches in the literature include the work of Ref. <cit.> where a modified Ducros sensor was used to switch between a central scheme and a WENO scheme, and Ref. <cit.> where a modified Ducros sensor was used to switch between a central scheme and a modified Steger-Warming scheme.
The comparison of various shock-capturing methods can be found in Ref. <cit.>.
In this work, a novel localized artificial-viscosity/diffusivity (AV/LAD) method for capturing shock and contact discontinuities is presented, aimed at capturing the aforementioned effects in compressible flows.
Using an analogy between the Lax-Friedrichs (LF) flux and the artificial-viscosity methods, a discrete and consistent LF-type dissipative flux formulation is proposed for the LAD method. The proposed method satisfies the discrete kinetic energy– and entropy-consistency conditions presented in Ref. <cit.>, and thus results in stable numerical simulations.
We also propose new discontinuity detectors/sensors that localize where the artificial diffusivity is acting, and show that the proposed method is suitable for both direct numerical simulation (DNS) and large-eddy simulation (LES) of compressible turbulent flows with discontinuities.
The sensors are designed such that the resulting method is less sensitive to the model coefficients (it does not require tuning of the coefficients depending on the problem being solved, which is otherwise typically required for LAD methods).
Finally, the extension of the proposed method to compressible two-phase flows is also presented.
In this work, we choose an artificial viscosity/diffusivity approach because of its advantages such as low cost, simplicity, and ease of implementation.
Further, this method can be used with any underlying scheme, and more importantly, with central schemes which is beneficial for simulating turbulent flows.
Moreover, it also turns off naturally in smooth regions of the flow where there are no discontinuities.
The artificial viscosity/diffusivity-based discontinuity-capturing methods can also be broadly categorized into two types of approaches. In one approach, artificial dissipation is directly added to all the equations without relating to the physical quantities. This is sometimes referred to as Laplacian viscosity. In the other approach, physical fluid properties, such as bulk viscosity, shear viscosity, and thermal conductivity, are augmented by artificial fluid properties to capture discontinuities. This is referred to as localized artificial viscosity/diffusivity (LAD). It is important to note that these two formulations can be expressed interchangeably. The major difference is the localization that is present in LAD, but typically absent in Laplacian viscosity approach.
The idea behind artificial viscosity was first introduced by Ref. <cit.>.
Ref. <cit.> used it to regularize solutions to the Euler equations.
To minimize the undesirable dissipation, Ref. <cit.> proposed a spectral vanishing viscosity (SVV) approach where only high-frequency components are damped.
Later, Ref. <cit.> introduced the high-wavenumber viscosity approach, an idea similar to the SVV approach, but the dissipation was added in the physical space.
To further minimize the dissipation, they introduced separate viscosities in subsequent work <cit.>: an artificial bulk viscosity (ABV) for shocks and an artificial shear viscosity (ASV) for turbulence.
Along similar lines, Ref. <cit.> introduced artificial thermal diffusivity (ATD) for capturing temperature gradients and artificial species diffusivity for capturing species gradients using an entropy indicator function; and extended the LAD formulation for multicomponent reacting flows.
Similarly, Ref. <cit.> introduced artificial thermal conductivity for contact discontinuities and artificial diffusivity for material interfaces.
More recently, the LAD formulation was also extended to curvilinear and anisotropic meshes by Ref. <cit.>.
The original LAD formulation used a strain rate-based indicator function in ABV to detect shocks. Ref. <cit.> replaced the strain rate-based indicator function with a negative dilatation-based indicator to localize ABV to shock regions and to turn it off in the regions of vortical motions.
Ref. <cit.> augmented the ABV with a Ducros-type sensor to further localize the ABV for regions of shock and to turn it off in the regions of weak compression.
Later, the LAD was coupled with a high-order flux reconstruction (FR) method <cit.>, a spectral difference method <cit.>, and a discontinuous Galerkin method <cit.> to extend the formulation to unstructured grids.
Later, Ref. <cit.> adopted the FR+LAD framework and proposed a more sophisticated filter for robust simulations on unstructured grids.
Ref. <cit.> proposed a LAD formulation where the artificial fluid properties are independently applied in each direction to avoid over-dissipation of discontinuities and numerical stiffness for high-aspect-ratio grids.
Ref. <cit.> proposed artificial mass diffusivity (AMD) as an alternative to ATD for multicomponent flows.
The LAD has also been used with adaptive grid refinement, where the artificial diffusivity was used as an indicator for grid refinement <cit.>.
More recently, Ref. <cit.> further extended the LAD method by coupling it with a diffuse-interface method <cit.> for the simulation of multiphase fluid flows and elastic-plastic deformation of solid-solid materials.
It is worth mentioning the studies that used the other type of artificial (Laplacian) viscosity
<cit.>.
Ref. <cit.> used polynomial order dependent artificial viscosity for discontinuous Galerkin schemes.
Ref. <cit.> proposed an artificial viscosity based on a scalar PDE.
Later, Ref. <cit.> proposed entropy-stable artificial viscosity, which has been extensively used in the recent works such as that of Ref. <cit.>.
Along similar lines, Ref. <cit.> proposed the entropy viscosity method, an artificial viscosity based on the local rate of generation of entropy. This was later extended to discontinuous finite element methods by Ref. <cit.> and discontinuous spectral element method by Ref. <cit.>.
More recently, Ref. <cit.> proposed an artificial neural network to predict the local artificial viscosity.
There are many more variants of this approach, but a drawback that is common to all these methods is that they use the same sensor for all the discontinuities, which could be overly dissipative for simulations of turbulent flows. Therefore, in this work, a LAD-based artificial viscosity is used because of the localized nature of the dissipation that is suitable for the simulations of turbulent flows.
A characteristic of the LAD approaches is that they are most commonly used in conjunction with a high-order central scheme <cit.>, with some exceptions <cit.>. However, in this work, we choose to use a second-order central-difference scheme with LAD since low-order central schemes are known to have some advantages for the simulation of turbulent flows <cit.> due to their (a) non-dissipative nature, (b) low cost, (c) low aliasing error, (d) easy extension to unstructured grids, (e) ease of boundary treatment, and (f) improved stability.
In addition to these, the order of accuracy of high-order schemes can only be realized in an asymptotic regime, but the flow fields will be well resolved typically much before reaching this asymptotic regime.
Moreover, for flows with discontinuities, the order of accuracy is also locally reduced around discontinuities.
Recently, Ref. <cit.> compared a second-order Godunov method with a higher-order finite-volume WENO shock-capturing method and showed that it was significantly cost-effective to run a refined simulation with a second-order method than to run a coarse simulation using a higher-order method to obtain a solution of similar accuracy.
§.§ Challenges with the existing methods
The issues/challenges with the existing artificial viscosity methods are:
* The LAD methods are generally used only with high-order numerical methods and have not been used with low-order methods; therefore, it is unclear how they perform in that setting, and there are no guidelines on how to choose the parameters when they are used with a low-order method.
* Some of the existing artificial viscosity formulations are too dissipative because of lack of proper localization, and hence are not suitable for the simulation of turbulent flows <cit.>, and others are less dissipative but do not add enough dissipation locally to resolve the jumps and result in inaccurate capturing of discontinuities.
* Some formulations are also not stable for high-Reynolds-number flows and require low-pass filters to eliminate oscillations, particularly with high-order numerical methods. Typically, these filters are used in the hope of achieving a stable method; however, the use of low-pass filtering might not necessarily always bring numerical stability to the method (see Ref. <cit.>).
* In the existing LAD formulations, an artificial fluid property, X^*, is typically defined as
X^* ∼ C_X Δ^{r+2} \overline{|∇^r s_X|} f_X,
where C_X is a model constant, Δ is the local grid size, s_X is a discontinuity indicator function with an additional localization sensor f_X. The overbar denotes a Gaussian filtering operation which is required to obtain a smooth artificial fluid property (particularly for large values of r) to achieve a stable method <cit.>. However, it is not trivial to extend this filtering operation to unstructured grids for simulations in complex geometries <cit.>.
* In the existing formulations, the artificial thermal/mass diffusivity (ATD/AMD) added to capture contact discontinuities is also active in the regions of shocks and vortical motions due to the choice of indicator function and lack of proper localization. This adds unnecessary additional dissipation by ATD/AMD in the regions of shock where ABV is already acting to capture shocks and in the regions of unresolved eddies where a subgrid model is already active.
* Additionally, the coefficients C_X may also require problem-dependent tuning <cit.>, which is primarily due to the insufficient localization of the added artificial dissipation. For example, the coefficient values that are tuned for the simulation of turbulent flows with weak shocks/shocklets might not be appropriate for the simulation of stronger shocks, and similarly, the coefficient values that are tuned to capture stronger shocks could be too dissipative for the simulation of turbulent flows.
* It is also common to turn on and off the artificial fluid properties <cit.> depending on the problem being solved, which is not predictive.
The proposed method in this work aims to address these challenges with existing AV/LAD methods.
A preliminary version of this work has been published as a technical report in the annual publication of the Center for Turbulence Research <cit.>.
The rest of this paper is organized as follows. Section <ref> contains the proposed localized artificial-viscosity/diffusivity model along with the details of the sensors used, a new sensor for detecting contact discontinuities, the coefficient values used, and an extension to two-phase flows. Section <ref> contains the consistency conditions for kinetic energy–and entropy-consistent discretization and a consistent dissipative flux formulation. Section <ref> contains the simulation results using the proposed method, and finally the concluding remarks are presented in Section <ref>.
§ PROPOSED ARTIFICIAL-VISCOSITY/DIFFUSIVITY METHOD
The idea behind a LAD method is to augment the physical fluid properties with the grid-dependent artificial fluid properties locally in the regions of the flow where discontinuities such as shocks (artificial bulk viscosity), contacts (artificial mass/thermal diffusivity), and eddies (artificial shear viscosity) are not resolved by the grid <cit.>.
For capturing shocks on the grid, an artificial bulk viscosity (ABV), β^*, is appended to the physical bulk viscosity, β_p, as
β = β_p + β^*.
Initially, a strain-rate-based sensor was used to detect shocks <cit.>, but this made the ABV active in regions away from shocks and in turbulent flows.
Since shocks are associated with high values of negative dilatation, to reduce the ABV away from shocks, Ref. <cit.> and Ref. <cit.> proposed a dilatation-based sensor. Further, Ref. <cit.> appended the Ducros sensor <cit.> to ABV to reduce ABV in the regions of high enstrophy that represent turbulent motions and to localize ABV further.
For unresolved vortical motions, an artificial shear viscosity (ASV), μ^*, is used as a subgrid model where it is appended to the physical shear viscosity, μ_p, as
μ = μ_p + μ^*.
Typically, the magnitude of the strain rate tensor is used as an indicator function <cit.>. Alternatively, an eddy-viscosity model <cit.> can be used as a subgrid-scale model for unresolved eddies, which is the more widely adopted approach and the one we use in this work. This choice is motivated by the observation that a dynamic subgrid-scale model appropriately turns its activity on and off based on the amount of resolved turbulence, whereas a numerical dissipation approach does not account for this appropriately.
Similarly, for capturing unresolved contact discontinuities, an artificial thermal diffusivity (ATD), κ^*, is appended to the physical thermal diffusivity, κ_p, <cit.> as
κ = κ_p + κ^*.
Alternatively, artificial mass diffusivity (AMD), D^*, can be used in the place of ATD <cit.> where artificial terms such as ∇⃗·(D^* ∇⃗ρ), ∇⃗·(D^* ∇⃗ρ⊗u⃗), and ∇⃗·{(|u⃗|^2/2)D^* ∇⃗ρ} are consistently added to the mass, momentum, and energy equations. These consistent corrections for the momentum and energy equations are similar to the consistent corrections used in capturing material interfaces in two-phase flows <cit.>.
We adopt the AMD approach over ATD because it can be shown that AMD satisfies the interface equilibrium condition (IEC), sometimes also referred to as pressure equilibrium, an important thermodynamic consistency condition for robust numerical simulations of compressible two-phase flows <cit.>, with the five-equation model, which the ATD does not satisfy (see Section <ref>). However, for single-phase flows, both ATD and AMD are equally applicable and will yield similar results.
§.§ Proposed model
The system of conservation equations (mass, momentum, and energy) along with the proposed artificial-viscosity method can be written as
∂ρ/∂ t + ∂ρ u_j/∂ x_j = A_ρ,
∂ρ u_i/∂ t + ∂ρ u_i u_j/∂ x_j + ∂ p/∂ x_i = ∂τ_ij/∂ x_j + ρ g_i + A_ρ u,
and
∂ E/∂ t + ∂(E + p ) u_j/∂ x_j = ∂τ_ij u_i/∂ x_j + ρ u_i g_i + A_E,
where ρ is the density, p is the pressure, u is the velocity, E=ρ(e + u_i u_i/2) is the total energy and e is the internal energy, τ_ij is the stress tensor, and g_i represents a generic body force. Throughout this paper, i and j represent Einstein indices, and x and t represent space and time coordinates, respectively. In Eqs. (<ref>)-(<ref>), A_ρ, A_ρ u, and A_E are the artificial terms added to the mass, momentum, and energy equations, respectively, to capture shocks and contact discontinuities. They can be written as
A_ρ = ∂/∂ x_j(D^* ∂ρ/∂ x_j),
A_ρ u = ∂/∂ x_j(D^* u_i ∂ρ/∂ x_j) + ∂/∂ x_j(β^* ∂ u_k/∂ x_kδ_ij),
and
A_E = ∂/∂ x_j[D^* ∂ρ/∂ x_j(u_k u_k/2)]
+ ∂/∂ x_j(D^* ∂ρ e/∂ x_j)
+ ∂/∂ x_j(β^* ∂ u_k/∂ x_kδ_ij u_i ).
Here, Eq. (<ref>), the first term in Eq. (<ref>), and the first two terms in Eq. (<ref>) are
added to capture contact discontinuities; and the second term in Eq. (<ref>) and the last term in Eq. (<ref>) are added to capture shocks.
These consistent terms in the momentum and energy equations can be derived similarly to the derivation of interface-regularization terms as described in Ref. <cit.>.
The consistent terms in the momentum and energy equations are introduced such that there is no spurious contribution to the total kinetic energy of the system.
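As an illustration of how the artificial terms above can be evaluated, the following Python sketch computes A_ρ, A_ρu, and A_E in one dimension with simple gradient operators; it is only a sketch and does not use the discretely consistent flux form introduced later in the paper.

```python
import numpy as np

def artificial_terms_1d(rho, u, e, D_star, beta_star, dx):
    """Evaluate the artificial terms A_rho, A_rho_u, and A_E in one dimension,
    using numpy.gradient as a stand-in for the solver's operators; e is the
    specific internal energy."""
    ddx = lambda f: np.gradient(f, dx)
    drho = ddx(rho)
    theta = ddx(u)                                  # dilatation in 1D
    A_rho = ddx(D_star * drho)
    A_rho_u = ddx(D_star * u * drho) + ddx(beta_star * theta)
    A_E = (ddx(D_star * drho * 0.5 * u ** 2)        # kinetic-energy-consistent part
           + ddx(D_star * ddx(rho * e))             # internal-energy (contact) part
           + ddx(beta_star * theta * u))            # ABV work term
    return A_rho, A_rho_u, A_E
```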
Note that, in this work, the ASV is not used (or is equivalently set to zero). Instead, a dynamic Smagorinsky model is used for unresolved eddies, and in Section <ref>, the sensitivity of the results to the presence of a dynamic subgrid-scale model is investigated.
§.§ Dynamic subgrid scale model
For the sake of completeness, the details of the subgrid-scale model are provided here. The resolved turbulent stresses, L_ij = -\widehat{u_i u_j} + \widehat{u}_i \widehat{u}_j, where \widehat{(·)} denotes the test-filter operation, are related to the modeled stresses, M_ij, in the “test window” <cit.> as
L_ij = 2(C_s Δ)^2 ( (\widehat{Δ}^2/Δ^2) |\widehat{S}| \widehat{S}_ij - \widehat{|S| S_ij} ) = 2(C_s Δ)^2 M_ij,
where \widehat{Δ} and Δ denote the test-level and grid-level filter widths, respectively, C_s is the subgrid-scale model coefficient, S_ij is the rate-of-strain tensor computed from the resolved LES fields, and |S| is its magnitude. Ref. <cit.> proposed a least-squares solution of this system, leading to the expression for the model coefficient
(C_s Δ)^2 = L_ij M_ij/2 M_ij M_ij.
In this work, both the numerator and denominator have been averaged in the volume (since for a periodic box all three spatial directions are homogeneous), to arrive at
(C_s Δ)^2 = ⟨ L_ij M_ij⟩/⟨ 2 M_ij M_ij⟩,
where ⟨·⟩ is the volumetric-averaging operator. The final form of the eddy-viscosity model is then written as
μ^* = (C_s Δ)^2 |S|.
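For concreteness, the following Python sketch evaluates the volume-averaged coefficient (C_s Δ)^2 on a uniform periodic grid. The box test filter, the filter-width ratio, and the sign convention for L_ij follow our reading of the equations above; this is an illustrative sketch, not the implementation used in the solver.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dynamic_smagorinsky_coeff(u, v, w, dx, test_width=2):
    """Volume-averaged dynamic Smagorinsky coefficient (C_s*Delta)^2 on a
    uniform periodic grid, following the Germano/Lilly procedure sketched
    above; the effective test-to-grid filter-width ratio is assumed to equal
    `test_width`."""
    vel = [u, v, w]
    hat = lambda f: uniform_filter(f, size=test_width, mode='wrap')  # box test filter
    grad = lambda f, ax: np.gradient(f, dx, axis=ax)

    # grid-level rate-of-strain tensor and its magnitude
    S = [[0.5 * (grad(vel[i], j) + grad(vel[j], i)) for j in range(3)] for i in range(3)]
    Smag = np.sqrt(2.0 * sum(S[i][j] ** 2 for i in range(3) for j in range(3)))

    # test-filtered velocities, strain tensor, and magnitude
    velh = [hat(f) for f in vel]
    Sh = [[0.5 * (grad(velh[i], j) + grad(velh[j], i)) for j in range(3)] for i in range(3)]
    Shmag = np.sqrt(2.0 * sum(Sh[i][j] ** 2 for i in range(3) for j in range(3)))

    ratio2 = float(test_width) ** 2   # (test filter width / grid filter width)^2
    num, den = 0.0, 0.0
    for i in range(3):
        for j in range(3):
            L = -hat(vel[i] * vel[j]) + velh[i] * velh[j]
            M = ratio2 * Shmag * Sh[i][j] - hat(Smag * S[i][j])
            num += np.mean(L * M)          # volumetric average of L_ij M_ij
            den += np.mean(2.0 * M * M)    # volumetric average of 2 M_ij M_ij
    return num / den
```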
§.§ Artificial fluid properties
The artificial diffusivities used in this work can be defined as
D^* = C_D (1/ρ) |∑_j ∂^r ρ/∂ x^r_j (Δ x_j)^{r+1}| (|u_j| + c_s) f_D,
and
β^* = C_β ρ |∑_j ∂^r θ/∂ x^r_j (Δ x_j)^{r+2}| H(-θ) f_β,
where Δ x is the grid size; C_D and C_β are the model coefficients for AMD and ABV, respectively; c_s is the speed of sound; θ=∇⃗·u⃗ is the dilatation; and f_D and f_β are the localization sensors. Here, ρ and θ act as indicator functions in AMD and ABV to detect contact discontinuities and shocks, respectively. The Heaviside function H(-θ) turns off the ABV in the regions of expansion. It is important to note that the Gaussian filtering operation is not performed on these artificial fluid properties, unlike the previous LAD methods, where the Gaussian filtering was required to obtain stable solutions. This makes it easy for the present method to be extended for unstructured grids.
The localization sensor in ABV is given as
f_β = θ^2/(θ^2 + a ω_iω_i + ε),
which is a modified version (a>1) of the Ducros sensor <cit.>, where ε = 10^{-15} is a small number added to prevent division by zero. The original Ducros sensor (a=1) was used by Ref. <cit.> to localize ABV for capturing shocks. The idea behind using this sensor is to identify the regions of weak compression where the enstrophy could be non-negligible compared to the local dilatation, and to turn off ABV in those regions. The modification (a>1) here only makes the localization sensor stronger.
§.§ A sensor for detecting contact discontinuities
In all the previous studies, either density or internal energy is used as an indicator function in AMD/ATD to detect contact discontinuities <cit.>.
An issue with this approach is that this indicator will not only detect contact discontinuities, but will also detect shocks and vortical motions where there are jumps in density and internal energy (finite gradients). Therefore, to overcome this issue, we propose a sensor—motivated by the idea behind the Ducros sensor—for detecting contact discontinuities as
f_D = |∂ρ/∂ x_j|^2 / [ |∂ρ/∂ x_j|^2 + a(θ^2 + ω_iω_i)(ρ/|u⃗|)^2 + ε ].
This sensor can, by construction, detect contact discontinuities very effectively and distinguish them from regions of shocks (high dilatation) and vortical motions (high enstrophy). The sensor turns on only when there is a jump in density, and turns off in regions of high dilatation or enstrophy.
In this work, the purpose of adding this new sensor f_D is to further localize AMD by making it active only in the regions containing contact discontinuities and to turn off AMD in the regions with high enstrophy and high dilatation, which represent vortical motions and shocks, respectively. Note that the proposed f_D can also be used with an ATD approach for capturing contact discontinuities without the loss of generality.
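A two-dimensional numpy sketch of the sensors and the resulting artificial properties is given below for r = 1. The discrete operators, the interpretation of the velocity scale |u_j| + c_s (here taken as the velocity magnitude plus the speed of sound), and the boundary handling are simplifications of what the solver would use; the function name and defaults are illustrative only.

```python
import numpy as np

def artificial_properties(rho, u, v, c_s, dx, dy,
                          C_D=0.5, C_beta=100.0, a=100.0, r=1, eps=1e-15):
    """Evaluate the localization sensors f_beta and f_D and the artificial
    diffusivities D* and beta* on a uniform 2D grid (r = 1 case)."""
    ddx = lambda f: np.gradient(f, dx, axis=0)
    ddy = lambda f: np.gradient(f, dy, axis=1)

    theta = ddx(u) + ddy(v)                     # dilatation
    omega = ddx(v) - ddy(u)                     # out-of-plane vorticity
    enstrophy = omega ** 2

    drho_dx, drho_dy = ddx(rho), ddy(rho)
    grad_rho2 = drho_dx ** 2 + drho_dy ** 2
    speed2 = u ** 2 + v ** 2 + eps

    # localization sensors (shock sensor and contact-discontinuity sensor)
    f_beta = theta ** 2 / (theta ** 2 + a * enstrophy + eps)
    f_D = grad_rho2 / (grad_rho2 + a * (theta ** 2 + enstrophy) * rho ** 2 / speed2 + eps)

    # artificial mass diffusivity for contact discontinuities
    D_star = (C_D / rho) * np.abs(drho_dx * dx ** (r + 1) + drho_dy * dy ** (r + 1)) \
             * (np.sqrt(speed2) + c_s) * f_D

    # artificial bulk viscosity for shocks, active only under compression
    dtheta_dx, dtheta_dy = ddx(theta), ddy(theta)
    beta_star = (C_beta * rho) * np.abs(dtheta_dx * dx ** (r + 2) + dtheta_dy * dy ** (r + 2)) \
                * (theta < 0.0) * f_beta

    return D_star, beta_star
```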
§.§ Problem-independent coefficients and the choice of r
The coefficients C_D and C_β are generally dependent on the numerical scheme used. In this work, for the second-order central schemes, we use the values C_D≈0.5, C_β≈100, and a≈100. For the flows considered, these values are shown to produce fairly similar results, as long as the values are changed by less than an order of magnitude (see Sections <ref> and <ref>). This is primarily due to the use of sophisticated sensors in Eqs. (<ref>) and (<ref>) that are responsible for adding dissipation locally only in the regions where it is needed the most. Hence, the values of these coefficients C_D and C_β need to be determined only once for a particular scheme. Without the use of the new sensor in Section <ref>, the current choice of values for C_D and C_β is too dissipative for turbulent flows with weak compressibility (see Figure <ref> and the discussion in Section <ref>), and if the values for these coefficients are reduced to make them suitable for turbulent flows, the dissipation is insufficient to capture stronger shocks. However, in the presence of the proposed sensor, the current choice of model coefficients can be used without the need for tuning it for each problem, e.g., stronger shocks and turbulent flows with weak compressibility.
Furthermore, in this study, a value of r=1 is used. Note that a higher value of r was used in other studies that use higher-order schemes <cit.>. In previous studies, a higher value of r was suggested because it results in dissipating only the higher-wavenumber content that is not resolved by the scheme <cit.>. However, in this study, with the use of a low-order scheme, the motivation to use r=1 (a lower value of r) is to have a sensor that is more localized only at the discontinuities and not to dissipate unresolved scales, unlike the motivation for higher-order schemes. For lower-order schemes, a higher value of r>2 makes the sensor active in other regions in the domain where there is no shock or a contact discontinuity, and this would not be ideal for the simulation of turbulent flows.
§.§ Extension to two-phase flows
In this section, the proposed artificial-viscosity method is extended for two-phase flows. Recently, a conservative version of the diffuse-interface model that can be discretized using a central-differencing scheme was proposed to simulate compressible two-phase flows <cit.>. A five-equation model based on Ref. <cit.> and Ref. <cit.> was proposed in Ref. <cit.> and used with a low-order central scheme, and a four-equation model was proposed in Ref. <cit.> and used with a high-order central scheme.
Here, we present an extension of the proposed artificial-diffusivity method for two-phase flows. The system of conservation equations for volume fraction, mass of each phase, momentum, and total energy can be written as
∂ϕ_1/∂ t + ∂ u_j ϕ_1/∂ x_j = (ϕ_1 + ζ_1)∂ u_j/∂ x_j + ∂ a_1j/∂ x_j + ∂/∂ x_j(D^* ∂ϕ_l/∂ x_j),
∂ρ_lϕ_l/∂ t + ∂ u_j ρ_l ϕ_l/∂ x_j = ∂ R_lj/∂ x_j + ∂/∂ x_j(D^* ∂ρ_l ϕ_l/∂ x_j), l=1,2,
∂ρ u_i/∂ t + ∂ρ u_i u_j/∂ x_j + ∂ p/∂ x_i = ∂ u_i f_j/∂ x_j + ∂τ_ij/∂ x_j + σκ∂ϕ_1/∂ x_i + ρ g_i
+∂/∂ x_j(D^* u_i ∂ρ/∂ x_j)
+ ∂/∂ x_j(β^* ∂ u_k/∂ x_kδ_ij),
∂ E/∂ t + ∂(E + p ) u_j/∂ x_j = ∂τ_ij u_i/∂ x_j + ∂ k f_j/∂ x_j + ∑_l=1^2 ∂ρ_l h_l a_lj/∂ x_j + σκ u_i∂ϕ_1/∂ x_i + ρ u_i g_i
+ ∂/∂ x_j[D^* ∂ρ/∂ x_j(u_k u_k/2)]
+ ∂/∂ x_j(D^* ∂ρ e/∂ x_j)
+ ∂/∂ x_j(β^* ∂ u_k/∂ x_kδ_ij u_i ),
where ϕ_l is the volume fraction of phase l that satisfies the condition ∑_l=1^2 ϕ_l=1; ρ_l is the density of phase l; ρ is the total density, defined as ρ=∑_l=1^2ρ_lϕ_l; u⃗ is the velocity; p is the pressure; e is the specific mixture internal energy, which can be related to the specific internal energy of phase l, e_l, as ρ e=∑_l=1^2 ρ_le_l; k=u_iu_i/2 is the specific kinetic energy; E=ρ(e+k) is the total energy of the mixture per unit volume; and the function ζ_1 is given by
ζ_1 = (ρ_2 c_2^2 - ρ_1 c_1^2) / (ρ_1 c_1^2/ϕ_1 + ρ_2 c_2^2/ϕ_2),
for Kapila's five-equation model, and ζ_1=0 in Allaire's five-equation model, where c_l is the speed of sound of phase l.
In Eq. (<ref>), h_l=e_l+p/ρ_l represents the specific enthalpy of phase l and can be expressed in terms of ρ_l and p using the stiffened-gas equation of state as
h_l = (p + π_l)γ_l / (ρ_l(γ_l - 1)).
In Eqs. (<ref>)-(<ref>), σ is the surface-tension coefficient, κ=-∇⃗·n⃗_1 is the curvature of the interface, n⃗_l is the normal vector of the interface for phase l, g⃗ is the gravitational acceleration, and a⃗_l is the volumetric interface-regularization flux for phase l, which is responsible for maintaining the finite thickness of the material interface and satisfies the condition a⃗(ϕ_1)=-a⃗(ϕ_2). A conservative phase-field model or an accurate conservative phase-field model can be used for the interface-regularization fluxes, as was proposed in Ref. <cit.> and Ref. <cit.>, respectively.
R⃗_l=ρ_la⃗_l is the consistent regularization flux for the mass of phase l,
and f⃗=∑_l=1^2 R⃗_l=∑_l=1^2 ρ_la⃗_l is the net consistent regularization flux for the mixture mass.
The ABV in Eq. (<ref>) requires no modification for two-phase flows, but the AMD does, because the jump in density at the material interface would otherwise activate it. Since the diffuse-interface model already captures the material interface, the AMD has to be turned off there. This can easily be achieved by replacing the gradients of density in Eqs. (<ref>), (<ref>) with volume-fraction-weighted gradients of the phase densities as
∂ρ/∂ x_j→∑_l ϕ_l ∂ρ_l/∂ x_j,
which will prevent AMD from activating at material interfaces.
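The replacement amounts to a one-line change wherever the density gradient enters the AMD terms, as in the following illustrative sketch for two phases:

```python
import numpy as np

def weighted_density_gradient(phi1, rho1, rho2, dx, axis=0):
    """Volume-fraction-weighted phase-density gradient, sum_l phi_l d(rho_l)/dx_j,
    used in place of d(rho)/dx_j in the AMD terms so that the sensor does not
    trigger at material interfaces."""
    phi2 = 1.0 - phi1
    return (phi1 * np.gradient(rho1, dx, axis=axis)
            + phi2 * np.gradient(rho2, dx, axis=axis))
```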
§ DISCRETE KINETIC ENERGY– AND ENTROPY-CONSISTENT DISSIPATIVE FLUX FORMULATION
We first show an analogy between the Lax-Friedrichs (LF) flux and the artificial-viscosity methods, and then use this idea to develop a kinetic energy–and entropy-consistent flux formulation for the proposed artificial-viscosity method. Consider a generic conservation equation of the form
∂ c/∂ t + ∂ f(c)/∂ x = 0,
where c is a conserved quantity, and f(c) is the flux function. Using the explicit Euler (EE) time-advancement scheme and the second-order central scheme, the discrete form of the equation can be written as
c^{n+1}_m = c^n_m - (Δ t/(2 Δ x))[ f(c^n_{m+1}) - f(c^n_{m-1}) ],
where m is the grid index.
Now, replacing c^n_m with (c^n_{m+1} + c^n_{m-1})/2 and rewriting in conservation (flux) form, we obtain
c^n+1_m = c^n_m - λ( f̂_m+1/2 - f̂_m-1/2),
where the numerical flux, f̂_m+1/2, is the well known LF flux <cit.>, given by
f̂_{m+1/2} = [ f(c^n_{m+1}) + f(c^n_m) ]/2 - (1/(2λ)) (c^n_{m+1} - c^n_m),
and λ = Δ t/Δ x has units of inverse velocity. Now consider the same conservation equation in Eq. (<ref>) augmented with a generic artificial-viscosity fluid property, ϵ^*, as
∂ c/∂ t + ∂ f(c)/∂ x = ∂/∂ x[ϵ^* ∂ c/∂ x].
Using EE and second-order central schemes and writing in conservation form, we arrive at
c^n+1_m = c^n_m - λ[ f̂_m+1/2 - f̂_m-1/2],
where the numerical flux is
f̂_{m+1/2} = [ f(c^n_{m+1}) + f(c^n_m) ]/2 - (ϵ^*_{m+1/2}/Δ x) (c^n_{m+1} - c^n_m).
Note that if ϵ^* = Δ x/(2λ), then the LF flux and the artificial-viscosity method are identical (provided that a second-order central scheme is used for the discretization of the artificial-viscosity terms).
The LF flux is known to be entropy stable, but it is also highly dissipative. Therefore, the idea proposed in this work is to replace the non-dissipative central flux in the LF flux with a robust kinetic energy– and entropy-preserving (KEEP)-type flux as in Ref. <cit.> and to further localize the dissipative part of the LF flux, with the use of sensors, to only those regions where it is needed. The new proposed flux can then be represented as
f̂_m+1/2 = .f̂_m+1/2|_KEEP - Â_m+1/2^d,
where Â_m+1/2^d is the discrete dissipative flux for the artificial term, given by
Â_{m+1/2}^d = (ϵ^*_{m+1/2}/Δ x)(c^n_{m+1} - c^n_m).
Here, ϵ^*_m+1/2 represents a localized artificial-fluid property. In this work, the non-dissipative central flux is replaced with the second-order KEEP scheme of Ref. <cit.>. However, in general, this central flux can be replaced with any other robust low/non-dissipative flux <cit.>.
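In one dimension, the resulting hybrid flux can be sketched as follows; the two-point average below merely stands in for the KEEP flux (whose exact splitting is not reproduced here), and the face value of ϵ^* is obtained by simple averaging.

```python
import numpy as np

def hybrid_flux_1d(c, f, eps_star, dx):
    """1D sketch of the hybrid flux: a non-dissipative two-point average flux
    (stand-in for the KEEP flux) minus the localized LF-type dissipative flux
    A^d_{m+1/2} = eps*_{m+1/2}/dx * (c_{m+1} - c_m). Inputs are cell values;
    the returned array holds fluxes at the m+1/2 faces."""
    f_central = 0.5 * (f[1:] + f[:-1])                  # (f_{m+1} + f_m) / 2
    eps_face = 0.5 * (eps_star[1:] + eps_star[:-1])     # interpolate eps* to faces
    a_diss = eps_face / dx * (c[1:] - c[:-1])           # localized dissipative flux
    return f_central - a_diss

# usage sketch: advance one explicit Euler step for an interior cell m
# c_new[m] = c[m] - dt/dx * (flux[m] - flux[m-1])
```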
Ref. <cit.> explored a similar idea by replacing the non-dissipative central flux with a kinetic energy-preserving (KEP) flux. However, the dissipative flux in their approach was not localized and was present everywhere in the domain. Another difference is that they used a scalar dissipation of momentum to capture all discontinuities.
Such scalar dissipation of momentum not only acts on the dilatational motion at the shocks, but also dissipates the vortical structures, making the method even more dissipative; indeed, they concluded that this approach was too dissipative for turbulent flows.
§.§ Discrete consistency conditions
The discrete consistency conditions between the mass, momentum, kinetic energy, and internal energy convective fluxes and artificial fluxes (for capturing material interfaces) were proposed by Ref. <cit.>. Following a similar procedure, the consistency conditions are further extended in this work to include the dissipative fluxes, which represent the artificial fluid diffusivity. If the full flux in the mass equation [Eq. (<ref>)] is written as
Ĉ^f_j|_(m±1/2) = Ĉ_j|_(m±1/2) + Ĉ_j^'|_(m±1/2),
where Ĉ_j|_(m±1/2) is the convective part and Ĉ_j^'|_(m±1/2) is the dissipative AMD contribution, then, the momentum- and kinetic energy–consistency conditions are given by
M̂^f_ij|_(m±1/2) = (Ĉ_j|_(m±1/2) + Ĉ_j^'|_(m±1/2)) u_i^(m±1/2) + M̂^'_ij|_(m±1/2),
and
K̂^f_j|_(m±1/2) = (Ĉ_j|_(m±1/2) + Ĉ_j^'|_(m±1/2)) u_i|_(m±1) u_i|_(m)/2 + u_i^(m±1/2)M̂^'_ij|_(m±1/2),
where M̂^'_ij|_(m±1/2) represents the additional dissipative momentum flux (the ABV contribution) and the overbar (·)^(m±1/2) denotes the arithmetic average of a quantity at m and m±1.
§.§ Discrete fluxes for the proposed method
Using the consistency conditions in Eqs. (<ref>)-(<ref>), and following the notation used in Eq. (<ref>) for the LF-type dissipative fluxes, the proposed artificial-viscosity method can be written in discrete flux form as
Â^d_ρ,j|_(m±1/2) = Ĉ_j^'|_(m±1/2) = - D^*_m±1/2/Δ x_j (Δ_j ρ),
Â^d_ρ u,ij|_(m±1/2) = Ĉ_j^'|_(m±1/2)u_i^(m±1/2) + M̂^'_ij|_(m±1/2)
=( - D^*_m±1/2/Δ x_j (Δ_j ρ) ) u_i^(m±1/2) - β^*_m±1/2∂ u_k/∂ x_k|_(m±1/2)δ_ij,
K̂^d_j|_(m±1/2) =
Ĉ_j^'|_(m±1/2)u_i|_(m±1) u_i|_(m)/2 + u_i^(m±1/2)M̂^'_ij|_(m±1/2)
=(- D^*_m±1/2/Δ x_j (Δ_j ρ) ) u_i|_(m±1) u_i|_(m)/2 + ( - β^*_m±1/2∂ u_k/∂ x_k|_(m±1/2)δ_ij) u_i^(m±1/2),
and
Î^d_j|_(m±1/2) = - D^*_m±1/2/Δ x_jΔ_j (ρ e),
where Â^d_ρ,j|_(m±1/2), Â^d_ρ u,ij|_(m±1/2), and Â^d_E,j|_(m±1/2) = K̂^d_j|_(m±1/2) + Î^d_j|_(m±1/2) are the localized LF-type total discrete dissipative fluxes for the artificial terms in the mass, momentum, and energy equations [Eqs. (<ref>)-(<ref>)], respectively.
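In one dimension, these face fluxes reduce to the short routine below; periodic nodal arrays are assumed, and the sensor-localized D^* and β^* are taken as given at the faces.

```python
import numpy as np

def dissipative_face_fluxes_1d(rho, u, rho_e, D_face, beta_face, dx):
    """Localized LF-type dissipative fluxes at faces m+1/2 (1D form of the
    expressions above).  All arrays are nodal and periodic; index m of each
    returned array is the face between nodes m and m+1.  D_face and beta_face
    are the sensor-localized artificial mass diffusivity and bulk viscosity."""
    rhoR   = np.roll(rho, -1)
    uR     = np.roll(u, -1)
    rho_eR = np.roll(rho_e, -1)

    u_avg   = 0.5 * (u + uR)                          # arithmetic face average
    dudx    = (uR - u) / dx                           # face velocity gradient
    A_rho   = -(D_face / dx) * (rhoR - rho)           # mass flux
    M_prime = -beta_face * dudx                       # ABV contribution
    A_mom   = A_rho * u_avg + M_prime                 # momentum flux
    K_d     = A_rho * 0.5 * uR * u + u_avg * M_prime  # kinetic-energy flux
    I_d     = -(D_face / dx) * (rho_eR - rho_e)       # internal-energy flux
    A_E     = K_d + I_d                               # total-energy flux
    return A_rho, A_mom, A_E
```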
§ RESULTS
In this section, the proposed artificial-viscosity method is used to simulate a variety of test cases: (a) a classical one-dimensional shock-tube case, to assess the accuracy of the method in capturing discontinuities and to illustrate the effect of the new sensor in Eq. (<ref>); (b) an LES of compressible turbulent flow, to assess the low-dissipative nature with and without the use of the new sensor, and the robustness in simulating compressible turbulent flows; (c) a shock-vortex interaction simulation, to assess the capability and accuracy of the method for more resolved, DNS-type calculations; and (d) a drop advection simulation, to illustrate the advantages of the use of AMD over ATD for two-phase flows.
The proposed method is implemented in the low-dissipative CTR-DIs3D solver <cit.>, which uses a second-order central scheme and a fourth-order Runge-Kutta scheme for spatial and temporal discretizations, respectively.
§.§ Modified Sod test case
The modified Sod shock tube is a classic one-dimensional test case, used to assess the accuracy of shock- and contact discontinuity-capturing methods, originally proposed by Ref. <cit.>. The modified version of the test case is used here because it exposes schemes that are not entropy-consistent: such schemes form an entropy-violating jump in the expansion region <cit.>. The initial setup consists of a Riemann problem with the left state (ρ , u , p)=(1.0,0.75,1.0) and the right state (ρ,u, p)=(0.125,0.0,0.1), and the discontinuity located at x=0.3. Note that a sharp discontinuity was not used at the initial time; instead, the initial discontinuity was smoothed over 1-2 grid cells. Starting with a sharp initial discontinuity can introduce unnecessarily large local artificial fluid properties, which can impose a severe Courant-Friedrichs-Lewy (CFL) restriction, a well-known issue for LAD methods <cit.>. Starting with a smooth discontinuity, on the other hand, does not add large artificial fluid properties at time t=0, thereby minimizing this restriction.
The number of grid points is chosen to be N=400, and the results are presented at the final time of t=0.2 in Figure <ref> along with the analytical (exact) solution.
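For reference, a minimal NumPy sketch of this initial condition is given below; the text only specifies that the jump is smoothed over 1-2 cells, so the tanh smoothing width used here is an assumed, illustrative choice.

```python
import numpy as np

# Modified Sod setup on N = 400 cells; the initial jump is smoothed over a
# few cells with a tanh profile (the exact smoothing width is an assumed,
# illustrative choice).
N = 400
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
x0, width = 0.3, 1.5 * dx

def smooth_jump(left, right, x, x0, width):
    s = 0.5 * (1.0 + np.tanh((x - x0) / width))
    return left + (right - left) * s

rho = smooth_jump(1.0,  0.125, x, x0, width)
u   = smooth_jump(0.75, 0.0,   x, x0, width)
p   = smooth_jump(1.0,  0.1,   x, x0, width)

gamma = 1.4
E = p / (gamma - 1.0) + 0.5 * rho * u**2      # total energy for an ideal gas
```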
The simulation results in Figure <ref> show that the shock and contact discontinuities are captured accurately with the proposed LAD method, and that the method does not suffer from the formation of an entropy-violating jump in the expansion region. The density, ρ, and artificial mass diffusivity, D^*, are plotted in Figure <ref>(a) and <ref>(b), respectively, at time t=0.2, with and without the newly proposed sensor, f_D.
Figure <ref>(b) shows that the AMD (D^*) is active in the regions of the shock, the contact discontinuity, and the expansion fan when the proposed f_D sensor is not used. However, the
use of the f_D sensor localizes D^* mostly to the region around the contact discontinuity, without introducing additional oscillations in the solution near the shock (see Figure <ref>(a)). Non-zero values of D^* around shocks would unnecessarily make the method more dissipative because the ABV (β^*) is already active in this region to resolve the shock. Therefore, the use of the new switching sensor f_D makes the method less dissipative without affecting the accuracy of the solution.
§.§.§ Sensitivity to model coefficients
As mentioned in Section <ref>, the coefficient values are chosen to be C_D=0.5 and C_β=100 throughout this work with the second-order central scheme. It was claimed there that the simulations are robust to these choices and that the accuracy of the solution is not significantly affected as long as the coefficient values are not changed by more than an order of magnitude. To illustrate this, the modified Sod shock-tube simulation is repeated for different values of the model coefficients in Figure <ref>.
Figure <ref> shows the solution density field for different values of model coefficients, C_β and C_D in ABV and AMD, respectively. Small oscillations can be seen around the shock for C_β=10 (10 times smaller than the proposed value), and the thickness of the shock is slightly increased for C_β=1000 (10 times higher than the proposed value). The contact discontinuity appears to be overly smeared for C_D=100 (200 times higher than the proposed value), but for smaller values of C_D, the contact discontinuity is intact and is not affected.
Overall, it can be seen that the solution is quite robust to the choices of both C_β and C_D, and the accuracy of the solution is only affected when these model coefficients are varied by more than an order of magnitude away from the proposed values.
§.§ Decaying homogeneous isotropic turbulence with shocklets
In this section, the low-dissipative nature, accuracy, and robustness of the proposed artificial-viscosity method are assessed for the LES of compressible turbulent flows with shocklets. Here, a decaying homogeneous isotropic turbulence (HIT) is simulated at a high enough Mach number that shocklets are generated in the flow <cit.>. In the incompressible limit of this flow, Agrawal et al. <cit.> recently showed that LES with the dynamic Smagorinsky model accurately predicts the decay rate of the kinetic energy. For this reason, the dynamic Smagorinsky model is used in this work as well.
The initial Taylor-scale Reynolds number of the flow is Re_λ,o=100, and the initial turbulent Mach number is M_t,o=0.6. The initial conditions for this simulation are generated following the procedure described in Ref. <cit.>. The Prandtl number is chosen to be Pr =0.7, and the material properties of the fluid are chosen to be γ=1.4 (specific heat ratio) and R=1 (specific gas constant) in the ideal gas law.
The domain is a triply periodic cube with dimensions [0,2π].
Here, a coarse resolution of 64^3 grid points is used; hence, this is a good test to assess the amount of numerical dissipation added by the shock- and contact discontinuity-capturing method.
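For reference, the initial turbulent Mach number and Taylor-scale Reynolds number quoted above can be evaluated from the velocity and temperature fields with a short routine such as the one below; it uses one common set of conventions, and exact definitions vary slightly between references, so it should be read as an illustrative check rather than the initialization procedure of Ref. <cit.>.

```python
import numpy as np

def hit_diagnostics(u, v, w, dx, nu, gamma, R, T):
    """Turbulent Mach number and Taylor-scale Reynolds number for a periodic
    velocity field (one common set of conventions; definitions vary slightly
    between references, so treat this as an illustrative check)."""
    uu = np.mean(u**2 + v**2 + w**2)
    c_mean = np.mean(np.sqrt(gamma * R * T))          # mean speed of sound
    M_t = np.sqrt(uu) / c_mean

    u_rms = np.sqrt(uu / 3.0)                         # single-component rms
    dudx = np.gradient(u, dx, axis=0)
    taylor_lambda = u_rms / np.sqrt(np.mean(dudx**2)) # Taylor microscale
    Re_lambda = u_rms * taylor_lambda / nu
    return M_t, Re_lambda
```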
Figure <ref> shows the results from the simulation (a) with f_D and f_β (the proposed method), (b) with f_D and without f_β, (c) without f_D and with f_β (similar to the formulation proposed in Ref. <cit.>, but note that here a modified form of the Ducros sensor is used instead of the original Ducros sensor that was used in Ref. <cit.>), and (d) without f_D and without f_β (similar to the formulation proposed in Ref. <cit.>). To compare the results, a direct numerical simulation (DNS) of the same test case is performed on a 256^3 mesh; the results are also filtered, using a Gaussian filter, onto a 64^3 mesh and are shown in Figure <ref>. In Figure <ref>, no subgrid model (ASV or eddy-viscosity model) is used for unresolved eddies in any of the simulations, and hence, the vorticity variance is overpredicted. The effect of the use of a subgrid model for unresolved eddies is explored in Section <ref>.
Figure <ref> shows that the proposed method that uses both f_D and f_β sensors is the least dissipative of all the formulations. If the new f_D sensor in Eq. (<ref>) is not used, the dilatational motions, the density fluctuations, and vortical motions are all significantly damped, with a greater effect on the dilatational motions and the density fluctuations. Similarly, if the modified Ducros sensor in Eq. (<ref>) is not used, all the quantities are damped. Finally, if both f_D and f_β sensors are not used, the formulation is highly dissipative, making the simulations inaccurate.
Note that this observation is for the current choice of values for the model coefficients C_β and C_D (Section <ref>) used in this work, which were chosen such that the stronger shocks and contact discontinuities can be accurately captured with the current choice of the numerical scheme (see Section <ref>). If one chooses the values of C_β and C_D significantly lower than the proposed values, it is possible to perform a low dissipative simulation of decaying HIT even without the use of sensors. But these values of coefficients then will only work for turbulent flows with weak compressibility and will not be suitable for the simulation of stronger shocks and contact discontinuities.
Therefore, it is instead preferable to use the sensors to appropriately localize the dissipation than tuning the model coefficients for each problem.
The two-dimensional slices from the simulation, with and without the f_D sensor, are plotted in Figure <ref>. With the f_D sensor active, the AMD is reduced by roughly two orders of magnitude without spuriously affecting the scales of density fluctuations and dilatational motions. In fact, without the f_D sensor, the dilatational field and the density field are slightly damped (see the range of the color bar). The ABV is active in the regions of high dilatation, as expected, and without the f_D sensor, the ABV increases by a small amount to compensate for the reduced AMD. The overall structure of the regions where ABV is active, however, remains the same with and without the f_D sensor.
In summary, the current simulation results show that the use of the f_D and f_β sensors is necessary to recover the correct behavior of the kinetic energy and the variances of dilatation, density, and vorticity.
The simulations are least accurate without the f_D and f_β sensors. Using these sensors individually improves the results by making the method less dissipative, and using both sensors gives the best results.
Therefore, the proposed method along with the sensors results in a robust, accurate, and low-dissipative method for capturing shocks and contact discontinuities for LES of compressible turbulent flows.
§.§.§ Effect of an eddy-viscosity model
Since this is an LES of a turbulent flow, a subgrid model (either an ASV or an eddy-viscosity model) is required for the unresolved eddies. No such model was used in Figure <ref>, in order to isolate the effect of ABV and AMD and to illustrate the effect of the sensors f_β and f_D. Here, the LES of decaying HIT is repeated with the dynamic Smagorinsky model described in Section <ref> to see the effect of this model.
Figure <ref> shows the simulation results for the proposed method, that uses both f_D and f_β sensors, with and without the dynamic Smagorinsky model (DSM).
The use of the DSM has an important effect on the kinetic energy and all the variances in Figure <ref>. Without the DSM, all the quantities are overpredicted; with the DSM, the simulation is most accurate, with all the quantities following the filtered DNS data very closely. In particular, the accuracy of the evolution of the kinetic energy and the vorticity variance is greatly improved with the use of the DSM. It is important to appreciate that this is only possible because of the low-dissipative nature of the proposed LAD method, with the new sensors that appropriately localize the dissipation to the regions where it is needed. Without the use of the f_D and f_β sensors, the formulation was very dissipative, as seen in Figure <ref>, and with the addition of the DSM, it would only become more dissipative, significantly affecting the accuracy of the simulation.
In addition to the integrated quantities, we plot the spectra of kinetic energy and density variance in Figure <ref>, with and without DSM, which gives more detailed information on how the models act at different scales. It shows that the use of DSM along with the proposed LAD approach clearly improves the result by bringing the spectra closer to the DNS spectra, except for deviations at small scales, which are to be expected in an LES calculation.
§.§.§ Sensitivity to model coefficients
The sensitivity to the model coefficients C_D and C_β was already tested in Section <ref>. Here, the sensitivity to the model coefficient a in the f_D and f_β sensors in Eqs. (<ref>), (<ref>) is tested. The coefficient a is included in the sensors to make them sharper and to further localize the regions where ABV and AMD/ATD are active. To illustrate this, the decaying HIT simulation is repeated for different values of a in Figure <ref>.
Figure <ref> shows the results from the decaying HIT simulation, with f_D, f_β, and DSM active, for a=10, 100 (proposed method + DSM), 200, and 1000. Clearly, varying a only affects the dilatational motions and the density fluctuations, and the effect is relatively small. The biggest improvement is seen when the value of a is increased from 10 to 100, where the solution becomes less dissipative and closer to the filtered DNS data; beyond this the effect saturates, and a further increase in a has only a minimal effect on the solution. Hence, a value of a=100 is chosen in this work.
§.§ Shock-vortex interaction
In this section, a shock-vortex interaction is simulated using the proposed artificial-viscosity method. Section <ref> demonstrated that the proposed method is low-dissipative and is suitable for LES of compressible turbulent flows. This section, in contrast, will assess the accuracy and suitability of the present method for more resolved simulations and DNS of compressible turbulent flows.
This test case is taken from the work of Ref. <cit.>, Ref. <cit.>, and Ref. <cit.> and has also been used to evaluate the shock-capturing capability by Ref. <cit.> and Ref. <cit.>. The initial setup of this case consists of a M=1.2 stationary shock located at x=0 and an isentropic vortex of strength M_v=0.25, initially located upstream of the shock at x=2. The domain extent is [-30,10]×[-20,20].
The initial vortex field is given by
u_θ(r) = M_v r exp((1 - r^2)/2),
u_r(r) = 0,
p(r) = 1/γ[ 1 - (γ - 1/2) M_v^2 exp(1 - r^2)]^γ/γ - 1,
and
ρ(r) = [ 1 - (γ - 1/2) M_v^2 exp(1 - r^2)]^1/γ - 1,
where u_θ is the angular velocity, u_r is the radial velocity, r is the radial distance from the center of the vortex, and γ is the ratio of specific heats. A schematic of the setup after the vortex has passed through the shock is shown in Figure <ref>.
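A minimal NumPy sketch of the initial vortex field defined by the expressions above is given below; it constructs only the vortex part (the M=1.2 base shock state is not reconstructed here), and the grid resolution is an illustrative choice.

```python
import numpy as np

# Isentropic vortex of the shock-vortex case (vortex part only).
gamma, M_v = 1.4, 0.25
x_v, y_v = 2.0, 0.0                              # initial vortex centre

x = np.linspace(-30.0, 10.0, 801)
y = np.linspace(-20.0, 20.0, 801)
X, Y = np.meshgrid(x, y, indexing="ij")
r = np.sqrt((X - x_v)**2 + (Y - y_v)**2)
theta = np.arctan2(Y - y_v, X - x_v)

u_theta = M_v * r * np.exp(0.5 * (1.0 - r**2))
core = 1.0 - 0.5 * (gamma - 1.0) * M_v**2 * np.exp(1.0 - r**2)
p   = core**(gamma / (gamma - 1.0)) / gamma
rho = core**(1.0 / (gamma - 1.0))

# Cartesian velocity components of the vortex (u_r = 0)
u = -u_theta * np.sin(theta)
v =  u_theta * np.cos(theta)
```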
The sound pressure field at time t=6 is shown in Figure <ref> along with the ABV, β^*. In this simulation, AMD is also active along with the f_D sensor. At time t=6, the vortex has passed through the shock and deforms the shock surface. The results show that the f_β sensor successfully localizes the ABV to the regions around the shock, while maintaining the accuracy in capturing the shock, without the need to separately tune the coefficients or to turn off AMD for this case.
The sound pressure is also plotted in Figure <ref> along the radial direction r from the vortex center for a fixed value of θ=-45^∘ at two different times (t=6, 8) and for three grid resolutions against the reference solution from Ref. <cit.>. The plots show that the simulations are grid-converged and match the reference solution well.
§.§ Droplet advection
In this section, a droplet advection is simulated to assess the suitability of the proposed artificial-viscosity method for two-phase flows. The LAD formulation was extended to two-phase flows in Section <ref> for a five-equation model <cit.> or a four-equation model <cit.> that can be used with a central-difference scheme.
The five-equation model does not assume thermal equilibrium and admits a separate temperature for each phase. Therefore, ATD cannot be directly used with a five-equation model, as it would lead to a violation of the interface-equilibrium condition (IEC). However, AMD is constructed in such a way that it satisfies the IEC with a five-equation model (see Appendix A).
Here, the use of AMD and ATD with a five-equation model for a simple one-dimensional advection of the drop is assessed. The domain length is 5 units and has periodic boundary conditions on both sides. A drop of radius 0.5 units is initially placed at the center of the domain at x=2.5.
The initial velocity is prescribed to be u=2.5 and the initial pressure is p=1.
Since both the velocity and the pressure are uniform at the initial time, they have to remain uniform for all times (the definition of the IEC).
Here, the material interface is the only discontinuity in this problem, and since the phase-field model <cit.> is already acting to capture the material interface, the artificial viscosities (ABV and AMD) should not have any effect on this problem. Figure <ref> shows the pressure and volume fraction at the final time of t=2 with the use of ATD, AMD, and without ATD or AMD.
When no ATD or AMD is used, the pressure remains uniform as expected.
Since ATD violates the IEC, spurious oscillations in the pressure field can be seen at the interface location when ATD is used. In contrast, when AMD is used, the pressure remains uniform because AMD satisfies the IEC, just as when no ATD or AMD is used. Hence, it is recommended to use AMD for capturing contact discontinuities in two-phase flows.
§ SUMMARY AND CONCLUSIONS
In this work, we propose a novel, entropy-consistent, and stable localized artificial-viscosity/diffusivity (LAD) method for capturing shocks and contact discontinuities in compressible flows.
Artificial-viscosity methods have many advantages over other discontinuity-capturing methods due to their simplicity and their compatibility with central-difference schemes, which is beneficial for the simulation of turbulent flows. The main challenges lie in (a) the need to appropriately localize the dissipation to the regions of discontinuity where it is required, (b) the need for a sophisticated filtering operation on the artificial fluid properties to stabilize the method, which makes it difficult to extend the formulation to unstructured grids, and (c) the need to tune the coefficients and to turn the artificial fluid properties on and off depending on the problem. The method proposed in this work overcomes most of these challenges.
Moreover, all existing LAD methods are used with a high-order central scheme, and it is not clear how these methods perform with low-order schemes. In this work, a second-order central scheme is chosen because of its low cost; low aliasing error; easy implementation, boundary treatment, and extension to unstructured grids; and improved stability. Guidelines on how to choose the parameters in the method are provided.
Furthermore, in artificial mass/thermal diffusivity (AMD/ATD), either the density or the internal energy was previously used as the indicator function to detect contact discontinuities. The main issue with this indicator is that it activates not only in the regions of contact discontinuities, but also in the regions of shocks and vortical motions. This is a problem because the artificial bulk viscosity (ABV) is already active in the regions of shocks and a subgrid model is already active for unresolved eddies; adding artificial mass/thermal diffusivity there would be unnecessarily dissipative. To prevent this, we propose a Ducros-like sensor that can effectively distinguish contact discontinuities from shocks and vortical motions and that turns on the AMD/ATD only in the regions of contact discontinuities. In the ABV, we also use a stronger, modified Ducros sensor instead of the original sensor, which further localizes the ABV to only the regions of shocks.
The use of these sensors in AMD/ATD and ABV results in an LAD method that does not require problem-dependent tuning of the model coefficients, which is otherwise necessary.
We show that the proposed method accurately captures shocks and contact discontinuities, without the need for problem-dependent tuning, for a range of problems, such as a one-dimensional Sod test case, decaying homogeneous isotropic turbulence with shocklets, and shock-vortex interaction.
Using an analogy between the Lax-Friedrichs (LF) flux and artificial-viscosity methods, a discrete LF-type flux formulation is presented for the proposed LAD method that satisfies discrete consistency conditions for kinetic energy and entropy. This LAD formulation is then coupled with the system of equations for compressible flows, discretized using a robust central scheme. This results in a stable, low-dissipative artificial-viscosity formulation that does not require the filtering of the solution and of the artificial fluid properties that was previously needed to obtain stable solutions.
Therefore, the proposed method is suitable for LES and DNS of compressible turbulent flows with discontinuities in complex geometries.
An extension of the proposed method to capture shocks and contact discontinuities in compressible two-phase flows is also presented. It is shown that the proposed method satisfies the interface equilibrium condition, a crucial thermodynamic consistency condition for robust numerical simulations of compressible two-phase flows.
§ APPENDIX A: INTERFACE EQUILIBRIUM CONDITION
According to the definition of the IEC, if the velocity and pressure are initially uniform, they have to remain uniform at all times. AMD satisfies the IEC, but ATD does not.
Hence, it might be preferable to use AMD for two-phase flows, irrespective of the choice of the model.
To show this, let us consider only the LAD terms in Eqs. (<ref>)-(<ref>) and ignore other terms because they satisfy IEC <cit.>, and make a one-dimensional assumption.
§.§ LAD with AMD and ABV
The simplified system of equations, with only AMD and ABV terms, can be written as
∂ϕ/∂ t = ∂/∂ x( D^* ∂ϕ/∂ x),
∂ρ/∂ t = ∂/∂ x( D^* ∂ρ/∂ x),
∂ρ u/∂ t = ∂/∂ x( D^* u ∂ρ/∂ x) + ∂/∂ x( β^* ∂ u/∂ x),
∂ E/∂ t = ∂/∂ x( D^* k ∂ρ/∂ x) + ∂/∂ x( D^* ∂ρ e/∂ x) + ∂/∂ x( β^* u ∂ u/∂ x).
Assuming u=constant initially in Eq. (<ref>), it can be rewritten as
ρ∂ u/∂ t + u[∂ρ/∂ t - ∂/∂ x( D^* ∂ρ/∂ x)] = 0,
where the term within [·] is zero because of Eq. (<ref>). Hence, ∂ u/∂ t=0 and u remains constant.
Now, taking a dot product of velocity with the momentum equation in Eq. (<ref>), the kinetic energy equation can be derived as
∂ρ k/∂ t = ∂/∂ x( D^* k ∂ρ/∂ x) + u ∂/∂ x( β^* ∂ u/∂ x).
The internal energy equation can be obtained by subtracting the kinetic energy equation in Eq. (<ref>) from the total energy equation in Eq. (<ref>) as
∂ρ e/∂ t = ∂/∂ x( D^* ∂ρ e/∂ x) + β^* ( ∂ u/∂ x)^2.
In a five-equation model, thermal equilibrium is not assumed and each phase has its own temperature T_l, but an isobaric closure law is assumed to close the system of equations. Expressing ρ e using the mixture rule, assuming the ideal gas law for each phase, and invoking the isobaric closure, the internal energy can be written as
ρ e = ∑_l ρ_l e_l ϕ_l = ∑_l ϕ_l p_l/γ_l - 1 = p ∑_l ϕ_l/γ_l - 1 = p α,
where α = ∑_l ϕ_l/(γ_l - 1). Using Eq. (<ref>) and rewriting Eq. (<ref>) in terms of p, we get
∂ p α/∂ t = ∂/∂ x( D^* ∂ p α/∂ x) + β^* ( ∂ u/∂ x)^2.
Now, assuming that u and p are constant at the initial time, we can rewrite this as
α∂ p /∂ t + p [ ∂α/∂ t - ∂/∂ x( D^* ∂α/∂ x) ] = 0,
where the term within [·] is zero because of Eq. (<ref>). Hence, ∂ p/∂ t=0 and p remains constant, and therefore the IEC is satisfied with AMD.
§.§ LAD with ATD and ABV
Similarly, the simplified system of equations, with only ATD and ABV terms, can be written as
∂ϕ/∂ t = 0,
∂ρ/∂ t = 0,
∂ρ u/∂ t = ∂/∂ x( β^* ∂ u/∂ x),
∂ E/∂ t = ∂/∂ x( D^* ∂ρ e/∂ x) + ∂/∂ x( β^* u ∂ u/∂ x),
∂ρ k/∂ t = u ∂/∂ x( β^* ∂ u/∂ x),
∂ρ e/∂ t = ∂/∂ x( D^* ∂ρ e/∂ x) + β^* ( ∂ u/∂ x)^2.
Assuming u=constant initially, from Eqs. (<ref>), (<ref>), it is easy to see that ∂ u/∂ t=0, and therefore, u remains constant.
Now, rewriting Eq. (<ref>) in terms of p and assuming that u and p are constant at the initial time, we obtain
α∂ p /∂ t + p [ ∂α/∂ t - ∂/∂ x( D^* ∂α/∂ x) ] = 0.
However, the term within [·] is not zero here, unlike in the case of AMD, because it does not satisfy Eq. (<ref>). Hence, ∂ p/∂ t ≠ 0, and therefore, the IEC is not satisfied with ATD.
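The two results above can also be checked numerically. The sketch below performs a single explicit-Euler step of the simplified systems with uniform u and p and a smoothed drop in the volume fraction; the phase properties, drop profile, and D^* profile are assumed, illustrative choices, and the compact face-based discretization of the solver is replaced here by a simple wide central stencil, which preserves the linearity argument used in the proof. With AMD, the pressure and velocity stay uniform to round-off; with ATD, a pressure error appears at the interface.

```python
import numpy as np

# Single explicit-Euler step of the simplified 1D systems above, comparing AMD
# and ATD.  beta* plays no role here since du/dx = 0 initially.
n = 400
x = np.linspace(0.0, 5.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 1e-4

gamma1, gamma2 = 1.4, 6.12          # assumed phase specific-heat ratios (illustrative)
rho1, rho2 = 1000.0, 1.0            # assumed constant phase densities (illustrative)
u0, p0 = 2.5, 1.0

phi = 0.5 * (np.tanh((x - 2.0) / (3 * dx)) - np.tanh((x - 3.0) / (3 * dx)))  # drop
alpha = phi / (gamma1 - 1.0) + (1.0 - phi) / (gamma2 - 1.0)
rho = phi * rho1 + (1.0 - phi) * rho2
u = np.full(n, u0)
rho_e = p0 * alpha
E = rho_e + 0.5 * rho * u0**2
D = 0.05 * np.exp(-((x - 2.0) / 0.2) ** 2)   # localized artificial diffusivity

def ddx(q):                       # second-order central derivative, periodic
    return (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)

def diff(q):                      # d/dx ( D * dq/dx ), second-order central
    return ddx(D * ddx(q))

def pressure(phi, rho, mom, E):
    a = phi / (gamma1 - 1.0) + (1.0 - phi) / (gamma2 - 1.0)
    return (E - 0.5 * mom**2 / rho) / a

for name, with_amd in (("AMD", True), ("ATD", False)):
    if with_amd:
        phi_n = phi + dt * diff(phi)
        rho_n = rho + dt * diff(rho)
        mom_n = rho * u + dt * (u0 * diff(rho))          # + d/dx(beta* du/dx) = 0
        E_n   = E + dt * (0.5 * u0**2 * diff(rho) + diff(rho_e))
    else:                                                # ATD: diffuse rho*e only
        phi_n, rho_n = phi.copy(), rho.copy()
        mom_n = rho * u
        E_n   = E + dt * diff(rho_e)
    p_n = pressure(phi_n, rho_n, mom_n, E_n)
    print(f"{name}: max |p - p0| = {np.abs(p_n - p0).max():.3e}, "
          f"max |u - u0| = {np.abs(mom_n / rho_n - u0).max():.3e}")
```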
§ ACKNOWLEDGMENTS
S. S. J. gratefully acknowledges partial financial support from the Franklin P. and Caroline M. Johnson Graduate Fellowship and Boeing Co. R.A. and P.M. acknowledge support from NASA's Transformational Tools and Technologies project grant, #80NSSC20M0201.
S. S. J. thanks Tim Flint for providing helpful comments on this work and for helping with the exact solution of the Sod shock-tube test case, and acknowledges fruitful discussions with Henry Collis.
|
http://arxiv.org/abs/2307.01101v1
|
20230703152608
|
Assessment of the Utilization of Quadruped Robots in Pharmaceutical Research and Development Laboratories
|
[
"Brian Parkinson",
"Ádám Wolf",
"Péter Galambos",
"Károly Széll"
] |
cs.RO
|
[
"cs.RO"
] |
Assessment of the Utilization of Quadruped Robots in Pharmaceutical Research and Development Laboratories
Brian Parkinson, Ádám Wolf, Péter Galambos, Károly Széll
==========================================================================================================
Drug development is becoming more and more complex and resource-intensive. To reduce the costs and the time-to-market, the pharmaceutical industry employs cutting-edge automation solutions. Supportive robotics technologies, such as stationary and mobile manipulators, exist in various laboratory settings. However, they still lack the mobility and dexterity to navigate and operate in human-centered environments. We evaluate the feasibility of quadruped robots for the specific use case of remote inspection, utilizing the out-of-the-box capabilities of Boston Dynamics' Spot platform. We also provide an outlook on the newest technological advancements and the future applications these are anticipated to enable.
Quadruped, Mobile Robotics, Autonomous Inspection, Laboratory Automation
§ INTRODUCTION
Over the past few decades, drug discovery and development have become increasingly complex in nature <cit.>. New modalities require a multidisciplinary approach to overcome this hurdle. An example is the adoption of new and innovative lab automation technologies in an effort to increase efficiency and accelerate development. R&D labs present a highly dynamic and compact environment whereby human operators are required to undertake repetitive and mundane tasks. Examples include the preparation and transportation of standards, reagents, and samples, as well as the maintenance of various unit operations, to name a few.
Advanced and continuous processing is another example of an innovative approach, aiming to accelerate drug production in a more dynamic and flexible manner, which presents many challenges <cit.>.
One of which is the need for operators to perform an on-call duty to monitor the overall system, check the status of unit operations, and potentially intervene in cases where situations go awry. Recent advancement in the technological readiness of collaborative robotics has revolutionized the pharmaceutical industry's ability to outsource a few of the aforementioned tasks. The adoption of robots is an effort to improve work conditions and allow human operators to undertake more intellectually challenging tasks such as data analysis and the development of novel methods and processes.
The paradigm shift towards automating laboratory-based activities also included the integration of increased computing power and robotic platforms; on the one hand, to aid the lab personnel and, on the other, to accommodate the hike in complexity. As such, robots have already been around in laboratories for decades, and the efficacy of the adoption has been well documented <cit.>. In recent years, the COVID-19 pandemic further accelerated the adoption of modern robotics and other intelligent technologies in laboratories and clinical environments <cit.>.
Traditionally, repetitive tasks like pipetting have been handled by integrating Cartesian pipetting stations into the lab workflow. Systems from well-known vendors, such as Tecan or Hamilton, are used for sample preparation, high-throughput screening, and the execution of bioassays <cit.>.
A novel advancement towards the beginning of the millennium was the appearance of bench-top sample transportation robots <cit.>. This enabled the feasibility of connecting stand-alone unit operations both physically and by means of a higher-level, over-arching control system. The intricacy of processes called for more versatile solutions, thus leading to the appearance of mobile manipulators (MoMas) in laboratory environments <cit.>. This was made possible by the advancements in the fields of mobile- and service robotics, including advanced navigation and motion planning capabilities <cit.>. Wheeled MoMas are already becoming ubiquitous for simple pick-and-place sample transportation applications <cit.>. However, some laboratory environments require even more mobility from the robot's side. For example, conventional wheeled MoMas are not capable of opening doors, climbing stairs, or retrieving information via an integrated sensing module. Moreover, use cases, such as remote inspection and error handling, would require robots to support a means of direct remote control or even telemanipulation. Most present-day wheeled MoMas lack the ability to fulfill these applications.
A typical R&D lab presents a compact environment that is subject to changes in modality and in the requirements of lab operators and engineers. A future state for such an environment would entail robotic platforms which act as lab assistants by either enabling remote presence when an operator is not physically available or by taking over easily robotized tasks such as the transport of cargo. The Spot platform presented the most technologically ready and versatile solution compared to the other products offered by Boston Dynamics (BD) <cit.>.
The aim of this paper is to discuss the initial strategy and path toward the development of Spot as a platform capable of assisting and alleviating the workload that a lab operator has. Moreover, we discuss how Spot overcomes specific limitations seen by present-day MoMa solutions and evaluate areas of improvement in the context of lab automation. For this purpose, we utilize two Spot units, each featuring a different payload configuration, as shown in Fig. <ref>.
In Section <ref>, we focus on use cases in life science laboratories. We assess the user needs in the form of a non-representative survey and list a few considerations regarding their implementation. In Section <ref>, we introduce the Spot platform as a means to overcome some of the limitations that state-of-the-art solutions are facing. We first focus on Spot's basic capabilities and typical applications. In Section <ref>, we present our proof-of-concept (PoC) study on remote inspection. We specifically do this in a life science laboratory setting, which is a new domain, compared to state-of-the-art applications. We also discuss the usability of Spot's manipulator arm and the capacity to execute autonomous missions. In Section <ref>, we provide an outlook on the anticipated advancements in terms of Spot's capabilities. Finally, in Section <ref>, we conclude and summarize the paper.
§ USE CASES IN LIFE SCIENCE LABORATORIES
In order to assess potential use cases specific to an R&D lab footprint, we conducted virtual interviews and a user survey. Both formats began with a high-level introduction to Spot and its out-of-the-box capabilities. The participants included both operators and engineers, as well as project leads and department heads from the process development team for biologics.
The purpose of the interviews and user survey was to leverage the experience of participants familiar with lab-based activities and identify specific areas in which Spot's capabilities can be utilized in order to alleviate workload or provide assistance.
Results from the virtual interviews and user surveys were clustered based on the application examples, including visual inspection via remote operation, data capture and processing, and robotic manipulations.
One-on-one meetings
* Participants were introduced to Spot via a short presentation and were then asked for feedback on potential use cases for Spot within a lab footprint.
* Notes were taken for further evaluation
Online Survey
* Participants were provided with the link to a forms-based user feedback survey.
* Results were automatically processed and presented.
Results from both the interviews and user surveys confirmed that the remote operation of a robot would provide robust insight into the lab environment in the case of an adverse event. Operators & engineers referred to scenarios during on-call duty whereby an alarm was triggered by deviation from a set point within a given unit operation. Examples of unit operations include but are not limited to, bench-top bioreactors, surge tanks, and chromatography units. Participants envisioned a scenario where they could sign in and drive the robot to the site of the alarm and utilize an onboard camera to assess the event better. Additionally, participants claimed that they would be better suited to confirm if the alarm was indeed caused by a deviation or a false positive, thus preventing the operator from needing to travel on-site.
Participants also identified sample transportation as a potential application that could be outsourced to a robotic operator. Additionally, participants mentioned how labor-intensive and time-consuming sample transportation from work cell to work cell can be.
After being introduced to the robot's capabilities and industrial applications, the majority of participants suggested use cases that could be implemented within the out-of-the-box capabilities. These included remote and autonomous inspection. Participants also envisioned the possibility of utilizing the manipulator arm. Examples include controlling the arm remotely and intervening in the case of errors in automated lab processes. The next release of the Scout interface is expected to make this possible by enabling the control of the manipulator arm.
Another possible use case is the concept of using Spot within an ecosystem of various other autonomous mobile robots (AMR). The general notion would be to have Spot conduct regular reconnaissance missions, checking the lab footprint and ensuring that wheeled mobile robots have a clear path from start to destination.
Participants from a Process Analytics background identified sample transportation and robotic manipulations as potential use cases. Participants within process analytics usually analyze samples stored in ANSI-SLAS-format[Meets the Standards ANSI/SLAS 1-2004 through ANSI/SLAS 4-2004.] labware such as microtiter plates. A pick-and-place application would represent the opportunity to relieve technicians of transporting samples from one work cell to another or from one room to another, an otherwise laborious and time-consuming task. The precise motion control enabled by the robotic arm provides a solution devoid of sequential teaching of robotic vectors. However, the success rate for picking objects of this kind is not yet adequate for routine sample transportation. Possible workarounds could center around a designated 'sample pick-up' station, to which Spot would navigate and complete a predefined, robustly tested sequence of movements. Additionally, a 3D-printed stowing section could be designed and mounted onto Spot, allowing the quadruped to pick and stow the samples, ensuring that samples are not exposed to irregular movements caused by the manipulator arm whilst picking up and carrying objects.
§ IDENTIFYING A SUITABLE MOBILE ROBOTIC PLATFORM
To answer the need expressed by the users for an easy-to-use remote inspection solution, we reviewed the currently-available mobile robot solutions on the market. Our attention was brought to BD's portfolio.
To counter the effects of labor shortage, increase working efficiency and mitigate monotonous and repetitive tasks, BD is striving towards providing general-purpose robots which will have the ability to serve in human-centric environments. The path envisioned starts with implementing robust mobility and subsequently developing more advanced autonomy. BD has already demonstrated how wheeled and legged robots can achieve the desired level of flexibility and maneuverability. The ultimate goal is to implement skilled and niche manipulation, for which BD has developed the Atlas project as a basic research platform. Atlas is a biped robot designed to perform complex maneuvers and provides insight into how a humanoid robotic platform would handle mundane tasks by interacting with tools and equipment originally designed for humans <cit.>.
Stretch, on the contrary, is a mobile robotic platform intended for heavy-duty, repetitive tasks typically found in warehouse operations, i.e., on- and off-loading of logistics trucks as well as case handling. Therefore, the Stretch unit features an articulated robotic manipulator arm with an advanced gripper, in addition to advanced computer vision technology for object detection, identification, and sequence planning <cit.>.
Finally, Spot, the quadruped robot, is now a mature commercial product boasting a plethora of versatile capabilities. Equipping it with advanced perception and simple manipulation features is the next step. Our paper provides an overview of the usage of the currently-available functionalities, and we also provide an outlook on what is currently being worked on, specifically focusing on Spot.
§.§ The Spot quadruped robot
Spot's basic capabilities center around its advanced mobile autonomy, which is further enhanced by its quadruped design. Therefore, it is capable of robustly navigating a variety of different surfaces and terrains, including rocky environments as well as stairs, giving it an advantage over wheeled mobile robots. Figure <ref> illustrates the main body components of the robot that provide this level of autonomy. Additionally, its IP54 rating (ingress protection rating, which assesses how well a device is protected against dust and rain) allows it to operate outdoors, even in rainy or dusty circumstances.
The quadruped features projected stereo cameras facing in five directions, permitting a 360-degree field of view. This enables localization and obstacle detection for autonomous navigation. The robot generates a 3D representation of its surroundings which in turn allows for path planning.
Regarding connectivity, Spot uses WiFi to communicate with other devices, such as the standard tablet controller provided by BD. Spot is capable of hosting other clients via its own WiFi access point or as a client, allowing it to be integrated into broader networks for communication with multiple devices. This forms the basis for the networking architecture, which was used to establish the remote troubleshooting use case.
Additionally, Spot's application can be further enhanced by the integration of multiple different payloads, which are either provided by BD or can be custom-made. A variety is offered by BD <cit.>, including:
* A pan-tilt-zoom (PTZ) camera for a 360° view, a high-resolution robotic zoom camera, and a thermal & infrared (IR) detector, enabling image acquisition and sensing (Fig. <ref>, left).
* The Spot arm, (Fig. <ref>, right) is utilized for gross manipulations, such as pick and place, constrained manipulation, which entails turning valves, pulling levers, etc., and door opening.
* An additional computational unit called the Spot Core, which enables the implementation of custom applications that run onboard the robot.
* Extended autonomy payload (EAP2), which provides a wider range of perception than the surround cameras thanks to a LiDAR sensor. This gives the robot more confidence in localizing itself and navigating in different environments.
§.§ Typical application areas
Typical application areas for Spot include remote-controlled or autonomous operation within industrial settings <cit.>, e.g., oil rigs, (nuclear) power plants <cit.>, construction sites <cit.>, and mines. Disaster and safety response scenarios are also covered, generally in situations where dangerous environments could endanger the lives of human operators.
Usage of Spot and similar quadrupeds also gained prominence and notoriety during the height of the COVID-19 pandemic. The Unitree Laikago quadruped was utilized for enforcing social distancing <cit.>. Spot was used within healthcare and clinical settings in order to minimize and mitigate risk and exposure between healthcare workers and patients <cit.>. Spot's custom payload integration enabled telepresence, delivery of medicine to patients, and inspection of patients' vital signs; in particular, body temperature, which was a key indicator of possible infection.
Spot's utilization within healthcare and clinical settings opens the door for potential future states whereby mobile robots of a similar framework are used for a variety of applications, two examples that would provide benefit to a wide range of medical or pharmaceutical environments are disinfection of either a hospital room or a lab with a high biosafety level (BSL) and internal delivery of particular goods within a given environment.
Life science laboratories, particularly research and development (R&D), require a different set of requirements, which we have begun to test and evaluate.
§ CONSIDERATIONS
The strengths of Spot and its out-of-the-box capabilities lie in navigating different terrains and having the ability to be equipped with different payloads for different tasks. The typical lab robot use case of pick-and-place labware transportation is out of scope for now. Precise motion planning can be achieved through the Spot API and will be evaluated further in future experiments. Limitations of the robotic arm module mainly center around the end effector, which is currently not suited towards handling labware such as microtiter and deep-well plates, typically used in an R&D lab. It would be possible to create a custom end effector specific for handling labware of this nature.
The standing height of the robot is also a limitation, as it is unsuitable for a wide coverage of bench-top devices and unit operations. Moreover, the standing height of the robot and the range of the Spot arm also impact and limit the set of robotic manipulations that it can conduct.
§ PROOF-OF-CONCEPT STUDIES
§.§ Equipment
In order to conduct the PoC studies, we used the two Spot units introduced in Section <ref>. Both were equipped with the computational unit, called Spot core, which enables advanced data processing and enhances communication applications onboard the robot. One of the units was also fitted with the robotic manipulator arm, called Spot Arm, whereas the other unit carried the PTZ camera as the primary payload.
Enabling remote access was a key factor in setting up the experimental environment. This involved connecting both robots to a wireless local area network (W-LAN) in client mode. The Site Hub server, which hosts the web-based control interface, needs to be integrated into the corporate network. The robots must be in the same domain, reachable (routed) from the Site Hub and the control tablet. Thus, within the corporate network (intranet), any user laptop is able to reach the browser interface. Fig. <ref> shows the experimental network setup.
§.§ Remote inspection
We conducted PoC studies within a pharmaceutical laboratory, which was geared toward process development. We utilized the Spot with the PTZ camera, in addition to the Spot with the robotic arm and the web control interface known as Scout.
Primarily focusing on the remote inspection use case, specifically visual inspection, we introduced Spot as a robotic platform and presented its out-of-the-box capabilities in order to give the participants a high-level overview. Thereafter, we clustered the use cases stemming from the user surveys and interviews into specific applications.
We focused on a scenario whereby a device had lost connection to a process control system or if it were an outdated (legacy) device without a suitable interface and would require a readout of a display.
This enabled us to perform the tests in the aforementioned sections. The procedure included the following steps; a scripted sketch of the image-acquisition step is given after the list:
* Logging in to the Scout interface from the office PC and activating the robot
* Driving it via the camera views and the 3D representation of the environment to the designated laboratory
* Switching to Inspection mode; activating the PTZ camera
* Localizing the instrument for the readout
* Zooming in and triggering an image acquisition, thus storing an image locally
* Navigating back to the idle position of the robot in the previous room, where the experiment finished.
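The readout step above was performed interactively through the Scout interface; for completeness, a minimal sketch of how a comparable image acquisition could be scripted with Boston Dynamics' publicly available Python SDK (bosdyn-client) is shown below. The robot address, credentials, and output filename are placeholders, and since the Spot CAM PTZ payload is exposed through a separate spot_cam service with its own API, a body-camera source is used here purely for illustration.

```python
import bosdyn.client
from bosdyn.client.image import ImageClient

# Placeholders: robot hostname/IP and credentials are deployment-specific.
ROBOT_HOSTNAME = "192.168.80.3"
USERNAME, PASSWORD = "user", "password"

sdk = bosdyn.client.create_standard_sdk("LabInspectionClient")
robot = sdk.create_robot(ROBOT_HOSTNAME)
robot.authenticate(USERNAME, PASSWORD)
robot.time_sync.wait_for_sync()

# Acquire a single frame from one of the body cameras; the Spot CAM PTZ used
# in the experiment is accessed via the separate spot_cam service instead.
image_client = robot.ensure_client(ImageClient.default_service_name)
response = image_client.get_image_from_sources(["frontleft_fisheye_image"])

with open("instrument_readout.jpg", "wb") as f:
    f.write(response[0].shot.image.data)
```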
Fig. <ref> shows four key screenshots of the experiment. In subfig. <ref>, the main view of the PTZ camera can be seen, while subfig. <ref> shows the real-time point-cloud representation of the environment. The target instrument can be seen in subfig. <ref>, placed on a conventional laboratory bench. Finally, subfig. <ref> shows the readout image from the scale's display. The low viewing angle leads to a sub-optimal view of the LCD screen, producing a slight ghosting effect. The height of Spot is a limiting factor, both for inspection activities in human environments and for bench-top manipulation.
§.§ Manipulation
Spot Arm is capable of user-initiated autonomy, in which the user triggers certain actions via the controller interface provided by the Android tablet. Users can select specific objects for the robot to pick by point and click. It should be noted that the user has to take care that the robot arm does not collide with the environment. Additionally, implementing robotic manipulations within pre-recorded missions (otherwise known as Autowalk) is also a possibility.
Currently, control of the manipulator arm is not available via the Scout solution. This is a feature that BD is currently developing and will address with a future release.
§ USER TESTS
We conducted user tests aimed at simulating a scenario whereby operators remotely access, drive, and return the Spot robot through a given lab footprint, identify points of interest (PoIs) that would be subject to technical difficulties, and conduct a visual inspection.
The user test began by giving participants an introduction to the Scout Solution interface, followed by a demonstration on how to remotely control and navigate the Spot and operation of the Spot CAM payload.
Users were then asked to remotely control and navigate the robot from a home position to an upstream process-development lab and visually inspect unit operations such as bioreactors and their direct control units (DCUs). Following the visual inspection and data capture, participants returned Spot to the home position and docked the unit.
After the practical section of the user test had concluded, participants were asked to complete a questionnaire, where they rated the user interface and user experience and provided feedback.
A total of 9 participants took part in the user test. Participants were from various technical and non-technical functions within the R&D organization and had varying experience levels in working and controlling robots.
Most respondents reported that the overall user interface and experience were very intuitive and provided a relatively easy means of controlling the robot, despite it being their first time operating any kind of robot remotely. On the other hand, participants also mentioned experiencing latency whilst remotely controlling Spot and the Spot CAM payload, which resulted in poor response times.
The users, on average, rated the remote control function at 6.4 (out of 10) and the user interface and user experience at 7.2 (also out of 10).
§ DISCUSSION
Operators and technicians are required to respond to alarms during off-duty hours in process development. Examples include pO2 and CO2 sensor alarms for bioreactor unit operations; other examples include alarms detecting a significant change in temperature for laboratory refrigerators, housing equipment such as filtration units, surge tanks, or chromatography units.
For operators and engineers working on continuous and advanced process development who live in close proximity (10-20 km) to the facility, responding to such an alarm translates to approximately 20 minutes (best case) of travel time by car, followed by an additional 15-20 minutes to pass security, put on appropriate lab personal protective equipment (PPE), check the status of the alarm, and take adequate measures to assess and then mitigate the problem. Moreover, sensors are prone to producing false-positive alarms, and the information provided by DCUs for specific unit operations may not adequately explain the exact root cause, i.e., issues caused by sub-components such as valves or tubing.
Utilizing the Scout solution as a first step could save 30-40 minutes of travel and operation time by navigating Spot to the PoI and assessing the issue visually via remote presence.
A future perspective would involve Spot being employed as a first line of response to alarms caused by faulty gas sensors. This has wider implications for its deployment, since this topic affects all site labs and aligns with the objectives of the environmental, health, and safety (EHS) department.
§ OUTLOOK AND FUTURE WORK
Part of the remote inspection activity could be implemented with the autonomous navigation functionality, Autowalk, in that the user would not need to drive the robot manually but could instead record a specific path once. Spot could then replay this pre-programmed path to arrive at a PoI. However, to achieve full autonomy, the enhanced autonomy module (EAP2) would be needed, and topological mapping needs to be implemented to enable arbitrary navigation between PoIs in a graph-like representation. Oxford Robotics Institute's (ORI) AutoInspect solution offers Simultaneous Localization and Mapping (SLAM) of this sort <cit.>.
Human-initiated autonomy is a key principle in leveraging mobility, perception, and manipulation skills. This represents a middle ground between full autonomy and direct telemanipulation. The former means that the robot performs pre-recorded actions and uses built-in algorithms for decision-making. This includes high-level semantic planning, whereby a set of actions is laid out based on a desired state specification. The actions then need to be translated to low-level physical interactions with the environment, e.g., in the form of motions being expressed geometrically <cit.>. In the case of telemanipulation, a human is responsible for determining the sequence of actions and also of executing the motions. The robot, in this case, can be considered to be an extended actuator for the human brain. In most cases, the motions of the human operator are mapped to the robotic actuators. A typical example is surgical robotics, where the surgeon controls a special master joystick, and the movements are executed by one or more endoscopic actuators <cit.>.
In this case, delays between the master (controller) and the slave (actor) units' motions can be catastrophic. In most instances, however, there is a direct-wired connection, and the surgeon is sitting in the next room. In situations whereby the signal has to travel greater distances, the delays can impede the usability of the system. A technique to resolve this issue is to leave a certain amount of autonomy to the robotic actor, as presented by Schmaus et al. in the context of orbit-to-ground teleoperation for space exploration applications <cit.>. By automating certain sub-tasks and leaving only high-level control to the (remote) human operator, the time-critical actions can be controlled locally, thus enabling a quicker response. Quere et al. present the concept of shared control templates for assistive robotics <cit.>. Key factors to enable this are suitable knowledge representation frameworks <cit.> and appropriate cognition on the robot's side, which enables autonomous task planning and execution.
Potential future work may include implementing the above-mentioned concepts for Spot and/or other mobile manipulator platforms. This will include further assessing BD's present and upcoming capabilities regarding manipulation. Furthermore, the perspective of combining advanced telemanipulation technologies, such as SRI's Robot Telemanipulation System <cit.> with different autonomous mobile robot (AMR) platforms, needs to be evaluated.
§ CONCLUSION
We provided a perspective on utilizing Spot, the quadruped robot of BD, in a pharmaceutical R&D laboratory environment. We assessed the capabilities of two specific payloads, including the pan-tilt-zoom camera and the manipulator arm. We conducted user interviews to generate use cases that are specific to a lab footprint. We did initial testing in the scope of remote troubleshooting with BD's web-based Scout interface. Finally, we provide an outlook on manipulation and other advanced capabilities.
§ ACKNOWLEDGEMENTS
This work was funded by Baxalta Innovations GmbH, a Takeda company. We thank our teammates for the fruitful collaboration and their help in reviewing the article: Michael Schwaerzler, Masatoshi Karashima, Patricia Wildberger, Nozomi Ogawa, and Seishiro Sawamura.
This work was supported by the Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University.
Péter Galambos and Károly Széll thankfully acknowledge the financial support of this work by project no. 2019-1.3.1-KK-2019-00007 implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the 2019-1.3.1-KK funding scheme. Péter Galambos is a Bolyai Fellow of the Hungarian Academy of Sciences. Péter Galambos is supported by the UNKP-22-5 (Bolyai+) New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
We thank Boston Dynamics for reviewing and approving the article.
§ CONFLICT OF INTEREST STATEMENT
Brian Parkinson and Ádám Wolf are employees of Baxalta Innovations GmbH, a Takeda company, Vienna,
Austria.
|
http://arxiv.org/abs/2307.02672v1
|
20230705220438
|
GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations
|
[
"Julia Lust",
"Alexandru P. Condurache"
] |
cs.LG
|
[
"cs.LG",
"cs.CV"
] |
GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations
Julia Lust
Robert Bosch GmbH, Stuttgart, Germany
University of Lübeck, Lübeck, Germany
juliarebecca.lust@de.bosch.com
Alexandru P. Condurache
Robert Bosch GmbH, Stuttgart, Germany
University of Lübeck, Lübeck, Germany
alexandrupaul.condurache@de.bosch.com
August 1, 2023
============================================================================================================================================================================================================================================================================
Deep neural networks tend to make overconfident predictions and often require additional detectors for misclassifications, particularly for safety-critical applications. Existing detection methods usually only focus on adversarial attacks or out-of-distribution samples as reasons for false predictions. However, generalization errors occur due to diverse reasons often related to poorly learning relevant invariances. We therefore propose GIT, a holistic approach for the detection of generalization errors that combines the usage of gradient information and invariance transformations. The invariance transformations are designed to shift misclassified samples back into the generalization area of the neural network, while the gradient information measures the contradiction between the initial prediction and the corresponding inherent computations of the neural network using the transformed sample. Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures, problem setups and perturbation types.
§ INTRODUCTION
Deep Neural Networks (DNNs) have become the standard approach for a wide variety of tasks such as speech recognition and especially computer vision <cit.>. Despite their success, they suffer from the tendency to make overconfident predictions. For example, the softmax score of the winning class is a poor measure of the prediction's uncertainty <cit.>. However, a reliable uncertainty prediction is important, especially when DNNs are considered for safety-relevant tasks, such as autonomous driving <cit.> or medical prognoses <cit.>, where errors can have fatal consequences.
Investigating reasons for errors of a DNN is directly related to its generalization behaviour – the ability of a DNN to correctly classify unseen data. The generalization ability is usually evaluated using a test set independent from the training set. Both sets are sampled from the problem space, typically following the same sampling distribution. The generalization ability depends on the architecture, the training procedure and especially the training data. A generalization area can be assigned to each trained DNN as the region of the problem space in which the DNN decides reasonably correctly on the input samples <cit.>. Among the reasons limiting the generalization area in practice are inadequate sampling of the problem-space distribution, distribution shifts or a poorly chosen model capacity. Samples outside of the generalization area will likely be misclassified and should be detected at inference time.
In this paper we focus on image classification DNNs and briefly touch upon object detection. We believe the concepts we address are applicable to many deep learning approaches.
Currently, the topic of detecting samples outside the generalization area for image classification is covered in three distinct literature fields: Predictive Uncertainty, Adversarial Examples and Out-of-Distribution Detection. Each field investigates a specific reason for misclassification and proposes different methods geared towards the detection of the corresponding misclassified samples. Predictive Uncertainty considers misclassifications that occur randomly and typically close to samples inside the generalization area <cit.>. Adversarial Examples are constructed to fool the DNN on purpose for example by shifting the image via slight changes into small pockets of limited occurrence probability during training <cit.>. Out-of-Distribution data includes samples that are from outside the problem space, e.g. samples from another dataset <cit.>.
Misclassifications occur when data points undergo various perturbations that push them outside of the generalization area. The perturbation moves a data sample along invariance directions up to the point where it surpasses the amount of invariance that is captured by the model. We believe that current literature lacks a method and an inference time evaluation procedure that combines the following objectives:
O1 Detecting misclassifications caused by (real-world) relevant perturbation types. Most current literature only considers adversarial perturbations which are not as real-world relevant as e.g. the corruptions proposed by Hendrycks et al. in their ImageNet-C perturbation collection <cit.>.
O2 Considering perturbed but correctly classified data. If a classifier is invariant to perturbations up to a certain amplitude, then only some samples would be wrongly classified. Perturbed samples not leading to a wrong prediction by the DNN should not be detected as a misclassification but accepted as correctly classified. In current set-ups these samples are either ignored or even considered in the class of misclassifications.
O3 One detection method that generalizes to all reasons of misclassifications. Many methods are trained and tested for each corruption type separately. However, in real-world applications several different types of corruptions may be relevant for the generalization behavior. Therefore, a setup that properly evaluates the generalization ability of the detection method is necessary.
Some of these objectives are met by the methods Mahalanobis <cit.> and GraN <cit.>. While Mahalanobis is evaluated on adversarial as well as out-of-distribution data, the authors considered neither perturbed data that is still correctly classified nor real-world-relevant perturbations. GraN, on the other hand, considers a large number of different perturbation types and furthermore includes some correctly classified perturbed samples in its evaluation. However, the method has not been tested for its generalization ability between different perturbations. We found that both methods have weaknesses when tested on a setup that meets all the above objectives.
Therefore, we introduce GIT (Fig. <ref>), which is designed to consider several reasons for misclassification using Gradient features and multiple Invariance Transformations. The latter are based on prior knowledge about key invariances that need to be properly handled by the DNN for a correct decision. They are thought to shift samples outside the generalization area back into it, while keeping samples already inside the generalization area within it. For shifted samples this increases the contradiction between the DNN and the initial prediction, e.g. when many paths of the network, characterised by the corresponding weights, point towards a different output for the transformed input. This contradiction can be measured by the gradient of the weights computed for the loss function comparing the original and the transformed output.
We evaluate GIT on two classification network architectures (ResNet and DenseNet) and on four datasets (SVHN, CIFAR-10, CIFAR-100 and ImageNet). According to our objectives, we use eleven different perturbation types (O1), include perturbed data in the class of correctly classified images (O2) and evaluate GIT when it only has access to one of the perturbation types during training and has to generalize to the others (O3). Additionally, we perform experiments for object detection using the KITTI dataset and a Single-Shot-Detector network to further test the adaptability of GIT. GIT achieves close to state-of-the-art performance for out-of-distribution tasks and significantly outperforms the baseline methods for more general real-world and adversarial corruptions. Additionally, it is able to generalize among perturbations and it can also be applied successfully to object detection.
§ RELATED WORK
There are currently three literature fields that address generalization failures at inference time: Adversarial example detection, out-of-distribution detection and predictive uncertainty. Each field covers one specific reason for misclassifications and only a few works deal with connections across those literature fields. However, when having a closer look into the methods of these fields, they can all be split into four main categories <cit.>: generative, inconsistency, ensemble and metric based approaches.
Generative methods are typically based on an Autoencoder or a Generative Adversarial Network that is trained to shift images into the direction of the training distribution. For adversarial images the goal is to remove the adversarial perturbation and gain an image that can be correctly classified <cit.>. In the case of out-of-distribution, a huge difference between the input and the output image of the generative method hints towards a distance to the training distribution <cit.>.
Similarly, inconsistency methods based approaches expect the output of the DNN for a misclassified image to be more sensitive to small changes in the image. In comparison to generative methods they use rather simple transformations. For out-of-distribution detection, ODIN is a well-known approach <cit.>. Its transformation is based on a gradient descent procedure with step-size ϵ performed on the original image x maximising the loss function L computed for the output of the network F(x) and the predicted class y
x̃=x-ϵ·sign(-∇_x L(x,y)) .
Hsu et al. introduced a more sophisticated version of ODIN <cit.>. Unfortunately, existing inconsistency methods do not work well for some adversarial attacks <cit.>.
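To make the input pre-processing of Eq. (<ref>) concrete, a minimal PyTorch-style sketch is given below; the function name, the model interface and the step size are illustrative assumptions rather than the reference implementation.

import torch
import torch.nn.functional as F

def odin_perturb(model, x, eps=0.002):
    # One signed-gradient step that increases the loss w.r.t. the predicted class,
    # i.e. x_tilde = x - eps * sign(-grad_x L(x, y)).
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    y = logits.argmax(dim=1)
    F.cross_entropy(logits, y).backward()
    return (x - eps * torch.sign(-x.grad)).detach()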
Ensemble methods are based on an ensemble of DNNs. The more the outputs of the DNNs differ for the same input, the more likely the image is to be misclassified. Well known examples are Monte-Carlo-Dropout <cit.> and methods building upon Bayesian Neural Networks <cit.>. Furthermore, there are ensembles in which each network is trained on slightly different training sets or output requirements <cit.>. Recently, distillation methods based on ensemble results are gaining more attention. The ensemble of models allows a single DNN to explicitly model a distribution over the outputs <cit.>. Ensemble methods are mainly used in the category of Predictive Uncertainty and out-of-distribution detection.
Metric methods use stochastic principles to evaluate whether the current input sample behaves similarly to the correctly classified input samples investigated during training, using activation or gradient information. Earlier, such methods were typically based on the activation outputs of the network. A well-known Adversarial Examples detection approach is LID <cit.>. It is based on a Local Intrinsic Dimensionality score, a weighted distance based on k-nearest neighbours from the training set. The method Mahalanobis works similarly to LID and is based on the Mahalanobis distance M(x)_l computed for each layer l of the DNN <cit.>. The Mahalanobis distance is computed for each layer-output f(x)_l of the test sample x and the closest class-conditional Gaussian distribution defined by the mean of all layer-outputs μ_y and the layer-output covariance Σ_y for training samples of the class y
M(x)_l = -(f(x)_l - μ_y)^T Σ_y^-1 (f(x)_l - μ_y) .
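As an illustration of how this layer-wise score can be evaluated, a small NumPy sketch follows; the variable names and the use of a single (tied) covariance matrix per layer are simplifying assumptions.

import numpy as np

def mahalanobis_layer_score(f_x_l, class_means, cov_inv):
    # f_x_l: layer output of the test sample, shape [D]
    # class_means: per-class means mu_y of the layer outputs, shape [C, D]
    # cov_inv: inverse covariance of the layer outputs, shape [D, D]
    diffs = class_means - f_x_l[None, :]                   # [C, D]
    m = -np.einsum('cd,de,ce->c', diffs, cov_inv, diffs)   # -(f - mu_y)^T Sigma^-1 (f - mu_y)
    return m.max()                                         # score of the closest class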
The Mahalanobis distance is combined with a gradient shift procedure similar to that described in (<ref>). Another new approach for adversarial detection uses additional procedures such as classification networks consisting of one fully connected softmax layer on top of each activation layer <cit.> or more sophisticated k-nearest neighbour procedures in combination with influence functions <cit.>. Other metric based methods in this field are dominated by computationally expensive layer-wise higher-order Gram matrices <cit.> or additional residual flow procedures <cit.>. Currently, gradient-based methods are gaining importance, in which the gradient of the network with respect to a loss function computed on the predicted class and the corresponding softmax output serves as the feature vector for the detector <cit.>. Recently, GraN <cit.>, which combines gradients with Gaussian smoothing, outperformed the state-of-the-art method Mahalanobis <cit.> in the combined area of Adversarial Examples and Out-of-Distribution detection. GradNorm <cit.> reported state-of-the-art performance in the detection of Out-of-Distribution samples. GradNorm uses a uniform vector instead of the predicted class vector for gradient computation and only considers the gradients of the last layer.
Our method GIT combines the fields of metric and inconsistency methods by using invariance transformations and gradient information. Mahalanobis <cit.> and GraN <cit.> are the only methods that have been evaluated on more than one reason for misclassification. Due to their state-of-the-art performance in this field they will serve as baseline methods in our experiments. As an additional baseline we evaluated GradNorm, since, like GIT, it is based on gradients and currently achieves state-of-the-art performance in the field of out-of-distribution detection.
§ METHOD
GIT detects whether an input is misclassified during inference time of a DNN. The architecture of GIT is visualised in Fig. <ref>. GIT consists of three components: invariance transformations, which shift misclassified data samples x back towards the generalization area while not affecting correctly classified samples; feature extraction based on gradient information; and the head, which combines the features into the output p ∈ [0,1] stating whether the input x is misclassified. A GIT stream is defined by one transformation in combination with the corresponding gradient based feature extraction. The head fuses all streams.
The intuition behind GIT is that three possible cases can occur when distinguishing a data point outside, from a data point inside of the generalization area:
* Data from inside the generalization area is modified by the transformations in order to eliminate variance along directions of invariance already captured by the model. Therefore this data remains in the generalization area and correctly classified data does not lead to high gradients.
* Data from outside the generalization area can be shifted inside the generalization area by at least one transformation. In this case the transformation eliminates the variance along directions of invariances that are not captured in the model. This leads to high gradients used as features in the detection head of GIT. The data point is detected as outside of the generalization area.
* If the data point is outside of the generalization area but a single stream is not able to provide meaningful gradient features, the multi-stream approach allows several other streams to contribute meaningful gradients. These can be leveraged by the detection head of the classification chain, leading to a successful detection.
§.§ Invariance Transformations
Generalization errors are caused by poorly learned invariances. Each stream of the multistream architecture is based on an invariance transformation. The transformations cover prior knowledge on invariances that need to be captured by the classifier in order to correctly classify samples <cit.>. Depending on the amount of prior knowledge concerning relevant invariances, the number and the concrete transformations can be adjusted. There is one stream per invariance transformation plus an additional identity stream as discussed above. In this paper we propose three invariance streams based on filter applications dealing with global invariances and one stream based on an autoencoder thought to cover local invariances. Each stream T_i generates a transformed image
x_i = T_i(x_0)
from an input image x_0.
Gaussian filters use a two dimensional symmetric kernel derived from a Gaussian distribution G(u,v) approximated for discrete pixel values <cit.>. The resulting discrete values build a filter mask. This filter was originally used in the smoothing step of GraN <cit.>. It is supposed to eliminate classification errors due to additive noise. Furthermore, to a certain extent it also eliminates high-frequency image content that often is responsible for spurious correlations being learned during training.
Wiener filtering eliminates classification errors due to poorly captured invariance to sensor noise and blur caused by, for example, poor optics. It minimizes the mean squared error between a noisy image and the filtered image under the assumption of a known, stationary noise and frequency response of the imaging system <cit.>. The noise level is a hyper-parameter defining the final filter.
Median filtering eliminates classification errors due to poorly captured invariance to salt-and-pepper noise, e.g. caused by corrupt pixels. A Median filter selects the median value from all pixels in the appropriate square neighbourhood around the target pixel. The Median filter only depends on the hyper-parameter defining the size of the filter window.
Autoencoders <cit.> are supposed to eliminate classification errors due to poorly captured unspecified in-distribution invariances. They are trained using a loss function comparing the input to the output image for images of the training set of the method under test, i.e., the classifier whose generalization area is analysed.
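A minimal SciPy-based sketch of the three filter streams is shown below; the hyper-parameter values, channel handling and function name are illustrative assumptions (the autoencoder stream is omitted for brevity).

import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from scipy.signal import wiener

def filter_streams(x0, sigma=0.5, median_size=3, wiener_noise=0.05):
    # x0: image of shape [H, W, C] with float values; filters act per channel.
    return {
        "identity": x0,
        "gaussian": gaussian_filter(x0, sigma=(sigma, sigma, 0)),
        "median":   median_filter(x0, size=(median_size, median_size, 1)),
        "wiener":   np.stack([wiener(x0[..., c], mysize=3, noise=wiener_noise)
                              for c in range(x0.shape[-1])], axis=-1),
    }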
§.§ Gradient Information
For each stream i ∈{ 1,…,n } the gradients measure the contradiction between the current prediction y and the output of the transformed sample F(x_i) within the network. For the computation of the gradients the classification DNN performs a forward pass for the original image x_0 and the transformed image x_i. For the original image the predicted class y is derived as the index of the largest value of the output F(x_0) of the network. The output F(x_i) of the transformed image x_i and the predicted class y as a one-hot vector are compared using the cross entropy loss function L(·,·). In the next step, the gradient
for the network weights ω is computed by a backward pass of the DNN.
The large vector of gradients is reduced to a smaller set of features applying a layer-wise average pooling
∂L(F(x_i),y)/∂ω ↦ [ ||∂L(F(x_i),y)/∂ω_1||_1; ⋮; ||∂L(F(x_i),y)/∂ω_L||_1 ].
For each layer l ∈{1,…,L} of the DNN, the gradients regarding the layer's weights ω_l are replaced by their L_1 norm. This results in a feature vector of size L which reflects the number of layers in the DNN.
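The feature extraction of a single stream can be sketched in PyTorch as follows; grouping the gradients per parameter tensor (rather than strictly per layer) and the function interface are simplifying assumptions.

import torch
import torch.nn.functional as F

def gradient_features(model, x0, x_i):
    # Compare the prediction for the original image x0 with the output for the
    # transformed image x_i and collect the layer-wise L1 norms of the gradients.
    with torch.no_grad():
        y = model(x0).argmax(dim=1)        # predicted class for the original image
    model.zero_grad()
    loss = F.cross_entropy(model(x_i), y)  # contradiction between prediction and transformed output
    loss.backward()
    feats = [p.grad.abs().sum().item()     # ||dL/d omega_l||_1
             for p in model.parameters() if p.grad is not None]
    return torch.tensor(feats)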
§.§ Head
For each stream the feature vector is processed by a logistic regression network. The outputs of all streams are then combined by another logistic regression network to one single value p.
Training: Training and validation data for the misclassification detection task (i.e., analysing the generalization area) can be gathered from correctly classified and misclassified samples of the classification dataset. Each logistic regression stream is trained and hyper-parameter optimised individually. Then, the logistic regression network combining the individual streams is trained on the predicted outputs.
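A possible sketch of the stream-wise training and the fusing head using scikit-learn logistic regression is given below; the data layout and hyper-parameters are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_head(X_streams, y):
    # X_streams: one [N, L] gradient-feature matrix per stream; y: 1 = misclassified.
    stream_models = [LogisticRegression(max_iter=1000).fit(X, y) for X in X_streams]
    stacked = np.column_stack([m.predict_proba(X)[:, 1]
                               for m, X in zip(stream_models, X_streams)])
    fusion = LogisticRegression(max_iter=1000).fit(stacked, y)  # combines all streams into p
    return stream_models, fusion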
§ EVALUATION
We evaluate GIT on several problem setups corresponding to the classification datasets CIFAR-10, CIFAR-100 <cit.>, SVHN <cit.> and ImageNet <cit.> and on two popular models: DenseNet <cit.> and a ResNet <cit.>. The DNNs and their training procedure are adopted from reference <cit.> and, for ImageNet, from reference <cit.> (see Appendix for details). For each classification problem, we use the corresponding training set to train the models and build all perturbation setups based on the test set. Their generation is described in the next section. Each perturbation setup is split randomly into 80% training data, 10% validation data and 10% test data. As explained in Section 2 we use Mahalanobis, GraN and GradNorm as baseline methods. The trainable parts of Mahalanobis, GraN, GradNorm and GIT (own) are trained on the perturbation-setup training data, validated on the validation data and tested on the test set for each setup. The range of possible hyper-parameters for Mahalanobis is adopted from reference <cit.>. For GraN and GIT the standard deviation σ for the Gaussian smoothing is chosen from σ∈{ 0.1, 0.2, …, 1.0 }. For GIT the Median filter size is chosen from { 2× 2, 3× 3,…, 10 × 10} and the noise level for the Wiener filter from { 0.01, 0.02, …, 0.10 }. We trained the autoencoder on the problem-setup training set (see Appendix for implementation details). As evaluation score we use the area under the receiver operating characteristic curve (AUROC), which plots the true positive rate (TPR) against the false positive rate (FPR).
§.§ Perturbation setups
A sample is misclassified by a DNN if it is not within the generalization area of the DNN. Possible reasons for a sample to be outside the generalization area can be split into three main categories: Predictive Uncertainty, Adversarial Examples and Out-of-Distribution Detection. According to those categories we build eleven perturbation setups.
Predictive Uncertainty (P.Unc.): The original perturbation setup is built from the samples that have been misclassified by the classification network and the same amount of randomly chosen correctly classified images of the problem-setup test dataset. For the Gaussian, Shot and Impulse setups the test data is corrupted with the corresponding noise. The noise level is adapted such that half of the original data is misclassified and half of the data is still correctly classified by the DNN.
Explanation: These setups cover normal misclassifications and misclassifications caused by corruptions, e.g. due to the optical path and sensor setup. Sensor related corruptions are relevant in most computer vision tasks. Similar to Lust and Condurache <cit.> we therefore based these setups on the sensor related corruption types introduced by Hendrycks et al. <cit.>: Gaussian noise is based on a normal distribution and can appear in low-lighting conditions. Shot noise is generated using a Poisson distribution and simulates electronic noise caused by the discrete nature of light. Impulse noise is similar to salt-and-pepper noise for black and white images and can be used to simulate bit errors.
Adversarial Examples (Adv.): To generate Adversarial Examples four different attack methods are used: FGSM <cit.>, BIM <cit.>, Deepfool <cit.> and CWL2 <cit.>. The generation of the Adversarial Examples is adopted from Lee et al. <cit.>. Each attack is applied to the problem-setup test images with a corruption level leading to a misclassification in 50% of the images.
Explanation: Adversarial Examples are artificially generated samples that are constructed to fool a network into making a false decision. Usually, Adversarial Example methods shift a correctly classified, original image such that the predicted class changes while the difference is constructed to be as small as possible. Common Adversarial Examples detection setups only consider adversarially perturbed images that lead to a misclassification. We consider both: adversarial images leading to a misclassification, assigned to the set of wrongly classified images, and images not leading to a misclassification, assigned to the set of correctly classified images.
Out-Of-Distribution (OOD): In the Out-of-Distribution detection setups we used datasets different from the current training data as Out-of-Distribution data. If the current training data is CIFAR-10/100 then SVHN <cit.>, TinyImageNet (ImgN) <cit.> and LSUN <cit.> are used as Out-of-Distribution data; similarly for SVHN, where CIFAR-10 (C) <cit.> replaces SVHN. For ImageNet the corresponding large scale datasets iNaturalist (Nat) <cit.>, Places <cit.> and SUN <cit.> are used. Each Out-of-Distribution setup is built from one of the Out-of-Distribution datasets and correctly classified images from the problem-setup test data.
Explanation: This procedure is standard for Out-of-Distribution detection. It simulates the occurrence of data not present in the training distribution.
§.§ Results and Ablation Studies
In the following, we evaluate the performance of GIT in comparison to other methods, provide two ablation studies and show how to adapt GIT for object detection.
§.§.§ Comparing GIT to other Methods
We evaluated the performance of Mahalanobis, GraN, GradNorm and our detection method GIT (Tab. <ref>). For each dataset, model and perturbation combination the detection methods were trained only on the corresponding FGSM perturbation setup. We report the AUROC scores for each combination and method; the resulting TNR at TPR 95% evaluation is in the Appendix. Since the adversarial method DeepFool relies on calculating the gradients for each possible class per sample, it is computationally infeasible to apply it to the large scale dataset ImageNet with its 1000 classes. Furthermore, since simple autoencoders do not work well for large scale datasets we only consider the Gaussian, Median and Wiener transformations in the case of ImageNet.
GIT significantly outperforms GradNorm, GraN and Mahalanobis across a wide variety of setups, except for some Out-of-Distribution setups, showcasing its generalization ability among a multitude of perturbations. The distance-based method Mahalanobis achieves a good AUROC score only in the Out-of-Distribution setup. Its performance is particularly poor for DenseNet, which has many skip connections allowing information flow between all regions of the network. The Adversarial Example and Predictive Uncertainty setups are built such that the data points lie close to the classification boundary of the DNN. Correctly and incorrectly classified samples are similarly far away from the original data distribution. Distance-based methods such as Mahalanobis are good at detecting samples far away from the original distribution, as in the case of Out-of-Distribution setups. However, they are not able to consider whether the samples are on the correct side of the decision boundary. On the contrary, gradient based methods consider the contradiction within the network to the predicted class via the gradient. They are therefore more capable of detecting samples outside but close to the generalization area.
The performance of GIT for the Out-of-Distribution setups is mixed. In the case of OOD the invariance transformations are unable to shift the data point back inside the generalization area, which impedes the detection of such data. This effect can be compensated when adding Out-of-Distribution data during the training of GIT. In our ablation study (Sec. <ref>, Tab. <ref>) GIT could be significantly improved for Out-of-Distribution detection when increasing the variability in the perturbation-setup training data by adding Out-of-Distribution and Predictive Uncertainty training samples besides FGSM data.
GradNorm uses only the gradients of the last layer as feature input. This makes the performance more dependent on the used data and classification DNN. We considered this by adapting GradNorm using the gradients of all layers, which improved its performance but could not outperform GIT (see Appendix). GradNorm uses a uniform vector as target while GraN and GIT use the predicted class one-hot vector. The important difference is that for GradNorm only one of the two inputs of the loss function depends on the actual input. The integration of invariance transformations in order to compare the output of the original and a transformed image is therefore not possible. GraN and GIT can use this idea, while GIT additionally has the advantage of the multistream architecture, which enables it to further generalize by considering which gradient of which transformation stream carries the most relevant information for the specific unknown perturbation. This advantage allows GIT to significantly outperform the other methods in most problem setups.
§.§.§ Ablation: Relevance of the Different Streams
We evaluated the importance of different streams in GIT. For this purpose we tested different combinations in Tab. <ref>. In the first two columns we only used the Gaussian (Gaus.) and the autoencoder (AE) streams and gradually added the others. The more streams GIT considers, the better the detection performance of the method, irrespective of the ordering of the added streams. The method is able to automatically decide which stream delivers the most relevant information, which further justifies the multistream concept.
§.§.§ Ablation: Relevance of the seen perturbations
In our main experiment shown in Tab. <ref>, we used only data from the Adversarial Example method FGSM to train GIT (and the other trainable detectors). The experiments show that using Adversarial Example data, GIT can generalize to other classes of perturbations like Predictive Uncertainty and Out-of-Distribution (albeit with mixed results in the case of Out-of-Distribution detection). This training procedure is similar to the experimental setups currently usual in the state-of-the-art and provides a glimpse of the generalization capabilities of the detectors over various perturbations. However, in practice one would train on all available perturbations to achieve a detector with the best possible performance. Therefore, we further investigated the relevance of the variability of the seen perturbation training set by training GIT on only original data and on all perturbation datasets, as shown in Tab. <ref>. The original data is not enough to cover the whole problem space and, as expected, the overall performance rises as more perturbation types are seen during training. Conversely, the variability in the adversarial FGSM data seems to already cover the other Adversarial Examples and Predictive Uncertainty setups, since the results on these setups do not change much when training with all perturbations. However, some improvement can be achieved for Out-of-Distribution data when all perturbations are seen during training. We conclude that, although generalization between different perturbation setups works, when seeking the best possible detector it is important to include at least some perturbations from each category.
§.§ Extension to Object Detection
We extended the method GIT to be applicable in the field of Object Detection to demonstrate the simple adaptability of GIT to other vision based problems.
Necessary adaptations: The output of an object detection problem is a variable number of objects per image, each provided with classification and location information. The extraction of the gradients is based on the classification part of the network and all possible object candidates per image are considered individually. As for image classification, the gradients are computed using the classification loss function that receives as target the one-hot vector of the predicted class of the corresponding object candidates. Then, the training of the head and the application of the invariance transformations is directly adopted from the classification case.
Experiments: We based our experiments on an efficient Single-Shot-Detector (SSD) <cit.> evaluated on KITTI <cit.>, which was split into 80% training data for the SSD, 10% training data for the detection methods and 10% evaluation data; implementation details are provided in the Appendix.
The evaluation of uncertainty or error prediction in the case of object detection is not as straightforward as in classification. When applying the uncertainty measures only on the final output of the SSD, and hence the actually detected objects, false negatives are not considered. Therefore, we evaluate the methods using the mean Average Precision (mAP) that directly considers the predicted uncertainty of the methods: Each detection method predicts uncertainty information which can be directly used as object confidence. The classification-score of the predicted class is replaced with the predicted object confidence. Consequently, a better confidence estimate results in an improved mAP score. A more detailed explanation can be found in the Appendix.
As baseline methods we used Monte Carlo Dropout <cit.>, Deep Ensembles, invariance transformations and gradient information <cit.> on their own. Implementation details on the methods can be found in the Appendix. Each method has only the original data on hand during training and needs to generalize to the perturbed data. Results are shown in Table <ref>.
In most cases the multistream approach is able to match the performance of the best single stream or even improve on it. The Gaussian transformation and the gradient information each on their own are unable to outperform the baselines given by MC-Dropout and DeepEnsemble. However, their combination outperforms the other methods even on the non-perturbed original setup, which further shows the good interaction of the gradients and invariance transformations and the applicability of GIT beyond classification. Consequently, GIT can effectively be used to increase the robustness of object classifiers.
§ CONCLUSION
Most state-of-the-art error detection methods for DNNs focus only on a single reason for misclassification. However, in real-world applications the reasons for misclassification are often unknown and diverse and therefore, generalization to a wide variety of perturbations is necessary. Furthermore, current approaches do not consider perturbed samples that are still correctly classified. They either ignore their existence or even assign them to the negative (misclassified) class.
We therefore developed and investigated a novel detection method that combines Gradient information and Invariance Transformations (GIT) in a multistream approach and built up an extensive experimental set-up to cover the detection of Out-of-Distribution, Predictive Uncertainty and Adversarial Examples on several datasets and network architectures. While GIT was trained on only a single perturbation type, we evaluated the generalization capability to other perturbation types. The experiments show that the multistream concept leads GIT to a robust detection of misclassified samples of all types. GIT is on par with the state-of-the-art for Out-of-Distribution detection and highly superior for other reasons of generalization failures, especially in difficult situations when the perturbed data points lie close to the decision boundary. We further examined GIT in ablation studies and demonstrated that GIT can be easily extended to object detection, which paves the way towards more application fields.
In future work, we want to evaluate more sophisticated transformation streams such as Generative Adversarial Networks or Variational Autoencoders and investigate replacing the current stream-wise training with an end-to-end training. We aim to further extend and improve GIT for object detection and other application areas. Since the used invariances are currently based on prior knowledge of the image domain, other fields would require other, domain-specific invariances.
§ APPENDIX
§.§.§ Evaluations using TNR at 95% TPR
To have further insight into the performance of GIT we additionally used TNR at TPR 95% as metric. Results are shown in Table <ref>.
§.§.§ Hyper-Parameters for the Perturbation Generation
The hyper-parameters (for FGSM and BIM the adversarial noise level; for DeepFool and CWL2 the step size; the noise factors for the generation of the gaussian, shot and impulse perturbations) for the generation of the perturbation set-ups for DenseNet and ResNet are given in Table <ref>.
§.§.§ Classification DNNs
We based our experiments on DenseNets <cit.> and ResNets <cit.>. The experiments for the datasets CIFAR-10, CIFAR-100 and SVHN follow the same setup as in <cit.>. We used their pretrained DenseNet with 100 layers and their pretrained ResNet with 34 layers. Their models are available at <https://github.com/pokaxpoka/deep_Mahalanobis_detector>. Our ImageNet experiments are adapted from <cit.>. For ImageNet we used a ResNetv2-101 and a DenseNet-121 model. A trained version is available at <https://github.com/google-research/big_transfer>.
§.§.§ Autoencoder Stream
The architecture of the Autoencoder used for one stream of GIT is shown in Figure <ref>. For each dataset the Autoencoder is trained in a self-supervised manner on the corresponding original training dataset using the BCE-Loss and Adam as optimizer.
§.§.§ GradNorm using all layers
Huang et al. <cit.> introduced the detection method GradNorm based on the norm of the gradients of the last layer of the neural network. We adapted GradNorm (ad.GradNorm) by using the gradient information of all layers and trained a logistic regression network to combine the gradient features, similarly to the other data based detectors. Similarly, we adapted GIT to only be based on the last layer. Results for GIT are quite similar, except for some Out-of-Distribution setups where the variant using all layers leads to a higher score. For GradNorm the results improve when all layers are used for the adversarial case but get worse for the others. Table <ref> shows exemplary results for CIFAR-10 on DenseNet. In general, GIT outperforms GradNorm, even when all layers are considered.
§.§.§ Object Detection
The SSD's network architecture and hyper-parameters are taken from <https://github.com/amdegroot/ssd.pytorch>. As weight initialisation we used Kaiming-Normal due to a small performance improvement.
For the Monte Carlo Dropout variant, two dropout layers were added to the last two
convolutional layers of the SSD’s backbone. The hyper-parameters are adopted as described in reference <cit.>. Dropout is used during both the training
and the testing phase. During testing 10 forward runs are conducted in order to sample different predictions.
For the Deep Ensemble method the 10 different predictions were sampled from differently trained networks, each trained from another set of randomly initialised weights.
Both, the results of MC-Dropout and of the Deep Ensemble method are merged using the intersection over union based approach proposed in reference <cit.>. The merging threshold is set to 0.7 and only boxes of the same class are merged.
Gaussian perturbations were sampled from a Gaussian distribution with a standard deviation of 10.0. For Gaussian smoothing the standard deviation is chosen from the set { 0.1, 0.2, …, 1.0}. For the Median filter the size of the filter is chosen from {2×2, 3×3,…,10×10}. Both hyper-parameters are optimised using the non-perturbed validation data.
In the object detection setup, we directly use the mAP score in order to evaluate the uncertainty scores. Usually, the highest classification score output is used to accept or reject candidate bounding boxes (e.g. via non-maximum suppression and thresholding) and hence directly influences the mAP scores. By replacing this highest score with a confidence measure provided by the detection methods, a higher mAP can be achieved, e.g. by correctly rejecting false positive boxes. The better the confidence measure, the higher the mAP score.
|
http://arxiv.org/abs/2307.01755v1
|
20230704145147
|
Role of work function distribution on field emission effects
|
[
"Nandan Pakhira",
"Rajib Mahato"
] |
cond-mat.str-el
|
[
"cond-mat.str-el"
] |
Department of Physics, Kazi Nazrul University, Asansol, West Bengal 713340, India
Department of Physics, Kazi Nazrul University, Asansol, West Bengal 713340, India
Central Electronics Engineering Research Institute, Pilani, Rajasthan 333031, India
Field emission effect is the emission of electrons from a cold metallic surface in the presence of an electric field. The emission current exponentially depends on the work function
of the metallic surface. In this work we consider the role of work function distribution on the field emission current. The work function distribution can arise due to
nano-scale inhomogeneities of the surface as well as for collection of nano-particles with size distribution. We consider both Gaussian distribution as well as log-normal distribution.
For Gaussian distribution, the field emission current, J_av, averaged over work function distribution shows Gaussian dependence, J_av∝exp(ασ^2),
where σ is the width of the work function distribution and α is a fitting parameter. For log-normal distribution, J_av shows compressed exponential behaviour,
J_av∝exp(γσ^n), where the exponent n > 1 is a non-universal parameter. We also study in detail field emission current for various electric field strength
applied to systems with high density, characterised by Fermi energy, E_F≫Φ, Φ being the work function of the system as well as systems with low density characterised by
E_F≪Φ.
Role of work function distribution on field emission effects
Rajib Mahato
August 1, 2023
============================================================
§ INTRODUCTION
Field emission is the process in which electrons from cold surfaces are emitted in the presence of a strong applied electric field. This process should be compared
against the thermionic process in which electrons are emitted from a hot metal surface. Field emission forms the backbone of modern semiconductor devices. This
effect was first described by Fowler and Nordheim <cit.>. They considered quantum mechanical tunneling through a triangular potential energy (PE) barrier,
created by the application of constant electric field. Much later Murphy and Good <cit.> (MG) introduced a more realistic PE barrier by taking into account the
induced image charge formed in the presence of the emitted electron. MG calculated the barrier transmission coefficient under the semi-classical WKB approximation. More recently,
essentially an exact solution of the problem was obtained by Choy et. al. <cit.>. Also, various cases including finite temperature (thermal emission),
tunneling, and curvature of the emission surface have been considered by various authors <cit.>.
To the best of our knowledge, in all of those studies the authors have considered a constant local work function. The work function of a material depends on the
composition, structure, geometry, local charge distribution etc. of the emitting surface. The assumption of a constant work function is only suitable for an atomistically smooth
homogeneous surface. For surfaces with inhomogeneities on the nano-scale (much smaller than the size of the collectors) the assumption of a constant work function is no longer
valid. Also, it has been shown <cit.> that in a system of nano-particles there is a distribution of the size of the nano-particles. Since the work
function is more of a property of the surface, we can naturally expect that the work function of nano-particles will also have a distribution. The actual microscopic model
for the work function distribution for a system of nano-particles is beyond the scope of this work. Interestingly, Gamez et al. <cit.>, using a scanning
tunneling microscope (STM), have measured the pair distribution function (PDF) for Pt nano-particles and found that it follows a log-normal distribution.
In this work, purely as a mathematical model, we choose Gaussian and log-normal distributions for the work function. We then study the field emission current averaged
over work function distribution. The organization of the rest of the paper is as follows. In Sec. II we describe the mathematical formalism used to calculate field emission
current. In Sec. III we describe the work function distribution used to calculate average current. In Sec. IV we present our results for both the case of Gaussian
distribution and log-normal distribution. Finally in Sec. V we conclude.
§ MATHEMATICAL FORMALISM
We closely follow and summarize the results obtained by Lopes et. al. <cit.> for field-emission current density, J. In the standard FN-type MG theory
the field emission current density is given by the well known expression <cit.>,
J = e^3ℰ^2/(16π^2ħΦ) · 1/t^2(y_0) · exp[-(4/3)·(√(2m)/ħ)·Φ^3/2/(eℰ)·v(y_0)]
where e and m are the charge and mass of electron, ℰ is the strength of the applied electric field, Φ is the local work function of the emitting surface and
v(y) = [(1+√(1-y^2))/2]^1/2 [E(λ)-(1-√(1-y^2))K(λ)].
K(λ) and E(λ) are the complete elliptic integral of the first and second kind, with
λ^2 = 2√(1-y^2)/(1+√(1-y^2))
and
y^2 = e^3ℰ/(4πϵ_0(V_0-E)^2) < 1 .
It is important to mention that V_0=Φ+E_F is the height of the potential energy barrier (E_F is the Fermi energy) and E is the energy of the electron.
Finally,
t(y_0) = [v(y) - (2/3)·y·(dv/dy)]_y=y_0
with
y_0^2 = e^3ℰ/(4πϵ_0Φ^2)
From the relations above it is quite evident that calculation of transmission current requires evaluation of numerical integrals for complete elliptic integrals.
Due to the singularities present in complete elliptic integrals it is very hard to extract meaningful results purely numerically <cit.>. Under this
circumstances we can consider series expansion for complete elliptic integrals <cit.> as follows
E(q) = 1+1/2[ln(4/q)-1/2]q^2+⋯
K(q) = ln(4/q)+[ln(4/q)-1]q/4 + ⋯
where
q = √(1-p^2)
and p^2 = (x_2-x_1)/x_2. x_1 and x_2 are the roots of the quadratic equation
V_0-E -Ze^2/4x - eℰx = 0
A detailed calculation gives the following form for the field emission current density
J = e^3ℰ^2/(16π^2ħΦ) · [1-u(y_0)]/t^2(y_0) · exp[-(4/3)·√(2m/ħ^2)·Φ^3/2/(eℰ)·v(y_0)]
where
u(y_0) = [1 + (2√(2mΦ)/(eħ))·(E_F/ℰ)·t(y_0)] exp[-(2√(2mΦ)/(eħ))·(E_F/ℰ)·t(y_0)]
with
v(y_0) = 1 - [(3/8)Z'·ln(8/√(Z')) + Z'/16] + (3/8)Z'·y_0^2·ln(y_0)
+ (Z'^2/32)·[1 - ln(8/√(Z'))]·y_0^4 + (Z'^2/32)·y_0^4·ln(y_0) + ⋯
and
t(y_0) = v(y_0) - (2/3)·y_0·(dv/dy)(y_0)
with y_0 given by Eq. <ref> and Z'=1.179 is an arbitrary constant obtained from suitable boundary condition <cit.>.
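As a rough numerical illustration of Eq. (<ref>), the following Python sketch evaluates the current density in SI units; the barrier-shape functions v(y_0) and t(y_0) are passed in as arguments (setting both to 1 recovers the elementary triangular-barrier estimate), and the grouping of the terms in u(y_0) follows the reconstruction given above, so the sketch should be checked against the original reference before quantitative use.

import numpy as np
from scipy.constants import e, m_e, hbar

def field_emission_current(E_field, phi, E_F, v_y0=1.0, t_y0=1.0):
    # E_field in V/m, phi and E_F in J; returns J in A/m^2.
    a = 2.0 * np.sqrt(2.0 * m_e * phi) / (e * hbar) * (E_F / E_field) * t_y0
    u = (1.0 + a) * np.exp(-a)                             # pre-exponential correction u(y_0)
    pref = e**3 * E_field**2 / (16.0 * np.pi**2 * hbar * phi) * (1.0 - u) / t_y0**2
    expo = -(4.0 / 3.0) * np.sqrt(2.0 * m_e) / hbar * phi**1.5 / (e * E_field) * v_y0
    return pref * np.exp(expo)

# Example: Phi = 4 eV, E_F = 10 eV, field of 3x10^7 V/cm = 3x10^9 V/m
J_example = field_emission_current(3e9, 4.0 * e, 10.0 * e)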
The expression for transmission current in Eq. <ref> carries an interesting pre-exponential factor [1-u(y_0)] as compared to the standard form in
Eq. <ref>. This factor exhibits an explicit dependence on the external electric field (ℰ), work function (Φ) and Fermi energy, E_F.
It is worth mentioning that, from Eq. <ref>, we can immediately investigate two important particular cases depending on the value of E_F. For large
values of the Fermi energy, i.e., (2√(2m)/(eħ))·Φ^1/2·(E_F/ℰ) ≫ 1, or equivalently E_F ≫ eħℰ/(2√(2mΦ)),
it is easy to see that u(y_0)≈ 0 in Eq. <ref> since the exponential dominates. So, in this limit we recover the standard FN-type MG equation.
On the other hand, for the opposite regime, i.e., for very small Fermi energies, we have E_F ≪ eħℰ/(2√(2mΦ)). In this limit we can
expand the exponential in Eq. <ref> and get
J = (me/(2π^2ħ^3))·E_F^2·exp[-(4/3)·√(2m/ħ^2)·Φ^3/2/(eℰ)·v(y_0)]
It is worth emphasizing that for small values of the Fermi energy a very different expression, compared to the standard FN-type MG formula, was
obtained <cit.> for the current density. Note that in this regime the pre-exponential factor in Eq. <ref> does not depend on
either t(y_0) or the external electric field (ℰ), but it remains dependent on the Fermi energy (E_F).
§ WORK FUNCTION DISTRIBUTION
In the previous section we have summarized the field emission current for a given local work function Φ. For a system of nano-particles or for systems with
inhomogeneity the work function will vary over the length scale of the size of the collector for emitted electrons. In such a situation we need to average
the field emission current over the distribution of the work function as follows:
J_av = ∫ P(Φ) J(Φ,E_F) dΦ
where P(Φ) is the distribution of the work function, Φ. We choose two widely used distribution functions, namely (i) Gaussian and (ii) log-normal
distribution. It is well known that systems with bulk disorder follow a Gaussian distribution and the work function distribution function, P(Φ), is given by
P_N(Φ) = (1/(σ√(2π)))·exp[-(Φ-Φ_0)^2/(2σ^2)],
where σ is the standard deviation of the distribution and Φ_0 is the known bulk value for a given material.
We also use log-normal distribution for the work function :
P_LN(Φ) = (1/(Φσ√(2π)))·exp[-(lnΦ - μ)^2/(2σ^2)]
It is important to mention that we were inspired by the experimental result of Gamez et. al. <cit.>. They showed that for a system of Pt nano-particles
the pair distribution function (PDF) for the radius of nano-particles follows log-normal distribution
F(r) = (1/(rs√(2π)))·exp[-(ln r - μ)^2/(2s^2)]
with
s^2 = ln[(P_sig/P_size)+1]
μ = ln(P_size)-s^2/2
where P_size and P_sig are the average particle diameter and standard deviation, respectively. Since the work function crucially depends on the
surface properties of a system, it may follow a log-normal distribution, as in the case of surface disorder.
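The averaging in Eq. (<ref>) can be estimated by simple Monte-Carlo sampling; the sketch below works in eV and treats the current density J(Φ) as a user-supplied function (e.g. the sketch in Sec. II), with parameter names chosen for illustration.

import numpy as np

def averaged_current(J_of_phi, phi0_eV=4.0, sigma=0.05, dist="gaussian", N=100_000, seed=0):
    # Sample Phi (in eV) from the chosen work function distribution and average J(Phi).
    rng = np.random.default_rng(seed)
    if dist == "gaussian":
        phi = rng.normal(loc=phi0_eV, scale=sigma, size=N)              # P_N(Phi)
    else:
        phi = rng.lognormal(mean=np.log(phi0_eV), sigma=sigma, size=N)  # P_LN(Phi), median exp(mu)
    return np.mean([J_of_phi(p) for p in phi])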
Various statistical properties of this distribution are summarized in the table <ref> :
§ RESULTS
In this section we show our results for current density averaged over two choice of probability distributions as have been discussed in the previous section.
§.§ Gaussian work function distribution
We first consider the case of a Gaussian distribution for the work function. In Fig. <ref> we show the histogram plot of the work function, sampled from a
Gaussian distribution for four choices of the bulk work function Φ_0 = 3.0, 3.5, 4.0 and 4.5 eV with σ=0.05. In each case we also fit the histogram
to a Gaussian distribution. From this fit we can see that our random samples of Φ are indeed well described by the Gaussian distribution.
§.§.§ Case with Φ≪ E_F
We now consider the case with E_F = 10 eV and Φ_0 = 3.0, 3.5, 4.0 and 4.5 eV. Since E_F ∝ n^2/3, n being the density of electrons, this will
correspond to the high-density limit. In Fig. <ref> we show the current density averaged over various numbers of random samples, N, of Φ. For
N < 10^4 there are significant fluctuations of J_av, but for N > 4×10^4 J_av shows only small fluctuations about its mean value. We choose
N = 10^5 for the calculation of J_av in all cases, so that J_av is essentially free of statistical errors.
As shown in Fig. <ref>, J_av varies over several orders of magnitude as we vary Φ_0 from 3.0 eV to 4.5 eV. So, in order to understand
the role of the work function distribution on the field emission current, we need to study the dimensionless scaled quantity J_av/J_0, where J_0 is the field emission
current for the bulk value of the work function Φ_0 and can be calculated from Eq. <ref> with Φ = Φ_0.
In Fig. <ref> we show log(J_av/J_0) as a function of the width of the work function distribution, σ, for a given applied electric
field, ℰ = 3 × 10^7 V/cm. We choose 0.01 ≤σ≤ 0.1. Since the full width at half maximum (FWHM) for a Gaussian distribution is 2.35σ
and more than 95% integrated weight is within a width 4σ (between -2σ and +2σ) our choice of σ ensures that the deviation of Φ from its
bulk value Φ_0 is less than 10%. The first noticeable feature in Fig. <ref> is that the current density, J_av, averaged over work
function distribution increases monotonically with the width of the distribution, σ. This is because the transport current, J(Φ) ∝ exp(-ζΦ^3/2), where
ζ = (4/3)·√(2m/ħ^2)·v(y_0)/(eℰ), and the averaging over the work function distribution leads to the exploration of regions with lower barrier
height and hence an increase of the tunneling current.
Another noticeable feature in Fig. <ref> is that the logarithm of the scaled average current log(J_av/J_0) can be fitted well with the
functional form f(σ) = ασ^2, where α is a fitting parameter. This clearly shows that J_av follows Gaussian behaviour
J_av = J_0 exp(ασ^2). The fitting parameter α in general depends on Φ_0, ℰ and E_F. The dependence of α on Φ_0 is
nearly linear, as shown in the inset of Fig. <ref>.
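The quoted fits can be reproduced with a standard least-squares routine; a brief SciPy sketch is given below, where the data arrays and starting values are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def fit_width_dependence(sigmas, J_av, J_0, model="gaussian"):
    # Fit log(J_av/J_0) = alpha*sigma^2 (Gaussian case) or gamma*sigma^n (log-normal case).
    y = np.log(np.asarray(J_av) / J_0)
    if model == "gaussian":
        popt, _ = curve_fit(lambda s, alpha: alpha * s**2, sigmas, y, p0=(1.0,))
    else:
        popt, _ = curve_fit(lambda s, gamma, n: gamma * s**n, sigmas, y, p0=(1.0, 1.5))
    return popt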
In Fig. <ref> we show the behaviour of log(J_av/J_0) for various field strengths ℰ as well as Φ_0.
For ℰ=10^7 V/cm the increase of J_av is more than e times over the bulk value J_0 for σ=0.1. This can be explained by the fact that the
tunneling current J ∝ ℰ^2/Φ · exp(-ζΦ^3/2/ℰ). With increasing field strength the tunneling current dramatically increases, mainly
due to its exp(-1/ℰ) dependence. Averaging over work function distribution increases tunneling current as explained earlier. However this effect becomes sub-dominant
for larger field strengths as the tunneling current increases over several orders of magnitude for just one order of magnitude increase in ℰ. The behaviour is similar
for all four choices of Φ_0. In the inset of each plot in Fig. <ref> we have shown the dependence of the fitting parameter α on the
field strength ℰ. As can be clearly seen, α depends very strongly on ℰ.
§.§.§ Case with Φ≫ E_F
Next we consider the case with E_F = 0.05 eV and Φ_0 = 3.0, 3.5, 4.0 and 4.5 eV. This will correspond to systems with low density as E_F ∝ n^2/3. In this case the tunneling
current does not depend on the prefactor e^3ℰ^2/(16π^2ħΦ)·[1-u(y_0)]/t^2(y_0) but still depends on the standard exponential factor
exp(-ζΦ^3/2/ℰ). In Fig. <ref> we show J_av as a function of N, the number of random samples used for averaging over the work function distribution,
for Φ_0 = 3.0, 3.5, 4.0 and 4.5 eV. As can be clearly seen, J_av is at least one order of magnitude smaller than in the case with Φ≪ E_F. This is mainly due to the fact that
the prefactor (me/(2π^2ħ^3))·E_F^2 for the tunneling current J in this case is much smaller than the prefactor
e^3ℰ^2/(16π^2ħΦ)·[1-u(y_0)]/t^2(y_0) corresponding to the case with Φ_0≪ E_F. As in the earlier case J_av shows significant
fluctuations for N < 10^4. We choose N = 10^5 for various Φ_0 and ℰ.
In Fig. <ref> we show log(J_av/J_0) as a function of the width, σ, of the Gaussian work function distribution for a given applied
electric field strength ℰ= 3 × 10^7V/cm. As in the case with Φ_0≪ E_F, J_av monotonically increases with σ. But the increase is
slower than in the case with Φ_0≪ E_F. This is mainly because of the absence of the Φ-dependent prefactor in the expression for J. Most interestingly, as in the earlier case with
Φ_0≪ E_F, log(J_av/J_0) can be fitted well with f(σ) = ασ^2. The fitting parameter α depends linearly on Φ_0.
In Fig. <ref> we show the dependence of log(J_av/J_0) on σ for various field strengths ℰ and Φ_0. As in the case with
Φ_0≪ E_F, the effect of the work function distribution is strongest for the weakest field ℰ = 10^7 V/cm. The tunneling current for ℰ = 10^7 V/cm is very small
(∼ 10^-10 A/cm^2) and a very small change (lowering) of the potential barrier can have a much larger effect. Increasing the field strength by one order of
magnitude changes the tunneling current by several orders of magnitude, which largely cancels the effect of the work function distribution. However, for each case log(J_av/J_0) can be
fitted well with f(σ) = ασ^2. The fitting parameter α strongly depends on ℰ as shown in the inset of each plot.
§.§ Log-normal work function distribution
Next we consider a log-normal distribution for the work function. In Fig. <ref> we show the histogram plot of the work function, sampled from a log-normal distribution for
four choices of the median work function M ≡ e^μ = 3.0, 3.5, 4.0 and 4.5 eV with σ=0.05. From this plot we can see that our choice of random variables for Φ are well
sampled over the log-normal distribution.
§.§.§ Case with Φ≪ E_F
First we consider the familiar case with Φ≪ E_F. We choose E_F = 10 eV and μ = ln 3.0, ln 3.5, ln 4.0 and ln 4.5. This will correspond to median values M = 3.0, 3.5, 4.0 and
4.5 eV. In Fig. <ref> we show J_av as a function of the number of random samples of Φ, chosen from the log-normal distribution. The current density is
several orders of magnitude higher than in the case of the Gaussian distribution. The fluctuations are also higher than in the Gaussian case. As in the earlier cases, we choose N = 10^5 to
obtain a good statistical measure.
In Fig. <ref> we show log(J_av/J_0) as a function of the width (σ) of the log-normal work function distribution for M = 3.0, 3.5, 4.0 and
4.5 eV and a given field strength ℰ=3× 10^7 V/cm, respectively. As in the case of Gaussian distribution, J_av increases monotonically as a function of the width
of the distribution, σ. However, the increase in this case is much stronger: more than 10 times J_0 for σ=0.1. This strong increase can be attributed to the skewness of
the distribution function. There is a significantly higher probability of lowering the effective potential barrier than for the Gaussian distribution, which has no skewness. As a result, the current density
J ∝ exp(-ζΦ^3/2/ℰ) will show an exponential increase with the width of the distribution. As shown in Fig. <ref>, log(J_av/J_0)
can be fitted well with the function g(σ)=γσ^n with non-universal exponent n > 1. The fitting parameter γ increases linearly with the mean of the distribution.
In Fig. <ref> we show log(J_av/J_0) as a function of the width, σ, of the log-normal work function distribution for various
field strengths ℰ varying over one order of magnitude, from 10^7 V/cm to 10^8 V/cm. For the lowest field ℰ = 10^7 V/cm, J_av
increases by several orders of magnitude over J_0. This is mainly because the low-field tunneling current is very small and the effect of averaging over the work function distribution,
together with the skewness of the distribution, increases the tunneling current drastically. Also, the rise is nearly exponential, since n ∼ 1. With increasing ℰ the tunneling current
increases and the effect of work function distribution gets largely suppressed. The fitting parameter γ strongly depends on the field strength ℰ.
§.§.§ Case with Φ≫ E_F
Finally we consider the case with E_F=0.05 eV and a log-normal work function distribution. As in the other cases, we first study the dependence of J_av on the number of random
variables N. As shown in Fig. <ref>, J_av shows strong fluctuations for N < 10^4 and settles to a mean value for N > 6× 10^4; we choose N=10^5
for the calculations. The current magnitudes are several orders of magnitude higher than for the Gaussian distribution, but much smaller than for the case with
E_F=10.0 eV and a log-normal distribution.
In Fig. <ref> we show log(J_av/J_0) as a function of the width of the log-normal work function distribution at a given field strength
ℰ = 3× 10^7 V/cm for μ= ln 3.0, ln 3.5, ln 4.0 and ln 4.5, corresponding to median values M = 3.0, 3.5, 4.0 and
4.5 eV. As in the earlier cases, J_av increases monotonically with σ, and log(J_av/J_0) is well fitted by
g(σ)=γσ^n with n >1. The fitting parameter γ and the exponent n are nearly identical to those for the case Φ≪ E_F. With increasing
M, log(J_av/J_0) increases and the exponent n decreases. This is because with increasing M (i.e. Φ) the tunneling current decreases, which enhances the effect of the work function
distribution and leads to a nearly exponential increase of the tunneling current with σ. The fitting parameter γ varies linearly with M = exp(μ).
In Fig. <ref> we show log(J_av/J_0) as a function of the width, σ, of the log-normal work function distribution for various field
strengths ℰ and medians M=e^μ of the distribution. For the lowest field strength, ℰ=10^7 V/cm, the tunneling current J_av averaged
over the work function distribution is several orders of magnitude larger than J_0, the tunneling current calculated with Φ=M. As stated earlier, this is because for
ℰ=10^7 V/cm the tunneling current is very small and the skewness of the work function distribution effectively reduces the potential barrier significantly during the averaging process.
With increasing field strength the tunneling current increases by several orders of magnitude and the effect of the work function distribution is masked. With increasing M the tunneling
current decreases and the effect of the work function distribution is enhanced. In each case log(J_av/J_0) is well fitted by g(σ)=γσ^n with n > 1.
The exponent n increases with increasing field strength; for a given field strength ℰ, n decreases with increasing median M of the work function distribution.
In particular, for ℰ=10^7 V/cm, n∼ 1, giving rise to nearly exponential behaviour of the work-function-averaged tunneling current. The fitting parameter γ decreases sharply
with increasing field strength.
§ CONCLUSIONS
In conclusion, we have considered the role of the work function distribution in the tunneling current of the field emission effect. We have calculated the tunneling current J_av averaged
over the work function distribution, considering both Gaussian and log-normal distributions. For each case we have studied the dependence of J_av on N, the
number of random values sampled from the work function distribution: J_av shows significant fluctuations for N < 10^4 but settles to an average value for N > 4×10^4,
and we choose N=10^5 throughout the calculations. We first calculate J_0, the tunneling current for Φ=Φ_0 (Gaussian) or μ = ln M (log-normal), where Φ_0 or M
(the median of the distribution) corresponds to the bulk value of the work function. For each case we have studied the logarithm of the scaled current, log(J_av/J_0),
as a function of the width σ of the work function distribution for various Φ_0 (Gaussian) or M (log-normal) and applied field strengths ℰ. We have also considered
high-density systems characterized by Φ≪ E_F (E_F = 10 eV) as well as low-density systems characterized by Φ≫ E_F (E_F = 0.05 eV). For both the Gaussian and the
log-normal distribution, log(J_av/J_0) increases monotonically with the width σ of the work function distribution at a given field strength ℰ. This is
because averaging over the work function distribution explores regions of lower barrier height and hence increases the tunneling current. The dimensionless quantity
log(J_av/J_0) is well fitted by f(σ)=ασ^2 (Gaussian) or by g(σ)=γσ^n (log-normal). The fitting parameters α and γ increase
linearly with the bulk value of the work function, i.e. Φ_0 (Gaussian) or M (log-normal). For the log-normal distribution the non-universal exponent n > 1 corresponds to compressed-exponential behaviour.
§.§ Acknowledgements
We would like to thank Nei Lopes and Arghya Taraphder for many valuable discussions. One of us (N.P.) would like to thank IIT Kharagpur for local hospitality where part of the work was done. One of us
(R.M.) would like to thank the Central Electronics Engineering Research Institute, Pilani, for local hospitality and research support where part of the work was done.
http://arxiv.org/abs/2307.00209v2 | 20230701032356 | Image Matters: A New Dataset and Empirical Study for Multimodal Hyperbole Detection | Huixuan Zhang, Xiaojun Wan | cs.CV | cs.CV, cs.AI, cs.CL
http://arxiv.org/abs/2307.01243v1 | 20230703172832 | Unification of the four forces in the Spin(11,1) geometric algebra | Andrew J. S. Hamilton, Tyler McMaken | physics.gen-ph | physics.gen-ph
Unification of the four forces in the Spin(11,1) geometric algebra
^1 JILA, Box 440, U. Colorado, Boulder, CO 80309, USA, ^2 Dept. Astrophysical & Planetary Sciences, U. Colorado, Boulder, ^3 Dept. Physics, U. Colorado, Boulder
Andrew.Hamilton@colorado.edu, Tyler.McMaken@colorado.edu
(10), or equivalently its covering group (10),
is a well-known promising grand unified group that contains
the standard-model group.
The spinors of the group (N) of rotations in N spacetime dimensions
are indexed by a bitcode with [N/2] bits.
Fermions in (10)
are described by five bits yzrgb,
consisting of two weak bits y and z, and three colour bits r, g, b.
If a sixth bit t is added, necessary to accommodate a time dimension,
then the enlarged (11,1) algebra contains the
standard-model and Dirac algebras as commuting subalgebras,
unifying the four forces.
The minimal symmetry breaking chain that breaks (11,1)
to the standard model is unique,
proceeding via the Pati-Salam group.
The minimal Higgs sector is similarly unique,
consisting of the dimension 66 adjoint representation of (11,1);
in effect, the scalar Higgs sector matches the vector gauge sector.
Although the unified algebra is that of (11,1),
the persistence of the electroweak Higgs field after grand symmetry breaking
suggests that the gauge group before grand symmetry breaking is
(10,1), not the full group (11,1).
The running of coupling parameters predicts that
the standard model should unify to the Pati-Salam group
(4)_w ×(6)_c
at 10^12GeV,
and thence to (10,1)
at 10^15GeV.
The grand Higgs field
breaks t-symmetry,
can drive cosmological inflation,
and generates a large Majorana mass for the right-handed neutrino
by flipping its t-bit.
The electroweak Higgs field breaks y-symmetry,
and generates masses for fermions by flipping their y-bit.
Andrew J. S. Hamilton^1,2 and Tyler McMaken^1,3
August 1, 2023
§ INTRODUCTION
(10),
introduced by <cit.>,
or equivalently its covering group (10),
remains a popular candidate for grand unification.
The group (10) is the grandest of the three grand unified groups
proposed in the 1970s,
containing as subgroups the other two grand unified groups
(5)
<cit.>,
and the Pati-Salam group
(2)_L ×(2)_R ×(4)_c
<cit.>,
which is isomorphic to
(4)_w ×(6)_c.
There is a large and steadily growing literature on (10)
as a grand unified group, with or without supersymmetry.
Besides unifying each generation of fermions in a single spinor multiplet,
(10)
predicts a right-handed neutrino
<cit.>
which, through the see-saw mechanism
<cit.>,
allows the left-handed neutrino to acquire a small mass.
(10) is consistent with the Super-Kamiokande limit
≳ 1.6 × 10^34yr
on proton lifetime
<cit.>
provided that the scale of grand unification exceeds about
4 × 10^15GeV
<cit.>.
(10)
can accommodate a neutrino sector with mixing angles consistent with experiment
<cit.>.
Out-of-equilibrium CP-violating decay of three generations
of right-handed neutrino in the early Universe can lead
to an asymmetry in baryon minus lepton number B-L,
a process called leptogenesis
(because the process generates L but not B)
<cit.>,
which can then induce baryogenesis by sphaleron processes
<cit.>,
which tend to erase B+L.
The principal uncertainty over (10) is the
Higgs fields and symmetry breaking chain
that break it to the standard model
(1)_Y ×(2)_×(3)_c
<cit.>.
The various choices lead to the prediction of various
scalar or other particles,
which could potentially form the dark matter
<cit.>,
or produce experimentally detectable signatures
<cit.>.
Yet other possibilities arise if the grand symmetry group is enlarged.
For example, axions, which could be the dark matter
<cit.>,
or not <cit.>,
could be accommodated by expanding the group to the product
(10) ×(1)_PQ
<cit.>
where (1)_PQ is a Peccei-Quinn symmetry.
The present paper follows a different path,
which seems to have been overlooked,
namely the possibility that the (10) group itself
unifies nontrivially with the Lorentz group (3,1)
in (11,1).
The proposed unification is
in a sense
obvious.
Each of the 2^5 = 32 spinors in a fermion multiplet of (10)
is actually a 2-component chiral (massless) Weyl spinor,
so each fermion multiplet contains 2^6 = 64 components.
Each 2-component Weyl spinor transforms under the Lorentz group (3,1)
of spacetime.
It is natural to ask whether (10) and (3,1) might combine
into a common spin group,
which given that the spinor representation must have 64 components,
and there must be a timelike dimension,
must be the group (11,1) of rotations in 11+1 spacetime dimensions.
The reason it is possible to adjoin to (10) just 2 extra dimensions,
and not the full 4 dimensions of spacetime,
is that (10) and (3,1) redundantly contain the degrees
of freedom associated with flipping chirality.
The redundancy is expressed mathematically by the coincidence
of the (10) and (3,1) chiral operators, equation (<ref>).
The viability of (11,1) as a hypothetical unified group
is affirmed by the circumstance that the discrete symmetries
(of the spinor metric, which in the Dirac algebra is antisymmetric,
and of the conjugation operator, which in the Dirac algebra is symmetric)
of (11,1) agree with those of (3,1),
as follows from the well-known Cartan-Bott dimension-8 periodicity
of Clifford algebras
<cit.>.
Nesti & Percacci <cit.>
and
Krasnov <cit.>
have previously proposed that (10) and the Lorentz group (3,1)
are unified in (11,3),
and that the 64 spinors of a generation comprise one of the two chiral
components of the 2^7 = 128 spinors of (11,3).
This possibility is discussed further in <ref>.
Why might the unification of (10) and (3,1) in (11,1)
have been missed?
A possible reason is the Coleman-Mandula no-go theorem
<cit.>,
which states that any symmetry group that contains the Poincaré group
and that admits nontrivial analytic scattering amplitudes
must be a direct product of the Poincaré group and an internal group.
The Coleman-Mandula theorem generalizes to higher dimensions
<cit.>.
(11,1) satisfies the generalized Coleman-Mandula theorem
because then the spacetime group and the internal group are one and the same.
The Coleman-Mandula theorem refers to the Poincaré group, which is the global
group of rotations and translations in rigid Minkowski (flat) spacetime.
Indeed Minkowski space is the arena of textbook quantum field theory.
But the tangent space of general relativity is not that of the
Poincaré group or the associated Poincaré algebra.
Rather, the tangent algebra of general relativity is the Dirac algebra,
which is the geometric algebra (Clifford algebra) associated with (3,1).
The Dirac algebra is needed to describe spinors in general relativity;
and general relativity can be expressed elegantly in the language of
multivector-valued differential forms <cit.>,
an approach pioneered by Cartan
<cit.>.
After symmetry breaking,
the Coleman-Mandula theorem requires only that
the generators of unbroken symmetries commute with those of spacetime.
The unification proposed in the present paper achieves just that:
as shown in <ref>,
the Dirac algebra
and
the algebra of unbroken symmetries of (10)
are commuting subalgebras of the geometric (Clifford) algebra
of (11,1).
The usual assumption
that (10) combines with (3,1) as a direct product
is equivalent to the assumption that
each of the 32 fermions of a generation in (10) is a Weyl spinor.
The two components of a Weyl spinor
transform into each other under proper Lorentz transformations,
therefore necessarily have the same standard-model charges.
In (10), standard-model charges are related to (10) charges
by equations (<ref>).
As elucidated in <ref>,
unifying (10) and (3,1) in (11,1)
requires abandoning the assumption that equations (<ref>) hold
without modification.
Remarkably,
there exists an adjustment (<ref>) of the basis bivectors of
the (10) algebra
that modifies standard-model charge operators in such a way that
the standard-model and Dirac algebras become commuting subalgebras
of the (11,1) geometric algebra,
yielding a nontrivial unification of (10) and (3,1)
in (11,1).
The modification (<ref>) is not obvious,
and may be another reason why unification in (11,1)
has not been noticed previously.
The modification (<ref>) is crucial;
without it, the unification proposed in this paper would fail,
and this paper would not have been written.
The (11,1) model is tightly constrained,
and as detailed in sections <ref>–<ref>,
there appears to be a unique minimal model.
The minimal Higgs sector
consists of the dimension 66 adjoint representation of (11,1).
This is simpler than the standard (10) model,
whose Higgs sector requires
<cit.>
several distinct multiplets,
typically a bivector (45) to break grand symmetry,
a vector (10) to break electroweak symmetry,
and
a pentavector (126) to break B-L symmetry to allow the right-handed
neutrino to acquire a Majorana mass.
The minimal symmetry breaking chain in (11,1)
proceeds by the Pati-Salam group (4)_w ×(6)_c,
as first proposed by <cit.>,
and advocated by <cit.>,
albeit with a different Higgs sector,
(11,1) →[?] (10,1) →[10^15 GeV] (4)_w ×(6)_c ×(3,1)
→[10^12 GeV] (1)_Y ×(2)_L ×(3)_c ×(3,1) →[160 GeV] (1)_Q ×(3)_c ×(3,1) .
The top line of the chain (<ref>) is the prediction,
while the bottom line is the standard model.
Note that, as discussed in sections <ref> and <ref>,
the grand unified group is (10,1) rather than (11,1) itself.
The predicted energy of grand unification from (10,1)
is 10^15GeV, equation (<ref>),
which is less than the lower limit of
4 × 10^15GeV
obtained by
<cit.>
from the Super-Kamiokande lower limit on proton lifetime <cit.>,
so the (11,1) model may already be ruled out by proton decay.
<cit.>'s limit seems robust,
because the proton lifetime τ depends rather steeply
on the unification scale M_X, as τ∝ M_X^4.
However,
the grand symmetry breaking scale of 10^15GeV predicted here,
<ref>,
is based on a simple model
(1-loop renormalization,
and an abrupt transition at the Pati-Salam symmetry breaking scale),
and moreover the (11,1) Higgs sector is different from the
(10) models considered by <cit.>.
It would seem wise to incorporate (11,1) into the pantheon
of beyond-standard-model models,
and to subject it to the same kind of scrutiny that (10) has received.
The plan of this paper is as follows.
Section <ref>
recasts the charges of fermions in the standard model
in terms of a (10) bitcode,
as first pointed out by <cit.>,
and advocated by <cit.>,
and points out some patterns in the (10) chart (<ref>)
of fermions that suggest that the Dirac and (10) algebras might
fit nontrivially in a unified algebra.
Section <ref>
argues that if the unified algebra is a spin algebra,
then that algebra must have 11+1 spacetime dimensions.
Section <ref>
derives the main result of this paper,
that the Dirac and standard-model algebras are commuting subalgebras
of the (11,1) geometric algebra,
provided that standard-model bivectors are
modified per (<ref>).
Section <ref> considers
the possible unification path from the standard model toward (11,1),
concluding that the only option for unification between electroweak
and grand unification is the Pati-Salam group (4)_w ×(6)_c.
Section <ref>
addresses electroweak symmetry breaking,
<ref>
grand symmetry breaking,
and <ref> Pati-Salam symmetry breaking.
Section <ref>
shows from the running of coupling parameters that
Pati-Salam unification occurs at 10^12GeV,
and grand unification at 10^15GeV.
Section <ref> summarizes the predictions,
and highlights areas where the critical reader might look for flaws.
Section <ref> summarizes the conclusions.
The notation in this paper follows that of <cit.>.
§ SPIN(10)
§.§ Spin(10) charges
Introduced originally by Cartan <cit.>,
a spinor is the fundamental representation of the group of rotations
in N spacetime dimensions.
As emphasized by <cit.>,
spinors have the intriguing property that their index is a bitcode,
with [N/2] bits in N spacetime dimensions.
Mathematically,
the dimension of the spinor representation is 2^[N/2]
(strictly, in even N dimensions
there are two isomorphic irreducible spinor representations,
of opposite chirality,
each of dimension 2^[N/2]-1;
chirality counts whether the number of up-bits is odd or even).
The halving of dimensions is associated with the
fact that spinors have a natural complex structure.
In two or three dimensions, the number of bits is one, a Pauli spinor,
with 2^1 = 2 complex components.
In four dimensions, the number of bits is two, a Dirac spinor,
with 2^2 = 4 complex components.
A Dirac spinor in 3+1 spacetime dimensions has a spin bit and a boost bit,
<ref>.
The Dirac spinor is called right-handed if the spin and boost bits are aligned,
left-handed if anti-aligned.
A massive Dirac spinor is a linear combination of right- and left-handed
components,
but only the left-handed component couples to the weak force
(2)_.
As first pointed out by
<cit.>,
and reviewed by <cit.>,
(10) describes a generation of fermions of the standard model
with a bitcode with [10/2] = 5 bits y , z , r , g , b
consisting of two weak bits y and z, and three colour bits r , g , b.
<ref> comments on the naming of bits.
Each bit can be either up or down,
signifying a charge of +1/2 or -1/2.
The standard model has 5 conserved charges consisting of
hypercharge Y, weak isospin I_3,
and three colours R, G, and B.
The relation between standard-model charges and (10) charges is
Y = y + z - (2/3) ( r + g + b ) ,
I_3 = (1/2) ( z - y ) ,
C = c + 1/2   ( C = R , G , B ;  c = r , g , b ) .
The relation (<ref>) between charges is possible
only thanks to certain coincidences
<cit.>
in the pattern of charges of fermions in the standard model.
Traditionally a quark has colour charge C consisting of one unit of
either R, G, or B.
(10) on the other hand says that an r quark (for example) has
rgb bits ↑↓↓,
meaning that its r charge is +1/2
while its g and b charges are -1/2.
In the (10) picture,
when an r quark turns into a g quark
through interaction with a gr̅ gluon,
its r charge flips from +1/2 to -1/2
while its g charge flips from -1/2 to +1/2:
r quark (↑↓↓)  +  gr̅ gluon  →  g quark (↓↑↓) .
In so doing, the quark loses one unit of r charge,
and gains one unit of g charge,
consistent with the traditional picture.
Standard-model transformations also preserve baryon number B
and lepton number L.
Mysteriously, although standard-model transformations preserve
B and L, neither (1)_B nor (1)_L, nor any combination thereof,
is a local symmetry of the standard model
(there is no force associated with these symmetries).
The combination (1)_B-L is however a subgroup of (10),
so B-L is a conserved charge of (10),
B - L = - (2/3) ( r + g + b ) .
(10) does not preserve baryon and lepton number individually.
After electroweak symmetry breaking,
the electroweak group (1)_Y ×(2)_L
breaks down to the electromagnetic group (1)_Q.
The electromagnetic charge Q is
Q = (1/2) Y + I_3 = z - (1/3) ( r + g + b ) .
Electroweak symmetry breaking is a loss of y-symmetry,
a loss of conservation of y-charge.
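To make the bookkeeping concrete, the short script below (an illustration written for this discussion, not code from the paper) evaluates the charge relations above for a few representative yzrgb bit patterns; the bit assignments follow the chart of the next subsection, and the printed values reproduce the familiar standard-model hypercharges, isospins and electric charges:

def charges(bits):
    """bits: dict with keys 'y','z','r','g','b'; values are +0.5 (bit up) or -0.5 (bit down)."""
    y, z = bits['y'], bits['z']
    colour_sum = bits['r'] + bits['g'] + bits['b']
    Y  = y + z - (2.0 / 3.0) * colour_sum
    I3 = 0.5 * (z - y)
    Q  = 0.5 * Y + I3                     # equivalently z - colour_sum / 3
    B_minus_L = -(2.0 / 3.0) * colour_sum
    return Y, I3, Q, B_minus_L

up, dn = +0.5, -0.5
examples = {
    "nu_L  (zrgb)": dict(y=dn, z=up, r=up, g=up, b=up),
    "e_L   (yrgb)": dict(y=up, z=dn, r=up, g=up, b=up),
    "e_R   (rgb) ": dict(y=dn, z=dn, r=up, g=up, b=up),
    "u_L^r (zr)  ": dict(y=dn, z=up, r=up, g=dn, b=dn),
    "d_R^r (r)   ": dict(y=dn, z=dn, r=up, g=dn, b=dn),
}
for name, pattern in examples.items():
    Y, I3, Q, BL = charges(pattern)
    print(f"{name}:  Y = {Y:+.3f}   I3 = {I3:+.2f}   Q = {Q:+.3f}   B-L = {BL:+.3f}")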
§.§ Spin(10) chart of spinors
Figure <ref> illustrates
one generation (the electron generation) of fermions of the standard model
arranged according to their (10) yzrgb charges.
The following (10) chart
shows the electron generation of fermions of the standard model
arrayed in columns according to the number of up-bits
(compare Table 4 of <cit.>; see also <cit.>).
The left element of each entry (before the colon) signifies which bits
are up, from – (no bits up, or
↓↓↓↓↓)
in the leftmost (0) column,
to yzrgb (all bits up, or
↑↑↑↑↑)
in the rightmost (5) column;
the right element of each entry is the corresponding fermion,
which comprise (electron) neutrinos ν, electrons e,
and up and down quarks u and d,
each in right- and left-handed Dirac chiralities R and L,
and each in (unbarred) particle and (barred) anti-particle species,
a total of 2^5 = 32 fermions:
  0            1             2              3              4               5
  –  : ν̅_L     y  : ν̅_R      c̅  : u̅_L^c̅     yc̅  : u̅_R^c̅    zrgb : ν_L      yzrgb : ν_R
               z  : e̅_R      yz : e̅_L       rgb : e_R      yrgb : e_L
               c  : d_R^c     yc : d_L^c      zc̅  : d̅_R^c̅    yzc̅  : d̅_L^c̅
                             zc : u_L^c      yzc : u_R^c
Here c denotes any of the three colours r, g, or b
(one colour bit up),
while c̅ denotes any of the three anticolours gb, br, or rg
(two colour bits up, the bit flip of a one-colour-bit-up spinor).
Every compact spin group (N) contains a subgroup ([N/2])
that preserves the number of up-bits <cit.>.
The columns of the chart (<ref>)
are (5) multiplets within (10),
with dimensions respectively 1, 5, 10, 10, 5, 1.
The standard-model group is a subgroup of (5).
All standard-model interactions preserve the number of (10) up-bits.
Each entry in the chart (<ref>)
is a 2-component massless Weyl fermion.
Lorentz transformations transform the 2 components
of a Weyl fermion among each other.
Flipping all 5 yzrgb bits turns a fermion into its antifermion partner.
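The column structure is easy to verify by brute force; the following few lines (illustrative only) enumerate the 2^5 = 32 bit patterns, group them by the number of up-bits, and confirm the multiplet dimensions 1, 5, 10, 10, 5, 1 together with the odd/even chirality rule:

from itertools import product
from math import comb

patterns = list(product([0, 1], repeat=5))            # one tuple of yzrgb bits per spinor
by_ups = {k: [p for p in patterns if sum(p) == k] for k in range(6)}

for k, group in by_ups.items():
    chirality = "R" if k % 2 == 1 else "L"
    assert len(group) == comb(5, k)                   # binomial coefficients 1, 5, 10, 10, 5, 1
    print(f"{k} up-bits: {len(group):2d} states, Dirac chirality {chirality}")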
§.§ Notable features of the Spin(10) chart
The (10) chart (<ref>) shows some notable features.
The most striking feature is that (10) chirality
coincides with Dirac chirality.
Chirality counts whether the number of up-bits is even or odd,
with right-handed defined as all bits up.
The odd and even columns of the (10) chart (<ref>)
have respectively right-handed (R) and left-handed (L)
Dirac chirality.
Modulo a phase,
chirality is (the eigenvalue of) the pseudoscalar of the algebra,
the
product of all the N vectors in the N-dimensional geometry.
The Dirac chiral operator is traditionally denoted γ_5,
which equals -i times the Dirac pseudoscalar I.
The (10) chiral operator can be denoted ϰ_10,
which equals -i times the (10) pseudoscalar I_10.
The coincidence (signified by !=) between Dirac and (10) chirality in the chart (<ref>) is
γ_5 ≡ -i I ≡ -i _0 _1 _2 _3  !=  ϰ_10 ≡ -i I_10 ≡ -i ^+_y ^-_y ^+_z ^-_z ^+_r ^-_r ^+_g ^-_g ^+_b ^-_b .
The coincidence of Dirac and (10) chiralities
suggests that the vectors _m, (m = 0, 1, 2, 3) of spacetime
are related to the vectors ^±_k (k = y, z, r, g, b) of (10),
in contrast to the usual assumption that the generators of spacetime
are unrelated to (commute with) those of grand unified symmetries.
A second notable feature of the chart (<ref>)
is that standard-model transformations are arrayed vertically,
whereas the four components of fermions of the same species,
such as electrons e_ and e_
and their positron partners e̅_ and e̅_,
are arrayed (mostly) horizontally.
In Dirac theory, a Dirac spinor such as an electron
has four complex components that
are distinguished by their properties under Lorentz transformations.
With standard-model transformations vertical
and spacetime transformations horizontal,
the chart (<ref>) seems to invite the idea
that (10) somehow contains both standard-model and
spacetime transformations under one tent.
A third notable feature
is that flipping the y-bit preserves the identity of the spinor
but flips its chirality;
for example the electron is flipped e_R ↔ e_L.
Electroweak symmetry breaking is a loss of y-symmetry
and a loss of conservation of y-charge.
In the standard model, electroweak symmetry breaking is accomplished
by a Higgs field that carries y-charge and gives mass to fermions
by flipping their y-bit.
A fourth tea leaf is that fermions (unbarred) and antifermions (barred)
in the chart (<ref>)
have colour chirality ϰ_rgb respectively positive and negative,
ϰ_rgb≡
I_rgb≡^+_r ^-_r ^+_g ^-_g ^+_b ^-_b
.
Colour chirality ϰ_rgb is positive or negative
as the number of colour up-bits is odd or even.
§.§ Spin(10) gauge fields
Associated with a spin group (N) is
a Clifford algebra <cit.>,
or geometric algebra <cit.>.
In 3+1 spacetime dimensions,
the geometric algebra is the Dirac algebra
of Dirac γ-matrices and their products
(whence the notation _a for the vectors of a geometric algebra),
<ref>.
The orthonormal vectors _a, a = 1 , ... , N
of a geometric algebra in N dimensions satisfy the product rule
_a _b = δ_ab  ( a = b ) ,    _a _b = - _b _a  ( a ≠ b ) .
If a dimension is timelike, as in the Dirac algebra,
then it is convenient to treat the timelike dimension
as the imaginary times a spacelike dimension,
as in equation (<ref>).
An antisymmetric product of vectors is commonly written with a wedge sign,
_a _b,
although in the present paper the wedge sign is often omitted for brevity
when there is no ambiguity.
Physically,
a wedge product _a_1 ... _a_p
of p vectors,
a multivector of grade p,
or p-vector,
represents a p-dimensional element in the N-dimensional space.
The geometric algebra is the algebra of linear combinations of
multivectors of all possible grades p = 0 to N.
In the presence of spinors,
the orthonormal basis vectors _a
of a geometric algebra group naturally into pairs
<cit.>,
conveniently denoted ^+_k and ^-_k.
Spinors with k bit up and down transform with opposite phases
e^∓iθ/2
(or opposite boosts e^±θ/2)
under rotations in the 2-dimensional ^+_k ^-_k plane.
The 10 orthonormal basis vectors of the (10) geometric algebra are
^±_k,
k = y, z, r, g, b.
Chiral combinations of the orthonormal basis vectors are defined by
_k ≡ ( ^+_k + i ^-_k ) / √(2) ,    _k̅ ≡ ( ^+_k - i ^-_k ) / √(2) .
The orthonormal vectors
^+_k and ^-_k
can be thought of as, modulo a normalization,
the real and imaginary parts of a complex vector
_k whose complex conjugate is _k̅.
The chiral vectors _k and _k̅
carry respectively + and - one unit of k charge;
they transform with opposite phases
^∓θ
(or opposite boosts ^±θ)
under rotations in the 2-dimensional ^+_k ^-_k plane,
and they raise and lower the charge of the object (spinor or multivector)
that they act on by one unit of k charge.
The gauge fields associated with any gauge group form a multiplet
labelled by the generators of the group.
The generators of the (2n) group with n = 5
are its n ( 2n - 1 ) = 45 orthonormal bivectors
(products of two orthonormal vectors),
comprising the 2 n ( n - 1 ) = 40 bivectors
^+_k ^+_l
=
12 ( _k + _k̅ )
( _l + _l̅ )
,
^+_k ^-_l
=
12 ( _k + _k̅ )
( _l - _l̅ )
,
^-_k ^+_l
=
12 ( _k - _k̅ )
( _l + _l̅ )
,
^-_k ^-_l
=
- 12 ( _k - _k̅ )
( _l - _l̅ )
,
with distinct indices k and l running over
y, z, r, g, b,
together with the n = 5 diagonal bivectors
12 ^+_k ^-_k
=
2 _k _k̅ ,
with indices k running over
y, z, r, g, b.
The generators of a gauge group serve two roles.
On the one hand they generate the symmetries that rotate fields.
On the other hand,
the generators are themselves fields
that are rotated by the symmetries they generate.
To appreciate the distinction,
consider the diagonal chiral bivector
12 _k _k̅ .
On the one hand, the diagonal bivector (<ref>) acts as an operator
whose eigenvalues equal the k charge of the objects it acts on
(the normalization factor of 12 serves to ensure
that is true).
As an operator, a generator acts on its argument by commutation
(equivalent to left multiplication
if the argument is a column spinor,
or right multiplication
if the argument is a row spinor).
On the other hand, the diagonal bivector (<ref>)
is itself an object, a field, whose k charge is zero.
As a field, a generator is itself acted on by commutation.
Expressions for gauge fields of subgroups of (10),
including for (5) and the standard-model,
are collected in <ref>.
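The statements above about the diagonal bivectors can be checked with an explicit matrix representation. The sketch below uses one standard construction (five fermionic creation/annihilation operators assembled Jordan-Wigner style; a generic representation chosen for convenience, not notation taken from the paper) to build the ten (10) basis vectors as 32 x 32 matrices, verify the Clifford relations, and confirm that each diagonal bivector has eigenvalues +1/2 and -1/2, i.e. it measures the corresponding charge:

import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0], [0.0, 0.0]])              # annihilation operator acting on one bit

def kron_all(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

n = 5                                                    # bits y, z, r, g, b
a = [kron_all([Z]*k + [lower] + [I2]*(n-k-1)) for k in range(n)]

gp = [ak + ak.conj().T for ak in a]                      # gamma^+_k
gm = [1j*(ak.conj().T - ak) for ak in a]                 # gamma^-_k
gammas = gp + gm

# Clifford relations {gamma_a, gamma_b} = 2 delta_ab
for i, gi in enumerate(gammas):
    for j, gj in enumerate(gammas):
        acomm = gi @ gj + gj @ gi
        target = 2*np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(acomm, target)

# diagonal bivectors as charge operators
for k in range(n):
    charge_op = 0.5j * gp[k] @ gm[k]
    eigenvalues = np.unique(np.round(np.linalg.eigvalsh(charge_op), 6))
    print(f"bit {k}: eigenvalues of (i/2) gamma^+ gamma^- = {eigenvalues}")   # [-0.5, 0.5]

Any other faithful 32-dimensional representation would serve equally well for these checks; the Jordan-Wigner strings are simply a convenient way to make the five-bit index of the spinor explicit.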
§ EXTRA DIMENSIONS
The primary purpose of this paper is to explore whether the Dirac
and (10) algebras unify nontrivially in a common geometric algebra.
This section argues that the minimal unified geometric algebra is
that associated with (11,1) in 11+1 spacetime dimensions.
§.§ Two extra dimensions, and one extra bit
Each spinor multiplet of (10) contains 2^[10/2] = 2^5 = 32 fermions.
Each of the 32 fermions of the multiplet is itself a Weyl (massless, chiral)
spinor, containing 2 degrees of freedom,
for a total of 2^6 = 64 degrees of freedom.
Each 2-component Weyl fermion transforms under the Lorentz group (3,1)
of spacetime.
If the spinors of (10) and (3,1) are to be unified
in a minimal fashion,
in a common spin group with no superfluous degrees of freedom,
then that group must have a spinor multiplet of dimension 2^6.
Certainly, there must be one timelike dimension,
to accommodate the time dimension of the Dirac algebra.
Moreover,
the total number of spacetime dimensions must be even,
because the spinor multiplet must be chiral,
and only in even dimensions
is the chirality of a spinor invariant under rotations.
Before immediately concluding that the minimal spin group must therefore have
11+1 spacetime dimensions,
bear in mind that the Dirac algebra has 2^2 = 4 complex components,
equivalent to 2^3 = 8 real degrees of freedom.
The 32 2-component Weyl spinors of (10) comprise 64 real, not complex,
degrees of freedom,
because, as is evident in the chart (<ref>),
the 64 components include the degrees of freedom associated with both
fermions and antifermions, which are conjugates of each other.
Therefore it might be possible that the 64 spinors might
constitute 2^5 = 32 complex components of spinors
in 9+1 spacetime dimensions.
The possibility of 9+1 spacetime dimensions
is ruled out by chirality.
In the standard model,
conjugation flips chirality:
the conjugate of a right-handed fermion is a left-handed antifermion.
In any geometric algebra,
the conjugation operator is, modulo a phase, the product of the spinor metric
ε with the product of all time dimensions <cit.>.
The spinor metric has the property that it flips all bits,
and in even N dimensions
each vector (multivector of grade 1) of the algebra flips one bit.
Therefore, if there is one time dimension
(or more generally, an odd number of time dimensions),
conjugation flips chirality if and only if the number [N/2] of bits is even.
This rules out 9+1 spacetime dimensions,
which has one time dimension and an odd number 5 of bits.
Therefore,
the minimal spin group must have 11+1 spacetime dimensions.
Adding two extra dimensions adjoins an additional, 6th, t-bit,
or time bit,
to the 5 yzrgb bits of a (10) spinor.
Like the other 5 bits, the t-charge of a (11,1) spinor
can be either +1/2 (t-bit up) or -1/2 (t-bit down).
In Dirac theory,
massive spinors and antispinors are complex conjugates of each other,
and massive spinors at rest are eigenvectors of the time axis,
equations (<ref>) and (<ref>).
These two conditions require interpreting
the 12th dimension ^-_t,
not the 11th dimension ^+_t,
as providing a timelike dimension
_0,
_0
≡^-_t
.
§.§ The spinor metric and the conjugation operator
Any spin algebra contains two operators,
the spinor metric ε
and the conjugation operator ,
that are invariant under rotations
<cit.>.
A consistent translation between Dirac and (11,1) representations
must agree on the behaviour of these two operators.
The Dirac spinor metric ε, equations (<ref>),
and conjugation operator , equations (<ref>),
are respectively antisymmetric (ε^2 = -1)
and symmetric (^∗ = 1).
Consistency requires that the (11,1) spinor metric
and conjugation operator be similarly antisymmetric and symmetric.
Consultation of Tables 1 and 4 of <cit.>
shows that in 11+1 dimensions
only the standard choice ε of spinor metric
and associated conjugation operator
possess the desired antisymmetry and symmetry.
The standard spinor metric in 11+1 dimensions is
ε
=
^+_t ^+_y ^+_z ^+_r ^+_g ^+_b
.
Below it will be found that the representation of the spatial rotation
generator σ_2, equation (<ref>), coincides with
the representation of the spinor metric (<ref>),
which is similar to the coincidence (<ref>) between I σ_2
and the spinor metric ε
in the chiral representation of the Dirac algebra.
Given the (11,1) spinor metric (<ref>),
and with the time axis _0 = ^-_t,
the (11,1) conjugation operator is
=
- ε_0^
=
ε_0
=
- ε^-_t
=
^+_t ^-_t ^+_y ^+_z ^+_r ^+_g ^+_b
.
Whereas the (11,1) spinor metric (<ref>)
flips all bits,
the (11,1) conjugation operator (<ref>)
flips all bits except t, that is, it flips yzrgb.
This is the same as the (10) conjugation operator,
which flips the five yzrgb bits.
That the (11,1) and Dirac ((3,1)) geometric algebras
share the same symmetries is no coincidence:
it stems from the period-8 Cartan-Bott periodicity
<cit.>
of geometric algebras,
evident in Tables 1 and 4 of <cit.>.
§.§ Spin(11,3)?
Nesti & Percacci <cit.>
and
Krasnov <cit.>
have previously proposed that (10) and the Lorentz group (3,1)
are unified in (11,3),
and that the 64 spinors of a generation comprise one of the two chiral
components of the 2^7 = 128 spinors of (11,3).
The 64 spinors comprise 32 spinors that are direct products of
right-handed (10) spinors with right-handed Dirac spinors,
and their 32 antispinor conjugate partners that are direct products of
left-handed (10) spinors with left-handed Dirac spinors.
It is precisely the coincidence of (10) and Dirac chirality
that ensures the consistency of the construction.
The Coleman-Mandula theorem is satisfied because
the even (10) geometric algebra
and
the Dirac algebra
are commuting subalgebras of the (11,3) geometric algebra.
Percacci <cit.>
(see <cit.>)
has previously proposed that the grand unified group (10)
and the Lorentz group (3,1) are unified in (13,1).
That proposal runs into the difficulty
that the conjugation operator in 13+1 spacetime dimensions is antisymmetric,
whereas the conjugation operator in the Dirac algebra is symmetric
(see Table 4 of <cit.>).
§ THE DIRAC AND STANDARD-MODEL ALGEBRAS ARE COMMUTING SUBALGEBRAS OF THE SPIN(11,1) GEOMETRIC ALGEBRA
§.§ Spin(11,1) chart of spinors
The challenge now is to reinterpret
the (10) chart (<ref>) of fermions
as living in (11,1).
Recall that in the standard picture
each entry in the (10) chart is assumed to be a 2-component Weyl spinor,
which is equivalent to the usual assumption
that spacetime and (10) combine as a direct product.
That picture must be abandoned here,
because the hypothesis of this paper is precisely
that the Dirac and (10) groups do not combine as a direct product.
In the (11,1) picture,
the two components of each Weyl spinor must be distributed into
two separate entries.
Translated into (11,1),
each entry in the (10) chart (<ref>)
must still contain 2 components,
one with t-bit up, the other with t-bit down,
but those 2 components do not Lorentz-transform into each other.
At first sight it might seem that it would be impossible to distribute
the two components of a Weyl spinor in separate entries,
because the two components of a Weyl spinor necessarily have the same charge
(being related by a proper Lorentz transformation),
whereas different entries in the (10) chart (<ref>)
carry different charges.
However, there is a trick that gets around that difficulty
(this is a key trick, without which this paper would not have been written).
The trick is suggested by the fourth tea leaf of <ref>,
that the spinors of the (10) chart (<ref>)
are fermions (unbarred) or antifermions (barred)
as the colour chirality ϰ_rgb, equation (<ref>),
is positive or negative.
The five (10) charges of a spinor are eigenvalues of the five diagonal
bivectors (<ref>).
If these diagonal bivectors are modified by multiplying them
by ϰ_rgb, then their eigenvalues will measure the charge
of the fermion, not the antifermion, in all entries of the (10) chart.
A key point that allows this adjustment to be made consistently
is that ϰ_rgb commutes with all standard-model bivectors.
Notably, ϰ_rgb does not commute
with (5) bivectors that transform between leptons and quarks;
but that is fine,
because (5) is not an unbroken symmetry of the standard model.
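The commutation properties of ϰ_rgb quoted above are easy to confirm numerically with the same generic 5-bit matrix representation used earlier (again an illustration, with ϰ_rgb constructed only up to an overall phase, which does not affect commutators): ϰ_rgb commutes with bivectors whose two indices lie both in the weak sector or both in the colour sector, which covers the standard-model generators, but anticommutes with leptoquark bivectors that mix the two sectors.

import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0], [0.0, 0.0]])

def kron_all(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

n = 5                                                   # bit order: y, z, r, g, b
a = [kron_all([Z]*k + [lower] + [I2]*(n-k-1)) for k in range(n)]
gp = [ak + ak.conj().T for ak in a]                     # gamma^+_k
gm = [1j*(ak.conj().T - ak) for ak in a]                # gamma^-_k

kappa_rgb = np.eye(2**n, dtype=complex)
for k in (2, 3, 4):                                     # colour bits r, g, b
    kappa_rgb = kappa_rgb @ gp[k] @ gm[k]

def commutes(A, B):
    return np.allclose(A @ B, B @ A)

weak_biv  = gp[0] @ gm[1]     # y-z bivector: both indices in the weak sector
col_biv   = gp[2] @ gm[3]     # r-g bivector: both indices in the colour sector
lepto_biv = gp[0] @ gm[2]     # y-r bivector: mixes the sectors (an SU(5)-only generator)

print(commutes(weak_biv,  kappa_rgb))   # True
print(commutes(col_biv,   kappa_rgb))   # True
print(commutes(lepto_biv, kappa_rgb))   # False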
A consistent way to implement this modification,
that leaves the bivector algebra of the standard model
(but not of (5)) unchanged,
is to multiply all imaginary bivectors ^+_k ^-_l
by ϰ_rgb,
while leaving all real bivectors
^+_k ^+_l and ^-_k ^-_l
unchanged,
^+_k
^-_l
→^+_k
^-_l
ϰ_rgb ,
k,l = t,y,z,r,g,b
.
Equivalently,
replace the imaginary in all (10) multivectors
by the colour pseudoscalar - I_rgb = ϰ_rgb,
equation (<ref>).
Although at this point the modification (<ref>)
is needed only for bivectors in the (10) algebra,
for which k,l = y,z,r,g,b
(excluding t),
it turns out that the electroweak and grand Higgs fields,
equations (<ref>) and (<ref>),
have the correct symmetry-breaking behaviour
only if the modification (<ref>)
is extended to all (11,1) bivectors, that is,
for all of k,l = t,y,z,r,g,b.
The purpose of making the modification (<ref>)
was to allow Lorentz transformations
to connect fermions across different entries of
the (10) chart (<ref>),
and that works, as will now be shown.
The modification (<ref>)
serves to replace each antifermion in the chart with the corresponding fermion.
For example, the positron entries
e̅_ and e̅_
are replaced by electrons
e_ and e_.
What about antifermions?
Where have they gone?
The answer is that antifermions are obtained from fermions in the usual way
<cit.>,
by taking their complex conjugates and multiplying by the conjugation operator
, equation (<ref>),
ψ≡ψ^∗.
Thus antifermions appear in a second copy of
the (10) chart (<ref>),
a conjugated version in which all fermions are replaced by antifermions.
It requires some work, <ref>,
to establish the correct assignment of Dirac boost (⇑ or ⇓)
and spin (↑ or ↓) bits,
but the end result is the following (11,1) chart of spinors,
arranged in columns by the number of (10) up-bits
as in the earlier chart (<ref>):
  0                1                 2                  3                  4                  5
  –  : ν̅ / ν       y  : ν̅ / ν        c̅  : u̅^c̅ / u^c     yc̅  : u̅^c̅ / u^c    zrgb : ν / ν̅       yzrgb : ν / ν̅
                   z  : e̅ / e        yz : e̅ / e         rgb : e / e̅        yrgb : e / e̅
                   c  : d^c / d̅^c̅    yc : d^c / d̅^c̅     zc̅  : d̅^c̅ / d^c    yzc̅  : d̅^c̅ / d^c
                                     zc : u^c / u̅^c̅     yzc : u^c / u̅^c̅
(in each entry the first spinor is the component with t-bit up, the second the component with t-bit down)
Whereas in the original (10) chart (<ref>)
each entry was a 2-component Weyl spinor,
in the (11,1) chart (<ref>)
the 2 components of each Weyl spinor appear in bit-flipped entries.
For example, the right-handed electron e_ of the original chart
is replaced by e_,
and its spatially rotated partner e_ of the same chirality
appears in the all-bit-flipped entry.
Each entry still has two components,
but in the (11,1) chart those two components differ by their t-bit:
the upper component has t-bit up,
the lower t-bit down.
The net number of degrees of freedom remains the same, 2^6 = 64.
Flipping the t-bit transforms a fermion
into its antifermionic partner of opposite boost.
Flipping all bits except the t-bit transforms a fermion
into its antifermionic partner of opposite spin.
The Dirac boost bit in the (11,1) chart (<ref>) is
⇑ or ⇓ as ϰ_tyz is positive or negative,
that is, as the number of tyz up-bits is odd or even.
The Dirac spin bit is ↑ or ↓
as ϰ_rgb is positive or negative,
that is, as the number of rgb up-bits is odd or even.
Whereas in the original (10) chart (<ref>)
a spinor was a fermion or antifermion
as ϰ_rgb was positive or negative,
in the (11,1) chart (<ref>),
a spinor is a fermion or antifermion
as the time-colour chirality ϰ_trgb is positive or negative,
that is, as the number of trgb up-bits is even or odd.
A Dirac spinor has 4 complex components, for a total of 8 degrees of freedom.
As previously in the (10) chart (<ref>),
each Dirac spinor in the (11,1) chart (<ref>)
has 8 components, so each component must represent one real degree of freedom.
In any geometric algebra with one time dimension,
conjugation flips all bits except the boost bit.
This is true in the Dirac algebra,
equation (<ref>),
and it is also true in the (11,1) algebra,
where the boost bit is the time bit t.
Figure <ref> illustrates
one generation (the electron generation) of fermions of the standard model
arranged according to their (11,1) tyzrgb charges.
The correctness of the assignment of Dirac boost and spin bits
in the (11,1) chart (<ref>),
and the consistency of the entire construction,
will now be established.
§.§ Justification of the Spin(11,1) chart
The assignment of Dirac boost and spin bits for each individual species
(electrons, for example)
in the (11,1) chart (<ref>)
is determined by two conditions,
that
conjugation flips spin ↑ ↔ ↓,
while
flipping the y-bit flips boost ⇑ ↔ ⇓.
The first condition holds because
the expression (<ref>) for the Dirac conjugation operator
in the chiral representation shows that Dirac conjugation flips
chirality and spin, but not boost.
The second condition holds because,
after electroweak symmetry breaking,
flipping the y-bit flips spinors
between right- and left-handed Dirac chiralities of the same species,
for example e_ ↔ e_.
Massive spinors are linear combinations of the two chiralities.
Since massive spinors have definite spin, either ↑ or ↓,
flipping the y-bit must flip the Dirac boost bit
while preserving the spin bit,
for example,
e_ ↔ e_.
The two conditions
suffice to determine the translation between
Dirac and (11,1) spinors of the same species,
but they do not fix the translation across different species.
The translation across different species
is determined by the condition that Lorentz transformations commute
with standard-model transformations,
in accordance with the Coleman-Mandula theorem.
The Dirac algebra contains two mutually commuting operators,
_0 _3 and _1 _2,
that respectively generate a boost and a spatial rotation
without flipping any bits.
The operator _0 _3 generates a boost
in the ⇑-⇓ boost plane:
a boost by rapidity θ
multiplies ⇑ and ⇓ spinors
by a real number ^±θ/2.
The operator _1 _2 generates a spatial rotation
in the ↑-↓ spin plane:
a rotation by angle θ
multiplies ↑ and ↓ spinors
by a phase ^∓θ/2.
The (11,1) geometric algebra contains three mutually commuting
operators that flip no bits,
and at the same time commute with all standard-model transformations,
namely the time pseudoscalar I_t,
the weak pseudoscalar I_yz,
and the colour pseudoscalar I_rgb
(the definition (<ref>) essentially repeats the earlier
definition (<ref>)),
I_t
≡-^+_t ^-_t
=
ϰ_t
≡_t _t̅
,
I_yz
≡^+_y ^-_y ^+_z ^-_z
=
- ϰ_yz
≡- _y _y̅ _z _z̅
,
I_rgb
≡^+_r ^-_r ^+_g ^-_g ^+_b ^-_b
=
-ϰ_rgb
≡-_r _r̅ _g _g̅ _b _b̅
.
The weak pseudoscalar I_yz changes sign when an odd number of yz bits
are flipped,
and, as remarked above,
flipping an odd number of yz bits flips the Dirac boost bit.
But I_yz is a spacelike operator
(it generates a rotation by a phase),
so cannot by itself generate a boost.
On the other hand the time pseudoscalar I_t does generate a Lorentz boost,
whose action is independent of the yz bits.
Thus the combination I_tyz generates a Lorentz boost
that acts oppositely on spinors of opposite weak chirality,
consistent with the behaviour of the Dirac boost operator _0 _3.
The (11,1) boost operator that can be identified with
the Dirac boost operator _0 _3 is
I_tyz≡
-^+_t ^-_t ^+_y ^-_y ^+_z ^-_z
=
- ϰ_tyz≡
-
_t _t̅_y _y̅_z _z̅ .
It was remarked just before the definition (<ref>)
of colour chirality
that (10)-conjugate spinors have opposite colour chirality
ϰ_rgb.
As remarked above, in the Dirac algebra,
conjugation flips spin but not boost.
Therefore the colour pseudoscalar I_rgb can be identified with
the Dirac spin operator _1 _2.
The product of the boost operator I_tyz
and the spin operator I_rgb equals
the 12-dimensional pseudoscalar ,
≡
I_tyz I_rgb
=
-^+_t ^-_t
^+_y ^-_y ^+_z ^-_z
^+_r ^-_r ^+_g ^-_g ^+_b ^-_b
=
ϰ_12≡_t _t̅_y _y̅_z _z̅_r _r̅_g _g̅_b _b̅ .
The 12-dimensional chiral operator ϰ_12
corresponding to the Dirac chiral operator γ_5 = - I is
ϰ_12
=
- .
It is the 12-dimensional chiral operator ϰ_12
that should be identified with the Dirac chiral operator γ_5,
not the 10-dimensional chiral operator ϰ_10
as suggested by the coincidence (<ref>).
The final ingredient to complete the translation between
Dirac and (11,1) algebras is to identify an operator that
connects the two Weyl components of each spinor
in the (11,1) chart (<ref>)
to each other.
In the Dirac algebra,
the two Weyl components are connected by flipping both bits.
A Dirac operator that flips both bits is
I σ_2 = - _1 _3,
equation (<ref>),
which is also the generator of a spatial rotation.
The corresponding operator that flips all bits in the (11,1) algebra is
σ_2
≡^+_t ^+_y ^+_z ^+_r ^+_g ^+_b
,
where is the pseudoscalar (<ref>).
Equation (<ref>) can be regarded as defining σ_2.
Below, equation (<ref>),
σ_2 will be identified as a generator of a Lorentz boost.
The expression (<ref>) for σ_2 coincides with
that for the (11,1) spinor metric ε,
equation (<ref>),
but the two are not the same because σ_2 transforms as a multivector
whereas the spinor metric ε transforms
as a (Lorentz-invariant) spinor tensor.
The coincidence of the expressions for σ_2 and ε
is similar to the coincidence (<ref>) between I σ_2
and the spinor metric ε
in the chiral representation of the Dirac algebra.
To qualify as a satisfactory operator that generates a spatial rotation,
the operator σ_2 defined by equation (<ref>)
must satisfy two conditions.
First,
consistent with the expected anticommutation of generators of spatial rotations,
σ_2 must anticommute with I_rgb,
which was identified above as generating a spatial rotation.
Second,
in accordance with the Coleman-Mandula theorem,
σ_2 must commute with all standard-model bivectors
modified per (<ref>).
Both conditions hold, and that should not be too surprising.
Recall that the modification (<ref>) of (10)
bivectors was done precisely to enable the existence of an operator
that connects the two Weyl components of a spinor by a bit-flip;
and the two Weyl components of a spinor are related by a spatial rotation.
Note that whereas the generator σ_2, equation (<ref>),
of a spatial rotation flips all six tyzrgb bits,
and preserves standard-model charges,
the (11,1) conjugation operator , equation (<ref>),
which transforms a fermion into its antifermionic partner,
flips all bits except t, and flips all standard-model charges,
consistent with the assignment of fermions
in the (11,1) chart (<ref>).
§.§ The Dirac algebra as a subalgebra of the Spin(11,1) geometric algebra
The Dirac algebra can now be expressed in terms of the
(11,1) geometric algebra.
The generators
I_rgb and σ_2,
equations (<ref>) and (<ref>),
and their product
constitute a set of 3 anticommuting generators of spatial rotations that
commute with all standard-model bivectors
modified per (<ref>).
The pseudoscalar is given by equation (<ref>).
The full set of 6 Lorentz generators,
consisting of 3 spatial generators σ_a
and 3 boost generators σ_a, is
σ_1
=
- ^+_t ^+_y ^+_z ^-_r ^-_g ^-_b
,
σ_2
=
^+_t ^+_y ^+_z ^+_r ^+_g ^+_b
,
I_rgb
=
σ_3
=
^+_r ^-_r ^+_g ^-_g ^+_b ^-_b
,
σ_1
=
^-_t ^-_y ^-_z ^+_r ^+_g ^+_b
,
σ_2
=
^-_t ^-_y ^-_z ^-_r ^-_g ^-_b
,
I_tyz
=
σ_3
=
-^+_t ^-_t ^+_y ^-_y ^+_z ^-_z
.
The 6 Lorentz generators all have grade 6.
They are not bivectors,
but they nevertheless generate Lorentz transformations.
The 8 basis elements of the complete Lie algebra of Lorentz transformations
comprise the 6 Lorentz generators (<ref>)
along with the unit element and the pseudoscalar
given by equation (<ref>).
The commutation rules of the elements of the Lie algebra
are those of the Lorentz algebra.
With the modification (<ref>) to standard-model generators,
all the Lorentz generators commute with all standard-model generators.
Given a time vector _0
and a set of generators σ_a of Lorentz boosts,
spatial vectors _a
can be deduced by Lorentz transforming _0 appropriately.
Since the boost generators satisfy
σ_a = _0 _a,
spatial vectors satisfy
_a = - _0 σ_a.
With the time axis _0 = ^-_t
and the expressions (<ref>) for σ_a,
the full set of 4 orthonormal spacetime vectors _m is
_0
=
^-_t
,
_1
=
^-_y ^-_z ^+_r ^+_g ^+_b
,
_2
=
^-_y ^-_z ^-_r ^-_g ^-_b
,
_3
=
^+_t ^+_y ^-_y ^+_z ^-_z
.
The vectors (<ref>) all have grade 1 mod 4.
The multiplication rules for
the vectors _m given by equations (<ref>)
agree with the usual multiplication rules for Dirac γ-matrices:
the vectors _m anticommute,
and their scalar products form the Minkowski metric.
All the spacetime vectors _m commute with all standard-model bivectors
modified per (<ref>).
The Dirac pseudoscalar I coincides with
the (11,1) pseudoscalar
defined by equation (<ref>),
I
≡_0 _1 _2 _3
=
.
Equivalently,
the Dirac chiral operator γ_5 ≡ - I
coincides with the (11,1) chiral operator
ϰ_12≡ -.
Thus the Dirac and standard-model algebras are subalgebras
of the (11,1) geometric algebra,
such that all Dirac generators commute with all standard-model generators
modified per (<ref>),
as was to be proved.
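These identifications can be spot-checked numerically. The sketch below (illustrative only; it extends the generic six-bit matrix construction used earlier, takes the product expressions for the spacetime vectors exactly as printed, and follows the text's convention of treating the timelike vector as i times a spacelike one, so any overall phase factors lost in transcription would not affect the checks) verifies that the composite vectors satisfy the Dirac anticommutation relations with metric diag(-1,+1,+1,+1), and that an imaginary colour bivector commutes with the first spatial vector only after the ϰ_rgb modification (<ref>):

import numpy as np

I2, Zm = np.eye(2), np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0], [0.0, 0.0]])

def kron_all(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

bits = ['t', 'y', 'z', 'r', 'g', 'b']
n = len(bits)
a = {k: kron_all([Zm]*i + [lower] + [I2]*(n-i-1)) for i, k in enumerate(bits)}
gp = {k: ak + ak.conj().T for k, ak in a.items()}            # gamma^+_k
gm = {k: 1j*(ak.conj().T - ak) for k, ak in a.items()}       # gamma^-_k
gm['t'] = 1j * gm['t']                                        # timelike: (gamma^-_t)^2 = -1

def prod(mats):
    out = np.eye(2**n, dtype=complex)
    for m in mats:
        out = out @ m
    return out

g0 = gm['t']
g1 = prod([gm['y'], gm['z'], gp['r'], gp['g'], gp['b']])
g2 = prod([gm['y'], gm['z'], gm['r'], gm['g'], gm['b']])
g3 = prod([gp['t'], gp['y'], gm['y'], gp['z'], gm['z']])
gammas = [g0, g1, g2, g3]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

for m in range(4):
    for k in range(4):
        acomm = gammas[m] @ gammas[k] + gammas[k] @ gammas[m]
        assert np.allclose(acomm, 2*eta[m, k]*np.eye(2**n))
print("composite gamma_m satisfy {gamma_m, gamma_k} = 2 eta_mk")

# an imaginary colour bivector commutes with gamma_1 only after multiplication by kappa_rgb
kappa_rgb = prod([gp['r'], gm['r'], gp['g'], gm['g'], gp['b'], gm['b']])   # up to a phase
biv = gp['r'] @ gm['g']
print(np.allclose(biv @ g1, g1 @ biv))                        # False: unmodified bivector
print(np.allclose(biv @ kappa_rgb @ g1, g1 @ biv @ kappa_rgb))  # True: modified bivector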
The time dimension (<ref>) is just a simple vector
in the (11,1) algebra,
but the 3 spatial dimensions (<ref>)–(<ref>)
are all 5-dimensional.
The spatial dimensions
share a common 2-dimensional factor ^-_y ^-_z.
Aside from that common factor,
each of the 3 spatial dimensions is itself 3-dimensional:
^+_r ^+_g ^+_b,
^-_r ^-_g ^-_b,
and
^+_t ^+_y ^+_z.
§ THE PATH FROM THE STANDARD MODEL TO GRAND UNIFICATION
The previous section <ref>
established the main result of this paper,
that the Dirac and standard-model algebras are commuting subalgebras
of the (11,1) geometric algebra.
It would be nice to declare victory at this point.
However, there is more hard work to do,
to determine whether there exists a viable symmetry breaking chain
from (11,1) to the standard model.
As described in the Introduction, <ref>,
there is a substantial literature on possible symmetry breaking chains
from (10).
The bottom line of the next several
sections <ref>–<ref>,
is that there does appear to be a symmetry breaking chain (<ref>),
and the Higgs sector that mediates it is encouragingly economical,
namely a 66-component (11,1) bivector multiplet
modified per (<ref>).
In other words, the Higgs field is the scalar (spin 0)
counterpart of the vector (spin 1) gauge fields in (11,1).
A key property of Higgs fields is that an unbroken massless gauge field
is broken and becomes massive by absorbing a Higgs scalar into its
longitudinal component.
With the Higgs sector matching the gauge sector,
for each gauge field in (11,1)
there is a matching Higgs field available to make it massive if needed.
It is possible that there is a more complicated route
that involves adjoining fields outside the realm of (11,1),
but we have not pursued that possibility.
If there are other possibilities,
the model with a 66-component (11,1) Higgs sector
is the minimal model.
Symmetry breaking from (11,1) is constrained
by the condition that the spacetime and internal algebras commute
at every step of unification,
as required by the Coleman-Mandula theorem.
For example,
although the Dirac vectors _m,
equations (<ref>),
all commute with the all the bivector generators of the
(1)_Y ×(2)_×(3)_c
standard model,
modified per (<ref>),
the Dirac vectors _m
do not commute with general (5) transformations.
As long as spacetime is 4-dimensional and spacetime transformations
commute with internal rotations,
(5) cannot be an internal symmetry.
An exhaustive search establishes that the largest subgroup of (11,1)
whose bivector generators,
modified per (<ref>),
all commute with the spacetime vectors (<ref>)
is a product of weak and colour groups
(5)_w ×(6)_c,
in which the generators of (5)_w are the 10 bivectors formed
from antisymmetric products of the five vectors
^+_t,
^+_y, ^-_y,
^+_z, ^-_z,
while the generators of (6)_c are the 15 bivectors formed
from antisymmetric products of the six vectors
^+_k and ^-_l
with k and l running over r,g,b.
More generally,
the largest subalgebra of the (11,1) geometric algebra whose generators,
modified per (<ref>),
all commute with the spacetime vectors (<ref>)
is the 2^4 × 2^5 = 2^9 dimensional direct product of the even
geometric algebras associated with respectively (5)_w and (6)_c.
Even more generally,
in order to commute with all spacetime vectors (<ref>),
a tensor of (modified) multivectors
must have an even total number of t^- indices,
and an even total number of t^+,y^±,z^± indices,
and an even total number of r^±,g^±,b^± indices.
This suggests that (5)_w ×(6)_c could be on the path
up to grand unification while spacetime is still 4-dimensional.
As discussed in <ref>,
the 4 modified bivectors involving the 11th spatial dimension ^+_t,
namely ^+_t ^±_k, k = y,z,
modified per (<ref>),
call attention to themselves because they match the
properties of the Lorentz-scalar electroweak Higgs multiplet.
Although the 4 (modified) bivectors
^+_t ^±_k, k = y,z,
can generate the electroweak Higgs field, it turns out that they
cannot in addition generate a gauge field,
because they do not commute
with the grand Higgs field (<ref>)
that causes grand symmetry breaking.
This leaves the Pati-Salam group
(4)_w ×(6)_c
as the largest possible gauge group on the path
to grand unification while spacetime is still 4-dimensional.
If the (11,1) model of this paper is correct,
then surely intermediate unification to (4)_w ×(6)_c must occur,
since otherwise,
in the absence of new physics such as supersymmetry <cit.>,
the three coupling parameters of the standard model do not meet,
and grand unification does not occur.
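For orientation, the textbook one-loop running below (standard-model beta coefficients and rounded couplings at M_Z; generic material, not a calculation from this paper) illustrates the statement: evolved with standard-model content alone, the three inverse couplings approach one another at very high scales, but their pairwise crossings do not coincide at a single point.

import numpy as np

M_Z = 91.19                                  # GeV
alpha_em, sin2w, alpha_s = 1/127.9, 0.231, 0.118
alpha_inv = np.array([
    (3/5) * (1 - sin2w) / alpha_em,          # 1/alpha_1 (GUT-normalised U(1)_Y)
    sin2w / alpha_em,                        # 1/alpha_2 (SU(2)_L)
    1 / alpha_s,                             # 1/alpha_3 (SU(3)_c)
])
b = np.array([41/10, -19/6, -7])             # one-loop standard-model beta coefficients

def run(mu):
    """Inverse couplings alpha_i^-1(mu) at one loop."""
    return alpha_inv - b / (2*np.pi) * np.log(mu / M_Z)

for mu in (1e12, 1e15, 1e16):
    a1, a2, a3 = run(mu)
    print(f"mu = {mu:.0e} GeV:  1/alpha_1 = {a1:.1f}, 1/alpha_2 = {a2:.1f}, 1/alpha_3 = {a3:.1f}")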
A gauge field is a Lorentz vector
because it arises as a connection in a gauge-covariant spacetime derivative.
A Higgs field on the other hand must be a Lorentz scalar, since otherwise
it would break the Lorentz symmetry of the vacuum
(it would impose a preferred spatial direction and/or rest frame),
contrary to observation.
For a gauge field,
a scalar Lagrangian can be constructed from
the commutator of the gauge-covariant derivative.
For a Higgs field,
any generator that commutes with Lorentz symmetries (<ref>)
defines a Lorentz scalar that could play the role of a scalar field
in the Lagrangian.
Whereas gauge fields
must transform according to the adjoint representation of the group,
Higgs generators could potentially transform in any representation of the group.
However, as remarked above,
the minimal Higgs sector in the (11,1) model turns out to
transform as the adjoint representation, just like the gauge fields.
The general principles underlying symmetry breaking by the Higgs mechanism
<cit.>
are:
(1) the Higgs field before symmetry breaking is a scalar (spin 0) multiplet
of the unbroken symmetry;
(2) one component of the Higgs multiplet acquires a nonzero vacuum
expectation value, and that component is invariant (uncharged, a singlet)
under the symmetries that remain after symmetry breaking;
(3) components of the Higgs multiplet whose symmetry is broken
are absorbed into longitudinal components of the broken gauge (spin 1) fields
by the Goldstone mechanism <cit.>,
giving those gauge fields mass;
and (4) unbroken components of the Higgs multiplet persist as scalar fields,
potentially available to mediate the next level of symmetry breaking.
There are three stages of symmetry breaking in the (11,1) model,
namely grand symmetry breaking,
Pati-Salam ((4)_w ×(6)_c) symmetry breaking,
and electroweak symmetry breaking.
All the Higgs fields involved in symmetry breaking lie
in a common 66-component (11,1) multiplet of bivectors
modified per (<ref>).
Denote this Higgs multiplet ,
after Englert-Brout
<cit.>,
who proposed the Higgs mechanism at the same time as
(marginally before) Higgs <cit.>,
≡
E^k^± l^± ^±_k ^±_l ,  k, l ∈ t,y,z,r,g,b ,
in which it is to be understood that all imaginary bivectors
(^+_k ^-_l or ^-_k ^+_l)
are to be multiplied by ϰ_rgb,
per the modification (<ref>).
The modification (<ref>) is necessary to ensure that,
after grand symmetry breaking,
all unbroken fields commute with all Lorentz bivectors (<ref>).
There is a complication to the simple story of the previous paragraph.
The 4 (modified) bivectors
^+_t ^±_k, k = y,z,
that belong to the (5)_w but not (4)_w algebra
appear to be special.
The grand Higgs field ⟨⟩,
equation (<ref>),
fails to commute with these 4 bivectors,
yet the associated scalar fields are not absorbed into their
partner vector fields,
but rather persist as the 4-component electroweak Higgs multiplet.
The only way we have been able to make sense of that complication
is to postulate that the dimension ^+_t is a scalar dimension
that does not generate any symmetry,
a possibility discussed in 4.4 of <cit.>.
In other words,
the gauge group before grand symmetry breaking is not (11,1),
but rather (10,1),
the group generated by the 11 bivectors formed from
the time vector ^-_t
and the 10 (10) vectors ^±_k, k = y,z,r,g,b.
This conclusion is discussed further in <ref>.
§ ELECTROWEAK SYMMETRY BREAKING
The 4 bivector generators ^+_t ^±_k with k = y,z
call attention to themselves
because they transform spinors by one unit
of standard-model charge y or z,
whereas the remaining 6 + 15 = 21 of the 10 + 15 = 25 bivector generators
of (5)_w ×(6)_c,
which generate the Pati-Salam group
(4)_w ×(6)_c <cit.>,
transform spinors by an even number of standard-model charges yzrgb.
The electroweak Higgs field responsible for breaking y-symmetry
carries one unit of y-charge.
The electroweak Higgs field gives masses to fermions
by flipping their y-bit.
The Weinberg theory of electroweak symmetry breaking <cit.>
requires the electroweak Higgs field to be part of a multiplet of 4 fields
that transform into each other under
(1)_Y ×(2)_.
Indeed the 4 bivector generators ^+_t ^±_k with k = y,z
provide such a set of fields.
The 4-component Higgs field may be defined by
≡
H^k^±^+_t ^±_k
,
k = y, z
,
where the imaginary bivectors
^+_t ^-_k, k = y,z,
are to be understood as being multiplied by ϰ_rgb
per the modification (<ref>).
Electroweak symmetry breaking occurs when the Higgs field
acquires a vacuum expectation value ⟨⟩
proportional to ^+_t ^-_y,
⟨⟩
=
⟨ H ⟩^+_t ^-_y ϰ_rgb ,
in which the modification factor ϰ_rgb has been included
explicitly to avoid possible misunderstanding.
When combined with the time axis -_0 ≡^-_t
in a fermion mass term
ψ·⟨⟩ψ = - ψ^†_0 ⟨⟩ψ,
the vacuum Higgs field (<ref>) yields a Dirac mass term
^-_t⟨⟩
=
⟨ H ⟩^-_t ^+_t ^-_y ϰ_rgb ,
which indeed flips the y-bit as it should.
The vacuum Higgs field (<ref>) is proportional to
^-_y
not ^+_y
because the factor ^-_y in the mass term (<ref>)
preserves the spinor identity,
whereas ^+_y flips between spinor and antispinor,
in much the same way that in the Dirac algebra
the time axis _0 maps massive spinors and antispinors to themselves,
whereas the time axis' boost partner _3
flips between massive spinor and massive antispinor,
equations (<ref>).
Note that whereas the Higgs multiplet , equation (<ref>),
commutes with the time axis _0,
the pseudoscalar multiplet does not,
so it is ⟨⟩ and not ⟨⟩
that generates a Dirac mass.
The electroweak Higgs field (<ref>) possesses
(4)_w rotational symmetry in the 4-dimensional space of y,z vectors.
The manner in which the (4)_w symmetry
is broken to the electroweak symmetry
(1)_Y ×(2)_
is noted in <ref>,
equation (<ref>).
The remainder of this section <ref>
is essentially a standard exposition of electroweak symmetry breaking,
adapted to the present notation;
see for example <cit.>
for a pedagogical account.
To avoid overloading the notation,
all the gauge and Higgs generators
in the rest of this section <ref>
are treated as bivectors,
whereas correctly they should be modified
per (<ref>),
by multiplying all imaginary bivectors by the colour chiral operator
ϰ_rgb.
The operator ϰ_rgb commutes with all standard-model
bivectors, so its exclusion makes no difference to the algebra.
For a spinor field ψ, the gauge-covariant derivative
with respect to (1)_Y ×(2)_ transformations is
_m ψ
=
(
∂_m
+
g_Y _m
+
g_w _m
)
ψ ,
where _m and _m
are the (1)_Y and (2)_ gauge fields
_m
≡ B_m Y
, _m
≡ W_m^k τ_k
,
and g_Y and g_w are dimensionless coupling strengths
for those fields.
Here Y, equation (<ref>),
is the generator of the hypercharge symmetry (1)_Y,
while τ_k
with τ_k satisfying the Pauli algebra,
equations (<ref>),
are the 3 generators of (2)_.
The weak Pauli matrix τ_3 acting on a spinor
has eigenvalue equal to twice the weak isospin 2 I_ = z - y,
equation (<ref>).
The electromagnetic charge generator Q, equation (<ref>),
is related to the hypercharge and weak generators Y and τ_3 by,
equation (<ref>),
Q = (1/2) ( Y + τ_3 ) .
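For example, with the usual assignments the left-handed lepton doublet has Y = -1, so the neutrino (τ_3 = +1) has Q = 0 and the electron (τ_3 = -1) has Q = -1, while the right-handed electron has Y = -2 and τ_3 = 0, again giving Q = -1.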
The sum W_m^k τ_k in the gauge field _m, equation (<ref>),
can be expressed with respect to either an orthonormal or a chiral basis,
W_m^k τ_k
=
W_m^1 τ_1 + W_m^2 τ_2 + W_m^3 τ_3
=
W_m^+ τ_+ + W_m^- τ_- + W_m^3 τ_3
,
where
W_m^±≡ ( W_m^1 ∓ W_m^2 ) / √(2),
and the chiral Pauli operators τ_± are
τ_+ ≡ (τ_1 + τ_2)/√2 = _z _y̅/√2 ,  τ_- ≡ (τ_1 - τ_2)/√2 = _y _z̅/√2 .
The operator τ_+
increases z-charge by 1
and decreases y-charge by 1,
and therefore carries +1 unit
of each of electric charge Q and weak isospin I_.
Conversely,
τ_-
decreases z-charge by 1
and increases y-charge by 1,
and therefore carries -1 unit
of each of electric charge Q and weak isospin I_.
The operators Y and τ_3 leave y- and z-charge unchanged,
so carry zero electric charge Q and weak isospin I_.
Introduce the weak mixing, or Weinberg, angle θ_w defined by
sinθ_w ≡ g_Y/g ,  cosθ_w ≡ g_w/g ,  g ≡ √(g_Y^2 + g_w^2) .
Define the electromagnetic and weak fields A_m and Z_m
to be the orthogonal linear combinations of
B_m and W_m^3,
A_m ≡ cosθ_w B_m + sinθ_w W_m^3 ,  Z_m ≡ -sinθ_w B_m + cosθ_w W_m^3 .
In terms of the rotated electromagnetic and weak fields A_m and Z_m,
the electroweak gauge connection is
g_Y _m
+
g_w _m
=
(
2 e A_m Q
+
2 g Z_m ( I_ - sin^2θ_w Q )
+
g_w ( W_m^+ τ_+ + W_m^- τ_- )
)
,
where the electromagnetic coupling e is
e = g_Y g_w/g = g_Y cosθ_w = g_w sinθ_w = g cosθ_w sinθ_w .
The electromagnetic coupling e is related to the fine-structure constant
α by e^2 = 4πα.
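As a quick numerical cross-check of these relations, the following minimal Python sketch (an illustration only; the variable names are ours) uses the measured values α = 1/127.955 and sin^2θ_w = 0.2315 quoted later in the running-coupling section, and verifies that e = g_Y cosθ_w = g_w sinθ_w = g_Y g_w/g.
\begin{verbatim}
import math

# Measured inputs at mu = m_Z (values quoted later in the text)
alpha    = 1 / 127.955        # fine-structure constant
sin2_thw = 0.2315             # sin^2(theta_w)

e   = math.sqrt(4 * math.pi * alpha)   # electromagnetic coupling, e^2 = 4 pi alpha
g_Y = e / math.sqrt(1 - sin2_thw)      # from e = g_Y cos(theta_w)
g_w = e / math.sqrt(sin2_thw)          # from e = g_w sin(theta_w)
g   = math.sqrt(g_Y**2 + g_w**2)

print(g_Y**2, g_w**2)        # ~0.128 and ~0.424
print(e, g_Y * g_w / g)      # the two numbers agree
\end{verbatim}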
The gauge-covariant derivative of the 4-component Higgs field
with respect to (1)_Y ×(2)_ transformations is
_m
=
∂_m
+
g_Y [ _m , ]
+
g_w [ _m , ]
.
Whereas in the covariant derivative (<ref>) of a spinor ψ,
the fields _m and _m act directly on the spinor,
in the covariant derivative (<ref>) of the Higgs field ,
the fields act as a commutator,
because whereas a spinor ψ transforms as
ψ→ R ψ
under a rotor R (an element of the group (1)_Y ×(2)_),
a multivector such as the Higgs field transforms as
→ R R,
with R = R^-1 the reverse of R.
The covariant derivative of the expectation value (<ref>)
of the Higgs field is
_m ⟨⟩
=
⟨ H ⟩(
g_Y B_m [ Y , ^+_t ^-_y ]
+
g_w W_m^k [ τ_k , ^+_t ^-_y ]
)
.
The relevant commutators
of the generators Y of (1)_Y,
equation (<ref>),
and τ_k of (2)_,
equations (<ref>),
with the electroweak Higgs field ^+_t ^-_y are
[ Y , ^+_t ^-_y ]
=
-
^+_y ^-_y
^+_t ^-_y
=
-
^+_t ^+_y
,
[ τ_1 , ^+_t ^-_y ]
=
-
^-_y ^+_z
^+_t ^-_y
=
-
^+_t ^+_z
,
[ τ_2 , ^+_t ^-_y ]
=
-
^-_y ^-_z
^+_t ^-_y
=
-
^+_t ^-_z
,
[ τ_3 , ^+_t ^-_y ]
=
-
^+_y ^-_y
^+_t ^-_y
=
-
^+_t ^+_y
.
With the commutators (<ref>),
the covariant derivative (<ref>)
of the expectation value of the Higgs field becomes
_m ⟨⟩ =
⟨ H ⟩(
( g_Y B_m - g_w W_m^3 )
^+_t ^+_y
+
g_w (
W_m^1 ^+_t ^+_z
+
W_m^2 ^+_t ^-_z
)
)
=
⟨ H ⟩(
-
g
Z_m
^+_t ^+_y
+
g_w (
W_m^+ ^+_t _z + W_m^- ^+_t _z̅
)
)
.
Notice that the expression (<ref>)
for the covariant derivative _m ⟨⟩
does not depend on the electromagnetic field A_m,
which stems from the fact that ⟨⟩
commutes with the electric charge operator Q.
The kinetic term in the scalar Higgs Lagrangian is
-(1/2) ( ^m ) · ( _m ),
where is the reverse of .
When the Higgs field acquires a nonzero expectation value
⟨⟩,
it contributes to the Lagrangian a kinetic term proportional to
(abbreviating Z^m Z_m = ( Z_m )^2 and so forth)
( ^m ⟨⟩)
·( _m ⟨⟩)
=
⟨ H ⟩^2
(
g^2 ( Z_m )^2
+
g_w^2 ( ( W_m^+ )^2 + ( W_m^- )^2 )
)
.
The contribution (<ref>) to the Lagrangian has the form
of mass-squared terms for the Z_m and W^±_m electroweak fields.
The Higgs field thus generates masses
m_Z and m_W
for the Z_m and W^±_m fields,
m_Z
≡⟨ H ⟩
g
,
m_W
≡⟨ H ⟩
g_w
.
The electromagnetic field A_m remains massless.
In accordance with the remarks after equations (<ref>),
the electromagnetic field A_m and weak field Z_m
both carry zero electric charge and weak isospin,
while the weak fields W_m^± carry respectively ± 1 unit of each of
electric charge Q and weak isospin I_.
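The mass formulas above imply the tree-level relation m_W/m_Z = g_w/g = cosθ_w. As a rough numerical illustration (a minimal Python sketch with round input values, not part of the derivation):
\begin{verbatim}
import math

sin2_thw = 0.2315             # sin^2(theta_w) at mu = m_Z
m_Z      = 91.19              # GeV

cos_thw = math.sqrt(1 - sin2_thw)
print(m_Z * cos_thw)          # tree-level m_W = m_Z cos(theta_w) ~ 79.9 GeV
                              # (the measured ~80.4 GeV includes loop corrections)

# Higgs expectation value in the normalization used here, <H> = m_Z / g
alpha = 1 / 127.955
e = math.sqrt(4 * math.pi * alpha)
g = e / (cos_thw * math.sqrt(sin2_thw))   # g = e / (cos sin)
print(m_Z / g)                # ~123 GeV, i.e. half the conventional v ~ 246 GeV
\end{verbatim}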
§ GRAND SYMMETRY BREAKING
§.§ Grand Higgs field
As set forth in <ref>,
the largest subgroup of (11,1) that commutes with all the
spacetime vectors (<ref>)
and can therefore be on the path to grand unification
while spacetime is still 4-dimensional is
(5)_w ×(6)_c.
The transition to this group,
or possibly to some maximal subgroup of it
(which proves to be the Pati-Salam group
(4)_w ×(6)_c),
marks grand symmetry breaking.
In the (11,1) model,
grand unification involves a reduction of the number of spacetime dimensions
to the 3+1 observed today.
According to the general rules governing symmetry breaking by a Higgs field,
the grand Higgs field
must acquire a nonzero expectation value that
(1) is a Lorentz scalar,
commuting with all Lorentz bivectors (<ref>),
(2) commutes with all generators of the unbroken group,
and (3) fails to commute with other generators.
There is just one multivector of the (11,1) algebra
that fits the aspiration to be the grand Higgs field,
namely the time bivector
^+_t ^-_t
multiplied by the colour chiral operator ϰ_rgb,
⟨⟩
=
- ⟨ T ⟩^+_t ^-_t
ϰ_rgb ,
the imaginary coming from the time vector being timelike,
_0 = ^-_t.
The grand Higgs field (<ref>)
(1) commutes with all Lorentz bivectors (<ref>)
and is therefore a Lorentz scalar,
(2) commutes with the 21 Pati-Salam (4)_w ×(6)_c bivectors
modified per (<ref>),
and (3) fails to commute with the 24 modified bivectors of (10)
that are not Pati-Salam bivectors.
Although ⟨⟩ is a Lorentz scalar,
commuting with all Lorentz bivectors (<ref>),
it anticommutes with all Dirac vectors (<ref>),
but that is fine because, unlike a gauge field,
a Higgs field need only be a Lorentz scalar, not a Dirac scalar.
In particular, ⟨⟩ fails to commute with the time axis
_0 = ^-_t,
and therefore breaks t-symmetry,
which has the happy side effect of generating a Majorana mass
for the right-handed neutrino, <ref>.
Besides the modified bivectors of (10),
the grand Higgs field (<ref>) commutes with
the 12 modified bivectors ^±_t ^±_k with k = r,g,b,
and it fails to commute with
the 8 modified bivectors ^±_t ^±_k with k = y,z.
The grand Higgs field ⟨⟩,
equation (<ref>),
is a bivector modified per (<ref>),
just like the electroweak Higgs field (<ref>),
and just like the gauge bivectors of the standard model
discussed in <ref>.
According to the general rules,
the grand Higgs field ⟨⟩
must be a component of a multiplet of the unbroken (11,1) symmetry,
and the gauge (vector) and Higgs (scalar) fields
corresponding to the symmetries that are broken by the grand Higgs field
must transform consistently,
so that the scalars and vectors can combine into massive gauge bosons.
These requirements point to the conclusion
that the Higgs field is a 66-component (11,1) field
of bivectors modified per (<ref>),
transforming in the adjoint representation,
as anticipated in equation (<ref>).
Here is a synopsis of what happens to the various fields as a result
of grand symmetry breaking
(all bivectors should be understood as modified per (<ref>)).
* The 6+15 = 21 bivectors
of (4)_w ×(6)_c,
namely ^±_k ^±_l with k and l both in y,z,
or k and l both in r,g,b,
commute with the grand Higgs field (<ref>)
and with the Dirac vectors (<ref>),
and therefore yield unbroken scalar and gauge fields.
(4)_w ×(6)_c
preserves time, weak, and colour chiralities
ϰ_t, ϰ_yz, and ϰ_rgb,
equations (<ref>),
so (4)_w and (6)_c gauge bosons
allow transformations between fermions
of the same boost (ϰ_tyz) and spin (ϰ_rgb),
(2)_ :
ν_
↔
e_
,
u^c_
↔
d^c_
,
(6)_c / (3)_c
:
ν_
↔
u^c_
,
e_
↔
d^c_
.
The (2)_ transitions (<ref>)
are right-handed versions of analogous (2)_ left-handed transitions
in the standard model.
The (6)_c / (3)_c transitions (<ref>)
and their left-handed equivalents
transform between leptons and quarks,
for which reason the associated gauge bosons are called leptoquarks.
* The 4 × 6 = 24 bivectors
of (10) / ( (4)_w ×(6)_c ),
namely ^±_k ^±_l with k in y,z, and l in r,g,b,
fail to commute with the grand Higgs field (<ref>),
and they also fail to commute
with some Lorentz bivectors (<ref>).
The transitions
preserve time and (10) chiralities
ϰ_t and ϰ_yzrgb,
but break the individual weak and colour chiralities:
ν̅_
e̅_ ↔
u^c_
d^c_ ↔ u̅^c̅_
d̅^c̅_ ↔ ν_
e_ .
The 24 scalar bivectors combine with the 24 vector bivectors
by the Goldstone mechanism <cit.>
to yield 24 massive charged gauge bosons,
which include 12 bivectors in (5),
with charges kl̅ or k̅l,
and 12 bivectors not in (5),
with charges k l or k̅l̅.
Observations and experiment show no sign of Lorentz violation
<cit.>.
To avoid a violation of Lorentz invariance,
the gauge bosons must presumably acquire a mass
of the order of the grand unified scale.
* The 4 bivectors
^+_t ^±_k, k = y,z,
fail to commute with the grand Higgs field (<ref>),
but commute with the Dirac vectors (<ref>).
If the grand gauge group were (11,1),
then the 4 scalar bivectors would combine with the 4 vector bivectors
by the Goldstone mechanism
to yield 4 massive gauge bosons at grand unification.
But in practice the 4 bivectors
^+_t ^±_k, k = y,z
apparently persist to produce the electroweak Higgs
multiplet (<ref>) that mediates electroweak symmetry breaking.
The only way we have been able to make sense of this complication
is to postulate that the dimension ^+_t is a scalar dimension
that does not generate any symmetry,
a possibility discussed in 4.4 of <cit.>.
If there is no gauge symmetry,
then the Goldstone mechanism <cit.> does not operate,
and the scalar field can persist.
* The 6 bivectors
^+_t ^±_k, k = r,g,b,
commute with the grand Higgs field (<ref>),
but fail to commute with several Lorentz bivectors (<ref>).
If ^+_t is a scalar dimension,
then the bivectors do not generate a gauge symmetry,
but there is still a 6-component “scalar” multiplet.
To avoid a violation of Lorentz invariance,
the multiplet must presumably become massive,
of the order of the grand unified scale,
as the dimensionality of spacetime drops from 10+1 to 3+1.
* The 4 bivectors
^-_t ^±_k, k = y,z,
fail to commute with the grand Higgs field (<ref>),
but commute with the Lorentz bivectors (<ref>)
(though not with the Dirac vectors (<ref>)).
The 4 scalar bivectors combine with the 4 vector bivectors
to yield 4 massive gauge bosons at grand symmetry breaking.
* The 6 bivectors
^-_t ^±_k, k = r,g,b,
commute with the grand Higgs field (<ref>),
but fail to commute with several
Lorentz bivectors (<ref>).
To avoid a violation of Lorentz invariance,
the 6 scalar bivectors must combine with the 6 vector bivectors
to yield 6 massive gauge bosons at grand symmetry breaking.
§.§ Cosmological inflation
A leading idea of the standard model of cosmology is that inflation
in the early universe was driven by the energy associated with
grand unification.
There is a vast literature on the subject,
e.g. <cit.>.
The grand Higgs field ⟨⟩, equation (<ref>),
is available to drive inflation.
The mechanism of grand symmetry breaking by a single Higgs field
is consistent with the simplest conventional models of inflation.
It is perhaps surprising that the simplest model works,
since the restructuring of spacetime
could potentially be a complicated process
<cit.>.
§.§ Gauge boson masses at grand symmetry breaking
Subsection <ref> argued that
grand symmetry breaking breaks 34 of the 55 bivectors of (10,1),
generating 34 massive gauge bosons.
Of the 34 broken bivectors,
28 are broken by virtue of failing to commute with
the grand Higgs field ⟨⟩, equation (<ref>),
while the other 6 are broken by virtue of failing to commute with
Lorentz bivectors (<ref>).
Exactly how the 6 are broken as the dimension of spacetime drops from 10+1
to 3+1 lies outside the realm of standard Higgs theory,
and beyond the scope of the present paper to model.
The (10,1) covariant derivative of the expectation value
⟨⟩, equation (<ref>),
of the grand Higgs field is
D_m ⟨⟩
=
g
∑_A
[ _m , ⟨⟩ ]
=
-⟨ T ⟩ g
∑_A
C_m^A [ _A , ^+_t ^-_t ϰ_rgb ]
,
where g is a grand coupling parameter,
and
_m = C^A_m _A
are the gauge fields associated with the 55 (10,1) bivectors _A
modified per (<ref>).
The nonvanishing commutators of the (10,1) bivectors with
the grand Higgs generator
^+_t ^-_t ϰ_rgb
are the 24 + 4 = 28 commutators
[ ^±_k ^±_l , ^+_t ^-_t ϰ_rgb ]
=
2 ^±_k ^±_l ^+_t ^-_t ϰ_rgb
( k ∈ y,z ,  l ∈ r,g,b )
,
[ ^-_t ^±_l , ^+_t ^-_t ϰ_rgb ]
=
2 ^-_t ^±_l ^+_t ^-_t ϰ_rgb
( l ∈ y,z )
.
The bivector factors ^±_k ^±_l
should strictly be modified per (<ref>),
but this modification can be omitted here because ^+_t ^-_t
commutes with ϰ_rgb, so the commutation rules (<ref>)
are unaffected by the possible extra factor of ϰ_rgb
on ^±_k ^±_l.
When the grand Higgs field acquires a nonzero expectation value
⟨⟩,
it contributes to the Lagrangian a kinetic term proportional to
( ^m ⟨⟩)
·( _m ⟨⟩)
=
4
⟨ T ⟩^2
g^2
∑_A
( C_m^A )^2
,
in which the sum over A is over noncommuting gauge fields _m.
The contribution (<ref>)
takes the form of a mass-squared term for each of the noncommuting fields,
so the grand Higgs field ⟨⟩ generates a mass
m_T
=
2 ⟨ T ⟩ g
for each of the 28 noncommuting gauge fields.
It was noted in <ref>
that the 24 bivectors
^±_k ^±_l with k in y,z, and l in r,g,b,
fail to commute not only with the grand Higgs field ⟨⟩,
but also with the Lorentz bivectors (<ref>),
so the 24 bivectors do not form Lorentz-covariant multiplets,
and the covariant derivative _m of these bivectors in equation (<ref>)
cannot simply be a standard 3+1 dimensional Lorentz-covariant derivative.
However,
the dynamics of the various fields must presumably emerge
from a suitable scalar Lagrangian,
so the expression (<ref>)
must be a valid spacetime-scalar equation,
and the resulting gauge boson mass m_T given by equation (<ref>)
must be a scalar.
Again, it is beyond the scope of this paper to dig more deeply into
just what happens as the dimensionality of spacetime drops from 10+1 to 3+1.
§.§ The 11th dimension as a scalar
It was argued in <ref> that the 4 bivectors
^+_t ^±_k, k = y,z
cannot generate a gauge symmetry,
because if they did,
then that gauge symmetry would be broken
by the grand Higgs field ⟨⟩ at grand symmetry breaking,
and the associated scalar multiplet would be absorbed by the Goldstone mechanism
into 4 massive gauge bosons,
whereas the scalar multiplet
apparently persists to produce the electroweak Higgs
multiplet (<ref>) that mediates electroweak symmetry breaking.
The only solution that we can see to this problem
is to postulate that the dimension ^+_t is a scalar dimension
that does not generate any symmetry,
a possibility discussed in 4.4 of <cit.>.
The dimension ^+_t stands out
as the only spacelike vector of (11,1)
that is not a factor in any unbroken gauge symmetry of the standard model.
As first shown by <cit.>,
and expounded by <cit.>,
the algebra of outer products of spinors is isomorphic
to the geometric (Clifford) algebra,
in any number of spacetime dimensions.
The natural complex structure of spinors means that
geometric algebras live naturally in even dimensions.
In odd dimensions,
there are two ways to realize a geometric algebra as an outer product of
spinors.
One is to project the odd-dimensional algebra into one dimension lower;
the second is to embed the odd-dimensional algebra into one dimension higher,
and to treat the extra dimension as a scalar.
The first option, projecting the odd algebra into one lower dimension,
is the usual one.
For example, the pseudoscalar of the geometric algebra in 3 dimensions,
the Pauli algebra,
is identified with i times the unit matrix,
σ_1 σ_2 σ_3 = i 1.
The disadvantage of the first option is that the geometric algebra does not
contain within itself a time-reversal or parity-reversal operator.
By contrast, in the second option,
the extra axis, here ^+_t,
which as usual anticommutes with all other axes ^±_k,
serves the role of a time-reversal operator,
because it flips the 10 other spatial axes (leaving spatial parity unchanged),
and also flips the 1 time axis, reversing time.
A time-reversal operator is important for a consistent quantum field theory.
§.§ The grand Higgs field generates a Majorana mass for the right-handed neutrino
The grand Higgs field ⟨⟩, equation (<ref>),
has the felicitous property that it is able to generate a Majorana mass
for the right-handed neutrino.
The Majorana mass term is
ν·⟨⟩ν
=
- ν^†_0 ⟨⟩ν
=
ν^†^-_t ⟨⟩ν ,
which flips the right-handed neutrino's t-bit.
The right-handed neutrino,
the all-bit-up entry in the (11,1) chart (<ref>),
has two components,
a right-handed neutrino ν_ with t-bit up,
and a left-handed antineutrino partner ν̅_ with t-bit down.
The two components have opposite boost, but the same spin,
so flipping the t-bit generates a mass for the right-handed neutrino.
Only the right-handed neutrino can acquire mass this way,
because only the right-handed neutrino has zero couplings to the gauge groups
(1)_Y ×(2)_×(3)_c
of the standard model.
For other fermions,
conservation of standard-model charges prohibits flipping the t-bit.
Although the grand Higgs field ⟨⟩ acquires its nonzero
expectation value at grand symmetry breaking,
it is only later, at Pati-Salam symmetry breaking, <ref>,
that the grand Higgs field
can generate a Majorana mass for the right-handed neutrino.
Before Pati-Salam symmetry breaking,
(4)_w ×(6)_c transitions (<ref>)
unite right-handed neutrinos with other right-handed leptons and quarks.
(4)_w ×(6)_c transitions preserve chirality,
and likewise Lorentz transformations preserve chirality,
so before Pati-Salam symmetry breaking
all fermions are massless chiral fermions,
including the right-handed neutrino,
which is part of the 16-component right-handed lepton-quark multiplet
{ν_ , e_ , u^c_ , d^c_}.
After Pati-Salam symmetry breaking into the standard model,
the right-handed (4)_w ×(6)_c transitions (<ref>)
no longer occur.
The right-handed neutrino becomes isolated into a singlet of the standard model,
and standard-model symmetries do not prohibit
the Majorana mass term (<ref>) from flipping the right-handed neutrino's
t-bit.
The nonvanishing charges of the right-handed neutrino ν_
and of its left-handed antineutrino partner ν̅_
are ρ_3 = 1 and -1 respectively,
and B-L = -1 and +1 respectively.
Thus the Majorana mass term (<ref>) violates
baryon-minus-lepton B-L symmetry.
According to the standard picture of leptogenesis
<cit.>,
out-of-equilibrium CP-violating decay of an initially thermal distribution of
massive right-handed Majorana neutrinos can generate a nonzero B-L number,
which sphaleron processes then transform into a nonzero baryon number.
The argument has led to a conundrum.
The grand Higgs field ⟨⟩,
equation (<ref>),
commutes with ρ_3 and B-L,
yet the mass term
ν·⟨⟩ν,
equation (<ref>),
violates conservation of ρ_3 and B-L.
This would seem to violate the Noetherian <cit.>
principle relating symmetries to conservation laws.
The solution to the conundrum seems to be that
⟨⟩
breaks the t-symmetry that links fermions and antifermions,
by virtue of ⟨⟩ not commuting with the time axis
_0 ≡ -^-_t
(⟨⟩ is nevertheless a Lorentz scalar,
because it commutes with the Lorentz bivectors (<ref>)).
Breaking t-symmetry apparently allows
⟨⟩
to break ρ_3 and B-L symmetries in spite of commuting with them.
This issue is revisited in <ref>,
and called out as a potential flaw in <ref>.
In the usual (10) model,
the standard Higgs used to break B-L
so as to generate a Majorana mass is
<cit.>
a pentavector, belonging to the binom(10,5) = 252 = 126 + 126̄ dimensional
representation of (10).
Specifically, the pentavector
_y _z _r _g _b
+
_y̅_z̅_r̅_g̅_b̅
has the property that it flips between the all-bit-up right-handed neutrino
and its all-bit-down left-handed antineutrino partner
in the (10) chart (<ref>),
↑↑↑↑↑ ↔ ↓↓↓↓↓,
while yielding zero acting on any other spinor.
The pentavector (<ref>)
commutes with (3)_c but not with B-L,
so explicitly breaks B-L symmetry.
The Higgs pentavector (<ref>) that generates a Majorana
mass in (10)
does not work for the (11,1) model
because the modification (<ref>)
that allows (10) and (3,1) to unify into (11,1)
puts the left-handed antineutrino partner ν̅_
of the right-handed neutrino ν_
into the same (5) column of
the (11,1) chart (<ref>),
instead of into the opposite (5) columns of
the (10) chart (<ref>).
In (11,1),
the multivector ^-_t ⟨⟩ that transforms between
ν_ and ν̅_
has (10) grade 0, not 5.
§ PATI-SALAM SYMMETRY BREAKING
§.§ The Higgs field that breaks Pati-Salam symmetry
According to the general rules,
the Higgs field that breaks the Pati-Salam (4)_w ×(6)_c
symmetry must be part of a multiplet of Lorentz-scalar fields
that rotate into each other under (4)_w ×(6)_c.
Now the grand Higgs field ⟨⟩, equation (<ref>),
that mediates grand symmetry breaking is part of a 66-component scalar
multiplet , equation (<ref>),
of (11,1) bivectors modified per (<ref>).
After grand symmetry breaking to the Pati-Salam group,
the scalar multiplet contains an unbroken 21-component Pati-Salam
multiplet generated by the 6 + 15 = 21 modified bivectors
of (4)_w ×(6)_c.
This scalar Pati-Salam multiplet could potentially contain a component
that, when it acquires an expectation value ⟨⟩,
breaks the Pati-Salam symmetry.
Indeed it does.
The Higgs field that breaks the Pati-Salam
symmetry must leave the standard-model group unbroken,
so must commute with standard-model bivectors
modified per (<ref>).
The weak subgroup (2)_
permutes yz charges while preserving the total charge y+z,
while the colour subgroup (3)_c
permutes rgb charges while preserving the total charge r+g+b.
Therefore the Higgs field must be a linear combination
of generators that are invariant under permutations of each of yz and rgb.
There are two such basis bivectors in (4)_w ×(6)_c,
namely the third Pauli generator
ρ_3
that generates the (1)_ subgroup
of the right-handed weak group (2)_,
equation (<ref>);
and the baryon minus lepton operator ( B-L )
that generates the (1)_c subgroup of the colour group (6)_c,
equation (<ref>).
The corresponding conserved charges are
(the expression for B-L repeats equation (<ref>))
ρ_3 = y + z ,
B - L = - (2/3) ( r + g + b ) .
If the Higgs field lies inside the
(4)_w ×(6)_c submultiplet of ,
then must be a linear combination of the bivectors
ρ_3 and (B-L)
modified per (<ref>),
=
(
P^3 ρ_3
+
P_m^B-L (B-L)
)
ϰ_rgb .
The factor ϰ_rgb is from
the modification (<ref>).
When the Higgs doublet , equation (<ref>),
acquires a vacuum expectation value ⟨⟩,
it must leave unbroken the hypercharge group (1)_Y of the standard model,
which is generated by the bivector Y,
equation (<ref>).
The hypercharge Y
is a linear combination of the charges ρ_3 and B-L,
equation (<ref>),
Y
=
ρ_3 + B - L
.
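For example, the right-handed neutrino has ρ_3 = +1 and B-L = -1 (as noted above), giving Y = 0, while its right-handed weak partner, the right-handed electron, has ρ_3 = -1 and B-L = -1, giving Y = -2, the familiar hypercharge of the right-handed electron.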
When the Higgs doublet acquires its vacuum expectation value,
it condenses the two phases R and B-L into one,
(1)_R ×(1)_B-L→(1)_Y
.
This is similar to the way that electroweak symmetry breaking involves
condensing the two phases of hypercharge and weak isospin into one,
(1)_Y ×(1)_→(1)_Q.
The symmetry breaking (<ref>) breaks the Pati-Salam group
(4)_w ×(6)_c
to the standard-model group
(1)_Y ×(2)_×(3)_c.
The linear combination of
ρ_3 and B-L
to which breaks can be deduced from the condition that the
unbroken symmetry is (1)_Y.
The
(4)_w ×(6)_c
gauge connection is
g_r _m
+
g_w _m
+
g_c _m
,
where
_m,
_m,
and _m are respectively
gauge fields of
(2)_,
(2)_,
and (6)_c,
and
g_r,
g_w,
and g_c are the respective
dimensionless coupling parameters.
The part of the (4)_w ×(6)_c
connection (<ref>)
associated with the (1)_R ×(1)_B-L symmetry is
(
g_r R_m^3 ρ_3
+
g_c C_m^B-L (B-L)
)
ϰ_rgb ,
where R_m^3 and C_m^B-L are the corresponding connection coefficients.
Analogously to the weak mixing angle θ_w,
equations (<ref>),
define the (2)_×(6)_c mixing angle θ_r by
sinθ_r ≡ g_r/g ,  cosθ_r ≡ g_c/g ,  g ≡ √(g_r^2 + g_c^2) .
And analogously to the rotated fields A_m and Z_m of electroweak theory,
equation (<ref>),
define gauge fields B_m and P_m to be orthogonal linear combinations
of R_m^3 and C_m^B-L,
B_m ≡ cosθ_r R_m^3 + sinθ_r C_m^B-L ,  P_m ≡ -sinθ_r R_m^3 + cosθ_r C_m^B-L .
In terms of the rotated fields B_m and P_m,
the (1)_R ×(1)_B-L connection (<ref>) is
(a commuting factor of ϰ_rgb
from the modification (<ref>)
has been dropped for brevity
from both sides of equation (<ref>))
(
g_r R_m^3 ρ_3
+
g_c C_m^B-L (B-L)
)
=
(
g_Y B_m Y
-
g
P_m
( sin^2θ_r ρ_3 - cos^2θ_r (B-L) )
)
,
where the hypercharge coupling parameter g_Y is
g_Y ≡ g_r cosθ_r = g_c sinθ_r = g cosθ_r sinθ_r .
The term proportional to B_m Y in the connection (<ref>)
has the correct form for the unbroken (1)_Y hypercharge connection.
The remaining term on the right hand side of equation (<ref>)
represents the broken direction,
the direction along which the Higgs doublet
acquires an expectation value ⟨⟩,
⟨⟩
=
⟨ P ⟩
( sin^2θ_r ρ_3 - cos^2θ_r (B-L) ) ϰ_rgb ,
where the factor ϰ_rgb
from the modification (<ref>)
has been restored.
§.§ Masses of Pati-Salam gauge bosons
For simplicity, factors of ϰ_rgb
from the modification (<ref>)
are omitted throughout this subsection (<ref>).
The factor ϰ_rgb commutes with all the generators,
so has no effect on the predictions (<ref>) of masses
of Pati-Salam gauge bosons.
The (4)_w ×(6)_c covariant derivative
of the expectation value ⟨⟩
of the Higgs doublet , equation (<ref>), is
_m ⟨⟩ =
⟨ P ⟩(
g_r sin^2θ_r [ _m , ρ_3 ]
-
g_c cos^2θ_r [ _m , (B-L) ]
)
=
g ⟨ P ⟩(
sin^3θ_r [ _m , ρ_3 ]
-
cos^3θ_r [ _m , (B-L) ]
)
,
where
_m and _m are the gauge fields
of respectively
(2)_ and (6)_c,
_m
=
R_m^k ρ_k
, _m
=
C_m^[k^± l^±]^±_k ^±_l
( k,l ∈ r,g,b )
.
The nonvanishing commutators of the Pati-Salam
(4)_w ×(6)_c gauge fields with the component generators
ρ_3 and B-L
of the expectation value ⟨⟩ of the Higgs doublet are
[ ρ_2 , ρ_3 ]
=
2 ρ_1
,
[ ρ_1 , ρ_3 ]
=
- 2 ρ_2
,
[ (1/2) ( 1 + ϰ_kl ) ^+_k ^+_l , B-L ]
=
- (2/3) ( 1 + ϰ_kl ) ^+_k ^-_l
( k,l ∈ r,g,b )
,
[ (1/2) ( 1 + ϰ_kl ) ^+_k ^-_l , B-L ]
=
(2/3) ( 1 + ϰ_kl ) ^+_k ^+_l
( k,l ∈ r,g,b )
.
The right-handed Pauli matrices ρ_k are given by
equations (<ref>).
The leptoquark bivectors of (6)_c in
equations (<ref>)
and (<ref>)
are given in expanded form by
equations (<ref>).
The scalar product of each commutator of equations (<ref>)
with its reverse equals
2 for the top two lines,
and 2 × (2/3)^2 for the bottom two lines.
The square of the covariant derivative (<ref>) is then
( ^m ⟨⟩ ) · ( _m ⟨⟩ )
=
g^2 ⟨ P ⟩^2
(
2
sin^6θ_r
( R_m^k )^2
+
2
(2/3)^2
cos^6θ_r
( C_m^[k^± l^±] )^2
)
,
where the fields on the right hand side are the subset of the weak and colour
fields
that fail to commute with the Higgs field ⟨⟩.
The two right-handed weak fields R^k_m, k = 1,2,
carry two units of y,z charge and zero colour charge r,g,b;
they transform right-handed leptons and quarks into their
right-handed weak partners,
transformations (<ref>)
and their antiparticle equivalents.
The six leptoquark fields C^[k^± l^±]
carry zero y or z charge, and two units of r,g,b charge;
they are called leptoquarks because they transform
between leptons and quarks,
transformations (<ref>)
and their antiparticle equivalents.
Equation (<ref>) shows that the Higgs field ⟨⟩
generates masses for the noncommuting fields,
m_R
=
√(2)
g
⟨ P ⟩sin^3θ_r
,
m_C
=
√(2) (2/3)
g
⟨ P ⟩cos^3θ_r
.
The mass m_R is the mass of each of the two right-handed weak gauge bosons
after (4)_w ×(6)_c symmetry breaking;
the mass is different from
the mass m_W of the two left-handed charged weak gauge bosons.
The mass m_C is the mass of each of the six leptoquark gauge bosons
after (4)_w ×(6)_c symmetry breaking.
A prediction of the model is that the masses
m_R and m_C
are related by, from equations (<ref>),
m_R / m_C = (3/2) tan^3θ_r = 1.1 ,
with tanθ_r = 0.90 from equation (<ref>).
Besides not commuting with right-handed weak and leptoquark fields,
the (4)_w ×(6)_c Higgs field ⟨⟩
also fails to commute with
the electroweak Higgs multiplet (<ref>),
[ ^+_t ^±_k , ρ_3 ]
=
±^+_t ^∓_k
k = y , z
.
But, as discussed in <ref>,
the bivectors ^+_t ^∓_k, k = y,z,
do not generate any gauge symmetry,
so _m ⟨⟩, equation (<ref>),
does not generate any mass term for the electroweak Higgs multiplet.
(4)_w ×(6)_c symmetry breaking
breaks the original (4)_w symmetry of
the electroweak Higgs multiplet
down to the electroweak symmetry (1)_Y ×(2)_,
but the number of Higgs components, 4, remains unchanged.
§.§ Scalar fields after Pati-Salam symmetry breaking
Of the 21 Pati-Salam (4)_w ×(6)_c gauge and scalar fields,
the 1+3+8 = 12 standard-model
(1)_Y ×(2)_×(3)_c
fields remain unbroken,
the 2+6=8 right-handed and leptoquark fields are broken and become massive,
and the 1 Pati-Salam Higgs scalar ⟨⟩ acquires
an expectation value.
What about the (1)_P gauge field?
In the Coleman-Weinberg <cit.> mechanism,
a (1)_P gauge field becomes massive
by coupling to a complex scalar field carrying P charge,
which here is a linear combination (<ref>) of ρ_3 and B-L charge.
Now the field that apparently breaks ρ_3 symmetry and B-L symmetry
while preserving Y symmetry
is the grand Higgs field ⟨⟩,
equation (<ref>).
As discussed in <ref>,
the grand Higgs field ⟨⟩
carries no explicit ρ_3 and B-L charge;
but it does break t-symmetry,
which allows it to generate a Majorana mass term
that flips between the right-handed neutrino with ρ_3 = 1 and B-L = -1,
and its left-handed antineutrino partner with ρ_3 = -1 and B-L = +1.
This is a novel situation that to our knowledge
has not been considered previously in the literature.
Our guess is that the behaviours of
the ⟨⟩ and ⟨⟩ fields
during Pati-Salam symmetry breaking are closely related.
As argued in <ref>,
even though the grand Higgs field ⟨⟩
acquires its expectation value at grand symmetry breaking,
it can generate a Majorana mass for the right-handed neutrino
only later, at Pati-Salam symmetry breaking,
when the Pati-Salam Higgs field ⟨⟩
acquires its expectation value
and breaks the symmetries (<ref>)
that connect right-handed neutrinos to other right-handed fermions.
In turn,
the Majorana mass term (<ref>) violates conservation of ρ_3
and B-L while preserving Y,
potentially allowing (1)_P symmetry to be broken.
We emphasize that this is just our best guess;
the scenario warrants further investigation.
After (4)_w ×(6)_c symmetry breaking,
the components of the original scalar multiplet ,
equation (<ref>),
that survive as potentially light scalar fields comprise
the 4 components of the electroweak Higgs multiplet (<ref>),
and a standard-model (1)_Y ×(2)_×(3)_c component
of dimension 1+3+8 = 12.
§.§ Scalar fields after electroweak symmetry breaking
The fate of the 4-component electroweak Higgs multiplet (<ref>)
at electroweak symmetry breaking is known:
three of the four degrees of freedom of the Higgs multiplet
generate masses for the W^± and Z electroweak gauge bosons,
while the fourth yields a single electroweak Higgs scalar,
of mass 125 GeV,
that was detected at the Large Hadron Collider (LHC) in 2012
<cit.>.
The 12-component standard-model (1)_Y ×(2)_×(3)_c
remnant of the original (11,1) scalar field,
equation (<ref>),
persists after electroweak symmetry breaking.
It is the 4 components of the electroweak Higgs multiplet that are
rearranged by electroweak symmetry breaking,
not the corresponding components of the field.
This is just as well:
the world would be a very different place if the electroweak Higgs field
were not there to couple right- and left-handed fermions
and thereby make them massive.
Electroweak symmetry breaking breaks the symmetry of the
12-component standard-model remnant of ,
but the number of scalar components, 12, remains the same.
These 12 scalar fields
are scalar counterparts of standard-model gauge bosons,
with identical standard-model charges.
The masses of the scalar fields are unknown,
because they depend on the unknown potential energies of the scalar fields.
Presumably,
to have escaped detection,
the masses of most of these fields must be greater than the electroweak scale.
Of the 12 scalar fields, two,
namely the scalar counterparts of the photon or Z-boson,
carry zero standard-model charge,
and could potentially constitute the cosmological dark matter.
A crucial property of a dark matter particle is that it must be stable
against annihilation into lighter particles.
One possibility is that a scalar particle might be stabilized by a parity
symmetry <cit.>,
but no such symmetry is apparent in the present model.
An alternative is that the scalar particle is so light,
m ∼ 10^-22eV,
that its decay rate is tiny,
a hypothesis called Fuzzy Dark Matter
<cit.>.
As far as we can see,
in the plain (11,1) model,
an ultralight scalar is the only possible candidate for dark matter.
Event Horizon Telescope observations of the M87 black hole
disfavor ultralight scalar dark matter with mass in the range
∼ 3-5 × 10^-21eV
<cit.>.
§.§ Fermion masses
In the standard model,
after electroweak symmetry breaking,
fermions acquire nonzero rest masses by interacting with the
electroweak Higgs field.
The electroweak Higgs field ⟨⟩, equation (<ref>),
carries weak charge y,
and flips fermions between right- and left-handed chiralities.
In the present picture,
the Higgs field ⟨⟩ flips fermions between
Dirac boosts ⇑ ↔ ⇓ while preserving spin,
in accordance with the (11,1) chart (<ref>) of spinors.
It is the flipping between boosts that allows a fermion
to be a superposition of opposite boosts, and therefore to have rest mass.
Unlike the electroweak Higgs field,
the Pati-Salam Higgs field ⟨⟩,
equation (<ref>),
carries zero standard model charge,
and does not flip between chiralities.
The ⟨⟩ Higgs field does not flip boosts,
and cannot give fermions rest masses.
The conclusion that fermions can acquire their mass
only after electroweak symmetry breaking
at 160 GeV <cit.>
is intriguing.
In the (11,1) model,
each generation of fermions belongs to a spinor multiplet of (11,1).
The masses of the three known generations of fermions (excluding neutrinos)
show a striking progression of masses,
with all e-generation fermions being less massive
than all μ-generation fermions, which are in turn all less massive
than all τ-generation fermions.
The most massive fermion known is the τ-generation top,
with mass 173 GeV
<cit.>
just above the 160 GeV electroweak scale.
If the progression of masses continues, then there is no fourth generation.
The known fermions are all there is.
This is consistent with the evidence that
there are only 3 neutrino types with masses less than
half the mass of the Z neutral weak gauge boson,
(1/2) m_Z ≈ 45 GeV
<cit.>,
N_ν =
2.991 ± 0.007
.
Evidence from the cosmic microwave background
indicates that there are only 3 neutrino types
with masses less than about the electron mass
(the observations set limits on the number of neutrino types
post electron-positron annihilation)
<cit.>,
N_ν = 3.0 ± 0.5
.
The question of why the rest masses of fermions have the pattern observed
remains one of the deepest mysteries of the standard model
<cit.>.
§.§ Neutrino masses
Neutrinos cannot acquire their mass in the same way as the other fundamental
fermions, because only left-handed neutrinos (and right-handed antineutrinos)
are observed in Nature.
The leading standard solution to the puzzle of neutrino masses
is the see-saw mechanism
proposed by <cit.>,
which requires that
there exists a right-handed neutrino with a large Majorana mass,
as is true in the present model,
<ref>.
The see-saw mechanism posits that,
after electroweak symmetry breaking,
neutrinos have Dirac mass m_D like other fermions, but in addition,
alone among fermions,
a right-handed neutrino ν_,
having no conserved standard-model charge,
has a Majorana mass m_M
that couples it to its
left-handed antineutrino partner ν_.
The neutrino mass matrix
has eigenvalues m_± with
m_± = ± (1/2) m_M + √( ( (1/2) m_M )^2 + m_D^2 ) ,
satisfying
m_+ m_- = m_D^2.
If the Majorana mass is much larger than the Dirac mass,
m_M ≫ m_D,
then the masses of the massive and light neutrinos satisfy
the see-saw condition
m_M ≈ m_+ ≫ m_- ≈ m_D^2 / m_M .
The massive eigenstate is mostly the right-handed neutrino
with a small admixture of its left-handed antineutrino partner.
The light eigenstate is mostly the left-handed neutrino
with a small admixture of its right-handed antineutrino partner.
The see-saw mechanism offers an explanation for why
observed left-handed, weakly interacting neutrinos have nonzero mass
but are nevertheless light compared to other fermions.
If the mass of the left-handed tauon neutrino
is estimated at ∼ 0.1 eV
<cit.>,
and the Dirac mass of tauon leptons and quarks
is estimated at ∼ 10 GeV
<cit.>,
then the see-saw mechanism would predict
the mass of the right-handed tauon neutrino to be ∼ 10^12GeV.
This is consistent with the 10^12GeV
energy scale of Pati-Salam symmetry breaking
found in <ref>, equation (<ref>).
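As a numerical illustration of this estimate (a minimal Python sketch using the same round numbers; not new input):
\begin{verbatim}
import math

# See-saw estimate: m_heavy ~ m_D^2 / m_light
m_light = 0.1e-9              # 0.1 eV expressed in GeV
m_D     = 10.0                # GeV
print(m_D**2 / m_light)       # ~1e12 GeV, the Pati-Salam scale quoted above

# The exact eigenvalue formula, checked with modest numbers (m_M = 100, m_D = 1)
m_M, m_D = 100.0, 1.0
m_plus  =  m_M / 2 + math.sqrt((m_M / 2)**2 + m_D**2)
m_minus = -m_M / 2 + math.sqrt((m_M / 2)**2 + m_D**2)
print(m_plus * m_minus)       # equals m_D^2 = 1, with m_minus ~ m_D^2 / m_M
\end{verbatim}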
Massive right-handed neutrinos cannot be the cosmological dark matter
because they are unstable,
the small admixture of left-handedness allowing them to decay
by left-handed weak interactions
<cit.>.
Right-handed neutrinos are sometimes considered as candidates for dark matter,
but that requires their mixing with left-handed neutrinos to be tiny
(so-called sterile neutrinos), which is not true here.
§ ENERGY SCALES OF UNIFICATION
This section <ref> shows
from the running of the coupling parameters,
Figure <ref>,
that the predicted energy scale of (4)_w ×(6)_c unification is
10^12GeV,
while that of grand unification is
10^15GeV,
equations (<ref>).
The relations (<ref>) predict that
the hypercharge, right-handed weak, and colour coupling parameters
are related by
g g_Y / ( g_r g_c ) = 1 .
In renormalization theory the coupling parameters
vary with the energy at which they are probed.
The condition (<ref>) can be interpreted as determining
the energy scale of (4)_w ×(6)_c symmetry breaking.
(4)_w is isomorphic to
the product (2)_×(2)_
of right- and left-handed weak groups,
and there could potentially be two distinct coupling parameters
associated with the two groups.
This section starts out treating the two coupling parameters as distinct,
but then focuses on the special case of equal right- and left-handed
coupling parameters, corresponding to exact chiral symmetry.
Equal coupling parameters would be expected if (4)_w
is the broken remnant of a higher spin group,
as in the scenario considered in this paper.
According to renormalization theory,
to leading (one-loop) order,
the coupling parameter g associated with a gauge group G varies
with the log of the cutoff energy μ as
(e.g. <cit.>,
<cit.>,
<cit.>,
or
<cit.>)
∂ g^-2/∂lnμ
=
(1/8π^2) (
(11/3) S_2 ( G , adj )
-
(2/3) S_2 ( G , f ) n_f
-
(1/6) S_2 ( G , s ) n_s
)
,
where S_2 ( G , R ) is the Dynkin index of the representation R of the group G,
and n_f and n_s are respectively the number of
fermion and real scalar multiplets that couple to the gauge group G.
For (1)_Y, the formula is
<cit.>
∂ g_Y^-2/∂lnμ
=
(1/8π^2) (
- (2/3) ∑_f ( (1/2) Y )^2
- (1/6) ∑_s ( (1/2) Y )^2
)
.
The coupling parameters of the standard model,
evaluated at the energy μ = m_Z = 91.1884 GeV of the Z boson,
are, from <cit.>,
g_Y^2 = e^2/cos^2θ_w = 0.1278 ,
g_w^2 = e^2/sin^2θ_w = 0.4242 ,
g_c^2 = 4πα_s = 1.492 ,
where
the weak mixing angle satisfies
sin^2θ_w = 0.2315,
the electromagnetic coupling is
e^2 = 4πα,
and the fine structure and strong coupling constants
α and α_s are
α(μ = m_Z) = 1/127.955
, α_s(μ = m_Z) = 0.1187
.
The Dynkin index S_2 of a multiplet in the
representations relevant here is
S_2 ( (N) , ) = N ,
S_2 ( (N) , ) = 1/2 ,
S_2 ( (N) , ) = binom(N-2, p-1) ,
S_2 ( (N) , ) = 2^([(N-1)/2] - 3) .
The adjoint representation is the special representation where
the fields upon which the group acts are the group generators themselves.
For (N), the adjoint representation is the bivector representation,
the multivector representation of grade p = 2.
(N) has a unique spinor representation,
but (N) has spinor representations of spinor grades p ≤ N/2
(for example the (5) representations given by the columns
of the (10) chart (<ref>)).
The spinor representation of (N) given by
equation (<ref>)
is that for spinor grade p = 1,
which is the only nontrivial spinor grade
for the groups (2) and (3)_c relevant here.
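As a spot-check of these index formulas (an added illustration; the representation labels are our reading of the cases discussed above), the values used in the counting below can be reproduced in a few lines of Python:
\begin{verbatim}
from math import comb

def S2_bivector(N):    # grade-2 multivector (adjoint) of the N-dimensional spin group
    return comb(N - 2, 1)

def S2_spinor(N):      # spinor of the N-dimensional spin group
    return 2 ** ((N - 1) // 2 - 3)

print(S2_bivector(6))  # 4   (adjoint index used for (6)_c in the Pati-Salam regime)
print(S2_spinor(6))    # 0.5 (each fermion multiplet of (6)_c)
# For the unitary groups the adjoint index equals N and the fundamental index
# is 1/2, giving the remaining values used in the factors below.
\end{verbatim}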
The numbers of fermion, electroweak scalar, and adjoint scalar multiplets
in each of the standard-model and Pati-Salam regimes are as follows.
* Fermions in the standard-model regime.
In the standard model,
the left-handed weak group (2)_ acts on 4 fermion multiplets,
namely the ( ν_ , e_ ) left-handed lepton multiplet,
and the three ( u_^c , d_^c ) left-handed quark multiplets
of colours c = r, g, b.
The colour group (3)_c
acts on 4 fermion multiplets,
namely the right- and left-handed up and down quark multiplets
u_, u_, d_, and d_.
Each fermion multiplet comes in 3 generations,
so the number of fermion multiplets in equation (<ref>)
is n_ = 4 × 3 = 12 for each of (2)_ and (3)_c.
* Fermions in the Pati-Salam regime.
In the Pati-Salam regime,
the weak right-handed group (2)_ is similar to
the left-handed group (2)_,
but acting on right-handed in place of left-handed multiplets.
The number of (2)_ fermion multiplets
is the same as for (2)_,
n_ = 4 × 3 = 12.
The colour group (6)_c acts on enlarged fermion multiplets
that contain leptons as well as quarks,
( ν_ , u_ ), ( ν_ , u_ ),
( e_ , d_ ), and ( e_ , d_ ),
but the number of fermion multiplets remains the same,
n_ = 4 × 3 = 12.
* Electroweak scalars in the standard-model regime.
In the standard model,
the electroweak multiplet transforms
as a pair of 2-component (2)_ spinors
with opposite hypercharges Y = ± 1,
so n_s = 2 for each of (1)_Y and (2)_,
while n_s = 0 for (3)_c.
This is the same as in the “minimal” standard model
of the electroweak Higgs field.
* Electroweak scalars in the Pati-Salam regime.
In the Pati-Salam regime,
the electroweak Higgs multiplet (<ref>) transforms
as a 4-component vector under (4)_w.
But (4)_w is isomorphic to (2)_×(2)_,
and the electroweak multiplet effectively transforms
as a pair of 2-component spinors under each of
(2)_ and (2)_,
so n_s = 2 for each of (2)_ and (2)_.
The scalar Dynkin index is the spinor index S_2 = 1/2.
* Adjoint Higgs scalars in the standard-model regime.
In the standard model regime,
the Higgs multiplet that breaks
the (4)_w ×(6)_c symmetry,
equation (<ref>),
transforms as a bivector under each of
(2)_ and (6)_c.
The bivectors carry zero Y charge.
Thus the multiplet contributes
n_s = 1 for each of (2)_ and (6)_c,
with adjoint Dynkin index S_2 = 2 for (2)_,
and adjoint index S_2 = 3 for (6)_c.
* Adjoint Higgs scalars in the Pati-Salam regime.
In the Pati-Salam regime,
the Higgs multiplet that breaks
the (4)_w ×(6)_c symmetry
transforms as a bivector under each of
(2)_, (2)_, and (6)_c.
Thus the multiplet contributes
n_s = 1 for each of (2)_, (2)_, and (6)_c,
with adjoint Dynkin index S_2 = 2 for (2)_ and (2)_,
and adjoint index S_2 = 4 for (6)_c.
The above counting of scalars includes those from the adjoint Higgs scalar
multiplet , equation (<ref>).
Correctly, only scalars whose masses are less than the running energy scale
μ should be included.
In the literature,
in calculating the running of coupling parameters
it is common to ignore Higgs scalars
other than the known electroweak Higgs scalar,
on the grounds that any additional Higgs scalars may well be massive.
The calculation shown in Figure <ref>
assumes contrariwise that the Higgs scalars coming from
are less than the running energy scale μ,
because this yields the largest energy of grand unification,
10^15GeV, equations (<ref>),
more consistent with the constraint estimated by <cit.>.
To summarize,
in the standard model,
the factor in parentheses on the right hand side of equation (<ref>)
for the running of coupling parameters is
(1)_Y :
- (2/3) × (5/6) × 12 - (1/6) × (1/2) × 2 = -41/6 ,
(2)_:
(11/3) × 2 - (2/3) × (1/2) × 12 - (1/6) × ( (1/2) × 2 + 2 × 1 ) = 17/6 ,
(3)_c :
(11/3) × 3 - (2/3) × (1/2) × 12 - (1/6) × 3 × 1 = 13/2 .
The sum over fermions of squared hypercharge in equation (<ref>) is
∑_f ( (1/2) Y )^2 = 10/3
per generation, or an average of 5/6
per (2)_ or (3)_c fermion multiplet,
which accounts for
the factor 5/6 in the (1)_Y expression (<ref>)
(a common practice, not followed here,
is to multiply the hypercharge coupling parameter
g_Y^2 by 5/3
so that the fermionic factor 5/6 in (<ref>)
becomes 1/2,
the same as the fermionic factors in (2)_ and (3)_c;
the scalar factor in (<ref>) would then change
from 1/6 to 1/10,
and the overall value would change
from -41/6 to -41/10).
In the Pati-Salam regime
before (4)_w ×(6)_c symmetry breaking,
the factor in the running of coupling parameters is
(2)_, (2)_:
(11/3) × 2 - (2/3) × (1/2) × 12 - (1/6) × ( (1/2) × 2 + 2 × 1 ) = 17/6 ,
(6)_c :
(11/3) × 4 - (2/3) × (1/2) × 12 - (1/6) × 4 × 1 = 10 .
The factors for (2)_ and (2)_
are the factors for each of the groups individually
(and are, naturally, the same);
the factors do not add.
If
the right-handed weak coupling parameter g_r is treated as unknown,
then the condition (<ref>) leads to no prediction
for the energy scale of (4)_w ×(6)_c unification.
However,
in a scenario where (4)_w is the broken remnant of
a higher-dimensional spin group, as considered in this paper,
the right- and left-handed coupling parameters g_r and g_w of (4)_w
are the same,
g_r = g_w
,
in which case the condition (<ref>) leads to a definite prediction.
The left panel of Figure <ref>
shows the combination
g g_Y / ( g_w g_c ),
equation (<ref>),
which is predicted to be 1 at (4)_w ×(6)_c symmetry breaking.
The right panel of
Figure <ref> shows the running of the hypercharge, weak, and colour
coupling parameters g_Y, g_w, and g_c
as a function of the renormalization cutoff energy μ.
The energy scale
of (4)_w ×(6)_c symmetry breaking,
where g g_Y / ( g_w g_c ) = 1,
and the energy scale of grand unification,
where g_w = g_c,
are predicted to be
μ_{g g_Y/(g_w g_c) = 1} = 1.4 × 10^12 GeV ,  μ_{g_w = g_c} = 1.0 × 10^15 GeV .
The energy scales (<ref>)
assume that the adjoint Higgs scalars are all light compared to the scale μ.
If instead the adjoint Higgs scalars are all heavy compared to μ,
then
μ_{g g_Y/(g_w g_c) = 1} = 4.3 × 10^11 GeV ,  μ_{g_w = g_c} = 3.4 × 10^14 GeV .
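The scales of equations (<ref>) can be reproduced directly from the one-loop factors above; the following minimal sketch (an illustration only, assuming the light-adjoint-scalar factors and an abrupt switch of running at the Pati-Salam scale) integrates ∂g^-2/∂lnμ = b/(8π^2) upward from μ = m_Z:
\begin{verbatim}
import math

eight_pi2 = 8 * math.pi**2
m_Z = 91.19                                      # GeV

# g^-2 at m_Z, from the measured couplings quoted above
inv_Y, inv_w, inv_c = 1/0.1278, 1/0.4242, 1/1.492

# one-loop factors b (standard-model regime, adjoint scalars light)
b_Y, b_w, b_c = -41/6, 17/6, 13/2
# Pati-Salam regime factors
b_w_PS, b_c_PS = 17/6, 10

# Pati-Salam scale: g g_Y/(g_w g_c) = 1 with g_r = g_w is equivalent
# to g_Y^-2 = g_w^-2 + g_c^-2
L = (inv_Y - inv_w - inv_c) * eight_pi2 / (b_w + b_c - b_Y)
mu_PS = m_Z * math.exp(L)
print(mu_PS)                                     # ~1.4e12 GeV

# Grand unification: run g_w, g_c above mu_PS until they meet
inv_w_PS = inv_w + b_w * L / eight_pi2
inv_c_PS = inv_c + b_c * L / eight_pi2
L2 = (inv_w_PS - inv_c_PS) * eight_pi2 / (b_c_PS - b_w_PS)
print(mu_PS * math.exp(L2))                      # ~1.0e15 GeV
\end{verbatim}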
The predicted energy ≈ 10^12GeV of Pati-Salam unification
is comparable to (slightly greater than) that of the
most energetic cosmic rays observed <cit.>.
The predicted energy ≈ 10^15GeV of grand unification
falls below the lower limit
μ_(10)≳ 4 × 10^15GeV
inferred for (10) models by <cit.>
from the Super-Kamiokande lower limit of ≳ 1.6 × 10^34yr
on the proton lifetime <cit.>.
Thus the (11,1) model may already be ruled out.
However,
it should be borne in mind, first,
that the prediction of 10^15GeV
is based on the simplifying assumptions of 1-loop renormalization
and an abrupt transition at the Pati-Salam energy,
and, second, that the (11,1) symmetry breaking chain
differs from the (10) chains considered by <cit.>.
An improved computation would be desirable.
The upper limit on the energy scale of cosmological inflation
inferred from the upper limit to B-mode polarization power
in the cosmic microwave background measured by the Planck satellite is
<cit.>
μ_inflation≲
2 × 10^16GeV .
The predicted energy scale (<ref>) of grand symmetry breaking
is within the upper limit (<ref>),
although it is not clear that the grand symmetry breaking
and inflationary scales can be compared directly.
The (4)_w ×(6)_c mixing angle θ_r
defined by equations (<ref>)
is predicted to be
θ_r = 0.73 rad = 42^∘ ,  tanθ_r = 0.90 ,
insensitive to whether adjoint scalars are taken to be light or massive
in the running of coupling parameters.
§ SUMMARY OF PREDICTIONS, AND POTENTIAL FLAWS
Subsection <ref> summarizes the predictions of the proposed
(11,1) model.
Subsection <ref> highlights possible flaws that merit scrutiny.
§.§ Predictions
* (11,1),
like (10),
predicts a right-handed neutrino,
with consequences that have been well explored in the literature.
* The tight fit of the Dirac and standard-model algebras
in the (11,1) geometric algebra
admits a unique minimal symmetry breaking chain (<ref>),
proceeding via the Pati-Salam group (4)_w ×(6)_c.
* (5) cannot be on the path to grand unification,
because its generators fail to commute
with the spacetime generators (<ref>)
in (11,1).
* All the Higgs fields involved in symmetry breaking,
including grand, Pati-Salam, and electroweak symmetry breaking,
lie in a common 66-component (11,1) scalar multiplet of bivectors,
equation (<ref>).
The Higgs fields are the scalar (spin 0) counterparts
of the vector (spin 1) gauge fields of (11,1).
* The grand Higgs field ⟨⟩, equation (<ref>),
is available to drive cosmological inflation.
* The generators of the electroweak Higgs field ,
equation (<ref>),
fail to commute with the grand Higgs field ⟨⟩.
To allow the electroweak Higgs field to persist without being
absorbed at grand symmetry breaking,
we argue, <ref>,
that the 11th spatial dimension ^+_t behaves as a scalar dimension,
as discussed in 4.4 of <cit.>.
Consequently the grand unified group is (10,1) rather than (11,1).
* Grand symmetry breaking is predicted to occur at 10^15GeV,
and Pati-Salam symmetry breaking at 10^12GeV,
equations (<ref>).
* The grand Higgs field ⟨⟩ breaks t-symmetry,
which generates a Majorana mass term (<ref>) for the right-handed neutrino,
although the right-handed neutrino acquires that Majorana mass
only at Pati-Salam symmetry breaking.
Later, after electroweak symmetry breaking,
the left-handed neutrino can acquire a small mass
by the standard see-saw mechanism
<cit.>.
* The Pati-Salam group
(4)_w ×(6)_c
predicts 8 vector gauge boson fields beyond those of the standard model,
comprising
2 right-handed weak gauge bosons R,
and 6 leptoquark gauge bosons C.
At Pati-Salam symmetry breaking,
the weak and leptoquark bosons acquire masses in the ratio
m_R / m_C = 1.1,
equation (<ref>).
* The ⟨⟩ Higgs field, equation (<ref>),
that breaks Pati-Salam symmetry carries zero standard model charge,
and in particular does not flip the Dirac boost bit,
so, unlike the electroweak Higgs field, cannot give masses to fermions,
<ref>.
Fermions other than the right-handed neutrino
remain massless until electroweak symmetry breaking.
* As remarked at the end of <ref>,
in the (11,1) model
the only evident candidate for cosmological dark matter is an ultralight scalar
<cit.>
with vanishing standard model charge,
the scalar counterpart of either the photon or the Z-boson.
* In the model proposed in this paper,
supersymmetry <cit.> is not needed to achieve grand unification.
§.§ Places to pick holes
Here we bring attention to areas that the critical reader might
examine for flaws.
“We are very much aware that we are exploring
unconventional ideas and that there may be some
basic flaw in our whole approach which we have
been too stupid to see.” — Sidney Coleman & Erick Weinberg
<cit.>.
* The adjustment (<ref>)
of (10) bivectors is essential
to permit unification of (10) and
the Lorentz group (3,1) in (11,1).
Without this adjustment, the (11,1) idea fails.
* To allow the electroweak Higgs bivectors to survive grand symmetry breaking,
it appears necessary to postulate, <ref>,
that the 11th dimension ^+_t,
the spatial partner of the time vector ^-_t,
is a scalar that does not generate any gauge symmetry,
in which case the symmetry at grand symmetry breaking is (10,1)
rather than the parent symmetry (11,1).
We have not speculated how (11,1) might break to (10,1).
* The 12 (modified) bivectors
^±_t ^±_k, k = r,g,b,
are unbroken by the grand Higgs field ⟨⟩,
but fail to commute with several Lorentz bivectors (<ref>),
so cannot persist after grand symmetry breaking,
<ref>.
We suggest that these bivectors are broken and become massive
as the dimensionality of spacetime drops from 10+1 to 3+1,
but we have not offered a detailed model of this process.
* The grand Higgs field ⟨⟩ breaks t-symmetry,
thereby generating a Majorana mass term
ν·⟨⟩ν, equation (<ref>),
for the right-handed neutrino.
The result is that B-L is violated
even though ⟨⟩ commutes with B-L.
* The right-handed neutrino can acquire its Majorana mass only
after Pati-Salam symmetry breaking,
when the neutrino is isolated into a standard-model singlet.
But the Pati-Salam Higgs field ⟨⟩
can acquire its expectation value only after B-L is violated
by the right-handed neutrino acquiring its Majorana mass.
The two processes must be tightly coupled, <ref>,
but we have not offered a detailed model.
* The predicted energy scale of grand unification is 10^15GeV,
equations (<ref>),
which is less than the lower limit of 4 × 10^15GeV
inferred for (10) models by <cit.>
from the Super-Kamiokande lower limit on the proton lifetime
<cit.>.
A more careful calculation needs to be done that follows the
symmetry breaking chain of the (11,1) model
rather than the (10) models of <cit.>.
§ CONCLUSIONS
(10) is well-known as a promising candidate for
a grand unified group that contains the standard-model group
(1)_Y ×(2)_×(3)_c.
But the full elegance of (10) emerges only when its elements
are recast in terms of spinors <cit.>.
Spinors in (N) are indexed by a bitcode with [N/2] bits,
each bit representing a conserved charge,
and each bit taking the values +12 (↑)
or -12 (↓).
The 5 charges of the standard model
— hypercharge, weak isospin, and three colours —
are harmoniously re-expressed in terms of a 5-bit bitcode yzrgb,
with 2 weak bits y and z, and 3 colour bits r, g, b,
equations (<ref>).
This paper is motivated by some striking (to the authors) features,
<ref>,
of the (10) chart (<ref>) of standard model fermions,
which seem to suggest that
the (10) algebra
and
the Dirac algebra,
which is the geometric algebra of the Lorentz group (3,1),
might unify nontrivially in a common spin algebra.
The standard treatment of (10) as a grand unified group
takes each of the 2^5 = 32 fermions of a generation to be a Weyl fermion,
a 2-component chiral (massless) fermion transforming under (3,1),
for a total of 2^6 degrees of freedom.
If the (10) and (3,1) algebras unify nontrivially,
then the unified algebra must have 2^6 spinor degrees of freedom.
Given that there must be 1 time dimension,
the spinor algebra must be that of (11,1).
This requires adding a 6th bit, the t-bit,
or time bit, to the 5 bits yzrgb of (10).
The usual assumption,
motivated by the Coleman-Mandula no-go theorem
<cit.>,
is that the generators of a grand unified group such as (10)
commute with those of spacetime.
But that assumption is unnecessarily strong:
the Coleman-Mandula theorem requires only that the generators of
unbroken internal symmetries commute with those of spacetime.
In the (11,1) model,
before grand symmetry breaking
the internal group and the spacetime group are one and the same,
so the Coleman-Mandula theorem is satisfied trivially.
After grand symmetry breaking,
the (11,1) algebra contains
the internal (initially Pati-Salam, then standard-model)
and Dirac algebras as commuting subalgebras,
in accordance with the Coleman-Mandula theorem.
The main result of this paper is the expressions (<ref>)
for the vectors of the Dirac algebra
in terms of multivectors of (11,1).
The Dirac vectors (<ref>) satisfy the Dirac algebra,
and they all commute with all standard model bivectors
modified per (<ref>),
in accordance with the Coleman-Mandula theorem.
The modification (<ref>),
which is the key trick of this paper,
is to multiply all imaginary standard model bivectors
by the colour chiral operator ϰ_rgb,
which leaves the standard-model algebra unchanged,
but allows the Dirac algebra to be embedded in the (11,1) algebra
in a nontrivial way.
The remainder of the paper,
<ref>–<ref>,
identifies the symmetry breaking chain and associated Higgs sector
that breaks (11,1) to the standard model.
The tight fit of the Dirac and standard-model algebras
in the (11,1) algebra leaves little freedom of choice.
The minimal symmetry breaking chain (<ref>) is unique,
proceeding via the Pati-Salam group.
The minimal Higgs sector is similarly unique,
consisting of a 66-component (11,1) scalar multiplet
transforming in the adjoint representation.
The same Higgs multiplet mediates all symmetry breakings,
including grand, Pati-Salam, and electroweak.
The running of coupling parameters predicts that
grand symmetry breaking occurs at 10^15GeV,
and Pati-Salam symmetry breaking at 10^12GeV,
equations (<ref>).
The grand Higgs field ⟨⟩,
equation (<ref>),
breaks t symmetry,
is available to drive cosmological inflation at the grand unified scale,
and generates a large Majorana mass for the right-handed neutrino
by flipping its t-bit.
The electroweak Higgs field ⟨⟩,
equation (<ref>),
breaks y symmetry,
and generates fermion masses by flipping their y-bit.
Although the unified algebra is that of (11,1),
the gauge field before grand symmetry breaking cannot be the full group
(11,1),
because if it were,
then the subset of (11,1) bivectors
comprising the 4 electroweak Higgs bivectors
^+_t ^±_k, k = y,z,
equation (<ref>),
would be absorbed into generating masses for the corresponding bivector
gauge fields,
whereas in practice those 4 electroweak Higgs bivectors persist,
remaining available to mediate electroweak symmetry breaking.
The solution proposed in <ref>
is that the spatial dimension ^+_t,
the partner of the time dimension ^-_t,
behaves as a scalar dimension, not participating in any gauge symmetry,
as discussed in 4.4 of <cit.>.
The grand unified group is then (10,1), not (11,1).
Section <ref> summarizes the various predictions
of the (11,1) model,
while section <ref> highlights its possible flaws.
It is beyond the scope of this paper to address whether the proposed
(11,1) model might be accommodated in string theory.
It can scarcely escape notice that (10,1)
has the same number 11 of spacetime dimensions as maximal supergravity
<cit.>,
the low-energy limit of M theory <cit.>,
which is the conjectured extension of string theory
to include higher-dimensional objects, branes.
Extensions of string theory to 12 spacetime dimensions, F-theory,
have also been proposed <cit.>.
String-theory-inspired models usually assume that
the parent spacetime is, at least locally,
a product space, consisting of 4 large dimensions
multiplied by a space of compactified or hidden dimensions.
By contrast,
in the (11,1) geometric algebra,
although the time dimension is a vector,
each of the 3 spatial Dirac dimensions
is a pentavector, a 5-dimensional multivector,
equations (<ref>).
The spatial dimensions share a common 2-dimensional factor,
and beyond that are each 3-dimensional.
Is this arrangement viable in string theory?
§ DATA STATEMENT
No new data were created or analysed in this study.
We thank Pierre Ramond, John Baez, John Huerta, and Kirill Krasnov
for helpful conversations and correspondence.
This research was supported in part by FQXI mini-grant FQXI-MGB-1626.
§ DIRAC ALGEBRA
The Dirac algebra is the geometric (Clifford) algebra associated with the group
(3,1) of Lorentz transformations in 3+1 spacetime dimensions.
In the chiral representation,
the 4 chiral basis spinors
_a ≡{_ , _ , _ , _}
of the Dirac algebra are the column spinors[
The ordering of the Dirac bits is opposite from that in the construction
in 3.1 of <cit.>,
and the sign of one of the spinors is flipped,
{_ , _ , _ , _}
=
{_ , _ , _ , -_} .
The reordering of bits and the sign flip ensures
the chiral and Dirac representations of the orthonormal Dirac γ-matrices
take a standard form,
equations (<ref>) and (<ref>)]
(1,0,0,0)^⊤ ,
(0,1,0,0)^⊤ ,
(0,0,1,0)^⊤ ,
(0,0,0,1)^⊤ ,
in the order listed.
The indices
{ , , , }
signify the transformation properties of the basis spinors:
⇑ and ⇓
signify boost weight +1/2 and -1/2,
while
↑ and ↓
signify spin weight +1/2 and -1/2.
The first two spinors,
_ and _,
have right-handed Dirac chirality (even number of up-bits),
while the last two,
_ and _,
have left-handed Dirac chirality (odd number of up-bits).
Chiral spinors are natively massless.
Dirac basis spinors
_a ≡{_ , _ , _ , _}
are massive linear combinations of chiral spinors,
Dirac_a
=
X_abchiral_b ,
where X is the
symmetric (X = X^⊤),
unitary (X^-1 = X^†)
matrix
X
≡1/√(2)(
[ 1 0 -i 0; 0 1 0 -i; -i 0 1 0; 0 -i 0 1 ])
.
The Dirac basis spinors
_ and _
represent massive spinors in their rest frames,
while
_ and _
represent massive antispinors in their rest frames.
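As a minimal numerical sanity check (ours, not part of the paper, and assuming the off-diagonal entries of X are -i as reconstructed above), the following snippet confirms that X is indeed symmetric and unitary:
```python
import numpy as np

# Matrix X as reconstructed above (off-diagonal entries taken to be -i).
X = (1 / np.sqrt(2)) * np.array([[1, 0, -1j, 0],
                                 [0, 1, 0, -1j],
                                 [-1j, 0, 1, 0],
                                 [0, -1j, 0, 1]])
print(np.allclose(X, X.T))                       # symmetric: X = X^T
print(np.allclose(X @ X.conj().T, np.eye(4)))    # unitary: X X^dagger = 1
```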
Orthonormal vectors
{_0 , _1 , _2 , _3 }
in the chiral representation are the four 4 × 4 unitary matrices
_0 =
(
[ 0 1; -1 0 ])
, _a =
(
[ 0 σ_a; σ_a 0 ])
,
where σ_a are Pauli matrices, and 1 is the 2 × 2 unit matrix.
The advantage of the representation (<ref>)
compared to some others commonly found in the literature is that
the chiral basis vectors in the chiral representation are purely real,
equations (<ref>).
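The anticommutation relations can also be verified directly. The short snippet below (a sketch of ours, not part of the paper) checks numerically that the chiral-representation matrices quoted above satisfy the Dirac algebra {γ_m, γ_n} = 2 η_mn, with the time vector squaring to -1 and the spatial vectors squaring to +1:
```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# chiral-representation gamma matrices as quoted above
gamma0 = np.block([[np.zeros((2, 2)), I2], [-I2, np.zeros((2, 2))]]).astype(complex)
gammas = [gamma0] + [np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]]) for s in sigma]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # signature (-,+,+,+) implied by gamma_0^2 = -1
for m in range(4):
    for n in range(4):
        anticom = gammas[m] @ gammas[n] + gammas[n] @ gammas[m]
        assert np.allclose(anticom, 2 * eta[m, n] * np.eye(4))
print("chiral-representation gammas satisfy the (3,1) Clifford algebra")
```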
The Dirac representation of orthonormal or chiral basis vectors
may be obtained from the chiral representation by the transformation
Dirac_m
=
X
chiral_m
X^-1 ,
with X from equation (<ref>).
The orthonormal vectors in the Dirac representation are
_0 =
(
[ 1 0; 0 -1 ])
, _a =
(
[ 0 σ_a; σ_a 0 ])
.
In the notation ^±_k of paired orthonormal vectors
used elsewhere in this paper, equations (<ref>),
the orthonormal vectors _a are
{_1 , _2 , _3 , _0 }
=
{_1^+ , _1^- , _2^+ , _2^- } .
The bivectors _a and I _a
(_a are 4 × 4 matrices satisfying the same algebra
as Pauli matrices)
and the pseudoscalar I
in the chiral representation
are
_0 _a
=
_a
≡(
[ σ_a 0; 0 - σ_a ])
,
12ε_abc_b _c
=
I _a
=
(
[ σ_a 0; 0 σ_a ])
,
I
=
(
[ 1 0; 0 -1 ])
.
Chiral basis vectors in the chiral representation,
commonly called Newman-Penrose basis vectors,
are the real matrices
_v =
(
[ 0 σ_v; - σ_u 0 ])
, _u =
(
[ 0 σ_u; - σ_v 0 ])
, _+ =
(
[ 0 σ_+; σ_+ 0 ])
, _- =
(
[ 0 σ_-; σ_- 0 ])
,
where σ_m are the Newman-Penrose Pauli matrices
σ_v
≡1 √(2)( 1 + σ_3 )
=
√(2)(
[ 1 0; 0 0 ])
, σ_u
≡1 √(2)( 1 - σ_3 )
=
√(2)(
[ 0 0; 0 1 ])
,
σ_+
≡1 √(2)( σ_1 + σ_2 )
=
√(2)(
[ 0 1; 0 0 ])
, σ_-
≡1 √(2)( σ_1 - σ_2 )
=
√(2)(
[ 0 0; 1 0 ])
.
The labelling v, u, +, - for the chiral basis vectors
is conventional in the Newman-Penrose community.
In the notation _k and _k̅
of paired chiral vectors
used elsewhere in this paper, equations (<ref>),
the chiral (Newman-Penrose) basis vectors _a are
{_+ , _- , _v , _u }
=
{_1 , _1̅ , _2 , -_2̅} .
The spinor metric ε
in the chiral
representation is <cit.>
ε
=
I _2
=
(
[ 0 1 0 0; -1 0 0 0; 0 0 0 1; 0 0 -1 0 ])
.
The expression for ε
shows that the spinor metric flips boost and spin,
leaving chirality unchanged.
The conjugation operator is related to the spinor metric by
= -ε_0^.
The conjugation operator in the chiral
representation is
=
I _2
=
(
[ 0 0 0 ; 0 0 - 0; 0 - 0 0; 0 0 0 ])
.
The expression for
shows that the conjugation operator flips chirality and spin,
leaving boost unchanged.
The expressions for the spinor metric and conjugation operator
in terms of multivectors are to be understood as
holding in the particular representation.
The spinor metric and conjugation operators do not transform as multivectors
under Lorentz transformations;
rather, they are Lorentz invariant, unchanged under Lorentz transformations.
§ NAMING OF BITS
This paper adopts the notation t,y,z,r,g,b
for the 6 bits of (11,1) spinors,
consisting of the time bit t, two weak bits y,z,
and three colour bits r,g,b.
<cit.>
used r,w,b (red, white, blue) for the colour bits,
and (arbitrarily) p, g (purple, green) for the weak bits.
<cit.>
adopted r,g,b (red, green, blue) for the colour bits,
and d, u (down, up) for the weak bits,
on the grounds that down and up quarks d^c_ and u^c_
have up bits respectively dc and uc
(with c one of r,g,b),
per the (10) chart (<ref>).
The present paper follows <cit.> for the colour bits,
since it is consistent with the elegant mental picture that
equal amounts of the three colours r,g,b are colourless,
as are all free fermionic bound states at low energy
(leptons, baryons, mesons).
However, <cit.>'s adoption of d,u for the weak bits is potentially
confusing, first because each bit d or u can itself be either down or up,
and second because d and u might be misinterpreted as measuring
the number of d and u quarks,
potentially confusing weak isospin with isospin.
The present paper chooses y, z for the weak bits
in part because it reduces the potential for misinterpretation,
and in part because y and z are infrared bands
to be used by the Vera Rubin Observatory (formerly the LSST)
<cit.>,
for which astronomical first light is expected in 2024.
The sequence yzrgb is in (inverse) order of wavelength,
{y,z,r,g,b}∼{1000, 900, 600, 500, 400}nm .
Note that hypercharge Y carries one unit of y-charge,
in addition to other charges,
equation (<ref>).
§ GAUGE FIELDS OF SUBGROUPS OF (10)
The standard-model gauge group
(1)_Y ×(2)_×(3)_c
is a subgroup of the (5) subgroup of (10).
The gauge fields (generators) of (5) comprise
the subset of gauge fields of (10)
that leave the number of up-bits of a spinor unchanged.
The gauge bivectors of (n) with n = 5 constitute
( n + 1 ) ( n - 1 ) = 24 bivectors
comprising the n ( n - 1 ) = 20 off-diagonal bivectors
12 ( 1 - ϰ_kl )
^+_k ^+_l
=
12
( ^+_k ^+_l
+
^-_k ^-_l )
=
12
( _k _l̅ + _k̅ _l )
,
12 ( 1 - ϰ_kl )
^+_k ^-_l
=
12
( ^+_k ^-_l
-
^-_k ^+_l )
=
2
( _k _l̅ - _k̅ _l )
,
and the n-1=4 diagonal bivectors
12 ^+_k ^-_k
=
2 _k _k̅ 12∑_k ^+_k ^-_k
=
2∑_k _k _k̅ ,
with indices k and l running over
y, z, r, g, b.
The quantity
ϰ_kl≡_k _k̅_l _l̅ = - ^+_k ^-_k ^+_l ^-_l
in equations (<ref>)
is the kl chiral operator.
The factor 12 ( 1 - ϰ_kl )
is a projection operator,
whose square is itself,
which serves to project its argument into the space where
the sum of k and l charges is zero.
The S in (5) restricts to (5) matrices of unit determinant,
effectively removing the bivector
12∑_k ^+_k ^-_k
that rotates all spinors in an (5) multiplet by a common phase.
The bivectors of (10) that are not in (5)
are the 20 off-diagonal bivectors
12 ( 1 + ϰ_kl )
^+_k ^+_l
=
12
( ^+_k ^+_l
-
^-_k ^-_l )
=
12
( _k _l + _k̅ _l̅ )
,
12 ( 1 + ϰ_kl )
^+_k ^-_l
=
12
( ^+_k ^-_l
+
^-_k ^+_l )
=
- 2
( _k _l - _k̅ _l̅ )
,
and the 1 diagonal bivector
X
≡12∑_k ^+_k ^-_k
=
2∑_k _k _k̅ ,
with indices k running over
y, z, r, g, b.
The bivector X measures total yzrgb charge y+z+r+g+b,
and X is the generator of the (1) factor
that would complete (5) to (5).
The 6 bivectors involving the weak charges y and z generate the weak group
(4)_w = (2)_×(2)_,
a product of right- and left-handed components.
The subgroup of (4)_w that preserves the number of yz up-bits
is the standard-model left-handed component (2)_.
The bivectors of the left-handed weak group (2)_
comprise the 2+(2-1) = 3 bivectors (<ref>)
and (<ref>)
with k and l running over y and z.
The bivectors are equivalent to τ_k
with τ_k satisfying the Pauli algebra,
τ_1
≡- 12 ( 1 - ϰ_yz ) ^+_y ^-_z
=
- 12
( ^+_y ^-_z - ^-_y ^+_z )
=
- 2
( _y _z̅ + _z _y̅ )
,
τ_2
≡- 12 ( 1 - ϰ_yz ) ^+_y ^+_z
=
- 12
( ^+_y ^+_z + ^-_y ^-_z )
=
- 12
( _y _z̅ - _z _y̅ )
,
τ_3
≡- 12 ( 1 - ϰ_yz ) ^+_y ^-_y
=
- 12
( ^+_y ^-_y - ^+_z ^-_z )
=
- 2
( _z _z̅ - _y _y̅ )
,
where
ϰ_yz≡_y _y̅_z _z̅
is the weak chiral operator.
The left-handed projection operator 12 ( 1 - ϰ_yz )
equals 1 acting on left-handed weak chiral states,
and vanishes acting on right-handed weak chiral states.
The bivectors of the right-handed weak group (2)_
comprise 3 bivectors equivalent to ρ_k
with ρ_k satisfying the Pauli algebra,
ρ_1
≡- 12 ( 1 + ϰ_yz ) ^+_y ^-_z
=
- 12
( ^+_y ^-_z + ^-_y ^+_z )
=
2
( _y _z - _y̅ _z̅ )
,
ρ_2
≡- 12 ( 1 + ϰ_yz ) ^+_y ^+_z
=
- 12
( ^+_y ^+_z - ^-_y ^-_z )
=
12
( _y _z + _y̅ _z̅ )
,
ρ_3
≡- 12 ( 1 + ϰ_yz ) ^+_y ^-_y
=
- 12
( ^+_y ^-_y + ^+_z ^-_z )
=
2
( _y _y̅ + _z _z̅ )
.
The right-handed projection operator 12 ( 1 + ϰ_yz )
equals 1 acting on right-handed weak chiral states,
and vanishes acting on left-handed weak chiral states.
The expressions for the bivectors ρ_1 and ρ_2 are equivalent
to equations (<ref>) and (<ref>)
with indices k and l equal to y and z.
The diagonal bivector ρ_3 measures the total yz charge y+z,
ρ_3
=
12∑_k = y,z^+_k ^-_k
=
2∑_k = y,z_k _k̅ .
The 15 bivectors involving the colour charges r, g, and b generate the
colour group (6)_c.
The subgroup of (6)_c that preserves the number of rgb up-bits
is the standard-model colour group (3)_c.
The bivectors of the colour group (3)_c
comprise the 6+(3-1) = 8 bivectors (<ref>)
and (<ref>)
with k and l running over r, g, and b.
The 7 other bivectors of (6)_c consist of
the 6 leptoquark bivectors
given by equations (<ref>) with k,l
drawn from pairs of distinct colour indices r,g,b,
and the 1 diagonal bivector that measures the total rgb charge r+g+b,
- 32 ( B - L )
=
12∑_k = r,g,b^+_k ^-_k
=
2∑_k = r,g,b_k _k̅ .
The 1 hypercharge bivector, the generator of (1)_Y,
is defined to be the bivector whose eigenvalue is Y
where Y is the hypercharge, equation (<ref>),
Y
≡12∑_k = y,z^+_k ^-_k
-
13∑_k = r,g,b^+_k ^-_k
=
(
12∑_k = y,z_k _k̅
-
13∑_k = r,g,b_k _k̅)
.
The 1 electromagnetic charge bivector,
the generator of (1)_Q,
is defined to be the bivector whose eigenvalue is Q
where Q is the electric charge, equation (<ref>),
Q
≡12^+_z ^-_z
-
16∑_k = r,g,b^+_k ^-_k
=
(
12_z _z̅
-
16∑_k = r,g,b_k _k̅)
.
|
http://arxiv.org/abs/2307.01328v1
|
20230703200240
|
A maximal inequality for local empirical processes under weak dependence
|
[
"Luis Alvarez",
"Cristine Pinto"
] |
econ.EM
|
[
"econ.EM",
"math.ST",
"stat.TH"
] |
We introduce a maximal inequality for a local empirical process under strongly mixing data. Local empirical processes are defined as the (local) averages 1/nh∑_i=1^n 1{x - h ≤ X_i ≤ x+h}f(Z_i), where f belongs to a class of functions, x ∈ℝ and h > 0 is a bandwidth. Our nonasymptotic bounds control estimation error uniformly over the function class, evaluation point x and bandwidth h. They are also general enough to accommodate function classes whose complexity increases with n. As an application, we apply our bounds to function classes that exhibit polynomial decay in their uniform covering numbers. When specialized to the problem of kernel density estimation, our bounds reveal that, under weak dependence with exponential decay, these estimators achieve the same (up to a logarithmic factor) sharp uniform-in-bandwidth rates derived in the iid setting by <cit.>.
§ INTRODUCTION
In this paper, we introduce a maximal inequality for a local empirical process under exponential decay of the sample α-mixing coefficients. We provide a nonasymptotic bound on an Orlicz norm of the uniform estimation error of the local-average process 1/nh∑_i=1^n 1{x - h ≤ X_i ≤ x+h}f(Z_i), where uniformity holds simultaneously over the evaluation point x ∈ℝ, bandwidth h ∈ [a_n, b_n), and function f ∈ℱ, with a_n ≤ b_n being positive constants and ℱ a function class. The nonasymptotic nature of our results allows for function classes whose complexity, as measured by their uniform covering numbers, increases with n. The latter is especially useful for applications in high-dimensional statistics (see <cit.> for an example). We also discuss how to extend our results to multidimensional x and subgeometric decay of mixing coefficients.
We then apply our results to function classes ℱ that exhibit polynomial decay in their uniform covering numbers. This class is particularly interesting, as polynomial decay in uniform entropy is ensured by a finite VC dimension <cit.>. When specialized to the problem of kernel density estimation, our results show that, under exponential decay in the mixing coefficients, kernel estimators achieve the same, up to a logarithmic factor, sharp uniform-in-bandwidth rates obtained by <cit.> in the iid setting.
To the best of our knowledge, the closest paper to ours is <cit.>, who provides uniform-in-bandwidth asymptotic rates for local empirical processes under stationarity, a β-mixing condition and polynomial decay in bracketing entropy. There are some important differences between his approach and ours, though. First, 's analysis is asymptotic, thus not applicable to function classes of increasing complexity. Second, while the author uses bracketing entropy to control complexity, our analysis relies on uniform covering numbers. Finally, his main result (Theorem 2.1) leads to a rate of a_n^-1/2 over the function classes encompassed by his setting. When compared to our application in <Ref>, we see that our resulting rates are logarithmic in a_n, thus offering a great improvement over his results in kernel-type problems.
The remainder of this paper is organized as follows. Section <ref> introduces our main result. Section <ref> applies it to function classes with uniform entropy displaying polynomial decay. Section <ref> concludes.
§ A MAXIMAL INEQUALITY FOR LOCAL EMPIRICAL PROCESSES
We present our main result in the theorem below. In what follows, we define, for some p ≥ 0, ψ_p(x) ≡exp(x^p) - 1; and, for a random variable Z, the Orlicz norm with respect to ψ_p is given by:
‖ Z ‖_ψ_p≡inf{C > 0: 𝔼[ψ_p(|Z|/C)] ≤ 1} .
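As a small numerical illustration of this definition (ours, not from the paper), the ψ_1 Orlicz norm of an Exponential(1) random variable can be obtained by root finding; with the convention above it equals 2, since 𝔼[exp(Z/C)] = C/(C-1) for C > 1:
```python
from scipy.optimize import brentq

def excess(C):
    # E[psi_1(Z/C)] - 1 in closed form for Z ~ Exponential(1), valid for C > 1
    return C / (C - 1.0) - 2.0

print(brentq(excess, 1.01, 100.0))  # -> 2.0, the psi_1 Orlicz norm of Exp(1)
```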
Moreover, for a sequence {Y_i}_i=1^n of random variables defined on a common probability space (Ω, Σ, ℙ), we define the α-mixing coefficients as, for each k ∈ℕ,
α(k) ≡sup_t ∈ℕsup_A ∈σ(Y_1,…, Y_t), B ∈σ( Y_t+k,Y_t+k+1,…) |ℙ[A∩ B] - ℙ[A]ℙ[B]| .
Let {(X_i, Z_i)}_i ∈ℕ be a sequence of random variables defined on a common probability space (Ω, Σ, ℙ), with mixing coefficients satisfying, for some c > 0, α(i) ≤exp(-2 c i) ∀ i ∈ℕ. Suppose that each X_i is real valued, with common distribution function G that admits a bounded Lebesgue density g. Suppose that each Z_i takes values on a measurable space (𝒵, Σ). Let ℱ be a pointwise measurable class of real-valued functions with domain 𝒵. Define:
S_n(x,f; h) ≡1/√(nh)∑_i=1^n 1{ x - h ≤ X_i ≤ x+h} f(Z_i) , x ∈ℝ, f ∈ℱ, h ∈ [a_n,b_n) ,
where 0<a_n≤ b_n are positive constants, with 1/(n ‖ g‖_∞) ≤ b_n < 1/‖ g‖_∞.
Suppose that the class of functions ℱ admits a measurable and bounded envelope κ. Assume that, for some q ∈ (1,∞], ω(ϵ) ≡sup_P ∈𝒫(𝒵)𝒩(2ϵ‖κ‖_∞,ℱ, ‖·‖_q,P) is finite for every ϵ > 0, where ‖ f ‖_P,q = (P|f|^q)^1/q and the supremum is taken over all probability measures on (𝒵, Σ). Then, for every n ≥ 2:
‖sup_x ∈ℝ, h ∈ [a_n, b_n), f ∈ℱ|S_n(x,f;h) - 𝔼[S_n(x,f;h)]| ‖_ψ_1≤ C‖κ‖_∞/√(na_n) + D ‖κ‖_∞/√(na_n)∑_l=⌈ -log_2(‖ g‖_∞ b_n)⌉^⌈log_2(n) ⌉inf_δ≥ 0ψ_l(δ) ,
where C>0 is an absolute constant; the constant D>0 depends solely on c; and:
ψ_l(δ) = (log(n)^2[(l+1)log(ω(δ))] + √(ln/2^l 1)√([(l+1)log(ω(δ))])) +
nδ[(n^-1log(n)^2(l+1) + √(l/2^l n1/n^2)√((l+1)))^q-1/q +1/2^(q-1)/q(l+1)] .
The proof of Theorem <ref> is deferred to <Ref>. It relies on a chaining argument due to <cit.>, which we couple with the Bernstein inequality of <cit.> to help achieve control of the error of the localizing function 1{x ≤ X ≤ x + h } uniformly over the bandwidth. Note that our bound is nonasymptotic, with the constant D depending solely on the decay parameter c. Our bound also depends on minimising the functions ψ_l(δ). This minimisation involves finding a “small” δ that is able to ensure appropriate control of the uniform entropy ω(δ). In the next section, we show that, for classes with polynomial decay in their uniform entropy, a useful upper bound to this infimum can be computed.
It is possible to extend our results to the setting where the distribution of X_i varies with i. In this case, one has to adapt Lemma <ref> in Appendix <ref> to allow for a different localization parameter K_i for every i. Since this brings clutter to the proofs and does not reveal any new insights, for ease of exposition, we focus on the case where the X_i are identically distributed. Observe that, in the statement of <Ref>, we do not require the Z_i to be identically distributed.
It is possible to extend our results to the setting where X is multidimensional by relying on Lemma 7.2 of <cit.>. This result decomposes the difference between a d-dimensional cdf at two points as a sum of d differences of the d marginal cdfs at different evaluation points. Using this result, we may extend Lemma <ref> in Appendix <ref> to the multivariate setting, and use it to prove a multivariate version of Theorem <ref>.
The statement of <Ref> requires the function class to display bounded envelope; and mixing coefficients to exhibit exponential (geometric) decay. These assumptions can be relaxed by replacing the Bernstein inequality of <cit.> with a result due to <cit.>. In this case, a tradeoff between the tail behaviour of the envelope function and the decay of mixing coefficients emerges. Consequently, we achieve control over the uniform estimation error under a different Orlicz norm.
§ APPLICATION: FUNCTION CLASSES WITH POLYNOMIAL DECAY
In this section, we apply our results to classes of functions that exhibit polynomial decay in their covering numbers. Specifically, in the setting of <Ref>, we take q=2 and consider a sequence of bounded function classes {ℱ_n}_n ∈ℕ with corresponding envelope κ_n, whose uniform entropy numbers ω_n satisfy:
ω_n(δ) ≤ C_n (1/δ)^v_n , ∀δ∈ (0,1],
for positive constants C_n> 0,v_n > 0, n ∈ℕ. We also assume the envelopes to be uniformly bounded, with sup_n ∈ℕ‖κ_n‖_∞≤ϕ < ∞. Suppose that na_n →∞. In this case, we may find a constant G>0 such that, for every n above a certain threshold, ⌈ - log_2(‖ g ‖_∞ b_n )⌉≤ l ≤⌈log_2(n)⌉; and δ < exp(1/v_n(log(C_n) - (l+1))):
ψ_l(δ) ≤ G log(n)^2 (log(C_n) - v_n log(δ)) + G √(l n/2^l)√((log(C_n) - v_n log(δ))) + G n δlog(n)^3/2/2^(1/2)(l+1) ,
Consider the choice:
δ_l = C_n^1/v_n/2^((1/v_n))(l+1)√(n) (‖ g ‖ b_n)^1/v_nlog(n)^3/2 .
Assume further that:
log(n)^3 loglog(n)/√(n a_n)→ 0 .
In this case, we obtain the rate:
‖sup_x ∈ℝ, h ∈ [a_n, b_n), f ∈ℱ|S_n(x,f;h) - 𝔼[S_n(x,f;h)]| ‖_ψ_1=
O(√(b_n/a_n)(C_n^1/v_n√(-log(‖ g ‖ b_n)v_n log(n)) -log(‖ g ‖ b_n)) ) .
Our rate explicitly accommodates function classes of increasing complexity. If the sequence of functions has finite (albeit possibly increasing) VC dimension, then Theorem 2.6.7 of <cit.> shows that the v_n and C_n may be taken such that C_n^1/v_n is bounded. In this case, the complexity of the function class only affects the rate through the √(v_n) term, where v_n scales linearly with the VC dimension.
Finally, we consider the problem of kernel density estimation. In this case, the function class is the same for every n ∈ℕ. If the bandwidths are taken such that a_n ≍ b_n, then our rate simplifies to:
√(-log(a_n))√(log(n) -log(a_n)) .
In contrast, Remark 2 in <cit.> shows that, in the iid setting, one can achieve the rate:
√(loglog (n) - log(a_n)) ,
which coincides with our rate except for logarithmic factors. Note that, if we consider, as is usually done in practice, polynomial bandwidths, i.e., a_n = C n^-α for α > 0, then our rate simplifies to log(n), whereas 's collapses to √(log(n)).
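As a toy numerical illustration of this uniform-in-bandwidth behaviour (our own sketch, with assumed parameter values, not from the paper), the following snippet evaluates the sup error of a boxcar kernel density estimator over a grid of evaluation points and a range of bandwidths, for a geometrically mixing Gaussian AR(1) sequence whose stationary density is known in closed form:
```python
import numpy as np

rng = np.random.default_rng(0)

phi, n = 0.5, 20000                            # assumed AR(1) coefficient and sample size
true_sd = np.sqrt(1.0 / (1.0 - phi**2))        # stationary density is N(0, 1/(1 - phi^2))
x_grid = np.linspace(-3.0, 3.0, 121)
true_pdf = np.exp(-0.5 * (x_grid / true_sd) ** 2) / (true_sd * np.sqrt(2.0 * np.pi))

X = np.empty(n)
X[0] = rng.normal(0.0, true_sd)                # start from the stationary distribution
for i in range(1, n):
    X[i] = phi * X[i - 1] + rng.standard_normal()

def kde(x, h):
    # boxcar kernel, matching the indicator 1{x-h <= X_i <= x+h}/(2nh) in the text
    return np.mean(np.abs(X[:, None] - x[None, :]) <= h, axis=0) / (2.0 * h)

bandwidths = np.geomspace(0.05, 0.5, 12)       # the bandwidth range [a_n, b_n)
sup_err = max(np.max(np.abs(kde(x_grid, h) - true_pdf)) for h in bandwidths)
print(f"uniform-in-bandwidth sup error with n={n}: {sup_err:.4f}")
```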
§ CONCLUDING REMARKS
In this paper, we introduced a maximal inequality for the uniform estimation error of a local empirical process under strongly mixing data, where uniformity holds simultaneously over the function class, bandwidth and evaluation point. Our nonasymptotic bounds accommodate function classes with increasing complexity, which is a useful feature for “high-dimensional” statistical analyses. As an application, we applied our bounds to function classes that exhibit polynomial decay in their uniform entropy. When specialized to the kernel density estimation problem, these results show that our bound leads to the same (up to a logarithmic factor) optimal rates derived by <cit.> in the iid setting.
More generally, we view our results as a first step in the development of rigorous uniform inference tools in local estimation problems under weak dependence and data-driven bandwidths. Specifically, one may combine our results with couplings in the weakly dependent setting (e.g. , ) to devise test statistics that control size uniformly over the evaluation point x. An example is the construction of uniform-in-x confidence bands for local polynomial quantile regression estimators with time series data. We intend to study such procedures in future research.
§ PROOF OF <REF>
We first prove a preliminary result for uniform random variables.
Let {(U_i, Z_i)}_i∈ℕ be a sequence of random variables defined on a common probability space (Ω, Σ, ℙ), with mixing coefficients satisfying, for some c > 0, α(i) ≤exp(-2 c i) ∀ i ∈ℕ. Suppose that each U_i is uniformly distributed on [0,1], and that each Z_i takes values on a measurable space (𝒵, Σ). Let ℱ be a nonnegative pointwise measurable class of real-valued functions with domain 𝒵. Define:
Z_n(u,f) 1/√(n)(∑_i=1^n 1{U_i ≤ u} f(Z_i) - 𝔼[1{U_i ≤ u}f(Z_i)] ) , u ∈ [0,1], f ∈ℱ .
Suppose that the class of functions ℱ admits a measurable and bounded envelope κ. Assume that, for some q ∈ (1,∞], ω(ϵ) sup_P ∈𝒫(𝒵)𝒩(2ϵ‖κ‖_∞,ℱ, ‖·‖_q,P), is finite for every ϵ > 0, where ‖ f ‖_P,q = (P|f|^q)^1/q and the supremum is taken over all probability measures on (𝒵, Σ). Then, for every n ≥ 2 and K ∈ℕ such that K ≤⌈log_2(n) ⌉:
‖sup_u ∈ [0,1], f ∈ℱ|Z_n(u,f) - Z_n(Π_K(u),f)| ‖_ψ_1≤‖κ‖_∞/√(n) + D ‖κ‖_∞∑_l=K+1^⌈log_2(n) ⌉inf_δ≥ 0ψ_l(δ) ,
where the constant D > 0 depends solely on c; for J ∈ℕ, Π_J(u) = ⌊ 2^J u ⌋/2^J; and:
ψ_l(δ) = (n^-1/2log(n)^2[(l+1)log(ω(δ))] + √(l/2^l1/n)√([(l+1)log(ω(δ))])) +
√(n)δ[(n^-1log(n)^2(l+1) + √(l/2^l n1/n^2)√((l+1)))^q-1/q +1/2^(q-1)/q(l+1)] .
We begin by adapting the chaining argument in the proof of Proposition 7.2. of <cit.>. Fix f ∈ℱ. Observe that for L ≥ K, we may write:
sup_u ∈ [0,1] | Z_n(u,f) - Z_n(Π_K(u), f)| ≤∑_l= K+1^LΔ_l(f) + Δ^*_L(f) ,
where Δ_l(f) = max_k ∈{1, …, 2^l} |Z_n((k-1)/2^l,f) - Z_n(k/2^l,f)|; and Δ^*_L(f) = sup_u ∈ [0,1] |Z_n(Π_L(u),f) - Z_n(Π(u), f)|. Observe that:
- ‖κ‖_∞√(n)/2^L≤ Z_n(Π(u)) - Z_n(Π_L(u)) ≤Δ_L(f) + ‖κ‖_∞√(n)/2^L .
From the above, we obtain that:
sup_u ∈ [0,1] | Z_n(u,f) - Z_n(Π_K(u), f)| ≤ 2 ∑_l=K+1^L Δ_l(f) + ‖κ‖_∞√(n)/2^L ,
and, by taking L = ⌈log_2(n)⌉, we arrive at:
sup_u ∈ [0,1] | Z_n(u,f) - Z_n(Π_K(u), f)| ≤ 2 ∑_l=K+1^⌈log_2(n) ⌉Δ_l(f) + ‖κ‖_∞1/√(n) .
The previous argument holds for a fixed f. We achieve uniformity as follows. For a given sequence {δ_l}_l=K+1^⌈log_2(n)⌉, we may write, for each l:
sup_f ∈ℱΔ_l(f) ≤max_f ∈ T_lmax_k ∈{1, …, 2^l} |Z_n((k-1)/2^l,f) - Z_n(k/2^l,f)|
+ sup_f ∈ℱ: ‖ f - f'‖_q,ℙ̂_n‖ f - f'‖_q,ℙ̅_n≤δ_lmax_k ∈{1, …, 2^l} |Z_n((k-1)/2^l,f) - Z_n(k/2^l,f) - (Z_n((k-1)/2^l,f') - Z_n(k/2^l,f'))| ,
with T_l being, simultaneously, a 2δ_l ‖κ‖_∞-cover of ℱ with respect to ‖·‖_q,ℙ̅_n and ‖·‖_q,ℙ̂_n, where ‖ f‖_q,ℙ̅_n= (1/n∑_i=1^n 𝔼[|f(Z_i)|^q])^1/q; and ℙ̂_n denotes the empirical measure 1/n∑_i=1^n δ_Z_i. Observe that, by the assumption in the statement of the lemma, |T_l| ≤ω(δ_l)^2; and that the set T_l is random. By filling the T_l with constant functions equal to zero whenever |T_l| < ω(δ_l)^2, we may assume, without loss of generality, that |T_l| = ω(δ_l)^2 and we may thus order the random elements in T_l = {f_1,l, f_2,l, …, f_ω(δ_l)^2, l}. We thus consider T_l to be a set of ω(δ_l)^2 random functions.
We will deal with each term on the right-hand side of (<ref>) separately. Consider the first term. For a given l ∈ℕ and some k ∈{1,…, 2^l} and f ∈ T_l, define Y_i,l(k,f) = 1/√(n)f(Z_i)1{ (k-1)/2^l ≤ U_i ≤ k/2^l }. Rio's covariance inequality <cit.> yields:
∑_s≥ i^∞ |ℂ(Y_i,l(j,f), Y_s,l(j,f))| ≤4 ‖κ‖_∞^2/n∑_s=0^∞α(s) 1/2^l≤4 ‖κ‖_∞^2/n(l log(2)/c21/2^l + 1/2^l-11/1 - exp(-2c)) v_l^2 n^-1‖κ‖^2 .
It then follows by Theorem 2 of <cit.> that, for a constant C depending only on c, we have that, for any n ≥ 2 and every l ∈ℕ, k ∈{1,2,…, 2^l} and f ∈ T_l:
ℙ[|Z_n((k-1)/2^l,f) - Z_n(k/2^l,f)| ≥ x] ≤exp(Cx^2/v^2_l ‖κ‖_∞^2 + ‖κ‖^2_∞ n^-1 + x ‖κ‖_∞ n^-1/2log(n)^2), ∀ x ≥ 0 .
By Lemma 2.2.10 of <cit.>, there is a constant K depending only on c such that, for every l ∈ℕ and n ≥ 2:
‖max_f ∈ T_lmax_k ∈{1, …, 2^l} |Z_n((k-1)/2^l,f) - Z_n(k/2^l,f)| ‖_ψ_1≤
K ‖κ‖_∞( n^-1/2log(n)^2 log(1+ 2^lω(δ_l)^2) + √( v_l^2 + n^-1)√(log(1+ 2^lω(δ_l)^2))) .
Next, we deal with the second term in (<ref>). A simple argument reveals that:
max_k ∈{1,…, 2^l}sup_f ∈ℱ: ‖ f - f'‖_q,ℙ̂_n‖ f - f'‖_q,ℙ̅_n≤δ_l |Z_n((k-1)/2^l,f) - Z_n(k/2^l,f) - (Z_n((k-1)/2^l,f') - Z_n(k/2^l,f'))| ≤
2‖κ‖_∞δ_l n^(2 -q)/2qmax_k ∈{1,… 2^l}|∑_i=1^n 1{(k-1)/2^l ≤ U_i ≤ k/2^l}|^q-1/q+ 2‖κ‖_∞δ_l√(n)/2^(q-1/q)l ,
from which it follows that:
‖max_k ∈{1,…, 2^l}sup_f ∈ℱ: ‖ f - f'‖_q,ℙ̂_n‖ f - f'‖_q,ℙ̅_n≤δ_l |Z_n((k-1)/2^l,f) - Z_n(k/2^l,f) - (Z_n((k-1)/2^l,f') - Z_n(k/2^l,f'))| ‖_ψ_1≤
2 S ‖κ‖_∞δ_l n^1/2q‖max_k ∈{1,… 2^l}|1/√(n)∑_i=1^n (1{(k-1)/2^l ≤ U_i ≤ k/2^l} - 1/2^l) |‖_ψ_1^q-1/q + S 4‖κ‖_∞δ_l√(n)/2^(q-1/q)l≤
2 Sδ_l ‖κ‖_∞√(n)(K( n^-1log(n)^2 log(1+ 2^l) + √(n^-1(v_l^2 + n^-1))√(log(1+ 2^l)))^q-1/q + 2/2^(q-1/q)l) ,
where S>0 is an absolute constant, and the last inequality follows from replicating the preceding argument replacing T_l with a constant function equal to one. Using that log(1+2^l ω(δ_l)^2) ≤log(2^l+1ω(δ_l)^2) = (l+1) log(2) + 2 log(ω(δ_l)) and v^2_l ≤ M l/2^l for a constant M depending only on c, we can simplify both (<ref>) and (<ref>). Then, plugging these back onto (<ref>), optimising, and then plugging back onto (<ref>); we obtain the desired result.
We are now ready to prove the main result. We begin by proving the result for a nonnegative class of functions with bounded envelope. Start by noticing that:
|S(x,f;h)- 𝔼[S(x,f;h)]| ≤|1/√(nh)∑_i=1^n (1{ x ≤ X_i ≤ x + h} f(Z_i)- 𝔼[1{ x ≤ X_i ≤ x + h} f(Z_i)])| +
|1/√(nh)∑_i=1^n (1{ x - h ≤ X_i ≤ x} f(Z_i) - 𝔼[1{ x - h ≤ X_i ≤ x } f(Z_i)])| +
| 1/√(nh)∑_i=1^n 1{X_i = x}| .
We deal with each term separately. Consider the first term. Observe that, since each X_i is continuously distributed, we have that:
|1/√(nh)∑_i=1^n (1{ x ≤ X_i ≤ x + h} f(Z_i)- 𝔼[1{ x ≤ X_i ≤ x + h} f(Z_i)])| =
|1/√(nh)∑_i=1^n (1{ G(x) < U_i ≤ G(x + h)} f(Z_i)- 𝔼[1{ G(x) < U_i ≤ G(x + h)} f(Z_i)])| +
|1/√(nh)∑_i=1^n 1{ U_i = G(x)} f(Z_i)| ,
where each U_i G(X_i) is uniformly distributed. The last term on the right hand side of (<ref>) is, with probability one, bounded above by ‖ g‖_∞/√(a_n n) uniformly over h, f and x. Next, note that, for any x ∈ℝ and h > 0, trivially:
G(x+h) - G(x) ≤‖ g‖_∞ h .
Combining this fact with continuity of G reveals that, for each x ∈ℝ, the set:
Ξ_x = {G(x+h) - G(x): 0 ≤ h < b_n} ,
is an interval contained in
[0, ‖ g ‖ b_n) .
Therefore, setting K = ⌊ - log_2(‖ g ‖_∞ b_n)⌋ yields that:
sup_x ∈ℝsup_a_n ≤ h < b_nsup_f ∈ℱ|1/√(nh)∑_i=1^n (1{ x ≤ X_i ≤ x + h} f(Z_i)- 𝔼[1{ x ≤ X_i ≤ x + h} f(Z_i)])| ≤
1/√(a_n)sup_u ∈ [0,1]sup_s ∈ [0,2^-K) sup_f ∈ℱ|1/√(n)∑_i=1^n (1{ u < U_i ≤ u+s} f(Z_i)- 𝔼[1{ u < U_i ≤ u + s} f(Z_i)])| + ‖ g‖_∞/√(a_n n) .
But we then note that:
sup_u ∈ [0,1]sup_s ∈ [0,2^-K) sup_f ∈ℱ|1/√(n)∑_i=1^n (1{ u < U_i ≤ u+s} f(Z_i)- 𝔼[1{ u < U_i ≤ u + s} f(Z_i)])| ≤
sup_k ∈{1,2, … 2^K}sup_s ∈ [0,2^-K) sup_f ∈ℱ|1/√(n)∑_i=1^n (1{(k-1)/2^K < U_i ≤(k-1)/2^K+ s} f(Z_i)- 𝔼[1{(k-1)/2^K < U_i ≤(k-1)/2^K+ s} f(Z_i)])| +
sup_k ∈{1,2, … 2^K}sup_s ∈ [0,2^-K) sup_f ∈ℱ|1/√(n)∑_i=1^n (1{(k-1)/2^K -s < U_i ≤(k-1)/2^K} f(Z_i)- 𝔼[1{(k-1)/2^K-s < U_i ≤(k-1)/2^K} f(Z_i)])| .
and, since both U_i and 1-U_i are uniformly distributed, Lemma <ref> provides control over the Orlicz norm of both terms on the right-hand side of (<ref>). An analogous argument enables controls of the uniform error of the second term in (<ref>). Finally, as for the third term in (<ref>), we note that its uniform error is bounded above by ‖κ‖_∞/√(na_n). Applying Lemma <ref> and combining the resulting terms proves Theorem <ref> for nonnegative classes.
To extend the result to general bounded classes, consider the constant function f(z) = -‖κ‖_∞ for all z ∈𝒵. Next, decompose the function class as ℱ = (ℱ - f) + {f}. The class (ℱ - f) is nonnegative, admits bounded envelope 2‖κ‖_∞ and the same covering numbers as ℱ. Therefore, the previous result applies to it. The second class of functions is a singleton and f has constant sign; therefore the previous result also applies to it, with ω(δ) = 1 and hence δ = 0. Combining with the bound on (ℱ - f) yields the desired result.
|
http://arxiv.org/abs/2307.01576v1
|
20230704090944
|
Merging and band transition of bound states in the continuum in leaky-mode photonic lattices
|
[
"Sun-Goo Lee",
"Seong-Han Kim",
"Wook-Jae Lee"
] |
physics.optics
|
[
"physics.optics"
] |
sungooleee@gmail.com
Department of Data Information and Physics, Kongju National University, Gongju, 32588, Republic of Korea
Institute of Application and Fusion for Light, Kongju National University, Cheonan, 31080, Republic of Korea
Advanced Photonics Research Institute, Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea
Department of Data Information and Physics, Kongju National University, Gongju, 32588, Republic of Korea
Institute of Application and Fusion for Light, Kongju National University, Cheonan, 31080, Republic of Korea
Bound states in the continuum (BICs) theoretically have the ability to confine electromagnetic waves in limited regions with infinite radiative quality (Q) factors. However, in practical experiments, resonances can only exhibit finite Q factors due to unwanted scattering losses caused by fabrication imperfections. Recently, it has been shown that ultrahigh-Q guided-mode resonances (GMRs), which are robust to fabrication imperfections, can be realized by merging multiple BICs in momentum space. In this study, we analytically and numerically investigate the merging and band transition of accidental BICs in planar photonic lattices. Accidental BICs can merge at the edges of the second stop band, either with or without a symmetry-protected BIC. We show that as the thickness of the photonic lattice gradually increases, the merged state of BICs transitions from the upper to the lower band edge. Using coupled-mode analysis, we present the analytical merging thickness at which multiple accidental BICs merge at the second-order Γ point. Our coupled-mode analysis could be beneficial for achieving ultrahigh-Q GMRs in various photonic lattices composed of materials with different dielectric constants.
Merging and band transition of bound states in the continuum in leaky-mode photonic lattices
Wook-Jae Lee
August 1, 2023
============================================================================================
§ INTRODUCTION
In photonics, bound states in the continuum (BICs) refer to the special eigensolutions of electromagnetic wave equations in non-Hermitian physical systems <cit.>. Unusually localized BICs can exhibit theoretically infinite radiative Q factors, even though they can couple with outgoing radiative waves that can carry electromagnetic energy <cit.>. Due to their remarkable ability to increase light-matter interactions in confined regions, extensive studies on BICs have been conducted for basic research and practical applications in recent years <cit.>. In theoretical or numerical studies, BICs with infinite Q factors can be found in various open photonic systems, including photonic crystals <cit.>, metasurfaces <cit.>, and plasmonic structures <cit.>. In experimental implementations, however, unwanted scattering losses attributed to fabrication imperfections, such as surface roughness, imperfect boundaries of individual scattering elements, and structural disorder, give rise to the considerable reduction of radiative Q factors. In practical photonic structures, the theoretically perfect BICs appear as quasi-BICs with finite Q factors due to out-of-plane coupling with continuous radiating waves.
Subwavelength photonic crystal slabs can exhibit abundant BICs if the lattices possess time-reversal symmetry, up-down mirror symmetry, and proper rotational symmetry <cit.>. Recently, topologically-protected BICs in photonic crystal slabs have attracted particular interest because they are stable <cit.>. Furthermore, current nanofabrication technology is sufficient to implement BICs in photonic crystal slabs <cit.>. Due to their topological nature, accidental BICs can be moved along the band diagram by varying the structural parameters while maintaining the symmetry of the systems <cit.>. It has also been demonstrated that multiple accidental BICs can be merged at the Brillouin zone center, which is also referred to as the lattice Γ point in a photonic band structure, by adjusting structural parameters <cit.>. In the merged states of multiple BICs, undesired out-of-plane scattering losses can be significantly suppressed by increasing the radiative Q factors of nearby eigenstates over a finite range of wavevectors. Merging BICs is important for practical applications because it provides a powerful mechanism to achieve robust ultrahigh-Q resonances that enhance light-matter interactions.
The concept of merging multiple BICs has been introduced recently and has been investigated in only a few studies so far. In this paper, we present analytical and numerical results on the merging and band transition of accidental BICs in one-dimensional (1D) and two-dimensional (2D) photonic lattice slabs. Guided modes become BICs when out-of-plane radiation completely disappears. We present an analytical expression, an overlap integral in the slab region, associated with the out-of-plane radiation of the guided modes, and show that accidental BICs can appear at generic k points in reciprocal space where the value of the overlap integral approaches zero. As the thickness of photonic lattice slabs increases, accidental BICs gradually move downward along the upper dispersion curve and eventually meet at the lattice Γ point. As the thickness is further increased, the merged state of BICs transitions from the upper to lower band edges at the lattice Γ point. The BICs move down and away from each other with further increase in thickness. Merging thicknesses, at which multiple accidental BICs merge at the second-order Γ point, are calculated through the coupled-mode analysis. Our coupled-mode analysis could be applied to achieve ultrahigh-Q resonances in diverse 1D and 2D photonic lattices made of materials with different dielectric constants.
§ MERGING AND BAND TRANSITION OF BICS IN 1D PHOTONIC LATTICES
Figure <ref>(a) compares a conventional homogeneous slab waveguide and a representative 1D photonic lattice for studying the merging and band transition of BICs in this study. The waveguide and photonic lattice are surrounded by a surrounding medium with a dielectric constant ϵ_s. The waveguide consists of a homogeneous material with a dielectric constant ϵ_w, and the 1D photonic lattice is composed of high (ϵ_h) and low (ϵ_l) dielectric constant materials. The period is Λ, the width of the high dielectric constant part is ρΛ, and the thickness of the photonic lattice is t. In the homogeneous waveguide with ϵ_w > ϵ_s, guided modes posses purely real eigenfrequencies Ω = Ω_Re and propagate along the waveguide without out-of-plane radiation because they are perfectly protected by the total internal reflection (TIR). As illustrated in Fig. <ref>(b), dispersion curves for waveguide modes should be located in yellow region below the light line. The 1D lattice can also support transverse electric (TE) guided modes because its average dielectric constant ϵ_0=ϵ_l+ρ (ϵ_h-ϵ_l) is larger than ϵ_s. With Δϵ = ϵ_h-ϵ_l > 0, as illustrated in Fig. <ref>(c), all Bloch guided modes can be plotted in the irreducible Brillouin zone and photonic band gaps open at the Bragg condition k_z = jK/2, where k_z is the Bloch wavevector, K=2 π/Λ is the magnitude of the grating vector, and j is an integer. Near the second stop band in the white region, guided modes posses complex eigenfrequencies Ω = Ω_Re+i Ω_Im and exhibit interesting leaky-wave effects such as BICs and guided-mode resonances (GMRs), also known as guided resonances, via zero-order diffraction <cit.>. Guided modes in the yellow region are not associated with the leaky-wave effects because they do not couple with external waves in the radiation continuum, and Bloch modes in the grey region are less practical because they generate unwanted higher-order diffracted waves as well as the desired zero-order diffraction. In general, numerous leaky guided modes coexist in the photonic lattices with a slab geometry and the various modes possess their own dispersion curves, photonic band gaps, and high-Q BICs. In this study, we focus our attention to the BICs in the vicinity of the second stop band (j=2) open by fundamental TE_0 mode, as this simplest case brings out the key properties of the accidental BICs. In fact, most of important studies on the BICs are associated the leaky Bloch modes in the vicinities of the second stop bands. To elucidate the fundamental properties of BICs, we employ a semianalytical dispersion model and rigorous finite-element method (FEM) simulations.
Merging and band transition of BICs are possible because they are topologically protected <cit.>. Figure <ref>(a) illustrates the evolution of the dispersion curves of guided modes in the vicinity of the second stop band under variation of the thickness t. No noticeable changes in the dispersion curves are observed with variations in thickness. The simulated spatial electric field (E_y) distributions in the insets show that regardless of t, upper band edge modes with asymmetric field distributions become nonleaky symmetry-protected BICs. Conversely, lower band edge modes with symmetric field distributions can be either leaky or nonleaky depending on the thickness values. The merging and band transition of accidental BICs can be clearly seen in Fig. <ref>(b), which depicts Q factors as a function of k_z. When t=1.13 Λ, two accidental BICs with -1 charge and one symmetry-protected BIC with +1 charge appear, resulting in three isolated peaks in the Q factor curve of the upper band. Due to the 180^∘ rotational symmetry of the 1D lattice, two accidental BICs appear in pairs at wavevectors k_z=± 0.01995 K simultaneously. As the value of t increases from 1.13 Λ, two accidental BICs gradually move downward along the upper dispersion curve and approach to Γ point where a symmetry-protected BIC is located. When t=1.14489 Λ, two accidental BICs and one symmetry-protected BIC merge at the lattice Γ point, resulting in a single enhanced peak in the Q factor curve. Compared to the isolated symmetry-protected or accidental BICs, the merged state at t=1.14489 Λ exhibits significantly improved radiative Q factors over a wide range of wavevectors. As the value of t further increases to 1.16282 Λ, simultaneous peaks in the Q factor curves are observed at both the upper and lower band edges. The peak in the upper band is attributed to the symmetry-protected BIC, evident from the asymmetric field distributions shown in the inset of Fig. <ref>(a). We propose that the interband transition of the merged state of BICs, induced by the variation of t, leads to the enhanced peak at the lower band edge. Upon further increasing t beyond 1.16282 Λ, the merged state at the Γ point splits into two isolated peaks with -1 charge.
§ COUPLED-MODE ANAYLSIS
In the planar waveguide structures shown in Fig. <ref>(a), the dispersion relations of guided modes, including eigenfrequencies and radiative Q factors, can be obtained by solving the 1D wave equation given by <cit.>
(∂^2/∂ x^2 + ∂^2/∂ z^2 ) E_y(x,z) + ϵ (x,z) k_0^2 E_y(x,z)= 0,
where k_0 denotes the wave number in free space. For the homogeneous waveguide plotted in Fig. <ref>(b), Eq. (<ref>) can be solved analytically and spatial electric field distribution of the TE_0 mode satisfies E_x=0, E_z=0, and
E_y(x,z) = φ(x) e^iβ z,
where φ(x) characterizes the transverse profile of the TE_0 mode and β denotes the propagation constant along the z direction <cit.>. The TE_0 mode can propagate without radiative loss along the waveguide due to TIR.
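For concreteness, the TE_0 dispersion of such a homogeneous symmetric slab follows from the standard transcendental even-mode condition tan(κ t/2) = γ/κ, with κ = √(ϵ_w k_0^2 - β^2) and γ = √(β^2 - ϵ_s k_0^2). The following snippet (our own sketch, with assumed parameter values rather than those of Fig. <ref>) solves it for the fundamental mode:
```python
import numpy as np
from scipy.optimize import brentq

eps_w, eps_s = 4.0, 1.0          # assumed dielectric constants (waveguide, surround)
wavelength, t = 1.55, 0.5        # assumed free-space wavelength and thickness (same units)
k0 = 2 * np.pi / wavelength
V_half = 0.5 * k0 * t * np.sqrt(eps_w - eps_s)   # normalised frequency V/2

def even_mode(u):
    # u = kappa*t/2; the TE0 branch satisfies u*tan(u) = sqrt((V/2)^2 - u^2), 0 < u < pi/2
    return u * np.tan(u) - np.sqrt(V_half**2 - u**2)

u0 = brentq(even_mode, 1e-9, min(np.pi / 2, V_half) - 1e-9)
kappa = 2 * u0 / t
beta = np.sqrt(eps_w * k0**2 - kappa**2)          # propagation constant of the TE0 mode
print(f"effective index n_eff = beta/k0 = {beta / k0:.4f}")
```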
For the photonic lattices in Fig. <ref>(a), Eq. (<ref>) can be solved numerically by expanding the electric field E_y(x,z) as a Bloch form and the periodic dielectric function ϵ (x,z) in a Fourier series <cit.>. Using the current state of computational software and hardware, it is possible to obtain exact solutions of Eq. (<ref>) for certain system parameters. However, relying solely on direct numerical simulations may not be sufficient to fully comprehend the fundamental properties of BICs, even though numerical calculations can provide accurate dispersion relations of leaky guided modes. To gain a deeper understanding of the merging and band transition of BICs, we employ the semianalytical dispersion model introduced by Kazarinov and Henry (KH) <cit.>. In this model, the spatial dielectric function and electric field are approximated as:
ϵ(x,z) = γ_0 + ∑_n=1,2(γ_ne^inKz + γ_-ne^-inKz),
E_y(x,z) = (Ae^iKz + Be^-iKz)φ(x) + Δ E(x,z).
In Eq. (<ref>), the Fourier coefficient γ_n has a zero value outside the periodic layer, and γ_0=ϵ_0. In Eq. (<ref>), A(z)∼exp(ik_z z) and B(z)∼exp(-ik_z z) are the slowly varying envelopes of the two counter-propagating waves, and Δ E(x,z) represents the diffracted wave radiating away from the periodic structure. For symmetric lattices with ϵ(x,z) = ϵ^∗(x,-z) and γ_-n = γ_n, near the second stop band, the eigenfrequencies of guided modes can be written as
Ω(k_z)=Ω_0 + [ -ih_1±√(k_z^2+(h_2+ih_1)^2) ] / (Kh_0) ,
where Ω_0 is the Bragg frequency under vanishing index modulation, and the coupling coefficients h_n=0,1,2 are given by
h_0 = k_0/K∫_-∞^∞γ_0(x) φ(x)φ^*(x) dx,
h_1 = i k_0^4 γ_1^2/2K∫_-t^0∫_-t^0 G(x,x^')φ(x')φ^*(x) dx' dx,
h_2 = k_0^2 γ_2/2K∫_-t^0φ(x)φ^*(x) dx,
where G(x,x') denotes the Green's function for diffracted fields <cit.>. Equation (<ref>) indicates that the leaky stop band with two band edges Ω^b=Ω_0+h_2/(Kh_0) and Ω^a=Ω_0-(h_2+i2h_1)/(Kh_0) opens at k_z = 0. For symmetric lattices, the coupling coefficient h_1 is generally a complex value, while h_0 and h_2 are real values, irrespective of the lattice parameters. With the time dependence of exp(-iΩ t), Bloch modes near the second stop band tend to lose their electromagnetic energy over time because they have complex frequencies. However, one of the band edge modes with a purely real eigenfrequency Ω^b can become a nonleaky symmetry-protected BIC at the lattice's Γ point.
By investigating the dispersion relations in Eq. (<ref>) with the associated coupling coefficients h_0, h_1, and h_2, one can notice that Im[Ω] becomes zero when Re[h_1] approaches zero. With Re[h_1]=0, dispersion relations in Eq. (<ref>) can be rewritten as
Ω(k_z)=Ω_0 + [ Im[h_1] ±√(k_z^2+(h_2 - Im[h_1])^2) ] / (Kh_0) .
Equation (<ref>) shows that accidental BICs with purely real eigenfrequencies can be observed at any generic k point, including the lattice Γ point. Due to their topological nature, these accidental BICs do not destroy but rather move along dispersion curves as the lattice parameters change continuously. In 1D photonic lattices possessing 180^∘ rotational symmetry, two accidental BICs should appear in pairs at k_z = ± |k_BIC| simultaneously. It is important to note that the formation of accidental BICs is independent of the existence of symmetry-protected BICs. Thus, at the asymmetric edge, two accidental BICs and one symmetry-protected BIC can merge as illustrated in Fig. <ref> when Re[h_1] approaches 0 due to a variation in lattice parameters. On the other hand, at the symmetric edge, two accidental BICs can merge with the appropriate lattice parameters. In previous studies <cit.>, it was shown that the position of a symmetry-protected BIC changes from the upper to lower band edge through the band flip, also known as the topological phase transition. This transition can be achieved by varying the value of the parameter ρ. Before the band flip, as shown in Fig. <ref> with ρ=0.35, accidental BICs can merge with a symmetry-protected BIC at the upper band edge. Conversely, after the band flip with appropriate value of ρ, the merged state with a symmetry-protected BIC appears at the lower band edge.
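The role of Re[h_1] can be made concrete with a small numerical sketch (ours, with assumed normalised coupling constants, not fitted to any structure in the figures): evaluating Eq. (<ref>) for a generic h_1 and for a purely imaginary h_1 shows that Im[Ω] vanishes on both branches only in the latter case:
```python
import numpy as np

Omega0, h0, h2 = 1.0, 1.0, 0.02        # assumed normalised values
K = 2 * np.pi
kz = np.linspace(-0.05, 0.05, 201) * K

for h1 in (0.004 + 0.010j, 0.010j):    # generic h1 versus Re[h1] = 0
    root = np.sqrt(kz.astype(complex) ** 2 + (h2 + 1j * h1) ** 2)
    upper = Omega0 + (-1j * h1 + root) / (K * h0)
    lower = Omega0 + (-1j * h1 - root) / (K * h0)
    max_im = max(np.abs(upper.imag).max(), np.abs(lower.imag).max())
    print(f"Re[h1] = {h1.real:+.4f}: max |Im Omega| = {max_im:.2e}")
```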
We now present an analytical expression that is associated with the formation and band transition of accidental BICs. By examining the value of h_1 with the appropriate Green's function <cit.>, we can demonstrate that Re(h_1) approaches zero when the overlap integral given by
Σ =∫_-t^0 e^iμ xφ(x) dx,
approaches zero; see Supplemental Material <cit.> for details. In Eq. (<ref>), μ = √(ϵ_0 k_0^2 - δ^2), and the transverse profile of the TE_0 mode is given by φ(x)= c e^+i α x + c^* e^-i α x, where α =√(ϵ_0 k_0^2 - β^2), and the complex coefficient c is set to satisfy ∫_-∞^∞ |φ(x)|^2dx = 1. As shown in Figs. <ref>(b) and <ref>(c), waveguide modes with a positive (negative) value of δ at β = K + δ correspond to the leaky modes in the upper (lower) dispersion curve of photonic lattices at the Bloch wavevector k_z=|δ|. Merging and band transition of accidental BICs illustrated in Fig. <ref> can be explained by investigating analytical BIC points δ_BIC, where the overlap integral Σ becomes zero, as a function of the thickness t. Figure <ref> illustrates the calculated δ_BIC with Σ=0. In the calculations, the value of t varies from Λ to 1.3 Λ, and β changes in the small discrete step of 10^-5 K. δ_BIC(t), represented by the blue solid line, is defined as the first value at which the value of |Σ| becomes smaller than 10^-5 for a given t. Figure <ref> indicates that as the thickness gradually increases from Λ, δ_BIC monotonically moves downward and reaches δ = 0 (β = K), which corresponds to the second-order Γ point, when the thickness is t_0= 1.15464 Λ. As the thickness increases further, δ_BIC continues to move downward and gets away from the Γ point.
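The calculation behind Fig. <ref> can be reproduced in outline. Since φ(x) is even about the slab centre, Σ vanishes exactly when the real integral ∫_{-t/2}^{t/2} cos(μ s) cos(α s) ds does, so δ_BIC can be located by a one-dimensional root search once the TE_0 dispersion k_0(β) of the effective homogeneous slab is known. The snippet below (our own sketch, with assumed values ϵ_0 = 4 and ϵ_s = 1 rather than the lattice parameters used in the figure) illustrates the procedure for a single thickness:
```python
import numpy as np
from scipy.optimize import brentq

eps_0, eps_s, Lam = 4.0, 1.0, 1.0      # assumed average and cladding dielectric constants
K = 2 * np.pi / Lam

def k0_of_beta(beta, t):
    """TE0 free-space wavenumber at propagation constant beta (even-mode branch)."""
    lo = beta / np.sqrt(eps_0) + 1e-12
    hi = min(beta / np.sqrt(eps_s),
             np.sqrt(((np.pi / t) ** 2 + beta ** 2) / eps_0)) - 1e-12
    def g(k0):
        alpha = np.sqrt(eps_0 * k0**2 - beta**2)
        gamma = np.sqrt(beta**2 - eps_s * k0**2)
        return np.tan(alpha * t / 2) - gamma / alpha
    return brentq(g, lo, hi)

def sigma(delta, t):
    # overlap of exp(i mu x) with the TE0 profile cos(alpha(x + t/2)) over the slab,
    # up to an irrelevant normalisation and overall phase
    beta = K + delta
    k0 = k0_of_beta(beta, t)
    alpha = np.sqrt(eps_0 * k0**2 - beta**2)
    mu = np.sqrt(eps_0 * k0**2 - delta**2)
    T = t / 2
    return np.sin((mu - alpha) * T) / (mu - alpha) + np.sin((mu + alpha) * T) / (mu + alpha)

t = 1.10 * Lam                          # a thickness below the merging value
delta_bic = brentq(lambda d: sigma(d, t), 1e-4 * K, 0.2 * K)
print(f"t = {t:.2f} Lam: Sigma = 0 at delta_BIC = {delta_bic / K:.4f} K")
```
Scanning t in the same way traces out the analytical curve δ_BIC(t), which reaches the Γ point at the merging thickness t_0.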
The analytical BICs in Fig. <ref> and the dispersion curve in Fig. <ref>(b) indicate that the eigenfrequencies of accidental BICs decrease continuously with increasing thickness. However, the effects of dielectric constant modulation, such as band folding and photonic band gap, cannot be observed in the analytical BICs represented by the blue line in Fig.<ref>, as the overlap integral Σ in Eq.(<ref>) is determined by the transverse mode profile φ(x) and the Green's function for the unmodulated homogeneous waveguide slab of thickness t. To verify whether accidental BICs indeed occur as Σ approaches zero, we conducted additional investigations of the BIC points k_BIC using FEM simulations. The results are represented as magenta circles in Fig. <ref>. Due to the mirror symmetry of the photonic lattice, FEM-simulated accidental BICs with infinite Q factors appear in pairs at ± |k_BIC|. When the thickness (t < t_0) is far from t_0 = 1.15464 Λ, the values of δ_BIC and k_BIC agree well. However, as t approaches t_0, the FEM-simulated values of k_BIC slightly deviate from the analytic values of δ_BIC. The deviation |k_BIC - δ_BIC| increases until t approaches t_u = 1.14489 Λ, where k_BIC reaches the Γ point. As t further increases beyond t_u, the BIC point is not found for a while, but when t_l = 1.16282 Λ, the BIC point appears again at the Γ point. The deviation |k_BIC - δ_BIC| decreases with additional increases in thickness (t > t_l), and k_BIC agrees well with δ_BIC if t is far away from t_l. In the vicinity of the second-order Γ point, photonic band gaps open by the coupling between two counterpropagating waves <cit.>. Since the coupling strength is maximum at k_z=0 and decreases rapidly as the Bloch wavevector moves away from the Γ point, the deviation |k_BIC - δ_BIC| could be noticeable only around the Γ point. In the FEM simulations, there exist a thickness range, t_u < t< t_l, where accidental BIC can not be found due to the presence of a photonic band gap. At t=t_u and t=t_l, two accidental BICs meet at the upper and lower band edges, respectively, resulting in the formation of merged BIC states. A symmetry-protected BIC can be included in the merged state at the upper or lower band depending on the value of lattice parameter ρ, not t.
§ ULTRAHIGH-Q RESONANCES ROBUST TO FABRICATION IMPERFECTIONS
Our coupled-mode analysis reveals that the value of Σ can be tuned to zero at the second-order Γ point of photonic lattices by adjusting the lattice parameters. This tuning can cause accidental BICs to merge, resulting in interesting ultrahigh-Q resonances. The overlap integral Σ in Eq. (<ref>) mathematically depends on the lattice parameters t, ϵ_0, and ϵ_s. However, thickness variation is essential to achieve merged states of BICs. While positions of accidental BICs can be substantially moved along dispersion curves with thickness variation, BICs only move slightly with variations of ϵ_0 and ϵ_s. In conventional experimental configurations at wavelengths around λ = 1550 nm, the value of ϵ_s typically ranges from 1 to 2.25, and ϵ_0 varies from 3 to 12. For photonic lattices with these typical values of ϵ_0 and ϵ_s, it is meaningful to investigate the normalized merging thickness t_0/Λ, where Σ = 0. Red lines in Figs. <ref>(a) and <ref>(b) display the calculated t_0/Λ as a function of ϵ_0 for ϵ_s=1 and ϵ_s=2.25, respectively. FEM-simulated merging thicknesses t_u/Λ and t_l/Λ, where accidental BICs merge at the upper and lower band edges, respectively, are also displayed for comparison. Figure <ref> demonstrates that t_0/Λ obtained from Eq. (<ref>) could provide useful reference for achieving merged states of BICs for ultrahigh-Q resonances. In Fig. <ref>, the values of t_0/Λ change from 1.14588 to 1.16981 when ϵ_s = 1, and from 1.08116 to 1.16066 when ϵ_s = 2.25. As the value of ϵ_0, representing the averaged dielectric constant in the waveguide region, decreases, the calculated and simulated merging thicknesses decrease. Figure <ref> also shows that a higher dielectric constant ϵ_s in the cladding region results in a decrease in merging thicknesses. Changes in ϵ_0 and ϵ_s have a limited impact on the analytical and FEM-simulated merging thicknesses. The dependency of merging thicknesses on the dielectric constants ϵ_0 and ϵ_s may be attributed to the strong confinement of guided modes in the waveguide and the scaling property of Maxwell's equations <cit.>.
§ MERGING AND BAND TRANSITION OF BICS IN 2D PHOTONIC LATTICES
We also investigate the merging and band transition of BICs in 2D photonic crystals with a thin-film geometry. As illustrated in Fig. <ref>(a), we use a simple 2D periodic slab composed of square arrays of square-shaped high dielectric constant (ϵ_h) materials in the background medium with a low dielectric constant (ϵ_l). Here, the averaged dielectric constant is given by ϵ_0=ϵ_l + ρ^2 Δϵ. FEM-simulated dispersion curves plotted in Fig. <ref>(b) show that there are four bands, A, B, C, and D, in the vicinity of the second-order Γ point. Additionally, the spatial electric field (|𝐄|) distributions of the four band edge modes and the radiative Q factors displayed in Figs. <ref>(c) and <ref>(d), respectively, show that the band edge modes in A and B are nonleaky symmetry-protected BICs. Conversely, the band edge modes in C and D are degenerate and generally radiate out of the photonic crystal slab. The merging and band transition of accidental BICs can be clearly seen in Fig. <ref>(e), which depicts the evolution of radiative Q factors as the slab thickness varies. The radiative Q factors are simulated along the ΓX and ΓM directions in the Brillouin zone. In this study, we investigate the merging and band transition of BICs in the highest band A and the lowest band D, for convenience. When t=1.161 Λ, two accidental BICs with ±1 charge and one symmetry-protected BIC with -1 charge appear, resulting in three distinct peaks in the Q factor curve in band A. Accidental BICs with +1 and -1 charge approach the symmetry-protected BIC along the ΓX and ΓM directions, respectively. Due to the C_4 rotational symmetry of the 2D lattice, four sets of accidental BICs with +1 charge and four sets with -1 charge appear simultaneously. As the value of t increases from 1.161 Λ, the eight accidental BICs gradually move downward along band A and approach the Γ point where a symmetry-protected BIC is located. When t=1.176 Λ, the eight accidental BICs and one symmetry-protected BIC merge at the lattice Γ point, resulting in a noticeably enhanced peak in the Q factor curve. When t=1.18072 Λ, two peaks are observed in the Q factor curves at the edges of band A and band D. The peak in band A is attributed to the symmetry-protected BIC, while the enhanced peak in band D arises from the merged state of accidental BICs. As t is increased beyond 1.18072 Λ, the merged state at the Γ point splits into isolated peaks with +1 and -1 charges, and these peaks move downward along band D.
In the FEM simulations with 2D photonic crystal slabs, the merging of BICs was achieved when t=1.176 Λ and t=1.18072 Λ, which are close to the analytical merging thickness of t_0/Λ = 1.167 from Fig. <ref>(a). Furthermore, Jin et al. have recently demonstrated experimentally that on-chip photonic resonances can be made robust against fabrication imperfections by combining nine BICs in 2D photonic crystal slabs <cit.>. The merged state of BICs was achieved at a normalized thickness of h/a=1.13, which is also close to t_0/Λ =1.167. Hence, it is reasonable to conclude that the analytical merging thickness obtained from Eq. (<ref>) could also be beneficial for achieving merged states of BICs in practical 2D photonic crystal slabs. Compared to the merged states in a 1D photonic lattice plotted in Fig. <ref>(b), the merged states in a 2D lattice exhibit significantly improved radiative Q factors across a wide range of wavevectors. This improvement is due to the convergence of a larger number of BICs at the center of the Brillouin zone in 2D configurations. Jin et al. showed that merging eight accidental BICs and one symmetry-protected BIC in 2D photonic crystal slabs improves the scaling property from Q ∝ 1/k_z^2 to Q ∝ 1/k_z^6.
§ CONCLUSION
In conclusion, we analytically and numerically investigated the formation, merging, and band transition of accidental BICs in 1D and 2D photonic crystal slabs. Using a simple coupled-mode analysis, we derived an analytical expression for the overlap integral associated with the out-of-plane radiation of guided modes. Accidental BICs can emerge at generic k points, including the lattice Γ point, where the value of the overlap integral approaches zero. Due to the 180^∘ (90^∘) rotational symmetry of 1D (2D) lattices, two (eight) accidental BICs appear in the momentum space simultaneously. As the thickness of the slab increases, the accidental BICs move downward along the dispersion curve, resulting in a merged state at the edges of the second stop band. As the thickness increases further, the merged state of BICs transitions from the upper to the lower band edge. With a further increase in thickness, the merged BICs at the lower band edge split into multiple isolated ones and move away from each other. We presented the theoretical merging thickness t_0/Λ, the ratio between the thickness and period of photonic crystal slabs at which multiple BICs merge at the second-order Γ point. The merging of multiple BICs is important for practical applications because it provides a powerful mechanism to achieve robust ultrahigh-Q resonances capable of enhancing light-matter interactions. Our coupled-mode analysis and FEM-simulated results are helpful for achieving topologically enabled ultrahigh-Q resonances that are robust to fabrication imperfections in diverse 1D and 2D photonic lattices made of various dielectric materials.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Acknowledgements
This research was supported by the grant from the National Research Foundation of Korea, funded by the Ministry of Science and ICT (No. 2022R1A2C1011091).
Conflict of Interest
The authors declare no conflicts of interest.
Data Availability Statement
Data underlying the results in this paper may be obtained from the authors upon reasonable request.
Keywords
bound states in the continuum, guided-mode resonances, topologically enabled ultrahigh-Q
Marinica2008 D. C. Marinica, A. G. Borisov, and S. V. Shabanov, “Bound States in the continuum in photonics,” Phys. Rev. Lett. 100, 183902 (2008).
Plotnik2011 Y. Plotnik, O. Peleg, F. Dreisow, M. Heinrich, S. Nolte, A. Szameit, and M. Segev, “Experimental observation of optical bound states in the continuum,” Phys. Rev. Lett. 107, 183901 (2011).
Koshelev2019 K. Koshelev, G. Favraud, A. Bogdanov, Y. Kivshar, and A. Fratalocchi, “Nonradiating photonics with resonant dielectric nanostructures,” Nanophotonics 8, 725–745 (2019).
Hsu2016 C. W. Hsu, B. Zhen, A. D. Stone, J. D. Joannopoulos, and M. Soljačić, “Bound states in the continuum” Nature Reviews Materials 1, 1–13 (2016).
YXGao2017 Y.-X. Xiao, G. Ma, Z.-Q. Zhang, and C. T. Chan, “Topological subspace-induced bound state in the continuum,” Phys. Rev. Lett. 118(16), 166803 (2017).
Minkov2018 M. Minkov, I. A. Williamson, M. Xiao, and S. Fan, “Zero-index bound states in the continuum,” Phys. Rev. Lett. 121, 263901 (2018).
XGao2019 X. Gao, B. Zhen, M. Soljačić, H. Chen, and C. W. Hsu, “Bound States in the Continuum in Fiber Bragg Gratings,” ACS Photonics 6, 2996–3002 (2019).
SGLee2019-1 S.-G. Lee and R. Magnusson, “Band flips and bound-state transitions in leaky-mode photonic lattices,” Phys. Rev. B 99(4), 045304 (2019).
Kodigala2017 A. Kodigala, T. Lepetit, Q. Gu, B. Bahari, Y. Fainman, and B. Kanté, “Lasing action from photonic bound states in continuum,” Nature 541, 196–199 (2017).
STHa2018 S. T. Ha, Y. H. Fu, N. K. Emani, Z. Pan, R. M. Bakker, R. Paniagua-Domínguez, and A. I. Kuznetsov, “Directional lasing in resonant semiconductor nanoantenna arrays” Nat. Nanotechnol. 13, 1042–1047 (2018).
XYin2023 X. Yin, T. Inoue, C. Peng, and S. Noda, “Topological Unidirectional Guided Resonances Emerged from Interband Coupling," Phys. Rev. Lett. 130(5), 056401 (2023).
JWang2020 J. Wang, M. Clementi, M. Minkov, A. Barone, J.-F. Carlin, N. Grandjean, D. Gerace, S. Fan, M. Galli, and R. Houdre, “Doubly resonant second-harmonic generation of a vortex beam from a bound state in the continuum," Optica 7, 1126–1132 (2020).
MMinkov2019 M. Minkov, D. Gerace, and S. Fan, “Doubly resonant χ^(2) nonlinear photonic crystal cavity based on a bound state in the continuum," Optica 6(8), 1039–1045 (2019).
Koshelev2018 K. Koshelev, S. Lepeshov, M. Liu, A. Bogdanov, and Y. Kivshar, “Asymmetric metasurfaces with high-Q resonances governed by bound states in the continuum,” Phys. Rev. Lett. 121(19), 193903 (2018).
Abujetas2019 D. R. Abujetas, N. van Hoof, S. ter Huurne, J. G. Rivas, and J. A. Sánchez-Gil, “Spectral and temporal evidence of robust photonic bound states in the continuum on terahertz metasurfaces,” Optica 6(8), 996–1001 (2019).
Kupriianov2019 A. S. Kupriianov, Y. Xu, A. Sayanskiy, V. Dmitriev, Y. S. Kivshar, and V. R. Tuz,“Metasurface engineering through bound states in the continuum,” Phys. Rev. Appl. 12(1), 014024 (2019).
Azzam2018 S. I. Azzam, V. M. Shalaev, A. Boltasseva, and A. V. Kildishev, “Formation of bound states in the continuum in hybrid plasmonic-photonic systems,” Phys. Rev. Lett. 121(25), 253901 (2018).
BinAlam2021 M. S. Bin-Alam, O. Reshef, Y. Mamchur, M. Z. Alam, G. Carlow, J. Upham, B. T. Sullivan, J.-M. Ménard, M. J. Huttunen, R. W. Boyd, and K. Dolgaleva, “Ultra-high-Q resonances in plasmonic metasurfaces," Nat. Commun. 12, 974 (2021).
YZhou2022 Y. Zhou, Z. Guo, X. Zhao, F. Wang, Z. Yu, Y. Chen, Z. Liu, S. Zhang, S. Sun, and X. Wu, “Dual-quasi bound states in the continuum enabled plasmonic metasurfaces," Adv. Opt. Mater. 10, 2200965 (2022).
LNi2016 L. Ni, Z. Wang, C. Peng, and Z. Li, “Tunable optical bound states in the continuum beyond in-plane symmetry protection,” Phys. Rev. B 94, 245148 (2016).
SGLee2020-1 S.-G. Lee, S. H. Kim, and C. S. Kee, “Bound states in the continuum (BIC) accompanied by avoided crossings in leaky-mode photonic lattices," Nanophotonics 9(14), 4374–4380 (2020).
XGao2016 X. Gao, C. W. Hsu, B. Zhen, X. Lin, J. D. Joannopoulos, M. Soljačić, and H. Chen, “Formation mechanism of guided resonances and bound states in the continuum in photonic crystal slabs,” Scientific Reports 6, 31908 (2016).
Gansch2016 R. Gansch, S. Kalchmair, P. Genevet, T. Zederbauer, H. Detz, A. M. Andrews, W. Schrenk, F. Capasso, M. Lončar, and G. Strasser, “Measurement of bound states in the continuum by a detector embedded in a photonic crystal,” Light: Sci. Appl. 5(9), e16147 (2016).
Yang2014 Y. Yang, C. Peng, Y. Liang, Z. Li, and S. Noda, “Analytical perspective for bound states in the continuum in photonic crystal slabs,” Phys. Rev. Lett. 113(3), 037401 (2014).
SGLee2021-1 S.-G. Lee, S. H. Kim, and C. S. Kee, “Metasurfaces with Bound States in the Continuum Enabled by Eliminating First Fourier Harmonic Component in Lattice Parameters," Phys. Rev. Lett. 126(1), 013601 (2021).
Doeleman2018 H. M. Doeleman, F. Monticone, W. Hollander, A. Alù, and A. F. Koenderink, “Experimental observation of a polarization vortex at an optical bound state in the continuum,” Nat. Photonics 12, 397–401 (2018).
BWang2020 B. Wang, W. Liu, M. Zhao, J. Wang, Y. Zhang, A. Chen, F. Guan, X. Liu, L. Shi, and J. Zi, “Generating optical vortex beams by momentum-space polarization vortices centered at bound states in the continuum.” Nat. Photonics 14, 623–628 (2020).
XYin2020 X. Yin, J. Jin, M. Soljačić, C. Peng, and B. Zhen, “Observation of topologically enabled unidirectional guided resonances,” Nature 580, 467–471(2020).
BZhen2014 B. Zhen, C. W. Hsu, L. Lu, A. D. Stone, and M. Soljačić, “Topological nature of optical bound states in the continuum” Phys. Rev. Lett. 113(25), 257401 (2014).
MKang2022-2 M. Kang, S. Zhang, M. Xiao, and H. Xu, “Merging bound states in the continuum at off-high symmetry points," Phys. Rev. Lett. 126, 117402 (2021).
JJin2019 J. Jin, X. Yin, L. Ni, M. Soljačić, B. Zhen, and C. Peng, “Topologically enabled ultrahigh-Q guided resonances robust to out-of-plane scattering,” Nature 574, 501–504 (2019).
MKang2022-1 M. Kang, L. Mao, S. Zhang, M. Xiao, H. Xu, and C. T. Chan, “Merging bound states in the continuum by harnessing higher-order topological charges," Light Sci. Appl. 11, 228 (2022).
YDing2007 Y. Ding and R. Magnusson, “Band gaps and leaky-wave effects in resonant photonic-crystal waveguides,” Opt. Express 15(2), 680–694 (2007).
SGLee2019-2 S.-G. Lee and R. Magnusson, “Band dynamics of leaky-mode photonic lattices,” Optics Express 27, 18180 (2019).
Yariv1984 A. Yariv and P. Yeh, Optical Waves in Crystals (Wiley, New York, 1984).
Agrawal2004 G. P. Agrawal, Lightwave Technology: Components and Devices (John Wiley & Sons, New Jersey, 2004).
Inoue2004 K. Inoue and K. Ohtaka, Photonic Crystals: Physics, Fabrication and Applications (Springer-Verlag Berlin Heidelberg, 2004).
Kazarinov1985 R. F. Kazarinov and C. H. Henry, “Second-order distributed feedback lasers with mode selection provided by first-order radiation loss”, IEEE J. Quant. Electronics 21, 144–150 (1985).
Supplemental See Supplemental Material for the proof that Re[h_1]=0 when Σ=0.
Joannopoulos1995 J. D. Joannopoulos, R. D. Meade, and J. N. Winn, Photonic Crystals: Molding the Flow of Light, (Princeton University, 1995).
|
http://arxiv.org/abs/2307.00864v1
|
20230703090628
|
Ground state EIT cooling of $^{171}$Yb$^+$ ion in polychromatic field
|
[
"D. S. Krysenko",
"O. N. Prudnikov",
"A. V. Taichenachev",
"V. I. Yudin",
"S. V. Chepurov",
"S. N. Bagaev"
] |
physics.atom-ph
|
[
"physics.atom-ph"
] |
Institute of Laser Physics, 630090, Novosibirsk, Russia
Novosibirsk State Technical University, 630073, Novosibirsk,
Russia
oleg.nsu@gmail.com
Institute of Laser Physics, 630090, Novosibirsk, Russia
Novosibirsk State University, 630090, Novosibirsk, Russia
Institute of Laser Physics, 630090, Novosibirsk, Russia
Novosibirsk State University, 630090, Novosibirsk, Russia
Institute of Laser Physics, 630090, Novosibirsk, Russia
Novosibirsk State University, 630090, Novosibirsk, Russia
Novosibirsk State Technical University, 630073, Novosibirsk,
Russia
Institute of Laser Physics, 630090, Novosibirsk, Russia
Institute of Laser Physics, 630090, Novosibirsk, Russia
This work proposes a scheme for deep laser cooling of ^171Yb^+. The cooling is based on the effect of electromagnetically induced transparency (EIT) in a polychromatic field with three frequency components resonant with optical transitions of the ^2S_1/2→ ^2P_1/2 line. Deep cooling down to the ground motional state in a trap allows for significant suppression of the second-order Doppler shift in frequency standards. Moreover, there is no need to use a magnetic field, which is required for Doppler cooling of ^171Yb^+ in a field with two frequency components. Cooling without the use of a magnetic field is important for deep suppression of quadratic Zeeman shifts of clock transitions caused by uncontrolled residual magnetic fields.
Ground state EIT cooling of ^171Yb^+ ion in polychromatic field
S. N. Bagaev
August 1, 2023
===============================================================
§ INTRODUCTION
Laser cooling is a necessary step for modern experiments with quantum systems based on neutral atoms and ions that have wide applications, including the study of fundamental properties of cold atomic Bose and Fermi condensates <cit.>, implementation of quantum logic elements and quantum computing <cit.>. The development of modern frequency standards using cold atoms <cit.> and ions <cit.>
has become highly relevant. The achieved level of accuracy in optical frequency standards, Δν/ν < 10^-18, opens up new horizons for modern fundamental research, such as the study of the effects of Earth's gravitation on the space-time continuum <cit.>, tests of fundamental constants <cit.>, verification of the general theory of relativity and of the Lorentz invariance of space <cit.>, the search for dark matter <cit.>, etc.
To achieve a high precision level in frequency standards, it is necessary to take into account systematic shifts of atomic levels of different natures. Therefore, work aimed at suppressing these shifts is highly relevant. For example, in the context of the ^171Yb^+ ion-based frequency standard, further progress can be linked to the control and suppression of systematic shifts caused by residual magnetic fields, black-body radiation (BBR) shifts, and shifts related to the quadratic Doppler effect <cit.>. However, the main challenge here is that the ^2S_1/2→ ^2P_1/2 transition used for laser cooling is not closed, which requires the use of a laser field with at least two frequency components <cit.> (see Fig. <ref>). In this case, a relatively large magnetic field of ∼1-10 G is required to destroy the coherent population trapping (CPT) effect that appears at the ^2S_1/2(F=1) state during the cooling stage. Laser cooling here can reach a minimum temperature corresponding to the Doppler limit k_B T_D ≃ħγ/2, where γ is the natural linewidth of the optical transition ^2S_1/2→ ^2P_1/2.
Hysteresis effects during the switching off of the magnetic field required for cooling create certain difficulties in minimizing the residual magnetic field and keeping it constant over the various cycles of cooling and clock operation. Similar difficulties arise when implementing quantum logic and quantum computing elements based on ^171Yb^+ ions <cit.>.
In this work, we propose an alternative method of laser cooling that makes it possible to eliminate the use of a magnetic field and, in contrast to the standard approach <cit.>, allows atoms to be cooled significantly below the Doppler limit T_D, which enables significant suppression of the second-order Doppler shift in the frequency standard.
§ DEEP LASER COOLING OF ^171YB^+
For laser cooling of the ^171Yb^+ ion, a light field with at least two frequency components has to be used <cit.> (see Fig. <ref>). Here, one of the frequency components is close to the resonance of the ^2S_1/2 (F=0) → ^2P_1/2(F=1) transition, and the other one to the ^2S_1/2 (F=1) → ^2P_1/2(F=0) transition. In this case, an additional magnetic field is required to destroy the CPT effect arising at the ^2S_1/2 (F=1) level. Let us note that in this scheme, laser cooling arises as a result of the action of the dissipative Doppler force on a moving ion, which leads to cooling down to the temperature of the Doppler limit <cit.>. Deeper laser cooling, down to the ground motional state, can be achieved under conditions of resolved sideband cooling <cit.>, when the ion is localized in a trap on scales smaller than the wavelength (the Lamb-Dicke parameter η = √(E_R/ħ ω_osc)≪ 1, where E_R = ħ^2 k^2/2M is the recoil energy and M is the ion mass), and the ion oscillation frequency in the trap is large enough, ω_osc≫γ (i.e., transitions between different motional states of the ion have to be spectrally resolvable). Let us note that these conditions are hard to fulfill for the ^171Yb^+ ion's ^2S_1/2→ ^2P_1/2 cooling transition, where the natural linewidth is γ/2π = 23 MHz and the typical oscillation frequency in the trap is ω_osc/2π≃ 400 - 600 kHz.
Laser cooling using the electromagnetically induced transparency (EIT) technique <cit.> is a further development of deep laser cooling that does not require ω_osc≫γ. To implement it, a three-level Λ system is required, in which transitions are induced by a pair of light waves. When the detunings of the light waves are equal, the atoms are pumped into a dark state that does not interact with the field, which allows substantial suppression of the heating processes associated with spontaneous photon emission.
Furthermore, the presence of a narrow EIT resonance, with a width much smaller than linewidth γ of excited state, enables cooling via two-photon transitions between different motional states of the ground levels in the Λ-scheme, similar to the Raman cooling technique <cit.>.
The choice of the interaction scheme for implementing EIT cooling of the ^171Yb^+ ion is a non-trivial task. In the work by <cit.>, it was proposed to use three frequency components near the resonance of the optical transition ^2S_1/2 (F=1) → ^2P_1/2(F=0), known as the double EIT scheme <cit.>. In this case, each frequency component drives transitions between different Zeeman levels of the ground state |^2S_1/2, F=1, μ = 0, ± 1 ⟩ and the excited state |^2P_1/2, F=0, μ = 0 ⟩. In addition, to pump the ^2S_1/2 (F=1) state, as well as to return to a closed interaction cycle, it is necessary to use an additional laser field resonant with the ^2S_1/2 (F=0) → ^2P_1/2(F=1) transition, which significantly complicates the overall laser cooling scheme.
In this paper, to implement deep EIT cooling, as well as the preliminary Doppler cooling preceding it, we propose to use a polychromatic field of running waves with three frequencies
E( r,t) = Re{∑_n=1,2,3 E_n e^i k_n r e^-iω_n t},
where E_n are complex vectors defining the polarization and amplitude of each frequency component of the field n = 1,2,3.
For definiteness, let us assume that the field components have linear polarizations (Fig. <ref>(a)). The wave vectors of the E_1 and E_3 components are directed along the x-axis, and their polarization vectors are along the z-axis. The wave vector of the E_2 component lies in the xz-plane at some angle θ to the x-axis. The orientation of the polarization vector E_2 may vary. The frequencies of the field components are chosen so that they provide light-induced transitions between different hyperfine components of the ^2S_1/2 and ^2P_1/2 levels according to the scheme shown in Fig. <ref> (b).
§.§ Doppler cooling
As the EIT ground-state cooling technique is applicable to ions already prepared at low temperatures <cit.>, preliminary Doppler cooling is required. For the considered field configuration (<ref>), effective Doppler cooling can be realized with linear codirectional polarizations of the light field components E_1 || E_2 || E_3. In <cit.> we carried out a detailed analysis of laser cooling in such a field. It was shown that the minimum temperature corresponds to the Doppler cooling limit
k_B T = ħγ/3 ,
and is achieved for low intensities, under the condition that the Rabi frequencies of the frequency components are equal, Ω_1 = Ω_2 = Ω_3 (Ω_n = | E_n|d/ħ, where d is the dipole moment of the ^2S_1/2→ ^2P_1/2 transition), and the detunings of each frequency component are chosen as
δ_1=δ_2=δ_3 = -γ/2 .
Here, the detunings are defined as δ_n = ω_n-ω_0n, the difference between the frequency of the field component E_n and the frequency of the corresponding resonant transition ω_0n (see Fig. <ref>(b)).
§.§ Ground state EIT cooling
To implement the second stage of deep laser cooling in the proposed field configuration of Fig. <ref>, we direct the polarization vector E_2 along the y axis so that E_2 ⊥ E_1,3. In this case, the interaction with the light components forms a double Λ scheme for the Zeeman sublevels of the hyperfine states ^2S_1/2 and ^2P_1/2 (see Fig. <ref>), which allows the EIT ground state cooling technique to be implemented. For EIT cooling in the considered scheme, it is necessary to choose the driving field component E_1 to be blue-detuned, δ_1 > 0. The intensity of this component has to be chosen so that the dressed states corresponding to the bare Zeeman sublevels |^2S_1/2, F=1 , m=± 1 ⟩ are light-shifted upwards by an ac Stark shift equal to the trap frequency ω_osc. The field components E_2 and E_1 drive two-photon transitions between the ground states |^2S_1/2, F=1 , m=± 1 ⟩ and |^2S_1/2, F=1 , m=0 ⟩, and for the detuning δ_2 = δ_1, efficient EIT cooling down to the ground motional state can be achieved, similar to the three-level Λ atomic system <cit.>. The third field frequency component E_3 here plays the role of optical pumping, depopulating the state |^2S_1/2, F=1 , m=0 ⟩.
The dynamics of the average vibrational quantum number N̅ = ∑_n=0^∞ n P_n (where P_n is the population of the ion's n-th motional state in the trap) is determined by the rate balance equation <cit.>
d/dtN̅ = -(A_- - A_+ )N̅ +A_+ ,
where the rate coefficients
A_± = η^2 ( W_0 + W_∓) ,
are given by the spontaneous decay rates, and, similarly to <cit.>, W = γ ρ^ee, where ρ^ee is the total population of the excited state ^2P_1/2:
W_0 = W|_δ_2=δ_1 ,
is the spontaneous decay rate at δ_2=δ_1, and
W_± = W|_δ_2=δ_1 ±ω_osc
are the spontaneous decay rates at δ_2=δ_1 ±ω_osc. Ion cooling is achieved under the condition A_->A_+. In this case, the stationary solution of equation (<ref>) has the form:
N̅_f = A_+/A_–A_+=W_0+W_-/W_+-W_- ,
and determines the minimum laser cooling temperature of the ion in the trap.
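To make the cooling dynamics of Eq. (<ref>) concrete, the following sketch integrates the rate balance equation for N̅ and compares the result with the closed-form steady state of Eq. (<ref>); the rates W_0 and W_∓ are placeholders that would in practice be evaluated from Eq. (<ref>).

```python
def cool(W0, Wminus, Wplus, eta, N0, t_end, dt=1e-6):
    """Euler integration of dN/dt = -(A_- - A_+) N + A_+ (Eq. ref).

    A_± = eta^2 (W_0 + W_∓); cooling requires A_- > A_+.
    W0, Wminus, Wplus are the scattering rates at delta_2 = delta_1
    and delta_2 = delta_1 -/+ omega_osc (illustrative values only).
    """
    A_plus = eta**2 * (W0 + Wminus)
    A_minus = eta**2 * (W0 + Wplus)
    N = N0
    for _ in range(int(t_end / dt)):
        N += dt * (-(A_minus - A_plus) * N + A_plus)
    N_steady = A_plus / (A_minus - A_plus)   # closed-form steady state
    return N, N_steady

# Example with illustrative (not physical) rates, starting from N0 = 20
# reached after Doppler pre-cooling:
# print(cool(W0=1e2, Wminus=1e2, Wplus=1e5, eta=0.1, N0=20, t_end=0.05))
```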
The Lamb-Dicke parameter for two-photon transitions is determined by the difference between the wave vectors k_1 and k_2
η = |( k_1- k_2 ) · e_m | √(ħ/2Mω_osc) ,
where e_m denotes the unit vector describing the oscillation
direction of the mode to be cooled. Thus, by varying the angle θ between the wave vectors k_1 and k_2 (see Fig.<ref>), as well as the overall direction of the waves relative to the principal axes of the trap, it is possible to control the laser cooling rate for different motional modes.
The relations for the spontaneous decay rates W can be obtained by solving the density matrix equations (Bloch equations) for the Yb ion <cit.>.
For the field configuration of Fig. <ref>, where E_2 ⊥ E_1,3, we get the following expression for the total population of the excited state ^2P_1/2 and, correspondingly, W:
W = γ108 Δ^2 Ω_1^2 Ω_2^2/D ,
where Δ = δ_2 - δ_1,
D = 2 Ω_1^6 +
( 13/2Ω_2^2-48 δ_2 Δ) Ω_1^4 + (7 Ω_2^4 +72 Δ^2 [3 Ω_2^2 +γ^2/4 +4 δ_2^2 ] ) Ω_1^2
+ 5/2 Ω_2^2 (Ω_2^4+24 δ_1 Δ Ω_2^2 + 288/5 Δ^2 (γ^2+4 δ_1^2)) +108 Δ^2 Ω_1^2Ω_2^2/S_3 .
Here S_3 = Ω_3^2/(γ^2 + 4 δ_3^2) is the saturation parameter of the E_3 component. For Ω_1 ≫Ω_2 and δ_1 ≫γ (δ_1 >0), the scattering rate has the form shown in Fig. <ref>, which is typical of EIT laser cooling <cit.>.
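The closed-form scattering rate given above is easy to evaluate numerically. The sketch below computes W as a function of δ_2 at fixed δ_1, which makes the CPT dip at δ_2 = δ_1 and the narrow two-photon peak directly visible; the parameter values in the commented example are illustrative only.

```python
import numpy as np

def scattering_rate(delta2, delta1, gamma, O1, O2, O3, delta3):
    """Evaluate W(delta_2) from Eq. (ref) for the E2 ⟂ E1,3 configuration."""
    Dlt = delta2 - delta1                       # two-photon detuning Δ
    S3 = O3**2 / (gamma**2 + 4 * delta3**2)     # saturation parameter of E3
    D = (2 * O1**6
         + (13 / 2 * O2**2 - 48 * delta2 * Dlt) * O1**4
         + (7 * O2**4 + 72 * Dlt**2 * (3 * O2**2 + gamma**2 / 4
                                       + 4 * delta2**2)) * O1**2
         + 5 / 2 * O2**2 * (O2**4 + 24 * delta1 * Dlt * O2**2
                            + 288 / 5 * Dlt**2 * (gamma**2 + 4 * delta1**2))
         + 108 * Dlt**2 * O1**2 * O2**2 / S3)
    return gamma * 108 * Dlt**2 * O1**2 * O2**2 / D

# Illustrative scan (all quantities in units of gamma): blue-detuned drive,
# weak probe; W vanishes at delta2 = delta1 (CPT) and peaks near the
# two-photon resonance.
# gamma = 1.0; delta1 = 10.0; O1 = 8.0; O2 = 0.5; O3 = 1.0; delta3 = -0.5
# d2 = np.linspace(9.0, 12.0, 2001)
# W = [scattering_rate(x, delta1, gamma, O1, O2, O3, delta3) for x in d2]
```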
As can be seen from Fig. <ref>, the condition δ_2 = δ_1 corresponds to CPT. The narrow peak at the detuning δ_2 = ( √(Ω_1^2/3+δ_1^2) + δ_1 )/2 corresponds to the two-photon resonance. Thus, the condition under which the peak position corresponds to δ_2=δ_1 + ω_osc leads to the highest laser cooling rate and the lowest temperature <cit.>. This condition determines the intensity of the E_1 component for a given δ_1 and trap frequency ω_osc
Ω_1 = 2 √(3)√(ω_osc (δ_1+ω_osc) ) .
The obtained analytical expression (<ref>) allows us to estimate the limit of laser cooling of the ^171Yb^+ ion in the proposed EIT scheme. Thus, the average population of vibrational states (<ref>) in the limit Ω_1 ≫Ω_2 takes the form:
N̅_f = 6 Ω_2^2+S_3 γ^2/16 δ_1^2 S_3
which, in the limit of low intensity E_2 component, specifically, Ω_2 ≪√(γ S_3/6), leads to
N̅_f = γ^2/16 δ_1^2 .
The obtained expression (<ref>) corresponds to the limit of EIT cooling in the standard Λ scheme <cit.>. Therefore, deep laser cooling of the Yb ion, N̅_f ≪ 1, is achieved under the condition δ_1 ≫γ.
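A small numerical illustration of these relations, using the linewidth γ/2π = 23 MHz and the trap frequency ω_osc/2π = 600 kHz quoted in the text; the choice δ_1 = 10γ is an assumption made purely for illustration.

```python
import numpy as np

gamma = 2 * np.pi * 23e6        # natural linewidth of 2S1/2 -> 2P1/2, rad/s
omega_osc = 2 * np.pi * 600e3   # trap frequency, rad/s
delta1 = 10 * gamma             # illustrative blue detuning of E1

# Drive Rabi frequency from Eq. (ref): Omega_1 = 2*sqrt(3)*sqrt(w_osc*(d1 + w_osc))
Omega1 = 2 * np.sqrt(3) * np.sqrt(omega_osc * (delta1 + omega_osc))

# EIT cooling limit from Eq. (ref): N_f = gamma^2 / (16 delta1^2)
N_f = gamma**2 / (16 * delta1**2)

print(f"Omega_1 / 2pi = {Omega1 / (2 * np.pi) / 1e6:.1f} MHz")
print(f"steady-state phonon number N_f = {N_f:.2e}")   # ~6e-4 for delta1 = 10*gamma
```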
§ CONCLUSION
This work proposes a scheme for ground state laser cooling of ^171Yb^+ ions that does not require the use of a magnetic field. For laser cooling, a polychromatic configuration of the light field is used, consisting of three monochromatic running waves resonant with optical transitions of the ^2S_1/2→ ^2P_1/2 line. For the first stage of Doppler cooling, the light frequency components are running waves with codirectional linear polarizations.
In this case, each of the frequency components of the field exerts a mechanical action on the ion, which finally leads to cooling down to the Doppler-limit temperature. In our case, for a trap with ω_osc/2π≃ 600 kHz, this temperature corresponds to the average vibrational quantum number N̅≃ 20. To implement the second stage of deep laser cooling to the motional ground state, i.e. N̅≪ 1, the polarization of one of the frequency components has to be oriented at an angle of 90^o relative to the others.
Excluding the magnetic field from the laser cooling – clock operation cycle, on the one hand, makes it possible to reduce the cycle time by eliminating the time interval required to turn off and attenuate the magnetic field used in the standard two-frequency cooling scheme. Reducing the cycle time contributes to faster accumulation of measurement statistics in optical frequency standards. On the other hand, removing the need for a magnetic field allows more accurate control of the residual magnetic field and eliminates its fluctuations between measurement cycles, which is important for further increasing the accuracy of optical standards based on ^171Yb^+.
In addition, deep ground state cooling of the ion to N̅ < 1 significantly suppresses the second-order Doppler shift, to a level below Δν/ν < 10^-19, which makes it possible to remove it from the uncertainty budget of frequency standards based on ^171Yb^+.
Note that the suggested method of ground-state cooling is also important for the development of quantum logic elements based on cold ^171Yb^+ ions.
Cornell2002 E. Cornell and C.E. Wieman Rev. Mod. Phys. 74, 875 (2002)
Ketterle W. Ketterle Rev. Mod. Phys. 74, 1131 (2002)
DeMarco B. DeMarco and D.S. Jin Science 285, 1703–1706 1999
DeMarco_PRL B. DeMarco B, J.L. Bohn, J.P. Burke, M. Holland, and D.S. Jin Phys. Rev. Lett. 82, 4208–4211 (1999)
Nielsen M. A. Nielsen, I.L. Chuang “Quantum Computation and Quantum Information”, Cambridge University Press, 2010
Falke S. Falke, et al. New J. Phys. 16, 073023 (2014)
Katori2020 M. Takamoto, I. Ushijima, N. Ohmae, T. Yahagi, K. Kokado, H. Shinkai, and H. Katori, Nat. Photonics 14, 411–415 (2020)
McGrew W. F. McGrew, X. Zhang, R. J. Fasano, S. A. Schaffer, K. Beloy, D. Nicolodi, R. C. Brown, N. Hinkley, G. Milani, M. Schioppo, T. H. Yoon, A. D. Ludlow, Nature 564, 87–90 (2018)
Wineland_Al C. W. Chou, D. B. Hume, J. C. J. Koelemeij, D. J. Wineland, T. Rosenband, Phys. Rev. Lett. 104, 070802 (2010)
Huntemann N. Huntemann, C. Sanner, B. Lipphardt, C. Tamm, and E. Peik, Phys. Rev. Lett. 116, 063001 (2016)
Huang_Ca Y. Huang, H. Guan, P. Liu, W. Bian, L. Ma, K. Liang, T. Li, and K. Gao, Phys. Rev. Lett. 116, 01300 (2016)
Lion2017 G. Lion, I. Panet, P. Wolf, C. Guerlin, S. Bize, and P. Delva, Journal of Geodesy 91, 597–611 (2017)
Ludlow2018 W. F. McGrew, X. Zhang X, R. J. Fasano, S. A. Schaffer, K. Beloy, D. Nicolodi, R. C. Brown, N. Hinkley, G. Milani, M. Schioppo, T.H. Yoon, and A.D. Ludlow, Nature 564, 87–90, (2018)
Godun14 R.M. Godun. P.B.R. Nisbet-Jones, J.M. Jones, S.A. King, L.A.M. Johnson, H.S. Margolis, K. Szymaniec, S.N. Lea, K. Bongs, and P. Gill, Phys. Rev. Lett. 113, 210801 (2014)
Huntemann14 N. Huntemann, B. Lipphardt, Chr. Tamm, V. Gerginov, S. Weyers, and E. Peik, Phys. Rev. Lett. 113, 210802 (2014)
Dzuba V. Dzuba, V.V. Flambaum, M.S. Safronova, S.G. Porsev, T. Pruttivarasin, M.A. Hohensee, and H. Haffner, Nature Physics 12, 465–468 (2016)
Sanner C. Sanner, N. Huntemann, R. Lange, C. Tamm, E. Peik, M.S. Safronova and S. G. Porsev, Nature 567, 204–208 (2019)
Laura L.S. Dreissen, C.-H. Yeh, H. A. Fürst, K.C. Grensemann, T.E. Mehlstäubler, Nature Communications 13, 7314 (2022)
Arvanitaki A. Arvanitaki, J. Huang, and K.V. Tilburg, Phys. Rev. D 91, 015015 (2015)
Stadnik Y.V. Stadnik, V.V. Flambaum, Phys. Rev. Lett. 115, 201301 (2015)
Tamm Chr. Tamm, S. Weyers, B. Lipphardt, and E. Peik, Phys. Rev. A 80, 043403 (2009)
Prudnikov_2017 O.N. Prudnikov, S.V. Chepurov, A.A. Lugovoy, K. M. Rumynin, S.N. Kuznetsov, A. V. Taichenachev, V.I. Yudin, S.N. Bagayev, Quantum Electronics, 47, 806–811 (2017)
Prudnikov_2019 S.V. Chepurov, A.A. Lugovoy, O.N. Prudnikov, A.V. Taichenachev, S.N. Bagayev, Quantum Electronics 49, 412 – 417 (2019)
Kolachevsky M. A. Aksenov, I. V. Zalivako, I. A. Semerikov, A. S. Borisenko, N. V. Semenin, P. L. Sidorov, A. K. Fedorov, K. Yu. Khabarova, and N. N. Kolachevsky
Phys. Rev. A 107, 052612 (2023)
Wineland D.J. Wineland, W.M. Itano, PRA 20, 1521 (1979)
Javanainen1981 J. Javanainen, Appl. Phys. 24, 151-162 (1981)
Wineland2003 D. Leibfried, R. Blatt, C. Monroe, D. Wineland, Reviews Of Modern Physics 75, 281 (2003)
Morigi2000 G. Morigi, J. Eschner, and C. H. Keitel, Phys. Rev. Lett. 85, 4458 (2000).
Morigi2003 J. Eschner, G. Morigi, F. Schmidt-Kaler, and R. Blatt, J. Opt. Soc. Am. B 20, 1003 (2003)
Roos2016 R. Lechner, C. Maier, C. Hempel, P. Jurcevic, B.P. Lanyon, T. Monz, M. Brownnutt, R. Blatt, and C.F. Roos, PRA 93, 053401 (2016)
Khabarova I. A. Semerikov, I. V. Zalivako, A. S. Borisenko, K. Y. Khabarova, and N. N. Kolachevsky, Journal of Russian Laser Research 39, 568, (2018)
Evers J. Evers and C. H. Keitel, Europhys. Lett., 68, 370 (2004).
Krysenko2023 D.S. Krysenko, O.N. Prudnikov, JETP 137, (2023) (accepted for publication)
|
http://arxiv.org/abs/2307.02263v2
|
20230705130121
|
Dynamical Isometry based Rigorous Fair Neural Architecture Search
|
[
"Jianxiang Luo",
"Junyi Hu",
"Tianji Pang",
"Weihao Huang",
"Chuang Liu"
] |
cs.LG
|
[
"cs.LG",
"cs.CY"
] |
Dynamical Isometry based Rigorous Fair Neural Architecture Search
Jianxiang Luo2 Junyi Hu1,2Corresponding author. Tianji Pang2 Weihao Huang1,2 Chuang Liu2,3
1Tsinghua University
2Glasssix Technology (Beijing) Group Co., Ltd
3Northwestern Polytechnical University
{jianxiangluo,inlmouse,tianjipang,weihaohuang,andyliu}@glasssix.com
August 1, 2023
=================================================================================================================================================================================================================================================================================================
Recently, the weight-sharing technique has significantly sped up the training and evaluation procedure of neural architecture search. However, most existing weight-sharing strategies are based solely on experience or observation, which makes the search results lack interpretability and rationality. In addition, because fairness is neglected, current methods are prone to misjudgments in module evaluation. To address these problems, we propose a novel neural architecture search algorithm based on dynamical isometry. We use the fixed point analysis method of mean field theory to analyze the dynamical behavior of the steady-state random neural network, and show how dynamical isometry guarantees the fairness of weight-sharing based NAS. Meanwhile, we prove that our module selection strategy is rigorously fair by estimating the generalization error of all modules with well-conditioned Jacobians. Extensive experiments show that, at the same model size, the architecture searched by the proposed method can achieve state-of-the-art top-1 validation accuracy on ImageNet classification. In addition, we demonstrate that our method is able to achieve better and more stable training performance without loss of generality.
§ INTRODUCTION
Neural architecture search (NAS) is a widely used machine learning technology that automates the design of neural network architectures to find the best model architecture for a given task. Although the recently proposed weight-sharing strategy spares traditional NAS methods the burden of training massive numbers of neural network architectures from scratch <cit.> and has significantly improved the computational efficiency of current NAS algorithms <cit.>, it makes the parameters of the candidate subnets highly coupled. This makes it hard for subnet candidates to obtain truly independent evaluations, leading to unreliable results.
To solve these problems, Boyu Chen et al. proposed the BNNAS <cit.> algorithm, which uses the weights of the batch normalization (BN) <cit.> layer, named the BN-based indicator, to evaluate the importance of the subnets. During supernet pre-training in BNNAS, only the weights of the BN layers are updated with gradients, while the rest of the random parameters are frozen. As a result, the fixed random parameters and the trainable indicators are successfully decoupled. Though using the BN-based indicator as the subnet performance criterion is inspiring, this subnet selection criterion is purely empirical and has no mathematical guarantees. In addition, a reasonable gradient descent method will affect the results of the architecture search <cit.>. It is obvious that random parameter initialization cannot ensure that the gradients (signals) received by each candidate subnet in each layer are equivalent during the training procedure. As the network goes deeper, this issue becomes progressively more severe, affecting the fair evaluation of all BN-based indicators. Furthermore, we discovered that the magnitudes of the BN-based indicators in different layers are not comparable in the module selection strategy of BNNAS. It is not reasonable to rank the overall performance of the subnets by directly summing the BN scores of each module.
In order to quantify the dynamical behavior of a randomly initialized supernet during pretraining, we need to analyze the dynamics of randomly initialized neural networks. Notably, mean field theory (MFT) has been used to establish a theoretical understanding of neural networks with random parameters and to quantitatively portray the “average” dynamics of signal propagation inside them <cit.>. MFT and extensive experiments all show that networks can be trained most efficiently and stably when the input-output Jacobian of every layer achieves dynamical isometry (orthogonality), namely the property that the entire distribution of singular values of the Jacobian matrix is close to 1 <cit.>. A well-conditioned Jacobian ensures stable propagation while avoiding vanishing or exploding signals. In a NAS algorithm, by initializing each module of the supernet to achieve dynamical isometry, the input signal of the network can be propagated equally to any place in the supernet, so that the signal reflects the architectural information of the network as much as possible.
In the present work, we continue the line of one-shot and weight-sharing based NAS techniques and propose a fairer and more efficient approach to neural architecture search. Specifically, we obtain dynamically isometric modules by triangular decomposition of randomly initialized Gaussian-distributed weights. This weight initialization strategy ensures that each candidate module is dynamically isometric while remaining frozen during supernet pretraining. The input signal can be propagated equally to each and every module in the search space, both horizontally and vertically. Following these principles, BN-based indicators are placed at the bottom of each module in the subnets as performance indicators for modules with different structures. To deal with the above-mentioned module selection dilemma, we select the module with the largest BN-based indicator as the target module of each layer. Finally, we present a rigorous proof of the feasibility of using the parameters of the BN layer as the subnet evaluation criterion. Extensive experiments show that, at the same model size, the architecture searched by the proposed method can achieve state-of-the-art top-1 validation accuracy on ImageNet classification. In addition, we demonstrate that the proposed method is able to achieve better and more stable training performance without loss of generality.
The contributions of this work can be summarized as follows:
* We design an initialization method for NAS algorithms which guarantees equivalent inputs for both shallow and deep modules and ensures the fairness of the search evaluation.
* We give a mathematical proof that the value of the BN layer’s parameters is able to reveal the signal propagation capability of the candidate modules, which for the first time shows the interpretability and rationality of the BN-based indicator theoretically.
* We propose a new module selection strategy which fixes the problem that the numerical scale of the BN-based indicator is uneven across layers when selecting modules.
§ RELATED WORKS
§.§ Weight-sharing NAS
We denote the search space of a weight-sharing based NAS algorithm as 𝒜. The entire supernet can be denoted as 𝒩 (𝒜,𝐖), where 𝐖 denotes the parameter weights of the supernet. Then, all parameter weights of the supernet can be jointly optimized as follows:
𝐖_𝒜 = _𝐖ℒ ( 𝒩 ( 𝒜, 𝐖 ) ).
Single path one-shot NAS <cit.> avoids the coupling of weights in the joint optimization by uniformly selecting a path of candidate modules, one from each layer, to form a subnet, and trains each subnet path individually. A subnet composed of candidate modules is denoted a ⊆𝒜 with parameters 𝐖_a. On this basis, FairNAS <cit.> analyzes the subnet sampling strategy and proposes a "strict fairness" method. We use the notation a ∼Φ(𝒜) to denote this sampling strategy:
𝐖_𝒜 = _𝐖_a𝔼_a ∼Φ(𝒜)[ ℒ ( 𝒩 ( a, 𝐖_a ) ) ].
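As an illustration, the sketch below shows one supernet training step under this strict-fairness sampling: the candidate indices of every layer are permuted independently, and M single-path subnets are assembled so that every candidate module of every layer is activated exactly once per step. The function and variable names, and the train_step call, are placeholders.

```python
import random

def strict_fair_paths(num_layers, num_choices):
    """Return num_choices single-path subnets covering each candidate once.

    Each subnet is a list of module indices, one per layer; column i of the
    per-layer permutations forms the i-th path (FairNAS-style sampling).
    """
    perms = [random.sample(range(num_choices), num_choices)
             for _ in range(num_layers)]
    return [[perms[l][i] for l in range(num_layers)]
            for i in range(num_choices)]

# One supernet training step (train_step is a placeholder for the actual
# forward/backward pass, which in our setting only updates the BN indicators):
# for path in strict_fair_paths(num_layers=20, num_choices=6):
#     train_step(supernet, path, batch)
```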
After the supernet is pre-trained, the weight-sharing NAS method evaluates the accuracy of each subnet on the validation set, either one by one or with some search strategy, to obtain the best subnet structure. The selected subnet is generally retrained on the training set to obtain the final output of the NAS algorithm.
BNNAS <cit.> is inspired by <cit.>, which studies the role of the Batch Normalization layer in the network's forward propagation. The authors heuristically came up with the idea of making full use of the BN layer in NAS. This method regards the BN layers' learnable weights as the evaluation of the candidate module and names those layers the BN indicator. During the training of the supernet, only the parameters of the BN layers are updated by back-propagated gradients while the remaining parameters in the network are frozen. After the supernet is trained, BNNAS uses the value of the optimized BN indicator's weight as the evaluation criterion for the corresponding module. An evolutionary algorithm is used to select promising modules from the candidate set and combine the selected modules into a final subnet.
Though this idea improves the search speed of the NAS algorithm, it ignores fairness in the training and search procedure. Since the weights of individual modules in the supernet are randomly initialized and frozen during supernet training, the elusive parameters can affect the structural expressiveness of the modules. It is also difficult for modules in deep layers to obtain “equivalent" input feature maps for fair cross-sectional comparison. Inspired by mean-field theory, we find that signal propagation in the network can solve the above problem when dynamical isometry is achieved. We can also explain the plausibility of the BN indicator in a neural network under the mean field.
§.§ Dynamical Isometry
The dynamics of a neural network as signals propagate through it is a classical research subject <cit.>. A. M. Saxe et al. regard the learning dynamics on the weight space as a set of nonlinear differential equations <cit.>, which describe the dynamics as a function between the inputs and outputs of the network. According to these equations, the input-output Jacobian matrix of the network can be quantified.
Consider a neural network with L layers, which is composed of a series of linear and nonlinear functions. 𝐖_l and 𝐛_l denote the weight matrices and bias vectors of the linear function in the l-th layer, with l=1,⋯, L. The activation function is σ(·). The forward propagation of the network is:
𝐡_(l+1) = σ( 𝐖_l·𝐡_(l) + 𝐛_l),
where 𝐡_l represents the state of the signal at different layers. We denote the input-output Jacobian matrix of the network by 𝐉, and the non-linear part of the network is 𝐃. 𝐉 can be used to describe the relationship between the input and the output signal of any layer of the network:
𝐉= ∂𝐡_l/∂𝐡_0 = ∏ _j=1^l𝐃_j𝐖_j.
Order-to-chaos phase transition and fixed point analysis. The singular values of the Jacobian matrix, summarized by χ, determine how the signal propagates with network depth. <cit.> and <cit.> studied the order-to-chaos phase transition of the network through χ:
χ = ϕ(( 𝐃𝐖) ^T𝐃𝐖),
where ϕ is the function that computes the expectation. The value of χ is influenced by the initialization of the network weights. Only χ=1 can avoid vanishing or exploding gradients, that is, the ordered or chaotic phase.
The fixed point of the signal during network propagation is related to the change in the signal's variance. <cit.> study the variance of the input and output of each module in the network. Denote the variance of the module's weights by v_𝐖 and the variance of the bias by v_𝐛. Then, when the input signal follows a Gaussian distribution with mean 0 and variance v_h, the variance of the output of this signal after the module is:
v_𝐡_l = v_𝐖∫𝒟h σ( √(v_𝐡_l-1)h)^2 + v _𝐛,
where 𝒟h = d h/√(2 π) e^-h^2/2 denotes the standard Gaussian measure. When the propagation of the signal is at a fixed point, the variance of the input and output vectors of any module in the network is the same. We say that the subnets on the fixed point can achieve dynamic isometry:
v_𝐡_* = v_𝐖∫𝒟h σ( √(v_𝐡_*)h)^2 + v _𝐛.
Gaussian weight initialization. <cit.> demonstrates the difficulty of using a Gaussian distribution to initialize the parameters of a network model so as to achieve dynamical isometry. They derived the Stieltjes transform of the limiting spectral density of the matrix 𝐉𝐉^T and generated its moment generating function M_𝐉𝐉^T. The first two moments are denoted m_1 and m_2, and satisfy:
m_1 =(v_𝐖 p(𝐡_*) ) ^ L
m_2 = (v_𝐖 p(𝐡_*)) ^2L (L + p(𝐡_*)) / p(𝐡_*).
where p(𝐡_*) denotes the probability that the current module acts as a linear function, which can also be expressed as the proportion of neurons operating in the linear regime. Both the first and second moments increase exponentially with the number of layers. When the network is at the fixed point, that is m_1 ≈ 1, the variance is L/p( 𝐡 _* ), which continues to increase with the number of layers. Therefore, for any neural network, Gaussian weight initialization cannot avoid this growth.
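A small sketch of this growth, evaluating m_1, m_2, and the variance m_2 - m_1^2 from the two expressions above for a network tuned to the fixed point (v_𝐖 p(𝐡_*) = 1); the numbers are illustrative.

```python
def jacobian_moments(L, p, vw):
    """First two moments of J J^T under Gaussian initialization (Eqs. ref)."""
    m1 = (vw * p) ** L
    m2 = (vw * p) ** (2 * L) * (L + p) / p
    return m1, m2, m2 - m1 ** 2

# At criticality v_W * p = 1 the mean stays at 1, but the variance grows as L/p:
for L in (10, 50, 100):
    p = 0.5
    m1, m2, var = jacobian_moments(L, p, vw=1.0 / p)
    print(L, round(m1, 3), round(var, 1))   # variance ≈ L / p
```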
§ METHOD
§.§ Dynamic Isometry in NAS
In order to evaluate the candidate modules of the neural network in the supernet, we refine the input-output Jacobian matrices of the entire network down to individual modules. Each subnet in the search space is a composition of independent, consecutively connected modules. Our contribution is that when candidate modules at the same level are compared horizontally, the dynamics of the modules remain the same, which provides absolute fairness for the subsequent evaluation mechanism.
§.§.§ Candidate module dynamic isometry
We denote 𝐉 _l,m as the input-output Jacobian matrix of the m-th candidate module in the l-th layer of the network, where l = 1,⋯,L and m = 1,⋯,M. The signals in the network likewise carry the index of the candidate module, 𝐡_l,m.
𝐉_l,m = ∂𝐡_l,m/∂𝐡_l-1,m.
To explore the dynamic isometry of candidate modules according to 𝐉_l,m, we denote λ _i as the i-th eigenvalue of 𝐉_l,m𝐉_l,m^T. We have tr(𝐉_l,m𝐉_l,m^T) = 1/w∑_i=1^wλ_i and assume the eigenvalues of this matrix are independent of each other. The parameter w can reflect the width of the network to some extent through the Jacobian matrix. The variance of tr (𝐉 _l,m𝐉 _l,m^T) can be given by
D [ tr( 𝐉_l,m𝐉_l,m^T) ]
= 1/w∑_i=1^w𝔼[λ _i^2] - 𝔼^2[ λ_i]
= ϕ(( 𝐉_l,m𝐉_l,m^T) ^2) - ϕ ^2 (𝐉_l,m𝐉_l,m^T)
:= φ( 𝐉_l,m𝐉_l,m^T).
According to the conditions for dynamical isometry and the above formula, we define the condition under which a candidate module achieves dynamic isometry as
ϕ(𝐉_l,m𝐉_l,m^T) ≈ 1,
φ(𝐉_l,m𝐉_l,m^T ) ≈ 0.
We can use the moment generating function to justify this setting by referring to the analysis of the Tanh activation and orthogonal weights in the corresponding literature. In fact, ϕ(𝐉_l,m𝐉 _l,m ^T) and φ(𝐉 _l,m𝐉 _l,m ^T ) can be regarded as the first two moments of 𝐉 _l,m. With this setting, we can ensure that the expectation of the singular values of each candidate module's input-output Jacobian matrix is close to 1. It is worth mentioning that, according to equation <ref>, when the network width of each candidate module is large enough, this expectation can be exactly 1.
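The two conditions can be checked empirically for a single candidate module. The sketch below builds 𝐉_l,m = 𝐃𝐖 for an orthogonally initialized linear map followed by tanh and reports ϕ(𝐉𝐉^T) and φ(𝐉𝐉^T); it is an illustration under simplified assumptions (a dense layer rather than the actual convolutional blocks), not the paper's exact construction.

```python
import numpy as np

def isometry_stats(width=256, input_scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Orthogonal weights: W W^T = I (QR of a Gaussian matrix).
    W, _ = np.linalg.qr(rng.standard_normal((width, width)))
    h = rng.standard_normal(width) * input_scale      # small pre-activations
    pre = W @ h
    D = np.diag(1.0 - np.tanh(pre) ** 2)              # tanh'(pre)
    J = D @ W                                          # module Jacobian
    eig = np.linalg.eigvalsh(J @ J.T)
    phi = eig.mean()                                   # should be ≈ 1
    varphi = (eig ** 2).mean() - phi ** 2              # should be ≈ 0
    return phi, varphi

print(isometry_stats())   # roughly (≈0.995, ≈1e-5) for a small input scale
```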
The input 𝐡 _0 is the same for all subnets in the search space. When we guarantee that each module achieves dynamical isometry, the dynamics of the signal passed to subsequent candidate modules remain stable as the number of layers increases. The only thing that affects the output of the model is the structure of the network itself, which is the original intention of neural network architecture search.
§.§.§ Orthogonal Initialization
Since the supernet weights are kept frozen, a well-designed initialization method is needed in order to achieve dynamical isometry of the network. If a candidate module uses orthogonal weights, then for all the parameters of the linear function in that module we have 𝐖𝐖^T = 𝐈. We want the Jacobian of this module, 𝐉 _l,m := 𝐃 _l,m𝐖 _l,m, to satisfy
ϕ( 𝐉 _l,m𝐉 _l,m^T ) = 1.
For different activation functions, suitable orthogonal initialization can achieve ϕ (𝐉𝐉 ^T)=1. However, in order to achieve dynamical isometry according to equation <ref>, we also need to satisfy φ(𝐉𝐉^T)=0. The activation function Tanh is defined as f(x)=2/(1+e^-2x) -1, and its derivative is f ^'(x) = 1 - f ^2(x). According to the properties of Tanh, we have ∀ x∈ℝ, |tanh(x)|/|x|<1; therefore, when the number of layers is large enough, almost all activations will concentrate around 0. The Taylor expansion then reduces to f(x) ≈ f(0) + f ^' (0) x=x. Since the derivative matrix is thus approximately the identity, we have ϕ(𝐉𝐉 ^T)≈1 and φ(𝐉𝐉 ^T)≈0.
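As a concrete illustration of such an initialization, the sketch below produces frozen orthogonal weights for a candidate convolution by QR-decomposing a Gaussian matrix (the orthogonal/triangular split being one form of the triangular decomposition mentioned earlier) and reshaping it into a kernel; the reshaping convention is an assumption rather than the paper's exact recipe.

```python
import numpy as np

def orthogonal_conv_weight(out_ch, in_ch, k, seed=None):
    """Frozen orthogonal init for a conv kernel of shape (out_ch, in_ch, k, k).

    Assumes out_ch <= in_ch * k * k, the usual case for the blocks considered.
    """
    rng = np.random.default_rng(seed)
    cols = in_ch * k * k
    a = rng.standard_normal((cols, out_ch))
    q, r = np.linalg.qr(a)                  # Gaussian = orthogonal * triangular
    q *= np.sign(np.diag(r))                # remove the sign ambiguity of QR
    w = q.T                                 # (out_ch, cols) with orthonormal rows
    return w.reshape(out_ch, in_ch, k, k)

w = orthogonal_conv_weight(64, 64, 3, seed=0)
flat = w.reshape(64, -1)
print(np.allclose(flat @ flat.T, np.eye(64), atol=1e-6))   # True
```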
After satisfying the dynamical isometry condition of equation <ref>, we ensure that the input and output signals of the same module have identical mean and variance.
§.§ How Dynamic Isometry Guarantees Fair Evaluation
The complex search space is organized into a regular directed acyclic graph (Figure <ref>), and the evaluation BN indicators are embedded in each module as part of the network. Under the premise of choosing a topological path, the candidate modules in the first layer are fortunate: they all accept the same supernet input 𝐡_0,m∈ℝ ^n × n × d. However, it is difficult for candidate modules in subsequent layers to have the same input, since each time they accept different outputs from different candidate modules in the layer above.
In this section, we explain why our search algorithm is absolutely fair under the premise that every candidate module in the search space is dynamically isometric, and how our initialization method guarantees that the outputs and inputs of candidate modules in the same layer are nearly equal.
Assuming that the initialization of each module's parameters makes it meet the conditions of dynamical isometry, the output of a module can be denoted as 𝐡_l,m and its input-output Jacobian matrix as 𝐉_l,m.
The linear part of the input-output Jacobian matrix 𝐉_l,m is denoted as f_l,m. In this paper, the linear function is the convolution operation. Let F_p∈ℝ^r × r denote the filters in f_l,m, the number of which is N, where p denotes the index. The remaining activation functions are denoted as σ(·); we concentrate on the tanh function.
For the Batch normalization layers in the block and our BN-based indicator, we state their representation
𝐡_l,m ^bo= γ·𝐡_l,m ^bi-𝔼[𝐡 _l,m ^bi]/√(Var ^2[𝐡_l,m ^bi] - ε) + β ,
where 𝐡_l,m ^bi and 𝐡_l,m ^bo represent the input and output of the Batch Normalization in each module, respectively. The weight γ and bias β are learnable parameters in BN layers used to affine-transform the normalized features. ε is a positive number. We set 𝔼[𝐡 _l,m]=0 and the bias to 0. In addition, v_h denotes the variance of 𝐡 _l,m. Then the representation of the BN layer function is simplified to
B(h_l,m) := γ/√(v_h-ε)h_l,m.
Since the different mappings f_l,m are independently and identically distributed, we take one of the modules for analysis. Ideally, the changes that occur in the signal after passing through the convolution module should be caused by the structure of the module. However, even modules with well-optimised parameters may introduce biases into the signal's feature extraction. This problem is exacerbated when randomised parameters are used, and the biases accumulate in cascade during supernet training.
We find that the impact of such randomised parameters is very limited when the modules are dynamically isometric. Considering two random tensors as inputs to the same module, we analyse the change in the output distance of the convolution module for the two random variables. We show that the deviation between the expected distance of the output tensors and the actual distance is probabilistically controlled when the module satisfies dynamical isometry. In the filter inner product, the patch of the signal 𝐡_l,m is defined as [x]_i,j^r ∈ℝ ^r × r × d, where i,j ∈𝐙_n indicate the location. For another input 𝐡_l,m^', the same splitting gives [y]_i,j^r ∈ℝ ^r × r × d.
Let * denote the cyclic convolution operation and 𝐡,𝐡^'∈ℝ ^n× n× d be inputs to a filter F_p ∈ℝ^r × r × d, with r<n. The BN and activation functions are expressed as B(·) and σ(·), respectively. All F_p are denoted F; they are independent and identically distributed orthogonal tensors decomposed from Gaussian-distributed tensors with variance v^2.
The expected value of the inner product of the two output tensors is
𝔼[ ⟨ B( σ( F * 𝐡)), B( σ( F * 𝐡^'))⟩].
Moreover, we denote the variance of the two inputs as v_h^' = ‖ (v_h - ε) ^-1/2‖, v_h^'^' = ‖ (v_h^' - ε) ^-1/2‖ and have
R := max{max_i,j ∈ℤ_n‖ [x]_ij^r‖ v_h^',
max_i,j ∈ℤ_n‖ [y]_ij^r‖ v_h^'^'}
When the activation function satisfies the Lipschitz condition with Lipschitz constant L, there is a probability of at most δ > 0 that the difference between the expected and the actual output satisfies:
ℙ[ | 1/N∑_p=1^N⟨ B( σ( F_p * 𝐡) ) , B ( σ( F_p * 𝐡 ^') ) ⟩..
.. - 𝔼[ ⟨ B ( σ( F * 𝐡)), B( σ( F * 𝐡 ^') ) ⟩] | ≥ϵ] ≤δ
for δ=2n^2 exp{-min( K^2,K)cN}, where c>0 is an absolute constant and K = ϵ/D ‖γ‖ ^2 v^2 L^2 R^2 n^2 while D>0 is an absolute constant.
Since the network parameters are untrained, it is almost impossible to achieve the expected state of the candidate module. We assume that ϵ in equation <ref> is always present and fixed. Then, making the actual change in the output tensor approximate the theoretical change with the highest probability amounts to making δ as small as possible. After freezing the weights of the candidate modules that implement dynamical isometry, the parameter γ of the BN layer is the only variable.
The larger the value of γ, the closer the actual distance between any two outputs is to their expected distance. That is, the change in the output signal better reflects the structural information of the module itself, avoiding interference and deviations caused by the randomness of the parameters. A larger value of γ reflects fairer signal propagation in the neural network: the change in the signal is more responsive to the structural information of the module.
With dynamical isometry in place, the randomness of the initialisation parameters of the candidate modules has a limited impact on signal propagation, especially when the weights are fixed. In addition, optimising the parameters of the BN layer benefits NAS fairness: the magnitude of γ reduces the side effects of the fixed parameters and promotes a fair NAS evaluation.
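A minimal sketch of how candidate-module weights could be initialised as orthogonalised Gaussian tensors, so that signal norms are approximately preserved (dynamical isometry), is given below; the flattening convention and helper name are our illustrative assumptions, not the authors' implementation.

import numpy as np

def orthogonal_init(shape, rng=None):
    # Draw a Gaussian matrix over the flattened filter dimensions, take its QR
    # decomposition, and use the orthogonal factor as the weight tensor.
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = shape[0], int(np.prod(shape[1:]))
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)
    # Fix the signs so the result is uniformly distributed over orthogonal matrices.
    q = q * np.sign(np.diag(r))
    if rows < cols:
        q = q.T
    return q[:rows, :cols].reshape(shape)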
§.§ Fairness Improvements for NAS
To facilitate the distinction, we refer to BNNAS's BN indicator as the BN-based block indicator, which measures the importance of a block. Its value determines whether the current module is outstanding.
If the goal is to find only one optimal subnet, it suffices to select the highest-rated candidate module in each layer of the network. However, when we want to search for several good candidate subnets, simply summing the average indicator weights and sorting is unconvincing. Since the BN-based block indicators are difficult to compare across layers, it is hard to obtain the relative importance of modules between layers. Therefore, we add a BN-based layer indicator to every layer to measure the importance of that layer within the entire network.
The network layers are divided into reduction layers and normal layers. Since the reduction layers are indispensable, we do not evaluate them. We add a parallel identity connection to each layer of the network, followed by a BN layer, which serves as the BN-based layer indicator.
Figure <ref> shows the details of the BN-based layer indicator. The sampling procedure after adding the BN-based layer indicator is shown in Algorithm 1. We still use fair sampling based on FairNAS <cit.>, so that each candidate module has the same expectation of being selected. The pseudo-code for the entire search process is given in Algorithm 2.
In addition, the training strategy based on SPOS does not affect the dynamic isometry of the network when the subnet selects the BN-based layer indicator.
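A minimal sketch of how fair sampling and the two BN indicators could be combined to score subnets is shown below; the data structures and the scoring rule (block scores weighted by the layer indicator) are our illustrative assumptions rather than the paper's exact algorithm.

import random

def fair_layer_orders(num_choices, num_layers):
    # FairNAS-style sampling: within one round, every candidate index of a layer is
    # drawn exactly once, so all candidates share the same selection expectation.
    return [random.sample(range(num_choices), num_choices) for _ in range(num_layers)]

def score_subnet(choice_per_layer, block_gamma, layer_gamma):
    # block_gamma[l][c]: mean |gamma| of the BN-based block indicator for choice c in layer l.
    # layer_gamma[l]:    mean |gamma| of the BN-based layer indicator of layer l.
    # Illustrative score: weight each block indicator by the importance of its layer.
    return sum(layer_gamma[l] * block_gamma[l][c] for l, c in enumerate(choice_per_layer))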
§ EXPERIMENTS
To verify the effectiveness of our parameter initialization method, we apply it to the NAS algorithm with the BN-based layer indicator and set up comparative experiments. For comparisons with state-of-the-art methods, our experiments strictly follow the settings of BNNAS and SPOS <cit.>.
Dataset. We use ImageNet <cit.>, which contains a training set with over one million samples and a validation set with fifty thousand samples. Only the training set is used for training and searching the supernet; the validation set is used only when the found subnet is retrained.
Search Space. The search space follows the same settings as BNNAS and SPOS. The search space of the first set of experiments is mainly composed of MobileNetV2 blocks <cit.> with kernel sizes in {3,5,7} and internal expansion rates in {3,6} for the candidate modules. The different kernel sizes and internal expansion rates combine into six candidate modules for each layer of the supernet. The entire supernet has 20 layers in total, including reduction layers and normal layers. For the other set of experiments, we use the ShuffleNetV2 block <cit.> as the basis of the candidate modules. The number of network layers is again set to 20, and the candidate modules of each layer consist of ShuffleNetV2 blocks with kernel sizes of {3,5,7} and a ShuffleNet Xception block <cit.>. The sizes of the search spaces for these two sets of experiments are 6^20 and 4^20, respectively.
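The MobileNetV2-based search space described above can be enumerated with a few lines of Python; this sketch only lists the candidate (kernel size, expansion rate) configurations and counts subnets, and is not the authors' code.

from itertools import product

KERNEL_SIZES = (3, 5, 7)
EXPANSION_RATES = (3, 6)
NUM_LAYERS = 20

# Six candidate (kernel size, expansion rate) configurations per layer.
CANDIDATES = list(product(KERNEL_SIZES, EXPANSION_RATES))

# Total number of subnets in this search space: 6 ** 20.
search_space_size = len(CANDIDATES) ** NUM_LAYERS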
Data processing and training setup. We augment the ImageNet dataset before training; the specific operations include random cropping, brightness adjustment, and random horizontal flipping. During supernet training, we apply the BN-based block indicator and the BN-based layer indicator in the search spaces based on the ShuffleNetV2 and MobileNetV2 blocks, respectively. The two sets of experiments are trained for 80 and 100 epochs, and the learning rate is adjusted over the epochs with cosine annealing. For the search, we use an evolutionary algorithm to traverse subnets following the search strategy of SPOS <cit.> and find the best subnet structure for retraining. Finally, the searched subnet is trained for 240 epochs, and this process does not freeze any weights in the network. Our experiments are conducted on an NVIDIA GeForce RTX 3080 with the PyTorch framework.
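A sketch of the supernet training setup, in which all weights are frozen and only the BN parameters receive gradients while the learning rate follows cosine annealing, could look like the following PyTorch snippet; the helper names and optimizer settings are illustrative assumptions rather than the exact configuration used here.

import torch

def freeze_all_but_bn(supernet):
    # Freeze every parameter, then re-enable gradients only for BN affine parameters.
    for p in supernet.parameters():
        p.requires_grad_(False)
    for m in supernet.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            if m.weight is not None:
                m.weight.requires_grad_(True)
            if m.bias is not None:
                m.bias.requires_grad_(True)

def make_optimizer(supernet, epochs, lr=0.1):
    params = [p for p in supernet.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)
    # Cosine annealing of the learning rate over the training epochs.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler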
§.§ Comparison of Baseline Methods in the Search
We compare the results of the supernet training phase with those of the baseline methods. Since our model only needs to optimize the parameters of the BN layers during back-propagation, the training time of our supernet is dramatically reduced compared with other methods. In addition, thanks to the orthogonal weights, our network is much faster than BNNAS. The increase in FLOPs (468M) after adding the BN-based layer indicator is minimal in exchange for a better performance of the searched network, and the model's parameter count remains in a lightweight range (4.9M) compared with other methods. Table <ref> details the size, computation cost, and training time of several models. From the table it can be seen that any supernet-based algorithm has a shorter running time than the general NAS algorithm (NASNet-A), and the BN-based indicator methods (Ours, BNNAS) are an order of magnitude faster than the others.
There is a large gap in training time between the BN-indicator-based methods and the other methods, because gradient propagation only involves the BN layers of the network. Compared with BNNAS, our network reduces the training time further, and each image requires less propagation time. Since only the BN layers need to be optimised, we train the supernet for a sufficient number of epochs. After 80 epochs, our supernet training time is 2.8 GPU days.
§.§ Searching Approach
For the search algorithm, we follow BNNAS and SPOS and use an evolutionary algorithm; adding the BN-based layer indicator does not affect the time needed to search the structure. We believe that our search algorithm with the BN-based layer indicator can provide better model structures when proxy requirements are added.
§.§ Validation Set Evaluation
We not only compare the results on the validation set but also conduct comparative experiments against BNNAS for our design, including the BN-based layer indicator and the dynamically isometric parameter initialization method. The search space of all models is based on the MobileNetV2 block. We set up two experimental models with orthogonal weight initialization, both using the BN-based block indicator, and add a BN-based layer indicator to one of them. The experimental results show that judging the importance of each layer also benefits the search results, and the resulting top-1 accuracy (76.22%) is the best among all similar algorithms.
To compare weight initialization methods, we apply different initialization methods to the initial model parameters and perform one complete forward and backward propagation. To be as fair as possible, we uniformly set the batch size to 32 and measure the latency of one complete training pass for each method. For the other initialization methods we refer to open-source code. In addition, since it is difficult to control the specific initialization values of the model parameters, we run several experiments and report the average.
Our method's latency (93.1 ms) narrowly outperforms that of BNNAS (96.3 ms).
To demonstrate the generalization ability of the method, we change the search space to one based on the ShuffleNetV2 block and control whether the BN-based layer indicator is added. As shown in Table <ref>, compared with the SPOS strategy, whose search space is also based on ShuffleNetV2 blocks, our method is faster and has fairer
evaluation criteria for the candidate modules. Our model is also more capable of finding the most efficient subnet structure. In addition, it can find multiple excellent models from one search space simultaneously.
§ CONCLUSION
NAS has greatly boosted state-of-the-art deep learning methods in computer vision. However, existing NAS methods are time-consuming and lack fairness. We proposed a novel parameter initialization strategy for supernet pre-training that efficiently ensures the fairness of subnet selection. The proposed method also accelerates the search process and improves the performance of the search results. Thanks to mean-field theory, our method has theoretical support, obtained by analysing the dynamics of random neural networks and estimating the generalization error of dynamically isometric blocks. Extensive experiments validate that the proposed dynamical-isometry-based fair NAS significantly decreases the overall time consumption of one-shot NAS without performance loss.
§ PROOF OF THEOREM 1
To prove the theorem, we need the following definitions and lemmas:
The Orlicz norm of a random variable X with respect to a convex function ψ : [0, ∞) → [0, ∞)
such that ψ(0) = 0 and lim_x →∞ψ(x) = ∞ is defined by
‖ X‖ _ψ := inf{ t>0 | 𝔼[ ψ( | X|/t)] ≤ 1}
If ‖ X‖ _ψ_2 < ∞, X is said to be sub-Gaussian, and sub-exponential if ‖ X‖ _ψ_1 < ∞,
where ψ _p(x) := exp{ x^p} - 1 for p ≥ 1.
The following lemmas exploit properties of the Orlicz norm and provide inequalities used in the subsequent arguments.
Let X and Y be sub-Gaussian random variables. Then,
* Sum of independent sub-Gaussian. If X and Y are also independent, then their sum, X+Y,
is sub-Gaussian. Moreover, ‖ X+Y‖ _ψ_2^2 ≤ C( ‖ X‖ _ψ _2^2 + ‖ Y‖ _ψ _2^2)
for an absolute constant C. The same holds(with the same constant C) also for sums of multiple
independent sub-Gaussian random variables.
* Centering. X-𝔼[X] is sub-Gaussian. Moreover, ‖ X - 𝔼[X]‖ _ψ _2≤ C ‖ X‖ _ψ _2
for an absolute constant C.
* Product of sub-Gaussian. XY is sub-exponential. Moreover, ‖ XY‖ _ψ _1≤‖ X‖ _ψ _2‖ Y‖ _ψ _2
Let X_1,...,X_N be independent zero-mean sub-exponential random variables. Then ∀ t ≥ 0:
ℙ( |1/N∑_i=1^N X_i |≥ t ) ≤ 2 exp{ -min{t^2/K ^2, t/K}· c · N },
where K= max _i ‖ X _i‖ _ψ _1 and c > 0 is an absolute constant.
We first fill the parameters of F_p with random values drawn from a Gaussian distribution, F_p∼𝒩(0,v); then ⟨ F_p,[x]_ij^r⟩
and ⟨ F_p,[y]_ij^r⟩ are jointly Gaussian random variables with
mean zero and variances v ^2 ‖ [x]_ij^r‖^2 and v ^2 ‖ [y]_ij^r‖^2, respectively, since they are linear combinations of independent Gaussian variables. Hence, ⟨ F_p,[x]_ij^r⟩ and ⟨ F_p,[y]_ij^r⟩ are also sub-Gaussian with, ∀ p ∈ [N], ∀ i,j ∈ℤ_n:
‖⟨ F_p,[x]_ij^r⟩‖ _ψ _2≤ C_0 v ‖ [x]_ij^r‖,
‖⟨ F_p,[y]_ij^r⟩‖ _ψ _2≤ C_0 v ‖ [y]_ij^r‖,
where C_0 > 0 is a universal constant. To achieve dynamical isometry, we rely on the previously obtained orthogonal weight matrices, constructed by triangularising every filter F_p. Suppose F_p is invertible; then there is a unique positive-definite triangular matrix W, with inverse W^-1, such that Q_p = F_p · W^-1, where Q_p ∈ℝ ^r × r × d is an orthogonal matrix. We have:
‖⟨ Q_p,[x]_ij^r⟩‖ _ψ _2≤ C_1 v ‖ [x]_ij^r‖,
‖⟨ Q_p,[y]_ij^r⟩‖ _ψ _2≤ C_1 v ‖ [y]_ij^r‖,
∀ p ∈ [N], ∀ i,j ∈ℤ_n, C_1 = C_0 ·(∑ _k=1^r w_k,k ^2)^-1/2,
where w_k,k denote the diagonal elements of W^-1; f_kk and x_ij_kk in the following proof are likewise the diagonal
elements of F_p and [x]_ij^r, respectively.
‖⟨ Q_p,[x]_ij^r⟩‖ _ψ _2 = ‖tr(F_p · [x]_ij^r · W ^-1)‖ _ψ _2
≤‖√(∑ _k=1^r (f_kk x_ij_kk)^2)·√(∑ _k=1^r w_k,k ^2)‖ _ψ _2
≤‖∑ _k=1^r f_kk x_ij_kk‖ _ψ _2·‖√(∑ _k=1^r w_k,k ^2)‖ _ψ _2
= ‖⟨ F_p,[x]_ij^r⟩‖ _ψ _2·‖√(∑ _k=1^r w_k,k ^2)‖ _ψ _2.
C_0 is a sufficiently large constant, whose magnitude is much larger than the trace of the triangular matrix. Combining the above derivation
with equation <ref>, we obtain
‖⟨ Q_p,[x]_ij^r⟩‖ _ψ _2≤ C_1 v ‖ [x]_ij^r‖,
where C_1 = C_0/√(∑_k=1^r w_k,k ^2); by the uniqueness of the triangular matrix W, C_1 is a constant.
The activation function is denoted by σ, and in line with the previous reasoning we use tanh here; then
X^'_ij,p := σ(⟨ Q_p,[x]_ij^r⟩), Y^'_ij,p := σ(⟨ Q_p,[y]_ij^r⟩), ∀ p ∈ [N], ∀ i,j ∈ℤ_n
Adding a batch normalization layer to the above filter computation gives:
X_ij,p := σ(⟨ Q_p,[x]_ij^r⟩) ·γ/√(v_h-ε̂),
Y_ij,p := σ(⟨ Q_p,[y]_ij^r⟩)·γ/√(v_h^'-ε̂),
∀ p ∈ [N], ∀ i,j ∈ℤ_n
Suppose σ is Lipschitz continuous with Lipschitz constant L. In addition, since each module satisfies dynamical isometry, the input and output variances are constant, and the parameters of the BN layer are independent of the others. We then obtain that X_ij,p and Y_ij,p are sub-Gaussian with:
‖ X_ij,p‖ _ψ _2≤ C_1 ‖γ‖ Lv ‖ [x]_ij^r‖ v_h^', ‖ Y_ij,p‖ _ψ _2≤ C_1 ‖γ‖ Lv ‖ [y]_ij^r‖ v_h^'^', ∀ p ∈ [N], ∀ i,j ∈ℤ_n
where v_h^' = ‖ (v_h - ε̂)^-1/2‖, v_h^'^' = ‖ (v_h^' - ε̂)^-1/2‖, and γ are the trainable parameters.
Therefore, by the lemma <ref> we continue to have:
‖ X_ij,pY_ij,p‖ _ψ_1 ≤ C_1^2 ‖γ‖ ^2 v^2 L^2 ‖ [x]_ij^r‖‖ [y]_ij^r‖ v_h^'^' v_h^',
‖ X_ij,pY_ij,p - 𝔼[X_ij,pY_ij,p]‖ _ψ_1 ≤ C_1^2 C ‖γ‖ ^2 v^2 L^2 ‖ [x]_ij^r‖‖ [y]_ij^r‖ v_h^'^' v_h^'
For [x]_ij^r and [y]_ij^r at all locations in the input signals, we bound their maxima by R as defined above, and substitute into the distance expression we wish to control:
ℙ(|1/N∑_p=1^N⟨ B( σ( F_p * 𝐡)),B( σ( F_p * 𝐡^'))⟩ -
𝔼[ ⟨ B( σ( F * 𝐡)), B( σ( F * 𝐡^'))⟩]|≥ε)
= ℙ( 1/N|∑_p∈ [N]; i,j ∈ℤ_n{ X_ij,pY_ij,p-𝔼[X_ij,pY_ij,p]}|≥ε)
≤ℙ( 1/N∑_i,j ∈ℤ_n|∑_p ∈ [N]{ X_ij,pY_ij,p-𝔼[X_ij,pY_ij,p]}|≥ε)
≤ℙ( n^2/Nmax_i,j ∈ℤ_n|∑ _p∈ [N]{ X_ij,pY_ij,p-𝔼[X_ij,pY_ij,p]}|≥ε)
≤∑_i,j∈ℤ_nℙ( n^2/N|∑_p ∈ [N]{ X_ij,pY_ij,p-𝔼[X_ij,pY_ij,p]}|≥ε)
≤ 2n^2 exp{ -min (ε ^2/C_1^4 C^2 ‖γ‖ ^4 v^4 L^4 R^4 n^4, ε/C_1^2 C ‖γ‖ ^2 v^2 L^2 R^2 n^2)cN} = δ,  by Lemma <ref>.
|
http://arxiv.org/abs/2307.02878v1
|
20230706093212
|
Unusual surface states associated with the PT-symmetry breaking and antiferromagnetic band folding in NdSb
|
[
"Asuka Honma",
"Daichi Takane",
"Seigo Souma",
"Yongjian Wang",
"Kosuke Nakayama",
"Miho Kitamura",
"Koji Horiba",
"Hiroshi Kumigashira",
"Takashi Takahashi",
"Yoichi Ando",
"Takafumi Sato"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
Department of Physics, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan
Department of Physics, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan
Corresponding authors:
s.souma@arpes.phys.tohoku.ac.jp
t-sato@arpes.phys.tohoku.ac.jp
Center for Science and Innovation in Spintronics (CSIS), Tohoku University, Sendai 980-8577, Japan
Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, Sendai 980-8577, Japan
Institute of Physics II, University of Cologne, Köln 50937, Germany
Department of Physics, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan
Precursory Research for Embryonic Science and Technology (PRESTO), Japan Science and Technology Agency (JST), Tokyo, 102-0076, Japan
Institute of Materials Structure Science, High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan
National Institutes for Quantum Science and Technology (QST), Sendai 980-8579, Japan
National Institutes for Quantum Science and Technology (QST), Sendai 980-8579, Japan
Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, Sendai 980-8577, Japan
Department of Physics, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan
Institute of Physics II, University of Cologne, Köln 50937, Germany
Corresponding authors:
s.souma@arpes.phys.tohoku.ac.jp
t-sato@arpes.phys.tohoku.ac.jp
Department of Physics, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan
Center for Science and Innovation in Spintronics (CSIS), Tohoku University, Sendai 980-8577, Japan
Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, Sendai 980-8577, Japan
International Center for Synchrotron Radiation Innovation Smart (SRIS), Tohoku University, Sendai 980-8577, Japan
Mathematical Science Center for Co-creative Society (MathCCS), Tohoku University, Sendai 980-8577, Japan
We have performed micro-focused angle-resolved photoemission spectroscopy on NdSb, which exhibits type-I antiferromagnetism below T_N = 16 K. We succeeded in selectively observing the band structure for all three types of single-q antiferromagnetic (AF) domains at the surface. We found that two of the three surfaces, whose AF-ordering vector lies within the surface plane, commonly show two-fold-symmetric surface states (SSs) around the bulk-band edges, whereas the other surface, with an out-of-plane AF-ordering vector, displays a four-fold-symmetric shallow electronlike SS at the Brillouin-zone center. We suggest that these SSs commonly originate from the combination of the 𝒫𝒯 (space-inversion and time-reversal) symmetry breaking at the surface and the band folding due to the AF order. The present results pave a pathway toward understanding the relationship between the symmetry and the surface electronic states in antiferromagnets.
Unusual surface states associated with the 𝒫𝒯-symmetry breaking and antiferromagnetic band folding in NdSb
Takafumi Sato
August 1, 2023
==========================================================================================================
§ INTRODUCTION
The interplay among symmetry, electronic states, and physical properties is a key topic of condensed-matter physics, as highlighted by the formation of charge/spin-density wave (CDW/SDW) triggered by the Fermi surface (FS) nesting accompanied by the change in the translational symmetry, as well as the superconductivity associated with broken gauge symmetry characterized by an energy gap in the low-energy excitation.
Crystal surface provides a fertile playground to study such interplay, because it inherently breaks the translational and space inversion (𝒫) symmetries, often leading to unconventional physical properties originating from exotic surface states (SSs) such as the giant spin-split Rashba states <cit.> and topological Dirac-cone states <cit.>.
Breaking time-reversal (𝒯) symmetry via ferromagnetism is regarded as a promising route to realize even more exotic SSs, such as massive Dirac-cone states responsible for the quantum anomalous Hall effect <cit.>.
Besides the 𝒯-symmetry breaking, the variety of magnetic structures and the resulting abundant crystal symmetries, along with their breaking, have been theoretically suggested to enrich the physics in antiferromagnets.
Characteristic symmetries of antiferromagnets discussed so far include the combination of 𝒯-symmetry and the translational symmetry (S-symmetry), which gives rise to the surface-selective massive Dirac-cone features predicted in antiferromagnetic (AF) topological insulators (TIs) <cit.> and experimentally studied in the MnBi_2Te_4 family <cit.>.
Another important symmetry is the combination of 𝒫 and 𝒯 symmetries (𝒫𝒯-symmetry) that has been recently discussed in magnetic Weyl semimetals <cit.>, Kagome magnets <cit.>, and altermagnets <cit.>.
However, in a broader perspective, the interplay between surface electronic states and the combined symmetry in antiferromagnets is still unclear, despite the common interest in utilizing antiferromagnets for realizing emergent quantum phenomena and applications in spintronics <cit.>.
Here, we focus on rare-earth monopnictides RX_p (R = rare earth, X_p = pnictogen) with a rock-salt structure [see Fig. [FIG1]1(a)] in which the strong coupling between the electronic states and antiferromagnetism was pointed out <cit.>.
For example, CeSb was reported to undergo reconstruction of bulk-band structure and FS across T_N associated with p-f mixing <cit.>.
Also, strong modulation of topological Dirac-cone SS in the AF phase was clarified in CeBi <cit.>.
Recently, unusual Fermi-arc SS in the AF phase was reported by angle-resolved photoemission spectroscopy (ARPES) of NdBi <cit.>.
This SS is characterized by the magnetic splitting different from the conventional Rashba/Zeeman splitting, and its origin is discussed in terms of the topology.
Subsequent studies have clarified the existence of similar SS in other RX_p’s <cit.>, consistent with DFT calculations assuming the putative multiple-q type AF structures <cit.>.
However, the origin of SS and its relation to the topology are still under an intensive debate, mainly due to the presence of multiple AF domains at surface.
In particular, it is unclear what type of symmetry in the AF-ordered state is responsible for the emergence of unusual SS.
In this paper, we report ARPES study of NdSb and suggest the existence of SS that originates from the breaking of 𝒫𝒯 symmetry and the change in the translational symmetry associated with the AF order.
This is enabled by the utilization of micro-focused ARPES with a small beam spot to distinguish all the three types of single-q AF domains at the surface.
We found that, while the surface electronic states at the (001) facet strongly depend on the magnetic ordering vector of AF domains, they are commonly characterized by the band splitting, band folding, and band hybridization.
We also discuss implications of the present results in relation to the non-trivial topology.
§ EXPERIMENTS
Single crystals of NdSb were grown by the flux method using Sn flux.
Raw materials were mixed in a molar ratio of Nd:Sb:Sn=1:1:20 and placed in an alumina crucible.
The crucible was sealed in an evacuated Quartz tube filled with Ar gas of 50 mbar.
The ampoule was heated to 1100 °C, kept for 10 hours, and then cooled to 700 °C in 160 hours.
The excessive Sn was removed in a centrifuge. Obtained crystals were characterized by X-ray diffraction measurements.
Soft-X-ray (SX) ARPES measurements were performed with an Omicron-Scienta SES2002 electron analyzer with energy-tunable synchrotron light at BL2 in Photon Factory (PF), KEK.
We used linearly polarized light (horizontal polarization) of 515–601 eV.
VUV-ARPES measurements were performed with micro-focused vacuum-ultraviolet (VUV) synchrotron light at BL28 in PF <cit.>.
We used linearly or circularly polarized light of 60–75 eV.
The energy resolution for the SX- and VUV-ARPES measurements was set to be 150 and 10–20 meV, respectively.
Samples were cleaved in situ along the (001) plane of the cubic crystal in an ultrahigh vacuum of 1×10^-10 Torr.
Prior to the ARPES measurements, the crystal orientation was determined by X-ray Laue backscattering.
The Fermi level (E_F) of samples was referenced to that of a gold film electrically in contact with the sample holder.
§ RESULTS AND DISCUSSION
At first, we present the electronic states of NdSb in the paramagnetic (PM) phase.
By utilizing the bulk-sensitive SX photons to minimize the experimental uncertainty in the out-of-plane wave vector (i.e. k_z broadening), we mapped out the FS originating from the bulk bands.
In-plane FS mapping at k_z∼2π/a plane at T = 40 K shown in Fig. [FIG1]1(b) signifies an elliptical pocket centered at each X point of bulk Brillouin zone (BZ), together with a diamond-like pocket at the side Γ point (note that the intensity of the pocket at the X_3 point appears to be suppressed due to the matrix-element effect).
The latter diamond-like pocket consists of an outer diamond-like pocket and an inner circle-like one, as better visualized by the FS mapping at the k_z∼0 plane in Fig. [FIG1]1(c).
From the ARPES-intensity plot along the ΓX cut in bulk BZ shown in Fig. [FIG1]1(d), the pockets centered at the X and Γ points are assigned to the Nd 5d electron band (e1) and a couple of Sb 5p hole bands (h1 and h2), respectively, consistent with the compensated semimetallic nature of the RX_p family as highlighted by the bulk FS topology in Fig. [FIG1]1(a).
In RBi, the e1 and h2 bands cross each other midway between Γ and X points due to the bulk-band inversion, and the hybridization gap due to the strong spin-orbit coupling opens at the intersection.
On the other hand, in NdSb, the h2 and e1 bands approach each other at the X point without intersecting [see Fig. [FIG1]1(d) and corresponding schematics in Fig. [FIG1]1(f)], and consequently these energy bands retain their non-inverted character, as in the case of other RSb <cit.>.
The absence of band inversion is also suggested from the ARPES intensity along the Γ̅M̅ cut obtained with VUV photons (hν = 60 eV) [see Fig. [FIG1]1(e)], where the band structure obtained with SX photons is overall reproduced, but the band structure at different k_z’s (k_z = 0 and 2π/a) is simultaneously observed due to the strong k_z broadening caused by the short photoelectron escape depth in VUV measurements <cit.>.
Such k_z broadening is recognized by the observation of the e3 band at the Γ̅ point and the e2 band at the M̅ point both of which originate from the k_z = 2π/a plane [see Fig. [FIG1]1(a)], besides the e1, h1, and h2 bands originating from the k_z = 0 plane.
We found no signature of topological Dirac-cone SS at the Γ̅ and M̅ points [see Fig. [FIG1]1(e)] unlike the case of RBi <cit.>, consistent with the absence of the band inversion, signifying that NdSb is a topologically trivial semimetal in the PM phase.
The FS mapping in the PM phase shown in Fig. [FIG1]1(g) is consistent with the non-topological semimetallic character because only bulk-derived electron and hole pockets are identified.
Even when the sample is cooled down to T = 7 K well below T_N (= 16 K), the intensity pattern does not appear to show a significant change, as shown in Fig. [FIG1]1(h). However, a closer look reveals a qualitative change in the intensity profile; for example, a new tiny pocket appears inside the h1 pocket at the Γ̅ point (white arrow).
Now we turn our attention to the electronic states in the AF phase.
As shown in Fig. [FIG2]2(a), when NdSb forms a single AF domain (e.g. by applying a magnetic field along the [001] axis), the top surface becomes a FM layer with an out-of-plane magnetic moment (called here domain C), whereas two side surfaces have a stripe AF configuration with the magnetic moment parallel to the surface (domains A and B).
Owing to the cubic symmetry of NdSb, all these domains can be obtained by cleaving the crystal.
Under the absence of a magnetic field, it is expected that all the three types of AF domains as large as a few 100 μm <cit.> coexist on the top surface.
Thus, by utilizing the micro-beam spot of ∼10×10 μm^2, one can selectively probe each single AF domain.
For this sake, we carried out scanning micro-ARPES measurements on the cleaved surface, and were able to resolve all the three types of AF domains.
Results of FS mapping around the Γ̅ point in the AF phase (T = 8 K) obtained at three representative sample points A-C [Fig. [FIG2]2(a)] are shown in Figs. [FIG2]2(c), [FIG2]2(f), and [FIG2]2(i), respectively.
At point A [Fig. [FIG2]2(c)], one can see the overall C_2-symmetric intensity distribution, characterized by the existence of a small pocket on the left- and right-hand sides of the bulk h2 pocket (highlighted by dashed orange circles).
This pocket can be better recognized from the band dispersion along the horizontal k_x axis (cut 1) in Fig. [FIG2]2(d) which signifies a couple of shallow bands in the vicinity of E_F (white arrows).
These bands are assigned to the SS according to the previous ARPES studies of NdBi and NdSb, but the origin is under intensive debate <cit.>.
We found that these SS are absent along the vertical k_y axis (cut 2), as seen from both the FS mapping in Fig. [FIG2]2(c) and the band dispersion in Fig. [FIG2]2(e), confirming the C_2-symmetric nature of the overall electronic structure.
Since the small pocket is likely associated with the AF-induced band folding as we discuss later, one can attribute the electronic states at the sample point A to the domain A characterized by the AF configuration with the magnetic moment aligned vertically as shown in the inset to Fig. [FIG2]2(c).
We also identify a counterpart of domain A, namely, domain B with the horizontally aligned AF configuration.
As shown in Fig. [FIG2]2(f), the overall FS intensity profile obtained at the sample point B exhibits C_2 symmetry but is rotated by 90^∘ with respect to that of domain A [Fig. [FIG2]2(c)].
Specifically, small pockets appear along the k_y axis, but not along the k_x axis, as can be identified from the band dispersion along cuts 3 and 4 which signifies the existence of shallow bands only along the k_y cut (cut 4) [Fig. [FIG2]2(h)].
We found that the ARPES data at the sample point C, assigned to domain C with the FM top layer showing an out-of-plane magnetic moment, is very different from those of points A and B.
As shown in Fig. [FIG2]2(i), the FS mapping shows an overall C_4 symmetry as in the case of the PM phase shown in Fig. [FIG1]1, distinct from the C_2-symmetric behavior at points A and B in the AF phase [note that the FS mapping in Fig. [FIG1]1(h) was obtained at point C].
Small pockets outside the bulk h2 pockets are completely absent, as evident from the plots of band dispersion along cuts 5 and 6 in Figs. [FIG2]2(j) and [FIG2]2(k).
All these band/FS features are consistent with the C_4-symmetric AF configuration of domain C.
A careful look at Fig. [FIG2]2(i) reveals a small pocket at the Γ̅ point which is absent in domains A and B [Figs. [FIG2]2(c) and [FIG2]2(f)].
This pocket originates from the complicated reconstruction of band structure at the Γ̅ point as recognized from the emergence of a couple of shallow features in the vicinity of E_F around the Γ̅ point [white arrows in Figs. [FIG2]2(j) and [FIG2]2(k)], distinct from domains A and B.
These results strongly suggest the successful identification of all the three types of AF domains coexisting at the surface.
The present experimental results definitely rule out the triple-q AF structure which requires the existence of single C_4-symmetric AF domain at the (001) surface.
Moreover, the double-q AF structure can also be ruled out because in this case the small pocket should appear in a C_4-symmetric manner in domain C <cit.>.
Therefore, our experimental results strongly suggest the single-q nature of the AF structure in NdSb, consistent with the neutron diffraction experiments <cit.>.
To pin down the origin of the unusual SS forming the shallow pocket in domains A and B, we have carried out detailed temperature-dependent ARPES measurements across T_N along cut 1 (k_x axis) for domain A, and the result is shown in Fig. [FIG3]3(a).
At T = 8 K [Fig. [FIG3]3(a1)], one can see a couple of shallow bands crossing E_F (called here S1 and S2) outside the highly dispersive bulk h1 band, where the outer S2 band forms a small pocket as seen in Fig. [FIG2]2(c).
The S1 and S2 bands significantly reduce their spectral weight on moving away from the Γ̅ point.
On increasing temperature, the S1 and S2 bands gradually merge into a single band and become indistinguishable at T = 13 K, and their spectral weight eventually vanishes at T ∼ 15–16 K, around T_N (= 16 K).
This indicates that these bands are associated with the AF transition, as in the case of NdBi <cit.>.
The AF origin of these bands is also suggested from the observation of discontinuity in the spectral weight across T_N in the plot of ARPES intensity at E_F against temperature [Fig. [FIG3]3(b)].
To obtain further insights into the overall band structure of S1 and S2, we show in Fig. [FIG3]3(c) the ARPES intensity with enhanced color contrast obtained along the k_x axis in the wider 𝐤 range that covers the M̅ points at both sides of the Γ̅ point.
Obviously, replicas of the S1 and S2 bands also appear around the M̅ point (white arrows) and the overall band dispersion of the S1 and S2 bands seems mirror symmetric with respect to 𝐤 = 1/2Γ̅M̅ which is at the AF BZ boundary for domain A.
It is thus tempting to associate the S1 and S2 bands with the AF-induced band folding. We found that the S1 band around the M̅ point smoothly disperses toward higher binding energy up to 0.3 eV at 𝐤 = 1/2Γ̅M̅ and further disperses into the bulk h2 band, although its intensity is suddenly reduced at the intersection with the projection boundary of the bulk e1 pocket.
This suggests that the intensity of the S1 and S2 bands is enhanced by the surface resonance due to the overlap/proximity with the bulk bands.
Now that the temperature evolution of the SS is established for the C_2-symmetric domain A and equivalently for domain B, the next question is whether or not the SS associated with the AF band folding can also be identified for the C_4-symmetric domain C.
To address this issue, we show in Fig. [FIG4]4(a) the temperature dependence of the ARPES intensity obtained along cut 5 (k_x axis) for domain C [Fig. [FIG2]2(c)].
At T = 18 K in the PM phase [Fig. [FIG4]4(a6)], one can see a rather simple band structure, i.e. a bulk hole band h3 topped at 0.4 eV, a bulk electron band e3 bottomed at ∼0.3 eV, besides weaker hole bands h1 and h2.
At T = 14 K [Fig. [FIG4]4(a4)], a new hole band emerges (blue arrow).
On decreasing temperature, this band systematically moves upward, whereas the bulk h3 band is stationary against the temperature variation.
This band is assigned to the SS and separates from the bulk bands h3, as seen from the temperature dependence of EDCs in Fig. [FIG4]4(b).
High-resolution ARPES image at a lower temperature (T = 6.5 K) shown in Fig. [FIG4]4(c) signifies that this SS consists of a couple of hole bands called S5 and S6, as also evident from the numerical fitting to the momentum distribution curve (MDC) at E_B = 0.32 eV which crosses the S5 and S6 bands [see Fig. [FIG4]4(d)].
Another important aspect of the band structure in the AF phase is the emergence of shallow electron bands as seen in Figs. [FIG2]2(j) and [FIG2]2(k).
These bands are assigned to another SSs called S3 and S4, and they survive at least up to 12 K [Figs. [FIG4]4(a1-a3)].
On increasing temperature, these bands gradually smear out and eventually vanish at T = 16 K, confirming their AF origin.
From a comparison of extracted band dispersion between the AF phase (T = 6.5 K) and PM phase (T = 18 K), we concluded that totally four SSs (S3-S6) appear in the AF phase in domain C.
It is noted that all these SSs cannot be explained in terms of the bulk bands associated with the AF band folding, because there exists no counterpart of bulk bands at the X point [X_3 in Fig. [FIG1]1(a)] in the corresponding energy region [see also Fig. [FIG1]1(f)].
Based on the established band structure in the AF phase of all the three domains, we discuss the comprehensive feature of SS in more detail.
Since all the S1-S6 bands are observed only below T_N, they are definitely associated with the AF order.
Here we propose a phenomenological framework that considers three key band modulations in the AF phase, i.e. (i) the split-off SS from the bulk bands, (ii) the band folding of SS due to the AF potential, and (iii) the hybridization between the original and back-folded SS.
Although these band modulations occur simultaneously and are essentially indistinguishable, it would be useful to introduce a simplified picture that separates out these effects so as to intuitively understand our observation.
As shown in Fig. [FIG5]5(a), the original band structure along the Γ̅M̅ cut in the PM phase is characterized by the bulk hole bands (h1-h3) at the Γ̅ point and the bulk electron band (e1) at the M̅ point.
The AF order triggers the emergence of SS detached from the original bulk bands.
The hole SS with the Sb 5p character further show a spin splitting [black curves in Fig. [FIG5]5(b)], which is suggested from our observation that the SS always appear side by side (S1 and S2, S3 and S4, and S5 and S6).
We discuss the mechanism of the spin splitting later.
Since the SSs also suffer from the influence of the AF potential, band folding and resultant band hybridization would take place [Figs. [FIG5]5(c)-[FIG5]5(f)].
For domain A and equivalently domain B, since the folding of SS occurs within the surface plane with respect to the AF BZ boundary of 1/2Γ̅M̅ [Fig. [FIG5]5(c)], the overall SS band dispersion becomes mirror-symmetric with respect to 𝐤 = 1/2Γ̅M̅ as demonstrated in Fig. [FIG3]3(i).
In addition, the hybridization between hole and electron SS opens an energy gap at each intersection, resulting in the formation of S1 and S2 bands [Fig. [FIG5]5(d)].
Although many of the SSs are experimentally obscured owing to their weak intensity [highlighted by gray curves in Fig. [FIG5]5(d)], some surface bands inside the projection of bulk bands are clearly detected in the present experiment because of the intensity enhancement by the surface resonance.
We found that this picture can be also applied to domain C.
As shown in Figs. [FIG5]5(e) and [FIG5]5(f), the SS at the Γ̅ point which are split off from the bulk e3 and h3 bands hybridize with each other due to the AF-induced out-of-plane band folding to produce the complicated SS dispersion (S3-S6), although the reproduction of all the fine structures needs further elaboration utilizing first-principles band calculations.
What is the physical mechanism behind the present observation?
To answer this question, the following two key issues must be clarified; (i) the origin of spin splitting in the SS, and (ii) the reason why the SS appear only in the AF phase.
First, we discuss the issue (i).
In ordinary antiferromagnets with zero internal magnetic field, the bulk bands are spin degenerate because the 𝒫𝒯 symmetry corresponding to the operation to reverse the spin while keeping the same momentum k [E(𝐤,↑) = E(𝐤,↓)] holds in the bulk <cit.>.
Since the single-q AF phase of NdSb has a 𝒫𝒯-preserving center in the bulk crystal [black circle in Fig. [FIG1]1(a)], the 𝒫𝒯 symmetry holds and the bulk bands are spin degenerate in both the PM and AF phases.
On the other hand, at the surface of NdSb in the AF phase, the 𝒫𝒯 symmetry is broken at the surface boundary and thereby the energy bands are spin split.
This splitting is likely associated with the k-dependent magnetic exchange interactions because the splitting sets in at T_N [see Figs. [FIG3]3(b) and [FIG4]4(b)].
In this respect, it is interesting to note the similarity to altermagnets where the band splitting due to the 𝒫𝒯-symmetry breaking was recently proposed <cit.>.
Next, we discuss the issue (ii) why the SSs appear only in the AF phase.
Obviously, the SSs are not a conventional SS associated with dangling bonds and/or surface relaxation.
The possibility of surface reconstruction is unlikely because no such reconstruction was observed.
The possibility of the topological Fermi-arc SS associated with the magnetic Weyl semimetal phase <cit.> is also excluded because the bulk bands are spin degenerate.
The Z_2 topology associated with the combined S symmetry (time reversal and fractional translation) in the AF-TI phase <cit.> is further excluded because NdSb has no band inversion and the Dirac-cone SS is absent (Fig. [FIG1]1).
It is noted that, since the magnetic space group (#128.140) of the single-q AF structure is a subgroup of that of the PM phase (#225) <cit.>, there is no new symmetry operation activated only in the AF phase.
Hence, the emergence of the SS implies the change in the hidden topology in the bulk electronic states.
This change may be triggered by the band folding and hybridization, as seen from the behavior of SS discussed in Fig. [FIG5]5.
We thus suggest that the essential ingredients to realize the SS of NdSb in terms of symmetry are the combination of (i) the TRS breaking, (ii) the 𝒫𝒯-symmetry breaking, and (iii) the change in the translational symmetry.
We emphasize that the surface of antiferromagnets naturally satisfies all the above three conditions, whereas the other materials in the ordered phase such as ferromagnets, ferroelectrics, and CDW compounds only partially satisfy these conditions.
It is highly desirable to search for exotic SS associated with the entanglement of multiple symmetries in other antiferromagnets.
Such SSs would show a Berry-phase effect, seen in ferromagnets and recently also in altermagnets, due to the spin splitting of the energy bands <cit.>, and have the potential to lead to exotic surface properties such as surface anomalous Hall conductivity, surface anomalous Nernst effect, and surface magnetooptical response <cit.>.
It is thus desirable to investigate the relationship between the spectroscopically identified exotic SS and the surface anomalous physical properties in antiferromagnets.
§ CONCLUSION
The AF-domain-selective micro-ARPES measurements of NdSb have clarified the existence of three types of surface electronic states characterized by the C_2 and C_4 symmetries corresponding to the single-q AF domains with in-plane and out-of-plane magnetic moments, respectively.
We found that both the C_2- and C_4-symmetric surfaces are characterized by the emergence of new SS in the AF state.
Although the SS appears in different locations of the surface BZ, their origins are commonly explained in terms of the breaking of 𝒫𝒯 symmetry and the AF band folding.
The present result lays a foundation for studying exotic surface properties of antiferromagnets.
We thank Y. Kubota, T. Kato, T. Kawakami, and N. Watanabe for their assistance in the ARPES experiments.
This work was supported by JST-CREST (No. JPMJCR18T1), JST-PRESTO (No. JPMJPR18L7), and Grant-in-Aid for Scientific Research (JSPS KAKENHI Grant Numbers JP21H04435 and JP19H01845), Grant-in-Aid for JSPS Research Fellow (No: JP18J20058), KEK-PF (Proposal number: 2021S2-001), and UVSOR (Proposal number: 21-658 and 21-847).
The work in Cologne was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project number 277146847 - CRC 1238 (Subproject A04). A.H. thanks GP-Spin and JSPS, and D.T. thanks JSPS and Tohoku University Division for Interdisciplinary Advanced Research and Education.
Note added: After the completion of this work, we became aware of a similar work by Y. Kushnirenko et al. <cit.> which reports the domain-selective micro-ARPES study of NdSb.
prsty
100
LaShellPRL1996 S. LaShell, B. A. McDougall, and E. Jensen, Spin Splitting of an Au(111) Surface State Band Observed with Angle Resolved Photoelectron Spectroscopy, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.93.046403Phys. Rev. Lett. 77, 3419 (1996).
KoroteevPRL2004 Y. M. Koroteev, G. Bihlmayer, J. E. Gayone, E.V. Chulkov, S. Blugel, P. M. Echenique, and Ph. Hofmann, Strong Spin-Orbit Splitting on Bi Surfaces, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.3419Phys. Rev. Lett. 93, 046403 (2004).
HasanRMP2010 M. Z. Hasan and C. L. Kane, Colloquium: Topological Insulators, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.3045Rev. Mod. Phys. 82, 3045 (2010).
QiRMP2011 X. L. Qi and S. C. Zhang, Topological Insulators and Superconductors, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1057Rev. Mod. Phys. 83, 1057 (2011).
AndoJPSJ2013 Y. Ando, Topological Insulator Materials, https://journals.jps.jp/doi/10.7566/JPSJ.82.102001J. Phys. Soc. Jpn. 82, 102001 (2013).
ChangScience2013 C.-Z. Chang, J. Zhang, X. Feng, J. Shen, Z. Zhang, M. Guo et al., Experimental Observation of the Quantum Anomalous Hall Effect in a Magnetic Topological Insulator, https://www.science.org/doi/10.1126/science.1234414Science 340, 167 (2013).
MongPRB2010 R. S. K. Mong, A. M. Essin, and J. E. Moore, Antiferromagnetic Topological Insulators, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.81.245209Phys. Rev. B 81, 245209 (2010).
OtrokovNature2019 M. M. Otrokov, I. I. Klimovskikh, H. Bentmann, D. Estyunin, A. Zeugner, Z. S. Aliev et al., Prediction and Observation of an Antiferromagnetic Topological Insulator, https://www.nature.com/articles/s41586-019-1840-9Nature (London) 576, 416 (2019).
HaoPRX2019 Y.-J. Hao, P. Liu, Y. Feng, X.-M. Ma, E. F. Schwier, M. Arita et al., Gapless Surface Dirac Cone in Antiferromagnetic Topological Insulator MnBi_2Te_4, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.9.041038Phys. Rev. X 9, 041038 (2019).
LiPRX2019 H. Li, S.-Y. Gao, S.-F. Duan, Y.-F. Xu, K.-J. Zhu, S.-J. Tian et al., Dirac Surface States in Intrinsic Magnetic Topological Insulators EuSn_2As_2 and MnBi_2nTe_3n+1, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.9.041039Phys. Rev. X 9, 041039 (2019).
ChenPRX2019 Y. J. Chen, L. X. Xu, J. H. Li, Y. W. Li, H. Y. Wang, C. F. Zhang et al., Topological Electronic Structure and Its Temperature Evolution in Antiferromagnetic Topological Insulator MnBi_2Te_4, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.9.041040Phys. Rev. X 9, 041040 (2019).
WanPRB2011 X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Topological Semimetal and Fermi-arc Surface States in the Electronic Structure of Pyrochlore Iridates, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.83.205101Phys. Rev. B 83, 205101 (2011).
LinPRB2020 Z. Lin, C. Wang, P. Wang, S.Yi, L. Li, Q. Zhang et al., Dirac Fermions in Antiferromagnetic FeSn Kagome Lattices with Combined Space Inversion and Time-Reversal Symmetry, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.102.155103Phys. Rev. B 102, 155103 (2020).
SmejkalSciAdv2020 L. S̆mejkal, R. González-Hernández, T. Jungwirth, and J. Sinova, Crystal Time-reversal Symmetry Breaking and Spontaneous Hall Effect in Collinear Antiferromagnets, https://www.science.org/doi/10.1126/sciadv.aaz8809Sci. Adv. 6, eaaz8809 (2020).
YuanPRB2020 L.-D. Yuan, Z. Wang, J.-W. Luo, E. I. Rashba, and A. Zunger, Giant Momentum-dependent Spin Splitting in Centrosymmetric Low-Z Antiferromagnets, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.102.014422Phys. Rev. B 102, 014422 (2020).
SmejkalPRX2022 L. S̆mejkal, J. Sinova, and T. Jungwirth, Beyond Conventional Ferromagnetism and Antiferromagnetism: A Phase with Nonrelativistic Spin and Crystal Rotation Symmetry, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.12.031042Phys. Rev. X 12, 031042 (2022).
LiNP2010 R. Li, J. Wang, X. L. Qi, and S. C. Zhang, Dynamical Axion Field in Topological Magnetic Insulators, https://www.nature.com/articles/nphys1534Nat. Phys. 6, 284 (2010).
NakatsujiNature2015 S. Nakatsuji, N. Kiyohara, and T. Higo, Large Anomalous Hall Effect in a Non-collinear Antiferromagnet at Room Temperature, https://www.nature.com/articles/nature15723Nature 527, 212 (2015).
WadleyScience2016 P. Wadley, B. Howells, J. Železný, C. Andrews, V. Hills, R. P. Campion et al., Dirac Fermions in Antiferromagnetic FeSn Electrical Switching of an Antiferromagnet, https://www.science.org/doi/10.1126/science.aab1031Science 351, 587 (2016).
SettaiJPSJ1994 R. Settai, T. Goto, S. Sakatume, Y. S. Kwon, T. Suzuki, Y. Kaneta, and O. Sakai, Observation of Heavy Hole State in CeSb, https://journals.jps.jp/doi/abs/10.1143/JPSJ.63.3026?journalCode=jpsjJ. Phys. Soc. Jpn. 63, 3026 (1994).
KumiPRB1997 H. Kumigashira, H.-D. Kim, A. Ashihara, A. Chainani, T. Yokoya, T. Takahashi, A. Uesawa, and T. Suzuki, Paramagnetic-to-antiferroparamagnetic Phase Transition of CeSb Studied by High-resolution Angle-resolved Photoemission, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.56.13654Phys. Rev. B 56, 13654 (1997).
TakayamaJPSJ2009 A. Takayama, S. Souma, T. Sato, T. Arakane, and T. Takahashi, Magnetic Phase Transition of CeSb Studied by Low-Energy Angle-Resolved Photoemission Spectroscopy, https://journals.jps.jp/doi/10.1143/JPSJ.78.073702J. Phys. Soc. Jpn. 78, 073702 (2009).
JangSciAdv2019 S. Jang, R. Kealhofer, C. John, S. Doyle, J. Hong, J. H. Shim et al., Direct Visualization of Coexisting Channels of Interaction in CeSb, https://www.science.org/doi/10.1126/sciadv.aat7158Sci. Adv. 5, eaat7158 (2019).
OinumaPRB2019 H. Oinuma, S. Souma, K. Nakayama, K. Horiba, H. Kumigashira, M. Yoshida, A. Ochiai, T. Takahashi, and T. Sato, Unusual Change in the Dirac-cone Energy Band upon a Two-step Magnetic Transition in CeBi, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.100.125122Phys. Rev. B 100, 125122 (2019).
KurodaNC2020 K. Kuroda, Y. Arai, N. Rezaei, S. Kunisada, S. Sakuragi, M. Alaei et al., Devil's Staircase Transition of the Electronic Structures in CeSb, https://www.nature.com/articles/s41467-020-16707-6Nat. Commun. 11, 2888 (2020).
SchrunkNature2022 B. Schrunk, Y. Kushnirenko, B. Kuthanazhi, J. Ahn, L. L. Wang, E. O’Leary et al., Emergence of Fermi Arcs due to Magnetic Splitting in an Antiferromagnet, https://www.nature.com/articles/s41586-022-04412-xNature 603, 610 (2022).
KushnirenkoPRB2022 Y. Kushnirenko, B. Schrunk, B. Kuthanazhi, L. L. Wang, J. Ahn, E. O’Leary et al., Rare-earth Monopnictides: Family of Antiferromagnets Hosting Magnetic Fermi Arcs, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.106.115112Phys. Rev. B 106, 115112 (2022).
MadPRB2022 A. P. Sakhya, B. Wang, F. Kabir, C.-Y. Huang, M. M. Hosen, B. Singh et al., Complex Electronic Structure Evolution of NdSb Across the Magnetic Transition, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.106.235119Phys. Rev. B 106, 235119 (2022).
LiNPJ2023 P. Li, T. Li, S. Liao, Z. Cao, R. Xu, Y. Wang et al., Origin of the Exotic Electronic States in Antiferromagnetic NdSb, https://www.nature.com/articles/s41535-023-00557-8npj Quantum Mater. 8, 22 (2023).
WangCP2023 L. L. Wang, J. Ahn, R. J. Slager, Y. Kushnirenko, B. G. Ueland, A. Sapkota et al., Unconventional Surface State Pairs in a High-symmetry Lattice with Anti-ferromagnetic Band-folding, https://www.nature.com/articles/s42005-023-01180-6Commun. Phys. 6, 78 (2023).
KitamuraRSI2022 M. Kitamura, S. Souma, A. Honma, D. Wakabayashi, H. Tanaka, A. Toyoshima et al., Development of a Versatile Micro-focused Angle-resolved Photoemission Spectroscopy System with Kirkpatrick–Baez Mirror Optics, https://pubs.aip.org/aip/rsi/article-abstract/93/3/033906/2843376/Development-of-a-versatile-micro-focused-angle?redirectedFrom=fulltextRev. Sci. Instrum. 93, 033906 (2022).
OinumaPRB2017 H. Oinuma, S. Souma, D. Takane, T. Nakamura, K. Nakayama, T. Mitsuhashi et al., Three-dimensional Band Structure of LaSb and CeSb: Absence of band inversion, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.96.041120Phys. Rev. B 96, 041120(R) (2017).
KumiPRB1998 H. Kumigashira, H.-D. Kim, T. Ito, A. Ashihara, T. Takahashi, T. Suzuki et al., High-resolution Angle-resolved Photoemission Study of LaSb, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.58.7675Phys. Rev. B 58, 7675 (1998).
NayakNC2017 J. Nayak, S.-C. Wu, N. Kumar, C. Shekhar, S. Singh, J. Fink et al., Multiple Dirac Cones at the Surface of the Topological Metal LaBi, https://www.nature.com/articles/ncomms13942Nat. Commun. 8, 13942 (2017).
KurodaPRL2018 K. Kuroda, M. Ochi, H. S. Suzuki, M. Hirayama, M. Nakayama, R. Noguchi et al., Experimental Determination of the Topological Phase Diagram in Cerium Monopnictides, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.086402Phys. Rev. Lett. 120, 086402 (2018).
LiPRB2018 P. Li, Z. Wu, F. Wu, C. Cao, C. Guo, Y. Wu et al., Tunable Electronic Structure and Surface States in Rare-earth Monobismuthides with Partially Filled f Shell, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.98.085103Phys. Rev. B 98, 085103 (2018).
SakhyaPRB2022 A. P. Sakhya, S. Kumar, A. Pramanik, R. P. Pandeya, R. Verma, B. Singh et al., Behavior of Gapped and Ungapped Dirac Cones in the Antiferromagnetic Topological Metal SmBi, https://journals.aps.org/prb/abstract/10.1103/PhysRevB.106.085132Phys. Rev. B 106, 085132 (2022).
NeresonJAP1971 N. Nereson and G. Arnold, Magnetic Properties of CeBi, NdBi, TbBi, and DyBi, https://pubs.aip.org/aip/jap/article-abstract/42/4/1625/786446/Magnetic-Properties-of-CeBi-NdBi-TbBi-and-DyBi?redirectedFrom=fulltextJ. Appl. Phys. 42, 1625 (1971).
SchobingerJPCSSP1973 P. Schobinger-Papamantellos, P. Fischer, O. Vogt, and E. Kaldis, Magnetic Ordering of Neodymium Monopnictides Determined by Neutron Diffraction, https://iopscience.iop.org/article/10.1088/0022-3719/6/4/020J. Phys. C Solid State Phys. 6, 725 (1973).
ManfrinettiJAC2009 P. Manfrinetti, A. Provino, A. V. Morozkin, and O. Isnard, Magnetic Structure of the NaCl-type NdSb Compound, https://www.sciencedirect.com/science/article/abs/pii/S0925838809016090J. Alloys Compd. 487, L28 (2009).
PerezARMR2015 J. Perez-Mato, S. Gallego, E. Tasci, L. Elcoro, G. de la Flor, and M. Aroyo, Symmetry-Based Computational Tools for Magnetic Crystallography, https://www.annualreviews.org/doi/abs/10.1146/annurev-matsci-070214-021008Annu. Rev. Mater. Res. 45, 217 (2015).
MSGindentifer H. T. Stokes, D. M. Hatch, and B. J. Campbell, ISO-MAG, ISOTROPY Software Suite, retrieved from https://iso.byu.eduhttps://iso.byu.edu.
XiaoRMP2010 D. Xiao, M. C. Chang, and Q. Niu, Berry Phase Effects on Electronic Properties, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.1959Rev. Mod. Phys. 82, 1959 (2010).
ChenPRL2014 H. Chen, Q. Niu, and A. H. MacDonald, Anomalous Hall Effect Arising from Noncollinear Antiferromagnetism, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.017205Phys. Rev. Lett. 112, 017205 (2014).
NakatsujiNP2017 M. Ikhlas, T. Tomita, T. Koretsune, M. T. Suzuki, D. Nishio-Hamane, R. Arita, Y. Otani, and S. Nakatsuji, Large Anomalous Nernst Effect at Room Temperature in a Chiral Antiferromagnet, https://www.nature.com/articles/nphys4181Nat. Phys. 13, 1085 (2017).
SamantaJAP2020 K. Samanta, M. Lĕaić, M. Merte, F. Freimuth, S. Blügel, and Y. Mokrousov, Crystal Hall and Crystal Magneto-optical Effect in Thin Films of SrRuO_3, https://pubs.aip.org/aip/jap/article-abstract/127/21/213904/566066/Crystal-Hall-and-crystal-magneto-optical-effect-in?redirectedFrom=fulltextJ. Appl. Phys. 127, 213904 (2020).
KushnirenkoArXiv2023 Y. Kushnirenko, B. Kuthanazhi, L.-L. Wang, B. Schrunk, E. O'Leary, A. Eaton, P. C. Canfield, and A. Kaminski, Directional Effects of Antiferromagnetic Ordering on the Electronic Structure in NdSb, https://arxiv.org/abs/2305.17085arXiv:2305.17085.
|
http://arxiv.org/abs/2307.01255v1
|
20230703180001
|
Effects of clays on spin-spin relaxation: a route for non-invasive total clay content quantification
|
[
"Jefferson G. Filgueiras",
"Matheus S. J. de Miranda",
"Carla S. Semiramis",
"Rodrigo B. V. de Azeredo"
] |
physics.geo-ph
|
[
"physics.geo-ph",
"physics.app-ph"
] |
Jefferson G. Filgueiras [uff2, ufrj1] (corresponding author: jgfilgueiras@id.uff.br)
Matheus S. J. de Miranda [uff2]
Carla S. Semiramis [uff2]
Rodrigo B. V. de Azeredo [uff2]
[uff2] Instituto de Química, Universidade Federal Fluminense, Outeiro de São João Batista, s/nº, 24020-007, Niterói, RJ, Brazil
[ufrj1] Instituto de Física, Universidade Federal do Rio de Janeiro, CP 68528, 21941-972, Rio de Janeiro, RJ, Brazil
Clay minerals are important components of sandstone rocks, due to their significant role in petrophysical properties
like porosity and permeability. These minerals have a particular impact on Nuclear Magnetic Resonance measurements,
since the iron contained in clays generates internal gradients which directly affect the transverse relaxation. Here,
we apply a recently developed methodology to a set of 20 sandstones, with varying clay content and mineralogy, in order
to estimate the total clay content by using the effect of internal gradients on transverse relaxation. Based on these
measurements, we propose a geochemical rock typing built on the quantities they determine, namely the total
clay content and porosity.
Clay content; internal gradients; nuclear magnetic resonance; rock typing
Effects of clays on spin-spin relaxation: a route for non-invasive total clay content quantification
[
August 1, 2023
====================================================================================================
§ INTRODUCTION
The identification and quantification of clay minerals play an important role in sandstone reservoir characterization,
since they impact petrophysical properties like water saturation, permeability, and wettability, among others
<cit.>. In particular, some types of clays like kaolinite and smectites reduce both porosity and permeability.
Since such minerals are composed of very small grains, they augment the residual water saturation due to higher
capillary retention <cit.>. Thus, for sandstone reservoir logging, an accurate
determination of the total clay content helps to improve the interpretation of data of gamma-ray and neutron porosity
logs, which are particularly sensitive to the presence of clay minerals <cit.>.
The composition of clay minerals varies widely since they can exist in different forms and chemical compositions
<cit.>. For example, the non-swelling illite is usually found as (K,H_3O)(Al,Mg,Fe)_2(Si,Al)_4
O_10[(OH)_2,(H_2O)]. In particular, some common clay constituents are paramagnetic ions like iron (Fe), copper (Cu)
and nickel (Ni) <cit.>. Despite the importance of total clay content for a precise interpretation of log
data, the standard way to estimate the clay content, based on XRD techniques, requires powdered samples, i.e., it is an
invasive measurement that cannot be performed in the well.
The paramagnetic ions directly affect the magnetic susceptibility of the pore matrix, and as such, the chemical
composition of the clays in a sandstone impacts the Nuclear Magnetic Resonance (NMR) response
<cit.>, a technique often employed as a logging tool to evaluate oil fields. In particular,
the paramagnetic ions generate internal gradients inside the pores, giving rise to an additional relaxation mechanism in
sandstones. As the nuclear spins diffuse within the pore space, their phases are modified by the internal gradients,
resulting in a loss of magnetization. This is the diffusive relaxation mechanism, which occurs
simultaneously with the surface relaxation resulting from the interaction between the fluid and the pore surfaces.
This diffusive relaxation reduces the transverse relaxation time, T_2, increasing the
complexity of the interpretation of NMR data. For example, the reduction in T_2 can be misinterpreted as the presence of
smaller pores <cit.>. However, the internal gradients can be estimated by a few techniques, since this
relaxation mechanism depends explicitly on the echo time used in the T_2 measurements. To do that, a few 1D and
2D techniques have been proposed in the literature, along with the discussion of how to interpret such data due to assumptions
made during data analysis <cit.>. Recently, a very simple way to
estimate the total clay content using the diffusive relaxation mechanism has been proposed by Elsayed and collaborators
<cit.>. In their method, only two T_2 measurements are necessary to estimate the clay content, which can be
easily done in situ during drilling. Using only seven samples, they observed how total clay content and the
relative reduction of T_2 due to diffusive relaxation are connected, observing a nonlinear relation between these two
variables.
In this paper, we test the hypothesis of Elsayed et al. on a set of 20 sandstones, with varying total clay content
and diverse mineral compositions. Instead of the nonlinear behavior, we observe a linear correlation between total clay
content and the relative reduction of T_2. We also discuss how some specific minerals, like ankerite and kaolinite,
can impact the total clay estimation using diffusive relaxation.
This paper is organized as follows. Section 2 describes the theory necessary to understand how clays affect
spin-spin relaxation through the diffusive relaxation mechanism. The third section details the materials and methods used
for the mineralogical and relaxation measurements using XRD and NMR. Section 4 shows our results on total clay
quantification, along with a comparison with the XRD quantification and the clay-bound water saturation NMR measurement,
which uses a T_2 cutoff in the T_2 distribution <cit.>. The last section summarizes our results
and points to some directions for future work.
§ THEORY
Time-domain NMR measures the dynamics of the magnetization of ^1H nuclear spins when removed from equilibrium. Since
the state of the nuclear spins is directly affected by the molecular mobility, the NMR response is sensitive to the
environment surrounding the nuclear spins of the liquid used as molecular probe <cit.>. In
particular, the transverse relaxation time, T_2, can be used to study the geometry of porous media, since it is affected by
the interaction of the saturating fluid with the pore walls. This relaxation mechanism is the surface relaxation and, in
the fast diffusion limit, directly relates T_2 and pore size <cit.>. Since the magnetization is directly
proportional to the amount of fluid, the T_2 measurement also allows us to estimate the pore volume (provided the sample
is fully saturated), i.e., the effective porosity. For this reason, NMR measurements are widely adopted to estimate
properties like porosity and permeability in reservoir rocks <cit.>. In parallel to the surface relaxation
mechanism, when there are internal gradients in a rock, the nuclear spins diffusing on such gradients accumulate a phase
dependent on both intensity of the gradients and the position of the molecules. Since this phase cannot be fully refocused
after a 180^o spin inversion pulse in an echo sequence, there is a decrease in T_2 due to this diffusion effect. Thus,
the transverse relaxation T_2 can be described by <cit.>
1/T_2 = 1/T_2^bulk + ρ_2 S/V + (1/3)γ^2G^2D_0τ^2,
where T_2^bulk is the bulk relaxation time of the fluid, ρ_2 is the surface relaxivity, S
and V are the pore surface area and volume, respectively. The last term in the expression describes the effect of the
diffusive relaxation, with γ being the gyromagnetic ratio of the proton, G the internal field gradient, and D_0 the self
diffusion coefficient of the fluid. Finally, τ is half the echo time, the time between two successive 180^o pulses
in the Carr-Purcell-Meiboom-Gill (CPMG) sequence <cit.>. This equation is valid under the
assumptions of fast diffusion regime for the surface relaxation and free diffusion for the diffusive relaxation
<cit.>. When we consider porous media, the last assumption can lead to some problems in
interpreting the data, since it is valid only in the case of a uniform fluid, i.e., without any effects of restricted
diffusion. However, it was demonstrated that such an assumption is reasonable to describe the internal fields in sandstone
rocks <cit.>.
Equation (<ref>) describes how the relaxation time decreases due to the effect of internal gradients when we increase
the echo time. Thus, we can gain information on the internal gradients by measuring the T_2 distribution by varying
τ. Using such a measurement, it is possible to recover the distribution of effective internal gradients of a porous
medium <cit.> and even correlate the effective internal gradient to pore size using a 2D experiment
<cit.>. Here, we use Eq. (<ref>) to observe how the relaxation rate (1/T_2) behaves as a
function of τ^2. For short diffusion times, the spins do not interact with the pore surfaces, the free diffusion assumption is valid,
and we observe a linear behavior. However, as τ^2 increases, there is a deviation from the linear behavior due to
restricted diffusion <cit.>, reflected in a reduction of the apparent diffusion
coefficient. This effect of restriction is particularly strong on small pores like the ones associated with clays.
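As an illustration of this fitting procedure, the short Python sketch below (with made-up τ and T_2 peak values, and assuming water as the saturating fluid) estimates the effective internal gradient from the slope of 1/T_2 versus τ^2 in the free diffusion regime:

import numpy as np

# Illustrative data: half echo times tau (s) and corresponding T2 peak values (s)
tau = np.array([0.1e-3, 0.2e-3, 0.3e-3, 0.5e-3, 0.7e-3])   # short-tau (free diffusion) regime
T2_peak = np.array([0.180, 0.176, 0.170, 0.156, 0.140])

gamma = 2.675e8   # 1H gyromagnetic ratio (rad s^-1 T^-1)
D0 = 2.3e-9       # assumed self-diffusion coefficient of water (m^2 s^-1)

# The slope of 1/T2 versus tau^2 equals (1/3) * gamma^2 * G^2 * D0 in the free diffusion regime
slope, intercept = np.polyfit(tau**2, 1.0 / T2_peak, 1)
G = np.sqrt(3.0 * slope / (gamma**2 * D0))   # effective internal gradient (T/m)
print(f"Effective internal gradient: {G:.3f} T/m")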
For sandstones, the internal gradient field is related mainly to the presence of paramagnetic ions in the pore matrix,
especially iron. So, we should expect an increase in the intensity of the internal gradients with higher clay content. As
such, Elsayed et al. hypothesized that the clay content of a porous medium is correlated with shifts in T_2 <cit.>.
§ MATERIALS AND METHODS
Rock samples. We use 20 sandstone rocks with 1.5" diameter and 5 cm length, provided by Kocurek Industries
(Caldwell, USA). We cleaned the samples with Soxhlet extraction to completely remove any hydrocarbons or salts residing
inside the pores. The plugs were dried at 60^oC in an oven. We measured both porosity and permeability using a porosimeter
and a permeameter. We calculated porosity using Boyle's law and permeability using Darcy's law. The full list of the samples
and their porosity and permeability are shown in Table <ref>. In particular, the Idaho Gray sample has a
permeability higher than the upper measurement limit of the permeameter (5000 mD).
Geochemistry and mineralogy. We determined the total geochemical composition by using X-ray fluorescence,
with an EDXRF spectrometer (PANalytical, Malvern, UK). We also measured the mineralogical composition with X-ray diffraction, using a D2-PHASER (Bruker,
Karlsruhe, Germany). The measurements were done using the following parameters: range 3^o to 100^o,
0.02^o step size and a 3 s scanning time. The XRD data was processed using the EVA® software.
For the mineralogical quantification of the samples, we followed the Rietveld Method using DIFFRAC.SUITE
TOPAS® software <cit.>. The uncertainty in each mineral is
1 %wt, and the error for total clay content is 1.7 %wt, obtained through error propagation.
Nuclear Magnetic Resonance. We performed ^1H NMR relaxometry measurements using a Geospec DRX core analyzer
(Oxford Instruments, Oxford, UK), with a Larmor frequency of 2.2 MHz (0.05 T). We saturated the samples with a KCl brine
at 30000 ppm concentration and acquired CPMG measurements with 21 geometrically spaced echo times between 0.2 and 6
milliseconds using the GIT software (Green Image Technologies, Fredericton, Canada). The T_2 distributions were obtained
using an Inverse Laplace Transform (ILT) for each echo time, with 512 points. We wrapped the core samples with 2" wide
Teflon tape and placed them inside a PEEK support to prevent fluid loss during the measurements. The NMR porosity was
estimated using a standard sample containing a known amount of water, provided by Green Image Technologies.
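For reference, a common numerical recipe for this ILT step is Tikhonov-regularised non-negative least squares. The sketch below uses synthetic decay data and an arbitrary regularisation weight, and is only meant to illustrate the idea, not to reproduce the settings of the GIT software:

import numpy as np
from scipy.optimize import nnls

t = np.arange(1, 2001) * 0.2e-3                           # echo times (s), illustrative
m = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.005)    # synthetic two-component decay

T2 = np.logspace(-4, 1, 512)                              # log-spaced T2 grid (512 bins)
K = np.exp(-t[:, None] / T2[None, :])                     # multi-exponential kernel

alpha = 1.0                                               # regularisation weight (assumed)
K_aug = np.vstack([K, np.sqrt(alpha) * np.eye(T2.size)])  # Tikhonov augmentation
m_aug = np.concatenate([m, np.zeros(T2.size)])
f, _ = nnls(K_aug, m_aug)                                 # non-negative T2 distribution

T2_peak = T2[np.argmax(f)]                                # dominant relaxation time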
§ RESULTS
Rock sample geochemistry and mineralogy. We used X-ray fluorescence to determine the proportion of iron,
in weight, in each sample in order to correlate this quantity of iron to the intensity of the internal gradients in each
sample. Iron is the most common paramagnetic ion found in sandstones and the main source of internal gradients in this
type of rock. The amount of iron in each sample is shown in Table <ref>. While the amount of iron is important for the
intensity of the internal gradients, the most important feature is where this iron is placed in the pores. An iron
ion far from any pore surface does not contribute to the internal field gradients, since the hyperfine interaction between
the electronic spin of such ions with the nuclear spins of the saturating fluid occurs only in a very short range. Thus,
iron contained in minerals such as ankerite, a carbonatic cement found in several of our samples, does not contribute to the
internal gradients as much as iron contained in chlorite, a clay usually observed coating the pore surface. The amount of
iron and the mineralogical composition of the 20 sandstones we analyzed are found in Table <ref>. Given that the uncertainty in
the amount of each mineral is 1 %wt, we consider minerals with content below this uncertainty as traces
in the mineralogical composition.
The samples can be classified into two groups: the quartz sandstones, with quartz as their main component, and the arkosic/
subarkosic sandstones, with a significant amount of feldspars. The quartz sandstones are Berea Stripe, Bentheimer,
Leopard, Briarhill, Buff Berea, Berea, Castlegate, Gray Berea, and Upper Gray Berea. The arkosic/subarkosic
sandstones are Bandera Brown, Bandera Gray, Kirby, Torey Buff, Idaho Gray, Boise Idaho Gray, Boise Idaho
Brown, Kentucky, Nugget, Scioto, and Sister Gray Berea. In particular, the Boise Idaho Brown sample has a large
concentration of zeolite. While pores in both zeolites and clays have pore sizes on the same length scale and several
similarities in their structure <cit.>, zeolites do not impact the permeability inside the pore matrix as clay minerals do.
The main paramagnetic ion on the sandstones is iron, contained in the cementation present in all samples. There are
three types of cementation in the sandstones: clays (illite, chlorite, and kaolinite), carbonatic (ankerite, dolomite, and
calcite) and amorphous (composed of iron oxides, probably as coating the pore surfaces). The cement varies between 3 and
20%, occurring mainly in the form of clays, except the Torey Buff sample, which has intense carbonatic
cementation of more than 20% in weight. The minerals containing iron are the clays illite and chlorite, the carbonate
ankerite, and the iron oxides magnetite and hematite, observed only in Gray Berea and Bandera Gray. These are the minerals
responsible for the internal gradients observed in the samples. Kaolinite is the only clay that has no iron in its
composition, and as such, it does not generate internal gradients.
Diffusive relaxation and total clay content quantification. The T_2 distribution offers a lot of information
about the porous medium, such as the effective porosity and how the fluid is distributed along the pore network. In
particular, it is used to estimate the clay content by setting a T_2^cutoff for clay-bound water, and the cumulative
porosity below this cutoff is assumed as the clay-bound water saturation for sandstones <cit.>. While this
simple measurement is readily accessible on-site, the shortest T_2 values are the most affected by the presence of
internal gradients and their effect can mislead the estimation of the clay content. By using this method with a
T_2^cutoff = 3 ms, we found a quite poor correlation between the total clay content determined by XRD and the
clay-bound water determined by NMR. The data is shown in Fig. <ref> and Table <ref>. All T_2 distributions
are shown in the Supplementary Material <cit.>.
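For clarity, the cutoff-based estimate simply sums the part of the T_2 distribution below 3 ms, as in the minimal sketch below (variable names are illustrative and the amplitudes are assumed to be scaled to porosity units):

import numpy as np

def clay_bound_water(T2_bins, amplitudes, cutoff=3e-3):
    # Cumulative porosity below the T2 cutoff, taken as clay-bound water
    T2_bins = np.asarray(T2_bins)
    amplitudes = np.asarray(amplitudes)
    return amplitudes[T2_bins < cutoff].sum()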
The low correlation observed, R^2 = 0.67, can be analyzed by looking at some of the outliers in Fig. <ref>. The
samples Leopard and Nugget have 4.4 and 6.2 % total clay content, respectively. However, they have very large
clay-bound water saturations of 10.2 and 14.4, respectively. This is due to the large surface relaxivities of these samples
<cit.>, resulting in porosity observed in the range below the T_2^cutoff of 3 ms which is not
associated with clays. Other samples have their clay content underestimated, like Buff Berea, Gray Berea, and Bandera Gray,
which have kaolinite as the dominant clay type. Kaolinite is known to have a small specific surface area in comparison to other
clays, leading to a longer T_2 for the water associated with it <cit.>.
As already mentioned, an alternative to the T_2^cutoff method has recently been proposed by Elsayed et al. <cit.>.
In this approach, we use the shift of the peak of the T_2 distribution due to the internal field gradients, which
follows Eq. <ref> for short diffusion times, when the free diffusion approximation is valid. The core of the method is
that the effects of restriction make the displacement of T_2^peak smaller than it should be in the case of free
diffusion. Since clays are usually minerals with larger magnetic susceptibility, a large amount of clays implies
bigger shifts of T_2^peak due to stronger internal gradients. As a figure of merit, we use the relative displacement
of T_2^peak in relation to the shortest τ available, of 100 μs in our case. This is quantified by the
equation
Δ T_2^τ = (T_2^0.1 ms - T_2^τ)/T_2^0.1 ms,
where T_2^0.1 ms is the peak of the T_2 distribution for τ = 100 μs and T_2^τ
is the peak of the distribution when half echo time is given by τ. We emphasize that, for clay quantification, we
must use τ long enough to be outside the free diffusion limit <cit.>. To avoid variations
in T_2^peak due to redundancies in the T_2 distributions, we must use a large number of bins.
In our case, we used 512 points in each T_2 distribution. We used 21 geometrically spaced
echo times for each of our 20 samples. We used the behavior for small τ^2, i.e., the free diffusion regime, to
estimate the internal gradient for each sample using a linear fit as a function of τ^2, as shown in Figure
<ref> for four samples (Boise Idaho Brown, Berea, Bandera Gray, and Kentucky). The fitted values for the internal
gradients are shown in Table <ref> for all samples. We see that, for higher clay contents, there is a larger
variation in 1/T_2 as well as a larger internal gradient, reflected by the steeper slope in the free diffusion
regime.
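The figure of merit defined above only requires locating the dominant peak of two T_2 distributions, as in the following minimal sketch (array names are illustrative):

import numpy as np

def delta_t2(T2_bins, dist_ref, dist_tau):
    # Relative displacement of the T2 peak between the reference distribution
    # (tau = 0.1 ms) and the distribution measured with a longer half echo time tau
    T2_bins = np.asarray(T2_bins)
    t2_ref = T2_bins[np.argmax(dist_ref)]
    t2_tau = T2_bins[np.argmax(dist_tau)]
    return (t2_ref - t2_tau) / t2_ref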
We used four τ values in order to study the behavior of Δ T_2^τ as a linear function of the total clay
content, τ = 1.5, 1.8, 2.2 and 2.6 ms. We observe that the optimal value of τ is 1.5 ms, with R^2 = 0.89
between Δ T_2^τ and total clay content data sets. It has a slightly better performance than τ = 1.8 ms
(R^2 = 0.88), and the correlation decreases for longer τ. Despite this decrease, we still observe a much better
correlation when compared with the T_2^cutoff method. As shown in Figure <ref> (a), we have a reasonable linear
correlation between Δ T_2^τ and the total clay content. We remark that if we use a logistic function to describe
the relation between Δ T_2^τ and
the total clay content, we have almost the same correlation coefficient, R^2 = 0.90. Thus, for simplicity, we used a
linear model. One argument in favor of the logistic function is that it is a bounded function, just like Δ T_2^τ.
But since it is unusual to observe sandstones with more than 20 % of total clay content, a linear model is sufficient to
describe this relationship.
We observe a few outliers in Fig. <ref>(a) below the fitting of the data. This is the case for samples like
Castlegate and Torey Buff, which have kaolinite as the most abundant clay in their mineralogical composition. As
already mentioned, kaolinite is the only clay observed in our samples that does not have any paramagnetic ion in its composition
and thus does not contribute to the internal gradients. Such a feature results in a smaller shift of T_2 due to the internal gradients,
which implies a smaller clay content predicted by our measure, Δ T_2^τ. However, as it is clear from
Fig. <ref>(a), the usage of internal fields to quantify the total clay content offers a significant advantage over the
estimation of clay-bound water based on T_2^cutoff shown in Fig. <ref>.
We can also use the total clay content to propose a simple rock typing, which could in principle indicate the order of magnitude
of the permeability. This relies solely on the fact that clays are one of the biggest hindrances to fluid flow throughout the
pore network <cit.>. To do that, we use a slight modification of Δ T_2^τ:
δ = aϕ^2(1 - Δ T_2^τ)^2,
in such a way that the logarithm of the permeability K increases with δ and is scaled by the
porosity ϕ; here, a is simply a scaling factor that makes δ vary between 0 and 1. We used the squares to maximize
the correlation between δ and log_10(K). Figure <ref>(b)
shows how log_10(K) correlates well with δ (R^2 = 0.89), with large values for δ associated with low total clay content and the
opposite holding for small δ values. In this way, with the same measurement we can also roughly infer the
permeability of the sample without relying on the SDR or Timur-Coates permeability models, which require the calibration of
lithological constants <cit.>. In simple words, a small δ value, i.e., a large
internal gradient, implies low permeability due to the high clay content. The inverse holds for δ≈ 1
being related to samples with high permeability and small internal gradients.
We can use Eq. (<ref>) to classify the rocks into three different sets: low, medium, and high total clay content.
As we see in Figure <ref>(b), for δ < 0.2, we have samples with high total clay content, bigger
than 10 %. We classify such samples as having high clay content. These samples have their permeability distributed along the
two decades with the lowest permeability values shown in Figure <ref>(b). We define as low clay content the samples
with δ > 0.5. Such samples have their permeability values all above 400 mD and their clay content
ranging between 0 and 6 %. For 0.2≤δ≤ 0.5, we define the medium total clay content rocks. In this case,
the permeability varies over almost three decades, between 10 and 400 mD, and the total clay content ranges between 5
and 9 %.
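A compact sketch of this classification rule is given below; the function name is illustrative, and the default scaling factor a = 1 is only a placeholder for the value actually used to keep δ between 0 and 1:

def rock_type(phi, delta_t2_tau, a=1.0):
    # delta = a * phi^2 * (1 - DeltaT2)^2, classified with the thresholds quoted above
    delta = a * phi**2 * (1.0 - delta_t2_tau)**2
    if delta < 0.2:
        return delta, "high total clay content (> 10 %)"
    if delta <= 0.5:
        return delta, "medium total clay content (5-9 %)"
    return delta, "low total clay content (0-6 %)"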
§ DISCUSSION AND CONCLUSION
Clay quantification is an important task during well logging since an accurate determination of the total clay
content is useful for the interpretation of gamma-ray and neutron porosity logs <cit.>. Moreover,
clay cement reduces important properties such as porosity and permeability. Here, we applied the methodology recently developed
by Elsayed and collaborators <cit.> to a set of 20 sandstone samples, a larger set than the one used in their original
work. Using such a set of samples, we observed a linear correlation between the relative displacement of the dominant peak of the T_2
distribution, given by Eq. (<ref>), and the total clay content. While this is distinct from the nonlinear behavior observed
by Elsayed et al., a linear model is much simpler and easier to interpret.
We took one step further and proposed a geochemical rock typing based on Δ T_2^τ, which allows us to determine if
a rock has low, medium or high clay content based on NMR measurements, readily available during well logging and with minimal data
processing. Finally, we showed that if we also use the porosity, we can define a quantity that classifies sandstones according to
their permeability, as shown in Figure <ref>(b). This simple figure of merit indicates the possibility of including the
total clay content in NMR permeability models, by modifying the well-known and widely applied Timur-Coates and SDR models.
The good correlation between Δ T_2^τ and the total clay content observed here indicates a non-invasive route for
clay quantification. For in situ applications, future work should evaluate this methodology when dealing with more than one fluid and
under different wettability conditions, since the internal gradients are particularly strong close to the pore surface.
§ DECLARATION OF INTERESTS
The authors declare that they have no known competing financial interests or personal relationships that could have appeared
to influence the work reported in this paper.
§ DATA AVAILABILITY
Data will be made available on request.
§ ACKNOWLEDGEMENTS
This research was carried out in association with the ongoing R&D project registered as ANP nº 21289-4, ”Desenvolvimento de modelos matemáticos, estatísticos e computacionais para o aperfeiçoamento da caracterização petrofísica de reservatórios por Ressonância Nuclear Magnética (RMN)” (UFF/Shell Brasil/ANP), sponsored by Shell Brasil Petróleo Ltda under the ANP R&D levy as "Compromisso de Investimentos com Pesquisa e Desenvolvimento." The authors also acknowledge the support from CAPES, CNPq and FAPERJ.
§ AUTHOR CONTRIBUTIONS
Jefferson G. Filgueiras: Conceptualization, Methodology, Investigation, Formal analysis, Experimental data supply, Writing - Original Draft;
Matheus S. J. de Miranda: Investigation, Formal analysis, Experimental data supply, Writing - Review & Editing;
Carla S. Semiramis: Investigation, Formal analysis, Experimental data supply, Writing - Review & Editing;
Rodrigo B. V. de Azeredo: Supervision, Resources, Project administration, Methodology, Fund acquisition, Writing - Review & Editing;
|
http://arxiv.org/abs/2307.00925v1
|
20230703105305
|
Automatic Design of Semantic Similarity Ensembles Using Grammatical Evolution
|
[
"Jorge Martinez-Gil"
] |
cs.CL
|
[
"cs.CL",
"cs.AI"
] |
Software Competence Center Hagenberg GmbH
Softwarepark 32a, 4232 Hagenberg, Austria
<jorge.martinez-gil@scch.at>
Semantic similarity measures are widely used in natural language processing to catalyze various computer-related tasks. However, no single semantic similarity measure is the most appropriate for all tasks, and researchers often use ensemble strategies to ensure performance. This research work proposes a method for automatically designing semantic similarity ensembles. In fact, our proposed method uses grammatical evolution, for the first time, to automatically select and aggregate measures from a pool of candidates to create an ensemble that maximizes correlation to human judgment. The method is evaluated on several benchmark datasets and compared to state-of-the-art ensembles, showing that it can significantly improve similarity assessment accuracy and outperform existing methods in some cases. As a result, our research demonstrates the potential of using grammatical evolution to automatically compare text and prove the benefits of using ensembles for semantic similarity tasks.
Ensemble Learning, Grammatical Evolution, Semantic Similarity Measurement
§ INTRODUCTION
In recent times, ensemble learning has become a widely-used technique to address the limitations of individual methods by combining them into a unified model. Aggregating the predictions of diverse methods aims to mitigate individual method shortcomings, such as outliers in response to specific inputs. Therefore, the fundamental premise behind ensemble learning is the expectation that a carefully chosen set of methods will yield superior results compared to any single method alone <cit.>.
While ensemble learning has attracted considerable attention and received extensive research efforts <cit.>, its application in semantic similarity measurement remains largely unexplored. This presents a compelling opportunity to showcase the potential of this approach to address the challenge of automatically determining semantic similarity between pieces of textual information. The reason is that, despite advancements in semantic similarity measures, a lack of consensus persists concerning the suitability of individual measures for assessing the semantic similarity between pieces of textual information <cit.>.
The introduction of ensemble learning to semantic similarity measurement aims to bridge this gap and improve the assessments' reliability and accuracy. The motivation behind this approach comes from the idea that a diversified pool of semantic similarity measures can compensate for the inherent limitations of individual measures <cit.>. Through the aggregation of multiple measures, our proposed approach seeks to leverage the diversity of these measures to achieve a higher level of agreement and consistency.
Through this research, we aim to contribute to natural language processing (NLP) by providing a novel perspective on semantic similarity measurement. We propose adopting Grammatical Evolution (GE) <cit.> as an ensemble learning strategy to address the inherent misalignment among existing semantic similarity measures. Empirical evaluations conducted on benchmark datasets will demonstrate the effectiveness of GE ensemble-based approaches in significantly improving performance concerning most existing methods' capabilities.
The rationale behind this research is that GE can bring a new point of view to the semantic similarity measurement domain. The collective recommendation capability of various similarity measures allows for augmenting the quality and consistency of semantic similarity assessments, paving the way for more accurate and reliable real-world applications. Therefore, the significant contributions of this work can be summarized as follows:
* We propose, for the first time, the automatic learning of semantic similarity ensembles based on the notion of GE. This method offers advantages such as high accuracy, excellent interpretability, a platform-independent solution, and easy transferability to problems of analog nature.
* We implement and empirically evaluate our strategy to compare it with existing work and demonstrate its superiority in solving some of the most well-known dataset benchmarks used by the research community.
The rest of this paper is organized as follows: Section 2 provides an overview of related work in ensemble learning using GE and other kinds of ensembles for semantic similarity. Section 3 introduces the problem statement. Section 4 presents the details of the proposed GE strategy to address the challenge. Section 5 describes the experimental setup and presents the evaluation results. Section 6 discusses the results obtained and future work directions. Finally, Section 7 concludes the paper.
§ STATE-OF-THE-ART
GE is a particular form of genetic programming (GP) that uses a formal grammar (FG) to generate computer programs <cit.>. GE is considered an evolutionary strategy that makes use of a genotype-to-phenotype strategy. To do that, GE uses an FG definition to describe the language that the model might produce. The most common approach uses the Backus-Naur Form (BNF) <cit.>, a widely used notation to formulate an FG using production rules. These rules include terminals and non-terminals (which can be expanded into terminal and non-terminal symbols).
The BNF grammar allows defining the structure of the ensembles to be learned. Please note that in this work, the term ensemble is equivalent to a program aiming to aggregate an initial set of semantic similarity measures as effectively and efficiently as possible. The FG acts, therefore, as the guideline for the evolution of the ensembles, and it defines the set of valid ensembles that can be generated. This allows for a more structured and controlled evolution compared to rival techniques.
Furthermore, the evolution of the learning process is guided towards optimizing a fitness function, which measures the quality of the generated ensembles in the training phase. In our case, we can evaluate the quality based on the degree of correlation it presents concerning human judgment. Moreover, this fitness function also allows the selection of the ensembles that will be used as the parents in the next generation. This process is repeated until a good enough solution has been reached or a pre-defined number of iterations has been consumed.
Apart from the possibility of reaching high degrees of accuracy, the other significant advantage of this approach is the ability to generate models that adhere to a specific syntax and structure (i.e., good interpretability of the resulting models). Therefore, this approach is advantageous in domains where the capability of understanding the solution is essential.
§.§ Semantic Similarity
The challenge of semantic similarity measurement is a critical task in many computer-related fields <cit.>. It aims to quantitatively capture the degree of likeness between two pieces of text based on their underlying meaning <cit.>. In recent years, significant progress has been made in this field, leading to the development of state-of-the-art techniques <cit.>. One prominent approach involves utilizing deep learning (DL) models, such as transformer-based architectures like BERT (Bidirectional Encoder Representations from Transformers) <cit.>. These models are pre-trained on vast amounts of text, enabling them to learn text representations. Fine-tuning these models has shown remarkable performance, outperforming traditional methods that rely on handcrafted features <cit.>.
Another line of research focuses on leveraging distributional semantics, which captures meaning using distributional patterns of words in a large corpus. Methods such as word embeddings (e.g., Word2Vec <cit.>) represent words as dense vectors in a continuous vector space. The semantic resemblance between the textual pieces can then be estimated by comparing the vector representations of these pieces using methods like cosine distance. Additionally, recent studies have explored incorporating contextual information using contextualized word embeddings, such as Embeddings from Language Models (ELMo) <cit.> and Universal Sentence Encoder (USE) <cit.>. Considering the surrounding words, these models generate context-dependent word representations, leading to improved semantic similarity estimation in a given context.
In recent times, ensembles have also emerged as a helpful technique in semantic similarity measurement, offering a reasonable solution to the challenges posed by the inherent complexity of human language <cit.>. The idea of aggregating multiple semantic similarity measures allows ensembles to mitigate the limitations of individual measures and capture a more comprehensive understanding of semantic similarity <cit.>. Ensembles exploit each measure's inherent complementarity and different perspectives by leveraging the diversity of these existing measures <cit.>. This means that improving the performance, and transfer learning capabilities is usually possible <cit.>. With their ability to aggregate diverse perspectives and mitigate model biases, ensembles have proven helpful in semantic similarity measurement, pushing the boundaries of accuracy and offering promising lines of research <cit.>.
In summary, state-of-the-art techniques for semantic similarity measurement have witnessed significant progress in the last years, driven by the use of DL models, the incorporation of contextual information, and the exploitation of ensembles. These approaches have demonstrated exemplary performance, being superior to traditional methods. As the field continues to evolve, further research and development are expected to improve the existing methods, facilitating many computer-related applications <cit.>.
§.§ Grammatical evolution
GE is a well-known technique in the domain of GP, combining the principles of genetic algorithms (GAs) and FGs. It has gained recognition as a state-of-the-art approach for evolving computer programs that exhibit complex behaviors <cit.>. It offers a framework to automatically generate programs (ensembles in our particular case) by evolving their syntax and semantics through a GA.
The ensembles can be represented through strings of symbols, which allows their manipulation and evolution using genetic operators through FGs. This facilitates the exploitation of a vast search space that allows the discovery of practical solutions to a wide range of computational problems <cit.>. Over time, GE has undergone remarkable advances, including knowledge integration, mutation process improvements, and new crossover operators. These advances have improved the accuracy and scalability of GE-based solutions, making it one of the most promising techniques in the GP landscape <cit.>.
The state-of-the-art in GE involves developing hybrid approaches that combine GE with other techniques like particle swarm optimization <cit.>. These hybrid approaches can leverage the strengths of multiple techniques to overcome limitations and improve search capability. Additionally, there is an increased focus on improving scalability through parallel and distributed computing paradigms. Researchers have achieved efficacy in solving computationally intensive problems using these paradigms. Furthermore, advancements in fitness approximation techniques have significantly improved efficiency by reducing computational overhead. The continual exploration of novel techniques aims to improve the performance, scalability, and applicability of this GP approach.
§.§ Differences between Genetic Programming and Genetic Algorithm
The main distinction between GP and GAs is their optimization approaches. GAs optimize a given function by searching for optimal parameter values, while GP generates programs (ensembles in this case) that perform well on a specific task. GP uses a higher-level representation to capture complex relationships among variables, enabling the encoding of complex solutions within the population. It incorporates a refined selection process to maintain population diversity and avoid premature convergence. GP's advanced crossover operator generates novel solutions, while its mutation operator maintains diversity by introducing variations. Additionally, GP utilizes a complex fitness function that ensures a more accurate and thorough assessment of ensembles during the evolutionary process.
§.§ Contribution over the state-of-the-art
We propose exploring GE as a suitable approach for learning ensembles within the domain of semantic similarity measurement. The primary goal is to identify a program (or ensemble, as applicable in our specific case) that attains a near-optimal fitness value for a given objective function, which involves emulating human judgment. While the usual way to proceed is to use a tree-structured expression that can be directly manipulated <cit.>, our method uses genetic operators on an integer string that is subsequently transformed into an ensemble using a BNF grammar. The advantages offered by this strategy include enhanced accuracy, improved interpretability of the resulting model, and a simplified process of converting the model into the most widely used programming languages.
§ PROBLEM STATEMENT
Let us assume that we have a set of candidate similarity measures ℳ = M_1, M_2, ..., M_n, where n is the total number of candidates. Let us assume that each M_i takes a pair of textual pieces X and Y as input and produces a similarity score S_i as output.
Our goal is to automatically select a subset of ℳ and aggregate the selected measures into an ensemble E, such that E(X,Y) provides an accurate semantic similarity score.
Let us also assume that we have a vector 𝐰 = [w_1, w_2, ..., w_n], where w_i ∈{0, 1} represents the inclusion of M_i in the ensemble. If w_i = 1, then M_i is selected; otherwise, if w_i = 0, M_i is excluded from E.
The ensemble function E(X,Y) is defined as the aggregation of a subset from ℳ where the measures are weighted by their corresponding aforementioned inclusion values as shown in Eq. <ref>:
E(X, Y) = ∑_i=1^n w_i · M_i(X, Y)
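As a concrete illustration of the equation above, the ensemble can be written as a short Python function over a pool of candidate measures; the toy measures and inclusion vector below are placeholders, not the measures used in our experiments:

def ensemble_score(x, y, measures, w):
    # Weighted sum of the selected measures; each w_i is 0 or 1
    return sum(w_i * m(x, y) for w_i, m in zip(w, measures))

# Example usage with two toy measures (stand-ins for real similarity functions)
measures = [lambda a, b: float(a == b),
            lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))]
print(ensemble_score("car", "cat", measures, w=[0, 1]))   # 0.5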
In this research, we use GE to build the ensemble function. Please note that GE provides a framework for generating and evolving an ensemble based on BNF grammar. In this case, the BNF grammar defines the rules for constructing the aggregation strategies.
Therefore, the problem consists of finding the 𝐰 that maximizes the ensemble's performance. Examples of performance metrics are precision and recall. Nevertheless, in the case of semantic similarity measurement, the challenge is to emulate human judgment <cit.>. This means that we need to use methods such as correlation coefficients. Therefore, we aim to optimize the correlation between the ensemble results and a human-curated ground truth dataset.
To do that, given a gold standard 𝒢, i.e., a dataset created and curated by human experts, the goal is to maximize the correlation between 𝒢 and the results of the proposed strategy 𝒮, as shown in Eq. <ref>.
𝒮^* = arg max_𝒮 correl(𝒢, 𝒮)
𝒮 can take different semantic similarity measures as input. These measures function as weak estimators that provide intermediate semantic similarity scores, from which a higher-level yet robust strategy able to work on unseen data is learned. In short, the goal is to identify an ensemble that adapts to the training data while performing well on data never seen before.
GE can evolve candidate ensembles, evaluating their correlation to a human-curated training set. The fitness function guides the search process by assigning fitness to each candidate ensemble based on performance. The process iteratively evolves the population of candidate ensembles, using genetic operators, until a termination condition is met, such as reaching a maximum number of generations (previously defined by the operator) or achieving a satisfactory fitness level for the problem at hand, since the ideal result will be difficult to achieve.
In this way, a computer language's syntax and semantics can be created following the rules described within GE. These criteria are applied to produce a population of computer programs, or ensembles in our specific case, capable of evolving. The approach generates new strings of symbols equivalent to the most successful ensembles in the population.
The degree to which an ensemble correlates with the ground truth determines its success. Ensembles that correlate better have a greater chance of being picked for reproduction and mutation. In contrast, the less successful ones will not be passed on to the next generation. The rationale behind this approach is that the population changes over time, with more successful ensembles becoming prevalent.
One of the most significant benefits is that GE facilitates building ensembles that can solve complex issues automatically. During the evolutionary process, the approach automatically explores the space of possible ensembles and selects the one that maximizes the performance. Our hypothesis is that the resulting ensemble can estimate semantic similarity for unseen textual inputs. This hypothesis will be empirically tested later in this paper.
§ METHODS
We have seen that GE is a powerful evolutionary computation technique that combines GAs with an FG. We can automatically learn complex similarity models capable of capturing the nuances of natural language by leveraging the adaption capability of GE.
The process begins by defining a BNF grammar that represents the structure of the possible semantic similarity models. This BNF grammar serves as a guideline for generating diverse candidate solutions. Each candidate solution is intended to represent a unique ensemble of semantic similarity measures. Algorithm <ref> shows how, through an iterative process, the approach explores the space of potential solutions, gradually improving their performance through fitness evaluation and selection.
The fitness evaluation is based on an objective function that measures the quality of the ensembles. This function could consider factors such as the accuracy, robustness, and diversity of the ensemble's output, although this research focuses on accuracy. Aggregating multiple semantic similarity measures allows the ensembles to capture different aspects of the problem. The adaptive nature of the process enables the ensembles to learn and evolve, continuously refining their performance over time.
GE not only automates the ensemble learning process but also pushes the boundaries of semantic similarity modeling. Allowing the ensembles to learn from data eliminates the need for manual feature engineering (e.g., manual selection of similarity measures), which can be time-consuming and error-prone. Instead, the ensembles adapt to the training data, uncovering hidden patterns that may not be apparent to the human eye.
§.§ Mathematical foundation
GE works by implementing a genotype-phenotype mapping that can be defined as follows:
Let G be the genotype, a string representing an individual's genetic code. The genotype comprises a series of genes, each consisting of a fixed number of bits.
The phenotype P is the resulting program generated from the genotype G. It represents the executable form of the genetic code.
The mapping from the genotype to the phenotype can be defined as a function f: {0, 1}^n → P, where n is the length of the genotype. This function inputs the binary string G and produces the corresponding phenotype P.
The mapping is defined by a grammar C and a mapping table M. The grammar C specifies the production rules that define the syntax of the phenotype so that each rule in the grammar corresponds to a gene in the genotype. The mapping table M maps each gene in the genotype to a production rule from the grammar.
The mapping function f could be formulated as follows:
f(G) is defined by two cases:
* Base case: if G is a terminal gene, return the corresponding terminal symbol.
* Recursive case: if G is a non-terminal gene, apply the corresponding production rule from M to the genes that follow in the genotype.
In this way, the genotype-phenotype mapping transforms the binary representation of the genotype into an executable phenotype.
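A minimal, self-contained sketch of this mapping (integer codons, leftmost derivation, codon value modulo the number of productions, and genome wrapping) is shown below; the toy grammar and genome are illustrative only:

def map_genotype(genome, grammar, start="<expr>", max_wraps=2):
    # genome:  list of integer codons
    # grammar: dict mapping each non-terminal to its list of productions,
    #          each production being a list of terminal / non-terminal symbols
    stack, output, i, wraps = [start], [], 0, 0
    while stack:
        symbol = stack.pop(0)                  # leftmost derivation
        if symbol not in grammar:              # terminal symbol: emit it
            output.append(symbol)
            continue
        if i == len(genome):                   # ran out of codons: wrap the genome
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("mapping did not terminate")
        productions = grammar[symbol]
        choice = genome[i] % len(productions)  # codon modulo number of productions
        i += 1
        stack = list(productions[choice]) + stack
    return "".join(output)

# Toy grammar over two similarity scores s1 and s2
grammar = {"<expr>": [["(", "<expr>", "<op>", "<expr>", ")"], ["<var>"]],
           "<op>": [["+"], ["*"]],
           "<var>": [["s1"], ["s2"], ["0.5"]]}
print(map_genotype([0, 3, 1, 5, 1, 2, 0], grammar))   # -> (s2*0.5)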
§.§ Fitness Function
Let F(𝐰) be the fitness function that evaluates the performance of an ensemble represented by the binary vector 𝐰. The fitness function can be defined based on a performance metric specific to the semantic similarity task, i.e., the correlation coefficient. The fitness function, therefore, assesses the quality of the ensemble.
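A minimal sketch of such a fitness function, reusing the ensemble_score helper sketched in the problem statement and using the Pearson correlation as the performance metric, could look as follows:

from scipy.stats import pearsonr

def fitness(w, measures, pairs, gold):
    # Correlation between ensemble scores and human judgments (higher is better)
    scores = [ensemble_score(x, y, measures, w) for x, y in pairs]
    return pearsonr(gold, scores)[0]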
§.§ Genetic Operators
Genetic operators are used in the evolution process of the ensemble configurations. Two commonly used genetic operators are crossover and mutation:
§.§.§ Crossover
Given two parent ensembles 𝐰^1 and 𝐰^2, crossover produces two offspring ensembles by exchanging genetic information between the parents. The specific crossover mechanism can vary, such as one-point crossover, two-point crossover, or uniform crossover. However, a deeper explanation of this is beyond the scope of this paper.
§.§.§ Mutation
Mutation introduces random changes to the ensemble. A mutation operator alters one or more elements of the binary vector 𝐰 to explore new configurations. Our mutation process randomly selects positions in 𝐰 and flips the corresponding values.
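A minimal sketch of these two operators acting on the binary inclusion vector is given below (the mutation rate is an assumed value):

import numpy as np

rng = np.random.default_rng(0)

def one_point_crossover(w1, w2):
    # Exchange the tails of two parent inclusion vectors at a random cut point
    cut = rng.integers(1, len(w1))
    return (np.concatenate([w1[:cut], w2[cut:]]),
            np.concatenate([w2[:cut], w1[cut:]]))

def bit_flip_mutation(w, rate=0.1):
    # Flip each bit of the inclusion vector with probability `rate`
    w = np.asarray(w)
    flip = rng.random(len(w)) < rate
    return np.where(flip, 1 - w, w)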
§.§ Grammar Rules
GE uses grammar rules to generate and interpret promising ensembles. These rules define the structure and constraints for building ensembles using production rules. Each rule specifies how to expand non-terminal symbols into terminal or non-terminal symbols. The grammar rules ensure the generated configurations follow the required syntax and semantics.
Generally speaking, the exact formulation of the fitness function, genetic operators, and grammar rules can vary depending on the specific implementation and requirements of the task. These aspects must be carefully designed and tailored to the problem domain to effectively guide the search process and generate high-performing ensemble configurations using GE. However, some steps are common to all aggregation strategies. For example, when tackling a problem with GE, a suitable BNF grammar must initially be defined. The BNF can be either the specification of an entire language or, perhaps more feasible from the point of view of resource consumption, a subset of a language appropriate for the problem at hand. An example of an FG inspired by Python can be seen in Example <ref>, which defines a language for arithmetic expressions and some additional mathematical functions.
[Example ex:01: formal grammar for arithmetic expressions and mathematical functions; listing not reproduced in this version]
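Since the original listing is not reproduced here, the snippet below gives a minimal, purely illustrative grammar of this kind, written as a Python string in the BNF style that PonyGE2 reads from grammar files; the rule names, functions, and terminals are assumptions, not the grammar actually used in our experiments:

# Illustrative BNF grammar for arithmetic over five candidate similarity scores x[0]..x[4]
GRAMMAR = r"""
<expr>  ::= <expr><op><expr> | (<expr><op><expr>) | <func>(<expr>) | <var>
<op>    ::= + | - | * | /
<func>  ::= np.sqrt | np.tanh | np.log1p
<var>   ::= x[0] | x[1] | x[2] | x[3] | x[4] | <const>
<const> ::= 0.<digit> | 1.0
<digit> ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
"""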
The ability to easily customize the output structures by manipulating the BNF grammar is an advantage distinguishing GE from rival techniques. Furthermore, the genotype-phenotype mapping enables search operators to operate not only on solution trees, as is the case in standard GP, but also on the genotypes (i.e., integer or binary lists), partially derived phenotypes or the fully-formed phenotypic derivation trees. This capability expands the range of entities on which search operators can act and contributes to the efficacy of the GE methodology.
Our work is implemented using PonyGE2 <cit.>, a state-of-the-art open-source library for GP that offers a comprehensive set of features and tools for evolutionary computation. It combines versatility, efficiency, and usability to explore and optimize complex problems. The library supports various GP paradigms, including GE.
§ RESULTS
In this section, we present the findings of our experiments focused on semantic similarity measurement. We also explore two ways to build ensembles using the Python language: the first, which we simply call GE, searches only for accuracy; the second, which we call GE-i, additionally looks for a Python style that facilitates interpretability. We will see examples later and conduct a comparative analysis of the outcomes produced by our proposed strategies with respect to state-of-the-art GP techniques to assess effectiveness and implications.
§.§ Empirical Setup and Baseline Selection
Table <ref> presents our setup concerning the set of parameters and their corresponding values associated with the PonyGE2 framework <cit.>. The technical details of each of the entries in the table are beyond the scope of this paper but can be consulted at <cit.>. The purpose of this table is to provide a concise overview of the configuration settings used in the context of a particular study or experiment.
Our baseline is one of the top-performing methods for aggregating similarity scores, i.e., linear regression <cit.>. Linear regression aims to establish a functional relationship between the previously considered semantic similarity measures and the desired output. This relationship can be represented using a mathematical equation, which connects the output with multiple semantic similarity measures, as depicted in Eq. <ref>.
α̂ = arg min_α D(α) = arg min_α ∑_i=1^n (α · a_i - b_i)^2
Eq. <ref> represents the minimization problem involved in linear regression, aiming to find the optimal vector α̂ that minimizes the discrepancy D between the predicted values and the actual values. The optimization process seeks to minimize the sum of squared differences between the dot product of the vector α and the vector a_i, representing the semantic similarity measures, and the corresponding target values b_i. The symbol arg min denotes the argument that minimizes the expression, and the index i ranges from 1 to n, representing the number of instances. In this way, linear regression is a foundational approach for building ensembles by quantifying the association between the semantic similarity measures and the desired output, allowing for the derivation of predictive models.
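As a reference implementation of this baseline, the least-squares weights can be obtained directly with NumPy; the matrix of measure scores and the human judgments below are random placeholders for the real benchmark data:

import numpy as np

A = np.random.rand(30, 5)   # n x k matrix: scores of the k candidate measures for n word pairs
b = np.random.rand(30)      # human similarity judgments for the same n pairs

alpha, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares weights of the equation above
predictions = A @ alpha                         # aggregated similarity scores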
§.§ Datasets
The first dataset used in our experiments is the so-called Miller & Charles dataset <cit.>, from now on MC30. This is the standard dataset community members use when evaluating research methodologies that concentrate on general cases. It includes 30 pairs of commonly used words. Therefore, this dataset aims to evaluate the semantic similarity between words that belong to a general-purpose scenario.
The second dataset is the so-called GeReSiD50 dataset <cit.>, drawn from the realm of geospatial research. It covers a pool of textual phrases, each of which has been grouped into one of 50 unique pairings. This pool of sentences includes over 100 different geographical expressions. For each of the 50 pairings, human opinions about the degree of semantic similarity were solicited and recorded individually. These 50 pairings include samples that are in no way comparable to one another and others that, according to human judgment, are virtually indistinguishable.
§.§ Evaluation Criteria
When exploring correlation coefficients, researchers commonly consider two extensively utilized ones: the Pearson Correlation Coefficient (PCC) and the Spearman Rank Correlation Coefficient (SRCC). On the one hand, the PCC serves as a measure to assess the linear correlation between a gold standard and the resulting outcomes of the proposed strategy. On the other hand, the SRCC is primarily used to investigate the ordinal correlation between the anticipated values derived from the proposed strategy and the actual observed values. This study aims to closely examine the ensemble's accuracy concerning these two correlation coefficients, as discussed in <cit.>.
§.§ Empirical results
We provide an overview of the outcomes derived from our empirical assessment of the above benchmarks. Tables <ref> and <ref> show the reference data for the semantic similarity measures that will be part of the ensemble for solving the MC30 and the GeReSiD50 benchmark datasets, respectively. Our primary pool of measures is based on different similarity variants computed over BERT <cit.>, since there is a broad consensus about their superiority in tackling this task. Truth represents the ground truth values, ranging from 0 to 1, used as a reference for comparison. Bert-Cos. displays the results obtained by encoding the text pieces using BERT and calculating similarity based on the cosine formula. Bert-Man. presents results obtained using the Manhattan distance. Bert-Euc. shows results based on the Euclidean distance. Bert-Inn. reflects results obtained using the Inner Product similarity measure. Lastly, Bert-Ang. illustrates results obtained by calculating similarity based on the angle between the embedding vectors.
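For completeness, the sketch below shows one way such measures can be computed from a pair of precomputed sentence embeddings; the mapping of the distance-based variants to similarity scores is an assumption made for illustration and does not necessarily match the exact formulas behind the tables:

import numpy as np

def similarity_pool(u, v):
    # Five BERT-embedding-based similarity variants for two embedding vectors u and v
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return {
        "Bert-Cos.": cos,
        "Bert-Man.": 1.0 / (1.0 + np.sum(np.abs(u - v))),               # Manhattan distance -> similarity
        "Bert-Euc.": 1.0 / (1.0 + np.linalg.norm(u - v)),               # Euclidean distance -> similarity
        "Bert-Inn.": float(np.dot(u, v)),                               # inner product
        "Bert-Ang.": 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi,  # angular similarity
    }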
It is important to remark that the outcomes of our reported experiments are based on 30 independent runs, owing to the inherent non-deterministic characteristics of the methods. Therefore, we aim to report a snapshot of the values achieved.
§.§ Assessing Semantic Similarity in a General-purpose Context
Figure <ref> displays the results for two evaluation criteria, PCC and SRCC, over the MC30 benchmark dataset. The x-axis represents different strategies used for evaluation. At the same time, Linear Regression (LR) is the baseline, as discussed earlier. A dotted horizontal line represents it.
The state-of-the-art genetic ensembles are Tree-based Genetic Programming (TGP) <cit.>, Linear Genetic Programming (LGP) <cit.>, and Cartesian Genetic Programming (CGP) <cit.> precisely as in <cit.>. GE is the approach proposed in this work, and GE-i is the interpretable variant of GE discussed earlier. It is essential to note that all the ensembles are trained on the same training dataset to facilitate the fairness of the comparisons.
In the first subplot (a), the LGP achieves relatively high performance compared to the other methods. The boxplot displays the distribution of PCC values obtained from 30 experimental runs. The box represents the interquartile range (IQR), where the central box spans from the lower quartile (Q1) to the upper quartile (Q3). The line within the box corresponds to the median value. The whiskers extend to the minimum and maximum values.
In the second subplot (b), the GE method (first blue boxplot) demonstrates the best performance regarding SRCC. The boxplot characteristics are the same as in the previous subplot but now represent the distribution of SRCC values.
Overall, both subplots suggest that the LGP outperforms the other evaluated methods regarding PCC, and GE is superior regarding SRCC on the MC30 benchmark dataset. GE-i, although interpretable, achieves the worst performance.
As a matter of curiosity, we can see in Example <ref> the code generated for both PCC and SRCC over the MC30 dataset. The generated source code is written in Python and uses the Numpy library, which supports mathematical operations on arrays and matrices. The result is computed using various mathematical functions and operators, since we are using the FG seen in Example <ref>. It is important to note that the expressions within parentheses are evaluated and combined using the specified operators.
[Example ex:A (a): Ensemble optimized for PCC over MC30; generated Python listing not reproduced]
[Example ex:A (b): Ensemble optimized for SRCC over MC30; generated Python listing not reproduced]
We also show the changes over time in important variables during the GE process. Figure <ref> shows the progression of these parameters. Specifically, we focus on four key parameters: Average Fitness, Average Genome Length, Average Tree Nodes, and Best Fitness.
The Average Fitness provides insights into the overall performance of the evolving population. It reflects the average fitness value of individuals in each generation, indicating the progress achieved by the GE strategy.
The Average Genome Length tracks the average length of individual genomes within the population at different training stages. The goal of monitoring this variable is to understand how the complexity of GE-generated solutions changes over time.
The Average Tree Nodes measures the average number of nodes in the evolved solutions. It offers valuable information about the complexity and intricacy of evolved ensembles, shedding light on the strategy's search for space exploration.
Lastly, the Best Fitness represents the fitness value of the best individual in each generation. Observing this variable helps to assess the progress in finding optimal solutions as training is performed. Please remember that these values are for the training phase, and then it remains to test the generated ensemble on previously unseen data.
Analyzing the evolution of these variables allows us to obtain insights into how they contribute to PCC optimization and interact during the GE process over the MC30 benchmark dataset. This analysis provides a valuable view into the behavior of GE and the overall performance of the approach.
The examination of Figure <ref> provides a comprehensive visualization of the progressive evolution of the aforementioned important variables when optimizing SRCC over the MC30 benchmark dataset.
§.§ Assessing Semantic Similarity in a Domain-Specific Context
Figure <ref> displays the results for both PCC and SRCC over the GeReSiD50 benchmark dataset. As in the previous case, the x-axis represents different strategies used for evaluation. Linear Regression (LR) is again the baseline, as discussed earlier, and is represented by a dotted horizontal line. The state-of-the-art genetic ensembles are again TGP <cit.>, LGP <cit.>, and CGP <cit.>. GE is again the approach proposed in this work, and GE-i is the interpretable variant of GE, precisely as we discussed in the previous case.
In the first subplot (a), the LGP achieves relatively high performance compared to the other methods. The boxplot displays the distribution of PCC values obtained from 30 experimental runs. The box again represents the IQR, where the central box spans from the lower to the upper quartile. The line within the box corresponds to the median value. The whiskers extend to the minimum and maximum values.
In the second subplot (b), the GE method demonstrates the best performance regarding SRCC. The boxplot characteristics are the same as in the previous subplot but now represent the distribution of SRCC values.
As in the general-purpose use case, both subplots indicate that LGP outperforms the other evaluated methods in terms of PCC, while GE is superior in terms of SRCC on the GeReSiD50 benchmark dataset. GE-i, although interpretable, again achieves the worst performance.
For illustration, we provide the generated Python source code in Example <ref>. It is an ensemble optimized for PCC over MC30 consisting of two functions: my_pearson(x, y) and p(). The my_pearson(x, y) function calculates the PCC between two arrays, while p() computes the quantity to be maximized through an algebraic formula that must be learned. To do so, the code reads data from training and validation CSV files, extracts the relevant columns, and evaluates the learned expression to produce a new column. The PCC between the response column and the new column is then computed, and this value (the PCC over unseen data) is the objective to be maximized.
[Example ex:B: generated code listing "Ensemble optimized for PCC over MC30" (listing not reproduced here).]
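The generated listing itself is not reproduced above. For readers unfamiliar with the shape of such code, the following is a minimal hand-written sketch of an ensemble with the structure just described; the column names, the toy data, and the combining expression are illustrative assumptions, not the actual GE output (which would also read the training and validation CSV files directly).

```python
import numpy as np
import pandas as pd

def my_pearson(x, y):
    # Pearson correlation coefficient between two 1-D arrays
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1]

def p(valid):
    # In the generated code this data comes from pd.read_csv("validation.csv");
    # a DataFrame is passed in here to keep the sketch self-contained.
    m1, m2, m3 = valid["measure_a"], valid["measure_b"], valid["measure_c"]
    # Stand-in for the algebraic expression evolved by GE from the grammar (FG)
    valid = valid.assign(ensemble=(m1 + m2) / 2.0 + np.maximum(m2, m3))
    # Fitness: correlation between the gold scores and the ensemble prediction
    return my_pearson(valid["response"], valid["ensemble"])

# Toy validation data standing in for an MC30-style CSV (hypothetical columns)
valid = pd.DataFrame({
    "measure_a": [0.10, 0.50, 0.90, 0.30],
    "measure_b": [0.20, 0.60, 0.80, 0.40],
    "measure_c": [0.00, 0.70, 1.00, 0.20],
    "response":  [0.15, 0.55, 0.95, 0.35],
})
print("PCC to be maximized:", p(valid))
```

The SRCC variant described next has the same structure, with the Pearson computation replaced by a rank correlation such as scipy.stats.spearmanr.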
We also provide the generated code for SRCC in Example <ref>. It is an ensemble optimized for SRCC over MC30 that implements two functions, my_spearman(x, y) and p(), to maximize SRCC. The my_spearman(x, y) function calculates the SRCC between two arrays. The p() function loads the training and validation datasets, extracts the relevant columns, and performs calculations on the data: it defines an auxiliary expression over a set of variables, substitutes one set of variables for another, evaluates the expression, and assigns the result to a new column in the validation dataset. Finally, the SRCC between the response column and the newly created column is computed. The objective is to maximize the value returned by p(), which represents the SRCC over unseen data.
[Example ex:C: generated code listing "Ensemble optimized for SRCC over MC30" (listing not reproduced here).]
Once again, examining Figure <ref> deepens our understanding of the progressive evolution of the critical variables when generating the ensemble optimized for PCC over the GeReSiD50 benchmark dataset. This analysis sheds light on the behavior of the optimization process, the effectiveness of the approach, and the interplay between key parameters.
Similarly, Figure <ref> gives a comprehensive view of the progressive evolution of these variables, this time for the optimization of SRCC over the GeReSiD50 benchmark dataset.
§.§ Summary of results
Table <ref> offers a summary of the results obtained for the MC30 benchmark dataset. It is divided into two sections, and each of them features two columns: the first denoting the method or ensemble used and the second representing the performance, i.e., the PCC in the initial section and SRCC in the subsequent section. These scores assess the degree of correlation between the predicted and ground truth values. Values are reported as the median of the results of the 30 independent runs.
The tabular presentation of the results enables comparisons of the effectiveness of the various methods or ensembles, thus facilitating the identification of optimal approaches for the specific task. In fact, we can see that LGP gives better results for PCC, and GE for SRCC.
Table <ref> summarizes the results obtained for the GeReSiD50 benchmark dataset. The table also consists of two sections, each also containing two columns. The first column displays the method or ensemble used in the study, while the second column reports the performance, denoted as the PCC and SRCC, respectively. Values are again reported as the median result of the 30 independent runs.
It is possible to observe that when operating over the GeReSiD50 dataset, LGP performs better in terms of PCC, and GE presents better results in terms of SRCC, as in the previous case.
§ DISCUSSION
Semantic similarity ensembles are advantageous over other methods as they can effectively leverage the capabilities of a broad spectrum of established similarity measures. As a result, these models often yield predictions of superior accuracy compared to utilizing individual methods in isolation. Our work has shown the advantages of using GE techniques to build ensembles in this context. Our research on the use of GE to address this specific challenge allows us to identify several advantages over traditional GP-based strategies:
* It presents greater flexibility, allowing for evolving solutions with diverse structures. This flexibility enables GE to handle the problem effectively.
* It demonstrates good efficiency compared to other GP-based methods due to generating directly executable code for each solution. This process reduces computational overhead and accelerates the evolution of solutions.
* It allows establishing an appropriate trade-off between accuracy and interpretability, yielding highly accurate or highly interpretable ensembles in extreme cases, or a balance between the two when necessary.
However, GE also has its drawbacks. One weakness lies in interpreting the evolved solutions: our approach may improve efficiency but sacrifice transparency, making it challenging to comprehend the inner workings and underlying mechanisms of the evolved solutions. Moreover, this kind of encoding complicates debugging, as understanding the complex relationships within the evolved solutions takes time and effort. Lastly, GE usually offers no guidance for finding good starting points for the search process.
§ CONCLUSION
In this work, we have presented a novel approach for the automated design of ensembles of semantic similarity measures using GE. Through empirical evaluations on several benchmark datasets, we have demonstrated the superior performance of our method compared to existing state-of-the-art GP-based ensembles in some cases. These findings show the potential of GE for automatic semantic similarity measure selection and aggregation when the goal is to achieve superior performance compared to individual semantic similarity measures.
Furthermore, our proposed strategy offers several notable advantages over traditional methods: It enables handling a large pool of candidate semantic similarity measures without requiring manual feature selection or parameter tuning, alleviating the process's time-consuming and knowledge-intensive aspects. Moreover, our approach demonstrates flexibility, allowing human operators to easily add or remove semantic similarity measures from the initial pool per their requirements.
In conclusion, our proposed strategy exhibits promising potential for automated NLP system design. Moreover, extending our approach to other machine learning domains beyond semantic similarity measures is possible. Future research endeavors could explore the application of GE to other NLP tasks and investigate alternative search strategies and fitness functions. Overall, our work sheds light on the significance of ensemble methods and showcases the potential of evolutionary algorithms in facilitating semantic similarity measurement.
§ ACKNOWLEDGMENTS
The research reported in this paper has been funded by the Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation, and Technology (BMK), the Federal Ministry for Digital and Economic Affairs (BMDW), and the State of Upper Austria in the frame of SCCH, a center in the COMET - Competence Centers for Excellent Technologies Programme managed by Austrian Research Promotion Agency FFG.
|
http://arxiv.org/abs/2307.01401v1
|
20230703234229
|
Multi-Task Learning Improves Performance In Deep Argument Mining Models
|
[
"Amirhossein Farzam",
"Shashank Shekhar",
"Isaac Mehlhaff",
"Marco Morucci"
] |
cs.CL
|
[
"cs.CL"
] |
The successful analysis of argumentative techniques from user-generated text is central to many downstream tasks such as political and market analysis. Recent argument mining tools use state-of-the-art deep learning methods to extract and annotate argumentative techniques from various online text corpora; however, each task is treated separately, and different bespoke models are fine-tuned for each dataset. We show that different argument mining tasks share common semantic and logical structure by implementing a multi-task approach to argument mining that achieves better performance than state-of-the-art methods for the same problems. Our model builds a shared representation of the input text that is common to all tasks and exploits similarities between tasks in order to further boost performance via parameter-sharing. Our results are important for argument mining as they show that different tasks share substantial similarities and suggest a holistic approach to the extraction of argumentative techniques from text.
§ INTRODUCTION
Text content generated by online users is a fundamental source of information for understanding the ideas, feelings, and behavior of large populations of interest for social scientists. Within these texts, it is important to be able to recognize ideas and worldviews expressed by individuals on a large scale. To this end, argument mining (AM) has emerged in recent years as a sub-field of natural language processing (NLP) focusing on creating language models capable of detecting and classifying argumentative strategies in online texts.
Within AM, several different sub-tasks have been proposed through the years. For example, <cit.> focus on identifying agreement and disagreement in online texts, <cit.> propose a method to distinguish factual from emotional argumentation techniques, <cit.> detect the presence of certain rhetorical figures in arguments, and <cit.> produce measures of argument quality. These are only some examples of the many distinct sub-tasks that have been identified in AM. In this paper, we suggest that all these AM sub-tasks share substantial similarity and use this idea to formulate a model that achieves high accuracy in several of these problems.
More specifically, existing work in AM treats many of the sub-tasks within the field as separate problems and focuses on fine-tuning bespoke models for each task <cit.>. While this approach has been demonstrated to work in many settings, it fails to take advantage of the substantial overlap and similarities between AM sub-tasks.
In this paper we propose to take advantage of the similarities across AM tasks by constructing a multi-task model that builds a shared latent representation of the inputs for each task and uses this representation to make more accurate predictions for each individual task. Our models also provide evidence that AM sub-tasks do indeed share substantial conceptual overlap; the latent representations of different tasks output by our model, shown in Figure <ref>, clearly display clusters of individual tasks as well as substantial overlap between these clusters in representation space, indicating that the same latent features are informative for multiple tasks.
The model we propose achieves State-Of-The-Art (SOTA) performance on all tasks for which we had information on previous SOTA metrics, and it also surpasses individual-task models fine-tuned on similar architectures for these tasks. In addition, our models allow for substantial computational gains over individual-task models as they permit training and inference for many outputs at once, instead of having to train and evaluate an individual model for each desired task.
Overall, our results have the important implication for AM as a field that further research and model-building should not only focus on taking advantage of the structure of the specific task of interest <cit.>, but also on incorporating information from similar tasks into the model for better performance.
Our paper will proceed as follows: first, we review relevant work (section <ref>), second we introduce the different AM tasks and respective data sources that we incorporate in our model (section <ref>), as well as the state of the art performance on these tasks. Then, we introduce our proposed model architecture, loss function and training regime (section <ref>). Finally, we present our empirical results (section <ref>) and discuss conclusions and future work (section <ref>).
§ RELATED WORK
We build on an active research agenda in argument mining (AM)—the automated extraction of argumentative structure, reasoning, and features from text <cit.>. <cit.> identify two stages in AM: identifying arguments within documents and classifying those arguments on their characteristics, such as supporting, attacking, or background information. Our work is situated in the second stage, involving the identification of features or typologies of arguments.
Much computational work in AM has investigated argumentation in online interactions <cit.>, due in part to the vast amounts of available data and the ease of collecting it. But some scholars have used news articles to construct corpora of propaganda and fact-checking <cit.>. Still others have leveraged monologues like persuasive essays or legal decisions <cit.>. We incorporate all three types of data into our models to further show that tasks with different generation processes and textual characteristics nevertheless exhibit common semantic structure.
There is evidence that many natural language tasks share a common core <cit.>, and models trained on one task tend to also perform well on others. <cit.> demonstrate that multi-task approaches benefit model performance in several natural language tasks such as topic detection and sentiment analysis. Multi-task approaches have been somewhat rare within AM—for example, <cit.> uses task similarities to extract arguments and rebuttals from peer review—but these other tasks likely share enough common structure with ours that performance on them may also be improved via multi-task learning.
A prevalent model architecture for multi-task learning within computer vision involves segregating the network into shared and task-specific components.
This conventional structure, termed as a “shared trunk” <cit.>, typically comprises a universal feature extractor, constructed of convolutional layers that are employed by all tasks, and a distinct output branch for each task <cit.>.
Further enhancements on this shared trunk template have been made by <cit.> and <cit.>, who incorporated task-specific modules into the original framework.
This template is not confined to computer vision but is also prevalent in multi-task learning models in NLP.
Traditional feed-forward architectures, using the shared trunk template in combination with task-specific modules, have been utilized for multi-task NLP by a variety of researchers <cit.>.
These architectures bear a structural resemblance to their counterparts in computer vision, featuring a shared, global feature extractor followed by task-specific output branches. However, in the context of NLP, the features in question are text representations.
§ DATA
We draw on three benchmark corpora to create a dataset with a diverse number of argument characteristics. We take 8 tasks from the Internet Argument Corpus (IAC), a collection of posts extracted from several online debate and discussion forums <cit.>. Each post is annotated on a variety of characteristics, such as whether the post expresses disagreement or uses an emotion- or fact-based argument, with a value in [-5, 5] on each characteristic. Some researchers have dichotomized these data by removing observations around the midpoint <cit.>. This practice is not appropriate in the multi-task setting, however, as it would remove too much information that the model could use to build shared representations across tasks. Instead, we dichotomize the data by simply cutting on the scale midpoint.
A wide array of studies have used the IAC to construct unique tasks <cit.> and train single-task models <cit.>. We are aware of comparable state-of-the-art benchmarks for three tasks: On the disagreement classification task, <cit.> achieve an accuracy of 68.2% and <cit.> achieve an F1 score of 69.6%. On the emotional or factual argument classification task, <cit.> achieve an F1 score of 53.7%. Finally, on the nasty or nice tone classification task, <cit.> achieve an F1 score of 69%.
The second benchmark corpus we draw on is IBM-Rank-30k, a corpus of crowd-sourced arguments <cit.>. Two quality scoring functions then translated binary annotations into a continuous value of argument quality in [0, 1]. We use scores produced by the authors' weighted average scoring function because it accounts for coder reliability, leading to less noisy annotations. As with the IAC labels, we dichotomize the data by cutting on the scale midpoint.
The final corpus is introduced by <cit.>, who collect articles from both propagandistic and non-propagandistic news sources and annotate sentences within each article that contain one or more of 18 different propaganda techniques, such as loaded language or causal oversimplification. We extract all sentences from each article, including those that are annotated as containing no propaganda techniques. Data from all three corpora are combined to create our final dataset. We use 80% for training and set aside 10% each for validation and test sets.
Finally, to help guard against overfitting, we conduct four types of data augmentation on the training set <cit.>. In back-translation, we translate the text into a different language, then translate it back to the original language. We choose German as the target language for its high lexical similarity to English. In contextual word embedding, we randomly choose thirty percent of tokens, feed the surrounding words to BERT, and substitute the predicted word in for the original. In synonym augmentation, we randomly choose thirty percent of tokens and substitute the most similar word from the WordNet lexical database <cit.>. Finally, in random cropping, we randomly delete thirty percent of tokens. Table <ref> shows the total number of observations in the training set as well as the class balance for each task.
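To illustrate the two simpler augmentations (synonym substitution and random cropping), a minimal sketch based on NLTK's WordNet is given below; back-translation and contextual word-embedding substitution require external translation and masked-language models and are omitted. The 30% token proportion follows the description above, while the tokenization, the first-lemma choice, and the seeds are implementation assumptions.

```python
import random
from nltk.corpus import wordnet  # requires a prior nltk.download("wordnet")

def synonym_augment(tokens, frac=0.3, seed=0):
    # Replace a random 30% of tokens with a WordNet lemma, when one exists.
    rng = random.Random(seed)
    out = list(tokens)
    for i in rng.sample(range(len(out)), k=max(1, int(frac * len(out)))):
        lemmas = [l.name().replace("_", " ")
                  for s in wordnet.synsets(out[i]) for l in s.lemmas()]
        lemmas = [w for w in lemmas if w.lower() != out[i].lower()]
        if lemmas:
            out[i] = lemmas[0]  # simple choice; a similarity-ranked pick is also possible
    return out

def random_crop(tokens, frac=0.3, seed=0):
    # Randomly delete 30% of tokens.
    rng = random.Random(seed)
    drop = set(rng.sample(range(len(tokens)), k=int(frac * len(tokens))))
    return [t for i, t in enumerate(tokens) if i not in drop]

tokens = "the senator made a loaded and misleading argument".split()
print(synonym_augment(tokens))
print(random_crop(tokens))
```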
§ METHODOLOGY
Our methodology is based on a multi-task learning approach which leverages the shared information across tasks corresponding to different sources of data, leading to improved performance on each task in a multi-label classification setting.
The model architecture and the loss function are the two key components of our methodology.
Additionally, we make use of several standard training and optimization techniques, described in this section, in order to improve the performance.
§.§ Model Architecture
Our model architecture shares a key similarity to network templates comprising a shared trunk feeding task-specific modules, common to multitask learning architectures proposed in previous works <cit.>.
This architecture aims to utilize shared information across tasks through the shared trunk while learning distinct task features through the task-specific modules.
Following the same principle, we use a network with double-branching in layers following the shared trunk, in order to make use of commonalities across different types of tasks as well as more fine-grained information about each individual task.
We therefore use a feed-forward neural network with four sequential sets of layers: a base text embedding model shared across all tasks, followed by a shared encoder, which is followed by a double branching structure feeding two sets of task-specific modules.
Our model architecture is based on the BERT model <cit.>, which is a transformer-based model known for its effectiveness in various natural language processing tasks <cit.>.
We use small BERT <cit.> as the base of our model, followed by three dense layers each followed by dropout.
These layers help in learning features that are shared across tasks.
The architecture then branches out to learn task-type and task-specific features.
In particular, the architecture consists of four sets of layers, described below, and visualized in Figure <ref>.
Each dense layer in the network uses a ReLU activation, with the exception of the final activation layer which is a sigmoid for binary classification.
* Shared embedding layers:
We use a BERT model <cit.> to obtain an embedding of the text input.
In order to keep the model size small and training practical, we use small BERT <cit.>, which outputs a 128-dimensional embedding.
The embedding model, shared across all tasks, is fine-tuned through our training.
* Shared encoding layers:
In addition to the base embedding model, all tasks share an encoder, consisting of two sequential dense layers each followed by a dropout layer.
This helps learn a shared representation, used by all tasks, while allowing for sparsity and reducing the problem to learning our target features.
* Task-type Layers:
The first branching in the network architecture follows the shared layers aiming to learn coarse-grained task-specific features which are expected to share logical structures across tasks within each type.
This is particularly suitable for multi-task learning on data consisting of a mixture of datasets, where the number of labels exceeds the number of sources in the mixture.
Given such input data, in the first step towards learning the shared representation, the task-type layers learn dataset-specific features, while still utilizing commonalities between individual tasks sharing a dataset.
For each task-type branch, we use two sequential dense layers each followed by dropout.
Since we have three sets of target labels each corresponding to their own dataset, we use three main branches.
* Task-specific Layers:
Each main branch further branches out into individual task layers.
These layers aim to learn more fine-grained features from the representations produced through the main branches, and output a vector representation for each task.
Each task-specific branch contains two sequential dense and dropout layers, which feed a sigmoid activation layer for predicting labels.
The number of these sub-branches equals the number of individual features in the combined dataset.
In the branch corresponding to propaganda techniques, we additionally use a maximum pooling layer to reduce the 18 individual propaganda technique labels to a single binary propaganda classification, predicting whether a propaganda technique is used.
The full architecture is illustrated in Figure <ref>.
Using this architecture, we obtain a vector representation of the size of the fine-grained features described in the dataset.
Note that this does not need to be the same as the size of the target output. It is not in this case, as we apply max-pooling to 18 entries of the output corresponding to propaganda techniques in order to obtain a binary label.
The network outputs a real-valued 10-dimensional vector which is then mapped to a binary vector of size 10 using individual thresholds for each label.
For the results produced in the main text of this paper we use a model with 17024 trainable parameters in addition to the parameters in small BERT.
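A condensed PyTorch sketch of this double-branching layout is shown below. The layer widths, dropout placement, and the stand-in encoder are illustrative assumptions rather than the exact configuration; in the real model the encoder is a fine-tuned small BERT producing 128-dimensional embeddings.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, p=0.4):
    # dense layer followed by ReLU and dropout, as used throughout the network
    return nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(p))

class MultiTaskArgMiner(nn.Module):
    """Shared encoder and trunk, three task-type branches, per-task heads."""

    def __init__(self, text_encoder, d_emb=128, d_hid=64,
                 tasks_per_type=(8, 1, 18)):   # IAC, IBM-Rank-30k, propaganda
        super().__init__()
        self.encoder = text_encoder
        self.shared = nn.Sequential(mlp(d_emb, d_hid), mlp(d_hid, d_hid))
        self.type_branches = nn.ModuleList(
            [nn.Sequential(mlp(d_hid, d_hid), mlp(d_hid, d_hid))
             for _ in tasks_per_type])
        self.heads = nn.ModuleList([
            nn.ModuleList([nn.Sequential(mlp(d_hid, d_hid), nn.Linear(d_hid, 1))
                           for _ in range(n)])
            for n in tasks_per_type])

    def forward(self, enc_in):
        h = self.shared(self.encoder(enc_in))            # shared trunk
        per_type = []
        for branch, heads in zip(self.type_branches, self.heads):
            z = branch(h)                                # task-type features
            per_type.append(torch.cat([head(z) for head in heads], dim=-1))
        iac, ibm, prop = per_type
        prop_binary = prop.max(dim=-1, keepdim=True).values   # pool 18 techniques to 1
        return torch.sigmoid(torch.cat([iac, ibm, prop_binary], dim=-1))  # 10 outputs

# quick shape check with a stand-in encoder in place of small BERT
model = MultiTaskArgMiner(text_encoder=nn.Linear(32, 128))
print(model(torch.randn(4, 32)).shape)   # torch.Size([4, 10])
```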
§.§ Loss Function
The loss function plays a crucial role in our multi-task learning approach, which relies on a mixed corpora corresponding to different task types.
The custom loss function is designed to handle the data size imbalance across task types, in addition to class imbalance, in order to effectively capture the contribution of each prediction to the overall loss.
Considering this, given predicted labels ŷ and true labels y, the total loss ℒ used in our gradient descent optimization is:
ℒ(ŷ | y) = ∑_k ν_k ℒ(ŷ | y, 𝒟_k),
where 𝒟_k denotes the set of data-point indices corresponding to task type k, and ν_k ∼ 1/|𝒟_k| are the task-type weights.
The loss for each task type k, which accounts for the class imbalance across output labels, is:
ℒ(ŷ | y, 𝒟_k) ∼ (1/|T_k|) ∑_j∈𝒟_k ∑_t∈T_k ∑_c∈𝒞_t w^c_t · l(ŷ_j | y_j = c),
where l(.) is the loss function, T_k denotes the set of tasks within task type k, and 𝒞_t is the corresponding set of classes.
The class weights w^c_t, which are proportional to the inverse of the enrichment of class c in task t within dataset k, counter the impact of class imbalance.
We use the binary cross-entropy loss for the loss function l throughout our computations.
In the implementation, the loss computation is vectorized using masked matrices to filter entries by task.
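A PyTorch sketch of this masked, class-weighted loss is given below; mask marks which labels are annotated for each example (an observation from one corpus carries no labels for the other corpora), class_weight plays the role of w^c_t, and type_weight the role of ν_k. The exact normalisation is a simplifying assumption.

```python
import torch
import torch.nn.functional as F

def multitask_bce(pred, target, mask, class_weight, type_weight, task_type):
    """
    pred, target, mask: (batch, n_tasks) tensors; mask is 1 where a label exists.
    class_weight: (n_tasks, 2) weights for the negative / positive class of each task.
    type_weight:  (n_types,) task-type weights, roughly 1 / |D_k|.
    task_type:    (n_tasks,) long tensor giving the type index k of each task.
    """
    # pick the class weight matching the true label of each entry
    w = torch.where(target > 0.5, class_weight[:, 1], class_weight[:, 0])
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    per_task = (mask * w * bce).sum(dim=0) / mask.sum(dim=0).clamp(min=1)
    # average tasks within each type, then weight the types by nu_k
    loss = pred.new_zeros(())
    for k in range(type_weight.numel()):
        loss = loss + type_weight[k] * per_task[task_type == k].mean()
    return loss
```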
§.§ Model Training
For training the parameters in our model, we take advantage of an array of optimization and training enhancement techniques.
We use an AdamW optimizer <cit.> for the stochastic gradient descent with an initial learning rate of 0.0003. To help avoid overfitting, we employ a weight decay rate of 0.01 and 40% dropout.
We use 5% of data for warmup, a batch size of 256, and stop training after two epochs without a decrease in loss.
We also incorporate threshold tuning, maximizing true positive rate while minimizing false positive rate, for optimal mapping of the sigmoid layer's output to binary labels.
All training hyperparameters are tuned through a standard grid search over 72 sets of hyperparameters and selected based on F1 score in the validation set.
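The corresponding optimizer, warmup, and threshold-tuning setup can be sketched as follows; the linear warmup schedule and the Youden-style threshold rule (maximising TPR minus FPR) are reasonable readings of the description above rather than guaranteed implementation details, and the step counts are illustrative.

```python
import numpy as np
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR
from sklearn.metrics import roc_curve

model = torch.nn.Linear(128, 10)          # stand-in for the multi-task model above
optimizer = AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)

total_steps = 10_000                       # illustrative
warmup_steps = int(0.05 * total_steps)     # 5% warmup
scheduler = LambdaLR(optimizer, lambda step: min(1.0, step / max(1, warmup_steps)))

def tune_threshold(y_true, y_score):
    # choose the validation threshold maximising TPR - FPR for one task
    fpr, tpr, thr = roc_curve(y_true, y_score)
    return thr[np.argmax(tpr - fpr)]

y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.2, 0.7, 0.6, 0.4, 0.9])
print("tuned threshold:", tune_threshold(y_true, y_score))
```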
§ EMPIRICAL RESULTS
We evaluate our multi-task model's performance in terms of prediction metrics, computational efficiency, and comparison against state-of-the-art (SOTA) metrics on the target labels.
We also offer evidence that the tasks we combine do indeed share important similarities by presenting
text embeddings and intermediate layer representations, in Figure <ref> and Figure <ref>.
We show that our model outperforms the SOTA on several tasks (Table <ref>), while being substantially more computationally efficient than single-task counterparts (Figure <ref>).
§.§ Commonalities Across Tasks
Our model was trained on three different corpora, described in section <ref>, which we argue possess important semantic similarities. To provide evidence of our 10 tasks existing within a common representation space, we present t-SNE projections <cit.> of the input text embeddings corresponding to each label at three different locations within the neural network. Figure <ref>, discussed in section <ref>, shows the t-SNE projection from the output of the BERT model we use as our base encoder. Points are color-coded according to their task. If our text data carried mutually exclusive information applicable only to the particular task for which it was labeled, we would see distinct clusters of representations in Figure <ref>.
There is some minor evidence of clustering, particularly with respect to the propaganda and argument quality tasks, but even those tasks have observations spanning the entire representation space, and they clearly mix with other task representations. This suggests that the fine-tuned BERT model is learning representations that reflect similar semantic and logical structures across tasks. We also highlight the fact that the clustering behavior within tasks observable in the figure shows that our model's embeddings are not completely discarding task-specific structure. Rather, our model learns task-specific representations, and those representations exist within a common space with other task-specific representations, thus further lending evidence to the theory behind our approach.
This pattern is largely preserved throughout the layers of our model. Figure <ref> presents similar t-SNE projections of two other intermediate layers: a shared layer (before any model branching occurs) and the final task-specific layer before the sigmoid activation (after the double-branching). Following the BERT model, each successive layer in the neural network gradually becomes more task-specific, and encodes information that is more relevant to distinguishing among tasks and among labels within tasks. It is notable, then, that we observe similar levels of clustering in the t-SNE projections regardless of model layer. Propaganda and argument quality tasks appear to inhabit more discernible regions of the representation space, but their clusters are neither well-defined nor tightly constrained.
We take this consistent pattern as evidence that AM tasks share a common semantic space. Enabling a model to learn these fine-grained similarities and differences between tasks and across task types is therefore likely to improve performance relative to models that rely solely on shared features or no sharing at all. We test this conjecture in the next section.
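For readers reproducing the projections used in this subsection, they can be computed with scikit-learn as sketched below; the embedding dimensionality, sample size, and perplexity are illustrative assumptions rather than the exact settings behind the figures.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))     # stand-in for the 128-d BERT embeddings
task_ids = rng.integers(0, 10, size=500)     # stand-in for the task of each example

proj = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
# proj[:, 0] and proj[:, 1] can then be scatter-plotted, colour-coded by task_ids
```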
§.§ Performance Evaluation
We evaluate the performance of our model primarily in terms of weighted F1 scores, which account for the class imbalances noted in Table <ref>. In comparison with the SOTA for each task (Table <ref>), our model shows superior performance in predicting all of the tasks for which we had SOTA information available. This is a significant achievement, as it indicates that effectively leveraging shared features does improve prediction performance across multiple tasks.
Table <ref> shows a comparison of the predictive performance (as measured by the class-weighted F1-score) between baselines, single-task, and multi-task versions of the same model. The baseline metrics represent random guessing and the unigram metrics are produced by a naive Bayes classifier. While baselines underperform all the deep-network based approaches, the multi-task model also outperforms single-task models based on the same encoder architecture in all but three of the tasks that comprise our data. Again, we take this as evidence that our multi-task model is capable of exploiting the common structure between tasks in order to obtain more accurate predictions. In Table <ref> in the Appendix, we show that this performance gain is not merely due to adding additional trainable parameters; multi-task models of various sizes perform comparably.
We further investigated the impact of changing the base encoding model from small BERT to small ELECTRA <cit.> and base ALBERT <cit.>. Table <ref> shows a comparison of performance across three different variants of our multi-task model. All three models have the architecture described in Section <ref>, however, the base encoder differs each time. Generally, multi-task models trained on different encoders seem to display similar performance, indicating that the gain in performance due to the adoption of our framework is not necessarily due to the specific architecture of the encoder chosen. This is further demonstrated by the comparison of performance for different base encoders across individual tasks, which is offered in Table <ref> of the Appendix.
§.§ Computational Efficiency
A key consideration, particularly when adding more trainable parameters as our model does, is whether the performance gain comes at the cost of more costly computation. We evaluate the peak GPU RAM usage and time to train our multi-task model and compare them to the same metrics from single-task models. We conduct this evaluation by randomly sampling 5%, 10%, 20%, and 40% of the training data to assess how computational load increases with data size. All models for this analysis were trained on one NVIDIA A100 GPU for one epoch. Figure <ref> displays the results.
Our multi-task model achieves better performance using substantially lower computational resources, proving the branched task-specific modules in our model architecture to be an effective, yet practical, strategy for learning fine-grained features for label prediction.
Comparing our model's performance with single-task classification on individual tasks (Table <ref>), we observe that our model can achieve comparable performance while decreasing the computation time by at least 31%.
Put together, these observations indicate that this multi-task learning approach simultaneously has a performance and computational efficiency advantage over single-task models. Computational efficiency plots for different multi-task model sizes are included in the Appendix for comparison.
§ CONCLUSION
Natural language tasks share substantial semantic and structural similarities, and deep learning models have been shown to be able to take advantage of these similarities in order to achieve better performance <cit.>. In this paper, we have further extended this result to the field of argument mining. We have shown that AM tasks do indeed share a substantial amount of features, and that these shared features can be used to boost model performance across previously unrelated tasks. We have combined three data sources and proposed models that outperform the state-of-the-art on several of these tasks. Our models are also more computationally efficient and have overall better predictive accuracy than single-task models with comparable architectures. Aside from the practical usefulness of our models, our results are important for argument mining as a field, as they suggest that further research and model building should focus on exploiting commonalities between different tasks to boost performance.
In future work, we propose to extend our analysis to several other AM tasks that share commonalities with those studied here <cit.>, as well as devising improved model architectures for our multi-task setting. In particular, we propose to take advantage of frameworks such as contrastive learning <cit.> to encode known similarities between tasks within the representations learned by the model.
§ LIMITATIONS
As with all proposed models, ours carries important limitations. Although we show in the Appendix that the choice of base encoder does not have a drastic effect on performance, we suspect that the performance of our models is largely dependent on the ability to fine-tune a base encoder. Indeed, baseline models using unigram features performed quite poorly. Fine-tuning large base encoders—not to mention training one from scratch—can be computationally expensive. However, transfer learning may be able to help. Common semantic and logical structures across tasks point to opportunities for using transfer learning or pre-trained models from warm start to re-train on new tasks.
Multi-task models also depend on data quality and sufficient semantic overlap across tasks. This is especially challenging in AM, as argument annotation is often highly subjective <cit.>, which can lead to noisy training data. Combining one low-quality dataset with other higher-quality ones may have a detrimental effect on model performance, as the model is unable to learn a shared representation space from noisy annotations, thus degrading performance on all tasks.
§ ADDITIONAL RESULTS
In this appendix, we compare the performance of our multi-task model with alternative designs and configurations for multi-task learning, in terms of model architecture, network size, and the base encoder.
§.§ Model Architecture
Table <ref> compares the performance of our multi-task model—which incorporates branched task-type and task-specific modules—with a standard “shared-trunk” alternative, which consists of only a small BERT encoder followed by a sigmoid activation layer. This comparison shows the utility of our model architecture.
Our multi-task model outperforms the shared-trunk model on all but two tasks, where the F1 metric is within 1 percentage point of that of the shared-trunk model.
This performance gain comes at a negligible memory cost and a small increase in computation time (Figure <ref>).
§.§ Network Size
We also compare the performance of the small multi-task model we presented in the main text with alternative networks that preserve the same architectural design but increase the sizes of the layers, from 17024 to 272384 and 438784 trainable parameters, following the base encoder.
This comparison shows that the superiority in performance, due to the task-type and task-specific modules, is consistent across various network sizes and is not simply due to adding more trainable parameters on top of the shared trunk (Table <ref>).
Moreover, Figure <ref> further confirms that the layers following the BERT encoder are responsible only for a negligible increase in usage of computational resources, as multiplying the combined size of those layers by 16 (Multi-Task, Medium) and 32 (Multi-Task, Large) does not result in a substantial increase in elapsed time for training or peak GPU memory usage.
§.§ Alternative Embedding Models
In addition to comparing our model with other multi-task models, we also compare it to other base encoders.
In particular, we deploy base ALBERT <cit.> and small ELECTRA <cit.>, replacing the small BERT encoder with each of these other base encoders in our multi-task model.
Although small BERT achieves the best average performance across different tasks, as the results in Table <ref> suggest, using ELECTRA yields an average F1 score within 2 percentage points of that of small BERT, while ALBERT shows more variability across tasks with a lower average F1 score.
|
http://arxiv.org/abs/2307.02461v1
|
20230705173414
|
Landscape approximation of low energy solutions to binary optimization problems
|
[
"Benjamin Y. L. Tan",
"Beng Yee Gan",
"Daniel Leykam",
"Dimitris G. Angelakis"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.dis-nn"
] |
We show how the localization landscape, originally introduced to bound low energy eigenstates of disordered wave media and many-body quantum systems, can form the basis for hardware-efficient quantum algorithms for solving binary optimization problems. Many binary optimization problems can be cast as finding low-energy eigenstates of Ising Hamiltonians. First, we apply specific perturbations to the Ising Hamiltonian such that the low energy modes are bounded by the localization landscape.
Next, we demonstrate how a variational method can be used to prepare and sample from the peaks of the localization landscape.
Numerical simulations of problems of up to 10 binary variables show that the localization landscape-based sampling can outperform QAOA circuits of similar depth, as measured in terms of the probability of sampling the exact ground state.
§ INTRODUCTION
Finding optimal solutions to Quadratic Unconstrained Binary Optimization (QUBO) problems is one proposed near term application of quantum computers <cit.>. Solving large-scale QUBO problems is important for scheduling and allocation tasks <cit.>, machine learning <cit.>, and other applications <cit.>.
The search for these optimal solutions is generally difficult, as QUBO problems are NP-hard <cit.>. However, in many cases obtaining approximate solutions close to the optimal can be sufficient. This is especially true within the context of industry applications where a higher quality solution, despite being sub-optimal, may still result in significant cost savings <cit.>.
Commonly employed techniques for solving QUBO problems using quantum computers are typically based on mapping the QUBO problem at hand to an Ising Hamiltonian, solving the problem by finding the ground state of the Ising Hamiltonian.
Quantum algorithms to find the ground state include Quantum Annealing <cit.>, variational problem-specific algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) and its generalizations <cit.>, Variational Quantum Eigensolvers <cit.>, and Quantum Assisted methods <cit.>.
Methods such as Quantum Annealing and QAOA have shown provable convergence to the exact ground state in the limit of infinite annealing time and circuit depth. In many applications it is necessary to obtain a solution in a finite time, in which case these methods will ideally give a mixture of low-lying states. Hyperparameters such as the annealing schedule/path, mixer Hamiltonian, number of QAOA steps, and the sampling frequency thresholds for QAOA can affect the quality of the obtained solutions <cit.>.
Here we consider a different approach.
Instead of an exact method that, when run on a finite-sized circuit, gives approximate solutions whose quality is difficult to predict, we consider a scheme to sample from low energy solutions with well-defined bounds, to solve the QUBO problem approximately using shallower-depth circuits. Our approach is inspired by the “localization landscape” used to study the Anderson localization of low energy modes of disordered systems.
Anderson localization is the phenomenon whereby eigenfunctions of wave equations with disordered potentials become spatially confined due to wave interference <cit.>. Finding the locations where these quantum states localize typically requires solving the eigenvalue problem, as there is often seemingly little correlation between the potential and the subregions where the peaks of these eigenfunctions are found. Efforts to identify these regions of localization resulted in the localization landscape (LL) function <cit.>.
The localization landscape is a function that places a tight bound on the subregions where low energy states tend to lie.
The inverse of the landscape function serves as an effective potential that can be used to predict areas of confinement for low energy eigenstates by identifying valleys within this effective potential.
Since its introduction, efforts have gone into using the localization landscape to obtain the integrated density of states, thereby giving an estimate for the energies of the lowest eigenstates for the 1D tight binding model <cit.>. The original localization landscape function loses its accuracy when attempting to accurately identify localized regions of higher energy eigenstates, motivating the development of related landscape functions such as the ℒ^2 landscape <cit.>. The ℒ^2 landscape is able to provide a tight bound on the localization of mid-spectrum eigenstates, and can be efficiently computed with a stochastic procedure using sparse matrix methods <cit.>.
Remarkably, landscape functions can be generalized beyond low-dimensional disordered systems to more general families of real symmetric matrices (M-matrices) <cit.>. The localization landscape theory has also been applied to many-body quantum systems <cit.>, extending many of its well-known properties to Hamiltonians describing interacting spins, enabling the identification of regions of Hilbert space where the low-energy many-body eigenstates localize. Qualitative changes in the shape of the landscape, e.g. quantified using methods such as persistent homology, can be used as indicators of phase transitions in many-body quantum systems <cit.>.
In this work, we present a method of using the localization landscape to prepare a quantum state from which low energy solutions to QUBO problems can be sampled with higher probabilities. We describe how this quantum state can be prepared on a near term quantum device, and demonstrate our methods for two problem instances — a non-degenerate randomly generated QUBO, and a degenerate MaxCut problem <cit.>.
The outline of this paper is as follows: Sec. <ref> reviews the localization landscape and its application to Anderson localization and many-body localization.
Sec. <ref> presents the mapping of QUBO problems to Ising Hamiltonians, showing how the Ising Hamiltonian can be perturbed such that its low-energy eigenstates are bounded by the localization landscape and proposing a heuristic using shallow variational circuits for sampling from this landscape suitable for noisy intermediate-scale quantum (NISQ) devices.
Sec. <ref> presents numerical simulations showing how the method can be used to sample low energy solutions with higher probability than shallow QAOA circuits. We analyze the effect of the two hyperparameters of the method (the energy offset and coupling strength) in Sec. <ref> before concluding with Sec. <ref>.
§ LOCALIZATION LANDSCAPE
Given a disordered Hamiltonian Ĥ, finding the regions where eigenstates localize typically requires solving the eigenvalue equation.
However, Ref. <cit.> introduced a function called the localization landscape, u, that is able to predict the subregions where the eigenstates of Ĥ peak, with the requirement that the inverse of Ĥ is non-negative, i.e. Ĥ^-1 ≥ 0.
The landscape function u is the solution to the following differential equation
Ĥu = 1⃗
where 1⃗ is a vector of all 1's. For an eigenstate | ϕ^β⟩ of Ĥ expressed in an orthonormal basis {|J⟩} with energy E^β, u can be expressed as <cit.>:
u_J = ∑_β (⟨ J | ϕ^β⟩ / E^β) ∑_m ⟨ m | ϕ^β⟩.
where u_J is the J^th component of u and the summation is performed over all the basis states.
Originally developed to predict areas of localization for a single particle system in a random potential with Dirichlet boundary conditions, u has the useful property of being able to bound the eigenstate amplitudes according to their energies.
An effective potential, W = 1/u, can be defined from the inverse of the landscape function, and the regions where low-energy eigenstates peak correspond to minima of W, providing greater insight into the regions of localization than the original potential, which is seemingly uncorrelated with these regions.
Ref. <cit.> extended the concept of a localization landscape to many-body systems, showing that u bounds the eigenstate amplitudes of Ĥ according to:
| ⟨ J | ϕ^β⟩ | = | E^β | | ∑_m (Ĥ^-1)_Jm ⟨ m | ϕ^β⟩ |
≤ | E^β | ‖ϕ⃗^β‖_∞ ∑_m (Ĥ^-1)_Jm
= | E^β | ‖ϕ⃗^β‖_∞ u_J
where ‖ϕ⃗^β‖_∞ = max_m (|⟨ m | ϕ^β⟩|) is the infinity norm of ϕ⃗^β, and the definition of u in Eq. (<ref>) was used to get from Eq. (<ref>) to Eq. (<ref>).
This extension of the localization landscape to many-body systems also places additional considerations on Ĥ for these bounds to hold, namely that sufficiently short-ranged hopping in Ĥ is required.
For a Fock space graph 𝒢_F where nodes correspond to the N-spin states and edges connect state transitions according to the the hopping terms in the potential, this can be realized by maintaining the maximum degree of the Fock space graph 𝒢_F to be linear in N.
Further efforts in Ref. <cit.> explored the useful properties of the landscape function beyond disordered wave media, laying out additional constraints on the matrix form of Ĥ for these bounds to hold. More generally, Ĥ can be a positive semidefinite matrix with Ĥ_ij≤ 0 for i≠ j, and Ĥ_ij≥ 0 for i=j.
§ SAMPLING FROM THE LANDSCAPE FUNCTION
Our intention with this work is to prepare a quantum state |u⟩ that represents the localization landscape function u, from which exact solutions to Eq. (<ref>) can be sampled with probability α| ⟨x⃗^* |u ⟩|^2, where α is the number of degenerate solutions to the problem.
Other low energy solutions can also be sampled with probabilities inversely proportional to their energy, as suggested by Eq. (<ref>).
§.§ Quadratic Unconstrained Binary Optimization
The QUBO problem can be represented as
Find x⃗^* = argmin_x⃗ 𝒞_Q(x⃗), where 𝒞_Q(x⃗) = x⃗^⊤ 𝒜 x⃗.
Finding optimal solutions to QUBO problems, x⃗^*, is NP-hard in general <cit.>, and serves as a strong impetus for designing classical and quantum heuristics to find approximate solutions.
Quantum algorithms used to solve QUBO problems typically begin by mapping the QUBO cost function to an Ising Hamiltonian of the form,
Ĥ_Ising = (1/4) ∑_ij^N 𝒜_ij (σ̂^z_i + Î)(σ̂^z_j + Î)
By mapping each binary variable in x⃗ to a qubit, the expectation value of Ĥ_Ising attains a minimum of 𝒞_Q(x⃗^*) from Eq. <ref>, and the QUBO problem can be solved by finding the ground state, |x⃗^*⟩, that minimizes Ĥ_Ising.
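Because Ĥ_Ising is diagonal in the computational basis, its spectrum can be enumerated directly for small N. The following NumPy sketch does this for a random instance; the problem matrix, size, and seed are illustrative, and the identification of basis states with bitstrings is up to the usual labelling convention for (σ̂^z + Î)/2.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.uniform(-1, 1, size=(N, N))
A = (A + A.T) / 2                          # symmetric QUBO matrix

def qubo_cost(x, A):
    x = np.asarray(x, dtype=float)
    return x @ A @ x                       # C_Q(x) = x^T A x

# the diagonal of H_Ising contains exactly these values over the 2^N basis states
bitstrings = list(itertools.product([0, 1], repeat=N))
energies = np.array([qubo_cost(x, A) for x in bitstrings])
best = int(np.argmin(energies))
print("optimal x*:", bitstrings[best], " C_Q(x*):", round(float(energies[best]), 3))
```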
§.§ Fitting the constraints
In general, Ĥ_Ising in Eq. (<ref>) does not satisfy the aforementioned constraints for the landscape Eq. (<ref>) to bound the support of the low energy eigenstates.
However, the constraints can be satisfied by introducing the following transformation accompanied by two hyperparameters Γ and λ:
Ĥ = Ĥ_Ising + ΓÎ - λ∑_i^N σ̂_i^x
where Î is the identity matrix.
The role of Γ is to add a positive offset to the diagonal elements of Ĥ_Ising that is at least as large as its largest negative eigenvalue.
However, the largest negative eigenvalue is typically not known a priori, as it requires solving Eq. (<ref>); in practice it is adequate to pick a sufficiently large value heuristically, which can then be further fine-tuned.
The ground state of Ĥ_Ising in Eq. (<ref>) is a basis state in the computational Z-basis.
For problems with symmetries, such as the ℤ_2 symmetry in MaxCut problems <cit.>, finding the exact ground state can lead to further ground states with the same energy.
In general, being able to find the ground state or an approximate ground state provides little information on nearby states with similar energy values, although there are heuristics that attempt to find “nearby” solutions in terms of energy <cit.>.
The role of λ is to introduce a mixing parameter into Ĥ_Ising to increase the overlap between states that are similar in terms of energy levels.
This is done so that the ground state of Ĥ in Eq. (<ref>) will contain components of surrounding low energy eigenstates of Ĥ_Ising.
It is worth noting that by parameterizing λ = λ(t), Eq. (<ref>) is often used as the Hamiltonian in Quantum Annealing, where one starts in the ground state of an easy-to-solve Hamiltonian in the large λ limit and adiabatically decreases λ(t) to zero to obtain the ground state of H_Ising.
The conditions imposed on Ĥ at the end of Sec. <ref> and the negative sign in Eq. (<ref>) restrict λ to positive values, λ > 0.
The Hamiltonian Ĥ can be visualized using a Fock space graph, 𝒢_F, where nodes representing states of Ĥ_Ising are connected by an edge if they are one spin flip away, corresponding to the potential term λ∑_i^N σ̂_i^x in Eq. (<ref>).
An example of 𝒢_F for an N=4 Hamiltonian, with a randomly generated Ĥ_Ising and randomly chosen Γ and λ values satisfying these criteria, is shown in Fig. <ref>.
The similarities between the peak amplitudes of the localization landscape and the low energy states of Ĥ_Ising at each site can be observed, along with their decay based on the Hamming distance to the optimal solution (although the rate of decay is different).
The short-ranged hopping condition for the many-body localization landscape outlined at the end of Sec. <ref> is satisfied by 𝒢_F having a maximum degree of N.
Thus, we have shown that the QUBO problem can be mapped to a Hamiltonian whose low energy eigenstates are bounded by the localization landscape, at the cost of introducing two hyperparameters Γ and λ, which control the tightness and extent in Hilbert space of the bounds provided by the landscape, respectively. While the process of finding the optimal values of λ and Γ for each problem instance is beyond the scope of this work, we will show some results on how they can affect the probability of sampling the optimal solutions in Sec. <ref>.
§.§ Preparing the landscape function
Once we have the transformed Hamiltonian Ĥ the final step is to prepare |u⟩, the state that corresponds to the landscape function of Ĥ. Then, measuring |u⟩ in the computational basis will sample bitstrings corresponding to peaks of |u⟩ (or equivalently, valleys of the landscape) with higher probability. |u⟩ can be obtained by solving the qubit-analogue of Eq. <ref> as a linear system of equations using a quantum device:
Ĥ|u⟩ = |+⟩
where we use |+⟩ to denote the N-qubit superposition state |+⟩^⊗ N.
In this work, we use a variational method <cit.> to prepare |u⟩ using a variational ansatz |ψ(θ⃗)⟩ = Û(θ⃗)|0⟩.
This is achieved by minimizing the following variational cost function:
f_v(θ⃗) = [⟨ψ (θ⃗) | Ĥ|ψ (θ⃗)⟩ - ⟨ψ (θ⃗) |+⟩]^2
which is constructed from Eq. (<ref>) by taking the inner product with |u⟩ on both sides of the equation, squaring the difference between the two terms, and replacing |u⟩ with a variational ansatz.
We note that using a variational approach comes with several potential issues, namely the risk of encountering barren plateaus <cit.> or having limited expressibility where only a portion of the target state overlaps with states that can be produced.
Notably, variational quantum algorithms can also be difficult to optimize <cit.>.
Despite the plethora of issues, we pursue the variational approach here for its simplicity when implemented on NISQ devices and compatibility with shallow hardware-efficient circuits.
Common techniques used to mitigate the effects of barren plateaus can also be applied <cit.>, although this was not required in obtaining the presented results.
We contrast our variational approach here to traditional variational quantum approaches to QUBO problems, where the ground state of Ĥ_Ising is typically not known, and the variational method is used to search for the optimal state, as opposed to preparing a known state.
One major advantage of our method is that the target state in our case is known and |u⟩ can be obtained using any of the existing approaches for solving linear equations of the form A|x⟩ = |b⟩, such as the well known HHL algorithm <cit.>.
Other, more NISQ-friendly methods for solving linear equations include variational methods such as the Variational Quantum Linear Solver (VQLS) <cit.>, the Classical Combination of Variational Quantum States (CQS) <cit.>, and the Hybrid Classical-Quantum Linear Solver <cit.>.
Regardless, the choice of method used to prepare |u⟩ does not affect the validity of the following results.
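For small instances, the landscape can also be computed classically as a reference by building the perturbed Hamiltonian explicitly and solving the linear system of Eq. (<ref>) directly. The NumPy sketch below does this for a random instance; the problem matrix and the Γ, λ values are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.uniform(-1, 1, size=(N, N)); A = (A + A.T) / 2

# diagonal part: QUBO cost of every computational basis state
bits = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
diag = np.einsum("bi,ij,bj->b", bits, A, bits)

# transverse-field term sum_i sigma_x^i built from Kronecker products
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
def sigma_x_on(i, n):
    op = np.array([[1.0]])
    for j in range(n):
        op = np.kron(op, sx if j == i else np.eye(2))
    return op
X = sum(sigma_x_on(i, N) for i in range(N))

Gamma, lam = 8.5, 0.3                      # illustrative hyperparameters
H = np.diag(diag + Gamma) - lam * X        # perturbed Hamiltonian

u = np.linalg.solve(H, np.ones(2 ** N))    # landscape: H u = vector of ones
probs = u ** 2 / np.sum(u ** 2)            # sampling distribution from |u>
for k in np.argsort(-probs)[:5]:
    print(bits[k].astype(int), " C_Q:", round(float(diag[k]), 3),
          " prob:", round(float(probs[k]), 3))
```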
§ RESULTS
To demonstrate the effectiveness of using the landscape function to solve QUBO problems and our variational approach to prepare |u⟩, we apply the methods described above to two problem instances with N=10 variables — a randomly generated fully-connected QUBO matrix 𝒜, with 𝒜_ij∈ [-1,1] to showcase the non-degenerate case, and a randomly generated 3-regular MaxCut problem (formulated as a minimization problem) as a common example of a problem with multiple degenerate solutions.
Using the landscape approximation for QUBO is generally problem agnostic, and in later sections, we use the non-degenerate case to further investigate the behaviour of the landscape function, and the degenerate case as an example of how prior knowledge about a structured problem can be used to improve the quality of the solutions.
For small problem sizes, the optimal solutions can be computed exactly, and the minimum QUBO cost function values for our MaxCut and randomly generated instances are 𝒞^MC_Q(x⃗^*) = -12 and 𝒞^rand_Q(x⃗^*) ≈ -7.895, respectively.
To fit our problems to the constraints, we used a value of Γ=8.5 and λ=0.3 for the randomly generated QUBO instance, and Γ=13 and λ=0.3 for the MaxCut instance.
Figure <ref> shows the landscape function u of the respective perturbed Hamiltonians Ĥ for both the randomly generated QUBO instance and the MaxCut problem, compared with the 4 lowest energy eigenstates for the Ising Hamiltonians representing the two problem instances.
We observe that the peaks of the landscape function u line up with basis states of Ĥ which are the lowest energy eigenstates of Ĥ_Ising.
Based on Fig. <ref>, we aim to prepare a target state |u⟩ whose amplitudes have a structure similar to that of u shown in Fig. <ref>, from which these low-energy states of Ĥ_Ising can be sampled with high probability.
For both problem instances, we used the same variational ansatz consisting of an initial layer of Hadamard gates on all qubits, followed by 4 alternating layers of R_y(θ) rotations on all qubits and nearest neighbour CNOT entangling gates in a linear topology, keeping all the coefficients of the quantum state real.
We used COBYLA <cit.>, a gradient-free classical optimizer, to search for the optimal parameters that minimizes Eq. <ref> from 10 initial starting set of θ⃗ angles.
Gradient-based optimizers can also be used <cit.>, and we show how the gradient of f_v can be obtained in Appendix <ref> using a gate-based circuit that produces a quantum state with only real coefficients.
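A minimal PennyLane sketch of this ansatz and of the variational cost f_v is given below. The Hamiltonian is a diagonal placeholder (in practice the perturbed Ĥ built as above would be used), the qubit count is reduced for brevity, and the optimizer settings are illustrative.

```python
import numpy as np
import pennylane as qml
from scipy.optimize import minimize

N, LAYERS = 4, 4
dev = qml.device("default.qubit", wires=N)

def ansatz(theta):
    for w in range(N):
        qml.Hadamard(wires=w)                      # initial layer of Hadamards
    for l in range(LAYERS):
        for w in range(N):
            qml.RY(theta[l * N + w], wires=w)      # R_y rotation layer
        for w in range(N - 1):
            qml.CNOT(wires=[w, w + 1])             # linear-topology entanglers

@qml.qnode(dev)
def state(theta):
    ansatz(theta)
    return qml.state()

H = np.diag(np.linspace(1.0, 6.0, 2 ** N))         # placeholder for the perturbed H
plus = np.full(2 ** N, 1.0 / np.sqrt(2 ** N))      # the |+> state

def cost(theta):
    psi = state(theta)                             # real amplitudes for this ansatz
    energy = np.real(np.conj(psi) @ H @ psi)       # <psi|H|psi>
    overlap = np.real(np.conj(psi) @ plus)         # <psi|+>
    return float((energy - overlap) ** 2)          # f_v

theta0 = 0.01 * np.random.default_rng(0).standard_normal(LAYERS * N)
res = minimize(cost, theta0, method="COBYLA", options={"maxiter": 2000})
print("final f_v:", res.fun)
```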
We compare the expectation value of Ĥ_Ising obtained using our post-optimized state, |ψ(θ⃗^*)⟩, and from |u⟩ obtained from inverting Ĥ in Eq. (<ref>).
We also compare the final Ĥ_Ising values obtained using both our landscape method and from using the QAOA with p=1 layers. Finally, we compare the solutions obtained by sampling from our optimized ansatz, from |u⟩, and from |ψ(γ, β)⟩_QAOA with p=1 as defined in Appendix <ref>.
All simulations were conducted using the statevector simulator (i.e. number of shots →∞) in PennyLane <cit.>.
In Fig. <ref> we show the optimization runs used to prepare |u⟩ with our variational ansatz, together with the quality of the solutions obtained after every 200 COBYLA iterations, evaluated using the classical QUBO cost function 𝒞_Q in Eq. (<ref>), for both the MaxCut and random QUBO instances.
Also shown in Fig. <ref> are comparisons between Ĥ_Ising from preparing the exact |u⟩, from randomly sampling bitstrings over a uniform distribution, from optimizing the QAOA with p=1, and from our variational ansatz after optimization ⟨Ĥ_Ising (θ⃗^*) ⟩ = ⟨ψ(θ⃗^*) | Ĥ_Ising | ψ(θ⃗^*) ⟩.
Further information regarding our implementation of the QAOA and the number of CNOT gates used can be found in Appendix <ref>.
As observed in Fig. <ref>, the mean QUBO cost function of the bitstrings sampled every 200 iterations tends towards ⟨ u |Ĥ_Ising| u ⟩ as the ansatz converges to a state representing |u⟩.
In both problem instances, being able to prepare |u⟩, whether exactly or using our simple circuit ansatz, allows one to obtain a lower cost function value compared with the QAOA.
For the non-degenerate case, being able to prepare and sample from the exact landscape function state |u⟩ brings us closer to the optimal 𝒞_Q value compared to p=1 of the QAOA.
This is likely due to our specific problem and choice of hyperparameters, where the mixing introduced by λ in Eq. <ref> is small compared to the difference between the lowest two energy states of Ĥ_Ising for the non-degenerate case.
This causes the ground state of Ĥ_Ising to be the dominant basis state in the ground state of Ĥ and preparing |u⟩ will produce a strong peak at |x⃗^*⟩.
§ EFFECT OF HYPERPARAMETERS
In this section we explore how the hyperparameters Γ and λ affect the quality of the solutions obtained for the non-degenerate case, although similar properties hold for degenerate problem instances as well.
To properly characterize the capabilities of |u⟩, the results presented from this section on are limited to states that can be prepared exactly.
As mentioned in Sec. <ref>, the variational method in Sec. <ref> was mainly an example of how |u⟩ can be prepared quickly using NISQ-friendly methods, and other methods can be used to prepare |u⟩ with potentially higher accuracy.
We begin by noting that the optimal solution to the QUBO problem in Eq. (<ref>) can be represented by a computational basis state of Ĥ_Ising in Eq. (<ref>) used to construct Ĥ.
Using Eq. (<ref>), we can find the probability amplitude associated with sampling |x⃗^*⟩ if we had prepared the ground state of Ĥ:
u_x^* | E^β| ‖ϕ^β‖_∞ ≥ | ⟨x⃗^* | ϕ^β⟩|
where in this case we let |ϕ^β⟩ and E^β be the ground state and ground state energy of Ĥ, respectively.
For small values of λ, we can expand the denominator using first order perturbation theory and express E^β in terms of Γ, λ, and the ground state energy of Ĥ_Ising (i.e. E^* = 𝒞_Q(x⃗^*)).
u_x^* ≥ | ⟨x⃗^* | ϕ^β⟩| / ( | E^β| ‖ϕ^β‖_∞ )
≈ | ⟨x⃗^* | ϕ^β⟩| / ( | E^* + Γ + λ⟨x⃗^*|∑_i σ̂_i^x| x⃗^*⟩| ‖ϕ^β‖_∞ )
The left hand side of Eq. (<ref>), u_x^*, is not | ⟨x⃗^* | u ⟩ |, since the landscape function u from Eq. (<ref>) is not a normalized state. Nevertheless, it is related to the probability amplitude of sampling |x⃗^*⟩ from |u⟩ and it is still in our interest to maximize it.
According to Eq. (<ref>), this can be done by choosing Γ to be as close to -E^* as possible.
Figure <ref>(a) shows the probability of sampling the optimal solution, x⃗^* from |u⟩ as a function of both Γ and λ.
Values of λ > 1 are beyond the perturbative regime used in the approximation in Eq. (<ref>).
Shown in Fig. <ref>(b) is the Hamming distance between the optimal solution x⃗^* and the vector x⃗ that has the highest probability of being sampled from |u⟩. This can be expressed more succinctly as
d(x⃗^*, argmax_x⃗ |⟨x⃗ | u ⟩|^2 ),
where d(x⃗_1,x⃗_2) is the Hamming distance between x⃗_1 and x⃗_2.
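As a small sketch of this quantity (assuming u is available classically as a length-2^N array of landscape amplitudes and x_star is the known optimal bitstring), the Hamming distance in the expression above could be evaluated as follows; the bit ordering of the basis-state index is an assumption.

import numpy as np

def hamming_gap(u, x_star):
    # Hamming distance between x* and the most probable bitstring of |u>.
    N = len(x_star)
    best = int(np.argmax(np.abs(np.asarray(u)) ** 2))
    x_best = np.array([(best >> (N - 1 - k)) & 1 for k in range(N)])
    return int(np.sum(x_best != np.asarray(x_star)))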
The results in Fig. <ref> also suggest that sampling solutions close to the optimum favours choosing λ as large as possible while still respecting the constraints for a given E^* + Γ, which should be as close to 0 as possible.
In Fig. <ref>, we compare the total probability of sampling solutions with Hamming distance d away from the optimal solution for the randomly generated QUBO problem from 3 states — |u⟩ , the ground state of the perturbed Hamiltonian Ĥ, and the same optimal state of the QAOA with p=1 in Fig. <ref>.
These probabilities change as a function of Γ and λ.
In Fig. <ref>(a), Γ and λ are too small, so there is insufficient mixing between the low-energy states of the corresponding Ĥ_Ising, and preparing the landscape function gives a probability of sampling the ground state of Ĥ_Ising similar to that obtained from the ground state of the perturbed Hamiltonian.
This can be desirable in most cases where we are only interested in the optimal solution to the QUBO problem.
In contrast, for cases where Γ and/or λ are too large, such as in Fig. <ref>(b,c), the majority of the solutions sampled will be approximately N/2 Hamming distances away.
For large values of λ, the perturbation term in Ĥ becomes dominant compared to the ZZ-interactions in Ĥ_Ising, and |u⟩ tends toward the uniform superposition state in the computational basis.
However, there is also an interesting regime in Fig. <ref>(d) where, for well-chosen values of Γ and λ, sampling from |u⟩ is able to provide the optimal solution with a high probability along with nearby solutions in terms of Hamming distance.
In practice, this can be used to find the optimal bitstring from just a handful of samples on |u⟩.
On the other hand, sampling from the optimal state produced by the QAOA with p=1 results in the majority of the samples being approximately N/2 in Hamming distance away from the optimal solution.
We note that in all of these cases, the probability of obtaining the optimal solution x⃗^* from |u⟩ is still higher than any other bitstring, although it may not form the majority of the samples obtained.
The results presented so far concerned two different problem instances with N=10.
Figure <ref> represents an initial foray into how the localization landscape approach scales with problem size, as well as how prior knowledge of the problem can be used to increase the probability of sampling optimal solutions.
For each problem type (random QUBO and MaxCut instances), the probability of sampling eigenstates of Ĥ_Ising (x⃗^*, x⃗_1, and x⃗_2) corresponding to the 3 distinct lowest energy values (E^*, E_1, and E_2) are plotted against the problem size N, and compared against the probability obtained from sampling the optimal solution from a uniform distribution.
For the MaxCut instances in Fig. <ref>(b), the curves show the total probability, i.e. α| ⟨x⃗_i | u ⟩|^2, where α is the number of degenerate states corresponding to energy E_i.
At first glance, it is worth noting that while the exponentially decreasing probability of sampling the optimal solution may pose a glaring issue, especially for the random QUBO instance, the probability of sampling x⃗^* remains consistently above that of random sampling.
As mentioned earlier, a “good" choice of Γ would be one that is as close to -E^* as possible.
For any QUBO problem, -∑_i,j|𝒜_ij| is a lower bound of 𝒞_Q and an initial value Γ = 1.1×∑_i,j|𝒜_ij| can be used for unstructured problems.
For a MaxCut problem, one can use the negative of the maximum possible number of cut edges in the graph as a lower bound for 𝒞_Q.
This is equal to the total number of edges, n_e = Nd/2, for a d-regular graph.
For 3-regular graphs, one can choose Γ = 3N/2 + 1 which is typically less than ∑_i,j|𝒜_ij| to obtain a much higher probability in sampling x⃗^*, as observed when comparing the solid and dotted lines in Fig. <ref>(b).
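For concreteness, the Γ heuristics above could be written as in the short sketch below; A is assumed to be the QUBO matrix as a NumPy array, and the MaxCut rule applies to 3-regular graphs.

import numpy as np

def gamma_unstructured(A):
    # Generic choice for an unstructured QUBO: -sum_ij |A_ij| lower-bounds C_Q.
    return 1.1 * np.abs(np.asarray(A)).sum()

def gamma_maxcut_3regular(N):
    # 3-regular MaxCut: the number of edges n_e = 3N/2 bounds the cut size.
    return 3 * N / 2 + 1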
We used a value of λ=0.07 Γ for the random QUBO instances, and λ=0.03 Γ for the MaxCut problems to fulfill the constraints in Sec. <ref> for the instances considered in Fig. <ref>.
However, these values may not be valid or ideal for all QUBO problems in general.
In all problem instances, the exponential decrease in probability can be ameliorated with further tuning of Γ and λ for the specific instance.
Another interesting observation of Fig. <ref>(b) is how higher energy states can have a greater overall probability of being sampled compared to the optimal solution.
This can be explained by the increase in number of degenerate states closer to the middle of the energy spectrum.
As shown in Fig. <ref>(b), the probability of sampling individual ground states is still dominant compared to the other states.
§ DISCUSSION AND CONCLUSION
A key part in obtaining the results in this work was perturbing Ĥ_Ising, which is diagonal in the computational basis, with a uniform transverse magnetic field ∑_i^N σ̂_i^x, equivalent to a uniform nearest-neighbour hopping on an N-dimensional hypercube.
This was done to controllably smear out the eigenstates of Ĥ_Ising in the Fock space, allowing for the QUBO problem to be solved approximately by sampling from the solutions of the easier-to-solve landscape problem.
As we have observed, the quality of the resulting solutions will depend on the strength and form of the perturbing potential, and the properties of alternative perturbation terms provide interesting avenues for further exploration.
For example, one may replace the perturbative term with a number-conserving perturbation (arising in the case of models of many-body localization), such as ∑_i,j(σ̂_i^+ σ̂_j^- + h.c.) where σ̂^± = 1/2 (σ̂^x± i σ̂^y), leading to landscape functions that explore decoupled subspaces of the full Hilbert space as in Ref. <cit.>.
Such a perturbative term may be more useful when sampling solutions to QUBO problems involving hard constraints, such as those requiring the number of spin excitations to be preserved.
Another interesting avenue for exploration would be to consider QUBO problems where the eigenvalue spectrum of the Hamiltonian encoding the problem is skewed towards having a few low energy states separated from many high energy states by a large gap.
These types of QUBO problems are typically present in industry-relevant contexts, where the use of a penalty term when constructing the unconstrained problem causes all solutions that do not satisfy any constraints to have very high costs.
By preparing the landscape function, it should be possible to prepare a state such that solutions satisfying the constraints can be sampled much more easily, and the optimal solution can then be found from this smaller, finite group of samples.
In conclusion, we showed how to apply the localization landscape theory used to find localized regions of low energy eigenstates in many-body systems to prepare quantum states that can be used to sample low energy solutions to the QUBO problem with high probability. We demonstrated our methods on two problem instances, a randomly generated MaxCut problem <cit.> exemplifying the degenerate case and a randomly generated QUBO problem for the non-degenerate case, and showed that by preparing a state, |u⟩, representing the landscape function, low energy solutions to the Ising Hamiltonian corresponding to the QUBO problem can be sampled with higher probability. An advantage of the approach is that the good solutions can be sampled using relatively shallow circuits, minimizing the effect of gate noise and decoherence present in current noisy intermediate-scale quantum processors.
§ ACKNOWLEDGEMENTS
We acknowledge support from the National Research Foundation, Prime Minister’s Office, Singapore and A*STAR under the CQT Bridging Grant and the Quantum Engineering Programme (NRF2021-QEP2-02-P02), and by the EU HORIZON-Project101080085—QCFD.
§ GRADIENT CALCULATION
In this Appendix, we show how the derivative of the cost function, ∂ f_v/∂θ_i, can be obtained in-situ using a quantum device and the parameter shift rule.
f_v = [⟨ψ (θ⃗) | Ĥ|ψ (θ⃗)⟩ - ⟨ψ (θ⃗) |+⟩]^2
∂ f_v/∂θ_i = 2 [⟨ψ (θ⃗) | Ĥ|ψ (θ⃗)⟩ - ⟨ψ (θ⃗) |+⟩]
×∂/∂θ_i[⟨ψ (θ⃗) | Ĥ|ψ (θ⃗)⟩ - ⟨ψ (θ⃗) |+⟩]
From here, we will proceed term by term. Using the parameter shift rule:
∂/∂θ_i⟨ψ (θ⃗) | Ĥ|ψ (θ⃗)⟩ = 1/2[ ⟨Ĥ⟩(θ_i + π/2) - ⟨Ĥ⟩(θ_i - π/2) ]
To evaluate ∂/∂θ_i( ⟨ψ (θ⃗) |+⟩), we observe that for a real quantum state |ψ (θ⃗)⟩:
∂/∂θ_i | ⟨ψ (θ⃗) |+⟩ |^2 = 2 ⟨ψ (θ⃗) |+⟩ ∂/∂θ_i( ⟨ψ (θ⃗) |+⟩)
and
∂/∂θ_i | ⟨ψ (θ⃗) |+⟩ |^2 = ∂/∂θ_i( ⟨ψ (θ⃗) | + ⟩⟨ + | ψ (θ⃗)⟩ ).
Putting Eq. (<ref>) and Eq. (<ref>) together, and letting M̂ = | + ⟩⟨ + | = ((I + σ̂_x)/2)^⊗ N, we obtain:
∂/∂θ_i( ⟨ψ (θ⃗) |+⟩) = 1/2 · 1/⟨ψ (θ⃗) |+⟩ · ∂/∂θ_i⟨ψ (θ⃗) | M̂ |ψ (θ⃗) ⟩
= 1/2 · 1/⟨ψ (θ⃗) |+⟩ · ∂/∂θ_i⟨M̂⟩(θ⃗)
= 1/4 · [ ⟨M̂⟩(θ_i + π/2) - ⟨M̂⟩(θ_i - π/2) ] / ⟨ψ (θ⃗) |+⟩
where we have used the parameter shift rule in Eq. (<ref>) to evaluate ∂/∂θ_i⟨M̂⟩.
Evaluating the gradient ∂ f_v/∂θ_i therefore requires 3 state preparations per variational parameter, at θ_i, θ_i + π/2, and at θ_i - π/2.
§ QUANTUM APPROXIMATE OPTIMIZATION ALGORITHM
The quantum approximate optimization algorithm (QAOA) is a variational quantum algorithm for finding approximate solutions to combinatorial optimization problems.
The QAOA state is parameterized by two sets of angles, γ⃗ = {γ_1 ,... , γ_p} and β⃗ = {β_1 ,... , β_p}:
|ψ(γ⃗, β⃗)⟩_QAOA = ∏_i^p U_x(β_i) U_H(γ_i) |+⟩
where
U_H(γ) = e^-i γĤ_Ising
U_x(β) = e^-i β∑_i σ̂_i^x.
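As an illustrative sketch (not the authors' code), a p=1 QAOA circuit matching these unitaries can be written in PennyLane as below; H_ising is assumed to be available as a qml.Hamiltonian of commuting Pauli-Z terms, so a single-step ApproxTimeEvolution implements exp(-iγĤ_Ising) exactly, and RX(2β) implements exp(-iβσ̂^x) on each qubit.

import pennylane as qml

N = 10
dev = qml.device("default.qubit", wires=N)

def make_qaoa_p1(H_ising):
    @qml.qnode(dev)
    def qaoa(gamma, beta):
        for w in range(N):
            qml.Hadamard(wires=w)                    # prepare |+> on every qubit
        qml.ApproxTimeEvolution(H_ising, gamma, 1)   # U_H(gamma); exact for commuting Z terms
        for w in range(N):
            qml.RX(2 * beta, wires=w)                # U_x(beta) on each qubit
        return qml.expval(H_ising)
    return qaoa

A 100 × 100 grid over (γ, β) followed by a local optimizer can then be used to locate the optimal angles, mirroring the procedure described below.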
In Sec. <ref>, we compared the results obtained from preparing the landscape state |u⟩ with results obtained from p=1 of QAOA.
For p=1, the state in Eq. (<ref>) only contains 2 variational parameters, and the optimal parameters used to obtain the QAOA results in Fig. <ref> were found using a grid search with a resolution of 100 × 100, followed by a local search with COBYLA to further fine-tune the parameters.
Figure <ref> shows the grid search and fine-tuning needed to obtain the optimal parameters.
The variational ansatz described in Sec. <ref> to produce the results in Fig. <ref> uses 4× (N-1) = 36 CNOT gates.
By comparison, decomposing the unitaries in the QAOA to similar gatesets require 2× n_e number of CNOT gates per depth p, where n_e is the number of edges in the problem graph <cit.>.
For p=1, this amounts to 30 and 90 CNOT gates for the MaxCut and random QUBO graph before accounting for measures to handle long ranged interactions between qubits.
entry_id: http://arxiv.org/abs/2307.02694v1
published: 20230705235355
title: Loss Functions and Metrics in Deep Learning. A Review
authors: Juan Terven, Diana M. Cordova-Esparza, Alfonzo Ramirez-Pedraza, Edgar A. Chavez-Urbiola
primary_category: cs.LG
categories: cs.LG, cs.AI, cs.CV, I.4.0
One of the essential components of deep learning is the choice of the loss function and performance metrics used to train and evaluate models. This paper reviews the most prevalent loss functions and performance measurements in deep learning. We examine the benefits and limits of each technique and illustrate their application to various deep-learning problems. Our review aims to give a comprehensive picture of the different loss functions and performance indicators used in the most common deep learning tasks and help practitioners choose the best method for their specific task.
§ INTRODUCTION
Deep Learning has become a powerful tool for solving complex problems in various fields, such as image <cit.> and speech recognition <cit.>, natural language processing <cit.>, and computer vision <cit.>. One of the critical components of Deep Learning is the choice of the loss function and performance metrics used to train and evaluate models. Loss functions measure how well a model can approximate the desired output, while performance metrics evaluate the model’s ability to make accurate predictions on unseen data.
Selecting a suitable loss function and performance metric is crucial for achieving good performance in deep learning tasks. However, with a wide range of options available, it can be challenging for practitioners to choose the most appropriate method for their specific task.
This paper aims to comprehensively review the most commonly used loss functions and performance metrics in deep learning. We will discuss the advantages and limitations of each method and provide examples of their application in various deep learning tasks. We begin by discussing the most commonly used loss functions for regression and classification, including mean squared error, cross-entropy, and hinge loss. Then, we explain their advantages and limitations and when they are typically used. For example, mean squared error is widely used for regression tasks, while cross-entropy is used for classification tasks. We will also examine more complex tasks such as object detection, segmentation, face recognition, and image generation.
Along the way, we review the most commonly used performance metrics in each category, explaining how these metrics are calculated, their advantages and limitations, and when they are typically used.
§ LOSS FUNCTIONS VS. PERFORMANCE METRICS
A loss function and a performance metric are both used to evaluate the performance of a deep learning model, but they serve different purposes.
A loss function is used during training to optimize the model's parameters. It measures the difference between the predicted and expected outputs of the model, and the goal of training is to minimize this difference.
On the other hand, a performance metric is used to evaluate the model after training. It measures how well the model can generalize to new data and make accurate predictions. Performance metrics also compare different models or configurations to determine the best-performing one.
The following list describes the common differences between loss functions and performance metrics:
* Optimization vs. Evaluation: As mentioned previously, loss functions optimize the model's parameters during training. In contrast, performance metrics evaluate the model's performance after training.
* Model-Dependence: Loss functions depend on the model's architecture and the specific task. Performance metrics, however, are less dependent on the model's architecture and can be used to compare different models or configurations of a single model.
* Minimization vs. Maximization: The goal of training a deep learning model is to minimize the loss function. However, evaluating a model aims to maximize the performance metric —except for error performance metrics such as Mean Squared Error.
* Interpretation: Loss functions can be challenging to interpret as their values are often arbitrary and depend on the specific task and data. On the other hand, performance metrics are often more interpretable as they are used across different tasks.
§.§ Properties of loss functions
The loss functions have a series of properties that need to be considered when selected for a specific task:
* Convexity: A loss function is convex if any local minimum is the global minimum. Convex loss functions are desirable because they can be easily optimized using gradient-based optimization methods.
* Differentiability: A loss function is differentiable if its derivative with respect to the model parameters exists and is continuous. Differentiability is essential because it allows the use of gradient-based optimization methods.
* Robustness: Loss functions should be able to handle outliers and not be affected by a small number of extreme values.
* Smoothness: Loss function should have a continuous gradient and no sharp transitions or spikes.
* Sparsity: A sparsity-promoting loss function should encourage the model to produce sparse output. This is useful when working with high-dimensional data and when the number of important features is small.
* Multi-modality: A loss function is considered multi-modal if it has multiple global minima. Multi-modal loss functions can be useful for tasks requiring the model to learn multiple data representations.
* Monotonicity: A loss function is monotonic if its value decreases as the predicted output approaches the true output. Monotonicity ensures that the optimization process is moving toward the correct solution.
* Invariance: A loss function is invariant if it remains unchanged under particular input or output transformations. Invariance is valuable when working with data that may be transformed in various ways, such as rotation, scaling, or translation.
The following sections review the loss functions and performance metrics for common deep learning tasks. Table <ref> summarizes common vision-related tasks with their common loss functions used and performance metrics.
§ REGRESSION
Regression is a supervised learning problem in machine learning that aims to predict a continuous output value based on one or more input features. Common regression models include linear regression, polynomial regression, and regression trees.
Linear regression assumes a linear relationship between the independent and dependent variables. It is represented by the equation
ŷ= β_0 + β_1x_1 + β_2x_2 + … + β_nx_n,
where ŷ is the predicted value, β_0 is the intercept or bias, β_1, β_2, ..., β_n are the coefficients or weights corresponding to the input features or independent variables x_1, x_2, ..., x_n and n is the number of input features.
The goal is to find the bias and the coefficient values that minimize the difference between the predicted and actual values, usually using a loss function such as Mean Squared Error (MSE) or Mean Absolute Error (MAE).
In polynomial regression, the relationship between the independent variable x and the dependent variable y is modeled as an n^th degree polynomial. This is useful for capturing complex, non-linear relationships between input and output variables. The general form of a polynomial regression equation is given by
ŷ= β_0 + β_1x + β_2x^2 + β_3x^3 + … + β_nx^n,
where ŷ is the predicted value, β_0 is the intercept or bias, β_1, β_2, ..., β_n are the coefficients corresponding to the powers of x and n is the degree of the polynomial.
Like linear regression, the objective is to find the bias and the coefficients that minimize the difference between the predicted and the actual values. However, high-degree polynomials tend to overfit when the model becomes excessively complex such that it performs well on training data but poorly on unseen or test data.
Regression trees, on the other hand, are a type of decision tree where the output is a continuous variable. Unlike linear and polynomial regression models that establish a single prediction equation, regression trees split the input space into smaller regions where a simple model is used. The tree is built during training through a process known as binary recursive partitioning. The output for a new instance is predicted by traversing the tree until a leaf node is reached. The value associated with the leaf node is typically the mean target value of the training samples in this node. Unlike polynomial regression, this model can capture complex, non-linear relationships and interactions between features without specifying them explicitly. However, regression trees can also overfit the training data if not properly pruned or controlled, leading to poor generalization performance on new, unseen data. Figure <ref> shows a regression tree.
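As a brief, hypothetical example of the idea (using scikit-learn, which is not required elsewhere in this review), a depth-limited regression tree can be fitted and queried as follows.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 3))                                  # toy features
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

tree = DecisionTreeRegressor(max_depth=3)                 # limiting depth helps control overfitting
tree.fit(X, y)
predictions = tree.predict(X[:5])                         # mean target value of the matching leaf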
Regression is used in various domains, including finance, healthcare, social sciences, sports, and engineering. Some practical applications include house price prediction <cit.>, energy consumption forecasting <cit.>, healthcare and disease prediction <cit.>, stock price forecasting <cit.>, and customer lifetime value prediction <cit.>.
In the following subsections, we will review the most common lost functions and performance metrics used for regression.
§.§ Regression Loss Functions
Table <ref> shows the common loss functions used for regression and their applications.
The following subsections describe each of these loss functions in more detail.
§.§.§ Mean Squared Error (MSE)
The MSE measures the average of the squared differences between the predicted values and the true values <cit.>. The MSE loss function can be defined mathematically as
MSE = 1/n∑_i=1^n(y_i - ŷ_i)^2,
where n is the number of samples, y_i is the true value of the i^th sample and ŷ_i is the predicted value of the i^th sample.
The MSE loss function has the following properties:
* Non-negative: Since the differences between the predicted and actual values are squared, MSE is always non-negative. A value of 0 indicates a perfect fit, while larger values correspond to higher discrepancies between predictions and actual values.
* Quadratic: MSE is a quadratic function of the prediction errors, which means it places more emphasis on larger errors than smaller ones. This property makes it sensitive to outliers and can lead to models that prioritize reducing large errors over smaller ones.
* Differentiable: MSE is a smooth and continuous function for the model parameters. This property allows for the efficient computation of gradients, which is essential for optimization algorithms like gradient descent.
* Convex: MSE is a convex function, which means it has a unique global minimum. This property simplifies the optimization process, as gradient-based optimization techniques can converge to the global minimum without getting trapped in local minima. However, for deep neural networks, the error landscape is generally non-convex due to the multiple layers of non-linear activation functions, leading to a complex and highly non-linear optimization problem.
* Scale-dependent: The value of MSE depends on the scale of the target variable, making it difficult to compare the performance of models across different problems or target variable scales. For this purpose, researchers often use the root mean squared error (RMSE) or mean squared percentage error (MSPE).
The MSE, also called L2 loss, is computationally simple. However, it is not robust to outliers due to the square of the error term. Thus if the data includes outliers, it is better to use another loss function, such as Mean Absolute Error (MAE) which is more robust to outliers, or Huber Loss, which is a combination of MSE and MAE. The MSE is also used as a performance metric.
§.§.§ Mean Absolute Error (MAE)
The MAE is another commonly used loss function in regression problems. It measures the average of the absolute differences between the predicted values and the true values <cit.>. The MAE loss can be defined as
MAE = 1/n∑_i=1^n|y_i - ŷ_i|,
where n is the number of samples, and y_i and ŷ_i are the true and predicted values of the i^th sample.
The MAE loss function has the following properties:
* Non-negative: Like MSE, MAE is always non-negative because it takes the absolute value of the differences between predicted and actual values. A value of 0 indicates a perfect fit, while larger values correspond to higher discrepancies between predictions and actual values.
* Linear: MAE is a linear function of the prediction errors, which treats all errors equally regardless of their magnitude. This property makes MAE less sensitive to outliers than MSE, as it does not disproportionately emphasize large errors.
* Robust: Due to its linear nature and reduced sensitivity to outliers, MAE is considered a more robust loss function than MSE. This makes it suitable for applications where the presence of outliers is expected or the distribution of errors is not symmetric.
* Non-differentiable: Although MAE is continuous, it is not differentiable when the prediction error is zero due to the absolute value function. This property can complicate the optimization process for specific algorithms, particularly those relying on gradient-based techniques. However, subgradient methods<cit.> can be employed to overcome this issue.
* Convex: MAE is a convex function, which means it has a unique global minimum. This property simplifies the optimization process, as gradient-based optimization techniques can converge to the global minimum without getting trapped in local minima. Like the MSE, the MAE is non-convex for Deep neural networks due to the multiple layers with non-linear activation functions.
* Scale-dependent: Like MSE, the value of MAE depends on the scale of the target variable, making it difficult to compare the performance of models across different problems or target variable scales. To address this issue, researchers often use scale-invariant metrics such as mean absolute percentage error (MAPE) or normalized mean absolute error (NMAE) to compare models across different scales or units.
The MAE, also called L1 loss, is often used as an evaluation metric. It is computationally simple and easy to understand, but it lacks the smoothness and differentiability of the MSE; on the other hand, it is less sensitive to outliers.
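For reference, minimal NumPy implementations of the MSE and MAE defined above could look as follows.

import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))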
§.§.§ Huber Loss
The Huber loss combines the properties of both Mean Squared Error (MSE) and Mean Absolute Error (MAE). Huber loss is designed to be more robust to outliers than MSE while maintaining smoothness and differentiability <cit.>. The Huber loss function is defined as
L(y, ŷ) =
1/2(y - ŷ)^2 for |y - ŷ| ≤δ
δ(|y - ŷ| - 1/2δ) otherwise,
where y is the true value, ŷ is the predicted value, and δ is a user-specified threshold value.
When the error is small, the Huber loss function behaves like the MSE loss function, and when the error is large, the Huber loss function behaves like the MAE loss function. This property makes the Huber loss function more robust to outliers than the MSE loss function, as it is less sensitive to large errors.
The Huber loss function is differentiable, which makes it suitable for use in gradient-based optimization algorithms such as stochastic gradient descent (SGD). It is commonly used in linear regression and time series forecasting, as it can handle outliers and noise in the data. It is also used in robust optimization problems where the data may contain outliers or noise.
The threshold δ can be chosen empirically by trying different values and evaluating the model's performance. A common heuristic is to set δ to a small value when the data contains outliers or heavy noise, so that large residuals are penalized linearly as in the MAE, and to a larger value when large errors should still be penalized quadratically as in the MSE.
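A straightforward NumPy sketch of the Huber loss with a tunable threshold δ is shown below.

import numpy as np

def huber(y_true, y_pred, delta=1.0):
    err = np.asarray(y_true, float) - np.asarray(y_pred, float)
    quadratic = 0.5 * err ** 2                       # MSE-like branch for small errors
    linear = delta * (np.abs(err) - 0.5 * delta)     # MAE-like branch for large errors
    return np.mean(np.where(np.abs(err) <= delta, quadratic, linear))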
§.§.§ Log-Cosh Loss
The Log-Cosh loss function is smooth and differentiable. It is commonly used in regression problems where the data may contain outliers or noise <cit.>. The Log-Cosh loss is defined as
L(y, ŷ) = 1/n∑_i=1^nlog(cosh(y_i - ŷ_i)),
where y is the true value, ŷ is the predicted value and n is the number of samples.
One of the advantages of the log-cosh loss function is that it is less sensitive to outliers than the mean squared error (MSE), as it is not affected by extreme data values. However, it is more sensitive to small errors than the Huber loss.
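A numerically stable NumPy version of the log-cosh loss is sketched below; it uses the identity log(cosh(x)) = |x| + log(1 + e^{-2|x|}) - log 2 to avoid overflow for large errors.

import numpy as np

def log_cosh(y_true, y_pred):
    err = np.asarray(y_pred, float) - np.asarray(y_true, float)
    a = np.abs(err)
    return np.mean(a + np.log1p(np.exp(-2.0 * a)) - np.log(2.0))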
§.§.§ Quantile Loss
Also known as quantile regression loss, this function is often used for predicting an interval instead of a single value <cit.>. If we denote the quantile as q where 0 < q < 1, and the predicted and actual values as ŷ and y respectively, then the quantile loss is given by
L(y, ŷ) = q ·max(y - ŷ, 0) + (1 - q) ·max(ŷ - y, 0)
max(a, b) represents the maximum of a and b. The expression y - ŷ is used when the prediction underestimates, and ŷ - y is used when the prediction overestimates. The loss is scaled by q for underestimations and (1 - q) for overestimations.
Note that when q = 0.5, the quantile loss is equivalent to the Mean Absolute Error (MAE), making it a generalization of MAE that allows for asymmetric penalties for underestimations and overestimations.
Overestimation occurs when a model's prediction exceeds the actual value. Underestimation is the opposite of overestimation. It occurs when a model's prediction is lower than the actual value.
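A compact NumPy version of the quantile (pinball) loss follows; q = 0.5 recovers the MAE up to a constant factor.

import numpy as np

def quantile_loss(y_true, y_pred, q=0.9):
    err = np.asarray(y_true, float) - np.asarray(y_pred, float)
    # q * err when the model underestimates (err > 0), (q - 1) * err when it overestimates.
    return np.mean(np.maximum(q * err, (q - 1.0) * err))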
Practical examples of quantile regression include:
Financial Risk Management: To estimate Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), which are measures of financial risk used in risk management. These quantile-based measures help to understand the potential for extreme losses <cit.>.
Supply Chain and Inventory Management: Predicting demand for products can benefit from quantile loss as it can give a range of potential demand rather than a single point, which can help manage inventory and reduce stockouts or overstock situations <cit.>.
Energy Production: To predict power output, having a range of potential outputs to manage grid stability <cit.>.
Economic Forecasting: Predicting economic indicators can use quantile regression to give a range of possible values, which can help planning and policy-making <cit.>.
Weather Forecasting: Can be useful for predicting variables like temperature or rainfall, where providing a range can be more informative than a single-point estimate <cit.>.
Real Estate Pricing: Predicting the price of a property within a range can be more useful than predicting a single price <cit.>.
Healthcare: Quantile regression can predict a range of possible patient outcomes based on a set of features, which can assist doctors in making more informed decisions <cit.>.
§.§.§ Poisson Loss
Poisson loss is used in regression tasks when the target variable represents count data and is assumed to follow a Poisson distribution. The Poisson loss is derived from the negative log-likelihood of the Poisson distribution, so minimizing it maximizes the likelihood of observing the count data given the predicted values <cit.>. It is defined as
L(y, ŷ) = 1/n∑_i=1^n (ŷ_i - y_i log(ŷ_i)),
where y_i represents the actual target value, ŷ_i is the predicted value, and n is the number of samples.
When applying the Poisson loss function to model count data, we must ensure that the predicted values are non-negative since negative counts are not meaningful in real-world scenarios. To achieve this, it is common to use a link function that transforms the linear combination of input features to a non-negative output, which can then be interpreted as the expected count.
A link function is a mapping from the linear predictor to the predicted value. In the context of Poisson regression, the exponential function is a common choice for the link function because it guarantees non-negative outputs. The exponential function has the following form:
ŷ_i = exp(𝐰^⊤𝐱_i + b),
where 𝐰 is a vector of weights, 𝐱_i is a vector of input features for the i-th observation, and b is the bias term.
Using the exponential function as a link function, we ensure that the predicted values ŷ_i are always non-negative. In this case, the Poisson loss function can be written as
L(y, ŷ) = 1/n∑_i=1^n(exp(𝐰^⊤𝐱_i + b) - y_i log(exp(𝐰^⊤𝐱_i + b)))
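A minimal NumPy sketch of the Poisson loss with the exponential link described above is given below; linear_pred stands for the linear predictor w^T x_i + b.

import numpy as np

def poisson_loss(y_true, linear_pred):
    y_hat = np.exp(np.asarray(linear_pred, float))   # exponential link keeps predictions positive
    y_true = np.asarray(y_true, float)
    return np.mean(y_hat - y_true * np.log(y_hat))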
The Poisson distribution is typically used for modeling the number of times an event occurred in an interval. Here are some examples of applications where Poisson loss can be useful.
Traffic Modeling: Poisson regression can predict the number of cars that pass through a toll booth during a given time interval based on factors like the time of day, day of the week, and weather conditions <cit.>.
Healthcare: Epidemiology can predict the number of disease cases in different regions based on variables like population density, vaccination rates, and social behavior patterns <cit.>.
Insurance: In the insurance industry, it can be used to model claim counts for certain types of insurance policies <cit.>.
Customer Service: Poisson regression can be used to predict the number of calls that a call center receives during different times of the day, to aid in staff scheduling <cit.>.
Internet Usage: It can be used to model the number of website visits or clicks on an ad during a given time interval to help understand user behavior and optimize ad placement <cit.>.
Manufacturing: It can predict the number of defects or failures in a manufacturing process, helping in quality control and maintenance planning <cit.>.
Crime Analysis: Poisson regression can be used to model the number of occurrences of certain types of crimes in different areas to help in police resource allocation and crime prevention strategies <cit.>.
§.§ Regression Performance Metrics
Table <ref> shows the most common metrics used in regression tasks.
The following subsections provide more detail on each of these metrics, skipping the mean squared error (MSE) and the mean absolute error (MAE) because they are the same quantities discussed previously as loss functions.
§.§.§ Root Mean Squared Error (RMSE)
The RMSE is the square root of the mean squared error (MSE) defined as
RMSE = √(1/n∑_i=1^n (y_i - ŷ_i)^2),
where y_i is the true value, ŷ_i is the predicted value, and n is the number of samples.
The RMSE measures the average deviation of the predictions from the true values. This metric is easy to interpret because it is in the same units as the data. However, it is sensitive to outliers. Lower RMSE values indicate better model performance, representing smaller differences between predicted and actual values.
§.§.§ Mean Absolute Percentage Error (MAPE)
The MAPE measures the average percentage error of the model's predictions compared to the true values. It is defined as
MAPE = 1/n∑_i=1^n ( |y_i - ŷ_i| / |y_i| ) × 100,
where y_i is the true value, ŷ_i is the predicted value, and n is the number of samples.
One of the advantages of using MAPE is that it is easy to interpret, as it is expressed in percentage terms. It is also scale-independent, which can be used to compare models across different scales of the target variable. However, it has two limitations: it can produce undefined results when y_i is zero and is sensitive to outliers.
§.§.§ Symmetric Mean Absolute Percentage Error (SMAPE)
The SMAPE is a variation of the Mean Absolute Percentage Error (MAPE) commonly used to evaluate the accuracy of predictions in time series forecasting <cit.>. SMAPE is defined as
SMAPE = 2/n∑_i=1^n ( |y_i - ŷ_i| / (|y_i| + |ŷ_i|) ) × 100,
where y_i is the true value, ŷ_i is the predicted value, and n is the number of samples.
One of the advantages of using SMAPE is that it is symmetric, which means that it gives equal weight to over-predictions and under-predictions. This is particularly useful when working with time series data, where over-predictions and under-predictions may have different implications, and SMAPE helps to ensure that the model is equally penalized for both types of errors, leading to better overall performance in terms of how well it meets the business needs or objectives. However, SMAPE has some limitations; for example, it can produce undefined results when both y_i and ŷ_i are zero and can be sensitive to outliers.
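For reference, NumPy versions of MAPE and SMAPE could be written as below; both are left undefined (division by zero) when the respective denominators vanish, as noted above.

import numpy as np

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def smape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(2.0 * np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred)))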
The implications of over-predictions and under-predictions vary depending on the application. In the following, we discuss real-world examples.
Inventory Management: Over-predicting demand can lead to excess inventory, which ties up capital and can result in waste if products expire or become obsolete. Under-predicting demand can lead to stockouts, lost sales, and damage to customer relationships <cit.>. A symmetric error measure like SMAPE penalizes both cases because over-prediction and under-prediction have costly implications.
Energy Demand Forecasting: Over-prediction of energy demand can cause unnecessary production, leading to waste and increased costs. Under-prediction can lead to insufficient power generation, resulting in blackouts or the need for expensive on-demand power generation <cit.>.
Financial Markets: In financial markets, over-prediction of a stock price might lead to unwarranted investments resulting in financial loss, while under-prediction might result in missed opportunities for gains <cit.>.
Sales Forecasting: Over-prediction of sales could lead to overstaffing, overproduction, and increased costs, while under-prediction could lead to understaffing, missed sales opportunities, and decreased customer satisfaction <cit.>.
Transportation and Logistics: Over-predicting the demand for transportation might lead to underutilized vehicles or routes, resulting in unnecessary costs. Under-predicting demand might lead to overcrowding and customer dissatisfaction <cit.>.
§.§.§ Coefficient of Determination R^2
The Coefficient of Determination (R^2) measures how well the model can explain the variation in the target variable <cit.>. R^2 is defined as the proportion of the variance in the target variable that the model explains. It ranges from 0 to 1, where 0 means that the model does not explain any of the variation in the target variable, and 1 means that the model explains all of the variation in the target variable.
The formula for R-squared is
R^2 = 1 - ∑_i=1^n (y_i - ŷ_i)^2 / ∑_i=1^n (y_i - y̅)^2,
where y_i is the true value, ŷ_i is the predicted value, y̅ is the mean of the true values, and n is the number of samples.
Benefits and Limitations of R-squared
Some of the main benefits of R^2 are:
* Measures the relationship between the model and the response variable: R-squared describes the strength of the relationship between the model and the response variable on a convenient 0 – 1 scale.
* Interpretable: It can be more interpretable than other statistics because it provides a percentage that can be intuitively understood.
* Helps in model selection: If we have two models, we can compare their R-squared values as a part of the selection process. The model with the higher R-squared could indicate a better fit.
The limitations of R^2 include:
* Misleading with non-linear relationships: R^2 works as intended in a simple linear regression model with one explanatory variable but can be misleading with more complex, nonlinear, or multiple regression models.
* Influenced by the number of predictors: R^2 always increases as we add more predictors to a model, even if they are unrelated to the outcome variable. This can lead to overly complex models that overfit the data. This is the benefit of the adjusted R^2, which adjusts the R^2 value based on the number of predictors in the model.
* Sensitive to outliers: R^2 is sensitive to outliers.
* Does not check for biased predictions: R^2 cannot determine whether the coefficient estimates and predictions are biased, which is to say, whether the predictions systematically over or underestimate the actual values.
* Limitation with small sample sizes: When the sample size is small, the R^2 value might be unreliable. It can be artificially high or low and might not represent the true strength of the relationship between the variables.
§.§.§ Adjusted R^2
Adjusted R^2 is a modified version of R^2 that has been adjusted for the number of predictors in the model. It increases only if the new term improves the model more than would be expected by chance. It decreases when a predictor improves the model by less than expected by chance <cit.>. The adjusted R-squared is defined as
Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1),
where n is the number of observations, k is the number of predictors. The adjustment is a penalty for adding unnecessary predictors to the model. This penalty increases with the increase in the number of predictors. This is particularly useful in multiple regression, where several predictors are used simultaneously.
The Adjusted R^2 is often used for model comparison, as it won't necessarily increase with adding more variables to the model, unlike regular R^2. It is useful when we need to compare models of different sizes. Unlike R^2, its value can be negative, meaning that the model is a poor fit for the data.
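The two statistics can be computed directly from predictions, as in the following NumPy sketch, where k is the number of predictors.

import numpy as np

def r2_score(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r2(y_true, y_pred, k):
    n = len(y_true)
    r2 = r2_score(y_true, y_pred)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)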
§ CLASSIFICATION
Classification is a supervised machine learning task in which a model is trained to predict the class or category of a given input data point. Classification aims to learn a mapping from input features to a specific class or category.
There are different classification tasks, such as binary classification, multi-class classification, and multi-label classification. Binary classification is a task where the model is trained to predict one of two classes, such as "spam" or "not spam," for an email. Multi-class classification is a task where the model is trained to predict one of the multiple classes, such as "dog," "cat," and "bird," for an image. Multi-label classification is a task where the model is trained to predict multiple labels for a single data point, such as "dog" and "outdoor," for an image of a dog in the park.
Classification algorithms can be based on techniques such as decision trees, Naive Bayes, k-nearest neighbors, Support Vector Machines, Random Forest, Gradient Boosting, Neural Networks, and others.
§.§ Classification Loss Functions
Several loss functions can be used for classification tasks, depending on the specific problem and algorithm. In the following sections, we describe the most common loss functions used for classification:
§.§.§ Binary Cross-Entropy Loss (BCE)
The BCE, also known as log loss, is a commonly used loss function for binary classification problems. It measures the dissimilarity between the predicted probability of a class and the true class label <cit.>.
Cross-entropy is a well-known concept in information theory commonly used to measure the dissimilarity between two probability distributions. In binary classification, the true class is usually represented by a one-hot encoded vector, where the true class has a value of 1, and the other class has a value of 0. The predicted probability is represented by a vector of predicted probabilities for each class, where the predicted probability of the true class is denoted by p(y = 1|x) and the predicted probability of the other class is denoted by p(y = 0|x).
The loss function is defined as
L(y,p) = -(y log(p) + (1-y) log(1-p))
This can intuitively be split into two parts:
-log(p) if y = 1
-log(1-p) if y = 0,
where y is the true class label (0 or 1) and p is the predicted probability of the positive class. The loss function is minimized when the predicted probability p equals the true class label y.
The binary cross-entropy loss has several desirable properties, such as being easy to compute, differentiable, and providing a probabilistic interpretation of the model's output. It also provides a smooth optimization surface and is less sensitive to outliers than other loss functions. However, it is sensitive to the class imbalance problem, which occurs when the number of samples of one class is significantly greater than the other. We can use the Weighted Binary Cross Entropy for these cases.
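A minimal NumPy implementation of the binary cross-entropy is shown below; the predicted probabilities are clipped to avoid taking log(0).

import numpy as np

def bce(y_true, p_pred, eps=1e-12):
    y = np.asarray(y_true, float)
    p = np.clip(np.asarray(p_pred, float), eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))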
§.§.§ Weighted Binary Cross Entropy (WBCE)
The WBCE is a variation of the standard binary cross-entropy loss function in which each sample is given a weight during the loss calculation. This is useful in situations where the distribution of the samples is imbalanced <cit.>.
In the standard binary cross-entropy loss, the loss is calculated as the negative log-likelihood of the true labels given the predicted probabilities. In the WBCE, a weight is assigned to each sample, and the loss for each sample is calculated as
L = -(w_i · ylog(p) + w_i(1 - y)log(1 - p)),
where w_i is the weight assigned to the i^th sample, y is the true label, and p is the predicted probability of the positive class.
By assigning a higher weight to samples from under-represented classes, the model is encouraged to pay more attention to these samples, and the model's overall performance can be improved.
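A per-sample-weighted variant follows the same pattern, as in the sketch below, where w holds the sample weights (for instance, larger weights for the minority class).

import numpy as np

def weighted_bce(y_true, p_pred, w, eps=1e-12):
    y, w = np.asarray(y_true, float), np.asarray(w, float)
    p = np.clip(np.asarray(p_pred, float), eps, 1.0 - eps)
    return -np.mean(w * (y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))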
§.§.§ Categorical Cross-entropy Loss (CCE)
The CCE, also known as the negative log-likelihood loss or Multi-class log loss, is a function used for multi-class classification tasks. It measures the dissimilarity between the predicted probability distribution and the true distribution <cit.>.
Given the predicted probability distribution, it is defined as the average negative log-likelihood of the true class. The formula for categorical cross-entropy loss is expressed as
L = -1/N∑_i=1^N ∑_j=1^C y_i,jlog(p_i,j),
where N is the number of samples, C is the number of classes, y is the true label, and p is the predicted probability of the true class. The loss is calculated for each sample and averaged over the entire dataset.
The true label is a one-hot encoded vector in traditional categorical cross-entropy loss, where the element corresponding to the true class is one, and all other elements are 0. However, in some cases, it is more convenient to represent the true class as an integer, where the integer value corresponds to the index of the true class leading to the sparse categorical cross-entropy loss discussed next.
§.§.§ Sparse Categorical Cross-entropy Loss
The sparse categorical cross-entropy loss is a variation of the categorical cross-entropy loss used for multi-class classification tasks where the classes are encoded as integers rather than one-hot encoded vectors <cit.>. Given that the true labels are provided as integers, we directly select the predicted probability of the correct class using the label index instead of summing over all possible classes. Thus, the loss for each example is calculated as
H(y_i, ŷ_i) = -log(ŷ_i,y_i)
And the final sparse categorical cross-entropy loss is the average over all the samples:
H(Y, Ŷ) = -1/n∑_i=1^nlog(ŷ_i,y_i),
where y_i is the true class of the i-th sample and ŷ_i,y_i is the predicted probability assigned by the model to that true class.
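The two variants differ only in how the true class is indexed, as the NumPy sketch below illustrates; p_pred is an (n, C) array of predicted class probabilities.

import numpy as np

def categorical_ce(y_onehot, p_pred, eps=1e-12):
    p = np.clip(np.asarray(p_pred, float), eps, 1.0)
    return -np.mean(np.sum(np.asarray(y_onehot, float) * np.log(p), axis=1))

def sparse_categorical_ce(y_int, p_pred, eps=1e-12):
    p = np.clip(np.asarray(p_pred, float), eps, 1.0)
    idx = np.arange(p.shape[0])
    return -np.mean(np.log(p[idx, np.asarray(y_int, int)]))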
§.§.§ Cross-Entropy loss with label smoothing
In the Cross-Entropy loss with label smoothing, the labels are smoothed by adding a small value to the true label and subtracting the same value from all other labels. This helps reduce the model's overconfidence by encouraging it to produce more uncertain predictions <cit.>.
The motivation behind this is that when training a model, it is common to become over-confident in its predictions, particularly when trained on a large amount of data. This overconfidence can lead to poor performance on unseen data. Label smoothing helps to mitigate this problem by encouraging the model to make less confident predictions.
The formula for the Cross-Entropy loss with label smoothing is similar to the standard categorical cross-entropy loss but with a small epsilon added to the true label and subtracted from all other labels. The formula is given by
L(y, ŷ) = -∑_c=1^C[(1-ϵ) y_c logŷ_c + ϵ/Clogŷ_c],
where y is the true label, ŷ is the predicted label, C is the number of classes, and ϵ is the smoothing value. Typically, ϵ is set to a small value, such as 0.1 or 0.2.
Label smoothing does not always improve performance, and it is common to experiment with different epsilon values to find the best value for a specific task and dataset.
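One common way to implement this, sketched below in NumPy, is to build the smoothed target distribution explicitly and apply the usual cross-entropy; y_int holds integer class labels and p_pred the predicted probabilities.

import numpy as np

def smoothed_ce(y_int, p_pred, epsilon=0.1, eps=1e-12):
    p = np.clip(np.asarray(p_pred, float), eps, 1.0)
    n, C = p.shape
    target = np.full((n, C), epsilon / C)                      # epsilon spread uniformly over classes
    target[np.arange(n), np.asarray(y_int, int)] += 1.0 - epsilon
    return -np.mean(np.sum(target * np.log(p), axis=1))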
§.§.§ Focal loss
The focal loss introduced by Tsung-Yi Lin et al. <cit.> is a variation of the standard cross-entropy loss that addresses the issue of class imbalance, which occurs when the number of positive samples (objects of interest) is much smaller than the number of negative samples (background). In such cases, the model tends to focus on the negative samples and neglect the positive samples, leading to poor performance. The focal loss addresses this issue by down-weighting the easy negative samples and up-weighting the hard positive samples.
The focal loss is defined as
FL(p_t) = -α_t (1 - p_t)^γ log(p_t),
where p_t is the predicted probability for the true class, α_t is a weighting factor that controls the importance of each example, and γ is a focusing parameter that controls the rate at which easy examples are down-weighted.
The weighting factor α_t is usually set to the inverse class frequency to balance the loss across all classes. The focusing parameter γ is typically set to a value between 2 and 4 to give more weight to hard examples.
In the original paper, the authors used a sigmoid activation function for binary classification and the cross-entropy loss for multi-class classification. The focal loss is combined with these loss functions to improve the performance of object detection and semantic segmentation models.
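For the binary case, a NumPy sketch of the focal loss reads as follows; p_pred is the predicted probability of the positive class, and the default α and γ values are illustrative.

import numpy as np

def focal_loss(y_true, p_pred, alpha=0.25, gamma=2.0, eps=1e-12):
    y = np.asarray(y_true, float)
    p = np.clip(np.asarray(p_pred, float), eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)              # probability assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))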
In recent works, focal loss has been used in object detection, semantic <cit.>, instance segmentation <cit.>, and human pose estimation <cit.>.
§.§.§ Hinge Loss
Hinge loss is a popular loss function for maximum-margin classification, most commonly used with support vector machines (SVMs), for example in one-vs-all classification, where an instance is assigned to one of many categories, and in settings where we want to enforce a margin of error <cit.>.
The hinge loss function for an individual instance can be represented as
L(y, f(x)) = max(0, 1 - y · f(x)),
where y is the true label of the instance, which should be -1 or 1 in a binary classification problem. f(x) is the predicted output for the instance x. The raw margin is y · f(x).
The hinge loss is 0 if the instance is on the correct side of the margin. The loss is proportional to the distance from the margin for data on the wrong side of the margin.
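A direct NumPy translation of the hinge loss is given below; labels are expected in {-1, +1} and scores are the raw (unthresholded) outputs f(x).

import numpy as np

def hinge(y_true, scores):
    y = np.asarray(y_true, float)
    f = np.asarray(scores, float)
    return np.mean(np.maximum(0.0, 1.0 - y * f))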
§.§ Classification Performance Metrics
Table <ref> summarizes the common metrics used for classification. The following sections will delve into each of these metrics.
§.§.§ Confusion Matrix
The confusion matrix is used to define a classification algorithm's performance. It contains the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) that result from the algorithm. The confusion matrix for a binary classification problem is represented in a 2x2 table as shown in Table <ref>.
For example, consider a binary classification problem where the algorithm tries to predict whether an image contains a cat. The confusion matrix for this problem would look like Table <ref>:
Where:
* TP: the number of images correctly classified as cats.
* TN: the number of images correctly classified as not cats.
* FP: the number of images incorrectly classified as cats.
* FN: the number of images incorrectly classified as not cats.
Using the values in the confusion matrix, we can calculate performance metrics such as accuracy, precision, recall, and F1-score.
§.§.§ Accuracy
Accuracy is a commonly used metric for object classification. It is the ratio of correctly classified samples to the total number of samples <cit.>. Mathematically, it can be represented as
Accuracy = Number of Correctly Classified Samples/Total Number of Samples
Accuracy can be expressed in terms of the confusion matrix values as
Accuracy = TP+TN/TP+FP+TN+FN
It is a simple and intuitive metric, but it can be misleading when the class distribution is imbalanced, as it tends to favor the majority class. For example, let's assume that we want to predict the presence of cancer in a cell. If for every 100 samples, only one contains cancer, a useless model that always predicts "No cancer" will have an accuracy of 99%. Other metrics, such as precision, recall, or F1-score, are more appropriate in these cases.
§.§.§ Precision
Precision measures the accuracy of positive predictions. It is defined as the number of true positive predictions divided by the number of true positive predictions plus the number of false positive predictions <cit.>. Mathematically, it can be represented as
Precision = TP/TP + FP,
where TP is the number of true positive predictions, and FP is the number of false positive predictions.
Precision is useful when the cost of a false positive prediction is high, such as in medical diagnosis or fraud detection. A high precision means the model is not generating many false positives, so the predictions are reliable. However, it is important to note that precision is not the only metric to consider when evaluating a model's performance, as high precision can also be achieved by a model that is not generating many positive predictions at all, which would result in a low recall.
§.§.§ Recall, Sensitivity, or True Positive Rate (TPR)
The recall metric, also known as sensitivity or TPR, measures the proportion of true positive instances (i.e., instances correctly classified as positive) out of the total number of positive instances <cit.>. Mathematically, it is represented as
Recall = True Positives/True Positives + False Negatives
It measures how well the model can identify all the positive instances in the dataset. A high recall value indicates the model has fewer false negatives, meaning it can correctly identify the most positive instances. However, a high recall value does not necessarily mean the model has a high precision, as the number of false positives can also influence it.
§.§.§ Precision-Recall Tradeoff
The precision-recall tradeoff refers to the inverse relationship between precision and recall. As one metric increases, the other tends to decrease.
Imagine a machine learning model trying to predict whether an email is spam. If the model is tuned to be very conservative and only marks an email as spam when confident, it is likely to have high precision (i.e., if it marks an email as spam, it is very likely to be spam). However, this conservative approach means it will probably miss many spam emails it is unsure about, leading to a lower recall.
Conversely, if the model is tuned to be liberal and marks emails as spam more freely, it will probably identify most spam emails correctly, leading to a high recall. However, this approach will also incorrectly mark many non-spam emails as spam, leading to a lower precision.
This tug-of-war between precision and recall is the crux of the tradeoff. An optimal balance between the two must be found depending on the use case. For instance, in a medical context, a high recall might be prioritized to ensure that all possible disease cases are identified, even at the expense of some false positives. On the other hand, a spam detection system might aim for high precision to avoid annoying users with wrongly classified emails, accepting that some spam messages might slip through.
The precision-recall tradeoff is a crucial consideration when tuning machine learning models. It is rarely possible to maximize both metrics simultaneously; thus, a balance must be struck based on the requirements and constraints of the specific application.
§.§.§ F1-score
The F1 score combines precision and recall to provide a single value representing a classification model's overall performance <cit.>. It is defined as the harmonic mean of precision and recall computed as
F1 = 2 * precision · recall/precision + recall
The F1 score considers both the model's ability to correctly identify positive examples (precision) and the ability of the model to identify all positive examples in the dataset (recall). A higher F1 score indicates that the model has a better balance of precision and recall, whereas a low F1 score indicates that the model may have a high precision or recall but not both.
The F1 score is particularly useful when the class distribution is imbalanced or when we want to give equal weight to precision and recall.
§.§.§ F2-score
The F2 score is a variation of the F1 score, with more weight given to the recall metric. The F2 score is the harmonic mean of precision and recall, with a weighting factor of 2 for recall <cit.>. The formula for the F2 score is
F2 = (1 + 2^2) · Precision · Recall / (2^2 · Precision + Recall)
Like the F1 score, the F2 score ranges from 0 to 1, with a higher score indicating better performance. However, the F2 score places a greater emphasis on recall, making it useful when it is important to minimize false negatives. For example, a false negative could mean a patient is not diagnosed with a serious disease in medical diagnosis, so the F2 score is often used in such scenarios <cit.>.
§.§.§ Specificity
Specificity, also known as the true negative rate (TNR), is a metric that measures the proportion of actual negatives that are correctly identified as negatives by a classification model. It is defined as the number of true negatives (TN) divided by the number of true negatives plus the number of false positives (FP) <cit.>. The formula for specificity is
Specificity = TN/TN + FP
This metric is particularly useful in medical diagnostic testing, where it is important to minimize the number of false positives to avoid unnecessary treatments or interventions. High specificity indicates that the model is good at identifying negatives and has a low rate of false positives.
It is often used with the Recall or TPR to evaluate the overall performance of a classification model.
§.§.§ False Positive Rate (FPR)
The FPR is used to evaluate the proportion of false positives (i.e., actual negative instances that are incorrectly classified as positive) relative to the total number of actual negative instances. It is also known as the Type I Error rate and is the complement of specificity (FPR = 1 - Specificity).
Formally, the FPR is calculated as
FPR = FP/FP + TN
FPR directly relates to the threshold classifying instances as positive or negative. A lower threshold will increase the number of false positives and thus increase the FPR, while a higher threshold will decrease the number of false positives and decrease the FPR.
In practice, the FPR is often plotted on the x-axis of a Receiver Operating Characteristic (ROC) curve to visualize the trade-off between the TPR and FPR for different classification thresholds. See section <ref> for more details.
§.§.§ Negative Predictive Value (NPV)
The NPV measures the proportion of negative cases that are correctly identified as such <cit.>. It is calculated as
NPV = TN/TN + FN
The NPV is useful when the cost of a false negative (i.e., an actual negative case being classified as positive) is high. For example, a false negative result in medical diagnostics can delay treatment or even death. In such cases, a high NPV is desired.
Unlike sensitivity and specificity, the NPV depends on the prevalence of the condition in the population, so it directly reflects how trustworthy a negative prediction is for the population at hand. This makes the NPV a useful complementary metric when the class distribution is imbalanced.
The NPV can be interpreted as the complement of the false omission rate (FOR), i.e.,
NPV = 1 - FOR, where FOR = FN/(FN + TN)
§.§.§ True Discovery Rate (TDR)
TDR evaluates the proportion of true positive predictions a model makes among all the positive predictions. It is also known as the Positive Predictive Value (PPV) or precision of the positive class <cit.>. TDR is calculated as
TDR = TP/TP + FP
TDR is a useful metric for evaluating the performance of a model in situations where the number of false positive predictions is high, and the number of true positive predictions is low. It is particularly useful in high-dimensional datasets where the number of features is large and the number of positive observations is low. TDR can provide a more accurate picture of the model's performance than accuracy or recall in such cases.
There may be a trade-off between TDR and recall in some cases: TDR may be low when the recall is high, and vice versa. Therefore, it's important to consider both TDR and recall when evaluating the performance of a model.
§.§.§ False Discovery Rate (FDR)
The FDR measures the proportion of false positives among all positive predictions made by a classifier <cit.>. It is defined as
FDR = FP/TP + FP
The FDR is the complement of precision (FDR = 1 - Precision) and can serve as an alternative to the False Positive Rate (FPR) when the reliability of the positive predictions matters more than the rate of false alarms among actual negatives. It is particularly useful in cases where the number of false positives is more critical than the number of false negatives, such as in medical testing or fraud detection. A lower FDR value indicates that the classifier makes fewer false positive predictions.
§.§.§ Precision-Recall Curve
The precision-recall (PR) curve is a graphical representation of the trade-off between precision and recall for different threshold values of a classifier. Precision is the proportion of true positive predictions out of all positive predictions, while recall is the proportion of true positive predictions out of all actual positive instances. The precision-recall curve plots precision on the y-axis and recall on the x-axis for different threshold values of the classifier <cit.>.
Computing the Precision-Recall Curve
* Start with a binary classifier that can predict a binary outcome and estimate the probability of the positive class. These probabilities are also known as scores.
* For every possible threshold (from 0 to 1) on these scores, calculate the Precision (see Section <ref>) and the Recall (see Section <ref>).
* Plot a curve with Recall on the X-axis and Precision on the Y-axis. Figure <ref>(a) shows an example of Precision-Recall curves for three models.
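The threshold sweep described in the steps above can be sketched as follows (a simplified NumPy implementation for illustration; scikit-learn's precision_recall_curve offers an optimized equivalent):

import numpy as np

def pr_curve(y_true, scores):
    # Sweep every distinct score as a threshold and collect (recall, precision) pairs
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    precisions, recalls = [], []
    for t in np.unique(scores):
        y_pred = (scores >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        if tp + fp == 0:  # no positive predictions at this threshold
            continue
        precisions.append(tp / (tp + fp))
        recalls.append(tp / (tp + fn))
    return np.array(recalls), np.array(precisions)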
Interpretation of the Precision-Recall Curve
Figure <ref>(a) shows the precision/recall curves for three models trained on the same data. The dashed line shows the ideal performance. Each model reports its Average Precision metric (see Section <ref> for more details on Average Precision). In the following, we explain how to interpret PR curves.
The closer the curve is to the top-right corner, the better the model's performance. Ideal performance is indicated by a point at (1,1), which signifies perfect precision (no false positives) and recall (no false negatives). If the curve is closer to the top-right corner of the plot, it indicates that the model achieves a good balance of precision and recall for most threshold settings.
The area under the curve (AUC-PR) provides a single-number summary of the information in the curve. The maximum possible AUC-PR is 1, which corresponds to a perfect classifier. Unlike the ROC curve, the baseline of a random classifier is not 0.5 but the prevalence of the positive class (the fraction of positive samples in the data). A model with a higher AUC-PR is generally considered better.
Steepness of the curve. Ideally, we want the recall to increase quickly as precision decreases slightly, resulting in a steep curve. This steepness reflects a good balance between precision and recall. If the curve is less steep, we are losing a lot of precision for small increases in recall.
Comparison of different models. We can compare the PR curves of different models to understand their performance. If the PR curve of one model is entirely above that of another, it indicates superior performance across all thresholds.
§.§.§ Area Under the Receiver Operating Characteristic curve (AUC-ROC)
The Area Under the Receiver Operating Characteristic Curve (AUC-ROC) is a commonly used performance metric for evaluating the performance of binary classification models <cit.>. It measures the ability of the model to distinguish between positive and negative classes by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The AUC-ROC is a value between 0 and 1, with 1 indicating a perfect classifier and a value of 0.5 indicating a classifier that performs no better than random guessing.
The AUC-ROC offers a single-value summary of the model's performance across all possible threshold values. This measure is particularly valuable when comparing the performance of different models, as its assessment is independent of threshold choice.
However, in cases where the positive and negative class distributions are significantly imbalanced, the AUC-ROC, while still applicable, may not provide the most accurate performance representation. With a heavy imbalance, the ROC curve can appear overly optimistic, as a low false positive rate can still mean a large number of false positives if the total count of actual negatives is high, resulting in a misleadingly high AUC-ROC value.
In such imbalanced scenarios, the Precision-Recall (PR) curve and its corresponding area under the curve (AUC-PR) can often provide a more nuanced and accurate performance assessment. As PR curves focus more on detecting positive instances, often the minority class in an imbalanced dataset, they can deliver a more insightful evaluation of a model's ability to detect positive instances, providing a more relevant representation of the model's performance.
Computing the ROC Curve
* Start with a binary classifier that can predict a binary outcome and estimate the probability of the positive class. These probabilities are also known as scores.
* For every possible threshold (from 0 to 1) on these scores, calculate the TPR (see Section <ref>) and the FPR (see Section <ref>).
* Plot a curve with FPR on the X-axis and TPR on the Y-axis. Figure <ref>(b) shows an example of ROC curves for three models.
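These steps are implemented, for example, in scikit-learn; the toy labels and scores below are placeholders used only to show the calling convention:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])   # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, scores)      # TPR and FPR per threshold
auc = roc_auc_score(y_true, scores)                   # area under the ROC curve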
Interpretation of the ROC Curve
Figure <ref>(b) shows the ROC curves for three models trained on the same data. The dashed line shows random performance. Each model reports its Area under the curve (AUC) in the legend. In the following, we explain how to interpret ROC curves.
TPR and FPR on each axis: The True Positive Rate (TPR) is used for the vertical axis. It measures the proportion of actual positives that are correctly identified as such. The False Positive Rate (FPR), also known as the fall-out or Probability of False Alarm, measures the proportion of actual negatives that are incorrectly identified as positives. The ROC curve plots the TPR vs. FPR at different classification thresholds. Lowering the classification threshold classifies more items as positive, thus increasing both False Positives and True Positives.
Area Under the ROC Curve (AUC-ROC): AUC provides an aggregate performance measure across all possible classification thresholds. AUC-ROC of a model equals the probability that the model will rank a randomly chosen positive instance higher than a randomly chosen negative instance. Hence, the higher the AUC-ROC score, the better the model (from 0 to 1).
Diagonal line equals random guess: The diagonal line in the ROC curve plot has an AUC of 0.5 and represents a model with no discriminatory ability, i.e., one that predicts positives and negatives at random.
Towards the top-left corner: The more the curve sits in the top-left corner, the better the classifier, as it means the True Positive Rate is high and the False Positive Rate is low.
Compare Models: ROC curves are useful for comparing different models. The model with a higher AUC and its curve towards the top-left corner is generally considered better.
§ OBJECT DETECTION
Object detection in deep learning is a computer vision technique that involves localizing and recognizing objects in images or videos. It is common in various applications such as autonomous driving <cit.>, surveillance <cit.>, human-computer interaction <cit.>, and robotics <cit.>. Object detection involves identifying the presence of an object, determining its location in an image, and recognizing the object's class.
§.§ Object Detection Loss Functions
Since object detection involves localization (regression) and recognition (classification), object detection systems use a combination of multiple loss functions. Among these loss functions, we find:
* Multi-Class Log Loss (also known as Cross-Entropy Loss): It is used for the multi-class classification part of the object detector. Penalizes the difference between the predicted class probabilities and the ground truth class labels.
* Smooth L1 Loss: It is used for the regression part of the object detector. It aims to reduce the mean absolute error between the predicted and ground truth bounding box coordinates.
* IoU Loss: It calculates the Intersection Over Union (IoU) between the predicted bounding box and the ground truth bounding box and penalizes the difference between the predicted IoU and the ground truth IoU.
* Focal Loss: It is used to overcome the problem of class imbalance and focuses on the misclassified samples. It penalizes the samples that are easily classified with high confidence and gives more weight to the samples that are difficult to classify.
* YOLO Loss: It is used for the You Only Look Once (YOLO) object detection family of algorithms and combines the prediction of bounding box coordinates, objectness scores, and class probabilities.
In the following sections, we will delve into the loss functions that we have not touched on before or are defined differently.
§.§.§ Smooth L1 Loss
The smooth L1 loss, also known as the smooth mean absolute error (SMAE) loss, is a commonly used loss function in object detection tasks; it was introduced in Fast R-CNN <cit.>.
The smooth L1 loss is a modification of the mean absolute error (MAE) loss that aims to balance between being too sensitive to outliers and insensitive to small errors. The formula for the smooth L1 loss is given by
L = 0.5 · (x_i - y_i)^2 if |x_i - y_i| < 1, and L = |x_i - y_i| - 0.5 otherwise,
where x_i and y_i are the predicted and ground truth values, respectively.
It is commonly used in the region proposal network (RPN) part of the two-stage object detectors <cit.> to regulate the regression of the bounding box coordinates outperforming the mean square error (MSE) loss in terms of both accuracy and efficiency.
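A minimal NumPy sketch of the piecewise definition above (PyTorch ships an equivalent as torch.nn.SmoothL1Loss); the function name is ours:

import numpy as np

def smooth_l1(pred, target):
    # Element-wise smooth L1, averaged over all box coordinates
    diff = np.abs(pred - target)
    loss = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5)
    return loss.mean()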
§.§.§ Intersection over Union (IoU) Loss
IoU is a metric used in object detection that measures the overlap between two bounding boxes. Figure <ref> depicts the IoU metric used in object detection. The IoU between two bounding boxes is calculated as
IoU = area of intersection/area of union
The IoU loss function is defined as
L = 1 - IoU
This function encourages the predicted bounding boxes to overlap highly with the ground truth bounding boxes. A high IoU value indicates that the predicted bounding box is close to the ground truth, while a low IoU value indicates that the predicted bounding box is far from the ground truth.
The IoU loss function is commonly used for one-stage detectors <cit.> as part of a multi-task loss function that includes a classification loss and a localization loss.
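As a sketch (assuming axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates), the IoU and the corresponding loss can be computed as:

def iou(box_a, box_b):
    # Intersection rectangle
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred_box, gt_box):
    return 1.0 - iou(pred_box, gt_box)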
§.§.§ YOLO Loss
The YOLO loss function is used in the YOLO object detection architecture. It was introduced by Redmon et al. in <cit.>. The YOLO loss function is a multi-part loss that consists of three components:
* Localization loss: This component penalizes the network for misprediction of the object's coordinates in the image. It is calculated as the mean squared error between the predicted and ground-truth bounding box coordinates.
* Confidence loss: This component penalizes the network for not detecting an object even when one is present. It is a binary cross-entropy loss calculated between the predicted objectiveness score and the ground-truth label.
* Classification loss: This component penalizes the network for misclassifying the object. It is a multi-class cross-entropy loss calculated between the predicted class scores and the ground-truth label.
The total YOLO loss is the weighted sum of these three components. Figure <ref> explains the full YOLO loss function.
§.§ Object Detection Metrics
To compute the metrics in object detection, we also compute the True Positives, False Positives, True Negatives, and False Negatives. The definitions of these metrics are based on the IoU score as follows:
True Positives in object detection: The match between the predicted location of an object and its actual location is measured using an Intersection Over Union (IoU) score. The IoU score ranges from 0 to 1, with a score of 1 indicating a perfect match between the predicted and ground-truth locations. Since a perfect match is hard to achieve, we define a threshold value to determine whether a prediction is a true positive. Common values for the threshold are 0.25, 0.5, and 0.75. These thresholds are not fixed and can be adjusted based on the application's requirements. If the IoU score between the predicted and ground-truth boxes is greater than or equal to the defined threshold, the prediction is considered a true positive.
False Positive in object detection: Occurs when the model predicts the presence of an object, but the object is not present in the image. This affects the precision metric.
False Negative in object detection: Occurs when the model fails to detect an object that is present in an image. This affects the recall metric.
True Negative in object detection: Refers to a case where the object detector correctly determines that an object is not present in an image.
Common IoU thresholds for object detection:
* 0.5: A threshold of 0.5 is commonly used as a balanced threshold for object detection. A predicted bounding box is considered a true positive if its IoU with the ground truth bounding box is greater than or equal to 0.5.
* 0.75: A threshold of 0.75 is used for applications that require higher precision, such as autonomous driving, where false positive detections can lead to critical consequences.
* 0.25: A threshold of 0.25 is used for applications that require higher recall, such as medical image analysis, where missing detections can lead to an incorrect diagnosis.
The common object detection metrics are:
* Average Precision (AP).
* Intersection over union (IoU). See details in section <ref>.
* Precision-Recall Curve. See details in section <ref>.
§.§.§ Average Precision (AP)
Object detection models must identify and localize multiple object categories in an image. The AP metric addresses this by calculating each category's AP separately and then taking the mean of these APs across all categories (that is why it is also called mean average precision or mAP). This approach ensures that the model's performance is evaluated for each category individually, providing a more comprehensive assessment of the model's overall performance.
To accurately localize objects in images, AP incorporates the Intersection over Union (IoU) to assess the quality of the predicted bounding boxes. As described previously, IoU is the ratio of the intersection area to the union area of the predicted bounding box and the ground truth bounding box (see Figure <ref>). It measures the overlap between the ground truth and predicted bounding boxes. The COCO benchmark considers multiple IoU thresholds to evaluate the model's performance at different levels of localization accuracy.
The two most common object detection datasets are The Pascal VOC <cit.> and Microsoft COCO <cit.>. The AP is computed differently in each of these. In the following, we describe how it is computed on each dataset.
§.§.§ VOC Dataset
This dataset includes 20 object categories. To compute the AP in VOC, we follow the next steps:
* Compute Intersection over Union (IoU): For each detected object, compute the IoU with each ground truth object in the same image (refer to section <ref> for more details).
* Match Detections and Ground Truths: For each detected object, assign it to the ground truth object with the highest IoU, if the IoU is above the threshold.
* Compute Precision and Recall: For each category, calculate the precision-recall curve by varying the confidence threshold of the model's predictions (refer to section <ref> for more details). This results in a set of precision-recall pairs.
* Sort and interpolate with 11-points: Sort the precision-recall pairs by recall in ascending order. Then, for each recall level r in the set {0, 0.1, 0.2, ..., 1.0}, find the highest precision p(r) for which the recall is at least r. This is known as interpolated precision. This process results in a precision-recall curve that is piecewise constant and monotonically decreasing.
* Compute Area Under Curve (AUC): The Average Precision is then defined as the area under this interpolated precision-recall curve. Since the curve is piecewise constant, this can be computed as a simple sum: AP = sum(p(r) / N), where the sum is over the N recall levels, and p(r) is the interpolated precision at recall level r.
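Given the recall/precision pairs produced by the matching step, the 11-point interpolation can be sketched as follows (illustrative only; reference toolkits additionally handle edge cases such as duplicate detections):

import numpy as np

def voc_11_point_ap(recalls, precisions):
    recalls, precisions = np.asarray(recalls), np.asarray(precisions)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):               # recall levels 0.0, 0.1, ..., 1.0
        mask = recalls >= r
        p_interp = precisions[mask].max() if mask.any() else 0.0
        ap += p_interp / 11.0
    return ap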
§.§.§ Microsoft COCO Dataset
This dataset includes 80 object categories and uses a more complex method for calculating AP. Instead of using an 11-point interpolation, it uses a 101-point interpolation, i.e., it computes the precision for 101 recall thresholds from 0 to 1 in increments of 0.01. Also, the AP is obtained by averaging over multiple IoU values instead of just one, except for a common AP metric called AP_50, which is the AP for a single IoU threshold of 0.5. Table <ref> shows all the metrics used to evaluate models in the COCO dataset.
The steps for computing AP in COCO are the following:
* Compute the Intersection over Union (IoU): For each detected object, compute the IoU with each ground truth object in the same image.
* Match Detections and Ground Truths: For each detected object, assign it to the ground truth object with the highest IoU, if this IoU is above the threshold.
* Compute Precision and Recall: For each possible decision threshold (confidence score of the detection), compute the precision and recall of the model. This results in a set of precision-recall pairs.
* Interpolate Precision: For each recall level r in the set {0, 0.01, 0.02, ..., 1.00} (for the 101-point interpolation used in COCO), find the maximum precision p(r) for which the recall is at least r. This is known as interpolated precision.
* Compute Area Under Curve (AUC): The Average Precision is then defined as the area under this interpolated precision-recall curve. Since the curve is piecewise constant, this can be computed as a simple sum: AP = sum(p(r)) / 101, where the sum is over the 101 recall levels, and p(r) is the interpolated precision at recall level r.
* Average over IoU Thresholds: Repeat steps 2-5 for different IoU thresholds (e.g., 0.5, 0.55, 0.6, ..., 0.95) and average the AP values.
* Average over Categories: Repeat steps 2-6 for each category and average the AP values. This is to prevent categories with more instances from dominating the evaluation.
* Average over Object Sizes: Finally, you can compute AP for different object sizes (small, medium, large) to see how well the model performs on different sizes of objects.
§.§.§ Average Recall (AR)
Average Recall (AR) is used to evaluate the performance of object detection models. Unlike Precision or Recall, defined at a particular decision threshold, Average Recall is computed by averaging recall values at different levels of Intersection over Union (IoU) thresholds and, if needed, at different maximum numbers of detections per image. This metric is commonly used to report COCO data results <cit.>.
The general steps to compute AR are the following:
* Compute the Intersection over Union (IoU): For each detected object, compute the IoU with each ground truth object in the same image.
* Match Detections and Ground Truths: For each ground truth object, find the detected object with the highest IoU. If this IoU is above a certain threshold, the detection is considered a true positive, and the ground truth is matched. Each ground truth can only be matched once.
* Compute Recall: For each image, recall is the number of matched ground truths divided by the total number of ground truths.
* Average over IoU Thresholds: Repeat steps 2 and 3 for different IoU thresholds (e.g., from 0.5 to 0.95 with step size 0.05), and average the recall values.
* Average over Max Detections: Repeat steps 2-4 for different maximum numbers of detections per image (e.g., 1, 10, 100), and average the recall values. This step is necessary because allowing more detections per image can potentially increase recall but at the cost of potentially more false positives.
* Average over Images: Finally, compute the average recall over all the images in the dataset.
For COCO, the Average Recall measure can also be computed separately for different object sizes (small, medium, and large) to evaluate how well the model works for objects of different sizes.
§ IMAGE SEGMENTATION
Image segmentation aims to assign a label or category to each pixel in the image, effectively segmenting the objects at a pixel level. Segmentation is usually performed using deep learning models trained to classify each pixel in the image based on its features and context. Segmentation methods are mainly classified into three categories: semantic segmentation <cit.>, instance segmentation <cit.>, and panoptic segmentation <cit.>.
Semantic Segmentation studies the uncountable stuff in an image. It analyzes each image pixel and assigns a unique class label based on the texture it represents. In a street image, the semantic segmentation's output will assign the same label to all the cars and the same label to all the pedestrians; it cannot differentiate individual objects of the same class.
Instance Segmentation deals with countable things.
It can detect each object or instance of a class present in an image and assigns it to a different mask or bounding box with a unique identifier.
Panoptic Segmentation presents a unified segmentation approach where each pixel in a scene is assigned a semantic label (due to semantic segmentation) and a unique instance identifier (due to instance segmentation).
Segmentation applications include scene understanding <cit.>, medical image analysis <cit.>, robotic perception <cit.>, autonomous vehicles <cit.>, video surveillance <cit.>, and augmented reality <cit.>.
§.§ Segmentation Loss Functions
Common loss functions include cross-entropy loss, Intersection over union (IoU) loss, Focal loss, Dice loss, Tversky loss, and Lovász Loss.
The following sections will describe these loss functions and their applications.
§.§.§ Cross Entropy Loss for Segmentation
The cross-entropy loss for segmentation measures the dissimilarity between the predicted and ground truth segmentation maps. The cross-entropy loss is calculated by comparing the predicted and ground truth segmentation maps pixel-by-pixel <cit.>. It is defined as the negative log-likelihood of the ground truth segmentation map given the predicted segmentation map. The cross-entropy loss is calculated using the following formula:
-1/N∑_i=1^N∑_c=1^C y_i,c log(p_i,c),
where N is the total number of pixels in the image, C is the number of classes, y is the ground truth segmentation map, and p is the predicted segmentation map. The values of y and p should be between 0 and 1 and sum up to 1. The lower the cross-entropy loss, the better the prediction.
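In practice, this per-pixel cross-entropy is available directly in deep learning frameworks; a PyTorch sketch with illustrative tensor shapes:

import torch
import torch.nn.functional as F

logits = torch.randn(2, 3, 64, 64)            # (batch, classes, height, width)
target = torch.randint(0, 3, (2, 64, 64))     # ground-truth class index per pixel

loss = F.cross_entropy(logits, target)        # averaged over all pixels and images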
§.§.§ Intersection Over Union (IoU) loss for segmentation
The Intersection Over Union (IoU) loss is a commonly used loss function and evaluation metric for semantic segmentation tasks, where the goal is to predict a per-pixel segmentation mask for a given image. The IoU loss is also known as the Jaccard loss or Jaccard Index (JI) and is defined as the ratio of the intersection of the predicted and ground-truth masks to the union of the predicted and ground-truth masks. The IoU is calculated on a per-pixel basis, and the final loss is the average IoU across all pixels in the image.
The IoU loss can be defined mathematically as:
IoU = 1/n∑_i=1^ny_i ∩ŷ_i/y_i ∪ŷ_i,
where y_i is the ground-truth mask for pixel i, ŷ_i is the predicted mask, y_i ∩ŷ_i is the intersection of the ground-truth and predicted masks, and y_i ∪ŷ_i is the union of the ground-truth and predicted masks.
IoU is commonly used in various semantic segmentation works as a loss function <cit.> and as an evaluation metric<cit.>.
§.§.§ Dice Loss
The Dice loss, also known as the Dice similarity coefficient <cit.>, is used to evaluate the similarity between the predicted segmentation mask and the ground truth mask. The loss function is defined as
L = 1 - 2 · intersection(pred, gt)/|pred| + |gt|,
where pred is the predicted segmentation mask, gt is the ground truth segmentation mask, intersection(pred,gt) is the number of pixels that are in the intersection of the predicted and ground truth masks, and |pred| and |gt| are the total number of pixels in the predicted and ground truth masks, respectively.
Dice loss is widely used in medical imaging, where the goal is to segment structures in images with high precision.
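A common differentiable ("soft") variant replaces the hard masks with predicted probabilities; a minimal PyTorch sketch (the small constant eps, added to avoid division by zero, is our choice):

import torch

def dice_loss(pred, gt, eps=1e-6):
    # pred: predicted probability mask, gt: binary ground-truth mask (same shape)
    pred, gt = pred.flatten(), gt.flatten()
    intersection = (pred * gt).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)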
§.§.§ Tversky loss
The Tversky loss <cit.> is a variation of the Dice loss, commonly used in image segmentation tasks. It is defined as
Tversky(A,B) = |A ∩ B|/|A ∩ B| + α |A \ B| + β |B \ A|,
where A and B are the predicted and ground truth segmentation masks, respectively, and α and β are user-defined hyperparameters that control the weighting of false positives and false negatives.
This loss function is similar to the Dice loss, but it allows the assignment of different weights to false positives and false negatives, which can be useful in certain scenarios where the imbalance between the two types of errors is significant.
§.§.§ Lovász Loss
The main idea behind the Lovász Loss <cit.> is to directly optimize the Jaccard index (IoU) between the predicted segmentation and the ground-truth segmentation. This loss function is particularly useful in image segmentation tasks where the Intersection-over-Union (IoU) score is highly important. In the simplified form below, the loss is written as a Jaccard-weighted sum of the per-pixel log-probabilities of the predicted segmentation mask:
L = -1/N∑_i=1^N Jaccard(p_i,y_i) log(p_i),
where N is the number of pixels in the image, p is the predicted segmentation mask, and y is the ground-truth segmentation mask.
The Lovász loss provides a differentiable surrogate for the non-differentiable IoU metric, allowing it to be optimized directly.
§.§ Segmentation Metrics
The common metrics for evaluating segmentation are the Mean Intersection over union (mIoU), pixel accuracy, Average Precision (AP) (refer to section <ref>), BF score, and Panoptic Quality. The following sections will explain each, skipping IoU, and AP already discussed.
§.§.§ Pixel Accuracy
Pixel accuracy measures the proportion of correctly classified pixels in the whole image. It is calculated by dividing the number of correctly classified pixels by the total number of pixels in the image. The formula for pixel accuracy is
Pixel accuracy = Number of correctly classified pixels/Total number of pixels in image
Like regular accuracy, pixel accuracy can be misleading when the class imbalance is high, as it does not consider false positives or negatives. This is why other metrics, such as Intersection over Union (IoU) or Dice coefficient, are commonly used to evaluate image segmentation models.
§.§.§ Boundary F1 Score (BF)
The BF <cit.>, often abbreviated as BF score, is a metric used for evaluating image segmentation quality, particularly when the precision of the boundary location is important. The BF score applies the concept of the F1 score to the segmentation boundaries rather than the segment regions themselves.
The computation of the BF score involves the following steps:
* For each segment in the ground truth, find the closest predicted segment (according to some distance measure, often the mean shortest distance between the boundaries of the two segments).
* Compute the precision as the proportion of predicted segments close enough to a ground truth segment. Formally, this is P = TP/(TP + FP), where TP (True Positive) is the number of predicted segments close enough to a ground truth segment, and FP (False Positive) is the number of predicted segments not close enough to any ground truth segment.
* Compute the recall as the proportion of ground truth segments close enough to a predicted segment. Formally, this is R = TP/(TP + FN), where FN (False Negative) is the number of ground truth segments not close enough to any predicted segment.
* The BF score is then the harmonic mean of precision and recall:
F_score = 2 * Precision * Recall/Precision + Recall
When matching predicted segments to ground truth segments, a distance threshold typically defines what counts as close enough. The BF score ranges from 0 to 1, with 1 indicating a perfect match between the predicted and ground truth boundaries.
§.§.§ Panoptic Quality (PQ)
PQ is a metric proposed for evaluating panoptic segmentation tasks <cit.>.
The Panoptic Quality metric is defined as:
PQ = (Σ_(p,g) ∈ TP IoU(p,g) / |TP|) × (|TP| / (|TP| + 1/2|FP| + 1/2|FN|)),
where
* IoU(p, g) denotes the intersection-over-union of prediction p and ground truth g.
* TP (True Positive) is a set of matched pairs of predicted and ground truth segments.
* FP (False Positive) is a set of predicted segments not matched with any ground truth segment.
* FN (False Negative) is a set of ground truth segments not matched with any predicted segment.
The PQ metric ranges between 0 and 1, where 1 signifies a perfect segmentation. It is a product of two terms:
The first term, called segmentation quality (SQ), calculates the average IoU of the correctly predicted segments (true positives). This term measures how accurately each detected object has been segmented and thus rewards prediction quality.
The second term, called Recognition Quality (RQ), calculates the ratio of the number of true positives to the total number of segments. This term measures how accurately each object has been recognized.
This metric can be more informative than mean IoU when dealing with complex scenes containing multiple object instances per class, as it considers both the segmentation quality and the correct recognition of distinct instances.
§ FACE RECOGNITION
Face recognition aims to accurately match an individual's face in an image or video to a corresponding entry in a database of faces. This task is often performed using deep learning algorithms, such as Convolutional Neural Networks (CNNs) or Transformers <cit.>, that are trained on large datasets of face images. The algorithms are trained to extract features from face images and then use these features to recognize faces that match those in the database. Face recognition has many applications, including security <cit.>, social media <cit.>, and biometric identification systems <cit.>.
§.§ Face Recognition Loss Functions and Metrics
The loss functions used in face recognition are typically aimed at preserving the structure and similarity of the input faces. They can be divided into two classes: loss functions based on classification and loss functions based on representation learning.
Common loss functions based on classification are softmax loss, A-softmax loss, Center loss, Large-Margin cosine loss, and Additive Angular Margin loss. On the other hand, loss functions based on representation learning are Triplet loss, Contrastive loss, Circle loss, and the Barlow twins loss.
The metrics commonly used for face recognition are the same as the ones used for classification, such as accuracy, precision, recall, F1-score, ROC, etc.
In the following subsections, we will describe each of these loss functions.
§.§.§ Softmax Loss
The Softmax Loss applies the softmax function to the classifier's output scores to obtain class probabilities and then computes the cross-entropy between these predicted probabilities and the true class label, i.e., the negative log-likelihood of the correct class. The final loss is obtained by averaging this quantity over all samples in the batch.
Let's denote the weight vectors for each class (or, in this case, each face identity) as W = {w_1, w_2, ..., w_n}, where n is the number of classes (or identities). For a given input image x of class y, the linear classifier would compute a score f(x, W) = Wx + b, where b is the bias term.
The softmax function converts these scores into probabilities. The probability of the i^th class is computed as follows:
P(y=i| x; W) = e^w_i^T x + b_i/∑_j=1^n e^w_j^T x + b_j
The softmax loss (also known as the cross-entropy loss) for an input-output pair (x, y) is defined as the negative log-likelihood of the correct class, which can be expressed as
L_i = - log(P(y=y_i|x;W)) = - f_y_i + log∑_j=1^n e^f_j,
where f_y_i is the score for the correct class, and f_j are the scores for all classes. The total loss for a batch of data is the mean of L_i over all the examples in the batch.
The disadvantage of this loss function is that it does not have fine-grained control over the intra-class and inter-class distances that come in handy for face recognition purposes.
§.§.§ A-Softmax Loss
The A-Softmax loss <cit.>, also known as the SphereFace loss, was designed to address the limitations of the traditional softmax loss by considering the angular information between the features of face images and their corresponding labels. The A-Softmax loss aims to maximize the inter-class separability and minimize the intra-class variations of face features.
Given a weight matrix W, an input feature vector x, and a margin parameter m, the SphereFace loss is calculated as follows:
1. Compute the Normalized weight matrix W_norm as
W_norm = W/||W||
where each column w_i in W_norm is a unit vector, i.e., ||w_i|| = 1. The normalization operation makes the weights for each class lie on the unit hypersphere.
2. Compute the margin M to be applied to the angles:
M = (m - 1) · y_true + 1
In this equation, y_true is the true class label. If y_true equals 1, M equals m, and if y_true equals 0, M equals 1.
3. Compute the cosine of the angle θ between the feature vector x and the weight vector:
cos(θ) = W_norm· x/||x||,
where ||·|| denotes the L2 norm.
4. Compute the new angle θ' after applying the margin, and take its cosine cos(θ'):
θ' = θ· M
5. Compute the new prediction y_pred' by rescaling with the norm of x:
y_pred' = ||x|| · cos(θ')
6. Finally, compute the SphereFace loss L, which is the negative log-likelihood of the true class:
L = - log(∑ y_true· e^y_pred' / ∑ e^y_pred')
Here, the summation is taken over all classes. The numerator is the exponentiated prediction for the true class, and the denominator is the sum of exponentiated predictions for all classes.
Compared to the traditional softmax loss, the A-Softmax loss produces more discriminative and compact features, improving face recognition performance.
§.§.§ Center Loss
The center loss <cit.> aims to reduce the intra-class variance by penalizing the distances between the features of a sample and the center of its corresponding class, while the accompanying softmax loss maintains inter-class separability. The center loss is inspired by the idea of having a center for each class in the feature space, and the loss function encourages the features of the same class to be close to its center.
The center loss is defined as the Euclidean distance between the feature of a sample and the center of its class in the feature space.
The center loss is usually added to the main loss function in the training process, and it is defined as
L_center = 1/2∑_i=1^n| 𝐱_𝐢 - 𝐜_𝐲_𝐢|_2^2,
where 𝐱_𝐢 is the feature representation of the i^th sample, y_i is its corresponding class label, 𝐜_𝐲_𝐢 is the center of class y_i, and n is the number of samples.
§.§.§ CosFace: Large-Margin Cosine Loss
CosFace loss, also known as the Large Margin Cosine Loss, maximizes the decision margin in the cosine space to further enhance the discriminative power of the deeply learned features.
The cosine of the angle between the feature vector x_i and the weight w_j of the j^th class is given by
cosθ_j = w_j^T x_i/||w_j|| ||x_i||,
where ||.|| denotes the l2 norm.
The CosFace method adds a cosine margin m to the target logit, so the modified cosine of the angle θ_y_i corresponding to the ground truth class y_i is given by:
cosθ_y_i - m
Then the CosFace loss for an input-output pair (x_i, y_i) is defined as
L_i = -log( e^(s(cosθ_y_i - m)) / ( e^(s(cosθ_y_i - m)) + ∑_(j ≠ y_i) e^(s·cosθ_j) ) ),
where s is a scaling factor.
One advantage of this function is that the cosine function's non-monotonicity does not create a problem here, unlike SphereFace. Also, because the feature vector is normalized, the model must learn better separation of the angles as it cannot reduce loss by learning a different norm <cit.>.
§.§.§ ArcFace. Additive Angular Margin Loss
The ArcFace loss <cit.> enhances the discriminative power of the softmax loss by adding an angular margin penalty to the target logit.
The ArcFace method adds an additive angular margin m to the target logit, so the modified cosine of the angle θ_y_i corresponding to the ground truth class y_i is given by
cos(θ_y_i + m)
Then the ArcFace loss for an input-output pair (x_i, y_i) is defined as
L_i = -log( e^(s·cos(θ_y_i + m)) / ( e^(s·cos(θ_y_i + m)) + ∑_(j ≠ y_i) e^(s·cosθ_j) ) ),
where s is a scaling factor. The margin m can be interpreted as an additional arc length on the hypersphere of radius s. Experiments show better inter-class discrepancy than Triplet Loss while having about the same intra-class similarity <cit.>.
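A minimal PyTorch sketch of how the additive angular margin modifies the target logit before the softmax cross-entropy (the function name and the values of s and m are illustrative, not the reference implementation):

import torch
import torch.nn.functional as F

def arcface_loss(features, weights, labels, s=64.0, m=0.5):
    # Cosine similarity between L2-normalised features (N, D) and class weights (C, D)
    cos = F.normalize(features) @ F.normalize(weights).t()
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, num_classes=weights.size(0)).bool()
    logits = torch.where(target, torch.cos(theta + m), cos) * s
    return F.cross_entropy(logits, labels)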
§.§.§ Triplet Loss
The triplet loss <cit.> is probably the best-known loss function for face recognition. The idea is to train the model to distinguish between a positive pair of images (two images of the same person) and a negative pair of images (two images of different persons).
Given an anchor image, A, a positive image, P, and a negative image, N, (see Figure <ref>), the loss is calculated as the distance between the anchor image and positive image and the distance between the anchor image and negative image, plus a margin. The equation is defined as
L_triplet = max(0, ‖f_A - f_P ‖^2_2 - ‖f_A - f_N ‖^2_2 + α),
where f_A, f_P, and f_N are the embeddings of the anchor image, positive image, and negative image, respectively, and α is a hyperparameter known as the margin. By minimizing this loss, the embeddings of the positive images get closer to each other than the embeddings of the negative images.
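A minimal PyTorch sketch over batches of embeddings (the margin value is illustrative; torch.nn.TripletMarginLoss provides an equivalent building block):

import torch
import torch.nn.functional as F

def triplet_loss(f_a, f_p, f_n, margin=0.2):
    # f_a, f_p, f_n: (batch, dim) embeddings of anchor, positive, and negative images
    d_pos = (f_a - f_p).pow(2).sum(dim=1)
    d_neg = (f_a - f_n).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()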
§.§.§ Contrastive Loss
The contrastive loss <cit.> learns a feature representation or image embedding that projects similar images close to each other and dissimilar images far apart. This loss is based on a Siamese architecture of the neural network. The contrastive loss function is defined as:
L = 1/N∑_i=1^N y_i |f(x_i^a) - f(x_i^p)|_2^2 + (1-y_i)max(0, m - |f(x_i^a) - f(x_i^n)|_2^2),
where f(x_i^a) is the deep feature representation of the anchor image x_i^a, f(x_i^p) is the deep feature representation of the positive image x_i^p, f(x_i^n) is the deep feature representation of the negative image x_i^n. y_i is a binary label indicating whether the anchor and positive images are similar (1) or dissimilar (0), m is a margin hyperparameter, and N is the number of triplets in the batch.
The margin m controls how hard the model should work to push dissimilar embeddings apart. Extending a trained model for new/unseen classes is easy because it learns to create a semantic representation of the image rather than classify it among a predetermined set of classes.
One limitation of this loss is that the margin m is the same constant for all dissimilar pairs, which implicitly pushes the model to have the same distance between all dissimilar pairs, even if some are more dissimilar <cit.>. A second limitation is that the absolute notion of similar and dissimilar pairs is not transferable from one context to another <cit.>.
§.§.§ Circle Loss
The Circle Loss <cit.> pushes positive pairs closer and negative pairs farther away while maintaining a circle-like decision boundary. This is achieved by adding margins to positive and negative pairs, enlarging the intra-class variance for negative pairs, and reducing the intra-class variance for positive pairs. By doing so, Circle Loss can effectively handle imbalanced data and complex distributions, which are common challenges in face recognition tasks. The circle loss equation can be expressed as
α_pos_i = max(O_pos_i - s_pos_i, 0)
α_neg_j = max(s_neg_j - O_neg_j, 0)
sum_pos = ∑_i e^(-γ·α_pos_i· s_pos_i)
sum_neg = ∑_j e^(γ·α_neg_j· s_neg_j)
L = log(1 + sum_pos· sum_neg)
Where:
* s_pos_i and s_neg_j represent the pairwise similarity between the positive and negative pairs. The positive pairs belong to the same class, and the negative pairs belong to different classes.
* O_pos_i and O_neg_j represent the user-defined margins for positive and negative pairs, respectively. O_pos is a margin that should be smaller than the expected similarity of positive pairs, while O_neg should be larger than the expected similarity of negative pairs. Thus, you want to choose the margins such that O_neg < O_pos, leaving a gap between them.
* α_pos_i and α_neg_j are slack variables that ensure the positive similarities are larger than O_pos_i and the negative similarities are smaller than O_neg_j. This is achieved by setting the minimum value to 0, ignoring similarities that have already met the margin requirement.
* sum_pos and sum_neg are exponentiated and scaled sums of the positive and negative similarities, respectively. The exponential function ensures that all values are positive and emphasize larger values, while γ is a scaling factor that controls the rate at which the emphasis increases. The slack variables α_pos_i and α_neg_j are used in the exponent to give more weight to pairs far from meeting the margin requirement.
* Finally, L is the Circle Loss computed as the logarithm of 1 plus the product of sum_pos and sum_neg. This encourages both sum_pos and sum_neg to be small, which in turn encourages the positive similarities to be large and the negative similarities to be small. Adding one inside the logarithm ensures that the argument is always positive, and the logarithm itself helps dampen the effect of large values and reduce the effect of outliers.
The circle loss has a more definite convergence target than the triplet loss because there is a single point in the (s_neg, s_pos) space toward which the optimization is driven (O_neg, O_pos). However, choosing (O_neg, O_pos) is arbitrary. In practice, it is common to use cross-validation or a separate validation set to tune these hyperparameters.
A common strategy is to start with small margins and gradually increase them. If the margins are too large, the model struggles to learn; if they are too small, it learns embeddings that are not discriminative enough. In some implementations of Circle Loss, O_pos and O_neg are not independent but related by a constant margin m, which is set to ensure a sufficient gap between positive and negative pairs. In this case, we only need to tune one of the margins or the gap m.
§.§.§ Barlow Twins Loss
The Barlow Twins loss <cit.> is a self-supervised learning approach. The key idea is to make the outputs of a two-twin network, processing two different views of the same image, as similar as possible while reducing the redundancy between the dimensions of these representations. This encourages the network to learn features that are highly informative about the input and mutually non-redundant.
Given the batch size N and the dimensionality D of the embeddings, the network processes two different augmentations of the same input data, producing the embeddings z_a and z_b. These embeddings are then normalized to have zero mean and unit variance along the batch dimension.
The computation of the Barlow Twins loss can be formulated as follows:
1. Compute the cross-correlation matrix C:
C = 1/N z_a_norm^T · z_b_norm
2. Compute the difference matrix C_diff between the cross-correlation matrix and an identity matrix and square it:
C_diff = (C - I)^2
3. Scale the off-diagonal elements of C_diff by a factor λ:
C_diff_ij ← C_diff_ij if i = j, and C_diff_ij ←λ· C_diff_ij if i ≠ j
4. Finally, the Barlow Twins loss L is the sum of all elements in the updated C_diff:
L = ∑_i,j C_diff_ij
By backpropagating this loss and updating the model's parameters, the network is trained to minimize redundancy and increase the similarity between two different views of the same image, learning more robust and informative features as a result.
This method does not require a fixed number of classes and does not suffer from data expansion, as it does not require explicit negative examples. However, the model in the paper required a large dimensionality of the final representation for good performance, and the performance is not robust to the removal of certain input distortions (augmentations) <cit.>.
§.§.§ SimSiam Loss
SimSiam <cit.> is a self-supervised learning method aimed at learning representations by pushing two views of the same image to be as similar as possible. While it is not explicitly designed for face recognition, it can be used to learn meaningful representations of faces.
Given two different augmentations x_1 and x_2 of the same image, they are forwarded through a neural network and a prediction head to obtain the features z_1, z_2 and the predictions p_1, p_2.
The loss is defined as the negative cosine similarity between the predictions and the features of the different augmentations.
L = -1/2[(p_1^T z_2)/(||p_1||·||z_2||) + (p_2^T z_1)/(||p_2||·||z_1||)],
where ^T denotes transpose, and ||·|| denotes the L2 norm.
This loss encourages the model to make the predictions p_1 and p_2 as similar as possible to the features z_2 and z_1, respectively.
An important part of the SimSiam approach is a stop-gradient operation applied to z_2 in the first term and z_1 in the second term of the loss function. This means that the gradients are not backpropagated through these variables during training. The stop-gradient operation is critical to avoid the model collapsing into trivial solutions where the features and the predictions are the same.
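A minimal PyTorch sketch of the symmetric loss, with the stop-gradient realised via .detach() (variable names follow the notation above):

import torch.nn.functional as F

def simsiam_loss(p1, z1, p2, z2):
    def neg_cos(p, z):
        # stop-gradient on the target features z
        return -F.cosine_similarity(p, z.detach(), dim=1).mean()
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)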
The advantages of the SimSiam loss are:
* No Negative Pair: Unlike contrastive learning methods that require negative examples, SimSiam does not require any. This simplifies the model training process and can make it more efficient.
* Stop-Gradient Operation: The use of stop-gradient operation in SimSiam makes it less prone to collapsing into trivial solutions. This is a significant advantage because many self-supervised learning models struggle with this problem.
* Simplicity: SimSiam is simpler compared to other self-supervised learning methods. It uses a symmetric architecture and a simple loss function which encourages two views of the same image to have similar representations.
The disadvantages include:
* Hyperparameter Sensitivity: SimSiam has a few crucial hyperparameters, such as the learning rate and weight decay, which require careful tuning to get the best performance. Incorrect settings can significantly degrade the model's performance.
* Dependence on Data Augmentation: The success of SimSiam, like many other self-supervised learning models, heavily relies on the choice and quality of data augmentations. This requires domain knowledge and potentially significant effort to determine the most effective augmentations.
* Non-semantic Features: One common issue with self-supervised learning methods, including SimSiam, is that the features learned may not necessarily be semantically meaningful. They are good at capturing low-level features but may not be as effective in capturing high-level semantic information.
§ IMAGE GENERATION
Image generation in deep learning involves using artificial neural networks to generate new images. This task has been revolutionized by developing various models such as Variational Autoencoders (VAEs) <cit.>, Generative Adversarial Networks (GANs) <cit.>, Normalized Flow models (NFs) <cit.>, Energy-Based Models (EBMs) <cit.>, and Diffusion Models <cit.>. These models allow the generation of high-quality images that can be used in various applications such as image super-resolution <cit.>, denoising <cit.>, inpainting <cit.>, and style transfer <cit.>.
Variational Autoencoders (VAEs) are generative models that use deep learning techniques to create new data and learn latent representations of the input data. VAEs consist of two primary components: an encoder and a decoder. The encoder takes input data and compresses it into a lower-dimensional latent space, capturing the essential features of the data. This is typically a probabilistic process, producing a mean and standard deviation representing potential latent space values distribution. The decoder then takes a point from this latent space distribution and reconstructs the original data. The entire process is trained in such a way as to minimize the difference between the original and the reconstructed data, as well as to ensure the latent space approximates a standard Gaussian distribution. VAEs can generate new data by feeding the decoder points sampled from the latent space.
Generative Adversarial Networks (GANs) involve two neural networks, a Generator and a Discriminator, playing a game against each other. The Generator tries to create data similar to the training data, while the Discriminator tries to distinguish between the real and generated data. Through this process, both networks improve: the Generator learns to produce increasingly realistic data, while the Discriminator becomes better at distinguishing between real and artificial data. This adversarial process continues until an equilibrium is reached, at which point the Generator is producing realistic data and the Discriminator is, at best, randomly guessing whether the data is real or generated. This equilibrium is conceptually referred to as a Nash equilibrium in game theory <cit.>.
Normalizing Flow Models (NFs) can construct complex probability distributions by transforming a simple base distribution through a sequence of invertible and differentiable transformations, or flows. These flows warp and twist the data space, allowing the model to generate diverse outputs. The parameters of these transformations are learned from data using maximum likelihood estimation. An advantage of Normalizing Flows over other generative models is their ability to provide an exact and tractable likelihood for a given sample, enabling efficient sampling and density estimation. However, they can be computationally intensive to train due to the need to compute and backpropagate through the Jacobian of the flows.
Energy-Based Models (EBMs) learn a scalar energy function to distinguish real data points from unlikely ones, assigning lower energy values to points similar to the training data and higher values otherwise. A neural network often parameterizes this function and learns from the data. Sampling in EBMs is typically done via Markov Chain Monte Carlo (MCMC) <cit.> methods, producing samples distributed according to the learned energy function. While EBMs can represent a wide variety of data distributions, they can be challenging to train due to the intractability of the partition function and the computational expense of MCMC sampling.
Diffusion models can create new data by simulating a random process, specifically a diffusion process. The process starts with a simple data distribution (e.g., Gaussian noise) and gradually transforms it towards the target data distribution, like a particle undergoing diffusion or a random walk. This transformation is controlled by a neural network trained to model the reverse process, taking real-world data and applying a series of transformations to reduce it to noise. During generation, an initial noise sample is iteratively updated using this reverse process model, running the process backward, leading to a sample from the target data distribution. This approach allows the generation of complex and high-quality data, such as images, by simulating a smooth transition from noise to the desired data.
The following sections review the common loss functions and performance metrics used for image generation.
§.§ Image Generation Loss functions
The loss function in a VAE consists of the reconstruction loss and the KL divergence (refer to section <ref>). The reconstruction loss measures how well the decoded data matches the original input data. The KL divergence measures how much the learned distribution in the latent space deviates from a target distribution, usually a standard normal distribution. The KL divergence acts as a regularization term that keeps the distributions produced by the encoder close to a unit Gaussian, penalizing the model if the learned distributions depart from it.
The most common loss function used in GANs is the adversarial loss, which is the sum of the cross-entropy loss between the generator's predictions and the real or fake labels. More recently, WGAN <cit.> applied the Wasserstein distance as an alternative objective for training GANs, improving stability and mitigating mode collapse, which occurs when the generator stops learning the underlying data distribution and produces only a limited variety of outputs rather than a diverse range that accurately represents the true data distribution.
Normalizing Flows are typically trained using maximum likelihood estimation. Given a dataset, the aim is to maximize the log-likelihood of the data under the model by minimizing the negative log-likelihood of the data.
During training, Energy-based models (EBMs) minimize a loss function that encourages the energy function to assign lower energy values to data points from the training data and higher energy values to other points. Different types of EBMs use different loss functions, such as Contrastive Divergence (CD)<cit.>, Maximum Likelihood Estimation (MLE)<cit.>, and Noise-Contrastive Estimation (NCE) <cit.>.
Diffusion models use a denoising loss function based on the Mean Absolute Error (MAE) or the Mean Squared Error (MSE) between the original and reconstructed data.
In the next sections, we describe in detail each of these losses.
§.§.§ Reconstruction Loss
The purpose of the reconstruction loss is to ensure that the decoder network of the VAE can reconstruct an input image that is as close as possible to the original image. In essence, the reconstruction loss compares the original image to the reconstructed image and penalizes any difference between the two. The reconstruction loss is calculated as the mean squared error (MSE) between the original image and the reconstructed image, which is defined as follows
L_recon = 1/N∑_i=1^N ||x_i - x̂_i||^2,
where x_i is the original image, x̂_i is the reconstructed image, and N is the number of images.
Another metric that can be used for reconstruction loss is binary cross-entropy (BCE), a common choice for binary images. The BCE loss can be defined as:
L_BCE = - ∑_i=1^n [ y_i log(ŷ_i) + (1 - y_i)log(1 - ŷ_i) ],
where y_i is the binary value of the original image and ŷ_i is the binary value of the generated image.
In the context of deep learning, the reconstruction loss has been used in various image tasks such as restoration, generation, and synthesis.
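As a minimal illustration (not tied to any specific framework), the following Python sketch computes both reconstruction losses for a batch of images stored as NumPy arrays with pixel values in [0, 1]; the function names are hypothetical.

import numpy as np

def mse_reconstruction_loss(x, x_hat):
    # Mean squared error between original and reconstructed images
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat) ** 2))

def bce_reconstruction_loss(y, y_hat, eps=1e-7):
    # Binary cross-entropy for images with pixel values in [0, 1];
    # predictions are clipped to avoid log(0)
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1 - eps)
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))

# Toy usage: a batch of 4 random 28x28 "images" and slightly noisy reconstructions
x = np.random.rand(4, 28, 28)
x_hat = np.clip(x + 0.05 * np.random.randn(4, 28, 28), 0.0, 1.0)
print(mse_reconstruction_loss(x, x_hat), bce_reconstruction_loss(x, x_hat))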
§.§.§ Kullback-Leibler Divergence Loss
The Kullback-Leibler (KL) divergence loss, also known as relative entropy, measures the difference between the predicted probability distribution and the true probability distribution of the classes <cit.>. The KL divergence loss ensures that the predicted probabilities are close to the true probabilities, which can be useful in cases where the true probabilities are known or can be approximated.
The KL divergence loss is defined as
KL(p||q) = ∑_i=1^n p(x_i)log(p(x_i)/q(x_i)),
where p(x_i) is the true probability of class x_i and q(x_i) is the predicted probability of class x_i.
The KL divergence loss is often used in generative models such as variational autoencoders, which aim to approximate the true data distribution. It is also used in reinforcement learning to ensure that the agent's learned policy is close to the optimal policy.
One disadvantage of using the KL divergence loss is that it is sensitive to zero probabilities in the true probability distribution. This can lead to numerical instability and can make the loss function difficult to optimize. To overcome this issue, a common practice is to add a small value (e.g., 1e-7) to the true probability distribution to avoid zero probabilities.
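As a sketch (function names are hypothetical), the discrete KL divergence with the small smoothing constant mentioned above, together with the closed-form KL term commonly used in VAEs for a diagonal Gaussian against a standard normal, can be written as follows.

import numpy as np

def kl_divergence(p, q, eps=1e-7):
    # KL(p || q) for discrete distributions; eps avoids zero probabilities
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()   # renormalise after smoothing
    return float(np.sum(p * np.log(p / q)))

def kl_gaussian_vs_standard_normal(mu, logvar):
    # Closed-form KL(N(mu, diag(exp(logvar))) || N(0, I)), as used in VAEs
    mu, logvar = np.asarray(mu, dtype=float), np.asarray(logvar, dtype=float)
    return float(-0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar)))

print(kl_divergence([0.7, 0.2, 0.1], [0.5, 0.3, 0.2]))
print(kl_gaussian_vs_standard_normal([0.1, -0.2], [0.0, 0.3]))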
§.§.§ Adversarial Loss
The adversarial loss is the main function used in Generative Adversarial Networks (GANs). It is based on the idea of a minimax game between the two neural networks. The adversarial loss is used to train the generator, defined as the negative log-likelihood of the discriminator's output.
Goodfellow et al. first introduced the adversarial loss in <cit.>. The authors showed that this loss function leads to the convergence of the generator to a Nash equilibrium, where the generator produces realistic images that the discriminator cannot distinguish from real images.
The adversarial loss can be defined as:
L_adv(G,D) = - 𝔼_x ∼ p_data(x)[log D(x)] - 𝔼_z ∼ p_z(z)[log (1 - D(G(z)))],
where G is the generator network, D is the discriminator network, x are real images from the data distribution p_data(x), z are random noise inputs from the noise distribution p_z(z), and G(z) are generated images.
The adversarial loss encourages the generator to generate indistinguishable images from real images and the discriminator to correctly classify real images and generated images. The training process alternates between updating the generator and the discriminator until the generator produces realistic images.
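A minimal sketch of these two objectives is given below, assuming the discriminator outputs probabilities in (0, 1). The generator loss shown is the commonly used non-saturating variant (-log D(G(z))) rather than the exact minimax form of the equation above, and the function names are hypothetical.

import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-7):
    # Binary cross-entropy: real samples should be classified as 1, fakes as 0
    d_real = np.clip(np.asarray(d_real, dtype=float), eps, 1 - eps)
    d_fake = np.clip(np.asarray(d_fake, dtype=float), eps, 1 - eps)
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake, eps=1e-7):
    # Non-saturating generator objective: maximise log D(G(z))
    d_fake = np.clip(np.asarray(d_fake, dtype=float), eps, 1 - eps)
    return float(-np.mean(np.log(d_fake)))

print(discriminator_loss([0.9, 0.8], [0.2, 0.3]), generator_loss([0.2, 0.3]))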
§.§.§ Wasserstein loss
The Wasserstein loss <cit.> is used in generative models, such as Generative Adversarial Networks (GANs), to measure the difference between the real and generated distributions. The Wasserstein loss is the minimum work required to transform one probability distribution into another. The amount of work is the sum of the distances between each sample multiplied by their probability mass. The Wasserstein loss can be written as
W(p_data, p_gen) = inf_γ∈Γ(p_data,p_gen)∫_x,y ||x-y|| dγ(x,y)
where p_data and p_gen represent the real and generated distributions, respectively. Γ(p_data,p_gen) is the set of joint distributions with marginals p_data and p_gen, and γ(x,y) is the joint distribution of (x,y).
The Wasserstein loss has been widely used in GANs for its ability to provide a more stable training process and to avoid the mode collapse problem, where the generator produces a limited number of similar outputs.
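A sketch of the corresponding WGAN training objectives on critic outputs (unbounded real-valued scores) is shown below; note that in practice a Lipschitz constraint (weight clipping or a gradient penalty) is also required, which is omitted here, and the function names are hypothetical.

import numpy as np

def critic_loss(c_real, c_fake):
    # Critic minimises -(E[C(x_real)] - E[C(x_fake)]),
    # i.e. it maximises the estimated Wasserstein gap
    return float(-(np.mean(c_real) - np.mean(c_fake)))

def wgan_generator_loss(c_fake):
    # Generator tries to increase the critic score of its samples
    return float(-np.mean(c_fake))

print(critic_loss(np.array([1.2, 0.8]), np.array([-0.5, 0.1])),
      wgan_generator_loss(np.array([-0.5, 0.1])))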
§.§.§ Negative Log-likelihood in Normalizing Flows
In Normalizing Flows, the objective is to learn a transformation (or a sequence of transformations) that maps a complex data distribution to a simpler distribution, for example, a standard Gaussian. The transformations are chosen such that their Jacobian determinant is easy to compute, which allows using the change of variables formula to compute the likelihood of the data under the model.
Given a data point x, let's denote z = f_θ(x) as the mapping of x to the base distribution under the flows parameterized by θ, and p_z(z) as the density of the base distribution. Then, the log-likelihood of x under the model is given by
log p_θ(x) = log p_z(f_θ(x)) + log| det ∂ f_θ(x)/∂ x|
The second term is the log absolute determinant of the Jacobian of the transformation, which corrects for the change in volume due to the transformation.
The loss function that we minimize during training is the negative log-likelihood of the data. If our dataset consists of N points x_1, x_2, …, x_N, then the loss function L(θ) is expressed as
L(θ) = -1/N∑_i=1^Nlog p_θ(x_i) = -1/N∑_i=1^N[ log p_z(f_θ(x_i)) + log| det ∂ f_θ(x_i)/∂ x_i| ]
In practice, stochastic gradient descent or a variant minimizes this loss function and learns the parameters θ of the flows.
Normalizing Flows are computationally intensive to train, due to the need to compute and backpropagate through the Jacobian of the flows. For this reason, methods such as RealNVP <cit.> and Glow <cit.> are designed so that the determinant of the Jacobian can be computed efficiently.
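To make the objective concrete, the sketch below evaluates the negative log-likelihood for a one-dimensional affine flow z = f_θ(x) = (x - b)·exp(-a) with a standard normal base distribution; for this transformation log|∂f/∂x| = -a. The data and parameter values are purely illustrative.

import numpy as np

def affine_flow_nll(x, a, b):
    # z = (x - b) * exp(-a); log|df/dx| = -a
    z = (x - b) * np.exp(-a)
    log_base = -0.5 * (z ** 2 + np.log(2.0 * np.pi))   # log N(z; 0, 1)
    log_det = -a
    return float(-np.mean(log_base + log_det))

x = 2.0 * np.random.randn(10000) + 3.0          # data with mean 3 and std 2
# the NLL should be lower near the "correct" parameters a = log(2), b = 3
print(affine_flow_nll(x, a=np.log(2.0), b=3.0), affine_flow_nll(x, a=0.0, b=0.0))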
§.§.§ Contrastive Divergence
Contrastive Divergence (CD) can be used in training Energy-Based Models (EBMs), specifically for estimating the log-likelihood gradient. As with Restricted Boltzmann Machines (RBMs), the objective is to maximize the likelihood of the data under the model; the difference is that in an RBM the energy is defined over the joint configuration of visible and hidden units, whereas a general EBM may define the energy directly over the observed data.
In many energy-based models, the log-likelihood gradient involves an expectation over all possible data configurations, which is generally intractable to compute exactly. CD approximates this expectation by running a Markov chain for several steps.
Given a dataset consisting of N data points x_1, x_2, ..., x_N, the log-likelihood of the data under the model is:
L(θ) = 1/N∑_i=1^Nlog p(x_i; θ),
where θ represents the parameters of the model, and p(x; θ) is the probability of data point x, defined as p(x; θ) = e^-E(x; θ)/Z(θ), with E(x; θ) being the energy function and Z(θ) the partition function.
The gradient of the log-likelihood with respect to the parameters is
∂/∂θ L(θ) = 1/N∑_i=1^N∂/∂θlog p(x_i; θ)
This gradient can be decomposed into two terms: the positive (data-dependent) phase, which is easy to compute,
-1/N∑_i=1^N∂ E(x_i; θ)/∂θ
and the negative (model-dependent) phase, which involves an expectation over all model configurations and is generally intractable,
⟨∂ E(x; θ)/∂θ⟩_p,
where ⟨⟩_p denotes an expectation with respect to the model distribution.
In CD, the negative phase is approximated by running a Markov chain starting from a data point for a few steps and using the resulting sample to estimate the expectation. This leads to the CD-k algorithm, where k is the number of Gibbs sampling steps. The update to the parameters after seeing a data point x is then
Δθ = η (-1/N∑_i=1^N∂ E(x_i; θ)/∂θ + ⟨∂ E(x'; θ)/∂θ⟩_CD),
where η is the learning rate, ⟨⟩_CD denotes an expectation with respect to the distribution defined by the Markov chain after k steps, and x' is the sample obtained after k steps of Gibbs sampling starting from a data point x.
Contrastive Divergence is often more computationally efficient than other methods for training energy-based models, such as Persistent Contrastive Divergence (PCD) <cit.> or Mean-Field methods <cit.> because it requires running the Markov chain for a few steps rather than until it reaches equilibrium. However, this introduces a bias in the estimate of the gradient of the log-likelihood. CD is a suitable choice when the bias is acceptable or can be mitigated by running the chain for more steps.
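The toy sketch below illustrates a CD-style update for a one-dimensional Gaussian EBM with E(x; θ) = 0.5·θ·x²; for simplicity the negative samples are obtained with a few Langevin steps started from the data rather than Gibbs sampling, so this is only an illustration of the update rule, not a faithful reproduction of any specific published algorithm.

import numpy as np

def grad_E_x(x, theta):       # dE/dx, used by the sampler
    return theta * x

def dE_dtheta(x):             # dE/dtheta = 0.5 * x^2
    return 0.5 * x ** 2

rng = np.random.default_rng(0)
data = rng.normal(0.0, 2.0, size=2000)          # true precision theta* = 1/2^2 = 0.25
theta, eta, step, k = 1.0, 0.05, 0.1, 10

for _ in range(200):
    x_neg = data.copy()                          # chains start from the data (CD-style)
    for _ in range(k):                           # k short MCMC (Langevin) steps
        x_neg = (x_neg - step * grad_E_x(x_neg, theta)
                 + np.sqrt(2.0 * step) * rng.normal(size=x_neg.shape))
    # gradient ascent on the log-likelihood:
    # data term enters with a minus sign, model-sample term with a plus sign
    theta += eta * (-np.mean(dE_dtheta(data)) + np.mean(dE_dtheta(x_neg)))

print(theta)   # should end up roughly near 0.25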
§.§ Image Generation Metrics
Some of the common metrics used for image generation are:
* Peak Signal-to-Noise Ratio (PSNR)
* Structural Similarity Index (SSIM)
* Inception Score (IS) <cit.>
* Fréchet Inception Distance (FID) <cit.>
In the following sections, we will dive into each of these metrics.
§.§.§ Peak Signal-to-Noise Ratio (PSNR)
PSNR is a traditional metric often used to assess the quality of image and video codecs, comparing the original and the reconstructed (compressed and then decompressed) images or videos. It is a measure of the quality of an approximation of an image.
In image generation, PSNR can be used to evaluate the quality of the images generated by models, particularly for tasks like image super-resolution, denoising, inpainting, etc. Here, the PSNR is used to compare the images generated by the model to a ground truth high-quality image.
The PSNR is defined in terms of the mean squared error (MSE), which for two m × n monochrome images I and K where one of the images is considered a noisy approximation of the other is defined as
MSE = 1/mn∑_i=0^m-1∑_j=0^n-1[I(i,j) - K(i,j)]^2
The PSNR is defined as
PSNR = 10 ·log_10((MAX_I)^2/MSE),
where MAX_I is the maximum possible pixel value of the image. For instance, for an 8-bit grayscale image, this value would be 255.
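A direct implementation of this definition is straightforward; the sketch below uses NumPy and illustrative 8-bit images, and the function name is hypothetical.

import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB between two images of the same shape
    mse = np.mean((np.asarray(original, dtype=float)
                   - np.asarray(reconstructed, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

img = np.random.randint(0, 256, size=(64, 64))
noisy = np.clip(img + np.random.normal(0, 5, size=img.shape), 0, 255)
print(psnr(img, noisy))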
PSNR provides an easily interpretable score expressed on a logarithmic decibel scale. Higher PSNR generally indicates that the reconstruction is of high quality; however, perceptually similar images can still have low PSNR, because simple pixel-wise errors do not capture structural similarity well.
While PSNR could be useful in the context of tasks like super-resolution or denoising where there is a clear ground truth to compare to, it is not particularly effective for tasks like generative image modeling where the goal is to produce new, high-quality images that are not simply reconstructions of existing images <cit.>. More sophisticated metrics like the Inception Score or Fréchet Inception Distance are typically used for these tasks.
§.§.§ Structural Similarity Index (SSIM)
The SSIM <cit.> can be used in image generation tasks to compare the generated images with the target (real) images. In other words, it measures how similar the synthetic images produced by a generative model are to the actual images.
The SSIM index is based on three main factors: luminance, contrast, and structure. The SSIM value ranges from -1 to 1, where 1 indicates perfect similarity, and -1 indicates no similarity.
To calculate the SSIM index, the first step is to compute each image's mean and standard deviation, then the cross-covariance between the two images, and the product of the standard deviations. The SSIM index is then computed as
SSIM(x,y) = (2μ_xμ_y + C_1)(2σ_xy + C_2)/(μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2),
where μ_x and μ_y are the means of images x and y, σ_x and σ_y are the standard deviations of images x and y, σ_xy is the cross-covariance between the two images, and C_1 and C_2 are constants used to avoid instability.
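For illustration, the sketch below computes a simplified, global SSIM over whole images using the standard constants C_1 = (0.01 L)^2 and C_2 = (0.03 L)^2, where L is the data range; the usual implementation applies the same formula locally with a sliding (often Gaussian) window and averages the results, which is omitted here.

import numpy as np

def global_ssim(x, y, data_range=255.0):
    # Simplified SSIM using global image statistics (no sliding window)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = np.random.randint(0, 256, size=(64, 64))
print(global_ssim(img, img))   # 1.0 for identical images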
The SSIM metric is more robust to brightness and contrast changes than other popular image quality metrics, such as the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). However, using SSIM alone as an evaluation metric for generative models is insufficient because the ultimate goal of a generative model is not just to reproduce an exact copy of the input image but to understand the underlying data distribution. For this reason, SSIM is often used in conjunction with other metrics, such as Inception Score (IS) and Fréchet Inception Distance (FID), to evaluate the performance of generative models more comprehensively.
§.§.§ Inception Score (IS)
The IS quantifies the quality and diversity of generated images <cit.>. Each generated image is fed into an Inception-v3 classifier trained on the ImageNet dataset to obtain a conditional label distribution p(y|x), and the marginal distribution p(y) is obtained by averaging these softmax outputs over all generated images. Sharp (low-entropy) conditional distributions indicate recognizable, high-quality images, while a high-entropy marginal distribution indicates diversity; the score combines both aspects.
The formula for the inception score can be expressed as
IS = exp(𝔼_x ∼ p_g(x) [KL(p(y|x)||p(y))]),
where p_g(x) is the distribution of generated images, p(y|x) is the class conditional probability for each image and class, and p(y) is the marginal likelihood of each class.
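Given a matrix of class probabilities p(y|x) (in practice the Inception-v3 softmax outputs for each generated image), the score can be computed as in the sketch below; splitting the samples into several groups and averaging the scores, as is commonly done, is omitted for brevity.

import numpy as np

def inception_score(probs, eps=1e-12):
    # probs: (N, C) matrix, one row of class probabilities p(y|x) per generated image
    probs = np.asarray(probs, dtype=float)
    p_y = probs.mean(axis=0, keepdims=True)                        # marginal p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident, diverse predictions give a high score; uniform predictions give ~1
confident = np.eye(10)[np.random.randint(0, 10, size=1000)] * 0.99 + 0.001
print(inception_score(confident), inception_score(np.full((1000, 10), 0.1)))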
The Inception Score provides a single number that reflects both the quality and the diversity of generated images, and it is easy to calculate using pre-trained Inception models; it does not require the true data distribution as input.
However, the Inception Score has some limitations that make it insufficient to comprehensively understand the model performance:
* Inception Model Dependence: Relies heavily on the Inception model, which is pre-trained on the ImageNet dataset. This may limit its usefulness in domains that are very different from ImageNet, as the features learned by the Inception model might not be relevant for those domains.
* Mismatch with Human Perception: It does not always align with human perception of image quality. Images can achieve high IS by simply being diverse and recognizable, even if they don't look real.
* Lack of Discrimination Between Modes: It can be high for models that cover many modes of the true data distribution but fail to accurately capture the frequency of those modes.
* Doesn't Measure Mode Collapse: As mentioned before, a problem in training GANs is mode collapse, where the model generates a limited range of images. The Inception Score can't effectively detect mode collapse as long as the few modes that the GAN does generate are diverse enough.
* Unreliable for Low Number of Samples: When computed over a small number of samples, the Inception Score can become unreliable due to high variance.
§.§.§ Fréchet Inception Distance (FID)
Unlike the Inception Score, the FID <cit.> considers the statistics of the images generated by the model and compares them to the statistics of real images. It measures the distance between these two distributions in a feature space provided by a specific layer of the Inception network.
Mathematically, the FID is defined as follows. Let us denote the generated images as X and the real images as Y. Each dataset is passed through an Inception network, and the activations at a specific layer are extracted. The resulting activations are denoted as X' and Y' respectively. Assuming that X' and Y' are multivariate Gaussian distributions with means μ_x and μ_y, and covariance matrices Σ_x and Σ_y, respectively, the Fréchet distance between these two multivariate Gaussian distributions is defined as
FID(X', Y') = ||μ_x - μ_y||^2 + Tr(Σ_x + Σ_y - 2(Σ_xΣ_y)^1/2),
where Tr is the trace of a matrix, which is the sum of its diagonal elements, (Σ_xΣ_y)^1/2 is the matrix square root of the product of the covariance matrices, which is well-defined since the covariance matrices are positive semi-definite.
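The distance can be computed directly from two sets of feature activations, as in the sketch below (using SciPy's matrix square root); in practice the activations would come from a chosen layer of the Inception network, whereas random features are used here only to exercise the function.

import numpy as np
from scipy import linalg

def fid(act_real, act_gen):
    # act_real, act_gen: (N, D) activation matrices from the same feature extractor
    mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_g = np.cov(act_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    covmean = covmean.real              # drop tiny imaginary parts from numerics
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

act_real = np.random.randn(500, 64)
act_gen = np.random.randn(500, 64) + 0.5
print(fid(act_real, act_gen))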
FID has the following desirable properties:
* Lower value is better: The lower the FID score, the closer the generated image distribution is to the real image distribution.
* Considers both mean and covariance: FID considers both the means and the covariance of the feature representations, giving it an advantage over metrics considering only one of these aspects.
* Less susceptible to noise: FID is more robust to noise compared to the Inception Score.
Regarding the limitations, like the Inception Score, it relies on the Inception network and, therefore, may not perform well on tasks very different from the ImageNet dataset on which the Inception network was trained.
§ DISCUSSION
Throughout this paper, we have reviewed a variety of loss functions and metrics utilized in deep learning, specifically focusing on different tasks, including regression, classification, object detection, image segmentation, and face recognition. Our review underscores the importance of selecting an appropriate loss function and evaluation metric, depending on the specific task at hand and the characteristics of the data.
For regression tasks, for instance, Mean Squared Error (MSE) and Mean Absolute Error (MAE) are widely used due to their simplicity and interpretability. However, there are more robust alternatives such as the Huber loss, as well as application-specific options such as the Quantile or Poisson losses. Additionally, while RMSE and MAE are commonly used for evaluation, they may not adequately capture the performance of models on all types of data, leading to the use of additional metrics such as R^2 and Adjusted R^2.
In classification, it is noted that while binary cross-entropy loss is standard for binary classification tasks, options such as hinge loss and weighted binary cross-entropy can provide robustness in situations where classes are imbalanced. Furthermore, accuracy alone does not provide a complete picture of a model's performance, especially on imbalanced datasets. This necessitates using additional metrics like precision, recall, F1-score, and AUC-ROC.
The complexity increases when we consider object detection, image segmentation, and face recognition tasks. Here, loss functions involve more than just calculating the difference between the predicted and true values. For instance, the YOLO loss in object detection and image segmentation, or the Triplet Loss in face recognition, consider the relative positions and distances of multiple instances or entities.
Moreover, metrics used in these tasks, such as Average Precision (AP) and Average Recall (AR) for object detection or Panoptic Quality (PQ) for Panoptic segmentation, go beyond typical accuracy-based metrics to evaluate the quality of instance identification and segmentation.
As discussed, each loss function and metric has pros and cons, and their appropriateness may depend on the specific application or dataset characteristics. Understanding these trade-offs is critical for the design of effective deep learning systems.
Looking forward, there are opportunities for developing new loss functions and metrics that are more robust to data anomalies, consider specific practical constraints, or are tailored to new and emerging deep learning tasks. We also see potential in automated methods that intelligently select or combine different loss functions and metrics based on the given task and data.
By advancing our understanding of these critical components of deep learning, we can continue to push the boundaries of what these powerful models can achieve.
§ CONCLUSION
The choice of the loss function and metric can profoundly influence the performance of a model; hence understanding their nature and implications is helpful for anyone working in deep learning.
From regression to complex tasks such as object detection, face recognition, and generative models, we have highlighted the importance of utilizing appropriate loss functions and evaluation metrics. Furthermore, considering dataset characteristics, specifically class imbalances and outliers, is vital when designing and evaluating models. There is no one-size-fits-all loss function or metric for every task. This highlights the continued necessity for researchers to develop task-specific or adaptable loss functions and metrics and further refine the performance and applicability of deep learning models.
A promising direction for future work is the exploration of automated methods to intelligently select or combine loss functions and metrics, thereby reducing the manual effort and potential bias involved in their selection. In addition, creating robust loss functions that can handle data anomalies and practical constraints effectively is a fertile area for exploration.
The ability to accurately model and predict complex phenomena is increasingly critical in our rapidly evolving digital world. By enhancing our comprehension and application of loss functions and metrics in deep learning, we can significantly contribute to advancing this technology, paving the way for future more sophisticated, effective, and reliable models.
§ ACKNOWLEDGMENTS
We thank the National Council for Science and Technology (CONACYT) for its support through the National Research System (SNI) and the project SIP 20232290.
§ DECLARATION OF GENERATIVE AI AND AI-ASSISTED TECHNOLOGIES IN THE WRITING PROCESS
During the preparation of this work, the authors used GPT-4 for help with wording, formatting, and styling throughout this work. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the publication's content.
entry_id: http://arxiv.org/abs/2307.01956v1
published: 20230704232707
title: Instantaneous Wireless Robotic Node Localization Using Collaborative Direction of Arrival
authors: Ehsan Latif, Ramviyas Parasuraman
primary_category: cs.RO
categories: cs.RO, cs.NI
Instantaneous Wireless Robotic Node Localization Using Collaborative Direction of Arrival
Ehsan Latif and Ramviyas Parasuraman^*
School of Computing, University of Georgia, Athens, GA 30602, USA.
^* Corresponding Author Email: ramviyas@uga.edu.
August 1, 2023
Localizing mobile robotic nodes in indoor and GPS-denied environments is a complex problem, particularly in dynamic, unstructured scenarios where traditional cameras and LIDAR-based sensing and localization modalities may fail. Alternatively, wireless signal-based localization has been extensively studied in the literature yet primarily focuses on fingerprinting and feature-matching paradigms, requiring dedicated environment-specific offline data collection. We propose an online robot localization algorithm enabled by collaborative wireless sensor nodes to remedy these limitations. Our approach's core novelty lies in obtaining the Collaborative Direction of Arrival (CDOA) of wireless signals by exploiting the geometric features and collaboration between wireless nodes. The CDOA is combined with the Expectation Maximization (EM) and Particle Filter (PF) algorithms to calculate the Gaussian probability of the node's location with high efficiency and accuracy. The algorithm relies on RSSI-only data, making it ubiquitous to resource-constrained devices. We theoretically analyze the approach and extensively validate the proposed method's consistency, accuracy, and computational efficiency in simulations, real-world public datasets, as well as real robot demonstrations. The results validate the method's real-time computational capability and demonstrate considerably-high centimeter-level localization accuracy, outperforming relevant state-of-the-art localization approaches.
Localization, Robots, Wireless Sensor Networks, Collaboration, Expectation Maximization, Particle Filter
§ INTRODUCTION
A set of sensors, actuators, and mobile devices are connected to form an Internet of Things (IoT) system. Location information is critical to the operation of such systems, especially for wireless and mobile robotic nodes.
Node localization has been a challenging problem, especially in indoor environments.
As such, indoor localization has emerged as one of the most critical components in robotics, automation, and wireless systems. Here, one fundamental requirement is to provide an accurate and efficient localization system in a real-time (online) manner.
Furthermore, GPS-denied or dynamically changing environments pose additional challenges for mobile robot indoor localization <cit.>.
Sensors such as cameras, LIDAR, inertial measurement units (IMU), and their fusion have been exploited for obtaining accurate indoor localization of mobile devices <cit.>. However, these technologies are expensive, non-applicable to resource-constrained devices and robots, and also suffer from various limitations, such as the requirement of proper lighting conditions in vision-based localization and structured non-dynamic surfaces for LIDAR-based perception.
On the other hand, wireless technologies such as Wi-Fi and Bluetooth are the most extensively utilized for indoor WLANs. The ubiquitous availability of Received Signal Strength Indicator (RSSI) measurements from Access Points (AP) or Wireless Sensor Nodes (WSNs) can be used for various objectives, including localization <cit.>, multi-robot control <cit.>, and communication optimization <cit.>. These advances provide opportunities to exploit the RSSI information from WSNs in aiding mobile robot localization.
Extant RSSI-based indoor positioning frameworks require an offline site survey to generate fingerprints and then match the current real-time RSSI data to this database for positioning with a supervised machine learning algorithm (e.g., <cit.>). However, fingerprinting approaches require a dedicated offline phase and generalize poorly: they can be employed only in the specific environment where the fingerprints were collected <cit.>. These limitations make them impractical for mobile robot deployments.
To address these gaps, we propose a novel algorithm for estimating the Collaborative Direction of Arrival (CDOA) from the RSSI values obtained through collaboration among WSNs.
The CDOA estimation is then integrated with two Bayesian framework variants for robust node localization: Expectation Maximization (EM) and Particle Filter (PF).
See Fig. <ref> for an overview of the proposed WSN collaboration-based wireless robotic node localization method.
The main contributions of this paper are outlined below.
* We propose a novel collaboration-aided mechanism for a mobile robot to collect RSSI data from the WSNs and estimate Wireless signal CDOA.
* We integrate the CDOA with a Bayesian framework for robot node localization. We propose two variants (Expectation Maximization and Particle Filter) to exploit the accuracy of the statistical EM method and the efficiency advantage offered by the sampling-based PF method.
* We theoretically analyze the properties of the proposed CDOA localization method in terms of localization consistency, accuracy, area coverage, scalability, and computational complexity.
* Through extensive experimental analysis in diverse setups enabled by numerical simulations, publicly available real-world datasets, and in-house robot hardware demonstrations, we evaluate the localization accuracy and efficiency of the proposed variants of CDOA-aided node localization.
* We validate our approach by comparing with relevant non-fingerprinting methods from the recent literature such as the trilateration <cit.>, weighted centroid localization <cit.>, differential RSSI <cit.>, improved RSSI-based localization <cit.>, smooth Particle Filter with Extended Kalman Filter <cit.>, and sparse Bayesian learning applied over the Direction of Arrival <cit.> approaches.
* We open source all our codes and datasets (native Python implementations for the IoT and wireless sensor network community, as well as a ROS <cit.> package for the robotics community) at <https://github.com/herolab-uga/cdoa-localization>. We believe this will enable the reproducibility and extension of our approach by the research community.
The core novelty of our approach lies in that we employ a CDOA metric obtained through cooperative communication between the WSNs and the mobile robot instead of relying on the RSSI metric directly (as used in relevant methods in the literature). Further, we integrate and extensively evaluate the CDOA metric with Bayesian frameworks for robot node localization.
Our proposed methods achieve superior accuracy, efficiency, and robust localization performance through these novelties while enabling real-time efficiency compared to several state-of-the-art solutions.
A video demonstrating the CDOA approach implemented on real robots can be found at <https://www.youtube.com/watch?v=jVg2hzouO9E>.
§ RELATED WORK
According to a recent survey on Wi-Fi-based indoor positioning, <cit.>, there are two categories of wireless localization solutions: Model-based and Survey-based. Model-based approaches include trilateration using RSSI, triangulation using DOA, and Weighted Centroid using distance. Recent works include variants thereof, such as the filtered trilateration <cit.>, differential RSSI-based least squares estimation <cit.>, and Expectation Maximization <cit.>.
While the survey-based approaches provide high accuracy based on the precise fingerprints collected from the same environment through a dedicated offline process, they also come with high computation costs of prediction algorithms like the K-Nearest Neighbors <cit.>. Accordingly, we focus on model-based solutions.
In model-based approaches, multilateration and triangulation are the fundamental methods to predict the position of a wireless device (e.g., a mobile robot) using RSSI captured from multiple anchors/APs <cit.>. However, these methods suffer from co-linearity, ambiguous positioning, non-intersecting circles, etc.
A recent survey <cit.> on model-based techniques confirms that the balance of accuracy and computing complexity is absent in the literature. Also, different variants of trilateration/triangulation can have different localization accuracy, resulting in inconsistency in its application. The weighted centroid method is less accurate for non-line-of-sight conditions, limiting its applicability. Further, some methods convert the raw RSSI measurements into distance estimates to use in multilateration algorithms, which suffer from the dependency on wireless channel parameters of the environment for RSSI to distance conversion.
A novel trilateration algorithm is put forth by Yang et al. <cit.> for RSSI-based indoor localization of a target using the geometric relationships between transmitters and receivers. The precision of localization can be considerably impacted by environmental changes, such as obstructions and multipath propagation, because trilateration is sensitive to these. This approach also requires considerable calibration and exact distance estimation, which might be difficult in dynamic situations. A Gaussian Filtered RSSI-based Indoor Localization method utilizing Bootstrap filtering is presented by Wang et al. <cit.>. This technique uses particle filtering to determine the location of a target within a WLAN. Although it is more resistant to environmental noise, this method still relies on raw RSSI measurements, which are prone to interference, multipath propagation, and signal attenuation.
Pinto et al. <cit.> use K-means clustering and Bayesian estimation to create a reliable RSSI-based indoor positioning system. This method seeks to increase the accuracy of localization by integrating unsupervised learning with probabilistic estimates. It is nonetheless susceptible to the drawbacks of RSSI measurements, such as sensitivity to environmental changes and the requirement for significant calibration. A Bayesian filtering method is also presented by Mackey et al. <cit.> for enhancing BLE beacon proximity estimation accuracy. This technique improves the performance of BLE beacons but is still vulnerable to interference from other wireless devices and may have decreased accuracy in NLOS scenarios and dynamic surroundings. Another method for estimating the path-loss exponent using Bayesian filtering is presented by Wojcicki et al. <cit.>. This method is intended to describe the propagation of signals in various contexts, increasing the precision of localization. The quality of the RSSI measurements, which the surroundings and signal attenuation can impact, is crucial to the accuracy of the path-loss exponent estimation.
Combining data from many sources and using probabilistic techniques, the proposed CDOA methods minimize the drawbacks of the aforementioned methods. While CDOA-EM uses locations as samples to fill a grid and determine robot location using Gaussian probability, CDOA-PF increases robustness in complicated situations by repeatedly updating particles representing potential positions. Both strategies overcome the shortcomings of conventional RSSI-based techniques and offer more precise and dependable localization in the presence of NLOS, multipath propagation, and environmental changes.
As an alternative to the RSSI metric, researchers have proposed the use of the Channel State Information (CSI) metric for robot localization systems <cit.>. However, most CSI-based techniques involve extensive offline fingerprinting processes to improve accuracy. Moreover, as with RSSI-based metrics, the CSI metric is limited to very few radios and cannot be exploited for ubiquitous applications. DOA (or Angle of Arrival) based methods achieve higher localization accuracy than RSSI-based solutions. For instance, in research <cit.>, the authors used a sensor node equipped with Infrared light arrays to estimate the DOA of a mobile robot, which was used to achieve indoor localization with meter-level accuracy. Cooperatively localizing target nodes using multiple reference nodes with known locations has been explored. For instance, the authors in <cit.> provided a distributed method for cooperatively estimating the DOA of an acoustic sensor network. In contrast, the authors in <cit.> used cooperative DOA from Ultra Wideband (UWB) radios to locate target nodes using many reference nodes.
UWB-based indoor localization systems, while highly promising, still face several limitations. The presence of multipath propagation and non-line-of-sight (NLOS) conditions can significantly affect the positioning accuracy <cit.>. Yang et al. <cit.> proposed a UWB-based indoor localization with fewer nodes and utilized deep neural networking to avoid the effect of non-line-of-site; however, this solution requires offline data training and sampling overhead, which makes the system restricted to the trained environment. Additionally, deploying UWB anchors can be challenging in real-world environments due to their need for precise installation <cit.>. Further, the power consumption of UWB devices and their susceptibility to interference from other wireless systems can negatively impact their performance and scalability <cit.>. Integration of UWB-based systems with other sensing modalities can be difficult, as the fusion of data from different sources may be subject to noise and uncertainty <cit.>.
A wireless sensor network with a few WSNs can overcome the limitations of UWB-based indoor localization by leveraging cooperative sensor modalities to provide robust and accurate localization in complex environments. By fusing data from diverse sensing sources, WSNs can mitigate the effects of multipath propagation, NLOS conditions, and interference, enhancing the overall system performance <cit.>.
Furthermore, commercial UWB-based localization solutions provide accuracy of up to 10 cm and connection stability at the cost of high computation power and expensive anchor nodes, which makes them impractical for swarm robots <cit.> that have limited computational power and may only possess wireless connectivity. The proposed solution provides highly scalable, computationally efficient, and accurate localization for small to large-scale multi-robot systems.
Estimating a mobile node's position using only a few reference nodes with high accuracy and efficiency is achievable in wireless sensor networks. Wang et al. <cit.> proposed sparse Bayesian learning for robust DOA estimation with only a few base station nodes. But, their implementation assumed multiple antennas at each base station, realizing an EM-based DOA estimation and eventual vehicle localization using the DOA triangulation.
Wang's proposed solution is computationally expensive and requires high-end base stations, which makes it impractical for small robots operating in indoor environments. On the contrary, in our work, we assume typical Wi-Fi sensors without access to multi-antenna data, allowing ubiquitous integration with existing wireless sensor networks/IoT systems and computationally efficient online CDOA-based indoor localization.
Therefore, we propose a CDOA estimation using IoT or wireless nodes and fuse it with Bayesian approaches for high-accuracy localization of mobile robotic nodes.
We depart from the literature in two different ways: 1) we use a collaborative mechanism between the WSNs to obtain the CDOA of wireless signals; 2) we estimate Gaussian probability on the CDOA estimates, adopting EM and PF Bayesian frameworks. The localization system can be applied independently of the robot's motion model or combined with the robot's odometry, if available, to improve accuracy. Moreover, our approach uses only a few reference nodes and works on resource-constrained robotic nodes in real time.
Our proposed approach is advantageous by reducing computational complexity without embedding external hardware and using bearing-only information (aided by the cooperative RSSI measurements). It achieves high accuracy even in the presence of signal noise. This way, our method balances the efficiency and accuracy of quick online operation without fingerprinting dependence.
While localization of static Access Points has been demonstrated using DOA <cit.>, this is the first work that uses CDOA for robotic node localization demonstrated in real-world implementations.
§ PROBLEM STATEMENT
We look at the problem of a robot node localizing itself against its surroundings.
Here, a limited (smaller) number of WSNs or IoT nodes are distributed in the environment, and the mobile robot is mounted with an AP, which nearby WSNs can sense.
The robot can operate within the sensing range of the WSNs, which is assumed to be 40 m; the robot is not restricted to lie within the boundary formed by the WSNs but is confined to their sensing range (e.g., it could be outside the convex hull of the boundary nodes).
WSNs are assumed to be static, and their exact position is known to the robot for gradient calculation in the global frame. Furthermore, the robot is restricted to moving within the sensing range of connected WSNs.
The WSNs measure the RSSI values coming from the AP and communicate this information to the robot cooperatively (assuming the measurements and shared data are reasonably time-synchronized using NTP-like protocols).
The robot uses the RSSI values to calculate the gradient and converts it into the CDOA with respect to the positions of the WSNs.
The robot R keeps track of the trajectory along with the CDOA measurements as the tuple: m_l = { x_l , y_l , CDOA_l}, where (x_l , y_l) is the location of the robot at location l.
The objective is to find the best estimate of the robot's location (x_l^*, y_l^* ), which maximizes the probability of observing the measurement tuples when the robot is at the estimated location P(x_l, y_l| m_l, m_l-1, . . . , m_l-M), where m_l is the sample for position l and M is the number of previous samples considered along the completed trajectory so far, given that we employ an arbitrary method to estimate the CDOA. Table <ref> lists the key symbols and notations used in the paper.
§.§ Notations and Definitions
We use the following notations and definitions. Every wireless node in the WSN is denoted N_i ∈ N, i=1,2,..,4, and the RSSI measured locally at node N_i at a specific time t is denoted S_Ni.
S_i accumulates all RSSI values for node i over a time window T, with the average denoted by S_avg,i. The gradient of the RSSI's signal strength is given by g⃗ =[g_x,g_y]. The centroid position of a rectangular networked infrastructure is (x_c,y_c). The RSSI values of nodes 1, 2, 3, and 4 are represented by S_1, S_2, S_3, and S_4, respectively. The distance between the wireless sensor's antennas along the x and y axes is denoted by Δ_X, and Δ_Y, respectively. CDOA of the wireless signal at a position along the path l using the RSSI gradient is given by CDOA_l. The initial and re-sampled list of particles in PF are represented by P_r and R_r, respectively, with ω as the weights associated with each sample or grid position in P_r or E_R. The state at time t is x_t. E_r represents the list of grid positions in the EM algorithm, while w_E_r(x_t) is the Gaussian Probability for each grid position in E_r at time t. The probability density function for the position l of candidate particle i in PF is P_l(q_i). M denotes the number of previous samples considered, with σ as the standard deviation of error for all previous samples and err_l^k the CDOA error for sample k. The weight of each particle is represented as w_i, with w_i^*(q_i) as the normalized weight of each particle.
§ PROPOSED CDOA APPROACH
Cooperative localization can be accomplished with a network of wireless nodes, where each node can sense the signal strength of the other node in the network.
Our approach consists of two units: 1) we propose a CDOA estimation scheme from RSSI measurements with an assumption of the geometric model of the AP/WSN distribution in the environment, which is typical in the literature; 2) our solution deploys EM and PF-based localization of a mobile robot node using the CDOA.
In principle, we need at least three WSNs that form at least two noncollinear segments between them to measure a valid gradient inside the boundary created by the WSNs (<cit.>). Having a higher number of WSNs will increase the robustness of the solution.
It is possible to extend this setup where a mesh network is available with several known and unknown wireless nodes, but we limit our scope and experiments in this paper with four WSNs deployed at the corners of a robotic node workspace boundary (see Fig. <ref>).
The proposed method is based on collaboration among the WSNs to measure the RSSI and calculate the CDOA. Alg. <ref> provides an algorithmic pseudo-code of the CDOA estimation of the mobile robot using the surrounding WSNs.
The first part of the Alg. <ref> lays out the wireless network collaboration for determining CDOA at the robot.
All these computations are performed by the moving AP-mounted robot, which runs a centralized service and receives the RSSI information from all connected nodes that sense wireless signals independently in a synchronized manner.
For collaborative measurement of RSSI on WSNs connected to an access point installed on a robot, time synchronization is essential. These nodes can synchronize their clocks within the time window using the Network Time Protocol (NTP), ensuring precise and consistent RSSI readings throughout the whole network <cit.>. Using the time window idea, sensor nodes can send and receive RSSI data at predetermined intervals, effectively controlling communication and minimizing the possibility of collisions while enabling the system to coherently process and evaluate the obtained data <cit.>.
For the algorithm to work, we need at least three spatially-distributed WSNs in the network, and the robot needs to be within the polygonal boundary of the WSNs.
The spatial variation of the RSSI can be modeled as a vector with two components, and the gradient with respect to the center of the robot can be represented as g⃗ =[g_x,g_y].
One of the primary advantages of the central finite difference method is that it provides gradient estimation based on the received signal strength from geometrically oriented wireless nodes. After an appropriate gradient estimation, a receiver node (the moving robot) can estimate the direction of arrival of the signals based on the reference positions, which is then used for position estimation with the EM and PF mechanisms.
In our current implementation, the CDOA of the mobile robot within the network is obtained from the geometric rule described in the central finite difference method <cit.>.
For a rectangular networked infrastructure with centroid position (x_c,y_c) (refer to Fig. <ref>), where the RSSI value of Node 1 is S_1 and so on, the RSSI gradient is calculated as:
g_x = S_3 - S_2/2Δ_X + S_4 - S_1/2Δ_X ;
g_y = S_2 - S_1/2Δ_Y + S_3 - S_4/2Δ_Y
Here, Δ_X is the distance between the wireless sensor's antennas along the x-axis, and Δ_Y is the distance between the wireless sensor's antennas along the y-axis.
For an arbitrary number and positioning of wireless nodes, the gradient calculation can be generalized as in <cit.>.
g_x = S_x_c+λ,y_c-δ - S_x_c-λ,y_c-δ/2Δ_X + S_x_c+λ,y_c+δ - S_x_c-λ,y_c+δ/2Δ_X
g_y = S_x_c+λ,y_c-δ - S_x_c+λ,y_c+δ/2Δ_Y + S_x_c-λ,y_c-δ - S_x_c-λ,y_c+δ/2Δ_Y
Here, λ = 0.5Δ_X, δ = 0.5Δ_Y, and S_x_c,y_c is the RSS value from a node measured at the current path location (x_c,y_c).
We then calculate the CDOA from the gradients calculated using Eq. (<ref>).
CDOA_l=arctan(g_y/g_x)
The formula provides the CDOA of the wireless signal at a position along the path l using the RSS gradient.
We can then suppress the noise of the calculated CDOA by using the exponentially weighted moving average.
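As a sketch of this computation (the RSSI values in dBm, the 4 m node spacing, and the smoothing factor below are purely illustrative assumptions, and the function names are hypothetical), the gradient, the CDOA, and the exponentially weighted moving average can be written as follows; atan2 is used instead of a plain arctangent so that the quadrant of the angle is preserved.

import math

def rssi_gradient(s1, s2, s3, s4, dx, dy):
    # RSSI gradient for four corner nodes, following the central finite difference rule
    g_x = (s3 - s2) / (2 * dx) + (s4 - s1) / (2 * dx)
    g_y = (s2 - s1) / (2 * dy) + (s3 - s4) / (2 * dy)
    return g_x, g_y

def cdoa(g_x, g_y):
    # Direction of arrival (radians) from the RSSI gradient
    return math.atan2(g_y, g_x)

def ewma(prev, new, alpha=0.3):
    # Exponentially weighted moving average used to suppress CDOA noise
    return new if prev is None else alpha * new + (1 - alpha) * prev

g_x, g_y = rssi_gradient(-52.0, -48.0, -45.0, -50.0, dx=4.0, dy=4.0)
print(math.degrees(ewma(None, cdoa(g_x, g_y))))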
We employ a Gaussian probability model on the wireless signal CDOA estimates to calculate the weights of each random particle in the PF.
Similar to the work in <cit.> that uses acoustic signals, this probabilistic model will weigh the quality of signals sensed by each node from N and ultimately produce an accurate robot location estimate through the PF.
For each particle, the absolute error is computed between the actual CDOA for all wireless sensors at a potential candidate position l of the mobile robot with coordinates (x_c, y_c) and the perceived CDOA values.
Later, we use the Gaussian probability formula (similar to <cit.>) on this error to calculate the probability of the i_th candidate location of the particle q_i = (x_i, y_i, w_i), i ∈ (1, ..n), where n is the number of samples in the PF, spread in the bounded region with resolution R_e, and w_i is the weight of each particle calculated over a set of previous path samples as:
P_l(q_i) = ∏_k=0^M-1[1/σ√(2π)e^-(err_l^k)^2/2σ^2] ,
where P_l(q_i) is the probability density function for the position l of candidate particle i in the PF, M is the number of previous samples considered, σ is the standard deviation of the error over all previous samples, and err_l^k is the CDOA error for sample k. Eq. (<ref>) provides the probability for particle q_i considering M-1 previous samples.
There is an intrinsic angular inaccuracy in each estimated CDOA. σ represents the fluctuation (deviation) of this error, which is assumed to be known because we know the accuracy of the technique used to estimate the CDOA. We use the product of the Gaussian likelihoods of the CDOA error over M - 1 prior robot positions (imitating geographically scattered samples) so that the filtered CDOA from earlier path locations can be used similarly to readings from many sensors.
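A direct sketch of this weighting (hypothetical function name; σ and the error values are illustrative, with angular errors expressed in radians) is given below.

import numpy as np

def particle_weight(cdoa_errors, sigma=0.2):
    # Unnormalised particle weight: product of Gaussian likelihoods of the
    # CDOA errors over the last M path samples
    errs = np.asarray(cdoa_errors, dtype=float)
    return float(np.prod(np.exp(-errs ** 2 / (2.0 * sigma ** 2))
                         / (sigma * np.sqrt(2.0 * np.pi))))

# A particle whose predicted CDOA matches the measurements gets a larger weight
print(particle_weight([0.05, 0.02, 0.08]), particle_weight([0.6, 0.4, 0.5]))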
Next, we discuss how the CDOA estimation is integrated with a probabilistic framework to achieve localization using the DOA information.
§.§ CDOA-PF
The CDOA probability from Eq. (<ref>) is used to calculate the weights of the particles in the PF, which is then employed in the resampling procedure in the next PF step (iteration).
w_i ∝ P_l(q_i)
The EM and PF provide initial hypotheses with a uniform sampling of probable robot locations across the environment, using a constraint around the present robot location. The Gaussian probability is determined for each particle, treated as a candidate location of the signal source. The particles are subsequently given weights that are proportional to their likelihood, and the weights w_i
are normalized as w_i^*(q_i)=w_i(q_i)/∑_i=0^n-1w_i(q_i).
This normalized weight determines the likelihood of regenerating a particle in the next iteration. The particle with the highest weight (softmax) best gauges the robot's location. This process is repeated, and the particles eventually converge on the location estimates. It is worth noting that the PF is iterated for each new estimation tuple.
Alg. <ref> depicts a pseudo-code of particle filtering to estimate the location from the CDOA efficiently. The CDOA-PF algorithm combines a Particle Filter with the Collaborative Direction of Arrival (CDOA). The algorithm incorporates transition models and CDOA computations and iteratively updates particles that reflect the robot's potential positions. The algorithm calculates the robot's position by resampling particles according to their associated probabilities, and does so until the end of the robot's trajectory.
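The sketch below illustrates one such iteration: weights are normalised, particles are resampled in proportion to them (multinomial resampling here, for simplicity), a small random perturbation stands in for the transition model, and the highest-weight particle is taken as the location estimate. The synthetic weights in the usage example merely mimic a likelihood peaked near one point; in the actual algorithm they would come from the Gaussian CDOA likelihood described above. The function name and noise level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, motion_noise=0.05):
    # One particle-filter iteration: normalise, estimate, resample, perturb
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    estimate = particles[int(np.argmax(w))]                      # highest-weight particle
    idx = rng.choice(len(particles), size=len(particles), p=w)   # multinomial resampling
    new_particles = particles[idx] + rng.normal(0.0, motion_noise, size=particles.shape)
    return new_particles, estimate

particles = rng.uniform(0.0, 10.0, size=(200, 2))
weights = np.exp(-np.linalg.norm(particles - np.array([3.0, 4.0]), axis=1) ** 2)
particles, est = pf_step(particles, weights)
print(est)     # should lie close to (3, 4) for this synthetic likelihood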
§.§ CDOA-EM
Expectation Maximization (EM) is a grid-based localization approach that uses an explicit, discrete representation for the probability of all positions in the state space. We represent the environment by a finite discrete state space (grid). The algorithm updates the probability of each state of the entire space at each iteration, using a fixed decomposition grid obtained by discretizing the state (x, y, θ). For each location x_i = [x,y,θ] in the configuration space, it determines the probability P(x_i) of the robot being in that state. Then, it chooses the state with the highest probability.
This approach resembles the EM method in <cit.>.
Alg. <ref> depicts the procedure of the EM approach that can be used to estimate the location from the CDOA efficiently. The CDOA-EM algorithm employs Expectation Maximization with the Collaborative Direction of Arrival (CDOA). It iteratively updates the robot's position using transition models and CDOA calculations, populating a grid with positions as samples. The program converts the CDOA into a Gaussian probability, determines the maximum-probability grid position, and predicts the robot's location until the end of the trajectory.
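A minimal sketch of this grid search is shown below; the 0.25 m grid over a 10 m x 10 m workspace and the toy likelihood peaked near (3, 4) are illustrative stand-ins for the Gaussian CDOA probability used by the actual algorithm.

import numpy as np

def grid_estimate(grid_positions, likelihood_fn):
    # Evaluate the likelihood on every grid cell and return the most probable one
    weights = np.array([likelihood_fn(p) for p in grid_positions])
    return grid_positions[int(np.argmax(weights))]

xs, ys = np.meshgrid(np.arange(0.0, 10.0, 0.25), np.arange(0.0, 10.0, 0.25))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
toy_likelihood = lambda p: float(np.exp(-np.sum((p - np.array([3.0, 4.0])) ** 2)))
print(grid_estimate(grid, toy_likelihood))   # approximately [3.0, 4.0]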
§.§ Summary of the system architecture
Fig. <ref> delineates the system architecture of the proposed CDOA-based indoor localization. WSN collaboration is shown in the CDOA block, where all nodes share RSSI and the gradient is calculated using Eq. (<ref>) and converted into the CDOA using Eq. (<ref>), as described in Alg. <ref>. The CDOA, together with the robot's position estimates, is used for PF resampling as discussed in Alg. <ref> and, similarly, for expectation maximization as described in Alg. <ref>. These estimates are further used for state estimation of the robot as it moves along a trajectory.
§ THEORETICAL ANALYSIS
Assumption 1 Given the locations of static wireless nodes N, their range R in the wireless sensor network, and the position of the robot node x; the following observation can be made about the CDOA estimation probability: CDOA is independent of the previous observations, i.e.,
P(CDOA_t| CDOA_t-1,N,R,x) = P( CDOA_t|N,R,x)
Assumption 2 Given X as a set of samples in the PF and R_e as the resolution of the spread, we assume that the samples spread randomly in the space with the given resolution, which is greater than the centroid of the converged samples in the PF, i.e.,
∑_i=0^n-1(X_i)/n≤ R_e
The error in the location estimation depends upon the cumulative noise percentage 𝒩% of the RSSI from each wireless sensor node in the wireless sensor network.
First, we prove the relation between the error in the CDOA estimation of a robot at a candidate position l and the cumulative noise 𝒩% in the RSSI from the WSNs.
Let η_x = η_x,1,η_x,2, ..., η_x,j, and η_y = η_1,y,η_2,y, ..., η_k,y, be the noise values of the nodes along the horizontal and vertical axes of the wireless sensor network, respectively. Based on Eq. (<ref>), the uncertainty values for g_x and g_y are ∑_i=1^j(η_x,i) and ∑_i=1^k(η_i,y), respectively. Hence, the CDOA calculated at position l inherits the cumulative percentage errors of g_x and g_y.
Next, we note that location estimation is the soft-max of weights of n samples in PF; hence the error in location estimation depends upon the cumulative percentage error of maximum weight in PF, which is further dependent upon the error of CDOA estimation using RSSI in wireless sensor network.
Moreover, as shown in <cit.>, the location estimation error variance estimated using the DOA metric can scale linearly with the DOA estimation variance and quadratically with the target distance (see Lemma <ref>).
(Convergence) The CDOA-based localization output will converge to the actual position of the robot in finite iterations. Let x_i=τ x_i-1 + ϵ, where i is the current iteration, τ∈ X and ϵ ≈ R_e (the resolution). We claim that the infinite sequence {x_i}_i=1^∞ reaches an approximate solution in a finite number of iterations.
Let x_1, x_2, ..., x_n be the converged particles in the PF solution, and let c be the centroid of these particles. The particle filter algorithm iteratively updates the particles according to the likelihood function and the prior distribution. After a finite number of iterations, the particles converge to a stable solution that approximates the true position.
We can represent the particle filter algorithm as an iterative process, where the k-th iteration is represented by the function g_k(x_k-1) = x_k. We can prove that the sequence of iterations x_k converges to a fixed point x^* of the function g(x) = lim_k →∞ g_k(x).
Now, let x be any candidate particle with a minimum difference from each of the converged particles x_1, x_2, ..., x_n. By the definition of the centroid, we know that ∑_i=1^n | x_i - c| is minimized. Therefore, | x - c| is also minimized, which means that x is closest to c.
Therefore, we can say that position estimation can be obtained with uncertainty 𝒩% using Lemma 1, and the proposed algorithm converges over time and results in accurate location estimation.
A similar proof can be derived to guarantee the convergence of the CDOA-EM localization output.
The accuracy of the location estimation depends upon the resolution spread R_e, i.e.,
min_i=0^n-1 |x-X_i| ≤ R_e
According to Theorem 1, the PF/EM estimate converges to the actual position after a finite number of iterations. Let X = X_0, X_1, …, X_n-1 be a set of n known locations and let x be an unknown location that we wish to estimate. By Assumption 2, the accuracy of the location estimation depends on the resolution spread R_e.
To prove this lemma, we will show that for any x, the distance between x and its closest known location X_i is less than or equal to R_e.
First, we note that for any i ∈ {0, 1, …, n-1}, the distance between x and X_i is given by |x - X_i|. Therefore, the closest known location to x is X_j with j = argmin_i=0^n-1 |x - X_i|.
We want to show that |x - X_j| ≤ R_e. By Assumption 2, min_i=0^n-1 |x - X_i| ≤ R_e, and by the definition of X_j we have |x - X_j| ≤ |x - X_i| for every i ≠ j. It follows that |x - X_j| ≤ R_e, which proves the lemma.
(Coverage) With a minimum of 4 WSNs or IoT nodes in the network available for collaboration with a sensing range of r each, the CDOA localization method's maximum coverage area is r^2/2, as long as the robot node to be localized is within the boundary of the collaborating wireless nodes.
Assume that the four WSNs or IoT nodes A, B, C, and D are situated at the four corners of a square region (see Fig. <ref> and <ref>). Assume that each side of the square is s in length.
The maximum distance between the robot i and any of these nodes must be less than or equal to the diagonal of the square, which is √2 s, because A, B, C and D are located at the corners of the square.
As a result, the sensing range r must be at least √2 s to connect the robot node with the other supporting nodes in the network (i.e., r ≥ √2 s, meaning s ≤ r/√2).
The square region, which forms the outer boundary workspace of the robot node, has an area of s^2.
Therefore, the region that the CDOA approach can reliably cover for localization with nodes of sensing range r has area s^2 ≤ r^2/2, i.e., the maximum coverage area is r^2/2, as long as the robot is inside the boundary formed by the four cooperating nodes.
Moreover, expressing the coverage area A(r) = r^2/2 as a function of r, its derivative is
∂ A(r)/∂ r = ∂ (r^2/2)/∂ r = r ,
which is positive for all r > 0, so the coverage area grows monotonically with the sensing range; the linear dependence of the coverage area on the number of nodes is established in the results below.
Remark 1 A minimum of three nodes (instead of four) can be sufficient as per the one-way finite-difference equation for node collaboration to estimate the DOA (see <cit.>), as long as the nodes encompass the convex hull of the area boundary.
Remark 2 If more nodes are available to collaborate than the minimum number of nodes, this allows exploiting redundancy in the CDOA estimation, providing robustness and accuracy advantages.
(Coverage Generalization) The minimum area coverage mentioned in Lemma <ref> can be generalized to a rectangular region with an area of r^2 k/(k^2 + 1), where k is the aspect ratio between the width and the length of the new rectangular area, and r is the sensing range. The maximum coverage is achieved when the length is equal to the width (square region).
Trivial. Consider a rectangle of length l, width w, and aspect ratio k such that w = kl. The sensing range r must be at least the diagonal of the rectangle, i.e., r ≥ √(l^2 + w^2) = l√(1 + k^2), so r^2 ≥ l^2(1 + k^2). Therefore, the coverage area is lw = k l^2 ≤ r^2 k/(1 + k^2).
Moreover, it is easy to see that this coverage is maximized when the width and length are equal, i.e., when k = 1, which corresponds to a square region.
Remark 3 The CDOA estimation and coverage area can be generalized to an arbitrary polygonal shape of the boundary of the localization workspace as long as the convex hull of the boundary nodes can be defined. For instance, the CDOA can be estimated using gradient estimation algorithms as proposed in <cit.> when the WSNs are distributed in the workspace without a specific geometric pattern.
(Number of Collaborating Nodes) A minimum of 2n + 2 nodes is required to cover an area of nr^2/2 with wireless nodes of sensing range r.
Let A be the unit area of the square that can be covered by 4 WSNs (using the result of Lemma <ref>). To extend this coverage by a scaling factor of n, we need 2n additional nodes, which amounts to replicating the square region n times as shown in Fig. <ref>. With 2n + 2 nodes, the maximum area of coverage then becomes nA = nr^2/2. For example, for a unit area A and n = 1, we need four nodes (nodes A-D); to double the coverage area, we need a minimum of 6 nodes (nodes A-F). Therefore, the number of nodes required for CDOA scales linearly with the coverage requirement.
(CDOA Scalability) Combining the results of Lemmas <ref> and <ref>, the approximate linear relationship between the number of nodes n and the coverage area A for localization using the CDOA approach can be expressed as A ≈ cn + d, where c and d are constants.
We will prove this theorem using the mathematical induction principle.
For the base case, consider Eq. (<ref>) with four wireless nodes (n = 4): the proposed approach (Alg. <ref>) finds a position estimate in the bounded region, and the coverage area is A_4 = 4c + d, where c and d are constants, which satisfies the theorem (see Lemma <ref>).
Next, we will assume that the theorem holds for some arbitrary value of n=k where k>4. In other words, we assume that A_k ≈ ck + d.
Now, we need to prove that the theorem also holds for n=k+1. The coverage area for n=k+1 nodes is given by A_k+1 = A_k + Δ A, where Δ A is the increase in coverage area due to the addition of one more node.
Since the CDOA approach is based on the DOA of RSSI, it can be assumed that the increase in coverage area due to adding one more node is approximately proportional to the existing coverage area. This means that Δ A ≈α A_k, where α is a constant (see Lemma <ref>).
Substituting the value of A_k from the induction hypothesis, we get Δ A ≈α (ck+d). This means that the total coverage area for n=k+1 nodes is given by:
A_k+1 ≈ ck + d + α(ck + d) = (1 + α)(ck + d) ,
which is again a linear function of the number of nodes; hence the coverage area retains the form A_n ≈ cn + d (with suitably rescaled constants c and d).
Thus, we have proved that the theorem holds for n=k+1. Since we have established the base case and have shown that the theorem holds for all n=k+1 if it holds for some arbitrary value of n=k, we can conclude that the theorem holds for all n ≥ 4.
Computational Complexity
Let n be the number of samples (or particles) in the EM (or PF), and N be the number of cooperative nodes in the wireless sensor network.
At each step, the robot needs to find pose estimation based on our proposed WSN cooperative localization algorithm, which involves the following steps:
* EM and PF initialization with random particles: O(n).
* Weight transfer to new PF for each sample in O(n).
* Robot-WSN collaboration in an open time window of α for sharing and receiving wireless signals from N nodes in O(α N) time.
* Sample weight calculation based on CDOA in O(n).
* Finding the soft max from particle weight distribution as the best pose estimate in O(n/2).
Therefore, one iteration costs O(n + α N); since α and N are constants with a low impact on the overall computational complexity, the overall time consumed by the proposed algorithm is O(n) per iteration.
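For concreteness, the Python sketch below illustrates one such O(n) iteration; the Gaussian CDOA likelihood, the bearing-based expected-CDOA model, and the best-pose selection are illustrative assumptions rather than the exact implementation used in Alg. <ref>.

import numpy as np

def cdoa_pf_iteration(particles, weights, nodes, measured_cdoa, sigma=0.1):
    # particles: (n, 2) candidate positions; weights: (n,)
    # nodes: (N, 2) collaborating node positions; measured_cdoa: (N,) CDOA from RSSI
    for i, p in enumerate(particles):                               # O(n) weight update
        expected = np.arctan2(nodes[:, 1] - p[1], nodes[:, 0] - p[0])
        resid = np.angle(np.exp(1j * (expected - measured_cdoa)))   # wrap to [-pi, pi]
        weights[i] *= np.exp(-0.5 * np.sum(resid ** 2) / sigma ** 2)
    weights = weights / (weights.sum() + 1e-300)                    # normalise, O(n)
    estimate = particles[np.argmax(weights)]                        # best pose from the weight distribution
    return estimate, weights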
§ EXPERIMENTAL VALIDATION
We implemented our approach and compared the performance with relevant recent methods from the literature (see Sec. <ref>).
We performed extensive experiments through simulations (Sec. <ref>), real-world datasets (Sec. <ref>), and real robot in-house experiments (Sec. <ref>) to verify and validate the performance of the proposed localization in terms of accuracy measured through the Root Mean Squared Error (RMSE) and efficiency measured through the Time Per Iteration (TPI) metrics.
In each experiment, we made 100 trials and averaged the localization error over all trials as the distance between the predicted and the actual positions (ground truth).
The experimental settings summarized in Table <ref> cover the diverse conditions under which we evaluate the proposed method.
§.§ Comparison with the State-of-the-Art (SOTA)
To validate the results of our proposed approach, we implemented six model-based solutions from the recent literature: 1) Trilateration <cit.>, 2) WCL: Weighted centroid localization <cit.>, 3) D-RSSI: differential RSSI-based localization <cit.>, 4) I-RSSI: Improved RSSI-based localization <cit.>, 5) PF-EKF: Particle Filter - Extended Kalman Filter <cit.>, and 6) SBL-DOA: Sparse Bayesian Learning applied over the Direction of Arrival information <cit.>.
Our CDOA-PF approach applies the Gaussian probability over the CDOA for a fixed number of sampled particles, whereas in our CDOA-EM the solution is searched exhaustively over every grid cell of the workspace at a fixed resolution (the EM implementation is similar to the method in <cit.>).
More details on each of these methods are included in Appendix <ref>.
§.§ Numerical Analysis with Simulations
We simulated four WSNs distributed on the corners of the simulation workspace, with the robot's initial position being their center (Fig. <ref>). We simulated three different trajectories for robot motion: Boundary (left), Cross Coverage (center), Diagonal (right), and the scale of the workspace is 6m x 6m, as shown in Fig. <ref>.
Through these paths, we cover all potential positions within the bounded region.
We measure the RSS value of each WSN for all positions along the robot's path. The estimated RSS based on the log-normal radio signal fading model is computed as:
RSSI = A - 10×η×log_10(d) ,
where η denotes the path loss exponent, which varies between 2 (free space) and 6 (complex indoor environment), d denotes the distance from Robot R to the node N, and A denotes received signal strength at a reference distance of one meter.
We used this setup to perform experiments and validate the accuracy of localization techniques for different noise conditions on the measurements simulated through a zero-mean Gaussian noise varying from 1 to 4 dBm variance.
The path loss exponent η is set to 3 in our simulations to present a reasonable indoor environmental channel in our simulations. The simulation effectively mimics noise in RSSI by incorporating factors like distance, signal frequency, and environmental conditions. Moreover, with varied noise levels, it aptly represents diverse real-world scenarios, substantiating its representativeness in our experiments.
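As an illustration, Eq. (<ref>) and the noise model described above can be simulated in a few lines of Python; the reference power A = -40 dBm at one meter is an assumed value, while η = 3 and the 1-4 dBm noise levels follow the text.

import numpy as np

def simulated_rssi(d, A=-40.0, eta=3.0, noise_std=2.0, rng=None):
    # RSSI = A - 10 * eta * log10(d) + zero-mean Gaussian noise (log-normal shadowing)
    rng = np.random.default_rng(0) if rng is None else rng
    return A - 10.0 * eta * np.log10(d) + rng.normal(0.0, noise_std, np.shape(d))

# Example: RSSI seen by one corner node from a robot 4.2 m away, at the 2 dBm noise level
print(simulated_rssi(4.2))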
§.§ Real-world Datasets
We used two different publicly available real-world RSSI datasets on indoor localization.
See Fig. <ref> for the illustration of the workspace of the datasets along with the positions of the WSNs and locations where the RSSI samples are measured.
Dataset 1[<https://github.com/pspachos/RSSI-Dataset-for-Indoor-Localization-Fingerprinting>] provides RSSI values for three wireless technologies (BLE, Zigbee, and Wi-Fi) in a 4 m x 4 m region. We used the data for Scenario 1 of this dataset as it matches our assumption on the geometric positioning of anchor nodes. In this scenario, a room of 6.0 x 5.5 m was used as the experimental testbed.
All transmitting devices were removed from the surroundings to establish a transparent testing medium where all devices could communicate without interference.
The transmitters were spaced 4 meters apart in the shape of a triangle.
The fingerprint and test points were collected at 0.5 m spacing in the region between the transmitters.
This results in a database of 49 fingerprints. Ten test points were chosen at random for testing.
We have arranged the fingerprinting dataset in such a way that it makes a trajectory in the region.
Dataset 2[<https://ieee-dataport.org/documents/multi-channel-ble-rssi-measurements-indoor-localization>] Provides RSSI values for the three regions of varying ranges in the bounded area: diagonal, boundary, and inside in a 10m x 10m room. Four anchors took RSSI measurements while receiving messages from a single mobile node, delivering advertisement and extended advertisement messages in all BLE channels (both primary and secondary advertisement channels). Four anchors were placed in the corners of a 10 x 10 m office area (no considerable impediments).
We have compared the results for different communication channels under different regions in the bounded area.
§.§ Real Robot Hardware Experiments
To validate the practicality of the algorithm for real-world scenarios, we performed hardware experiments on a testbed of dimensions 2.34 m × 1.75 m, with a ceiling-mounted camera for visual localization as ground truth; we mounted a wireless access point with 20 dBm transmit power at 2.4 GHz on top of a Turtlebot3 mobile robot.
Fig. <ref> shows the experimentation test-bed and sample trajectories with the Turtlebot3 robot at the center of the WSNs on the four corners.
In each trial, we drove the robot remotely and recorded the RSSI and the ground-truth position with overhead camera-based fiducial marker (AprilTag <cit.>) tracking. Robot Operating System (ROS <cit.>) has been employed for inter-node communication, as ROS is the de-facto software framework used in the robotics literature. The ROS master runs a service on the experimental robot to receive the perceived signal strength from all connected nodes through a synchronization service.
The WSN nodes operate at a 10Hz rate to calculate the RSSI and publish these values to the ROS topics.
Five trials have been conducted for each of the three different trajectories.
§ RESULTS
Table <ref> comprehensively presents the overall DOA-only localization performance, time complexity, and efficiency results of the proposed approach compared to the SOTA approaches in simulation, real-world datasets, and real robot hardware experiments.
For time complexity, the variables n, S, and r represent the number of particles in PF, the size of the grid in EM, and the resolution of grids, respectively.
Looking at the TPI (efficiency) metric, the model-based approaches such as Trilateration and WCL are the fastest because of the low computational requirement they need for every new sample. However, the CDOA-PF is comparable to the model-based methods in terms of real-time computational tractability, allowing the possibility for instantaneous localization.
In general, the CDOA methods performed significantly better than the baselines in terms of higher accuracy and reasonable efficiency.
More details on the results are discussed below.
§.§ Simulation Results
Fig. <ref> summarizes the results of the simulation experiments.
The proposed CDOA approaches outperformed all SOTA methods and have been shown to localize a mobile device using an existing WSN or AP infrastructure with up to 8 cm accuracy achieved in our simulation environment of 6x6 m, even in high signal noise of 4dBm. The CDOA-EM method provided the best accuracy among all methods, while the I-RSSI approach had high localization accuracy among the SOTA algorithms. Furthermore, in comparison to efficiency, CDOA-PF is 40% more efficient than CDOA-EM in simulations, as expected.
SBL-DOA demonstrates localization accuracy comparable to the proposed approach but is 10% less efficient than CDOA-PF.
It can be seen that the weighted centroid has the least accuracy of 84%, and the proposed approach has the highest accuracy of 92% compared to the ground truth location.
Also, as expected, the inside and diagonal trajectories provide better accuracies than the boundary cases for all the compared methods. The RSSI- and DOA-based location estimates can be ambiguous in the boundary regions, resulting in higher localization errors.
The PF-EKF approach used the same number of particles and applied an EKF to predict and update the robot's state while using raw RSSI values (which are inconsistent and unreliable <cit.>) as observations, resulting in 30% lower position estimation accuracy than our approach.
In addition, the EKF update and prediction step in PF-EKF added overhead in the computation and made it complex, while the proposed CDOA-PF has no such computationally expensive operation; hence time per iteration of CDOA-PF is 60% less than that of PF-EKF. Overall, the proposed CDOA-PF achieved a balance of high localization accuracy and efficiency over all other SOTA algorithms.
§.§.§ Ablation Analysis
In the proposed approach, certain factors, such as the number of particles and noise level in RSSI measurements, can impact localization accuracy.
We analyze the localization accuracy (RMSE) for different simulated noise levels in the measured RSSI values.
It can be seen in Fig. <ref> that the proposed approaches have the lowest RMSE (highest accuracy) among all techniques, even under high noise levels. The accuracy improvement is more pronounced when the noise level is increased. The trilateration approach performed better than the Weighted Centroid method. However, both have 3x lower accuracy than the proposed EM and PF-based methods for most experiments. CDOA-EM performed slightly better than CDOA-PF in terms of accuracy and robustness. However, the CDOA-EM approach is computationally complex as it calculates the Gaussian probability over all the grid cells at full resolution instead of over the sparsely and randomly distributed particles used in the CDOA-PF.
§.§ Real-world Public Datasets
In general, our method outperformed other methods in both datasets, as can be seen in Table <ref>. Further analysis of individual technologies and channels is provided below.
Dataset 1: Wireless technology comparison
Dataset 1 captures RSSI observations from three technologies (Wi-Fi, BLE, and Zigbee). The findings from these experiments (Fig. <ref>) delineate the suitability of Wi-Fi as a communication channel for the proposed indoor localization approach, as it yields the lowest RMSE among the technologies, with 1.07 m RMSE in a 4 m x 4 m bounded region. As expected, there is no significant difference in computational complexity among the technologies, because the methods take the same time to run irrespective of where the signals come from. However, CDOA-EM requires a higher time per iteration for position prediction than any other method.
The proposed CDOA approach also has 0.4 m better accuracy than the KNN method presented in <cit.> for Scenario 1 of Dataset 1. SBL-DOA used a similar DOA estimation technique and achieved relatively higher localization accuracy than the other SOTA approaches, but slightly lower than the proposed CDOA-EM/PF.
The CDOA-PF provided higher accuracy than the CDOA-EM, contrary to the expectation that EM provides better accuracy than PF. We believe this is because the softmax-based convergence of a few particles in the PF after several iterations yields the closest position estimate, rather than the larger number of grid cells retained by the EM. This is especially the case for positions where the bounding centroid of the grid cells is very close to the actual location, which yields a lower RMSE for CDOA-PF over the longer trajectories found in this dataset.
Dataset 2: Regional and Channel-wise comparison
Dataset 2 has RSSI captured in three regions (inside, diagonal, and boundary) in a bounded size 10 x 10 m. All the localization methods work well for the positions inside the AP/WSN perimeter, compared to diagonal or boundary points, similar to the results found in the simulations. However, the robot node would generally be inside the infrastructure boundaries, exploiting the full advantages of the proposed CDOA-based localization methods. Accordingly, the CDOA methods provided the best accuracy compared to other methods.
Dataset 2 also has RSSI observations for 40 different channels under three regions in the bounded dimension 10 x 10 m.
We show the results for the combined RMSE for channels 0-39 in Fig. <ref>. It can be seen from Table <ref> that the proposed approach consistently provided the best performance in most scenarios with reasonable computational efficiency. We also analyzed the individual channel performance and found that the comparative results did not differ significantly from the averaged channel estimates. However, some channels had higher noise and, therefore, poorer performance for all the compared methods. The SBL-DOA approach performed more or less the same as the proposed approach across channels because it relies on Bayesian learning to mitigate the effect of channel variations, but at a high computational cost, which makes it less practical in the robotics domain.
§.§ Real Robot Hardware Experiments
Fig. <ref> provides the averaged results of the real robot experiments in different regions.
The results show a high average localization accuracy of 95%, with an absolute RMSE of 0.12 m (in a bounded region of 4 m^2) for the CDOA methods in our hardware experiments.
A sample of the hardware experiments and a demonstration can be viewed at <https://www.youtube.com/watch?v=jVg2hzouO9E>. Compared to the other SOTA methods, SBL-DOA performed better, showing 90% localization accuracy, but is still 5% less accurate than the proposed CDOA method.
Table <ref> also provides comparative numerical values of localization error and time per iteration for the other SOTA approaches, which validate the significant performance improvement achieved by the proposed CDOA methods in all trajectories. CDOA-EM achieves relatively better localization than the other approaches but with a high computational cost. CDOA-PF achieves more than 93% localization accuracy, outperforming the other model-based approaches and confirming the simulation experiments. Interestingly, the CDOA-PF is 35% more accurate than CDOA-EM, similar to the observation in Dataset 1.
Furthermore, the hardware experiments also validated the practicality of the proposed CDOA approach by providing evidence of high localization accuracy, similar to simulation and dataset results.
Finally, we have made a hardware experiments dataset and the software source codes of the implementations available for the research community to reproduce the results and build on the work to improve DOA-based localization approaches further.
The proposed CDOA-PF and CDOA-EM approaches, given their impressive localization accuracy in both simulated and real-world environments, should operate effectively even in non-line-of-sight (NLOS) scenarios. These methods, especially the CDOA-PF, combine robustness to variations in the signal environment with computational efficiency, making them likely to handle NLOS conditions where signals may be blocked or distorted. Moreover, as these methods have shown superior performance in high signal noise scenarios, they should be robust to the increased noise and multipath effects commonly found in NLOS situations.
Limitations: Similar to any wireless node collaboration-based approach <cit.>, the proposed approach only works for more than three fixed nodes placed at geometrically-aligned positions of a regular polygon bounded region to obtain accurate CDOA. While it is robust for most scenarios, it depends on the quality of the RSSI and the obstacles or non-line of sight conditions, which need to be studied further.
§ CONCLUSION
We proposed a novel collaborative direction of arrival (CDOA) based instantaneous localization method for robotic wireless nodes that can collaborate with other static wireless/IoT nodes in a GPS-denied environment. The proposed CDOA method efficiently incorporates wireless signals from multiple sources for position estimation. The received RSSI is used to determine the Gaussian probability of wireless signal CDOA through the geometric properties of the wireless sensor nodes. Two Bayesian approaches (Expectation Maximization and Particle Filter) use the CDOA probability density function and best fit (high weight and less probability noise) for the robot node state (position) estimation.
Extensive simulation experiments, as well as real-world datasets and demonstrations, have revealed promising results on the method's accuracy and efficiency.
Our method achieved at least a 50% improvement in localization accuracy over non-sampling-based methods such as trilateration and weighted centroid using RSSI-only data, and a 20% improvement over sampling-based techniques. Furthermore, despite CDOA being a sampling-based technique, the low computational cost of CDOA and less frequent resampling make it more efficient than other sampling- and non-sampling-based methods. The experiments proved the practicality of the approach for cooperative robot localization, achieving high accuracy with low computational costs.
§ COMPARED STATE-OF-THE-ART METHODS
Below are the specific details of the methods implemented from the state-of-the-art fingerprint-less localization techniques so as to compare with the proposed CDOA methods.
§.§ Trilateration <cit.>
Trilateration is a historical model-based technique <cit.> that uses distances to determine the receiver's location numerically. To calculate with trilateration, we need three transmitting devices to obtain a 2-D position and four to find a 3-D position. The distances between the transmitter and the receivers and the right number of transmitting devices are necessary. A frequent method for calculating the distance between devices is to use the RSSI of a signal.
For 2-D space, let the three anchor nodes N_1, N_2, N_3 be at positions (a_1,b_1), (a_2,b_2), (a_3,b_3), respectively. The unknown position (x,y) of the receiver then satisfies:
(a_1-x)^2+(b_1-y)^2=d_1^2
(a_2-x)^2+(b_2-y)^2=d_2^2
(a_3-x)^2+(b_3-y)^2=d_3^2
To minimize the positioning error, we minimize the following objective function using a non-linear least squares technique:
f(x,y)=∑ _i=1^3 [√((x-a_i)^2+(y-b_i)^2)-d_i]^2
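A compact way to minimise this objective is a non-linear least-squares solver; the Python sketch below uses SciPy, with the anchor coordinates and RSSI-derived distances given purely as illustrative values.

import numpy as np
from scipy.optimize import least_squares

def trilaterate(anchors, dists, x0=(0.0, 0.0)):
    # residuals r_i = ||p - a_i|| - d_i; least_squares minimises sum_i r_i^2
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - dists
    return least_squares(residuals, x0).x

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # illustrative anchor positions (m)
dists = np.array([2.8, 3.1, 2.9])                         # illustrative RSSI-derived distances (m)
print(trilaterate(anchors, dists))                        # estimated (x, y)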
§.§ Weighted Centroid <cit.>
The basic idea of a weighted centroid localization algorithm (e.g., <cit.>) based on RSSI is that unknown nodes gather RSSI information from the beacon nodes around them.
Assuming there are n anchor nodes in the wireless sensor network, with coordinates (x_1, y_1), (x_2, y_2),. . .,(x_n, y_n), respectively, the location of the unknown node can be obtained by using the improved centroid algorithm estimating the coordinates of n nodes as:
x = (w_1 x_1 + w_2 x_2 + w_3 x_3 + … + w_n x_n) / (w_1 + w_2 + w_3 + … + w_n)
y = (w_1 y_1 + w_2 y_2 + w_3 y_3 + … + w_n y_n) / (w_1 + w_2 + w_3 + … + w_n)
w_i = RSSI_i / (RSSI_1 + RSSI_2 + RSSI_3 + … + RSSI_n)
i ∈ {1, 2, 3, …, n}
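The estimator above amounts to a weighted average of the anchor coordinates; a minimal Python sketch follows, where the weights are taken directly from the RSSI values as in the equations (in practice the raw dBm readings are often mapped to linear power first, which is an implementation choice not specified here).

import numpy as np

def weighted_centroid(anchors, rssi):
    # anchors: (n, 2) beacon coordinates; rssi: (n,) RSSI-based weights
    w = np.asarray(rssi, dtype=float)
    w = w / w.sum()                 # w_i = RSSI_i / sum_j RSSI_j
    return w @ np.asarray(anchors)  # (x, y) = sum_i w_i * (x_i, y_i)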
§.§ Differential RSS <cit.>
The Differential RSS method in <cit.> works without knowing the transmit power beforehand. The technique has two phases, offline and online. During the offline phase, theoretical RSS values are generated for each grid point using the representative (measured) RSS model. During the online phase, the measured DRSS values are compared to the theoretical ones for each grid point. The estimated location (X, Y) is the grid point whose theoretical DRSS values are closest (in the least-squares sense) to the measured ones:
(X,Y)=min_x,y∑_i=1^N(DRSS_(x,y),i,T-DRSS_i,M)^2
Here, DRSS_(x,y),i,T denotes the theoretical differential RSS value at position (x,y) for anchor i, with i = 1 ... N and N the number of anchors. DRSS_i,M is the measured differential RSS value for anchor i and is therefore required; it is obtained from the measured RSS values using the equation below.
DRSS_i=RSS_i-RSS_1
Here, RSS_1 denotes the strongest received signal strength. In the algorithm, the path-loss exponent n is considered constant and known.
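The online phase of the equation above is a straightforward grid search; the Python sketch below assumes the theoretical DRSS values have already been tabulated for each grid point during the offline phase.

import numpy as np

def drss_localize(grid_xy, drss_theoretical, drss_measured):
    # grid_xy: (G, 2) candidate grid points
    # drss_theoretical: (G, N) offline DRSS values per grid point and anchor
    # drss_measured: (N,) online DRSS values, DRSS_i = RSS_i - RSS_1
    cost = np.sum((drss_theoretical - drss_measured) ** 2, axis=1)
    return grid_xy[np.argmin(cost)]  # grid point minimising the least-squares mismatch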
§.§ Improved RSSI <cit.>
The mean or probability distribution of the RSSI signal is typically used to characterize it. However, because of multipath and non-line-of-sight propagation in a complex and dynamic indoor environment, the RSSI characteristics can vary significantly in time and space. For example, if multipath interference is stronger than the direct signal, the probability distribution is right-skewed. As a result, the mean of the RSSI signal does not adequately represent its dynamic nature. Therefore, the improved RSSI algorithm <cit.> bags RSSIs over a particular time interval to maintain the temporal correlation between observations, extracts the top k values from the sorted observations (k was determined to be 13 for indoor localization), and averages the extracted observations to obtain a refined RSSI. This refined RSSI is then used to calculate the differential distance from the previous observation using the differential of the standard signal-intensity attenuation model:
Δ d = d_o 10^((A - R_t)/(10η)) - d_o 10^((A - R_t-1)/(10η)) ,
where d_o is set to one meter, A is the RSSI received at d_o, and R_t and R_t-1 are the processed RSSI values at times t and t-1, respectively.
For localization, an initial position is (0, 0), and subsequent positions are estimated using the intersection of Δ d measured from each access point using standard rules of trilateration.
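A sketch of the two processing steps described above is given below in Python; the reference power A and the path-loss exponent eta are assumed values, and k = 13 follows the cited work.

import numpy as np

def refined_rssi(rssi_window, k=13):
    # Bag RSSIs over a time window and average the top-k values
    return float(np.mean(np.sort(np.asarray(rssi_window))[-k:]))

def differential_distance(R_t, R_prev, A=-40.0, eta=3.0, d0=1.0):
    # Delta d from the log-distance model: d = d0 * 10**((A - R) / (10 * eta))
    d_now = d0 * 10.0 ** ((A - R_t) / (10.0 * eta))
    d_before = d0 * 10.0 ** ((A - R_prev) / (10.0 * eta))
    return d_now - d_before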
§.§ Particle Filter Extended Kalman Filter (PF-EKF) <cit.>
For indoor localization, Bayesian filtering is an appealing solution. It requires a system model (describing how the state changes over time) and a measurement model (relating the noisy measurements, RSSI for the PF and the user position for the EKF, to the state/position). Using a recursive filter, the PF-EKF algorithm in <cit.> calculates the posterior Probability Density Function (PDF). Recursive filters have prediction and update stages: the state is first predicted and then updated once measurements become available. Using the Bayes theorem, the gathered measurements are used to adjust the predicted PDF.
The algorithm first uses a particle filter to find the estimated PDF in weighted random samples as:
p(y_i | x_1:i) ≈ ∑_k=1^N_s w_i^k δ(y_i - y_i^k)
Furthermore, we used this PDF in an extended Kalman filter to get the prediction of position and update the measurement model for future estimation as:
Y̅_i-1= FY_i-1
P̅_i-1= FP_i-1F^T+Q
K_i=P̅_i-1H^T(HP̅_i-1H^T+R)^-1
Y̅_i=Y̅_i-1+ K_i(X_i-HY̅_i-1)
P_i=P̅_i-1(1-KH)
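The prediction and update equations above correspond to a standard Kalman filter step; a generic linear Python sketch is shown below, where F, H, Q and R are the system, measurement, process-noise and measurement-noise matrices.

import numpy as np

def ekf_step(Y, P, X_meas, F, H, Q, R):
    Y_pred = F @ Y                                            # state prediction
    P_pred = F @ P @ F.T + Q                                  # covariance prediction
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)    # Kalman gain
    Y_new = Y_pred + K @ (X_meas - H @ Y_pred)                # measurement update
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred             # covariance update
    return Y_new, P_new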
§.§ SBL-DOA <cit.>.
The method proposed in <cit.> focuses on assisted vehicle localization based on three collaborative base stations and SBL-based robust DOA estimation. To accurately estimate the direction of arrival (DOA) for vehicle localization, the authors develop a sparse Bayesian learning (SBL) technique. The main concept is to use three base stations for cooperative localization and to take advantage of the known sparsity in the angular space.
The SBL-based robust DOA estimation problem is formulated as:
minimize over θ, α:   1/2 ‖𝐲 - 𝐀(θ)𝐱‖_2^2 + γ/2 ‖𝐱‖_1
subject to α_i = 1/σ_i^2, i = 1, …, N,
where 𝐲 represents the received signal, 𝐀(θ) denotes the steering matrix, 𝐱 is the sparse vector of interest, α is the hyperparameter vector, and γ is the regularization parameter.
After obtaining the DOA estimates, the proposed method triangulates the robot's location using the base stations. For a system with three base stations, the vehicle's position (x, y) is calculated using the following set of equations:
d_1 = √((x - x_1)^2 + (y - y_1)^2)
d_2 = √((x - x_2)^2 + (y - y_2)^2)
d_3 = √((x - x_3)^2 + (y - y_3)^2),
where (x_i, y_i) denotes the position of the i-th base station and d_i represents the distance between the vehicle and the i-th base station.
By applying the SBL-based robust DOA estimation and triangulation, the proposed approach in <cit.> achieves accurate and reliable vehicle localization in the presence of multipath and shadowing effects commonly encountered in urban environments.
§.§ Illustration of a localization example
We plotted the position predictions in a large simulated area of 40 m x 40 m bounded region to further understand how our method localizes compared to other methods. Fig. <ref> visualizes the localization accuracy for all methods for a specific sample location. We can observe that the proposed methods predicted robot node location very close to the actual location, even in a large area.
§.§ Ablation study on the number of particles in CDOA-PF
We present an ablation analysis of the number of particles in the proposed CDOA-PF method since it is a sampling-based method depending heavily on the number of particles.
Fig. <ref> presents the RMSE variations for different numbers of particles in the proposed CDOA-PF approach. Accuracy improves as the number of particles increases, but only marginally beyond a certain optimum value, while more particles also increase the computational complexity. Implementing the motion model over the PF significantly improved the accuracy at a small expense in computation time. The results demonstrate the potential of integrating the mobile robot's motion model (or an IMU sensor), when available, to improve the CDOA-based localization solution.
|
http://arxiv.org/abs/2307.03058v1
|
20230706152423
|
Hydrodynamic atmospheric escape in HD 189733 b: Signatures of carbon and hydrogen measured with the Hubble Space Telescope
|
[
"Leonardo A. Dos Santos",
"Antonio García Munõz",
"David K. Sing",
"Mercedes López-Morales",
"Munazza K. Alam",
"Vincent Bourrier",
"David Ehrenreich",
"Gregory W. Henry",
"Alain Lecavelier des Etangs",
"Thomas Mikal-Evans",
"Nikolay K. Nikolov",
"Jorge Sanz-Forcada",
"Hannah R. Wakeford"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
0000-0002-2248-3838]Leonardo A. Dos Santos
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
ldsantos@stsci.edu
0000-0003-1756-4825]Antonio García Muñoz
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France
0000-0001-6050-7645]David K. Sing
Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA
Department of Earth & Planetary Sciences, Johns Hopkins University, Baltimore, MD, USA
0000-0003-3204-8183]Mercedes López-Morales
Center for Astrophysics | Harvard & Smithsonian, 60 Garden St, Cambridge, MA 02138, USA
0000-0003-4157-832X]Munazza K. Alam
Carnegie Earth & Planets Laboratory, 5241 Broad Branch Road NW, Washington, DC 20015, USA
Observatoire astronomique de l'Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland
Observatoire astronomique de l'Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland
0000-0003-4155-8513]Gregory W. Henry
Center of Excellence in Information Systems, Tennessee State University, Nashville, TN 37209, USA
Institut d'Astrophysique de Paris, CNRS, UMR 7095 & Sorbonne Universités UPMC Paris 6, 98bis bd Arago, 75014 Paris, France
0000-0001-5442-1300]Thomas Mikal-Evans
Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany
0000-0002-6500-3574]Nikolay K. Nikolov
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0002-1600-7835]Jorge Sanz-Forcada
Departamento de Astrofísica, Centro de Astrobiología (CSIC-INTA), ESAC Campus, Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Madrid, Spain
0000-0003-4328-3867]Hannah R. Wakeford
University of Bristol, HH Wills Physics Laboratory, Tyndall Avenue, Bristol, UK
One of the most well-studied exoplanets to date, HD 189733 b, stands out as an archetypal hot Jupiter with many observations and theoretical models aimed at characterizing its atmosphere, interior, host star, and environment. We report here on the results of an extensive campaign to observe atmospheric escape signatures in HD 189733 b using the Hubble Space Telescope and its unique ultraviolet capabilities. We have found a tentative, but repeatable in-transit absorption of singly-ionized carbon (C ii, 5.2%± 1.4%) in the epoch of June-July/2017, as well as a neutral hydrogen (H i) absorption consistent with previous observations. We model the hydrodynamic outflow of HD 189733 b using an isothermal Parker wind formulation to interpret the observations of escaping C and O nuclei at the altitudes probed by our observations. Our forward models indicate that the outflow of HD 189733 b is mostly neutral within an altitude of ∼ 2 R_ p and singly ionized beyond that point. The measured in-transit absorption of C ii at 1335.7 Å is consistent with an escape rate of ∼ 1.1 × 10^11 g s^-1, assuming solar C abundance and outflow temperature of 12 100 K. Although we find a marginal neutral oxygen (O i) in-transit absorption, our models predict an in-transit depth that is only comparable to the size of measurement uncertainties. A comparison between the observed transit depths and hydrodynamics models suggests that the exosphere of this planet interacts with a stellar wind at least one order of magnitude stronger than solar.
§ INTRODUCTION
One of the most striking discoveries in the search for exoplanets is that they can orbit their host stars at extremely close-in distances, a fact that initially challenged our understanding of planetary formation outside the Solar System <cit.>. In particular, hot-Jupiters were the first exoplanets to be found because they imprint strong transit and gravitational reflex signals in their host stars, despite their being intrinsically rare <cit.>. Although as a community we ultimately aspire to find another planet similar to the Earth, hot-Jupiters stand out as an important stepping stone because they are excellent laboratories to test our hypotheses of how planetary systems form and evolve <cit.>.
For small, short-period exoplanets, the impinging irradiation from their host stars and how it varies with time are some of the most important factors that drive the evolution of their atmospheres <cit.>. That is because the incoming energetic photons (with wavelengths between X-rays and extreme-ultraviolet, or XUV) heat the upper atmosphere of the planet, which in turn expands and produces outflowing winds. If this outflow becomes supersonic, the atmospheric escape process is said to be hydrodynamic. Originally formulated by <cit.> to describe the evolution of the early Earth and Venus, hydrodynamic escape has been observed in action in many hot exoplanets <cit.>. Other factors such as composition, as in high mean molecular weight atmospheres, are also important in regulating the mass-loss rate of exoplanets <cit.>.
The hot-Jupiter HD 189733 b <cit.> is a particularly well-studied exoplanet owing to: i) its proximity to the Solar System, ii) size and mass in relation to its host star, and iii) short orbital period — see the stellar and planetary parameters in Table <ref>; these were compiled following the most recent and most complete datasets available in the literature, aiming for precision and consistency.
Previous optical observations of the atmosphere of HD 189733 b have shown that its transmission spectrum is consistent with the presence of high-altitude hazes <cit.>. In the near-infrared, the low amplitude of the water feature in transmission indicates a depletion of H_2O abundance from solar values, likely a result from its formation <cit.>. Using both transit and eclipse data of this planet, <cit.> concluded that the C/O ratio of HD 189733 b is ∼ 0.66, and that it has a super-solar atmospheric metallicity.
Using different observational and theoretical techniques, previous atmospheric escape studies of HD 189733 b found that the planet likely has high mass-loss rates in the order of 10^10 - 10^11 g s^-1, which is consistent with a hydrodynamic outflow <cit.>. In this regime, the outflow of H is so intense that it can drag heavier species, such as C and O, upwards to the exosphere of the planet, where these nuclei can quickly photoionize. In this context, <cit.> reported on the detection of neutral oxygen (O i) in the exosphere of HD 189733 b, which the authors attribute to atmospheric escape, but require super-solar abundances and super-thermal line broadening to be explained; they also report a non-detection of singly-ionized carbon (C ii). More recently, <cit.> ruled-out the presence of singly-ionized magnesium (Mg ii) in the outflow of this planet and, although they reported a non-detection of Mg i, they did not rule out the presence of this species.
In this manuscript, we report on a comprehensive analysis of all the far-ultraviolet (FUV) transit observations of HD 189733 b performed with the Hubble Space Telescope and the Cosmic Origins Spectrograph (COS) instrument obtained to date. In Section <ref>, we describe the observational setup and data reduction steps; Section <ref> contains the results of our data analysis in the form of spectroscopic light curves; in Section <ref>, we discuss the models we used to interpret our results and how they compare with the literature; in Section <ref> we lay the main conclusions of this work.
Stellar and planetary parameters, and transit ephemeris of the HD 189733 b system
Parameter Unit Value Ref.
Stellar radius R_⊙ 0.765^+0.019_-0.018 <cit.>
Stellar mass M_⊙ 0.812^+0.041_-0.038 <cit.>
Stellar effective temperature K 5050 ± 20 <cit.>
Projected rotational velocity km s^-1 3.5 ± 1.0 <cit.>
Age Gyr ∼ 1.2 <cit.>
Systemic radial velocity km s^-1 -2.204^+0.010_-0.011 <cit.>
Distance pc 19.7638^+0.0128_-0.0127 <cit.>
Spectral type K2V <cit.>
Planetary radius R_Jup 1.119 ± 0.038 <cit.>
Planet-to-star ratio R_ p / R_ s 0.1504^+0.0038_-0.0039 <cit.>
Planetary mass M_Jup 1.166^+0.052_-0.049 <cit.>
Planetary density g cm^-3 1.031^+0.106_-0.090 <cit.>
Planetary eq. temperature K 1209 ± 11 <cit.>
Orbital period d 2.218577^+0.000009_-0.000010 <cit.>
Semi-major axis au 0.03106^+0.00051_-0.00049 <cit.>
Orbital inclination deg 85.690^+0.095_-0.097 <cit.>
Eccentricity < 0.0039 <cit.>
Transit center reference time BJD 2458334.990899^+0.000726_-0.000781 <cit.>
Transit dur. (1st-4th contact) h 1.84 ± 0.04 <cit.>
Stellar and planetary parameters were obtained in the following DOI: [10.26133/NEA2]https://doi.org/10.26133/NEA2
§ OBSERVATIONS AND DATA ANALYSIS
Several FUV transits of HD 189733 b have been observed with HST in the General Observer programs 11673 (PI: Lecavelier des Etangs), 14767 (PanCET program; PIs: Sing & López-Morales), and 15710 (PI: Cauley). Another program (12984, PI: Pillitteri) also observed HD 189733 in the frame of star-planet interactions, but no in-transit exposures were obtained. In program 15710, which aimed at measuring transits simultaneously with HST and ground-based facilities, two of three visits had guide star problems and are not usable; the third visit has only one exposure covering in-transit fluxes, and the remaining ones occur after the transit; this non-optimal transit coverage is likely the result of difficulties in coordinating HST and ground-based observatories for simultaneous observations. We list the dataset identifiers and times of observation in Table <ref>[These data are openly available in the following DOI: [10.17909/2dq3-g745]https://doi.org/10.17909/2dq3-g745.]. Each identifier corresponds to one exposure of HST, or one orbit.
The COS observations were set to spectroscopic element G130M centered at 1291 Å and a circular aperture with diameter 2.5 arcsec, yielding wavelength ranges [1134, 1274] Å and [1290, 1429] Å. The data were reduced automatically by the instrument pipeline <cit.>. Several FUV transits of HD 189733 b have also been observed with the STIS spectrograph, but with a more limited wavelength range — thorough analyses of the STIS datasets are discussed in <cit.>, <cit.>, and <cit.>. In this manuscript we focus only on the COS data, which cover more metallic emission lines than the STIS data.
Observations log of HD 189733 b transits with HST/COS.
Visit Dataset Start time (BJD) Exp. time (s) Phase
A 2009-09-16 18:31:52.378 208.99 Out of transit
2009-09-16 19:50:49.344 889.18 Out of transit
2009-09-16 21:26:41.338 889.18 Out of transit
2009-09-16 21:44:13.344 889.15 Ingress
2009-09-16 23:02:33.331 889.15 In transit
2009-09-16 23:20:05.338 889.18 Egress
2009-09-17 00:38:25.325 889.18 Post-transit
2009-09-17 00:55:57.331 889.18 Post-transit
B 2017-06-24 08:03:55.843 2018.18 Out of transit
2017-06-24 09:24:16.877 2707.20 Out of transit
2017-06-24 10:59:38.803 2707.17 Ingress
2017-06-24 12:35:00.816 2707.17 Egress
2017-06-24 14:10:23.866 2707.17 Post-transit
C 2017-07-03 05:00:55.786 2018.18 Out of transit
2017-07-03 06:21:52.934 2707.20 Out of transit
2017-07-03 07:57:12.960 2707.20 Ingress
2017-07-03 09:32:33.850 2707.20 Egress
2017-07-03 11:07:54.826 2707.20 Post-transit
D 2020-09-01 06:23:14.842 1068.16 In transit
2020-09-01 07:47:24.835 2060.19 Post-transit
2020-09-01 09:22:43.824 2435.17 Post-transit
2020-09-01 10:58:01.862 2600.19 Post-transit
2020-09-01 12:33:20.851 2600.19 Post-transit
We search for signals of atmospheric escape using the transmission spectroscopy technique. Due to the strong oscillator strengths of FUV spectral lines, we analyze stellar emission lines individually (see Figure <ref>). In this regime, one effective way of searching for excess in-transit absorption by an exospheric cloud around the planet is by measuring light curves of fluxes in the emission lines <cit.>. Depending on the abundance of a certain species in the exosphere, an excess absorption of a few to several percent can be detected.
The C ii lines are a doublet with central wavelengths at 1334.5 Å and 1335.7 Å[More specifically, there is a third component blended with the second line at 1335.66 Å, which would make this feature a triplet. However, this third component is one order of magnitude weaker than the second component.], both emitted by ions transitioning from the configuration 2s2p^2 to the ground and first excited states of the configuration 2s^22p, respectively. The O i lines are a triplet with central wavelengths at 1302.2 Å, 1304.9 Å and 1306.0 Å, emitted by atoms transitioning from the configuration 2s^22p^3(^4S^ o)3s to the ground, first and second excited states of the configuration 2s^22p^4, respectively. See the relative strengths of these spectral lines in Figure <ref>. As discussed in <cit.>, interstellar medium (ISM) absorption can in principle affect the observable flux of the C ii lines. However, the effect is negligible for our analysis, which relies on a differential time-series analysis and not on the intrinsic stellar flux.
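For reference, the rest wavelengths quoted above and the conversion to Doppler velocity used throughout this section can be summarised in a few lines of Python (line centres are listed to the precision quoted in the text).

# Rest wavelengths (Angstrom) of the lines discussed above
LINES = {
    "C II": [1334.5, 1335.7],
    "O I":  [1302.2, 1304.9, 1306.0],
}
C_KMS = 299792.458  # speed of light in km/s

def doppler_velocity(wavelength, line_center):
    # Velocity (km/s) of a wavelength sample relative to a line centre
    return (wavelength - line_center) / line_center * C_KMS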
While analyzing the atomic oxygen (O i) lines, care has to be taken because of geocoronal contamination. To get around this issue, we subdivided the HST exposures into several subexposures, identifying which subexposures are contaminated, discarding them, and analyzing only the clean subexposures. Since the O i contamination is correlated with the geocoronal emission levels in Lyman-α, we identify the problematic subexposures using the Lyman-α line, where the contamination is more obvious. When analyzing other emission lines that do not have geocoronal contamination, we do not discard any subexposures. In principle, the contamination in the O i lines can also be subtracted using templates in a similar fashion as for the Lyman-α line <cit.>. But the contrast between the airglow and the stellar emission of HD 189733 is too low to allow for a proper subtraction (see Appendix <ref>), thus we opt to discard contaminated subexposures instead.
For relatively bright targets like HD 189733, the FUV continuum can be detected in COS spectroscopy despite the low signal-to-noise ratio (SNR). In our analysis, we measure the FUV continuum by integrating the COS spectra between the following wavelength ranges: [1165, 1173], [1178, 1188], [1338, 1346], [1372, 1392], [1413, 1424] and [1426, 1430] Å. These ranges strategically avoid strong emission lines and weaker ones that were identified by combining all available COS data (red spectrum in Figure <ref>).
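A minimal Python sketch of this continuum measurement is given below; details such as error propagation and per-exposure handling are omitted, and it is not meant to reproduce the exact pipeline.

import numpy as np

# Line-free windows (Angstrom) used for the FUV continuum measurement
CONTINUUM_RANGES = [(1165, 1173), (1178, 1188), (1338, 1346),
                    (1372, 1392), (1413, 1424), (1426, 1430)]

def continuum_flux(wavelength, flux_density):
    # Integrate the flux density over the line-free windows and sum them
    total = 0.0
    for lo, hi in CONTINUUM_RANGES:
        mask = (wavelength >= lo) & (wavelength <= hi)
        total += np.trapz(flux_density[mask], wavelength[mask])
    return total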
§ RESULTS
In the following discussion, we frequently mention fluxes during transit ingress and egress because those are the transit phases that are covered by Visits B and C (see Table <ref>). Visits A and D contain exposures near mid-transit, but the first visit has low SNR due to shorter exposure times and the latter does not have an adequate out-of-transit baseline. The light curves were normalized by the average flux in the exposures before the transit. As is customary in this methodology, we did not consider exposures after the transit as baseline for normalization because they may contain post-transit absorption caused by an extended tail. The transit depths quoted in this manuscript are measured in relation to the baseline out-of-transit flux. In this section, we deem signals as detections related to the planet HD 189733 b if they are repeatable during transits in the epoch of 2017 (Visits B and C).
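The quoted transit depths follow from this normalisation; a simplified Python sketch is shown below, ignoring the uncertainty of the baseline itself for brevity.

import numpy as np

def in_transit_absorption(time, flux, flux_err, t_first, t_fourth):
    # Baseline uses only pre-transit exposures; post-transit points are excluded
    pre = time < t_first
    intr = (time >= t_first) & (time <= t_fourth)
    baseline = np.mean(flux[pre])
    depth = 1.0 - np.mean(flux[intr]) / baseline
    depth_err = np.sqrt(np.sum(flux_err[intr] ** 2)) / (intr.sum() * baseline)
    return depth, depth_err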
§.§ Exospheric oxygen and carbon
We have found HD 189733 b to produce repeatable absorption levels of ionized carbon at blueshifted Doppler velocities. Furthermore, some of these signatures are asymmetric in relation to the transit center, indicative of departures from spherical symmetry. In Section <ref>, we will see that no detectable atomic oxygen is expected in the exosphere of HD 189733 b.
For Visit A, we found that all exposures have low levels of geocoronal contamination except for one quarter (subexposure) of each of the following datasets: lb5k01umq, lb5ka1uqq, lb5ka1v0q and lb5ka1vdq; these subexposures with high contamination were discarded from the O i analysis (see Appendix <ref>).
Our analysis of the O i light curves yields an in-transit absorption of 5.3%± 1.9% (2.8σ significance; Doppler velocity range [-75, +75] km s^-1) by combining Visits B and C when co-adding all O i lines. In Visit A (epoch 2009), we do not detect a significant in-transit absorption, likely due to a combination of shorter exposure times and stellar variability (see Appendix <ref>). We deem the results of Visit D (epoch 2020) inconclusive due to a non-optimal out-of-transit baseline (see also Appendix <ref>). We show the in- and out-of-transit O i spectra in the second and third rows of Figure <ref>.
In addition to O i, we also measure an in-transit absorption of singly-ionized carbon (C ii, all lines co-added) in HD 189733 b, more specifically 7.4%± 2.0% and 3.0%± 2.1% for Visits B and C, respectively. By combining these two visits, we measure an absorption of 5.2%± 1.4% (3.7σ detection; Doppler velocity range [-100, +100] km s^-1). We report the ingress and egress absorption levels at different Doppler velocity ranges in Table <ref>. The signal is largely located in the blue wing, between velocities [-100, 0] km s^-1, of the excited-state line at 1335.7 Å (see the top row of Figure <ref>); if the signal is indeed of planetary nature, this suggests that the material is being accelerated away from the host star <cit.>. The other emission lines in the COS spectrum (Si ii, Si iv, C iii and N v) do not show significant variability (see Appendix <ref>).
Average in-transit absorption levels measured over Visits B and C.
Species Blue wing Red wing Full line
Ingress
O i 2.3%± 5.5% 5.1%± 5.1% 3.2%± 3.7%
O i^* 8.9%± 4.2% 1.6%± 4.7% 6.6%± 3.1%
O i^** 3.4%± 3.9% 9.9%± 5.5% 5.4%± 3.1%
C ii 4.3%± 3.7% 0.5%± 3.4% 3.4%± 2.4%
C ii^* 7.6%± 2.3% 5.3%± 3.0% 6.1%± 1.8%
Egress
O i 5.5%± 5.5% -0.1%± 5.1% 0.7%± 3.7%
O i^* 1.5%± 4.2% 0.1%± 4.7% 0.7%± 3.1%
O i^** 0.0%± 3.9% 8.6%± 5.4% 3.7%± 3.1%
C ii 5.4%± 3.7% 0.0%± 3.4% 3.4%± 2.4%
C ii^* 4.2%± 2.2% -2.0%± 3.0% 1.5%± 1.8%
Visits A and D were excluded from this analysis because they were observed in different epochs from the PanCET visits (see details in Section <ref>).
§.§ Non-repeatable signals: stellar or planetary variability?
The hot Jupiter HD 189733 b is known for orbiting a variable host star and for having variable signatures of atmospheric escape <cit.>. Our analysis provides potential observational evidence for the variability of the upper atmosphere in this exoplanet, but it is difficult to disentangle it from variability in the host star emission-line flux.
In Visit B, the Si iii line at 1206.5 Å shows a flux decrease of 7.6%± 2.9% near the egress of the transit (see left panel of Figure <ref>). This line is well known for being a sensitive tracer of stellar activity <cit.>. However, it is difficult to determine whether this signal is due only to stellar activity or the presence of doubly-ionized Si in the upper atmosphere of HD 189733 b or a combination of both. Similar non-repeatable egress absorptions are seen in the C ii line at 1335.7 Å in Visit B. If it is due solely to stellar activity, it is possible that part of the in-transit absorption of C ii observed in Visit B is also due to activity, since these lines are a moderate tracer of activity as well. With that said, it is not completely unexpected to see tails of ionized exospheric atoms after the egress of a highly-irradiated planet like HD 189733 b <cit.>. If the egress Si iii flux decrease seen in Visit B is indeed due to the presence of doubly-ionized Si in the exospheric tail of the planet, a non-detection in Visit C could be explained by: i) variability in the outflow velocities of HD 189733 b, ii) variability in its ionization level, or iii) variation in the stellar wind. Further modeling will be necessary to test these different hypotheses, and we leave it for future efforts.
Since the FUV continuum traces the lower chromosphere in solar-type stars <cit.>, we also compute its light curve and search for signals of variability connected to stellar activity. For HD 189733, we measure an average out-of-transit FUV continuum flux density of (1.18 ± 0.05) × 10^-16 and (1.11 ± 0.05) × 10^-16 erg s cm^-2 Å^-1, respectively for visits B and C (see wavelength ranges in Section <ref>). The FUV continuum transit light curve is shown in the right panel of Figure <ref>. We do not detect statistically significant variability of the normalized FUV continuum flux during Visits B and C; however, the uncertainties of the measured fluxes are slightly larger than the line fluxes measured for the C ii light curves.
Following the methods described in <cit.>, we did not identify any flares in the datasets we analyzed, and found no evidence for rotational or magnetic activity modulation of the FUV fluxes due to the relatively short baseline of observations available. The photometric monitoring of the host star (see Appendix <ref>) suggests a rotational period of 12.25 d. Visit B occurred during a time of maximum starspot coverage, while Visit C occurred between times of maximum and minimum spottedness. In the HST data, we found that Visit B has higher absolute fluxes of metallic lines than Visit C by ∼10% (except for Si ii; see Appendix <ref>). On the other hand, for the ground-based photometry in the b and y bands, we observe a Δ mag of ∼0.02, which corresponds to a flux difference of approximately 1.9% in the optical.
§.§ A repeatable hydrogen signature
The COS spectra we analyze also contain information about the stellar Lyman-α line, despite the strong geocoronal contamination. We used the same technique described in <cit.> to remove the geocoronal contamination (see Appendix <ref>) and analyzed the time series of the cleaned line for both its blue and red wings (respectively [-230, -140] km s^-1 and [+60, +110] km s^-1, based on the results of <cit.>). Some of the exposures in Visit A are not suitable for decontamination due to low SNR and had to be discarded. The resulting light curves are shown in Figure <ref>.
We found that the blue wing of the line shows a repeatable absorption during the ingress of HD 189733 b, with transit depths consistent between all visits. The light curves measured with COS are also consistent with that observed with STIS <cit.>. This signal at high velocities in the blue wing suggests that the exospheric H of HD 189733 b is accelerated away from the host star, an effect that has been extensively studied in, e.g., <cit.>, <cit.> and <cit.>. The time series of the red wing also shows an absorption during the transit of HD 189733 b. These results are consistent with the observations reported in <cit.>.
In addition, we also found a hint of a post-transit absorption in the blue wing two hours after the transit mid-time, which could indicate the presence of a long neutral H tail as predicted by <cit.>. At this moment, it is unclear how consistent this would be with the doubly-ionized Si tail hinted in the left panel of Figure <ref>, so we would benefit from future work in more detailed hydrodynamic modeling of HD 189733 b involving H and metallic species.
As we shall see in Section <ref>, our simplified one-dimensional modeling suggests that the exosphere of HD 189733 b is mostly ionized for all the species we simulated (H, He, C and O). The detection of neutral H, however, is not inconsistent with this scenario, since even a small fraction of neutral H can yield a detectable signal due to the large absolute abundance of H atoms in the outflow.
The global 3D hydrodynamics simulations presented by <cit.> predict different levels of absorption in the blue and red wings of the line depending on the strength of the stellar wind (SW). According to their models, weaker winds (Ṁ_ sw < 10^13 g s^-1)[For comparison, the solar wind Ṁ is ∼ 10^12 g s^-1 <cit.>.] tend to produce deeper red wing transits and shallower blue wing absorption in the Doppler velocity ranges we analyzed. On the other hand, stronger winds (Ṁ_ sw > 10^13 g s^-1) produce transit depths that are similar to those that we measured in both blue and red wings. These results highlight the importance of transits to study not only exoplanet outflows, but also their interaction with the stellar wind.
§ MODELING THE ESCAPE OF CARBON AND OXYGEN
To interpret our observations, we produce forward models of the escaping atmosphere of HD 189733 b using the code <cit.>. The code calculates the structure of the planet's upper atmosphere by assuming that the outflow can be simulated as an isothermal, one-dimensional Parker wind <cit.>. We further assume that the planet's atmosphere has a H/He fraction of 90/10, and that C and O are trace elements. In this version of , the code computes the ionization-advection balance using the photoionization, recombination and charge transfer reactions listed in Table 1 of <cit.>. We further include the charge transfer reaction between C^2+ and He^0 from <cit.>. A more detailed description of this version of the code is presented in Appendix <ref>. We caution that our simulation is simplified in that we assume that the outflow is one-dimensional; a more accurate model for an asymmetric transit would require three-dimensional modeling. Since the focus of this manuscript is on reporting the results of the observation, we provide here only this simplified modeling approach and leave more detailed modeling for future work.
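To make the Parker-wind assumption concrete, the minimal sketch below evaluates the velocity and density structure of a one-dimensional isothermal outflow from the analytic Lambert-W solution of the Parker equation. It is not the actual implementation used here; the planet mass, outflow temperature, mean molecular weight, mass-loss rate and radius grid are assumed illustrative values.

```python
# Minimal sketch of a 1D isothermal Parker wind (illustrative, not the p-winds code).
import numpy as np
from scipy.special import lambertw

# Assumed illustrative parameters (cgs units)
G = 6.674e-8           # gravitational constant
k_B = 1.381e-16        # Boltzmann constant
m_H = 1.673e-24        # hydrogen mass
M_p = 1.13 * 1.898e30  # planet mass, roughly HD 189733 b (assumed)
T = 12000.0            # isothermal outflow temperature [K] (assumed)
mu = 1.0               # mean molecular weight (assumed)
mdot = 1.1e11          # sub-stellar mass-loss rate [g/s] (assumed)

cs = np.sqrt(k_B * T / (mu * m_H))   # isothermal sound speed
r_s = G * M_p / (2.0 * cs**2)        # sonic (critical) radius

def parker_velocity(r):
    """Wind velocity at radius r (cm), from the Lambert-W form of the Parker solution."""
    # v satisfies (v/cs)^2 - ln(v/cs)^2 = 4 ln(r/r_s) + 4 r_s/r - 3
    D = (r_s / r)**4 * np.exp(3.0 - 4.0 * r_s / r)
    w0 = np.real(lambertw(-D, 0))    # subsonic branch (r < r_s)
    wm1 = np.real(lambertw(-D, -1))  # supersonic branch (r > r_s)
    w = np.where(r < r_s, w0, wm1)
    return cs * np.sqrt(-w)

r = np.linspace(1.2e10, 1.5e11, 200)    # radius grid [cm] (assumed range)
v = parker_velocity(r)
rho = mdot / (4.0 * np.pi * r**2 * v)   # density from mass conservation
print(v[0] / 1e5, v[-1] / 1e5)          # velocity in km/s at the grid edges
```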
The distribution of ions in the upper atmosphere of an exoplanet is highly dependent on the incident high-energy spectral energy distribution (SED) from the host star. For the purposes of this experiment, we will use the high-energy SED of HD 189733 that was estimated in <cit.>, based on X-rays and HST observations.
We calculate the expected distributions of C ii and O i as a function of altitude by assuming that the planet has solar C and O abundances, the escape rate[When referring to mass-loss rates in this manuscript, we are referring to the sub-stellar escape rate. The term “sub-stellar" is used when assuming that the planet is irradiated over 4π sr. This assumption is usually employed in one-dimensional models like p-winds. In reality, planets are irradiated over π sr only, and the total mass-loss rate is obtained by dividing the sub-stellar rate by four.] is 1.7 × 10^10 g s^-1 and the outflow temperature is 11 800 K based on the comprehensive 1D hydrodynamic simulations of <cit.>; we refer to this setup as Model 1 or M1. We also produce a forward model assuming solar C and O abundances, an escape rate of 1.1 × 10^11 g s^-1 and outflow temperature of 12 400 K, which were estimated by fitting the metastable He transmission spectrum of HD 189733 b <cit.>; we refer to this setup as Model 2 or M2. Finally, we also calculate a forward model based on the photochemical formalism presented in <cit.>, which yields an escape rate of 1.8 × 10^10 g s^-1 and a maximum thermospheric temperature of 12 000 K; we refer to this setup as Model 3 or M3.
Our results for M1 and M2 show that the upper atmosphere of HD 189733 b is mostly neutral within altitudes of ∼ 2 R_ p for all species: H, He, C and O; beyond this point, the atmosphere is predominantly singly ionized (see left panels of Figure <ref>). For M3, we found that the outflow is mostly singly ionized and is neutral only near the base of the wind. The population of ground and excited states C and O is sensitive to the assumed mass-loss rate and outflow temperature because they control how many electrons are available to collide and excite the nuclei, as well as the energy of the electrons (see right panels of Figure <ref>).
We used the density profiles of O i and C ii, as well as the ground/excited state fractions to calculate the expected transmission spectra of HD 189733 b near the ingress of the planet, where we found the strongest signals of a possible in-transit planetary absorption (see Figure <ref>). To this end, we used the instrumental line spread function of COS with the G130M grating centered at λ_0 = 1291 Å obtained from STScI[<https://www.stsci.edu/hst/instrumentation/cos/performance/spectral-resolution>]. The resulting theoretical transmission spectra are shown in Figure <ref>.
In order to compare these predictions with our observations, we take the average in-transit absorption within limited integration ranges where there is a detectable stellar flux in each wavelength bin. These ranges are the Doppler velocities [-100, +100] km s^-1 in the stellar rest frame for C ii and [-75, +75] km s^-1 for O i. The results are shown in Table <ref>.
Table: Wavelength-averaged ingress absorption calculated for our models.

  Species           Model 1   Model 2   Model 3
  O i (all lines)   3.0%      3.3%      3.0%
  C ii              3.3%      4.5%      2.9%
  C ii^*            3.6%      5.9%      2.9%
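For comparison with the COS measurements, the same kind of velocity-window average can be applied to the observed spectra. The sketch below computes a flux-weighted transit depth within a fixed Doppler-velocity window; the array names, the example rest wavelength and the crude error treatment are assumptions, not the actual pipeline.

```python
# Sketch of a wavelength-averaged in-transit absorption within a Doppler-velocity window.
import numpy as np

c_kms = 2.998e5
lam0 = 1334.532            # example rest wavelength [Angstrom] (assumed line choice)
v_range = (-100.0, 100.0)  # integration range in the stellar rest frame [km/s]

def window_depth(wl, f_out, f_in, err_in):
    """Flux-weighted transit depth averaged over the velocity window."""
    v = (wl - lam0) / lam0 * c_kms
    m = (v >= v_range[0]) & (v <= v_range[1]) & (f_out > 0)   # bins with detectable flux
    depth = 1.0 - np.sum(f_in[m]) / np.sum(f_out[m])
    # crude error propagation, assuming uncorrelated bins
    sigma = np.sqrt(np.sum(err_in[m]**2)) / np.sum(f_out[m])
    return depth, sigma

# wl, f_out (out-of-transit), f_in (ingress) and err_in would come from the extracted
# COS spectra; here they are assumed inputs.
```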
As expected for the lower mass-loss rate inferred by <cit.> and by M3 (1.8 × 10^10 g s^-1) as compared to the estimate of <cit.>, the excess absorptions in M1 and M3 are shallower than that of M2 and are inconsistent with the C ii transit depths that we observe with COS. We find that an isothermal Parker wind model with an escape rate of 1.1 × 10^11 g s^-1, estimated from metastable He spectroscopy <cit.>, yields a C ii transit depth that is consistent with the COS observations. This mass-loss rate is also consistent with the simple estimates from <cit.>, based on the energy-limited formulation <cit.>. As seen in Table <ref>, all models predict an O i transit depth of approximately 3%, which is comparable to the sizes of the uncertainties of the COS transit depths.
Considering that M2 has a solar H fraction of 90% and that it yields an average neutral fraction of 26%, we estimate that the sub-stellar escape rate of neutral H is 2.6 × 10^10 g s^-1. Since the planet is irradiated only over π sr, we divide the sub-stellar rate by four, yielding 6.4 × 10^9 g s^-1. This escape rate of neutral H is comparable to that estimated by <cit.> and <cit.> from Lyα transit observations. Our simulations are also consistent with the hydrodynamic models calculated by <cit.>, which predict an O i transit depth of about 3.5%. However, <cit.> had claimed that super-solar O abundances or super-thermal broadening of the absorption lines are required to fit the ∼ 6.4% transit depth they had measured for O i. Since we did not find such a deep O i transit in our analysis, no changes in the assumptions of our models were necessary.
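The bookkeeping in the preceding paragraph can be written out explicitly; the short snippet below simply reproduces the arithmetic with the numbers quoted above.

```python
# Neutral-H escape-rate bookkeeping for M2 (numbers as quoted in the text).
mdot_substellar = 1.1e11   # total sub-stellar mass-loss rate of M2 [g/s]
f_H = 0.90                 # hydrogen fraction of the 90/10 H/He mixture
f_neutral = 0.26           # average neutral fraction of H in M2
mdot_H0_substellar = mdot_substellar * f_H * f_neutral   # ~2.6e10 g/s
mdot_H0 = mdot_H0_substellar / 4.0                       # ~6.4e9 g/s (pi sr irradiation)
print(mdot_H0_substellar, mdot_H0)
```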
§ CONCLUSIONS
We reported on the analysis of several HST transit spectroscopy observations of HD 189733 b in the FUV. We found a tentative, but repeatable absorption of 6.1%± 1.8% in the singly-ionized carbon line in the first excited state during the June-July/2017 epoch. This signal is in tension with the 2009 observations of this planet, which found no significant in-transit absorption of C ii. In addition, we found a less significant ingress absorption in the neutral oxygen lines of 5.3%± 1.9%.
Our analysis yielded hints of a C ii and Si iii post-transit tail, but they are not repeatable across the visits in question. We could not draw a definitive conclusion on whether these non-repeatable signals are due to planetary or stellar variability. Although we were able to measure the FUV continuum flux of HD 189733 using COS, its light curves show no sign of significant in-transit absorption or variability. A comparison between absolute FUV fluxes and nearly simultaneous ground-based photometry in the b and y bands suggests that FUV emission lines tend to increase in flux by ∼10% when the star is 1.9% fainter in the optical due to starspots.
Using a geocoronal decontamination technique, we analyzed the Lyα time series and found a repeatable in-transit absorption in both the blue and red wings of the stellar emission. This result is consistent with previous studies using the STIS spectrograph. A comparison with hydrodynamics models in the literature shows that the absorption levels we found are consistent with an outflow that interacts with a stellar wind at least 10 times stronger than solar.
We interpreted the tentative C ii and O i signals using the one-dimensional, isothermal Parker-wind approximation of the Python code p-winds, which was originally created for metastable He observations. We adapted this code to include the photochemistry of C and O nuclei (see Appendix <ref>). This adaptation is publicly available as p-winds version 1.4.3. Based on our modeling, we conclude that the mass-loss rate of HD 189733 b is consistent with the previous observational estimates of <cit.> and <cit.>, assuming solar abundances for the planet. Interestingly, for exoplanets for which we detect both C and O escaping, we may be able to measure the C/O ratio of the outflow and compare it with estimates obtained from near-infrared transmission spectra measured with JWST.
We will benefit from future modeling efforts to address the following open questions: i) What levels of stellar variability in its wind and high-energy input are necessary to produce variability in the planetary outflow? And how can we observationally disentangle them? ii) Does HD 189733 b possess a post-transit tail with neutral H and ionized C and Si?
LADS thanks the postdoctoral researchers of STScI, Aline Vidotto and Ofer Cohen for their helpful input during the writing of this manuscript and acknowledges the often-overlooked labor of the custodial, facilities and security staff at STScI – this research would not be possible without them. LADS further thanks Shreyas Vissapragada and Michael Gully-Santiago for contributing to the p-winds code, and Dion Linssen, Lars Klijn and Yassin Jaziri for helping find bugs in the code. The authors thank the anonymous referee for the kind and valuable feedback given to this manuscript. ALDE acknowledges support from the CNES (Centre national d'études spatiales, France). This work is part of the HST Panchromatic Comparative Exoplanetary Treasury (PanCET) Program GO-14767. Astronomy at Tennessee State University is supported by the State of Tennessee through its Centers of Excellence program. This research was enabled by the financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (projects: Four Aces grant agreement No 724427 and Spice Dune grant agreement No 947634), and it has been carried out in the frame of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. This research is based on observations made with the NASA/ESA Hubble Space Telescope. The data are openly available in the Mikulski Archive for Space Telescopes (MAST), which is maintained by the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. STScI stands on the traditional and unceded territory of the Piscataway-Conoy and Susquehannock peoples. This research made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
HST(STIS), HST(COS).
NumPy <cit.>, SciPy <cit.>, Astropy <cit.>, Jupyter <cit.>, Matplotlib <cit.>, p-winds <cit.>, ChiantiPy <cit.>, batman <cit.>.
§ AIRGLOW REMOVAL FROM COS SPECTROSCOPY
The geocoronal contamination in COS spectra extends over a wide wavelength range near the Lyα and O i lines and, as opposed to STIS, it is not easily subtracted by the instrument's pipeline. This is because of the combination of the large angular size of its circular aperture and the fact that it does not acquire angularly-resolved spectra away from the science target that would serve as a sky background <cit.>. However, it is possible to subtract this contamination with some careful analysis after the data reduction. A detailed description of this process is discussed in <cit.> <cit.>. Briefly, for Lyα, it consists of identifying a wavelength range where we do not expect stellar emission, such as the core of the stellar line (which is absorbed by the interstellar medium and thus contains only geocoronal emission). Then, we fit an airglow template[COS airglow templates are publicly available in <https://www.stsci.edu/hst/instrumentation/cos/calibration/airglow>.] with varying amplitudes and wavelength shifts to this region without stellar emission. In the case of our observation, we found that the best range to fit the airglow templates was [-70, +10] km s^-1 in the heliocentric rest frame. We illustrate this process in Figure <ref>.
We verified that, for Visits A, B and C, each full COS exposure comprises one or more sub-exposures that have relatively low levels of contamination (see the blue, orange and red spectra in the left panel of Figure <ref>). We use these sub-exposures to build clean O i profiles, since the geocoronal contamination in these lines is negligible in the regime of low airglow (see an example in Figure <ref>).
§ ADDITIONAL LIGHT CURVES
We include here light curves of O i and C ii for Visits A and D (see Figure <ref>), as well as light curves of C iii, Si ii, Si iv and N v for all visits (see Figure <ref>). The absolute-flux light curves of the host star are shown in Figure <ref>.
§ GROUND-BASED PHOTOMETRIC MONITORING OF HD 189733
We acquired photometric observations of HD 189733 during 2017A with the T10 0.80 m automatic photoelectric telescope (APT) at Fairborn Observatory in Arizona. The T10 APT is equipped with a two-channel photometer that uses two EMI 9124QB bi-alkali photomultiplier tubes to measure stellar brightness simultaneously in the Strömgren b and y passbands.
The photometry of HD 189733 was measured differentially with respect to the nearby comparison star HD 191260. To improve the photometric precision of the individual nightly observations, we combine the differential b and y magnitudes into a single pseudo-bandpass (b + y)/2. The precision of a single observation, as measured from pairs of constant comparison stars, typically ranges between 0.0010 mag and 0.0015 mag on good nights. The T8 APT, which is identical to T10, is described in <cit.>.
We show the differential photometry of HD 189733 from season 2017A in Figure <ref> (top and bottom panels). Visits B and C, which are the ones discussed in this manuscript, are marked with arrows. Visit B occurred when HD 189733 was in a light curve minimum and therefore the most spotted phase. Visit C occurred after the following maximum when the star was approximately 0.025 mag brighter than during Visit B, and therefore less spotted. We identify a rotational period of 12.25 ± 0.15 d in the middle panel of Figure <ref>.
§ EXTENSION OF P-WINDS FOR EXOSPHERIC C AND O
The isothermal, one-dimensional Parker wind code p-winds was originally developed to simulate transmission spectra of metastable He in the upper atmosphere of evaporating exoplanets <cit.>. In the current development version (1.4.3), we implemented the modules carbon and oxygen that can calculate the distribution of neutral, singly- and doubly-ionized C nuclei, as well as neutral and singly-ionized O nuclei. Future versions of the code will include other species relevant for observations of atmospheric escape, such as Si, Fe and Mg. The development version of p-winds also implements Roche lobe effects, as described in <cit.> and <cit.>.
The list of new reactions implemented on p-winds to allow the modeling of C and O are shown in Table <ref>; these are in addition to the reactions described in <cit.>. To calculate the photoionization rates Φ as a function of radius r, we use the following equation:
Φ(r) = ∫^λ_0_0λ/hc f_λ σ_λ e^-τ_λ(r) dλ,
where λ_0 is the wavelength corresponding to the ionization energy of a given species and f_λ is the incident flux density. σ_λ is the photoionization cross section taken from the references listed in Table <ref>. τ_λ is the optical depth for a given species and is calculated as follows:
τ_λ(r) = ∫_r^∞σ_λ n(r) dr ,
where n is the number density of a given species.
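A minimal numerical sketch of these two integrals is given below, assuming the stellar SED, photoionization cross sections and density profile are provided on wavelength and radius grids in cgs units (with the wavelength grid restricted to λ ≤ λ_0). It illustrates only the quadrature, not the p-winds implementation.

```python
# Numerical sketch of the photoionization rate and optical depth integrals above.
import numpy as np

h = 6.626e-27   # erg s
c = 2.998e10    # cm/s

def optical_depth(r_grid, n_of_r, sigma_lambda):
    """tau_lambda(r): column density above each radius times the cross section."""
    # trapezoidal column density integrated from each radius to the outer grid edge
    seg = 0.5 * (n_of_r[1:] + n_of_r[:-1]) * np.diff(r_grid)
    col = np.flip(np.cumsum(np.flip(seg)))
    col = np.append(col, 0.0)                 # zero column at the outer boundary
    return np.outer(col, sigma_lambda)        # shape (n_radius, n_wavelength)

def photoionization_rate(wl, f_lambda, sigma_lambda, tau):
    """Phi(r) = integral (lambda / hc) f_lambda sigma_lambda exp(-tau) dlambda."""
    integrand = (wl / (h * c)) * f_lambda * sigma_lambda * np.exp(-tau)
    return np.trapz(integrand, wl, axis=-1)   # one rate per radius
```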
Following the formulation of <cit.>, we calculate the fractions of ionized C and O using the steady-state advection and ionization balance combined with mass conservation to obtain the following equations:
v(r) df_ C II/dr = f_ C I(Φ_ C I e^-τ_ C I + n_e q_ E1 + n_ H II q_ T2 + n_ He II q_ T4)
- f_ C II(n_e q_ R2 + n_ H II q_ T1) + f_ C III(n_e q_ R3 + n_ H I q_ T3 + n_ He I q_ T5) ,
v(r) df_ C III/dr = f_ C II(Φ_ C II e^-τ_ C II + n_e q_ E2) - f_ C III(n_e q_ R3 + n_ H I q_ T3 + n_ He I q_ T5) ,
v(r) df_ O II/dr = f_ O I(Φ_ O I e^-τ_ O I + n_e q_ E3 + n_ H II q_ T7) - f_ O II(n_e q_ R4 + n_ H I q_ T6) ,
where f is the ionization fraction of a given species, q is the rate of a given reaction in Table <ref>, n is the number density of a given particle, f_ CI = 1 - f_ CII - f_ CIII and f_ OI = 1 - f_ O II. Eqs. <ref> and <ref> are coupled and solved simultaneously using the odeint solver of SciPy's <cit.> integrate module; Eq. <ref> is solved using the solve_ivp solver of the same module. The solutions are obtained from an initial guess of the fractions f provided by the user, and the integration is repeated until the fractions converge to within 1%.
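As a structural illustration of how such a balance equation can be integrated, the sketch below sets up the simpler O equation for solve_ivp; the velocity, densities and rate coefficients are placeholder callables rather than the actual profiles used in this work.

```python
# Structural sketch of integrating the f_OII advection-ionization balance with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def df_OII_dr(r, f, profiles):
    """Right-hand side of d f_OII / dr, obtained by dividing Eq. for O II by v(r)."""
    f_OII = f[0]
    f_OI = 1.0 - f_OII
    p = profiles(r)  # dict with v, n_e, n_HI, n_HII, phi_OI, tau_OI, q_E3, q_T7, q_R4, q_T6
    source = f_OI * (p["phi_OI"] * np.exp(-p["tau_OI"])
                     + p["n_e"] * p["q_E3"] + p["n_HII"] * p["q_T7"])
    sink = f_OII * (p["n_e"] * p["q_R4"] + p["n_HI"] * p["q_T6"])
    return [(source - sink) / p["v"]]

# Example call (r_base, r_top, f_OII_guess and profiles are assumed to be defined):
# sol = solve_ivp(df_OII_dr, (r_base, r_top), [f_OII_guess], args=(profiles,),
#                 dense_output=True)   # integrate outward from the wind base
```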
The number densities as a function of altitude for C ii and O i are calculated as:
n_C II(r) = f_C II(r) f_ C ρ(r)/(f_ H + 4 f_ He + 12 f_ C)m_ H and
n_O I(r) = f_O I(r) f_ O ρ(r)/(f_ H + 4 f_ He + 16 f_ O)m_ H,
where f_ X is the total fraction of X nuclei in the outflow.
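These two conversions are straightforward to express in code; a minimal sketch, with the abundance fractions supplied by the caller, is:

```python
# Converting ionization fractions and total mass density into number densities,
# following the two expressions above (cgs units).
m_H = 1.673e-24  # g

def n_CII(f_CII, f_C, f_H, f_He, rho):
    return f_CII * f_C * rho / ((f_H + 4 * f_He + 12 * f_C) * m_H)

def n_OI(f_OI, f_O, f_H, f_He, rho):
    return f_OI * f_O * rho / ((f_H + 4 * f_He + 16 * f_O) * m_H)
```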
We then proceed to calculate the wavelength-dependent transmission spectrum by assuming that the only sources of opacity in the atmosphere are the C ii and C iii ions and the O i atoms. We use the same simplified ray tracing and line broadening descriptions from <cit.>. The central wavelengths, oscillator strengths and Einstein coefficients of the spectral lines were obtained from the National Institute of Standards and Technology (NIST) database <cit.>[<https://www.nist.gov/pml/atomic-spectra-database>.].
The C ii lines are composed of one transition arising from the ground state and a blended doublet arising from the first excited state (with an energy of 0.00786 eV); the three O i lines present in the COS spectra arise from the ground, first and second excited states. Since the equations above do not take into account excitation, we calculate the population of excited states using the CHIANTI software <cit.> implemented in the ChiantiPy Python package[<https://github.com/chianti-atomic/ChiantiPy/>.] <cit.>. We assume that the upper atmosphere is isothermal and calculate the excited state populations as a function of electron number density. We then multiply the fraction of each state estimated by CHIANTI by the total number densities of C ii and O i, which are used to calculate the transmission spectra.
Table: New reactions used to calculate the distribution of O and C.

  Reaction                         Rate (cm^3 s^-1)                                                   Reference

  Photoionization
  P1  He + hν → He^+ + e           See text                                                           <cit.>
  P2  C + hν → C^+ + e             See text                                                           <cit.>
  P3  C^+ + hν → C^2+ + e          See text                                                           <cit.>
  P4  O + hν → O^+ + e             See text                                                           <cit.>

  Recombination
  R1  He^+ + e → He + hν           4.6 × 10^-12 (300 / T_e)^0.64                                      <cit.>
  R2  C^+ + e → C + hν             4.67 × 10^-12 (300 / T_e)^0.60                                     <cit.>
  R3  C^2+ + e → C^+ + hν          2.32 × 10^-12 (1000 / T_e)^0.645                                   <cit.>
  R4  O^+ + e → O + hν             3.25 × 10^-12 (300 / T_e)^0.66                                     <cit.>

  Electron impact ionization
  E1  C + e → C^+ + e + e          6.85 × 10^-8 (1/(0.193+U))^0.25 exp(-U), U = 11.3/E_e (eV)         <cit.>
  E2  C^+ + e → C^2+ + e + e       1.86 × 10^-8 (1/(0.286+U))^0.24 exp(-U), U = 24.4/E_e (eV)         <cit.>
  E3  O + e → O^+ + e + e          3.59 × 10^-8 (1/(0.073+U))^0.34 exp(-U), U = 13.6/E_e (eV)         <cit.>

  Charge transfer with H and He nuclei
  T1  C^+ + H → C + H^+            6.30 × 10^-17 (300 / T)^-1.96 exp(-170 000/T)                      <cit.>
  T2  C + H^+ → C^+ + H            1.31 × 10^-15 (300 / T)^-0.213                                     <cit.>
  T3  C^2+ + H → C^+ + H^+         1.67 × 10^-4 (10 000 / T)^-2.79 [1 + 304.72 e^(-4.07 T / 10 000)]  <cit.>
  T4  C + He^+ → C^+ + He          2.50 × 10^-15 (300 / T)^-1.597                                     <cit.>
  T5  C^2+ + He → C^+ + He^+       ∼ 1.23 × 10^-9 for T < 15 000 K                                    <cit.>
  T6  O^+ + H → O + H^+            5.66 × 10^-10 (300 / T)^-0.36 exp(8.6/T)                           <cit.>
  T7  O + H^+ → O^+ + H            7.31 × 10^-10 (300 / T)^-0.23 exp(-226/T)                          <cit.>
T_e is the temperature of the electrons, which we assume to be the same as the temperature of the outflow T. E_e is the energy of the colliding electrons at a given temperature T_e.
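For reference, a few of the tabulated coefficients are transcribed below as plain Python functions; the functional forms follow the table above, and the choice of which rows to show is arbitrary.

```python
# A few of the tabulated rate coefficients (cm^3 s^-1); T and T_e in K, E_e in eV.
import numpy as np

def q_R2(T_e):
    """R2: C+ + e -> C + hv (radiative recombination)."""
    return 4.67e-12 * (300.0 / T_e)**0.60

def q_T2(T):
    """T2: C + H+ -> C+ + H (charge transfer)."""
    return 1.31e-15 * (300.0 / T)**-0.213

def q_E1(E_e):
    """E1: C + e -> C+ + e + e (electron-impact ionization); E_e is the electron energy in eV."""
    U = 11.3 / E_e
    return 6.85e-8 * (1.0 / (0.193 + U))**0.25 * np.exp(-U)
```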
|
http://arxiv.org/abs/2307.01805v1
|
20230704162130
|
Enhancing Quantum Otto Engine Performance in Generalized External Potential on Bose-Einstein Condensation Regime
|
[
"Zahara Zettira",
"Ade Fahriza",
"Zulfi Abdullah",
"Trengginas E P Sutantyo"
] |
cond-mat.quant-gas
|
[
"cond-mat.quant-gas",
"cond-mat.stat-mech",
"quant-ph"
] |
trengginasekaputra@sci.unand.ac.id
^1Theoretical Physics Laboratory, Department of Physics, Faculty of Mathematics and Natural Science, Andalas University, Indonesia
We examine a quantum Otto engine using both Bose-Einstein condensation (BEC) and a normal Bose gas as the working medium, trapped in a generalized external potential. We treat the engine both quasi-statically and endoreversibly. Since the expansion and compression strokes are isentropic in both the quasi-static and endoreversible cycles, the expressions for the efficiency are the same. However, the power output of the quasi-static cycle is zero because of its infinitely long stroke times. In contrast, in the endoreversible cycle thermalization with the two reservoirs takes place in finite time. We use Fourier's law of heat conduction to relate the temperature of the medium to that of the reservoir, making the work depend on the heating and cooling stroke times. Moreover, we maximize the power with respect to the compression ratio κ to obtain the efficiency at maximum power (EMP). We find that the EMP is significantly higher when BEC is used as the working medium, whereas the EMP with a normal Bose gas is just the Curzon-Ahlborn efficiency. We also investigate the effect of the thermal contact times with the hot (τ_h) and cold (τ_l) reservoirs on the EMP. We find no significant difference when τ_h = τ_l. Nevertheless, adjusting the heating and cooling stroke times has a significant effect on the EMP, which is much higher for τ_h < τ_l and lower for τ_h > τ_l. We conclude that this partial thermalization enhances the EMP of the engine due to residual coherence.
Enhancing Quantum Otto Engine Performance in
Generalized External Potential on Bose-Einstein Condensation Regime
Trengginas E P Sutantyo^1
August 1, 2023
==================================================================================================================
§ INTRODUCTION
Nowadays, quantum science significantly influences the development of classical physical theories, not least thermodynamics, which in this context has recently become known as quantum thermodynamics <cit.> or nano thermodynamics <cit.>. Since the emergence of quantum science, nano thermodynamics has developed rapidly <cit.> and is becoming fundamental for future advanced technologies <cit.>: quantum information, high-precision sensors, and quantum heat engines (QHE). From the pioneering work on the QHE <cit.> to recent developments <cit.>, this revolution is a bridge to the practical implementation of nano heat engines. Over the last two decades, research on quantum heat engines has revealed a broad range of potential benefits <cit.>. The motivation is to bring the QHE as close as possible to the practical world as a worthwhile device.
A quantum heat engine (QHE) is a device that uses quantum matter or particles as its working medium in order to convert heat into work <cit.>. The QHE operates using cycles that resemble those of classical thermodynamics, such as the Otto <cit.>, Lenoir <cit.>, Diesel <cit.>, and Carnot <cit.> cycles. Nevertheless, the physical quantities in a quantum thermodynamic cycle differ from those in a classical cycle; this is the underlying reason why the efficiency of a quantum heat engine can outperform that of the most efficient classical heat engine, the Carnot engine <cit.>. Under certain conditions, the work produced by a QHE exceeds the maximum value of a heat engine operating classically <cit.>.
A classical heat engine that operates with a reversible cycle produces the highest efficiency, e.g., the classical Carnot engine. However, the Carnot engine operates via a quasistatic process that lasts infinitely long and produces zero power output, making it impossible to realize. Curzon and Ahlborn <cit.> applied an endoreversible approach, in which the process is accelerated and the power output produced by the engine is finite. Although the efficiency η_CA is lower than the efficiency of a quasistatic Carnot engine η_C, an endoreversible Carnot engine is more realistic and possible to realize.
Recently, many heat engine models have been proposed to investigate the endoreversible cycle because it is more realistic to implement <cit.>, and some of these endoreversible heat engines even have efficiencies that exceed the Curzon-Ahlborn limit <cit.>. Endoreversible heat engine models use a variety of media, such as a classical gas <cit.>, a single ion <cit.>, or even quantum matter in the form of fermions <cit.> and bosons <cit.>.
Interestingly, bosons as a working medium produce greater efficiency than fermions <cit.>, owing to symmetry advantages that bosons possess and fermions do not. This motivates the present work to examine the role of bosons as a working medium, especially in the Bose-Einstein condensation (BEC) regime, which has even been claimed to provide better performance than an ordinary Bose gas <cit.>. The use of BEC as a heat-engine working medium is being intensively investigated <cit.>. When a gas of bosonic atoms is cooled below a certain critical temperature T_c, such that most of the atoms condense into the lowest quantum state, it undergoes a phase transition into the BEC regime <cit.>. BEC is a quantum phenomenon that has been realized in several vapors, i.e., rubidium <cit.>, sodium <cit.>, and lithium <cit.>. Moreover, BEC has recently been observed in a 3D box potential <cit.>.
In this study, we examine a quantum Otto engine that exploits the thermodynamic properties of BEC as a working medium under the influence of a 3D generalized external trapping potential. We modify that typical potential <cit.> in order to investigate performance in other circumstances, especially considering the possibility of cycles that exploit the external trapping potential as a bridge to extending the analysis to the nonequilibrium regime <cit.>. Moreover, with this potential the engine is expected to achieve better performance, considering that n→∞, i.e., the box potential, provides a lower heat capacity (c_V) than n=2, i.e., the harmonic potential <cit.>. This affects the efficiency of the Otto engine η(c_V), with the box potential achieving higher efficiency than the harmonic potential. In addition, we compare the performance of the quantum Otto engine in terms of work and efficiency in both the condensed and non-condensed phases. We also investigate the engine performance through the efficiency at maximum power (EMP) by maximizing the power output with respect to the compression ratio. Lastly, we explore the engine performance by varying the thermalization times during the cycle, which at specific values boost the EMP significantly.
§ THERMODYNAMIC PROPERTIES OF BEC IN GENERALIZED EXTERNAL POTENTIAL
We consider N particles of an ideal Bose gas distributed over the energy levels of the system. These energy levels are determined by the Hamiltonian acting on the aforementioned particles.
In this study, we define ideal Bose gas in generalized external trapping potential as formulated below
V(r)= ε_0( |x/a|^p + |y/a|^q + | z/a|^l)
where x, y, and z are coordinate of the particles' positions in space, ε_0 is a constant in energy dimension, and a is radius of the potential. Here, we set p = q = l = n, so for certain n numbers, the potential has shape as Figure <ref>.
The density of states for particles distributed in the potential of Equation <ref> is given by <cit.>
ρ(ε) = [ 2π(2m)^3/2/h^3] a^3/ε_0^3/nε^λ F(n,n,n)
where λ = 3/n + 1/2 and F(n,n,n) is defined as

F(n,n,n) = [ ∫_-1^1 (1 - X^n)^1/2+2/n dX ] × [ ∫_-1^1 (1 - X^n)^1/2+1/n dX ] × [ ∫_-1^1 (1 - X^n)^1/2 dX ].
Grand potential for the boson particle system is given by <cit.>
Ω = k_BT ∑_iln(1-ze^-βε_i)
where z = e^μ/k_BT is the fugacity. Since in the BEC case a large fraction of the bosons condense into the ground state, we can consider k_BT ≫ε_i+1-ε_i. Therefore, the system can be described as a continuum of states plus a discrete ground state. By substituting Equation <ref> into Equation <ref>, we obtain
Ω = Ak_BTa^3∫^∞_0ε^3/n+1/2ln(1-ze^-εβ)dε
= -A'a^3(k_BT)^3/n+5/2 g_3/n+5/2 (z)
with A = [ 2π(2m)^3/2/h^3] F(n,n,n)/ε_0^3/n and A' = Γ( 3/n + 5/2) A / (3/n+3/2), which are constants, and b = 3/n+3/2 is used to shorten the notation. Furthermore, g_3/n+5/2(z) is the Bose function, defined by the following equation <cit.>
g_v(z) = 1/Γ(v)∫^∞_0 dx 1/z^-1e^x-1
Bose function can also be defined as a series that is similar to the Zeta function series,
g_v(z) = ∑^∞_n=1z^n/n^v
and can also be expressed in the following recursion relation,
g_v-1(z) = ∂/∂ln(z) g_v (z)
Based on the relation Ω = U - TS - μ N <cit.>, we can derive N, the number of excited atoms at temperature T:
N = -( ∂Ω/∂μ)_T,a = A'a^3(k_BT)^b g_b (z)
BEC occurs when μ increases monotonically until its value approaches the ground-state energy (ε_0). If ε_0 = 0 <cit.>, then BEC occurs at μ = 0 or z = 1, so we can derive the critical temperature, the temperature at the phase transition, as
T_c = [ N/A'a^31/ζ(b)]^1/b1/k_B
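As a numerical illustration of this expression, the sketch below evaluates F(n,n,n) by quadrature and then T_c for a few values of n. The particle number, mass, ε_0 and a are assumed illustrative values, and the integrand is written with |X|^n, consistent with the absolute values in the potential, so the printed temperatures are not the ones quoted later in Table <ref>.

```python
# Illustrative computation of T_c for the generalized potential (SI units).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

h = 6.626e-34        # Planck constant [J s]
k_B = 1.381e-23      # Boltzmann constant [J/K]
m = 87 * 1.66e-27    # 87Rb mass [kg] (assumed)
eps0 = 1.0e-30       # potential energy scale [J] (assumed)
a = 1.0e-6           # potential radius [m] (assumed)
N = 1.0e5            # number of bosons (assumed)

def F(n):
    I = lambda p: quad(lambda X: (1.0 - np.abs(X)**n)**p, -1.0, 1.0)[0]
    return I(0.5 + 2.0 / n) * I(0.5 + 1.0 / n) * I(0.5)

def T_c(n):
    b = 3.0 / n + 1.5
    A = (2.0 * np.pi * (2.0 * m)**1.5 / h**3) * F(n) / eps0**(3.0 / n)
    A_prime = gamma(3.0 / n + 2.5) * A / b      # A' = Gamma(3/n + 5/2) A / (3/n + 3/2)
    return (N / (A_prime * a**3 * zeta(b)))**(1.0 / b) / k_B

for n in (1, 2, 3, 50):
    print(n, T_c(n))
```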
The entropy S and the internal energy U of the BEC can be derived respectively, as follow
S = -( ∂Ω/∂ T)_T,a,μ =
A'a^3k_B(k_BT)^b[ (b+1) g_b+1 (1) ] T ≤ T_c
A'a^3k_B(k_BT)^b[ (b+1) g_b+1 (z) - ln zg_b (z) ] T ≥ T_c
and
U = Ω + TS + μ N =
(b) A'a^3(k_BT)^b+1 g_b+1 (1) T ≤ T_c
(b) A'a^3(k_BT)^b+1 g_b+1 (z) T ≥ T_c
§ ENDOREVERSIBLE OTTO CYCLE
The main principle of endoreversible thermodynamics is local equilibrium. The transformation of working medium during isentropic stroke takes place slowly so that the working medium is always able to reach equilibrium. However, because the cycle time is finite, the working medium does not have time to reach equilibrium with the reservoir. So, from the reservoir point of view, the process is irreversible, but from the working medium point of view itself, the process is reversible <cit.>.
An ideal Otto cycle consists of four strokes: isentropic compression (stroke 1-2), isochoric heating (stroke 2-3), isentropic expansion (stroke 3-4), and isochoric cooling (stroke 4-1) <cit.>, as shown in Figure <ref>. By varying the parameter a, the potential radius, we vary the external potential of the system, which changes the order of the system's energy levels and can excite the system. On the other hand, adding thermal energy to the system can also cause excitation. The energy transfer from the external field and the thermal energy are arranged in such a way that the cycle is analogous to the classical Otto cycle.
During the isochoric heating stroke, the external field remains constant at a_l, and hence so does the volume; the system makes contact with the hot reservoir, and heat flows into the system. Based on the first law of thermodynamics, the amount of heat transferred is
Δ Q_in = U (T_3, a_l) - U (T_2, a_l)
During the heating stroke, the heat transfer obeys Fourier's law of heat conduction,
dT/dt = -α_h( T-T_h)
where α_h is a constant that depends on the thermal conductivity and thermal capacity of the medium during the heating stroke. The boundary conditions at the beginning and the end of this isochoric stroke are
T(0) = T_2 and T(τ_h) = T_3, where T_2 < T_3≤ T_h
T_3 - T_h = (T_2 - T_h) e^-α_hτ_h
when the heating stroke time goes to infinity, τ_h→∞, we have T_3 = T_h and the quasi-static condition is recovered.
During the isentropic expansion stroke, the external field is varied from a_l to a_h, which means the volume of the system changes from V_l to V_h. However, the system is detached from the reservoir, so there is no heat exchange with the system (Δ Q = 0). From the first law of thermodynamics, we get
W_exp = U(T_4, a_h) - U(T_3, a_l)
After the volume reaches V_h, the system is reconnected to the cold reservoir. During this stroke, the external field remains constant at a_h. Similar to the isochoric heating stroke, the ejected heat is the internal energy difference from state 4 to state 1,
Δ Q_out = U (T_1, a_h) - U (T_4, a_h)
The heat transfer during the cooling stroke also obeys Fourier's law of heat conduction,
dT/dt = -α_l( T-T_l)
where a_l is a constant of thermal conductivity and thermal capacity of the medium during cooling stroke. This isochoric stroke has boundary conditions as
T(0) = T_4 and T(τ_l) = T_1, where T_4 > T_1≥ T_l
with solution
T_1 - T_l = (T_4 - T_l) e^-α_lτ_l
the quasi-static condition is achieved when the cooling stroke time goes to infinity, τ_l→∞; in other words, T_1 = T_l.
Furthermore, the system is again disconnected from the reservoir and the field is varied from a_h to a_l to fulfill the isentropic condition. As there is no heat transfer within the system, the work during the compression stroke is given by
W_comp = U(T_2, a_l) - U(T_1, a_h)
Unlike the quasi-static Otto cycle, in which all four strokes are reversible, in the endoreversible Otto cycle two strokes are irreversible and two strokes are reversible. Irreversibility occurs when the working medium comes into contact with the reservoir during the isochoric strokes, which causes leakage during the heat transfer <cit.>. On the other hand, during the isentropic strokes, no heat leakage occurs because the system is isolated from external energy exchange <cit.>. This idea has been demonstrated in recent studies <cit.>.
Next, we determine the thermal efficiency by comparing the total work done in a cycle with the heat input,
η = -(W_exp+W_comp)/Q_in
whilst the power output is determined by the total work done and the time for one cycle
P = -(W_exp+W_comp)/[γ(τ_h+τ_l)]
where γ is a multiplier constant that accounts for the total time of one cycle, including the time spent in the isentropic strokes.
To express T_1, T_2, T_3, and T_4 in terms of the controllable parameters (T_h, T_l), Equations <ref> and <ref> are not enough, so another two equations are needed. During the isentropic strokes, the entropy and fugacity are constant <cit.>, so based on Equation <ref>, within the isentropic strokes we obtain the temperature relations
T_2 = κ^-1/b T_1 and T_4 = κ^1/b T_3
where κ = ( a_l/a_h)^3 is the volume compression ratio.
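Combining the relaxation solutions of Equations <ref> and <ref> with these isentropic relations gives a 2×2 linear system for the unknown stroke temperatures T_1 and T_3. A minimal numerical sketch, with purely illustrative parameter values, is:

```python
# Solve the finite-time thermalization relations together with the isentropic
# relations T_2 = kappa^(-1/b) T_1 and T_4 = kappa^(1/b) T_3 for T_1 and T_3.
import numpy as np

def stroke_temperatures(T_h, T_l, kappa, b, alpha_h, tau_h, alpha_l, tau_l):
    x_h = np.exp(-alpha_h * tau_h)   # residual "memory" of the heating stroke
    x_l = np.exp(-alpha_l * tau_l)   # residual "memory" of the cooling stroke
    u = kappa**(1.0 / b)
    # T_3 - (x_h / u) T_1 = T_h (1 - x_h);  T_1 - (x_l u) T_3 = T_l (1 - x_l)
    A = np.array([[1.0, -x_h / u],
                  [-x_l * u, 1.0]])
    rhs = np.array([T_h * (1.0 - x_h), T_l * (1.0 - x_l)])
    T_3, T_1 = np.linalg.solve(A, rhs)
    return T_1, T_3

# Illustrative values only (temperatures in K, times in units of 1/alpha):
print(stroke_temperatures(500e-9, 300e-9, 0.5, 3.0, 1.0, 2.0, 1.0, 2.0))
```

In the limit τ_h, τ_l → ∞ the exponential factors vanish and the solution reduces to T_3 = T_h and T_1 = T_l, recovering the quasi-static case.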
§ RESULT AND DISCUSSION
§.§ Quasi-static Performance
Using Equation <ref>, we obtain the critical temperature for n = 1, 2, 3 and n →∞; as shown in Table 1, T_c decreases as n increases. T_c for n = 2 is the same as Myers et al. obtained in their research <cit.>, and other related data have also been adjusted according to reference <cit.>. This decrease in T_c is related to the volume of each potential, as shown in Figure <ref>, where the volume is proportional to n. Thus, at the same number of particles N, the density of atoms in a potential with small V is greater than in a potential with large V. This is consistent with the results obtained by <cit.> that an increase in the density of atoms causes an increase in T_c; hence at n →∞ the gas becomes less dense and T_c is correspondingly small.
In the quasi-static cycle all four strokes are reversible; as a consequence, internal friction due to expansion and compression during the isentropic strokes is negligible. Since the thermal contact between the medium and the reservoir lasts for a long time, we obtain from Equations <ref> and <ref> that T_3 = T_h and T_1 = T_l. For the cycle in the quasi-static regime, we present the results in Figures <ref> and <ref>. Figure <ref> represents the numerical solutions of the quasi-static Otto cycle operated in the non-condensed phase, i.e., with a normal Bose gas at T ≥ T_c, and Figure <ref> represents the numerical solutions of the quasi-static Otto cycle operated in the condensed (BEC) phase, i.e., at T ≤ T_c. In this simulation the parameters T_h (hot reservoir temperature), T_l (cold reservoir temperature) and a_l (initial potential radius) are kept constant, whilst a_h (final potential radius) is varied.
Figure <ref> displays the work and efficiency for several values of n for the medium in the non-condensed phase. The left panel shows the work curve versus a_h, and on each curve the maximum work is marked with a small black dot. For the medium in the non-condensed phase, T_l and T_h are the same for all n, namely T_l = 300 nK and T_h = 500 nK. This is intended so that the resulting work depends only on n and a comparison can be made. The right panel displays the efficiency curve versus a_h together with the Carnot efficiency (η_C) at a constant value of η = 0.4. The value of a_h at maximum work is also shown on this efficiency curve and is denoted as the efficiency at maximum work.
To obtain the analytical expression for the total work in a cycle, we use Equation <ref>, in which the first line is the solution for the condensed phase and the second line is the solution for the non-condensed phase. The total work for the non-condensed phase is written as
W_t_ncon = bA'k^b+1_Ba^3_l( 1-κ^1/b) [ T^b+1_h g_b+1(z_3) - κ^-(b+1)/bT^b+1_l g_b+1(z_2) ]
z_2 and z_3 are the fugacities at points 2 and 3, respectively. Since there is no heat exchange between the medium and the environment during the isentropic strokes, the number of atoms N is fixed <cit.>. As a consequence, Equation <ref> must be constant during this process. By substituting the temperature-volume relation from Equation <ref>, we find that the fugacity must be constant during an isentropic stroke, that is, z_1(a_h, T_1) = z_2(a_l, T_2) and z_3(a_l,T_3) = z_4(a_h,T_4). Furthermore, in the high-temperature limit, we only need the first term of the expansion of Equation <ref> because N/[A'a^3(k_BT)^b] ≪ 1 for an appropriate value of T. Therefore, Equation <ref> reduces to the classical formulation
W_t = b N k_B( 1 - κ^1/b) [T_h - κ ^-1/bT_l]
For fixed T_h and T_l, the total work decreases as n increases, as shown in Figure <ref>. These results are in agreement with <cit.>, where, at constant external energy ϵ_0 and varied volume, the total work produced in a cycle decreases with an increasing degree of potential n.
In general, the efficiency of Equation <ref> can be expressed as
η = 1 - κ^1/b
Furthermore, efficiency increases with increasing n: from Figure <ref> we see that for small compression (small a_h), engines operating at high n are more efficient than engines operating at small n. These results are also consistent with those obtained in references <cit.>.
The efficiency at maximum work is equal for all n and is exactly the Curzon-Ahlborn efficiency <cit.>. By differentiating Equation <ref> with respect to κ, we get κ^1/b_max=(T_l/T_h)^1/2, so η = 1-(T_l/T_h)^1/2. In Section <ref>, we also see that the efficiency at maximum power for the medium in the non-condensed phase is equivalent to the Curzon-Ahlborn efficiency.
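This optimization is easy to verify symbolically; the sketch below drops the constant prefactor b N k_B and maximizes the remaining factor of the classical-limit work over u = κ^1/b.

```python
# Symbolic check: maximizing W ~ (1 - u)(T_h - T_l/u) over u = kappa^(1/b)
# gives u = sqrt(T_l/T_h), i.e. eta = 1 - sqrt(T_l/T_h) (Curzon-Ahlborn).
import sympy as sp

u, T_h, T_l = sp.symbols("u T_h T_l", positive=True)
W = (1 - u) * (T_h - T_l / u)       # prefactor b N k_B dropped; it does not move the optimum
u_opt = sp.solve(sp.diff(W, u), u)  # positive root: sqrt(T_l/T_h)
print(u_opt)
print(sp.simplify(1 - u_opt[0]))    # 1 - sqrt(T_l/T_h)
```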
In Figure <ref> we present the total work for several n for the medium in the condensed phase. T_l and T_h are chosen such that for each n the medium is always below the critical temperature. Note that we cannot display all n in one figure as we did in Figure <ref>; this is due to the difference in critical temperature for each n. We obtain the analytical expression for the total work for the medium in the condensed phase as
W_t_con = bA'k^b+1_Ba^3_l( 1-κ^1/b) ζ(b+1) [ T^b+1_h - κ^-(b+1)/bT^b+1_l]
ζ(b+1) is the zeta function originating from Equation <ref>, i.e., when z=1. Because the chosen temperatures decrease as n increases, the maximum work obtained also decreases as n increases, as shown in Figure <ref>. However, unlike the non-condensed phase, where the optimum efficiency is the same for all n, the efficiency at maximum work increases with increasing n. This shows that the optimum efficiency for the medium in the condensed phase depends on the properties of the medium.
In Section <ref>, it is clearly seen that the efficiency at maximum power for the medium in the condensed phase is higher than the Curzon-Ahlborn efficiency. Note that work in the condensed phase is not explicitly a function of N (the number of particles) because the number of condensed bosons is a function of temperature. Referring to Equation <ref>, we obtain the fraction of excited bosons relative to the total N (for T ≤ T_c) as
N_T/N = ( T/T_c)^b
The maximum excitation occurs when T=T_c whilst minimum excitation occurs when all bosons are condensed, viz. when T=0.
To obtain the ideal compression ratio (at which W is maximized), Equation <ref> is differentiated with respect to κ:
T^b+1_l[ (b+1) - b(κ^*)^1/b] - T^b+1_h (κ^*)^(2+b)/b = 0
where κ^* and a^* are the compression ratio and the potential radius at maximum work, respectively. The ideal κ^* and a^* for any n can be obtained by solving Equation <ref>; the values are displayed in Table <ref>.
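Numerically, the stationarity condition above can be solved with a bracketing root finder; in the sketch below the reservoir temperatures and b are illustrative values, not the ones used in the paper.

```python
# Solve T_l^(b+1) [(b+1) - b kappa^(1/b)] - T_h^(b+1) kappa^((b+2)/b) = 0 for kappa*.
from scipy.optimize import brentq

def kappa_star(T_h, T_l, b):
    g = lambda kappa: (T_l**(b + 1) * ((b + 1) - b * kappa**(1.0 / b))
                       - T_h**(b + 1) * kappa**((b + 2.0) / b))
    # g > 0 as kappa -> 0 and g < 0 at kappa = 1 (for T_h > T_l), so a root is bracketed.
    return brentq(g, 1e-6, 1.0 - 1e-9)

b = 3.0                          # corresponds to n = 2 (assumed example)
print(kappa_star(500e-9, 300e-9, b))
```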
Based on the table, the values do not change monotonically as n increases. This is because for the spherical case (n = 2) the total work is distributed most evenly, so the required compression ratio is smaller than for the other shapes. We are not primarily interested in an engine that operates quasi-statically, because its infinitely long stroke times make the power it produces vanishingly small. We are more interested in the finite-time Otto cycle: the contact between the medium and the reservoir during the isochoric strokes is finite, so the medium is not in equilibrium with the thermal reservoir, while the expansion and compression isentropic strokes remain slow and quasi-static.
§.§ Endoreversible Performance
First of all, we review the efficiency and power output in the condensed phase, when the medium is below the critical temperature. By substituting Equations <ref>, <ref>, and <ref> into Equation <ref>, we derive the same efficiency as in the quasi-static case:
η_T ≤ T_C = 1 - κ^1/b
while the power output is derived by combining Equations <ref> and <ref> with Equation <ref>, together with the temperature relations in Equations <ref>, <ref>, and <ref>. As a result, the power output in the condensed phase can be written as
P_T ≤ T_C = bA'k^b+1_B g_b+1(1)a^3_l(1 - κ^1/b) [[ T_hκ^1/b e^α_lτ_l( e^α_hτ_h-1 ) + T_l( e^α_lτ_l-1 ) ]^b+1 - [ T_l e^α_hτ_h( e^α_lτ_l-1 ) + T_hκ^1/b( e^α_hτ_h-1 ) ]^b+1]/γ (τ_l+τ_h)κ^b+1/b( e^α_hτ_h+α_lτ_l-1 )^b+1
where T_h and T_l are the hot and cold reservoir temperatures, respectively, α_h and α_l represent the thermal conductivities during contact with the hot and cold reservoirs, and the stroke times of the heating and cooling processes are denoted by τ_h and τ_l.
Furthermore, we obtain that the efficiency in the non-condensed phase is the same as in the condensed phase, i.e.,
η_T ≥ T_C = 1 - κ^1/b.
Since the efficiency is determined by the work done during expansion and compression, and in the endoreversible cycle these strokes are operated under isentropic (quasi-static) conditions, no internal friction arises that would reduce the efficiency <cit.>. However, the power output is nonzero because the isochoric strokes take place in finite time. This indicates that the difference between the quasi-static and endoreversible cases has no impact on the engine efficiency, but it does affect the power. As in the formulation of the quasi-static work (Equation <ref>), the fugacity in Equation <ref> is approximated by only its first term because the value of z is small. T_3 and T_2 are derived from Equations <ref>, <ref>, and <ref>, so that the obtained power output depends explicitly on N
P_T ≥ T_C = bNk_B(1 - κ^1/b) [[ T_hκ^1/b e^α_lτ_l( e^α_hτ_h-1 ) + T_l( e^α_lτ_l-1 ) ] - [ T_l e^α_hτ_h( e^α_lτ_l-1 ) + T_hκ^1/b( e^α_hτ_h-1 ) ] ]/γ (τ_l+τ_h)κ^1/b( e^α_hτ_h+α_lτ_l-1 )
Note that at T ≥ T_c all bosons are in the non-condensed phase and there are no condensed bosons yet. In contrast, at T ≤ T_c the number of condensed bosons depends on temperature (Equation <ref>), so N cannot be written explicitly. By replacing κ^1/b with 1-η, the power can also be represented as a function of efficiency. We visualize the power as a function of efficiency (η) and isochoric stroke time (τ) in a 3D plot in Figure <ref> for the medium in the BEC phase and in Figure <ref> for the medium in the non-BEC phase.
We find that longer stroke times reduce the power. This is due to the dependence of the denominator of Equation <ref> on the stroke time. The power is also minimized as the efficiency approaches 1; this shows that engine performance is judged not only by its efficiency but also by the amount of power produced. For that reason, it is interesting to know at which efficiency the power becomes maximum. The efficiency at maximum power (EMP) is marked by the peak of each curve. The apex of this curve shifts to the left as τ increases, which means that the EMP also decreases as τ increases. However, at high n the shift of the peak is small, so increasing τ does not have a significant effect on the decrease of the EMP.
We used T_h and T_l in this non-condensed phase slightly higher than the critical temperature of n=1, so that none of the n has condensed. We also use these values in representing the EMP in Figure <ref>. Although the shapes of the curves are similar for all n, the power P decreases with increasing n, which is also confirmed by Figure <ref>a. Unlike the power in the BEC phase, where the peak of the curve shifts slightly with increasing τ, in the non-condensed phase the peak of the curve does not change with increasing τ. This shows that the efficiency at maximum power in the non-condensed phase does not depend on τ. As found in the quasi-static results, the efficiency at maximum power for the medium in the non-condensed phase is the Curzon-Ahlborn efficiency, which depends only on the reservoir temperatures. The Curzon-Ahlborn efficiency is given by η = 1-(T_l/T_h)^1/2, so we get η=0.22, exactly as shown in Figure <ref>. The optimum efficiency achieved in the non-condensed phase is also lower than in the BEC phase.
It is well established that there is an inherent trade-off between efficiency and power <cit.>. Efficiency is maximized in the limit of long stroke times, but the power is then minimized because of the dependence of the denominators in Equations <ref> and <ref> on the stroke time. The highest efficiency is bounded by the Carnot efficiency, reached when κ^1/b = T_l/T_h. By substituting this into Equation <ref> or <ref>, we find that the power vanishes at that value in both the condensed and non-condensed phases. It is therefore important to find an optimum efficiency, the efficiency at maximum power (EMP). Similar to what Curzon and Ahlborn <cit.> did, followed by many researchers <cit.>, we determine the EMP by maximizing Equations <ref> and <ref> with respect to κ and substituting the results back into Equations <ref> and <ref>, as shown in Figure <ref>.
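A minimal numerical sketch of this maximization for the condensed phase is given below. It rebuilds the work from the finite-time stroke temperatures T_1 and T_3 (dropping the constant prefactor bA'k_B^(b+1)a_l^3 ζ(b+1) and the factor γ, which do not move the optimum), so the parameter values and printed numbers are illustrative only, not the results quoted in the figures.

```python
# EMP of the endoreversible condensed-phase cycle: solve for T_1, T_3 at each kappa,
# evaluate the (rescaled) power, and maximize over kappa.
import numpy as np
from scipy.optimize import minimize_scalar

def power(kappa, T_h, T_l, b, alpha_h, tau_h, alpha_l, tau_l):
    x_h, x_l = np.exp(-alpha_h * tau_h), np.exp(-alpha_l * tau_l)
    u = kappa**(1.0 / b)
    A = np.array([[1.0, -x_h / u], [-x_l * u, 1.0]])
    rhs = np.array([T_h * (1.0 - x_h), T_l * (1.0 - x_l)])
    T_3, T_1 = np.linalg.solve(A, rhs)                     # finite-time stroke temperatures
    work = (1.0 - u) * (T_3**(b + 1) - u**(-(b + 1)) * T_1**(b + 1))  # prefactor dropped
    return work / (tau_h + tau_l)

def emp(T_h, T_l, b, alpha_h, tau_h, alpha_l, tau_l):
    res = minimize_scalar(lambda k: -power(k, T_h, T_l, b, alpha_h, tau_h, alpha_l, tau_l),
                          bounds=(1e-4, 1.0 - 1e-4), method="bounded")
    return 1.0 - res.x**(1.0 / b)        # eta = 1 - kappa^(1/b) at maximum power

b = 4.5   # corresponds to n = 1 (assumed example)
print(emp(300e-9, 200e-9, b, 1.0, 0.5, 1.0, 5.0))   # tau_h < tau_l
print(emp(300e-9, 200e-9, b, 1.0, 5.0, 1.0, 0.5))   # tau_h > tau_l
```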
Since the critical temperature for n →∞ is the lowest, we used T_h and T_l comparable to the critical temperature of n →∞, so that all n are in the condensed phase. As shown in Figure <ref>a, the EMP decreases with increasing n and with increasing cold-reservoir temperature T_l, but it is still higher than the Curzon-Ahlborn efficiency. Because work in a quantum system is manifested by the difference between the initial and final energies of the system <cit.>, this result can be linked to the change in internal energy during expansion and compression. Based on Equation <ref>, the internal energy depends on terms of order 1/n, so n=1 gives the highest internal energy, which decreases with increasing n. The power and EMP therefore decrease with increasing n. Moreover, the EMP also shifts toward the Curzon-Ahlborn efficiency as the temperature of the cold reservoir T_l increases, as can be seen in Figures <ref>b and <ref>c.
Furthermore, although n = 1 produces the highest EMP, the power generated in the picokelvin range is the lowest because P depends on the power b+1 (see Equation <ref>). In addition, n = 1 has the largest b while T ≪ 1, so the power generated is also very small. Therefore, we present the EMP for n = 1 separately in Figure <ref>b in the nanokelvin range. In that case, we use T_h = 300 nK, slightly higher than its critical temperature, resulting in much higher power than in the picokelvin range. Classically, work is produced from the movement of the piston when pressure is applied <cit.>, but the atoms in the condensed phase cannot accept or use the pressure exerted on them <cit.>. Nevertheless, <cit.> claimed that work must be generated by the fraction of bosons out of the condensate, in the thermal cloud. At n=1, the picokelvin temperature is very far below the critical temperature (see Equation <ref>), and almost all of the bosons have condensed, so only a few are in the thermal cloud. Meanwhile, in the nanokelvin temperature range, which is slightly below the critical temperature, the bosons are just starting to condense, so some of them are still in the thermal cloud. For this reason, the work and power generated in the nanokelvin range are greater than in the picokelvin range; this is shown in Figure <ref>, where the EMP in Figure <ref>b is higher than in Figure <ref>a.
In the non-condensed phase, we obtained the optimal compression ratio κ_max=(T_l/T_h)^b/2. Substituting it into Equation <ref> gives η=1-(T_l/T_h)^1/2, which is the Curzon-Ahlborn efficiency. As shown in Figure <ref>c, the EMP has the same value for all n. As we did previously, in the high-temperature limit we took N/[A'a^3(k_BT)^b] ≪ 1, where the term N/A'a^3 represents the quantum property of the bosons in a given potential and has the dimensions of energy. If (k_BT)^b is much greater than N/A'a^3, then the quantum properties of the gas can be neglected. Therefore, the energy of the bosons, which was initially discrete, becomes effectively continuous. When the energy of the system matches classical conditions, the efficiency at maximum power is equal to the Curzon-Ahlborn efficiency <cit.>. However, when N/A'a^3 is comparable to or greater than (k_BT)^b, the quantum effects of the gas cannot be neglected. Hence, the EMP also depends on the quantum properties of the gas, including the potential. Each potential gives different energy eigenvalues <cit.>, which affects the performance of the engine. Not only the difference in energy quantization affects performance; in this study, we found that variations in the thermal contact time with the reservoir during the isochoric strokes also affect the performance, especially the EMP. We visualize the EMP for the working medium in the BEC phase as a function of the cold-bath temperature for different heating and cooling stroke times, τ_h and τ_l respectively. Here we only present the EMP in the BEC phase, since in the non-condensed phase the EMP is independent of the heating and cooling stroke times.
In Figure <ref>a, we consider the engine operating with equal heating and cooling stroke times, τ_l = τ_h, which are varied for all n. On the other hand, in Figure <ref>b, the heating and cooling stroke times are varied independently only for n=1, the highest EMP producer, whereas all other parameters are kept constant. As seen in Figure <ref>a, increasing the stroke time decreases the EMP, although at higher n the difference is not significant. This result differs from prior research <cit.>, in which increasing the stroke time actually increases the EMP produced. Ideally, a longer stroke time makes the engine more nearly reversible, and an engine that exhibits reversibility provides the highest efficiency <cit.>. However, there are also some interesting findings in this study. Figure <ref>b shows that the EMP produced is significantly higher for a short heating stroke time and a long cooling stroke time. Conversely, for a long heating stroke time and a short cooling stroke time, the EMP produced is lower than in the other configurations and is very close to the Curzon-Ahlborn efficiency. Furthermore, the EMP at τ_h<τ_l is higher than the EMP at τ_h=τ_l. Physically, this is because the medium extracts more energy from the hot reservoir when the heating stroke time is long, so that the temperature of the medium approaches that of the hot reservoir itself. In contrast, for a short heating and long cooling stroke time, the medium ejects more energy into the cold reservoir, so its temperature drops. This is in agreement with Figure <ref>, which shows that the EMP is high at lower temperatures and vice versa.
The increase in efficiency at short, finite heating stroke times is known as the effect of residual coherence due to incomplete thermalization <cit.>. In order to obtain a higher power output, the cycle time should be cut down. There are at least two ways to do this: first, by cutting the time of the expansion and compression strokes (shortcut to adiabaticity) <cit.>, and second, by cutting the thermal contact time between the medium and the reservoir <cit.>. Nevertheless, in this study we use an endoreversible cycle in which the expansion and compression processes are isentropic, i.e., quasi-static, so we can only cut the time of the isochoric processes. Furthermore, due to rapid compression and expansion, internal friction (quantum friction) arises and directly reduces the efficiency <cit.>. Finite-time transformations also produce entropy, which drives the engine toward irreversibility <cit.>. However, this irreversibility occurs not only during the expansion and compression strokes, but also during the heating and cooling strokes. Because of the short heating and cooling stroke times, the medium never reaches thermal equilibrium with the reservoir, thereby leaving the medium in a coherent state in the energy basis |E_n⟩; this is known as residual coherence. The coherence in this energy basis can be associated with entropy production and quantum friction <cit.>. A longer heating stroke time increases entropy production, because it allows more heat to flow from the hot reservoir to the medium. Mathematically, entropy measures the amount of heat that flows per unit increase in temperature <cit.>, so the longer the contact time of the medium with the hot reservoir, the more the entropy increases. The heat flow does not continue indefinitely: thermal equilibrium is reached at a certain temperature, after which the entropy no longer increases. Entropy production is closely related to the irreversibility of the engine. The greater the entropy production, the more irreversible the engine, which then results in a decrease in efficiency <cit.>.
In the cooling process, heat is transferred from the medium to the cold reservoir, so the change in entropy is negative. In finite-time cycles such as the endoreversible cycle, the entropy production during heating and cooling does not cancel, i.e., Δ S_heating+Δ S_cooling > 0. The amount of heat that flows during the heating stroke is not the same as the amount that flows during the cooling stroke. Only a reversible engine such as the Carnot engine produces zero entropy during the cycle <cit.>. Thus, the efficiency of an irreversible engine will never reach the Carnot efficiency. However, this entropy production can be minimized by reducing the amount of heat transferred to the medium or by reducing the heating stroke time. This statement is in agreement with the results obtained in Figure <ref>b: the EMP is relatively higher when the heating time τ_h is shortened.
Based on Figure <ref>b, both τ_l and τ_h actually affect the EMP. It can be seen that the blue and black dashed curves correspond to the same τ_h but different τ_l, and the EMP is higher for the curve with the longer cooling stroke time. Physically, the temperature of the medium for the blue dashed curve is lower than for the black dashed curve, because more heat is ejected from the medium into the cold reservoir. Since in a BEC the fraction of condensed atoms depends on temperature (Equation <ref>), the fraction of condensed bosons for the blue dashed curve is larger than for the black one. When condensation forms, these bosons occupy the lowest quantum state or, macroscopically, the wave properties of each atom collapse and interfere constructively with each other to form a single wave function <cit.>. The coherence effect arises because the atoms in the condensate occupy the same quantum state, described by this single wave function <cit.>. In this sense, the blue case is "more coherent" than the black one. Thus, it can be said that the presence of this coherence under incomplete thermalization increases the efficiency <cit.>.
This study shows that the EMP is higher when using BEC as the working medium than with a normal Bose gas (non-condensed conditions). The best performance is also given by the potential with n=1, because it produces the highest power and EMP. In addition, its critical temperature is relatively easy to reach and close to the temperatures of experimental realizations of BEC itself. Nevertheless, the power generated at n=1 is still very small compared to a real engine, because the engine operates at very low temperatures, even though the efficiency obtained with BEC is much higher than that provided by a classical gas <cit.>. Since the power is determined by the change in internal energy during expansion and compression, it is possible to boost the power by adding more particles or by increasing the external potential ϵ_0 (in this calculation we used ϵ_0 corresponding to a prior study <cit.>). However, increasing the number of particles increases the density of the gas, so the interactions between atoms can no longer be neglected. Moreover, based on experimental results, BEC only occurs at very low densities <cit.>. The critical temperature also depends on the particle density: a rise in density also raises T_c, which makes it relatively easy to access <cit.>. It is necessary to consider other physical aspects, such as the volume of the potential as shown in Equation <ref>, so that BEC can still occur.
§ CONCLUSIONS
Bose-Einstein condensation (BEC) is one form of matter that can be harnessed as a working medium in a quantum heat engine (QHE). By setting the temperature of the system, a partially condensed BEC gives considerably high efficiency, especially its efficiency at maximum power (EMP), as shown in this study. Furthermore, because the EMP depends on the properties of the medium and on the external potential, we focused on studying the EMP of a BEC trapped in a generalized external potential rather than of a non-condensed gas. We found that, over the same range of T, EMP decreases as the degree of the potential n increases, whereas η increases with n. We also obtained the EMP of the BEC regime for various isochoric stroke times (τ_l and τ_h). Looking back at Equation <ref>, the Fourier conduction equation, both τ_l and τ_h directly affect the final temperature of the medium in each isochoric stroke (T_1 and T_3). Although EMP decreases for longer stroke times, there is no significant difference when τ_l and τ_h take the same value. However, if τ_l and τ_h are set properly, e.g. a short stroke time in the heating isochoric process and a long stroke time in the cooling isochoric process, the engine produces a higher EMP than one operating with a long heating stroke and a short cooling stroke, and, surprisingly, also higher than one operating with equal heating and cooling stroke times. This indicates that the choice of stroke times can restrain the entropy production as well as the quantum friction in the system. Furthermore, owing to the uniqueness of the BEC regime, the optimal work extraction is obtained with a fully excited expansion stroke (closest to T_c) and a fully condensed compression stroke (farthest from T_c). Hence, we conclude that the quantum Otto engine operates most effectively for the potential n=1, which has the highest critical temperature among the variations of n considered and therefore yields the highest EMP and the highest power output. Potentials with n<1 should be considered in future studies, as they may give a new perspective on the performance of the quantum Otto engine. In addition, it would be interesting to see how the power depends on the number of particles and on the external potential ϵ_0, in order to construct a full potential nano-engine.
TEPS thanks the Faculty of Mathematics and Natural Sciences, Andalas University, for financially supporting this research with research grant No. 04/UN.16.03.D/PP/FMIPA/2022.
|
http://arxiv.org/abs/2307.00356v1
|
20230701150103
|
Fedward: Flexible Federated Backdoor Defense Framework with Non-IID Data
|
[
"Zekai Chen",
"Fuyi Wang",
"Zhiwei Zheng",
"Ximeng Liu",
"Yujie Lin"
] |
cs.LG
|
[
"cs.LG",
"cs.CR"
] |
Fedward: Flexible Federated Backdoor Defense Framework with Non-IID Data
1st Zekai Chen
College of Computer Science and Big Data
Fuzhou University
Fuzhou, China
chenzekaiify@gmail.com
2nd Fuyi Wang
School of Information Technology
Deakin University
Waurn Ponds, Country
wangfuyi@deakin.edu.au
3rd Zhiwei Zheng
College of Computer Science and Big Data
Fuzhou University
Fuzhou, China
zhiweifzu@gmail.com
4th Ximeng Liu* * is the corresponding author.
College of Computer Science and Big Data
Fuzhou University
Fuzhou, China
snbnix@gmail.com
5th Yujie Lin
College of Computer Science and Big Data
Fuzhou University
Fuzhou, China
linyujie121@163.com
August 1, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Federated learning (FL) enables multiple clients to collaboratively train deep learning models while considering sensitive local datasets' privacy.
However, adversaries can manipulate datasets and uploaded models by injecting triggers for federated backdoor attacks (FBA). Existing defense strategies against FBA consider specific and limited attacker models, and injecting a sufficient amount of noise only mitigates rather than eliminates FBA.
To address these deficiencies, we introduce a Flexible Federated Backdoor Defense Framework (Fedward) to ensure the elimination of adversarial backdoors. We decompose FBA into various attacks, and design amplified magnitude sparsification (AmGrad) and adaptive OPTICS clustering (AutoOPTICS) to address each attack. Meanwhile, Fedward uses the adaptive clipping method by regarding the number of samples in the benign group as constraints on the boundary. This ensures that Fedward can maintain the performance for the Non-IID scenario.
We conduct experimental evaluations over three benchmark datasets and thoroughly compare them to state-of-the-art studies.
The results demonstrate the promising defense performance of Fedward, with improvements of 33%∼75% for the clustering defense methods, and of up to 96.98%, 90.74%, and 89.8% for the average FBA success rate under Non-IID data over MNIST, FMNIST, and CIFAR10, respectively.
Federated learning, distributed backdoor attack, backdoor defense, Non-IID data, clustering
§ INTRODUCTION
Federated learning (FL) <cit.> is a concept to eradicate data silos and collaboratively train a remarkable global model with the assistance of clients' uploaded models. It has attractive advantages and promotes many applications development, while adversaries can manipulate datasets and upload models to inject triggers for targeted attacks, called federated backdoor attacks (FBA). Specifically, adversaries are masked as benign clients to inject distinct triggers into a tiny part of the training datasets to trick the trained model into staying strongly associated with these triggers. Furthermore, adversaries can not affect the injected model performance on other benign datasets but activate triggers in malicious datasets.
There are currently many strategies for tackling the FBA problem for not identically and independently distributed (Non-IID) data. Byzantine-robust aggregation algorithms to mitigate FBA in the Non-IID data, and early work includes Trimmed-Mean <cit.>, Median <cit.>, etc.
Complement to the prior art, CRFL <cit.> employs the particular thresholds of clipping and perturbing noise in FL aggregation.
However, the clipping thresholds and perturbing noise levels are difficult to specify. Therefore, FedCC <cit.> proposes a K-means method to group the penultimate-layer features of local models in order to distinguish malicious clients from benign clients against FBA.
In practice, because FBA is strongly concealed and the data are Non-IID, FLAME <cit.> adopts HDBSCAN to separate benign model updates from malicious model updates, together with dynamic clipping and noise smoothing against FBA. Nevertheless, FLAME employs HDBSCAN with a single constraint for clustering, which may mislead the defense method when classifying malicious models in the Non-IID scenario.
Meanwhile, FLAME adopts dynamic clipping to limit the global model update, but this lacks constraints on the boundary dynamics. Moreover, noise smoothing serves only as a mitigation and cannot eliminate FBA.
In response to the above-identified challenge, we propose
the Flexible Federated Backdoor Defense Framework (Fedward). Firstly, since the data distributions of malicious models are similar in the Non-IID scenario, we propose amplified magnitude sparsification (AmGrad) to extract the major local model update and then amplify it by the maximum absolute gradient in each layer of the model, which helps magnify malicious model updates.
Then, we adopt the adaptive OPTICS clustering (AutoOPTICS) approach, which has clear distance criteria for separating malicious models from benign models in the Non-IID scenario. Finally, the adaptive clipping method takes the number of samples in the benign group as a constraint on the boundary, so that it applies to the Non-IID scenario.
The contributions are summarized as follows:
∙ In this work, we propose the Flexible Federated Backdoor Defense Framework (Fedward) against FBA in federated learning (FL). Instead of making strong assumptions about the attack strategy of the adversary, our Fedward retains strong defense capabilities and the benign performance of the aggregated model.
∙ Analyzing various types of FBA, we present Amplified Magnitude Sparsification, AutoOPTICS clustering, and Adaptive Clipping, which can protect against Magnitude with Larger Deviations Attack (MLA), Angle with Larger Deviation Attack (ALA), and Angle or Magnitude with Slight Deviation Attack
(AMSA), respectively.
∙ We evaluate the effectiveness of our Fedward by comparing it with five state-of-the-art studies. For fairness, the experimental settings are consistent with prior work. In addition, we tune the poisoning data rate (PDR) and the Non-IID data rate (NIR) to control the concealment and strength of the attack. The comprehensive experimental validation on benchmark datasets demonstrates that our Fedward is practical and applicable to complex scenarios.
§ RELATED WORK
§.§ Federated Learning
Federated learning (FL) <cit.> is an emerging distributed learning paradigm for improving efficiency, privacy, and scalability. It enables multiple clients to collaboratively train deep learning models under the supervision of a centralized aggregator.
Generally, in practical application scenarios, datasets across multiple clients have inherently heterogeneous characteristics (i.e., Non-IID) data.
Despite the attractive advantages of FL, it is vulnerable to threats arising from heterogeneous training data, and this heterogeneity also raises serious accuracy concerns.
To address such Non-IID concerns, McMahan et al. <cit.> propose federated averaging (FedAvg), a generic aggregation mechanism commonly applied in FL <cit.>.
Subsequently, several well-known aggregation mechanisms are proposed, including Krum, Trimmed-Mean <cit.> or Median <cit.>, and adaptive federated averaging <cit.>.
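As a concrete illustration of the aggregation step, a minimal sketch of weighted FedAvg is shown below (our own illustrative code; the variable names are not taken from the cited implementations):

    import numpy as np

    def fedavg(client_updates, client_sizes):
        """Weighted average of client updates (one flat parameter vector per client)."""
        weights = np.asarray(client_sizes, dtype=float)
        weights /= weights.sum()                    # |D_i| / sum_j |D_j|
        stacked = np.stack(client_updates, axis=0)  # (num_clients, num_params)
        return (weights[:, None] * stacked).sum(axis=0)

    # toy usage: three clients, four parameters each
    updates = [np.ones(4), 2 * np.ones(4), 4 * np.ones(4)]
    global_update = fedavg(updates, client_sizes=[10, 20, 10])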
Unfortunately, the distributed learning methodology, as well as the inherently Non-IID data distribution across multiple clients, may unintentionally provide a venue for new attacks. Specifically, since FL is not permitted to communicate or collect locally sensitive datasets, it facilitates federated backdoor attacks (FBA) on the models uploaded by different clients.
In this paper, the specific goal is to identify the FBA and eliminate such an attack effectively.
§.§ Backdoor Attacks on Federated Learning
Backdoor attacks corrupt a subset of the training data by inserting adversarial triggers, causing machine learning models trained on the manipulated dataset to predict incorrectly on test samples carrying the same trigger. Machine learning models are susceptible to profound impacts from backdoor attacks <cit.>, and backdoor attacks on FL have recently been studied in <cit.>.
The generic techniques utilized in FL backdoor attacks are two-fold: 1) data poisoning <cit.>(i.e., attackers manipulate the training datasets), and 2) model poisoning <cit.> (i.e., attackers manipulate the training process or the updated model), shown in Fig. <ref>.
Data Poisoning. In the data poisoning configuration, malicious attackers only allow poisoning the trained dataset from compromised clients by modifying targeted labels. Nguyen et al. <cit.> design a data poisoning attack by implanting a centralized backdoor into the aggregated detection model to incorrectly classify malicious traffic as benign. Each party incorporates the same global trigger during training.
Additionally, the distributed backdoor attack (DBA) <cit.> disintegrates a global trigger pattern into independent local patterns and embeds them into the training datasets of different compromised clients. We adopt this poisoning strategy in this paper; a minimal sketch is given below.
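A minimal sketch of this style of pixel-pattern poisoning is the following (our own illustration; the trigger shape, location, target label, and variable names are assumptions, not the exact DBA configuration):

    import numpy as np

    def inject_trigger(images, labels, rows, cols, target_label, pdr=0.3):
        """Poison a fraction `pdr` of (image, label) pairs with a small pixel pattern."""
        images, labels = images.copy(), labels.copy()
        idx = np.random.choice(len(images), int(pdr * len(images)), replace=False)
        for i in idx:
            images[i, rows, cols] = 1.0     # stamp this client's local trigger strip
            labels[i] = target_label        # relabel to the attacker's target class
        return images, labels

    # in DBA, each compromised client stamps a different strip of the global trigger,
    # e.g. client 0: rows=[0,0,0], cols=[0,1,2]; client 1: rows=[0,0,0], cols=[4,5,6]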
Model Poisoning.
To amplify the impact of the backdoor attack while escaping the aggregator’s anomaly detection, the poisoned models modify the parameters and scale the resulting model update.
A variety of model poisoning strategies have been designed against various defenses, building evasion of anomaly detection into the attacker's loss function and model weights, such as the generic constrain-and-scale and train-and-scale strategies <cit.>, the explicit boosting strategy from an adversarial perspective <cit.>, and the projected gradient descent strategy <cit.>.
§ PROBLEM FORMULATION
§.§ Problem Definition
Federated Backdoor Attack Scenario.
Federated backdoor attacks (FBA) focus on poisoning local data or models, which can impact the benign model distribution 𝒳. For stealthiness and robustness, FBA must balance each local model's accuracy against the attack success rate (ASR). The FBA problem can be formulated as follows:
Ĝ^*=argmin_Ĝ∈Θ∑_i ∈{S_cln, S_poi}^N|𝒟_i|/∑_j ∈ c_i^N|𝒟_j|ℒ(D_i;Ĝ)
≜𝔼_𝒟∼χ_poi[ℱ(𝒟;Ĝ) = τ_poi] + 𝔼_𝒟∼χ_cln[ℱ(𝒟;Ĝ) = τ_cln],
where Θ is the parameter space of the global model, |·| is size, S_cln is a set of benign clients, S_poi is a set of malicious clients, N is the total number of clients, τ is a set of the target in training data, datasets of all clients are 𝒟 = ⋃_i^N𝒟_i, Ĝ is the global model, 𝒳 is the training data distribution, ℒ(·) is a general definition of the empirical loss function for supervised learning tasks, and ℱ(·) is inference function for evaluating the global model Ĝ.
Attack Assumption. For real-world FL, datasets from multiple clients are inherently Non-IID data.
If the adversary/attacker 𝒜 disguised as or controlled more than N/2 of the clients, the remaining benign clients would refuse to join FL since the global model would not converge. Thus, we assume 𝒜 completely controls fewer than N/2 clients <cit.>, including their training data, training processes, and trained parameters.
Furthermore, with the exception of any FL aggregation execution or local client training, 𝒜 is aware of the FL aggregation methodology incorporating potential defense mechanisms.
§.§ Federated Backdoor Attacks
𝒜 manipulates training datasets or loss terms to generate models from a distinct poisoning distribution 𝒳_poi for the FBA task. Due to the FL aggregation regulations, 𝒜 is unable to directly manipulate the central server or other benign clients.
To further describe FBA, we analyze the disparity between the benign models' distribution 𝒳_cln and the malicious models' distribution 𝒳_poi in terms of deflection angle and magnitude. Additionally, FBA controls the distance between 𝒳_cln and 𝒳_poi. Thus, FBA can be decomposed into the following components:
* Magnitude with Larger Deviations Attack (MLA). 𝒜 increases the magnitude of the gap between 𝒳_cln and 𝒳_poi, mainly through replacement or scaling attacks.
* Angle with Larger Deviation Attack (ALA). Similarly, 𝒜 can conveniently increase the angular gap between 𝒳_cln and 𝒳_poi, mainly by increasing the poisoning data rate (PDR) or the malicious clients' local training epochs (LTE).
* Angle or Magnitude with Slight Deviation Attack (AMSA). 𝒜 can easily close the gap between 𝒳_cln and 𝒳_poi, mainly by narrowing the PDR or the malicious clients' LTE.
§.§ Federated Backdoor Attack Defense
FBA manipulates local models through attacks on the data distribution. To overcome this issue, a common approach attempts to distinguish between the benign distribution 𝒳_cln and the malicious distribution 𝒳_poi. However, due to the varying proportions of Non-IID data, the problem can instead be approached from the similarity among samples of 𝒳_poi. How, then, can distance constraints between 𝒳_poi and 𝒳_cln be established? The most direct solution is to set a bound on malicious model updates, which yields the following optimization problem:
argmin_1 ≤ i ≤ N∑_j = 1 and i ≠ j^N Dist(w_i, w_j) - θ_poi,
where Dist is distance function, w_i is model update of client-i, θ_poi is 𝒳_poi distance constraints.
§ DEFENSE METHODS
§.§ System Overview
Fedward aims to provide a series of defenses to address a typical scenario of FBA.
To defend against FBA in FL scenarios, Fedward establishes amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping. More details about the procedure of Fedward are shown in Algorithm 1. Due to space constraints, we combine the description of the system overview with the pseudocode, i.e. in Algorithm 1 the explanation follows the ▹ symbol.
§.§ Amplified Magnitude Sparsification
Prior studies lack an effective approach to limit AMSA. FLAME adopts dynamic model filtering to limit the magnitude of the global model update. However, when faced with AMSA, it does not exclude slight malicious model updates that stay within the filtering bound, so poisoning still occurs.
Due to the stealthiness of AMSA, our Fedward adopts gradient sparsification to extract the major local model update and then amplifies it by the maximum absolute gradient of each layer of the model. In particular, this expands slight malicious model updates, so as to counter AMSA.
Inspired by TernGrad <cit.>, our Fedward presents amplified magnitude sparsification (AmGrad) (line 6 in Algorithm 1), which retains the sign vector {-1, 1} and the maximum of each layer of the model update, as follows:
W = AmGrad_l_i ∈ W(l_i) = sign(l_i) * max(|l_i|),
where W is the local model update, l_i is the i-th layer of W, sign(·) takes the sign of the gradient, |·| takes the absolute value of the gradient, and max(·) takes the maximum over a layer of W.
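A minimal sketch of this per-layer operation (our own illustrative code, assuming each layer update is a NumPy array) is:

    import numpy as np

    def amgrad(layer_updates):
        """Amplified magnitude sparsification: keep each entry's sign,
        scaled by the layer-wise maximum absolute gradient."""
        return [np.sign(l) * np.max(np.abs(l)) for l in layer_updates]

    # toy usage: a two-layer update
    update = [np.array([0.02, -0.5, 0.1]), np.array([[1e-3, -2e-3], [4e-3, 0.0]])]
    amplified = amgrad(update)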
§.§ Adaptive OPTICS Clustering
Regarding MLA and ALA, advanced methods cluster local models drawn from 𝒳_poi and 𝒳_cln. For instance, FedCC utilizes K-means to cluster malicious and benign models. Nevertheless, it cannot reliably decide which group is benign or malicious, nor remove malicious clients from benign groups. In contrast, FLAME first proposes HDBSCAN to alleviate this issue by setting the hyperparameter min_cluster_size = n/2 under the same attack assumption. However, there are no clear criteria for correctly determining which local model is malicious or benign; in particular, it lacks distance constraints between 𝒳_poi and 𝒳_cln.
Generally, DBSCAN requires less computational complexity than the HDBSCAN used by FLAME, and its hyperparameters {eps, minPts} set the bound on benign model updates and the minimum number of benign clients, respectively. In short, DBSCAN has clear distance criteria for separating malicious clients from benign clients, which is more effective and accurate for eliminating malicious clients from the benign group.
Furthermore, to handle relatively small numbers of malicious clients, OPTICS is more lenient with respect to eps than DBSCAN. Thus, our Fedward adopts adaptive OPTICS clustering (line 9 in Algorithm 1), which applies to varying degrees of deviation of 𝒳. More details are shown in Algorithm 2.
Before clustering, the central server computes the pairwise Euclidean distances of the updates in W (line 2). Then, a subset M_euc^s of M_euc is selected by taking the top mins values (line 4). To obtain eps, the central server takes the median value of M_euc^s (line 5). Finally, the central server clusters M_euc to obtain the benign model updates and the number of benign models (line 6).
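A minimal sketch of this clustering step is shown below (our own reading of Algorithm 2; the exact definition of the top-mins values and the use of scikit-learn's OPTICS with a precomputed distance matrix are illustrative assumptions):

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import OPTICS

    def auto_optics(updates, mins):
        """Cluster flattened client updates; return indices of the largest (benign) group."""
        W = np.stack(updates)                      # (num_clients, num_params)
        M_euc = cdist(W, W)                        # pairwise Euclidean distances
        # per client, distances to its `mins` nearest neighbours (self excluded)
        M_s = np.sort(M_euc, axis=1)[:, 1:mins + 1]
        eps = np.median(M_s)                       # adaptive eps
        labels = OPTICS(min_samples=mins, max_eps=eps,
                        metric="precomputed").fit(M_euc).labels_
        groups = [np.flatnonzero(labels == c) for c in set(labels) if c != -1]
        return max(groups, key=len) if groups else np.arange(len(updates))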
§.§ Adaptive Clipping
To mitigate MLA and ALA, <cit.> proposes norm thresholding of updates to clip the model update. However, there is no clear norm threshold that eliminates the impact of MLA and ALA. CRFL adopts norm thresholding with a specific parameter p, but cannot adaptively set p to constrain malicious model updates. Subsequently, FLAME provides adaptive clipping to limit the global model update based on the median. When the clipping bound or the number of malicious clients is much smaller, this removes most of the benign clients' critical information.
In response to these issues, Fedward adopts AmGrad to amplify model updates and takes the |inds|-th value of Norm to clip the model updates (lines 11∼15 in Algorithm 1).
Norm = {||W_1||_2,||W_2||_2, ⋯, ||W_m||_2}
ρ_clip = Norm_|inds|
W_i = W_i / max(1, Norm_i/ρ_clip),
where ||x||_2 is the L2 norm, max(·) takes the maximum value, ρ_clip is the clipping bound, Norm_i is the i-th value of Norm, and W_i is the i-th local model update.
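A minimal sketch of this clipping rule (our own illustration; `benign_inds` denotes the indices of the benign group returned by the clustering step, and Norm is taken in ascending order):

    import numpy as np

    def adaptive_clip(updates, benign_inds):
        """Clip each flattened update by the |inds|-th smallest L2 norm."""
        norms = np.array([np.linalg.norm(w) for w in updates])
        rho_clip = np.sort(norms)[len(benign_inds) - 1]   # the |inds|-th value of Norm
        return [w / max(1.0, n / rho_clip) for w, n in zip(updates, norms)]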
§ EXPERIMENTAL EVALUATION
We implement a prototype of Fedward in PyTorch. Extensive experiments are conducted on a server equipped with 64-core CPUs, 128GB RAM, and 2 NVIDIA GeForce RTX 2080Ti GPUs. We evaluate the effectiveness of our Fedward by comparing it with five baselines. For fairness, the experimental settings are consistent with prior work.
Datasets. We evaluate the performance of defense methods against the FBA in three common benchmark datasets, i.e.,
MNIST<cit.>, FMNIST<cit.>, and CIFAR10<cit.>.
Baselines. Five state-of-the-art FBA defense methods are regarded as baselines, which cover the previously mentioned effective defenses
against FBA, i.e., Median <cit.>, Trimmed-Mean<cit.>, CRFL <cit.>, FedCC <cit.>, and FLAME <cit.>.
Attack.
We assess the defense methods against the state-of-the-art distributed backdoor injection attack (DBA). To facilitate realizing various types of FBA, DBA can tune the poisoning data rate (PDR) and the Non-IID data rate (NIR) to control the concealment and strength of the attack.
Evaluation Metrics.
In the experiments, we evaluate the scheme performance by three metrics:
(1) AER indicates the rate at which malicious models escape from the clustering defense method.
(2) AASR indicates the average FBA success rate of all iterations of the global model in the backdoor task. (3) MA indicates the accuracy of the model in the main task.
The goal of 𝒜 is to maximize AASR and MA; a well-performing defense model needs to minimize AASR.
§.§ Comparison of Clustering Defense Method
Since strong clustering can directly reject malicious models, we conduct experiments for FBA in clustering defense on FedCC (K-means), FLAME (HDBSCAN), and our Fedward (AutoOPTICS), respectively.
Considering MLA, ALA, and AMSA in FBA, we perform different proportions of PDR and NIR.
Fig. 2 presents the AER comparison of Fedward with prior work over the three benchmark datasets.
Obviously, Fedward has the lowest AER, indicating that it outperforms in distinguishing malicious models.
Notably, Fedward with AmGrad is capable of completely defending against malicious models over MNIST. Although the AER over FMNIST and CIFAR10 (14.24% and 4.64%, respectively, when NIR =25% and PDR =31.25%) is slightly worse, it still outperforms FedCC and FLAME.
FedCC's K-means method performs worst under the Non-IID distribution with different PDR values (approximately 76% AER).
Due to significant differences among benign features in different NIR settings,
FedCC cannot cluster them well enough to eliminate malicious clients, although malicious clients are readily detectable in the IID datasets for specific parameters (PDR =31.25% and 46.875% over MNIST, PDR =46.875% over FMNIST).
FLAME fluctuates significantly in certain PDR and NIR settings owing to the weak constraint of HDBSCAN against FBA, with AER approaching 34.5% at PDR = 15.625% and NIR = 50% in FMNIST, and 23.92% at PDR = 46.875% and NIR = 25% in CIFAR10.
In short, FedCC (K-means) and FLAME (HDBSCAN) are similarly vulnerable to FBA, while our Fedward (AutoOPTICS) improves upon FLAME and FedCC by 33%∼75% at most.
§.§ Comparison of Defense Efficiency
The AASR and MA comparison results of our Fedward and the baselines over MNIST/FMNIST/CIFAR10 with various PDR and NIR are shown in Table 1.
We can see that FLAME and our Fedward are significantly superior to Median, Trimmed-Mean, CRFL, and FedCC in the NIR settings.
With PDR = 15.625% and 31.25% over MNIST, the backdoor defense of Median is effective, and it has higher accuracy than Fedward, while Median fails at PDR = 46.875%. Additionally, Median is fragile over FMNIST and CIFAR10.
Trimmed-Mean is the most vulnerable and performs worst against FBA in IID and NIR settings, in which AASRs are 91.64 %∼ 98.53%, 55.7 %∼ 86.59%, and 87.8 %∼ 94.34% over MNIST, FMNIST, and CIFAR10, respectively.
While it preserves a higher MA, its AASR remains exceedingly high, leaving it defenseless against FBA. CRFL can resist attacks to a certain extent in the IID setting, but is vulnerable in Non-IID settings. Taking MNIST as an example, when PDR =15.625% its AASR is 3.00% in the IID setting, but 67.19% with NIR =25%.
FedCC has a similar performance to CRFL.
FLAME significantly reduces AASR with the effective assistance of HDBSCAN, dynamic clipping, and noise perturbation against FBA, but it performs poorly in certain PDR and NIR settings. For the CIFAR10 dataset with PDR =31.25%, the AASR of FLAME is up to 45.65%, compared with 9.67% for our Fedward.
Fedward achieves the best AASR together with a high MA across various settings, indicating that it can defend against a variety of FBA. It moderately improves AASR (by 0.07%∼67.25%) over the different benchmark datasets, compared to FLAME.
With the increase in NIR, the defense effect from Fedward is gradually improved (AASR = 0.82% for MNIST when NIR =75% and PDR =15.625%).
For different PDR, the trend of performance difference among frameworks is similar.
§ CONCLUSION
In this paper, we present a Flexible Federated Backdoor Defense Framework (Fedward) to eliminate the impact of backdoor attacks while maintaining the performance of the aggregated model on the main task. Combining amplified magnitude sparsification (AmGrad) and adaptive OPTICS clustering (AutoOPTICS), Fedward completely eradicates various backdoor attacks while preserving the benign performance of the global model. Furthermore, with the assistance of the adaptive clipping method, Fedward can be applied to Non-IID scenarios. The comprehensive experimental validation on benchmark datasets demonstrates that our Fedward is practical and applicable to complex scenarios. In future work, we will attempt to apply the proposed Fedward to privacy-critical applications.
00
mcmahan2017communication Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and
Blaise Aguera Arcas, “Communication-efficient learning of deep networks
from decentralized data,” in Artificial intelligence and statistics. PMLR,
2017, pp. 1273–1282.
yin2018byzantine Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett,
“Byzantine-robust distributed learning: Towards optimal statistical rates,”
in ICML. PMLR, 2018, pp. 5650–5659.
xie2021crfl Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li, “Crfl: Certifiably
robust federated learning against backdoor attacks,” in ICML. PMLR,
2021, pp. 11372–11382.
jeong2022fedcc Hyejun Jeong, Hamin Son, Seohu Lee, Jayun Hyun, and Tai-Myoung
Chung, “FedCC: Robust federated learning against model poisoning attacks,” arXiv preprint arXiv:2212.01976, 2022.
280048 Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen
Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia
Mirhoseini, Shaza Zeitouni, Farinaz Koushanfar, Ahmad-Reza Sadeghi, and
Thomas Schneider, “FLAME: Taming backdoors in federated learning,”
in USENIX Security, Boston, MA, Aug. 2022, pp. 1415–1432, USENIX
Association.
nguyen2019diot Thien Duc Nguyen, Samuel Marchal, Markus Miettinen, Hossein Fereidooni, N. Asokan, and Ahmad-Reza Sadeghi, “DÏoT: A federated self-learning anomaly detection system for IoT,” in ICDCS. IEEE, 2019, pp. 756–767.
fereidooni2021safelearn Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini,
Helen Möllering, Thien Duc Nguyen, Phillip Rieger, Ahmad-Reza Sadeghi,
Thomas Schneider, Hossein Yalame, et al., “SAFELearn: secure aggregation for private federated learning,” in SPW. IEEE, 2021, pp. 56–62.
munoz2019byzantine Luis Muñoz-González, Kenneth T Co, and Emil C Lupu, “Byzantine-robust
federated machine learning through adaptive model averaging,” arXiv
preprint arXiv:1909.05125, 2019.
gao2020backdoor Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, and Hyoungshick Kim, “Backdoor attacks and countermeasures on deep learning: A comprehensive review,” arXiv preprint arXiv:2007.10760, 2020.
bagdasaryan2020backdoor Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov, “How to backdoor federated learning,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 2938–2948.
bhagoji2019analyzing Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo, “Analyzing federated learning through an adversarial lens,” in International Conference on Machine Learning. PMLR, 2019, pp. 634–643.
nguyen2020poisoning Thien Duc Nguyen, Phillip Rieger, Markus Miettinen, and Ahmad-Reza Sadeghi, “Poisoning attacks on federated learning-based IoT intrusion detection system,” in DISS, 2020, pp. 1–7.
xie2019dba Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li, “DBA: Distributed
backdoor attacks against federated learning,” in International Conference
on Learning Representations, 2020.
wang2020attack Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos, “Attack of the tails: Yes, you really can backdoor federated learning,” Advances in Neural Information Processing Systems, vol. 33, pp. 16070–16084, 2020.
wen2017terngrad Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li, “TernGrad: Ternary gradients to reduce communication in distributed deep learning,” Advances in Neural Information Processing Systems, vol. 30, 2017.
lecun1998gradient Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
xiao2017fashion Han Xiao, Kashif Rasul, and Roland Vollgraf, “Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
krizhevsky2014cifar Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton, “The CIFAR-10 dataset,” online: http://www.cs.toronto.edu/kriz/cifar.html, vol. 55, no. 5, 2014.
|
http://arxiv.org/abs/2307.02067v2
|
20230705071610
|
Primordial black holes from second order density perturbations as probes of the small-scale primordial power spectrum
|
[
"Yu-Ting Kuang",
"Jing-Zhi Zhou",
"Zhe Chang",
"Xukun Zhang",
"Qing-Hua Zhu"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"gr-qc"
] |
Affiliations: Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China; University of Chinese Academy of Sciences, Beijing 100049, China; CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China. Contact: zhoujingzhi@ihep.ac.cn
We investigate the second order energy density perturbation δ^(2) induced by small-scale Gaussian and local-type non-Gaussian primordial curvature perturbations. The relative abundance of PBH is calculated in terms of the PDF of total energy density perturbation δ_r=δ^(1)+1/2δ^(2). The effects of second order density perturbation greatly reduce the upper bounds of small-scale power spectra of primordial curvature perturbations by one to two orders of magnitude. For log-normal primordial power spectrum, its amplitude A_ζ is constrained to be about A_ζ∼ 3×10^-3. And for local-type non-Gaussianity with f_NL=10, the upper bound of A_ζ is about 2.5×10^-4.
Primordial black holes from second order density perturbations as probes of the small-scale primordial power spectrum
Qing-Hua Zhu
=====================================================================================================================
Introduction—In inflation theory, valuable information about the early Universe is encoded in the cosmological perturbations, which originate from quantum fluctuations during inflation. The power spectrum of primordial curvature perturbations 𝒫_ζ(k) is one of the most important predictions of inflation theory. The amplitude of the primordial power spectrum at different scales can be constrained by current cosmological observations. On large scales (≳1 Mpc), according to current observations of the CMB and LSS <cit.>, the amplitude of the power spectrum of primordial curvature perturbations is constrained to be about 2× 10^-9. However, on small scales (≲1 Mpc), the constraints on primordial curvature perturbations are significantly weaker than those on large scales <cit.>. Constraining the primordial power spectrum on small scales is therefore an important issue in the study of the early Universe.
The large amplitudes of the small-scale primordial spectrum have attracted a lot of attention in the past few years on account of their rich and profound phenomenology, such as PBH <cit.>, which are a candidate for DM. The abundance of PBH, f_pbh, can be calculated in terms of a given primordial power spectrum of curvature perturbations 𝒫_ζ(k). The abundance of PBH, f_pbh(m_pbh), has been constrained for different PBH masses m_pbh <cit.>, so the corresponding small-scale primordial power spectrum can also be constrained by various PBH observations. In addition, the primordial perturbations inevitably generate higher order perturbations, such as SIGW <cit.>. This strong correlation between primordial curvature perturbations and SIGW signals might be a promising approach to detecting the small-scale primordial power spectrum with upcoming GW experiments, such as LISA and PTA <cit.>.
In this Letter, we investigate the contributions of the second order induced energy density perturbation δ^(2)=ρ^(2)/ρ^(0) to PBH formation. More precisely, a primordial curvature perturbation that enters the Hubble radius during the radiation-dominated (RD) era leads to the generation of higher order scalar and energy density perturbations. The relative abundance of PBH can then be studied in terms of the PDF of the total energy density perturbation P(δ^(1)(R)+1/2δ^(2)(R)), where δ^(1)(R) is the contribution of the primordial curvature perturbation, which has been studied for many years <cit.>. Here, δ^(2)(R) is the new contribution of the second order energy density perturbation δ^(2), which was neglected in previous studies[The statistics of the induced higher order energy density perturbation δ^(2) are highly non-Gaussian, since it is generated by first order scalar perturbations.]. Since the abundance of PBH f_pbh(m_pbh) has been constrained for different m_pbh, the amplitude of the small-scale primordial power spectrum can be constrained by the abundance of PBH. The effects of the second order density perturbation greatly reduce the upper bounds on the small-scale power spectrum of primordial curvature perturbations. The second order energy density perturbation induced by local-type non-Gaussian primordial curvature perturbations is also considered <cit.>. The upper bounds on the amplitudes of the primordial power spectra are presented.
Second order induced energy density perturbation—The perturbed metric in the FLRW spacetime with comoving gauge takes the form
d s^2=a^2( -(1+2 ϕ^(1)+ ϕ^(2)) dη^2 +(2 ∂_i B^(1)+∂_i B^(2)) dη d x^i +(1-2 ψ^(1)- ψ^(2)) δ_ij d x^i d x^j ) ,
where ϕ^(n), ψ^(n), and B^(n)(n=1,2) are the nth-order scalar perturbations. We have set δ u^(n)=0 and E^(n)=0 in comoving gauge. The higher order cosmological perturbations can be studied in terms of the package <cit.>. The equations of motion of second order scalar perturbations are
1/4( 2ℋϕ^(2)'+6ℋψ^(2)'+2ψ^(2)''+8/3ℋΔ B^(2)+Δ B^(2)'+Δϕ^(2)-5/3Δψ^(2) )=-1/4𝒯^ij S^(2)_ij ,
1/2 (-ℋB^(2) -1/2B^(2)' -1/2ϕ^(2)+1/2ψ^(2))
=-1/2Δ^-1(∂^iΔ^-1∂^j-1/2𝒯^ij) S^(2)_ij ,
∂^i( ℋ∂_iϕ^(2)+∂_iψ^(2)'+S^(2)_i )=0 ,
where S^(2)_ij and S^(2)_i are second order source terms. 𝒯^ij is defined as 𝒯^ij=δ^ij-∂^iΔ^-1∂^j. The second order energy density perturbations δ^(2)=ρ^(2)/ρ^(0) can be calculated in terms of second order induced scalar perturbations and first order scalar perturbations
δ^(2) = -2ϕ^(2)+2/3ℋ^2Δψ^(2)-2/3ℋΔ B^(2)
-2/ℋψ^(2)'+S^(2)_ρ ,
where S^(2)_ρ are composed of first order scalar perturbations.
Upper bounds on primordial curvature perturbation—We consider the log-normal primordial power spectrum
𝒫_ζ(k)=A_ζ/√(2 πσ_*^2)exp(-ln(k / k_*)^2/2 σ_*^2) .
In the limit σ_* → 0, 𝒫_ζ(k) approaches a monochromatic power spectrum, namely 𝒫_ζ(k)=A_ζ k_* δ(k-k_*). The PDF can be calculated in terms of the given primordial power spectrum in Eq. (<ref>) and the window function W(x)=3(sin x-x cos x)/x^3 <cit.>. As shown in Fig. <ref>, in order to intuitively show the trend of the PDF with respect to σ_*, we calculate the PDF for different σ_* values near σ_*=0.
The relative abundance of PBH f_pbh(m_pbh) is equivalent to the probability that the smoothed density field exceeds the threshold δ_c=0.4 <cit.>,
β(M_PBH)= ∫_δ_c d δ(R) P(δ(R)) .
Substituting the PDF P(δ^(1)(R)+1/2δ^(2)(R)) of the total energy density perturbation into Eq. (<ref>), we obtain β(A_ζ) as a function of A_ζ (see Fig. <ref>). Since β has been constrained for different PBH masses, we use Fig. 18 in Ref. <cit.> to obtain the upper bounds on A_ζ. As shown in Fig. <ref>, for the log-normal primordial power spectrum, the amplitude A_ζ is constrained to be about A_ζ∼ 3×10^-3.
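For orientation, a minimal numerical sketch of this abundance integral in the simplest Gaussian, first-order-only approximation is shown below (our own illustration: the transfer function is set to unity, the standard radiation-era relation between δ and ζ is assumed, and the δ^(2) contribution is omitted, so this is not the full calculation used in this Letter):

    import numpy as np
    from scipy.special import erfc
    from scipy.integrate import quad

    A_zeta, sigma_star, k_star, delta_c = 3e-3, 0.1, 1.0, 0.4

    def P_zeta(k):
        """Log-normal primordial power spectrum."""
        return A_zeta / np.sqrt(2 * np.pi * sigma_star**2) * \
               np.exp(-np.log(k / k_star)**2 / (2 * sigma_star**2))

    def W(x):
        """Window function W(x) = 3(sin x - x cos x)/x^3."""
        return 3 * (np.sin(x) - x * np.cos(x)) / x**3

    def sigma2(R):
        """Variance of the smoothed linear density field during RD (transfer ~ 1)."""
        f = lambda lnk: (16 / 81) * (np.exp(lnk) * R)**4 \
                        * W(np.exp(lnk) * R)**2 * P_zeta(np.exp(lnk))
        return quad(f, np.log(k_star) - 8, np.log(k_star) + 8)[0]

    beta = 0.5 * erfc(delta_c / np.sqrt(2 * sigma2(1.0 / k_star)))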
Constraints on local-type non-Gaussianity—The local-type non-Gaussian primordial curvature perturbation is given by <cit.>
ζ^NG_𝐤=ζ^G_𝐤+3/5f_NL∫d^3n/(2π)^3/2ζ^G_𝐤-𝐧ζ^G_𝐧 ,
where ζ^G is the Gaussian primordial curvature perturbation. Here, we consider the lowest order contributions of local-type non-Gaussianity. Namely, δ^(1)_NG∼ A_ζf_NL and δ^(2)_NG∼ A^3/2_ζf_NL. As shown in Fig. <ref>, in this case, β(A_ζ,f_NL) is a function of A_ζ and f_NL. The upper bounds of A_ζ with different f_NL are given in Fig. <ref>. For f_NL=10, the upper bound of A_ζ is about 2.5×10^-4. When we consider the second order induced energy density perturbation, the effects of primordial non-Gaussianity will greatly reduce the upper bounds of A_ζ even for a relatively small f_NL.
Third order scalar induced GW—Scalar induced GW can be constrained by current and future observations, such as LISA and PTA. By using the constraints on the induced gravitational waves, we can obtain constraints on the small-scale curvature perturbations <cit.>. Ref. <cit.> investigated the energy density spectra of third order SIGW; in this case, the first order scalar perturbation induces the second order scalar, vector, and tensor perturbations. At the next iteration, the first order scalar and the second order scalar, vector, and tensor perturbations all induce the third order tensor perturbations. Ref. <cit.> concluded that the energy density of third order SIGW, ρ^(3)_GW, will be larger than that of second order SIGW, ρ^(2)_GW, if A_ζ>1.76× 10^-2. According to the PBH observations, the upper bound of A_ζ is constrained to be about A_ζ∼ 2× 10^-2 with respect to δ^(1)=ρ^(1)/ρ^(0) <cit.>. In this case, if we want to produce enough PBH as candidates for DM, then the energy density of third order SIGW will be comparable to or even greater than that of second order SIGW. However, the second order energy density perturbation δ^(2) also contributes to the production of PBH, so enough PBH can be produced even for A_ζ∼ 10^-3. For A_ζ = 3× 10^-3, we obtain ρ^(3)_GW∼ 0.19 ρ^(2)_GW. Namely, according to the current PBH observations, the energy density of third order SIGW is always less than that of second order SIGW.
Conclusion—In this Letter, we have analyzed the new effects of the second order scalar induced energy density perturbation δ^(2) on PBH formation. We calculated the PDF and β as functions of A_ζ and σ_* for log-normal primordial power spectra. In order not to overproduce PBH, the parameters of the primordial power spectra are constrained. For σ_* → 0, the amplitude of the primordial power spectrum is constrained to be about A_ζ∼ 3×10^-3. As σ_* increases, the upper bound on A_ζ gradually increases. Local-type non-Gaussianity is also considered in this Letter. For f_NL=10, the upper bound on A_ζ is about 2.5×10^-4. The effects of primordial non-Gaussianity greatly reduce the upper bounds on A_ζ even for a relatively small f_NL. In addition, since the upper bound on A_ζ is constrained to be about A_ζ∼ 3× 10^-3 for Gaussian primordial curvature perturbations, the energy density of third order SIGW is always less than that of second order SIGW.
This work has been funded by the National Nature Science Foundation of China under grant No. 12075249 and 11690022, and the Key Research Program of the Chinese Academy of Sciences under Grant No. XDPB15. We acknowledge the package <cit.>.
|
http://arxiv.org/abs/2307.03348v1
|
20230707014922
|
Chip-firing on graphs of groups
|
[
"Margaret Meyer",
"Dmitry Zakharov"
] |
math.CO
|
[
"math.CO",
"math.AG"
] |
We define the Laplacian matrix and the Jacobian group of a finite graph of groups. We prove analogues of the matrix tree theorem and the class number formula for the order of the Jacobian of a graph of groups. Given a group G acting on a graph X, we define natural pushforward and pullback maps between the Jacobian groups of X and the quotient graph of groups X//G. For the case G=ℤ/2, we also prove a combinatorial formula for the order of the kernel of the pushforward map.
§ INTRODUCTION
The theory of chip-firing on graphs is a purely combinatorial theory, having a remarkable similarity to divisor theory on algebraic curves. A divisor on a graph is an integer linear combination of its vertices, and two divisors are linearly equivalent if one is obtained from another by a sequence of chip-firing moves. The set of equivalence classes of degree zero divisors on a graph X is a finite abelian group, called the Jacobian Jac(X) or the critical group of X. The similarity with algebraic geometry is not accidental: graphs record degeneration data of one-dimensional families of algebraic curves, and divisors on graphs represent discrete invariants of algebraic divisor classes under degeneration.
Chip-firing on graphs is functorial with respect to a class of graph maps known as harmonic morphisms, which may be viewed as discrete analogues of finite maps of algebraic curves. Specifically, a harmonic morphism of graphs f:X→ Y defines natural pushforward and pullback maps f_*:Jac(X)→Jac(Y) and f^*:Jac(Y)→Jac(X). Harmonic morphisms are characterized by a local degree assignment at the vertices of the source graph, and are a generalization of topological coverings, which have local degree one everywhere.
A natural example of a topological covering, and hence of a harmonic morphism, is the quotient p:X→ X/G of a graph X by a free action of a group G. The paper <cit.> thoroughly investigated the corresponding pushforward map p_*:Jac(X)→Jac(X/G), and found a combinatorial formula for the degree of the kernel in the case when G=ℤ/2. If the action of G on X has nontrivial stabilizers, however, then p is not in general harmonic, and there is no relationship between Jac(X) and Jac(X/G). This raises the natural problem of redefining chip-firing on the quotient graph in a way that preserves functoriality.
In this paper, we solve this problem using the theory of graphs of groups, also known as Bass–Serre theory (see <cit.> and <cit.>). Given a G-action on a graph X, the quotient graph of groups X//G consists of the quotient graph X/G together with the data of the local stabilizers, and may be thought of as the stacky quotient of X by G. We define the Laplacian matrix and the Jacobian group of a graph of groups by weighting the chip-firing map using the orders of the local stabilizers. We define natural pushforward and pullback maps p_*:Jac(X)→Jac(X//G) and p^*:Jac(X//G)→Jac(X), and we investigate their properties.
The paper is organized as follows. In Section 2, we recall the definitions of chip-firing for a graph, as well as harmonic morphisms of graphs and Bass–Serre theory. We define graphs and chip-firing in terms of half-edges and introduce a detailed factorization of the graph Laplacian. This approach is notationally cumbersome but proves useful in Section 3, where we define chip-firing and the Jacobian group for a graph of groups. We prove two formulas for the order of the Jacobian of a graph of groups: Theorem <ref>, a weighted version of Kirchhoff's matrix tree theorem, and Theorem <ref>, which is a class number formula involving a hypothetical Ihara zeta function of a graph of groups. In Section 4, we consider a group G acting on a graph X and consider the Jacobian of the quotient graph of groups X//G. We define natural pushforward and pullback maps between the Jacobians Jac(X) and Jac(X//G). We compute the Jacobians of all group quotients of two graphs with large automorphism groups: the complete graph on four vertices and the Petersen graph. Finally, in Section 5 we specialize to the case G=ℤ/2 and find a combinatorial formula for the order of the kernel of the pushforward map p_*:Jac(X)→Jac(X//G), generalizing a result of Reiner and Tseng <cit.>.
A natural question is to relate chip-firing on graphs of groups to algebraic geometry. A version of the chip-firing maps with edge weights (but trivial vertex weights) appears in <cit.> and <cit.>, in the study of moduli spaces of curves with level structure. Curves with a G-cover with arbitrary group G are considered in <cit.>. It is natural to assume that chip-firing on graphs of groups should be related to the theory of line bundles on stacky curves. Investigating this connection, however, is beyond the scope of this paper.
§ GRAPHS WITH LEGS AND GRAPHS OF GROUPS
We begin by recalling a number of standard definitions concerning graphs, group actions, divisor theory on graphs, harmonic morphisms, and graphs of groups.
§.§ Graphs, morphisms, and group actions
In Serre's definition (see <cit.>), the edges of a graph are the orbits of a fixed-point-free involution acting on a set of half-edges. When considering group actions on graphs, it is then necessary to require that the action not flip any edges of the graph. We can relax this constraint by allowing the involution on the set of half-edges to have fixed points. The resulting object is a graph with legs, where a leg is the result of folding an edge in half via an involution. Such objects have appeared before in the combinatorics literature (for example, see p. 60 in the paper <cit.>, where they are called half-arcs).
A graph with legs X, or simply a graph, consists of the following data:
* A set of vertices V(X).
* A set of half-edges H(X).
* A root map r_X:H(X)→ V(X).
* An involution ι_X:H(X)→ H(X).
The involution ι_X partitions H(X) into orbits of size one and two. An orbit e={h,h'} of size two (so that ι_X(h)=h') is an edge with root vertices r_X(h),r_X(h')∈ V(X), and the set of edges of X is denoted E(X). An edge whose root vertices coincide is called a loop. A fixed point of ι_X is called a leg and has a single root vertex r_X(h)∈ V(X), and we denote the set of legs of X by L(X). The tangent space T_vX=r_X^-1(v) of a vertex v∈ V(X) is the set of half-edges rooted at v, and its valency is val(v)=|T_vX| (so a leg is counted once, while a loop is counted twice). An orientation of an edge e={h,h'} is a choice of order (h,h') on the half-edges, and we call s(e)=r_X(h) and t(e)=r_X(h') respectively the initial and terminal vertices of an oriented edge e. An orientation on X is a choice of orientation for each edge (each leg has a unique orientation). We consider only finite connected graphs.
A morphism of graphs f:X̃→ X is a pair of maps f:V(X̃)→ V(X) and f:H(X̃)→ H(X) (both denoted f by abuse of notation) that commute with the root and involution maps on X̃ and X.
Let f:X̃→ X be a morphism of graphs. If l∈ L(X̃) is a leg then ι_X(f(l))=f(ι_X̃(l))=f(l), so f(l)∈ L(X) is also a leg. On the other hand, if e={h,h'}∈ E(X̃) is an edge, then either f(h)≠ f(h'), in which case f maps e to an edge f(e)={f(h),f(h')}∈ E(X), or f(h)=f(h')∈ L(X) is a leg. In other words, edges can map to edges or fold to legs. However, we do not allow morphisms to contract edges or half-legs, in other words we consider only finite morphisms.
Let X be a graph and let G be a group acting on the right on X. In other words, each g∈ G defines an automorphism of X, which we denote x↦ xg for x∈ V(X)∪ H(X), such that x(g_1g_2)=(xg_1)g_2 for all x∈ V(X)∪ H(X) and all g_1,g_2∈ G. We define the vertices and half-edges of the quotient graph X/G as the G-orbits of V(X) and H(X):
V(X/G)=V(X)/G={vG:v∈ V(X)},
H(X/G)=H(X)/G={hG:h∈ H(X)},
and descending the root and involution maps:
r_X/G(hG)=r_X(h)G, ι_X/G(hG)=ι_X(h)G.
The quotient projection p:X→ X/G sends each element of X to its orbit.
Let h∈ H(X) be a half-edge with orbit p(h)=hG∈ H(X/G). If h is a leg, then ι_X/G(hG)=ι_X(h)G=hG so p(h)=hG∈ L(X/G) is also a leg. If h belongs to an edge e={h,h'}∈ E(X), then there are two possibilities. If h'≠ hg for all g∈ G, then the orbits hG and h'G are distinct half-edges of X/G forming an edge p(e)={hG,h'G}∈ E(X/G). However, if h'=hg for some g∈ G (in other words, if the G-action flips the edge e), then p(e)=hG=h'G∈ L(X/G) is a leg.
In Serre's original definition, the involution ι_X on a graph X is required to be fixed-point-free, and hence the set H(X) of half-edges is partitioned into edges only. Relaxing this condition enables us to consider quotients by group actions that flip edges. We give a simple example below and two extended examples in Sections <ref> and <ref>.
Let X be the graph with two vertices joined by an edge. There is a unique nontrivial morphism f:X→ X exchanging the two vertices, so Aut(X) is the cyclic group of order two. The quotient X/Aut(X) is the graph having one leg at one vertex, and is in fact the terminal object in the category of graphs with legs, while no such object exists in the category of graphs.
§.§ The graph Laplacian and chip-firing.
We now recall divisor theory on a graph X. We follow the framework of the paper <cit.>, which we reformulate in terms of half-edges. Specifically, we use a detailed factorization of the Laplacian which can be conveniently generalized to graphs of groups. A minor additional advantage is that we are never required to pick an orientation for the graph.
For a set S, we denote by ℤ^S and ℤ^S_0 respectively the free abelian group on S and the subgroup consisting of elements whose coefficients sum to zero. The free abelian group ℤ^V(X) is called the divisor group of X, and a divisor D=∑_v∈ V(X) a_v v is interpreted as a distribution of a_v chips on each vertex v. The root and involution maps r_X:H(X)→ V(X) and ι_X:H(X)→ H(X) induce homomorphisms
r_X:ℤ^H(X)→ℤ^V(X), ι_X:ℤ^H(X)→ℤ^H(X)
on the corresponding free abelian groups (denoted by the same letters by abuse of notation). Let τ_X denote the transpose of r_X:
τ_X:ℤ^V(X)→ℤ^H(X), τ_X(v)=∑_h∈ T_vX h.
The Laplacian of a graph X is the homomorphism L_X:ℤ^V(X)→ℤ^V(X) given by
L_X=r_X∘(Id-ι_X)∘τ_X, L_X(v)=∑_h∈ T_vX(v-r_X(ι_X(h))).
Figure <ref> displays all the maps involved in defining the graph Laplacian. It is elementary to verify that Im L_X⊂ℤ^V(X)_0, where Im L_X is the subgroup of principal divisors on X, and in fact ℤ^V(X)_0=Im(r_X∘(Id-ι_X)) if the graph X is connected.
The Jacobian of a graph X is the quotient group
Jac(X)=ℤ^V(X)_0/Im L_X=Im(r_X∘(Id-ι_X))/Im L_X.
The Jacobian Jac(X) is also known as the critical group of X. Kirchhoff's matrix-tree theorem states that Jac(X) is a finite group whose order is equal to the number of spanning trees of X. Given a vertex v∈ V(X), the divisor -L_X(v) is obtained by firing the vertex v, in other words by moving a chip from v along each half-edge h∈ T_vX to the root vertex of ι_X(h). Chips moved along legs and loops return to v, hence legs and loops of X do not contribute to the Laplacian or the Jacobian group, and Jac(X) is canonically isomorphic to the Jacobian of the graph obtained by removing all legs and loops. However, legs and loops naturally occur when taking quotients by group actions, so we nevertheless consider them.
We give an explicit presentation for the matrix L of the graph Laplacian L_X. Let n=|V(X)| and m=|E(X)| denote the number of vertices and edges, respectively. Then L=Q-A, where Q and A are the n× n valency and adjacency matrices of X:
L_uv=Q_uv-A_uv,
Q_uv=δ_uv val(v), A_uv=|{h∈ T_vX:r_X(ι_X(h))=u}|.
These matrices have the following convenient factorizations. Pick an orientation on X and define the n× m root matrices
S_ve={[ 1, s(e)=v,; 0, s(e)≠ v, ].,
T_ve={[ 1, t(e)=v,; 0, t(e)≠ v. ].
It is then easy to verify that
Q=SS^t+TT^t, A=ST^t+TS^t, L=Q-A=(S-T)(S-T)^t.
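To make this matrix description concrete, here is a small computational sketch (our own illustration, not code from the paper): it builds L for a 4-cycle, forms the reduced Laplacian, and reads off the number of spanning trees, i.e. |Jac(X)|; the Smith normal form call, which gives the invariant factors of Jac(X), assumes a recent SymPy version.

    import numpy as np
    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    # 4-cycle: vertices 0..3, oriented edges (0,1), (1,2), (2,3), (3,0)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    n, m = 4, len(edges)
    S = np.zeros((n, m), dtype=int)   # S_ve = 1 if v = s(e)
    T = np.zeros((n, m), dtype=int)   # T_ve = 1 if v = t(e)
    for e, (u, v) in enumerate(edges):
        S[u, e], T[v, e] = 1, 1

    L = (S - T) @ (S - T).T                              # L = Q - A
    reduced = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
    num_spanning_trees = round(np.linalg.det(reduced))   # = |Jac(X)| = 4 here

    snf = smith_normal_form(Matrix(reduced.tolist()), domain=ZZ)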
§.§ Harmonic morphisms of graphs
Given a morphism of graphs f:X̃→ X, there is generally no relationship between Jac(X̃) and Jac(X). However, we can define functoriality with respect to a class of graph morphisms that admit a local degree function on the vertices of the source graph (see <cit.> and <cit.>).
A graph morphism f:X̃→ X is called harmonic if there exists a function d_f:V(X̃)→ℤ_{>0}, called the local degree, such that for any ṽ∈ V(X̃) and any h∈ T_f(ṽ)X we have
d_f(ṽ)=|{h̃∈ T_ṽX̃:f(h̃)=h}|.
For example, a covering space f:X̃→ X (in the topological sense) is the same thing as a harmonic morphism with d_f(ṽ)=1 for all ṽ∈ V(X̃). If X is connected, then any harmonic morphism f:X̃→ X has a global degree equal to
deg(f)=∑_ṽ∈ f^-1(v)d_f(ṽ)=|f^-1(h)|
for any v∈ V(X) or any h∈ H(X). In particular, any harmonic morphism to a connected graph is surjective (on the edges and the vertices).
Let f:X̃→ X be a harmonic morphism of graphs, and denote
f_*:ℤ^V(X̃)→ℤ^V(X), f_*(ṽ)=f(ṽ),
f_*:ℤ^H(X̃)→ℤ^H(X), f_*(h̃)=f(h̃)
the induced homomorphisms on the free abelian groups. For any graph morphism (not necessarily harmonic) we have
f_*∘ r_X̃=r_X∘ f_*, f_*∘ι_X̃=ι_X∘ f_*.
For any ṽ∈ V(X̃) we have
(f_*∘τ_X̃)(ṽ)=d_f(ṽ)·(τ_X∘ f_*)(ṽ)
by the harmonicity of f, therefore
(f_*∘ L_X̃)(ṽ)=d_f(ṽ)·(L_X∘ f_*)(ṽ).
It follows that f_*(Im L_X̃)⊂Im L_X and the map f_* descends to a surjective pushforward map
f_*:Jac(X̃)→Jac(X).
Similarly, if X is connected, we define the maps
f^*:ℤ^V(X)→ℤ^V(X̃),
f^*(v)=∑_ṽ∈ f^-1(v)d_f(ṽ)·ṽ
and
f^*:ℤ^H(X)→ℤ^H(X̃),
f^*(h)=∑_h̃∈ f^-1(h)h̃.
It is easy to verify that
f^*(L_X(v))=∑_ṽ∈ f^-1(v)L_X̃(ṽ)
for any v∈ V(X), hence f^*(Im L_X)⊂Im L_X̃ and there is an induced pullback map
f^*:Jac(X)→Jac(X̃).
The map f^*:Jac(X)→Jac(X̃) is injective (Theorem 4.7 in <cit.>), and the composition f_*∘ f^* acts by multiplication by deg(f) on Jac(X). Figure <ref> displays all the maps associated to a harmonic morphism of graphs.
§.§ Graphs of groups
We now recall graphs of groups, which are the natural category for taking quotients of graphs by non-free group actions. We modify the definitions in <cit.> to allow graphs with legs (and thus quotients by group actions that flip edges).
A graph of groups =(X,_v,_h) consists of the following data:
* A graph X (possibly with legs).
* A group _v for each vertex v∈ V(X).
* A subgroup _h⊂_r_X(h) for each half-edge h∈ H(X).
* An isomorphism i_h:_h→_ι_X(h) for each edge {h,ι_X(h)}∈ E(X), where we assume that i_ι_X(h)=i_h^-1.
Our definition differs slightly from the standard one <cit.>, where one assumes that the two groups _h and _ι_X(h) corresponding to an edge are the same, and instead records monomorphisms _h→_r(h). The two approaches are equivalent in the case when there are no legs. We consider only finite graphs of groups, so that the underlying graph and all vertex groups are finite.
We now define the quotient graph of groups by a right group action on a graph. The standard definition in <cit.> uses a trivialization with respect to a choice of spanning tree in the quotient graph and a lift of the tree to the source graph, and records the gluing data on the complementary edges (with respect to a choice of orientation). We find it more natural to instead trivialize the neighborhood of every vertex.
Let G be a group acting on the right on a graph , let X=/G be the quotient graph, and let p:→ X be the quotient map. We define the quotient graph of groups //G=(X,_v,_h) on X as follows:
* Choose a section (·):V(X)→ V() of the map p:V()→ V(X). For each vertex v∈ V(X), _v=G_={g∈ G: g=} is the stabilizer of the chosen preimage ∈ p^-1(v).
* Choose a section (·):H(X)→ H() of the map p:H()→ H(X) with the property that r_(þ)=r_X(h) for all h∈ H(X). For each half-edge h∈ H(X), _h=G_þ={g∈ G:þ g=þ} is the stabilizer of the chosen preimage þ∈ p^-1(h). It is clear that _h=G_þ⊂ G_r_(þ)=_r_X(h).
For v∈ V(X) and g∈ G we denote _g= g (so that _1=); this identifies the fiber p^-1(v)={_g:g∈ G} with the set _v\ G of right cosets of _v in G. Similarly, given h∈ H(X) and g∈ G we denote þ_g=þ g (so that þ_1=þ), so that p^-1(h)={þ_g:g∈ G} is identified with _h\ G. Hence
V()=∐_v∈ V(X)_v\ G, H()=∐_h∈ H(X)_h\ G
as sets, and under this identification the root and projection maps and the G-action are given by
p(_g)=v, p(þ_g)=h, r_(þ_g)=r_X(h)_g, _gg'=_gg', þ_gg'=þ_gg'
for v∈ V(X), h∈ H(X), and g,g'∈ G.
Finally, let h∈ H(X) be a half-edge. Applying the involution on to þ gives a half-edge lying over h'=ι_X(h) (it may be that h'=h). Therefore there exists an element β(h)∈ G, unique up to left multiplication by _h', such that ι_(þ)=h'_β(h). It follows that
ι_(þ_g)=ι_X(h)_β(h)g
for all h∈ H(X) and g∈ G. We observe that _h'=β(h)_hβ(h)^-1. We can choose the β(h) so that β(h')=β(h)^-1 for all h (in general, they only satisfy β(h')β(h)∈_h). The required isomorphism i_h:_h→_ι_X(h) is then given by conjugation by β(h).
We can run the construction in reverse and recover the morphism p:→ X together with the G-action on from the quotient graph of groups X//G together with the chosen elements β(h)∈ G (in keeping with graph-theoretic terminology, we may call the β(h) a generalized G-voltage assignment on X//G). First of all, we assume that the vertex and half-edge groups are given not simply as abstract groups, but as subgroups of G. Hence we can define as a set by Equation (<ref>). The root and projection maps are given by Equation (<ref>), so that is trivialized in the neighborhood of each vertex. Finally, the involution map is given by Equation (<ref>) and defines how the tangent spaces of the vertices are glued to each other. We note that for an edge {h,h'}∈ E(X) we may choose β(h)∈ G arbitrarily and then set β(h')=β(h)^-1, but for a leg h∈ L(X) the element β(h)∈ G must have order two (or be the identity), and furthermore must lie in the normalizer of _h. The fiber p^-1(h) over the leg h consists of legs if β(h)∈_h (in which case we may as well have chosen β(h)=1) and edges if β(h)∉_h.
Two generalized G-voltage assignments on X//G are equivalent if they define isomorphic G-covers → X. The set of equivalence classes of voltage assignments may be constructed as the first Čech cohomology set of an appropriate constructible sheaf of non-abelian groups on X. This set was explicitly described in <cit.> for an abelian group G (in which case the set is also an abelian group), and the construction immediately generalizes to the non-abelian case. We leave the details to the interested reader.
§ THE LAPLACIAN AND THE JACOBIAN GROUP OF A GRAPH OF GROUPS
Let G be a finite group acting on a finite graph , and let p:→ X=/G be the quotient map. If the action of G is free, then p is a covering space and hence a harmonic morphism, and induces pushforward and pullback homomorphisms p_*:()→(X) and p^*:(X)→(). However, for an arbitrary G-action there is no natural relationship between () and (X). The solution is to replace X with the quotient graph of groups =//G, and to define the chip-firing operation on //G in a way that takes into account the orders of the local stabilizers. We now describe this construction.
§.§ Chip-firing on a graph of groups
Let =(X,_v,_h) be a graph of groups, and let ^V(X) and ^H(X) be the free abelian groups on the vertices and half-edges of the underlying graph, respectively. As for graphs, we call ^V(X) the divisor group of , and interpret divisors as distributions of chips on the vertices of the underlying graph X (the chips are not weighted in any way). As before, the root and involution maps induce homomorphisms
r_X:^H(X)→^V(X), ι_X:^H(X)→^H(X).
For v∈ V(X) denote c(v)=|_v| the order of the local group at v, and similarly for h∈ H(X) denote c(h)=|_h|. Given an edge e={h,h'}∈ E(X), we denote c(e)=c(h)=c(h'). For each half-edge h∈ H(X) rooted at v=r_X(h), there is an inclusion _h⊂_v of the local groups, hence c(h) divides c(v). We now define the weighted transpose of r_X by the formula
τ_:^V(X)→^H(X), τ_(v)=
∑_h∈ T_vXc(v)/c(h)h.
The Laplacian of the graph of groups =(X,_v,_h) is the homomorphism L_:^V(X)→^V(X) given by
L_=r_X∘(Id-ι_X)∘τ_, L_(v)=∑_h∈ T_vXc(v)/c(h)(v-r_X(ι_X(h))).
Given a vertex v∈ V(G), the divisor -L_(v) is the result of firing the vertex v. It is obtained by moving, along each edge e={h,h'} rooted at v, a stack of c(v)/c(e) chips from v to the other root vertex of e. As in the case of graphs, if h is a leg or belongs to a loop then r_X(h)=r_X(ι_X(h)), so loops and legs do not contribute to the Laplacian. However, the chip-firing operation is not symmetric: firing two adjacent vertices will in general cause them to exchange chips. As before, if X is connected then ^V(X)_0=(r_X∘(Id-ι_X)), so the group of principal divisors L_ lies in ^V(X)_0. Hence we can define the Jacobian group of in the same way as for graphs:
The Jacobian group of a graph of groups is the quotient group
()=^V(X)_0/ L_.
We give an explicit formula for the matrix L of the Laplacian L_ of a graph of groups =(X,_v,_h). Assume that X has no legs (this does not affect the Laplacian), and let n=|V(X)| and m=|E(X)| be the number of vertices and edges, respectively. Then L=Q-A, where Q is the diagonal valency matrix and A is the adjacency matrix of the graph of groups :
L_uv=Q_uv-A_uv,
Q_uv=δ_uv∑_h∈ T_vXc(v)/c(h), A_uv=∑_h∈ T_vX: r_X(ι_X(h))=uc(v)/c(h).
We note that L and A are not symmetric in general. The Laplacian L is degenerate: each of its columns (the principal divisors L_(v)) sums to zero, but its rows generally do not.
We introduce the following matrix factorizations. Let C_V and C_E be the respectively n× n and m× m diagonal matrices
(C_V)_uv=c(u)δ_uv, (C_E)_ef=c(e)δ_ef
recording the orders of the local groups. Let S and T be the root matrices (<ref>) of X, with respect to a choice of orientation. It is then elementary to verify that
Q=SC_E^-1S^tC_V+TC_E^-1T^tC_V, A=SC_E^-1T^tC_V+TC_E^-1S^tC_V, L=(S-T)C_E^-1(S-T)^tC_V.
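For concreteness, here is a small computational sketch (ours; the weights are toy data) verifying the factorization on a triangle with vertex weights c(v)=(1,2,2) and edge weights c(e)=(1,1,2) — the same weights that arise for the quotient K_4//C_2 discussed below, whose Laplacian this reproduces:

```python
import sympy as sp

# A triangle with vertices 0, 1, 2; assumed local group orders (toy data):
# c(v) = 1, 2, 2 and edge weights c(e) = 1, 1, 2 for the edges (0,1), (0,2), (1,2).
edges = [(0, 1), (0, 2), (1, 2)]
C_V = sp.diag(1, 2, 2)
C_E = sp.diag(1, 1, 2)
S = sp.Matrix(3, 3, lambda v, e: 1 if edges[e][0] == v else 0)
T = sp.Matrix(3, 3, lambda v, e: 1 if edges[e][1] == v else 0)

Q = (S * C_E.inv() * S.T + T * C_E.inv() * T.T) * C_V
A = (S * C_E.inv() * T.T + T * C_E.inv() * S.T) * C_V
L = (S - T) * C_E.inv() * (S - T).T * C_V
print(L)            # Matrix([[2, -2, -2], [-1, 3, -1], [-1, -1, 3]])
assert L == Q - A
# Each column (a fired vertex) is a degree-zero divisor.
assert all(sum(L[i, j] for i in range(3)) == 0 for j in range(3))
```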
For future use, we also require the adjugate of the Laplacian.
The adjugate of the Laplacian matrix L of a graph of groups =(X,_v,_h) is equal to
adj(L)=C_V^-1Jξ.
Here J is the matrix whose entries are all equal to 1, and the constant ξ is equal to
ξ=∏_v∈ V(X)c(v)∑_T⊂ X∏_e∈ E(T)c(e)^-1,
where the sum is taken over all spanning trees T of X.
The adjugate of the Laplacian L of an ordinary graph X is equal to J·κ(X), where κ(X)=|(X)| is the number of spanning trees, and is computed by applying the Cauchy–Binet formula to the factorization L=(S-T)(S-T)^t (see, for example, Theorem 6.3 in <cit.>). Applying the same proof to the Laplacian of a graph of groups and using the factorization in Equation (<ref>) gives the desired result.
We note that defining chip-firing on a graph of groups =(X,X_v,X_h) uses only the underlying graph and the orders c(v)=|_v| and c(h)=|_h| of the local groups. The structure of the groups is irrelevant, which is not surprising given that chip-firing is an abelian theory. In particular, given a group action of G on X, the choices of the local stabilizers that are made when defining the quotient graph of groups X//G do not affect chip-firing.
Furthermore, this definition of chip-firing makes sense for any graph whose vertices and edges are equipped with weights c(v) and c(e), with the condition that the weight of any edge divides the weights of its root vertices. The weights themselves need not be integers, so for example rescaling all weights by an arbitrary factor does not change the chip-firing map. This framework allows one to modify the edges and edge weights of a graph without changing the chip-firing map. For example, one may eliminate edge weights entirely by dividing all weights by a sufficiently large number such that each edge e has weight 1/n(e) for some integer n(e), and then replacing each edge e with n(e) unweighted edges. Conversely, a set {e_1,…,e_n} of edges joining two vertices can be replaced by a single edge e with weight c(e)=(c(e_1)^-1+⋯+c(e_n)^-1)^-1, so chip-firing on any weighted graph is equivalent to chip-firing on a simple graph (without multi-edges). Vertex weights, however, cannot be modified away.
§.§ The order of the Jacobian via spanning trees
We now compute the order of the Jacobian ()=(X,_v,_h) of a graph of groups in two different ways. The first formula generalizes Kirchhoff's theorem and computes () as a weighted sum over the spanning trees of X. A similar formula for a graph with trivial vertex weights appears in Theorem 4.1 in <cit.>.
Let =(X,_v,_h) be a graph of groups. For each vertex v∈ V(X) and edge e={h,h'}∈ E(X), let c(v)=|_v| and c(e)=|_h|=|_h'| be the orders of the local groups. The order of the Jacobian of is equal to
|()|=c_v^-1∏_v∈ V(X)c(v)∑_T⊂ X∏_e∈ E(T)c(e)^-1,
where c_v is the least common multiple of the vertex weights c(v), and the sum is taken over all spanning trees T of X.
Denote n=|V(X)| and m=|E(X)| and label the vertices of X as V(X)={v_1,…,v_n}. Fix an orientation on X, then the matrix L of the Laplacian of admits the factorization (<ref>)
L=BC_E^-1B^TC_V, B=S-T,
where C_E and C_V are diagonal matrices recording the c(e) and the c(v). Let L=[u_1⋯ u_n] denote the columns of L, these vectors satisfy the relation
u_1/c(v_1)+⋯+u_n/c(v_n)=0.
The matrix L defines the chip-firing map L:^n→^n, whose image lies in the kernel of the degree map deg:^n→ (which sums the components). Fix the vertex v_n and let π:^n→^n-1 be the homomorphism that forgets the last coordinate; it is clear that π maps ^n_0 isomorphically onto ^n-1. The matrix of the composed map ^n→^n→^n-1 is L'=[u'_1⋯ u'_n], which is L with the last row removed. Then the Jacobian is
()=^n_0/ L=^n-1/ L'.
Let L̂=[u'_1⋯ u'_n-1] be the matrix obtained by removing the last column from L', then
|()|=|^n-1/L̂|/| L'/L̂|.
The group  L'/L̂ is the finite cyclic group generated by the vector u'_n over the lattice ⟨ u'_1,…, u'_n-1⟩. Clearing denominators in (<ref>), we obtain the minimal relation between the u'_i:
c_v/c(v_1)u'_1+⋯+c_v/c(v_n)u'_n=0.
Hence the order of u'_n and thus the denominator in (<ref>) is equal to
| L'/L̂|=|⟨ u'_1,…, u'_n⟩/⟨ u'_1,…, u'_n-1⟩|=c_v/c(v_n).
The numerator in (<ref>) is the determinant of the (n-1)× (n-1) matrix L̂ obtained from L by deleting the last row and column. By Lemma <ref>, it is equal to
|^n-1/L̂|=det L̂=ξ/c(v_n)=∏_i=1^n-1c(v_i)∑_T⊂ X∏_e∈ E(T)c(e)^-1.
Plugging the above two equations into (<ref>), we obtain the result.
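The theorem can be checked mechanically on the toy weighted triangle used above (our sketch, with the weights assumed there): the weighted spanning-tree sum gives order 4, matching the torsion of the cokernel of its Laplacian.

```python
from itertools import combinations
from functools import reduce
from math import lcm
import sympy as sp

# Toy data (ours): the weighted triangle from the previous sketch.
vertices = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]
c_v = {0: 1, 1: 2, 2: 2}
c_e = {(0, 1): 1, (0, 2): 1, (1, 2): 2}

def spans(tree):
    """True if the edge subset `tree` connects all of the vertices."""
    comps = {v: {v} for v in vertices}
    for a, b in tree:
        merged = comps[a] | comps[b]
        for v in merged:
            comps[v] = merged
    return len(comps[vertices[0]]) == len(vertices)

# Right-hand side of the theorem: c_v^{-1} * prod_v c(v) * sum_T prod_{e in T} 1/c(e).
tree_sum = sp.Integer(0)
for tree in combinations(edges, len(vertices) - 1):
    if spans(tree):
        term = sp.Integer(1)
        for e in tree:
            term *= sp.Rational(1, c_e[e])
        tree_sum += term
order = tree_sum
for v in vertices:
    order *= c_v[v]
order /= lcm(*c_v.values())
print(order)                     # 4

# Cross-check against the Laplacian: the Jacobian is the torsion of its cokernel,
# whose order is the last nonzero determinant divisor (gcd of k x k minors).
L = sp.Matrix([[2, -2, -2], [-1, 3, -1], [-1, -1, 3]])
minors2 = [L.extract(list(r), list(c)).det()
           for r in combinations(range(3), 2) for c in combinations(range(3), 2)]
print(reduce(sp.gcd, minors2))   # 4  (the 1 x 1 minors have gcd 1)
```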
§.§ The order of the Jacobian via the zeta function
We give an alternative method for computing the order of the Jacobian group () of a graph of groups. Recall that the Ihara zeta function ζ(u,X) of a graph X is an analogue of the Dedekind zeta function of a number field. It is defined as an Euler product over the primes of X, which are equivalence classes of certain closed walks on X. Unlike its arithmetic analogue, the Ihara zeta function ζ(u,X) is the reciprocal of an explicit polynomial associated to X. Specifically, let n=|V(X)| and m=|E(X)|, and let Q and A be the n× n valency and adjacenty matrices of X, then Bass's three-term determinant formula (see <cit.> and <cit.>) states that
ζ(u,X)^-1=(1-u^2)^m-n det(I_n-Au+(Q-I_n)u^2).
The Ihara zeta function of a graph exhibits a number of remarkable similarities to the Dedekind zeta function. For example, it satisfies a graph-theoretic analogue of the class number formula, with (X) playing the role of the ideal class group. Specifically, at u=1 the zeta function has a pole of order g=m-n+1 (if g≥ 2) and its reciprocal has the following Taylor expansion (see <cit.>):
ζ(u,)^-1= 2^g(-1)^g+1(g-1) |()|· (u-1)^g+O((u-1)^g+1).
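For example (an illustrative check of our own), for X=K_4 one has g=3 and |(K_4)|=16, and the expansion above can be confirmed with a few lines of computer algebra:

```python
import sympy as sp

u = sp.symbols('u')
n, m = 4, 6                           # K_4
g = m - n + 1                         # g = 3
A = sp.ones(n, n) - sp.eye(n)         # adjacency matrix of K_4
Q = 3 * sp.eye(n)                     # valency matrix

zeta_inv = (1 - u**2)**(m - n) * (sp.eye(n) - A*u + (Q - sp.eye(n))*u**2).det()
leading = sp.cancel(zeta_inv / (u - 1)**g).subs(u, 1)
print(leading)                                   # 256
print(2**g * (-1)**(g + 1) * (g - 1) * 16)       # 256, with |Jac(K_4)| = 16
```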
It is a natural problem to generalize closed walks and the Ihara zeta function to graphs of groups. In <cit.>, the second author defined ζ(u,) for a graph of groups having trivial edge groups and proved an analogue of Bass's three-term determinant formula for ζ(u,) (see Theorem 3.8 in <cit.>), and in upcoming work will extend these results to arbitrary graphs of groups.
It is natural to expect that the Ihara zeta function ζ(u,) of a graph of groups computes the order of (). We show that this is indeed the case, provided that ζ(u,) satisfies an analogue of Bass's three-term determinant formula (which it does in the edge-trivial case by Theorem 3.8 of <cit.>).
Let =(X,_v,_h) be a finite graph of groups on a graph with n=|V(X)| vertices and m=|E(X)| edges. Define the Ihara zeta function of by the formula
ζ(u,)^-1=(1-u^2)^m-n det(I_n-Au+(Q-I_n)u^2),
where Q and A are the valency and adjacency matrices (<ref>) of . Then ζ(u,)^-1 has a zero of order g=m-n+1 at u=1, and has leading coefficient
ζ(u,)^-1= 2^g(-1)^g+1c_v(∑_e∈ E(X)c(e)^-1-∑_v∈ V(X)c(v)^-1) |()|· (u-1)^g+O((u-1)^g+1),
where c_v is the least common multiple of the vertex weights c(v).
Plugging u=1 into the determinant we get
det(I_n-A+(Q-I_n))=det L=0,
since the Laplacian is singular. The term (1-u^2)^m-n has a zero of order g-1 at u=1 with leading coefficient 2^g-1(-1)^g+1. Therefore ζ(u,) has a zero of order at least g at u=1, and it is sufficient to show that
d/du det(I_n-Au+(Q-I_n)u^2)|_u=1=2c_v |()|(∑_e∈ E(X)c(e)^-1-∑_v∈ V(X)c(v)^-1).
We follow the proof of Theorem 2.11 in <cit.>. Using Jacobi's formula, we have
d/du det(I_n-Au+(Q-I_n)u^2)|_u=1= tr[adj(I_n-Au+(Q-I_n)u^2)· d/du(I_n-Au+(Q-I_n)u^2)]|_u=1=
= tr[adj(Q-A)·(2Q-A-2I_n)]=
= tr(adj(L)· Q)-2 tr(adj(L)),
where we used that L=Q-A and therefore
adj(L)· (Q-A)=adj(L)· L=(det L)· I_n=0.
By Lemma <ref> and Equation (<ref>) we have
tr(adj(L))=ξ tr(C_V^-1J)=ξ tr(C_V^-1)=ξ∑_v∈ V(X)c(v)^-1,
tr(adj(L)· Q)=ξ tr[C_V^-1J(SC_E^-1S^tC_V+TC_E^-1T^tC_V)]=ξ tr[J(SC_E^-1S^t+TC_E^-1T^t)]=2ξ∑_e∈ E(X)c(e)^-1,
where
ξ=∏_v∈ V(X)c(v)∑_T⊂ X∏_e∈ E(T)c(e)^-1=c_v |()|
by Theorem <ref>. Plugging these into Equation (<ref>), we obtain the desired result.
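As a sanity check (ours), the theorem can be verified numerically for the toy weighted triangle used earlier, with legs stripped as in the definition of Q and A; there g=1, c_v=2 and the Jacobian has order 4:

```python
import sympy as sp

u = sp.symbols('u')
# Toy graph of groups (ours): the weighted triangle, legs stripped.
# Vertex weights (1, 2, 2), edge weights (1, 1, 2); n = m = 3, so g = 1.
Q = sp.diag(2, 3, 3)                                  # valency matrix (no leg contribution)
A = sp.Matrix([[0, 2, 2], [1, 0, 1], [1, 1, 0]])      # adjacency matrix
n, m, g = 3, 3, 1

zeta_inv = (1 - u**2)**(m - n) * (sp.eye(n) - A*u + (Q - sp.eye(n))*u**2).det()
leading = sp.cancel(zeta_inv / (u - 1)**g).subs(u, 1)
print(leading)    # 8

c_v = 2                                               # lcm of the vertex weights
jac_order = 4                                         # |Jac| of this graph of groups
rhs = 2**g * (-1)**(g + 1) * c_v * (sp.Rational(5, 2) - 2) * jac_order
print(rhs)        # 8, since sum 1/c(e) = 5/2 and sum 1/c(v) = 2
```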
§ THE JACOBIAN OF A QUOTIENT GRAPH OF GROUPS
We now determine the relationship between the Jacobians () and (), where is a graph with a right G-action and =X//G=(X,_v,_h) is the quotient graph of groups.
§.§ Pushforward and pullback to the quotient
Let X=/G be the quotient graph, let p:→ X be the quotient map, and let c(v)=|_v| and c(h)=|_h| be the vertex and edge weights. We recall the description of in terms of and a voltage assignment β:H(X)→ G given in Section <ref>. Following Equation (<ref>), we make the identifications
^V()=⊕_v∈ V(X)^_v\ G, ^H()=⊕_h∈ H(X)^_h\ G,
where the summands correspond to the fibers of p. The generators of ^_v\ G are denoted _g for v∈ V(X) and g∈ G, where _g=_g' if and only if _vg=_vg', and similarly for half-edges.
It is elementary to verify that, in terms of these identifications, the maps r_, ι_, and τ_ are given by the following formulas on the generators:
r_(þ_g)=r_X(h)_g, ι_(þ_g)=ι_X(h)_β(h)g, τ_(_g)=∑_h∈ T_vX∑_g'∈_h\_vþ_g'g.
We note that the G-action on naturally defines right G-module structures on ^V() and ^H(), but we do not use this. The various homomorphisms between the free abelian groups associated to the quotient p:→ X are shown on Figure <ref> (the objects in the top row are described in Section <ref>).
We define the pushforward homomorphisms
p_*:^V()→^V(X), p_*:^H()→^H(X)
on the generators by the formulas
p_*(_g)=v, v∈ V(X), p_*(þ_g)=h, h∈ H(X).
We note that the formulas are the same as for a harmonic morphism, in other words, p_* simply adds up the chips in each fiber without any additional weights.
The pushforward homomorphism p_*:^V()→^V(X) commutes with the Laplacians
p_*∘ L_X=L_∘ p_*
and defines a surjective homomorphism p_*:()→().
The identities
p_*∘ r_=r_X∘ p_*, p_*∘ι_=ι_X∘ p_*
hold because p is a morphism of the underlying graphs (though not harmonic in general). It remains to see how p_* interacts with τ_ and τ_. Let _g∈ V() be a vertex lying over p(_g)=v. By Equation (<ref>), we have
(p_*∘τ_)(_g)=p_*[∑_h∈ T_vX∑_g'∈_h\_vþ_g'g]=∑_h∈ T_vX∑_g'∈_h\_vh=∑_h∈ T_vX|_v|/|_h|h,
which is exactly
(τ_∘ p_*)(_g)=τ_(v)=∑_h∈ T_vXc(v)/c(h)h.
We therefore see that
p_*∘τ_=τ_∘ p_*, p_*∘ L_X=L_∘ p_*,
and hence p_* induces a homomorphism p_*:()→(), which is surjective because the original map p_*:^V()→^V(X) is surjective.
We also define a pullback homomorphism as follows. Define homomorphisms
p^*:^V(X)→^V(), p^*:^H(X)→^H()
on the generators as follows:
p^*(v)=c(v)∑_g∈_v\ G_g, p^*(h)=c(h)∑_g∈_h\ Gþ_g.
The pullback homomorphism p^*:^V(X)→^V() commutes with the Laplacians
L_∘ p^*=p^*∘ L_
and defines a homomorphism p^*:()→(). Furthermore, the homomorphism p_*∘ p^* acts by multiplication by |G| on ().
Let h∈ H(X) be a half-edge rooted at v=r_X(h)∈ V(X). Then
(r_∘ p^*)(h)=r_[|_h|∑_g∈_h\ Gþ_g]=|_h|∑_g∈_h\ G_g=|_h|∑_g∈_v\ G|_v|/|_h|_g=p^*(v)=(p^*∘ r_X)(h),
hence r_∘ p^*=p^*∘ r_X. Similarly, ι_∘ p^*=p^*∘ι_X because c(ι_X(h))=c(h) for all h∈ H(X). Finally, let v∈ V(X), then by Equation (<ref>) we have
(p^*∘τ_)(v)=p^*[∑_h∈ T_vX|_v|/|_h|h]=∑_h∈ T_vX|_v|/|_h||_h|∑_g∈_h\ Gþ_g=|_v|∑_h∈ T_vX∑_g∈_h\ Gþ_g,
while by Equation (<ref>)
(τ_∘ p^*)(v)=τ_[|_v|∑_g∈_v\ G_g,]=|_v|∑_g∈_v\ G∑_h∈ T_vX∑_g'∈_h\_vþ_g'g,
and the two sums agree since each right _v-coset is naturally partitioned into _h-cosets. Therefore τ_∘ p^*=p^*∘τ_, and putting everything together we get L_∘ p^*=p^*∘ L_. Hence the pullback map induces a homomorphism p^*:()→(), and (p_*∘ p^*)(v)=|G|v for any v∈ V(X) by the orbit-stabilizer theorem.
We note that, unlike the case of graphs, the pullback homomorphism p^* need not be injective. For example, let G act trivially on any graph X, then (X//G)=(X) and p^*:(X//G)→(X) acts by multiplication by |G|, which is the trivial map if |G| is divisible by |(X)|.
It is instructive to compare the pushforward p_* and pullback p^* homomorphisms associated to a G-cover p:→ X to those associated to a harmonic morphism f:→ X. Comparing Equation (<ref>) with (<ref>), and similarly (<ref>) with (<ref>), we offer the following stack-theoretic interpretation of the morphisms p_* and p^*. The map p views a vertex ∈ V() lying over v=p() as a set of c(v) indistinguishable vertices that have been identified by the G-action. The morphism p may then be viewed as a harmonic morphism having local degree one at each of these identified vertices. This explains why no degree coefficient appears in Equation (<ref>), in contrast to Equation (<ref>). Similarly, the coefficient c(v) in Equation (<ref>) should be viewed as a count of these identified vertices, and not as a local degree coefficient as in Equation (<ref>). With this interpretation, p is a covering space map (in the stacky sense) of global degree |G|.
More generally, one can define the notion of a harmonic morphism of graphs of groups f:→ inducing pushforward and pullback homomorphisms f_*:()→() and f^*:()→(). Such a map f is required to satisfy a balancing condition at vertices that takes the local weighs on both and into account. A natural example is the subquotient map X//H→ X//G corresponding a subgroup H⊂ G of a group G acting on a graph X. We leave the details to the interested reader.
§.§ Quotients of the tetrahedron
As a simple example, we consider all interesting quotients of K_4, the complete graph on 4 vertices. Denote V(K_4)={a,b,c,d}. It is well-known that
(K_4)≃/4⊕/4.
Specifically, (K_4) is generated by the classes of the divisors
D_a=a-d, D_b=b-d, D_c=c-d
subject to the relations
4D_a=4D_b=4D_c=D_a+D_b+D_c=0.
The automorphism group of K_4 is S_4, and we consider the quotients K_4//G for all subgroups G⊂ S_4 that act non-transitively on the vertices (otherwise the quotient graph has a single vertex and its divisor theory is trivial). There are, up to conjugation, four such subgroups, which we enumerate below. The corresponding quotient graphs of groups are shown in Figure <ref>. Vertices are marked by bold dots, so a line segment with one end vertex represents a leg. Nontrivial stabilizers are labeled by their degree.
* C_2, the order 2 subgroup generated by (ab). The valency, adjacency, and Laplacian matrices of K_4//C_2 are
Q=([ 3 0 0; 0 3 0; 0 0 3 ]),
A=([ 1 2 2; 1 0 1; 1 1 0 ]),
L=([ 2 -2 -2; -1 3 -1; -1 -1 3 ]).
Finding the Smith normal form of L, we see that (K_4//C_2)≃/4. In fact, the Jacobian is generated by the class of D=p_*(D_a)=p_*(D_b), and the pullback map is given by p^*(D)=D_a+D_b. (A computational check of this and the other Smith normal form calculations in this list is sketched after the list.)
* C_2,2, the order 2 subgroup generated by (ab)(cd). The valency, adjacency, and Laplacian matrices of K_4//C_2,2 are
Q=([ 3 0; 0 3 ]),
A=([ 1 2; 2 1 ]),
L=([ 2 -2; -2 2 ]).
The Jacobian is (K_4//C_2,2)≃/2, generated by D=p_*(D_a)=p_*(D_b), while p_*(D_c)=0. The pullback map is p^*(D)=D_a+D_b-D_c.
* V_4, the non-normal Klein 4-group generated by (ab) and (cd). The valency, adjacency, and Laplacian matrices of K_4//V_4 are in fact identical to those of K_4//C_2,2, and the Jacobian is also (K_4//V_4)≃/2.
* C_3, the order 3 subgroup generated by (abc). The valency, adjacency, and Laplacian matrices of K_4//C_3 are
Q=([ 3 0; 0 3 ]),
A=([ 2 3; 1 0 ]),
L=([ 1 -3; -1 3 ]).
Finding the Smith normal form of L, we see that (K_4//C_3) is the trivial group.
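The Smith normal form computations quoted above are easy to reproduce; the following sketch (ours) extracts the invariant factors of the two Laplacians via determinant divisors, whose nontrivial part is the Jacobian:

```python
from itertools import combinations
from functools import reduce
import sympy as sp

def invariant_factors(L):
    """Nonzero invariant factors of an integer matrix, via determinant divisors."""
    factors, prev = [], sp.Integer(1)
    for k in range(1, min(L.shape) + 1):
        minors = [L.extract(list(r), list(c)).det()
                  for r in combinations(range(L.rows), k)
                  for c in combinations(range(L.cols), k)]
        d = reduce(sp.gcd, minors)
        if d == 0:
            break
        factors.append(sp.Integer(d / prev))
        prev = d
    return factors

L_C2 = sp.Matrix([[2, -2, -2], [-1, 3, -1], [-1, -1, 3]])   # Laplacian of K_4 // C_2
L_C3 = sp.Matrix([[1, -3], [-1, 3]])                        # Laplacian of K_4 // C_3
print(invariant_factors(L_C2))   # [1, 4]  -> Jac(K_4 // C_2) = Z/4
print(invariant_factors(L_C3))   # [1]     -> trivial Jacobian
```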
§.§ Quotients of the Petersen graph
As an extended example, we consider the various quotients of the Petersen graph P. We identify the vertices of P with two-element subsets of a five-element set {a,b,c,d,e}. Two vertices are connected by an edge when the corresponding two-element subsets are disjoint. The Jacobian of the Petersen graph is (P)≃/2⊕ (/10)^3. We have computed the Jacobian (P//H) of the quotient graph of groups for all subgroups H⊂Aut(P)=S_5, by finding the Smith normal form of the Laplacian. Figure <ref> lists all subgroups H, up to conjugacy, having the property that (P//H) is nontrivial.
The corresponding quotient graphs of groups are shown on Figure <ref>. Each vertex in a quotient graph is labeled by the first vertex of its preimage, with respect to lexicographic order. Numbers at vertices, edges, and legs indicate the orders of nontrivial stabilizers.
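The order |(P)|=2000 can be double-checked against Kirchhoff's theorem; the following short sketch (ours) builds the Petersen graph from two-element subsets and evaluates a reduced Laplacian determinant:

```python
from itertools import combinations
import sympy as sp

# Vertices: 2-element subsets of {0,...,4}; edges join disjoint pairs (Kneser graph).
V = [frozenset(p) for p in combinations(range(5), 2)]
n = len(V)
A = sp.Matrix(n, n, lambda i, j: 1 if i != j and V[i].isdisjoint(V[j]) else 0)
L = 3 * sp.eye(n) - A                    # Petersen is 3-regular

# A reduced Laplacian determinant counts spanning trees, i.e. |Jac(P)|.
print(L[1:, 1:].det())                   # 2000
```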
§.§ The kernel of the pushforward
We now identify the kernel of p_*:()→(), the pushforward map on the Jacobians. Denote the kernels of p_* on ^V() and ^H() by
V_0=ker(p_*:^V()→^V(X))=⊕_v∈ V(X)^_v\ G_0,
H_0=ker(p_*:^H()→^H(X))=⊕_h∈ H(X)^_h\ G_0,
where we use the identification (<ref>), and let i_*:V_0→^V() and i_*:H_0→^H() denote the canonical injections. It is elementary to verify that the maps r_, ι_, and τ_ descend to maps (see Figure <ref>)
r_0:H_0→ V_0, ι_0:H_0→ H_0, τ_0:V_0→ H_0.
Following the terminology of <cit.>, we introduce the following definitions:
The voltage Laplacian of the cover p:→ X is the map
L_0:V_0→ V_0, L_0=r_0∘ (Id-ι_0)∘τ_0.
The voltage Jacobian of the cover p:→ X is the quotient
_0= (r_0∘ (Id-ι_0))/ L_0.
An elementary rank count shows that the lattices (r_0∘ (-ι_0)) and L_0 have full rank in V_0. Therefore the voltage Laplacian is non-degenerate, unlike the case of a graph X, where L_X has full rank in (r_X∘ (-ι_X))=^V(X)_0. However, r_0∘ (-ι_0) is not generally surjective, and the quotients V_0/ L_0 and _0 need to be carefully distinguished.
It is clear that _0 embeds into the kernel of p_*:()→(). In fact, the two are isomorphic.
The natural inclusion map _0→ker(p_*:()→()) is an isomorphism, hence the voltage Jacobian fits into an exact sequence
0⟶_0⟶() p_*⟶() ⟶ 0.
In particular, |_0|=|()|/|()|.
This result generalizes Theorem 1.1 in <cit.> to the case of non-free G-actions, and our proof is essentially a copy of their proof. First, we recall Proposition 2.2 from <cit.>, which states that, given homomorphisms of abelian groups f:A→ B and g:B→ A, the map f induces an isomorphism
A/(im g+ker f)≃im f/im(f∘ g).
Hence, denoting
∂_0=r_0∘ (Id-ι_0), ∂_=r_∘ (Id-ι_), ∂_X=r_X∘ (Id-ι_X),
we instead work with the groups
_0≃ H_0/(im τ_0+ker ∂_0), ()≃^H()/(im τ_+ker ∂_), ()≃^H(X)/(im τ_X+ker ∂_X).
Second, we replace each of the three finite abelian groups A=_0,(),() with its Pontryagin dual A^∨=Hom(A,/). The dual groups are isomorphic, but the arrows now point in the opposite direction:
0⟵_0⟵() p_*^∨⟵() ⟵ 0.
To show that ker p_*≃_0, we instead show that coker p_*^∨≃_0. For each h∈ H(X), the map p_*:^H()→^H(X) sends the generator corresponding to each half-edge þ∈ p^-1(h)=_h\ G to h. Hence the Pontryagin dual p_*^∨:^H(X)→^H() sends h∈ H(X) to the sum of the þ over all þ∈_h\ G. It is therefore clear that ^H()/p_*^∨(^H(X))≃ H_0, and hence
coker p_*^∨=^H()/(im τ_+ker ∂_+p_*^∨(^H(X)))≃
H_0/(im τ_0+ker ∂_0)=_0.
Let p:→ X be a free G-cover, in other words assume that the G-action on is free. By Equation (<ref>), the orders of () and (X) can be computed from the Taylor expansions at u=1 of the Ihara zeta functions ζ(u,) and ζ(u,X). In fact, ζ(u,X) divides ζ(u,), and the ratio is a product of the Artin–Ihara L-functions L(u,X,ρ) associated to the cover p:→ X corresponding to the nontrivial irreducible representations ρ of G (the L-function of the trivial representation is equal to ζ(u,X), see <cit.> or <cit.>). Hence the order of _0 can likewise be computed by looking at the u=1 Taylor expansion of this product.
Assuming that the Ihara zeta function of a graph of groups is defined and satisfies Bass's three-term determinant formula, Theorem <ref> shows that the order () can be computed from the Taylor expansion of ζ(u,) at u=1. It is therefore natural to expect that ζ(u,) is equal to the product of the Artin–Ihara L-functions L(u,,ρ) of the graph of groups , suitably defined, where the product runs over the irreducible representations of G and where L(u,,1)=ζ(u,). If this is the case, then |_0|=|()|/|()| can be found from the Taylor expansion of the product of the L-functions of the cover → X associated to the nontrivial irreducible representations of G.
The project of defining the Ihara zeta function and the Artin–Ihara L-function of a graph of groups was carried out by the second author in <cit.> in the case then G acts with trivial stabilizers on the edges of . In future work, the second author intends to complete this project and define these functions for arbitrary graphs of groups.
§ DOUBLE COVERS
We now consider the group G=/2 acting on a graph . We call the quotient map p:→ X a double cover, and introduce some terminology borrowed from tropical geometry.
Let v∈ V(X) be a vertex. We say that v is undilated if it has two preimages in exchanged by the involution, which we arbitrarily label p^-1(v)={^±}, and dilated if it has a unique preimage, which we label p^-1(v)={}. We similarly say that a half-edge h∈ H(X) is undilated if p^-1(h)={þ^±} and dilated if p^-1(h)={þ}. A dilated half-edge is rooted at a dilated vertex, so the set of dilated half-edges and vertices forms a subgraph X_dil⊂ X, called the dilation subgraph. The root vertex v=r_X(h) of an undilated half-edge h∈ H(X) may be dilated or undilated. In the latter case, we label the preimages in such a way that r_(þ^±)=^±, in other words a half-edge with a sign is rooted at either a vertex with the same sign or a vertex with no signs. Finally, we say that the double cover p:→ X is free if X_dil=∅ (in other words, if the /2-action is free) and dilated otherwise.
We now construct the free graph X_ corresponding to the double cover p:→ X as follows. The vertices of X_ are the undilated vertices of X, so V(X_)=V(X)\ V(X_dil). The edges of X_ are the undilated edges of X both of whose root vertices are undilated. The legs of X_ come in two types. First, each undilated leg of X that is rooted at an undilated vertex is a leg of X_. Second, consider an edge e={h,h'}∈ E(X) having an undilated root vertex r(h)=u and a dilated root vertex r(h')=v. For each such edge, we attach h to X_ as a leg rooted at u (so that r_X_(h)=r_X(h)=u as before but ι_X_(h)=h instead of ι_X(h)=h'). We call these null legs, in order to distinguish them from the legs coming from X. In other words, X_ is obtained from X by removing X_dil, and turning each loose edge (having one root vertex on X_ and one missing root vertex) into a leg.
We now define a parity assignment ε on the half-edges of X_ as follows:
* Let e={h_1,h_2}∈ E(X_) be an edge (having undilated root vertices, which may be the same). Our choice of labels for the preimages of the root vertices determines a labeling þ^±_1, þ^±_2 for the preimages of the half-edges. With respect to this choice, we define
ε(e)=ε(h_1)=ε(h_2)=+1 if ι_(þ_1^±)=þ_2^±, and ε(e)=ε(h_1)=ε(h_2)=-1 if ι_(þ_1^±)=þ_2^∓.
We say that e is even if ε(e)=1 and odd if ε(e)=-1.
* Let l∈ L(X_) be a leg. If l is a leg of X (in other words, if it is not a null leg), then p^-1(l)={l^±}, and there are two possibilities: either ι_(l^±)=l^±, so p^-1(l) is a pair of legs exchanged by the involution, or ι_(l^±)=l^∓, so e={l^+,l^-} is an edge folded by the involution. We therefore set
ε(l)=+1 if ι_(l^±)=l^±, ε(l)=-1 if ι_(l^±)=l^∓, and ε(l)=0 if l is a null leg.
We say that a non-null leg l is even if ε(l)=1 and odd if ε(l)=-1.
The parity assignment gives X_ the structure of a signed graph, and this construction already occurs in <cit.> for the case of free double covers (so null legs do not appear). The values of ε on the edges depend on the labeling ^± of the preimages of the undilated vertices. The cocycle [ε]∈ H^1(X_,/2) in the simplicial cohomology group, however, is well-defined. The leg parity assignment does not depend on any choices, and the cover p:→ X can be uniquely reconstructed from the choice of a dilation subgraph X_dil⊂ X, an element [ε]∈ H^1(X_,/2) defining the edge parity, and a choice of leg parity.
§.§ The voltage Laplacian of a double cover
We now compute the voltage Laplacian L_0 and the voltage Jacobian _0 of the double cover p:→ X in terms of the free graph X_. To this end we consider the maps
r_:^H(X_)→^V(X_), τ_:^V(X_)→^H(X_), ι_:^H(X_)→^H(X_).
Here r_=r_X_ is the ordinary root map of X_ and τ_=τ_X_ is its transpose (see Equation (<ref>)). The involution, however, is twisted by the parity assignment ε:
ι_(h)=ε(h)ι_X_(h).
In terms of the identification given by Equation (<ref>), we have _0^_v\ G=(^+-^-) for an undilated vertex v∈ V(X_), while if v is dilated then _0^_v\ G is trivial. Hence we can identify V_0 with ^V(X_). Similarly, _0^_h\ G=(þ^+-þ^-) if h∈ H(X) is an undilated half-edge and is trivial otherwise. However, H_0 is larger than ^H(X_), since it has generators corresponding to undilated half-edges rooted at dilated vertices. These generators, however, do not appear in the image of r_0, and hence we can compute the Laplacian L_0 by restricting to ^H(X_).
Let be a graph with a /2-action, let p:→ X be the quotient map, let X_ be the free graph, and let ε be the parity assignment on H(X_) defined above. Under the identification of V_0 with ^V(X_), the voltage Laplacian L_0:V_0→ V_0 and the voltage Jacobian are equal to
L_0=r_∘(Id-ι_)∘τ_, _0=( r_∘(Id-ι_))/ L_0.
The matrix of the voltage Laplacian L_0:V_0→ V_0 is explicitly given by
L_0,uu= #{non-loop edges of X_ at u}+4·#{odd loops at u}+2·#{odd legs at u}+#{null legs at u},
L_0,uv= #{odd edges joining u and v}-#{even edges joining u and v} for u≠ v.
By abuse of notation, for an undilated vertex v∈ V(X_) we denote v=^+-^- the corresponding generator of V_0; this identifies the generators of ^V(X_) and V_0. Similarly, if h∈ H(X)\ H(X_dil) is an undilated half-edge we denote h=þ^+-þ^- the corresponding generator of H_0. If r_X(h) is an undilated vertex then h is also a generator of ^H(X_), so we view the latter as a subgroup of H_0.
It is clear that the maps τ_0:V_0→ H_0 and τ_:^V(X_)→^H(X_) agree under these identifications. Given an undilated half-edge h∈ H(X)\ H(X_dil) rooted at v=r_X(h), we have
r_0(þ^+-þ^-)=^+-^- if the vertex v is undilated, and r_0(þ^+-þ^-)=0 if v is dilated.
Hence the restriction of r_0:^H_0→^V_0 to ^H(X_) agrees with r_:^H(X_)→^V(X_).
Now let h∈ H(X_) be a half-edge rooted at an undilated vertex v=r_(h). We need to check that r_∘(-ι_)(h) agrees with r_0∘(-ι_0)(þ^+-þ^-). There are several cases to consider.
* h is part of an even edge e={h,h'}∈ E(X_), where the vertex v'=r_(h') is also undilated. Then ι_(þ^±)=þ'^±, so
r_∘(-ι_)(h)=r_(h-h')=v-v'=^+-^–'^++'^-=r_0∘(-ι_0)(þ^+-þ^-).
The half-edge h contributes +1 to L_0,vv and -1 to L_0,vv', and these contributions cancel if e is a loop.
* h is part of an odd edge e={h,h'}∈ E(X_), where the vertex v'=r_(h') is also undilated. Then ι_(þ^±)=þ'^∓, so
r_∘(-ι_)(h)=r_(h+h')=v+v'=^+-^-+'^+-'^-=r_0∘(-ι_0)(þ^+-þ^-).
The half-edge h contributes +1 to L_0,vv and +1 to L_0,vv'. If v=v' (e is an odd loop), the total contribution from h and h' to L_0,vv is equal to 4.
* h is an even leg, then ι_(h)=h and ι_(þ^±)=þ^± since þ^± are also legs. Thus
r_∘(-ι_)(h)=0=r_0∘(-ι_0)(þ^+-þ^-)
and h does not contribute to the voltage Laplacian.
* h is an odd leg and þ^± form an edge of . Then ι_(h)=-h and ι_(þ^±)=þ^∓, hence
r_∘(-ι_)(h)=2r_(h)=2v=2^+-2^-=r_0∘(-ι_0)(þ^+-þ^-)
and h contributes +2 to L_0,vv.
* h is a null leg corresponding to an edge e={h,h'}∈ E(X) with dilated root vertex v'=r_X(h'). Then ι_(h)=0 and we can assume that ι_(þ^±)=þ'^±, so
r_∘(-ι_)(h)=r_(h)=v=^+-^-=r_0∘(-ι_0)(þ^+-þ^-)
because r_0(þ'^+-þ'^-)=0. Hence h contributes +1 to L_0,vv.
It follows that L_0=r_∘(-ι_)∘τ_, and to complete the proof it is sufficient to show that the image of H(X_)⊂ H_0 under the map r_0∘ (-ι_0) is equal to the image of all of H_0. Let e={h,h'}∈ E(X) be an undilated edge with undilated root vertex v=r_X(h) and dilated root vertex v'=r_X(h'), then þ'^+-þ'^- is a generator of H_0 but not H(X_). We verify that
r_0∘ (-ι_0)(þ'^+-þ'^-)=r_0(þ'^+-þ'^–þ^++þ^-)=-^++^-=-v=-r_∘(-ι_)(h),
where h=þ^+-þ^- is a generator of H(X_). Hence adding the þ'^+-þ'^- as a generator to H(X_) does not increase the image.
We observe that the matrix of the voltage Laplacian L_0 of the double cover p:→ X is obtained from the signed graph Laplacian of the free subgraph X_ (see Definition 9.4 in <cit.>) by adding the contributions from the null legs.
§.§ Ogods and the order of the voltage Jacobian of a double cover
We now derive a combinatorial formula for the order of the voltage Jacobian of a double cover p:→ X. To make our formula self-contained, we express it in terms of and X, and not in terms of the auxiliary graph X_. The only terminology that we retain is that we distinguish odd and even undilated legs of X: the preimage of the former is a single edge folded by the involution, while the preimage of the latter is a pair of legs. The following paragraphs are expository, and the interested reader may skip directly to Definition <ref> and Theorem <ref>.
Kirchhoff's matrix tree theorem states that the order of the Jacobian of a connected graph X is equal to the number of spanning trees of X, and a spanning tree of X may be characterized as a minimal connected subgraph containing all vertices of X. Our goal is to define an analogous property for subgraphs of the target graph of a double cover.
Let be a graph with a /2-action and let p:→ X be the corresponding double cover. We say that a (possibly disconnected) subgraph Y⊂ X is relatively connected if each connected component of Y has connected preimage in . We now characterize connected subgraphs Y⊂ X that are minimal with respect to this property, in other words we require that p^-1(Y) be connected but that the graph obtained from Y by removing any edge or leg (and retaining the root vertices) have a connected component with disconnected preimage in . We make the following simple observations.
* A connected subgraph Y⊂ X having at least one dilated vertex is relatively connected. In particular, Y is not minimally relatively connected if it has at least one dilated edge or leg, since this edge or leg may be removed, or if it has at least two dilated vertices. Similarly, if Y has exactly one dilated vertex but is not a tree, then Y is not minimally relatively connected.
* A relatively connected subgraph Y⊂ X having at least one even leg is not minimally relatively connected, since the leg may be removed.
* A connected subgraph Y⊂ X having at least one odd leg l∈ L(Y) is relatively connected, since the preimage edge e=p^-1(l) connects the (possibly disjoint) preimages of Y\{l}. The subgraph Y is not minimally relatively connected unless it is a tree.
* Let Y⊂ X be a subgraph containing no dilated vertices and no legs. By covering space theory, the restricted double cover p|_p^-1(Y):p^-1(Y)→ Y corresponds to an element of (π_1(Y),/2)=H^1(Y,/2). If Y is a tree then the cover is trivial and hence disconnected, so Y is not relatively connected. If Y has genus one (in other words, if it has a unique cycle), then H^1(Y,/2)=/2 and Y has two covers: the trivial disconnected one and the nontrivial connected one. In the latter case, it is clear that Y is minimally relatively connected, since removing any edge produces a tree. Finally, suppose that Y has genus at least two (in other words, it has at least two independent cycles) and p|_p^-1(Y):p^-1(Y)→ Y is a nontrivial double cover. It is an easy exercise to show that Y is not minimally relatively connected, in other words there is an edge e∈ E(Y) such that each connected component of Y\{e} (there may be one or two) has connected preimage in .
We can therefore characterize minimal relatively connected subgraphs of X that contain all vertices of X, which are the double cover analogues of spanning trees. One important difference is that these subsets now come with a weight assignment.
Let be a graph with a /2-action and let p:→ X be the quotient map. An ogod component Y of weight w(Y) is a connected subgraph Y⊂ X having no dilated edges, dilated legs, or even legs, and that is of one of the following three types:
* Y is a tree having a unique dilated vertex, and no legs. We say that w(Y)=1.
* Y is a tree having no dilated vertices and a unique odd leg. We say that w(Y)=2.
* Y has no legs and a unique cycle, and p^-1(Y) is connected. We say that w(Y)=4.
Now let B be a set of n undilated edges and odd legs of X, where n is the number of undilated vertices of X. Let X|_B be the graph obtained from X by deleting all edges and legs not in B, including all dilated edges and legs, and retaining all vertices, and let X_1,…,X_k be the connected components of X|_B. We say that B is an ogod if each of the X_i is an ogod component, and the weight w(B) of the ogod is the product of the weights of the X_i.
The term ogod is an acronym for odd genus one decomposition: for a free double cover p:→ X without legs, the connected components X_i of an ogod are graphs of genus one such that the restricted covers p|_p^-1(X_i):p^-1(X_i)→ X_i are given by the odd (nontrivial) elements of H^1(X_i,/2). This terminology was introduced by the second author in <cit.>, who was unaware of the history of this definition going back to the seminal paper <cit.>. However, to the best of the authors' knowledge, there does not appear to be an established term describing such subsets in the combinatorics literature.
We are now ready to state the analogue of Kirchhoff's matrix tree theorem for a dilated double cover p:→ X, with ogods playing the role of spanning trees.
Let be a graph with a non-free /2-action and let p:→ X be the quotient map. The order of the voltage Jacobian _0 is equal to
|_0|=∑_Bw(B),
where the sum is taken over all ogods B of X.
For free double covers, this result already occurs in <cit.>, and was explicitly interpreted as a formula for the order of the voltage Laplacian in <cit.>. It was subsequently independently derived by the second author in <cit.>. We note that for a free double cover there is an additional 1/2 coefficient in the right hand side of Equation (<ref>).
Let X_ be the free graph, and let ε be the parity assignment on H(X_) defined above. By Proposition <ref>, we may compute the voltage Laplacian L_0=r_∘(Id-ι_)∘τ_ and voltage Jacobian _0=( r_∘(Id-ι_))/ L_0 using the maps (<ref>) associated to X_. Let n=|V(X_)| and m=|H(X_)|. The n× n matrix of the voltage Laplacian factors as L_0=DT, where D is the n× m matrix of r_∘ (Id-ι_) and T is the m× n matrix of τ_:
where the column of D indexed by a half-edge h∈ H(X_) is the divisor r_(h)-ε(h)· r_(ι_X_(h)) computed in the proof of Proposition <ref> (so a half-edge of a non-loop edge e contributes +1 at its root and -ε(e) at the opposite root, a half-edge of a loop contributes 1-ε(e) at its root, an odd leg contributes 2, a null leg contributes 1, and an even leg contributes 0), and where T_hv=1 if v=r_(h) and T_hv=0 otherwise.
By the Cauchy–Binet formula,
det L_0=∑_B⊂ H(X_):|B|=n det D|_B· det T|_B,
where we sum over all n-element subsets B⊂ H(X_) of half-edges of X_ and where D|_B and T|_B are the matrices obtained from D and T by deleting respectively all columns and all rows except those indexed by B.
We make a number of simple observations:
* det D|_B=0 if B contains a half-edge that lies on an even loop or is an even leg. Indeed, the corresponding column of D is zero.
* det D|_B=0 if B contains both half-edges of a single edge e={h,h'}. Indeed, the h- and h'-columns of D are equal if e is odd and sum to zero if e is even. Hence we consider only those n-element subsets B⊂ H(X_) that have at most one half-edge from each edge. We represent each such B as a choice of a total of n edges and legs, as well as an orientation for each edge, in other words an arrow pointing in the direction of the chosen half-edge.
* det T|_B=0 unless each half-edge in B is rooted at a distinct vertex of X_. Viewing B as a choice of oriented edges and legs, we require that each arrow point to a different vertex.
We now show that the nonzero contributions in Equation (<ref>) come from ogods, and that the contribution from each ogod B is exactly w(B). Fix B, and let X_|_B be the subgraph of X obtained by deleting all edges and legs not in B. Let X_|_B=X_1∪⋯∪ X_k be the decomposition into connected components, and let B_i=H(X_i)∩ B for i=1,…,k. The matrices D|_B and T|_B are block-diagonal with blocks corresponding to the X_i, and a block-diagonal matrix has nonzero determinant only if each block is square, in other words if |B_i|=|V(X_i)| for each i. In other words, the product det D|_B· det T|_B is nonzero only if each X_i is a connected oriented graph having an equal number of legs and edges as vertices, with each leg and edge pointing to a distinct vertex. A moment's thought shows that there are only two possibilities for each X_i:
* X_i has a unique leg (odd or null but not even) and is a tree, and all edges are oriented away from the root vertex of the leg. Hence X_i is an ogod component of weight w(X_i)=1 if the leg is null and w(X_i)=2 if the leg is odd.
* X_i has no legs and a unique cycle. The edges on the cycle are oriented cyclically, while the remaining edges (lying on trees attached to the cycle) are oriented away from the cycle. Hence X_i is an ogod component of weight w(X_i)=4 if the preimage of the cycle is connected, which happens if an odd number of edges on the cycle are odd. If there is an even number of odd edges, then the preimage of the cycle is disconnected and X_i is not an ogod.
It is now an elementary linear algebra exercise to show that the product det D|_B_i· det T|_B_i equals 1 or 2 in the first case, depending on whether the unique leg is null or odd. Similarly, in the second case the product is equal to 2 if there is an odd number of odd edges along the cycle and zero if there is an even number. In this case, there are two contributions corresponding to the two possible choices of orientation along the cycle. Hence we see that the total contribution of det D|_B_i· det T|_B_i from an ogod component X_i is equal to w(X_i). Since weights and determinants are multiplicative in connected components, it follows that the contribution of each ogod B to the sum of the det D|_B· det T|_B (taken over the possible choices of orientations) is equal to w(B).
We have shown that L_0 is equal to the right hand side of Equation (<ref>). To complete the proof, we show that the map r_fr∘ (Id-ι_):^H(X_fr)→^V(X_) is surjective (this is in contrast to free double covers, where the image has index two). Again, we may pass to connected components and assume that X_ is connected. Since the double cover p:→ X is dilated, there is at least one dilated vertex v∈ V(X)\ V(X_) connected by an undilated edge to an undilated vertex u∈ V(X_). Let l∈ L(X_) be the corresponding null leg rooted at u. By the proof of Proposition <ref> we have r_fr∘ (Id-ι_)(l)=u, so u∈ (r_fr∘ (Id-ι_)). Now let e={h,h'}∈ E(X_) be an edge rooted at r(h)=u and another vertex r(h')=u'. Again by the proof of Proposition <ref> we have r_fr∘ (Id-ι_)(h)=u± u', and since u∈ ( r_∘ (Id-ι_)) we have u'∈ ( r_fr∘ (Id-ι_)). Since X_ is connected, we may proceed in this way and show that w∈ ( r_fr∘ (Id-ι_)) for every generator w of ^V(X_). This completes the proof.
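As a tiny worked illustration (ours), consider the /2-action on K_4 generated by (ab), whose quotient K_4//C_2 was computed above. The quotient has a single undilated vertex ab, two undilated edges joining ab to the dilated vertices c and d, a dilated edge cd, and one odd leg at ab coming from the folded edge ab. The following sketch enumerates the 1-element candidate sets, sums their weights, and compares with |(K_4)|/|(K_4//C_2)|=16/4 and with the 1× 1 voltage Laplacian:

```python
# Quotient data for the double cover K_4 -> K_4//C_2 with C_2 = <(ab)> (our worked toy case).
# Undilated vertices: just "ab" (n = 1).  Undilated edges and odd legs available for ogods:
candidates = {
    "edge ab-c":     1,   # tree whose unique dilated vertex is c           -> weight 1
    "edge ab-d":     1,   # tree whose unique dilated vertex is d           -> weight 1
    "odd leg at ab": 2,   # tree with a unique odd leg (the folded edge ab) -> weight 2
}
# Since n = 1, the ogods are exactly the 1-element subsets, and every candidate qualifies
# (the leftover dilated vertices form isolated trees of weight 1 each).
print(sum(candidates.values()))        # 4

# The same number from the voltage Laplacian of this cover: a 1 x 1 matrix
# [2*(#odd legs at ab) + (#null legs at ab)] = [2*1 + 2] = [4].
print(2 * 1 + 2, 16 // 4)              # 4 4  =  |Jac(K_4)| / |Jac(K_4//C_2)|
```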
We consider the two /2-quotients of the Petersen graph P shown on Figure <ref>. We recall that (P)=/2⊕ (/10)^3 and thus |(P)|=2000.
Taking the quotient by the order two subgroup G⊂Aut(P) generated by (ab), we obtain the top center graph P/G. There are three undilated vertices ac, ad, and ae and six undilated edges that we denote E_u={e_ac,de, e_ad,ce, e_ae,cd, e_ac,ad, e_ad,ae,e_ac,ae}. We consider the 20 three-element subsets of E_u. If we remove the three edges of P/G incident to ac, then the lone vertex ac∈ V(P/G) has disconnected preimage p^-1(ac)={ac,bc}. Hence B=E_u\{e_ac,de,e_ac,ad,e_ac,ae} is not an ogod, and the same is true for the tangent spaces to ad and ae. The outside cycle B={e_ac,ad, e_ad,ae, e_ae,ac} lifts to a closed loop in P and hence is an ogod of weight 4. For each of the 16 remaining 3-element subsets B⊂ E_u, every connected component of the graph (P/G)|_B is a tree having a unique dilated vertex, hence B is an ogod of weight 1. Proposition <ref> and Theorem <ref> imply that
|(P)|/|(P//G)|=|_0|=16+1· 4=20.
This agrees with Figure <ref>, since (P//G)=(/10)^2 and hence |(P//G)|=100.
We also consider the order two subgroup H⊂Aut(P) generated by (ab)(cd), the quotient graph for which is the top left graph in Figure <ref>. The graph P//H has six undilated edges E_u={e_ab,ce, e_ac,ce, e_ac,ae, e_ad,ae, e_ad,ce,e_ae,cd} and two odd legs L={l_ac,l_ad}. Out of the 70 4-element subsets of E_u∪ L, there are 46 ogods in 15 symmetry classes. Figure <ref> lists all ogods up to symmetry together with their weights. The total weight of all ogods is 100, so by Proposition <ref> and Theorem <ref> we have
|(P)|/|(P//H)|=|_0|=100
This agrees with Figure <ref>, since (P//H)=/2⊕/10 and hence |(P//H)|=20.
|
http://arxiv.org/abs/2307.01532v1
|
20230704073611
|
Analyzing Intentional Behavior in Autonomous Agents under Uncertainty
|
[
"Filip Cano Córdoba",
"Samuel Judson",
"Timos Antonopoulos",
"Katrine Bjørner",
"Nicholas Shoemaker",
"Scott J. Shapiro",
"Ruzica Piskac",
"Bettina Könighofer"
] |
cs.AI
|
[
"cs.AI"
] |
Unsupervised Video Anomaly Detection with Diffusion Models Conditioned on Compact Motion Representations
Anil Osman Tur1,2^* Nicola Dall'Asen1,3^* Cigdem Beyan1 Elisa Ricci1,2
August 1, 2023
========================================================================================================
Principled accountability for autonomous decision-making in uncertain environments requires distinguishing intentional outcomes from negligent designs from actual accidents.
We propose analyzing the behavior of autonomous agents through a quantitative measure of the evidence of intentional behavior.
We model an uncertain environment as a Markov Decision Process (MDP). For a given scenario, we rely on probabilistic model checking to compute the ability of the agent to influence reaching a certain event. We call this the scope of agency.
We say that there is evidence of intentional behavior if the scope of agency is high and the decisions of the agent are close to being optimal for reaching the event. Our method applies counterfactual reasoning to automatically generate relevant scenarios that can be analyzed to increase the confidence of our assessment.
In a case study, we show how our method can distinguish between `intentional' and `accidental' traffic collisions.
§ INTRODUCTION
Find code and experimental details in the accompanying repository <https://github.com/filipcano/intentional-autonomous-agents>.
Artificial intelligence (AI)-based autonomous agents play a significant role in diverse facets of society, such as transportation, robotics, medical devices, manufacturing, and more. Ideally, engineers would verify their correctness before deploying them in the real world.
However, for various theoretical and practical reasons, formal verification of software for autonomous agents is not often feasible.
As a result, autonomous agents might not behave as planned initially and they might cause harm.
As we cannot predict when harm will happen, we need to examine the software of the harming autonomous agent ex post – after the harm – to assess questions of accountability.
While the liability scheme for autonomous agents has yet to be developed, it is plausible to assume that manufacturers of autonomous agents that intentionally harm should be held to a higher standard of accountability than ones that create agents that harm negligently or purely accidentally.
Therefore, defining and understanding intention is of paramount significance for establishing accountability.
In this paper, we propose a new way of determining whether an autonomous agent has, in fact, acted in a way consistent with the intention to harm.
Historically, symbolic AI produced a large body of work to formally specify and design autonomous agents that were `rational'. Such agents would explicitly derive decisions based on their beliefs, desires, and intentions (BDI) <cit.>. Determining whether an autonomous agent has acted with the intention to harm is easy in the case of BDI agents.
One just needs to read off the intentions from where they are written in the code.
The statistical nature of modern machine-learning-based agents leaves the interpretation of their decision-making in probabilistic settings a far greater challenge,
since intentions are not explicitly present in such models.
The traditional view of intention establishes a connection to planning through either cognitive or computational reasoning. Intention is a nuanced legal and philosophical term of art. Here, we use it in the restricted sense of the `state of the world' the agent plans towards.
Whether human or machine, a rational agent with bounded resources must necessarily plan towards a goal to succeed in achieving it <cit.>. Modern machine-learned agents plan implicitly through techniques like reinforcement learning (RL) <cit.>.
This paper considers an autonomous agent operating with other agents within an environment. During the agent's operation, a certain event happened. In the context of holding the agent accountable for such an event, we want to analyze whether the agent acted towards making that event happen.
Problem statement. We concretely model the interactions between the agent and its environment as a Markov Decision Process (MDP).
The event under analysis is formalized as a set of states
S_ℐ in the MDP.
Our goal is to analyze whether the decision-making policy of the agent shows evidence of intentional behavior towards reaching S_ℐ.
If we assume that the agent has perfect knowledge about the entire world as captured in an MDP, we could simply say: `There is evidence of intentional behavior towards reaching a state in S_ℐ, if the agent implements a policy that maximizes the probability of reaching S_ℐ'. However, for any agent acting within a complex environment, this assumption is implausible. For example, the current state information might not be precise due to imprecisions in sensor measurements, bounded resources in computing the policy, imprecisions due to abstractions,
partial observability, or usage of inaccurate models of other agents.
Therefore, we need to consider a certain degree of uncertainty in our assessments.
Method for analyzing intentional behavior.
In this paper, we propose a methodology to analyze
whether there is evidence that an agent acted intentionally to reach a state in S_ℐ.
For a given scenario, we use probabilistic model checking to automatically compute the policies that maximize and minimize the probability to reach S_ℐ.
We use these policies to compute the influence that the agent had to bring about S_ℐ.
We call this the scope of agency.
We say that there is evidence of intentional behavior if the scope of agency is high and the decisions of the agent are close to optimal for reaching S_ℐ.
To strengthen our evaluation, we make use of a
widespread technique in accountability analysis <cit.>, which is analysing a diverse set of relevant counterfactual scenarios, and aggregating the evaluation results.
Main Contributions. The main contributions of this paper are the following:
* To the best of our knowledge, we present the first method that analyzes intentional behavior directly from the policies in MDPs.
* We give definitions for evidence of intentional behavior in MDPs.
* We propose a method to analyze evidence of intentional behavior of agents in MDPs.
Our method uses model checking to automatically relate the agent's policy to any other possible policy.
Furthermore, our method applies
counterfactual reasoning to increase the reliability of the assessment.
* We provide a case study in which we analyze potential intentional behavior in the same scenario for different implementations of driving agents.
§ PRELIMINARIES
§.§.§ Markov Decision Processes
A Markov Decision Process (MDP) is a tuple ℳ = (𝒮, 𝒜, 𝒫), where
𝒮 is the set of states,
𝒜 is the set of actions and
𝒫:𝒮×𝒜×𝒮→ [0,1] is the
transition function.
A state represents `one way the world
can exist',
so any information available to the agent for
deciding what to do is included in the state of the MDP.
The set 𝒜 contains every possible action that can be taken
by the agent.
The function 𝒫 represents the transition to a new environment state that is produced as the result of the agent executing an action in a particular state.
A trace is a finite or infinite sequence of states
τ = (s_1, s_2, …).
A trace τ is valid
if
for each i, there exists at least one a_i∈𝒜
such that 𝒫(s_i,a_i,s_i+1) > 0.
The agent is modeled by a memoryless and deterministic policy
𝒮→𝒜 over ℳ that assigns an action to each state.
In Section <ref> we discuss how our method can be extended to consider strategies with non-determinism and memory.
§.§.§ Probabilistic Model Checking
Using probabilistic model checking <cit.>,
we can compute the exact probability 𝒫_π(φ,s) of π satisfying a property φ for each state s of the MDP <cit.>. This property φ will typically be defined in a probabilistic variant of a modal temporal logic, like probabilistic linear temporal logic (PLTL) <cit.>.
Let Π⊆{π𝒮→𝒜} be a set of policies.
We denote the maximum probability of
satisfying φ restricted to a policy in Π
as
𝒫_max|Π(φ,s) = max_π∈Π𝒫_π(φ, s).
Similarly, we denote the minimum probability as
𝒫_min|Π(φ,s).
In this paper φ := (S) encodes the property of reaching any state s in a set of states S ⊆𝒮.
§ DEFINITION OF INTENTIONAL BEHAVIOR IN MDPS
In this paper, we assume that we have given a scenario where a certain event happened, like the agent visited a certain location or the agent had a collision with another agent.
Our goal is to analyze whether there is evidence the agent intentionally acted towards reaching this event.
In this section, we give the definitions for evidence of intentional behavior of policies in the presence of uncertainty.
We use an MDP ℳ = (𝒮,𝒜,𝒫)
to model the interaction of the agent and the environment.
In the following sections, we will then propose and implement a method to analyze intentional behavior according to the definitions of this section.
§.§ Intentions of Agents with Perfect Information
According to <cit.>, an intention of an agent is a set of states S_ℐ⊆𝒮 that the agent is committed to reaching. The agent therefore acts towards reaching S_ℐ to the best of its knowledge.
Let us assume that the agent has perfect knowledge about the environment.
For a set of states S_ℐ⊆𝒮 to be an intention of an agent, the agent has to implement a policy π that maximizes the probability of reaching S_ℐ.
Formally, if
S_ℐ⊆𝒮 is an intention of an agent, then
𝒫_π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s) = 𝒫_max|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s),
for any state s∈𝒮.
The policies considered to compute 𝒫_max
can be restricted to a set of policies Π, if there are policies that should be excluded for comparison. For example,
we may only be interested in policies for comparison that
satisfy certain properties like fairness or progress properties.
An agent π shows evidence of intentional behavior in a state s towards S_ℐ if π maximizes the probability of reaching S_ℐ, i.e., 𝒫_π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s) = 𝒫_max|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s). Otherwise, we say that the agent has evidence of non-intentional behavior in state s towards S_ℐ.
§.§ Intentions of Agents Under Uncertainty
The definition of intention given above assumes perfect knowledge about the environment and
the agent implementing a policy that is optimal for reaching S_ℐ.
However, if we want to fully analyze intentional behavior we have to take imprecision and uncertainties into account.
Any agent operating in a complex environment needs to make abstractions about the environmental state and, most likely, only has partial observability. Furthermore, the agent has to make assumptions about the other agents that act within the environment, which may be incorrect.
Therefore, we need to relax the definition of intention to take uncertainties into account.
In order to analyze an agent π under uncertainty, we first define the intention-quotient ρ_π(s) for a state s which represents how close π is to the policy optimal for reaching S_ℐ from state s∈𝒮.
For an agent π at a state s∈𝒮,
the intention-quotient is defined as follows:
ρ_π(s) = (𝒫_π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s) - 𝒫_min|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s)) / (𝒫_max|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s) - 𝒫_min|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s)).
In contrast to the case of perfect information,
the uncertainty in the agent's knowledge and resources
implies uncertainty in the assessment of intentional behavior.
Given lower and upper thresholds
0≤δ_ρ^L < δ_ρ^U ≤ 1,
we say that there is evidence of
intentional behavior towards the intention S_ℐ in the state s, if
ρ_π(s) ≥δ_ρ^U.
Analogously,
we say there is evidence of
non-intentional behavior towards the intention S_ℐ in the state s, if
ρ_π(s) ≤δ_ρ^L.
In case that
δ_ρ^L < ρ_π(s) < δ_ρ^U,
we say that we do not have enough evidence for intentional behavior.
By adjusting the thresholds δ_ρ^U and δ_ρ^L, we can control how much discrepancy from the optimal policy under perfect information is allowed in order
for π to be still considered as intentional or non-intentional behavior for S_ℐ.
In general, the higher the value of the intention-quotient ρ_π(s),
the more evidence the policy π shows of intentionally trying to reach S_ℐ.
The lower the value of ρ_π(s), the more evidence the policy π shows on acting
without the intention to reach S_ℐ.
An additional source of uncertainty is introduced by the scope of agency of a state.
In situations where the agent's actions have little effect on reaching S_ℐ,
there is not enough evidence to support a claim of intentional behavior. For this reason, we take the scope of agency into account for our definition of intentional behavior.
The scope of agency σ(s) at a state s for intention S_ℐ
is defined as the gap between the best and the worst policy in terms of reaching S_ℐ.
Formally, it is given by
σ(s) = 𝒫_max|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s) - 𝒫_min|Π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s).
The scope of agency of a trace τ is given by
σ(τ) = 1/|τ|∑_s'∈τσ(s').
If the scope of agency σ(τ) of a trace τ is very low,
any assessment about intentional behavior will be very weak.
The above definitions of intentional and non-intentional behavior
apply to a single state in the MDP.
In order to extend these definitions to traces in the MDP,
we aggregate the intention-quotients of the individual states using the
scope of agency as the weighting factor.
For an agent π
operating along a trace τ,
the intention-quotient ρ_π(τ)
is given as
the weighted average
ρ_π(τ) = (∑_s∈τσ(s) ρ_π(s)) / (∑_s∈τσ(s)).
Given lower and upper thresholds
0≤δ_ρ^L < δ_ρ^U ≤ 1,
and an agency threshold 0<δ_σ < 1,
we say that there is evidence of
intentional behavior towards reaching S_ℐ along a trace τ, if
σ(τ) ≥δ_σ and ρ_π(τ) ≥δ_ρ^U.
We say that there is evidence of
non-intentional behavior towards reaching S_ℐ
if
σ(τ) ≥δ_σ and ρ_π(τ) ≤δ_ρ^L.
In case that δ_ρ^L < ρ_π(τ) < δ_ρ^U or σ(τ) < δ_σ, we say that we do not have enough evidence for intentional behavior.
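Once the three reachability probabilities per state are available (e.g., from a model checker), the quantities of this section combine into a small decision routine. The sketch below is illustrative only; the default thresholds mirror the values used later in our experiments, and all function names are ours.

```python
# Sketch of the state- and trace-level assessment of intentional behavior.

def intention_quotient(p_pi, p_min, p_max):
    """rho_pi(s): closeness of the agent to the optimal policy for reaching S_I."""
    span = p_max - p_min
    return (p_pi - p_min) / span if span > 0 else 0.0

def scope_of_agency(p_min, p_max):
    """sigma(s): gap between the best and the worst policy at state s."""
    return p_max - p_min

def assess_trace(per_state, delta_low=0.25, delta_up=0.75, delta_sigma=0.5):
    """per_state: list of (p_pi, p_min, p_max) triples along the trace."""
    sigmas = [scope_of_agency(pmin, pmax) for _, pmin, pmax in per_state]
    sigma_trace = sum(sigmas) / len(sigmas)
    total = sum(sigmas)
    rho_trace = (sum(sig * intention_quotient(p, pmin, pmax)
                     for sig, (p, pmin, pmax) in zip(sigmas, per_state)) / total
                 if total > 0 else 0.0)
    if sigma_trace < delta_sigma or delta_low < rho_trace < delta_up:
        verdict = "not enough evidence"
    elif rho_trace >= delta_up:
        verdict = "evidence of intentional behavior"
    else:
        verdict = "evidence of non-intentional behavior"
    return rho_trace, sigma_trace, verdict
```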
§ SETTING AND PROBLEM STATEMENT
In this section, we describe the setting in which we want to analyze intentional behavior and give the problem statement.
Setting.
We have a model of the environment in the form of an
MDP ℳ = (𝒮,𝒜,𝒫)
that captures all relevant dynamics and possible
interactions for an agent.
We also have a concrete scenario to analyze in the form of a trace τ_ref = (s_1,…, s_n).
The trace τ_ref is a sequence of visited states in ℳ that leads to a state in S_ℐ,
i.e., s_n∈ S_ℐ.
The implementation of the agent is given in the form of a policy
π: 𝒮→𝒜.
The underlying intentions of the agent are unknown.
Problem statement.
Given this setting,
we want to analyze whether there is evidence of the agent acting intentionally,
with uncertainty thresholds
δ_ρ^L, δ_ρ^U, and δ_σ
for the intention-quotient and scope of agency, respectively.
Hence, we want to analyze whether there is
evidence of intentional behavior
of the agent π
towards intention S_ℐ in the scenario τ_ref.
Let us consider a scenario in which
an autonomous car collides with a pedestrian
crossing the road.
To analyze to which degree the car is accountable for the accident, we are interested in whether causing harm was the intention of the car.
In such an example, ℳ captures all relevant information necessary to analyze the accident, like positions and velocities of car and pedestrian, car dynamics, road conditions, etc.
The scenario τ_ref = (s_1,…, s_n) is defined via the sequence of states prior to the collision.
The set of states S_ℐ represents collisions.
We want to analyze whether the policy π shows evidence of intentional behavior towards S_ℐ.
To avoid unfair comparison with unrealistic policies,
we define a set of policies Π that excludes
unreasonably slow-moving cars (e.g., cars that stop even though there is no other road user close by).
§ METHODOLOGY
In this section, we propose a concrete methodology
to analyze whether there is evidence an agent acted intentionally to reach S_ℐ.
Our method is illustrated in Figure <ref>.
As depicted in the figure,
we start the analysis of the given trace τ_ref
by computing the intention-quotient ρ_π(τ_ref)
and the scope of agency σ(τ_ref).
If σ(τ_ref) ≥δ_σ, we can draw conclusions about intentional behavior:
* If ρ_π(τ_ref) ≥δ_ρ^U, then we conclude that there is evidence of intentional behavior towards S_ℐ.
* If ρ_π(τ_ref) ≤δ_ρ^L, then we conclude that there is evidence of non-intentional behavior towards S_ℐ.
In cases without enough agency, i.e., where σ(τ_ref) < δ_σ, or where the intention-quotient falls between the lower and upper thresholds, i.e., δ_ρ^L < ρ_π(τ_ref) < δ_ρ^U, we say that we do not have enough evidence to reach a conclusion.
In such cases, we propose to generate more evidence by analyzing counterfactual scenarios.
A counterfactual scenario τ is a scenario close to τ_ref according to some distance notion.
Our method generates a set of counterfactual scenarios T_cf and
computes whether there is evidence for intentional
or non-intentional behavior
for each trace τ∈ T = T_cf∪{τ_ref}.
We fix beforehand the number of counterfactual scenarios to some parameter N.
As before, we draw conclusions about intentional behavior based on the aggregated results of the scope of agency σ(T) and intention-quotient ρ_π(T).
If
σ(T) < δ_σ or δ^L_ρ < ρ_π(T) < δ^U_ρ,
there is still not enough evidence for intentional or non-intentional behavior,
with σ(T) being the scope of agency
averaged over all traces in T,
and ρ_π(T) being the average intention-quotient for the set of traces in T.
In such cases,
our algorithm iterates back and extends the set T_cf by generating N more counterfactual scenarios to be analyzed.
The algorithm stops when enough evidence has been generated to draw a conclusion
or
when the number of generated counterfactual scenarios exceeds some user-defined limit.
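The overall loop can be summarised by the following illustrative sketch, where `assess` aggregates the intention-quotient and scope of agency over a set of traces (as in the earlier sketch) and `sample_counterfactuals` stands for one of the generation strategies discussed next; both are placeholders rather than parts of a released implementation.

```python
# High-level sketch of the analysis loop: evaluate the given scenario and, while the
# evidence is inconclusive, add batches of N counterfactual scenarios until a verdict
# is reached or a user-defined budget on |T| is exhausted.

def analyze_intentional_behavior(tau_ref, assess, sample_counterfactuals,
                                 N=5, max_traces=100):
    traces = [tau_ref]
    while True:
        rho, sigma, verdict = assess(traces)   # aggregated rho_pi(T) and sigma(T)
        if verdict != "not enough evidence" or len(traces) >= max_traces:
            return rho, sigma, verdict, len(traces)
        traces += sample_counterfactuals(tau_ref, N)
```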
In the following, we discuss the generation of counterfactual scenarios in detail.
§.§ Counterfactual Generation
In order to find enough evidence for our assessment of intentional behavior, we generate scenarios that are counterfactuals for τ_ref.
There are many ways to generate counterfactual traces.
We describe three alternatives, ordered by decreasing requirements on expert knowledge and involvement in the process.
§.§.§ Counterfactual Generation via a Human Expert
Asking and analyzing counterfactual questions is a standard procedure in accountability processes <cit.>.
Usually, such counterfactual questions are proposed by a domain expert.
We transfer this concept
to analyzing intentional behavior on MDPs.
The counterfactual questions posed by the expert are translated to counterfactual traces T_cf in the model ℳ.
Recall Example <ref>.
Some counterfactual questions posed by an expert in the traffic scenario
could be:
(Q1) What if the car had driven slower?
(Q2) What if the pedestrian had been visible earlier?
(Q3) What if the road conditions were different?
Each of Q1-Q3 translates to a counterfactual trace,
which we can analyze in our framework.
The method of generating counterfactuals using a human expert imposes a heavy burden of work on the expert.
Next, we propose two methods to automatically generate counterfactuals to mitigate the need for human effort.
§.§.§ Counterfactual Generation Using Factored MDP
Since ℳ models the interactions of the agent with its environment,
ℳ is typically given in the form of a factored MDP.
In factored MDPs, the state space of ℳ is defined in terms of state variables 𝒮 = 𝒳_1×⋯×𝒳_m.
In this approach for counterfactual generation, we assume domain knowledge about which variations of state variables generate interesting counterfactual scenarios.
In particular, we assume to know which state variables are integral state variables that we want to use in the analysis of intentional behavior, and which variables are
peripheral.
To generate informative counterfactuals,
we are interested in changing the values of the integral state variables.
In Example <ref>, the state variables representing the position and velocity of the car, the position of the pedestrian, the road condition, etc., are integral variables. However, state variables that represent, for example, positions of other pedestrians located behind the car are most likely labeled as peripheral by a human expert.
A counterfactual trace generated from changes in the pedestrians' positions that are not involved in the collision will give no new insights into the assessment of intentional behavior. On the contrary, changing the speed of the car might have a considerable effect on the collision probabilities and may provide an informative counterfactual scenario.
We automatically generate counterfactual traces by exploring variations of the integral variables.
Let the state space be factored as 𝒮 = 𝒳_1×…×𝒳_m,
where variables 𝒳_1,…,𝒳_k are peripheral and 𝒳_k+1,…,X_m are integral.
For any state s=(x_1,…,x_m),
we write its factorization
into peripheral and integral variables
as
s = (s^per||s^int).
Let s^int_ref = (x_k+1,…,x_m) be the value of the integral variables
at any state of τ_ref.
We define the set of counterfactual values as:
Cf_ϵ(s^int_ref) = {(y_k+1, …, y_m) ∈𝒳_k+1×…×𝒳_m : ∀ i ∈{k+1,…,m}, |x_i - y_i| < ϵ_i},
where ϵ=(ϵ_k+1,…,ϵ_m)
contains for each integral variable the range of variation that is still considered valid.
For a given trace τ_ref = (s_1,…,s_n),
the counterfactual traces are
T_C(τ_ref) = {(s'_1,…, s'_n) : ∃ s_cf^int∈Cf_ϵ(s^int_ref), ∀ i=1… n : s'_i = (s_i^per||s_cf^int), (s'_1, …, s'_n) is a valid trace, and s'_n ∈ S_ℐ}.
Note that the search for counterfactual traces is limited to those integral variables
𝒳_i
for which ϵ_i >0,
thus by setting some of the ϵ_i to zero,
we can fix their value in the counterfactual generation process.
From T_C, we sample N traces to be used for the counterfactual analysis. For the trace selection,
emphasis can be put on traces with higher scopes of agency.
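One simple way to realise this generation step is rejection sampling over the integral variables, as in the following illustrative sketch. It assumes integer-valued state variables stored in per-state dictionaries; the validity check and the test for ending in S_ℐ are passed in as placeholder callables.

```python
# Illustrative counterfactual generation for a factored MDP: perturb the integral
# variables within per-variable ranges eps while keeping peripheral variables fixed.
import random

def counterfactual_traces(tau_ref, integral_keys, eps, is_valid, ends_in_target,
                          n_samples=5, max_tries=200):
    """tau_ref: list of states, each a dict mapping variable names to integer values."""
    base = {k: tau_ref[0][k] for k in integral_keys}        # integral values of tau_ref
    found = []
    for _ in range(max_tries):
        cf = {k: base[k] + random.randint(-eps[k], eps[k])  # vary only if eps[k] > 0
              for k in integral_keys if eps[k] > 0}
        trace = [{**s, **cf} for s in tau_ref]              # peripheral variables unchanged
        if is_valid(trace) and ends_in_target(trace):
            found.append(trace)
        if len(found) >= n_samples:
            break
    return found
```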
§.§.§ Counterfactual Generation Using Distances on MDPs
This method for generating counterfactual scenarios requires a distance d: 𝒮×𝒮→ℝ_≥ 0 defined over the states of the MDP. Given such a distance metric d over the states, the set of counterfactual traces is given as
T_C(τ_ref) = {(s'_1,…, s'_n) : ∀ i=1… n, d(s_i,s'_i) < η, (s'_1, …, s'_n) is a valid trace, and s'_n ∈ S_ℐ},
where η>0 is a distance that represents states being `close enough'
to be compared as counterfactuals.
In case there is no distance defined in the MDP,
there are bisimulation distances that are well defined in any MDP <cit.>.
They depend on the intrinsic structure of the MDP,
defined mainly by similarities in terms of the transition function.
The main caveat of this approach is that distances are expensive to compute, and
the explanation of why two states are assigned a given distance
becomes more obscure to the user.
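The corresponding filtering step can be sketched as below; in practice one would search or sample candidate traces in the MDP rather than enumerate them, and `d` may be a domain-specific metric or a (precomputed) bisimulation distance. All names are placeholders.

```python
# Sketch: keep candidate traces that stay within distance eta of tau_ref step by step
# and still end in the intended set S_I.

def counterfactuals_by_distance(tau_ref, candidate_traces, d, eta, ends_in_target):
    return [trace for trace in candidate_traces
            if len(trace) == len(tau_ref)
            and ends_in_target(trace)
            and all(d(s, s_cf) < eta for s, s_cf in zip(tau_ref, trace))]
```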
§ EXPERIMENTAL RESULTS
In this section,
we showcase our method on a traffic-related scenario related to Examples 1-2, and that is illustrated in Figure <ref>.
In this scenario, a car was driving on a road with a crosswalk.
A pedestrian at the crosswalk decided to cross.
Close to the crosswalk, there was a parked truck that blocked the visibility
of the car.
Furthermore, the cold weather conditions made the road slippery,
so that braking was less effective than normal.
While crossing, the pedestrian was hit by the car.
We want to study the behavior of the car for signs of the hit being intentional.
All experiments were executed
on an Intel Core i5 CPU with 16GB of RAM running Ubuntu 20.04.
We use Tempest <cit.>
as our model checking engine.
§.§ Model of Environment
The environment is modeled as an MDP
ℳ = (𝒮, 𝒜, 𝒫).
The set of states is a product of three components, 𝒮 = S^car× S^ped× S^env,
where S^car models the position and velocity of the car,
S^ped models the position of the pedestrian,
and S^env models other properties
that do not change during a scenario.
These properties include the slipperiness factor of the road and the existence of the truck blocking the car's view of the pedestrian.
The car's position is defined via the integers x_c and y_c with 0 ≤ x_c ≤ 60
and 3 ≤ y_c ≤ 13. The velocity of the car is in {0, 1, …, 5} m/s.
The position of the car is updated at each step,
assuming a uniform motion at the current velocity.
The car has the following set of actions 𝒜:
hitting the brakes, pressing down on the accelerator, and coasting.
If the car is on a non-slippery part of the road,
accelerating stochastically increases the velocity
(by 1 or 2 m/s), braking stochastically decreases the velocity (by 1 or 2 m/s), and coasting maintains or decreases the velocity (by 1 m/s).
If the car is on a slippery part of the road, the effect of the selected action on the velocity changes; in particular, both braking and accelerating may leave the current velocity unmodified.
The pedestrian's position is given via the integers x_p and y_p with
0 ≤ x_p ≤ 60 and 0 ≤ y_p ≤ 15.
The pedestrian can move 1m in any direction, or not move at all.
The probabilities of moving in each direction are given by
a stochastic model of the pedestrian,
designed in such a way that the pedestrian favors crossing the street
through the crosswalk while avoiding being hit by the car.
The probabilities in the pedestrian's position update can be influenced by a hesitance factor, which captures how likely it is that the pedestrian puts themselves at a hitting distance from the car.
The resulting MDP consists of about 120k states and 400k transitions.
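The following toy sketch illustrates the kind of stochastic velocity update described above. The exact transition probabilities of our model are not reproduced here; the numbers below are placeholders chosen only to convey that slipperiness increases the chance that braking or accelerating has no effect.

```python
# Toy sketch of the car's velocity update; probabilities are illustrative placeholders.
import random

def next_velocity(v, action, slippery, sl_fact=1.0):
    """action in {'brake', 'accelerate', 'coast'}; velocity is kept in {0,...,5} m/s."""
    p_no_effect = min(0.8, 0.1 * sl_fact) if slippery else 0.0
    if action in ("brake", "accelerate") and random.random() < p_no_effect:
        dv = 0                              # slippery road: the input may do nothing
    elif action == "accelerate":
        dv = random.choice([1, 2])
    elif action == "brake":
        dv = -random.choice([1, 2])
    else:                                   # coasting maintains or decreases the speed
        dv = random.choice([0, -1])
    return max(0, min(5, v + dv))
```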
§.§ Analysis of a Trace
In the described environment, we are given a scenario τ_ref
as illustrated in Figure <ref>,
and an agent π𝒮→𝒜.
As thresholds to evaluate evidence of intention,
we use δ_ρ^L = 0.25, δ_ρ^U = 0.75 and
δ_σ = 0.5. We restrict the set of policies Π to policies that do not stop the car if no pedestrian is within a range of 15m of the car.
The collision states are given by
S_ℐ = {s ∈𝒮 : |x_p-x_c| ≤ 5 ∧ |y_p-y_c| ≤ 5 }.
Given this setting, we analyze τ_ref for evidence
of intentional behavior towards reaching S_ℐ.
Therefore, we first compute the intention-quotient ρ_π(τ_ref)
and the scope of agency σ(τ_ref).
Results of analysing τ_ref.
In Figure <ref>, we give the results of the model checking calls for reaching S_ℐ for states in τ_ref.
The lower line represents 𝒫_min, the upper line represents 𝒫_max, and the line in the middle represents 𝒫_π for every state in τ_ref.
The shaded area,
between 𝒫_min and 𝒫_max,
represents the scope of agency at each state.
The figure shows the agent is close to the line of 𝒫_max,
but the scope of agency is very low,
with
ρ_π(τ_ref) = 0.73 and
σ(τ_ref) = 0.18.
Since σ(τ_ref) < δ_σ, our method concludes that there is not enough evidence for intentional behavior yet and moves on to the step of generating counterfactual scenarios.
Counterfactual analysis.
We generate counterfactual scenarios by exploiting domain knowledge about integral variables of the MDP.
We change the following variables:
* Slipperiness range. The street is considered to be slippery between
the positions sl_init and sl_end.
* Slipperiness factor. The strength of the slippery effect is measured by the
slippery factor sl_fact,
which is analogous to the inverse of the friction coefficient in classical dynamics.
The effect of slipperiness is to
make the acceleration and brake less effective,
increasing the probability that both acceleration and brake
have no effect on the speed of the car.
The larger the value of sl_fact,
the more effect,
with sl_fact=1 being the minimum value,
where the road is considered to be `not slippery at all'.
* Hesitancy factor. The pedestrian, in general, tends to cross the street
through the crosswalk. The hesitancy factor modifies the probabilistic model of the pedestrian, to make them more or less prone to put themselves at a hitting distance from the car.
A pedestrian with hesitancy factor h_fact=0 is a completely cautious pedestrian.
On the contrary, a pedestrian with hesitancy factor
h_fact=1 completely disregards the state of the car.
* Visibility. In the given scenario, there is a truck blocking the visibility of the car, corresponding to vis=1.
In case vis=0, the visibility block is eliminated.
The variables and the ranges considered for generating counterfactuals are summarized in Table <ref>.
Results of analyzing counterfactual scenarios.
We build the counterfactuals in batches of N=5,
by sampling uniformly on the ranges described in Table <ref>.
We show the results in terms of intention-quotient and scope of agency in Table <ref>.
We report the averaged values and standard deviations over 5 runs.
As we can see from the table,
with 21 traces in T we have ρ_π(T) > δ_ρ^U = 0.75
and σ(T) > δ_σ = 0.5.
Thus, our method concludes that the agent under study does present evidence of intentional behavior to hit the pedestrian.
§.§ Comparative Analysis of Several Agents
In this section,
we illustrate how our method can be used to compare different agents
in terms of intentional behavior.
We compare three different agents π_1, π_2, π_3 in the same scenario τ_ref.
The agent π_1 corresponds to the policy π in Section <ref>.
In Figure <ref> we give the probabilities
for reaching S_ℐ for the policies π_1, π_2, π_3
for two different traces: left for τ_ref, right for a counterfactual trace τ∈ T with a high scope of agency.
The figure illustrates how even a single counterfactual trace can be a powerful tool for distinguishing between policies that seem impossible to differentiate with any confidence in the originally given trace τ_ref.
A second insight is illustrated in Table <ref>.
In this table, for each agent π_1,π_2,π_3,
we show the number of counterfactuals needed to generate enough evidence of intentional behavior,
together with the final values of the intention-quotient and the scope of agency.
Both π_1 and π_3 are clear-cut,
but for π_2 our algorithm reaches the limit of |T| = 100
without finding enough evidence.
In this case, the intention-quotient of the agent seems to converge to a value of about 0.53,
sitting in the middle of the lower and upper threshold.
Finally, in Figure <ref>,
we show the values of
intention-quotient against the scope of agency
for 100 counterfactual traces
sampled from the ranges in Table <ref>.
This serves as a visual representation of the same facts presented in Table <ref>, concluding that
π_1 clearly shows evidence of intentionally hitting the pedestrian, π_2 shows evidence of intentionally hitting the pedestrian to a lesser degree, which may or may not be considered sufficient depending on the thresholds, and π_3 shows clear evidence of acting without the intention of hitting the pedestrian.
§ DISCUSSION
To the best of our knowledge, we present the first method that analyzes intentional behavior directly from policies in an MDP. We believe that our approach has great potential. However, there are aspects that need to be addressed to make the method applicable in challenging scenarios:
* Our method requires having a correct model of the environment that captures everything relevant to analyze a scenario. In many cases, such models are not available.
Recent work on digital twin technologies <cit.> and the existence of realistic simulators <cit.>
provides optimism for more and more accurate models of agents and their environment.
* Our method requires the agent be given as a policy in an MDP.
In case we are given a different implementation, e.g., as a neural network, we would need a sample-efficient method to translate the implementation into a policy in the MDP, at least for the relevant parts of the state space.
* While current probabilistic model checking engines achieve impressive
performance <cit.>,
computing exact probabilities is costly (polynomial complexity).
An alternative would be to use statistical model checking <cit.>, which is less demanding, albeit also less precise.
Statistical model checking has been successfully used to validate autonomous driving modules <cit.>.
General policies.
We briefly discuss how to treat policies with memory and non-determinism.
Our definitions naturally extend to non-deterministic policies with memory,
although it is not evident whether the probabilities required to measure intention-quotients (Definition <ref>) are easy to compute.
Computing extreme probabilities is equally hard for general policies.
If the policy has a finite amount μ of memory, 𝒫_π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s) can be computed using probabilistic model checking, with a cost of μ times that of the memoryless case <cit.>.
In case the non-determinism is unknown to us,
to compute 𝒫_π(𝚁𝚎𝚊𝚌𝚑(S_ℐ),s)
we need to sample the decisions of the agent often enough to get an accurate approximation of its
decision-making probabilities,
making it more costly,
although recent heuristics for determinization may help <cit.>.
Knowledge of the agent's beliefs.
An intrinsic limitation of studying policies in MDPs
is the lack of knowledge of the agent's beliefs about the world.
Belief plays a fundamental role in the study of intentions:
an agent that intends S_ℐ must act believing that their acts
are a good strategy to reach S_ℐ <cit.>.
Belief is also central to the definitions of responsibility and blameworthiness in structural causal models <cit.>.
In part for this reason, together with the uncertainties derived from a probabilistic setting,
we can only claim incomplete
evidence of intentional behavior.
Single-agent setting.
In our framework, all relevant parts of the environment are modeled by an MDP,
and all the agency in the model is attributed to the agent,
i.e., the only actor choosing actions in the MDP is the agent.
We argue that this decision is reasonable to study the behavior of an individual agent:
from the perspective of an agent,
it makes no difference whether the decisions of other actors are governed by a
sophisticated policy or by random events in the environment,
as long as the MDP model contains accurate transition probabilities.
The emergence of intrinsically multi-agent phenomena,
like shared intentions in cooperative settings, would require a multi-agent extension of our framework and is left as future work.
In particular, we do not explore how to assign moral responsibility to large groups of agents (the so-called “problem of many hands” <cit.>).
Another problem we do not explore is the existence of
responsibility voids <cit.>, i.e.,
situations in which a group of agents should be held accountable for an outcome,
while at the same time, no individual agent intended that outcome.
§ RELATED WORK
Intention in artificial intelligence.
We borrow the concept of intention as a set of states to reach
from standard BDI literature <cit.>.
Closest related to our work is <cit.>,
where the authors develop a mapping from the BDI formalism to the MDP formalism.
The mapping they propose from intentions to policies in MDPs yields a definition of intentions in MDPs similar to our Definition <ref>
of intentional behavior under perfect knowledge.
In contrast to our work,
Simari and Parsons focus on optimal policies in MDPs
and their correspondence to plans following a certain intention in the BDI model.
Therefore, their mapping holds only for optimal policies and cannot be applied to agents with suboptimal policies.
A central element in the definition of intention is
commitment: an agent should not reconsider its intentions too often <cit.>.
Although we do not model reconsideration as it relates to time,
the intention quotient ρ_π can be interpreted as a quantitative measure of the agent's commitment to reach a certain state.
Responsibility and accountability.
The concept of intention of rational agents, both humans and non-humans,
has been the subject of extensive study in the context of
philosophy of action <cit.> as well as in its relation to moral responsibility <cit.>.
The concept of agency is a necessary element
in assigning responsibility, leading to issues when the agency is diluted among many individuals <cit.>.
There is an ongoing debate in the philosophy of mind, between those
that consider that an agent’s reasoning is sufficient to explain
their actions <cit.>, and those who maintain that extrinsic information must be imported through a “Principle of Charity” <cit.>.
By building a model of the agent’s knowledge (the MDP) to inquire about their behavior,
we are assuming the latter position. Recent work attempts to
answer similar questions from the former <cit.>.
Causality and blame attribution.
A basic element for a complete accountability process is the study of causality <cit.>.
The foundational work of <cit.>
introduced a quantitative notion of causality,
by studying degrees of responsibility and blame.
Responsibility and blame allocation has been extensively developed in the context of non-probabilistic structures
(see, e.g., <cit.> for the characterization of complexity or <cit.> for a multi-agent framework).
More recent and more closely related to our approach is the work of <cit.>,
studying responsibility and blame in Markov models.
The study of harm from a causality perspective is also
gaining attention recently,
with
<cit.> studying harm from an actual causality perspective,
and <cit.> studying harm from a probabilistic perspective, heavily relying on counterfactuals.
Counterfactual analysis <cit.> is a key concept in causality <cit.>,
used in an analogous way as our generation of counterfactual scenarios.
We go one step further by relating the implementation of the agent to the best and worst implementation for reaching an intended event.
Another recent approach to blame attribution is <cit.>,
which studies multi-agent Markov decision processes from a game-theoretic perspective.
Policy-discovery methods.
Since the popularization of reinforcement learning,
there exist several methods for obtaining representations of a black-box agent,
by studying traces of such agents.
In inverse reinforcement learning <cit.>,
the agent is assumed to be maximizing an unknown reward function, and the objective is to find the reward function that best explains the agent's performance over a set of traces.
These methods could potentially be used as a
pre-processing step to apply our framework to black box agents.
In any case, the obtained representations must be accurate enough before using them for any accountability process.
Explainability.
One of the most influential works in explainability of AI is <cit.>,
which studies how explainability should rely on
concepts from social sciences.
More recently
<cit.>
uses the built-in notions of desire, beliefs and intentions to
study
explainability of BDI models, relying on concepts from the sociology literature.
While the main paradigm in explainable reinforcement learning is applying techniques from explainable machine learning <cit.>,
our analysis of intentional behavior
can be used as a method to
aid the interpretability of agents operating in MDPs, using concepts from the philosophy of action <cit.>.
§ CONCLUSION & FUTURE WORK
In this paper, we analyzed policies in MDPs with respect to
intentional behavior taking uncertainties into account. Our method uses probabilistic model checking to automatically compute the best and worst possible policy for reaching a set of intended states.
We assess evidence of intentional behavior in a policy by relating it to the best and worst policies,
and use counterfactual analysis to generate more evidence if needed.
In future work, we want to extend our current analysis by considering a multitude of possibly conflicting intentions of the agent.
Another interesting line of work is to extend the study of intentional behavior to multi-agent systems, in which cooperative or competitive intentions may arise.
We also want to study long executions, where the agent has time for reconsideration.
Furthermore, we want to implement our framework to study reinforcement learning agents in challenging application areas.
§ ACKNOWLEDGEMENTS
This work was supported in part from the European Union’s Horizon 2020 research
and innovation programme under grant agreement N^∘ 956123 - FOCETA, by the State Government of Styria, Austria – Department Zukunftsfonds Steiermark, by the Office of Naval Research (ONR) of the United States Department of Defense through a National Defense Science and Engineering Graduate (NDSEG) Fellowship, and by the National Science Foundation (NSF) awards CCF-2131476, CCF-2106845, and CCF-2219995.
We also thank Lukas Posch for his help in setting up Tempest.
|
http://arxiv.org/abs/2307.02797v1
|
20230706061237
|
BHEISR: Nudging from Bias to Balance -- Promoting Belief Harmony by Eliminating Ideological Segregation in Knowledge-based Recommendations
|
[
"Mengyan Wang",
"Yuxuan Hu",
"Zihan Yuan",
"Chenting Jiang",
"Weihua Li",
"Shiqing Wu",
"Quan Bai"
] |
cs.IR
|
[
"cs.IR",
"cs.AI",
"68T07",
"I.2.6; I.2.7"
] |
mengyan.wang@autuni.ac.nz
Auckland University of Technology
Auckland
New Zealand
yuxuan.hu@utas.edu.au
University of Tasmania
Hobart
Australia
zyuan0@utas.edu.au
University of Tasmania
Hobart
Australia
chenting.jiang@utas.edu.au
University of Tasmania
Hobart
Australia
Corresponding author
weihua.li@aut.ac.nz
Auckland University of Technology
Auckland
New Zealand
Shiqing.Wu@uts.edu.au
University of Technology Sydney
Sydney
Australia
quan.bai@utas.edu.au
University of Tasmania
Hobart
Australia
In the realm of personalized recommendation systems, the increasing concern is the amplification of belief imbalance and user biases, a phenomenon primarily attributed to the filter bubble. Addressing this critical issue, we introduce an innovative intermediate agency (BHEISR) between users and existing recommendation systems to attenuate the negative repercussions of the filter bubble effect in extant recommendation systems. The main objective is to strike a belief balance for users while minimizing the detrimental influence caused by filter bubbles. The BHEISR model amalgamates principles from nudge theory while upholding democratic and transparent principles. It harnesses user-specific category information to stimulate curiosity, even in areas users might initially deem uninteresting. By progressively stimulating interest in novel categories, the model encourages users to broaden their belief horizons and explore the information they typically overlook. Our model is time-sensitive and operates on a user feedback loop. It utilizes the existing recommendation algorithm of the model and incorporates user feedback from the prior time frame. This approach endeavors to transcend the constraints of the filter bubble, enrich recommendation diversity, and strike a belief balance among users while also catering to user preferences and system-specific business requirements. To validate the effectiveness and reliability of the BHEISR model, we conducted a series of comprehensive experiments with real-world datasets. These experiments compared the performance of the BHEISR model against several baseline models using nearly 200 filter bubble-impacted users as test subjects. Our experimental results conclusively illustrate the superior performance of the BHEISR model in mitigating filter bubbles and balancing user perspectives.
[500]Information systems Information systems applications
[500]Information systems Personalization
[500]Information systems Social recommendation
BHEISR: Nudging from Bias to Balance - Promoting Belief Harmony by Eliminating Ideological Segregation in Knowledge-based Recommendations
Quan Bai
August 1, 2023
=========================================================================================================================================
§ INTRODUCTION
The presence of filter bubbles in online recommendations has raised concerns about their potential negative impact on society and individuals <cit.>. Filter bubbles refer to users becoming fragmented by continuously and passively receiving homogeneous items as a result of overly specific algorithmic recommendations and their own personal preferences <cit.>. In the era of big data, recommendation systems play a crucial role in shaping users' perspectives and access to diverse information <cit.>. However, personalized recommendation algorithms are widely regarded as the main culprit behind filter bubbles. Under filter bubbles, each person inhabits their own information universe and is alone in their specific world <cit.>. Recommendation systems affected by filter bubbles often decrease the diversity of users' beliefs, leading to bias and ideological segregation, reinforcing users' existing beliefs, and limiting their exposure to alternative viewpoints. This phenomenon not only hinders users' formation of a well-rounded understanding but also exacerbates societal polarization <cit.>.
To address the prevailing issue of societal polarization in online social networks, existing research primarily deploys two types of strategies embedded in recommendation systems: algorithm-focused and human-focused <cit.>. The algorithm-focused strategies combat filter bubbles by advocating for content diversity during the in-processing and post-processing stages <cit.>. Methods such as explanation-based diversity recommendation <cit.>, community-aware model <cit.>, category-based diversification algorithm <cit.>, the Diversified GNN-based Recommender system (DGRec) <cit.>, and the graph-based user-item interaction method <cit.> are employed. These strategies, particularly those based on graph-based recommendations, offer invaluable insights into user preferences and category diversity. However, their heavy reliance on algorithmic engines to curate recommended content inadvertently neglects the integral role of human decision-making processes.
In contrast, human-focused strategies privilege individual agency over algorithmic engines when addressing the problem of filter bubbles <cit.>, for instance, nudging-based recommendations <cit.>. Unlike algorithm-focused models, nudge recommendations do not directly serve targeted information to users; instead, they adopt an indirect method of influencing individuals' decisions and behaviors <cit.>. Despite this, existing nudging-based models often struggle to effectively and explainably expand users' interest into categories in which they have previously shown no interest. Therefore, this study amalgamates the benefits of both graph-based algorithmic strategies and nudging-based recommendation systems to provide a responsible system that effectively mitigates the impact of the filter bubble.
This research aims to address the challenge posed by the filter bubble phenomenon and promote belief harmony among users by implementing nudge recommendations. We propose a novel recommendation approach, BHEISR, which functions as an intermediary between existing recommendation systems and user beliefs, as depicted in Figure <ref>. BHEISR is designed to mitigate the adverse effects of preference-based recommendation algorithms in systems susceptible to filter bubbles. With regard to users who are already impacted by filter bubbles, the model employs nudging techniques to encourage them to step outside their personal awareness comfort zones, fostering a more balanced representation of users' beliefs.
In pursuit of our objective, we first propose a Filter Bubbles Detection Model based on Multi-faceted Reasoning (FBDMR). The model utilizes a bipartite approach to identify users who are influenced by the filter bubble and the systems implementing user preference-based recommendation algorithms. The filter bubble leaves affected users with unbalanced beliefs, meaning that a user holds narrow or disinterested perspectives on one type of information (treated as a category) while showing extreme interest in another. Subsequently, we develop a recommendation prompt path incorporating these two categories (extremely interested and normal) by investigating further interconnected nodes between them. The BHEISR model employs nudge recommendations, embodying libertarian paternalism and adhering to principles of transparency. The aim is to subtly guide users towards exploring diverse viewpoints and questioning their preconceived notions via this generated prompt path. Throughout this process, the Generative Artificial Intelligence (GAI) model plays an essential role in further extracting abundant contextual information in the realm of big data.
The key contributions of this research can be categorized into four aspects.
* Firstly, we propose a novel model, BHEISR, to alleviate the adverse impacts of the filter bubble. The model balances users' interests and the recommendation system. Also, BHEISR equilibrates existing recommendation lists while offering democratic and interpretable suggestions to users. Furthermore, it promptly gathers user feedback, attaining belief harmony for users.
* Secondly, our research introduces a Filter Bubbles Detection Model based on Multi-faceted Reasoning (FBDMR). This model detects and identifies users affected by filter bubbles and recommendation systems where filter bubbles exist. With diverse methodologies, we delve into a fine-grained analysis of the existence and effects of filter bubbles from various detection aspects.
* Thirdly, we harness the power of nudging techniques to gently guide users towards broadening their interests and nurturing belief harmony. The process of nudge recommendation adheres to the principles of libertarian paternalism, transparency, and democracy, which enhances users' comprehension of the recommendations and boosts their confidence.
* Lastly, we employ the emerging GAI techniques within BHEISR to produce prompt path-based textual content for users and to solicit their feedback for updating belief graphs. The GAI model expands these belief graphs, deeply investigating the relationship between users' preferences and non-preferences, and conveys a broader range of semantic information to users, thereby enhancing diversity.
The remainder of this paper is organized as follows. In Section 2, we provide a comprehensive review of related work on recommendation systems, user selective exposure, filter bubbles, and nudging techniques. Section 3 describes our formal definitions, user belief networks, and category correlation graph construction. Section 4 presents the methodology and framework employed in our research. In Section 5, we discuss the experimental setup. Results and analysis are also presented in this section, followed by a discussion of the findings in Section 6. Finally, Section 7 concludes the paper and outlines potential avenues for future research.
§ RELATED WORKS
Despite their popularity and ability to provide users with tailored content suggestions, personalized recommendation systems inadvertently create a significant challenge: filter bubbles <cit.>. Filter bubbles constrain users' exposure to various perspectives and information, thus potentially leading to belief biases and societal fragmentation <cit.>. To mitigate this concern, an increasing number of researchers and practitioners have turned their attention towards dismantling filter bubbles, fostering diversity and democracy in recommender systems, and facilitating users' belief harmony.
In this section, we comprehensively review the relevant research works, deliberate on the filter bubble issue, explore the diversification of recommender systems, and review existing works on nudge recommendations. Additionally, we will highlight the contributions of this study.
§.§ Filter Bubbles and Ideological Segregation
§.§.§ Preference-based Recommendation Systems
Conventional recommendation systems prioritize the generalization of user preference, implying that these systems often recommend items to users based on their specific preferences and behaviors <cit.>. Techniques such as Collaborative Filtering (CF) <cit.>, Content-Based filtering (CB) <cit.>, rule-based methods <cit.>, or hybrid models <cit.> are commonly employed to analyze users' preferences and past behaviors. The recommendation system then suggests content that aligns closely with user preferences to enhance user satisfaction and engagement. However, this approach based on user preference may exacerbate the filter bubble issue, leading to ideological isolation and user bias. For example, Bryant et al. demonstrate that the YouTube algorithm, representative of a preference recommendation algorithm, exhibits a marked bias towards right-leaning political videos, including those espousing racist views propagated by the alt-right community <cit.>. It has become important to address the limitations of current preference recommendations, boost the diversity of suggestions, and actualize users' belief harmony.
§.§.§ Mitigating Filter Bubble Effects
In contrast to the echo chamber, filter bubbles emphasize on the constraints of preference recommendation algorithms <cit.>. Dahlgren et al. introduce the term "internet filters" to encapsulate the phenomenon of filter bubbles, which can entail numerous repercussions for users, including a narrowed focus on personal interests, substantial reinforcement of confirmation bias, reduced curiosity, decreased exposure to diverse ideas and people, compromised understanding of the world, and a skewed perception of reality <cit.>.
Addressing the negative effects of filter bubbles necessitates recognizing various challenges, with the algorithmic bias being particularly noteworthy. Chen et al. highlight that the emergence of recommendation algorithm bias amplifies the experimental nature of user behavior data as opposed to observational <cit.>.
Additionally, Dahlgren et al. examine the recommendation algorithm bias and broaden the concept of bias into two facets <cit.>. One form of bias originates from the recommendation algorithm, while the other stems from users' behaviors.
Aside from algorithmic bias, another challenge in mitigating filter bubbles lies in their elusive nature <cit.>. Users are often unaware of the homogenized world due to the imperceptible filter bubble effect. Namely, they remain unaware that their viewpoint differs from others in the same situation.
The growing influence of filter bubbles has raised increased concerns among researchers. A well-crafted recommendation system usually offers high accuracy while also promoting diversity; systems oriented solely towards accuracy may inevitably lead to filter bubble effects <cit.>. Contemporary research works propose several strategies for breaking filter bubbles by enhancing the diversity of recommendations. The research works of addressing filter bubbles with and without knowledge graphs, as well as selective exposure detection, are reviewed as follows.
Research beyond the scope of the knowledge graph. Early diversification algorithms, initially designed for recommendation systems, sought to augment category diversification <cit.>. Su et al. propose the category-inspired diversity recommendation algorithm to integrate the dissimilarity of a user's item features, considering the taxonomic tree to widen and enhance the user's interests <cit.>.
Yu et al. introduce an attribute-inspired, explanation-based diversity recommendation system, which achieves diversity in recommendation lists from two angles, i.e., item-based and collaborative filtering strategy-based recommendations <cit.>.
Similarly, Grossetti et al. extend the scope from groups to communities, where a community-aware model is proposed to identify Twitter communities and compute their category-based similarities <cit.>.
Moreover, Lunardi et al. propose a category-based diversification algorithm to diminish the filter bubble effect, ensuring that recommended items align with the user's interests while maintaining a degree of diversity in content and features<cit.>.
Research in the scope of the knowledge graph. While the aforementioned research models have significantly contributed to alleviating filter bubbles and enhancing recommendation diversity, many researchers also advocate for the critical role of knowledge graph-based recommendation algorithms. These algorithms not only mitigate data sparsity and cold start issues, but they also add an essential interpretability factor to recommendation systems <cit.>.
Yang et al. introduced the Diversified GNN-based Recommender System (DGRec), a graph-based recommendation system built on GNN, augmenting the diversity of recommended lists by improving the embedding generation process <cit.>.
Additionally, Li et al. adopt a graph-based methodology by constructing a user-item interaction graph for data analysis to examine the existence of a centralized recommendation phenomenon <cit.>.
In contrast to these methods, our model transcends the traditional approach of generating marginally varied items based on user preferences for diversity. We prioritize incrementally stimulating users' interest in items they may initially disregard without altering existing recommender system algorithms. The proposed novel approach aims to counteract the filter bubble effect by considering user interest and disinterest beliefs, i.e., an aspect that has received very limited attention from researchers.
Detection of Selective Exposure. Belief bias in reasoning refers to individuals' tendency to favor conclusions that align with their pre-existing beliefs <cit.>. This phenomenon is intimately connected with the formation of online filter bubbles, in which users tend to accept information that confirms their viewpoints and interests while rejecting alternative perspectives that challenge their beliefs <cit.>.
Existing methods proposed for selective exposure detection include Information Source Diversity Analysis (ISDA) <cit.>, User Interaction Pattern Analysis (UIPA) <cit.>, Reinforcement Learning Methods (RLM) <cit.>, and Social Network Analysis (SNA) <cit.>. Considering the limited interpretability of RLM and the focus of SNA on alleviating echo-chamber effects rather than filter bubbles, our research concentrates on investigating selective exposure detection algorithms based on ISDA and UIPA. ISDA includes various detection metrics such as topology metrics and homophily metrics <cit.>. Likewise, UIPA includes several established detection metrics, including the coverage algorithm and the Majority Category Domination (MCD) algorithm <cit.>.
Drawing inspiration from these metrics, we propose the FBDMR model for dual verification of the authenticity of the filter bubble phenomenon, having the concept of 'Entropy' <cit.> included to substantiate the existence of filter bubbles.
§.§ Nudge Techniques and Recommendations
A nudge is a non-coercive intervention designed to influence behavior by modifying the context in which choices are made <cit.>.
This form of such an intervention is usually transparent and optional, enabling individuals to better understand their choice consequences and to boost the likelihood of beneficial decision-making <cit.>. The core idea behind a nudge is to exploit individuals' beliefs and behavioral biases through various design strategies, directing them towards more favorable outcomes without constraining their freedom of choice <cit.>.
Given the widespread acceptance of nudging, an increasing number of researchers in recommendation systems are beginning to investigate the impact and effectiveness of nudges in diverse domains of recommendation systems. However, the majority introduces nudging recommendations from an AI-deprived perspective, implying a substantial absence or lack of AI technology in their research context. For example, Jesse et al. consolidate 87 nudging mechanisms at this AI-deprived level, including alterations in font size, the reputation of the messenger, and the visibility of information <cit.>.
Joachim et al. propose a platform empowered by AI designed to nudge, influence, and guide the behavior of individuals with diabetes <cit.>. To enhance the transparency of the nudging process and offer greater interpretability to the interventions, Tangruamsub et al. incorporate a knowledge graph into the nudge-based recommendation system to analyze users' preferences more thoroughly and make more precise recommendations <cit.>.
Furthermore, Sitar et al. propose an automated recommendation system that integrates managers' priorities and user feedback, utilizing knowledge graph structures, to organize items based on descending order of priority, known as nudge concepts <cit.>.
The recommendation systems mentioned previously have revealed the importance of establishing a knowledge graph-based nudging recommendation system.
However, these models are developed on the principle of privileging user preferences without recognizing the significance of transforming user disinterest perceptions into acceptable items. Different from the existing approaches, in this research, we strive to amplify recommendation diversity to augment user belief and harmony. The objective is to present items that users might initially dislike or reject in a non-coercive manner, guiding user perceptions from one end of the graph (items congruent with user preferences) to the other end (items less favored by users), enabling a transition in user belief from bias to balance.
§ PRELIMINARIES
In this section, we provide an overview of the fundamental concepts and terminology that are essential for understanding the subsequent discussions and analysis in this paper, which encompasses formal definitions, belief revision, and the category network.
§.§ Formal Definitions
A recommendation system is denoted by S = ⟨ U, A, C ⟩, where U = {u_1, u_2, ..., u_n} represents a set of n users, C = {C_1, C_2, ..., C_m} denotes a set of m categories. A indicates an AI platform designed to recommend a category C_i∈ C to a user u_j∈ U.
Definition 1. Belief network of user u_i refers to the representation of u_i's knowledge, beliefs, preferences, and expectations. Mathematically, u_i's belief network is denoted as G_u_i = ⟨ u_i, B_u_i, C_u_i⟩, where C_u_i = { C_u_i,1, C_u_i,2, ..., C_u_i,n} represents the set of categories with which the user has previously interacted, and B_u_i = { B_u_i,1, B_u_i,2, ..., B_u_i,n} signifies the user's belief degree towards each category. In the context of interactions with the BHEISR model, M_u_i,t symbolizes the collection of items that have been accepted by u_i from time step 0 to t, which includes both system-recommended items and GAI-generated items. Conversely, P̅_u_i,t denotes the list of GAI-generated prompts that have been presented to u_i, but whose generated items have been rejected by u_i from time step 0 to t.
Definition 2. A category C_i∈ C represents a specific category within the recommendation system and is defined as C_i = ⟨ Sub_C_i, r_C_i⟩. It consists of a set of subcategories within the category, denoted by Sub_C_i = {Sub_C_i,1, Sub_C_i,2, ..., Sub_C_i,n}, and the corresponding click probability for each subcategory, represented by r_C_i = { r_C_i,1, r_C_i,2, ..., r_C_i,n}.
Definition 3. Category Correlation ρ(C_x, C_y) ∈ [-1, 1] refers to the level of relevance between two categories C_x and C_y, with ρ(C_x, C_y) = ρ(C_y, C_x).
A higher value of ρ(C_x, C_y) indicates a stronger correlation between the two categories. To measure category correlation, the titles and abstracts of all textual items under each category (e.g., news articles or movie descriptions) are processed to extract features that capture the essential information of the categories. If an item N_k belongs to category C_x, its title and abstract are represented as t_N_k = {w_1, w_2,..., w_n} and a_N_k = {a_1, a_2,...,a_n}, respectively, where n denotes the length of N_k's title or abstract. Further, N_k can be represented as t_N_k + a_N_k. The extracted features are then transformed into numerical representations by employing an embedding technique such as BERT <cit.>. We transform N_k.t into its embedding N⃗_k.t to express the representation of C_x at time t. This transformation allows for capturing the semantic relationships between different categories. The category correlation is formulated as:
ρ(C_x, C_y) = (C_x C_y^T) / (‖ C_x‖·‖ C_y‖),
where C_x and C_y denote the vector representations of two categories involved in the calculation.
Due to the fine-grained nature of BHEISR, we update the category correlation graph after receiving user feedback. Therefore, if the user accepts an item N_g.t+1 at time t+1, the feature of C_x at t+1 is represented as N⃗_k.t+1⊕N⃗_g.t+1.
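A minimal sketch of this computation is given below, assuming each category has already been embedded (for instance by encoding the titles and abstracts of its items with a BERT-style model and pooling the results). The update helper is a simplified stand-in for the feedback-driven refresh described above, and all names are our own.

```python
# Sketch of the category-correlation computation (cosine of category embeddings).
import numpy as np

def category_correlation(vec_x: np.ndarray, vec_y: np.ndarray) -> float:
    """rho(C_x, C_y) as the cosine similarity of the two category vectors."""
    return float(vec_x @ vec_y / (np.linalg.norm(vec_x) * np.linalg.norm(vec_y)))

def update_category_vector(cat_vec: np.ndarray, accepted_item_vec: np.ndarray) -> np.ndarray:
    """On acceptance of a new item at t+1, fold its embedding into the category
    representation; a running mean stands in here for the concatenation used in the text."""
    return (cat_vec + accepted_item_vec) / 2.0
```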
Definition 4. A Filter Bubble-existed System FB^x_g refers to a system that encompasses a group of Filter Bubble-affected Users g = { u_n|u_n∈ U }. In such a system, each user u_n∈ g exhibits an extreme preference b^x_n for a particular category C_x while simultaneously being extremely insensitive towards information related to another category.
Each user u_n∈ g is significantly influenced by the filter bubble effect when it comes to decision-making behaviors. Specifically, when an item presented by FB^x_g involves categories that align with the user's preference b^x_n, the likelihood of the user accepting the item appears high. Conversely, items that do not contain the user's preferred categories face substantial hurdles in gaining the user's acceptance. Thus, an item composed solely of a category C_y that contradicts the user's preferences (i.e., ρ(C_y, C_x) = -1) is anticipated to be rejected by u_n∈ g.
§.§ Belief Revision
In the context of a user's belief network, both the user and categories are depicted as nodes, with the user's belief degree embodied by the edges connecting the user node to each category node. These edges represent the strength of the user's affinity for the corresponding category. It is important to note that the sum of a user's click probabilities for all subcategories within the belief network must equate to 1.
The belief degree is inferred from the user's behavior, specifically from their click probability, which signifies their inclination towards the subcategories. This click probability is used to calculate belief degree, employing entropy as a metric to quantify the user's click behavior across subcategories. In the current setting, a higher entropy value signifies a more robust belief in the user's preferences for that category.
Thus, the computation of belief degree relies on the user's click probability for all subcategories within the category of interest (denoted by r_C_i). The belief degree B_u_i,C_j of user u_i towards category C_j is formulated as follows:
B_u_i,C_j = - ∑_k=1^N_Sub_C_jr_C_j,klog_2(r_C_j,k),
where N_Sub_C_j signifies the total number of subcategories of C_j, and r_C_j,k represents the user's click probability for each individual subcategory under C_j.
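A minimal sketch of this entropy-based belief degree, assuming the click probabilities over a category's subcategories are available as a simple array (function and variable names are illustrative):

```python
import numpy as np

def belief_degree(click_probs) -> float:
    """Entropy (base 2) of the user's click probabilities over subcategories."""
    p = np.asarray(click_probs, dtype=float)
    p = p / p.sum()                      # normalise defensively to a distribution
    p = p[p > 0]                         # ignore zero-probability subcategories
    return float(-(p * np.log2(p)).sum())

# Example: a user who clicks three 'autos' subcategories with probabilities
# 0.6, 0.3 and 0.1 has a belief degree of about 1.30 towards 'autos'.
print(belief_degree([0.6, 0.3, 0.1]))
```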
§.§ Category Network
Category correlation quantifies the potential influence between different categories in our recommendation system. When ρ(C_x, C_y) = -1, it signifies a lack of relationship between categories C_x and C_y, indicating that the two categories cannot exert any impact on each other. However, as the value of category correlation approaches 1 (i.e., ρ(C_x, C_y) → 1), the correlation between categories C_x and C_y becomes increasingly strong. This implies that if a user accepts an item from category C_x, there is a higher probability of the user also accepting items from the highly related category C_y, and vice versa. In other words, the user's acceptance of items within one category tends to influence their acceptance of items within the correlated category. Therefore, the category network is constructed based on the extent to which the categories are related.
The information environment of a recommendation system S = ⟨ U, A, C ⟩ can be represented as a category network. As shown in Figure <ref>, the nodes correspond to the categories C in the system S = ⟨ U, A, C ⟩. Each category C_i∈ C is considered a distinct node in the category network. The edges connecting these nodes represent the category correlations among the categories, which quantifies the relationships and relevancy between different categories within the system.
§ THE FRAMEWORK OF BHEISR MODEL
The BHEISR model is designed to foster a transition for users from selective exposure to belief harmony, effectively enabling them to escape from the filter bubble. The key novelty of BHEISR resides in its incorporation of the "nudge" concept and utilization of GAI for producing recommendation items. This allows users to be gently steered from their highly preferred categories towards less engaging ones, free from any form of compulsion. The proposed BHEISR operates as a mediation mechanism bridging existing recommendation systems and users. It is built based on the existing recommendation algorithm to align with the user's knowledge system but adjusts the user's knowledge system according to the latest round of recommendations.
Operating as a dynamic, time-sensitive, and interactive system, the BHEISR model continually adapts users' confidence networks and tailors soft recommendation policies in response to user feedback. In its functioning, it embodies democratic principles and adheres to the principle of non-coercion, thereby fostering subtle and gradual attainment of belief harmony for users. Figure <ref> provides an overview of the entire BHEISR model framework.
As demonstrated in Figure <ref>, the BHEISR is a user feedback-responsive, time-sensitive recommendation model. Its objective is to balance users' cognition, thereby preventing the exhaustion stemming from the filter bubble phenomenon and users' selective exposure. Detection of the filter bubble forms the foundation of the BHEISR model. The integration of FR and CR modules in FBDMR provides a comprehensive understanding of filter bubbles, taking into account both system-level biases and user experiences. This inclusive approach enables precise evaluation of biased models and facilitates interventions to counteract their adverse effects.
To accomplish category-level nudge recommendations, the essential step is to establish relationships between categories. The BHEISR model builds a category relationship graph, grounded in the unique characteristics of each category and the similarities between category pairs. Leveraging the amalgamation of the user's belief graph and category correlation graph, the model discerns the most suitable paths between categories. By employing the GAI framework, the model generates relevant content based on these paths. The final phase involves integrating this newly generated item with existing recommendation lists and presenting this curated set of items as a recommendation feed to the user. The model continually updates users, category networks, and optimal paths in response to user interactions, while evaluating the probability of user acceptance until the user achieves belief harmony.
§.§ A Filter Bubbles Detection Model based on Multi-faceted Reasoning (FBDMR)
The Filter Bubbles Detection Model Based on Multi-faceted Reasoning (FBDMR) is a novel model designed to investigate existing recommendation systems from dual perspectives: their inclination towards user preferences and the occurrence of users exhibiting selective exposure. Different from traditional single-dimensional reasoning models, FBDMR operates in both Forward Reconnaissance (FR) and Counter Reconnaissance (CR). The FR assesses the recommendation system to ascertain any potential bias favoring user preferences. In contrast, the CR targets users, pinpointing instances where users are influenced by preference-biased models, which could lead to users' selective exposure within filter bubbles. FBDMR offers an inclusive evaluation of filter bubbles, enabling the detection of biased models and assessing their impact on users. Its implementation contributes to the development of fairer recommendations and interventions to alleviate the negative consequences of filter bubbles. Figure <ref> illustrates the structure of the FBDMR.
§.§.§ Forward Reconnaissance (FR) Model
For the Forward Reconnaissance (FR) Model, we validate the user preference bias of the current recommendation model from two perspectives: mathematical validation and time-sensitive data validation. The subcategory coverage score <cit.> and subcategory duplicate score <cit.> are widely used diversity validation metrics and serve as the mathematical validation in this paper. The formulas for coverage diversity and duplicate diversity are given below:
DC = 1/N∑_i=1^N(1 - f_Sub_C_i/F),
where DC denotes the diversity coverage, N is the number of subcategories, f_Sub_C_i is the frequency of subcategory Sub_C_i, and F is the total frequency of all subcategories.
DD = 1/N(N-1)∑_i=1^N∑_j ≠ i d_ij ,
where DD denotes the duplicate diversity, N is the number of subcategories, and d_ij represents the duplicate measure between subcategories Sub_C_i and Sub_C_j.
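The two diversity metrics can be sketched as follows; since the duplicate measure d_ij is not fixed to a single definition here, the Jaccard-style measure used below is only an illustrative assumption:

```python
import numpy as np

def diversity_coverage(freqs: dict[str, int]) -> float:
    """DC = (1/N) * sum_i (1 - f_i / F) over subcategory frequencies."""
    f = np.array(list(freqs.values()), dtype=float)
    n, total = len(f), f.sum()
    return float((1.0 - f / total).sum() / n) if total > 0 else 0.0

def diversity_duplicate(items_per_sub: dict[str, set], d=None) -> float:
    """DD = (1/(N(N-1))) * sum_{i != j} d_ij."""
    if d is None:  # assumed duplicate measure: Jaccard overlap of item sets
        d = lambda a, b: len(a & b) / len(a | b) if (a | b) else 0.0
    subs = list(items_per_sub)
    n = len(subs)
    total = sum(d(items_per_sub[a], items_per_sub[b])
                for a in subs for b in subs if a != b)
    return total / (n * (n - 1)) if n > 1 else 0.0
```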
In order to further assess whether the current recommendation algorithm suffers from filter bubbles, we perform an analysis on a specific case by observing the evolution of recommendation logs (existing recommendation items) rankings over time. Figure <ref> demonstrates the time-involving recommendation logs.
From the above pie chart, it can be observed that as time progresses, the recommender system consistently recommends information related to the user's preferred categories. In this case, the user shows a greater interest in 'autos' information. Conversely, the system gradually reduces the recommendations for 'television' information as it detects a lack of user interest in that aspect during human-machine interaction. Based on the aforementioned research, it is possible to assess whether the recommender system exhibits a filter bubble. However, further investigation is required to determine whether users are actually being affected by it.
§.§.§ Counter Reconnaissance (CR) Model
The CR model is a user-centric model designed to identify the influence of filter bubbles on user information perception. The model creates a belief network for each user, derived from their interaction records within the system. This belief network is then employed to analyze user preferences and dislikes towards different categories of information.
The metric of entropy is employed to establish a baseline for distinguishing user preferences. In particular, high entropy is indicative of diverse user interests, while low entropy suggests a concentration of interest in specific areas. Figure <ref> describes a representative user belief network that includes preference and dislikes data used in our experiment.
In Figure <ref>, the red, blue, and yellow nodes represent the historically interacted categories C_u_i of user 'U21538' or u_i. The pink nodes correspond to the subcategories within each of these primary categories. The edge that links a category to its subcategory represents the user's click probability for that subcategory. Moreover, the connection between the user and a specific category indicates the user's degree of belief towards that category, represented by B_u_i.
Upon completion of the user belief graph construction, we introduced a novel and explainable definition for identifying users demonstrating selective exposure to information. The belief diversity among users is presented in Figure <ref>, where the distribution of users across various belief levels is depicted for multiple types of information.
We apply the Kolmogorov-Smirnov test and Skewness calculation to evaluate whether the observed distribution aligns with the standards of a Gaussian distribution, as depicted in the following equations.
K, p = kstest({B_u,C_m},'norm', args=(μ,σ)) ,
skewness_C_m= 1/N_C_m∑_i=1^N_C_m((x_i-μ)/σ)^3 ,
where {B_u,C_m} denotes the set of belief degrees of all users towards category C_m, N_C_m symbolizes the total number of users within that category, x_i is the belief degree of the i-th user, and μ and σ are the mean and standard deviation of these belief degrees. K signifies the Kolmogorov-Smirnov statistic, and p stands for the p-value of the test. A p-value greater than 0.05 suggests that the distribution for category C_m is not significantly different from a normal distribution.
The skewness of the distribution, represented as skewness_C_m, offers further insights into the shape of the distribution and can be interpreted in light of the empirical rule (also known as the "68-95-99.7" rule). The skewness_C_m in Equation <ref> serves as a criterion to classify the degree of skewness in the belief distributions within the m^th category.
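A minimal sketch of this normality and skewness analysis, together with the two-standard-deviation classification used below, could look as follows; the data layout and names are illustrative, and SciPy's kstest and skew are used in place of the equations above:

```python
import numpy as np
from scipy import stats

def analyse_category(beliefs: np.ndarray):
    """`beliefs` holds the belief degrees of all users towards one category."""
    mu, sigma = beliefs.mean(), beliefs.std()
    k_stat, p_value = stats.kstest(beliefs, "norm", args=(mu, sigma))
    skewness = stats.skew(beliefs)       # third standardised moment
    # Users more than two standard deviations from the mean are flagged as
    # exhibiting selective exposure (extreme interest or extreme disinterest).
    flagged = beliefs[np.abs(beliefs - mu) > 2 * sigma]
    return k_stat, p_value, skewness, flagged
```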
According to Definition 4, a user u_i can be classified as being influenced by filter bubbles, denoted as u_i∈ g, only when they exhibit selective exposure. For distributions exhibiting positive skewness, we apply the empirical rule, or the "68-95-99.7" rule, to categorize users' preferences and dislikes. Figure <ref> provides a specific example, demonstrating this rule's application in the 'autos' category.
In Figure <ref>, the x-axis denotes the belief degree in 'autos', while the y-axis corresponds to the number of users. According to the "68-95-99.7" rule, users outside the range of two standard deviations from the mean are classified as u ∈ g. More specifically, u ∈ g includes users with belief degrees lower than approximately 0.1 (indicating disinterest in the 'autos' category) and users with belief degrees higher than about 1.9 (indicating strong interest in the 'autos' category).
As can be observed from Figure <ref>, it is clear that the user's interests across categories are not evenly distributed. The user demonstrates a significantly lower interest in the 'news' category, denoted by the red node, as indicated by a belief degree of 0.44, compared to other categories. In contrast, the user's interest in the 'autos' category appears exceptionally high, as demonstrated by a belief degree of 2.39. When a user demonstrates a pronounced preference for one category while showing a notable lack of interest in another, this behavior suggests selective exposure influenced by the filter bubble effect. The discrepancy in interest levels allows us to identify and categorize such users accordingly.
§.§.§ Interplay and Feedback Loop
The FR and CR models are linked, forming an intricate feedback loop. Biases identified within the FR model can provide valuable insights for the CR model to better understand user experiences and vice versa. This interaction facilitates a deeper comprehension of the intricate dynamics existing between recommendation systems, user preferences, and the consequent formation of filter bubbles. By examining the interplay between system-level biases and user behavior, we gain insights into how they influence each other. The FBDMR model, combining FR and CR models, offers a comprehensive approach to understanding filter bubbles since it considers system-level biases and user experiences, providing a holistic view of the phenomenon.
§.§ Nudging Strategy
Nudging is a concept drawn from behavioral economics that aims to steer individuals towards certain behaviors without relying on monetary incentives. This concept resonates with the notion of libertarian paternalism, advocating for strategies that preserve individual freedom of choice while subtly guiding towards more beneficial decisions <cit.>. The BHEISR model leverages knowledge graphs and nudging techniques to deliver democratic and transparent recommendations.
As an illustration, recall that, in Figure <ref>, instead of directly incorporating "news" into the recommendation list, the BHEISR model expands the exploration to a broader array of potential category information. This approach serves to facilitate nudging recommendations, subtly guiding users towards diverse information consumption while respecting their freedom of choice.
§.§.§ Adaptive Path Exploration Algorithms
The creation of a recommendation prompt path forms the foundation of the nudge recommendation mechanism in the BHEISR model, where an adaptive path exploration algorithm is designed. This algorithm automatically constructs the recommendation path based on user feedback, thereby implementing a closed-loop feedback system that interlinks user feedback and system recommendations.
The adaptive path exploration algorithm is designed to continuously evolve in response to user behavior and changes in the category graph. This dynamic adaptation facilitates the delivery of contextually appropriate prompts, tailored to the user's current situation and preferences. The operational procedure of the adaptive path exploration algorithm is delineated below.
The adaptive path exploration algorithm is inspired by the shortest path exploration algorithm, also known as Dijkstra's algorithm <cit.>. The core principle of this approach is to commence from a central point, traverse the neighboring points, and identify the point with the highest weight as the starting point for the subsequent time step. To accommodate the dynamic nature of user knowledge and category relationships, we have proposed an enhanced adaptive path exploration algorithm. This algorithm integrates the evolving user perceptions and category relationships into the process of path discovery.
As depicted in Figure <ref>, there is a category correlation graph C = {C_1, C_2,..., C_i}, where i denotes the total number of categories. Assuming C_1 as the start of a path, the most potent path at the next timestamp would be:
ρ(C_1, C_n)_t+1 = max_C_n(ρ(C_1, C_n)_t + B_u,t^n· rej_w,t),
where C_n serves as the start node at timestamp t+1, and B_u_i,t^n represents user u_i's belief degree towards C_n at timestamp t. The symbol rej_w signifies the rejection weight, typically set to 1. To ensure the fidelity of user perceptions, we keep the bidirectional relationship graph constant in instances where users decline recommendations, in contrast to the continually evolving belief and categorical relationship networks. We have introduced a tolerance threshold θ for user rejections. When this tolerance level is exceeded, we assign a rejection weight rej_w of -1 to de-prioritize disfavored information that changes over time. The optimal prompt path at time t, represented as p_t, is composed of multiple categories. Each category is strongly connected to the starting point at each timestamp until the traversal is completed.
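A minimal sketch of this adaptive, greedy path construction, assuming the category correlations, the user's belief degrees, and per-category rejection counts are available as simple dictionaries (all names and the exact tie-breaking are illustrative assumptions):

```python
def explore_path(start, rho, belief, rejections, theta=3, rej_default=1.0):
    """rho[(a, b)]: category correlation; belief[c]: the user's belief degree
    towards c; rejections[c]: how often prompts involving c were rejected."""
    path, visited, current = [start], {start}, start
    candidates = {b for (a, b) in rho if a == current} - visited
    while candidates:
        def score(c):
            # The rejection weight flips to -1 once the tolerance threshold
            # theta is exceeded, de-prioritising repeatedly rejected categories.
            rej_w = -1.0 if rejections.get(c, 0) > theta else rej_default
            return rho[(current, c)] + belief.get(c, 0.0) * rej_w
        current = max(candidates, key=score)
        path.append(current)
        visited.add(current)
        candidates = {b for (a, b) in rho if a == current} - visited
    return path
```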
§.§.§ Nudging and GAI Algorithms
The proposed nudging process leverages the concept of incremental computing <cit.>. By fragmenting the path into smaller sub-paths, we can carry out incremental recommendation calculations. This approach implies that when a user expresses a preference or opinion at a particular point in the path, only the impacted sub-path requires recalibration rather than recomputing the entire path. This method facilitates handling longer or more intricate paths than simple sequential recommendations, thus reducing the number of recommendations and enhancing efficiency. Figure <ref> demonstrates the recommendation process of the BHEISR model within the nudging environment. Furthermore, we describe the proposed nudge strategy in Algorithm <ref>.
The GAI algorithm, when combined with nudge recommendation techniques, efficiently exploits the interconnectedness of information. This combination presents a solution to address information gaps that may arise during end-to-end recommendation processes. By employing the powerful capabilities of the Large Language Models (LLM), e.g., GPT-3.5 Turbo, this approach offers rich semantic information at each juncture in the recommendation path, thus strengthening the relationships between individual points. This strategy effectively engages a user's interest in specific categories to foster intrigue in areas they might find less attractive.
As previously mentioned, a nudging prompt p_t = {C_1^p,..., C_i^p} can be generated at each timestamp. This prompt denotes an optimal path between the starting node, representing the category of highest user interest C_1^p, and the end node, the category of least interest C_i^p. To harness rich contextual information from point-to-point paths within the vast landscape of big data, these paths are fed into the GAI, generating a contextually rich item, GI, for each point in the path p_t. The associated equation is as follows:
GI ⇒ C_1^p∩ C_2^p∩ ... ∩ C_i^p ,
In conclusion, the integration of the GAI algorithm and nudge recommendation techniques in the BHEISR model allows the effective utilization of the interconnections. This approach mitigates information gaps, capitalizes on the rich semantic information delivered by the GAI, and employs nudge techniques to establish strong connections between data points. This can help to stimulate user interest in less engaging categories through their established preferences for related subjects.
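The following sketch illustrates how a nudging prompt might be assembled from the optimal path p_t and passed to a generative model. The prompt wording and the call_llm placeholder are assumptions for illustration; any LLM backend (the paper mentions GPT-3.5 Turbo) could be plugged in.

```python
def build_nudge_prompt(path: list[str]) -> str:
    chain = " -> ".join(path)
    return (
        "Write a short, engaging recommendation item that connects the "
        f"following categories in order: {chain}. Start from the user's "
        f"favourite category '{path[0]}' and gradually introduce '{path[-1]}'."
    )

def call_llm(prompt: str) -> str:
    # Placeholder for the chosen LLM backend (e.g. a GPT-3.5 Turbo client).
    raise NotImplementedError("plug in the LLM client of your choice here")

def generate_item(path: list[str]) -> str:
    return call_llm(build_nudge_prompt(path))   # the generated item GI for p_t
```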
§.§.§ Recommendations
As an intermediary between the recommendation system and users, the BHEISR model alleviates the adverse effects of filter bubbles caused by the current system and fosters belief balance among users subjected to selective exposure. We append an additional list of items to the original recommendation list, so that the final user recommendation list, denoted as feed, is given by feed = {feed_original, feed_current}. feed_original consists of several items i, represented as feed_original = {i_1, i_2,..,i_j}. To derive the original recommendation list, we employ two filtering algorithms: Content-Based Filtering (CB) <cit.> and User-Collaborative Filtering (UC) <cit.>.
CB determines the utility of an item i for a user u based on the item's content. This utility is computed by assessing the similarity between the content of the item and the historical items accepted by the user:
score_i,u = sim(TF-IDF_i, 1/n∑_j ∈ M_u,tTF-IDF_j),
where sim(·) calculates the cosine similarity between an item i and a user u. We use TF-IDF to analyze the content of the item i. The term 1/n∑_j ∈ M_u,tTF-IDF_j represents the user's preferences according to their historical behavior; specifically, M_u,t signifies the user's historical records at time step t, indicating the items they clicked from time 0 to t, and the term is the average of the TF-IDF feature vectors of all interacted items. UC assesses the utility of an item i based on the historical records of other similar users, as per Equation <ref>. We calculate the similarity between two users using cosine similarity, considering their respective historical records. Moreover, B_u_i denotes the belief vector of user u_i across different categories.
score_i ∈M_j,u = M_iM_j^T/‖M_i‖·‖M_j‖ * B_u_i
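A minimal sketch of the two scoring functions, using scikit-learn's TF-IDF vectorizer for the content-based score; the way user histories are encoded as vectors and the scalar belief weighting in the collaborative score are simplifying assumptions for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def content_based_score(item_text: str, history_texts: list[str]) -> float:
    # sim(TF-IDF of the candidate item, mean TF-IDF of the user's clicked items)
    tfidf = TfidfVectorizer().fit_transform(history_texts + [item_text]).toarray()
    profile = tfidf[:-1].mean(axis=0, keepdims=True)   # user preference vector
    return float(cosine_similarity(tfidf[-1:], profile)[0, 0])

def user_collaborative_score(history_u: np.ndarray, history_v: np.ndarray,
                             belief_u: float) -> float:
    # Cosine similarity of two users' history vectors, weighted (here as a
    # scalar, a simplifying assumption) by the target user's belief degree.
    sim = history_u @ history_v / (np.linalg.norm(history_u) * np.linalg.norm(history_v))
    return float(sim * belief_u)
```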
feed_current indicates GI, so the recommended feed to users is feed = {i_1, i_2,..,i_j, GI}. After the BHEISR model recommends a feed to the target filter bubble-affected user u, u decides whether to accept the recommendation based on the acceptance probability AP^i_j_u,t, as defined in Equation <ref>:
AP^i_j_u,t = ∑_C_x ∈ i_jω_C_x· B_u,C_x/∑_k=1^n B_u,C_k ,
where ω_C_x is the weight of category C_x contained within the content of an item i_j ∈ feed, B_u,C_x represents the belief degree of u towards category C_x at the current time step, and ∑_k=1^n B_u,C_k is the sum of u's belief degrees over all categories.
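The acceptance probability can be sketched as follows, under the assumption that an item exposes per-category weights and the user's belief degrees are stored per category (names and the toy numbers are illustrative):

```python
def acceptance_probability(item_category_weights: dict[str, float],
                           user_beliefs: dict[str, float]) -> float:
    # Weighted belief mass of the item's categories, normalised by the user's
    # total belief mass over all categories.
    total_belief = sum(user_beliefs.values())
    if total_belief == 0:
        return 0.0
    score = sum(w * user_beliefs.get(cat, 0.0)
                for cat, w in item_category_weights.items())
    return score / total_belief

# Example: an item that is 70% 'autos' and 30% 'travel', shown to a user with
# beliefs {'autos': 2.4, 'travel': 0.5, 'news': 0.4}.
ap = acceptance_probability({"autos": 0.7, "travel": 0.3},
                            {"autos": 2.4, "travel": 0.5, "news": 0.4})
```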
§ EXPERIMENTS
We conduct experiments in two general directions: system-centered and user-centered. The system-centered experiment mainly demonstrates the effectiveness of the BHEISR model, acting as an intermediate agency, in alleviating the system-level filter bubble. From the user's perspective, we conduct four user-centered experiments: detecting the positive effect of the BHEISR model on increasing user belief diversity, examining its effectiveness in motivating filter bubble-impacted users towards categories they are less interested in, analyzing its ability to reduce the number of filter bubble-impacted users, and, finally, a parameter analysis of how different BHEISR recommendation weights affect the stimulation of users' interest in new categories.
§.§ Experimental Setting
§.§.§ Dataset
In the experiments, we leverage two real-world datasets from two sources: the Microsoft News dataset (MIND) [https://msnews.github.io/] and IMDB dataset [https://www.kaggle.com/datasets/meastanmay/imdb-dataset?select=tmdb_5000_movies.csv/].
MIND Dataset is a large-scale and publicly available news recommendation dataset consisting of user interaction data collected from Microsoft News. The dataset contains 50,000 samples, and a smaller, lighter version includes 5,000 users with 230,117 user behaviors. Each behavior captures a user's reading habits and includes information such as user ID, timestamp, category, subcategory, title, and click behavior. In the MIND dataset, click behavior acts as a binary indicator of user interest, where a value of 1 signifies that the user reads the article, while a value of 0 indicates dislike.
IMDB Dataset includes movie rating records that reflect users' past behaviors. The dataset provides movie-related information, including movie ID, genres, titles, and overviews, along with user ratings for movies and corresponding user IDs. In our experiments, the degree of a movie's rating is considered a proxy for user interest, with ratings ranging from 0 to 5. A rating higher than 2.5 indicates user interest in the movie, whereas a rating below that suggests a lack of interest.
Datasets Statistics. Table <ref> provides an overview of the sizes and properties of the two datasets, IMDB and MIND. It shows statistics on the number of users, user behaviors, categories, and filter bubble-affected users. The category diversity is similar across the two datasets, with 16 and 17 categories each. The users who exhibited extreme belief imbalance are selected for the experiments. In the MIND dataset, we identified 180 such users, and from the IMDB dataset, we found 20 out of 300 individuals that met the criteria. The experiments focus on this select group of 200 users.
Evaluation of User Feedback. Given the inherent impracticality and high cost associated with online testing for researchers, we have designed a novel offline evaluation approach:
* Implement an 'Acceptance Probability Algorithm' to simulate user feedback.
* Generate recommendations using a 'Nudge Strategy' based on the simulated user feedback.
* Evaluate the recommendations by considering aspects such as diversity, and efficacy in mitigating filter bubbles. This includes employing metrics like Category Coverage and User Belief-related algorithms.
§.§ Parameter settings and baselines
Baselines: We assess the performance of BHEISR in comparison with several established baseline methods:
* Random (RD): This method suggests a selection of existing items randomly.
* Content-Based Filtering (CB): This strategy recommends existing items based solely on content-based filtering.
* User-Collaborative Filtering (UC): This approach recommends existing items using only user collaborative filtering.
* BHEISR: This method suggests a set of items (designated as GI) as recommendation feeds, derived using the BHEISR approach.
* Random with BHEISR (RD_wC): This model proposes a list of recommended items, comprising randomly selected existing items and items derived from BHEISR.
* CB with BHEISR (CB_wC): This method offers a list of recommended items that combine items selected through content-based filtering and items suggested by BHEISR.
* UC with BHEISR (UC_wC): This approach generates a list of recommended items, including items selected by user collaborative filtering and items recommended by BHEISR.
Evaluation Metrics:
We evaluate the BHEISR model from both system and user perspectives. From a system perspective, we use the coverage degree to judge the diversity of system recommendations. The coverage algorithm is described in Section 4.4.1 and formulated in Equation <ref>.
From the users' perspective, we conduct experiments in three aspects: detecting the diversity of users' belief networks, detecting users' interests in categories that users are not interested in, and the number of users affected by filter bubbles. We still use the Coverage algorithm to judge the diversity of user belief networks.
The user belief algorithm is described in Equation <ref> of Section 3.2. Equations <ref> and <ref>, together with the previously mentioned '68-95-99.7' rule in Section 4.1.2, are the metrics used to identify filter bubble-impacted users and to measure user belief degrees for different categories. In addition, we carry out a parameter analysis to detect changes in user beliefs under different nudge weights; this experiment also uses the user belief algorithm as its metric.
Parameters:
The proportion of BHEISR-generated items (w) is seen as a parameter in our experiments. We analyze the impact of different proportions w within a recommendation feed on user belief diversity and recommendation system diversity.
§.§ Experimental Results
§.§.§ Experiment 1: Coverage Analysis
The first experiment analyzes the impact of the proposed BHEISR on both the diversity of the baselines and the user belief network. We randomly select a filter bubble-affected user from both datasets as our experimental object in this case. 'U25354' is chosen from the MIND dataset, while 'U128' is selected from the IMDB dataset.
Coverage Analysis for systems. First of all, we examine the evolution of diversity in recommendation lists over time for seven models, each based on filter bubble-affected users from different domains. The results are demonstrated in Tables <ref> and <ref>. Here, "sum." signifies the total coverage degree of each model throughout the recommendation process, and "Improv." denotes the rate of growth in diversity.
Upon comparison of Table <ref> and Table <ref>, it can be observed that the performance of the same model across different users remains consistent. The strategies Random, UC, and CB exhibit relatively lower coverage rates, with their values demonstrating minor fluctuations over time. This pattern suggests that these strategies may represent preference-based recommendation models, which lack coverage breadth or variability advantages. In contrast, the strategies CB_wC and UC_wC show higher coverage rates, with a significant increase in these values over time. This observation points to the relative efficacy of these strategies in coverage, as well as their progressively expanding coverage range. While the strategy RD_wC exhibits the highest coverage rate among all strategies, its coverage values remain fairly consistent over time without notable increases or variations.
The values of Improv. reveal that all BHEISR-based recommendation systems present greater diversity in recommendations. The strategies RD_wC and UC_wC outperform the rest in terms of coverage, with UC_wC demonstrating the most substantial improvements over time. While RD_wC strategy displays a comparably higher coverage rate, it lacks the upward trend observed in the previously mentioned strategies. Furthermore, when considered as an independent recommendation model, BHEISR does not appear to be particularly effective at breaking the influence of the system filter bubble.
Coverage Analysis for User Belief Networks. After analyzing the effectiveness of the BHEISR approach in mitigating the filter bubble, we conduct another experiment to investigate its efficiency in enhancing the diversity of user beliefs across multiple domains. Figure <ref> demonstrates the dynamic changes in belief diversity for time-sensitive users' beliefs.
The two bar charts in Figure <ref>, though bearing potential discrepancies in details, attributable to variations within the datasets, generally align in their overarching trends.
In the comparison of RD and RD_wC, the RD_wC model exhibits superior performance. For the users in the MIND dataset, their openness to new information remains high and stable. Conversely, for the user in the IMDB dataset, the model's expressiveness exhibits a growing trend. As the number of recommendations escalates, users begin to slowly accept more new information. The results suggest that the combination of random selection and BHEISR strategies can help to increase the diversity within users' belief systems. On top of that, comparisons between UC and UC_wC, as well as CB and CB_wC, indicate that the UC_wC and CB_wC models surpass the UC and CB models, respectively. This indicates that the integration of the BHEISR strategy improves the performance of the UC and CB models.
Overall, the models incorporating the BHEISR strategy (RD_wC, UC_wC, CB_wC) outperform their counterparts lacking the BHEISR strategy (RD, UC, CB) in their respective comparisons. With an increasing number of recommendations, the BHEISR-based recommendation model fosters a broader diversity in user beliefs. This reveals a growing tendency among users to seek more diverse information over time.
§.§.§ Experiment 2: User Beliefs Analysis
The BHEISR model is designed to leverage existing recommendation systems to present information to users, with the objective of achieving a balance within the user's belief network. Given that shifts in users' belief states require long-term guidance, we seek to demonstrate that users tend to receive less preferred information via recommendations in a relatively short timeframe. In pursuit of this objective, we selected user 'U276' from the IMDB dataset, which presents a shorter recommendation path p_shorter.u (indicating closer feature distances between categories of interest and disinterest). This strategy allows us to observe changes in user beliefs within a short period. Furthermore, user 'U18469', exhibiting a longer recommendation path p_longer.u, is also included in this experiment to represent cases involving longer recommendation paths. This experiment employs the Collaborative Filtering (CF) algorithm to examine user belief degrees across various categories. Figure <ref> shows the temporal evolution of belief levels for information initially deemed uninteresting by the users, according to their belief networks.
For users with longer paths, "p_longer", we have adjusted the time scale in Figures <ref> and <ref> to better illustrate the trend in belief degree: a single time interval now represents 10 recommendation feeds. Consequently, we measure the users' level of interest in the "autos" and "video" categories once every 10 recommendations. In contrast, for users with shorter paths, "p_shorter", we capture belief changes following each recommendation. In these two figures, "autos" represents a category of shared interest for both users, "video" refers to a category that the MIND user is disinterested in, and "travel" indicates a category of disinterest for the IMDB user.
From both figures, we observe that the users' preference for "autos" either remains stable or displays a minor declining trend as time progresses. Conversely, for the categories that the users initially showed no interest in, continuous recommendations have sparked their interest. Particularly for the IMDB user, a significant increase in interest towards the "travel" category is evident. This observation robustly validates the efficacy of the BHEISR-based model in fostering belief harmony among users.
§.§.§ Experiment 3: Filter Bubble Users Detection
In this experiment, we focus on all MIND and IMDB users influenced by the filter bubble effect, serving as our experimental subjects. The purpose of this experiment is to track the temporal variations in the number of filter bubble-impacted users under different recommendation models, all of which are based on the CF algorithm.
Table <ref> describes the number of users affected by the filter bubble under various recommendation systems. From a horizontal perspective, recommendation models based on the BHEISR strategy outperform the original models, affecting fewer users with the filter bubble. This underlines the effectiveness of BHEISR in mitigating the filter bubble effect and establishing balance in user beliefs. Moreover, except for the minor changes observed in the CB-related model, there is a notable decrease in the number of users affected by the filter bubble across other models. The table further reveals that, when comparing the performance of the BHEISR model as a recommendation model to its use as an intermediate agency, the latter demonstrates superior capability in alleviating filter bubbles, particularly in datasets with substantial data volume such as the MIND dataset.
From a vertical perspective, the number of users affected by the filter bubble fluctuates over time. For instance, in the IMDB dataset under the UC_wC model, and in the MIND dataset under the RD_wC model, there are evident fluctuations in the count of filter bubble-impacted users. In the IMDB dataset, the number of such users initially starts at 4, increases to 6, and then reduces to 5. Similarly, in the MIND dataset, the count of filter bubble users exhibits variable changes, from 28 to 24, then from 24 to 26, next from 26 to 24, and finally from 24 to 26. These observations strongly suggest that user belief is influenced by the nudging technique and the rescheduling path approach integrated within the BHEISR model. The BHEISR model persistently adjusts the recommendation path based on user feedback, thereby influencing user behavior.
§.§.§ Experiment 4: Parameter Analysis
This experiment aims to investigate the impact of varying weights w of BHEISR-generated items GI within a recommendation feed on user belief diversity. The specific focus is on analyzing how the diversity of perspectives among users is influenced by changing the weight of BHEISR-generated items. By manipulating the w parameter, we can explore the effect of different degrees of intervention from the BHEISR model on the recommendation results. In this experiment, we utilize user 'U1629' from the MIND dataset as a representative subject. Additionally, we select UC_wBHEISR as the experimental model for this experiment.
As can be seen from Figure <ref>, the weight assigned to BHEISR recommendations significantly influences user belief diversity. It becomes clear that as the number of recommendations increases over time, the impact of the recommendation weight grows more pronounced. Notably, when the weight is set at 0.4, users show a higher probability of accepting the recommendations, and this influence remains steady. However, after a certain number of recommendations, the impact of BHEISR-recommended information on users reaches its peak. The larger the fraction of items recommended by BHEISR in the total recommendation list, the more pronounced the impact on users in the later stages of recommendations.
In conclusion, the analysis indicates that the weight of BHEISR recommendations considerably affects user belief diversity. The influence of this weight amplifies as the number of recommendations increases. However, there is a saturation point at which the impact of BHEISR-recommended information on users reaches its maximum. In the experiment, we select w=0.6 as the BHEISR recommendation weight owing to its consistently improving performance.
§ CONCLUSION AND FUTURE WORK
In this research work, we focus on addressing the issue of filter bubbles in recommendation systems and propose the BHEISR model as a solution, with the objective of mitigating the negative effects of filter bubbles and promoting belief harmony among users.
The BHEISR model functions as an intermediary between existing recommendation systems and users, aiming to facilitate democratic and transparent recommendations. It incorporates several key features. First, it utilizes the Filter Bubbles Detection Model based on Multi-faceted Reasoning for filter bubble identification. Second, it applies nudging techniques to incrementally broaden users' interests and balance their beliefs. Finally, it incorporates a user feedback loop based on GAI to capture evolving user beliefs over time and increase recommendation diversity.
The experimental results explicitly reveal the effectiveness of the BHEISR model in mitigating filter bubbles and balancing user beliefs. Real-world datasets and nearly 200 filter bubble-affected users were used to validate the model's performance.
As for future research work, we plan to explore additional techniques to augment the model's performance further, undertake user studies to assess the long-term impacts, and investigate the influence of the BHEISR model on critical thinking abilities and inclusivity. Persistent refinement and evaluation of the model could lead to improved recommendation systems that cater more effectively to the diverse needs of users.
§ ACKNOWLEDGMENTS
The authors would like to acknowledge the financial support from Callaghan Innovation (CSITR1901, 2021), New Zealand, without which this research would not have been possible. We are grateful for their contributions to the advancement of science and technology in New Zealand. The authors would also like to thank CAITO.ai for their invaluable partnership and their contributions to the project.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.02325v1
|
20230705143021
|
Formally Verifying a Real World Smart Contract
|
[
"Alexandre Mota",
"Fei Yang",
"Cristiano Teixeira"
] |
cs.SE
|
[
"cs.SE"
] |
1,2]Alexandre Mota
2]Fei Yang
2]Cristiano Teixeira
[1]acm@cin.ufpe.br
Centro de Informática-UFPE, Av. Jornalista Aníbal Fernandes, s/n, Cidade Universitária, Zip 50.740-560, Brazil
[2]{alexandre, fei, cristiano}@lindylabs.net
Lindy Labs,
https://www.lindylabs.net/, Portugal
Formally Verifying a Real World Smart Contract
August 1, 2023
==============================================
Nowadays, smart contracts have become increasingly popular and, as with software development in general, testing is the standard method for verifying their correctness. However, smart contracts require a higher level of certainty regarding correctness because they are difficult to modify once deployed and errors can result in significant financial losses. Therefore, formal verification is essential. In this article, we present our search for a tool capable of formally verifying a real-world smart contract written in a recent version of Solidity.
Keywords. Blockchains, Ethereum, Smart Contracts, Solidity, SMTChecker, Verismart, Certora
§ INTRODUCTION
Nowadays, smart contracts have been attracting significant attention. While testing is the de facto standard for verifying the correctness of smart contracts in software development, smart contracts require a more robust means of ensuring correctness because they are difficult to modify once deployed, and any flaws could result in substantial financial losses. Therefore, formal verification is necessary. This article showcases the endeavor to identify a tool capable of formally verifying a real-world smart contract written in Solidity.
Until the end of March 2022, Sandclock was verified mainly by Slither <cit.> and Echidna <cit.>, besides independent auditors, a common practice nowadays in the smart contracts community. Slither is a static analysis solution that reports useful tips to improve a smart contract, but it also reports several false alarms. Echidna performs fuzz testing and can therefore take a long time to run; like any testing technique, it is very useful when a bug is found, since such a finding is certain, but much less so when no bug is found. In that case, testers cannot affirm that the system is free of bugs.
Thus, in April 2022, we started a long journey to find tools (or at least a single one) able to formally verify Sandclock. We already had a catalogue of expected properties checked by Echidna, but a more trustworthy status was required.
In September 2022, we finally arrived at a conclusion regarding the formal verification of Sandclock. This article details our journey, which included identifying limitations with certain tools and experiencing disappointment with others. We also discuss potential enhancements for some of the tools, and ultimately select the single tool capable of proving the desired properties for Sandclock.
The main contributions of this article are:
* Investigating which testing and formal tools can deal with a real world smart contract written in Solidity 0.8.10;
* Discussing flaws in formal tools;
* Showing that Certora is the only formal tool able to formally verify Sandclock, our real-world system.
The structure of this work is as follows: Section <ref> provides a brief introduction to the Solidity language, utilizing an excerpt of our real-world smart contract (Sandclock) as an example. Sandclock is then presented in more detail in Section <ref>. In Section <ref>, we discuss and present the various tools we attempted to use on Sandclock, finally finding the Certora prover as the best candidate. Section <ref> introduces the Certora prover and its application to Sandclock in greater detail. Finally, in Section <ref>, we summarize our conclusions and suggest potential future work.
§ SOLIDITY
Solidity <cit.> is an object-oriented programming language designed for writing smart contracts executing on the Ethereum Virtual Machine[https://ethereum.org/en/developers/docs/evm/], which is widely supported by various blockchain platforms. The concept for Solidity was introduced by Ethereum co-founder Gavin Wood in 2014, and was further developed by Christian Reitwiessner, Alex Beregszaszi, and other Ethereum contributors. The initial release, version 0.1.2, was made available in August 2015, and since then, Solidity has been under continuous development with sponsorship from the Ethereum Foundation. The latest version of Solidity is 0.8.18, and a community of collaborators is involved in contributing to the language's evolution by adding new features, building systems, improving documentation, and addressing issues on GitHub. Solidity is influenced by popular programming languages such as C++, Python, and JavaScript, and is a statically typed, curly-braced, contract-oriented, high-level programming language. It includes common features such as inheritance, libraries for reusable code, and complex custom types. Listing <ref> displays an excerpt of Solidity code from our real-world smart contract[The complete system can be found here: https://github.com/lindy-labs/sc_solidity-contracts].
|
http://arxiv.org/abs/2307.02965v1
|
20230706130500
|
Spatio-Spectral Vector Beams
|
[
"Lea Kopf",
"Rafael Barros",
"Robert Fickler"
] |
physics.optics
|
[
"physics.optics",
"quant-ph"
] |
Increasing the complexity of a light field through the advanced manipulation of its degrees of freedom (DoF) provides new opportunities for fundamental studies and technologies.
Correlating polarization with the light's spatial or spectral shape results in so-called spatial or spectral vector beams that are fully polarized and have a spatially or spectrally varying polarization structure.
Here, we extend the general idea of vector beams by combining both approaches and structuring a novel state of light in three non-separable DoF’s, i.e. space, wavelength, and polarization.
We study in detail their complex polarization structure, show that the degree of polarization of the field is only unveiled when the field is narrowly defined in space and wavelength, and demonstrate the analogy to the loss of coherence in non-separable quantum systems.
Such light fields allow fundamental studies on the non-separable nature of a classical light field and new technological opportunities, e.g. through applications in imaging or spectroscopy.
Spatio-Spectral Vector Beams
Lea Kopf,^1,* Rafael Barros,^1 and Robert Fickler^1
^1 Tampere University, Photonics Laboratory, Physics Unit, Tampere, FI-33720, Finland
==============================================================================================================================================
§ INTRODUCTION
Increasing the complexity of a light field and the control of different degrees of freedom (DoF's) is beneficial for advancing research and technology.
Increasingly complex structures realized by combining several DoF's have been studied in a myriad of experiments over the last decades and the enhanced understanding of the interplay of the DoF's has already enabled novel photonic technologies <cit.>.
Initially, many experiments have studied structuring transverse light fields as well as shaping the temporal profile of pulses, both in its scalar forms, i.e. with a uniform polarization structure <cit.>.
The complexity of the light field's structure was further increased by including the polarization domain leading to beams with spatially non-uniform polarization distributions, i.e. spatial vector beams, as well as a temporally varying polarization vector across the pulse duration.
Over the last years, this approach has been extended to combine all DoF's, e.g. the study of advanced spatio-temporal pulses of vectorial light fields <cit.>.
Interestingly, the focus in most of the research efforts has been to generate complex polarization and spatial patterns over the temporal domain of light with much less attention to studying structured light fields in the time's complementary DoF, i.e. the spectral domain.
Correlating polarization with the wavelength of a light pulse, for example, can be used for advanced sensing and pulse characterization schemes <cit.>.
Here, we extend the idea of spatial and spectral vector beams by combining all three DoF's, namely polarization, space, and wavelength of the light field.
We term such light pulses spatio-spectral vector beams (SSVB's), which are light fields with a varying polarization structure in space as well as wavelength as shown in Fig. <ref> a).
Using a simple setup consisting of only three optical elements placed along a single beam line, we are able to generate highly complex pulses of light for which, at any given wavelength (or transverse angular position), the light field shows a different spatial (or spectral) polarization pattern.
We further show that all three DoF's are non-separable and the complex vectorial nature of the light field can only be observed when all DoF's are resolved.
When integrated over the light's transverse spatial extent or wavelength spectrum (or both), the beam is seemingly unpolarized.
Finally, we detail the analogy of such complex structured light fields to the Greenberger–Horne–Zeilinger (GHZ) state, which describes three entangled particles in quantum optics.
Here, the DoF's of the classical light field act as the different quantum states in the GHZ description.
We expect that the complexity of SSVB's combined with the ease of our generation technique will trigger novel studies on structured light fields and advance their applications in novel spectroscopy, imaging, or sensing technologies.
§ CONCEPT
We consider a SSVB of the form
E⃗(r⃗,λ)= √(I_0/2)[S_1(r⃗) F_1(λ) ϵ̂_1+S_2(r⃗) F_2(λ) ϵ̂_2] ,
where r⃗ is the position vector, λ is the wavelength, I_0 is the total field intensity, S_1,2 and F_1,2 are normalized spatial and spectral basis functions, respectively, and ϵ̂_1,2 are unit polarization vectors.
We assume orthogonal polarization vectors with ϵ̂_i·ϵ̂_j=δ_ij, but allow for the spatial and spectral functions to have non-vanishing overlap.
Thus, the function can describe a light field with non-separable polarization, space, and wavelength for zero overlap.
To visualize and measure the degree of tripartite non-separability of field (<ref>), we consider the degree of polarization (DoP) resulting from intensity measurements integrated over the regions Ω_S and Ω_F in space and wavelength, respectively (see appendix <ref> for additional information).
The DoP is given by
D=√((Λ^S_11Λ^F_11-Λ^S_22Λ^F_22)^2+4|Λ^S_12|^2|Λ^F_12|^2/(Λ^S_11Λ^F_11+Λ^S_22Λ^F_22)^2) ,
where Λ^S_ij=⟨ S_i,S_j⟩_Ω_S and Λ^F_ij=⟨ F_i,F_j⟩_Ω_F (j=1,2) are the inner products between the spatial and spectral basis functions over the measured regions.
Note that when integrating over the complete spatial and spectral DoF's (Ω_S=ℝ^2 and Ω_F=ℝ), Eq. (<ref>) yields D=|Λ^S_12Λ^F_12|.
Thus, when either the spatial or spectral basis functions are orthogonal, the measured light field is seemingly unpolarized.
The same argument holds for space or wavelength-only measurements, which yield reduced degrees of spatial and spectral coherence, respectively.
Only joint measurements of the three DoF's unveil the full coherence of the input field, which is a signature of tripartite non-separability.
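As an illustrative numerical check of Eq. (<ref>), the following sketch builds a field of the form (<ref>) on a discretized (x, y, λ) grid and evaluates the DoP for a narrow and for a fully integrated measurement region; the spatial envelope, spectral width, and grid parameters are illustrative choices rather than the experimental settings.

```python
import numpy as np

# Discretised grid, a ring-like spatial envelope, and a Gaussian spectrum.
x = y = np.linspace(-1.5, 1.5, 121)
lam = np.linspace(770e-9, 790e-9, 201)
X, Y, L = np.meshgrid(x, y, lam, indexing="ij")
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

S0 = R * np.exp(-R**2)                                  # spatial envelope
F0 = np.exp(-((L - 780e-9) / 4e-9) ** 2)                # spectral envelope
tau, c = 220e-15, 3e8                                   # delay and speed of light
a = S0 * np.exp(1j * PHI) * F0 * np.exp(1j * np.pi * tau * c / L)    # eps_1 component
b = S0 * np.exp(-1j * PHI) * F0 * np.exp(-1j * np.pi * tau * c / L)  # eps_2 component

def dop(mask):
    """DoP of the field restricted to the boolean region `mask` (Eq. (2))."""
    J11 = (np.abs(a[mask]) ** 2).sum()
    J22 = (np.abs(b[mask]) ** 2).sum()
    J12 = (a[mask] * np.conj(b[mask])).sum()
    return np.sqrt((J11 - J22) ** 2 + 4 * np.abs(J12) ** 2) / (J11 + J22)

# Narrow angular/spectral window: DoP close to 1. Full integration: DoP near 0,
# since the spatial basis functions are orthogonal (|Lambda^S_12| ~ 0).
narrow = (np.abs(PHI - 0.5) < 0.05) & (np.abs(L - 780e-9) < 0.5e-9)
print(dop(narrow), dop(np.ones_like(R, dtype=bool)))
```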
The apparent decoherence of the field (<ref>) upon partial tracing of one DoF is a byproduct of its mathematical similarity to the GHZ states <cit.>.
Classically such a tripartite non-separable behavior can also be explained in an intuitive way.
At any point in space where both spatial basis functions are non-vanishing, the polarization is spectrally non-uniform, i.e. a spectral vector beam <cit.>.
Similarly, at any wavelength present in both spectral basis functions, the polarization is spatially non-uniform, i.e. a spatial vector beam <cit.>.
Lastly, measuring the linear polarization state of the light field leads to a strong correlation between spatial and spectral structures.
In other words, specifying one DoF can project the remaining two in a bipartite non-separable field, which we will explore in more detail in section <ref>.
We note that a related behavior has been observed using multiple beams, each with a spatially varying polarization <cit.>.
§ EXPERIMENTAL REALIZATION
We create the light field (<ref>) with the process depicted in Fig. <ref>.
First, Fourier-limited laser pulses with a duration of τ=220 fs centered at a wavelength of 780 nm are linearly polarized.
The polarization is chosen to be diagonal with respect to the fast and slow axes of a 2 mm long birefringent BaB_2O_4 (BBO) crystal with the optical axis oriented at 23.4^∘ from the propagation direction.
The propagation through the birefringent crystal coherently splits each pulse into two trailing pulses with half the intensity, linearly polarized along the crystal's fast (f) and slow (s) axes, producing a spectral vector beam <cit.>. The field at this stage is
E⃗(r⃗,λ)= √(I_0/2)S_0(r⃗)F_0(λ)[e^iπ c δ t/λê_f+ e^-iπ c δ t/λê_s] ,
where S_0(r⃗) is a Gaussian transverse spatial profile, F_0(λ) is the spectral function, c is the speed of light, and δ t is the birefringent time delay.
A subsequent quarter-wave plate oriented at 45^∘ with respect to the fast axis of the crystal transforms the field's polarization components ê_f and ê_s to left and right circular polarizations, i.e. ê_R and ê_L respectively.
We set the time delay to δ t≈τ by choosing the appropriate crystal length and fine-tuning the crystal rotation, which ensures that the resulting linear polarization state in the wavelength domain rotates approximately once within one spectral bandwidth.
This beam is non-separable in wavelength and polarization, but still has only a single transverse structure, i.e. a Gaussian mode.
Next, we correlate the spatial DoF with polarization using an m=1 zero-order vortex half-wave retarder.
The vortex retarder has a constant half-wave retardance across the clear aperture with the fast axis oriented along θ=ϕ/2, where ϕ is the azimuthal angle.
Upon transmission, the left and right circular polarization components change their handedness and acquire an azimuthally varying phase from 0 to 2π <cit.> resulting in
E⃗(r⃗,λ)= √(I_0/2)S_0(r⃗)F_0(λ)[e^iϕe^iπτ c/λê_L+ e^-iϕe^-iπτ c/λê_R] .
The field (<ref>) is exactly of the intended form of the SSVB (<ref>), provided that we identify S_1,2(r⃗)=S_0(r⃗)e^± iϕ and F_1,2(λ)=F_0(λ)e^± iπτ c/λ.
Note that while the spatial basis functions are mutually orthogonal (⟨ S_1,S_2⟩_ℝ^2=0), the overlap between the spectral basis functions is non-vanishing (⟨ F_1,F_2⟩_ℝ∼ 1/e^2).
The spectral overlap limits the quality of our SSVB, and is the cost for the simple method to produce it.
We note that the aforementioned limitation can be overcome by more advanced pulse shaping techniques to generate orthogonal temporal or spectral modes <cit.>, however, they lead to the same arguments as presented here.
§ RESULTS
§.§ Characterization
To characterize the SSVB (<ref>), we perform both individual and joint measurements of its polarization, wavelength, and spatial profile.
In the first set of measurements, we use the experimental setup depicted in Fig. <ref> a).
We send the SSVB through a spatial mask comprising two diametrically opposite 0.2 mm wide apertures centered at the azimuthal angles ϕ_p and ϕ_p+π, which roughly corresponds to two 17^∘ wide slits and projects the field onto approximately E⃗(ϕ_p,λ).
We characterize the resulting field with spectrally-resolved polarization tomography, where the Stokes parameters are measured with a spectrometer, as shown for ϕ_p=80^∘ in Fig. <ref> b) with the corresponding wavelength-dependent polarization ellipses in c).
In Fig. <ref> d), we show different rotation angles of the spatial mask, which display a linear shift of the spectral-polarization structure.
Note that the measured polarization states shown in Fig. <ref> c-d) are not strictly linear, as expected from (<ref>), but consist of elongated ellipses.
This results from the non-zero overlap of the spectral basis functions in our scheme and experimental imperfections such as the finite area of the spatial slits.
In the second set of measurements, we project the SSVB onto wavelength instead of space, using the setup depicted in Fig. <ref> a).
A monochromator consisting of a diffraction grating, a mirror, and a tunable rectangular slit selects a central wavelength λ_p, which projects the field onto approximately E⃗(r⃗,λ_p).
The projection resolution is fixed by the minimum bandwidth of around 2.1 nm of the monochromator.
We characterize the spectrally filtered field through spatially-resolved polarization tomography, in which the Stokes parameters are measured with a CMOS camera.
In Fig. <ref> b), we show the polarization-resolved images obtained at λ_p=785.5nm with the corresponding spatial vector beam in c).
The spatial vector beams retrieved for different wavelengths are shown in Fig. <ref> d) and, to help visualize the wavelength-dependence of the retrieved field structures, the related images obtained at a fixed horizontal polarization are additionally shown in e).
The polarized images show a two-lobe structure that rotates across the spectrum, as expected from Eq. (<ref>).
We further study how partially resolving the spatial and spectral domains can be used to control the DoP of the laser beam.
To this end, we perform polarization measurements for different spatial and spectral bandwidths.
We repeat the measurements shown in Fig. <ref> a) with different spectral resolutions at λ_p=779.75 nm by changing the width of the rectangular slit in the monochromator.
Then, a computational spatial mask with varying angular slit widths is added to the images in post-processing, simulating the physical mask used in Fig. <ref> a).
From the measured Stokes parameters for each setting, we retrieve the DoP shown in Fig. <ref>.
By increasing the bandwidth of the wavelength filter or the slit width of the spatial mask, thereby averaging over the two DoF's, we expect from Eq.(<ref>) that the beam's DoP decreases.
The initial DoP of 0.95, measured with a wavelength filter bandwidth of 2.1 nm and a spatial slit mask 0.16 rad wide, drops down to 0.04 when both wavelength and spatial filters are completely removed, showing excellent agreement with the simulations (see appendix <ref>).
Without resolving all DoF's, we thus have a seemingly unpolarized light field.
Moreover, note that the minimum DoP obtained for spatially-resolved measurements is non-negligible (∼ 0.2), which agrees with the estimated overlap between the spectral basis functions of 1/e^2.
§.§ Analogy with quantum mechanics
As has been already recognized, classical non-separable light fields share a mathematical similarity with entangled quantum states <cit.>.
In contrast to manifold discussions of two non-separable DoF's of light and the analogy to Bell states <cit.>, SSVB's are mathematically similar to the description of three-particle GHZ states.
The GHZ state has the form
|Ψ⟩=1/√(2)(|000⟩+|111⟩) ,
where |0⟩ and |1⟩ denote the two possible states of each quantum system expressed in terms of qubits in the computational or z-basis.
The mutually unbiased bases to z are the x- and y-bases, defined in the Dirac notation as |± x⟩=(|0⟩±|1⟩)/√(2) and |± y⟩=(|0⟩± i|1⟩)/√(2), respectively.
To demonstrate the non-separability of the SSVB, we closely follow the GHZ argument <cit.>, which focuses on the simultaneous measurements of all three qubits in the x-basis (xxx measurement).
In the original GHZ argument, the two possible measurement outcomes in each basis are ascribed with the values +1 and -1.
Together with the assumption of a local realistic theory, it is possible to show that classical correlations require the product of the measurement outcomes in the x-basis to be -1, while for the quantum GHZ correlations the outcome will be +1.
By adapting these ideas to SSVB's and showing a similar behavior, we can use the GHZ argumentation to demonstrate the non-separability of the classical light field.
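For completeness, the following rewriting (standard textbook algebra, not part of the original argument) makes the two predictions explicit. Substituting |0⟩=(|+x⟩+|-x⟩)/√(2) and |1⟩=(|+x⟩-|-x⟩)/√(2) into the GHZ state gives
|Ψ⟩ = 1/2( |+x,+x,+x⟩+|+x,-x,-x⟩+|-x,+x,-x⟩+|-x,-x,+x⟩ ) ,
so every possible xxx outcome contains an even number of -x results and the product of the three x values is always +1. A local realistic model, in contrast, must also reproduce the xyy, yxy, and yyx correlations (each with product -1), and multiplying these three constraints forces the xxx product to be -1, since every y value appears squared.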
To assign the notation (<ref>) to the SSVB, we choose the mutually unbiased bases for the three DoF's as follows.
For the spatial DoF, the z basis is formed by the spatial functions S_1,2(r⃗)=S_0(r⃗)e^± i ϕ, which contains counter-rotating spiral wavefronts carrying opposite orbital angular momenta <cit.>, and indistinguishable intensity profiles.
On the other hand, the x (y) basis modes have two-lobe structures oriented along 0^∘ and 90^∘ (45^∘ and -45^∘) <cit.>, and can be distinguished by narrow slits oriented along these directions <cit.>.
We implement these slits by using the spatial mask already presented in the measurements of Fig. <ref>.
In the wavelength domain, the z basis is formed by the spectral functions F_1,2(λ)=F_0(λ)e^± iπτ c/λ, which cannot be distinguished by measurements of spectral amplitude.
On the other hand, the modes composing the x basis are F_1x∝ F_0(λ)cos(πτ c/λ) and F_2x∝ F_0(λ)cos(πτ c/λ-π/2).
Similarly, the y basis comprises F_1y∝ F_0(λ)cos(πτ c/λ-π/4) and F_2y∝ F_0(λ)cos(πτ c/λ-3π/4).
In analogy to the spatial domain, we measure in the x and y bases by choosing four wavelengths spaced by 1/τ at which the intensity difference between the horizontal/vertical or diagonal/anti-diagonal polarization components are maximized.
In polarization, we define the z basis vectors as the left and right circularly polarized components of the light field.
Consequently, the x basis corresponds to horizontal/vertical linear polarizations, and the y basis to diagonal/anti-diagonal linear polarizations.
A graphical overview of the chosen bases is given in appendix <ref>.
The first entry of the state vector of the SSVB is assigned to the space domain, the second entry to wavelength, and the third entry to polarization.
We repeat the measurements presented in Fig. <ref> while choosing suitable projections onto polarization and space, and projecting onto wavelength in post-processing.
Fig. <ref> shows the measurement results for all terms of the normalized state (<ref>) written in the x and y bases.
More information on the normalization procedure can be found in appendix <ref>.
Clearly, the xxx measurements show that the product of the three outcomes mostly yields +1, with a small contribution of -1 due to experimental limitations and imperfections.
We regard this result as an interesting way to show the non-separability between polarization, spatial profile and wavelength by exploring the mathematical isomorphism with the GHZ state.
§ DISCUSSION
We present a spatio-spectral vector beam - a light field with a complex polarization structure in space as well as wavelength.
Our measurements show that the SSVB is well defined in its three DoF's if, and only if, they are measured jointly, while individual measurements result in apparent incoherence, i.e. a reduced degree of polarization, similar to the loss of coherence in non-separable quantum systems.
In the future, it will be interesting to extend the concept further to more complex spatio-spectral patterns, e.g. by including spatial Poincaré beams <cit.> or studying SSVB's in the context of optical skyrmions <cit.>.
Not only do SSVB's contribute to a better understanding of the complex interplay between different DoF's, they might also enable applications benefiting from the complex correlations and simple generation.
In contrast to earlier work, where spatial and spectral vector beams have been used for high-speed tracking of changes in space <cit.> and spectrum <cit.>, the presented light fields could allow a fruitful combination of the two leading to novel schemes for advanced sensing methods in all optical domains simultaneously.
In addition, our results might be applied in quantum optics experiments when complex hyper-entangled states are utilized <cit.>.
Finally, it will be interesting to study the possible benefits of SSVB's in nonlinear light-matter interactions of structured light <cit.>.
[1] H. Rubinsztein-Dunlop et al., "Roadmap on structured light," Journal of Optics 19, 013001 (2016).
[2] A. Forbes, M. de Oliveira, and M. R. Dennis, "Structured light," Nature Photonics 15, 253 (2021).
[3] M. Piccardo et al., "Roadmap on multimode light shaping," Journal of Optics 24, 013001 (2021).
[4] C. He, Y. Shen, and A. Forbes, "Towards higher-dimensional structured light," Light: Science & Applications 11, 205 (2022).
[5] Y. Shen et al., "Roadmap on spatiotemporal light fields," arXiv preprint arXiv:2210.11273 (2022).
[6] Z. Wan, H. Wang, Q. Liu, X. Fu, and Y. Shen, "Ultra-degree-of-freedom structured light for ultracapacity information carriers," ACS Photonics (2023).
[7] A. M. Weiner, "Femtosecond pulse shaping using spatial light modulators," Review of Scientific Instruments 71, 1929 (2000).
[8] F. M. Dickey, Laser Beam Shaping: Theory and Techniques (CRC Press, 2018).
[9] L. Chen, W. Zhu, P. Huo, J. Song, H. J. Lezec, T. Xu, and A. Agrawal, "Synthesizing ultrafast optical pulses with arbitrary spatiotemporal control," Science Advances 8, eabq8314 (2022).
[10] K. Sano, K. Okada, and K. Hashimoto, "Simple optical wavelength meter in 700-1200 nm wavelength region," Electronics Letters 24, 912 (1980).
[11] J. C. del Toro Iniesta, Introduction to Spectropolarimetry (Cambridge University Press, 2003).
[12] D. Aspnes, "Spectroscopic ellipsometry—past, present, and future," Thin Solid Films 571, 334 (2014).
[13] L. Kopf, J. R. D. Ruano, M. Hiekkamäki, T. Stolt, M. J. Huttunen, F. Bouchard, and R. Fickler, "Spectral vector beams for high-speed spectroscopic measurements," Optica 8, 930 (2021).
[14] S. W. Jolly, O. Gobert, and F. Quéré, "Spatio-spectral characterization of ultrashort laser pulses with a birefringent delay line," OSA Continuum 4, 2044 (2021).
[15] G. Milione, H. I. Sztul, D. A. Nolan, and R. R. Alfano, "Higher-order Poincaré sphere, Stokes parameters, and the angular momentum of light," Physical Review Letters 107, 053601 (2011).
[16] D. Bouwmeester, J.-W. Pan, M. Daniell, H. Weinfurter, and A. Zeilinger, "Observation of three-photon Greenberger-Horne-Zeilinger entanglement," Physical Review Letters 82, 1345 (1999).
[17] J. W. Pan, D. Bouwmeester, M. Daniell, H. Weinfurter, and A. Zeilinger, "Experimental test of quantum nonlocality in three-photon Greenberger-Horne-Zeilinger entanglement," Nature 403, 515 (2000).
[18] D. M. Greenberger, M. A. Horne, and A. Zeilinger, "Going beyond Bell's theorem," in Bell's Theorem, Quantum Theory and Conceptions of the Universe, edited by M. Kafatos (Springer Netherlands, Dordrecht, 1989), pp. 69-72.
[19] Q. Zhan, "Cylindrical vector beams: from mathematical concepts to applications," Adv. Opt. Photon. 1, 1 (2009).
[20] W. F. Balthazar, C. E. R. Souza, D. P. Caetano, E. F. Galvão, J. A. O. Huguenin, and A. Z. Khoury, "Tripartite nonseparability in classical optics," Opt. Lett. 41, 5797 (2016).
[21] G. Biener, A. Niv, V. Kleiner, and E. Hasman, "Formation of helical beams by use of Pancharatnam–Berry phase optical elements," Opt. Lett. 27, 1875 (2002).
[22] A. Monmayrant, S. Weber, and B. Chatel, "A newcomer's guide to ultrashort pulse shaping and characterization," Journal of Physics B: Atomic, Molecular and Optical Physics 43, 103001 (2010).
[23] R. J. C. Spreeuw, "A classical analogy of entanglement," Foundations of Physics 28, 361 (1998).
[24] E. Karimi and R. W. Boyd, "Classical entanglement?," Science 350, 1172 (2015).
[25] F. Töppel, A. Aiello, C. Marquardt, E. Giacobino, and G. Leuchs, "Classical entanglement in polarization metrology," New Journal of Physics 16, 073019 (2014).
[26] L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, "Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes," Phys. Rev. A 45, 8185 (1992).
[27] M. J. Padgett and J. Courtial, "Poincaré-sphere equivalent for light beams containing orbital angular momentum," Optics Letters 24, 430 (1999).
[28] R. Fickler, R. Lapkiewicz, W. N. Plick, M. Krenn, C. Schaeff, S. Ramelow, and A. Zeilinger, "Quantum entanglement of high angular momenta," Science 338, 640 (2012).
[29] W. N. Plick, R. Fickler, R. Lapkiewicz, and S. Ramelow, "Violation of an extended Wigner inequality with high-angular-momentum states," Phys. Rev. A 91, 022124 (2015).
[30] A. M. Beckley, T. G. Brown, and M. A. Alonso, "Full Poincaré beams," Opt. Express 18, 10777 (2010).
[31] Y. Shen, E. C. Martínez, and C. Rosales-Guzmán, "Generation of optical skyrmions with tunable topological textures," ACS Photonics 9, 296 (2022).
[32] S. Berg-Johansen, F. Töppel, B. Stiller, P. Banzer, M. Ornigotti, E. Giacobino, G. Leuchs, A. Aiello, and C. Marquardt, "Classically entangled optical beams for high-speed kinematic sensing," Optica 2, 864 (2015).
[33] J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat, "Generation of hyperentangled photon pairs," Phys. Rev. Lett. 95, 260501 (2005).
[34] L. Achatz et al., "Simultaneous transmission of hyper-entanglement in three degrees of freedom through a multicore fiber," npj Quantum Information 9, 45 (2023).
[35] W. T. Buono and A. Forbes, "Nonlinear optics with structured light," Opto-Electronic Advances 5, 210174 (2022).
[36] K. H. Kagalwala, H. E. Kondakci, A. F. Abouraddy, and B. E. A. Saleh, "Optical coherency matrix tomography," Scientific Reports 5, 15333 (2015).
§ ACKNOWLEDGEMENTS
All authors acknowledge the support of the Academy of Finland through the Competitive Funding to Strengthen University Research Profiles (decision 301820) and the Photonics Research and Innovation Flagship (PREIN - decision 320165).
LK acknowledges the support of the Vilho, Yrjö and Kalle Väisälä Foundation of the Finnish Academy of Science and Letters through its graduate student scholarship.
RB acknowledges the support of the Academy of Finland through the postdoctoral researcher funding (decision 349120).
RF acknowledges the support of the Academy of Finland through the Academy Research Fellowship (decision 332399).
§ SIMULATION OF THE DEGREE OF POLARIZATION
The degree of polarization (DoP) of a field E⃗(r⃗,λ) is defined as <cit.>
D=1/⟨ I_0 ⟩√(S_H^2+S_D^2+S_R^2)
where ⟨ I_0 ⟩ is the total average intensity, and S_j=2⟨ I_j⟩-⟨ I_0 ⟩ (j=H,D,R) are the so-called Stokes parameters. The indices H, D, and R denote polarization projections along the horizontal, diagonal, and right-circular polarizations, without loss of generality.
The average intensities ⟨ I_j⟩ are given by
⟨ I_j ⟩=∫_Ω_S d^2r⃗∫_Ω_F dλ |E⃗(r⃗,λ)·ϵ̂_̂ĵ|^2
where Ω_S and Ω_F are the integration regions in the spatial and spectral domains, respectively. Using the field expressed in Eq. (<ref>) of the main text, we obtain
⟨ I_j ⟩ = I_0/2∑_i=1,2Λ^S_iiΛ^F_ii |Λ^P_ij|^2 +I_0Re( Λ^S_12Λ^F_12Λ^P_1jΛ^P*_2j)
where Λ^P_ij=ϵ̂_i·ϵ̂_j^ *, and where
Λ^S_ik = ∫_Ω_S d^2 r⃗ S_i(r⃗)S_k^*(r⃗) ,
Λ^F_ik = ∫_Ω_F dλ F_i(λ)F_k^*(λ) ,
with i,k=1,2, are the overlaps of the spatial and spectral basis functions of E⃗(r⃗,λ) over Ω_S and Ω_F.
Finally, by inserting the Stokes parameters calculated from (<ref>) into (<ref>), we obtain
D=√((Λ^S_11Λ^F_11-Λ^S_22Λ^F_22)^2+4|Λ^S_12|^2|Λ^F_12|^2/(Λ^S_11Λ^F_11+Λ^S_22Λ^F_22)^2) ,
which is the expression we use in the main text.
In Fig. <ref>, we show the DoP calculated with Eq. (<ref>) as a function of the spatial and spectral measurement bandwidths.
We use the spatial and spectral basis functions defined in Eq. (<ref>) of the main text, while considering a Gaussian spatial profile with a beam waist of approximately w_0=600 μm and a Gaussian spectral profile corresponding to a pulse duration of τ=220 fs, matching the parameters used in the experiment.
The spatial bandwidth corresponds to the opening angle of an angular mask, while the spectral bandwidth corresponds to the width of a sharp window centered at 780 nm. As expected, the simulated DoP has excellent agreement with the measured DoP shown in Fig. <ref> of the main text.
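For illustration, the following Python sketch (not part of the original analysis) evaluates these overlap integrals numerically and combines them into the DoP expression above; the Gaussian profiles and the numerical values of the beam waist, delay, centre wavelength, and spectral width are assumptions loosely based on the parameters quoted in the text.

import numpy as np

# Assumed parameters, loosely matching the experiment described in the text.
w0 = 600e-6      # beam waist [m]
tau = 220e-15    # delay between the two pulse replicas [s]
lam0 = 780e-9    # centre wavelength [m]
dlam = 10e-9     # assumed 1/e^2 half-width of the spectral intensity [m]
c = 3.0e8        # speed of light [m/s]

def spatial_overlaps(slit_half_angle):
    """Overlap integrals Lambda^S_ik over two diametrically opposite angular slits."""
    r = np.linspace(1e-6, 3 * w0, 400)
    phi = np.linspace(-np.pi, np.pi, 2000)
    R, PHI = np.meshgrid(r, phi, indexing="ij")
    I_spatial = np.exp(-2 * R**2 / w0**2)  # Gaussian intensity |S_0|^2
    mask = (np.abs(PHI) < slit_half_angle) | (np.abs(np.abs(PHI) - np.pi) < slit_half_angle)
    dA = R * (r[1] - r[0]) * (phi[1] - phi[0])
    l11 = np.sum(I_spatial * mask * dA)                     # Lambda^S_11 = Lambda^S_22
    l12 = np.sum(I_spatial * np.exp(2j * PHI) * mask * dA)  # Lambda^S_12
    return l11, l11, l12

def spectral_overlaps(filter_half_width):
    """Overlap integrals Lambda^F_ik over a rectangular wavelength window."""
    lam = np.linspace(lam0 - filter_half_width, lam0 + filter_half_width, 4000)
    I_spectral = np.exp(-2 * (lam - lam0)**2 / dlam**2)  # Gaussian intensity |F_0|^2
    f11 = np.trapz(I_spectral, lam)                                           # Lambda^F_11 = Lambda^F_22
    f12 = np.trapz(I_spectral * np.exp(2j * np.pi * tau * c / lam), lam)      # Lambda^F_12
    return f11, f11, f12

def degree_of_polarization(slit_half_angle, filter_half_width):
    s11, s22, s12 = spatial_overlaps(slit_half_angle)
    f11, f22, f12 = spectral_overlaps(filter_half_width)
    num = np.sqrt((s11 * f11 - s22 * f22)**2 + 4 * np.abs(s12)**2 * np.abs(f12)**2)
    return num / (s11 * f11 + s22 * f22)

# Narrow spatial and spectral filters resolve the structure (DoP close to 1),
# while removing both filters averages over the two DoF's (DoP close to 0).
print(degree_of_polarization(0.08, 1.05e-9))
print(degree_of_polarization(np.pi / 2, 20e-9))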
§ ANALOGY TO THE QUANTUM DESCRIPTION
§.§ Bases
Fig. <ref> shows an overview of the mutually unbiased bases and the assigned labels needed to show the non-separability in analogy to the quantum description.
The following list gives additional descriptions to the shown figure.
* Space domain: The computational basis in the space domain corresponds to a light field with counter-rotating spiral phases as discussed in the main text.
Here, the basis states are two doughnut-shaped intensity structures with opposite azimuthal phase gradients as indicated by the green arrows in Fig. <ref>.
The complementary x-basis corresponds to a pair of two-lobed structures which are rotated by 90^∘ with respect to each other.
The y-basis consists of two similar modes that are rotated by 45^∘ with respect to the x-basis <cit.>.
In the measurements, we use the approximation that a narrow spatial slit projects onto a superposition of orthogonal spatial modes <cit.>.
These slits are depicted by dotted lines in Fig. <ref>.
* Wavelength/Time domain: The computational basis for the spectral/temporal DoF is formed by two time-delayed pulses, which correspond to the functions F_1,2(λ)=F_0(λ)e^± iπτ c/λ given in the main text.
The x and y basis are made up of four specific frequencies at which the intensity difference between the horizontal/vertical or diagonal/anti-diagonal polarization components are maximized (shown in Fig. <ref> as dotted lines).
We follow the same argumentation as in the spatial domain: The transmission through a very narrow slit, i.e. strongly filtering in the frequency domain, approximates the projection on a superposition of modes.
* Polarization: The computational domain is defined by the left and right circularly polarized components of the light field.
The x basis projects on horizontal/vertical linear polarization, while the y basis projects on diagonal/anti-diagonal linear polarization.
§.§ Normalization
In the measurements, the intensity of the states projected onto polarization and space in the chosen bases are measured in the spectral domain.
In the Dirac notation, the probabilities of all possible measurement outcomes in a basis should add up to 1.
Thus, the intensity measurements have to be normalized.
First, the measured intensity signal is normalized with respect to the intensity summed up over all polarizations, i.e. with respect to the S_0 Stokes parameter.
Then, the measurements of all possible permutations of the basis are summed up to find a normalization constant N.
In the xyy-basis the state for example reads
|ϕ⟩= 1/N( α|x,y,-y⟩+β|x,y,y⟩+γ|x,-y,y⟩+δ|x,-y,-y⟩+ϵ|-x,y,y⟩+ζ|-x,y,-y⟩+η|-x,-y,-y⟩+θ|-x,-y,y⟩)
where ideally for a non-separable state β=δ=ζ=θ=0.
The normalization factor is given by
N=α^2+β^2+γ^2+δ^2+ϵ^2+ζ^2+η^2+θ^2.
|
http://arxiv.org/abs/2307.00658v1
|
20230702202626
|
Accelerating Relational Database Analytical Processing with Bulk-Bitwise Processing-in-Memory
|
[
"Ben Perach",
"Ronny Ronen",
"Shahar Kvatinsky"
] |
cs.DB
|
[
"cs.DB"
] |
Accelerating Relational Database Analytical Processing with Bulk-Bitwise Processing-in-Memory
This work was supported by the European Research Council through the European Union's Horizon 2020 Research and Innovation Programme under Grant 757259 and through the European Union's Horizon Europe Research and Innovation Programme under Grant 101069336.
Ben Perach Ronny Ronen Shahar Kvatinsky
Viterbi Faculty of Electrical & Computer Engineering
Technion – Israel Institute of Technology
Haifa, Israel
benperach@campus.technion.ac.il ronny.ronen@ef.technion.ac.il shahar@ee.technion.ac.il
August 1, 2023
Online Analytical Processing (OLAP) for relational databases is a business decision support application. The application receives queries about the business database, usually requesting to summarize many database records, and produces few results. Existing OLAP requires transferring a large amount of data between the memory and the CPU, having a few operations per datum, and producing a small output. Hence, OLAP is a good candidate for processing-in-memory (PIM), where computation is performed where the data is stored, thus accelerating applications by reducing data movement between the memory and CPU. In particular, bulk-bitwise PIM, where the memory array is a bit-vector processing unit, seems a good match for OLAP. With the extensive inherent parallelism and minimal data movement of bulk-bitwise PIM, OLAP applications can process the entire database in parallel in memory, transferring only the results to the CPU. This paper shows a full stack adaptation of a bulk-bitwise PIM, from compiling SQL to hardware implementation, for supporting OLAP applications.
Evaluating the Star Schema Benchmark (SSB), bulk-bitwise PIM achieves a 4.65× speedup over Monet-DB, a standard database system.
Processing-in-memory, Database, OLAP, Memristors
§ INTRODUCTION
Analytical processing of relational databases is a business decision support application, allowing decision-makers to analyze their business data <cit.>. Often, these analyses require summarizing large sections of the database, taking a long execution time <cit.>, mostly on transferring data from the memory to the CPU. Summarizing large sets of data to few results hints that this application can benefit from processing-in-memory (PIM) techniques.
PIM techniques come in various types and technologies <cit.>, all operate on data at (or close to) where it is stored, i.e., the memory. In this paper, we are interested in accelerating analytical processing of relational databases with a specific PIM technique called bulk-bitwise PIM <cit.>. Bulk-bitwise PIM is characterized by utilizing the memory cell arrays to both process the data and directly store the result. Thus, computing with bulk-bitwise PIM can minimize data movement. As data-intensive applications invest most of their time and energy in memory access <cit.>, our approach for accelerating analytical processing consists of reducing the required memory accesses by the host. We do so by leveraging the in-place processing of bulk-bitwise PIM to distill the information stored in memory, transferring as little information as possible to the host. Data transfer reduction is possible because databases can be transferred into PIM memory once and used many times.
Note that using bulk-bitwise PIM in this way is orthogonal to the host processor type (e.g., CPU, GPU, FPGA) since any host requires to get the information from the memory. Furthermore, using bulk-bitwise PIM to reduce the data movement out of the memory cell arrays is complementary to other PIM techniques where dedicated processing units are placed close to the memory arrays. These dedicated processing units can act as the hosts for the bulk-bitwise PIM technique, profiting from the same reduced data transfer.
In this paper, we combine the threads of several of our previous works <cit.> to show how analytical processing for relational databases can be supported with bulk-bitwise PIM. As the processing capabilities of bulk-bitwise PIM depend on the data arrangement within the memory arrays, we show how to arrange the database relations within the memory arrays, creating a dedicated data structure for bulk-bitwise PIM. We then identify the basic primitives of bulk-bitwise PIM that can reduce data transfer with this new data structure (i.e., filter and aggregate). Afterward, we show how more complex database operations are supported for the star schema database <cit.> (i.e., GROUP-BY and JOIN), allowing execution of full queries. Using this support, we can execute the full Star Schema Benchmark (SSB) <cit.>. We evaluate these techniques with a memristive bulk-bitwise PIM design <cit.> using the gem5 <cit.> simulation environment.
§ BACKGROUND
§.§ Relational Databases and Analytical Processing
The relational database is a data model organized into one or more relations (tables). Each relation is constructed as multiple records and attributes (shown in Fig. <ref>), represented by the relation's rows and columns, respectively. Records are independent, holding information belonging to a single item with a single value for each attribute. Each relation has an attribute (or a set of attributes) that uniquely identifies the records in the relation and is called the key of that relation. When a relation has an attribute that has values from a key of another relation, this attribute is called a foreign key.
Queries on the database are questions about the data held in the database. In analytical processing, queries require finding all records fulfilling certain conditions on their attributes and summarizing (e.g., sum, average, max) one or more attributes across the found records <cit.>. This summarizing can also be requested per subgroups of the found records, where subgroups are defined according to unique values of some attributes. This division for subgroups is called a GROUP-BY operation.
When a query includes conditions involving attributes from multiple relations, records from the different relations are matched according to these conditions. The operation of matching records is called a JOIN. When the condition between the relations' attributes for JOIN is equality, the JOIN is named equi-JOIN. For analytical processing queries, JOIN operations are usually the most time-consuming part of query execution <cit.>.
A common database structure for analytical processing is the star schema <cit.>. An illustration of the star schema structure is shown in Fig. <ref>. In this schema, there is a single large relation, called the fact relation, and several small relations called the dimension relations. The fact relation has a foreign key to each of the dimension relations. In the star schema, JOIN operations required by queries are, by and large, only equi-JOIN between a dimension relation's key and its respective fact relation foreign key <cit.>. All the SSB benchmark <cit.> queries use only this kind of JOIN.
§.§ Bulk-Bitwise PIM
Bulk-bitwise PIM uses the memory cell arrays as processing units, which can be implemented with DRAM or emerging nonvolatile memory technologies <cit.>. Because of the regular structure of these arrays, the supported operations are bitwise logic operations (e.g., AND, NOT, NOR) between array rows or columns (shown in Fig. <ref>). When multiple memory cell arrays operate concurrently, the effective operation is a very wide bitwise operation, i.e., bulk-bitwise operations. As such, bulk-bitwise PIM can process data where it is stored and exhibit high computational bandwidth.
To support virtual memory, bulk-bitwise PIM operations are restricted to use and rewrite data within a single virtual page <cit.>, usually a huge page. This way, when a program sends a PIM operation to memory with a virtual address, the virtual address can be translated in the standard fashion and the PIM operation is routed by the hardware to its designated place. To perform the same operation on several pages, however, the same operation has to be sent to each page separately. In addition, since the PIM computation is tightly coupled with the layout of data in the memory cell arrays, data for PIM has to be structured in a specific, dedicated way. To allow software in virtual memory fine-grain control over the layout of data structures within the cell arrays, the mapping of addresses to cell array location is part of the bulk-bitwise programming model <cit.>. The specified mapping is on the page offset bits of the virtual address since they do not change on virtual-physical address translation, giving the virtual space software control over these bits.
To guarantee program correctness, the ordering rules of PIM operations with the standard memory operations (e.g., loads, stores) have to be defined <cit.>. Having well-defined ordering rules for bulk-bitwise PIM requires the PIM memory and host caches to be coherent, supported by the host hardware <cit.>.
§ THE RELATION DATA STRUCTURE
A database relation is stored in a memory cell array, as shown in Fig. <ref>. It is necessary to store data in a certain way so it can be directly processed without further movement since the processing capabilities of bulk-bitwise PIM are directly connected to the layout of data within the memory cell array.
To this end, each record is set across a cell array row, and each attribute spans several columns.
In this data structure, column-wise operations can be used to process all records in parallel. Some cell array columns, however, have to remain empty to store the PIM results before they are read from the memory. Note that a record's bytes are not consecutive in virtual address space due to the address mapping.
If a single array row is not enough to hold all the attributes of a relation, the relation's attributes must be split on more than a single cell array. These additional cell arrays are put on different pages and might require moving data between the pages using standard loads and stores.
§ BULK-BITWISE PIM PRIMITIVES
This section presents the basic database operations using the relation data structure and bulk-bitwise PIM capabilities. These database primitives reduce the required data transfer between the host and memory.
§.§ Filter
A filter operation, shown in Fig. <ref>, checks a condition across all relation records. This operation filters all records for a query or a single subgroup in a GROUP-BY operation (Section <ref>). The condition checking is done with PIM, where the result is a single bit per record. The resulting bit is stored in the same cell array column across all cell arrays of the relation. Hence, to assert which record passed the condition, the host only needs to read a single bit per record instead of the condition's attributes per record. The data transfer reduction depends on the number of attributes in the condition, the attributes' lengths, and the data itself (as non-PIM techniques are data depended <cit.>).
To support the filter primitive, the PIM module instruction set includes comparison operations (e.g., equality, less-than), logic operations (e.g., AND, OR, NOT), and arithmetic operations (e.g., addition, multiplication). These operations are supported for both an attribute with an attribute and an attribute with an immediate. Additionally, these operations are supported for multiple attribute lengths.
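As an illustration of the principle (a simplified software emulation, not the actual hardware instruction set), the following Python sketch mimics a bulk-bitwise equality filter on the relation data structure: each record occupies one row of a bit matrix, the attribute occupies a fixed range of bit columns, and the comparison against an immediate is computed column by column, leaving a single result bit per record for the host to read.

import numpy as np

ATTR_BITS = 8  # assumed attribute width in bits

def encode_attribute(values, attr_bits=ATTR_BITS):
    """Lay out one attribute: one record per row, one bit per column."""
    rows = np.zeros((len(values), attr_bits), dtype=np.uint8)
    for i, v in enumerate(values):
        for b in range(attr_bits):
            rows[i, b] = (v >> b) & 1
    return rows

def filter_equals(bit_matrix, immediate, attr_bits=ATTR_BITS):
    """Column-wise equality check against an immediate, one result bit per record."""
    result = np.ones(bit_matrix.shape[0], dtype=np.uint8)
    for b in range(attr_bits):
        imm_bit = np.uint8((immediate >> b) & 1)
        xnor = np.uint8(1) ^ (bit_matrix[:, b] ^ imm_bit)  # bit matches the immediate bit
        result &= xnor                                      # all bits must match
    return result

amounts = [35, 70, 35, 120, 35]
bits = encode_attribute(amounts)
print(filter_equals(bits, 35))  # -> [1 0 1 0 1], the only data the host has to read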
§.§ Aggregation
The aggregation operation, shown in Fig. <ref>, reduces a specific attribute, across all of the relation's records, to a single value (e.g., sum, average, max). This reduction, however, includes values only from selected records.
This selection is done by first filtering the records according to the condition (as in the filter primitive in Section <ref>). Instead of reading the filter result, it is used to generate a masked version of the attribute to be aggregated (without overwriting the original attribute). The mask operation nullifies the non-selected values so they will not affect the aggregation operation.
The masked attribute is then aggregated, according to the desired operation, within each cell array, resulting in a single value per cell array. Afterward, the host reads the single value from each cell array and aggregates them to complete the operation. Hence, the host only has to read a single value per cell array per aggregation.
For example, when summing an attribute, the attribute's bits are ANDed with the filter result bit to create the masked attribute. Hence, the unselected records have a zero value in the masked attribute, while the selected records' values remain the same. Summing the masked attribute will have the same result as summing only the attribute for the selected records. The host reads the single values in each cell array and sums them together, producing the final result.
Since the aggregation operations are performed value by value, they must be commutative and associative. Hence, the PIM module supports the aggregation operation for sum, min, and max within each cell array. Other aggregation operations can be supported as a combination of commutative and associative aggregation. For example, to perform an average (a non-associative operation) on an attribute, the attribute is first summed. Then, the filter result is also aggregated using sum, resulting in the count of the selected records. The host then divides the total sum of the records by their count to produce the average.
§.§ Evaluating PIM Primitives
To evaluate the performance of the PIM primitives, we used the TPC-H benchmark <cit.>, a relational database analytical processing benchmark. The baseline is an in-house implementation of an in-memory database <cit.>, where the entire database is stored in DRAM main memory. The baseline ran on the same system as the PIM configuration and executed the same query sections as the PIM system. A full evaluation description and more analysis are presented in <cit.>.
The speedup and memory accesses reduction (LLC misses) for the PIM over the baseline are shown in Fig. <ref>. The queries are divided into two groups, full queries and filter-only queries. Full queries can perform the entire query, including the required aggregations, while filter-only queries can perform only the filter part of the query. Since the aggregation performs a more substantial data transfer reduction, it also achieves a significantly higher speedup. We also see that the data transfer reduction is similar in magnitude to the speedup, supporting our approach that bulk-bitwise PIM speedup comes from the data transfer reduction.
§ SUPPORTING THE STAR SCHEMA
Using the PIM database primitives and understanding their strengths and weaknesses, more complex operations can be designed and supported. This section presents the support for JOIN and GROUP-BY for the star schema. Full details and full evaluation are presented in <cit.>.
§.§ Supporting JOIN
JOIN requires matching records from different relations, where the matching itself is data-dependent. Hence, performing JOIN requires many data movements, which conflicts with the goal of data movement reduction. To avoid this, JOIN is accomplished by pre-computing the required JOIN output and storing the relations as a single JOINed relation. Pre-computing a JOIN operation is a known technique to accelerate query execution, appearing as denormalization <cit.> or materialized views <cit.>. Storing a pre-computed JOIN, however, has drawbacks. Characteristics of the star schema and bulk-bitwise PIM can mitigate these drawbacks.
A significant drawback of pre-computed JOINs is their limited flexibility. If a query requires a different JOIN than the one pre-computed, then the pre-computation is not helpful, and the required JOIN has to be performed. For the star schema, most JOIN operations are an equi-JOIN between a dimension relation and the fact relation on the dimension key <cit.>. All of the JOIN operations of the SSB <cit.> benchmark are of this type. Hence, we cover and accelerate the vast majority of cases by storing the fact relation equi-JOINed to all dimension relations on their key.
Using pre-computed JOINs also complicates maintenance <cit.>. Since JOIN operations duplicate data, performing an UPDATE on a pre-computed JOIN requires changing many entries. Using bulk-bitwise PIM, however, alleviates this drawback. The filter primitive can efficiently find all records to update. Furthermore, PIM operations can perform the update itself, not requiring reading any value from PIM memory (i.e., implementing a PIM MUX with the filter result as select <cit.>).
§.§ Supporting GROUP-BY
Performing GROUP-BY can be simply done by using PIM aggregation for every subgroup, aggregating subgroups one by one. The number of subgroups, however, can be large. Performing many aggregations may result in long latency, high energy, and high power, and reduce the lifetime of the PIM module due to the limited endurance of memory cells <cit.>.
To mitigate these deficiencies of PIM aggregation, a standard CMOS logic circuit is added to the memory cell array peripherals. The circuit performs the aggregation by reading the masked attribute to be aggregated value by value from its cell array. The circuit aggregates the read values and writes the final result back to the cell array. Thus, the latency, energy, and power of the aggregation primitive are reduced compared to a pure bulk-bitwise PIM aggregation. Furthermore, the PIM module lifetime is increased, as cells are written only at the end of the aggregation operation.
Nevertheless, the latency of the entire GROUP-BY operation can still be high if there are many subgroups. We note that aggregation can be performed in another way. The records for the query (for all subgroups) can be filtered, then read record-by-record by the host. The host then classifies and aggregates the records in their subgroups. This host aggregation technique is dependent on the number of records in the query and is independent of the number of subgroups. PIM aggregation, however, depends on the number of subgroups and not the number of records in the query. Hence, it is beneficial to have PIM aggregating a few large subgroups, leaving many small subgroups to host aggregation, exploiting the strength of each method. To divide the subgroups between PIM and host aggregation, a performance model is used along with an estimate for subgroups' sizes <cit.>. This estimate is done by taking a small sample from the database. This GROUP-BY technique is adapted from an in-cloud processing work <cit.>.
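The division of subgroups can be sketched as follows; the constant per-group and per-record costs and the sampled size estimates are placeholders standing in for the performance model and the sampling procedure described in the cited work.

def split_subgroups(estimated_sizes, pim_cost_per_group, host_cost_per_record):
    """Assign each subgroup to PIM or host aggregation by comparing the fixed cost
    of one in-memory aggregation with the cost of reading that subgroup's records."""
    pim_groups, host_groups = [], []
    for group, size in estimated_sizes.items():
        if size * host_cost_per_record > pim_cost_per_group:
            pim_groups.append(group)    # few large subgroups: aggregate in memory
        else:
            host_groups.append(group)   # many small subgroups: read and aggregate on the host
    return pim_groups, host_groups

# Hypothetical subgroup sizes estimated from a small sample of the database.
estimates = {"1997": 120000, "1998": 95000, "rare-A": 40, "rare-B": 15}
print(split_subgroups(estimates, pim_cost_per_group=10000, host_cost_per_record=1))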
§.§ Star Schema Evaluation
We evaluate our solutions on the SSB benchmark <cit.>. The execution times are shown in Fig. <ref>. We compared three versions of PIM. one-xb stores records of the JOINed relation in a single memory cell array. two-xb is where the JOINed relation's records are vertically split into two memory cell arrays, having the fact relation's attributes in one array and all of the dimension relations' attributes in another array. pimdb is the same as one-xb, only the PIM aggregation (PIM-gb) is performed with pure bulk-bitwise operations (without the added aggregation circuit).
Two baselines are compared, mnt-reg and mnt-join. Both run Monet-DB <cit.>, a real in-memory database system running on real hardware. mnt-reg holds the SSB database in its standard form, while mnt-join holds it with the pre-computed JOIN.
The best execution time is achieved by one-xb, with geo-mean speedups of 7.46× and 4.65× over mnt-reg and mnt-join, respectively. two-xb is 3.39× slower (in geo-mean) than one-xb since many data transfers are required between the memory cell arrays holding the relation. two-xb, however, still achieves a 1.37× speedup over mnt-join.
The cases where a non-PIM execution is faster than a PIM execution (Q2.1, Q3.1, and Q4.1) are where the PIM execution does not achieve a significant data transfer reduction. In these cases, the number of records required by the query is high. Due to the data structure of the PIM relation, most of the relation's records are transferred to the host, resulting in little to no data reduction and removing the advantage of PIM. See <cit.> for more details and discussions.
§ CONCLUSIONS
This paper shows how to support analytical processing of relational databases using bulk-bitwise PIM. Our bulk-bitwise PIM technique aimed to reduce the required data transfers. By substituting serial data accesses to memory with very wide and short operations within the memory, we achieve a significant speedup over von Neumann machines.
We first designed a data structure suited for bulk-bitwise PIM. Then, we identified and supported primitive operations (filter and aggregate), performing relevant functionality and reducing data transfer. These primitives were evaluated and studied. Based on these primitives, we support more complex operations (JOIN and GROUP-BY) and evaluated a full database benchmark (SSB) for a system based on memristive bulk-bitwise PIM. We believe this work will inspire other research for further adaptation of applications for bulk-bitwise PIM.
IEEEtran
|
http://arxiv.org/abs/2307.02194v2
|
20230705104154
|
Abstractions, Scenarios, and Prompt Definitions for Process Mining with LLMs: A Case Study
|
[
"Alessandro Berti",
"Daniel Schuster",
"Wil M. P. van der Aalst"
] |
cs.DB
|
[
"cs.DB"
] |
Abstractions, Scenarios, and Prompts for Process Mining with LLMs
A. Berti et al.
Process and Data Science Chair, RWTH Aachen University, Aachen, Germany
{a.berti, schuster, wvdaalst}@pads.rwth-aachen.de Fraunhofer FIT, Sankt Augustin, Germany
Abstractions, Scenarios, and Prompt Definitions for Process Mining with LLMs: A Case Study
Alessandro Berti1,20000-0002-3279-4795,
Daniel Schuster2,10000-0002-6512-9580,
Wil M. P. van der Aalst1,20000-0002-0955-6940
August 1, 2023
Large Language Models (LLMs) are capable of answering questions in natural language for various purposes.
With recent advancements (such as GPT-4), LLMs perform at a level comparable to humans on many professional tasks. The analysis of business processes could benefit from a natural process querying language and from the domain knowledge on which LLMs have been trained.
However, it is impossible to provide a complete database or event log as an input prompt due to size constraints.
In this paper, we apply LLMs in the context of process mining by i) abstracting the information of standard process mining artifacts
and ii) describing the prompting strategies.
We implement the proposed abstraction techniques into pm4py, an open-source process mining library.
We present a case study using available event logs. Starting from different abstractions and analysis questions, we formulate prompts and evaluate the quality of the answers.
§ INTRODUCTION
Process mining uses event data from information systems to enhance business processes, involving process discovery, conformance checking, model enhancement, and predictive analytics. This data science field provides insights for improving operational processes.
Transitioning from traditional process analysis, the emergence of Large Language Models (LLMs) like GPT-4 <cit.> adds a new perspective to data exploration. These advanced models, drawing on extensive training data, serve as versatile tools for general querying, enabling the extraction of valuable insights. They not only generate and retrieve information, but also hold potential to analyse and enhance business process outcomes.
In this paper, we investigate the application of LLMs in the context of process mining, where they are valuable both for process querying (e.g., verifying properties against the event log or supporting the preprocessing phase) and for embedding the domain knowledge on which the LLM was trained into the various process mining tasks.
Despite their impressive performance, applying LLMs like GPT-4 to process mining presents challenges due to their 'context window' limitation <cit.>, referring to the maximum sequence length these models can manage per interaction.
This balancing act between information quantity and output quality can lead to significant data loss <cit.>. Strategies including text compression, context truncation, or improved prompts <cit.> are required to effectively encapsulate process mining information.
Therefore, we explore in this paper the usage of textual abstractions of standard process mining artifacts, e.g., event logs and process models, that can embed the essential information of such artifacts.
This paper offers various prompting strategies to address the loss of information from proposed abstractions.
A direct answer or a database query verified against the original object may be obtained, as summarized in Figure <ref>.
This study further presents the integration of the pm4py process mining library[<https://pm4py.fit.fraunhofer.de>] with GPT-4 and provides a case study exploring these prompting strategies using public event logs.
The case study examines responses under different abstractions and GPT-4's domain knowledge for various processes (medical, travel expense reporting, and fines management), alongside additional process mining knowledge required for specific use cases.
The remainder of the paper is organized as follows.
Section <ref> covers related work.
Section <ref> describes the abstractions and the different prompting strategies for LLMs.
Section <ref> describes the implementation.
Section <ref> presents a case study demonstrating the usage of different abstractions and prompting strategies for process mining tasks.
Finally, Section <ref> concludes this paper.
§ RELATED WORK
This section provides a brief overview of process querying and the usage of domain knowledge in process mining.
Several process-mining-specific querying languages exist <cit.>.
In <cit.>, a framework for devising process querying methods is proposed.
SQL is widely used for process discovery <cit.>, conformance checking <cit.>, and data preprocessing <cit.>.
Cypher, a graph-based querying language, has also been adopted for process mining <cit.>.
Also, Celonis PQL <cit.> is a proprietary high-performance process querying language integrated into the Celonis platform.
The mentioned languages are expressive and permit a versatile set of process mining inquiries. However, they require considerable expertise in the syntax and semantics of the query language in question and specialist knowledge.
The complexity of process querying can be reduced by translating natural language queries into database executable statements. As proposed in <cit.>, a natural language querying interface aids in democratizing process mining, making it more accessible to non-technical users. The proposed reference architecture handles natural language questions and provides responses by integrating with process mining tools, using techniques such as entity recognition and semantic parsing.
In <cit.>, a natural language interface is proposed to assist the end-user in querying event data stored in a graph database. The natural language queries are translated into the Cypher language.
Another study, <cit.>, presents a conformance checking method based on NLP, which extracts business actions and objects from textual labels associated with events. Meanwhile, <cit.> focuses on identifying constraints for business process execution from natural language documents.
In <cit.>, chatbots are trained on customer service conversations to learn the underlying business process, showing the effectiveness of such an approach, though the generalization capabilities remain unclear.
Domain knowledge about a process can be expressed in natural language. For example, documents might contain the process execution rules if a formal model is not defined.
Utilizing domain knowledge in process discovery has been investigated in <cit.>.
In <cit.>, the domain knowledge of the process analyst is used to modify/improve a discovered process model.
In <cit.>, an event log is abstracted to the level needed by the analyst, using domain knowledge extracted from the process documentation to semi-automatically match events and activities.
The role of LLMs in the business process management field has been initially investigated in <cit.>, where prompt engineering techniques to embed the required information about the business processes are discussed as an alternative to training a company/process-specific LLM.
This paper proposes the usage of LLMs for process mining tasks.
LLMs such as GPT-4 know the domain knowledge and execution constraints for the set of business processes covered by the training data.
Therefore, LLMs are not process-specific and can interpret and execute queries in natural language. In our case study, we show that the queries can be either executed directly against an abstraction of a given process mining artifact or database (SQL) queries can be automatically generated by GPT-4 to verify hypotheses.
§ APPROACH
When using LLMs for process mining, the original event logs or process model representations cannot be directly used due to size limitations.
An abstraction of these artifacts must be obtained to execute specific queries, i.e., prompts, against an LLM.
In the following subsections, we will explain textual abstractions (see Section <ref>) and different prompt generation strategies (see Section <ref>).
§.§ Abstracting Process Mining Objects
This section describes how textual abstractions of common process mining objects, i.e., event logs and process models, can be obtained. These abstractions are later used in the proposed case study.
§.§.§ Abstracting Event Logs
Traditional event logs link each event with a single case identifier, enabling the computation of the directly-follows graph and the identification of traces and process variants <cit.>. These concepts can be associated with frequency and performance metrics:
* In a directly-follows graph, frequency is quantified by the instances where a pair of activities are sequential, and performance is calculated as an aggregation, such as average or median, of recorded times between the two activities.
* For a process variant, frequency is determined by the count of cases following the given trace, while performance is an aggregation, such as average or median, of total throughput times for the cases.
This information can be textually represented to aid an LLM in responding to inquiries about event data.
Section <ref> and Listing <ref> demonstrate the textual representation of variants and the top 5 relationships from a Directly-Follows Graph (DFG), respectively. When constructing the directly-follows graph, various notations may be employed such as → or the phrase “is followed by”. Despite the differences in representation, Large Language Models (LLMs) like GPT-4 interpret these notations equivalently.
caption=Textual abstraction of a DFG.
label=lst:listingDfgAbstraction
Create Fine -> Send Fine ( frequency = 103392 performance = 7568635.65 )
Send Fine -> Insert Fine Notification ( frequency = 79757 performance = 1501626.95 )
Insert Fine Notification -> Add penalty ( frequency = 72334 performance = 5184000.0 )
Add penalty -> Send for Credit Collection ( frequency = 57182 performance = 45566346.44 )
Create Fine -> Payment ( frequency = 46952 performance = 905663.45 )
In the realm of object-centric event logs, wherein an event may be associated with various object types, additional process modeling notations exist that can undergo textual abstraction. Specifically, object-centric directly-follows graphs <cit.> represent an assembly of directly-follows graphs corresponding to distinct object types.
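As an illustration of how such an abstraction could be assembled programmatically, the following Python sketch builds a frequency/performance DFG string similar to the listing above. It is a minimal sketch rather than the reference implementation: the helpers discover_dfg and discover_performance_dfg and the structure of their return values are assumptions about the pm4py API that may vary across versions; the ready-made helpers of the pm4py integration (e.g., pm4py.llm.abstract_variants) are described in the Implementation section.
import pm4py

def abstract_dfg(log, top_k=5):
    # frequency DFG: assumed to be a dict mapping (activity_a, activity_b) -> count
    freq_dfg, _, _ = pm4py.discover_dfg(log)
    # performance DFG: assumed to map (activity_a, activity_b) -> aggregated metrics
    perf_dfg, _, _ = pm4py.discover_performance_dfg(log)
    lines = []
    for (a, b), freq in sorted(freq_dfg.items(), key=lambda kv: kv[1], reverse=True)[:top_k]:
        perf = perf_dfg.get((a, b), {})
        mean_perf = perf.get("mean", 0.0) if isinstance(perf, dict) else float(perf)
        lines.append(f"{a} -> {b} ( frequency = {freq} performance = {mean_perf:.2f} )")
    return "\n".join(lines)

log = pm4py.read_xes("tests/input_data/roadtraffic100traces.xes")
print(abstract_dfg(log))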
§.§.§ Abstractions of Process Models
Formal process models, e.g., Petri nets, BPMN, and declarative models, express constraints on the activities and the paths that are executable in a process.
For example, the Petri net shown in Fig. <ref> can be abstracted as in Listing <ref>. The method used for textually abstracting a Petri net is not fixed and can be approached in multiple ways, provided that the naming for places and transitions is unique. The choice of abstraction strategy is arbitrary and can be tailored to specific use cases or data structures.
Similar textual abstractions of many other model formalisms (e.g., process trees, prefix trees, transition systems, BPMN models)
are possible, but we do not describe them here.
§.§ Prompt Generation
After obtaining the abstractions above, we can provide them to an LLM along with a query.
These prompts can lead to two different types of answers, i.e., a direct answer to the original question, or the formulation of hypotheses that can be verified against the data by means of database queries.
caption=Textual abstraction of the Petri net represented in Fig. <ref>.
label=lst:listingPetriAbstraction
places: [ p1, sink, source ]
transitions: [ (A, 'A'), (B, 'B') ]
arcs: [ (A, 'A')->p1, (B, 'B')->sink, p1->(B, 'B'), source->(A, 'A') ]
initial marking: ['source:1']
final marking: ['sink:1']
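To make the abstraction step concrete, the following Python sketch serializes a pm4py Petri net into a textual form similar to the listing above. It is a hedged sketch, assuming the pm4py object model (places and transitions exposing .name/.label, arcs exposing .source/.target, and dict-like markings); the library's own pm4py.llm.abstract_petri_net helper, used in the Implementation section, is the reference implementation.
def _fmt(node):
    # transitions carry a visible label, places only a name
    # (assumption about the pm4py object model)
    if hasattr(node, "label"):
        return f"({node.name}, '{node.label}')"
    return node.name

def abstract_petri_net(net, im, fm):
    places = sorted(p.name for p in net.places)
    transitions = sorted(_fmt(t) for t in net.transitions)
    arcs = sorted(f"{_fmt(arc.source)}->{_fmt(arc.target)}" for arc in net.arcs)
    initial = [f"{p.name}:{c}" for p, c in im.items()]
    final = [f"{p.name}:{c}" for p, c in fm.items()]
    return (f"places: [ {', '.join(places)} ]\n"
            f"transitions: [ {', '.join(transitions)} ]\n"
            f"arcs: [ {', '.join(arcs)} ]\n"
            f"initial marking: {initial}\n"
            f"final marking: {final}")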
§.§.§ Direct Answering
An LLM prompt can be formulated on top of these abstractions, for example "Describe the meaning of the activity A", which is particularly useful for descriptive or conformance checking purposes. It is important that these prompts assume no knowledge beyond the provided event log or process model abstraction.
Due to the inherently probabilistic behavior of LLMs like GPT-4, the same question might yield varying responses across different sessions. This feature, rather than being an issue, is part of the model's design to promote diverse outputs and creative problem solving. If initial responses do not adequately meet the user's need, refining the question or asking more specific follow-up questions is possible to address any perceived gaps in the information provided.
§.§.§ Hypothesis Formulation and Verification
Certain process mining questions can be answered using the DFG/variants abstraction as they concern the order of activities. However, questions related to time and data perspectives of the event log, which require access to additional attributes or information, cannot be directly addressed by such abstractions. We may formulate hypotheses, such as impacts of specific activities on case duration, but these need further verification.
To verify a hypothesis, we can prompt an LLM, like GPT-4, with good SQL knowledge <cit.>, to generate a database query that can be applied to the whole event log. The prompt uses the DFG/variants abstraction and an abstraction of event log attributes. Upon receiving the result of a query from the user, the LLM can then assess this information to confirm, refine, or dismiss the hypothesis.
It is also important to note that LLMs, provided with the top variants and attributes, can autonomously generate hypotheses on the data and formulate the corresponding database queries for hypothesis testing, demonstrating their flexibility and adaptability in process mining tasks.
§ IMPLEMENTATION
In this section, we present the implementation of various abstractions (see Section <ref>) into the open-source process mining library, pm4py (version 2.7.5 or later). The goal is to create textual abstractions of process mining artifacts, like traditional/object-centric event logs and process models (Petri nets), suitable for GPT-4's input limit. From these abstractions, specific queries are formulated for GPT-4 execution. Listing <ref> demonstrates this integration, where an event log is ingested for root cause analysis, and the inductive miner algorithm discovers a process model for optimization suggestions.
caption=Example usage of the pm4py's OpenAI/GPT-4 integration on traditional process mining objects
label=lst:exampleUsageTraditionalObjects
import pm4py
log = pm4py.read_xes("tests/input_data/roadtraffic100traces.xes")
iq1 = """What are the root causes of the performance issues in the process?
Please provide only process and data specific considerations,
no general considerations."""
print(pm4py.llm.abstract_variants(log) + iq1)
net, im, fm = pm4py.discover_petri_net_inductive(log)
iq2 = """Can you provide suggestions to improve the process model
based on your domain knowledge?"""
print(pm4py.llm.abstract_petri_net(net, im, fm) + iq2)
§ CASE STUDY
We present a case study using publicly available event logs and GPT-4 <cit.>.
We propose an assessment of prompts that can be directly answered by GPT-4.
Further, we propose an example of hypothesis formulation and verification against the entire dataset (by means of a SQL query).
§.§ Direct Answering
To assess prompts requiring direct answers from the LLM, we use publicly available event logs: (1) Road Traffic Fine Management process [<https://data.4tu.nl/articles/_/12683249/1>], which is related to the management of fines in an Italian municipality, (2) BPI Challenge 2020: Domestic Declarations [<https://data.4tu.nl/collections/_/5065541/1>], which is a travel expense approval process, (3) Sepsis Cases[<https://data.4tu.nl/articles/_/12707639/1>], which is a medical process for sepsis treatment, and (4) Conformance Checking Challenge 2019[<https://data.4tu.nl/articles/_/12707639/1>], which is a medical training process.
We have compiled a list of questions related to processes, sorted into various categories[A more extensive list of questions is available at <https://pm4py.fit.fraunhofer.de/static/assets/api/2.7.3/api.html#openai-integration-pm4py-openai>.]. Each question is accompanied by acceptance criteria to help determine if the response given by the LLM is satisfactory.
Descriptive Questions:
* DQ1: Can you describe the process contained in this data?
* GPT-4 should provide the name/category of the process underlying the data and a description of the main steps of the process.
* If GPT-4 does not correctly understand the context and identifies the wrong name or category for the process, the response is considered unsatisfactory.
Conformance Questions:
* CQ1: Can you pinpoint the central anomalies of the process from this data? Please provide only process- and data-specific considerations, not general considerations.
* Our expectation is that GPT-4, using its domain knowledge,
is able to identify paths that are illogical, rework, or missing activities.
* A response is deemed unsatisfactory if GPT-4 points to infrequent activities/paths, and to paths with high performance, without exploiting the domain knowledge about the process.
Process Improvement Questions:
* IQ1: What are the root causes of performance issues specific to the process and related data? Please refrain from providing general considerations and focus on issues directly tied to the process and its data.
* Our expectation is that GPT-4 should identify activities, paths, or rework
that lead to higher throughput times.
* A response is deemed unsatisfactory when GPT-4 identifies just the infrequent activities or paths, or is able to detect different execution orders for the activities but asks the user to verify if there is something wrong.
* IQ2: Please suggest improvements to the process model based on your domain knowledge. Also, please compare it against implementations of similar processes.
Provide only process- and data-specific considerations, not general ones.
* We expect that GPT-4 can suggest additional activities to optimize the throughput time and reduce rework. Also, it should
be able to detect when the activities are executed in a suboptimal order.
* A response is deemed unsatisfactory if general considerations about merging activities or reducing invisible steps are provided.
Certain queries align closely with those presented in <cit.>. Specifically, IQ1 and IQ2 correspond to questions 104 and 71 respectively, as listed in the provided resource (<https://ic.unicamp.br/ luciana.barbieri/pmquestions.csv>). Nevertheless, DQ1 and CQ1, which pertain to descriptive analytics and anomaly detection, exceed the capabilities offered by the Everflow tool.
All the considered prompts have been created by starting from the result of an abstraction and appending one question.
The prompts have been executed against GPT-4 <cit.>.
We report on the quality of the answers in Table <ref>.
Each row reports the results on the considered logs for a given abstraction and question.
Different colors have been used: green indicates a useful answer for the given question, orange indicates a partly useful response, and red indicates that GPT-4 did not provide a good answer to the given question.
The response to descriptive questions has been satisfying for all event logs. For the considered questions and event logs, GPT-4 is able to provide meaningful answers starting from both the directly-follows graph and the variants. In particular, it is surprising that the DFG abstraction (which can be efficiently computed for large event logs) is sufficient not only to make local considerations (related to a path/activity of the process) but also considerations on the end-to-end execution of the process.
The conformance questions elicited satisfying responses for the "Road Traffic" and "BPI Challenge 2020" event logs (see Listing <ref>). Key issues such as inconsistent activity order and multiple payment recordings were pinpointed for "Road Traffic", while for "BPI Challenge 2020" the rework in rejections/resubmissions was identified as problematic. Contrarily, responses for the medical event logs "Sepsis" and "CCC19" were below expectations, possibly due to GPT-4's limited domain knowledge of medical processes and difficulties handling their unstructured nature and high event count per case.
Responses to process improvement questions met expectations for the "Road Traffic" and "BPI Challenge 2020" event logs, especially with the variants abstraction. The directly-follows graph abstraction had limited utility in root cause analysis, as rework was more discernible in variants abstraction. However, for IQ2 employing model-based abstraction, GPT-4 offered valuable process optimization insights expressed in natural text.
For the contexts where both the DFG and variants abstractions are useful, we found that different viewpoints are provided. Therefore, the insights obtained from the different abstractions can be combined.
Across different executions, we obtained slightly different answers (with similar degrees of usefulness), which can lead to a broader set of insights.
caption=Response to CQ1 for the BPI Challenge 2020 event log.
label=lst:anomaliesBpic2020
From the given data, here are some of the potential anomalies I can identify in this process:
Inefficient Process Variants: There are several process variants where a declaration is rejected multiple times (by either ADMINISTRATION, PRE_APPROVER, SUPERVISOR, or BUDGET OWNER) before it is finally approved. This suggests an inefficiency in the process that may require further investigation. It is possible that the declaration is not adequately prepared before submission or the criteria for approval are not clearly communicated. These inefficient process variants not only increase the frequency of steps in the process but also increase the overall performance time.
High Performance Time: Some process variants have remarkably high performance times. For instance, the process variant where the declaration is "SUBMITTED by EMPLOYEE -> Declaration REJECTED by ADMINISTRATION" has a performance time of 13805149.2. This high performance time could indicate a delay or bottleneck in the process that needs to be addressed.
Missing Steps: There is a process variant where a declaration is directly "FINAL_APPROVED by SUPERVISOR" without going through "APPROVED by ADMINISTRATION" or "APPROVED by BUDGET OWNER" or "APPROVED by PRE_APPROVER". This could indicate a deviation from the standard process, which might either suggest an alternative process route or a process violation.
Unfinished Process: There are some process variants that do not end with "Payment Handled", such as "Declaration SUBMITTED by EMPLOYEE -> Declaration REJECTED by ADMINISTRATION -> Declaration REJECTED by EMPLOYEE" and "Declaration SUBMITTED by EMPLOYEE -> Declaration REJECTED by SUPERVISOR -> Declaration REJECTED by EMPLOYEE". This could mean that these processes are incomplete, or that the payment handling is being done outside of the tracked process.
Unexpected REJECTION: There are a few process variants where after being "FINAL_APPROVED by SUPERVISOR", the declaration is "REJECTED by MISSING". This is unexpected as the declaration should ideally not be rejected after final approval. This anomaly might indicate some errors or issues in the process or system.
§.§ Hypothesis Formulation and Verification
We provide an example of hypothesis formulation and verification on top of the Road Traffic Fine Management event log. For this, we formulate the prompt in Listing <ref>, containing the top variants of the event log,
and a summary of the numerical attributes of the event log.
In preliminary tests, GPT-4 required details about case identifier, activity, and timestamp attributes. It also attempted to access a non-existent variant attribute and needed guidance to compute the case's duration. Yet, given the limited information, GPT-4 surprisingly formulated plausible and testable hypotheses.
GPT-4 generates various hypotheses for the given event log, including a supposed influence of the 'expense' attribute on 'Payment' activity occurrence. Testing this hypothesis using the SQL query in Listing <ref> shows it to be inaccurate, as the minor difference in average expenses between cases with and without payment isn't statistically significant. Given these results, GPT-4 suggests examining the 'amount' attribute's influence on payment presence, recognizing its initial hypothesis as unsubstantiated.
caption=Prompt provided to GPT-4 for hypothesis formulation on the Road Traffic Fine Management event log.
label=lst:listingHypothesisFormulation
If I have a process with the following process variants:
Create Fine -> Send Fine -> Insert Fine Notification -> Add penalty -> Send for Credit Collection ( frequency = 56482 performance = 59591524.946000494 )
Create Fine -> Payment ( frequency = 46371 performance = 889688.4000776347 )
Create Fine -> Send Fine ( frequency = 20385 performance = 8380516.026490066 )
...
and the log of the process contains the following attributes:
amount empty: 331240 quantiles: 0.0: 0.0, 0.25: 33.6, 0.5: 38.0, 0.75: 71.5, 1.0: 8000.0
article empty: 411100 quantiles: 0.0: 7.0, 0.25: 7.0, 0.5: 157.0, 0.75: 157.0, 1.0: 401.0
expense empty: 457483 quantiles: 0.0: 0.0, 0.25: 10.0, 0.5: 11.88, 0.75: 13.5, 1.0: 76.0
time:timestamp empty: 0 quantiles: 0.0: Timestamp('2000-01-01 00:00:00+0000', tz='UTC'), 0.25: Timestamp('2002-11-12 00:00:00+0000', tz='UTC'), 0.5: Timestamp('2006-02-28 00:00:00+0000', tz='UTC'), 0.75: Timestamp('2009-03-30 00:00:00+0000', tz='UTC'), 1.0: Timestamp('2013-06-18 00:00:00+0000', tz='UTC')
...
Can you provide some hypothesis between the execution of the process and its attributes? I mean, can you provide me a DuckDB SQL query that I can execute, and return the results to you, in order for you to evaluate such hypothesis about the process? More in detail, the data is stored in a Pandas dataframe where each row is an event having the provided attributes (so there are no separate table containing the variant). Can you tell me in advance which hypothesis you want to verify? Please consider the following information: the case identifier is called "case:concept:name", the activity is stored inside the attribute "concept:name", the timestamp is stored inside the attribute "time:timestamp", the resource is stored inside the attribute "org:resource", there is not a variant column but that can be obtained as concatenation of the activities of a case, there is not a duration column but that can be obtained as difference between the timestamp of the first and the last event. Also, the dataframe is called "dataframe". You should use the EPOCH function of DuckDB to get the timestamp from the date.
caption=SQL code provided by GPT-4 for the verification of an hypothesis on the entire dataset
label=lst:sqlCodeHypothesis
WITH case_has_payment AS (
SELECT "case:concept:name",
MAX(CASE WHEN "concept:name" = 'Payment' THEN 1 ELSE 0 END)
as has_payment
FROM dataframe
GROUP BY "case:concept:name"
)
SELECT has_payment, AVG(expense) as avg_expense
FROM dataframe
JOIN case_has_payment ON
dataframe."case:concept:name" = case_has_payment."case:concept:name"
GROUP BY has_payment;
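A possible way to close the verification loop is sketched below: the SQL returned by GPT-4 is executed with DuckDB directly against the in-memory pandas dataframe (DuckDB resolves the table name "dataframe" to the in-scope variable), and the textual result is then pasted back into the conversation. The file path is a hypothetical placeholder, and this sketch only illustrates one plausible workflow.
import duckdb
import pm4py

# full event log as a pandas dataframe named "dataframe", matching the prompt above
dataframe = pm4py.read_xes("road_traffic_fine_management.xes")  # hypothetical path

query = """
WITH case_has_payment AS (
    SELECT "case:concept:name",
           MAX(CASE WHEN "concept:name" = 'Payment' THEN 1 ELSE 0 END) AS has_payment
    FROM dataframe
    GROUP BY "case:concept:name"
)
SELECT has_payment, AVG(expense) AS avg_expense
FROM dataframe
JOIN case_has_payment
  ON dataframe."case:concept:name" = case_has_payment."case:concept:name"
GROUP BY has_payment;
"""
result = duckdb.query(query).to_df()  # DuckDB can query in-scope pandas dataframes by name
print(result.to_string())
# the printed result is returned to GPT-4, which confirms, refines, or dismisses its hypothesis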
§.§ Limitations, Open Challenges, and Opportunities
The results indicate that GPT-4's proficiency in addressing advanced conformance and process improvement queries improves with mainstream and standardized processes. Generally, GPT-4 exhibits substantial process mining understanding, albeit with the need for simple instructions for computing variants and throughput time. Notably, it was intriguing that GPT-4 could decipher the entire process execution from the DFG abstraction.
Nonetheless, these insights warrant validation against a wider array of questions and event logs. Additionally, the assessment of the proposed questions was based on the stated acceptance criteria, which is somewhat subjective, and alternative criteria could be employed. Consequently, the presented case study should be regarded as a preliminary exploration of LLMs' applicability in process mining.
§ CONCLUSION
The findings of this study provide promising indications for the application of Large Language Models (LLMs) in process mining, underscoring their potential in handling complex queries and process interpretations. LLMs, such as GPT-4, demonstrate impressive proficiency in understanding and analyzing process structures, highlighting the vast opportunities these models could bring to the field.
However, several challenges persist. One key concern is privacy - a considerable number of companies may be reticent to upload their core data to public LLMs like GPT-4 due to the sensitivity of the information involved. This brings to the fore the need for private LLMs, which can balance the utility of large-scale language models with the security needs of individual organizations.
To address privacy concerns, proprietary LLMs could be developed, trained on a mix of general and company-specific data. While current open-source models lag behind GPT-4, they are improving, suggesting the feasibility of private, customized LLMs. These models could potentially enhance process mining's efficiency and adaptability.
splncs04
|
http://arxiv.org/abs/2307.01115v1
|
20230703154514
|
MeT: A Graph Transformer for Semantic Segmentation of 3D Meshes
|
[
"Giuseppe Vecchio",
"Luca Prezzavento",
"Carmelo Pino",
"Francesco Rundo",
"Simone Palazzo",
"Concetto Spampinato"
] |
cs.CV
|
[
"cs.CV",
"cs.GR"
] |
MeT: A Graph Transformer for Semantic Segmentation of 3D Meshes
Giuseppe Vecchio1,
Luca Prezzavento1,
Carmelo Pino2,
Francesco Rundo2,
Simone Palazzo1,
Concetto Spampinato1
1 Department of Computer Engineering, University of Catania
2 ADG, R&D Power and Discretes, STMicroelectronics
August 1, 2023
Polygonal meshes have become the standard for discretely approximating 3D shapes, thanks to their efficiency and high flexibility in capturing non-uniform shapes. This non-uniformity, however, leads to irregularity in the mesh structure, making tasks like segmentation of 3D meshes particularly challenging.
Semantic segmentation of 3D meshes has typically been addressed through CNN-based approaches, leading to good accuracy.
Recently, transformers have gained enough momentum both in NLP and computer vision fields, achieving performance at least on par with CNN models, supporting the long-sought architecture universalism.
Following this trend, we propose a transformer-based method for semantic segmentation of 3D meshes, motivated by better modeling of the graph structure of meshes by means of global attention mechanisms.
In order to address the limitations of standard transformer architectures in modeling relative positions of non-sequential data, as in the case of 3D meshes, as well as in capturing the local context, we perform positional encoding by means of the Laplacian eigenvectors of the adjacency matrix, replacing the traditional sinusoidal positional encodings, and we introduce clustering-based features into the self-attention and cross-attention operators.
Experimental results, carried out on three sets of the Shape COSEG Dataset <cit.>, on the human segmentation dataset proposed in <cit.> and on the ShapeNet benchmark <cit.>, show how the proposed approach yields state-of-the-art performance on semantic segmentation of 3D meshes.
§ INTRODUCTION
Three-dimensional (3D) shapes are at the core of computer graphics and play an important role in many daily-life applications such as vision, robotics, medicine, augmented reality, and virtual reality.
In recent years, many approaches have been proposed to encode real-world shapes, including 3D meshes <cit.> and point clouds <cit.>.
Meshes have become widely adopted to represent complex real-world objects, which are commonly composed of continuous surfaces, through a discrete approximation.
The mesh is an efficient way to represent non-uniform surfaces, from simple shapes that generally require only a small number of polygons, up to arbitrarily complex objects, where the number of required polygons may increase significantly. The advantages presented by a mesh are particularly evident when compared to other forms of representation, like point clouds, which fall short when higher quality and preservation of sharp shape features are required.
With the increasing spread of deep learning techniques in many fields, research has tried to apply approaches from computer vision to 3D shape analysis. Convolutional neural networks (CNNs), in particular, have demonstrated outstanding performance on a variety of image-related tasks such as classification <cit.> and semantic segmentation <cit.>. However, CNNs are designed to work on images, which are represented on a regular grid of discrete values, far from the irregular representation of 3D shapes. On the other hand, representing 3D objects through volumetric grids, e.g. mapping 3D shapes to multiple 2D
projections <cit.> or 3D voxel grids <cit.>, is extremely inefficient and leads to computational costs that increase exponentially with higher resolution.
Recent approaches have tried to directly apply CNNs to the sparse point cloud representation <cit.>. These approaches have a substantial gain in terms of efficiency, but present an ill-defined notion of neighborhoods and connectivity and are inherently oblivious to the local surface. This issue makes the application of convolution and pooling operations non-trivial.
To overcome this limitation, several works have recently tried to generalize CNN architectures to non-Euclidean domains such as graphs, and incorporate neighborhood information <cit.>.
Other approaches have tried to apply deep neural networks to 3D meshes <cit.>. One recent example is MeshCNN <cit.>, which obtained state-of-the-art results on several segmentation datasets.
A recent trend in computer vision revolves around the use of transformer-based architectures, originally introduced for NLP <cit.>, for vision tasks <cit.>. The success of transformers lies in their extensive attention mechanism, which allows the network to learn global correlations between inputs. This property makes transformers able to intrinsically operate on fully-connected graphs.
However, when dealing with sparse graphs, transformers show evident limitations, mainly because the sinusoidal positional encoding is not able to exploit the graph topology and because local attention operators are lacking. Recently, <cit.> proposed an approach to extend the transformer architecture to arbitrary graphs. It introduces a graph transformer architecture which leverages the graph connectivity inductive bias, exploiting the graph topology. In particular, they 1) propose a new attention mechanism, 2) replace the positional encoding with the Laplacian eigenvectors, 3) re-introduce batch normalization layers, and 4) take into account edge feature representation.
Inspired by this work, and leveraging the structure of a mesh, which can be represented as a graph where the nodes correspond to vertices connected by polygon edges, we propose MeT, a transformer-based architecture for semantic mesh segmentation. In particular, our approach embeds locality features by means of the Laplacian operator (as in <cit.>) and by combining polygon features with clustering-based features in a novel two-stream transformer layer architecture, where features from the two modalities are extracted through self-attention and combined through cross-attention. Additionally, we ensure that the graph structure inferred from the input mesh affects the attention operators, by injecting adjacency and clustering information as attention masks.
We evaluate our method on a variety of publicly-available mesh datasets of 3D objects and human bodies; in our experiments, the proposed approach is able to outperform previous works, both qualitatively and quantitatively.
To sum up, the key contributions of this work are:
* We enforce graph locality in the transformer through a combination of a clustering-based information operator with Laplacian positional encoding in place of sinusoidal positional encoding.
* We introduce novel self-attention and cross-attention mechanisms, specifically designed for mesh segmentation, that take into account adjacency and clustering information to mask elements and further impose locality.
* Experimental results on multiple standard benchmarks with different types of meshes show that our model outperforms, both quantitatively and qualitatively, existing mesh segmentation methods, setting new state-of-the-art performance on the task.
§ RELATED WORK
Meshes represent a way to describe 3D objects. They consist of vertices, edges and faces that define the shape of a polyhedral object. In this work we will focus on triangular meshes, i.e., meshes where all the faces are triangles.
§.§ Mesh segmentation
The semantic segmentation of 3D meshes is the process of assigning a label to each face. The task of semantic segmentation for meshes has applications in many fields such as robotics, autonomous driving, augmented reality and medical images analysis.
Following the success of deep learning, several CNN-based methods have been applied to 3D meshes to tackle the task of mesh segmentation <cit.>.
We hereby present an overview of relevant work on 3D data analysis using neural networks, grouped by input representation type.
Volumetric. A common approach represents the 3D shape in a binary voxel form, the 3D analogue of a 2D grid such as an image. This allows operations defined on 2D grids to be extended to 3D grids, and thus any common image-based approach to be applied in the shape domain. This concept was first introduced by <cit.>, who present a CNN that processes voxelized shapes for classification and completion.
Following this approach, <cit.> introduce a shape reconstruction method using a voxel-based variational autoencoder. In 2019, <cit.> present ALIGNet, which estimates a deformation on a voxel representation and then applies it to the original mesh.
Although voxels are easy to process and existing methods are easy to extend to them, this kind of representation is computationally and memory expensive. Resource-efficient methods to process volumetric representations are an open research field, with several approaches being proposed <cit.>.
Sparse convolutions allow a further reduction of computational and memory requirements, leading to more efficient approaches <cit.>, but suffer from inaccurate
position information due to voxelization.
Graph.
Another family of approaches leverages the ability to represent meshes as a graph structure.
We distinguish between two main approaches for graph processing: one relies on the spectral properties of graphs <cit.>; the second directly processes graphs by extracting locally connected regions and transforming them into a canonical form for a neural network <cit.>.
In 2017, <cit.> propose a new architecture called Directionally Convolutional Network (DCN) that extends CNNs by introducing a rotation-invariant convolution and a pooling operation on the surface of 3D shapes. In particular, they propose a two-stream segmentation framework: one stream uses the proposed DCN with face normals as the input, while the other one is implemented by a neural network operating on the face distance histogram. The learned shape representations from the two streams are fused by an element-wise product. Finally, Conditional Random Field (CRF) is applied to optimize the segmentation.
<cit.> propose SyncSpecCNN, a spectral CNN with weight sharing in the spectral domain spanned by graph laplacian eigenbases, to tackle the task of 3D segmentation.
<cit.> propose a Graph Neural Network (GNN) which exploits the Dirac operator to leverage extrinsic differential geometry properties of three-dimensional surfaces. These methods generally operate on the vertices of a graph.
Manifold.
<cit.>, with the Geodesic Convolutional Neural Networks, and <cit.>, with the Anisotropic Convolutional Neural Networks, proposed two different CNN-based architectures for triangular mesh segmentation.
In 2019, MeshCNN was proposed by <cit.>. This architecture differs from the previous ones by working on mesh edges rather than faces. MeshCNN combines specialized convolution and pooling layers that operate on the mesh edges by leveraging their intrinsic geodesic connections. Convolutions are applied on edges and the four edges of their incident triangles, and pooling is applied via an edge collapse operation that retains surface topology, thereby generating new mesh connectivity for the subsequent convolutions. MeshCNN learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones.
Other approaches <cit.> propose alternative solutions to the segmentation task. MeshWalker <cit.> represents a mesh's geometry and topology by a set of random walks along the surface; these walks are fed to a recurrent neural network. HodgeNet <cit.>, instead, tackles the problem relying on spectral geometry, and proposes parallelizable algorithms for differentiating eigencomputation, including approximate backpropagation without sparse computation. Finally, DiffusionNet <cit.> introduces a general-purpose approach to deep learning on 3D surfaces, using a simple diffusion layer to agnostically represent any mesh.
§.§ Graph transformers
Since their introduction, Transformers <cit.> have demonstrated their wide applicability to many different tasks, from NLP to Computer Vision.
The original transformer was designed for handling sequential data in NLP, and operates on fully connected graphs representing all connections between the words in a sentence. However, when dealing with sparse graphs, transformers perform poorly.
Recently, several attempts to adapt transformers to graphs have been proposed <cit.> focusing on heterogeneous graphs, temporal networks and generative modeling <cit.>.
In 2019, <cit.> introduce a model employing attention on all graph nodes, instead of a node's local neighbors, to capture global information. This approach limits the exploitation of sparsity, which is a good inductive bias for learning on graph datasets, as shown in <cit.>.
To learn global information, other approaches involve the use of graph-specific positional features <cit.>, node Laplacian position eigenvectors <cit.>, relative learnable positional information <cit.> and virtual nodes <cit.>.
<cit.> propose an approach to extend the transformer architecture to arbitrary graphs. It introduces a graph transformer architecture with four new properties compared to the standard model, which are:
1) an attention mechanism which is a function of the neighborhood connectivity for each node in the graph; 2) positional encoding represented by the Laplacian eigenvectors, which naturally generalize the sinusoidal positional encoding often used in NLP; 3) a batch normalization layer in contrast to the layer normalization; 4) edge feature representation.
MeshFormer <cit.> proposes a mesh segmentation method based on graph transformers, which uses a boundary-preserving simplification to reduce the data size, a Ricci flow-based clustering algorithm for constructing hierarchical structures of meshes, and a graph transformer with cross-resolution convolutions, which extracts richer high-resolution semantic features.
Recently, <cit.> introduced a novel method for 3D mesh segmentation named Navigation Geodesic Distance Transformer (NGD-Transformer). It exploits the manifold properties of the mesh through a novel positional encoding called navigation geodesic distance positional encoding, which encodes the geodesic distance between vertices.
Our work takes inspiration from <cit.> and proposes a transformer-based architecture for tackling 3D meshes represented as graphs. As in <cit.> we employ a positional encoding represented by the Laplacian eigenvectors of the adjacency matrix and a pre-layer batch normalization. However, we extend the original approach by adapting the architecture to 3D meshes, particularly, by proposing two cross-attention modules (similarly to decoder layers) learning local and global representations on 3D meshes and clusters thereof.
§ METHOD
In this work, we propose a novel transformer-based architecture for semantic segmentation of 3D meshes. The proposed method takes inspiration from recent vision transformer architectures <cit.> and spectral methods for graphs, in order to create an embedding in a Euclidean space with the topological features of the mesh.
Given a triangular mesh, described as a set of V vertices {v_k = (x_k, y_k, z_k)}_k=1,…,V and a set of N triangles {t_i = (k_i,1, k_i,2, k_i,3, n_i)}_i=1,…,N, where each triangle is defined by its three vertices and the normal direction n_i of its surface, the goal is to assign a class c_i ∈ C to each triangle t_i, representing the dominant class on the surface of the triangle.
§.§ Feature extraction
For each triangle, we initially extract a set of features based on spectral properties of the triangle graph (where triangles are nodes, and shared sides are edges), which is the dual of the mesh (where vertices are nodes, and triangle sides are edges).
The process starts by building the adjacency matrix A, of size N×N (N being the number of triangles in the mesh), such that A_ij=1 if the i-th and the j-th triangles share an edge, and A_ij=0 otherwise.
From the adjacency matrix A, we then compute the symmetric normalized Laplacian matrix as:
L = I - D^-1/2 A D^-1/2,
where I is the identity matrix and D is the degree matrix for A, i.e., a diagonal matrix such that D_ii is the number of edges connected to i (equivalently, the sum of the elements in the i-th row of A).
Then, we select the E (with E ≤ N) eigenvectors associated with the smallest non-zero eigenvalues. The i-th components of these E eigenvectors, collected in the vector l_i, are then used to encode the location of the i-th triangle within the mesh. We employ these features as a positional encoding in the transformer, as described by <cit.>, and to identify local neighborhoods by means of clustering (described in the next section).
Formally, given a triangle t_i = (k_1, k_2, k_3, n_i), where n_i is its normal vector direction, obtained by computing the vector product ñ_i = (v_k_2 - v_k_1) × (v_k_3 - v_k_1) and then normalizing it as n_i = ñ_i / ‖ñ_i‖, we obtain the feature representation f_i for triangle i as f_i = (v_k_1, v_k_2, v_k_3, n_i, l_i).
Fig. <ref> shows the visualization of the Laplacian eigenvectors for a mesh.
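A minimal Python sketch of this feature extraction step is given below; it assumes a dense adjacency matrix and drops the eigenvectors associated with (near-)zero eigenvalues, which matches the described selection only up to numerical details.
import numpy as np

def laplacian_positional_encoding(A, E=16):
    # A: (N, N) binary adjacency matrix of the dual (triangle) graph
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    idx = np.flatnonzero(eigvals > 1e-8)[:E]      # E smallest non-zero eigenvalues
    return eigvecs[:, idx]                        # row i is the positional encoding l_i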
§.§ Clustering
Before being processed by the network, triangles' features are clustered in M=V/λ clusters, where λ is a configurable parameter, controlling the average number of mesh vertices per cluster. As we describe in detail when presenting our transformer architecture, we introduce clustering as an additional and more explicit way than positional encoding to enforce locality on the features extracted for each triangle. Clustering is carried out using the Ward method <cit.>, which applies constraints on the connectivity dictated by the dual graph adjacency matrix, generating clusters geometrically and topologically connected and cohesive.
The result is a matrix J with shape N×M, such that J_im = 1 if the i-th triangle is in the m-th cluster, and J_im = 0 otherwise. Each row j_i in J can be interpreted as the one-hot cluster representation for the i-th triangle.
Fig. <ref> shows an example of mesh triangles clustering.
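The constrained clustering step can be prototyped, for instance, with scikit-learn's agglomerative clustering, as in the hedged sketch below; the exact features clustered and the library used in the original implementation are not specified here and are assumptions.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import AgglomerativeClustering

def cluster_triangles(features, A, n_vertices, lam=8):
    # features: (N, d) per-triangle features; A: (N, N) dual-graph adjacency
    M = max(1, n_vertices // lam)                     # M = V / lambda clusters
    labels = AgglomerativeClustering(
        n_clusters=M, linkage="ward", connectivity=csr_matrix(A)
    ).fit_predict(features)
    J = np.zeros((len(features), M))
    J[np.arange(len(features)), labels] = 1.0         # one-hot cluster membership
    return J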
§.§ Network architecture
Mesh Transformer. The proposed architecture implements a transformer model with two internal feature extraction streams, one for triangle features and one for cluster features, organized in matching sets (i.e., the i-th element of the triangle set corresponds to the i-th element of the cluster set). The two sets of features are processed by a cascade of transformer layers; only features from the triangle stream are finally used for prediction through a two-layer feedforward network, which predicts for each triangle a score vector s_i, of size equal to the number of segmentation classes.
Given the extracted features f_i and cluster identifier j_i for each mesh triangle, we first convert them into two sequences of tokens, to be provided as input to the transformer layers. Triangle tokens x_i are obtained as:
x_i = FF_t( f_i )
where FF_t is a feedforward layer with ReLU activation[All feedforward layers in our model have ReLU activations.], of output size d_t. Cluster tokens p_i, of size d_p, are obtained by a learnable embedding layer on the corresponding one-hot cluster identifier j_i. Matrices X ∈ R^(N × d_t) and P ∈ R^(N × d_p) are defined by laying each token as a row in the corresponding matrix.
Each network layer, illustrated in Fig. <ref>, can thus be defined as a function L_i(·, ·) on token sequences:
L_i( X, P ) = ( R_i( SA_t,i( TC_i( X, P ) ) ), R_i( SA_p,i( CT_i( X, P ) ) ) )
where SA_t,i and SA_p,i are, respectively, the triangle and cluster self-attention functions, R_i is a residual connection function, and TC_i and CT_i are, respectively, the function updating triangle tokens from cluster tokens and vice versa. The output of each layer has the same dimensions as the input, allowing for arbitrary length of encoder sequences.
Multi-head attention. Before introducing the details of the encoder layers, let us present a general formulation of multi-head attention, which is extensively employed in the proposed architecture. An attention function A receives three matrices Q ∈ R^(N_q × d_k) (query), K ∈ R^(N_k × d_k) (key) and V ∈ R^(N_k × d_v) (value), and returns a matrix in R^(N_q × d_v), where each row is computed as a linear combination of rows from V, weighted by normalized dot-product similarity between rows of Q and K, as follows:
A(Q, K, V) = softmax( Q K^⊤ / √(d_k) ) V
where softmax is applied on rows of the input matrix.
In multi-head attention, in order to capture several possible attention patterns between elements, Q, K and V are usually computed by linearly projecting a set of input matrices Q̂ ∈ R^(N_q × d̂_k), K̂ ∈ R^(N_k × d̂_k) and V̂ ∈ R^(N_k × d̂_v) through multiple sets of projection matrices {( W_q,i, W_k,i, W_v,i ) }_i=1,…,h, with h being the number of heads. The attention outputs for each set of projection matrices are then concatenated and linearly projected to produce the final output, as follows:
MA(Q̂, K̂, V̂) = concat(H_1, …, H_h) W_O
where W_O ∈ R^(N × d_o) is a linear projector to the desired output dimension, and H_i is the output of the i-th attention head:
H_i = A( Q̂ W_q,i, K̂ W_k,i, V̂ W_v,i )
The amount of computation required for multi-head attention is approximately the same as in single-head attention, by uniformly splitting dimensions d_q, d_k and d_v among the h heads.
In this work, for simplicity, we set d_q = d_k = d_v = d_o = d, whose specific value depends on where multi-head attention is employed in the network, as described below.
Self-attention for cluster tokens. The architecture of the self-attention module for cluster tokens is presented in Fig. <ref>. The module receives the set of cluster tokens P and applies a function SA_p defined as:
P_n = PLN( P )
SA_p ( P ) = MA( P_n, P_n, P_n ) + P
where PLN is Pre-Layer Normalization <cit.>, which has been shown to improve training of transformer architectures, and query, key and value matrices are all set to P_n, as is typical of self-attention. A final residual connection is applied to improve gradient flow. The d size is set to d_p, i.e., the size of the input cluster tokens.
Self-attention for triangle tokens, illustrated in Fig. <ref>, shares the same architecture as the self-attention module for clusters, but it employs the adjacency matrix A as a mask for multi-head attention computation. The choice to adopt an adjacency-based attention masking mechanism is due to the need to preserve the capacity of the model to capture both local composition and long-range dependency <cit.> and to reduce computation requirements for high-resolution meshes by exploiting the sparsity of the A matrix. To carry out masked multi-head attention, the attention function in Eq. <ref> is modified by adding a mask whose masked positions are set to -∞ in the query-key similarity matrix, in order to nullify the corresponding softmax terms. The resulting attention function A_mask is defined as:
A_mask(Q, K, V, M) = softmax( ( Q K^⊤ + M ) / √(d_k) ) V
where elements of M are either 0 or -∞. We can thus define our self-attention module for triangles as:
X_n = PLN( X )
SA_t ( X ) = MA_mask( X_n, X_n, X_n, Â ) + X
where MA_mask is the variant of MA employing A_mask as the attention function, and Â = log A, so that Â_ij = -∞ where A_ij = 0, and Â_ij = 0 where A_ij = 1. The d size for multi-head attention is set to d_t, i.e., the size of the input triangle tokens.
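A simplified single-head NumPy sketch of the masked attention operator is shown below; the actual model uses its multi-head variant, and the diagonal handling is an assumption made here so that every triangle can always attend to itself.
import numpy as np

def masked_attention(Q, K, V, mask):
    # softmax((QK^T + mask)/sqrt(d_k)) V, with masked entries of `mask` set to -inf
    d_k = Q.shape[-1]
    scores = (Q @ K.T + mask) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def adjacency_mask(A):
    # hat(A) = log A: 0 where triangles are adjacent, -inf elsewhere
    mask = np.where(A > 0, 0.0, -np.inf)
    np.fill_diagonal(mask, 0.0)                    # assumption: self-attention always allowed
    return mask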
Updating cluster representation from triangle tokens. The cluster-triangle update module is introduced to update the clusters' representation w.r.t. the triangles', thus allowing the network to exchange information between the two different modalities employed for modeling graph structure, i.e., Laplacian eigenvectors and clustering. To this aim, we employ masked multi-head attention using cluster tokens for computing query vectors, and triangle tokens to compute keys and values; in order to aggregate, for each cluster, only information of the triangles contained in it, we compute a symmetric matrix C from the clustering matrix J by setting C_ij = 1 if triangles i and j belong to the same cluster, i.e., j_i = j_j, and C_ij = 0 otherwise.
The architecture of the cluster-triangle update module is presented in Fig. <ref>, and implements the following function:
P_n = PLN( P )
CT( X, P ) = MA_mask( P_n, X, X, Ĉ ) + P
where Ĉ = log C, as above. The d dimension for multi-head attention is set to d_p, i.e., the size of input cluster tokens.
Updating triangle representations from cluster tokens.
A triangle-cluster update module is also used to update triangle representations with respect to clusters. Similarly to the cluster-triangle case, each triangle is affected only by elements belonging to the same cluster. The cross-attention module computes the sum between each triangle token and a projection, through a single feed-forward layer, of the average of the cluster tokens belonging to the same cluster, as follows:
X_n = PLN( X )
TC( X, P ) = X_n + FF_TC( P̄ )
where FF_TC is a single feedforward layer and P̄ is the matrix whose i-th row is the average of the cluster tokens of the cluster containing the i-th triangle.
The architecture of the triangle update module is presented in Fig. <ref>. This operation can be interpreted as a form of cross-attention between triangle tokens and cluster tokens, where the former attend to the latter by means of a constant attention factor defined by cluster membership.
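The triangle update from cluster tokens can be sketched as follows (a hedged NumPy illustration; FF_TC is modeled as a single dense layer with ReLU, and the weight matrices are placeholders):
import numpy as np

def update_triangles_from_clusters(X_n, P, labels, W_tc, b_tc):
    # X_n: (N, d_t) normalized triangle tokens; P: (N, d_p) cluster tokens
    # labels: (N,) cluster index of each triangle; W_tc: (d_p, d_t), b_tc: (d_t,)
    M = labels.max() + 1
    means = np.stack([P[labels == m].mean(axis=0) for m in range(M)])  # per-cluster mean token
    proj = np.maximum(means @ W_tc + b_tc, 0.0)                        # FF_TC with ReLU
    return X_n + proj[labels]                                          # broadcast back to triangles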
Layer residual connection.
The output of each token stream of a network layer finally undergoes a feedforward residual transformation, to independently transform each token, as follows:
Z_n = PLN( Z )
R( Z ) = FF_R( Z_n ) + Z
where Z is either X or P, and FF_R is a feedforward layer.
The architecture of the residual module is presented in Fig. <ref>.
§ EXPERIMENTAL RESULTS
In this section, we first introduce the datasets employed in our work: the COSEG Shapes dataset <cit.>, and the Human segmentation datasets proposed by <cit.>.
Then, we evaluate the accuracy of our approach on the two different datasets. First, we assess how the model performs in three categories of the COSEG dataset, namely, Chairs, Vases and Tele-Aliens; afterwards, we evaluate our method on the segmentation of human body meshes as well as on the ShapeNet dataset <cit.>.
An ablation study then follows to substantiate the choices of the architecture components.
As a methodological note on the evaluation, for the comparison of MeT with existing methods, we report the performance values reported in the original papers on the considered benchmarks.
§.§ Datasets and metrics
We test the performance of MeT and compare it with that of existing models on three standard benchmarks, namely, the Shape COSEG dataset <cit.>, the Human dataset <cit.> and the ShapeNet dataset <cit.>.
The Shape COSEG dataset <cit.> consists of 11 sets of shapes with a consistent ground-truth segmentation and labeling: 8 sets are rather small and come from the dataset by <cit.>, while the 3 remaining ones contain, respectively, tele-aliens, vases and chairs. Given the scale of the tele-aliens, vases and chairs sets compared to the other eight sets, we used only them to evaluate the performance of MeT. Train and test splits are the same as defined in MeshCNN <cit.> for a fair comparison. As validation set we use 6% of the training set.
We also evaluate our method on the human segmentation dataset introduced by <cit.>. It consists of human meshes from several datasets, in particular SCAPE, FAUST, MIT Animation and SHREC 2007. The latter is used as the test set, as in the MeshCNN <cit.> paper.
ShapeNet <cit.> is a large-scale repository of shapes represented by 3D models of objects categorized following the WordNet taxonomy. ShapeNet contains semantic annotations about object parts as well as for rigid alignments, bilateral symmetry planes, physical sizes and other annotations.
§.§ Model training and evaluation
We train our model with mini-batch gradient descent, using the AdamW <cit.> optimizer and a batch size of 12. Learning rate is set to 5· 10^-5 with a weight decay of 0.01.
Dropout with probability 0.1 is used after each feedforward layer and multi-head attention in the transformer encoder, and after each feedforward layer in the classification network.
The value of the λ parameter controlling the feature clustering, described in Sec. <ref>, is 8 for all the experiments, while token dimensions are set as d_t = 512 and d_p = 1024. All these parameters were set by measuring performance on a validation set extracted from the COSEG dataset.
The cross-entropy loss function is used, weighted for each triangle based on its surface (larger triangles have more weight).
We perform data augmentation by applying random translation, rotation and scaling for each mesh in a mini-batch.
Accuracy is computed, as in DCN <cit.>, as the total surface of triangles correctly classified over the entire surface.
Data preprocessing. Similarly to MeshCNN, each mesh in the dataset is preprocessed by reducing the number of vertices to a maximum of 1200 using the algorithm proposed by <cit.>. Duplicated vertices are merged and "padding" triangles are added to allow batched processing of meshes. After preprocessing, each mesh consists of 2412 triangles. Padding triangles are not adjacent to any mesh triangle and do not influence the final prediction. Vertex coordinates are standardized between -1 and 1. The A and J matrices are extended to include the padding triangles.
We first evaluate our models on the Chairs, Vases and Tele-Aliens subsets of the COSEG dataset. For each set, we report the performance, in terms of accuracy.
Tab. <ref> shows that our approach achieves a higher global accuracy on all the COSEG sets compared with state-of-the-art methods.
Fig. <ref> shows segmentation examples for each mesh set.
Mesh segmentation performance on the Human dataset <cit.> is reported in Tab. <ref>, showing that our approach also outperforms three state-of-the-art algorithms on this benchmark. Fig. <ref> shows qualitative results for the predicted segmentation.
Finally, we compute mesh segmentation performance on the ShapeNet dataset <cit.>, which is shown in Tab. <ref>. Also on this benchmark, MeT yields better accuracy than state-of-the-art methods.
§.§ Ablation study
We perform an ablation study, on the three subsets of the COSEG dataset, to substantiate our design choices.
We first assess how each component in the triangle representation affects performance, namely, triangle coordinates, surface normal and Laplacian positional encoding. Results in Tab. <ref> show that all the input features positively affect accuracy. However, the highest contribution to the final performance is provided by the Laplacian positional encoding.
We then assess the importance of the cluster-related stream described in Sec. <ref>, i.e., cluster self-attention and cluster-triangle cross-attention. A comparison of the model accuracy with and without the cluster modules is presented in Tab. <ref>, where we can see that the cluster modules lead to a gain of 3.6 percentage points over the baseline, i.e., the model using only triangle information.
§ CONCLUSION
In this work, we introduce a novel transformer-based architecture for 3D mesh segmentation. Our approach successfully and significantly extends standard transformers with features specifically designed for the task at hand. First, we introduce a two-stream processing pipeline within each transformer layer, designed to enforce locality through the combination of mesh triangle features and clustering-based features, and by integrating spectral graph properties, through Laplacian eigenvectors, to replace the classic sinusoidal positional encoding. Additionally, we adapt the typical attention mechanisms of transformers by taking into account graph properties, and in particular by using the adjacency matrix and triangle clustering to explicitly mask multi-head self- and cross-attention.
Experimental results, evaluated on multiple object categories, show that the resulting approach is able to outperform state-of-the-art methods on mesh segmentation, and demonstrate the positive impact of our architectural novelties by means of extended ablation studies.
To conclude, we show that transformer models — in spite of their characteristics for global processing and limitations with representing locality in sparse graphs — can be successfully adapted to mesh analysis, by carefully integrating methodological adjustments designed to capture mesh properties in a complex task such as segmentation.
§ ACKNOWLEDGEMENTS
This research is supported by the project Future Artificial Intelligence Research (FAIR) PNRR MUR Cod. PE0000013- CUP: E63C22001940006 and by the project “LEGO.AI: LEarning the Geometry of knOwledge in AI systems”, n. 2020TA3K9N.
IEEEtran
|
http://arxiv.org/abs/2307.01609v1
|
20230704095013
|
A Language Model for Grammatical Error Correction in L2 Russian
|
[
"Nikita Remnev",
"Sergei Obiedkov",
"Ekaterina Rakhilina",
"Ivan Smirnov",
"Anastasia Vyrenkova"
] |
cs.CL
|
[
"cs.CL"
] |
N. Remnev et al.
HSE University, 11 Pokrovsky Bulvar, Moscow, Russia
remnev.nikita@gmail.com, sergei.obj@gmail.com, erakhilina@hse.ru, smirnof.van@gmail.com, avyrenkova@hse.ru
A Language Model for Grammatical Error Correction in L2 Russian
Nikita Remnev0000-0002-1816-3823 Sergei Obiedkov0000-0003-1497-4001Ekaterina Rakhilina0000-0002-7126-0905 Ivan Smirnov0000-0001-8361-0282 Anastasia Vyrenkova0000-0003-1707-7525
Grammatical error correction is one of the fundamental tasks in Natural Language Processing. For the Russian language, most of the spellcheckers available correct typos and other simple errors with high accuracy, but often fail when faced with non-native (L2) writing, since the latter contains errors that are not typical for native speakers. In this paper, we propose a pipeline involving a language model intended for correcting errors in L2 Russian writing. The language model proposed is trained on untagged texts of the Newspaper subcorpus of the Russian National Corpus, and the quality of the model is validated against the RULEC-GEC corpus.
§ INTRODUCTION
Grammatical error correction (GEC) is one of the fundamental tasks in Natural Language Processing (NLP). Errors of different types challenge GEC systems at different levels. Most current systems are sufficiently good in dealing with typos, which can usually be corrected by considering the words that are closest to the erroneous word in terms of edit distance <cit.>. A language model can help choose the right word among several options at the same distance. More complex cases include words with several typos, agreement and lexical choice errors, and other types of errors. To address such cases, it is possible to apply manually designed rules <cit.> or use machine learning methods, in particular, approaches based on classification <cit.> or machine translation <cit.>, which require a large amount of data (most often, tagged) for training.
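As a rough illustration of this baseline strategy, the Python sketch below generates candidate corrections within edit distance one and ranks them with a language model; the vocabulary and the lm_score function are placeholders for, e.g., a word list and an n-gram model such as the one trained on the Newspaper subcorpus mentioned in the abstract.
def candidates(word, vocabulary, alphabet="абвгдеёжзийклмнопрстуфхцчшщъыьэюя"):
    # all strings at edit distance 1 from `word` that occur in the vocabulary
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return {w for w in deletes + transposes + replaces + inserts if w in vocabulary}

def correct(word, context, vocabulary, lm_score):
    # pick the candidate the language model prefers in the given context;
    # lm_score(context, candidate) is a placeholder for any LM scoring function
    cands = candidates(word, vocabulary) or {word}
    return max(cands, key=lambda w: lm_score(context, w))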
One of the best spellcheckers available for the Russian language is Yandex.Speller.[<https://yandex.ru/dev/speller/>] This tool is capable of detecting and correcting errors in Russian, Ukrainian, and English texts by using the CatBoost machine-learning library.[<https://catboost.ai>] One of the advantages of Yandex.Speller is that it can handle words that are still missing from dictionaries. To correct punctuation errors, specific models focused on punctuation marks are trained. Incorrect word choices and errors involving multiple words are the most difficult error types. Such errors are not only hard to correct, but they are also hard to detect. This happens because the individual items making up such expressions are written correctly; so checking for their existence in the dictionary does not help. Resorting to higher-order N-grams could help, but would require a critically large amount of training data.
In this paper, we focus on texts by two types of non-native (L2) speakers of Russian. The first type is people who study Russian as a foreign language. Within this group, certain words and rules used in the native language are frequently transferred into the target language. The second type consists of so-called heritage speakers, i.e., people who inherited Russian from their parents, but the dominant language in their environment is not Russian. Typical for heritage speakers are cases of unusual word creation and of blending the two languages. Generally, L2 texts contain significantly more errors than texts by native speakers. Often, several words in a row can be misspelled, which makes it difficult to use context for error correction. Fully distorted words become even harder to recognize, since they come together with word-formation patterns transferred from another language; lexical choice errors are also much more common than in native writing.
This paper is structured as follows. In Section <ref>, we overview existing approaches to grammatical error correction. Section <ref> describes the data we use for constructing our spellchecker and for testing it. In Section <ref>, we describe the various steps of our approach and evaluate them experimentally. We conclude in Section <ref>.
§ RELATED WORK
Several approaches to GEC are commonly used. We will now describe these approaches in some detail.
§.§ Rule-based approach
The classic approach to GEC consists in designing rules for specific error types <cit.>. The first rule systems were based on pattern-matching substitutions. Eventually, they included part-of-speech markup and information obtained from syntactic trees <cit.>. The main advantage of a rule-based system is that it can be implemented without requiring a large amount of data and complex algorithms. The drawback is that it is not possible to systematize rules that would cover all types of errors, especially for languages with rich morphology such as Russian. Therefore, the use of the rule-based approach has significant limitations in practice, but it can successfully complement more complex models.
§.§ Classifier-based approach
As the amount of annotated data has recently grown, there are now several approaches that use machine learning to train classifiers for correcting particular types of mistakes <cit.>. For each type of error, a list of corrections is compiled and treated as a set of class labels. Linguistic features for the classifier may include part-of-speech tags, syntactic relationships, etc. Thus, the GEC task is solved as a multi-class classification problem in which the model selects the most appropriate candidate from the list. The classifier corrects one word for a specific category of errors, which ignores the dependencies between words in a sentence. In addition, the classifier assumes that adjacent words in a context contain no grammatical errors. However, this is not the case in texts written by non-native authors. Thus, a system that can handle only one type of error is restricted in use.
Over time, several methods have been developed to correct multiple errors in one sentence. One of the most commonly used approaches consists in training and combining multiple classifiers, one for each error type. In <cit.>, an ensemble of rule-based models and classifiers was proposed for constructing multiple correction systems. However, this approach works only if errors are independent of one another.
Several approaches have been proposed to solve the error interaction problem. Dahlmeier <cit.> developed a beam search decoder to correct interdependent errors. This approach outperformed the ensemble of individual rule-based classifiers and models. The classification paradigm is used for Russian in <cit.>, where classifiers were developed for several common grammatical errors: preposition, noun case, verb aspect, and verb agreement. The experiments were conducted with training on non-native speakers' data, as well as on native speakers' data using so-called minimal supervision. The classifiers performed better in terms of the F-measure than the machine translation approach to which they were compared.
§.§ Approach based on language modeling
The current availability of a large amount of data for some languages makes it possible to design high-quality language models. In correcting grammatical errors using language models, the main idea is that sentences assigned low probability by the model are more likely to contain grammatical errors than sequences assigned a high probability.
The first works showing a successful use of language models trained on large amounts of data are <cit.>. Most of the GEC systems presented at the CoNLL 2013 <cit.> and CoNLL 2014 <cit.> were either based on language models or included them as components. Although, with the development of neural machine translation, approaches based solely on language models have become less popular, they continue to be an integral part of grammatical error correction modules <cit.>. <cit.> propose to replace the N-gram language model with several implementations of modern language models of the Transformer architecture <cit.>. They evaluated their effectiveness in GEC tasks and concluded that advanced language models produce results comparable to those produced by classifier-based approaches.
In this work, we explore the potential of a language model-based approach in grammatical error correction for Russian.
§ DATA
We use the Newspaper Corpus, which is part of the Russian National Corpus (RNC) <cit.>, to build our language model. The corpus contains articles from major Russian media including Izvestia, Komsomolskaya Pravda, Novy Region, RBC, RIA Novosti, Sovetsky Sport, and Trud collected from 2000 to 2011. The corpus features a diverse vocabulary and consists of a sufficiently large number of grammatically correct texts.
To evaluate the performance of our system, we use the test sample of the RULEC-GEC corpus, which has become a benchmark for the GEC problem for the Russian language <cit.>. This corpus contains annotated data from the Russian Learner Corpus of Academic Writing (RULEC) <cit.>. RULEC consists of essays and papers written in the United States by university students learning Russian as a foreign language and heritage speakers (those who grew up in the United States but had exposure to Russian at home).
In total, the RULEC-GEC corpus includes 12480 sentences containing about 206000 words. The data was manually tagged by two annotators, who identified 13 categories of errors. The markup in the RULEC-GEC corpus is similar to the one used for GEC in English <cit.>, which allows using the MaxMatch Scorer <cit.> to evaluate the results. This tool measures the performance of a system by checking how well the set e_i of suggested corrections for the i-th sentence of the test set matches the gold-standard correction set g_i for the same sentence. Precision P, recall R, and the F-measure F_β are defined as usual:
P = ∑^n_{i=1} |e_i ∩ g_i| / ∑^n_{i=1} |e_i|
R = ∑^n_{i=1} |e_i ∩ g_i| / ∑^n_{i=1} |g_i|
F_β = (1+β^2) P R / (β^2 P + R)
The MaxMatch scorer was the main scoring system for the CoNLL 2013 and CoNLL 2014 shared tasks on grammatical error correction <cit.>. The F_1 measure was the main performance metric at CoNLL 2013 and the F_0.5 measure was the main one at CoNLL 2014. In this work, we will also use the F_0.5 measure as the main performance metric.
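As an illustration, the corpus-level scores can be computed from per-sentence edit sets along the following lines (a minimal Python sketch; it assumes the suggested and gold corrections are already aligned into comparable sets of edits, whereas the actual MaxMatch scorer additionally solves this alignment problem):

def maxmatch_scores(suggested, gold, beta=0.5):
    # suggested, gold: lists of edit sets, one set per test sentence
    overlap = sum(len(e & g) for e, g in zip(suggested, gold))
    n_sugg = sum(len(e) for e in suggested)
    n_gold = sum(len(g) for g in gold)
    p = overlap / n_sugg if n_sugg else 0.0
    r = overlap / n_gold if n_gold else 0.0
    b2 = beta ** 2
    f = (1 + b2) * p * r / (b2 * p + r) if p + r else 0.0
    return p, r, f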
It is worth noting that the texts in RULEC-GEC were written mostly by authors with a relatively high proficiency level in Russian. This is confirmed by the number of errors found in the texts contained in the corpus: it is much lower than in English corpora (Table <ref>). Some of the techniques we propose here may not be particularly suitable for correcting errors in such texts. They are more relevant for texts produced by speakers with a lower level of Russian, examples of which can be found in the Russian Learner Corpus (RLC) <cit.>. RLC includes the data of the RULEC-GEC corpus, as well as texts of native speakers of 27 languages with different levels of proficiency in Russian. Some of the examples considered in this paper are taken from this corpus.
§ LANGUAGE MODEL FOR ERROR CORRECTION
In this section, we describe our approach to error correction using a language model built from (predominantly) correct texts of the Newspaper Corpus. Our approach is iterative, and we correct each sentence independently of the others. Each stage of the algorithm is responsible for certain error types; it starts and terminates with a number of partially corrected versions of the same sentence.
If several corrections of an error are possible, the number of versions maintained by the algorithm can grow after one iteration. This happens, in particular, when several adjacent words are misspelled, which makes it hard to choose one among several corrections based on the context. To prevent an exponential growth of the number of versions to be maintained, we keep only a constant number (five, in the experiments reported below) of the most promising ones after each iteration. These are the sentences that are most likely from the point of view of the language model. We use a three-gram language model with Kneser-Ney smoothing trained on the Newspaper Corpus with KenLM <cit.> to estimate the sentence probability.
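A minimal sketch of this pruning step using the kenlm Python bindings is shown below; the model file name is a placeholder for the trigram model trained on the Newspaper Corpus.

import kenlm

lm = kenlm.Model("newspaper_3gram.arpa")  # placeholder path to the trained 3-gram model
BEAM = 5  # number of partially corrected versions kept after each stage

def prune(versions, beam=BEAM):
    # keep only the sentence versions that the language model finds most likely
    return sorted(versions, key=lm.score, reverse=True)[:beam]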
We preprocess the Newspaper Corpus to build a dictionary that registers the number of occurrences of each unigram and bigram and use the Symspell algorithm[<https://github.com/wolfgarbe/SymSpell>] to search the dictionary for words at a certain edit distance from the word to be corrected. In the following subsections, we describe the stages of our algorithm in detail.
§.§ Correction of orthographic errors
The first stage of our pipeline consists in correcting words that contain spelling errors. A sentence is divided into tokens, and each token is then searched in the dictionary. If the search does not return any results, the token is regarded as erroneous. For each such token, we compose a list of possible corrections based on the Damerau–Levenshtein distance and then select the most promising ones using the language model.
In L2 texts, we often encounter sequences of adjacent erroneous words. We start correcting such sequences from the rightmost word and then proceed from right to left. Although the edit distance between the incorrect and correct spellings of a word in L2 texts can be quite large, using a very large distance when searching for candidates can be prohibitive: not only is it computationally demanding, it may also hopelessly complicate the selection of the right candidate among a potentially huge number of completely irrelevant ones. We found empirically that limiting the maximum distance to two (and to one for words of length at most four) at this stage yields the best results.
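With the unigram dictionary exported to a frequency file, this candidate generation step can be sketched with the symspellpy package (the file name and exact API usage are illustrative, not a description of our actual implementation):

from symspellpy import SymSpell, Verbosity

sym = SymSpell(max_dictionary_edit_distance=2)
sym.load_dictionary("unigrams.txt", term_index=0, count_index=1)  # lines of "word count"

def candidates(token):
    # distance 1 for words of length at most four, distance 2 otherwise
    max_dist = 1 if len(token) <= 4 else 2
    hits = sym.lookup(token, Verbosity.ALL, max_edit_distance=max_dist)
    return [h.term for h in hits]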
This however means that, for particularly distorted words, candidate lists obtained with edit distance are often empty. To address this problem, we resort to phonetic word representations yielded by the Soundex algorithm <cit.>. We build an additional dictionary where a phonetic representation is associated with a list of dictionary words that share this representation. As an example, “пирикрасут” is an incorrect spelling of “перекрасят” (`will recolor'), the edit distance between the two variants being equal to three. The phonetic representation of “пирикрасут”, “1090390604”, is shared by eight dictionary words including “перекрасят”. Again, we use the Symspell algorithm to find not only words with the same phonetic representation as the erroneous word, but also words with phonetic representations at a small edit distance, in this case, at distance at most one. Among these words, we first select those at a minimum edit distance from the erroneous word, and, from these, we then choose the variants that maximize the sentence probability.
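The phonetic fallback can be organised analogously: dictionary words are indexed by their phonetic codes, and candidates are retrieved among words whose codes coincide with (or are close to) that of the erroneous word. The sketch below assumes a russian_soundex function producing codes like the one above; both this function and dictionary_words are placeholders.

from collections import defaultdict

phonetic_index = defaultdict(list)
for word in dictionary_words:                        # assumed iterable of dictionary words
    phonetic_index[russian_soundex(word)].append(word)  # russian_soundex: hypothetical helper

def phonetic_candidates(token):
    # exact phonetic match only; nearby codes would require an extra Symspell lookup
    return phonetic_index.get(russian_soundex(token), [])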
The experimental results are presented in Table <ref>. Line 1 shows the results for the Yandex.Speller open-access spell checker discussed in Section <ref> on the RULEC-GEC data. Line 8 reports the results of the language-model approach discussed in this section. Line 15 corresponds to the results obtained by using Yandex.Speller first and then applying our approach to its output. Lines 22–27 contain the results obtained on the same data using other methods known from the literature. The other lines correspond to various improvements to be discussed in the following sections.
On its own, our approach performs worse than Yandex.Speller, especially in terms of recall. However, when used as a postprocessing step, it increases the recall, albeit at the cost of some decrease in precision, showing a higher value of the F_0.5 measure. Next, we describe several additions to our model that make it possible to further increase both precision and recall.
§.§ Manually designed rules
After correcting spelling errors, we apply two simple rules:
* Add a comma before the conjunctions "а" and "что" (but only if the previous word is not "потому", "не", or "ни"), as well as before forms of "который" (`which').
* When choosing between the preposition "о" (`about') and its form "об", use "о" when the following word starts with a consonant or an iotified vowel ("е", "ё", "ю", "я") and "об" when followed by any other vowel.
Although these rules admit exceptions, their application results in improved metrics on the learner corpus, see Table <ref>, lines 2–3, 9–10, and 16–17. This suggests that contexts presenting counterexamples to these rules rarely occur in L2 writing.
Note that the first rule concerns punctuation. Although, in general, we do not aim at correcting punctuation errors, accurate punctuation can help correct other errors at later stages, and this is why we choose to apply this simple rule.
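The second rule, for example, reduces to a one-line check on the first letter of the following word; a sketch (ignoring rarer edge cases) is given below.

IOTIFIED = set("еёюя")
VOWELS = set("аеёиоуыэюя")

def o_or_ob(next_word):
    first = next_word[0].lower()
    if first in VOWELS and first not in IOTIFIED:
        return "об"
    return "о"  # consonant or iotified vowel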
§.§ Preposition correction with RuBERT
Incorrect use of prepositions is one of the most common errors made by non-native speakers. We have already discussed the incorrect usage of the prepositions “о” and “об” in the previous section. Another problematic case for non-native speakers concerns the prepositions “в” and “во” / `in' (“*во природе” instead of “в природе” / `in nature'), as well as “с” and “со” / `with' (“*с своим” instead of “со своим” / `with one's own'). There are also more complex cases in which a completely incorrect preposition is used; for example, “на” (`on') is often confused with “в” (`in') and vice versa (`*в рынке” / `in the market' instead of “на рынке” / `on the market').
To fix such errors, we apply an approach based on the neural network model RuBERT, a Bidirectional Encoder Representations from Transformers (BERT) model <cit.> for the Russian language. To do this, we use RuBERT pretrained for the masked language modeling task. The principle behind this operation is as follows: one token of the sentence is replaced with the mask token <MASK>, and the model then tries to predict which token is most likely to occur in place of the mask.
To correct mistakes with prepositions, we mask all prepositions in the sentence and take the most probable option suggested by the RuBERT model. We determine whether a word is a preposition or not by using a POS-tagging solution from DeepPavlov.[<https://deeppavlov.ai/>] The sentence probability is calculated before and after the replacement, and, if changing the preposition sufficiently increases the probability, we accept the replacement. Since different prepositions can fit in place of the mask, it is important to set a high threshold, so as to avoid unnecessary changes in the sentence. The results obtained after this operation are presented in Table <ref>, lines 4, 11, and 18.
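A condensed version of this procedure might look as follows; the checkpoint name, the threshold, and the lm_score function (e.g. the KenLM model above) are placeholders rather than the exact configuration we used.

from transformers import pipeline

fill = pipeline("fill-mask", model="DeepPavlov/rubert-base-cased")  # assumed RuBERT checkpoint

def correct_preposition(tokens, i, lm_score, threshold=5.0):
    # mask the preposition at position i and query the model for a replacement
    masked = " ".join(tokens[:i] + [fill.tokenizer.mask_token] + tokens[i + 1:])
    best = fill(masked, top_k=1)[0]["token_str"]
    candidate = tokens[:i] + [best] + tokens[i + 1:]
    # accept the replacement only if the sentence probability increases sufficiently
    if lm_score(" ".join(candidate)) - lm_score(" ".join(tokens)) > threshold:
        return candidate
    return tokens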
The results show that a small number of errors get corrected. The corrections applied mainly concern the errors mentioned above, i.e., those in the use of the prepositions “в” and “во”, as well as “с” and “со”. As a result of setting a high threshold, the total number of corrected errors at this stage is small, but a lower threshold would have led to a decrease in the F_0.5 measure due to the replacement of correct prepositions.
§.§ Correction of agreement and control errors
Another type of error is related to agreement and control. These errors are typical for non-native speakers, but they also occur among native speakers, although not so often. Agreement errors include subject–verb and adjective–noun agreement in gender and number. Control errors concern noun case selection depending on the governing preposition, verb, or adverb. There are other types of agreement and control errors, but, in this paper, we only consider these types, as they are the most common in the RULEC-GEC corpus.
The POS-tagging solution from DeepPavlov mentioned in Section <ref> is also used for agreement and control errors. Specifically, this solution is used to determine not only the part of speech of a word, but also the gender and number for verbs and the case, gender, and number for nouns.
To detect such errors, we extract what can be called grammatical two- and three-grams from the Newspaper Corpus. For each sentence of the corpus, we carry out POS tagging, build a parse tree, and extract all possible chains of two and three words one of which is a noun. We keep the root word and replace the dependent words with corresponding grammatical tags. For example, the sentence
"В этих местах мало перспектив." (`There are few prospects in these places.') yields the following chains: в местах мало, этих местах мало, мало перспектив. The latter chain gets transformed into
мало obl | NOUN | Animacy=Inan | Case=Gen | Gender=Fem | Number=Plur
When correcting errors in a sentence, we extract such two- and three-element chains from it and match them against chains extracted from the Newspaper Corpus. If some chain is not found, this is an indication of a possible error. In this case, we generate potential corrections by varying the case, number, and gender for nouns in the chain. If a resulting chain is found among the chains extracted from the Newspaper Corpus, we substitute it for the original chain in the sentence and recalculate the sentence probability. We then choose the variant that maximizes the increase in probability. The error correction results for each of the agreement and control error types discussed above are presented in Table <ref>, lines 5, 12, and 19. Here again, we observe an increase in F_0.5, as well as in both precision and recall.
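The variant generation step can be sketched with a morphological analyser such as pymorphy2 (shown here only for noun case; number and gender are varied in the same way, and the chain lookup and probability re-scoring are as described above):

import pymorphy2

morph = pymorphy2.MorphAnalyzer()
CASES = ["nomn", "gent", "datv", "accs", "ablt", "loct"]

def case_variants(noun):
    # inflect the noun into every Russian case for which a form exists
    parse = morph.parse(noun)[0]
    forms = (parse.inflect({case}) for case in CASES)
    return sorted({f.word for f in forms if f is not None})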
Examining the results demonstrated by the other approaches as shown in lines 22–27 of Table <ref>, we see that, in terms of F_0.5, our results are second only to the machine-translation model from <cit.> with fine-tuning, which, unlike our model, needs a large amount of labeled data for training. Our model demonstrates a significantly lower recall, but its precision is slightly better than that of the fine-tuned model and much better than that of the model without fine-tuning.
§ CONCLUSION AND FUTURE WORK
In this paper, we study the use of a language model built from correct texts for grammatical error correction in L2 writing. We use this model in isolation and for postprocessing the results obtained with the help of another spellchecker (Yandex.Speller, in our case). We combine it with phonetic algorithms, manually designed rules, and dedicated procedures for specific error types, in particular, those for preposition, agreement, and control errors. On the RULEC-GEC corpus, we obtain 64.79% precision, 17.08% recall, and 41.41% F_0.5, which is better than the results reported in the literature except those achieved by machine-translation models with fine-tuning <cit.>.
Using our language model in a postprocessing step following the application of Yandex.Speller results in increased recall, but at the cost of decreased precision. The decrease is not big, though: the value of the F_0.5 measure, which favors precision over recall, is still higher with the language model than without it. Note that we obtain this improvement using a very simple language model; replacing it with a more advanced version (with better smoothing, etc.) could help avoid the decrease in precision and result in an even more significant overall improvement. Our pipeline can also be extended with additional manually designed rules and procedures for specific error types.
There is room for improvement in how candidates for correcting erroneous words are generated. Currently, candidate generation is based on edit distance and phonetic representations. In L2 writing, derivational and lexical choice errors are very common. Candidate generation for those could be handled by dedicated modules based on morphological and semantic considerations.
As future work, we also intend to incorporate into our pipeline an auxiliary part-of-speech language model that would be used together with the main language model to estimate the probability of sentences resulting from substituting various candidates for erroneous words.
It remains to be seen whether our approach can be tweaked to the point that it outperforms state-of-the-art models based on machine translation. It would also be interesting to see if adding our method as a postprocessing step to these models can improve their performance, as it does for Yandex.Speller.
8
Levenshtein:66
Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707–710.
Damerau:64
Fred J. Damerau. 1964. A technique for computer detection and correction of spelling errors. Commun. ACM, 7(3):171–176.
Sidorov:13
Grigori Sidorov, Anubhav Gupta, Martin Tozer, Dolors Catala, Angels Catena, and Sandrine Fuentes. 2013. Rule-based system for automatic grammar correction using syntactic n-grams for English language learning (L2). In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pp. 96–101, Sofia, Bulgaria. Association for Computational Linguistics.
Rozovskaya:19
Alla Rozovskaya and Dan Roth. 2019. Grammar error correction in morphologically rich languages: The case of Russian. Transactions of the Association for Computational Linguistics, 7:1–17.
Naplava:19
Jakub Náplava and Milan Straka. 2019. Grammatical error correction in low-resource scenarios. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pp. 346–356,Hong Kong, China. Association for Computational Linguistics.
Heidorn:82
G. E. Heidorn, K. Jensen, L. A. Miller, R. J. Byrd, and M. S. Chodorow. 1982. The epistle text-critiquing system. IBM Systems Journal, 21(3):305–326.
Bustamante:96
Flora Ramírez Bustamante and Fernando Sánchez León. 1996. GramCheck: A grammar and stylechecker. In COLING 1996 vol. 1: The 16th International Conference on Computational Linguistics.
Han:04
Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2004. Detecting errors in English article usage with a maximum entropy classifier trained on a large, diverse corpus. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04), Lisbon, Portugal. European Language Resources Association (ELRA).
Rozovskaya:11
Alla Rozovskaya and Dan Roth. 2011. Algorithm selection and model adaptation for ESL correction tasks. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 924–933, Portland, Oregon, USA. Association for Computational Linguistics.
Rozovskaya:13
Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, and Dan Roth. 2013. The University of Illinois system in the CoNLL-2013 shared task. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pp. 13–19, Sofia, Bulgaria. Association for Computational Linguistics.
Dahlmeier:12b
Daniel Dahlmeier and Hwee Tou Ng. 2012b. Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 568–572, Montréal, Canada. Association for Computational Linguistics.
Gamon:08
Michael Gamon, Jianfeng Gao, Chris Brockett, Alexandre Klementiev, William B. Dolan, Dmitriy Belenko, and Lucy Vanderwende. 2008. Using contextual speller techniques and language modeling for ESL error correction. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume I.
Hermet:08
Matthieu Hermet, Alain Désilets, and Stan Szpakowicz. 2008. Using the web as a linguistic resource to automatically correct lexico-syntactic errors. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Yi:08
Xing Yi, Jianfeng Gao, and William B. Dolan. 2008. A web-based English proofing system for English as a second language users. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume II.
Ng:13
Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pp. 1–12, Sofia, Bulgaria. Association for Computational Linguistics.
Ng:14
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pp. 1–14, Baltimore, Maryland. Association for Computational Linguistics.
Junczys:16
Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Phrase-based machine translation is state-of-the-art for automatic grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1546–1556, Austin, Texas. Association for Computational Linguistics.
Alikaniotis:19
Dimitris Alikaniotis and Vipul Raheja. 2019. The unreasonable effectiveness of transformer language models in grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 127–133, Florence, Italy. Association for Computational Linguistics.
Vaswani:17
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
Grundkiewicz:14
R. Grundkiewicz and M. Junczys-Dowmunt. 2014. The WikEd error corpus: A corpus of corrective Wikipedia edits and its application to grammatical error correction. In International Conference on Natural Language Processing, pp. 478–490. Springer.
Apresjan:06
Ju. Apresjan, I. Boguslavsky, B. Iomdin, L. Iomdin, A. Sannikov, and V. Sizov. 2006. A syntactically and semantically tagged corpus of Russian: State of the art and prospects. In Proceedings of LREC, pp. 1378–1381, Genova, Italy.
Alsufieva:12
Anna A. Alsufieva, Olesya V. Kisselev, and Sandra G. Freels. 2012. Results 2012: Using flagship data to develop a Russian learner corpus of academic writing. Russian Language Journal, 62:79–105.
Dahlmeier:12a
Daniel Dahlmeier and Hwee Tou Ng. 2012a. A beam-search decoder for grammatical error correction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 568–578, Jeju Island, Korea. Association for Computational Linguistics.
Rakhilina:16
E. V. Rakhilina, A. S. Vyrenkova, E. Mustakimova, I. Smirnov, and A. Ladygina. 2016. Building a learner corpus for Russian. In Proceedings of the joint workshop on NLP for Computer Assisted Language Learning and NLP for Language Acquisition at SLTC, pp. 1–10.
Heafield:13
Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of ACL.
Raghavan:04
Hema Raghavan and James Allan. 2004. Using soundex codes for indexing names in ASR documents. In Proceedings of the Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval at HLT-NAACL 2004, pp. 22–27, Boston, Massachusetts, USA. Association for Computational Linguistics.
Devlin:19
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Grundkiewicz:19
Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2019. Minimally-augmented grammatical error correction. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pp. 357–363, Hong Kong, China. Association for Computational Linguistics.
Takahashi:20
Yujin Takahashi, Satoru Katsumata, and Mamoru Komachi. 2020. Grammatical error correction using pseudo learner corpus considering learner’s error tendency. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pp. 27–32, Online. Association for Computational Linguistics.
|
http://arxiv.org/abs/2307.02563v2
|
20230705180408
|
Double Copy from Tensor Products of Metric BV${}^{\color{gray} \blacksquare}$-algebras
|
[
"Leron Borsten",
"Branislav Jurco",
"Hyungrok Kim",
"Tommaso Macrelli",
"Christian Saemann",
"Martin Wolf"
] |
hep-th
|
[
"hep-th",
"math-ph",
"math.MP"
] |
l.borsten@herts.ac.uk, hk55@hw.ac.uk, branislav.jurco@gmail.com, tmacrelli@phys.ethz.ch, c.saemann@hw.ac.uk, m.wolf@surrey.ac.uk
EMPG–23–14, DMUS–MP–23/11
a]Leron Borsten
b]Branislav Jurčo
c]Hyungrok Kim
d]Tommaso Macrelli
c]Christian Saemann
e]Martin Wolf
[a]Department of Physics, Astronomy, and Mathematics,
University of Hertfordshire, Hatfield AL10 9AB, United Kingdom
[b]Mathematical Institute, Faculty of Mathematics and Physics,
Charles University, Prague 186 75, Czech Republic
[c]Maxwell Institute for Mathematical Sciences, Department of Mathematics,
Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom
[d]Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland
[e]School of Mathematics and Physics,
University of Surrey, Guildford GU2 7XH, United Kingdom
Field theories with kinematic Lie algebras, such as field theories featuring colour–kinematics duality, possess an underlying algebraic structure known as -algebra. If, additionally, matter fields are present, this structure is supplemented by a module for the -algebra. We explain this perspective, expanding on our previous work and providing many additional mathematical details. We also show how the tensor product of two metric -algebras yields the action of a new syngamy field theory, a construction which comprises the familiar double copy construction. As examples, we discuss various scalar field theories, Chern–Simons theory, self-dual Yang–Mills theory, and the pure spinor formulations of both M2-brane models and supersymmetric Yang–Mills theory. The latter leads to a new cubic pure spinor action for ten-dimensional supergravity. We also give a homotopy-algebraic perspective on colour–flavour-stripping, obtain a new restricted tensor product over a wide class of bialgebras, and we show that any field theory (even one without colour–kinematics duality) comes with a kinematic L_∞-algebra.
We are particularly grateful to Martin Cederwall and Pietro Antonio Grassi for detailed discussion on pure spinor formulations of gauge theories and supergravity. We also benefited from discussion with Maor Ben-Shahar, Silvia Nagy, and an anonymous user on MathOverflow. H.K. and C.S. were supported by the Leverhulme Research Project Grant RPG-2018-329. B.J. was supported by the GAČR Grant EXPRO 19-28628X.
No additional research data beyond the data presented and cited in this work are needed to validate the research findings in this work. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
§ INTRODUCTION AND RESULTS
Background.
The space of observables of a classical field theory is a rather complicated object. In order to obtain it, one needs to quotient the classical field space by gauge transformations and then divide the ring of functions on this quotient space by the ideal generated by the equations of motion. The Batalin–Vilkovisky (BV) formalism <cit.> turns this space into a differential complex, called the BV complex, in which the observables are encoded in the cohomology of the BV differential.
The BV complex forms, in fact, a differential graded commutative algebra, which is the Chevalley–Eilenberg algebra, or the dual description, of an L_∞-algebra, see e.g. <cit.> for a detailed review as well as <cit.> for the discussion of equations of motion. Such an L_∞-algebra is a generalisation of a differential graded Lie algebra, in which the Jacobi identity holds only up to homotopy. Moreover, the anti-bracket on the BV complex encodes a metric on the L_∞-algebra. Altogether, this leads to the homotopy algebraic perspective on perturbative quantum field theory, which implies a dictionary between physical concepts and algorithms and mathematical notions and constructions; we list some elements of this dictionary in <ref>.
Particularly noteworthy is the fact that the homotopy algebraic perspective on quantum field theory puts action principles and scattering amplitudes on equal footing: both are particular forms of L_∞-algebras <cit.>. Closely related to this perspective is also the work by Costello <cit.> and Costello and Gwilliam <cit.>.
In this paper, our goal is to explain the connection between colour–kinematics duality in much more detail and to add the following further line to <ref>:
Recall that colour–kinematics (CK) duality <cit.> is a surprising and non-evident feature of perturbative quantum field theories, first observed in tree-level scattering amplitudes of Yang–Mills theories. Concretely, the scattering amplitudes of a CK-dual field theory can be decomposed into sums of cubic graphs with each diagram having a contribution 1/p_ℓ^2 from the propagator along each internal line ℓ, a colour contribution, and a remaining kinematic contribution. CK duality is now the statement that the algebraic properties of the colour contributions induced by the anti-symmetry and Jacobi identity of the Lie bracket are precisely mirrored in the kinematic contributions.
It is natural to assume, and indeed is the case in many examples, that the interaction vertices are cubic and decompose into products of the structure constants of a colour Lie algebra and the structure constants of a second Lie algebra, usually called the kinematic Lie algebra <cit.>. It is further natural to assume that the cubic graphs exhibiting CK duality are indeed the Feynman diagrams of the tree-level perturbative expansion of a field theory given by an action principle. In this case, the kinematic Lie algebra is manifested in the action itself, and a number of action-based approaches to CK duality and the double copy have been presented in the literature <cit.>.
Interestingly, the homotopy algebraic perspective has an elegant description of this situation. Since there are only cubic vertices, the L_∞-algebra encoding the action is simply a differential graded Lie algebra. The fact that we have a kinematic Lie algebra amounts to a factorisation ≅⊗, where is a differential graded commutative algebra refined to a -algebra[in most cases; in the body of the paper, we will explain that a kinematic Lie algebra merely implies a pseudo--algebra structure]. This fact was first noted by Reiterer <cit.> in the context of Yang–Mills theory in a first order formulation. In this picture, the kinematic Lie algebra appears in a degree-shifted form as the Gerstenhaber bracket that each -algebra naturally possesses. This homotopy algebraic perspective on CK duality allowed us to produce a number of new and interesting results with comparatively little effort, cf. <cit.>.
CK duality has many implications and applications; see <cit.> for reviews. For this paper, it is important to recall that CK duality is the key ingredient to the famous double copy prescription <cit.> summarised by the slogan that `gravity is the square of Yang–Mills theory'. More precisely, the kinematic contribution to the CK-dual parametrisation of the Yang–Mills scattering amplitudes can be used to replace the colour contribution, leading to the scattering amplitudes of =0 supergravity. The latter theory is a string-theoretically natural extension of Einstein–Hilbert gravity by a scalar dilaton field and a Kalb–Ramond 2-form field.
To arrive at a homotopy algebraic perspective on the double copy, it is natural to start from the -algebras encoding the kinematic Lie algebra of Yang–Mills theory and to consider the tensor product with itself, ⊗. Recall that the tensor product of differential graded commutative algebras is again a differential graded commutative algebra, and this tensor product extends to -algebras.
The field content of Yang–Mills theory is contained in _1, the linear subspace of containing the homogeneous elements of degree 1. Correspondingly, the double-copied field content sits in _1⊗_1⊆_2. We expect the double copy to be described by a differential graded Lie algebra with the double-copied fields in degree 1, so it is evident that we will have to degree-shift . There is now an evident candidate for this Lie algebra, namely the grade-shifted kinematic Lie algebra contained in the -algebra in the form of a Gerstenhaber bracket.
This suggestive answer has to be corrected in two ways. First of all, the domain of all fields in is formed by two copies of the original space-time, somewhat akin to what happens in double field theory. This can be taken into account by introducing a cocommutative Hopf algebra whose elements correspond to the momenta labels of the field theory and act on and, thus, naturally on . We can then restrict to the invariants under this action, leading to fields taking values on the original space-time.[Another possibility is to take a double field theory-like approach and to impose a section condition, as done in <cit.>. A third possibility, suggested by <cit.>, is to replace the pointwise product with a convolution, as described in <ref>. ]
Secondly, the BV field space turns out to be twice the expected size of the usual BV field space for the double-copied field content. This can be corrected by restricting to the kernel of a naturally defined operator on . This kernel is closely related to level-matching in string theory and was also used for the double copy in <cit.>. The result is indeed the differential graded Lie algebra of the double-copied field theory.
To demonstrate our mathematical constructions in detail, we consider a number of explicit examples in <ref>. In particular, we discuss our formalism for both CK duality and the double copy for the biadjoint scalar field theory (as well as the instructive extension to a biadjoint-bifundamental scalar field theory) and pure Chern–Simons theory. In the latter case, the double copy produces the complete BV triangle for an interesting biform field theory, whose physical part was previously derived in <cit.>. We also sketch our description of CK duality of <cit.> and explain the relation to the recent work of <cit.>. Our most important examples are the pure spinor descriptions of Yang–Mills theory and M2-brane models. We review our description of full tree-level CK duality from <cit.>, but then also develop the corresponding picture for the double copy. In the case of Yang–Mills theory, we obtain the first cubic pure spinor action for ten-dimensional supergravity, which may also shed some light on questions in previous pure spinor actions for supergravity. In the case of M2-brane models, we again obtain a cubic biform action, which is an extension of the one obtained for Chern–Simons theory. This action is a candidate for either a supergravity or a Born–Infeld-like action. We also consider the interesting example of a sesquiadjoint scalar field theory, a deformation of a biadjoint scalar field theory in which one of the two Lie algebras is replaced by a more general algebraic structure. In this case, the kinematic Lie algebra is lifted to a kinematic L_∞-algebra, an object that any classical field theory possesses.
Results.
Altogether, our results can be summarised as follows. We show that any field theory that exhibits a kinematic Lie algebra has an underlying pseudo--algebra, a mild generalisation of a -algebra. In these pseudo--algebras, the kinematic Lie algebra appears in a grade-shifted form, and the Lie bracket is given by a derived bracket[Such constructions are common in homotopical algebra.]. If =, the Minkowski d'Alembertian, we have the usual form of CK duality.
We also show that this kinematic Lie algebra is a special case of a more general kinematic L_∞-algebra that any classical field theory possesses.
We then give a construction of the action of a syngamy field theory of two field theories with metric -algebras. The familiar double copy is a special case of this construction, and using pure spinors, we find a new cubic action for ten-dimensional supergravity.
Byproducts of our constructions include the homotopy algebraic perspective on colour–flavour-stripping, see <ref>, as well as a restricted tensor product of modules over a wide class of bialgebras, see <ref>, which appears to be a new mathematical construction.
Literature overview. There have been a number of important developments in recent years closely related to this work, some in quick succession and happening in parallel, so it may be useful to give a brief contextual overview of the literature that uses an action-based approach to CK duality and the double copy, particularly from the homotopy algebraic perspective.
The idea that CK duality and the double copy can be approached from the perspective of the action is rather old and dates back to <cit.>; see <cit.> for work along the same lines. In the context of the double copy, homotopy algebras were first used[There is earlier work by Zeitlin <cit.>, in which a particular set of =0 supergravity equations are reproduced within a tensor product of the homotopy commutative algebras underlying Yang–Mills theories, at least for Hermitian manifolds and at least to first order in homotopification. This paper does not link this observation to the double copy or the KLT relations.] in <cit.>, where the double copy construction was given by a twisted tensor product; recent applications of this technology include homotopy double copies for Navier–Stokes equations <cit.> and non-commutative gauge theories <cit.>. In this work, and in particular in <cit.>, we demonstrated that CK duality could be realised at the level of the complete off-shell BV action up to counterterms that may be required to ensure manifest unitarity. In particular, we provided an algorithm to construct the CK duality manifesting BV action to any order in perturbation theory. This picture involved adding a tower of higher-order interaction terms to the BV action while preserving the S-matrix, building on the results of <cit.> by including ghost, longitudinal and off-shell states.
In <cit.>, Reiterer made a seminal contribution to our understanding of CK duality. In particular, it was shown that Zeitlin's differential graded commutative algebra of the (colour-stripped) first-order formulation of pure four-dimensional Yang–Mills theory <cit.> carries a homotopy _∞-algebra structure (also defined in <cit.>). The central and immediate corollary is that the corresponding Feynman diagram expansion of the S-matrix satisfies CK duality up to homotopies given by the _∞-algebra. As for all homotopy algebras, there is a corresponding strict form of the _∞-algebra. Indeed, Reiterer provided a strictification (or rectification) relating the _∞-algebra to a -algebra, making CK duality of the tree-level S-matrix exact and manifest. An interesting precursor to <cit.> is found in the work of Zeitlin. In <cit.>, he speculates that there is a homotopy Gerstenhaber algebra in Yang–Mills theory, anticipating parts of a _∞-algebra structure. He also linked the homotopy commutative algebra arising in Yang–Mills theory in a particular limit to a homotopy commutative algebra arising for the Courant algebroid <cit.>, for which there exists a sketch of an argument that this algebra extends to a BV_∞-algebra[We thank Anton Zeitlin for pointing this out.].
In <cit.> it was explained that the higher-order interaction terms, introduced in <cit.> to render the BV action CK-dual, correspond (after colour-stripping) precisely to the higher products of a _∞-algebra[The non-trivial higher-products of the _∞-algebra roughly split into three classes corresponding to interactions generated by Tolotti–Weinzierl-type terms, gauge-fixing and field redefinitions. With hind-sight, the algorithms of <cit.> can be understood as uncovering fragments of a _∞-algebra.]. By introducing auxiliary fields, the tower of higher-order interactions can be made cubic and we arrive at a strict -algebra with manifest CK duality <cit.>. The conclusion (roughly) is that any theory with a CK duality manifesting BV action has an L_∞-algebra carrying a -algebra structure <cit.>. This gives rise to the penultimate entry in <ref>. Implicit in this statement, is a cyclic structure for the _∞-algebra, inherited from the anti-bracket, answering one of the open problems identified in <cit.>. We make this precise in the present contribution.
In light of these developments, CK duality is a (possibly anomalous[In the sense described above; CK duality violating counter-terms may be required to ensure manifest unitarity <cit.>.]) symmetry of the action itself; as such, it is natural to expect that there is an underlying organising principle manifesting this symmetry. In <cit.>, the authors realised that pure spinor space can provide such a principle, and using it, they could establish CK duality for the tree-level currents of ten dimensional supersymmetric Yang–Mills theory. In <cit.>, we then identified twistor spaces as a second, and closely related, organising principle. This should come as no surprise; besides the even simpler biadjoint scalar field theory <cit.>, Chern–Simons theory is a prime example of a CK-dual field theory, cf. <cit.>, and both pure spinors and twistor space allow for a reformulation of Yang–Mills theories as Chern–Simons-type theories.
Using twistor space, it is possible to concretely identify the kinematic Lie algebras of self-dual and full supersymmetric Yang–Mills theories. In the case of self-dual Yang–Mills theory, the resulting kinematic Lie algebra comes in a form that implies conventional CK duality even at the loop level. Having become aware of the work <cit.>, we also studied pure spinor space actions of ten dimensional supersymmetric Yang–Mills theory in <cit.>, where by using a different choice of gauge, we could lift the result of <cit.> to the tree-level amplitudes. This implied a new proof of tree-level CK duality for Yang–Mills theories in arbitrary dimensions d≤ 10 with an arbitrary amount of supersymmetry, which is simpler than existing ones in that it uses directly the action and does not rely on any concrete computations. In the same paper, we also extended Reiterer's perspective on CK duality to gauge–matter theories, which come with additional -modules from the homotopy algebraic perspective. This, together with the pure spinor actions for M2-brane models of <cit.>, allowed us to give the first proof of full, tree-level CK duality for M2-brane models.
Given Reiterer's interpretation of CK duality as a -algebra, it is natural to look for an interpretation of the double copy in the tensor product of two -algebras, as originally suggested in <cit.>. We presented initial ideas for such a construction in <cit.>. Independently, a double-field-theory-inspired version of this interpretation was then given in <cit.>, drawing on ideas in earlier work <cit.> relating the double copy to double field theory; see also <cit.> for recent work building on this, for example constructing weakly constrained double field theory to quartic order and elucidating the case of self-dual gravity, as well as <cit.> for further double-field-theory-inspired work on the double copy. Our present contribution mostly agrees with the constructions of <cit.>[which, in turn, have some similarity to those of <cit.>], except that we use a Hopf algebra[This is in line with Reiterer's original construction, and very helpful for the homotopification of this picture to be presented in <cit.>.] to control momentum dependence, while <cit.> employs a double-field-theory-like section condition. However, we would like to stress that our constructions go beyond those of <cit.> in a number of ways. First of all, all our construction applies to metric[Homotopy algebraists may prefer the term `cyclic'.] -algebras, and we give an explicit prescription for double copying the field-space metric. This is important for considering amplitudes and action principles; in particular, a -algebra implies CK-duality on currents, but not on amplitudes, as explained in <ref>. Secondly, we discuss gauge–matter theories by allowing for modules over -algebras. Thirdly, since we focus on -algebras, and all our constructions are exact; in <cit.>, the authors use _∞-algebras, for which the precise definition of tensor product is unclear, forcing one to work order by order in the double copy.[We note that in the conclusions of <cit.>, the authors identify a complete form of the _∞-algebra of Yang–Mills theory as the most important outstanding problem. Our twistor space descriptions of self-dual and full Yang–Mills theories <cit.> provide such a complete form. To turn it into a plain space-time expression, all one has to do is perform a mode expansion and integrate over the auxiliary spectral parameters in twistor space. A similar construction exists for the pure spinor actions. From our perspective, an order-by-order computation is possible (as explained already in <cit.>), but we believe that just as for supersymmetry, using an auxiliary space providing an organising principle is much more useful.]
Most recently, -algebras were also used in <cit.> to study self-dual Yang–Mills theory, but contrary to <cit.>, where the exact -algebra was given using an auxiliary space, a gauge-invariant formulation of self-dual Yang–Mills theory on space-time was studied directly, leading to a _∞-algebra deduced up to cubic order; we comment in detail on the relation between this work and our perspective in <ref>.
§ BASICS OF COLOUR–KINEMATICS DUALITY
§.§ Colour–kinematics duality and the double copy
We begin with a concise review of colour–kinematics (CK) duality. For general reviews on CK duality and the double copy, see <cit.>.
Colour–kinematics duality.
A gauge field theory is said to possess colour–kinematics duality if its scattering amplitude integrands can be parametrised in terms of cubic graphs (i.e. diagrams with vertices that all have degree 3) such that at vertices and connected pairs of vertices, the gauge Lie algebra contribution to these diagrams has the same algebraic properties as the kinematic contribution. More specifically, the n-point, L-loop scattering amplitude integrands _n,L can be parametrised as
_n,L ∼ ∑_γ∈Γ_n,L_γ_γ/|(γ)|d_γ ,
where Γ_n,L is the set of n-point, L-loop cubic diagrams; _γ is the colour numerator, that is, the contribution to the diagram γ due to the metric and the structure constants of the gauge Lie algebra; d_γ is the product of the denominators of the propagators (without colour component) for γ, usually 1/p_ℓ^2 for each propagator line ℓ∈γ; |(γ)| is the symmetry factor of the diagram γ, i.e. the order of its automorphism group; and _γ is the kinematic numerator containing the remaining contributions of γ to _n,L. The anti-symmetry of the Lie algebra structure constants and the Jacobi identity induce certain sums of colour numerators to vanish, i.e.
_γ_a1+_γ_a2 = 0
_γ_J1+_γ_J2+_γ_J3 = 0
for certain pairs (γ_a1,γ_a2) and triples (γ_J1,γ_J2,γ_J3). A theory is said to be colour–kinematics (CK) dual if the same relations hold for the corresponding kinematic numerators:
_γ_a1+_γ_a2 = 0
_γ_J1+_γ_J2+_γ_J3 = 0 .
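For instance, at four points, the colour numerators of the three cubic channels are built from the structure constants f^{abc} of the gauge Lie algebra, and the corresponding Jacobi triple is nothing but the Jacobi identity itself (shown here schematically, with external legs labelled 1,…,4 and the internal adjoint index b summed over):

f^{a_1a_2b}f^{ba_3a_4} + f^{a_2a_3b}f^{ba_1a_4} + f^{a_3a_1b}f^{ba_2a_4} = 0 ,

and CK duality demands that the corresponding kinematic numerators obey the same relation.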
Full CK duality has been established for very few field theories; in particular, it is found for the archetypal cases of biadjoint scalar field theory and Chern–Simons theory[As Chern–Simons theory is trivial on Minkowski space, one considers `scattering amplitudes' of harmonic differential forms.] <cit.>. For Yang–Mills theory and supersymmetric generalisations, CK duality has been established at the tree level using a variety of approaches <cit.>. It is known, however, that loop–level CK duality for pure Yang–Mills theory is not possible if one assumes that the kinematic numerators could have been derived from the Feynman diagrams of a local action with manifest unitarity <cit.>. This conclusion is also confirmed by observations regarding possible CK-dual action principles in <cit.>. A lift up to anomalies, however, does exist <cit.>.
Colour–kinematics duality for currents.
Note that we can also study CK duality for currents as e.g. the famous Berends–Giele gluon currents <cit.>. These are essentially amplitudes, but with one external leg kept off-shell and a propagator attached to this leg. They can be computed recursively, and sometimes possess a more evident form of CK duality, cf. e.g. <cit.>. Explicitly, we have a similar parametrisation to (<ref>), namely
_n,L ∼ ∑_γ∈Γ_n,L_γ_γ/|(γ)|d_γ
such that (<ref>) implies (<ref>) in the evident fashion, but d_γ here contains an additional factor arising from the propagator on the single external leg with propagator, and the _γ now also may involve off-shell polarisations.
Double copy.
CK duality is the crucial ingredient in the double copy construction: the kinematic numerators _γ of a CK-dual field theory can be doubled to construct consistent scattering amplitude integrands of a new field theory,
_n,L ∼ ∑_γ∈Γ_n,L_γ_γ/|(γ)|d_γ .
It has been shown that starting from tree-level pure Yang–Mills scattering amplitudes, the double copy construction yields the tree-level scattering amplitudes of =0 supergravity <cit.>, and this generalises to supersymmetric gauge and gravity theories, see again <cit.>.
More generally, one can take the kinematic numerators ^(1)_γ and ^(2)_γ of two CK-dual field theories and form their syngamy[We follow again our nomenclature of <cit.>.] theory, i.e.
_n,L ∼ ∑_γ∈Γ_n,L^(1)_γ^(2)_γ/|(γ)|d_γ .
In this paper, we shall focus on the Lagrangian perspective on CK duality and the double copy <cit.>. Our aim is then to explain the relevant mathematical structures underlying the double copy prescription from this perspective.
Gauge–matter colour–kinematics duality.
The above form of colour–kinematics duality can be extended from gauge theories to gauge–matter theories <cit.>. See <cit.> for a variety of gauge–matter colour–kinematics duality and double copy examples. By gauge theory, we mean any theory where all the fields are valued in the adjoint representation of the gauge Lie algebra , such as Yang–Mills and maximally supersymmetric Yang–Mills theories.[Thus, theories without gauge symmetry such as the biadjoint scalar or the non-linear sigma model on a principal homogeneous space are nevertheless `gauge theories' in our sense.] Gauge–matter theories, on the other hand, include (possibly integer spin) `matter' fields carrying some other representation R of . The colour–stripped amplitudes are constructed in the same manner as the case of purely adjoint fields, although the colour decomposition may be more involved <cit.>, essentially due to the particular representation theoretic properties of the matter. See <ref> for the details relevant to our discussion.
CK duality proceeds much as before. The only structural difference from the case of gauge theories is that now (<ref>) can hold either due to the Jacobi identity of the gauge Lie algebra, as before, due to the commutation relations in the (not necessarily irreducible) representation R <cit.>, or due to some combination of the two. Correspondingly, the sum over cubic Feynman diagrams (<ref>) is enlarged to include all possible decorations of the edges by matter field representations R:
_n,L ∼ ∑_γ∈Γ^R_n,L_γ_γ/|(γ)|d_γ .
Here, Γ^R_n,L denotes the set of n-point, L-loop cubic graphs with all consistent decorations of the edges by R, including the subset Γ_n,L⊆Γ^R_n,L without decorations (the pure adjoint graphs). Note that R may include several copies of the same irreducible representation of the gauge Lie algebra to incorporate flavours.
Double copy with gauge–matter theories.
The double copy is usually generalised to ^(1)_γ^(1)^(1)_γ^(1)→^(2)_γ^(2)^(1)_γ^(1), where γ^(1) and γ^(2) either both belong to Γ_n,L or belong to Γ^R^(1)_n,L∖Γ_n,L and Γ^R^(2)_n,L∖Γ_n,L, respectively[Note that this is in the spirit of <cit.> and more general than the working rule 4 adopted in <cit.>. It is consistent nonetheless, at least when there is an underlying action.]. This restriction reflects the fact that only field couplings corresponding to R× R→ and, dually, × R→ R do not require any properties of the representations beyond the universal Jacobi identities, commutation relations, and existence of conjugates[We are implicitly assuming here that R contains all required conjugate representations.]. While more elaborate couplings are in principle possible, we explicitly restrict to these cases, as described in <ref>. This is mathematically natural, see <ref>, and appears to be physically necessary. Allowing, say, γ∈Γ_n,L and γ'∈Γ^R^(2)_n,L∖Γ_n,L would make it possible to produce arbitrary numbers of gravitini, which would be inconsistent with the accompanying local supersymmetry <cit.>.
§.§ Field theories and homotopy algebras
Our discussion will be based on the homotopy algebraic perspective on classical field theories, cf. e.g. <cit.> or <cit.>.
Metric differential graded Lie algebras.
The classical Batalin–Vilkovisky (BV) action[Note that the BV algebras and -algebras that form an essential ingredient in our picture are not obtained from a BV formulation of the theories we consider.] of a field theory with cubic vertices is dual to a metric differential graded (dg) Lie algebra (,μ_1,μ_2) with the underlying graded vector space ≅⊕_i∈_i and cochain complex
() (
⋯[r,"μ_1"] _0[r,"μ_1"] _1[r,"μ_1"] _2[r,"μ_1"] _3[r,"μ_1"] ⋯) .
Here, _0 contains the ghosts, _1 the fields, _2 the anti-fields, and _3 the anti-fields of the ghosts,[not to be confused with the anti-ghost fields] respectively. Hence, the degree |ϕ| of a field ϕ∈ is given by
|ϕ| 1-|ϕ|_gh ,
where |ϕ|_gh is the ghost degree of ϕ. Correspondingly, in a gauge-fixed BV formulation of an ordinary gauge theory, _1 will also contain the Nakanishi–Lautrup field and the anti-field of the anti-ghost and _2 will also contain the anti-field of the Nakanishi–Lautrup field and the anti-ghost. The differential μ_1 encodes all linear features of the theory, such as kinematic terms, linearised gauge transformations, and their duals. Interactions, non-linear parts of gauge transformations, and their duals are encoded in a graded Lie bracket
μ_2 : × → ,
which is of degree 0, bilinear, graded anti-symmetric, compatible with the differential, and satisfies the graded Jacobi identity. The metric (or cyclic structure)
-- : × →
is a non-degenerate, bilinear, and graded symmetric map of a fixed degree, which is compatible with the differential μ_1 and the Lie bracket μ_2 in the sense that
μ_1(ϕ_1)ϕ_2+(-1)^|ϕ_1|ϕ_1μ_1(ϕ_2) = 0 ,
μ_2(ϕ_1,ϕ_2)ϕ_3+(-1)^|ϕ_1| |ϕ_2|ϕ_2μ_2(ϕ_1,ϕ_3) = 0
for all ϕ_1,2,3∈. If the metric is of degree -3, we can use it to write down an action principle
S 12ϕμ_1(ϕ)+13!ϕμ_2(ϕ,ϕ)
for the fields ϕ∈_1. In this way, any action with exclusively cubic interaction vertices can be encoded in a metric dg Lie algebra.
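For orientation, varying this action and using the compatibility of the metric with μ_1 and μ_2 shows that, up to overall conventions, the classical equation of motion is the Maurer–Cartan-type equation
μ_1(ϕ)+1/2 μ_2(ϕ,ϕ) = 0
for ϕ∈_1, while the linear and non-linear parts of the gauge transformations encoded in μ_1 and μ_2 combine into δϕ = μ_1(c)+μ_2(c,ϕ) for gauge parameters c∈_0.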
Homotopy transfer.
We can obtain an equivalent field theory by `integrating out' parts of the field content. This is done by an appropriate tree-level Feynman diagram expansion, and mathematically, this corresponds to a homotopy transfer from the cochain complex (,μ_1) to a quasi-isomorphic cochain complex (,μ̃_1) consisting of the modes that have not been integrated out, cf. <cit.>.[The fact that homotopy transfer amounts to integrating out fields is a general folklore in BV quantisation; see also <cit.> and <cit.> for recent applications.] In particular, we have the diagram
[loop,out=160,in=200,distance=20,"" left] (,μ_1)[r,shift left] (,μ̃_1) [l,shift left] ,
where and are cochain maps, denoting a projection and an embedding, such that
∘ = _ ,
which implies that
Π ∘
is a projector. There is usually some ambiguity in choosing , which involves a choice of gauge. The contracting homotopy → is a map of degree -1 satisfying
𝕀_-Π = μ_1∘+∘μ_1
as well as the annihilation or side conditions
∘ = 0 , ∘ = 0 , ∘ = 0 .
Even if the side conditions do not hold, one can redefine such that they do, cf. <cit.>.
Note that equation (<ref>) implies that is the inverse of μ_1 on the modes that are being integrated out.
In other words, can be regarded as a propagator, and the homotopy transfer indeed reproduces the usual tree-level Feynman diagram expansion with propagator . The result of this homotopy transfer generically contains n-point vertices, which are encoded in algebraic operations with n-1 inputs and one output. Therefore, the result of the homotopy transfer is no longer a dg Lie algebra but a generalisation known as an L_∞-algebra. The notion of a dg Lie algebra is equivalent to that of a strict L_∞-algebra. Further details are again found, e.g., in <cit.>, but they will be irrelevant to our discussion.
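For orientation, the first of these higher vertices arises from two cubic vertices joined by a propagator: writing p, e, and h for the projection, embedding, and contracting homotopy introduced above (our labels), the transferred ternary product is, schematically and up to signs,
μ̃_3(ϕ_1,ϕ_2,ϕ_3) ∼ p(μ_2(h(μ_2(e(ϕ_1),e(ϕ_2))),e(ϕ_3))) + permutations ,
i.e. a tree-level exchange diagram with the internal line given by the propagator h.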
The smallest permissible cochain complex (,μ̃_1) yields the minimal model (^∘,0), and it is given by the cohomology ^∘ H^∙_μ_1() of (,μ_1). The minimal model is unique up to (strict) isomorphisms, and its L_∞-algebra structure encodes the tree-level scattering amplitudes of the theory <cit.>. Indeed, physical fields in the cohomology satisfy the free or linearised equations of motion, and linear gauge transformations have been quotiented out. We thus see that the physical fields in the cohomology correspond to the asymptotically free fields, labelling the open legs of scattering amplitudes. Altogether, there is now a dictionary between physical features and operations with scattering amplitudes and amputated correlators as well as (homotopy) algebraic operations, as indicated in <ref>.
Factorisation.
For example, we can factor out the colour or gauge Lie algebra (,[-,-]_) by writing
≅ ⊗ ,
where (,,_2) is the differential graded (dg) commutative algebra with
μ_1(τ_1⊗ϕ_1) = τ_1⊗ϕ_1 ,
μ_2(τ_1⊗ϕ_1,τ_2⊗ϕ_2) = [τ_1,τ_2]_⊗_2(ϕ_1,ϕ_2)
for all τ_1,2∈ and ϕ_1,2∈. This is the mathematical formulation of what physicists would call colour-stripping, cf. <cit.>.
In this paper, we will always regard a field theory as a metric dg Lie algebra, and we collect many examples in <ref>.
§.§ Colour–flavour-stripping
We saw above that, mathematically, colour-stripping a cubic field theory amounts to a factorisation of the theory's dg Lie algebra into a colour Lie algebra and a dg commutative algebra. We are not aware of a discussion of the extension to colour–flavour-stripping in the literature, so we give a more detailed account here. This will become important when discussing CK duality of gauge–matter theories.
Factorisation and Lie algebra representations.
Consider a gauge field theory with only cubic interaction vertices and gauge Lie algebra . Then, the space of fields decomposes into irreducible representations of as
≅ (⊗)⊕(R^(1)⊗ V^(1))⊕(R^(2)⊗ V^(2))⊕⋯ ,
in which is the graded vector space of fields transforming in the adjoint representation (such as the gauge potential or other components of the gauge supermultiplet in supersymmetric gauge theories), and V^(i) for i=1,2,… is the graded vector space of fields transforming in the representation R^(i). Since there are no invariant pairings between distinct irreducible representations, there are no kinetic terms that mix fields of different representations. Thus, and V^(i) are dg vector spaces (i.e. cochain complexes), each endowed with invariant metrics.
To simplify the discussion, we combine R⊕_i∈R^(i) and V⊕_i∈V^(i), such that we can write
⊆ (⊗)⊕(R⊗ V)
for some cochain complexes and V endowed with invariant metrics. The right-hand side is generically larger than (<ref>) since we also get summands R^(i)⊗ V^(j) for i≠ j. We can, however, restrict to the subspace (<ref>) if necessary or desired.[This is a technical simplification. One can either regard the extra fields in ∖ as free fields that decouple from the rest of the theory, or one can choose to keep track of different kinds of matter, which would technically amount to working with operads (i.e. convenient tools for encoding algebras, cf. <cit.>) with more than two sorts.] The potential cubic interaction vertices encoded in the product μ_2 can then be of a number of types,
μ_2 : (⊗)×(⊗) → (⊗) ,
μ_2 : (⊗)×(R⊗ V) → (R⊗ V) ,
μ_2 : (R⊗ V)×(R⊗ V) → (⊗) ,
μ_2 : (R⊗ V)×(R⊗ V) → (R⊗ V) ,
μ_2 : (⊗)×(R⊗ V) → (⊗) ,
μ_2 : (⊗)×(⊗) → (R⊗ V) .
Whilst the last three types of products (<ref>)–(<ref>) are possible, they require additional algebraic structures on and R that go beyond an ordinary Lie algebra representation. The products (<ref>) still appear in familiar field theories, but (<ref>) and (<ref>) are very uncommon. We therefore restrict ourselves to the case in which only the first three types (<ref>)–(<ref>) of maps are non-trivial; this certainly covers all field theories in which we are interested.[It is also mathematically natural. For example, it is reminiscent of the Lie algebra decomposition for symmetric spaces.] We note that cyclicity of the metric on implies in particular
χ_1μ_2(ϕ,χ_2) = (-1)^|ϕ| |χ_1|+1ϕμ_2(χ_1,χ_2)
for all χ_1,2∈ R⊗ V and ϕ∈⊗, so that the product (<ref>) is fixed by the product (<ref>).
The first two types of product are captured by the Lie bracket on , the action of on R, a structure of a dg commutative algebra on , and an action of on the dg vector space V.
Putting all relevant structures together, we have the following mathematical description of colour–flavour-stripping.
Given a metric[sometimes called quadratic or cyclic instead] Lie algebra (,[-,-]_,--_) with a metric representation (R,_R,--_) together with a metric dg commutative algebra (,_,_2,--_) and a metric -module (V,_V,_V,--_V), we define the tensor product
(⊗)⊕(R⊗ V)
endowed with maps
μ_1(τ_1⊗ϕ_1+r_1⊗ v_1) τ_1⊗_ϕ_1+r_1⊗_Vv_1 ,
μ_2(τ_1⊗ϕ_1+r_1⊗ v_1,τ_2⊗ϕ_2+r_2⊗ v_2)
[τ_1,τ_2]_⊗_2(ϕ_1,ϕ_2)+μ_2(r_1⊗ v_1,r_2⊗ v_2)
+(τ_1_Rr_2)⊗(ϕ_1_Vv_2)-(-1)^|v_1| |ϕ_2|(τ_2_Rr_1)⊗(ϕ_2_Vv_1)
with μ_2(r_1⊗ v_1,r_2⊗ v_2) defined by (<ref>) as well as
τ_1⊗ϕ_1+r_1⊗ v_1τ_2⊗ϕ_2+r_2⊗ v_2_ τ_1τ_2_ ϕ_1ϕ_2_+r_1r_2_R v_1v_2_V
for all τ_1,2∈, r_1,2∈ R, ϕ_1,2∈, and v_1,2∈ V.
The tuple (,μ_1,μ_2,--) defined in (<ref>) forms a metric dg Lie algebra.
By direct computation, cf. <ref>.
Clearly, the tensor product (<ref>) can possess metric dg Lie subalgebras of the form (<ref>). In contrast to colour-stripping, colour–flavour-stripping hence requires additional information about the desired branching of R⊗ V into the summands R^(i)⊗ V^(i).
Altogether, colour–flavour-stripping is a decomposition of the form (<ref>) such that the original metric dg Lie algebra is a subalgebra of the full tensor product .
We specialise this factorisation further to CK-dual ones in <ref>, and physical examples are found in <ref> and <ref>.
§.§ Kinematic Lie algebras from actions
Motivation.
For the action perspective on CK duality and the double copy, we will always assume that the diagrams γ∈Γ_n,L in the expansions (<ref>) and (<ref>) are indeed the Feynman diagrams of scattering amplitudes, as obtained from the rules derived from an action principle in the usual way. In this case, CK duality implies the existence of a kinematic Lie algebra, from which the kinematic numerators _γ are constructed in full analogy with the construction of the colour numerators _γ from the gauge or colour Lie algebra. Put differently, each cubic vertex of the Feynman diagram γ∈Γ_n,L contributes a structure constant to both _γ and _γ, and propagators joining vertices amount to index contractions. The kinematic Lie algebra is the vital ingredient in the action perspective on CK duality, and we are not aware of an example of a CK-dual field theory without a kinematic Lie algebra. Moreover, the concept of a kinematic Lie algebra generalises far beyond theories with conventional CK duality, as we shall see. We will therefore always consider CK-dual field theories as a subset of theories with kinematic Lie algebras.
As a fairly general and simple example of such a situation, consider the action
S 12__Φ^Φ^+13!__^_^_Φ^Φ^Φ^ ,
cf. <cit.>. Here, is the d'Alembertian, the ^_ and ^_ are structure constants of the gauge and kinematic Lie algebras, and the _ and _ are invariant metrics on each of the two Lie algebras, which are required for writing down an action principle. Note that ,,… are DeWitt indices combining momentum, species, polarisation, and spinor labels. Among the field theories featuring tree-level CK duality that can be brought into this form are the biadjoint scalar field theory, the non-linear sigma-model, Chern–Simons theory, and Yang–Mills theory.
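As a concrete instance of this template, consider the biadjoint scalar field theory: writing φ^aā for the scalar field, f_abc and f̃_āb̄c̄ for the structure constants of the two Lie algebras (our labels), and absorbing the coupling constant, the action reads, schematically and up to signs and normalisations,
S_biadj = ∫ d^d x ( 1/2 φ_aā□φ^aā + 1/3! f_abc f̃_āb̄c̄ φ^aāφ^bb̄φ^cc̄ ) ,
so that the kinematic structure constants are simply the structure constants of the second Lie algebra, in line with the kinematic Lie algebra of the biadjoint scalar identified later in our discussion of the double copy.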
Feynman diagram expansion.
We will always be concerned with kinematic Lie algebras relative to a Feynman diagram expansion, or, equivalently, relative to a propagator , i.e. a contracting homotopy in a deformation retract (<ref>). The kinematic Lie algebras usually discussed in the literature are obtained when is the ordinary Feynman propagator, giving a contracting homotopy to the minimal model of the underlying L_∞-algebra, because this Feynman diagram expansion yields the scattering amplitudes. In the case of Chern–Simons theory, the tree-level scattering amplitudes are trivial, and we consider generalised amplitudes of harmonic differential forms.
In particular, we shall follow an idea of Reiterer <cit.> which assumes that the contracting homotopy or propagator can be written as
= _⊗ ^-1
[,] = 0
under the factorisation (<ref>) such that is a differential of degree -1, which maps e.g. physical anti-fields to physical fields, is a second-order differential operator of degree 0 (e.g. the d'Alembertian) with ^-1 its inverse defined to vanish on (), and Π=0=[μ_1,^-1] for the projector (<ref>). Then, (<ref>) can be rewritten as
[,] = ∘+∘ .
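For orientation: writing □ for the d'Alembertian, a plane wave satisfies □ e^i p· x = -p^2 e^i p· x, so that (up to sign conventions) the inverse appearing above acts in momentum space as the usual scalar Feynman propagator -1/p^2, and the contracting homotopy is this propagator dressed with an insertion of the degree -1 differential.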
Derived bracket.
The operator now allows us to define the derived bracket
{ϕ_1,ϕ_2} (_2(ϕ_1,ϕ_2))-_2(ϕ_1,ϕ_2)-(-1)^|ϕ_1|_2(ϕ_1,ϕ_2)
for all ϕ_1,2∈, which measures the failure of to be a derivation of the product _2. This derived bracket enters into the construction of the kinematic numerators, analogously to the Lie algebra brackets entering into the colour numerators; and, in particular, it yields the Lie bracket of the kinematic Lie algebra.
Returning to the action (<ref>), the structure constants ^_ are those of the Lie algebra defined by the Lie bracket (<ref>). This kinematic Lie algebra arises when integrating out modes in the Feynman diagram expansion with propagator _⊗ ^-1 and cubic vertices encoded in μ_2(-,-)=[-,-]_⊗_2(-,-).
Kinematic Lie algebra for currents.
Concretely, let us look at an example of a field theory current, i.e. a Feynman diagram with n incoming fields and one outgoing, propagating field ϕ_0. This clearly demonstrates how the operator gets assigned to vertices:
[column sep=0.0cm, row sep=0.3cm]
ϕ_0
^-1[u,dash]
_2 [u,dashed,dash]
^-1[ur,dash] ^-1[ul,dash]
_2 [ur,dashed,dash] _2 [ul,dashed,dash]
ϕ_1 [ur,dash] ϕ_2 [ul,dash] ϕ_3 [ur,dash] ϕ_4 [ul,dash]
→ [column sep=0.0cm, row sep=0.3cm]
ϕ_0
^-1[u,dash]
_2 [u,dash]
^-1[ur,dash] ^-1[ul,dash]
_2 [ur,dash] _2 [ul,dash]
ϕ_1 [ur,dash] ϕ_2 [ul,dash] ϕ_3 [ur,dash] ϕ_4 [ul,dash]
Here, a solid line denotes a field and a dashed line denotes an anti-field. The operator is taken along its unique anti-field line to a vertex and combined with _2 to the kinematic Lie bracket, which maps pairs of fields to fields. Note that _2 is indeed the kinematic Lie algebra on fields because, as we shall see, these are in the kernel of , at least after gauge fixing.
This prescription clearly extends to currents involving anti-fields, where the outgoing leg can be a field. We thus see that after the re-assignment of the operator , the vertices are turned into the derived bracket (<ref>), which is therefore the kinematic Lie algebra.
Kinematic Lie algebra for scattering amplitudes.
In the case of scattering amplitudes, the discussion is a bit more subtle. Amplitudes are obtained from the currents by removing the propagator on the outgoing leg of a current and pairing the anti-field coming out of the diagram with the remaining field using the cyclic structure. For example, the amplitude (ϕ_0,ϕ_1,ϕ_2,ϕ_3) will receive a contribution from
[column sep=0.2cm, row sep=0.3cm]
ϕ_0
⟨-,-⟩[u,dash]
_2 [u,dashed,dash]
^-1[ur,dash] ^-1[ul,dash]
_2 [ur,dashed,dash] _2 [ul,dashed,dash]
ϕ_1 [ur,dash] ϕ_2 [ul,dash] ϕ_3 [ur,dash] ϕ_4 [ul,dash]
It is then clear that CK duality will hold for any triple of subdiagrams not involving ϕ_0. For all physically interesting theories, however, the relevant external fields will be -exact, i.e. in particular ϕ_0=ψ. In this case, we can compute the sum of the general s-, t- and u-channels (i.e. the terms _γ_J1, _γ_J2, and _γ_J3 from (<ref>)) involving ϕ_0 as follows:
⟨ϕ_0,_2(T_1,_2(T_2,T_3))⟩+⟨ϕ_0,_2(T_2,_2(T_3,T_1))⟩+⟨ϕ_0,_2(T_3,_2(T_1,T_2))⟩ ,
where T_1, T_2, and T_3 are currents, making up the rest of the diagrams. Again, in all physically interesting examples, is its own adjoint, and hence we have
⟨ϕ_0,_2(T_1,_2(T_2,T_3))⟩ = ⟨ψ,_2(T_1,_2(T_2,T_3))⟩ = ⟨ψ,_2(T_1,_2(T_2,T_3))⟩ .
If the derived bracket is a Lie bracket, then this reformulation makes it clear that (<ref>) indeed vanishes. We note that, due to cyclic symmetry of the amplitudes, it is sufficient if at least one external field is -exact.
Underlying algebraic structure.
Ultimately, the dg commutative algebra (,,_2) and the differential will form the structure of a -algebra <cit.>, see also <cit.>. We shall formalise and explore these in the remainder of this paper. Moreover, we shall extend this picture to CK duality involving matter (i.e. fields taking values in representations of the gauge group that can be different from the adjoint representation). This leads to the notion of -modules, following the discussion of <cit.>.
Comment regarding the loop level.
Consider now the dg Lie algebra of a cubic BV action S which has been gauge-fixed in the usual manner. Suppose that the dg Lie algebra structure can be colour–stripped and enhanced to a -algebra with the d'Alembertian and with a second-order differential . Using the Feynman rules following from S, we can write down the loop integrand for a Feynman diagram corresponding to a process by using the propagator / for each internal edge, the cubic interaction [-,-]_⊗_2(-,-) for the vertices, and the cyclic structure to join loops formally. The resulting integrand for a trivalent graph Γ is then of the form
I_Γ = 𝖼_Γ N_Γ/∏_e∈ E(Γ)_e ,
where N_Γ is a series of contractions of _2 and .
Note that we can cut all loops open so that the loop diagram Γ reduces to a tree. In this tree, we can use the derived bracket (<ref>) to bring all vertices to the form [-,-]_⊗{-,-}, cf. (<ref>), as long as all fields attached to incoming lines are in (). Since we are working with a gauge-fixed action, there are no anti-fields running inside loops, so the above condition holds. Altogether, our vertices are described by pairs of Lie algebra structure constants, and CK duality holds at the level of loop integrands.
We note that the subtlety in the count of -operators that arose in the transition from currents to amplitudes for tree diagrams is absent for loops: each loop adds a propagator relative to the tree diagrams, increasing the number of -operators by one.
§ COLOUR–KINEMATICS DUALITY FROM BV-BOX-ALGEBRAS AND THEIR MODULES
In this section, we fully develop the mathematical tools for an algebraic description of kinematic Lie algebras and colour–kinematics duality.
§.§ Pseudo-BV-box-algebras and kinematic Lie algebras
Pseudo--algebras.
We start with the most general definition of an algebra that implies the existence of a kinematic Lie algebra.
A pseudo--algebra is a tuple (,,_2,) such that (,,_2) is a dg commutative algebra[We shall always assume that _2 is associative, that is, _2(_2(ϕ_1,ϕ_2),ϕ_3)=_2(ϕ_1,_2(ϕ_2,ϕ_3)) for all ϕ_1,2,3∈.] endowed with an additional differential → of degree -1 such that the derived bracket
{ϕ_1,ϕ_2} (_2(ϕ_1,ϕ_2))-_2(ϕ_1,ϕ_2)-(-1)^|ϕ_1|_2(ϕ_1,ϕ_2)
for all ϕ_1,2∈ defines a shifted Lie algebra. That is, besides the shifted anti-symmetry[It is shifted graded anti-symmetric since the bracket carries a degree. We choose to work with this convention for shifted algebras, which is operadically natural, in order to simplify later discussion.]
{ϕ_1,ϕ_2} = (-1)^|ϕ_1||ϕ_2|{ϕ_2,ϕ_1}
implied by (<ref>), we also have the shifted Jacobi identity
{ϕ_1,{ϕ_2,ϕ_3}} = (-1)^|ϕ_1|+1{{ϕ_1,ϕ_2},ϕ_3}+(-1)^(|ϕ_1|+1)(|ϕ_2|+1){ϕ_2,{ϕ_1,ϕ_3}}
for all ϕ_1,2,3∈. Furthermore, we set
[,] = ∘+∘ .
Hence, the derived bracket measures the failure of to be a derivation for _2. Note that [,]=0=[,].
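As a quick consistency check, and writing b for the additional degree -1 differential (our label), the shifted anti-symmetry stated above follows from the graded commutativity of _2 alone:
{ϕ_2,ϕ_1} = b(_2(ϕ_2,ϕ_1))-_2(bϕ_2,ϕ_1)-(-1)^|ϕ_2|_2(ϕ_2,bϕ_1)
= (-1)^|ϕ_1||ϕ_2|(b(_2(ϕ_1,ϕ_2))-_2(bϕ_1,ϕ_2)-(-1)^|ϕ_1|_2(ϕ_1,bϕ_2))
= (-1)^|ϕ_1||ϕ_2|{ϕ_1,ϕ_2} .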
A pseudo--algebra will turn out sufficient for describing CK duality of currents, but in order to extend the picture to amplitudes, we will also need a cyclic structure or metric.
A metric pseudo--algebra is a pseudo--algebra (,,_2,) endowed with a non-degenerate graded symmetric bilinear map
-- : × → ,
called a cyclic structure, metric, or inner product, which is compatible with the pseudo--algebra structure in the sense that
ϕ_1ϕ_2+(-1)^|ϕ_1|ϕ_1ϕ_2 = 0 ,
_2(ϕ_1,ϕ_2)ϕ_3-(-1)^|ϕ_1||ϕ_2|ϕ_2_2(ϕ_1,ϕ_3) = 0 ,
ϕ_1ϕ_2-(-1)^|ϕ_1|ϕ_1ϕ_2 = 0
for all ϕ_1,2,3∈. We say that -- is of degree n if ϕ_1ϕ_2≠ 0 implies |ϕ_1|+|ϕ_2|+n=0 for all ϕ_1,2∈.
Note that combining (<ref>) with (<ref>), we see that
ϕ_1ϕ_2 = ϕ_1ϕ_2
for all ϕ_1,2∈.
We will want to use the operator =_⊗^-1 for some Lie algebra as the contracting homotopy in a special deformation retract (<ref>), and this will produce a Feynman diagram expansion. Among the general choices, the following is particularly relevant.
We call the operator in a -algebra (,,_2,) complete if ^-1 is the contracting homotopy in a special deformation retract to the cohomology H^∙_() of the cochain complex (,).
Note that in this definition, we consider a `colour-stripped' form of the homotopy transfer (<ref>). Physically, a -algebra with complete operator comes with a natural Feynman diagram expansion in which all non-physical fields are propagating and hence integrated out.
Kinematic Lie algebras.
Importantly, the shifted Jacobi identity (<ref>) allows us to associate a Lie algebra with a pseudo--algebra.
Given a pseudo--algebra (,,_2,) with derived bracket (<ref>), we call the associated Lie algebra () given by[We use square brackets [k] with k∈ to denote a degree shift for a graded vector space V=⊕_i∈V_i by V[k]=⊕_i∈(V[k])_i⊕_i∈V_i+k.]
() ([1],[-,-]_())
[ϕ_1[1],ϕ_2[1]]_() (-1)^|ϕ_1|{ϕ_1,ϕ_2}[1]
for all ϕ_1,2[1]∈() the kinematic Lie algebra.
We note that the map extends to a functor from the evident category of pseudo--algebras to the category of Lie algebras.
Our discussion in <ref>, in particular the argument around (<ref>), now yields the following result.
A cubic gauge field theory comes with a kinematic Lie algebra if its underlying dg Lie algebra (,μ_1,μ_2) factorises into a Lie algebra (,[-,-]_) and a pseudo--algebra (,,_2,) such that ≅⊗ and
μ_1(τ_1⊗ϕ_1) = τ_1⊗ϕ_1 ,
μ_2(τ_1⊗ϕ_1,τ_2⊗ϕ_2) = [τ_1,τ_2]_⊗_2(ϕ_1,ϕ_2)
for all τ_1,2∈ and ϕ_1,2∈.
Note that () together with generally fails to be a dg Lie algebra as the following proposition makes clear.
For any pseudo--algebra (,,_2,) with derived bracket (<ref>), we have
{ϕ_1,ϕ_2} = -{ϕ_1,ϕ_2}-(-1)^|ϕ_1|{ϕ_1,ϕ_2}
+(_2(ϕ_1,ϕ_2))-_2(ϕ_1,ϕ_2)-_2(ϕ_1,ϕ_2) ,
{ϕ_1,ϕ_2} = -{ϕ_1,ϕ_2}-(-1)^|ϕ_1|{ϕ_1,ϕ_2}
for all ϕ_1,2∈.
This follows from a straightforward calculation using the definition of the derived bracket (<ref>) together with the definition (<ref>) of and the fact that both and are differentials. The second equation has already been observed in <cit.>, see also <cit.>.
Put differently, this proposition says that, whilst is a derivation for the derived bracket, is not. This proposition also implies the following.
With respect to the derived bracket (<ref>), () is closed. In fact, (<ref>) implies that
{(),()} ⊆ () ⊆ () .
Thus, in <ref>, we may restrict the kinematic Lie algebra () to a shifted Lie subalgebra with
()[1] ⊆ ⊆ ()[1] .
For most physically interesting field theories, such as Yang–Mills theory, we have ()==(), where is the space of fields (as opposed to anti-fields), at least after gauge fixing. For other theories, such as Chern–Simons theory, () may be smaller than () in general, but after gauge fixing, the space of fields satisfies (<ref>), as we shall see in <ref>. For an explicit example, see <ref>. The kinematic Lie algebra that is usually discussed in the literature is the one restricted to fields, or further to physical fields. We therefore make the following definition:
The restricted kinematic Lie algebra ^0() of a -algebra is the Lie subalgebra
^0() ()[1] ⊆ () .
Colour–kinematics duality.
We conclude with a sufficient criterion for CK duality. Several conditions have to be met for a theory with a kinematic Lie algebra or, equivalently, a pseudo--algebra to exhibit traditional CK duality.
A theory with a pseudo--algebra (,,_2,) will produce a Feynman diagram expansion of currents that is naturally of the form (<ref>). The `amputated correlators', i.e. the currents with the final propagators removed and paired off with fields using the cyclic structure, have a Feynman diagram expansion of the form (<ref>) if at least one of the external fields lies in the image of .
If now the operator is complete, then in the Feynman diagram expansion all non-physical fields are propagating and hence integrated out. The above currents and amputated correlators become `physical currents' and `physical amplitudes' with expansions (<ref>) and (<ref>).
Finally, if the operator is the d'Alembertian on the underlying space-time, then the amplitude parametrisation (<ref>) is of the form conventionally discussed in the literature, i.e. d_γ is the product of 1/p^2_ℓ ranging over all internal lines ℓ. <ref> therefore has the following immediate corollary.
Consider a cubic gauge field theory whose underlying dg Lie algebra factorises into a Lie algebra and a pseudo--algebra (,,_2,) with complete operator and =. Then the corresponding Feynman diagram expansion
yields a CK-dual parametrisation of the currents (<ref>) and a CK-dual parametrisation of the amplitudes (<ref>) with at least one external field in the image of .
We note that, when considering physical amplitudes, the physical fields ϕ usually satisfy the gauge condition ϕ=0, cf. the examples in <ref>. Moreover, in most physically interesting cases, the cohomology of is trivial, so that a pseudo--algebra with structure = and all non-physical modes propagating directly implies CK duality of the amplitudes.
We also note that a CK-dual field theory does not necessarily have to have a kinematic Lie algebra. In particular, the parameterisation (<ref>) does not have to come from the Feynman diagram expansion obtained from a path integral.
A pseudo--algebra structure as in <ref> with complete and = implies full, off-shell CK duality of all tree-level correlators. Given an anomaly-free path-integral measure completing the action to a quantum theory, this is sufficient to obtain full loop-level CK duality, as we shall see later. In many concrete examples, however, CK duality only exists at the tree level, and this is then visible in various obstacles to obtaining the above-mentioned situation. For example, we saw that the field redefinitions introduced in <cit.> to reformulate the Yang–Mills action such that it has an underlying pseudo--algebra introduced Jacobian counterterms leading to anomalies. In another case, the twistor descriptions of supersymmetric Yang–Mills theory that were used to produce pseudo--algebra descriptions in <cit.> come with a non-standard -operator. Finally, in the case of pure spinors <cit.>, the tree-level constructions did not lift to the loop level, as there was again a problem with the regularisation, cf. <ref>. This problem is expected and unavoidable due to the results of <cit.>.
§.§ Modules over pseudo-BV-box-algebras
Pseudo--modules.
For CK-dual field theories involving matter fields, that is, fields which do not take values in the gauge Lie algebra , we need to extend the concept of a pseudo--algebra to a pseudo--module.
A module over a pseudo--algebra (,_,_2,_) is a tuple (V,_V,_V,_V) such that (V,_V,_V) is a (left) module over the dg commutative algebra (,_,_2) with the action _V× V→ V of degree 0 and which is endowed with an additional differential _V V→ V of degree -1 such that the derived bracket
{ϕ,v}_V _V(ϕ_Vv)-(_ϕ)_Vv-(-1)^|ϕ|ϕ_V(_Vv)
for all ϕ∈ and v∈ V satisfies
{ϕ_1,{ϕ_2,v}_V}_V = (-1)^|ϕ_1|+1{{ϕ_1,ϕ_2}_,v}_V+(-1)^(|ϕ_1|+1)(|ϕ_2|+1){ϕ_2,{ϕ_1,v}_V}_V
for all ϕ_1,2∈ and v∈ V, where {-,-}_ is the derived bracket (<ref>). Furthermore, we set
_V [_V,_V] = _V∘_V+_V∘_V .
Finally, in analogy with <ref>, we call the operator _V complete if _V^-1⊗_V is the contracting homotopy in a special deformation retract to the cohomology H^∙__V(V) of the cochain complex (V,_V).
When there is no confusion, we will drop the subscripts V and on all the operations. We also note that, for all physical applications, pseudo--modules with V concentrated in degrees 1 (fields) and 2
(anti-fields) will turn out to be sufficient.
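A simple example, assuming the associativity of _2 as above, is the pseudo--algebra itself regarded as a module over itself: the action is ϕ_Vv = _2(ϕ,v), and the differentials of serve as those of the module, so that
{ϕ,v}_V = {ϕ,v}_ ,
i.e. the module derived bracket reduces to the derived bracket of , and the required compatibility condition becomes the shifted Jacobi identity.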
Just as for -algebras, we also need to introduce a metric to talk about action principles and amplitudes.
A metric of degree n on a module (V,_V) over a dg Lie algebra (,_) is a non-degenerate bilinear graded-symmetric map of degree n
--_V : V× V →
such that
v_1_Vv_2_V+(-1)^|v_1|_Vv_1v_2_V = 0 ,
ϕ_V v_1v_2_V-(-1)^|ϕ||v_1|v_2ϕ_V v_1_V = 0
for all v_1,2∈ V and ϕ∈.
A metric dg Lie module is a dg Lie module equipped with a metric.
A metric on a pseudo- module is defined in the same way, with the evident compatibility condition with _V; a metric pseudo- module is a pseudo- module equipped with a metric.
Note that on a cyclic module V over a cyclic dg Lie algebra , one can define a graded-anti-symmetric bilinear operation ∧_V as
ϕv_1∧_V v_2_ ϕ_V v_1v_2_V
for any v_1,2∈ V and ϕ∈. Similarly, on a cyclic module V over a cyclic pseudo--algebra , one can define a graded-symmetric bilinear operation ∙_V as
ϕv_1∙_Vv_2_ ϕ_V v_1v_2
for any v_1,2∈ V and ϕ∈.
We now have the following result.
Given a module V=(V,_V,_V,_V) over a pseudo--algebra =(,_,_2,_), we have a graded (left) module (,_) over the kinematic Lie algebra () with V[1] and
_ : ()× → ,
ϕ[1]_ v[1] (-1)^|ϕ|{ϕ,v}_V[1]
for all ϕ[1]∈() and v[1]∈ with {-,-}_V denoting the derived bracket (<ref>).
By direct calculation, cf. <ref>.
Gauge–matter colour–kinematics duality.
It is now easy to see that these structures are the appropriate ones for capturing gauge–matter CK duality. Firstly, as a direct extension of <ref>, we have the following result.
A cubic gauge–matter theory has a kinematic Lie algebra with Lie algebra module if its underlying dg Lie algebra factorises into a Lie algebra representation and a pseudo--algebra with pseudo--module.
Explicitly, we consider the Feynman diagram expansion induced by the pseudo--algebra and its module, which uses the propagator _⊗_^-1_+_V⊗_V^-1_V. The operators are then moved from the propagators to the interaction vertices, as indicated in (<ref>). This turns the interaction vertices into derived brackets of the form (<ref>) or (<ref>). Hence, the Feynman diagram expansion of currents possesses a kinematic Lie algebra with Lie algebra module, which extends to amplitudes with at least one external leg in the image of _ or _V.
As in the pure gauge case, the above theorem has the following corollary, the analogue of <ref>, which provides a sufficient criterion for gauge–matter theories to possess CK duality.
The Feynman diagram expansion of a cubic gauge–matter theory whose underlying dg Lie algebra factorises into a Lie algebra representation and a pseudo--algebra (,_,_2,_) with _= together with a module (V,_V,_V,_V) over a pseudo--algebra with _V= and both _ and _V complete yields a gauge–matter CK-dual parametrisation of the physical currents and a gauge–matter CK-dual parametrisation of the physical amplitudes with at least one external field in the image of _ or _V.
§.§ Pseudo-BV-box-algebras and their modules over Hopf algebras
For technical reasons, it is convenient to define and work with the notion of a pseudo--algebra over a Hopf algebra, following <cit.>. The technical reasons are twofold. Firstly, in future work <cit.>, we intend to give the full homotopy algebraic picture, lifting the restriction to cubic actions; in this case, it is convenient to work with the framework of operadic Koszul duality, for which the Hopf algebra (that provides an ambient symmetric monoidal category) will be necessary. Secondly, our discussion of the double copy to ordinary space-time (as opposed to a double field theory on doubled space) is most easily understood using tensor products over Hopf algebras.
Hopf algebras.
Let us first recall some relevant definitions.
A bialgebra over is a tuple (,Δ,ϵ,), where (,) is an associative unital algebra over and Δ→⊗ (the coproduct) and ϵ→ (the counit) are unital homomorphisms of -algebras such that Δ is coassociative,
(Δ⊗𝕀_)Δ = (𝕀_⊗Δ)Δ ,
and ϵ is indeed a counit,
(𝕀_⊗ ϵ)Δ = 𝕀_ = (ϵ⊗𝕀_)Δ .
It will be convenient to use the common (sumless) Sweedler notation
χ^(1)⊗χ^(2) Δ(χ)
for χ∈, and in this notation, (<ref>) and (<ref>) read as
χ^(1)⊗((χ^(2))^(1)⊗(χ^(2))^(2)) = ((χ^(1))^(1)⊗(χ^(1))^(2))⊗χ^(2) ,
ϵ(χ^(1))χ^(2) = χ = χ^(1)ϵ(χ^(2)) .
A bialgebra (,Δ,ϵ) is called commutative if the algebra is commutative; it is called cocommutative if it satisfies the condition
χ^(1)⊗χ^(2) = χ^(2)⊗χ^(1)
for all χ∈.
A Hopf algebra over is a tuple (,Δ,ϵ,S) where (,Δ,ϵ) is a bialgebra and where S→ is an -linear map (the antipode) such that
S(χ^(1))χ^(2) = χ^(1)S(χ^(2)) = ϵ(χ)
for all χ∈.
In the following, we shall always work with restrictedly tensorable (see <ref>) cocommutative Hopf algebras over .[In this paper, we do not really need the antipode, so it suffices to work with bialgebras. However, the antipode will become important for operadic Koszul duality.] A trivial example of such a Hopf algebra is itself with the ordinary product and all other maps trivial. Another important example to our discussion is the following.
Let ^d≅^1,d-1 be d-dimensional Minkowski space with metric tensor η=diag(-1,1…,1) and Cartesian coordinates x^μ with μ,ν,…=0,…,d-1. The Hopf algebra _^d is the Hopf algebra of differential operators with constant coefficients on ^d that is generated by the partial derivatives ∂/∂ x^μ.
Explicitly, _^d is the vector space of power series in the partial derivatives ∂/∂ x^μ with unit 1 and the evident product. The coproduct on elements in _^d is fully defined by unitality and the Leibniz rule,
Δ(1) = 1⊗ 1
Δ(∂/∂ x^μ) = ∂/∂ x^μ⊗ 1+1⊗∂/∂ x^μ ,
and the counit is the projection onto the constant part of the power series, i.e.
ϵ(1) = 1
ϵ(∂/∂ x^μ) = 0 .
Finally, the antipode is defined by
S(1) = 1 , S(χ_1χ_2)=S(χ_2)S(χ_1) ,
S(∂/∂ x^μ) = -∂/∂ x^μ .
This Hopf algebra is evidently commutative (hence restrictedly tensorable) and cocommutative.
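Since Δ is a homomorphism of algebras, its value on higher generators follows immediately; for instance,
Δ(∂/∂ x^μ ∂/∂ x^ν) = Δ(∂/∂ x^μ)Δ(∂/∂ x^ν) = ∂/∂ x^μ∂/∂ x^ν⊗ 1+∂/∂ x^μ⊗∂/∂ x^ν+∂/∂ x^ν⊗∂/∂ x^μ+1⊗∂/∂ x^μ∂/∂ x^ν ,
which is the Hopf-algebraic form of the Leibniz rule for second derivatives acting on a product of functions.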
Pseudo--algebras and modules over Hopf algebras.
We start with the obvious notion of a dg commutative algebra over .
A differential graded (dg) commutative algebra over a cocommutative Hopf algebra is a tuple (,,_2,) such that (,,_2) is a dg commutative algebra, (,) is a graded (left) module over with an action ×→ of degree 0, and the differential and the product _2 are -linear in the sense that
χϕ_1 = (χϕ_1) ,
χ_2(ϕ_1,ϕ_2) = _2(χ^(1)ϕ_1,χ^(2)ϕ_2)
for all χ∈ and ϕ_1,2∈, where we use again the Sweedler notation (<ref>).
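Concretely, for the Hopf algebra _^d introduced above and the generator χ=∂/∂ x^μ, these conditions state nothing more than that the differential commutes with space-time derivatives and that the product _2 obeys the Leibniz rule,
∂/∂ x^μ_2(ϕ_1,ϕ_2) = _2(∂/∂ x^μϕ_1,ϕ_2)+_2(ϕ_1,∂/∂ x^μϕ_2) ,
as expected for fields on ^d.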
This notion extends to pseudo--algebras over , where we additionally demand that ∈.
A pseudo--algebra over a cocommutative Hopf algebra is a tuple (,,_2,,) such that (,,_2,) is a pseudo--algebra, (,,_2,) is a dg commutative algebra over , the differential is linear over , i.e.
χ(ϕ) = (χϕ)
for all χ∈ and ϕ∈, and there is a _∈ such that ϕ=[,]ϕ=_ϕ for all ϕ∈. (In the following, we will be sloppy and identify _= or even write [,]∈.)
A metric pseudo--algebra over a cocommutative Hopf algebra is a pseudo--algebra equipped with a metric --:⊗_→ that is a -linear map, where is equipped with the trivial -module structure and ⊗_ is equipped with the -module structure induced by the coproduct.
It remains to extend the notion of a pseudo--module to a pseudo--module over .
A pseudo--module over a cocommutative Hopf algebra is a module (V,_V,_V,_V) over a pseudo--algebra (,_,_2,_,_) over such that all maps are -linear in the sense that
χ_V(_Vv) = _V(χ_Vv) ,
χ_V(_Vv) = _V(χ_Vv) ,
χ_V{ϕ,v}_V = {χ^(1)_ϕ,χ^(2)_Vv}_V
for all χ∈, ϕ∈, and v∈ V. Here, {-,-}_V is the derived bracket (<ref>) associated with (V,_V,_V,_V). In addition, we require that =[_,_]=[_V,_V].
A cyclic module over a cyclic pseudo--algebra over a cocommutative Hopf algebra is a pseudo--module V equipped with a metric ⟨-,-⟩ V⊗_ V→ V that is a -linear map, where is equipped with the trivial -module structure and V⊗_ V is equipped with the -module structure induced by the coproduct.
§.§ BV-box-algebras and their modules
As we will see, it is both physically and mathematically natural to specialise our pseudo--algebras to the case of -algebras <cit.>. These are pseudo--algebras in which the operator is a second-order differential in the sense of[This concept was first defined for commutative and associative algebras by Koszul <cit.>. Here, we choose to work with the more flexible definition in <cit.>, which extends to non-commutative and non-associative algebras.] Akman <cit.>. We start by recalling the notion of higher-order differentials.
Higher-order differentials.
Consider a graded vector space with a multilinear operation of arity[The generalisation from binary _2 to arbitrary arity was not considered in <cit.> but is straightforward, although it is not needed in this paper. In particular, if a theory has a 3-Lie algebra <cit.> colour structure and a corresponding quartic-vertex CK duality <cit.>, then the colour-stripped theory is naturally captured by an analogue of a (pseudo-)-algebra with a totally graded-symmetric ternary _3 and a second-order differential operator.] k+1 and degree || and a differential δ→ of degree |δ|. For all r∈, we define recursively the maps Φ^r+1_δ by
Φ^1_δ(ϕ_1) δϕ_1 ,
Φ^2_δ(ϕ_1,…,ϕ_k+1) Φ^1_δ((ϕ_1,…,ϕ_k+1))-(-1)^|||δ|(Φ^1_δ(ϕ_1),ϕ_2,…,ϕ_k+1)-…
-(-1)^(||+|ϕ_1|+…+|ϕ_k|) |δ|(ϕ_1,…,ϕ_k,Φ^1_δ(ϕ_k+1)) ,
⋮
Φ^r+1_δ(ϕ_1,…,ϕ_rk+1) Φ^r_δ(ϕ_1,…,ϕ_(r-1)k,(ϕ_(r-1)k+1,…,ϕ_rk+1))
-(-1)^|| (|δ|+|ϕ_1|+…+|ϕ_(r-1)k|)
·(Φ^r_δ(ϕ_1,…,ϕ_(r-1)k+1),ϕ_(r-1)k+2,…,ϕ_rk+1)
- …
-(-1)^(||+|ϕ_(r-1)k+1|+…+|ϕ_rk|)(|ϕ_1|+…+|ϕ_(r-1)k|+|δ|)
·(ϕ_(r-1)k+1,…,ϕ_rk,Φ^r_δ(ϕ_1,…,ϕ_(r-1)k,ϕ_rk+1)) ,
for all ϕ_1,…,rk+1∈, which measure the failure of Φ^r_δ(ϕ_1,…,ϕ_(r-1)k,-) to be a derivation of the (k+1)-ary product .
A differential δ on (,_2) is said to be a differential operator of order r if Φ^r+1_δ=0.
Note that a differential of r-th order is automatically of order r+1.
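To connect with the derived bracket introduced above, specialise to a binary product _2 of degree 0 (i.e. k=1) and take for δ the additional degree -1 differential of a pseudo--algebra, which we denote by b here. Then
Φ^1_b(ϕ_1) = bϕ_1 ,
Φ^2_b(ϕ_1,ϕ_2) = b(_2(ϕ_1,ϕ_2))-_2(bϕ_1,ϕ_2)-(-1)^|ϕ_1|_2(ϕ_1,bϕ_2) ,
so that Φ^2_b is precisely the derived bracket defined earlier, and b being a first-order differential operator, i.e. Φ^2_b=0, is the statement that b is a derivation of _2.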
For a pseudo--algebra (,,_2,), the condition for being of second order is
(_2(_2(ϕ_1,ϕ_2),ϕ_3)) = (-1)^|ϕ_1|_2(ϕ_1,(_2(ϕ_2,ϕ_3)))
+(-1)^(|ϕ_1|+1)|ϕ_2|_2(ϕ_2,(_2(ϕ_1,ϕ_3)))
+(-1)^|ϕ_3|(|ϕ_1|+|ϕ_2|+1)_2(ϕ_3,(_2(ϕ_1,ϕ_2)))
+(-1)^|ϕ_1|+|ϕ_3|+|ϕ_2||ϕ_3|+1_2(_2(ϕ_1,ϕ_3),ϕ_2)
+(-1)^|ϕ_1|+|ϕ_2|+1_2(_2(ϕ_1,ϕ_2),ϕ_3)
+(-1)^(|ϕ_1|+1)(|ϕ_2|+|ϕ_3|)+1_2(_2(ϕ_2,ϕ_3),ϕ_1)
for all ϕ_1,2,3∈.
For a module (V,_V,_V,_V) over a pseudo--algebra (,_,_2,_), the condition for _V being of second order amounts to
_V(ϕ_1_V(ϕ_2_Vv)) = (-1)^|ϕ_1|ϕ_1_V_V(ϕ_2_Vv)
+(-1)^(|ϕ_1|+1)|ϕ_2|ϕ_2_V_V(ϕ_1_Vv)
+_(_2(ϕ_1,ϕ_2))_Vv
+(-1)^|ϕ_1|+|ϕ_2|+1ϕ_1_V(ϕ_2_V(_Vv))
+(-1)^|ϕ_1|+1ϕ_1_V((_ϕ_2)_Vv)
-(_ϕ_1)_V(ϕ_2 v)
for all ϕ_1,ϕ_2∈ and v∈ V.
-algebras.
We now refine <ref> as follows.
A (cyclic) -algebra is a (cyclic) pseudo--algebra (,,_2,) in which is of second order.
We have already seen in <ref> that the derived bracket (<ref>) for a pseudo--algebra automatically satisfies the shifted anti-symmetry (<ref>). The operator being of second order now implies that the shifted Jacobi identity (<ref>) also holds automatically, as the following proposition shows.
Let (,,_2,) be a pseudo--algebra. The condition that is of second order is equivalent to the shifted Poisson identity
{ϕ_1,_2(ϕ_2,ϕ_3)} = _2({ϕ_1,ϕ_2},ϕ_3)+(-1)^(|ϕ_1|+1)|ϕ_2|_2(ϕ_2,{ϕ_1,ϕ_3})
for all ϕ_1,2,3∈ for the derived bracket (<ref>). The shifted Poisson identity, in turn, implies the shifted Jacobi identity (<ref>).
By direct computation, cf. <ref>.
For a -algebra (,,_2,), the operator =[,] is of second order.
This follows from the fact that the (graded) commutator of an r-th order differential and an s-th order differential is a differential of order r+s-1, cf. <cit.>.
Note that by virtue of <ref>, for (,,_2,) a -algebra, the tuple (,_2,{-,-}) with {-,-} the derived bracket (<ref>) is what is commonly known as a Gerstenhaber algebra, that is, a Poisson algebra of degree -1.
A -algebra (,,_2,) with =[,]=0 is called a differential graded (dg) Batalin–Vilkovisky (BV) algebra.
We then have the following immediate corollary to <ref>:
Consider a BV algebra with differential .
Together with [1], the kinematic Lie algebra () and the restricted kinematic Lie algebra ^0() defined in <ref> become dg Lie algebras.
-algebra modules.
Let us also specialise the notion of modules. Firstly, we define -modules by refining <ref>.
A module over a -algebra is a module (V,_V,_V,_V) over , regarded as a pseudo--algebra, in which _V is of second order.
We now have the analogues of <ref>.
For _V of second order, the derived bracket (<ref>) always satisfies (<ref>) as well as
{ϕ_1,ϕ_2_Vv}_V = {ϕ_1,ϕ_2}__Vv+(-1)^(|ϕ_1|+1)|ϕ_2|ϕ_2_V{ϕ_1,v}_V
for all ϕ_1,2∈ and v∈ V.
For a -module (V,_V,_V,_V), the operator _V=[_V,_V] is of second order.
If we specialise to the situation _=_V=0, we obtain dg Lie modules.
Given a module V=(V,_V,,_V) over a dg BV algebra =(,,_2,), the space ^0(V)(_V)[1] is a module over the dg Lie algebra ^0().
By direct calculation, cf. <ref>.
Finally, we also refine the notions of pseudo--algebras and pseudo--modules over Hopf algebras introduced in <ref>, see also <ref>, as follows.
A -algebra over a cocommutative Hopf algebra is a tuple (,,_2,,) such that (,,_2,) is a -algebra, (,,_2,) is a dg commutative algebra over , the differential is linear over ,
χ(ϕ) = (χϕ)
for all χ∈ and ϕ∈, and we require that =[,]∈.
It remains to extend the notion of a -module to a -module over .
A -module over a cocommutative Hopf algebra is a module (V,_V,_V,_V) over a -algebra (,_,_2,_,_) over such that we have linearity over in the sense of
χ_V(_Vv) = _V(χ_Vv) ,
χ_V(_Vv) = _V(χ_Vv) ,
χ_V{ϕ,v}_V = {χ^(1)_Vϕ,χ^(2)_Vv}_V
for all χ∈, ϕ∈, and v∈ V. Here, {-,-}_V is the derived bracket (<ref>) associated with (V,_V,_V,_V), and we have used Sweedler notation (<ref>). In addition, we require that =[_,_]=[_V,_V].
A metric on a -module V on a cyclic -algebra is the same as a metric as a pseudo--module. A cyclic -module is a cyclic pseudo--module that is a -module.
Comments.
Anticipating our upcoming work <cit.>, we note that the above definitions have a nice operadic formulation, which is crucial for the generalisation to homotopy algebras that extends the present analysis of the double copy to theories with interactions beyond cubic terms. Operads are algebraic gadgets that encode the axioms of an algebraic structure. They are formulated inside an ambient setting of symmetric monoidal categories; in the present case, the category is that of cochain complexes of modules over the Hopf algebra , with the monoidal operation given by tensor product over (rather than the smaller tensor product over ). This means that all operations are linear (rather than multi-linear) over . Thus, one can construct an operad in the category of cochain complexes of -modules such that algebras over this operad are -algebras over . Similarly, one can construct a two-sorted operad over the cochain complexes of -modules, with one sort for elements of the -algebra itself and the other sort for elements of the module; an algebra over this operad is then a -algebra over together with a -module over it.
§.§ Gauge fixing
Let us now examine how gauge fixing a BV action of a CK-dual gauge field theory affects the pseudo--algebra structure on the colour-stripped dg commutative algebra. We shall focus on ordinary gauge theories; higher gauge theories can be dealt with in a similar fashion.
General gauge-fixing procedure.
The traditional gauge-fixing procedure in the BV formalism usually consists of the following three steps <cit.>, see also <cit.> for a detailed review:
-4pt
* Add trivial pairs of fields to the BV action as needed. For ordinary gauge theories, one such gauge Lie algebra valued pair, consisting of a Nakanishi–Lautrup field and an anti-ghost, is sufficient. For higher gauge theories, one needs a full BV triangle of trivial pairs, cf. <cit.> and <cit.> for a review.
* Using these fields, define a gauge-fixing fermion Ψ, i.e. a function of ghost degree -1 in the BV fields, which, in turn, defines a symplectomorphism or canonical transformation
(ϕ,ϕ^+) ↦ (ϕ̃,ϕ̃^+) = (ϕ,ϕ^++∂Ψ/∂ϕ)
for the fields ϕ and anti-fields ϕ^+. For simplicity, we always restrict ourselves to the usual quadratic gauge-fixing fermions, for which the canonical transformation becomes a constant rotation.
* In most cases of interest to us (ordinary gauge and gauge–matter theories, as well as =0 supergravity), the BV action is linear in the anti-fields after this symplectomorphism, and we can simply drop all terms containing anti-fields from the gauge-fixed action.
Even when considering only tree-level CK duality, it is helpful (albeit not necessary) to work with the gauge-fixed BV action, since there the kinematic operator exclusively maps fields to anti-fields.
Step (i): trivial pairs.
Consider a cubic gauge field theory with an underlying pseudo--algebra (,,_2,). The first step in the gauge-fixing procedure consists of adding trivial pairs which amounts to extending the field space by V⊕ V[-1] for V a graded vector space.
Let (,,_2,) be a pseudo--algebra and V a graded vector space with an action of . Then the tuple (',','_2,') with
' ⊕ V⊕ V[-1]
and[We use n_i to indicate Nakanishi–Lautrup fields to avoid the notational collision of the usual b with our operator .]
'(ϕ_1,n_1,c̅_1) (ϕ_1,0,n_1[-1]) ,
'_2((ϕ_1,n_1,c̅_1),(ϕ_2,n_2,c̅_2)) (_2(ϕ_1,ϕ_2),0,0) ,
'(ϕ_1,n_1,c̅_1) (ϕ_1,(c̅_1)[1],0)
for all ϕ_1,2∈, n_1,2∈ V, and c̅_1,2∈ V[-1] is a pseudo--algebra with '=[',']=.
It is straightforward to see that '^2=0 and that ' is a derivation for '_2. Thus, (','_2,') is a dg commutative algebra. Likewise, '^2=0 and [',']=. In addition, the derived bracket (<ref>) for (',','_2,') is
{(ϕ_1,n_1,c̅_1),(ϕ_2,n_2,c̅_2)}' = ({ϕ_1,ϕ_2},0,0) ,
where {-,-} is the derived bracket for (,,_2,). Consequently, the conditions (<ref>) are also satisfied. Altogether, (',','_2,') is a pseudo--algebra.
Step (ii): gauge-fixing fermion and canonical transformation.
The second step in the gauge-fixing procedure, namely introducing a gauge-fixing fermion Ψ and performing the canonical transformation (<ref>) preserves the -algebra structure for the usual quadratic[Recall that we assume that Ψ is quadratic in all BV fields. This includes the usual gauge-fixing fermions, as e.g. those for R_ξ-gauges.] Ψ, as in this case the canonical transformation (<ref>) is merely a constant rotation of all the fields and anti-fields.
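For orientation, the kind of quadratic gauge-fixing fermion we have in mind below is, schematically and up to conventions,
Ψ = ∫ d^d x  tr( c̄ ( ∂^μ A_μ + 1/2 ξ n ) ) ,
with c̄ the anti-ghost, n the Nakanishi–Lautrup field, and ξ the gauge parameter; being quadratic in the BV fields, it induces a canonical transformation of the above form that shifts the anti-fields by terms linear in the fields, i.e. a constant rotation.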
We will mostly be interested in the gauge-fixing condition A=0, and we can implement this condition by using the usual gauge-fixing fermion for R_ξ-gauges. This leads to an interesting phenomenon. For the sake of simplicity and concreteness, let us consider the differentials for the pseudo--algebra in an ordinary gauge theory on d-dimensional Minkowski space ^d. In degrees 0 and 1, we have the following structure before applying the symplectomorphism:
[row sep=1cm,column sep=2.7cm]
cΩ^0(^d)[r,shift left,""] AΩ^1(^d)[l,shift left,"^†"] [r,shift left,"K"] A^+Ω^1(^d)[l,shift left,"K^+"] [r,shift left,"^†"] c^+Ω^0(^d)[l,shift left,""]
nΩ^0(^d)[rd,shift left,"", pos=0.1] n^+Ω^0(^d)[ld,shift left,"", pos=0.1]
c̅^+Ω^0(^d)[ru,shift left,"", pos=0.1] c̅Ω^0(^d)[lu,shift left,"", pos=0.1]
where K is a kinematic operator, e.g. K=^† for Yang–Mills theory, or K=⋆ for Chern–Simons theory, and K^+ is the corresponding operator so that we have [,]=, e.g. K^+= for Yang–Mills theory and K^+=^†⋆ for Chern–Simons theory, with = in these two examples. After the symplectomorphism induced by the usual R_ξ-gauge-fixing fermion for the gauge A=^† A=0, we have the following:
[row sep=1cm,column sep=2.7cm]
cΩ^0(^d)[r,shift left,""] [rdd,shift left,"-"] AΩ^1(^d)[l,shift left,"^†"] [rd,shift right,"-^†"', pos=0.1] [r,shift left,"K"] A^+Ω^1(^d)[l,shift left,"K^+"][r,shift left,"^†"] c^+Ω^0(^d)[l,shift left,""]
nΩ^0(^d)[rd,shift left,"-", pos=0.1] [ru,shift left,"", pos=0.1] n^+Ω^0(^d)[ld,shift left,"", pos=0.1]
c̅^+Ω^0(^d)[ru,shift left,"", pos=0.1] c̅Ω^0(^d)[lu,shift left,"-", pos=0.1][ruu,shift left,"-"]
Step (iii): removing anti-fields.
The third and final step is now the most subtle one, as we need to truncate the interaction vertices to a subset to remove the anti-fields. The colour-stripped fields form a subspace of with a natural complement of colour-stripped anti-fields, and we have projectors Π_ and Π_ such that
_ = Π_+Π_ , Π_^2 = Π_ .
Removing the anti-fields from the BV action then changes the dg commutative algebra (,,_2) of the colour-stripped action to the dg commutative algebra (',','_2). Because the action contains only fields, the differential and product it encodes can only map fields to anti-fields. Hence,
' Π_∘∘Π_ ,
_2' Π_∘_2∘(Π_⊗Π_)
with a potential cyclic structure preserved. This directly extends to modules encoding potential matter fields.
This projection requires us to redefine so as to preserve [, ]=. It is, however, clear that there is a redefinition of to an operator ' such that all fields are in its kernel and that [',']=. In our above example, we have
[row sep=1cm,column sep=2.7cm]
cΩ^0(^d)[rdd,shift left,"-"] AΩ^1(^d)[rd,shift right,"-^†"', pos=0.1] [r,shift left,"K"] A^+Ω^1(^d)[l,shift left,"K^+"] [ld,shift left,"^†", pos=0.1] c^+Ω^0(^d)[ldd,shift left,"-"]
nΩ^0(^d)[ru,shift left,"", pos=0.1] n^+Ω^0(^d)[lu,shift right,"-"', pos=0.1]
c̅^+Ω^0(^d)[luu,shift left,"-"] c̅Ω^0(^d)[ruu,shift left,"-"]
We see that the anti-field of the Nakanishi–Lautrup field takes over the role of the ghost, and this is a generic feature of gauge fixing to A=0. It is therefore clear that a redefinition →' with [',']= always exists.
Moreover, it follows that the image of ' is now fully contained in the subspace of fields ⊆:
(') ⊆ ⊆ (') .
Analogously, we have for the anti-fields ⊆:
(') ⊆ ⊆ (') .
This generalises to arbitrary gauge theories as well as abelian higher gauge theories, such as =0 supergravity.
In all cases of interest, it turns out that gauge fixing in this manner ensures that is of second order, so that we arrive at a gauge-fixed -algebra (,',_2','). This observation directly extends to -modules.
A gauge-fixed -algebra is a -algebra together with a decomposition =⊕ as graded vector spaces into field and anti-field spaces such that (<ref>) are satisfied.
§.§ Koszul hierarchy: kinematic L-infinity-algebras
Let us briefly give an outlook on our forthcoming paper <cit.>, in which we shall discuss the homotopy generalisation of the picture presented here. That is, the algebras with unary and binary operations (i.e. differentials and binary products) appearing in our discussion will be replaced by algebras with operations of arbitrary arity. Such homotopy algebras can, however, already be encountered here.
Derived brackets of the type (<ref>) are reminiscent of other derived bracket constructions, cf. <cit.>, which naturally produce higher brackets of arbitrary arity. A similar phenomenon can be observed here. Consider a theory with colour-stripped dg commutative algebra (,,_2) together with a nilpotent operator of degree -1 which gives rise to the colour-stripped propagator /[,]. While the derived bracket {-,-} given in (<ref>), which is the operator Φ^2_ in as defined in (<ref>), is no longer a Lie bracket, one finds that the Jacobi identity is violated only up to homotopy. Generally, we have the following result.
Given a graded commutative algebra with a differential δ of degree -1, the operations Φ_δ^r defined in (<ref>) form the grade-shifted higher products of an L_∞-algebra. This L_∞-algebra is known as the Koszul hierarchy. It is quasi-isomorphic to the cochain complex defined by δ.
For examples, see also <cit.>.
Another important observation was made in <cit.>, where the Koszul hierarchy was interpreted as a twisting of a cochain complex by a specific twist, see also <cit.>. This observation not only gives a surprisingly simple proof of the above proposition but also provides for new examples of hierarchies of higher brackets, called higher braces there. Such braces are referred to as natural ones if they use only the data that are available for any graded associative commutative algebra with a differential δ. As such, they could possibly also be relevant as kinematic L_∞-algebras.
The Koszul hierarchy is singled out by the requirements that the binary bracket measures the failure of δ to be of first order, that the coefficient of δ(_2(_2(ϕ_1,ϕ_2),ϕ_3)) in Φ^3_δ(ϕ_1,ϕ_2,ϕ_3) in (<ref>) is ± 1, and that Φ^k_δ=0 implies Φ^k+1_δ=0 (hereditarity), cf. <cit.>.
Hence, we can define pre--algebras.
A pre--algebra is a dg commutative algebra (,,_2) together with a differential of degree -1. The kinematic L_∞-algebra of a pre--algebra is the (shifted) L_∞-algebra given by the Koszul hierarchy.
Recall, however, from above, that there are, in fact, a number of possibly relevant kinematic L_∞-algebras (which are, again, isomorphic to the Koszul hierarchy).
We then have the following immediate specialisations of our above notions:
A pre--algebra in which the higher product μ_2 in the kinematic L_∞-algebra satisfies the shifted Jacobi identity (<ref>) is a pseudo--algebra. A pre--algebra with strict kinematic L_∞-algebra is a -algebra.
Recall from (<ref>) that being of second order is tantamount to Φ^3_δ(-,-,-) being trivial.
There are two important points to note. First of all, while L_∞-algebras can always be strictified, there is no reason to believe that any pre--algebra is quasi-isomorphic to a pseudo--algebra. Hence, we cannot expect all field theories to have an underlying pseudo--algebra, or, equivalently, exhibit CK duality. Secondly, it may be surprising that such a physically evidently non-trivial datum as the kinematic Lie algebra extends to a dg Lie algebra which is quasi-isomorphic to an ordinary cochain complex. Again, however, we have to note that this quasi-isomorphism does not amount to a physical equivalence, which would be captured by a quasi-isomorphism of the underlying pseudo--algebras. Moreover, we note that for most interesting field theories, the operator has trivial cohomology, and hence the kinematic L_∞-algebra is quasi-isomorphic to the trivial one.
We plan to investigate the deeper implications of kinematic L_∞-algebras in future work <cit.>.
Pseudo--algebras vs -algebras. Let us close this section on CK duality with a comment on the difference between pseudo--algebras and -algebras. As we saw, a pseudo--algebra (and a module over it) is the minimal requirement for having a kinematic Lie algebra manifested on the Feynman diagram expansion of the currents of a field theory. We can now conclude that the restriction to -algebras is certainly natural from a mathematical perspective: the fact that the operator in the data of a pseudo--algebra is of second order is equivalent to the Poisson identity by <ref>, which, in turn, is equivalent to the Koszul hierarchy being a dg Lie algebra.
From a physics perspective, it is natural to ask for the kinematic Lie algebra to lift uniquely to arbitrary local operators constructed by multiplying the fields in the theory. This unique lift is provided by the additional Poisson identity.
§ DOUBLE COPY AND SYNGAMIES FOR SPECIAL BV-BOX-ALGEBRAS
In this section, we shall explain how two -algebras of field theories can be combined into a syngamy. The double copy of gauge theories to supergravity theories is a special case of this construction.
Outline.
Given our discussion of CK duality, we are led to look for an interpretation of the double copy in terms of -algebras[In the rest of the paper, we will focus on -algebras and comment here and there on the problems of generalising the picture to pseudo--algebras.], and the obvious starting point is the tensor product of two -algebras <cit.>. As we will see below, this tensor product exists, extending the tensor product of two dg commutative algebras.
This direct tensor product, however, does not match our expectations. To see this, let us sketch the simple example of biadjoint scalar field theory, which is fully developed in <ref>. The -algebra of this theory has an underlying cochain complex () which is concentrated in degrees 1 and 2,
() = (_1_2) = (⊗^∞(^d)⊗^∞(^d))
for some Lie algebra. The kinematic Lie algebra is simply the Lie algebra , and the double copy of with itself is expected to yield biadjoint scalar field theory with fields taking values in ⊗⊗^∞(^d).
The tensor product ()⊗(), however, is given by the cochain complex
((⊗^∞(^d))^⊗ 2^2⊗(⊗^∞(^d))^⊗ 2(⊗^∞(^d))^⊗ 2)
concentrated in degrees 2, 3, and 4, which has several problems. First of all, there are no BV fields, as all elements of degree 1 are trivial. We will show that this problem can be solved by switching from the tensor product -algebra to its kinematic Lie algebra (), which involves a degree shift. After this, we end up with a cochain complex concentrated in degrees 1, 2, and 3. The field space, _1=(⊗^∞(^d))^⊗ 2, however, is still larger than the expectation ⊗⊗^∞(^d). This issue can be addressed by considering -algebras over the Hopf algebra _^d of <ref>, which is generated by the differential operators on space-time ^d with constant coefficients and hence allows us to control momentum dependence. As shown in <ref>, there is a natural notion of restricted tensor product of -algebras which are modules over a restrictedly tensorable cocommutative Hopf algebra such as _^d, which here amounts to restricting the tensor product to the kernel of the operators
x^μ⊗-⊗x^μ
with x^μ the Cartesian coordinates on ^d. As a result the tensor product ^∞(^d)⊗^∞(^d) is reduced to ^∞(^d), and we have a new, reduced kinematic Lie algebra . Even after this reduction, however, the homogeneous subspaces of of degrees 2 and 3 are still too large and require further reduction. In fact, we note that the space of BV fields is, in a sense, double its expected size.[A similar problem arises in the pure spinor formulation of supergravity, see the comments in <ref>.] Moreover, we note that for the biadjoint scalar field theory, is split in half as
= (⊗-⊗)⊕(⊗-⊗) ,
and (⊗-⊗) is naturally a dg Lie subalgebra. Hence, we restrict further to the kernel ⊗-⊗, and the resulting dg Lie algebra turns out to be the expected one for biadjoint scalar field theory. This restriction should be seen analogously to the section condition in double field theory (albeit we only double the functions, not the dimensions of the tensors). In this sense, the double copy is closely related to double field theory, cf. also <cit.>. After a first version of this paper was finished, we became aware of the paper <cit.>, in which essentially the same restriction was used in the context of a special form of =0 supergravity on Hermitian manifolds.
In the following, we will develop the construction sketched above in detail.
§.§ Tensor products of BV-box-algebras
Ordinary tensor product.
Recall that -algebras as defined in <ref> are dg commutative algebras endowed with an additional operation . The tensor product of two dg commutative algebras _=(_,_,_2) and _=(_,_,_2) is another dg commutative algebra =(,,_2) with =_⊗_ and the differential and product defined by
(ϕ_1⊗ϕ_1) _ϕ_1⊗ϕ_1+(-1)^|ϕ_1|ϕ_1⊗_ϕ_1 ,
_2(ϕ_1⊗ϕ_1,ϕ_2⊗ϕ_2) (-1)^|ϕ_1||ϕ_2|_2(ϕ_1,ϕ_2)⊗_2(ϕ_1,ϕ_2) .
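As a quick consistency check, and in ad hoc notation writing d_L, d_R for the two differentials and d for the differential defined above, the latter indeed squares to zero:
d^2(ϕ_1⊗ϕ_1') = d_L^2ϕ_1⊗ϕ_1'+(-1)^|d_Lϕ_1|d_Lϕ_1⊗ d_Rϕ_1'+(-1)^|ϕ_1|d_Lϕ_1⊗ d_Rϕ_1'+ϕ_1⊗ d_R^2ϕ_1' = 0 ,
since d_L^2=d_R^2=0 and the two cross terms cancel: the differential shifts the degree by one, so that (-1)^|d_Lϕ_1|=-(-1)^|ϕ_1|. The graded Leibniz rule for d with respect to _2 follows from the same sign bookkeeping.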
If both _ and _ are endowed with metrics --_ and --_ of degrees n_ and n_, respectively, then the tensor product is endowed with a metric of degree n_+n_ given by[For a proof of the cyclicity of this tensor product, see <ref>.]
ϕ_1⊗ϕ_1ϕ_2⊗ϕ_2
(-1)^|ϕ_1||ϕ_2|+n_(|ϕ_1|+|ϕ_2|)ϕ_1ϕ_2_ϕ_1ϕ_2_ .
This definition extends to a tensor product of two (metric) -algebras _ and _.
The tensor product of two -algebras _ and _ is the -algebra , whose underlying dg commutative algebra is the tensor product of _ and _, both regarded as dg commutative algebras, and whose operator is defined as
(ϕ_⊗ϕ_) _+(ϕ_⊗ϕ_) _ϕ_⊗ϕ_+(-1)^|ϕ_|ϕ_⊗_ϕ_
for all ϕ_⊗ϕ_∈. Correspondingly, we have a natural definition of on the tensor product,
[,] = _⊗+⊗_ .
We will be particularly interested in the special case that both _ and _ are -algebras over a Hopf algebra with _=_=∈. In this case, =_⊗_ is also a module over with
χ(ϕ_⊗ϕ_) (χϕ_)⊗ϕ_+ϕ_⊗ (χϕ_)
for all χ∈ and ϕ_⊗ϕ_∈.
Restricted tensor product.
As explained above, the ordinary tensor product is not directly suitable for an interpretation of the double copy, and we have to use the restricted tensor product introduced in <ref>.
Let be a restrictedly tensorable cocommutative Hopf algebra. Given two -algebras _=(_,_,_2,_) and _=(_,_,_2,_) over with _=_=∈, the tuple (,,_2,) with
_⊗^_ ⋂_χ∈((χ⊗-⊗χ)) ⊆ _⊗_
and[Note the sign flip between the two summands in relative to (<ref>), which will turn out to be convenient.]
(ϕ_1⊗^ϕ_1) _ϕ_1⊗ϕ_1+(-1)^|ϕ_1|ϕ_1⊗_ϕ_1 ,
_2(ϕ_1⊗^ϕ_1,ϕ_2⊗^ϕ_2) (-1)^|ϕ_1||ϕ_2|_2(ϕ_1,ϕ_2)⊗_2(ϕ_1,ϕ_2) ,
_-(ϕ_1⊗^ϕ_1) _ϕ_1⊗ϕ_1-(-1)^|ϕ_1|ϕ_1⊗_ϕ_1
for all ϕ_1,2∈_ and ϕ_1,2∈_ forms a dg BV algebra; in particular, _- is of second order with respect to _2.
If both _ and _ come with -linear metrics --_ and --_ of degrees n_ and n_, respectively, then
ϕ_1⊗^ϕ_1ϕ_2⊗^ϕ_2
(-1)^|ϕ_1||ϕ_2|+n_(|ϕ_1|+|ϕ_2|)ϕ_1ϕ_2_ϕ_1ϕ_2_
defines a metric for (,,_2,_-) of degree n_+n_ for all ϕ_1,2∈_ and ϕ_1,2∈_.
From the discussion in <ref>, it is clear that (,,_2) forms a dg commutative algebra, and that _- is a differential of degree -1. Furthermore, we have χ(ϕ_⊗^ϕ_)=(χϕ_)⊗^ϕ_=ϕ_⊗^(χϕ_) for all χ∈, ϕ_∈_, and ϕ_∈_. Consequently,
[,_-](ϕ_⊗^ϕ_) = (ϕ_)⊗^ϕ_-ϕ_⊗^(ϕ_) = 0 ,
because of the assumption _=_=∈. In addition, the derived bracket (<ref>) now becomes
{ϕ_1⊗^ϕ_1,ϕ_2⊗^ϕ_ 2} = (-1)^|ϕ_1||ϕ_2|{ϕ_1,ϕ_2}_⊗_2(ϕ_1,ϕ_2)
1cm-(-1)^|ϕ_1||ϕ_2|+|ϕ_1|+|ϕ_2|_2(ϕ_1,ϕ_2)⊗{ϕ_1,ϕ_2}_
for all ϕ_1,2∈_ and ϕ_1,2∈_, and closure on follows from closure of the defining operations on . It remains to show that _- is of second order, which is equivalent to the shifted Poisson identity (<ref>), as we saw in <ref>. Using the Poissonator defined in (<ref>), a lengthy but straightforward calculation similar to the derivation (<ref>) shows that
(ϕ_1⊗^ϕ_1 , ϕ_2⊗^ϕ_2 , ϕ_3⊗^ϕ_3)
.5cm= (-1)^|ϕ_2||ϕ_3|+|ϕ_1|(|ϕ_2|+|ϕ_3|)[_(ϕ_1,ϕ_2,ϕ_3)⊗_2(ϕ_1,_2(ϕ_2,ϕ_3))
1.5cm-(-1)^|ϕ_1||ϕ_2|+|ϕ_3|_2(ϕ_1,_2(ϕ_2,ϕ_3))⊗_(ϕ_1,ϕ_2,ϕ_3)]
for all ϕ_1,2,3∈_ and ϕ_1,2,3∈_. Hence, the shifted Poisson identities for (_,_,_2,_) and (_,_,_2,_) imply that of (,,_2,_-). The properties of the metric follow by restriction of those on the ordinary tensor product, see also <ref>.
When (_,_,_2,_) and (_,_,_2,_) are mere pseudo--algebras, see <ref>, then the tuple (,,_2,_-) defined in (<ref>) is generally not even a pseudo--algebra. This is because the Jacobiator (<ref>) for the derived bracket (<ref>) does not only involve the Jacobiators for the derived brackets of (_,_,_2,_) and (_,_,_2,_) but also their Poissonators, defined in (<ref>). In all physical applications, however, we are dealing exclusively with (gauge-fixed) -algebras.
Consider -algebras _ and _, which are modules over the Hopf algebra _^d defined in <ref> and whose homogeneously graded vector spaces are rings over ^∞(^d) and hence fields over space-time ^d. The above construction of the restricted tensor product will ensure that the homogeneously graded subspaces of are still fields over ^d, instead of fields over ^d⊗^d, as would be the case for the ordinary tensor product.
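As a schematic illustration of this point, consider plane waves: for momenta k and k', the element e^ i k· x⊗ e^ i k'· x is annihilated by all of the operators ∂/∂ x^μ⊗-⊗∂/∂ x^μ (written out explicitly here) if and only if k=k'. The restricted tensor product over _^d therefore identifies the momenta of the left and right factors, which is precisely the momentum matching familiar from the amplitude-level double copy, and it is in this sense that ^∞(^d)⊗^∞(^d) is cut down to functions of a single space-time argument.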
§.§ Syngamies of pure gauge theories
Syngamies.
Let us now come to the construction of syngamies, i.e. the construction of a field theory from two -algebras. The usual double copy construction and its variants will turn out to be special cases of this construction. We start with the construction for pure gauge theories, such as pure Yang–Mills or Chern–Simons theory, and theories with a flavour Lie algebra, such as the biadjoint scalar theory; theories with matter, i.e. fields in general representations of a gauge or flavour Lie algebra, will be discussed in <ref>.
Even after taking the restricted tensor product _⊗^_ of two -algebras (_,_,_2,_) and (_,_,_2,_) underlying two field theories, we still end up with a BV field space that is twice the expected size. Concretely, each of the factors _ and _ contains subspaces for fields and anti-fields, and hence the tensor product contains the subspaces
fields⊗fields , fields⊗anti-fields , anti-fields⊗fields , anti-fields⊗anti-fields ,
which is twice the expected field content of a syngamy.[Again we note that the same problem arises in the pure spinor formulation of supergravity, see the comments in <ref>.] We therefore have to restrict to the correct subspace, and a convenient choice in the case of gauge-fixed -algebras is the restriction to
(_-) = (_⊗-⊗_)
with _- defined in (<ref>),
which naturally extends the restriction from the ordinary tensor product to the restricted tensor product over . Recall from <ref> that for gauge-fixed -algebras, the kernel of contains the field space.[In many examples, ()==().] Considering the kernel of means to work with a slightly enlarged BV field space '=()⊇, which will turn out to be harmless in all relevant examples. Denoting the cokernel by ', the kernel of _- will consist of the space '_⊗^'_ as well as elements of '_⊗^'_ ⊕ '_⊗^'_ that are symmetrised such that _- annihilates them.
The BV algebra structure on (_-) yields a (metric) dg Lie algebra, which defines the syngamy field theory.
Let be a restrictedly tensorable cocommutative Hopf algebra. Furthermore, let _=(_,_,_2,_) and _=(_,_,_2,_) be two gauge-fixed -algebras over with _=_=∈ and let =(,,_2,_-) be the restricted tensor product over as defined in (<ref>). The syngamy of _ and _ is the restricted kinematic dg Lie algebra ^0() of <ref>.
Inner product.
As we shall show now, ^0 can naturally be endowed with a metric of degree -3, which is necessary for the definition of the action. The following construction may seem a bit abstract and not particularly well-motivated. Nevertheless, it will be the one reproducing all expected features when we look at concrete examples in <ref>.[
There is an analytical subtlety here, which we will largely gloss over outside of this footnote. If –__ and –__ are finite and well-defined, then of course so is the tensor product –_ and, thus, –_^0 defined with respect to it. Now, for analytic purposes it is convenient to have _ and _ be nuclear topological vector spaces, e.g. spaces of smooth functions ^∞(^d) with the usual Fréchet topology, such that the topological tensor product behaves well. In that case, however, the naïve inner product fg=∫_^dfg fails to be finite for general smooth f and g, and if one double-copies this, the restricted tensor product will consist of functions that are translation-invariant along d directions in ^d×^d, which means that –_ and therefore –_^0 have an `infinite volume factor' vol(^d) that must be cancelled or otherwise regulated away. Working naïvely, one thus runs into such harmless but annoying infinite factors, which are an artefact of the lack of infrared regulators. For more sophisticated approaches to infrared regulators, see e.g. <cit.>.
]
Firstly, in view of the tensor product (<ref>), let us write
_±(ϕ_⊗^ϕ_) _ϕ_⊗ϕ_±(-1)^|ϕ_|ϕ_⊗_ϕ_
for all ϕ_,∈_,. Evidently, _+=. It is then easy to check that
[_±,_-] = (1∓1)
[_+,_-] = 0 .
and
_±ϕ̂_1ϕ̂_2 = -(-1)^|ϕ̂_1|ϕ̂_1_±ϕ̂_2
for all ϕ̂_1,2∈.
Let =(,,_2,_-) be the dg BV algebra defined in (<ref>) and let ^0=^0() be the associated syngamy. Suppose that has a metric --_ of degree -6. We say that a metric --_^0 on the syngamy of degree -3 is compatible if
ϕ̂_1[1]ϕ̂_2[1]_^0 = (-1)^|ϕ̂_1|_-ϕ̂_1ϕ̂_2_
for all ϕ̂_1,2[1]∈^0 with _- defined in (<ref>).
If is invertible, there is a unique compatible metric on ^0.
Consider again the situation in <ref> and suppose that the action of is invertible. Then
ϕ̂_1[1]ϕ̂_2[1]_^0 (-1)^|ϕ̂_1|^-1_-ϕ̂_1ϕ̂_2_
for all ϕ̂_1,2[1]∈^0 is a compatible metric on the syngamy.
Note that as pointed out in (<ref>), we have that [_-,_-]=2. Consequently, (_-)∩(_-)⊆(_-)∩(_-)⊆() and so, our assumption that is invertible implies that _- is injective. Thus, --_^0 is non-degenerate.
Next, we must show that
ϕ̂_1[1]ϕ̂_2[1]_^0 = (-1)^|ϕ̂_1[1]||ϕ̂_2[1]|ϕ̂_2[1]ϕ̂_1[1]_^0 ,
_^0ϕ̂_1[1]ϕ̂_2[1]_^0 = -(-1)^|ϕ̂_1[1]|ϕ̂_1[1]_^0ϕ̂_2[1]_^0 ,
[ϕ̂_1[1],ϕ̂_2[1]]_^0ϕ̂_3[1]_^0 = -(-1)^|ϕ̂_1[1]||ϕ̂_2[1]|ϕ̂_2[1][ϕ̂_1[1],ϕ̂_3[1]]_^0_^0
for all ϕ̂_1,2,3[1]∈^0.
Firstly, again using (<ref>), we find
ϕ̂_2[1]ϕ̂_1[1]_^0 = (-1)^|ϕ̂_2|^-1_-ϕ̂_2ϕ̂_1_
= -ϕ̂_2^-1_-ϕ̂_1_
= -(-1)^(|ϕ̂_1|+1)|ϕ̂_2|^-1_-ϕ̂_1ϕ̂_2_
= (-1)^(|ϕ̂_1|+1)(|ϕ̂_2|+1)ϕ̂_1[1]ϕ̂_2[1]_^0
= (-1)^|ϕ̂_1[1]||ϕ̂_2[1]|ϕ̂_1[1]ϕ̂_2[1]_^0 .
Furthermore, (<ref>) also yields
_^0(ϕ̂_1[1])ϕ̂_2[1]_^0 = (-1)^|ϕ̂_1|+1^-1_-_+ϕ̂_1ϕ̂_2_
= ^-1_-ϕ̂_1_+ϕ̂_2_
= (-1)^|ϕ̂_1|ϕ̂_1[1]_^0(ϕ̂_2[1])_^0
= -(-1)^|ϕ̂_1[1]|ϕ̂_1[1]_^0(ϕ̂_2[1])_^0 .
Finally,
[ϕ̂_1[1],ϕ̂_2[1]]_^0ϕ̂_3[1]_^0 = (-1)^|ϕ̂_1|{ϕ̂_1,ϕ̂_2}[1]ϕ̂_3[1]_^0
= (-1)^|ϕ̂_2|+1^-1_-{ϕ̂_1,ϕ̂_2}ϕ̂_3_
= (-1)^|ϕ̂_1|+1_-_2(ϕ̂_1,ϕ̂_2)^-1_-ϕ̂_3_
= (-1)^|ϕ̂_2|+1_2(ϕ̂_1,ϕ̂_2)^-1_-_-ϕ̂_3_
= 2(-1)^|ϕ̂_2|+1_2(ϕ̂_1,ϕ̂_2)ϕ̂_3_
= 2(-1)^|ϕ̂_1||ϕ̂_2|+|ϕ̂_2|+1ϕ̂_2_2(ϕ̂_1,ϕ̂_3)_
= 2(-1)^|ϕ̂_2||ϕ̂_3|+|ϕ̂_2|+1_2(ϕ̂_1,ϕ̂_3)ϕ̂_2_
= (-1)^|ϕ̂_2||ϕ̂_3|+|ϕ̂_2|+|ϕ̂_3|[ϕ̂_1[1],ϕ̂_3[1]]_^0ϕ̂_2[1]_^0
= -(-1)^|ϕ̂_1[1]||ϕ̂_2[1]|ϕ̂_2[1][ϕ̂_1[1],ϕ̂_3[1]]_^0_^0 ,
where in the third step we have used (<ref>), inserted the definition (<ref>) of the derived bracket, and used that ϕ̂_1,2,3∈(_-), and in the sixth step we have used the cyclicity of _2.
We note, however, that in most cases, is not invertible. Indeed, the kernel of usually consists of the asymptotically free fields in the perturbative expansion. Nevertheless, this is a set of measure zero in the space of all fields, and the action is expected to be continuous on this space. We can therefore always extend the inner product between the interacting fields to the full field space, up to technical issues of mathematical analysis that are of little consequence for physical computations. Moreover, operator insertions closely analogous to ^-1_- have also been introduced in the context of Kodaira–Spencer theory or Bershadsky–Cecotti–Ooguri–Vafa (BCOV) theory <cit.>.[We are grateful to Pietro Antonio Grassi for bringing this to our attention.]
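Concretely, and merely to illustrate the measure-zero statement: on momentum eigenstates, the d'Alembertian acts as
e^ i k· x ↦ -k^2 e^ i k· x ,
so its kernel is spanned by the on-shell modes with k^2=0, which indeed form a measure-zero subset of momentum space and correspond to the asymptotically free fields.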
We also note that the inner product is non-local, so that the resulting (Maurer–Cartan) action may also be non-local, since it contains a factor of ^-1; this happens for instance in the double copy of Chern–Simons theory, which agrees with the non-local action found in <cit.>. In the common case where is merely a degree shift (as in the biadjoint scalar), however, is a degree-shifted version of , so that ^-1_- combines to a degree shift. Hence, the inner product and the resulting action are local if the original left and right theories are local.
Relation to the double copy.
Let us briefly explain how the above construction relates to the usual double copy construction. Recall the two perspectives on the CK duality depicted in (<ref>). There was a freedom as to whether to assign the operator in the colour-stripped propagator / to the propagator or to the interaction vertex. In the combination of two kinematic Lie algebras as e.g. in the double copy construction, we double copy everything except for the operator . Correspondingly, if we consider the combination of the kinematic Lie algebras of two -algebras _ and _, we can either work with the propagator and interaction vertex[Here and in the following, we are a bit cavalier with the action of 1/∉, but the meaning should be obvious.] very schematically written as follows:
P̃ = _⊗_/μ̃_2 = _2⊗_2
or, and this is the picture emerging from our tensor product construction,
P = _⊗+⊗_/2μ_2 = {-,-}={-,-}_⊗_2+_2⊗{-,-}_ .
This is the same choice as made in <cit.> when defining the double copy. Note that we have indeed
P̃μ̃_2 = 1/{-,-}_⊗{-,-}_ = Pμ_2
on (_-), as required for the equivalence of the two perturbative expansions. The kinematic operator _ should be, when defined, the inverse of the propagator P̃. Note that for the differential operator =_+ of the tensor product, we have
P_ = _⊗+⊗_/2(_⊗+⊗_) = ⊗+⊗/2 = / ,
as required. Hence, the perturbative expansions of currents of the reduced kinematic dg Lie algebra ^0 of the restricted tensor product _⊗^_ indeed reproduce the expected result of a combination of the kinematic Lie algebras.
To see that the cyclic structure is the correct one is a bit more subtle. It turns out that the differential and the Lie bracket in ^0 are such that they can be rescaled by a factor ^-1_- to produce local expressions, modulo a few technical subtleties. Instead of presenting an abstract discussion, we simply refer to the concrete examples in <ref>.
Tensoring by colour.
The above procedure allows us to produce a field theory or dg Lie algebra from two -algebras. The inverse of colour-stripping, namely tensoring a dg commutative algebra by a Lie algebra, also yields a field theory in the form of a dg Lie algebra. It will turn out that there is a special -algebra, namely that of the biadjoint scalar field theory, for which both constructions are equivalent. Further details are found in <ref>.
Relation to our previous construction.
In our previous work <cit.>, we considered the factorisation of the dg Lie algebra of a gauge field theory into three parts:
≅ ⊗(⊗_τ) ,
where is the gauge Lie algebra, is a kinematic vector space and ^∞(^d)[-1]⊕^∞(^d)[-2] is the BV field space of a field theory of a single, real-valued scalar field. Moreover, ⊗_τ is a twisted tensor product, a generalisation of a semi-direct product, allowing to act on ^∞(^d).
In our new picture, the -algebra is an algebraic enhancement of the dg commutative algebra ⊗_τ. Moreover, if carries an action of the Hopf algebra _^d, then the kernel of this action can naturally be associated with the space T^*[-1].
In <cit.>, we constructed the double copy by doubling the kinematic Lie algebra:
_double ≅ ⊗_τ(⊗_τ) ,
which makes intuitive sense. Here, we tensor together two copies of , which results in an unwanted doubling of . This is eliminated by considering the restricted tensor product ⊗^, reducing the functions to those on a single copy of space-time ^d, and the kernel of _-, reducing the quadrupled BV field space to the expected one.
Syngamies via compactified space-time.
In the case of concrete field theories over Minkowski space ^d, we run into the usual analytical problems of field theories. For example, the metrics are really defined only for a subset of fields that does not include e.g. asymptotically free fields. While inconsequential for concrete considerations, trying to resolve these issues leads to some interesting observations.
A natural way to cure these is to compactify space-time from ^d to the torus ^d/Λ^d with size Λ and work with the space of finite linear combinations of (possibly off-mass-shell) plane waves on the torus. Note that the Hopf algebra _^d has a natural action on after compactification. Moreover, we can replace the restricted tensor product of <ref> by the ordinary tensor product over the Hopf algebra, because
⊗__^d ≅ ,
as shown in <ref>.
Such a compactification is certainly useful since it cures all infrared divergences, but it also raises some conceptual issues: what does it mean to consider scattering amplitudes in a compact space and — worse — periodic time? The answer is that, formally, one can always define the scattering amplitudes via the homological perturbation lemma, and this, in turn, is equivalent to computing the scattering amplitudes on flat space subject to the condition that all incoming and outgoing momenta lie on the dual lattice to Λ^d. Thus, by setting the radii of the compactified torus appropriately, one can recover all scattering amplitudes.
§.§ Syngamies of theories with matter fields
Our above constructions readily extend to theories containing matter fields. In the following, we briefly explain the required constructions. The relevant theorems are more or less the same as for pure gauge theories, and we will omit the proofs if they parallel those for the pure gauge theory case up to minor and evident changes.
In the pure gauge case, the syngamy was constructed from a tensor product of -algebras. The evident generalisation for theories with fields in general representations of a gauge or flavour Lie algebra is to consider tensor products of -modules.
Tensor products of -modules.
In the following, let be again a restrictedly tensorable cocommutative Hopf algebra. We then have the following result for the tensor product of -modules.
Given two -algebras _=(_,_,_2,_) and _=(_,_,_2,_) with _=_=∈ over and modules V_=(V_,_V_,_V_,_V_) and V_=(V_,_V_,_V_,_V_) over them respectively, the tuple V̂=(V̂,_V̂,_V̂,_V̂-) with
V̂ V_⊗^ V_
and
_V̂(v_⊗^ v_) _V_v_⊗ v_+(-1)^|v_|v_⊗_V_ v_ ,
(ϕ_⊗^ϕ_)_V̂(v_⊗^ v_) (-1)^|ϕ_||v_|(ϕ__V_v_)⊗(ϕ__V_v_) ,
_V̂-(v_⊗^ v_) _V_ v_⊗ v_-(-1)^|v_|v_⊗_V_ v_
for all v_∈ V_L, v_∈ V_R, ϕ_∈_, and ϕ_∈_ forms a dg BV module over the dg BV algebra _⊗^_ defined in <ref>.
The extension of the derived bracket on to V̂ reads as
{ϕ_⊗^ϕ_,v_⊗^ v_} =(-1)^|ϕ_||v_|{ϕ_,v_}⊗ (ϕ__V_R v_)
-(-1)^|ϕ_||v_|+|ϕ_|+|v_|(ϕ__V_L v_)⊗{ϕ_,v_} .
Provided that both _ and _ come with -linear metrics --_ and --_ of degrees n_ and n_, respectively, and both V_ and V_ come with -linear metrics --_V_ and --_V_ of degrees n_ and n_, respectively,
then
v_1⊗ v_1v_2⊗ v_2_V̂ (-1)^|v_1||v_2|+n_(|v_1|+|v_2|)v_1v_2_V_v_1v_2_V_
defines a -bilinear metric for V̂ of degree n_+n_ for all v_1,2∈ V_ and v_1,2∈ V_.
The proof follows closely that of <ref>.
Syngamies.
We can straightforwardly generalise syngamies to gauge theories with matter as follows.
Let _=(_,_,_2,_) and _=(_,_,_2,_) be two -algebras over with _=_=∈, and let =(,_,_2,_-) be their tensor product over as defined in (<ref>). Let V_=(V_,_V_,_V_,_V_) and V_=(V_,_V_,_V_,_V_) be -modules over _ and _, respectively, and let V̂=(V̂,_V̂,_V̂,_V̂-) be their tensor product over as defined in (<ref>). The syngamy of the pairs (_,V_) and (_,V_) is the restricted kinematic Lie algebra ^0() as defined in <ref>, together with the restricted kinematic Lie algebra module ^0(V̂) over ^0(), defined in <ref>.
By <ref>, we know that the syngamy is a dg Lie module over a dg Lie algebra.
To complete the syngamy, we have to endow ^0(V̂) with a metric. Analogously to the case of -algebras and in view of the tensor product (<ref>), let us define
_V̂±(v_⊗ v_) _V_v_⊗ v_±(-1)^|v_|v_⊗_V_v_
for all v_,∈ V_,; evidently, _+=. It is then easy to check that
[_V̂±,_V̂-] = (1∓1)
[_V̂+,_V̂-] = 0
and
_V̂± v_1v_2_V̂ = -(-1)^|v_1|v_1_V̂±v_2_V̂
for all v_1,2∈V̂. Next, we introduce the notion of compatible metrics for modules.
Let (V̂,_V̂,_V̂,_V̂-) be the dg BV module over the dg BV algebra =(,_,_2,_-) defined in (<ref>) and let (^0,_^0,[-,-]_^0) and (^0,_^0,_^0,_^0) be the associated syngamy. Suppose that has a metric --_ of degree -6. We say that a metric --_^0 on the dg Lie algebra module V_0 in the syngamy (^0,^0) of degree -3 is compatible if
v_1[1]v_2[1]_^0 = (-1)^|v_1|_V-v_1v_2_V̂
for all v_1,2[1]∈^0=(_V̂-)[1] with _V̂- as defined in (<ref>).
Consider again the situation of <ref> and assume that the actions of on both the BV algebra and the BV module are invertible. Then,
v_1[1]v_2[1]_^0 (-1)^|v_1|^-1_V-v_1v_2_V̂
for all v_1,2[1]∈^0=(_V̂-)[1] with _V̂- as defined in (<ref>) is a compatible metric on the syngamy.
The proof is a minor variation of that of <ref>.
§ EXAMPLES
§.§ Biadjoint scalar field theory
The simplest and archetypal example of a theory with colour–kinematics duality is certainly the theory of a biadjoint scalar field with evident cubic interaction, a theory that is frequently used as a toy model in the scattering amplitudes literature <cit.>.
Differential graded Lie algebra.
Consider two flavour metric Lie algebras and with bases _a and _a̅, structure constants f_ab^c and f̅_a̅b̅^c̅ and metrics g_ab and g̅_a̅b̅, respectively. Classically, a biadjoint scalar field φ is a (⊗)-valued function on ^d, and we write
φ = _a⊗_a̅⊗φ^aa̅ ∈ (⊗)⊗^∞(^d) .
We shall be interested in the theory with action functional
S^biadj ∫^dx{12φ^aa̅g_abg_a̅b̅φ^bb̅+13!φ^aa̅g_abg_a̅b̅f_cd^bf̅_c̅d̅^b̅φ^cc̅φ^dd̅} .
The L_∞-algebra corresponding to this field theory is the dg Lie algebra ^biadj=⊕_p∈^biadj_p with underlying cochain complex
(^biadj) (
[column sep=20pt]
*[r] (⊗)⊗^∞(^d)_ ^biadj_1[r,"_⊗⊗"] [30pt] (⊗)⊗^∞(^d)_ ^biadj_2[r] *
) ,
where * denotes the trivial vector space. In particular, we have the field[in the sense of the BV formalism, i.e. as opposed to an anti-field] φ∈^biadj_1, the corresponding anti-fields φ^+=φ^+_aa̅^a^a̅∈^biadj_2 for ^a=g^ab_a and ^a̅=g̅^a̅b̅_a̅, and the only non-trivial component of the differential μ_1_⊗⊗. The non-vanishing components of the cyclic inner product are
φφ^+ ∫^dx φ^aa̅φ^+_aa̅ .
The interactions are encoded in the Lie bracket μ_2^biadj×^biadj→^biadj, and the only non-trivial components are
μ_2(φ_1,φ_2) f_ab^c_c⊗f̅_a̅b̅^c̅_c̅⊗φ^aa̅_1φ^bb̅_2
for all φ_1,2∈^biadj_1.
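As a consistency check (with the normalisation conventions assumed here), the Maurer–Cartan equation of this dg Lie algebra,
μ_1(φ)+1/2 μ_2(φ,φ) = 0 ,
reproduces the field equation obtained by varying the action above: in components, □φ^cc̅+1/2 f_ab^cf̅_a̅b̅^c̅φ^aa̅φ^bb̅ = 0, where □ denotes the d'Alembertian appearing in the kinetic term.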
-algebra and colour–kinematics duality.
Regarding one of the two Lie algebras (say ) as colour, we may strip it off to form a -algebra. This amounts to the factorisation
^biadj ≅ ⊗^biadj.
Explicitly, ^biadj has the underlying cochain complex
(^biadj) (
[column sep=40pt]
*[r] ⊗^∞(^d)_ ^biadj_1[r,"_⊗"] ⊗^∞(^d)_ ^biadj_2[r] *
)
with φ=_a̅ φ^a̅∈^biadj_1, φ^+=^a̅ φ^+_a̅∈^biadj_2, and _⊗. Note that we continue to label colour-stripped fields by φ, slightly abusing notation. Furthermore, we have
_2(φ_1,φ_2) f̅_a̅b̅^c̅_c̅⊗φ^a̅_1φ^b̅_2
φφ^+ ∫^dx φ^a̅φ^+_a̅ .
To extend ^biadj to a -algebra, we need to endow it with an operator such that [,]=. The evident choice here is the shift isomorphism (denoted [1]).
[1] : _2^biadj _1^biadj .
The derived bracket {-,-} of (<ref>) is then
{φ_1,φ_2} = (_2(φ_1,φ_2)) = f̅_a̅b̅^c̅ _c̅⊗φ_1^a̅φ_2^b̅∈^biadj_1 ,
{φ_1,φ^+_2} = _2(φ_1,φ^+_2) = f̅_a̅b̅^c̅g^b̅d̅ _c̅⊗φ_1^a̅φ^+_2d̅ = {φ^+_2,φ_1}∈^biadj_2 .
It is then easy to check that all the remaining axioms are satisfied; in particular, is of second order, which amounts to the following specialisation of (<ref>):
0 = -_2(φ_1,(_2(φ_2,φ_3)))+_2(φ_2,(_2(φ_1,φ_3)))-_2(φ_3,(_2(φ_1,φ_2)))
for all φ_1,2,3∈^biadj_1, a consequence of the Jacobi identity. We will denote the resulting -algebra also by ^, to indicate the choice of Lie algebra . This -algebra will play an important role as a replacement for the colour Lie algebra later.
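To spell out why the condition above is a consequence of the Jacobi identity (schematically, suppressing the shift isomorphism and overall signs): inserting the components of _2, the three terms combine into
(f̅_a̅ē^d̅f̅_b̅c̅^ē+f̅_b̅ē^d̅f̅_c̅a̅^ē+f̅_c̅ē^d̅f̅_a̅b̅^ē) φ_1^a̅φ_2^b̅φ_3^c̅ = 0 ,
which is precisely the Jacobi identity for the structure constants f̅.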
According to <ref>, the existence of the -algebra ^ proves that the biadjoint scalar field theory possesses CK duality on its currents. Because is a shift isomorphism, all fields φ are of the form φ=φ^+ for some anti-field φ^+, and hence CK-duality extends to the amplitudes.
Syngamy.
We now follow the approach of <ref> and consider the syngamy of the two -algebras ^ and ^ for , some Lie algebras. To this end, we note that the -algebra comes with a natural action of the Hopf algebra _^d from <ref>, and it is easy to check that all operations are _^d-linear with respect to this action.
The restricted tensor product ^⊗^^ has then the underlying cochain complex
(
[column sep=20pt]
*[r] ⊗⊗^∞(^d)_ _2[r] ^2⊗⊗⊗^∞(^d)_ _3[r] ⊗⊗^∞(^d)_ _4[r] *
) ,
and we have a corresponding kinematic Lie algebra concentrated in degrees 1,2,3. We will be interested in the shifted Lie bracket [-,-][1] on fields φ_1,2∈^⊗^^, which reads as
[φ_1,φ_2][1] = _2(φ^(1)_1,φ^(1)_2)⊗_2(φ_1^(2),φ_2^(2))+_2(φ^(1)_1,φ^(1)_2)⊗_2(φ_1^(2),φ_2^(2)) ,
where we used again Sweedler notation φ_1,2=φ_1,2^(1)⊗φ_1,2^(2).
We note that the cochain complex (<ref>) is split in half, into the kernel and cokernel of the operator
_- [1]⊗-⊗[1] .
In particular, the kernel is given by _2 as well as the symmetrised sum of the two copies of ⊗⊗^∞(^d) contained in _3.
With <ref>, we note that the restricted kinematic Lie algebra ^0=^0(), i.e. restricted to the kernel, cf. <ref>, together with the differential [1] becomes a dg Lie algebra ^0 with underlying cochain complex
(^0) = (
[column sep=20pt]
*[r] ⊗⊗^∞(^d)_ ^0_1[r,"_⊗_⊗"] [30pt] ⊗⊗^∞(^d)_ ^0_2[r] *
) .
Moreover, the product μ_2 can be read off from (<ref>), and its non-trivial components are given by
μ_2(φ_1,φ_2) _c⊗_c̅⊗φ^aa̅_1φ^bb̅_2f_ab^cf̅_a̅b̅^c̅
for all φ_1,2∈^0_1.
On fields that are not in the kernel of = (e.g. Schwartz-type functions describing interacting fields), a metric can be defined by means of <ref>:
φ_1φ_2_^0 ^-1_-φ_1φ_2_ .
Because of the symmetry of --_^0 established in <ref>, we can assume that φ_1 is a field, i.e. an element of _2[1], without loss of generality. In this case, |φ_1^(1)|=1 and hence
^-1(_-φ_1) = ^-1(φ_1^(1)⊗φ_1^(2)-φ_1^(1)⊗φ_1^(2))
= (φ_1^(1)[-1]⊗φ_1^(2)-φ_1^(1)⊗φ_1^(2)[-1]) ,
where we used that (^-1φ_1^(1))⊗φ_1^(2)=φ_1^(1)⊗ (^-1φ_1^(2)). The restriction to (_-)⊆_2⊕_3 with ^0, together with the removal of the infinite volume factor along the constant directions (cf. the discussion in <ref>), then leads to the expected inner product (<ref>).
Altogether, we see that ^0=^biadj and, as expected, the resulting double copy is the biadjoint scalar theory with Lie algebras and .
Note that, as predicted above, the role of the colour Lie algebras is played by the -algebras ^ and ^. In particular, constructing the syngamy of a -algebra with the -algebra ^ produces the same field theory (in the form of a dg Lie algebra) as if we tensored the dg commutative algebra underlying with . This relation is quite evident for field theories where the differential in is =, such as biadjoint scalar and conventional rewritings of Yang–Mills theory, but it also extends to Chern–Simons theory, as we shall see in <ref>.
§.§ Biadjoint scalar theory with bifundamental matter
The simplest example including matter fields is the biadjoint scalar theory coupled to a bifundamental scalar, cf. <cit.>, i.e. a scalar field taking values in the (metric) fundamental representations[The choice of fundamental representation is just for concreteness sake; the theory straightforwardly generalises to arbitrary metric representations.] R⊗R̅ of the Lie algebras ⊗.
Differential graded Lie algebra.
Explicitly, we couple the biadjoint scalar field theory (<ref>) to the action for a bifundamental scalar field
S^biadj-fun S^biadj+∫^dx{12ψ^i g_ijg̅_ψ^j+12ψ^ig_ijg̅_T_aj^iT̅_a̅^φ^aa̅ψ^j} ,
where
ψ = _i⊗_⊗ψ^i ∈ (R⊗R̅)⊗^∞(^d) ,
and where we have introduced bases, _i and _, metrics g_ij and g_ with respect to these bases, and structure constants, T_ a j^i and T̅_a̅^, describing the interactions, for R and R̅, respectively.
The underlying cochain complex of the dg Lie algebra ^biadj-fun is that of ^biadj enlarged to
(^biadj-fun) (
[
row sep=-3pt,column sep=12pt
]
(⊗)⊗^∞(^d) [r,"_⊗⊗"] [25pt] (⊗)⊗^∞(^d)
[r,shorten >=8ex] ⊕ ⊕[r,shorten <=8ex] *
(R⊗R̅)⊗^∞(^d)[r,"_R⊗R̅⊗"] (R ⊗R̅)⊗^∞(^d)
),
where the anti-fields φ^+ and ψ^+ belong to the degree shifted copies of (⊗)⊗^∞(^d) and (R⊗R̅)⊗^∞(^d), respectively. The fields φ, ψ and anti-fields φ^+, ψ^+ have dg Lie algebra degree 1 and 2 (and, thus, ghost degree 0 and 1), respectively.
The interactions are encoded in the graded anti-symmetric Lie bracket
μ_2 : ^biadj-fun×^biadj-fun → ^biadj-fun ,
which has non-trivial components
μ_2(φ_1,φ_2) _cf_ab^c⊗_c̅f̅_a̅b̅^c̅⊗φ^aa̅_1φ^bb̅_2 ,
μ_2(φ,ψ) _j T_ ai^j⊗_T̅_a̅^⊗φ^aa̅ψ^i=μ_2(ψ,φ) ,
μ_2(ψ_1,ψ_2) _a T^ a_ij⊗_a̅T̅^a̅_⊗ψ_1^iψ_2^j
for all φ,φ_1,2 and ψ,ψ_1,2 in the evident subspaces of ^biadj-fun_1. The assumption that R,R̅ are metric implies the existence of a cyclic structure with non-vanishing components
φ+ψφ^++ψ^+ ∫^dx {φ^aa̅φ^+_aa̅ + ψ^iψ^+_i}
for all φ,ψ and φ^+,ψ^+ in the evident subspaces of ^biadj-fun_1 and ^biadj-fun_2, respectively. Altogether, ^biadj-fun is a metric nilpotent dg Lie algebra.
Colour–flavour-stripping.
The next step is to perform a colour–flavour-stripping as explained in <ref>. Without loss of generality, we can choose and R to be the colour–flavour factors, and we expect a factorisation of the dg Lie algebra ^biadj-fun as follows:
^biadj-fun ≅ ⊗^biadj ⊕ R⊗ V^bifun
with ^biadj as defined in (<ref>), and where V^bifun=(V^bifun,_V^bifun,_V^bifun) is a (dg) module over ^biadj with underlying cochain complex
(V^bifun) (
[
row sep=-3pt
]
*[r] R̅⊗^∞(^d) [r,"_R̅⊗"] R̅⊗^∞(^d) [r] *
).
The action is defined as
φ_V^bifunψ _T_a^⊗φ^aψ^ .
A short computation then verifies the factorisation (<ref>).
-module structure and colour–kinematics duality.
We have already seen that the dg commutative algebra ^biadj can be enriched to a -algebra ^g̅; it remains to enrich V^bifun to a -algebra module, which we will denote by the same letter. As in the case of the dg commutative algebra, the required additional operator _V^bifun is here given by the evident degree shift
_V^bifun [1] : V_2^biadj V_1^biadj .
The derived bracket {-,-}_V^bifun^biadj× V^bifun→ V^bifun as defined in (<ref>)
reads as
{φ,ψ}_V^bifun = (_T̅_a̅^⊗φ^a̅ψ^)[1] ,
{φ,ψ^+}_V^bifun = _T̅_a̅^⊗φ^a̅(ψ^+[1]) ,
{φ^+,ψ}_V^bifun = _T̅_a̅^⊗(φ^+a̅[1])ψ^ ,
{φ^+,ψ^+}_V^bifun = 0
for all φ∈^biadj_1, φ^+∈^biadj_2, ψ∈ V^bifun_1, and ψ^+∈ V^bifun_2. Together with the derived bracket of the biadjoint scalar theory, see (<ref>), it follows that {-,-}_V^bifun satisfies the shifted Poisson identity (<ref>) and
(V^bifun,_V^bifun,_V^bifun,_V^bifun)
is a BV^-module over the -algebra ^biadj.
Double copy.
To illustrate syngamies involving matter fields, let us consider the syngamy of two copies of (^biadj,V^bifun) with Lie algebras and metric fundamental representations (,R) and (,R̅), respectively. The restricted tensor product of the two -algebras is given in (<ref>), and the restricted tensor product V̂ of the -modules similarly has underlying cochain complex
(
[column sep=17pt]
*[r] R⊗R̅⊗^∞(^d)_ V̂_2[r] ^2⊗ R⊗R̅⊗^∞(^d)_ V̂_3[r] R⊗R̅⊗^∞(^d)_ V̂_4[r] *
) ,
and by <ref>, there is a corresponding underlying module for the kinematic Lie algebra of defined in (<ref>). Again, the cochain complex (<ref>) is split in half into the kernel and cokernel of the operator
_V̂- [1]⊗-⊗[1] ,
and (_V̂-) consists of _2 and a symmetrised sum of the two copies of R⊗R̅⊗^∞(^d) in _3. Restricted to this kernel, becomes a dg module ^0 over the reduced kinematic dg Lie algebra ^0 of by <ref>.
The reduced kinematic dg Lie algebra ^0 and the dg module ^0 now combine into a single dg Lie algebra, and it is not hard to see that this dg Lie algebra is ^biadj-fun, the dg Lie algebra we started from. In particular, the double copy of the metric (<ref>) is fully analogous to that of the metric in biadjoint scalar theory. Hence, the syngamy of two copies of (^biadj,V^bifun) yields a biadjoint scalar theory coupled to bifundamental matter.
§.§ The sesquiadjoint scalar and kinematic L-infinity-algebras
In order to illustrate at least one case of a kinematic L_∞-algebra (again, anticipating our future work <cit.>), we introduce a sesquiadjoint scalar field theory.
Differential graded Lie algebra and colour-stripping.
The setup is almost identical to the biadjoint scalar, except that we replace in ⊗ with a vector space W equipped with an anti-symmetric binary operation [-,-]:W× W→ W that does not (necessarily) fulfil the Jacobi identity.[Such products were considered, e.g., in <cit.>.]
Colour-stripping, we have a dg commutative algebra ^seqadj with underlying cochain complex,
( ^seqadj) (
[
row sep=-3pt
]
*[r] W⊗^∞(^d) [r,"_W⊗"] W ⊗^∞(^d) [r] *
),
and non-trivial graded symmetric product
_2 : W⊗^∞(^d)× W⊗^∞(^d) → (W⊗^∞(^d))[-1] ,
(φ_1,φ_2) _cf_ab^c⊗(φ^a_1φ^b_2) ,
where we have introduced a basis, _a, for W and structure constants f_ab^c for the binary operation [-,-] that does not obey the Jacobi identity.
Kinematic L_∞-algebra.
As before, the shift isomorphism
[1] : _2^sesqadj _1^sesqadj .
satisfies +=. The non-trivial higher-order differentials, as defined in (<ref>) with δ==[1], are given by
Φ^1_(ϕ^+_1) ϕ^+_1[1] ,
Φ^2_(ϕ_1,ϕ_2) (ϕ_1,ϕ_2)[1] ,
Φ^2_(ϕ_1,ϕ^+_2) (ϕ_1,ϕ^+_2[1]) ,
Φ^2_(ϕ^+_1,ϕ_2) - (ϕ^+_1[1], ϕ_2) ,
Φ^3_(ϕ_1,ϕ_2,ϕ_3) (ϕ_1[1],(ϕ_2,ϕ_3))-( (ϕ_1,ϕ_2)[1],ϕ_3)
+(ϕ_2,(ϕ_1,ϕ_3)[1]) .
By <ref>, the higher products μ_iΦ^i_ define an L_∞-algebra on the shifted cochain complex (^seqadj)[1]. Here, μ_3 (as always) describes the homotopy that encodes the failure of μ_2 to satisfy the Jacobi identity, which in turn is due to the bracket [-,-] not satisfying the Jacobi identity. This derived L_∞-algebra is directly analogous to the derived Lie algebra of the kinematic Lie algebra. It is an example of the kinematic L_∞-algebras described in <ref>. We stress that the homotopy Jacobi relations in this example are non-trivial.
General setting.
Since there is always a graded commutative product _2, every perturbative Lagrangian BV theory has such a kinematic L_∞-algebra (under the very weak assumption that there is a suitable ). We plan to explore the significance of this observation further in future work. The most radical implication that one might envisage is that every theory can be double-copied using the kinematic L_∞-algebra structure. This seems (at least superficially) unlikely, and the standard double copy argument <cit.> for scattering amplitudes is certainly not generalised in an obvious fashion.
In the above example, in particular, the differential Φ^1_ has trivial cohomology, and hence the L_∞-algebra of the Koszul hierarchy is quasi-isomorphic to the trivial one[This is in close analogy to the Lie or L_∞-algebra of inner derivations of a Lie or L_∞-algebra being contractible or quasi-isomorphically trivial.]. By contrast, the usual kinematic Lie algebra is non-trivial precisely because we can halve the field content and render the (cohomology of the) kinematic algebra non-trivial. This possibly suggests that generic kinematic L_∞-algebras are not of use in the double copy.
§.§ Pure Chern–Simons theory
So far, we encountered scalar field theories which directly exhibited CK duality. In this example, we increase the complexity by introducing gauge symmetry while still maintaining manifest CK duality.
Differential graded Lie algebra.
Let be a metric Lie algebra with basis _a relative to which we have structure constants f_ab^c and a metric g_ab. Furthermore, let Ω^p(^3) be the differential p-forms on ^3 with the exterior differential Ω^p(^3)→Ω^p+1(^3) and let ⋆:Ω^p(^3)→Ω^3-p(^3) be the usual Hodge operator with respect to the Minkowski metric on ^3.
The field content of Chern–Simons theory consists of the Chern–Simons gauge potential A=_a⊗ A^a with A^a∈Ω^1(^3) and its ghost c=_a⊗ c^a with c^a∈Ω^0(^3), paired with their anti-fields A^+=_a⊗ A^+a with A^+a∈Ω^2(^3) and the ghost anti-field c^+=_a⊗ c^+a with c^+a∈Ω^3(^3). In addition to this usual BV field content, we also add a Nakanishi–Lautrup field n=_a⊗ n^a with n^a∈Ω^0(^3) and an anti-ghost c̅=_a⊗c̅^a with c̅^a∈Ω^0(^3), together with the corresponding anti-fields n^+ and c̅^+. After gauge fixing with the gauge-fixing fermion Ψ=∫{g_abc̅^a∧⋆ (^† A^b-12 n^b)}, the action functional looks as follows:[For the L_∞-algebra before gauge-fixing, see e.g. <cit.>.]
S^CS ∫{12g_abA^a∧ A^b+13g_abf_cd^bA^a∧ A^c∧ A^d
2cm-g_abc̅^a∧⋆^†(∇ c)^b+12 g_abn^a∧⋆ n^b+g_abn^a∧⋆^† A^b} .
The dg Lie algebra structure is readily read off, and we directly continue with colour-stripping.
Colour-stripping and -algebra structure.
All of the fields take values in the colour Lie algebra, and after colour-stripping, we obtain a dg commutative algebra ^CS, which comes with a natural operator , and has the following underlying bidirectional complex, cf. (<ref>):
[row sep=1cm,column sep=2.7cm]
cΩ^0(^d)[rdd,shift left,"-"] AΩ^1(^d)[rd,shift right,"-^†"', pos=0.1] [r,shift left,""] A^+Ω^2(^d)[l,shift left,"^†"] [ld,shift left,"⋆", pos=0.1] c^+Ω^4(^d)[ldd,shift left,"-⋆"]
nΩ^0(^d)[ru,shift left,"⋆", pos=0.1] n^+Ω^0(^d)[lu,shift right,"-"', pos=0.1]
cΩ^0(^d)_ ^CS_0 c̅^+Ω^0(^d)_ ^CS_1[luu,shift left,"-"] c̅Ω^0(^d)_ ^CS_2[ruu,shift left,"-⋆"] c^+Ω^0(^d)_ ^CS_3
The binary products are given as follows:
_2([ A_1; n_1; c̅^+_1 ],[ A_2; n_2; c̅^+_2 ]) [ A_1∧ A_2; 0; 0 ] ∈ _2^CS ,
_2([ A_1; n_1; c̅^+_1 ],c_2) [ 0; 0; ^†(A_1 c_2) ] ∈ _1^CS ,
_2(c_1,[ A_2^+; n_2^+; c̅_2 ]) [ cc̅; 0; 0 ] ∈ _2^CS ,
where the notation and positions of the components in the arguments and images in these expressions correspond to those of diagram (<ref>). We clearly see that the operator implied by (<ref>) is of second order with respect to these binary products, and we obtain indeed a -algebra structure. Moreover, there is an evident metric with the following, non-vanishing components:
AA^+ ∫ A∧ A^+ , cc^+ ∫ c∧ c^+ ,
nn^+ ∫ n∧⋆ n^+ , c̅c̅^+ ∫c̅∧⋆c̅^+ .
Colour–kinematics duality.
We recall that the tree-level amplitudes of Chern–Simons theory on ^d are all trivial. However, following e.g. <cit.>, we can consider the homotopy transfer to harmonic forms[i.e. amputated correlation functions with external legs being harmonic forms] on ^d, and it is the CK duality for this Feynman diagram expansion that the -algebra ^CS manifests. Moreover, we have =[,]=, which is evident from the diagram (<ref>), so that the arising kinematic Lie algebra is indeed for the ordinary form of CK duality with propagator 1/. Note that here, we have full loop level CK-duality.
Comments.
Before coming to the double copy, let us comment on a new feature in Chern–Simons theory. Contrary to previous theories, the -operator, concretely the component |_^CS_2, is no longer simply a shift isomorphism. Therefore, the kernel of no longer cleanly cuts the BV field space into fields and anti-fields, and some parts of the anti-fields are left in (). These parts, however, are very small; they consist of exact and coexact anti-fields A^+ of the gauge potential (which on ^3 amounts to a harmonic scalar field) as well as constant Nakanishi–Lautrup anti-fields n^+. We can usually ignore this issue, as the common constraints on a quantum field theory such as locality etc. allow us to truncate away subspaces that are not full ^∞(^d)-modules. If one feels uncomfortable about this truncation, one can also extend our notion of -algebra to -algebras with polarisations, i.e. structures that compatibly split the field space into fields and complementing anti-fields, respecting in particular (<ref>). Since these additional technicalities do not add much in concrete discussions, we refrained from using these notions.
Double copy.
With the above technicality out of the way, we can follow our usual prescription using the evident Hopf algebra _^3 generated by the translation operators on ^3, and consider the kernel of _-, cf. (<ref>). This leads to a BV field space with the fields, i.e. the (truncated) elements of (_)⊗^(_)⊆(_-) given by the direct sums of the spaces
[row sep=0cm, column sep=0.2cm]
c_⊗ c_Ω^0(^3) c_⊗ A_Ω^1(^3)⊕A_⊗ c_Ω^1(^3) A_⊗ A_Ω^1(^3)⊗Ω^1(^3)
c_⊗ n_Ω^0(^3)⊕n_⊗ c_Ω^0(^3) A_⊗ n_Ω^1(^3)⊕n_⊗ A_Ω^1(^3)
c_⊗c̅_Ω^0(^3)⊕c̅_⊗ c_Ω^0(^3) A_⊗c̅_Ω^1(^3)⊕c̅_⊗ A_Ω^1(^3)
c_⊗ c_Ω^0(^3)_ ^CSCS_-1 c_⊗ A_Ω^1(^3)⊕A_⊗ c_Ω^1(^3)_ ^CSCS_0 n_⊗ n_Ω^0(^3) _ ^CSCS_1 n_⊗c̅_Ω^0(^3)⊕c̅_⊗ n_Ω^0(^3)_ ^CSCS_2 c̅_⊗c̅_Ω^0(^3)_ ^CSCS_3
where we have indicated the origin of the subspaces using the component notation of (<ref>), and we have also indicated the degree of the fields in the resulting double-copied dg Lie algebra ^CSCS. The corresponding anti-fields form a grade-shifted and flipped dual copy of this field space, and together they form the graded vector space of the dg Lie algebra ^CSCS.
The differential and the product of the dg Lie algebra ^CSCS are straightforwardly constructed, but the cyclic structure is a bit more complicated. For the propagating field components, i.e. those components of fields that are not in the kernel of , we can use <ref> to define this inner product. We can then continue the resulting expression to all fields by locality.
Altogether, the double copy leads to a rather unusual BV field theory, whose physical part was first presented in <cit.>. Explicitly, the kinetic term of the action for the physical fields given by the (1,1)-biforms A_⊗ A_∈Ω^1(^3)⊗Ω^1(^3) reads as
1/4∫{(A_⊗ A_)∙^-1_-μ_1(A_⊗ A_)} = 1/2∫{(A_⊗ A_)∙⊗/(A_⊗ A_)},
where the product ∙:Ω^p_1(^3)⊗Ω^q_1(^3)×Ω^p_2(^3)⊗Ω^q_2(^3)→Ω^p_1+p_2(^3)⊗Ω^q_1+q_2(^3) on biforms is defined as
(A_1⊗ B_1)∙(A_2 ⊗ B_2) (A_1∧ A_2)⊗(B_1∧ B_2) .
The interaction terms for the physical fields are given by
∫13!(A_⊗ A_)∙(A_⊗ A_)∙(A_⊗ A_) ,
and together, (<ref>) and (<ref>) are the double-copied Chern–Simons action of <cit.> in the (p,q)-formalism of <cit.>. A further study of this action is certainly warranted, particularly since it will also appear in <ref> in the context of M2-brane models.
We note that a useful outcome of our double copy construction is the full BV triangle required for studying biform theories.
§.§ Self–dual Yang–Mills theory and self–dual gravity
The field theories studied in the previous sections came with a -algebra structure in their original formulation. This is contrary to the case of Yang–Mills theory, where the action has to be rewritten in an equivalent form in order to manifest CK duality, cf. <cit.> and the detailed discussion in <cit.>. A theory that is in between both cases is self-dual Yang–Mills (SDYM) theory, which features CK duality on its currents <cit.>. Presented in light-cone gauge, it is essentially a biadjoint scalar field theory, and therefore manifestly CK-dual. In the gauge-invariant form of the Chalmers–Siegel action <cit.>, which contains an enlarged field content featuring also an anti-self-dual 2-form field, however, it does require an equivalent rewriting in order to manifest CK duality. As stated in the introduction, CK duality is ultimately a symmetry of the action, and therefore we may expect an organisational principle that leads to a manifest formulation.
In <cit.>, we showed that the twistor space Z, i.e. the total space of the holomorphic vector bundle (1)⊕(1) over P^1 can serve as such an organising principle. Explicitly, SDYM theory can be equivalently formulated as a holomorphic Chern–Simons theory on Z, and, as for ordinary Chern–Simons theories, there is a natural adjoint of the Dolbeault differential that is of second order with respect to the binary product, and hence an operator that enhances the evident dg commutative algebra structure for holomorphic Chern–Simons theory on Z to a -algebra structure. Even better, we have =, the d'Alembertian on space-time in this situation, so that the kinematic Lie algebra describes indeed ordinary CK duality on currents and, in the maximally supersymmetric case, even loop level amplitudes. An elegant example of the formalism presented in this paper can be found in <cit.>, where we consider an action equivalent to and reminiscent of the light-cone formulation of SDYM theory on twistor space, which elegantly double copies to an analogous formulation of self-dual gravity, also on twistor space. For all the technical details of the above, we refer to <cit.> and <cit.>.
Instead, let us briefly compare this result with that of <cit.>. In that paper, the authors considered the equations of motion and gauge transformations of SDYM theory on space-time, together with its colour-stripped dg commutative algebra, in order to study the kinematic algebra in the absence of space-time gauge-fixing (as opposed to the light-cone gauge analysis of <cit.>). As for Chern–Simons theory, there is a natural candidate for the -operator, namely =^†, the usual Hodge dual of the de Rham differential. As it stands, this differential is not second order with respect to the binary product, as the latter is not just a wedge product of forms but, at least on fields, contains a projection operator. Therefore, as observed there, the derived bracket (<ref>) in this picture is not a Lie bracket but, as explained in <ref>, the binary bracket of a kinematic L_∞-algebra. This is precisely what the authors of <cit.> observe to lowest order: there is a ternary operation, given by the expression from the Koszul hierarchy, so that the derived bracket satisfies the homotopy Jacobi identity of an L_∞-algebra.
The authors of <cit.>, however, obtain more. They show that the graded Poisson relation (<ref>) of the derived bracket (<ref>) is violated in a controlled way, and they compute the correction to this order. This leads to parts of a _∞-algebra <cit.>, see also <cit.>. In this sense, CK duality is not manifested literally, but only `up to homotopy'. The usual strictification theorem for homotopy algebras applies, and hence one can rewrite the theory in an equivalent form that makes use of an ordinary -algebra, and therefore manifests CK duality. We note that the 3-bracket inserted in <cit.> corresponds, after inserting a metric, and further an action principle, to a Tolotti–Weinzierl-type term that may be added to the action to manifest CK-duality to this order.
We also note that our formulation on twistor space directly produces such a rewriting. Twistor space Z is diffeomorphic to the space[In the supersymmetric case, ^4 is replaced by ^4|2.] ^4× P^1, and one can perform a mode expansion along P^1. Some of these infinitely many modes correspond to physical fields on space-time; the rest will be the auxiliary fields that produce the Tolotti–Weinzierl terms[These are terms in the action that vanish due to the Jacobi identity of the colour algebra, cf. <cit.> and also <cit.>.] in the action necessary for manifesting CK duality. The obtained action will hence be the usual first-order formulation of SDYM theory given by the Chalmers–Siegel action <cit.> plus additional trivial terms, which will become non-trivial after colour-stripping. Note that the twistor formulation allows for a choice of gauge, usually called space-time gauge, that directly leads to the Chalmers–Siegel action <cit.>, see also <cit.>.
Altogether, we saw that twistor space can serve as an organising principle that naturally leads to CK-dual formulations of field theories. In the case of full Yang–Mills theory, one can use ambitwistor space, and while this description still yields a kinematic Lie algebra, the operator is not the space-time d'Alembertian operator, so we only obtain a generalised form of CK duality. For this case, a more suitable organisational principle is found in pure spinor space, to which we turn next.
§.§ Pure spinor formulation of supersymmetric Yang–Mills theory
Closely related to the twistor construction of self-dual Yang–Mills theory mentioned in the previous section is the pure spinor formulation of supersymmetric gauge theories. In particular, ten-dimensional supersymmetric Yang–Mills theory can be formulated as a Chern–Simons-type action on pure spinor space, providing a natural -algebra structure. Contrary to the ambitwistor space construction of four-dimensional supersymmetric Yang–Mills theory in <cit.>, however, there is a natural operator that leads to =, the d'Alembertian, so that conventional CK duality can be established <cit.> for amplitude currents. As explained in <cit.>, however, reducing the currents to tree-level numerators in this picture involves a diverging integral over the pure spinors. This can be fixed by an alternative choice of <cit.>, and we briefly review this construction.
Pure spinor space.
For the ten-dimensional supersymmetric Yang–Mills theory, we start from the superspace
_10d =1 ^10|16× (^2|1⊗_10d MW) ,
where ^10|16 is the ten-dimensional =1 Minkowski superspace and _10d MW is the space of Majorana–Weyl spinors in ten dimensions. Hence, ^2|1⊗_10d MW is the (32|16)-dimensional superspace with coordinates[Note that λ̅_A is indeed common notation for a coordinate.] (λ^A,λ̅_A,λ̅_A), which transform in the 16, 16, and 16 of (1,9), respectively. The pure spinor space _10d =1 is obtained from this space as the quadric
λ^Aγ^M_ABλ^B = λ̅_Aγ^M ABλ̅_B = λ̅_Aγ^M ABdλ̅_B = 0 ,
where γ^M_AB and γ^M AB are the evident Clifford algebra generators. Operationally, we will work with fields on _10d =1 and identify the fields on _10d =1 as a quotient of these by the ideal generated by the quadrics (<ref>).
The space _10d =1 comes with a natural vector field Q,
Q = λ^A D_A+λ̅_A λ̅_A ,
where the D_A are the usual covariant superderivatives on ^10|16, satisfying
D_AD_B+D_BD_A = -2γ^M_ABx^M .
This vector field Q descends to a differential on the functions on _10d =1 due to (<ref>); in particular, is a differential ideal.
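For orientation, a sketch of the nilpotency of Q (suppressing convention-dependent factors): the second summand of Q involves neither x, θ, nor λ, squares to zero, and anticommutes with the first summand, while
(λ^A D_A)^2 = 1/2 λ^Aλ^B(D_AD_B+D_BD_A) = -λ^Aγ^M_ABλ^B ∂/∂ x^M ,
which vanishes by the pure spinor constraint λ^Aγ^M_ABλ^B=0.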
There is now a family of operators such that
^2 = 0
Q+ Q =
with the d'Alembertian on ^10 <cit.>. Usually, a Lorentz-covariant choice
_Lorentz λ̅_Aγ^M ABD_B/2(λ^Aλ̅_A)x^M+⋯ ,
M=0,…,9, is made, but this choice is less suitable for our purposes; instead, we work with the -operator of the Y-formalism <cit.>,
-v_Aγ^M ABD_B/2λ^A v_Ax^M ,
where we have chosen a reference pure spinor v, satisfying v_Aγ^M ABv_B=0. It is straightforward to verify that the relations (<ref>) are satisfied.
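A minimal sketch of the key relation (up to the index and normalisation conventions used here): the second summand of Q anticommutes with since the two act on disjoint sets of variables, and since neither D_B nor the space-time derivatives act on the prefactor involving λ^Av_A, one finds
Q+ Q = -v_Cγ^M CB/(2λ^Ev_E) λ^D(D_DD_B+D_BD_D) ∂/∂ x^M = v_Cγ^M CBγ^N_DBλ^D/(λ^Ev_E) ∂^2/(∂ x^M∂ x^N) = □ ,
with □ the d'Alembertian, where the last step uses the symmetrised Clifford identity γ^M CBγ^N_DB+γ^N CBγ^M_DB = 2η^MNδ^C_D; similarly, ^2=0 follows from the purity v_Aγ^M ABv_B=0 of the reference spinor.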
We summarise the properties of all the objects introduced so far in <ref>.
Pure spinor action and Siegel gauge.
There is now a simple, Chern–Simons type formulation of the BV action of ten-dimensional supersymmetric Yang–Mills theory <cit.>. The field content is organised into a single scalar superfield Ψ on _10d =1 of ghost number 1, mass dimension 0, and Graßmann degree 1, which takes values in the metric gauge Lie algebra (,--_). Together with the natural volume form Ω_10d =1 on pure spinor space _10d =1 that was given in <cit.>, we can write down the action functional
S^10d =1 ∫Ω_10d =1 ΨQΨ+13[Ψ,Ψ]_ .
The underlying cochain complex of the pure spinor BV L_∞-algebra is compactly encoded in the space of smooth functions on the pure spinor space,
(^psYM) ≅ ^∞(⊗_10d =1) .
To recover the component (anti-)fields and identify the graded vector spaces to which they belong, one Taylor-expands the -valued superfield Ψ(x,θ^A,λ^A,λ̅_A,λ̅_A) with respect to the λ^A,λ̅_A,λ̅_A coordinates.
There is an evident dg Lie algebra structure on ^∞(⊗_10d =1). The differential is given by _⊗ Q and
μ_2(Ψ_1,Ψ_2) [Ψ_1,Ψ_2] = f_ab^c_c⊗Ψ^a_1·Ψ^b_2 ,
where -·- is just the pointwise product on ^∞(_10d =1).
In order to compute perturbative scattering amplitudes, cf. <cit.>, we can work in Siegel gauge,
Ψ = 0 .
Note that our choice (<ref>) of imposes a form of axial gauge along v.
The propagator in this gauge is simply /, which is a clear generalisation of the propagator we encountered in the discussion of pure Chern–Simons theory in <ref>.
-algebra structure and colour–kinematics duality. It is now rather evident that the metric dg commutative algebra induced by the action (<ref>) becomes a -algebra
^psSYM (^∞(_10d =1),Q,-·-,)
with given by (<ref>) from the Y-formalism. The only fact to check is that is of second order with respect to the function product on pure spinor space _10d =1, but this is evident from the explicit expression for in (<ref>). Note that the pure spinor field already contains Nakanishi–Lautrup field and anti-ghosts (as well as the corresponding anti-fields), so that it indeed packages up all the BV fields required for a gauge-fixed action, cf. <cit.>.
By <ref>, we thus have a theory with a manifestly CK-dual parametrisation of its currents, and this observation had been made before in <cit.> for the commonly used, covariant -operator (<ref>). Using the -operator (<ref>) of the Y-formalism, this result extends to the amplitudes, as we explain now.
Recall from the discussion in <ref> that in order to convert a current into an amplitude, we have to remove the propagator on the outgoing leg and pair it off with another incoming, asymptotically free field. This latter pairing involves an integral over pure spinor space, which may lead to divergences. These divergences certainly cancel in the tree-level amplitudes, but they do not necessarily cancel in individual diagrams. This is a problem since we can only establish CK duality if we can extract finite numerators of a CK-dual parametrisation of the scattering amplitudes. We briefly sketched the solution to this problem in <cit.>; let us be a bit more detailed in the following.
Essentially, the numerators can suffer from two types of divergences. Firstly, we have to account for the fact that pure spinor space (contrary to the base of twistor space) is non-compact, and therefore we will encounter infrared-like divergences from integrating over the unbounded (λ,λ̅)-domains. These divergences are mostly harmless, and there is a well-known Q-invariant regularisation of the integral measure by a factor of
^-t{Q,χ}
for χ a fermionic function on pure spinor space, e.g. χ=θ^Aλ̅_A <cit.>, cf. also <cit.>. Note that any t-dependent terms will drop out of the integral, and therefore this regularisation will not affect CK duality.
Secondly, there can be divergences from expressions of the form 1/λ^A v_A in the integrands appearing in the tree-level amplitudes. These originate from two sources. First of all, the propagator / in the Y-formalism with given in (<ref>) clearly contains such a singularity. Hence, the integrand will contain powers of these expressions. Furthermore, the functions on pure spinor space describing the external states of a tree-level amplitude will also possess such singularities, which are induced when solving the Siegel gauge condition Ψ=0. Explicitly, one can start from the non-singular representatives λ^Aλ^A_AB, where _AB are fields on ordinary superspace, of the cohomology class of the anti-fields, which encodes the anti-fields of the gauge field and the gluino. One can then apply to this representative in order to obtain a representative for the physical fields <cit.>.
We can argue, however, that the kinematic Jacobi identities have to hold order by order in 1/λ^A v_A. In principle, contributions from different diagrams can combine into Q-exact terms in the total scattering amplitude and hence drop out. However, Q does not change the order of singularity near λ^Av_A=0, and therefore this cancellation has to happen order-by-order. Because we know that the final scattering amplitude will not be singular, we can safely remove all singular terms in the individual diagrams as well as their resulting numerators, leaving us with finite expressions for the latter. This process is akin to the minimal subtraction prescription familiar from dimensional regularisation. The resulting subtracted numerators then indeed provide a CK-dual parametrisation of the Yang–Mills tree-level scattering amplitudes.
We note that had we used the usual, covariant -operator of (<ref>), our argument would not have worked. In this case, ultraviolet divergences arise at the tip of the cone λ^Aλ̅_A=0 in pure spinor space, but Q does change the degree of singularity near λ^Aλ̅_A=0 due to the derivative with respect to λ̅_A. This leads to a potential mixing of singularities, and therefore CK duality is not guaranteed order by order. In this case, there is no subtraction scheme as for the -operator in the Y-formalism.
Double copy.
The -algebra obtained above can be double-copied using our formalism in a straightforward manner. We choose to work with the evident cocommutative Hopf algebra _^10 to control the momentum dependence. Correspondingly, we use the restricted tensor product
^psSYM⊗^_^10^psSYM .
Upon factorising the pure spinor space for supersymmetric Yang–Mills theory as
_10d =1 ^10|16×^ps_10d =1 ,
we find that the graded vector space underlying is simply
^∞(^10|32×^ps_10d =1×^ps_10d =1) ,
and we note that both the odd superspace coordinates θ as well as all the auxiliary coordinates λ^A, λ̅_A, and λ̅_A get doubled. In this larger space, we now have to consider the kernel of _-=⊗-⊗,
(_-) = { f∈^∞(^10|32×^ps_10d =1×^ps_10d =1) | (⊗) f=(⊗) f } ,
which underlies the restricted kinematic Lie algebra ^0(). This turns out to be a metric dg Lie algebra, and the resulting action principle reads as
S ∫Ω_10d =1∧_^10|16Ω_10d =1Ψ(Q⊗+⊗ Q)Ψ+1/3[Ψ,Ψ]_^0() ,
where Ω_10d =1∧_^10Ω_10d =1 denotes the evident integral on the space (<ref>) (where we have again removed the infinite volume factor from the additional integral over the second copy of ^10, cf. the discussion in <ref>).
We regard our cubic double-copied action (<ref>) as a rather exciting new result in the pure spinor formulation of supergravity. In eleven dimensions, the currently available action contains quartic terms in the pure spinor field <cit.>, see also <cit.> and references therein for more recent work using integral forms. In ten dimensions, a pure spinor formulation of the vertex operators of closed superstrings was given in <cit.>, cf. also <cit.>. These are precisely the double copy without the restriction to (_-) (which would amount to imposing the section condition), and hence the field content is initially too large. In <cit.>, a different solution to this problem has been proposed, but this does not allow for the direct link between world-sheet ghost number and target-space ghost number that we observe in our prescription; also, it would lead to a non-cubic action. Hence, to our knowledge, (<ref>) presents the first cubic form of a pure spinor action for ten-dimensional supergravity. Further study of this action is certainly warranted, in particular regarding the link to the pure spinor formulations of open and closed strings, but this has to be left to future work.
§.§ Pure spinor formulation of M2-brane models
Pure spinor space.
The Bagger–Lambert–Gustavsson (BLG) M2-brane model <cit.> can also be formulated as a Chern–Simons–matter theory on pure spinor spaces <cit.>.
Here, we start from the space
_3d =8 ^3|16× (^2|1⊗_10d MW) ,
where ^3|16 is the three-dimensional =8 Minkowski superspace and _10d MW again the space of Majorana–Weyl spinors in ten dimensions, but now with indices reflecting the branching (1,9)→(1,2)×(7). Explicitly, ^2|1⊗_10d MW is coordinatised by (λ^α i,λ̅_α i,λ̅_α i) with α=0,…,2 and i=1,…,8, transforming in the 2⊗8, 2⊗8̅, and 2⊗8̅ of (1,2)×(7). Note that indices in the 2 are raised and lowered as usual with _αβ and its inverse. Also, the R-symmetry group is enlarged from (7) to (8), and we use indices m,n=1,…,8 for the vector representation 8_𝐯 of (8).
The pure spinor space _3d =8 is then the quadric in _3d =8 with the following relations:
λ^α iγ_αβ^μλ_i^β = λ̅^α iγ_αβ^μλ̅^β_i = λ̅^α iγ_αβ^μλ̅^β_i = 0 ,
where γ^μ_αβ are the generators of the Clifford algebra of (1,2).
Together with the supersymmetric covariant derivatives D_iα which satisfy the relations
{D_iα,D_jβ} = γ_αβ^μδ_ijx^μ ,
we have a natural vector field Q on _3d =8,
Q λ^α iD_α i+λ̅_α iλ̅_α i ,
which descends to a differential on functions on _3d =8.
Again, there is a family of operators satisfying[albeit the covariant form has not been constructed so far]
^2 = 0
Q+ Q = ,
and we choose to work again with the evident operator arising in the Y-formalism,
-v_α iγ^μ αβδ^ijD_β j/2λ^α i v_α ix^μ ,
where v is a reference pure spinor with v_α iγ^μ αβδ^ijv_β j=0. A short computation verifies (<ref>).
We summarise the properties of the above objects in the following table.
Gauge algebra.
Recall that the BLG model has an underlying metric 3-Lie algebra in the sense of <cit.>. Such a 3-Lie algebra can be seen as a Lie algebra with an orthogonal representation <cit.>. In the case of the BLG model, the Lie algebra is =(2)⊕(2) and the orthogonal representation is Euclidean ^4. Concretely, we can identify ≅ V∧ V with V^4, and with respect to the standard basis _k, k=1,…,4 on ^4, we have a ternary bracket
[_k_1,_k_2,_k_3]_V _k_1k_2k_3k_4_k_4
with _k_1k_2k_3k_4 the Levi-Civita symbol,
and the metric
_k_1_k_2_V δ_k_1k_2
with δ_k_1k_2 the Kronecker symbol. These define a metric Lie algebra by the relations
(_k_1∧_k_2)_k_3 = [_k_1,_k_2,_k_3]_V ,
_k_1∧_k_2_k_3∧_k_4_ = _k_3[_k_1,_k_2,_k_4]_V_V ,
and we find ≅(2)⊕(2) as a Lie algebra with an indefinite metric of signature (+,+,+,-,-,-).
Field content and action.
For the BLG model, the formalism presented in <cit.> uses two fields. Firstly, there is a scalar superfield Ψ on _3d =8 of mass dimension 0, Graßmann degree 1, and ghost number 1 taking values in the metric Lie algebra , which encodes the gauge sector.
The matter sector is a bit more subtle. There is a (trivial) (8)-bundle over _3d =8, and we can consider the associated vector bundle E for the vector representation 8_v. From its sheaf of sections, we construct the quotient sheaf[Note that the sections can have singularities in ^2|1⊗_10d MW.]
__3d =8 Γ(E)/_E ,
where _E is the ideal generated by λ^α iγ^m_ijϑ^j_α where ϑ^j_α is an arbitrary function of ghost degree -1 and γ^m_ij are the (8)-factor of the Clifford algebra generators for (1,2)×(8). The matter fields Φ^m are now elements of __3d =8 with values in V. Operationally, we can regard them as sections of E (with values in V) subject to the identification
Φ^m ∼ Φ^m+λ^α iγ^m_ijϑ^j_α .
We note that there is a natural pairing on __3d =8 given by
g_mnΦ^mΦ^n
g_mn λ^α iγ_mn ijλ^j_α .
The pure spinor superspace _3d =8 comes with a natural dimensionless volume form Ω_3d =8 <cit.>, and we can formulate the action
S^3d =8 ∫Ω_3d =8{ΨQΨ+1/3[Ψ,Ψ]_+g_mnΦ^mQΦ^n+ΨΦ^n_V} .
-algebra and -module structure.
Our remaining constructions now proceed fully analogously to the case of supersymmetric Yang–Mills theory, except for the fact that we are dealing with a -algebra module. The -algebra itself is given by
^psM2 (^∞(_3d =8),Q,-·-,)
with -·- the pointwise product and the Y-formalism -operator (<ref>), which is evidently of second order. The relevant module V^psM2 is given by
V^psM2 (_3d =8,Q,-·-,) ,
where the actions of Q and are the evident ones, induced by the operators (<ref>) and (<ref>) on _3d =8, respectively, and -·- is again the pointwise product. The fact that V^psM2 is a module over ^psM2 is self-evident.
The -algebra and -module structure (^psM2,V^psM2) guarantees CK duality on the field theories currents <cit.>. Moreover, the same arguments as for supersymmetric Yang–Mills theories lift this CK duality to the tree-level amplitudes. Singularities in the integrand are either IR-type singularities, which can be regulated in an evident form, or they are of the form 1/λ^α iv_α i, and then, because of our use of the Y-formalism -operator, there is a minimal subtraction scheme allowing us to extract finite CK-dual numerators for the tree-level amplitudes of the M2-brane model.
While the pure-spinor-based proof of CK duality of the tree-level amplitudes of supersymmetric Yang–Mills theory was an alternative proof, this proof for tree-level CK duality in BLG models is the first; only partial results were available in the literature previously, cf. <cit.>. The relation of our notion of CK duality, the conventional one for gauge–matter theory, and the quartic CK duality of <cit.> is explained in <cit.>.
Double copy.
The -algebra and -module structure (^psM2,V^psM2) can now be straightforwardly double-copied, following our general formalism specialised to the evident cocommutative Hopf algebra _^3. The restricted tensor product leads again to a -algebra and -module with
^psM2⊗^_^3^psM2 and V̂ V^psM2⊗ V^psM2 ,
and using the factorisations
_3d =8 ^3|16×^ps_3d =8_3d =8 _3d⊗^ps_3d =8 ,
we find that the graded vector spaces underlying and V̂ read as
^∞(^3|16×^ps_3d =8×^ps_3d =8)
_3d⊗^ps_3d =8⊗^ps_3d =8 .
As expected both the odd superspace coordinates θ as well as all the pure spinor auxiliary coordinates λ^A, λ̅_A, and λ̅_A get doubled. This larger space carries an action of the operator _-,
(_-) = { f∈^∞(^3|16×^ps_3d =8×^ps_3d =8)⊕_3d | (⊗) f=(⊗) f },
to which the kinematic Lie algebra of can be truncated. The result is another cubic action of the form (<ref>), which we would expect to describe =16 supergravity in three dimensions, cf. <cit.>. Studying the resulting action in detail is, however, beyond the scope of this paper, and we leave it to future work.
Comment on the ABJ(M) models.
Both the Aharony–Bergman–Jafferis–Maldacena (ABJM) model <cit.> and the Aharony–Bergman–Jafferis (ABJ) model <cit.> can also be formulated in the pure spinor formalism of <cit.>. The pure spinor superspace with m,n=1,…,4 for these theories is obtained from the pure spinor space of the BLG model, _3d =8, by truncating the R-symmetry (8) to (6). It is not difficult to adjust the action principle for the BLG model to this situation.
There is, however, a technical complication compared to the BLG model: the representation space V in the underlying -module is complex, as explained in <cit.>, and therefore there is no suitable symplectic metric on the underlying vector space. While this is not a fundamental issue for discussing CK duality, it significantly complicates all constructions. We therefore refrain from giving the details here; the -algebra and -module structure can be found in our paper <cit.>.
§ RESTRICTED TENSOR PRODUCT OF MODULES OVER BIALGEBRAS
Throughout this section, we use the Sweedler notation (<ref>), and we fix a bialgebra over a field of arbitrary characteristic; see <ref>. Furthermore, we view as the canonical -module in which acts via the counit ϵ : →.
Let V and W be -modules. We call the subset
V⊗^ W ⋂_χ∈((χ⊗-⊗χ))
of V⊗ W the restricted tensor product of V and W.
The restricted tensor product forms an -module under the following condition.
A bialgebra is restrictedly tensorable if the left ideal of the unital associative algebra ⊗ generated by the subset
Γ {χ⊗-⊗χ|χ∈}
is also a two-sided ideal.
Let be a restrictedly tensorable bialgebra, and let V and W be -modules. The restricted tensor product V⊗^ W is an -submodule of V⊗ W.
It suffices to see that, for arbitrary u∈ V⊗^ W and χ_1,χ_2∈, we have
(χ_1⊗-⊗χ_1)Δ(χ_2)u = 0 .
Restricted tensorability implies that (χ_1⊗-⊗χ_1)Δ(χ_2) is an element in the two-sided ideal left- and right-generated by Γ, and we can write this element as
(χ_1⊗-⊗χ_1)Δ(χ_2)
= ∑_i=1^NX_i(χ_1,i⊗-⊗χ_1,i)
for some finite N and X_1,…,X_N∈⊗ and χ_1,1,…,χ_1,N∈. It is now clear that the latter element of ⊗ annihilates all u∈ V⊗^ W.
Examples of restrictedly tensorable bialgebras important to the discussion of CK duality and the double copy are primitively generated bialgebras. These include all bialgebras generated by a set of differential operators labelling momenta, together with their coproducts, which is the typical situation in a physical theory.
Recall that an element χ in a bialgebra is primitive if Δ(χ)=χ⊗+⊗χ, and a bialgebra is primitively generated if it is generated as a unital associative algebra by its set of primitive elements; over a field of characteristic zero, it is a standard fact that a primitively generated Hopf algebra is isomorphic to the universal enveloping algebra of the Lie algebra of its primitive elements.
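A minimal worked example, spelling out the case implicitly used for the momentum Hopf algebras above: let = [∂_1,…,∂_d] be the constant-coefficient differential operators on ^d, with
Δ(∂_μ) = ∂_μ⊗ 1 + 1⊗∂_μ , ϵ(∂_μ) = 0 , S(∂_μ) = -∂_μ .
Every generator ∂_μ is primitive, so this bialgebra is primitively generated and hence, by the proposition below, restrictedly tensorable. On a product of plane waves,
(∂_μ⊗ 1 - 1⊗∂_μ)(e^{i k· x}⊗ e^{i k'· x}) = i(k_μ-k'_μ) e^{i k· x}⊗ e^{i k'· x} ,
so the restricted tensor product retains precisely the combinations with matched momenta k=k'; this is the momentum-matching condition underlying the double copy constructions above.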
Every primitively generated bialgebra is restrictedly tensorable.
It suffices to show that, for every χ,ϕ∈, we have
(χ⊗-⊗χ)Δ(ϕ) ∈ ,
where is the left ideal generated by the subset {ψ⊗-⊗ψ|ψ∈}. This is evidently equivalent to showing that
[χ⊗-⊗χ,Δ(ϕ)] ∈ ,
since Δ(ϕ)(χ⊗-⊗χ)∈.
By the assumption of primitive-generatedness, we may assume that ϕ is a linear combination of products of primitive elements.
We proceed by induction. First, the base case: suppose that ϕ is primitive. Then it is immediate that
[χ⊗-⊗χ,Δ(ϕ)]
= [χ,ϕ]⊗-⊗[χ,ϕ] ∈ .
Next, suppose that we have shown (<ref>) in the case where ϕ is a linear combination of products of at most n-1 primitive elements. Now, suppose that ϕ=ϕ_1ϕ_2 with ϕ_1 primitive and ϕ_2 a linear combination of at most n-1 primitive elements. Let
[χ⊗-⊗χ,Δ(ϕ_1)]
= χ'⊗-⊗χ' .
Then
[χ⊗-⊗χ,Δ(ϕ)]
.5cm = [χ⊗-⊗χ,Δ(ϕ_1)Δ(ϕ_2)]
.5cm = [χ⊗-⊗χ,Δ(ϕ_1)]Δ(ϕ_2)+Δ(ϕ_1)[χ⊗-⊗χ,Δ(ϕ_2)]
.5cm = (χ'⊗-⊗χ')Δ(ϕ_2)+Δ(ϕ_1)[χ⊗-⊗χ,Δ(ϕ_2)]
.5cm = [χ'⊗-⊗χ',Δ(ϕ_2)]+Δ(ϕ_2)(χ'⊗-⊗χ')+Δ(ϕ_1)[χ⊗-⊗χ,Δ(ϕ_2)] ,
and now all three terms manifestly belong to the left ideal .
The restricted tensor product is, in general, neither symmetric nor associative up to isomorphism. Our construction generalises the familiar concept of the module of invariants.
For any -module V, the restricted tensor product V⊗^ is canonically isomorphic to V^⋂_χ∈(χ-ϵ(χ)), which is called the module of invariants.[Compare this to the well-known result that the module of coinvariants V_ is given by V_≅ V⊗_.]
It is simply a matter of unwinding the definition (<ref>) to see that V⊗^⊆ V⊗≅ V is given by V^.
Thus, the restricted tensor product ⊗^ is, in some sense, the dual of the tensor product ⊗_ over the Hopf algebra : V⊗^ W is a submodule of V⊗ W, whereas V⊗_ W is a quotient of V⊗ W.
Suppose that V and W are -modules equipped with -linear maps f V^⊗ n→ V and g W^⊗ n→ W. Then, the -linear map f⊗ g(V⊗ W)^⊗ n→ V⊗ W restricts to an -linear map f⊗^ g(V⊗^ W)^⊗ n→ V⊗^ W.
For clarity of exposition, we spell out the proof only for n=2; the other cases generalise straightforwardly. Given u^(1)_1,2⊗ u^(2)_1,2∈ V⊗ W, then
(f⊗ g)(u^(1)_1⊗ u^(2)_1,u^(1)_2⊗ u^(2)_2) = f(u^(1)_1,u^(1)_2)⊗ g(u^(2)_1,u^(2)_2) .
Suppose now that u^(1)_i⊗ u^(2)_i∈ V⊗^ W⊆ V⊗ W for i=1,2 and let χ∈. Then,
(χ f(u^(1)_1,u^(1)_2))⊗ g(u^(2)_1,u^(2)_2) = f(χ^(1)u^(1)_1,χ^(2)u^(1)_2)⊗ g(u^(2)_1,u^(2)_2)
= (f⊗ g)(χ^(1)u^(1)_1⊗ u^(2)_1,χ^(2)u^(1)_2⊗ u^(2)_2)
= (f⊗ g)(u^(1)_1⊗χ^(1)u^(2)_1,u^(1)_2⊗χ^(2)u^(2)_2)
= f(u^(1)_1,u^(1)_2)⊗ g(χ^(1)u^(2)_1,χ^(2)u^(2)_2)
= f(u^(1)_1,u^(1)_2)⊗(χ g(u^(2)_1,u^(2)_2)) ,
where in the second and fourth steps we have used (<ref>), and in the third step the assumption χ^(i)u^(1)_i⊗ u^(2)_i=u^(1)_i⊗χ^(i)u^(2)_i for i=1,2. Hence, f⊗ g(V⊗ W)^2→ V⊗ W in (<ref>) restricts to a map f⊗^ g(V⊗^ W)^2→ V⊗^ W.
This proposition now implies that given two -algebras, that is, -modules V equipped with -linear n-ary algebraic operations V^⊗ n→ V, their restricted tensor product naturally inherits a corresponding algebra structure.
§ ANALYTICAL SETTINGS VIA CONVOLUTIONS
As briefly remarked in the introduction, in the case where the Hopf algebra is commutative, to construct the double copy, instead of working with the restricted tensor product ⊗^ of <ref>, we can instead work with a tensor product ⊗_ over the commutative Hopf algebra, which corresponds to the convolutional double copy of <cit.>. This approach runs into analytical difficulties because plane wave states cannot be convolved (or, equivalently, delta functions in momentum space cannot be squared). One can circumvent this either by compactifying space–time as in <ref> or by complicating the notion of Hopf algebras as in <ref>.
§.§ Analytical setting via compactification
In this section, we provide a proof of the statement that compactifying space-time provides an analytical setting using the tensor product over the Hopf algebra. In the following, the metric signature is irrelevant. We compactify ^d to ^d/Λ^d; without loss of generality, we may work with units where Λ=1.
Let be the subspace of ^∞(^d/^d,) consisting of finite linear combinations of plane waves, i.e. smooth functions whose Fourier series' supports are finite sets. This is dense inside L^2(^d/^d,) in the L^2-norm topology as well as inside ^∞(^d/^d,) in the Fréchet topology, since Fourier series of smooth functions converge absolutely and hence uniformly.
As modules over _^d of differential operators with constant coefficients discussed in <ref>, we have
⊗_[∂_μ] ≅
by means of the convolution
f = f^(1)⊗ f^(2) ↦ f^(1)⋆ f^(2)
for all f∈⊗_[∂_μ].
Let us first show injectivity of (<ref>). Suppose that f,g,h∈. We wish to show that
f⊗(g⋆ h) = (f⋆ g)⊗ h
in the tensor product ⊗_[∂_μ]. If this holds, then from f_1⋆ g_1=f_2⋆ g_2, we get f_1⊗ g_1=f_1⊗(g_1⋆𝕀_)=(f_1⋆ g_1)⊗𝕀_=(f_2⋆ g_2)⊗𝕀_=f_2⊗ g_2, that is injectivity of (<ref>). To verify (<ref>), let Ksupp(f̂)∪supp(ĝ)∪supp(ĥ)⊆^d be the union of the supports of the Fourier transforms f̂, ĝ, and ĥ of all three functions f, g, and h; let δ_K∈ be an approximation of the Dirac comb on K, namely
δ_K(x) ∑_k∈ K^ k· x .
It is a convolutional idempotent, that is, δ_K⋆δ_K=δ_K.
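For completeness, here is the one-line check, writing the characters as e^{i k· x} and normalising the volume of ^d/^d to 1 so that distinct plane waves integrate to zero:
(δ_K⋆δ_K)(x) = ∑_{k,k'∈ K}∫_{^d/^d} d^d y e^{i k· y} e^{i k'·(x-y)} = ∑_{k,k'∈ K}δ_{k,k'} e^{i k'· x} = ∑_{k∈ K} e^{i k· x} = δ_K(x) .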
By multivariate polynomial interpolation in Fourier space, we can find (not necessarily unique) differential operators D_f,D_g,D_h∈[∂_1,…,∂_d] such that f=D_fδ_K, g=D_gδ_K, and h=D_hδ_K. Then it is clear that, inside ⊗_[∂_μ], we have
f⊗(g⋆ h) = D_fδ_K⊗(D_gδ_K⋆ D_hδ_K)
= D_fδ_K⊗ D_gD_hδ_K
= D_fD_gδ_K⊗ D_hδ_K
= (D_fδ_K⋆ D_gδ_K)⊗ D_hδ_K
= (f⋆ g)⊗ h .
Having shown injectivity of (<ref>), surjectivity is now straightforward: for any f∈ we have f=f⋆δ_supp(f̂).
§.§ Analytical setting via generalisations of Hopf algebras
In this section, we provide an analytical setting for the double copy using the tensor product over the Hopf algebra where compactification of space-time is not needed, at the cost of having to work with an algebra of pseudo-differential operators that does not form a Hopf algebra anymore. [We thank an anonymous user on MathOverflow for their help.]
The physical metric signature is irrelevant for the following analytical considerations, but for analytical considerations it is convenient to use an auxiliary positive-definite metric on space-time ^d. We emphasise that this does not really pertain to the physics but is only used in the course of the mathematical proofs.
Since we are going beyond the usual setting of Hopf algebras, we spell out precisely what we achieve.
A convenient analytical setting consists of
-4pt
* a function space ⊆^∞(^d,) and
* a space of (pseudo-)differential operators ⊆[[∂_1,…,∂_d]]
such that
-4pt
* is closed under pointwise products; thus, it forms a nonunital commutative associative algebra;
* is closed under composition and contains 1; thus it forms a unital commutative ring;
* contains [∂_μ]=_^d as a subring;
* ⊆; thus, is a module over ;
* an analytic Leibniz rule holds in the sense that, for D=∑_I∈^dc_I∂_I∈ and f,g∈, we have
D(f· g) = ∑_I,I'∈^dI+I'Ic_I+I'(∂_If)·(∂_I'g)
in the topology of pointwise convergence on some neighbourhood of the origin in Fourier space;
* ⊗_=;
* is dense inside ^∞(^d,) with respect to the Fréchet space topology (i.e. topology of uniform convergence on compact sets).
We further define the following tube domain of the real hyperplane:
^d_ϵ {x+ y | x,y∈^d and y<ϵ} ⊆ ^d .
Define _0 to be the space of functions f^d→ such that there exist ϵ,δ>0 such that f extends analytically to ^d_ϵ with
|f(x+ y)| = (^-δx) .
Define as
{f∈^∞(^d,) | ∂_If∈_0} ,
i.e. the space of functions whose arbitrary-order derivatives lie in _0.
If f∈_0 is holomorphic on ^d_ϵ and |f(x+ y)|≤ C^-δx, then f̂ is holomorphic on ^d_δ and |f̂(ξ+η)|≤ C'^-ϵ'ξ/(δ-η)^d for some constant C'. In particular, f̂∈_0 also; thus, the Fourier transform is an involution of _0.
To check holomorphicity of f̂ on ^d_ϵ, it suffices to check that the integral
f̂(ξ+η) = ∫_^d^dx f(x) ^-(ξ+η)· x
converges as long as η<δ so that we can take derivatives under the integral sign. But this is clear since |f(x)|=(^-δx).
Furthermore, for arbitrary y∈^d with y<ϵ, we can use Cauchy's integral theorem axis-by-axis to obtain the estimate
|f̂(ξ+η)| = |∫_^d^dx f(x) ^-(ξ+η)· x|
= |∫_^d^dx f(x+ y) ^-(ξ+η)·(x+ y)|
≤ ∫_^d^dx C^-(δ-η)x^ξ· y
≤ C'/(δ-η)^d ^ξ· y ,
where C' is a constant depending on d only. By choosing y=-ϵ'ξ/ξ for arbitrary 0<ϵ'<ϵ, we obtain |f̂(ξ+η)|≤ C'^-ϵ'ξ/(δ-η)^d. Taking the limit ϵ'→ϵ, we obtain |f̂(ξ+η)|≤ C'^-ϵ'ξ/(δ-η)^d.
forms a nonunital subalgebra of ^∞(^d,).
It is clear that is closed under sums and scalar multiplication. The only nontrivial thing to prove is closure under pointwise product. Let f,g∈. Then
∂_I(f· g) = ∑_I',I”∈^d
I'+I”=I II'(∂_I'f)(∂_I”g) ∈ _0 .
Hence f· g∈_0.
Now, define _0 to be the space of pseudo-differential operators of the form p(∂) where p∈_0, where _0 is the class of functions p^d→ such that there exists an ϵ>0 such that p extends analytically to ^d_ϵ and that, on this tube domain, for every δ>0, there exists a C_δ>0 such that
|p(x+ y)| ≤ C_δ^δx .
Define the ring as
{∑_i=1^np_iq_i | n∈, p_1,…,p_n∈_0, q_1,…,q_n∈_^d} ,
that is, the ring of pseudo-differential operators generated by _0 and _^d=[∂_1,…,∂_d].
_0 is a _0-module.
Let f∈_0 and D=p(∂) with p∈_0. Then f̂ is analytic on ^d_ϵ and |f̂(ξ+η)|≤ C^-δξ for some C,ϵ,δ>0. Similarly, p is analytic on ^d_ϵ'.
Then, in Fourier space, the pointwise product f̂p is analytic on ^d_min{ϵ,ϵ'} and |f̂p|=(^-δ'ξ) for any δ'<δ. So f̂p∈_0, and hence Df∈_0.
is a -module.
It is clear that is closed under the action of [∂_1,…,∂_d] by construction. It remains to show that is a _0-module.
Let f∈ and D∈_0 and I∈^d. It suffices to show that ∂_IDf∈_0. But ∂_If∈_0, so ∂_IDf=D(∂_If)∈_0 (using <ref>).
⋆=, where ⋆ denotes convolution.
It is clear that _0·_0⊆_0. Since the Fourier transform is bijective on _0, it follows that _0⋆_0⊆_0.
Now, suppose that f,g∈. Then f⋆ g∈_0, and for any multi-index I∈^d, we have ∂_I(f⋆ g)=(∂_If)⋆ g∈_0. Hence f⋆ g∈. Thus ⋆⊆.
It remains to show that ⋆⊇. Given any f∈, then we have
f/u_ϵ· u_ϵ = f ,
where
u_ϵ(x+ y) = 1/∏_i=1^dcosh(ϵ(x_i+ y_i))+ .
Now, clearly u_ϵ∈_0, so the same holds for the Fourier transform û_ϵ∈_0. Furthermore, for any polynomial p, clearly pu_ϵ∈_0 as well. Hence û_ϵ∈. Similarly, if |f|≤ C^-δx, then ϵ<δ ensures that pf/u_ϵ∈_0 for any polynomial p; hence f/u_ϵ∈_0. Thus, f̂=û_ϵ⋆f/u_ϵ, so that ⋆⊇.
(,) is a convenient analytical setting.
The numbering follows <ref>.
(i) is clear by construction. (ii) is also clear by construction, since composition amounts to pointwise products in Fourier space. (iii) is also clear by construction.
(iv) was shown in <ref>. (v) is clear by analyticity in Fourier space. As for (vi): it is clear that is a submodule of considered as a module over itself (since ⊆_0⊆_0). So ⊗_⊆. <Ref> then implies that ⊗_=.
(vii) It is clear that smooth functions with compact support are dense inside ^∞(^d,) (e.g. multiply by bump functions that are supported on [-m-1,m+1] and equal to 1 on [-m,m]). Suppose that f is smooth with compact support. Then f is the limit of convolutions f⋆ρ_m, where ρ_m=∏_i=1^dm^-π m^2z_i^2 is a family of analytic functions with exponential falloff approximating the Dirac delta.
Symmetric monoidal category.
Since is no longer a Hopf algebra, the category of arbitrary modules over no longer has a well-defined tensor product (i.e. does not form a symmetric monoidal category); in particular, the double copy of arbitrary -modules is not guaranteed to work. Instead, we single out a particular subcategory of the category of all -modules that is closed under the tensor product.
Consider the category Mod_,nice whose objects are -modules of the form
⊕_i=1^K⊗_⊗_…⊗_^n_i ,
where n_i∈ and K is a nonnegative integer or ∞. These all have a canonical action of on them by virtue of the `infinitary Leibniz rule' defining an `infinitary coproduct'.
Consider the full subcategory of the category of chain complexes of -modules consisting of those whose degreewise components all belong to Mod_,nice. This forms a symmetric monoidal category equipped with ⊗_. In particular, we can define operads over this category.
§ PROOFS BY DIRECT COMPUTATION
In this section, we collect mostly straightforward computational proofs omitted from the body of the paper.
<ref>. It is clear, cf. e.g. the review in <cit.>, that ⊗ is a dg Lie algebra, and that R⊗ V is a dg vector space with an action ⊗↷ R⊗ V. It is also well-known that a Lie algebra and a representation can be packaged into a single Lie algebra via the semi-direct product. This extends to the differential graded setting. It remains to show that the given inner product is indeed cyclic, i.e.
ℓ_1μ_2(ℓ_2,ℓ_3)_ =
(-1)^|ℓ_1| |ℓ_2|+|ℓ_1| |ℓ_3|+|ℓ_2| |ℓ_3|ℓ_3μ_2(ℓ_1,ℓ_2)_ .
This is well-known to be the case for ℓ_1,ℓ_2,ℓ_3∈⊗. For ℓ_1,ℓ_2,ℓ_3∈ R⊗ V, both sides of the relation are trivial, and for ℓ_1∈⊗, ℓ_2,ℓ_3∈ R⊗ V (as well as cyclic permutations), cyclicity is ensured by (<ref>). Because of the lack of pairing between R⊗ V and ⊗, both sides of the identity also vanish for ℓ_1∈ R⊗ V and ℓ_2,ℓ_3∈⊗.
<ref>.
By direct computation, from <ref> and Equations (<ref>) and (<ref>), we have
[ϕ_1[1],ϕ_2[1]]__ v[1]
.5cm= (-1)^|ϕ_1|{ϕ_1,ϕ_2}_[1]_ v[1]
.5cm= (-1)^|ϕ_2|+1{{ϕ_1,ϕ_2}_,v}_V[1]
.5cm= (-1)^|ϕ_1|+|ϕ_2|({ϕ_1,{ϕ_2,v}_V}_V-(-1)^(|ϕ_1|+1)(|ϕ_2|+1){ϕ_2,{ϕ_1,v}_V}_V)[1]
.5cm= ϕ_1[1]_(ϕ_2[1]_ v[1])-(-1)^(|ϕ_1|+1)(|ϕ_2|+1)ϕ_2[1]_(ϕ_1[1]_ v[1])
for all ϕ_1,ϕ_2∈ and v∈ V, hence (,_) is a graded (left) module over the kinematic Lie algebra (,[-,-]_).
<ref>.
Using the definition (<ref>) of the derived bracket and the associativity _2(_2(ϕ_1,ϕ_2),ϕ_3)=_2(ϕ_1,_2(ϕ_2,ϕ_3)) for all ϕ_1,2,3∈ of _2, it is easy to see that (<ref>) is, in fact, equivalent to (<ref>).
To establish the shifted Jacobi identity (<ref>), we follow <cit.>. In particular, set
(ϕ_1,ϕ_2,ϕ_3) {ϕ_1,_2(ϕ_2,ϕ_3)}-_2({ϕ_1,ϕ_2},ϕ_3)
1.5cm-(-1)^(|ϕ_1|+1)|ϕ_2|_2(ϕ_2,{ϕ_1,ϕ_3}) ,
(ϕ_1,ϕ_2,ϕ_3) {ϕ_1,{ϕ_2,ϕ_3}}-(-1)^|ϕ_1|+1{{ϕ_1,ϕ_2},ϕ_3}
1.5cm-(-1)^(|ϕ_1|+1)(|ϕ_2|+1){ϕ_2,{ϕ_1,ϕ_3}} ,
which we call the Poissonator and the Jacobiator, respectively. Then,
(ϕ_1,ϕ_2,ϕ_3)-{ϕ_1,{ϕ_2,ϕ_3}}
1cm= -(-1)^|ϕ_1|+1[(_2({ϕ_1,ϕ_2},ϕ_3))-_2(({ϕ_1,ϕ_2}),ϕ_3)
4cm-(-1)^|ϕ_1|+|ϕ_2|+1_2({ϕ_1,ϕ_2},ϕ_3)]
2cm-(-1)^(|ϕ_1|+1)(|ϕ_2|+1)[(_2(ϕ_2,{ϕ_1,ϕ_3}))-_2(ϕ_2,{ϕ_1,ϕ_3}))
4cm-(-1)^|ϕ_2|_2(ϕ_2,({ϕ_1,ϕ_3}))]
1cm= (-1)^|ϕ_1|+1((ϕ_1,ϕ_2,ϕ_3)-{ϕ_1,_2(ϕ_2,ϕ_3)})
2cm-(-1)^|ϕ_1|+1[_2({ϕ_1,ϕ_2},ϕ_3)+(-1)^|ϕ_1|_2({ϕ_1,ϕ_2},ϕ_3)
4cm-(-1)^|ϕ_1|+|ϕ_2|+1_2({ϕ_1,ϕ_2},ϕ_3)]
2cm+(-1)^(|ϕ_1|+1)(|ϕ_2|+1)[_2(ϕ_2,{ϕ_1,ϕ_3}))-(-1)^|ϕ_2|_2(ϕ_2,{ϕ_1,ϕ_3})
4cm-(-1)^|ϕ_1|+|ϕ_2|_2(ϕ_2,{ϕ_1,ϕ_3})]
1cm= (-1)^|ϕ_1|+1((ϕ_1,ϕ_2,ϕ_3)-{ϕ_1,_2(ϕ_2,ϕ_3)})
2cm+(-1)^|ϕ_1|+1[(ϕ_1,ϕ_2,ϕ_3)-{ϕ_1,_2(ϕ_2,ϕ_3)}]
2cm-[(ϕ_1,ϕ_2,ϕ_3)-{ϕ_1,_2(ϕ_2,ϕ_3)}]
2cm-(-1)^|ϕ_2|[(ϕ_1,ϕ_2,ϕ_3)-{ϕ_1,_2(ϕ_2,ϕ_3)}]
1cm= (-1)^|ϕ_1|+1[((ϕ_1,ϕ_2,ϕ_3))+(ϕ_1,ϕ_2,ϕ_3)
2cm+(-1)^|ϕ_1|(ϕ_1,ϕ_2,ϕ_3)+(-1)^|ϕ_1|+|ϕ_2|(ϕ_1,ϕ_2,ϕ_3)]
2cm-{ϕ_1,{ϕ_2,ϕ_3}} ,
where we have repeatedly made use of the definition (<ref>) of the derived bracket and the fact that is a derivation for the derived bracket as shown in <ref>. Hence,
(ϕ_1,ϕ_2,ϕ_3) = (-1)^|ϕ_1|+1[((ϕ_1,ϕ_2,ϕ_3))+(ϕ_1,ϕ_2,ϕ_3)
1cm+(-1)^|ϕ_1|(ϕ_1,ϕ_2,ϕ_3)+(-1)^|ϕ_1|+|ϕ_2|(ϕ_1,ϕ_2,ϕ_3)] .
So the shifted Poisson identity (<ref>) implies the shifted Jacobi identity (<ref>).
<ref>.
To show that ^0(V)(_V)[1] is a module over the dg Lie algebra ^0() (_)[1], it suffices to show for every ϕ∈_ and v∈_V that ϕ[1]_ v[1]=(-1)^|ϕ|{ϕ,v}_V[1] is an element of (_V)[1], i.e. _V{ϕ,v}_V=0:
_V{ϕ,v}_V = _V(_V(ϕ_ v)-(_ϕ)_ v - (-1)^|ϕ|ϕ_ (_V v)) = _V^2(ϕ_ v)
= 0 .
Cyclicity in the tensor product of -algebras.
Consider the tensor product of two -algebras _ and _ as defined in (<ref>). We now verify the properties of the metric. Firstly, we have
ϕ_2⊗ϕ_2ϕ_1⊗ϕ_1
.5cm= (-1)^|ϕ_2||ϕ_1|+n_(|ϕ_1|+|ϕ_2|)ϕ_2ϕ_1_ϕ_2ϕ_1_
.5cm= (-1)^(|ϕ_1|+|ϕ_1|)(|ϕ_2|+|ϕ_2|)+|ϕ_1||ϕ_2|+n_(|ϕ_1|+|ϕ_2|)ϕ_1ϕ_2_ϕ_1ϕ_2_
.5cm= (-1)^(|ϕ_1|+|ϕ_1|)(|ϕ_2|+|ϕ_2|)ϕ_1⊗ϕ_1ϕ_2⊗ϕ_2
for all ϕ_1,ϕ_2∈_ and ϕ_1,ϕ_2∈_, establishing graded symmetry. Next, we verify the axioms (<ref>). In particular, using the definition of from (<ref>), we find
(ϕ_1⊗ϕ_1)ϕ_2⊗ϕ_2
.5cm= _ϕ_1⊗ϕ_1ϕ_2⊗ϕ_2+(-1)^|ϕ_1|ϕ_1⊗_ϕ_1ϕ_2⊗ϕ_2
.5cm= (-1)^|ϕ_1||ϕ_2|+n_(|ϕ_1|+|ϕ_2|+1)_ϕ_1ϕ_2_ϕ_1ϕ_2_
1.5cm+(-1)^|ϕ_1|+(|ϕ_1|+1)|ϕ_2|+n_(|ϕ_1|+|ϕ_2|)ϕ_1ϕ_2__ϕ_1ϕ_2_
.5cm= -(-1)^|ϕ_1|+|ϕ_1||ϕ_2|+n_(|ϕ_1|+|ϕ_2|+1)ϕ_1_ϕ_2_ϕ_1ϕ_2_
1.5cm-(-1)^|ϕ_1|+|ϕ_1|+(|ϕ_1|+1)|ϕ_2|+n_(|ϕ_1|+|ϕ_2|)ϕ_1ϕ_2_ϕ_1_ϕ_2_
.5cm= -(-1)^|ϕ_1|+|ϕ_1|ϕ_1⊗ϕ_1_ϕ_2⊗ϕ_2
1.5cm-(-1)^|ϕ_1|+|ϕ_1|+|ϕ_2|ϕ_1⊗ϕ_1ϕ_2⊗_ϕ_2
.5cm= -(-1)^|ϕ_1|+|ϕ_1|ϕ_1⊗ϕ_1(ϕ_2⊗ϕ_2)
again for all ϕ_1,ϕ_2∈_ and ϕ_1,ϕ_2∈_, which verifies the first relation in (<ref>). A similar calculation for establishes the last relation in (<ref>). It remains to verify the second relation in (<ref>). Using the definition of _2 from (<ref>), we find
_2(ϕ_1⊗ϕ_1,ϕ_2⊗ϕ_2)ϕ_3⊗ϕ_3
.5cm= (-1)^|ϕ_1||ϕ_2|_2(ϕ_1,ϕ_2)⊗_2(ϕ_1,ϕ_2)ϕ_3⊗ϕ_3
.5cm= (-1)^|ϕ_1||ϕ_2|+(|ϕ_1|+|ϕ_2|)|ϕ_3|+n_(|ϕ_1|+|ϕ_2|+|ϕ_3|)
1.5cm×_2(ϕ_1,ϕ_2)ϕ_3__2(ϕ_1,ϕ_2)ϕ_3_
.5cm= (-1)^|ϕ_1||ϕ_2|+(|ϕ_1|+|ϕ_2|)|ϕ_3|+|ϕ_1||ϕ_2|+|ϕ_1||ϕ_2|+n_(|ϕ_1|+|ϕ_2|+|ϕ_3|)
1.5cm×ϕ_2_2(ϕ_1,ϕ_3)_ϕ_2_2(ϕ_1,ϕ_3)_
.5cm= (-1)^(|ϕ_1|+|ϕ_1|)(|ϕ_2|+|ϕ_2|)+|ϕ_1||ϕ_3|
1.5cm×ϕ_2⊗ϕ_2_2(ϕ_1,ϕ_3)⊗_2(ϕ_1,ϕ_3)
.5cm= (-1)^(|ϕ_1|+|ϕ_1|)(|ϕ_2|+|ϕ_2|)ϕ_2⊗ϕ_2_2(ϕ_1⊗ϕ_1,ϕ_3⊗ϕ_3) .
entry_id: http://arxiv.org/abs/2307.02392v1
published: 20230705160444
title: RADiff: Controllable Diffusion Models for Radio Astronomical Maps Generation
authors: Renato Sortino, Thomas Cecconello, Andrea DeMarco, Giuseppe Fiameni, Andrea Pilzer, Andrew M. Hopkins, Daniel Magro, Simone Riggi, Eva Sciacca, Adriano Ingallinera, Cristobal Bordiu, Filomena Bufano, Concetto Spampinato
primary_category: cs.CV
categories: cs.CV
RADiff: Controllable Diffusion Models for Radio Astronomical Maps Generation
===============================================================================================================
As the Square Kilometre Array (SKA) nears completion, there is an increasing demand for accurate and reliable automated solutions to extract valuable information from the vast amount of data it will acquire.
Automated source finding is a particularly important task in this context, as it enables the detection and classification of astronomical objects.
Deep-learning-based object detection and semantic segmentation models have proven to be suitable for this purpose.
However, training such deep networks requires a high volume of labeled data, which is not trivial to obtain in the context of radio astronomy. Since data needs to be manually labeled by experts, this process is not scalable to large dataset sizes, limiting the possibilities of leveraging deep networks to address several tasks.
In this work, we propose RADiff, a generative approach based on conditional diffusion models trained over an annotated radio dataset to generate synthetic images, containing radio sources of different morphologies, to augment existing datasets and reduce the problems caused by class imbalances. We also show that it is possible to generate fully-synthetic image-annotation pairs to automatically augment any annotated dataset.
We evaluate the effectiveness of this approach by training a semantic segmentation model on a real dataset augmented in two ways: 1) using synthetic images obtained from real masks, and 2) generating images from synthetic semantic masks. We show an improvement in performance when applying augmentation, with gains of up to 18% when using real masks and 4% when augmenting with synthetic masks.
Finally, we employ this model to generate large-scale radio maps with the objective of simulating Data Challenges.
§ INTRODUCTION
The Square Kilometre Array (SKA) will be the largest radio telescope ever built, poised to revolutionize our understanding of the Universe <cit.>. With unprecedented levels of sensitivity and spatial resolution, it is expected to facilitate astronomical discoveries across different domains.
However, the data volume produced by its precursor telescopes already requires considerable effort and well-designed algorithms to extract scientific information in an efficient and automated way <cit.>.
Machine learning, and in particular deep learning, has recently emerged as a valuable tool for detecting and classifying radio sources in images, which is a challenging task, especially for observations with significant diffuse backgrounds or very extended or diffuse sources (i.e., those aiming at the Galactic Plane) <cit.>.
Background noise and artifacts introduced during the imaging process, such as sidelobes around bright sources, often lead these models to false detections. The identification of sources from multiple non-contiguous islands and their classification into known classes of astrophysical objects poses another challenging task, relevant especially when searching for Galactic objects in Galactic plane surveys.
To achieve high accuracy and precision through deep learning algorithms, a great number of data samples with high-quality annotations are crucial. While SKA data is not yet available, data simulations are commonly employed for testing source extraction tools, as demonstrated in studies such as <cit.>. One of the limitations of the simulated datasets employed in these studies is the assumed ideal background and morphology of extended sources, often modeled as single-component islands generated from a 2D Gaussian distribution of random minor/major axis ratios. Our aim is to overcome this limitation through the application of conditional generative models.
Generative models are deep learning architectures that allow for synthesizing new data points that follow the distribution of the training data. Once trained, these models are capable of generating new data points starting from a random Gaussian noise vector.
The applicability of generative models to specific tasks and domains stems from the possibility of controlling the output of the generation process. Enforcing constraints on the shape of the generated data and on their features enables the user to generate samples in a specific way to tackle a particular problem <cit.>. For instance, these models can be employed for data augmentation <cit.> when lack of data availability is an issue or for datasets characterized by heavy class imbalances. In this case, using a controllable generative model can alleviate the problem by allowing for generating under-represented classes and compensating class imbalance <cit.>.
While Generative Adversarial Networks (GANs) <cit.> are widely employed in several domains, including radio astronomy <cit.>, we base our method on a more recent generative model architecture: Diffusion Models <cit.>. This choice is motivated by the widespread success of this kind of model in the computer vision domain <cit.> but, most importantly, by their versatility in terms of controllable generation <cit.>. An interesting contribution in radio astronomy using conditional diffusion models has recently been made by Wang et al. <cit.>, although they propose a different task than ours since they focus on radio interferometric image reconstruction.
We propose RADiff, a conditional diffusion model trained to generate radio-astronomical images starting from two types of conditioning: 1) a semantic segmentation map that describes the shape, location, and category of the objects contained in the image to generate, and 2) an image to control the intensity and pattern of the background that surrounds the objects.
An overview of the proposed idea is shown in Figure <ref>.
Our key contributions are the following:
* We employ a Latent Diffusion Model (LDM) <cit.> architecture to efficiently generate high-quality images;
* We leverage the controllable generation capabilities of LDMs to guide the generation process using a semantic segmentation map and background information from another image;
* We evaluate our proposed framework on the domain-specific tasks of data augmentation in cases of insufficient data or class imbalances, and on populating large-scale maps with synthetic radio-astronomical objects.
§ RELATED WORK
In this section, we provide an overview of the evolution of generative models in computer vision and their applications to astrophysics and, more specifically, radio astronomy.
§.§ Generative Models
Synthetic data generation is a widely explored task involving deep learning models that learn to map a vector sampled from a random Gaussian distribution to data points that follow the distribution of the training data.
This task can be achieved using several neural network architectures. Among these, the most commonly employed are Variational Autoencoders (VAE) <cit.>, Generative Adversarial Networks (GAN) <cit.>, and, from more recent advancements, diffusion models <cit.>.
VAEs follow the architecture of autoencoders, composed of an encoder, designed to map the input data to a lower-dimensional latent space, and a decoder, that maps points from the latent space back to the original space.
Since VAEs are generative models, the encoders map the input to a distribution (approximating a Gaussian) of latent vectors instead of a single latent representation. Then, a latent vector is sampled from this distribution and provided to the decoder to generate data.
Variants of this architecture introduce improvements on the reconstruction efficiency by latent space quantization <cit.>, on the controllability of generated samples with class labels <cit.>, or on the quality of the generation using the attention mechanism <cit.>.
While VAEs rely on supervised training, GANs <cit.> employ two neural networks, namely a generator and a discriminator, trained in an adversarial fashion. The former is optimized to generate data that follows the distribution of the training data, while the latter is trained to distinguish between “real” data belonging to the training set and samples produced by the generator.
Since these two modules are trained jointly with opposite objective functions, adversarial training can lead to unstable and unpredictable training, often causing the model to diverge.
Several improvements have been proposed to the original architecture, by adding conditioning information to control the distribution of the generated samples <cit.>, introducing the ability to map samples back to the latent space from the image space <cit.>, and enabling the possibility of transferring the style of an image to another, without losing its semantic content <cit.>.
During recent years, Denoising Diffusion Probabilistic Models (DDPM) <cit.> have emerged as a valid alternative to adversarial approaches, outperforming them on common benchmarks <cit.>.
These models are trained in a supervised way as a sequence of denoising autoencoders. During training, noise is gradually added to the input data and a neural network learns to revert this process (See Section <ref> for a detailed explanation of how these models work). Diffusion models proved to be flexible in accepting different kinds of conditioning information <cit.> and are also capable of combining multiple inputs for more precise guidance <cit.>.
We select this family of generative models, motivated by their ability to treat multiple conditioning sources in addition to their improved generation quality with respect to GANs.
§.§ Generative Models in Astronomy
Many methods in the radio-astronomical field have successfully shown how generative models, in particular VAEs and GANs, can be successfully employed to serve several purposes.
VAEs can be effectively used to simulate radio galaxies <cit.> for radio surveys, enhancing the generation process with conditional information on the class labels.
<cit.> propose Flow-VAE, a hybrid VAE responsible for learning a representation of the galaxy images, combined with a latent-space normalizing flow. This model is trained to generate the galaxy light profiles, and the outputs can be conditioned on physical galaxy parameters.
While VAEs can successfully synthesize images in different contexts, GANs show a wider range of applications in radio astronomy.
<cit.> present RadioGAN, a method capable of performing image-to-image translation between two different radio surveys. It extracts information from radio data and recovers extended flux from a survey with a high angular resolution. This approach can be used for data augmentation and image translation, though the generation process is limited to being controlled by other images and not by other types of information.
<cit.> propose a GAN-based approach to identify new radio pulsars alleviating the problem of lack of labeled data. They propose semi-supervised GANs, which achieve comparable performance to supervised methods using a small fraction of labeled data with respect to unlabeled data.
GANs have also been employed with the purpose of data augmentation for object detection models to generate realistic solar radio burst (SRB) simulations <cit.> and radio galaxies <cit.>. In both cases, the generation is not controllable so its applicability to the data augmentation use cases is limited.
Given their recent popularity, diffusion models have been employed in radio astronomy as well <cit.>, to perform radio interferometric image reconstruction. This approach uses the original architecture of Diffusion Models <cit.> and conditions the generation process on the visibility of the image, to estimate its cleaned version.
To the best of our knowledge, no approach in radio astronomy has explored controllable generative models to perform data augmentation. In particular, we propose a multi-conditional model with two different sources of conditioning information to perform data augmentation and improve the performance of deep learning models when labeled data is scarce.
§ DATASET
In this section, we present the dataset (named Survey Collection) employed to train and evaluate our proposed method, along with other approaches, for comparison. The dataset consists of cutouts of large maps obtained from several galactic radio astronomical surveys and segmentation masks used as annotations.
§.§ Survey Collection
We train and evaluate our generative models on a dataset, which we name Survey Collection (SC), composed of maps from several galactic radio astronomical surveys. These maps exhibit large sizes (up to 12,000 × 15,000 pixels), which cannot be directly provided to deep learning models since they would require an excessive amount of computational resources.
For this reason, we extract cutouts of size 128 × 128 from each radio map, making sure that extended objects are not cropped within a cutout. The dataset contains a total of 13,602 map cutouts extracted from several radio astronomical surveys obtained with the following telescopes: 1) the Australia Telescope Compact Array (ATCA), 2) the Australian Square Kilometre Array Pathfinder (ASKAP), and 3) the Very Large Array (VLA). More details on the objects contained in the cutouts are reported in Table <ref>.
We extracted images from the following surveys:
* Data Release 1 (DR1)[<https://cloudstor.aarnet.edu.au/plus/s/agKNekOJK87hOh0> from <https://github.com/chenwuperth/rgz_rcnn/issues/10>] <cit.> of the Radio Galaxy Zoo (RGZ) project <cit.>, using 1.4 GHz radio observations at 5" resolution from the Faint Images of the Radio Sky at Twenty cm (FIRST) survey <cit.>;
* 1.2 GHz ASKAP-36 ♏ survey at ∼9.4"×7.7" resolution (see <cit.> for a description of the survey).
* ASKAP EMU pilot survey at ∼12.5"×10.9" resolution <cit.>;
* 912 MHz ASKAP-15 ♏ survey at ∼24"×21" resolution <cit.>
* 2.1 GHz ATCA ♏ survey at ∼9.8"×5.8" resolution <cit.>;
* ASKAP EMU Pilot 2 maps at ∼16.6" x 13.4" angular resolution observed in Stokes-V polarization
In Figure <ref>, for each telescope we report the RMS values distribution, expressed in Jy/beam. A similar version of this dataset, not containing the Stokes-V radio background maps from ASKAP EMU Pilot 2, has been already used in previous works <cit.>.
Each cutout contains one or multiple objects belonging to one of the following classes:
* Extended Sources: This object category includes radio galaxies from the RGZ and ASKAP-36 surveys, described above.
These maps contain extended sources with a 2- and 3-component morphology.
Objects from these surveys are extended emission sources and multi-island sources with two, three, or more components. We removed HII regions, planetary nebulae, and supernova remnants, thus all the sources are likely extragalactic.
* Compact sources: This category contains compact radio sources with single-island morphology. We obtained images with such objects from the ASKAP-15 and ATCA ♏ surveys, reported above.
* Spurious sources: Structures that appear bright but are not categorizable as proper sources fall into this category. This includes mostly imaging artifacts created around bright radio sources due to the limitations of the acquisition instrument. These objects were obtained from the ASKAP EMU pilot survey and the ASKAP-15 ♏, ASKAP-36 ♏, and ATCA ♏ surveys. Traditional algorithms tend to erroneously classify these artifacts as real sources.
* Background: These images are extracted from the ASKAP EMU Pilot 2 Stokes-V images.
Throughout the paper, we will omit `source' when it becomes redundant and only use their categorization, i.e. one among `compact', `extended', and `spurious' and refer to radio map cutouts as images.
The whole dataset, aggregating images from the aforementioned surveys, consists of a total of 36,773 objects. More details are shown in Table <ref>.
Common computer vision approaches process images in common formats (e.g., JPG or PNG) and normalize them to values in the range [0,1 ], so one possible solution to use FITS data would be to convert it into a common image format before feeding it to any model. The problem with this approach is that it involves mapping all (floating point) values to integers in the range [0, 255 ]. This contributes to losing the fine-grained details stored in FITS data and altering the dynamic range of the data.
Alternatively, we apply the following pre-processing steps to each FITS image: we first set the NaN values to 0 to avoid problems when giving the images to the model, then transform the data using a tanh function to map the image to the range [0,1 ] and finally normalize the image to avoid unstable training.
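A minimal sketch of this pre-processing pipeline is given below. The scaling constant used inside the tanh and the final min-max rescaling are assumptions made for illustration; they are not taken from the released code.

import numpy as np
from astropy.io import fits

def preprocess_cutout(path, scale=1.0):
    data = fits.getdata(path).astype(np.float32)  # read the FITS cutout
    data = np.nan_to_num(data, nan=0.0)           # 1) set NaN pixels to 0
    data = np.tanh(data / scale)                  # 2) squash the dynamic range with tanh
    dmin, dmax = data.min(), data.max()           # 3) rescale to [0, 1] for stable training
    if dmax > dmin:
        data = (data - dmin) / (dmax - dmin)
    return data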
§.§ Segmentation Masks
Conditioning a generative model requires additional information on the data samples to be generated so that, at inference time, this information can be used to control the generation process.
For this reason, we annotate each cutout with a segmentation mask, typically employed in semantic segmentation tasks <cit.>.
A segmentation mask is a 2D map with the same size as the image it annotates, where each pixel is an integer describing the category to which that pixel belongs. Given a semantic map M ∈ℝ^H × W, and N_c possible classes, each pixel M(i), with i ∈ [0, H × W] is defined as follows:
M(i) = n ∈ [0, N_c]
This way it is possible to determine the shape and number of the objects contained in the cutouts, as well as their category.
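As a concrete illustration of this encoding (the particular integer-to-class assignment below is an assumption for illustration, not the one fixed by the dataset):

import numpy as np

H, W = 128, 128
# hypothetical class indices: 0 = background, 1 = compact, 2 = extended, 3 = spurious
mask = np.zeros((H, W), dtype=np.int64)
mask[30:40, 32:42] = 1      # a compact source
mask[60:95, 50:110] = 2     # an extended source
mask[10:16, 100:112] = 3    # an imaging artifact (spurious source)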
We use these annotations to train a conditional generative model, which can be employed to generate realistic radio map cutouts, provided with a segmentation mask. We corroborate the choice of using segmentation maps as conditioning by evaluating our model on the use case of data augmentation for semantic segmentation models (See Section <ref>).
To this purpose, we use the source finder on each cutout to produce a first raw object segmentation, which is then manually refined by domain experts. This step is crucial to correctly identify sources and distinguish proper sources from spurious ones when they are close together, as source finders often detect them as belonging to the same island.
Images belonging to the RGZ survey have been annotated using both infrared and radio data, while annotations of data originating from other surveys are based only on radio data.
Samples of annotated images are reported in Fig. <ref>.
Both cutouts and segmentation masks are kept under version control, using the Data Version Control (dvc) framework[<https://dvc.org/>], and stored separately in two different formats. The cutouts are stored in FITS [<https://fits.gsfc.nasa.gov/fits_documentation.html>] files, while the information of the annotations is contained in JSON files, one for each cutout, where all the individual FITS files containing each object mask are specified (see <cit.> for more information on the format).
For proper evaluation, we randomly split our dataset into a train and a test set, following a ratio of 80/20, thus ending up with 10,881 and 2,720 images for the train and test set, respectively.
§ METHODOLOGY
In this section, we introduce the architecture of our conditional generative model, named RADiff (Radio Astronomical Diffusion) illustrated in Fig. <ref>. The model comprises the following key components: 1) an autoencoder (ℰ and 𝒟 in the figure) that projects the images to a latent space, compressing them into a lower-dimensional representation; 2) a diffusion model <cit.>, which is an iterative model that represents the core of the generative process; and 3) a conditional encoder ℰ_c, that projects the conditioning information on the latent space to embed it into the generation process.
Diffusion models do not scale well for image sizes above 32 × 32 or 64 × 64, due to their multi-head attention operations <cit.>, which involve a complexity of O(N^2).
In our case, we treat images of size 128 × 128, so we employ a Latent Diffusion Model (LDM) <cit.>, which operates in a latent space learned from the dataset in a preliminary phase. In particular, we first train an autoencoder to reconstruct the images in the dataset and then use its learned latent representations as input to the diffusion model. The purpose of using a latent representation instead of full-sized data is the reduced computational complexity needed to perform the operations since such representations allow for compressing the information in the data into smaller feature maps.
One interesting property of LDMs is that they allow for customized generation by supporting multiple conditioning mechanisms <cit.>. The generation process can be thus enriched with conditional information to efficiently control the sampled data using human-readable information. We use semantic segmentation maps and image embeddings to condition our diffusion model (see Section <ref>).
§.§ Learning latent representations
As pointed out at the beginning of this section, the choice of employing latent diffusion models implies the necessity of learning a latent vector space from all the samples in the dataset as it will be used as input to the diffusion model.
For this reason, we train an autoencoder <cit.> (ℰ and 𝒟 in Figure <ref>), to reconstruct the input data, and regularize its learned latent space using a Kullback-Leibler divergence to reduce its variance.
Formally, given an input image x ∈ℝ^H × W × C, with H, W being the image size and C the number of channels, the encoder ℰ projects it into a latent vector z = ℰ(x ) ∈ℝ^h × w × c_z where h = H/f, w = W/f, f is the compression factor, and c_z is the number of channels of the latent vectors (generally, c_z >> C). The decoder 𝒟 maps the latent vector z back onto the pixel space and produces the reconstructed image x̂ = 𝒟(z).
We train this autoencoder with a reconstruction loss ℒ_rec and a regularization loss ℒ_reg. The reconstruction loss, defined in <ref>, minimizes the distance between the pixels of the reconstructed image w.r.t the input, using a mean absolute error loss.
ℒ_rec = ‖ x - x̂‖_1
The regularization loss, defined in <ref>, limits the variance of the latent space, preventing it from growing indefinitely. We use a KL-divergence between the distribution of the latent vectors P(z) and a standard Gaussian distribution Q(z) = 𝒩(0,1).
ℒ_reg = KL(P||Q)=∑_zP(z)log(P(z)/Q(z))
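A sketch of how these two losses can be combined in PyTorch is shown below; it assumes, as in KL-regularised latent autoencoders, that the encoder outputs the mean and log-variance of a diagonal Gaussian, so that the KL term against N(0,1) has a closed form. The weighting factor is an assumption.

import torch
import torch.nn.functional as F

def autoencoder_loss(x, x_hat, mu, logvar, kl_weight=1e-6):
    # L1 reconstruction loss between the input x and the reconstruction x_hat
    rec = F.l1_loss(x_hat, x)
    # closed-form KL divergence of N(mu, exp(logvar)) against the standard Gaussian N(0, 1)
    kl = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl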
The architecture of the autoencoder comprises two downsampling blocks, one bottleneck block, and two upsampling blocks. The downsampling block is made of two residual blocks (ResBlock), one attention block (AttnBlock), and a downsampling convolution. The ResBlock contains a series of 3 convolutions with residual connections <cit.>, the AttnBlock employs multi-head self-attention <cit.> on its input, and the downsampling convolution reduces the spatial size of the feature map while expanding the number of channels. The bottleneck is a sequence of two ResBlocks interleaved with an AttnBlock. The upsampling block is the dual of the downsampling one, with an upsampling convolution at the end, increasing the spatial size of the feature map and reducing its number of channels.
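The following is a compact sketch of the ResBlock described above, with three convolutions wrapped by a residual connection; the normalisation and activation choices (group normalisation and SiLU) are assumptions, not a statement about the exact implementation.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # three 3x3 convolutions; a single residual connection wraps the whole block
        self.body = nn.Sequential(
            nn.GroupNorm(8, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(8, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(8, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

# usage (channel count must be divisible by the number of groups, 8 here):
# block = ResBlock(64); y = block(torch.randn(1, 64, 32, 32))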
The main shape of the objects is correctly restored, although the largest differences lie at the edges of the objects, as the model loses some high-frequency details due to compression.
§.§ Diffusion model
The diffusion model <cit.> is a probabilistic model that learns to revert a Markov chain of length T with the objective of generating images from Gaussian noise.
Training a diffusion model consists of two stages: 1) the forward diffusion process, depicted in Figure <ref>, and 2) the backward diffusion process.
During the first stage, data is converted into an isotropic Gaussian distribution by progressively adding Gaussian noise for T timesteps.
Given a data point x, sampled from the dataset, we gradually destroy the information it contains by adding noise, following an iterative deterministic process q (x_t | x_t-1), with t ∈[ 0, T ] to obtain the noised data. In this process, no neural network is involved as the parameters of the Gaussian noise added at each step are defined beforehand. In particular, we define a parameter, β_t = 1 - α_t, with α_t ∈ [0,1], that controls the mean and variance of the noise. β_t depends on the timestep t and follows a predefined schedule. Following the original work of DDPM <cit.>, we select a linear schedule for β_t. Equation <ref> formally defines the forward diffusion process.
q(x_1, …, x_T | x_0) :=∏_t=1^T q(x_t | x_t-1)
q(x_t | x_t-1) :=𝒩(x_t ; √(1-β_t) x_t-1, β_t 𝐈)
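In code, the schedule and the forward process can be sketched as follows. The closed form x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps, with abar_t the cumulative product of the alpha_t, follows from iterating the Gaussian transition above; the schedule endpoints are the standard DDPM values, assumed here for illustration.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear schedule for beta_t (endpoints assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative products abar_t

def q_sample(x0, t, noise):
    # closed form of the forward process q(x_t | x_0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise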
The generative capability of diffusion models resides in the backward process, defined in Equation <ref>, where the goal is to revert the forward process to obtain the original, denoised version of the data.
This process is similar to other generative models (e.g., GANs) as it is the equivalent of mapping a Gaussian noise vector to data. To revert the forward process, we need to compute the reverse conditional probability q (x_t-1 | x_t ). This would require a calculation over the entire dataset to calculate the prior, which is computationally unfeasible, so we optimize the parameters θ of a neural network to approximate the distribution q with another distribution, p, defined as follows:
p(x_t-1| x_t):=𝒩(x_t-1 ; μ_θ(x_t, t), Σ_θ(x_t, t)),
where μ_θ and Σ_θ are the mean and variance, respectively, of the noise predicted by the network at timestep t.
Reconstructing the data at each timestep t happens by subtracting the estimated noise from the noise vector x_t to obtain x_t-1.
As we treat 2D data, we employ a U-Net <cit.>, as done in other works <cit.>. This model follows the original architecture proposed in Ronneberger et al. <cit.>, comprising a series of blocks of convolutional layers that first downsample the input to a lower-dimensional space to extract low-level features. Then, a bottleneck processes the downsampled output before the latent vector is expanded again through a series of transposed convolutional layers that upsample the data back to the original image size. Equal-sized blocks in the downsample and upsample paths are connected via skip-connections, as in the original architecture, and allow the deeper blocks to take into account low-level features as well. The convolutional blocks are enriched with residual connections and multi-head self-attention blocks to better capture spatial correlations within the latent vectors.
Multi-head attention <cit.> is defined as follows:
Attention(Q, K, V) = softmax(Q K^T/√(d_k)) V,
where Q = W_Q·φ_i(x_t), K = W_K·φ_θ(y), V = W_V·φ_θ(y).
Here, W_Q, W_K, W_V∈ℝ^d × d_z^i are learnable projection matrices, d_z^i represents the number of feature maps at the output of the i-th layer, d represents the size of the flattened feature maps, and φ_i(x_t) is the output of the i-th U-Net intermediate layer. In the self-attention blocks used here, the conditioning features φ_θ(y) coincide with φ_i(x_t), so that queries, keys, and values are all computed from the same feature map.
We train our diffusion model using an L_2 loss between the noise the U-Net model estimates at a specific timestep, ϵ_θ(x_t, t), and the noise added during the forward process at the same timestep, ϵ_t. This loss ℒ_diff is defined as follows:
ℒ_diff=E_t, x_0, ϵ[‖ϵ_t-ϵ_θ(x_t, t)‖^2]
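A sketch of one training iteration of this objective is shown below; it uses the standard closed form of q(x_t | x_0) implied by the per-step definition above, eps_model is a placeholder for the U-Net, and the β_t schedule values are assumed as before.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)      # assumed linear schedule, as above
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)   # cumulative product \bar{alpha}_t

def diffusion_loss(eps_model, x0):
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(b, 1, 1, 1)
    x_t = torch.sqrt(ab) * x0 + torch.sqrt(1.0 - ab) * eps  # closed form of q(x_t | x_0)
    eps_hat = eps_model(x_t, t)              # U-Net noise prediction
    return torch.mean((eps - eps_hat) ** 2)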
§.§ Conditioning mechanism
Providing noise as the only input to the diffusion model enables the generation of samples that follow the distribution of the training dataset. For practical use cases, generative models need to be controllable with additional information to effectively guide them toward generating a specific output.
Diffusion models are particularly flexible for conditional generation. Given any kind of conditioning information c, diffusion models can learn the joint distribution p(x_t-1| x_t, c), enabling controllable generation on task-specific information. In our case, given the availability of our annotations, we condition our model on semantic segmentation masks, each of which defines the shape, location, and class of the objects in the image (see Section <ref>).
We embed this information in our diffusion model by concatenating the segmentation map with the input of each denoising step. This contributes to injecting the spatial correlation of the segmentation map into the generative process to guide the sampled data using the structure of the semantic maps.
Since the semantic map only guides the placement of the objects without providing any information about the background, we add another type of conditioning by extracting global information on the image background via another condition encoder. This also allows controlling the background of the generated image by providing an image with the desired background (see Section <ref>).
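The sketch below illustrates how the two conditioning signals could be assembled into the denoiser input. The channel-wise concatenation of the segmentation map follows the description above, while broadcasting a global background code over the spatial dimensions is only one plausible reading of the background conditioning, assumed here for illustration.

import torch

def conditioned_input(z_t, seg_mask, bg_embedding=None):
    # z_t:          noisy latent, shape (B, c_z, h, w)
    # seg_mask:     semantic map resized to the latent resolution, shape (B, n_classes, h, w)
    # bg_embedding: optional global background code, shape (B, c_bg)
    inputs = [z_t, seg_mask]
    if bg_embedding is not None:
        b, c = bg_embedding.shape
        inputs.append(bg_embedding.view(b, c, 1, 1).expand(b, c, *z_t.shape[2:]))
    return torch.cat(inputs, dim=1)  # channel-wise concatenation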
In the conditional configuration, the model optimizes the following loss function, which is an extension of Equation <ref> that considers the joint probability with the condition c:
ℒ_diff=E_t, x_0, ϵ, c[‖ϵ_t-ϵ_θ(x_t, t, c)‖^2]
§ RESULTS
In this section, we conduct a series of analyses on our proposed pipeline with a threefold objective: 1) assess the quality of the generated samples both quantitatively and qualitatively; 2) evaluate how this synthetic data can be used for data augmentation in semantic segmentation models with few or imbalanced training data; 3) employ our model to produce large-scale maps with a real background for Data Challenge purposes.
§.§ Generation Quality
We explore the generative performance of our proposed approach on the following tasks: unconditional image generation and semantic image synthesis. We report quantitative results on the fidelity of the produced samples with respect to the images in the dataset, and samples of the generated data to provide visual feedback as well.
§.§.§ Unconditional Generation
As a first step, we assess the ability of our model to generate high-quality images that exhibit realism and relevance in the field of radio astronomy.
To gauge this aspect, we employ the Fréchet Inception Distance (FID) <cit.>, a widely-used metric introduced for GANs evaluation and used more in general for generative models. The FID, defined in Equation <ref>, expresses the distance between two multivariate normal distributions. We can use this metric to compute the distance between the distribution of the feature vectors extracted from real images and the ones relative to synthetic samples. These feature vectors are typically extracted using the Inception-v3 <cit.> classifier trained on the ImageNet <cit.> dataset, which contains images of objects and scenes in common contexts. Since these images exhibit different features with respect to our data, we train a self-supervised model on our dataset to extract domain-specific features and obtain a more meaningful comparison, as done in other approaches <cit.>.
In particular, we train BYOL <cit.> with a ResNet18 <cit.> backbone on our dataset. Once the model has been trained, we employ its backbone to extract a feature vector, of size 512, from each image in the dataset and from the generated samples. Then, we compute the means μ_R, μ_G and the covariance matrices Σ_R, Σ_G for the feature vectors extracted from the real dataset (R), and from the generated samples (G), respectively. Finally, the computation of the FID metric is defined as follows:
FID = ‖μ_R - μ_G‖^2 + tr(Σ_R + Σ_G - 2(Σ_R Σ_G)^1/2)
Here, tr stands for the trace operator in linear algebra.
As this metric represents a distance, a lower score indicates a higher similarity between the features, thus a better quality of the generated images.
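A sketch of the FID computation from pre-extracted feature vectors (e.g. the 512-dimensional BYOL features described above) is given here; it uses the squared-norm form of the metric.

import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    # feats_real, feats_gen: feature matrices of shape (N, d)
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary noise
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))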
Table <ref> shows the performance of several generative architectures trained on our dataset. Since, to the best of our knowledge, no open-source generative models for radio astronomy are available, we evaluate our model against two state-of-the-art unconditional GANs, namely DCGAN <cit.> and Progressive Growing GAN (PG-GAN) <cit.>. We also evaluate DDPM <cit.>, which follows the original implementation of diffusion models, to assess the advantages of using a latent diffusion model (LDM). Since training DDPM on 128 × 128 images is intractable due to its high computational requirements (see Section <ref>), we resize the images to 64 × 64 pixels in this case. The results show that our model outperforms the GAN-based models and DDPM in terms of FID.
§.§.§ Conditional Generation
Since we are interested in conditioning the generation process as well, in addition to measuring the closeness of the real and generated feature vectors we need to evaluate how coherent the synthetic image is with the conditioning input. In our case, we use semantic segmentation masks as conditions, so we are interested in whether the synthetic image contains the objects defined in the mask. For this reason, we define a Segmentation Score, which measures the overlap between the input mask and the mask predicted on the generated image by a pre-trained semantic segmentation model. In particular, we measure the Intersection over Union (IoU), defined in <ref>, between the ground-truth mask and the estimated segmentation mask. The IoU is typically employed in semantic segmentation tasks <cit.> and is the ratio between the correctly classified pixels of each class k (intersection) and the total number of pixels assigned to class k in either the prediction or the ground truth (union). We compute the average IoU over the classes for each image and finally average the results over all images. Formally, the IoU is defined as follows:
IoU = 1/N∑_i=1^N IoU_i
where N is the number of images in the dataset. For each image, considering a task where the number of possible classes is K, the IoU_i is defined as follows:
IoU_i = 1/K∑_k=1^K intersection_k/union_k
intersection_k = ∑_jpred_k ∘ gt_k
union_k = ∑_jpred_k ∪ gt_k
Finally, the terms pred_k and gt_k are binary masks relative to each pixel j of the segmentation mask and each class k, specifying where, either the prediction or the ground truth maps, report segmentations of class k. Formally, these are defined as follows:
pred_k[j] = 1 if pred_j = k, and pred_k[j] = 0 otherwise,
where pred_j refers to the class predicted for the pixel j. gt_k is computed in the same way for the ground truth mask.
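The Segmentation Score described above can be sketched as follows; how classes that are absent from both the prediction and the ground truth are handled is not specified in the text, so skipping them in the per-image average is an assumption of this sketch.

import numpy as np

def segmentation_score(pred_masks, gt_masks, num_classes):
    # pred_masks, gt_masks: integer label maps of shape (N, H, W)
    per_image = []
    for pred, gt in zip(pred_masks, gt_masks):
        ious = []
        for k in range(num_classes):
            p, g = (pred == k), (gt == k)
            union = np.logical_or(p, g).sum()
            if union == 0:          # class absent in both maps: skipped (assumption)
                continue
            inter = np.logical_and(p, g).sum()
            ious.append(inter / union)
        per_image.append(np.mean(ious) if ious else 0.0)
    return float(np.mean(per_image))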
In our analysis, we employ Tiramisu <cit.> as it has been previously trained on a radio-astronomical dataset <cit.>.
Another metric that quantifies the generation quality is the Structural Similarity Index Measure (SSIM) <cit.>, which evaluates the affinity between the structures of the two images x and y, defined in Equation <ref>.
SSIM(x,y) = (2μ_xμ_y + C_1)(2 σ_xy + C_2)/(μ_x^2 + μ_y^2+C_1) (σ_x^2 + σ_y^2+C_2)
Here, μ and σ^2 indicate mean and variance, respectively, σ_xy is the covariance between x and y, and C_1=(K_1 L)^2 and C_2=(K_2 L)^2 are stabilizing constants, with K_1=0.01 and K_2=0.03 by default; L represents the dynamic range of the images.
We compare our model against two state-of-the-art semantic image synthesis approaches, SPADE <cit.> and INADE <cit.>, which serve as baselines. Then, to evaluate both the quality of the generation of our model and its capability of transferring the background properties from another image to the synthetic one, we test our model in two configurations: 1) we condition the diffusion model by providing only the semantic masks as input; 2) we use both the semantic masks and the background information from another image as conditioning (See Figure <ref>).
We report the quantitative results of this analysis in Table <ref>, showing how using a combination of mask and background conditioning contributes to higher performance and a more stable generation.
We corroborate our results with visual samples of the synthetic images, which help to better understand how the model behaves under different configurations and to assess the quality of the generation.
In Figure <ref>, we report some synthetic images obtained with different models, conditioned with different masks.
While the shape and location of the objects in the input map are correctly reproduced in both cases, the results show that adding background conditioning to the semantic masks reduces the distance from the ground truth, by reproducing the background features with higher fidelity.
The effect of the background conditioning is better visualized in Figure <ref>, where it becomes clear that conditioning the generation process on the background of different images contributes to transferring the visual properties of the background into the synthetic image.
§.§ Synthetic images for data augmentation
In contexts where large data volumes are difficult to obtain, generative models offer the possibility to augment available datasets with synthetic images <cit.>, improving the performance of deep architectures.
For this use case, we evaluate our approach by training a semantic segmentation model, Tiramisu <cit.>, adopting the following strategies to augment our Survey Collection dataset: 1) we generate fully-synthetic image-mask pairs, and 2) we use real semantic masks to generate the synthetic images. We compare the performance of the model instances in terms of IoU, defined in Equation <ref>.
In semantic segmentation tasks, the dataset is composed of image-mask pairs, where the image is the input and the mask represents the ground-truth annotation.
For data augmentation purposes, we generate fully-synthetic image-mask pairs containing compact and extended sources. To generate images using our RADiff pipeline, we first need to produce the segmentation masks that control the image generation and serve as annotations for training the segmentation model. Ideally, these masks should be designed by experts and contain realistic object distributions and shapes. To evaluate our approach, we produce synthetic segmentation masks by training an unconditional diffusion model (DDPM <cit.>) to generate 5,000 masks with a structure similar to those present in our dataset. In Table <ref>, we report the number of object instances in the generated masks. Then, we use the masks to condition our pre-trained RADiff model and obtain the associated synthetic images.
In this analysis, we evaluate how synthetic data can mitigate class imbalance, so we augment the dataset using image-mask pairs containing objects belonging to a single class (either compact or extended) and assess the performance on each class separately. For completeness, we also evaluate the model on the dataset augmented with objects from both classes.
Table <ref> shows the results of this analysis: augmenting the real dataset with fully-synthetic image-mask pairs can improve the overall performance of the model. In addition, controlling the type of objects generated can alleviate the performance loss related to class imbalance, as shown by the improvement for the augmented class. The approach is limited by the quality of the segmentation masks, which is why, to properly exploit the potential of this method, well-designed masks should be provided to generate the synthetic images.
Considering these limitations and the fact that synthetic segmentation masks may not represent realistic object distributions and shapes, we evaluate our approach using real segmentation masks to produce synthetic images. To this purpose, we train our RADiff model on a reduced version of our dataset, SC-Reduced (SC_R), using 70% of the training dataset (7,616 images); then we generate a dataset of augmented synthetic images, SC-Augmented (SC_A), by providing to RADiff the semantic masks of the images not belonging to SC_R (SC_R ∩ SC_A = Ø).
In Table <ref>, we report the number of object instances for each category in the two datasets. For a realistic simulation, we left most of the extended sources in the synthetic dataset, since this kind of source is the most difficult to emulate, while compact sources are more easily reproducible.
Then, we train our semantic segmentation model, Tiramisu, in the following configurations: 1) trained only on SC_R to establish a baseline, 2) trained on a mix of SC_R and SC_A, and 3) trained on a dataset composed only of synthetic images (SC_S), using the real masks from SC to generate them. We evaluate the model on the same test set used for the other experiments and report the results in Table <ref>.
§.§ Large-scale map generation
Exploiting the capabilities of our model, able to generate high-quality samples with well-defined shapes, we additionally evaluate the RADiff pipeline in the generation of synthetic large-scale maps populated with radio-astronomical objects for Data Challenge purposes.
To achieve this, we employ our RADiff model, pre-trained on the SC dataset, to guide the image generation using N semantic masks, (thus generating N crops of size 128 × 128), then filter out the background from each crop, extract the objects from the generated image, and paste them on a real large background map. Since the images generated by the model contain values in the range [0, 1 ], we need to rescale these values to adapt them to the flux of the background noise map. We modulate the pixel values of each object by the standard deviation of the background flux σ and by a scaling factor k sampled from an exponential distribution. We choose a λ value of 3 for the exponential distribution and limit its maximum value to 10 in the images we show in this analysis, even though these values are parameterizable.
Finally, we place the objects randomly on the background map, avoiding overlap. The distribution and the percentage of accepted overlap between the objects are also parameterizable.
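A sketch of the scaling-and-pasting step follows; whether λ is the rate or the scale of the exponential distribution, and whether object fluxes are added to or replace the background pixels, are not fully specified above, so the choices below (scale parameter, additive pasting) are assumptions for illustration.

import numpy as np

rng = np.random.default_rng()

def paste_object(background, obj_crop, obj_mask, y, x, lam=3.0, k_max=10.0):
    # background: 2D real noise map (modified in place); obj_crop: generated crop in [0, 1]
    # obj_mask: boolean mask of the object pixels; (y, x): top-left paste position
    sigma = background.std()                 # standard deviation of the background flux
    k = min(rng.exponential(lam), k_max)     # scaling factor, capped at k_max
    h, w = obj_crop.shape
    region = background[y:y + h, x:x + w]
    region[obj_mask] += obj_crop[obj_mask] * k * sigma
    return background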
Some zoomed-in visual results of this approach are shown in Figure <ref>, while more detailed images at different zoom levels are reported in Section <ref>.
Since the model is trained by applying some pre-processing to the input images, the generated objects will contain values that follow the distribution of the processed ones.
In the future, we will therefore investigate generative approaches with modular preprocessing functions applied, allowing users to choose the image transformations that best suit their use case.
§ CONCLUSION
In this work, we leverage the flexibility of Latent Diffusion Models with conditioning information by proposing RADiff, a conditional LDM that conditions the generation of synthetic images on a semantic mask and background information. We apply this pipeline to augment small datasets, evaluating how extending a real dataset with synthetic samples impacts the performance of deep learning models. We evaluated the impact of using both generated and real segmentation masks to produce the synthetic images, and trained a semantic segmentation model on the resulting augmented datasets. In this evaluation, we found an improvement in performance, especially when treating class imbalances.
We also exploited the quality of the generated samples to simulate large-scale radio maps populated with different kinds of objects to be used in data challenge simulations.
Further exploration of our work may include the extension of this pipeline to 3D data, enabling the generation of 3D spectral data cubes, and the improvement of large-scale map generations with better-designed post-processing operations or with architectural changes to the pipeline to better suit the task.
§ ACKNOWLEDGEMENTS
Part of this work has been supported by the INAF PRIN TEC CIRASA programme, and the spokes "FutureHPC & BigData” and "Astrophysics & Cosmos Observations" of the ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing – and hosting entity, funded by European Union – NextGenerationEU.
§ ADDITIONAL LARGE SCALE SAMPLES
In this section, we report additional samples of the large-scale generated map with the same method presented in Section <ref>. Figure <ref> reports the whole map populated with generated objects, at its total size. Since this has a high number of pixels (12,000 × 15,000), small details and the shape of the objects are not visible in this representation. For this reason, we also report the same image at several levels of zoom, indicating the focused region as well. Figure <ref> shows the same image with a 2×-zoom, highlighting how the objects are distributed in the map. This can be better visualized in the 5×-zoomed image, reported in Figure <ref>. Figures <ref> and <ref> show two different regions zoomed at a 10× scale, focusing on the shape of the synthetic objects.
|
http://arxiv.org/abs/2307.01464v1
|
20230704035305
|
Unsupervised Quality Prediction for Improved Single-Frame and Weighted Sequential Visual Place Recognition
|
[
"Helen Carson",
"Jason J. Ford",
"Michael Milford"
] |
cs.CV
|
[
"cs.CV"
] |
Unsupervised Quality Prediction for Improved Single-Frame and Weighted Sequential Visual Place Recognition
This research is partially supported by an ARC Laureate Fellowship FL210100156 to MM, the QUT Centre for Robotics, the Centre for Advanced Defence Research in Robotics and Autonomous Systems, and received funding from the Australian Government via grant AUSMURIB000001 associated with ONR MURI grant N00014-19-1-2571. The work of H.Carson was supported in part by an Australian Postgraduate Award.
The authors are with the QUT Centre for Robotics, School of Electrical Engineering and Robotics at the Queensland University of Technology, Brisbane, Australia (e-mail: h.carson@hdr.qut.edu.au, j2.ford@qut.edu.au, michael.milford@qut.edu.au).
Helen Carson, Jason J. Ford, Michael Milford
While substantial progress has been made in the absolute performance of localization and Visual Place Recognition (VPR) techniques, it is becoming increasingly clear from translating these systems into applications that other capabilities like integrity and predictability are just as important, especially for safety- or operationally-critical autonomous systems. In this research we present a new, training-free approach to predicting the likely quality of localization estimates, and a novel method for using these predictions to bias a sequence-matching process to produce additional performance gains beyond that of a naive sequence matching approach. Our combined system is lightweight, runs in real-time and is agnostic to the underlying VPR technique. On extensive experiments across four datasets and three VPR techniques, we demonstrate our system improves precision performance, especially at the high-precision/low-recall operating point. We also present ablation and analysis identifying the performance contributions of the prediction and weighted sequence matching components in isolation, and the relationship between the quality of the prediction system and the benefits of the weighted sequential matcher.
§ INTRODUCTION
Visual Place Recognition (VPR) is the task of estimating position using cameras, by matching an image of the current scene to a database of geo-tagged reference images taken along a previous traverse of the route <cit.>. Substantial progress has been made in this research area in recent years <cit.>. While absolute performance has often been the target, when adapting these systems to operate on deployed robots and autonomous systems, other capabilities around localization integrity become just as, if not more, important.
While substantial progress has been made in other domains including aerospace on topics like system integrity <cit.>, this field is relatively underinvestigated in robotics. Initial pilot work in this area has demonstrated some promise, but has had substantial practical limitations including the need for extensive, multi-traverse training data with ground truth correspondences from the target domain. In this paper, we present a new VPR system that provides high performance whilst overcoming some of the practical limitations of previous approaches.
We make the following contributions:
* We present a novel, unsupervised method that enables out-of-tolerance matches to be identified and discarded during a post-processing step on various VPR techniques, using a consensus-based approach.
* We use our localization quality prediction technique to anchor sequence-based matching around identifiably good localization points, further improving performance
(Fig. <ref>).
To support our contribution claims, we demonstrate increased average precision over 15 combinations of benchmark datasets and VPR techniques, covering viewpoint variation, seasonal changes, and urban driving scenarios, using both simple and state-of-the-art VPR techniques.
We further provide analysis isolating the performance contributions of both the prediction system and the weighted sequence matching technique, as well as of the relationship between the performance of the prediction system and the subsequent effect on performance of the weighted sequence matcher.
The paper proceeds as follows. We provide background in Section <ref> and describe our approach in Section <ref>. Our experimental setup is outlined in Section <ref>, and results and discussion in Sections <ref> and <ref>.
§ BACKGROUND
In this section, we further define the problem, provide an overview of VPR techniques and related works, and outline the differences between our methods and similar post-processing techniques.
§.§ VPR Techniques
VPR techniques themselves take many forms, including direct image comparison, hand-crafted techniques, machine-learned feature extraction, hierarchical methods, and semantic segmentation <cit.>. All involve a method of describing images in the form of extracted features prior to comparison, ranging from simple downsampling and patch normalization <cit.>, to complex techniques such as SIFT <cit.>, SURF <cit.>, ORB <cit.>, FAB-MAP <cit.>, Convolutional Neural Network (CNN) architectures <cit.> such as NetVLAD <cit.>; and hierarchical methods such as PatchNetVLAD <cit.>. A localization match is then formed by finding the minimum distance between features in the current scene and all reference images. This match however can be incorrect due to visual aliasing, changes in the environment, dynamic agents, lighting differences and
shadows, and many other issues that cause images taken at the same place to look different, and different places to look the same <cit.>.
Applying post-processing techniques to initial VPR estimates can result in large performance improvements <cit.>. These can include fusion with additional sensor data <cit.>; use of visual odometry to reject out-of-tolerance estimates <cit.>; temporal window-based filtering <cit.>; and Random Sample Consensus (RANSAC) to identify outliers once an error tolerance threshold is tuned <cit.>. Sequential filtering can also be used to exploit the rich spatio-temporal information provided as a localization system moves along a route, rather than treating each image independently <cit.>, including learned descriptors based on contrastive learning methods such as SeqNet <cit.> and SeqMatchNet <cit.>. Extended Kalman Filters (EKFs) <cit.> and particle filters <cit.> can also be used to provide continually updated localization estimates, based on motion state-space estimation.
§.§ Localization Integrity
We relate our work to the concept of integrity, which aims to provide a real-time determination of whether a current localization estimate is within an acceptable tolerance from ground-truth, or not <cit.>.
As the overall performance of camera-based localization systems becomes higher, and these systems are able to perform reliably in more challenging environments, a system's ability to self-identify when its performance is poor is becoming increasingly important, especially in safety- or operationally-critical applications like autonomous vehicles <cit.>.
In our previous work <cit.>, we proposed a supervised learning method for predicting which matches along a route are in-tolerance, based on extracting features from the distance matrix.
However, this supervised technique had substantial practical disadvantages: it required two traverses along a route through a representative environment, along with ground-truth data for each reference and query frame, to enable the prediction method to learn which points were likely out-of-tolerance.
Here we provide two complementary techniques: an unsupervised method for predicting in-tolerance matches, and a method of weighting sequences prior to matching based on these predictions to improve overall precision. In contrast to our previous work, this technique requires no calibration or training data.
Our unsupervised method is effectively a consensus-based filter used to “check” the distance-based localization estimate from the VPR technique against a secondary and independent gradient-based estimate. Both of these techniques are suited to applications where high precision and low false positive rates are more important than recall - that is, where it is better to reject good points as bad, rather than the alternative: accepting out-of-tolerance estimates as good.
In the following sections we outline our approach and experimental method, and compare the performance of our proposed techniques to baseline VPR techniques.
§ APPROACH
In this section we detail our approach to our two contributions, including the formulation of a distance matrix, our new gradient matrix, consensus process, and the exploitation of this process to create an improved weighted sequence matching approach.
§.§ Unsupervised prediction
The cornerstone of our unsupervised method is based on the observation that the absolute average gradient of the similarity matrix is typically maximised around the best match. This is most evident when creating a similarity matrix of one set of images against itself: all exact matches produce an instantaneous distance of zero along the diagonal, corresponding with a sharp instantaneous rate of change in the distance vector at each matching point. Within the matrix, similar places may exhibit relatively low distances between each other, but intuitively, the absolute average gradient change should be highest around the closest match when the distance is closest to zero.
Our goal is to identify correct (in-tolerance) points with high precision, such that inaccurate points are discarded and only accurate points remain. Based on our observations, we form the hypothesis that points along a route where the minimum distance and maximum rate-of-change (gradient) of the distance vector coincide are more likely to be exact or very close matches, rather than where these two phenomena occur at vastly different positions along the route.
§.§.§ Formulation of Distance Matrix
For completeness, here we briefly introduce our distance matrix calculation, formed by computing the euclidean or cosine distance between the features in the current scene (query image) and the features in a set of reference images. The candidate best match is then selected as the reference frame with the minimum distance to each query along a route, representing the best estimated location. In this paper we use a distance matrix 𝐃∈ℝ^n × m as a representation of the feature euclidean distance between all m queries and n references, and a distance vector 𝐝_j=𝐃[:,j] as the distance between a single query image and all reference images, which is the method used in real-time implementation.
Each element 𝐃_ij is the distance between the features in the i^th database image and the j^th query image, and the reference with the minimum distance, d_0, is found as the candidate best match where, i_d0= argmin(𝐝_j). Lower values at the minimum point indicate a stronger match.
§.§.§ Formulation of Gradient Matrix
We introduce in this work a gradient matrix, formed by computing a modified average gradient around each point along the distance vector:
g_i =
1/2(d_i+1+d_i-1)-d_i,  1<i<n
d_i+1-d_i,  i=1
d_i-1-d_i,  i=n
We use convolution to enhance the signal using a 3 × 3 kernel, and pad the first two frames with the mean of the gradient vector. A potential negative impact of this in a real-time implementation may be decreased prediction accuracy for the first and second query frames.
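A sketch of the per-query gradient vector computation defined above is shown below; the additional 3 × 3 convolution used to enhance the full gradient matrix is omitted here for brevity.

import numpy as np

def gradient_vector(d):
    # d: distance vector between one query and all n reference frames
    n = len(d)
    g = np.empty(n)
    g[0] = d[1] - d[0]                          # boundary case i = 1
    g[-1] = d[-2] - d[-1]                       # boundary case i = n
    g[1:-1] = 0.5 * (d[2:] + d[:-2]) - d[1:-1]  # interior points
    return g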
A gradient vector, 𝐠_j=𝐆[:,j] is formed for the jth query, representing the rate of change between the query features, and features in each reference frame along the route. Note that the order of the reference frames must be preserved, but the query images can be encountered in any order in real-time. The maximum gradient for each query can then be hypothesised as an alternative best match candidate:
i_g0=argmax(𝐠_j)
§.§.§ Rejection of out-of-tolerance points
To finalise our technique, we compare the best match generated by the minimum distance vector and maximum gradient vector, and reject matches where the number of frames between these are greater than one. We form a binary prediction vector 𝐲_pred∈{0,1}^m from this comparison, where y_pred=1 representation predicts the localization estimate is in-tolerance (True Positive) and y_pred=0 represents an out-of-tolerance prediction (False Positive):
y_pred =
1,  if |i_g0 - i_d0| ≤ 1
0,  otherwise.
The prediction vector can be used as a mask to retain only predicted good candidate matches from the distance matrix. Examples of this consensus based prediction approach are shown in Figure <ref>.
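The consensus rule can be sketched as follows, reusing the gradient_vector helper from the previous sketch; the one-frame tolerance is exposed as a parameter.

import numpy as np

def consensus_prediction(d, frame_tol=1):
    # d: distance vector for one query against all reference frames
    g = gradient_vector(d)                # from the previous sketch
    i_d0 = int(np.argmin(d))              # distance-based best match
    i_g0 = int(np.argmax(g))              # gradient-based best match
    y_pred = int(abs(i_g0 - i_d0) <= frame_tol)
    return i_d0, y_pred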
§.§ Anchored/Weighted sequence
The methods described so far have the potential to improve single-frame-based matching, and could be further enhanced by naive application of a sequence-based matching technique. However, a key new opportunity here we pursue is to weight the sequence matching process based on the predictions around the quality of the localization estimates. Here we incorporate our method into SeqSLAM <cit.> by heavily weighting predicted good points prior to performing sequence matching. This artificially gives each sequence template containing predicted good points a much lower value than the standard implementation, which anchors the sequence around our prediction.
For each query, we weight each minimum value, d_0, in the distance vector corresponding to a predicted good point by a weighting factor, w, between 0 and 1 to create a weighted distance matrix, 𝐃_w:
D_w|_i=i_d0,j =
d_0 - w(d_0-D_min),  if y_pred=1
D_i,j,  if y_pred=0
The weighting factor w lowers the minimum value in the distance vector towards the overall minimum of the distance matrix. All other values, including predicted bad points (y_pred=0), are left unchanged.
To implement sequence matching, we use the computationally efficient method of convolving the weighted distance matrix with an identity matrix with sequence length L, as described in <cit.> and <cit.>:
𝐃_wseq=𝐈_L ∗𝐃_w,
and identify the new best candidate sequence for each query:
i_0=argmin(𝐃_wseq[:,j])
Using a weighting factor w=1 reduces all predicted `good' values to the minimum value in the distance matrix, 𝐃_min. To enable generation of PR-curves, a weighting factor w less than one (e.g. w=0.99) is recommended.
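A sketch of the weighting and sequence-matching steps follows; the boundary handling of the convolution (mode="same") is an implementation choice not dictated by the equations above.

import numpy as np
from scipy.signal import convolve2d

def weighted_sequence_match(D, y_pred, w=0.99, L=2):
    # D: distance matrix of shape (n_refs, n_queries); y_pred: per-query binary predictions
    D_w = D.copy()
    d_min = D.min()
    for j in range(D.shape[1]):
        if y_pred[j]:                                 # anchor predicted-good queries
            i0 = np.argmin(D[:, j])
            D_w[i0, j] -= w * (D_w[i0, j] - d_min)
    D_seq = convolve2d(D_w, np.eye(L), mode="same")   # convolve with the L x L identity
    return np.argmin(D_seq, axis=0)                   # best reference index per query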
§ EXPERIMENTAL SETUP
To test our hypotheses, we implement our approach using 15 combinations of datasets and VPR techniques, described in the sections below, and compare the performance of our method against those of baseline techniques.
§.§ VPR Techniques
We apply our method to Sum-of-Absolute Differences (SAD), using patch-normalized downsampled images <cit.>; NetVLAD <cit.> based on a CNN architecture; and recent state-of-the-art approach PatchNetVLAD, which uses fusion of locally-global descriptors on multiple-scales using pre-trained models <cit.>.
§.§ Datasets
Four widely used benchmark datasets have been used to implement our technique: Gardens Point Walking <cit.>, Nordland <cit.>, 4Seasons OfficeLoop <cit.> and Oxford RobotCar <cit.>. In each case, we use one reference set and one query set taken on the same route for our unsupervised prediction. When benchmarking against our previous supervised prediction technique, we use a second calibration reference/query set over a different route as described in <cit.>, along with ground-truth data for the calibration set.
Gardens Point Walking is a walking dataset obtained on a university campus with viewpoint variation between the reference and query sets. We use the first 100 frames with the left set as reference and right set as query, with 2.5m between each frame. Nordland contains images from a Norwegian railway, and uses a fixed viewpoint, but with large seasonal variation between reference and query sets. We have used two separate sets from this dataset: fall/spring and summer/winter as query/reference sets. The summer/winter set provides a challenging dataset due to significant differences in daylight levels, snow cover, and foliage along the route. In both cases we use 1150 images over 35km from Test Section 2 as defined in <cit.>, with mean distance of 30m between consecutive images, and tunnels and station stops removed.
4Seasons and Oxford RobotCar contain images recorded from passenger vehicles driving in urban environments. Challenges in these sets include presence of dynamic objects, viewpoint variations along the route (e.g. when changing lanes) and environmental differences. In 4Seasons, we use 381 frames for reference and query routes over 500m with little variation apart from dynamic objects on the road, providing an `easy' set that all chosen VPR techniques work well on. For Oxford RobotCar, we use 1800 image frames over approximately 2.25km recorded in sunny conditions as the reference set, in rain for the query set, with 1.25m median distance between frames <cit.>.
Together, the datasets chosen provide a broad set of conditions and challenges to assess the performance of our method.
§ RESULTS
We present performance results for the overall combination of our two proposed contributions in Section <ref>, and the results of our two contributions independently in Sections <ref> and <ref>. We also present an ablation study showing the effect of sequence length and weighting factor on the weighted sequence performance, and experimental results for mean execution time at inference.
§.§ Combined system
Precision and recall are commonly used performance measures defined as ratios of in-tolerance localization matches (True Positives), out-of-tolerance matches (False Positives), and False Negatives (FN) (good matches that have been discarded), such that precision=TP/(TP+FP) and recall=TP/(TP+FN) <cit.>. Area Under the PR-curve (AUC) is a proxy for average precision, with higher values corresponding to higher performance. We impose a tight tolerance of +/-1 image frame as a “good” (in-tolerance) match in all experiments.
Table <ref> includes AUC up to 20% recall for baseline sequences, and sequences weighted with our unsupervised predictions. In 13 out of 15 combinations our weighted sequence has higher or equal AUC than the baseline. A common weighting factor of 0.99 and sequence length of 2 is used in all cases.
PR-curves for all three VPR techniques applied to each dataset are shown in Figure <ref> for the baseline sequences using no weighting, weighted sequences using the unsupervised prediction method, and the supervised method from our previous work. In the majority of cases, the PR-curve using weighted sequences has higher precision for the same recall as the unweighted sequences in the higher precision range for each dataset, which is the operating point of interest for safety-critical scenarios. The performance using weighted sequences also matches or exceeds the precision of our supervised technique in several cases, albeit up to a lower recall.
§.§ Performance of individual contributions
The following sections present an analysis of results for our two contributions independently.
§.§.§ Unsupervised Prediction
We compare our unsupervised method to the following: (1) baseline VPR technique, sweeping the minimum distance as a threshold, and (2) applying a supervised (rather than unsupervised) prediction to the baseline VPR technique, and (3) generating matches using the maximum value in the gradient matrix only, and sweeping the maximum gradient as a threshold.
Fig. <ref> demonstrates the impact of these comparisons as averaged PR-curves for the Nordland fall/spring and 4Seasons datasets, and shows the increased precision over the baseline when supervised and unsupervised predictions are used to remove `bad' matches. The curves are truncated to a maximum recall value due to many matches being discarded, such that 100% recall cannot be achieved.
These results also show that for these two datasets, using the maximum gradient alone to determine the best match produces decreased performance compared to the baseline, which uses the minimum distance value. Increased precision is only achieved when the match with the minimum distance and maximum gradient coincide (within +/1 one frame), which is the basis for our unsupervised technique.
§.§.§ Weighted Sequences
The performance of the weighted sequence is directly impacted by the prediction quality, chosen weighting factor, and is also bounded in part by the performance of the underlying VPR technique in the environment.
A break-even point will exist for each combination of these characteristics such that below a particular prediction quality threshold, use of the weighted sequence may actually decrease the combined system performance. The weighting factor, w, tempers the impact of the prediction on performance, and can be chosen based on the estimated performance quality of a given prediction method and dataset / VPR technique combination.
Figure <ref> shows the increase in performance by using a weighted sequence, where the unsupervised method (from Section <ref>) is used to weight the distance matrix at predicted `good' locations, for two datasets using SAD. The potential increase on this specific route and conditions using a perfect prediction is also shown to illustrate how, even with unrealistic perfect prediction, upper bound performance is limited by the underlying system.
To highlight the impact of this precision increase on the number of false positive matches, we can examine a specific example from our results. In Fig. <ref>(b), at 20% recall, the number of false positives (incorrect matches) in the GardensPoint SAD set is 37 for the unweighted sequence but drops to only 17 for the weighted sequence using unsupervised predictions.
Finally, Figure <ref> presents a characterisation of the impact of prediction performance over a range of weighting factors, for two different scenarios, a good prediction system and a poor prediction system, using the Oxford RobotCar dataset. The thicker line in the middle of the graph shows an unweighted sequence matcher as a baseline. Below the unweighted line, we have a badly performing predictor, showing that as the weighting factor goes up, the performance gets progressively worse. Above the unweighted line, we have a good predictor; in this case, as the weighting factor goes up, the overall performance also goes up.
§.§.§ Ablation Study
An ablation study of weighted sequences for varying sequence lengths shows that the performance increase persists as sequence length increases, but the relative improvement over the baseline decreases. Examples are given in Figure <ref> for two combinations of Nordland datasets with varying sequence lengths from L=2 to L=9.
§.§ Computational resources
To evaluate whether our methods are suitable for real-time implementation, we analyse scalability and present experimental results showing mean execution time for our method.
§.§.§ Theoretical explanation of scalability
At run-time, our prediction method requires generation of a gradient vector for each query point. Both the prediction and weighted sequence filter require convolution to be performed along the gradient and distance vectors. Execution time will therefore increase linearly with reference database size, as the same arithmetic operations are performed for each reference image.
§.§.§ Execution time at inference
Mean execution time for real-time implementation in Python has been measured running on an Apple Silicon M1 chip with 16GB RAM. Figure <ref> shows the total mean execution time required for the unsupervised prediction and weighted sequence at inference is less than 20ms per query for datasets presented here up to 1800 reference frames. This includes the time required to implement the sequence filter, which is common with the baseline sequence technique. Note that our implementation is not optimised and can be considered a worst case. Results in Fig <ref> provide evidence that our proposed methods are suitable for implementation in real-time applications.
§ DISCUSSION
We have presented two complementary methods for improving VPR performance based on: (1) unsupervised prediction of `good' localization estimates along a route, and (2) using sequences weighted with predictions. Both methods improve performance but synergize in particular because the predictions enable the weighted nature of the modified sequence matching technique. Over a comprehensive combination of benchmark datasets and VPR techniques, results show that both techniques lead to improvements in localization performance. A limitation is that these techniques are best suited to applications where high precision, low recall operating points are acceptable.
There are a number of exciting avenues for future work. While the unsupervised nature of the approach presented here has practical advantages, there are a subset of applications where supervised training of a prediction system is feasible. In such situations, it would be interesting to examine how fusing both supervised and unsupervised methods could further improve both performance and versatility. In particular, an unsupervised method could potentially provide an online training signal for a supervised method. In addition, while difference and gradient-based matching and prediction has been shown to be effective here, it may be possible to learn more abstract “factors" that provide further predictive capability around localization performance. Finally, while we have used prediction outputs to weight the sequence matching process, it may be possible to more organically link the prediction processes and weighted sequential matching processes into a single monolithic system, with further potential advantages.
|
http://arxiv.org/abs/2307.02021v1
|
20230705044958
|
Parameterized Complexity of Domination Problems Using Restricted Modular Partitions
|
[
"Manuel Lafond",
"Weidong Luo"
] |
cs.CC
|
[
"cs.CC",
"cs.DS"
] |
For a graph class 𝒢, we define the 𝒢-modular cardinality of a graph G as the minimum size of a vertex partition of G into modules that each induces a graph in 𝒢. This generalizes other module-based graph parameters such as neighborhood diversity and iterated type partition. Moreover, if 𝒢 has bounded modular-width, the W[1]-hardness of a problem in 𝒢-modular cardinality implies hardness on modular-width, clique-width, and other related parameters. On the other hand, fixed-parameter tractable (FPT) algorithms in 𝒢-modular cardinality may provide new ideas for algorithms using such parameters.
Several FPT algorithms based on modular partitions compute a solution table in each module, then combine each table into a global solution. This works well when each table has a succinct representation, but as we argue, when no such representation exists, the problem is typically W[1]-hard. We illustrate these ideas on the generic (α, β)-domination problem, which asks for a set of vertices that contains at least a fraction α of the adjacent vertices of each unchosen vertex, plus some (possibly negative) amount β. This generalizes known domination problems such as Bounded Degree Deletion, k-Domination, and α-Domination. We show that for graph classes 𝒢 that require arbitrarily large solution tables, these problems are W[1]-hard in the 𝒢-modular cardinality, whereas they are fixed-parameter tractable when they admit succinct solution tables. This leads to several new positive and negative results for many domination problems parameterized by known and novel structural graph parameters such as clique-width, modular-width, and 𝒢-modular cardinality.
§ INTRODUCTION
Modular decompositions of graphs have played an important role in algorithms since their inception <cit.>.
In the world of parameterized complexity <cit.>, Gajarský et al. <cit.> proposed the notion of modular-width, or mw for short, which can be defined as the maximum degree of a prime node in the modular decomposition tree of G.
Unlike other structural parameters such as treewidth <cit.>, mw can be bounded on certain classes of dense graphs, making it comparable to the clique-width (cw) parameter <cit.>. In fact, cw is at most mw plus two, and mw can sometimes be arbitrarily larger than cw.
It is known that several problems that are hard in cw are fixed-parameter tractable (FPT) in mw, with popular examples including Hamiltonian Cycle, Graph Coloring <cit.>, and Metric Dimension <cit.>.
In particular in <cit.>, the main technique used to design such algorithms is dynamic programming over the modular decomposition. In essence, the values of an optimal solution are found recursively in each module of the graph G, which are then combined into a solution for the graph itself, often using small integer linear programs based on the prime graph of G.
Another technique was recently introduced in <cit.>,
where the authors show that the number of potential maximal cliques of a graph is at most O^*(1.73^mw), a fact that can be combined with results of <cit.> to obtain FPT algorithms for several problems such as Treewidth and Minimum Fill-in.
Despite these efforts, there are still several problems that are known to be hard on cw, for instance Max-Cut <cit.> and Bounded Degree Deletion <cit.>, but unknown to be hard or FPT in mw. Recently, an XP algorithm was given for k-Domination parameterized by treewidth (tw) <cit.>, but the W-hardness of k-Domination in parameter tw, cw, or mw is unknown. Also, the authors of <cit.> conclude with the question of whether Edge Dominating Set and Partition into Triangles are FPT in mw; these ten-year-old questions remain unanswered.
One promising direction to gain knowledge and tools for mw algorithms is to study some of its related parameters. We consider such variants in which the graph must be decomposed into modules that each induces a subgraph belonging to a specific graph class. Notable examples include neighborhood diversity (nd) <cit.>, which can be seen as the minimum size of a partition into modules that induce edgeless graphs or cliques, and iterated type partition (itp) <cit.>, in which each module must induce a cograph. This idea was also used in <cit.> from which we borrow our terminology, where a partition into modules inducing cliques is used to obtain linear kernels for the cluster editing problem. Meanwhile, as Knop wrote in <cit.>, `another important task in this area is to understand the boundary (viewed from the parameterized complexity perspective) between modular-width on one side, and neighborhood diversity, twin-cover number, and clique-width on the other side'. Our work resides in this boundary.
In this work, we propose to generalize the above ideas by
restricting modules to a given graph class 𝒢. That is, we define the 𝒢-modular cardinality of a graph G, denoted mc_𝒢(G), as the size of the smallest partition of its vertices into modules that each induces a subgraph in 𝒢. For a graph class 𝒢 of bounded modular-width (such as cographs), the hardness of a problem in 𝒢-modular cardinality implies its hardness in mw (and thus cw). On the other hand, FPT techniques for mc_𝒢 may shed light towards developing better algorithms for mw. Moreover, by considering graph classes 𝒢 of unbounded modular-width (e.g. paths, grids), mc_𝒢 may be incomparable with mw or even cw, leading to FPT algorithms for novel types of graphs.
To the best of our knowledge, such a generalization had not been studied, although it is worth mentioning that in <cit.>, the authors propose a similar concept for treewidth, where some bags of a tree decomposition are allowed to induce subgraphs of a specified graph class.
Our contributions.
We first establish that if 𝒢 is hereditary and has bounded mw, then mw(G) is at most mc_𝒢(G) for a graph G ∉ 𝒢, allowing the transfer of hardness results. We then show that for many graph classes, namely those that are easily mergeable, there is a polynomial-time algorithm to compute mc_𝒢(G) and obtain a corresponding modular partition.
We then introduce techniques to obtain W[1]-hardness results and FPT algorithms for the parameter.
In essence, we argue that the dynamic programming technique on mw algorithms works well when a small amount of information from each module is sufficient to obtain a solution for the whole graph (for instance, the algorithms of <cit.> require only a single integer from each module). Such succinct solution tables from each module can often be combined using integer programs with few integer variables <cit.>.
Conversely, when too much information is required from each module (e.g. linear in the size of the modules) to obtain a final solution, we are unable to use integer programming and this typically leads to W[1]-hardness. This occurs when arbitrary solution tables are possible in each module.
We use a large class of domination problems to illustrate these techniques. Specifically, for a real number α∈ [0, 1] and integer β (possibly negative), we introduce the (α, β)-Domination problem.
Given a graph G and an integer q, this problem asks for a subset of vertices X ⊆ V(G) of size at most q such that, for each v ∈ V(G) ∖ X, we have |N(v) ∩ X| ≥α |N(v)| + β. In other words, each unchosen vertex has at least a fraction α of its neighbors dominating it, plus some number β.
The problem is equivalent to the Bounded Degree Deletion problem <cit.> if α = 1 and β≤ 0; equivalent to the k-Domination problem <cit.> if α = 0 and β≥ 1; and equivalent to the α-Domination problem <cit.> if α∈ (0,1] and β=0. Additionally, if α = 0 and β≤ 0, then all well-coded instances of the problem are yes instances, and if α = 1 and β≥ 1, then all instances of the problem are no instances. We exclude those cases.
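To make the condition concrete, a brute-force verifier of the (α, β)-domination property is sketched below (using networkx); it only restates the definition and is not one of the algorithms developed in this paper.

import networkx as nx

def is_alpha_beta_dominating(G, X, alpha, beta):
    # X is a candidate solution; every unchosen vertex must have at least
    # alpha * |N(v)| + beta of its neighbours inside X
    X = set(X)
    for v in G.nodes:
        if v in X:
            continue
        nbrs = set(G.neighbors(v))
        if len(nbrs & X) < alpha * len(nbrs) + beta:
            return False
    return True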
Table <ref> illustrates some of the main results of this paper. Most of the hardness results follow from a more general result on arbitrary solution tables, detailed in Theorem <ref> below. The statement is slightly technical, so we describe its high-level implication on the case α = 1 (Bounded Degree Deletion). In this problem, a possible solution table is a function f : [n] →ℕ such that, for each i ∈ [n], f(i) is the minimum maximum degree achievable in G after deleting i vertices. The theorem states that for a graph class 𝒢, if for any such f we can construct a graph in 𝒢 whose solution table is f, then the problem is W[1]-hard in 𝒢-modular cardinality.
We show that the class of disjoint stars satisfies this property, which implies several other hardness results.
On the other hand, several positive results for the problem make use of succinct solution tables. In essence, when f can be represented by a constant number of linear functions, or by a convex function, then we can use integer programming to merge these tables and obtain positive results.
Finally, additional results not shown in the table can be deduced easily from this technique. We show that is FPT in - and - as parameters, which are of interest since they are incomparable with modular-width.
§ PRELIMINARY NOTIONS
For an integer n, denote [n] = {1,…,n}. The maximum degree of a graph G is denoted Δ(G).
The complement of G, denoted G, is the graph with V(G) = V(G) and, for distinct u, v ∈ V(G), uv ∈ E(G)
if and only if uv ∉ E(G). The neighborhood of v ∈ V(G) is N(v).
The set of connected components of a graph G is denoted CC(G). For X ⊆ V(G), G[X] denotes the subgraph of G induced by X and G - X = G[V(G) ∖ X].
If X = {v}, we may write G - v.
Slightly abusing notation, we may also write v ∈ G instead of v ∈ V(G), |G| instead of |V(G)|, and X ∩ G instead of X ∩ V(G).
A graph class is a (possibly infinite) set of graphs containing at least one non-empty graph.
We say that is hereditary if, for any G ∈, any induced subgraph of G is also in .
Note that if is hereditary,
the graph consisting of an isolated vertex is in .
We say that is a polynomial-time recognition graph class if there is a polynomial-time algorithm that decides whether a given graph G is in .
Some popular graph classes that we will use throughout this paper: edgeless is the set of all edgeless graphs; complete is the set of all complete graphs; cluster is the set of graphs in which every connected component induces a complete graph; stars is the set of graphs in which every connected component is a star graph; cographs is the set of cographs, where a cograph is either a single vertex, or a graph obtained by applying either a join or a disjoint union of two cographs <cit.>.
Observe that ⊆⊆.
Modular parameters.
First, let us introduce the concepts of modular decomposition and modular-width.
For a graph G=(V,E), a module of G is a set of vertices M ⊆ V
such that for every v∈ V ∖ M, either all vertices of M are adjacent to v or all vertices of M are not adjacent to v. The empty set, the vertex set V, and all one element sets {v} for v∈ V are called the trivial modules. In a prime graph, all modules are trivial modules. A factor is a subgraph induced by a module.
A module M is strong if for any module M' of G, either M' is a subset of M, M is a subset of M', or the intersection of M and M' is empty. M is maximal if M ⊊ V and there is no module M' such that M ⊊ M' ⊊ V.
A partition of V(G) is called a modular partition if every element of is a module of G. If only contains maximal strong modules, then it is a maximal modular partition. It is known that for every graph, this partition is unique. Two modules M and M' are adjacent in G if every vertex of M is adjacent to every vertex of M', and non-adjacent if every vertex of M is not adjacent to any vertex of M'. For a modular partition of V, the quotient graph G_/ is defined by V(G_/) = {v_M : M ∈} and v_M_1 v_M_2∈ E(G_/P) if and only if modules M_1, M_2 are adjacent.
We can represent all strong modules M of G by an inclusion tree MD(G). Moreover, we call MD(G) the modular decomposition tree of G, in which each vertex v_M is corresponding to a strong module M. More specifically, each leaf v_{v} of the inclusion tree corresponds to a vertex v of G and the root vertex v_V corresponds to V. Moreover, for any two strong modules M and M', M' is a proper subset of M if and only if v_M' is a descendant of v_M in MD(G).
An internal vertex v_M of MD(G) is parallel if G[M] is disconnected, is series if the complement of G[M] is disconnected, and is prime otherwise.
The modular-width of a graph G is the maximum number of children of a prime vertex in MD(G).
We recommend <cit.> for more information on the topic.
Next, let us provide the definition of the parameter neighborhood diversity <cit.>. A modular partition of V(G) is called a neighborhood partition if every module of the modular partition is either a clique or an independent set. The width of the partition is its cardinality. The minimum neighborhood partition of V(G) is the neighborhood partition of V(G) with the minimum width. The neighborhood diversity, denoted by nd(G) or nd, of G is the width of the minimum neighborhood partition of V(G). Theorem 7 of <cit.> proves that minimum neighborhood partition, thus also the neighborhood diversity, can be obtained in polynomial time. Clearly, modular-width generalizes neighborhood diversity.
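For intuition, the minimum neighborhood partition can be obtained by grouping twin vertices; the following Python sketch (the function name is ours, and networkx is only used for the example graph) puts u and v in the same class exactly when N(u) ∖ {v} = N(v) ∖ {u}.

import networkx as nx

def neighborhood_partition(G):
    # greedily group vertices into twin classes; the number of classes is nd(G)
    classes, reps = [], []
    for v in G.nodes():
        Nv = set(G[v])
        placed = False
        for cls, r in zip(classes, reps):
            if Nv - {r} == set(G[r]) - {v}:
                cls.append(v)
                placed = True
                break
        if not placed:
            classes.append([v])
            reps.append(v)
    return classes

# example: the complete bipartite graph K_{3,4} has neighborhood diversity 2
G = nx.complete_bipartite_graph(3, 4)
print(len(neighborhood_partition(G)))   # -> 2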
Our variant of modular-width restricted to graph classes follows.
Let be a graph class.
For a given graph G (not necessarily in ), a module M of G is a -module if G[M] belongs to .
A modular partition = {M_1, …, M_k} of a graph G is called a -modular partition if each M_i is a -module.
The - of G, denoted by (G), is the minimum cardinality of a -modular partition of G.
The neighborhood diversity (nd) is equivalent to the (∪)-.
The iterated type partition (itp) parameter <cit.> is the number of vertices of the graph obtained through the following process:
start with the smallest modular partition into cliques and edgeless graphs;
contract each module into a single vertex; repeat until no more contraction is possible. It can be shown that the remaining vertices represent modules that are cographs.
Thus, itp(G) is not smaller than the - of G.
§ PROPERTIES AND TRACTABILITY OF 𝒢-
In this section, we show that mw is not larger than for graph classes of bounded modular-width. This allows hardness results on to also apply to mw. We then show that is polynomial-time computable for “easily mergeable” graph classes.
Observe that for a graph G, if G or its complement is disconnected, then CC(G) or, respectively, the set of connected components of the complement forms the maximal modular partition of G (since the modules in this partition are defined to be strong).
Before we can compare and mw, we will need the following.
Let G be a graph with at least two vertices. Then the following holds:
* if G is disconnected, then mw(G) = max_C ∈ CC(G) mw(G[C]);
* if the complement of G is disconnected, then mw(G) = max_C mw(G[C]), where C ranges over the connected components of the complement;
* if both G and G are connected, let ^* be the maximal modular partition of G. Then mw(G) = max( |^*|, max_M ∈^* mw(G[M]) ).
According to the definition of modular-width, the first and second cases are trivial. Let us consider the third case. Suppose G = (V,E) and MD(G) is the modular decomposition tree of G. Since both G and its complement are connected, the root v_V of MD(G) is a prime vertex with |^*| children; moreover, each child v_M of v_V corresponds to a module M ∈^*. The modular decomposition tree MD(G[M]) of G[M] is the subtree of MD(G) rooted at the child v_M of v_V, and mw(G[M]) is the maximum number of children of a prime vertex in MD(G[M]). Thus, max_M ∈^* mw(G[M]) is the maximum number of children of a prime vertex over all subtrees of MD(G) rooted at children of v_V, that is, over all prime vertices of MD(G) except possibly the root. Overall, max( |^*|, max_M ∈^* mw(G[M]) ) equals the modular-width of G according to the definition of modular-width.
We heavily rely on the relationships between , modules, and induced subgraphs.
Suppose graph G is not in graph class . Let and ^* be a -modular partition and the maximal modular partition of G, respectively.
Then the following holds:
* if G or G is disconnected, then, for any M ∈ and M'∈^*, M ∩ M' ∈{M, M', ∅}.
* if G and G are connected, then for any M ∈, there is a M' ∈^* such that M ⊆ M'.
The first case is straightforward since M' is a strong module.
Consider the second case.
Note that since we assume G ∉, we have (G) > 1. Suppose that there are a, b ∈ M
and distinct M^*_a, M^*_b ∈^* such that
a ∈ M^*_a, b ∈ M^*_b. Because M^*_a is the unique maximal module of G that contains a, M must be equal to V(G), as otherwise this would imply the existence of another maximal module containing a (this other maximal module is either M, or the largest module not equal to V(G) that contains M). Thus = {M}, which contradicts (G) > 1. Therefore, M must be a subset of some module of ^*.
Let be a hereditary graph class. Then for any graph G and any X ⊆ V(G), (G[X]) ≤(G).
Let be a -modular partition of G of size (G). If we delete a vertex v of a module M_i ∈,
the resulting subgraph G[M_i] - v belongs to by heredity. Thus ' = (∪{M_i ∖{v}}) ∖{M_i, ∅}
is a -modular partition of G - v (even if M_i = {v}) and |'| ≤ ||.
Notice that without the heredity condition, Observation <ref> may be false, for instance if is the graph class
“graphs whose number of vertices is either 1, or a multiple of 100” and G is an edgeless graph with 100 vertices.
It is now possible to show inductively that, in our setting, for graph classes of fixed modular-width, having bounded - implies bounded mw.
Let be an hereditary graph class and define ω_ := max_H ∈ mw(H). Then for any graph G, mw(G) ≤max((G), ω_ ).
If G ∈, then (G) = 1 and mw(G) ≤max(1, ω_) holds trivially because mw(G) ≤ω_ by definition. Hence we will assume that G ∉.
We prove the statement by induction on |V(G)|. As a base case, suppose that |V(G)| = 1. Then mw(G) = 0 and (G) = 1 and the statement holds.
Assume that |V(G)| > 1 and that the statement holds for smaller graphs.
If (G) = 1, then G ∈ and mw(G) ≤ω_, in which case the statement holds. Suppose for the remainder that (G) > 1.
Let us note that for any subset X ⊊ V(G), we may assume by induction that
mw(G[X]) ≤max((G[X]), ω_) ≤max((G), ω_)
where Observation <ref> was used for the last inequality.
Suppose first that G (resp. G) is disconnected. By Lemma <ref>,
there is C ∈ CC(G) (resp. C ∈ CC(G)) such that mw(G) = mw(G[C]).
Since C ⊊ V(G), we can use (<ref>) to deduce that mw(G) = mw(G[C]) ≤max((G), ω_).
Let us then assume that G and G are both connected. Let be a -modular partition of G of size (G).
Let ^* be the maximal modular partition of G.
Since (G) > 1, we may apply Lemma <ref> and deduce that each module M_i ∈ is contained in some module of ^*, which implies
|^*| ≤ || = (G).
Now consider some M^*_j ∈^*.
By (<ref>), we have mw(G[M^*_j]) ≤max((G), ω_).
Finally, by Lemma <ref> we know that mw(G) is the maximum of |^*| or one of mw(G[M^*_i]),
and in either case the statement holds.
Let us note that the above bound is tight, in the sense that mw(G) can be as large as either (G) or ω_.
For instance, if G ∉ is a prime graph, then mw(G) = (G), and if G ∈ attains the maximum in max_H ∈ mw(H), then mw(G) = ω_.
Computing for easily mergeable graph classes.
A graph G is a -join if the complement of G is disconnected and G[C] ∈ for each connected component C of the complement.
Likewise, G is a -union if G is disconnected and G[C] ∈ for each C ∈ CC(G).
If G is a -join (resp. -union), a -merge is a -modular partition of G such that
for each connected component C of the complement of G (resp. each C ∈ CC(G)), there is some M ∈ that contains C.
We say that is a minimum -merge if no other -merge has a size strictly smaller than .
We say that a graph class is easily mergeable if there exists a polynomial time algorithm that,
given a graph G such that G is either a -join or a -union,
outputs a minimum -merge of G.
We say that is trivially mergeable if, for any -join or -union G, one of {V(G)}, CC(G), or CC(G) is a minimum -merge of G.
For easily mergeable classes, the - can be computed easily. If G ∈, we can simply return {V(G)}. If G and G are connected, we know by Lemma <ref> that each module of a minimum -modular partition is contained in a maximal module. It thus suffices to find -modules recursively into each module of the maximal modular partition.
The complex cases occur when G or G is disconnected.
It does not suffice to find the -modules recursively in each connected component.
Indeed, the minimum -modular partition might merge several of these connected components into one module.
As we show, we may safely recurse into the connected components that do not induce a graph in . For the connected components that do induce a graph in , we must decide how to merge them optimally, and this is where the easily mergeable property is needed. Algorithm <ref> describes this idea.
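A rough Python transcription of this recursion is given below. It is only a sketch: it assumes three oracles that are not implemented here, namely in_class (membership in ), minimum_merge (returning a minimum -merge, which is computable by the easily-mergeable assumption), and maximal_modular_partition, and it uses networkx for complements and connectivity.

import networkx as nx

def compute_mc(G, in_class, minimum_merge, maximal_modular_partition):
    # returns a list of vertex sets forming a C-modular partition of G
    if in_class(G):
        return [set(G.nodes())]
    co_G = nx.complement(G)
    if nx.is_connected(G) and nx.is_connected(co_G):
        # both G and its complement connected: recurse into the maximal
        # modular partition, since every module of an optimal partition
        # is contained in one of its parts
        parts = []
        for M in maximal_modular_partition(G):
            parts += compute_mc(G.subgraph(M).copy(), in_class,
                                minimum_merge, maximal_modular_partition)
        return parts
    # otherwise split along the components of G or of its complement
    comps = (list(nx.connected_components(G)) if not nx.is_connected(G)
             else list(nx.connected_components(co_G)))
    D1 = [C for C in comps if in_class(G.subgraph(C))]
    D2 = [C for C in comps if not in_class(G.subgraph(C))]
    parts = []
    if D1:
        # merge the components that already induce graphs in the class
        parts += minimum_merge(G.subgraph(set().union(*D1)).copy())
    for C in D2:
        parts += compute_mc(G.subgraph(C).copy(), in_class,
                            minimum_merge, maximal_modular_partition)
    return parts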
By combining Lemma <ref> and induction, we can show the following.
Suppose that is a hereditary graph class. Suppose further that is polynomial-time recognizable and easily mergeable.
Then Algorithm <ref> returns a -modular partition of G of minimum size in polynomial time.
We focus on the correctness of the algorithm (the polynomial time is not too difficult to verify). The proof of correctness is by induction on |V(G)|. If |V(G)| = 1,
then by heredity, G ∈ and the algorithm correctly returns {V(G)}.
Suppose that |V(G)| > 1 and that the algorithm is correct on smaller graphs. If G ∈, then it is clear that returning {V(G)} is correct. So assume that G ∉.
Let be the partition of V(G) returned by the algorithm, and let ' be a -modular partition of G of minimum size. It is not hard to check that in all cases, is a -modular partition: the modules added to either come from recursive calls (and are thus -modules by induction), or are explicitly checked to be in (the _1 set combined with the definition of a -merge). Hence, we focus on showing that || = |'|.
Consider the case in which both G and G are connected.
Let ^* be the maximal modular partition of G.
By Lemma <ref>, each module of ' is a subset of some module of ^*.
For M^*_i ∈^*, let '_i = {M' ∈' : M' ⊆ M^*_i}.
Then '_i must be a -modular partition of G[M^*_i] of minimum size, because if there is
a smaller such partition ”_i, then (' ∖'_i) ∪”_i is a better -partition of G.
By induction, Algorithm <ref> returns a -modular partition of G[M^*_i] of size |'_i| and,
since this is true for each M^*_i ∈^*, || = |'|.
So, consider the case in which G or G is disconnected.
Let = CC(G) if G is disconnected, and let be the set of connected components of the complement of G otherwise.
As a consequence of Lemma <ref>, for any C ∈, either some M'_i ∈' contains C, or there is '_i ⊆' of size at least two that partitions C.
As in the algorithm, let _1 = {C ∈ : G[C] ∈} and let
_2 = ∖_1.
Let C ∈_1. Since G[C] ∈, we may assume that there is some M'_i ∈' that contains C, since otherwise, by Lemma <ref>, there is some '_i ⊆' of size at least two that partitions C. If the latter occurs, (' ∖'_i) ∪{C} is a smaller -modular partition of G, contradicting the minimality of '.
On the other hand, let C ∈_2. Since G[C] ∉, there cannot be M'_i ∈' that contains C. Indeed, if such a M'_i existed, we would have G[M'_i] ∈ and, by heredity, G[C] ∈, a contradiction to the definition of _2.
Therefore, for each C ∈_2, there is '_C ⊆' of size at least two that partitions C.
We can thus deduce that if we define '_1 as the set of modules of that contain some element of _1, then
'_2 := ' ∖'_1 consists of the modules that are properly contained in some element of _2.
Moreover, since '_1 is a -modular partition of G[⋃_C ∈_1 C] and M' ∈ for each M' ∈'_1, it follows that '_1 is a -merge of
G[⋃_C ∈_1 C].
Since the -merge _1 computed by Algorithm <ref> is minimum, we have |_1| ≤ |'_1|.
In a similar fashion, consider C ∈_2 and let '_C be the set of modules of ' contained in C.
By induction, compute--mc(G[C]) returns a minimum -modular partition _C of G[C], and thus |_C| ≤ |'_C|.
Therefore, every set of modules added by Algorithm <ref> to corresponds to a distinct set of modules in ' that is at least as large, and it follows that is of minimum size as well.
For ∈{, , , }, a -modular partition of minimum size of a graph G can be computed in polynomial time.
We argue that the graph classes mentioned above are trivially mergeable.
For cographs, it follows from the definition of a cograph that if G is a -join or a -union, then G is itself a cograph. Thus {V(G)} is always a -merge.
For cliques, if G is a -join, then {V(G)} is a -merge, and if G is a -union, then CC(G) is the unique minimum -merge.
For edgeless graphs, if G is an -join then the set of connected components of the complement of G is a -merge, and if G is an -union, then {V(G)} is a -merge.
Let =. If G is a -union, then since each C ∈ CC(G) is a set of disjoint cliques, it follows that the same holds for G and thus {V(G)} is a -merge of G.
Suppose that G is a -join and let be a minimum -merge of G.
Let C ∈ CC(G). We claim that if C contains at least two connected components K_a and K_b (which are cliques), then C ∈. Recall that by Lemma <ref>, the modules of that intersect with C either contain C, or are contained in C. Notice that there is no M ∈ that contains C and another vertex from another C' ∈ CC(G), because if that was the case, we could form a P_3 by taking a vertex in each of K_a, K_b, and C', contradicting that G[M] is a cluster graph.
Thus the modules of that intersect with C are either proper subsets of C or equal to C. Because C ∈ and is minimum, we may assume that C ∈, as claimed.
It only remains to deal with ' = {C ∈ CC(G) : C is a clique}.
Clearly, ⋃_C' ∈' C' is a cluster graph, and it follows that a minimum -merge can be obtained by taking each C ∈ CC(G) ∖', and the union of the vertices in '.
Note that not all graph classes are equally easy to merge. Consider the class “graphs with at most 100 vertices”. Merging such graphs amounts to solving a bin packing problem with a fixed capacity of 100, and current polynomial-time algorithms require time in the order of n^100. This is easily mergeable nonetheless, and it would be interesting to find graph classes that are hereditary and polynomial-time recognizable, but not easily mergeable.
§ HARDNESS OF DOMINATION PROBLEMS WITH ARBITRARY SOLUTION TABLES
Let us recall our generic domination problem of interest. For α∈ [0, 1] and β∈ℤ we define:
The problem
Input: a graph G = (V, E) and a non-negative integer q;
Question: does there exist a subset X⊆ V of size at most q such that for every v ∈ V ∖ X, |N(v)∩ X| ≥α |N(v)| + β?
In the above, the vertex set X is called a (α, β)-linear degree dominating set of G. In addition, for convenience, we also call X the deletion part of G.
For any vertex v∈ V(G - X), the degree of v in X, denoted by (v,G,X), equals the number of vertices in X∩ V(G) adjacent to v. The minimum degree of G - X in X equals min{(v,G,X) : v∈ G - X}, which is denoted by δ (G - X, X).
Our goal is to formalize the intuition that graph classes with arbitrary solution tables lead to W[1]-hardness in .
For , this takes the form of arbitrary deletion tables for α∈ (0, 1] and arbitrary retention tables for α = 0.
Let (a_0, I, c) be a triple with {a_0}∪ I ⊆ℕ and c : ℕ→ℕ. We call c a cost function. We say that (a_0, I, c) is decreasing valid if there is an ordering a_1, …, a_|I| of I such that a_0 > a_1 > … > a_|I| >1, and 0= c(a_0) < c(a_1) < … < c(a_|I|).
For a decreasing valid triple (a_0, I, c), we say that
a graph G =(V,E) is a (a_0, I, c)-degree deletion graph if all of the following conditions hold:
* G has maximum degree a_0 and at most (a_0c(a_|I|))^10 vertices;
* for any a_i ∈ I, there exists X ⊆ V of size c(a_i) such that G - X has maximum degree a_i;
*
for any a_i ∈ I and any X ⊆ V of size strictly less than c(a_i), G - X has maximum degree at least a_{i-1}.
In addition, if G ∈ and satisfies the above three conditions, then we say that G is a (a_0, I, c)-degree deletion star graph.
We say that a graph class admits arbitrary degree deletion tables if, for any decreasing valid triple (a_0, I, c), one can construct in time polynomial in a_0c(a_|I|) a graph G ∈ such that G is a (a_0, I, c)-degree deletion graph.
Note that the size of G is only required to be a polynomial function of a_0c(a_|I|), but we fix it to (a_0c(a_|I|))^10 for convenience.
For an integer x ∈ [0, |V|], we call f(x) = min{Δ (G - X) : |X| = x } the degree deletion function of G, where X is a subset of V.
Figure <ref> demonstrates the degree deletion function of a (a_0, I, c)-degree deletion graph. The intuition behind degree deletion graphs is that their deletion function has a stepwise behavior with many steps.
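For very small graphs, the degree deletion function can be tabulated directly from its definition; the brute-force Python sketch below is only meant to make the definition concrete and is, of course, exponential.

from itertools import combinations

def degree_deletion_function(G):
    # f(x) = min over all X with |X| = x of the maximum degree of G - X
    f = {}
    nodes = list(G.nodes())
    for x in range(len(nodes) + 1):
        best = None
        for X in combinations(nodes, x):
            H = G.copy()
            H.remove_nodes_from(X)
            deg = max((d for _, d in H.degree()), default=0)
            best = deg if best is None else min(best, deg)
        f[x] = best
    return f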
The above notion of an arbitrary solution table works well for α∈ (0, 1]. For α = 0, where the set X must contain at least β neighbors of each unchosen vertex, we instead replace "deletion" with "retention". Hence, the steps of the table describe, for each number x of vertices to include in X,
the maximum possible δ(G - X, X) that can be achieved with a subset of size x.
Let (a_0, I, c) be a triple with {a_0}∪ I ⊆ℕ and c : ℕ→ℕ. We call c a cost function. We say that (a_0, I, c) is increasing valid if there is an ordering a_1, …, a_|I| of I such that a_0 > a_1 > … > a_|I| > 100,[In fact, any large constant here is enough for our proof in this paper, we fix it to be 100 for convenience.] and c(a_0) > c(a_1) > … > c(a_|I|) = 0.
For a increasing valid triple (a_0, I, c), we say that a graph G=(V,E) is a (a_0, I, c)-degree retention graph above (p,l), where p and l are positive integers and l< 100, if all of the following conditions hold:
* G has maximum degree a_0, at most (a_0c(a_0))^10 vertices, and p vertices of degree less than l;
* for any a_i ∈{a_0}∪ I, there exists X ⊆ V of size p + c(a_i) such that the minimum degree of G - X in X is a_i;
*
for any a_i ∈{a_0}∪ I and any X ⊆ V of size strictly less than p + c(a_i), the minimum degree of G - X in X is at most a_{i+1}, where we define a_{|I|+1} = l.
We say that a graph class admits arbitrary degree retention tables if, for any increasing valid triple (a_0, I, c), there exist integers p and l such that one can construct in time polynomial in c(a_0)a_0 a graph G ∈ such that G is a (a_0, I, c)-degree retention graph above (p,l).
Note that condition 2 of Definition <ref> implies that p + c(a_0) < |V| ≤ (c(a_0)a_0)^10.
For any integer x∈ [0, |V|-1], we call f(x) = max{δ (G - X,X) : |X|= x} the degree retention function of G, where X is a subset of V.
Figure <ref> demonstrates the degree retention function of a (a_0, I, c)-degree retention graph.
Importantly, admit both types of tables.
The graph class admits arbitrary deletion tables and arbitrary retention tables.
The proof is split into two parts.
The graph class of disjoint stars admits arbitrary degree deletion tables.
Let (a_0, I, c) be a decreasing valid triple. Then we may order I = {a_1, …, a_n} so that a_0 > a_1 > … > a_n >1 and 0= c(a_0) < c(a_1) < … < c(a_n).
We construct a graph of disjoint stars G that has this solution table.
For each i ∈ [n], add to G a set of c(a_i) - c(a_{i-1}) disjoint stars of degree a_{i-1}. Then, add one star of degree a_n.
It is clear that G can be constructed in time polynomial in a_0c(a_n).
Also, G has maximum degree a_0 and has at most (c(a_n)+1)(a_0+1) vertices.
We argue the second and third parts of Definition <ref> simultaneously.
Now consider i ∈ [n]. The number of stars of degree strictly more than a_i is ∑_j = 1^i (c(a_j) - c(a_j-1)) = c(a_i).
If we delete the center of each of those stars, we achieve maximum degree a_i. On the other hand, if we make fewer than c(a_i) deletions, at least one of those stars remains intact, and the maximum degree is at least a_{i-1} > a_i.
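As a sanity check, the construction can be written down in a few lines of Python; the sketch below (with an arbitrary small decreasing valid triple chosen only for the example) builds the star multiset and verifies condition 2 by counting the stars whose centers must be deleted.

import networkx as nx

def deletion_star_degrees(a, c):
    # degrees of the stars in the construction for the triple
    # (a[0], {a[1], ..., a[n]}, c) with c[0] = 0:
    # c(a_i) - c(a_{i-1}) stars of degree a_{i-1}, plus one star of degree a_n
    n = len(a) - 1
    degs = []
    for i in range(1, n + 1):
        degs += [a[i - 1]] * (c[i] - c[i - 1])
    degs.append(a[n])
    return degs

a = [20, 15, 9, 4]           # a_0 > a_1 > a_2 > a_3 > 1
c = [0, 2, 5, 11]            # 0 = c(a_0) < c(a_1) < c(a_2) < c(a_3)
degs = deletion_star_degrees(a, c)
G = nx.disjoint_union_all([nx.star_graph(d) for d in degs])   # the graph itself
# condition 2: deleting the centers of the stars of degree > a_i (there are
# exactly c(a_i) of them) leaves maximum degree a_i
for i in range(1, len(a)):
    assert sum(1 for d in degs if d > a[i]) == c[i]
    assert max(d for d in degs if d <= a[i]) == a[i]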
The graph class of disjoint stars admits arbitrary degree retention tables.
Let (a_0, I, c) be an increasing valid triple. Then we may order I = {a_1, …, a_n} so that a_0 > a_1 > … > a_n > 100 and c(a_0) > c(a_1) > … > c(a_n) = 0. Let p= a_0 + ∑_i∈ [n] (c(a_i-1) - c(a_i))a_i and l= 1.
We construct a graph of disjoint stars G that is a (a_0, I, c)-degree retention graph above (p,l).
For each i ∈ [n], add to G a set of c(a_i-1) - c(a_i) disjoint stars of degree a_i. Then, add one star of degree a_0.
It is clear that G can be constructed in time polynomial in a_0c(a_0).
Also, G has maximum degree a_0 and has at most (c(a_0)+1)(a_0+1) vertices. Moreover, the number of vertices with degree at most l, which are the leaves of the stars, is a_0 + ∑_i∈ [n] (c(a_i-1) - c(a_i))a_i.
We argue the second and third parts of Definition <ref> simultaneously. We know the total number of leaves of the stars is p. Suppose a_{n+1} = l and c(a_{n+1}) = 0. Now consider an integer i ∈ [0,n]. The number of vertices of degree strictly less than a_i is p + ∑_{j = i}^{n} (c(a_j) - c(a_{j+1})) = p + c(a_i).
If we add all leaves and the center of each star of degree strictly less than a_i to X, the minimum degree of G - X in X is a_i, since the remaining vertices have degree at least a_i and all their neighbors are in X. On the other hand, assume we add fewer than p + c(a_i) vertices to X. Since we have exactly p + c(a_i) vertices with degree at most a_{i+1}, there is a vertex v with degree at most a_{i+1} that is in V(G) ∖ X. Thus, the minimum degree of G - X in X is at most a_{i+1}.
We can now state one of our fundamental results.
The problem is W[1]-hard in the following cases:
* α = 0, β is in the input,
and the parameter is the (∪)-, where is any graph class that admits arbitrary degree retention tables;
* α is any fixed constant in the interval (0,1), β is any fixed constant in ℤ, and the parameter is the -;
* α =1, β is in the input,
and the parameter is the (∪)-, where is any graph class that admits arbitrary degree deletion tables.
The proof of Theorem <ref> is quite long, and we defer it to the Appendix. The proofs of the three cases all use the same construction and the same set of claims, but most claims require a separate argument for each case. The next section illustrates the main techniques on the case α = 1.
Together, Proposition <ref> and Theorem <ref>, along with the relationship demonstrated in Figure <ref>, imply the following corollary.
Moreover, the - of the output graph of the W[1]-hardness reduction in Theorem <ref> equals the iterated type partition of the graph.
The problem parameterized by either clique-width, modular-width, -, iterated type partition, and -, is W[1]-hard in the following cases:
* α = 0, β is in the input;
* α is any fixed constant in the interval (0,1), β is any fixed constant in ℤ;
* α =1, β is in the input.
In particular, problem, problem, and problem are W[1]-hard in all these parameters.
For -, this follows from Proposition <ref> and Theorem <ref>.
The hardness on - then follows from the fact that stars are cographs, and the hardness on modular-width follows from the fact that - is at least as large as the modular-width because cographs have bounded modular-width (Theorem <ref>). The hardness on clique-width follows from the fact that cw ≤ mw +2.
Next, let us consider the parameter iterated type partition, whose definition refers to Definition 1 of <cit.>.
Roughly speaking, the iterated type partition of a graph is obtained by iteratively constructing type graphs until the type graph no longer changes, where the type graph of a graph is the quotient graph of a modular partition in which every module induces a complete graph or an edgeless graph. The base graph is the last graph of the type graph sequence. Obviously, the main reduction still works if we replace all S_i and U_ij in H with their corresponding star graphs. Then, the type graph sequence of H is H^0, H^1, H^2, H^3, where H^0 = H; H^1 is the graph obtained by replacing every star in S_i and U_ij with a K_2 and every other edgeless factor with a single vertex; H^2 is the graph obtained by replacing every K_2 in S_i and U_ij of H^1 with a vertex; and the base graph H^3 is the graph obtained by replacing every S_i and U_ij of H^2 with a vertex. Clearly, the number of vertices of H^3, which is the iterated type partition of H, equals the - of H and is O(k^2).
The next proposition is not related to the above hardness results but allows us to fill in some of the gaps that the above leaves in our results table. In Section <ref>, we will also show that
for α = 1 and β in the input (the case), the problem is FPT in -. The design of the FPT algorithm for parameterized by - in Section <ref> is of particular interest: the solution table for that case is non-convex and may contain a large number of blocks, so at first glance it seems that the table cannot be succinct. However, this special non-convex function can be reduced to a convex function using ceilings, which yields the FPT algorithm.
The problem is FPT in the following cases:
* α∈ [0,1], β is in the input, and the parameter is the neighborhood diversity;
* α∈{0,1}, β is a constant, and the parameter is the clique-width.
The first case requires some work, whereas the second is a simple application of Courcelle's theorem <cit.>, as the problem admits a constant-length MSO_1 formula.
We first show that is FPT in parameter nd when α∈ [0 , 1] and β is in the input.
Recall that the minimum neighborhood partition (neighborhood diversity) can be obtained in polynomial time.
To obtain our result, we need the following FPT algorithm for the problem parameterized by the number of variables.
The problem can be decided in O(n^2.5n+o(n)· I) time, where I is the size of the input and n is the number of variables.
problem is FPT parameterized by neighborhood diversity even if β is a part of the input.
Suppose (G,q,β) is an input of
the problem, where G=(V,E). Compute the minimum neighborhood partition = { M_1,…, M_r, M_r+1, …, M_k} of V in polynomial time, where each M_i for i∈ [r] is a clique, and each M_i for i∈ [r+1,k] is an independent set. For a module M_i ∈, assume that N(M_i) consists of all modules adjacent to M_i, and that N[M_i] = N(M_i) ∪{M_i}. Suppose the vertex number of the intersection of M_i and X is x_i. In addition, for any integer a and real number b, we have that a ≥ b if and only if a ≥⌈ b ⌉. Then, it is not hard to verify that (G,q,β) is a yes instance of problem if and only if the following instance is yes for .
(1) ∑_1≤ i ≤ k x_i≤ q
(2) ∑_M_j ∈ N[M_i] x_j ≥⌈α ( ∑_M_j ∈ N[M_i] |M_j| - 1)⌉ +β ∀ i ∈ [r]
(3) ∑_M_j ∈ N(M_i) x_j ≥⌈α∑_M_j ∈ N(M_i) |M_j| ⌉ +β ∀ i ∈ [r+1, k]
(4) x_i ∈ℕ ∀ i ∈ [k]
(5) 0 ≤ x_i ≤ |M_i| ∀ i ∈ [k]
Clearly, the above instance has k = nd(G) variables. Thus, the problem is in FPT based on Theorem <ref>.
Let us now focus on showing that for α∈{0, 1} and fixed β, the problem is FPT in clique-width.
This is easy using Courcelle's theorem <cit.>.
For each α∈{0,1}, we can write a constant-length MSO_1 formula with free variable X, denoted by (α, β)-LDD(X), to verify that X is a (α, β)-linear degree dominating set of G as follows: (1) (0, β)-LDD(X) =
∀_v∈ V∖ X ∃_v_1 ∈ X … ∃_v_β∈ X ( ⋀_{1 ≤ i < j ≤β} v_i ≠ v_j ∧ ⋀_{1 ≤ i ≤β} adj(v,v_i) ), and (2) (1, β)-LDD(X) = ¬( ∃_v∈ V∖ X ∃_v_0 ∈ V∖ X … ∃_v_|β|∈ V∖ X ( ⋀_{0 ≤ i < j ≤ |β|} v_i ≠ v_j ∧ ⋀_{0 ≤ i ≤ |β|} adj(v,v_i) ) ). According to the optimization version of Courcelle's theorem <cit.>, there exists an FPT algorithm parameterized by clique-width to minimize |X| subject to (α, β)-LDD(X) for each α.
§ -LINEAR DEGREE DOMINATION (BDD)
We sketch the proof of the W[1]-hardness of (1, β)-Linear Degree Domination problem, which is enough to demonstrate the main idea of the reduction technique. Recall that we assume that α = 1 and β≤ 0, which is the problem. That is, we must delete at most q vertices such that the resulting subgraph has maximum degree at most |β|.
Furthermore, in the (1, β)-Linear Degree Domination problem, we also call X, the (1, β)-linear degree dominating set of G, the deletion part of G.
We provide a reduction from the problem, which we define as follows. A symmetric multicolored graph G = (V^1∪ V^2 …∪ V^k , E) is a connected
graph such that, for all distinct i,j ∈ [k],
* V^i = {v_1^i, …, v_n^i}, where n≥ k;
* all the vertices of V^i are colored by color i;
* if v_r^i v_s^j ∈ E(G), then v_s^j v_r^i ∈ E(G) as well.
Then, for the problem, the input is a symmetric multicolored graph G and an integer k, and the objective is to decide whether G contains a k-clique with vertices of all k colors. We also call v_r^i v_s^j and v_s^i v_r^j symmetry edges.
The reduction in <cit.>, which proves the W[1]-hardness of the multicolored clique problem, actually produces a symmetric multicolored graph. Hence, is W[1]-hard. We now sketch the following.
Case 3 for the W[1]-hardness results in Theorem <ref> is correct.
Let (G,k) be an instance of , where G = (V^1∪ V^2 ∪…∪ V^k , E). Without loss of generality, suppose k≥ 100, otherwise, the problem can be solved in polynomial time.
We will construct a corresponding instance (H, β, q) of (1,β)-Linear Degree Domination, where H is a graph whose - will be bounded by O(k^2), β= -(nk)^10000, and q is the maximum allowed size of X, the desired deletion part of H (to be specified later).
Before proceeding, we will make use of a 2-sumfree-set, which is a set of positive integers in which every couple of elements has a distinct sum. That is, I is a 2-sumfree set if, for any (a, b), (a', b') ∈ I × I, a + b = a' + b' if and only if {a, b} = {a', b'} (note that a = b is possible).
It is known that one can construct in time O(n^3) a 2-sumfree-set I of cardinality n in which the maximum value is n^4, which can be achieved with a greedy procedure (this is because (n+1)^4-n^4 > n^3 and a_i+a_j-a_r has at most n^3 different values).
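The greedy procedure can be sketched as follows in Python (this produces the Mian-Chowla-type sequence; the O(n^3) running time and the n^4 bound on the maximum value are those claimed above and are not re-derived here).

def greedy_2sumfree(n):
    # greedily build n positive integers in which every pair (with
    # repetition allowed) has a distinct sum
    I, sums = [], set()
    x = 1
    while len(I) < n:
        new_sums = {x + a for a in I} | {2 * x}
        if not (new_sums & sums):
            sums |= new_sums
            I.append(x)
        x += 1
    return I

print(greedy_2sumfree(6))   # -> [1, 2, 4, 8, 13, 21]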
We thus assume that we have built a 2-sumfree set I = {a_1, …, a_n} for H, where n^4 ≥ a_1 > … > a_n ≥ 1. Without loss of generality, we may multiply each a_i ∈ I by an integer r, where here we choose r = 2(k-1)^2k^3.
Then, we have that n^4r ≥ a_1 > … > a_n = r for the updated I.
Moreover, for any distinct pair (a,b),(a',b')∈ I × I, we have that the absolute value of a+b-a'-b' is at least r.
For each color class V^i = {v_1^i, …, v_n^i} of G, we provide a bijection f_i from V^i to I, such that f_i(v^i_s) = a_s for every s∈ [n].
Clearly, we can use f_i^-1(a_s) to denote the unique vertex v^i_s of V^i associated with a_s ∈ I. We have that, for all s, t∈ [n], each pair of a_s, a_t ∈ I has a unique sum. For distinct i, j and any u ∈ V^i, w ∈ V^j such that uw ∈ E(G), if f_i(u) + f_j(w) = a_s + a_t, then edge uw is either v^i_s v^j_t or v^i_t v^j_s. Moreover, for any distinct color classes V^i, V^j, edges v^i_s v^j_t and v^i_t v^j_s, together, are both in E(G) or both not in E(G). Hence, by looking at a sum in I, we will be able to tell whether it is corresponding to a pair of symmetry edges between V^i and V^j, or not.
Next, we define s = n^10, q = ks + k(k-1)s/2, a_0 = q + 1, and a_{n+1} = a_n - 1.
We construct H in Figure <ref> as follows. First, for each color class V^i of G, add three factors R_i, S_i, T_i to H, where:
* R_i is an edgeless graph of size |β| -s;
* S_i is a (a_0, I ∪{a_{n+1}}, c)-degree deletion graph, where we put the costs c(a_j) = s - a_j/2 for a_j ∈ I and c(a_{n+1}) = a_0.
* T_i is an edgeless graph of size s.
We then make S_i adjacent with R_i and T_i.
Secondly, for each pair of color classes V^i, V^j with i < j, we add another two factors U_ij, R_ij, where:
* R_ij is an edgeless graph of size |β| - 2s;
* U_ij is built as follows.
Suppose integer set I_ij consists of all a + b such that a, b ∈ I and symmetry edges f_i^-1(b) f_j^-1(a) , f_i^-1(a) f_j^-1(b) ∈ E(G). Let ℓ_ij = min(I_ij).
Then U_ij is a (a_0, I_ij∪{ℓ_ij - 1}, c_ij)-degree deletion graph, where we put the cost c_ij(a + b) = s - (a+b)/(2(k-1)), and we put c_ij(ℓ_ij - 1) = a_0.
We then make U_ij adjacent with R_ij, and adjacent with T_i and T_j. To avoid cumbersome notation, we define U_ij = U_ji, R_ij = R_ji, c_ij = c_ji, and I_ij = I_ji.
This completes the construction of H.
It is easy to see that H can be constructed in polynomial time.
All that remains is to argue that the objective instance is equivalent to the original one.
The following lemma gives the forward direction, whose detailed proof can be found in case 2 of Lemma <ref> in the Appendix. We only provide a sketch here.
Suppose that G contains a multicolored clique C of size k. Then there is X ⊆ V(H) of size at most q = ks + k(k-1)s/2 such that Δ(H - X) ≤ |β|.
For i ∈ [k], let f^-1_i(_i) be the vertex of V^i that belongs to the multicolored clique C, where _i ∈ I is the number associated with the vertex. For any i≠ j, we know that vertices f^-1_i(_i) and f^-1_j(_j) are in C, which means that f^-1_i(_i) f^-1_j(_j) ∈ E(G).
This implies that _i + _j is in the I_ij list that was used to construct U_ij.
We then show how to construct the vertex set X for H.
The intersection of each R_i and X is empty.
For each T_i, add _i vertices to X.
For each S_i, add c(_i) = s - _i/2 vertices to X. This can be done so that S_i - X has maximum degree _i according to Definition <ref>.
The intersection of each R_ij and X is empty.
For each U_ij, we can add
c_ij(_i + _j) = s - (_i + _j)/(2(k-1))
vertices to X so that U_ij- X has maximum degree _i + _j according to Definition <ref>. It is not hard to verify that X with exactly q vertices satisfies that Δ(H - X) ≤ |β|.
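For completeness, the budget count behind the last sentence can be written out, recalling that _i ∈ I is the value associated with the clique vertex of colour i:

|X| = ∑_{i=1}^{k} _i + ∑_{i=1}^{k} (s - _i/2) + ∑_{1 ≤ i < j ≤ k} (s - (_i + _j)/(2(k-1)))
    = ∑_{i=1}^{k} _i + ks - (1/2)∑_{i=1}^{k} _i + k(k-1)s/2 - ((k-1)/(2(k-1)))∑_{i=1}^{k} _i
    = ks + k(k-1)s/2 = q,

using that each _i appears in exactly k-1 of the sums _i + _j.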
The converse direction is much more difficult.
Suppose that there is X ⊆ V(H) with |X| ≤ q such that H - X has maximum degree at most |β|.
Then G contains a multicolored clique of size k.
Sketch: Let X ⊆ V(H) be of size at most q such that H - X has maximum degree |β| or less.
To ease notation slightly, for a factor M of H, we will write X(M) := X ∩ V(M) and χ(M) := |X(M)|.
The proof is divided into a series of claims.
For each i ∈ [k], we may assume that χ(R_i) = 0. Moreover for each distinct i, j ∈ [k], we may assume that χ(R_ij) = 0.
The detailed proof of this claim refers to case 2 of Claims <ref>, <ref> in the Appendix.
The rough idea is that a vertex of X(R_i) can always be replaced by a vertex of T_i ∖ X if the latter is non-empty. If V(T_i) ∖ X is empty, then after the unavoidable deletion of at least c(a_1) vertices from S_i, no deletion in R_i is needed, since each remaining vertex of S_i in H - X has degree at most |β| - s + a_1 < |β|. The argument for χ(R_ij) = 0 is analogous.
For any i ∈ [k], we may assume Δ(S_i - X) ∈{a_1, …, a_n}, and that χ(S_i) = c(Δ(S_i - X)).
The detailed proof refers to case 2 of Claim <ref> in the Appendix.
The rough idea is that Case 2 of Definition <ref> states that we can delete c(a_j) vertices from S_i to make Δ(S_i - X) = a_j, where 0≤ j ≤ n+1. In fact, this is the smallest maximum degree we can achieve in S_i by deleting between c(a_j) and c(a_j+1) - 1 vertices, because of the stepwise behavior of arbitrary deletion tables. Moreover, we already know that χ(R_i) = 0 based on Claim <ref>. So, if Δ(S_i - X) = a_0 there is a vertex in S_i - X with degree |β| - s + a_0 > |β|, and if Δ(S_i - X) = a_n - 1 or less then c(a_n - 1) = a_0 > q vertices have to be deleted from S_i.
For any distinct i, j ∈ [k], we may assume that Δ(U_ij - X) ∈ I_ij, and that χ(U_ij) = c_ij(Δ(U_ij - X)).
The idea for proving this claim is similar to that of Claim <ref>.
Our next step is to argue that the degree chosen by U_ij must be the sum of the degrees chosen by S_i and S_j.
More specifically, for i ∈ [k], we say that S_i chose a_j ∈ I if Δ(S_i - X) = a_j and χ(S_i) = c(a_j).
Likewise, for distinct i, j ∈ [k], we say that U_ij chose a, b ∈ I if Δ(U_ij - X) = a + b and χ(U_ij) = c_ij(a + b).
Note that by Claim <ref>, each S_i chooses one a_j and by Claim <ref>, each U_ij chooses one pair a, b such that symmetry edges f_i^-1(b) f_j^-1(a) , f_i^-1(a) f_j^-1(b) ∈ E(G).
The point to make is that if S_i and S_j
chose a and b, respectively, then U_ij must have chosen a, b.
For each i ≠ j,
if S_i chose a ∈ I and S_j chose b ∈ I, then U_ij chose a, b.
The detailed proof of this claim refers to Claim <ref> in Appendix. We sketch it as follows.
For each i ∈ [k], we will denote by _i the element of I that S_i chose.
We divide the U_ij's into three groups:
U^< = { U_ij : Δ(U_ij - X) < _i + _j }
U^= = { U_ij : Δ(U_ij - X) = _i + _j }
U^> = { U_ij : Δ(U_ij - X) > _i + _j }
To prove the claim, it suffices to show that U^< and U^> are empty (this is because U_ij∈ U^= is only possible if U_ij chose _i, _j, since all the sum pairs are distinct).
The rough idea is as follows.
If each U_ij chose the correct _i, _j, then each of them will incur a deletion cost of c_ij(_i + _j) = s - (_i + _j)/(2(k - 1)) and end up cancelling the deletion costs of the S_i and T_i factors.
If U^< is non-empty, it incurs extra deletion cost with respect to c_ij(_i + _j) with no real benefit. The complicated case is when U^> is non-empty. In this case, U_ij - X has higher degree than if it had chosen _i, _j and incurs less deletions than c_ij(_i + _j). However, this needs to be compensated with extra deletions in T_i and T_j. By using a charging argument, we can show that the sum of extra deletions required for all the U^> members outweighs the deletions saved in the U_ij's of U^>.
We can now construct a multicolored clique. To this end, define C = {f^-1_i(_i) : i ∈ [k] }.
We claim that C is a clique.
By Claim <ref>, each S_i chooses some _i and thus |C| = k.
Now let f^-1_i(_i), f^-1_j(_j) be two vertices of C, where i<j.
Then _i, _j were chosen by S_i and S_j, respectively, and by Claim <ref>
we know that U_ij chose _i + _j.
By the construction of the U_ij solution table, this is only possible if symmetry edges f_i^-1(_i) f_j^-1(_j) , f_i^-1(_j) f_j^-1(_i) ∈ E(G). Therefore, f_i^-1(_i) f_j^-1(_j) ∈ E(G) and C is a clique.
§ FPT ALGORITHMS FOR SUCCINCT SOLUTION TABLES
In this section, we study the problem (i.e. α = 1, β < 0) on graph classes that admit succinct solution tables.
This notion is formalized below.
Let G=(V,E) be a graph and x be an integer in [0,|V|], suppose that set 𝕊_x ⊆ 2^V consists of all subsets of V with size x. A vertex set X∈𝕊_x is called an x-deletion set of G if Δ(G-X) = min{Δ(G-Y): Y∈𝕊_x}. An x-deletion of G is the process of deleting all the vertices of an x-deletion set from G.
Recall that, for any integer x∈ [0, |V|], we call f(x) = min{Δ (G - X) : |X| = x } the degree deletion function of G.
Thus, the degree deletion function of G is the maximum degree of G after an x-deletion.
A piecewise linear function g: ℝ→ℝ is a continuous function defined on a sequence of intervals, such that the function is linearly restricted to each of the intervals (each such linear function is called a sub-function of g).
In addition, a constant piecewise linear function is a piecewise linear function that consists of a constant number of linear sub-functions.
Let be a polynomial-time recognizable graph class.
For G ∈, suppose that f^G(x) is the degree deletion function of G = (V,E), where x∈ [0, |V|].
We say that admits a succinct solution table for if, for every G∈, there exists a function g^G : ℝ→ℝ that satisfies at least one of the following conditions:
* g^G is a constant piecewise linear function such that f^G(x) = g^G(x) for every integer x∈ [0, |V(G)|],
* g^G is a piecewise convex linear function such that f^G(x) = ⌈ g^G(x) ⌉ for every integer x∈ [0, |V(G)|].
Moreover, g^G can be described and constructed in polynomial time with respect to |V(G)|.
For convenience, we say a graph class admits a succinct solution table of type t for if we refer to the condition t, where t ∈{1,2}. The first type implies that the solution table can be divided into a small number of blocks, where the information of each block can be encoded into a small number of bits.
In the second type, the solution table could be convex, but could also be non-convex and may
contain a larger number of blocks, but we can reduce this special non-convex function to a convex function using ceilings (this occurs with -).
As we show, these definitions capture the intuition of solution tables that can be merged in FPT time. The proof of Theorem <ref> relies on the results of <cit.>, which show that Mixed Integer Programming with Simple Piecewise Linear Transformations (MIP/SPLiT) problem is FPT parameterized by the number of variables.
Let us first introduce the problem (MILP). In the MILP problem, the input is a matrix A ∈ℤ^m × (n+k) with m rows and n+k columns and a vector b∈ℚ^m, and the task is to decide whether there exists a vector x =(x_1,…,x_n+k) such that Ax≤b, where variables x_1, …, x_n are integers and variables x_n+1,…, x_n+k are rational numbers.
If k=0 then the problem is called problem.
In addition, if we replace x_i with some integer function on x_i, and x_j with some real function on x_j, where 1≤ i≤ n and n+1≤ j ≤ n+k, then we call the problem problem ().
Recall that - is a generalization of the parameter neighborhood diversity, which was first proposed by Lampis <cit.>. Lampis <cit.> also introduced a nice technique, which uses MILP together with the fact that each set of the neighborhood partition can be colored either with one color or with as many colors as its number of vertices, to design an FPT algorithm for the Chromatic Number problem parameterized by neighborhood diversity. Gajarský et al. <cit.> refined this technique and combined it with dynamic programming to obtain FPT algorithms for several important graph problems parameterized by modular-width. Moreover, the conclusion section of <cit.> asks whether this technique can be further generalized to eventually obtain meta-theorem-like results, and suggests using `convex' solution tables for modules to this end. In the following, we demonstrate that the problem is FPT parameterized by - if admits a `succinct' solution table for . The rough idea of our algorithm is to compute a succinct solution table for each module of a -modular partition of the graph and to combine these tables using . Intuitively, a `succinct' solution table should be a `convex' one. However, our result goes further than this: condition 2 of our succinct solution table in Definition <ref> allows some non-convex functions.
The FPT algorithms in this section rely on the known FPT algorithm for Mixed Integer Programming with Convex Constraints <cit.>. So we define piecewise linear convex functions in the same way. A piecewise linear function g : ℝ→ℝ is a continuous function, which is defined on a sequence of intervals, such that the function is linearly restricted to each of the intervals. In addition, the linear function on each of the intervals is called a sub-function of g. Function g is called convex if, for any real numbers x_1,x_2, there exists a real number r with 0≤ r ≤ 1 such that g(rx_1 + (1-r)x_2) ≤ rg(x_1) + (1-r)g(x_2). A function g is called concave if the function -g is convex.
Naturally, g is a piecewise linear convex (concave) function if g is not only a piecewise linear function but also a convex (concave) function.
Moreover, we require that the endpoints of all sub-functions of g are rational points and that the slopes of all sub-functions of g are rational numbers. For more details on piecewise linear convex functions, we refer to <cit.>.
Let us recall the definition of the Mixed Integer Programming with Simple Piecewise Linear Transformations (MIP/SPLiT) problem <cit.>. The input is a set of integers {c_1,…,c_m}, a set of piecewise linear concave functions {f_i,j: i∈ [n+k], j∈ [m]}, and a set of piecewise linear convex functions {g_i,j: i∈ [n+k], j∈ [m]}. The objective is to decide whether there exist x_1,…,x_n+k such that
∑_i ∈ [n+k] g_i,j (x_i) ≤∑_i ∈ [n+k] f_i,j (x_i) + c_j ∀ j ∈ [m]
x_i ∈ℕ ∀ i ∈ [n]
x_i ∈ℝ^+ for n+1 ≤ i ≤ n+k
In addition, if functions {f_i,j: i∈ [n+k], j∈ [m]} or functions {g_i,j: i∈ [n+k], j∈ [m]} are non-convex functions, then we call the problem MIP with non-convex constraints.
The MIP/SPLiT problem can be decided in n^2.5n+o(n)(I+P)^O(1) time, where I is the size of the input, P is the pieces number of the function that has the most number of pieces, and n is the number of integer variables.
We will use this result as a black box in our proof. More details for MIP/SPLiT problem can be found in <cit.>. If all the variables of the instances are integers, then it is a special case of MIP/SPLiT problem where k=0.
Let be a graph class that admits a succinct solution table for . Assume a minimum -modular partition is given. Then, is FPT parameterized by -.
Let (G,q,-β) be an input of the problem, where G=(V,E). Let ={M_1,…,M_k} be a -modular partition of V, where k equals the - of G. Let i∈ [k]. For each module M_i, suppose N(M_i)⊆ includes all the modules that are adjacent to M_i. Let function f^G[M_i](x) be the maximum degree of G[M_i] after an x-deletion in G[M_i], for integer x∈ [0, |M_i|].
(G,q,-β) is a yes instance of problem if and only if the following instance is yes for .
(1) ∑_1≤ i ≤ k x_i≤ q
(2) ∑_M_j ∈ N(M_i) (|M_j|-x_j) + f^G[M_i](x_i) ≤ -β ∀ i ∈ [k]
(3) x_i ∈ℕ ∀ i ∈ [k]
(4) 0 ≤ x_i ≤ |M_i| ∀ i ∈ [k]
For one direction, suppose that there is X ⊆ V with |X|≤ q such that G-X has maximum degree -β. Assume X_i = M_i ∩ X. We claim that x_i = |X_i| for all i∈ [k] is a feasible solution for the instance. According to the definition of |X_i|, the solutions fulfill constraints of the forms (3) and (4), and constraint (1). Consider each module M_i. Suppose v_i is a vertex with the maximum degree Δ(G[M_i∖ X_i]) in G[M_i∖ X_i]. Then, among all vertices in M_i∖ X_i, v_i has the most adjacent vertices in G-X, which is
∑_M_j ∈ N(M_i) (|M_j|-|X_j|) + Δ(G[M_i∖ X_i]) ≤ -β
since the degree of any vertex of G-X is at most -β. Additionally, we have f^G[M_i](|X_i|) ≤Δ(G[M_i∖ X_i]) according to the definition of f^G[M_i]. Therefore, the solutions fulfill the forms of constraint (2).
For the other direction, assume that x_i=x'_i for all i ∈ [k] is a feasible solution for the instance of . According to the forms of constraints (3) and (4), x'_i is a natural number of size at most |M_i|. So we may assume that the solution X is a subset of V such that, for each i, |X∩ M_i|=x'_i and X∩ M_i is an x'_i-deletion set of G[M_i]. Based on the definition of f^G[M_i], we have Δ(G[M_i∖ X]) = f^G[M_i](x'_i) for all i ∈ [k]. This means that, for every i ∈ [k], any vertex in M_i∖ X has at most f^G[M_i](x'_i) neighbors in G[M_i∖ X], and thus has at most
∑_M_j ∈ N(M_i) (|M_j|-x'_j) + f^G[M_i](x'_i)
neighbors in graph G-X. Therefore, we have Δ(G-X)≤ -β based on the forms of constraints (2). In addition, the cardinality of X is at most q according to constraint (1). Hence, (G,q,-β) is a yes instance for problem.
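The reduction in this claim can be made concrete with the following exponential Python sketch, which only illustrates the constraint system (it enumerates all per-module deletion counts instead of calling an ILP solver); deletion_table is assumed to return the table x ↦ f^G[M](x), e.g. the brute-force function sketched earlier, and G is a networkx graph.

from itertools import product

def bdd_via_modules(G, partition, q, beta_abs, deletion_table):
    # beta_abs = -beta; partition is a C-modular partition of V(G)
    modules = [set(M) for M in partition]
    tables = [deletion_table(G.subgraph(M).copy()) for M in modules]
    k = len(modules)
    # two distinct modules are adjacent iff any (hence every) pair of their
    # vertices is adjacent
    neigh = {i: [j for j in range(k) if j != i and any(
                G.has_edge(u, v) for u in modules[i] for v in modules[j])]
             for i in range(k)}
    for xs in product(*(range(len(M) + 1) for M in modules)):
        if sum(xs) > q:
            continue
        if all(sum(len(modules[j]) - xs[j] for j in neigh[i]) + tables[i][xs[i]]
               <= beta_abs for i in range(k)):
            return True
    return False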
Since admits a succinct solution table for ,
for every i ∈ [k], there exists a function g^G[M_i] : ℝ→ℝ that fulfills one of the following conditions:
* g^G[M_i] is a constant piecewise linear function such that f^G[M_i](x) = g^G[M_i](x) for every integer x∈ [0, |M_i|],
* g^G[M_i] is a piecewise convex linear function such that f^G[M_i](x) = ⌈ g^G[M_i](x) ⌉ for every integer x∈ [0, |M_i|],
moreover, g^G[M_i] can be constructed in polynomial time according to Definition.
Let t∈{1,2}. Suppose _1, _2 is a partition of , where M_i∈_t has the property that f^G[M_i] fulfills the condition t, for t ∈{1,2}.
The instance for in Claim <ref> is a yes instance if and only if the following instance is yes for .
(1) ∑_1≤ i ≤ k x_i≤ q
(2) g^G[M_i](x_i) ≤ -β -∑_M_j ∈ N(M_i) (|M_j|-x_j) ∀ i ∈ [k]
(3) x_i ∈ℕ ∀ i ∈ [k]
(4) 0 ≤ x_i ≤ |M_i| ∀ i ∈ [k]
For one direction, assume that x_i=x'_i for all i ∈ [k] is a feasible solution for the instance of in Claim <ref>. Obviously, x'_1,…,x'_k are feasible solutions for the forms of constraints (3) and (4), and constraint (1) in Claim <ref>. If M_i∈_1, then f^G[M_i](x'_i) = g^G[M_i](x'_i). Clearly, x'_1,…,x'_k are feasible solutions for the forms of constraints (2) with M_i ∈_1 in Claim <ref>. If M_i∈_2, then f^G[M_i](x'_i) = ⌈ g^G[M_i](x'_i)⌉. By rearranging constraint (2) of Claim <ref>, we have that
⌈ g^G[M_i](x'_i) ⌉ = f^G[M_i](x'_i) ≤ -β -∑_M_j ∈ N(M_i) (|M_j|-x'_j).
Moreover, for any real number r, we have r≤⌈ r ⌉. Hence, x'_1,…,x'_k are feasible solutions for the forms of constraints (2) with M_i∈_3 in Claim <ref>.
For the other direction, assume that x_i=x'_i for all i ∈ [k] is a feasible solution for the instance of in Claim <ref>. Obviously, x'_1,…,x'_k are feasible solutions for the forms of constraints (3) and (4), and constraint (1) in Claim <ref>. If M_i∈_1, then f^G[M_i](x'_i) = g^G[M_i](x'_i). Clearly, x'_1,…,x'_k are feasible solutions for the forms of constraints (2) with M_i ∈_1 in Claim <ref>. For each M_i ∈_2, by constraint (2), we get
g^G[M_i](x'_i) ≤ -β -∑_M_j ∈ N(M_i) (|M_j|-x'_j)
for each i ∈ [k],
where the left part and the right part of the inequality
are a real number and an integer, respectively. Moreover, for any real number r and any integer z, if r≤ z then ⌈ r ⌉≤ z. Hence, for every M_i∈_2, we have
f^G[M_i](x'_i) = ⌈ g^G[M_i](x'_i)⌉≤ -β -∑_M_j ∈ N(M_i) (|M_j|-x'_j).
As a result, x'_1,…,x'_k are feasible solutions for the forms of constraints (2) with M_i∈_2 in Claim <ref>.
If a graph class admits a succinct solution table for , it is not hard to verify that the instance for in Claim <ref> can be constructed in polynomial time when given an instance (G,q,-β).
Now, let us consider the instance of in Claim <ref>, where g^G[M_i] is either a constant piecewise linear function or a piecewise convex linear function.
Consider g^G[M_i] in the instance of in Claim <ref>, where M_i∈_1. In this case, g^G[M_i] is a constant piecewise linear function. Then there is a constant h_i such that g^G[M_i] consists of h_i linear sub-functions in the domain [0,|M_i|].
Let us denote C = max_M_i ∈_1 h_i.
Additionally, we call the domain for a linear sub-function of g^G[M_i] a linear sub-domain of g^G[M_i]. More specifically, for each M_i ∈_1, assume the linear sub-domains of g^G[M_i] are intervals [m_i^0,m_i^1], [m_i^1,m_i^2] … [m_i^h_i-1,m_i^h_i], where m_i^0 = 0 and m_i^h_i = |M_i|.
For the instance of in Claim <ref>, if we restrict each integer variable x_i, where M_i∈_1, to a linear sub-domain of g^G[M_i], then the instance is divided into Π_M_i∈_1 h_i ≤ C^k new instances for MIP/SPLiT; moreover, the original instance is feasible if and only if at least one of the new instances, given below, is feasible.
(1) ∑_1≤ i ≤ k x_i≤ q
(2) g^G[M_i](x_i) ≤ -β -∑_M_j ∈ N(M_i) (|M_j|-x_j) ∀ i ∈ [k]
(3) x_i ∈ℕ ∀ i ∈ [k]
(4) 0 ≤ x_i ≤ |M_i| ∀ M_i ∈_2
(5) m_i^s-1≤ x_i ≤ m_i^s ∀ M_i ∈_1, and some s ∈ [h_i]
In addition, there is an O^*(k^2.5k+o(k)) time algorithm that decides the instance of MIP/SPLiT based on Theorem <ref>. Overall, in Claim <ref> can be solved in O^*(k^2.5k+o(k)· C^k) time, and thus is FPT parameterized by -.
Application: For each t ∈{1,2}, we provide a graph class that admits a succinct solution table of type t for . Let graph class BDT include all graphs with bounded degree and treewidth. We first demonstrate that
BDT admits a succinct solution table of type 1, thus problem is FPT parameterized by BDT-. Then, we prove that the admits a succinct solution table of type 2, hence problem is FPT parameterized by -.
Clearly, linear forest, formed from the disjoint union of path graphs, and binary forest, formed from the disjoint union of binary trees, have unbounded modular-width, but have bounded treewidth and bounded degree. Thus, linear forest and binary forest are subsets of BDT, moreover, BDT- is incomparable with modular-width. In addition, it is not hard to verify that BDT- is bounded by a function of vertex cover number since there are no twin vertices in the quotient graph.
Furthermore, - is a parameter with size at most neighborhood diversity and at least modular-width.
BDT admits a succinct solution table for .
Therefore, the problem is FPT parameterized by BDT-.
First, there is a linear-time algorithm to decide whether a graph is in BDT <cit.>. In addition, we have demonstrated that (1,β)-Linear Degree Domination with constant β (or ) admits a constant-length MSO_1 formula, and thus a constant-length MSO_2 formula. Therefore, on BDT graphs the problem can be solved in polynomial time for any β according to Courcelle's theorem <cit.>.
Let G=(V,E) be a BDT graph with maximum degree d. Assume the maximum degree of G after an x-deletion is f^G(x), which is clearly a non-increasing function.
Assume S= { (x,f^G(x)): x∈ [0,|V|]}. Suppose Y = { y : (x,y) ∈ S} = {d_1,…,d_t}, where d_1 > d_2 … >d_t. Then, we have Y ⊆{0,1,…,d} and t ≤ d+1. Clearly, we can use one horizontal segment (point) to cover all points of (x,y) ∈ S with y = d_i for each i∈ [t], where the endpoints of the horizontal segment (point) are in S. Then, connect all horizontal segments (points) from left to right using another t-1 segments, all of which make up the piecewise linear function g^G: ℝ→ℝ in the domain [0,|V|] that contains at most 2t-1 segments. To make g^G well-defined, assume g^G(x) = d_1 for x≤ 0, and g^G(x) = d_t for x≥ |V|.
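The horizontal pieces used in this proof are just the maximal runs of equal values of the table, so the type-1 encoding can be illustrated as a run-length compression; the Python snippet below (names ours) is only meant to show that at most d + 1 pieces are needed.

def compress_table(f_values):
    # encode a non-increasing integer table with values in {0, ..., d}
    # by the first position of each distinct value (at most d + 1 pieces)
    pieces = []
    for x, y in enumerate(f_values):
        if not pieces or pieces[-1][1] != y:
            pieces.append((x, y))
    return pieces

def lookup(pieces, x):
    # evaluate the compressed table at x (x assumed inside the domain)
    val = pieces[0][1]
    for start, y in pieces:
        if start <= x:
            val = y
    return val

table = [3, 3, 2, 2, 2, 1, 0, 0]       # e.g. a deletion table with d = 3
pieces = compress_table(table)          # [(0, 3), (2, 2), (5, 1), (6, 0)]
assert all(lookup(pieces, x) == table[x] for x in range(len(table)))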
We now move to an example of a type 2 succinct solution table on cluster graphs.
Let H ∈. For any complete graph K in H and any X ⊆ V(H), suppose X' is obtained from X by replacing K∩ X with any |K∩ X| vertices of K. Clearly, graphs H-X and H - X' are isomorphic, and Δ(H-X) equals Δ(H - X'). Thus, when we consider the problem on any complete graph of a cluster graph, we only need to consider the deleted vertex number of it and do not need to distinguish the difference between any two vertices of it.
Assume cluster graph H contains b complete graphs K_a, where K_a is the maximum size complete graph in H. Let q∈ [b,|V(H)|]. Suppose R is obtained from H by deleting exactly one vertex from every K_a of H. Then, H has a q-deletion such that the maximum degree of the remaining graph is h if and only if R has a (q- b)-deletion such that the maximum degree of the remaining graph is h.
The correctness of the lemma is evident if a=1. Next, assume a≥ 2. We claim that any q-deletion set of H contains at least one vertex from every K_a. Otherwise, the maximum degree of H after the q-deletion is a-1, however, we can delete one vertex from every K_a and any other q-b vertices from H so that the maximum degree of the remaining graph is at most a-2, a contradiction. Next, we prove the lemma for the case that a≥ 2.
For one direction, assume H has a q-deletion and the maximum degree of the remaining graph of H is h. Then, the deleted vertices from H contain at least one vertex from every K_a. Without loss of generality, we may assume the vertices of V(H)-V(R) are included in the q-deletion of H. We have that R has a (q- b)-deletion such that the maximum degree of the remaining graph is at most h. If R has a (q- b)-deletion set D_R such that the maximum degree of the remaining graph of R is at most h-1, then V(H)-V(R) together with D_R is a q-deletion for H such that the maximum degree of the remaining graph of H is at most h-1, a contradiction.
For the other direction, assume R has a (q-b)-deletion and the maximum degree of the remaining graph of R is h. Clearly, H has a q-deletion such that the maximum degree of the remaining graph is at most h. Moreover, if H has a q-deletion such that the maximum degree of the remaining graph of H is at most h-1, then, according to the result proved in the last paragraph, R has a (q-b)-deletion and the maximum degree of the remaining graph of R is at most h-1, a contradiction.
The graph class admits a succinct solution table for . Hence, the problem is FPT parameterized by cluster-.
Clearly, it is polynomial-time decidable whether a given graph is a cluster graph. Let G=(V,E) be a cluster graph. Assume the maximum degree of G after an x-deletion is f^G(x). Suppose G contains r different types of complete graphs K_a_1+1,…, K_a_r+1 with vertex number at least two, where a_1 > … > a_r > 0. Define a_r+1=0. For each i∈ [r+1], assume G contains b_i complete graphs with exactly a_i+1 vertices. Clearly, we have |V|= ∑_1≤ i ≤ r +1(a_i+1)b_i.
Let g^G: ℝ→ℝ be a piecewise linear function. For each i∈ [r], two endpoints of the i-th segment of function g^G are (x_i,a_i) and (x_i+1,a_i+1), where x_i= ∑_1 ≤ j≤ i (a_j-a_i)b_j and x_r+1= ∑_1 ≤ j≤ r+1 a_jb_j. It is not hard to calculate the formulas of the function for all r segments, which is
g^G(x) = a_i - (x-x_i)/(∑_{1 ≤ j ≤ i} b_j) for x ∈ [x_i, x_{i+1}).
To make g^G well-defined, assume that g^G(x)= -|V|· x+a_1 is the 0-th segment, where x∈ (-∞, x_1); and g^G(x)= 0 is the (r+1)-th segment, where x∈ [x_r+1, +∞). Consider the slope for each segment of the g^G(x). For each i∈ [r-1], the slope of i-th segment -(b_1+…+b_i)^-1 is less than the slope of (i+1)-th segment -(b_1+…+b_i+b_i+1)^-1. In addition, the slope of 0-th segment -|V| is less than the slope of 1-th segment -b_1^-1; and the slope of r-th segment -(b_1+…+b_r)^-1 is less than the slope of (r+1)-th segment 0. Consequently, g^G is a convex function since the slope for the i-th segment of g^G increases as i increases for all 0≤ i≤ r+1. In addition, it is clear that g^G can be constructed in polynomial time when giving G as an input.
Now, we prove that f^G(x)=⌈ g^G(x) ⌉ for every integer x∈ [0,|V|] to demonstrate that cluster graph admits a succinct solution table of type 2 for .
Let vertex set S be a subset of V such that every complete graph of G has exactly one vertex that is not included in S. Clearly, Δ(G - S)=0 and S contains exactly x_r+1 = ∑_1 ≤ j≤ r+1 a_jb_j vertices. Consider every integer x∈ [x_r+1, |V|]. The set S together with any x-x_r+1 vertices of V∖ S forms an x-deletion set of G, and the maximum degree of the remaining graph is zero because Δ(G - S) is already zero and the maximum degree of G after deleting any x vertices is at least zero. Therefore, f^G(x)=0 for every integer x∈ [x_r+1, |V|].
In addition, based on the definition of g^G, g^G(x)=0 for all integer x≥ x_r+1. Thus, we have f^G(x)=⌈ g^G(x) ⌉ for every integer x ∈ [x_r+1, |V|].
Next, let us consider x∈ [0,x_r+1). Since x_1 = 0, we only need to consider the 1st to r-th segments of function g^G. Consider x∈ [x_i,x_i+1) for each i∈ [r], which is the domain of i-th segment of g^G. Based on Lemma <ref>, an x-deletion set of G, denoted by X, can be obtained by carrying out the following two processes.
* If x is not smaller than the number of maximum complete graphs of G, we repeatedly do the process: select exactly one vertex from every maximum complete graph of G, add them to X, delete them from G, and x = x - x', where x' is the number of the deleted vertices in this process.
* If x is smaller than the number of maximum complete graphs of G, then select arbitrary x vertices from G, add them to X, delete them from G, and x=0.
Since x_i+1- x_i = (a_i - a_i+1)(b_1+ … +b_i), we may assume x=x_i+ts+y, for s=b_1+ … +b_i, integer t ∈ [0, a_i - a_i+1), and integer y ∈ [0, b_1+ … +b_i). Since x≥ x_i= ∑_1 ≤ j≤ i (a_j-a_i)b_j, the first x_i vertices added to X in process (1) come from the complete graphs with vertex numbers larger than a_i+1, and the vertex numbers of all these complete graphs decrease to a_i+1 after the deletion of x_i vertices in process (1).
After that, G contains exactly s complete graphs with a_i+1 vertices, and there are ts+y vertices to be deleted. Then, process (1) will be carried out t more times and the maximum degree of G decreases to a_i-t > a_i+1.
After that, there are y vertices to be deleted. Since y< b_1+ … +b_i and the number of maximum complete graphs with a_i-t+1 vertices in G is s > y, deleting y arbitrary vertices in process (2) does not change the maximum degree of G. As a result, the maximum degree of G is a_i-t after an x-deletion of G, which means that f^G(x)=a_i-t. On the other hand, we know
⌈ g^G(x) ⌉ = ⌈ -1/∑_1 ≤ j≤ i b_j(x-x_i) + a_i ⌉ for x ∈ [x_i,x_i+1).
Replacing x with x_i+st+y, we have
⌈ g^G(x) ⌉ = ⌈-1/∑_1 ≤ j≤ i b_j(x_i+st+y -x_i) + a_i ⌉
= ⌈-1/s(st+y) + a_i ⌉
= ⌈ a_i -t -y/s⌉
= a_i -t.
Hence, f^G(x) = ⌈ g^G(x) ⌉ for every integer x∈ [0,x_r+1).
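To make the equality f^G(x)=⌈ g^G(x) ⌉ easy to check on small instances, the following short sketch (an illustration of ours, not part of the proof; a cluster graph is represented simply by the multiset of its clique sizes, and all function names are our own) computes the maximum remaining degree via processes (1) and (2) and compares it with ⌈ g^G(x) ⌉:

import math
from collections import Counter

def greedy_max_degree(clique_sizes, x):
    # Maximum degree after an x-deletion, following processes (1) and (2).
    sizes = sorted(clique_sizes, reverse=True)
    while x > 0 and sizes[0] > 1:
        m = sizes[0]
        top = sum(1 for s in sizes if s == m)   # number of maximum cliques
        if x < top:
            break                               # process (2): the maximum degree cannot drop
        sizes = sorted([s - 1 if s == m else s for s in sizes], reverse=True)
        x -= top                                # process (1): one vertex from every maximum clique
    return sizes[0] - 1

def g_ceiling(clique_sizes, x):
    # ceil(g^G(x)) built from the profile a_1 > ... > a_r with multiplicities b_i.
    cnt = Counter(s - 1 for s in clique_sizes if s >= 2)   # maps a_i to b_i
    a = sorted(cnt, reverse=True) + [0]                    # append a_{r+1} = 0
    def x_of(i):                                           # x_i = sum_{j <= i} (a_j - a_i) b_j
        return sum((a[j] - a[i - 1]) * cnt[a[j]] for j in range(i - 1))
    for i in range(1, len(a)):
        lo, hi = x_of(i), x_of(i + 1)
        if lo <= x < hi:
            den = sum(cnt[a[j]] for j in range(i))
            return math.ceil(a[i - 1] - (x - lo) / den)
    return 0                                               # x >= x_{r+1}

sizes = [3, 3, 2]                    # two triangles and one edge
for x in range(sum(sizes) + 1):
    assert greedy_max_degree(sizes, x) == g_ceiling(sizes, x)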
In fact, for a modular partition ={M : M⊆ V} of V, if the factors G[M] belong to different graph classes that admit succinct solution tables for , i.e. some factors are BDT and some factors are clusters, then the problem is still FPT parameterized by || according to Theorem <ref>. Therefore, based on Lemma <ref>, Lemma <ref>, and Theorem <ref>, we have the following corollary.
The problem is FPT parameterized by (BDT ∪)-.
Clearly, the parameter (BDT ∪)- is not greater than neighborhood diversity but is incomparable with modular-width.
§ CONCLUSIONS
We conclude with some interesting problems.
* Can we characterize the graph classes that are easily mergeable?
For instance, is the class of H-free graphs easily mergeable, for any fixed graph H?
* Is fixed-parameter tractable in parameter (K_1, t-free)-, where t≥ 3 is either fixed or a parameter?
* Is FPT in parameter -, where β is related to the input size?
* Is FPT in parameter -?
* The Red-Blue Capacitated Dominating Set problem was shown to be W[1]-hard in cw in <cit.>. It is not hard to prove it to be FPT in mw using succinct solution tables. Does the same hold for the Red-Blue Exact Saturated Dominating Set?
* Are Edge Dominating Set, Max-cut, and Partition into Triangles FPT in parameter -?
§ ACKNOWLEDGEMENT
We thank the anonymous reviewers for their valuable comments.
§ APPENDIX: PROOF FOR THEOREM <REF>
For convenience, we recall the problem definition and some of the definitions that appear in the main text.
The (α,β)-Linear Degree Domination problem
Input: a graph G = (V, E) and a non-negative integer q;
Question: does there exist a subset X⊆ V of size at most q such that for every v ∈ V ∖ X, |N(v)∩ X| ≥α |N(v)| + β?
If X is an (α, β)-linear degree dominating set of G = (V,E), then we say that V∖ X, and every vertex of V∖ X, satisfies the linear inequality of the problem. In addition, for convenience, we also call X the deletion part of G.
For a vertex v∈ V ∖ X with N(v) ≠∅ in G, we define the dominating coefficient of v with respect to (G,X) as λ(v,G,X) = |N(v)∩ X| - β/|N(v)|.
Moreover, if V∖ X is not empty, then, for a vertex set W ⊆ V∖ X, we define the dominating coefficient of W with respect to (G,X) as Λ(W, G, X) = min{λ(v,G,X) : v∈ W }; otherwise, we define the dominating coefficient of V∖ X with respect to (G,X) to be one. Clearly, if G does not have an isolated vertex, then X is an (α,β)-linear degree dominating set of G if and only if the dominating coefficient of V∖ X with respect to (G,X) is at least α.
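As a small illustration, if a vertex v∈ V∖ X has four neighbours in G, exactly two of which lie in X, and β = 1, then λ(v,G,X) = (2-1)/4 = 1/4, so v satisfies the linear inequality of the problem precisely when α≤ 1/4.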
For any integer x∈ [0, |V|], we call f(x) = min{Δ (G - X) : |X| = x } the degree deletion function of G, where X is a subset of V. Then, we have the following observation, the proof of which is trivial.
The degree deletion function of a graph G is a non-increasing function.
Observation <ref> and Definition <ref> imply that, for any a_i ∈ I, there exists X ⊆ V of size at least c(a_i) such that Δ(G - X) ≤ a_i.
In a similar fashion, for any integer x∈ [0, |V|-1], we call f(x) = max{δ (G - X,X) : |X|= x} the degree retention function of G, where X is a subset of V. Then, we have the following observation, the proof of which is trivial.
The degree retention function of a graph G is a non-decreasing function.
Observation <ref> and Definition <ref> imply that, for any a_i ∈{a_0}∪ I, there exists X ⊂ V of size at least p + c(a_i) such that δ (G - X,X) ≥ a_i.
In the following, we provide the reduction establishing hardness for arbitrary deletion and retention tables.
Our reduction is from a specific variant of the Multicolored Clique problem, which we call .
A symmetric multicolored graph G = (V^1∪ V^2 ∪…∪ V^k , E) is a connected
graph such that, for all distinct i,j ∈ [k],
* V^i = {v_1^i, …, v_n^i}, where n≥ k;
* all the vertices of V^i are colored by color i and form an independent set;
* if v_r^i v_s^j ∈ E(G), then v_s^j v_r^i ∈ E(G) as well.
Then, for the problem, the input is a symmetric multicolored graph G and an integer k, and the objective is to decide whether G contains a k-clique with vertices of all k colors. We also call v_r^i v_s^j and v_s^i v_r^j symmetry edges.
We claim that the problem is W[1]-hard, which follows immediately by the same proof as that of Lemma 1 in <cit.>. For convenience, we sketch the proof here, which is via a reduction from the k-Clique problem. Let (G,k) be an instance of k-Clique. Without loss of generality, assume G is connected. Create G' by replacing every v∈ V(G) with vertices v_1,…,v_k, one in each color class. If u and v are adjacent in G, then add the edge u_i v_j for every distinct i, j ∈ [k]. Clearly, (G',k) is a valid instance of the problem, and (G,k) is a yes-instance of the k-Clique problem if and only if (G',k) is a yes-instance of the problem.
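For concreteness, the replacement of each vertex by k colored copies can be written down directly; the following sketch (our own illustration, with hypothetical function and variable names) builds the symmetric multicolored instance and makes it apparent that symmetry edges come in pairs:

def to_symmetric_multicolored(vertices, edges, k):
    # Each vertex v becomes the copies (v, 1), ..., (v, k), one per color class;
    # each edge uv of G becomes all edges {(u, i), (v, j)} with i != j.
    new_vertices = [(v, i) for v in vertices for i in range(1, k + 1)]
    new_edges = {frozenset({(u, i), (v, j)})
                 for (u, v) in edges
                 for i in range(1, k + 1)
                 for j in range(1, k + 1) if i != j}
    return new_vertices, new_edges

# A triangle has a 3-clique, and the transformed instance has a multicolored one.
verts, edgs = to_symmetric_multicolored(["a", "b", "c"],
                                        [("a", "b"), ("b", "c"), ("a", "c")], 3)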
Suppose S_k denotes a star with k leaves. Here, we provide two lemmas about stars, which will be used later.
Let G be a (a_0, I, c)-degree deletion star graph. Apart from the stars with leaves less than a_|I|-1, G consists of exactly c(a_i) - c(a_i-1) stars with a_i-1 leaves for all i∈ [|I|].
Let us consider every i ∈ [|I|]. According to condition 2 of Definition <ref>, there are at most c(a_i) stars, each of which contains at least a_i+1 leaves. Based on condition 3 of Definition <ref>, there are at least c(a_i) stars, each of which contains at least a_i-1 leaves. In addition, we have a_i-1≥ a_i +1, therefore there are exactly c(a_i) stars, each of which contains at least a_i-1 leaves. Suppose G_i consists of the c(a_i) stars with at least a_i-1 leaves. Thus, together with condition 2 of Definition <ref>, exactly one vertex of each star in G_i has to be added to X to make G-X have maximum degree a_i, since a_i≤ a_i-1 - 1. Since G_i contains exactly c(a_i) stars, no vertex of G outside G_i is added to X. This means that every star of G outside G_i has at most a_i leaves. Moreover, for every i∈ [|I|-1], we also know that there are exactly c(a_i+1) stars, each of which contains at least a_i leaves. Therefore, for every i∈ [|I|-1], there are exactly c(a_i+1) - c(a_i) stars, each of which contains exactly a_i leaves. In addition, there are exactly c(a_1) stars, each of which contains at least a_0 leaves, and G has maximum degree a_0 by condition 1 of Definition <ref>. Therefore, there are exactly c(a_1) = c(a_1) - c(a_0) stars, each of which contains exactly a_0 leaves.
Suppose M is a stars-module of a graph G such that the degrees of internal vertices v_1,…,v_k of all the stars satisfy that deg(v_1)≥…≥ deg(v_k)≥ 2. Assume X is a subset of V(G) and |X∩ M|=r. Then, for any r∈ [k-1], to obtain the maximum value of the dominating coefficient of M∖ X with respect to (G,X), we may choose X∩ M={v_1,…,v_r}.
Suppose the vertex number of N(M) is y and the vertex number of N(M)∩ X is x, clearly we have y≥ x≥ 0. Assume that γ = x/y+deg(v_r+1). We have γ≤x/y+2 since deg(v_r+1) ≥ deg(v_k) ≥ 2.
First, we claim that, for any X∩ M with r vertices, the dominating coefficient of M∖ X with respect to (G,X) is at most γ. We give proof by contradiction for the claim as follows. Assume that there exists an X∩ M with r vertices such that the dominating coefficient of M∖ X with respect to (G,X) is larger than γ. Consider all the stars with internal vertices v_1,…,v_r+1. Clearly, for every i∈ [r+1], we have λ(v_i,G,X) ≤γ. Moreover, each star is a connected component in G[M], so we have to add at least one vertex of each star to X to change the value of λ(v_i,G,X) for each i. Thus, X∩ M contains at least r+1 vertices, a contradiction.
Second, we demonstrate that the dominating coefficient of M∖ X with respect to (G,X) is at least γ if we choose X∩ M={v_1,…,v_r}. For i∈ [r], the dominating coefficient of each adjacent vertex of v_i with respect to (G,X) is x+1/y+1> γ. For r +1 ≤ i ≤ k, the dominating coefficient of each adjacent vertex of v_i with respect to (G,X) is x/y+1≥γ, and the dominating coefficient of v_i with respect to (G,X) is x/y+deg(v_i)≥γ.
The rest of the section is dedicated to proving the following. Suppose is the class of all edgeless graphs.
We provide a reduction from the problem to each above-mentioned case of the (α,β)-Linear Degree Domination problem.
Let (G,k) be an instance of , where G = (V^1∪ V^2 ∪…∪ V^k , E). Without loss of generality, suppose k≥ 100; otherwise, the problem can be solved in polynomial time.
We next describe a graph H and integer q, which are the input to our corresponding instance of the (α,β)-Linear Degree Domination.
Note that Theorem <ref> has three cases and would require three reductions. In cases 1 and 3, α is fixed (to either 0 or 1), but we must also specify β as an input parameter, whereas in case 2, α and β are constants that we have no control over.
In all three cases of the theorem, the general modular structure of H is the same, and only the structure inside the modules differ (in addition to β also being different). We therefore present the proof of the theorem in one single reduction, and separate the proof into cases only when needed.
Before proceeding, we will make use of a 2-sumfree-set, which is a set of positive integers in which every pair of elements has a distinct sum. That is, I is a 2-sumfree set if, for any (a, b), (a', b') ∈ I × I, a + b = a' + b' if and only if {a, b} = {a', b'} (note that a = b is possible).
Clearly, we can construct in time O(n^4 log n) a 2-sumfree-set I = {x_1, …, x_n}, in which the maximum value is at most n^4, using a greedy procedure. The reason is that, for any positive integers i, j, r ≤ h < n, x_i+x_j-x_r has at most h^3 different values and we can always find the next element x_h+1 in the interval (h^4, (h+1)^4] since (h+1)^4-h^4 > h^3. Let us also mention that the more general notion of h-sumfree sets was recently used to show hardness results for Bin Packing <cit.>.
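This greedy procedure is straightforward to implement; the sketch below (our own illustration, with hypothetical function names) picks x_{h+1} as the smallest integer in (h^4, (h+1)^4] avoiding every value x_i + x_j - x_r, and then checks the 2-sumfree property by brute force:

def greedy_2sumfree(n):
    # Greedily build x_1 < ... < x_n with all pairwise sums distinct and x_h <= h^4.
    xs = [1]
    while len(xs) < n:
        h = len(xs)
        forbidden = {a + b - c for a in xs for b in xs for c in xs}
        xs.append(next(v for v in range(h ** 4 + 1, (h + 1) ** 4 + 1)
                       if v not in forbidden))
    return xs

xs = greedy_2sumfree(8)
sums = [xs[i] + xs[j] for i in range(len(xs)) for j in range(i, len(xs))]
assert len(sums) == len(set(sums))    # every (unordered) pair has a distinct sum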
We assume that we have built a 2-sumfree set I = {a_1, …, a_n},
where a_1 > … > a_n ≥ 1.
Without loss of generality, we may assume that for any integer r ∈ℕ, each element a_i ∈ I is a multiple of r. Indeed, if I does not satisfy this property, we can update I by multiplying each a_i by r, which preserves the 2-sumfree property.
The value of r we want to use depends on whether we are in case 1, 2, or 3 of the theorem, and the possibilities are listed in Table <ref>. We henceforth assume that every element of I is a multiple of r, defined according to the table.
Since the initial I had maximum value n^4, we thus have
r n^4 ≥ a_1 > … > a_n ≥ r.
Moreover, for any distinct pair (a,b),(a',b')∈ I × I, we have that the absolute value of a+b-a'-b' is at least r.
For each
color class V^i = {v_1^i, …, v_n^i} of G,
let f_i be the bijection from V^i to I, such that f_i(v^i_s) = a_s for every s∈ [n].
Clearly, we can use f_i^-1(a_s) to denote the unique vertex v^i_s of V^i associated with a_s ∈ I.
Since I has the 2-sumfree property,
we have that, for any distinct i, j ∈ [k] and any u ∈ V^i and w ∈ V^j, if f_i(u) + f_j(w) = a_s + a_t, then u, w is either v^i_s, v^j_t or v^i_t, v^j_s. Moreover, for any distinct color classes V^i, V^j, the edges v_r^i v_s^j and v_s^i v_r^j are either both in E or both not in E. Hence, by looking at a sum in I, we can decide whether it corresponds to a pair of symmetry edges or to a pair of non-edges.
Suppose positive integers p, p_ij < n^210 for all distinct i,j ∈ [k]. Assume p'=n^210. We define x= (10a_1)^5, y= (nka_1)^10000 and m=(nka_1)^20000 for any α∈ [0,1]. Additionally, for each i∈ [k], we define integers β, s, l, a_0, and t as listed in Table <ref> and integers q, q_1, and q_2 as listed in Table <ref>, whose values depend on which case of the theorem we are in.
Table <ref> demonstrates some relationships between these values, which will be used later. We will explain Table <ref> as follows.
Consider α = 0. We have that a_1 ≤ n^4r = n^4 2(k-1)^2k^3 < n^10, that a_0 = a_1 +1 ≤ n^10, that q_1 < ky + k^2β = ky + k^2(nk)^50< ky + n^102, that q_2 < kp + k^2p' + k n^10 < kn^210 + k^2n^210 + k n^10 < n^213, that q = q_1 + q_2 < ky + n^214.
Consider α∈ (0,1). We have that q_1 < 2k^2y = 2k^2(nka_1)^10000, that q_2 < (k + k2) (10a_1)^5 + 2ka_1 < 2k^2(10a_1)^5, that q < 3k^2y = 3k^2(nka_1)^10000, that a_0 = (10a_1)^50 > 2k^2(10a_1)^5 > q_2.
Consider α = 1. We have that q_1 = 0, that q_2 < k^2 n^10, that q < k^2 n^10, that a_1 ≤ n^4r < (nk)^5 ≤ n^10, that a_0 < k^2n^10≤ n^12.
We can now start describing the graph H, which is illustrated in Figure <ref>.
First, add to H two factors N and K, where:
* N is an edgeless graph of size ⌊α(1-α)m + 2(1-α)(s+l) ⌋;
* K is an edgeless graph of size ⌊α(1-α)m + (1-α)(s+l) ⌋.
Secondly, for each color class V^i of G, add six factors A_i, B_i, D_i, R_i, S_i, T_i to each H, where:
* A_i and D_i are edgeless graphs of size ⌊ (1-α)y ⌋;
* B_i is an edgeless graph of size ⌈α^2(1-α)x ⌉;
* R_i is an edgeless graph of size ⌊ |β| -s - (1-α)l ⌋;
* Now, consider S_i. Recall that l < r.
First, S_i is a (a_0, I ∪{a_n - 1}, c)-degree deletion graph if α = 1, and is a (a_0, I ∪{a_n - 1}, c)-degree deletion star graph if α∈ (0, 1), where we put c(a_j) = t - ⌈α/2a_j⌉ for a_j ∈ I and c(a_n - 1) = a_0.
Secondly, if α = 0, then S_i is a (a_0, I, c)-degree retention graph above (p,l)[We will later prove that p < n^210 is well-defined here. Also note that l < 100 by definition, and therefore l < r as desired.], where c(a_i) = 1/2(a_i - a_n) and c(a_0)= ks.
* T_i is an edgeless graph of size t.
We then make A_i, B_i, and D_i adjacent with K; make R_i adjacent with A_i; make S_i adjacent with R_i, B_i, and T_i; make T_i adjacent with D_i.
Thirdly, for each pair of color classes V^i, V^j with i < j, we add another four factors U_ij, R_ij, A_ij, B_ij, where:
* R_ij is an edgeless graph of size ⌊ |β| - 2s - 2(1-α)l ⌋;
* A_ij is an edgeless graph of size ⌊ (1-α)y ⌋;
* B_ij is an edgeless graph of size ⌈ 2α^2 (1-α) x ⌉;
* Now, we consider U_ij. Recall that l < r.
Define I_ij = {a + b : f_i^-1(a) f_j^-1(b) ∈ E(G)}, i.e. the sums of pairs that correspond to an edge between V^i and V^j.
Let ℓ_ij and ħ_ij be the smallest element and the largest element of I_ij, respectively.
First, U_ij is a (a_0, I_ij∪ {ℓ_ij - 1}, c_ij)-degree deletion graph if α = 1, and is a (a_0, I_ij∪ {ℓ_ij - 1}, c_ij)-degree deletion star graph if 0<α <1, where we put c_ij(a + b) = t - 1/k-1 (⌈α a/2⌉ + ⌈α b/2⌉), and we put c_ij(ℓ_ij - 1) = a_0.
Secondly, if α = 0, then U_ij consists of an edgeless graph with p'_ij vertices together with a (2a_1 + 1, I_ij, c_ij)-degree retention graph above (p_ij,l)[We will also later prove that p_ij < n^210 is well-defined here.],
where c_ij(a+b) = 1/2(k-1)(a + b - l_ij), and p'_ij= p' - p_ij + 1/2(k-1)(l_ij - 2a_n). In addition, we define c_ij(2a_1+1)= ks.
We then make U_ij adjacent with T_i, T_j, B_ij, and R_ij; make A_ij adjacent with R_ij and N; make B_ij adjacent with K.
To avoid cumbersome notation, for each (α, β), we define U_ij = U_ji, R_ij = R_ji, c_ij = c_ji, B_ij = B_ji, and A_ij = A_ji. This concludes the construction of H.
The S_i and U_ij factors form the main components of the reduction, and we first provide bounds on their size (vertex number).
Consider i ∈ [k] and the S_i factor. According to definition <ref>, we have that the size of S_i is at most (a_0c(a_n - 1))^10= (a_0)^20 if α∈ (0,1]. More specifically, if α = 1, then (a_0)^20 < (n^12)^20 = n^240 based on Table <ref>. If α∈ (0,1), then (a_0)^20 = ((10a_1)^50)^20 = (10a_1)^1000 based on Table <ref>.
Suppose that α =0. According to definition <ref>, the size of S_i is at most (a_0c(a_0))^10, which is (a_0ks)^10≤ (n^10kn^10)^10≤ n^210 based on Table <ref> and Table <ref>. In addition, according to Definition <ref>, we have p< (a_0c(a_0))^10, which is at most n^210. So, the restriction that p< n^210 does not affect the definition of S_i.
We next bound the size of each U_ij as follows. According to Definition <ref>, we have that the size of U_ij is at most (a_0c_ij(ℓ_ij - 1))^10= (a_0)^20 if α∈ (0,1]. Moreover, we have already seen that (a_0)^20 < n^240 if α = 1, and that (a_0)^20 = (10a_1)^1000 if α∈ (0,1). Assume that α =0. According to Definition <ref>, the size of the (2a_1 + 1, I_ij, c_ij)-degree retention graph above (p_ij,l) is at most (a_0c_ij(2a_1+1))^10 = (a_0ks)^10, which has been proved to be at most n^210. Additionally, according to Definition <ref>, we have p_ij< (a_0c_ij(2a_1+1))^10, which is at most n^210. So, the restriction that p_ij< n^210 does not affect the definition of U_ij. Furthermore, since p'= n^210 and p_ij is a positive integer according to Definition <ref>, we have
p'_ij = p' - p_ij + 1/2(k-1)(l_ij - 2a_n) < p' + 1/2(k-1)(l_ij - 2a_n)
< p' + 1/2(k-1)2a_1 < n^210 + a_1 < n^210 + n^10 < 2n^210.
Therefore, the size of U_ij is at most p'_ij + (a_0c(2a_1+1))^10 < 2n^210 + n^210 < n^211. For convenience, we summarize these bounds in Table <ref>.
As we argued, the set I can be constructed in polynomial time with maximum value O(rn^4) and, by the definition of admitting arbitrary degree deletion or retention tables, the S_i and U_ij factors can also be built in polynomial time. Moreover, all other factors are edgeless graphs with sizes polynomial in n+k. Thus H can be constructed in polynomial time.
All that remains is to argue that the objective instance is equivalent to the original one. Lemma <ref> and <ref> will give the forward and reverse directions, respectively.
Suppose that G contains a multicolored clique C with k vertices. Then there is X ⊆ V(H) of size at most q such that |N(v)∩ X| ≥α |N(v)| + β for every v ∈ V(H) ∖ X.
For i ∈ [k], define _i as the unique element of I such that f^-1_i(_i) is the vertex of V^i that belongs to the multicolored clique C. Then for any distinct i, j ∈ [k], f^-1_i(_i) f^-1_j(_j) ∈ E(G).
This implies that _i + _j is in the I_ij list that was used to construct U_ij.
We divide the proof into two cases, which is α = 0 and 0< α≤ 1.
Case 1: consider α = 0. Recall that β = (nk)^50 in this case. Define w_i = s + l - _i. Note that for any a_i ∈ I, 0≤ l < r ≤ a_i ≤ a_1 < n^10 = s based on Table <ref> and Table <ref>. Therefore, we have 0 < w_i < s.
First, we show how to construct the vertex set X for H.
For all i≠ j, the sets B_i, B_ij are empty sets, so we do not need to consider them.
Add all vertices of K and N to X.
Let i ∈ [k]. Add all vertices of R_i and D_i to X, and the intersection of A_i and X is empty.
In S_i, we add p + c(_i) = p+ 1/2(_i- a_n) vertices to X. This can be done so that S_i - X has minimum degree _i in X∩ S_i according to Condition 2 of Definition <ref>. For each T_i, add any w_i of its vertices to X, which is feasible since 0 < w_i < s and |T_i| = s.
Next consider distinct i, j ∈ [k]. Add all vertices of R_ij to X, and make the intersection of A_ij and X empty.
For U_ij, we first add all p'_ij vertices of its edgeless graph to X. Then, we add p_ij + c_ij(_i + _j) vertices to X so that U_ij - X has minimum degree _i + _j in X∩ U_ij, again according to Condition 2 of Definition <ref>.
We claim that X is a solution to the instance H.
Secondly, we bound the size of X.
The number of elements of X from the S_i's is
∑_i ∈ [k] |S_i ∩ X| = ∑_i ∈ [k](p+ 1/2(_i-a_n)) = kp -1/2 k a_n + 1/2∑_i ∈ [k]_i.
Next consider distinct i, j ∈ [k].
Recall that we defined p'_ij=p' - p_ij + 1/2(k-1)(l_ij - 2a_n). Thus
|U_ij∩ X| = p'_ij+ p_ij + c_ij(_i + _j)
= p' - p_ij + 1/2(k-1)(l_ij - 2a_n) + p_ij + 1/2(k-1)(_i + _j - l_ij)
= p' + 1/2(k-1)(_i + _j - 2a_n).
The number of elements of X from the U_ij's is
∑_1 ≤ i < j ≤ k |U_ij∩ X| = ∑_1 ≤ i < j ≤ k(p' + 1/2(k-1)(_i + _j - 2a_n))
= k2p' - k21/2(k-1) 2a_n + ∑_i ∈ [k](k - 1) 1/2(k-1)_i
= k2p' -1/2 k a_n + 1/2∑_i ∈ [k]_i,
where the equality on the second line is because each element of _1, …, _k appears exactly k - 1 times in the summation.
In addition, the size of the intersection of X and each T_i is w_i = s+l-a_i. So the size of the intersection of X and all U_ij, S_i, and T_i factors is
kp -1/2 k a_n + 1/2∑_i ∈ [k]_i + k2p' -1/2 k a_n + 1/2∑_i ∈ [k]_i + ∑_i ∈ [k] w_i
= kp + k2p' - k a_n + ∑_i ∈ [k]_i + ∑_i ∈ [k](s+l-_i)
= kp + k2p' - k a_n + ∑_i ∈ [k](s+l )
= kp + k2p' + k (s + l - a_n )= q_2.
In addition, the number of vertices in X, other than those from the S_i, T_i, and U_ij factors, is
∑_1 ≤ i < j ≤ k|R_ij| + ∑_i ∈ [k]( |D_i| + |R_i|) + |N| + |K|,
which is equal to q_1 after simple calculation. Overall, the size of X is q_1 + q_2 = q.
Thirdly, we verify that X is a (α,β)-linear degree dominating set of H. Since α =0, we only need to demonstrate that |N(v)∩ X| ≥β for every vertex v ∈ V(H) ∖ X.
The inequality holds for every vertex of A_i for each i ∈ [k] since |R_i| + |K| = |β| = β, which holds because the floor functions can be ignored when α = 0.
In addition, the inequality holds for every vertex of A_ij for every distinct i, j ∈ [k], since |R_ij| + |K| = |β| = β.
Consider a vertex v in some T_i. We have |N(v)∩ X| ≥ |D_i| = y, where y is equal to (nka_1)^10000 > (nk)^50 = β.
Consider a vertex in some S_i. We know S_i - X has minimum degree _i in X∩ S_i. In addition, the size of the intersection of T_i and X is s+l-_i and R_i ⊆ X. Thus, for any v∈ S_i - X, we have that |N(v)∩ X| is at least
_i + (s+l-_i) + (|β|- s - l) = β.
Consider a vertex in some U_ij. We know that U_ij has minimum degree _i + _j in X∩ U_ij. Moreover, R_ij⊆ X and the intersection of T_i ∪ T_j and X contains 2s+2l-_i-_j vertices. Thus, for any v∈ U_ij - X, we have that |N(v)∩ X| is at least
(_i+_j) + (|β|- 2s - 2l) + (2s+2l-_i-_j) = β.
We have handled every non-empty factor of H - X, which completes this part of the proof.
Case 2: consider 0< α≤ 1. Recall that β is some arbitrary constant if α∈ (0,1), and β = -(nk)^10000 if α = 1.
We define w_i as follows:
* if α∈ (0,1), then w_i = 2(|β|+ ⌈α/2_i⌉);
* if α = 1, then w_i = 2⌈α/2_i⌉.
In addition, when α = 1, w_i is also equal to _i since _i is an even number in this case (we leave w_i in the above form to relate it more easily to the c(_i)'s later on).
First, we show how to construct the vertex set X.
The intersection of K ∪ N and X is an empty set.
Let i ∈ [k]. Add all vertices of A_i, B_i and D_i to X. The intersection of R_i and X is empty.
In T_i, add any of its w_i vertices to X.
In S_i, we add c(_i) = t - ⌈α/2_i⌉ vertices to X. This can be done so that S_i - X has maximum degree _i according to Definition <ref>.
Consider distinct i, j ∈ [k]. Add all vertices of A_ij and B_ij to X and the intersection of R_ij and X is empty.
For U_ij, we add
c_ij(_i + _j) = t - 1/k-1( ⌈α/2_i ⌉ + ⌈α/2_j ⌉)
vertices to X so that U_ij- X has maximum degree _i + _j according to Definition <ref>.
Secondly, we bound the size of X. The size of the intersection of X and all S_i factors is
∑_i ∈ [k] |S_i ∩ X| = ∑_i ∈ [k](t - ⌈α/2_i⌉) = kt - ∑_i ∈ [k]⌈α/2_i⌉.
The size of the intersection of X and all U_ij factors is
∑_1 ≤ i < j ≤ k |U_ij∩ X| = ∑_1 ≤ i < j ≤ k(t - 1/k-1(⌈α/2_i ⌉ + ⌈α/2_j ⌉))
= k2t - 1/k-1∑_i ∈ [k] (k-1) ⌈α/2_i ⌉
= k2t - ∑_i ∈ [k]⌈α/2_i ⌉.
Recall that the size of the intersection of X and each T_i is w_i. So the size of the intersection of X and all U_ij, S_i, and T_i factors is
kt - ∑_i ∈ [k]⌈α/2_i⌉ + k2t - ∑_i ∈ [k]⌈α/2_i ⌉ + ∑_i ∈ [k] w_i
= (k + k2) t - 2 ∑_i ∈ [k]⌈α/2_i⌉ + ∑_i ∈ [k] w_i.
When α∈ (0, 1), this equals
(k + k2) t - 2 ∑_i ∈ [k]⌈α/2_i⌉ + ∑_i ∈ [k] 2( |β|+ ⌈α/2_i⌉)
= (k + k2) t + 2k|β| = q_2
and when α = 1, this equals
(k + k2) t - 2 ∑_i ∈ [k]⌈α/2_i⌉ + ∑_i ∈ [k] 2 ⌈α/2_i⌉
= (k + k2) t = (k + k2) s = q_2
In addition, apart from all U_ij, S_i, and T_i factors, the size of the X is
∑_1 ≤ i < j ≤ k(|A_ij| +|B_ij|) + ∑_i ∈ [k]( |A_i| + |B_i| + |D_i|)
= k2(⌊ (1-α)y ⌋ +⌈ 2α^2(1-α)x ⌉) + k( ⌈α^2(1-α)x ⌉ + 2⌊ (1-α)y ⌋),
which equals q_1 if 0 < α < 1, and equals 0 = q_1 if α = 1.
Overall, the size of X is q_1 + q_2 = q for any α∈ (0,1].
Thirdly, we verify that X is a (α,β)-linear degree dominating set of H. We need to demonstrate that every vertex v ∈ V(H) ∖ X satisfies the inequality that |N(v)∩ X| ≥α |N(v)| + β. We divide it into two cases, which are α =1 and α∈ (0, 1).
Assume α =1. Recall that β= - (nk)^10000. In addition, the size of S_i and U_ij are less than n^240 based on Table <ref>.
Thus, we need to prove that every v ∈ V(H) ∖ X satisfies that |N(v)| - |N(v)∩ X| ≤ -β = (nk)^10000.
Clearly, N, K, all A_i, B_i, D_i, A_ij, and B_ij are empty sets, so we do not need to consider them.
For any vertex v of T_i, we have |N(v)| ≤ |S_i| + ∑_j≠ i |U_ij|, which is at most k^2 n^240 < -β. For any vertex v∈ R_i, we have |N(v)| = |S_i| ≤ n^240 < -β. For any vertex v∈ R_ij, we have |N(v)| = |U_ij| ≤ n^240 < -β.
Consider any vertex v in some U_ij - X. Suppose a and b denote the number of vertices in N(v)∩ (U_ij - X) and N(v)∩ U_ij∩ X, respectively. Since the maximum degree of U_ij - X is _i + _j, we have a≤_i + _j. Furthermore, recall that t=s in this case, so |N(v)| equals
|R_ij| + |T_i| + |T_j| +a +b
= |β| - 2s + 2t + a + b
= |β| +a +b.
In addition, we have that |N(v)∩ X| equals
|T_i∩ X| + |T_j∩ X| + |N(v) ∩ U_ij∩ X|
= w_i + w_j + b
= _i + _j + b.
Overall, we have that
|N(v)| - |N(v)∩ X|
= |β| +a +b - (_i + _j + b)
= |β| + (a- _i - _j)
≤ |β| = -β.
Consider any vertex v of some S_i - X. Suppose a and b denote the number of vertices in N(v)∩ (S_i - X) and N(v)∩ S_i ∩ X, respectively. Additionally, S_i - X has maximum degree _i, so a≤_i. First, we have |N(v)| = |R_i| + |T_i| +a +b = |β| +a +b. In addition, we have that |N(v)∩ X| equals w_i + b = _i + b. Thus, we have |N(v)| - |N(v)∩ X| = |β| + (a- _i) ≤ |β| = -β.
This concludes the case α = 1.
Assume α∈ (0, 1).
Recall that a_1 > r. In addition, the number of vertices in any S_i or U_ij is at most (10a_1)^1000 based on Table <ref>.
Consider any vertex v of some T_i - X. Since all vertices of D_i are in X, we have that |N(v)∩ X| -α |N(v)| is at least
|D_i| - α(|D_i|+|S_i| + ∑_j ∈ [k]∖{i} |U_ij|)
≥ (1-α) ⌊ (1-α) y ⌋ - α k (10a_1)^1000
≥ (1-α) ⌊ (1-α) (nka_1)^10000⌋ - α k (10a_1)^1000
> (1-α)^2 (nka_1)^10000 - α k (10a_1)^1000 - 1 +α
> (1-α)^2 (nka_1)^1000a_1 - (nka_1)^1000 - 1
> ((1-α)^2 r -1) (nka_1)^1000 - 1
= ((1-α)^2 ⌈10k(|β|+10)/α(1-α)⌉^10 -1) (nka_1)^1000 - 1
> (1-α)^2 ⌈10k(|β|+10)/α(1-α)⌉^10 -2 > β,
which implies that |N(v) ∩ X| ≥α |N(v)| + β.
Consider any vertex v of some R_i. Since all vertices of A_i are in X, we have
|N(v)∩ X| -α |N(v)| = |A_i| - α(|A_i|+|S_i|)
≥ (1-α) ⌊ (1-α) y ⌋ - α (10a_1)^1000,
which is clearly larger than formula (<ref>), and thus larger than β.
Consider any vertex v of some R_ij. Since all vertices of A_ij are in X, we have
|N(v)∩ X| -α |N(v)| ≥ |A_ij| - α(|A_ij|+|U_ij|)
≥ (1-α) ⌊ (1-α) y ⌋ - α (10a_1)^1000,
which is clearly larger than formula (<ref>), and thus larger than β.
Consider any vertex v of K and N. Since all neighbor vertices of v are in X, we have
|N(v)∩ X| -α |N(v)| = (1-α)|N(v)|
≥ (1-α) ⌊ (1-α)y ⌋,
which is clearly larger than formula (<ref>), and thus larger than β.
Consider any vertex v of some S_i - X. Suppose a and b denote the number of vertices of N(v)∩ (S_i - X) and N(v)∩ X ∩ S_i, respectively. In addition, we know S_i - X has maximum degree _i. Clearly, we have 0≤ a≤_i and b≥ 0. Thus, we have that the dominating coefficient λ(v,H,X) = |N(v)∩ X| - β/|N(v)| of v with respect to (H,X) is
|B_i| + |T_i ∩ X| + b - β/|T_i| + |B_i| + |R_i| + a + b
= ⌈α^2(1-α)x ⌉ + 2|β| + 2⌈1/2α_i⌉ + b - β/⌊α(1-α)^2x ⌋+ s + ⌈α^2 (1- α)x ⌉ + ⌊ |β| - s - (1-α)l ⌋ + a + b
≥ ⌈α^2(1-α)x ⌉ + α |β|+ α_i + α b/α(1-α)^2x + ⌈α^2 (1- α)x ⌉ + |β| + _i + b
≥ α^2(1-α)x + α |β|+ α_i + α b/α(1-α)^2x + α^2 (1- α)x + |β| + _i + b
= α (α (1- α)x + |β| + _i + b)/α (1- α)x + |β| + _i + b= α,
where we used the fact that l=0, and where the inequality on the penultimate line holds because ⌈ c⌉+d/⌈ c⌉+e≥ c+d/ c+e for any real numbers e≥ d ≥ 0 and c > 0.
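(A quick way to verify this last inequality: cross-multiplying, (⌈ c⌉+d)(c+e) - (c+d)(⌈ c⌉+e) = (⌈ c⌉-c)(e-d) ≥ 0, since ⌈ c⌉≥ c and e≥ d.)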
Consider any vertex v of some U_ij - X. Suppose a and b denote the number of vertices of N(v)∩ (U_ij - X) and N(v)∩ X ∩ U_ij, respectively. Additionally, we know that U_ij- X has maximum degree _i + _j. Clearly, we have 0≤ a≤_i+_j and b≥ 0. Thus, we have the dominating coefficient λ(v,H,X) of v with respect to (H,X) is
|B_ij| + |T_i ∩ X| + |T_j ∩ X| + b -β/|T_i| + |T_j| + |B_ij| + |R_ij| + a + b
= ⌈ 2α^2(1-α)x ⌉ + 2|β| + 2⌈1/2α_i⌉ + 2|β| + 2⌈1/2α_j⌉ + b - β/2t + ⌈ 2α^2 (1- α)x ⌉ + ⌊ |β| - 2s - 2(1-α)l ⌋ + a + b
≥ ⌈ 2α^2(1-α)x ⌉ + 3|β| + α_i+ α_j + b/2⌊α(1-α)^2x ⌋ + ⌈ 2α^2 (1- α)x ⌉ + |β| + a + b
≥ ⌈ 2α^2(1-α)x ⌉ + α |β| + α_i+ α_j + α b/2 α(1-α)^2x + ⌈ 2α^2 (1- α)x ⌉ + |β| + _i+ _j + b
≥ 2α^2(1-α)x + α |β| + α_i+ α_j + α b/2 α(1-α)^2x + 2α^2 (1- α)x + |β| + _i+ _j + b
= α (2 α(1-α) x + |β| + _i+ _j + b)/2 α(1-α) x + |β| + _i+ _j + b = α,
where, again, the inequality on the penultimate line holds because ⌈ c⌉+d/⌈ c⌉+e≥ c+d/ c+e for any real numbers e≥ d ≥ 0 and c > 0.
Overall, we proved that X is a (α,β)-linear degree dominating set of H, where the size of X at most q.
We now consider the reverse direction, which will be demonstrated in the following lemma.
Suppose that there is X ⊆ V(H) with |X| ≤ q such that |N(v)∩ X| ≥α |N(v)| + β for every v ∈ V(H) ∖ X. Then G contains a multicolored clique with k vertices.
Assume i,j∈ [k]. To alleviate notation slightly, for a factor M of H, we will write X(M) := X ∩ V(M) for the vertices of M added to X, and χ(M) := |X(M)| for the number of vertices of M added to X.
In the proof, M will be one of the
factors in H.
Define V_2 as the set of vertices consisting of the union of all the vertices of T_i, S_i, and U_ij for all distinct i, j∈ [n]. In addition, define V_1 = V(H) ∖ V_2.
Let us first consider the size of T_i of H. If α∈{0,1}, then |T_i| = s = n^10. If α∈ (0,1), then |T_i| = ⌊α (1-α)^2 x ⌋≤ x = (10a_1)^5. Thus, we have
|V_2| = ∑_i∈ [k] (|S_i| + |T_i|) + ∑_1≤ i<j ≤ k |U_ij|.
If α =1, then, according to Table <ref>, we have
|V_2|< k (n^240 + n^10) + k^2 n^240 < n^243.
If α∈ (0,1), then, according to Table <ref>, we have
|V_2|< k ((10a_1)^1000 + (10a_1)^5) + k^2 (10a_1)^1000 < (10a_1)^1001.
If α =0, then, according to Table <ref>, we have
|V_2|< k (n^210 + n^10) + k^2 n^211 < n^214.
The proof is divided into a series of claims.
If α∈ (0,1), then all factors adjacent to N or K are in X; moreover, the total number of vertices in these factors is q_1.
First note that, according to Table <ref>, the size of X is at most q < 3k^2(nka_1)^10000. Also notice that |β| < r < a_1 (see Table <ref>).
Consider any vertex v which is adjacent to a vertex of N. Assume v ∉X. Recall that k≥ 100. Then, we have a contradiction as follows.
|X| ≥ |N(v)∩ X| ≥α |N(v)| + β≥α |N| - r
≥α⌊α (1-α) m ⌋ - (a_1 - 1)
> α^2(1-α) m - a_1
= α^2(1-α) (nka_1)^20000 - a_1
> α^2(1-α) a_1(nka_1)^19999 - a_1
> α^2(1-α) ⌈10k(|β|+10)/α(1-α)⌉^10 (nka_1)^19999 - a_1
> 2 (nka_1)^19999 - a_1 > (nka_1)^19999 > k^3 (nka_1)^10000
> 3k^2(nka_1)^10000 > q ≥ |X|.
Thus, all vertices adjacent to N are in X. Additionally, the proof that all factors adjacent to K are in X is identical, since |K| = |N|. Finally, it is easy to calculate that the vertex number of all factors adjacent to N or K is q_1.
Consider factors B_i, B_ij, and D_i for distinct i,j ∈ [k]. We have that χ(B_i) = |B_i|, χ(B_ij) = |B_ij|, and χ(D_i) = |D_i|.
Assume α = 0. First, B_i and B_ij are empty sets. Secondly, for any vertex v∈ D_i, we have |N(v)| = |T_i| + |K|, which equals
2s + l
< 2s + r < 3s = 3n^10 < (nk)^50 = β.
Thus, v must be in X and D_i ⊆ X (if v ∉ X, it does not have enough neighbors, since it needs at least β neighbors in X).
In addition, according to Claim <ref>, B_i, B_ij, and D_i are subsets of X if α∈ (0,1); moreover, B_i, B_ij, and D_i are empty sets if α =1.
Consider factors N and K. We may assume that
* χ(N) = |N| and χ(K) = |K| if α = 0;
* χ(N) = 0 and χ(K) = 0 if α∈ (0,1].
Case 1: α = 0. Consider A_i for some i ∈ [k]. Suppose that there exists a vertex of R_i ∪ K that is not in X. For any vertex v∈ A_i, we would have |N(v)∩ X| < |R_i| + |K| = |β|. Thus, each vertex of A_i is in X. Moreover, we know that every D_i is a subset of X according to Claim <ref>, which means that |X| is at least
|A_i|+ ∑_i∈ [k] |D_i| = ky +y =ky + (nka_1)^10000.
However, we know that |X| ≤ q < ky + n^214 according to Table <ref>, a contradiction. Thus, all vertices of R_i ∪ K are in X. The proof that R_ij∪ N is a subset of X goes the same way.
Case 2: α∈ (0, 1]. First, N and K are empty sets if α =1. Secondly, if α∈ (0,1), all factors adjacent to N or K are in X according to Claim <ref>; furthermore, the vertices of N ∪ K form an independent set. Thus, for any v ∈ N ∪ K, we have N(v) ⊆ X. For any v∈ N ∪ K, we claim that |N(v)∩ X| ≥α |N(v)| + β. Since N(v) ⊆ X, it is enough to argue that |N(v)| ≥β/1-α. In addition, since A_i ⊆ N(K) and A_ij⊆ N(N), we have
|N(v)| ≥min{ |A_i|, |A_ij|} = ⌊ (1-α) y ⌋
> (1-α) y - 1 = (1-α) (nka_1)^10000 - 1
> (1-α) a_1 - 1 > (1-α) r - 1
> (1-α) ⌈10k(|β|+10)/α(1-α)⌉^10 - 1,
which is clearly larger than β/1-α.
Note that, in this Claim <ref>, we have proved the fact that R_i ∪ K ⊆ X and R_ij∪ N ⊆ X for all distinct i,j∈ [k] if α = 0. Moreover, this fact will be used in Claim <ref>, Claim <ref>, and Claim <ref>.
Consider factors A_i and A_ij for all distinct i, j∈ [k]. We may assume that
* χ(A_i) = 0 and χ(A_ij) = 0 if α = 0;
* χ(A_i) = |A_i| and χ(A_ij) = |A_ij| if α∈ (0,1].
Case 1: α = 0. By Claim <ref>, R_i ∪ K and R_ij∪ N are subsets of X for all distinct i,j∈ [k]. Thus, we may assume that χ(A_i) = 0 and χ(A_ij) = 0 since |R_i| + |K| = β and |R_ij| + |N| = β.
Case 2: α∈ (0, 1]. By Claim <ref>,
every A_i and A_ij are subsets of X if α∈ (0,1); furthermore, every A_i and A_ij are empty sets if α = 1. Thus, we may assume that χ(A_i) = |A_i| and χ(A_ij) = |A_ij| if α∈ (0,1].
For any vertex v of T_i, R_i, or R_ij such that v ∉ X, we have |N(v) ∩ X| ≥α |N(v)| + β.
Consider a vertex v of some R_i or R_ij. If α =0, then R_i and R_ij are subsets of X based on the proof of Claim <ref>. So we do not need to consider this case. If α =1, then, according to Table <ref> and Table <ref>, |N(v)| = |S_i| < n^240 < -β for v∈ R_i, and |N(v)| = |U_ij| < n^240 < -β for v∈ R_ij.
Let us consider α∈ (0, 1). Assume v∈ R_i. According to Claim <ref>, we have A_i ⊆ X.
Even if S_i ∩ X is an empty set, the dominating coefficient of any vertex v in R_i with respect to (H, X) is
λ(v, H, X) = |A_i| - β/|A_i| + |S_i|≥⌊ (1-α)y ⌋ - (r - 1)/⌊ (1-α)y ⌋ + (10a_1)^1000
> (1-α)y - r/(1-α)y + (10a_1)^1000 > (1-α)y - r - (10a_1)^1000/(1-α)y
> 1 - 2(10a_1)^1000/(1-α)(nka_1)^10000
> 1 - 1/(1-α)a_1
> 1 - 1/(1-α)r = 1 - 1/(1-α)⌈10k(|β|+10)/α(1-α)⌉^10
> 1 - 1/(1-α)1/α^10(1-α)^10
> 1 - (1-α) = α.
The second line is because b/a≥b-c/a-c for any 0<c<b<a. Suppose v∈ R_ij.
According to Claim <ref>, we have A_ij⊆ X. Even if U_ij∩ X is an empty set, the dominating coefficient of any vertex v in R_ij with respect to (H, X) is
λ(v, H, X) = |A_ij| - β/|A_ij| + |U_ij| > ⌊ (1-α)y ⌋ - (r - 1)/⌊ (1-α)y ⌋ + (10a_1)^1000,
which is larger than α according to the inequality above.
Consider a vertex v of T_i. D_i is a subset of X by Claim <ref>. So |N(v) ∩ X| ≥ |D_i| = ⌊ (1-α)y ⌋. If α =0, then it is trivial that |N(v) ∩ X| = y > (nk)^50 = β. Assume α∈ (0, 1). Even if, apart from D_i, the intersection of X and the other neighbors of v is empty, the dominating coefficient of v with respect to (H,X) is
λ(v, H, X) = |D_i| - β/|D_i| + |S_i| + ∑_j∈ [k]∖{i}|U_ij|
> ⌊ (1-α)y ⌋ - (r - 1)/⌊ (1-α)y ⌋ + k(10a_1)^1000,
which is larger than α by an argument similar to the one above. Now, let us consider α =1. Clearly, we have
|N(v)| = |S_i| + ∑_j∈ [k]∖{i}|U_ij|
< kn^240 < (nk)^10000 = -β.
Thus, we have |N(v)| + β < 0 ≤ |N(v)∩ X|.
Consider factor R_i for all i. We may assume that
* χ(R_i) = |R_i| if α = 0;
* χ(R_i) = 0 if α∈ (0,1].
According to Claim <ref>, we only need to consider the vertices outside R_i that are affected by the intersection of R_i and X.
Case 1: that χ(R_i) = |R_i| is demonstrated in the proof of Claim <ref> if α = 0.
Case 2: first, let us consider α =1, and recall that in this case H - X has maximum degree at most |β|. If χ(S_i) < c(a_1), then Δ(S_i - X) ≥ a_0 by Definition <ref>.
Let v ∈ S_i - X be a vertex of maximum degree in the graph S_i - X. In H, v has |β| neighbors in R_i ∪ T_i, which means that at least a_0 = q +1 vertices of R_i∪ T_i are added to X, a contradiction. Thus, χ(S_i) ≥ c(a_1).
We may assume that the vertices of S_i ∩ X are chosen to minimize Δ(S_i - X), since the choice of vertices of S_i to add to X does not affect other factors, only the number. Thus, we may assume that Δ(S_i - X) ≤ a_1.
Assume that there exists some v ∈ X(R_i). Suppose that there exists at least one vertex u ∈ T_i - X. Recall that we also call X the deletion part of H. Consider X' = (X ∖{v}) ∪{u}. The subgraph H - X' can be seen as taking H - X, deleting u, and reinserting v.
Deleting u decreases the degree of the vertices in the neighbors of T_i by 1, including the S_i vertices. Then by reinserting v, we only increase the degrees of the S_i vertices by 1. Thus, the maximum degree of H - X' is not more than the maximum degree of H - X and we may use X' instead. We may repeat this argument until no element of R_i is in X (in which case we are done), or until T_i - X is empty, i.e. every element of T_i is in X.
So suppose that we reach a point where all vertices of T_i are in X. Since Δ(S_i - X) ≤ a_1, if we assume that X(R_i) is empty, each remaining vertex of S_i in H - X has degree at most |β| - s + a_1 < |β|. Thus, we may assume that χ(R_i) = 0.
Secondly, let us consider α∈ (0,1).
If χ(S_i) < c(a_1), then S_i - X has a vertex v with degree a_0 by Definition <ref>. Consider the degree of v in H. The size of X∩ N(v) is at least
α |N(v)| + β≥α( |R_i| + |T_i| + |B_i| + a_0) + β
> α a_0 + β
Furthermore, based on Claim <ref>, we have |X∩ (V(H) ∖ N(v))| ≥ q_1 - |B_i|. Thus, the size of X is larger than
α a_0 + β + q_1 - |B_i|
= α x^10 + β + q_1 -⌈α^2 (1-α)x ⌉
> q_1 + α x^10 + β - 2α x -1
= q_1 + α x (x^9 -2) + β -1
= q_1 + α (10a_1)^5 ( (10a_1)^45 -2) + β -1
> q_1 + α ((10a_1)^44r-2)r + β -1
= q_1 + α ((10a_1)^44r-2)⌈10k(|β|+10)/α(1-α)⌉^10 + β -1
> q_1 + (10a_1)^44r + β -3 = q_1 + (10a_1)^44⌈10k(|β|+10)/α(1-α)⌉^10 + β -3
> q_1 + (10a_1)^44
> q_1 + 2k^2(10a_1)^5 > q_1 + q_2 = q,
which is a contradiction. Thus, χ(S_i) ≥ c(a_1).
According to Lemma <ref>, S_i contains c(a_1) -c(a_0) = c(a_1) stars with a_0 leaves. Based on Lemma <ref>, we may assume that the internal vertices of all c(a_1) stars with a_0 leaves are added to X. Thus, Δ(S_i - X) ≤ a_1 and the largest star in S_i - X contains at most a_1 leaves.
Assume that there exists some v ∈ X(R_i). Suppose that at least one vertex u of T_i is not deleted by X, i.e. there exists u ∈ T_i - X. Consider X' = (X ∖{v}) ∪{u}. Consider every neighbor factor of T_i that is not a subset of X. It is not hard to verify that Λ(V(S_i -X),H,X) = Λ(V(S_i - X'),H,X), and that Λ(V(U_ij -X),H,X) ≤Λ(V(U_ij - X'),H,X) for every j∈ [k] ∖{i}. Furthermore, S_i is the only neighbor factor of R_i that is not a subset of X. Thus, we may use X' instead.
So suppose instead that all vertices of T_i are deleted. Since the largest star in S_i - X contains at most a_1 leaves, we have that, even if χ(R_i) = 0, Λ(V(S_i -X),H,X) is at least
|T_i| + χ(B_i) + χ(R_i) - β/|T_i| + |B_i| + |R_i| + a_1
= ⌊α (1-α)^2 x ⌋ +⌈α^2(1-α)x ⌉ - β/⌊α (1-α)^2 x ⌋ +⌈α^2 (1- α)x ⌉ + |β| + a_1
> α(1-α)x - 1 - |β|/α(1-α)x + 1 + |β| + a_1
> α(1-α)x - 2 - 2|β| - a_1/α(1-α)x
> 1 - 2a_1/α(1-α)x = 1 - 2a_1/α(1-α)(10a_1)^5
> 1 - 1/α(1-α)r = 1 - 1/α(1-α) ⌈10k(|β|+10)/α(1-α)⌉^10
> 1 - (1-α) = α.
The fifth line holds because a_1 > r > 2|β|+2. Thus, we may assume χ(R_i) = 0.
Consider factor R_ij for all i≠ j. We may assume that
* χ(R_ij) = |R_ij| if α = 0;
* χ(R_ij) = 0 if α∈ (0,1].
Case 1: that χ(R_ij) = |R_ij| is demonstrated in the proof Claim <ref> if α = 0.
Case 2: first, let us consider α =1. The idea is the same as that of the proof for Claim <ref>.
If χ(U_ij) < c_ij(ħ_ij), then by Definition <ref>, we have Δ(U_ij - X) = a_0. This implies a_0 deletions in R_ij∪ T_i ∪ T_j. Then we have |X| ≥ a_0 = q +1, a contradiction. Thus, χ(U_ij) ≥ c_ij(ħ_ij) and we may assume that Δ(U_ij - X) ≤ħ_ij < 2s (recall that s > a_1).
If there exists some u ∈ (V(T_i) ∪ V(T_j)) ∖ X and some v ∈ X(R_ij), then X' = (X ∖{v}) ∪{u} does not alter the degrees in V(U_ij) ∖ X and can only reduce the degrees of other relevant factors.
So assume that all elements of T_i and T_j are deleted.
We know Δ(U_ij - X) < 2s; thus, in H - X, the vertices of V(U_ij) ∖ X have degree at most ħ_ij + |β| - 2s < |β| even if X(R_ij) is empty. Thus, we may assume that χ(R_ij) = 0.
Secondly, let us consider α∈ (0,1).
If χ(U_ij) < c_ij(ħ_ij), then U_ij - X has a vertex v with degree a_0 by Definition <ref>. Consider the degree of v in H. The size of X∩ N(v) is at least
α |N(v)| + β≥α( |R_ij| + |T_i| + |T_j| + |B_ij| + a_0) + β
> α a_0 + β
Furthermore, based on Claim <ref>, we have |X∩ (V(H) ∖ N(v))| ≥ q_1 - |B_ij|. Thus, the size of X is larger than
α a_0 + β + q_1 - |B_ij|
= α x^10 + β + q_1 -⌈ 2 α^2 (1-α)x ⌉
> q_1 + α x^10 + β - 2α x -1
= q_1 + α x (x^9 -2) + β -1,
which is larger than q based on formula (<ref>), a contradiction. Thus, χ(U_ij) ≥ c_ij(ħ_ij).
According to Lemma <ref>, U_ij contains c_ij(ħ_ij) - c_ij(a_0) = c_ij(ħ_ij) stars with a_0 leaves. Based on Lemma <ref>, we may assume that the internal vertices of all c_ij(ħ_ij) stars with a_0 leaves are added to X. Thus, Δ(U_ij - X) ≤ħ_ij and the largest star in U_ij - X contains at most ħ_ij leaves.
Assume that there exists some v ∈ X(R_ij). Suppose that at least one vertex u of T_i ∪ T_j is not deleted by X, i.e. there exists u ∈ V(T_i∪ T_j) ∖ X. Consider X' = (X ∖{v}) ∪{u}, and every neighbor factor F of T_i or T_j that is not a subset of X. (Clearly, it includes U_ij.) It is not hard to verify that Λ(V(F -X), H, X) ≤Λ(V(F - X'), H, X). Furthermore, U_ij is the only neighbor factor of R_ij that is not a subset of X. Thus, we may use X' instead.
So suppose instead that all vertices of T_i∪ T_j are deleted. Since the largest star in U_ij - X contains at most ħ_ij leaves, we have that, even if χ(R_ij) = 0, Λ(V(U_ij -X), H, X) is at least
|T_i| + |T_j|+ χ(B_ij) + χ(R_ij) - β/|T_i| + |T_j|+ |B_ij| + |R_ij| + ħ_ij
= 2⌊α (1-α)^2 x ⌋ +⌈2α^2(1-α)x ⌉ - β/2⌊α (1-α)^2 x ⌋ +⌈2α^2(1-α)x ⌉ + |β| + ħ_ij
> 2α(1-α)x - 2 - |β|/2α(1-α)x + 1 + |β| + 2a_1
> 2α(1-α)x - 3 - 2|β| - 2a_1/2α(1-α)x
> 1 - 4a_1/2α(1-α)x = 1 - 2a_1/α(1-α)(10a_1)^5,
which is larger than α based on formula (<ref>). The fifth line is because 3+2|β| < r < a_1 < 2a_1. Thus, we may assume that χ(R_ij) = 0.
Thus, in all cases, we may assume that the size of the intersection of X and V_1 is q_1. This means that the size of the intersection of X and V_2 is at most q - q_1 = q_2. Next, we consider the vertices of V_2.
If α =0, then χ(S_i)≥ p for every i∈ [k], and χ(U_ij) ≥ p_ij+p'_ij≥ p' for all distinct i,j ∈ [k].
Let α = 0. Consider the neighbor modules R_i, T_i of S_i. We have |R_i| = |β| - s -l and |T_i| = s. Thus, for any vertex v ∈ S_i, the number of neighbors of v outside S_i is |β| - s -l + s = |β| -l. In addition, according to the definition, S_i has p vertices of degree less than l in S_i. Thus, all these p vertices must be in X; otherwise, there exists some vertex u∈ S_i - X such that |N(u)∩ X| ≤ |N(u)| < |β| -l + l = β, a contradiction. Hence, χ(S_i)≥ p.
Now, consider the neighbor modules R_ij, T_i, T_j of U_ij. We have |R_ij| = |β| - 2s - 2l and |T_i| = |T_j| = s. Thus, for any vertex v∈ U_ij, the number of adjacent vertices of v outside U_ij is at most |R_ij| + |T_i| + |T_j| = |β| -2l. In addition, according to the definition, U_ij has p'_ij vertices with degree zero and p_ij vertices of degree less than l. Therefore, these p_ij+p'_ij vertices must be in X; otherwise, there exists some vertex u∈ U_ij - X such that |N(u)∩ X| ≤ |N(u)| < |β| -2l + l = |β| - l ≤β, a contradiction. Recall that p'_ij= p' - p_ij + 1/2(k-1)(l_ij - 2a_n).
Since l_ij≥ 2a_n, we have
p_ij+p'_ij = p_ij + p' - p_ij + 1/2(k-1)(l_ij - 2a_n) ≥ p'.
Hence, we have χ(U_ij) ≥ p_ij+p'_ij≥ p'.
Recall that the minimum degree of S_i - X in X is denoted by δ (S_i - X, X), and that the maximum degree of S_i - X is denoted by Δ(S_i - X).
Consider factor S_i for all i. We may assume that,
* if α = 0, then δ (S_i - X, X) is an element of I, and χ(S_i) = p + c(δ (S_i - X, X) );
* if α∈ (0,1], then Δ(S_i - X) is an element of I, and χ(S_i) = c(Δ(S_i - X)), moreover, S_i ∩ X consists of the first χ(S_i) largest degree internal vertices of the stars graph S_i if α∈ (0,1).
Case 1: since α =0, S_i is a (a_0, I, c)-degree retention graph above (p,l), where c(a_i) = 1/2(a_i - a_n) and c(a_0) = ks.
Since only the number of vertices added to X in S_i affects the vertices outside of S_i, for any fixed χ(S_i), we may assume that the deletion of χ(S_i) vertices maximizes δ (S_i - X, X). Based on Claim <ref>, χ(S_i)≥ p. Moreover, since p+ c(a_n) = p, we have δ (S_i - X, X)≥ a_n according to the definition of S_i and Definition <ref>.
Assume that there exists some i ∈ [k] such that χ(S_i) ≥ p + c(a_0) = p +ks. We know χ(U_ij)≥ p' according to Claim <ref>. Therefore, we have q_2 ≥ |X ∩ V_2|, which equals
∑_i∈ [k] (χ(S_i) + χ(T_i)) + ∑_1≤ i<j ≤ kχ(U_ij)
≥ ∑_i∈ [k]χ(S_i) + ∑_1≤ i<j ≤ kχ(U_ij)
≥ (k-1)p + (p + ks) + k2 p'
= kp + k2 p' + ks
> kp + k2p' + k (s + l - a_n) = q_2,
a contradiction. The last line is because l < r ≤ a_n. Hence, for every i ∈ [k], we have χ(S_i) < p + c(a_0). This means that δ (S_i - X, X)≤ a_1 according to the definition of S_i and Definition <ref>.
Now, we have that a_n ≤δ(S_i - X, X) ≤ a_1, and that p + c(a_n) ≤χ(S_i) < p + c(a_0).
Let j ∈ [n] be the minimum index such that p + c(a_j) ≤χ(S_i). Then, we have χ(S_i) < p+ c(a_j-1), which implies that δ(S_i - X, X) ≤ a_j based on Definition <ref>. Moreover, it is possible to make δ(S_i - X, X) ≥ a_j by adding χ(S_i) vertices to X according to Definition <ref> and Observation <ref>. Thus, we have δ(S_i - X, X) ∈ I.
Now assume that χ(S_i) = p+ c(a_j) + h for some h > 0.
Consider X' obtained from X, but adding exactly p+c(a_j) vertices to X from S_i instead. Then δ(S_i - X', X') can still be a_j. Only the vertices in T_i and R_i might be affected by this change. However, R_i ⊆ X' based on Claim <ref>, and |N(v)∩ X'| ≥ |D_i| > β for any v∈ T_i according to Claim <ref>. So this change does not create any problems. Hence, we may assume that δ(S_i - X, X) = a_j and χ(S_i) = p+c(a_j).
Case 2: first consider 0< α <1. S_i is a (a_0, I ∪{a_n - 1}, c)-degree deletion star graph. According to Lemma <ref>, we have that, apart from the stars with fewer than a_n leaves, S_i consists of exactly c(a_j) - c(a_j-1) stars with a_j-1 leaves for all j∈ [n] as well as exactly c(a_n-1) - c(a_n) stars with a_n leaves. Clearly, the number of stars with at least a_n leaves equals
c(a_n-1) - c(a_n) + ∑_j ∈ [n] (c(a_j) - c(a_j-1))
= c(a_n-1) - c(a_0) = c(a_n-1) = a_0.
Since a_0 > q_2 according to Table <ref>, the number of stars with at least a_n leaves is larger than χ(S_i). It means that χ(S_i) < c(a_n-1).
Furthermore, since only the number of vertices added to X affects the dominating coefficients of the vertices outside of S_i with respect to (H,X), for any fixed χ(S_i), we may assume that the deletion of χ(S_i) vertices maximizes the value of the dominating coefficient of V(S_i)∖ X with respect to (H,X).
Thus, based on Lemma <ref>, we may assume that X(S_i) consists of the first χ(S_i) largest degree internal vertices of the stars graph S_i.
Hence, Δ(S_i - X)∈ I ∪{a_0}.
Moreover, according to case 2 of the proof of Claim <ref>, we have Δ(S_i - X) ≤ a_1, which also means that χ(S_i) ≥ c(a_1) based on the definition of S_i and Definition <ref>.
Overall, we have Δ(S_i - X) ∈ I and c(a_1) ≤χ(S_i) < c(a_n-1).
Let j ∈ [n] be the maximum index such that c(a_j) ≤χ(S_i). Assume χ(S_i) = c(a_j) + h for some h >0. Consider X' obtained from X, but adding the largest c(a_j) internal vertices from S_i instead. Then S_i - X' still has maximum degree a_j, and thus Λ(S_i - X, H,X) = Λ(S_i - X', H, X').
In addition, according to the proof of Lemma <ref>, the dominating coefficient of T_i and R_i with respect to (H, X) is always larger than α, regardless of the value of χ(S_i).
Hence, we may assume that S_i - X has degree a_j ∈ I and χ(S_i) = c(a_j).
Secondly, consider α = 1. The degree of every vertex of H - X is at most -β = (nk)^10000.
As before, we may assume that the χ(S_i) deletions in S_i minimize Δ(S_i - X).
According to case 2 in the proof of Claim <ref>, we have χ(S_i) ≥ c(a_1), and Δ(S_i - X) ≤ a_1.
Moreover, S_i - X cannot have maximum degree a_n - 1 or less, since this requires that χ(S_i) is at least c(a_n - 1) = a_0 = q+1, a contradiction.
Hence, we have a_n ≤Δ(S_i - X) ≤ a_1 and c(a_1) ≤χ(S_i) < c(a_n -1).
Let j ∈ [n] be the maximum index such that c(a_j) ≤χ(S_i).
Note that j is well-defined since χ(S_i) ≥ c(a_1).
If j = n, then we made at least c(a_n) deletions and we have Δ(S_i - X) = a_n.
If instead j < n, we have χ(S_i) < c(a_j+1) and, by Definition <ref>, we have Δ(S_i - X) ≥ a_j. Furthermore, according to Definition <ref> and Observation <ref>, it is possible to make Δ(S_i - X) ≤ a_j by adding χ(S_i) vertices to X. Thus, we may assume that Δ(S_i - X) = a_j.
In either case, Δ(S_i - X) ∈ I, as desired.
Now assume that χ(S_i) = c(a_j) + h for some h > 0.
Consider X' obtained from X, but adding exactly c(a_j) vertices from S_i instead. Then S_i - X' can still have maximum degree a_j. Although the degrees of vertices of T_i and R_i in H - X' have increased by h, any vertex of R_i and T_i has a total degree in H at most |V_2| < n^243, which is much less than - β =(nk)^10000, so this increase cannot create any problem. Hence, we may assume that S_i - X has degree a_j and χ(S_i) = c(a_j).
Recall that integer set I_ij consists of all a + b such that a, b ∈ I and edge (f_i^-1(a), f_j^-1(b)) ∈ E(G), and that ℓ_ij and ħ_ij are the smallest element and the largest element of I_ij, respectively.
Consider factor U_ij for all distinct i, j ∈ [k]. We may assume that
* if α = 0, then δ (U_ij - X, X) is an element of I_ij, and χ(U_ij) = p'_ij + p_ij + c_ij(δ (U_ij - X, X) );
* if α∈ (0,1], then Δ(U_ij - X) is an element of I_ij, and χ(U_ij) = c_ij(Δ(U_ij - X)), moreover, U_ij∩ X consists of the first χ(U_ij) largest degree internal vertices of the stars graph U_ij if α∈ (0,1).
For any two vertex sets V^i and V^j of graph G, suppose there are m_ij pairs of symmetry edges between them. Then, in H, each I_ij has exactly m_ij elements.
Suppose I_ij = {a_1^ij, …, a_m_ij^ij}, where ħ_ij = a_1^ij > … > a_m_ij^ij = ℓ_ij. Assume ∈ [m_ij].
Case 1: since α =0, U_ij consists of an edgeless graph with p'_ij vertices together with a (2a_1 + 1, I_ij, c_ij)-degree retention graph above (p_ij,l), where c_ij(a+b) = 1/2(k-1)(a + b - l_ij), and p'_ij= p' - p_ij + 1/2(k-1)(l_ij - 2a_n). In addition, we have c_ij(2a_1+1)= ks. Assume that a_0^ij = 2a_1 + 1.
Since only the number of vertices added to X affects the vertices outside of U_ij, for any χ(U_ij), we may assume that the selection of χ(U_ij) vertices maximizes δ (U_ij - X, X). Based on the proof of Claim <ref>, we have χ(U_ij) ≥ p_ij + p'_ij and the p'_ij vertices of the edgeless graph in U_ij must be in X. Moreover, since p'_ij+p_ij+ c_ij(ℓ_ij) = p_ij + p'_ij, we have δ (U_ij - X, X)≥ℓ_ij according to the definition of U_ij and Definition <ref>.
Assume there exists some U_ij such that χ(U_ij) ≥ p'_ij + p_ij + c_ij(2a_1+1). In addition, we know c_ij(2a_1+1)= ks and χ(S_i)≥ p according to Claim <ref>. Therefore, q_2 ≥ |X ∩ V_2|, which is equal to
∑_i∈ [k] (χ(S_i) + χ(T_i)) + ∑_1≤ i<j ≤ kχ(U_ij)
≥ ∑_i∈ [k]χ(S_i) + ∑_1≤ i<j ≤ kχ(U_ij)
≥ kp + k2 (p'_ij + p_ij ) + k s
≥ kp + k2 p' + k s
> kp + k2p' + k (s + l - a_n) = q_2,
a contradiction. The penultimate line is because p'_ij + p_ij≥ p' based on Claim <ref>. The last line is because l < r ≤ a_n. Hence, for all U_ij, we have χ(U_ij) < p'_ij + p_ij + c_ij(2a_1+1). This means that δ (U_ij - X, X)≤ħ_ij according to the definition of U_ij and Definition <ref>.
Now, we have that ℓ_ij≤δ(U_ij - X, X) ≤ħ_ij, and that p'_ij + p_ij + c_ij(ℓ_ij) ≤χ(U_ij) < p'_ij + p_ij + c_ij(2a_1+1).
Recall that the p'_ij vertices of the edgeless graph in U_ij must be in X(U_ij), so the number of vertices deleted from the (2a_1 + 1, I_ij, c_ij)-degree retention graph above (p_ij,l) is χ(U_ij) - p'_ij.
Let ∈ [m_ij] be the minimum index such that p'_ij + p_ij + c_ij(a_^ij) ≤χ(U_ij). Then, we have χ(U_ij) < p'_ij + p_ij + c_ij(a_-1^ij), which implies that δ(U_ij - X, X) ≤ a_^ij based on the definition of U_ij and Definition <ref>. Moreover, it is possible to make δ(U_ij - X, X) ≥ a_^ij by adding χ(U_ij) vertices to X according to Definition <ref> and Observation <ref>. Thus, we have δ(U_ij - X, X)=a_^ij, which is an element of I_ij.
Now assume that χ(U_ij) = p'_ij + p_ij + c_ij(a_^ij) + h for some h > 0.
Consider X' obtained from X, but adding exactly p'_ij + p_ij +c_ij(a_^ij) vertices to X from U_ij instead. Clearly, δ(U_ij - X', X') can still be a_^ij, and only the vertices in T_i, T_j, and R_ij might be affected by this change. However, R_ij⊆ X' based on Claim <ref>, and |N(v)∩ X'| ≥min{|D_i|, |D_j|} = y > β for any v∈ T_i ∪ T_j according to Claim <ref>. So this change does not create any problems. Hence, we may assume δ(U_ij - X, X) is a_^ij and χ(U_ij) =p'_ij + p_ij + c_ij(a_^ij).
Case 2: first consider 0< α <1. U_ij is a (a_0, I_ij∪{ℓ_ij - 1}, c_ij)-degree deletion star graph. Assume that a_0^ij = a_0.
According to Lemma <ref>, apart from the stars with fewer than ℓ_ij leaves, U_ij consists of exactly c_ij(a_^ij) - c_ij(a_-1^ij) stars with a_-1^ij leaves for all ∈ [m_ij] as well as exactly c_ij(ℓ_ij - 1) - c_ij(a_m_ij^ij) stars with a_m_ij^ij leaves. It is not hard to calculate that the number of stars with at least a_m_ij^ij= ℓ_ij leaves is c_ij(ℓ_ij - 1)= a_0.
Since a_0 > q_2 according to Table <ref>, the number of stars with at least ℓ_ij leaves is larger than χ(U_ij). It means that χ(U_ij) < c_ij(ℓ_ij-1).
Furthermore, since only the number of vertices added to X affects the dominating coefficients of the vertices outside of U_ij with respect to (H,X), for any χ(U_ij), we may assume that the selection of χ(U_ij) vertices maximizes the value of the dominating coefficient of V(U_ij)∖ X with respect to (H,X). Thus, based on Lemma <ref>, we may assume that V(U_ij) ∩ X consists of the first χ(U_ij) largest degree internal vertices of the stars graph U_ij.
Therefore, Δ(U_ij - X) is an element of set I_ij∪{a_0}.
Moreover, based on case 2 in the proof of Claim <ref>, Δ(U_ij - X) is at most ħ_ij, which also means that χ(U_ij) ≥ c_ij(ħ_ij) based on Definition <ref>.
Overall, we have Δ(U_ij - X) ∈ I_ij and c_ij(ħ_ij) ≤χ(U_ij) < c_ij(ℓ_ij-1).
Let ∈ [n] be the maximum index such that c_ij(a_^ij) ≤χ(U_ij). Assume χ(U_ij) = c_ij(a_^ij) + h for some h >0. Consider X' obtained from X, but adding the largest c_ij(a_^ij) internal vertices from U_ij instead. Then U_ij - X' still can have maximum degree a_^ij, and Λ(U_ij - X, H, X) = Λ(U_ij - X', H, X').
In addition, according to the proof of Claim <ref>, any vertex of T_i, T_j and R_ij always satisfies the linear inequality of the problem no matter what is the value of χ(U_ij).
Hence, we may assume that U_ij - X has degree a_^ij and χ(U_ij) = c_ij(a_^ij).
Secondly, consider α = 1. Every vertex v∈ V(H) - X satisfies that |N(v)| - |N(v) ∩ X| ≤ - β, where -β = (nk)^10000. This implies that the degree of v in H is at most |β| after deleting all vertices of X.
As before, we may assume that the χ(U_ij) deletions in U_ij minimize Δ(U_ij - X). According to case 2 in the proof of Claim <ref>, we have that χ(U_ij) is at least c_ij(ħ_ij ), and that U_ij - X has maximum degree ħ_ij or less.
Moreover, U_ij - X cannot have maximum degree ℓ_ij - 1 or less, since this requires that χ(U_ij) is at least c_ij(ℓ_ij - 1) = a_0 = q_2+1, a contradiction.
Overall, we may assume that ℓ_ij≤Δ(U_ij - X) ≤ħ_ij and c_ij(ħ_ij ) ≤χ(U_ij) < c_ij(ℓ_ij - 1).
Recall that ħ_ij = a_1^ij and ℓ_ij = a_m_ij^ij. Let ∈ [m_ij] be the maximum index such that c_ij(a_^ij) ≤χ(U_ij). Note that is well-defined since χ(U_ij) ≥ c_ij(ħ_ij ).
If = m_ij, then we made at least c_ij(ℓ_ij) deletions and we have Δ(U_ij - X) = ℓ_ij.
If instead < m_ij, we have χ(U_ij) < c_ij(a_+1^ij) and, by Definition <ref>, we have Δ(U_ij - X) ≥ a_^ij. According to Definition <ref> and Observation <ref>, it is possible to make Δ(U_ij - X) ≤ a_^ij by adding χ(U_ij) vertices to X. Thus, we may assume that Δ(U_ij - X) = a_^ij.
In either case, Δ(U_ij - X) ∈ I_ij, as desired.
Now assume that χ(U_ij) = c_ij(a_^ij) + h for some h > 0.
Consider X' obtained from X, but adding exactly c_ij(a_^ij) vertices from U_ij instead. Then U_ij - X' can still have maximum degree a_^ij.
In addition, according to the proof of Claim <ref>, any vertex of T_i, T_j and R_ij has degree in H less than -β no matter what is the value of χ(U_ij).
Hence, we may assume that U_ij - X has degree a_^ij and χ(U_ij) = c_ij(a_^ij).
We sometimes abuse terminology by saying that S_i chooses a_j ∈ I to indicate the following points: (1) if α = 0, then the minimum degree of S_i - X in X is a_j, and χ(S_i) = p + c(a_j); (2) if α∈ (0, 1), then V(S_i) ∩ X consists of the first χ(S_i) largest degree internal vertices of S_i, the maximum degree of S_i - X is a_j, and χ(S_i) = c(a_j); (3) if α =1, then the maximum degree of S_i - X is a_j and χ(S_i) = c(a_j).
Suppose that S_i chose a_j ∈ I. Then, χ(T_i) is at least
* s +l - a_j if α = 0;
* α a_j + β - 1 if α∈ (0,1);
* a_j if α =1.
Consider α = 0. For any vertex v∈ S_i - X in H, we have |N(v)∩ X| ≥β. Since the minimum degree of S_i - X in X is a_j, and R_i ⊆ X based on Claim <ref>, we have |R_i| +χ(T_i)+a_j ≥β, which means that χ(T_i) is at least
β - |R_i| - a_j = β - (|β| - s -l) - a_j,
which is equal to s +l - a_j since β is a positive number here.
Consider α∈ (0,1). Recall s=0 in this case. For any vertex v∈ S_i - X in H, we have |N(v)∩ X| ≥α |N(v)| + β. Based on Claim <ref>, it is enough to make the internal vertex of the largest star, with a_j leaves, in S_i - X satisfy the inequality of the problem. Thus, we have
χ(B_i) + χ(R_i) + χ(T_i)≥α (|B_i|+|R_i|+|T_i|+a_j) + β.
Moreover, χ(B_i) = |B_i| and χ(R_i) = 0 according to Claim <ref> and Claim <ref>. Then, χ(T_i) is at least
α (|R_i|+|T_i|+a_j) + (α -1)|B_i| + β
≥ α (|T_i|+a_j) + (α -1)|B_i| + β
= α ( ⌊α (1-α)^2x ⌋ + s + a_j ) -(1 - α) ⌈α^2(1-α)x ⌉ + β
= α a_j + β + α⌊α(1-α)^2x ⌋ - (1- α)⌈α^2(1-α)x ⌉
> α a_j + β - 1.
Consider α =1. For any vertex v∈ S_i - X, the formula |N(v)| - |N(v)∩ X| ≤ - β should be satisfied. Suppose there are h vertices in X ∩ V(S_i) that are adjacent to v. Then, we have
|B_i| +|R_i| +|T_i|+a_j + h - (χ(B_i) + χ(R_i) + χ(T_i) +h ) ≤ -β.
Moreover, we have that χ(B_i) = |B_i| =0, χ(R_i) = 0 according to Claim <ref> and Claim <ref>. Then, we have χ(T_i) is at least β + |R_i|+|T_i|+a_j, which is equal to
β + (|β| - s )+ s + a_j = a_j since β is a negative number.
We sometimes abuse terminology by saying that U_ij chose a + b ∈ I_ij to indicate the following points: (1) if α = 0, then the minimum degree of U_ij - X in X is a+b, and χ(U_ij) = p_ij + p'_ij + c_ij(a + b); (2) if α∈ (0, 1), then V(U_ij) ∩ X consists of the first χ(U_ij) largest degree internal vertices of U_ij, the maximum degree of U_ij - X is a + b, and χ(U_ij) = c_ij(a + b); (3) if α =1, then the maximum degree of U_ij - X is a+b and χ(U_ij) = c_ij(a+b).
Suppose that U_ij chose a + b ∈ I_ij. Then, χ(T_i) + χ(T_j) is at least
* 2s +2l - a-b if α = 0;
* α a + α b + β - 2 if α∈ (0,1);
* a + b if α =1.
Consider α = 0. For any vertex v∈ U_ij - X, the formula |N(v)∩ X| ≥β should be satisfied. Since the minimum degree of U_ij - X in X is a+b, and R_ij⊆ X based on Claim <ref>, we have |R_ij| +χ(T_i)+χ(T_j)+a +b ≥β. Thus, χ(T_i)+χ(T_j) is at least
β - |R_ij| - a -b
= β - (|β| - 2s -2l) - a -b
= 2s +2l - a-b
since β >0 in this case.
Consider α∈ (0,1). Recall s=0 in this case. For any vertex v∈ U_ij - X, the formula |N(v)∩ X| ≥α |N(v)| + β should be satisfied. Based on Claim <ref>, it is enough to make the internal vertex of the largest star, with a+b leaves, in U_ij - X satisfy the inequality of the problem. Thus, we have
χ(B_ij) + χ(R_ij) + χ(T_i) +χ(T_j)≥α (|B_ij|+|R_ij|+|T_i| + |T_j| + a+b) + β.
Moreover, we have that χ(B_ij) = |B_ij| and χ(R_ij) = 0 according to Claim <ref> and Claim <ref>. Then, χ(T_i) + χ(T_j) is at least
α (|R_ij|+|T_i| + |T_j| + a+b) - (1-α)|B_ij| + β
≥ α (|T_i| + |T_j| + a+b) - (1-α)|B_ij| + β
= α ( 2⌊α (1-α)^2 x ⌋ + 2s + a+b ) -(1 - α) ⌈ 2α^2(1-α)x ⌉ + β
= α a + α b + β + 2α⌊α(1-α)^2x ⌋ - (1- α)⌈ 2α^2(1-α)x ⌉
> α a + α b + β - 2.
Consider α =1. For any vertex v∈ U_ij - X, the formula |N(v)| - |N(v)∩ X| ≤ - β should be satisfied. Suppose there are h vertices in X ∩ V(U_ij) that are adjacent to v. Then, we have
|B_ij| +|R_ij| +|T_i|+|T_j|+a+b + h
- (χ(B_ij) + χ(R_ij) + χ(T_i) +χ(T_j) +h ) ≤ -β.
Moreover, we have that χ(B_ij) = |B_ij| =0 and that χ(R_ij) = 0 based on Claim <ref> and Claim <ref>. Then, we have
χ(T_i) +χ(T_j) is at least β + |R_ij|+|T_i|+|T_j|+a+b, which is equal to β + (|β| - 2s )+ 2s + a+b = a+b since β is a negative number here.
For each distinct i, j ∈ [k], if S_i chose a ∈ I and S_j chose b ∈ I, then U_ij chose a + b.
For each i ∈ [k], according to Claim <ref>, we will denote by _i the element of I that S_i chose. For each distinct i,j ∈ [k], we define _i, _i, and ℘_ij as follows:
* if α = 0, then _i = s+l-_i, _i = p + s +l - 1/2 (_i + a_n), and ℘_ij = p_ij + p'_ij;
* if α∈ (0,1), then _i = α_i - |β| -1, _i = t + α/2_i - |β| - 2, and ℘_ij = 0;
* if α = 1, then _i = _i, _i = s + 1/2_i, and ℘_ij = 0.
Moreover, we claim that
χ(S_i) + χ(T_i) ≥χ(S_i) + _i ≥_i.
The first inequality is true since χ(T_i) ≥_i according to Claim <ref>. For the second inequality, let us consider χ(S_i) + _i, which equals
* p+ 1/2 (_i - a_n) + s+l-_i = _i if α = 0;
* t - ⌈α/2_i⌉ + α_i - |β| -1 > _i if α∈ (0,1);
* s - ⌈1/2_i ⌉ + _i = _i if α = 1.
Assume Ψ_ij = ℘_ij + c_ij(_i + _j) for each distinct i,j∈ [k].
Let us define functions t_ij: I_ij→ℕ for all distinct i, j ∈ [k] as follows. Assume U_ij chose a' + b' ∈ I_ij. Suppose that T_ij⊆ T_i ∪ T_j is the minimum possible vertex set added to X such that every vertex v∈ U_ij - X satisfies that |N(v)∩ X| ≥α |N(v)| + β. Then, we use t_ij(a' + b') to denote the number of vertices of T_ij.
According to Claim <ref>, for any a'+ b' ∈ I_ij, we have t_ij(a' + b') is at least
* 2s+2l-a'-b' if α = 0;
* α a' + α b' -|β| -2 if α∈ (0,1);
* a'+b' if α = 1.
Assume U_ij chose a' + b'. Then, according to Claim <ref>, we have a' + b' ∈ I_ij. We divide all U_ij into three groups, denoted by U_<, U_=, and U_>, as follows.
* U_< consists of all U_ij such that a'+b' < _i + _j if α∈ (0,1], and that a'+b' > _i + _j if α = 0;
* U_= consists of all U_ij such that a'+b' = _i + _j;
* U_> consists of all U_ij such that a'+b' > _i + _j if α∈ (0,1], and that a'+b' < _i + _j if α = 0.
Furthermore, U denotes the union of U_<, U_=, and U_>.
U_≥ denotes the union of U_= and U_>.
U_≤ denotes the union of U_< and U_=.
In addition, since |a'+b' - _i - _j| ≥ r, it is not hard to verify that Ψ_ij > χ(U_ij) if U_ij∈ U_>, that Ψ_ij = χ(U_ij) if U_ij∈ U_=, and that Ψ_ij < χ(U_ij) if U_ij∈ U_<.
To prove the claim, it suffices to show that U_< and U_> are empty (this is because U_ij∈ U_= is only possible if U_ij chose _i + _j, since all the sum pairs of I_ij are distinct).
The rough idea is as follows.
If each U_ij chose the correct _i + _j, then each of them will incur a deletion cost of Ψ_ij, and the total number of vertices in the deletion set X ends up being at most q.
If U_< is non-empty, it incurs an extra deletion cost with respect to Ψ_ij with no real benefit. The complicated case is when U_> is non-empty. In this case, U_ij - X incurs fewer deletions, which is χ(U_ij), than if it had chosen _i + _j, which would have deleted Ψ_ij vertices. However, this needs to be compensated with extra deletions in T_i and T_j. By using a charging argument, we will show that the sum of extra deletions required for all the U_> members outweighs the deletions saved in the U_ij's of U_>.
First, let us consider the U_> set. Suppose that it is non-empty. Recall that each T_i requires at least _i deletions by Claim <ref>.
Denote by X_0(T_i) an arbitrary subset of X(T_i) containing exactly _i vertices, and denote X_1(T_i) = X(T_i) ∖ X_0(T_i). (Of course, X_1(T_i) could occasionally be an empty set.)
We also use the notation χ_0(T_i) := |X_0(T_i)| and χ_1(T_i) := |X_1(T_i)|.
The X_0(T_i) corresponds to (part) deletions we had to do because of S_i, and X_1(T_i) corresponds to “extra” deletions. We will write X_1 = ⋃_i ∈ [k]X_1(T_i).
Consider some U_ij∈ U_>. We know U_ij chose a'+ b'. We define Δ_ij = t_ij(a' + b') - _i - _j. We have the following statements.
* If α = 0, then t_ij(a' + b') ≥ 2s +2l - a'-b' , which is larger than _i + _j = 2s +2l - _i - _j since a'+b' < _i + _j in this case. Moreover, Δ_ij is at least _i + _j - a'- b'.
* If α∈ (0,1), then t_ij(a' + b') ≥α a' + α b' - |β| - 2 , which is larger than _i + _j = α_i + α_j - 2|β| -2 since a'+b' > _i + _j in this case. Moreover, Δ_ij is at least α( a'+ b' - _i - _j).
* If α =1, then t_ij(a' + b') ≥ a' + b', which is larger than _i + _j = _i + _j since a'+b' > _i + _j in this case. Moreover, Δ_ij is at least a'+ b' - _i - _j.
Thus, we always have Δ_ij > 0 for every U_ij∈ U_>.
In addition, (T_i ∪ T_j) ∩ X contains at least t_ij(a' + b') vertices, which means that χ(T_i) + χ(T_j) ≥ t_ij(a' + b'). Furthermore, we have χ_0(T_i) + χ_0(T_j) = _i + _j. Therefore, χ_1(T_i) + χ_1(T_j) ≥ t_ij(a' + b') - (_i + _j) = Δ_ij. Moreover, X_1 is not empty since U_> is not empty.
We define a (charging) function : X_1 →ℝ, illustrated in Figure <ref>. We assume that each vertex of X_1 starts with (v) = 0 and, in the description that follows, we describe how charges are added incrementally. For a U_ij∈ U_>, we have that X_1(T_i) ∪ X_1(T_j) contains at least Δ_ij vertices. Then, for each extra vertex v ∈ X_1(T_i) ∪ X_1(T_j) that is “related” to U_ij, we increase (v) by 1/(k - 1); we repeat this for every U_ij∈ U_>, and this concludes the charging procedure. Note that the “related” vertices here are those that we need to add, together with X_0(T_i) ∪ X_0(T_j), to X so that every vertex of U_ij - X satisfies the linear inequality of the problem. Moreover, the number of the related vertices is exactly Δ_ij.
First observe that after the charging procedure is done, each U_ij has charged at most 1/(k - 1) to each extra vertex v ∈ X_1(T_i) ∪ X_1(T_j) that related to U_ij.
Moreover, for any i ∈ [k] and any v ∈ X_1(T_i), at most k - 1 of the U_ij sets can charge that 1/(k - 1) to v, since there are only k - 1 of the U_ij sets that have i in their subscripts.
This means that (v) ≤ (k - 1) · 1/(k - 1) = 1, and therefore that
∑_v ∈ X_1(v) ≤ |X_1|.
We further observe that, for any distinct i,j∈ [k], the total charge added by U_ij across the extra deletion vertices X_1(T_i) ∪ X_1(T_j) is exactly 1/k-1Δ_ij. Thus, the total charge added by all elements of U_> across the extra deletion vertices X_1 is
∑_U_ij∈ U^>1/k-1Δ_ij = ∑_v ∈ X_1(v).
Moreover, the number of deletions in U_ij is χ(U_ij) = ℘_ij + c_ij(a' + b').
This saves us some deletions compared to the number Ψ_ij, but this saving in cost is much less than the total charge spread by U_ij, as we demonstrate below.
First, we demonstrate the saving in each U_ij as follows.
Ψ_ij - χ(U_ij)
= ℘_ij + c_ij(_i + _j) - ℘_ij - c_ij(a' + b')
= c_ij(_i + _j) - c_ij(a' + b').
If α = 0, then the formula (<ref>) equals
1/2(k-1)(_i + _j - l_ij) - 1/2(k-1)(a' + b' - l_ij)
= 1/k-1(_i + _j - a' - b') - 1/2(k-1)(_i + _j - a' - b')
≤ 1/k-1Δ_ij - 1/2(k-1)(_i + _j - a' - b')
≤ 1/k-1Δ_ij- 1/2(k-1) r
= 1/k-1Δ_ij- (k-1)k^3.
If α∈ (0,1), then the formula (<ref>) equals
1/k-1(⌈α a'/2⌉ + ⌈α b'/2⌉) - 1/k-1(⌈α_i/2⌉ + ⌈α_j/2⌉)
< α(a' + b' - _i - _j) + 4/2(k-1)
= α (a' + b' - _i - _j)/k-1 - α (a' + b' - _i - _j) - 4/2(k-1)
≤ 1/k-1Δ_ij - α (a' + b' - _i - _j) - 4/2(k-1)
≤ 1/k-1Δ_ij - 1/2(k-1) (α r - 4)
= 1/k-1Δ_ij - 1/2(k-1)( α⌈10k(|β|+10)/α(1-α)⌉^10 - 4 )
< 1/k-1Δ_ij - (10k(|β|+10)/α(1-α))^9.
If α = 1, then the formula (<ref>) equals
1/k-1(⌈a'/2⌉ + ⌈b'/2⌉) - 1/k-1(⌈_i/2⌉ + ⌈_j/2⌉)
= a' + b' - _i - _j/k-1 - a' + b' - _i - _j/2(k-1)
≤ 1/k-1Δ_ij - 1/2(k-1)( a' + b' - _i - _j )
≤ 1/k-1Δ_ij - r/2(k-1)
= 1/k-1Δ_ij - (k-1)k^3.
The second line is due to the fact that a',b',_i, and _j are even numbers.
Moreover, we define r_1 as follows. If α∈{0,1}, then r_1= (k-1)k^3. If α∈ (0,1), then r_1= (10k(|β|+10)/α(1-α))^9. Thus, for any U_ij∈ U_> and any α∈ [0,1], we have
Ψ_ij - χ(U_ij) ≤1/k-1Δ_ij - r_1.
Then, summing over every U_ij∈ U_>, we get that the all saving deletions are
∑_U_ij∈ U_>(Ψ_ij - χ(U_ij) )
≤ ∑_U_ij∈ U_>(1/k-1Δ_ij - r_1 )
≤ ∑_U_ij∈ U_>1/k-1Δ_ij - r_1
= ∑_v ∈ X_1(v) - r_1
≤ |X_1| - r_1 ,
where the second line is based on formula (<ref>), and the last and penultimate lines are based on formulas (<ref>) and (<ref>), respectively.
Secondly, let us consider the U_< set. Suppose that it is non-empty and some U_ij∈ U_<. Let us consider the value of χ(U_ij) = ℘_ij + c_ij(a' + b').
If α =0, then a' + b' ≥_i + _j + r. We have
χ(U_ij) = ℘_ij+ 1/2(k-1)(a' + b' - l_ij)
≥ ℘_ij + 1/2(k-1)(_i + _j - l_ij) + 1/2(k-1)r
= Ψ_ij + 1/2(k-1)r.
If α∈ (0,1), then a' + b' ≤_i + _j - r and ℘_ij = 0. We have
χ(U_ij) = t - 1/k-1( ⌈α a'/2⌉ + ⌈α b'/2⌉)
> t - 1/k-1( α a'/2 + α b'/2 +2 )
≥ t - 1/2(k-1)( α (_i + _j - r) + 4 )
≥ t - 1/k-1( ⌈α_i/2⌉ + ⌈α_j/2⌉) + 1/2(k-1)( α r - 4 )
= c_ij(_i + _j) + 1/2(k-1)( α r - 4 )
= ℘_ij + c_ij(_i + _j) + 1/2(k-1)( α r - 4 )
= Ψ_ij + 1/2(k-1)( α r - 4 ).
If α =1, then a' + b' ≤_i + _j - r and ℘_ij = 0. We have
χ(U_ij) = t - 1/k-1( ⌈a'/2⌉ + ⌈b'/2⌉)
= t - 1/k-1( a'/2 + b'/2)
≥ t - 1/2(k-1)( _i + _j -r )
≥ t - 1/k-1( ⌈_i/2⌉ + ⌈_j/2⌉) + r/2(k-1)
= c_ij(_i + _j) + r/2(k-1)
= ℘_ij + c_ij(_i + _j) + r/2(k-1)
= Ψ_ij + r/2(k-1).
By replacing the values of r, we have that χ(U_ij) ≥Ψ_ij + (k-1)k^3 if α∈{0,1}, and that χ(U_ij) > Ψ_ij + (10k(|β|+10)/α(1-α))^9 if α∈ (0,1). This means that χ(U_ij) ≥Ψ_ij + r_1.
Then, summing over every U_ij∈ U_<, we have
∑_U_ij∈ U_<χ(U_ij) ≥∑_U_ij∈ U_<(Ψ_ij + r_1 )
≥∑_U_ij∈ U_<Ψ_ij + r_1.
We can now finish the proof of the claim.
We have that the size of X ∩ V_2 is
∑_i ∈ [k] (χ(S_i) + χ(T_i)) + ∑_1 ≤ i < j ≤ kχ(U_ij).
If U_> is not empty, then formula (<ref>) equals
∑_i ∈ [k] (χ(S_i) + χ_0(T_i)) + ∑_i ∈ [k]χ_1(T_i) + ∑_1 ≤ i < j ≤ kχ(U_ij)
= ∑_i ∈ [k] (χ(S_i) + _i) + |X_1| + ∑_1 ≤ i < j ≤ kχ(U_ij)
≥ ∑_i ∈ [k]_i + |X_1| + ∑_1 ≤ i < j ≤ kχ(U_ij)
≥ ∑_i ∈ [k]_i + r_1 + ∑_U_ij∈ U_>( Ψ_ij - χ(U_ij) ) + ∑_1 ≤ i < j ≤ kχ(U_ij)
= ∑_i ∈ [k]_i + r_1 + ∑_U_ij∈ U_>( Ψ_ij - χ(U_ij) ) + ∑_U_ij∈ U_>χ(U_ij) + ∑_U_ij∈ U_≤χ(U_ij)
= ∑_i ∈ [k]_i + r_1 + ∑_U_ij∈ U_>Ψ_ij + ∑_U_ij∈ U_≤χ(U_ij)
≥ ∑_i ∈ [k]_i + r_1 + ∑_U_ij∈ U_>Ψ_ij + ∑_U_ij∈ U_≤Ψ_ij
= ∑_i ∈ [k]_i + r_1 + ∑_U_ij∈ UΨ_ij.
The third line and the fourth line are based on formulas (<ref>) and (<ref>), respectively. The penultimate line is due to inequality (<ref>), and it equals the third-to-last line when U_< is empty. Next, we demonstrate that formula (<ref>) is larger than q_2.
According to the proof in Lemma <ref>, we have that, if α∈{0,1}, then
∑_i ∈ [k]_i + r_1 + ∑_1 ≤ i < j ≤ kΨ_ij = q_2 + r_1 > q_2,
a contradiction.
Let us consider the case α∈ (0,1). According to formula (<ref>) in Lemma <ref>, we have
∑_1 ≤ i < j ≤ kΨ_ij
= ∑_1 ≤ i < j ≤ k(t - 1/k-1(⌈α/2_i ⌉ + ⌈α/2_j ⌉))
= \binom{k}{2} t - ∑_i ∈ [k]⌈α/2_i ⌉ .
Moreover, we have
∑_i ∈ [k]_i + r_1
= ∑_i ∈ [k]( t + α/2_i - |β| - 2) + r_1
> ∑_i ∈ [k]( t + ⌈α/2_i ⌉ - |β| - 3) + r_1
≥ ∑_i ∈ [k]( t + ⌈α/2_i ⌉) - k |β| - 3k + (10k(|β|+10)/α(1-α))^9
> ∑_i ∈ [k]( t + ⌈α/2_i ⌉) + 2k|β|.
Therefore, we have
∑_i ∈ [k]_i + r_1 + ∑_1 ≤ i < j ≤ kΨ_ij
> ∑_i ∈ [k]( t + ⌈α/2_i ⌉) + 2k|β| + \binom{k}{2} t - ∑_i ∈ [k]⌈α/2_i ⌉
= k t + \binom{k}{2} t + 2k|β|= q_2,
which is again a contradiction.
If U_> is empty but U_< is non-empty, then formula (<ref>) equals
∑_i ∈ [k] (χ(S_i) + χ(T_i)) + ∑_U_ij∈ U_=χ(U_ij) + ∑_U_ij∈ U_<χ(U_ij)
≥∑_i ∈ [k]_i + ∑_U_ij∈ U_=Ψ_ij + ∑_U_ij∈ U_<χ(U_ij)
≥∑_i ∈ [k]_i + ∑_U_ij∈ U_=Ψ_ij +∑_U_ij∈ U_<Ψ_ij + r_1
= ∑_i ∈ [k]_i + ∑_U_ij∈ UΨ_ij + r_1
> q_2,
a contradiction. Note that the penultimate line is the same as formula (<ref>) that is larger than q_2, and that the second and third lines are based on formula (<ref>) and formula (<ref>), respectively.
Overall, both U_< and U_> are empty, and thus every U_ij chose _i + _j.
This is all we need to construct a multicolored clique.
To this end, we define C = {f^-1_i(_i) : i ∈ [k] }.
We claim that C is a clique.
By Claim <ref>, each S_i chooses some _i and thus |C| = k.
Now let f^-1_i(_i), f^-1_j(_j) be two vertices of C, where i<j.
Then _i, _j were chosen by S_i and S_j, respectively, and by Claim <ref>
we know that U_ij chose _i + _j.
By the construction of the U_ij solution table, this is only possible if both (f^-1_i(_j), f^-1_j(_i)) and (f^-1_i(_i), f^-1_j(_j)) are in E(G). Therefore, (f^-1_i(_i), f^-1_j(_j))∈ E(G) and C is a clique.
|
http://arxiv.org/abs/2307.00763v1
|
20230703055927
|
Quantum--classical correspondence and dissipative to dissipationless crossover in magnetotransport phenomena
|
[
"Akiyoshi Yamada",
"Yuki Fuseya"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
^1The Institute for Solid State Physics, the University of Tokyo, Chiba 277-8581, Japan
^2Department of Engineering Science, University of Electro-Communications, Chofu, Tokyo 182-8585, Japan
^3Institute for Advanced Science, University of Electro-Communications, Chofu, Tokyo 182-8585, Japan
The three-dimensional magneto-conductivity tensor was derived in a gauge-invariant form based on the Kubo formula, taking into account quantum effects under a magnetic field such as Landau quantization and quantum oscillations.
We analytically demonstrated that the quantum formula of the magneto-conductivity can be obtained by adding a quantum oscillation factor to the classical formula.
This result establishes the quantum–classical correspondence, which has long been missing in magnetotransport phenomena.
Moreover, we found a dissipative-to-dissipationless crossover in the Hall conductivity by paying special attention to the analytic properties of the thermal Green's function.
Finally, by calculating the magnetoresistance of semimetals, we identified a phase shift in the quantum oscillations originating from the dissipationless transport that predominates at high fields.
Quantum–classical correspondence and dissipative to dissipationless crossover in magnetotransport phenomena
Yuki Fuseya^2, 3
August 1, 2023
============================================================================================================
§ INTRODUCTION
Magnetotransport phenomena constitute one of the oldest research topics in solid state physics <cit.>.
In particular, applying a magnetic field can drastically alter the electron transport through the Lorentz force.
From the classical equation of motion, the magneto-conductivity tensor can be expressed in the following form <cit.>:
σ̂_ cl = σ_0 ( [ 1/(ω_cτ)^2+1 -ω_cτ/(ω_cτ)^2+1 0; ω_cτ/(ω_cτ)^2+1 1/(ω_cτ)^2+1 0; 0 0 1 ]),
where σ_0 is the conductivity at zero magnetic field, ω_c is the cyclotron frequency, and τ is the relaxation time.
A similar equation can be derived based on the semiclassical Boltzmann equation <cit.>.
The physical meaning of the classical formula, Eq. (<ref>), is clear, and it is straightforward to handle. Thus, Eq. (<ref>) has been a powerful tool for analyzing the transport properties of good metals, whose chemical potential is much larger than the cyclotron energy, μ≫ħω_c.
On the other hand, the validity of the (semi-) classical theory is lost when ħω_c ≳μ or ω_c τ≳ 1, where the quantum effect, such as the Landau quantization, plays a crucial role.
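For readers who wish to evaluate the classical formula, Eq. (<ref>), directly, a minimal Python sketch is given below; the numerical values passed to the function are arbitrary placeholders rather than parameters taken from this work.

import numpy as np

def classical_sigma(sigma0, omega_c, tau):
    # Classical magneto-conductivity tensor (field along z), as quoted in the Introduction.
    x = omega_c * tau
    d = x**2 + 1.0
    return sigma0 * np.array([[1.0 / d, -x / d,   0.0],
                              [x / d,    1.0 / d, 0.0],
                              [0.0,      0.0,     1.0]])

print(classical_sigma(sigma0=1.0, omega_c=2.0, tau=1.0))   # illustrative values only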
The magnetotransport phenomena have recently attracted renewed interest, especially in topological materials <cit.>.
The effect of Landau quantization cannot be neglected in these materials, even for magnetic fields of several teslas, because most of them have a small effective mass yielding high cyclotron energy.
Nevertheless, researchers have relied only on the (semi-)classical formula to analyze the experimental data of topological materials, because applying the previous quantum formulas to actual materials is highly complex except for a few simple cases <cit.>.
The lack of understanding of the connection between the quantum and the classical formulas further prevents us from analyzing the experimental data from the quantum perspective.
The quantum counterpart of the classical formula can be obtained based on the Kubo formula <cit.>.
For two-dimensional systems, the Kubo formula has successfully elucidated the underlying physics of the quantum Hall effect <cit.>.
Moreover, since the late 1950s, the fundamentals of magnetotransport phenomena have been investigated for three-dimensional (3D) systems <cit.>, among which certain theoretical investigations followed a zero-temperature formulation <cit.>.
Kubo et al. showed that the quantum formula agrees with the classical one, Eq. (<ref>), in the weak magnetic field limit, but the quantum–classical correspondence could not be obtained in the strong field, where the effects of the Landau quantization and the quantum oscillation are prominent <cit.>.
Abrikosov derived another quantum formula that is valid only in the strong-field region, so the quantum–classical correspondence again could not be obtained.
More elegant formulations were established using the Kubo formula and the thermal Green's function by Fukuyama et al. <cit.> and Shiba et al. <cit.>.
Although Shiba's formulations are valid in the entire magnetic field region (from weak to strong), the obtained formula was too complicated to uncover the clear quantum–classical correspondence.
In this study, we revisit the quantum magnetotransport of 3D free electrons based on the Kubo formula <cit.>.
We succeeded in obtaining a simple quantum formula, Eq. (<ref>), by paying special attention to the analytic nature of Green's function and putting some originality into the derivation.
The developed formula is essentially equivalent to Abrikosov's formula at strong fields and to the theory by
Shiba et al. for the entire field range, but it takes a much simpler form.
The quantum–classical correspondence is clear in the developed formula: just add the quantum oscillation factor Q(B) in the diagonal conductivity σ_xx,yy.
Furthermore, as a by-product, we found that a crossover from the dissipative term to the dissipationless one occurs in the Hall conductivity σ_xy with increasing magnetic field, whereas the diagonal conductivity σ_xx is dominated by the dissipative term over the entire range of the magnetic field.
§ THEORY
§.§ Magnetotransport theory based on the Kubo formula and Green's function
The components of the conductivity tensor were evaluated considering the Kubo formula in a magnetic field <cit.>, as follows:
σ_ij = 1/i.∂Φ_ij/∂ω|_ω=0,
Φ_ij = -2e^2/β V m^2∑_n, k Tr[ 𝒢π_i 𝒢π_j],
where Φ denotes the current-current correlation function [Fig. <ref> (a)], π_i indicates the kinematical momentum operator, and 𝒢=(iε_n-ℋ)^-1 represents the thermal Green's function (ε_n denotes the Matsubara frequency) <cit.>.
ℋ denotes the Hamiltonian of a 3D free electron in the magnetic field.
β=1/k_BT, and V(=L^3) denotes the system volume.
In the magnetic field, the velocity operator is given by π_i/m=(p_i+eA_i)/m, where the electron charge is defined as -e<0.
Considering the trace over the Landau indices ℓ,ℓ', the correlation function can be rewritten in the following form:
Φ_ij(iω_λ) = -2e^2N_L/Lm^2∑_ℓ,ℓ'⟨ℓ|π_i|ℓ'⟩⟨ℓ'|π_j|ℓ⟩ F_ℓ,ℓ'(iω_λ),
F_ℓ,ℓ'(iω_λ) = 1/β∑_n, k_zg_ℓ'(k_z,iε_n)g_ℓ(k_z,iε_n-iω_λ),
where g_ℓ=(iε_n-E_ℓ)^-1 and E_ℓ=(ℓ+1/2)ħω_c+ħ^2k_z^2 /2m.
N_L(=eB/2πħ) is the Landau degeneracy <cit.>.
ω_c=eB/m denotes the cyclotron frequency, where B denotes the magnetic field.
With the magnetic field along the z-axis, the π operators in the x-y plane can be defined as π_x=√(ħ eB/2)(a^++a^-), π_y=i√(ħ eB/2)(-a^++a^-), where a^+ and a^- represent the raising and lowering operators for Landau indices, respectively.
From the commutation relation of the kinematical momentum in a magnetic field, [π_i, π_j]=-iħ e ε_ijkB_k, a commutation between π_x ↔π_y implies the sign inversion of the magnetic field, which guarantees the Onsager's reciprocal relation σ_xy(-B)=σ_yx(B).
Furthermore, we do not need to assume a specific gauge, because the vector potential is transferred to the magnetic field through this commutation relation. Thus, the formulas obtained below are all gauge invariant.
We can conduct the summation over ℓ' and obtain the following form:
Φ_xx(iω_λ) = -e^3ħ B/Lm^2∑_ℓ[ℓ F_ℓ,ℓ-1+(ℓ+1)F_ℓ,ℓ+1],
Φ_yx(iω_λ) = ie^3ħ B/Lm^2∑_ℓ[ℓ F_ℓ,ℓ-1-(ℓ+1)F_ℓ,ℓ+1].
The summation with respect to ε_n can be rewritten into the path integration along the imaginary axis in the complex plane by introducing the Fermi distribution function n_F(x)=1/(e^β(x-μ)+1) <cit.>.
Furthermore, the path of the integral was transformed into four separate improper integrals along the real axis to avoid crossing the singularities of the Green's functions [Im[z]=0, Im[z]=ω_λ; cf. Fig. <ref> (b)].
After an analytic continuation, iω_λ→ħω+i δ,
F_ℓ',ℓ can be expressed as follows:
F_ℓ,ℓ'=-1/2π i∑_k_z∫_-∞^∞dx n_F(x)[G_ℓ'^R(x+ħω)G_ℓ^R(x) -G_ℓ'^R(x+ħω)G_ℓ^A(x) .
. +G_ℓ'^R(x)G_ℓ^A(x-ħω) -G_ℓ'^A(x)G_ℓ^A(x-ħω) ],
where G^A(R)_ℓ(x)=(x-E_ℓ∓ iΓ)^-1 represents the advanced (retarded) Green's function, respectively.
We introduced the imaginary part of the self-energy as Γ=ħ/2τ.
The second and third terms, including G^A G^R and G^R G^A, are referred to as the “Fermi surface terms," accounting for the contribution from the non-equilibrium transport in the vicinity of the Fermi energy, whereas the first and fourth terms with G^RG^R (G^AG^A) are referred to as the “ Fermi sea terms" corresponding to the equilibrium transport [cf. Fig. <ref> (a)].
§.§ Fermi surface terms
We first investigated the contributions from the Fermi surface terms at low temperatures.
The Fermi surface terms are proportional to -∂ n_F / ∂ x, which can be approximated by a delta function at low temperatures.
The transverse components of the magnetoconductivity tensor, σ_xx, yy and σ_xy, yx, were calculated using Eqs.(<ref>), (<ref>), and (<ref>).
After integrating with respect to k_z, the following equation can be obtained:
σ_xx = σ_xx^ surf( I)+σ_xx^ surf( II),
σ_yx = σ_yx^ surf( I)+σ_yx^ surf( II),
where
σ_xx^ surf( I) = σ_01/(ω_cτ)^2+1∑_ℓ 3(γω_cτ)^2 Re[ℓ+1/2/K_ℓ], σ_xx^ surf( II)=-σ_01/(ω_cτ)^2+1∑_ℓ3γ^2(ω_cτ)^3/2 Im[1/K_ℓ],
σ_yx^ surf( I) = σ_0ω_cτ/(ω_cτ)^2+1∑_ℓ3(γω_cτ)^2 Re[ℓ+1/2/K_ℓ], σ_yx^ surf( II)=σ_01/(ω_cτ)^2+1∑_ℓ3(γω_cτ)^2/2 Im[1/K_ℓ],
K_ℓ = √(1-(2ℓ+1)ω_cτγ+iγ)
σ_0 is the electric conductivity in zero-field given by e^2τ/m1/3π^2(√(2mμ)/ħ)^3 and γ=Γ/μ.
During the derivation of the above form, we transformed the equation to be compatible with the classical formula, say, by factorizing 1/[(ω_c τ)^2 + 1].
The longitudinal component of the correlation function can be expressed as follows:
Φ_zz(iω_λ) =-2e^2/β Lm^2∑_ℓ,n,k_zp_z^2g_ℓ(k_z,iε_n)g_ℓ(k_z,iε_n-iω_λ).
Note that the velocity operator along the magnetic field (z axis) contains no transition component across different Landau indices.
Then, the longitudinal conductivity from the Fermi surface term is obtained as follows:
σ_zz = e^2τ/m N_e(B),
N_e(B) = (√(2mμ)/ħ)^3γω_cτ/π^2∑_ℓ Re[K_ℓ].
where N_e is independent of the magnetic field and equivalent to the carrier number for Γ≪μ, as shown in Fig. <ref>.
The remaining components, such as σ_xz and σ_yz, are zero due to the relation:
⟨ℓ|π_x,y|ℓ'⟩⟨ℓ'|π_z|ℓ⟩ = (C_ℓδ_ℓ'-1,ℓ+C'_ℓδ_ℓ'+1,ℓ)δ_ℓ',ℓ
= 0.
(Note that π_x and π_y are off-diagonal, and π_z is the diagonal based on the Landau indices.)
§.§ Fermi sea terms
The Fermi sea terms, consisting of G^RG^R or G^AG^A, include the contribution from deep energy states due to a factor of n_F, which is the origin of the dissipationless contribution.
These terms can be transformed into integrals over the derivative of n_F by performing integration by parts to obtain their analytical forms, which turned out to be a key step in obtaining the clear quantum–classical correspondence in the end.
The diagonal component of the transverse conductivity tensor contains only a single term from the sea term as follows:
σ_xx^ sea = σ_0∑_ℓ3γ^2ω_cτ/2 Im[1/K_ℓ]
Unexpectedly, this form perfectly cancels one of the Fermi surface terms, σ_xx^ surf( II) in Eq.(<ref>), at strong fields as
σ_xx^ sea+σ_xx^ surf( II)=0
( for ω_c τ≫ 1).
Therefore, only the dissipative term σ_xx^ surf ( I) contributes to the diagonal conductivity, and no dissipationless term remains.
The Hall conductivity includes two sea terms:
σ_yx^ sea( I) = σ_0∑_ℓ 3γ^2ω_cτ Re[ℓ+1/2/K_ℓ]
σ_yx^ sea( II) = σ_0∑_ℓ 3γ Re[K_ℓ].
In contrast, at low fields, these terms cancel each other as
σ_yx^ sea ( I) + σ_yx^ sea( II) =0
( for ω_c τ≪ 1 and Γ≪μ),
At high fields, on the other hand, the first term σ_yx^ sea( I) neutralizes σ_yx^ surf( I) in Eq.(<ref>) as
σ_yx^ surf( I) + σ_yx^ sea( I) =0
( for ω_c τ≫ 1).
All cancellation relations are summarized in Fig. <ref> (b).
The second term σ_yx^ sea( II) can be expressed as follows:
σ_yx^ sea( II) = eN_e(B)/B,
which is independent of scattering as it hardly depends on τ.
This aspect is consistent with the classical description of the Hall effect, which is reproduced by Kubo <cit.> for weak field and introduced phenomenologically by Shiba et al. <cit.> and Ando et al. <cit.>.
The sea term in σ_zz was evaluated as follows:
σ_zz^ sea = σ_0 ∑_ℓ 3γ^2ω_cτ Im[1/K_ℓ].
This term is substantially smaller than the surface term if the scattering is weak, μ≫Γ.
§.§ Quantum–classical correspondence
Finally, the total conductivity tensor was obtained for Γ≪μ as follows:
σ̂ = σ_0 ( [ Q(B)/(ω_cτ)^2+1 -ω_cτ/(ω_cτ)^2+1 0; ω_cτ/(ω_cτ)^2+1 Q(B)/(ω_cτ)^2+1 0; 0 0 1 ]),
Q(B) = 3(γω_cτ)^2∑_ℓ(ℓ+1/2) Re[1/K_ℓ].
These results are the counterpart of the classical formulation expressed in Eq. (<ref>).
The field dependence of Q(B) is illustrated in Fig. <ref>.
In particular, Q(B) exhibits a clear quantum oscillation that cannot be considered in the classical formulation.
In other words, the quantum oscillation Q(B) is the only quantum correction to the diagonal conductivity σ_xx(B).
The global properties of the quantum magnetotransport agree quantitatively with those of the classical one.
Eq. (<ref>) thus shows the clear quantum–classical correspondence, which holds over the entire range of the field, from weak to strong magnetic fields.
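As a concrete illustration of this correspondence, the following Python sketch evaluates Q(B) and the resulting quantum tensor by direct summation over Landau levels, using σ_0, K_ℓ, and Q(B) as defined above; the material parameters (μ, Γ, m) and the level cutoff are arbitrary illustrative choices, not the values used for the figures.

import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19           # SI constants

def quantum_sigma(B, mu, Gamma, m, lmax=2000):
    # Quantum tensor: classical structure with the oscillation factor Q(B) in sigma_xx,yy.
    omega_c = e * B / m                               # cyclotron frequency
    tau = hbar / (2.0 * Gamma)                        # Gamma = hbar / (2 tau)
    gamma = Gamma / mu
    wct = omega_c * tau
    sigma0 = e**2 * tau / m * (np.sqrt(2 * m * mu) / hbar) ** 3 / (3 * np.pi**2)
    l = np.arange(lmax)
    K = np.sqrt(1.0 - (2 * l + 1) * wct * gamma + 1j * gamma)
    Q = 3.0 * (gamma * wct) ** 2 * np.sum((l + 0.5) * np.real(1.0 / K))
    d = wct**2 + 1.0
    return sigma0 * np.array([[Q / d,   -wct / d, 0.0],
                              [wct / d,  Q / d,   0.0],
                              [0.0,      0.0,     1.0]])

# Illustrative parameters only (not those used for the figures in the text)
print(quantum_sigma(B=5.0, mu=0.1 * e, Gamma=1e-3 * e, m=9.1e-31))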
The field dependence of the Fermi surface term, the sea term, and the total conductivity σ^ surface+σ^ sea is illustrated in Fig. <ref> (a–d), where the chemical potential μ was varied so as to keep the carrier number fixed over the whole range of the magnetic field.
In the low-field limit, the total conductivity is quantitatively consistent with the classical model (Fig. <ref> (b),(d)), whereas in the high-field limit the Fermi sea term in the transverse conductivity σ_xx cancels one of the surface terms.
The Fermi surface term dominates σ_xx over the entire field range and displays prominent quantum oscillations at high fields.
This oscillation is apparently induced by the one-dimensional density of states in Eq. (<ref>).
In contrast, the major part of the Fermi surface term in Hall conductivity was canceled by σ_yx^ sea( I) (Eq.(<ref>)), and the remaining sea term (σ_yx^ sea( II)) did not include oscillations if the carrier number was fixed. Thus, the total σ_yx displayed no oscillation (Fig.<ref> (d)).
As such, the notable characteristic of Hall conductance is the crossover from the surface term to the sea term.
§.§ Crossover from dissipative to dissipationless
The physical implication of the “surface term" and “sea term" can be clearly understood by examining the Γ dependence illustrated in Fig.<ref>.
The sea terms depend only slightly on Γ, whereas the surface terms depend strongly on it.
This result is expected because the sea terms correspond to the thermodynamic contributions.
The surface terms correspond to the non-equilibrium dissipative transport, while the sea terms correspond to the equilibrium dissipationless transport.
Therefore, the field effect in the Hall conductivity, the crossover from the surface to sea terms, can be regarded as a crossover from dissipative to dissipationless transport.
Note that the surface term in the Hall conductivity becomes independent of Γ in the clean limit and cancels out with one of the sea terms (Fig.<ref> (b)).
§ SHUBNIKOV-DE HAAS OSCILLATION IN SEMIMETALS
The derived formulation can be readily applied to multicarrier metals and semimetals by adding up the conductivities in each carrier pocket as follows:
σ̂^ tot =∑_i σ̂^ (i).
The magnetoresistivity and Hall resistivity in the voltage measurements were calculated by the inversion of the conductivity tensor:
ρ̂=σ̂^-1.
In doing so, a noteworthy feature was found in the calculations for a semimetal.
The transverse magnetoresistance (TMR) in a semimetal with electron and hole carriers is depicted in Fig. <ref>, where the effective masses of electrons and holes are assumed to be the same.
We calculated the two cases depending on whether the carriers were compensated.
The compensated case exhibited a non-saturating quadratic field dependence across the entire range, whereas the uncompensated case saturated at high field limits.
The quantum oscillations also show notable differences between the two cases.
When a Landau level surpasses the chemical potential, the compensated semimetal displays a resistivity minimum, unlike the other cases, which exhibit a maximum.
The TMR in the isotropic systems can be expressed as follows:
ρ_xx =σ_yy/σ_xxσ_yy-σ_xyσ_yx.
The field dependence and the oscillatory characteristics are determined by the dominant term in the denominator.
If the carriers are compensated, σ_xy,yx cancel out because these components change sign with the sign of the carrier charge.
In this case, the asymptotic form of the TMR at high fields is 1/σ_xx; therefore, the resistivity does not saturate, and the oscillation is inverted.
In all other cases, σ_xy,yx dominates the denominator unless they vanish.
Then ρ_xx approaches σ_yy/|σ_xy|^2 in the high-field limit.
The oscillation is primarily caused by σ_yy because the oscillation in the Hall conductivity is suppressed owing to the cancellation of the dissipative transport.
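The effect of carrier compensation on the TMR can be illustrated with a classical two-carrier version of the same tensor-summation and inversion procedure (the calculations in the text use the quantum tensors); the densities, mass, and relaxation time below are arbitrary illustrative numbers.

import numpy as np

def carrier_sigma(n, q, m, tau, B):
    # Classical conductivity tensor of a single carrier pocket with charge q (field along z).
    sigma0 = n * q**2 * tau / m
    x = -q * B * tau / m            # signed omega_c*tau; positive for electrons (q = -e)
    d = x**2 + 1.0
    return sigma0 * np.array([[1/d, -x/d, 0.0],
                              [x/d,  1/d, 0.0],
                              [0.0,  0.0, 1.0]])

e, m, tau, n_e = 1.602e-19, 9.1e-31, 1.0e-12, 1.0e24
for n_h in (1.0e24, 0.8e24):        # compensated vs. uncompensated hole density
    sigma_tot = carrier_sigma(n_e, -e, m, tau, B=10.0) + carrier_sigma(n_h, +e, m, tau, B=10.0)
    print(n_h, np.linalg.inv(sigma_tot)[0, 0])   # rho_xx at B = 10 T

In the compensated case the off-diagonal components cancel, so ρ_xx ≈ 1/σ_xx keeps growing with field, whereas the uncompensated case saturates, consistent with the behaviour described above.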
Furthermore, determining the points where the Landau levels cross the chemical potential is a critical problem in experiments on semimetals, for example when evaluating the Berry curvature or the effective g-factor in metals and semimetals <cit.>.
The non-negligible Hall conductivity shifts the peak positions in ρ_xx and may cause misinterpretation of the phase information in the oscillation.
Thus, the present calculation provides a microscopic explanation of this technical problem.
§ DISCUSSION AND CONCLUSIONS
In this study, we derived a formulation regarding magneto-conductivity for 3D free electrons based on the Kubo formula with a Landau-quantized basis, which is valid for an arbitrary field intensity.
We obtained the clear quantum–classical correspondence, as illustrated in Eq. (<ref>), a long-missing piece in magnetotransport theory.
We found that the only quantum correction to the classical formulation is the quantum oscillation factor Q(B) in σ_xx(B).
The quantum–classical correspondence thus obtained provides useful knowledge to analyze the experimental data.
Note that the classical formula has been used to analyze experimental data even at strong fields because of its convenience, although its validity was expected to be lost for ω_c τ≳ 1.
In contrast, the obtained quantum–classical correspondence guarantees the validity of the classical formulation, Eq. (<ref>), even for ω_c τ≳ 1 except for the quantum oscillation.
Although this wide-range validity somewhat contrasts with the conventional understanding, the present work clearly proves it both analytically and numerically.
In addition, the quantum-classical correspondence will also be useful for distilling the quantum oscillation factor Q(B) from the experimental data.
From the viewpoint of the analytic properties of the thermal Green's function, an interesting perspective was obtained on the transport coefficients in a magnetic field.
As the field intensity increases, the primary constituent of the Hall conductivity switches from the dissipative to the dissipationless term, whereas the diagonal components of the tensor are dominated by the dissipative term over the entire field range.
The dissipationless term originates from the equilibrium transport, which commonly gives rise to thermodynamic phenomena such as orbital diamagnetism <cit.> and the spin current in paramagnetic conductors <cit.>.
Magnetoresistance naturally includes both contributions.
Moreover, their relative weight changes depending on the intensity of the field and on the direction of the current relative to the field.
Finally, the derived formula can be readily applied to multicarrier systems and to transport in the quantum limit.
The peak or dip positions of the quantum oscillations in certain metals and semimetals deviate from the crossing between a Landau level and the chemical potential.
The developed formulation demonstrated from a microscopic perspective that the origin of this deviation is the dissipationless contribution to the off-diagonal conductivity, which supersedes the dissipative terms of the diagonal conductivity at high fields.
Thus, we consider that the quantum extension derived herein provides new insights regarding quantum oscillation to the semiclassical quantitative magnetotransport theory.
Our study can also be extended to anisotropic electron systems such as ellipsoidal Fermi surfaces by altering the effective mass <cit.>.
However, in multi-band systems, the interband contribution cannot always be renormalized to the mass.
In particular, we would need to consider the multi-band Hamiltonian and resulting energy dispersion and velocity operators, which is beyond the present theory.
In such cases, the quantum–classical correspondence is still unclear due to the non-trivial interband contribution <cit.>.
The authors would like to thank S. Tago for many helpful suggestions. We present special thanks to M. Tokunaga and A. Miyake for their valuable discussions. This work was supported by JSPS KAKENHI Grant Numbers 19H01850, 23H00268, and 23H04862.
[Thomson(1857)] W. Thomson, "On the electro-dynamic qualities of metals: Effects of magnetization on the electric conductivity of nickel and of iron," Proc. R. Soc. Lond. 8, 546 (1857).
[Kapitza and Rutherford(1928)] P. Kapitza and E. Rutherford, "The study of the specific resistance of bismuth crystals and its change in strong magnetic fields and some allied problems," Proc. R. Soc. A 119, 358 (1928).
[Pippard(1989)] A. Pippard, Magnetoresistance in Metals, Cambridge Studies in Low Temperature Physics (Cambridge University Press, 1989).
[Beer(1963)] A. Beer, Galvanomagnetic Effects in Semiconductors, Solid State Physics Series (Academic Press, 1963).
[Grosso and Parravicini(2000)] G. Grosso and G. Parravicini, Solid State Physics (Elsevier Science, 2000).
[Zhu et al.(2018)] Z. Zhu, B. Fauqué, K. Behnia, and Y. Fuseya, "Magnetoresistance and valley degree of freedom in bulk bismuth," J. Phys.: Condens. Matter 30, 313001 (2018).
[Wilson(2011)] A. H. Wilson, The Theory of Metals (Cambridge University Press, Cambridge, England, 2011).
[Novoselov et al.(2005)] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, "Two-dimensional gas of massless Dirac fermions in graphene," Nature 438, 197 (2005).
[Fuseya et al.(2015a)] Y. Fuseya, M. Ogata, and H. Fukuyama, "Transport properties and diamagnetism of Dirac electrons in bismuth," J. Phys. Soc. Jpn. 84, 012001 (2015).
[Feng et al.(2015)] J. Feng, Y. Pang, D. Wu, Z. Wang, H. Weng, J. Li, X. Dai, Z. Fang, Y. Shi, and L. Lu, "Large linear magnetoresistance in Dirac semimetal Cd_3As_2 with Fermi surfaces close to the Dirac points," Phys. Rev. B 92, 081306(R) (2015).
[Shekhar et al.(2015)] C. Shekhar, A. K. Nayak, Y. Sun, M. Schmidt, M. Nicklas, I. Leermakers, U. Zeitler, Y. Skourski, J. Wosnitza, Z. Liu, Y. Chen, W. Schnelle, H. Borrmann, Y. Grin, C. Felser, and B. Yan, "Extremely large magnetoresistance and ultrahigh mobility in the topological Weyl semimetal candidate NbP," Nat. Phys. 11, 645 (2015).
[Luo et al.(2015)] Y. Luo, N. J. Ghimire, M. Wartenbe, H. Choi, M. Neupane, R. D. McDonald, E. D. Bauer, J. Zhu, J. D. Thompson, and F. Ronning, "Electron-hole compensation effect between topologically trivial electrons and nontrivial holes in NbAs," Phys. Rev. B 92, 205134 (2015).
[Wang et al.(2018)] J. Wang, J. Niu, B. Yan, X. Li, R. Bi, Y. Yao, D. Yu, and X. Wu, "Vanishing quantum oscillations in Dirac semimetal ZrTe_5," Proc. Natl. Acad. Sci. USA 115, 9145 (2018).
[Lu et al.(2015)] H.-Z. Lu, S.-B. Zhang, and S.-Q. Shen, "High-field magnetoconductivity of topological semimetals with short-range potential," Phys. Rev. B 92, 045203 (2015).
[Wang et al.(2016)] C. M. Wang, H.-Z. Lu, and S.-Q. Shen, "Anomalous phase shift of quantum oscillations in 3D topological semimetals," Phys. Rev. Lett. 117, 077201 (2016).
[Könye and Ogata(2018)] V. Könye and M. Ogata, "Magnetoresistance of a three-dimensional Dirac gas," Phys. Rev. B 98, 195420 (2018).
[Kubo(1957)] R. Kubo, "Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems," J. Phys. Soc. Jpn. 12, 570 (1957).
[Kubo et al.(1965)] R. Kubo, S. J. Miyake, and N. Hashitsume, "Quantum theory of galvanomagnetic effect at extremely strong magnetic fields," in Solid State Physics, Vol. 17, edited by F. Seitz and D. Turnbull (Academic Press, 1965), pp. 269–364.
[Ando et al.(1982)] T. Ando, A. B. Fowler, and F. Stern, "Electronic properties of two-dimensional systems," Rev. Mod. Phys. 54, 437 (1982).
[Dmitriev et al.(2003)] I. A. Dmitriev, A. D. Mirlin, and D. G. Polyakov, "Cyclotron-resonance harmonics in the ac response of a 2D electron gas with smooth disorder," Phys. Rev. Lett. 91, 226802 (2003).
[Dmitriev et al.(2012)] I. A. Dmitriev, A. D. Mirlin, D. G. Polyakov, and M. A. Zudov, "Nonequilibrium phenomena in high Landau levels," Rev. Mod. Phys. 84, 1709 (2012).
[Argyres(1958)] P. N. Argyres, "Quantum theory of galvanomagnetic effects," Phys. Rev. 109, 1115 (1958).
[Adams and Holstein(1959)] E. N. Adams and T. D. Holstein, "Quantum theory of transverse galvano-magnetic phenomena," J. Phys. Chem. Solids 10, 254 (1959).
[Fukuyama et al.(1970)] H. Fukuyama, M. Saitoh, Y. Uemura, and H. Shiba, "Theory of impurity bands in magnetic fields. II. Transport properties," J. Phys. Soc. Jpn. 28, 842 (1970).
[Shiba et al.(1971)] H. Shiba, K. Kanada, H. Hasegawa, and H. Fukuyama, "Galvanomagnetic effects in impurity band conductions," J. Phys. Soc. Jpn. 30, 972 (1971).
[Abrikosov et al.(2012)] A. A. Abrikosov, L. P. Gorkov, and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (Dover Publications, 2012).
[Mahan(2000)] G. D. Mahan, Many-Particle Physics (Springer US, 2000).
[Matsubara(1955)] T. Matsubara, "A new approach to quantum-statistical mechanics," Prog. Theor. Phys. 14, 351 (1955).
[Ando et al.(1975)] T. Ando, Y. Matsumoto, and Y. Uemura, "Theory of Hall effect in a two-dimensional electron system," J. Phys. Soc. Jpn. 39, 279 (1975).
[Ando(2013)] Y. Ando, "Topological insulator materials," J. Phys. Soc. Jpn. 82, 102001 (2013).
[Fuseya et al.(2015b)] Y. Fuseya, Z. Zhu, B. Fauqué, W. Kang, B. Lenoir, and K. Behnia, "Origin of the large anisotropic g factor of holes in bismuth," Phys. Rev. Lett. 115, 216401 (2015).
[Izaki and Fuseya(2019)] Y. Izaki and Y. Fuseya, "Nonperturbative matrix mechanics approach to spin-split Landau levels and the g factor in spin-orbit coupled solids," Phys. Rev. Lett. 123, 156403 (2019).
[Peierls(1933)] R. Peierls, "Zur Theorie des Diamagnetismus von Leitungselektronen. II. Starke Magnetfelder," Z. Phys. 81, 186 (1933).
[Fukuyama and Kubo(1970)] H. Fukuyama and R. Kubo, "Interband effects on magnetic susceptibility. II. Diamagnetism of bismuth," J. Phys. Soc. Jpn. 28, 570 (1970).
[Ando(1976)] T. Ando, "Quantum transport in an anisotropic two-dimensional system under strong magnetic fields," Z. Phys. B: Condens. Matter 24, 219 (1976).
[Fuseya et al.(2009)] Y. Fuseya, M. Ogata, and H. Fukuyama, "Interband contributions from the magnetic field on Hall effects for Dirac electrons in bismuth," Phys. Rev. Lett. 102, 066601 (2009).
|
http://arxiv.org/abs/2307.03173v1
|
20230706175513
|
Data processing of Visible Emission Line Coronagraph Onboard ADITYA L1
|
[
"Muthu Priyal",
"Jagdev Singh",
"B. Raghavendra Prasad",
"Chavali Sumana",
"Varun Kumar",
"Shalabh Mishra",
"S. N. Venkata",
"G. Sindhuja",
"K. Sasikumar Raja",
"Amit Kumar",
"Sanal krishnan",
"Bhavana S. Hegde",
"D. Utkarsha",
"Natarajan Venkatasubramanian",
"Pawankumar Somasundram",
"S. Nagabhushana",
"PU. Kamath",
"S. Kathiravan",
"T. Vishnu Mani",
"Suresh Basavaraju",
"Rajkumar Chavan",
"P. Vemareddy",
"B. Ravindra",
"S. P. Rajaguru",
"K. Nagaraju",
"Wageesh Mishra",
"Jayant Joshi",
"Tanmoy Samanta",
"Piyali Chatterjee",
"C. Kathiravan",
"R. Ramesh"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"astro-ph.IM"
] |
[cor1]Corresponding author:
email: muthu.priyal@iiap.res.in;
Indian Institute of Astrophysics, Koramangala, Bengaluru - 560034
ADITYA-L1 is India's first dedicated mission to observe the Sun and its atmosphere from a halo orbit around the L1 point. The Visible Emission Line Coronagraph (VELC) is the prime payload on board Aditya-L1 to observe the Sun's corona. VELC is designed as an internally occulted reflective coronagraph to meet the observational requirements of a wide wavelength band and coverage close to the solar limb (1.05 Ro). Images of the solar corona in continuum and spectra in three emission lines, 5303 Å [Fe XIV], 7892 Å [Fe XI], and 10747 Å [Fe XIII], will be obtained with high cadence and analyzed automatically using software algorithms. A considerable part of the observations will be made in synoptic mode; these need to be analyzed and the results made available for public use. The procedure involves calibrating the instrument and detectors, converting the images into FITS format, correcting the images and spectra for instrumental effects, aligning the images, etc. Image-processing algorithms are then developed to detect the occurrence of energetic events using the continuum images, and physical parameters, such as the temperature and velocity structure of the solar corona, are derived using the emission-line observations. Here, we describe the calibration of the detectors and the development of software algorithms to detect the occurrence of CMEs and analyze the spectroscopic data.
VELC; Aditya-L1 mission; Coronagraph; Imaging and spectroscopic observations; Data pipeline architecture; Data flow
§ INTRODUCTION
Reliable spectroscopic observations of the emission corona up to 1.5 Ro (Ro - solar radius) or beyond, with good photometric accuracy and high spectral resolution, will help in understanding the physical and dynamical nature of the solar corona (<cit.>). Most of the spectroscopic or imaging data in visible emission lines have been recorded during total solar eclipses or with ground-based coronagraphs under excellent clear-sky conditions. Ground-based coronal observations are insufficient due to the limited number of hours per day with very clear coronagraphic sky conditions (<cit.>). The increase in sky brightness caused by the scattering of sunlight from water vapour and aerosols in the Earth's atmosphere makes it challenging to observe the weak coronal signal. Such a problem does not arise in space, where we can observe 24 hours a day throughout the year by placing the satellite in a halo orbit around the Lagrangian L1 point.
The ADITYA-L1 mission is a space-based solar observatory with seven payloads on board, which is planned to be placed at the first Lagrange point (L1) of the Sun-Earth system. The Visible Emission Line Coronagraph (VELC) (<cit.> and <cit.>) is a space-based solar coronagraph and a major payload on board Aditya-L1 to study the solar corona. VELC is designed to perform imaging of the solar corona at 500 nm, simultaneous spectroscopy of the solar corona in emission lines centred around 530.3 nm [Fe XIV], 789.2 nm [Fe XI], and 1074.7 nm [Fe XIII], and spectro-polarimetry at 1074.7 nm [Fe XIII]. The FOV of the imaging channel is 1.05-3.0 Ro, and that of the spectroscopy channels is 1.05-1.5 Ro. The payload's uniqueness stems from the fact that observations of the solar corona close to the limb (1.05 Ro) with a high cadence (<cit.> and <cit.>) are possible. Further, imaging of the solar corona in continuum will yield the speed of CMEs in the plane of the sky, while the spectroscopic emission-line observations taken at the same time will yield their line-of-sight velocity. Hence, it will be possible to derive the true velocity of CMEs and study their acceleration or deceleration, as illustrated in the sketch below.
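Because the continuum imaging gives the plane-of-sky speed while the emission-line spectra give the line-of-sight (Doppler) velocity of the same structure, the two orthogonal components can be combined into the true space speed; the minimal Python sketch below shows this combination with arbitrary example numbers and does not represent the actual pipeline code.

import numpy as np

def true_cme_speed(v_pos, v_los):
    # Plane-of-sky and line-of-sight components are orthogonal, so the
    # total space speed follows from the vector sum.
    return np.hypot(v_pos, v_los)

print(true_cme_speed(v_pos=450.0, v_los=200.0))   # km/s, illustrative values (~492 km/s)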
Several corrections are required for solar data obtained from space instruments compared to data taken with ground-based telescopes. There are many space-based missions to study the Sun and its atmosphere (<cit.> and <cit.>). It is convenient to transmit the data to the ground station from space instruments in binary and compressed format, which permits more observations within the same volume of data sent through telemetry. Therefore, as a first step of the calibration, the raw data need to be decompressed and converted to Flexible Image Transport System (FITS) format. Then many corrections to the data need to be made, such as checking the file size, dark-current correction, flat fielding of images / spectra, geometrical calibration, wavelength calibration, time-dependent corrections, replacement of spikes / bad pixels, and aligning the images. The procedure to perform the basic corrections is discussed in an earlier paper (<cit.>). After the basic corrections, one needs to convert the observed counts of the digital cameras to absolute numbers, such as flux or absolute intensity, and correct for the instrument characteristics. Other effects, such as scattered light, broadening of line profiles due to the instrument, and geometrical distortion of the spectra due to the optics, need to be corrected after the basic analysis of the data. To apply these corrections, calibration of the instrument in the laboratory before the launch and in space after the launch is needed. Here, we shall discuss the calibration of the detectors carried out in the laboratory and the development of software to analyse the data and derive physical parameters from the observations. To examine the performance of the developed software codes, we have used coronal images from the C-2 coronagraph on board SOHO and coronal spectroscopic data obtained with the 25-cm coronagraph at the Norikura Observatory, Japan. Provision was made at the 25-cm Norikura coronagraph to record observations in 4 emission lines simultaneously. Spectra in the 5303 Å [Fe XIV], 10747 Å [Fe XIII], and 10798 Å [Fe XIII] lines were always recorded. In addition, one of the two spectral lines, 6374 Å [Fe X] or 7892 Å [Fe XI], could be chosen for observations. We have also used the laboratory calibration data to verify the codes and confirm that the instrument works as desired. Here, we discuss the calibration of the detectors, some specific corrections, and the verification of the codes using observations from SOHO, the Norikura coronagraph, and data obtained during total solar eclipses.
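A minimal sketch of the basic correction chain described above (dark subtraction and flat fielding of a frame stored in FITS format) is given below using astropy; the file names and the exact ordering of corrections are placeholders and do not represent the actual VELC pipeline interface.

import numpy as np
from astropy.io import fits

def basic_correction(raw_file, dark_file, flat_file, out_file):
    # Dark subtraction followed by flat fielding of a single raw frame.
    raw = fits.getdata(raw_file).astype(float)
    dark = fits.getdata(dark_file).astype(float)     # master dark for the same exposure time
    flat = fits.getdata(flat_file).astype(float)     # dark-corrected master flat
    flat /= np.median(flat)                          # normalise the flat field
    corrected = (raw - dark) / flat
    fits.writeto(out_file, corrected, overwrite=True)
    return corrected

# corrected = basic_correction("raw.fits", "master_dark.fits", "master_flat.fits", "level1.fits")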
§ CALIBRATION OF THE INSTRUMENT
The specifications of the various optical components of the instrument, the performance of the mechanical units and the responses of the detectors were examined individually first, and later as an integrated payload, as described in <cit.>. There are three CMOS detectors and one IR detector. The performance of all three CMOS detectors is similar.
§.§ Calibration of CMOS detectors in Laboratory before launch
The important parameters to be determined for a detector are the bias, the variation of dark current with exposure time, the linearity of the detector response to light level and exposure time, the noise level, and the pixel-to-pixel signal variation across repeated exposures of the same duration.
§.§.§ Bias and dark calibration
The camera electronics does not permit measuring the dark current at zero exposure time, known as the bias. Therefore, we have taken images with the minimum exposure time of 0.010 s to estimate the bias and its variation with time. We find that the mean bias is about 142 and 139 counts in the low gain (LG) and high gain (HG), respectively, and does not vary with time. We measured the bias at temperatures from -4^∘C to -7^∘C in steps of 1^∘C, the expected operating temperature range of the CMOS detectors on board the VELC instrument during the mission life, and found that the bias values do not change within this range. The left and right panels in the upper row of Figure <ref> show the dark current in counts as a function of exposure time (0.01 to 100 seconds) for the low and high gain, respectively. The dark current plotted is the mean over all pixels in the image. The two upper panels indicate that the dark current increases only marginally with exposure time up to 100 seconds: the dark build-up rates are 0.004 and 0.032 counts per second in the low and high gain, respectively. These variations are insignificant for the projected exposure times. The two panels in the bottom row show histograms of the counts for four representative exposure times of 0.113, 1.0, 10.0 and 100 seconds in different colours. The histograms indicate that the dark current is stable in both the low and high gain for different exposure times. The distributions agree very well for the LG and differ by an insignificant amount for the HG. The FWHM of the dark-current distribution is ∼ 30 counts for all exposure times up to 100 seconds in both gains. The small fluctuations in the histograms are due to signal variations between rows caused by the different amplifiers of the CMOS detector. Most of the dark count values lie between 100 and 180 for all gains; these variations are mostly due to the different amplifier gains and photon noise.
§.§.§ Calibration of CMOS detector with uniform light source in laboratory
After taking the dark data we took images with a uniform light source at different intensities and exposure times until near saturation of the detectors. The left and right panels in the upper row of Figure <ref> show the mean signal over the image (counts) as a function of exposure time for the low and high gains, respectively. The upper curve (red) in both panels shows the mean counts with light, and the lower curve the counts after subtracting an averaged dark image (the mean of 16 individual dark frames) from the light image. The low- and high-gain plots indicate that the response of the detector is almost linear with exposure time. For exposure times > 100 ms, the signal increases linearly with time up to 90% of the saturation value of the detectors. Both the LG (1X and 2X) and HG (10X and 30X) show similar behaviour. The experiment was repeated at different known intensity levels, and the detectors show a linear response in the range of 5 - 90% of the full well capacity for all gains.
The left and right panels in the bottom row of Figure <ref> show histograms of the count values for the dark image (blue), the image with the uniform light source (black) and the dark-subtracted light image (red) for LG and HG, respectively. The histograms of the dark and light images show some departure from a Gaussian distribution due to fixed-pattern noise in the data. The Gaussian distribution of the (light – dark) histogram, however, indicates that the noise due to the different responses of the amplifiers (the fixed-pattern noise) has been corrected. For the LG image, the FWHM of about 32 counts of the corrected intensity distribution indicates that the signal variation is well within the photon noise for an average signal of ∼ 850 counts. For the HG data, the FWHM of ∼ 55 counts for a mean signal of 840 indicates that the variations are almost equal to the photon noise; this is likely due to additional detector noise at high gain. All three CMOS cameras have been calibrated and behave in a similar way.
§.§ Calibration of IR detector
First, we study the variation of the mean dark current over an image with exposure time for temperatures from -14^∘C to -19^∘C, the expected on-board temperature range of the detector, at an interval of one degree. The left and right panels in the upper row of Figure <ref>a show the dark count for the LG and HG of the camera, respectively, for three temperatures, -14^∘C, -17^∘C and -19^∘C. The figure indicates that the dark count depends on temperature and increases as the detector temperature increases. The difference is small for short exposure times and grows with increasing exposure time. The left and right panels of Figure <ref>b show histograms of the dark current for the LG and HG, respectively, for four exposure times. The exposure times are 103 ms (minimum), 5.3 s, 20 s and 50 s (maximum) for LG, and 103 ms, 1.003 s, 5.3 s and 10 s for HG, since the detector saturates in the dark image itself for exposures > 10 s in HG. The dark-current histograms show double peaks; the reason for this is not clear. However, the double peaks disappear in the dark-corrected light (light – dark) images, so they may be due to some fixed-pattern noise in the detector. Further, the dark count increases significantly with exposure time, unlike the CMOS detector, for which the dark count increases negligibly with exposure time up to 100 seconds. The mean dark current is ∼ 490 and ∼ 190 counts for the LG and HG, respectively, for an exposure time of 103 ms. The LG histograms for different exposure times show that the dark count increases by ∼ 50% at an exposure time of 20 seconds compared with 103 ms, decreasing the dynamic range significantly. In addition, the increase in the width of the dark-count distribution with increasing exposure time indicates a significant increase in dark noise. Thus, exposure times of more than 20 seconds will impact the dynamic range and photometric accuracy of LG observations. The right panel of Figure <ref>b shows that for exposure times > 2 s in HG the dark-count distributions become very broad and the mean value increases at a faster rate. The computed dark build-up rates are 12 counts/s in the LG and 220 counts/s in the HG. Hence, it is advisable to keep the exposure time < 5 s for HG observations, considering the dark build-up and the noise in the data for longer exposures. In Figure <ref>, we plot the mean signal (light – dark) in counts for LG (left panel) and HG (right panel), for images taken with the uniform light source, as a function of exposure time.
§ OBSERVATIONS IN CONTINUUM CHANNEL AT 500 NM
After generating FITS files from the binary files and applying the dark-current and flat-field corrections, the images (<cit.>) will be aligned using the satellite data on yaw, roll and pitch angles. To begin with, all these images will be scanned visually in the form of a video to detect the occurrence of CMEs.
§.§ Detection of CMEs using continuum images
We have developed a code to detect the occurrence of CMEs automatically. First, the aligned FITS images of a single day are used to generate a background image (I_bac). The procedure is to create a minimum image (I_min) such that each pixel in I_min corresponds to the minimum intensity over all the images of that day. The intensity in the outer corona is very low, only a little above the dark value, and the signal is sometimes lost in the photon noise; a large number of images need to be added to increase the signal-to-noise ratio (SNR) in the outer corona. Some pixels, especially in the outer corona, show negative values in the dark-subtracted images because of photon noise. To avoid this type of noise, the minimum background is set to zero for those pixels. The resulting I_min is used to produce the azimuthally averaged background image (I_bac): I_min is rotated from 0^∘ to 360^∘ in increments of 1^∘ to obtain 360 images, and I_bac is generated by averaging over these 360 images, so that each pixel in I_bac is the average of the intensities of the 360 rotated images at that pixel. To detect the occurrence of a CME (I_cme), we subtract the background contribution from the coronal image to enhance its contrast, using the azimuthally averaged background image (I_bac) and the following relation:
I_cme= (Image – I_min) / I_bac
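A minimal sketch of this background-generation and contrast-enhancement step is given below, using NumPy and SciPy. The function names, the use of scipy.ndimage.rotate about the image centre for the azimuthal averaging, and the small eps guard against division by zero are illustrative assumptions, not the actual pipeline implementation.

import numpy as np
from scipy.ndimage import rotate


def azimuthal_background(images):
    """Build I_min and the azimuthally averaged background I_bac from one day of aligned images."""
    # Pixel-wise minimum over the day's images; negative values from photon noise are set to zero.
    i_min = np.clip(np.min(images, axis=0), 0.0, None)
    # Average I_min rotated from 0 to 359 degrees in 1-degree steps
    # (rotation about the image centre, assumed here to coincide with the Sun centre).
    i_bac = np.zeros_like(i_min)
    for angle in range(360):
        i_bac += rotate(i_min, angle, reshape=False, order=1, mode='nearest')
    return i_min, i_bac / 360.0


def cme_enhanced(image, i_min, i_bac, eps=1e-6):
    """Contrast-enhanced image: I_cme = (image - I_min) / I_bac."""
    return (image - i_min) / np.maximum(i_bac, eps)


# Illustration with synthetic frames (20 images of 256 x 256 pixels).
rng = np.random.default_rng(0)
frames = rng.poisson(50, size=(20, 256, 256)).astype(float)
i_min, i_bac = azimuthal_background(frames)
print(cme_enhanced(frames[0], i_min, i_bac).shape)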
As a case study, we have applied the developed algorithm to broad-band coronal images obtained by LASCO-C2. We downloaded a number of Level-0.5 images, in which the north pole is aligned upwards, from the LASCO-C2 archive, subtracted the dark offset from them, generated the background image, and then applied the CME detection code.
We found that the code works well, and Figure <ref> shows one example of a CME that occurred on June 2, 1998 at 10:29:34 UT. The left and middle panels of the figure show the raw image and the background image generated from all the images obtained on that day. Figure <ref> shows another example, an image obtained on July 7, 2001 at 00:05:55 UT with the C2 coronagraph onboard SOHO, which clearly shows the coronal streamer structures after the analysis. It may be noted that the downloaded images do not show the streamer structures; they only indicate a faint structure that may be a CME, which needs to be confirmed. The right panel of the figure shows the processed image (I_cme) with the detected bright CMEs and long-lived streamer structures of the solar corona.
We have tested our algorithm on various events with LASCO-C1 and LASCO-C2 data and found that it works well in detecting coronal features such as CMEs and streamers. We plan to extend the code to determine the speed of CMEs in the plane of the sky.
§.§ Merging the LG and HG images
Two images of the solar corona will be taken simultaneously, one in LG and the other in HG, because of the limited dynamic range of the CMOS detector and the large difference in intensity between the inner and outer corona. The signal is therefore likely to be only marginally above the dark noise in the outer corona in LG, and saturated in the inner corona in HG. Taking into account the conversion factor between LG and HG, the two images will be merged into a single image using a software code we have developed. To develop the code we used a LASCO-C2 image, treated as the LG image (left panel of Figure <ref>). The HG image was then generated by applying a gain factor of 5 (the VELC gains are 2X and 10X), as shown in the middle panel of the figure; note that part of this image is saturated. Using these two images and the gain factor, the LG and HG images were combined to generate the full image shown in the right panel. There appears to be little difference between the LG and combined images, probably because the original data are in 16-bit format; we expect the difference to be visible in the 11-bit images taken with the VELC continuum channel.
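A possible sketch of the merging step is shown below. The gain factor of 5, the 11-bit saturation level and the choice to fall back to the low-gain frame wherever the high-gain frame saturates are assumptions made for illustration.

import numpy as np


def merge_gains(lg_image, hg_image, gain_factor=5.0, saturation=2**11 - 1):
    """Merge simultaneous low-gain and high-gain frames into one image on the low-gain scale.

    Where the high-gain frame is saturated (bright inner corona), the low-gain value is kept;
    elsewhere the high-gain value, divided by the gain factor, is used for its better
    signal-to-noise ratio in the faint outer corona.
    """
    hg_on_lg_scale = hg_image / gain_factor
    return np.where(hg_image >= saturation, lg_image, hg_on_lg_scale)

A call such as combined = merge_gains(lg, hg) would then produce the single merged frame used in the subsequent analysis.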
§.§ Equal intensity contour maps of corona
We plan to make equal-intensity contour maps of the solar corona as a function of solar radius, on a daily basis, in units of the solar disk intensity using the disk observations obtained on board. The LG and HG images are first combined and then averaged over 60 minutes. From this average image, intensity profiles are extracted as a function of solar radius at intervals of 5^∘ in azimuth angle, and normalized using the solar disk data. To develop the code, we used coronal images in the red emission line taken during the total solar eclipse of July 22, 2009. Figure <ref> shows the intensity profiles at 0^∘, 90^∘, 180^∘ and 270^∘ azimuth angle. The chosen set of equal-intensity points at 5^∘ intervals is then joined to generate the equal-intensity contour map shown in Figure <ref>. Figures <ref> and <ref> are on a relative intensity scale, but we plan to make such maps on the scale of the solar disk intensity. The equal-intensity contours of the solar corona can be used to study long-term solar-cycle variations, as well as changes in the quiet coronal structures caused by energetic events.
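The profile extraction could proceed along the lines of the sketch below, which samples the combined image along radial cuts every 5 degrees; the nearest-pixel sampling and the function names are illustrative choices only.

import numpy as np


def radial_profile(image, center, angle_deg, r_max, n_points=200):
    """Intensity along a radial cut from `center` at the given azimuth angle (degrees)."""
    radii = np.linspace(0.0, r_max, n_points)
    theta = np.deg2rad(angle_deg)
    # Nearest-pixel sampling; bilinear interpolation could be substituted for smoother profiles.
    x = np.clip(np.round(center[0] + radii * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    y = np.clip(np.round(center[1] + radii * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return radii, image[y, x]


def profiles_every_5_degrees(image, center, r_max):
    """Radial intensity profiles at 5-degree azimuth steps, as used for the contour maps."""
    return {a: radial_profile(image, center, a, r_max) for a in range(0, 360, 5)}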
§ ANALYSIS OF SPECTROSCOPIC OBSERVATIONS
The spectroscopic observations will be made in three emission lines, namely 5303 Å [Fe XIV], 7892 Å [Fe XI] and 10747 Å [Fe XIII], using a multi-slit spectrograph with four slits. Figure <ref> shows the locations of the four slits when the linear scan mechanism (LSM), which moves the image across the slits in steps, is at its home position. The slits are 50 micron wide and separated by 3.75 mm. The image of the sun and corona can be moved across the slits using the LSM (<cit.>). One mode of observation is to keep the coronal image fixed on the slits and take spectra around these emission lines with a chosen exposure time and interval, generally referred to as the “sit and stare” mode. The second mode is to move the coronal image across the slits in chosen steps (multiples of 10 microns) at a chosen time interval using the LSM and record a spectral image at each step, referred to as a “raster scan”. The analysis procedure for the spectral images is the same in both cases. In the sit-and-stare mode one investigates temporal variations at the slit locations, whereas in raster-scan observations one studies the spatial variation over the corona by constructing two-dimensional (2-D) images of the solar corona from the spectra. Temporal variations can also be determined by taking multiple raster scans; since a raster scan takes longer, only slow variations can be studied on the 2-D image, whereas the sit-and-stare mode allows relatively faster variations to be studied, but only over the limited coronal region along the slits. It is also planned to take spectra in the sit-and-stare mode at a longer interval (∼ 1 minute) for long periods to study CMEs. A Gaussian fit to the emission-line profile at each spatial location in the corona will be made to compute the peak intensity, line width (FWHM) and central position of the peak. From these data, one can generate intensity, line-of-sight velocity and line-width maps of the solar corona, including CMEs. The continuum images and the velocity maps together will help determine the true velocity of CMEs. We plan to compile a catalogue of various CME features and make it available on a website for scientific use of the data.
§.§ Dark, flat-field and geometrical corrections
Using the dark frames, the detector flat-field spectra and other calibration information such as hot and dead pixels, the spectra will be corrected (<cit.>). The recorded spectrum sometimes shows curvature due to the optics and a tilt due to the mounting of the detector in the instrument. The absorption lines in the solar disk spectrum show very small velocities (< 1 km/s) compared with the velocity of the plasma in the dynamic solar corona. Further, the disk spectrum arising from scattered sunlight shows an even smaller velocity at different locations along the slit because it is a contribution from the whole solar disk. Generally, the coronal spectrum has two parts, one due to disk light scattered in the instrument and the other due to emission from the hot coronal plasma. Using an absorption line in the spectrum, the spectra at the various locations are shifted and aligned to a reference spectrum (chosen at the centre of the slit) to correct for the curvature and tilt. The left panel of Figure <ref> shows the spectrum of the solar disk taken with VELC at 5303 Å; the M2 mirror of VELC was illuminated with sunlight using a fibre bundle to take the disk spectra. There are four spectra corresponding to the four slits of the spectrograph; the missing parts of the spectra from the two middle slits are due to the hole in the M2 mirror. To apply the geometrical corrections, the spectrum from the extreme left slit was separated, as shown in the middle panel of the top row of the figure. The enlarged view of the spectrum shows the curvature of the absorption line. The profile at the spatial location of row 1200 was chosen as the reference, and the spectra at all other locations were shifted so that the minimum of the absorption line coincides with that at row 1200. The right panel in the top row shows the spectrum after the geometrical corrections, indicating that the curvature has been removed. Further, the left panel in the bottom row shows that the minima of the absorption line at different locations differ by 5 - 6 pixels, whereas after the corrections the minima at the various spatial locations coincide, as seen in the right panel of the bottom row. The spectra from the other slits are corrected in the same way, choosing the respective reference locations.
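A simplified version of this row-by-row alignment is sketched below. The sub-pixel estimate of the absorption-line minimum by a parabola fit and the use of scipy.ndimage.shift are assumptions made for illustration, not the exact procedure of the pipeline.

import numpy as np
from scipy.ndimage import shift as subpixel_shift


def line_minimum(profile, window):
    """Sub-pixel position of an absorption-line minimum inside the (lo, hi) pixel window."""
    lo, hi = window
    i = lo + int(np.argmin(profile[lo:hi]))
    i = int(np.clip(i, 1, len(profile) - 2))          # keep the parabola fit inside the array
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2.0 * y1 + y2
    return i + (0.5 * (y0 - y2) / denom if denom != 0 else 0.0)


def correct_curvature(spectrum, ref_row, window):
    """Shift every spatial row so its absorption-line minimum matches the reference row.

    `spectrum` is a 2-D array (spatial rows x wavelength pixels).
    """
    ref_pos = line_minimum(spectrum[ref_row], window)
    corrected = np.empty_like(spectrum, dtype=float)
    for r in range(spectrum.shape[0]):
        offset = ref_pos - line_minimum(spectrum[r], window)
        corrected[r] = subpixel_shift(spectrum[r], offset, order=1, mode='nearest')
    return corrected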
§.§ Conversion of pixel scale to wavelength scale
We have developed a code to create a file containing the pixel-versus-wavelength values by comparing the absorption lines in the disk or coronal spectrum with the atlas spectrum of the sun (https://nispdata.nso.edu/ftp/pub/atlas/fluxatl/). In this process, two absorption lines are identified in the disk or coronal spectra and the corresponding lines are selected in the solar atlas spectrum. By comparing the average centres of these absorption lines (measured away from active regions), the code computes the wavelength of each pixel and the dispersion of the spectrum.
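Assuming a linear dispersion between the two identified lines, the conversion can be sketched as follows; the function name and the example numbers are illustrative only.

import numpy as np


def wavelength_scale(n_pixels, pix1, lam1, pix2, lam2):
    """Linear pixel-to-wavelength mapping from two identified absorption lines.

    pix1/pix2 are the measured line centres (in pixels) in the observed spectrum;
    lam1/lam2 are the corresponding atlas wavelengths. Returns the dispersion
    (wavelength units per pixel) and the wavelength assigned to every pixel.
    """
    dispersion = (lam2 - lam1) / (pix2 - pix1)
    wavelengths = lam1 + (np.arange(n_pixels) - pix1) * dispersion
    return dispersion, wavelengths


# Example with made-up line positions near the 530.3 nm channel.
disp, lam = wavelength_scale(2048, pix1=512.3, lam1=529.82, pix2=1601.7, lam2=530.65)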
§.§ Correction for Narrow-band filter transmission for Multi-slit observations
The use of narrow-band filters to avoid the overlap of the spectra from neighbouring slits complicates the data analysis. After the dark, flat-field and geometrical corrections shown in Figure <ref>, the spectra need to be compensated for the transmission curve of the filter. It is easiest to separate the spectra from each slit, analyse them individually and later combine the results. Here, we have used the multi-slit spectra obtained in the 530.3 nm [Fe xiv] emission line during the total solar eclipse of 2010 at Easter Island (<cit.>). The exponential decrease of the coronal intensity with increasing solar radius adds to the complications. After determining the contribution of the filter transmission at each location in the solar corona and at each wavelength in the continuum part of the spectra, the spectra were corrected for the transmission profile of the filter so as to obtain a uniform background in the continuum. The left panel of Figure <ref> shows the spectra from three slits obtained during the total solar eclipse of July 11, 2010. The middle and right panels show the spectrum of the extreme left slit before and after compensating for the transmission of the narrow-band filter.
§.§ Scattered light correction
The coronal spectra include disk-light spectra, caused by scattering of solar disk light by the Earth's atmosphere in the case of ground-based observations, or by the instrument when observing from space. In some cases, an absorption line of the disk spectrum is blended with the emission line: for example, the 789.19 nm absorption line lies at the centre of the [Fe xi] emission line at 789.2 nm, and another absorption line at 530.27 nm lies in the blue wing of the [Fe xiv] emission line. The contributions of the continuum and of these absorption lines need to be removed to determine the profile of the emission line.
The left and right sides of the top panel in Figure <ref> show the coronal and disk spectra around the 7892 Å [Fe xi] emission line, respectively, obtained with the 25-cm coronagraph at the Norikura observatory. Using the disk spectra and the absorption lines, we remove the contribution of the scattered sunlight; the resulting emission-line spectrum is shown in the bottom panel of Figure <ref>, which clearly shows the emission line at 7892 Å. There is still a signature of the absorption lines against the uniform background, but the intensity of these remnants is very low and has no effect on fitting a Gaussian profile to the emission line or on determining the emission-line parameters, such as peak intensity, central wavelength and line width. The developed code will be applied to the spectroscopic observations obtained with VELC.
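One simple way to perform this subtraction, assuming the coronal and disk spectra are on the same wavelength grid, is to scale the disk spectrum to the coronal continuum by least squares before subtracting it; the snippet below is an illustrative sketch, not the actual code.

import numpy as np


def remove_scattered_light(coronal, disk, continuum_windows):
    """Subtract the disk (scattered-light) spectrum scaled to the coronal continuum level.

    `coronal` and `disk` are 1-D spectra on the same wavelength grid;
    `continuum_windows` is a list of (lo, hi) pixel ranges free of coronal emission.
    """
    idx = np.concatenate([np.arange(lo, hi) for lo, hi in continuum_windows])
    # Least-squares scale factor between the disk and coronal continuum levels.
    scale = np.dot(coronal[idx], disk[idx]) / np.dot(disk[idx], disk[idx])
    return coronal - scale * disk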
§.§ Determination of emission line parameters
The faint absorption lines (Figure <ref>) seen in the emission-line spectra are small residuals and do not affect the determination of the emission-line parameters from a Gaussian fit to the observed profiles. Given the approximate centre of the emission line, the interval for the Gaussian fit and the pixel-to-wavelength conversion, we determine the peak intensity, full width at half maximum (FWHM) and Doppler velocity at all spatial locations along the slit. The first column of Figure <ref> shows the observed profiles at three representative spatial locations obtained during the total solar eclipse of July 11, 2010 with the multi-slit spectrograph (<cit.>). The middle and right columns show the contribution of the transmission profile of the filter and the remaining emission-line profile after compensating for the transmission profile of the narrow-band filter. A Gaussian fit to the emission-line profile is also shown, from which the line width, intensity and line-of-sight velocity at that spatial location are computed. The peak intensity, position of the peak, FWHM in pixels and FWHM in Angstrom of the emission line, after correction for the instrumental profile, are given in Table <ref>. The spectrum appears noisy because of the very short exposure time used to study temporal oscillations. This analysis will be carried out at each spatial location along the slits. The code has a provision to exclude a chosen number of pixels from the Gaussian fit when they contain residual signal from absorption lines. In raster-scan observations, these parameters will be combined to make intensity, Doppler and line-width maps of the scanned region, and the data from all four slits will be combined to generate an image of the observed solar corona. Figure <ref> shows an example of a coronal region observed with the 25-cm coronagraph using a similar procedure.
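A minimal sketch of the per-location fit, using scipy.optimize.curve_fit and an optional mask for pixels contaminated by residual absorption lines, is given below; the function interface is an assumption made for illustration.

import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light in km/s


def gaussian(x, amp, cen, sigma, offset):
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2) + offset


def fit_emission_line(wave, profile, rest_wavelength, mask=None):
    """Fit a Gaussian to one emission-line profile.

    Returns the peak intensity, FWHM (in the units of `wave`) and line-of-sight velocity (km/s).
    `mask`, if given, is a boolean array marking pixels to exclude from the fit.
    """
    if mask is not None:
        wave, profile = wave[~mask], profile[~mask]
    p0 = [profile.max() - profile.min(), wave[np.argmax(profile)],
          (wave[-1] - wave[0]) / 10.0, profile.min()]
    (amp, cen, sigma, offset), _ = curve_fit(gaussian, wave, profile, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
    velocity = (cen - rest_wavelength) / rest_wavelength * C_KMS
    return {"peak": amp, "centre": cen, "fwhm": fwhm, "velocity_km_s": velocity}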
§.§ Alignment and re-scaling of images
The CMOS detectors used to record the Fe[xi] and Fe[xiv] lines and the IR detector used for the Fe[xiii] emission line have different pixel spatial scales: 1.25 arcsec/pixel for the CMOS detectors and 4.8 arcsec/pixel for the IR detector. The images made from the observed spectra therefore need to be aligned with each other and brought to a common spatial scale before the parameters of the different emission lines can be compared. This will be achieved by taking spectra with a cross wire placed on the slits of the spectrograph. A code has been developed and tested for this purpose.
§.§ Temperature and Doppler maps of solar corona
The intensity of these coronal lines is temperature sensitive, as the abundance of the respective ions depends on the temperature of the plasma. By taking the ratio of the intensities of these lines, e.g., Fe[xiii] / Fe[xi] and Fe[xiv] / Fe[xi], we shall generate temperature maps of the solar corona. Using these temperature maps and the line-width information, we shall be able to derive the non-thermal component of the plasma at each location of the solar corona and thus generate Doppler maps. Multi-temperature intensity, velocity, line-width and Doppler maps are expected to provide a detailed picture of the physical and dynamical nature of the solar corona and coronal loops.
§.§ Alignment of maps
The location of a point in the solar corona is generally defined in terms of solar radii from the centre of the Sun and the angle measured from the solar north pole towards the east. Hence, the images will be rotated using the “yaw” angle of the satellite and the “P” angle of the Sun at the time of observation, so that the north pole of the Sun is vertical and east is on the left side of the image, as is the general convention. The images will also be shifted according to the “roll” and “pitch” angles of the satellite at the time of observation.
§ ANALYSIS OF SPECTRO-POLARIMETRIC DATA
While the spectro-polarimetric observations of the solar corona are being made in the Fe[xiii] emission line, the other two channels may also record spectra in the Fe[xi] and Fe[xiv] emission lines. There will be two spectra in the Fe[xiii] line for each slit because of the polarizing beam splitter. The analysis of the spectra, up to the generation of the emission-line profiles, will be carried out in a similar way, as partly explained in <cit.> and <cit.>. To derive the I, Q, U and V parameters, a complete methodology will be adopted, which will be described separately.
§ AVAILABILITY OF SOFTWARE CODES
It is planned to share information about the codes with data users before and after the launch of the payload through meetings at the Indian Institute of Astrophysics, Bengaluru, and some training will also be given to the participants. The codes need to be verified with actual data and with the working of the instrument during the PV (payload verification) phase. After making the required changes and confirming that the codes work properly, they will be put in the public domain. Information and training will again be provided to users in workshops during the PV and GT phases. It may be noted that the proposal submission form used to plan the observations is being developed and tested; it has gone through a number of revisions because of changes in the hardware and control electronics. The details of the proposal submission form will be shared in all the proposed workshops and meetings.
§ ACKNOWLEDGEMENTS
We thank all the scientists and engineers at the various centres of ISRO, such as URSC, LEOS, SAC and VSSC, and at the Indian Institute of Astrophysics, who have made great contributions in bringing the mission to its present state. We gratefully acknowledge the financial support from ISRO for this project. The coronal spectroscopic data used here were obtained by Prof. Jagdev Singh at the Norikura observatory, Japan. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Sonnensystemforschung (MPS, Germany), Laboratoire d'Astronomie (LAS, France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA.
[Brueckner et al (1995) 1995]brueckner1995
Brueckner, G. E., Howard, R. A., Koomen, M. J. 1995, The Large Angle Spectroscopic Coronagraph (LASCO). Solar Physics, 162, 357.
[Ichimoto et al (1999) 1999]ichimoto1999
Ichimoto, Kiyoshi., Noguchi, Motokazu., Tanaka, Nobuyuki., Kumagai, Kazuyoshi., Shinoda, Kazuya., Nishino, Tetsuo., Fukuda, Takeo., Sakurai, Takashi., Takeyama, Norihide. 1999, A New Imaging System of the Corona at Norikura, PASJ, 51, 383 - 391.
[Kumar et al (2018) 2018]kumar2018
Kumar, N., Raghavendra Prasad, Budihal., Singh, Jagdev., Venkata, S. N. 2018, Optical design of visible emission line coronagraph on Indian space solar mission Aditya-L1. Experimental Astronomy, 45.
[Nagaraju et al (2021) 2021]nagaraju2021
Nagaraju, K., Prasad, B.R., Hegde, B.S., et al. 2021, Spectropolarimeter on board the aditya-L1, polarization modulation and demodulation. Applied optics, 60, 8145-8153.
[Prasad et al (2017) 2017]prasad2017
Raghavendra Prasad, Budihal., Banerjee, Dipankar., Singh, Jagdev., Subramanya, Nagabhushana., Kumar, Amit., Kamath, P., Kathiravan, S., Venkata, S.N, Rajkumar, N., Venkatasubramanian, Natarajan ., Juneja, Madhur., Somu, Pawan., Pant, Vaibhav., Shaji, Nigar., Sankarsubramanian, K., Patra, Asit., Venkateswaran, R., Adoni, Abhijit., Narendra, S., Jaiswal, Bhavesh. 2017, Visible Emission Line Coronagraph on Aditya-L1, Current Science, 113, 613-615.
[Samanta et al (2016) 2016]samanta2016
Samanta, T., Singh, J., Sindhuja, G., Banerjee, D. 2016, Detection of High-Frequency Oscillations and Damping from Multi-slit Spectroscopic Observations of the Corona, Solar Physics, 291, 155.
[Sasikumar et al (2022) 2022]sasikumar2022
Sasikumar Raja, K., Venkata, S.N., Singh, J., Prasad, B.R. 2022, Solar coronal magnetic fields and sensitivity requirements for spectropolarimetry channel of VELC onboard Aditya-L1. Advances in Space Research, 69, 814-822.
[Singh et al (2004) 2004]singh2004
Singh, J., Takashi Sakurai., Kiyoshi Ichimoto., Tetsuya Watanabe. 2004, Complex variations in the line-intensity ratio of coronal emission lines with height above the limb, The Astrophysical Journal, 617, L81–L84.
[Singh et al (2006) 2006]singh2006
Singh, J., Takashi sakurai., Kiyoshi Ichimoto. 2006, Do the line widths of coronal emission lines increase with height above the limb?. The Astrophysical Journal, 639, 475–483.
[Singh et al (2011) 2011]singh2011
Singh, Jagdev., Raghavendra Prasad, Budihal., Venkatakrishnan, P., Sankarasubramanian, K., Banerjee, Dipankar., Bayanna, A., Mathew, Shibu., Murthy, Jayant., Subramaniam, Prasad., Rajaram, Ramesh., Kathiravan, S., Subramanya ., Nagabhushana, K., Mahesh., Manoharan, P., Uddin, Wahab., Sripadmanaban, Sriram., Kumar, Amit ., Srivastava, N., Rao, Koteswara., Patra, Asit. 2011, Proposed visible emission line space solar coronagraph, Current Science, 100.
[Singh et al (2011) 2011]singh2011a
Singh, J., Hasan, S.S., Gupta, G. R., Nagaraju, K., Banerjee, D. 2011, Spectroscopic Observation of Oscillations in the Corona During the Total Solar Eclipse of 22 July 2009, Solar Phys, 270, 213–233.
[Singh et al (2019) 2019]singh2019
Singh, J., Raghavendra Prasad, B., Venkata, S., Kumar, A. 2019, Exploring the outer emission corona spectroscopically by using Visible Emission Line Coronagraph (VELC) on board ADITYA-L1 mission, Advances in Space Research, 64, 7, 1455–1464.
[Singh et al (2022) 2022]singh2022
Singh, J., Raghavendra Prasad, B., Chavali Sumana, Amit Kumar, Varun Kumar, Muthu Priyal, Venkata, S.N. 2022, Data pipeline architecture and development for VELC onboard Space Solar Mission AdityaL1, Advances in Space Research, 69, 2601–2610.
[Venkata et al (2017) 2017]venkata2017
Venkata, S.N., Prasad, B.R., Nalla, R.K., Singh, J. 2017, Scatter studies for visible emission line coronagraph on board ADITYA-L1 mission. J. Astron. Telescopes Instrum. Syst., 626, 3.
[Venkata et al (2021) 2021]venkata2021
Venkata, S.N., Prasad, B.R., Singh, J. 2021, Spectropolarimetry Package for Visible Emission Line Coronagraph (VELC) on board Aditya-L1 Mission. Experimental Astronomy, 53, 71-82.
[ Wülser et al (2018) 2018]wulser2018
Wülser, JP., Jaeggli, B., De Pontieu., et al. 2018, Instrument Calibration of the Interface Region Imaging Spectrograph (IRIS) Mission. Solar Physics, 293, 149.
|
http://arxiv.org/abs/2307.00472v1
|
20230702044419
|
Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems
|
[
"Furkan Gursoy",
"Ioannis A. Kakadiaris"
] |
cs.LG
|
[
"cs.LG",
"cs.CY"
] |
Equal Confusion Fairness:
Measuring Group-Based Disparities
in Automated Decision Systems
Furkan Gursoy, Ioannis A. Kakadiaris
Computational Biomedicine Lab
Dept. of Computer Science
University of Houston
Houston, TX, USA
{fgursoy, ioannisk}@uh.edu
F. Gursoy and I. A. Kakadiaris, "Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems," 2022 IEEE International Conference on Data Mining Workshops (ICDMW), Orlando, FL, USA, 2022, pp. 137-146. <https://doi.org/10.1109/ICDMW58026.2022.00027>
As artificial intelligence plays an increasingly substantial role in decisions affecting humans and society, the accountability of automated decision systems has been receiving increasing attention from researchers and practitioners. Fairness, which is concerned with eliminating unjust treatment and discrimination against individuals or sensitive groups, is a critical aspect of accountability. Yet, for evaluating fairness, there is a plethora of fairness metrics in the literature that employ different perspectives and assumptions that are often incompatible. This work focuses on group fairness. Most group fairness metrics desire a parity between selected statistics computed from confusion matrices belonging to different sensitive groups. Generalizing this intuition, this paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness. To further analyze the source of potential unfairness, an appropriate post hoc analysis methodology is also presented. The usefulness of the test, metric, and post hoc analysis is demonstrated via a case study on the controversial case of COMPAS, an automated decision system employed in the US to assist judges with assessing recidivism risks. Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment, such as those based on the system accountability benchmark.
fairness, artificial intelligence, automated decision systems, algorithmic accountability, algorithm audit
§ INTRODUCTION
Corresponding with the advances in artificial intelligence (AI) technology and the wider adoption of AI technologies by practitioners, automated decision systems (ADS) have begun to play an increasingly substantial role in assisting or making important decisions affecting human lives. Such decisions assisted by ADS include criminal recidivism risk assessment <cit.>, welfare fraud risk scoring <cit.>, biometric recognition in law enforcement <cit.>, employment decisions <cit.>, and visa application decisions <cit.>. Such uses of ADS are not free from issues such as bias and discrimination, and the referenced works include discussions on why those AI-based systems may be problematic.
The problematic applications of AI do not necessarily imply that all uses of ADS should be avoided. On the contrary, if their accountability is ensured, such systems may improve efficiency and effectiveness in many decision-making tasks. For instance, a systematic review of more than 50 papers found that the majority of AI-enabled decision support systems improve patient safety outcomes in healthcare settings <cit.>. However, the same study notes the lack of standardized benchmarks and homogeneous AI reporting. To this end, frameworks such as the system accountability benchmark <cit.> aim to improve the standardization of AI accountability assessment and reporting within an exhaustive scheme. There are also legal and regulatory efforts to ensure accountability of ADS, mainly in the US <cit.>, the EU <cit.>, and the UK <cit.>.
Fairness is concerned with unjust outcomes for individuals or groups. Individual fairness postulates that similar persons should receive similar outcomes <cit.>. Group fairness, on the other hand, is concerned with eliminating unjust outcomes based on sensitive group membership <cit.>. Group fairness has been receiving increasing attention from researchers, practitioners, and legislators as many AI systems may exhibit bias based on race <cit.>, gender <cit.>, age <cit.>, disability status <cit.>, political orientation <cit.>, and religion <cit.>. This paper concentrates on group fairness.
There are multiple approaches and numerous notions and metrics for group fairness. These do not agree on a single fairness definition. This is so because fairness does not have a value-free definition, and different fairness approaches may adhere to different value principles. Consequently, the plethora of fairness metrics in the literature makes it challenging for practitioners to choose among many incompatible alternatives. It may also enable a "cherry-picking" behavior. This work aims to unify major fairness approaches and notions in a general but unique fairness assessment methodology and operationalize it to facilitate practical and effective use in the real world. The proposed methodology may also be employed to evaluate the group fairness elements included in larger accountability frameworks.
The main contributions of this paper can be enumerated as follows.
* Equal confusion fairness, a new group fairness notion, is introduced.
* The proposed notion is operationalized by designing appropriate testing and measurement processes.
* An equal confusion test is designed to identify whether an ADS exhibits unfair behavior.
* A confusion parity error is proposed to quantify the extent of unfairness exhibited by the system.
* An appropriate methodology for the post hoc analysis is presented to identify the impacted groups and characterize the specific unfair behavior.
* A software program to assist with the analysis of equal confusion fairness is provided as an open-source tool.[The code and reproducibility files are made available at <https://github.com/furkangursoy/equalconfusion>.]
The rest of the work is structured as follows. Section <ref> provides a comparative overview of the related work on group fairness. Section <ref> presents the methods for the equal confusion test, confusion parity error, and the post hoc analysis. Section <ref> demonstrates the applicability and usefulness of the proposed methods using a real-world dataset from an actual recidivism risk assessment tool that is employed in the US criminal justice system to assist judges in their decision-making. Final remarks and directions for future research are provided in Section <ref>.
§ RELATED WORK
Albeit a relatively new topic, fairness in machine learning has seen a dramatic increase in publication numbers in recent years. Generally speaking, a distinction can be made between individual fairness and group fairness. While individual fairness focuses on whether similar individuals receive similar outcomes, group fairness focuses on whether the decisions are just for members of different groups on average. Usually, individual fairness notions employ distance functions to compute the similarity between individuals and the similarity between their respective outcomes. On the other hand, group fairness notions usually seek parity of selected statistics between different groups. Causality-based methods may be viewed as another stream. However, specific causality-based studies either focus on group fairness or individual fairness. This section summarizes major approaches to group fairness to provide a background for the methodology provided in the next section.
Three major approaches to group fairness exist: independence, separation, and sufficiency. All three are defined based on joint distributions of sensitive characteristics s, predictions ŷ, and ground truth values y. Independence requires that sensitive characteristics (e.g., race- or sex-based group memberships) and predictions are statistically independent. Separation requires that sensitive characteristics and predictions are conditionally independent given ground truth values. Sufficiency requires that sensitive characteristics and ground truth values are conditionally independent given predictions. The three approaches can be mathematically represented respectively as s ⊥ŷ, s ⊥ŷ | y, and s ⊥ y | ŷ.
There is an abundance of fairness metrics in the literature. Mehrabi et al. <cit.> provided 10 widely used fairness measures. Makhlouf et al. <cit.> presented 19 fairness measures, 16 of which are for group fairness. Castelnovo et al. <cit.> and Verna and Rubin <cit.> presented 19 and 20 fairness measures, respectively. Fairness 360 toolkit by IBM <cit.> contains more than 70 fairness metrics as of 2022.
Enumeration and the detailed investigation of those fairness metrics are beyond the scope of this work. Interested readers are encouraged to refer to the cited works and other surveys on the topic <cit.>. However, the following should be noted. Except for causality-based metrics, most group fairness metrics can be calculated from the confusion matrices belonging to different sensitive groups and many follow one of the three major approaches <cit.>.
Confusion matrices tabulate the relationship between ŷ and y, providing information on the type of errors made by a classifier. For binary classification, a confusion matrix consists of four cells, as shown in Table <ref>. The cells contain the frequencies for true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). From a confusion matrix, additional statistics can be defined. Precision is defined as the fraction of actual positives among all positive predictions. Negative predictive value is defined as the fraction of actual negatives among all negative predictions. Recall is defined as the fraction of predicted positives among all actual positives. Specificity is defined as the fraction of predicted negatives among all actual negatives. Their mathematical definitions are given below. Any three of the four are necessary and sufficient to compute the fourth and to fully identify the distribution of the confusion matrix:
* Precision: TP / (TP + FP),
* Negative Predictive Value: TN / (TN + FN),
* Recall: TP / (TP + FN), and
* Specificity: TN / (TN + FP).
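As an illustration, the four statistics can be computed directly from the confusion-matrix cells. The snippet below uses the overall COMPAS counts reported in the case study later in this paper; the FP, FN and TN values are derived by subtraction from the totals quoted there.

def confusion_statistics(tp, fp, fn, tn):
    """Precision, negative predictive value, recall and specificity from a binary confusion matrix."""
    return {
        "precision": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
        "recall": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }


# Overall COMPAS figures from the case study: 4,020 cases, 1,107 predicted risky,
# 652 actual recidivists, 346 true positives; the remaining cells follow by subtraction.
print(confusion_statistics(tp=346, fp=761, fn=306, tn=2607))
# precision ~ 0.31 and recall ~ 0.53, consistent with the values reported there.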
In relation to confusion matrices, the three major fairness approaches require the following respective quantities to be on par across sensitive groups <cit.>:
* Independence: (TP + FP) / (TP + FP + FN + TN),
* Sufficiency: TP / (TP + FP) and TN / (TN + FN) (i.e., precision and negative predictive value, respectively), and
* Separation: TN / (TN + FP) and TP / (TP + FN) (i.e., specificity and recall, respectively).
When sufficiency and separation are known, the distribution of the confusion matrix becomes known. Hence, independence may also be computed. Moreover, once the distribution is known, other fairness metrics based on confusion matrices may also be computed. The case of all three fairness approaches being satisfied is known as total fairness <cit.>. However, it is not possible to satisfy all three at the same time except in specific cases <cit.>.
Although their simultaneous satisfaction is rarely observed outside rhetorical cases <cit.>, all three fairness approaches are sought to be satisfied as much as possible. This paper argues that while the impracticality regarding the simultaneous and perfect satisfaction of the three approaches should be acknowledged, practitioners should strive to achieve the best possible overall performance in all three. This would also prevent "cherry-picking" among more confined fairness metrics when evaluating an ADS for fairness. Therefore, this paper argues that the distribution of confusion matrices, from which most group fairness metrics are computed, should be on par across different groups. As the specific source(s) of a potential unfairness result would not be immediately apparent, any unfairness result should be followed up by an appropriate post hoc analysis that seeks to reveal and characterize the inequalities between the confusion matrices.
Another noteworthy and relevant concept is intersectional fairness <cit.>. Intersectionality is a framework to study how overlapping identities may create different inequities in the sense that the sum is more than the parts. Thus, intersectional fairness requires the analysis of intersectional groups (e.g., Hispanic females) rather than isolated analyses of, for instance, race and sex. The intersectional approach also limits fairness gerrymandering <cit.> where a system appears fair at a group level but is not fair at a subgroup level.
§ EQUAL CONFUSION FAIRNESS
§.§ Notation
Scalar values are denoted by lower case letters (e.g., a).
Vectors are denoted by boldface lowercase letters (e.g., 𝐚). The i^th element of 𝐚 is denoted by 𝐚_i.
Matrices are denoted by boldface uppercase letters (e.g., 𝐀).
The i^th row vector and j^th column vector of 𝐀 are denoted by 𝐀_i* and 𝐀_*j, respectively.
The entry at the intersection of i^th row and j^th column of 𝐀 is denoted by 𝐀_ij.
The real value space, nonnegative integer space, and categorical value space are denoted respectively by ℝ, ℤ^+, and 𝕊.
A vector of categorical values with size n is denoted as 𝐚∈𝕊^n. A non-negative integer-valued matrix with n rows and m columns is denoted as 𝐀∈ℤ^+^n × m.
§.§ Problem Definition
This paper proposes an equal confusion approach to investigate the fairness of a decision system, given the following:
* a matrix 𝐗 that represents n humans and m features where 𝐗∈(ℝ ∪ 𝕊)^n × m,
* a vector 𝐬 that represents the sensitive group memberships for the n humans where 𝐬∈𝕊^n, regardless of whether 𝐬⊥𝐗 in general,
* a decision system f: 𝐗→ŷ,
* decision outputs ŷ where ŷ∈𝕊^n, and
* corresponding ground truth values 𝐲 where 𝐲∈𝕊^n.
Equal confusion fairness requires the confusion matrices to have the same distribution across all sensitive groups. To this end, first, a statistical test is presented to determine whether a decision system is fair or not. Second, a fairness metric is presented to measure the extent of unfairness, if any. Third, a post hoc test is presented to detect the differences in specific sensitive groups and specific decision system behavior contributing to unfairness, if any.
§.§ Equal Confusion Test
To determine whether a decision system is fair or not, equal confusion fairness investigates the relation between sensitive groups and outcome groups.
Usually, 𝐬 represents protected groups such as those based on gender and race.
The pair {ŷ, 𝐲} represents outcome groups. Specifically, the unique value pairs in {ŷ, 𝐲} correspond to the cells in the confusion matrix. For instance, in the case of a binary decision problem, outcome groups are true positive, false positive, true negative, and false negative.
The equal confusion test employs Pearson's chi-squared test of independence to test the relationship between 𝐬 and {ŷ, 𝐲}. The relevant null and alternate hypotheses for Pearson's chi-squared test of independence are as follows.
𝐇_0: 𝐬 and {ŷ, 𝐲} are independent.
𝐇_𝐀: 𝐬 and {ŷ, 𝐲} are dependent.
The test requires a contingency matrix 𝐎∈ℤ^+^q × r where q is the number of sensitive groups and r is the number of outcome groups (i.e., the number of cells in the confusion matrix).
The contingency matrix cross-tabulates the observed frequencies for sensitive groups and outcome groups.
Fig. <ref> illustrates the generation of the contingency matrix from the set of confusion matrices for a decision system with three possible outputs (i.e., ŷ_i, 𝐲_i∈{α, β, θ}) and three sensitive groups. For this system, q=3 and r=9.
Figs. <ref>a, <ref>b, and <ref>c represent the confusion matrices 𝐂^1, 𝐂^2, and 𝐂^3, respectively, corresponding to the three sensitive groups.
Consequently, for instance, the cell value b^' corresponds to the number of people (i) who belong to the second sensitive group, (ii) for whom the decision system produced the label ŷ_i=α, and (iii) whose ground truth label is 𝐲_i=β.
Each confusion matrix is flattened to obtain a single vector.
The obtained vectors are stored in the rows of the contingency matrix (Fig. <ref>d). Hence, 𝐎_i* is equivalent to 𝐂^i.
Therefore, sensitive groups and outcome groups are represented respectively in the rows and columns of the contingency matrix.
After establishing 𝐎, the expectation matrix 𝐄 is computed. The matrix 𝐄 has the same shape as 𝐎 and represents the case of independence between 𝐬 and {ŷ, 𝐲}, the expected frequencies under the null hypothesis. The values of its entries, 𝐄_ij, are computed as shown in Eq. <ref>.
𝐄_ij = ∑_k=1^q𝐎_kj∑_l=1^r𝐎_il/∑_k=1,l=1^q,r𝐎_kl
Then, the chi-squared statistic χ^2, the sum of normalized squared differences between the observed and expected values, is computed as shown in Eq. <ref>.
χ^2 = ∑_i=1^q ∑_j=1^r (𝐎_ij - 𝐄_ij)^2/𝐄_ij
To evaluate the significance level for Pearson's chi-squared test of independence, the corresponding p value can be obtained from the chi-squared distribution with (q-1)(r-1) degrees of freedom. If it is found as statistically significant (e.g., p<0.01), the null hypothesis is rejected and the strength of the association between 𝐬 and {ŷ, 𝐲} is investigated next.
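A possible implementation of the test, using scipy.stats.chi2_contingency on the contingency matrix built from the per-group confusion matrices, is sketched below; the function names and interfaces are illustrative and may differ from the accompanying open-source tool.

import numpy as np
from scipy.stats import chi2_contingency


def contingency_from_groups(sensitive, y_pred, y_true):
    """Build the q x r contingency matrix O: rows are sensitive groups, columns are the
    flattened confusion-matrix cells, i.e. (prediction, ground truth) pairs."""
    groups = np.unique(sensitive)
    labels = np.unique(np.concatenate([y_pred, y_true]))
    O = np.zeros((len(groups), len(labels) ** 2), dtype=int)
    for gi, g in enumerate(groups):
        in_group = sensitive == g
        for pi, p in enumerate(labels):
            for ti, t in enumerate(labels):
                O[gi, pi * len(labels) + ti] = np.sum(in_group & (y_pred == p) & (y_true == t))
    return O


def equal_confusion_test(sensitive, y_pred, y_true):
    """Pearson chi-squared test of independence between sensitive groups and outcome groups."""
    O = contingency_from_groups(sensitive, y_pred, y_true)
    # Note: expected frequencies must be positive; very sparse cells may need to be merged first.
    chi2, p_value, dof, expected = chi2_contingency(O, correction=False)
    return chi2, p_value, dof, O, expected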
§.§ Confusion Parity Error
The confusion parity error is equivalent to Cramer's V <cit.> computed on 𝐎. It is a measure of the association between two categorical variables based on the chi-squared statistic. It generalizes the Matthews correlation coefficient <cit.> beyond binary variables, which is otherwise only applicable to two-by-two contingency matrices (i.e., extending it for r>4). Cramer's V, denoted by ϕ, is computed as shown in Eq. <ref>.
ϕ = √((χ^2 / n) / min(q-1, r-1))
Its range is [0,1] irrespective of the shape of 𝐎. The value 0 corresponds to no association and 1 corresponds to complete association. The lower bounds of ϕ for determining small, moderate, or strong association strength are presented in Table <ref> following the recommendations provided by <cit.>. However, interpreting such effect sizes requires caution and may depend on the context <cit.>.
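Continuing the sketch above, the confusion parity error can be computed from the same contingency matrix; again this is an illustrative snippet rather than the reference implementation.

import numpy as np
from scipy.stats import chi2_contingency


def confusion_parity_error(O):
    """Cramer's V computed on the contingency matrix O."""
    chi2 = chi2_contingency(O, correction=False)[0]
    n = O.sum()
    q, r = O.shape
    return float(np.sqrt((chi2 / n) / min(q - 1, r - 1)))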
§.§ Post hoc Fairness Analysis
Pearson's chi-squared test of independence is omnibus. That is, it is a global test that does not reveal the specific source of a statistically significant result <cit.>. In the simplest case of a binary classification with only two sensitive groups (i.e., q=2 and r=4), a statistically significant fairness test result does not reveal which cells of the original confusion matrix (i.e., the cells that denote true positive, false positive, true negative, and false negative) contribute towards the statistically significant result. In the case of more than two sensitive groups (i.e., q > 2), a statistically significant test result does not reveal among which groups the identified discrepancy exists. Here, a suitable post hoc analysis method is presented to identify the contingency matrix cells contributing to the unfairness determined by the fairness test.
Adjusted standardized residual 𝐑_ij for a specific cell 𝐎_ij is computed by finding the difference between the observed and the expected value and then normalizing this value with an appropriate adjustment and standardization <cit.>. Equation <ref> presents adjusted standardized residual 𝐑_ij that corresponds to the cell 𝐎_ij.
𝐑_ij = (𝐎_ij - 𝐄_ij) / √(𝐄_ij (1-∑_l=1^r𝐎_il/∑_k=1,l=1^q,r𝐎_kl) (1-∑_k=1^q𝐎_kj/∑_k=1,l=1^q,r𝐎_kl))
The residual 𝐑_ij is then tested against the standard normal distribution at an appropriate significance level <cit.>. It is suggested in the literature to either apply a Bonferroni correction based on the number of cells in the contingency matrix <cit.> or evaluate the statistical significance at a stricter level <cit.>. For a desired statistical significance level of 95%, the appropriate p-value would be 0.05/qr after the Bonferroni correction instead of 0.05. A more stringent p-value such as 0.001 is recommended in the latter. Employing the p-value of 0.001, 𝐑_ij values less than -3.29 indicate a smaller value of 𝐎_ij than expected. 𝐑_ij values more than 3.29 indicate that 𝐎_ij is higher than expected. As the expectation reflects no discrepancy between the sensitive groups in the confusion matrix, such deviations reveal the specific sources of unfairness.
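The post hoc step can be sketched as follows, computing the adjusted standardized residual of every cell and flagging those beyond the two-sided critical value for p = 0.001; as before, the interface is an illustrative assumption.

import numpy as np
from scipy.stats import norm


def adjusted_standardized_residuals(O):
    """Adjusted standardized residuals R_ij for every cell of the contingency matrix O."""
    O = np.asarray(O, dtype=float)
    total = O.sum()
    row = O.sum(axis=1, keepdims=True)   # row (sensitive group) totals
    col = O.sum(axis=0, keepdims=True)   # column (outcome group) totals
    E = row @ col / total                # expected frequencies under independence
    return (O - E) / np.sqrt(E * (1 - row / total) * (1 - col / total))


def significant_cells(R, p=0.001):
    """Boolean mask of cells whose residuals exceed the two-sided critical value (|R| > 3.29 for p = 0.001)."""
    return np.abs(R) > norm.ppf(1 - p / 2)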
§.§ Complexity Analysis
Computing the contingency matrix (Fig. <ref>) has the computational complexity of O(n) where n is the number of humans. Computational complexity of computing expected values (Eq. <ref>), computing χ^2 (Eq. <ref>), and computing adjusted standardized residuals (Eq. <ref>) is O(qr). The computational complexity for computing Cramer's V is O(1). In most realistic cases, q and r are very small. The computational complexity of O(n) indicates a linear time. The space complexity is O(qr). Hence, the presented methods are highly scalable.
The presented methodology has specific sample size requirements, as will be stated next. Therefore, sample complexity is a more constraining factor in comparison to time and space complexities.
§.§ Scope, Discussion, and Limitations
A list of considerations is provided below to clarify the scope of the applicability of the presented techniques and their limitations.
* The approach is applicable for binary and multi-class classification tasks. It is not applicable for regression tasks.
* Reliable and unbiased ground truth labels are required.
* A representative and acceptable test set is required. In practice, the representativeness of a test set may not be determined with absolute certainty. Therefore, an ongoing fairness assessment that utilizes the data collected via the system's real-world use is highly recommended as part of a larger monitoring strategy.
* The test set and the frequencies in each cell should be sufficiently large to allow a reliable statistical analysis. Cochran <cit.> recommends for Pearson's chi-squared test of independence that expected frequencies in the contingency matrix should be (i) at least five for at least 80% of the cells and (ii) at least one in all cells.
* The presented fairness approach is rather strict and forces independence, sufficiency, and separation, which can be simultaneously maximized only in very restrictive cases.
* If there is more than a single type of sensitive group (e.g., when both gender and race need to be considered), the test can be repeated separately for gender groups and race groups. If it is desired to compare intersectional groups, such groups may be created from race and gender (e.g., black men, black women, white men, white women, and so on). Therefore, the presented techniques are also suitable for intersectional perspectives.
* If there is more than one dependent variable, the test can be repeated for each dependent variable separately. Alternatively and additionally, it can be repeated for intersections of the dependent variables, similar to the intersectional groups.
* In certain cases, the fairness test and post hoc analysis results may not agree. For instance, an unfairness detected by the fairness test may not be traced to individual cells by the post hoc analysis. Such a result may be due to the statistical significance of a combination of multiple cells where no single cell can be individually identified as statistically significant <cit.>.
§ CASE STUDY
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a commercially developed automated decision-aiding tool for risk assessment in criminal justice. It has been used in several US states, including Florida, New York, Wisconsin, and California <cit.>. It can produce risk scores for recidivism, violent recidivism, and failure to appear <cit.>. It uses a proprietary methodology, and the underlying computations are made available neither to the defendant nor to the court <cit.>.
In 2016, in a case brought by a defendant against the State, the Wisconsin Supreme Court decided that the use of COMPAS by a court did not violate the defendant's due process rights <cit.>. Later in 2017, the Supreme Court of the United States denied an appeal by the defendant <cit.>. Nevertheless, the use of COMPAS remained controversial, with several studies performed on the subject <cit.>.
§.§ Methodology
The dataset used in this paper is published <cit.> alongside the original ProPublica story <cit.> that attracted widespread attention to the subject. The dataset was originally obtained via public information requests in Broward County, Florida. It contains 18,610 people who were scored in 2013 and 2014. COMPAS can be used at different stages of the criminal justice system, including parole and probation. However, this particular county uses it primarily at the pretrial stage <cit.>, for which there are a total of 11,757 people.
At the pretrial stage, COMPAS produces scores including recidivism risk and violent recidivism risk. This case study focuses on violent recidivism, which consists of murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault by the FBI definition <cit.>. COMPAS scores are integers from 1 to 10, where scores from 1 to 4 correspond to low risk, 5 to 7 correspond to medium risk, and scores above 7 correspond to high risk. According to the practitioner guide for COMPAS <cit.>, medium and high scores receive more interest from supervision agencies. Therefore, in line with the original ProPublica analysis <cit.>, a medium or high score is treated as a positive prediction for violent recidivism. According to the same guide <cit.>, the COMPAS recidivism score predicts the risk of recidivism in the next two years. Via the data collected from public criminal records, the dataset also contains information on whether the defendant is charged with a violent criminal offense within two years after the original COMPAS screening <cit.>.
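For concreteness, a minimal sketch of this binarization step is given below. It is not the authors' code; the helper names are hypothetical and only reflect the thresholds stated above.

```python
# Hypothetical helper (not the authors' code) for the binarization described above:
# deciles 1-4 are "low", 5-7 "medium", 8-10 "high"; medium and high scores are
# treated as a positive prediction of violent recidivism.
def risk_level(decile: int) -> str:
    if decile <= 4:
        return "low"
    return "medium" if decile <= 7 else "high"

def predicted_risky(decile: int) -> int:
    return int(risk_level(decile) != "low")

assert [predicted_risky(s) for s in (1, 4, 5, 7, 8, 10)] == [0, 0, 1, 1, 1, 1]
```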
Following the same procedure as ProPublica <cit.>, filtering is performed to remove (i) cases where the COMPAS assessment date is not within 30 days of arrest or charge dates, (ii) cases where COMPAS assessment is not found, and (iii) cases where the defendant did not have at least two years outside a correctional facility. The final dataset contains 4,020 COMPAS cases. For each case, the following information is available:
* Sex: Female, Male;
* Race: African-American, Asian, Caucasian, Hispanic, Native American, Other;
* Predicted violent recidivism: 0–Non-risky, 1–Risky; and
* Actual violent recidivism: 0–Non-recidivist, 1–Recidivist.
Initially, descriptive statistics are explored to have a general understanding of the data. Then, for sensitive groups based on sex, race, and their intersections, fairness assessment studies are conducted. The equal confusion test is used to check whether the system exhibits unfair behavior, followed by the confusion parity error to measure the magnitude of unfairness. Finally, a post hoc fairness analysis is performed to reveal the specific characteristics of the unfairness and impacted groups. These analyses are supported by a set of tables presenting information on observed and expected values for the contingency matrices, which by construction contains information on their constituent confusion matrices.
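A minimal sketch of this pipeline is given below (it is not the original implementation). It covers the chi-squared test of independence between group membership and confusion-matrix cells, and Haberman's adjusted standardized residuals used in the post hoc step; the confusion parity error, defined earlier in the paper, is then computed from the same contingency table. The group-wise split in the example is invented for illustration; only the column totals match the aggregate confusion matrix reported below.

```python
# Sketch of the assessment pipeline (not the authors' implementation).
# Rows: sensitive groups; columns: confusion-matrix cells (TP, FP, FN, TN).
import numpy as np
from scipy.stats import chi2_contingency

def equal_confusion_test(table: np.ndarray, alpha: float = 0.001):
    """Chi-squared test of independence between group membership and confusion cell."""
    chi2, p, dof, expected = chi2_contingency(table)
    return p < alpha, p, expected

def adjusted_standardized_residuals(table: np.ndarray) -> np.ndarray:
    """Haberman's adjusted standardized residuals for the post hoc analysis."""
    table = table.astype(float)
    n = table.sum()
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / n
    return (table - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))

# Example: 2 sex groups x 4 cells (TP, FP, FN, TN). The per-group counts are made up;
# only the totals agree with the overall confusion matrix quoted in the case study.
t = np.array([[50, 200, 90, 560],
              [296, 561, 216, 2047]])
unfair, p, _ = equal_confusion_test(t)
resid = adjusted_standardized_residuals(t)  # |resid| > 3.29 ~ significant at p < 0.001
```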
§.§ Findings
Descriptive Statistics. Table <ref> provides the distribution of the cases over race, gender, and their intersection. A large majority of the cases are male. However, females represent one-fifth of the cases, with nearly 900 cases. Caucasians and African-Americans together constitute 84% of all cases, while Asians and Native Americans collectively account for less than 1%, with only 33 cases. The low number of cases for these two groups indicates that it will be very difficult, if not impossible, to achieve any statistically significant results for them.
Table <ref> presents the overall confusion matrix. Among the 4,020 cases, 1,107 are forecasted to be recidivists, while only 652 actually recidivate within the next two years. The overall accuracy of the system is 73%. Among the 1,107 who are predicted as risky, only 346 recidivate, giving a precision of 31%. Among the 652 who actually recidivate, only 346 are predicted as risky, giving a recall of 53%. Since larger values indicate better performance for these metrics, the results indicate a questionable performance, especially considering the harms that may result from potential misjudgments in the criminal justice system where COMPAS is utilized.
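These aggregate figures can be reproduced directly from the counts given in the text, as the short sketch below shows.

```python
# Quick check of the aggregate figures quoted above, using only the counts given in the
# text: 4,020 cases, 1,107 predicted risky, 652 actual recidivists, 346 true positives.
tp = 346
fp = 1107 - tp          # predicted risky but did not recidivate
fn = 652 - tp           # recidivated but predicted non-risky
tn = 4020 - tp - fp - fn

accuracy = (tp + tn) / 4020     # ~0.73
precision = tp / (tp + fp)      # ~0.31
recall = tp / (tp + fn)         # ~0.53
```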
Next, fairness studies are performed, and findings are reported for sex, race, and intersectional groups.
§.§.§ Sex
The equal confusion test resulted in p < 0.001, which indicated a statistically significant association between sex and cells of the confusion matrix. As this indicates that the system is unfair, the confusion parity error is computed as ϕ = 0.12. According to the corresponding interpretation presented in Table <ref> with q=2, r=4, it can be concluded that the system exhibits small but statistically significant unfairness. A post hoc analysis follows this finding to identify the specific sources of unfairness.
Table <ref> presents the observed values (O), expected values (E), and adjusted standardized residuals (R) for the contingency matrix. Assuming a desired p < 0.001, the absolute two-tailed critical value is 3.29. The significant values are shown in boldface type. To further investigate the characteristics of the unfairness and impacted groups, Table <ref> presents confusion matrices based on sex. The cell values are presented as the proportion of row totals, and as the proportion of subtotals in the case of parenthesized values. The cells corresponding to the significant values are shown in boldface type. A closer analysis of the significant cells results in the following observations.
* Among predicted risky females, only 16% are actually recidivists compared to the same figure of 34% for males. Precision is higher for males than females. Hence, females are more likely to be incorrectly predicted as risky.
* Among predicted non-risky females, 93% are actual non-recidivists, while the same figure goes down to 89% for males. Negative predictive value is higher for females than males. Hence, males are more likely to benefit from false negatives.
* Among actual recidivist females, only 35% are correctly predicted as risky compared to the same figure of 55% for males. Recall is higher for males than females. Hence, females are more likely to benefit from under-identification of risky status.
* Among actual non-recidivist females, 82% are correctly predicted as non-risky, while the same figure goes down to 76% for males. Specificity is higher for females than males. Hence, males are more likely to suffer from an under-identification of non-risky status.
The first two findings reveal a disadvantageous position for females, whereas the last two findings indicate an advantageous position, compared to males. These findings are not contradictory as they are based on different measurements. It indicates that implications from a fairness analysis are not straightforward and require a comprehensive perspective rather than an inspection of a subset of measurements.
§.§.§ Race
The same analysis is repeated for race. The equal confusion test resulted in p < 0.001. Subsequently, the confusion parity error is computed as ϕ = 0.13, which indicates a small but statistically significant unfairness with q=6, r=4. Tables <ref> and <ref> present the contingency and confusion matrices in the same fashion as Tables <ref> and <ref>. A closer analysis of the significant cells results in the following observations.
* Among predicted risky African-Americans, 35% are actually recidivists compared to 24% and 14% for Caucasians and Hispanics, respectively. Precision is highest for African-Americans and lowest for Hispanics. Hence, Hispanics are more likely to be incorrectly predicted as risky than Caucasians and African-Americans.
* Among predicted non-risky Caucasians and Hispanics, 91% are actual non-recidivists, while the same figure goes down to 87% for African-Americans. Negative predictive value is lower for African-Americans. Hence, African-Americans are more likely to benefit from false negatives.
* Among actual recidivist Caucasians and Hispanics, only 37% and 29%, respectively, are correctly predicted as risky compared to the same figure of 62% for African-Americans. Recall is higher for African-Americans. Hence, Caucasians and Hispanics are more likely to benefit from under-identification of risky status.
* Among actual non-recidivist African-Americans, only 69% are correctly predicted as non-risky compared to 85% and 81% for Caucasians and Hispanics, respectively. Specificity is lower for African-Americans. Hence, African-Americans are more likely to suffer from an under-identification of non-risky status.
The first two findings indicate an advantageous position for African-Americans from one perspective, whereas the last two indicate a disadvantageous one from another.
§.§.§ Intersectional Groups
In addition to the separate analysis of race and gender, an intersectional groups analysis is performed. Produced by the two sex-based and six race-based groups, there are 12 intersectional groups. There are no observations for Native American females, so this group is not included. The equal confusion test resulted in p < 0.001. Subsequently, the confusion parity error is computed as ϕ = 0.16, which indicates a small but statistically significant unfairness at the upper limit of this range, with q=10, r=4. Tables <ref> and <ref> present the contingency and confusion matrices in the same fashion as the earlier analyses of sex and race. A closer analysis of the significant cells results in the following observations.
* Among predicted risky African-American males, 37% are actually recidivists, while the same figure drops to 26% for Caucasian males, 15% for Hispanic males, and 16% for Caucasian females. Precision is highest for African-American males and lowest for Hispanic males and Caucasian females. Hence, the last two groups are more likely to be incorrectly predicted as risky than African-American males.
* Among predicted non-risky Caucasian females, 95% are actual non-recidivists, while the same figure goes down to 90% for Caucasian males and 86% for African-American males. Negative predictive value is lower for African-American males. Hence, they are more likely to benefit from false negatives.
* Among actual recidivist Caucasian females, Caucasian males, and Hispanic males, only 35%, 37%, and 33%, respectively, are correctly predicted as risky, while the same figure is 65% for African-American males. Recall is higher for African-American males. Hence, the other groups are more likely to benefit from the under-identification of risky status.
* Among actual non-recidivist Caucasian females and males, 87% and 84%, respectively, are correctly predicted as non-risky, while the same figure drops to 67% for African-American males. Specificity is lower for African-Americans. Hence, African-Americans are more likely to suffer from an under-identification of non-risky status.
The first two findings indicate an advantageous position for African-American males and a disadvantageous position for Caucasian females. Compared to the earlier findings from the non-intersectional analysis, it can be argued that the disadvantages of African-Americans lie with their male members. Similarly, the advantages of Caucasians lie more with their female members. The last two findings indicate advantageous positions for Caucasian males and females at comparable levels. However, the gender gap remains among African-Americans, where the male members are in a statistically significantly disadvantageous position while the female members are not.
§.§ Remarks
This case study demonstrates the proposed equal confusion test, confusion parity error, and post hoc fairness analysis. Unfairness in the system is successfully detected and quantified. Despite the relatively small strength of the unfairness, the post hoc analysis revealed several statistically significant fairness issues between certain groups. It also showed that a group that bears negative impacts from one perspective might be a beneficiary from another, indicating that fairness implications are not necessarily straightforward. Furthermore, the intersectional group analysis enabled the mapping of observed unfairness to more refined subgroups. Overall, it may be concluded that using COMPAS in critical criminal justice decisions is worrisome.
There are two main limitations of this case study. First, only the data from a particular county in Florida is available, making the findings' generalizability questionable. Second, the number of cases is limited, particularly for certain races and many intersectional groups, hindering the possibility of obtaining statistically significant results for those groups.
§ CONCLUSION
Given the growing impact AI has on human lives, the need for accountability of automated decision systems has become inevitable. Fairness is critical to such accountability efforts, improving the trust placed in AI systems so that technological benefits can be reaped without causing harm. However, there is an abundance of fairness metrics that are well refined, often mutually incompatible, and subject to "cherry-picking." Therefore, the need for a unifying fairness assessment methodology is paramount.
The main contributions of this study are the proposed equal confusion test, the confusion parity error, and the associated methodology for post hoc fairness analysis.
The equal confusion test checks whether the system exhibits any unfair behavior. If unfairness is detected, the confusion parity error is utilized to quantify the magnitude of unfairness. A table is provided to interpret the values of the confusion parity error.
Finally, the post hoc analysis is employed to examine the characteristics and positively/negatively impacted groups via identifying confusion matrix cells with statistically significant divergence from their expected values.
The use of the proposed test, metric, and post hoc analysis methods is demonstrated via a case study. The case study employs real-world data from COMPAS, a criminal risk assessment tool used in the US to assist pretrial release decisions. The findings indicate that COMPAS is not fair, and discrepancies exist between different sex, race, and intersectional groups. Specifically, African-American males and Caucasians show divergent behavior that is statistically significant. From some perspectives, one group is at a disadvantage, while the same group is at an advantage from other perspectives. The findings indicate that the use of COMPAS in critical decisions is problematic.
The foreseen future research is two-fold. First, analogous tests, measures, and post hoc analyses can be developed for regression tasks. Second, the proposed methodology can be employed to assess group fairness in various currently deployed automated decision systems.
§ ACKNOWLEDGMENT
This material is based upon work supported by the National Science Foundation under Grant CCF-2131504. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
00
angwinMachineBias J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine bias.” May 23, 2016. Accessed: May 19, 2022. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
vanbekkumDigitalWelfareFraud2021 M. van Bekkum and F. Z. Borgesius, “Digital welfare fraud detection and the Dutch SyRI judgment,” European Journal of Social Security, vol. 23, no. 4, pp. 323–340, Dec. 2021, doi: 10.1177/13882627211031257.
castelvecchiFacialRecognitionToo2020 D. Castelvecchi, “Is facial recognition too biased to be let loose?,” Nature, vol. 587, no. 7834, pp. 347–349, Nov. 2020, doi: 10.1038/d41586-020-03186-4.
raghavanMitigatingBiasAlgorithmic2020 M. Raghavan, S. Barocas, J. Kleinberg, and K. Levy, “Mitigating bias in algorithmic hiring: evaluating claims and practices,” in Proc. Conference on Fairness, Accountability, and Transparency, New York, NY, Jan. 27, 2020, pp. 469–481. doi: 10.1145/3351095.3372828.
mcleanDigitalJusticeAustralian2019 J. McLean and R. Mackenzie, “Digital justice in Australian visa application processes?,” Alternative Law Journal, vol. 44, no. 4, pp. 291–296, Dec. 2019, doi: 10.1177/1037969X19853685.
choudhuryRoleArtificialIntelligence2020 A. Choudhury and O. Asan, “Role of artificial intelligence in patient safety outcomes: systematic literature review,” JMIR Medical Informatics, vol. 8, no. 7, p. e18599, Jul. 2020, doi: 10.2196/18599.
gursoySystemCardsAIBased2022 F. Gursoy and I. A. Kakadiaris, “System cards for AI-based decision-making for public policy.” arXiv, Mar. 01, 2022. doi: 10.48550/arXiv.2203.04754.
clarkeAlgorithmicAccountabilityAct2022 Y. D. Clarke, Algorithmic Accountability Act of 2022. 2022. Accessed: May 19, 2022. [Online]. Available: https://www.congress.gov/bill/117th-congress/house-bill/6580
EURLex52021PC0206EURLex European Commission, Artificial Intelligence Act. 2021. Accessed: May 19, 2022. [Online]. Available: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
GuidanceAIAuditing Information Commissioner’s Office, “Guidance on the AI auditing framework: draft guidance for consultation.” Information Commissioner’s Office, Feb. 2020. Accessed: Jan. 07, 2022. [Online]. Available: https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-on-the-draft-ai-auditing-framework-guidance-for-organisations/
mehrabiSurveyBiasFairness2021 N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM Comput. Surv., vol. 54, no. 6, p. 115:1-115:35, Jul. 2021, doi: 10.1145/3457607.
samoraniOverbookedOverlookedMachine2021 M. Samorani, S. L. Harris, L. G. Blount, H. Lu, and M. A. Santoro, “Overbooked and overlooked: machine learning and racial bias in medical appointment scheduling,” Manufacturing & Service Operations Management, Aug. 2021, doi: 10.1287/msom.2021.0999.
pratesAssessingGenderBias2020 M. O. R. Prates, P. H. Avelar, and L. C. Lamb, “Assessing gender bias in machine translation: a case study with Google Translate,” Neural Computing and Applications, vol. 32, no. 10, pp. 6363–6381, May 2020, doi: 10.1007/s00521-019-04144-6.
chuDigitalAgeismChallenges2022 C. H. Chu, R. Nyrup, K. Leslie, J. Shi, A. Bianchi, A. Lyn, M. McNicholl, S. Khan, S. Rahimi, and A. Grenier, “Digital ageism: challenges and opportunities in artificial intelligence for older adults,” The Gerontologist, Jan. 2022, doi: 10.1093/geront/gnab167.
whittakerDisabilityBiasAI M. Whittaker, M. Alper, O. College, L. Kaziunas, and M. R. Morris, “Disability, bias, and AI.” Nov. 2019. Accessed: May 19, 2022. [Online]. Available: https://ainowinstitute.org/disabilitybiasai-2019.pdf
petersAlgorithmicPoliticalBias2022 U. Peters, “Algorithmic political bias in artificial intelligence systems,” Philosophy & Technology, vol. 35, no. 2, p. 25, Mar. 2022, doi: 10.1007/s13347-022-00512-8.
abidPersistentAntiMuslimBias2021 A. Abid, M. Farooqi, and J. Zou, “Persistent anti-Muslim bias in large language models,” in Proc. AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, Jul. 21, 2021, pp. 298–306. doi: 10.1145/3461702.3462624.
makhloufApplicabilityMachineLearning2021 K. Makhlouf, S. Zhioua, and C. Palamidessi, “On the applicability of machine learning fairness notions,” ACM SIGKDD Explorations Newsletter, vol. 23, no. 1, pp. 14–23, May 2021, doi: 10.1145/3468507.3468511.
castelnovoClarificationNuancesFairness2022 A. Castelnovo, R. Crupi, G. Greco, D. Regoli, I. G. Penco, and A. C. Cosentini, “A clarification of the nuances in the fairness metrics landscape,” Scientific Reports, vol. 12, no. 1, p. 4209, Mar. 2022, doi: 10.1038/s41598-022-07939-1.
vermaFairnessDefinitionsExplained2018 S. Verma and J. Rubin, “Fairness definitions explained,” in Proc. International Workshop on Software Fairness, New York, NY, May 29, 2018, pp. 1–7. doi: 10.1145/3194770.3194776.
bellamyAIFairness3602019 R. K. E. Bellamy, K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, P. Lohia, J. Martino, S. Mehta, A. Mojsilović, S. Nagar, K. N. Ramamurthy, J. Richards, D. Saha, P. Sattigeri, M. Singh, K. R. Varshney, and Y. Zhang, “AI Fairness 360: an extensible toolkit for detecting and mitigating algorithmic bias,” IBM Journal of Research and Development, vol. 63, no. 4/5, p. 4:1-4:15, Jul. 2019, doi: 10.1147/JRD.2019.2942287.
segalFairnessEyesData2021 S. Segal, Y. Adi, B. Pinkas, C. Baum, C. Ganesh, and J. Keshet, “Fairness in the eyes of the data: certifying machine-learning models,” in Proc. AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, Jul. 21, 2021, pp. 926–935. doi: 10.1145/3461702.3462554.
chouldechovaSnapshotFrontiersFairness2020 A. Chouldechova and A. Roth, “A snapshot of the frontiers of fairness in machine learning,” Communications of the ACM, vol. 63, no. 5, pp. 82–89, Apr. 2020, doi: 10.1145/3376898.
razGroupFairnessIndependence2021 T. Räz, “Group fairness: independence revisited,” in Proc. ACM Conference on Fairness, Accountability, and Transparency, New York, NY, Mar. 3, 2021, pp. 129–137. doi: 10.1145/3442188.3445876.
berkFairnessCriminalJustice2021 R. Berk, H. Heidari, S. Jabbari, M. Kearns, and A. Roth, “Fairness in criminal justice risk assessments: the state of the art,” Sociological Methods & Research, vol. 50, no. 1, pp. 3–44, 2021, doi: 10.1177/0049124118782533.
barocasFairnessMachineLearning2019 S. Barocas, M. Hardt, and A. Narayanan, Fairness and Machine Learning. fairmlbook.org, 2019.
fouldsIntersectionalDefinitionFairness2020 J. R. Foulds, R. Islam, K. Keya, and S. Pan, “An intersectional definition of fairness,” in Proc. International Conference on Data Engineering, Los Alamitos, CA, Apr. 2020, pp. 1918–1921. doi: 10.1109/ICDE48307.2020.00203.
kearnsPreventingFairnessGerrymandering2018 M. Kearns, S. Neel, A. Roth, and Z. S. Wu, “Preventing fairness gerrymandering: auditing and learning for subgroup fairness,” in Proc. International Conference on Machine Learning, Jul. 3, 2018, pp. 2564–2572. [Online]. Available: https://proceedings.mlr.press/v80/kearns18a.html
cramerMathematicalMethodsStatistics1946 H. Cramer, Mathematical Methods of Statistics. Princeton: Princeton University Press, 1946.
matthewsComparisonPredictedObserved1975 B. W. Matthews, “Comparison of the predicted and observed secondary structure of T4 phage lysozyme,” Biochimica et Biophysica Acta (BBA) - Protein Structure, vol. 405, no. 2, pp. 442–451, Oct. 1975, doi: 10.1016/0005-2795(75)90109-9.
cohenStatisticalPowerAnalysis J. Cohen, Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Routledge, 1988.
fergusonEffectSizePrimer2009 C. J. Ferguson, “An effect size primer: a guide for clinicians and researchers,” Professional Psychology: Research and Practice, vol. 40, no. 5, pp. 532–538, 2009, doi: 10.1037/a0015808.
Sharpe2015YourCT D. M. Sharpe, “Your chi-square test is statistically significant: now what?,” Practical Assessment, Research and Evaluation, vol. 20, no. 8, pp. 1–10, 2015.
habermanAnalysisResidualsCrossClassified1973 S. J. Haberman, “The analysis of residuals in cross-classified tables,” Biometrics, vol. 29, no. 1, pp. 205–220, 1973, doi: 10.2307/2529686.
Agresti2007 A. Agresti, An Introduction to Categorical Data Analysis, 2nd ed. New York: Wiley-Interscience, 2007.
macdonaldTypeErrorRate2000 P. L. MacDonald and R. C. Gardner, “Type I error rate comparisons of post hoc procedures for I x j chi-square tables,” Educational and Psychological Measurement, vol. 60, no. 5, pp. 735–754, Oct. 2000, doi: 10.1177/00131640021970871.
cochranMethodsStrengtheningCommon1954 W. G. Cochran, “Some methods for strengthening the common x² tests,” Biometrics, vol. 10, pp. 417–451, 1954, doi: 10.2307/3001616.
coxPostHocPairWise1993 M. K. Cox and C. H. Key, “Post hoc pair-wise comparisons for the chi-square test of homogeneity of proportions,” Educational and Psychological Measurement, vol. 53, no. 4, pp. 951–962, Dec. 1993, doi: 10.1177/0013164493053004008.
kirkpatrickItNotAlgorithm2017 K. Kirkpatrick, “It’s not the algorithm, it’s the data,” Communications of the ACM, vol. 60, no. 2, pp. 21–23, Jan. 2017, doi: 10.1145/3022181.
mattuHowWeAnalyzed J. Larson, S. Mattu, L. Kirchner, and J. Angwin, “How we analyzed the COMPAS recidivism algorithm.” Accessed: May 19, 2022. [Online]. Available: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
israniAlgorithmicDueProcess E. Israni and E. Chang, “Algorithmic due process: mistaken accountability and attribution in State v. Loomis.” Aug. 31, 2017. Accessed: Apr. 14, 2022. [Online]. Available: https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1
n.w.2d749StateLoomis Harvard Law Review, “State v. Loomis.” Mar. 10, 2017. Accessed: May 19, 2022. [Online]. Available: https://harvardlawreview.org/2017/03/state-v-loomis/
LoomisWisconsin “Loomis v. Wisconsin.” Jun. 26, 2017. Accessed: May 19, 2022. [Online]. Available: https://www.scotusblog.com/case-files/cases/loomis-v-wisconsin/
dieterichCOMPASRiskScales2016 W. Dieterich, C. Mendoza, and T. Brennan, “COMPAS risk scales: demonstrating accuracy equity and predictive parity.” Jul. 08, 2016. Accessed: May 19, 2022. [Online]. Available: https://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf
washingtonHowArgueAlgorithm2019 A. L. Washington, “How to argue with an algorithm: lessons from the COMPAS ProPublica debate,” The Colorado Technology Law Journal, vol. 17, no. 1, pp. 131–160, Apr. 2019.
jacksonSettingRecordStraight2020 E. Jackson and C. Mendoza, “Setting the record straight: what the COMPAS core risk and need assessment is and is not.” Mar. 31, 2020. Accessed: May 19, 2022. [Online]. Available: https://hdsr.mitpress.mit.edu/pub/hzwo7ax4/
propublicaDataAnalysisMachine2022 ProPublica, “Data and analysis for ‘machine bias.’” May 20, 2022. Accessed: May 19, 2022. [Online]. Available: https://github.com/propublica/compas-analysis
ViolentCrime FBI, “Violent crime.” Accessed: Apr. 12, 2022. [Online]. Available: https://ucr.fbi.gov/crime-in-the-u.s/2010/crime-in-the-u.s.-2010/violent-crime/violent-crime
northpointePractitionersGuideCOMPAS2012 Northpointe, “Practitioners guide to COMPAS.” 2012. Accessed: Apr. 14, 2022. [Online]. Available: https://njoselson.github.io/pdfs/FieldGuide2_081412.pdf
|
http://arxiv.org/abs/2307.00261v1
|
20230701075821
|
A polynomial quantum algorithm for the explicit isomorphism problem
|
[
"Péter Kutas",
"Mickaël Montessinos"
] |
math.NT
|
[
"math.NT",
"math.RA",
"11Y16 (Primary) 68Q12, 11R54 (Secondary)"
] |
A polynomial quantum algorithm for the explicit isomorphism problem
Péter Kutas, Mickaël Montessinos
===================================================================
We present an efficient computational representation of central simple algebras using Brauer factor sets. Using this representation and polynomial quantum algorithms for number theoretical tasks such as factoring and S-unit group computation, we give a polynomial quantum algorithm for the explicit isomorphism problem over number fields, which relies on a heuristic concerning the irreducibility of the characteristic polynomial of a random matrix with algebraic integer coefficients. We present another version of the algorithm which does not need any heuristic but which is only polynomial if the degree of the input algebra is bounded.
§ INTRODUCTION
Let F be a field. The explicit isomorphism problem over F is the algorithmic problem of, given some F-algebra A assumed isomorphic to M_d(F) for some positive integer d, constructing an explicit isomorphism φ : A → M_d(F).
The explicit isomorphism problem may be thought of as a natural problem in computational representation theory. Given an F-algebra A, one may wish to assay it. That is, compute the Jacobson radical of A, and the decomposition of the semi-simple part of A as a sum of simple F-algebras, themselves identified to some M_n(D), for D a division F-algebra. In general, the hard part of this task is to find an isomorphism A → M_n(D) when A is simple. A general recipe for solving this problem is to identify the Brauer class of D over its center K/F, find structure constants for M_n(D^op) and then compute an explicit isomorphism A ⊗ M_n(D^op) ≃ M_m(K) <cit.>.
Applications of the explicit isomorphism problem go beyond the mere computational theory of associative algebras. In arithmetic geometry, the problem is relevant for trivialising obstruction algebras in explicit descent over elliptic curves <cit.> and computation of Cassels-Tate pairings <cit.>. The problem is also connected to the parametrisation of Severi-Brauer surfaces <cit.>. Recent work in algebraic complexity theory reduced the determinant equivalence test to the explicit isomorphism problem <cit.>. Finally, the explicit isomorphism problem over the function field 𝔽_q(T) is also relevant to error correcting codes <cit.>.
Our approach relies on novel algorithms that make computationally practical the presentation of
a central simple F-algebra A using so-called Brauer factor sets <cit.>, where F is a field in which field operations may be performed efficiently. Such a presentation may be computed from a generic element of the algebra. By generic, here, we mean an element x ∈ A such that F(x)/F is a field extension of degree deg A whose Galois closure has Galois group the full symmetric group on deg A elements. Random matrices with integer coefficients were conjectured to have irreducible characteristic polynomial with probability 1 - o(1) in the 1970s by Babai and independently in 2009 by Vu and Wood <cit.>. This conjecture was proved under certain conditions in <cit.>. To the best of our knowledge, the question of random elements of a maximal order in an algebra A ≃ M_n(F), where F is a number field, has not been considered before. While we prove some results in this setting in Section <ref>, our algorithm relies on Heuristic <ref>, which generalizes the conjecture by Babai and Vu-Wood to number fields, and restricts it to a specific probability distribution. In our heuristic, we only assume that the probability is non-negligible, as this is enough for our purposes. However, we believe that the truth is that, again, the probability is 1 - o(1).
Let 𝒪 be a maximal order in the algebra M_d(F), for F a number field and d ≥ 3. Let ℬ = {b_1,…,b_d_F} be a ℤ-basis of 𝒪, where d_F = d^2[F:ℚ] is the ℤ-rank of 𝒪. Then, let x = ∑_i=1^d_F ζ_i b_i, where the ζ_i are iid uniform random variables in {1,2,3,4,5,6}. Then, the characteristic polynomial of x is irreducible with probability larger than 1/μ(log|Δ_F|, [F:ℚ], d), where μ is some fixed polynomial and Δ_F denotes the discriminant of F.
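As an illustration of the heuristic in the simplest case F = ℚ, where 𝒪 = M_d(ℤ) with its standard basis, the sketch below samples matrices with iid entries uniform in {1,…,6} and checks irreducibility of the characteristic polynomial with SymPy. It is a sanity check of the heuristic, not a component of the algorithm.

```python
# Sample random matrices over Z with entries in {1,...,6} and count how often the
# characteristic polynomial is irreducible over Q (illustration of the heuristic for F = Q).
import random
from sympy import Matrix

def sample_is_irreducible(d: int = 4) -> bool:
    m = Matrix(d, d, lambda i, j: random.randint(1, 6))
    return m.charpoly().is_irreducible

trials = 200
hits = sum(sample_is_irreducible() for _ in range(trials))
print(f"irreducible characteristic polynomial in {hits}/{trials} trials")
```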
§.§ Related work
In the case of a finite base field, a polynomial-time algorithm was introduced by Ronyai in <cit.>.
Instances of the problem for ℚ-algebras isomorphic to M_n(ℚ) were first treated separately for small values of n. When n=2, the problem reduces to finding a rational point on a projective conic <cit.>, which is solved for instance in <cit.>. Then, <cit.> presented a subexponential algorithm when n=3 by finding a cyclic presentation and solving a cubic norm equation. The case n=4 is tackled in <cit.> by reducing to the case of quaternion algebras over ℚ and ℚ(√(d)) and solving a norm equation.
In <cit.> an algorithm was given and studied mostly for the cases n=3 and n=5. It was then generalised in <cit.> to a K-algebra isomorphic to M_n(K), where n is a natural number and K is a number field. The complexity of this last algorithm is polynomial in the size of the structure constants of the input algebra, but depends exponentially on n, the degree of K and the size of the discriminant of K.
In 2018, <cit.> exhibited a polynomial-time algorithm for algebras isomorphic to M_n(𝔽_q(T)).
For the case of fixed n and varying base field, <cit.> independently gave an algorithm for an algebra isomorphic to M_2(ℚ(√(d))) with complexity polynomial in log(d). A similar algorithm for quadratic extensions of 𝔽_q(T) was given in <cit.> for the case of odd q. The case of even q was then treated in <cit.>.
More efficient algorithms are known if the algebra A ≅ M_d(F) is given with a different presentation. If A is presented as a cyclic algebra, it is well known that an explicit isomorphism may be computed by solving a norm equation in a cyclic extension of F of degree d. In 2009, <cit.> showed that if A is presented as a crossed-product algebra, one may find an explicit isomorphism by computing a group of S-units in a degree d Galois extension of F. We note that these results are very similar in spirit, as the best known general algorithm for solving norm equations in global fields <cit.> also uses the computation of S-units.
§.§ Contributions
* Section <ref> introduces a computational presentation for a central simple algebra A using Brauer factor sets. While such a presentation has been known algebraically since 1928 <cit.>, there are non-trivial obstacles to using it computationally. Such factor sets are based on a degree deg A extension K of the base field F, and take values in the Galois closure of K. Our algorithms apply, in particular, when Gal(K/F) is the full symmetric group, and avoid computations in the Galois closure of K altogether. We stress that this representation yields polynomial-time classical algorithms. An advantage over presentations as cyclic or crossed-product algebras is that it is easy to find a Brauer factor set representing an arbitrary algebra.
* We argue that if F is a number field and A ≃ M_d(F) is given by structure constants, one may easily find some x ∈ A whose characteristic polynomial χ_x is irreducible and has full symmetric Galois group 𝔖_d. This is true in the case F = ℚ <cit.>. Replicating some arguments, we prove in Section <ref> that if χ_x is irreducible, then the Galois group is 𝔖_d with large enough probability. We state the irreducibility of χ_x as Heuristic <ref>.
* From the combination of the points above and under Heuristic <ref>, Theorem <ref> gives a polynomial quantum algorithm which solves the explicit isomorphism problem over number fields.
* Theorem <ref> modifies Algorithm <ref>. It no longer depends on heuristic <ref> but it is only polynomial if the input algebra has bounded degree.
§.§ Organisation of the paper
We first recall relevant known results in Section <ref>. In particular, we recall definitions and properties of Brauer factor sets. Then, in Section <ref>, we study the behaviour of these factor sets when the Galois group of the relevant extension is 3-transitive. We present an efficient computational presentation for central simple algebras in this context. In Section <ref>, we adapt Fieker's work <cit.> on Galois cocycles to Brauer factor sets. In Section <ref>, we show that a maximal subfield suitable to our representation may be found with large enough probability in an algebra A ≃ M_d(F). Finally, in Section <ref>, we bring all our results together and present our algorithm solving the explicit isomorphism problem.
§.§ Notations
* If n ∈ ℕ, then we set [n] = {1,…,n}.
* If F is a field, A is a finite dimensional K-algebra and x ∈ A, we write Π_x for the minimal polynomial of x. If A ≅ M_d(F), χ_x is the characteristic polynomial of x.
* If A is a K-algebra and x ∈ A, then C_A(x) is the centraliser of x in A. That is, C_A(x) = {y ∈ A | xy = yx}.
* If F is a number field, 𝒪_F is the ring of integers in F.
* If F is a field and K = F(θ) is a separable extension of degree d, we call θ_1,…,θ_d the conjugates of θ with, in particular, θ_1 = θ. Then, if I ⊂ [d], we set K_I = F((θ_i)_i ∈ I). In particular, K(θ_1,…,θ_k) = K_[k] and K_[d] is the Galois closure of K.
* For F and K as above, we write Gal(K/F) for the Galois group of K_[d] over F. Then, we identify any σ∈ Gal(K/F) with a permutation in 𝔖_d following the permutation that σ induces on the θ_i. That is, σ(θ_i) = θ_σ i.
§.§ Acknowledgments
The second author is grateful to Sean Eberhard for conversations on the methods and results in <cit.> that were helpful for proving results in Section <ref>, and to Clément Guérin for a fruitful discussion on the proof of Lemma <ref>. The authors are also thankful to Aurel Page for numerous conversations on the explicit isomorphism problem, and many clarifications, as well as providing the argument for the classical algorithm of Section <ref>.
The first author is supported by the Hungarian Ministry
of Innovation and Technology NRDI Office within the framework of the Quantum Information National Laboratory Program,
the János Bolyai Research Scholarship of the Hungarian Academy of Sciences and
by the UNKP-22-5 New National Excellence Program. The first author is also partly supported by EPSRC through grant number EP/V011324/1.
§ PRELIMINARIES
§.§ Algorithmic preliminaries
§.§.§ Computational presentations of central simple algebras
An F-algebra may be given as input to a program in the form of a table of structure constants. Let A be a finite dimensional F-algebra of dimension n, let V be the underlying vector space of A and let (e_1,…,e_n) be a basis of V. Multiplication in A is a bilinear map which may be represented as a tensor λ∈ V^∨⊗ V^∨⊗ V. The table of structure constants of A for the basis (e_1,…,e_n) is then the coordinates of λ in the basis (e_i^∨⊗ e_j^∨⊗ e_k) of V^∨⊗ V^∨⊗ V, where (e_i^∨) is the basis of V^∨ dual to (e_i).
An F-algebra A ≅ M_d(F) may be presented as a cyclic algebra <cit.> or as a crossed-product algebra <cit.>. It follows readily from the algebraic constructions that, computationally, knowing a cyclic (resp. crossed-product) presentation of a central simple F-algebra A of degree d is equivalent to knowing an embedding K → A, where K/F is a cyclic (resp. Galois) extension of degree d.
As discussed in Section <ref>, if the algebra A is cyclic or a crossed-product, constructed from an embedding K → A, then computing an explicit isomorphism A → M_d(F) reduces to factorisation and computation of groups of S-units in F and K. However, there is no known efficient algorithm for computing such an embedding from the structure constants of an algebra A ≃ M_d(F).
§.§.§ Zero divisors and explicit isomorphisms
If F is a field and z ∈ M_d(F), let L_z be the left ideal M_d(F)z. Then rank z = dim_F L_z/d. Furthermore, if z has rank one, then the action of M_d(F) by left multiplication on L_z is the usual action of M_d(F) on a d-dimensional F-vector space.
It follows from this fact that if one knows a rank one element in A ≅ M_d(F) and one may efficiently perform computations in A, then one may compute an explicit isomorphism A → M_d(F) in polynomial time.
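The following toy sketch illustrates this observation over F = ℚ with SymPy: starting from a rank-one z, it builds a basis of the left ideal L_z and verifies that the left-multiplication action of an element a on L_z has the same characteristic polynomial as a. It is a toy verification under the assumption that A is already given as matrices, not the algorithm used later in the paper.

```python
from sympy import Matrix, zeros

def left_ideal_basis(z):
    """A basis of L_z = M_d(Q) * z, extracted greedily from the products E_ij * z."""
    d = z.shape[0]
    basis, stacked = [], None
    for i in range(d):
        for j in range(d):
            E = zeros(d, d)
            E[i, j] = 1
            cand = (E * z).reshape(d * d, 1)
            cur_rank = 0 if stacked is None else stacked.shape[1]
            trial = cand if stacked is None else stacked.row_join(cand)
            if trial.rank() > cur_rank:
                stacked = trial
                basis.append(E * z)
    return basis

def action_matrix(a, basis):
    """Matrix of left multiplication by a on L_z, in the given basis."""
    d = a.shape[0]
    B = Matrix.hstack(*[b.reshape(d * d, 1) for b in basis])
    cols = [(B.T * B).solve(B.T * (a * b).reshape(d * d, 1)) for b in basis]
    return Matrix.hstack(*cols)

z = Matrix([[1, 2, 0], [0, 0, 0], [0, 0, 0]])   # rank-one element
a = Matrix([[1, 2, 3], [0, 1, 4], [5, 6, 0]])
basis = left_ideal_basis(z)                      # len(basis) == 3 == d
img = action_matrix(a, basis)
assert img.charpoly().all_coeffs() == a.charpoly().all_coeffs()
```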
Knowing a zero divisor z ∈ A of rank larger than 1, one may still reduce the problem to computing an isomorphism A' ≃ M_d'(F), with d' < d. Indeed, by linear algebra, the left ideal L_z admits a right unit e which is an idempotent of rank r < d. Then, A' = eAe ≃ M_r(F). If one solves the explicit isomorphism problem for A', one obtains an idempotent of rank 1 in A', which also has rank 1 in A. By the previous paragraph, this solves the explicit isomorphism problem. This idea is used in <cit.>.
§.§.§ Computing rings of integers and factoring ideals
The ability to factor integers in polynomial time allows for polynomial quantum algorithms for several tasks of computational number theory. For instance, Zassenhaus' round 2 algorithm <cit.> computes the ring of integers of a number field in polynomial time, provided an oracle for factoring. Since integers can be factored in quantum polynomial time, one deduces a polynomial quantum algorithm for computing rings of integers.
Likewise, one may deduce from <cit.> a polynomial quantum algorithm for factoring ideals in a number field.
§.§.§ Computing discrete logarithms in unit groups
Here we briefly discuss a subroutine that is used throughout the paper. Let K be a number field and let h be a unit. Let g_1,…,g_n be a set of generators of the unit group of K. The algorithmic problem is to write h as a product of these generators.
Classical algorithm
An algorithm for computing discrete logarithms in the unit group of a number field is given in <cit.>. While the source does not make any claim on the complexity, it is well known that this algorithm runs in polynomial time. The idea of the algorithm is to use the logarithmic embedding into ℝ^n, which reduces a multiplicative problem to an additive one. Dirichlet's Unit Theorem implies that the image of this embedding is a lattice. Now one has to represent the target unit as a linear combination of the lattice vectors. The issue is that since ℝ is not a computable field, one has to work with real numbers up to some precision. What one can do is write the target unit as a linear combination by solving a system of linear equations and then round the solution to the nearest integer vector. If this does not lead to a solution, one can increase the precision as mentioned in <cit.>. An alternative way would be to view this problem as a bounded distance decoding problem: one is essentially looking for the closest lattice vector to the target vector. Since this vector should be extremely close to the lattice, one can use Babai's algorithm <cit.>.
Quantum algorithm
Besides the above mentioned classical algorithm, there also exists a polynomial-time quantum algorithm for this task as it can be seen as an instance of the hidden subgroup problem. If h has finite order, then h is a root of unity, thus writing it as a power of a primitive root of unity is an easy task. So one may assume that h has infinite order and let g_1,… g_k be generators of the torsion-free part of the unit group. Let us denote the unit group by G. Then one can define a homomorphism from ℤ^k+1 to G by mapping (a_1,…,a_k,b) to h^-b∏ g_i^a_i.
The kernel of this homomorphism is clearly a ℤ-lattice (not of full rank) in ℤ^k+1. Generators for it can be computed in quantum polynomial time by a recent preprint of Kuperberg <cit.>. Now what is left is to find an element in the kernel whose last coordinate is 1. Indeed, if (a_1,…,a_k,1) is in the kernel, then 1=h^-1∏ g_i^a_i, which means that h=∏ g_i^a_i. If the kernel is generated by vectors with last coordinates b_1,…,b_j, then finding such an element amounts to solving the linear Diophantine equation ∑ x_ib_i=1, which can be solved in polynomial time via the extended Euclidean algorithm.
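A minimal sketch of this last step, assuming the last coordinates b_1,…,b_j of the kernel generators are already known, is given below; it only illustrates the elementary Diophantine part, not the quantum lattice computation.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def combine_to_gcd(bs):
    """Return (g, x) with sum_i x[i]*bs[i] == g == gcd(bs); a kernel element with
    last coordinate 1 exists precisely when g == 1."""
    x = [0] * len(bs)
    g, x[0] = bs[0], 1
    for i in range(1, len(bs)):
        g, u, v = extended_gcd(g, bs[i])
        x = [u * xi for xi in x]
        x[i] = v
    return g, x

g, x = combine_to_gcd([6, 10, 15])
assert g == 1 and sum(xi * bi for xi, bi in zip(x, [6, 10, 15])) == 1
```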
§.§.§ A polynomial quantum algorithm for computing class groups and groups of S-Units in number fields
The class group of L, denoted by Cl(L), is the group of fractional ideals modulo principal ideals. The units in 𝒪_L form a group which is called the unit group and is denoted by U_L. All these are important objects in number theory and they motivate the following three algorithmic problems:
* Compute generators of Cl(L)
* Compute generators of U_L
* Given a principal ideal I, find a generator element of I
To the best of our knowledge, there only exist polynomial-time classical algorithms for these problems in very special cases. In <cit.> the authors propose a subexponential algorithm for computing class groups and unit groups for arbitrary degree number fields. In <cit.> and <cit.> the authors propose subexponential algorithms for the principal ideal problem for a certain subclass of number fields (multiquadratic extensions of ℚ and power-of-two cyclotomic fields).
Thus these problems are good examples where quantum algorithms have a clear advantage over their classical counterparts. First Hallgren <cit.> proposed a polynomial-time quantum algorithm for computing the class group and unit group for bounded degree number fields. Later polynomial-time algorithms were proposed for these tasks for arbitrary number fields <cit.>,<cit.>. Furthermore, <cit.> also provides an efficient algorithm for computing generators of principal ideals (dubbed the principal ideal problem) and claims an application is the computation of generators of the group of S-units.
Let S be a set of prime ideals of K. An element x∈ K is called an S-integer if v_p(x) ≥ 0 for every p∉ S. An element x∈ K is called an S-unit if v_p(x) = 0 for every p∉ S. We denote the ring of S-integers by 𝒪_K,S and the group of S-units by U_K,S. An element x∈ L is an S-integer if v_P(x) ≥ 0 for all primes P except perhaps those above primes in S. An element x∈ L is an S-unit if v_P(x) = 0 for all primes P except perhaps those above primes in S.
Let S={P_1,…,P_k}. We will essentially follow <cit.>. First we compute the class group Cl(K); let g_1,…,g_r be generators whose orders are d_1,…,d_r. Note that orders of elements can be computed with Shor's algorithm in quantum polynomial time in any finite abelian group. We also compute the unit group of 𝒪_K using the algorithm from <cit.>. We introduce the following notation. Let V be a vector with k entries from K and let U be a k× l integer matrix. Then W=V^U is the vector with l entries defined by
W_j=∏_i V_i^U_i,j.
First determine β_i∈ K such that g_i^d_i=β_i 𝒪_K. Such β_i can be found using the polynomial quantum algorithm for the principal ideal problem <cit.>. Let M be the diagonal matrix with d_i in the ith position and let V=(β_1,…,β_r). Let P_j= (∏_i g_i^e_i,j)α_j. Let M'=(-e_i,j) and V'=(α_1,…,α_k).
Now compute a unimodular matrix U=[ A B; C D ] such that
(M|M')U=(0|H)
is in Hermite Normal Form. Compute W such that (W|W')=(V|V')^U. Then the coordinates of W generate the S-unit group (modulo units of K). The proof of this fact can be found in <cit.>.
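The following toy sketch only illustrates the W = V^U notation introduced above; rational numbers stand in for number-field elements, and no HNF computation is attempted here.

```python
# Entry j of V^U is prod_i V_i ** U[i][j], with V a vector of field elements and U an
# integer (possibly negative-entry) matrix. Fractions stand in for elements of K.
from fractions import Fraction
from math import prod

def power_product(V, U):
    cols = len(U[0])
    return [prod(Fraction(v) ** U[i][j] for i, v in enumerate(V)) for j in range(cols)]

V = [Fraction(2), Fraction(3, 5)]
U = [[1, -2], [0, 3]]          # integer exponent matrix
assert power_product(V, U) == [Fraction(2), Fraction(27, 500)]
```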
One application of S-unit computation (mentioned in <cit.> and <cit.>) is that one can compute solutions to norm equations in Galois extensions. Let L|K be a Galois extension of number fields and let a∈ K. Then one would like to find a solution to N_L|K(x)=a (or show that it does not exist). The main result by Simon is the following:
(<cit.>)
Let L|K be a Galois extension of number fields. Let S_0 be a set of primes generating the relative class group Cl_i(L|K). Let S be a set of primes containing S_0, and let us denote the group of S-units in L by U_L,S and the group of S-units in K by U_K,S. Then one has that:
N_L|K(U_L,S)=N_L|K(L^*)∩ U_K,S
Informally, this means that for Galois extensions one can search for x in the group of S-units. Then by factoring S and computing the class group of L one can turn finding x into solving a system of linear equations. Thus computing S-units implies a polynomial quantum algorithm for solving norm equations in Galois extensions.
§.§ Brauer factor sets
For the remainder of this section, we fix a field F and a separable extension K = F(θ). We also let L be the Galois closure of K with respect to F. We write d = [K:F] and G = Gal(K/F).
§.§.§ Brauer factor sets and central simple algebras
We recall here the basic definitions and results about so-called Brauer factor sets, following the exposition from <cit.>. We omit proofs and refer the reader to the relevant chapter of Jacobson's book.
A Brauer factor set of K/F is a map c : [d]^3 → L^×, (i,j,k) ↦ c_ijk, such that for all i,j,k,ℓ∈ [d] and σ∈ G, the following equations are satisfied:
σ(c_ijk) = c_σ iσ jσ k
c_ijk c_ikℓ = c_ijℓ c_jkℓ
We furthermore call a Brauer factor set c reduced if for all i ∈ [d], c_iii = 1. It then follows that for all i,j ∈ [d], c_iij = c_jii = 1.
Equation <ref> is the homogeneity condition and Equation <ref> is the cocycle condition.
A homogeneous matrix of K/F is a matrix m ∈ M_d(L) whose coefficients are all non-zero and satisfy the following homogeneity condition: for all (i,j) ∈ [d]^2 and σ∈ G,
σ(m_ij) = m_σ iσ j.
A homogeneous matrix is further called reduced if for all i ∈ [d], m_ii = 1.
We note that the sets of (reduced) Brauer factor sets and (reduced) homogeneous matrices are groups for the obvious pointwise multiplication.
Brauer factor sets play the role of Galois 2-cocycles with coefficients in K^× in situations where K is not a Galois extension of F. That is, we construct a central simple F-algebra B(K,c) in the following manner:
Let B(K,c) be the space of homogeneous matrices of K/F. An F-algebra structure is given by a twist of the usual matrix multiplication using the factor set c. Let ℓ, ℓ' ∈ B(K,c); then ℓ” = ℓℓ' is defined as the matrix with coefficients
ℓ”_ij = ∑_k=1^d ℓ_ik c_ijkℓ'_kj.
Let c be a reduced Brauer factor set of K/F. The algebra B(K,c) is a degree d central simple F-algebra split by K. Every such algebra may be described as B(K,c) for some reduced Brauer factor set c.
If 1 denotes the constant factor set whose values are all equal to 1, then multiplication in B(K,1) is the restriction of the usual multiplication in M_d(L). In particular, since B(K,1) contains the rank 1 matrix with every coefficient equal to 1, we get B(K,1) ≃ M_d(F) and this isomorphism may be computed explicitly using the methods from Section <ref>.
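The observation above is easy to check numerically: with the constant factor set, the twisted product reduces to the ordinary matrix product. In the toy sketch below, rational entries stand in for elements of L, so no homogeneity is enforced; only the multiplication rule is exercised.

```python
# Twisted product defining B(K, c): (l * lp)_{ij} = sum_k l[i][k] * c[i][j][k] * lp[k][j].
# With the trivial factor set (all entries 1) this is ordinary matrix multiplication.
from fractions import Fraction

def twisted_product(l, lp, c):
    d = len(l)
    return [[sum(l[i][k] * c[i][j][k] * lp[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

d = 3
ones = [[[Fraction(1)] * d for _ in range(d)] for _ in range(d)]   # trivial factor set
l  = [[Fraction(i + 2 * j + 1) for j in range(d)] for i in range(d)]
lp = [[Fraction(3 * i - j + 2) for j in range(d)] for i in range(d)]
plain = [[sum(l[i][k] * lp[k][j] for k in range(d)) for j in range(d)] for i in range(d)]
assert twisted_product(l, lp, ones) == plain
```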
§.§.§ Associated factor sets
Now, as in the Galois cohomological approach, we characterise factor sets which yield isomorphic algebras. We first define a homomorphism ∂, from the group of (reduced) homogeneous matrices to the group of (reduced) Brauer factor sets, as
∂(m) = (c_ijk)_i,j,k ∈ [d],
where
c_ijk = m_ij m_jk m_ik^-1.
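One can verify symbolically that any ∂(m) satisfies the cocycle condition (homogeneity is inherited from that of m). A small SymPy check, with 0-based indices standing in for four distinct values of i, j, k, ℓ:

```python
# Check that c = ∂(m), i.e. c_ijk = m_ij * m_jk / m_ik, satisfies c_ijk c_ikl = c_ijl c_jkl.
from sympy import symbols, simplify

def c(m, i, j, k):
    return m[i, j] * m[j, k] / m[i, k]

idx = range(4)
m = {(i, j): symbols(f"m_{i}{j}", nonzero=True) for i in idx for j in idx}
i, j, k, l = 0, 1, 2, 3
lhs = c(m, i, j, k) * c(m, i, k, l)
rhs = c(m, i, j, l) * c(m, j, k, l)
assert simplify(lhs - rhs) == 0
```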
Now, we get an equivalence relation between factor sets:
Let c,c' be reduced Brauer factor sets for K/F. We say that c and c' are associated, denoted by c ∼ c', if there exists a homogeneous matrix m such that c' = c∂(m).
If c is a reduced factor set such that c ∼ 1, where 1 is the constant factor set, we call c trivial and a reduced homogeneous matrix m such that c = ∂(m) is a trivialisation of c.
Of course, the upshot is that association characterises factor sets which produce isomorphic algebras.
Let c,c' be associated factor sets, with c' = c∂(m). Then, the map
B(K,c) → B(K,c'), (ℓ_ij) ↦ (ℓ_ij m_ij^-1)
is an isomorphism of F-algebras.
§.§.§ Construction of a factor set associated to a central simple algebra
Let A be a degree d central simple F-algebra split by K. By Proposition <ref>, there exists a reduced factor set c for K/F such that A ≃ B(K,c). We summarize here the steps to compute such a factor set c:
* There exists some v ∈ A such that A = KvK.
* There is an isomorphism A_K_[d]≃ M_d(K_[d]) which sends θ_1 to the diagonal matrix (θ_1,…,θ_d).
* Identify A_K_[d] with M_d(K_[d]) and consider elements of A as matrices. Then,
A = {(ℓ_ij v_ij): (ℓ_ij) is a homogeneous matrix}
* Set c_ikj = v_ik v_kj v_ij^-1.
* The map (ℓ_ij v_ij) ↦ (ℓ_ij) gives an isomorphism A → B(K,c).
§ MULTIPLY TRANSITIVE GALOIS GROUPS AND FACTOR-SETS
The discussions in this section are valid for general separable extensions of fields. We first define k-transitive permutation groups, set some notation and describe their behaviour as Galois groups. We then show how 3-transitivity of the Galois group of K allows one to perform efficient computations for Brauer factor sets and avoid computing in the Galois closure of K.
For the remainder of the section, F is a field, K = F(θ) is a finite separable extension of F of degree d and G = (K/F).
§.§ Field extensions with multiply transitive Galois group
We first recall the definition of a k-transitive permutation group:
Let H be a subgroup of the symmetric group 𝔖_n acting on some set X of cardinality n. For 1 ≤ k ≤ n, we say that H is k-transitive if for all tuples (x_1,…,x_k),(y_1,…,y_k) ∈ X^k such that both the x_i and the y_i are pairwise distinct, there exists σ∈ H such that σ(x_i) = y_i for all i ∈ [k].
Field extensions with k-transitive Galois group are easily characterised as follows:
The following are equivalent:
* The Galois group G of K/F is k-transitive.
* The extension K_[k]/F has degree d(d-1)⋯(d-k+1).
* For i ∈ [k-1], the polynomial Π_θ_i / (X-θ_i) is irreducible in K_[i][X].
(i) ⟺ (ii): We let H be the subgroup of G corresponding to the subextension K_[k]⊂ K_[d], so that [K_[k]:F] = [G:H]. Then, H is the pointwise stabilizer of [k] in G, seen as a permutation group. In particular, the cosets of H in G are in one-to-one correspondence with the elements of {(σ(1),σ(2),…,σ(k)) : σ∈ G}.
Then, it follows that [K_[k]:F] = d(d-1)⋯(d-k+1) if and only if G is k-transitive.
(ii) ⟺ (iii): Let F_i = Π_θ_i/(X-θ_i) ∈ K_[i][X]. Then, F_i(θ_i+1) = 0. It follows that the minimal polynomial of θ_i+1 over K_[i] divides F_i. Since deg(F_i) = d-i, we directly get (ii) ⟺ (iii).
If the Galois group G is k-transitive, then for any pairwise distinct i_1,…,i_k ∈ [d], there is an isomorphism ι_i_1 i_2 … i_k : K_[k]→ K_{i_1,i_2,…,i_k} such that ι_i_1 i_2 … i_k(θ_j) = θ_i_j for all 1 ≤ j ≤ k.
By definition of the trace of elements in field extensions, we immediately get the following:
Assume that the Galois group G is k-transitive, and let a ∈ K_[k]. Then
Tr_K_[k]/K_[k-1](a) = ∑_k ≤ i ≤ d ι_1,…,k-1,i(a)
§.§ Efficient representation for Brauer factor sets
For the remainder of the section, we assume that the Galois group G of K/F is 3-transitive. In this setting, Equations <ref> and <ref> are rigid enough that a factor set is entirely determined by one of its values.
Let c and c' be two reduced factor sets. Let i,j,k ∈ [d] be pairwise distinct. Then c = c' if and only if c_ijk = c'_ijk.
Let i',j',k' ∈ [d] be pairwise distinct, and let σ∈ G be such that σ i = i', σ j = j' and σ k = k'. Then, c_i'j'k' = σ c_ijk = σ c'_ijk = c'_i'j'k'.
Now, for all i,j ∈ [d], c_iij = c_ijj = c'_iij = c'_ijj = 1. All that remains is to prove that c_iji = c'_iji for i ≠ j. But then, for any k ∈ [d] ∖{i,j}, applying Equation <ref> yields:
c_iji c_iik = c_jik c_ijk.
It follows that c_iji = c_ijk c_jik = c'_ijk c'_jik = c'_iji.
We prove likewise that a reduced homogeneous matrix m may be entirely represented by its value m_1,2 (for this, it is enough that the Galois group be 2-transitive).
We also observe that c_ijk and m_ij lie in small subfields of K_[d]:
Let c be a reduced Brauer factor set. Then for all i,j,k ∈ [d], c_ijk∈ K_{i,j,k}. Likewise, if m is a homogeneous matrix, m_ij∈ K_{i,j} for all i,j ∈ [d].
The field K_{i,j,k} is the subfield of K_[d] fixed by the automorphisms that, as permutations, fix i,j and k. By homogeneity, if σ is such an automorphism, σ(c_ijk) = c_ijk. Therefore, c_ijk∈ K_{i,j,k}.
The proof is similar for homogeneous matrices.
A consequence of Propositions <ref> and <ref> is that we may represent a factor set using a unique element of K_[3] which represents c_1,2,3. Indeed, in general we show that c_ijk lies in K_{i,j,k}. The fact that m is a trivialisation of c can be observed solely from these representations:
Let c be a reduced Brauer factor set. Let m ∈ K_[2] be such that
m ι_2,3(m) ι_1,3(m)^-1 = c_1,2,3.
Then, we may set m_ij = ι_ij(m) for i ≠ j ∈ [d] and m_ii = 1. The matrix (m_ij) is a reduced homogeneous matrix such that ∂(m) = c.
Let m and (m_ij) be as in the statement. By hypothesis, ∂((m_ij))_1,2,3 = c_1,2,3. By Proposition <ref>, the factor sets c and ∂(m) are then equal.
Now, recall that the underlying vector space of the algebra B(K,c) is the space of homogeneous matrices for K/F. Using 2-transitivity, such a matrix ℓ may be represented as the tuple (ℓ_1,1,ℓ_1,2). That is, there is an F-vector space isomorphism
φ : B(K,c) → K_[1]× K_[2], ℓ ↦ (ℓ_1,1,ℓ_1,2)
The upshot is that multiplication in B(K,c) may be described efficiently using isomorphism φ.
Let c be a factor set, and let α,α' ∈ K_[1] and β,β' ∈ K_[2]. Then,
φ (φ^-1(α,β)φ^-1(α',β')) = (α”,β”)
with
α” = αα' + Tr_K_[2]/K_[1](c_1,2,1 β ι_2,1(β'))
and
β” = αβ' + ι_2,1(α') β + Tr_K_[3]/K_[2](c_1,3,2 ι_1,3(β) ι_3,2(β')).
Let (ℓ_ij) = φ^-1(α,β), (ℓ'_ij) = φ^-1(α',β') and (ℓ”_ij) = (ℓ_ij)(ℓ'_ij), the product being taken in B(K,c). We have α” = ℓ”_1,1 and β” = ℓ”_1,2.
So, we compute:
ℓ”_1,1 = ∑_1 ≤ i ≤ d c_1,i,1 ℓ_1,i ℓ'_i,1
= c_1,1,1 ℓ_1,1 ℓ'_1,1 + ∑_2 ≤ i ≤ d c_1,i,1 ℓ_1,i ℓ'_i,1
= αα' + ∑_2 ≤ i ≤ d ι_1,i(c_1,2,1) ι_1,i(β) ι_i,1(β')
= αα' + ∑_2 ≤ i ≤ d ι_1,i(c_1,2,1 β ι_2,1(β'))
= αα' + Tr_K_[2]/K_[1](c_1,2,1 β ι_2,1(β'))
and
ℓ”_1,2 = ∑_1 ≤ i ≤ d c_1,i,2 ℓ_1,i ℓ'_i,2
= c_1,1,2 ℓ_1,1 ℓ'_1,2 + c_1,2,2 ℓ_1,2 ℓ'_2,2 + ∑_3 ≤ i ≤ d c_1,i,2 ℓ_1,i ℓ'_i,2
= αβ' + β ι_2,1(α') + ∑_3 ≤ i ≤ d ι_1,2,i(c_1,3,2) ι_1,i(β) ι_i,2(β')
= αβ' + β ι_2,1(α') + ∑_3 ≤ i ≤ d ι_1,2,i(c_1,3,2 ι_1,3(β) ι_3,2(β'))
= αβ' + β ι_2,1(α') + Tr_K_[3]/K_[2](c_1,3,2 ι_1,3(β) ι_3,2(β'))
Lemma <ref> is used for the last step of each computation.
For the remainder of the section, we identify elements of B(K,c) with tuples in K × K_[2] via φ.
We may also transfer Proposition <ref> to this setting:
Let c ∈ K_[3] represent a trivial factor set and let m ∈ K_[2] be a trivialisation of c. Then, we may compute a rank one zero divisor in B(K,c) in polynomial time.
Here we identify elements of B(K,c) and B(K,1) (the algebra attached to the trivial factor set) with their images under φ. Then, by Example <ref>, the tuple (1,1) represents a rank one zero divisor in B(K,1). Using Proposition <ref>, it follows that (1,m^-1) represents a rank one zero divisor in B(K,c).
Bringing together the results of this section, we obtain a compact representation of factor sets, their trivialisations and elements of B(K,c). Under these representations, arithmetical operations in B(K,c) may be computed in polynomial time. In the sequel, whenever we consider an algorithm which handles factor sets or elements of B(K,c), we implicitly use this efficient representation and our efficient algorithms.
We also abuse notation and see ∂ as a map from K_[2]^× to K_[3]^×, defined by ∂(m) = m ι_2,3(m) ι_1,3(m)^-1.
§.§ Computing a factor set from an embedding
In order to leverage the nice properties of Brauer factor set, we need to be able to construct a factor set c and an isomorphism A ≅ B(K,c) for a central simple algebra A given with an embedding K ⊂ A. We follow the construction given in Section <ref>.
We first need to compute some v ∈ A such that A = KvK. We begin with a lemma:
The map
[ ψ K ⊗_F K → K ⊕ K_[2]; a ⊗ b ↦ (ab,aι_2(b)) ]
is an isomorphism of F-algebras.
Recall that θ is an element of K such that K = F(θ), and let χ be the minimal polynomial of θ, which has degree d. By Lemma <ref>, the quotient χ' = χ(X)/(X-θ) is an irreducible polynomial in K[X], which is coprime to X-θ. We then get:
K ⊗_F K ≅ K ⊗ F[X]/(χ)
≅ K[X]/(χ)
≅ K[X]/(X - θ) ⊕ K[X]/(χ')
≅ K ⊕ K_[2].
Now, let a,b ∈ K. We write b = ∑_i=0^d-1 b_i θ^i. The image of a ⊗ b in K[X]/(χ) is the class of ∑_i=0^d-1 ab_i X^i. Then, apply the CRT and reduce modulo X-θ and χ' to get the result.
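For instance, in the simplest quadratic case (an illustrative example, with K = ℚ(√2), θ = √2, χ = X^2 - 2 and hence K_[2] = K), the lemma reads
K ⊗_ℚ K ≅ K[X]/(X^2-2) ≅ K[X]/(X-√2) ⊕ K[X]/(X+√2) ≅ K ⊕ K,
with ψ sending a ⊗ b to (ab, aσ(b)), where σ is the non-trivial automorphism of K; here ι_2 = σ.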
There exists a polynomial-time algorithm which takes as input a positive integer d, structure constants for a central simple F-algebra A of degree d and coordinates for some u ∈ A such that F(u) ≅ K in a way that associates u to θ_1, and which outputs coordinates for v ∈ A such that A = F(u)vF(u).
The algebra A is a cyclic F(u) ⊗_F F(u)-module via (a ⊗ b)x = axb. Using Lemma <ref>, we may compute an explicit isomorphism F(u) ⊗ F(u) ≅ K ⊕ K_[2] and therefore give A a structure of K ⊕ K_[2]-module. Following the proof of <cit.>, we need to find x_1,x_2 non-zero such that x_1 ∈ (K ⊕{0})A and x_2 ∈ ({0}⊕ K_[2])A.
Such elements may be found in polynomial-time with the following algorithm:
* Using linear algebra, compute e_1 = ψ^-1((1,0)) and e_2 = ψ^-1((0,1)).
* Let (a_1,…,a_d^2) be a basis of A.
* Set x_1 = e_1 a_i and x_2 = e_2 a_j chosen such that x_1 and x_2 are non-zero.
Then, following the proof of the Proposition cited above, we may take v = x_1 + x_2.
Since we need an efficient algorithm, we must avoid doing any computation directly in K_[d]. We first note that the map ψ: A → B(K,c) itself does not depend explicitly on c. Indeed, as a vector space, B(K,c) is the space of homogeneous matrices, and only the multiplicative structure of the algebra depends on c. Our strategy is therefore first to compute the map ψ: A → B(K,c) and then to compute the factor set c which makes this map an F-algebra homomorphism.
Since A = KvK, a basis of A is (θ_1^k v θ_1^l)_0 ≤ k,l ≤ d-1. We let a ∈ A have coordinates (a_kl) with respect to this basis. Then, by <cit.>, if (ℓ_ij) is the homogeneous matrix ψ(a), we have
ℓ_1,1 = ∑_k,l=0^d-1 a_klθ_1^k+l
ℓ_1,2 = ∑_k,l=0^d-1 a_klθ_1^kθ_2^l
it follows that we may compute the representation of φ(a) as a tuple in K × K_[2] in polynomial time. Now, by Proposition <ref>, we only need to compute c_1,3,2. For any β,β' ∈ K_[2], we must have
φ(φ^-1(0,β) φ^-1(0,β')) = (α”,β”)
with
Tr_K_[3]/K_[2](c_1,3,2 ι_1,3(β) ι_3,2(β')) = β”
Since K_[3]/K_[2] is a separable extension, if x ∈ K_[3] satisfies Tr_K_[3]/K_[2](xβ) = 0 for all β ∈ K_[3], then x = 0. So, if a solution c_1,3,2 to the linear equations above exists for all β,β' ∈ K_[2], then it must be unique. Such a solution must exist by <cit.>.
Furthermore, we may reduce the problem to solving a finite system by having β and β' range over a basis of K_[2].
Bringing the above discussion and Proposition <ref> together, we get
There exists a polynomial-time algorithm which takes as input a positive integer d, structure constants for a central simple F-algebra A of degree d and coordinates for some u ∈ A such that F(u)/F is a degree d separable field extension and Gal(F(u)/F) is 3-transitive, and outputs a reduced factor set c for F(u)/F, together with an explicit isomorphism φ: A → B(K,c), where K = F(u).
§ FINDING TRIVIALISATIONS OF BRAUER FACTOR-SETS
In this section, we let F and K be as in Section <ref>. We will prove the following adaptation of <cit.>:
Let c be a trivial reduced factor set.
Let S be a finite set of places of K such that
* S contains all the infinite places of K_[1].
* S contains all the ramified places of K_[1].
* c ∈ U_K_[3],S
* The classes of the finite primes of S generate the ideal class group of K_[1].
Then, there exists a trivialisation of c which lies in U_K_[2],S.
We first prove two lemmas:
Let L/K be an extension of number fields, and let M be the relative Galois closure of L above K. Let σ_1,…,σ_d be the embeddings of L into M. Let I be a fractional ideal of L coprime with all primes of L ramified over K. Then, there exists a fractional ideal J of K such that
∑_i=1^d σ_i(I) = J 𝒪_M.
In what follows, p is always a prime of K, 𝔮 a prime of L and 𝔓 a prime of M. Let I factor as ∏_𝔮 𝔮^r_𝔮. Let Supp(I) be the set of primes p of K such that v_𝔮(I) ≠ 0 for some prime 𝔮 of L above p. For a prime 𝔓 of M, we call 𝔮_𝔓,i the unique prime of L such that 𝔓 | σ_i(𝔮_𝔓,i). Recall that ∏_𝔮 𝔮^a_𝔮 + ∏_𝔮 𝔮^b_𝔮 = ∏_𝔮 𝔮^min(a_𝔮,b_𝔮). Furthermore, by hypothesis, no prime of Supp(I) ramifies in L. It is well known that they then may not ramify in M. We get the following equality:
∑_i=1^d σ_i(I) = ∏_p ∈ Supp(I) ∏_𝔓 | p 𝔓^min_i ∈ [d] r_𝔮_𝔓,i.
Now, if we fix a prime 𝔓 of M above a prime p of K, we observe that every prime of L above p is 𝔮_𝔓,i for some i ∈ [d]. Therefore, we get
∑_i=1^d σ_i(I) = ∏_p ∈ Supp(I) ∏_𝔓 | p 𝔓^min_i ∈ [d] r_𝔮_𝔓,i
= ∏_p ∈ Supp(I) ∏_𝔓 | p 𝔓^min_𝔮 | p r_𝔮
= ∏_p ∈ Supp(I) (∏_𝔓 | p 𝔓)^min_𝔮 | p r_𝔮
= (∏_p ∈ Supp(I) p^min_𝔮 | p r_𝔮) 𝒪_M
In what follows, we extend the definition of the map ∂ K_[2]^×→ K_[3]^× to the groups of fractional ideals. That is, if I is a fractional ideal of K_[2], we set ∂(I) = I ι_2,3(I) ι_1,3(I)^-1.
Let I be a fractional ideal in K_[2] with support disjoint from S, such that ∂(I) = 𝒪_K_[3]. Then, there exists a fractional ideal J of K_[1] such that I = J ι_2(J)^-1.
This is a generalisation of Hilbert's theorem 90 in two aspects: it generalises from Galois cocycles to Brauer factor sets and it generalises from elements to ideals. We adapt the proof of <cit.>, which generalized Hilbert's theorem 90 to ideals.
First, we note that for all i,j,k ∈ [d], ι_ij(I)ι_jk(I) = ι_ik(I).
Then, we set
J = ∑_i=2^d ι_1,i(I).
By Lemma <ref>, we may see J as an ideal of K_[1]. We prove that I = J ι_2(J)^-1.
Indeed, we compute:
I ι_2(J) = I ∑_i=1, i ≠ 2^d ι_2,i(I)
= ∑_i=1, i ≠ 2^d I ι_2,i(I)
= ∑_i=2^d ι_1,i(I)
= J
By assumption, we have a reduced Brauer factor set c and a trivialisation m ∈ K_[2]^×.
Consider the unique factorisation m𝒪_K_[2] = ∏_𝔓 𝔓^r_𝔓. Then, set
ν = ∏_𝔭 ∈ S ∏_𝔓 | 𝔭 𝔓^r_𝔓.
Now, c𝒪_K_[3] factors as a product of primes with support in S, and the map ∂ is multiplicative and sends ideals above 𝔭 to ideals above 𝔭 for any prime 𝔭 of F. It follows that c𝒪_K_[3] = ∂(ν). That is, ∂(m^-1ν) = 𝒪_K_[3].
Now, by Lemma <ref>, there is a fractional ideal μ of K_[1] such that m^-1ν = μ ι_2(μ)^-1. Now, write μ = aμ', where a ∈ K_[1]^× and μ' is an ideal of K_[1] with support in S.
We get
m a ι_2(a)^-1 𝒪_K_[2] = ν μ'^-1 ι_2(μ').
Now, the RHS of the above equality is an ideal with support in S, and it follows that
maι_2(a)^-1∈ U_K_[2],S.
Finally, one may check easily that ∂(m a ι_2(a)^-1) = ∂(m) = c.
We may now deduce Algorithm <ref> for trivialising a Brauer factor set. We use M_K to denote the set of prime ideals of 𝒪_K. We note that one may handle most lines using either polynomial quantum algorithms or less efficient classical algorithms. We discuss the quantum version below.
Quantum Algorithm <ref> runs in polynomial time and outputs a trivialisation of the input Brauer factor set.
As discussed in Section <ref>, there are polynomial quantum algorithms for factoring fractional ideals, computing class groups and S-unit groups as well as rings of integers (and therefore discriminants). It follows that lines 1,2,3,4,5 and 6 may be executed in quantum polynomial time. Lines 7 and 8 involve computing discrete logarithms in groups of S-units. This may be done by combining one of the algorithms described in Section <ref> with <cit.>. Then, line 9 only performs linear algebra over for which there exist polynomial-time classical algorithms. It follows that quantum Algorithm <ref> is polynomial.
The map ∂ is a group homomorphism from U_K_[2],S to U_K_[3],S. By Theorem <ref>, there exists m ∈ U_K_[2],S such that ∂(m) = c. It follows that there exists a vector C ∈ ℤ^r such that AC = (β_1,…,β_s), as computed at line 9. Then, ∂(m) = c, where m is the output ∏_i=1^r ε_i^C_i, the ε_i being the computed generators of U_K_[2],S.
§ RANDOM ELEMENTS OF MAXIMAL ORDERS OF MATRIX ALGEBRAS
In this section, we study random elements of an arbitrary maximal order of M_d(F), for F a number field. Using the appropriate random distribution, Heuristic <ref> states that such a random element has irreducible characteristic polynomial with non-negligible probability. We give a lower bound on the probability that, if x is a random matrix as in Heuristic <ref> with irreducible characteristic polynomial, then Gal(χ_x) = 𝔖_d.
If the field F is fixed, one may adapt the arguments from <cit.> to prove that with probability 1 - o(1) either χ_x is reducible or Gal(χ_x) is 𝔄_n or 𝔖_n. For our purposes, however, we must bound the probability from below uniformly with respect to the number field F. On the other hand, we do not need to prove that the probability of success goes to 1 as the parameters grow. In fact, we will prove that the probability of success is at least 1/μ(d) for some polynomial μ. While this lower bound is certainly not sharp, it is sufficient for the needs of this work. For the sake of simplicity, we do not seek better estimates.
There is a polynomial μ with the following property:
Let x be as in the statement of Heuristic <ref>. Then, we have
ℙ({Gal(χ_x) = 𝔖_n | χ_x is irreducible}) ≥ 1/μ(d)
for d ≥ 3.
The rest of this section is dedicated to proving Proposition <ref>.
Let d ∈ ℕ and let F be a number field. Let 𝔭_2 | 2 and 𝔭_3 | 3 be prime ideals of 𝒪_F with respective residue fields k_2 and k_3.
Reproducing the argument made in the proof of <cit.> and summarized in the proof of <cit.>, we get
Let f ∈ 𝒪_F[X] be a monic polynomial such that the reduction f̅ ∈ k_i[X] is square-free and factors as a product of irreducible polynomials of degrees n_1,…,n_r. Then Gal(f) contains a permutation which factors as a product of cycles of lengths n_1,…,n_r.
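As an illustration of how this lemma is used algorithmically, the short Python sketch below (using sympy; the polynomial x^5 - x - 1 is only an arbitrary stand-in for χ_x) reduces an integer polynomial modulo 2 and modulo 3 and reads off the factor degrees; whenever the reduction is square-free, these degrees form the cycle type of a permutation contained in Gal(f).

from sympy import Poly, symbols

x = symbols("x")
for p in (2, 3):
    # Factor the reduction of f modulo p over the field with p elements.
    _, factors = Poly(x**5 - x - 1, x, modulus=p).factor_list()
    degrees = sorted(g.degree() for g, _ in factors)
    squarefree = all(mult == 1 for _, mult in factors)
    print(p, degrees, squarefree)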
In order to use Lemma <ref>, we need to describe conjugacy classes that may not be contained together in a proper transitive subgroup of 𝔖_n:
Let G be a transitive subgroup of 𝔖_n. If G contains an (n-1)-cycle and either a permutation of cycle structure (2,n-3,1) if n is even, or (2,n-2) if n is odd, then G = 𝔖_n.
Assume G is as in the statement, and let σ be the second permutation. If n is even and σ has cycle structure (2,n-3,1), then σ^n-3 is a transposition. Similarly, if n is odd and σ has cycle structure (2,n-2), then σ^n-2 is a transposition. In either case, G must then contain a transposition, as well as an (n-1)-cycle. We may then conclude with an argument from <cit.>. We reproduce the argument below for completeness:
Up to conjugation, we may assume that G contains the cycle (2,3,…,n). Then, G contains a transposition τ = (a,b). Since G is transitive, there is g ∈ G such that g τ g^-1 = (1,c) for some c ∈ {2,…,n}. The group G also contains the cycle (c,c+1,…,n,2,…,c-1). So, up to conjugation, G contains the transposition (1,2) and the cycle (2,3,…,n). These two permutations are known generators of 𝔖_n (see for instance <cit.>).
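The final claim can also be checked computationally; a minimal sympy sketch (for the illustrative value n = 6, the value being an assumption made only for the example) verifies that the transposition (1,2) and the cycle (2,3,…,n) generate the full symmetric group.

from math import factorial
from sympy.combinatorics import Permutation, PermutationGroup

n = 6
t = Permutation([[0, 1]], size=n)                # the transposition (1,2), 0-indexed
c = Permutation([list(range(1, n))], size=n)     # the cycle (2,3,...,n), 0-indexed
G = PermutationGroup([t, c])
print(G.order() == factorial(n))                 # True: the two permutations generate S_n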
Using the lemmas above, we may rephrase the problem in terms of the factorisation of χ_x in k_2[X] and k_3[X]. In order to do so, we must prove that our sampling method produces a uniform random variable in M_d(k_2) × M_d(k_3).
Let (a_1,…,a_r) be a ℤ-basis of the maximal order. If x is a uniform random variable in ⊕_i ∈ [r] [6]a_i, then (π_2(x),π_3(x)) is a uniform random variable in M_d(k_2) × M_d(k_3).
Indeed, the set ⊕_i ∈ [r] [6]a_i is a set of representatives for the quotient of the maximal order modulo 6. For i ∈ {2,3}, let π_i be the natural projection modulo 𝔭_i. Then, the claim follows from the fact that the fibres (π_2,π_3)^-1({(a,b)}) are exactly the cosets of the kernel of (π_2,π_3), namely the image of 𝔭_2𝔭_3 in the quotient modulo 6, and that each coset has the same number of elements.
Now, we compute the probability that for x uniformly random in M_d(k_2), the polynomial χ_x factors into a product of irreducibles F_1 F_2 with deg F_1 = 1 and deg F_2 = d-1. If F_1 and F_2 are such polynomials, then by <cit.>, the probability that χ_x = F_1F_2 is
p = q^-d F(q,d)/(F(q^d-1,1)F(q,1)),
with F(u,r) = ∏_i=1^r (1-u^-i), and q = 2^n the size of k_2.
It is well known that the number of monic irreducible polynomials of degree r in 𝔽_q[X] is 1/r ∑_s | r μ(s) q^r/s, where μ here is the Möbius function. Therefore, the probability that there are some irreducible polynomials F_1 and F_2 such that deg F_1 = 1, deg F_2 = d-1 and χ_x = F_1 F_2 is
p' = 1/(d-1) · (q ∑_s | d-1 μ(s) q^(d-1)/s)/q^d · F(q,d)/(F(q,1)F(q^d-1,1))
Now, observe that (assuming d ≥ 3), lim_max(n,d) →∞ (2^n ∑_s | d-1 μ(s) 2^n(d-1)/s)/2^nd = 1. We prove that F(q,d) is bounded away from zero. Since F(q,d) ≥ F(2,d) when q ≥ 2, it is enough to prove that F(2,d) is bounded away from zero. Then, this follows from the fact that the series ∑_i ≥ 1 log(1-2^-i) converges. Since F(u,r) < 1 for u > 1 and r > 0, we have proved that there exists C_2 > 0 such that
p' ≥ C_2/(d-1).
We prove likewise that the probability that for x a uniform element of M_d(k_3), χ_x factors as F_1F_2 with deg(F_1) = 2 and deg(F_2) = d-2 is bounded from below by C_3/(d-2) for some C_3 > 0, and that the probability that χ_x factors as F_1F_2F_3 with deg F_1 = 1, deg F_2 = 2 and deg F_3 = d-3 is larger than C'_3/(d-3) for some C'_3 > 0.
Let μ(d) = (d-1)(d-3)/(C_2 min(C_3,C'_3)). Bringing all of the above together, we have proved that with probability greater than 1/μ(d), a uniform x in ⊕_i=1^r [6]a_i has the following property: if χ_x is irreducible, then Gal(χ_x) = 𝔖_d.
§ THE MAIN ALGORITHM
In this section, we bring together the results from the previous sections and present Algorithm <ref> for solving the explicit isomorphism problem. We handle the case where A≅ M_2(F) separately.
§.§ The d=2 case
We include this case for completeness, even though this is more or less a known result. The case d=2 is the case where A is isomorphic to a quaternion algebra over F. This presentation can be found in polynomial time <cit.>. The main idea is that one first finds a trace 0 element u∈ A and then finds an element v such that uv=-vu by solving a system of linear equations. Then u,v will generate a quaternion basis. If one is given a quaternion algebra with the presentation u^2=a, v^2=b where a,b∈ F^*, then finding a zero divisor is equivalent to solving the norm equation N_K'/F(x)=b, where K' = F(√(a)) <cit.>. Now, invoking results from Section <ref>, one can find an explicit isomorphism between A and M_2(F) in quantum polynomial time.
§.§ The d≥ 3 case
Under Heuristic <ref>, Algorithm <ref> runs in quantum polynomial time and outputs an isomorphism A ≃ M_d(F) with probability larger than 1/μ(log F, F,d) for some polynomial μ.
Computing a basis for a maximal order of A can be done in polynomial time <cit.>.
Combining Heuristic <ref> and Proposition <ref>, there exists a polynomial μ such that the x ∈ A sampled in line 2 has irreducible characteristic polynomial with full symmetric (and therefore 3-transitive) Galois group, with probability larger than 1/μ(log F, F,d).
By Theorem <ref>, we may use the embedding K = F(x) ⊂ A to compute a reduced Brauer factor set c ∈ K_[3] and an isomorphism φ: A → B(K,c). Then, one may use Algorithm <ref> to find a trivialisation m of c. Finally, a rank one zero divisor of B(K,c) is computed using Corollary <ref>. The isomorphism ψ is then deduced as explained in Section <ref>.
§.§ Removing the heuristic
In section <ref>, we bounded from below the probability
ℙ(Gal(χ_x) = 𝔖_n | χ_x is irreducible)
for an element x ∈ A sampled as in line 2 of Algorithm <ref> to have a characteristic polynomial with full symmetric Galois group provided that said polynomial is irreducible. Therefore, in order to produce an algorithm whose efficiency does not depend on Heuristic <ref>, we must handle the cases where χ_x is not irreducible.
We may therefore modify Algorithm <ref> in the following way: after line 2, we test if either the minimal polynomial is reducible or if it is a proper divisor of χ_x. We handle each of these two cases differently.
If the minimal polynomial Π_x is reducible, it admits a non-trivial divisor g, and g(x) is a zero-divisor in A. Then, we may apply the methods from Section <ref>. That is, we compute the right unit e of the left ideal Ag(x). Then, we get a subalgebra A' = eAe which is isomorphic to M_r(F), where r < d is the rank of g(x). We apply our algorithm recursively to the F-algebra A'.
If the minimal polynomial Π_x is a proper divisor of χ_x, then the proof of <cit.> generalizes readily to our setting. That is, if d' = deg Π_x, the centraliser C_A(x) of x in A is isomorphic to M_d/d'(F(x)). We apply our algorithm recursively to C_A(x) with F(x) as a new base field.
We then obtain an unconditional version of Algorithm <ref> which is polynomial-time if d is bounded.
The modified version of Algorithm <ref> described above runs in quantum polynomial time if the input algebra A has degree bounded by some absolute constant D. It succeeds with probability larger than 1/μ(d), where μ is a polynomial and d is the degree of the input algebra.
Since every recursive call reduces the degree of the input algebra and increases the degree of the base field linearly at most, our modified algorithm only uses a bounded amount of recursive calls. It is therefore enough to prove that in every recursive call, we may still perform computations in polynomial-time in both the new base field and the new input algebra. That is, we need the size of the structure constants over F of the new base field and the size of the structure constants over the new base field of the new input algebra to be at most polynomial in the parameters.
In the case that the minimal polynomial Π_x is reducible, the base field remains the same. As for the algebra eAe, its structure constants are computed in polynomial time via linear algebra and therefore have polynomial size with respect to every parameter. The cost of computations in eAe is polynomial in the size of its structure constants and is therefore still polynomial.
In the case that Π_x is a proper divisor of χ_x, we first check that computation may still be done in the new base field F(x). However, F(x) is naturally represented as an F-algebra whose structure constants have polynomial size in the coefficients of Π_x. Since Π_x may itself be computed in polynomial time, it follows that computations in F(x) may be performed in polynomial time. Now, an F(x)-basis of the centraliser C_A(x) in A may be extracted from an F-basis of C_A(x), itself computable in polynomial time. Therefore, structure constants for this F(x)-basis are polynomial sized.
If Π_x is irreducible, then Gal(F(x)/F) is 3-transitive with probability at least 1/μ(d) by Proposition <ref>.
|
http://arxiv.org/abs/2307.02892v2
|
20230706095435
|
The Relationship Between Speech Features Changes When You Get Depressed: Feature Correlations for Improving Speed and Performance of Depression Detection
|
[
"Fuxiang Tao",
"Wei Ma",
"Xuri Ge",
"Anna Esposito",
"Alessandro Vinciarelli"
] |
cs.CL
|
[
"cs.CL",
"cs.SD",
"eess.AS"
] |
F. Tao et al.
University of Glasgow, Glasgow G12 8QQ, UK
{f.tao.1,W.ma.1,x.ge.2}@research.gla.ac.uk, Alessandro.Vinciarelli@glasgow.ac.uk
Università degli Studi della Campania “Luigi Vanvitelli”, Caserta, Italy
Anna.Esposito@unicampania.it
The Relationship Between Speech Features Changes When You Get Depressed: Feature Correlations for Improving Speed and Performance of Depression Detection
Fuxiang Tao1 Wei Ma1 Xuri Ge1
Anna Esposito2
Alessandro Vinciarelli1
=========================================================================================================================================================
This work shows that depression changes the correlation between features extracted from speech. Furthermore, it shows that using such an insight can improve the training speed and performance of depression detectors based on SVMs and LSTMs.
The experiments were performed over the Androids Corpus, a publicly available dataset involving 112 speakers, including 58 people diagnosed with depression by professional psychiatrists. The results show that the models used in the experiments improve in terms of training speed and performance when fed with feature correlation matrices rather than with feature vectors. The relative reduction of the error rate ranges between 23.1% and 26.6% depending on the model. The probable explanation is that feature correlation matrices appear to be more variable in the case of depressed speakers. Correspondingly, such a phenomenon can be thought of as a depression marker.
§ INTRODUCTION
The World Health Organization recently stated that depression affects 4.4% of the world's population and accounts for 7.5% of all years lived with disability <cit.>. The recent COVID-19 pandemics further aggravated such a situation and led to 53.2 millions more depression patients, an increase by 27.6% <cit.>. Nevertheless, 63.6% of the cases remain undiagnosed partly due to limited availability of healthcare services and partly because many patients lack financial resources necessary to access appropriate medical attention <cit.>. For these reasons, the computing community made major efforts towards the development of depression detection technologies.
Many approaches use speech as input and the reason is that the pathology was shown to leave traces in the way people speak (see Section <ref>).
In fact, according to neuroscience, brain connectivity patterns tend to be more unstable in depression patients <cit.>.
In particular, depressed speakers tend to show a lower degree of coordination across different brain areas and this can “[...] alter speech production by influencing the characteristics of the vocal source, tract, and prosodics [and lead to] psychomotor retardation, where a patient shows sluggishness and motor disorder in vocal articulation, affecting coordination across multiple aspects of production” <cit.>. Such a phenomenon is expected to change not only acoustic properties of speech, but also the relationship between them. Correspondingly, it is expected to change the correlation between features and the way such a correlation changes over time.
Despite the above, to the best of our knowledge, the literature pays only limited attention to the relationship between features (see Section <ref>). The key assumption of this work is that this is an issue because it results into missing the major depression effects described earlier.
For this reason, the focus of this work is on investigating whether feature correlations and their changes over time can help improving the effectiveness of depression detection.
The experiments were performed over the Androids Corpus, a publicly available dataset involving 112 speakers. The focus is on the read speech samples for two main reasons.
The first is that the brain phenomena mentioned earlier were observed also when people read <cit.>, the second is that read speech involves less variability resulting from sources not necessarily related to the pathology (e.g., the topic being discussed). The speech samples were represented as sequences of feature correlation matrices (see Section <ref>) and such a representation was shown to improve two classification approaches, namely Support Vector Machines (SVM) and Long Short-Term Memory networks (LSTMs) <cit.>. Compared to feature vectors, correlation matrices were shown to reduce the error rate by up to 26.6% and up to 23.1% for SVMs and LSTMs, respectively. Overall, the proposed approaches reached an F1 score of up to 87.7% when fed with feature correlation matrices and up to 83.7% when fed with feature vectors. In addition, using sequences of feature correlation matrices made the classification process roughly two times faster.
Overall, the main contributions of this work can be summarized as follows:
* To the best of our knowledge, this is the first work comparing the performance of the same models (SVM and LSTM) when being fed with feature vectors and with feature correlation matrices;
* This is the first work showing that changes in correlation patterns over time can act as a depression marker.
The comparison with previous results obtained over the same data shows that the novelties above lead, on average, to higher performances.
The rest of this article is organized as follows: Section <ref> describes related work, Section <ref> describes the data, Section <ref> describes the approach used in the experiments, Section <ref> reports on experiments and results, and the final Section <ref> draws some conclusions.
§ RELATED WORK
In recent years, the computing community made substantial efforts towards automatic depression detection and identification of depression markers in speech <cit.>.
After observing that speech production is different in depression patients <cit.>, several works tried to identify markers related to temporal properties. For example, it was shown that the effectiveness of depression detection can be improved significantly by taking into account speech rate <cit.> or frequency of pauses <cit.>. However, this type of marker was shown to have different effectiveness in correspondence of different types of speech <cit.>.
In most cases, the approaches do not try to identify markers, but represent speech in terms of feature vectors and then use different machine learning methodologies to identify depression patients. The most common features are, by far, Mel Frequency Cepstral Coefficients (MFCCs), often shown to be beneficial for the performance <cit.>. MFCCs, and any other Low-Level Descriptors, are typically extracted from short analysis windows and they are often concatenated with their derivatives to account for possible temporal effects. Such an approach was shown to be effective for both spontaneous and read speech <cit.>.
Correlation between features was explored, to a limited extent, as a possible way to capture “[...] psychomotor retardation, where a patient shows sluggishness and motor disorder in vocal articulation, affecting coordination across multiple aspects of production” <cit.>. The proposed approach was based on measuring the correlation between features extracted at multiple time distances, thus resulting in multiple correlation matrices concatenated with each other <cit.>. The approach was shown to be effective not only for depression, but also for epileptic seizures <cit.>. More recent studies show that such a correlation representation is effective with deep neural networks too <cit.>. The main issues are the need for longer speech recordings (the time distances for measuring the correlation can be high) <cit.> and the appearance of artefacts that require the application of filters <cit.>. The approach proposed in this work addresses both problems by measuring the correlation over short intervals of time and by modelling the correlation changes through sequential models.
§ THE DATA
The experiments were performed over the read speech samples of the Androids Corpus, a publicly available dataset <cit.>. Overall, the corpus involves 118 participants, but the experiments focus on the 112 who provided read speech samples.
Each participant is a native Italian speaker and was asked to read aloud an Aesop's tale in Italian (“The North Wind and the Sun”). Informed consent was obtained from all participants before the experiment, in accord with privacy and data protection laws in Italy, the country where the data was collected. The reason for choosing the fairy tale above is that it is easy to understand and it does not contain words that an average reader might not know (the version of the tale used in the experiments was extracted from a book for children). This ensures that the participants can understand the text irrespective of their education level. In order to simulate the normal setting in which depression patients interact with doctors, the recordings were collected with a standard laptop microphone in the clinical consultation rooms of the three Mental Health Centers involved in the study.
Out of the 112 participants, 58 reported no history of disorders (neurological or psychiatric) and made no use of medications or recreational drugs (referred to as control participants hereafter). The remaining 54 were diagnosed with depression by professional psychiatrists using the Diagnostic and Statistical Manual of Mental Disorders 5 (DSM-5).
Table <ref> shows information about age, gender and education level. Statistical analysis shows that there is no difference in age distribution between depressed and control participants (p > 0.05 according to a two-tailed t-test). Similarly, there is no statistically significant difference in terms of gender and education level according to χ^2 tests. This suggests that speech differences between the two groups of participants depend on depression and not on other factors.
The age distribution of the data matches the age range of people that tend to develop depression more frequently <cit.>. Similarly, the number of female depressed participants is close to 2 times greater than male ones. This is in line with epidemiological observations showing that depression is more common among women than among men <cit.>. In this respect, the sample is expected to represent the general population of both depressed and non-depressed individuals. The total duration of the recordings is 1 hour, 33 minutes and 49 seconds, with an overall average of 50.3 seconds. When considering separately control and depressed participants, the averages are 52.9 and 47.4 seconds, respectively (the difference is statistically significant with p < 0.01 according to a two-tailed t test).
§ THE APPROACH
The goal of the experiments is to test whether feature correlation matrices convey more depression-relevant information than feature vectors. For this reason, the approach (see Figure <ref>) includes a feature extraction step (conversion of speech signals into sequences of feature vectors), a correlation representation step (mapping of sequences of feature vectors into sequences of feature correlation matrices) and a depression detection step. Figure <ref> shows that this latter can be performed by feeding the same model (SVM or LSTM) with either feature vectors or correlation matrices. In this way, it is possible to test whether these latter convey more information than the sole features.
The aim of the feature extraction is to convert the speech recordings into sequences of feature vectors x⃗_k. In the experiments of this work, the vectors were extracted from 25 ms long analysis windows at regular time steps of 10 ms. Both values are standard in the literature and no attempts were made to identify alternatives possibly leading to better results. The extraction was implemented with OpenSMILE <cit.>
and made use of a 32-dimensional feature set widely applied in emotion recognition <cit.>. The 32 features include the following:
* Root Mean Square of the Energy (Energy): related to loudness and it tends to be lower for depressed individuals <cit.>;
* Mel-Frequency Cepstral coefficients 1-12 (MFCC): they account for the phonetic content of the data and they are widely used in depression detection <cit.>;
* Fundamental Frequency (F0): frequency that carries the highest energy in the signal <cit.>;
* Zero-Crossing Rate (ZCR): it indicates how many times a signal crosses the value of zero per millisecond and accounts for F0 <cit.>;
* Voicing probability (VP): it accounts for the probability of a frame corresponding to emission of voice and it was shown to account for pauses that can help one to discriminate between depressed and non-depressed speakers <cit.>;
The 16 features are doubled by taking into account the differences between the feature values extracted from two consecutive analysis windows, thus leading to vectors of dimension D=32.
The correlation representation step segments the sequence of feature vectors into subsequences of length L (the number of vectors included in one subsequence) starting at regular steps of length L/2 (two consecutive subsequences overlap by half of their elements). In the experiments, the value of L ranges between 100 and 500, corresponding to time intervals of length between 1 and 5 seconds. Once the segmentation is performed, it is possible to extract a local feature correlation matrix I_k in which element {I_k}_ij is the correlation between extracted features i and j (i,j ∈ [1,D]) along the subsequence. Given that there are multiple subsequences, there will be multiple local feature correlation matrices I_k, with k ranging between 1 and T, where T is the total number of subsequences. Matrix I_n corresponds to the subsequence starting at vector (n-1)L/2 and ending at vector [(n-1)/2+1]L -1, where n=1,…,T. All correlation coefficients are converted into Z-scores with Fisher’s transformation, a standard step in statistical analysis of dependent correlations <cit.>.
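A minimal numpy sketch of this correlation representation step is given below; the function name and the shape of the per-frame feature array are illustrative, and windows containing a constant feature (which would make the correlation undefined) are assumed not to occur.

import numpy as np

def correlation_sequence(features, L):
    """features: (n_frames, D) array of per-frame features; L: window length.
    Returns one Fisher z-transformed lower-triangle correlation vector per
    window of L frames, with windows starting every L/2 frames."""
    hop = L // 2
    D = features.shape[1]
    tril = np.tril_indices(D, k=-1)                 # elements below the main diagonal
    vectors = []
    for start in range(0, features.shape[0] - L + 1, hop):
        window = features[start:start + L]
        corr = np.corrcoef(window, rowvar=False)    # D x D correlation matrix I_k
        z = np.arctanh(np.clip(corr[tril], -0.999, 0.999))  # Fisher transformation
        vectors.append(z)
    return np.stack(vectors)                        # (n_windows, D*(D-1)/2)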
A first baseline approach for depression detection (BL1) <cit.> takes the average of the feature vectors extracted from a recording and feeds it to a linear kernel SVM implemented with scikit-learn (version 0.23.2) <cit.>, the BL1 in Figure <ref>. Similarly, given that the correlation representation step converts every speech signal into a sequence of matrices I_k, it is possible to estimate the average I_k and feed it to a Support Vector Machine to perform the depression detection step (see Approach 1 in Figure <ref>). The correlation matrices are symmetric and only the elements below the principal diagonal are used, thus leading to a dimension D(D-1)/2 = 496 (D=32 is the original number of features). BL1 and Approach 1 can be compared to test whether feature correlation matrices are actually of help.
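A sketch of Approach 1 along these lines (reusing correlation_sequence from the previous sketch; the synthetic recordings, labels and the window length L=300 are stand-ins, not the experimental data) could look as follows.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins: 20 recordings of per-frame features and binary labels.
recordings = [rng.normal(size=(1000, 32)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)              # 0 = control, 1 = depressed

# One fixed-size input per recording: the average of its correlation vectors.
X = np.stack([correlation_sequence(f, L=300).mean(axis=0) for f in recordings])
clf = SVC(kernel="linear").fit(X, labels)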
Another way to perform depression detection is to split the sequence of feature vectors into subsequences of length 128 (a standard value in the literature) and then feed each one of these to LSTMs (BL2 in <cit.>) implemented with PyTorch (version 1.13.1+cu116) <cit.>. Once all subsequences are classified, it is possible to apply a majority vote and assign a recording to the class its subsequences are more frequently assigned to (BL2 in Figure <ref>). The same LSTMs can be applied to the sequence of the I_k. In this case, a linear layer reducing the dimension of the correlation matrices to 32 is added to the LSTMs with the goal of making the problem computationally more tractable (Approach 2 in Figure <ref>). BL2 and Approach 2 can be compared to test whether the matrices actually improve over the feature vectors.
§ EXPERIMENTS AND RESULTS
The experiments were performed according to the k-fold protocol (k=5) available in the Androids Corpus distribution.
The folds are disjoint so that
the same participant never appeared in both training and test set. This ensures that the approach actually detects depression and does not simply recognize the speaker.
The number of hidden states in the LSTMs was set to H = 32 (the same number of hidden states as the baselines in the Androids Corpus <cit.>). The number of training epochs was T=100 and the learning rate was set to 0.0005. The training was performed using the RMSProp as an optimizer <cit.> and the categorical cross-entropy as a loss function. All experiments were replicated R=10 times using a different initialization of the networks. For this reason, all results are reported in terms of average and standard deviation observed over the repetitions. This ensures that the performances are not the result of a favorable initialization, but a realistic estimate of the system's effectiveness.
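For concreteness, a minimal PyTorch sketch consistent with the description of Approach 2 and the hyperparameters above is shown below; the layer names, the use of the final hidden state for classification and the synthetic batch are illustrative assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class CorrLSTM(nn.Module):
    """Linear projection of 496-dim correlation vectors to 32 dims,
    an LSTM with H=32 hidden states, and a linear classification head."""
    def __init__(self, in_dim=496, proj_dim=32, hidden=32, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, proj_dim)
        self.lstm = nn.LSTM(proj_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_windows, 496)
        _, (h, _) = self.lstm(self.proj(x))
        return self.head(h[-1])            # class logits from the final hidden state

model = CorrLSTM()
optimiser = torch.optim.RMSprop(model.parameters(), lr=0.0005)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 30, 496)                # 4 recordings, 30 correlation windows each
y = torch.randint(0, 2, (4,))
optimiser.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimiser.step()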
Table <ref> shows the results in terms of Accuracy, Precision, Recall and F1 Score. The random baseline approach assigns an unseen sample to a certain class with probability corresponding to its prior. This leads to the following accuracy:
α̂=p(c)^2+p(d)^2,
where p(c) is the prior of class control and p(d) is the prior of class depressed. The corresponding Precision, Recall and F1 score are all equal to p(d).
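For illustration, with the class proportions of Section <ref> (58 control and 54 depressed participants out of 112), this gives α̂ = (58/112)^2 + (54/112)^2 ≈ 0.50, while the corresponding Precision, Recall and F1 Score are all equal to 54/112 ≈ 0.48.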
The table shows the performance of SVMs and LSTMs when giving as input both the sequence of vectors x⃗_k and the sequence of feature correlation matrices I_k. According to a two-tailed t-test, all approaches perform better than the random baseline (p<0.001 in all cases). Furthermore, always according to a two-tailed t-test, models perform better when fed with feature correlation matrices than when fed with feature vectors. The error rate decreases by 26.6% when passing from BL1 to Approach 1. Similarly, the relative reduction of the error rate is up to 23.1% for approaches based on LSTMs. In this respect, the results suggest that feature correlation matrices actually convey more depression-relevant information than simple feature vectors.
Given that BL1 and BL2 do not include the step of converting vectors to matrices, the results of these approaches do not change with parameter L. Table <ref> shows the best results across all values of parameter L. For this reason, Figure <ref> shows how the F1-Score changes depending on L, and the results suggest that the models fed with feature correlation matrices tend to outperform those fed with feature vectors, thus confirming that the results of Table <ref> are not the result of a particular choice of L.
In addition, the use of the feature correlation matrices significantly reduces the amount of time required to train the LSTMs. When using the same computing infrastructure (Google CoLab Tesla T4 GPU), the training time for the LSTMs goes from 4.5 to 2 minutes, an improvement that is statistically significant according to a two-tailed t-test (p < 0.001). This is mainly because the LSTMs need fewer cells in the case of correlation matrix sequences. In the case of SVMs, no significant difference was observed in terms of training time. A further advantage of correlation matrix sequences is that, being shorter, they do not need to be segmented into subsequences to be fed to the LSTMs. Therefore, it is possible to avoid the majority vote.
One possible explanation of the results above is in Figure <ref>. The chart shows the average Spearman correlation coefficients between consecutive correlation matrices I_k and I_k+1 (the elements below the principal diagonal). A two-tailed t-test shows that, for all values of L, the average correlation between consecutive correlation matrices is higher, to a statistically significant extent, in the case of control participants (p<0.05 with FDR correction for all values of L). In particular, the correlation is around 0.75 for control participants and around 0.71 for depressed ones, for all values of L. Such an observation suggests that the relationship between features extracted at different points in time tends to be less consistent in the case of depressed people and this is probably a depression marker that helps the models to perform better.
§ CONCLUSIONS
The key result of this paper is that feature correlation matrices lead to better depression detection results than feature vectors, at least in the case of linear kernel SVMs and LSTMs, the two models used in the experiments. Furthermore, the proposed end-to-end approach significantly reduces the training time. To the best of our knowledge, this is the first work that compares feature vectors and correlation matrices in terms of the performance they lead to. Furthermore, this is the first work proposing an explanation of the observed results in terms of a possible marker (the different correlation between consecutive matrices).
The main limitation of the experiments is the use of a linear layer in Approach 2. Its aim is to keep the same input dimensionality as BL2, while still preserving the correlation between the features. This costs extra parameters that make it less clear, in the comparison between BL2 and Approach 2, whether the performance improvement actually results from the correlation matrices. On the other hand, in the case of the SVMs, the only change between BL1 and Approach 1 is the use of the matrices and the improvement is statistically significant. This seems to confirm that the matrices actually help to improve depression detection.
Table <ref> shows results obtained in other works using the data of the Androids Corpus. A fully rigorous comparison with previous approaches is not possible because the experimental protocol is not always the same (see below for more details), but the performances presented in this work are the highest in terms of Accuracy, Recall and F1 Scores, thus confirming that correlation matrices are of help. In two cases <cit.>, the data is the same (read speech), but not all the speakers analyzed in this work were involved. In <cit.>, the speakers are the same, but the data is different (spontaneous speech). Finally, the results in <cit.> were obtained over 59 speakers only and using not only paralanguage, but also language. The only rigorous comparison is with the results in <cit.> (same data, speakers and experimental protocol).
This work focused on read speech and, therefore, one possible direction for future work is the application of correlation matrices to spontaneous speech data. The aim of this work is to examine the effectiveness of correlation matrices by comparing them with feature vectors, but it never involved the combination of both. Correspondingly, another possible future direction is to use both features and their correlation matrices to detect depressed speakers.
§.§.§ Acknowledgements
The research leading to these results has received funding from the project ANDROIDS funded by the program V:ALERE 2019 Università della Campania “Luigi Vanvitelli”, D.R. 906 del 4/10/2019, prot. n. 157264,17/10/2019. The work of Alessandro Vinciarelli was supported by UKRI and EPSRC through grants EP/S02266X/1 and EP/N035305/1, respectively.
|
http://arxiv.org/abs/2307.00784v1
|
20230703070300
|
Element similarity in high-dimensional materials representations
|
[
"Anthony Onwuli",
"Ashish V. Hegde",
"Kevin Nguyen",
"Keith T. Butler",
"Aron Walsh"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
Department of Materials, Imperial College London, London SW7 2AZ, UK
Department of Materials, Imperial College London, London SW7 2AZ, UK
Department of Materials, Imperial College London, London SW7 2AZ, UK
k.butler@qmul.ac.uk
School of Engineering and Materials Science, Queen Mary University of London, London E1 4NS, UK
a.walsh@imperial.ac.uk
Department of Materials, Imperial College London, London SW7 2AZ, UK
Department of Physics, Ewha Womans University, Seoul 03760, Korea
The traditional display of elements in the periodic table is convenient for the study of chemistry and physics. However, the atomic number alone is insufficient for training statistical machine learning models to describe and extract composition-structure-property relationships. Here, we assess the similarity and correlations contained within high-dimensional local and distributed representations of the chemical elements, as implemented in an open-source Python package ElementEmbeddings. These include element vectors of up to 200 dimensions derived from known physical properties, crystal structure analysis, natural language processing, and deep learning models. A range of distance measures are compared and a clustering of elements into familiar groups is found using dimensionality reduction techniques. The cosine similarity is used to assess the utility of these metrics for crystal structure prediction, showing that they can outperform the traditional radius ratio rules for the structural classification of AB binary solids.
Element similarity in high-dimensional materials representations
Aron Walsh
August 1, 2023
================================================================
§ INTRODUCTION
The periodic table offers an effective description of the elements in order of increasing atomic number.
Its true power comes from the latent information that it contains.
Chemists are educated to recall periodic trends in electronic configuration, atomic radius, electronegativity, accessible oxidation states, and related characteristics.
This understanding gives the ability to rapidly assess, with bias, whether a particular compound will be stable or infer what properties a molecule or material may possess without detailed computations.<cit.>
Significant advances have been made in the statistical description of chemical systems with the application of supervised, unsupervised and generative machine learning (ML) techniques.<cit.>
A critical factor in the performance of such ML models for chemical systems is the representation of the constituent elements.
The atomic number of an element can be augmented or replaced by a vector that may be built directly from standard data tables, trained from chemical datasets using a machine learning model, or even generated from random numbers.
Such representations can be categorised as local (vector components with specific meaning) or distributed (vector components learned from training data).
These have been used to build powerful ML models for property prediction based on composition alone.<cit.>
Perhaps the simplest local representation is one-hot encoding where a binary n-dimensional vector v is used to categorise the atomic number of the element, e.g. H can be represented as
[ 1 0 0 0... ]
and He as
[ 0 1 0 0... ].
A single component is `hot' for each element, thus providing an orthogonal and sparse description.
A selection of other common representations from the literature is given in Table <ref>.
In this study, we are interested in the latent chemical information that can be distilled from such high-dimensional element representations.
We consider the fundamental concept of element similarity, which can be defined here as the distance or correlation between elemental vectors.
We explore various metrics and then apply them to data-driven structure classification for the case of binary solids. The underlying tools have been combined into an open-source and modular Python package ElementEmbeddings to support future investigations.
§ RESULTS AND DISCUSSION
§.§ Element representations
We consider four vector representations of the chemical elements in the main text with additional analysis and figures provided as Electronic Supplementary Information (ESI).
The aim here is not to be exhaustive but to cover a set of distinct approaches that have been developed for chemical models.
The analysis is performed on elements 1 (H) – 83 (Bi) as higher atomic number elements are not covered in all representation schemes.
The Magpie<cit.> representation is a 22-dimensional vector.
It is a local representation built from elemental properties
including atomic number, effective radii, and the row of the periodic table.
The Mat2vec<cit.> representation is a 200-dimensional vector distributed representation built from unsupervised word embeddings<cit.> of over 3 million abstracts of publications between 1922 and 2018.
In contrast, the atomic weights from a crystal graph convolutional neural network trained to predict the formation energies of crystalline materials are used to generate the 16 dimensional
MEGnet<cit.> representation.
The Random_200 representation is simply a 200-dimensional vector generated randomly for each element. Each vector component is generated from the standard normal distribution, 𝒩(0,1).
The actual vectors were collected from various sources: roost<cit.> for MEGnet16;
cbfv<cit.> for Mat2vec;
eimd<cit.> for a scaled version of the Magpie embeddings;
numpy<cit.> was used to generate the Random_200 vectors.
§.§ Similarity measures
The distance between two vectors depends on the choice of measure in n dimensional space. We assess the pairwise distances between elements representations A and B.
The Minkowski distance is a metric in the normed vector space, which is a generalisation of the common distance metrics Euclidean, Manhattan and Chebyshev:
d(A,B) =
(∑_i=1^n |A_i - B_i|^p)^1/p
Those three distance metrics can be derived from the Minkowski distance by appropriately choosing the exponent p.
For p=2, we obtain the Euclidean distance which is the length of a line segment connecting A and B:
d_E(A,B) = √((A_1 - B_1)^2 + ⋯ + (A_n - B_n)^2)
For p=1, the Manhattan distance is obtained which can be defined from a sum of the absolute differences in each dimension:
d_M(A,B) =
∑_i=1^n |A_i - B_i|
In contrast, the Chebyshev distance is obtained from the limiting case of p →∞ and takes account of the greatest one-dimensional separation across the n-dimensional space:
d_C(A,B) =
max_i (|A_i - B_i|)
Taking the example of the separation between the elements Li and K in the Magpie representation, d_E = 4.09, d_M = 7.87 and d_C = 3.39, which shows the typical variation in absolute values.
A larger difference between Li and Bi, expected due to their placement in the periodic table, is found with d_E = 9.85, d_M = 37.74 and d_C = 3.55.
For completeness, the Wasserstein metric (earth mover's distance), which has been adapted for materials problems,<cit.> is also included as a function in ElementEmbeddings and shown in Figure S5.
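All of the measures above are available off the shelf; a minimal sketch with scipy and numpy is given below (the two vectors are random stand-ins for element embedding vectors, not values from the datasets used here).

import numpy as np
from scipy.spatial import distance
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
a, b = rng.normal(size=22), rng.normal(size=22)   # stand-ins for two element vectors

d_euclidean = distance.euclidean(a, b)            # Minkowski with p = 2
d_manhattan = distance.cityblock(a, b)            # Minkowski with p = 1
d_chebyshev = distance.chebyshev(a, b)            # limiting case p -> infinity
cos_similarity = 1.0 - distance.cosine(a, b)      # scipy returns the cosine distance
pearson = np.corrcoef(a, b)[0, 1]                 # Pearson correlation coefficient
emd = wasserstein_distance(a, b)                  # treats the components as 1-D samples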
Element separations are plotted for Euclidean and Manhattan distance in Figures <ref> and <ref>, with other measures shown in the ESI. The elements are ordered in increasing atomic number along the x-axis and decreasing atomic number along the y-axis. This cuts across the groups in the periodic table.
The leading diagonals in the distance plots are zero-valued as they correspond to d(A,A). The lighter blues correspond to elements whose vector representations are close to each other within the chosen metric space.
These elements can be interpreted as similar to each other.
Stripes are seen for the nobel gas elements, such as Kr and Xe, which are very different from the neighbouring halogens and alkali metals.
On a visual basis, the global structure of the heatmaps appears similar for the Euclidean and Manhattan distances, with the main difference being the absolute scale of the distances. Less structure is seen for the Random_200 vectors, as expected for this control representation.
Alternatively, we can
consider the angle between vectors using the cosine similarity based on the dot product:
cos(θ) = A·B/(‖A‖ ‖B‖)
For the case of Li and K, cos(θ) = 0.959 for Magpie and 0.289 for Mat2vec.
These change to -0.603 and 0.411, respectively, for the Li and Bi pair.
The pairwise cosine similarities for the four chosen representations are shown in Figure <ref>.
The Pearson correlation coefficient provides a measure of the linear correlation:
ρ_A,B =
cov(A,B)/σ_Aσ_B
where the numerator and denominator refer to the covariance and standard deviation, respectively.
For the same case of Li and K (Bi), ρ_Li,K = 0.956 (-0.533) for Magpie and 0.284 (0.411) for Mat2vec. The Pearson correlation between each element is plotted in Figure <ref>.
The cosine similarity and Pearson correlation are convenient metrics as both cos(θ) and ρ∈ [-1, 1]. The resulting heat maps are visually similar, with comparable structure to the distance metrics.
Histograms of the values are shown in Figures S3 and S4. A skewed distribution is found in each case with the exception of Random_200 which follows a normal distribution by construction.
We note that the cosine similarity is scale-invariant as it only depends on the angles between vectors. Some elemental representation schemes may be sensitive to bias in the training data, such as an abundance of certain metal oxides, that produce outliers in vector components. Therefore, we will use cosine similarity in later sections.
§.§ Periodic trends
Beyond understanding the pairwise connection between elements, we can go deeper to investigate how the elements are distributed across the n dimensions in each representation.
For this, we use dimensionality reduction techniques based on unsupervised machine learning analysis.
These two-dimensional plots enable intuitive interpretations of the elemental representations and aid in determining the connection to standard elemental groupings.
The first method is principal component analysis (PCA).
Here two principal component axes are defined using a linear transformation of the original features that give the greatest variance in the vector components.
The PCA, generated using scikit-learn<cit.>, is shown in Figure <ref> with each data point coloured by the group in the periodic table.
The second approach is t-distributed stochastic neighbour embedding (t-SNE). Unlike PCA, this algorithm is a nonlinear dimensionality reduction technique that can better separate data which is not linearly separable.
Here a probability distribution is generated to represent the similarities between neighbouring points in the original high-dimensional space and a similar distribution with the same number of points is found in a lower-dimensional space.
The t-SNE, also generated using scikit-learn<cit.>, is shown in Figure <ref> with each data point coloured by their group in the periodic table.
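A minimal scikit-learn sketch of both projections is shown below; the random matrix stands in for the element-by-dimension embedding table, and the t-SNE settings are illustrative rather than the exact values used for the figures.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 200))     # stand-in: 83 elements x 200-dimensional vectors

X_pca = PCA(n_components=2).fit_transform(X)                     # linear projection
X_tsne = TSNE(n_components=2, perplexity=20,
              init="pca", random_state=0).fit_transform(X)       # nonlinear projection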
We observe that the element representations, with the exception of the random vectors, possess an insightful structure in the reduced dimensions, Figures <ref> and <ref>. The lanthanoid elements cluster together in the non-random representations independent of the choice of dimension reduction technique. In most of the representations Sr, Ba, Ca tend to group closely together, which reflects their common application in substitutional mixtures, for example in tuning ferroelectric solid-solutions. Interestingly the learned, distributed representations pick up some similarities, which are obvious to a trained chemist, but are not captured in the local Magpie representation, such as the similarity between Bi and Sb. In the Magpie representation, H tends to be considered more of an odd-one-out element, at the periphery of the distributions, whereas in the distributed representations it tends to be clustered with other elements, reflecting how it has been observed in training data from crystals such as HF and LiH.
§.§ Application to crystal structure prediction
We have established that chemical correlations are found within the various elemental representations.
The next question is if they can be useful beyond their original purpose.
We consider a simple classification case in crystal structure prediction, a research topic of widespread importance in computational chemistry.<cit.>
We follow the structure substitution procedure proposed by Hautier et al <cit.> and as implemented in the Python code SMACT.<cit.>
In this approach, the probability that a new chemical composition will adopt a known crystal structure depends on the pairwise substitution probability of the elements p(X,X').
The original weights were obtained from a training set of inorganic materials for the Inorganic Crystal Structure Database.<cit.>
However, we replace these with the cosine similarity between elements for each representation scheme.
The assumption is that the preferred crystal structure will be the one that maximises the similarity between X and X'.
A related procedure has been employed by Wang et al to predict new stable compounds<cit.>, with an extension based on metric learning recently been reported by Kusaba et al.<cit.>
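A schematic sketch of this substitution idea is given below; the element symbols, the library of templates and the scoring (a product of the two pairwise cosine similarities) are illustrative assumptions rather than the exact weighting used in the cited procedures.

import numpy as np

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict_structure(target, templates, emb):
    """target: (A, B) element symbols of a new AB composition.
    templates: list of ((A_known, B_known), structure_label) pairs.
    emb: dict mapping element symbol -> embedding vector (numpy array).
    Returns the structure label of the most similar known compound."""
    A, B = target
    def score(entry):
        (Ak, Bk), _ = entry
        return cos_sim(emb[A], emb[Ak]) * cos_sim(emb[B], emb[Bk])
    return max(templates, key=score)[1]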
The radius ratio rules were developed to rationalise the local coordination and crystal structure preferences of ionic solids.<cit.>
In this model, the coordination number of a cation is determined by the balance between the electrostatic attraction (cation-anion interactions) and repulsion (anion-anion interactions).
A geometric analysis predicts that 8-fold (cubic) coordination should be obtained when the radius ratio ρ = r_cation/r_anion falls in the range 0.732 – 1.000.
A 6-fold coordination environment is predicted for 0.414<ρ<0.732, while 4-fold coordination is predicted for 0.225<ρ<0.414.
For binary AB solids, these regimes are typified by the CsCl (8-fold), rocksalt (6-fold), or zincblende/wurtzite (4-fold) structures.
While it is accepted that there are many cases where these rules fail, especially in the lower radius ratio regime,<cit.> they are still commonly taught in undergraduate programs due to their instructive nature.
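These rules reduce to a simple threshold classification on ρ; a short sketch (with illustrative ionic radii in Å rather than the coordination-averaged Shannon values used in the text) is:

def radius_ratio_structure(r_cation, r_anion):
    """Predict the AB structure type from the cation/anion radius ratio."""
    rho = r_cation / r_anion
    if 0.732 <= rho <= 1.0:
        return "CsCl-type (8-fold)"
    if 0.414 <= rho < 0.732:
        return "rocksalt (6-fold)"
    if 0.225 <= rho < 0.414:
        return "zincblende/wurtzite (4-fold)"
    return "outside the tabulated regimes"

print(radius_ratio_structure(1.67, 1.81))  # e.g. Cs+ / Cl- radii -> "CsCl-type (8-fold)"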
To obtain a set of binary AB solids that adopt one of the four structure types as their ground-state structure,
we queried the Materials Project (version: 2022.10.28)<cit.> using pymatgen<cit.>.
This led to a dataset of 101 unique compounds.
Taking the empirical Shannon radii<cit.> for each ion, averaged over coordination environments, the radius ratio rules are found to correctly predict the ground-state crystal structures in 54% of cases.
This is lower than the 66% reported in a recent study of the predictive power of Pauling's rules, and using Pauling's univalent radii, to assign the coordination preferences of metals in a dataset of around 5000 metal oxides.<cit.>
The performance of the elemental representations ranges from 71 to 78 %. Each of the representations performed better at this task than using the previous data-mined weights of Hautier et al, with the Random_200 representation performing the worst.
The classification between structure types is compared in Figure <ref>.
Curiously, we find that none of the compositions are assigned to the CsCl structure.
Even for CsCl itself, the substitution probability into the ground-state rocksalt structure of RbCl is marginally higher than the substitution into the CsCl-type structure of CsBr.
The CsCl structure type is significantly underrepresented within the dataset, making the assignment a more difficult task with this elemental substitution approach.
We note that such bias is common in materials datasets where some structure types are heavily represented.
While we cannot exclude data leakage due to structure environments being present in the training data for some of the chosen representations, this use case has not been explicitly targeted in the training of the distributed representations.
§ CONCLUSION
In summary, by exploring high-dimensional representations of chemical elements derived from diverse sources, we have demonstrated the potential for enhanced similarity and correlation assessments. These descriptions can complement and even outperform traditional measures, as shown in the case of crystal structure prediction and classification for binary solids. Effective chemical representations can enhance our understanding and prediction of material properties and we hope that the associated Python toolkit provided will support these developments.
Data availability statement:
A repository containing the element embeddings and associated analysis code has been made available on Github (<https://github.com/WMD-group/ElementEmbeddings>) with a snapshot on Zenodo (DOI: 10.5281/zenodo.8101633).
The package is readily extendable to other elemental and material representations and similarity measures.
§ AUTHOR CONTRIBUTIONS
The author contributions have been defined following the CRediT system.
Conceptualisation: A.O., A.W. Investigation and Methodology: A.O., A.V.H., K.N. Software: A.O. Data curation: A.O. Supervision: A.O., K.T.B., A.W. Writing - original draft: A.O., A.W. Writing - review and editing: all authors. Resources and funding acquisition: A.W.
A.O. thanks EPSRC for a PhD studentship (EP/T51780X/1).
We are grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/P020194/1 and EP/T022213/1).
|
http://arxiv.org/abs/2307.00428v2
|
20230701211927
|
Dynamics of cascades in spatial interdependent networks
|
[
"Bnaya Gross",
"Ivan Bonamassa",
"Shlomo Havlin"
] |
physics.soc-ph
|
[
"physics.soc-ph"
] |
Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, Israel.
bnaya.gross@gmail.com
Department of Network and Data Science, CEU, Quellenstrasse 51, 1100 Vienna, Austria
ivan.bms.2011@gmail.com
Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, Israel.
The dynamics of cascading failures in spatial interdependent networks significantly depend on the interaction range of dependency couplings between layers. In particular, for increasing range of dependency couplings, different types of phase transition accompanied by various cascade kinetics can be observed, including mixed-order transition characterized by critical branching phenomena, first-order transition with nucleation cascades, and continuous second-order transition with weak cascades. We also describe the dynamics of cascades at the mutual mixed-order resistive transition in interdependent superconductors and show its similarity to that of percolation of interdependent abstract networks. Finally, we lay out our perspectives for the experimental observation of these phenomena, their phase diagrams and the underlying kinetics, in the context of physical interdependent networks. Our studies of interdependent networks shed light on the possible mechanisms of three known types of phase transitions, second order, first order, and mixed order, as well as predicting a novel fourth type where a microscopic intervention yields a macroscopic phase transition.
In honor of Prof. Juergen Kurths’ 70th birthday
Dynamics of cascades in spatial interdependent networks
Shlomo Havlin
August 1, 2023
=======================================================
The theory of interdependent networks has been developed to describe dependency relations between infrastructures and to understand their resilience, the propagation of cascading failures, and the conditions leading to the abrupt collapse of such systems. Interdependent networks are characterized by self-amplifying cascading processes fueled by the positive feedback induced by dependency couplings, with critical dynamics that generally depend on the network topology. The theory has been motivated by improving the understanding of interdependent infrastructures such as power grids and their communication systems. However, the theory could not be tested in real-world systems since infrastructures cannot be controlled. The recent experimental realization of interdependent networks as thermally coupled disordered superconductors, hereafter called physical interdependent networks (PINs) for brevity, has allowed for the first time the manifestation under a controlled environment of self-amplifying cascade dynamics analogous to those observed in interdependent percolation on abstract structures, raising new perspectives in the study of coupled macroscopic systems. Here we lay out the analogies between the various types of cascade dynamics reported in both abstract and physical interdependent networks and provide our vision for future studies.
§ INTRODUCTION
A common feature of biological <cit.>, technological <cit.>, ecological <cit.>, and social <cit.> systems is the ability to represent many of them as networks. The ability to abstract a complex system by nodes and edges representing their interactions, without losing its important features is one of the significant advantages of the complex networks paradigm and the reason for its interdisciplinary applications. About a decade ago, researchers realized that networks in various areas are not isolated but rather interact and depend on each other and that a theory for such system of systems was missing. This understanding has led to the development of the paradigm of interdependent networks <cit.> followed by a large variety of models like multiplex networks <cit.>, network of networks <cit.> and unifying frameworks for the structure and function of multilayer networks <cit.>.
Interdependent networks, in particular, have the distinctive feature of modeling systems endowed with two types of couplings: connectivity links within layers and dependency links between them, as illustrated in Fig. <ref>. The role of each type of links is different: while connectivity links are used to describe the structural connectivity of the network for its specific function, dependency links are used to describe functional dependence between components in different networks so that, e.g. failures can propagate between them <cit.>. As a result, the interplay of connectivity and dependency links offers a simple mechanism describing the positive feedback triggering the catastrophic phenomena reported in power outages <cit.> or cascading tipping points in critical infrastructures <cit.> and ecosystems <cit.>. These cascades are self-amplifying processes <cit.> initiated by microscopic perturbations that, when close to a critical point, can lead to global shifts of the system's state. Works focusing on interdependent percolation in spatially embedded networks <cit.> have revealed the vulnerability of these structures to external microscopic localized failures, disclosing a variety of kinetic regimes. In this paper, we review some key properties of these cascading processes in the presence of constraints on the range of dependency links. We then highlight the novel phenomena emerging when studying cascading failures in physical interdependent networks, offering future perspectives of experimental validation. The theory of percolation of interdependent networks sheds light on the mechanisms of three types of known phase transitions, first order, mixed order, and second order. While the second order transition occurs when both interactions (connectivity and dependency couplings) are short range, mixed order transition occurs when one or both interactions are long-range of the order of the system size <cit.>. The first-order abrupt transition occurs due to random nucleation when one coupling is short range and the other is of length shorter than the system size <cit.>. Surprisingly, the theory of percolation phase transition of interdependent networks predicts a novel phase regime of macroscopic phase transition that occurs due to microscopic intervention <cit.> (see also Fig.<ref>c below).
§ VULNERABILITY OF INTERDEPENDENT NETWORKS
A practical approach to characterize the vulnerability and the propagation of failures in interdependent structures is percolation theory <cit.>. Let us start by briefly describing the basic case of percolation in a single isolated network. In the percolation process, a fraction of 1-p of nodes are randomly removed from the network and the relative size of the largest (giant) connected component (GCC), P_∞, is measured. The GCC describes the connectivity of the network and its existence is regarded as a meaningful proxy for the functionality of the network. A percolation transition is commonly observed below a critical threshold, p_c, where the network itself breaks apart into small clusters and the relative size of its giant connected component, P_∞, vanishes. For a classical percolation process, cascades are typically absent and the transition is usually continuous, as depicted in Fig. <ref>a.
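For concreteness, a minimal percolation sketch on a single lattice (an illustration using networkx, not code from the original work) is:

import networkx as nx
import numpy as np

def giant_component_fraction(G, p, rng):
    """Keep each node independently with probability p and return the relative
    size of the largest connected component, P_infinity."""
    kept = [n for n in G if rng.random() < p]
    H = G.subgraph(kept)
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

rng = np.random.default_rng(0)
lattice = nx.grid_2d_graph(100, 100)
print(giant_component_fraction(lattice, 0.65, rng))   # above p_c ~ 0.593 for a 2D lattice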
In marked contrast to percolation of isolated networks, interdependent percolation on coupled networks exhibits different and richer phenomena. In this framework, one usually starts from the random removal of a fraction 1-p of nodes from one of the networks, after which its remaining GCC is measured. Notice that, since the GCC is the functional part of the network, small clusters concurrently fail. At this stage, the dependency links transmit the failures of these nodes and of small disconnected clusters to the other network(s). In turn, these failures disconnect some clusters from the GCC of the other network, propagating new failures through the dependency links back to the first network. As this process iterates back and forth, cascades of failures propagate between the layers until either the entire system is dismantled or a stable mutual giant connected component (MGCC)—a subset of the giant connected components of both layers composed of the functional nodes in the GCC of both networks—remains. When the external damage is sufficiently large, these cascades result in abrupt mixed or first-order percolation transitions, depending on the range of interactions, as displayed in Fig. <ref>a.
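The back-and-forth cascade can be sketched as follows (an illustrative implementation assuming a one-to-one dependency mapping between the layers; the helper names are ours, not from the original study):

import networkx as nx

def mutual_gcc(A, B, dep_AB, dep_BA, removed_A):
    """Iterate the interdependent cascade until a stable mutual giant connected
    component (MGCC) remains; dep_AB maps nodes of A to their partners in B and
    dep_BA is the inverse mapping."""
    alive_A = set(A) - set(removed_A)
    alive_B = set(B)
    while True:
        # functional nodes of A: the giant component of the surviving subgraph
        alive_A = max(nx.connected_components(A.subgraph(alive_A)), key=len, default=set())
        # B nodes survive only if their partner in A is functional
        alive_B &= {dep_AB[n] for n in alive_A}
        alive_B = max(nx.connected_components(B.subgraph(alive_B)), key=len, default=set())
        # A nodes survive only if their partner in B is functional
        new_alive_A = alive_A & {dep_BA[n] for n in alive_B}
        if new_alive_A == alive_A:
            return alive_A
        alive_A = new_alive_A

With identical partner nodes on two lattice layers (the r = 0 case discussed below), this reduces to ordinary single-network percolation.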
The surprising feature of the change in the transition's order relies on the underlying kinetics of failures generated by the dependency couplings. In fact, it was shown that the dynamics of cascades is characterized by different critical features which strongly depend on the range of interactions <cit.>. In what follows we will focus on the effects that a limited range, r, of the dependency couplings has on the kinetics of cascading failures in the simple model of two interdependent lattices depicted in Fig. <ref>.
§ THE ROLE OF THE DEPENDENCY INTERACTION RANGE
To study how the range of dependency links affects the observed phase transition, a spatial interdependent network model was developed <cit.>. In this model (shown in Fig. <ref>), two 2D square lattices are interdependent on each other and the dependency links are constrained to be below a specific geometric range r. For r=0, percolation of interdependent networks is identical to that of a single network, with p_c=0.593. This is because failures in one network yield identical failures in the second network and there will be no feedback of cascades. In the limiting case of very short-range dependencies—say, of a few lattice units (see Fig. <ref>)—cascades propagate only locally and the percolation transition remains continuous <cit.>. As the dependency range increases, the critical threshold, p_c, also increases without though influencing the character of the transition in the system (see Fig. <ref>b). Wei Li et al. showed <cit.> the existence of a critical value r_c, whose value is close to the value of the correlation length of a single system, above which avalanches propagate in a nucleating fashion. In this case, above r_c, the transition occurs when a small droplet of damage is spontaneously created at the critical threshold, p_c, and the dependency links amplify it by spreading it radially until the entire system collapses abruptly. In this case, in sharp contrast to the second-order phase transition, no critical scaling is observed in the relative size of the giant component.
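One simple way to generate one-to-one dependency links whose length is bounded by r is to randomly permute partners within non-overlapping r×r blocks of the lattice; the sketch below is an illustrative construction in the spirit of the r-model rather than the exact procedure used in the original works:

import numpy as np

def block_dependency(L, r, rng):
    """Assign each site (x, y) of lattice A a dependency partner in lattice B,
    chosen by a random permutation inside its r x r block, so that the
    dependency length never exceeds r."""
    dep = {}
    for bx in range(0, L, r):
        for by in range(0, L, r):
            block = [(x, y) for x in range(bx, min(bx + r, L))
                             for y in range(by, min(by + r, L))]
            for node, j in zip(block, rng.permutation(len(block))):
                dep[node] = block[j]
    return dep

dep = block_dependency(100, 5, np.random.default_rng(0))   # partners at most 4 sites away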
As r further increases above r_c, the critical threshold, p_c, decreases, eventually reaching the asymptotic regime of p_c for r∼ L, where L is the linear size of the lattice. In this case, dependency links become long-range, critical droplets become more and more ramified <cit.> and the transition crosses over from nucleation-dominated to mixed-order, exhibiting scaling exponents near the critical threshold p_c and fractal fluctuation phenomena <cit.>. In this limit of long-range dependencies, cascades are typically characterized by a critical branching process with branching ratio η∼1, a microscopic property of the kinetics which reflects itself in a long-lived metastable plateau stage observed in the evolution of the MGCC <cit.>.
§ LOCALIZED ATTACK
Fig. <ref>a exhibits three types of phase transitions that appear in interdependent networks under random failures. The second order transition occurs when both interactions (connectivity and dependency couplings) are short-range, mixed order transition occurs when one or both are long-range of the order of the system size <cit.>. The first order transition occurs due to random nucleation when one coupling is short range and the other is of length larger than r_c but shorter than the system linear size <cit.>. The theory of interdependent networks also predicts a novel fourth macroscopic phase transition which is triggered via a microscopic intervention. This fourth type of structural transition also depends on the range of the interdependent interactions. This transition can be regarded as a nucleation-induced transition since it results from the spontaneous propagation of a microscopic droplet of removed nodes whose size (Fig. <ref>c), remarkably, encompasses only a vanishing fraction of the system size <cit.>. This form of percolation process was presented in the literature under the term of “localized attack” since it is typically initiated by removing nodes within a circle of radius r_h anywhere in the system. In simulations, for simplicity and without loss of generality, localized attacks are performed in the center of one of the coupled networks. At a given value of p above the spontaneous nucleation critical line, when the network is connected, a critical radius size r_h^c exists where for a localized attack of r_h > r_h^c the damaged hole will propagate and destroy the system while for r_h < r_h^c it will remain local (Fig. <ref>c). It is important to note that r_h^c does not depend on the system size and therefore can be regarded as a microscopic intervention that yields a macroscopic phase transition <cit.>. The regime in which a microscopic intervention yields a macroscopic phase transition (marked in yellow) is called the metastable regime. This is because the system is not really fully stable since a microscopic intervention, anywhere in the system, yields the collapse of the system. This process makes it possible to probe the stability of the MGCC of interdependent lattices, unveiling an upper bound in the phase diagram of the model (see the yellow area, Fig. <ref>c) where the coupled networks are structurally metastable. Notice that a critical exponent describing the scaling of the critical radius of the droplet with the average degree of the underlying networks has been reported in the metastable regime <cit.>.
§ CASCADE KINETICS IN INTERDEPENDENT SUPERCONDUCTING NETWORKS
While interdependent percolation on coupled networks has helped to understand some of the key mechanisms underlying cascading failures in real-world systems, the ability to test and further develop its predictions in laboratory-controlled experiments has been missing. To fill this fundamental gap, we have recently conducted an experiment performed on thermally-coupled disordered superconductors <cit.>, where heat dissipation physically realizes the dependency coupling. In this experiment, two superconducting networks (illustrated in Fig. <ref>a) are placed on top of each other with an electrically isolated material in between which has good thermal conductivity. When the networks are measured separately, each layer experiences a continuous superconductor-normal (SN) transition, as shown in Fig. <ref>b. However, once the layers are coupled, thermal interactions set in between the layers via dissipating hotspots which trigger electro-thermal runaway effects that cause the layers to lock in their critical temperature, eventually leading to mutually abrupt superconducting-normal phase transitions.
In order to characterize this phenomenon and its connection with interdependent percolation, we have developed a model of thermally coupled 2D resistively-shunted Josephson junctions (RSJJs), where local dissipation is modeled via a local, Joule heating effect (see illustration in Fig. <ref>a). In particular, we have modeled the state of a given lattice bond, (i,j), via a Josephson I-V characteristics featuring one of three possible states: superconductor (SC), intermediate (I) and normal (N). These states are defined by the junction’s critical current I_ij^c and its normal-state resistance R_ij^n, whose values depend on the local temperature, T_ij. We describe the latter via a local de Gennes relation <cit.>
I_ij^c (T_ij) = I_ij^c(0) (1 - T_ij/T_ij^c)^2 ,
where I_ij^c(0) is the zero-temperature critical current of the junction and T_ij^c is its activation temperature, whose values are extrapolated from the experimental data. To measure the global resistances of the networks as a function of temperature and of the bias current, we solved numerically the Kirchhoff equations 𝐆·𝐖 = 𝐈_b for each layer, where 𝐆 is the conductance matrix, 𝐖 is the potential vector and 𝐈_b is the current vector. To model the thermal coupling between the two superconducting networks, we have calculated, at each iteration in the numerical solution of the Kirchhoff equations, the power dissipated by Joule heating of single junctions, i.e. P_ij,t = R_ij I_ij,t^2, where I_ij,t is the current passing through the junction (i,j) at the t-th numerical iteration. An effective local temperature can then be obtained by thermal circuit arguments so as to take into account the mutual overheating effect between the networks. In particular, given the much larger thermal conductance between the layers than within layers, one can write the local expression <cit.>
T_ij,t^μ = T + (τ_p/τ_e) γ^-1 P_ij,t-1^μ',
where γ [W K^-1] is the thermal conductance of the coupling medium and μ'≠μ, with μ,μ'=A,B. In Eq. (<ref>), the ratio τ_p/τ_e between the two relevant time scales (τ_p for phonons and τ_e for electrons) characterizes the heat rate transferred through the coupling medium relative to the one emitted by Joule dissipation, and its value generally depends on the geometry of the sample as well as on the physical properties of the superconducting materials.
Given the local overheating effect induced by Eq. (<ref>), we solved iteratively the coupled Kirchhoff equations characterizing the thermally coupled RSJJs. At zero temperature all bonds are superconductors and no dissipation is present. As the temperature increases, the critical current of bonds decreases according to Eq. (<ref>) and some of them switch their state from superconducting (SC) to dissipating (IN or N). At sufficiently large currents, these bonds overheat the other layer, increasing the “vulnerability” of the latter ones to switch as well to the normal state. At sufficiently large currents, a critical temperature T_c of the heat bath is eventually reached, at which the local overheating effect between the networks couples with the electrical runaway within layers, causing local perturbations to be propagated at large scales. When this electrothermal feedback process is ignited, more and more bonds switch to the normal state and a mutually abrupt resistive transition is observed in both layers (see Fig. <ref>b).
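A minimal sketch of the local update underlying this feedback loop, based on the de Gennes relation and the local temperature expression above, is given below; the parameter values are placeholders, and the full calculation additionally solves the Kirchhoff equations for the junction currents:

import numpy as np

def ic_de_gennes(ic0, T, Tc):
    # Local suppression of the junction critical current with temperature,
    # following the de Gennes relation quoted above (clipped to zero above Tc).
    return ic0 * np.clip(1.0 - T / Tc, 0.0, None) ** 2

def local_temperature(T_bath, P_other, gamma, tau_ratio):
    # Effective junction temperature including the Joule power P_other dissipated
    # by the corresponding junction in the other layer; tau_ratio = tau_p / tau_e.
    return T_bath + tau_ratio * P_other / gamma

# One illustrative update step for a pair of thermally coupled junctions.
T_bath, gamma, tau_ratio = 4.0, 1e-6, 0.1          # placeholder values
R_n, I_bias, ic0, Tc = 10.0, 2e-4, 5e-4, 9.0       # placeholder values
P_B = R_n * I_bias**2                              # Joule heating in layer B
T_A = local_temperature(T_bath, P_B, gamma, tau_ratio)
print(ic_de_gennes(ic0, T_A, Tc))                  # reduced critical current in layer A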
Interestingly, the critical kinetics underlying the two abrupt (i.e. mutual SC-to-N and N-to-SC phase) transitions are accompanied by different relaxation processes. At the mutual SC-to-N transition, the overheating cascade process physically realizes the kinetics of cascading failures of interdependent percolation <cit.>, further manifested by the classical long-lived plateau stage whose lifetime τ∝(T-T_c,>)^-ζ with ζ≃0.65 diverges at T_c,>. In the cooling direction, on the other hand, while the evolution from the mutual N-phase to the mutual SC-phase exhibits an analogous plateau regime (Fig. <ref>a), its characteristic lifetime diverges at the N-to-SC threshold, T_c,<, as τ∝(T_c,<-T)^-ζ, now with exponent ζ≃0.5 (for details, see Ref. <cit.>; for percolation of interdependent networks, ζ=0.5, see Ref. <cit.>).
Microscopically, the different critical exponents of the plateau lifetimes can be adopted as proxies for the underlying cascading kinetics<cit.>, indicating that the SC-nuclei grow faster than N-nuclei.
During the heating plateau, this can be explained in terms of the pinning of the interfaces between SC-clusters and N-nuclei which halts the branching of the latter, while the smaller exponent of the cooling plateau hints at the sudden merging of thermally-suppressed SC-clusters. The critical nature of these dynamics is reflected in the evolution of the cascading trees generated by state-switching junctions (Fig. <ref>). At the transition temperatures, in fact, the avalanche size S(t), i.e. the number of junctions cascading to the SC/N-state at time t, develops a long-lived plateau (Fig. <ref>a) during which its relative growth is a vanishing fraction of the system's size and a critical branching factor η_c ∼ 1 is typically observed (Fig. <ref>b,c).
§ FUTURE PERSPECTIVES
Interdependent networks <cit.> feature rich and unique dynamics of cascades that govern their macroscopic phase transitions, resulting in dramatic changes in the type of transitions from mixed-order to nucleation-dominated or continuous. The spatial range of dependency/connectivity couplings, in particular, plays a key role in this respect, as vividly embodied by the so-called interdependent r-model <cit.> and the so-called multiplex ζ-model <cit.> discussed above. The recent realization of PINs as interdependent superconducting networks <cit.> offers the opportunity of controlling and validating in experiments a large body of theoretical and numerical results gathered in the context of interdependent spatial networks <cit.>. Furthermore, the appearance of four different types of phase transitions in a single model improves our understanding of the mechanisms of phase transitions in general. Nonetheless, the expected fourth type of induced nucleation transition is novel and has yet to be observed in PINs. In our vision, a phase diagram of localized heating (Fig. <ref>) should be studied both theoretically and experimentally, completing the picture of phase transitions in PINs.
§ ACKNOWLEDGMENTS
We thank the Israel Science Foundation, the Binational Israel-China Science Foundation Grant No. 3132/19, NSF-BSF Grant No. 2019740, the EU H2020 project RISE (Project No. 821115), the PAZY Foundation, and the EU H2020 DIT4TRAM. B.G. acknowledges the support of the Mordecai and Monique Katz Graduate Fellowship Program.
§ AUTHORS DECLARATION
Conflict of Interest
The authors have no conflicts to disclose.
Data Availability
The data that support the findings of this study are available
from the corresponding author upon reasonable request.
Author Contributions
Bnaya Gross: Conceptualization (equal); Investigation; Validation (equal); Visualization (equal); Writing – original draft (equal). Ivan Bonamassa: Conceptualization (equal); Investigation; Visualization (equal); Validation (equal); Writing – original draft (equal). Shlomo Havlin: Conceptualization (equal); Supervision (equal); Project administration (equal); Validation (equal); Writing – review & editing (equal).
|
http://arxiv.org/abs/2307.02303v1
|
20230705140127
|
Microsecond-duration bursts from FRB 20121102A
|
[
"M. P. Snelders",
"K. Nimmo",
"J. W. T. Hessels",
"Z. Bensellam",
"L. P. Zwaan",
"P. Chawla",
"O. S. Ould-Boukattine",
"F. Kirsten",
"J. T. Faber",
"V. Gajjar"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
Microsecond-duration bursts from FRB 20121102A
M. P. Snelders^*
ASTRON, Netherlands Institute for Radio Astronomy
Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
Anton Pannekoek Institute for Astronomy, University of Amsterdam
Science Park 904, 1098 XH, Amsterdam, The Netherlands
K. Nimmo
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology
77 Massachusetts Ave, Cambridge, MA 02139, U.S.A.
J. W. T. Hessels
Anton Pannekoek Institute for Astronomy, University of Amsterdam
Science Park 904, 1098 XH, Amsterdam, The Netherlands
ASTRON, Netherlands Institute for Radio Astronomy
Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
Z. Bensellam
Anton Pannekoek Institute for Astronomy, University of Amsterdam
Science Park 904, 1098 XH, Amsterdam, The Netherlands
L. P. Zwaan
Anton Pannekoek Institute for Astronomy, University of Amsterdam
Science Park 904, 1098 XH, Amsterdam, The Netherlands
P. Chawla
Anton Pannekoek Institute for Astronomy, University of Amsterdam
Science Park 904, 1098 XH, Amsterdam, The Netherlands
O. S. Ould-Boukattine
ASTRON, Netherlands Institute for Radio Astronomy
Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
Anton Pannekoek Institute for Astronomy, University of Amsterdam
Science Park 904, 1098 XH, Amsterdam, The Netherlands
F. Kirsten
Department of Space, Earth and Environment, Chalmers University of Technology
Onsala Space Observatory, 439 92, Onsala, Sweden
J. T. Faber
Cahill Center for Astronomy and Astrophysics, MC 249-17
California Institute of Technology, Pasadena CA 91125, USA
V. Gajjar
Breakthrough Listen, University of California Berkeley
Berkeley, CA 94720, USA
Received March 10, 2023; accepted May 12, 2023
==============================================================
§ ABSTRACT
Fast radio bursts (FRBs) are extragalactic transients with typical durations of milliseconds. FRBs have been shown, however, to fluctuate on a wide range of timescales: some show sub-microsecond sub-bursts while others last up to a few seconds in total. Probing FRBs on a range of timescales is crucial for understanding their emission physics, how to detect them effectively, and how to maximize their utility as astrophysical probes. FRB 20121102A is the first-known repeating FRB source. Here we show that FRB 20121102A is able to produce isolated microsecond-duration bursts whose total durations are more than ten times shorter than all other known FRBs. The polarimetric properties of these micro-bursts resemble those of the longer-lasting bursts, suggesting a common emission mechanism producing FRBs spanning a factor of 1,000 in duration. Furthermore, this work shows that there exists a population of ultra-fast radio bursts that current wide-field FRB searches are missing due to insufficient time-resolution.
§ INTRODUCTION
FRBs are broadly divided into two categories: repeating and apparently one-off FRBs (see <cit.> for recent reviews). Although repeaters and one-offs typically exhibit different observational properties (e.g., repeating FRBs generally show narrower spectral and wider temporal extents than non-repeaters <cit.>), it remains unclear whether this is the result of a different source class, emission physics, propagation effects or beaming geometry<cit.>.
While repeating sources represent only a few percent of all known FRBs<cit.>, they are important for understanding the phenomenon. The repeatability allows for very long baseline interferometry (VLBI) localisations to probe their local environments <cit.>; long-term monitoring to explore how their properties vary with time (e.g., scattering timescale<cit.>, Faraday rotation measure, RM <cit.> and dispersion measure, DM<cit.>); mapping of their sometimes periodic activity rate <cit.>; and also multi-epoch, multi-wavelength searches spanning radio wavelengths <cit.> to high energies <cit.>.
The first-discovered repeating FRB, FRB 20121102A<cit.>, is one of the best-studied sources. Not only did the repeating nature of FRB 20121102A rule out cataclysmic models to explain all FRBs <cit.>, but it was also the first FRB to be precisely localised to a host galaxy <cit.>, confirming the extragalactic distances to FRBs implied by their high DMs <cit.>. FRB 20121102A lives in the outskirts of a star-forming region in a dwarf galaxy at a luminosity distance of approximately 1 Gpc <cit.>. The FRB emitter is associated with a compact persistent radio source (PRS), which shows that it may be in a dense nebula or in the near vicinity of a massive black hole <cit.>. This is further supported by the discovery that FRB 20121102A is in an extreme and dynamic magneto-ionic environment, as shown by its exceptionally high and variable RM<cit.>. The RM is variable on a range of timescales<cit.>, from days to years, and has decreased from about 127×10^3 rad m^-2 to about 31×10^3 rad m^-2 (in the observer frame) over the span of 7 years<cit.>.
While most surveys search for FRBs on timescales of milliseconds, in some cases the raw voltage data are saved, allowing for extremely detailed studies of FRBs probing the emission on much shorter timescales<cit.>. The as-yet non-repeating FRB 20170827A was shown to exhibit a 0.5 ms burst envelope, with a narrow sub-component of duration roughly 30 μs <cit.>. Similarly, FRB 20121102A occasionally shows comparably narrow temporal features within a broader envelope<cit.>. More recently, the repeating sources FRB 20180916B and FRB 20200120E were shown to have temporal sub-structures with durations of microseconds<cit.> down to tens of nanoseconds<cit.>, respectively. Timescales shorter than tens of microseconds remain unexplored for FRB 20121102A, despite micro and sub-microsecond burst structure having been observed in other repeating FRBs <cit.>.
Bursts from FRB 20121102A show a variety of different morphologies, from simple single Gaussian burst profiles, to complex drifting islands of emission, now known to be characteristic of repeating FRB morphology<cit.>. FRB 20121102A bursts also often appear to be narrowband (roughly 20 % fractional bandwidth), and have been seen to emit at frequencies between 600 MHz <cit.> and 8 GHz <cit.>. These 8 GHz detections with the Green Bank Telescope (GBT) <cit.> remain the highest-frequency FRB detections, to date. The 4–8 GHz GBT data presented in <cit.> were searched for bursts at a time-resolution of 350 μs, discovering a total of 21 bursts in the first hour of a 5-hour observing block, with 18 bursts occurring within the first 30 minutes. A re-analysis of this same dataset by <cit.>, retaining the same temporal and spectral resolution but using a machine learning search technique (differing from the standard boxcar matched-filtering technique used in <cit.>), discovered an additional 72 candidate bursts, with half of the bursts occurring within the first 30 minutes.
In this paper, we present a re-analysis of the first 30 minutes of the <cit.> GBT dataset, searching for ultra-fast radio bursts on timescales of microseconds: a parameter space that the previous searches of this dataset, and current FRB searches in general, are insensitive to.
§ OBSERVATIONS AND BURST SEARCH
A 5-hour observation of FRB 20121102A was carried out on 26 August 2017 using the GBT with the Breakthrough Listen (BL) recording system<cit.>. The BL system recorded from 3.9 GHz to 9.3 GHz, fully covering the sensitivity range of the GBT C-band receiver (Methods). Raw voltages of the dual-linear receiver were recorded and stored allowing for offline re-processing, coherent dedispersion, high-time-resolution searches and polarimetric analysis.
Here we have analyzed the voltage data of the first 30 minutes of the observation, totalling more than 32 terabyte (TB) in size (Methods). The voltage data were coherently dedispersed with a DM of 560.5 pc cm^-3 (from <cit.>) using <cit.>, removing the intra-channel smearing but keeping the inter-channel smearing across the band. The resulting total-intensity filterbanks<cit.> have a time-resolution of 341.3̅ ns and 4.5 GHz of bandwidth at a centre frequency of 6126 MHz.
To save a factor of greater than 10 in computational costs, we downsample the filterbanks in time using to create new filterbank data products with a time-resolution of approximately 2, 33 and 524 μs, respectively. For a hypothetical burst with a duration of 341.3̅ ns, this reduces the sensitivity by a factor of √(6) compared to using the full available time resolution offered by the data (Methods). Since bursts from FRB 20121102A often appear narrowband, we choose to do multiple subbanded searches: dividing the total bandwidth into subbands of widths 4.5 GHz, 1.5 GHz (the spectral width of the widest bursts in <cit.>), 750 MHz, 375 MHz, and 187.5 MHz (the spectral width of the bright patches of the bursts in <cit.>) before searching the data for FRBs.
We searched for bursts over a range of DMs centered around 560.5 pc cm^-3 using the <cit.> suite of tools and very small step-sizes in trial DM (Methods). Microsecond-duration bursts require small step-sizes in DM because their recovered S/N decreases rapidly if the assumed DM is incorrect<cit.>. At these high frequencies the data are relatively clean and thus we did not apply any radio frequency interference (RFI) masks to avoid the accidental masking of any bright and/or short-duration bursts. Candidate astrophysical signals were classified using the machine learning classifier <cit.>. Candidates with a -assigned probability p ≥ 0.5 of being astrophysical or temporal duration ≤ 500 μs (regardless of the probability given by ) were manually inspected.
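To give a feeling for the required DM granularity, the residual dispersive smearing across the band caused by a trial-DM error can be estimated from the standard cold-plasma delay; the numbers below are illustrative and are not the exact grid used in the search:

K_DM = 4.1488  # dispersion constant in ms GHz^2 pc^-1 cm^3

def residual_smearing_ms(delta_dm, f_lo_ghz, f_hi_ghz):
    """Extra smearing (ms) across the band caused by a trial-DM error delta_dm."""
    return K_DM * delta_dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Keep the smearing across ~3.9-8.4 GHz below a ~2 microsecond burst width.
target_ms = 2e-3
dm_step = target_ms / (K_DM * (3.9**-2 - 8.4**-2))
print(f"DM step of about {dm_step:.3f} pc cm^-3 limits residual smearing to {target_ms} ms")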
§ RESULTS
We find 49 unique bursts within the first 30 minutes of the observation (Extended Data Table <ref>). All of the bursts presented in <cit.> are re-detected and from the <cit.> sample we re-detected 30/46 of the candidate bursts, missing only low signal-to-noise (S/N) pulses (where the astrophysical nature of these events is in any case unclear). We find 19 bursts that were missed in both previous searches of this dataset. Of the newly found bursts, 8 are very short in duration, with the entire burst envelope lasting no more than about 15 μs (Figures <ref> and <ref>). We refer to these bursts as `ultra-fast radio bursts', or ultra-FRBs, because of their microsecond durations.
Bursts B23, B26 and B32 extend up to about 8.4 GHz, making them the highest-frequency FRBs detected to date. The emission properties above this frequency are unknown because above 8.4 GHz there is a sharp drop in the sensitivity of the receiver.
The distribution of burst durations (as they were found in the search, Figure <ref>) may appear by eye to be bimodal, but accounting for Poissonian uncertainties we find no clear statistical evidence for this. In fact, the burst durations are marginally consistent with being logarithmically uniform from microseconds to milliseconds. If we split the burst sample in two at a duration of 40 μs, we find 8^+12.7_-5.9 and 41^+23.1_-16.6 bursts below and above this threshold, respectively. Here the uncertainties represent the 3σ Poissonian error.
To study the bursts with more precision, we created high-time-resolution, full-polarisation data products for a selection of bursts; these have been coherently dedispersed to a DM of 560.105 pc cm^-3 (Methods). These data products have a time-resolution of 341.3̅ ns and a frequency resolution of 2.9296875 MHz (δνδ t = 1, limited by the uncertainty principle). It would be difficult to accurately achieve a higher time resolution by inverting the signal processing step that formed the frequency channels.
§.§ High-time-resolution
In Figure <ref> we present the total intensity (Stokes I) dynamic spectra and burst profiles for the 8 shortest-duration bursts in our sample. We coherently dedisperse all of the bursts to the best-fit DM for burst B30 (Methods and Extended Data Figure <ref>), DM=560.105 pc cm^-3, because of its high S/N and extremely narrow burst width. Nonetheless, we note that there is unavoidable ambiguity in the exact DM of each burst because their time-frequency morphology is not known a priori.
Bursts B30 (Figure <ref>ab) and B43 (Figure <ref>cd) are the highest-S/N bursts in our ultra-FRB sample. Like other bursts detected in these observations, B30 shows bright patches in frequency (consistent with the expected Galactic scintillation of 8 MHz at 4.7 GHz and 87 MHz at 8.0 GHz<cit.>), and is extended over more than 800 MHz of bandwidth. The leading edge of the burst is extremely steep, going from the noise floor to the peak of the burst in about 1 μs. The entire burst lasts a mere 5 μs, making it the shortest isolated FRB to date.
Burst B43 is comparatively long-duration: about 15 μs. The shape of the burst in time-frequency space remains curved after dedispersing: the remaining curvature does not appear to follow the ν^-2 relation for dispersion and therefore cannot simply be corrected for using a different DM. Compared to B30, it also shows clear bright patches in frequency, but a less steep leading edge. Rather, the profile of the burst (Figure <ref>c) is more Gaussian shaped. Burst B43 is also found in the <cit.> search due to the high S/N of the burst, but since their data were not coherently dedispersed and had a sampling time of 350 μs (much larger than the total burst width), the microsecond duration of this burst was not visible.
Burst B06 (Figures <ref>ef and <ref>ef) shows multiple components, and is the only ultra-FRB that is detected at a central radio frequency ≥7 GHz. The burst is composed of three components with separations of roughly 15 μs and 25 μs. The first component is the brightest and has a duration of only about 4 μs.
The remaining ultra-FRBs are weaker and need to be downsampled in time and/or frequency to be clearly seen in the dynamic spectra. The spectral extent of the bursts varies between about 120 MHz (burst B31) and 800 MHz (bursts B30 and B38). The ultra-FRBs do not obviously favour any particular frequency-range nor do they occupy a particular time-range compared to the other bursts in this dataset (Extended Data Figures <ref> and <ref>).
We calculate the peak flux density, fluence and isotropic-equivalent spectral luminosity of the 8 ultra-FRBs (Methods and Table <ref>). We also show these in the context of the transient phase space diagram (Extended Data Figure <ref>). The bursts all have a relatively high spectral luminosity and low transient duration compared to other bursts from and other localised repeating FRBs. Notably, burst B30 has an inferred brightness temperature that exceeds 10^40 K. Such high brightness temperatures have so far only been observed in `nanoshots' from the Crab pulsar<cit.> and in sub-bursts from FRB 20200120E<cit.>.
With the refined DM of 560.105 pc cm^-3, measured using B30 (Methods and Extended Data Figure <ref>), we carefully examined the three highest-S/N bursts discovered in <cit.> on microsecond timescales (Figure <ref>). No micro-structure was identified in any of these bursts. A more detailed study of the larger sample at high time-resolution will be presented in a future paper.
§.§ Polarimetric properties
After calibrating the full polarisation data (Methods), we measure the RM of our burst sample using a multi-burst joint Stokes QU-fit<cit.> (Figure <ref>). Similar to an analysis of bursts from FRB 20121102A from the 305-m Arecibo telescope by <cit.>, we find that all of the bursts, including the ultra-FRBs, can be fit with a single polarisation position angle (PPA, averaged over the burst duration) and an RM of 93586 ± 4 rad m^-2 (in the observer frame), which is consistent with previous RM measurements of these bursts <cit.>. We de-Faraday the Stokes parameters Q and U using the measured RM and determine the unbiased linear polarisation fraction, L_unbias (Methods).
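A sketch of these two steps (de-Faraday rotation and debiasing of the linear polarisation, following the commonly used Everett & Weisberg prescription) is given below; it illustrates the approach referenced in the Methods rather than the exact pipeline used:

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def defaraday(Q, U, freqs_hz, rm):
    # Remove the lambda^2-dependent Faraday rotation from Stokes Q and U.
    lam2 = (C / np.asarray(freqs_hz)) ** 2
    rotated = (np.asarray(Q) + 1j * np.asarray(U)) * np.exp(-2j * rm * lam2)
    return rotated.real, rotated.imag

def unbiased_linear(Q, U, sigma_I):
    # Debias the measured linear polarisation; below ~1.57 sigma it is set to zero.
    L_meas = np.sqrt(np.asarray(Q) ** 2 + np.asarray(U) ** 2)
    snr = L_meas / sigma_I
    return np.where(snr > 1.57, sigma_I * np.sqrt(snr**2 - 1.0), 0.0)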
We plot the profiles of Stokes I, L_unbias and the circular polarisation (Stokes V) in the bottom panels of Figure <ref>. We find that almost all of the ultra-FRBs are consistent with being 100 % linearly polarised and show no circular polarisation — similar to the millisecond-duration bursts<cit.>. The exception to this is burst B43, which shows 31±10 % circular polarisation and 87±10 % linear polarisation: still consistent with being 100 % polarised overall (Methods).
We compute the time-resolved PPA across the burst profile. Due to the limitations of our calibration method (Methods), the absolute value of the PPA is not meaningful. We therefore, per burst, subtract the weighted mean, using L_unbias as weights, so the PPA values are centred around zero. A constant line is then fit to the PPAs and we measure the reduced-χ^2, χ^2_ν. The PPA probability distribution<cit.> for every time bin is shown in the upper panels of Figure <ref>. We find that at high time-resolutions χ^2_ν > 1, indicating that the PPAs are significantly scattered. This is also observed in a comparable analysis of the three selected bursts from the <cit.> sample (Figure <ref>).
Bursts B30 (the shortest-duration burst in our sample, Figure <ref>ab) and B01 (the longest-duration burst in our sample, Figure <ref>gh) are intentionally placed on top of each other to illustrate that, even though their durations differ by a factor of roughly 600, their polarimetric properties are very similar.
§ DISCUSSION
Galactic neutron stars, including rotation-powered radio pulsars and radio-emitting magnetars, are known to produce at least three categories of coherent radio pulses<cit.>: i. canonical pulsar emission from the magnetic polar cap; ii. microsecond-duration (or shorter) giant pulses from both energetic millisecond and young pulsars (like the Crab pulsar); and iii. magnetar radio bursts. The durations, spectra, rotational phase, etc., of these various emission types strongly suggest that they originate in different parts of the neutron star magnetosphere and likely via different physical processes. Here we have shown that FRB sources can also produce isolated microsecond-duration pulses, whose luminosities are many orders of magnitude brighter than even the giant pulses from the Crab pulsar (Extended Data Figure <ref>). The range of timescales we observe from is consistent with what is seen from neutron stars, and this may indicate that FRBs can also produce multiple burst types. The similarities between FRB microstructure and Crab giant pulses are discussed in more detail in, for example, <cit.>.
Microsecond (and shorter) timescale variations have previously been observed in bursts from FRB 20180916B <cit.> and FRB 20200120E <cit.>. In these cases, the short timescale variations occur within a broader burst envelope. In contrast, here we present the discovery of isolated microsecond-duration bursts (without a broader envelope) from . These ultra-FRBs were missed in previous searches of the same data, where the time resolution was more than 100 times lower <cit.>. As suggested in <cit.>, we have shown here that there indeed exists a population of ultra-FRBs that current FRB search strategies are missing. The computational expense of searching for ultra-FRBs, because of the required high time resolution and coherent dedispersion, has resulted in them being undetected in all other FRB searches to date. Furthermore, scattering is a significant limitation to resolving such timescales when observing at low radio frequencies (less than about 1 GHz).
While FRB 20180916B<cit.>, FRB 20200120E<cit.> and FRB 20121102A<cit.> live in drastically different environments, their burst properties show striking similarities. For example, similarities in the burst morphologies and polarimetric properties, including downward-drifting structure in time and frequency, spectrally narrow emission<cit.>, and now microsecond-duration fluctuations as well<cit.>. In addition to the short timescales, the 3 orders-of-magnitude range of timescales observed in this work resembles the range of timescales observed for FRB 20180916B<cit.> and FRB 20200120E<cit.>. However, it is possible that microsecond-duration bursts are equally, or even more common in non-repeating FRBs<cit.> due to them having narrower widths in general<cit.>. That micro-shots have so far been exclusively detected in repeaters is likely because of the high-time-resolution follow-up observations conducted for these sources. Alternatively, it could also be because more individual pulses have been seen and studied in the case of repeating FRBs. There are close to 500 one-off FRBs in the first CHIME/FRB catalogue<cit.>, but they lack the required time resolution to search for these short-duration sub-bursts. Additional studies using CHIME/FRB baseband detections of these FRBs will help elucidate the prevalence of micro-structures in one-off FRBs.
The polarimetric properties of repeating FRBs are often described by high degrees of linear polarisation, little-to-no circular polarisation, flat PPAs during the burst duration and only small (<30^∘) variations in PPA between bursts detected at the same observing epoch<cit.>. FRB 20180916B and FRB 20121102A have also been seen to depolarise with decreasing frequency<cit.>. In this work, we additionally show that, at very high time-resolution, the PPA during the burst duration varies significantly (Figure <ref>), resembling the high-time-resolution PPA variations seen for FRB 20180916B<cit.> and FRB 20200120E<cit.>.
Recently, <cit.> showed that FRB 20121102A bursts at 1.25 GHz are occasionally circularly polarised (in <1 % of bursts) while being 0 % linearly polarised (as is the case for all bursts at these frequencies). We observe one ultra-FRB in our sample to be significantly circularly polarised, burst B43: this corresponds to a rate of circular polarisation at 4–8 GHz of a few percent, which appears to be roughly consistent with <cit.>, though a larger sample of high-radio-frequency bursts should be considered in a future study to compare the rates. <cit.> suggest that Faraday conversion or the radiation mechanism itself could be responsible for the circular polarisation, while multi-path propagation in either the Milky Way or FRB local environment is disfavoured. Our detection of circular polarisation at much higher radio frequencies supports the idea that the origin is intrinsic to the emission mechanism, as opposed to being a propagation effect; we speculate that the rare occurrence of circular polarisation could be a result of how the emission is beamed towards Earth. Furthermore, circular polarisation has been detected from the repeating FRB, FRB 20220912A, which has been shown to live in a relatively clean local environment, with no changes in DM or RM on month timescales <cit.>. This also suggests that the circular polarisation is intrinsic to the emission mechanism.
The distribution of burst durations (Figure <ref>) is marginally consistent with being log-uniform from microseconds to milliseconds. The selection bias as a function of burst duration is not well known, and could swing either in favour of more ultra-FRBs or more typical-duration FRBs. In our search, the timescales below 30 μs have only been searched with one trial boxcar, while timescales ≥30 μs have effectively been searched at least twice due to different combinations of the sampling time and boxcar matched-filtering widths (Figure <ref>). If all bursts from FRB 20121102A draw their fluences, ℱ, from the same underlying distribution, ultra-FRBs should actually be easier to find since the detection metric is ℱ/√(W), with W the temporal duration of the burst. It is unclear, however, whether the micro-shots follow the same fluence distribution as the wider bursts.
The burst spectra and polarimetric properties (polarisation fractions, RM, and PPA) are consistent between the ultra-FRBs and the wider-bursts, suggesting that they share a common emission mechanism. It is unclear, however, why this mechanism sometimes shuts off after only microseconds whereas at other times it continues for several milliseconds. For FRB 20121102A, the distribution of timescales is roughly uniform from microseconds to milliseconds. Other sources may show different temporal distributions and average burst duration; in any case, FRB searches are currently highly biased against finding ultra-FRBs. FRB 20200120E's bursts are typically 100 μs in duration<cit.>, much shorter than other repeating FRBs<cit.>, further supporting the argument for an undetected FRB population with shorter timescales.
In <cit.> approximately 50 bursts from FRB 20200120E are analysed but only one burst has clear microstructure superimposed on a broader envelope. For that one burst the ratio between the amplitude of the micro-shot and the broader envelope is only about 3 — meaning that it would require fine-tuning of the telescope sensitivity to detect only the micro-shot and not the broader emission as well. For both FRB 20200120E and FRB 20180916B the broader bursts were found in searches with a time resolution of 64 μs<cit.> and 82 μs<cit.>, respectively. The micro-shots in those data were only found once the baseband data were both coherently dedispersed and higher time resolution data products were generated. At 64 μs time resolution the ultra-FRBs from FRB 20121102A would not have been found. This supports the idea that the ultra-FRBs from FRB 20121102A are truly isolated events.
Fluctuations on timescales of tens of nanoseconds have been observed within the envelopes of bursts originating from FRB 20200120E<cit.>. With the current data, we are unable to probe similar timescales in the ultra-FRBs and wider-bursts. The data used in this study has a limiting time-resolution of 341.3̅ ns and for the majority of the ultra-FRBs discovered in this work the S/N is relatively low. The large scatter observed in the PPA at high time-resolution, however, is suggestive of the bursts being made up of narrower shots of emission. The 3 wider bursts from the <cit.> sample that we also study here show no evidence of micro-structure within the burst envelopes (Figure <ref>). This supports the hypothesis that the ultra-FRBs in our sample are truly isolated events as opposed to being fluctuations within a wider, dimmer burst component.
The isolated ultra-FRBs thus also provide a better constraint on the size of the emission region because they are unlikely to be generated by propagation effects that modulate the burst brightness post-emission<cit.>. The extremely short duration of B30 constrains the emission region to be smaller than a few kilometres (ignoring any potential relativistic effects). This favours models in which the bursts are generated near the central engine in a magnetosphere - as opposed to models in which the bursts are generated much further out in a relativistic shock.
The existence of ultra-FRBs could influence what one infers about energy, wait time and burst rate distributions. We therefore encourage future experiments to search the widest possible range of timescales. This should especially be done for repeating FRBs, where the known DM and just a single pointing direction make this computationally easier compared to an untargeted wide-field search.
The limitations of extremely high-time-resolution FRB searches include: knowing the DM a priori to be able to coherently dedisperse the data, or alternatively performing a search using both coherent and incoherent dedispersion steps; scattering from the Milky Way interstellar medium, which causes temporal broadening that increases with decreasing frequency; instrumental bandwidth limiting the sampling rate of the observations; and, simply, the overall computational cost.
The discovery of ultra-FRBs from FRB 20121102A in this work highlights that such transients exist, that they are being missed in current FRB searches, and that there is reason to invest time in overcoming the challenges associated with searches for ultra-FRBs in the future, particularly at giga-Hertz frequencies where scattering from the Galactic interstellar medium is less than a microsecond for most directions on the sky. Opening a wider range of timescales for FRB discovery can both lead to the identification of new FRB source types as well as even more precise probes of the intervening magnetised plasma and gravitational potential<cit.>.
§ METHODS
§.§ The observation
Observations of FRB 20121102A were conducted with the Robert C. Byrd Green Bank Telescope (GBT) using the 4–8 GHz (C-band) receiver on 2017 August 26 during a 6-hour observing block. The first hour was used for calibration procedures: a noise diode scan was used for flux and polarisation calibration and a 5-minute observation of the pulsar PSR B0329+54 was performed to verify the calibration procedure. The last 5 hours of the observing block were split into 30-minute sections and were spent on FRB 20121102A <cit.>. Observations were conducted with the Breakthrough Listen (BL) digital backend <cit.>, which recorded 8-bit raw voltage (baseband) data from 3.9 GHz to 9.3 GHz — fully covering the C-band receiver bandwidth. The analog down-conversion system provided four 1500-MHz passbands configured with central frequencies of 8563.964844, 7251.464844, 5938.964844, and 4626.464844 MHz, respectively. Each dual-polarisation (linear basis) passband was Nyquist-sampled using 8-bit digitizers, polyphase channelized to 512 `coarse' frequency channels, and distributed to a cluster of compute nodes that recorded these data to disk. In total there are 32 compute nodes, 8 for every passband. Every compute node recorded 187.5 MHz of bandwidth over 64 `coarse' channels. The passbands have exactly 187.5 MHz of overlap with adjacent passbands (Extended Data Table <ref>). Due to the polyphase filter, the highest time resolution possible for these data is (1500 MHz / 512)^-1 = (2.9296875 MHz)^-1 = 341.3 nanoseconds, where the 3 represents a repeating digit. The nodes store the data in chunks to keep file sizes manageable. Every chunk consists of exactly 2^26 time samples, corresponding to 2^26× 341.3 ns = 22.91 seconds, and is 17.2 GB in size. Each 30-minute scan thus consists of 78 `full' chunks and one chunk containing the last 13.29 seconds. The chunks are stored in GUPPI (Green Bank Ultimate Pulsar Processing Instrument) raw format (i.e., files). Detailed information of the observation, the BL project, the BL backend and the GUPPI raw format can be found in Refs. <cit.>, <cit.>, <cit.> and <cit.>, respectively.
§.§ Data preparation
The baseband data (.raw files) of the first 30 minutes were downloaded from the BL open data archive. The data of 24 different nodes (see Extended Data Table <ref> for a list of used and available nodes) was retrieved, covering the frequency range 3876.464844 MHz – 8376.464844 MHz. In total, 79 × 24 × 17.2 GB = 32.4 terabytes (TB) of data was downloaded. For every chunk, the 24 files were spliced together in frequency using a modified version of the splicing script that is part of the BL package. The 79 spliced files, each 412 GB in size (the last one is 239 GB), were converted to <cit.> filterbank files using the conversion tool that is part of the modified DSPSR package (see Code Availability). The filterbank files have been coherently dedispersed within the channels, but not between the channels, using a dispersion measure (DM) of 560.5 pc cm^-3 (from <cit.>). The resulting filterbank files contain 8-bit intensity data and have a time resolution of 341.3̅ ns. The 4500-MHz bandwidth of the files is made out of 1536 channels with a frequency resolution of 2.9296875 MHz.
§.§ The search
Benchmarking showed that searching the data at a time resolution of 341.3̅ ns was too computationally expensive. Instead we opted to further downsample the data in time, creating new filterbanks at time resolutions of 2.048, 32.768 and 524.288 µs.
Previous searches<cit.> of these data showed that the bursts are narrowband. Therefore we create multiple `subbands' out of the 4.5-GHz bandwidth filterbank file. We use subbands of 1500, 750, 375 and 187.5 MHz, where each subband has a 50 % overlap with adjacent subbands, e.g., the 4500-MHz file is split into five 1500-MHz files. A total of
1 + ∑_i ( 2 × 4500 / i - 1 ) = 87,
where i ∈ { 1500, 750, 375, 187.5 } MHz,
unique subbands were searched. Combined with the three possible time-resolutions of the data, there were 87 × 3 = 261 passes through the data.
Every filterbank file was dedispersed and summed over all frequency channels using a range of dispersion measures (Extended Data Table <ref>) with PRESTO. The steps in DM are equal to, or smaller than, the step size suggested by PRESTO's DDplan.py. The resulting time series were searched with PRESTO's single_pulse_search.py using a S/N threshold of 7. For the 2 µs and 33 µs data we made use of all the available boxcar widths, which are logarithmically scaled between 1 and 300, while for the 524 µs data the highest boxcar width that was used is 45. This made the search sensitive to timescales between 2 µs and 24 ms – which is the maximum duration of a large sample of bursts at about 1.4 GHz as presented in <cit.>. At no stage do we apply any radio frequency interference (RFI) excision, to avoid masking any potential bright or short-duration bursts.
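The subband bookkeeping and the resulting timescale coverage can be verified with a few lines of Python; the sketch below is purely illustrative, with the maximum boxcar widths taken from the text.

```python
# Sketch verifying the number of subbands/passes and the timescale coverage.
total_bw_mhz = 4500.0
subband_widths = [1500.0, 750.0, 375.0, 187.5]   # 50% overlap between neighbours

# For width w with 50% overlap: 2 * total/w - 1 subbands; plus the full 4500-MHz band.
n_subbands = 1 + sum(int(2 * total_bw_mhz / w - 1) for w in subband_widths)
time_resolutions_us = [2.048, 32.768, 524.288]
n_passes = n_subbands * len(time_resolutions_us)

# Narrowest/widest matched-filter widths (max boxcar 300, but 45 for the 524-us data).
shortest_us = 1 * min(time_resolutions_us)
longest_ms = 45 * 524.288e-3

print(n_subbands, n_passes)                            # 87, 261
print(f"{shortest_us:.3f} us to {longest_ms:.1f} ms")  # ~2 us to ~24 ms
```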
All the candidates were classified using the machine-learning classifier FETCH<cit.> (model ). We reject all candidates that have a temporal width >0.5 ms and a probability of being an astrophysical signal <50 %. Since FETCH is trained on (simulated) FRBs with widths between 0.5 ms and 50 ms, we chose to manually inspect every candidate with a width ≤0.5 ms, regardless of its FETCH-derived probability of being an astrophysical signal.
We manually inspect the roughly 2300 candidates that meet our selection criteria and find that about 2100 are FRBs. Due to the complex time-frequency structure of some of the FRBs and the fact that we have 261 passes over the data, all of the FRBs are found multiple times — the most extreme case is burst B01, which is found over 400 times (Extended Data Table <ref>).
Bursts are sorted by their arrival times and are clustered using their proximity in time. A `new' cluster starts if the time difference between consecutive bursts exceeds 10 ms. Every cluster is manually checked for distinct bursts that are close enough in time to have been grouped together. We keep the clusters if at least one of the burst detections in that cluster has S/N ≥ 8.
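A minimal version of this time-proximity clustering could look as follows. This is only a sketch (the function and variable names are ours, not those of the actual pipeline, which also tracks widths and the subband/time-resolution pass of each detection).

```python
def cluster_bursts(times_ms, snrs, gap_ms=10.0, snr_keep=8.0):
    """Group detections into clusters whenever consecutive arrival times are
    within gap_ms of each other; keep a cluster if any member has S/N >= snr_keep."""
    if not times_ms:
        return []
    order = sorted(range(len(times_ms)), key=lambda i: times_ms[i])
    clusters, current = [], [order[0]]
    for prev, nxt in zip(order, order[1:]):
        if times_ms[nxt] - times_ms[prev] > gap_ms:
            clusters.append(current)
            current = []
        current.append(nxt)
    clusters.append(current)
    return [c for c in clusters if max(snrs[i] for i in c) >= snr_keep]
```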
Given the discovery of the ultra-FRBs, we re-checked all the candidates with a duration of 2 µs, regardless of their S/N or probability. After an additional manual inspection, none of these candidates are believed to be real astrophysical signals.
§.§ Dispersion measure determination
We use burst B30 to determine the best DM. This burst was chosen because it has a high S/N, extremely short duration and compared to the other microsecond-duration bursts it has a relatively large extent in frequency (Figure <ref>b). We coherently dedisperse the burst to a DM of 560.1 pc cm^-3 and incoherently dedisperse it over a range of trial DMs — from 559.85 pc cm^-3 to 560.35 pc cm^-3, in steps of 0.0002 pc cm^-3. For every trial DM we sum the dynamic spectrum over the frequency extent 5650–6525 MHz. The profile of the burst is normalized such that the off-burst regions have zero mean and unit standard deviation and the peak S/N is recorded. We plot the results in Extended Data Figure <ref>. Individual measurements are plotted as grey dots and for visual purposes a running average is shown as a solid black line. The peak S/N value is not clearly maximized at one specific DM value. We attribute this to brightness fluctuations (varying in both time and frequency) of the burst in the dynamic spectrum (Figure <ref>b). However, it might also be because the emission at different frequencies is occurring at different physical distances from Earth. The light travel distance of 341.3̅ ns (the highest possible time resolution) corresponds to 102.3 m (in the absence of relativistic effects). At such high time resolution, subtle changes in the location of the emission region may become apparent.
To determine the best DM, we fit a Lorentzian distribution to the data-points (shown as a solid green line) and take the best DM as the center of the fitted distribution, which is at 560.105 pc cm^-3. Due to the complex shape of the peak S/N profile we estimate the error on the DM to be 0.05 pc cm^-3.
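The peak-S/N-versus-DM fit can be reproduced with a standard least-squares fit of a Lorentzian profile. The sketch below is illustrative only (names and the initial-guess choices are ours); it assumes arrays of trial DMs and the corresponding peak S/N values.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(dm, amp, dm0, gamma, base):
    """Lorentzian peak plus a constant baseline."""
    return base + amp * gamma**2 / ((dm - dm0)**2 + gamma**2)

def best_dm(trial_dms, peak_snrs):
    """Return the centre of a Lorentzian fitted to peak S/N vs trial DM."""
    p0 = [peak_snrs.max() - peak_snrs.min(),   # amplitude guess
          trial_dms[np.argmax(peak_snrs)],     # centre guess
          0.1,                                 # half-width guess (pc cm^-3)
          peak_snrs.min()]                     # baseline guess
    popt, _ = curve_fit(lorentzian, trial_dms, peak_snrs, p0=p0)
    return popt[1]   # best-fitting DM, e.g. ~560.105 pc cm^-3 for burst B30
```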
If the same method is applied to burst B43 a DM of 560.31±0.25 pc cm^-3 is derived. That DM would decrease the duration of burst B43 by about 2 µs. However, if a DM of 560.31 pc cm^-3 is applied to burst B30 it is clearly over-dedispersed and we therefore continue the analysis with a DM of 560.105 pc cm^-3.
We note that this DM is lower than the one reported in <cit.> (565.0 pc cm^-3) and <cit.> (563.86 pc cm^-3), both of which used a structure-maximizing DM on burst B01 (called `GB-BL' in <cit.> and `11A' in <cit.>). However, it should be mentioned that, using bursts between 1.2 GHz and 2.3 GHz, <cit.> find a DM of 560.5 pc cm^-3 for bursts in general near the epoch of our GBT observations — giving a better handle on the determined DM because of the lower radio frequencies.
§.§ Residual temporal smearing
Even though the availability of voltage data allows for coherent dedispersion, there could still be intra-channel smearing due to the use of an incorrect value of the DM. Furthermore, intrinsically narrow pulses could also be broadened by scattering from the Milky Way's interstellar medium.
In <cit.>, the Galactic electron density model of <cit.> was used to estimate a Galactic scattering timescale towards FRB 20121102A of τ_s = 20 µs ν^-α. Here ν is the observing frequency in GHz and α is a scaling parameter which is between 4 (thin screen model<cit.>) and 4.4 (Kolmogorov spectrum<cit.>). We find that the scattering timescale is between 2 ns (best case, ν = 8.0 GHz and α = 4.4) and 41 ns (worst case, ν = 4.7 GHz and α = 4.0). Since the highest-possible time resolution available in our data is 341.3̅ ns, temporal smearing due to Galactic scattering is expected to be undetectable in the temporal profiles of the bursts.
The dispersive sweep can be calculated as<cit.>:
Δ t = 𝔇× (ν_1^-2 - ν_2^-2) ×DM
where 𝔇 is the dispersive constant, and ν_1 and ν_2 are the lower and upper bounds of the frequency range. Thus, for coherently dedispersed data with a channel width of 2.9296875 MHz, observing frequencies of 4.7 GHz and 8.0 GHz, and a dispersive constant of 1/(2.41 × 10^-4) MHz^2 pc^-1 cm^3 s (the same constant that is used in, e.g., the pulsar-software packages employed here), we find an intra-channel smearing of 234 ns and 48 ns per unit DM, respectively. Since the error on the DM is about 0.05 pc cm^-3 (Extended Data Figure <ref>) and the highest-possible time resolution is 341.3̅ ns, temporal smearing due to dispersion within individual channels is also insignificant.
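Both broadening terms are straightforward to evaluate. The short sketch below (illustrative only; the function names are ours) reproduces the numbers quoted above, assuming the 20 µs ν^-α scattering normalisation and the 2.9296875-MHz channel width.

```python
D_CONST = 1.0 / 2.41e-4   # dispersive constant in MHz^2 pc^-1 cm^3 s

def scattering_ns(freq_ghz, alpha, tau_us_at_1ghz=20.0):
    """Galactic scattering time in ns for tau_s = 20 us * nu(GHz)^-alpha."""
    return tau_us_at_1ghz * 1e3 * freq_ghz ** (-alpha)

def intra_channel_smearing_ns(freq_mhz, chan_bw_mhz=2.9296875, dm=1.0):
    """Residual dispersive smearing across one channel, in ns, per unit DM."""
    lo, hi = freq_mhz - chan_bw_mhz / 2, freq_mhz + chan_bw_mhz / 2
    return D_CONST * (lo**-2 - hi**-2) * dm * 1e9

print(scattering_ns(8.0, 4.4), scattering_ns(4.7, 4.0))                        # ~2 ns, ~41 ns
print(intra_channel_smearing_ns(4700.0), intra_channel_smearing_ns(8000.0))   # ~234 ns, ~48 ns
```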
§.§ Sensitivity to timescales
The highest time resolution at which the data was searched was Δ t = 2.048 µs with a S/N ≥ 8 detection limit. Any bursts shorter than Δ t would thus have a decreased S/N since noise would have been added to the single time bin in which the burst occurs. The decrease in S/N scales with the square root of the ratio of the sampling time to the burst duration. Therefore it is possible that relatively weak and even shorter duration (less than 2 µs) bursts are missed in the search. However, individual time samples in the profiles of bursts B30 and B43 reach a S/N greater than 20 at a time resolution of 341.3̅ ns (Figures <ref>ac and <ref>bd). A burst with a duration of 341.3̅ ns and a S/N of 20 would have its S/N decreased by a factor √(6) and would appear as a S/N 20 / √(6) = 8.2 candidate and would likely have been found in the search, though just barely.
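The quoted degradation factor follows directly from this scaling; a one-line check (for illustration only):

```python
import math
# A 341.3-ns burst detected in 2.048-us samples loses sqrt(2048/341.3) = sqrt(6) in S/N:
print(20 / math.sqrt(2048 / 341.3333))   # ~8.2
```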
§.§ Checks for saturation
Bright FRBs can saturate the receiver system and/or recording backend (see <cit.> and <cit.> for examples). We investigated if burst B43 is saturated because this burst is relatively bright between 4.9 GHz and 4.93 GHz (Figure <ref>d). The baseband data of the BL backend are stored as complex 8-bit signed integers<cit.>, meaning that both the real and the imaginary part utilize one byte to store their respective signed integers (i.e., an integer between -128 and +127, inclusive). The baseband data of burst B43 was loaded and every sample for all subbands and both polarisation channels was checked. We found that all the integer values are within ±105, suggesting that there is no saturation in the recording system. It is not surprising that the signal is not saturated: the intensity of the signal is spread over both polarisation channels and over both the real and imaginary parts. Furthermore, the signal is also spread out in time in the baseband data due to dispersive smearing within a subband (>100 µs for B43) and only with coherent dedispersion is the microsecond duration of these bursts revealed. The 8-bit depth of the baseband data used here is deeper than cases in which saturation did occur (4-bit baseband depth for the burst shown in <cit.> and 2-bit baseband depth for the bursts shown in <cit.>).
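A check of this kind amounts to scanning the raw 8-bit voltages for values at (or near) the signed-integer limits. Below is a minimal sketch, assuming the complex voltages have already been unpacked into an int8 array (the function name and array layout are our illustrative choices).

```python
import numpy as np

def check_saturation(voltages_int8, limit=127):
    """voltages_int8: int8 array holding the real/imaginary parts of all
    samples (all times, channels and polarisations). Returns the largest
    absolute value and whether any sample reaches the signed 8-bit limits."""
    v = np.asarray(voltages_int8, dtype=np.int16)   # avoid abs(-128) overflow
    max_abs = int(np.abs(v).max())
    return max_abs, max_abs >= limit

# For burst B43 the maximum absolute value found was 105, i.e. no saturation.
```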
§.§ Polarimetric calibration
To study the bursts in greater detail, we use the modified DSPSR package to coherently dedisperse (using a DM of 560.105 pc cm^-3) the baseband data for all the bursts that are shown in Figures <ref>, <ref> and <ref>. The resulting files contain the coherence products and have a time resolution of 341.3̅ ns, a frequency resolution of 2.9296875 MHz and a total bandwidth that varies per burst. The files are stored in filterbank<cit.> format using 32-bit floating-point numbers to avoid potential saturation effects.
In a linear basis with complex sampling, the Stokes I, Q, U and V parameters can be constructed from the auto- and cross-correlations (i.e., the coherence products) using<cit.>
I = < AA^* + BB^*>
Q = < AA^* - BB^*>
U = < 2 Re( AB^*) >
V = < 2 Im( AB^*) >
where AA^* and BB^* are the auto-correlations of the polarisation channels and Re( AB^*) and Im( AB^*) are the real and imaginary parts of the cross-correlations.
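In code, forming the coherence products and the Stokes parameters from the two complex voltage streams takes only a few lines. The sketch below is illustrative (not the actual pipeline) and assumes channelized complex voltages A and B, e.g. time-by-frequency arrays, for the two linear polarisations.

```python
import numpy as np

def stokes_from_voltages(A, B):
    """A, B: complex arrays (time, freq) for the two linear polarisations.
    Returns Stokes I, Q, U, V built from the auto- and cross-correlations."""
    AA = (A * np.conj(A)).real      # auto-correlation of polarisation A
    BB = (B * np.conj(B)).real      # auto-correlation of polarisation B
    AB = A * np.conj(B)             # cross-correlation
    I = AA + BB
    Q = AA - BB
    U = 2.0 * AB.real
    V = 2.0 * AB.imag
    return I, Q, U, V
```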
An instrumental delay between the polarisation channels will affect the cross-correlation AB^*, which in turn will affect the Stokes parameters U and V. We make use of the baseband data of a noise diode scan, taken 2 minutes before the start of the observation, to determine the delay between the polarisation channels. The data were folded on the switching period (0.04 s) of the noise diode using the software of <cit.>. An `archive' format file was made for each of the four passbands containing the coherence products. We determine the phase angle (PA) between the cross-products using:
PA( ν) = 1/2arctan[ Re( AB^*) / Im( AB^*) ]
A slope 𝒮 is fit to the PA, taking the wrapping at ±π/2 rad into account. The slope 𝒮, which has units of rad/Hz, is converted to a delay 𝒟 using 𝒟 = 𝒮 / ( π rad).
The delay between the polarisation channels is, as expected, different per passband but is on the order of 2.5 nanoseconds — similar to delays found in other radio telescopes<cit.>.
We correct the cross-correlations for the instrumental delay using:
Re( AB^*)_corrected = Re( 𝒴)
Im( AB^*)_corrected = Im( 𝒴)
where 𝒴 = [ Re( AB^*) + i Im( AB^*) ] × e^-2i πν𝒟
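A sketch of both steps — fitting the phase-angle slope from the noise-diode scan and then rotating the cross-correlations — might look as follows (illustrative only; the unwrap call handles the ±π/2 wrapping, and the function names are ours).

```python
import numpy as np

def fit_delay(AB, freqs_hz):
    """AB: complex cross-correlations vs frequency from the noise-diode scan.
    Returns the instrumental delay D (seconds) from the slope of the PA."""
    pa = 0.5 * np.arctan2(AB.real, AB.imag)    # phase angle per channel
    pa = np.unwrap(2.0 * pa) / 2.0             # undo the +/- pi/2 wrapping
    slope = np.polyfit(freqs_hz, pa, 1)[0]     # S in rad / Hz
    return slope / np.pi                       # D = S / pi

def correct_delay(AB, freqs_hz, delay_s):
    """Remove the instrumental delay from the cross-correlations."""
    return AB * np.exp(-2j * np.pi * freqs_hz * delay_s)
```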
The Stokes parameters are constructed from the auto-correlations and the delay-corrected cross correlations using Equations <ref>–<ref>. Every channel for every Stokes parameter is normalized by subtracting the mean and dividing by the standard deviation of an off-burst region of that channel.
We determine the rotation measure (RM) using a multi-burst joint QU-fit (Figure <ref>) using the following equations:
Q / L = cos ( 2 [ c^2RM / ν^2 + ϕ] )
U / L = sin ( 2 [ c^2RM / ν^2 + ϕ] )
where c is the speed of light, L is the quadrature sum of Q and U, i.e. L = √(Q^2 + U^2), and ϕ = ϕ_inf + ϕ_inst, where ϕ_inf is the absolute polarisation angle on the sky referenced to infinite frequency and ϕ_inst is the phase difference between the polarisation hands. In the fit, the parallactic angle is assumed to be the same for all the bursts. In reality the parallactic angle changes by 6^∘ between the first and the last burst in our dataset. We deviate from the more generalized form of these equations (see, e.g., Equations 4 and 5 of <cit.>) that incorporate a term for the instrumental delay since we have removed the delay a priori. We find an observed RM, RM_obs, of 93586 ± 4 rad m^-2, where the error is the 1σ statistical error. The RM in the source reference frame<cit.>, RM_src, is (1+z)^2 RM_obs = 1.42 RM_obs = 1.3 × 10^5 rad m^-2, where z is the redshift of the host galaxy of FRB 20121102A<cit.>. This RM is consistent with previously reported RM values of bursts from the same dataset<cit.>.
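The joint QU-fit reduces to fitting two sinusoids that share RM and ϕ. A minimal sketch with scipy is given below; it assumes per-channel Q/L and U/L measurements concatenated over all bursts, and — because the Faraday rotation is so large — a reasonable initial RM guess is needed. Names and defaults are our illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 299792458.0  # speed of light, m/s

def qu_model(freqs_hz, rm, phi):
    """Model [Q/L, U/L] stacked for the given observing frequencies."""
    ang = 2.0 * (C**2 * rm / freqs_hz**2 + phi)
    return np.concatenate([np.cos(ang), np.sin(ang)])

def fit_rm(freqs_hz, q_over_l, u_over_l, rm_guess=9e4):
    """Return RM_obs and its 1-sigma statistical error from a joint QU fit."""
    data = np.concatenate([q_over_l, u_over_l])
    popt, pcov = curve_fit(qu_model, freqs_hz, data, p0=[rm_guess, 0.0])
    return popt[0], float(np.sqrt(pcov[0, 0]))
```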
The dynamic spectrum of the Stokes parameters Q and U are corrected for Faraday rotation with the aforementioned RM, using:
Q_corrected = Re( 𝒲)
U_corrected = Im( 𝒲)
where 𝒲 = [ Q + i U] × e^-2i c^2ν^-2 RM
We compute the time series of the Faraday-rotation-corrected Stokes I, Q, U and V parameters by summing over the frequency extent of the bursts. Each of the four time series are then again normalized such that the off-burst regions have zero mean and unit standard deviation. We calculate the measured linear polarisation by taking the quadrature sum of Stokes Q and U, L_meas = √(Q^2 + U^2). Since L_meas is derived from squared quantities it has a positive bias. To correct for this bias we follow the prescription as shown in <cit.>:
L_unbias = σ_I√( (L_meas/σ_I)^2 - 1 ),   if L_meas/σ_I ≥ 1.57,
L_unbias = 0,   otherwise,
where σ_I is the standard deviation in the off-burst Stokes I.
The large channel widths of our very-high-time-resolution data and the extremely large RM of FRB 20121102A cause the polarisation angle, θ, to change significantly within one channel. The intra-channel Faraday rotation is given by<cit.>
Δθ = RM_obs c^2ν^-3_cΔν,
where c is the speed of light, ν_c is the observing frequency, and Δν is the channel width. For an RM of 93586 rad m^-2 and a channel width of 2.9296875 MHz this results in rotations of 13.6^∘ and 2.8^∘ at 4.7 GHz and 8.0 GHz, respectively. The depolarisation fraction is given by<cit.>
f_depol = 1 - [ sin( 2 Δθ) / 2 Δθ],
resulting in a depolarisation of 3.7 % and 0.2 % at these two frequencies, respectively.
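These two expressions are easy to evaluate; the snippet below (for illustration only) reproduces the quoted intra-channel rotation and depolarisation at the two band edges.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def intra_channel_rotation(rm, freq_hz, chan_bw_hz):
    """Intra-channel polarisation-angle rotation Delta-theta (rad)."""
    return rm * C**2 * chan_bw_hz / freq_hz**3

def depolarisation_fraction(delta_theta):
    return 1.0 - np.sin(2.0 * delta_theta) / (2.0 * delta_theta)

for f in (4.7e9, 8.0e9):
    dth = intra_channel_rotation(93586.0, f, 2.9296875e6)
    print(np.degrees(dth), depolarisation_fraction(dth))  # ~13.6 deg / 3.7%, ~2.8 deg / 0.2%
```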
We find that all but one of the bursts are consistent with being 100 % linearly polarised (Figure <ref>). The sole exception is burst B43, which shows 87 ± 10 % linear polarisation and 31 ± 10 % circular polarisation (Figure <ref>d). We use a conservative 10 % error which is a combination of the statistical error, systematic errors due to the instrumental calibration and the intra-channel depolarisation. Other bursts at similar frequencies as burst B43 show no signs of circular polarisation.
We determine the time-resolved polarisation position angles (PPAs) across the burst profiles using:
PPA = 1/2arctan( U / Q).
We compute the PPA for every sample where L_unbias≥ 4. For every burst we subtract the weighted mean of the PPAs, using L_unbias as weights, and plot their probability distributions<cit.> in the upper panels of Figure <ref>. We fit the PPAs to a constant line by minimizing the weighted least-squares. The reduced χ^2-value, χ^2_ν, is reported in Figure <ref> and often greatly exceeds 1, indicating a significant scatter.
§.§ Energetics
To determine the peak flux density, fluence and isotropic-equivalent spectral luminosity of the 8 ultra-FRBs we make use of the data products described in subsection `Polarimetric calibration'. First, the dynamic spectrum of Stokes I is averaged over a range of frequencies (as illustrated by the dashed horizontal red lines in Figure <ref>). Next, the time series are normalized such that the off-burst regions have zero mean and unit standard deviation. To convert from S/N units to physical units we make use of the radiometer equation<cit.>, assuming a system temperature of 26 K and an antenna gain of 2 K Jy^-1, i.e. a system equivalent flux density (SEFD) of 13 Jy. These quantities are expected to have fractional uncertainties of at most 20 %. To determine the fluence we integrate the profile over a specific time range that is indicated with a magenta bar in Figure <ref>. The spectral luminosity is calculated assuming a luminosity distance of 972 Mpc<cit.> to FRB 20121102A. The results are tabulated in Table <ref> and plotted in Extended Data Figure <ref>.
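The unit conversion is the standard single-dish radiometer equation. A sketch, assuming a normalized (zero-mean, unit-variance off-burst) S/N profile and the system parameters quoted above, could look as follows (names are ours; this is not the actual analysis code).

```python
import numpy as np

def snr_to_jy(snr_profile, bw_hz, t_samp_s, sefd_jy=13.0, n_pol=2):
    """Convert a normalized S/N time series to flux density (Jy) via the
    radiometer equation: S = SNR * SEFD / sqrt(n_pol * bw * t_samp)."""
    return snr_profile * sefd_jy / np.sqrt(n_pol * bw_hz * t_samp_s)

def fluence_jy_ms(flux_jy, t_samp_s, on_slice):
    """Integrate the flux density over the burst extent to obtain a fluence."""
    return float(np.sum(flux_jy[on_slice]) * t_samp_s * 1e3)
```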
§ DATA AVAILABILITY
The data that support the plots within this paper and other findings of this study are available from <https://doi.org/10.5281/zenodo.8112803> or from the corresponding author upon reasonable request. The voltage data are available through the Breakthrough Initiatives Open Data Portal: <https://breakthroughinitiatives.org/opendatasearch>, and are explained in detail in: <http://seti.berkeley.edu:8000/frb-data/>.
§ CODE AVAILABILITY
The pulsar package DSPSR is available at <https://dspsr.sourceforge.net/> and a modified version of it, bl-dspsr, that is able to read voltage data from the Breakthrough Listen backend is available at <https://github.com/UCBerkeleySETI/bl-dspsr>. Code to splice and extract voltage data is available at <https://github.com/greghell/extractor>. FETCH can be found at <https://github.com/devanshkv/fetch>. The PRESTO suite of tools is available at <https://github.com/scottransom/presto>.
§ REFERENCES
[Petroff et al.(2019)Petroff, Hessels, &
Lorimer]petroff_2019_aarv
Petroff, E., Hessels, J. W. T. & Lorimer, D. R.
Fast radio bursts.
27, 0 4 (2019).
[Petroff et al.(2022)Petroff, Hessels, &
Lorimer]petroff_2022_aarv
Petroff, E., Hessels, J. W. T. & Lorimer, D. R.
Fast radio bursts at the dawn of the 2020s.
30, 0 2 (2022).
[Pleunis et al.(2021a)Pleunis, Good, Kaspi,
Mckinven, Ransom, Scholz, Bandura, Bhardwaj, Boyle, Brar,
Cassanelli, Chawla, (Adam) Dong, Fonseca, Gaensler, Josephy,
Kaczmarek, Leung, Lin, Masui, Mena-Parra, Michilli, Ng,
Patel, Rafiei-Ravandi, Rahman, Sanghavi, Shin, Smith, Stairs,
& Tendulkar]pleunis_2021_apj
Pleunis, Z. et al.
Fast Radio Burst Morphology in the First CHIME/FRB Catalog.
923, 0 1 (2021).
[Connor et al.(2020)Connor, Miller, &
Gardenier]conner_2020_mnras_497
Connor, L., Miller, M. C. & Gardenier, D. W.
Beaming as an explanation of the repetition/width relation in FRBs.
497, 0 3076–3082 (2020).
[CHIME/FRB Collaboration et al.(2023)CHIME/FRB Collaboration,
Andersen, Bandura, Bhardwaj, Boyle, Brar, Cassanelli,
Chatterjee, Chawla, Cook, Curtin, Dobbs, Dong, Faber,
Fandino, Fonseca, Gaensler, Giri, Herrera-Martin, Hill, Ibik,
Josephy, Kaczmarek, Kader, Kaspi, Landecker, Lanman, Lazda,
Leung, Lin, Masui, McKinven, Mena-Parra, Meyers, Michilli,
Ng, Pandhi, Pearlman, Pen, Petroff, Pleunis, Rafiei-Ravandi,
Rahman, Ransom, Renard, Sand, Sanghavi, Scholz, Shah, Shin,
Siegel, Smith, Stairs, Su, Tendulkar, Vanderlinde, Wang,
Wulf, & Zwaniga]chime_2023_apj
CHIME/FRB Collaboration et al.
CHIME/FRB Discovery of 25 Repeating Fast Radio Burst Sources.
947, 0 83 (2023).
[Marcote et al.(2017)Marcote, Paragi, Hessels, Keimpema, van
Langevelde, Huang, Bassa, Bogdanov, Bower, Burke-Spolaor,
Butler, Campbell, Chatterjee, Cordes, Demorest, Garrett, Ghosh,
Kaspi, Law, Lazio, McLaughlin, Ransom, Salter, Scholz,
Seymour, Siemion, Spitler, Tendulkar, &
Wharton]marcote_2017_apjl
Marcote, B. et al.
The Repeating Fast Radio Burst FRB 121102 as Seen on Milliarcsecond
Angular Scales.
834, 0 L8 (2017).
[Marcote et al.(2020)Marcote, Nimmo, Hessels, Tendulkar,
Bassa, Paragi, Keimpema, Bhardwaj, Karuppusamy, Kaspi, Law,
Michilli, Aggarwal, Andersen, Archibald, Bandura, Bower, Boyle,
Brar, Burke-Spolaor, Butler, Cassanelli, Chawla, Demorest,
Dobbs, Fonseca, Giri, Good, Gourdji, Josephy, Kirichenko,
Kirsten, Landecker, Lang, Lazio, Li, Lin, Linford, Masui,
Mena-Parra, Naidu, Ng, Patel, Pen, Pleunis, Rafiei-Ravandi,
Rahman, Renard, Scholz, Siegel, Smith, Stairs, Vanderlinde, &
Zwaniga]marcote_2020_natur
Marcote, B. et al.
A repeating fast radio burst source localized to a nearby spiral
galaxy.
577, 0 190–194 (2020).
[Kirsten et al.(2022)Kirsten, Marcote, Nimmo, Hessels,
Bhardwaj, Tendulkar, Keimpema, Yang, Snelders, Scholz,
Pearlman, Law, Peters, Giroletti, Paragi, Bassa, Hewitt,
Bach, Bezrukovs, Burgay, Buttaccio, Conway, Corongiu, Feiler,
Forssén, Gawroński, Karuppusamy, Kharinov, Lindqvist,
Maccaferri, Melnikov, Ould-Boukattine, Possenti, Surcis, Wang,
Yuan, Aggarwal, Anna-Thomas, Bower, Blaauw, Burke-Spolaor,
Cassanelli, Clarke, Fonseca, Gaensler, Gopinath, Kaspi, Kassim,
Lazio, Leung, Li, Lin, Masui, Mckinven, Michilli, Mikhailov,
Ng, Orbidans, Pen, Petroff, Rahman, Ransom, Shin, Smith,
Stairs, & Vlemmings]kirsten_2022_natur
Kirsten, F. et al.
A repeating fast radio burst source in a globular cluster.
602, 0 585–589 (2022).
[Nimmo et al.(2022a)Nimmo, Hewitt, Hessels,
Kirsten, Marcote, Bach, Blaauw, Burgay, Corongiu, Feiler,
Gawroński, Giroletti, Karuppusamy, Keimpema, Kharinov,
Lindqvist, Maccaferri, Melnikov, Mikhailov, Ould-Boukattine,
Paragi, Pilia, Possenti, Snelders, Surcis, Trudu, Venturi,
Vlemmings, Wang, Yang, & Yuan]nimmo_2022_apjl
Nimmo, K. et al.
Milliarcsecond Localization of the Repeating FRB 20201124A.
927, 0 L3 (2022).
[Ocker et al.(2023)Ocker, Cordes, Chatterjee, Li, Niu,
McKee, Law, & Anna-Thomas]ocker_2023_mnras
Ocker, S. K. et al.
Scattering variability detected from the circumsource medium of FRB
20190520B.
519, 0 821–830 (2023).
[Michilli et al.(2018a)Michilli, Seymour, Hessels,
Spitler, Gajjar, Archibald, Bower, Chatterjee, Cordes, Gourdji,
Heald, Kaspi, Law, Sobey, Adams, Bassa, Bogdanov, Brinkman,
Demorest, Fernandez, Hellbourg, Lazio, Lynch, Maddox, Marcote,
McLaughlin, Paragi, Ransom, Scholz, Siemion, Tendulkar, van
Rooy, Wharton, & Whitlow]michilli_2018_natur
Michilli, D. et al.
An extreme magneto-ionic environment associated with the fast radio
burst source FRB 121102.
553, 0 182–185 (2018).
[Hilmarsson et al.(2021)Hilmarsson, Michilli, Spitler,
Wharton, Demorest, Desvignes, Gourdji, Hackstein, Hessels,
Nimmo, Seymour, Kramer, & Mckinven]hilmarsson_2021_apjl
Hilmarsson, G. H. et al.
Rotation Measure Evolution of the Repeating Fast Radio Burst Source
FRB 121102.
908, 0 L10 (2021).
[Spitler et al.(2014)Spitler, Cordes, Hessels, Lorimer,
McLaughlin, Chatterjee, Crawford, Deneva, Kaspi, Wharton,
Allen, Bogdanov, Brazier, Camilo, Freire, Jenet,
Karako-Argaman, Knispel, Lazarus, Lee, van Leeuwen, Lynch,
Ransom, Scholz, Siemens, Stairs, Stovall, Swiggum,
Venkataraman, Zhu, Aulbert, & Fehrmann]spitler_2014_apj
Spitler, L. G. et al.
Fast Radio Burst Discovered in the Arecibo Pulsar ALFA Survey.
790, 0 101 (2014).
[Hessels et al.(2019)Hessels, Spitler, Seymour, Cordes,
Michilli, Lynch, Gourdji, Archibald, Bassa, Bower, Chatterjee,
Connor, Crawford, Deneva, Gajjar, Kaspi, Keimpema, Law,
Marcote, McLaughlin, Paragi, Petroff, Ransom, Scholz, Stappers,
& Tendulkar]hessels_2019_apjl
Hessels, J. W. T. et al.
FRB 121102 Bursts Show Complex Time-Frequency Structure.
876, 0 L23 (2019).
[Josephy et al.(2019)Josephy, Chawla, Fonseca, Ng, Patel,
Pleunis, Scholz, Andersen, Bandura, Bhardwaj, Boyce, Boyle,
Brar, Cubranic, Dobbs, Gaensler, Gill, Giri, Good, Halpern,
Hinshaw, Kaspi, Landecker, Lang, Lin, Masui, Mckinven,
Mena-Parra, Merryfield, Michilli, Milutinovic, Naidu, Pen,
Rafiei-Ravandi, Rahman, Ransom, Renard, Siegel, Smith, Stairs,
Tendulkar, Vanderlinde, Yadav, & Zwaniga]josephy_2019_apjl
Josephy, A. et al.
CHIME/FRB Detection of the Original Repeating Fast Radio Burst
Source FRB 121102.
882, 0 L18 (2019).
[Wang et al.(2022)Wang, Zhang, Zhu, Zhang, Li, Zhang,
Cao, Feng, Han, Lee, Yang, Niu, Niu, Wang, Miao, Luo,
Li, Wang, Wang, Wu, Xu, Jiang, Xu, Men, Zhou, Yao, Yu,
Zhang, & Zhang]wang_2022_atel_15619
Wang, P. et al.
FRB 20121102A is active again with significantly smaller DM as
revealed by FAST.
The Astronomer's Telegram 15619, 0 1 (2022).
[CHIME/FRB Collaboration et al.(2020)CHIME/FRB Collaboration,
Amiri, Andersen, Bandura, Bhardwaj, Boyle, Brar, Chawla,
Chen, Cliche, Cubranic, Deng, Denman, Dobbs, Dong, Fandino,
Fonseca, Gaensler, Giri, Good, Halpern, Hessels, Hill,
Höfer, Josephy, Kania, Karuppusamy, Kaspi, Keimpema,
Kirsten, Landecker, Lang, Leung, Li, Lin, Marcote, Masui,
McKinven, Mena-Parra, Merryfield, Michilli, Milutinovic,
Mirhosseini, Naidu, Newburgh, Ng, Nimmo, Paragi, Patel, Pen,
Pinsonneault-Marotte, Pleunis, Rafiei-Ravandi, Rahman, Ransom,
Renard, Sanghavi, Scholz, Shaw, Shin, Siegel, Singh, Smegal,
Smith, Stairs, Tendulkar, Tretyakov, Vanderlinde, Wang, Wang,
Wulf, Yadav, & Zwaniga]chime_2020_natur_periodicactivity
CHIME/FRB Collaboration et al.
Periodic activity from a fast radio burst source.
582, 0 351–355 (2020).
[Pleunis et al.(2021b)Pleunis, Michilli, Bassa,
Hessels, Naidu, Andersen, Chawla, Fonseca, Gopinath, Kaspi,
Kondratiev, Li, Bhardwaj, Boyle, Brar, Cassanelli, Gupta,
Josephy, Karuppusamy, Keimpema, Kirsten, Leung, Marcote, Masui,
Mckinven, Meyers, Ng, Nimmo, Paragi, Rahman, Scholz, Shin,
Smith, Stairs, & Tendulkar]pleunis_2021_apjl
Pleunis, Z. et al.
LOFAR Detection of 110-188 MHz Emission and Frequency-dependent
Activity from FRB 20180916B.
911, 0 L3 (2021).
[Gajjar et al.(2018)Gajjar, Siemion, Price, Law, Michilli,
Hessels, Chatterjee, Archibald, Bower, Brinkman, Burke-Spolaor,
Cordes, Croft, Enriquez, Foster, Gizani, Hellbourg, Isaacson,
Kaspi, Lazio, Lebofsky, Lynch, MacMahon, McLaughlin, Ransom,
Scholz, Seymour, Spitler, Tendulkar, Werthimer, &
Zhang]gajjar_2018_apj
Gajjar, V. et al.
Highest Frequency Detection of FRB 121102 at 4-8 GHz Using the
Breakthrough Listen Digital Backend at the Green Bank Telescope.
863, 0 2 (2018).
[MAGIC Collaboration et al.(2018)MAGIC Collaboration, Acciari,
Ansoldi, Antonelli, Arbet Engels, Arcaro, Baack, Babić, ,
Banerjee, Bangale, Barres de Almeida, Barrio, Becerra González,
Bednarek, Bernardini, Berti, Besenrieder, Bhattacharyya,
Bigongiari, Biland, Blanch, Bonnoli, Carosi, Ceribella,
Chatterjee, Colak, Colin, Colombo, Contreras, Cortina, Covino,
Cumani, D'Elia, da Vela, Dazzi, de Angelis, de Lotto, Delfino,
Delgado, di Pierro, Domínguez, Dominis Prester, Dorner,
Doro, Einecke, Elsaesser, Fallah Ramazani, Fattorini,
Fernández-Barral, Ferrara, Fidalgo, Foffano, Fonseca, Font,
Fruck, Gallozzi, García López, Garczarczyk, Gaug,
Giammaria, Godinović, , Guberman, Hadasch, Hahn, Hassan,
Herrera, Hoang, Hrupec, Inoue, Ishio, Iwamura, Kubo, Kushida,
Kuveždić, , Lamastra, Lelas, Leone, Lindfors,
Lombardi, Longo, López, López-Oramas, Maggio, Majumdar,
Makariev, Maneva, Manganaro, Mannheim, Maraschi, Mariotti,
Martínez, Masuda, Mazin, Minev, Miranda, Mirzoyan, Molina,
Moralejo, Moreno, Moretti, Neustroev, Niedzwiecki, Nievas
Rosillo, Nigro, Nilsson, Ninci, Nishijima, Noda, Nogués,
Paiano, Palacio, Paneque, Paoletti, Paredes, Pedaletti,
Peñil, Peresano, Persic, Prada Moroni, Prandini, Puljak,
Garcia, Rhode, Ribó, Rico, Righi, Rugliancich, Saha,
Saito, Satalecka, Schweizer, Sitarek, Šnidarić, ,
Sobczynska, Somero, Stamerra, Strzys, Surić, , Tavecchio,
Temnikov, Terzić, , Teshima, Torres-Albà, Tsujimoto,
Vanzo, Vazquez Acosta, Vovk, Ward, Will, Zarić, Marcote,
Spitler, Hessels, Kashiyama, Murase, Bosch-Ramon, Michilli, &
Seymour]magiccollaboration_2018_mnras
MAGIC Collaboration et al.
Constraining very-high-energy and optical emission from FRB 121102
with the MAGIC telescopes.
481, 0 2479–2486 (2018).
[Hiramatsu et al.(2023)Hiramatsu, Berger, Metzger, Gomez,
Bieryla, Arcavi, Howell, Mckinven, &
Tominaga]hiramatsu_2023_apjl
Hiramatsu, D. et al.
Limits on Simultaneous and Delayed Optical Emission from
Well-localized Fast Radio Bursts.
947, 0 L28 (2023).
[Spitler et al.(2016)Spitler, Scholz, Hessels, Bogdanov,
Brazier, Camilo, Chatterjee, Cordes, Crawford, Deneva, Ferdman,
Freire, Kaspi, Lazarus, Lynch, Madsen, McLaughlin, Patel,
Ransom, Seymour, Stairs, Stappers, van Leeuwen, &
Zhu]spitler_2016_natur
Spitler, L. G. et al.
A repeating fast radio burst.
531, 0 202–205 (2016).
[Chatterjee et al.(2017)Chatterjee, Law, Wharton,
Burke-Spolaor, Hessels, Bower, Cordes, Tendulkar, Bassa,
Demorest, Butler, Seymour, Scholz, Abruzzo, Bogdanov, Kaspi,
Keimpema, Lazio, Marcote, McLaughlin, Paragi, Ransom, Rupen,
Spitler, & van Langevelde]chatterjee_2017_natur
Chatterjee, S. et al.
A direct localization of a fast radio burst and its host.
541, 0 58–61 (2017).
[Lorimer et al.(2007)Lorimer, Bailes, McLaughlin, Narkevic,
& Crawford]lorimer_2007_sci
Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J. &
Crawford, F.
A Bright Millisecond Radio Burst of Extragalactic Origin.
Science 318, 0 777 (2007).
[Tendulkar et al.(2017)Tendulkar, Bassa, Cordes, Bower,
Law, Chatterjee, Adams, Bogdanov, Burke-Spolaor, Butler,
Demorest, Hessels, Kaspi, Lazio, Maddox, Marcote, McLaughlin,
Paragi, Ransom, Scholz, Seymour, Spitler, van Langevelde, &
Wharton]tendulkar_2017_apjl
Tendulkar, S. P. et al.
The Host Galaxy and Redshift of the Repeating Fast Radio Burst FRB
121102.
834, 0 L7 (2017).
[Bassa et al.(2017)Bassa, Tendulkar, Adams, Maddox,
Bogdanov, Bower, Burke-Spolaor, Butler, Chatterjee, Cordes,
Hessels, Kaspi, Law, Marcote, Paragi, Ransom, Scholz,
Spitler, & van Langevelde]bassa_2017_apjl
Bassa, C. G. et al.
FRB 121102 Is Coincident with a Star-forming Region in Its Host
Galaxy.
843, 0 L8 (2017).
[Michilli et al.(2018b)Michilli, Hessels, Lyon,
Tan, Bassa, Cooper, Kondratiev, Sanidas, Stappers, & van
Leeuwen]michilli_2018_mnras
Michilli, D. et al.
Single-pulse classifier for the LOFAR Tied-Array All-sky Survey.
480, 0 3457–3467 (2018).
[Plavin et al.(2022)Plavin, Paragi, Marcote, Keimpema,
Hessels, Nimmo, Vedantham, & Spitler]plavin_2022_mnras
Plavin, A. et al.
FRB 121102: Drastic changes in the burst polarization contrasts with
the stability of the persistent emission.
511, 0 6033–6041 (2022).
[Feng et al.(2023a)Feng, Jiang, Zhou, Wang,
Zhang, Li, Zhu, Zhang, Zhang, Yang, Han, Lee, Li, Luo,
Miao, Niu, Niu, Tsai, Wang, Wang, Wang, Wu, Xu, Yao,
Yu, Zhang, & Zhang]feng_2023_atel
Feng, Y. et al.
A highly depolarized burst from FRB 20121102A with significantly
smaller RM as revealed by FAST.
The Astronomer's Telegram 15980, 0 1 (2023).
[Cho et al.(2020)Cho, Macquart, Shannon, Deller, Morrison,
Ekers, Bannister, Farah, Qiu, Sammons, Bailes, Bhandari, Day,
James, Phillips, Prochaska, & Tuthill]cho_2020_apjl
Cho, H. et al.
Spectropolarimetric Analysis of FRB 181112 at Microsecond
Resolution: Implications for Fast Radio Burst Emission Mechanism.
891, 0 L38 (2020).
[Farah et al.(2018)Farah, Flynn, Bailes, Jameson,
Bannister, Barr, Bateman, Bhandari, Caleb, Campbell-Wilson,
Chang, Deller, Green, Hunstead, Jankowski, Keane, Macquart,
Möller, Onken, Osłowski, Parthasarathy, Plant, Ravi,
Shannon, Tucker, Venkatraman Krishnan, & Wolf]farah_2018_mnras
Farah, W. et al.
FRB microstructure revealed by the real-time detection of
FRB170827.
478, 0 1209–1217 (2018).
[Nimmo et al.(2021)Nimmo, Hessels, Keimpema, Archibald,
Cordes, Karuppusamy, Kirsten, Li, Marcote, &
Paragi]nimmo_2021_natas
Nimmo, K. et al.
Highly polarized microstructure from the repeating FRB 20180916B.
Nature Astronomy 5, 0 594–603 (2021).
[Nimmo et al.(2022b)Nimmo, Hessels, Kirsten,
Keimpema, Cordes, Snelders, Hewitt, Karuppusamy, Archibald,
Bezrukovs, Bhardwaj, Blaauw, Buttaccio, Cassanelli, Conway,
Corongiu, Feiler, Fonseca, Forssén, Gawroński, Giroletti,
Kharinov, Leung, Lindqvist, Maccaferri, Marcote, Masui,
Mckinven, Melnikov, Michilli, Mikhailov, Ng, Orbidans,
Ould-Boukattine, Paragi, Pearlman, Petroff, Rahman, Scholz,
Shin, Smith, Stairs, Surcis, Tendulkar, Vlemmings, Wang,
Yang, & Yuan]nimmo_2022_natas
Nimmo, K. et al.
Burst timescales and luminosities as links between young pulsars and
fast radio bursts.
Nature Astronomy 6, 0 393–401 (2022).
[Zhang et al.(2018)Zhang, Gajjar, Foster, Siemion, Cordes,
Law, & Wang]zhang_2018_apj
Zhang, Y. G. et al.
Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine
Learning Approach.
866, 0 149 (2018).
[MacMahon et al.(2018)MacMahon, Price, Lebofsky, Siemion,
Croft, DeBoer, Enriquez, Gajjar, Hellbourg, Isaacson,
Werthimer, Abdurashidova, Bloss, Brandt, Creager, Ford, Lynch,
Maddalena, McCullough, Ray, Whitehead, &
Woody]macmahon_2018_pasp
MacMahon, D. H. E. et al.
The Breakthrough Listen Search for Intelligent Life: A Wideband Data
Recorder System for the Robert C. Byrd Green Bank Telescope.
130, 0 044502 (2018).
[van Straten & Bailes(2011)]vanstraten_2011_pasa
van Straten, W. & Bailes, M.
DSPSR: Digital Signal Processing Software for Pulsar Astronomy.
28, 0 1–14 (2011).
[Lorimer(2011)]lorimer_2011_ascl
Lorimer, D. R.
SIGPROC: Pulsar Signal Processing Programs.
Astrophysics Source Code Library, record ascl:1107.016 (2011).
[Ransom(2001)]ransom_2001_phdt
Ransom, S. M.
New search techniques for binary pulsars.
PhD thesis Harvard University, Massachusetts (2001).
[Cordes & McLaughlin(2003)]cordes_2003_apj
Cordes, J. M. & McLaughlin, M. A.
Searches for Fast Radio Transients.
596, 0 1142–1154 (2003).
[Agarwal et al.(2020)Agarwal, Aggarwal, Burke-Spolaor,
Lorimer, & Garver-Daniels]agarwal_2020_mnras
Agarwal, D., Aggarwal, K., Burke-Spolaor, S., Lorimer, D. R. &
Garver-Daniels, N.
FETCH: A deep-learning based classifier for fast transient
classification.
497, 0 1661–1674 (2020).
[Hankins et al.(2003)Hankins, Kern, Weatherall, &
Eilek]hankins_2003_natur
Hankins, T. H., Kern, J. S., Weatherall, J. C. & Eilek, J. A.
Nanosecond radio bursts from strong plasma turbulence in the Crab
pulsar.
422, 0 141–143 (2003).
[Hankins & Eilek(2007)]hankins_2007_apj
Hankins, T. H. & Eilek, J. A.
Radio Emission Signatures in the Crab Pulsar.
670, 0 693–701 (2007).
[Jessner et al.(2010)Jessner, Popov, Kondratiev, Kovalev,
Graham, Zensus, Soglasnov, Bilous, & Moshkina]jessner_2010_aa
Jessner, A. et al.
Giant pulses with nanosecond time resolution detected from the Crab
pulsar at 8.5 and 15.1 GHz.
524, 0 A60 (2010).
[Faber et al.(2021)Faber, Gajjar, Siemion, Croft, Czech,
DeBoer, DeMarines, Drew, Isaacson, Lacki, Lebofsky, MacMahon,
Ng, Pater, Price, Sheikh, Webb, & Worden]faber_2021_rnaas
Faber, J. T. et al.
Re-analysis of Breakthrough Listen Observations of FRB 121102:
Polarization Properties of Eight New Spectrally Narrow Bursts.
Research Notes of the American Astronomical Society
5, 0 17 (2021).
[Everett & Weisberg(2001)]everett_2001_apj
Everett, J. E. & Weisberg, J. M.
Emission Beam Geometry of Selected Pulsars Derived from Average
Pulse Polarization Data.
553, 0 341–357 (2001).
[Lorimer & Kramer(2012)]lorimer_2012_hpa
Lorimer, D. R. & Kramer, M.
Handbook of Pulsar Astronomy.
Cambridge University Press (2012).
[Tendulkar et al.(2021)Tendulkar, Gil de Paz, Kirichenko,
Hessels, Bhardwaj, Ávila, Bassa, Chawla, Fonseca, Kaspi,
Keimpema, Kirsten, Lazio, Marcote, Masui, Nimmo, Paragi,
Rahman, Payá, Scholz, & Stairs]tendulkar_2021_apjl
Tendulkar, S. P. et al.
The 60 pc Environment of FRB 20180916B.
908, 0 L12 (2021).
[CHIME/FRB Collaboration et al.(2019)CHIME/FRB Collaboration,
Andersen, Bandura, Bhardwaj, Boubel, Boyce, Boyle, Brar,
Cassanelli, Chawla, Cubranic, Deng, Dobbs, Fandino, Fonseca,
Gaensler, Gilbert, Giri, Good, Halpern, Hill, Hinshaw,
Höfer, Josephy, Kaspi, Kothes, Landecker, Lang, Li, Lin,
Masui, Mena-Parra, Merryfield, Mckinven, Michilli, Milutinovic,
Naidu, Newburgh, Ng, Patel, Pen, Pinsonneault-Marotte, Pleunis,
Rafiei-Ravandi, Rahman, Ransom, Renard, Scholz, Siegel, Singh,
Smith, Stairs, Tendulkar, Tretyakov, Vanderlinde, Yadav, &
Zwaniga]chime_2019_apjl
CHIME/FRB Collaboration et al.
CHIME/FRB Discovery of Eight New Repeating Fast Radio Burst
Sources.
885, 0 L24 (2019).
[Nimmo et al.(2023)Nimmo, Hessels, Snelders, Karuppusamy,
Hewitt, Kirsten, Marcote, Bach, Bansod, Barr, Behrend,
Bezrukovs, Buttaccio, Feiler, Gawroński, Lindqvist, Orbidans,
Puchalska, Wang, Winchen, Wolak, Wu, & Yuan]nimmo_2023_mnras
Nimmo, K. et al.
A burst storm from the repeating FRB 20200120E in an M81 globular
cluster.
520, 0 2281–2305 (2023).
[CHIME/FRB Collaboration et al.(2021)CHIME/FRB Collaboration,
Amiri, Andersen, Bandura, Berger, Bhardwaj, Boyce, Boyle,
Brar, Breitman, Cassanelli, Chawla, Chen, Cliche, Cook,
Cubranic, Curtin, Deng, Dobbs, Dong, Eadie, Fandino, Fonseca,
Gaensler, Giri, Good, Halpern, Hill, Hinshaw, Josephy,
Kaczmarek, Kader, Kania, Kaspi, Landecker, Lang, Leung, Li,
Lin, Masui, McKinven, Mena-Parra, Merryfield, Meyers, Michilli,
Milutinovic, Mirhosseini, Münchmeyer, Naidu, Newburgh, Ng,
Patel, Pen, Petroff, Pinsonneault-Marotte, Pleunis,
Rafiei-Ravandi, Rahman, Ransom, Renard, Sanghavi, Scholz, Shaw,
Shin, Siegel, Sikora, Singh, Smith, Stairs, Tan, Tendulkar,
Vanderlinde, Wang, Wulf, & Zwaniga]chime_2021_apjs
CHIME/FRB Collaboration et al.
The First CHIME/FRB Fast Radio Burst Catalog.
257, 0 59 (2021).
[Feng et al.(2022a)Feng, Li, Yang, Zhang, Zhu,
Zhang, Lu, Wang, Dai, Lynch, Yao, Jiang, Niu, Zhou, Xu,
Miao, Niu, Meng, Qian, Tsai, Wang, Xue, Yue, Yuan, Zhang,
& Zhang]feng_2022_sci
Feng, Y. et al.
Frequency-dependent polarization of repeating fast radio
bursts—implications for their origin.
Science 375, 0 1266–1270 (2022).
[Beniamini et al.(2022)Beniamini, Kumar, &
Narayan]beniamini_2022_mnras
Beniamini, P., Kumar, P. & Narayan, R.
Faraday depolarization and induced circular polarization by
multipath propagation with application to FRBs.
510, 0 4654–4668 (2022).
[Feng et al.(2022b)Feng, Zhang, Li, Yang,
Wang, Niu, Dai, & Yao]feng_2022_scibu
Feng, Y. et al.
Circular polarization in two active repeating fast radio bursts.
Science Bulletin 67, 0 2398–2401 (2022).
[Zhang et al.(2023)Zhang, Li, Zhang, Cao, Feng, Wang,
Qu, Niu, Zhu, Han, Jiang, Lee, Li, Luo, Niu, Tsai,
Wang, Wang, Wu, Xu, Yang, Zhang, Zhou, &
Zhu]zhang_2023_arxiv
Zhang, Y.-K. et al.
FAST Observations of FRB 20220912A: Burst Properties and
Polarization Characteristics.
Preprint at
https://ui.adsabs.harvard.edu/abs/2023arXiv230414665ZarXiv:2304.14665
(2023).
[Feng et al.(2023b)Feng, Li, Zhang, Tsai,
Wang, Yang, Qu, Wang, Zhou, Niu, Miao, Yuan, Xu, Lynch,
Armentrout, Gregory, Meng, Wang, Chen, Dai, Niu, Xue, Yao,
Zhang, Zhang, Zhu, & Zhu]feng_2023_arxiv
Feng, Y. et al.
An extreme active repeating fast radio burst in a clean
environment.
Preprint at
https://ui.adsabs.harvard.edu/abs/2023arXiv230414671FarXiv:2304.14671
(2023).
[Beniamini & Kumar(2020)]beniamini_2020_mnras_498
Beniamini, P. & Kumar, P.
What does FRB light-curve variability tell us about the emission
mechanism?
498, 0 651–664 (2020).
[Worden et al.(2017)Worden, Drew, Siemion, Werthimer,
DeBoer, Croft, MacMahon, Lebofsky, Isaacson, Hickish, Price,
Gajjar, & Wright]worden_2017_acaau
Worden, S. P. et al.
Breakthrough Listen - A new search for life in the universe.
Acta Astronautica 139, 0 98–101 (2017).
[Lebofsky et al.(2019)Lebofsky, Croft, Siemion, Price,
Enriquez, Isaacson, MacMahon, Anderson, Brzycki, Cobb, Czech,
DeBoer, DeMarines, Drew, Foster, Gajjar, Gizani, Hellbourg,
Korpela, Lacki, Sheikh, Werthimer, Worden, Yu, &
Zhang]lebofsky_2019_pasp
Lebofsky, M. et al.
The Breakthrough Listen Search for Intelligent Life: Public Data,
Formats, Reduction, and Archiving.
131, 0 124505 (2019).
[Hewitt et al.(2022)Hewitt, Snelders, Hessels, Nimmo,
Jahns, Spitler, Gourdji, Hilmarsson, Michilli, Ould-Boukattine,
Scholz, & Seymour]hewitt_2022_mnras
Hewitt, D. M. et al.
Arecibo observations of a burst storm from FRB 20121102A in 2016.
515, 0 3577–3596 (2022).
[Cordes & Lazio(2002)]cordes_2002_arxiv
Cordes, J. M. & Lazio, T. J. W.
NE2001.I. A New Model for the Galactic Distribution of Free
Electrons and its Fluctuations.
Preprint at
https://ui.adsabs.harvard.edu/abs/2002astro.ph..7156Castro-ph/0207156
(2002).
[Scheuer(1968)]scheuer_1968_natur
Scheuer, P. A. G.
Amplitude Variations in Pulsed Radio Sources.
218, 0 920–922 (1968).
[Lee & Jokipii(1975)]lee_1975_apj
Lee, L. C. & Jokipii, J. R.
Strong scintillations in astrophysics. II. A theory of temporal
broadening of pulses.
201, 0 532–543 (1975).
[Ikebe et al.(2023)Ikebe, Takefuji, Terasawa, Eie, Akahori,
Murata, Hashimoto, Kisaka, Honma, Yoshiura, Suzuki, Oyama,
Sekido, Niinuma, Takeuchi, Yonekura, & Enoto]Ikebe_2023_PASJ
Ikebe, S. et al.
Detection of a bright burst from the repeating fast radio burst
20201124A at 2 GHz.
75, 0 199–207 (2023).
[Kirsten et al.(2023)Kirsten, Ould-Boukattine, Herrmann,
Gawroński, Hessels, Lu, Snelders, Chawla, Yang, Blaauw,
Nimmo, Puchalska, Wolak, & van Ruiten]kirsten_2023_arxiv
Kirsten, F. et al.
Connecting repeating and non-repeating fast radio bursts via their
energy distributions.
Preprint at
https://ui.adsabs.harvard.edu/abs/2023arXiv230615505KarXiv:2306.15505
(2023).
[van Straten et al.(2010)van Straten, Manchester, Johnston, &
Reynolds]vanstraten_2010_pasa
van Straten, W., Manchester, R. N., Johnston, S. & Reynolds, J. E.
PSRCHIVE and PSRFITS: Definition of the Stokes Parameters and
Instrumental Basis Conventions.
27, 0 104–119 (2010).
[Mckinven et al.(2021)Mckinven, Michilli, Masui, Cubranic,
Gaensler, Ng, Bhardwaj, Leung, Boyle, Brar, Cassanelli, Li,
Mena-Parra, Rahman, & Stairs]mckinven_2021_apj
Mckinven, R. et al.
Polarization Pipeline for Fast Radio Bursts Detected by CHIME/FRB.
920, 0 138 (2021).
[Kirsten et al.(2021)Kirsten, Snelders, Jenkins, Nimmo, van
den Eijnden, Hessels, Gawroński, & Yang]kirsten_2021_natas
Kirsten, F. et al.
Detection of two bright radio bursts from magnetar SGR 1935 + 2154.
Nature Astronomy 5, 0 414–422 (2021).
§ ADDITIONAL INFORMATION
Correspondence and requests for materials should be addressed to M.P. Snelders.
§ ACKNOWLEDGEMENTS
We would like to thank the Breakthrough Listen project for keeping the raw baseband data from these observations and making it publicly available. Breakthrough Listen is funded by the Breakthrough Initiatives (<https://breakthroughinitiatives.org/>). We thank the referees for their constructive comments that improved the manuscript. We thank J. Weisberg for useful discussions about radio astronomy and polarimetry. A.D. Seymour is thanked for tips regarding the GBT BL data. Research by the AstroFlash group at University of Amsterdam, ASTRON and JIVE is supported in part by an NWO Vici grant (PI Hessels; VI.C.192.045). K.N. is an MIT Kavli Fellow.
§ AUTHOR CONTRIBUTIONS
M.P.S. led the burst search, data analysis, and made the figures and tables. He wrote the majority of the manuscript. K.N. made significant contributions to the writing and provided guidance on the data analysis. J.W.T.H. supervised the work, guided the overall approach, and made significant contributions to the writing. All co-authors provided input on the scientific interpretation.
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2307.01646v2
|
20230704105842
|
SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation
|
[
"Qi Yan",
"Zhengyang Liang",
"Yang Song",
"Renjie Liao",
"Lele Wang"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Task Planning Support for Arborists and Foresters: Comparing Deep Learning Approaches for Tree Inventory and Tree Vitality Assessment Based on UAV-Data
(Special thanks to our cooperation partner Smart City Bamberg. The project BaKIM is supported by Kommunal?Digital! funding of the Bavarian Ministry for Digital Affairs. Project funding period: 01.01.2022 - 31.03.2024.)
Jonas Troles, Richard Nieding, Sonia Simons, Ute Schmid
August 1, 2023
Diffusion models based on permutation-equivariant networks can learn permutation-invariant distributions for graph data.
However, in comparison to their non-invariant counterparts, we have found that these invariant models encounter greater learning challenges since 1) their effective target distributions exhibit more modes; 2) their optimal one-step denoising scores are the score functions of Gaussian mixtures with more components.
Motivated by this analysis, we propose a non-invariant diffusion model, called SwinGNN,
which employs an efficient edge-to-edge 2-WL message passing network and utilizes shifted window based self-attention inspired by SwinTransformers.
Further, through systematic ablations, we identify several critical training and sampling techniques that significantly improve the sample quality of graph generation.
Finally, we introduce a simple post-processing trick, namely randomly permuting the generated graphs, which provably converts any graph generative model to a permutation-invariant one.
Extensive experiments on synthetic and real-world protein and molecule datasets show that our SwinGNN achieves state-of-the-art performances.
Our code is released at <https://github.com/qiyan98/SwinGNN>.
§ INTRODUCTION
Diffusion models <cit.> have recently emerged as a powerful class of deep generative models.
They can generate high-dimensional data, e.g., images and videos <cit.>, at unprecedentedly high quality.
While originally designed for continuous data, diffusion models have inspired new models for graph generation, which mainly involves discrete data and has a wide range of applications, including molecule generation <cit.>, code completion <cit.>, urban planning <cit.>, scene graph generation <cit.>, and neural architecture search <cit.>.
There exist two ways to generalize diffusion models for modeling graphs.
The first one is straightforward, i.e., treating adjacency matrices as images and applying existing techniques built for continuous data.
The only additional challenge compared to image generation is that a desirable probability distribution over graphs should be invariant to the permutation of nodes.
<cit.> construct permutation equivariant score-based models that induce permutation invariant distributions.
Post-hoc thresholding is needed to convert sampled continuous adjacency matrices to binary ones.
The other one relies on the recently proposed discrete diffusion models <cit.> that naturally operate on binary adjacency matrices with discrete transitions.
<cit.> shows that such models learn permutation invariant graph distributions by construction and achieve impressive results on molecule generation.
In this paper, we first examine the pitfall of learning permutation invariant distributions using equivariant networks.
We empirically find that permutation-invariant diffusion models are harder to learn than their non-permutation-invariant counterparts.
Our analysis reveals that this is likely caused by 1) their effective target distributions having more modes; 2) their optimal one-step denoising scores being score functions of Gaussian mixtures models (GMMs) with more components.
Motivated by the analysis, we propose a non-permutation-invariant diffusion model, called SwinGNN.
Inspired by 2-WL graph neural networks (GNNs) <cit.> and SwinTransformers <cit.>, our SwinGNN performs efficient edge-to-edge message passing via shifted window based self-attention mechanism and customized graph downsampling/upsampling operators, thus being scalable to generate large graphs (, over 500 nodes).
SwinGNN differs from the vision SwinTransformer in that it is specifically designed as a high-order graph transformer to handle edge representations.
Further, we thoroughly investigate the recent advances of diffusion models for image generation <cit.> and identify several techniques that significantly improve the sample quality of graph generation.
Finally, we propose a simple trick to randomly permute the generated samples, which provably converts any non-permutation-invariant graph generative model to a permutation-invariant one.
Extensive experiments on synthetic and real-world molecule and protein datasets show that our SwinGNN achieves state-of-the-art performances, surpassing the existing models by several orders of magnitude in most metrics.
§ RELATED WORK
Generative models of graphs (e.g., random graph models) have been studied in mathematics, network science, and other subjects for several decades since the seminal Erdős–Rényi model <cit.>.
Most of these models, e.g., the Watts–Strogatz model <cit.> and the Barabási–Albert model <cit.>, are mathematically tractable, i.e., one can rigorously analyze their properties such as degree distributions.
In the context of deep learning, deep generative models of graphs <cit.> emphasize fitting to complex distributions of real-world graphs compared to obtaining tractable properties and have achieved impressive performances.
They can be roughly categorized based on the underlying generative modeling techniques.
The first class of methods relies on denoising diffusion models <cit.> that achieve great successes in image and video generation.
We focus on continuous diffusion graph generative models which treat the adjacency matrices as images.
<cit.> proposes a permutation-invariant score-matching objective along with a GNN architecture that can generate binary adjacency matrices via thresholding continuous matrices.
<cit.> extends this approach to handle node and edge attributes via stochastic differential equation framework <cit.>.
Following this line of research, we investigate the limiting factors of these models and propose our improvements that achieve state-of-the-art performances.
Besides continuous diffusion, discrete diffusion based graph generative models also emerged recently.
<cit.> proposes a permutation-invariant model based on discrete diffusion models <cit.>.
Compared to the continuous one, it is more natural to model discrete graph data, e.g., binary adjacency matrices and categorical node and/or edge attributes, using discrete diffusion.
Apart from the above diffusion based models, there also exist models based on generative adversarial networks (GANs) <cit.>, variational auto-encoders (VAEs) <cit.>, normalizing flows <cit.>,
and autoregressive models <cit.>.
Among them, autoregressive based models enjoy the best empirical performance, although they are not invariant to permutations.
§ LEARNING PERMUTATION INVARIANT DISTRIBUTION IS HARD
In this section, we first introduce the notation and background on diffusion models for graphs.
We highlight the learning difficulty via the analysis of the modes of the effective target distribution.
At last, we corroborate the learning difficulty findings with empirical investigations.
Notation.
A graph G = (V, E) consists of a node set V and an edge set E, with n nodes V = [n] ≜ {1,2,…, n} and edges E ⊆ V × V.
For ease of exposition, we focus on simple graphs, i.e., unweighted and undirected graphs containing no self-loops or multiple edges[Note that weighted graphs (i.e., real-valued adjacency matrices) are easier to handle for continuous diffusion models since the thresholding step is unnecessary. We also generalize our model to multigraphs with node and edge attributes (e.g., molecules) and will explain in detail in the experiment section and <ref>.].
Alternatively, a simple graph can be specified by an adjacency matrix A^π ∈ {0,1}^n×n.
We denote the ordering of the n nodes as π and emphasize that an adjacency matrix implicitly contains an ordering of nodes, i.e., the ordering of its rows/columns.
A permutation is a bijection between two node orderings π_1, π_2 and can be represented as a permutation matrix P_π_1 →π_2.
One can permute the nodes of a graph A^π_1 to A^π_2 via A^π_2 = P_π_1 →π_2 A^π_1 P_π_1 →π_2^⊤.
To simplify the notation, we will omit the superscripts and subscripts of node orderings unless explicitly specified.
We denote the set of all n! permutation matrices with n nodes as 𝒫_n.
There could exist some permutation matrix P that maps A to itself, i.e., A = P A P^⊤. Such a permutation is called a graph automorphism of A.
Given two graphs G_1, G_2 with n nodes and their adjacency matrices A_1 and A_2,
they are isomorphic if and only if there exists P ∈ 𝒫_n such that P A_1 P^⊤ = A_2.
We denote the isomorphism class of an adjacency matrix A as 𝔸_A ≜ {P A P^⊤ | P ∈ 𝒫_n}, i.e., the set of adjacency matrices isomorphic to A.
We observe samples, {A_i}_i=1^m ∼ p(A), drawn from the unknown data distribution of graphs p(A).
Graph generative models aim to learn a distribution p_θ(A) that closely approximates p(A).
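The node-relabelling operation used throughout this notation is simply a simultaneous row/column permutation of the adjacency matrix. A small numpy sketch, using the symbols defined above (the function name is ours), is given below.

```python
import numpy as np

def permute_adjacency(A, perm):
    """Apply a node permutation to adjacency matrix A, i.e. compute P A P^T,
    where perm[i] is the new index of node i."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=A.dtype)
    P[perm, np.arange(n)] = 1          # P[perm[i], i] = 1
    return P @ A @ P.T

# Two adjacency matrices A1, A2 are isomorphic iff A2 == permute_adjacency(A1, perm)
# for some permutation perm (checking all n! orderings is intractable in general).
```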
§.§ Background
Denoising Diffusion Models.
A denoising diffusion model consists of two parts: (1) a forward continuous-state Markov chain (pre-specified beforehand and not learnable) that gradually adds noise to observed data until it becomes standard normal noise and (2) a backward continuous-state Markov chain (learnable) that gradually denoises from the standard normal noise until it becomes observed data.
The transition probability of the backward chain is typically parameterized by a deep neural network.
Two consecutive transitions in the forward (backward) chain correspond to two noise levels that are increasing (decreasing).
One can have discrete-time (finite noise levels) <cit.> or continuous-time (infinite noise levels) <cit.> denoising diffusion models.
In the context of graphs, if we treat an adjacency matrix as an image, it is straightforward to apply these models.
In particular, considering one noise level σ, the loss of denoising diffusion models is
𝔼_p(A) q(Ã|A)[ ‖ D_θ(Ã, σ) - A ‖^2_F ],
where D_θ(Ã, σ) is the denoising network, Ã is the noisy data (graph), and
‖·‖_F is the Frobenius norm.
The forward transition probability is a Gaussian distribution q(Ã | A) ≜ 𝒩(Ã; A, σ^2 I).
Note that we add element-wise Gaussian noise to the adjacency matrix.
Based on the Tweedie's formula <cit.>, one can derive the optimal denoiser D_θ^*(Ã, σ) = Ã + σ^2 ∇_Ã log p_σ(Ã), where the noisy data distribution p_σ(Ã) ≜ ∫ p(A) q(Ã | A) dA appears.
Score-based Models.
Score-based models aim to learn the score function (the gradient of the log data density) of the data distribution p(A), denoted by s(A) ≜ ∇_A log p(A).
Since p(A) is unknown, one needs to leverage techniques such as denoising score matching (DSM) <cit.> to train a score estimation network s_θ.
Similar to diffusion models, we add element-wise Gaussian noise to data, i.e.,
q(Ã | A) = 𝒩(Ã; A, σ^2 I).
For a single noise level σ, the DSM loss is
𝔼_p(A) q(Ã|A)[ ‖ s_θ(Ã, σ) - ∇_Ã log q(Ã | A) ‖^2_F ],
where ∇_Ã log q(Ã | A) = -(Ã - A)/σ^2.
Minimizing <ref> almost surely
leads to an optimal score network s_θ^*(Ã, σ) that matches the score of the noisy data distribution, i.e., s_θ^*(Ã, σ) = ∇_Ã log p_σ(Ã) <cit.>.
Further, the denoising diffusion models and score-based models are essentially the same <cit.>.
The optimal denoiser of <ref> and the optimal score estimator of <ref> are inherently connected by D_θ^*(Ã, σ) = Ã + σ^2 s_θ^*(Ã, σ).
We use both terms interchangeably in what follows.
DSM Estimates the Score of GMMs.
Our training set consists of samples (adjacency matrices) {A_i}_i=1^m.
The corresponding empirical data distribution[With a slight abuse of notation, we refer to both the data distribution and its empirical version as p(A) since the data distribution is unknown and will not be often used.] is a mixture of Dirac delta distributions, i.e., p(A) ≜ 1/m ∑_i=1^m δ(A - A_i),
from which we can get the closed form of the empirical noisy data distribution p_σ(Ã) ≜ 1/m ∑_i=1^m 𝒩(Ã; A_i, σ^2 I).
p_σ(Ã) is a GMM with m components and uniform weighting coefficients.
The DSM objective in <ref> learns the score function of this GMM.
§.§ Learning Invariant Effective Target Distribution is Hard
In this section, we first disentangle the effective target distribution that the generative model truly aims to match, from the empirical data distribution.
We identify the learning difficulties for invariant models induced by factorially many modes, which are corroborated by our experiments.
Theoretical Analysis.
As shown in <ref>, the empirical graph distribution () may only assign a non-zero probability to a single observed adjacency matrix in its isomorphism class.
The ultimate goal of a generative model for graphs is to match this empirical distribution, which may be biased by the observed permutation.
Meanwhile, the target distribution that generative models are trained to match may differ from the empirical one, depending on whether the model design enforces permutation symmetry.
To clarify the subtlety, we define the effective target distribution as the closest distribution (e.g., measured in total variation distance) to the empirical data distribution that is achievable by the generative model, assuming sufficient data and model capacity.
Previous works <cit.> learn permutation invariant models p_θ(A) using permutation equivariant networks.
We argue that such invariant effective target distributions are hard to learn.
Consider a simple case where our training set contains only a single graph A_1, e.g., as in <ref>.
Even if we optimize the invariant model distribution p_θ(A) towards the empirical one p(A) to the optimum, they can never exactly match.
This is because if they did (i.e., p(A = A_1) = p_θ(A = A_1) = 1), p_θ(A) would also assign the same probability to the isomorphic graphs of A_1 due to the permutation invariance property, thus violating the sum-to-one rule.
Instead, the optimal p_θ(A) (i.e., the effective target distribution) assigns equal probability 1/|𝒜_A_1| to all graphs isomorphic to A_1.
Formally, given a training set of adjacency matrices {A_i}_i=1^m, one can construct the union of each graph's isomorphism class, denoted as 𝒟^* = 𝒜_A_1 ∪ 𝒜_A_2 ∪ ⋯ ∪ 𝒜_A_m.
The corresponding Dirac delta mixture distribution is p^*(A) ≔ 1/Z ∑_A^* ∈ 𝒟^* δ(A - A^*), where Z = |𝒟^*| = O(n!m) is the normalizing constant.
Note that Z = n!m may not be achievable due to graph automorphism.
Lemma (effective target distribution).
Let ℱ denote the set of all discrete permutation-invariant distributions.
The closest distributions in ℱ to p, measured by total variation distance, have at least Ω(n!) modes.
If, in addition, we restrict ℱ to the set of permutation-invariant distributions such that q(A_i) = q(A_j) > 0 for all matrices in the training set {A_l}_l=1^m, then the closest distribution is given by argmin_q ∈ ℱ TV(q, p) = p^*.
Under mild conditions, p^*(A), with its O(n!m) modes, becomes the effective target distribution; this is the case for the existing equivariant networks that learn invariant model distributions (see <ref> for more details).
In contrast, if we employ a non-equivariant network (i.e., the underlying model distribution is not invariant), the effective target distribution
would be p(A), which has only O(m) modes.
Arguably, learning a permutation invariant distribution is much harder than learning a non-invariant one, as the number of modes of the effective target distribution is often much higher.
Empirical Investigation.
In the training data {A_i}_i=1^m, we usually observe only one adjacency matrix out of each isomorphism class.
One can construct 𝒟^* from 𝒟 by applying all n! permutations.
We define a trade-off between them, dubbed the k-permuted empirical distribution: p^k(A) ≔ 1/(mk) ∑_i=1^m ∑_j=1^k δ(A - P_j A_i P_j^⊤), where P_1, …, P_k are k distinct permutation matrices.
p^k has O(km) modes, which is governed by k.
With properly chosen permutation matrices,
p^k = p when k=1 and p^k ≈ p^* when k=n! (they are exactly the same if there are no non-trivial automorphisms).
Subsequently, we let the k-permuted distribution p^k serve as the effective target distribution of a diffusion model and study the effect of k (i.e., the number of modes in the effective target distribution) on the empirical performance.
As discussed previously, an equivariant network matches the invariant target p^* with O(n!m) modes, as in the case of k=n!.
For k < n!, one must resort to a non-equivariant network that learns a non-invariant empirical distribution.
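In practice, the k-permuted empirical distribution can be realized by simple data augmentation: each training adjacency matrix is paired with k fixed permuted copies of itself. A minimal numpy sketch follows; the function name and the independent sampling of permutations are our own simplifications (the permutations are not explicitly forced to be distinct).

    import numpy as np

    def k_permuted_dataset(adj_list, k, seed=0):
        """Pair each adjacency matrix with k randomly permuted copies.

        adj_list: list of (n, n) numpy adjacency matrices.
        Returns k * len(adj_list) matrices drawn from the k-permuted
        empirical distribution (permutations are sampled independently).
        """
        rng = np.random.default_rng(seed)
        out = []
        for A in adj_list:
            n = A.shape[0]
            for _ in range(k):
                perm = rng.permutation(n)              # a random permutation P
                out.append(A[np.ix_(perm, perm)])      # P A P^T via re-indexing
        return out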
We conduct experiments on a toy dataset to assess empirical performance, where we ensure training is converged for all models.
The dataset consists of 10 random regular graphs with degrees in [2, 11] and each of them has 16 nodes.
The k values are selected in the range of [1, 500].
We use two equivariant networks as baselines to learn the invariant target p^*: DiGress <cit.> and PPGN <cit.>.
Notably, the PPGN is a 3WL-discriminative GNN that entails rich expressiveness.
For the non-equivariant networks matching p^k with k < n!, we add index-based positional embeddings to PPGN and compare it with our non-equivariant SwinGNN network (detailed in <ref>).
We use recall as the metric, defined as the proportion of generated graphs that are isomorphic to any training graph.
Computing recall requires isomorphism testing and is invariant to permutation, which makes it a fair metric for all models.
Please see <ref> for more details on datasets and experiment setup.
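A simplified version of this recall metric can be sketched with networkx as a stand-in for the dedicated isomorphism-testing package used in the experiments:

    import networkx as nx

    def recall(generated_adjs, training_adjs):
        """Fraction of generated graphs isomorphic to at least one training graph."""
        train_graphs = [nx.from_numpy_array(A) for A in training_adjs]
        hits = 0
        for A in generated_adjs:
            G = nx.from_numpy_array(A)
            if any(nx.is_isomorphic(G, H) for H in train_graphs):
                hits += 1
        return hits / len(generated_adjs)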
As shown in <ref>, the permutation-invariant diffusion models DiGress and PPGN fail to achieve high recall.
In contrast, the non-invariant models perform exceptionally well when k is small, i.e., when the target distributions involve only modest permutation-based data augmentation.
As k goes up, the sample quality of the non-invariant models drops significantly, indicating the learning difficulties incurred by more modes.
Notably, the invariant diffusion models, whether employing discrete (DiGress) or continuous (PPGN) Markov chains, perform worse than the non-invariant models.
We provide sample complexity analysis for the non-invariant models in <ref>.
Our empirical investigation shows that non-invariant models successfully match non-invariant target distributions, leading to stronger empirical performance.
Building upon these findings, we present our novel non-invariant model, designed to produce graph data of exceptional quality.
§ METHOD
Motivated by the hardness results of learning permutation invariant models, we introduce SwinGNN, a novel non-invariant diffusion model that can generate large-scale graphs with high quality.
§.§ Efficient High-order Graph Transformer
A continuous denoising network for graph data takes a noisy adjacency matrix and its noise level σ as input and outputs a denoised adjacency matrix.
Unlike in typical graph representation learning, our model does not take any graph topology (e.g., binary adjacency matrices) as input.
We argue that the interaction among edge representations (i.e., the entries of the noisy adjacency matrix) is critical.
Each entry of the true score function s(Ã) = ∂ log p_σ(Ã)/∂Ã depends on the whole input matrix Ã.
Therefore, the message passing mechanism must allow edge entries at different positions to interact with each other.
To this end, we propose a graph transformer as the denoising network to better capture the edge-to-edge interplay.
Our model treats each edge representation in the input matrix as a token, applies transformers with self-attention <cit.> to update the token representations, and outputs final edge representations for denoising.
The rationale is similar to k-order GNNs <cit.> or k-WL GNNs <cit.> that improve network expressivity by using k-tuples of nodes (k=2 in our case).
The k-WL discrimination power characterizes the isomorphism testing capability, which is equivalent to the function approximation capacity <cit.>.
However, a naive extension of the pure transformer results in poor scalability: for a graph with n nodes, we have O(n^2) edge tokens, and the self-attention computation complexity is O(n^4).
Approximating Edge-to-Edge Attention.
To reduce the computation complexity of the high-order graph transformer, we apply window-based partitioning to restrict self-attention to local representations.
The model splits the entries into local windows of size M × M in the grid map and computes self-attention for entries belonging to the same window in parallel.
The number of tokens inside each window is M^2, and the self-attention complexity becomes O(n^2M^2), which is considerably smaller than the original O(n^4) with some reasonable M.
The local self-attention window comes at the cost of blocking message passing between tokens belonging to different windows.
To better model cross-window interactions, we adopt the shifted window technique <cit.> to create a non-regular partitioning window as shown in <ref>.
We alternate between regular and shifted window partitioning to approximate the dense edge-to-edge interaction.
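A minimal sketch of the window partitioning step is shown below; padding, the shifted variant, and the attention layers themselves are omitted, and the function name is illustrative.

    import torch

    def window_partition(edge_tokens, M):
        """Split an (n, n, c) edge-token tensor into (num_windows, M*M, c) groups.

        Self-attention is then computed inside each group, giving O(n^2 M^2)
        cost instead of O(n^4) for dense edge-to-edge attention.  Assumes n is
        divisible by M; the shifted variant is obtained by rolling the tensor
        by M // 2 along both spatial axes before partitioning.
        """
        n, _, c = edge_tokens.shape
        x = edge_tokens.reshape(n // M, M, n // M, M, c)
        x = x.permute(0, 2, 1, 3, 4).reshape(-1, M * M, c)
        return x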
Multi-scale Edge Representation Learning.
The window self-attention complexity O(n^2M^2) is dependent on n^2, which hinders scaling for large graphs.
To further reduce memory footprint and better capture long-range interaction, we apply channel mixing-based downsampling and upsampling layers to construct hierarchical graph representations.
As shown in <ref>, in the downsampling layer, we split the edge representation tensor into four half-sized tensors by index parity (odd or even) along rows and columns, concatenate them along the channel dimension, and update the tokens of the downsized tensor independently using an MLP.
The upsampling stage proceeds in the opposite direction, after a skip connection that concatenates same-sized token tensors along the channel dimension.
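A simplified sketch of the parity-based downsampling layer is given below; the exact channel widths and MLP design are assumptions for illustration.

    import torch
    import torch.nn as nn

    class EdgeDownsample(nn.Module):
        """Halve the spatial size of a (B, n, n, c) edge tensor via index parity."""

        def __init__(self, c):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(4 * c, 2 * c), nn.GELU(),
                                     nn.Linear(2 * c, 2 * c))

        def forward(self, x):
            # split entries by row/column index parity (assumes even n)
            x00 = x[:, 0::2, 0::2, :]
            x01 = x[:, 0::2, 1::2, :]
            x10 = x[:, 1::2, 0::2, :]
            x11 = x[:, 1::2, 1::2, :]
            x = torch.cat([x00, x01, x10, x11], dim=-1)   # (B, n/2, n/2, 4c)
            return self.mlp(x)                            # (B, n/2, n/2, 2c)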
Putting things together, we propose our model, dubbed SwinGNN, that efficiently learns edge-wise representations via approximating dense edge-to-edge interaction.
Our proposed SwinGNN serves as a high-order graph transformer specifically designed to handle edge representations, distinguishing it from the vision SwinTransformer that learns image-level representations.
Refer to <ref> and <ref> for experimental results showcasing our model's superior performance.
§.§ Training and Sampling with SDE
We construct the noisy data distribution through stochastic differential equation (SDE) modeling with a continuous time variable t ∈ [0, T].
Let σ evolve with t; the noisy distribution then reads
p_σ(t)(Ã) = 1/m ∑_i=1^m 𝒩(Ã; A_i, σ^2(t)), A_i ∈ 𝒟.
In other words, p_σ(0) is the Dirac delta data distribution, and p_σ(T) is close to the pure Gaussian noise (sampling prior distribution).
The general forward and reverse process SDEs <cit.> are defined by:
dA_+ = f_d(A, t) dt + g_d(t) dW,
dA_- = [f_d(A, t) - g_d(t)^2 ∇_A log p_t(A)] dt + g_d(t) dW,
where dA_+ and dA_- denote the forward and reverse processes respectively, f_d(A, t) and g_d(t) are the drift and diffusion coefficients, and W is the standard Wiener process.
We solve for f_d(A, t) and g_d(t) so that the noisy distribution p_t in <ref> satisfies <ref>, and the solution is given by
f_d(A, t) = 0, g_d(t) = √(2σ̇(t)σ(t)),
which can be found in Eq. (9) of <cit.> or Eq. (6) of <cit.>.
Noise Scheduling and Network Preconditioning.
We select the time-varying noise strength σ(t) to be linear in t, i.e., σ(t) = t, as in <cit.>, which turns the SDEs into
dA_+ = √(2t) dW, dA_- = -2t ∇_A log p_t(A) dt + √(2t) dW.
Moreover, we adopt network preconditioning for improved training dynamics following the Elucidating Diffusion Model (EDM) <cit.>.
First, instead of training DSM with <ref>, we use its equivalent form in Eq. (<ref>), and parameterize the denoising function D_θ with noise (time) dependent scaling:
D_θ(Ã, σ) = c_s(σ) Ã + c_o(σ) F_θ(c_i(σ) Ã, c_n(σ)),
where F_θ is the actual neural network and the other coefficients are summarized in <ref>.
During implementation, D_θ is a wrapper with preconditioning operations, and we construct F_θ using our SwinGNN model in <ref>.
Second, we sample σ with ln(σ) ∼𝒩(P_mean, P_std^2) to select noise (time) stochastically in a broad range to draw training samples.
Third, we apply the weighting coefficients λ(σ) = 1/c_o(σ)^2 on the denoising objective to improve training stability.
The overall training objective is
𝔼_σ, A, Ã [ λ(σ) ‖ c_s(σ) Ã + c_o(σ) F_θ(c_i(σ) Ã, c_n(σ)) - A ‖^2_F ].
The target of F_θ is (A - c_s(σ) Ã)/c_o(σ), an interpolation between the (negated) pure Gaussian noise (when σ → 0) and the clean sample A (when σ → ∞), downscaled by c_o(σ).
These measures altogether ease the training of F_θ by making the network inputs and targets have unit variance.
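For reference, a sketch of the preconditioned wrapper is shown below; the exact coefficient formulas follow the published EDM choices, and sigma_data is an assumed data-std constant, since the coefficient table is not reproduced here.

    import torch

    def preconditioned_denoiser(F_theta, A_noisy, sigma, sigma_data=0.5):
        """EDM-style preconditioning wrapper D_theta (a sketch).

        The skip/output/input/noise scalings c_s, c_o, c_i, c_n follow the
        published EDM choices; sigma_data is an assumed constant.
        """
        c_s = sigma_data ** 2 / (sigma ** 2 + sigma_data ** 2)
        c_o = sigma * sigma_data / (sigma ** 2 + sigma_data ** 2) ** 0.5
        c_i = 1.0 / (sigma ** 2 + sigma_data ** 2) ** 0.5
        c_n = 0.25 * torch.log(torch.as_tensor(sigma, dtype=torch.float32))
        return c_s * A_noisy + c_o * F_theta(c_i * A_noisy, c_n)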
Self-Conditioning.
We also apply self-conditioning <cit.> to let D_θ depend on a sample created by itself.
We change the denoising function to D_θ(Ã, Â, σ) (likewise F_θ(Ã, Â, σ)), where Â is a sample previously created by D_θ.
In the reverse process, D_θ iteratively denoises corrupted samples, and it can access its own previously generated sample after the first step.
This is implemented by concatenating Ã and Â at the first layer of F_θ.
During training, given a noisy sample Ã, with 50% probability we set Â = 0; otherwise, we first obtain Â = D_θ(Ã, 0, σ) and then use it as the self-conditioning signal.
Note that we disable gradient computation when obtaining Â, so the extra memory overhead is negligible.
Stochastic Sampler with 2^nd-order Correction.
Our sampler is formally presented in Alg. <ref>.
It is based on the 2^nd-order sampler in <cit.>.
At the i-th iteration, the sampler adds noise to A^(i), moving slightly forward to time t̂_i, evaluates dA/dt at t̂_i, and moves backward following the reverse process in <ref> to obtain A^(i+1).
The sampler then evaluates dA/dt at time t_i+1 and corrects A^(i+1) using Heun's integration <cit.>.
Also, we keep track of the generated samples Â^(i)_sc for model self-conditioning.
We investigate the effects of the above modeling techniques in <ref>.
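A simplified, deterministic variant of this sampler (Heun's 2nd-order correction only, with the stochastic noise-injection step and self-conditioning omitted) can be sketched as follows; the time discretization follows the EDM convention and all names are illustrative.

    import torch

    def heun_sampler(D, n, steps=32, sigma_min=0.002, sigma_max=80.0, rho=7.0):
        """Deterministic 2nd-order (Heun) sampler sketch for sigma(t) = t.

        D(A, t) is the preconditioned denoiser; with sigma(t) = t the score is
        (D(A, t) - A) / t**2, so the probability-flow derivative is
        dA/dt = (A - D(A, t)) / t.
        """
        i = torch.arange(steps)
        t = (sigma_max ** (1 / rho) + i / (steps - 1)
             * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
        t = torch.cat([t, torch.zeros(1)])               # t_0 > ... > t_N = 0
        A = torch.randn(n, n) * t[0]
        for k in range(steps):
            d_cur = (A - D(A, t[k])) / t[k]              # derivative at t_k
            A_next = A + (t[k + 1] - t[k]) * d_cur       # Euler step
            if t[k + 1] > 0:                             # 2nd-order correction
                d_next = (A_next - D(A_next, t[k + 1])) / t[k + 1]
                A_next = A + (t[k + 1] - t[k]) * 0.5 * (d_cur + d_next)
            A = A_next
        return A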
§.§ Permutation Invariant Sampling
Finally, we present a simple trick to achieve permutation invariance during sampling.
Let A ∼ p_θ(A) be a random adjacency matrix drawn from the model distribution following Alg. <ref>.
As our architecture does not preserve permutation equivariance, p_θ is not permutation invariant, i.e., p_θ(A) = p_θ(P A P^⊤), ∀ P ∈ 𝒫_n is not guaranteed.
We have the following lemma to construct a permutation invariant sampling distribution.
Lemma (permuted sampling).
Let A be a random adjacency matrix distributed according to any graph distribution on n vertices. Let P_r ∼ Unif(𝒫_n) be uniform over the set of permutation matrices. Then, the induced distribution of the random matrix A_r = P_r A P_r^⊤, denoted as q_θ(A_r), is permutation invariant, i.e., q_θ(A_r) = q_θ(P A_r P^⊤), ∀ P ∈ 𝒫_n.
This trick is applicable to all types of generative models.
Note that the random permutation does not go beyond the isomorphism class.
Although q_θ is invariant, it captures the same set of isomorphism classes as p_θ.
Graphs generated from q_θ must have isomorphic counterparts that p_θ could generate.
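In implementation, this trick amounts to applying one uniformly random node relabeling to each generated adjacency matrix; a minimal sketch (function name is ours):

    import torch

    def permute_sample(A):
        """Apply one uniformly random node relabeling to a sampled adjacency matrix."""
        n = A.shape[-1]
        perm = torch.randperm(n)
        return A[..., perm, :][..., :, perm]   # P A P^T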
§ EXPERIMENTS
We now empirically verify the effectiveness of our model on synthetic and real-world graph datasets including molecule generation with node and edge attributes.
Experiment Setup.
We consider the following synthetic and real-world graph datasets:
(1) Ego-small: 200 small ego graphs from Citeseer dataset <cit.>,
(2) Community-small: 100 random graphs generated by Erdős–Rényi model <cit.> consisting of two equal-sized communities,
(3) Grid: 100 random 2D grid graphs with |V| ∈ [100, 400],
(4) DD protein dataset <cit.>,
(5) QM9 dataset <cit.>,
(6) ZINC250k dataset <cit.>.
Please see <ref> for more details on datasets and evaluation protocols.
Baselines.
We compare our method with state-of-the-art graph generative models.
For non-molecule experiments, we include the autoregressive model GRAN <cit.>, the VAE-based GraphVAE-MM <cit.> and the GAN-based SPECTRE <cit.> as baselines.
We also compare with continuous diffusion models with permutation-equivariant backbones, i.e., EDP-GNN <cit.> and GDSS <cit.>, and the discrete diffusion model DiGress <cit.>.
We additionally compare to the PPGN networks used in <ref>.
Moreover, to validate the effectiveness of our SwinGNN, we also compare it with a recent UNet <cit.> and the vision SwinTransformer <cit.> backbones.
Specifically, we employ the UperNet on top of the SwinTransformer for the denoising task (refer to <ref> for more details).
We re-run all these baselines with the same data split for a fair comparison.
For molecule generation, we compare with GDSS <cit.>, DiGress <cit.>, GraphAF <cit.> and GraphDF <cit.>.
Implementation Details.
Besides molecule generation, we try two variants of our model: the standard SwinGNN with 60-dimensional tokens and the large SwinGNN-L with 96-dimensional tokens.
For molecule generation, we compare different ways to encode node and edge attributes using scalar, binary-bit, or one-hot encodings.
The same training and sampling methods as in <ref> are used for the UNet baseline.
We use self-conditioning and exponential moving average (EMA) to train SwinGNN variants in all experiments.
We leave more implementation details in <ref>.
§.§ Synthetic Datasets
<ref> showcases samples generated by various models on ego-small, community-small, and grid datasets.
The quality of our generated samples is high and comparable to that of auto-regressive models.
Equivariant diffusion models like GDSS can capture structural information for small graphs but fail on larger and more complicated graphs (e.g., grid).
Quantitative maximum mean discrepancy (MMD) results are presented in <ref>.
Our SwinGNN consistently outperforms the baselines across all datasets by several orders of magnitude, and can generate large graphs with around 500 nodes (grid).
Notably, our tailored SwinGNN backbone exhibits superior performance compared to UNet.
§.§ Real-world Datasets
Protein Dataset.
We conduct experiments on the DD protein dataset.
As shown in <ref>, our model can generate samples visually similar to those from the dataset, while previous diffusion models cannot learn the topology well.
Tab. <ref> demonstrates that our model improves over the baselines by several orders of magnitude in the MMD metrics.
Molecule Datasets.
Our SwinGNN generates molecule graphs with node and edge features, and the details for extending SwinGNN for such tasks are provided in <ref>.
We evaluate our model against baselines on QM9 and ZINC250k using metrics such as validity without correction, uniqueness, Fréchet ChemNet Distance (FCD) <cit.>, and neighborhood subgraph pairwise distance kernel (NSPDK) MMD <cit.>.
Notably, our models exhibit substantial improvements in FCD and NSPDK metrics, surpassing the baselines by several orders of magnitude.
§.§ Ablation Study
Ablation results on the grid dataset (Deg. / Clus. / Orbit. MMD, lower is better); rows correspond to the EDM or DDPM framework with different self-conditioning and EMA settings:
EDM: 1.91e-7 / 0.00 / 6.88e-6
EDM: 1.14e-5 / 2.15e-5 / 1.58e-5
EDM: 5.12e-5 / 2.43e-5 / 3.38e-5
EDM: 5.90e-5 / 2.71e-5 / 9.24e-5
DDPM: 2.89e-3 / 1.36e-4 / 3.70e-3
DDPM: 4.01e-2 / 8.62e-2 / 1.05e-1
We perform ablations on grid dataset to validate the techniques in <ref>.
Following EDM <cit.>, we incorporate SDE modeling and adopt the objective in <ref>, which we compare against the vanilla DDPM <cit.>.
The EDM framework provides a significant boost in performance.
Also, EMA consistently improves the model regardless of the other techniques, though its effect is smaller than that of the EDM framework.
Self-conditioning <cit.> contributes minimally on its own.
Nevertheless, when combined with EMA, it improves the model significantly and enables the model to achieve its best results.
§ CONCLUSION
We investigate, from theoretical and empirical perspectives, the hardness of learning permutation-invariant denoising diffusion graph generative models, which we attribute to the multi-modal effective target distribution.
We propose the non-invariant SwinGNN, which efficiently performs edge-to-edge message passing and can generate large graphs with high quality.
Experiments show that our model achieves state-of-the-art performance on a wide range of datasets.
In the future, it would be promising to generalize our model to the discrete denoising diffusion framework, which is more memory-efficient.
§ ACKNOWLEDGMENTS AND DISCLOSURE OF FUNDING
This work was funded, in part, by NSERC DG Grants (No. RGPIN-2022-04636 and No. RGPIN-2019-05448), the NSERC Collaborative Research and Development Grant (No. CRDPJ 543676-19), the Vector Institute for AI, Canada CIFAR AI Chair, and Oracle Cloud credits. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, companies sponsoring the Vector Institute, Advanced Research Computing at the University of British Columbia, the Digital Research Alliance of Canada, and the Oracle for Research program.
§ PROOFS AND ADDITIONAL THEORETICAL ANALYSIS
§.§ Proof of the Effective Target Distribution Lemma
Recall the definitions of p and p^* in our context:
p(A) = 1/m ∑_i=1^m δ(A - A_i), p^*(A) = 1/Z ∑_A^*_i ∈ 𝒟^* δ(A - A^*_i).
In other words, p and p^* are both discrete uniform distributions.
We use 𝒟 and 𝒟^* = 𝒜_A_1 ∪ 𝒜_A_2 ∪ ⋯ ∪ 𝒜_A_m to denote the supports of p and p^* respectively.
Let TV^* ≔ TV(p^*, p) be the gold-standard total variation distance that we aim to outperform.
Let q ∈ ℱ without loss of generality.
As q is permutation invariant, it must assign equal probability to graphs in the same isomorphism class.
We denote the support of q by 𝒬 = {𝒜_q_i}_i=1^Φ.
q(A) = ∑_i=1^Φ ρ_i ∑_A_j ∈ 𝒜_q_i δ(A - A_j), where ∑_i=1^Φ ρ_i |𝒜_q_i| = 1, ρ_i > 0, and Φ > 0 is the number of isomorphism classes contained in q.
<ref> summarizes the possibilities of TV used in our proof.
Proof of Ω(n!) Modes.
This first part of the lemma imposes no constraints on : we do not require distributions in to be uniform on .
We first prove two helpful claims and then use proof by contradiction to prove our result on Ω(n!) modes.
Claim 1: The maximal TV(q, p) is achieved when 𝒬 and 𝒟^* are disjoint (so are 𝒬 and 𝒟).
Proof of Claim 1: Without loss of generality, the TV distance is:
TV(q, ) = ∑_i=1^Φ(∑_∈_q_i∩|ρ_i - 1/|||
+ ∑_∈_q_i , ∉ρ_i)
+ ∑_∉𝒬 , ∈1/||
< ∑_i=1^Φ(∑_∈_q_i∩ (ρ_i + 1/||)
+ ∑_∈_q_i , ∉ρ_i )
+ ∑_∉𝒬 , ∈1/|| triangle inequality
= ∑_i=1^Φ(∑_∈_q_i∩1/|| + ∑_∈_q_iρ_i )
+ ∑_∉𝒬 , ∈1/||
= ∑_i=1^Φ∑_∈_q_iρ_i + ∑_∈1/|| = 2
The triangle inequality is strict as ρ_i and 1/|| are both strictly positive, and |ρ_i - 1/||| < ρ_i + 1/|| always holds.
Namely, TV(q, ) is bounded by 2.
If 𝒬 and ^* are disjoint, TV(q, ) = ∑_i=1^Φ1/|_q_i|×ρ_i + 1/||× || = 2, meaning that the maximum TV distance is achieved when there is no intersection between 𝒬 and ^*.
Claim 2: It is always possible to have a q ∈ ℱ with TV(q, p) < 2 by allowing 𝒬 and 𝒟^* to intersect on some isomorphism classes {𝒜_q_i}_i∈ν.
Notice 𝒜_q_i ∩ 𝒟 ≠ ∅, ∀ i ∈ ν as per isomorphism.
Proof of Claim 2:
TV(q, ) = ∑_i=1^Φ(∑_∈_q_i∩|ρ_i - 1/|||
+ ∑_∈_q_i , ∉ρ_i)
+ ∑_∉𝒬 , ∈1/||
= ∑_i∈ν(∑_∈_q_i∩|ρ_i - 1/|||
+ ∑_∈_q_i , ∉ρ_i)
+ ∑_i ∉ν∑_∈_q_iρ_i
+ ∑_∉𝒬 , ∈1/||
= ∑_i∈ν( |_q_i∩| |ρ_i - 1/|||
+ (_q_i - |_q_i∩|) ρ_i
)
+ ∑_i ∉νρ_i |_q_i|
+ ∑_∉𝒬 , ∈1/||
= ∑_i∈ν |_q_i∩| (|ρ_i - 1/|||
- ρ_i
)
+ ∑_i =1^Φρ_i |_q_i|^1
+ ∑_∈1/||^1
- ∑_∈𝒬∩1/||
= ∑_i∈ν |_q_i∩| (|ρ_i - 1/|||
- ρ_i
)
- ∑_i =1^Φ |_q_i∩| 1/||
+ 2
= ∑_i∈ν |_q_i∩| (|ρ_i - 1/|||
- ρ_i - 1/||)^<0
- ∑_i ∉ν|_q_i∩| 1/||^>0
+ 2
< 2
The last inequality is strict as ρ_i, 1/|| and |_q_i∩| are all strictly positive for i ∈ν.
Claim 3: Let q^* ∈ argmin_q ∈ ℱ TV(q, p) be a minimizer whose support is 𝒬^*.
𝒬^* and 𝒟^* must not be disjoint, i.e., 𝒬^* intersects 𝒟^* in at least one isomorphism class of 𝒟^*.
Proof of Claim 3:
Assume 𝒬^* and 𝒟^* have no intersection; then min_q ∈ ℱ TV(q, p) = TV(q^*, p) = 2, which is validated in Claim 1.
From Claim 2, we know there must exist another q^† ∈ ℱ with TV(q^†, p) < 2.
Therefore, min_q ∈ ℱ TV(q, p) ≤ TV(q^†, p) < 2, which contradicts min_q ∈ ℱ TV(q, p) = 2.
From Claim 3, we know the support of the optimum q^* must contain at least one isomorphism class in 𝒟^*, whose size is up to O(n!).
Namely, |𝒬^*| has a lower bound Ω(n!) that does not depend on the size of the empirical data distribution |𝒟|.
Proof of the Optimality of p^* when q is Discrete Uniform on a Superset of 𝒟.
Now we proceed to prove the second part of our lemma, which is a special case in which ℱ has some constraints w.r.t. 𝒟.
Assume 𝒟 = {A_i}_i=1^m consists of m graphs (adjacency matrices) belonging to k isomorphic equivalence classes, where m ≥ k due to potentially isomorphic graphs.
Subsequently, let 𝒟^* have l adjacency matrices over all k equivalence classes (l ≥ m), i.e., 𝒟^* = ∪_i=1^m 𝒜_A_i = ∪_i=1^k 𝒜_c_i, where
{𝒜_c_i}_i=1^k denote the k equivalence classes.
Further, let q_γ ∈ ℱ and let 𝒬_γ ⊇ 𝒟 be its support.
That is,
q_γ = 1/|𝒬_γ| ∑_A_i ∈ 𝒬_γ δ(A - A_i),
where |𝒬_γ| ≥ m.
The total variation distance is:
TV^* = |1/m - 1/l| × m + 1/l (l-m) = 2(1 - m/l),
TV(q_γ, ) = |1/|𝒬_γ| - 1/m| × m + ||𝒬_γ| - m/|𝒬_γ|| = 2(1 - m/|𝒬_γ|).
To minimize TV(q_γ, ) over q_γ, we need to minimize |𝒬_γ|.
Since ⊆𝒬_γ and q_γ is permutation invariant, the smallest |𝒬_γ| would be |∪_i=1^m__i| = |∪_i=1^k _c_i| = |^*| = l .
Therefore, we conclude that
min_q_γ∈ TV(q_γ, ) = TV^* = 2(1 - m/l), and
_q_γ∈ TV(q_γ, ) = ^*.
Justification on the Constraints of to Guarantee the Optimality of ^*.
In the end, we justify the reason why has to be discrete uniform on a superset of (, assign equal probability to each element in ) for the second part of the lemma to hold.
We list all possible conditions in the table below and give concrete counterexamples for the cases where the optimality of ^* is no longer true, , ^* ∉_q ∈ TV(q, ) or equivalently min_q ∈ TV(q, ) < TV^* = TV(^*, ).
For ease of proof, we further divide into two categories based on the existence of isomorphic graphs.
Consider graphs with n=4 nodes,
let the support of ^* (, ^*) be adjacency matrices belonging to two isomorphism classes _a and _b, where |_a| = 24, |_b| = 6.
Namely, _a has no automorphism (, complete disconnected graphs), and the automorphism number of _b is 4 (, star graphs).
Let _a {_1, _2, ⋯, _24} and _b {_25, ⋯, _32}.
Case 1: Let = {_23, _24, _25} with isomorphic graphs.
Let q_α() = ρ_a ∑_i=1^24δ( - _i) + ρ_b ∑_i=25^32δ( - _i) be a mixture of Dirac delta distributions, where ρ_a, ρ_b > 0.
Due to normalization, ∑_q_α() = 24ρ_a + 6 ρ_b = 1 or ρ_b = 1 - 24 ρ_a/6.
We can tweak ρ_a, ρ_b so that q_α is not necessarily uniform over its support, but q_α is permutation invariant by design, , q_α(_1) = q_α(_2) = ⋯ = q_α(_24) and q_α(_25) = q_α(_26) = ⋯ = q_α(_32).
The TV is:
TV^* = |1/3 - 1/32| × 3 + 1/32× 29= 29/16, TV(q_α, ) = |ρ_a - 1/3| × 2 + |ρ_b - 1/3| + 22ρ_a + 5ρ_b = 5/3 + 4ρ_a.
Setting TV(q_α, ) = 5/3 + 4ρ_a < TV^* = 29/16, we have: ρ_a < 7/48.
Let ρ_a = 1/48, ρ_b = 1/12.
We now have:
min_q ∈ TV(q, ) ≤ TV(q_α, ) < TV^*,
and ^* is not a minimizer of min_q ∈ TV(q, ).
Case 2: Let = {_24, _25} without isomorphic graphs.
Similarly, let q_α() = ρ_a ∑_i=1^24δ( - _i) + ρ_b ∑_i=25^32δ( - _i) be a mixture of Dirac delta distributions.
The TV is:
TV^* = |1/2 - 1/32| × 2 + 1/32× 30= 15/8, TV(q_α, ) = |ρ_a - 1/2| + |ρ_b - 1/2| + 23ρ_a + 5ρ_b = 5/3 + 6ρ_a.
Setting TV(q_α, ) = 5/3 + 6ρ_a < TV^* = 15/8, we have: ρ_a < 5/144.
Let ρ_a = 1/48, ρ_b = 1/12.
Again, we have:
min_q ∈ TV(q, ) ≤ TV(q_α, ) < TV^*,
and ^* is not a minimizer of min_q ∈ TV(q, ).
Case 3:
Let = {_23, _24, _25} with isomorphic graphs.
Let q_β be a uniform discrete distribution on _b.
q_β is permutation invariant (thus q_β∈) whose support does not contain .
The TV is:
TV^* = |1/3 - 1/32| × 3 + 1/32× 29= 29/16, TV(q_β, ) = |1/6 - 1/3| + 1/6× 5 + 1/3× 2 = 5/3.
So,
min_q ∈^* TV(q, ) ≤ TV(q_β, ) < TV^*.
^* is not a minimizer of min_q ∈^* TV(q, ).
Case 4: Let = {_24, _25} without isomorphic graphs.
We use the same q_β as above.
The TV is:
TV^* = |1/2 - 1/32| × 2 + 1/32× 30= 15/8, TV(q_β, ) = |1/6 - 1/2| + 1/6× 5 + 1/2× 1 = 5/3.
Again,
min_q ∈^* TV(q, ) ≤ TV(q_β, ) < TV^*.
^* is not a minimizer of min_q ∈^* TV(q, ).
In fact, in case 3 and 4, ^* ∉, and by definition, ^* cannot be a minimizer of min_q ∈ TV(q, ).
To see that, the support of ^* must contain ^* (a superset of ), while the support of any q ∈ is not a superset of ^* as per conditions in case 3 and 4.
§.§ Proof of the Permuted Sampling Lemma
Let ℙ (·) denote the probability of a random variable.
q_θ(_r) = ∫ q_θ(_r | ) p_θ() ddefine the random permutation as conditional
=∫ℙ(P_→_r) p_θ() dP_→_r satifies P_→_r P_→_r ^T = _r
=∑_∈__rℙ(P_→_r) p_θ() permutation cannot go beyond isomorphism class
Let us define the set of `primitive graphs' that could be generated by p_θ: 𝒞(__r) = { | p_θ() > 0, ∈__r} that corresponds to the isomorphism class __r.
Let Aut(·) denote the automorphism number.
Then, we have:
q_θ(_r) =∑_∈__rℙ(P_→_r) p_θ()
= ∑_∈𝒞(__r)ℙ(P_→_r) p_θ()
= ∑__r = , ∈𝒞(__r)ℙ(P_→_r) p_θ() + ∑__r ≠, ∈𝒞(__r)ℙ(P_→_r) p_θ()
= ∑__r = , ∈𝒞(__r)Aut(_r)/n! p_θ() + ∑__r ≠, ∈𝒞(__r)|{P: P_→_r P_→_r ^T = _r}|/n! p_θ()
= ∑__r = , ∈𝒞(__r)Aut(_r)/n! p_θ() + ∑__r ≠, ∈𝒞(__r)Aut(_r)/n! p_θ()
= ∑_∈𝒞(__r)Aut(_r)/n! p_θ()
For the case of _r =, ℙ(P_→_r) is the probability of obtaining automorphic permutation matrices, which is Aut(_r)/n! by definition.
As for _r ≠, we need to compute the size of Ω_P = {P: P_→_r P_→_r ^T = _r, _r ≠}, , how many permutation matrices there are to transform into _r.
The orbit-stabilizer theorem states that there are n!/Aut() many distinct adjacency matrices in _ (size of permutation group orbit).
For any , we could divide all the permutation matrices in _n into ρ = n!/Aut() subgroups, where each group i ∈{1, 2, ⋯, ρ} transforms to a new adjacency matrix _i.
One of the subgroups transforms into itself (, automorphism), and the rest ρ-1 subgroups transform into distinct adjacency matrices.
As _r ≠, the size of Ω_P is equal to the size of one such subgroup, which is Aut().
Therefore, we could aggregate the two cases of _r = and _r ≠.
For any _r' and _r that are isomorphic to each other, we have:
q_θ(_r') = ∑_∈𝒞(__r')Aut(_r')/n! p_θ() = ∑_∈𝒞(__r)Aut(_r)/n! p_θ() = q_θ(_r)
The second equality is based on Aut(_r') = Aut(_r), __r' = __r, and 𝒞(__r') = 𝒞(__r), which are straightforward facts for isomorphic graphs.
Therefore, any two isomorphic graphs have the same probability in q_θ.
The random permutation operation essentially propagates the probability of the primitive graphs to all their isomorphic forms.
§.§ Invariant Model Distribution via Permutation Equivariant Network
For the denoising model, if we consider one noise level, the optimal score network would be the score of the following noisy data distribution p_σ(Ã) = 1/m ∑_i=1^m 𝒩(Ã; A_i, σ^2), which is a GMM with m components for the dataset {A_i}_i=1^m.
This is the case for diffusion models with non-permutation-equivariant networks, and in what follows, we first show that for equivariant networks, the noisy data distribution is actually a GMM with O(n!m) components.
As score estimation and denoising diffusion are essentially equivalent, we use the terms `score' or `diffusion' interchangeably.
Lemma (invariant model distribution via equivariant networks).
Assume we only observe one adjacency matrix out of each isomorphism class in the dataset {A_i}_i=1^m, and the size of the isomorphism class of each graph is the same, |𝒜_A_1| = |𝒜_A_2| = ⋯ = |𝒜_A_m|.
Let s_θ^eq be a permutation equivariant score estimator.
Under our definitions of p^*(A) and p(A), the following two training objectives are equivalent:
𝔼_p^*(A) p(Ã | A) [ ‖ s_θ^eq(Ã, σ) - ∇_Ã log p(Ã | A) ‖^2_F ]
= 𝔼_p(A) p(Ã | A) [ ‖ s_θ^eq(Ã, σ) - ∇_Ã log p(Ã | A) ‖^2_F ].
We conduct the proof from <ref> to <ref>.
Let P ∈_n be an arbitrary permutation matrix and p__n be a uniform distribution over all possible permutation matrices _n.
𝔼_() ( | )[ ‖ s_θ^eq(, σ) - ∇_log( | ) ‖^2_F ]
=𝔼_() ( | )[ ‖ P s_θ^eq(, σ) P^T - P - /σ^2 P^T ‖^2_F ] Frobenius norm is permutation invariant
=𝔼_() ( | )[ ‖ s_θ^eq(P P^T, σ) - P - /σ^2 P^T ‖^2_F ] s_θ is permutation equivariant
=𝔼_() ( | ) [ ‖ s_θ^eq(, σ) - P - P^T P/σ^2 P^T ‖^2_F · |Det (d/d) | ] change of variable P P^⊤
=𝔼_() ( | ) [ ‖ s_θ^eq(, σ) - P P^⊤ - /σ^2‖^2_F ·|Det(P ⊗ P)|^=1 ]
=𝔼_() ( | ) [ ‖ s_θ^eq(, σ) - ∇_log( | P P^⊤) ‖^2_F ]
=𝔼_() p__n(P) ( | ) [ ‖ s_θ^eq(, σ) - ∇_log( | P P^⊤) ‖^2_F ] let P ∼ p__n be uniform
=𝔼_^*() ( | ) [ ‖ s_θ^eq(, σ) - ∇_log( | ) ‖^2_F ] permuting samples leads to ^* samples
=𝔼_^*() ( | ) [ ‖ s_θ^eq(, σ) - ∇_log( | ) ‖^2_F ] change name of random variable
The change of variable between and leverages the fact that permuting Gaussian random variables does not change the multivariate joint distributions.
Conditioned on A and P, the two variables are Ã = A + ε and P Ã P^⊤ = P A P^⊤ + P ε P^⊤, where the randomness related to ε (the Gaussian noise) is not affected by permutation due to its element-wise i.i.d. property.
The second last equality is based on our definition of and ^*.
Recall we take the Dirac delta function over to build and over ^* to build ^*, where ^* is the union of all isomorphism classes in .
By applying random permutation on samples drawn from , we subsequently obtain samples following the distribution ^*.
The main idea is similar to the proofs in previous works <cit.>.
Note that without the assumption on the sizes of the isomorphism classes, non-trivial automorphisms would break the equality between <ref> and <ref>, as each isomorphism class may be weighted differently in the actual invariant distribution.
We can then replace p^* by a slightly different invariant distribution, the k-permuted (k=n!) empirical distribution, defined as follows: p^k(A) ≔ 1/(mk) ∑_i=1^m ∑_j=1^k δ(A - P_j A_i P_j^⊤), where 𝒫_n = {P_1, …, P_k}.
Notably, both p^k(A) and p^* have O(n!m) many modes.
Consequently, the assumptions do not affect the number of GMM components of the underlying noisy data distribution, which is the property we care about.
More formally, we connect the number of modes in the discrete distribution p or p^* to the number of components in their induced GMMs.
We show that the noisy data distribution of a permutation equivariant network is p^*_σ(Ã) ≔ 1/Z ∑_A^*_i ∈ 𝒟^* 𝒩(Ã; A^*_i, σ^2), which has O(n!m) components.
Namely, the optimal solution s_θ^*,eq to <ref> or (<ref>) is ∇_Ã log p^*_σ(Ã).
Leveraging the results from <cit.>, we have
𝔼_p^*(A) p(Ã | A) [ ‖ s_θ^eq(Ã, σ) - ∇_Ã log p(Ã | A) ‖^2_F ]
= 𝔼_p^*_σ(Ã) [ ‖ s_θ^eq(Ã, σ) - ∇_Ã log p^*_σ(Ã) ‖^2_F ] (explicit score matching for p^*_σ(Ã))
- C_1 + C_2,
where
C_1 = 𝔼_p^*_σ(Ã) [ ‖ ∇_Ã log p^*_σ(Ã) ‖_F^2 ], C_2 = 𝔼_p^*(A) p(Ã | A) [ ‖ ∇_Ã log p(Ã | A) ‖_F^2 ].
As C_1 and C_2 are constants independent of θ, the optimization objective on ∇_Ã log p^*_σ(Ã) is equivalent to <ref>, the latter of which is often used as the training objective in implementation.
§.§ Sample Complexity Lower Bound of Non-permutation-equivariant Network
In this part, we study the minimum number of samples (, the sample complexity) required to learn the noisy data distribution in the PAC learning setting.
Specifically, our analysis is mainly applicable for the non-permutation-equivariant network, where we do not assume any hard-coded permutation symmetry.
We leave the analysis for permutation equivariant network as future work.
Recall that training DSM at a single noise level amounts to matching the score of the noisy data distribution.
Knowing this sample complexity helps us gauge the hardness of training DSM, since successfully learning a noisy data distribution allows one to obtain its score by taking the gradient.
Now we derive a lower bound of the sample complexity for learning the noisy data distribution (, a GMM).
Lemma (sample complexity of general DSM).
Any algorithm that learns the score function of a Gaussian noisy data distribution that contains l centroids of d-dimension requires Ω(ld/ϵ_f) samples to achieve Fisher information distance ϵ_f with probability at least 1/2.
Here the Fisher divergence (or Fisher information distance) <cit.> is defined as 𝒥_F(f, g) ≔ 𝔼_f(x)[ ‖∇_x log f(x) - ∇_x log g(x)‖_F^2 ], where f(x) and g(x) are two absolutely continuous distributions defined over ℝ^d.
Now we apply <ref> to the graph distribution in our context.
Recall we assume m graphs {_i}_i=1^m with n nodes in the training set.
Similar to our investigation in <ref>, we use the GMM corresponding to the k-permuted empirical distribution:
p^k_σ(Ã) = 1/(mk) ∑_i=1^m ∑_j=1^k 𝒩(Ã; P_j A_i P_j^⊤, σ^2).
Let q_θ(Ã) denote the estimated distribution returned by a non-permutation-equivariant network.
Corollary (sample complexity of DSM for the k-permuted graph distribution).
Any algorithm that learns q_θ for the target k-permuted distribution ^k to ϵ_f error in 𝒥_F (^k, q_θ) with probability at least 1/2 requires Ω(mkn^2/ϵ_f) samples.
<ref> states the condition to learn the distribution q_θ explicitly with bounded Fisher divergence, from which one could compute a score estimator s_q_θ = ∇_A log q_θ(A) with bounded DSM error w.r.t. the target p^k.
The sample complexity lower bound holds regardless of the specific learning algorithm.
The highlight is that the sample complexity lower bound has a dependency on Ω(k), which would substantially increase as k goes to its maximum n!.
A more practical implication is that, given α training samples drawn from p^k, one could expect, with probability at least 1/2, a score network to have at least Ω(mkn^2/α) error in Fisher divergence.
If we extend k to the maximum n!, the learning bottleneck would be the size of the graph n instead of the number of the graphs m, for a single large graph could induce a prohibitively enormous sample complexity.
This analysis is also in line with our experimental investigation in <ref>, where the recall metrics drop with k going up.
Proof of <ref>.
We first introduce two useful lemmas before diving into the proof.
For twice continuously differentiable distributions f and g defined over ^d, let 𝒥_F (f, g) 𝔼_f()[ ‖∇_log f() - ∇_log g()‖_F^2 ] be the Fisher information distance and let 𝒥_TV (f, g) sup_B⊆^d∫_B (f() - g()) d be the total variation (TV) distance.
𝒥_F (f, g) ≥ C 𝒥_TV^2 (f, g) for some constant
C > 0. <cit.>.
The exact value of C depends on conditions on f and g (cf. Theorem 5.3 of <cit.>).
Any method for learning the class of l-mixtures of d-dimensional isotropic Gaussian distribution with ϵ_t error in total variation with probability at least 1/2 has sample complexity Ω(ld/ϵ_t^2) <cit.>.
Now we restate <ref> and show the proof formally.
We derive our results through the distribution learning (density estimation) approach on top of existing analysis.
Without loss of generality, let f() and g() be two
twice continuously differentiable distributions defined over ^d.
Let us assume f() to be a GMM whose Gaussian components have isotropic variance, similar to the noisy data distribution ^k in the diffusion model.
We consider the general distribution learning problem where a learning algorithm takes a sequence of i.i.d. samples drawn from the target distribution f and outputs a distribution g as an estimate for f.
According to <ref>, the Fisher information distance 𝒥_F (f, g) is lower bounded by the square of TV distance 𝒥_TV^2 (f, g) with a positive multiplicative factor C, under some mild conditions.
In order to bound the 𝒥_F (f, g) by ϵ_f, 𝒥_TV (f, g) must be smaller than √(ϵ_f/C).
Importantly, the target f is a GMM, whose density estimation problem with TV distance have been well-studied and it admits a sample complexity lower bound illustrated in <ref>.
Plugging in the desired error bound for 𝒥_TV (f, g) ≤ϵ_t = √(ϵ_f/C), we obtain a sample complexity lower bound Ω(ld/ϵ_f), where the constant C is absorbed.
It has the same PAC-learning meaning for 𝒥_TV (f, g) with error bound ϵ_t =√(ϵ_f/C), and for 𝒥_F (f, g) with error bound ϵ_f.
We first identify that TV distance is a weaker metric than the Fisher information distance, the latter of which corresponds to the original score estimation objective.
Then, we utilize the recent advances in sample complexity analysis for learning GMMs with bounded TV error, and thus obtain a sample complexity lower bound for score estimation.
In summary, we use the result of a `weaker' distribution learning task to show how hard the score estimation objective at least is.
Proof of <ref>.
We conduct the proof by applying the results from <ref>.
We know p^k is a GMM with O(mk) components.
Since the samples drawn from the noisy distribution p^k are noise-perturbed adjacency matrices Ã ∈ ℝ^n×n, we first vectorize them into ℝ^n^2.
In this way, we can view p^k as a GMM with O(mk) components of n^2 dimensions.
Recall that we inject noise into each entry, so each Gaussian component has isotropic covariance.
The conditions of <ref> (specifically, <ref> ) are all satisfied.
Plugging in the above parameters, we obtain the sample complexity lower bound Ω(mkn^2/ϵ_f) for score estimation ^k through distribution learning perspective.
§ ADDITIONAL EXPERIMENT DETAILS
§.§ Detailed Experiment Setup
Toy Dataset.
For the toy dataset experiment, we conduct a sampling process where each model is allowed to generate 100 graphs.
To determine the graph recall rate, we perform isomorphism testing utilizing the <cit.> package. The training set's visualization is provided in <ref>.
Synthetic and Real-world Datasets.
We consider the following synthetic and real-world graph datasets:
(1) Ego-small: 200 small ego graphs from the Citeseer dataset <cit.> with |V| ∈ [4, 18],
(2) Community-small: 100 random graphs generated by the Erdős–Rényi model <cit.>, each consisting of two equal-sized communities, with |V| ∈ [12, 20],
(3) Grid: 100 random 2D grid graphs with |V| ∈ [100, 400].
(4) Protein: the real-world DD protein dataset <cit.>, which has 918 graphs with |V| ∈ [100, 500].
We follow the same setup in <cit.> and apply random split to use 80% of the graphs for training and the rest 20% for testing.
In evaluation, we generate the same number of graphs as the test set to compute the maximum mean discrepancy (MMD) of statistics like node degrees, clustering coefficients, and orbit counts.
To compute MMD efficiently, we follow <cit.> and use the total variation distance kernel.
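For illustration, a degree-distribution MMD with a TV-based kernel can be sketched as below; the kernel bandwidth and the biased MMD estimator are simplifying assumptions rather than the exact evaluation code.

    import numpy as np

    def degree_hist(A, max_deg):
        deg = A.sum(axis=1).astype(int)                    # node degrees
        h = np.bincount(deg, minlength=max_deg + 1).astype(float)
        return h / h.sum()

    def tv_kernel(p, q, sigma=1.0):
        tv = 0.5 * np.abs(p - q).sum()                     # total variation distance
        return np.exp(-tv * tv / (2 * sigma ** 2))

    def degree_mmd(graphs_ref, graphs_gen, sigma=1.0):
        """Biased estimate of the squared MMD between degree distributions."""
        max_deg = max(int(A.sum(axis=1).max()) for A in graphs_ref + graphs_gen)
        P = [degree_hist(A, max_deg) for A in graphs_ref]
        Q = [degree_hist(A, max_deg) for A in graphs_gen]
        k_pp = np.mean([tv_kernel(p, q, sigma) for p in P for q in P])
        k_qq = np.mean([tv_kernel(p, q, sigma) for p in Q for q in Q])
        k_pq = np.mean([tv_kernel(p, q, sigma) for p in P for q in Q])
        return k_pp + k_qq - 2 * k_pq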
Molecule Datasets.
We utilize the QM9 <cit.> and ZINC250k <cit.> as molecule datasets.
To ensure a fair comparison, we use the same pre-processing and training/testing set splitting as in <cit.>.
We generate 10,000 molecule graphs and compare the following key metrics:
(1) validity w/o correction: the proportion of valid molecules without valency correction or edge resampling;
(2) uniqueness: the proportion of unique and valid molecules;
(3) Fréchet ChemNet Distance (FCD) <cit.>: activation difference using pretrained ChemNet;
(4) neighborhood subgraph pairwise distance kernel (NSPDK) MMD <cit.>: graph kernel distance considering subgraph structures and node features.
We do not report novelty (the proportion of valid molecules not seen in the training set) following <cit.>.
Specifically, the QM9 dataset provides a comprehensive collection of small molecules that meet specific predefined criteria.
Generating molecules outside this set (novel graphs) does not necessarily indicate that the network has accurately captured the underlying data distribution.
Data Quantization. In this paper, we learn a continuous diffusion model for graph data.
Following DDPM <cit.>, we map the binary data into the range of [-1, 1] and add noise to the processed data during training.
During sampling, we start with Gaussian noise.
After the refinement, we map the results from [-1, 1] to [0, 1]. Since graphs are discrete data, we choose 0.5 as a threshold to quantize the continuous results.
Similar approaches have been adopted in previous works <cit.>.
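A minimal sketch of this quantization scheme (helper names are ours):

    import torch

    def to_model_space(A_binary):
        """Map {0, 1} adjacency entries to [-1, 1] before adding noise."""
        return A_binary * 2.0 - 1.0

    def to_graph_space(A_sampled):
        """Map sampled values back to [0, 1] and threshold at 0.5."""
        A01 = (A_sampled + 1.0) / 2.0
        return (A01 > 0.5).float()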
§.§ Network Architecture Details
Our Models and Baselines.
<ref> shows the network architecture details of our models and the UNet baselines on various datasets.
Regarding the PPGN <cit.>, we utilize the implementation in <cit.>.
It takes a noisy adjacency matrix as input and produces the denoised signal as output.
To build non-permutation-equivariant version of PPGN, sinusoidal positional encoding <cit.> is applied at each layer.
We use the same diffusion setup for PPGN-based networks and our SwinGNN, as specified in <ref>.
For a fair comparison, we utilize the publicly available code from the other baselines and run experiments using our dataset splits.
Network Expressivity.
Both theoretical and empirical evidence have underscored the intrinsic connection between the WL test and function approximation capability for GNNs <cit.>.
The permutation equivariant PPGN layer, notable for its certified 3-WL test capacity, is deemed sufficiently expressive for experimental investigation.
Further, it is crucial to note the considerable theoretical expressivity displayed by non-permutation-equivariant GNNs, particularly those with positional encoding <cit.>.
We argue that the GNNs employed in our studies theoretically have sufficient function approximation capacities, and therefore, the results of our research are not limited by the expressiveness of the network.
GPU Memory Usage.
Our model's efficiency in GPU memory usage during training, thanks to window self-attention and hierarchical graph representation learning, allows for faster training compared to models with similar parameter counts.
In <ref>, we compare the training memory costs for various models with different batch sizes using the real-world protein dataset.
§.§ Node and Edge Attribute Encoding
Molecules possess various edge types, ranging from no bond to single, double, and triple bonds.
Also, they encompass diverse node types like C, N, O, F, and others.
We employ three methods to encode the diverse node and edge attributes: 1) scalar representation, 2) binary bits, and 3) one-hot encoding.
Scalar Encoding.
We divide the interval [-1, 1] into several equal-sized sub-intervals (except for the intervals near the boundaries), with each sub-interval representing a specific type.
We quantize the node or edge attributes in the samples based on the sub-interval to which it belongs as in <cit.>.
Binary-bit Encoding.
Following <cit.>, we encode attribute integers using multi-channel binary bits.
For better training dynamics, we remap the bits from 0/1 to -1/1 representation.
During sampling, we perform quantization for the continuous channel-wise bit samples and convert them back to integers.
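A sketch of the binary-bit encoding and its decoding step (bit ordering, helper names, and integer inputs are illustrative assumptions):

    import torch

    def int_to_bits(x, num_bits):
        """Encode attribute integers (integer tensor) as {-1, +1} bit channels."""
        bits = [(x >> b) & 1 for b in range(num_bits)]     # little-endian bits
        bits = torch.stack(bits, dim=-1).float()
        return bits * 2.0 - 1.0

    def bits_to_int(bit_channels):
        """Quantize continuous bit channels back to attribute integers."""
        bits = (bit_channels > 0.0).long()
        weights = 2 ** torch.arange(bit_channels.shape[-1])
        return (bits * weights).sum(dim=-1)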
One-hot Encoding.
We adopt a similar process as the binary-bit encoding, up until the integer-vector conversion.
We use the argmax to quantize the samples and convert them back to integers.
Network Modifications.
We concatenate the features of the source and target node of an edge to the original edge feature, creating multi-channel edge features as the augmented input.
At the final readout layer, we use two MLPs to convert the shared edge features for edge and node denoising.
§.§ Diffusion Process Hyperparameters
The hyperparameters of the diffusion model training and sampling steps are summarized in <ref>.
For our SwinGNN model, we maintain a consistent setup throughout the paper, unless stated otherwise.
This setup is used for various experiments, including the ablation studies where we compare against the vanilla DDPM <cit.> and the toy dataset experiments.
The pivotal role of refining both the training and sampling phases in diffusion models to bolster performance has been emphasized in prior literature <cit.>.
Such findings, validated across a broad spectrum of fields beyond image generation <cit.>, inspired our adoption of the most recent diffusion model framework.
For a detailed discussion on the principles of hyperparameter fine-tuning, readers are encouraged to refer to the previously mentioned studies.
§.§ Comparing against the SwinTransformer Baseline
To further demonstrate the effectiveness of our proposed network in handling adjacency matrices for denoising purposes, we include an additional comparison with SwinTransformer <cit.>. SwinTransformer is a general-purpose backbone network commonly used in visual tasks such as semantic segmentation, which also involves dense predictions similar to our denoising task.
In our experiments, we modify the SwinTF + UperNet <cit.> method and adapt it to output denoising signals.
Specifically, we conduct experiments on the various graphs datasets, and the results are presented in <ref>.
The results clearly demonstrate the superior performance of our proposed SwinGNN model compared to simply adapting the visual SwinTransformer for graph generation.
§.§ Additional Qualitative Results
|
http://arxiv.org/abs/2307.02530v1
|
20230705180001
|
On the origin of globular clusters in a hierarchical Universe
|
[
"Gabriella De Lucia",
"J. M. Diederik Kruijssen",
"Sebastian Trujillo-Gomez",
"Michaela Hirschmann",
"Lizhi Xie"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
We present an end-to-end description of the formation process of globular clusters (GCs) which combines a treatment for their formation and dynamical evolution within galaxy haloes with a state-of-the-art semi-analytic simulation of galaxy formation. Our approach allows us to obtain exquisite statistics to study the effect of the environment and assembly history of galaxies, while still allowing a very efficient exploration of the parameter space of star cluster physics. Our reference model, including both efficient cluster disruption during galaxy mergers and a model for the dynamical friction of GCs within the galactic potential, accurately reproduces the observed correlation between the total mass in GCs and the parent halo mass. A deviation from linearity is predicted at low halo masses, which is driven by a strong dependence on morphological type: bulge-dominated galaxies tend to host larger masses of GCs than their later-type counterparts of similar stellar mass. While the significance of the difference might be affected by resolution at the lowest halo masses considered, this is a robust prediction of our model and represents a natural consequence of the assumption that cluster migration from the disk to the halo is triggered by galaxy mergers. Our model requires an environmental dependence of GC radii to reproduce the observed mass distribution of GCs in our Galaxy at the low-mass end. At GC masses >10^6 M_⊙, our model predicts fewer GCs than observed due to an overly aggressive treatment of dynamical friction. The metallicity distribution measured for Galactic GCs is well reproduced by our model, even though it predicts systematically younger GCs than observed. We argue that this adds further evidence for an anomalously early formation of the stars in our Galaxy.
stars: formation – globular clusters: general – galaxies: evolution – galaxies: formation – galaxies: star clusters: general
§ INTRODUCTION
Globular clusters (GCs) are found in all galaxies in the local Universe down
to galaxy stellar masses of ∼ 10^8 M_⊙. GCs typically have old ages (∼10 Gyr,
), nearly
uniform sizes <cit.>, and a peaked mass distribution that
can be approximated by a log-normal function, with a characteristic peak mass
that depends weakly on the host galaxy stellar mass
<cit.>. The old ages and small sizes of GCs have long prevented
direct observations of their formation - a situation that is changing rapidly now that the James Webb Space Telescope is finally in operation <cit.>.
For decades, the origin
of GCs has represented a largely unsolved problem that encompasses the fields of
star and galaxy formation. First theoretical work on this subject envisioned
that GC formation could be triggered by special conditions in the early
Universe: e.g. <cit.> argued that GCs may have formed
before the first galaxies, with masses determined by the Jeans mass. In a later
work by <cit.>, GCs were assumed to form during the
collapse of protogalaxies due to thermal instabilities in the hot gaseous
haloes. Alternative models pushed for a significantly later formation of GCs,
possibly triggered by mergers between gas-rich disk galaxies that can compress
and shock the interstellar medium <cit.>. At present, none of these scenarios are thought to explain the origin of the majority of GCs (see and for reviews).
Important information about the physical processes leading to GC
formation can be inferred from their present day properties and from the
formation of (young) massive stellar clusters in the local Universe. A key insight has been that young GCs are observed to form in the local Universe whenever conditions are present that mimic those in high-redshift galaxies, such as high gas pressures and densities <cit.>. This has led to the formulation of a family of models in which GCs are the byproduct of normal star and galaxy formation throughout cosmic history <cit.>.
In our current standard paradigm for structure formation, galaxies form at the
centre of dark matter haloes that collapse in a bottom-up fashion, with small
systems forming first and later merging into progressively more massive
structures. In this framework, galaxy formation is a complex physical process
that involves both gas condensation at the centre of dark matter haloes, as
well as galaxy mergers and interactions either with other galaxies or with the
central regions of dark matter haloes <cit.>. This means that an end-to-end
description of the formation process of today's GCs should include
an explicit treatment of both their formation and their dynamical evolution
within their evolving host galaxy haloes.
In the past years, different attempts have been made to study the dynamical
evolution of GCs within their host galaxy haloes. These include largely analytical studies that
focused on the effects of two-body relaxation, gravitational shocks and mass
loss by stellar evolution on the mass function of star clusters, starting from
an initial distribution approximated by a power-law
<cit.>; and work that has
tried to constrain the physical processes leading to the formation of GCs by
using their observed metallicity distributions in the local Universe and a combination of empirical relations and/or merger trees extracted from N-body simulations
<cit.>. This previous work either
focused on specific aspects of GC evolution/formation or neglected important
physical mechanisms of GC evolution.
Resolving the process of GCs formation directly within galaxy formation
simulations is prohibitively expensive, because it would require extremely high
resolution (particle masses/cells below 10^3 M_, and sub-parsec
scale), as well as an appropriate treatment of the star formation and stellar feedback processes. While some work has begun to resolve aspects of GC formation in
cosmological simulations
<cit.>,
the approach remains limited to small volumes and a narrow range in redshifts, preventing a
detailed comparison with the wealth of observational data in the local
Universe.
An alternative approach is that of modelling GC formation and evolution within
their parent galaxy haloes resorting to `sub-grid' or `semi-analytic'
models. The important advantage, in this case, is that the limited
computational costs allow an efficient investigation of the influence of
different specific assumptions, as well as a rapid exploration of the
parameters space. Coupling these techniques to dark matter-only and
high-resolution cosmological volumes provides access to a large dynamic range in
halo and galaxy masses allowing statistical analysis as a function of redshift,
galaxy properties, and environment. Efforts in this direction include
post-processing analyses of dark matter simulations with the inclusion of
baryons through scaling relations inspired by observational data or physical models
<cit.>.
In more recent years, direct simulations of galaxy formation have been used to
model the formation and evolution of GCs. For instance, the E-MOSAICS project
<cit.> couples an analytic model that
describes the formation, evolution, and disruption of stellar clusters to the
EAGLE galaxy formation model. Over the past years, the E-MOSAICS simulations have been used to provide context, interpretation, and predictions for a wide range of GC properties, such as their numbers <cit.>, metallicity distribution <cit.>, formation histories <cit.>, mass function <cit.>, spatial distribution and kinematics <cit.>, origin <cit.>, and their use in tracing galaxy formation and assembly <cit.>. The success of this approach has motivated several similar initiatives <cit.>. The big hurdle faced by all of these models is one of statistics, as the requirement of resolving galaxy formation implies that cosmological volumes larger than ∼50 Mpc <cit.> remain out of reach.
In this work, we adopt an approach very similar to that used in the E-MOSAICS
project, but take advantage of a state-of-the-art semi-analytic galaxy
formation model to describe the evolution of the galaxy
population across a much larger cosmological volume than can be spanned by spatially resolved hydrodynamical simulations. Specifically, we build on the GC model presented in
<cit.> that explains the observed properties of GCs as the
natural outcome of star and cluster formation in high-redshift galaxies, and include
its basic assumptions in the GAlaxy Evolution and Assembly (GAEA) semi-analytic
model <cit.>, coupled to a large dark matter-only
cosmological simulation. In this paper, we provide the details of our model and
discuss how its basic predictions for the GC population compare with available
data.
The layout of the paper is as follows: we present the simulation and the galaxy
formation model used in our study in Section <ref>.
Section <ref> provides a detailed description of how we have
included in our semi-analytic model the formation of young stellar clusters, how we have
modelled their evolution, and how we have tested various physical descriptions of these physics. In Section <ref>, we present a case study of two model galaxies to illustrate how the mass distribution of GCs evolves as a function of time. Sections <ref> and <ref> show the
basic predictions of our model and compare them to observational estimates.
Finally, in Section <ref>, we discuss our results and present our
conclusions.
§ THE SIMULATION AND THE GALAXY FORMATION MODEL
The model predictions presented in this work are based on dark matter merger
trees extracted from the Millennium Simulation <cit.>. This
dark matter-only simulation follows 2,160^3 particles in a box of 500 Mpc h^-1 on a side, and assumes cosmological parameters consistent
with WMAP1 (Ω_Λ=0.75, Ω_m=0.25, Ω_b=0.045, n=1,
σ_8=0.9, and H_0=73 km s^-1 Mpc^-1). In previous work
<cit.>, we have shown that (small) modifications of the
cosmological parameters do not significantly affect model predictions, once the
model parameters are retuned to reproduce a given set of observational results
in the local Universe. Therefore, we do not expect significant changes with an
updated cosmological model. We will verify this in future work, where we plan to extend our analysis to
higher resolution simulations and an updated cosmological model.
In this work, we take advantage of the GAlaxy Evolution and Assembly (GAEA) semi-analytic model of galaxy formation[https://sites.google.com/inaf.it/gaea/], described in detail in
<cit.>, with the updated modelling for disk sizes described in <cit.>. While the latter modification does not significantly affect the basic predictions of our model, it leads to better agreement between model predictions and observational data for galaxy sizes - an element that we deem important for the stellar cluster model discussed in this paper. Our galaxy formation model includes a sophisticated treatment for
the non-instantaneous recycling of gas, metals and energy that allows us to
follow individual metal abundances <cit.>, and a new stellar
feedback scheme that is partly based on results from hydrodynamical
simulations <cit.>. In previous work, we have shown that
our reference model is able to reproduce a large number of important
observational constraints, including the galaxy stellar mass function up to
z∼ 7 and the cosmic star formation rate density up to z∼ 10
<cit.>, the relation between galaxy stellar mass and gas
metallicity and its secondary dependence as a function of star formation rate
and gas mass <cit.>, as well as the observed evolution of
the galaxy mass - gas/star metallicity relations
<cit.>.
For the analysis presented in this paper, we have used only a small fraction
(about 10 per cent) of the entire volume of the Millennium Simulation. On the
basis of previous work, we consider this sub-volume representative (it includes
a few of the most massive haloes that can be identified in the entire
simulation volume). In addition, we have used the same parameter set adopted in
<cit.> with only one modification: in our published
reference model, we had assumed that galaxies at the centres of haloes with masses below 5× 10^10 M_⊙ would inject most of the newly synthesized metals (95 per cent) directly into the hot gas phase. This modification had been included in previous work focused on faint satellites of the Milky Way <cit.> to better reproduce their metallicities. While it does not significantly affect galaxy (and GC) properties on larger scales, the corresponding halo mass threshold falls in a critical resolution regime for the dark matter simulation used in this work: the particle mass is m_p=8.6×10^8 M_⊙, which means that a halo of 5×10^10 M_⊙ is resolved with only about 40 particles. Therefore, in this work we assume that all new metals are immediately mixed with the cold gas in the disc, and explicitly show when this has some influence on our model predictions.
§ A TWO-PHASE MODEL FOR THE ORIGIN OF GLOBULAR CLUSTERS
To model the abundance and properties of GCs, we build on the
`two-phase' model introduced by <cit.>, including additional
implementations that we detail in the following. We refer to the original study
by <cit.> for a detailed review of the approach, which attempts
to provide an end-to-end description of the origin of GCs, from their formation
at high redshift until the present day. In summary:
(i) young stellar clusters are assumed to form in the high-pressure discs
hosted by high-redshift galaxies;
(ii) the clusters undergo rapid disruption by tidal perturbations from
molecular clouds and clumps in the host galaxy disc;
(iii) the disruption continues until clusters migrate into the halo of
the galaxy, following e.g. a merger event;
(iv) finally, clusters undergo a slow evolutionary phase, in the host
galaxy halo, due to tidal evaporation.
In the following subsections, we describe the specific prescriptions adopted to
include each of the four elements listed above in our galaxy formation
model. In practice, we have associated with each model galaxy two three-dimensional arrays storing the information about the stellar clusters that belong to its disk and its halo, respectively. The three
dimensions of each array correspond to bins of stellar cluster mass (we
consider NMBINS= 50 mass bins, logarithmically spaced between 10^2
and 10^10 M_⊙), [Fe/H] (NZBINS= 20, linearly spaced
between -3.5 and 0.68), and formation time (NTBINS= 27, linearly spaced
between 0 and 13 Gyr). The disk and halo arrays are initialized and evolved as
detailed in the next sections.
§.§ Formation of young stellar clusters
In our galaxy formation model, star formation can take place in a `quiescent'
mode (from cold gas associated with the galaxy disk), and during merger driven
starbursts. We assume that a (small) fraction of the star formation occurring
through the quiescent channel[We suppress the formation of new stellar clusters during merger-driven starbursts (see Section 3.3).] leads to the formation of young massive clusters. A new population of stellar clusters is initialized for each new episode of
star formation: if Δ M_* is the mass of stars formed during a
time-step[The differential equations governing the evolution of each galaxy are solved by dividing the time interval between two subsequent snapshots into 20 time-steps, each corresponding to ∼ 10-19 Myr up to z=3. The spacing between subsequent snapshots is smaller at higher redshift, corresponding to even smaller time-steps.], we assume that the
corresponding mass in stellar clusters is ΓΔ M_*, where
Γ quantifies an environmentally dependent cluster formation efficiency
(CFE). We have computed this quantity using the analytic model presented in
<cit.>, in which bound stellar clusters collapse in the highest-density regions of the interstellar medium (we refer to the original paper for full details).
The CFE is determined by galaxy-scale physics, and can be expressed in terms of the cold gas surface density, the galaxy angular velocity, and the <cit.> Q instability parameter. For this calculation, and throughout this paper, we have set the Toomre instability parameter (Q) equal to 1. This value is within the typical range observed in star-forming galaxies at z∼ 2, Q=0.2-1.6 <cit.>. The cold gas surface density and galaxy angular velocity are given by:
Σ_gas = M_cold / (π r_half^2) and Ω = V_max/r_half.
In the above equations, M_ cold is the cold gas mass, and r_ half
is the disk half mass radius. The latter is computed as 1.68× R_ d
(assuming an exponential disk), where R_ d is the scale radius of the
disk and is computed by tracing the specific angular momentum of the gas as
described in detail in <cit.>. V_ max is inherited from
the subhalo catalogue and is the maximum circular velocity of the parent
dark matter substructure for each model galaxy (for orphan satellites,
i.e. those that are no longer associated with an existing substructure, this is
the value at the last time there was a distinct parent subhalo).
The model by <cit.> provides a good match to cluster formation
efficiencies that are observed in the local Universe, ranging from a few per
cent at low pressures to as much as 50 per cent in high-density starburst environments
<cit.>.
A proper calculation of Γ requires several integrations that would significantly slow down our galaxy evolution code. In order to speed up our
computation, we have used tabulated values of Γ (on a 30×30 grid)
corresponding to different values of the gas surface density and of the angular
velocity. The entries of the table are then linearly interpolated at each star
formation episode, to compute the corresponding CFE. We have verified that
values obtained using our approach are very close to those that would be
obtained using a full integration calculation.
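The table look-up can be illustrated as follows (a sketch under our own assumptions: we take the grid to be regular in the logarithms of the two variables and fill the table with placeholder values; the actual tabulation comes from the full integration of the analytic CFE model cited above):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_sigma_grid = np.linspace(-1.0, 4.0, 30)   # log10(Sigma_gas), illustrative range
log_omega_grid = np.linspace(-3.0, 1.0, 30)   # log10(Omega), illustrative range
gamma_table = np.full((30, 30), 0.1)          # placeholder CFE values

cfe_lookup = RegularGridInterpolator((log_sigma_grid, log_omega_grid), gamma_table,
                                     bounds_error=False, fill_value=None)

def cluster_formation_efficiency(sigma_gas, omega):
    # linear interpolation of the pre-computed 30x30 CFE table
    point = [np.log10(sigma_gas), np.log10(omega)]
    return float(cfe_lookup([point])[0])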
Observations of young stellar cluster populations <cit.> have shown that the initial cluster mass function (ICMF) is well described by a power law with index α=-2, which is consistent with expectations from gravitational collapse in hierarchically structured clouds <cit.>. At low masses, a fiducial truncation corresponding to M_min=100 M_⊙ is typically adopted <cit.>. However, this assumption appears to be in contrast with findings that very large fractions (up to about 50 per cent) of low-metallicity stars in some nearby dwarf galaxies are bound to their GCs <cit.>, which implies that few low-mass clusters (or possibly none) could have formed coevally with these GCs. In fact, an ICMF extending down to 100 M_⊙ in these galaxies would require the majority of low-mass clusters to have been disrupted, returning their mass to the field population. This would allow for a maximum of 10 per cent of the low-metallicity stars to reside in surviving GCs, contrary to observations. At the high-mass end, an exponential truncation of the ICMF is often assumed. Observational studies find truncation masses varying from M_max = 0.5 to 10^5 M_⊙ <cit.>.
In this work, we assume that the ICMF is well-described by a power law, with
exponential truncations at both the high- and low-mass ends <cit.>:
dN/dM ∝ M^α exp(-M_min/M) exp(-M/M_max).
We assume α=-2, and that M_ max is determined by
evaluating the mass fraction of a centrifugally limited region that can
collapse before stellar feedback is able to halt star formation, as detailed in
<cit.>. The latter study shows that the
resulting upper truncation mass depends on the cold gas surface density and the galactic angular velocity,
with environments characterized by larger gas surface densities being able to promote
the formation of more massive clusters.
As for M_ min, we adopt the model presented
in <cit.>, based on empirical scaling relations of
molecular clouds in the local Universe, and on the hierarchical nature of star
formation in clouds regulated by stellar feedback. Specifically, the minimum
mass of stellar clusters is evaluated by estimating the time-scale required for
stellar feedback to halt star formation, against the collapse time of clouds
with a given mass spectrum. This allows an estimate of the range of cloud
masses that can achieve the minimum star formation efficiency required to
remain bound after feedback has blown out the remaining gas. The resulting
values of M_min vary by orders of magnitude from local and quiescent discs
to high-redshift starbursting systems, with a strong dependence on the surface
density of the ISM and no significant dependence on the
galaxy angular velocity and the Toomre parameter Q. Below, we will test
explicitly the impact of assuming an environmentally dependent M_min.
Both M_min and M_max can be expressed as functions of the gas surface density and of the galaxy angular velocity, as in the case of the CFE; for all of these quantities, this reflects an underlying physical dependence on the gas pressure. For the
sake of computational time, we have generated pre-computed tables giving the
minimum and maximum truncation mass over the same grid used to pre-compute
Γ. The tables are linearly interpolated to evaluate Eq. <ref>
each time the cluster mass function needs to be initialized or updated, due to
a new episode of star formation.
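As an illustration of how a new cluster population is initialized on the mass grid, the following sketch evaluates the ICMF above at the bin centres and distributes the total cluster mass Γ Δ M_* over the bins (our own simplified midpoint scheme, not the actual code, which may integrate the ICMF within each bin):

import numpy as np

def new_cluster_population(delta_mstar, gamma, mass_edges, m_min, m_max, alpha=-2.0):
    m_mid = 0.5 * (mass_edges[:-1] + mass_edges[1:])
    dm = np.diff(mass_edges)
    # doubly truncated power-law ICMF, dN/dM ∝ M^alpha exp(-M_min/M) exp(-M/M_max)
    dndm = m_mid**alpha * np.exp(-m_min / m_mid) * np.exp(-m_mid / m_max)
    counts_shape = dndm * dm
    # normalise so that the total mass in new clusters equals Gamma * Delta M_*
    norm = gamma * delta_mstar / np.sum(counts_shape * m_mid)
    return norm * counts_shape   # number of new clusters per mass bin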
§.§ Cluster evolution during the rapid disruption phase
After their formation, stellar clusters evolve within the host galaxy disk, and
are subject to frequent and strong tidal perturbations by dense gas clumps <cit.>.
Following <cit.>, we describe the rapid disruption phase driven by these tidal shocks in
terms of a mass loss rate:
( dM / dt)_ cce = - M / t_ cce,
where the subscript `cce' stands for `cruel cradle effect'
<cit.>. The disruption
time-scale can be expressed as <cit.>:
t_cce = t_5,cce (f_Σ/4)^-1 (ρ_ISM/M_⊙ pc^-3)^-3/2 (M/10^5 M_⊙) Φ_ad^-1
where t_5 , cce = 176 Myr is a proportionality constant, f_Σ
is the ratio between the surface density of giant molecular clouds and the mean
gas surface density in the galactic mid-plane, and ρ_ ISM is equated
to the mean density in a galactic disc mid-plane for an equilibrium disc
<cit.>:
ρ_ISM = 3 Ω^2 / (π G).
Finally,
Φ_ ad is a correction factor that accounts for the absorption of
tidal energy by adiabatic expansion. Following <cit.>, we
assume:
f_Σ = 3.92 ( (10 - 8 f_mol)/2 )^1/2
with
f_mol = 1 / (1+0.025 Σ_gas,2^-2).
In the last equation, Σ_ gas,2 is the surface density of the gas in
units of 100 M_⊙ pc^-2.
The adiabatic correction is expressed as follows:
Φ_ad = [ 1 + 9 ( ρ_h/ρ_ISM/10^4)]^-3/2,
where ρ_h = 3M/(8π r_h^3).
In the following, we will consider two different assumptions for the cluster
radius. In one case, we will assume that it is independent of the cluster mass,
and equal to r_ h = 1.5 pc. Alternatively, and this will be the
fiducial assumption in our reference run, we will assume that r_ h depends on the
cluster mass. Specifically, we adopt eq. 13 of <cit.>:
r_ h≃ 3.8 pc( γ_ GMC/12.8 Gyr) ^2/9( M/10^4 M_⊙) ^1/9,
where
γ_ GMC≃ 6.5 Gyr( σ/10 km s^-1) ( 10 M_⊙^2 pc^-5/f_Σ·Σ_ gas·ρ_ ISM)
and, assuming an equilibrium disk and Q=1:
σ = π/2G Σ_ gas/Ω
When considering environmentally dependent cluster radii, we multiply our expression
for t_ cce by a factor:
(r_ h/r_ h, 0)^-3, with r_ h, 0 = 1.5 pc.
This makes disruption much faster (up to a factor 15) in low-density
environments, and is justified by the fact that more extended clusters would be
more susceptible to tidal perturbations, as observed.
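Putting the above expressions together, the disruption time-scale can be evaluated as in the sketch below (illustrative only: the function name is ours, we assume Σ_gas in M_⊙ pc^-2, ρ_ISM in M_⊙ pc^-3, masses in M_⊙ and radii in pc, and the environmentally dependent radius factor is always applied):

import numpy as np

T5_CCE = 176.0  # Myr, proportionality constant

def t_cce(mass, sigma_gas, rho_ism, r_h, r_h0=1.5):
    f_mol = 1.0 / (1.0 + 0.025 * (sigma_gas / 100.0)**-2)
    f_sigma = 3.92 * ((10.0 - 8.0 * f_mol) / 2.0)**0.5
    rho_h = 3.0 * mass / (8.0 * np.pi * r_h**3)               # cluster half-mass density
    phi_ad = (1.0 + 9.0 * (rho_h / rho_ism) / 1.0e4)**-1.5     # adiabatic correction
    t = T5_CCE * (f_sigma / 4.0)**-1 * rho_ism**-1.5 * (mass / 1.0e5) / phi_ad
    # extra factor used when cluster radii depend on the environment
    return t * (r_h / r_h0)**-3   # in Myr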
To implement the rapid disruption phase in our galaxy formation model, we need
to account for the dependence of Eq. <ref> on both the stellar cluster
mass and on the physical properties of the host galaxy. Practically,
Eq. <ref> must be evaluated for each galaxy, at each time-step of the
evolution, and for each value of the cluster mass considered. To limit the
computational overhead, we have adopted an approach that is sketched in
Figure <ref>, and that can be summarized as follows:
(i) for each bin boundary of the mass grid considered at a given
time t, we use Eqs. <ref> and <ref> to compute the
evolved values at the following time-step t'. In Figure <ref>, this
forward integration is illustrated by the dashed red lines, and M'_i (with i=1,2,...6) represent the evolved values of the grid
boundaries (M_i) at the time t'.
(ii) We then get the interpolated indices for which M'_i =
M_i at the time t'. These indices are used to compute the mass values at
the time t that would evolve into the fixed boundaries of the bins at
the time t'. This `backward' integration is illustrated in
Figure <ref> by the dashed-dotted blue lines. M”_i (with
i=1,2,...6) correspond to the values that would evolve into the fixed grid
boundaries at the time t'.
(iii) The values obtained as described above can be used to compute the
evolved cluster mass function at time t' (i.e. the number of stellar
clusters in each mass bin considered) starting from the mass distribution at
time t. In the example shown in Figure <ref>, the evolved number of
stellar clusters in the first mass bin (between M_1 and M_2
at time t') can be computed by taking the number of stellar clusters in the
unevolved array between M”_1 and M”_2 at time t. The
evolved number of clusters in the second mass bin would be equal to the
number of stellar clusters between M”_2 and M”_3 at the
time t, etc.
To simplify our calculation, we assume that stellar clusters are distributed
uniformly[This assumption is correct in the case of infinitesimally small
mass bins.] in each mass bin. We have verified that our results do not change significantly when increasing the number of mass bins considered, so a more sophisticated treatment is not expected to alter our conclusions.
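The forward/backward remapping of steps (i)-(iii) can be condensed into a short routine (a simplified sketch with our own naming; the disruption time-scale is passed in as a callable, and we assume the time-step is small enough that the evolved edges remain distinct and ordered):

import numpy as np

def evolve_mass_function(counts, mass_edges, t_dis, dt, n_sub=20):
    # forward step: evolve each fixed bin edge over dt using dM/dt = -M / t_dis(M)
    def evolve_edge(m):
        for _ in range(n_sub):
            if m <= 0.0:
                return 0.0
            m = m - m / t_dis(m) * (dt / n_sub)
        return max(m, 0.0)

    evolved_edges = np.array([evolve_edge(m) for m in mass_edges])
    # backward step: masses at time t that land exactly on the fixed edges at t + dt
    source_edges = np.interp(mass_edges, evolved_edges, mass_edges)
    # remap counts assuming a uniform distribution within each original bin
    cum = np.concatenate(([0.0], np.cumsum(counts)))
    cum_at_source = np.interp(source_edges, mass_edges, cum)
    return np.diff(cum_at_source)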
§.§ Cluster migration into the galaxy halo
The rapid disruption phase continues until the stellar clusters that form in
(and are associated with) the galaxy disk migrate into the halo. Following
<cit.>, we assume that the migration agent is represented by
(minor and major) galaxy mergers. Major mergers occur when the baryonic mass
ratio between the merging galaxies is larger than 0.3. In this case, we assume
that all stellar clusters in the discs of both the accreted and accreting galaxies
migrate into the halo of the merger remnant. In the case of a minor merger, we assume that only the clusters in the disc of the accreted galaxy migrate into the halo of the remnant, whereas the main progenitor retains the stellar clusters in its own disc.
During both minor and major mergers, the clusters in the haloes of both progenitor
galaxies are always transferred to the halo of the remnant galaxy.
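These migration rules can be summarised in a few lines (a schematic sketch reusing the array layout of the earlier sketches; the merger-induced survival fraction discussed below would multiply the migrating disc populations and is omitted here):

import numpy as np

def apply_merger(primary_disk, primary_halo, secondary_disk, secondary_halo,
                 baryonic_mass_ratio, major_threshold=0.3):
    # halo clusters of both progenitors always end up in the remnant halo
    remnant_halo = primary_halo + secondary_halo
    if baryonic_mass_ratio > major_threshold:
        # major merger: disc clusters of both galaxies migrate to the remnant halo
        remnant_halo = remnant_halo + primary_disk + secondary_disk
        remnant_disk = np.zeros_like(primary_disk)
    else:
        # minor merger: only the accreted galaxy's disc clusters migrate
        remnant_halo = remnant_halo + secondary_disk
        remnant_disk = primary_disk.copy()
    return remnant_disk, remnant_halo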
In our model, galaxy mergers can trigger starbursts <cit.>. We have suppressed the formation of a new
stellar cluster population during this star formation channel, because it is
expected to be associated with tidal perturbations stronger than those that
characterize quiescent star formation episodes. We note that this specific
assumption does not affect significantly our model results, as merger driven
starbursts only contribute to a minor fraction of the total star formation in
our model <cit.>.
In addition, we consider the possibility that the rapidly changing tidal
field during galaxy interactions can lead to an efficient cluster disruption in
the galaxy disc, before the merger is completed. To model this, we use results
from <cit.>, based on numerical simulations of merging
disc galaxies. In particular, we use their Eq. 10 to compute the survival
fraction of clusters (this is applied to all star clusters independently of their mass):
f_surv = 4.5× 10^-8 M_min,2^2 (t_depl/yr)^[0.77 - 0.22 log(M_min,2)],
and assume M_ min,2 = M_ min/10^2 M_⊙∼ 1. In the
work by <cit.>, t_ depl is parametrized as the
ratio between the amount of gas available during the merger and the peak star
formation rate. We evaluate the latter quantity simply as the ratio between the
amount of stars formed during the burst associated with the merger and the corresponding internal code time-step.
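In code form (a hedged sketch; we assume the logarithm in the exponent is base-10 and cap the survival fraction at unity):

import numpy as np

def merger_survival_fraction(gas_mass, burst_stellar_mass, dt_yr, m_min2=1.0):
    # depletion time: gas available during the merger divided by the peak SFR,
    # approximated here by the burst mass over the internal code time-step
    peak_sfr = burst_stellar_mass / dt_yr
    t_depl = gas_mass / peak_sfr
    exponent = 0.77 - 0.22 * np.log10(m_min2)
    return min(4.5e-8 * m_min2**2 * t_depl**exponent, 1.0)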
§.§ Cluster evolution during the slow disruption phase
After their migration into the galaxy halo, stellar clusters lose mass due to several physical processes: stellar evolution, and dynamical effects such as two-body relaxation, stripping of stars by the tidal field in which the cluster is immersed, and tidal shocks <cit.>. Following <cit.>, we also describe the slow tidal evaporation phase in
terms of a mass loss rate:
( dM / dt)_ evap = - M / t_ evap,
with <cit.>:
t_evap = t_5,evap (M/10^5 M_⊙)^γ
In the last equation, γ = 0.7 and t_5 , evap depends on the
stellar cluster metallicity:
[Fe/H] = -1.03 - 0.5 log(t_5, evap/10 Gyr).
This semi-empirical relation was adopted by <cit.> so that the near-universal characteristic mass scale of GCs is reproduced as a function of [Fe/H] at z=0. It reflects the idea that the globular cluster metallicity correlates with the binding energy of the galaxy in which it formed, which was proposed to be approximately preserved during migration. The above relation between [Fe/H] and the disruption time is shown by <cit.> to be consistent with the observed metallicity gradient of the Galactic GC population.
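The evaporation time-scale can thus be evaluated directly from a cluster's mass and metallicity (a sketch assuming base-10 logarithms in the relation above; the function name is ours):

def t_evap(mass, feh, gamma=0.7):
    # invert [Fe/H] = -1.03 - 0.5 log10(t_5,evap / 10 Gyr) for t_5,evap ...
    t5_evap_gyr = 10.0 * 10.0**(-2.0 * (feh + 1.03))
    # ... and scale with cluster mass as t_evap = t_5,evap (M / 1e5 M_sun)^gamma
    return t5_evap_gyr * (mass / 1.0e5)**gamma   # in Gyr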
To implement the slow evaporation phase in our galaxy evolution model, we have
adopted an approach similar to that outlined in Section <ref> for the
rapid disruption phase, but taking advantage of the fact that
Eq. <ref> does not depend on the physical properties of the host
galaxy and therefore does not need to be evaluated at each code
time-step. From the practical point of view, this allows us to reduce the
number of calculations and pre-compute a mass loss grid at each new snapshot[This is
necessary because the spacing between subsequent snapshots, and therefore
also the internal time-step, are not constant. Otherwise, it would have been
sufficient to compute the mapping only once.]
rather than for each internal time-step of our model. Specifically:
* for each boundary of the mass bins considered, and for each
metallicity corresponding to central values of the metallicity bins
considered, we have pre-computed the time-scales given by Eq. <ref>
and the mapping needed to evolve the star clusters;
* we have then used this pre-computed mass loss grid to evolve the mass
function of the clusters associated with the galaxy halo at each internal
time-step.
As for the rapid disruption phase, we assume that clusters are distributed
uniformly within each mass, metallicity, and formation time-bin. As mentioned
earlier, we have checked that results do not change significantly when
increasing the number of mass bins considered.
§.§ Dynamical friction
When computing the evolution of the most massive clusters in the galaxy disk, it is important to account for the effect of dynamical friction, which can cause them to spiral into the centre of the host galaxy. Following <cit.>, we evaluate the dynamical friction time-scale for each central value of the mass bins considered:
t_df = 2 Gyr (10^6 M_⊙/M) (R/2 kpc)^2 (V/200 km s^-1)
where we assume R is equal to the half-mass radius of the galaxy disk and
V=V_max. We then simply set to zero the number of stellar clusters in all those bins of our three-dimensional array that have experienced more than one dynamical friction time-scale, integrated over their lifetimes.
As we will see in the following, our implementation of dynamical friction might be too aggressive. This might be because it does not account for the mass loss of the clusters as they spiral in: the environment becomes increasingly disruptive, causing the clusters to lose mass and, since t_df increases with decreasing cluster mass, to spiral in more slowly.
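The corresponding time-scale is straightforward to evaluate per mass bin (a sketch; units as indicated in the equation above, with the function name being our own). Bins whose time-integrated age exceeds t_df are then zeroed, as described above.

def t_dynamical_friction(mass, r_half_kpc, v_max_kms):
    # t_df in Gyr for a cluster of the given mass (in M_sun), evaluated at the
    # disc half-mass radius (in kpc) with V = V_max (in km/s)
    return 2.0 * (1.0e6 / mass) * (r_half_kpc / 2.0)**2 * (v_max_kms / 200.0)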
§.§ Alternative physics
To study the impact of the physics discussed in this section, we have
run alternative models in which we have switched off specific
processes one by one. In particular, we have considered the following cases:
* no environmental dependence of GC radii
(eqs. <ref>-<ref>);
* no efficient cluster disruption in the galaxy disc during galaxy mergers
(see eq. <ref>);
* no dynamical friction (see Section <ref>);
Additionally, we have considered a run where we assume that the minimum cluster mass
scale (M_ min) is always equal to 10^2 M_⊙. We find that,
for the range of masses considered in this work, this assumption gives results
that are indistinguishable from those obtained using our reference run, which assumes the model presented in <cit.> to determine M_min. Finally, as anticipated above, we have also considered a run where metal ejection in small haloes is treated as in the model published in <cit.>, i.e. almost all newly synthesized metals (95 per cent) are directly injected into the hot gas component in haloes less massive than 5× 10^10 M_⊙.
§ CASE STUDIES: EVOLUTION OF THE GC MASS FUNCTION
Before analysing the predictions of our model in detail, we discuss two `case
studies' to show how the mass distribution of stellar clusters evolves due to
the physical mechanisms discussed in Section <ref> and implemented
in our model.
Figure <ref> corresponds to a low-mass galaxy (the galaxy stellar mass at z=0 is ∼ 1.3×10^9 M_⊙) that experiences a single major merger event between z=1.91 and z=2.07. The top left panel of Figure <ref>
shows the mass distribution of stellar clusters associated with the two
progenitors of the final galaxy at high redshift (a second progenitor exists only for those snapshots where a dotted line is plotted, i.e. from z=3.06 to
z=2.07). At these early times, the distribution of young stellar clusters
(those that are associated with the disk of our model galaxies) results from a
`competition' between ongoing star formation and the efficient rapid disruption of clusters by tidal shocks. At the final redshift shown in this panel, the total mass in stellar clusters is actually larger than that predicted at even higher redshift for the most massive progenitor (compare the red solid line with the black solid
line). The distributions shown in this panel also highlight the strong effect of our dynamical friction implementation, which we will return to below. The top-right panel again shows the distribution of stellar clusters in
both progenitors at z=2.07 (solid and dotted black lines) and the
distribution of clusters associated with the halo of the remnant galaxy after
the major merger (dot-dashed lines). As detailed in Section <ref>, we
assume that the rapidly changing tidal field during the interaction leads to an
efficient cluster disruption before the stellar clusters associated to the
merging galaxies migrate to the halo of the remnant galaxy (dot-dashed
brown line). At this point, the evolution of stellar clusters is driven by
the slow evaporation phase described in Section <ref>. This slow
evolution of the cluster mass function can be appreciated, down to z=0, in
the bottom panels of Figure <ref>. At present, the mass distribution of
the clusters associated with the model galaxy considered in this example peaks
at ∼ 1.5×10^5 M_⊙, and corresponds to a specific frequency of ∼ 13.8 (∼ 6.3 when considering only stellar clusters more massive than 10^5 M_⊙).
Figure <ref> corresponds to a slightly more massive galaxy (the stellar mass at present is ∼ 3.2×10^9 M_⊙) with a more
complex merger history. Specifically, the galaxy considered in this case has
suffered one minor merger episode at z∼1.5 and two major mergers (one at
z∼1.8 and the second at z∼1.3). At the redshifts shown in the top-left and top-right panels, there are two and four progenitors for this galaxy, respectively. The mass distribution of star clusters in the disk evolves rapidly because of disruption and new star formation. The three merger events occurring during
the redshift interval shown in the bottom-left panel lead to the formation of a
stellar cluster component, associated with the halo of the model galaxy, which
then evolves slowly as a result of evaporation. At present, the mass distribution of
GCs associated with the example galaxy considered shows a double peak, one at a mass slightly larger than ∼ 10^5 M_⊙ and one at a mass ∼ 3.6× 10^3 M_⊙. The specific frequency of GCs measured for this galaxy is ∼ 235 if we consider all stellar clusters down to 10^2 M_⊙, and ∼ 16.5 when we consider only stellar clusters more massive than 10^5 M_⊙.
In the two example cases considered above, GCs form at early cosmic times and
migrate to the halo, where they are no longer subject to rapid disruption, at
z>1-1.5. Since the merger activity peaks at high redshift, most of the
surviving GCs associated with model galaxies will be old. Figure <ref>
compares the formation history of surviving GCs (solid lines, with the red line showing the formation history of GCs more massive than 10^5 M_⊙)
with the star formation history of all model galaxies in the simulated volume
considered. The black dashed line shows the average star formation history
obtained considering all model galaxies more massive than 10^9 M_⊙, while the orange and cyan lines correspond to galaxies more and less massive than 3.2× 10^10 M_⊙, respectively. All lines have been normalised to unity in the top panel to ease the comparison, while the middle and bottom panels show the actual values of the GC formation rate and the star formation rate, respectively. The figure shows that surviving GCs correspond to the oldest
stellar component formed in our model galaxies, with a formation history that
peaks at redshift higher than that obtained for the most massive galaxies, and
a clear trend for more massive GCs forming earlier than their less massive
counterparts. This is a survivor bias – low-mass clusters are disrupted more rapidly, and any surviving low-mass GCs are therefore more likely to be young. In Section <ref>, we will analyse in more detail the
age (and metallicity) distribution of GCs associated with galaxies of different
stellar mass.
§ SCALING RELATIONS AND SPECIFIC FREQUENCY OF GLOBULAR CLUSTERS
Several observational studies have highlighted the existence of well-defined scaling relations between the GC population and the properties of their
host galaxies. In particular, the total mass in GCs scales almost linearly with
the host galaxy's halo mass across at least three or even four orders of
magnitude in halo mass <cit.>. The
relation possibly extends for another two orders of magnitude (down to halo masses ∼ 10^9 M_⊙), with no significant deviation from
linearity but an increased scatter towards low halo masses
<cit.>.
The observed correlation appears surprising because, if GC formation is related
to star formation, one would expect a more natural correlation between the
total mass/number of GCs and the stellar mass of the galaxy. Virtually all studies
mentioned above interpreted the strong quasi-linear relation observed as an
indication that GCs formed at early times, with the process being unaffected by
the stellar/AGN feedback that introduces a non-linear correlation between
galaxy stellar mass and halo mass. An alternative explanation was provided by
<cit.>, who argued that the nearly constant ratio (η)
between the total mass in GCs and the halo mass arises from the combination of
galaxy formation within dark matter haloes and strongly
environmentally dependent cluster disruption. Recent work by
<cit.>, based on a semi-analytic model for GC formation
coupled to dark matter merger trees, has argued that a constant value of η
is a natural consequence of hierarchical assembly, with the relation being
sensitive to the details of GC formation at low halo masses (< 10^11.5 M_⊙). These arguments have been revisited in a recent work by
<cit.>, based on the E-MOSAICS simulation suite. In
particular, the latter study argues that the normalisation of the relation is
primarily set by cluster disruption, while the downturn also predicted by
<cit.> at low halo masses is imprinted by the underlying
relation between galaxy stellar mass and halo mass.
Figure <ref> shows our model predictions. The left panel shows the
predicted correlation between the total mass in GCs and the parent halo mass,
for all model central galaxies in the simulated volume considered (solid
circles with error bars show the 25th and 75th percentiles of the
distributions). We have considered only central galaxies because observational
work typically uses scaling relations that are calibrated on central galaxies to
infer the parent halo mass. In addition, to have a fair comparison with
observational data, we considered all stellar clusters
with masses ≳ 1.2×10^5 M_⊙, age ≳ 7 Gyr, and
-2.54 ≲ [Fe/H] <0.34. These limits are motivated by the properties of the Galactic GC population <cit.>. We also show results
obtained when splitting our model galaxies into two subsamples according to
their bulge-to-total stellar mass ratio (solid and open squares correspond to
bulge/disc-dominated systems, respectively). In the right panel of the same
figure, we show how the ratio between the total mass in GCs and the halo mass
varies as a function of the halo mass. Empty symbols with error bars correspond to
the median and percentiles of the distribution obtained including also
satellite galaxies. In this case, the plotted halo mass is obtained by
multiplying the number of bound particles associated with the parent dark
matter subhalo[For `orphan' galaxies, i.e. galaxies that are no longer
associated with a distinct dark matter substructure, we have considered the
number of particles associated with the last identified parent substructure.]
by the particle mass. Lines with different styles in both panels show observational estimates, as indicated in the legend.
Figure <ref> shows that our model predicts a quasi-linear relation
between the total mass in GCs and halo mass, for haloes more massive than ∼ 3× 10^12 M_⊙. A deviation from linearity is predicted for
lower-mass haloes. As mentioned above, a curvature at low halo masses has been found in previous work
<cit.>. In our model, the curvature is not affected by the alternative physical models considered (see Section <ref> and the discussion below). The curvature is also only slightly weakened by the inclusion of satellite model galaxies. The latter has a more pronounced effect in the E-MOSAICS simulation suite <cit.>. We note here that haloes with mass ∼ 10^12 M_⊙ are resolved with about 850 particles in the Millennium Simulation, so it would be interesting to verify the trends predicted with a higher resolution volume – a higher resolution would allow smaller mergers to be resolved, and these could potentially bring into the halo a larger number of GCs from lower-mass, accreted galaxies. In addition, one should bear in mind that observed halo masses of
low-mass galaxies are affected by relatively large uncertainties, which
limits the statistical significance of a downturn in observational data (see
discussion in <cit.>).
Our model also predicts that bulge-dominated galaxies are typically characterised by a larger total mass in GCs compared to galaxies of similar
stellar mass but later morphological type. This is not surprising given
that the main channel for bulge formation in our model is represented by
galaxy mergers <cit.>, and that we have assumed that mergers
trigger the migration of stellar clusters from the disk to the halo. A positive correlation between the number of GCs and the number of mergers was also found in the E-MOSAICS simulation suite <cit.>. Early
observational studies did not find any significant dependence of η on
the environment or the morphological type of the galaxy
<cit.>. Larger statistical samples
and revised analysis methods have highlighted a second-order difference between
ellipticals and spirals <cit.> in the same sense that is
predicted by our model, but less pronounced. This could suggest a weaker correlation between bulge formation and stellar cluster migration than that assumed in our model.
Analysing our results based on alternative physical prescriptions, we find that
both efficient cluster disruption and a treatment for dynamical friction are
needed in our model to predict both the correct slope and normalisation of the
relation. Figure <ref> shows the impact of efficient cluster disruption during mergers (left panel) and dynamical friction (right panel) on the relation between the total mass in GCs and the parent halo mass. As in
Figure <ref>, we plot the median relation (and percentiles) obtained
for all model galaxies (filled circles), and for different morphological types
(filled and empty squares). Both physical modifications cause an increase of
the number (and total mass) of GCs associated with model galaxies when switched
off. This affects the overall normalisation of the predicted relation. Since
galaxy mergers are, by construction, more important for early-type galaxies,
the lack of an efficient cluster disruption during mergers has a stronger
impact for early-type rather than late-type galaxies, thus further strengthening the morphological dependence and implying that our fiducial model is more consistent with observations. Since early-type galaxies
represent a larger fraction of the overall population with increasing galaxy
stellar mass, these particular physical ingredients also affect the predicted
slope of the relation. Our treatment for dynamical friction reduces the mass of
GCs that is associated with model galaxies, more or less independently of the
galaxy type. This translates into a significant effect only on the overall
normalisation of the relation, but a negligible effect on its shape.
Figure <ref> shows the predicted specific frequency of GCs as a function of galaxy stellar mass. For this comparison, we have considered the same cuts in GC mass, age, and metallicity indicated above, and we have only considered galaxies (both centrals and satellites) with a bulge-to-total stellar mass ratio larger than 0.7 residing in haloes more massive than 10^14 M_⊙. These limits have been adopted to allow a fair comparison with the observational data by <cit.>, based on the Virgo
Cluster Survey conducted with the Hubble Space Telescope. Our simulated sample
considered here consists of about 3000 galaxies, i.e. about 40 times larger than
the number of galaxies included in the observational data set over the same
stellar mass range.
Figure <ref> shows that the model predictions cover the same region as the observational estimates, with a few noticeable differences. In particular, for the lowest-mass galaxies considered, the observational estimates tend to be larger than our model predictions. The latter exhibit a significant bending of the median GC mass per galaxy stellar mass towards lower values. This might be due to resolution, i.e. to the fact that an artificially low number of mergers is resolved in this stellar mass range. For galaxies with intermediate stellar masses (between ∼ 10^10 and 10^11 M_⊙), the median values of our model predictions lie systematically above the observational measurements, with a large number of model galaxies characterised by a specific frequency larger than that measured for Virgo galaxies of comparable stellar mass. We note that the model predictions shown in Figure <ref> correspond to an ensemble of massive haloes. By contrast, the observational data sample is based on one galaxy cluster only. Given the large variations of halo mass accretion histories, and the large expected stochasticity in the merger histories of their hosted galaxies, cosmic variance can significantly affect the comparison illustrated in this figure.
An interesting result shown in
Figure <ref>, which is valid in general and not specific to the cluster
environment, is the relatively large scatter of the predicted specific
frequency/mass in GCs at fixed galaxy stellar mass. This result, which can be
appreciated already considering the two specific case studies discussed in
Section <ref>, is a natural consequence of the stochasticity of the merger
events and of the large scatter of galaxy physical properties and GC population
at the time of mergers. While a general trend as a function of galaxy stellar
mass (see also the next section) exists and is expected (the average number of
galaxy mergers increases as a function of galaxy mass), this scatter is
important and needs to be taken into account to correctly interpret the
observational results.
As discussed above, the mass in GCs (and as a consequence also the specific
frequency of GCs) increases significantly without an efficient cluster
disruption, or when no treatment for dynamical friction is included.
Figure <ref> shows that, when these two physical processes are
switched off, both the number and the total mass in GCs associated with model
galaxies increase significantly to values that are considerably larger than current
observational estimates. The left panel of the figure shows that the difference
is larger for the specific frequency, suggesting a different shape of the
cluster mass function in these alternative models. We will come back to this in the next section.
§ MASS, METALLICITY AND AGE DISTRIBUTIONS OF GLOBULAR CLUSTERS
In this section, we analyse the physical properties of GCs and their
distributions in terms of mass, metallicity, and age. We will compare these with
available observational measurements for the Milky Way and analyse how the
distributions vary as a function of galaxy stellar mass. For the analysis
presented in this section, we have considered only model galaxies that host at
least one GC. This does not significantly affect the results presented below,
except for the two lowest galaxy stellar mass bins considered. For such low-mass galaxies, the large number of model galaxies with zero GCs (likely due to
resolution limits) has a non-negligible effect on the median distributions
presented.
Figure <ref> shows the mean (dashed line) and median (solid line) mass
distributions of GCs associated with Milky Way-like model galaxies, compared
with the observed GC mass function of the Milky Way (filled circles with error bars; from <cit.>). The model sample
used for this figure consists of 59,951 galaxies that are disc dominated (the
bulge-to-total mass ratio is smaller than 0.2) and that are centrals of haloes with mass between 3×10^11 and 3×10^12 M_⊙.
The dotted lines show a few individual (random) examples. These highlight the
existence of a large galaxy-to-galaxy variance (in terms of normalisation, peak mass, presence of a double peak), which is determined by the different merging histories and physical properties of galaxies at the time of GC formation/migration, even though we consider only a limited halo mass range and galaxies with a low number of mergers (as reflected by the cut in morphological type).
The median distribution obtained for model galaxies follows the observational
measurements quite well, with a small deficit of GCs at the peak of the
distribution, and a more significant deficit at masses larger than ∼6×10^5 M_⊙ that could be improved by using a less
`aggressive' treatment for dynamical friction (see Section <ref>).
This can be seen in the middle panel of Figure <ref>, where we
show the same model predictions, but from a run without a treatment for dynamical
friction. In this run, the predicted GC mass distribution for Milky Way-like
galaxies extends up to ∼ 10^8 M_⊙, with a gradual decrease for masses larger than ∼ 2× 10^5 M_⊙. At variance with model
predictions discussed in the previous section, we find that the results shown in
Figure <ref> are not significantly affected if no efficient
cluster disruption during galaxy mergers is considered (see the top panel of
Figure <ref>). This is not surprising, given that the sample of
galaxies considered in this case is dominated by late-type galaxies, i.e. galaxies that did not experience a significant number of mergers. Switching
off this physical ingredient actually brings model results in somewhat better
agreement with the observed peak of the mass distribution. However, as we have discussed in the previous section, an efficient cluster disruption during galaxy mergers is needed to reproduce the normalisation of the relation between the total mass in GCs and halo mass. Finally, the bottom panel of Figure <ref> shows that, when the assumption of an environmental dependence of cluster radii is relaxed, the peak of the mass distribution is moved to lower masses (for the Milky Way-like galaxies considered here, it moves to ∼ 5×10^4 M_⊙), and the number of GCs below the peak increases by a factor of 2-3. This is a consequence of the treatment adopted (see Section <ref> and
eqs. <ref>-<ref>), which leads to longer survival
times for stellar clusters less massive than 10^4 M_⊙ if a constant, compact radius is adopted.
Figure <ref> shows the age-metallicity distribution of GCs
associated with the same Milky Way-like model galaxies considered above and
compares it with observational estimates of GCs in our Galaxy (circles with
error bars). Considering that observations are cut at [Fe/H]=-0.5, the agreement in terms of the metallicity distribution is fairly good. As for ages, our model predictions
are systematically younger than the observational estimates by ∼ 0.75 Gyr, on
average. A similar shift was found for the E-MOSAICS simulations <cit.>. Specifically, <cit.> found a median value of τ_25 (the assembly time of 25 per cent of the final halo mass) of 10.78 Gyr, against 11.5 Gyr estimated for the MW. The same was found for the GCs: the median age of GCs in the MW is 12.26 Gyr, whereas for galaxies in E-MOSAICS the median GC age was found to be 10.73 Gyr. This shift might be explained by an anomalously early formation time of our Galaxy. In fact, there are several further observations confirming the idea that the Milky Way is not a `typical' galaxy for its stellar mass, such as the offset from the Tully-Fisher relation <cit.>, the unusual satellite population <cit.>, and the tension between the early formation of the Milky Way's disk inferred from galactic archaeology and state-of-the-art numerical simulations <cit.>.
We have tested the impact of all relevant parameters in the GC model (see Section <ref>), and we find that the only physical process that has a significant impact on the age-metallicity distribution shown in Figure <ref> is the treatment of chemical enrichment (more specifically, metal ejection) in small haloes.
Figure <ref> shows the age-metallicity distribution of GCs
associated with MW-like galaxies, in a model in which 95 per cent of the metals
in small haloes are ejected directly into the hot gas phase. The figure shows a
bimodal distribution of metallicities with a second peak at very low values
([Fe/H]∼ -2.5). These GCs all have very old ages (∼ 13 Gyr) and are
therefore associated with very early episodes of star formation in low-mass
haloes. The `bimodality' that is observed in Figure <ref> could be affected by the limited resolution of the simulation adopted: a higher resolution would resolve a larger number of mergers with lower mass galaxies. This could increase the number (and total mass) of GCs associated with low mass galaxies and affect the metallicity distribution. Although we plan to investigate this in future work, we note here that this feature suggests the prescriptions adopted for chemical enrichment within low-mass haloes may need revisiting. A similar conclusion was also reached independently by studies focusing on the abundances and properties of Damped Lyα absorbers predicted by our GAEA model <cit.>.
Finally, we consider the mass distributions and age-metallicity distributions for model galaxies in bins of galaxy stellar mass, independently of the parent halo mass and morphology.
Figure <ref> shows that the peak of the mass distribution increases
weakly with galaxy stellar mass, varying between ∼ 1× 10^5 M_⊙ and ∼ 2× 10^5 M_⊙ over two orders of magnitude in galaxy stellar mass, in qualitative agreement with findings by <cit.> based on the Virgo Cluster Survey. Figure <ref> shows the median mass distribution obtained
in runs where different physical ingredients in our model have been switched
off. The left panel corresponds to a run with no efficient cluster disruption
during mergers; the middle panel to a run with no dynamical friction; the right
panel to a run with no environmental dependence of cluster radii. The impact of
these model prescriptions is consistent with what was discussed for the mass
distribution of GCs in Milky Way-like galaxies. In addition, the figure shows
that the lack of a prescription for dynamical friction virtually removes the
already weak dependence of the peak of the mass distribution on the stellar mass of the galaxy for galaxies more massive than ∼ 10^10.5 M_⊙.
Figure <ref> shows the age-metallicity distribution of GCs
associated with model galaxies of increasing galaxy stellar mass, as indicated
in the different panels. For the lowest mass galaxies considered, most of the
GCs have very old ages and a relatively narrow range of metallicities. As the
galaxy stellar mass increases, both the ranges of ages and metallicities widen,
extending to younger ages and larger metallicities (the two are of course
correlated: younger GCs are formed in gas that has been enriched by previous
generations of stars, and therefore also have larger metallicities).
Figure <ref> shows the age-metallicity distribution of GCs
associated with galaxies of different stellar mass, for a model where 95 per cent of the metals in low-mass haloes are ejected directly into the hot gas
phase. The figure shows that this different assumption about metal enrichment
in small haloes leads to the development of secondary density peaks around very
old ages and low metallicities, for all galaxy masses considered. This secondary peak at very low metallicities and old ages is dominant in the least massive galaxies considered, supporting the warning given above that this result might be affected by resolution.
§ DISCUSSION AND CONCLUSIONS
In this paper, we present an end-to-end description of the formation process of
GCs, which combines a treatment for their formation and dynamical evolution
within galaxy haloes with a modelling of the latter through a state-of-the-art
semi-analytic model of galaxy formation and evolution. This theoretical framework has been constructed by effectively coupling the GC model presented in <cit.> and the GAlaxy Evolution and Assembly (GAEA) semi-analytic model <cit.>. A similar approach has
been recently used in the framework of the E-MOSAICS project
<cit.>. Being based on hydrodynamical
simulations, however, these studies are limited to relatively small cosmological
boxes (∼ 50 Mpc on a side) or to a few individual re-simulations. Our
fully semi-analytic approach requires significantly shorter computational
times, allowing us to:
(i) efficiently explore the coupling between the galaxy and star cluster
formation physics parameter space by running a large number of model
variants;
(ii) model large populations of galaxies to study the effect of
environment and assembly history with exquisite statistics;
Our approach also improves upon previously published models that parametrize
the formation of the GC populations using dark matter accretion histories
extracted from cosmological simulations <cit.>, by including an explicit treatment for galaxy formation and
the effect of the evolving galactic environment on star cluster formation and
disruption.
Our model reproduces naturally the observed correlation between the total mass
in GCs and the parent halo mass <cit.>.
The predicted relation is linear for haloes more massive than ∼ 3×10^12 M_⊙, with a deviation from linearity for lower halo masses. In the framework of our model, such a deviation is not affected by GC formation physics, at least not by the processes that we have explicitly varied: a fixed/varying value for the minimum cluster mass scale, the environmental dependence of GC radii, the efficiency of cluster disruption during mergers, and dynamical friction. In our model, the turnover at low masses is driven by a significant dependence on morphological type in this mass range: bulge-dominated galaxies host, on average, larger masses of GCs than their late-type counterparts, resulting in a closer to linear behaviour at low halo masses. This dependence is a natural consequence of our assumption that cluster migration from the disk to the halo is triggered by galaxy mergers, and that bulges are predominantly built through mergers. Although a similar dependence is seen in the observational data, the observed effect is not as strong as predicted by our model, which might suggest a weaker correlation between bulge formation and the migration of stellar clusters than predicted by our model.
One caveat to bear in mind is that haloes with mass ∼ 10^12 M_⊙ are resolved with ∼ 800 particles in the Millennium Simulation that we have employed in our study. This might result in merger trees that are not resolved well enough to give convergent results for the corresponding central galaxies: increased resolution would lead to a larger number of mergers (with smaller haloes) and these could potentially bring in a larger number of GCs from accreted galaxies. This could reduce the bending of the predicted relation at low halo masses, and potentially also the difference in GC mass predicted for late- and early-type central galaxies. Such a difference is, however, predicted also for galaxies that are well resolved within the simulation employed in this study, and therefore represents a robust prediction of our model.
We find that both the slope and the normalisation of the predicted relation between halo mass and total mass in GCs depend on cluster physics. In particular, our model requires both an efficient cluster disruption during galaxy mergers and a rather aggressive treatment for dynamical friction to bring model predictions in close agreement with observational results: the former affects both the slope and the normalisation of the relation because of the stronger impact on early-type galaxies and the varying fraction of these galaxies as a function of galaxy stellar mass; the latter has a significant effect on the overall normalisation but a negligible impact on the slope.
Our reference model reproduces quite well the observed mass distribution of GCs in our Galaxy, with a deficit at GC masses larger than 6×10^5 M_⊙ that could be improved with a less aggressive treatment for dynamical friction. At lower GC masses, our model requires an environmental dependence of GC radii (i.e. lower survival times for less massive clusters) to bring model predictions in agreement with observational data. The model GCs in Milky Way-like galaxies tend to be systematically younger (by ∼ 1 Gyr) than those observed in our Galaxy. A similar result was found in the E-MOSAICS simulations <cit.> and could be explained by an anomalously early formation of our Galaxy. The metallicity distribution is unimodal, with a peak at [Fe/H]∼ -0.5. When considering the overall galaxy population, our model also predicts a weak increase of the peak of the GC mass distribution with increasing galaxy stellar mass, in qualitative agreement with observational results. The predicted age-metallicity distribution depends significantly on galaxy mass: both the age and metallicity ranges widen for more massive galaxies, which include younger and more metal-rich GCs. Again, there is no clear sign of bimodality in either the age or metallicity distributions obtained when considering the entire galaxy population in the simulated volume.
As mentioned above, the large volume of the simulation translates into large statistical samples of galaxy populations. Our sample of Milky Way-like galaxies, selected only on the basis of halo mass and bulge-to-total mass ratio and using only about 10 per cent of the Millennium Simulation volume, is made up of about 60,000 galaxies. Our results highlight the existence of a very large galaxy-to-galaxy variance, even in the limited halo mass bin of Milky Way-like haloes, which is driven by the different galaxy merger histories and physical conditions at GC formation and migration. When considering individual galaxies, bimodal or multi-modal metallicity and age distributions are not uncommon. In future studies, we plan to investigate in further detail how the predicted properties of GCs depend on the mass accretion and merger history of their parent galaxy/halo.
§ ACKNOWLEDGEMENTS
GDL gratefully acknowledges support from the Alexander von Humboldt Foundation, and the hospitality of Heidelberg University, where part of this work was carried out.
JMDK gratefully acknowledges funding from the DFG through an Emmy Noether Research Group (grant number KR4801/1-1).
STG gratefully acknowledges the generous and invaluable support of the Klaus Tschira Foundation.
JMDK and STG gratefully acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme via the ERC Starting Grant MUSTANG (grant agreement number 714907).
COOL Research DAO is a Decentralised Autonomous Organisation supporting research in astrophysics aimed at uncovering our cosmic origins.
MH acknowledges funding from the Swiss National Science Foundation (SNF) via the PRIMA Grant PR00P2 193577 “From cosmic dawn to high noon: the role of black holes for young galaxies”.
§ DATA AVAILABILITY
The model data underlying this article will be shared on request to the corresponding author. An introduction to GAEA, a list of our recent work, as
well as data files containing published model predictions, can be found at: https://sites.google.com/inaf.it/gaea/
|
http://arxiv.org/abs/2307.00216v1
|
20230701034825
|
Rounding-Error Analysis of Multigrid V-Cycles
|
[
"Stephen F. McCormick",
"Rasmus Tamstorf"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"65F10, 65G50, 65M55"
] |
Rounding-Error Analysis of Multigrid V-Cycles
Stephen F. McCormick Rasmus Tamstorf
========================================================================================================
This paper provides a rounding-error analysis for two-grid methods that use one relaxation step both before and after coarsening. The analysis is based on floating point arithmetic and focuses on a two-grid scheme that is perturbed on the coarse grid to allow for an approximate coarse-grid solve. Leveraging previously published results, this two-grid theory can then be extended to general V(μ,ν)-cycles, as well as full multigrid (FMG). It can also be extended to mixed-precision iterative refinement (IR) based on these cycles. An added benefit of the theory here over previous work is that it is obtained in a more organized, transparent, and simpler way.
rounding-error analysis, floating point arithmetic, mixed precision, multigrid
65F10, 65G50, 65M55
§ INTRODUCTION AND NOTATION
Our aim is to analyze the effects of applying floating-point arithmetic to multigrid methods for computing the solution y ∈ ℝ^n of
Ay = r,
where r ∈ ℝ^n is given and A ∈ ℝ^n × n is a given symmetric positive definite matrix, with at most m_A nonzeros per row. In concert with the iterative refinement application of multigrid described in <cit.>, our focus is on a two-grid (TG) cycle applied to (<ref>) that starts with a zero initial guess and uses both one pre-relaxation (i.e., before coarsening) and one post-relaxation (i.e., after coarsening). This allows all of the theory in <cit.> and <cit.> to be extended to a general V(μ, ν)-cycle. We assume that the reader is familiar with this earlier theory, but emphasize that the analysis here represents an improvement in terms of organization, transparency, and simplicity.
Assume that A has been computed exactly in “low” precision, whose unit roundoff we denote by ε̇, as in <cit.> and, for simplicity, that A is scaled so that ‖A‖ = 1, where ‖·‖ denotes the Euclidean norm. Assume also, as in <cit.>, that the interpolation matrix P ∈ ℝ^n × n_c, n_c < n, and the Galerkin coarse-level matrix A_c = P^tAP are exact to ε̇-precision, and that P has at most m_P nonzeros in any row or column and is scaled so that ‖A_c‖ = 1. The two-grid cycle is assumed to use pre-relaxation and post-relaxation given by the stationary linear iterations x ← x - M (Ax - b) and x ← x - N (Ax - b), respectively. Here, M, N ∈ ℝ^n × n are easily computed nonsingular matrices that approximate A^-1 well enough to guarantee convergence in energy in that ‖I - M A‖_A < 1 and ‖I - N A‖_A < 1, where I is the identity. (I_c is used below to denote the coarse-level identity.) Coarsening in TG consists of an exact solve of the Galerkin coarse-level problem that has been perturbed to allow for a recursive solve as in <cit.>. We specify this perturbed coarse-grid correction in TG as B_c A_c^-1 P^t r, where B_c ∈ ℝ^n_c × n_c satisfies ‖B_c - I_c‖_A_c < 1.
We complete this section with additional notation.
Denote the condition numbers of A and A_c by κ = ‖A‖·‖A^-1‖ = ‖A^-1‖ and κ_c = ‖A_c‖·‖A_c^-1‖ = ‖A_c^-1‖, respectively. Define the energy norm by ‖·‖_A = ‖A^1/2 ·‖ and let
η_A = ‖|A|‖, η_P = ‖|P|‖, η_M = ‖M‖, η_N = ‖N‖_A,
ṁ^+_A = (m_A + 1)ε̇/(1 - (m_A + 1)ε̇), ṁ^+_P = (m_P + 1)ε̇/(1 - (m_P + 1)ε̇), and ṁ^+_N = (m_N + 1)ε̇/(1 - (m_N + 1)ε̇).
The measures used here for M and N are the respective Euclidean and energy norms. The Euclidean norm for pre-relaxation is a departure from what is used in <cit.> because it allows us to avoid the more cumbersome energy norm in the earlier stages of the proof; see (<ref>)-(<ref>). For Krylov methods, these two measures are the same because ‖M‖_A = ‖A^1/2 M A^-1/2‖ = ‖M‖ when M is a polynomial in A.
To account for rounding errors in relaxation, suppose that constants α_M and α_N exist such that computing M z and N z for a vector z ∈ ℝ^n in ε̇-precision yields the respective results
M z + δ_M, ‖δ_M‖ ≤ α_M ‖z‖, and
N z + δ_N, ‖δ_N‖ ≤ α_N ‖z‖.
Because our error bounds are posed in energy while our basic precision estimates are in the Euclidean norm, the analysis that follows makes frequent use of the following inequalities that are assumed to hold for any w ∈ ℝ^n:
‖w‖ = ‖A A^-1 w‖ ≤ ‖A^1/2‖·‖A^1/2 A^-1 w‖ = ‖A^-1 w‖_A ,
‖w_c‖ = ‖A_c^-1/2 A_c^1/2 w_c‖ ≤ ‖A_c^-1/2‖·‖A_c^1/2 w_c‖ = κ_c^1/2 ‖w_c‖_A_c ,
and
‖w‖_A = ‖A^1/2 w‖ ≤ ‖w‖ = ‖A^-1/2 A^1/2 w‖ ≤ ‖A^-1/2‖·‖w‖_A = κ^1/2 ‖w‖_A .
The remainder of the paper states basic rounding-error estimates in Section <ref>, followed by a description of in Section <ref>. The two-grid theory is then presented in Section <ref>, and concluding remarks about extending to general V(μ, ν)-cycles, full multigrid (FMG), and progressive precision are given in Section <ref>.
§ ROUNDING-ERROR MODELS
Floating-point error estimates in this section are taken from <cit.>.
Quantization error in ε̇-precision for w ∈ ℝ^n obeys the estimate
fl(w) = w + δ, ‖δ‖ ≤ ε̇ ‖w‖ ,
where fl(·) denotes the computed result. With v, w ∈ ℝ^n, we also have that
fl(v ± w) = v ± w + δ, ‖δ‖ ≤ ε̇ ‖v ± w‖ .
Let the absolute value of a vector or matrix denote the respective vector or matrix of absolute values. For any K ∈ ℝ^n × n and with m_K and ṁ^+_K defined in analogy to the above, the model for ε̇-precision computation of the residual Kw - c is then given by
fl(Kw - c) = Kw - c + δ, ‖δ‖ ≤ ṁ^+_K (‖c‖ + ‖|K|‖·‖w‖).
With P and P^t in mind, note that (<ref>) reduces loosely for K in ℝ^m × n or ℝ^n × m to
fl(Kw) = Kw + δ, ‖δ‖ ≤ ṁ^+_K ‖|K|‖·‖w‖.
Instead of using fl(·) below, we add δ's to exact expressions to denote quantities computed in finite precision. For example, the computed residual in (<ref>) might be written as Kw - c + δ_r while the computed solution of Kw = c might be K^-1c + δ.
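As a small sanity check of the residual model above, the following Python sketch (not part of the paper; it assumes that NumPy's float16 arithmetic acts as a round-to-nearest low precision with unit roundoff ε̇ = 2^-11, and all variable names are illustrative) evaluates Kw - c with every operation rounded to float16 and compares the observed error with the bound ṁ^+_K(‖c‖ + ‖|K|‖·‖w‖):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_K = 50, 5                                   # m_K nonzeros per row, plus the subtraction of c
eps = float(np.finfo(np.float16).eps) / 2        # unit roundoff of float16 (2**-11)

# Data that is exactly representable in float16 (the model assumes exact low-precision inputs).
K = np.zeros((n, n))
for i in range(n):
    cols = rng.choice(n, size=m_K, replace=False)
    K[i, cols] = rng.standard_normal(m_K).astype(np.float16).astype(np.float64)
w = rng.standard_normal(n).astype(np.float16).astype(np.float64)
c = rng.standard_normal(n).astype(np.float16).astype(np.float64)

exact = K @ w - c                                # double-precision reference

# Evaluate K w - c with every individual operation rounded to float16.
Kh, wh, ch = K.astype(np.float16), w.astype(np.float16), c.astype(np.float16)
low = np.zeros(n)
for i in range(n):
    s = np.float16(0.0)
    for j in np.nonzero(Kh[i])[0]:
        s = np.float16(s + np.float16(Kh[i, j] * wh[j]))
    low[i] = np.float64(np.float16(s - ch[i]))

delta = np.linalg.norm(low - exact)
m_plus = (m_K + 1) * eps / (1 - (m_K + 1) * eps)
bound = m_plus * (np.linalg.norm(c) + np.linalg.norm(np.abs(K), 2) * np.linalg.norm(w))
print(f"observed error {delta:.3e}  <=  model bound {bound:.3e} ? {delta <= bound}")
```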
§ TWO-GRID CYCLE
The pseudocode for the two-grid cycle, TG, is shown in Algorithm <ref> below.
All computations in this algorithm are performed in “low” -precision (green font), except for the exact, infinite-precision coarse-level solve (black font). Accordingly, since the input right-hand side (RHS) may be in higher precision, the cycle is initialized with a rounding step. The solver then proceeds by relaxing on the zero initial guess to the solution of (<ref>), improving the result by a coarse-level correction based on prolongation, and then applying one more relaxation step. We use subscripts μ and ν here and in the next section to identify quantities occurring respectively before and after coarse-grid correction.
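The original algorithm listing is not reproduced here; the following Python sketch is an illustrative stand-in (the name tg_cycle and the use of float32 as the “low” precision are assumptions, not the authors' code) that follows the steps analysed below: quantize the right-hand side, pre-relax with M on a zero initial guess, restrict the residual, apply a possibly perturbed Galerkin coarse solve playing the role of B_c A_c^-1, prolong and correct, and post-relax with N.

```python
import numpy as np

def tg_cycle(A, P, M, N, r, low=np.float32, coarse_solve=None):
    """One two-grid cycle for A y = r with a zero initial guess, carried out in `low` precision."""
    A, P, M, N = (X.astype(low) for X in (A, P, M, N))
    Ac = P.T @ A @ P                          # Galerkin coarse-level matrix (assumed available)
    r = r.astype(low)                         # quantize the right-hand side
    y = M @ r                                 # pre-relaxation applied to the zero initial guess
    r_mu = A @ y - r                          # fine-level residual
    r_c = P.T @ r_mu                          # restriction
    if coarse_solve is None:
        d_c = np.linalg.solve(Ac, r_c)        # exact coarse solve (B_c = I_c)
    else:
        d_c = coarse_solve(Ac, r_c)           # perturbed solve, playing the role of B_c A_c^{-1}
    y = y - P @ d_c.astype(low)               # prolongation and coarse-grid correction
    y = y - N @ (A @ y - r)                   # post-relaxation
    return y
```

With M = N chosen, for instance, as damped Jacobi, iterating such a cycle inside an outer iterative refinement loop corresponds to the setting described above.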
Assume for the rest of this section that TG is performed in infinite precision, meaning that y and the other quantities are computed in exact arithmetic. A basic assumption we make is that TG converges in infinite precision. Noting that the respective initial and final errors are 0 - A^-1r = - A^-1r and y - A^-1r, we assume that there exists a ρ_tg^* < 1 such that
‖y - A^-1r‖_A ≤ ρ_tg^* ‖A^-1r‖_A .
Note then that
‖y‖_A ≤ ‖y - A^-1r‖_A + ‖A^-1r‖_A ≤ (1 + ρ_tg^*) ‖A^-1r‖_A ≤ 2 ‖A^-1r‖_A .
The theory uses the fact that the intermediate result y_ν does not increase the initial energy error. The fact that this is true is a direct result of the bound just after equation (5.2) in <cit.>. For completeness, we derive this bound tailored to our setting. Specifically, since T ≡ I - P(P^tAP)^-1P^tA is an energy-orthogonal projection onto the energy-orthogonal complement of P (c.f. <cit.>), it follows that the error propagation matrix connecting the initial error - A^-1r to the intermediate error y_ν - A^-1r is bounded according to
(I - PB_c (P^tAP)^-1P^tA)(I - MA)_A^2
≤(I - PB_c (P^tAP)^-1P^tA)_A^2 I - MA_A^2
≤(I - PB_c (P^tAP)^-1P^tA)_A^2
= T_A^2 + P(B_c - I_c) (P^tAP)^-1P^tA_A^2
≤T_A^2 + B_c - I_c_A_c^2 (P^tAP)^-1P^tA_A_c^2
≤T_A^2 + I - T_A^2 = 1.
We can thus conclude that
‖y_ν - A^-1r‖_A ≤ ‖A^-1r‖_A and ‖y_ν‖_A ≤ ‖A^-1r‖_A + ‖y_ν - A^-1r‖_A ≤ 2 ‖A^-1r‖_A .
Note also that
‖A^-1r_μ‖_A = ‖y_μ - A^-1r‖_A = ‖(I - MA) A^-1r‖_A ≤ ‖A^-1r‖_A
and, since I - A^1/2PA_c^-1P^tA^1/2 is an orthogonal projection onto the orthogonal complement of the range of A^1/2P, then A^1/2PA_c^-1P^tA^1/2 ≤ I and, hence,
‖A_c^-1P^tr_μ‖_A_c = ⟨ PA_c^-1P^tr_μ, r_μ⟩^1/2 ≤ ⟨ A^-1r_μ, r_μ⟩^1/2 = ‖A^-1r_μ‖_A ≤ ‖A^-1r‖_A .
This bound and the fact that ‖B_c‖_A_c ≤ ‖B_c - I_c‖_A_c + ‖I_c‖_A_c < 2 imply that
‖d_c‖ ≤ κ_c^1/2 ‖d_c‖_A_c = κ_c^1/2 ‖B_c A_c^-1 r_c‖_A_c ≤ 2κ_c^1/2 ‖A_c^-1P^tr_μ‖_A_c ≤ 2 κ_c^1/2 ‖A^-1r‖_A .
In the following, we make use of the above infinite-precision bounds to establish results for finite-precision computations.
§ FINITE PRECISION TWO-GRID THEORY
The main theorem below makes use of the following parameters:
C_0 = (1 + (1 + η_A η_M) + + (1 + ) η_A (η_M + α_M (1 + ))) ,
C_1 = κ_c^1/2 (η_P C_0 + η_P (1 + η_A η_M + C_0)) ,
C_2 = 2 κ_c^1/2η_P (1 + C_1) + 2 C_1 ,
C_3 = C_2 + ((2 + C_2)κ^1/2 + (η_M + α_M (1 + ))(1 + )) ,
C_4 = η_A C_3 + ((η_A (2 + C_3) + 1) κ^1/2 + ) ,
C_5 = (2 + C_3 + η_N C_4 + (1 + κ^1/2 C_4)) κ^1/2 .
The complexity of these parameters is due to the many finite-precision steps that requires. We simplify them somewhat in Remark <ref> by considering their asymptotic behavior with respect to n for the applications we have in mind.
Two-Grid Error Estimate. Let y denote the result of one TG cycle applied to (<ref>) using exact arithmetic. Then one TG cycle applied to (<ref>) using ε̇-precision arithmetic produces a result y + δ_y that satisfies
‖y + δ_y - A^-1r‖_A ≤ ρ_tg ‖A^-1r‖_A, ρ_tg = ρ_tg^* + δ_ρ_tg ,
where δ_ρ_tg = C_3 + C_4 + C_5.
The proof is organized in an ordered list of the ε̇-precision errors as they occur, with the bounds accumulating the estimates along the way. We emphasize that the quantities used in Algorithm <ref> are understood here to be in infinite precision; errors in the computed quantities are expressed instead by adding deltas. For each step in the proof, we refer to the corresponding line number in Algorithm <ref>.
* Line <ref>. δ_r = error in quantization of r.
From (<ref>) and (<ref>):
δ_r≤r≤A^-1r_A .
* Line <ref>. δ_μ = error in M(r + δ_r) given r + δ_r.
From (<ref>), (<ref>), and (<ref>):
δ_μ≤α_M (r + δ_r) ≤α_M (1 + ) A^-1r_A .
* Line <ref>. δ_y_μ = M δ_r + δ_μ, accumulated error in y_μ = M r.
From (<ref>) and (<ref>):
δ_y_μ≤η_M δ_r + δ_μ≤ (η_M + α_M (1 + )) A^-1r_A .
* Line <ref>. δ_a_μ = error in A (y_μ + δ_y_μ) - (r + δ_r) given y_μ + δ_y_μ and r + δ_r.
From (<ref>), (<ref>), (<ref>), and (<ref>):
δ_a_μ ≤ (r + δ_r + η_A (y_μ + δ_y_μ))
≤ ((1 + η_A η_M) r + δ_r + η_A δ_y_μ)
≤ (1 + η_A η_M + + η_A (η_M + α_M (1 + )) ) A^-1r_A .
* Line <ref>. δ_r_μ = A δ_y_μ - δ_r + δ_a_μ, accumulated error in r_μ = A y_μ - r.
From (<ref>), (<ref>), and (<ref>):
δ_r_μ≤η_A δ_y_μ + δ_r + δ_a_μ≤ C_0 A^-1r_A .
* Line <ref>. δ_p_μ = error in P^t(r_μ + δ_r_μ) given r_μ + δ_r_μ.
From (<ref>), (<ref>), and (<ref>):
δ_p_μ ≤η_P (r_μ + δ_r_μ)
≤η_P (AMr + r + C_0) A^-1r_A
≤η_P (1 + η_A η_M + C_0) A^-1r_A .
* Line <ref>. δ_d_c = B_c A_c^-1 (P^tδ_r_μ + δ_p_μ), propagated error in d_c ≡ B_c A_c^-1 P^tr_c.
From (<ref>) and (<ref>):
δ_d_c_A_c≤ 2 A_c^-1(P^tδ_r_μ + δ_p_μ)_A_c≤ 2 κ_c^1/2 (η_P δ_r_μ + δ_p_μ)≤ 2 C_1 A^-1r_A .
* Line <ref>. δ_p_ν = error in P(d_c + δ_d_c) given d_c + δ_d_c.
From (<ref>) twice, (<ref>), (<ref>), and (<ref>):
δ_p_ν_A ≤η_P (d_c + δ_d_c)
≤ 2 κ_c^1/2η_P (1 + C_1) A^-1r_A .
* Line <ref>. δ_d = Pδ_d_c + δ_p_ν, accumulated error in d ≡ Pd_c.
From (<ref>) and (<ref>):
δ_d_A ≤Pδ_d_c_A + δ_p_ν_A = δ_d_c_A_c + δ_p_ν_A ≤ C_2 A^-1r_A .
* Line <ref>. δ_y_- = error in (y_μ + δ_y_μ) - (d + δ_d) given y_μ + δ_y_μ and d + δ_d.
From (<ref>), (<ref>) twice, (<ref>), (<ref>), and (<ref>):
δ_y_- ≤(y_μ + δ_y_μ) - (d + δ_d)
≤ (y_μ - d + δ_y_μ + δ_d)
≤κ^1/2 (y_ν_A + δ_d_A) + δ_y_μ
≤ ((2 + C_2)κ^1/2 + (η_M + α_M (1 + ))) A^-1r_A .
* Line <ref>. δ_y_ν = δ_y_μ - δ_d + δ_y_-, accumulated error in y_ν≡ y_μ - d.
From (<ref>), (<ref>), and (<ref>):
δ_y_ν_A ≤δ_y_μ_A + δ_d_A + δ_y_-_A ≤ C_3 A^-1r_A .
* Line <ref>. δ_a_ν = error in A(y_ν + δ_y_ν) - (r + δ_r) given y_ν + δ_y_ν and r + δ_r.
From (<ref>) four times, (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>):
δ_a_ν_A
≤ (η_A (y_ν + δ_y_ν) + r + δ_r)
≤ (κ^1/2 (η_A (y_ν_A + δ_y_ν_A) + r_A) + A^-1r_A)
≤ ((η_A (2 + C_3) + 1) κ^1/2 + ) A^-1r_A .
* Line <ref>. δ_r_ν = Aδ_y_ν + δ_a_ν, accumulated error in r_ν≡ Ay_ν - r.
From (<ref>) and (<ref>):
δ_r_ν_A ≤η_A δ_y_ν_A + δ_a_ν_A ≤ C_4 A^-1r_A .
* Line <ref>. δ_ν = error in N(r_ν + δ_r_ν) given r_ν + δ_r_ν.
From (<ref>), (<ref>) twice, (<ref>):
δ_ν ≤α_N (r_ν + δ_r_ν)
≤α_N (A(y_ν - A^-1r) + κ^1/2δ_r_ν_A)
= α_N (y_ν - A^-1r_A + κ^1/2δ_r_ν_A)
≤α_N (1 + κ^1/2 C_4) A^-1r_A .
* Line <ref> δ_r_N = Nδ_r_ν + δ_ν accumulated error in r_N ≡ N r_ν.
From (<ref>), (<ref>):
δ_r_N_A ≤η_N δ_r_ν_A + δ_ν≤ (η_N C_4 + (1 + κ^1/2 C_4)) A^-1r_A .
* Line <ref>. δ_ν = error in (y_ν + δ_y_ν) - (r_N + δ_r_N) given y_ν + δ_y_ν and r_N + δ_r_N.
From (<ref>), (<ref>) four times, (<ref>), (<ref>), and (<ref>):
δ_ν_A
≤ (y_ν - r_N + δ_y_ν + δ_r_N)
≤κ^1/2 (y_A + δ_y_ν_A + δ_r_N_A)
≤ C_5 A^-1r_A .
* Line <ref>. δ_y = δ_y_ν - δ_r_N + δ_ν, accumulated error in y ≡ y_ν - r_N.
From (<ref>), (<ref>), (<ref>):
δ_y_A ≤δ_y_ν_A + δ_r_N_A + δ_ν_A ≤ (C_3 + C_4 + C_5) A^-1r_A .
While the bound for δ_ρ_tg in (<ref>) is of course very complicated, it can, however, be used to predict the effects of specific precisions under the reasonable assumption that the involved parameters can be estimated. For uniform discretization and refinement of uniformly elliptic PDEs, all of the terms in the C_k other than ε̇ and κ^1/2 are typically O(1). The condition numbers in this case typically grow in proportion to a power of the inverse of the mesh size (and, therefore, n and n_c). Thus, since the objective is to choose ε̇ so small that ρ_tg ≪ 1, it follows that the principal term in δ_ρ_tg is ε̇κ^1/2. Given the progressive precision objective of determining ε̇ for any given n so that ε̇κ^1/2 is roughly constant, it is then easy to see that δ_ρ_tg = O(ε̇κ^1/2).
To be more specific, under assumptions that are typical of discrete elliptic equations, suppose that A represents an element of a hierarchy of matrices where κ grows with n and ε̇ is chosen to ensure that ρ_tg is suitably smaller than 1. A close inspection of the C_k reveals that theoretical convergence requires ε̇κ^1/2 to be substantially less than 1. This in turn implies that ε̇ is substantially less than 1 and must decrease as n grows. With this in mind, we can separate out the linear term in ε̇ from each constant, while the rest would then involve combinations of ε̇ and κ^1/2 of second and higher order. Since the linear terms dominate, if we eliminate C_0 because of its dependence only on ε̇ and let ξ = (κ_c/κ)^1/2, then we can write C_k = γ_k ε̇κ^1/2 + O((ε̇κ^1/2)^2), 1 ≤ k ≤ 5, where the constants are given by
γ_1 = ξ (η_P (1 + (1 + η_A η_M) + η_A (η_M + α_M)) + η_P (1 + η_A η_M)) ,
γ_2 = 2 η_P + 2 γ_1,
γ_3 = ξγ_2 + 2 + η_M,
γ_4 = η_A γ_3 + (2 η_A + 1),
γ_5 = 2.
If we now define the various quantities in the definition of the γ_k as bounds relevant to each A and P in the hierarchy and we choose so that it is roughly constant, then we can say that δ_ρ_tg_A is in fact bounded by a polynomial in with constant coefficients. The above estimates show that we can choose to be a fixed value less than 1 to guarantee that the two-grid cycle converges optimally in the sense that ρ_tg is bounded uniformly less than 1.
§ CONCLUSION
The theory in <cit.> established optimal error bounds for iterative refinement, V(1,0)-cycles, and FMG based on parameters that scale as the unit roundoffs of the precisions involved times κ^1/2 or κ. The theory in <cit.> extends these results to include multiple relaxation sweeps, quantization of A and P, and rounding errors in forming A_c. The crux of the theory in <cit.> is the bound of the ε̇-precision error for the two-grid cycle with just one pre-relaxation step. Theorem <ref> above extends this two-grid theory to also allow for post-relaxation. The proof in <cit.> for the V(1,0)-cycle relies solely on the error being bounded in energy by a polynomial in ε̇κ^1/2. Our theory here achieves the same type of bound, which means that we can appeal to <cit.> and <cit.> to achieve all of the same extensions established there. Note, for example, that our theory thus confirms optimal error estimates for general progressive-precision V(μ,ν)-cycles with multiple sweeps and FMG that uses them.
|
http://arxiv.org/abs/2307.02987v1
|
20230706134139
|
Cosmological effects of the Galileon term in Scalar-Tensor Theories
|
[
"A. G. Ferrari",
"M. Ballardini",
"F. Finelli",
"D. Paoletti",
"N. Mauri"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] |
Cosmological effects of the Galileon term in Scalar-Tensor Theories

A. G. Ferrari (angelo.ferrari3@unibo.it): Dipartimento di Fisica e Astronomia, Università di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy; INFN, Sezione di Bologna, viale C. Berti Pichat 6/2, 40127 Bologna, Italy
M. Ballardini (mario.ballardini@unife.it): Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, via Giuseppe Saragat 1, 44122 Ferrara, Italy; INFN, Sezione di Ferrara, via Giuseppe Saragat 1, 44122 Ferrara, Italy; INAF/OAS Bologna, via Piero Gobetti 101, 40129 Bologna, Italy
F. Finelli (fabio.finelli@inaf.it): INAF/OAS Bologna, via Piero Gobetti 101, 40129 Bologna, Italy; INFN, Sezione di Bologna, viale C. Berti Pichat 6/2, 40127 Bologna, Italy
D. Paoletti (daniela.paoletti@inaf.it): INAF/OAS Bologna, via Piero Gobetti 101, 40129 Bologna, Italy; INFN, Sezione di Bologna, viale C. Berti Pichat 6/2, 40127 Bologna, Italy
N. Mauri (nicoletta.mauri@bo.infn.it): Dipartimento di Fisica e Astronomia, Università di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy; INFN, Sezione di Bologna, viale C. Berti Pichat 6/2, 40127 Bologna, Italy
§ INTRODUCTION
The ΛCDM model is the standard cosmological model: it provides a remarkably good fit to most cosmological observations, such as the measurements of cosmic microwave background (CMB) anisotropies in temperature and polarization <cit.>, baryon acoustic oscillations (BAO) in the galaxy and cluster distributions <cit.>, cosmic shear measurements of the galaxy distribution <cit.> and of the CMB <cit.>, and measurements of the luminosity distances of type Ia Supernovae (SN Ia) <cit.>.
Nevertheless, in recent years its enduring success has been threatened by the growing tension between the inference of H_0 within ΛCDM from the Planck measurement of CMB anisotropy <cit.> and its determination with Supernovae at low redshift, which has grown to a maximum level of 5σ with the latest SH0ES release with SN Ia calibrated using Cepheids <cit.>.
Whereas inferences of H_0 from different CMB experiments are very consistent with one another <cit.>, different approaches in calibrating SN provide different estimates and different uncertainties in the determination of H_0, as the latest SH0ES result based on Cepheids is H_0 = 73.04 ± 1.04 km s^-1 Mpc^-1 at 68% confidence level (CL) <cit.>, while the latest Carnegie-Chicago Hubble program estimate, using the tip of the Red Giant Branch calibration technique is H_0 = 69.8 ± 0.8 (stat)± 1.7 (sys) km s^-1 Mpc^-1 at 68% CL <cit.>.
These differences in the local determinations of H_0 and their errors obviously affect the tension with the CMB inference in a statistically significant way and might point to unknown systematics or uncertainties not properly taken into account.
Nevertheless, this tension which emerged with the first Planck cosmological release <cit.> has fueled interest in models explaining a larger value of H_0 compared to ΛCDM <cit.>.
Modified gravity (MG) is one of the most promising fundamental-physics routes to reconcile the inference of H_0 from the Planck measurement of cosmic microwave background (CMB) anisotropy with its value determined from the calibration of Supernovae at low redshift within the ΛCDM model.
A simple mechanism to increase H_0 by an evolving gravitational constant is at play even in the simplest scalar-tensor models, i.e. Induced Gravity (IG, equivalent to the Brans-Dicke model by a field redefinition).
Without fine-tuning of parameters, the scalar field regulating the gravitational strength moves around recombination because of its coupling to pressureless matter.
This mechanism, which can be also seen as a dynamical and self-consistent variation in the gravitational constant, induces a degeneracy between H_0 and the non-minimal coupling of the scalar field to the curvature leading to a larger H_0 compared to ΛCDM <cit.>.
Although this mechanism is completely general, the increase of H_0 with respect to ΛCDM depends on the model and also on the number of the additional parameters with respect to ΛCDM.
With Planck DR3 likelihoods, we have Δ H_0 = H_0 - H_0^ΛCDM∼ 2 km s^-1 Mpc^-1 for IG <cit.>, and Δ H_0 ∼ 4 km s^-1 Mpc^-1 for a non-minimally coupled scalar field with a quartic potential <cit.>, dubbed Early Modified Gravity (EMG), which is among the best solutions to the H_0 tension according to <cit.>.
All the scalar-tensor models of gravity mentioned above lack Vainshtein screening and have only a nearly negligible Chameleon screening due to the coupling to matter.
For this reason, one has to compare the volume of the parameter space of the non-minimal coupling required to increase H_0 to the Solar System bounds on the corresponding post-Newtonian parameters <cit.>.
As for the H_0 degeneracy, also in this case, the comparison is non-trivial and model dependent.
Whereas for the simplest IG models the Solar System bounds constrain the non-minimal coupling to values which do not lead to an appreciable increase of H_0 compared to ΛCDM <cit.>, it is sufficient to consider a non-minimally coupled scalar field to have a large volume of parameter space where viable models on Solar System scales can lead to a value of H_0 larger than in ΛCDM <cit.>.
EMG is a further step in the construction of a scalar-tensor viable model on Solar System scales with large H_0 by adding a self-interaction term which implements an attractor mechanism towards General Relativity (GR) with a cosmological constant.
In this paper we instead consider the cosmology of IG, with the addition of a Galileon term, whose purpose is to reconcile the theory with GR on scales below the so-called Vainshtein radius.
We therefore would like to investigate the implications of the addition of an explicit term implementing a screening mechanism on the cosmology of the simplest scalar-tensor theories of gravity.
The paper is organized as follows. In <ref> we describe the models considered and their background dynamics together with the stability conditions and the screening mechanism.
In <ref> we present evolution of the linear fluctuations, CMB anisotropies and matter power spectrum.
<Ref> is devoted to the presentation of the datasets and the details of our MCMC analysis, joint with the presentation of the results. We draw our conclusions in <ref>.
§ THE MODEL
We study the following subclass of Horndeski theories <cit.>:
S = ∫ d^4x √(|g|)[G_4(σ)R + G_2(σ,X) + G_3(σ,X)□σ + ℒ_ ℳ]
where |g| is the absolute value of the determinant of the metric g_μν, σ is a scalar field, X ≡ -∇_μσ∇^μσ/2 = -∂_μσ∂^μσ/2, and □ ≡ ∇_μ∇^μ is the covariant d'Alembert operator.
ℒ_ ℳ is the matter Lagrangian which does not depend on the scalar field σ and it is minimally coupled with the metric g.
The action (<ref>) predicts that gravitational waves travel at the speed of light, consistently with the current measurements <cit.>.
We consider the following form for the G functions:
G_4 = γσ^2/2
G_3 = -2 g(σ)X
G_2 = Z X-V(σ) + 4ζ(σ)X^2.
where Z=± 1 is the sign of the kinetic term.
This Lagrangian is equivalent to the extension of the Brans-Dicke model (BD) <cit.> with a Galileon term.
In fact with the field redefinition ϕ=γσ^2/2, with γ=Z/(4ω_ BD) > 0 and Z=± 1, the G functions become
G_4 = ϕ
G_3 = -2 f(ϕ)χ
G_2 = 2 ω_ BD/ϕχ-V(ϕ),
where χ≡ -∇_μϕ ∇^μϕ/2 and the relationship between g, ζ and f is g(σ) = γσ^3 f(σ); ζ(σ)=σ^-1g(σ).
Therefore, we refer to the model defined by <ref> as Brans-Dicke Galileon (BDG), while the model with G_2 = Z X - V(σ) is the Induced Gravity Galileon (IGG), as it is the extension of Induced Gravity (IG) <cit.> with a Galileon term G_3 = -2 g(σ)X.
Therefore even if BD and IG are equivalent, their extensions with Galileon terms are not. In particular, IGG corresponds to BDG given by <ref> with, formally, ζ=0, but it is not simply a special case of BDG when ζ = 0.
In fact, as described above, in the BDG model the functions g and ζ are not independent and setting ζ = 0 would mean setting g = 0 as well.
We study two different versions of BDG and IGG, relating to the sign of the kinetic term Z:
Standard branch: Z=+1 in which the kinetic term has a standard sign.
In this branch we consider both IGG and BDG (IGGst and BDGst, respectively) and we take V(σ) and g(σ) to be power-laws of the scalar field: V(σ) = λ_n σ^n, g(σ) = α_m σ^m, for several combinations of n and m. In this scenario the potential dominates the dynamics and provides the acceleration of the expansion of the Universe, while the G_3 enters as a small correction to the standard BD theory.
Phantom branch: Z=-1 and γ<1/6, equivalent to ω_ BD <-3/2, in which the kinetic term has a nonstandard sign.
In this branch we consider only BDG with g(σ) = ασ^-1, either with or without a potential (cosmological constant).
If there is no potential, the burden of providing cosmic acceleration, a healthy theory and effective screening on small scales is on the G_3 term <cit.>.
Reinserting the potential into the theory, both the Galileon term and the cosmological constant contribute to the dynamics and give rise to the late-time acceleration of the Universe.
Even if we have a nonzero potential, the G_3 is still able to provide a healthy theory and effective screening on small scales, where we recover General Relativity (GR).
§.§ Covariant equations
By varying the action with respect to the metric we obtain the modified Einstein field equation
G_μν = 1/F(σ)[ T_μν^ (M) + T_μν^ (G) + Z ( ∂_μσ∂_νσ - 1/2g_μν∂^ρσ∂_ρσ)
- g_μν V(σ) + (∇_μ∇_ν - g_μν) F(σ) ] ,
where F(σ)=γσ^2, G_μν = R_μν - (1/2) g_μν R is the Einstein tensor and T_μν^ (M) = -(2/√(|g|)) δ(√(|g|)ℒ_ M)/δ g^μν is the energy-momentum tensor of matter. T_μν^(G) is defined as
T_μν^ (G) = -2 { g(σ) ∇_μσ∇ _νσ σ - ∇_(μ σ ∇_ν)[ g(σ) (∂σ)^2 ]
+ 1/2 g_μν∇_α σ∇^α[ g(σ) (∂σ)^2 ] - ζ(σ)/2 g_μν (∂σ)^4
+2ζ(σ) ∇_μσ∇ _νσ (∂σ)^2 },
with ∇_(μ σ ∇_ν) =1/2(∇_μ σ ∇_ν + ∇_ν σ ∇_μ).
It is useful to write down the trace of the Einstein equation (<ref>), which is
R = 1/F[ - T^( M) - T^ (G) + Z (∂σ)^2 + 4V + 3 □ F(σ) ],
with T^(G) as
T^ (G) = - 2 { g(σ) (∂σ)^2 σ + ∇_μσ∇^μ[ g(σ) (∂σ)^2 ] }.
Note that T^(G) does not depend on ζ(σ).
The equation for the evolution of the scalar field, obtained varying the action (<ref>) with respect to the field, is
σ[ Z - 4ζ (∂σ)^2 ] -2 g { (σ)^2 - ∇^μ∇^νσ∇_μ∇_νσ
- ∇^μσ∇^νσ R_μν} +4 g,_σ∇^μσ∇^νσ∇_μ∇_νσ
+ g,_σσ (∂σ)^4
- 3ζ,_σ (∂σ)^4 - 4ζ(σ)∇_μ [ (∂σ)^2 ]∇^μσ + γσ R -V,_σ = 0.
§.§ Background evolution
Working in a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) Universe described by the line element
ds^2 = - dt^2 + a^2(t) dx^2 ,
the Einstein field equations reduce to:
3 F H^2 = ρ + 1/2 Z σ̇^2 - 3 H Ḟ + V(σ)
+ σ̇^3[6 g(σ) H - ġ(σ) + 3ζ(σ) σ̇]
≡ ρ + ρ_σ ,
-2 F Ḣ = ρ + p + Zσ̇^2 + F̈ - HḞ
+ σ̇^2[(6 g H - 2 g,_σσ̇)σ̇+ 4ζσ̇^2 -2 g σ̈]
≡ ρ + p + ρ_σ + p_σ ,
where:
ρ_σ = Z/2σ̇^2 - 3 H Ḟ + V(σ)
+ σ̇^3[6 g(σ) H - ġ(σ) + 3ζ(σ) σ̇],
p_σ = Z/2 σ̇^2 - V(σ) + F̈ + 2 H Ḟ - σ̇^4 ( g,_σ - ζ ) - 2 g σ̇^2 σ̈.
The scalar field equation in the FLRW metric takes the form:
σ̈(Z+12 g H σ̇- 4 (g,_σ -3ζ)σ̇^2 )- 6γσ(2H^2+Ḣ) +V,_σ
+ 3 Z H σ̇+ 6 g (3H^2+Ḣ)σ̇^2 +12 H ζσ̇^3
- (g,_σσ - 3ζ,_σ) σ̇^4 = 0.
Because of the non-minimal coupling between the scalar field and the Ricci scalar in the Lagrangian, the Newton constant in the Friedmann equations is replaced by G_ cosm = (8π F)^-1 that now varies with time.
Following the notation of <cit.> we define the density parameters for radiation (r), pressureless matter (m) and the scalar field (σ) as
Ω_i = ρ_i/3FH^2≡ρ_i/ρ_ crit (i= r, m, σ).
It is also convenient to define dark energy density and pressure parameters in a framework which mimics Einstein gravity at the present time, by rewriting the Friedmann equations as <cit.>
3 F_0 H^2 = ρ + ρ_ DE ,
-2 F_0 Ḣ = ρ + p + ρ_ DE + p_ DE,
which leads to
ρ_ DE = F_0/Fρ_σ + ρ(F_0/F-1) ,
p_ DE = F_0/F p_σ + p(F_0/F-1) .
In this way, the effective parameter of state for dark energy can be defined as w_ DE≡ p_ DE/ρ_ DE. In this context the density parameters mimicking radiation, matter and dark energy (DE) in Einstein gravity are
Ω_i = ρ_i/3 F_0 H^2 (i= r, m, DE).
The two definitions in <ref> coincide at z=0: Ω_0, i=Ω_0, i.
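For concreteness, a schematic Python helper (illustrative only; the function names, the power-law choices and the BDG relation ζ = g/σ are assumptions spelled out in the comments, and this is not the modified Boltzmann code used later) that evaluates ρ_σ, p_σ and the effective dark-energy equation of state w_DE from the background quantities defined above could look as follows:

```python
import numpy as np

def rho_p_sigma(sig, dsig, H, gamma, lam_n, n, alpha, m,
                ddsig=0.0, z_kin=1.0, bdg=True):
    """Scalar-field density and pressure from the background equations above."""
    F = gamma * sig**2                       # F(σ) = γσ²
    dF = 2.0 * gamma * sig * dsig            # dF/dt
    ddF = 2.0 * gamma * (dsig**2 + sig * ddsig)
    V = lam_n * sig**n                       # V(σ) = λ_n σ^n  (assumed power law)
    g = alpha * sig**m                       # g(σ) = α σ^m    (assumed power law)
    dg_dsig = alpha * m * sig**(m - 1)
    zeta = g / sig if bdg else 0.0           # ζ = g/σ for BDG, ζ = 0 for IGG
    dg_dt = dg_dsig * dsig
    rho = (0.5 * z_kin * dsig**2 - 3.0 * H * dF + V
           + dsig**3 * (6.0 * g * H - dg_dt + 3.0 * zeta * dsig))
    p = (0.5 * z_kin * dsig**2 - V + ddF + 2.0 * H * dF
         - dsig**4 * (dg_dsig - zeta) - 2.0 * g * dsig**2 * ddsig)
    return rho, p

def w_dark_energy(rho_sig, p_sig, rho_m, p_m, F, F0):
    """Effective dark-energy equation of state from the rewriting that mimics Einstein gravity."""
    rho_de = (F0 / F) * rho_sig + rho_m * (F0 / F - 1.0)
    p_de = (F0 / F) * p_sig + p_m * (F0 / F - 1.0)
    return p_de / rho_de
```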
§.§.§ Standard branch (Z=1)
In the standard branch we consider IGG: the theory without ζ(σ): G_4=γσ^2/2, G_2= X-V(σ), G_3= -2 g(σ)X.
In the region in parameter space that produces a reasonable cosmological background evolution BDG and IGG are nearly indistinguishable.
For this reason we restrict ourselves to IGG for Z=1.
We consider a monomial potential V(σ)=λ_n σ^n and g(σ)=α_m σ^m.
For this choice of V(σ) and g(σ) the IGG model presents exact solutions with accelerated expansion in absence of matter where the scale factor is a(t)∝ t^p, with n and m related by m=1-n, and
p = 2 (-2 + n + 4 β - 4 γ + n^2 γ) / ( 24 β - 16 γ + 20 n γ - 8 n^2 γ + n^3 γ) ,
σ(t) = c_0 t^ -2/(n-2) ,
where β, defined by α≡β c_0^n-2, is a reparameterization of α, useful to show that in the limit of α→ 0 (β→ 0) we recover the analogous solution found in IG <cit.>.
For n=4 and n=2, there are de Sitter solutions a(t)∝e^Ht with a constant H >0.
For n=4, the solution found for IG <cit.>: σ =± H√(3γ/λ_4) is still valid in IGG for every possible form of g(σ), as this term does not contribute under the ansatz σ = constant.
For n=2, when g(σ)∝σ^-1, there are solutions of the form σ∝ e^H t δ, with δ≡δ(α,γ,H). Its explicit form is
δ = -1, or δ = [-1-4γ ± √((1+4γ)^2 + 48 H^2 γα)]/(12 H^2 α).
For α→ 0 these solutions reduce to the ones discussed in <cit.>:
δ=-1 or δ=2γ/1+4γ .
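A quick numerical check (illustrative only, and assuming the bracketing of the expression above) confirms that the solution with the plus sign indeed reduces to the IG value 2γ/(1+4γ) as α → 0:

```python
import numpy as np

# Verify numerically that δ(α) → 2γ/(1+4γ) as α → 0 for the plus-sign branch.
gamma, H = 5e-5, 1.0
ig_limit = 2.0 * gamma / (1.0 + 4.0 * gamma)
for alpha in (1e-2, 1e-4, 1e-6):
    delta = (-(1.0 + 4.0 * gamma)
             + np.sqrt((1.0 + 4.0 * gamma) ** 2 + 48.0 * H**2 * gamma * alpha)
             ) / (12.0 * H**2 * alpha)
    print(f"α = {alpha:.0e}:  δ = {delta:.6e}   (IG limit {ig_limit:.6e})")
```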
The evolution of the scalar field is shown in the upper panel of <ref>, in the model with a constant potential V(σ)=λ_0≡Λ and g(σ)=α, compared with the analogous IG model with γ=5× 10^-5. We consider the rescaled, dimensionless quantity α̃= α× (Mpc [GeV]^-1 )^-2× (M_ Pl [GeV] )^1+m, and it can be seen that the departure from IG is significant for larger α, i.e. stronger gravity at early times.
Deep in the radiation era the field is nearly at rest but then it grows steeply (the larger α the steeper the growth) reaching the value expected in IG in the matter era and evolving until today in the same way as it does in IG.
The value of the field at z=0 is fixed by requiring that the effective gravitational constant <cit.>
G_ eff = 1/16 π G_4[ 4 G_4,σ^2 + G_4 ( G_2,X -2 G_3,σ ) / 3 G_4,σ^2 + G_4 ( G_2,X -2 G_3,σ)]
coincides at z=0 with the measured value of Newton's constant G.
For m=0, the G_3 does not contribute to <ref>, as it enters only through the first derivative with respect to the field, while, for m≠0 one can check that the contribution is negligible due to the redshift evolution of the scalar field in the matter era and we can therefore use the IG approximation <cit.>.
8π G_ eff(z=0) = 1/(γσ_0^2) × (1+8γ)/(1+6γ)
to fix the value of σ (z=0) = σ_0.
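In practice, this amounts to the following small calculation (a sketch, working in units where 8π G_N = 1; not the paper's code):

```python
import numpy as np

# Fix σ_0 from 8π G_eff(z=0) = (1+8γ)/[(1+6γ) γ σ_0²] = 8π G_N, taking 8π G_N = 1,
# so that γ σ_0² = (1+8γ)/(1+6γ), which tends to 1 as γ → 0.
def sigma0_from_geff(gamma):
    return np.sqrt((1.0 + 8.0 * gamma) / ((1.0 + 6.0 * gamma) * gamma))

for gamma in (5e-5, 1e-3, 1e-2):
    s0 = sigma0_from_geff(gamma)
    print(f"γ = {gamma:.0e}:  σ_0 = {s0:.4e}   (γ σ_0² = {gamma * s0**2:.6f})")
```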
In the bottom plot of <ref> we show the evolution of density parameters in IGG with a constant potential V(σ)=Λ and g(σ)=α.
The values of α are chosen large enough to show the effect of the G_3 term in this scenario, which causes a different evolution of Ω_ r and Ω_σ in the early Universe with respect to IG and Λ CDM.
The middle panel of <ref> presents the evolution of the parameter of state of dark energy w_ DE defined above; it tracks the IG behaviour at late times but departs from it at z≥10^3, in correspondence with the analogous uptick in the evolution of Ω_σ.
In fact, w_ DE follows the dominant component: deep in the radiation epoch it has a value close to 1/3 , then in the matter era it decreases towards zero; finally, at present epoch, it becomes negative, w_ DE≃ -1, mimicking a cosmological constant.
The bump which approximately occurs in the radiation era corresponds to the epoch in which the energy density of the field grows and becomes of the same order of that of radiation. The growth of Ω_σ so early in time is due to inefficient cosmological Vainshtein screening <cit.> in this model for the range of parameters considered.
In figure <ref> we show the dependence of the background quantities (σ, w_ DE, Ω_i) on the exponent m in the function g(σ) = ασ^m, while keeping γ = 5× 10^-5 and α̃= 5× 10^-6 fixed, together with the exponent n=0 in V(σ)∝σ^n.
We point out that changing m while keeping α̃ fixed modifies the actual value of α and its dimensions since α̃= α× (Mpc [GeV]^-1 )^-2× (M_ Pl [GeV] )^1+m.
We can see from the figures that for small values of m the time evolution of scalar field is suppressed and with that the equation of state parameter and the density parameters resemble more those of Λ CDM.
For larger values of the exponent m the scalar field tends to dominate the dynamics both in the early Universe, reaching equality with radiation at z=10^6 for m=1, and in the late Universe where it mimics a cosmological constant.
This early epoch of scalar field domination also delays the matter radiation equality which happens at z<10^4, with Ω_ r=Ω_ m≃ 0.1, and leads to two different epochs in which Ω_ m is equal to the density parameter of the scalar field.
One of these epochs is the beginning of the matter era, around z≃ 10^3, and one is the onset of the late-time acceleration of the expansion of the Universe, in which the field dominates the dynamics once again.
§.§.§ BDG phantom
In the phantom branch we study BDG with g(σ)=ασ^-1 because, for this choice, it was shown in <cit.> that, for a null potential and in absence of matter and radiation, ρ=p=0, there is a self-accelerating solution with Ḣ=Q̇=0, with Q=σ̇/σ.
This solution satisfies
y≡Q/H=[γ-4±√(-32 - 6 Z/γ)]/(Z+8γ);
and it is real for Z=-1 and γ<3/16 (equivalent to ω_ BD<-4/3), for which <ref> give
H^2=(γ/α) [3+6y+y^2/(2γ)]/[2y^3(3+2y)].
For a full study of the theory with Λ = 0 which exactly respects <ref> see <ref>.
Inspired by these analytical solutions, we study an intermediate case always considering g(σ)=ασ^-1 and reinserting the potential.
In doing so we obtain a theory that has a ΛCDM limit. In order to provide the late-time cosmic acceleration, the parameters α and Λ should be fine-tuned; the procedure is the following: we pick a value for α and use a shooting algorithm on the cosmological constant to obtain, in a flat Universe, Ω_0, DE (Λ, α)=1-Ω_0, m-Ω_0, r.
In this way the burden to provide the late time acceleration of the expansion of the universe is shared between the potential and the Galileon term.
Today's value of the scalar field is such that the Planck mass is
M_ Pl^2(z=0) ≡γσ_0^2 = 1.
We do not set σ_0 following Eq. (<ref>) or (<ref>) because, due to the Vainshtein screening mechanism, the theory reduces to GR at small scales, in the sense that the Post-Newtonian parameters are those of GR and the gravitational constant on small scales is G_ N=G_ cosm<cit.>.
In this way, thanks to the screening mechanism we recover GR with the correct value of the gravitational constant on small scales today (see <ref> for details).
In the following, instead of working directly with the parameter α, we use α̃ defined previously, rescaled by a factor 10^8: α̃_8≡ 10^-8α̃.
All the plots and the MCMC constraints will be expressed as a function of 1/α̃_8. The reason for using the inverse of α̃ is that the Λ CDM limit is obtained when α̃→∞ (1/α̃→ 0).
In the top panel of figure <ref> we show the evolution of the scalar field with redshift as a function of 1/α̃_8: for larger values of this parameter the field starts at a lower initial value and it grows more steeply in the late universe to reach its value at z=0 fixed by the requirement γσ_0^2=1.
Contrary to the standard branch where the field started to evolve deep in the radiation era, here the field is frozen approximately until the epoch in which the σ energy density starts to grow.
This is a manifestation of the cosmological Vainshtein screening mechanism <cit.>.
The fact that field is frozen means that for a large portion of the history of the universe we have a gravitational constant which is effectively constant but different from the value measured here on Earth today.
In the bottom panel of figure <ref> we show the evolution of the density parameters and it can be seen that already for 1/α̃_8=0.1 the background expansion in this model is indistinguishable from Λ CDM.
Only for more extreme values we see departures from Λ CDM as the growth of the dark energy density is delayed and then steeper for 1/α̃_8=0.4.
This behaviour is confirmed also by the time evolution of w_ DE that approaches and reaches the value -1 earlier for smaller values of 1/α̃_8, while for the larger values we observe a phantom behaviour with w_ DE < -1 and w_ DE≠ -1 today, reaching -1 eventually in the future. In this scenario, therefore the Λ CDM limit is approached when 1/α̃_8 → 0.
This is the reason why we are considering the inverse of α as a parameter: in the Markov chain Monte-Carlo analysis we will sample on 1/α̃_8 in order to have the Λ CDM limit at zero and not at infinity, as we would if we had sampled on α.
§.§ Stability conditions in the phantom branch
When the sign of the kinetic term in the Lagrangian is non standard, i.e. Z=-1, and G_3=0, scalar-tensor theories can suffer from ghost and Laplacian instabilities.
Ghost instabilities occur when the sign of the kinetic term is negative in the second-order action for perturbations of the Horndeski action, whereas Laplacian instabilities occur when the speed of sound squared becomes negative <cit.>. In general, the theory in <ref> is free from ghost and Laplacian instabilities if <cit.>
q_s≡ 4 G_4 { G_2X + 2 G_3σ +σ̇[(G_2XX + G_3Xσ)σ̇
-6 G_3X H ] } + 3 (2G_4σ + G_3Xσ̇^2)^2 > 0,
c_s^2≡ [ 4 G_2X G_4 +8 G_3σ G_4
+ (6 G_4σ^2 - G_3Xσ̇^2) (2 G_4σ^2 + G_3Xσ̇^2)
-8 G_4 (G_3Xσ̈+ 2G_3XHσ̇+ G_3Xσσ̇^2) ]/q_s > 0,
where, G_i σ, G_i X mean derivative of the i-th G-function with respect to the scalar field and the kinetic term, respectively.
From <ref>, it can be seen that even IG
with Z = -1 and 0< γ < 1/6 (ω_ BD<-3/2) - a region of the parameter space which would otherwise contain a ghost - can be stable thanks to the addition of the G_3 term in the Lagrangian.
This is fully confirmed by <ref> where we show <ref> as functions of redshift for the BDG in the phantom branch. Both conditions are satisfied at all times for the values of 1/α̃_8 we have considered.
Note, however, that in BDGph the speed of sound of scalar perturbations can become temporarily superluminal for large values of 1/α̃_8 at redshifts close to the transition to dark energy domination, as can be seen in the bottom panel of <ref>.
This was first noticed in <cit.> for the case Λ=0 and it could potentially put a constraint on the value of 1/α̃_8.
Whether a theory with superluminal propagation is viable is still a controversial issue: some authors claim it is not problematic <cit.>, others argue the opposite <cit.>.
We choose conservatively to not impose any theoretical prior c_s^2 ≤ 1 in our MCMC analysis.
Note that the stability conditions in the tensor sector are automatically satisfied by the choice of our Lagrangian and parameters: the regime γ > 0 we consider ensures that the cosmological Planck mass and the square of the tensor speed of sound are positive.
§.§ Screening
The scalar degree of freedom in scalar-tensor theories of gravity can leave imprints also on scales which have detached from the cosmological expansion where deviations from GR are strongly constrained.
Thus, a theory of MG should have either small deviations from GR on Solar System scales or be equipped with a screening mechanism.
The Vainshtein screening is a mechanism that operates in theories with a self-interaction of the form X□σ, like the models considered in this paper: around local sources the non-linear interactions lead to the decoupling of the scalar field from the remaining degrees of freedom within the so-called Vainshtein radius r_V.
For scales r<r_V the theory of gravity reduces to GR: the Post-Newtonian parameters are those of GR and the gravitational constant on small scales is G_ N=G_ cosm<cit.>.
As it was demonstrated in <cit.>, the Vainshtein mechanism cannot suppress the cosmological evolution of the scalar field: the mechanism is not able to determine the coupling constant of the theory (Newton gravitational constant or σ_0).
The cosmological evolution of the scalar field sets the value of the coupling constant of GR today and on small scales. For this reason, for models with effective Vainshtein screening, we fix σ_0 according to <ref> in our background evolution.
The Vainshtein radius, which is the radius within which the theory reduces to GR, in a cosmological background, for a spherical astronomical object of mass δ M, is given by <cit.>:
r_V = (ℬ 𝒞μ /H^2)^1/3,
where μ = δ M /(16π G_4), and ℬ and 𝒞 are functions of the Hordenski functions G_is:
ℬ≡ 4β_0/(α_0+2α_1α_2+α_2^2), 𝒞≡ (α_1+α_2)/(α_0+2α_1α_2+α_2^2),
where
α_i(t)≡A_i/ G_T,
β_0(t)≡B_0/ G_T,
with
β_0≡α_1/2+α_2 (≠ 0),
and
F_T ≡ 2G_4 ≡ G_T
E ≡ 2XG_2X-G_2-6Xσ̇HG_3X+2XG_3σ
-6H^2G_4-6Hσ̇G_4σ
P ≡ G_2+2X(G_3σ+σ̈G_3X)
+2(3H^2+2Ḣ) G_4
+2(σ̈+2Hσ̇) G_4σ+4XG_4σσ
Θ σ̇XG_3X+
2HG_4 + σ̇G_4σ,
A_0 ≡ Θ̇/H^2+Θ/H
- G_T-2Ġ_T/H- E+ P/2H^2,
A_1 ≡ 1/HĠ_T + G_T- F_T,
A_2 ≡ G_T-Θ/H,
B_0 ≡ -X/Hσ̇G_3X.
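As an order-of-magnitude illustration (not the paper's pipeline), the sketch below evaluates r_V = (ℬ𝒞μ/H^2)^{1/3} for a solar-mass source, assuming ℬ and 𝒞 of order unity and the GR normalisation G_4 = 1/(16πG), so that μ = GδM; the assumed H_0 value is also illustrative:

```python
import numpy as np

G = 6.674e-11                   # m^3 kg^-1 s^-2
Msun = 1.989e30                 # kg
H0 = 70.0e3 / 3.086e22          # s^-1  (H0 = 70 km/s/Mpc, an assumed value)
pc = 3.086e16                   # m

def vainshtein_radius(B, C, delta_M=Msun, H=H0):
    """r_V = (B C mu / H^2)^(1/3) with mu = delta_M/(16 pi G_4) and G_4 = 1/(16 pi G)."""
    mu = G * delta_M            # delta_M/(16 pi G_4) reduces to G*delta_M for this G_4
    return (B * C * mu / H**2) ** (1.0 / 3.0)

print(f"r_V ≈ {vainshtein_radius(1.0, 1.0) / pc:.0f} pc for B = C = 1")
```

With these assumptions the result is of order 10^2 pc, the order of magnitude quoted below for BDG in the phantom branch.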
In figure <ref> we show the cosmological evolution of the Vainshtein radius for an object with δ M = M_⊙, for BDG in the phantom branch.
The screening is effective and the Vainshtein radius starts growing for z ≲ 10, reaching r_V⊙≳ 90 pc today, successfully recovering GR in the Solar System.
We do not present Vainshtein radius in the standard branch since the screening is not effective and the Vainshtein radius is very small.
This is the reason why σ_0 is fixed using <ref> in the background evolution in the standard branch.
§ LINEAR PERTURBATIONS AND CMB ANISOTROPIES
In the synchronous gauge the perturbed FLRW metric is to first order
ds^2 = a^2(τ) [-dτ^2 + (δ_ij + h_ij) dx^i dx^j ]
where τ is the conformal time, x the spatial comoving coordinate, and h_ij is the metric perturbation.
From this point on, we move in Fourier space for the calculation of the perturbed quantities.
The scalar mode of h_ij, following the conventions of Ref. <cit.>, can be expressed as a Fourier integral:
h_ij(τ, x) = ∫ d^3k e^{i k· x}[ k̂_ik̂_j h(τ, k)
+ (k̂_ik̂_j-δ_ij/3) 6 η(τ, k) ]
where k̂_i ≡ k_i/k with k ≡ | k| and h≡δ^ij h_ij is the Fourier transform of the trace of h_ij(τ, x).
The scalar field perturbation δσ is, in this notation,
δσ(τ, x) = ∫ d^3 k e^{i k· x} δσ(τ, k).
§.§.§ The perturbed Einstein and scalar field equations
Following <cit.> we use the definitions of the velocity potential θ and the anisotropic stress perturbation Θ
(ρ̅+p̅)θ≡ k^i δ T^0_ i ,
(ρ̅+p̅)Θ≡ -(k̂_ik̂_j-δ_ij/3) Σ^i_ j ,
Σ^i_ j≡ T^i_ j - δ^i_j T^k_ k/3 ,
to write down the perturbed Einstein equations in the following form:
k^2 η - 1/2 (a'/a) h' = -a^2/2(δρ̃ + δρ̃^(G)) ,
k^2 η' = a^2/2[(ρ̃+P̃)θ̃+(ρ̃^(G)+P̃^(G)) θ̃^(G)] ,
h” + 2 (a'/a) h' -2k^2η = -3a^2 (δP̃ + δP̃^(G)) ,
h”+6η” +2 (a'/a)(h'+6η')-2k^2η = -3a^2 (ρ̃+P̃)Θ̃ .
Where a tilde denotes the effective perturbations defined in <ref>.
The equation for the evolution of the scalar field fluctuation in Fourier space is
δσ” = δG̃/ Z + 6γ + 2σ'/a^2[ 3 g (2 -σ'/σ) + σ' (6ζ-2g_,σ) ] ,
where the explicit expression for δG̃ can be found in <ref>.
In the <ref> we distinguished explicitly between the quantities coming from IG and the ones arising from the Galileon term to make as manifest as possible the reduction of IGG and BDG to the IG equations of <cit.> when g(σ)=ζ(σ)=0.
§.§.§ Imprints on CMB anisotropies and Large Scale Structure
In <ref> we show the deviations in the temperature (TT) and E-mode polarization (EE) CMB angular power spectra with respect to Λ CDM, whereas in <ref> we show how the CMB lensing potential angular power spectrum and the total linear matter power spectrum at redshift z=0 differ in these theories with respect to the standard model.
In the top panels of <ref> we have IGG in the standard branch, while in the bottom panels we show BDG phantom with potential V(σ)=Λ.
We note how in the standard branch there are small effects in the TT power spectrum at low multipoles, resulting in an ISW effect extremely similar to the one observed in Λ CDM.
This behaviour is common to all the spectra in IGG at low multipoles, for which the differences with respect to GR are small.
The departure from Λ CDM is more evident on the scales of the acoustic peaks in TT and on smaller scales, where, due to the modification of gravity around the time of recombination, the contribution of the scalar field shifts and enhances the peaks of the power spectrum.
On the contrary, in the model with Z=-1 we can observe large departures from Λ CDM at low multipoles in TT, due to the interplay of G_3 and the ISW effect <cit.>.
We show the relative difference in the ISW contribution to the TT angular power spectrum in <ref>.
For the BDG in the phantom branch, while we see differences on all scales in all power spectra of <ref>, due to the presence of the cosmological constant, it is clear that as 1/α̃_8 gets smaller we recover the predictions of Λ CDM and there is therefore room to provide a good fit to the data even with 1/α̃_8 ≠ 0 but small.
The differences in the matter power spectrum with respect to the standard model, presented in the right panels of <ref>, are interesting targets for upcoming large scale structure data such as DESI [<https://www.desi.lbl.gov>], Euclid [<https://www.esa.int/Science_Exploration/Space_Science/Euclid>], or Vera Rubin observatory[<https://www.vro.org/> , <https://www.lsst.org/>], which will significantly tighten constraints on these models.
§ CONSTRAINTS FROM COSMOLOGICAL OBSERVATIONS
In this section, we present the constraints on the cosmological parameters of the models studied: IGG in the standard branch (IGGst) and BDG in the phantom branch with a potential (BDGph). In the following results the potential is always given by a V(σ)=Λ, while the function g(σ) is a constant in the standard branch and g(σ) = ασ^-1 for Z=-1.
Note that in the following we consider only IGG in the standard branch since, for the same range of parameters, BDG give exactly the same results.
§.§ Methodology and datasets
We study these models first with CMB data only, and we extend the analysis to other datasets when necessary in order to constrain a parameter within our prior, or when the model is not ruled out by CMB alone and the results are interesting enough to justify the study with additional likelihoods.
We perform a Markov chain Monte Carlo analysis using the publicly available sampling code MontePython-v3[<https://github.com/brinckmann/montepython_public>]<cit.> connected to an extended version of CLASSig<cit.> which is a modified version of CLASS[<https://lesgourg.github.io/class_public/class.html>]<cit.>.
For the sampling, we use the Metropolis-Hastings algorithm with a Gelman-Rubin <cit.> convergence criterion R-1 < 0.01.
The reported mean values and uncertainties on the parameters, together with the contour plots have been obtained using [<https://getdist.readthedocs.io/en/latest/>]<cit.>.
We make use of CMB data in temperature, polarization, and lensing from Planck Data Release 3<cit.>.
We consider the Planck baseline likelihood (hereafter P18) which is composed by the Plik likelihood on high multipoles, l>30, the commander likelihood on the lower multipoles for temperature and SimAll for the E-mode polarization <cit.>, and the conservative multiple range, 8< L <400, for the CMB lensing.
As complementary data to Planck, we use BAO data from the post-reconstruction measurements from BOSS DR12 <cit.>, low-z BAO measurements from SDSS DR7, 6dF and MGS <cit.>, Lyα BAO measurements from eBOSS DR14, and combinations of those <cit.>.
Hereafter we call this combination of datasets simply BAO.
For the BDGph model, on top of BAO, we will also consider the combination of P18 with the Pantheon catalogue of SN Ia in the redshift range 0.01 < z < 2.3<cit.>[<https://github.com/dscolnic/Pantheon>] (hereafter SN) using a prior on the supernovae peak absolute magnitude, M_ B<cit.> [hereafter p(M)] of
M_ B = -19.2435 ± 0.0373 mag.
We sample on 6 standard parameters: ω_ b, ω_ c, H_0, τ_ reio, ln(10^10 A_s), n_s, and the modified gravity parameters, in particular
* For IGGst we sample on α̃ at fixed γ=5×10^-5.
* For BDGph we sample on 1/α̃_8 with fixed γ = 5 × 10^-5.
In addition to the cosmological parameters, we sample on the nuisance and foreground parameters of the P18 likelihoods and, when considering the Pantheon dataset, on M_ B.
In our analysis we assume two massless neutrinos with N_ eff = 2.0328 and a massive one with fixed minimum mass m_ν = 0.06 eV.
As in <cit.>, we fix the primordial ^4He mass fraction Y_ p taking into account the different value of the effective gravitational constant during Big Bang Nucleosynthesis (BBN), and the baryon fraction, ω_ b, tabulated in the public code
PArthENoPE<cit.>.
Moreover, for each run we compute the best-fit values, obtained minimizing the χ^2 following the methods of <cit.>, and quote the difference of the model χ^2 with respect to the Λ CDM one, i.e. Δχ^2 ≡χ^2-χ^2_Λ CDM. Thus, negative values of Δχ^2 indicate an improvement in the fit with respect to Λ CDM.
§.§ Results
§.§.§ Induced Gravity Galileon standard
For IGGst we show the results obtained by considering n=m=0 , γ=5× 10^-5, with α allowed to vary.
We show the P18 results for this model in <ref> with a comparison to ΛCDM and IG with γ = 5× 10^-5.
We obtain a tight constraint on the Galileon term, i.e. α̃< 2.5 × 10^-6 at 95 % CL with P18 data. This result shows how CMB data alone are sufficient to strongly constrain IGG around IG.
All the contours for the other parameters overlap with the IG ones.
The marginalized mean and uncertainty for the Hubble constant H_0 [ km s^-1 Mpc^-1] at 68% CL is 67.72 ± 0.54, without any hint of a possible reduction of the Hubble tension, as there is no significant increase in either the mean or the uncertainty with respect to the Λ CDM value of 67.36 ± 0.54<cit.>, or the IGst value 67.64 ± 0.54 (when γ=5× 10^-5 is fixed in the MCMC).
When combining BAO with P18 we obtain tighter constraints but the same qualitative behaviour (<ref>).
The cosmological constraints from P18 and P18+BAO on α̃ allow a Vainshtein mechanism to occur only at sub-parsec scale for an object of a solar mass.
§.§.§ Brans-Dicke Galileon phantom
For this model in the phantom branch with a cosmological constant, we add BAO to P18 data in order to constrain the parameter 1/α̃_8 within our prior range, 1/α̃_8 = 10^8 / α̃∈ [0, 0.4].
Current CMB data are indeed not enough to constrain this 7 parameter model with γ fixed.
The presence of the cosmological constant helps in providing posterior distributions closer to Λ CDM and similar values for most of the cosmological parameters with the exception of H_0: due to the shape of the posterior in the plane 1/α̃_8-H_0 (see <ref>), higher values of the Hubble constant compared to the Λ CDM ones are allowed (as it happens for non-minimally coupled scalar fields with standard kinetic terms due to the degeneracy γ-H_0).
The marginalized mean and uncertainty for the Hubble constant H_0 [ km s^-1 Mpc^-1] at 68% CL using the combination P18+BAO is 69.1^+0.9_-1.3. For the same dataset, IG phantom with γ=5×10^-5 fixed at the same value considered here for BDGph gives 67.62± 0.42, while it gives 67.20^+0.68_-0.55 when γ is a free parameter. Thus, BDGph can raise the value of the Hubble constant relative to IGph and it reduces the Hubble tension to a significance of 2.5σ.
This result motivates our further analysis using the Pantheon catalogue with the addition of a prior on the peak absolute magnitude of SN Ia as in <cit.>. When considering this dataset, the Hubble tension has a significance of only 1.7σ in BDGph, as the marginalized mean and uncertainty at 68% CL is H_0 = 70.58± 0.97 km s^-1 Mpc^-1. The corresponding values in IGph, with γ=5× 10^-5 and γ free to vary in the MCMC, are, respectively, 68.15± 0.52 and 68.07± 0.56. This means that the Galileon term is not only necessary to avoid the instabilities present in IGph (see <ref>) but it also plays an important role in alleviating the Hubble tension.
We would also like to emphasize another aspect of our findings in terms of the Vainshtein radius.
We obtain 1/α̃_8=0.23^+0.06_-0.05 at 68% CL with P18 + SN + p(M); this means that we find a high statistical significance for 1/α̃_8≠ 0, and consequently a Vainshtein radius of 𝒪(100) pc for a solar mass.
This is in contrast with the case P18+BAO where 1/α̃_8 is consistent with zero at 1σ and we only have an upper limit 1/α̃_8 < 0.28 at 95% CL, perfectly consistent with the Λ CDM limit of the theory.
§ CONCLUSIONS
We have studied the cosmological effects of a model where a scalar field, σ, with a coupling to the Ricci scalar of the type F(σ) = γσ^2, has a cubic Galileon term of the form α σ^m (∂σ)^2 □σ.
Since the extensions of induced gravity (IG) and Brans-Dicke (BD) with a Galileon term are not equivalent theories up to a field redefinition, differing by a term 4 ζ(σ)X^2 in the Lagrangian, we have studied the two models separately: Induced Gravity Galileon (IGG) and Brans-Dicke Galileon (BDG).
This Galileon term leads to the Vainshtein mechanism, which can screen gravity, potentially reconciling the theory with GR inside the so-called Vainshtein radius also for values of the coupling to the Ricci scalar - or of the Brans-Dicke parameter ω_BD - which evade the Solar System constraints.
Moreover, thanks to the presence of the Galileon term, the theory is free of ghost and Laplacian instabilities even for a non-canonical sign of the kinetic term in the Lagrangian.
We have therefore considered the theory (<ref>) in the two branches, called respectively standard (Z=1) and phantom (Z=-1) branch, where Z enters the kinetic term in the Lagrangian as Z X, with X = - ∇_μσ∇^μσ / 2.
We have shown that the Galileon term leads to important cosmological effects in all the cases considered. In this paper we have computed the theoretical predictions for cosmological observables such as the CMB anisotropy and matter power spectra.
We have also compared these predictions with observations by restricting the analysis to fixed γ and m, if not otherwise stated.
For a standard kinetic term, we find an instability in the radiation dominated era due to the presence of the Galileon term. This instability is dissipated in the subsequent matter dominated era, during which the Galileon term becomes subleading; eventually the scalar field energy density is no longer negligible and contributes to the acceleration of the Universe.
We find that the CMB anisotropy pattern is sensitive to the dissipation of the instability in the matter era. This effect constrains the Galileon term to be small close to the CMB last scattering surface.
In the branch Z=-1 we have considered only BDG for the sake of simplicity.
The phenomenology of a kinetic term with a negative sign - corresponding to a BD theory in the parameter range ω_BD< 0 - is rather different.
Firstly, the presence of a Galileon term leads to a healthy theory for all the values of γ, i.e. for any negative value of ω_BD, therefore rescuing the range ω_BD < -3/2 which would contain a ghost in the BD theory with no Galileon term.
The Galileon term leads to a dynamics very different from the corresponding branch of induced gravity, by freezing the scalar field for most of the matter dominated era and releasing it at lower redshift.
In the standard branch, Planck 2018 and BAO data constrain the amplitude of the Galileon parameter as α̃< 2.2 × 10^-6 at 95 % CL (<ref>) for IGG with γ=5× 10^-5, m=0, and a constant term Λ as a potential.
The constraints for BDG in the standard branch are identical to the ones of IGG; we have therefore reported them only once, for IGG.
The cosmological constraint on α̃ allows a Vainshtein mechanism to occur only at sub-parsec scale for an object of a solar mass. Summarizing, for a standard kinetic term, we have therefore found tight constraints on the Galileon term and resulting posterior probabilities for cosmological parameters very similar to those of IG.
For BDG with Z=-1 and a constant term Λ as a potential, we have found that Planck 2018 and BAO data constrain H_0 = 69.11^+0.94_-1.3 km s^-1 Mpc^-1 at 68% CL and α̃^-1 < 0.281× 10^-8 at 95% CL, for γ = 5 × 10^-5 and m=-1 (<ref>).
The addition of a Galileon term in the phantom branch is therefore crucial to obtain a value of H_0 larger than ΛCDM.
We have therefore a modified gravity model which leads to a value of H_0 larger than in ΛCDM with a screening of 𝒪(100) pc for a solar mass, as desired.
By adding the Pantheon dataset with a prior on the supernovae peak absolute magnitude M_ B = -19.2435 ± 0.0373 mag, we find H_0 = 70.58± 0.97 km s^-1 Mpc^-1 and α̃^-1 = 0.23^+0.06_-0.05× 10^-8 at 68% CL, always for γ = 5× 10 ^-5 and m=-1 (<ref>).
As it happens for the degeneracy between H_0 and γ in IG <cit.>, the SH0ES measurement pulls H_0 along the degeneracy with α^-1.
By adding a Galileon term, the theory also inherits characteristics of a late-time model in increasing H_0.
In addition, the value and the posterior of S_8 are unchanged with respect to ΛCDM, not aggravating the so called σ_8 tension <cit.>.
We have also analyzed the theoretical predictions of the model introduced by Silva and Koyama <cit.>, in which there is no potential and the late-time acceleration is driven exclusively by the Galileon term.
This particular model is physically viable and provides screening on Solar System scales. We have shown that it leads to CMB predictions which are at odds with the Planck data, with a Δχ^2 = 30.6 with respect to ΛCDM. Therefore, although theoretically interesting because the acceleration is not driven by an effective cosmological constant, the model is ruled out by observations.
In conclusion, we have shown how a Galileon term leads to rather non-trivial cosmological effects in the simplest scalar-tensor theories of gravity.
It can lead simultaneously to a value of H_0 larger than in ΛCDM and to effective screening for a large volume of parameter space.
It is therefore interesting to find that another term in the Horndeski Lagrangian can lead to a value larger than the ΛCDM one, as previously found for the basic evolving Newton's constant in <cit.>.
A more general exploration of the parameter space of this theory, in which the speed of gravitational waves is c without any fine tuning, allowing the coupling to the Ricci scalar γ and the Galileon term to vary simultaneously, is in progress.
§ ACKNOWLEDGMENTS
MB, FF, DP acknowledge financial support from the contract ASI/ INAF for the Euclid mission n.2018-23-HH.0, from the INFN InDark initiative and from the COSMOS network (www.cosmosnet.it) through the ASI (Italian Space Agency) Grants 2016-24-H.0 and 2016-24-H.1-2018, as well as 2020-9-HH.0 (participation in LiteBIRD phase A).
This work has made use of computational resources of CNAF HPC cluster in Bologna.
§ BDG IN THE PHANTOM BRANCH WITH Λ=0
In this appendix we study the model, introduced in <cit.>, that exactly respects the solution (<ref>), considering it as a limiting case of BDGph when Λ = 0.
We discuss the background evolution, including stability conditions and the Vainshtein screening of the theory, CMB anisotropies and matter power spectrum and the constraints on the model coming from CMB observations.
§.§ Background evolution, stability conditions and screening
In the absence of the cosmological constant, in order to describe today's accelerated expansion of the Universe, the parameter α must be fine-tuned. This is done by solving <ref> for α and using the result as an initial guess for a shooting algorithm that fixes the value of α so as to produce the desired Ω_ DE=1-Ω_ m-Ω_ r, given the density parameters of matter and radiation as inputs.
With α fixed by the requirement of cosmic acceleration, the only free parameter of the theory is γ. The result is a theory that can provide cosmic acceleration without a cosmological constant, with as many free parameters as the BD model.
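As a rough illustration of the shooting step described above, the following Python sketch finds α by root-finding on the mismatch between the predicted and the desired dark-energy density today. The background solver omega_de_today is a hypothetical stand-in for a full integration of the background equations and is not part of any code associated with this work.

# Hedged sketch: omega_de_today is a hypothetical background solver that integrates
# the background equations for a trial alpha and returns Omega_DE at z = 0.
from scipy.optimize import brentq

def shoot_alpha(alpha_guess, gamma, omega_m, omega_r, omega_de_today):
    """Tune alpha so that Omega_DE(today) = 1 - Omega_m - Omega_r."""
    target = 1.0 - omega_m - omega_r

    def mismatch(alpha):
        return omega_de_today(alpha, gamma, omega_m, omega_r) - target

    # Bracket the root around the analytic initial guess, then refine it.
    return brentq(mismatch, 0.5 * alpha_guess, 2.0 * alpha_guess)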
Today's value of the scalar field is such that it satisfies <ref>. See <ref> for details on why we do not need to set σ_0 following <ref> or (<ref>), thanks to the Vainshtein screening mechanism, while still respecting the condition M_ Pl=1 today.
The evolution of the scalar field with redshift is plotted in the top panel of <ref>.
The field is frozen deep in the radiation era and it starts to grow at very late times reaching the value required by consistency with GR on small scales today.
In the figure we normalized the field to its value at z=0 which is different depending on the chosen value of γ, since γσ_0^2=1.
Departures from Λ CDM are evident already at the background level, especially in the matter era and in the field dominated era.
In fact, during the matter era, before the equality with DE, we observe, from the bottom panel of <ref>, Ω_σ becoming slightly negative.
This effect is more prominent for larger values of γ.
At the time when Ω_σ is decreasing and becoming negative, the corresponding Ω_ DE in the Λ CDM model is already growing; a steeper growth of Ω_σ is therefore needed to reach the Λ CDM value today and for matter-dark energy equality to happen at roughly the same redshift.
We emphasize that Ω_σ<0 is not a problem from the physical point of view: this parameter simply describes the contribution of the scalar field to the total expansion rate when the Friedmann equations are recast in a form that resembles Einstein gravity <cit.>.
The evolution of the DE equation-of-state parameter w_ DE, shown in the middle panel of <ref>, is also interesting: it exhibits a phantom behaviour, w_ DE<-1, when Ω_σ becomes negative and then grows back to dominate the energy content of the Universe.
This behaviour is more prominent for smaller values of γ, for which the dip tends to be deeper and w_ DE reaches lower values.
The parameter w_ DE today is slightly different from -1 and it eventually reaches -1 in the future, as shown in <cit.>.
The top panels of <ref> show the evolution of the stability conditions (<ref>) and (<ref>) with redshift.
The conditions are satisfied for all the values of γ considered thanks to the Galileon term.
This term is also responsible for effective screening on small scales through the Vainshtein screening mechanism that allows the theory to reduce to GR within the so called Vainshtein radius.
Thus, within this radius the Post-Newtonian parameters are those of GR.
The time evolution of the Vainshtein radius (<ref>) for a celestial body of a solar mass is shown in the bottom panel of <ref>: the screening is effective and r_V⊙∼𝒪(100) pc today for all the values of γ studied.
§.§ CMB anisotropies and matter power spectrum
As we saw in <ref>, the interplay of the G_3 term and the ISW effect can result in an enhancement of the power spectrum on large scales <cit.>.
For this reason, in <ref> we can observe very large departures from Λ CDM at the largest scales in TT, due to the enhanced ISW effect.
It is worth pointing out that, since there is no potential, the curves do not flatten out towards zero for smaller γ: without a cosmological constant there is no Λ CDM limit, as already discussed in <ref>.
So, while this model provides late-time cosmic acceleration without a cosmological constant, it shows large differences (>30%) with respect to Λ CDM at the level of the CMB and matter power spectra; this is a disadvantage when comparing its predictions against the data, since Λ CDM fits Planck data quite well.
§.§ Constraints from cosmological observations
As we saw, this is the most extreme of the models considered: it has no cosmological constant, and therefore no Λ CDM limit, and it cannot reproduce the CMB and LSS predictions of Λ CDM. This puts it at a disadvantage when tested against data, as confirmed by a Δχ^2=30.6 with respect to Λ CDM when considering P18.
While this completely rules out the model, there is a feature worth highlighting in addition to the theoretically appealing characteristic of providing cosmic acceleration without a cosmological constant: it raises the Hubble constant to H_0 = 79.57 ± 0.67 km s^-1 Mpc^-1.
While this result is still far from alleviating the tension, the ability to produce a Hubble constant larger than the SH0ES value <cit.> is quite interesting, and it might be worthwhile to construct and investigate similar models that retain this ability while providing a better fit to the data. The results for the model are shown in <ref> and <ref>.
§ LINEAR PERTURBED EQUATIONS
Here we summarize the definition of the coefficients appearing in the perturbed Einstein and scalar field equations:
δρ̃ ≡δρ_m/γσ^2 -h'σ'/a^2σ -2/a^2{δσ/σ[a^2ρ_m/γσ^2 +Zσ'^ 2/2γσ^2 +a^2/γσ(V/σ -V_,σ/2) - 3σ'/σ +k^2 ] + δσ'/σ( 3 -Zσ'/2γσ)
} ,
δρ̃^(G) ≡
-2/γ a^4σ'^ 2/σ^2{δσ/σ[
3σ' (2 g - σ g_,σ) + σ'^ 2/2(σ g_,σ,σ - 2g_,σ) -k^2σ g +3σ'^ 2/2(2ζ - ζ_,σσ)
]
+ δσ' ( σ' (2g_, σ - 6 ζ ) - 9 g ) - 1/2 h' σ' g
} ,
(ρ̃+P̃) θ̃ ≡ (ρ_m+P_m)/γσ^2θ_m +2k^2/a^2{ δσ/σ[ σ'/2γσ(Z+2γ) -] +δσ'/σ} ,
(ρ̃^(G)+P̃^(G)) θ̃^(G) ≡2k^2/γ a^4σ'^ 2/σ^2 [δσ( 3 g -g_,σσ' +2ζσ' ) - δσ' g ] ,
δP̃ ≡δ P_m/γσ^2 +1/a^2{ - δσ/σ[a^2/γσ(V_,σ -2V/σ) + 2a^2 P_m/γσ^2 +σ'^ 2/γσ^2 (Z+4γ) + 2σ”/σ+2σ'/σ - 4k^2/3]
+ δσ' (σ'/γσ^2 (Z+4γ) +2/σ) +2 δσ”/σ +2σ'/3σ h'
} ,
δP̃^(G) ≡1/γ a^4σ'^ 2/σ^2 {δσ/σ[ 2 (2 g -g_,σσ) (σ” - σ') +σ'^ 2(2 g_,σ - g_,σ,σ σ - 2ζ + ζ_,σ σ)
]
-δσ' ( 2g_,σσ' -4 g +2 g σ”/σ' +4 ζ σ' ) -2 g δσ”} ,
(ρ̃+P̃)Θ̃ ≡(ρ_m+P_m)Θ/γσ^2 + 1/3a^2[ 4k^2δσ/σ + 2(h'+6η')σ'/σ ] ,
δG̃ ≡a^2( δρ_m-3 δ P_m)(1+6γ)σ - h'/2 a^2[ a^2 σ' (Z+6γ) + 2 gσ' ( 3σ' -σ'^2/σ +2gσ”) +4ζσ'^3 ] - h” g σ'^2/a^2
+1a^2δσ{ a^4 [ 3 P_m - ρ_mσ^2 -4Vσ^2 + 4 V_,σσ - V_,σσ + (Z+6γ) (σ'^2σ^2 -k^2) ]
+ g [ 2 k^2 ( σ'^ 2σ -2σ' -2σ”) -6 σ'^ 2σ^2σ”] -4k^2ζ σ'^ 2 + g_,σ σ' [ 6 σ”( σ'σ -2) - 2 σ' ( 3' +σ'^ 2σ^2) ]
-12 σ'^ 2ζ_,σσ” +2g_,σσσ'^ 2 (σ -2σ' + 2σ”) -3ζ_,σσσ'^ 4 - g_,σσσσ'^ 4}
+1a^2δσ' { a^2[ - 2 (Z+6γ)( + σ'σ)] +12 g [σ”(σ'σ - ) -'σ' ] -24ζσ'σ”
+4 g_,σσ'(2σ -3σ'+2σ”) -12 ζ_,σσ'^ 3 + 4 g_,σσσ'^ 3} .
|
http://arxiv.org/abs/2307.01913v1
|
20230704204439
|
From Edge State Physics to Entanglement Spectrum: Studying Interactions and Impurities in Two-Dimensional Topological Insulators
|
[
"Marcela Derli",
"E. Novais"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
eduardo.novais@ufabc.edu.br
Centro de Ciências Naturais e Humanas, Federal University of ABC,
Brazil.
We present a novel theoretical approach to incorporate electronic
interactions in the study of two-dimensional topological insulators.
By exploiting the correspondence between edge state physics and entanglement
spectrum in gapped topological systems, we deconstruct the system
into one-dimensional channels. This framework enables a simple and
elegant inclusion of fermionic interactions into the discussion of
topological insulators. We apply this approach to the Kane-Mele model
with interactions and magnetic impurities.
From Edge State Physics to Entanglement Spectrum: Studying Interactions
and Impurities in Two-Dimensional Topological Insulators
E. Novais
August 1, 2023
Introduction: Two dimensional systems have captivated physicists
with their intriguing and often counter intuitive properties. Frequently,
they demand the development of new theoretical and numerical tools
for investigation. A notable historical example of this behavior is
observed in the quantum Hall effect. While the integer quantum Hall
effect can be comprehensively explained within the framework of non-interacting
particles filling Landau bands, the discovery of non-integer Hall
plateaus due to electronic interactions and disorder was a surprising
revelation. Recently the better understanding of topological properties
in the band structure of two dimensional systems have yielded new
and exciting theoretical and experimental results. However, the interplay
of topology and correlations is still a relatively open area of investigation.
In the study of strongly correlated two dimensional systems, one promising
approach is to leverage insights gained from one-dimensional models
by suitably identifying one dimensional structures in the higher-dimensional
system. Several methods can be employed to achieve this dimensional
reduction. One intuitively appealing approach is to first consider
quantum wires and subsequently introduce tunneling between them<cit.>.
The connection to the two-dimensional system is obtained by applying renormalization
group arguments to the tunneling amplitude. Another approach is to
deconstruct the topological system into one dimensional wires in order
to classify the topological phases<cit.>.
In a complementary view to this anisotropic starting point, we propose
a novel approach that is particularly well-suited for studying impurities
in two-dimensional topological systems.
The manuscript revolves around a pivotal concept: the spectrum of
entanglement <cit.> in gapped systems provides
valuable insights into the dynamics of local operators <cit.>.
This fresh perspective introduces a fictitious boundary, enabling
us to represent the original wave function as a superposition of bulk
states and boundary states. When focusing on the behavior
of local operators solely defined on the boundary, it becomes
evident that only the boundary states interact with these operators.
However, this introduction of the boundary comes at the cost
of working within a formalism at a finite fictitious temperature,
which is determined by the problem's characteristics, such as the
presence of the spectral gap.
By starting with a two-dimensional problem and transitioning to a
one-dimensional model at a finite temperature, we gain the ability
to incorporate interactions into the problem on much better footing
than in a straightforward approach within the original two-dimensional
model. This avenue of investigation presents a novel approach to studying
electronic systems, enabling a deeper comprehension of their dynamics
and properties. Furthermore, it presents a clear pathway towards two-dimensional
bosonization, and possibly new numerical techniques, for these systems.
Throughout this manuscript we use natural units, ħ=c=k_B=1.
The manuscript is organized as follows: first the general formalism
is introduced. Subsequently, we delve into the analyses of the prototypical
topological band insulators, the Kane-Mele model. We add to
this model local degrees of freedoms and electronic interactions to
investigate the interplay of magnetism, interactions and topology.
We finish with a conclusion and some possible future perspectives.
The physics of a hard-wall boundary: In a D-dimensional system,
a hard-wall boundary is a D-1 surface that prohibits the flow of particles
and information. These properties are implemented by specific boundary
conditions on the Schrödinger equation. For instance, let us consider
massive Dirac fermions in two dimensions with a hard boundary located
at x=0 <cit.>.
We need to ensure that the single particle wave function ψ(x,y)=X(x)Y(y)
follows the hard wall conditions
∂ψ(0,y)/∂ x=0, and ψ(x<0,y)=0.
These conditions lead to two very distinct types of X(x)
functions: bulk and boundary solutions. Bulk solutions
correspond to standing waves and must be zero at the boundary,
X(x) =x_0sin[ε x],
with x_0 and ε constants. Conversely, zero energy
boundary solutions can exist. They correspond to exponentially decaying
functions
X(x) =x_0(1+x/ξ)e^-x/ξ,
where ξ is the characteristic length of the state. The behavior
of Y(y) can depend on the details of the microscopic
theory. A well-known example is graphene <cit.>. In
graphene, an armchair edge gives rise to boundary
states with energies away from the Fermi level, while a zigzag
edge introduces states at the Fermi level with no dispersion. The
fact that we can classify solutions in the presence of a real hard
wall boundary as bulk states and boundary states is
a powerful insight into the spectrum of entanglement.
Spectrum of entanglement and edge states: The concept of the
entanglement spectrum in topological systems was introduced by Li
and Haldane in 2008 <cit.>. They investigated
a Hall system on the surface of a sphere and employed the Schmidt
decomposition to recast the problem in terms of wave functions
defined on each hemisphere of the sphere. The logarithm of the Schmidt
coefficients was then coined the entanglement spectrum. It
encodes the physical information on how the original wave function
is woven at the fictitious boundary created by the choice of basis.
Remarkably, through numerical analysis, they observed that the entanglement
spectrum accurately reproduces the low energy spectrum of a genuine
boundary in a Hall bar.
Following their pioneering work, subsequent contributions have provided further
support for these findings <cit.>.
These studies consistently demonstrate that in topological systems
the entanglement spectrum faithfully reproduces the spectrum of genuine
edge states, albeit at a finite fictitious temperature of entanglement,
T_E, and at low energies.
In what follows, we assume that the initial wave function that we
aim to describe, denoted as |g⟩, is the ground
state wave function of a gapped local Hamiltonian of a topological
insulator. We proceed by dividing the system into two partitions,
|A⟩ and |B⟩, with an arbitrary
cut. The Hamiltonian can be written as the sum of Hamiltonians pertaining
to the individual partitions, namely H_A/B , along with the inter-partition
Hamiltonian H_AB,
H_0=H_A+H_B+H_AB.
The state |g⟩ can always be expressed using the
Schmidt decomposition |g⟩ =∑_iλ_i|A_i⟩|B_i⟩,
where |A_i/B_i⟩ form a complete basis of
each partition. These bases comprise both bulk and boundary
states. The key observation is that bulk states must vanish
at the boundary. As a consequence, any set of operators, { O_i},
that live at the cut defining the partitions will couple only to the
boundary states of the basis <cit.>. When
calculating expectation values of { O_i},
the state |g⟩ can be truncated to include only
the boundary states, leading to the definition of two boundary
theories, H_A^ B and H_B^ B, which can be
used to label these states. In this context, these boundary theories
can be called entanglement models. Notably, it has been
demonstrated that if the boundary theory is conformal<cit.>,
the maximally entangled conformal state, known as the Ishibashi state
|g^*⟩ =∑_i| B_A,i⟩| B_B,i⟩,
is projected into the desired wave function by the extrapolation
length β,
|g⟩ =e^-β/2(H_A^ B+H_B^ B)|g^*⟩ .
A similar discussion can also be found in <cit.>
and it was established that β^-1=T_E is proportional to
the bulk gap of the topological insulator.
The evaluation of correlations perpendicular to the boundary direction
exhibits exponential decay due to the expansion of the ground state
in terms of boundary states. A direct consequence of Eq. (<ref>)
is that any two operators situated on the fictitious boundary will
also have exponentially decaying correlations. These observations
can be understood within the framework of the Lieb-Robinson bound<cit.>.
In its more restricted form, the bound says that for a system with
a uniform gap and local interactions, the two point correlation function
of local operators O_1 and O_2 calculated
over the ground state should decay exponentially. Thus, the fictitious
temperature on Eq. (<ref>) enforces the Lieb-Robinson
bound along the direction of the cut.
The degree of arbitrariness in choosing the cut that creates the partitions
is an important question. When evaluating the expectation value of
an operator at the same spatial position, the particular cut is irrelevant.
The main advantage of using the spectrum of entanglement in this case
is to leverage the knowledge of the boundary theory and its symmetries.
This approach enables us to incorporate, for instance, fermionic interactions
into the analysis, expanding the scope of the discussion.
However, it is when evaluating operators { O_i}
at different positions that we truly capture the underlying physics (see
Fig. (<ref>)). All possible cuts are equally
suitable to create the entanglement model, hence each cut corresponds
to a one-dimensional channel through which information can propagate
in the bulk. To obtain the expectation value, we need to consider
all possible one-dimensional channels between the operator sites
⟨ O_1 O_2⟩ =1/N∑_{ c}⟨ g| O_1 O_2|g⟩ ,
where { c} are all possible cuts that pass through
O_1 and O_2 and N is the number of these
cuts. The shortest path that connects O_1 and O_2
dominates the sum, since Eq. (<ref>) tells us that
correlations will decay exponentially along the cut.
If we introduce additional terms to Eq. (<ref>),
V_A^ B, the simple description of the ground state in
terms of Eq.(<ref>) may no longer hold. A common scenario
is to add another Hilbert space,{|s⃗_i⟩},
at the boundary, V_A^ B=∑_i,s⃗ O_i⊗|s⃗_i⟩⟨s⃗_i|.
The true ground state, |G⟩, is not known a
priori. Nevertheless, the same reasoning that led to Eq. (<ref>)
can be used to write
|G⟩ =e^-β/2(H_A^ B+H_B^ B+V_A^ B)|g^*⟩⊗[∑_i|s⃗_i⟩],
as an equivalent one dimensional problem at finite temperature. An
entire range of 1D tools can now be used to find |G⟩.
We now use the Kane-Mele model to illustrate all these possibilities.
Kane-Mele model with an IRLM: Arguably the simplest topological
model that preserves time reversal symmetry is the Kane-Mele model
of free fermions on a honeycomb lattice<cit.>.
Let us consider this model in its standard form<cit.>
and introduce a Hubbard interaction between the fermions
H_0 =∑_⟨ i,j⟩ ,σc_iσ^†c_jσ+iΔ∑_⟨⟨ i,j⟩⟩ ,σ,σ^'ν_ijs_σσ^'^zc_iσ^†c_jσ^'
+u∑_in_i↑n_i↓
where n_iσ=c_iσ^†c_iσ, Δ is
the next-nearest-neighbor hopping strength, ν_ij=±1 depending
on the hopping orientation, s^z is the z Pauli matrix and
u is the Hubbard interaction strength.
In Fig (<ref>), we introduce a cut that separates
the two sublattices of the Kane-Mele model, resulting in two zig-zag
edges that can be described in terms of Luttinger liquids<cit.>.
These edges exhibit fermionic currents, namely J_A/B^R↑
and J_A/B^L↓. The fact that the model preserves time-reversal
symmetry implies that at low energies, only marginal forward-scattering
interactions need to be considered. These interactions can be described
by the boundary Hamiltonians
H_A/B,I^ B=v_f/4π∫ dx∑_a≠ b∈{ R↑,L↓}[g_1J_A/B^aJ_A/B^b+g_4(J_A/B^a)^2],
where v_f represents the Fermi velocity. By employing Abelian
bosonization <cit.>, the low energy theory
of each boundary shown in Figure <ref> can
be expressed as
H_A/B,0^ B+H_I^ B=ṽ/8π∫_-∞^∞dx [ (1/g)(∂_xϕ_A/B)^2+g(∂_xθ_A/B)^2 ],
where [ϕ_A/B(a),∂_xθ_A/B(b)]=-4π iδ(a-b),
g=√((1+g_4-g_1)/(1+g_4+g_1))
is the Luttinger parameter and ṽ=v_f√((1+g_4-g_1)(1+g_4+g_1))
is the bosonic velocity. The energy gap and the interactions in the
bulk system determine the Fermi velocity and the band width, which
can be approximated as Λ∼ T_E.
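For concreteness, the Luttinger parameter and boson velocity quoted above can be evaluated directly from the forward-scattering couplings; the following minimal Python sketch simply implements the two expressions (the numerical values of g_1 and g_4 in the example call are arbitrary illustrative choices, not values used in this work).

import numpy as np

def luttinger_parameters(g1, g4, v_f):
    """Luttinger parameter g and boson velocity v_tilde from the marginal
    forward-scattering couplings g1, g4 and the Fermi velocity v_f."""
    g = np.sqrt((1.0 + g4 - g1) / (1.0 + g4 + g1))
    v_tilde = v_f * np.sqrt((1.0 + g4 - g1) * (1.0 + g4 + g1))
    return g, v_tilde

# Example (arbitrary couplings): repulsive forward scattering gives g < 1.
print(luttinger_parameters(g1=0.2, g4=0.2, v_f=1.0))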
At a site on the edge of partition A we introduce an interacting
resonant level model with on-site repulsive interactions H_A,1^ B=H_A,0^ B+V∑_σc_0,σ^†d_σ+h.c.+UN_↑N_↓,
with N_σ=d_σ^†d_σ. Using the Schrieffer-Wolff
transformation<cit.> and taking the large U
limit, we obtain the Kondo model in a L.L. as one of the boundary
theories<cit.>
H_A,K^ B =H_A,0^ B+H_A,I^ B+J_z∂_xθ_A(0).S^z
+(J_⊥Λ)e^-iϕ_A(0)S^-+h.c.
where J_k=z,⊥∼ V^2/U and S⃗ is the spin projection
of the fermion at the d level. The scaling equations for this
Kondo problem flow to strong coupling for repulsive interactions,
g<1, and/or antiferromagnetic Kondo coupling, J_z,⊥>0.
The Kondo temperature can be estimated using the usual argument as
T_K/T_E=J_⊥^-1/1-g+4J_z/ṽ.
The physics of the ground wave function will depend on a comparison
between the Kondo temperature, T_k, and the temperature of entanglement,
T_E, which acts as an infrared cut-off to the renormalization
process.
For repulsive interactions and when T_k≫ T_E the Kondo singlet
forms. In the limiting case of J_z,⊥→∞, a c-fermion
at the resonant level site is bound to form the Kondo singlet.
This effectively removes the site from the lattice and introduces
a zig-zag edge in the middle of the bulk, see Fig. <ref>a.
However, for a finite J_z,⊥ the many body state is more complex.
The fermion at the local level d is entangled with many electrons
on the c-band, forming what is known as the Kondo cloud<cit.>.
The Kondo cloud is not a well-defined entity in real space, but it
is usually assumed to have a characteristic length scale of ξ_k=ṽ/T_k.
This region of the material, characterized by the Kondo cloud, is
an intriguing state of matter that should be surrounded by gapless
real edge states. If a diluted set of these Kondo impurities is separated
by distances of the order of ξ_k, it could allow current to
pass through the bulk of the material. The tunneling of electrons
in internal edge states created by impurities has been proposed before
as a mechanism to destabilize the topological phase<cit.>.
The difference here is that the region is not related to the size of the
impurities, but rather to the length of the Kondo cloud.
In the limit where T_k≲ T_E, we enter a perturbative
regime where the renormalized J_z,⊥ remains small due to
an irrelevant flow or an infrared cutoff. In this regime, multiple
magnetic impurities can interact through the c-band. To study this
interaction, we need to evaluate the Green's functions of the c-electrons
that mediate the interaction. For simplicity, let us consider a pair
of magnetic impurities, {S⃗_1,S⃗_2},
located on the same sublattice of the honeycomb lattice, see Fig.
<ref>b. We introduce a partition that connects the
two impurities through a minimum path of distance Δ x along
the edge. Since we are in the perturbative regime, it is straightforward
to write a Ruderman-Kittel-Kasuya-Yosida (RKKY)-like interaction between
the two impurities using the finite temperature Green's function of
the L.L.<cit.>,
H_RKKY^ B∝J_k^2/[β_E/πsinh(π/β_EΔ x)]^gS⃗_1.S⃗_2+l.r.t. .
The ground state of the magnetic impurities is a singlet. Therefore,
a photoemission-like experiment can then be used to directly determine
the Luttinger parameter in these types of systems by measuring several
of these singlet energies for different Δ x.
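A minimal numerical sketch of this RKKY-like coupling is given below; it evaluates the scaling form only up to the overall proportionality constant, with the coupling, the Luttinger parameter and the fictitious inverse temperature β_E chosen arbitrarily for illustration.

import numpy as np

def rkky_strength(delta_x, beta_E, g, J_k):
    """RKKY-like coupling of the two impurities (up to an overall constant),
    using the finite-temperature Luttinger-liquid scaling form above."""
    thermal_length = (beta_E / np.pi) * np.sinh(np.pi * delta_x / beta_E)
    return J_k**2 / thermal_length**g

# Arbitrary illustrative numbers: the singlet energy decays rapidly once
# delta_x exceeds the thermal length set by beta_E = 1/T_E.
for dx in (0.5, 1.0, 2.0, 5.0):
    print(dx, rkky_strength(dx, beta_E=1.0, g=0.8, J_k=0.1))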
The physics of a dense array of magnetic impurities will ultimately
depend on the interplay of the three energy scales in the problem:
T_E, T_K, T_RKKY. This interplay opens up a wide range
of possibilities for exploring the rich phenomenology of heavy fermions.
Conclusions: In this manuscript we propose a new theoretical
approach for incorporating electronic interactions in the study of
two dimensional topological insulators. Building upon the established
correspondence between edge state physics and the entanglement spectrum
in gapped topological systems, we leverage this connection to deconstruct
the two-dimensional system into one-dimensional channels that connect
any two points within the system. The effective theory of these
channels is precisely the edge state theory at the fictitious temperature
of entanglement. The effective theory carries the same topological
protection that real edge states have. Consequently, we can elegantly
incorporate fermionic interactions into the framework. To analyze
these systems we can employ well-known techniques such as Abelian
bosonization or the density matrix renormalization group. While considering
one dimensional channels in topological insulators is not a new idea<cit.>,
our approach naturally emerges from the ground state wave function.
We follow the general discussion by considering the Kane-Mele model
with interactions and in the presence of magnetic impurities. It is
well-established that impurities can give rise to localized regions,
or islands, within a topological system, which harbor edge states<cit.>.
Our findings reveal two intriguing scenarios. Firstly, in the Kondo
regime we identify a possible mechanism to destabilize the topological
phase. Secondly, in the perturbative regime we observe that the impurities
can order magnetically through an RKKY-like interaction. Interestingly,
this magnetic ordering in a dilute impurity system presents a unique
opportunity to experimentally measure the strength of fermionic interactions
in the topological system.
M. Derli would like to thank CAPES for financial support. E. Novais
would like to thank T. C. Jordo and F. M. Hungria for useful discussions.
[1] C. L. Kane, R. Mukhopadhyay, and T. C. Lubensky, Physical Review Letters 88, 036401 (2002).
[2] J. C. Y. Teo and C. L. Kane, Physical Review B 89, 085101 (2014).
[3] P. M. Tam and C. L. Kane, Physical Review B 103, 035142 (2021).
[4] T. Neupert, C. Chamon, C. Mudry, and R. Thomale, Physical Review B 90, 205101 (2014).
[5] T. Iadecola, T. Neupert, C. Chamon, and C. Mudry, Physical Review B 93, 195136 (2016).
[6] I. Peschel, Journal of Physics A: Mathematical and General 36, L205 (2003).
[7] X.-L. Qi, H. Katsura, and A. W. W. Ludwig, Physical Review Letters 108, 196402 (2012).
[8] E. Lieb, T. Schultz, and D. Mattis, Annals of Physics 16, 407 (1961).
[9] E. Witten, La Rivista del Nuovo Cimento 39, 313 (2016).
[10] T. Fukui, Physical Review Research 2, 043136 (2020).
[11] J. Lado, N. García-Martínez, and J. Fernández-Rossier, Synthetic Metals 210, 56 (2015).
[12] H. Li and F. D. M. Haldane, Physical Review Letters 101, 010504 (2008).
[13] H. Casini and M. Huerta, Journal of Physics A: Mathematical and Theoretical 42, 504007 (2009).
[14] G. Giudici, T. Mendes-Santos, P. Calabrese, and M. Dalmonte, Physical Review B 98, 134403 (2018).
[15] G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Physical Review Letters 90, 227902 (2003).
[16] L. Fidkowski, Physical Review Letters 104, 130502 (2010).
[17] B. Swingle and T. Senthil, Physical Review B 86, 045117 (2012).
[18] X. Chen and E. Fradkin, Journal of Statistical Mechanics: Theory and Experiment 2013, P08013 (2013).
[19] S. Ejima, F. Lange, and H. Fehske, Physical Review Letters 113, 020401 (2014).
[20] D. Buhr, M. E. Carrington, T. Fugleberg, R. Kobes, G. Kunstatter, D. McGillis, C. Pugh, and D. Ryckman, Journal of Physics A: Mathematical and Theoretical 44, 365305 (2011).
[21] S. Bravyi, M. B. Hastings, and F. Verstraete, Physical Review Letters 97, 050401 (2006).
[22] M. B. Hastings, Locality in Quantum Systems (2010), arXiv:1008.5137 [math-ph, quant-ph].
[23] B. Nachtergaele and R. Sims, in Contemporary Mathematics, edited by R. Sims and D. Ueltschi (American Mathematical Society, Providence, Rhode Island, 2010), vol. 529, pp. 141-176.
[24] C. L. Kane and E. J. Mele, Physical Review Letters 95, 226801 (2005).
[25] J. von Delft and H. Schoeller, Annalen der Physik 510, 225 (1998).
[26] A. C. Hewson, The Kondo Problem to Heavy Fermions (Cambridge University Press, 1993), 1st ed.
[27] D.-H. Lee and J. Toner, Physical Review Letters 69, 3378 (1992).
[28] A. Furusaki and N. Nagaosa, Physical Review Letters 72, 892 (1994).
[29] A. O. Gogolin, A. A. Nersesyan, and A. M. Tsvelik, Bosonization and Strongly Correlated Systems (Cambridge University Press, Cambridge, 2004), 1st ed.
[30] I. V. Borzenets, J. Shim, J. C. H. Chen, A. Ludwig, A. D. Wieck, S. Tarucha, H.-S. Sim, and M. Yamamoto, Nature 579, 210 (2020).
[31] S.-Q. Shen, Topological Insulators: Dirac Equation in Condensed Matter, vol. 187 of Springer Series in Solid-State Sciences (Springer Singapore, Singapore, 2017).
|
http://arxiv.org/abs/2307.00337v1
|
20230701133303
|
Recursive Algorithmic Reasoning
|
[
"Dulhan Jayalath",
"Jonas Jürß",
"Petar Veličković"
] |
cs.LG
|
[
"cs.LG"
] |
Recursive Algorithmic Reasoning

Dulhan Jayalath (equal contribution; Department of Computer Science and Technology, University of Cambridge, Cambridge, UK; dhj26@cl.cam.ac.uk)
Jonas Jürß (equal contribution; Department of Computer Science and Technology, University of Cambridge, Cambridge, UK; jj570@cl.cam.ac.uk)
Petar Veličković (University of Cambridge and Google DeepMind, London, UK)

Keywords: Machine Learning, ICML
Learning models that execute algorithms can enable us to address a key problem in deep learning: generalizing to out-of-distribution data. However, neural networks are currently unable to execute recursive algorithms because they do not have arbitrarily large memory to store and recall state. To address this, we (1) propose a way to augment graph neural networks (GNNs) with a stack, and (2) develop an approach for capturing intermediate algorithm trajectories that improves algorithmic alignment with recursive algorithms over previous methods. The stack allows the network to learn to store and recall a portion of the state of the network at a particular time, analogous to the action of a call stack in a recursive algorithm. This augmentation permits the network to reason recursively. We empirically demonstrate that our proposals significantly improve generalization to larger input graphs over prior work on depth-first search (DFS).
§ INTRODUCTION
If neural networks could learn to reason in an algorithmic structure, they may gain some of the substantial generalization properties seen in algorithms <cit.>. For example, sorting algorithms are correct regardless of the size of the input array. On the other hand, we cannot expect neural networks to generalize to significantly larger inputs than those seen during training. Furthermore, as first demonstrated by <cit.>, neural networks that mimic algorithms can outperform hand-coded solutions in terms of efficiency. Recently, <cit.> also showed that the algorithmic reasoning paradigm can enable neural networks to execute algorithms even with missing input features.
Since many classical algorithms are naturally amenable to graph representations <cit.>, recent approaches train GNNs with a recurrent state to execute algorithms <cit.>. Even with this state, these GNNs cannot execute recursive algorithms like DFS with arbitrarily large problem instances as they need memory at least large enough to store as many states as the maximum recursion depth of the problem.
To address this fundamental issue, we propose a framework for augmenting GNNs with a stack (the code used for our experiments is available at <https://github.com/DJayalath/gnn-call-stack>; the repository contains commands to reproduce all experiments). Inspired by call stacks in computer programs, this augmentation enables the network to learn how to save and recall state as required by recursive algorithms. As DFS is the prototypical recursive algorithm, we also conduct an analysis which allows us to identify several key modifications to how intermediate algorithm trajectories are captured in the CLRS-30 algorithmic reasoning benchmark <cit.>. These improvements allow the network to more closely resemble the structure of a recursive algorithm.
We test our framework by implementing two methods of augmenting GNNs with a stack. We evaluate these approaches on the benchmark, empirically observing that our stack-based methods outperform standard GNNs (including the work in CLRS-30) on out-of-distribution generalization. Moreover, through a set of ablation experiments, we find support for our arguments regarding the limitations of existing recursive algorithms in CLRS-30 and the benefits of our modifications.
Our insights are practical beyond DFS. The execution path of a recursive function's call graph is precisely a depth-first search. Therefore, DFS can in principle be used to express all other recursive algorithms given that the execution path is known upfront. Consequently, we believe that our analysis will be beneficial for algorithmic reasoning across many recursive problems.
Our main contributions are:
* A novel neural network architecture that uses a stack to learn to save and recall state exactly; this architecture significantly outperforms previous work <cit.> on out-of-distribution generalization when learning DFS.
* An analysis of how intermediate algorithm trajectories are captured in recursive algorithms and subsequent improvements to allow GNNs to align more closely with these algorithms.
§ BACKGROUND
§.§ Neural Algorithmic Reasoning
Algorithms can have strong guarantees about correctness and generalization. For example, it is possible to guarantee that a particular algorithm is correct for an input of any size. On the other hand, generalization remains a key problem in deep learning. We cannot guarantee that a neural network will be correct for problem instances of all sizes. Motivated by this dichotomy, the field of neural algorithmic reasoning (NAR) <cit.> seeks to mimic algorithms using neural networks with the goal of achieving similar generalization properties to algorithms.
Recent work has motivated the use of graphs as representations for algorithms. <cit.> showed GNNs are structurally well-suited to learn dynamic programming (DP) algorithms. <cit.> stated more generally that algorithms align closely with graph representations as they can be seen as manipulations of sets of objects and the relations between them. For example, the arrays that are input to and output by a sorting algorithm can be represented as chains. Based on this, <cit.> proposed a framework for neural algorithmic reasoning with GNNs.
In addition to generalization with respect to the size of the problem instance, this line of work can enable deep neural networks to utilize knowledge of algorithms they have already learned. <cit.> have shown that we can learn to transfer algorithmic knowledge with NeuralExecutor++ and <cit.> have shown that a single generalist neural network can learn to execute a wide range of algorithms—sometimes achieving better generalization performance than neural networks trained on the specific algorithm under test.
§.§ The CLRS Algorithmic Reasoning Benchmark
Towards the goal of unified evaluation in NAR, <cit.> introduced CLRS-30. It is both a benchmark for evaluating GNNs on algorithmic tasks and a standardised neural model for algorithmic reasoning. CLRS-30 measures NAR performance on a set of 30 curated algorithms, aligning closely to the definitions in the algorithms textbook Introduction to algorithms by <cit.>.
The neural model represents the inputs and outputs of an algorithm (and the relations between them) as a graph G = (V, E). At each step of training, CLRS-30 provides ground truth values for the state of variables in an algorithm—these are called hints <cit.>. For example, the left and right pointers in the quicksort algorithm are hints. The network learns to predict the state of these hints, which are encoded as vectors, at each step of computation. By learning to predict hints, the network's reasoning may align more closely with that of the algorithm.
Figure <ref> provides a high-level unfolded view of the recurrent steps in the CLRS-30 benchmark. The inputs to one step are node inputs 𝐱_i, edge inputs 𝐞_ij, and graph inputs 𝐠. These inputs are defined by the algorithm and hints are included as part of them (where hints can belong to nodes, edges, or the graph). They are encoded with linear layers f to get node, edge, and graph features 𝐡_i, 𝐡_ij, 𝐡_g ∈ℝ^d_𝐡 where
𝐡_i^t = f_n(𝐱_i^t) 𝐡_ij^t = f_e(𝐞_ij^t) 𝐡_g^t = f_g(𝐠^t),
and d_𝐡 defines the dimension of these features[This is only the typical case. Note that the node, edge, and graph features can each have different dimensions if so desired.]. The features are passed through a processor network (a GNN) ψ to get processed node and edge features 𝐩_i, 𝐩_ij∈ℝ^d_𝐡 such that
𝐩_i^t, 𝐩_ij^t = ψ(𝐡_i^t, 𝐩_i^t-1, 𝐡_ij^t, 𝐡_g^t)
where 𝐩_i^t-1 is a recurrent state carried forward from the previous time step. These processed features are decoded to make predictions for the hints
ℋ̂^t = g_ℋ({𝐩_i^t| i ∈ V}, {𝐩_ij^t| (i, j) ∈ E})
where g_ℋ is the hint decoder. In the last step T, the outputs are predicted as
𝒪̂ = g_𝒪({𝐩_i^T| i ∈ V}, {𝐩_ij^T| (i, j) ∈ E})
where g_𝒪 is the output decoder. The hint predictions are aggregated with the algorithm inputs in the following step to form the next inputs 𝐱_i^t+1, 𝐞_ij^t+1, and 𝐠^t+1. Therefore, the network is applied like a recurrent component across steps. In training, the hints can optionally be teacher forced (i.e., we can replace the generated hints with the ground truth hints during training) with some probability. Note that the processed node embeddings 𝐩^t_i will be the recurrent state passed to the next step.
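The following Python sketch summarizes one recurrent step of this encode-process-decode loop; it is a schematic rendering rather than the actual CLRS-30 (JAX) implementation, and the callables f_n, f_e, f_g, psi, and g_hint stand in for the learned encoders, processor GNN, and hint decoder.

# Schematic single step of the encode-process-decode loop (not the CLRS-30 JAX code).
# f_n, f_e, f_g are the linear encoders, psi the processor GNN, g_hint the hint decoder.
def clrs_step(x, e, g, p_prev, f_n, f_e, f_g, psi, g_hint):
    h_nodes = {i: f_n(x_i) for i, x_i in x.items()}        # node features  h_i^t
    h_edges = {ij: f_e(e_ij) for ij, e_ij in e.items()}    # edge features  h_ij^t
    h_graph = f_g(g)                                       # graph features h_g^t
    # The processor consumes the current features and the recurrent state p_i^{t-1}.
    p_nodes, p_edges = psi(h_nodes, p_prev, h_edges, h_graph)
    hints = g_hint(p_nodes, p_edges)   # predicted hints, aggregated into the next inputs
    return p_nodes, p_edges, hints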
§.§ Depth-First Search in CLRS-30
The DFS algorithm in CLRS-30 (Appendix <ref>) has a set of hints which we describe in Table <ref>. In this algorithm, each hint is encoded for each node. For example, there is a pointer to the predecessor of every node in the graph. Therefore, we call these hints per-node hints. In other algorithms, a hint can be associated with the whole graph (a graph hint). One example of a graph hint is the min pointer in binary search. It does not belong to any particular element of the input representation, but is instead shared between all of them.
§ AUGMENTING A GNN WITH A STACK
Recursive algorithms typically require storing state in a call stack, executing the recursive call, and finally restoring this state to complete the recursion step. To support similar reasoning, our method adds stack memory to the processor network described in Section <ref>, providing an inductive bias towards storing and recalling state like a call stack. We first add a one-hot encoded graph hint
op^t ∈ {push, pop, no-op} (no-op indicates no stack operation: the state of the stack is unmodified),
denoting the stack operation that the target algorithm performs at step t. The ground truth for this hint is push when entering a recursive call in the target algorithm, pop when returning from one, and no-op otherwise. This simulates the actions of a call stack.
We introduce a stack at step t as 𝒮^t, which is composed of a sequence of stack elements 𝐳^0, …, 𝐳^top_t, where top_t indicates the number of elements on 𝒮^t. We start with top_0:=0 and define 𝐳^0 := 0.
The elements that are pushed to the stack are defined by the type of stack. We introduce two types: a stack for every node whose elements are processed node features (a node-wise stack), and a single stack for the graph for which elements are some pooled encoding of the node features (a graph stack). Figure <ref> provides a high-level demonstration of how the stack is used and how stack usage is learned. Stack operations are supervised such that the network learns to push and pop at precisely the same times as the recursive algorithm (i.e., when state needs to be saved, and when it needs to be recalled).
§.§ Node-Wise Stack
To store one element per node i∈ V we define 𝐳_i^top_t∈ℝ^d_stack to be the top stack element corresponding to node i at step t.
Depending on the predicted stack operation ôp^t we can then update the stack for step t+1 as follows:
top_t+1 =
top_t + 1, if ôp^t = push
max{top_t-1, 0}, if ôp^t = pop
top_t, if ôp^t = no-op
𝐳_i^top_t+1 =
ϕ_value(𝐩_i^t), if ôp^t = push
𝐳_i^top_t-1, if ôp^t = pop
𝐳_i^top_t, if ôp^t = no-op
Here, ϕ_value:ℝ^d_𝐡→ℝ^d_stack denotes a (potentially learnable) function to decide which information to put on the stack. In each step t we concatenate 𝐳_i^top_t to the initial node embeddings that serve as input to the GNN ψ given by 𝐡_i^t and optionally (see Section <ref>) 𝐩_i^t-1.
Notably, using the top of the stack as an input to the network is effectively the same as providing a dynamic skip connection across time. As a result, we mitigate vanishing gradient issues because we do not need to backpropagate through intermediate time steps between when the state was first pushed and the current time.
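A minimal sketch of this node-wise stack update is given below, assuming the predicted stack-operation hint has already been reduced to a discrete op in {push, pop, no-op}; the array layout and function names are illustrative choices, not the authors' released implementation.

# Sketch of the node-wise stack update; `stack` holds one slot per (depth, node),
# `top` is the stack pointer, and `op` the decoded stack-operation hint.
def update_node_stack(stack, top, op, p_nodes, phi_value):
    if op == "push":
        top = top + 1
        stack[top] = phi_value(p_nodes)   # save the current processed node features
    elif op == "pop":
        top = max(top - 1, 0)             # expose the previously saved state again
    # "no-op": leave both the pointer and the contents untouched
    return stack, top

# stack[top] is then concatenated to the node inputs of the next reasoning step.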
§.§ Graph-Level Stack
We also consider a graph-level stack. In this case, the stack element 𝐳^top_t∈ℝ^d_stack is a vector of fixed size. The push and pop operations are similar to the node-wise stack. In the case of a push operation, we update the stack with
𝐳^top_t+1:=⊕_i∈ Vϕ_value(𝐩_i^t)
where ⊕ is some permutation-invariant aggregation. In step t the top stack element 𝐳^top_t is concatenated to the graph features 𝐡^t_g. For ϕ_value, we use a 2-layer MLP. Another option is to take the first d_stack entries of the node embedding such that
ϕ_value(𝐩_i^t):= (𝐩_i^t)_0:d_stack
where d_h≥ d_stack.
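A corresponding sketch of a graph-level push is shown below, using a sum as the permutation-invariant aggregation; again the function names are illustrative rather than taken from the released code.

import numpy as np

# Sketch of a graph-level push: pool the per-node values with a permutation-invariant
# aggregation (a sum here) into a single fixed-size stack element.
def graph_push(stack, top, p_nodes, phi_value):
    element = np.sum([phi_value(p_i) for p_i in p_nodes], axis=0)
    stack[top + 1] = element
    return stack, top + 1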
§ STACKS ARE NOT ALL YOU NEED
§.§ Recursive Problems Require Additional Memory
The DFS algorithm implementation in CLRS-30, although based on a recursive algorithm, is not truly recursive. Hints, such as the predecessor π, are present for each node (per-node hints). Together, these per-node hints provide global information about the entire computation state. This information is already sufficient to deduce what to do in the next algorithm step without additional memory as demonstrated by Figure <ref>. Essentially, with these per-node hints, a stack is not required. In contrast, the recursive implementation of DFS described in Appendix <ref> has access to only the predecessor of the current node. As a result, the variables in the recursive algorithm are more similar to graph hints, relative to the current node being explored, than per-node hints. The use of per-node hints in the CLRS-30 implementation is problematic as it implies that the network will not be closely aligned with a recursive algorithm.
To remedy this issue, we use a different set of hints with the aim of achieving closer algorithmic alignment with recursion. We provide hints based directly on the variables in the algorithm (Table <ref>), where all except color are graph hints rather than per-node hints. These new hints are relative only to the current node being explored. The color hint remains a per-node hint as this information is required when looping over the neighbours of a node in the algorithm (see line 5 of DFS-Visit() in Appendix <ref>). This configuration of hints is more similar to the state pushed to the call stack in a truly recursive DFS algorithm as we only save the state related to the node we are currently exploring. With this change, the network will not have enough information to execute DFS without storing and restoring state as the DFS algorithm does. A similar procedure can be applied to recursive algorithms in general by ensuring that variables which would be pushed onto the call stack in a recursive call are used as graph hints.
§.§ Recurrent States Can Encode Global Information
Removing per-node hints is not enough as they can also be learned implicitly in the node-wise recurrent state 𝐩_i^t-1. Whatever information can be carried from a hint, can also be learned as part of this hidden state. Therefore, the network could learn a representation similar to the previous unmodified hints through this state. In such a case, the network would not need to save or restore state once again. Consequently, we do not pass information in the recurrent state to the GNN and enforce that it relies only on the information on our stack. Hence, we modify Equation <ref> to
𝐩_i^t, 𝐩_ij^t = ψ(𝐡_i^t, 𝐡_ij^t, 𝐡_g^t)
Similarly, a node-wise stack could be used like a node-wise recurrent state by pushing to the stack in each step. In this case, we would provide a recurrent state to the GNN, which could learn per-node hints implicitly, via the call stack. Since our stack is explicitly supervised through ground truth stack operations, we avoid this problem by incentivizing it to perform the correct stack operation rather than pushing in every step.
§.§ Recursive Algorithms Generate Results Sequentially
Moreover, recursive algorithms do not output the complete result at once. Instead, they sequentially generate the result over the recursive calls. For instance, in DFS, the search result (predecessors of all discovered nodes) is generated sequentially as nodes are explored.
This is different to CLRS-30 where, in the output, all of the predecessors are predicted at once from the processed features. Since we use graph hints instead of per-node hints and replace the node-wise hidden state by a graph-level stack, we need to memorize the previous results as part of the graph-level stack element. This is a memory bottleneck and will inevitably lead to detrimental performance with a growing number of nodes. Therefore, we modify CLRS-30 to collect the predicted outputs for each node into a single output to align more closely with the behaviour of the recursive algorithm. We provide an example of this for DFS in Figure <ref>. This method can also be applied to other recursive algorithms. For example, in quicksort, we would collect the final position of the pivot element after each recursive call.
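A schematic of this sequential output collection for DFS might look as follows, assuming each algorithm step exposes the currently explored node u together with the predicted predecessor for that node; the helper below is purely illustrative.

# Sketch of sequential output collection for DFS: the predicted predecessor of the
# currently explored node u is recorded step by step instead of being decoded
# only once at the final step.
def collect_outputs(step_predictions):
    """step_predictions: iterable of (u, predicted_predecessor_of_u) pairs."""
    predecessors = {}
    for u, pi_u in step_predictions:
        predecessors[u] = pi_u   # later steps may refine the entry for the same node
    return predecessors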
§ RESULTS & DISCUSSION
In Table <ref>, we summarize our results on different network configurations. We refer to test accuracy on larger graphs (96 nodes) as out-of-distribution (OOD) performance. While the setup proposed by <cit.> (Experiment <ref>) achieves near-perfect test accuracy on our dataset, it fails to generalize to larger graphs. In contrast, our method achieves similar in-distribution performance, and at 73% accuracy, drastically better OOD generalization performance (Experiment <ref>). Experiment <ref> shows that teacher forcing is required to achieve the generalization performance we see. In line with a similar observation by <cit.>, DFS is one of the algorithms in CLRS-30 that benefits from teacher forcing.
In Experiment <ref>, we note that removing the stack only has a minor effect on the OOD performance but significantly reduces in-distribution accuracy. We hypothesize that this is the performance that can be achieved when only the hints are provided. There is no additional information propagation in terms of stack or hidden state. The main generalization benefit stems from turning our per-node hints into graph hints and collecting outputs as described in Section <ref>. This is a result of a significant improvement in alignment with the original DFS algorithm.
Notably, not learning the value network ϕ_value (Equation <ref>), does not noticeably impact performance (Experiment <ref>). This could indicate that learning an encoding of the processed features is unnecessary as the GNN ψ, which processes features, is able to learn a good encoding anyway. In addition, as we empirically demonstrate in Experiment <ref>, collecting the outputs is a crucial component of aligning with the DFS algorithm as it has a significant impact on accuracy.
While we adjusted the DFS hints and output collection in a way that theoretically makes a graph-level stack sufficient, learning to store the relevant information for each node appears to be easier than also learning which node to focus on. This is reflected in the fact that adding a (per-node) recurrent state to the graph-level stack (as done in the original method in Experiment <ref>) yields another significant boost in performance (Experiment <ref>). Removing the graph-level stack only has a minor effect (Experiment <ref>).
Based on this insight, we evaluate a network augmented with a node-wise stack as described in Section <ref>. We use the same graph hints and outputs as described in Section <ref>. This allows us to propagate per-node information while maintaining the inductive bias of a call stack. As our network is essentially a graph RNN, it faces the same forgetting issues as other recurrent networks <cit.>. The stack resolves these forgetting issues, which are caused by using a node-wise hidden state instead of a stack as in Experiment <ref>. Our method enables the network to perfectly recall state from more than one step in the past—a particularly practical feature when dealing with problems of high recursion depth. We achieve perfect test accuracy in-distribution as well as on larger out-of-distribution graphs (Experiment <ref>). Even when reintroducing the recurrent state, the network learns to make use of the node-wise callstack and achieves only slightly worse generalization accuracy (Experiment <ref>).
Our modifications make the implementation in CLRS-30 significantly more closely aligned with the recursive DFS algorithm defined by <cit.>. Noting that memoization does not change expressiveness, DFS can be expressed as a dynamic programming problem. Previous work may have achieved strong empirical results on DFS without our modifications because the previous hints were sufficient to allow the algorithm to be solved as DP. Hence, the problem can be reasoned about without a stack due to the alignment between GNNs and DP as shown by <cit.>. It is perhaps because other approaches do not conform as closely to DP as our approach conforms to recursion that these implementations do not result in networks which generalize as well out-of-distribution.
§ RELATED WORK
Motivated by patterns which are difficult to learn in deep neural networks, <cit.> developed stack-augmented recurrent networks. They use the stack to learn control over the memory of the network, enabling it to learn with infinite structured memory. They showed that these networks are able to learn some basic algorithms (such as binary addition) which require memorization. However, they did not study recursive algorithms or attempt to align the learned method with the algorithm structure. <cit.> incorporated recursion through the Neural Programmer-Interpreter (NPI) framework <cit.>. This was achieved by incorporating recursive elements into the NPI traces (somewhat similar to incorporating per-node hints in CLRS-30). They demonstrated strong generalization performance when learning sorting algorithms. In contrast to our approach, they do not explicitly introduce a call stack to learn the relevant state. <cit.> proposed a method of relaxing conditions on control structures in algorithms such that they were smoothly differentiable. This approach allows neural networks to directly learn the relaxed algorithms. Similar to the method proposed by <cit.>, this technique also cannot permit reasoning like recursive algorithms as there is no saving or restoring of state.
§ CONCLUSION & FUTURE WORK
To enable neural networks to inherit some of the generalization properties of recursive algorithms, we introduced a new framework for augmenting GNNs with a call stack. This framework permits GNNs to execute recursive algorithms in a way that is more aligned with recursion. We also proposed improvements to capturing intermediate algorithm trajectories and predicting outputs in the CLRS-30 algorithmic reasoning benchmark that further improved algorithmic alignment. With these improvements, our framework allowed a GNN with a call stack to significantly outperform previous work when generalizing out-of-distribution on DFS. Moreover, our stack-augmented graph neural network has the ability to perfectly recall state from history, avoiding the memory bottleneck of hidden states in recurrent networks.
It would be desirable to formalize the alignment of our modifications in CLRS-30 with our stack-augmented GNN architecture, similar to the alignment between DP and GNNs shown by <cit.>. In addition, we note that the stability of learning could be improved by teacher forcing the executed stack operation in addition to the stack hint.
Our method relies on ground-truth hints to supervise stack usage. This is usually not available outside an algorithmic reasoning setting. Usage of the stack could instead be learned through reinforcement learning, supervising the stack with the policy loss. This would enable call stacks to be used with neural networks beyond only known recursive algorithms. Similar to the work by <cit.>, it could allow the network to discover new methods of using the stack which can outperform known solutions. For example, the policy network of a navigation agent could employ our architecture. This would enable the agent to better learn to map and navigate its environment as this task requires implicitly learning to recursively plan paths.
Our work is the first to demonstrate the use of a stack-augmented neural network in NAR. We have used this architecture to improve algorithmic alignment in recursive problems and enlarged the class of algorithms we can precisely reason about with GNNs. As a result, this work is a step towards transferable algorithmic knowledge <cit.> and generalist algorithmic learners <cit.>. We hope that our work illuminates the path towards reasoning on recursive problems with neural networks.
§ ACKNOWLEDGEMENTS
DJ and JJ would like to sincerely thank Edan Toledo for valuable technical contributions which did not make it to the final manuscript, Dobrik Georgiev for assistance understanding the inner workings of CLRS-30, and Yonatan Gideoni for reviewing early drafts of this work. All the authors thank Zhe Wang and Murray Shanahan for reviewing the final draft of this paper. Finally, we would like to thank Reviewer #2 at the ICML 2023 Workshop on Knowledge and Logical Reasoning in the Era of Data-Driven Learning for their detailed and insightful comments on our work.
§ INCREASING DATASET DIFFICULTY WITH HIGHER RECURSION DEPTH
The graph distribution in CLRS-30 for training and testing is generated by randomly sampling Erdős–Rényi (E-R) graphs with different edge-connection probabilities. The expected distance between two nodes in these graphs is logarithmic in the number of nodes <cit.>. To create a more challenging task, we modify the graph distribution by replacing 15% of the graphs with randomly generated binary trees. These graphs have longer chains on average than the E-R graphs, requiring a greater recursion depth in DFS. As a result, the network is required to recall states from further back in time.
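To make the modified graph distribution concrete, the following is a minimal sketch of such a sampler; the node count, edge probability, and the way binary trees are grown are illustrative placeholders, not the benchmark's exact settings.

```python
import random
import networkx as nx

def sample_graph(n_nodes: int, p_edge: float = 0.3, tree_fraction: float = 0.15) -> nx.Graph:
    """Sample from the modified distribution: mostly Erdos-Renyi graphs,
    with a small admixture of random binary trees (deeper recursion in DFS)."""
    if random.random() < tree_fraction:
        # Grow a binary tree by attaching each new node to a random existing
        # node that still has fewer than two children.
        g = nx.Graph()
        g.add_node(0)
        for v in range(1, n_nodes):
            candidates = [u for u in g.nodes if g.degree(u) < (2 if u == 0 else 3)]
            g.add_edge(random.choice(candidates), v)
        return g
    return nx.erdos_renyi_graph(n_nodes, p_edge)
```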
§ POOLING WITH ATTENTION
In addition to the setup described in Section <ref>, we also evaluate a simple form of attention where we weight the embeddings produced per node based on a learned function ϕ_att:ℝ^d_𝐡×ℝ^d_graphfts→ℝ that depends on both the node embeddings and the encoded graph features.
𝐳:=⊕_i∈ Vϕ_att(𝐩_i^t, 𝐡_g^t)ϕ_value(𝐩_i^t)
The underlying idea here is that many algorithms would likely focus on the information of one or a few nodes for each step embedding. Apart from that, note that an extensive variety of attention-based mechanisms would be possible. On the one hand, one could change ϕ_att from a simple MLP to a more complex function, e.g. by generating key and value vectors from the node embeddings and query vectors from the graph-level features. On the other hand, one could change the representation of graph-level data from the graph features 𝐡_g^t to e.g. some pooled version of the node embeddings ⊕_i∈ V𝐩_i^t, or a combination of these. However, thoroughly evaluating all of these techniques was not the main goal of this work.
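As an illustration only, a minimal PyTorch sketch of the attention-weighted sum pooling in the equation above might look as follows; the hidden width and the simple MLP used for ϕ_att are assumptions, not the configuration used in our experiments.

```python
import torch
import torch.nn as nn

class AttentionSumPool(nn.Module):
    """Sketch of z = sum_i phi_att(p_i, h_g) * phi_value(p_i)."""

    def __init__(self, d_h: int, d_graph: int):
        super().__init__()
        # phi_att scores each node embedding given the graph-level features
        self.phi_att = nn.Sequential(nn.Linear(d_h + d_graph, 64), nn.ReLU(), nn.Linear(64, 1))
        # phi_value transforms the node embedding before pooling
        self.phi_value = nn.Linear(d_h, d_h)

    def forward(self, p: torch.Tensor, h_g: torch.Tensor) -> torch.Tensor:
        # p: (num_nodes, d_h) node embeddings, h_g: (d_graph,) graph features
        h_rep = h_g.expand(p.shape[0], -1)                      # broadcast graph features to every node
        weights = self.phi_att(torch.cat([p, h_rep], dim=-1))   # (num_nodes, 1) attention weights
        return (weights * self.phi_value(p)).sum(dim=0)         # (d_h,) pooled step embedding
```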
In Experiment <ref>, we evaluate this attention-based pooling approach. It fails to generalize to larger graphs. Since we use sum pooling, this may be because the magnitude of the encoded graph features is out-of-distribution, making the generated weights perform worse. A mean-pooling approach could lead to better performance.
§ IMPLEMENTATION DETAILS
Table <ref> details the values of hyperparameters in our method. Note that complete implementation details are also provided in our repository. The codebase contains a file with commands to reproduce all of our experiments.
§ DEPTH-FIRST SEARCH ALGORITHM
|
http://arxiv.org/abs/2307.00484v1
|
20230702061000
|
Enhanced Quantum Force Sensing by Digital Twinning of Atomic Bose-Einstein Condensates
|
[
"Tangyou Huang",
"Zhongcheng Yu",
"Zhongyi Ni",
"Xiaoji Zhou",
"Xiaopeng Li"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.quant-gas"
] |
Tangyou Huang^1,5, Zhongcheng Yu^2, Zhongyi Ni^1,5, Xiaoji Zhou^2,3,4, Xiaopeng Li^5,6,1,7,8 (xiaopeng_li@fudan.edu.cn)
[1] Shanghai Qi Zhi Institute, AI Tower, Xuhui District, Shanghai, 200232, China
[2] State Key Laboratory of Advanced Optical Communication System and Network, School of Electronics, Peking University, Beijing 100871, China
[3] Institute of Advanced Functional Materials and Devices, Shanxi University, Taiyuan 030031, China
[4] Institute of Carbon-Based Thin Film Electronics, Peking University, Shanxi, Taiyuan 030012, China
[5] State Key Laboratory of Surface Physics, Key Laboratory of Micro and Nano Photonic Structures (MOE), and Department of Physics, Fudan University, Shanghai 200433, China
[6] Institute for Nanoelectronic Devices and Quantum Computing, Fudan University, Shanghai 200433, China
[7] Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
[8] Shanghai Research Center for Quantum Sciences, Shanghai 201315, China
High-sensitivity detection plays a vital role in scientific discovery and technological applications.
Advances in sensitivity have been pivotal in expanding their boundaries.
While intriguing methods utilizing collective many-body correlations and quantum entanglement have been developed in physics to enhance sensitivity, their practical implementation remains challenging due to rigorous technological requirements.
Here, we propose an innovative approach that harnesses the capabilities of machine learning to significantly augment weak-signal detection sensitivity.
By training a generative machine learning model on time-of-flight measurements from atomic Bose-Einstein condensates (BEC), we create a digital twinning of the experimental system, accurately matching probabilistic distributions.
The digital replica is capable of generating both typical and atypical configurations, mirroring the fluctuations observed in experimental measurements caused by quantum shot-noise and technical noise.
An anomaly score, quantifying the level of configuration atypicality, is obtained through the machine learning model. When an external force is applied, it perturbs the measurement outcomes of the physical system. Remarkably, even a weakly affected physical system can be detected by the machine learning model by examining the anomaly score, enabling anomaly detection. This unconventional approach to force sensing is entirely data-driven, devoid of prior knowledge about the physical system or assumptions regarding the sensing process.
Our findings demonstrate a significant advancement in sensitivity, achieving an order of magnitude improvement over conventional protocols in detecting a weak force of approximately 10^-25 N. The resulting sensitivity reaches
1.7(4) × 10^-25 N/√(Hz). Notably, our machine learning-based signal processing approach does not rely on system-specific details or processed signals, rendering it highly applicable to sensing technologies across various domains.
§ INTRODUCTION
In recent decades, quantum technologies have made remarkable strides, culminating in the emergence of quantum sensing techniques that enable high-precision detection at the microscopic level. Quantum sensor leverages quantum resources to detect subtle changes in physical quantities such as time, force, and electromagnetic fields, providing extreme precision at the atomic scale <cit.>.
These implementations have been successfully realized across diverse platforms, including cold atoms <cit.>, superconducting circuits <cit.>, and solid-state spin systems <cit.>. Remarkably, recent progress in quantum sensing highlights its capability for real-world applications such as precision navigation <cit.>, gravity detection <cit.>, and dark matter searches <cit.>, among others <cit.>.
However, these platforms often face challenges associated with measurement uncertainty caused by quantum decoherence and back-action effects, limiting their practical applications <cit.>.
Surpassing this limit typically requires strong correlations and high entanglement or sophisticated measurement protocols <cit.>. While hardware upgrades can enhance sensing performance, computational approaches based on signal processing and data analysis offer a more economical way to enhance high-precision detection.
Earlier computational approaches have employed statistical learning of signal acquisition from the sensing observables, taking into account the inherent noise and variations <cit.>. In recent years, machine learning approaches have been used for sensing-related analysis and feature selection <cit.>. Nevertheless, these methods often demand a substantial amount of sensing data or rely on prior knowledge of signal and noise properties, thereby still limiting their applicability.
More recently, the concept of digital twinning has gained significant attention as a powerful tool for simulating and understanding complex physical systems. Digital twinning involves creating a virtual replica or model that mirrors the behavior and properties of a physical system, allowing for real-time analysis, optimization, and prediction. By effectively bridging the physical and virtual realms, digital twinning provides a unique opportunity to enhance the performance and capabilities of quantum force sensing experiments.
In this article, we propose a novel application of digital twinning for quantum force sensing in atomic BECs. Leveraging the advancements in generative machine learning models, we create a digital twin that faithfully represents the atomic BEC system under investigation. The digital twin captures the intricate correlations and non-linear dynamics of the physical system, enabling us to devise a novel approach for quantum force sensing based on anomaly detection. Conventional quantum force sensing techniques often rely on extracting basic statistical moments from high-dimensional experimental data, neglecting the valuable information encoded in complex correlations. In contrast, our digital twinning approach incorporates a generative machine learning model to construct a nonlinear function, which takes advantage of the complex correlation effects in the high-dimensional data. This innovative approach allows for improved signal-to-noise ratio and enhanced sensitivity, without compromising the long-term stability of the sensing system. We anticipate that our anomaly detection technique, facilitated by digital twinning, can be broadly applied to sensing experiments involving high-dimensional data acquisition cycles. By maximizing the utilization of high-dimensional data, our approach surpasses conventional techniques that rely solely on basic statistical features, unlocking new avenues for quantum force sensing and precision measurements.
§ RESULTS
The Bose-Einstein condensate system.
We create a bosonic quantum system by trapping about 2×10^5 ^87Rb atoms. This system forms a BEC at low temperature, about 50 nK in our experiment (Fig. <ref>). The atomic BEC is loaded into a triangular optical lattice to suppress unwanted real-space dynamics <cit.>. After preparation of the lattice BEC system, the optical lattice and the trapping potential are shut off, letting the atoms expand ballistically. Performing the time-of-flight (TOF) experiment, we measure the momentum distribution n( k). It takes about T_0 = 38 s to complete one experimental cycle.
In the time-of-flight measurements, we have shot-to-shot noise. There is quantum shot-noise, which arises from the quantum superposition of different momentum eigenstates due to atomic interactions, trapping potentials, and optical lattice confinement.
There is thermal noise: although the atomic BEC is cooled down to 50 nK, we still have thermally activated atoms. These atoms induce stochastic fluctuations in the measurement outcomes.
There is also technical noise: the control of the atom number, the trapping potential, and the optical lattice depth is not perfect in the experiment, and these may drift, noticeably or not, between experimental runs.
As a result, the TOF measurement outcomes in the consecutive experimental runs would unavoidably fluctuate from shot to shot. We thus denote the measurement outcomes as n_α ( k), with α indexing different experimental runs.
In detecting an external force acting on the BEC, a standard approach is to examine the response in the averaged center-of-mass (COM) momentum, i.e.,
k_ COM =∑_α∫ d^2 k n_α( k) k/∑_α∫ d^2 k n_α( k) .
Although the measurement outcome n_α ( k) is a two-dimensional image, which potentially involves very rich information,
the conventional approach of data processing following Eq. (<ref>) only consumes the zeroth and first order moments of the two-dimensional data, leaving behind higher-order correlations.
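For concreteness, a minimal sketch of this conventional pipeline is shown below; pixel indices stand in for the momentum coordinates, and any calibration factor between pixels and k is omitted.

```python
import numpy as np

def com_momentum(images: np.ndarray) -> np.ndarray:
    """Center-of-mass momentum averaged over TOF shots, following the equation above.
    `images` has shape (n_shots, ny, nx)."""
    _, ny, nx = images.shape
    ky, kx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    total = images.sum()                       # zeroth moment: sum over shots and pixels
    k_com_x = (images * kx).sum() / total      # first moment along x
    k_com_y = (images * ky).sum() / total      # first moment along y
    return np.array([k_com_x, k_com_y])
```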
The digital replica.
In order to incorporate the full information in the measurement outcomes, n_α ( k), one plausible way is to perform digital twinning of the physical system by matching the probability distributions. It then automatically takes into account high-order correlation effects.
Since the atomic BEC system in the experiment has noises of various origins, it is impractical to simulate the experimental measurement outcomes using conventional modeling approaches, for example by simulating Gross-Pitaevskii equations <cit.>.
In this study, we create a digital twin of the experimental system by implementing a generative machine learning model, which incorporates quantum, thermal, and technical noise channels simultaneously and on an equal footing in a purely data-driven approach.
We implement a generative adversarial network (GAN) <cit.> for digital twinning of the atomic BEC.
A GAN consists of a generator G(·) that attempts to map a latent vector z to realistic momentum-distribution data,
G(·): z↦ñ ( k),
and a discriminator D(·) that tries to differentiate the real data, n( k), from the fake data produced by the generator, ñ ( k) = G( z).
The two networks are trained simultaneously, with the generator attempting to produce data that can fool the discriminator, and the discriminator learning to correctly identify synthetic data (Methods).
The generator and discriminator are realized by two parameterized deep neural networks, denoted as G( z ;θ_G) and D(n( k) ;θ_D).
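A minimal sketch of such a generator/discriminator pair is given below. The latent dimension (100) and image resolution (64×64) follow the supplementary material, while the hidden-layer widths and activations are illustrative assumptions, not the trained model's exact architecture.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG = 100, 64  # latent size and image resolution used in this work

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG * IMG), nn.Tanh(),      # flattened 64x64 image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG * IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                          # realness score of the input image
)

fake = generator(torch.randn(8, LATENT_DIM))    # a batch of synthetic TOF images
score = discriminator(fake)                     # discriminator scores for the fake batch
```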
We collect 3.6 k independent measurements of momentum distributions and feed them to the GAN.
This amount of data takes about forty hours to collect in experiments, which is reasonably affordable.
In Fig. <ref>.a, we present fake data produced by the generator during the training procedure; the generated data are visually similar to the real data, indicating that the model is capable of capturing the underlying data distribution without mode collapse.
The trained generator provides a digital replica of the experimental measurements (Fig. <ref>.b,c), which can generate fluctuating configurations involving all noise channels automatically.
Remarkably, even PhD students cannot tell the difference between the experimental data and the data generated by the digital replica.
We create two groups of data, each containing 64 TOF measurement outcomes. The first group contains real experimental data, and the second group is a 50%:50% mixture of real and fake data. We randomly select 30 PhD students and let them know how these groups are formed. They are asked to identify which members of the second group are real experimental data. We find that the average accuracy is 48% <cit.>. This confirms that the digital replica indeed captures all the features in the TOF measurements of the experimental atomic BEC.
Just as the experimental data contain typical and atypical configurations due to noise, the digital replica also generates typical and atypical configurations.
Within the GAN framework, the degree of atypicality is quantified by an anomaly score,
𝒜( n( k) )= 𝒜_R( n( k) )+λ·𝒜_D ( n( k) ).
Here, the discrimination loss (DL) 𝒜_D is directly given by the discriminator.
The residual loss (RL) is produced by adding an encoder in front of the generator (Fig. <ref>), with
𝒜_R = ‖ n( k) - ñ( k) ‖_2. The encoder is trained by minimizing the residual loss.
The weighting coefficient λ∈ℝ is a hyper-parameter that balances RL and DL.
These two components evaluate the discrepancy between fake and real data in terms of image distance and feature discrimination <cit.>.
We choose λ = -0.76 in this work.
We find that the anomaly score has an approximately normal distribution (Fig. <ref>), which indeed reflects the typicality of the momentum distribution data.
Sensing by anomaly detection.
When an external force is applied, the physical BEC system produces TOF data n( k) with a different distribution. Conventional sensing schemes examine the response in the COM momentum (Eq. (<ref>)). The distributions of the COM momentum with and without an external force are different, and force sensing requires differentiating these distributions. When a weak force (F_0=7.81×10^-26 N in our experiment) is applied, the distribution of the COM momentum is only very weakly affected, with a barely noticeable difference from the force-free distribution.
With the digital replica, namely the generative machine learning model, we compute the anomaly score A(n( k)), a highly nonlinear function of n( k), that could incorporate higher order correlations of the data. This provides a systematic nonlinear data processing approach, much different from the linear data processing as in analyzing the simple COM momentum.
Remarkably, the anomaly score is much more sensitive to the force. The resultant distribution of the anomaly score caused by the weak external force is significantly different from the force-free distribution, in sharp contrast to the COM momentum (Fig. <ref>).
Despite the nonlinearity in the data processing, the response of the anomaly score remains linear in the external force (Fig. <ref>.c).
By comparing the distributions of the COM momentum and the anomaly score, it is evident that analyzing the anomaly score, known as anomaly detection in the context of machine learning, is more efficient for detecting the external force applied to the BEC system.
We further investigate the primary characteristics that contribute to the anomaly score in the presence of an external force.
Specifically, we assess the momentum dependence of the residual loss, A_R, namely,
n_R ( k) = | n(k)-G^*(E^*(n(k))) |,
with the generator G^*(·) and the encoder E^*(·) being fixed.
In Fig. <ref>.d, we provide
six representative atomic images from real experimental datasets labeled with anomaly scores.
In Fig. <ref>.e, we observe bright speckles in n_R( k), with these speckles becoming increasingly prominent as the applied force intensifies.
This observation suggests that the signals contributing to the anomaly scores are localized in momentum space, commonly referred to as anomaly localization in the context of anomaly detection.
The phenomenon of anomaly localization indicates that the machine learning model indeed captures the relevant features of the experimental data.
Further discussion on model interpretability from the perspective of feature representation <cit.> is provided in Supplementary material <cit.>.
Our findings regarding anomaly localization, exemplified by the presence of a hexagonal peak structure in the rightmost portion of Figure <ref>.e, imply that the dominant signal for force sensing originates primarily from high-momentum peaks observed in time-of-flight (TOF) measurements.
This could be attributed to the reduced impact of various sources of noise, such as atomic scattering, trapping potential, and thermal activation, on the high-momentum components of the Bose-Einstein condensate (BEC), owing to energy separation.
Sensitivity and Stability. In order to quantitatively characterize the advantage of the anomaly detection over conventional approaches, we compute the corresponding sensitivity.
A general force-sensing process involves a physical quantity V (here, the applied force) to be detected, and a signal q that is directly or indirectly measured in the experiment.
In the force sensing using the COM momentum, q corresponds to one component of k_ COM; in the anomaly detection, it corresponds to the anomaly score.
The measured signal q fluctuates between consecutive experimental measurements due to noise from various channels.
We assume the induced fluctuations on q are white noise, so the fluctuations in different measurements are completely independent.
The strength of the fluctuations is quantified by the standard deviation of q, to be referred to as σ_0.
A single measurement has a fixed time cost of T_0.
The minimum force we can resolve with a single measurement is given by
V_ min∼σ_0/|∂_V q|.
Performing N experimental measurements, the signal-to-noise ratio (SNR) is given by SNR = √(N)× |∂_V q| V/σ_0. The one-sigma sensitivity is defined as <cit.> S = √(T_0)×σ_0/|∂_V q|.
This definition applies to both conventional approaches and the anomaly detection.
Under a linear transformation of q, the sensitivity remains the same according to Eq. (<ref>). However, different signals that are related by nonlinear transformations do not necessarily have the same sensitivity.
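The following sketch illustrates how the one-sigma sensitivity can be estimated from two sets of measured signals (force-free and forced), with the slope ∂_V q approximated by a finite difference; the function applies equally to the COM momentum and to the anomaly score.

```python
import numpy as np

def one_sigma_sensitivity(q_free: np.ndarray, q_force: np.ndarray,
                          delta_F: float, T0: float = 38.0) -> float:
    """S = sqrt(T0) * sigma_0 / |dq/dF|, with dq/dF estimated from the shift
    between the force-free and forced signal distributions."""
    sigma_0 = q_free.std(ddof=1)                           # fluctuation of the healthy signal
    dq_dF = abs(q_force.mean() - q_free.mean()) / delta_F  # linear-response slope
    return np.sqrt(T0) * sigma_0 / dq_dF
```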
We compare the sensitivities of the COM momentum and the anomaly detection approaches taking exactly the same set of experimental data.
For the COM momentum approach, we obtain a sensitivity 𝒮 ^COM =6.8(9) × 10^-24 N/√(Hz) (Fig. <ref>.a).
For the anomaly detection, we have S ^AS=1.7(4)×10^-25 N/√(Hz).
This means the anomaly detection is about 40 times more sensitive than the COM momentum approach.
We emphasize that in the above comparison we use the raw experimental data without invoking any prior knowledge of the physical process. The anomaly detection approach is thus entirely data-driven.
We further examine how much the conventional COM momentum approach can be improved by machine learning based noise reduction. We perform Gaussian processing prior to extracting the COM momentum <cit.>. The resultant sensitivity can be improved to S_r^COM=1.6(4)×10^-24 N/√(Hz) (Fig. <ref>.a). But this is still one-order-of-magnitude worse than our anomaly detection.
In Figure <ref>.c, we show the comparison of the achieved sensitivity of digital twinning atomic BEC with previous experiments,
including phase-coherent velocimetry <cit.>, cold atoms in a cavity <cit.>, and trapped ions <cit.>.
Our achieved force sensitivity shows orders-of-magnitude improvement over other experiments.
There are two remarks we would like to make here. First, the standard quantum limit of our atomic BEC force sensor is 5.45×10^-29 N <cit.>, which indicates there is still considerable room to improve the sensitivity, either by reducing technical noise or through more advanced digital twinning techniques that effectively suppress it. Second, the digital twinning and anomaly detection techniques developed here are quite generic. These techniques
are readily applicable to improve the sensitivity of other experimental setups as well.
For quantum force sensing, it is also important to have long-term stability besides the high sensitivity. This is captured by the Allan Deviation <cit.>,
which is widely used to examine long-term drifts.
We confirm that the Allan Deviation of the anomaly detection falls off with the integration time (τ) as 1/√(τ), having the same scaling as the Allan Deviation of the COM momentum (Fig.<ref>.b). It is thus evident that no long-term drifts are induced by the nonlinear data processing in anomaly detection.
The 1/√(τ) decay also implies the fluctuations of the anomaly score are mainly white noise, which justifies the above definition of sensitivity.
It is worth noting here that the sensitivity can also be enhanced by choosing the high data-quality region of the TOF measurements according to the impulse theorem, as used in a previous study <cit.>. Such an analysis requires certain prior information about the force, and gives a sensitivity comparable to the present anomaly detection approach. Nonetheless, we emphasize that the anomaly detection approach is purely data-driven, and is consequently more robust against long-term drifts in experiments. The improvement in long-term stability is evident when comparing the Allan Deviation of the anomaly detection to the previous study <cit.>.
In the previous study, the Allan Deviation bends up at a time scale of τ= 10^4 s, whereas ours keeps decreasing as 1/√(τ) even at the time scale of 4× 10^4 s.
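For reference, a simple non-overlapping Allan-deviation estimator consistent with the scaling analysis above is sketched below; the exact estimator used for the figure may differ (e.g., overlapping bins).

```python
import numpy as np

def allan_deviation(q: np.ndarray, T0: float, max_m: int = None):
    """Non-overlapping Allan deviation of a signal time series q sampled every
    T0 seconds, as a function of the integration time tau = m * T0."""
    max_m = max_m or len(q) // 4
    taus, adevs = [], []
    for m in range(1, max_m + 1):
        n_bins = len(q) // m
        if n_bins < 2:
            break
        means = q[: n_bins * m].reshape(n_bins, m).mean(axis=1)  # bin averages over tau
        avar = 0.5 * np.mean(np.diff(means) ** 2)                # Allan variance
        taus.append(m * T0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)
```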
§ DISCUSSION
In this study, we present a novel method for quantum force sensing using digital twinning of atomic BEC and anomaly detection facilitated by a generative machine learning model. By incorporating complex correlation effects present in the experimental data through nonlinear processing, we achieve a significant enhancement in sensitivity while maintaining long-term stability. Unlike conventional approaches that rely on extracting basic statistical moments from high-dimensional data, such as time-of-flight (TOF) measurements in BEC, our anomaly detection approach employs a neural network-based nonlinear function, denoted as f_ NN ( x), to fully exploit the information within the high-dimensional data. Through extensive training and iterative refinement, this methodology effectively amplifies the signal-to-noise ratio, as confirmed by convergence in sensitivity with increasing training data <cit.>.
Notably, our findings reveal an intriguing aspect: the sensitivity of a physical sensor is intimately tied to the data processing strategy, denoted as S [f_ NN]. This implies the existence of an upper bound,
S_ opt = max_f_ NN{ S [f_ NN] },
which represents the maximum achievable sensitivity attainable by a given sensor configuration. The determination of this upper bound warrants further investigation in future research endeavors.
The other important direction is to investigate the fundamental quantum limits of the anomaly detection approach. How the optimal sensitivity S_ opt is fundamentally limited by the quantum shot-noise, and its scaling with the number of atoms in the BEC or the imaging resolution, are worth further theoretical studies.
Acknowledgments
We acknowledge helpful discussion with W. Vincent Liu.
This work is supported by National Program on Key Basic Research Project of China (Grant
No. 2021YFA1400900), National Natural Science Foundation of China (Grants No. 11934002, 12075128, T2225008), Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01), and Shanghai Science Foundation
(Grants No.21QA1400500).
§ METHODS
Generative Adversarial Networks
Generative Adversarial Networks (GANs) are a class of deep learning models used for unsupervised learning tasks, such as artificial image generation and information processing <cit.>.
Our goal is to utilize GANs as the generative model to construct the digital replica of our experimental data.
We start with a set of raw observable X= { x_1, x_2, x_3,..., x_N} measured from N independent experiments. Next, we shall train the GAN relying on finite number of datasets X.
Generally, a GAN consists of a generator G(·) that maps the latent vector z to realistic data, G(·): z↦x̃,
and a discriminator D(·) that identifies the real data x∈X as opposed to the fake data x̃=G( z) from the generator.
The two networks are trained simultaneously, with the generator attempting to produce data that can fool the discriminator, and the discriminator learning to correctly identify synthetic data.
In general, the generator and discriminator are realized by two parameterized deep neural networks, i.e., G( z;θ_G) and D(x;θ_D).
In this sense, parameter θ_D, θ_G are simultaneously optimized for a standard min-max loss function V(D,G) <cit.>:
min_Gmax_D V(D,G) = 𝔼_ x∼𝒫_0[log D( x)]+𝔼_ z∼𝒫_ z[log (1-D(G( z)))].
θ_D and θ_G are adjusted to maximize and minimize the objective V(D,G), respectively.
GANs work by training a generator network to produce synthetic data that resembles the healthy distribution x∼𝒫_0( x), while a discriminator network is trained to distinguish data that fall outside the healthy distribution.
In this vein, the convergence criterion is that the generator is able to fool the discriminator, resulting in a high-quality digital replica of the training data.
Anomaly detection
GANs are widely used in anomaly detection due to their ability to learn complex data distributions, making them effective in identifying anomalies that deviate from the healthy distribution.
Here, an anomaly score quantifies how much query data deviate from the healthy distribution 𝒫_0( x).
To identify anomalies via the GAN model, the authors of <cit.> proposed an AnoGAN structure to detect the degree of anomaly quantitatively.
More specifically, for query data x: first, one finds the optimal representation z_γ in the latent space 𝒵 that maximizes the similarity between the original data x and the generated data x̃= G( z_γ), and computes the residual loss (RL);
second, one calculates the discrimination loss (DL) at the level of the feature layers;
third, the overall loss, the weighted sum of RL and DL, gives the anomaly score (AS).
For this purpose, we define the RL
𝒜_R( x) = 1/n_x∑| x-G^*( z_γ)|,
and the DL
𝒜_D( x) = 1/n_f∑|h( x)-h(G^*( z_γ))|,
where h(·) refers to the feature layer of an intermediate layer of the discriminator, and n_x, n_f are the corresponding numbers of pixels and feature entries.
Therefore, the anomaly score is defined as in Eq.(<ref>), e.g.,
𝒜( x)= 𝒜_R( x)+λ·𝒜_D( x),
where the hyperparameter λ is a weighting coefficient. Note that the generator G^*(θ_G^*) and discriminator D^*(θ_D^*) are fixed from the previous adversarial training, and only the input vector z is adapted via backpropagation for query data x <cit.>.
Remarkably, a more precise reconstruction can be realized by introducing the inverse mapping μ(x̃)→ z.
Instead of direct backpropagation <cit.>, we here apply an adaptive method <cit.> called f-AnoGAN, which introduces an extra DNN encoder E(·), trained only on X, to automatically produce the latent vector for any input data, i.e., z_γ = E( x); see details in the supplementary material <cit.>.
Accordingly, the anomaly detection via f-AnoGAN proceeds as follows: the GAN is initially trained on the (signal-free) dataset X sampled from the healthy distribution 𝒫_0( x), and anomaly detection is subsequently performed by evaluating the anomaly score 𝒜( x) (Eq. (<ref>)) for arbitrary data x∈𝒳. The model yields a large AS for data from a destructive distribution 𝒫̃( x), whereas a small AS indicates a high probability that the query data come from the healthy distribution 𝒫_0( x).
We refer the reader to supplement materials <cit.> for details about model training and evaluation.
As a result, we obtain a fixed GAN-based generative model to automatically output the anomaly score 𝒜=f^*_NN( x|{θ_G^*,θ_D^*}) for data x∈𝒳.
In a sensing application, the anomaly score assigned for signal-free experimental outcomes follows the healthy probability distribution ℙ_0(𝒜( x)).
When an unknown perturbation V is present (i.e., an external force in our setup), a destructive probability distribution ℙ̃(𝒜(x̃)) can likewise be formed from our fixed model; see the workflow in Fig. <ref>. The anomaly scores of signal-free data are statistically smaller than those of the perturbed data,
indicating that a sensitive signal (the anomaly score) can be constructed by the anomaly detection method.
DATA AVAILABILITY
Data are available from the authors upon reasonable request.
CODE AVAILABILITY
The computation code for producing the results in this work is available upon reasonable request.
AUTHOR CONTRIBUTION
X.L. conceived the main idea in discussion with
T.H. and X.J.Z.
T.H. designed the machine learning framework and carried out the tests.
Z.C.Y. contributed to the experimental data analysis.
All authors contributed to writing the paper.
COMPETING INTERESTS
The authors declare no competing interests.
Supplementary Material
§ DERIVATION OF THE SENSITIVITY
We start with two probability distributions of signal q for N independent sensing experiments, i.e., ℙ_0(q) and ℙ_δ(q), which refer to the healthy and destructive distribution, respectively.
In other words, the signal initially obeys q∼ℙ_0(q) with static quantity V_0, and follows a destructive distribution q∼ℙ_δ(q) after interacting with V = V_0+δ V.
A readout error occurs when determining whether a query signal q belongs to ℙ_δ instead of ℙ_0, which can be quantified by the variance σ^2 = 1/(4C^2N) <cit.> with efficiency coefficient C∈[0,1]. Here, C=1 indicates that the experimental observable can be perfectly identified, and C = 0 means the sensor is unable to detect the signal in a finite number of experiments.
It is worth noting that C= 1/4 corresponds to the signal q following a Gaussian distribution in this case.
On the other hand, we consider the decoherence effect, resulting in an exponential decay of the signal change, δ q (t) = δ q e^-χ(t), during the sensing period t∈[0, T], where q_0=q(0) and χ(t) is an empirical function that depends on the decoherence of the quantum resource <cit.>.
Note that we define the signal response ∂_V q = δ q /δ V with respect to a small change δ V of the physical quantity.
As a result, we obtain the signal-to-noise as
SNR = δ q e^-χ(t)/σ=2δ V|∂_Vq|e^-χ(t)C√(N),
where the experiment count N can be represented on a time basis as N=T/(t+t_m), where T, t and t_m are the total time, the sensing time, and the time for sensor initialization and measurement of a single experiment, respectively.
Generally, the signal change δ q is a function of the quantity δ V, i.e., δ q ∝ (δ V)^k with k=1,2 for a Ramsey experiment <cit.>.
The minimum detectable physical quantity V_min results from requiring unit SNR with respect to the measurement uncertainty, i.e., δ q =σ. In this vein, the sensitivity is interpreted as the minimum detectable signal that yields unit SNR over one second (T= 1s)<cit.>
sensitivity: =e^χ(t)√(t+t_m)/2C|∂^k_Vq|.
The above equation provides a general expression for the definition of sensitivity in quantum sensing applications.
In our case, we assume the signal response δ q is a linear function of the perturbation δ V, since δ V→ 0 in high-precision detection. Meanwhile, we suppose the single-experiment time T_0 is smaller than the coherence time, so that the exponential decoherence factor can be neglected.
Without loss of generality, we define the minimum detectable quantity at the one-sigma uncertainty of the healthy distribution, σ_0=√(var[ℙ_0(q)]), leading to the sensitivity definition:
𝒮 = √(T_0)×σ_0/|∂_Vq|,
where T_0 = t+t_m is the time cost for single experiment.
Since noise interference is usually inevitable on present-day quantum devices, one could reduce the readout noise by performing a very large number of experiments, N→+∞. However, this is prohibited in state-of-the-art quantum experiments due to the decoherence effect. In this regard, our method aims to optimize the signal construction f(·):V↦ q to increase the sensitivity.
§ THE UNSUPERVISED LEARNING OF THE F-ANOGAN
In this section, we introduce the training procedure including the model structure and loss functions, where the source data and codes are available from the corresponding author upon request.
In our setup, we collect 3.6×10^3 two-dimensional (2D) TOF atomic images with and without the optical force present.
The training dataset X contains 90% of the total experimental outcomes, with the rest used as the test dataset. Each image is converted to a resolution of 64× 64 and normalized to the range x∈[-1,1]^n.
The discriminator D(θ_D) is a binary classifier that distinguishes the real data x∼𝒫_data from the reconstructed data x̃∼𝒫_g, where the output is a scalar p_D∈[0,1] representing the probability that the input query image is real.
Note that the hidden layers of the discriminator consist of a few perceptron layers before the final activation layer, and we choose the last layer of the discriminator as our feature layer h( x).
The generator G(θ_G) is composed of multiple fully-connected layers; its input is a latent vector z of dimension 100, randomly drawn from a uniform distribution z∼𝒫_ z, and its output x̃ is a reconstructed image of the same resolution.
In the GAN framework, the goal of the discriminator is to accurately classify the real data x as real and the generated data x̃ as fake. For a minibatch of m samples, the discriminator is trained by maximizing:
L_D= 1/m∑_i=1^m [log D( x^(i))+log(1-D(G( z^(i))))].
The generator aims to fool the discriminator by minimizing the loss function
L_G= 1/m∑_i=1^m (1-D(G( z))).
In other words, D and G are trained with the minimax loss function:
min_Gmax_D V(D,G) = 𝔼_ x∼𝒫_data[log D( x)]+𝔼_ z∼𝒫_ z[log (1-D(G( z)))].
where trainable parameters {θ_D,θ_G} are sequentially optimized in a generative adversarial training process.
The above training process theoretically yields the convergence condition 𝒫_data = 𝒫_g, resulting in the minimum loss value 0 for D and -log 4 <cit.> for G.
In the early formulation of GAN training <cit.>, the loss functions Eqs. (<ref>-<ref>) are designed to minimize the Jensen-Shannon (JS) divergence between the distribution of realistic data 𝒫_data and that of the images 𝒫_g produced by the generator. However, this formulation often suffers from collapsing gradients, which can cause GAN training to fail. Remarkably, Arjovsky et al. <cit.> proposed using the Wasserstein distance instead of the JS divergence since it provides a smoother training landscape. In our case, we apply the Wasserstein distance to train our GAN, i.e., we use a Wasserstein GAN (WGAN) <cit.>.
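A minimal sketch of one WGAN update consistent with the description above is given below; weight clipping is assumed for the critic's Lipschitz constraint, since the text only states that the Wasserstein distance is used.

```python
import torch

def wgan_step(generator, critic, opt_g, opt_c, real, latent_dim=100, clip=0.01):
    """One Wasserstein-GAN update (critic then generator)."""
    # --- critic update: maximise D(x) - D(G(z)) ---
    z = torch.randn(real.shape[0], latent_dim)
    fake = generator(z).detach()
    loss_c = -(critic(real).mean() - critic(fake).mean())
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)                 # enforce the Lipschitz constraint
    # --- generator update: maximise D(G(z)) ---
    z = torch.randn(real.shape[0], latent_dim)
    loss_g = -critic(generator(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()
```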
To formulate the anomaly detection, we must find the best z_γ∈𝒵 and feed it into the trained generator G^*( z_γ) to obtain the maximum similarity between the reconstructed and real data. In the original paper <cit.>, the latent vector z is optimized via backpropagation to minimize the distance between real and fake images, aiming to find the optimized z= z_γ for which the image G^*( z_γ) is most similar to x.
This method has the drawback that it lacks flexibility in real-world applications. To acquire an adaptive latent vector z∈𝒵, we implement an encoder network E(x) to learn the reverse mapping E( x)→ z, with input x∈ X and output latent vector z.
The training database is the same as the one used in the WGAN training process, and we apply the izif-type loss function <cit.>
L_E(x) = 1/n_x∑_i=1^n_x |x-G^*(E(x))|^2+ λ/n_f∑_i=1^n_f |h(x)-h(G^*(E(x)))|^2,
where n_x and n_f are the numbers of pixels in an image x and of entries in the corresponding feature map h( x) of the fixed discriminator D^*( x), and λ is a weighting constant.
This method, which uses an extra encoder network to map data into the latent space for anomaly detection, is called fast AnoGAN (f-AnoGAN); see Ref. <cit.> for more detail.
Meanwhile, we use the symbol E^*(·) to denote the fixed encoder after training on the dataset X. To this end, we have the fixed model comprising the discriminator D^*( x), generator G^*( x) and encoder E^*( x).
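The encoder training objective of Eq. (<ref>) can be sketched as follows; `feature_layer` stands for the frozen discriminator's intermediate feature map h(·), and the mean-squared-error form reproduces the pixel- and feature-averaged squared errors.

```python
import torch
import torch.nn as nn

def izif_loss(x, encoder, generator, feature_layer, lam=1.0):
    """Encoder loss L_E(x): image reconstruction error plus lambda-weighted
    feature-space error; generator and discriminator weights stay frozen."""
    x_rec = generator(encoder(x))                           # G*(E(x))
    residual = nn.functional.mse_loss(x_rec, x)             # image-space term
    feat_real, feat_fake = feature_layer(x), feature_layer(x_rec)
    discrim = nn.functional.mse_loss(feat_fake, feat_real)  # feature-space term
    return residual + lam * discrim
```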
Here, we apply the stochastic gradient-descent based Adam optimizer <cit.> for the above training processes.
The whole data-processing and training pipeline is implemented on the PyTorch platform with GPU acceleration, using computation resources from the cloud service Colaboratory <cit.>.
The anomaly score.
Using the fixed networks G^*(·), D^*(·) and E^*(·), we then calculate the anomaly score defined by Eq. (<ref>) in the main text.
First, the residual loss (RL) score is directly calculated by
𝒜_R( x) = 1/n_x∑| x-G^*(E^*( x))|.
where the pixel number n_x = 64× 64.
We compute the discrimination loss (DL) as the distance between the feature layers of the real and fake images
𝒜_D( x) =1/n_f∑ |h( x)-h(G^*(E^*( x)))|,
where we choose the last layer of the discriminator (before the final activation layer), an 8× 8 feature map, as the feature layer h(·).
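Putting the two terms together, a minimal sketch of the anomaly-score evaluation with the frozen networks reads as follows (batched, flattened 64×64 images are assumed):

```python
import torch

def anomaly_score(x, encoder, generator, feature_layer, lam=-0.76):
    """A = A_R + lambda * A_D for a batch of flattened TOF images, using the
    frozen encoder E*, generator G*, and discriminator feature layer h(.)."""
    with torch.no_grad():
        x_rec = generator(encoder(x))                                        # digital-twin reconstruction
        a_r = (x - x_rec).abs().mean(dim=-1)                                 # residual loss per image
        a_d = (feature_layer(x) - feature_layer(x_rec)).abs().mean(dim=-1)   # discrimination loss per image
    return a_r + lam * a_d                                                   # lam = -0.76 as chosen in this work
```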
We also discuss the important effect of the weighting coefficient λ in the definition of the anomaly score (Eq. (<ref>)).
Without loss of generality, we use the optimal λ_op that realizes the maximum sensitivity, argmax_λ∈[-1,1]𝒮(λ).
In Fig. <ref>.(a-b), we present the sensitivity as a function of λ for the force-sensing experiment. We thus choose λ_op=-0.76 to calculate the anomaly score for input atomic images.
In addition, we test the generality of our trained model by calculating the anomaly score for the test data, which are not included in the training dataset. We compare the anomaly-score distributions of the test and training datasets in Fig. <ref>, which shows that the method generalizes well in our setup.
§ THE SURVEY ON THE DIGITAL REPLICA
In this section, we present the statistical results of the survey on the digital replica. First, we randomly sample data from the real datasets to generate a group containing 64 real TOF atomic images.
Secondly, we form a mixture of real data and fake data with a probability 50%:50%.
We randomly select 30 Ph.D. students and let them know how these groups are formed. They are asked to identify which ones of the second group are real experimental data.
We present the statistical results in Fig. <ref>, where the average accuracy is 48(9)% with one-sigma uncertainty.
§ FEATURE DISCRIMINATION BY T-DISTRIBUTED STOCHASTIC NEIGHBOR EMBEDDING (T-SNE).
The t-distributed stochastic neighbor embedding (t-SNE)<cit.> is a widespread tool in data analysis. It is a nonlinear dimensionality reduction technique by embedding high-dimensional data into a low-dimensional space (two or three dimensions).
Specifically, an N-dimensional vector x can be embedded as a point in the two-dimensional t-SNE space. Remarkably, the similarity between two objects { x^i, x^j} is modelled by the Euclidean distance between the corresponding points in the two-dimensional t-SNE grid. For a detailed review of the t-SNE technique, we refer to reference <cit.>.
In our case, we use t-SNE to visualize the similarity between force-free and force-involved feature maps.
First, we produce the feature maps of the datasets X and X̃ from the last layer of the discriminator, e.g., m( x)=h(D^*(E^*( x ))), an 8×8 map whose pixel-wise entries satisfy m[i,j]∈[0,1].
According to the anomaly score Eq. (<ref>), the DL 𝒜_𝒟 is characterized by the feature deviation, and it plays an important role in improving the digital replica in our case (see the discussion on the weighting coefficient λ).
To visualize how the feature map reflects the efficiency of the GAN, we next use t-SNE to embed the 64-dimensional feature layer into a two-dimensional space.
In Fig. <ref>, we display the feature maps of real images for force-free (blue dots) and force-involved (red dots) data in the two-dimensional t-SNE space. The distinct clusters indicate that our model captures unique features resulting from the signal interaction, highlighting the differences in feature deviations between the presence and absence of force. This demonstrates the ability of our GAN model to effectively capture the normal variability of signal-free datasets, and suggests the capacity to detect even weaker signal responses.
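A minimal sketch of the embedding step described above, using scikit-learn's t-SNE with placeholder feature maps, is:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
feats_free = rng.normal(0.0, 1.0, size=(200, 64))    # placeholder force-free feature maps (flattened 8x8)
feats_force = rng.normal(0.5, 1.0, size=(200, 64))   # placeholder force-involved feature maps

feats = np.concatenate([feats_free, feats_force], axis=0)
labels = np.array([0] * len(feats_free) + [1] * len(feats_force))

embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
# embedding[labels == 0] and embedding[labels == 1] can then be scattered in
# different colours to visualise the force-free vs force-involved clusters.
```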
§ NOISE REDUCTION BY GAUSSIAN PROCESS
The Gaussian Process.
First, we use a Gaussian-based convolution method to reduce the pixel noise of the atomic images.
A Gaussian filter, also known as a Gaussian smoothing filter or Gaussian blur, is a commonly used image processing technique for reducing image noise, where an isotropic Gaussian filter is defined as g(i,j) = 1/√(2π)σe^-i^2+j^2/2σ^2 with standard deviation σ in the pixel basis i, j =1,2,...,m.
The convolution operation involves sliding the kernel over each pixel in the image and computing a weighted average of the pixel values and their neighbors.
Mathematically, the convolution operation at each pixel can be expressed as follows:
x_G(i, j) = 1/m^2∑_k,l x(i+k, j+l) g(k, l).
Here, x(i, j) represents the pixel values of the input image, g(k, l) represents the corresponding values in the Gaussian kernel, and the summation is performed over the kernel's dimensions m× m.
The size m determines the dimensions of the kernel matrix, and the standard deviation σ controls the spread of the Gaussian distribution. A larger standard deviation results in a wider distribution, leading to a stronger smoothing effect.
Second, we determine the centroid position of absorption images by a Gaussian surface-fitting routine which is widely used in the calibration of the absorption imaging <cit.>.
The two-dimensional elliptical Gaussian function is expressed as
G(i,j) = Ae^-[a(i-i_0)^2+b(i-i_0)(j-j_0)+c(j-j_0)^2],
where the amplitude A, the peak location (i_0,j_0), and a, b, c are parameters to be determined.
The goal is to find the parameters g={A,i_0,j_0,a,b,c} that minimize the mean-square distance between the Gaussian function G(i,j) and an atomic image x∈ X, i.e., min_g |G(i,j)- x |^2, on the pixel-wise grid [i,j] with i,j = 1,2,...,64. To this end, our Gaussian processing of a probed image can be formulated as x_G = f_G(f_g( x)), where f_g(·) and f_G(·) represent the Gaussian-smoothing and Gaussian-fitting channels, respectively.
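The two channels f_g and f_G can be sketched with scipy as follows; the kernel width and the initial guess for the fit are placeholders rather than the values used in our analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

def gaussian2d(coords, A, i0, j0, a, b, c):
    i, j = coords
    return A * np.exp(-(a * (i - i0) ** 2 + b * (i - i0) * (j - j0) + c * (j - j0) ** 2))

def gaussian_process(image, sigma=1.0):
    """Smooth an atomic image (f_g) and fit the 2D elliptical Gaussian above (f_G),
    returning the fitted centroid (i0, j0)."""
    smoothed = gaussian_filter(image, sigma=sigma)          # f_g: Gaussian smoothing
    ii, jj = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]), indexing="ij")
    p0 = [smoothed.max(), image.shape[0] / 2, image.shape[1] / 2, 0.01, 0.0, 0.01]
    popt, _ = curve_fit(gaussian2d, (ii.ravel(), jj.ravel()), smoothed.ravel(), p0=p0)  # f_G: surface fit
    return popt[1], popt[2]
```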
Next, we process every raw observable in the datasets {X,X̃} by this Gaussian processing and compute the corresponding COM.
Here, for the sake of simplicity, we use the scientific computing toolkit scipy to realize the Gaussian fitting.
As a result, we measure the sensitivity 𝒮_r^COM =1.6(5)× 10^-24 N/√(Hz).
This noise-reduction process improves the sensitivity compared with the raw-data result 𝒮^COM. However, the preprocessing may erase valuable information contained in the raw data, implying a worse sensitivity compared with our method 𝒮^AS. Balancing noise reduction against information loss is another critical issue that we leave for further investigation.
Our method relies on no data preprocessing yet offers a more sensitive signal than the conventional configuration with such data analysis.
§ TRAINING SATURATION
The data burden is a core problem in machine learning applications. Although unsupervised learning requires much less data than its supervised counterpart, it is crucial to balance the experimental cost against the resulting accuracy.
This section aims to explore the optimal number of experiment counts to achieve signal detection with acceptable sensitivity. To do so, we randomly sample N data points from the original datasets and utilize this dataset to train our generative model, employing the same methodology described in the main context's Method section. Subsequently, we use the corresponding fixed model to generate the anomaly score for the original dataset consisting of 3.6×10^3 data points.
In Fig. <ref>, we present the sensitivity as a function of the training-set size N, where each data point carries a one-sigma uncertainty over ten training runs.
For comparison, we establish a sensitivity threshold of 𝒮_c= 6.7×10^-25 N/√(Hz). This threshold is obtained when the signal change equals the one-sigma uncertainty of the healthy distribution (δ q = σ_0). Sensitivities worse than this critical value are considered indicative of unreliable signal detection.
The figure shows that the sensitivity reaches the reliable regime when the training size exceeds one hundred.
It is important to note that, for comparison, the results in Fig. <ref> employ the same optimizer and identical neural network structures for the GAN training process; they differ only in learning rate and batch size, chosen to ensure reliable results. This training saturation is important for estimating the experimental cost of preparing training datasets for our method.
It implies that the data acquisition for our method requires only tens of hours.
|
http://arxiv.org/abs/2307.00994v1
|
20230703131825
|
Environmental effects on emergent strategy in micro-scale multi-agent reinforcement learning
|
[
"Samuel Tovey",
"David Zimmer",
"Christoph Lohrmann",
"Tobias Merkt",
"Simon Koppenhoefer",
"Veit-Lorenz Heuthe",
"Clemens Bechinger",
"Christian Holm"
] |
physics.bio-ph
|
[
"physics.bio-ph",
"cs.LG",
"cs.RO"
] |
Multi-Agent Reinforcement Learning (MARL) is a promising candidate for realizing efficient control of microscopic particles, of which micro-robots are a subset.
However, the microscopic particles' environment presents unique challenges, such as Brownian motion at sufficiently small length-scales.
In this work, we explore the role of temperature in the emergence and efficacy of strategies in MARL systems using particle-based Langevin molecular dynamics simulations as a realistic representation of micro-scale environments.
To this end, we perform experiments on two different multi-agent tasks in microscopic environments at different temperatures, detecting the source of a concentration gradient and rotation of a rod.
We find that at higher temperatures, the RL agents identify new strategies for achieving these tasks, highlighting the importance of understanding this regime and providing insight into optimal training strategies for bridging the generalization gap between simulation and reality.
We also introduce a novel Python package for studying microscopic agents using reinforcement learning (RL) to accompany our results.
§ INTRODUCTION
In recent years, machine learning (ML) and physics have enjoyed a strong relationship in broad fields ranging from quantum chemistry <cit.> to cosmology <cit.>.
An emerging application of ML in the physical sciences is understanding the dynamics and collective behavior of microscopic particles or colloids <cit.>.
In these low Reynolds number regimes, the physics of Brownian motion and hydrodynamics dominate, leading to exciting challenges <cit.>.
Despite these challenges, unusual collective behavior has been observed in both biological <cit.> and artificial <cit.> active matter environments.
A current research area in this field is understanding how to program these artificial microscopic colloids or agents to achieve tasks like locating positions in an environment or moving objects.
Such detailed control over microscopic agents can revolutionize many fields, with medicine being of great interest <cit.>.
While there has been a significant amount of progress utilizing classical algorithms <cit.>, recently, reinforcement learning (RL) has become a promising candidate <cit.>.
Most RL approaches thus far have centered around using Q-learning for single-agent tasks.
However, effective deployment of these agents will likely involve collective behavior and, therefore, a multi-agent setting.
Furthermore, the role of the environment in these tasks has yet to be investigated to a great extent, partially due to many groups relying on experimental implementations of these agents and, therefore, not having access to such variables.
This also limits the training time of the models, as experimentally driven RL is laborious.
In this work, we approach the problem of understanding how the Brownian motion of these agents impacts their emergent strategy and its efficacy at different temperatures.
We utilize a powerful physics engine designed to study these systems, ESPResSo <cit.>, to perform these experiments at scale.
Experiments involve implementing MARL for the problems of detecting target points in a box and rotating a rod.
Our main contributions are:
* We demonstrate emergent collective behavior in RL-driven micro-robots in realistic simulations.
* We discuss the role of Brownian motion in the agents' success in their tasks.
* We demonstrate the robustness of MARL algorithms against environmental noise.
* We highlight the differences in emergent strategies as a function of temperature and comment on this as a necessary consideration in bridging the generalization gap with experiments.
All of the infrastructure for the work performed in this investigation is packaged into an open-source Python project, SwarmRL, which is available publicly on GitHub.
§ RELATED WORK
Much work has been dedicated to both emergent strategy in microscopic agents and the application of RL at these length scales.
A lot of this work has focused on understanding relationships between artificial microswimmers and their biological counterparts or applying simplified RL strategies such as Q-learning to various problems.
Collective Behaviour in Artificial Microswimmers
Understanding the behavior of organisms on microscopic scales has been a central research focus since the early 1900s.
In recent years, with the invention of artificial microswimmers, improved imaging methods, and advanced computational systems, one has been able to probe these behaviors in much greater detail.
Of particular interest is the emergence of collective behavior in groups of microswimmers; that is, with limited local information about their environments, groups of agents can share knowledge to achieve a task.
In <cit.> and <cit.>, group formation and behavior were studied utilizing classical interaction algorithms for microswimmers under varying conditions.
These studies demonstrated the emergence of collective behavior for swimmers with varying degrees of sensory ability.
In their 2022 paper, <cit.> investigated the response of microscopic swarms to external threats.
This work emphasized that collective groups of microswimmers use information sharing to respond to events that individuals may not be aware of.
In each of these studies, while the effects of Brownian motion are implicit in the setup, the role of these fluctuations is not explored, nor is a specific task programmed.
Microscopic RL
In the direction of RL, several studies have approached the problem of organizing microswimmers to the end of achieving a goal.
These studies typically rely on RL as a policy development algorithm.
<cit.> utilize a Q-learning algorithm to learn the swimming strategies of microswimmers and identify new swimming gaits for linked artificial micro-robots.
In the direction of learning policy, <cit.> have used a genetic algorithm to reproduce biological chemo-taxis in artificial swimmers and <cit.> implement actor-critic RL on a predator-prey problem and found that agents could utilize hydrodynamic cues to avoid predators in their environment.
Finally, <cit.> utilized a Q-learning approach to study how the environments of microswimmers impact their learning process for individual swimmer tasks, including the role of Brownian motion.
However, they did not investigate the impact on collective behavior or for more complex tasks which require algorithms beyond Q-learning.
To date, the role of the stochastic forces experienced by microswimmers in their emergent strategy has yet to be investigated.
However, these forces are a crucial component of micro-robotics, and therefore, their impact on learning is identified here as the primary research gap we aim to fill.
§ PROBLEM DESCRIPTION
Controlling multi-agent systems is a challenging task in-and-of itself.
Additionally, on microscopic agents' length scales, thermal fluctuations are relevant.
To varying degrees, they hinder agents from performing a task by introducing a random component that prevents the outcome of an action from being predicted with certainty.
Conversely, this Brownian motion encourages exploration, as each action the agents take is perturbed.
We study numerically a system comprising colloids on the micrometer scale that can perform actions such as actively moving or rotating.
Using RL, we want to enable the colloids to solve tasks like finding the source of an attractant or rotating a rod.
We set out to understand the role of thermal fluctuations in the efficacy of RL on microscopic scales and the emergent strategy adopted by the agents.
In all simulations, a Langevin thermostat correctly reproduces the agents' Brownian motion and maintains the system's temperature in a statistical physics sense.
More broadly, stochastic effects can also occur at length scales where thermal motion is not relevant, for example imperfect motors or sensors on a robot.
Our work is therefore also relevant for applications beyond the micro-scale.
§ METHODS
The successful implementation of these experiments has required many unique components.
Here we outline briefly the critical aspects of the work.
§.§ Particle Simulations
We simulate the trajectories of particles using the overdamped Langevin equations of motion for position and orientation
ṙ_i = 1/γ_t[F(t) e_i(Θ_i) - ∇ V(r_i, {r_j}) ] + √(2 k_B T / γ_t)R^t_i(t),
Θ̇_i = 1/γ_rτ(t) + √(2 k_B T / γ_r) R^r_i(t).
Here, r_i is the (two-dimensional) position of particle i, Θ_i the angle describing the particle orientation, γ_(t,r) the translational (rotational) friction coefficient, F and τ an active force and torque corresponding to an action, e = (cos(Θ), sin(Θ))^T the particle orientation, V an interaction potential between all particles in the system, k_B the Boltzmann constant, T the temperature and R^(t, r)_i a noise term with zero mean and correlations according to ⟨R^(t,r)_i(t) R^(t,r)_j(t')⟩ = δ_ijδ(t-t'), where ⟨·⟩ denotes an ensemble average.
For colloids of radius a in a fluid with dynamic viscosity μ we calculate the friction coefficient according to Stokes' law as γ_t = 6 πμ a and γ_r = 8 πμ a^3.
<Ref> are solved numerically using the ESPResSo <cit.> simulation package with a time-step δ t = 0.01, the actions that determine F(t) and τ(t) are updated every time slice Δ t = 1.
In all cases, unless otherwise specified, when referring to time in this investigation, we refer to the time slice, i.e., the number of times an action is computed for each agent in the simulation.
We model particle interactions with the two-body Weeks-Chandler-Andersen (WCA) potential <cit.>, which can be seen as an almost-hard-sphere interaction:
V = 4· V_0( (σ/r_ij)^12 - (σ/r_ij)^6) + V_0.
Here, r_ij = || r_i - r_j ||_2 is the absolute distance between the particles, and σ = 2a the colloid diameter. We choose the interaction strength V_0 = k_B T.
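As a concrete illustration, the following minimal Python sketch integrates the overdamped Langevin equations above with the WCA interaction for a handful of colloids; parameter values and function names are illustrative only, and the production simulations use ESPResSo rather than this reference implementation.

```python
import numpy as np

# Illustrative 2D overdamped Langevin step for N active colloids
# (cf. the equations of motion above); units and values are assumed.
kB_T = 1.0                          # thermal energy (simulation units)
a = 1.0                             # colloid radius
mu = 1.0                            # dynamic viscosity
gamma_t = 6 * np.pi * mu * a        # translational friction (Stokes' law)
gamma_r = 8 * np.pi * mu * a**3     # rotational friction
dt = 0.01                           # integration time-step

def wca_force(r, V0=kB_T, sigma=2 * a):
    """Pairwise WCA (almost-hard-sphere) forces on each particle."""
    n = len(r)
    f = np.zeros_like(r)
    for i in range(n):
        for j in range(i + 1, n):
            d = r[i] - r[j]
            dist = np.linalg.norm(d)
            if dist < 2 ** (1 / 6) * sigma:            # WCA cut-off
                s6 = (sigma / dist) ** 6
                mag = 24 * V0 * (2 * s6 ** 2 - s6) / dist
                f[i] += mag * d / dist
                f[j] -= mag * d / dist
    return f

def langevin_step(r, theta, F_act, torque, rng):
    """One step for positions r (N, 2) and orientations theta (N,)."""
    e = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # director
    noise_t = rng.normal(size=r.shape)
    noise_r = rng.normal(size=theta.shape)
    r_new = (r + dt * (F_act[:, None] * e + wca_force(r)) / gamma_t
               + np.sqrt(2 * kB_T * dt / gamma_t) * noise_t)
    theta_new = (theta + dt * torque / gamma_r
                       + np.sqrt(2 * kB_T * dt / gamma_r) * noise_r)
    return r_new, theta_new
```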
§.§ RL Architecture
RL is a branch of machine learning in which an agent in state s_t interacts with its environment by taking an action a_t that brings it to a following state s_t+1.
The agent receives a scalar reward r_t and aims to maximize the cumulative reward over time. It does so by optimizing the policy π that governs its behavior.
MARL as a subfield of RL is closely related to game theory and focuses on multiple such learning agents.
In addition to interacting with the environment, agents also interact with each other in cooperative or competitive ways <cit.>.
At every time slice of the experiment, each agent takes an action based on its observation.
Each possible action at that time is sampled with a probability determined by the policy.
The Gumbel-max trick <cit.> is used to sample this categorical distribution efficiently.
The policy is updated using a policy gradient method in which the actor takes the role of the policy and the critic evaluates the actor's performance after each episode <cit.>.
The actor and the critic are neural networks, each consisting of two dense layers with 128 neurons.
The actor-critic architecture is shown in Figure <ref>.
The Adam optimizer with a learning rate of λ = 0.001 <cit.> is used for the network parameter update.
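The following Flax sketch illustrates the actor-critic setup described above (two dense layers of 128 neurons each) together with Gumbel-max action sampling; the ReLU activations and all names are our own assumptions for illustration and do not reproduce the SwarmRL implementation.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

N_ACTIONS = 4  # translate, rotate CCW, rotate CW, do nothing

class Actor(nn.Module):
    @nn.compact
    def __call__(self, obs):
        x = nn.relu(nn.Dense(128)(obs))
        x = nn.relu(nn.Dense(128)(x))
        return nn.Dense(N_ACTIONS)(x)      # action logits

class Critic(nn.Module):
    @nn.compact
    def __call__(self, obs):
        x = nn.relu(nn.Dense(128)(obs))
        x = nn.relu(nn.Dense(128)(x))
        return nn.Dense(1)(x)              # state-value estimate

def sample_action(key, logits):
    """Gumbel-max sampling from the categorical policy."""
    gumbel = jax.random.gumbel(key, logits.shape)
    return jnp.argmax(logits + gumbel, axis=-1)

# Initialization for a one-dimensional observable (concentration sensing).
key, k_actor, k_critic = jax.random.split(jax.random.PRNGKey(0), 3)
actor, critic = Actor(), Critic()
params_actor = actor.init(k_actor, jnp.zeros((1, 1)))
params_critic = critic.init(k_critic, jnp.zeros((1, 1)))
optimizer = optax.adam(learning_rate=1e-3)   # Adam with lambda = 0.001, as in the text
opt_state = optimizer.init(params_actor)
```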
§.§ Active Brownian Particles as Agents
In all of the experiments performed, we used microscopic agents that mimic active Brownian particles, such as the Janus particle <cit.> shown in Figure <ref>A).
These particles can mimic active biological matter upon excitation from an outside driving force.
In experimental setups, this driving force is often one or more lasers applied along a specific axis of the colloid to induce either rotation or translation.
This axis is designated the director of the colloid and emulates a forward-facing agent.
The driving mechanism is accurate enough to allow for a well-defined action space for each colloid in the system:
𝒜 = {
Translate: F = 10.0, τ = (0.0, 0.0, 0.0);
Rotate CCW: F = 0.0, τ = (0.0, 0.0, 10.0);
Rotate CW: F = 0.0, τ = (0.0, 0.0, -10.0);
Do Nothing: F = 0.0, τ = (0.0, 0.0, 0.0)
}
where F is the force magnitude in simulation units applied along the forward direction of the colloid and τ is the torque.
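A minimal mapping from discrete action indices to the force and torque inputs of the equations of motion could look as follows (names are illustrative):

```python
import numpy as np

# Discrete action space: each action maps to an active force magnitude along
# the director and a torque about the z-axis (simulation units from the text).
ACTIONS = {
    0: {"name": "translate",  "force": 10.0, "torque": np.array([0.0, 0.0,  0.0])},
    1: {"name": "rotate_ccw", "force":  0.0, "torque": np.array([0.0, 0.0, 10.0])},
    2: {"name": "rotate_cw",  "force":  0.0, "torque": np.array([0.0, 0.0, -10.0])},
    3: {"name": "do_nothing", "force":  0.0, "torque": np.array([0.0, 0.0,  0.0])},
}

def apply_action(action_id):
    """Return (F, tau) used in the Langevin equations for one time slice."""
    act = ACTIONS[action_id]
    return act["force"], act["torque"]
```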
Another key feature of the reinforcement process is the state description passed as an input to each agent.
In this investigation, two different state descriptions, referred to hereafter as observables, are utilized.
Concentration Sensing
The most simple observable used in the experiments is termed concentration sensing.
This observable aims to replicate the sensing capabilities of biological organisms in their ability to register a change in an environmental factor such as temperature or a food source.
The observable itself computes a change in the concentration of some field defined in this study as a potential decaying like 1/r.
At each action update, the models receive a value computed by:
o_i(t) = f(||𝐫̂_i(t) - 𝐫̂_s(t)||_2) - f(||𝐫̂_i(t-Δ t) - 𝐫̂_s(t-Δ t)||_2),
where o_i is the observable for the i^th colloid, f is the chosen field, 𝐫̂_i(t) is the position of the i^th colloid at time t, Δ t is the amount of time since the last action was computed, 𝐫̂_s(t) is the position of the source of the potential at time t, and ||·||_2 denotes a Euclidean norm.
In our experiments, the source remains fixed in space and will not vary in time.
The concentration sensing observable is shown graphically in Figure <ref>B).
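A compact sketch of this observable, assuming a 1/r field and NumPy arrays for the positions (names are illustrative):

```python
import numpy as np

def field(distance):
    """Attractant field decaying like 1/r, as used in this study."""
    return 1.0 / distance

def concentration_observable(r_now, r_prev, r_source):
    """Change in the sensed field value between consecutive action updates.

    r_now, r_prev : (N, 2) colloid positions at t and t - dt
    r_source      : (2,) fixed source position
    """
    d_now = np.linalg.norm(r_now - r_source, axis=1)
    d_prev = np.linalg.norm(r_prev - r_source, axis=1)
    return field(d_now) - field(d_prev)   # one scalar observable per colloid
```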
Vision Cones
The second observable employed in our studies was the vision cone shown graphically in Figure <ref>C).
This approach mimics an agent with the ability to identify particle species and a blurred sense of direction, that is, a range of angles within which it knows other particles are present.
In this study, the vision cone is broken into five components.
Each component of the vision cones computes N numbers where N is the number of unique species in the system, e.g., for rod rotation, there are active particles and rod particles resulting in N=2.
The value of the cone is computed by:
o^jk_i = ∑_n ∈𝒞_j1/||𝐫_i - 𝐫^k_n||_2,
where o^jk_i is the observable of the i^th particle for the j^th vision cone computed for all particles of species k which lie within the cone represented by 𝒞_j.
In order to provide this information to an actor and critic, we implemented an embedding scheme.
In this scheme, the vision cone values associated with the rods and colloids are split into two vectors and passed through their trainable embedding layers, producing a reduced, 2-dimensional output.
Element-wise addition of these vectors forms the final observable used to compute actions.
The embedding layers are trained along with the actor and critic during the simulations, allowing each model to learn a representation of its environment.
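The sketch below illustrates the cone computation for a single agent; the forward field of view and the cone geometry are assumptions made purely for illustration, as the exact geometry is set inside SwarmRL.

```python
import numpy as np

N_CONES = 5
FOV = np.pi          # assumed forward field of view, split into five cones

def vision_cones(r_i, theta_i, r_others, species_others, n_species=2):
    """Vision-cone observable for one agent: an (N_CONES, n_species) array
    where each entry sums 1/distance over the particles of one species that
    fall inside one cone (cf. the equation above)."""
    out = np.zeros((N_CONES, n_species))
    edges = np.linspace(-FOV / 2, FOV / 2, N_CONES + 1)
    for r_n, k in zip(r_others, species_others):
        d = r_n - r_i
        dist = np.linalg.norm(d)
        # angle of the other particle relative to the agent's director
        rel = (np.arctan2(d[1], d[0]) - theta_i + np.pi) % (2 * np.pi) - np.pi
        cone = np.searchsorted(edges, rel) - 1
        if 0 <= cone < N_CONES:
            out[cone, k] += 1.0 / dist
    return out
```

The rod and colloid columns of this array would then be passed through the trainable embedding layers described above before being fed to the actor and critic.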
§.§ SwarmRL
The RL performed in this investigation is part of a project concerned not only with training micro-swimmers in simulation but also with deploying trained models in real experiments.
Therefore, all of the infrastructure surrounding the training of the agents and their deployment in the ESPResSo simulation engine has been written into an open-source Python package, SwarmRL.
As illustrated in Figure <ref>, SwarmRL provides an interface to each component of the RL pipeline as well as different environments including both the simulation engine and actual experiment setups.
SwarmRL is built on the JAX <cit.> ecosystem and utilizes Flax <cit.> for neural networks.
SwarmRL is publicly available on GitHub at <https://github.com/SwarmRL/SwarmRL>.
§ EXPERIMENTS
This investigation performs two experiments: source detection and rod rotation.
Each task involves a different degree of complexity, and in each case the agents discover different strategies depending on their environment.
Training for each task occurs in the same way.
Each model is trained with ten agents for a total of 10,000 episodes.
The first 5000 episodes are performed using an exploration rate of 20 %; that is, 20 % of the time, colloids choose a random action other than that provided by the actor.
The second 5000 episodes are performed without this exploration policy.
This is performed for ten ensembles with different starting simulation conditions and neural network initializations.
In all computations, an average value over these ensembles is computed, resulting in the corresponding error values.
At the end of the training, all the trained models are used in production simulations of 5,000,000 time-steps.
This procedure was carried out for five different temperatures, {0 K, 150 K, 273 K, 300 K, 350 K}, to explore the changes in the emergent strategy and its efficacy.
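Schematically, the exploration schedule can be expressed as follows (a sketch only; the actual schedule is implemented in SwarmRL):

```python
import numpy as np

N_AGENTS = 10
N_EPISODES = 10_000
EXPLORATION_RATE = 0.2                  # applied during the first 5000 episodes
TEMPERATURES_K = [0, 150, 273, 300, 350]

def exploration_override(sampled_actions, episode, rng, n_actions=4):
    """During the first half of training, 20% of the agents' actions are
    replaced by a random action other than the one provided by the actor."""
    actions = np.asarray(sampled_actions).copy()
    if episode < N_EPISODES // 2:
        explore = rng.random(len(actions)) < EXPLORATION_RATE
        offsets = rng.integers(1, n_actions, size=explore.sum())
        actions[explore] = (actions[explore] + offsets) % n_actions
    return actions
```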
§.§ Source Detection
The simplest task tested was that of source detection.
In this case, the colloids have a biologically inspired sense of smell; they can sense changes in some applied concentration field.
Their reward is based on changes in this field, i.e., moving closer to the source of the concentration yields a better reward.
Such a task is reminiscent of chemo-taxis in bacteria <cit.> or, theoretically, detection of a drug delivery site in medicinal micro-robotics <cit.>.
Mathematically, the reward closely resembles the observables of these colloids:
r_i = α·Clip(1/||𝐫̂_i(t) - 𝐫̂_s(t)||_2 - 1/||𝐫̂_i(t-1) - 𝐫̂_s(t-1)||_2, 0 , None)
where α is a reward scaling value; the clip operation ensures that particles are rewarded for moving toward the source and receive no reward if they move away.
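A direct transcription of this reward into NumPy, with an illustrative value for the scaling α:

```python
import numpy as np

ALPHA = 10.0   # reward scaling (illustrative value)

def source_detection_reward(r_now, r_prev, r_source):
    """Reward agents only for moving up the 1/r attractant field."""
    d_now = np.linalg.norm(r_now - r_source, axis=1)
    d_prev = np.linalg.norm(r_prev - r_source, axis=1)
    return ALPHA * np.clip(1.0 / d_now - 1.0 / d_prev, 0.0, None)
```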
The experiment results are outlined in Figure <ref>.
Figure <ref>A) illustrates the policy breakdown for inputs to the neural network at different temperatures.
To construct these plots, we pass artificial data through the trained models and compute the probability of different actions for these inputs.
We see that negative inputs at each temperature, i.e., a movement towards lower values of the concentration field, result in an increased rotation probability and a decreased translation probability.
This is opposed to movements towards the source, which increase the translation probability.
Such a learned policy is reminiscent of run-and-tumble motion in bacteria in response to changing environments <cit.>.
It appears that at all temperatures, the Do Nothing action is never included in the learned strategy and remains a randomly selected action.
We also noticed in these experiments that once the agents learned to rotate either clockwise or counterclockwise, they did not use the other action.
Figure <ref>B), C), and D) illustrate the distance of the colloids from the source at different temperatures.
In Figure <ref>C), the colloids trained at higher temperatures converge faster onto the source.
This is expanded upon in <ref>D) where the histograms of the equilibrium region are plotted for each temperature.
These histograms show that models trained at higher temperatures converge closer to the source of the field and, in general, experience less variance around it.
This is generally true; however, it can also be seen that the 150 K model reaches closest to the source, while the models trained at higher temperatures settle slightly further away.
The mechanism behind this improved policy can be understood by looking directly at the trajectories of the models in Figures <ref>E) and F).
In the 0 K simulation shown in <ref>E), the colloids undertake an orbital movement towards the source and, in doing so, maximize their reward over time.
In the case of a 350 K environment shown in <ref>F), the colloids cannot afford to take such a policy as the random fluctuations will push them out of their orbit.
In these cases, the colloids move more directly toward the source and perform more oscillatory movement once they are close.
This approach explains the improved proximity of the higher temperature models as they orient themselves to move directly to their target.
However, the stronger the Brownian forces acting on the colloids, the harder it will be to remain close to the source and the more significant are their fluctuations around it.
In summary, including Brownian forces in the model training results in a more direct approach to the source of the field attracting the colloids.
This policy results in closer average distances to the source and less variation around it.
§.§ Rod Rotation
Increasing the complexity of our experiments, the subsequent investigation involved training the agents to rotate a rod.
In these experiments, the rod is modeled as a series of rigidly bonded small colloids so that they cannot move away from one another.
This task is interesting as the colloids must cooperate in order to perform it efficiently.
The observables we use in the rod rotation experiments are the vision cones discussed in Section <ref>, which return values for both the agents and the rod colloids.
The rewards for this task are broken into two components, detecting the rod and increasing its angular velocity.
Rod detection is encouraged by rewarding the agents if they move closer to the rod particles:
r^rod proximity_i = ∑_j 1/||𝐫̂_i(t) - 𝐫̂_j(t)||_2 - ∑_j 1/||𝐫̂_i(t-Δ t) - 𝐫̂_j(t-Δ t)||_2,
where 𝐫̂_j are all particles in the rod.
The reward for the rod rotation itself is more involved as a partitioning scheme is used in order to compute a meaningful single-agent contribution which is applied to the total reward as:
r^rod rotation_i = τ_i/τ_net·(ω_rod(t) - ω_rod(t-Δ t)),
where τ_i is the torque exerted on the rod by colloid i, τ_net is the net torque acting on the rod, and ω_rod(t) is the angular velocity of the rod at time t.
The final reward for the colloid is then computed with the following:
r_i = α· r^rod proximity_i + β· r^rod rotation_i,
where α and β are scaling factors chosen in this work to be 10 and 100, respectively, such that successful rod rotation dominates the reward.
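A sketch of the combined reward, assuming that the z-components of the torques are used for the partitioning (array names are illustrative):

```python
import numpy as np

ALPHA, BETA = 10.0, 100.0    # scaling factors from the text

def rod_rotation_reward(r_agents_now, r_agents_prev, r_rod_now, r_rod_prev,
                        torque_i_z, torque_net_z, omega_now, omega_prev):
    """Combined rod-proximity and rod-rotation reward for each agent."""
    # Proximity term: change in the summed inverse distances to the rod particles.
    inv_now = (1.0 / np.linalg.norm(
        r_agents_now[:, None, :] - r_rod_now[None, :, :], axis=-1)).sum(axis=1)
    inv_prev = (1.0 / np.linalg.norm(
        r_agents_prev[:, None, :] - r_rod_prev[None, :, :], axis=-1)).sum(axis=1)
    r_proximity = inv_now - inv_prev

    # Rotation term: change in rod angular velocity, partitioned by the
    # fraction of the net torque each agent exerts on the rod.
    r_rotation = (torque_i_z / torque_net_z) * (omega_now - omega_prev)

    return ALPHA * r_proximity + BETA * r_rotation
```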
The results of the rod rotation experiment are outlined in Figures <ref> A)-C).
During the rod rotation experiments, two observables were of interest: the angular velocity of the rod, which measures the agents' efficacy, and the agents' distribution along the rod, which provides a description of their strategy.
Figure <ref>B) shows the average absolute rod velocity as a function of temperature.
In order to account for the natural motion of the rod, simulations have been performed without active colloids and these rod velocities subtracted from the active values.
It is clear from the measurements that the agents are capable of achieving greater rod velocities at higher temperatures.
This is of interest as the 0 K simulation provides the least resistance to rotation, since no effects of Brownian motion are present.
As the temperature continues to increase, the mean rod velocity oscillates and decreases, suggesting that at some stage, the random fluctuations again become problematic for the agents' strategy.
The policy of the agents can be seen in Figures <ref> A) and C).
Figure <ref>A) plots the 2D histogram of the colloids for one of the 150 K simulations.
It is evident in this plot that the agents gather at either end of the rod in a manner that maximizes the applied torque.
A clearer picture of this strategy emerges when studying only the single-dimensional histogram along the long axis of the rod, shown in Figure <ref>C).
Here we can see that concentration along the ends of the rod is an emergent strategy for all temperatures.
This concentration is maximized at 150 K, possibly explaining the enhanced angular velocity in Figure <ref>C).
As the temperature increases, it seems that the agents move towards the center of the rod, perhaps preventing them from sliding off due to the additional random forces.
This trend is broken in the 350 K simulation where the agents once again form on the ends of the rod, although in a more asymmetric fashion, perhaps suggesting an alternative strategy in these conditions.
§ CONCLUSION AND OUTLOOK
We have performed experiments using the ESPResSo simulation engine to understand how temperature impacts the emergent strategy, and the efficacy of this strategy, in micro-scale agents governed by MARL.
In the location detection experiments resembling biological chemotaxis, we found that increasing the temperature resulted in the agents taking direct approaches toward the source of the field.
In contrast, with no Brownian motion, they entered a decaying orbit.
This was accompanied by the agents equilibrating closer to the source at higher temperatures, suggesting the strategy was more effective than orbiting.
The most exciting outcome was the policy adopted by the agents, as we saw them favor the translation action upon moving closer to the source and rotation as they moved away.
This approach strategy is heavily reminiscent of bacterial run and tumble motion.
Rod rotation also demonstrated an evolving strategy with temperature as we saw the colloids favor sitting on the ends of the rods, thereby maximizing the applied torque.
However, above 150 K the colloids began to favor moving inwards along the rod, presumably to avoid sliding off during rotation.
These results demonstrate that the environments of microscopic, intelligent agents profoundly impact their efficacy and the strategies they need to adopt to achieve different tasks.
Continued work in this area could look into more efficient training strategies to achieve the performance of the models at 150 K in the 350 K simulations.
Further, training models in simulation environments to be robust against these environmental factors could result in a smaller generalization gap for crossing into physical experiments.
Micro-robotics and MARL are, without a doubt, technologies of the future.
The key to unlocking their success lies in understanding the conditions under which they will work and using these to their advantage.
§ ACKNOWLEDGEMENTS
V.L.H and C.B acknowledge funding from the DFG Centre of Excellence 2117, Germany, "Centre for the Advanced Study of Collective Behaviour", ID: 422037984.
C.H and S.T acknowledge financial support from the German Funding Agency (Deutsche Forschungsgemeinschaft DFG) under Germany’s Excellence Strategy EXC 2075-390740016, and S. T was supported by a LGF stipend of the state of Baden-Württemberg.
C.H, and S.T acknowledge financial support from the German Funding Agency (Deutsche Forschungsgemeinschaft DFG) under the Priority Program SPP 2363.
|
http://arxiv.org/abs/2307.02410v1
|
20230705163320
|
Dynamical Effects of Magnetic Opacity in Neutron Star Accretion Columns
|
[
"Xin Sheng",
"Lizhong Zhang",
"Omer Blaes",
"Yan-Fei Jiang"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
We present relativistic, radiation magnetohydrodynamic simulations of supercritical neutron star accretion columns in Cartesian geometry, including temperature-dependent, polarization-averaged Rosseland mean opacities accounting for classical electron scattering in a magnetic field. Just as in our previous pure Thomson scattering simulations, vertical oscillations of the accretion shock and horizontally propagating entropy waves (photon bubbles) are present in all our simulations. However, at high magnetic fields ≳10^12 G, the magnetic opacities produce significant differences in the overall structure and dynamics of the column. At fixed accretion rate, increasing the magnetic field strength results in a shorter accretion column, despite the fact that the overall opacity within the column is larger. Moreover, the vertical oscillation amplitude of the column is reduced. Increasing the accretion rate at high magnetic fields restores the height of the column.
However, a new, slower instability takes place at these field strengths because they are in a regime where the opacity increases with temperature. This instability causes both the average height of the column and the oscillation amplitude to substantially increase on a time scale of ∼10 ms. We provide physical explanations for these results, and discuss their implications for the observed properties of these columns, including mixed fan-beam/pencil-beam emission patterns caused by the oscillations.
instabilities – MHD – radiation: dynamics – stars: neutron – X-rays: binaries
§ INTRODUCTION
Accretion of matter onto a magnetized compact object is an important process in a variety of astronomical contexts, from polars and intermediate polars in white dwarf binaries to accretion-powered pulsars in neutron star binaries. Inside the Alfvén radius, where magnetic stresses are comparable in magnitude to the ram pressure of the accreting material, matter is thought to be magnetically guided through the compact object's magnetosphere toward the magnetic poles <cit.>. In many systems the material lands on the stellar surface, resulting in hot spots that locally emit more or less isotropically (so-called pencil-beam emission). However, in high mass X-ray binary neutron star systems where the local accretion rate can be a significant fraction of Eddington, radiation pressure can result in an accretion shock above the stellar surface, below which a quasi-hydrostatic region forms in which matter subsonically settles down onto the stellar surface. This accretion column structure radiates much of the accretion luminosity from its sides <cit.> in what has come to be known as fan-beam emission.
Accretion columns are central to all models of high luminosity X-ray pulsars and pulsating ultra-luminous X-ray sources (see for recent reviews). These models generally assume a simplified static, 1D structure, but in fact these columns are theoretically expected to have significant time-dependent behavior with substantial lateral spatial complexity <cit.>. Multidimensional numerical simulations are an essential tool to elucidate this behavior <cit.>.
This is the fourth in a series of papers on radiation magnetohydrodynamic
simulations of neutron star accretion columns using the code Athena++ <cit.> with the radiation module developed by <cit.>. Previous papers in this series examined the nonlinear development of photon bubble instabilities in static neutron star atmospheres <cit.>, the nonlinear dynamics of short accretion columns in Cartesian geometry <cit.>, and the nonlinear dynamics of more global, large accretion columns in split monopole magnetic fields <cit.>. All the previous simulations in this series assumed classical, isotropic Thomson scattering for the Rosseland mean opacity, and did not account for any magnetic effects on this opacity. Indeed, the only simulations that we are aware of that attempted to account for such magnetic opacity effects were by <cit.>.
For the high magnetic fields that are typical of young neutron stars, the magnetic field can significantly affect the opacity. This can greatly reduce the radiation pressure force on the plasma at low temperatures (e.g. ), and even increase the radiation pressure force for temperatures comparable to the cyclotron energy <cit.>. This can dramatically affect the dynamics of the accretion column, and possibly even introduce new instabilities. In this paper, we simulate these effects for columns with different magnetic fields and accretion rates, exploring in particular their variability and resulting light curves.
This paper is organized as follows. In <ref>, we describe the numerical methods that we employ in the simulations. In <ref>, we present our simulation results, discuss the physics that drives their behavior, and discuss the properties of the emergent radiation. In <ref>, we discuss some of the numerical caveats and also mention some observationally testable predictions. We then summarize our results in <ref>.
§ NUMERICAL METHOD
§.§ Equations
We follow the numerical treatments in <cit.> to solve the fluid conservation laws together with radiative transfer incorporating magnetic opacity. The governing equations are summarized as follows:
∂_0(ρ u^0) + ∂_j(ρ u^j) = S_gr1 ,
∂_0(w u^0u^i - b^0b^i) + ∂_j(w u^iu^j + (P_g + 1/2 b_νb^ν)δ^ij - b^ib^j) = S_gr2^i - S_r2^i ,
∂_0[w u^0u^0 - (P_g + 1/2 b_νb^ν) - b^0b^0] + ∂_j(w u^0u^j - b^0b^j) = S_gr3 - S_r3 ,
∂_0I + n^j∂_j I = ℒ^-1(S̅_r) ,
where the gas density ρ and gas pressure P_ g are defined in the fluid rest frame. The four-velocity is defined as (u^0, u^i)=Γ(1, v^i), where Γ=(1-v_j v^j)^-1/2 is the Lorentz factor and the three-velocity v^i is in units of the speed of light. The quantity I is the frequency-integrated radiation intensity and is a function of position and photon propagation direction n^i. Note that Latin indices indicate the spatial components of a three-vector (i.e. here i=1,2,3 refer to x,y,z, respectively), and Greek indices refer to the time-spatial components of a four-vector. Given the three-vector magnetic field B^i, its four-vector form b^μ=(b^0, b^i) and total enthalpy w are given by:
b^0 = u_jB^j ,  b^i = 1/u^0 (B^i + b^0u^i) ,
w = ρ + γ/(γ-1) P_g + b_ν b^ν .
Here the adiabatic index γ=5/3 is assumed for the ideal gas. The gravitational source terms S_gr1, S_gr2^i, and S_gr3 are derived from general relativity in the weak field limit, and can be found in appendix B of <cit.>. In equation <ref>, ℒ^-1 is the Lorentz boost operator from the comoving frame to the lab frame. The source term of the radiative transport S̅_r is intrinsically defined in the comoving frame, while the radiation source terms S_r2^i and S_r3 that exchange momentum and energy between gas and radiation are defined in the lab frame. Both of their formulations can be found in equations (4) of <cit.>, except here we adopt the magnetic opacity κ_m for photon-electron scattering. The details of the magnetic opacity we use in our simulations can be found in Appendix A. In our previous paper <cit.>, we assumed for simplicity that the scattering opacity was isotropic and constant and given by the Thomson value κ_T. However, this can significantly overestimate the photon-electron interaction when the magnetic field is strong and the gas temperature is low. Moreover, magnetic electron scattering is anisotropic and polarization dependent. Nevertheless, a Rosseland and polarization-averaged opacity can be derived which is approximately isotropic (see Appendix A). It is this magnetic opacity κ_m that we use to replace the Thomson opacity that we adopted in our previous simulations. <ref> shows the temperature dependence of this magnetic opacity for different magnetic field strengths. The magnetic opacity significantly increases with gas temperature T until it reaches a peak value of 1.95κ_T at kT=0.385 times the electron cyclotron energy ħω_ce. It then returns to Thomson at higher temperatures.
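To give a feeling for the temperature scales involved, the short sketch below evaluates the electron cyclotron energy and the location of the opacity peak, kT = 0.385 ħω_ce, for the field strengths used in this paper; the conversion factors are standard physical constants, and the script is purely illustrative.

```python
# Electron cyclotron energy: hbar * omega_ce = hbar * e * B / (m_e c),
# roughly 11.6 keV per 10^12 G.  The Rosseland/polarization-averaged opacity
# peaks at kT = 0.385 * hbar*omega_ce (see the text), so stronger fields push
# the peak to higher temperatures and widen it in linear temperature.
KEV_PER_1E12_G = 11.6          # approximate conversion factor
KEV_TO_KELVIN = 1.16045e7      # 1 keV / k_B in Kelvin

for B12 in [0.1, 1.0, 2.0, 4.0, 6.0]:          # fields used in the simulations
    e_ce_kev = KEV_PER_1E12_G * B12
    kT_peak_kev = 0.385 * e_ce_kev
    print(f"B = {B12:4.1f}e12 G: E_ce ~ {e_ce_kev:5.1f} keV, "
          f"opacity peak near kT ~ {kT_peak_kev:5.1f} keV "
          f"(T ~ {kT_peak_kev * KEV_TO_KELVIN:.2e} K)")
```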
§.§ Simulation Setup
§.§.§ Simulation Parameters
For our simulations, we select version 2 (HR-Wide-25) in <cit.> as the prototype, and vary its magnetic field strength and accretion rate to explore the dynamical changes of the accretion column with magnetic opacity. For all of our simulations, the mesh grids are set to be 700×2048 in the horizontal and vertical directions, respectively. The domain height is 0.35 R_⋆, and the width of the accretion column region is 0.06 R_⋆, where we adopt a neutron star radius R_⋆=10^6 cm.
The global parameters that we vary in our seven simulations are listed in <ref>. For the first five simulations, we vary the magnetic fields that we use to compute the magnetic opacities, from 10^11 G (Lowacc01) to 6×10^12 G (Lowacc6). Simulations Lowacc1 (10^12 G) to Lowacc6 span the range of magnetic fields inferred from electron cyclotron line observations in X-ray pulsars <cit.>, but we include Lowacc01 to compare with our previous Thomson scattering simulations. For simulations Highacc4 and Highacc6, we increase the accretion rates for the simulations with the two strongest magnetic fields, in order to build up higher accretion columns. The parameter ϵ is the accretion rate expressed as a local Eddington ratio and ρ_acc is the density of the incoming accretion flow at the top boundary. Both of their definitions can be found in equation (6) of <cit.>.
§.§.§ Boundary and Initial Conditions
The numerical setup of the simulation domain is identical to that of <cit.>, where the actual accretion column region is at the center with two vacuum regions on both sides and a gas-supported base at the bottom as effective boundaries. The boundary conditions of the simulation domain are also the same as what we used in <cit.>, and we summarize them as follows. The bottom boundary is reflective for both gas and radiation, and the magnetic fields are set to be constant and vertical. The side boundaries are reflective for both gas and magnetic fields, but allow the radiation to escape freely (i.e. a radiation vacuum boundary condition, see section 3.2 in ). Although the boundary conditions at the sides of the simulation domain are reflective for the magnetic field, the field at the edge of the actual accretion column inside the two vacuum regions is not so constrained. The magnetic fields at the top boundary are set to be constant and vertical. A cold accretion flow is injected from the top boundary within the accretion column region, where the comoving radiation fields are set to be isotropic and in local thermal equilibrium. Outside the accretion column region, the top boundary is outflow for the gas and vacuum for the radiation so they are free to escape.
For the five simulated accretion columns at the low accretion rate (ϵ=25), we adopt as initial condition the 1D solution of the one-zone stationary model with Thomson opacity (for details, see section 2.3 of ). For the high accretion rate simulations, we start from the the corresponding low accretion rate simulation that uses the same magnetic field, select a snapshot when the accretion column is in the quasi-steady state, and restart it using the higher accretion rate. To prevent numerical failures associated with a sudden accretion rate change, we slowly and linearly increase the accretion rate from t=3000t_ sim to t=4000t_ sim, where the selected simulation time unit is t_ sim=2.8×10^-7 s.
§.§ Additional Numerical Treatments
The magnetic pressure in neutron star accretion columns is larger than the thermal pressure by orders of magnitude. As a result, the variable inversion algorithm in Athena++ in going from conservative to primitive variables (including gas pressure) can fail to
numerically resolve the small gas pressure. This introduces numerical noise in determining the gas temperature which is used to define opacities and emissivities. We therefore adopt an initial vertical magnetic field of 8×10^10 G for the actual magnetohydrodynamics (MHD) in all of the simulations presented in this paper. This is sufficiently strong to confine the column against horizontal radiation pressure forces, and to constrain the matter to move vertically. It is also low enough to avoid excessive numerical noise from the variable inversion in the sinking zone of the accretion column. Hence, the magnetic fields listed in <ref> are purely used for the computation of the magnetic opacity, and not in the MHD itself.
However, the variable inversion algorithm can still fail in the low-density, free-fall region, which introduces substantial numerical noise into the gas temperature there. Because the magnetic opacity depends on temperature, the opacity can also be noisy, and in fact sometimes achieves artificially high values when the temperature noise is substantial. This can result in a strong interaction between the gas and radiation above the shock front. In some of our early numerical experiments, this effect gradually destabilized the accretion column and eventually led to ejection of the incoming accretion flow above the shock front. In reality, the matter in the free-fall zone has low temperature and so low magnetic opacity. Therefore, in our simulations, we adopt a small, but nonzero, fixed value of the magnetic opacity (κ_ m = 0.06κ_ T) in the free-fall zone in order to eliminate this artificial noise and smoothly handle the transition between the free-fall zone and sinking zone.
§ RESULTS
As noted above, our simulations with low accretion rates started from a 1D, Thomson scattering initial condition. This always overestimates the column height because of (1) the underestimated cooling efficiency from the oversimplified top-hat column shape and (2) the altered radiation-gas interaction from magnetic opacity. (We discuss the effects of this in detail in <ref>.) Therefore, the sinking zone quickly collapses and reaches a new equilibrium state where gravity is roughly balanced by the adjusted radiation pressure support. After the accretion column has relaxed from the initial condition, the system gradually enters into a quasi-steady state with high-frequency oscillations that persist to the end of the simulation, similar to what we found in <cit.>. For the simulations with high accretion rates, the column also reaches a quasi-steady state with oscillations. However, at late times both the column height and the oscillation amplitude dramatically increase due to an instability associated with the fact that the opacity increases with temperature at these high magnetic fields (see <ref>).
§.§ Behavior at Weak Magnetic Fields and Low Accretion Rate
In <ref>, we present the density distribution over one full oscillation period of Lowacc01 as an illustration of the oscillatory behavior. As discussed in <cit.>, the oscillation originates from the instantaneous mismatch between replenishment of internal energy and sideways radiative cooling. When the accretion column is most vertically extended, the sideways radiative cooling is maximized, while the heat that is mostly generated at the shock front cannot be transported to the bottom fast enough to balance this cooling. Hence, the column structure begins to collapse due to insufficient radiation pressure support. When the accretion column is compressed to its lowest height, the sideways cooling is minimized and the low altitude shock front over-heats the column, resulting in vertical expansion.
<ref> also shows the presence of vertical finger-like structures that propagate horizontally inward toward the center of the column. These are a manifestation of the entropy waves that are associated with the photon bubble instability in the slow radiation diffusion regime <cit.>. These entropy waves are present in all of our simulations, but have little effect on the fundamental oscillatory dynamics, nor do they alter the oscillation frequency.
Compared with the analogous simulation using Thomson opacity (see HR-Wide-25 in ), all the simulations using magnetic opacity lack the long-lived, coherent pre-shock structures because of the weak gas-radiation interaction when the gas temperature is far below the cyclotron energy in the free-fall zone. Nevertheless, as is evident from <ref>, density fluctuations do occur in the free-fall zone, particularly in the weaker magnetic field low accretion rate simulations. As we discuss more extensively below in <ref>, these simulations exhibit the strongest variability, and that variability is responsible for these fluctuations due to the interaction of the upward radiation field from the column. The less variable simulations show significantly reduced fluctuations in the free-fall zone. We also found that the pre-shock structure disappeared when the magnetic field decreased with height in the split monopole geometry of <cit.>, simply because of the reduced ram pressure at higher altitudes.
Lowacc01 has such a weak magnetic field that the temperature width of the opacity peak is easily surmounted in the shock and much of the sinking zone is supported by Thomson opacity. (See the 10^11 G curve in <ref>: almost all of the temperature range toward the base of the column is in the Thomson regime). It is therefore closest in its behavior to the simulations of our previous papers <cit.> that assumed pure Thomson opacity. <ref> depicts both the density and opacity in the stronger magnetic field simulation Lowacc1. As illustrated by the 10^12 G magnetic field curve in <ref>, the stronger magnetic field produces a wider opacity peak in temperature space. This is evident in the lower panel of <ref>. As material crosses the shock, the temperature climbs over the opacity peak and results in a high opacity in the post-shock plasma. Going downward into the sinking zone, the opacity declines as we are past the opacity peak, eventually approaching the Thomson opacity in the deep interior. Lowacc1 still shows substantial vertical shock oscillations and inward propagating entropy waves that were evident in Lowacc01. As we will see in the next section, the effects of the opacity on the dynamics and structure of the column become much more important as we increase the magnetic field still further.
§.§ Effects Arising from Changing Magnetic Field
In this section, we use the five low accretion rate simulations (Versions 1 to 5, Lowacc01 - Lowacc6) to study how the dynamical behavior of the accretion column varies with changes in opacity caused by changes in magnetic field strength, at fixed accretion rate. Naively, one might expect that increasing the magnetic field would tend to decrease the overall opacity, so the radiation pressure force on the plasma would decrease, and the time-averaged height of the accretion column would therefore decrease. The time-averaged density profiles for the low accretion rate simulations are shown in <ref>, and do in fact show a decrease in column height with increasing field strength. However, for this accretion rate, <ref> shows that the opacity actually generally increases, not decreases, in the time-averaged column structure as the magnetic field increases from Lowacc01 to Lowacc4. Only in going from Lowacc4 to Lowacc6 does the opacity
decrease, and the column then almost becomes a surface hot spot.
Once again, <ref> provides the explanation for this behavior. Although the shapes of the opacity curves are all very similar in this logarithmic plot, the actual linear temperature width is very small at low magnetic field strengths and increases toward higher field. For the lowest magnetic field (Lowacc01 in the left-most panel of <ref>), the opacity across the shock jumps right through the peak into the Thomson regime. The low values of the opacity
at the surface, indicated by the cyan color in <ref>, are simply due to the time-averaging over the vertical oscillation of the shock, so that at those locations one is averaging over the near-zero opacity in the free-fall zone and the Thomson opacity in the sinking zone. As we discussed in the previous section, Lowacc1 transitions through the opacity peak to almost reach Thomson in the interior, but the time-averaged opacity structure (second panel from the left in <ref>) does not exhibit the actual peak opacity because of the averaging over lower opacities in the vertically oscillating shock. As one moves to the simulations with still higher magnetic field, the wider and wider opacity peak becomes better and better resolved in the time-average, and the opacity in the deep interior never declines back down to Thomson. In Lowacc4, the opacity is close to the peak value everywhere in the sinking zone, and finally in Lowacc6 the temperature jump across the shock and down to the base of the column is too small for the opacity to quite reach the peak. (Note the location of the intersection of the base temperature indicated by the left vertical line in <ref> with the 6×10^12 G magnetic opacity curve.) The opacity therefore increases as one moves downward toward higher temperatures in what is left of the sinking zone. Further increase of the magnetic field would evidently result in too little post-shock opacity to support a column, and we would be left with a hot spot.
We are still left with the question as to why the column height decreases with increasing magnetic field in going from Lowacc1 to Lowacc4, even while the opacity is increasing inside the column. Let us begin by comparing Lowacc1 and Lowacc2. Lowacc2 has significantly increased opacity just below the shock compared to Lowacc1, simply because it has a wider opacity peak. This appears to provide a further barrier to vertically distributing the accretion power liberated in the shock to the rest of the column. Instead, more of the accretion power is radiated outward from the shock. This results in a shorter column, and a column that does not oscillate vertically as much as in the weaker magnetic field case. As we discussed above, these vertical oscillations are also a direct way of redistributing the accretion power vertically, but this also is now failing in the shorter column. As we continue to increase the magnetic field from Lowacc2 to Lowacc4 the post-shock opacity is even larger, and the column height again decreases, becoming almost a hot spot configuration. A concomitant feature of our simulations is that the vertical displacement amplitude of the oscillation declines with increasing magnetic field strength at fixed accretion rate. As we discuss in more detail in <ref> below, this, in turn, results in a smaller luminosity variability amplitude.
The postshock opacity declines in moving from Lowacc4 to Lowacc6, because the opacity peak is now so wide that the peak is not reached in Lowacc6. And yet the column height still decreases. This is due to a second important contribution to the decrease in column height which is evident from <ref>. Despite the fact that simulations Lowacc01-Lowacc6 have the same accretion rate, they do not have the same emitted luminosity. In fact, the luminosities of the simulated columns show a decreasing trend with stronger magnetic field, except in going from Lowacc1 to Lowacc2, where the luminosity slightly increases. The reason for this is that as the column height declines (driven by the postshock opacity variation), the sideways emitting area declines and, at the same time, the shock-liberated accretion power is brought closer to the stellar surface. This means that more of the accretion power is able to advect through the sinking zone and into the relatively cold neutron star base layer, and this is also a contributing factor to the decrease of the column height. It is this effect that dominates the decrease in going from Lowacc4 to Lowacc6. In going from Lowacc1 to Lowacc2, the post-shock opacity reaches its maximum possible value, and this is what causes Lowacc2 to have a slightly higher fraction of radiated accretion power.
§.§ Effects Arising from Changing Accretion Rate
Our highest field strength simulations at low accretion rate (Lowacc4 and Lowacc6) resulted in very short accretion columns. One would expect that increasing the accretion rate in these two magnetic field regimes would produce more luminosity and radiation pressure support, resulting in taller columns. We did this in simulations Highacc4 and Highacc6, and we present the resulting time-averaged density and opacity in
<ref> and <ref>, respectively. Highacc4 has the same magnetic field (4×10^12 G) as Lowacc4, but an accretion rate that is 15 times larger. Highacc6 has the same magnetic field (6×10^12 G) as Lowacc6, but an accretion rate that is 20 times larger. As expected, the increased accretion rates in these two simulations result in taller structures, and in fact taller than the weak magnetic field, low accretion rate simulation Lowacc01. As indicated in <ref>, no more than ten percent of the accretion power is transferred to the neutron star base, compared to ≃40 percent in the low accretion rate simulations Lowacc4 and Lowacc6. A taller column with its much larger surface area (and higher postshock opacity in these two cases) is better able to release the accretion power in emergent radiation.
<ref> shows the time-dependent behavior of both the density and the opacity over a time interval of 1.4×10^-4 s in simulation Highacc6. As we discuss in more detail in the next subsection, this corresponds to one oscillation period in the peak frequency of the power spectrum of the light curve of this simulation. Comparing <ref> to <ref> and <ref>, it is apparent that the structure of the column is varying much more dramatically in the latter, weaker magnetic field simulation. This is true even though these simulations produce columns with comparable time-averaged heights. We discuss why this is in the next subsection.
§.§ Variability
<ref> depicts the luminosity light curves of all the simulations. We compute these light curves by simply summing the horizontal and vertical lab-frame fluxes times cell face areas for cells near the photosphere. Because the velocity of the flow is restricted by the magnetic field to be almost exactly vertical, there is almost no difference between the lab frame and fluid frame horizontal radiation fluxes. However, for simulations Highacc4 and Highacc6, which ran for a much longer time than the Lowacc simulations, some horizontal motion began to occur as the effective boundary at the base overheats and we start to lose magnetic confinement. This produces artificial high frequency, Alfvénic oscillations in the lab frame flux and we have removed these from the light curves in <ref> by using the fluid frame horizontal flux.
All the light curves in <ref> show significant high frequency variability, including quasi-periodic oscillations (QPOs) with varying degrees of coherence. Power spectra of these light curves are shown in <ref>, and the relative amplitude of the most significant QPOs are listed in <ref>. In our previous Thomson scattering simulations <cit.>, we showed that the origin of these oscillations is due to a breakdown in thermal equilibrium caused by the fact that advection of heat in the settling flow and PdV work are generally unable to balance the sideways radiative cooling when the column is at maximal vertical extent. The column therefore overcools and the shock height falls, resulting then in overheating which causes the shock to rise again. This can vary with horizontal position within the column as separate vertical fingers oscillate up and down. That this is happening here in these magnetic opacity simulations is shown in <ref>, which compares the power spectrum of shock height variations at different horizontal locations in Highacc4 with the luminosity power spectrum. Clearly the three most significant QPOs in the latter match the shock height oscillations at different horizontal locations. Note that more light curve power is in the middle frequency QPO, while the most shock height power is in the lowest frequency QPO. These two QPOs are in a 2:1 frequency ratio, and the relationship between shock height and emitted light is not likely to have the same proportionality at different harmonics in these non-sinusoidal oscillations.
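Schematically, the light-curve and power-spectrum analysis described above corresponds to the following sketch; the array names are placeholders and do not reflect the Athena++ output format.

```python
import numpy as np

def luminosity(flux_x, flux_z, area_x, area_z):
    """Total luminosity at one snapshot: sum of horizontal and vertical
    radiative fluxes times the cell face areas adjacent to the photosphere."""
    return np.sum(flux_x * area_x) + np.sum(flux_z * area_z)

def power_spectrum(light_curve, dt):
    """One-sided power spectrum of a uniformly sampled light curve,
    used here to identify quasi-periodic oscillations of the shock."""
    lc = light_curve - light_curve.mean()
    power = np.abs(np.fft.rfft(lc)) ** 2
    freqs = np.fft.rfftfreq(len(lc), d=dt)
    return freqs, power
```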
Consistent with their physical origin, the frequencies of these oscillations are related (inversely) with the local cooling time at that horizontal section of the column. For example, for taller accretion columns, the shock front generally needs to oscillate with a sufficiently large vertical amplitude to replenish the heat to support the bottom region, and this also leads to a longer oscillation time and lower oscillation frequency. This is exactly what we see in <ref> and <ref> as the stronger magnetic field in simulations Lowacc01 to Lowacc6 results in shorter accretion columns. In particular, when the magnetic field is sufficiently strong (e.g. Lowacc4 and Lowacc6), the accretion column almost collapses into a hot spot and then oscillates fast with a small amplitude. Since the sideways cooling area is small, the variations in the light curve are very small. However, when we restore taller column heights by increasing the accretion rates in these high magnetic field cases (Highacc4 and Highacc6), oscillations with lower frequencies occur.
However, the oscillation amplitude in the high accretion rate simulations is substantially less than that in the low accretion rate simulations of comparable height, particularly Lowacc01. This is true both in luminosity, and in the overall shock height variation. In fact, the main body of the sinking zone away from the oscillating shock has an almost static structure apart from the presence of horizontally-propagating entropy waves. The high accretion rate simulations have higher horizontal optical depth (by one to two orders of magnitude throughout most of the column in the case of Highacc6) than in Lowacc01, both because the density is higher and because the opacity is larger. Hence the cooling time is longer and the column is better able to establish a balance between heating from accretion power and radiative cooling. It is only in the upper part of the column where thermal equilibrium is unable to be established because of the more rapid diffusive cooling, and this is the region that oscillates. While the increased opacity in the lower, high temperature regions enables these regions to achieve thermal balance, it turns out that this temperature dependence also leads to a new unstable behavior at late times, and we discuss this further below in <ref>.
§.§ Angular Distribution of Emergent Radiation
When the neutron star accretion flow forms a hot spot at a relatively low accretion rate or in a strong surface magnetic field, the accreting material is halted at the stellar surface and releases mechanical energy into radiation. Since the incoming flow is cold and has low opacity, it is transparent and radiation can directly leave the system upwards (i.e. pencil beam). However, when the accretion rate is sufficiently high, the accretion flow is shocked above the stellar surface and forms a radiation pressure supported, optically thick region below which radiation emerges from the sides (i.e. fan beam).
The fact that classical models of accretion columns produce fan beam radiation patterns is simply due to the fact that most of the emission area is on the sides, even though the accretion shock itself is at the top of the column. In our 2D, more mound-shaped columns, the sides still have more emitting area, and the shock itself covers this mound-shape, so that significant direct dissipation of the accretion power is also happening along the sides of the column.
The angular distribution of the emitted radiation is therefore of course determined by both the geometry and surface brightness distribution of the photosphere of the column. In <ref>, we compare the instantaneous fraction of emergent radiation that is in a fan-beam across all the simulations, computed from the integral of horizontal flux leaving the photosphere divided by total luminosity. We also list the time-averaged fan-beam ratio in <ref>, computed in two different ways: the ratio of time-averaged horizontal luminosity to time-averaged total luminosity, and the time-average of the ratio. Despite the strong variability exhibited in <ref>, particularly in Lowacc01 and Lowacc1, these two methods of averaging produce very consistent results.
For simulations Lowacc4 and Lowacc6 with short, flat columns (almost hot-spots), the fan-beam fraction is significantly below 0.5 (e.g. Lowacc4 and Lowacc6) and the emergent radiation is therefore more like the classical pencil-beam of 1D models. In all the other simulations, the system develops a columnar structure, where most radiation escapes sideways in a fan-beam pattern (i.e. fan-beam fraction above 0.5). However, the low-accretion columns in relatively low magnetic fields (Lowacc01 and Lowacc1) exhibit large variations of radiation beaming patterns, which result from the large shock oscillation amplitude. In particular, when the accretion column is mostly compressed (e.g. middle panel in <ref>), the sideways cooling is minimized and the system is over-heated with quite a large fraction of radiation leaving from the top. For similar heights of columnar structure, the variation of the fan-beam fraction in the case of high accretion rates and strong magnetic fields (Highacc4 and Highacc6) is in general smaller because the shock front oscillates with much smaller amplitude.
§.§ Opacity-Driven Instability in the High Accretion Rate Simulations
<ref> shows the temporal behavior of the shock height at the middle of the column for simulation Highacc4, extending well beyond the time range shown in <ref>. Both the shock oscillation amplitude and the average height of the shock increase dramatically beyond t=40×10^-4 s. Simulation Highacc6 also shows evidence of this behavior, though we were unable to track it for as long as Highacc4. We never observed such behavior in any of our pure Thomson scattering simulations <cit.>. This suggests the presence of an additional unstable mechanism in the column dynamics that is directly related to the temperature-dependence of the magnetic scattering opacity.
Under conditions of pure Thomson scattering and a vertical magnetic field, the maximum growth rate of the photon bubble instability occurs for near horizontal propagation directions in the slow diffusion regime <cit.> and propagation directions at 45 degrees in the rapid diffusion regime <cit.>. As we show in Appendix B, a magnetic opacity that increases with temperature modifies the photon bubble dispersion relation such that unstable growth can occur for vertical propagation in the slow diffusion regime. The physics of this unstable growth differs from the pure photon bubble instability, and while it is slower than the photon bubble instability, we suggest that it is the cause of the growth that takes place at late times in <ref>.
Because of the broad temperature width of the magnetic opacity peak in both Highacc4 and Highacc6, the postshock material does not reach the opacity peak and the opacity therefore increases further with temperature and depth in the column (see <ref> and <ref>). The time-averaged opacity structure in simulation Lowacc1 shown in <ref> also appears to show a region of opacity increasing with temperature, but this is an artifact of the time-averaging. The instantaneous opacity structure shown in <ref> shows no such behavior. Hence only Highacc4 and Highacc6 have extended regions in which the opacity is below the peak and therefore increases with temperature, and it is only these simulations that exhibit this unstable behavior.
We have solved the full dispersion relation <ref> for conditions along the x=0 midline for a snapshot of Highacc4 at t=25.46×10^-4 s, shortly before the unstable behavior in <ref> becomes evident. The results are shown in <ref> as a function of height in the column, for two different wavelengths and for vertical wave vectors (angle between wave vector and magnetic field θ=0). We also solved the approximate dispersion relation <ref> for this instability, which is wavelength-independent, and the result is very similar. The unstable region in the center of the column covers exactly the range of heights where the opacity increases with depth and temperature. Growth rates near the top of the column are ∼100 s^-1, and this is consistent with the growth time scale ∼10^-2 s in <ref>.
All of this is based on short wavelength WKB theory and so is difficult to apply to our actual column structure. Moreover, the central shock of the accretion column is already highly dynamic due to the basic thermal oscillation driven by the mismatch between heating and cooling, so linear perturbation theory on a static structure cannot be applied. Nevertheless, the predicted growth rates are comparable to what we see in these simulations, and it is suggestive that only Highacc4 and Highacc6 exhibit this behavior, and only Highacc4 and Highacc6 have vertically extended regions at lower temperatures than where the peak opacity occurs. It is important to note that our opacities depend only on temperature, and not density. More accurate opacities <cit.> will have some density dependence, and these can also excite instability, even in the rapid diffusion regime (Appendix B).
§ DISCUSSION
§.§ Numerical Caveats
We remind the reader that we treat the neutron star as a classical gas pressure dominated region and only allow heat transport by advection and radiation transport <cit.>. This effective boundary condition is designed to minimize the boundary effects on the overall column dynamics. However, the fact that advection of heat into the neutron star is a contributing factor to the overall height of the shorter column simulations Lowacc4 and Lowacc6 suggests that this boundary condition might affect the overall scaling between accretion rate and column height. Our treatment of the neutron star surface neglects degeneracy pressure, which affects the heat capacity, and also neglects thermal conduction by electrons. It would be worthwhile in future to incorporate a more accurate treatment of the interface between the accretion column and the neutron star when the columns are approaching the hot spot regime, in order to more accurately determine how the column height varies with accretion rate. However, we suspect that the overall dynamics of the column will not be affected significantly.
At very late times, both Highacc4 and Highacc6 exhibit sudden flares of radiation from one side of the base of the column, and this rapidly causes the simulation to crash. This is an unphysical behavior that arises from the effective lower boundary condition, where the neutron star surface has heated sufficiently that radiation pressure starts to bend the magnetic field. As we mentioned in <ref>, the field we use in the MHD is lower than the field used to determine the opacities in order to avoid errors in the conservative to primitive variable inversion. Further work is needed to implement a variable inversion that would allow us to run at higher magnetic fields, in addition to improving the physics of the bottom boundary condition. This would better enable us to determine the long-term outcome of the opacity driven instability that manifested in these simulations at late times.
§.§ Opacity Due to Pair Production
We have shown in <ref> that instabilities can be present in accretion columns where the magnetic opacity is below the peak and therefore increases with temperature. After we completed our simulations, new
opacity calculations by <cit.> were published. These demonstrate that there is an even sharper increase of opacity with temperature when the medium is hot enough that pair production becomes significant (<ref>). If this regime can exist within the column, we suspect that it will be a site of strong instability. However, it may be that this opacity feature acts as a wall that cannot be reached in a real accretion column, because it may limit the temperature at the base of the column. Once the accretion rate becomes high enough and the bottom of the accretion column starts to enter into the pair production regime, the strong opacity boost may provide additional radiation support and cause the column to expand. This expansion can then cool the accretion column and bring the base temperature below the pair production value via the extra sideways emission. However, similar to what we have found in the column oscillation, this expansion might then overcool the bottom of the column, contracting the column structure and heating the base back towards pair production. This again suggests a strong destabilizing mechanism at high accretion rates and strong magnetic fields, in which a steady-state column simply cannot exist. It would be interesting to simulate this in future, and we are currently working on an algorithm for angle and polarization-dependent opacities for use in Athena++. It would also be worthwhile to search for observational evidence of enhanced variability in this regime.
§.§ Observational Significance
One dimensional model fits to the observed pulse profiles of X-ray pulsars often require a mixture of fan and pencil-beam emission geometries, particularly at intermediate luminosities (e.g. ). As shown in <ref>, our high accretion rate simulations Highacc4 and Highacc6 both show almost pure (>90 percent) fan-beam emission. However, in our lower accretion rate simulations, only Lowacc2 exhibits an almost pure fan-beam emission pattern. Lower magnetic field strengths instead produce a strongly oscillating column with an emission pattern that can vary between a roughly equal mix of fan- and pencil-beam emission and a full fan beam. The time-average of these oscillations still results in >80 percent fan-beam emission, but this may be a contributing factor to the need for some pencil-beam emission at intermediate luminosities. At the highest magnetic field strengths (Lowacc4 and Lowacc6), the beam patterns are fairly steady in time, with roughly 40 percent fan-beam emission in the case of Lowacc4, due largely to the fact that the vertical and horizontal projections of the photosphere are comparable in size. Lowacc6 is almost a hot spot, and emits only 14 percent of its luminosity in a fan beam.
It would therefore be interesting to explore observationally how the emission geometry depends not only on luminosity but also on magnetic field strength inferred from cyclotron lines. That geometry can be strongly dependent on observed photon energy <cit.>, so post-processing of simulations such as ours would also be a useful way of confronting the observational constraints.
Accretion columns are observed to exhibit an inverse correlation between the luminosity and the magnetic field strength as measured by the energy of cyclotron lines <cit.>. One possible explanation for this is that higher luminosities correspond to taller columns which then sample weaker magnetic fields in a diverging field geometry (but see for an alternative explanation). In simulations Lowacc4 and Highacc4, we increase the accretion rate of the column in the same magnetic field, and the column is taller. Although we use a uniform, vertical magnetic field in the simulations, we can crudely estimate the corresponding decrease in field strength in a more realistic dipole geometry. The luminosity increases by a factor of ∼ 23.59, while the magnetic field strength at half the height of the column would decrease by a factor of ∼ 0.82. Similarly, when comparing simulations Lowacc6 and Highacc6, the luminosity increases by a factor of ∼ 35.07 and the magnetic field would decrease by a factor of ∼ 0.81. These numbers are actually very close to the observed behavior of V 0332+53 <cit.>, where the luminosity increases by a factor of ∼ 20 and the observed magnetic field strength decreases by a factor of ∼ 0.8. However, this source is inferred to have a surface field strength that is slightly weaker (2.6×10^12 G) than these two pairs of simulations. Moreover, as we have a uniform magnetic field in our simulations, the magnetic opacities may differ at higher altitudes. However, this difference is unlikely to significantly impact our results since the post-shock gas temperature should be sufficiently high so that the opacity is near the Thomson regime. In addition, as our accretion columns are relatively short (∼ 0.1 neutron star radius), this crude estimation might still be reasonable. Nevertheless, future simulations must incorporate the variation of magnetic field with height for a more accurate evaluation of this inverse correlation.
The QPOs that arise from the vertical shock oscillations are at frequencies in excess of ≃5 kHz for the simulations we have presented in this paper, and this may prove challenging to observe directly with existing X-ray facilities. However, the opacity-driven instability at high magnetic field strengths grows on much longer time scales of ≃0.01 s. It is unclear how this will saturate and manifest in the lightcurve, as we were unable to run our simulations for long enough before the effective bottom boundary condition failed. Even so, it suggests the possibility of longer time-scale variability that may be more easily observable for high magnetic field X-ray pulsars, whose fields are observed to extend up to 6.6×10^12 G <cit.>. Further investigation of this instability with future simulations is therefore warranted.
§ CONCLUSIONS
We have extended our Cartesian simulations of <cit.> to incorporate polarization-averaged, temperature-dependent magnetic scattering opacities. These can dramatically affect both the dynamics and the time-averaged structure of the accretion column. For weak magnetic fields (≃10^11 G, simulation Lowacc01), the opacities inside the sinking zone are close to Thomson, and the column dynamics is very similar to what we found in <cit.>, the main difference being that coherent pre-shocks in the free-fall zone are largely absent because the opacity in that cold infalling material is much less than Thomson. For higher magnetic field strengths, magnetic opacities produce much more significant differences, a result that is perhaps not surprising given that neutron star columns are supported against gravity by radiation pressure.
Increasing the magnetic field strength increases the temperature at which the opacity peaks, and also increases the width of that peak in temperature space, both effects scaling directly with the magnetic field strength. At fixed accretion rate, increasing the magnetic field strength generally increases the post-shock opacity as an approximately fixed temperature jump across the shock is less and less able to climb over the opacity peak. Despite the fact that the opacity in the interior of the column is on average larger, the time-averaged column height is reduced. This is partly because a larger fraction of the immediate post-shock accretion power is radiated away rather than advecting into the higher opacity post-shock regions, and partly because more accretion power flows into the neutron star at the base of the column.
Again at fixed accretion rate, the taller columns at weaker magnetic field strengths 10^11-10^12 G exhibit strong vertical oscillations.
While entropy waves (slow diffusion photon bubbles) are clearly present as horizontally-inward propagating waves, these oscillations are actually a result of global thermal imbalance, with over-cooling at maximum vertical extension and over-heating at minimum vertical extension. This is the same mechanism that produced oscillations in our previous Thomson scattering simulations, both in Cartesian geometry <cit.> and split monopole geometry <cit.>. The amplitude of these oscillations is large enough to cause the angular distribution of emitted radiation to oscillate from 50-80 percent fan beam at minimum vertical extent to 100 percent fan beam. This might be a contributing factor to why 1D models sometimes require a mixture of pencil-beam and fan-beam emission to explain observed light curves (e.g. ). The amplitude of these oscillations is reduced and the frequency is increased as the field strength increases at fixed accretion rate. The angular distribution of emitted radiation is almost entirely fan beam until the field strength gets high enough that the column height is short enough to transition into almost a pure (∼85 percent) pencil beam emitting hot spot (this happens at 6×10^12 G at the low accretion rates considered here).
Increasing the accretion rate at these higher magnetic field strengths restores the height of the column and the nearly 100 percent fan beam emission, but the oscillation amplitude remains quite low. The mere fact that a higher accretion rate results in taller columns may contribute to explaining the observed inverse correlation of cyclotron line energy with accretion rate during supercritical phases of accretion <cit.>. But our simulations would also predict that the critical accretion rate separating hot spots (pencil beam emission, positive correlation between cyclotron energy and accretion rate) and columns (fan beam emission, negative correlation between cyclotron energy and accretion rate) should itself be a function of magnetic field strength across different sources.
We have identified a new instability in the column that exists when the field strength is high enough that the opacity within the column increases inward with increasing temperature. In the two high accretion rate simulations here, this instability grows on a time scale of ∼0.01 s. While we were not able to fully investigate the nonlinear outcome of this instability due to numerical issues, it may contribute to enhanced variability on this time scale for high magnetic field neutron stars.
§ ACKNOWLEDGEMENTS
We thank Ilaria Caiazzo, Jeremy Heyl, Matthew Middleton, Ekaterina Sokolova-Lapa, and Joern Wilms for useful conversations. This work was supported in part by NASA Astrophysics Theory Program grant 80NSSC20K0525. Computational facilities used in this research were purchased with funds from the National Science Foundation (CNS-1725797) and administered by the Center for Scientific Computing (CSC). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR 1720256) at UC Santa Barbara. The Center for Computational Astrophysics at the Flatiron Institute is supported by the Simons Foundation.
§ DATA AVAILABILITY
All the simulation data reported here is available upon request to the authors.
§ IMPLEMENTATION OF ANGLE, POLARIZATION, AND FREQUENCY-AVERAGED MAGNETIC SCATTERING OPACITIES
The scattering opacities in a neutron star accretion column depend strongly on both photon frequency, propagation angle with respect to the magnetic field, and polarization. A multifrequency group method for isotropic absorption and emission has now been implemented in Athena++ <cit.>, and generalizing this to include angle-dependence and polarization would be worthwhile. In the meantime, the simulations in this paper use the implementation of frequency-integrated radiation transfer in Athena++ <cit.>. Here we describe the angle, polarization, and frequency-averaged scattering opacities that we use.
We neglect quantum effects and treat photon-electron scattering classically, assuming a cold electron-ion plasma with infinite ion inertia and negligible plasma frequencies so that dispersive effects are negligible. The full Mueller matrix for electron scattering in a uniform magnetic field under these conditions has been derived by <cit.>, and maps the Stokes parameters of the incoming to the outgoing radiation field. We simplify the problem here by azimuthally averaging around the direction of the magnetic field <cit.> and neglecting circular polarization. Switching from Stokes parameters I_ν and Q_ν to
O and X mode intensities defined by I_Oν=(I_ν+Q_ν)/2 and I_Xν=(I_ν-Q_ν)/2, respectively, we can then derive the following scattering coefficients for X-modes,
χ_Xν=n_ eσ_ Tω^2(ω^2+ω_ ce^2)/(ω^2-ω_ ce^2)^2≡ fn_ eσ_ T,
O-mode radiation propagating perpendicular to the magnetic field,
χ_Oν^⊥=n_ eσ_ T(4+f)/5,
and O-mode radiation propagating parallel to the magnetic field,
χ_Oν^∥=n_ eσ_ T(2+3f)/5.
Here ω=2πν is the angular photon frequency, and ω_ ce=eB/(m_ ec) is the electron cyclotron angular frequency.
These expressions are identical to those in equation (45) of <cit.>, except that they used an approximation where f is unity for ω>ω_ ce and ω^2/ω_ ce^2 for ω<ω_ ce. The exact (albeit classical) expression that we use here retains the enhancement of scattering near the cyclotron resonance.
Provided mode exchange is relatively efficient, the angle-averaged mean intensities in both polarization modes will be approximately equal <cit.>. The polarization-averaged opacities for diffusion are then
χ_ν^⊥=2χ_Oν^⊥χ_Xν/(χ_Oν^⊥+χ_Xν)=[f(4+f)/(2+3f)] n_ eσ_ T
and
χ_ν^∥=2χ_Oν^∥χ_Xν/(χ_Oν^∥+χ_Xν)=[f(2+3f)/(1+4f)] n_ eσ_ T,
in agreement with equation (50) of <cit.>.
We neglect finite photon chemical potential effects, and compute blackbody Rosseland means:
χ_ R⊥ = n_ eσ_ T [∫_0^∞ dx x^4e^x/(e^x-1)^2] / [∫_0^∞ dx x^4e^x(2+3f)/(f(4+f)(e^x-1)^2)] = n_ eσ_ T × { 1 for x_ ce→0 ; 8π^2/(5x_ ce^2) for x_ ce→∞ },
and
χ_ R∥ = n_ eσ_ T [∫_0^∞ dx x^4e^x/(e^x-1)^2] / [∫_0^∞ dx x^4e^x(1+4f)/(f(2+3f)(e^x-1)^2)] = n_ eσ_ T × { 1 for x_ ce→0 ; 8π^2/(5x_ ce^2) for x_ ce→∞ },
where x≡ hν/(kT) and x_ ce=ħω_ ce/(kT).
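As a cross-check, these Rosseland means can be evaluated numerically. The following Python sketch (our own illustration, assuming NumPy and SciPy; it is not the opacity routine implemented in Athena++, and the integration cutoff is our choice) computes χ_R⊥/(n_ eσ_ T) and χ_R∥/(n_ eσ_ T) as functions of x_ ce and compares them with the x_ ce→∞ limit 8π^2/(5x_ ce^2):

import numpy as np
from scipy.integrate import quad

def planck_weight(x):
    # x^4 e^x / (e^x - 1)^2, written with exp(-x) to avoid overflow at large x
    return x**4 * np.exp(-x) / (1.0 - np.exp(-x))**2

def g_of_x(x, x_ce):
    # g = 1/f, with f = x^2 (x^2 + x_ce^2) / (x^2 - x_ce^2)^2 and x = h nu / kT
    return (x**2 - x_ce**2)**2 / (x**2 * (x**2 + x_ce**2))

def kappa_ratio(x_ce, mode="perp", xmax=60.0):
    """Return chi_R / (n_e sigma_T) for the perpendicular or parallel mean."""
    def weight(x):
        g = g_of_x(x, x_ce)
        if mode == "perp":   # (2+3f)/(f(4+f)) rewritten in terms of g = 1/f
            return g * (2.0 * g + 3.0) / (4.0 * g + 1.0)
        return g * (g + 4.0) / (2.0 * g + 3.0)   # (1+4f)/(f(2+3f))
    pts = [x_ce] if 0.0 < x_ce < xmax else None
    num, _ = quad(planck_weight, 1e-8, xmax, points=pts, limit=200)
    den, _ = quad(lambda x: planck_weight(x) * weight(x), 1e-8, xmax,
                  points=pts, limit=200)
    return num / den

for x_ce in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"x_ce={x_ce:7.2f}  perp={kappa_ratio(x_ce, 'perp'):9.4f}  "
          f"par={kappa_ratio(x_ce, 'par'):9.4f}  "
          f"8pi^2/(5 x_ce^2)={8 * np.pi**2 / (5 * x_ce**2):10.4f}")

The two means differ appreciably only near x_ ce≃1, consistent with the behavior described below.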
To see how to implement these opacities, consider for simplicity a static medium. The Rosseland mean opacities can then be incorporated into a frequency-integrated, polarization-averaged transfer equation as follows (cf. equation (6) of ):
1/c ∂ I/∂ t + n̂·∇ I = χ_ Pa(acT^4/4π-J) + χ_ Ra(J-I) + [1/2(3χ_ R⊥-χ_ R∥) + 5/2(χ_ R∥-χ_ R⊥)cos^2θ](J-I).
Here χ_ Pa and χ_ Ra are the Planck and Rosseland mean absorption coefficients for true absorption processes. This equation automatically gives the correct zeroth and first moment equations for radiation energy density and flux, provided we assume nearly isotropic radiation closures on the second and third angular moments: K_zz=J/3 and Q_izz=(H_i+2δ_izH_z)/5, where J and H_i are the zeroth and first angular moments, respectively. The resulting moment equations are then
1/c∂ J/∂ t+∇·H=χ_Pa(acT^4/4π-J),
1/c∂ H_i/∂ t+1/3∇_iJ=-(χ_Ra+χ_R⊥)H_i-(χ_R∥-χ_R⊥)δ_izH_z,
exactly as required. While we have worked in a static medium here, this is sufficient for the numerical radiation MHD scheme used in Athena++, which computes the source terms of the transfer equation in the local fluid rest frame and then Lorentz transforms them into the lab frame (see for details).
<ref> shows the behavior of the perpendicular and parallel Rosseland opacities (κ_s=χ_R/ρ) from equations <ref> and <ref> as a function of x_ ce, as well as the combination in the second line of equation <ref> for different angles of propagation. All these curves are the same except near x_ ce=1, and we therefore simplify our transfer equation still further by neglecting the angle dependence in equation <ref>. We do this by simply replacing cos^2θ with unity, as that produces the largest opacity near x_ ce=1, in order to partly account for cyclotron resonance which we have neglected in this paper. <ref> compares our prescription with the Rosseland mean opacities computed by <cit.>, and the agreement is reasonably good below the temperature 3×10^8 K where pair production becomes significant. (We do not reach such temperatures in any of the simulations presented in this paper.)
To summarize, the frequency-integrated, polarization and angle-averaged magnetic scattering absorption coefficient that we actually use in the fluid rest frame is 2χ_ R∥-χ_ R⊥, where χ_ R⊥ and χ_ R∥ are given by equations (<ref>) and (<ref>), respectively.
§ PHOTON BUBBLE DISPERSION RELATION WITH VARIABLE OPACITY
<cit.> derived the dispersion relation for photon bubbles, including the effects of density and temperature-dependent opacities, in the short wavelength, rapid diffusion limit. Here we allow for slow diffusion, generalizing the infinitely strong, vertical magnetic field analysis of <cit.> to incorporate variable Rosseland mean opacities κ(ρ,T). We define
Θ_ρ≡∂lnκ/∂lnρ and Θ_ T≡∂lnκ/∂ln T.
Then the only perturbation equation that changes from the linear analysis of a static, radiation pressure supported medium in Appendix A1 of <cit.> is their equation (A13) for the vertical radiative flux. This now becomes
δ F_z=-(c/κ)[(1/ρ_0)∂δ P/∂ z+(g/ρ_0)(1+Θ_ρ)δρ+(g/4P_0)Θ_Tδ P].
Carrying through the analysis of <cit.>, assuming a spacetime dependence for the perturbations ∝exp[i(k_x x+∫ k_zdz-ω t)], we arrive at a cubic dispersion relation for the frequencies similar to equation (A18) of that paper:
0 = ω^3 + ω^2[ik^2(c/(3κρ_0)+V) + cgΘ_Tk_z/(9κρ_0 c_ r^2)] + ω[-k_z^2c_ r^2 - k^4cV/(3κρ_0) + ik^2VcgΘ_T k_z/(9κρ_0 c_ r^2)] + [-cgk_x^2k_z/(3κρ_0) + ik_z^2g^2cΘ_T/(9κρ_0 c_ r^2) + cgk_z^3Θ_ρ/(3κρ_0)].
The viscous (V) terms generally set the short wavelength cutoff scale when finite gas pressure effects are negligible. Neglecting these terms and taking the short wavelength k→∞ limit, we find that there is a mode given by
ω^2=-(igk_z/k^2)(k_x^2-k_z^2Θ_ρ).
This agrees exactly with equation (98) of <cit.> for a vertical magnetic field, and generalizes the dispersion relation of <cit.> to incorporate the effects of a variable opacity. Temperature fluctuations are smoothed out by rapid diffusion at short wavelengths, which is why Θ_ T does not appear in this limit. Indeed, note that in the last three terms of equation (<ref>), the Θ_T term is one order lower in k than the other two terms.
The magnetic opacities that we have considered in this paper have no density-dependence: Θ_ρ=0. In addition, the photon bubble instability requires k_x ≠ 0. If we eliminate photon bubbles by considering purely vertically propagating modes, and adopt Θ_ρ=0, we find a short wavelength mode with frequency given by
ω^2 + i(3κρ_0c_ r^2/c)ω + g^2Θ_T/(3c_ r^2) = 0.
One of the roots of this equation is always unstable if Θ_T>0. Because it is entirely vertical, the magnetic field plays no role here except in determining the temperature dependence of the opacity, and the instability is fundamentally hydrodynamic in nature.
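The growth rates quoted in the main text can be obtained by solving the cubic dispersion relation above numerically for complex ω. The sketch below is illustrative only: all parameter values are placeholders rather than values from the simulations, and with the exp[i(k_x x+∫ k_z dz-ω t)] convention a growing mode has Im(ω) > 0.

import numpy as np

def photon_bubble_roots(kx, kz, kappa, rho0, g, c_r, Theta_T, Theta_rho,
                        V=0.0, c=2.998e10):
    """Roots of the cubic dispersion relation; Im(omega) > 0 means growth."""
    k2 = kx**2 + kz**2
    D = kappa * rho0                       # shorthand for kappa * rho_0
    a2 = 1j * k2 * (c / (3.0 * D) + V) + c * g * Theta_T * kz / (9.0 * D * c_r**2)
    a1 = (-kz**2 * c_r**2 - k2**2 * c * V / (3.0 * D)
          + 1j * k2 * V * c * g * Theta_T * kz / (9.0 * D * c_r**2))
    a0 = (-c * g * kx**2 * kz / (3.0 * D)
          + 1j * kz**2 * g**2 * c * Theta_T / (9.0 * D * c_r**2)
          + c * g * kz**3 * Theta_rho / (3.0 * D))
    return np.roots([1.0, a2, a1, a0])

# purely vertical mode, Theta_rho = 0, Theta_T > 0; illustrative numbers only
k = 2.0 * np.pi / 1.0e3                    # 10 m wavelength, placeholder
roots = photon_bubble_roots(kx=0.0, kz=k, kappa=0.34, rho0=1e-2,
                            g=2e14, c_r=3e7, Theta_T=0.5, Theta_rho=0.0)
print("roots:", roots)
print("max growth rate [1/s]:", roots.imag.max())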
|
http://arxiv.org/abs/2307.01290v1
|
20230703184233
|
Three is the magic number -- distance measurement of NGC 3147 using SN 2021hpr and its siblings
|
[
"Barnabas Barna",
"Andrea P. Nagy",
"Zsofia Bora",
"Donat R. Czavalinga",
"Reka Konyves-Toth",
"Tamas Szalai",
"Peter Szekely",
"Szanna Zsiros",
"Dominik Banhidi",
"Barna I. Biro",
"Istvan Csanyi",
"Levente Kriskovics",
"Andras Pal",
"Zsofia M. Szabo",
"Robert Szakats",
"Krisztian Vida",
"Zsofia Bodola",
"Jozsef Vinko"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"astro-ph.CO",
"astro-ph.GA",
"astro-ph.HE"
] |
Department of Experimental Physics, Institute of Physics, University of Szeged, Dóm tér 9, 6720 Szeged, Hungary
bbarna@titan.physx.u-szeged.hu
Eötvös Loránd University, Department of Astronomy, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Konkoly Th. M. út 15-17., 1121 Budapest, Hungary
ELKH-SZTE Stellar Astrophysics Research Group, Szegedi út, Kt. 766, 6500 Baja, Hungary
Baja Astronomical Observatory of University of Szeged, Szegedi út, Kt. 766, 6500 Baja, Hungary
CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, 1121 Budapest, Hungary
ELTE Eötvös Loránd University, Gothard Astrophysical Observatory, 9400 Szombathely, Hungary
Eötvös Loránd University, Institute of Physics, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
Scottish Universities Physics Alliance (SUPA), School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, KY16 9SS, UK
The nearby spiral galaxy NGC 3147 hosted three Type Ia supernovae (SNe Ia) in the past decades, which have been subjects of intense follow-up observations. Simultaneous analysis of their data provides a unique opportunity for testing the different light curve fitting methods and distance estimations.
The detailed optical follow-up of SN 2021hpr allows us to revise the previous distance estimates of NGC 3147 and to compare the widely used light curve fitting algorithms to each other. By combining the available and newly published data of SN 2021hpr, its physical properties can also be estimated with higher accuracy.
We present and analyse new BVgriz and Swift photometry of SN 2021hpr to constrain its general physical properties. Together with its siblings, SNe 1997bq and 2008fv, we cross-compare the individual distance estimates of these three SNe given by the SALT code, and also check their consistency with the results from the MLCS2k2 method. The early spectral series of SN 2021hpr are also fit with the radiative spectral code TARDIS in order to verify the explosion properties and constrain the chemical distribution of the outer ejecta.
After combining the distance estimates for the three SNe, the mean distance to their host galaxy, NGC 3147, is 42.5 ± 1.0 Mpc, which matches the distance inferred by the most up-to-date LC fitters, SALT3 and BayeSN. We confirm that SN 2021hpr is a Branch-normal Type Ia SN that ejected ∼ 1.12 ± 0.28 M_⊙ from its progenitor white dwarf, and synthesized ∼ 0.44 ± 0.14 M_⊙ of radioactive ^56Ni.
Three is the magic number - distance measurement of NGC 3147 using SN 2021hpr and its siblings
B. Barna 1 A. P. Nagy1 Zs. Bora2,3 D. R. Czavalinga4,5 R. Könyves-Tóth1,6,7 T. Szalai1,4 P. Székely1 Sz. Zsíros1 D. Bánhidi5 I. B. Bíró 4, 5 I. Csányi5 L. Kriskovics2, 6 A. Pál2, 6 Zs. M. Szabó3, 6, 9, 10 R. Szakáts2, 6 K. Vida2, 6 Zs. Bodola1 J. Vinkó2, 6, 1, 8
Received December 12 2022
§ INTRODUCTION
The Type Ia supernovae (SNe Ia), being high-luminosity standardizable candles, have essential importance in the cosmic distance measurements providing extension of the distance ladder toward higher redshifts. Since at present there is a significant tension between the cosmological parameters, like H_0, inferred locally and from the Cosmic Microwave Background (CMB), it is important to further reduce the potential biases in the measured distances, which may help in revealing the cause of the discrepancy. Local galaxies that hosted SNe Ia and have observable Cepheid populations are especially important in this respect <cit.>.
The basis of the standardization process of Type Ia light curves (LCs) is the Phillips relation <cit.>, i.e. the empirical correlation between the peak absolute brightness (typically in the B-band) and the Δ m_15 decline rate measured during the first 15 days after the moment of maximum light (t_0). Later, multiple studies tried to link the shape of the LC to the peak luminosity. The most widely used SN Ia LC synthesis and distance estimator codes are the newest versions of SALT <cit.> and MLCS <cit.>, but other approaches, like BayeSN <cit.> and SNooPy <cit.> have also been published (see Section <ref>).
However, LC fitters and distance estimations still suffer from intrinsic scattering due to spectrophotometric calibration issues and maybe some sort of unknown systematic effects. One way to reduce the sources of uncertainties is using SN siblings, i.e. SNe discovered in the same galaxy. These SNe share the same distance, redshift, and other physical properties of their common host galaxy. Thus, the expected dispersion in their individually estimated distances should be significantly lower than the intrinsic scatter in the Hubble-diagram at the same redshift. Thus, SN siblings may also allow us to test distance measurement methods and may support their further improvements <cit.>.
Due to the decade-long observations by recent transient discovery programs, the number of galaxies hosting multiple SNe is gradually increasing. The absolute record holder of modern times is NGC 6946, also called the Firework Galaxy, which hosted ten SNe in a century. However, the datasets of early SNe usually suffer either from high uncertainties, data gaps or lack of wavelength coverage, thus, not all of these SNe can be used for high-precision distance estimations.
We searched for additional supernova siblings in the Open Supernova Catalog (OSC) <cit.>. By narrowing our search to the simple Ia category in the OSC, we found 67 galaxies that hosted two or more “normal” Type Ia supernovae. The actual number is higher as the OSC has a sub-classification of Ia supernovae that we omitted from our search, and the catalog has not been updated since April 2021. Recent studies by <cit.> and <cit.> have increased the sample further. In the most recent study by <cit.>, 113 galaxies were found to host 236 thermonuclear SNe (including both “normal” SNe Ia and other subclasses) in the OSC.
There are only 6 galaxies in which at least three Type Ia supernova siblings have been discovered. Both M84 (SNe 1957B, 1980I, and 1991bg) and NGC 1316 (SNe 1980N, 1981D, 2006dd, and 2006mr) hosted two Type Ia supernovae discovered before the CCD era, which increases the importance of NGC 5468 (SNe 1999cp, 2002cr, and 2005P), NGC 5018 (SNe 2017isq, 2002dj, 2021fxy), NGC 3367 (SNe 2018kp, 2003aa, 1986A), and NGC 3147 (SNe 1972H, 1997bq, 2008fv, and 2021hpr). The last one hosted the recently discovered SN 2021hpr, which was the subject of intense follow-up observations by several observatories. Therefore, it became a key object in the distance estimations using Type Ia supernova siblings.
In this paper we present new optical photometric observations of SN 2021hpr, and combine them with published LCs of SNe 1997bq and 2008fv to determine an improved distance to their common host galaxy, NGC 3147. Based on the improved distance, we infer and discuss the physical parameters for SN 2021hpr by building models for its spectra and the bolometric LC.
The paper is structured as follows: in Section <ref>, we introduce the datasets of the three SNe Ia hosted by galaxy NGC 3147, with a special interest in the newly obtained LC of SN 2021hpr published in this paper first.
Methods for the spectral synthesis and LC analysis, as well as the fitting algorithms used for distance estimations, are described in Section <ref>. The results are presented and discussed in Section <ref>. Finally, we summarize our conclusion in Section <ref>.
§ THREE SUPERNOVAE OF NGC 3147
NGC 3147 is a barred spiral galaxy in the Draco constellation, at α(2000.0) = 10^h 16^m 50^s, δ(2000.0) = +73^∘ 24', and is of particular interest due to its low-luminosity Type II Seyfert active galactic nucleus <cit.>. Its heliocentric redshift is z=0.00934 <cit.>. The historical distance estimations show a wide range between 27.7 Mpc <cit.> and 55.2 Mpc <cit.>, but the latest pre-SN 2021hpr results narrowed this range down to between 39.3 Mpc <cit.> and 43.7 Mpc <cit.>. The most recent Cepheid-based distance was derived in the comprehensive analysis of <cit.>, where the authors estimated 40.1 ± 3.3 Mpc from the analysis of 27 Cepheid variables of NGC 3147.
The proximity and face-on orientation of NGC 3147 makes it a prominent host for discovering transient events. In the past half-century, this galaxy hosted six SNe, four of which were classified as Type Ia (the other two, SN 2006gi and SN 2021do, were Type Ib/c events). For SN 1972H, only photographic photometry was published <cit.>. Thus, only SNe 1997bq, 2008fv, and the recently discovered 2021hpr can be used in a modern LC analysis.
The galactic component of the interstellar reddening in the direction of NGC 3147 is E(B-V)=0.021 mag <cit.>. The explosions took place at distant sites within NGC 3147, sampling different regions of the galaxy. The host galaxy component of the reddening is taken from previous studies for each object, as listed below.
SN 1997bq was discovered on 50546.0 MJD by <cit.>. The SN was located outside the observable spiral arms of NGC 3147 with 60" offset (R ≈ 16.0 kpc) southeast from the bulge. UBVRI LCs were obtained at the Fred Lawrence Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics and published by <cit.>. Due to the outskirt location of SN 1997bq, no significant host galaxy reddening is expected, but the individual LC fit performed with the code BayeSN indicated a total extinction of A_V = 0.45 mag <cit.>.
SN 2008fv was first detected on 54736.0 MJD by K. Itagaki, 37" north-east of the bulge of NGC 3147 <cit.>. <cit.> reported a significant host-galaxy reddening of E(B-V)_host=0.22 with a selective extinction coefficient of R_V=2.9 based on <cit.>. The SN peaked at 14.55 mag on ∼54749.0 MJD in the B-band. Optical and near-infrared (NIR) photometry was obtained by <cit.> and <cit.>. However, there is an enormous difference between the two datasets in the I-band. The choice of the adopted magnitudes for the LC fitting is explained in Sec. <ref>.
The discovery of SN 2021hpr was reported by <cit.> based on the first observation on 2021-04-02 at 10:46 UTC (59306.4 MJD). Later, a pre-discovery detection (59304.92 MJD) was reported by <cit.>, 0.3 days before the observation of the Zwicky Transient Facility. The first, yet ambiguous classification <cit.> claimed that the new transient is probably a Type Ia SN due to its Si II λ6355 feature with an expansion velocity of 21,000 km s^-1. The strong high-velocity feature (HVF) and the rapid brightening of the object suggested that SN 2021hpr was discovered at its very early phase.
SN 2021hpr was the subject of two previous studies, which published and analyzed their independent follow-up observations: <cit.> presented BVRI LCs and optical spectra between -14 and +64 days relative to B-band maximum, while <cit.> provided grizy photometry between -10 and +40 days and one optical spectrum at -4.2 days. In this study, we publish a new set of optical photometry obtained at two Hungarian observatories. Furthermore, we include the analysis of UV photometry taken by the Neil Gehrels Swift Observatory Ultraviolet and Optical Telescope (UVOT). Hereafter, we use our new BVgri photometry, supplemented by Swift UBV data, for the LC analysis of SN 2021hpr. The previously published LCs mentioned above are also used for comparison, but not for re-analysis.
We also model the spectra of SN 2021hpr available from the literature (see Tab. <ref>).
The host extinction was assumed to be negligible by <cit.>, but the LC analysis of <cit.> suggested A_V,host≃ 0.20 mag. Without any more established estimation, we use their mean value, A_V,host=0.1 mag, for the rest of the paper.
The main properties of NGC 3147, as well as of the three SNe, are listed in Tab. <ref>.
§.§ Observations
In this paper we present a SN photometric dataset that was obtained with two recently installed 0.8m telescopes in Hungary: one at the Piszkéstető mountain station of Konkoly Observatory, and one at Baja Observatory.
The twin instruments are two 0.8m Ritchey-Chrétien telescopes (hereafter KRC80 and BRC80, respectively), manufactured and deployed by the company AstroSysteme Austria (ASA). The focal length of 5700 mm corresponds to a focal ratio of f/7. Each telescope is equipped with Johnson BV and Sloan ugriz filters and a 2048x2048 back-illuminated FLI PL230 CCD chip with a pixel scale of 0.55". Due to the similarities between the two telescopes, the combined LCs of SN 2021hpr can be considered as a homogeneous dataset.
We carried out standard Johnson–Cousins BV and Sloan griz CCD observations on 50 nights between April and September 2021 (Fig. <ref>). The achieved photometric accuracy varied between 0.01–0.05 mag depending on the weather conditions. The exposure times were 180 seconds, except for the B filter, where we used 300 seconds.
All data were processed with standard IRAF[IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.] routines, including bias, dark and flat-field corrections. Then we co-added three images per filter per night, aligned with the wcsxymatch, geomap and geotran tasks. We obtained PSF photometry on the co-added frames using the daophot package in IRAF, as well as image-subtraction photometry based on other IRAF tasks such as psfmatch and linmatch.
For PSF photometry we built an automated pipeline using self-developed C-codes and bash shell scripts, and the necessary IRAF tasks are called as system binaries outside the IRAF environment. The IRAF executables are collected into a single parallel processing script using gnu-parallel <cit.>. This “all inclusive” method enabled us to reduce the processing time for the ∼ 1 GB of data per night to a few minutes on a normal PC with 16 CPU cores.
The photometric calibration was carried out using
stars from Data Release 1 of Pan-STARRS1 (PS1 DR1) [https://catalogs.mast.stsci.edu/panstarrs/]. The selection of the photometry reference stars and the
calibration procedures are as follows. First, sources within a 5 arcmin radius around the SN with r-band brightness between 15 and 17 mag (to avoid saturation, <cit.>) were downloaded from the PS1 catalog. Next, non-stellar sources were filtered out based on the criterion i_PSFmag - i_Kronmag < 0.05 for stars [https://outerspace.stsci.edu/display/PANSTARRS/].
In order to get reference magnitudes for our Johnson B- and V-band frames, the PS1 magnitudes were transformed into the Johnson BVRI system based on equations and coefficients found in <cit.>. Finally, the instrumental magnitudes were transformed into standard BVgriz magnitudes by applying a linear color term (using g-i) and wavelength-dependent zero points. Since the reference stars fell within a few arcminutes around the target, no atmospheric extinction correction was necessary.
S-corrections were not applied.
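The zero-point and colour-term calibration described above amounts to a simple linear least-squares fit per filter. A minimal sketch of this step is given below; the reference-star values are made-up placeholders, while the real calibration uses the PS1-based stars selected above.

import numpy as np

def fit_zeropoint_colorterm(m_inst, m_ref, gi_color):
    """Least-squares fit of m_ref ~ m_inst + ZP + CT * (g - i); returns (ZP, CT)."""
    A = np.column_stack([np.ones_like(gi_color), gi_color])
    (zp, ct), *_ = np.linalg.lstsq(A, m_ref - m_inst, rcond=None)
    return zp, ct

# made-up instrumental/reference magnitudes and g-i colours of four stars
m_inst = np.array([-8.21, -7.95, -9.02, -8.60])
m_ref = np.array([16.05, 16.33, 15.27, 15.66])
gi = np.array([0.45, 0.62, 0.38, 0.51])
zp, ct = fit_zeropoint_colorterm(m_inst, m_ref, gi)
print(f"ZP = {zp:.3f} mag, colour term = {ct:.3f}")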
The obtained BVgriz LCs are plotted in Fig. <ref>. Direct comparison with B- and V-band LCs of <cit.> further confirm that our data are free of systematic errors.
In the case of SN 2021hpr the ground-based optical observations were supplemented by the available archival data of the Neil Gehrels Swift Observatory <cit.> taken with the Ultraviolet-Optical Telescope <cit.> in April 2021 (see Fig. <ref>). Data were collected in six filters from optical to ultraviolet wavelengths (u, b, v, uvw1, uvm2, uvw2). The SN was detectable as a point source on the images, although it was located in a complex galactic environment. In order to model its background flux, we applied five different background regions distributed around the SN, and determined the background as the average of the flux values taken from each region. The Swift/UVOT data were processed using the HEAsoft software package. We summed the individual frames and carried out aperture photometry on the summed images using the corresponding HEAsoft UVOT tasks.
Two spectra used in this study (see Tab. <ref>) were published by <cit.>, and another was obtained at Smolecin Observatory (L25). All spectra are available at WISeREP online supernova database <cit.>.
§ METHODS
In this section, we briefly introduce the modeling codes adopted for the photometric and spectroscopic analyses.
§.§ The radiative transfer code TARDIS
Our approach for the fitting of the spectral series of SN 2021hpr is performed with the one-dimensional radiative transfer code TARDIS <cit.>. TARDIS calculates synthetic spectra on a wide wavelength range in exchange for a low computational cost, providing an ideal tool for fitting both the continuum and spectral features of homologously expanding ejecta.
The main assumption of the code is a sharp photosphere emitting blackbody radiation modeled via indivisible energy-packets representing bundles of photons <cit.>. The model atmosphere is divided into radial layers with densities and chemical abundances defined by the user. The algorithm follows the propagation of the packets, while calculating the wavelength- and direction-changing light-matter interactions based on a Monte Carlo scheme in every shell. The final output spectrum is built up by summarizing the photon packets escaping from the model atmosphere.
The approach of TARDIS offers several improvements over the simple LTE assumptions. We followed the same settings as in most of the studies that used TARDIS for spectral fitting <cit.>. As an example, the ionization state of the material used here is estimated following an approximate non-LTE (NLTE) mode, the so-called nebular approximation, which significantly deviates from the LTE method by accounting for a fraction of recombinations returning directly to the ground state <cit.>. The excitation state is calculated according to the dilute-LTE approximation, which is also not purely thermal. A summary of the TARDIS numerical parameters and modes adopted in this study is listed in Tab. <ref>. The limitations of the simulation background, as well as the detailed description of the NLTE methods, are presented in the original TARDIS paper <cit.>.
§.§ Light curve fitter codes for SNe Ia
We applied the recently released version of Spectral Adaptive Lightcurve Template <cit.> for estimating the distance to SN 2021hpr and its two siblings. We also utilized the earlier version, SALT2.4 <cit.>, as well as an independent LC fitter, MLCS2k2 <cit.> for checking the consistency of the inferred distance moduli with those from earlier calibrations.
MLCS2k2 is the improved version of the original Multi-Color Light Curve Shape (MLCS) code introduced by the High-z Supernova Search Team <cit.>. The LCs obtained in Johnson-Cousins UBVRI filters are fitted with two tabulated functions (P_λ and Q_λ) trained on a carefully selected sample of 133 SNe <cit.> and with the Δ parameter that linked the shape of the LC with the peak absolute brightness in the V-band. The code fits the observed LC m_λ (t) with the following function
m_λ (t) = M_λ (t) + μ_0 + ζ_λ( α_λ + β_λ / R_V ) · A_V + P_λ(t) ·Δ + Q_λ(t) ·Δ^2,
where t is the rest-frame time (corrected for time dilation) elapsed from the moment of B-band maximum, M_λ (t) is the LC of the fiducial SN Ia in absolute magnitudes, and μ_0 is the true (extinction-free) distance modulus. MLCS2k2 also takes into account the effect of interstellar reddening with A_V extinction in the V-band. α, β describe the interstellar reddening law as a function of wavelength, while ζ takes into account the temporal variation of the reddening correction due to the spectral evolution of the SN.
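For illustration, the MLCS2k2 model magnitude can be assembled as follows once the tabulated templates are available. In this sketch the template vectors M_λ, P_λ, Q_λ and the reddening coefficients are hypothetical placeholders, not the trained MLCS2k2 products; the code is only meant to show how the terms of the equation above combine.

import numpy as np

def mlcs_mag(t, mu0, delta, av, t_tab, M_tab, P_tab, Q_tab,
             zeta=1.0, alpha=1.0, beta=3.1, r_v=3.1):
    """Apparent magnitude at phase t (days from B maximum) in a single band."""
    M = np.interp(t, t_tab, M_tab)   # fiducial absolute-magnitude template
    P = np.interp(t, t_tab, P_tab)   # linear shape-correction vector
    Q = np.interp(t, t_tab, Q_tab)   # quadratic shape-correction vector
    return M + mu0 + zeta * (alpha + beta / r_v) * av + P * delta + Q * delta**2

# toy V-band templates sampled every 5 days (placeholders, not the trained vectors)
t_tab = np.arange(-10.0, 61.0, 5.0)
M_tab = -19.0 + 0.0005 * t_tab**2
P_tab = 0.10 * np.ones_like(t_tab)
Q_tab = 0.02 * np.ones_like(t_tab)
print(mlcs_mag(10.0, mu0=33.1, delta=-0.1, av=0.2,
               t_tab=t_tab, M_tab=M_tab, P_tab=P_tab, Q_tab=Q_tab))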
SALT2 was introduced by the SuperNova Legacy Survey Team <cit.>. Unlike MLCS2k2, this code models the entire spectral energy distribution (SED) as a function of time. To do this, a combination of multiple colour-dependent vectors is trained on a large sample of thoroughly chosen SNe. The components of the model function are
F(λ,t) = x_0· [M_0(λ,t) + x_1· M_1(λ,t)] ·exp(c · CL(λ)),
where M_0, M_1 and CL are the trained vectors of SALT, while x_0, x_1 and c are fitting parameters, representing the normalization, the stretch and the colour of the SED, respectively.
SALT3 is a recent improvement to SALT2.4 developed by <cit.>. SALT3 was trained on more than one thousand SN Ia spectra, an order of magnitude larger sample than the previous version of the code, providing lower uncertainties and fewer systematics compared to the previous versions.
The SALT model does not contain the distance as a direct fitting parameter. Instead, it is inferred from the fitting parameters of the SALT3 code by the following formula <cit.>:
μ = -2.5 log_10(x_0) + α x_1 - β c - M_0
We adopt the recent calibration of <cit.> for α and β parameters as α = 0.133 ± 0.003 and β = 2.846 ± 0.017, respectively.
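A short sketch of this conversion is given below. The fiducial absolute magnitude M_0 depends on the normalization convention of the SALT training and is therefore left as an input; the only numerical check shown is the conversion of the final modulus quoted in this paper to a distance in Mpc.

import numpy as np

ALPHA, BETA = 0.133, 2.846    # calibration adopted above

def salt_mu(x0, x1, c, m0, alpha=ALPHA, beta=BETA):
    """mu = -2.5 log10(x0) + alpha*x1 - beta*c - M_0."""
    return -2.5 * np.log10(x0) + alpha * x1 - beta * c - m0

def mu_to_mpc(mu):
    """Luminosity distance in Mpc corresponding to a distance modulus."""
    return 10.0 ** ((mu - 25.0) / 5.0)

print(f"mu = 33.14 mag  ->  D = {mu_to_mpc(33.14):.1f} Mpc")   # ~42.5 Mpc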
§ RESULTS
§.§ Distance measurements
Recently, a comprehensive analysis of the distance of a multiple-SNe host galaxy was performed by <cit.>. The authors presented a new version of the BayeSN model <cit.> fitting the optical-NIR regime (in grizy bands) of SN 2021hpr, but also included the photometry of SNe 1997bq and 2008fv, both individually and simultaneously. They inferred the mean value of the distance moduli as μ = 33.14 mag with a standard deviation of 0.01, smaller than the intrinsic scattering of SNe Ia. The common μ-fit to the data of the three SNe resulted in μ = 33.13 ± 0.08 mag.
<cit.> also compared BayeSN to the widely used LC-fitter code SNooPy. The latter code provided a higher standard deviation of σ∼ 0.1 mag, but the mean value of the individual three distance moduli (μ = 33.18 mag) was close to that of the BayeSN code.
In the present paper, we apply SALT3 for a similar analysis as <cit.> and use SALT2.4 and MLCS2k2 for cross-comparing the inferred distance moduli. All LC fittings are computed separately on the individual datasets of SNe 1997bq, 2008fv, and 2021hpr. While SALT3 can be fed by a collection of various filters and magnitude systems, MLCS2k2 was trained for Johnson-Cousins UBVRI filters and Vega magnitudes. In the case of SN 2021hpr, where the object was not observed in Johnson-Cousins R and I filters, only B and V data were used during the fitting with MLCS2k2. SALT3 fits are shown in Figs. <ref> - <ref>, while MLCS2k2 and SALT2 fits can be seen in Figs. <ref> - <ref>.
Due to the low (z < 0.01) redshift of the host galaxy, the K-correction for transforming observed LCs to rest-frame bands is estimated to be of the order of 0.01 mag, which is lower than the random observational uncertainties of the individual data; thus, K-corrections were neglected. To avoid any discrepancy due to the different H_0,ref values that were assumed to tie the LC fitting codes to the distance scale, all distances are transformed to a common value of H_0 = 73 km s^-1 Mpc^-1 <cit.>:
μ (H_0) = μ_fit - 5 log(H_0 / H_0,ref)
where μ_fit is the distance modulus inferred directly by the LC fitter.
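A one-line helper for this rescaling (the example values are illustrative, assuming a fitter tied to H_0,ref = 70 km s^-1 Mpc^-1) is:

import numpy as np

def rescale_mu(mu_fit, h0_ref, h0=73.0):
    """Shift a distance modulus from the fitter's H_0,ref to a common H_0."""
    return mu_fit - 5.0 * np.log10(h0 / h0_ref)

print(rescale_mu(33.23, h0_ref=70.0))   # -> ~33.14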
Although all three LC fitter codes have been trained on U-band data, we refrain from using the U-band (both Johnson and Swift) magnitudes. The near-UV diversity of SNe Ia is not fully covered by the limited sample of U-band observations, so the training of the LC fitters cannot be sufficient in this spectral regime and would increase the uncertainty of the distance estimation.
The individual distance estimations based on the three SNe and carried out with the three LC fitters are summarized in Tab. <ref>.
§.§.§ SN 1997bq
The BVRI LCs of SN 1997bq are sparse around the peak, which makes constraining the date of maximum light more difficult. The polynomial fit of the B-band LC provided T_max(B)=50557.5 MJD, which is consistent with the result of SALT when the time of maximum is a fitting parameter (T_max(B)=50558.0 MJD). Adopting the latter date for the MLCS2k2 fit, the inferred host galaxy extinction is A_host=0.60 mag, and the constrained distance modulus, μ=33.00 mag, is close to the Cepheid-distance <cit.>.
The SALT distances perfectly coincide with each other and also support our final conclusion about the distance of NGC 3147 (see below). Moreover, the value lies between the previous estimates published by <cit.> and <cit.> (32.97 and 33.16 mag, respectively, assuming H_0 = 73 km s^-1 Mpc^-1, see Eq. <ref>) based on the same photometric data of SN 1997bq.
§.§.§ SN 2008fv
For SN 2008fv the publicly available I-band observations of <cit.> are significantly brighter than expected, and they exceed the published I-band magnitudes of <cit.> by ∼0.8 mag. <cit.> investigated the colours of SN 2008fv and concluded that the I-band data of <cit.> are probably inaccurate, thus, they were discarded from the fitting process. The <cit.> photometry in other filters was also discredited by <cit.>, but the estimated discrepancy is in the order of 0.1 mag. In this study, we aimed to use most of the available data, thus, we adopted the complete BVR photometry of <cit.> together with post-maximum observations of <cit.> in BVRI.
The MLCS2k2 code provides μ_08fv = 32.97 mag, which is in good agreement with the Cepheid-based distance modulus of the host galaxy <cit.>. The total A_V is constrained as 0.96 mag, significantly higher than that estimated by <cit.> with the BayeSN code. At the same time, the nearly identical models of SALT2.4 and SALT3 fail to fit the I-band fluxes and infer significantly higher distance moduli of μ∼ 33.33 ± 0.09 and μ∼ 33.35 ± 0.05 mag, respectively.
§.§.§ SN 2021hpr
SN 2021hpr has the most densely covered light curves among the three SNe extending from -15 to +100 days in BVRI and griz bands. However, the z-band LC suffers from higher uncertainties due to inferior sky conditions during the observations, thus, it was omitted from the fitting.
The MLCS2k2 fit is made using only the BV LCs from the (B)RC80 observations to keep the homogeneity of the dataset and avoid any systematic errors. However, the result (μ = 32.85 mag) differs greatly from any other distance modulus in this study, which underlines the importance of having a photometric dataset covering a wide spectral range. At the same time, the SALT codes provide good fits to all of the LCs in all bands around the maximum light, resulting in similar distance moduli within a 1 σ agreement. As a further validation, we also add the BVRI photometry of <cit.> to the homogeneous BVgriz dataset for an additional modeling with SALT3. The extra LCs and data points barely change the parameters of the fits (Fig. <ref>), including the inferred distance modulus of μ = 33.15 mag.
As a conclusion, we propose the distance modulus μ = 33.14 ± 0.05 mag of SN 2021hpr estimated by SALT3 for the distance of NGC 3147. This distance shows a good match with the mean value of (all) the distance moduli (including SNe 1997bq and 2008fv) estimated in this study (μ = 33.12 ± 0.10 mag), and also with that of <cit.> (μ = 33.14 ± 0.12 mag). Moreover, the inferred distance is consistent with the result of the Cepheid-based distance <cit.>. The summary of the distance estimation of NGC 3147 published in the literature and inferred in this study can be found in Fig. <ref> and in Tab. <ref>.
§.§ The physical properties of SN 2021hpr
§.§.§ Rise time
To constrain the moment of the explosion of SN 2021hpr, we adopted the assumption of the expanding fireball model. According to this, the emerging pre-maximum flux increases as a power-law function of time <cit.>:
F = a ·(T - T_first)^n
Here, T_first is the time of first light, which is not equivalent to the time of the
explosion (T_exp), as it refers to the moment when the first photons emerge,
while the latter refers to the actual moment when the explosion starts.
The intermediate ‘dark phase’ may last for a few hours to days <cit.>.
In theory, the power-law function with n=2 is valid only for the bolometric flux. However, studies pointed out that it is still more-or-less valid for quasi-monochromatic fluxes in the optical bands, but in those cases the value of the exponent may differ from the textbook example, varying between n=2.2 <cit.> and n=2.44 <cit.> for normal SNe Ia.
We fit the early magnitudes of the most densely sampled LCs of SN 2021hpr, i.e. the B and V bands, simultaneously with the function in Eq. <ref>, but using the same T_first for both light curves. At first, we fix the exponent as n=2, which corresponds to the classical fireball model. The resulting moment of first light is 59305.4 ± 1.5 MJD, which is somewhat late considering that the first detection occurred only about a day later.
Next, we allow n to vary between 2.0 and 2.5 for each LC (Fig. <ref>). The inferred date of T_first = 59304.6 ± 1.6 MJD is a more realistic time for the explosion, considering the early discovery 1.9 days later. The LC in the B-band peaks at 59323.0 MJD, as constrained by a polynomial fit. As a key parameter, this date is used as input for the MLCS2k2 and SALT2 fits in the following. The estimated rise time of 18.4 days is in good agreement with the average of normal SNe Ia <cit.>.
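The simultaneous two-band rise fit can be set up, for example, as follows. This is a sketch with placeholder epochs and fluxes (the actual fit uses the (B)RC80 photometry with proper uncertainties); the exponent of each band is bounded to the interval [2, 2.5] discussed above and T_first is shared between the bands.

import numpy as np
from scipy.optimize import least_squares

def residuals(p, tB, fB, tV, fV):
    t_first, aB, nB, aV, nV = p
    modB = aB * np.clip(tB - t_first, 0.0, None) ** nB
    modV = aV * np.clip(tV - t_first, 0.0, None) ** nV
    return np.concatenate([modB - fB, modV - fV])

# placeholder early-time epochs (MJD) and linear fluxes (arbitrary units)
tB = np.array([59306.9, 59308.0, 59309.1, 59310.0, 59311.2])
fB = np.array([0.04, 0.13, 0.27, 0.41, 0.66])
tV = np.array([59306.9, 59308.1, 59309.2, 59310.1, 59311.3])
fV = np.array([0.05, 0.15, 0.30, 0.45, 0.70])

p0 = [59305.0, 0.02, 2.2, 0.02, 2.2]
lower = [59300.0, 0.0, 2.0, 0.0, 2.0]
upper = [59306.8, np.inf, 2.5, np.inf, 2.5]
fit = least_squares(residuals, p0, bounds=(lower, upper), args=(tB, fB, tV, fV))
print("T_first =", fit.x[0], " n_B =", fit.x[2], " n_V =", fit.x[4])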
§.§.§ Abundance tomography
We modeled the spectroscopic evolution of SN 2021hpr with the radiative transfer code TARDIS <cit.>. To synthesize the spectral luminosities, we fixed the distance of SN 2021hpr to 42.5 Mpc, corresponding to the distance modulus derived with SALT3 (see Sec. <ref>).
For de-reddening, a total of A_V=0.17 mag was adopted as the average value of the total extinction assumed by <cit.> and <cit.>.
Three of the earliest spectra of SN 2021hpr were fit (see Tab. <ref>). The best-fit synthetic spectra are presented in Fig. <ref>. The key parameters of the spectral synthesis are the total bolometric luminosity (L), the photospheric velocity (v_phot), and the time since the explosion (t_exp, derived from T_exp). We fixed the density function of our input to the well-known W7 model <cit.> to reduce the number of free parameters. The latter simplification results in a discrepancy if we assume the constrained T_first (see Sec. <ref>) directly as T_exp, because the adopted density structure, which dilutes with time as t_exp^-3, would then yield a model ejecta that is too dense and too hot, especially for the first epoch (59307.5 MJD). To compensate for this, we choose to fit T_exp in our abundance tomography within a range of one day before T_first, taking into account an approximate dark phase. The best-fit value is T_exp = 59304.0 MJD.
The spectral tomography at the earliest epoch samples the outermost layers, which are located above 18 000 km s^-1 based on the v_phot at t_exp = 3.5 days. Two other spectra were taken within half a day after the first epoch, but these datasets do not carry additional information; thus, we do not include them in our analysis.
The next spectrum was taken at t_exp = 13.6 days, when the photosphere receded to 11 000 km s^-1. This agrees with the conclusion of <cit.>, where the authors classified SN 2021hpr as a high-velocity gradient <cit.> SN Ia based on the ∼800 km s^-1 day^-1 decrease of v_phot.
Because of the exponentially decreasing density function toward higher velocities, the dominant light-matter interaction occurs in the few thousand km s^-1 wide region above the photosphere (except for a few elements like Fe). Thus, the majority of the velocity domain over 10,800 km s^-1 is poorly sampled.
Finally, a third epoch at t_exp = 19.9 days is chosen for spectral synthesis (v_phot = 9,800 km s^-1). Assuming a linear decrease of v_phot between the second and third epoch, we characterize v_phot = 10,000 km s^-1 at the moment of maximum light.
After the maximum, the assumption of the blackbody emitting photosphere becomes weak in the case of the normal Type Ia SNe, which prevents the computation of realistic spectral fits with TARDIS.
To reduce the number of the fitting parameters, we set the densities fixed to the exponential fit of the W7 profile (see the upper panel of Fig. <ref>), adopting ρ_0 = 4.7 g cm^-3 as the central density at the reference time t_0 = 100 s after the explosion, and v_0 = 2750 km s^-1 as the exponential decrease in the function:
ρ(t_exp, v) = ρ_0 ·(t_exp/t_0)^-3·exp(-v/v_0)
The summary of the input physical parameters producing the well-fitting synthetic spectra of Fig. <ref> can be found in Table <ref>.
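For reference, the fixed density structure can be written as a short function using only the values quoted above (ρ_0 = 4.7 g cm^-3 at t_0 = 100 s and v_0 = 2750 km s^-1); the printed epochs correspond to the three modelled spectra.

import numpy as np

RHO_0, T_0, V_0 = 4.7, 100.0, 2750.0    # g/cm^3, s, km/s (values quoted above)

def rho(t_exp_days, v_kms):
    """Ejecta density [g/cm^3] at t_exp days after explosion and velocity v [km/s]."""
    t_sec = t_exp_days * 86400.0
    return RHO_0 * (t_sec / T_0) ** -3.0 * np.exp(-np.asarray(v_kms) / V_0)

for t, vph in [(3.5, 18200.0), (13.6, 11000.0), (19.9, 9800.0)]:
    print(f"t_exp = {t:5.1f} d, v_phot = {vph:7.0f} km/s -> rho = {rho(t, vph):.3e} g/cm^3")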
Despite the low time resolution, the stratification of the chemical mass fractions can be mapped (Fig. <ref>), but steep changes in the abundance functions cannot be tracked due to the limited constraints. The first epoch samples the outermost region (v > 18 200 km s^-1), which was designed with the initial assumption of a pure C/O layer. The abundance of C is fit with a monotonically decreasing trend towards the inner ejecta to reproduce the C II λ6580 line. The model of the outermost region also includes increased intermediate-mass element (IME) abundances in order to reproduce the HVFs observed in the Ca II H&K, Si II, and Ca II NIR lines, frequently reported in the literature. Moreover, Mg is present in the model ejecta with a mass fraction of X(Mg) > 0.1, but here the only constraint is the tentative need for the Mg II λ4481 line, which, if real, is severely blended with features of ionized iron. The Fe II and Fe III features are very sensitive to the abundance at this early epoch, and we characterize it as X(Fe) = 0.01 above 18 200 km s^-1. All these Fe and IME mass fractions are introduced at the expense of the C/O abundances. Note that O is not a well-constrained element in our fitting process, as the O I λ7774 feature is relatively insensitive to the mass fraction of the element. Thus, we use X(O) as a filler in our chemical composition.
The majority of the model ejecta is designed according to the fit of the second spectral epoch (v_phot = 10 900 km s^-1); however, some compromises had to be made to achieve a better agreement for the third epoch with only a slightly lower photospheric velocity (v_phot = 9 800 km s^-1). The changes in the abundances of the elements can be tracked by the absorption profiles of the prominent lines. The red wing of the Ca II H&K profile caught at the second epoch constrains the inner abundances of the element with an upper limit of X(Ca) < 0.0005 below 16 000 km s^-1. The Si and S abundances peak around ∼13 000 km s^-1 according to the fit of the Si II λ6355 and S II W features. The red wings of these absorptions also indicate reduced mass fractions toward lower velocities.
The IGE elements (except the high-velocity Fe) are limited to below ∼14 000 km s^-1; otherwise, the complex Fe feature around 5000 Å would be too strong at both epochs, especially towards shorter wavelengths. Due to the limits on the IME abundances (see the paragraph above), Fe and Ni become the dominant elements here.
Note again that the derived abundance structure should be handled with caution due to the limited fitting constraints provided by the small spectral sample and the high number of fitting parameters. Between 9,800 and 20,000 km s^-1, the following regions can be distinguished in the model ejecta based on the general trends of the most prominent elements:
* C/O outer region: the chemical profile is dominated by O with an inwards decreasing C contribution, while there is only a moderate IME and almost no IGE abundance.
* IME region: the mass fraction of Si is higher than 0.5 in a narrow, 1000-2000 km s^-1 wide region; the abundance of C/O drops, while the Ni abundance rises with decreasing velocity.
* IGE inner region: the ^56Ni produced in the explosion and its daughter isotopes (^56Co, ^56Fe) dominate the ejecta.
This kind of stratification is not specific to only one explosion scenario, instead, most of the deflagration-to-detonation transition and pure detonation models show similar regions with varying locations and relative strengths. The exact velocities, where these regions are separated from each other, just like the chemical abundances, are sensitive to the initial conditions of the hydrodynamic simulations. Thus, a direct quantitative comparison with predictions of any possible explosion scenario is not feasible from the present dataset.
§.§.§ Analysis of the bolometric light curve
By using the distance modulus μ=33.14 mag derived in Sec. <ref>, physical properties like the initial radioactive nickel mass produced in the explosion (M_Ni) can be constrained. To do so, we construct the pseudo-bolometric light curve of SN 2021hpr using two different methods. In both cases, we apply the available fluxes from the Swift UVM2 filter to the SDSS z band, but exclude the Swift UVW1 and UVW2 filters from the process because of their significant red leak <cit.>. As a first approximation (hereafter referred to as BOL1), we use the SuperBol[<https://zenodo.org/badge/latestdoi/73849147>] code for computing polynomial fits and extrapolations of the individual light curves. Then, before the integration, we extrapolate the observed SEDs with blackbody fits beyond the bluest and reddest bands for each epoch. By integrating these extended SEDs epoch by epoch, we generate the pseudo-bolometric LC with BB-correction.
As a second method (BOL2), the flux contribution from the unobserved UV and IR regimes is estimated in a different way. For the UV contribution, the trapezoidal rule is applied with the assumption that the flux reaches zero at 1000 Å <cit.>. The infrared contribution is taken into account by the exact integration of a Rayleigh-Jeans tail attached to the observed flux at the longest observed wavelength (i.e. the I- or i-band in the present case). Hence, in the following sections, we refer to this light curve as the pseudo-bolometric LC with Rayleigh-Jeans tail.
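The BOL2 construction can be summarised in a similarly hedged sketch: the UV contribution is the triangle between zero flux at 1000 Å and the bluest observed point, and the infrared contribution is the analytic integral of an F_λ ∝ λ^-4 Rayleigh-Jeans tail attached at the reddest band, which evaluates to F_λ(λ_max) λ_max / 3. The filter wavelengths and fluxes below are again invented placeholders; the distance follows from the adopted μ = 33.14 mag.

import numpy as np
from scipy.integrate import trapezoid

def pseudo_bolometric_L(lam_A, f_lam, dist_cm):
    """Pseudo-bolometric luminosity (erg/s) from monochromatic fluxes f_lam
    [erg/s/cm^2/A] at effective wavelengths lam_A [A] for a single epoch."""
    lam, f = np.asarray(lam_A, float), np.asarray(f_lam, float)
    order = np.argsort(lam)
    lam, f = lam[order], f[order]
    F_obs = trapezoid(f, lam)                  # observed range, trapezoidal rule
    F_uv  = 0.5 * f[0] * (lam[0] - 1000.0)     # linear run to zero flux at 1000 A
    F_ir  = f[-1] * lam[-1] / 3.0              # exact integral of an F_lam ~ lam^-4 tail
    return 4.0 * np.pi * dist_cm**2 * (F_obs + F_uv + F_ir)

# Placeholder near-peak fluxes; the distance follows from mu = 33.14 mag
d_cm  = 10.0 ** ((33.14 + 5.0) / 5.0) * 3.086e18                  # ~42.5 Mpc in cm
lam_A = [2246., 4392., 5468., 6215., 7545., 8700.]                # UVM2, B, V, r, i, z (assumed)
f_lam = [2.0e-15, 1.2e-14, 1.0e-14, 8.0e-15, 5.0e-15, 3.5e-15]    # erg/s/cm^2/A (invented)
print(f"L_bol ~ {pseudo_bolometric_L(lam_A, f_lam, d_cm):.2e} erg/s")

With placeholder fluxes of this order, the routine returns a luminosity of order 10^43 erg/s, i.e. comparable to the peak luminosity reported below.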
There are multiple ways to estimate the initial ^56Ni mass from the bolometric LC, e.g. the t_15 method <cit.> or the tail luminosity method <cit.>. Here we fit the estimated pseudo-bolometric LC (supplemented with blackbody corrections) with a semi-analytic code based on the model of <cit.> assuming radiative diffusion in a homologously expanding SN ejecta heated by the radioactive decay of ^56Ni and ^56Co. This model has been developed further by <cit.> and <cit.>.
The fitting parameters of the Arnett-model are the mean diffusion timescale, t_d (also called the light curve timescale, which is practically the geometric mean of the diffusion and the expansion timescales, see ), the gamma-ray leakage timescale, T_γ, and the initial mass of the radioactive ^56Ni, M_Ni. These parameters are directly related to the global physical parameters of the ejecta, namely the total ejecta mass M_ej, the characteristic expansion velocity v_exp, the effective optical opacity κ, and the opacity for gamma-rays κ_γ <cit.>.
Since the Arnett-model provides only two timescales (t_d and T_γ) for three physical parameters (M_ej, v_exp and κ), the physical parameters cannot be constrained independently from only photometry. To overcome this difficulty, <cit.> used an approximate, iterative method by estimating a lower and an upper limit for the optical opacity first, then using the average of them to constrain the ejecta mass and the expansion velocity.
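As an illustration of this degeneracy, the sketch below inverts the two timescales for (v_exp, M_ej) once κ is fixed, assuming the standard relations t_d^2 = 2 κ M_ej / (β c v_exp) with β ≈ 13.8 and T_γ^2 = 3 κ_γ M_ej / (4 π v_exp^2) for a uniform-density ejecta; these are textbook Arnett-type expressions and may differ in detail from the exact formulae used by the cited codes, and the input timescales are placeholders of the right order for a normal SN Ia rather than the fitted values of Table <ref>.

import numpy as np

M_SUN, C_LIGHT, DAY = 1.989e33, 2.998e10, 86400.0
BETA = 13.8  # Arnett (1982) density-profile constant

def ejecta_from_timescales(t_d_day, T_gamma_day, kappa, kappa_gamma=0.03):
    """Invert the two Arnett timescales for (v_exp, M_ej) at a fixed optical opacity kappa.
    t_d^2     = 2 kappa M_ej / (BETA c v_exp)
    T_gamma^2 = 3 kappa_gamma M_ej / (4 pi v_exp^2)
    """
    t_d, t_g = t_d_day * DAY, T_gamma_day * DAY
    # Eliminate M_ej between the two relations:
    v_exp = 3.0 * BETA * C_LIGHT * kappa_gamma * t_d**2 / (8.0 * np.pi * kappa * t_g**2)
    M_ej = BETA * C_LIGHT * v_exp * t_d**2 / (2.0 * kappa)
    return v_exp, M_ej

# Placeholder timescales of the right order for a normal SN Ia (not the fitted values)
v, M = ejecta_from_timescales(t_d_day=13.6, T_gamma_day=41.0, kappa=0.144)
print(f"v_exp ~ {v/1e5:.0f} km/s, M_ej ~ {M/M_SUN:.2f} M_sun")

With these placeholder inputs the relations return v_exp ≈ 11 000 km s^-1 and M_ej ≈ 1.1 M_⊙, numbers of the same order as those derived below.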
Recently the diffusion model was challenged by <cit.>, who suggested an alternative formalism to estimate the initial nickel masses for various types of SN explosions. However, <cit.> demonstrated that for Type Ia SNe in particular, the nickel masses inferred from the two methods are consistent within their uncertainties.
As a first attempt, we fit the pseudo-bolometric light curve (see Fig. <ref>) provided by SuperBol (BOL1). The best-fit model parameters estimated by the Minim code <cit.> are shown in Table <ref> together with the inferred physical parameters. These data suggest a nickel mass of M_Ni∼ 0.48 M_⊙, which is consistent with the absolute peak magnitude of M_max(B) ∼ -19 mag.
However, the BOL1 LC suffers from higher uncertainties mainly due to the imperfect BB-extensions to the near-UV and near-IR regime fit by the SuperBol code. The strong bump between +20 and +30 days is partially non-physical as the BB-fits of the code overestimate the flux contribution from the not-observed spectral regimes. Since these uncertainties lead to an inferior fitting, we disfavour the results from the BOL1 LC.
The BOL2 light curve is also fit (Fig. <ref>) following the same methodology, and the inferred physical quantities from the best-fit parameters (see Table <ref>) indicate a realistic M_Ni∼ 0.44 M_⊙ and v_exp∼ 11 200 km s^-1.
The mean optical opacity is constrained as κ∼ 0.144 cm^2 g^-1 according to the iterative method of <cit.>, which corresponds to an ejecta mass of ∼ 1.12 M_⊙.
As an alternative solution, the same set of best-fit parameters can be used with a fixed expansion velocity adopted from the spectral analysis (Section <ref>). The results are shown in the 4th column of Table <ref>. By assuming v_phot = 10 000 km s^-1 as v_exp, we get a lower ejecta mass of M_ej∼ 0.89 M_⊙. Note, however, that even though v_phot is generally used as an approximation for v_exp, these two velocities are not the same quantity by definition <cit.>. Thus, this estimate can be considered only as a lower limit for the ejecta mass.
For further validation, we can also constrain the value of the effective optical opacity (κ) directly from fitting the light curve with the Arnett-model-based LC2 code <cit.> coupled with Minim. In LC2 the ejecta mass, the radius of the progenitor, the nickel mass, the opacity and the kinetic energy of the ejecta can be chosen as fitting parameters. Since our data span only the first few months after the explosion, we ignore the leaking of positrons from the ejecta, and assume only the usual gamma-ray leaking with an effective gamma-ray opacity of κ_γ∼ 0.03 cm^2g^-1. The advantage of this approach is that the highly uncertain expansion velocity is not a fitting (nor an input) parameter; however, the results are sensitive to the value of κ included in the fitting. Note that, due to significant parameter correlations in the Arnett-model <cit.>, all the previously described methods are tainted with significant systematic uncertainties. We take these correlations into account while estimating the average errors of the inferred physical properties of SN 2021hpr (see Table <ref>).
The main parameters inferred from the fitted κ method (M_ej∼ 1.28 M_⊙) are consistent with those estimated by the previous approaches and are rather close to the results of the mean κ method. Thus, we accept M_Ni = 0.44 ± 0.14 M_⊙ and the other corresponding parameters (see the 3rd column with boldface fonts in Table <ref>) as the final result of the bolometric LC analysis.
§ CONCLUSION
We analyzed the optical/UV light curve of the normal Type Ia SN 2021hpr together with its two well-observed siblings, SNe 1997bq and 2008fv, that appeared in the same host galaxy, NGC 3147. The three SNe provide a unique opportunity to revise the distance of their host, and allow the testing of systematic effects influencing the various distance estimation methods.
We took new photometric data on SN 2021hpr with the twin robotic telescopes, RC80 and BRC80, from two Hungarian observatories (Konkoly and Baja).
The light curves, including BVgriz filters, start from t= -14.9 days, adopting 59323.0 MJD as the moment of maximum light in the B-band. Based on fitting the early data points, we constrained T_first∼ 59304.4 MJD as the date of first light, which gives t_rise = 18.4 days for the rise time. Besides the optical follow-up, we also downloaded and analyzed the available Swift UVOT photometry taken from -16.1 to +12.3 days relative to the B-band maximum.
We estimated the distance to SN 2021hpr by fitting our new BVgri data with the LC-fitter codes MLCS2k2, SALT2 and SALT3. The same three codes were applied for the two sibling SNe, and the inferred individual distances were compared to each other. We found that the results scatter between μ = 32.97 and 33.35 mag with a mean value of μ = 33.12 ± 0.16 mag. This estimation for the distance of NGC 3147 is in good agreement with previous SN-based studies (see Table <ref> and Figure <ref>), and also marginally consistent with the most recent Cepheid-based distance <cit.>.
We adopted the distance inferred from the SALT3 fitting to SN 2021hpr, where the most complete self-consistent optical dataset was applied and the models provided good fits to all bands. The SALT3-based distance modulus μ = 33.14 ± 0.05 mag is also consistent with that of <cit.> estimated by the BayeSN code from fitting to an independent LC of SN 2021hpr.
In order to study the physical parameters of SN 2021hpr based on our new, improved distance, abundance tomography was performed by fitting three pre-maximum spectra with the radiative transfer code TARDIS. The constrained chemical structure is consistent with multiple explosion models, including deflagration-to-detonation transition and pure detonation scenarios, with an additional overabundance of Mg, Si and Ca to reproduce the observed blue-shifts and high-velocity features of the corresponding absorption lines.
We calculated the bolometric LC of SN 2021hpr from its optical data supplemented by near-UV data from Swift, and fit it with the radiative diffusion Arnett-model. The peak luminosity turned out to be 1.02 × 10^43 erg/s, the total mass of ^56Ni produced in the explosion is estimated as 0.44 ± 0.14 M_⊙, while the ejecta mass and expansion velocity were calculated as 1.12 ± 0.28 M_⊙ and 11 200 ± 1200 km s^-1, respectively. Considering all the estimated physical properties and spectroscopic characteristics of SN 2021hpr, it belongs to the Branch-normal and high velocity gradient classes of Type Ia SNe.
This research made use of tardis, a community-developed software package for spectral synthesis in supernovae <cit.>. The
development of tardis received support from GitHub, the Google Summer of Code
initiative, and from ESA's Summer of Code in Space program. tardis is a fiscally
sponsored project of NumFOCUS. tardis makes extensive use of Astropy and Pyne.
The authors acknowledge the Hungarian National Research, Development and Innovation Office
grants OTKA K-131508, K-138962, K-142534, FK-134432, KKP-143986 (Élvonal), and 2019-2.1.11-TéT-2019-00056. LK acknowledges the Hungarian National Research, Development and Innovation Office grant OTKA PD-134784.
BB, RKT and ZsB are supported by the ÚNKP-22-2 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund.
APN is supported by the NKFIH/OTKA PD-134434 grant, which is funded by the Hungarian National Development and Innovation Fund. LK, KV, and TS are Bolyai János Research Fellows of the Hungarian Academy of Sciences. KV and TS are supported by the Bolyai+ grants ÚNKP-22-5-ELTE-1093 and ÚNKP-22-5-SZTE-591, respectively. SZ is supported by the National Talent Programme under the NTP-NFTÖ-22-B-0166 Grant. ZMS acknowledges funding from a St Leonards scholarship from the University of St Andrews. ZMS is a member of the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne.
§ OBSERVATIONAL DATA
Log of SN 2021hpr photometry from the Observatory of Baja.
MJD | B | V | g | r | i | z
59312.1 15.312 (0.08) 15.237 (0.06) 15.201 (0.01) 15.201 (0.01) 15.509 (0.01) 15.528 (0.02)
59312.9 15.050 (0.16) 15.022 (0.08) 15.004 (0.01) 15.012 (0.01) 15.324 (0.01) 15.398 (0.02)
59314.8 14.684 (0.06) 14.684 (0.04) 14.671 (0.01) 14.733 (0.02) 14.998 (0.01) 15.070 (0.02)
59329.1 14.418 (0.10) 14.266 (0.05) 14.327 (0.01) 14.321 (0.01) 15.045 (0.01) 15.085 (0.04)
59339.0 15.481 (0.22) 14.762 (0.05) 14.948 (0.03) 14.893 (0.02) 15.484 (0.03) 15.192 (0.10)
59403.8 17.678 (0.34) 17.114 (0.08) 17.331 (0.05) 17.119 (0.03) 17.610 (0.04) 18.012 (0.14)
59407.9 17.700 (0.07) 17.174 (0.03) 17.350 (0.02) 17.242 (0.02) 17.600 (0.03) 17.871 (0.11)
59426.0 18.023 (0.37) 17.675 (0.09) 17.708 (0.07) 17.916 (0.05) 18.222 (0.06) 18.631 (0.22)
59432.8 18.269 (0.95) 17.672 (0.10) 17.789 (0.04) 17.978 (0.98) 18.403 (0.06) 18.720 (0.16)
59446.8 18.362 (0.29) 18.062 (0.08) 18.109 (0.07) 18.398 (0.06) 18.748 (0.16) 18.209 (0.29)
59459.7 18.356 (0.78) 18.313 (0.13) 18.112 (0.05) 18.793 (0.06) 18.989 (0.10) 19.095 (0.30)
59467.8 18.854 (0.15) 18.433 (0.11) 18.349 (0.05) 19.224 (0.08) 19.347 (0.12) 18.937 (0.27)
Log of SN 2021hpr photometry from the Observatory of Piszkesteto.
MJD | B | V | g | r | i | z
59308.01 17.72 (0.10) 16.78 (0.06) 17.06 (0.05) 16.81 (0.04) 17.27 (0.07) 16.90 (0.13)
59308.91 17.00 (0.10) 16.31 (0.04) 16.49 (0.03) 16.36 (0.03) 16.74 (0.03) 16.50 (0.06)
59310.93 15.83 (0.08) 15.45 (0.05) 15.52 (0.02) 15.53 (0.02) 15.87 (0.02) 15.81 (0.04)
59311.90 15.45 (0.09) 15.15 (0.05) 15.27 (0.04) 15.23 (0.03) 15.57 (0.02) 15.61 (0.04)
59312.92 15.19 (0.07) 14.93 (0.04) 14.98 (0.03) 15.00 (0.02) 15.31 (0.02) 15.35 (0.04)
59314.90 14.84 (0.08) 14.62 (0.04) 14.63 (0.02) 14.66 (0.02) 15.00 (0.02) 15.12 (0.04)
59319.97 14.45 (0.09) 14.20 (0.04) 14.24 (0.03) 14.27 (0.03) 14.83 (0.02) 14.89 (0.04)
59320.87 14.42 (0.08) 14.18 (0.04) 14.22 (0.03) 14.24 (0.03) 14.83 (0.02) 14.89 (0.04)
59324.79 14.46 (0.08) 14.13 (0.04) 13.95 (0.09) 14.17 (0.04) 14.90 (0.03) 14.91 (0.05)
59325.88 14.49 (0.20) 14.21 (0.05) 14.29 (0.07) 14.27 (0.02) 14.92 (0.03) 14.97 (0.05)
59328.89 14.61 (0.16) 14.30 (0.05) 14.34 (0.09) 14.30 (0.05) 15.06 (0.04) 15.10 (0.05)
59331.88 14.83 (0.13) 14.35 (0.05) 14.48 (0.05) 14.49 (0.03) 15.22 (0.04) 15.17 (0.05)
59334.84 15.11 (0.08) 14.57 (0.04) 14.68 (0.04) 14.71 (0.03) 15.45 (0.03) 15.16 (0.09)
59335.99 15.28 (0.15) 14.65 (0.06) 14.77 (0.04) 14.80 (0.04) 15.47 (0.03) –
59336.84 15.25 (0.10) 14.69 (0.04) 14.83 (0.04) 14.81 (0.03) 15.47 (0.03) 15.14 (0.08)
59338.85 15.56 (0.09) 14.79 (0.04) 14.98 (0.03) 14.87 (0.03) 15.49 (0.02) 15.23 (0.21)
59341.94 15.90 (0.12) 14.97 (0.04) 15.22 (0.03) 14.93 (0.03) 15.43 (0.02) 15.08 (0.04)
59342.85 16.00 (0.09) 15.01 (0.05) 15.31 (0.03) 14.94 (0.03) 15.41 (0.02) 15.08 (0.05)
59344.92 16.21 (0.08) 15.11 (0.03) 15.48 (0.03) 14.96 (0.03) 15.36 (0.02) 15.05 (0.05)
59345.86 16.28 (0.09) 15.17 (0.03) 15.55 (0.02) 14.98 (0.02) 15.34 (0.02) 15.07 (0.05)
59350.87 16.69 (0.08) 15.44 (0.04) 15.95 (0.03) 15.13 (0.03) 15.32 (0.02) 15.07 (0.05)
59352.83 16.86 (0.13) 15.57 (0.05) 16.09 (0.03) 15.26 (0.02) 15.42 (0.02) 15.11 (0.04)
59354.92 16.98 (0.08) 15.73 (0.04) 16.24 (0.04) 15.41 (0.03) 15.54 (0.03) 15.25 (0.04)
59359.94 17.34 (0.15) 16.04 (0.06) 16.50 (0.07) 15.72 (0.03) 15.90 (0.02) 15.62 (0.05)
59360.88 17.42 (0.16) 16.04 (0.06) 16.49 (0.09) 15.73 (0.04) 15.94 (0.02) 15.67 (0.06)
59365.91 17.46 (0.08) 16.19 (0.06) 16.66 (0.04) 15.99 (0.04) 16.27 (0.04) 15.99 (0.08)
59368.91 17.34 (0.08) 16.28 (0.04) 16.71 (0.04) 16.09 (0.03) 16.36 (0.02) 16.17 (0.06)
59369.84 17.40 (0.16) 16.37 (0.06) 16.78 (0.05) 16.14 (0.05) 16.39 (0.03) 16.17 (0.08)
59370.95 17.51 (0.10) 16.38 (0.05) 16.78 (0.04) 16.18 (0.03) 16.47 (0.03) 16.28 (0.08)
59381.86 17.55 (0.13) 16.67 (0.05) 16.98 (0.05) 16.54 (0.03) 16.92 (0.05) 16.64 (0.07)
59387.90 17.82 (0.14) 16.82 (0.05) 17.23 (0.07) 16.80 (0.04) 17.13 (0.05) 17.14 (0.11)
59387.90 17.82 (0.14) 16.82 (0.05) 17.23 (0.07) 16.80 (0.04) 17.13 (0.05) 17.14 (0.11)
59393.88 17.74 (0.10) 16.97 (0.04) 17.20 (0.03) 16.93 (0.03) 17.37 (0.03) 17.29 (0.11)
59402.86 17.82 (0.11) 17.17 (0.04) 17.38 (0.04) 17.16 (0.02) 17.61 (0.04) 17.85 (0.17)
59407.88 17.95 (0.09) 17.27 (0.04) 17.43 (0.03) 17.36 (0.03) 17.80 (0.03) 17.91 (0.09)
59419.01 18.25 (0.23) 17.58 (0.11) 17.66 (0.12) 17.75 (0.07) 18.06 (0.11) 18.45 (0.35)
59423.97 18.13 (0.09) 17.59 (0.04) 17.75 (0.04) 17.82 (0.04) 18.21 (0.08) 18.58 (0.24)
59429.04 17.95 (0.25) 17.76 (0.10) 18.32 (0.32) 18.08 (0.09) 18.20 (0.32) –
59434.02 18.36 (0.08) 17.84 (0.05) 17.81 (0.04) 18.11 (0.03) 18.60 (0.11) 19.28 (0.62)
Log of Swift UVOT photometry of SN 2021hpr.
MJD | UW2 | UM2 | UW1 | U | B | V
59335.240 18.005 (0.102) 18.741 (0.159) 16.788 (0.059) 15.379 (0.033) 15.062 (0.020) 14.553 (0.026)
59332.190 17.723 (0.084) 18.390 (0.125) 16.354 (0.045) 15.030 (0.027) 14.770 (0.017) 14.401 (0.024)
59331.060 17.875 (0.191) 18.785 (0.358) 16.249 (0.087) 14.944 (0.053) 14.544 (0.033) 14.292 (0.047)
59326.290 17.178 (0.079) 18.192 (0.150) 15.848 (0.047) 14.284 (0.025) 14.293 (0.020) 14.085 (0.028)
59321.590 – – 15.552 (0.026) – – –
59321.570 17.084 (0.062) – – – – –
59318.730 17.014 (0.059) 18.414 (0.154) 15.445 (0.032) 13.882 (0.018) 14.220 (0.016) 14.255 (0.026)
59316.660 17.206 (0.060) 18.509 (0.146) 15.633 (0.030) 14.031 (0.017) 14.326 (0.015) 14.415 (0.025)
59311.890 18.208 (0.260) 19.094 (0.564) 16.925 (0.137) 15.358 (0.066) 15.251 (0.045) 15.184 (0.079)
59309.830 – – 18.046 (0.161) 16.727 (0.063) 16.115 (0.031) 15.754 (0.067)
59308.460 – – 18.651 (0.266) 17.704 (0.128) 17.010 (0.054) 16.438 (0.091)
59307.240 – – 18.983 (0.353) 18.217 (0.195) 17.753 (0.096) 17.503 (0.244)
59306.800 – 18.759 (0.346) 18.180 (0.240) 17.320 (0.161) 18.078 (0.277) –
Log of the spectra; t_exp shows the time since the date of explosion constrained in the abundance tomography (MJD 59 304.0), while the phases are given relative to the maximum in B-band (MJD 59 321.9).
MJD | t_exp [d] | Phase [d] | Telescope/Instrument | Wavelength range [Å]
59307.5 3.5 -14.4 XLT/BFOSC 3700 - 8800
59317.6 13.6 -4.3 XLT/BFOSC 3700 - 8800
59323.9 19.9 +2.0 Smolecin Observatory 3900 - 7100
§ LIGHT CURVE FITS
In the following, the MLCS2k2, SALT2.4 and SALT3 fits for SNe 1997bq, 2008fv and 2021hpr are shown. The distance moduli estimated from these fits are discussed in Sec. <ref>, while the corresponding fitting parameters are listed in Tab. <ref>.
Log of the light curve fits.
SN | SALT3: x_0, x_1, c | SALT2.4: x_0, x_1, c | MLCS2k2: A_host, Δ
SN 1997bq 0.03142 -1.025 0.0696 0.0305 -0.6073 0.1094 0.60 0.00
SN 2008fv 0.0260 0.9040 0.1452 0.02688 0.7690 0.1433 0.90 -0.29
SN 2021hpr 0.04091 -0.044 0.0051 0.03739 0.4483 0.0423 0.45 0.03
http://arxiv.org/abs/2307.00594v1
20230702152848
Quantifying the 'Needle in a Haystack' Problem of Finding Supermassive Black Hole Related Flares in the ZTF Public Survey
Yael Dgany, Iair Arcavi, Lydia Makrygianni, Craig Pellegrino, D. Andrew Howell
astro-ph.HE
astro-ph.HE, astro-ph.GA
Yael Dgany (ORCID: 0000-0002-7579-1105)
The School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
Iair Arcavi (ORCID: 0000-0001-7090-4898)
The School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada
Lydia Makrygianni (ORCID: 0000-0002-7466-4868)
The School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
Craig Pellegrino (ORCID: 0000-0002-7472-1279)
Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA
Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
D. Andrew Howell (ORCID: 0000-0003-4253-656X)
Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA
Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
Corresponding author: Iair Arcavi (arcavi@tauex.tau.ac.il)
Transient accretion events onto supermassive black holes (SMBHs), such as Tidal Disruption Events (TDEs), Bowen Fluorescence Flares (BFFs), and sudden increases of active galactic nucleus (AGN) activity, offer a new window into the SMBH population, accretion physics and stellar dynamics in galaxy centers. However, such transients are rare, and finding them in wide-field transient surveys is challenging. Here we present the results of a systematic real-time search for SMBH-related transients in Zwicky Transient Facility (ZTF) public alerts, using various search queries. We examined 345 rising events coincident with a galaxy nucleus, with no history of previous activity, of which 223 were spectroscopically classified. Of those, 5 (2.2%) were TDEs, 1 (0.5%) was a BFF and 2 (0.9%) were AGN flares. Limiting the search to blue events brighter than magnitude 19, the fraction of TDEs more than doubles to 5.2%. Limiting the search further to candidate post-starburst galaxies increases the relative number of TDEs to 20%, but the absolute numbers in such a search are small. The main contamination source is supernovae (95.1% of the events), of which the majority (82.2% of supernovae) are of Type Ia. In a comparison set of 39 events with limited photometric history, the AGN contamination increases to ∼30%. Host galaxy offset is not a significant discriminant of TDEs in current ZTF data, but might be useful in higher resolution data. Our results can be used to quantify the efficiency of various SMBH-related transient search strategies in optical surveys such as ZTF and LSST.
§ INTRODUCTION
Wide field optical surveys have recently found new types of transients occurring exclusively in galaxy centers. These transients are thought to be associated with enhanced accretion events onto supermassive black holes (SMBHs). As such, they have the potential to reveal the presence and properties of otherwise inactive SMBHs, as well as constrain physics of accretion and related radiative processes. Notable examples of such transients are optical-ultraviolet tidal disruption events (TDEs) and Bowen Fluorescence Flares (BFFs). Both types of events are characterized by a sudden increase of flux by several orders of magnitude, and are thus much more dramatic than the few tens of percent level variability seen in most active galactic nuclei (AGN), which host SMBHs with steadily accreting material.
A TDE is the result of the disruption of a star by a SMBH <cit.>. In such an event, half of the stellar material is expected to accrete onto the SMBH <cit.>. For disruptions occurring outside the event horizon (expected for solar-type stars disrupted by SMBHs of masses ≲10^8 M_⊙, for example), the accretion event will be accompanied by an observable flare. Several such flares have been detected in X-rays, as expected from directly-observed accretion emission (see for a recent review). However, somewhat surprisingly, a class of optical-ultraviolet TDEs has also been discovered <cit.>. These events are mostly blue, with blackbody temperatures of a few 10^4 K lasting for several months to years, and show broad emission lines of H and/or He in their spectra. The emission mechanism leading to these observed properties is a topic of active debate (see and for recent reviews).
In addition to their emission mechanism puzzle, optical-ultraviolet TDEs show a peculiar and strong host-galaxy preference for post-starburst galaxies <cit.>. This preference is not yet fully understood, but might be related to the spatial distribution and dynamics of various stellar populations in the centers of such galaxies <cit.>. Studying optical-ultraviolet TDEs can thus also help shed light on the stellar dynamics in galaxy nuclei, which are responsible for driving up TDE rates in post-starburst environments <cit.>.
Like TDEs, BFFs are also blue and show H and He lines in their spectra, leading <cit.> to classify the first such observed event as a TDE. However, a second <cit.> and then third <cit.> event showed that their typical spectral line widths are much narrower than those of TDEs, and that their light curves decline much more slowly than those of TDEs. This led <cit.> to classify BFFs as a separate observational class. There are now also hints that BFFs occur in previously existing AGN (Makrygianni et al. in prep), meaning that they could be the result of accretion instabilities in an AGN disk, or of a TDE occurring in an AGN and interacting with its existing accretion disk <cit.>.
BFFs are named as such because they exhibit certain emission lines (such as He II 4686Å and N III 4640Å, among others) that are associated with the Bowen Fluorescence mechanism <cit.>. In this mechanism, extreme-ultraviolet and X-ray photons excite certain He II transitions which in turn launch a cascade of transitions observed in the optical and ultraviolet regimes. This process requires the presence of extreme-ultraviolet photons hitting high-density and high-optical-depth material and was indeed predicted decades ago to occur in AGN <cit.>. Since the identification of this mechanism in BFFs, it was also suggested to occur in some TDEs <cit.>, hinting at a possible connection between the conditions of matter and radiation in these two types of events related to SMBH accretion.
It is clear that studying more TDEs and BFFs is necessary in order to better constrain their nature, emission mechanisms, and the physics they can teach us about SMBHs and their associated accretion processes. However, these events are intrinsically rare. The exact TDE rate remains uncertain but is likely to be in the range of 10^-5–10^-4 events per galaxy per year <cit.>. The BFF rate is not yet estimated at all, but observationally, they are less common than TDEs (this could be due in large part to selection effects, as discussed below). In addition to their intrinsic rarity, finding TDEs and BFFs is also observationally challenging. As events that occur in galaxy centers, their detection is contaminated by image subtraction artifacts, “regular” AGN activity, unresolved non-SMBH-related transients, and even variable stars which can not be easily distinguished from distant or compact galaxies. For these reasons, only a few dozen TDEs and a few BFFs have been identified so far.
Attempts have been made to devise selection criteria to weed out such transients from the large alert streams produced by wide-field transient surveys. Such criteria typically include selecting candidates by the significance of the flare (since TDEs and BFFs are luminous), color (since TDEs and BFFs are blue), and host properties (since TDE hosts are mostly quiescent).
<cit.> search a set of 493 nuclear transients (within 0.8 arcsec of their host galaxy center) from the intermediate Palomar Transient Factory, for events with g-R<0 mag residing in galaxies with u-g>1 mag and g-r>0.5 mag. These cuts reduce the set of candidates to just 26, of which 2 are TDEs. Still, the contamination fraction is large. A substantial amount of telescope time is required to vet 13 candidates (through spectroscopy or ultraviolet colors) for each bona fide TDE.
One way to further reduce the amount of transient contamination in TDE searches, is to focus the search on galaxies most similar to the post-starburst hosts that TDEs seem to prefer. <cit.> use Sloan Digital Sky Survey <cit.> Data-Release 12 main galaxy survey <cit.> galaxies, with similar spectral properties as those of actual TDE hosts, to train a machine-learning algorithm to identify such galaxies from photometry alone. They then use this algorithm to identify several tens of thousands of TDE host-galaxy candidates in archival survey data. <cit.> find that indeed using the <cit.> catalog of galaxies reduces contamination by roughly a factor of 3-50 (depending on the subset of galaxies used from the catalog) compared to filtering just on quiescent galaxies, and that the only contaminant transients in such galaxies are Type Ia SNe. That study, however, was based on archival data alone.
Here we perform a systematic real-time search for TDEs, BFFs and other possible SMBH flares in the Zwicky Transient Facility <cit.> alert stream, as parsed by the Lasair broker <cit.>. We use various search criteria that rely on candidate brightness and color and their host-galaxy properties, and compare their effectiveness in selecting TDEs and BFFs against actual spectroscopic classifications obtained by us and by the rest of the community. We focus on rising events (i.e. events discovered before peak), selected using visual inspection, for two main reasons: First, such events are more scientifically valuable, as they include the peak time and brightness, as well as the early pre-peak emission, both of which contain important information for constraining models of SNe and TDEs. In addition, rising events present a way to decrease the number of events to a more manageable sub-sample for spectroscopic classification while avoiding biasing the sample towards a particular class (the selection of rising events is done before their classifications are known)[One exception is that we would be less likely to catch rapidly rising events during their rise compared to slower rising events, however the typical ZTF public survey cadence is ≲3 days, which should be enough to catch most rapidly evolving transients <cit.>.].
Our goal is to quantify the contamination fraction for the various search criteria, and to check whether any of them miss TDEs and BFFs. Here, we do not constrain the intrinsic rates of SNe, TDEs or BFFs in nature, but rather the observed fraction of events to help guide searches in ongoing and future transient surveys, and to help prioritize limited spectroscopic classification resources. We detail our search criteria in Section <ref>, present and analyze our results in Section <ref>, and discuss them and conclude in Section <ref>.
§ METHODS
We searched the ZTF real-time alert stream for transients in galaxy centers, every day between 2020 Nov 3 and 2022 March 6 (UT dates), with the exception of a ∼2-month break due to a ZTF technical outage between 2021 December 5 and 2022 February 17. In total, our search includes alerts from 414 days. We used the custom query builder on version 1.0 of the Lasair broker[<https://lasair.lsst.ac.uk>] to filter the alerts. Lasair uses a contextual classifier called Sherlock[<https://lasair.readthedocs.io/en/develop/core_functions/sherlock.html>]. Sherlock is a boosted decision tree algorithm which provides an initial classification of every non-moving object by performing a spatial cross-match against data from historical and ongoing astronomical surveys, including catalogs of nearby galaxies, variable stars and AGN (see Section 4.2 of for more details).
Our queries, which are based on the TDE queries by M. Nicholl on version 1.0 of Lasair[ <https://lasair.roe.ac.uk/filters/94/> and <https://lasair.roe.ac.uk/filters/95/>], filter ZTF alerts according to the following criteria (for each we state the corresponding Lasair query condition):
* The candidate is within a certain threshold distance of the nearest Sherlock catalog source. For 80% of our sample, we choose a threshold of 0.5 arcsec[This value was chosen given that the ZTF pixel scale of 1 arcsec per pixel results in a typical centroiding accuracy of ≲0.3 arcsec, which we increase to 0.5 arcsec to be inclusive]. For the rest, we increased the separation threshold to 1 arcsec to check if this has a strong effect on the results:
sherlock_classifications.separationArcsec < 0.5
or
sherlock_classifications.separationArcsec < 1
We found that the value of the threshold has no significant effect on the results (see Appendix), and therefore analyze the joint sample of both separation thresholds together to increase our sample size. This condition, regardless of separation threshold, filters out “hostless” events, i.e. with no host in the Sherlock catalog.
* The nearest catalog source is likely a galaxy rather than a star[sgscore1 is based on a Random Forest classifier trained and implemented into the ZTF alerts by <cit.>. An sgscore value closer to 0 means the nearest source in the Panoramic Survey Telescope and Rapid Response System <cit.> first survey <cit.> catalog is more likely a galaxy, while a value closer to 1 means it is more likely a star.]:
objects.sgscore1 < 0.5
* The Sherlock classification of the candidate is either “SN” (Supernova) or “NT” (Nuclear Transient)[The other Sherlock classifications, which we exclude from our search, are: “VS” (Variable Star), “CV” (Cataclysmic Variable), “BS” (Bright Star), “AGN”, “Orphan” (if the transient fails to be matched against any catalogued source) and “Unclear”.]:
sherlock_classifications.classification in (`SN',`NT')
* The candidate does not have detections more than 100 days ago (indicating it might be a variable, rather than a transient source, though some past detections could be artifacts)[A ZTF alert is reported to the brokers with a 30-day history, which may contain pre-discovery detections. Lasair marks the time of the first detection in this 30-day history as jdmin.]:
objects.jdmin > JDNOW()-100
* The candidate does not have a ZTF17 or ZTF19 name, meaning that it was not created by ZTF in 2017 or 2019 (this is another way to filter out variable sources)[Sporadic false detections in galaxy centers may occur, sometimes years before a real event occurs at the same position, and based on false detections that are later filtered out by the brokers. In such a case the real event would have an old name from when the false detection occurred years before. Removing such events might thus undesirably filter out interesting candidates. In order to avoid losing many candidates while still not being inundated with variable sources, we decided to allow events with ZTF18 names (several bad subtractions in 2018 caused false events then; E. Bellm, private communication) while removing those with ZTF17 and ZTF19 names.]:
objects.objectId NOT LIKE `ZTF17%' AND objects.objectId NOT LIKE `ZTF19%'
* The candidate is not a previously classified SN:
crossmatch_tns.tns_prefix != `SN'
* The candidate has <3 of its detections deemed unreliable (i.e. which are not marked as good quality and/or the candidate is dimmer than the reference)[ncand is the total number of detections from ZTF, which can be either positive or negative subtraction residuals (i.e. a brightening or fading with respect to the reference image). ncandgp counts only `good and positive' detections, i.e. a positive flux with respect to the reference and having a ZTF machine-learning real-bogus score >0.75. This criterion requires that most detections are good and positive but allows for one or two light curve points with poor real-bogus scores if, for example, the transient was detected very young and the subtraction residuals at the earliest epochs have a low real-bogus score due to a relatively low signal to noise ratio].:
objects.ncand - objects.ncandgp < 3
* At least one of those detections was no more than 14 days ago (in order to avoid old objects which might already be fading):
objects.ncandgp_14 > 1
* The candidate is more than 10 degrees away from the Galactic plane (in order to filter stellar flares or variability):
objects.glatmean > 10 OR objects.glatmean < - 10
Conditions 4 and 5 could introduce a bias against finding BFFs <cit.> and TDEs occurring in AGN. However, these conditions are necessary in order to remove “normal” AGN activity which can otherwise be a major contaminant (see below).
In addition to conditions 1–9, we create two variations of the query, each with a different magnitude limit:
* The latest g- or r-band magnitude of the candidate is brighter than 19:
objects.rmag < 19 OR objects.gmag < 19
* The latest g- or r-band magnitude of the candidate is brighter than 19.5:
objects.rmag < 19.5 OR objects.gmag < 19.5
The motivation for these variations is that several spectrographs on dynamically scheduled telescopes (which are ideal for rapid classification of transients), such as the Floyds spectrographs <cit.> on the Las Cumbres Observatory Faulkes Telescopes North (FTN) and South (FTS), and the SPectrograph for the Rapid Acquisition of Transients <cit.> on the Liverpool Telescope, are on 2-meter class telescopes, with a typical limiting magnitude of 19. The advanced extended Public European Southern Observatory (ESO) Spectroscopic Survey for Transient Objects <cit.>, responsible for a large number of transient classifications, uses the ESO Faint Object Spectrograph and Camera v2 <cit.> on the 3.6-meter New Technology Telescope (NTT) to reach a magnitude of 19.5.
For each of these variations, we create three sub-variations: one without any additional conditions, one with an additional condition on the color of the event:
* The candidate has a g-r magnitude difference <0.05 (in order to select only blue events, since TDEs are observed to be blue; see e.g. )[The g-r color is evaluated on the most recent night with positive detections (relative to the reference) in both bands. We choose a threshold of 0.05 to be slightly more conservative than the threshold of 0 suggested by <cit.>. We do not take into account magnitude errors here.]:
objects.g_minus_r < 0.05
and finally, one sub-variation which searches for candidates coincident with galaxies from the <cit.> catalog of likely TDE hosts <cit.>, as implemented in the “E+A Galaxies” watchlist[<https://lasair.roe.ac.uk/watchlist/321/>] on Lasair.
In total we have six queries, which we hereby number as follows:
* Conditions 1–9, with a limiting magnitude <19 (Condition 10a), blue (Condition 11) and in a PS galaxy.
* Conditions 1–9, with a limiting magnitude <19 (Condition 10a), and blue (Condition 11).
* Conditions 1–9, with a limiting magnitude <19 (Condition 10a).
* Conditions 1–9, with a limiting magnitude <19.5 (Condition 10b), blue (Condition 11) and in a PS galaxy.
* Conditions 1–9, with a limiting magnitude <19.5 (Condition 10b), and blue (Condition 11).
* Conditions 1–9, with a limiting magnitude <19.5 (Condition 10b).
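For concreteness, the individual conditions can be chained into a single Lasair filter. The sketch below corresponds to Query II (Conditions 1-9 with the 0.5 arcsec separation threshold, the <19 mag limit of Condition 10a, and the blue cut of Condition 11); it simply conjoins the expressions already listed above, written in the same SQL-like syntax, and is not a verbatim copy of the filters actually deployed.

sherlock_classifications.separationArcsec < 0.5
AND objects.sgscore1 < 0.5
AND sherlock_classifications.classification IN ('SN', 'NT')
AND objects.jdmin > JDNOW() - 100
AND objects.objectId NOT LIKE 'ZTF17%' AND objects.objectId NOT LIKE 'ZTF19%'
AND crossmatch_tns.tns_prefix != 'SN'
AND objects.ncand - objects.ncandgp < 3
AND objects.ncandgp_14 > 1
AND (objects.glatmean > 10 OR objects.glatmean < -10)
AND (objects.rmag < 19 OR objects.gmag < 19)
AND objects.g_minus_r < 0.05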
Obviously, these are not independent, with some queries being subsets of others, and all being subsets of Query VI. These queries produced roughly 30 new candidates per day in total, which we inspected manually. Only those showing a coherently rising light curve were marked as candidates of interest. Candidates not obviously rising at discovery were monitored for an extra epoch of ZTF photometry and checked again. Candidates for which it was still not clear whether they were rising or not were monitored for another week. This step removed events which had a flat, varying, or incoherent light curve, which could be due to “normal” AGN variability, stellar variability of Galactic objects, or artifacts of the ZTF image subtraction pipeline. In addition, this removed true transients which were already past their peak luminosity, and which are not part of our sample as defined here. Of all our filtering steps, this is the most subjective, as it requires visual inspection rather than some strict criterion for what constitutes a “coherently rising” light curve. However, by checking each candidate during multiple epochs, we aim to make this step as inclusive as possible. In addition, since this step is performed before the classification of the candidate is known, it should not bias the search against a particular type of transient (except extremely rapidly rising events, with rise times ≲3 days).
After these cuts we were left with a total of 345 candidates of interest (from our entire 414-day search, i.e. ∼0.83 candidates of interest per day, on average), which we attempted to classify spectroscopically within a few days of discovery.
Version 3.0 of Lasair (also known as “Iris”) was released in 2021 March. To improve performance, not all information that was available in version 1.0 (such as the full detections histories of all candidate events) was carried over to version 3.0. To check for any differences in query results we add four more queries which we run on Lasair 3.0 (Iris) between 2022 April 6, and 2022 Aug 2 (for a total of 118 days):
* Conditions 1–9, with a limiting magnitude <19 (Condition 10a).
* Conditions 1–9, with a limiting magnitude <19.5 (Condition 10b).
* Conditions 1–9, with a limiting magnitude <19 (Condition 10a), and in a PS galaxy.
* Conditions 1–9, with a limiting magnitude <19.5 (Condition 10b), and in a PS galaxy.
Color information was not available as a query parameter in Iris, therefore here we can not filter by condition 11. We perform the same manual cuts as above and are left with 39 events, which is an average of 0.33 candidates per day. Version 4.0 of Lasair was released in 2022 May, but we do not test it here.
We obtain a total of 345 candidates of interest from Lasair 1.0 (310 of which when using a separation threshold of 0.5 arcsec in Criterion 1, and the rest when using a separation threshold of 1 arcsec) and 39 candidates of interest from Lasair 3.0 (all of which when using a separation threshold of 1 arcsec).
For all those brighter than 19th magnitude, we requested spectra through the Las Cumbres Observatory Floyds spectrographs mounted on the 2-meter FTN and FTS telescopes at Haleakala (United States) and Siding Spring (Australia) observatories, respectively. Weather, technical issues, and over-subscription of the telescopes mean that not all requested spectra were obtained, or that some were obtained first by the community and reported to TNS. We were able to obtain spectra of 83 candidates of interest, taken through a 2 arcsec slit placed on the candidate along the parallactic angle <cit.>. One-dimensional spectra were extracted, and flux and wavelength calibrated using the floyds_pipeline code[<https://github.com/LCOGT/floyds_pipeline>] <cit.>. Fainter targets, accessible from La Silla Observatory, were sent for consideration to the ePESSTO+ collaboration, for classification with the NTT.
All of our classification spectra, as well as those obtained by the ePESSTO+ collaboration, were publicly reported to the Transient Name Server (TNS)[<http://www.wis-tns.org>]. Many of our candidates of interest were classified by other members of the community, and also reported to the TNS. In total, 246 of our 384 candidates of interest (64.1%) were classified on the TNS. We take these classifications as reported to the TNS and analyze their distribution in the next section.
§ RESULTS AND ANALYSIS
Numbers and fractions of the classes of candidates of interest from the different queries.
Query | Total Transients | Not Classified | SN | AGN | TDE | Other | Galaxy | BFF | Varstar
Lasair 1.0
I: <19 Mag, Blue and in PS 6 1 4 0 1 0 0 0 0
Percentage of All Transients 16.67% 66.67% 0 16.67% 0 0 0 0
Percentage of Classified Transients 80.00% 0 20.00% 0 0 0 0
II: <19 Mag and Blue 116 19 91 0 5 0 1 0 0
Percentage of All Transients 16.38% 78.45% 0 4.31% 0 0.86% 0 0
Percentage of Classified Transients 93.81% 0 5.15% 0 1.03% 0 0
III: <19 Mag 213 41 163 2 5 1 1 0 0
Percentage of All Transients 19.25% 76.53% 0.94% 2.35% 0.47% 0.47% 0 0
Percentage of Classified Transients 94.77% 1.16% 2.91% 0.58% 0.58% 0 0
IV: <19.5 Mag, Blue and in PS 9 3 5 0 1 0 0 0 0
Percentage of All Transients 33.33% 55.56% 0 11.11% 0 0 0 0
Percentage of Classified Transients 83.33% 0 16.67% 0 0 0 0
V: <19.5 Mag and Blue 193 71 116 0 5 0 1 0 0
Percentage of All Transients 36.79% 60.10% 0 2.59% 0 0.52% 0 0
Percentage of Classified Transients 95.08% 0 4.10% 0 0.82% 0 0
VI: <19.5 Mag 345 121 213 2 5 1 1 1 1
Percentage of All Transients 35.07% 61.74% 0.58% 1.45% 0.29% 0.29% 0.29% 0.29%
Percentage of Classified Transients 95.09% 0.89% 2.23% 0.45% 0.45% 0.45% 0.45%
Lasair 3.0 (Iris)
VII: <19 Mag, in Iris 33 13 14 6 0 0 0 0 0
Percentage of All Transients 39.39% 42.42% 18.18% 0 0 0 0 0
Percentage of Classified Transients 70.00% 30.00% 0 0 0 0 0
VIII: <19.5 Mag, in Iris 39 17 16 6 0 0 0 0 0
Percentage of All Transients 43.59% 41.03% 15.38% 0 0 0 0 0
Percentage of Classified Transients 72.73% 27.27% 0 0 0 0 0
IX: <19 Mag, in Iris and in PS 1 0 0 1 0 0 0 0 0
Percentage of All Transients 0 0 100.00% 0 0 0 0 0
Percentage of Classified Transients 0 100.00% 0 0 0 0 0
X: <19.5 Mag, in Iris and in PS 2 1 0 1 0 0 0 0 0
Percentage of All Transients 50.00% 0 50.00% 0 0 0 0 0
Percentage of Classified Transients 0 100.00% 0 0 0 0 0
The full list of our candidates of interest can be found in Table <ref> in the Appendix. The redshift distribution of all classified transients with a determined redshift on the TNS (244 events) is presented in Figure <ref>[One classified transient, AT 2022amc, has no redshift determination since its spectrum consists of a blue continuum with no clearly identifiable lines]. While our queries can in principle find TDEs out to a redshift of z∼0.16 <cit.>, the median redshift of our classified candidates of interest is z=0.069, and all but one of the TDEs are at redshifts z<0.04 (the more distant TDE, AT 2022csn at a redshift of z=0.148, is also more luminous than typical TDEs; Dgany et al. in prep.). The reason that most classified events are much closer than our redshift limit is likely because nearby events are typically prioritized for spectroscopic classification over more distant events. At the median redshift, our angular cut of 0.5 arcsec from the galaxy nucleus corresponds to a physical cut of ∼0.66 kpc <cit.>.
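The quoted physical scale can be reproduced with a short astropy sketch (assuming the built-in Planck 2018 cosmology, which need not be the exact cosmology used for the value quoted above):

from astropy.cosmology import Planck18
import astropy.units as u

z_median = 0.069
scale = Planck18.kpc_proper_per_arcmin(z_median).to(u.kpc / u.arcsec)
offset_kpc = (0.5 * u.arcsec * scale).to(u.kpc)
print(f"0.5 arcsec at z = {z_median} corresponds to {offset_kpc:.2f}")   # ~0.7 kpc, close to the ~0.66 kpc above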
The distribution of classifications of our candidates of interest, per query, are listed in Table <ref> and presented in Figures <ref> and <ref>. In the interest of simplicity, we consolidate the various supernova (SN) classifications into one category which we name “SN”. These include SNe of undetermined type, SNe I of undetermined sub-type, SNe Ia and its various sub-types, as well as SNe Ib, Ic, Ic-BL, II, IIn, IIb, and superluminous SNe (SLSN) of Types I and II. A breakdown of number of events per SN type is available in Table <ref> in the Appendix.
For all of our events of interest in Lasair 1.0 (Query VI), we find that the vast majority (95.09%) of classified events are SNe, 2.23% (five events) are TDEs, 0.45% are BFFs and 0.89% are flaring AGN[Here, the term “flaring AGN” refers to events with coherent brightening episodes much stronger than any typical variability seen in their historical light curves. Specifically, the flares seen here rose by 0.25 magnitudes on average in one week, which is much higher than normal AGN variability <cit.> ]. The remaining 1.34% consist of one variable star, one event classified as “Galaxy” (which means it was either an artifact or it faded before the spectrum was obtained) and one classified as “Other”. The “Other” event is AT 2022amc, which displays a featureless blue continuum. This could have been a young core collapse SN, or some other hot flare including a SMBH-related one, such as a TDE or BFF. Unfortunately, no followup spectra were posted to TNS or, to our knowledge, published elsewhere, so its nature remains undetermined.
The five TDEs are AT 2020vwl <cit.>, AT 2021ehb <cit.>, AT 2022bdw <cit.>, AT 2022csn <cit.> and AT 2022dbl <cit.>. AT 2020vwl was classified as a “TDE H+He” <cit.> by the ZTF group with a spectrum obtained from the Spectral Energy Distribution Machine <cit.> on the Palomar 60-inch telescope <cit.>. Despite the low spectral resolution, broad He II and Hα can be clearly identified, on top of a blue continuum, making the TDE classification secure. AT 2021ehb was also classified by the ZTF team using a SEDM spectrum, however that spectrum is not publicly available on the TNS. Followup spectra which are available on the TNS do not show clear TDE signatures, but X-ray detections <cit.> make the TDE classification likely. AT 2022bdw, AT 2022csn and AT 2022dbl were classified by the effort presented here, using the Las Cumbres Observatory Floyds spectrographs, based on broad H and He II spectral features on top of a blue continuum. AT 2022csn was initially classified by the ePESSTO+ collaboration as a Type I SLSN <cit.> but later re-classified by us as a TDE after the emergence of TDE spectral features. AT 2022dbl is also listed on the TNS as AT 2018mac due to a sporadic detection in 2018 at a similar position, likely resulting from an image subtraction artifact. This is the only TDE out of the five found in a PS host from the <cit.> catalog. We conclude that all five TDE classifications are secure, with the possible exception of AT 2021ehb, since its classification spectrum is not available on the TNS.
The BFF is AT 2021seu <cit.>, also classified by this effort using Floyds <cit.>. The classification is based on a possible N III / He II emission complex on top of a blue continuum, not seen in an archival SDSS spectrum at that position <cit.>, and resembling the spectra of BFFs in <cit.>.
All five TDEs pass the “blue criterion” (Criterion 11) and were found by Query V, but the number of SNe passing this criterion is much smaller, nearly doubling the percentage of TDEs among classified blue transients. Limiting the search to candidates that are both blue and brighter than magnitude 19 at discovery (Query II), keeps all TDEs and further increases their percentage to 5.15%. Looking at events in PS galaxies (Query IV)[Here, we study all events that were both in a PS host and blue. There were two more transients in PS hosts that were not blue, ZTF20acselme (a Type Ia SN) and ZTF22aabsemf (an unclassified event).], only 1 of the 5 TDEs remains, but it is 1 of 6 (16.67%) classified transients there. This is consistent (to 1.2σ) with the finding of <cit.> that 10.0%±5.5% of classified transients in PS galaxies should be TDEs.
For all of our events of interest in Lasair 3.0 (Query VIII), SNe are still the majority of classified events (72.73%), with the rest all flaring AGN.
The fraction of flaring AGN in Lasair 3.0 is thus 30 times larger than in Lasair 1.0. Part of this difference is likely explained by the fact that, at least initially, Lasair 3.0 did not provide the full multi-year light curve history of each candidate. This precluded filtering most AGN by their historical activity.
In order to quantify the significance of the difference in fractions between queries, we calculate their confidence bounds using the Clopper-Pearson method <cit.>. <cit.> discusses how this method, which uses binomial statistics to estimate lower and upper confidence bounds for ratios, is especially useful for ratios of different event types, when the numbers of observed events are small. The 1σ confidence bounds calculated with this method (and used hereafter) are shown in Table <ref>.
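A minimal sketch of the 1σ Clopper-Pearson interval, written in the standard beta-quantile form; the example counts (5 TDEs among the 224 classified Query VI events) follow from Table <ref>, while the cited works may quote the interval in a slightly different (e.g. symmetrised) form.

from scipy.stats import beta

def clopper_pearson(k, n, cl=0.6827):
    """Exact (Clopper-Pearson) binomial confidence interval for a fraction k/n."""
    alpha = 1.0 - cl
    lo = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 5, 224   # TDEs among the classified Query VI events
lo, hi = clopper_pearson(k, n)
print(f"TDE fraction = {k/n:.2%}, 1-sigma interval {lo:.2%} - {hi:.2%}")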
The fraction of TDEs in our global Lasair 1.0 query (Query VI) is 2.23%±0.98% and that of BFFs is 0.45%±0.44%. Requiring candidates to be blue (Query V) increases the TDE fraction by a factor of 1.84±1.11. Adding the requirement for a PS host (Query IV) increases the TDE fraction by a factor of 7.47±7.53 compared to the global query (Query VI). Without the full light curve history of Lasair 3.0, the fraction of AGN there increases by a factor of 30.55±23.86 in Query VIII compared to Query VI.
Fractions of classified candidates of interest from the different queries with 1σ Clopper-Pearson confidence bounds.
Query | SN | AGN | TDE | Other | Galaxy | BFF | Varstar
I 80.00%±17.79% 0 20.00%±17.79% 0 0 0 0
II 93.81%±2.43% 0 5.15%±2.23% 0 1.03%±1.02% 0 0
III 94.77%±1.69% 1.16%±0.81% 2.91%±1.27% 0.58%±0.58% 0.58%±0.58% 0 0
IV 83.33%±15.13% 0 16.67%±15.13% 0 0 0 0
V 95.08%±1.95% 0 4.10%±1.78% 0 0.82%±0.81% 0 0
VI 95.09%±1.44% 0.89%±0.63% 2.23%±0.98% 0.45%±0.44% 0.45%±0.44% 0.45%±0.44% 0.45%±0.44%
VII 70.00%±10.19% 30.00%±10.19% 0 0 0 0 0
VIII 72.73%±9.44% 27.27%±9.44% 0 0 0 0 0
IX 0 100.00%±0.00% 0 0 0 0 0
X 0 100.00%±0.00% 0 0 0 0 0
We wish to check whether the offset of a source from its host-galaxy center can be used as a way to better select for TDEs and BFFs. To do that, we retrieved the distnr parameter of each detection of each SN, TDE and the BFF in our sample using the Automatic Learning for the Rapid Classification of Events (ALeRCE) broker[<https://alerce.science/>] <cit.>. The distnr parameter is provided with each detection of a source in the ZTF alert packets[<https://zwickytransientfacility.github.io/ztf-avro-alert/schema.html>]. It denotes the distance of that detection to the nearest source in the reference-image PSF catalog (within 30 arcsec), in units of pixels (which is equal to units of arcsec, since the ZTF pixel scale is 1 pixel per arcsec). We plot the distribution of this value in Figure <ref>. TDEs show a slightly lower average distnr value than that of SNe, however the difference is much smaller than the spread of values of each type of event, and hence not significant. Our one BFF actually shows a larger average distnr value than that of SNe. However, this is based on detections of a single event, and could be driven by the centroid measurement of its particular host galaxy in ZTF. We conclude that there is no significant difference in distnr values between SNe, TDEs and BFFs in ZTF, and therefore that this parameter is not a good discriminant.
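As a sketch of this comparison (assuming the per-detection distnr values have already been downloaded, e.g. through the ALeRCE client, and using a two-sample Kolmogorov-Smirnov test as one possible significance measure, which is our choice here rather than a procedure stated in the text):

import numpy as np
from scipy.stats import ks_2samp

def mean_offset_per_event(distnr_by_event):
    """Average the per-detection distnr values (arcsec) of each event."""
    return np.array([np.mean(vals) for vals in distnr_by_event.values()])

# Placeholder per-event detection offsets in arcsec (invented values)
sn_distnr  = {"SN_1": [0.21, 0.35, 0.18], "SN_2": [0.42, 0.30], "SN_3": [0.15, 0.22, 0.27]}
tde_distnr = {"TDE_1": [0.12, 0.20], "TDE_2": [0.25, 0.31, 0.19]}

sn_means  = mean_offset_per_event(sn_distnr)
tde_means = mean_offset_per_event(tde_distnr)
stat, pval = ks_2samp(sn_means, tde_means)
print(f"mean distnr: SNe {sn_means.mean():.2f} arcsec, TDEs {tde_means.mean():.2f} arcsec, KS p = {pval:.2f}")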
There are 24 additional events classified as TDEs on the TNS, from the time period of our search, which do not appear here. Of those, 16 have robust TDE classifications (i.e. at least one public spectrum showing clear broad He II and/or Hα emission, and blue colors). The rest have either no public spectrum available, very noisy spectra, or show no clear spectral features. Of the 16 robust TDEs, most (11) were not identified here because they were deemed not to have a coherent rising light curve at discovery. Three events were missed due to a bug in the queries (which was later fixed), and two did not pass Criterion 7, regarding the number of unreliable detections (one required increasing the number of unreliable detections from <3 to <5, and one required it be increased to <21). However, relaxing Criterion 7 would have likely also led to an increase in contaminant candidates.
§ DISCUSSION AND CONCLUSIONS
Our results quantify the “needle in a haystack” problem of finding TDEs and BFFs in wide-field transient surveys. We find that photometric history of candidates is crucial for removing most AGN contamination. Even so, roughly 1 in 35–45 events is a TDE, and 1 in 170–220 a BFF. This sets a significant challenge for identifying these events in current transient surveys, and for identifying even a small subset of the thousands of TDEs expected to be discovered by the Legacy Survey of Space and Time <cit.> every year <cit.>.
The fraction of TDEs increases by almost a factor of 2 to roughly 1 in 20–25 events when selecting only blue (g-r<0.05) transients, and by another factor of ∼3 to roughly 1 in 5–6 events when selecting probable TDE host galaxies. However, such galaxies are rare, making the total number of TDEs discoverable this way, small. Given the huge increase in expected TDE discovery fractions, though, it would be beneficial to update the <cit.> galaxy catalog and to expand its coverage using new spectroscopic and photometric surveys such as SDSS-V <cit.>.
An additional ∼50% TDEs are found when relaxing Criterion 7 to allow events with a smaller number of reliable detections, and roughly three times as many TDEs are found when relaxing the condition that the event be rising in brightness at discovery. However, the number of contaminants that relaxing such criteria adds is significant. For example, changing the threshold of Criterion 7 from <3 to <5 (which would have added one TDE to the sample) increased the number of daily candidates by a factor of 2–3, and changing it to <21 (which would have added a second TDE to the sample), increased the number of daily candidates by an order of magnitude.
We find no significant difference between the offsets of TDEs vs. nuclear SNe from their host galaxy centers. Therefore, this parameter can not be used as a discriminant for selecting more likely TDEs, at least not in ZTF, as quantified by the distnr parameter. LSST, with its higher spatial resolution, might be able to make host nucleus offsets a more viable distinguishing parameter.
An additional possible discriminant for selecting TDEs, but which was not tested here, is their ultraviolet to optical colors <cit.>. Obtaining ultraviolet photometry rapidly and for many targets is currently possible almost exclusively with the Neil Gehrels Swift Observatory <cit.> Ultraviolet/Optical Telescope <cit.>. Indeed, <cit.> showed that selecting transients by a combination of their ultraviolet to optical colors using Swift and the optical colors of their host galaxies, increased the fraction of TDEs to 1 in 4.5. However, Swift is limited in the number of transients it can vet. The upcoming Ultraviolet Transient Astronomy Satellite <cit.>, with its wide-field ultraviolet imager, is expected to obtain ultraviolet photometry for thousands of TDEs. This will be an excellent way to discriminate TDEs from other transients without the need for substantial classification resources.
Another approach is to train machine learning algorithms to classify transients from photometry alone. This has been done for distinguishing between some SN types <cit.>. Until recently, the number of observed TDEs has been too small to be used for classification training, leading <cit.> to train their algorithm on simulated TDE light curves. Today, with TDE light curves available for dozens of events, it might be possible to effectively train a machine learning algorithm to distinguish TDE light curves from those of other SNe. How effectively this can be done, and at what phase of the light curve a robust classification can be obtained (if at all), remains to be tested.
Of course, any such filtering based on photometric properties of the transient or characteristics of its host galaxy can also bias population studies of TDEs and BFFs. A specific population that almost all current searches (including ours) are biased against is that of slowly evolving transients with years-long evolution. Such events are already suppressed by the alert mechanisms of most transient surveys, even before reaching the brokers. ZTF alerts, for example, are generated by comparing a new image to a reference image taken up to a few months – years earlier. Therefore, events that rise on a time scale of several years, will not be much brighter in the new image compared to the reference, and thus an alert might never be issued. Since the set of images used as references is updated from time to time, such transients could remain hidden during the lifetime of the surveys. Indeed, when <cit.> compared images from PS1 to images obtained a decade earlier by SDSS, they found a population of slowly rising nuclear transients. This population is not seen in current transient surveys, which are optimized to find transients that change on shorter time scales.
It will thus continue to be challenging to find TDEs and BFFs in optical transient surveys in an unbiased way, even for events evolving on time scales of days to months. One way forward is to use some combination of well-defined photometric and host-galaxy filters such as those used here. However, making searches as complete as possible will still require ample spectroscopic resources for vetting large numbers of SMBH-related transient candidates.
We thank B. Trakhtenbrot for providing some of the spectroscopy time on the Las Cumbres network used to classify candidates identified here, and A. Lawrence, K. Smith, R. Williams and D. Young for assistance with using Lasair and for helpful comments. We also thank M. Nicholl for implementing many of the queries used here on Lasair, as well as the <cit.> galaxy catalog as a watchlist there, and A. Riba for developing a Target and Observation Manager used to manually inspect candidates and schedule followup observations. We are grateful to B. Zackay for helpful comments regarding host offsets.
Y.D., I.A. and L.M. acknowledge support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 852097).
I.A. is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe Program and acknowledges support from that program, from the Israel Science Foundation (grant number 2752/19), from the United States – Israel Binational Science Foundation (BSF; grant number 2018166), and from the Israeli Council for Higher Education Alon Fellowship.
This work makes use of observations with the Las Cumbres Observatory global telescope network. The Las Cumbres Observatory group is supported by NSF grants AST-1911151 and AST-1911225, and BSF grant 2018166.
§ FRACTIONS OF SN SUBTYPES
We list the division of SN types found by each query in Table <ref>. The majority of contaminants across all queries are Type Ia SNe, with the next likely contaminant being Type II SNe (including their subvariants IIb and IIn). The ZTF Bright Transient Survey <cit.> aims to classify all transients brighter than certain magnitude cuts (similar to our Query III), but, unlike this work, does not focus on galaxy nuclei. Their Type Ia SN fraction is lower than ours (∼73% vs. ∼82% here) and Type II SN fraction is higher (∼20% vs. ∼10% here)[<https://sites.astro.caltech.edu/ztf/bts/bts.php>]. Since Type Ia and Type II SNe have similar radial distributions in their host galaxies <cit.>, this difference might be a selection effect whereby Type Ia SNe are preferentially detected in this work compared to the BTS since they are more luminous than Type II events and stand out more clearly in bright galaxy nuclei.
As found previously by <cit.>, Type Ia SNe are the only contaminant to TDEs and BFFs in post-starburst galaxies (unless AGN can not be fully filtered out, as in the case of the Lasair 3.0 queries).
Internal division of SN types from the different queries.

| Query | Total SNe | SN Ia | SN Ib | SN Ic | SN Ic-BL | SN I | SN II | SN IIb | SN IIn | SN | SLSN |
| I | 4 | 4 (100.0%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| II | 91 | 76 (83.5%) | 1 (1.1%) | 0 | 2 (2.2%) | 0 | 10 (11.0%) | 0 | 2 (2.2%) | 0 | 0 |
| III | 163 | 134 (82.2%) | 2 (1.2%) | 2 (1.2%) | 4 (2.5%) | 0 | 17 (10.4%) | 0 | 3 (1.8%) | 1 (0.6%) | 0 |
| IV | 5 | 5 (100.0%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| V | 116 | 97 (83.6%) | 1 (0.9%) | 0 | 2 (1.7%) | 0 | 13 (11.2%) | 0 | 2 (1.7%) | 0 | 1 (0.9%) |
| VI | 213 | 175 (82.2%) | 2 (0.9%) | 3 (1.4%) | 4 (1.9%) | 1 (0.5%) | 21 (9.9%) | 1 (0.5%) | 3 (1.4%) | 1 (0.5%) | 2 (0.9%) |
| VII | 14 | 8 (57.1%) | 0 | 0 | 0 | 0 | 4 (28.6%) | 0 | 2 (14.3%) | 0 | 0 |
| VIII | 16 | 10 (62.5%) | 0 | 0 | 0 | 0 | 4 (25.0%) | 0 | 2 (12.5%) | 0 | 0 |
| IX | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| X | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
§ RESULTS PER SEPARATION THRESHOLD
As mentioned in Section <ref>, we split the Lasair 1.0 sample into two sub-samples, each with a different separation threshold for Condition 1 (0.5 arcsec and 1.0 arcsec). The fractions (Table <ref>), split per separation threshold, are presented in Tables <ref> (for the 0.5 arcsec threshold) and <ref> (for the 1.0 arcsec threshold).
Same as Table <ref> but only for events in Lasair 1.0 selected with a separation (Condition 1) threshold of 0.5 arcsec.
Columns: Query | Total Transients | Not Classified | SN | AGN | TDE | Other | Galaxy | BFF | Varstar
I: <19 Mag, Blue and in PS 4 1 3 0 0 0 0 0 0
Percentage of All Transients 25.00% 75.00% 0 0 0 0 0 0
Percentage of Classified Transients 100.00% 0 0 0 0 0 0
II: <19 Mag and Blue 96 17 76 0 2 0 1 0 0
Percentage of All Transients 17.71% 79.17% 0 2.08% 0 1.04% 0 0
Percentage of Classified Transients 96.20% 0 2.53% 0 1.27% 0 0
III: <19 Mag 186 38 143 1 2 1 1 0 0
Percentage of All Transients 20.43% 76.88% 0.54% 1.08% 0.54% 0.54% 0 0
Percentage of Classified Transients 96.62% 0.68% 1.35% 0.68% 0.68% 0 0
IV: <19.5 Mag, Blue and in PS 7 3 4 0 0 0 0 0 0
Percentage of All Transients 42.86% 57.14% 0 0 0 0 0 0
Percentage of Classified Transients 100.00% 0 0 0 0 0 0
V: <19.5 Mag and Blue 169 67 99 0 2 0 1 0 0
Percentage of All Transients 39.64% 58.58% 0 1.18% 0 0.59% 0 0
Percentage of Classified Transients 97.06% 0 1.96% 0 0.98% 0 0
VI: <19.5 Mag 310 113 191 1 2 1 1 1 0
Percentage of All Transients 36.45% 61.61% 0.32% 0.65% 0.32% 0.32% 0.32% 0
Percentage of Classified Transients 96.95% 0.51% 1.02% 0.51% 0.51% 0.51% 0
Same as Table <ref> but only for events in Lasair 1.0 selected with a separation (Condition 1) threshold of 1.0 arcsec.
Columns: Query | Total Transients | Not Classified | SN | AGN | TDE | Other | Galaxy | BFF | Varstar
I: <19 Mag, Blue and in PS 2 0 1 0 1 0 0 0 0
Percentage of All Transients 0 50.00% 0 50.00% 0 0 0 0
Percentage of Classified Transients 50.00% 0 50.00% 0 0 0 0
II: <19 Mag and Blue 20 2 15 0 3 0 0 0 0
Percentage of All Transients 10.00% 75.00% 0 15.00% 0 0 0 0
Percentage of Classified Transients 83.33% 0 16.67% 0 0 0 0
III: <19 Mag 27 3 20 1 3 0 0 0 0
Percentage of All Transients 11.11% 74.07% 3.70% 11.11% 0 0 0 0
Percentage of Classified Transients 83.33% 4.17% 12.50% 0 0 0 0
IV: <19.5 Mag, Blue and in PS 2 0 1 0 1 0 0 0 0
Percentage of All Transients 0 50.00% 0 50.00% 0 0 0 0
Percentage of Classified Transients 50.00% 0 50.00% 0 0 0 0
V: <19.5 Mag and Blue 24 4 17 0 3 0 0 0 0
Percentage of All Transients 16.67% 70.83% 0 12.50% 0 0 0 0
Percentage of Classified Transients 85.00% 0 15.00% 0 0 0 0
VI: <19.5 Mag 35 8 22 1 3 0 0 0 1
Percentage of All Transients 22.86% 62.86% 2.86% 8.57% 0 0 0 2.86%
Percentage of Classified Transients 81.48% 3.70% 11.11% 0 0 0 3.70%
In order to assess whether there is a statistically significant difference between the fractions resulting from the different thresholds, we calculate the 1σ Clopper-Pearson confidence bounds for each separation threshold sub-sample in Tables <ref> (for the 0.5 arcsec threshold) and <ref> (for the 1.0 arcsec threshold).
Same as Table <ref> but only for events in Lasair 1.0 selected with a separation (Condition 1) threshold of 0.5 arcsec.

| Query | SN | AGN | TDE | Other | Galaxy | BFF | Varstar |
| I | 100.00%±0.00% | 0 | 0 | 0 | 0 | 0 | 0 |
| II | 96.20%±2.14% | 0 | 2.53%±1.76% | 0 | 1.27%±1.25% | 0 | 0 |
| III | 96.62%±1.48% | 0.68%±0.67% | 1.35%±0.94% | 0.68%±0.67% | 0.68%±0.67% | 0 | 0 |
| IV | 100.00%±0.00% | 0 | 0 | 0 | 0 | 0 | 0 |
| V | 97.06%±1.66% | 0 | 1.96%±1.37% | 0 | 0.98%±0.97% | 0 | 0 |
| VI | 96.95%±1.22% | 0.51%±0.50% | 1.02%±0.71% | 0.51%±0.50% | 0.51%±0.50% | 0.51%±0.50% | 0 |
Same as Table <ref> but only for events in Lasair 1.0 selected with a separation (Condition 1) threshold of 1.0 arcsec.

| Query | SN | AGN | TDE | Other | Galaxy | BFF | Varstar |
| I | 50.00%±35.16% | 0 | 50.00%±35.16% | 0 | 0 | 0 | 0 |
| II | 83.33%±8.74% | 0 | 16.67%±8.74% | 0 | 0 | 0 | 0 |
| III | 83.33%±7.57% | 4.17%±4.06% | 12.50%±6.71% | 0 | 0 | 0 | 0 |
| IV | 50.00%±35.16% | 0 | 50.00%±35.16% | 0 | 0 | 0 | 0 |
| V | 85.00%±7.94% | 0 | 15.00%±7.94% | 0 | 0 | 0 | 0 |
| VI | 81.48%±7.43% | 3.70%±3.61% | 11.11%±6.01% | 0 | 0 | 0 | 3.70%±3.61% |
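For readers who wish to reproduce this type of uncertainty estimate, a minimal Python sketch of 1σ Clopper-Pearson bounds (using scipy) is given below. It is illustrative only; the convention used to quote the symmetric ± values in the tables above may differ in detail from the raw interval bounds.

```python
from scipy.stats import beta

def clopper_pearson_1sigma(k, n):
    """1-sigma (68.27% central coverage) Clopper-Pearson interval for k successes in n trials."""
    alpha = 1.0 - 0.6827  # total two-sided tail probability
    lower = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Example: 76 SNe out of 79 classified transients
lo, hi = clopper_pearson_1sigma(76, 79)
print(f"fraction = {76 / 79:.2%}, 1-sigma bounds = [{lo:.2%}, {hi:.2%}]")
```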
There are no statistically significant differences between the results of the two separation thresholds. It does appear that there is a much higher fraction of TDEs in the sub-sample of the 1.0 arcsec threshold (of order 10%) compared to the 0.5 arcsec threshold (of order 1%). However, this is not statistically significant and is a result of small number statistics. To demonstrate this, we plot the separation values of all the detections of our five TDEs in Figure <ref>.
Most detections, both for the two TDEs in the 0.5 arcsec threshold sub-sample and for the three TDEs in the 1.0 arcsec threshold sub-sample, are below 0.5 arcsec separation. This means that the increase of the threshold to 1.0 arcsec is not responsible for the larger fraction of TDEs in that sub-sample; rather, it is a small-number-statistics fluctuation.
§ LIST OF EVENTS
We list in Table <ref> the full set of events considered transients of interest in this work, their publicly available classification and redshift, and the query number(s) in which they were found.
Candidates of interest.
Columns: ZTF name | TNS name | Other names | RA (deg) | Dec (deg) | Query | Classification | Redshift | Note | Reference(s)
ZTF18aabdajx AT 2022dbl (also AT 2018mac) ASASSN-22ci 185.1878 49.55128 I TDE 0.0284 <cit.>
ZTF18aacnlxz SN 2020aavr 134.95468 38.10909 III SN II 0.072475 <cit.>
ZTF18aadsuxd AT 2020yui ATLAS20bfcp, PS20kyb, ZTF18aaefvaq 129.53396 31.66792 III
ZTF18aagtwyh SN 2021oud Gaia21cyq, ZTF18aahbypm 189.92466 16.53793 II SN Ia 0.066041 <cit.>
ZTF18aahfbqp SN 2020acua 156.67845 21.7239 III SN Ia 0.041362 <cit.>
ZTF18aahkoqq SN 2021slw ATLAS21bbji, Gaia21dnq, ZTF21abnabtv 203.17888 32.4431 III SN Ia 0.034457 <cit.>
ZTF18aahvpcy SN 2022eaz ATLAS22ina, Gaia22bhf 208.40005 37.73573 II SN Ia 0.0607 <cit.>
ZTF18aaisqmw SN 2020acqt 176.35752 55.52702 III SN Ia 0.0527 <cit.>
ZTF18aaiwewk SN 2022cox ATLAS22gie, PS22biu 196.64354 54.9627 V SN Ia 0.087 <cit.>
ZTF18aaizgoq SN 2021gad ATLAS21ipu 231.51665 36.32309 III SN Ia 0.07 <cit.>
ZTF18aajhkjz AT 2020acub ATLAS20bkaw, PS20nox, ZTF20acxtayx 149.88365 15.84735 V
ZTF18aajhkjz AT 2021mrp 166.45372 15.12587 V
ZTF18aakeanj AT 2021dc Gaia21ald, ZTF21aaaollj 203.76788 53.87351 II a
ZTF18aamtiwb SN 2022dcf ATLAS22gxr, PS22cdu 185.06648 56.62635 II SN Ia-91T-like 0.066 <cit.>
ZTF18aansqom SN 2021hbg 251.0451 39.78875 III SN Ia 0.03 <cit.>
ZTF18aarefcm AT 2022drv ATLAS22iga 204.58714 40.15554 II
ZTF18aasypqx SN 2022cvt ATLAS22gxa 214.68999 22.36559 II SN Ia-91T-like 0.073 <cit.>
ZTF18aauvmnq SN 2021tty ATLAS21bden 207.19808 26.55212 II SN Ia 0.0621 <cit.>
ZTF18aaviokz AT 2021duz 207.86824 40.44794 IV
ZTF18aaviokz AT 2021duz 207.86828 40.44793 I c
ZTF18abfdxbt SN 2022dme Gaia22bak 255.92866 78.65643 II SN Ia 0.048914 <cit.>
ZTF18abmakin 298.62199 55.58944 II
ZTF18abrfccm SN 2020yxo ATLAS20beyo 26.86302 0.96765 III SN Ia 0.1 <cit.>
ZTF18abrzevn AT 2019byp 269.35527 26.84119 VII AGN 0.045399 <cit.>
ZTF18abwitkf SN 2021xy ATLAS21bsl, Gaia21amh 15.70534 -21.83647 III SN Ia 0.061 <cit.>
ZTF18abwnmnc AT 2020aczj ZTF20acxuyqs 330.52799 -5.66265 VII
ZTF18abxusti AT 2021gvi 119.03849 65.87328 III
ZTF18acfhmhp AT 2020aaxb 13.13143 46.33825 III
ZTF18acjxyak SN 2021efe 281.14242 33.72181 III SN Ia 0.0585 <cit.>
ZTF18acsremz SN 2021hmc 148.62747 53.16909 I SN Ia 0.025136 <cit.>
ZTF18actytgu SN 2021lyz ATLAS21barz 11.03286 30.351 II SN Ia 0.019 <cit.>
ZTF18acwyvak AT 2021jmb 174.04786 74.77397 VII AGN 0.055785 <cit.>
ZTF20aanlygx AT 2020abea 56.01208 -18.62466 III AGN 0.063 <cit.>
ZTF20aanzldk AT 2022cwl ATLAS22gfj, PS22bps 139.37062 -2.08112 III AGN 0.053 <cit.>
ZTF20aaoluli AT 2021aqe 14.09962 13.90157 VI
ZTF20aaprrgg SN 2021aayn PS21oli 62.01957 -6.30023 IV SN Ia 0.087 <cit.>
ZTF20aaqhlvq AT 2022jiw ATLAS22nmb, Gaia22dmk 196.76539 -4.30944 VII
ZTF20aaqhlvq AT 2022jiw ATLAS22nmb, Gaia22dmk 196.7654 -4.30943 VIII
ZTF20aartqux AT 2021hed ATLAS21jyx 208.60376 23.67644 II
ZTF20aarvpbq SN 2021gen ATLAS21ikj 222.98006 10.12371 III SN Ia 0.080611 <cit.>
ZTF20aasivtu AT 2020fhp Gaia20dxw 276.63611 21.15216 VII AGN 0.223 <cit.>
ZTF20aavfogu AT 2020hoj 173.45788 -20.81444 VI a
ZTF20abajndx AT 2020acvj 187.25093 -15.50851 II a
ZTF20abbhlct AT 2020mma 188.91823 -21.04658 VII AGN 0.061759 <cit.>
ZTF20abeyroj AT 2022ebh Gaia22dfz 198.5069 -3.47522 VII
ZTF20abmtbpa AT 2021adlq ATLAS21blqh 31.75077 6.96475 VI
ZTF20abpbmxi AT 2020qnk 297.46295 -18.15618 II b
ZTF20absulsv AT 2021adhi ATLAS21blnl, PS21mqk 358.86024 -4.2969 V
ZTF20abtcijp SN 2021iat ATLAS21kmm 258.99041 66.6909 II SN Ia 0.07 <cit.>
ZTF20abynbhh AT 2021hsv ATLAS21kna 112.67908 41.46758 III
ZTF20achpcvt AT 2020vwl ATLAS20bdgk, Gaia20etp 232.65758 26.98244 II TDE 0.035 <cit.>
ZTF20ackzfvn SN 2021adcw ATLAS21blvp, PS21ojv 31.137 -21.11931 VII SN II 0.017 <cit.>
ZTF20acnwgtl SN 2020yhh 329.42018 -5.67019 III SN Ia 0.14 <cit.>
ZTF20acobevy SN 2020yub ATLAS20bezd 0.56296 -23.49506 III SN Ia 0.08 <cit.>
ZTF20acoxmsr SN 2020yza ATLAS20bfpa, PS20mwz 160.39647 -3.44147 III SN Ia 0.071 <cit.>
ZTF20acpwjus SN 2020zjv ATLAS20bfsh, PS20nfq 141.63857 -1.65029 VI SN Ia-91T-like 0.075 <cit.>
ZTF20acpylns SN 2020zlc 145.34409 15.9613 III SN Ia 0.076388 <cit.>
ZTF20acqjmzk SN 2020zpj 340.91099 -14.06795 VI SN Ia 0.1 <cit.>
ZTF20acqzpta SN 2020aaez ATLAS20bgmh 24.47855 30.10691 III SN Ia 0.07135 <cit.>
ZTF20acrheie SN 2020aarw 81.31309 -8.62605 III SN Ia 0.049431 <cit.>
ZTF20acsdykp SN 2020aaxd Gaia20fox, PS20lul 124.86338 2.10144 III SN Ia 0.1 <cit.>
ZTF20acselme SN 2021huw 134.13621 39.14126 III SN Ia 0.042 <cit.>
ZTF20actdeqw SN 2020abfd ATLAS20bgfy, Gaia20fqk 26.94368 -21.76087 III SN Ia 0.045 <cit.>
ZTF20actlvpy SN 2020aavu ATLAS20bgwg, PS20lyk 328.80104 -11.32326 III SN Ia 0.11 <cit.>
ZTF20actrovz SN 2020aazy ATLAS20bgju, PS20nkl 357.11884 -5.39563 III SN Ia 0.09 <cit.>
ZTF20actvgsy SN 2020aazk ATLAS20bhjy, PS20mhg 139.69959 30.35468 III SN Ia 0.10196 <cit.>
ZTF20actvlbq SN 2020abaf ATLAS20bgnd, PS20mnx 152.56933 7.95562 III SN Ia 0.083038 <cit.>
ZTF20acuaqlf SN 2020abbo Gaia20fsb 357.77519 6.94249 III SN II 0.017 <cit.>
ZTF20acujaft SN 2020abgn ATLAS20bgmz, Gaia20fsf, PS20mwx 32.4228 -6.73706 III SN Ia 0.08 <cit.>
ZTF20aculruc SN 2020abij ATLAS20bgvv 358.55995 -4.80556 III SN Ia 0.077 <cit.>
ZTF20acumcrz SN 2020abln 22.91202 18.24501 VI SN Ia 0.054 <cit.>
ZTF20acuoclk SN 2020abqd ATLAS20bgyf, PS20mnz 124.06977 37.40383 III SN Ia 0.082 <cit.>
ZTF20acveadu SN 2020absk ATLAS18bcmv, PS20mbv 136.00384 -0.08888 III SN II 0.01889 <cit.>
ZTF20acvebcu SN 2020abqx ATLAS20bgye, PS20mkm 178.10281 67.5477 III SN Ib 0.063 <cit.>
ZTF20acvexen AT 2020abrr ATLAS20bjsp, PS20nwf 195.44765 32.60167 V c
ZTF20acvjagm SN 2020abtf ATLAS20bgve, PS21akm 119.89367 15.29998 VI SN II 0.014 <cit.>
ZTF20acvkqxy SN 2020abve ATLAS20bjhg 142.22477 40.88916 III SN Ia 0.063327 <cit.>
ZTF20acwnrty SN 2020acef ATLAS20bhob, PS20mhi 146.09988 22.97916 VI SN Ia-91T-like 0.065 <cit.>
ZTF20acwofhd SN 2020acfp ATLAS20bjsc 171.89415 47.37943 III SN Ic 0.032816 <cit.>
ZTF20acwpaov SN 2020acfq ATLAS20bjnr 132.39613 9.58738 III SN Ia 0.05913 <cit.>
ZTF20acwpixx SN 2020aceu ATLAS20bjqf, PS20nuq 178.07599 7.89155 II SN Ia 0.077451 <cit.>
ZTF20acwqmul AT 2020acbn ATLAS20bhzn, ZTF18aatfpbe 190.1667 13.81572 III c
ZTF20acxdawc SN 2020acmf ATLAS20bigv 40.86169 16.66293 VI SN Ia 0.025 <cit.>
ZTF20acxycfh SN 2020acwp 147.03499 6.81964 III SN Ia 0.078525 <cit.>
ZTF20acynlkj SN 2020aden ATLAS20bkdg 17.00425 -15.40552 III SN Ia 0.055008 <cit.>
ZTF20acywefl SN 2020adka ATLAS21asd 118.36714 28.26255 III SN Ia 0.058596 <cit.>
ZTF20acyxomd AT 2020aeuf ATLAS20bkau 150.36432 19.46194 VI
ZTF20adaftef SN 2020adra ATLAS21atx 151.3117 21.91095 II SN Ia 0.084025 <cit.>
ZTF21aaaaddl SN 2021D ATLAS21asf 358.64203 15.74521 VI SN Ia 0.07459 <cit.>
ZTF21aaaosmn SN 2021ct ATLAS21axd 221.76091 56.31489 III SN Ia 0.0375 <cit.>
ZTF21aaaovuq AT 2021bc ATLAS21czg 233.88764 18.14889 II
ZTF21aaapesv AT 2021gk 223.30836 28.85542 III
ZTF21aaapfal AT 2021da ATLAS21crw 250.53913 35.60025 II c
ZTF21aaarmti SN 2021ek ATLAS21ajr, PS21fo 50.95792 -10.0448 V SLSN-I 0.193 <cit.>
ZTF21aaaytny SN 2021mc ATLAS21cvk 240.43947 18.01928 II SN Ia 0.0459 <cit.>
ZTF21aacigst AT 2021sq ATLAS21bsd 149.86789 28.37349 V a
ZTF21aacnjlf AT 2021acw ATLAS21bnv 25.84651 2.22959 V
ZTF21aacudxe SN 2021vt ATLAS21chd, PS21bys 201.78258 11.35239 V SN Ia 0.087 <cit.>
ZTF21aadahyi SN 2021akb ATLAS21cac 170.90465 -5.53246 VI SN Ia 0.097 <cit.>
ZTF21aadatfg SN 2021xv ATLAS21cmb 241.88673 36.77951 II SN Ic-BL 0.05 <cit.>
ZTF21aadkgpm SN 2021zu ATLAS21cgz, Gaia21ahe, PS21aqu 168.74875 27.45333 II SN Ia 0.06 <cit.>
ZTF21aadrmok AT 2021acb 261.05295 46.82644 II c
ZTF21aadrmsv AT 2021amq ATLAS21czo 242.97524 21.0087 V
ZTF21aadrtcs SN 2021abt ATLAS21cae 199.82393 -7.2492 II SN Ia 0.0014285 <cit.>
ZTF21aadrtqz SN 2021abn ATLAS21bmj 208.7036 -13.60863 II SN Ia 0.058 <cit.>
ZTF21aaekkbv SN 2021ait ATLAS21crv, Gaia21anh 264.64777 42.21554 II SN Ia 0.07 <cit.>
ZTF21aaekmoy SN 2021ais ATLAS21cry 223.53354 2.97951 II SN Ia 0.080003 <cit.>
ZTF21aaevrjl SN 2021arg ATLAS21dkx, PS21aia 67.82827 -10.39637 V SN II 0.031 <cit.>
ZTF21aafdvxz SN 2021bkb ATLAS21dhf, PS21ewl 236.87341 10.93365 III SN II 0.05 <cit.>
ZTF21aafkktu SN 2021avg ATLAS21cux, Gaia21aku, PS21bkp 174.99587 14.52796 II SN II 0.031 <cit.>
ZTF21aaflcbk SN 2021arr ATLAS21dab, Gaia21aml, PS21bzh 189.12726 19.82585 VI SN Ia 0.07 <cit.>
ZTF21aaglrzc SN 2021bmc ATLAS21djm 127.24882 12.41938 II SN II 0.05 <cit.>
ZTF21aagnbsu AT 2021cld ATLAS21efr 124.66522 6.12749 V
ZTF21aagrzqz AT 2021chg (also AT 2013kz) ATLAS21ehz 144.49052 48.3884 II c
ZTF21aagsbot AT 2021bvu ATLAS21emi 158.1223 16.16226 V
ZTF21aagshha AT 2021byg ATLAS21emb 171.75836 16.73231 III c
ZTF21aagskhr AT 2021bpz ATLAS21end 191.98612 29.72008 V a
ZTF21aagtcgb SN 2021bmb ATLAS21djl, PS21zq 224.32537 43.45972 II SN Ia 0.0375 <cit.>
ZTF21aagtexi SN 2021bqo ATLAS21eie 188.74309 45.44921 II SN Ia 0.08 <cit.>
ZTF21aagtquu AT 2021brg ATLAS21eem 244.44327 65.12367 III c
ZTF21aagxmcs SN 2021cca ATLAS21efx, PS21zy 154.0805 -1.78745 V SN Ia 0.085 <cit.>
ZTF21aagzswk 26.91632 -1.97598 VI
ZTF21aagzwod 19.41428 -2.94091 III
ZTF21aahfizx AT 2021bwg ATLAS21egb 225.31005 13.0221 V
ZTF21aahfjbs SN 2021cky ATLAS21eev 229.39754 18.11804 V SN Ia-pec 0.073 <cit.>
ZTF21aahfjlo SN 2021clz ATLAS21flj, Gaia21bfk, PS21axb 214.64922 52.1011 II SN Ia 0.039 <cit.>
ZTF21aahhmev AT 2021bzb ATLAS21els 76.697 3.75387 V
ZTF21aahnkut SN 2021cez ATLAS21epx 197.81538 -16.54579 II SN Ia 0.07 <cit.>
ZTF21aahnqnn AT 2021dng ATLAS21gvh 167.32333 -5.75498 V
ZTF21aahpoxv AT 2021csz ATLAS21gau, PS21cqr 174.10466 4.8955 V
ZTF21aahpxww AT 2021cff ATLAS21fwh 158.95735 10.91143 VI
ZTF21aahzspd SN 2021cjx ATLAS21fkc 148.00477 21.17413 V SN Ia 0.11 <cit.>
ZTF21aaiucta AT 2021cfu ATLAS21fza, PS21cnz 254.84455 -1.20244 VI
ZTF21aaiucta AT 2021cfu ATLAS21fza, PS21cnz 254.84454 -1.20245 III c
ZTF21aajgdeu SN 2021cjd ATLAS21ekw 193.27745 36.81955 III SN IIP 0.027929 <cit.>
ZTF21aajgdpo SN 2021cjy ATLAS21hdg, PS21aei 194.82941 38.88605 II SN II 0.036088001 <cit.>
ZTF21aakbgpf SN 2021crv ATLAS21gdz, PS21bhh 122.63426 -8.12558 II SN Ia 0.05 <cit.>
ZTF21aakfqwq AT 2021crk ATLAS21hdc, PS21cpw 176.27889 18.54037 V
ZTF21aalgboj AT 2021dfo ATLAS21hcp 135.79646 51.78792 II c
ZTF21aalgqsv SN 2021dic PS21bxl 154.08238 0.65215 II SN Ia 0.079358 <cit.>
ZTF21aalhgqi SN 2021dib ATLAS21gtb, PS21aie 173.7361 9.03865 VI SN I 0.086 <cit.>
ZTF21aalnxkl SN 2021djz PS21bze 202.98122 -2.45151 V SN Ia 0.087 <cit.>
ZTF21aalxxzn AT 2021fxu Gaia22dgm 206.82834 2.18269 VII AGN 0.1097 <cit.>
ZTF21aamjzki AT 2020hoj ZTF20aavfogu 173.45805 -20.81437 III
ZTF21aamkxbl SN 2021dwo ATLAS21gwk 216.18103 -1.57031 II SN Ia 0.057153 <cit.>
ZTF21aamucom SN 2021dsb ATLAS21hgi, Gaia21byu, PS21bxv 158.82831 -13.00542 II SN Ia-91T-like 0.025 <cit.>
ZTF21aamxduf SN 2021eij ATLAS21hjg 217.38808 10.94154 VI SN Ia 0.1 <cit.>
ZTF21aanswls SN 2021ell ATLAS19bftq 298.58614 1.58498 III SN Ia 0.025 <cit.>
ZTF21aanuyro SN 2021eno ATLAS21huk, Gaia21blr, PS21ctb 224.82649 59.90032 II SN Ia 0.07 <cit.>
ZTF21aanvtng AT 2021ehx 167.89582 5.58533 V
ZTF21aanxhjv AT 2021ehb ATLAS21jdy 46.94925 40.3113 II TDE 0.018 <cit.>
ZTF21aaocgci AT 2021eox ATLAS21hnu 128.46705 -11.86269 II c
ZTF21aaocibg SN 2021epo ATLAS21hpb, Gaia21biq, PS21cgw 131.54246 56.12765 II SN Ia 0.071 <cit.>
ZTF21aaoexjt SN 2021ezs ATLAS21hsb 208.39581 27.08093 II SN Ia 0.11 <cit.>
ZTF21aaomiwf 194.63988 83.11293 III
ZTF21aaootdm AT 2021gny (also AT 2015dv) ATLAS21ith, PS21cgk 211.95257 7.1261 V
ZTF21aaopupn AT 2021fap ATLAS21hon 217.35744 9.06336 V
ZTF21aaoqayg AT 2021eyw ATLAS21iuk 252.98655 28.92162 III c
ZTF21aaoqcnh SN 2021feu ATLAS21ixy 272.47882 11.30154 III SN Ia 0.04 <cit.>
ZTF21aaovdlh SN 2021fck ATLAS21huq, PS21bxe 130.34449 2.88978 II SN Ia 0.056 <cit.>
ZTF21aapfmut AT 2021fyr ATLAS21ikk, PS21fzt 154.93818 32.24469 III c
ZTF21aaphjwb AT 2021ghw ATLAS21kdb 177.84396 24.65899 V
ZTF21aaphzor AT 2021hcd 154.5871 5.92311 VI
ZTF21aapjmgf SN 2021fzp ATLAS21iif, Gaia21bkh, PS21clp 220.98371 16.41324 II SN II 0.054 <cit.>
ZTF21aapjmpq SN 2021fzq ATLAS21imm, PS21cgr 225.28036 12.31399 VI SN Ia-91T-like 0.13 <cit.>
ZTF21aapjpee AT 2021hdo ATLAS21jdj 202.63239 -6.3692 V
ZTF21aapkuur SN 2021gba ATLAS21iuc, PS21bxj 219.26622 -8.07644 V SN Ia 0.1 <cit.>
ZTF21aappehx SN 2021gwu 139.10441 -17.35608 II SN Ia 0.054514 <cit.>
ZTF21aaprfkc SN 2021gwp ATLAS21jcn, PS21ego 199.01655 -15.50397 III SN Ia 0.056 <cit.>
ZTF21aaptmrn SN 2021gdm ATLAS21itu 237.38765 4.67097 VI SN Ia 0.112 <cit.>
ZTF21aapuwry AT 2021gdw PS21clx 249.19601 -16.22766 VI
ZTF21aapvxnf SN 2021gyw ATLAS21kpq 281.88222 64.46884 II SN Ia-91T-like 0.085 <cit.>
ZTF21aapxogj SN 2021ghj PS21cfn 125.95185 38.38982 II SN II 0.060214 <cit.>
ZTF21aaqgrrf SN 2021gqm ATLAS21jlu, PS21eeh 153.76115 3.79654 V SN Ia 0.082 <cit.>
ZTF21aaqhqke SN 2021gpw ATLAS21iwv, Gaia21czj 200.47858 16.74437 II SN IIn 0.075 <cit.>
ZTF21aaqjuyt 165.70851 23.85029 VI
ZTF21aaqmoof SN 2021gvv ATLAS21jlj, PS21fbg 256.14998 14.08037 V SN II 0.038 <cit.>
ZTF21aaqpykm SN 2021hay ATLAS21jli, PS21gdb 229.5394 15.17631 V SN Ia 0.06 <cit.>
ZTF21aaqwbgq SN 2021hjb ATLAS21jgx 133.93957 16.92466 III SN Ia 0.06651 <cit.>
ZTF21aarigsr SN 2021iaf ATLAS21kat, Gaia21chu 227.84385 26.79227 II SN Ia 0.059 <cit.>
ZTF21aarmuxl AT 2021hpp ATLAS21kne, PS21cus 141.24411 42.80907 III c
ZTF21aarohyu SN 2021hzo ATLAS21khv 180.16705 51.84093 II SN Ia 0.063 <cit.>
ZTF21aarrkwn AT 2021igm ATLAS21kdi, PS21fxj 226.85561 18.31667 V
ZTF21aarteuc AT 2021hzc ATLAS21kxo, PS21dfc 151.98128 -8.28587 V
ZTF21aarycyl SN 2021hvu ATLAS21jpb, Gaia21bxb, PS21fia 153.41 -24.52076 II SN Ia-91T-like 0.104 <cit.>
ZTF21aasgcve SN 2021idn ATLAS21kye 129.16748 1.16559 V SN Ia-91T-like 0.09 <cit.>
ZTF21aasgrpw SN 2021idh ATLAS21laz, PS21dhx 155.82763 -0.42939 VI SN Ia 0.095 <cit.>
ZTF21aaskuth AT 2021icw ATLAS21lge, PS21fxn 262.75059 7.96276 VI
ZTF21aassamj SN 2021ify ATLAS21lot, PS21fig 261.72373 13.17885 II SN Ia 0.056 <cit.>
ZTF21aaswvyc SN 2021ijy ATLAS21lyk 258.4873 39.0988 III SN Ia 0.07 <cit.>
ZTF21aatdzmt SN 2021imq ATLAS21lqa, Gaia21cfl, PS21fih 227.36531 36.81745 II SN Ia 0.076 <cit.>
ZTF21aatlbsi SN 2021ipy ATLAS21ncf 292.34717 74.07815 II SN Ia 0.07 <cit.>
ZTF21aatyxox AT 2021jtw ATLAS21mtq, PS21fwq 206.9273 -26.37541 V
ZTF21aauufbz 160.2291 24.74699 IV a
ZTF21aauufgo AT 2021jjp ATLAS21mwb, PS21eik 211.12748 -24.86573 V
ZTF21aauurqh 204.13567 -5.05061 VI
ZTF21aavpgvx AT 2021jui ATLAS21mwc, PS21fnl 186.71367 -17.77826 V
ZTF21aavrywc AT 2021jwa ATLAS21njq 236.35534 8.20475 V
ZTF21aawaguv AT 2021kae 181.57238 5.81541 V
ZTF21aawckpe 198.85353 1.12187 III
ZTF21aawtazf SN 2021kkz ATLAS21nnp, PS21gar 248.98476 18.08285 III SN Ia 0.061948 <cit.>
ZTF21aaxjiii SN 2021klq ATLAS21nqa 351.50055 46.65778 III SN Ia 0.04 <cit.>
ZTF21aaxkckg SN 2021kmv ATLAS21nme, PS21elb 152.84195 41.78835 II SN Ia 0.063 <cit.>
ZTF21aaxstsv SN 2021ktw ATLAS21nqy 215.87831 8.34491 II SN Ia 0.09 <cit.>
ZTF21aaxtpty AT 2021kqp ATLAS21oaf, PS21doo 223.79475 -6.98593 V
ZTF21aaxxihx SN 2021ktv ATLAS21nqw 165.76618 8.86105 III SN Ic-BL 0.06 <cit.>
ZTF21aaxxjdr AT 2021ktu ATLAS21nqv 169.0091 21.29421 II
ZTF21aaxxjen SN 2021kun ATLAS21ofj 166.41407 19.46031 I SN Ia 0.097 <cit.>
ZTF21aaxyecd SN 2021kui ATLAS21ohm 194.25355 40.43447 II SN Ia 0.061249 <cit.>
ZTF21aaydomp AT 2021ldb ATLAS21oan 190.5094 -14.12457 VI
ZTF21aaydtqk AT 2021kxh ATLAS21owr 178.90335 5.54756 V
ZTF21aayoiis AT 2021mpc ATLAS21pes, PS21gct 179.09923 -17.53295 V
ZTF21aayqrgx SN 2021lax Gaia21cof 212.33849 72.11727 II SN Ib 0.033 <cit.>
ZTF21aazlxiw SN 2021lta ATLAS21oez, PS21dqe 246.50245 28.42744 II SN Ia 0.1 <cit.>
ZTF21aazmjaf AT 2021lkq ATLAS21ovp 316.71025 10.73573 V
ZTF21aazpyza AT 2021lob 222.37709 -27.24104 V
ZTF21aazpzqo AT 2021loa 214.60334 -27.65508 III
ZTF21aazqtor SN 2021lny ATLAS21oyt 205.26646 -25.60261 II SN Ia 0.0954 <cit.>
ZTF21aazqynq AT 2021lsw ATLAS21osi 186.93889 26.77057 V
ZTF21aazybsh 16.08828 55.74215 III
ZTF21abaagct AT 2021lrw ATLAS21orf, PS21gdn 305.75781 -6.41894 V
ZTF21abalkop AT 2021mho ATLAS21rcr 183.01062 19.04756 VI
ZTF21abamnyi AT 2021mck ATLAS21rch 220.04695 66.13794 V
ZTF21abawvgt AT 2021mbe ATLAS21pjw 326.67805 8.54946 VI
ZTF21abbliav SN 2021mqv ATLAS21peg, Gaia21cte 322.88652 11.14894 VI SN Ia 0.049 <cit.>
ZTF21abboaaz AT 2021mkx ATLAS21qbe 179.52419 -27.75361 III
ZTF21abbuxzr AT 2021mmq ATLAS21pad 167.3917 -1.5678 III Varstar <cit.>
ZTF21abbxdcm SN 2021msu ATLAS21pis 223.63703 9.22811 II SN Ia-91T-like 0.05 <cit.>
ZTF21abbxlbu AT 2021mqq ATLAS21pju 315.86938 4.94065 II
ZTF21abbytmm SN 2021msv ATLAS21piv 195.09912 -11.9655 II SN Ia 0.054 <cit.>
ZTF21abbzjeq SN 2021mwb ATLAS21pjy, PS21dxq 242.23797 35.42108 II SN Ia 0.043 <cit.>
ZTF21abcfgwv AT 2021mxu ATLAS21rfm 171.34341 24.90104 V
ZTF21abcixor SN 2021nip ATLAS21prb, PS21ety 244.21006 12.64293 II SN II 0.033 <cit.>
ZTF21abcmepi SN 2021nlj ATLAS21pvm, PS21hrs 187.86256 4.22648 II SN Ia 0.089 <cit.>
ZTF21abcrhpk SN 2021nqo ATLAS21pvj, PS21gzv 322.36694 4.8875 II SN Ia 0.0744 <cit.>
ZTF21abdbodg SN 2021ohp Gaia21czd 206.99498 -13.78255 III SN Ia 0.064 <cit.>
ZTF21abdmevk SN 2021ont ATLAS21qky, PS21ivp 246.67702 39.14524 II SN II 0.02833 <cit.>
ZTF21abfoyac SN 2021pni Gaia21ebi 268.07435 8.23976 III SN II 0.033 <cit.>
ZTF21abfxibf SN 2021qtn 355.38227 14.12417 II SN Ia 0.063044 <cit.>
ZTF21abgxbvi AT 2021quh 250.76254 -26.69082 III
ZTF21abhhbmd SN 2021qnv ATLAS21wwc, Gaia21dox 338.16656 28.20678 III SN Ia 0.042 <cit.>
ZTF21abhshmt SN 2021qvg ATLAS21ztt, Gaia21dpf 200.08813 48.72436 II SN Ia-91T-like 0.055 <cit.>
ZTF21abhuoyz SN 2021qyh ATLAS21baec, PS21hwf 215.89982 14.11976 II SN Ia 0.0875 <cit.>
ZTF21abhymom SN 2021qyf ATLAS21zvh 205.34227 1.98462 III SN Ia 0.08 <cit.>
ZTF21abhzboh SN 2021qyc ATLAS21zwq 237.23744 6.84263 II SN Ia 0.051097 <cit.>
ZTF21abidtrd SN 2021rax ATLAS21baeg 283.24974 28.95966 III SN Ia 0.048 <cit.>
ZTF21abifutc SN 2021rdq ATLAS21baen 190.19495 42.30288 II SN Ia 0.072 <cit.>
ZTF21abjbjba SN 2021sbv ATLAS21bbfc 196.47762 2.14131 II SN Ia 0.069122 <cit.>
ZTF21abjciua AT 2021seu ATLAS21bbfi 215.39427 37.90969 II BFF 0.06 <cit.>
ZTF21abkinhh SN 2016elg iPTF16elg 212.641 -0.98848 III SN Ia 0.054027 <cit.>
ZTF21abkqkon SN 2021swl ATLAS21bceo, Gaia21djx 345.65222 14.86495 II SN Ia 0.0885 <cit.>
ZTF21abmgxyu SN 2021tnd ATLAS21bbvf, PS21imd 327.23818 -9.07113 III SN Ia 0.085 <cit.>
ZTF21abmwzxt SN 2021tsz ATLAS21bccc, Gaia21dmi, PS21ikp 354.49331 -0.4415 II SN II 0.04 <cit.>
ZTF21abnfgff SN 2021tvc ATLAS21bdey, Gaia21dpc 350.19705 4.261 III SN Ia 0.06 <cit.>
ZTF21abnhnxu AT 2021uvr ATLAS21bdyq, PS21iom 351.75532 -11.02135 II
ZTF21abnlhxs SN 2021tyw ATLAS21bche, Gaia21doz, PS21isy 346.48525 14.35774 II SN II 0.013 <cit.>
ZTF21abnlnmq AT 2021txf ATLAS21beii 14.57232 20.59191 V
ZTF21abnvsic SN 2021uby ATLAS21bcgu 257.17946 39.53526 II SN Ia 0.072 <cit.>
ZTF21abotose SN 2021ugl ATLAS21bebk 246.98203 20.25316 VI SN IIb 0.0412 <cit.>
ZTF21abowqqa SN 2021uga ATLAS21bdru 247.81439 12.32168 V SN Ia 0.09 <cit.>
ZTF21abpjpxr SN 2021uwa ATLAS21bdqp 335.92088 19.73504 V SN Ia 0.09 <cit.>
ZTF21abpjrqx AT 2021uhr ATLAS21bebe 321.71605 5.75134 VI
ZTF21abqfpsc SN 2021umz (also AT 2016jkb) ATLAS21bdjn, Gaia21ecj 291.64365 -26.34741 V SN Ia-91T-like 0.076 <cit.>
ZTF21abqjjrb SN 2021uqu ATLAS21bdzp, PS21inv 357.29406 -11.25516 III SN 0.078 <cit.>
ZTF21abrfvax SN 2021vis ATLAS21bfiy 226.27859 21.27113 VI SN Ic 0.0518 <cit.>
ZTF21abrfvax SN 2021vis ATLAS21bfiy 226.27859 21.27113 III SN Ic 0.0518 <cit.>
ZTF21abrotmc SN 2021vku ATLAS21bfzs, PS21jdw 322.38702 8.07912 V SN Ia 0.097 <cit.>
ZTF21abrskvb SN 2021vkr ATLAS21bgbc 308.9457 5.45901 VI SN Ia 0.083 <cit.>
ZTF21abrzeif AT 2021vjy ATLAS21bgnx 29.30134 -4.87903 VI
ZTF21absbwyz SN 2021vou ATLAS21bfjz, PS21izi 249.47325 11.59448 V SN Ia 0.09 <cit.>
ZTF21absjryg AT 2021vxq 15.10958 -4.1558 VI
ZTF21abtmsie AT 2021wfm ATLAS21bgln 339.79123 41.78092 III
ZTF21abtsoky SN 2021zfo ATLAS21bjdz, PS21kff 270.11269 69.16242 V SN Ia 0.085 <cit.>
ZTF21abtuutw SN 2021whj ATLAS21bgnp, PS21jyt 2.30267 -5.93881 III SN Ia 0.079 <cit.>
ZTF21abvyczt AT 2021xco ATLAS21bibd 247.27735 -8.48109 VI
ZTF21abwycli SN 2021xyh ATLAS21biiw, PS21jtu 339.96861 12.58475 V SN II 0.035 <cit.>
ZTF21abxqkjd AT 2021xvw ATLAS21bhyv, PS21nmn 343.22562 -19.7127 V
ZTF21abxzzys SN 2021ygf ATLAS21binx 248.96119 38.37612 V SN Ia 0.098876 <cit.>
ZTF21abyonuw SN 2021yip ATLAS21bjhu, PS21ket 345.43594 28.46884 VI SN Ia 0.128 <cit.>
ZTF21abzbuvf SN 2021yik ATLAS21bipy, PS21kvq 7.50773 -3.03131 VI SN Ia 0.1 <cit.>
ZTF21acaohho AT 2021ynv 44.19692 0.73207 V
ZTF21acbgcma SN 2021yvh ATLAS21bizm, PS21okm 249.94553 32.57075 III SN Ia 0.05239 <cit.>
ZTF21acbjxhr SN 2021ysn ATLAS21bjhq, PS21kui 359.18144 13.34985 V SN Ia 0.085 <cit.>
ZTF21acbqjsz AT 2021zda 312.98109 0.09868 V
ZTF21acdnnfg SN 2021zgu ATLAS21bjeq, Gaia21end, PS21mgp 73.43527 0.82282 V SN Ia 0.068 <cit.>
ZTF21acdozee AT 2021zhq ATLAS21bjcv 246.67356 24.92438 V
ZTF21acehvay SN 2021aaiq 327.87377 26.48246 III SN Ia 0.055291 <cit.>
ZTF21acftror SN 2021acej 304.47809 3.23255 III SN Ia 0.05 <cit.>
ZTF21acgylas SN 2021abaq ATLAS21bkml, PS21lui 347.01665 22.57312 VI SN Ia 0.078 <cit.>
ZTF21acgyzel SN 2021abam ATLAS21bkdm, PS21lji 17.05099 -19.79002 II SN Ia 0.049 <cit.>
ZTF21acgzxbn AT 2021able 73.49642 -16.38124 V
ZTF21acgzytq AT 2021abjk ATLAS21blzh 84.14974 -20.07543 V
ZTF21achafji AT 2021abwc 72.66684 -20.16099 VI Galaxy 0.089 <cit.>
ZTF21achaxmt SN 2021abnc ATLAS21bkcv 284.06308 33.0193 II SN Ia 0.08 <cit.>
ZTF21achbgdo SN 2021abbm 282.58432 70.48222 III SN Ia 0.089 <cit.>
ZTF21achctlk AT 2021abdx PS21ltv 23.46599 -19.67035 V
ZTF21achdsoo SN 2021abmr 6.32401 -4.15696 VI SN Ia 0.109 <cit.>
ZTF21achdvyu SN 2021aadc ATLAS21blak, PS21kpm 1.13614 19.76144 VI SLSN-II 0.1953 <cit.>
ZTF21achlxxo SN 2021abll 71.84171 74.02835 III SN Ia 0.08145 <cit.>
ZTF21achqiue SN 2021abmy ATLAS21bkiu, Gaia21fca, PS21lta 355.12566 12.93151 II SN Ia 0.028 <cit.>
ZTF21achutuk SN 2021abpw ATLAS21bkjt, Gaia21fhj 311.27769 -25.73199 II SN Ia 0.063 <cit.>
ZTF21achxzrj SN 2021abpo ATLAS21bkzi 63.27148 -5.44853 VI SN Ia 0.1 <cit.>
ZTF21achzfpg SN 2021absx ATLAS21bklo, Gaia21ews 276.66158 15.70446 III SN Ia 0.056 <cit.>
ZTF21acioiha SN 2021acgq ATLAS21bkvo 117.5581 72.5341 II SN Ia 0.098 <cit.>
ZTF21acipdhn SN 2021abzk PS21msm 157.55136 78.80644 II SN Ia 0.079 <cit.>
ZTF21acirwxt AT 2021acbf ATLAS21bkoq 335.56448 36.06381 II
ZTF21acistjw AT 2021accn ATLAS21blfb, PS21lvj 8.94413 -0.30567 V
ZTF21aciuhyy SN 2021acgl ATLAS21bkzw 5.11115 29.44932 III SN Ia 0.097 <cit.>
ZTF21aciuque AT 2021acgm (also AT 2013lh) ATLAS21bkqh 320.63354 15.41891 VI
ZTF21acjbcdp AT 2021acsg ATLAS21blay 322.59706 3.14409 VI
ZTF21acjbcdp AT 2021acsg ATLAS21blay 322.59705 3.14409 VI
ZTF21acjbeap AT 2021acob ATLAS21blcu 334.34494 8.24779 VI
ZTF21acjbedo SN 2021acpx ATLAS21bkzt 334.36867 13.38465 II SN Ia 0.067 <cit.>
ZTF21acjdwfe SN 2021acof 345.46786 36.82734 III SN Ia 0.107 <cit.>
ZTF21acjnzdh SN 2021acoz ATLAS21blvk 102.10201 77.86272 II SN Ia 0.095 <cit.>
ZTF21acjosgg SN 2021acnf 180.5743 64.71934 I SN Ia 0.1058 <cit.>
ZTF21acjrrle AT 2021acvr 34.69133 -16.66564 VIII
ZTF21ackbhmm SN 2021adhg 322.26635 -12.85072 II SN Ia 0.082 <cit.>
ZTF21ackmeft AT 2021adfc 316.26739 -21.43709 VI
ZTF21ackqvyg AT 2021adee 45.64063 -14.41236 V
ZTF21ackthuy AT 2021adni 56.49016 -4.39208 V
ZTF21ackyxaa AT 2021adqx PS21lzj 7.3939 2.80337 VIII
ZTF21ackzlds AT 2021adjb 359.80772 -8.53838 VI
ZTF21acldmwy SN 2021aefa ATLAS21bmgf, PS21mjg 87.02766 -17.53591 VIII SN Ia 0.11 <cit.>
ZTF21aclidzb AT 2021adij PS22fk 160.42135 25.57288 II
ZTF21aclknsx SN 2021admz 305.87894 -23.44238 II SN IIn 0.056 <cit.>
ZTF21aclrkgs SN 2021admm 13.2952 -25.15131 II SN Ia-CSM 0.108515 <cit.>
ZTF21aclzvgz SN 2021adyo ATLAS21bmxn 174.22039 75.71104 III SN II 0.0415 <cit.>
ZTF21acmrhta AT 2021adwp 273.89771 12.66109 V
ZTF21acmtqzi SN 2021aebc ATLAS21bmge, PS21mwv 36.51753 31.20972 III SN Ia 0.057785 <cit.>
ZTF21acoruyj SN 2021aexp ATLAS21bmox, PS21mrd 24.5498 -29.70116 III SN Ia 0.07 <cit.>
ZTF21acouakb AT 2021aeke ATLAS21bmxg 134.73665 59.7688 III c
ZTF21acphsgr SN 2021aeqi ATLAS21bmoj 330.70721 -13.80123 II SN Ia 0.052 <cit.>
ZTF21acpocmd SN 2021aesp ATLAS21bmpo, Gaia21ffo 147.25064 39.23999 II SN Ia 0.0411409996 <cit.>
ZTF21acpzade AT 2021afjy 355.31269 -6.63483 II
ZTF21acravtt SN 2021afzz 179.06486 -6.17787 II SN Ia 0.083725 <cit.>
ZTF22aaagqyx SN 2022cma 73.58878 -14.33213 II SN Ia 0.056 <cit.>
ZTF22aaagrex SN 2022ccg ATLAS22gdf, Gaia22arg, PS22blq 43.12134 11.10408 VII SN Ia 0.07 <cit.>
ZTF22aaahbrf SN 2022cdh ATLAS22gat, PS22bmo 56.64761 12.98938 II SN Ia 0.035 <cit.>
ZTF22aaahtqz AT 2022bdw ATLAS22dth, Gaia22baj, PS22avi 126.29315 18.58266 II TDE 0.03782 <cit.>
ZTF22aaahuly SN 2022cca ATLAS22ges, PS22bqe 137.29314 -3.86594 II SN Ic-BL 0.042 <cit.>
ZTF22aaajlje AT 2022amc ATLAS22cts, PS22bfb 168.80077 7.9683 VI Other <cit.>
ZTF22aaajuiz SN 2022cfs ATLAS22frs, PS22cms 255.39057 32.95254 II SN Ia 0.091 <cit.>
ZTF22aaaoyme SN 2022cob ATLAS22gcc, PS22bvc 224.89748 14.12884 III SN Ic-BL 0.0445 <cit.>
ZTF22aaaqhgc AT 2022cms ATLAS22gef 58.49762 16.43121 VI
ZTF22aaaqwar SN 2022coz ATLAS22get, PS22brv 121.23503 5.06504 III SN Ia 0.035 <cit.>
ZTF22aabfojs AT 2022crg ATLAS22hfm, PS22drp 217.16948 16.92929 V
ZTF22aabimec AT 2022csn ATLAS22ggz, Gaia22ayp, PS22bju 112.22887 26.89703 II TDE 0.148 <cit.>
ZTF22aabjpii SN 2022cvp ATLAS22gww, PS22drn 221.12711 18.73436 II SN II 0.044 <cit.>
ZTF22aaboeyu AT 2022dlx ATLAS22hob 214.78224 12.65903 VI
ZTF22aabsemf AT 2022dgt ATLAS22hgi, PS22cji 181.78881 62.27814 VI
ZTF22aabsnrg SN 2022dfp ATLAS22heg, PS22bwp 128.81234 30.56737 I SN Ia 0.07532 <cit.>
ZTF22aabtqob SN 2022dfl ATLAS22hcn, PS22ccq 127.98901 3.01631 III SN Ia 0.088 <cit.>
ZTF22aabtyxu AT 2022eku ATLAS22kxs, Gaia22byt 167.50144 8.07103 VII
ZTF22aabuamq AT 2022dsw 171.99702 35.77912 IX AGN 0.074 <cit.>
ZTF22aabwemz SN 2022dkw ATLAS22hmu 218.95957 24.68283 III SN IIn 0.036 <cit.>
ZTF22aabwyco SN 2022doi ATLAS22hta 226.91814 21.37483 V SN Ia 0.092 <cit.>
ZTF22aacabjk AT 2022dsp ATLAS22hyn 256.4644 23.61804 V
ZTF22aacinau SN 2022ebd ATLAS22ino 237.39294 13.9994 II SN Ia 0.069875 <cit.>
ZTF22aacmgor SN 2022duv ATLAS22ifq, PS22bta 120.11265 -3.38228 II SN Ia 0.054 <cit.>
ZTF22aadesjc SN 2022fnl ATLAS22keu, Gaia22cdd, PS22fdk 233.42703 43.74597 VII SN IIn 0.1035 <cit.>
ZTF22aaetqzk SN 2022gzi ATLAS22nuf, Gaia22cei, PS22ewd 266.5202 42.27627 VII SN IIn 0.089 <cit.>
ZTF22aahdxdt AT 2022ipr 181.94545 4.09447 X
ZTF22aaidexf SN 2022jdv 227.75364 5.97167 VIII SN Ia-91T-like 0.075 <cit.>
ZTF22aaikbez AT 2022jbz ATLAS22nwq 346.46355 41.59842 VII
ZTF22aaimgkp AT 2022jks ATLAS22odt 180.95259 15.02741 VII
ZTF22aaitzvr SN 2022jdf ATLAS22nks, PS22eoi 229.61761 5.23966 VII SN II 0.04 <cit.>
ZTF22aajibhc AT 2022jnp ATLAS22puz 212.28997 41.50829 VII
ZTF22aajidyk SN 2022jnn ATLAS22ntt, PS22ejw 211.73102 -25.01963 VII SN Ia 0.049 <cit.>
ZTF22aajlruz SN 2022jsj ATLAS22ohs, PS22efd 219.66979 43.7476 VII SN Ia 0.09 <cit.>
ZTF22aajpgof SN 2022jsg ATLAS22otg, PS22ehg 234.94636 -20.16468 VII SN Ia 0.07 <cit.>
ZTF22aajrdad SN 2022jtj ATLAS22obw 142.7719 20.0688 VII SN Ia 0.070942 <cit.>
ZTF22aajrrzz SN 2022jut ATLAS22nxl, PS22ejn 241.45843 17.47988 VII SN Ia 0.034129 <cit.>
ZTF22aajzfxb AT 2022kto ATLAS22pea 328.33521 -5.23381 VII
ZTF22aakaygk SN 2022kbm ATLAS22okl, Gaia22cxp, PS22eje 246.32633 9.02035 VII SN II 0.03 <cit.>
ZTF22aakdnme AT 2022ket ATLAS22ods, Gaia22cgy, PS22ewg 203.66917 22.89847 VII c
ZTF22aakejxf SN 2022kbc 181.19546 9.43827 VII SN Ia-91bg-like 0.0687 <cit.>
ZTF22aakmtrc SN 2022kpa ATLAS22otb, PS22epr 217.21103 14.99661 VII SN Ia 0.086 <cit.>
ZTF22aakvcqb AT 2019aaon ZTF19aamakuv 183.86497 -2.42975 VII
ZTF22aalmgkb AT 2022liy 202.51259 -19.40763 VII
ZTF22aalxsmp AT 2022lrq ATLAS22pun 191.32769 13.49602 VII
ZTF22aamcwwm AT 2022lni ATLAS22pxv 318.09583 -6.15508 VII
ZTF22aamrkns SN 2022kye ATLAS22qhm, Gaia22cod, PS22epo 315.44949 -19.53634 VII SN II 0.041 <cit.>
Note a: Not clear if indeed rising at discovery.
Note b: Lower than expected signal-to-noise in the spectrum attempt.
Note c: Faded before a spectrum was attempted (could have been rapidly evolving or an artifact).
Since some queries are subsets of others, we list here only the most stringent query which produced each event.
|
http://arxiv.org/abs/2307.01875v1
|
20230704183711
|
Approximate, Adapt, Anonymize (3A): a Framework for Privacy Preserving Training Data Release for Machine Learning
|
[
"Tamas Madl",
"Weijie Xu",
"Olivia Choudhury",
"Matthew Howard"
] |
cs.LG
|
[
"cs.LG",
"cs.CR",
"62-08",
"G.4"
] |
The availability of large amounts of informative data is crucial for successful machine learning. However, in domains with sensitive information, the release of high-utility data which protects the privacy of individuals has proven challenging. Despite progress in differential privacy and generative modeling for privacy-preserving data release in the literature, only a few approaches optimize for machine learning utility: most approaches only take into account statistical metrics on the data itself and fail to explicitly preserve the loss metrics of machine learning models that are to be subsequently trained on the generated data.
In this paper, we introduce a data release framework, 3A (Approximate, Adapt, Anonymize), to maximize data utility for machine learning, while preserving differential privacy. The framework aims to 1) learn an approximation of the underlying data distribution, 2) adapt it such that loss metrics of machine learning models are preserved as closely as possible, and 3) anonymize by using a noise addition mechanism to ensure differential privacy. We also describe a specific implementation of this framework that leverages mixture models to approximate, kernel-inducing points to adapt, and Gaussian differential privacy to anonymize a dataset, in order to ensure that the resulting data is both privacy-preserving and high utility.
We present experimental evidence showing minimal discrepancy between performance metrics of models trained on real versus privatized datasets, when evaluated on held-out real data. We also compare our results with several privacy-preserving synthetic data generation models (such as differentially private generative adversarial networks), and report significant increases in classification performance metrics compared to state-of-the-art models. These favorable comparisons show that the presented framework is a promising direction of research, increasing the utility of low-risk synthetic data release for machine learning.
§ INTRODUCTION
Training and using machine learning models in domains with sensitive and personally identifiable information, such as healthcare, presents significant legal, ethical and trust challenges, slowing the progress of this technology and limiting its potential positive impact. Data synthesis has been proposed as a mechanism that can offer a legally and ethically appropriate solution to the sharing and processing of sensitive data. Depending on the privacy definition and requirements, it is possible to synthesize a dataset that allows accurate machine learning (ML) model training without compromising privacy. In the case of the GDPR, for example, data that do not allow singling out may be excepted from the regulation <cit.>. In this work, we rely on differential privacy <cit.>, which, unlike weaker privacy criteria such as k-anonymity, has been shown to protect against singling out individuals <cit.>.
§.§ A privacy-preserving synthetic data generation framework maximizing utility for machine learning
For the purposes of privacy-preserving ML by means of differentially private (DP) data release, we are interested in approaches which 1) approximate the true data distribution, 2) preserve utility for machine learning (ML models trained on the data release perform similarly to models trained on true data), and 3) preserve privacy in accordance to DP.
More formally, we propose studying a class of data generation algorithms M which, given an original dataset D=(X_i,Y_i)_i=1^n with n data points X_i and labels Y_i, produce a synthetic dataset D̃=M(D), such that they
* Approximate the underlying data distribution: estimate a parametric density p_θ(x) by optimizing a log-likelihood objective L_1(p_θ, D):=𝔼_x ∼ D[-log p_θ(x)]
* Adapt the approximated data distribution such that the loss of a classifier f trained on data sampled from it is close to the loss of a classifier f̃ on the original data, under loss l:
L_2(D, D̃, f, f̃) = |𝔼_(x, y) ∼ D[ℓ(f(x), y)]-𝔼_(x, y) ∼D̃[ℓ(f̃(x), y)]|
The overall optimization procedure needs to trade off the importance of L_1, the objective encouraging faithful preservation of the data distribution, against that of L_2, the objective encouraging matching classifier loss: L = α L_1 + (1-α) L_2 (a toy illustration of this combined objective is sketched after this list).
* Anonymize by ensuring (ϵ, δ) differential privacy of the overall data publishing algorithm, such that the participation of a single data point is unlikely to be distinguishable. That is, ensure the data publishing algorithm is differentially private <cit.>.
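As an illustration of how the first two objectives can be combined, the sketch below evaluates L = α L_1 + (1-α) L_2 for a candidate synthetic dataset, using a Gaussian mixture as the density model and logistic regression with log-loss as the classifier and loss. These concrete choices are placeholders for illustration only, not the exact components evaluated later in the paper.

```python
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def combined_objective(X, y, X_syn, y_syn, alpha=0.5):
    # L1: mean negative log-likelihood of the real data under a fitted density model
    density = GaussianMixture(n_components=5, random_state=0).fit(X)
    L1 = -density.score(X)

    # L2: absolute gap between the loss of a classifier trained on synthetic data
    # (evaluated on real data) and one trained on real data (evaluated on synthetic data).
    # Assumes both datasets contain every class label.
    f = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
    f_tilde = LogisticRegression(max_iter=1000).fit(X, y)
    L2 = abs(log_loss(y, f.predict_proba(X)) - log_loss(y_syn, f_tilde.predict_proba(X_syn)))

    return alpha * L1 + (1.0 - alpha) * L2
```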
Many instantiations of this general framework are possible. In this paper, we evaluate ClustMix, a simple algorithm instantiating these 3 steps. We will choose 1) a Gaussian Mixture Model as the density estimator <cit.>, 2) the Kernel Inducing Point meta-learning algorithm <cit.> as the loss approximator (with an objective allowing a tradeoff between preserving density fidelity vs. preserving classifier fidelity), and 3) an improved version of Random Mixing to ensure privacy <cit.> - preserving combinations of data points, rather than individual data points, to facilitate a `safety in numbers' approach to avoiding reidentification. Our main contributions are the flexible privacy-preserving data generation framework described above, and the introduction of cluster-based instead of random mixing for preserving differential privacy, which, together, allow significant accuracy increases over previously published methods (see Table <ref>).
The idea to create new training examples by taking convex combinations of existing data points has been successfully leveraged in machine learning, e.g. for data augmentation <cit.>, learning with redundancy in distributed settings <cit.>, and more recently also for private machine learning <cit.>. Lee et al. <cit.> leverage random mixtures (convex combinations of a randomly sampled subset of a dataset) and additive Gaussian noise to design a differentially private (DP) data release mechanism.
However, most of these methods ignore data geometry, sampling at random instead of explicitly attempting to preserve the original data distribution. This may lead to lower downstream utility for machine learning, as low-density regions around decision boundaries may not be preserved. Mixtures of random samples may also fail to preserve certain data distributions, such as skewed and multimodal continuous variables.
In our approach, instead of random sampling, we sample from the immediate neighborhood of cluster centroids in order to preserve data distribution. Focusing on preserving cluster structure and mixing similar data points instead of random data points allows noisy mixtures to more closely approximate the original data distribution, and lose less utility than competing methods despite stronger DP guarantee.
§ RELATED WORK
Privacy-preserving data publishing for machine learning tasks is an important problem, and several approaches have been proposed to address it. Mechanisms for publishing such data can operate in an interactive or non-interactive setting <cit.>. In an interactive setting, the data publishing method receives queries from users and responds with noisy outputs to preserve data privacy. The performance of interactive data publishing methods depends on several constraints, such as the type of query, the maximum number of queries, accuracy, and computational efficiency. Malicious users may still be able to deduce an output much closer to the original answer, thereby exposing sensitive information in the dataset. In a non-interactive setting, the publishing method releases the data all at once in a privacy-preserving manner.
Among the state-of-the-art non-interactive approaches, synthetic data publishing is the most widely-used technique. This can be achieved by either anonymizing the dataset <cit.> or generating new samples based on the original data distribution <cit.>. As noted in <cit.>, computational complexity of most of the existing methods is exponential in the dimensionality of the dataset. Hence, they are not applicable to deep learning applications that commonly deal with high-dimensional data.
An alternate method is local perturbation that perturbs every data point with additive noise <cit.>. A relevant approach is random projection, which extracts lower-dimensional features via random projection and then perturbs them with some noise <cit.>. In <cit.>, the authors propose a new data publishing algorithm called Differentially Private Mix (DPMix), which generates a dataset by mixing ℓ randomly chosen data points and then perturbing them with an additive noise.
Generative Adversarial Network (GAN) <cit.> is a well-known method for generating synthetic data from real data. As GANs do not provide any privacy guarantees by default, several methods have been proposed to modify it. PATE-GAN <cit.> adapts the training procedure of the discriminator to be differentially private by using a modified version of the Private Aggregation of Teacher Ensembles (PATE) framework <cit.>. The Differentially Private GAN (DP-GAN) <cit.> aims to preserve privacy during training, by adding noise to the gradient of the Wasserstein distance. More recently, Differentially Private Mean Embeddings (DP-MERF) with Random Features have been suggested and shown to provide substantial utility improvements. DP-MERF leverages random Fourier feature representations to approximate the maximum mean discrepancy (MMD) objective in terms of two finite-dimensional mean embeddings, detaching the approximated embedding of the true data from that of the synthetic data. The former is the only term that is data dependent and requires privatization only once (and can then be re-used repeatedly to train a generator model).
In the Results section, we evaluate against these methods, and for completeness we also add a non-DP method, ADS-GAN (Anonymization Through Data Synthesis Using Generative Adversarial Networks) <cit.>. In addition to training a generator model that minimizes the discrepancy to the real data (in this case using the Wasserstein distance), the objective function of ADS-GAN also directly penalizes an identifiability term. ADS-GAN formulates identifiability as the percentage of synthetic data points appearing in closer proximity to a real data point than the next-nearest real neighbor of that real data point (in other words, synthetic instances which may put real instances at risk of distance-to-closest-record attacks).
§ METHODS
We describe a simple instantiation of the 3A framework, dubbed ClustMix. We will choose 1) a Gaussian Mixture Model as the density estimator <cit.>, 2) the Kernel Inducing Point meta-learning algorithm <cit.> as the loss approximator (with an objective allowing a tradeoff between preserving density fidelity vs. preserving classifier fidelity), and 3) an improved version of Random Mixing to ensure privacy <cit.>.
ClustMix is presented in Algorithm 2. The mixing component leverages the same intuition from previous research in data augmentation <cit.> and vicinal risk minimization <cit.> that convex combinations of samples from the vicinity of existing training examples tend to follow the original data distribution.
§.§ Approximation through Gaussian Mixture Models
We consider modeling the data distribution from the probabilistic perspective, approximating the probability density of the data by means of a finite mixture model <cit.>. The joint density over the training samples factorizes as
p(X, Z |θ)=∏_n=1^N∏_k=1^K[p(z_n=k) p(x_n|θ_k)]^𝕀[z_n=k]
In ClustMix, before fitting the Gaussian Mixture Model, we randomly slice the data into n regions and fit the model within each region. Specifically, for a given feature, we uniformly sample n data points as cutoff points, and we apply the following algorithm to each resulting region. This ensures that each sample can contribute to the creation of only one cluster and of only one synthetic sample. We also assume isotropic Gaussians in order to reduce computational complexity, but note that the algorithm can be made more general by relaxing this constraint.
There is a privacy-utility tradeoff associated with the size of mixtures: mixtures that are too small incur privacy risk, as shown by <cit.>. The exact minimum mixture size l^min(ϵ,δ,σ_max) for given privacy parameters (ϵ,δ) can be obtained from Eq. <ref> by means of optimization.
Thus, our data synthesis Algorithm 2 creates mixtures from clusters obtained from a Gaussian Mixture Model (GMM) adapted for size-constrained clustering, with the constraint that all cluster sizes have to be greater than or equal to l^min(ϵ,δ,σ_max). We can reformulate Eq. <ref> by factorizing to explicitly control the distribution over cluster sizes, as first proposed by <cit.>:
p(X, Z |θ)=∏_k=1^K[p(s_k) ∏_n=1^N p(x_n|θ_k)^I[z_n=k]],
where s_k refers to the number of samples in cluster k, X denotes the features, and Z the latent cluster assignments. The assignment of data points to clusters can be obtained by finding the maximum of the joint log likelihood:
log p(Z | X, θ) = log p(X |θ, Z)+log p(Z)
= ∑_nlog p(x_n|θ_z_n)+∑_klog p(s_k)
p(s) can be chosen to take the form of a step function, such that the probability is zero for all cluster sizes s smaller than our minimum cluster size l^min, and uniformly nonzero otherwise. The optimum assignment based on Eq. <ref> can then be found by expectation maximization, and yields cluster parameters and cluster assignments that approximate the data distribution given the size constraint (all clusters must contain at least l^min data points). We skip further details for reasons of space (different solution approaches can be found in <cit.> and <cit.>).
Below, we will use GMMConstrained(𝒳, n, l^min) to denote a function taking a dataset 𝒳 and a minimum cluster size l^min as inputs, and producing n clusters (subsets of 𝒳) as its output.
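A minimal illustration of what GMMConstrained might look like is sketched below: it fits a standard isotropic GMM with scikit-learn and then greedily merges clusters smaller than l^min into their nearest neighbour. This post-hoc merging heuristic is only a stand-in for the constrained EM formulation described above, not the implementation used in our experiments.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_constrained(X, n_clusters, l_min, random_state=0):
    """Illustrative stand-in for GMMConstrained(X, n, l_min): fit an isotropic GMM,
    then merge clusters smaller than l_min into the nearest non-empty cluster."""
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="spherical",
                          random_state=random_state).fit(X)
    labels = gmm.predict(X)
    means = gmm.means_
    while True:
        sizes = np.bincount(labels, minlength=n_clusters)
        small = [k for k in range(n_clusters) if 0 < sizes[k] < l_min]
        if not small:
            break
        k = small[0]
        others = [j for j in range(n_clusters) if j != k and sizes[j] > 0]
        if not others:  # everything already collapsed into a single cluster
            break
        nearest = others[int(np.argmin(np.linalg.norm(means[others] - means[k], axis=1)))]
        labels[labels == k] = nearest  # merge the undersized cluster
    return [X[labels == k] for k in np.unique(labels)]
```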
§.§ Adapt through Kernel Inducing Points
The Kernel Inducing Points (KIP) meta-learning algorithm <cit.> allows obtaining a smaller or distorted version of an original dataset which nevertheless maintains similar model performance as a model trained on the original data, and is thus uniquely well suited for the second step of our framework. It leverages kernel ridge regression, allowing a computationally efficient convex first order optimization approach to adaptation.
Let D = (X_t, Y_t)_i=1^n be the original dataset, and D̃ = (X̃_s, Ỹ_s)_i=1^s ∼ p_θ(x) be a dataset obtained from the approximated model. In our case, ClustMix uses the cluster centroids as the smallest number of most representative samples (since the amount of noise that needs to be added to protect privacy rapidly grows with sample size - see next section).
Given a kernel K, KIP defines a kernel ridge regression loss function as follows:
L_KRR(X_s, y_s) = 1/2 ‖ y_t - K_{X_t X_s} (K_{X_s X_s} + λ I)^{-1} y_s ‖_2^2
where λ>0 is a fixed regularization parameter. The KIP algorithm then minimizes Eq. <ref> with respect to the support dataset X_s. The optimization procedure is initialized with X̃_s, the synthetic dataset obtained from the approximated model by extracting centroids.
The kernel we use here is the Neural Tangent Kernel <cit.>, mirroring the exact setup described in <cit.> (neural_tangents library, Adam optimizer, z-scoring as preprocessing), with two exceptions: omitting any data augmentation, and using a fully connected architecture instead of a convolutional neural network in order to facilitate application to tabular data.
In our optimization procedure, we make explicit the tradeoff between preserving data density and preserving classifier accuracy. In addition to allowing users to make this choice explicitly (e.g. in domains where faithfully preserving density is important), this is also important for privacy, as allowing KIP to over-optimize on the loss may result in X_s regimes which are difficult to anonymize. A simple example is a data point in X_s matching an outlier in X, thus easily exposing the identity of that outlier to simple privacy attacks. This can be avoided with a nonzero α in our procedure, which forces X_s to stay arbitrarily close to the GMM cluster centroids. Finally, a nonzero α acts as a regularizer and prevents overfitting to a decision boundary that may not generalize well.
L(X_s, y_s) = α · tr((X_s - X̃_s)^T (X_s - X̃_s)) / tr(X̃_s^T X̃_s) + (1-α) · 1/2 ‖ y_t - K_{X_t X_s} (K_{X_s X_s} + λ I)^{-1} y_s ‖_2^2
The first term is the normalized sum of squared Euclidean distances between the data points in the original approximation set X̃_s and the corresponding KIP-adapted data points X_s, weighted by α. Thus, α=0 would preserve the output of the Approximate step (in our case, GMM cluster centroids), and α=1 would simply yield the X_s most conducive to preserving classifier accuracy (ignoring density approximation). In our implementation, we obtain the optimal α as part of hyperparameter optimization. We apply KIP within each randomly sliced region.
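To make the Adapt step concrete, the sketch below evaluates the weighted objective of Eq. <ref> with NumPy, using an RBF kernel as an illustrative stand-in for the Neural Tangent Kernel used in our experiments; the gradient-based minimization over X_s (done with the neural_tangents / JAX setup in practice) is omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Simple RBF kernel as a stand-in for the Neural Tangent Kernel
    d2 = np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def adapt_loss(X_s, y_s, X_t, y_t, X_s0, alpha=0.3, lam=1e-3):
    """alpha * normalized distance to the centroids X_s0
       + (1 - alpha) * kernel ridge regression loss on the original data (X_t, y_t)."""
    dist = np.trace((X_s - X_s0).T @ (X_s - X_s0)) / np.trace(X_s0.T @ X_s0)
    K_ts = rbf_kernel(X_t, X_s)
    K_ss = rbf_kernel(X_s, X_s)
    pred = K_ts @ np.linalg.solve(K_ss + lam * np.eye(len(X_s)), y_s)
    krr = 0.5 * np.sum((y_t - pred) ** 2)
    return alpha * dist + (1.0 - alpha) * krr
```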
§.§ Anonymize through Gaussian Differential Privacy
Once the data distribution is approximated and adapted, privacy-preserving synthetic data can be generated by a variety of approaches. <cit.> showed that a simple approach of taking linear combinations of data points yields promising results, provided that a sufficient number of instances are mixed. We follow the same approach, with the difference that we mix data points of the same cluster, rather than random data points, in order to more closely preserve the underlying data distribution. Finally, we add noise as derived from Gaussian Differential Privacy as the last ingredient of the algorithm shown below.
To obtain the accuracies reported in the Results, Algorithm 2 was re-run with different parameter choices for σ_max. Of the resulting datasets, the one yielding the highest accuracy against the training set when training a classifier on the synthetic data was taken as the final synthetic dataset to evaluate.
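The selection of σ_max described above amounts to a simple sweep. A hypothetical sketch is given below, where clustmix() is a placeholder for Algorithm 2 (not an actual function defined in this paper) and a logistic regression classifier stands in for the downstream model.

```python
from sklearn.linear_model import LogisticRegression

# clustmix() is a hypothetical placeholder for Algorithm 2 (Approximate + Adapt + Anonymize)
best = None
for sigma_max in [0.05, 0.1, 0.2, 0.5]:
    X_syn, y_syn = clustmix(X_train, y_train, sigma_max=sigma_max, eps=1.0, delta=1e-5)
    acc = LogisticRegression(max_iter=1000).fit(X_syn, y_syn).score(X_train, y_train)
    if best is None or acc > best[0]:
        best = (acc, X_syn, y_syn)
acc_best, X_release, y_release = best
```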
§.§ Differential privacy guarantees
We first provide some preliminaries and theorems on differential privacy for our proofs. We adopt Gaussian differential privacy since it achieves a better tradeoff between privacy and utility <cit.>.
(Adjacent Datasets)
Let two datasets S = (X_i, Y_i)_i=1^n and S^' = (X_i^', Y_i^')_i=1^n be adjacent if they are identical except for one data point. We write S ∼ S^'. <cit.>
(Differential Privacy)
A data publishing algorithm M(S) is called (ϵ, δ) differentially private if it satisfies,
P[M(S) ∈ A] ≤ e^ϵ P[M(S^') ∈ A] + δ
M is an algorithm that publishes some statistics of the data. <cit.>
Here, M(S) can be thought of as an array in which each entry corresponds to a feature of a generated data point.
(trade-off function) For any two probability distributions P and Q on the same
space, define the trade-off function T(P, Q):[0,1] → [0,1] as,
T(P, Q)(α) = inf{β_ϕ: α_ϕ≤α}
where the infimum is taken over all measurable rejection rules ϕ, and α_ϕ and β_ϕ denote the type I and type II errors of ϕ, respectively.
<cit.>
(Gaussian Differential Privacy)
A data publishing algorithm M(S) is called μ-Gaussian differentially private (μ-GDP) if it satisfies,
T(M(S), M(S^')) ≥ G_μ
where G_μ(α) = Φ(Φ^-1(1 - α) - μ), and Φ denotes the standard normal CDF.
<cit.>
(Sensitivity)
Assume θ(S) is a univariate statistic of the dataset. The sensitivity of θ is
sens(θ) = sup_S, S^'|θ(S) - θ(S^')|
<cit.>
A Gaussian mechanism that operates on a statistic θ as M(S)=θ(S)+ξ, where ξ∼𝒩(0, sens(θ)^2 / μ^2), is μ-GDP.
See theorem 2.7 in <cit.>
A mechanism is μ-GDP if and only if it is (ϵ,δ(ϵ))-DP for all ϵ≥ 0. Where,
δ(ε)=Φ(-ε/μ+μ/2)-e^εΦ(-ε/μ-μ/2)
See Corollary 2.13 in <cit.>
The n-fold composition of μ_i-GDP mechanisms is √(μ_1^2 + ... + μ_n^2)-GDP
See Corollary 3.3 in <cit.>
Consider T data points forming a feature matrix X := [X_1 X_2 ... X_T] ∈ R^D × T, normalized such that X ∈ [0,1]^D × T. We obtain each synthetic data point X'_t by averaging l original points and adding Gaussian noise:
X'_t=1/l∑_i=1^l X_t_i + Q_t,
where Qt is sampled from N(0,σ_t^2 I_D) and l ≤ D.
Suppose each original data point can be used only once in the generation of the synthetic dataset and the number of classes equals C. Then this data publishing algorithm is (ϵ,δ(ϵ,σ,l,C,D))-DP for all ϵ≥ 0, where δ(ϵ,σ,l,C,D) is given by
δ(ϵ,σ,l,C,D) = Φ(-ϵ l σ/√(CD)+√(CD)/2 l σ)-e^ϵΦ(-ϵ l σ/√(CD)-√(CD)/2 l σ),
and Φ denotes the standard normal CDF of N(0,1).
If we change a single data point in the original data from X_1 to X_1^', this will at most change a single synthetic data entry from X_t to X_t^'. By Definition 5 and Equation 6, we have
sensitivity = sup|X_t - X_t^'| = sup|1/l (X_1 - X_1^')| ≤ 1/l,
as each data point in X lies in [0, 1]^D × T. Thus, the sensitivity is 1/l.
Let μ = 1/(lσ), and recall that Q_t is sampled from N(0, σ_t^2 I_D) with
σ_t^2 = σ^2 = ((1/l) / (1/(lσ)))^2 = ((1/l)/μ)^2,
i.e., the noise standard deviation equals the sensitivity 1/l divided by μ. Since the sensitivity is 1/l, Theorem 1 shows that releasing a single entry of X'_t is 1/(lσ)-GDP.
Since changing one original data point can influence at most one synthetic data point and its D features, composition gives μ = √(D · (1/(lσ))^2) = √D/(lσ). However, since we use the KIP algorithm, a single point could potentially influence one synthetic point in each of the C classes in that region, so μ = √(CD · (1/(lσ))^2) = √(CD)/(lσ). Based on Corollary 2, our data publishing algorithm is thus √(CD)/(lσ)-GDP.
Corollary 1 in <cit.> states that a mechanism is μ-GDP if and only if it is (ϵ,δ(ϵ))-DP for all ϵ≥ 0. Here, δ(ε)=Φ(-ε/μ+μ/2)-e^εΦ(-ε/μ-μ/2), in order to ensure that a Gaussian output perturbation mechanism is (ϵ,δ(ϵ))-DP, as first shown by <cit.>. Substituting μ=√(CD)/l σ, our data publishing algorithm is thus (ϵ,δ(ϵ))-DP, where δ(ϵ)
is given by
δ(ϵ,σ,l,C,D) = Φ(-ϵ l σ/√(CD)+√(CD)/2 l σ)-e^ϵΦ(-ϵ l σ/√(CD)-√(CD)/2 l σ)
Since each data point in the original set can be used by only one randomly selected cluster to fit the GMM, and we apply the KIP algorithm only within this region, it can influence the generation of at most C samples. Thus, our data publishing algorithm is differentially private.
§.§ Data and Experimental Setup
Implementation Details: we leverage an extension of the Gaussian Mixture Model implementation in scikit-learn for the Approximate step (adapted to ensure clusters never contain fewer data points than the given constraint - see Section 3.1; reverting to constrained KMeans at timeout or if no solution was found), and an open-source implementation of KIP[https://github.com/google-research/google-research/tree/master/kip] for the Adapt step, modified to optimize the weighted objective function in Eq. (<ref>), trading off faithfulness to the output of the Approximate step against utility for ML by means of the α parameter. Noise addition is based on GDP; we obtain the standard deviation of the Gaussian noise from Eq. (<ref>) by means of optimization, as it has no closed-form solution. Specifically, we use the Nelder-Mead procedure <cit.> to obtain the smallest possible noise that ensures the given DP parameters ϵ and δ are not exceeded (and discard generated data points where no such solution can be found). Obtained noise parameters were cached for computational performance.
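A minimal sketch of this calibration is given below: it searches for the smallest σ for which δ(ϵ,σ,l,C,D) from the theorem above does not exceed the target δ. The penalty formulation and all names are illustrative assumptions; the actual implementation may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def delta_bound(eps, sigma, l, C, D):
    # delta(eps, sigma, l, C, D) from the theorem, with mu = sqrt(C*D) / (l * sigma).
    mu = np.sqrt(C * D) / (l * sigma)
    return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

def calibrate_sigma(eps, delta_target, l, C, D, sigma0=1.0):
    def objective(s):
        sigma = abs(s[0]) + 1e-12
        # Penalize sigma values that violate the privacy constraint.
        violation = max(0.0, delta_bound(eps, sigma, l, C, D) - delta_target)
        return sigma + 1e6 * violation
    res = minimize(objective, x0=[sigma0], method="Nelder-Mead")
    return abs(res.x[0])

# Example: eps = 1, delta = 1/N for N = 10_000, mixing l = 50 points,
# C = 10 classes, D = 20 features.
print(calibrate_sigma(1.0, 1e-4, l=50, C=10, D=20))
```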
Data preprocessing: data features were scaled to the interval [0,1] for ClustMix to simplify proportionate noise addition (using min-max scaling or a Normal quantile transform, whichever worked better during hyperparameter optimization). For the only dataset with continuous target variables (MIMIC-III hospital length of stay), lengths of stay were discretized into 4 buckets for classification (less than 3 days, less than a week, less than two weeks, or two weeks or more).
Hyperparameter tuning: for all benchmark algorithms as well as for ClustMix, we use heteroscedastic evolutionary Bayesian optimisation (HEBO) <cit.> to optimize free parameters - over 30 iterations for all approaches - excepting DP parameters which are fixed (ϵ=1 and δ=1/N with N being sample size; except for MNIST, where δ=10^-5 to facilitate easier comparison with benchmark papers using the same approach).
Benchmark Datasets and Methods: We compare with state-of-the-art DP data generation algorithms such as DP-GAN, DP-MERF, and PATE-GAN. We also include the non-DP data generation algorithm ADS-GAN, as it can lower the risk of information leakage. We compare on 7 different real datasets; dataset details are listed in Table 2.
Evaluation: qualitative results (i.e. MNIST images, and plots of toy data) can be inspected in Appendix A. Quantitative results are presented in Table <ref> using Area Under the ROC Curve (AUC), micro-averaged (i.e. one vs. one for all pairwise combinations) in the case of multi-class classification, except for MNIST, conventionally evaluated using accuracy. For each dataset, we set aside 25% of the data as a held-out test set (except MNIST, which has its own conventionally used test set, comprising 14% of the total data). `Real' performance metrics are calculated by training a LightGBM model of gradient boosted decision trees <cit.> on the training set and evaluating on the test set. For each benchmark method and for ClustMix, performance metrics are calculated by first running the method to generate a privacy-preserving synthetic training set based on the real training set, then training a LightGBM model on that synthetic training set, and evaluating it on the real test set.
§ RESULTS
We evaluate ClustMix against several state-of-the-art differentially private data generation approaches in Table <ref>. The combination of cluster-based rather than random mixing for preserving differential privacy (unlike similar mixing-based approaches <cit.>), together with adaptation for machine learning utility, results in significantly higher predictive accuracy of models trained on synthetic data and tested on real data, compared against differentially private methods.
The mechanism can be more clearly understood from the simple 2D toy datasets illustrated in Figure <ref>. In an attempt to pick σ_max that maximizes utility on the training set, ClustMix is forced to generate a very small number of noisy cluster centroids when ϵ=0.1 (which ensures large l and thus minimizes the additive noise), but yields many low-noise centroids at ϵ=10 (since at this ϵ, smaller clusters are sufficient to ensure that the noise levels are small enough).
Similar observations apply to the higher-dimensional MNIST digit dataset (Figure <ref>), where larger clusters yield the optimal accuracy at ϵ=1 (each digit is a linear combination of 118 training digits on average, with added noise), but smaller clusters are more helpful at ϵ=0.2 (where each digit is obtained by averaging 20 training digits plus noise).
§ CONCLUSION
We propose a simple framework for privacy-preserving synthetic data generation that explicitly takes into account machine learning utility, in addition to faithfully modeling data distribution. We have also presented a specific instantiation of this framework, ClustMix, and have presented substantial increases in classification performance metrics compared to state-of-the-art models; implying that the presented framework constitutes a promising direction of research increasing the utility of low-risk synthetic data release for machine learning.
§ APPENDIX
§ MNIST EXAMPLE
§.§.§ Acknowledgements
Many thanks to Cemre Zor for helpful comments on the Manuscript.
|
http://arxiv.org/abs/2307.03017v2
|
20230706143101
|
EffLiFe: Efficient Light Field Generation via Hierarchical Sparse Gradient Descent
|
[
"Yijie Deng",
"Lei Han",
"Tianpeng Lin",
"Lin Li",
"Jinzhi Zhang",
"Lu Fang"
] |
cs.CV
|
[
"cs.CV"
] |
With the rise of Extended Reality (XR) technology, there is a growing need for real-time light field generation from sparse view inputs.
Existing methods can be classified into offline techniques, which can generate high-quality novel views but at the cost of long inference/training time, and online methods, which either lack generalizability or produce unsatisfactory results.
However, we have observed that the intrinsic sparse manifold of Multi-plane Images (MPI) enables a significant acceleration of light field generation while maintaining rendering quality.
Based on this insight, we introduce EffLiFe, a novel light field optimization method, which leverages the proposed Hierarchical Sparse Gradient Descent (HSGD) to produce high-quality light fields from sparse view images in real time.
Technically, the coarse MPI of a scene is first generated using a 3D CNN, and it is further sparsely optimized by focusing only on important MPI gradients in a few iterations.
Nevertheless, relying solely on optimization can lead to artifacts at occlusion boundaries. Therefore, we propose an occlusion-aware iterative refinement module that removes visual artifacts in occluded regions by iteratively filtering the input.
Extensive experiments demonstrate that our method achieves comparable visual quality while being 100x faster on average than state-of-the-art offline methods and delivering better performance (about 2 dB higher in PSNR) compared to other online approaches.
Light field, Multi-plane Image, Hierarchical Sparse Gradient Descent, Occlusion-aware Iterative Refinement.
EffLiFe: Efficient Light Field Generation via Hierarchical Sparse Gradient Descent
Yijie Deng^*,
Lei Han^*,
Tianpeng Lin,
Lin Li,
Jinzhi Zhang,
and Lu Fang^
Yijie Deng, Jinzhi Zhang and Lu FANG are with Dept. of Electrical Engineering at Tsinghua University and also Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084. Yijie Deng and Jinzhi Zhang are also with Tsinghua Shenzhen International Graduate School. Lei Han, Tianpeng Lin and Lin Li are with Huawei Technology.
*: Equal Contribution.
Corresponding Author
(fanglu@tsinghua.edu.cn,
<http://luvision.net>).
Received: date / Accepted: date
§ INTRODUCTION
With the rapid development of XR/Naked Eye 3D display devices, there is an increasing demand for live light field video rendering technique that can record and stream light field video in real time, allowing viewers to interactively change the viewpoint and focus of the video. Even with the latest breakthroughs of light field generation<cit.> and neural radiance fields<cit.>, it still remains a challenging task.
To enable live light field generation, the supporting algorithms and representations should be capable of producing high-quality view synthesis results in real time and generalize well to unseen scenes. However, current approaches face difficulties in achieving a balance between visual performance and real-time effectiveness:
* Methods that rely on implicit scene representation, such as NeRF <cit.> and its derivatives <cit.> can produce view synthesis results with high visual quality, but require long-time per-scene optimization and dense view inputs.
While some generalization techniques <cit.> can generate novel views that are generalizable with the help of multi-view stereo (MVS) geometry rules, it is still not fast enough for them to achieve real-time performance due to high memory usage and complex computation of intermediate cost volumes.
* Methods that rely on explicit scene representation, such as Plenoxels<cit.>, PlenOctrees<cit.> and DVGO<cit.> enclose a 3D scene in a voxel grid and optimize the radiance fields within it. While such methods can reproduce intricate geometric and texture details at high grid resolutions, they can not generalize to new scenes without extensive per-scene optimization.
While some methods with learned features <cit.> can generalize well to unseen scenes from sparse views, they still require a significant amount of time to infer a scene due to the heavy computational burden of processing large-sized MPIs and plane sweep volumes.
* Methods relying on surrogate geometry are computationally efficient by only sampling around the estimated geometry scaffolds such as depth maps <cit.> and surface mesh. However, these methods may fail if the precomputed geometry scaffolds are incorrectly estimated for semi-transparent surfaces or thin plates, therefore they are not very robust for complex scenarios.
Based on the above observations, we conclude that live light field reconstruction remains a challenging task for the following reasons: 1) Recovering a 3D light field from 2D images is a difficult ill-posed inverse problem that necessitates either dense input views or geometric priors learned from data;
2) Implicit representations save memory but are computationally complex, resulting in a long inference time. Explicit representations offer faster rendering speed but require significant memory and time for constructing the light field;
3) High-quality light field generation necessitates sophisticated optimization to handle intricate geometry details such as semi-transparent surfaces and thin structures.
To overcome the challenges, we propose EffLiFe, which employs Hierarchical Sparse Gradient Descent (HSGD) to produce high-quality light fields in real time from sparsely sampled views.
Unlike prior methods<cit.> that directly generate a full-sized MPI, our approach derives it hierarchically, as light field details within an MPI can be effectively refined utilizing high-resolution information grounded on a well-established manifold.
Traditionally, an MPI is estimated by minimizing disagreement between predicted and observed measurements using analytical gradient descent. DeepView<cit.> incorporates a learned gradient descent strategy<cit.> that explicitly computes gradients of MPI's color and alpha channels. These gradients are fed into a neural network for rapid and adaptive gradient descent. However, in contrast to DeepView <cit.> that utilizes full-sized MPI gradients, our approach selectively employs important gradients, based on the observation that useful gradients in the MPI updating are spatially sparse.
Specifically, inverse rendering utilizes gradient descent to refine the desired scene representation, while the majority of gradients are close to zero. This implies that further optimization in these areas is unlikely to enhance rendering quality and would instead demand significant computational resources.
Fig. <ref> depicts the derivation of our MPI sparse alpha gradients which are utilized for sparse gradient descent. Firstly, the multi-plane gradients 𝒜_d are derived by computing the gradients of the MPI with respect to its color channel. Next, 𝒜_d is sparsified by clipping small gradients. Finally, the sparse gradient descent module updates the MPI according to the sparse important gradients.
Experiments on public datasets demonstrate that compared with previous methods<cit.>, our optimization strategy based on the sparsity of gradients significantly reduces the computational complexity without loss of accuracy. Also, in comparison with methods<cit.> relying on surrogate geometry, our method achieves superior visual performance while maintaining high temporal efficiency.
Typically, novel view synthesis approaches suffer from poor quality at occlusion boundaries, which can result in unpleasant visual artifacts. Although these artifacts may have a limited impact on standard metrics such as PSNR/SSIM, they can still be noticeable and negatively impact the overall viewing experience. Thus, we further propose an occlusion-aware iterative refinement module to address this issue without introducing additional complexity. We identify that ambiguity at occlusion boundaries is mainly caused by inconsistent observations across input views. More specifically, an MPI is accurately estimated only if observations from all input views correspond well to each other, but occluded views provide random noisy observations making the ill-posed inverse problem hard to converge. Based on this observation, we identify the occluded regions based on the previously estimated scene representation and replace them with observations from other non-occluded views.
Overall, the technical contributions are summarized as follows.
* We present an optimization strategy: Hierarchical Sparse Gradient Descent (HSGD), that efficiently recovers high-quality light fields while effectively reducing inference time and GPU memory usage.
* We propose an occlusion-aware iterative refinement module that filters the input to reduce occlusion artifacts. This method is free of learning and performs effectively for scenes with severe occlusion.
* Extensive experiments on public datasets demonstrate that EffLiFe achieves a PSNR improvement of approximately 2db compared to other online approaches, or comparable performance while being on average 100x faster than offline approaches.
* As a 3D display application demonstrated in Fig. <ref>, our solution can generate the MPIs at around 30 FPS, and novel views can be rendered from a built-up MPI at about 700 FPS for the resolution of 378× 512.
Novel views of different angles are then provided to a 3D display, which presents naked-eye 3D effects through multi-layer light field decomposition and directional backlighting <cit.>.
§ RELATED WORK
We focus on live light field generation in this paper, which requires novel view synthesis that generalizes to arbitrary scenes with high rendering quality and can be processed in real time. However, existing approaches can hardly achieve all these requirements.
§.§ High-quality Novel View Synthesis
The rendering quality of novel view synthesis has seen significant improvement since the proposal of NeRF<cit.>. NeRF encodes a scene in a 3D radiance volume and optimizes the density and color of the volume using a simple MLP. Novel views are generated by integrating the volume's density and color along each viewing ray. Building upon the success of NeRF, several methods such as Mip-NeRF<cit.>, RawNeRF<cit.>, Ref-NeRF<cit.>, and Mip-NeRF360<cit.> have further improved the rendering quality of novel view synthesis. For example, Mip-NeRF reduces objectionable aliasing artifacts by casting cones and prefiltering the positional encoding function, while RawNeRF trains the NeRF model on raw camera data to achieve novel high dynamic range view synthesis. Ref-NeRF produces realistic shiny surfaces by explicitly modeling surface reflections, and Mip-NeRF360 solves the problem of unbounded scene novel view synthesis with a non-linear scene parameterization, online distillation, and a novel regularizer. Additionally, another MPI-based method<cit.> parameterizes each pixel as a linear combination of basis neural functions, successfully reproducing realistic view-dependent effects.
Although these methods have improved the rendering quality from different perspectives, they all require a long per-scene optimization time because they train their models to encode the content of a particular scene, making them not generalizable to new scenes.
§.§ Generalizable Novel View Synthesis
To achieve generalizable novel view synthesis, several methods have been proposed that construct radiance fields from image features. PixelNeRF<cit.> encodes the input images into pixel-aligned feature grids and then renders points along a target ray by projecting them on the input image. For better performance, IBRNet<cit.> aggregates more information (image colors, features and viewing directions) and designs a ray transformer for decoding colors and densities. GRF<cit.> and NeuRay<cit.> achieve occlusion-aware novel view synthesis by incorporating the prior that "the features of a point in 3D space are the same when viewed from different angles". StereoMag<cit.>, LLFF<cit.>, DeepView<cit.> and MLI<cit.> achieve generalization by modeling the scene as a set of fronto-parallel planes, in which multi-view consistency is subtly encoded. However, MLI<cit.> conducts an extra step to convert the planes to deformable layers and achieves high visual metrics. MVSNeRF<cit.> also leverages the concept of multi-view geometry and generalizes to new scenes with a neural cost volume. Instead of introducing image features and geometric priors for generalization, <cit.> applies meta-learning algorithms to learn the initial weight parameters for a NeRF model, enabling faster convergence.
Though generalizable, it still takes them seconds or minutes to generate a novel view. In addition, methods like PixelNeRF<cit.> and MVSNeRF<cit.> are not robust enough that they usually require extra finetuning to produce satisfying results.
§.§ Real-time Novel View Synthesis
To achieve real-time inference speed of a trained scene, structured representations and efficient computation skills are employed. KiloNeRF<cit.> replaces the original large MLP of NeRF with thousands of smaller faster-to-evaluate MLPs, enabling up to 40 FPS of rendering speed. Plenoctrees<cit.> render novel views by representing the scene in a structured octree-based 3D grid. FastNeRF<cit.> is capable of rendering novel views at 200 FPS by caching a deep radiance map and efficiently querying it. Except for NeRF-like representations, a built-up multi-plane image (MPI) from StereoMag/LLFF/DeepView<cit.> can achieve a rendering speed of 60 FPS. (The rendering speeds of above methods are all evaluated on 800x800 images of Synthetic-NeRF<cit.> dataset.)
Despite the real-time inference speed of novel view synthesis, building up a new scene in real time remains difficult. The cutting-edge optimization-based method InstantNGP<cit.> reduces the training time of NeRF from hours to several seconds by using a hash grid and hardware acceleration techniques. DeepView<cit.> is currently the fastest MPI-based method; however, it still requires about 50 seconds to generate an MPI. ENeRF<cit.> is the state-of-the-art generalizable novel view synthesis method and can render new views at 30 FPS on the ZJU-MoCap<cit.> dataset by sampling few points around a computed depth map; however, the rendering speed and image quality can still be improved.
§.§ Light Field Video Streaming
Instead of producing a single image at a time, <cit.> output the entire light field of the scene as multi-plane images or layered mesh representations. With a generated scene light field, novel views can be rendered easily with nearly no computation budget by simply ray querying. <cit.> encodes the scene in a set of multi-sphere images and compresses them into an atlas of scene geometry, allowing light field videos to be streamed over a gigabit connection. Though this method is real-time in rendering and streaming, it still requires a long time to generate a new scene light field because it is based on DeepView<cit.>. Starline<cit.> and VirtualCube<cit.> are another two methods that make live light field streaming possible. They both build up their systems using several RGBD cameras, and the additional depth channel enables fast scene geometry extraction, thus saving much time for computation. However, in this paper, we only compare RGB-based novel view synthesis methods to explore their potential for live light field generation.
§ METHOD
The proposed method, EffLiFe, aims to facilitate real-time generation of light fields using a collection of posed source view images. To accomplish this goal, we introduce Hierarchical Sparse Gradient Descent (HSGD), an efficient and high-quality optimization technique for the Multi-plane Image. By leveraging HSGD, we can achieve real-time light field generation while maintaining a high level of quality.
Fig. <ref> presents an overview of the proposed method, which comprises two primary processes: data preparation and hierarchical sparse gradient descent. Sec. <ref> introduces the data preparation stage, which encompasses the construction of the plane sweep volume and the initial generation of the Multi-plane Image (MPI). Sec <ref> introduces the hierarchical sparse gradient descent stage, as depicted in Fig. <ref>. This module comprises three major operations: gradient formulation, sparse gradient selection, and sparse gradient update. Prior to discussing our method in detail, we provide a concise introduction of the Multi-plane Image (MPI), and our fundamental optimization method, Learned Gradient Descent, in Sec. <ref>.
§.§ Preliminaries
Multi-plane Image (MPI). An MPI <cit.> M is composed of D fronto-parallel RGBα planes in the view frustum of a camera. We denote the RGB channels of an MPI plane as c_d and the alpha channel as α_d. The target view Ĩ_t is rendered by compositing the MPI in the back-to-front order utilizing the repeated over operator O as defined in <cit.>:
Ĩ_t = 𝒪(ω_t(M_r)),
where ω_t warps the MPI M_r from reference view to target view via homowarping. And the over operator 𝒪 is defined as the compact form:
𝒪(M) = ∑_d=1^Dc_dα_d∏_i=d+1^D(1 - α_i).
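For clarity, the over operator can be implemented as a simple front-to-back accumulation; the following PyTorch sketch assumes planes indexed back-to-front along the first axis (the last plane is the nearest one), and the tensor layout and function name are illustrative assumptions.

```python
import torch

def over_composite(color, alpha):
    """color: (D, 3, H, W); alpha: (D, 1, H, W); planes ordered back-to-front."""
    D = alpha.shape[0]
    out = torch.zeros_like(color[0])
    trans = torch.ones_like(alpha[0])
    # Accumulate from the nearest plane to the farthest, tracking the
    # transmittance prod_{i > d} (1 - alpha_i) seen so far.
    for d in range(D - 1, -1, -1):
        out = out + color[d] * alpha[d] * trans
        trans = trans * (1.0 - alpha[d])
    return out  # composited image, (3, H, W)
```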
Learned Gradient Descent. Learned gradient descent is firstly proposed by <cit.> to solve ill-posed inverse problems for CT reconstruction. Here, we follow DeepView<cit.> to apply learned gradient descent to the MPI generation problem. Inverse problems refer to problems where one seeks to reconstruct parameters characterizing the system under investigation from indirect observations. Mathematically, the problems can be formulated as reconstructing a signal f_true∈ X from data g ∈ Y:
g=𝒯(f_true)+δ g,
where 𝒯: X→ Y is the forward operator that models how a signal generates data without noise, and δ g ∈ Y is a single sample of a Y-valued random variable that represents the noise component of data.
Typically, such problems are resolved by iteratively updating the reconstructed f̂ with the analytical gradient descent method:
f̂_n + 1 = f̂_n + λ [∂ℒ(f̂_n)/∂f̂_n + ∂ϕ (f̂_n)/∂f̂_n],
where ℒ is a loss function measuring the difference between the true signal f_true and the reconstructed f̂, λ is the step size for each iteration, and ϕ is a prior on f_true.
However, it requires too many iterations to converge and is very likely to fall into local optima. Therefore, <cit.> propose to use learned gradient descent instead of analytical gradient descent, where the update rule is defined by a deep neural network 𝒩_ω:
f̂_n + 1 = f̂_n + 𝒩_ω [∂ℒ(f̂_n)/∂f̂_n + f̂_n].
The learned gradient descent approach is already proved in various works to achieve better results with fewer iterations.
§.§ Data Preparation for Optimization
As depicted in Fig. <ref>, our primary optimization process, Sparse Gradient Descent, relies on two inputs: filtered plane sweep volumes (PSVs) and an MPI. In the following section, we provide a brief overview of how these inputs are constructed and generated. Additionally, we discuss the occlusion filter module in detail in Sec. <ref>.
Hierarchical PSV Construction. Given N source view images I_i=1^N of size H× W× 3 and their corresponding camera poses [K_i,R_i|t_i], as well as for the reference view I_r, we construct the plane sweep volume P of size N × D × H × W × 3 leveraging homographic warping:
ℋ_i(d) = K_iR_i(ℐ + (R_i^-1t_i - R_r^-1t_r)n^TR_r/d)R_r^-1K_r^-1,
where n denotes the plane normal and ℐ is an identity matrix. And the homography matrix ℋ_i(d) warps a pixel (u,v) of the reference view at depth d to the source view I_i. The warped source view I_i^w is defined as:
I_i^w(u,v,d) = I_i(ℋ_i(d)[u,v,1]^T).
By concatenating all the warped source view images I_i^w along the depth dimension, the plane sweep volume P is constructed. Before feeding P into the network, we downsample it hierarchically at multiple resolutions. Specifically, we create the plane sweep volume P_m of size N × D × H/2^m × W/2^m × 3 for the m-th iteration. In the default implementation, the number of iterations is set to 3.
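For reference, the per-plane homography ℋ_i(d) defined above can be assembled as below, assuming a fronto-parallel plane with normal n = [0, 0, 1]^T and the world-to-camera convention x_cam = R x + t; all names and conventions are assumptions. The PSV is then obtained by applying this matrix to every reference-view pixel at every depth.

```python
import numpy as np

def plane_homography(K_i, R_i, t_i, K_r, R_r, t_r, d):
    """Homography mapping a reference-view pixel (u, v, 1) to source view i
    for the fronto-parallel plane at depth d."""
    n = np.array([[0.0, 0.0, 1.0]]).T                          # plane normal, (3, 1)
    t_i = np.asarray(t_i).reshape(3, 1)
    t_r = np.asarray(t_r).reshape(3, 1)
    rel = np.linalg.inv(R_i) @ t_i - np.linalg.inv(R_r) @ t_r  # (3, 1)
    H = K_i @ R_i @ (np.eye(3) + (rel @ n.T @ R_r) / d) \
        @ np.linalg.inv(R_r) @ np.linalg.inv(K_r)
    return H
```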
Initial MPI Generation. A 3D neural network accepts the reshaped coarsest Plane Sweep Volume (PSV) of dimension 3N × D × H × W as input and produces the Multi-plane Image (MPI) at the lowest resolution. The neural network is trained to aggregate information across images and generate a coarse MPI for subsequent refinement.
§.§ Hierarchical Sparse Gradient Descent
Hierarchical sparse gradient descent is a coarse-to-fine process that progressively refines the Multi-plane Image (MPI) using MPI gradients. Among these gradients, the one relative to the color channel holds utmost significance as it provides the network with essential visibility information, commonly referred to as the "alpha gradient". Fig. <ref> (b) visually depicts that the alpha gradients for the majority of pixels are close to zero. These pixels have minimal influence on the final rendering results; however, they consume significant computational resources during the optimization process. This observation shows significant potential for sparse optimization and acceleration.
So we propose sparse gradient descent, an adaptive approach that concentrates on important MPI gradients for efficient optimization. Specifically, this method consists of three steps: gradient formulation, sparse gradient selection, and sparse gradient update. The detailed process is shown in Fig. <ref>.
Gradient Formulation. To obtain the sparse MPI gradient, we first formulate the input gradient components for learned gradient descent. They are defined as the concatenation of the input PSV P, the input MPI M and the warped alpha gradient 𝒜^w_i of each source view. To compute the warped alpha gradient for a single source view, the MPI of the reference view is initially warped to this source view using homographic warping (Eq. <ref>) to obtain M_i. Then the alpha gradient 𝒜_i_d corresponding to depth d is calculated by the following equation:
𝒜_i_d = ∂𝒪(M_i)/∂ c_d = α_d∏_j=d+1^D(1 - α_j).
After this, the warped alpha gradient 𝒜^w_i is derived by applying inverse homographic warping Eq. <ref> from the source view to the reference view. So the final gradient components V are defined as:
V = concat(P, M, 𝒜^w_1, ..., 𝒜^w_N),
where P serves as the raw input to provide details of higher resolution, the MPI M is the basis for optimization, and alpha gradient 𝒜^w_i provides useful visibility information to eliminate occlusion.
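The per-plane alpha gradient defined above can be computed with a single cumulative product over the depth axis; the sketch below assumes an alpha tensor of shape (D, 1, H, W) ordered back-to-front, and the names are illustrative.

```python
import torch

def alpha_gradient(alpha):
    """A_d = alpha_d * prod_{j > d} (1 - alpha_j); alpha: (D, 1, H, W)."""
    one_minus = 1.0 - alpha
    trans = torch.ones_like(alpha)   # transmittance in front of each plane
    if alpha.shape[0] > 1:
        trans[:-1] = torch.cumprod(one_minus.flip(0), dim=0).flip(0)[1:]
    return alpha * trans
```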
Sparse Gradient Selection.
As shown in Fig. <ref> (a),
V is a large volume that would require significant computational resources. Therefore, we employ sparsification on V utilizing the alpha gradient 𝒜 of the MPI, resulting in a sparsified version denoted as V_s.
Specifically, the selection process involves choosing only the k pixels with the highest alpha gradient along the D axis, resulting in the compression of the original input volume V from C× D× H× W to C× k× H× W. To strike a balance between rendering quality and efficiency, k is set to 5 for the default model. It is important to note that we choose alpha gradient 𝒜 rather than α as the sparsification criterion for two reasons: 1) The alpha values of the farthest few planes in a generated MPI are close to 1, as they represent relatively static background content. If pixels were selected based on α, the majority would be from the background rather than the object surfaces; 2) Alpha gradient 𝒜 can be regarded as the weight of each color plane c_d in the final color, suggesting that pixels with higher alpha gradient contribute more to the final color.
The sparse indices S, which store the positions of the k selected pixels in the original D pixels, are also retained to facilitate the restoration of the compressed MPI residual. This allows it to be added to the input MPI for an update.
Sparse Gradient Update. As shown in Fig. <ref> (c), the sparsified input volume V_s is fed to a 3D CNN, functioning as the learned gradient descent network, to derive the sparse MPI gradient for an MPI refinement iteration.
The pixels in the sparse MPI gradient are then restored to their original positions using the sparse indices S, and subsequently added to the input MPI to obtain a refined version:
M_n+1 = M_n + ℛ(𝒩_w(V_s), S),
where 𝒩_w is a 3D CNN that functions as the learned gradient descent neural network, and ℛ is the operator that returns the pixels of the sparse MPI gradient to their original positions.
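The selection and restoration steps can be expressed with top-k, gather, and scatter operations along the depth axis; the following PyTorch sketch uses assumed shapes (the gradient components V as (C, D, H, W) and the alpha gradient as (D, H, W)) and illustrative names.

```python
import torch

def select_sparse(V, A, k):
    """Keep the k most important depth samples per pixel.
    V: (C, D, H, W) gradient components; A: (D, H, W) alpha gradient."""
    S = A.topk(k, dim=0).indices                              # (k, H, W)
    idx = S.unsqueeze(0).expand(V.shape[0], -1, -1, -1)       # (C, k, H, W)
    V_s = torch.gather(V, dim=1, index=idx)                   # (C, k, H, W)
    return V_s, S

def restore_sparse(grad_s, S, D):
    """Scatter the k refined samples back to their original depth positions."""
    C, k, H, W = grad_s.shape
    out = torch.zeros(C, D, H, W, dtype=grad_s.dtype, device=grad_s.device)
    idx = S.unsqueeze(0).expand(C, -1, -1, -1).contiguous()
    out.scatter_(dim=1, index=idx, src=grad_s)
    return out

# One refinement iteration (cnn denotes the learned gradient descent network):
# M = M + restore_sparse(cnn(V_s), S, D)
```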
§.§ Occlusion-aware Iterative Refinement
The hierarchical sparse gradient descent strategy achieves significant acceleration. Nonetheless, ghosting effects at object boundaries may be observed in certain scenes with severe occlusion. The occlusion-aware iterative refinement module, illustrated in Fig. <ref>, seeks to eliminate occlusion within the input plane sweep volume P, employing the MPI M from the previous iteration.
Specifically, the depth maps of the reference view and source views are rendered in the same manner as RGB images:
𝒪_d(M) = ∑_i=1^Dd_iα_i∏_j=i+1^D(1 - α_j),
where 𝒪_d is the over composite operator on depth, and d_i is the depth value of plane i. Utilizing homographic warping in Eq. <ref>, the depth plane sweep volume (DPSV) P_D∈ N× D× H× W is generated by just replacing the RGB plane with the rendered depth planes. P_D can be perceived as a 3D grid in the shape of D× H× W, with each voxel of the grid containing N depth values corresponding to N source views. By comparing N depths at coordinate (d, h, w) to the depth of the target view at coordinate (h, w) where d∈ [0,D), h∈ [0, H), w∈ [0, W), the index of the occlusion view at (d, h, w) can be readily identified if:
1/𝒟_t[h, w] - 1/P_D[i, d, h, w] > ϵ,
where 𝒟_t∈ H× W is the rendered depth map of the target view t and ϵ is a threshold controlling the minimum tolerance for considering view i at coordinate (d, h, w) as an occluding view. Next, the RGB color of source view i at (d, h, w) is substituted with the mean value calculated from the remaining source views. This adjustment enhances the visual coherence of all source views at (d, h, w). By replacing the colors, the method achieves sharper edge rendering by eliminating sudden color discrepancies at object boundaries. Additionally, this operation does not disrupt areas of smooth depth transitions, because small depth differences between source views and the target view are filtered out by ϵ.
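The occlusion test above reduces to an inverse-depth comparison; the sketch below assumes a target depth map of shape (H, W) and a depth plane sweep volume of shape (N, D, H, W), with the default threshold value being an illustrative choice.

```python
import torch

def occlusion_mask(D_t, P_D, eps=0.05):
    """True where source view i is considered occluded at voxel (d, h, w).
    D_t: (H, W) target depth map; P_D: (N, D, H, W) depth plane sweep volume."""
    inv_target = 1.0 / D_t.clamp(min=1e-6)                 # (H, W)
    inv_source = 1.0 / P_D.clamp(min=1e-6)                 # (N, D, H, W)
    return (inv_target[None, None] - inv_source) > eps

# Occluded PSV colors are then replaced by the mean color of the
# non-occluded views at the same voxel, as described above.
```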
§.§ Implementation Details
Network Details. The trainable components of the entire pipeline are the 3D CNNs that convert the input plane sweep volume into a multi-plane image in Fig. <ref>, and the one that refines the sparse MPI gradient in Fig. <ref>. In consideration of time efficiency, we use only a 3-layer 3D convolution block denoted in both figures. The kernel size and padding size of the first two convolution layers are 3 and 1, respectively, followed by a batchnorm layer and a relu layer. The third layer is a straightforward 1x1x1 3D convolution. For the "doubled conv layers" setting of the ablation study, the 3D CNN is replaced with a stronger one that has doubled convolution layers and more channels, resulting in even higher visual metrics.
The backbone 3D CNN of the default model EffLiFe is shown in Fig. <ref>(a), and the backbone 3D CNN of EffLiFe-dc that has doubled convolution layers is shown in Fig. <ref>(b).
Training Loss. We train our network using synthesized views as supervision and choose the commonly used deep feature matching loss L_VGG<cit.> as the basic loss function. In detail, our overall rendering loss L_r is a weighted average loss of all iterations, and the rendering loss for each iteration is also a weighted average one for both the synthesized reference and source views:
L_r = ∑_i=1^mλ_i(L_VGG(I_r_i, Ĩ_r_i) + μ∑_j=1^N L_VGG(I_j_i, Ĩ_j_i)),
where I_r_i and I_j_i are the ground truth reference
and source images at iteration i, Ĩ_r_i and Ĩ_j_i are the rendered reference and source images from the generated MPI at iteration i. And m is the number of iterations, N is the number of input source images, λ_i is a hyper-parameter weighing the importance of each iteration, and μ is also a hyper-parameter balancing reference and source image supervision.
In addition, we propose a sparsity loss L_s that regularizes the alpha values of the MPI pixels to be close to 0 or 1:
L_s = ∑_h=1^H∑_w=1^W∑_d=1^Dlog(1.5 - |0.5 - α_h,w,d|)/H× W× D,
where α_h,w,d is the alpha channel of the MPI, and H, W, D are the height, width and number of depth planes of the volume. This loss aims to reduce the entropy of the MPI with respect to its α channel. It is ideal for sparse gradient selection, as it allows us to select more informative top k samples that lie around the surfaces of a scene. The overall training loss is:
L = L_r + λ_s L_s,
where λ_s is a hyper-parameter balancing color supervision and sparsity regularization.
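The sparsity regularizer reduces to a single elementwise expression; a minimal PyTorch sketch is given below, assuming alpha values in [0, 1].

```python
import torch

def sparsity_loss(alpha):
    """L_s from the equation above: pushes MPI alpha values toward 0 or 1."""
    return torch.log(1.5 - torch.abs(0.5 - alpha)).mean()
```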
§ EXPERIMENTS
§.§ Baselines, datasets and metrics
Baselines. In order to evaluate the visual performance and real-time efficiency of our method, we compare it with three generalizable novel view synthesis methods: IBRNet<cit.>, MVSNeRF<cit.>, and ENeRF<cit.>. Additionally, we compare our method with five light field generation methods, namely Stereo Magnification (StereoMag)<cit.>, LLFF<cit.>, Soft3D<cit.>, MLI<cit.>, and DeepView<cit.>. Among the baseline methods, IBRNet, MVSNeRF and ENeRF conduct extra finetuning experiments to improve performance. However, finetuning takes so much time that it is incompatible with real-time light field generation, so we only compare the results without finetuning.
Datasets. For training and evaluation, we select 4 datasets: Spaces<cit.>, Real Forward-Facing<cit.>, SWORD<cit.> and Shiny<cit.> dataset. To compare results on the Real Forward-Facing evaluation set, comprising 8 scenes, and the SWORD evaluation set, comprising 25 scenes (selected from the total evaluation scenes) and Shiny evaluation set, comprising 8 scenes, we trained our method using a combined training set in the same manner as IBRNet<cit.>. This combined training set consists of 90 scenes from the Spaces training set and 35 scenes from the Real Forward-Facing training set. To compare results on the Spaces evaluation set, which consists of 10 scenes with three baseline settings of large, medium, and small, we trained our method using the same training dataset as DeepView, comprising 90 scenes from the Spaces training set.
Metrics. We compared the visual performance of the proposed method using standard metrics, structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and perceptual similarity (LPIPs). We evaluated the temporal efficiency of all methods based on the time required to generate a single representation (RGB images for IBRNet<cit.>, MVSNeRF<cit.> and ENeRF<cit.>, MPIs for Soft3D<cit.>, StereoMag<cit.>, DeepView<cit.> and our method, multi-layered meshes for MLI<cit.>). Furthermore, given that light field generation methods yield light field representations suitable for offline rendering, we compared the offline rendering speed of light field generation methods, also in FPS. (We use the OPENGL renderer provided by LLFF<cit.> evaluate the offline rendering speed of our method.) To assess the viability of each method for 3D display applications, we evaluated their 3D display efficiency (measured in FPS), which we define as the reciprocal of the time required to provide sufficient light field information to support light field rendering on a 3D display. Specifically, a 3D display (e.g., a looking glass) requires n views (n is set to 18 in Tab. <ref>) at different viewing angles to reproduce a light field, providing a continuous, forward-facing viewing experience. Novel view synthesis methods necessitate n forward processes to generate n new views. In contrast, light field generation methods, which create light field representations, require only a single forward process to generate n novel views because the built-up representations enable really fast novel view rendering in the forward-facing viewing range.
§.§ Experiment Configurations
We compare our method with state-of-the-art generalizable novel view synthesis methods and light field generation methods on public datasets including SWORD<cit.>, Real Forward-Facing<cit.>, Shiny<cit.> and Spaces <cit.>.
SWORD, Real Forward-Facing and Shiny. To compare results on a sparse view setting, we set the number of input views to 3 for all methods. The evaluation image resolution for three datasets is 378× 512, and the number of MPI planes for our method is set to 40. To compare with MLI<cit.>, we choose their best-performance model SIMPLI-8L that uses 8 layered meshes for representations. We train two versions of our models, EffLiFe and EffLiFe-4I, containing 3 and 4 update iterations respectively (refer to Sec. <ref> for details about EffLiFe-4I). In order to assess the interpolation and extrapolation capabilities, we evaluate all views from the three datasets. For each view of a given scene, we select the nearest available 3 views as input. The evaluation results for a single scene are calculated by averaging the results across all views. Likewise, the final evaluation results for a dataset are computed by averaging the results across all scenes.
Spaces. We follow the configuration of DeepView<cit.> to compare with their evaluation results. The number of input views is 4, the evaluation image resolution is 480× 800, and the number of planes of an MPI is 40. We use the same input and evaluation views as DeepView<cit.> for three baseline settings.
Our models are trained for 100,000 iterations for both configurations, and all methods are evaluated on an RTX 3090 GPU.
§.§ Quantitative Results
Tab. <ref> presents the quantitative results on SWORD<cit.>, Real Forward-Facing<cit.> and Shiny<cit.>. We classify all methods into offline and online according to Generation time.
Compared with offline methods, our model EffLiFe-4I outperforms offline methods in terms of visual metrics, achieving top-1 or top-2 rankings on three datasets. And our default model EffLiFe strikes a balance between the rendering quality and efficiency. Our models, on average, achieve a 100× faster generation speed compared to other offline methods. Compared with the online method ENeRF<cit.>, our models are a little bit slower in generation time; however, our method exhibits superior visual metrics on three datasets. When considering the metric of 3D Display Efficiency, our method stands out as the only one capable of supporting real-time light field video display. This is due to its high temporal efficiency in both light field generation and offline rendering speed.
Tab. <ref> lists the quantitative results on the Spaces evaluation set, where we can see that DeepView generates the highest-quality rendering results overall, followed by our method and Soft3D<cit.>. However, in terms of time efficiency, our method is about 200 times faster than DeepView. The Generation Time of Soft3D and StereoMag^+ are left as "-" because Soft3D is not open-sourced and StereoMag^+ is a modified version by the authors of DeepView, so that we can not evaluate their time efficiency accurately. To summarize, we have the following observations:
* Compared with offline novel view synthesis methods<cit.> and offline light field generation methods<cit.>, our approach achieves around 100× speedup ratio with superior or comparable visual performance.
* Compared with the online novel view synthesis method<cit.>, our method achieves better visual performance, while maintaining comparable time efficiency.
* Novel views can be rendered at significantly higher FPS than other novel view synthesis methods<cit.> from our generated MPIs. This property makes our method particularly suitable for displaying on 3D displays and other light mobile devices.
§.§ Qualitative Results
We show qualitative comparisons on Real Forward-facing<cit.> (the first two rows), Shiny<cit.> (the third row) and SWORD (the last row) in Fig. <ref>.
For a comprehensive comparison with previous state-of-the-art methods, we evaluate the rendering quality in the following aspects.
Occluded Regions. In Fig. <ref> (a), ghosting effects are produced at the edges of flowers by ENeRF<cit.> and MVSNeRF<cit.>. In contrast, our method generates clear boundaries, comparable to IBRNet<cit.> and SIMPLI-8L<cit.>. In Fig. <ref> (b), ENeRF<cit.> produces an incomplete horn, and MVSNeRF<cit.> blurs the background around the horn. Our results are comparable to IBRNet<cit.>, which show a complete horn and clean background, but not as good as SIMPLI-8L<cit.>.
Intricate Texture Details. Fig. <ref> (d) shows the faces of two statues with intricate texture details. ENeRF<cit.>, MVSNeRF<cit.>, and IBRNet<cit.> produce varying degrees of blurriness, while our method and SIMPLI-8L<cit.> restore the complicated texture details very well.
View-dependent Effects. Fig. <ref> (c) and (e) show view-dependent specularity effects on the cd and the black pillar, respectively. For the cd scene, MVSNeRF<cit.> fails to produce reasonable specular patterns, and ENeRF<cit.> and SIMPLI-8L<cit.> generate blurry edges of the white reflection. However, our method and IBRNet<cit.> produce more accurate view-dependent effects. For the black pillar, ENeRF<cit.>, SIMPLI-8L<cit.>, and MVSNeRF<cit.> fail to generate the complete shape of the pillar, and IBRNet<cit.> does not restore the specularity as accurately as our method.
Thin Plates. In Fig. <ref> (e), the green stick of the scooter is a very thin plate. ENeRF<cit.>, MVSNeRF<cit.>, and IBRNet<cit.> fail to restore the clear structure of the stick. Our method produces a relatively complete shape, comparable to SIMPLI-8L<cit.>.
Across all of these aspects, our proposed approach achieves visual quality comparable to or better than offline approaches, and performs better than online approaches. More qualitative comparison results on Shiny<cit.> and IBRNet collected<cit.> are shown in Fig. <ref>. Our method produces clear results that accurately capture view-dependent effects, albeit with slightly less sharpness compared to SIMPLI-8L<cit.>. This trade-off is made in exchange for reduced time spent constructing the light field representations.
§.§ Ablation Study
In this section, we conduct a series of ablation experiments to assess the contribution of each component to the final rendering performance. The metrics of all experiments are evaluated on the Real Forward-facing<cit.> dataset, which contains typical challenging cases such as occlusion, complex scene geometry, and view-dependent effects. The rendered images for training and evaluation are downscaled to 378× 512, and the number of input views is set to 3. The default number of planes in an MPI is 40. For detailed ablation settings and details, please refer to Tab. <ref>.
Number of iterations and planes, and backbone network. Tab. <ref> demonstrates that if we only consider image quality, using as many MPI planes and iterations as possible would be ideal. However, to strike a balance between rendering quality and time efficiency, our default model configuration consists of only 3 iterations and 40 MPI planes. In the doubled conv layers setting, we doubled the number of convolution layers and increased the number of channels to improve visual metrics. While this change decreases the rendering FPS, it further enhances the rendering quality. Nevertheless, a more efficient and robust backbone network could be implemented to improve both rendering speed and quality.
Sparsity Loss. Fig. <ref> visually compares the differences in the rendered images and individual MPI planes from experiments conducted with and without the sparsity loss. Compared to the model trained with sparsity loss, the model trained without it generates rendered images that display more artifacts, particularly around sharp edges. A comparison of individual MPI planes shows that those produced by the model with sparsity loss contain clearer surfaces. This evidence suggests that sparsity loss aids the network in minimizing scene content layout ambiguity, resulting in cleaner and sharper images.
Occlusion Refinement. In Fig. <ref>, we compare the results between experiments trained with and without the iterative occlusion refinement module. The occlusion-aware iterative refinement module markedly enhances the visibility of background content adjacent to the foreground edges. However, this module is deactivated in the model as the occlusion in the evaluation datasets is generally mild, and its activation could decrease the model's frames per second (FPS).
§ CONCLUSION
This paper presents a novel method for efficiently generating light fields using hierarchical sparse gradient descent. Our proposed method achieves a resolution of 378× 512 for light field representations (MPIs) at a frame rate of approximately 30 FPS, from which novel views can be rendered at around 700 FPS. The hierarchical sparse gradient descent module of our network focuses on important MPI gradients, resulting in significant improvements in time efficiency without compromising rendering quality. And the occlusion-aware iterative refinement module helps eliminate severe occlusion, further enhancing rendering quality to state-of-the-art levels. Our method has the potential to deliver high-quality and real-time light field videos for XR and Naked Eye 3D display devices.
Limitations. Our approach has a few limitations. Firstly, our current implementation can only be trained with a fixed number of views, and the order of input views may slightly affect the final rendering quality. Secondly, our model is unable to produce relatively good results at the borders of the image where not all source views are overlapped, shown in Fig. <ref>.
Future work. Although the proposed approach is able to generate light fields at around 30 FPS of resolution 378× 512, its efficiency could still be improved with customized CUDA kernels, which is not emphasized in this paper. Furthermore, a robust and adaptive view-aggregating module is required to support an arbitrary number of input views. This enhancement will facilitate the use of our model for various camera configurations.
IEEEtran
Yijie Deng is currently a master student in Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua University. He received B.E. from Wuhan University in 2021. His research interest is 3D vision.
Lei Han is currently a researcher in Linx Lab of HiSilicon (subsidiary of Huawei Technologies). He studied Electrical Engineering at the Hong Kong University of Science and Technology and Tsinghua University. He received the B.S. degree in July 2013 and joined the Department of Electrical Computing Engineering at the Hong Kong University of Science and Technology in September 2016, where he is pursuing the PhD degree. His current research focuses on multi-view geometry and 3D computer vision.
Lin Li is currently an engineer in Huawei. She received the B.S. and the integrated M.S. and Ph.D. degrees from the Department of Electronic Information Technology and Instrument Institute of ZheJiang University in 2010 and 2015, respectively.
Tianpeng Lin is currently a researcher in Linx Lab of HiSilicon (subsidiary of Huawei Technologies). He received Ph.D in GeoInformatics in the Chinese University of Hong Kong in 2016, and B.E. from Tsinghua University in 2009. He works for Kirin chipsets and Linx Lab and focuses on the chipset algorithm development in SLAM and 3D reconstruction.
Jinzhi Zhang is currently a Ph.D student in Tsinghua-Berkeley Shenzhen Institute (TBSI), Tsinghua University. He received B.E. from Huazhong University of Science and Technology in 2019. His research interest is 3D vision.
Lu Fang is currently an Associate Professor in Tsinghua University. She received Ph.D from the Hong Kong Univ. of Science and Technology in 2011, and B.E. from Univ. of Science and Technology of China in 2007. Her research interests include computational imaging and visual intelligence. Dr. Fang is currently IEEE Senior Member, Associate Editor of IEEE TIP and TMM.
|
http://arxiv.org/abs/2307.00313v1
|
20230701120224
|
PM-DETR: Domain Adaptive Prompt Memory for Object Detection with Transformers
|
[
"Peidong Jia",
"Jiaming Liu",
"Senqiao Yang",
"Jiarui Wu",
"Xiaodong Xie",
"Shanghang Zhang"
] |
cs.CV
|
[
"cs.CV",
"68T07",
"I.5.1"
] |
Both authors contributed equally to this research.
Peking University
China
Harbin Institute of Technology, Shenzhen
China
Beihang University
China
Peking University
China
The Transformer-based detectors (i.e., DETR) have demonstrated impressive performance on end-to-end object detection. However, transferring DETR to different data distributions may lead to a significant performance degradation. Existing adaptation techniques focus on model-based approaches, which aim to leverage feature alignment to narrow the distribution shift between different domains. In this study, we propose a hierarchical Prompt Domain Memory (PDM) for adapting detection transformers to different distributions. PDM comprehensively leverages the prompt memory to extract domain-specific knowledge and explicitly constructs a long-term memory space for the data distribution, which represents better domain diversity compared to existing methods.
Specifically, each prompt and its corresponding distribution value are paired in the memory space, and we inject top M distribution-similar prompts into the input and multi-level embeddings of DETR.
Additionally, we introduce the Prompt Memory Alignment (PMA) to reduce the discrepancy between the source and target domains by fully leveraging the domain-specific knowledge extracted from the prompt domain memory. Extensive experiments demonstrate that our method outperforms state-of-the-art domain adaptive object detection methods on three benchmarks, including scene, synthetic to real, and weather adaptation. Codes will be released.
(a) compares the t-SNE results of different methods on the source and target domain data, and our method aligns the domain shift well compared to the baseline method. (b) indicates that our method achieves state-of-the-art (SOTA) performance on three challenging domain adaptation benchmarks.
PM-DETR: Domain Adaptive Prompt Memory for
Object Detection with Transformers
Shanghang Zhang
§ INTRODUCTION
Object detection is a crucial computer vision task and serves as a prerequisite for various real-world applications, such as autonomous driving <cit.>, visual grounding <cit.>, and manipulation <cit.>. Convolutional Neural Networks (CNN) detectors <cit.> have shown satisfactory results, but they heavily depend on hand-crafted operations like non-maximum suppression. In recent times, a series of DEtection TRansformer (DETR) methods <cit.> have been proposed with an end-to-end pipeline, which delivers promising performance when the test data is from the same distribution as the training data. However, such a fixed distribution is not typical in real-world scenarios <cit.>, which often comprise diverse and disparate domains. When applying pre-trained DETR models, distribution shift commonly occurs <cit.>, leading to significant performance degradation on the target data.
Existing adaptation techniques for DETR mainly rely on model-based approaches <cit.>, which aim to narrow the distribution shift between different domains via sequence feature alignment. Recent developments in prompt learning for both natural language processing (NLP) <cit.> and computer vision <cit.> have motivated researchers to introduce visual prompts in domain adaptation tasks. Several recent studies <cit.> have leveraged prompts randomly set at the image or feature-level and fine-tuned them to extract domain-specific or maintain domain-invariant knowledge. These approaches offer a prompt-based perspective to address distribution shift, which can further aid the model-based methods in achieving better representations in the target domain.
However, when prompt-based methods are applied to a target domain with varied scene conversions and complex data distributions (e.g., autonomous driving data), the prompts struggle to learn long-term domain knowledge over the full data.
Meanwhile, since object detection is an instance-level task and each sample contains multiple objects, previous prompt methods struggle to extract diverse domain knowledge for each category.
To this end, we propose a hierarchical Prompt Domain Memory (PDM) for adapting detection transformers, which extracts domain-specific knowledge by learning a set of prompts that dynamically instruct the DETR. Specifically, each prompt and its corresponding distribution value are paired in the memory space, and we dynamically select the top-M distribution-similar prompts for each sample using these values. PDM explicitly constructs a long-term memory space for the detection transformer, allowing DETR to learn the complex data distribution and per-category domain knowledge at multiple levels, including the input, token, and query levels.
With the help of PDM, as shown in Fig. <ref> (a), the feature representations of the two domains exhibit a smaller distribution shift than with the previous method. However, while the prompt memory can extract more comprehensive domain-specific knowledge, it cannot by itself reduce the distribution distance between different domains <cit.>. To address this limitation, we propose the Prompt Memory Alignment (PMA) method, which reduces the discrepancy between the source and target domains in the Unsupervised Domain Adaptation (UDA) task.
Traditional feature alignment methods <cit.> can only align a small number of different domain samples in each iteration due to the limited GPU memory. Different from previous methods, since the proposed prompt memory can better represent the diversity of each domain, the PMA can fully leverage the domain-specific knowledge extracted from the memory and efficiently address the distribution shift. In addition, along with introducing PDM, we make the first attempt to design the visual prompt alignment strategy to jointly address the domain shift problem.
In conclusion, our proposed approach of PDM and PMA enhances the performance of detection transformers in adapting to target domains by extracting diverse domain-specific knowledge and reducing the discrepancy between source and target domains.
We evaluate the prompt-based PM-DETR on three challenging benchmarks of UDA, including scene adaptation (Cityscapes <cit.> to BDD100k <cit.>), synthetic to real adaptation (Sim10k <cit.> to Cityscapes), and weather adaptation (Cityscapes to Foggy Cityscapes <cit.>). Our method outperforms state-of-the-art (SOTA) domain adaptive object detection methods, which improves the result to 58.6%, 33.3%, and 44.3% mAP in the three benchmarks, shown in Fig.<ref> (b).
The main contributions are summarized as follows:
1) We propose a hierarchical Prompt Domain Memory (PDM) to adapt detection transformers to different distributions, which constructs a long-term memory space to fully learn the complex data distribution and diverse domain-specific knowledge.
2) To better apply PDM in Unsupervised Domain Adaptation (UDA), we propose the Prompt Memory Alignment (PMA) method to reduce the distribution distance between the two domains by fully leveraging the domain-specific knowledge extracted from the memory space.
3) We conduct extensive experiments on three challenging UDA scenarios to evaluate the effectiveness of our method. The method achieves SOTA performance in all scenarios, including scene, synthetic to real, and weather adaptation.
§ RELATED WORKS
§.§ Object Detection
Object detection is a critical task of computer vision <cit.>. Previous convolutional neural network (CNN)-based approaches can be broadly categorized into two groups: the more complex two-stage methods <cit.> and the lighter one-stage methods <cit.>. However, these approaches exhibit a significant limitation due to their heavy reliance on handcrafted processes and initial guesses, particularly the non-maximum suppression (NMS) post-processing, which hinders their ability to be trained end-to-end. Recent advancements, such as DETR <cit.> and Deformable DETR <cit.>, have addressed this issue by incorporating vision Transformers <cit.>.
Deformable DETR introduces an innovative deformable multi-head attention mechanism that enables sparsity in attention and multi-scale feature aggregation without necessitating a feature pyramid structure. This innovation results in faster training and enhanced performance. To address the critical issue of DETR, its slow training convergence, Conditional DETR <cit.> speeds up DETR training by leveraging a conditional cross-attention mechanism. To utilize the attention mechanism more effectively, SMCA <cit.> proposes a co-attention scheme that expedites DETR convergence, which includes multi-head and scale-selection attention.
Moreover, DN-DETR <cit.> offers a novel perspective on faster training by employing a denoising approach to improve the stability of bipartite graph matching during the training stage. In our study, we use the classical Deformable DETR as the base detector and make the first attempt to introduce domain prompts into its workflow. We also design a hierarchical domain prompt memory to facilitate diverse domain knowledge extraction.
§.§ Domain adaptive object detection
Domain Adaptive Faster R-CNN <cit.> established the foundation for investigating domain-adaptive object detection techniques. Subsequent studies primarily utilized the adversarial training paradigm for cross-domain feature alignment. Various strategies have been proposed in these approaches to aggregate image or instance features, such as leveraging categorical predictions <cit.> and exploiting spatial correlations <cit.>. Hierarchical alignment of features was conducted at multiple levels, encompassing global, local, instance, and category levels <cit.>. Recent advancements in this field introduced innovative methods, including PICA <cit.>, which specializes in few-shot domain adaptation, and Visually Similar Group Alignment (ViSGA) <cit.>, which employs similarity-based hierarchical agglomerative clustering, achieving exceptional performance on specific benchmarks. Moreover, several studies explored alternative domain adaptation techniques or utilized different base detectors, such as Mean Teacher with Object Relations (MTOR) <cit.> and Unbiased Mean Teacher (UMT) <cit.>. Regarding the Transformer object detector, existing adaptation techniques for DETR predominantly rely on model-based approaches <cit.>, aiming to reduce the distribution shift between different domains through sequence feature alignment. In this paper, we provide a novel perspective on DETR cross-domain transfer by introducing a prompt-based approach. Specifically, we propose a Prompt Domain Memory (PDM) and Prompt Memory Alignment (PMA) to enhance the performance of detection transformers in adapting to target domains. This is achieved by extracting diverse domain-specific knowledge and minimizing the discrepancy between source and target domains.
§.§ Prompt Learning
Prompt learning, originally introduced in the field of natural language processing (NLP), aims to adapt pre-trained language models to various downstream tasks in a parameter-efficient manner <cit.>. Recently, researchers have extended the paradigm of prompt learning to efficient fine-tuning of vision models <cit.>. VPT <cit.> and its variants <cit.> introduce minimal trainable parameters at the image or feature level of Transformer-based models for efficient transfer learning. L2P <cit.> and its follow-up method <cit.> propose a prompt pool-based approach for continual learning, aiming to avoid catastrophic forgetting and error accumulation.
More recently, visual prompts have shown promising results in domain adaptation. DAPL <cit.> made the initial attempt to incorporate visual prompts into unsupervised domain adaptation (UDA). Subsequent studies, such as <cit.>, explored diverse approaches to leverage visual prompts for classification domain adaptation problems. Additionally, SVDP <cit.> proposed a sparse visual prompt for efficient adaptation in segmentation tasks. However, these studies primarily focus on image-level <cit.> and pixel-level <cit.> domain adaptation tasks and are not optimized for instance-level transformer detection. Furthermore, when prompt-based methods are applied in the target domain with various scene conversions and complex data distributions <cit.> (e.g., autonomous driving data), it becomes challenging to utilize previous lightweight methods <cit.> to learn long-term domain knowledge. Due to the instance-level property, with multiple objects present in each sample, previous prompt methods <cit.> struggle to extract diverse domain knowledge for each category. To address this issue, we design a Prompt-based Domain Memory (PDM) tailored for adapting detection transformers, particularly in scenarios involving diverse data distributions.
§ METHODS
Preliminary. This section introduces PM-DETR for transformer-based domain adaptive detectors. Given a model D_S(y|x) trained on labeled source domain samples 𝒟_S(x,y), where x and y represent the input data and ground truth, respectively, our goal is to adapt D_S to a target model D_T using unlabeled target domain data 𝒟_T(x).
We propose a comprehensive prompt-based method to enhance the performance of detection transformers in adapting to target domains by extracting diverse domain-specific knowledge and reducing the discrepancy between source and target domains.
In section <ref>, we first verify the motivation of the prompt-based adaptation method for transformer detectors.
In section <ref>, a hierarchical Prompt Domain Memory (PDM) is illustrated to extract domain-specific knowledge in the multi-level transformer latent space. In section <ref>, Prompt Memory Alignment (PMA) is proposed to reduce the discrepancy between source and target domains. Finally, in Section <ref>, we elaborate on our training policy in detail. The overall pipeline is shown in Fig. <ref>, and Deformable DETR <cit.> is utilized as our default detection network.
§.§ Motivation of Prompt-based Method
We first verify the motivation for introducing a prompt-based method and constructing a long-term prompt domain memory, starting with a brief explanation of how traditional model-based adaptation methods tackle the Unsupervised Domain Adaptation (UDA) problem.
These methods <cit.> seek to shrink the upper bound of the target error err_T, given by the sum of the source error err_S and a notion of distance d_ℋ between the source and the target distributions in the hypothesis space ℋ. Since err_S is primarily influenced by model complexity, researchers have predominantly focused on minimizing the inter-domain distance d_ℋ.
Suppose we construct a unified dataset 𝕌 defined as:
𝕌 = {(x_i, y=0)}_i=1^p ∪ {(x_j, y=1)}_j=p+1^q
where the first p samples are from the source domain 𝒟_S and labeled 0, and the remaining samples are from the target domain 𝒟_T and labeled 1.
By constructing this unified dataset 𝕌, we create a shared high-dimensional space for both the source and target domains, allowing us to effectively measure the distance between their distributions in the hypothesis space ℋ, as shown in Fig. <ref> (a). This unified dataset serves as a foundation for minimizing the inter-domain distance and enabling better adaptation from the source to the target domain. Furthermore, the work of Ben-David et al. <cit.> has shown that the empirical ℋ-divergence between two domains can be computed by the following equation:
d_ℋ(S,T) = 2 ( 1 - min_D∈ℋ [ 1/p ∑_i=1^p D[𝕌(y_i=0)] + 1/(q-p) ∑_j=p+1^q D[𝕌(y_j=1)] ] ) = 2 (1 - ε_D^S - ε_D^T)
The methods discussed above are based on an intuitive assumption that model parameters capable of fitting both the source and target domain data well can be learned in the same hypothesis space ℋ. However, it is important to note that this assumption does not always hold true in practical scenarios. In reality, there can be inherent differences between the source and target domains that make it challenging to find a single hypothesis space that adequately captures both domains. As a result, when attempting to adapt a model from the source to the target domain, a compromise error may be introduced. Fig. <ref> visually illustrates this compromise error, which represents the discrepancy between the optimal models for each domain and the compromise model that attempts to accommodate both domains.
To this end, drawing inspiration from soft prompt learning techniques in NLP and computer vision, where a pre-trained model is adapted to different downstream tasks by manipulating prompts within the input sequence, we propose that incorporating lightweight visual prompts can help bound the inter-domain distance d_ℋ within a smaller interval. Specifically, we warp domain prompts into the input image, encoder embeddings, and decoder queries, so that the hypothesis spaces of the source and target domains can be decoupled into ℋ_S and ℋ_T. The prompt memory explicitly constructs a long-term memory space that better represents the diversity of domain knowledge, which further assists the hypothesis-space decoupling. Under the decoupled hypothesis spaces, prompt memory alignment encourages mining in-domain knowledge through explicit constraints, so the model is optimized as D_S and D_T in the source and target domains, respectively, as shown in Fig. <ref> (b). Thus, in theory, the inter-domain distance is reduced as follows:
d_ℋ_S, ℋ_T(S,T) = 2 (1 - ε_D_S^S - ε_D_T^T) < d_ℋ(S,T)
To further substantiate the presence of compromise error, we conduct experiments on three domain adaptive object detection tasks as depicted in Fig. <ref>. The figure quantitatively compares the classification errors of the model-based, previous prompt-based, and Prompt Domain Memory (PDM) methods on a domain classifier consisting of three multi-layer perceptrons in series. As we can see, the previous prompt-based method achieves smaller classification errors than the model-based method, and the proposed PDM reduces the classification error further still. As proven by <cit.>, a lower classification error yields a smaller generalization upper bound on the target risk, which means that the model will perform better in the target domain.
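To make this comparison concrete, the sketch below trains a small three-layer MLP domain discriminator on pooled features from both domains and reports its classification error; the feature shapes, optimizer settings, and training loop are illustrative assumptions rather than the exact experimental protocol.

import torch
import torch.nn as nn

# Hypothetical pooled detector features from the two domains (shapes are illustrative).
src_feats = torch.randn(512, 256)   # p source-domain samples, labeled 0
tgt_feats = torch.randn(512, 256)   # q - p target-domain samples, labeled 1

# Three-layer MLP domain discriminator, as used for the error comparison.
discriminator = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

features = torch.cat([src_feats, tgt_feats], dim=0)
labels = torch.cat([torch.zeros(len(src_feats)), torch.ones(len(tgt_feats))])

optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()
for _ in range(200):                                  # fit the domain discriminator
    optimizer.zero_grad()
    loss = criterion(discriminator(features).squeeze(-1), labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():                                 # measure its classification error
    pred = (discriminator(features).squeeze(-1) > 0).float()
    error = (pred != labels).float().mean()
print(f"domain classification error: {error:.3f}")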
§.§ Hierarchical Prompt Domain Memory
We propose prompt domain memories 𝐏_S and 𝐏_T for the source and target domains, respectively. Each memory pool 𝐏 contains N prompt pairs {<𝐯_𝐢,𝐩_𝐢>|i=1,...,N, 𝐯∈ℝ^1 × d, 𝐩∈ℝ^L × d}, where 𝐯 denotes the prompt distribution value and 𝐩 the visual prompt weight; L is the embedding length and d is the embedding dimension. Both 𝐯 and 𝐩 are randomly initialized. The visual prompt weight 𝐩 aggregates domain-specific knowledge, which is learned by warping the prompt into the prefix of the input sequence. The prompt distribution value 𝐯 is used to measure the distribution similarity with the input. In this way, the prompt memory can cover diverse domain knowledge for complex data distributions.
Hierarchical Warp Position. We introduce prompt memory at three crucial embeddings: the input image, encoder tokens, and decoder queries. The motivation is three-fold. First, the combination of multi-level prompts provides a comprehensive domain transfer mechanism, where prompts at each level reduce domain shift that could not be narrowed by the previous levels. Second, prompts in the decoder queries can align the distribution of objects across the two domain datasets, thus improving the recall of Deformable DETR. Third, the multi-level prompts increase the number of parameters only marginally (0.063% of the model parameters), yet they greatly enhance the plasticity of the model and unleash its capacity for learning diverse domain-specific representations. Our ablation experiments in Sec. 4.2 show that the hierarchical prompt memory better represents the domain diversity and boosts the performance of Deformable DETR in the target domain.
Distribution Value Similarity Selection. The image scenes, object classes, and object distributions in the target dataset vary widely, which often leads to sub-optimal performance if the same prompt is used for every instance to extract domain knowledge. We therefore design a distribution-guided strategy to adaptively select prompts from the prompt memory. The input embedding is projected to the shape of 𝐯 by a transformation function γ; here, we use the average along the embedding channel to aggregate the input characteristics. We then calculate the cosine similarity between 𝐯 and the projected embedding using the function ψ:
𝐕_M = argmax_𝐕⊂𝐏, |𝐕|=M ∑_𝐯_i∈𝐕 ψ(𝐯_i, γ(x))
According to the cosine similarity values, 𝐕_M contains the M nearest-neighbor prompts (e.g., M=4) selected from the prompt memory.
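For concreteness, the sketch below pairs each prompt with a distribution value, aggregates the input with a channel-wise mean (γ), scores the prompts by cosine similarity (ψ), and prepends the selected top-M prompts to the input sequence; class and parameter names such as PromptDomainMemory, num_prompts, and prompt_len are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptDomainMemory(nn.Module):
    """Memory of N (distribution value v, prompt p) pairs for one domain (sketch)."""

    def __init__(self, num_prompts=10, prompt_len=8, dim=256, top_m=4):
        super().__init__()
        self.values = nn.Parameter(torch.randn(num_prompts, dim))               # v_i in R^{1 x d}
        self.prompts = nn.Parameter(torch.randn(num_prompts, prompt_len, dim))  # p_i in R^{L x d}
        self.top_m = top_m

    def forward(self, x):
        # x: (B, T, d) embeddings (image patches, encoder tokens, or decoder queries).
        query = x.mean(dim=1)                                    # gamma: aggregate the input
        sim = F.cosine_similarity(query.unsqueeze(1),            # psi: similarity to each v_i
                                  self.values.unsqueeze(0), dim=-1)
        top_idx = sim.topk(self.top_m, dim=-1).indices           # nearest M prompts per sample
        selected = self.prompts[top_idx].flatten(1, 2)           # (B, M*L, d)
        return torch.cat([selected, x], dim=1)                   # warp prompts as a prefix

# Usage: prepend the selected prompts to encoder token embeddings.
memory = PromptDomainMemory()
tokens = torch.randn(2, 100, 256)
prompted_tokens = memory(tokens)   # (2, 4*8 + 100, 256)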
§.§ Prompt Memory Alignment
After receiving the domain knowledge transferred from the prompt memory, we further introduce Prompt Memory Alignment (PMA) to address the accumulation of domain shift. For the encoder phase, we aim to pull close the input- and token-level prompts from the two domains' prompt memories, as shown in Fig. <ref>.
Specifically, we use separate MLPs to project the prompt tokens into a shared embedding space of dimension 𝐋 × C × 2, where 𝐋 equals the encoder token length and the channel dimension C is set to 256. We adopt an encoder prompt alignment loss ℒ_epa to pull the two domains' prompt embeddings close and explicitly constrain the prompts to learn in-domain knowledge, as shown in Eq. <ref>.
ℒ_epa(X, D) = λ_1 min D(𝐗_i<|𝐩|× M) + λ_2 max D(𝐗_i ≥ |𝐩|× M)
where D denotes the domain discriminator. For the decoder phase, a decoder prompt alignment loss ℒ_dpa, analogous to ℒ_epa, is proposed. Since objects of the same category that are spatially connected tend to be visually similar, ℒ_dpa constrains the prompts in the decoder queries to learn from categorical and spatial correlations while decreasing the data distribution distance between the two domains.
§.§ Overall Optimization for PM-DETR
PM-DETR leverages the teacher-student framework, which includes two models with the same architecture and weights at initialization. During training, the student model is updated using back-propagation, while the teacher model is updated by taking the Exponential Moving Average (EMA) of the student's weights. The weights of the teacher model θ_t^' at time step t are calculated as a weighted average of the teacher's previous weights and the current student's weights θ_t:
θ_t^' = αθ_t-1^' + (1-α)θ_t
α is a smoothing coefficient hyperparameter and is set to 0.999. The object query embeddings are kept the same between the two models to enhance consistency; these queries are trainable embeddings initialized with a normal distribution at the start of training. We then use the temporally ensembled teacher model to optimize the parameters of the student model and the prompts in the target domain via pseudo labels.
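A minimal sketch of this EMA update, assuming the teacher and student share the same architecture and α = 0.999; copying buffers (e.g., BatchNorm statistics) directly is an additional implementation assumption.

import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t (sketch)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)
    for t_buf, s_buf in zip(teacher.buffers(), student.buffers()):
        t_buf.copy_(s_buf)   # keep running statistics consistent between the two models

# Usage with any pair of identically structured models:
teacher, student = nn.Linear(8, 8), nn.Linear(8, 8)
ema_update(teacher, student)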
For the overall optimization, the first loss is a supervised penalty on the source domain (ℒ_sup), which distills domain-independent generic features and avoids catastrophic forgetting. The second loss uses target-domain pseudo-labels generated by the teacher model for unsupervised learning (ℒ_unsup) to extract target-domain knowledge. The two losses are used to optimize the source and target domain prompt memories, respectively. Combined with the proposed prompt alignment losses, the overall objective is:
ℒ = λ_sℒ_sup + λ_usℒ_unsup + λ_epaℒ_epa + λ_dpaℒ_dpa
To balance the loss penalties, λ_s and λ_us are set to 1, while λ_epa and λ_dpa are set to 0.25. The detection losses (ℒ_sup and ℒ_unsup) combine the focal loss and the L1 loss <cit.>.
§ EXPERIMENTS
We conduct extensive experiments to demonstrate the advantages of our proposed method for the object detection Unsupervised Domain Adaptation (UDA) task. In Section <ref>, we describe the datasets and the details of the model settings. In Section <ref>, we compare the mean Average Precision (mAP) of PM-DETR with other baselines <cit.> in three challenging domain adaptation scenarios: Weather, Scene, and Synthetic to Real Adaptation. Comprehensive ablation studies are conducted to investigate the impact of each component in Section <ref>. Furthermore, in Section <ref>, a qualitative analysis is given to facilitate intuitive understanding.
§.§ Experimental Setup
Datasets. We evaluate our method on four public datasets, including Cityscapes <cit.>, Foggy Cityscapes <cit.>, Sim10k <cit.>, and BDD100k <cit.>. These datasets provide diverse and challenging scenarios for domain adaptation tasks:
Weather Adaptation. In this scenario, we use Cityscapes as the source dataset, consisting of 2,975 training images and 500 evaluation images. The target dataset is Foggy Cityscapes, generated from Cityscapes using a fog synthesis algorithm. Foggy Cityscapes introduces foggy conditions to the images, enabling us to evaluate the performance of our method in adapting object detection models from clear weather to foggy weather scenarios.
Scene Adaptation. In this scenario, Cityscapes serves as the source dataset, while the target dataset is the daytime subset of BDD100k. BDD100k consists of 36,728 training images and 5,258 validation images, all annotated with bounding boxes. This subset provides a diverse range of scenes captured during the daytime.
Synthetic to Real Adaptation. In this particular scenario, we employ Sim10k as the source domain, which is generated using the Grand Theft Auto game engine. Sim10k comprises 10,000 training images, accompanied by 58,701 bounding box annotations. As for the target domain, we utilize the car instances from Cityscapes for both training and evaluation purposes.
Implementation Details. Our method is built based on Deformable DETR <cit.>. We use an ImageNet <cit.> pre-trained ResNet-50 <cit.> as the CNN backbone in all experiments. In the burn-in step, we adopt the Adam optimizer <cit.> for training over 50 epochs. The initial learning rate is set to 2e-04 and is decayed by a factor of 0.1 after 40 epochs. The batch size is set to 4 for all adaptation scenarios. In the second, cross-domain training step, the model is trained for 12 epochs. The prompt-based parameters are initialized with random float numbers and use a learning rate of 2e-05. The learning rate for the other parameters, excluding the prompt-based parameters, is relatively small and set to 2e-06. All learning rates are decayed by a factor of 0.1 after 10 epochs. In addition, we adopt mean Average Precision (mAP) with a threshold of 0.5 as the evaluation metric. We set the filtering threshold (confidence) for pseudo-label generation to 0.5. All experiments are conducted on two NVIDIA Tesla A100 GPUs.
§.§ Comparisons with SOTA Methods
Weather Adaptation. To assess the reliability of object detectors under varying weather conditions, we transfer models from Cityscapes to Foggy Cityscapes. As shown in Table <ref>, our proposed method PM-DETR significantly outperforms other cutting-edge approaches, achieving a 44.3% score compared to the closest SOTA end-to-end model, MTTrans <cit.>, at 43.4%. Furthermore, it reveals that PM-DETR considerably enhances Deformable DETR's cross-domain performance, achieving a 15.8% absolute gain in mAP50 and outperforming all previous domain adaptive object detection methods. These promising results highlight the ability of our method to extract diverse domain-specific knowledge and effectively address distribution shift, leading to improved performance in unsupervised domain adaptation for object detection tasks.
Scene Adaptation. In real-world applications, such as autonomous driving, scene layouts are not static and frequently change. It makes model performance under scene adaptation crucial. Our proposed method, PM-DETR, demonstrates its effectiveness in scene adaptation as shown in Table <ref>, achieving SOTA results (33.3%) and significantly improving upon previous works. Additionally, the performance of five out of seven categories in the target domain dataset has been enhanced.
Synthetic to Real Adaptation. The training process of object detectors using affordable and accurate simulation datasets has been proven to yield improved performance. However, this approach also brings about a notable challenge in the form of a significant inter-domain gap. In the synthetic to real adaptation scenario, we evaluated the performance of our proposed method, PM-DETR, as shown in Table <ref>. PM-DETR achieved state-of-the-art accuracy with a mAP of 58.6%, outperforming Deformable DETR by 11.2% mAP. These promising results further demonstrate the importance of a long-term domain memory space for transformer detectors to effectively extract comprehensive domain knowledge in real-world unsupervised domain adaptation scenarios.
§.§ Ablation Study
Effectiveness of each component. To better analyze each component in our proposed PM-DETR framework, we conduct ablation studies by incrementally adding the components of PM-DETR. As presented in Table <ref> (PM-DETR-AS0), the teacher-student structure is a common technique in UDA <cit.>, which is used to generate pseudo labels in the target domain; it shows an 8.5% mAP drop compared to our full method. This verifies that the improvement of our method does not come from the usage of this prevalent scheme and that the model still suffers from the domain shift problem due to imperfect target domain feature extraction. In PM-DETR-AS11, by introducing prompt domain memory (PDM) in the input image, we observe that the mAP increases by 7.3%. When employing PDM in the encoder token embedding (PM-DETR-AS12) and in the decoder query embedding (PM-DETR-AS13), mAP improves by 6.5% and 6.9%, respectively. These results clearly demonstrate that the utilization of a long-term memory space enables the model to fully learn the complex data distribution and capture diverse domain-specific knowledge at multiple levels of DETR. In terms of Prompt Memory Alignment (PMA), PM-DETR-AS21 improves the mAP to 43.8% through encoder prompt alignment, and PM-DETR-AS22 improves the mAP to 43.9% through decoder prompt alignment. The improved performance shows that PMA can further reduce the discrepancy between the two domains. PM-DETR, the complete combination of all components, achieves a 15.8% improvement in total. This proves that all components complement each other and jointly mitigate the object detection domain shift problem in an unsupervised paradigm.
How do the prompt memory size and selection strategy affect the performance? In Fig. <ref> (a), we observe the impact of different prompt memory sizes on model performance. When using a single prompt (memory size equal to one), there is a significant drop in performance, suggesting that a single prompt suffers severe diversity interference. As the memory size increases, the performance of the prompt-based model gradually improves and reaches its peak when the memory size is 10. Further increasing the size leads to a slight decrease in performance, but it still outperforms the single-prompt scenario. This demonstrates that our prompt domain memory effectively constructs a long-term domain memory, enabling the model to understand the complex data distribution and diverse domain-specific knowledge. Fig. <ref> (b) compares the effect of different prompt selection schemes on model performance, including random, k-means, and distribution-based approaches. Our distribution-based selection strategy achieves the highest performance, highlighting the effectiveness of our method in capturing crucial domain-specific information. A selection method that considers the instance-level inputs can effectively and stably handle the variance of the data distribution in the target domain. To better understand the prompt selection mechanism, we plot the prompt selection frequency histograms for the three domain adaptive tasks in Fig. <ref> (b). Our prompt selection mechanism clearly encourages more knowledge sharing between similar categories and more knowledge comparison between dissimilar categories.
§.§ Visualization and Analysis
Detection Results. We show some visualization results of PM-DETR on three target domain datasets, i.e. Foggy Cityscapes, BDD100k, and Cityscapes, accompanied by ground truth and previous state-of-the-art (SOTA) methods. As shown in Fig. <ref> (a) Row1 (Cityscapes to Foggy Cityscapes), PM-DETR has higher recall and more accurate classification results in dense fog occlusion. As shown in Fig. <ref> (a) Row2 (Cityscapes to BDD100k), our method properly classifies and locates objects even when they are heavily occluded or challengingly small in size. In Fig. <ref> (a) Row3 (Sim10k to Cityscapes), we can even alleviate label misalignment (car & truck) without supervision to some degree. All visual results are consistent with the numerical assessment results in three target domains, indicating that PM-DETR manages to mitigate the domain shift problem in the UDA Transformer detector.
t-SNE Distribution Results.
Following the t-distributed stochastic neighbor embedding (t-SNE) method <cit.>, in Fig. <ref> (a) and Fig. <ref> (c), we visualize two types of t-SNE plots to illustrate the effectiveness of our approach: global-wise t-SNE and instance-wise t-SNE. By examining the global-wise t-SNE plot, we can gain insight into how well our method mixes the different domains. In the instance-wise t-SNE, on the other hand, we focus on visualizing individual instances and their embeddings. It can be observed that the t-SNE of our method is the most similar to the result of fully supervised training in terms of inter-category distance as well as similar-category aggregation. This firmly corroborates the ability of our method to mine diverse domain-specific knowledge for each category.
§ CONCLUSIONS
This paper presents a novel prompt-based method to enhance the adaptation ability of detection transformers by decoupling the hypothesis space and mitigating the existing compromise error. Our approach leverages a hierarchical Prompt Domain Memory (PDM) to maintain a long-term memory space that facilitates comprehensive learning of the complex data distribution and diverse domain-specific knowledge. To effectively utilize PDM in cross-domain learning, we propose the Prompt Memory Alignment (PMA) method, which reduces the distribution distance between the two domains by fully leveraging the domain-specific knowledge extracted from the memory space. We evaluate the effectiveness of our method through extensive experiments on three challenging Unsupervised Domain Adaptation (UDA) scenarios. The results demonstrate the significant improvements achieved by our approach; PDM and PMA jointly address the distribution shift problem.
Moreover, our method is applicable across different domain distances, making it a versatile solution for various domain adaptation problems.
|
http://arxiv.org/abs/2307.00741v1
|
20230703041055
|
UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input
|
[
"Muhammad Ibrahim",
"Naveed Akhtar",
"Saeed Anwar",
"Ajmal Mian"
] |
cs.RO
|
[
"cs.RO",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Localization is a fundamental task in robotics for autonomous navigation. Existing localization methods rely on a single input data modality or train several computational models to process different modalities. This leads to stringent computational requirements and sub-optimal results that fail to capitalize on the complementary information in other data streams. This paper proposes UnLoc, a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions. Our multi-stream network can handle LiDAR, Camera and RADAR inputs for localization on demand, i.e., it can work with one or more input sensors, making it robust to sensor failure. UnLoc uses 3D sparse convolutions and cylindrical partitioning of the space to process LiDAR frames and implements ResNet blocks with a slot attention-based feature filtering module for the Radar and image modalities. We introduce a unique learnable modality encoding scheme to distinguish between the input sensor data. Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets. The results ascertain the efficacy of our technique.
§ INTRODUCTION
Vehicle localization in outdoor environments is an essential task in robotics, especially in the autonomous driving domain. To achieve self-autonomy in urban outdoor environments, a vehicle must be able to precisely localize itself. Current outdoor localization systems rely on the Global Navigation Satellite System (GNSS). However, the lack of accuracy and the signal blockage in densely populated regions make GNSS an inadequate technology for autonomous vehicles.
Creating an offline map of the environment and using query frames during online navigation provides a viable alternate solution to the problem. Conventional methods in this direction <cit.> employ frame registration for localization. However, this leads to impractical computational requirements. More recently, deep learning techniques have shown great promise in addressing the issue <cit.>.
Among the deep learning methods, 3D point cloud regression-based approaches, e.g., <cit.>, can directly predict six degrees of freedom (6DoF) poses to localize vehicles.
Point clouds provide depth information of the scene and a 360^∘ field of view (FoV), which are helpful for precise localization. However, LiDAR data is also inherently prone to high levels of noise in rainy and foggy weather. Moreover, its unstructured nature demands complex and computationally expensive modeling when outdoor localization relies entirely on the LiDAR data.
In the related literature, processing RGB camera images with deep learning is also considered suitable for localization.
Currently, techniques such as PoseNet and its variants <cit.> use a single or a series of image frames to predict 6DoF poses. Whereas image modality offers detailed spatial information, it is easily influenced by environmental variations, such as sunlight, rain, fog etc., which is detrimental for localization. Comparatively, Radar data is largely insensitive to various weather conditions, e.g., darkness, fog, snow and sunlight. Leveraging that, Cen <cit.> extracted features from Radar scans and then applied scan matching to predict ego-motion. Radarloc <cit.> is a recent deep learning localization method that predicts global poses from Radar data.
Nevertheless, Radar data does not have precise 3D information and is noisy, which compromises the overall localization performance.
For the applications like self-driving vehicles, robust outdoor localization is only possible by leveraging complementary characteristics of different data modalities.
In this work, we present a multi-sensor localization approach, shown in Fig. <ref>, that learns a unified neural model, called UnLoc, for point cloud, Radar and image data, to achieve precise 6DoF localization. The proposed model uses sparse 3D convolutions to process cylindrical representations of point clouds, while 2D convolutions and slot attention-based feature filtering are used to process Radar and image modalities. We also introduce a learnable modality encoding technique to optimally discriminate between different data modalities for their on demand use. Our method allows the use of a single or any combination of modalities during inference. This makes our method robust to sensor failure.
Our network is trained on six sensor inputs for three different modalities. Due to the unique universal nature of our method, we also devise a technique to generate common ground-truth for different sensor data streams, which enables effective training of our model.
Our method is an adaptive deep-learning technique that can be used in challenging real-world environments.
We establish the performance of our approach on three publicly available datasets: Oxford Radar RobotCar <cit.>, Appolo-Southbay <cit.>, and Perth-WA <cit.>.
We also conduct a thorough ablation study to analyze the effects of using various sensor inputs with our model. We establish strong localization results on all the datasets, outperforming the existing methods on each modality and further improving the performance by processing multi-modality input.
§ RELATED WORK
Localization is essential for autonomous driving <cit.>. In the existing literature, different input data modalities, such as point cloud, image, and Radar data, distinguish different localization methods.
Conventional point cloud matching techniques employ registration methods <cit.>. On the other hand, recent point cloud deep learning-based techniques associate input frames to a point cloud map for 6DoF pose estimation <cit.>. Among the mentioned methods, PointLoc <cit.>, Slice <cit.> and L3Net <cit.> compress the map into a neural 6DoF pose predictor for vehicles. The PointLoc <cit.> exploits PointNet framework to predict poses while Slice <cit.> uses transformer architecture <cit.>. In general, directly predicting 6DoF pose from a point cloud frame is challenging because the unstructured LiDAR representation conflicts with the high precision demands of the task. Therefore, other methods, e.g., <cit.> often augment their neural models to handle the data complexity.
Some works also devise camera-based deep learning localization methods. For instance, PoseNet <cit.> is the pioneering technique that utilizes camera images to predict 6DoF poses. Likewise, the recent variants <cit.> of PoseNet regress poses using a single or multiple images, exploiting geometric loss and modeling poses with Bayesian Neural Network (BNN) to enhance the performance. Walch <cit.> employed LSTMs for matching geometric features to improve the pose precision. Retrieval-based learning approaches, e.g., CamNet <cit.>, RelocNet <cit.>, and Camera Relocalization CNN <cit.>, use agents that have previously visited the exact location. However, the above approaches are restricted in performance due to the demerits of the visual sensors.
Contemporary localization strategies also explore the Radar data modality. For example, Barnes <cit.> proposed a deep correlative scan matching technique based on learned feature embedding and a self-supervised module for Radar odometry system. Later, the authors developed a deep key point detector and metric localizer <cit.> for Radar odometry estimation. Cen <cit.> extracted features from Radar scans and then applied scan matching to predict ego motion. RadarLoc <cit.> is the latest method that predicts global absolute poses with respect to the world coordinate system employing Radar data. Compared to camera and LiDAR, radar is not as sensitive to the weather conditions and
provides 360^∘ FoV of a scene.
However, it lacks precise 3D information when compared to the LiDAR data.
The methods mentioned above mainly rely on a single sensor data modality. However, outdoor localization, especially for autonomous driving, requires the precision and robustness that is hard to achieve with a single modality.
Still, localization through multiple sensors (Radar, camera and LiDAR) in outdoor environments is currently a largely unexplored direction. In this work, we fill this gap by devising a universal localization neural model that leverages point cloud, Radar and/or camera input on demand, to provide robust and precise 6DoF pose predictions.
§ METHODOLOGY
We propose a multi-sensor localization method that can leverage point cloud, Radar and/or image modalities on demand. Our approach uses three parallel streams for these modalities in the early stages of the model, see Fig. <ref>. It applies sparse 3D convolutions to the LiDAR data and employs ResNet blocks followed by slot attention-based feature filtering for the Radar and image modalities. Moreover, it employs a learnable modality encoding that learns to identify the input sensor data stream for optimal performance. The architectures for the Radar and image data streams are largely similar, except for an additional 2D convolution layer to process the Radar data. Our technique computes a feature vector for each data modality and applies modality encoding to it. The individual data streams are projected onto their respective feature spaces, which have similar dimensionality across modalities. The data features are eventually processed by a regression module to predict the 6DoF pose.
Our model is trained on six sensor inputs: LiDAR (left and right), camera (left, rear and right), and Radar. For inference, it accepts any single input or any combination of the input modalities. Our method is designed for localization in outdoor environment, particularly for self-driving vehicles. Due to its multi-modality nature, it is well-suited to different weather conditions and is robust to sensor failures. Each component of our framework is explained in detail below.
§.§ 3D Features Extraction
We propose a sparse 3D convolution localization block to process the point cloud data, see Fig. <ref>. This block extracts 3D geometric features from LiDAR data, which are particularly relevant for localization in outdoor environments. 3D convolution using voxelization is known for its efficacy in processing LiDAR data <cit.>. However, voxel processing using 3D convolution is computationally expensive. Hence, we devise a lightweight sparse 3D convolutional method utilizing cylindrical partitioning. Our approach uses a series of sparse 3D convolutional sub-blocks to process point cloud data, followed by Max-pooling and Average-pooling layers. Details of this process are provided below.
§.§.§ 3D Cylindrical Partition and Features
Outdoor LiDAR point clouds have the attribute of changing density, with nearby regions having substantially higher point densities than far regions. Hence, a cylindrical coordinate system is well-suited to partition LiDAR data, as it evenly distributes the points across the various partitions by providing larger volumes for the far-off points.
The top-left corner of Fig. <ref> illustrates the workflow of our partitioning, followed by point feature processing. First, the (X, Y, Z) coordinates of the points are converted to (R, θ, z) for a cylindrical grid representation, where R and θ denote the radius and azimuth angle, respectively, and z the height. Then, cylindrical partitioning is performed to generate voxels with a largely uniform point distribution. The farther regions have larger voxels compared to the nearby regions. Next, the cylindrical grid representation is passed through an MLP-based module with linear layers to obtain cylinder features. These features are further processed
using Unique indices and Scatter-Max layers to obtain maximum magnitude features. Finally, we get the 3D cylinder representations with size ∈ℝ^D × H × W × L, where D is the feature dimension, and H, W, L respectively represent the height, width, and length of the cylinder. Our 3D Sparse convolution module subsequently processes this representation.
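The coordinate conversion and cylindrical voxelization could be sketched as follows; the grid resolution and the radius/height ranges are illustrative assumptions, and only the voxel-index computation (not the MLP and Scatter-Max feature step) is shown.

import math
import torch

def cylindrical_voxel_indices(points, grid=(480, 360, 32),
                              r_range=(0.0, 50.0), z_range=(-4.0, 2.0)):
    """Map (x, y, z) LiDAR points to cylindrical voxel indices (sketch)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = torch.sqrt(x ** 2 + y ** 2)        # radius
    theta = torch.atan2(y, x)              # azimuth angle in [-pi, pi]
    # Normalize each cylindrical coordinate into [0, 1) and discretize on the grid;
    # equal-sized cells in (R, theta, z) give far regions larger Cartesian volumes.
    r_n = (r - r_range[0]) / (r_range[1] - r_range[0])
    theta_n = (theta + math.pi) / (2 * math.pi)
    z_n = (z - z_range[0]) / (z_range[1] - z_range[0])
    coords = torch.stack([r_n, theta_n, z_n], dim=1).clamp(0.0, 1.0 - 1e-6)
    sizes = torch.tensor(grid, dtype=coords.dtype)
    return (coords * sizes).long()         # (N, 3) voxel index per point

points = torch.randn(1000, 3) * 20.0
voxel_idx = cylindrical_voxel_indices(points)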
§.§.§ 3D Sparse Convolution
To process the cylindrical representation, we had two options: conventional 3D convolution or sparse 3D convolution. We choose the latter for memory and computational efficiency. Inspired by Cylinder3D <cit.>, we create two asymmetric types of 3D sparse convolution modules, without down-sampling (CB) and with down-sampling (CBD), as shown in Fig. <ref>. Both convolution blocks have two sparse convolution streams with stride 1. The first stream has kernel (1,3,3) followed by kernel (3,1,3) and the second stream has the same kernels in the reverse order. The output of both streams is added. In the convolution with down-sampling (CBD), an additional 3D convolution with kernel (3,3,3) and stride 2 is applied for downsampling.
We employ one CB and four CBD modules in series with 32, 64, 128, 256, 512 output channels. This block's output features size is B, 512, 30, 23, 8, representing batch size B, the feature dimensionality, and cylinder height, width and length, respectively.
§.§.§ 3D Max-pooling and Average Pooling
To aggregate the information, we use max- and average-pooling techniques. The pooling layers compute maximum and average feature values along the spatial/cylindrical dimensions. These layers generate outputs of size ℝ^B × 512, which are concatenated to generate the final feature vector of size ℝ^B × 1024 for further processing, which is discussed in Sec. <ref>.
§.§ 2D Features Extraction
Here, we explain the 2D feature extraction blocks for image and Radar modalities. The architecture for both blocks in our framework is largely identical, except for an additional 2D convolution layer
used for the Radar modality, see Fig. <ref>. This additional layer broadcasts channel size from one to three for later processing. We also transform the polar scanning Radar outputs into Cartesian coordinates as grey-scale images ∈ℝ^ H × W, as shown in Fig. <ref>. This transformation helps in localization performance. The architecture for our feature extraction blocks includes a ResNet module, a fine-tuning module and a slot attention-based features filtering module. These components are discussed below in detail.
§.§.§ ResNet Blocks
The primary responsibility of this module is to extract useful local features from Radar and camera modalities. The state-of-the-art camera-based localization methods <cit.>, <cit.>, <cit.> utilize a pre-trained ResNet model <cit.> as a features extractor. The aforementioned approaches typically involve selecting layers from a significant portion of the pre-trained model, resulting in computationally demanding models. Our framework focuses solely on the ten initial blocks of the pre-trained ResNet-152 model, thereby considerably reducing the computational cost. The input to this module is in ℝ^ B × D × H × W, where B is the batch size, D is the input channel size and (H, W) are the height and width of the input. The input values for D, H, W are 3, 512 and 512, respectively. The output of this module is in ℝ^B × 512 × (H/8) × (W/8).
§.§.§ Fine-tuning Module
The features extracted in Sec. <ref> are fine-tuned for the localization task in this module. This module also makes the features more suitable for the subsequent slot attention-based filtering. In the fine-tuning module, the input is passed through a series of 2D convolutional layers and is augmented with positional information channel-wise. The resultant feature map is flattened along the spatial dimensions and fed into a linear layer for further processing.
The positional encoding in this module is learnable with the encoding tensor of size ℝ^ B × (H/32) × (W/32) × 1024. The final output size of this module is B × (HW/1024) × 1024.
§.§.§ Slot Attention-based Features Filtering
Two types of distortions can considerably affect the localization performance. One results from the angular and range errors of the sensors, while the other from the foreground moving objects, e.g., bikes, buses, trucks and pedestrians in the outdoor scene. To minimize these distortions, state-of-the-art methods apply feature filtering techniques. For instance, Barnes <cit.> designed a UNet-type architecture to predict distraction-free Radar odometry. Radarloc <cit.>, AtLoc <cit.> and PointLoc <cit.> apply attention-based encoder and decoder techniques to filter out these noises. However, these methods fail to leverage object-centric features in the scene for this purpose, and they are also not easily tailored to multiple modalities. To this end, we design a unique slot attention <cit.> based feature filtering module to leverage object-centric features with Radar and image modalities. Slot attention is the key component of our module, whose architectural details are given in Fig. <ref>. Its primary function is to project the N input features to K output vectors, which represent slots - we chose K = 20 in our framework.
In this module, the slots are randomly initialized, and they improve iteratively during the training phase. The slots compete for input features through softmax-based attention in each iteration, and then a gated recurrent unit (GRU) is used to update their representations. Similar to a transformer, key, query, and value vectors are used in the slot attention mechanism. Their computations are denoted by k(·), q(·), and v(·), respectively, in the text to follow. These vectors are kept learnable in our slot attention to map the inputs and slots with a channel size D.
The size of the affinity matrix for slot attention is N × K, compared to N × N for the multi-head attention (MHA) utilized in conventional transformer architectures, where N >> K. This makes slot attention much more efficient than MHA. The slot attention output in ℝ^ B × 20 × 1024 is passed through an MLP and a normalization layer for feature refinement. The size of the feature vector generated here is ∈ℝ^ B × 1024.
Concretely, a single iteration of the employed slot attention performs the following computation.
α = 1/√(C) · k(input) · q(slots)^⊺ ∈ℝ^N × K,
Γ_i,j = e^α_i,j / ∑_s e^α_s,j,
W_i,j = Γ_i,j / ∑_s=1^N Γ_s,j,
β = W^⊺· v(input) ∈ℝ^K × D.
In the above, α, Γ, W, and β respectively represent the attention coefficient matrix, the attention normalized over the slots, the weights normalized over the inputs, and the slot update for further processing by the GRU. Finally, a GRU with as many hidden units as the dimensionality of the slots is used to update each slot, based on the previous slot state and the signal β.
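A sketch of the slot-attention filtering step following the equations above, with K = 20 slots and a GRU-based update; the number of iterations and the layer names are assumptions.

import torch
import torch.nn as nn

class SlotAttentionFilter(nn.Module):
    """Slot attention-based feature filtering (sketch, K = 20 slots)."""

    def __init__(self, dim=1024, num_slots=20, iters=3):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim))   # randomly initialized slots
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.iters = iters
        self.scale = dim ** -0.5

    def forward(self, inputs):
        # inputs: (B, N, dim) fine-tuned features; returns (B, K, dim) filtered slots.
        B = inputs.size(0)
        slots = self.slots.unsqueeze(0).expand(B, -1, -1)
        k, v = self.to_k(inputs), self.to_v(inputs)
        for _ in range(self.iters):
            q = self.to_q(slots)
            attn = torch.einsum('bnd,bkd->bnk', k, q) * self.scale   # alpha: (B, N, K)
            attn = attn.softmax(dim=-1)                              # Gamma: slots compete
            w = attn / attn.sum(dim=1, keepdim=True)                 # W: normalize over inputs
            updates = torch.einsum('bnk,bnd->bkd', w, v)             # beta: per-slot update
            slots = self.gru(updates.reshape(-1, updates.size(-1)),  # GRU update per slot
                             slots.reshape(-1, slots.size(-1))).view(B, -1, updates.size(-1))
        return slots

feats = torch.randn(2, 256, 1024)
slot_feats = SlotAttentionFilter()(feats)   # (2, 20, 1024)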
§.§ Modality Encoding
Inspired by the transformer’s positional encoding technique, we propose modality encoding and incorporate it in our network to identify the sensor modality. Unlike pre-fixed `cosine’ and `sine’ positional encoding, we learn the modality encoding for optimal performance. Since our framework supports three modalities, i.e., image, Radar and point cloud, we randomly initialize three modality encoding vectors with the same size as the feature vectors. These vectors are learned during the training phase and are added to the feature vectors to detect the sensor modalities. Experiments showed that this modality encoding works well during inference time.
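One possible realization of this learnable modality encoding is a trainable vector per sensor modality that is added to the corresponding feature vector, as sketched below; the modality names and the initialization scale are assumptions.

import torch
import torch.nn as nn

class ModalityEncoding(nn.Module):
    """Learnable per-modality vectors added to the extracted feature vectors (sketch)."""

    def __init__(self, dim=1024, modalities=('lidar', 'camera', 'radar')):
        super().__init__()
        # One randomly initialized, trainable encoding per supported modality.
        self.encodings = nn.ParameterDict(
            {name: nn.Parameter(torch.randn(dim) * 0.02) for name in modalities})

    def forward(self, features, modality):
        # features: (B, dim) output of the corresponding feature-extraction stream.
        return features + self.encodings[modality]

encode = ModalityEncoding()
radar_feat = torch.randn(4, 1024)
tagged = encode(radar_feat, 'radar')   # same features, tagged with their sensor identity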
§.§ Regression Module
The feature vectors encoded in the earlier stages of the model are passed to a Regression Module, which is responsible for predicting the 6DoF pose for localization. It comprises two shared fully connected (FC) layers at the top, followed by two branches of four FC layers each - see Fig. <ref>. The network has a dicephalous architecture to precisely predict the translation and rotation parameters of the 6DoF pose. The output channel sizes in each branch of FC layers are 1024, 512, 256, 3. The layers are initialized with the Xavier uniform distribution and ReLU activation is used. To reduce the variation between rotation and translation values, the rotation branch of this module is normalized.
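Under the layer sizes stated above (two shared FC layers followed by two four-layer branches with output sizes 1024, 512, 256 and 3), the regression module could be sketched as follows; this is an illustrative reading of the description, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def branch(sizes=(1024, 1024, 512, 256, 3)):
    """Four FC layers with Xavier-uniform initialization (sketch)."""
    layers = []
    for i in range(len(sizes) - 1):
        linear = nn.Linear(sizes[i], sizes[i + 1])
        nn.init.xavier_uniform_(linear.weight)
        layers.append(linear)
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class PoseRegressor(nn.Module):
    """Shared FC layers followed by separate translation and rotation heads (sketch)."""

    def __init__(self, dim=1024):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                    nn.Linear(dim, dim), nn.ReLU())
        self.trans_head = branch()
        self.rot_head = branch()

    def forward(self, x):
        h = self.shared(x)
        t = self.trans_head(h)                       # translation parameters
        r = F.normalize(self.rot_head(h), dim=-1)    # normalized rotation parameters
        return t, r

t, r = PoseRegressor()(torch.randn(2, 1024))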
§.§ Training Loss Computation
We utilize the ℓ_1-loss for rotation and translation, which we found more suitable for our network due to the various types of input data. We combine the rotation and translation losses to compute the loss for each input with learnable balancing factors α and β, as shown in Eq. (<ref>). During the training phase, we forward pass inputs from all six sensors and combine the loss for each input to compute the net loss for the network, see Eq. (<ref>). Finally, the net loss is back-propagated in the network for optimization.
ℒ = ‖ t - t' ‖_1 e^-α + α + ‖ r - r' ‖_1 e^-β + β.
ℒ_net = ℒ_L1 + ℒ_L2 + ℒ_C1 + ℒ_C2 + ℒ_C3 + ℒ_R.
In Eq. (<ref>), t and t' indicate ground-truth and predicted translation, whereas r and r' denote the respective rotations.
In Eq. (<ref>), ℒ_net, ℒ_L1, ℒ_L2, ℒ_C1, ℒ_C2, ℒ_C3, ℒ_R are the net loss, LiDAR left, LiDAR right, camera left, camera right, camera rear, and Radar sensor losses, respectively.
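A sketch of the per-sensor loss with learnable balancing factors; writing the weights as e^-α and e^-β follows the standard learnable-uncertainty form and is an assumption about the exact sign convention, and the net loss is then simply the sum of this term over the six sensor inputs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BalancedPoseLoss(nn.Module):
    """L1 translation and rotation losses with learnable balancing factors (sketch)."""

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))   # balances the translation term
        self.beta = nn.Parameter(torch.zeros(1))    # balances the rotation term

    def forward(self, t_pred, t_gt, r_pred, r_gt):
        t_err = F.l1_loss(t_pred, t_gt)
        r_err = F.l1_loss(r_pred, r_gt)
        return (t_err * torch.exp(-self.alpha) + self.alpha
                + r_err * torch.exp(-self.beta) + self.beta)

# Net loss: sum the balanced loss over the six sensor streams (L1, L2, C1, C2, C3, R).
criterion = BalancedPoseLoss()
loss = criterion(torch.randn(4, 3), torch.randn(4, 3),
                 torch.randn(4, 3), torch.randn(4, 3))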
§ EXPERIMENTS
We evaluate the proposed method for the localization task on benchmark datasets and compare it with state-of-the-art techniques. To ensure a fair comparison, we choose popular existing methods based on the availability of source code by the original authors. We present results on three major localization datasets: Oxford Radar RobotCar <cit.>, Apollo-SouthBay <cit.> and Perth-WA <cit.>. Our results establish the effectiveness of the proposed model for point cloud, image and Radar modality, both individually and collectively. Prior to presenting the results for each dataset, we discuss implementation details in the section below.
§.§ Implementation Details
To ensure fair benchmarking, we adopt uniform configurations for our method across the Oxford Radar RobotCar, Apollo-SouthBay and Perth-WA datasets. We apply batch sizes of 6 and 1 for training and testing, respectively. We use Adam optimizer with a learning rate of 0.0001 and weight decay 0.0005. At the outset, the model is trained on Oxford Radar RobotCar for multi-modalities and then fine-tuned on the Apollo-SouthBay dataset and Perth-WA dataset for point cloud modality. The model is trained for 40 epochs on all three datasets. For all experiments, NVIDIA GeForce RTX 3090 GPU with 24 GB memory is used. The experiments are conducted using PyTorch 1.8.0 on Ubuntu 18.04 OS.
§.§ Results on Oxford Radar RobotCar
Dataset details:
The Oxford Radar RobotCar
<cit.> is an extension
of their previous dataset <cit.> that includes three different modalities: RGB camera images, Radar data, and point clouds from six sensors: left, right, and rear cameras; a Navtech CTS350-XFMCW Radar scanner; and left and right Velodyne HDL-32E LiDAR. NovAtel SPAN-CPT ALIGN inertial (INS) and GPS navigation systems are used to collect the ground-truth poses for this dataset. It covers a total of 280km of urban area, including more than 30 sequences, each captured over 9km. This dataset is large and challenging for localization due to the presence of a variety of foreground objects, such as people and cars. We use the same training and test sequences as Radarloc <cit.> for our experiments on this dataset.
Common Ground-truth Generation for multi-sensors:
Our approach has the unique ability to leverage all three data modalities provided by the dataset.
However, the frame rates for Radar (4Hz), LiDAR (20Hz), and camera (16Hz) data have a large disparity between them, which leads to timestamp misalignment between the modality sensors and the ground-truth.
To generate a unified ground-truth for all the data sensors at a given time, we synchronize the Radar timestamps with the ground-truth poses using interpolation between the GPS/INS measurements and the Radar timestamps. This step provides a ground-truth pose for each Radar frame. We then compute the position information for each frame of all modality sensors based on their timestamps. To acquire the corresponding frame for each remaining sensor, we search for the closest frame position to each Radar frame using the minimum Euclidean distance with a KDTree search <cit.>. Each Radar frame and its closest matched frame share the same ground-truth. Missing GPS/INS data is handled by interpolating values from the visual odometry data provided by the Radar RobotCar. In this way, we calculate a single ground-truth pose for each frame of every modality sensor. The process is also illustrated in Fig. <ref>.
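The synchronization step could be sketched as below: ground-truth positions are interpolated at the Radar timestamps, and each Radar frame is then matched to the closest frame of another sensor with a KD-tree; the timestamp arrays, 2D positions, and the use of SciPy's cKDTree are assumptions made for illustration.

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical GPS/INS timestamps with 2D positions, plus per-sensor frame timestamps.
ins_t = np.linspace(0.0, 100.0, 2000)            # GPS/INS measurement times (s)
ins_xy = np.cumsum(np.random.randn(2000, 2), 0)  # corresponding positions
radar_t = np.linspace(0.0, 100.0, 400)           # 4 Hz Radar frame times
lidar_t = np.linspace(0.0, 100.0, 2000)          # 20 Hz LiDAR frame times

# 1) Interpolate a ground-truth position for every Radar frame.
radar_xy = np.stack([np.interp(radar_t, ins_t, ins_xy[:, k]) for k in range(2)], axis=1)

# 2) Interpolate positions for the other sensor's frames and match each Radar frame
#    to its closest LiDAR frame by Euclidean distance using a KD-tree.
lidar_xy = np.stack([np.interp(lidar_t, ins_t, ins_xy[:, k]) for k in range(2)], axis=1)
_, nearest = cKDTree(lidar_xy).query(radar_xy, k=1)

# Radar frame i and LiDAR frame nearest[i] now share the same ground-truth pose.
pairs = list(zip(range(len(radar_t)), nearest))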
Performance:
We present the experimental results
in Table <ref> where we use individual data modalities to compare with the approaches of the respective modalities. Due to the universal nature of our method, we are able to compare
with Radar, camera, and LiDAR-based
deep localization methods. Our technique outperforms all the existing methods by a considerable margin, which is clear from the respective sections of the table. For the point cloud modality, our approach surpasses the best performer PointLoc <cit.> by reducing the errors for translation and rotation nearly by 5× and 3×, respectively.
The results confirm that our encoding and sparse 3D convolutional modules
are effective components for localization with point clouds. Table <ref> also ascertains that our method is much more accurate than the camera and Radar-based methods. Our technique outperforms RadarLoc <cit.> by a more than 4×
error reduction in translation and rotation estimates. We conjecture that the strong performance of our
approach has two main sources. Firstly, for each modality, our network is carefully designed with the state-of-the-art representation learning components.
Secondly, our model is able to leverage the complementary information from different modalities during the learning stage to better train each individual modality network. For each modality, the inference stage is able to
take guidance from learned positional encoding for optimal performance.
Ablation studies:
To investigate the influence of the different blocks of our technique, we conduct ablation studies and summarize the results in Table <ref>. In these experiments, the architecture of each block is kept the same, but the image, Radar, and point cloud modality blocks are turned on or off to determine their impact on the localization results. First, we test our framework with a single modality, turning off the other two modality blocks. Table <ref> reports the full results of these experiments in the first four columns, which can be compared with the results in Table <ref>.
Among the single modalities, our approach already achieves the best overall performance.
To further analyse the performance of our network, we test different combinations of the blocks using three sensors at a time.
From Table <ref>, it is clear that the localization performance of our technique improves as more data modalities are used.
In this setting, the best performance is achieved with the combination of left LiDAR, right LiDAR, and Radar, resulting in 1.42 m translation and 0.49^∘ rotation errors.
Finally, we use all six data streams. The last column of the table shows that this yields the overall best performance, confirming that each data modality contributes to improving the results and that our network leverages the complementary information effectively.
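The following purely illustrative sketch (ours, not the paper's implementation, which uses a learnable modality encoding; fuse_descriptors and pose_regressor are hypothetical names) shows the idea behind evaluating with an arbitrary subset of sensors, as done in the ablation: unavailable modalities are simply left out before pose regression.

import numpy as np

def fuse_descriptors(descriptors, available):
    """descriptors: dict name -> (D,) feature vector; available: iterable of names.
    Returns the mean of the available per-modality descriptors."""
    feats = [descriptors[name] for name in available if name in descriptors]
    return np.mean(feats, axis=0)

# Example: Radar is switched off, localize from the remaining sensors only.
# pose = pose_regressor(fuse_descriptors(all_feats, {"left_lidar", "right_lidar", "camera"}))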
§.§ Results on the Apollo-SouthBay Dataset
Dataset:
Apollo-SouthBay <cit.> is a comprehensive localization dataset collected in the San Francisco Bay area, USA, using an IMU-based system to record the ground-truth poses for the LiDAR frames. The data are captured in residential, urban, and downtown areas as well as on highways. The dataset includes six routes: BaylandsToSeafood, ColumbiaPark, SanJoseDowntown, SunnyvaleBigLoop, Highway237, and MathildaAVE, each with separate training and test sets.
Performance:
The outcomes of our experiments on this dataset are presented in Table <ref>. For benchmarking, we fine-tune our model on the training sets and assess our model on all six routes of the test set.
We employ RMSE as the evaluation metric, following <cit.>, and compare our approach with the state-of-the-art localization methods Levinson <cit.>, Wan <cit.>, L3-Net <cit.>, and Slice3D <cit.>. In the table, we report values averaged over all six routes. Levinson <cit.>, Slice3D <cit.>, and L3-Net <cit.> are single-modality methods, whereas Wan <cit.> is a fusion model that integrates multiple sensors, including GPS. The results of the compared methods are taken directly from the literature.
Our method outperforms all of these techniques, achieving the lowest average errors across the six routes. For brevity we do not report per-route results; however, our method also achieves the best performance on each individual route.
§.§ Results on Perth-WA dataset
Dataset:
The Perth-WA dataset <cit.> is captured in the Central Business District (CBD) of Perth, Western Australia. It comprises a LiDAR map of 4 km^2 with a 6DoF ground-truth pose per frame. The scenes include commercial structures, residential areas, food streets, complex routes, and hospital buildings. The data are collected in three different two-hour sessions under various weather conditions. We apply the same training and test split as in <cit.>: the training set comprises 20K frames of sparse and dense point clouds, and another 2.2K frames are used as the test set. The dataset is available online on IEEE DataPort <cit.>.
Performance:
We evaluate the performance of our approach against recent point cloud-based localization methods, namely PointLoc <cit.> and the baseline and pretrained Slice3D models of <cit.>, as shown in Table <ref>.
To conduct this experiment, we fine-tune our framework on the training set and evaluate it on the test set. In line with PointLoc <cit.>, we use the Mean Absolute Error (MAE) of the poses to analyze the performance.
Our localization approach outperforms all of these methods in both angular and translational mean errors. These results show that our approach facilitates more effective point cloud feature learning, making it a preferred choice for outdoor localization using LiDAR frames.
§ CONCLUSION
This paper presents a novel multi-sensor localization framework along with a deep neural network architecture that processes LiDAR, Radar, and/or camera inputs on demand.
The proposed network employs 3D sparse convolution and cylindrical partitioning to process LiDAR frames, and implements ResNet blocks with fine-tuning layers and a slot attention-based feature filtering module for the Radar and image modalities. It also introduces a novel learnable modality encoding technique to identify the type of the input data modality. The network is trained on six inputs from three sensor types and can process either a single sensor input or multiple sensor inputs at inference, which makes our method robust to sensor failure. Our method is useful for self-driving vehicles that need precise localization regardless of the weather conditions.
We evaluate our method on three benchmark datasets and achieve state-of-the-art results.
|
http://arxiv.org/abs/2307.03150v1
|
20230706172501
|
Super-Schur Polynomials for Affine Super Yangian $\mathsf{Y}(\widehat{\mathfrak{gl}}_{1|1})$
|
[
"Dmitry Galakhov",
"Alexei Morozov",
"Nikita Tselousov"
] |
hep-th
|
[
"hep-th",
"math-ph",
"math.AG",
"math.MP",
"math.QA",
"math.RT"
] |
MIPT/TH-14/23
ITEP/TH-17/23
IITP/TH-13/23
Super-Schur Polynomials for Affine Super Yangian 𝖸(𝔤𝔩_1|1)
Dmitry Galakhov^2,3,4,[e-mail: galakhov@itep.ru], Alexei Morozov^1,2,3,4,[e-mail: morozov@itep.ru] and Nikita Tselousov^1,2,4,[e-mail: tselousov.ns@phystech.edu]
^1 MIPT, 141701, Dolgoprudny, Russia
^2 NRC “Kurchatov Institute”, 123182, Moscow, Russia
^3 IITP RAS, 127051, Moscow, Russia
^4 ITEP, Moscow, Russia
ABSTRACT
We explicitly construct cut-and-join operators and their eigenfunctions –
the Super-Schur functions –
for the case of the affine super-Yangian 𝖸(𝔤𝔩_1|1).
This is the simplest non-trivial (semi-Fock) representation, where eigenfunctions
are labeled by the superanalogue of 2d Young diagrams,
and depend on the supertime variables (p_k,θ_k).
The action of other generators on diagrams is described by the analogue
of the Pieri rule.
We also present generalizations of the hook formula for the measure on super-Young diagrams and of the Cauchy formula.
A discussion of the string-theory origins of these relations is also provided.
§ INTRODUCTION
Yangian and DIM symmetries,
which are the far-going generalizations of the Lie algebraic ones,
are currently in the center of study in theoretical physics.
They appear in a variety of problems, from celestial amplitudes <cit.>
to Seiberg-Witten-Nekrasov theory (low-energy sectors of brane models) <cit.>.
And they have rich and interesting representation theory,
including Fock, MacMahon and triangular-times sectors.
In another direction, an arbitrary simple Lie (super)algebra can be raised to a Yangian or DIM algebra,
and one can ask how the representation theory depends on this choice.
In the present paper we address one of the initial questions on this route:
the semi-Fock representation of the simplest super-Yangian 𝖸(𝔤𝔩_1|1).
Our task is to describe the corresponding analogue of Young diagrams,
associated time-variables, commuting cut-and-join operators,
their common eigenfunctions (super-Schur/Jack functions)
and the action of other generators on them.
As usual, generators act by adding or subtracting boxes to the diagram
and/or by the differential operators in time variables.
We do not go into details and motivations of the definitions,
just cite a few selected papers, emphasising different aspects of the story: <cit.>.
Instead we refer the reader to sec.2 of <cit.> and sec.3 of <cit.> for a reminder of the properties of the ordinary Schur/Jack functions,
which are the eigenfunctions of the ordinary cut-and-join operators <cit.>,
associated with the ordinary Young diagrams and with the ordinary Yangian 𝖸(𝔤𝔩_1).[We prefer the name “super-Schur” rather than “super-Jack”, since super-Jack polynomials were introduced in <cit.> in a different context, and we hope for much farther-reaching generalization opportunities than those discussed in this note.]
It is this construction that we are going to generalize to 𝖸(𝔤𝔩_1|1).
Generalizations to other representations, other Yangians and DIM algebras (their quantization)
are the three obvious directions for further developments –
which are beyond the scope of the present paper.
This note is organized as follows.
In sec.<ref> we review the construction of the affine super Yangian 𝖸(𝔤𝔩_1|1) and its semi-Fock representation, and introduce super-partitions.
Sec.<ref> is the heart of this note.
We present a construction of super-commuting super-time variables and derive in these terms a set of differential cut-and-join operators (<ref>).
We define super-Schur polynomials as a set of cut-and-join eigenfunctions.
In sec.<ref> we make remarks on this derivation, as well as on possible projects that might be interesting to develop in the future.
In this note we have tried to concentrate on simple algebraic properties of the novel cut-and-join operators and super-Schur functions; we therefore collect hints and reasoning from string theory on this subject in app.<ref>.
Finally, in the very end of this note, in app.<ref>, we provide data (super-Schur functions, hook measures, cut-and-join eigenvalues) on all super-partitions up to level 9/2.
§ AFFINE SUPER-YANGIAN
§.§ Algebra: the affine Yangian 𝖸(𝔤𝔩_1|1)
The algebra 𝖸(𝔤𝔩_1|1) is a (shifted) affine Yangian algebra based on the affine Dynkin diagram:
[Figure: the affine Dynkin diagram of 𝔤𝔩_1|1, consisting of a (+) node and a (−) node connected by a pair of edges.]
We would like to mark two nodes of this diagram by different colors, and we will distinguish those nodes as a (+) node and a (-) node.
𝖸(𝔤𝔩_1|1) is generated by two families of Chevalley generators e^±_n, f^±_n, ψ^±_k, where n∈ℤ_≥ 0, k∈ℤ, one family for each node.
This is a superalgebra: we assign odd (fermionic) parity to the raising e- and lowering f-generators, and even (bosonic) parity to the Cartan ψ-generators.
In addition, the algebra 𝖸(𝔤𝔩_1|1) depends on two parameters, which we denote h_1,2.
The (super-)commutation relations between generators read:
{e^a_n,e^a_k}={f^a_n,f^a_k}=[ψ^a_n,e^a_k]=[ψ^a_n,f^a_k]=0, a=± ,
{e_n+2^+,e_k^-}-2{e_n+1^+,e_k+1^-}+{e_n^+,e_k+2^-}-h_1^2+h_2^2/2{e_n^+,e_k^-}+h_1^2-h_2^2/2[e_n^+,e_k^-]=0 ,
{f_n+2^+,f_k^-}-2{f_n+1^+,f_k+1^-}+{f_n^+,f_k+2^-}-h_1^2+h_2^2/2{f_n^+,f_k^-}-h_1^2-h_2^2/2[f_n^+,f_k^-]=0 ,
[ψ_n+2^+,e_k^-]-2[ψ_n+1^+,e_k+1^-]+[ψ_n^+,e_k+2^-]-h_1^2+h_2^2/2[ψ_n^+,e_k^-]+h_1^2-h_2^2/2{ψ_n^+,e_k^-}=0 ,
[ψ_n+2^+,f_k^-]-2[ψ_n+1^+,f_k+1^-]+[ψ_n^+,f_k+2^-]-h_1^2+h_2^2/2[ψ_n^+,f_k^-]-h_1^2-h_2^2/2{ψ_n^+,f_k^-}=0 ,
[ψ_n+2^-,e_k^+]-2[ψ_n+1^-,e_k+1^+]+[ψ_n^-,e_k+2^+]-h_1^2+h_2^2/2[ψ_n^-,e_k^+]-h_1^2-h_2^2/2{ψ_n^-,e_k^+}=0 ,
[ψ_n+2^-,f_k^+]-2[ψ_n+1^-,f_k+1^+]+[ψ_n^-,f_k+2^+]-h_1^2+h_2^2/2[ψ_n^-,f_k^+]+h_1^2-h_2^2/2{ψ_n^-,f_k^+}=0 ,
{e^a_n,f^b_k}=-δ_abψ^a_n+k, a,b=± .
These relations are accompanied by quartic Serre relations:
Sym_i,j Sym_k,l{e_i^+,[e_k^-,{e_j^+,e_m^-}]}=0 ,
Sym_i,j Sym_k,l{f_i^+,[f_k^-,{f_j^+,f_m^-}]}=0 .
In some contexts it turns out to be useful to introduce generating functions for the generators:
e^±(z)=∑_k=0^∞e_k^±/z^k+1, f^±(z)=∑_k=0^∞f_k^±/z^k+1, ψ^±(z)=∑_k=-∞^∞ψ_k^±/z^k+1 .
In terms of these generating functions we could rewrite relations (<ref>) in the following form:
{e^a(z),e^a(w)}={f^a(z),f^a(w)}=[ψ^a(z),e^a(w)]=[ψ^a(z),f^a(w)]=0, a=±,
((z-w)^2-h_2^2)e^+(z)e^-(w)≃ -((z-w)^2-h_1^2)e^-(w)e^+(z) ,
((z-w)^2-h_1^2)f^+(z)f^-(w)≃ -((z-w)^2-h_2^2)f^-(w)f^+(z) ,
((z-w)^2-h_2^2)ψ^+(z)e^-(w)≃((z-w)^2-h_1^2)e^-(w)ψ^+(z) ,
((z-w)^2-h_1^2)ψ^-(z)e^+(w)≃((z-w)^2-h_2^2)e^+(w)ψ^-(z) ,
((z-w)^2-h_1^2)ψ^+(z)f^-(w)≃((z-w)^2-h_2^2)f^-(w)ψ^+(z) ,
((z-w)^2-h_2^2)ψ^-(z)f^+(w)≃((z-w)^2-h_1^2)f^+(w)ψ^-(z) ,
{e^a(z),f^b(w)}≃ -δ_abψ^a(z)-ψ^a(w)/z-w, a,b=± ,
where sign ≃ implies that we equate Laurent polynomials in z^kw^m on both sides up to monomials z^k≥ 0w^m and z^kw^m≥ 0.
Series of generators ψ_k^± are bounded below, so that:
ψ_k<-M_+^+=0, ψ_k<-M_-^-=0 ,
where M_± account for shifts (see <cit.>).
§.§ Semi-Fock and Fock representations
In this note we concentrate on a representation of 𝖸(𝔤𝔩_1|1) that we call the semi-Fock representation.
We will comment on this name at the end of this subsection.
It is an infinite-dimensional representation whose module vectors are labeled by molten 2d crystals, or super-Young diagrams.
We define this representation and its properties in this subsection.
Let us consider a subset of an infinite bipartite graph forming a cone filling the bottom right quadrant of a 2d plane:
[Figure: the cone-shaped bipartite lattice filling the bottom-right quadrant of the (x,y)-plane; gray (+)-nodes and blue (−)-nodes alternate along the arrows emanating from the tip, and two node pairs, labeled I and II, are highlighted.]
The nodes of the lattice, which by analogy with chemistry we could call “atoms”, are of two types, coinciding with the node types of the 𝔤𝔩_1|1 Dynkin diagram (<ref>).
In depicting (<ref>) we have rotated the coordinate frame by 45^∘ in comparison to fig.<ref>, so that the crystal diagram looks more similar to the Young diagrams we discuss in the next subsection.
In (<ref>) we denoted nodes by the corresponding color code.
We define molten crystals in the following way.
We call a subset of nodes λ of the lattice a molten crystal if, for any node a∈λ, every node b of the lattice connected to a by an arrow b→ a also belongs to λ.
We place the nodes on an integral 2d lattice, so that each node has integral coordinates (x,y): the node at the tip of the cone has coordinates (0,0), its nearest neighbor has coordinates (1,0), the next nodes have coordinates (1,± 1), and so on.
We weight x- and y-coordinates of a node with complex weights h_1 and h_2 parameterizing (_1|1), so that a weight of node a reads:
ω_a:=h_1 x_a+h_2 y_a .
We would like to introduce some useful functions on crystals λ.
Let us denote by λ^± a set of nodes of type (+) or (-) respectively in crystal λ.
We denote the numbers of nodes of respective types in λ as:
n_λ^+:=|λ^+|, n_λ^-:=|λ^-| .
Also we introduce net weights of (+)- and (-)-nodes in a crystal:
w_λ^+:=∑_a∈λ^+ω_a, w_λ^-:=∑_a∈λ^-ω_a .
In these terms we could write a generating function for the numbers of molten crystals in a compact form:
χ_1/2(q_1,q_2)=∑_λq_1^n^+_λq_2^n^-_λ=∏_k=1^∞1+q_1^kq_2^k-1/1-q_1^kq_2^k .
The semi-Fock representation is a crystal representation of (_1|1)– thus it falls into a class of representations of quiver BPS algebras <cit.>.
The vectors in the semi-Fock module are labeled by crystals |λ⟩.
And we could use techniques of quiver BPS algebras to derive explicit matrix elements of (_1|1) generators e_k^±, f_k^±, ψ_k^±.
We derive explicit expressions in Appendix <ref>.
The semi-Fock representation is not unique.
The easiest way to observe this is to turn to the geometric picture in fig.<ref>.
We constructed the semi-Fock representation as a slice of atoms on the east face of the pyramid partition.
However we could have chosen different slices.
If a west side slice is chosen the resulting representation is equivalent to the semi-Fock module constructed in this subsection under switching signs of h_1,2.
Let us refer to these (equivalent) modules as the semi-Fock modules.
If instead one chooses the north or south side, with a simultaneous shift one step down along the pyramid, the resulting representation is in involution with the semi-Fock module.
The involution permutes the nodes of the Dynkin diagram (<ref>) and the roles of (+)- and (-)-generators respectively.
On h_1,2 it acts as h_1→ -h_1, h_2→ h_2, and the super-Young diagram is reflected with respect to the horizontal axis.
We could construct new crystal representations from old ones using a universal naive tensor product defined in <cit.>.
We define the Fock representation as the naive tensor product of the two semi-Fock representations introduced above (hence the prefix “semi”), i.e. of the module constructed in this subsection and the module in involution with it.
The character of the Fock representation is a product of characters:
χ_ Fock(q_1,q_2)=χ_1/2(q_1,q_2)χ_1/2(q_2,q_1)=∏_k=1^∞(1+q_1^kq_2^k-1)(1+q_1^k-1q_2^k)/(1-q_1^kq_2^k)^2 .
This character coincides with <cit.>, and we believe our Fock representation is isomorphic to a Fock representation constructed in <cit.> in the context of a CFT βγ-system under embedding (_1|1) in the corresponding conformal algebra.
In this note we will not use the Fock representation concentrating mostly on the semi-Fock one, however let us note that a problem of mapping our super-time variables onto CFT fields is intriguing.
§.§ Super-Young diagrams for super-partitions
We would like to note that molten crystals defined in the past subsection are in one-to-one correspondence with super-partitions and super-Young diagrams we define in this subsection.
We call a super-partition λ of a half-integer number u∈ℤ_≥ 0/2 a sequence of half-integers
λ_1≥λ_2≥λ_3≥…≥ 0 ,
such that ∑_iλ_i=u and, whenever λ_i is not an integer, the neighboring inequalities in sequence (<ref>) are strict: λ_i-1>λ_i>λ_i+1.
In a complete analogy with ordinary partitions we introduce for super-partitions super-Young diagrams.
The diagram is constructed as a filling of the bottom right plane quadrant with tiles, so that each next tile is supported on the top and left by a previous tile or a wall.
To the halves we assign triangular half-tiles.
The height of the i-th tile column corresponds to the number λ_i. For example, the super-partition {4,7/2,2,2,3/2,1} is drawn as six columns of heights 4, 7/2, 2, 2, 3/2 and 1, where each half-integer column ends in a triangular half-tile.
Let us select pairs of connected nodes (+)→(-) in crystal λ (these pairs are highlighted by bubbles in fig. <ref>).
Then we will find that some of pairs are complete, and in some pairs a (-)-node is missing.
We identify complete pairs with complete square tiles in the corresponding super-Young diagram and incomplete pairs with triangular half-tiles.
Then under such an identification a crystal diagram transforms into a super-Young diagram, see fig. <ref>.[A similar diagram system for representations associated with crystal slices was introduced and developed in <cit.>. However here we do not split a complete square tile in two halves of different color.]
In terms of super-Young diagrams it is more convenient to introduce a different system of coordinates on the diagram plane.
We assume that a tile (or a half-tile) has coordinates (i,j) where i is the number of tile row and j is the number of tile column, and the tile in the top left corner has coordinates (0,0).
Each node of type (+) corresponds to a tile or a half-tile, whereas a node of type (-) corresponds to a complete tile.
Corresponding coordinate transformations are given by the following relations:
(i,j)_+=(x_a-y_a/2,x_a+y_a/2), (i,j)_-=(x_a-y_a-1/2,x_a+y_a-1/2) .
Accordingly we introduce weights for the tiles in a diagram:
ϵ_1=h_1-h_2, ϵ_2 =h_1+h_2 .
In these terms we could redefine our functions (<ref>) and (<ref>) in terms of diagrams:
n^+_λ=∑_tiles and half-tiles of λ 1, n^-_λ=∑_complete tiles of λ 1, w^+_λ=∑_tiles and half-tiles (i,j)∈λ(ϵ_1 i+ϵ_2 j), w^-_λ=ϵ_1+ϵ_2/2 n_λ^- +∑_complete tiles (i,j)∈λ(ϵ_1 i+ϵ_2 j) .
Finally, we would like to introduce a measure on super-Young diagrams.
Let us call the “leg” leg_λ(□) of a tile □ in a diagram λ the string of all tiles (both complete and incomplete) below □.
Similarly, by the “arm” arm_λ(□) we mean all the tiles located to the right of □ (see fig. <ref>).
We would like to distinguish even and odd arms/legs.
We say that a leg (an arm) is odd if the last tile in the string is a triangular half-tile, and even if the last tile is a complete tile.
For example, in fig.<ref> the leg of the highlighted red tile is odd, whereas its arm is even.
By |·| we denote the number of tiles in the leg (arm).
For a tile □ in a super-Young diagram λ we introduce a hook potential υ_λ(□) depending on the parities of leg_λ(□) and arm_λ(□) (an empty leg or arm counts as even):
υ_λ(□)= (-ϵ_1| leg_λ(□)|+ϵ_2| arm_λ(□)|-ϵ_1)·(ϵ_1| leg_λ(□)|-ϵ_2| arm_λ(□)|-ϵ_2) for leg even, arm even;
(-ϵ_1| leg_λ(□)|+ϵ_2| arm_λ(□)|)·(ϵ_1| leg_λ(□)|-ϵ_2| arm_λ(□)|-ϵ_2) for leg odd, arm even;
(-ϵ_1| leg_λ(□)|+ϵ_2| arm_λ(□)|-ϵ_1)·(ϵ_1| leg_λ(□)|-ϵ_2| arm_λ(□)|) for leg even, arm odd;
1 for leg odd, arm odd.
We introduce a hook measure (cf. <cit.>) on super-partitions according to the following:
_λ:=∏_∈λυ_λ() ,
where the product runs over only complete tiles.
The geometric meaning of this measure is that it coincides with the Euler class of the tangent space to the corresponding fixed point on the quiver variety (<ref>).
Also it defines a natural norm on semi-Fock module vectors (see (<ref>) and (<ref>)).
§ “BOSONISATION” OF SEMI-FOCK REPRESENTATION
§.§ Supertimes in semi-Fock representation
Using (<ref>) we define the following set of super-commuting operators θ_k, p_k, k∈_≥ 1 from the Borel positive part of (_1|1):
θ_1=e_0^+ ,
p_1={e_0^-,e_0^+} ,
θ_k+1=1/k[e_1^+,{e_0^-,θ_k}] ,
p_k+1=1/k{e_0^-,[e_1^+,p_k]} .
These operators super-commute (they anti-commute when both are fermionic and commute otherwise), exactly as ordinary power sums p_k and Grassmann variables θ_k do.
It is natural to identify them with “super-times” – analogs of the time variables in the case of 𝖸(𝔤𝔩_1) and Jack/Schur polynomials – and to consider the polynomial ring
ℂ[p_1,p_2,…,θ_1,θ_2,…] .
Apparently, monomials in this ring are in one-to-one correspondence with super-partitions: we simply identify an element λ_i of λ (see (<ref>)) with either p_k or θ_k in a monomial product according to the following rule:
λ_i↔{[ p_λ_i, λ_i ;; θ_λ_i+1/2, . ]. .
One could introduce additive bosonic and fermionic degrees for time super-variables:
deg_b p_k=2k, deg_b θ_k=2k-1, deg_f p_k=0, deg_f θ_k=1 .
Then generating functions for monomial degrees and numbers of molten crystals/super-Young diagrams agree:
∑_∈q^ deg_bT^ deg_f=∏_k=1^∞1+T q^2k-1/1-q^2k
= (<ref>)|_q_1→ qT, q_2→ q/T=
= 1 + Tq+q^2+2Tq^3+(2+T^2)q^4 + 4Tq^5+(3+2T^2)q^6+7Tq^7+(5+5T^2)q^8+(12T+T^3)q^9 + …
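As an independent cross-check (ours, not part of the paper), one can expand this product numerically and compare with a brute-force enumeration of super-partitions; the Python sketch below reproduces the coefficients listed above up to q^9.

def char_coeffs(max_q):
    """Coefficients {(q_power, T_power): coeff} of prod_{k>=1} (1+T q^{2k-1})/(1-q^{2k}),
    truncated at q^max_q."""
    series = {(0, 0): 1}
    k = 1
    while 2 * k - 1 <= max_q:
        updated = dict(series)                      # multiply by (1 + T q^{2k-1})
        for (dq, dt), c in series.items():
            if dq + 2 * k - 1 <= max_q:
                key = (dq + 2 * k - 1, dt + 1)
                updated[key] = updated.get(key, 0) + c
        series = updated
        updated = dict(series)                      # multiply by 1/(1-q^{2k}) = sum_m q^{2km}
        for (dq, dt), c in series.items():
            shift = 2 * k
            while dq + shift <= max_q:
                key = (dq + shift, dt)
                updated[key] = updated.get(key, 0) + c
                shift += 2 * k
        series = updated
        k += 1
    return series

def super_partitions(total2, max_part2=None):
    """Super-partitions of u = total2/2; parts are stored doubled, so odd entries are
    the half-integer parts, which must all be distinct (strictness condition)."""
    if max_part2 is None:
        max_part2 = total2
    if total2 == 0:
        return [()]
    result = []
    for part in range(min(total2, max_part2), 0, -1):
        next_max = part - 1 if part % 2 == 1 else part   # an odd (half-integer) part may not repeat
        for rest in super_partitions(total2 - part, next_max):
            result.append((part,) + rest)
    return result

MAX_Q = 9
coeffs = char_coeffs(MAX_Q)
for total2 in range(MAX_Q + 1):   # total2 = 2u = bosonic degree
    counts = {}
    for lam in super_partitions(total2):
        f = sum(1 for p in lam if p % 2 == 1)            # number of half-integer parts
        counts[f] = counts.get(f, 0) + 1
    predicted = {dt: c for (dq, dt), c in coeffs.items() if dq == total2}
    assert counts == predicted, (total2, counts, predicted)
print("character expansion matches super-partition counts up to q^%d" % MAX_Q)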
The agreement between these generating functions suggests that in this ring one could choose a distinguished basis, such that the vectors of this basis correspond to molten crystals/super-Young diagrams under (<ref>).
In what follows we argue that such basis polynomials exist, and we call them super-Schur polynomials _λ, by analogy with the Jack/Schur polynomials in the case of 𝖸(𝔤𝔩_1).
The agreement between these bases may be expressed by the following relation:
_λ(p_1,p_2,…,θ_1,θ_2,…)|∅⟩ =|λ⟩
Similarly we could have constructed conjugate supertimes times θ_-k, p_-k, k∈_≥ 1 from the Borel negative part of (_1|1):
θ_-1=f_0^+ ,
p_-1={f_0^-,f_0^+} ,
θ_-k-1=1/k[f_1^+,{f_0^-,θ_-k}]
p_-k-1=1/k{f_0^-,[f_1^+, p_-k]} .
Together these Borel positive and negative parts form a super-Heisenberg algebra:
{θ_k,θ_m}=δ_k+m,0(-1)^k-1(ϵ_1ϵ_2)^2|k|-2· ,
[θ_k,p_m]=0 ,
[p_k,p_m]=δ_k+m,0(-1)^k-1k(ϵ_1ϵ_2)^2|k|-1· .
§.§ Super-Cut-and-join operators and super-Jack/Schur polynomials
The vectors of the semi-Fock module are eigenvectors in the basis of Cartan operators ψ_k^±.
Using relations (<ref>) it is easy to derive that generating functions ψ_k^± have the following expansions:
ψ^+(z)=1/z-ϵ_1ϵ_2/z^3n̂^–2ϵ_1ϵ_2/z^4ŵ^-+O(1/z^5) ,
ψ^-(z)=-z-ϵ_1+ϵ_2/2-ϵ_1ϵ_2/zn̂^+-2ϵ_1ϵ_2/z^2(ŵ^++ϵ_1+ϵ_2/4n̂^+)+O(1/z^3) ,
where operators n̂^± and ŵ^± are commuting operators with natural eigenvalues in the super-partition basis given by (<ref>):
n̂^±|λ⟩=n^±_λ|λ⟩, ŵ^±|λ⟩=w^±_λ|λ⟩ .
Using the identification (<ref>), we may ask whether these operators have a differential representation in the polynomial ring introduced above.
The operators n̂^± count the numbers of boxes and half-boxes, which is equivalent to counting degrees of the corresponding polynomials.
Therefore we call these operators grading operators.
Naturally, they take the form of dilatation operators x d/dx:
n̂^+=∑_a=1^∞a p_a/ p_a+∑_a=1^∞a θ_a/θ_a ,
n̂^-=∑_a=1^∞a p_a/ p_a+∑_a=1^∞(a-1) θ_a/θ_a .
Operators ŵ^± have a more intricate form.
The closest analog to these operators in the case of (_1) is a cut-and-join operator <cit.>.
So we think of ŵ^± as super-cut-and-join operators:
ŵ^+=1/2∑_a,b[-ϵ_1ϵ_2(a+b)p_ap_b/ p_a+b+ab p_a+b^2/ p_a p_b]+
+∑_a,b[-ϵ_1ϵ_2 b p_aθ_b/θ_a+b+ab θ_a+b^2/ p_aθ_b]+∑_a ϵ_1+ϵ_2/2a(a-1)(p_a/ p_a+θ_a/θ_a) ,
ŵ^-=1/2∑_a,b[-ϵ_1ϵ_2(a+b)p_ap_b/ p_a+b+ab p_a+b^2/ p_a p_b]+
+∑_a,b[-ϵ_1ϵ_2 (b-1) p_aθ_b/θ_a+b+a(b-1) θ_a+b^2/ p_aθ_b]+∑_a ϵ_1+ϵ_2/2(a^2p_a/ p_a+(a-1)^2θ_a/θ_a) .
We define the super-Schur polynomials _λ as eigenfunctions of these operators with the respective eigenvalues:
n̂^+_λ(p_1,p_2,…,θ_1,θ_2,…)=n^+_λ_λ(p_1,p_2,…,θ_1,θ_2,…) ,
n̂^-_λ(p_1,p_2,…,θ_1,θ_2,…)=n^-_λ_λ(p_1,p_2,…,θ_1,θ_2,…) ,
ŵ^+_λ(p_1,p_2,…,θ_1,θ_2,…)=w^+_λ_λ(p_1,p_2,…,θ_1,θ_2,…) ,
ŵ^-_λ(p_1,p_2,…,θ_1,θ_2,…)=w^-_λ_λ(p_1,p_2,…,θ_1,θ_2,…) .
There are no degeneracies; therefore these conditions are enough to determine
all the polynomials from the known eigenvalues (<ref>).
We present some explicit expressions for them in app.<ref>.
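As a minimal worked check (ours), consider the level-1/2 super-partition λ={1/2} with _λ=θ_1 from app.<ref>. Acting with the grading operators (<ref>) gives n̂^+θ_1=θ_1 and n̂^-θ_1=0, matching n^+_λ=1 and n^-_λ=0. In ŵ^± every term acting on θ_1 either contains a derivative with respect to some p_a, produces θ_a+b with a+b≥ 2, or carries the vanishing coefficient a(a-1) (respectively (a-1)^2) at a=1, so ŵ^±θ_1=0, in agreement with the eigenvalues w^±_λ=0 listed in app.<ref>.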
These polynomials also satisfy a generalized Pieri rule for the addition and subtraction of boxes
to the diagram, and they form a closed (generalized) Littlewood-Richardson algebra:
their products can be expanded in linear combinations of super-Schur polynomials,
_λ·_λ'=∑_λ” N_λλ'^λ”_λ” ,
with |λ”|=|λ|+|λ'|.
§.§ Free field representation of semi-Fock modules
To complete our discussion of super-times we would like to present a map inverse to (<ref>) and (<ref>).
Using expansion (<ref>) for the semi-Fock representation we derive that some modes of operators ψ_k^± are simply central elements:
ψ_0^+=1, ψ_-2^-=-1, ψ_-1^-=-ϵ_1+ϵ_2/2 .
Then using algebra relations (<ref>) we derive the following relation between modes:
[ψ^±_1± 1,e_k^∓]=-ϵ_1ϵ_2e_k^∓, [ψ^±_1± 1,f_k^∓]=ϵ_1ϵ_2f_k^∓ ,
[ψ^±_2± 1,e_k^∓]=-2ϵ_1ϵ_2 e_k+1^∓-(1∓ 1)ϵ_1ϵ_2ϵ_1+ϵ_2/4e_k^∓ ,
[ψ^±_2± 1,f_k^∓]=2ϵ_1ϵ_2 f_k+1^∓+(1∓ 1)ϵ_1ϵ_2ϵ_1+ϵ_2/4f_k^∓ .
Then again translating these expressions with (<ref>) one observes that cut-and-join operators ŵ^± shift mode numbers of e_k^±, f_k^±:
e^±_k+1=[ŵ^±,e_k^±], f^±_k+1=-[ŵ^±,f_k^±] .
Thus, eventually, to reconstruct the whole 𝖸(𝔤𝔩_1|1) algebra we only need expressions for ŵ^±, e_0^±, f_0^±.
All the remaining generators may be reconstructed using (<ref>) and the following algebra relation:
ψ^±_k+m=-{e_k^±,f_m^±}
Expressions for the zero modes of the raising/lowering operators in terms of super-times are easily derived from (<ref>) and (<ref>):
e_0^+ =θ_1, f_0^+ =/θ_1,
e_0^- =∑_kp_k/θ_k,
f_0^- =ϵ_1ϵ_2∑_k k θ_k/ p_k .
§.§ Cauchy relation
In this subsection we would like to confirm that super-Schur polynomials satisfy the following generalization of the Cauchy formula <cit.> for two sets of super-times (p_i,θ_i) and (q_j,ξ_j):
exp[∑_k=1^∞(p_k q_k/k+θ_kξ_k)]=∑_λ(-ϵ_1ϵ_2)^n_λ^-/_λ_λ(p_i,θ_i)_λ(q_j,ξ_j) ,
where the summation runs over all super-Young diagrams, n_λ^- is the number (<ref>) of complete square boxes in λ, and, finally, _λ= e_λ is the geometric hook measure on super-partitions (see (<ref>) and (<ref>)).
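As a quick consistency check (ours) at level 1/2: the expansion of the exponential on the left-hand side contains the term θ_1ξ_1 with coefficient 1, while on the right-hand side the only contributing diagram is the single half-tile λ={1/2}, for which n_λ^-=0, the hook measure equals 1 and _λ=θ_1 (see app.<ref>), giving exactly θ_1ξ_1.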
§ CONCLUSION AND FUTURE DIRECTIONS
In this paper we provided a detailed description of the semi-Fock representation
of the affine super Yangian (_1|1).
This looks like a small step forward from (_1),
which requires generalization in two complementary directions:
to arbitrary Yangians and to arbitrary representations.
However, we consider this step quite important,
because it helps to emphasize non-conventional aspects of the story
and supports the possibility of a more “physical” approach to the problem.
The conventional approach to affine Yangian representations is to note that
they are associated with one or another substitute for Young diagrams,
and generators of the algebra add or subtract boxes.
The “simple-root” generators e_α and f_α act by
adding and subtracting single boxes.
The “Cartan” generators ψ_α do not change the diagrams, in this sense
diagrams can be considered as their common “eigenfunctions”.
In this approach the only two essential questions concern the choice
of diagram system and the choice of coefficients in the action of generators.
The latter are usually described in terms of some “charge” function with poles
and zeroes, which are adjusted to the diagram system so that they allow
adding/subtracting only at the right places
(so that the diagram of a given type is deformed within the system).
There are still various unresolved problems with this approach,
including building the theory for solid (4d) partitions.
However, the approach is well-established and followed by the majority
of investigators in the field.
Unfortunately, it stops far from the needs of mathematical physics,
where we need much more:
a well-controlled set of special functions in convenient coordinates,
which can be used in description of correlators and partition functions
of the physically reasonable models.
This means that we need to associate with the diagrams concrete states,
with the wave-functions which are eigenfunctions of ψ_α, e_α and f_α acting as differential operators on them
(which become difference operators in the further lifting from Yangians
to DIM algebras).
These wave-functions should depend on appropriate variables and
are usually named time-dependent Schur functions.
In various examples the names can be different: Jacks, Halls, Macdonalds,
Shiraishi, Q-Schurs, 3d Shurs etc – but we prefer and use Schurs as a common name.
The usual sequence of notions in this approach includes
* Algebra → Representation → Diagram system Λ → Time variables;
* Explicit choice of the main generators, like Ê∼ e_0^± and F̂∼ f_0^±, the grading n̂^± and the cut-and-join Ŵ∼ŵ^±;
* Schur functions S_λ∈Λ and the Cauchy identity (to normalize them);
* Pieri rules (for adding and subtracting boxes);
* Littlewood-Richardson multiplication algebra S_λ S_λ' = N_λλ'^λ” S_λ”;
* Matrix model, defined by the W-representation <cit.>, like Z:=e^Ŵ· 1 = ⟨ 1 ⟩;
* its superintegrability property <cit.> ⟨ S_λ⟩∼ S_λ;
* sets (rays) of commuting Hamiltonians, made from iterated Ê and F̂ <cit.>;
* W_1+∞-algebras and VOAs <cit.>, stable envelopes <cit.>, …;
* spectral R-matrix.
While very well established for the ordinary Young diagrams
(i.e. for Fock representation of (_1)),
this program makes only its first steps in various other directions:
* to other diagram systems, like strict and odd partitions for Q-Schurs <cit.>,
* to the quantized (finite-difference) case, i.e. the lifting to DIM <cit.>,
* to plane partitions (3d Schurs) for triangular/true MacMahon representations of 𝖸(𝔤𝔩_1) <cit.>,
* …
In this sense any new non-trivial example is a big progress.
From this point of view our description of semi-Fock representations for (_1|1)
and their association with a new non-trivial generalization of Young diagrams seem important.
Even in this case we actually did only the first half of the job in (<ref>),
the second part remains for the future work.
Especially close to this particular story is the theory of Q-functions,
where some other super-Jacks also appeared <cit.>,
while the possibility of super-field description <cit.> remained under-investigated.
Another amusing parallel is with the theory of Nekrasov functions,
where parameters ϵ_1 and ϵ_2 also appear –
moreover, their hidden raison d'être (an open secret) is the same:
they parameterize the toric action on CY3.
These are also questions for further investigation.
We should also mention another far-reaching aim of our journey, lying beyond the scope of this note: a modern look at integrability as a property of a topological QFT moduli space.
This point of view allows one to interpret R-matrices intertwining tensor powers of representations as holonomies of a Berry connection on some TQFT moduli space <cit.>.
This story has a continuation in a dominion of enumerative and algebraic geometry of quiver varieties in a form of stable envelopes <cit.>.
Physically <cit.> a stable envelope is a transition matrix between various choices of brane boundary conditions for a string theory inhabiting a Calabi-Yau manifold whose crepant resolution defines the quiver in question.
In this context a family of toric Calabi-Yau 3-folds represented by generalized conifolds associated with affine Yangians (_m|n) is interesting since it presents quiver varieties beyond the ones of the Nakajima type <cit.>, and a simple resolved conifold – algebra (_1|1)– delivers the simplest example of a non-Nakajima quiver.
This quiver variety has no canonical symplectic structure pairing present in the Nakajima quiver varieties <cit.>, and, therefore, the standard enumerative geometry methods should be improved to capture the construction of stable envelopes for these varieties as well.
We believe that our construction of super-Schur polynomials and, especially, a similarity between languages of symmetric polynomials in this note and works <cit.> on instanton R-matrices will open a new road towards this goal.
On the other hand, in the light of the present note, canonical constructions of the R-matrices for 𝖸(𝔤𝔩_m|n) become no less exciting.
Unfortunately, canonical constructions in the case of 𝖸(𝔤𝔩_m|n) are rather involved in both the purely algebraic <cit.> and the CFT <cit.> approaches.
We hope that the combinatorial relations (hook formulae) for the matrix coefficients of the semi-Fock representation will allow one to improve and simplify these techniques, opening the study of the associated integrable models.
§ ACKNOWLEDGMENTS
This work is supported by the grants of the Foundation for the Advancement of
Theoretical Physics “BASIS” (A.M., N.T.) and by the joint grant 21-51-46010-CT_a (A.M., N.T.).
§ SEMI-FOCK REPRESENTATIONS OF 𝖸(𝔤𝔩_1|1) FROM D-BRANE DYNAMICS
§.§ On smooth quiver varieties and Fock representations
Attempts to construct free field representations for generic representations of affine Yangian algebras encounter certain difficulties <cit.>, and the resulting algorithms lack transparency and become rather involved quite rapidly.
In contrast a free field representation for the Fock representation of (_1) is well-studied and is given in terms of Jack polynomials.[Other free field constructions (see e.g. <cit.>) of (_m|n) are rather involved and require a non-trivial extension making Hamiltonians ψ_k^± non-commutative.]
This motivates us to search for related deformations of Jack polynomial families in a domain of Fock representations of affine Yangians.
However what representation should be promoted to a canonical Fock representation remains unclear, especially when we discuss super-algebras.
We will attempt to find inspiration in the physical picture where the affine Yangian is an algebra of scattering BPS states in a system of D-branes wrapping a toric Calabi-Yau 3-fold <cit.>.
We will not attempt to cover this construction in detail, referring the interested reader to a concise review <cit.> and a more detailed one <cit.>.
For a family of super-algebras (_m|n) the correspondence is given by the following recipe: the affine Dynkin diagram of _m|n is translated to a quiver of the toric CY3 crepant resolution.
Each edge of the Dynkin diagram becomes a pair of counter-directed morphisms, and each even node acquires a self-morphism, whereas an odd node does not.
Furthermore one puts a system of D0-D2-D4-D6 branes on the corresponding toric CY3, and the quiver describes its gauge-matter content in the IR description of this theory.
Corresponding quivers for (_1) and (_1|1) are depicted in fig. <ref>.
A canonical representation of the affine Yangian emerging in this setting is a MacMahon-like representation where vectors of a module are labeled by 3d crystals.
In the example of (_1) corresponding to ^3 these 3d crystals are simply plane partitions.
The Fock module emerges in this algebra when we try to confine D-branes inside a heavy D4-brane wrapping a real 4-cycle inside CY3, in this particular case any of three ^2-planes invariant with respect to the toric action inside ^3.
This deformation <cit.> changes the quiver framing and modifies the superpotential (see fig. <ref>), so that a specific field X_1 becomes selected and the superpotential acquires the following form:
W= Tr X_1 [f(X_2,X_3,…)+IJ] ,
where function f is independent of X_1, and I, J are fields corresponding to the quiver framing.
We should note that in the cases of (_n) this procedure enhances the initial supersymmetry from 8 charges to 16, and corresponding fields playing the role of X_1 become “Lagrange multipliers”, so that a quiver variety cut out by an F-term condition:
_X_1W=f(X_2,X_3,…)+IJ=0 ,
is a Nakajima quiver variety, known to be smooth <cit.>, in contrast to the initial quiver variety, which is singular.
In the case of 𝖸(𝔤𝔩_1) this is the well-known ADHM description of the Hilbert scheme Hilb^*(ℂ^2) <cit.>:
[B_1,B_2]+IJ=0 .
The supersymmetry of the quiver gauge theory corresponding to a superalgebra 𝖸(𝔤𝔩_m|n) cannot be enhanced.
Yet we hope to acquire a smooth quiver variety by repeating the strategy described above.
We will concentrate on the case of (_1|1) corresponding to a conifold xy=zw resolution.
By choosing a 4-cycle we modify the quiver and the superpotential accordingly (see fig. <ref>).
In this case the role of a Lagrange multiplier is played by field A_2, and the F-term condition describes the following quiver variety (see fig. <ref>):
B_2A_1B_1-B_1A_1B_2+IJ=0 .
This procedure results in representations that are slices of the canonical MacMahon representation – the pyramid partition <cit.> representation for 𝖸(𝔤𝔩_1|1).
We have chosen the parameters in such a way that the slice runs over a one-atom layer on the east side of the pyramid.
We describe explicitly the molten crystals for the semi-Fock representation in sec.<ref>.
Let us denote quiver dimensions as d_± respectively and the moduli space of the quiver variety representation confined to surface (<ref>) as ℳ(d⃗).
Crystal λ denotes a fixed point on this space with respect to the equivariant action of the complexified gauge and flavor groups.
We denote the corresponding tangent space as 𝖳_λℳ(d⃗).
A BPS wave function of a D-brane system is a cohomology of the supercharge and corresponds to the Euler class of the tangent space:
e_λ:= Euler(𝖳_λℳ(d⃗)) .
A BPS algebra on the Hilbert space of BPS D-brane states is induced by processes of capturing/emitting an elementary brane of fractional charge, such that the quiver dimension vector is shifted by a unit vector e_+=(1,0) or e_-=(0,1).
Depending on whether the shift is positive (capture) or negative (emission), we distinguish raising E and lowering F operators, analogously to the Borel positive and negative parts of 𝖸(𝔤𝔩_1|1).
Practically these generators are constructed according to the following recipe <cit.>(cf. <cit.>).
Let us denote ℳ=ℳ(d⃗) and ℳ'=ℳ(d⃗+e⃗_±).
Consider a surface ⊂ℳ×ℳ' defined by a condition that there is a homomorphism of quiver representations ζ: ℳ'→ℳ.
Fixed points on are labeled by pairs λ, λ' such that λ'=λ+a contains an additional node a in comparison to λ.
The matrix coefficients of the raising/lowering operators of the BPS algebra are calculated with the help of equivariant integration accordingly:
E_λ(a):= Euler(𝖳_λℳ)/ Euler(𝖳_λ,λ'), F_λ(a):= Euler(𝖳_λ'ℳ')/ Euler(𝖳_λ,λ') .
§.§ Explicit formulae for matrix coefficients and Euler class
We find that the Euler class of the tangent space to a fixed point given by partition λ coincides with the hook measure (<ref>) and could be represented in an integral form (similarly to <cit.>), or as a residue of an integral measure:
e_λ=_λ=( res_λ Mes(x⃗,y⃗)^-1)^-1 ,
where
Mes(x⃗,y⃗) = weight()/ weight()× weight()=
=∏_i=1^d_1∏_j=1^d_2(y_j-x_i-h_1)(x_i-y_j-h_2)(x_i-y_j+h_2)×∏_i=1^d_1x_i×∏_j=1^d_2(-y_j-h_1)/∏_i,i'=1^d_1(x_i-x_i')×∏_j,j'=1^d_2(y_j-y_j')×∏_i=1^d_1∏_j=1^d_2(x_i-y_j-h_1) .
and d_1,2 are quiver dimensions given by n_λ^± respectively.
To calculate the residue in (<ref>) we order nodes of crystal λ according to the length of a path in the crystal connecting the node in consideration with the node located at (0,0).
Then we change variables x_i(a)→ω_a+t_a if node a is of (+)-type, or y_j(a)→ω_a+t_a if a is of (-)-type.
Then we define:
res_λ := … res_t_4=0 res_t_3=0 res_t_2=0 res_t_1=0 .
The Euler class imposes a suitable norm on the semi-Fock representation:
⟨λ,λ'⟩=δ_λ,λ' e_λ ,
so that operators e_k^± and f_k^± are conjugate to each other with respect to this norm.
We derive the following formulae for matrix coefficients of the semi-Fock representation of (_1|1) (cf. <cit.>):
e^±(z)|λ⟩=∑_k=0^∞e_k^±/z^k+1|λ⟩=∑_a∈ Add λ^± E_λ(a)/z-ω_a|λ+a⟩ ,
f^±(z)|λ⟩=∑_k=0^∞f_k^±/z^k+1|λ⟩=∑_a∈ Rem λ^± F_λ-a(a)/z-ω_a|λ-a⟩ ,
ψ^±(z)|λ⟩=∑_kψ^±/z^k+1|λ⟩=ψ_λ^a(z)|λ⟩ ,
where Add λ^± (Rem λ^±) are the sets of nodes of the respective type (+) or (-) in the lattice such that they can be added to (removed from) λ so that the new set of nodes λ± a is again a molten crystal.
Matrix coefficients and eigenvalues of operators are derived using (<ref>) and are given by the following combinatorial expressions (cf. <cit.>):
E_λ(a)=A_a×∏_b∈λη_a̅,b̅(ω_a-ω_b), F_λ(a)=B_a×∏_b∈λξ_a̅,b̅(ω_a-ω_b), ψ^±_λ(z)=ψ^±_∅(z)×∏_b∈λφ_±,b̅(z-ω_b) ,
where c̅ denotes the type of a node c, and the coupling potentials are defined as:[We should warn the reader that bare generators derived using the BPS or Cohomological Hall algebra methods in the super case may not have a definite parity. One should switch signs of some matrix elements in a prescribed order <cit.>. Here we simply switched the signs of ξ_ab(z) from the equivariant calculation.]
η_++(z)=η_--(z)=z ,   ξ_++(z)=ξ_--(z)=1/z ;
η_+-(z)= (z-h_1)/(2z(z+h_1)) for z=± h_2 ,  1/(2z) for z=h_1 ,  (z-h_1)/(z^2-h_2^2) otherwise ;   ξ_+-(z)=z+h_1 ;
η_-+(z)= 1 for z=h_1 ,  1/(z-h_1) otherwise ;   ξ_-+(z)=(z^2-h_2^2)/(z+h_1) .
φ_a̅b̅(z)=-η_a̅b̅(z)/η_b̅a̅(-z)=-ξ_a̅b̅(z)/ξ_b̅a̅(-z)=η_a̅b̅(z)ξ_a̅b̅(z)= 1 for a̅=b̅ ,  (z^2-h_1^2)/(z^2-h_2^2) for (a̅,b̅)=(+-) ,  (z^2-h_2^2)/(z^2-h_1^2) for (a̅,b̅)=(-+) .
and
A_a={[ 2|x_a|+|y_a|, a̅=+a ;; 2h_1ω_a, a̅=+a ;; 1, a̅=- ; ].
B_a={[ 1, a̅=+ ;; -ω_a-h_1, a̅=- ; ].
ψ^+_∅(z) = 1/z, ψ_∅^-(z)=-z-h_1 .
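A quick symbolic sanity check (ours, not part of the paper) that the generic branches of the couplings above indeed satisfy φ_a̅b̅(z)=η_a̅b̅(z)ξ_a̅b̅(z) can be run as follows:

import sympy as sp

z, h1, h2 = sp.symbols('z h1 h2')

eta_pm = (z - h1) / (z**2 - h2**2);  xi_pm = z + h1          # generic branches of eta_{+-}, xi_{+-}
eta_mp = 1 / (z - h1);               xi_mp = (z**2 - h2**2) / (z + h1)

assert sp.simplify(eta_pm * xi_pm - (z**2 - h1**2)/(z**2 - h2**2)) == 0   # phi_{+-}
assert sp.simplify(eta_mp * xi_mp - (z**2 - h2**2)/(z**2 - h1**2)) == 0   # phi_{-+}
print("phi = eta * xi holds on the generic branches")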
§.§ Example of calculations
In this subsection we present an explicit pedagogical calculation routine for matrix element E_λ(a) with:
[Figure: the initial crystal λ consists of two (+)-nodes 1_+, 2_+ and two (−)-nodes 1_-, 2_-, linked in a chain; its super-Young diagram is the super-partition {1,1}, i.e. two complete tiles in one row. The operator E_λ(a) adds a new (+)-node a=3_+, attached to 1_- via the field B_2, producing the crystal λ' whose super-Young diagram is {3/2,1}: two complete tiles, labeled 1 and 2, plus a half-tile below tile 1.]
In this process we add node a marked by the red circle of type (+).
In what follows we construct explicitly a tangent space to fixed point λ', and the tangent space to fixed point λ is constructed by a complete analogy.
We enumerate all the nodes according to (<ref>).
And construct vector spaces of the quiver representation:
V_+'= Span{|1_+⟩,|2_+⟩,|3_+⟩}, V_-'= Span{|1_-⟩,|2_-⟩}
Vacuum fields are calculated according to the following rule: I is a vector pointing towards the root atom, J=0, A_2=0 as the Lagrange multiplier, the rest of the fields acquire non-zero vevs corresponding to the links of the crystal diagram:
A_1'=(
[ 1 0 0; 0 1 0; ]), B_1'=(
[ 0 0; 1 0; 0 0; ]), B_2'=(
[ 0 0; 0 0; 1 0; ]), I'=(
[ 1; 0; 0; ]), J'=(
[ 0 0; ]) .
This representation is a fixed point with respect to the action of the vector field induced by the complexified gauge and flavor symmetries:
𝒱=∑_a→ b∈{ quiver arrows}(Φ_bX_a→ b-X_a→ bΦ_a- weight(a→ b)· X_a→ b)/ X_a→ b .
where expectation values of the complexified gauge fields are defined by node weights according to the following rule:
V= Span{|a⟩,|b⟩,|c⟩,…} ⇒ Φ_V= diag(ω_a,ω_b,ω_c,…) .
In our case we have:
Φ_+= diag(0, h_1+h_2, h_1-h_2), Φ_-= diag(h_1,2h_1+h_2) .
Generically the tangent space is parameterized by infinitesimal field deviations from the vacuum values:
δ A_1'=(
[ α_1;11' α_1;12' α_1;13'; α_1;21' α_1;22' α_1;23'; ]), δ B_1'=(
[ β_1;11' β_1;12'; β_1;21' β_1;22'; β_1;31' β_1;32'; ]) ,
δ B_2'=(
[ β_2;11' β_2;12'; β_2;21' β_2;22'; β_2;31' β_2;32'; ]), δ I'=(
[ ι_1'; ι_2'; ι_3'; ]), δ J'=(
[ υ_1' υ_2'; ])
We would like to parameterize the infinitesimal scale by t and work up to O(t^2) in what follows.
The action of the complexified gauge groups is linearized to the action of the corresponding Lie algebra on the tangent space:
𝔤_a=(g_ij^a)∈𝔤𝔩(d_a,ℂ): δ X_a→ b→δ X_a→ b+𝔤_bX_a→ b-X_a→ b𝔤_a .
For example, for δ A_1 we have:
(
[ α_1;11' α_1;12' α_1;13'; α_1;21' α_1;22' α_1;23'; ])→(
[ α_1;11' α_1;12' α_1;13'; α_1;21' α_1;22' α_1;23'; ])+(
[ g_11^–g_11^+ g_12^–g_12^+ -g_13^+; g_21^–g_21^+ g_22^–g_22^+ -g_2,3^+; ]) .
To derive the gauge-invariant moduli of the quiver representation we simply subtract from the vector space spanned by all the degrees of freedom its vector subspace spanned by the gauge degrees of freedom.
As a result the gauge invariant parameterization reads:
δ A_1'=(
[ 0 0 0; 0 0 0; ]), δ B_1'=(
[ 0 β _1;12'; 0 β _1;22'; 0 β _1;32'; ]), δ B_2'=(
[ β _2;11' β _2;12'; β _2;21' β _2;22'; 0 β _2;32'; ]), δ I'=(
[ 0; 0; 0; ]),δ J'=(
[ υ _1' υ _2'; ])
We project these degrees of freedom on hypersurface (<ref>):
(B_2+t δ B_2)(A_1+t δ A_1)(B_1+t δ B_1)-(B_1+t δ B_1)(A_1+t δ A_1)(B_2+t δ B_2)+(I+t δ I)(J+t δ J)=O(t^2) .
That results in a set of relations:
β _2;12'+υ _1'=0, υ_2'=0, β_2;22'-β _2;11'=0, -β _2;12'=0, β _2;32'=0, β _1;12'=0 .
That further restricts our tangent space 𝖳_λ'ℳ' to the following form:
δ A_1'=(
[ 0 0 0; 0 0 0; ]), δ B_1'=(
[ 0 0; 0 β _1;22'; 0 β _1;32'; ]), δ B_2'=(
[ β _2;22' 0; β _2;21' β _2;22'; 0 0; ]), δ I'=(
[ 0; 0; 0; ]), δ J'=(
[ 0 0; ]) .
The equivariant weights of these degrees of freedom are defined by the 𝒱-action:
w(β_1;22')=-h_1-h_2, w(β_1;32')=-h_1-3h_2, w(β_2;21')=2 h_2, w(β _2;22')=h_2-h_1 .
The resulting Euler class reads:
e_λ'=-2 (h_1-h_2) h_2 (h_1+h_2) (h_1+3 h_2)=
=-ϵ _1 (ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2=(-ϵ_1+ϵ_2)(ϵ_1-2ϵ_2)_υ_λ'(1)×(-ϵ_1)(-ϵ_2)_υ_λ'(2) ,
where we split the last equality into a product of two hook contributions (<ref>), with hook corners located in the correspondingly marked tiles in (<ref>).
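The product of the four equivariant weights listed above can be verified symbolically; the following sketch (ours, not part of the paper) reproduces the quoted Euler class and its hook factorization.

import sympy as sp

h1, h2 = sp.symbols('h1 h2')
e1, e2 = h1 - h2, h1 + h2                                # epsilon_1, epsilon_2 as in the main text

# w(beta'_{1;22}), w(beta'_{1;32}), w(beta'_{2;21}), w(beta'_{2;22}) listed above
weights = [-h1 - h2, -h1 - 3*h2, 2*h2, h2 - h1]
euler = weights[0] * weights[1] * weights[2] * weights[3]

quoted = -2*(h1 - h2)*h2*(h1 + h2)*(h1 + 3*h2)           # the displayed value of e_{lambda'}
hooks = ((-e1 + e2)*(e1 - 2*e2)) * ((-e1)*(-e2))         # upsilon_{lambda'}(1) * upsilon_{lambda'}(2)

assert sp.expand(euler - quoted) == 0
assert sp.expand(euler - hooks) == 0
print("Euler class and hook factorization agree")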
Similarly, for fixed point λ on ℳ we have:
A_1=(
[ 1 0; 0 1; ]), B_1=(
[ 0 0; 1 0; ]), B_2=(
[ 0 0; 0 0; ]), I=(
[ 1; 0; ]), J=(
[ 0 0; ]) ,
δ A_1=(
[ 0 0; 0 0; ]), δ B_1=(
[ 0 β _1;12; 0 β _1;22; ]), δ B_2=(
[ β _2;22 0; β _2;21 β _2;22; ]), δ I=(
[ 0; 0; ]), δ J=(
[ 0 0; ]) ,
w(β _1;12)=-2 h_1-2 h_2, w(β _1;22)=-h_1-h_2, w(β_2;21)=2 h_2, w(β _2;22)=h_2-h_1 .
Finally, the homomorphism of a quiver representation is a singular gauge map acting in quiver nodes so that the following diagram commutes (in our case up to O(t^2)):
[Diagram: vertical maps τ_a: V_a' → V_a and τ_b: V_b' → V_b such that τ_b∘(X_a→ b'+t δ X_a→ b') = (X_a→ b+t δ X_a→ b)∘τ_a for every quiver arrow a→ b.]
In our case we derive:
τ_1=(
[ 1 0 0; 0 1 0; ]), τ_2=(
[ 1 0; 0 1; ]) ,
and acquire an additional constraint on cases when such a homomorphism exists:
β _1;12=0, β_1;22-β _1;22'=0, β _2;21-β_2;21'=0, β _2;22-β _2;22'=0 .
By resolving those constraints in terms of β and β' variables we find the tangent space to the incidence locus:
𝖳_λ,λ'= Span {β _1;22',β _1;32',β _2;21',β _2;22'} .
Thus for the matrix coefficient we have (cf. (<ref>)):
E_λ(a)=2(h_1+h_2)/(h_1+3h_2)=
=1_A(3_+)×(h_1-h_2)_η_++(ω_3_+-ω_1_+)×(-2h_2)_η_++(ω_3_+-ω_2_+)×-h_2-h_1/2(-h_2)(-h_2+h_1)_η_+-(ω_3_+-ω_1_-)×-2h_2-2h_1/(-3h_2-h_1)(-h_2-h_1)_η_+-(ω_3_+-ω_2_-) .
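This matrix coefficient can also be cross-checked symbolically (a sketch of ours; only the branches of η_+- relevant for this example are implemented):

import sympy as sp

h1, h2 = sp.symbols('h1 h2')

# node weights read off from Phi_+ = diag(0, h1+h2, h1-h2) and Phi_- = diag(h1, 2h1+h2)
w1p, w2p, w3p = sp.Integer(0), h1 + h2, h1 - h2
w1m, w2m = h1, 2*h1 + h2

def eta_pp(z):                      # eta_{++}(z) = z
    return z

def eta_pm(z):                      # eta_{+-}(z): special branch at z = +-h2, generic otherwise
    if sp.expand(z - h2) == 0 or sp.expand(z + h2) == 0:
        return (z - h1) / (2*z*(z + h1))
    return (z - h1) / (z**2 - h2**2)

# E_lambda(a) = A(3_+) * prod over nodes b of lambda of eta couplings, with A(3_+) = 1
E = eta_pp(w3p - w1p) * eta_pp(w3p - w2p) * eta_pm(w3p - w1m) * eta_pm(w3p - w2m)
assert sp.simplify(E - 2*(h1 + h2)/(h1 + 3*h2)) == 0

# equivalently, the equivariant definition E = Euler(T_lambda M) / Euler(T_{lambda,lambda'})
euler_lambda    = (-2*h1 - 2*h2)*(-h1 - h2)*(2*h2)*(h2 - h1)   # weights of T_lambda M listed above
euler_incidence = (-h1 - h2)*(-h1 - 3*h2)*(2*h2)*(h2 - h1)     # weights of T_{lambda,lambda'}
assert sp.simplify(euler_lambda/euler_incidence - 2*(h1 + h2)/(h1 + 3*h2)) == 0
print("E_lambda(a) = 2(h1+h2)/(h1+3h2) reproduced")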
§.§ Towards stable envelopes for 𝖸(𝔤𝔩_1|1)
In this subsection we would like to perform a slight elaboration of scheme (<ref>) in applications to integrability.
Integrability is an intriguing subject especially in the light of its relation to quantum field theories and non-perturbative phenomena (see e.g. <cit.>).
Yangian algebras are a natural source of non-trivial solutions to the Yang-Baxter equations –R-matrices.
The resulting R-matrix is a function of 4 indices labeling vectors of representation and a complex spectral parameter.
In the Yangian context, the R-matrix is an intertwining operator for a non-diagonal co-product structure (see <cit.> for details and for the relations between the naive and non-diagonal co-product structures):
R_12Δ_12=Δ_21R_12 .
A difficulty of this construction, when approached from the side of D-brane systems, is that instead of the semi-Fock representations (which correspond to a shifted Yangian and on which we have concentrated in this note), one should use unshifted representations.
And the simplest unshifted representation is the Fock one.
Here we encounter a difficulty in the approach to BPS quiver Yangians as algebras of systems of D-branes – switching a representation modifies quiver framing and superpotential:
[Figure: the framed conifold quiver with two gauge nodes 1 and 2, arrows A_1, A_2 from node 1 to node 2 and B_1, B_2 from node 2 to node 1, and two framing nodes attached via the fields I_1, J_1 and I_2, J_2.]   W= Tr(A_1B_1A_2B_2-A_1B_2A_2B_1+ A_2I_1J_1+B_2I_2J_2) ,
so that the resulting quiver variety is not smooth anymore.
It is natural to expect that in this case there are no closed combinatorial expressions for the measure (<ref>) or matrix coefficients.
The argument is purely dimensional.
Since all the variables in (<ref>) and weights have a dimension of the mass, a closed combinatorial expression for the Euler class would allow one to derive a fixed expression for the dimension of the tangent space for given quiver dimensions d_±.
However we could observe just in a few examples that the dimension of the tangent space of a singular quiver variety jumps from one fixed point to another even for d_± fixed.
Yet we expect to acquire some closed expressions using the algebraic approach discussed in sec.<ref>.
We will implement this strategy elsewhere.
Another problem related to a canonical construction of the co-product implemented in purely algebraic terms <cit.> and with the help of CFT βγ-system <cit.> is that both require a rather involved construction of higher raising/lowering operators e_α,k/f_α,k depending on a non-simple affine root and an extra mode number.
In the case of CFT system these operators are required to construct a map from the Yangian to the conformal or a W-algebra <cit.>.
In principle, one needs the co-product only for the generating basis:
Δ(e_0^±)=e_0^±⊗ 1+1⊗ e_0^±, Δ(f_0^±)=f_0^±⊗ 1+1⊗ f_0^± ,
Δ(ψ_2^±)=ψ_2^±⊗ 1+ψ_1^±⊗ψ_0^±+ψ_0^±⊗ψ_1^±+1⊗ψ_2^±+^±
the rest of generators can be reconstructed via the co-product homomorphism.
Here ^± are non-trivial corrections to a diagonal co-product expected to have the following form:
^±∼∑(f_α_1· f_α_2·…)⊗(e_β_1· e_β_2·…) .
The expectations of <cit.> suggest that this higher order correction factorizes as ∑_α,kf_α,k⊗ e_α,k; however, our calculation of the correction at the second level,
^±=-2(h_1^2-h_2^2)f_0^∓⊗ e_0^∓-
∓2({f_0^±,f_0^∓}⊗{e_0^±,e_1^∓}+{f_0^±,f_1^∓}⊗{e_0^±,e_0^∓}-
-{f_0^±,f_0^∓}⊗{e_1^∓,e_0^±}-{f_1^(a),f_0^(b)}⊗{e_0^∓,e_0^±})+… .
reveals no apparent factorization.
This might indicate that the case of 𝖸(𝔤𝔩_1|1) has certain standalone hidden complications.
An alternative approach rather popular in the modern literature to a construction of a solution to the Yang-Baxter equation intertwines TQFTs with enumerative geometry and sends us to the realm of stable envelopes.
The easiest way to notice the stable envelope entity is to work with the mirror dual frame of a quiver sigma-model – a Landau-Ginzburg model of vortex disorder operators <cit.>.
In this case if one is aiming for a disk partition function of such a theory a choice of brane boundary conditions is in order, and a natural choice ensuring a convergence of the path integral is a set of Lagrangian submanifolds of Lefschetz thimbles.
The “body” of a Lefschetz thimble is generated from a fixed point associated with some diagram λ∈Λ by a gradient flow with respect to a Morse height function h= Re W, where W is an effective superpotential in the Landau-Ginzburg model.
This structure induces an ordering on diagrams Λ by the value of h in each λ, so that the gradient flow may go from λ to λ' only if λ<λ'.
In this setting a non-perturbative disk partition function is a wave function and has the following expansion:
Ψ_λ=∑_λ'≥λe^-S_ soliton(λ→λ')+…ψ_λ' ,
where ψ_λ is a perturbative Gaussian wave function in the neighborhood of vacuum λ, and the expansion coefficient is given by a Euclidean path integral around a gradient flow soliton trajectory connecting vacua λ and λ'.
A potentially non-trivial form of a Lefschetz thimble that could intersect greater vacua forces us to introduce a transition matrix _λ,λ' between the non-perturbative and perturbative bases, which we call a stable envelope.
If one considers a chain of diagrams λ⃗ embedded in the weight space, the form of the Lefschetz thimble may depend on some moduli of the problem, whereas the perturbative basis remains insensitive to them.
This allows one to define an R-matrix as a holonomy of the non-perturbative bases under a permutation of the moduli and to factorize it as a product of stable envelopes:
R_λ⃗,λ⃗”(12)=_λ⃗,λ⃗'(12)·_λ⃗',λ⃗”(21)^-1 .
The construction of the stable envelopes in physical terms is rather involved; enumerative geometry saves the day by mapping supersymmetric wave functions into elements of supercharge cohomology.
The mathematical literature <cit.> defines a stable envelope as a unique function on fixed points λ satisfying the following rules:
* _λ,λ'=0 for all λ>λ'.
* _λ,λ= e_λ
* A Nakajima quiver variety has a specific symplectic structure. An edge of the Dynkin diagram is resolved in a symplectic pair of counter-oriented morphisms with equivariant weights w and ħ-w for some fixed ħ. Then _λ,λ' is divisible by ħ.
The first two rules are natural physical requirements.
The first one follows from the directedness of the gradient flow discussed above.
The second one is a natural normalization condition for a choice of D-brane boundary conditions in the gauged linear sigma model phase of the theory – the wave functions become simply the Euler cohomology classes.
The last rule restricts the application of this construction to Nakajima quiver varieties only.
Therefore a generalization to our case of non-Nakajima conifold resolution associated with (_1|1) is especially intriguing.
A phenomenological look at the results known in the literature for (_n) <cit.> reveals a direct resemblance between the stable envelope expression and an Euler class measure expression like (<ref>).
If the class is represented as a product of weights, then the stable envelope is a product of weights in certain degrees:
e_λ∼∏_w(λ)w(λ), _λ,λ'∼∏_w(λ')w(λ')^⟨ w(λ),w(λ')⟩ ,
where the pairing ⟨⋆,⋆⟩ takes values 0 and 1 and is a function of the moduli.
We hope that our construction of super-Schur polynomials and hook formulae for the Euler class in the case of semi-Fock representations will allow one to produce similar expressions for Fock representations and to guess a phenomenological expression à la (<ref>), so that a new R-matrix in the form of (<ref>) for (_1|1) can be constructed.
§ DATA ON SUPER-PARTITIONS UP TO LEVEL 9/2
ℓ | λ | _λ | _λ | n_λ^+ | n_λ^- | w_λ^+ | w_λ^-
0 ∅ 1 1 0 0 0 0
1/2 [ [scale=0.15]
ı/ȷ in 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 ] 1 1 0 0 0
1 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ p_1 ] ϵ _1 ϵ _2 1 1 0 ϵ _1/2+ϵ _2/2
3/2 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1-θ _2/ϵ _2 ] -ϵ _1 (ϵ _1-ϵ _2) 2 1 ϵ _1 ϵ _1/2+ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1-θ _2/ϵ _1 ] -ϵ _2 (ϵ _2-ϵ _1) 2 1 ϵ _2 ϵ _1/2+ϵ _2/2
2 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ p_1^2-p_2/ϵ _2 ] -2 ϵ _1^2 (ϵ _1-ϵ _2) ϵ _2 2 2 ϵ _1 2 ϵ _1+ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1, 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2/ϵ _1 ϵ _2 ] 1 3 1 ϵ _1+ϵ _2 ϵ _1/2+ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ p_1^2-p_2/ϵ _1 ] -2 ϵ _1 ϵ _2^2 (ϵ _2-ϵ _1) 2 2 ϵ _2 ϵ _1+2 ϵ _2
5/2 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^2-2 θ _2 p_1/ϵ _2-θ _1 p_2/ϵ _2+2 θ _3/ϵ _2^2 ] 2 ϵ _1^2 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) 3 2 3 ϵ _1 2 ϵ _1+ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^2-θ _2 p_1/ϵ _1-θ _1 p_2/ϵ _2+θ _3/ϵ _1 ϵ _2 ] ϵ _1 (ϵ _1-ϵ _2) ϵ _2 (ϵ _2-2 ϵ _1) 3 2 ϵ _1+ϵ _2 2 ϵ _1+ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^2-θ _2 p_1/ϵ _2-θ _1 p_2/ϵ _1+θ _3/ϵ _1 ϵ _2 ] ϵ _1 (ϵ _1-2 ϵ _2) ϵ _2 (ϵ _2-ϵ _1) 3 2 ϵ _1+ϵ _2 ϵ _1+2 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^2-2 θ _2 p_1/ϵ _1-θ _1 p_2/ϵ _1+2 θ _3/ϵ _1^2 ] 2 ϵ _2^2 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) 3 2 3 ϵ _2 ϵ _1+2 ϵ _2
3 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -3 p_2 p_1/ϵ _2+2 p_3/ϵ _2^2+p_1^3 ] 6 ϵ _1^3 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) ϵ _2 3 3 3 ϵ _1 9 ϵ _1/2+3 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2, 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1/ϵ _1 ϵ _2-θ _1 θ _3/ϵ _1 ϵ _2^2 ] -ϵ _1 (ϵ _1-ϵ _2) 4 2 3 ϵ _1+ϵ _2 2 ϵ _1+ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -p_2 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+p_3/ϵ _1 ϵ _2+p_1^3 ] ϵ _1^2 (ϵ _1-2 ϵ _2) ϵ _2^2 (ϵ _2-2 ϵ _1) 3 3 ϵ _1+ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1, 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1/ϵ _1 ϵ _2-θ _1 θ _3/ϵ _1^2 ϵ _2 ] -ϵ _2 (ϵ _2-ϵ _1) 4 2 ϵ _1+3 ϵ _2 ϵ _1+2 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -3 p_2 p_1/ϵ _1+2 p_3/ϵ _1^2+p_1^3 ] 6 ϵ _1 ϵ _2^3 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) 3 3 3 ϵ _2 3 ϵ _1/2+9 ϵ _2/2
7/2 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-3 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-3 θ _2 p_1^2/ϵ _2-3 θ _1 p_2 p_1/ϵ _2+6 θ _3 p_1/ϵ _2^2+2 θ _1 p_3/ϵ _2^2+3 θ _2 p_2/ϵ _2^2-6 θ _4/ϵ _2^3 ] -6 ϵ _1^3 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) (3 ϵ _1-ϵ _2) 4 3 6 ϵ _1 9 ϵ _1/2+3 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-θ _2 p_1^2/ϵ _1-3 θ _1 p_2 p_1/ϵ _2+2 θ _3 p_1/ϵ _1 ϵ _2+2 θ _1 p_3/ϵ _2^2+θ _2 p_2/ϵ _1 ϵ _2-2 θ _4/ϵ _1 ϵ _2^2 ] -2 ϵ _1^2 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) ϵ _2 (ϵ _2-3 ϵ _1) 4 3 3 ϵ _1+ϵ _2 9 ϵ _1/2+3 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-2 θ _2 p_1^2/ϵ _2-θ _1 p_2 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+θ _3 p_1 (2 ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+θ _1 p_3/ϵ _1 ϵ _2+θ _2 p_2/ϵ _1 ϵ _2-2 θ _4/ϵ _1 ϵ _2^2 ] -ϵ _1^2 (2 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2 (ϵ _2-2 ϵ _1) 4 3 3 ϵ _1+ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-θ _2 p_1^2 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2-θ _1 p_2 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+3 θ _3 p_1/ϵ _1 ϵ _2+θ _1 p_3/ϵ _1 ϵ _2+θ _2 p_2 (ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^2 ϵ _2^2-θ _4 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2 ] ϵ _1 (ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2 (ϵ _2-2 ϵ _1) (ϵ _2-ϵ _1) 4 3 2 ϵ _1+2 ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-2 θ _2 p_1^2/ϵ _1-θ _1 p_2 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+θ _3 p_1 (ϵ _1+2 ϵ _2)/ϵ _1^2 ϵ _2+θ _1 p_3/ϵ _1 ϵ _2+θ _2 p_2/ϵ _1 ϵ _2-2 θ _4/ϵ _1^2 ϵ _2 ] -ϵ _1 (ϵ _1-2 ϵ _2) ϵ _2^2 (ϵ _2-ϵ _1) (2 ϵ _2-2 ϵ _1) 4 3 ϵ _1+3 ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-θ _2 p_1^2/ϵ _2-3 θ _1 p_2 p_1/ϵ _1+2 θ _3 p_1/ϵ _1 ϵ _2+2 θ _1 p_3/ϵ _1^2+θ _2 p_2/ϵ _1 ϵ _2-2 θ _4/ϵ _1^2 ϵ _2 ] -2 ϵ _1 (ϵ _1-3 ϵ _2) ϵ _2^2 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) 4 3 ϵ _1+3 ϵ _2 3 ϵ _1/2+9 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0, 3/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 4/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^3-3 θ _2 p_1^2/ϵ _1-3 θ _1 p_2 p_1/ϵ _1+6 θ _3 p_1/ϵ _1^2+2 θ _1 p_3/ϵ _1^2+3 θ _2 p_2/ϵ _1^2-6 θ _4/ϵ _1^3 ] -6 ϵ _2^3 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) (3 ϵ _2-ϵ _1) 4 3 6 ϵ _2 3 ϵ _1/2+9 ϵ _2/2
4 [ [scale=0.15]
ı/ȷ in 0/-4, 0/-3, 0/-2, 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-3, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -6 p_2 p_1^2/ϵ _2+8 p_3 p_1/ϵ _2^2+3 p_2^2/ϵ _2^2-6 p_4/ϵ _2^3+p_1^4 ] -24 ϵ _1^4 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) (3 ϵ _1-ϵ _2) ϵ _2 4 4 6 ϵ _1 8 ϵ _1+2 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-3, 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1^2/ϵ _1 ϵ _2-2 θ _1 θ _3 p_1/ϵ _1 ϵ _2^2-θ _1 θ _2 p_2/ϵ _1 ϵ _2^2+2 θ _1 θ _4/ϵ _1 ϵ _2^3 ] 2 ϵ _1^2 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) 5 3 6 ϵ _1+ϵ _2 9 ϵ _1/2+3 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -p_2 p_1^2 (3 ϵ _1+ϵ _2)/ϵ _1 ϵ _2+2 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+p_2^2/ϵ _1 ϵ _2-2 p_4/ϵ _1 ϵ _2^2+p_1^4 ] -2 ϵ _1^3 (2 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2^2 (ϵ _2-3 ϵ _1) 4 4 3 ϵ _1+ϵ _2 5 ϵ _1+3 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2, 2/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1^2/ϵ _1 ϵ _2-2 θ _1 θ _3 p_1/ϵ _1 ϵ _2^2+θ _1 θ _2 p_2 (ϵ _1-ϵ _2)/ϵ _1^2 ϵ _2^2+θ _2 θ _3 (2 ϵ _1-ϵ _2)/ϵ _1^2 ϵ _2^3+θ _1 θ _4/ϵ _1^2 ϵ _2^2 ] -ϵ _1 (2 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) (ϵ _2-2 ϵ _1) 5 3 4 ϵ _1+2 ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2, 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1^2/ϵ _1 ϵ _2-θ _1 θ _3 p_1 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2+θ _2 θ _3/ϵ _1^2 ϵ _2^2+θ _1 θ _4/ϵ _1^2 ϵ _2^2 ] ϵ _1 (ϵ _1-ϵ _2) ϵ _2 (ϵ _2-ϵ _1) 5 3 3 ϵ _1+3 ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -2 p_2 p_1^2 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+4 p_3 p_1/ϵ _1 ϵ _2-p_4 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2+p_2^2 (ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^2 ϵ _2^2+p_1^4 ] 4 ϵ _1^2 (ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2^2 (ϵ _2-2 ϵ _1) (ϵ _2-ϵ _1) 4 4 2 ϵ _1+2 ϵ _2 4 ϵ _1+4 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/-1, 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1^2/ϵ _1 ϵ _2-2 θ _1 θ _3 p_1/ϵ _1^2 ϵ _2-θ _1 θ _2 p_2 (ϵ _1-ϵ _2)/ϵ _1^2 ϵ _2^2-θ _2 θ _3 (ϵ _1-2 ϵ _2)/ϵ _1^3 ϵ _2^2+θ _1 θ _4/ϵ _1^2 ϵ _2^2 ] -(ϵ _1-2 ϵ _2) ϵ _2 (ϵ _2-ϵ _1) (2 ϵ _2-2 ϵ _1) 5 3 2 ϵ _1+4 ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -p_2 p_1^2 (ϵ _1+3 ϵ _2)/ϵ _1 ϵ _2+2 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2+p_2^2/ϵ _1 ϵ _2-2 p_4/ϵ _1^2 ϵ _2+p_1^4 ] -2 ϵ _1^2 (ϵ _1-3 ϵ _2) ϵ _2^3 (ϵ _2-ϵ _1) (2 ϵ _2-2 ϵ _1) 4 4 ϵ _1+3 ϵ _2 3 ϵ _1+5 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0, 3/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1, 4/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 p_1^2/ϵ _1 ϵ _2-2 θ _1 θ _3 p_1/ϵ _1^2 ϵ _2-θ _1 θ _2 p_2/ϵ _1^2 ϵ _2+2 θ _1 θ _4/ϵ _1^3 ϵ _2 ] 2 ϵ _2^2 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) 5 3 ϵ _1+6 ϵ _2 3 ϵ _1/2+9 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0, 3/-1, 3/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0, 2/0, 3/0, 4/0 (ı,ȷ) – (ı,ȷ-1); ] 0.95[ -6 p_2 p_1^2/ϵ _1+8 p_3 p_1/ϵ _1^2+3 p_2^2/ϵ _1^2-6 p_4/ϵ _1^3+p_1^4 ] -24 ϵ _1 ϵ _2^4 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) (3 ϵ _2-ϵ _1) 4 4 6 ϵ _2 2 ϵ _1+8 ϵ _2
9/2 [ [scale=0.15]
ı/ȷ in 0/-4, 0/-3, 0/-2, 0/-1, 0/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-4, 0/-3, 0/-2, 0/-1, 0/0, 1/-3, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-4 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-6 θ _1 p_2 p_1^2/ϵ _2+8 θ _1 p_3 p_1/ϵ _2^2+3 θ _1 p_2^2/ϵ _2^2-6 θ _1 p_4/ϵ _2^3-8 θ _2 p_3/ϵ _2^3; -4 θ _2 p_1^3/ϵ _2+12 θ _3 p_1^2/ϵ _2^2+12 θ _2 p_2 p_1/ϵ _2^2-24 θ _4 p_1/ϵ _2^3-12 θ _3 p_2/ϵ _2^3+24 θ _5/ϵ _2^4 ] 24 ϵ _1^4 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) (3 ϵ _1-ϵ _2) (4 ϵ _1-ϵ _2) 5 4 10 ϵ _1 8 ϵ _1+2 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-4, 0/-3, 0/-2, 0/-1, 0/0, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-3, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-θ _2 p_1^3/ϵ _1-6 θ _1 p_2 p_1^2/ϵ _2+8 θ _1 p_3 p_1/ϵ _2^2+3 θ _1 p_2^2/ϵ _2^2-6 θ _1 p_4/ϵ _2^3; +3 θ _3 p_1^2/ϵ _1 ϵ _2+3 θ _2 p_2 p_1/ϵ _1 ϵ _2-6 θ _4 p_1/ϵ _1 ϵ _2^2-2 θ _2 p_3/ϵ _1 ϵ _2^2-3 θ _3 p_2/ϵ _1 ϵ _2^2+6 θ _5/ϵ _1 ϵ _2^3 ] 6 ϵ _1^3 (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) (3 ϵ _1-ϵ _2) ϵ _2 (ϵ _2-4 ϵ _1) 5 4 6 ϵ _1+ϵ _2 8 ϵ _1+2 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-3 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-θ _1 p_2 p_1^2 (3 ϵ _1+ϵ _2)/ϵ _1 ϵ _2+2 θ _1 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+θ _1 p_2^2/ϵ _1 ϵ _2-2 θ _1 p_4/ϵ _1 ϵ _2^2-2 θ _2 p_3/ϵ _1 ϵ _2^2; -3 θ _2 p_1^3/ϵ _2+θ _3 p_1^2 (6 ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+θ _2 p_2 p_1 (3 ϵ _1+2 ϵ _2)/ϵ _1 ϵ _2^2-2 θ _4 p_1 (3 ϵ _1+2 ϵ _2)/ϵ _1 ϵ _2^3-3 θ _3 p_2/ϵ _1 ϵ _2^2+6 θ _5/ϵ _1 ϵ _2^3 ] 2 ϵ _1^3 (3 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) (2 ϵ _1-ϵ _2) ϵ _2 (ϵ _2-3 ϵ _1) 5 4 6 ϵ _1+ϵ _2 5 ϵ _1+3 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-θ _2 p_1^3 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2-θ _1 p_2 p_1^2 (3 ϵ _1+ϵ _2)/ϵ _1 ϵ _2+2 θ _1 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+θ _1 p_2^2/ϵ _1 ϵ _2-2 θ _1 p_4/ϵ _1 ϵ _2^2; +4 θ _3 p_1^2/ϵ _1 ϵ _2+θ _2 p_2 p_1 (3 ϵ _1^2+ϵ _2^2)/ϵ _1^2 ϵ _2^2-θ _4 p_1 (5 ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2-θ _2 p_3 (2 ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^2 ϵ _2^3-θ _3 p_2 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2+2 θ _5 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^3 ] -ϵ _1^2 (2 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2)^2 ϵ _2 (ϵ _2-3 ϵ _1) (ϵ _2-2 ϵ _1) 5 4 4 ϵ _1+2 ϵ _2 5 ϵ _1+3 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-3, 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-2 θ _2 p_1^3/ϵ _1-θ _1 p_2 p_1^2 (3 ϵ _1+ϵ _2)/ϵ _1 ϵ _2+2 θ _1 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+θ _1 p_2^2/ϵ _1 ϵ _2-2 θ _1 p_4/ϵ _1 ϵ _2^2; +2 θ _3 p_1^2 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2+4 θ _2 p_2 p_1/ϵ _1 ϵ _2-2 θ _4 p_1 (ϵ _1+2 ϵ _2)/ϵ _1^2 ϵ _2^2-2 θ _2 p_3/ϵ _1 ϵ _2^2-2 θ _3 p_2/ϵ _1^2 ϵ _2+4 θ _5/ϵ _1^2 ϵ _2^2 ] 2 ϵ _1^2 (2 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2^2 (ϵ _2-ϵ _1) (2 ϵ _2-3 ϵ _1) 5 4 3 ϵ _1+3 ϵ _2 5 ϵ _1+3 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-2 θ _1 p_2 p_1^2 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+4 θ _1 p_3 p_1/ϵ _1 ϵ _2-θ _1 p_4 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2+θ _1 p_2^2 (ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^2 ϵ _2^2-2 θ _2 p_3/ϵ _1 ϵ _2^2; -2 θ _2 p_1^3/ϵ _2+2 θ _3 p_1^2 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+2 θ _2 p_2 p_1 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2-6 θ _4 p_1/ϵ _1 ϵ _2^2-2 θ _3 p_2 (ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^2 ϵ _2^3+2 θ _5 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^3 ] -2 ϵ _1^2 (ϵ _1-2 ϵ _2) (2 ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2 (ϵ _2-2 ϵ _1) (ϵ _2-ϵ _1) 5 4 4 ϵ _1+2 ϵ _2 4 ϵ _1+4 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2, 2/-1, 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 θ _2 θ _3/ϵ _1^3 ϵ _2^3 ] 1 6 3 4 ϵ _1+4 ϵ _2 5 ϵ _1/2+5 ϵ _2/2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-2 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-2 θ _2 p_1^3/ϵ _2-θ _1 p_2 p_1^2 (ϵ _1+3 ϵ _2)/ϵ _1 ϵ _2+2 θ _1 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2+θ _1 p_2^2/ϵ _1 ϵ _2-2 θ _1 p_4/ϵ _1^2 ϵ _2; +2 θ _3 p_1^2 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2^2+4 θ _2 p_2 p_1/ϵ _1 ϵ _2-2 θ _4 p_1 (2 ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2-2 θ _2 p_3/ϵ _1^2 ϵ _2-2 θ _3 p_2/ϵ _1 ϵ _2^2+4 θ _5/ϵ _1^2 ϵ _2^2 ] 2 ϵ _1^2 (2 ϵ _1-3 ϵ _2) (ϵ _1-ϵ _2) ϵ _2^2 (ϵ _2-ϵ _1) (2 ϵ _2-2 ϵ _1) 5 4 3 ϵ _1+3 ϵ _2 3 ϵ _1+5 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-2, 1/-1, 1/0, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 3/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-2 θ _2 p_1^3/ϵ _1-2 θ _1 p_2 p_1^2 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2+4 θ _1 p_3 p_1/ϵ _1 ϵ _2-θ _1 p_4 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2+θ _1 p_2^2 (ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^2 ϵ _2^2; +2 θ _3 p_1^2 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2+2 θ _2 p_2 p_1 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2-6 θ _4 p_1/ϵ _1^2 ϵ _2-2 θ _2 p_3/ϵ _1^2 ϵ _2-2 θ _3 p_2 (ϵ _1^2-ϵ _2 ϵ _1+ϵ _2^2)/ϵ _1^3 ϵ _2^2+2 θ _5 (ϵ _1+ϵ _2)/ϵ _1^3 ϵ _2^2 ] -2 ϵ _1 (ϵ _1-2 ϵ _2) (ϵ _1-ϵ _2) ϵ _2^2 (ϵ _2-2 ϵ _1) (ϵ _2-ϵ _1) (2 ϵ _2-2 ϵ _1) 5 4 2 ϵ _1+4 ϵ _2 4 ϵ _1+4 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 2/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-θ _2 p_1^3 (ϵ _1+ϵ _2)/ϵ _1 ϵ _2-θ _1 p_2 p_1^2 (ϵ _1+3 ϵ _2)/ϵ _1 ϵ _2+2 θ _1 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2+θ _1 p_2^2/ϵ _1 ϵ _2-2 θ _1 p_4/ϵ _1^2 ϵ _2; +4 θ _3 p_1^2/ϵ _1 ϵ _2+θ _2 p_2 p_1 (ϵ _1^2+3 ϵ _2^2)/ϵ _1^2 ϵ _2^2-θ _4 p_1 (ϵ _1+5 ϵ _2)/ϵ _1^2 ϵ _2^2-θ _2 p_3 (ϵ _1^2-ϵ _2 ϵ _1+2 ϵ _2^2)/ϵ _1^3 ϵ _2^2-θ _3 p_2 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2^2+2 θ _5 (ϵ _1+ϵ _2)/ϵ _1^3 ϵ _2^2 ] -ϵ _1 (ϵ _1-3 ϵ _2) (ϵ _1-2 ϵ _2) ϵ _2^2 (ϵ _2-ϵ _1)^2 (2 ϵ _2-2 ϵ _1) 5 4 2 ϵ _1+4 ϵ _2 3 ϵ _1+5 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-2, 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0, 3/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/0, 3/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 4/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-3 θ _2 p_1^3/ϵ _1-θ _1 p_2 p_1^2 (ϵ _1+3 ϵ _2)/ϵ _1 ϵ _2+2 θ _1 p_3 p_1 (ϵ _1+ϵ _2)/ϵ _1^2 ϵ _2+θ _1 p_2^2/ϵ _1 ϵ _2-2 θ _1 p_4/ϵ _1^2 ϵ _2; +θ _3 p_1^2 (ϵ _1+6 ϵ _2)/ϵ _1^2 ϵ _2+θ _2 p_2 p_1 (2 ϵ _1+3 ϵ _2)/ϵ _1^2 ϵ _2-2 θ _4 p_1 (2 ϵ _1+3 ϵ _2)/ϵ _1^3 ϵ _2-2 θ _2 p_3/ϵ _1^2 ϵ _2-3 θ _3 p_2/ϵ _1^2 ϵ _2+6 θ _5/ϵ _1^3 ϵ _2 ] 2 ϵ _1 (ϵ _1-3 ϵ _2) ϵ _2^3 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) (3 ϵ _2-2 ϵ _1) 5 4 ϵ _1+6 ϵ _2 3 ϵ _1+5 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0, 3/-1, 3/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/-1, 0/0, 1/0, 2/0, 3/0, 4/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 1/-1 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-θ _2 p_1^3/ϵ _2-6 θ _1 p_2 p_1^2/ϵ _1+8 θ _1 p_3 p_1/ϵ _1^2+3 θ _1 p_2^2/ϵ _1^2-6 θ _1 p_4/ϵ _1^3; +3 θ _3 p_1^2/ϵ _1 ϵ _2+3 θ _2 p_2 p_1/ϵ _1 ϵ _2-6 θ _4 p_1/ϵ _1^2 ϵ _2-2 θ _2 p_3/ϵ _1^2 ϵ _2-3 θ _3 p_2/ϵ _1^2 ϵ _2+6 θ _5/ϵ _1^3 ϵ _2 ] 6 ϵ _1 (ϵ _1-4 ϵ _2) ϵ _2^3 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) (3 ϵ _2-ϵ _1) 5 4 ϵ _1+6 ϵ _2 2 ϵ _1+8 ϵ _2
2-8 [ [scale=0.15]
ı/ȷ in 0/-1, 0/0, 1/-1, 1/0, 2/-1, 2/0, 3/-1, 3/0, 4/0 (ı,ȷ) – (ı+1,ȷ);
ı/ȷ in 0/0, 1/0, 2/0, 3/0, 4/0 (ı,ȷ) – (ı,ȷ-1);
ı/ȷ in 5/0 (ı,ȷ) – (ı-1,ȷ-1); ] 0.95[ θ _1 p_1^4-6 θ _1 p_2 p_1^2/ϵ _1+8 θ _1 p_3 p_1/ϵ _1^2+3 θ _1 p_2^2/ϵ _1^2-6 θ _1 p_4/ϵ _1^3-8 θ _2 p_3/ϵ _1^3; -4 θ _2 p_1^3/ϵ _1+12 θ _3 p_1^2/ϵ _1^2+12 θ _2 p_2 p_1/ϵ _1^2-24 θ _4 p_1/ϵ _1^3-12 θ _3 p_2/ϵ _1^3+24 θ _5/ϵ _1^4 ] 24 ϵ _2^4 (ϵ _2-ϵ _1) (2 ϵ _2-ϵ _1) (3 ϵ _2-ϵ _1) (4 ϵ _2-ϵ _1) 5 4 10 ϵ _2 2 ϵ _1+8 ϵ _2
|
http://arxiv.org/abs/2307.01475v1
|
20230704045649
|
PIMCOMP: A Universal Compilation Framework for Crossbar-based PIM DNN Accelerators
|
[
"Xiaotian Sun",
"Xinyu Wang",
"Wanqian Li",
"Lei Wang",
"Yinhe Han",
"Xiaoming Chen"
] |
cs.AR
|
[
"cs.AR",
"cs.ET"
] |
PIMCOMP: A Universal Compilation Framework for Crossbar-based PIM DNN Accelerators
Xiaotian Sun, Xinyu Wang, Wanqian Li, Lei Wang, Yinhe Han, Xiaoming Chen1
Center for Intelligent Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences
1Corresponding author: chenxiaoming@ict.ac.cn
August 1, 2023
==================================================================================================================================================================================================================================================================================
This paper is published in Design Automation Conference 2023 (DAC'23)
Crossbar-based PIM DNN accelerators can provide massively parallel in-situ operations. A specifically designed compiler is important to achieve high performance for a wide variety of DNN workloads. However, some key compilation issues such as parallelism considerations, weight replication selection, and array mapping methods have not been solved. In this work, we propose PIMCOMP - a universal compilation framework for NVM crossbar-based PIM DNN accelerators. PIMCOMP is built on an abstract PIM accelerator architecture, which is compatible with the widely used Crossbar/IMA/Tile/Chip hierarchy. On this basis, we propose four general compilation stages for crossbar-based PIM accelerators: node partitioning, weight replicating, core mapping, and dataflow scheduling. We design two compilation modes with different inter-layer pipeline granularities to support high-throughput and low-latency application scenarios, respectively. Our experimental results show that PIMCOMP yields improvements of 1.6× and 2.4× in throughput and latency, respectively, relative to PUMA.
NVM, PIM accelerator, deep neural network, compilation framework
§ INTRODUCTION
In recent years, deep neural networks (DNNs) have made breakthroughs in a variety of tasks. Due to the expansion of the parameter scale of DNN models, the industry expects considerable performance improvements in hardware to efficiently run DNN algorithms. Therefore, various DNN accelerators (e.g., <cit.>) have been proposed. However, these CMOS-based devices following the von Neumann architecture are now hitting the memory wall challenge <cit.>, and encounter bottlenecks in storage, bandwidth, and energy consumption.
Processing-in-memory (PIM) is regarded as a promising technology to avoid the memory wall, and has become a popular research direction in the field of DNN accelerators. PIM has a variety of implementations, among which emerging Non-Volatile Memory (NVM) devices such as RRAM, PCM and MRAM have great potential to challenge the dominance of CMOS. Integrating these devices into a 2D cross-point array results in a crossbar array. The 2D crossbar structure formed by NVM devices has attracted growing interest due to its high memory density and parallel in-situ computing properties <cit.>.
There are previous works (e.g., <cit.>) proposing NVM crossbar-based PIM DNN accelerators. However, they mainly focus on the design of specific architectures and lack consideration of the execution details of DNNs. First, existing works rely on manually mapping the weight data to the crossbars, which ignores the impact of weight mapping on the parallelism of the crossbars and has poor scalability. Second, existing designs often determine the replication of the weights intuitively, such as replicating weight data in early layers to keep the execution pipeline balanced. However, this method does not make effective use of resources. A toolchain is needed to elaborate the task mapping, resource allocation, data distribution, dataflow scheduling, etc., to fully exploit the performance of PIM DNN accelerators when running DNN models. Although PUMA <cit.> has proposed a compiler for memristor crossbar-based accelerators, it has not effectively solved the problems mentioned above.
To address these limitations, we propose a universal compilation framework, PIMCOMP, for NVM crossbar-based PIM accelerators. The compilation process is divided into 4 phases: node partitioning, weight replicating, core mapping, and dataflow scheduling. Based on these steps, we design a DNN inference compiler. We make the following contributions.
* We propose a general and representative NVM crossbar based accelerator architecture as a hardware abstraction for studying the actual execution of DNNs;
* We provide two compilation modes: “low latency” and “high throughput”, according to the user's application scenario;
* We propose a genetic algorithm to optimize weight replication and core mapping, with novel fitness function for both compilation modes;
* We design scheduling algorithms to support DNN networks with complex topologies and multiple operators and optimize on-chip memory usage.
§ BACKGROUND AND MOTIVATION
§.§ NVM Crossbar based DNN Accelerators
[Figure: An NVM crossbar with peripheral devices]
Fig. <ref> shows the structure of the NVM crossbar, which is the basic operation unit of PIM DNN accelerators. Due to hardware characteristics, NVM crossbars can perform matrix-vector multiplication (MVM) operations efficiently in an analog manner. The conductance of the crossbar cell at each cross-point can be programmed to store an element of a matrix G, and a set of voltages representing a vector V is applied to the rows. According to Ohm's law, the current at each cross-point is I_ij = G_ijV_j, and according to Kirchhoff's law, the accumulated current that can be read in each column is I_i=∑_j G_ij V_j. In this way, the crossbar can calculate the product of matrix G and vector V in parallel. Since the NVM crossbar operates in the analog domain and the other circuits of the accelerator work in the digital domain, peripheral circuits such as ADC/DAC, Sample-and-Hold (S&H), and Shift-and-Add (S&A) units are required in the operation unit.
To speed up DNN inference, the weight data is mapped into the crossbar, and the input is fed as the voltage input. So the crossbar can complete the MVM operations in DNNs in parallel and previous works (e.g., <cit.>) have used NVM crossbars to accelerate DNN inference.
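The in-situ MVM described above can be sketched with a few lines of linear algebra. The following is an idealized numerical model (no ADC/DAC quantization, device noise, or bit slicing), with the array orientation chosen only to match the column-current formula I_i = ∑_j G_ij V_j; it is not a circuit model.

```python
import numpy as np

def crossbar_mvm(G: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Idealized NVM crossbar: G[i, j] is the programmed conductance seen by
    column i and row j, V[j] is the voltage on row j, and each column reads
    the accumulated current I[i] = sum_j G[i, j] * V[j] in a single step."""
    return G @ V

# Toy example with a hypothetical crossbar of 4 rows and 3 columns.
G = np.random.rand(3, 4)   # programmed cell conductances (one row per column current)
V = np.random.rand(4)      # input voltages applied to the rows
I = crossbar_mvm(G, V)     # all column currents are produced in parallel
```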
§.§ DNN Compilation Framework
To bridge the gap between various Deep Learning (DL) frameworks and different DL hardware platforms, several DNN compilation frameworks have been proposed by industry and academia (e.g., <cit.>).
TVM <cit.> is a popular end-to-end DL optimization stack that provides multi-stage back-end optimization
for different hardware platforms. However, TVM is not suitable for PIM architectures. First, storage units in PIM are also computation units, which makes the storage and computing characteristics of PIM different from traditional hardware. Second, a significant stage in TVM's optimization of computing is the scheduling of nested loops, including unrolling, vectorization, parallelization, and tiling. However, the parallel MVM operation is naturally supported by PIM crossbars, so TVM optimization will have limited effects on NVM crossbar-based accelerators.
In the field of PIM, PUMA <cit.> is the first memristor-based ML inference accelerator that supports an ISA, with a compiler that can convert high-level languages into ISA code. Nonetheless, the heuristic weight replication and core mapping methods adopted by its compiler can hardly guarantee high performance. In addition, the granularity of PUMA's inter-layer pipeline is a whole inference, that is, different layers process data from different inferences, and this processing manner is unacceptable in low-latency scenarios.
§ ABSTRACT ACCELERATOR ARCHITECTURE
We propose an abstract PIM accelerator architecture. On this basis, we propose an execution model to capture the execution characteristics and parallelism of PIM accelerators.
§.§ Hardware Abstraction
Fig. <ref> illustrates the proposed abstract DNN accelerator architecture. At a high level, the accelerator consists of a series of cores connected to a global memory. Each core can be controlled by instructions or state machines. The cores can be interconnected through an NoC or buses. The weights of the neural network are stored in the cores, while the inputs, outputs and intermediate results are stored in the global memory. Different cores perform operations asynchronously. Inter-core synchronization occurs when there is an inter-core data transfer.
The abstract core in Fig. <ref> consists of a control unit, a local memory and two types of operation units: PIM matrix unit (PIMMU) and vector functional unit (VFU). The PIM matrix unit is composed of several PIM crossbars, which are used to perform MVM operations in the DNN, while the VFU performs other operations, including activation function, pooling, and element-wise operations. The local memory works as a scratchpad that stores inputs and outputs of DNN nodes. The PIMMU and VFU can only access data in the local memory. In addition, operations such as padding, concatenation, and split can also be handled using the local memory.
Our proposed abstract architecture is compatible with the Crossbar/IMA/Tile/Chip structure widely adopted in previous work <cit.>. This work focuses on the general optimization ideas for the compilation process of crossbar-based PIM DNN accelerators, and we do not consider the detailed optimization inside the PIM matrix unit. Nonetheless, related optimizations such as mixed-size crossbars <cit.> and low-bit ADCs <cit.> are compatible with this abstract architecture.
§.§ Execution Model
According to our abstract architecture, after mapping the DNN to the cores, each core can get a static operation sequence, which is composed of the basic operations such as MVM, VEC, COMM, and MEM, representing MVM operations by PIMMU, vector operations by VFU, communication between cores, and access to global memory, respectively.
We do not restrict the format of the operation sequence. It can be a series of instructions, or a schedule of basic operators, etc.
For any two MVMs in the operation sequence of the same core, the parallel relationship between them is analyzed as follows.
* If they are for the same crossbar, that is, there is a structural conflict between them, the latter one must wait for the previous one to complete before it can start.
* If the input data of the latter MVM is the output of the former one, that is, there is a data dependency between them, the latter one must also wait for the former MVM to finish before it can start.
For several MVM operations that have no structural conflicts, no data dependencies, and no synchronization blocking at a given moment, the start time interval between two adjacent MVM operations is determined by the on-chip bandwidth of each core.
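These two rules reduce to a simple serialization check. The sketch below uses a hypothetical MVM record with a crossbar id and input/output buffer ids; the field names are our own illustration rather than the accelerator's actual operation format.

```python
from dataclasses import dataclass

@dataclass
class MVM:
    crossbar_id: int   # which PIM crossbar executes this MVM
    in_buf: int        # local-memory buffer read as input
    out_buf: int       # local-memory buffer written as output

def must_serialize(first: MVM, later: MVM) -> bool:
    """Return True if `later` must wait for `first` to finish, following the
    structural-conflict and data-dependency rules above."""
    structural_conflict = later.crossbar_id == first.crossbar_id
    data_dependency = later.in_buf == first.out_buf
    return structural_conflict or data_dependency
```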
§ COMPILATION FRAMEWORK
§.§ The Overview of PIMCOMP
Fig. <ref> illustrates the high-level overview of PIMCOMP. First, PIMCOMP reads the user input and loads the DNN model in ONNX format, which facilitates conversion between different DL frameworks, and after parsing the model obtains the model description, including node information (in this work, node and layer share the same meaning) and the topological relationships. This information is fed to the backend. The entire backend includes 4 stages. Node Partitioning (Section <ref>) describes the rules for dividing the convolutional and fully connected layers according to the crossbar size. Weight Replicating (Section <ref>) determines the replication numbers of different nodes, Core Mapping (Section <ref>) decides the mapping relationship between crossbars and cores, and Dataflow Scheduling (Section <ref>) performs scheduling and optimization according to the user's requirement to generate the control flow or instruction flow.
We provide two compilation modes for users to choose from: High Throughput (HT) and Low Latency (LL), which are suitable for scenarios with continuous input of large batches and intermittent input of a small amount of data, respectively. Their main design difference is the granularity of the inter-layer pipeline. In HT mode, the DNN is processed in a layer-by-layer manner. When the pipeline is filled, different layers process data from different inferences. There is no inter-layer data communication, so the parallelism between layers is high. In LL mode, as long as one layer produces an output, it passes the output to the positions required by subsequent layers. When a layer receives enough data, it can start its operation, so the overall latency is low.
§.§ Node Partitioning
Due to the limited size of a crossbar array, the weight data of convolutional layers and fully connected layers cannot, in general, be completely mapped to the same crossbar, so we need to partition these nodes to fit the crossbars. The node partitioning strategy is shown in Fig. <ref>. First, the convolutional layers and fully connected layers (which can be regarded as special convolutional layers) in the DNN are converted into MVM operations. Specifically, the weights of each convolution kernel are flattened into a column, thus obtaining a weight matrix with height k_w × k_h × C_in and width C_out, where k_w , k_h represent the width and height of the convolution kernel and C_in, C_out represent the number of input channels and output channels, respectively. After that, the entire weight matrix is divided into several Array Groups (AG) according to the size of the crossbar array. The height of each AG is equal to the height of the crossbar array H_xbar, and the width is equal to C_out. Therefore each AG contains ⌈ C_out/W_xbar⌉ crossbar arrays, where W_xbar is the width of the crossbar array. Each AG needs to run H_out× W_out input cycles (input sliding windows), where H_out and W_out are the height and width of the output feature.
It is preferred to map all crossbars of one AG to the same core. Crossbar arrays belonging to the same AG can be driven by the same instruction or control signal; therefore, gathering them in one core can reduce control complexity. More importantly, these crossbar arrays have exactly the same input. If they are in the same core, the input data can be broadcast to these arrays, which avoids repeatedly accessing the input register and alleviates on-chip bandwidth and buffer pressure.
After partitioning the weights, the AGs need to be allocated to the cores. One core may contain AGs of multiple nodes, and AGs of one node may be mapped to different cores. The MVM results obtained by AGs of the same node need to be accumulated to get a complete computation result. If AGs of the same node are mapped to different cores, data accumulation across cores is required.
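The partitioning rule above amounts to a small amount of arithmetic. The sketch below only counts AGs and crossbars for one convolutional node under assumed crossbar dimensions; the function name and example sizes are ours, not part of the framework.

```python
import math

def partition_conv(k_w, k_h, c_in, c_out, h_xbar, w_xbar):
    """Split a conv layer's flattened weight matrix (height k_w*k_h*c_in,
    width c_out) into Array Groups (AGs) of crossbar height h_xbar."""
    weight_height = k_w * k_h * c_in
    num_ags = math.ceil(weight_height / h_xbar)     # AGs stacked along the input dimension
    xbars_per_ag = math.ceil(c_out / w_xbar)        # crossbars tiled along the output dimension
    return num_ags, xbars_per_ag, num_ags * xbars_per_ag

# Example: a 3x3 conv with 128 input and 256 output channels on 128x128 crossbars
print(partition_conv(3, 3, 128, 256, 128, 128))     # -> (9, 2, 18)
```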
§.§ Weight Replicating and Core Mapping
The storage units in PIM are also computation units, so an important way to improve computation parallelism is to replicate the weight data multiple times. Besides, allocating computing tasks among cores also greatly influences the performance of PIM accelerators, taking structural conflicts and data dependencies into account. The above two steps are intertwined, so we employ a modified genetic algorithm to optimize them simultaneously.
§.§.§ Algorithm Design
In the genetic algorithm, each gene represents several AGs of a node, which is encoded as an integer, expressed as node_index × 10000 + AG_num. For example, 1030025 represents 25 AGs of the 103rd node. To ensure that the mapping result is not so scattered that on-chip communication becomes a bottleneck, we set a limit on the number of nodes that each core can hold, max_node_num_in_core. Therefore, the chromosome length of each individual is core_num × max_node_num_in_core, where core_num is user specified. The location of each gene in the chromosome determines the index of the core these AGs are mapped to. This encoding takes both flexibility and operational efficiency into account: if each AG were encoded individually, the chromosome would be too long to process efficiently.
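The integer encoding can be mirrored by two small helpers; the function names below are ours and simply restate the node_index × 10000 + AG_num rule.

```python
def encode_gene(node_index: int, ag_num: int) -> int:
    """Pack a node index and its AG count into a single integer gene."""
    assert 0 <= ag_num < 10000, "the encoding reserves four decimal digits for the AG count"
    return node_index * 10000 + ag_num

def decode_gene(gene: int) -> tuple:
    """Recover (node_index, ag_num) from an integer gene."""
    return gene // 10000, gene % 10000

assert encode_gene(103, 25) == 1030025   # 25 AGs of the 103rd node
assert decode_gene(1030025) == (103, 25)
```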
In the initialization phase, we randomly select the replication number for each node, and randomly map the AGs to the cores. The design of the fitness function is directly related to the optimization effect, and is described in the following subsection. The crossover phase lacks practical significance for this problem, so we skip it. Mutation is an important phase to improve resource utilization and inference performance. In this stage, the algorithm randomly selects individuals to perform one of the following four mutation operations: 1. Randomly select a node, increase its replication number, and randomly map the expanded AGs to cores. 2. Randomly select a node, reduce its replication number, and recover the crossbar arrays occupied by that replicated block. 3. Randomly select a gene and spread its AGs to other cores. 4. Randomly select a gene and merge its AGs into the same node of other cores.
§.§.§ Fitness Function
For HT mode, the overall inference time can directly reflect the performance. Fig. <ref> is an example of how to calculate the estimated inference time of the i-th core in HT mode. A total of 4 nodes are mapped to the i-th core, containing 2, 2, 1, and 3 AGs respectively, and the AGs of each node have 3000, 1000, 500, and 300 input sliding windows respectively. Because there is no data dependency between AGs, each AG starts to execute in turn at interval T_interval under the condition that no structural conflict occurs. Rearrange the information to get the table in Fig. <ref>(b). The (8, 300) in the first column means that the number of AGs in this core is 8 for the first 300 operation cycles. After 300 cycles, the 4-th node is completed, so the number of AGs for the next 200 cycles is 5. Based on this information, the estimated time of the i-th core can be calculated according to the formula in Fig. <ref>(c). f(n) calculates the operation time of one operation cycle when there are n AGs in the core. If n>T_MVM/T_interval, where T_MVM is the time to complete a single MVM operation, then each operation cycle takes f(n)=n × T_interval; otherwise f(n)=T_MVM. In this way, an ideal inference time time_i is calculated for each core, so the fitness function for HT mode is F_HT = max_itime_i.
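The per-core time estimate can be reproduced with a short routine. The sketch below rebuilds the (active AGs, cycles) segments from each node's AG count and window count and applies f(n); the timing constants in the example are hypothetical, and the code is an approximation of the figure's formula rather than the framework's implementation.

```python
def cycle_time(n_ags: int, t_mvm: float, t_interval: float) -> float:
    """Length of one operation cycle when n_ags AGs issue MVMs back to back."""
    return n_ags * t_interval if n_ags > t_mvm / t_interval else t_mvm

def estimate_core_time(ag_windows, t_mvm=100.0, t_interval=20.0) -> float:
    """ag_windows: list of (num_ags, num_windows) for every node mapped to a core.
    Nodes with fewer windows finish earlier, so the number of active AGs shrinks
    over time; the total is accumulated segment by segment."""
    windows = sorted(w for n, w in ag_windows for _ in range(n))  # one entry per AG
    total, done, prev = 0.0, 0, 0
    while done < len(windows):
        seg_end = windows[done]                  # next point where some AGs finish
        active = len(windows) - done             # AGs still running in this segment
        total += (seg_end - prev) * cycle_time(active, t_mvm, t_interval)
        prev = seg_end
        done += sum(1 for w in windows[done:] if w == seg_end)
    return total

# The example from the text: (AGs, windows) = (2,3000), (2,1000), (1,500), (3,300)
# produces the segments (8,300), (5,200), (4,500), (2,2000).
print(estimate_core_time([(2, 3000), (2, 1000), (1, 500), (3, 300)]))
```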
For LL mode, the inter-layer execution sequence is equivalent to “after waiting for the provider node (layer) to generate enough output, the consumer node (layer) starts to execute and generates all outputs without pause”. This waiting percentage W can be calculated for each node. Further, if the uninterrupted execution time of node m is T_m and the replication number of node m is r times that of its consumer node n, it takes T_m × r × (1-W_n) for the consumer node n to complete the calculation without pausing after waiting T_m × W_n.
Therefore the overall time to complete computing these two nodes is T_m × ( W_n + r × (1-W_n)).
The estimated runtime of LL mode can be derived as shown in Fig. <ref>. For convenience, we set f_x = min (R_p(x)/R_x,1), where R_x is the replication number of node x and p(x) gives the provider node index of node x. We let E_x=1-W_x, which represents the percentage of execution of node x. We assume the execution time of the first node without extra replication is T. We iterate over the nodes in topological order, and the final estimated time is used as the fitness function F_LL.
§.§ Dataflow Scheduling
The dataflow scheduling stage will generate a sequence of instructions or control flow according to the pipeline mode selected by the user. We first introduce the dataflow scheduling algorithms in HT mode and LL mode, and then describe the on-chip memory optimization technique.
§.§.§ HT Dataflow
Algorithm <ref> shows the dataflow scheduling for HT mode. It is unrealistic to store all the data of an inference for each node in on-chip local memory, so it is necessary to periodically transfer some input (output) data from (to) the global memory to (from) the local memory (Lines 3 and 9). In the inter-core data accumulation process (Line 7), AGs are required to transfer the data to be accumulated to the core where the first AG of this replicated weight block is located. To improve parallelism, other operations such as POOL, CONCAT, ELTWISE are distributed among several cores (Line 10). DNNs with complex topology are easy to implement in HT mode, with each node reading the corresponding data from global memory.
§.§.§ LL Dataflow
In LL mode, each node computes an output and then immediately passes it to its consumer nodes. When one node receives enough input, it can start executing. The condition for the output (r,c)_i of node i to start computing is that the node has received the last input (r_d,c_d)_i <cit.> that it requires, and (r_d,c_d)_i can be formulated as:
(r_d)_i = min( H_i, K_i + s_i × (r-1) - p_i )  for CONV, POOL;   (r_d)_i = H_i  for FC;   (r_d)_i = r  for CONCAT, ELTWISE,
(c_d)_i = min( W_i, K_i + s_i × (c-1) - p_i )  for CONV, POOL;   (c_d)_i = W_i  for FC;   (c_d)_i = c  for CONCAT, ELTWISE,
where H_i and W_i are the height and width of the output feature of node i, respectively. K_i, s_i, and p_i are kernel size, stride, and padding size of node i, respectively, if node i is convolutional or pooling layer.
According to the formula, the required input and expected output of each replicated block can be obtained. In order to improve computational parallelism and reduce the transmission between cores, the other operations in the DNN are distributed across several cores according to the replication number of their predecessor convolutional layer.
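The required-input condition translates directly into code. In the sketch below the output coordinates are 1-indexed and the CONCAT/ELTWISE case simply forwards the output coordinate, which is our reading of the (partly garbled) original equation; the function name and example layer are illustrative.

```python
def last_required_input(r, c, op, H=None, W=None, K=None, s=None, p=None):
    """Last input coordinate (r_d, c_d) that node i must have received before
    it can compute its output element (r, c)."""
    if op in ("CONV", "POOL"):
        r_d = min(H, K + s * (r - 1) - p)
        c_d = min(W, K + s * (c - 1) - p)
    elif op == "FC":
        r_d, c_d = H, W        # a fully connected layer needs the entire input
    else:                       # CONCAT, ELTWISE: element-wise, same coordinate
        r_d, c_d = r, c
    return r_d, c_d

# Example: 3x3 conv, stride 1, padding 1, on a 32x32 input feature map
print(last_required_input(1, 1, "CONV", H=32, W=32, K=3, s=1, p=1))  # -> (2, 2)
```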
§.§.§ On-Chip Memory Reuse
Since the capacity of on-chip local memory is limited and global memory access is expensive, efficient utilization of on-chip local memory is important to reduce the frequency of accessing global memory, improve execution efficiency, and reduce power consumption. Fig. <ref> illustrates an example of memory optimization. In the naive method, we allocate a new memory block for each operation. Most memory blocks are accessed once and never used again. So we use ADD-reuse (Fig. <ref>(b)) to reuse the memory for data accumulation. But allocating memory blocks for each AG still causes a lot of waste. Therefore, we propose AG-reuse on top of ADD-reuse (Fig. <ref>(c)), which fully reuses the on-chip storage of AGs.
§ EVALUATION
§.§ Experiment Setup
§.§.§ Hardware Characteristics
In order to evaluate the performance of PIMCOMP and facilitate a fair comparison, we adopt the PUMA architecture to instantiate our design. The PIM device technology is ReRAM, and an NoC is selected as the core interconnect. Our evaluation adopts the same parameters as PUMA, including the area and power information of ReRAM crossbars, VFUs, and control units. The ReRAM cell precision is 2-bit, and inputs, outputs and weights are 16-bit fixed-point numbers. Memory modules and routers are modeled by CACTI <cit.> and Orion 3.0 <cit.>, respectively, to obtain energy and area estimates. The detailed configurations of the hardware are summarized in Table <ref>.
§.§.§ Benchmarks and Baselines
We use both a computationally intensive network (vgg16) and topologically complex networks (resnet18, squeezenet, googlenet and inception-v3) as benchmarks. We build a cycle-accurate simulator that accepts the operation stream compiled by PIMCOMP; it simulates the structural conflicts and data dependencies of MVM operations, the usage of on-chip local memory, and the synchronization overhead of inter-core communication, and reports energy consumption and area overhead.
As a comparison, we faithfully implement the PUMA dataflow under our framework; we call it a PUMA-like dataflow. According to <cit.>, the purpose of node replication is to balance the pipeline, and a heuristic method is adopted for core mapping. Since PUMA only supports a dataflow with a pipeline granularity of one inference (HT mode), we implement the LL mode for PUMA.
§.§ Experimental Result
§.§.§ Throughput and Latency
Fig. <ref> shows the throughput and latency results of PIMCOMP under different degrees of parallelism. The degree of parallelism here refers to how many AGs are allowed to compute at the same time, limited by the user-given on-chip bandwidth. We can observe that as the parallelism increases, the improvement of PIMCOMP gradually decreases, because the source of optimization is the gap between the actual performance and the ideal performance of the hardware. Still, PIMCOMP gains 1.6× and 2.4× improvements in throughput and latency, respectively, relative to PUMA.
In HT mode, the optimization effect on googlenet and squeezenet is limited. The main reason is that the MVM computation pressure of these networks is light, so the time to access global memory and to process vector and memory operations becomes dominant. For computationally intensive tasks such as vgg16, PIMCOMP can achieve a better optimization effect. In LL mode, the optimization effect of PIMCOMP is more significant because the node replication method adopted by PUMA is not efficient enough.
§.§.§ Energy
Fig. <ref> shows the energy evaluation results with a parallelism degree of 20. The computational load of the same network is relatively fixed, so the dynamic energy of PUMA and PIMCOMP is close. The main difference between the two compilation results is the static energy consumption. In HT mode, PUMA allocates computation unevenly, causing some cores to run for a long time while others finish early. PIMCOMP ensures the computing tasks are evenly distributed. This results in a slight increase in static energy due to having more cores active, although PIMCOMP has a shorter overall runtime. In LL mode, there are data dependencies between cores, and the active time of each core is related to the overall inference time. PIMCOMP is able to reduce static energy by 58.3% by reducing the overall runtime.
§.§.§ Memory Usage
Fig. <ref> shows the effect of the different memory reuse optimizations. In the evaluation of HT mode, every core transfers results back to global memory and loads new input data into local memory after each AG performs 2 MVM operations. In HT mode, if AG-reuse is adopted, global memory accesses can be reduced by an average of 47.8% compared with the naive method. This results in faster inference and lower memory access energy consumption. In LL mode, the average local memory usage can be kept within 64 kB using the AG-reuse optimization, which is in line with our architectural design.
§.§.§ Compiling Time
Table <ref> shows the running time of PIMCOMP. The population size is 100 and the maximum iteration number is 200 in the genetic algorithm. We can observe that weight replicating and core mapping take longer in HT mode, while dataflow scheduling is time-consuming in LL mode. The overall compilation time is acceptable.
§ CONCLUSION
Previous research studies on NVM crossbar-based PIM accelerators downplay several practical issues, such as parallelism considerations, weight replication selection, and array mapping methods. In this work, we propose PIMCOMP, which has 4 general optimization stages that form a complete compilation toolchain for PIM accelerators. Evaluations show that our work outperforms a PUMA-like compiler on average in terms of performance and power consumption. PIMCOMP is orthogonal to other studies on PIM accelerators, and researchers can apply PIMCOMP to their accelerators for further improvement.
|
http://arxiv.org/abs/2307.00968v2
|
20230703123926
|
REAL: A Representative Error-Driven Approach for Active Learning
|
[
"Cheng Chen",
"Yong Wang",
"Lizi Liao",
"Yueguo Chen",
"Xiaoyong Du"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
REAL: A Representative Error-Driven Approach for Active Learning
Cheng Chen^10009-0006-6805-5894
Yong Wang^2()0000-0002-0092-0793
Lizi Liao^20000-0002-9973-3305
Yueguo Chen^10000-0002-2239-4472
Xiaoyong Du^10000-0002-5757-9135
^1Renmin University of China ^2Singapore Management University
======================================================================================================================================================================================================================================================
Given a limited labeling budget, active learning (AL) aims to sample the most informative instances from an unlabeled pool to acquire labels for subsequent model training.
To achieve this, AL typically measures the informativeness of unlabeled instances based on uncertainty and diversity.
However, it does not consider erroneous instances with their neighborhood error density,
which have great potential to improve the model performance.
To address this limitation, we propose REAL,
a novel approach to select data instances with
Representative Errors for Active Learning. It identifies minority predictions as pseudo errors within a cluster and allocates an adaptive sampling budget for the cluster based on estimated error density.
Extensive experiments on five text classification datasets demonstrate that REAL consistently outperforms all best-performing baselines regarding accuracy and F1-macro scores across a wide range of hyperparameter settings.
Our analysis also shows that REAL selects the most representative pseudo errors that match the distribution of ground-truth errors along the decision boundary.
Our code is publicly available at <https://github.com/withchencheng/ECML_PKDD_23_Real>.
()The corresponding author.
§ INTRODUCTION
Labeling data for machine learning is costly, and the budget on the amount of labels we can gather is often limited.
Therefore, it is crucial to make the training process of machine learning models more label-efficient,
especially for applications where labels are expensive to acquire.
Active learning (AL) selects a small number of the most informative instances from an unlabeled pool, aiming to maximize the model performance gain when the selected (and then labeled) instances are used for further training.
Identification of the most informative instances from the unlabeled data pool is critical to the success of AL.
AL techniques can be classified into three groups: uncertainty-based, diversity-based, and hybrid methods.
Uncertainty-based methods select instances whose prediction probability is more evenly distributed over classes <cit.>, instances with a larger expected loss/gradient <cit.>, or those closer to decision boundaries <cit.>.
However, solely relying on instance-level uncertainty metrics may cause redundancy in samples <cit.>.
Hence, diversity-based methods try to mitigate the redundancy problem by selecting a small but diverse set of data instances to represent the whole unlabeled pool <cit.>.
However, they ignore the fact that training on errors is more label-efficient <cit.>.
Hybrid methods try to select instances that are both uncertain and diverse<cit.>.
Our proposed method belongs to the hybrid category.
Our novelty is to seek representativeness for the errors rather than the whole unlabeled pool, by selecting instances with a larger error probability and higher neighborhood error density.
Fig. <ref> shows an illustrative example of our method. Specifically, we first cluster the unlabeled instances by their representations.
The majority prediction in a cluster is expected to be correct even with limited labeled training data <cit.>, owing to the strong representation power of the pretrained models for images <cit.> or texts <cit.>.
Also, it is common for the model to achieve a decent test accuracy after the warm-up training on the initial limited labeled data <cit.>.
Consequently, we treat the majority prediction in a cluster as the pseudo label for all the instances in the cluster.
We call instances in a cluster whose predictions disagree with the cluster pseudo label pseudo errors. More pseudo errors with a lower prediction probability (larger disagreement) on the pseudo label bring a larger sampling budget to their affiliated cluster. In this way, we emphasize dense areas of errors, and thus adaptively select more representative errors.
To the best of our knowledge, REAL is the first approach that samples representative errors to achieve label-efficient active learning.
Taking text classification as an example application, we demonstrate the effectiveness of REAL.
In summary, the major contributions of this paper can be summarized as follows:
* We propose a new sampling algorithm, REAL, that explores selecting representative errors from the unlabeled pool.
* We show REAL consistently beats all the best-performing baselines on five text classification benchmark datasets
in terms of both accuracy and F1-macro scores.
* We empirically investigate the error distribution and find that 1) most errors are distributed along the decision boundary; 2) the distribution of the selections made by REAL aligns well with that of the ground-truth errors.
§ PRELIMINARIES
§.§ Related Work
§.§.§ Uncertainty-based
Uncertainty-based sampling is to sample the most uncertain instances for model training.
Three classical metrics for the uncertainty of model prediction probabilities are: entropy <cit.>, least confidence <cit.>, and smallest margin <cit.>.
Recent research studies take the expected loss <cit.>, expected generalization error reduction <cit.>,
or distance to the decision boundary <cit.> as surrogates for uncertainty.
The method of <cit.> selects contrastive examples that are similar in the feature space of a pre-trained language model (PLM) and
maximally different in the output probabilities.
Unlike this approach, which ignores the correctness of sampled instances, our method aims to mine the not-yet-revealed errors.
OPAL <cit.> computes the expected misclassification loss reduction, but is limited to binary classification using the outdated Parzen window classifier.
§.§.§ Diversity-based
Diversity-based sampling aims to maximize the diversity of sampled instances.
Cluster-Margin <cit.> selects a diverse set of instances with the smallest margin using hierarchical agglomerative clustering.
Sener and Savarese <cit.> proposed a coreset approach to find a representative subset from the unlabeled pool.
Kim et al. <cit.> assessed the density of the unlabeled pool and selected diverse samples mainly from regions of low density.
Meanwhile, generative adversarial learning <cit.> has been applied to AL as a binary classification task: an adversarial classifier is trained to confuse data from the training set with that from the pool.
However, our method cares about the density of errors rather than that of the whole unlabeled pool.
§.§.§ Hybrid
Hybrid methods try to combine uncertainty and diversity sampling.
Such a combination can be achieved by
meta learning <cit.> and reinforcement learning <cit.>,
which automatically learn a sampling strategy in each round instead of using a fixed heuristic.
The methods of <cit.> and <cit.> both compute uncertainty representations of instances and then cluster them.
The former transforms data into gradient embeddings that encode the model confidence and then applies k-MEANS++ <cit.>,
while the latter first utilizes the self-supervision loss of the PLM as the uncertainty representation.
Another method <cit.> uses weighted clustering to find highly uncertain regions.
The clustering is weighted by some uncertainty measure, e.g., entropy or the score of <cit.>.
It deliberately tries to separate the uncertain regions from the confident regions via weighted clustering, with an implicit assumption that the two kinds of regions are separable.
§.§ Problem Definition
We take text classification as an example to illustrate the core idea of our approach.
Given a small labeled set {(x_i, y_i)}_i=1^L (warm-up dataset) for initial model training and a large unlabeled data pool {x_i}_i=1^U,
where x_i is the i-th input instance (e.g., the token sequence for text classification),
y_i ∈{1, … ,Y} is the target label, and L ≪ U,
we want to select and obtain the labels of the most informative instances in the unlabeled pool for training model ℳ,
so that the performance of ℳ can be maximized given a fixed labeling budget B
[Following the convention in the machine learning community <cit.>, we ignore the cognitive difference in labeling different instances studied in the HCI community <cit.>,
and assume the labeling cost is 1 for every instance. For example, if our total labeling budget is B=800 and we have T=8 rounds of AL, then b=100 is the budget per round.].
ℳ is trained iteratively.
Suppose there are T rounds in total, then the budget for each round is b=B/T.
In each round, a sampling function α(𝒟_u, ℳ) selects b samples from 𝒟_u
based on the previously learned model ℳ,
and then moves the b newly labeled samples into 𝒟_l.
Model ℳ is trained on the updated 𝒟_l and then evaluated on a hold-out test set.
The process terminates when the total budget B is exhausted or the model performance is good enough.
The core of an active learning method
is the design of the sampling function α.
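To make this protocol concrete, the following is a minimal Python sketch of the pool-based loop described above; the train, alpha, and query_labels callables are hypothetical placeholders standing in for model training, the sampling function, and the annotation interface, and are not part of any released implementation.

    def active_learning_loop(model, labeled, unlabeled, train, alpha, query_labels,
                             total_budget=800, rounds=8):
        """Pool-based active learning: query b = B/T instances per round."""
        b = total_budget // rounds
        model = train(model, labeled)                    # warm-up training on the labeled set
        for _ in range(rounds):
            picked = alpha(unlabeled, model, b)          # sampling function alpha(D_u, M)
            labeled = labeled + [(x, query_labels(x)) for x in picked]
            picked_set = set(picked)
            unlabeled = [x for x in unlabeled if x not in picked_set]
            model = train(model, labeled)                # retrain on the updated labeled set
        return model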
§ REPRESENTATIVE PSEUDO ERRORS
§.§ Overview of Representative Error-Driven Active Learning
We aim to explore one critical research question for active learning:
how will learning from errors improve the active learning accuracy for models?
Intuitively, sampling more errors for model training will prevent the model from making the same mistakes on the test set, thus improving the test accuracy.
Errors also carry larger loss values, making them more informative for model training <cit.>.
Though one existing work <cit.> tries to directly calculate the erroneous probability of an image using Bayes' theorem, it ignores the density of errors.
Rather than computing a single unlabeled instance's erroneous probability, we build a notion of representativeness of the selected errors into our approach.
The process starts from training the model ℳ on the initial labeled data set
𝒟_l^(0).
Formally, we minimize the average cross-entropy loss ℓ over all the instances in 𝒟_l^(0):
min_θ 1/|𝒟_l^(0)| ∑_(x_i, y_i) ∈ 𝒟_l^(0) ℓ ( ℳ(x_i, θ^(0)), y_i) .
In each of the following rounds, our method (Algorithm <ref>) selects a set of representative errors Q consisting of b instances from 𝒟_u, obtains their labels, and then adds Q into 𝒟_l for subsequent model training.
Algorithm <ref> consists of two components: pseudo error identification and adaptive sampling of representative errors.
§.§ Pseudo Error Identification
The first challenge is to select instances from 𝒟_u on which the model ℳ makes mistakes, which is non-trivial since we do not have access to the ground-truth labels before selection.
However, prior studies <cit.> have shown that
pre-trained language models can effectively learn sentence representations and support accurate classification by simply clustering the embedded representations of sentences <cit.>.
Also, active learning is commonly employed only after the machine learning model has achieved reasonable performance <cit.>.
Building upon these facts,
it is safe to expect that the majority prediction within a cluster has a high probability of being the ground-truth label, even with a small amount of training data.
Thus,
we assume that the majority prediction is the correct label for all the instances in the cluster.
Our preliminary experiments also show a relatively high and stable accuracy of this pseudo-label assignment strategy, i.e., over 0.80 for all the chosen datasets.
Since the majority prediction is treated as the pseudo label for each cluster,
pseudo errors are defined as those instances whose predictions disagree with the majority prediction in each cluster.
As will be shown in Section <ref>, the sampled pseudo errors usually have higher error rates when compared with ground truths, indicating that such a way of defining pseudo labels and pseudo errors is effective.
In round t (1 ≤ t ≤ T) of active learning, we first obtain the representations of the instances in 𝒟_u
by feeding them into model ℳ's encoder Φ(·).
Specifically, we take only the token embedding from the output of the last layer of the encoder Φ(·).
Then k-means++ <cit.> is employed as the seeding scheme to initialize the subsequent clustering process.
We denote the k-th cluster as 𝒞_k^(t) = {x_i | c_i^(t) = k }, k ∈{1, … ,K},
where c_i^(t) is the cluster id for the instance x_i at round t.
After obtaining K clusters with the corresponding data 𝒞_k^(t),
we assign a pseudo label for each cluster.
First, the pseudo label for an individual instance x_i at round t is computed as:
y_i = argmax_j ∈{1, … ,Y} [ ℳ(x_i; θ^(t) ) ]_j ,
where ℳ(x_i; θ^(t) ) ∈ℝ^Y is the probability distribution for instance x_i over the Y target classes,
and [ ℳ(x_i; θ^(t) ) ]_j is the j-th entry denoting the probability
of x_i belonging to the target class j, as inferred by the current model.
Then the majority vote (the pseudo label of cluster 𝒞_k^(t)) is derived as:
y_maj = argmax_j (∑_i ∈𝒞_k^(t)1{y_i=j}) / |𝒞_k^(t)|.
The instances that are not predicted as y_maj are defined as pseudo errors in the corresponding cluster 𝒞_k^(t).
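The identification step can be sketched in a few lines of Python; here embeddings stands for the encoder outputs Φ(𝒟_u) and probs for the model's predicted class probabilities over the pool, and all names are illustrative rather than taken from the authors' code.

    import numpy as np
    from sklearn.cluster import KMeans

    def find_pseudo_errors(embeddings, probs, n_clusters=25, seed=0):
        """Cluster the pool, use each cluster's majority prediction as its pseudo
        label, and flag disagreeing instances as pseudo errors."""
        preds = probs.argmax(axis=1)                               # instance-level pseudo labels
        cluster_ids = KMeans(n_clusters=n_clusters, n_init=10,
                             random_state=seed).fit_predict(embeddings)
        majority, pseudo_errors = {}, {}
        for k in range(n_clusters):
            members = np.where(cluster_ids == k)[0]
            if members.size == 0:
                continue
            majority[k] = int(np.bincount(preds[members]).argmax())    # y_maj for cluster k
            pseudo_errors[k] = members[preds[members] != majority[k]]  # pseudo error set E_k
        return cluster_ids, majority, pseudo_errors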
Algorithm: Round t of our method.
Input: unlabeled pool 𝒟_u, budget for one iteration b, classification model ℳ,
number of clusters K, model's encoding part Φ(·).
Output: sampled set Q.
𝒞_k^(t) = KMeans(Φ(𝒟_u)), k ∈{1, … ,K}   ▹ Clustering
for each k ∈{1, … ,K}:   ▹ Process cluster 𝒞_k
  Run Eq. <ref> for cluster 𝒞_k to get the instance-level pseudo labels
  Run Eq. <ref> to find the cluster-level pseudo label y_maj for 𝒞_k
  Initialize the pseudo error set E_k as in Eq. <ref>
  Compute the error density ϵ_k for cluster 𝒞_k by Eq. <ref>
Get the sampling budget b_k for each cluster based on its error density using Eq. <ref>
if ∑_k b_k < b:
  Δ = b - ∑_k b_k   ▹ Budget residual
  b_k += 1 for each of the Δ clusters with the largest b_k and b_k > 0   ▹ Allocate residual to top-Δ largest b_k
Q = ∅   ▹ Initialize the sample set
for each k ∈{1, … ,K}:
  Randomly sample min(|E_k|, b_k) instances from E_k into Q
if |Q| < b:
  Q = Q ∪ { (b-|Q|) instances from 𝒟_u with top ϵ(·) scores (Eq. <ref>) and not in Q }
§.§ Adaptive Sampling of Representative Errors
For each round of active learning, assuming that the labeling budget is b, we need to decide how we should select the b samples from the unlabeled pseudo errors.
To ensure the representativeness of selected samples,
we allocate the sampling budget b to each cluster according to the density of pseudo errors in the cluster, i.e., the percentage of the pseudo errors within a cluster over the total number of pseudo errors in the whole unlabeled data pool.
A larger sampling budget will be allocated to the cluster with a higher pseudo error density.
The density of pseudo errors ϵ_k for cluster 𝒞_k^(t) is defined as:
ϵ_k = ∑_x_e ∈E_kϵ(x_e),
where E_k ={x_e | x_e ∈𝒞_k^(t) and y_e ≠ y_maj} is the pseudo error set in the k-th cluster,
and ϵ(x_e) is one pseudo error x_e's contribution to the cluster-level error density:
ϵ(x_e) =1- [ ℳ(x_e; θ^(t) ) ]_maj .
The sampling budget b_k for the k-th cluster is then normalized as:
b_k = ⌊ b · ϵ_k / ∑_i ϵ_i ⌋ , ∀ k ∈{1 … K} .
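Continuing the earlier sketch, the density-proportional allocation can be written as follows; probs, pseudo_errors, and majority follow the previous snippet, all names remain illustrative, and the residual handling mirrors the description in Algorithm <ref>.

    import numpy as np

    def allocate_budget(probs, pseudo_errors, majority, budget):
        """Split the per-round budget across clusters in proportion to the
        pseudo-error density epsilon_k; leftover budget from the floor operation
        goes to the clusters with the largest allocations."""
        density = {}
        for k, err_idx in pseudo_errors.items():
            # epsilon(x_e) = 1 - probability assigned to the cluster's majority class
            density[k] = float(np.sum(1.0 - probs[err_idx, majority[k]])) if len(err_idx) else 0.0
        total = sum(density.values()) or 1.0
        budgets = {k: int(budget * d / total) for k, d in density.items()}   # b_k
        residual = budget - sum(budgets.values())
        for k in sorted(budgets, key=budgets.get, reverse=True)[:residual]:
            budgets[k] += 1                         # hand the residual to the densest clusters
        return budgets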
Apart from selecting pseudo errors, we also try to select errors near the classification decision boundary
by emphasizing clusters with denser pseudo errors.
The empirical evidence in <ref> also shows that our adaptive budget allocation is able to pick more representative pseudo errors along the decision boundary.
In real-world applications of ,
it is possible that there may not be enough pseudo errors to be sampled in a cluster (i.e., |E_k| < b_k).
For instance, when the model is already well-trained via active learning, most of the data instances will be correctly classified.
In those cases, we complement the sampled set Q with instances that have a higher erroneous probability from the whole unlabeled pool (Line <ref> in Algorithm <ref>), which are illustrated as blue instances in Fig. <ref>.
The complexity of our method consists of two parts: the inference time O(|𝒟_u|) and the time for
clustering, O(dK|𝒟_u|), where d is the dimension of the encoder features Φ(·).
The KMeans clustering implemented in faiss <cit.> costs only 2 or 3 seconds even for the large datasets in <ref>.
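For reference, the clustering step can be reproduced with the faiss k-means interface along these lines; the array sizes and cluster count are illustrative, not the settings used in the experiments.

    import numpy as np
    import faiss

    d, k = 768, 25                                              # embedding dimension, number of clusters
    embeddings = np.random.rand(100_000, d).astype("float32")   # stand-in for Phi(D_u)

    kmeans = faiss.Kmeans(d, k, niter=20, seed=0)
    kmeans.train(embeddings)
    _, assignment = kmeans.index.search(embeddings, 1)          # nearest centroid for each instance
    cluster_ids = assignment.ravel()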
§ EXPERIMENTAL SETUP
§.§ Datasets
Following prior research <cit.>,
we conduct experiments on five text classification datasets from different application domains, introduced in
<cit.>,
<cit.>,
<cit.>,
<cit.>,
and <cit.>.
Table <ref> shows their detailed statistics.
Due to the limited computational resources,
we follow the prior study <cit.> and take a subset of the original training set and validation set if they are too large.
Specifically, we randomly sample 20K × Y instances from each training set if its size exceeds 20K × Y,
where Y is the number of target classes.
We also keep the size of the validation set at no more than 3K to speed up the validation process.
§.§ Baselines & Implementation Details
We compare against 8 baselines:
(1) an entropy-based baseline selects instances with the most even distribution of prediction probabilities <cit.>;
(2) <cit.> is a diversity-based baseline which selects the b instances closest to the cluster centers of the token embeddings;
(3) <cit.> transforms data into gradient embeddings that encode the model confidence and then uses k-means++ to select;
(4) <cit.> defines uncertainty as the mutual information among different versions (via multiple MC dropouts <cit.>) of the model's predictions;
(5) <cit.> selects instances by their masked language modeling loss under the pre-trained language model;
(6) <cit.> tries to sample the most contrastive instances along the classification decision boundary;
(7) <cit.> selects the unlabeled samples with high uncertainty for active annotation and those with low uncertainty for semi-supervised self-training via weighted clustering;
since semi-supervised self-training is out of the scope of our current work, we remove the self-training part from this baseline for a fair comparison;
(8) random sampling uniformly samples data from the unlabeled pool 𝒟_u.
For the text classification model ℳ, we follow the prior study <cit.> and use a base-size pre-trained language model <cit.> implemented in the HuggingFace library <cit.> for our experiments.
We train the model on the initial warm-up labeled set for 10 epochs,
and continually train the model for 4 epochs after each round of active sampling to avoid overfitting.
We evaluate the model 4 times per training epoch on the validation set and keep the best version.
At the end of training, we test the previously-saved best model on the hold-out test set.
We choose the best hyperparameters for baselines as indicated in their original papers.
We set the number of active learning rounds to 8 for all 9 methods.
Following <cit.>, all of our methods and
baselines are run with 4 different random seeds, and the results are averaged
over them.
This creates 5 (datasets) × 4 (random seeds) ×
13 (8 baselines + our method + 4 variants) × 8 (rounds) = 2080 experiments,
which is almost the limit of our computational resources.
More details on the experiment setup can be found in our code repository.
§ RESULTS
In the experimental study,
we try to answer the following research questions:
* RQ1. Classification performance: How is the classification performance of our method compared to the baselines? (<ref>)
* RQ2. Representative errors: What are the characteristics of the sampled instances, e.g., error rate and representativeness? (<ref>)
* RQ3. Ablation & hyperparameters: How do different design variants of our method perform? How robust is our method under different hyperparameter settings? (<ref>)
§.§ Classification Performance
For RQ1,
we compare the classification performance of our method against state-of-the-art baselines.
Following the existing work of text classification <cit.>, we use two criteria: accuracy and F1-macro
to measure the model performances.
Accuracy is the fraction of predictions our model got right.
F1-macro is the unweighted average of the F1 scores measured independently for each class (i.e., treating different classes equally).
Table <ref> shows
the average accuracy and F1-macro scores
of different strategies on all the datasets.
The detailed accuracy for each round is shown in Fig. <ref>.
Our method outperforms all the baselines with a 0.43%–0.70%
performance gain
w.r.t. the mean accuracy over all eight rounds.
The two most recent baselines rank in second or third place in most cases, which is consistent with the reports in their original papers.
Uncertainty sampling is also a strong baseline and performs better than several more elaborate methods.
Its relatively good performance
can be explained by the pre-trained language model's good uncertainty estimates <cit.>.
It stands out on the binary classification dataset, probably because it is relatively easy to pick samples around the decision boundary of a binary task. As shown in Fig. <ref>,
the medical-text dataset is difficult to learn:
the model's test accuracy on it is less than 85% even with the full training set,
probably because professional medical text is rarely seen during pre-training and its label distribution is skewed.
The baselines that heavily rely on the distribution of the prediction probabilities perform very badly on this dataset; compared to the other baseline methods, our method has a clear advantage on it.
§.§ Representative Errors
We address RQ2 by investigating whether our method can sample representative errors, comparing it with the baselines.
Specifically,
we evaluate the capability of our method in selecting representative error samples from the following two perspectives:
* The error rate and initial training loss of the sampled instances (Table <ref> and Fig. <ref>);
* The distribution divergence between the sampled instances and the boundary errors (Fig. <ref>).
§.§.§ Error Rate and Initial Training Loss of Samples
Table <ref> shows the mean error rates and the initial training loss (over all rounds) of the sampled sets Q for the different strategies across all the datasets.
The error rate ε(Q) is the proportion of wrongly-predicted instances in Q by comparing the model prediction with the ground truth label for each instance.
It is inappropriate to directly compare the error rates ε(Q) of different strategies,
because the error rates ε(𝒟_u) of the whole unlabeled pool differ across strategies.
It is easier to achieve a high sampling error rate ε(Q) given a high background error rate ε(𝒟_u).
Therefore, we compare the ratio of sampling error rates, defined as ε(Q) / ε(𝒟_u),
which indicates how effectively an active learning strategy selects errors compared with random selection.
Another metric is the average cross-entropy loss ℓ_0 of the samples Q in the first training step, which is a more fine-grained version of the error rate.
Many previous studies <cit.>
have already validated that samples with a higher loss are usually more informative to the model.
Table <ref> shows that
our method usually has a large ratio of sampling error rates, second only to one baseline, despite the fact that
its unlabeled-pool error rate ε(𝒟_u) is the lowest.
The large ratio implies that our method successfully identifies the errors in 𝒟_u.
It is worth mentioning that ε(𝒟_u) can serve as a test-set metric because 𝒟_u is not used in the previous rounds
for training or validation.
Our method has the lowest ε(𝒟_u) on 4 out of 5 datasets, which means the highest accuracy when testing on the unlabeled pool.
The initial training loss of our method's samples is also the largest on the last four datasets.
Fig. <ref> shows the more detailed loss distribution for each round.
§.§.§ Representativeness
We investigate how our method's samples align with the ground-truth errors on the decision boundary compared to the baselines.
Based on theoretical studies of margin theory for active learning <cit.>,
selecting instances close to the decision boundary can significantly reduce the number of annotations required <cit.>.
Though identifying the precise decision boundaries of deep neural networks is intractable <cit.>, we use basic grid statistics on t-SNE embeddings as an empirical approximation.
Specifically, we apply t-SNE to the token embeddings Φ(𝒟_u)
and project the original 768-dimensional token embeddings onto a 2-D plane,
as shown in Fig. <ref>.
Then, we split the bounding box of the projected instances on the 2-D plane into 50 × 50 uniform grid cells, where
g_i denotes the number of instances that fall into the i-th cell.
We keep only the ground-truth errors within the decision-boundary cells.
Our intuition of decision boundary is where instances have similar representations but different predictions <cit.>.
We hypothesize that the pseudo errors selected by is near to the model’s decision boundary because
1) instances in the same cluster have similar representations;
and 2) pseudo errors have different predictions than the majority in a cluster.
To verify this hypothesis, we compute the entropy of the ground-truth label distribution for each grid cell, and keep only the top 15% of cells with the highest entropy values.
For example, in binary classification, a ground-truth label distribution [98,100] for grid g_1 means there are 98 instances belonging to class-0 and 100 instances belonging to class-1 in g_1.
Suppose another grid g_2 has ground-truth label distribution [180, 10].
Then g_1 will be closer to the decision boundary than g_2 because g_1 has a more even distribution, thus a larger entropy.
We call the grid cells with the top 15% entropy values boundary grids.
g_i^ε denotes the number of ground-truth errors that fall into the i-th boundary grid.
We compare the Jensen-Shannon divergence (JSD) between the boundary-grid error counts
{g_i^ε}_i=1^m (m is the number of boundary grids)
and {s_i}_i=1^m, where s_i is the number of sampled instances in the i-th boundary grid.
JSD extends the KL divergence (KLD) to a symmetric distance measure between two probability distributions P_1 and P_2: JSD(P_1 ‖ P_2)=1/2 KLD(P_1 ‖ M)+1/2 KLD(P_2 ‖ M),
where M=1/2(P_1+P_2).
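A compact sketch of this measurement is given below; xy holds the 2-D t-SNE coordinates, labels the ground-truth classes, and is_error / is_sampled are boolean masks for ground-truth errors and actively sampled instances. All names are illustrative, and the 50×50 grid and top-15% cutoff simply follow the description above.

    import numpy as np

    def boundary_jsd(xy, labels, is_error, is_sampled, bins=50, top_frac=0.15):
        """Bin the t-SNE plane into a bins x bins grid, keep the top-`top_frac`
        cells by label entropy (the boundary grids), and compare the error and
        sample count distributions over those cells with the JSD."""
        def cell_index(coords, b):
            edges = np.linspace(coords.min(), coords.max(), b + 1)[1:-1]
            return np.digitize(coords, edges)
        cell = cell_index(xy[:, 0], bins) * bins + cell_index(xy[:, 1], bins)
        n_classes = int(labels.max()) + 1

        entropies = {}
        for c in np.unique(cell):
            p = np.bincount(labels[cell == c], minlength=n_classes).astype(float)
            p /= p.sum()
            entropies[c] = -np.sum(p[p > 0] * np.log(p[p > 0]))
        cutoff = np.quantile(list(entropies.values()), 1.0 - top_frac)
        boundary = [c for c, h in entropies.items() if h >= cutoff]   # boundary grids

        e = np.array([np.sum(is_error & (cell == c)) for c in boundary], dtype=float)
        s = np.array([np.sum(is_sampled & (cell == c)) for c in boundary], dtype=float)
        p1, p2 = e / max(e.sum(), 1.0), s / max(s.sum(), 1.0)
        m = 0.5 * (p1 + p2)
        def kld(a, b):
            mask = a > 0
            return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
        return 0.5 * kld(p1, m) + 0.5 * kld(p2, m)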
Together with previously introduced sampling loss,
we plot the divergence in Fig. <ref> for the two largest datasets.
Our method clearly has the lowest divergence,
which means that the distribution of our samples aligns well with the ground-truth errors on the boundary.
Besides the lowest divergence, our method also has the largest sampling loss.
As shown in Fig. <ref>, our method's samples clearly concentrate in the upper-left corner.
In contrast, the weakest baseline's samples lie in the lower-right corner.
The smallest sampling loss and the largest divergence from the boundary errors may be the reason why it fails.
To our surprise, darker dots usually appear in higher positions in Fig. <ref>, which means that samples in later rounds incur a larger loss.
The reason may be that the model in later rounds is stable and confident in its predictions,
so introducing new samples causes a larger loss.
Fig. <ref> provides case studies of our samples against a baseline.
We can see that most errors are distributed near the decision boundaries.
The baseline tends to miss some decision-boundary areas and lacks diversity.
Our method matches the boundary errors better.
§.§ Ablation and Hyperparameter Study
In this section,
we address RQ3 with extensive ablation (Fig. <ref>) and hyperparameter studies (Fig. <ref>) to understand the important components of our method.
A. Ablation Study.
(1) We test different budget allocation strategies (Eq. <ref>) per cluster.
(1.1) Ignore the idea of "allocation by cluster".
Specifically, we rank all the instances in 𝒟_u based on their erroneous probabilities (Eq. <ref>) and select the top-b instances per round.
(1.2) For each cluster, uniformly sample B/K pseudo errors, i.e., ignore the cluster error weights in Eq. <ref>.
(2) Our method randomly samples within each cluster's pseudo errors (line <ref> in Algorithm <ref>) based on the adaptive budget.
Given the adaptive budget in each cluster, we also try to sample:
(2.1) the instances with the largest erroneous probabilities in Eq. <ref>;
(2.2) the pseudo errors with the largest prediction entropy.
Fig. <ref> shows that most of our method's
variants still perform better than the best baseline on the two large datasets.
However, the results on the binary dataset are unstable. The reason may be that the decision boundary of binary classification is too simple, so dedicated methods are not necessary.
One variant fails only on this dataset.
The uniform-allocation variant is slightly worse than the full method in later rounds, which indicates the importance of weighted budget allocation.
Another variant nearly loses its advantage on the most difficult dataset, possibly because its lack of diversity hurts it there.
B. Hyperparameter Study
We study the impact of varying the number of clusters K.
Experimental results in Fig. <ref> show that our method stably beats the best baseline across a wide range of K (on a scale of tens to hundreds).
§ CONCLUSION AND FUTURE WORK
We present a novel active learning sampling strategy that selects representative pseudo errors for efficient model training.
We define pseudo errors as minority predictions within each cluster.
The sampling budget per cluster is adaptive to the cluster's total estimated error density. Experiments on five datasets demonstrate that our method consistently performs better than other AL sampling strategies. By analyzing the actively sampled instances, we find that it improves over all the best-performing baselines by guiding uncertainty sampling towards errors near the decision boundary.
The ablation study shows that most alternative designs of our method still beat the state-of-the-art baseline.
Future work will investigate the theoretical effectiveness of selecting errors near decision boundary for and the diversity within pseudo errors.
Currently, we only take text classification as an example to illustrate the effectiveness of our approach,
but the framework can be easily adapted to other tasks, such as image classification, implemented with neural classification architectures.
§ ETHICAL STATEMENT
All the datasets are widely-used benchmark text classification datasets and are publicly-available online, which do not have any privacy issues.
Also, our approach can benefit data labeling workers and bring welfare to them.
Data labeling is very costly and labour-intensive.
For example, labeling toxic content is reported to be a “mental torture” <cit.>.
Our approach aims to make active learning more label-efficient and can reduce the workload of data labeling workers, which is beneficial to the mental health of data labeling workers.
§ ACKNOWLEDGMENTS
This work was done during Cheng Chen's internship at Singapore Management University (SMU) under the supervision of Dr. Yong Wang.
This work was supported by
the National Key Research and Development Program of China (2020YFB1710004),
Lee Kong Chian Fellowship awarded to Dr. Yong Wang by SMU,
and the National Science Foundation of China under the grant 62272466.
We would like to thank all the anonymous reviewers for their valuable feedback.
|
http://arxiv.org/abs/2307.02391v2
|
20230705160205
|
Physical Layer Secret Key Agreement Using One-Bit Quantization and Low-Density Parity-Check Codes
|
[
"John A. Snoap"
] |
cs.IT
|
[
"cs.IT",
"eess.SP",
"math.IT"
] |
http://arxiv.org/abs/2307.01306v1
|
20230703192216
|
Notes on Factorization Algebras and TQFTs
|
[
"Araminta Amabel"
] |
math.AT
|
[
"math.AT",
"hep-th",
"math.QA"
] |
These are notes from talks given at a spring school on
topological quantum field theory in Nova Scotia during May of 2023.
The aim is to introduce the reader to the role of factorization algebras
and related concepts in field theory.
In particular, we discuss the relationship between
factorization algebras, E_n-algebras, vertex algebras,
and the functorial perspective on field theories.
§ INTRODUCTION
Ideas motivated by quantum field theory connect a wide array of
mathematics.
From probability theory to knot theory, or differential geometry to infinity categories,
modeling the physical aspects of field theories and their quantization
provides a wealth of interesting concepts.
Because they permeate an expanse of topics,
it can be hard to move between perspectives without reading a large amount of background material.
The intention behind these notes is to provide an accessible
and introductory account of several different,
but mathematically close,
descriptions of field theories.
The viewpoints chosen are those most accessible to an algebraic topologist.
They are:
* factorization algebras of observables,
* E_n-algebras,
* the functorial perspective, and
* vertex algebras.
Roughly, these are all methods for accounting for the different measurements one can make on a physical system.
We will give the most attention to factorization algebras.
Factorization algebras are algebraic gadgets whose
multiplicative structures are controlled by disjoint open subsets of a manifold;
see Definition <ref>.
Historically, similar structures were first considered
under the name of chiral algebras by Beilinson and Drinfeld <cit.>.
Whereas factorization algebras highlight the geometry
of the manifold governing their multiplication,
E_n-algebras have a more topological flavor.
Vertex algebras are more common in the representation theory literature
and the functorial perspective is useful when looking for structural results.
Each perspective has its benefits which we try to highlight throughout these notes.
Lastly, we end these notes with a discussion of more recent advances and open areas of research.
As these lectures were given over the course of five days,
they are hopefully a quick and informal read.
Exercises with hints are included.
§.§ Assumed Background
The reader is assumed to be familiar with the language of (ordinary, not infinity) categories.
No knowledge of field theory is necessary.
Roughly, the expectation is that the reader has the knowledge of an average first year graduate student with an interest in topology.
§.§ Linear Overview
The first part includes a preliminary definition
of a classical field theory
and introduces the notion of a factorization algebra.
The main takeaway of this part
is that field theories can be studied by
considering their algebras of observables.
We end the first part with a discussion of quantization
in terms of observables.
In the second part,
we focus on topological field theories
and their relationship to E_n-algebras.
The third part is on
the global invariant of observables called factorization homology.
Afterwards, in the fourth part, we discuss the Atiyah-Segal functorial
perspective of field theories using bordism categories.
In the last part, we consider holomorphic field theories and
their relationship to vertex algebras.
This last part also contains remarks
on the cobordism hypothesis,
Koszul duality,
and the Stolz-Teichner program.
After each part, exercises are included.
The exercises vary in difficulty level, depending on the reader's background.
At the end of these notes,
hints or solutions to each exercise are given.
§.§ Further Reading
These notes are short, and not all-inclusive.
Many important topics have been left out.
The lectures these notes are based on where given in
conjunction with two other lecture series:
one on spectral sequences by Debray
and one on fusion categories by Delaney.
Notes for these talks can be found online;
see <cit.>.
For more content and an in-depth discussion of the
ideas in these notes,
we make the following recommendations:
* For factorization algebras in field theory: the original reference,
which is wonderfully readable, is the two part series <cit.> and <cit.>.
* For E_n-algebras: <cit.> is a canonical reference. In the ∞-setting, see <cit.>.
* For factorization homology: the various works of Ayala-Francis including <cit.> and <cit.>
as well as Ginot's notes <cit.>.
* For the cobordism hypothesis: see <cit.> for an overview, <cit.> for Lurie's proof sketch, <cit.> for the details of the 2-dimensional case, and <cit.> for a construction of a target category.
* For vertex algebras: <cit.> or <cit.>.
§.§ Acknowledgements
As these notes were written for the Atlantic TQFT Spring School in May of 2023,
the author would like to thank the organizers
Theo Johnson-Freyd and Geoff Vooys.
Thanks are also due to the participants of the Atlantic TQFT Spring School
for their questions.
In particular, the author thanks Will Stewart, Eilind Karlsson, and Mathew Yu
for serving as the TAs for the spring school.
We thank Owen Gwilliam and Hiro Lee Tanaka for providing comments and corrections
on an earlier draft.
The author was
supported by NSF grant DMS-1902669.
PART:
Day 1
I am supposed to teach you about factorization algebras.
There is a lot to say about these algebraic gadgets,
so I will not come close to covering everything.
I hope you come away with three skills:
* knowing what a factorization algebra is (the definition and examples),
* being able to relate them to other notions of field theories, and
* a curiosity for learning more about the subject.
The five lectures will roughly cover the following themes:
Day 1. physical motivation
Day 2. topological field theories
Day 3. factorization homology
Day 4. functorial perspective
Day 5. holomorphic field theories and applications
For today, we have the following goal.
Discover “factorization algebras" in nature.
The quotes are because I have not yet told you the definition of factorization algebras,
and part of the goal is to stumble upon the definition ourselves.
Here, nature means physics.
The story contained here is roughly mimicking the introduction in <cit.>.
We refer the reader there for more details.
§ CLASSICAL STORY
If you have not heard of a field theory before,
the picture I want you to have in mind is the following.
Imagine we have some container X
(a manifold or more general space)
and a particle moving around in X.
For I⊂R a time interval,
the space of maps
Map(I,X)
is all the paths a particle could take.
So t∈ I maps to the position of the particle at time t.
This system of a particle in X
might be subject to constraints in the real world.
Only paths of least energy.
X=R^2 and only straight line paths.
X=S^2 the sphere, and only great circle paths.
These are examples of the general case
where paths are geodesics.
The physical constraints on what paths a particle can take
are called the equations of motion
or the Euler-Lagrange equations.
These can be written as a PDE,
and are determined by a map
S : Map(I,X) → R
called the action functional.
The allowable paths are then
EL⊂Map(I,X)
the subset of paths so that
EL={f : I → X : (dS)(f)=0}.
This is the critical locus of S.
More generally, we might be interested in how the particle
moves around over some time interval I
and as we move around some parameter space N.
Then N× I=M is “spacetime"
and we are interested in parameterized paths
𝖬𝖺𝗉(M,X).
For now we will take the following definition.
A classical field theory is a space of fields
𝖬𝖺𝗉(M,X)
and a set of equations of motion that can be encoded by the critical locus of a map
S : 𝖬𝖺𝗉(M,X) → R.
The dimension of the field theory is the dimension of M.
The action functional S must be local: it must arise as the integral over M
of some polynomial in a field ϕ and its derivatives;
see <cit.>.
We already have several equivalent descriptions of a given classical field theory.
One could hand you the space of fields
and the action functional,
or the equations of motion.
For more equivalent descriptions,
see <cit.>
and in particular <cit.> and <cit.>.
What if someone just told you the critical locus EL?
Would that be equivalent data to the whole field theory?
For an answer, see <cit.>.
[Classical mechanics, massless free theory]
Say I=[a,b].
Take fields
𝖬𝖺𝗉(I,R^n).
The action functional sends a field f to
S(f)=∫_a^b⟨ f(t),d^2/dt^2f(t)⟩ dt.
Here, we are taking the inner product on R^n.
In this case,
the critical locus is straight lines,
EL={straight lines}.
For details see <cit.>.
If we add in a nonzero mass,
the free theory with mass m is described in <cit.>.
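To see why only straight lines survive in the massless free theory, one can run the variation directly (a standard computation, sketched here for convenience).
For a variation h vanishing, together with its derivative, at the endpoints,
(dS)(f)(h) = d/ds|_s=0 S(f+sh) = ∫_a^b (⟨ h(t), d^2/dt^2 f(t)⟩ + ⟨ f(t), d^2/dt^2 h(t)⟩) dt = 2∫_a^b ⟨ d^2/dt^2 f(t), h(t)⟩ dt,
after integrating by parts twice in the second term.
This vanishes for all such h exactly when d^2f/dt^2 = 0, i.e., when f is a straight line.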
Can you extend this notion to a massless free theory
on a general n-manifold M?
What extra structure will you need M to have?
For an answer, see <cit.>.
[Gauge Theory]
Given a Lie group G and a spacetime M,
G gauge theory on M
which I will denote 𝖦𝖺𝗎𝗀𝖾_G^M
has fields
𝖬𝖺𝗉(M,B_∇ G)=𝖡𝗎𝗇^∇_GM.
Note that principal G-bundles 𝖡𝗎𝗇_GM on M
are classified by maps into the classifying space BG.
Here, B_∇ G is an upgraded version of BG that
classifies principal G-bundles with connection.
The space B_∇ G is the quotient of Ω^1(-;𝔤)
by the adjoint action; see <cit.>.
See <cit.> for more details on 𝖡𝗎𝗇_G^∇(M).
A field is then a principal G-bundle P on M
with connection A.
There are many different choices for action functionals,
giving different gauge theories.
[Yang-Mills]
One type of gauge theory 𝖦𝖺𝗎𝗀𝖾_R^M
is Abelian Yang-Mills.
The action functional is
S(A)=-1/2∫_M tr(dA∧ *dA).
Here * is the Hodge star operator.
More generally, we can consider Yang-Mills for an arbitrary
compact Lie group G.
In this case, dA is replaced with the curvature of A.
For details, see <cit.>.
This is particularly studied when G=SU(n) and M has dimension 4.
We can alter this action functional to produce new theories by adding
another term in the integrand.
S(A)=-1/2∫_M tr(dA∧ *dA)+θ∫_Mtr(dA∧ dA)
This additional term is called the “theta term";
see <cit.> or <cit.>.
Note that the theta term does not depend on the connection,
and hence does not change the equations of motion.
Because of this lack of dependence,
the theta term is referred to as a “topological term."
Can you show that the examples of action functionals given above
are local?
The space of fields does not have to be a mapping space.
When it is, it is referred to as a sigma model.
§ MEASUREMENTS
Knowing a particle is moving in a box is good,
but we want to know more.
Say you hand a little kid a box containing a spider
(i.e. a scary particle).
They look up at you really scared.
You tell them,
Do not worry,
I can tell you what type of paths the spider can take.
It only moves along straight lines!
They are still scared.
And they have questions.
* Where in the box is the spider right now?
* How fast is the spider moving?
* Is the box open?
* Are there holes in the box?
* Can the spider get out of the box?
Notice that these questions are of three different types.
The first two are about the particle and the equations of motion.
They can maybe be answered locally.
The third and fourth questions are about the topology of the spacetime.
That is global information.
Answering the last question requires knowledge about the whole field theory.
These questions are types of measurements
one can make on the system.
In classical field theory,
one can (in theory) answer all of these questions.
This changes in quantum field theory!
In quantum field theory,
one cannot precisely know both the position and the momentum
of a particle at the same time.
Formulate this uncertainty mathematically.
§.§ Observables
Given a classical field theory
EL⊂Maps(I,X),
the classical observables are the
real functions on the critical locus,
Obs^cl=𝒪_EL.
Thus, an observable takes in an allowed path
and spits out a real number:
the measurement on that path.
I did not say what kind of functions we are taking.
This will depend on the context.
Generally the (derived) critical locus lives in the world of derived C^∞ geometry;
see <ref>.
When things are nice, holomorphic functions also makes sense.
The vector space of functions
Obs^cl=𝒪_EL
form a commutative algebra;
f, g : EL → R
combine to give
(fg)(u)=f(u)g(u).
Physically,
given two measurements f,g,
we can perform them at the same time
to get a new measurement fg.
Note: This is exactly what fails in the quantum world!
There, we cannot take two measurements
(like position and momentum) simultaneously.
What can we do?
Say I=[a,b] is our spacetime.
Let f,g be observables.
We can form a new measurement on I
that on
(t_1,t_2) does f,
on (t_3,t_4) does g
and does nothing in between.
Nothing here means the measurement sending every path to zero.
So we have a way of combining observables
on disjoint open intervals.
More generally, say we were looking at a spacetime
M=I× N
where N has dimension n.
We can perform different measurements on disjoint disks.
Given two embeddings
D^n+1≃ I×D^n ↪ I× N
we do f on one and g on the other.
Upshot.
Whatever quantum field theory is,
the set of measurements (or observables)
one can make on it have
a weird multiplication structure controlled
by disjoint open subsets of spacetime.
The resulting algebraic structure is called a factorization algebra.
§ FACTORIZATION ALGEBRAS
The following can be found in <cit.>
and <cit.>.
Let M be a manifold.
A factorization algebra on M is a functor
ℱ : 𝖮𝗉𝖾𝗇(M) → 𝖢𝗁
together with maps
ℱ(U_1)⊗⋯⊗ℱ(U_k) → ℱ(V)
for disjoint unions of opens U_1,…, U_k in M contained in a single open subset V,
U_1⊔⋯⊔ U_k⊂ V,
that are equivalences if
U_1⊔⋯⊔ U_k= V,
and such that the maps are compatible
⊗_i=1^l ⊗_j=1^k_i ℱ(U_i,j) → ⊗_i=1^l ℱ(V_i) → ℱ(W)
(a commuting triangle with the direct structure map to ℱ(W))
for disjoint opens U_i,1,…,U_i,k_i⊂ V_i inside
disjoint opens V_1,…,V_l inside an open W.
Moreover,
ℱ satisfies a (Weiss) cosheaf condition.
Let's break that all down.
A factorization algebra is the data of:
* for an inclusion of disjoint disks
_i=1^k D_i↪ M,
a vector space
ℱ(_i=1^k D_i)
and an isomorphism
ℱ(_i=1^k D_i)≃⊗_i=1^k ℱ(D_i),
and
* for an inclusion
_i=1^k D_i⊂ D↪ M,
a map
ℱ(_i=1^k D_i)ℱ(D).
The weird multiplication comes from putting this data together:
⊗_i=1^k ℱ(D_i)≃ℱ(_i=1^k D_i)ℱ(D)
tells us how to multiply observables on the disjoin disks D_i.
The last piece was the cosheaf condition.
This is going to be on the exercises.
A reference for Weiss covers is <cit.>.
Whatever a quantum field theory is,
we should be able to take measurements on it.
The set of these measurements is called the
quantum observables.
In keeping with the spirit of these notes,
rather than defining what a quantum field theory is,
we just describe the set of observables.
To obtain a precise mathematical description of
a quantum field theory,
we need to assume that the QFT is perturbative.
Under this assumption we have the following.
The quantum observables 𝖮𝖻𝗌^q
of a field theory on spacetime M
has the structure of a factorization algebra on M.
This is <cit.>.
In the nonperturbative setting,
the quantum observables form a prefactorization algebra;
see <cit.>.
Theorem <ref>
works for QFTs in Riemannian signature.
For a Lorentzian signature analogue,
see <cit.>.
The factorization algebra structure on 𝖮𝖻𝗌^q
means we get a vector space
𝖮𝖻𝗌^q(U)
for every open subset U of spacetime.
These are the local observables at U.
When we were discussing classical observables,
we had a single vector space
𝖮𝖻𝗌^cl=Hom(EL,R).
One could ask how to get a single
object 𝖮𝖻𝗌^q(M)
from the factorization algebra 𝖮𝖻𝗌^q.
This should be some sort of “global" observables.
Since 𝖮𝖻𝗌^q is a type of cosheaf,
we can take its global sections.
Let ℱ be a factorization algebra on M.
The factorization homology of ℱ
is the global sections
∫_Mℱ=ℱ(M).
§ QUANTIZATION
Previously we talked about how classical observables form a commutative algebra,
and quantum observables form a factorization algebra.
I want to end today by talking about how
to quantize a field theory.
To get at this question,
we need an easy, workable example.
Let R be our spacetime,
and take as fields
𝖬𝖺𝗉(R,T^*R).
Lastly,
our action functional is going to be S=0.
From the exercises, we know that
* the critical locus EL of S has functions
𝖮𝖻𝗌^cl=C^∞(T^*R),
* and the quantum observables are a factorization algebra on R.
Assume that 𝖮𝖻𝗌^q comes from an associative algebra A.
On the level of observables,
we can rephrase our quantization question as:
Given a commutative algebra,
how do we get a factorization algebra out of it?
The process of going from a commutative algebra
to an associative algebra
is called deformation.
Let T be a commutative algebra.
A deformation of T is an associative algebra
structure on T[[ħ]] such that
T[[ħ]]/ħ≃ T
as algebras.
If this was all we asked for,
we could always take
𝖮𝖻𝗌^q=𝖮𝖻𝗌^cl[[ħ]]
with the usual multiplication.
That is not telling us anything about the field theory.
We need to ask for the deformation to encode more information.
In our running example, we have
𝖮𝖻𝗌^cl=C^∞(T^*R)=R[p,q].
We have an interesting structure on T^*R:
the symplectic form.
On functions, the symplectic form gives a Poisson bracket,
{p,q}=1.
In general, the derived critical locus EL of S
has the structure of a (-1)-shifted symplectic stack,
so 𝒪_EL is a P_0-algebra.
We can ask for deformations of classical observables
that respect this Poisson bracket.
That is,
a deformation R=T[[ħ]]
so that
[f,g]=ħ{f,g}
up to higher order terms in ħ,
for f,g∈ T.
We can think of this as saying that the Poisson structure is
“semi-classical."
It tells us the first step in the quantum direction,
that is the ħ term.
The quantum observables of our example theory
is the Weyl algebra
𝖮𝖻𝗌^q=ℝ[[ħ]][p,q]
with multiplication so that [p,q]=ħ.
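As a quick sanity check of the semi-classical condition, one can compute in the Weyl algebra using pq - qp = ħ:
p q^2 - q^2 p = (qp + ħ)q - q^2 p = q(qp + ħ) + ħ q - q^2 p = 2ħ q,
which agrees with ħ{p, q^2} = 2ħ q{p,q} = 2ħ q in the convention {p,q}=1 used above; in this example there are no higher-order corrections at all.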
More generally,
we could have a theory with fields
Map(R,V)
where V was a symplectic manifold.
Then 𝒪_EL again has a Poisson bracket,
and quantum observables are a deformation respecting this bracket.
One can in fact deform functions on any Poisson manifold,
using a globalized version of the Weyl algebra;
see <cit.>
wherein one can also find a formal definition of deformation quantization.
In higher dimensions,
we ask for the space of fields to have a shifted symplectic structure,
and use this to deform the observables.
Thus in the language of factorization algebras,
quantization is deformation.
See <cit.> for details.
§ EXERCISES
There are lot of exercises here of varying difficulty.
You do not need to do all of them,
or any of them completely.
Just get a feel for things and have fun learning!
§.§.§ Cosheaf Condition
A good reference for these questions is
<cit.>.
Recall that a functor
F : 𝖮𝗉𝖾𝗇(M) → 𝒞
is a cosheaf
if for every open cover 𝒰 of M,
the diagram
∐_i,j F(U_i∩ U_j) ⇉ ∐_k F(U_k) → F(M)
is a colimit diagram.
An open cover {U_j} of M
is a Weiss cover if for every finite set x_1,…,x_k
of points in M,
there exists some U_j so that
{x_1,…, x_k}⊂ U_j.
For a detailed introduction to cosheaves,
see <cit.>.
A canonical example of a sheaf is smooth functions on a manifold M.
More generally, given a vector bundle π : V → M,
show that sections of π form a sheaf.
In contrast, show that compactly supported sections of a vector bundle form a cosheaf.
Give an example of an open cover (in the usual topology) of R^2
that is not a Weiss cover.
Give an example of a Weiss cover that exists on any manifold M.
Let 𝖱𝖺𝗇(M) be the set of nonempty finite subsets of M.
Take on faith for a moment that the set 𝖱𝖺𝗇(M) has a reasonable topology.
For details, see <cit.>
and <cit.>.
Show that a Weiss cover for M determines an ordinary cover for 𝖱𝖺𝗇(M).
Conversely,
show that an ordinary cover for 𝖱𝖺𝗇(M) determines an Weiss cover on M.
Ignore topology issues for this,
just show it on the level of sets.
It turns out that 𝖱𝖺𝗇(M) has a topology
so that cosheaves on 𝖱𝖺𝗇(M) are the same as
cosheaves on M for the Weiss topology;
see <cit.> and the references therein.
§.§.§ Examples
Given a field theory with space time M,
classical observables are a commutative algebra.
Show that classical observables form a factorization algebra on M.
In fact, any commutative algebra forms a factorization algebra on M.
Show that an associative algebra forms a factorization algebra.
Note that I did not tell you on which space it should be a factorization algebra;
this is part of the exercise.
Call this space X.
Let X be the space on which associative algebras determine a factorization algebra.
When is a factorization algebra ℱ on X
determined by an associative algebra?
Let ℱ be a factorization algebra on [0,1]
so that
ℱ|_(0,1)
comes from an associative algebra.
Describe the structure of ℱ over [0,1]
in more familiar terms.
§.§.§ Critical Locus
Recall that the classical observables are
functions on the critical locus,
𝖮𝖻𝗌^cl=𝒪_EL
where
EL⊂𝖬𝖺𝗉(M,X)
is that the critical locus of the action functional
S : 𝖬𝖺𝗉(M,X) → R.
Take M to be a point and X=W to be a vector space.
Describe the critical locus of an action functional
S : 𝖬𝖺𝗉(pt,W) → R.
For convenience,
let us now pretend the mapping space Map(M,X)
(the fields) is a smooth manifold Y.
Let Γ(dS)⊂ T^*Y
denote the graph of dS.
Show that the critical locus of S
is the intersection
Γ(dS)∩Zero_Y
where Zero_Y is the zero-section of Y in T^*Y.
What is EL if S=0?
In reality, we want EL to be a fancier version of the critical locus:
the derived critical locus.
The derived critical locus of S : Y → R is a dg space on Y,
so it is determined by its functions (which are a chain complex).
The derived critical locus of S : Y → R has functions the derived tensor product
C^∞(Γ(dS)) ⊗^L_C^∞(T^*Y) C^∞(Y).
See <cit.> for details.
What is the derived critical locus if the space of fields is
𝖬𝖺𝗉(pt,W)
as in Question <ref>?
What is the derived critical locus of S=0?
What is the derived critical locus of general S in terms of S?
PART:
Day 2
§ TQFTS
Yesterday we talked about how classical observables form a commutative algebra,
and quantum observables form a factorization algebra.
Today I want to talk about how this structure behaves when
we have a topological field theory.
Informally, a field theory is topological if
it does not depend on a metric.
Let's investigate what this means for quantum observables.
Let M be our spacetime.
If our theory is topological the cosheaf
𝖮𝖻𝗌^q : 𝖮𝗉𝖾𝗇(M) → 𝒞
cannot know about measurements of size or distance.
For example, consider the value
𝖮𝖻𝗌^q(B_r(0))
on a ball of radius r centered at the origin.
Since 𝖮𝖻𝗌^q does not know about size,
it cannot distinguish between balls of different radii.
Thus
𝖮𝖻𝗌^q(B_r(0))∼𝖮𝖻𝗌^q(B_r'(0))
for r<r'.
Moreover, it does not know distance from the origin.
So if we move B_r(0) around inside B_r'(0),
that will not affect the answer either.
A factorization algebra ℱ on M is locally constant
if for any inclusion of disks
D_1⊂ D_2
in M,
the induced map
ℱ(D_1)ℱ(D_2)
is an equivalence.
This is <cit.>.
Motivated by <cit.>,
we are going to use this as a definition.
A field theory is topological if 𝖮𝖻𝗌^q is locally constant.
A locally constant factorization algebra on R is an associative algebra.
A locally constant factorization algebra on R^∞ is a commutative algebra.
The data of a locally constant factorization algebra on R^2 is
a vector space V with many types of multiplications and coherencies.
Locally constant factorization algebras on euclidean spaces
give a family of algebra structures starting from associative,
and becoming more commutative as the dimension increases.
To precisely describe this structure,
we will use the language of operads.
§ OPERADS
Informally,
an operad 𝒬 is a sequence of spaces 𝒬(k) encoding k-ary operators of an algebraic structure.
For example, there is an operad Assoc that records the data of a unit, a multiplication map, and the associativity conditions for an associative algebra.
To formalize this, we need some preliminary definitions.
Let Fin^bij be the category of finite sets and bijections.
The category of symmetric sequences in Spaces
is the functor category
Sseq(Spaces) := Fun(Fin^bij,Spaces).
Symmetric sequences can be given the structure of a monoidal category as follows.
The composition product R∘ S of two symmetric sequences is
(R∘ S)(n)=⊕_i R(i)⊗_Σ_i(⊗_j_1+⋯+j_i=n (S(j_1)⊗⋯⊗ S(j_i))×_Σ_j_1×⋯×Σ_j_iΣ_n).
The unit of the composition product, denoted 𝒪_triv, sends a finite set B to the unit 1_Spaces of Spaces if |B|=1 and to the zero object * of Spaces otherwise.
An operad in Spaces is a monoid object in symmetric
sequences Sseq(Spaces).
An operad 𝒪 in spaces has an underlying functor Fin^bijSpaces.
For each k∈N, we denote by 𝒪(k) the image of the finite set with k elements [k] under this functor.
This matches with <cit.>.
[Little n-Disks Operad]
Define an operad E_n in spaces by
E_n(k)=Conf_k(R^n),
the configuration space of k distinct points in R^n,
topologized as a subset of (R^n)^k.
The easiest way to see the product
E_n ∘ E_n → E_n
is to consider each point in R^n as a little open disk centered at that point.
Then the product is just the inclusion of disks.
For more details, see <cit.>.
A good reference for configuration spaces in general is <cit.>.
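For instance, E_1(2) = Conf_2(R) has exactly two contractible components, given by the orderings x_1 < x_2 and x_2 < x_1.
So an E_1-algebra V carries, up to homotopy, just two binary operations, v⊗ w ↦ vw and v⊗ w ↦ wv, with no relation imposed between them; informally, this is an associative multiplication together with its opposite.
For n ≥ 2, the space Conf_2(R^n) is connected, so the two orders of multiplication can be continuously interchanged, which is the start of the increasing commutativity as n grows.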
We think of the space 𝒪(k)
as parameterizing k-ary operations for some type of algebraic structure.
An 𝒪-algebra in 𝒞
is an object V∈𝒞 together with maps
𝒪(k) ⊗ V^⊗ k → V
for every k,
compatible with the multiplication maps for 𝒪.
Here, the tensor product 𝒪(k)⊗ V
is tensoring together a space 𝒪(k)
and an object V∈𝒞.
To make sense of this in general,
𝒞 must be tensored over spaces.
This is true if, for example, 𝒞 is
an ∞-category; see <ref>.
For now, we have the following examples.
* For 𝒞=𝖲𝖾𝗍𝗌, we tensor V with the set π_0(𝒪(k)).
* For 𝒞=Vector, we tensor V with the vector space H_0(𝒪(k)).
* For 𝒞=𝖢𝗁, we tensor V with the chain complex C_∙(𝒪(k)).
* For 𝒞=𝖢𝖺𝗍, we tensor V with the fundamental groupoid π_∙(𝒪(k)).
Locally constant factorization algebras on R^n are the same as
E_n-algebras.
This is <cit.>.
§.§ Factorization Homology
Recall that for a factorization algebra ℱ
on M,
we had a notion of factorization homology
given by global sections
∫_Mℱ=ℱ(M).
We want to investigate this local to global structure in the locally constant case.
For this,
note that there is an equivalence
𝖤𝗆𝖻^fr(⊔_k R^n, R^n) ≃ 𝖢𝗈𝗇𝖿_k(R^n)
which records the center of the disks.
We are going to construct a notion of factorization homology
of an E_n-algebra A over a framed n-manifold M;
∫_MA.
§ DIGRESSION ON HOMOTOPY COHERENCE
The target category for our factorization (or E_n) algebras
has been chain complexes 𝖢𝗁.
As we will explain here,
to distinguish between E_n-algebras and commutative algebras
for n>1,
we need 𝖢𝗁 to be the ∞-category of chain complexes.
More generally,
there are interesting notions of factorization (or E_n) algebras valued
in any symmetric monoidal ∞-category 𝒞.
Later in these notes, everything will start to become ∞-categories
for things to be true as stated.
If you do not care for category theory,
ignore the rest of this section.
For a reference on ∞-categories see <cit.> for detail
or <cit.> for an introduction.
See also <cit.> for an introduction to ∞-categories
in the factorization homology context.
If you are not familiar with ∞-categories,
you can get through these notes by keeping three ideas in mind:
* Chain complexes up to quasi-isomorphism 𝖢𝗁
is a good example of an ∞-category.
* In an ∞-category, there are spaces of morphisms rather than sets of morphisms.
* To actually say things, like writing a functor between ∞-categories,
extra care must be taken. Rather than writing down what a map does on objects and morphisms, things like universal properties must be used.
Our main example of an ∞-category will be 𝖢𝗁.
Ordinary categories are special types of ∞-categories.
For example, the category of real vector spaces Vector
or the category of chain complexes Chain.
One can also build more interesting ∞-categories from
ordinary categories.
Let D be an ordinary abelian category.
Form the (ordinary) category Chain(D) of chain complexes in D.
Regard Chain(D) as an ∞-category.
The derived ∞-category of D
is obtained from Chain(D) by inverting quasi-isomorphisms in the land
of ∞-categories.
For a reference on derived categories see <cit.>.
If one inverts the quasi-isomorphisms of Chain(D) regarded as an
ordinary category and not in the land of ∞-categories,
one obtains the homotopy category of the derived ∞-category of D.
For example, the derived ∞-category of Vector is
𝖢𝗁.
The notation 𝖵𝖾𝖼𝗍 is sometimes used to denoted the derived ∞-category of vector spaces.
We will stick to the notation 𝖢𝗁 here.
Similarly, the homotopy theory of spaces is obtained
by inverting weak equivalences in the category of topological spaces.
The resulting ∞-category is denoted 𝖲𝗉𝖺𝖼𝖾𝗌 in these notes.
We also get the ∞-category of ordinary categories (denoted 𝖢𝖺𝗍)
by inverting equivalences of categories.
Let 𝒞 be an ordinary symmetric monoidal category, such as Vect.
Then the forgetful functor
𝖠𝗅𝗀_E_∞(𝒞) → 𝖠𝗅𝗀_E_n(𝒞)
is an equivalence for all n>1.
This follows from the Eckmann-Hilton theorem <cit.>
which roughly says that a set with two unital binary operations
which commute is a commutative algebra.
If A is a E_n-algebra for n>1,
then there are multiple compatible binary operations
from the different configurations 𝖢𝗈𝗇𝖿_2(R^n).
§ EXERCISES
Show that a locally constant factorization algebra on R^n
determines a locally constant factorization algebra on R^m
for any m<n.
Write out and try to visualize the multiplication
(E_n∘E_n)(k)E_n(k)
for some small values of n and k
such as n=1,2 and k=1,2,3.
Show that E_1-algebras in vector spaces are associative algebras.
Show that E_∞-algebras in vector spaces are commutative algebras.
Show that an E_1-algebra in 𝖢𝖺𝗍
is a monoidal category.
Show that an E_2-algebra in 𝖢𝖺𝗍
is a braided monoidal category.
Show that an E_∞-algebra in 𝖢𝖺𝗍
is a symmetric monoidal category.
In fact, E_n-algebras in 𝖢𝖺𝗍 for n>2
are symmetric monoidal categories.
Let X be a pointed topological space.
Show that Ω^nX is an E_n-algebra in spaces.
In fact, every (connected, grouplike) E_n-algebra in spaces
looks like Ω^nX for some X.
This is called May's Recognition Principle, <cit.>.
§.§.§ Enveloping Algebras
A good reference for these exercises is <cit.>.
For the following exercise,
recall the Chevalley-Eilenberg complex of a Lie algebra
C_∙(𝔥)=(Sym(𝔥[1]),d)
where d is determined by the bracket on 𝔥;
see, for example, <cit.>.
Let 𝔤 be a Lie algebra over R.
Let 𝔤^R
be the cosheaf valued in chain complexes on R assigning
Ω^*_c(U)⊗𝔤
with differential d_dR
to an open interval U⊂R.
Show that the assignment
U↦ H^∙(C_∙(𝔤^R(U)))
defines a factorization algebra on R.
Call it 𝒰(𝔤).
Show that 𝒰(𝔤) is locally constant.
Show that 𝒰(𝔤) is determined by the
associative algebra U𝔤,
the universal enveloping algebra.
PART:
Day 3
§ FACTORIZATION HOMOLOGY
Our next goal is to define factorization homology in the language
of E_n-algebras.
To do this, we will follow <cit.>.
Another reference is <cit.>.
Let Mfld_n be the ∞-category of n-manifolds.
This has objects n-manifolds and morphisms smooth embeddings.
Let Disk_n⊂Mfld_n be the full ∞-subcategory consisting of manifolds isomorphic to finite disjoint unions of Euclidean spaces.
See <cit.> for more details on these definitions,
including a necessary technical condition on the manifolds:
that they admit finite good covers.
If you are pretending these are just topological categories, the mapping spaces Emb(M,N) are given the compact-open topology.
We consider the empty set to be an object of Mfld_n for every n.
We will need the following variation of disk categories.
* Framed disks: Disk^fr_n
will have the same objects of Disk_n
but with framed embeddings as morphisms.
A framed embedding is an embedding M N
so that the given framing on M and
the pulled back framing commute up to a chosen homotopy.
Note that Disk_n
is a symmetric monoidal ∞-category
with tensor product given by disjoint union.
Given an n-manifold M, let Disk_n/M
denote the over category.
Objects of Disk_n/M
are embeddings U↪ M
so that U is isomorphic to a finite disjoint union
⊔R^n of Euclidean spaces.
Morphisms in this category are triangles
U ↪ V over M (i.e., the embeddings U ↪ M and V ↪ M together with an embedding U ↪ V)
that commute up to a chosen isotopy.
The over category Disk_n/M
comes with a forgetful functor
Disk_n/M → Disk_n.
The following is <cit.>.
An n-disk algebra A with values in
a symmetric monoidal ∞-category V
is a symmetric monoidal functor
A : Disk_n → V
Let Alg_n(V) denote the category of n-disk algebras.
§.§ E_n-algebras
If we redo everything above with framed manifolds, framed embeddings, and such, then a
framed n-disk algebra is the same as an E_n-algebra.
The equivalence goes as follows.
Given a framed n-disk algebra A with values in V,
define an E_n-algebra in V by A(R^n)
and action
Emb(∐_I R^n, R^n) ⊗ A(R^n)^⊗ I → A(R^n)
by identifying
A(R^n)^⊗ I≃ A(∐_IR^n)
and applying the given embedding.
More precisely, there is an equivalence of categories
Alg_Disk_n^fr(V)≅Alg_E_n(V)
For details, see <cit.>.
Let M be an n-manifold and A an n-disk algebra
valued in V.
The factorization homology of M with coefficients in A
is the homotopy colimit
colim(Disk_n/M → Disk_n →^A V).
This is <cit.>.
The symbol ∫ is also used in category theory to denote (co)ends.
Ordinarily, a coend over a category 𝒞 is denoted by ∫_𝒞
and an end is denoted ∫^𝒞.
Although the variable M appears in the bottom of the ∫ symbol, factorization homology can be expressed as a coend
-over the symmetric monoidal envelope of E_n.
See <cit.> for details.
Factorization homology for framed manifolds can also be
described as a bar construction in the language of operads;
see <cit.>.
§.§ Homology Theories for Manifolds
Factorization homology satisfy a version,
more suited to manifolds,
of the Eilenberg-Steenrod axioms for homology theories.
The main axiom of such theories is called “⊗-excision."
The following is <cit.>.
A symmetric monoidal functor
F : Mfld_n → Ch_Q
satisfies ⊗-excision if, for every collar-gluing U ∪_V×R U' ≃ W, the canonical morphism
F(U) ⊗_F(V×R) F(U') → F(W)
is an equivalence.
One reason we have restricted to collar-gluings
is so that this tensor product makes sense.
As an exercise for this section,
you will check that the pieces of (<ref>)
have the correct algebraic structure to form the tensor product.
The ∞-category of homology theories for n-manifolds
valued in Ch_Q is the full ∞-subcategory
H(Mfld_n,Ch_Q)⊂Fun^⊗(Mfld_n,Ch_Q)
of symmetric monoidal functors that satisfy ⊗-excision.
Not only is factorization homology a homology theory for n-manifolds,
it also is the only such thing.
The following is <cit.>.
There is an equivalence
∫:Alg_n(Ch_Q)⇆H(Mfld_n,Ch_Q):ev_R^n
One can replace Ch_Q
with a general symmetric monoidal ∞-category V
as long as V is “⊗-presentable."
For details, see <cit.>.
§.§ Examples
We compute factorization homology ∫_MA
for simple choices of M and A.
Take M=R^n.
Then Disk_n/R^n has a final object
given by the identity map R^n=R^n.
Thus the colimit is given by evaluation on R^n,
∫_R^nA=colim(Disk_n/R^n → Disk_n →^A V)=A(R^n)
Take M=∐_IR^n.
Then Disk_n/M again has a final object
and as above we obtain
∫_∐_IR^nA≃ A(∐_IR^n)≅ A(R^n)^⊗ I
Here we are seeing the fact that
∫_(-)A: Mfld_n → Ch_Q
is a symmetric monoidal functor.
Take M=S^1, as a framed manifold.
Note that an E_1-algebra A is the same as an associative algebra
A̅:=A(R^1).
We will use excision to compute ∫_S^1A.
Express S^1 as a collar-gluing
S^1≅R∪_S^0×RR
By ⊗-excision, we have
∫_S^1A≃(∫_RA)⊗_(∫_S^0×RA)(∫_RA)≃A̅⊗_A̅⊗A̅^opA̅
where we obtained
A̅⊗A̅^op
because the two copies of R^1 in S^0×R^1⊂ S^1
are oriented differently.
This is the Hochschild homology of A.
This appears as <cit.>.
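As a quick sanity check (a standard computation, not carried out in the lecture), take A̅=Q[x], viewed as an E_1-algebra.
By the Hochschild-Kostant-Rosenberg theorem,
∫_S^1Q[x]≃ HH_∙(Q[x])≃Q[x]⊕Q[x]· dx,
a copy of Q[x] in degree 0 and a copy in degree 1.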
We have a functor
Alg_n(V) → V
given by evaluating on R^n.
This is the forgetful functor.
The left adjoint to the forgetful functor is the free functor
F_n: V → Alg_n(V)
Consider the free n-disk algebra on V∈Ch_Q.
This sends a disjoint union ∐_kR^n to
⊕_0≤ iC_*(Emb(∐_iR^n,∐_kR^n))⊗_Σ_iV^⊗ i
We can similarly define a free framed n-disk algebra.
In the framed case,
F_nV(R^n)=⊕_i≥ 0C_*Emb^fr(∐_iR^n,R^n)⊗_Σ_iV^⊗ i
Since Emb^fr(∐_iR^n,R^n)≃Conf_i(R^n),
this agrees with the free E_n-algebra on V.
For M a framed manifold,
and V∈Ch_Q,
we have
∫_MF_nV≃⊕_0≤ iC_*(Conf_iM)⊗_Σ_iV^⊗ i
A similar statement is true in the non-framed case;
see <cit.>.
For U≅∐_IR^n we have
F_nV(U)=(⊕_i≥ 0C_*(Conf_iR^n)⊗_Σ_iV^⊗ i)^⊗ I≅⊕_i≥ 0C_*(Conf_iU)⊗_Σ_iV^⊗ i
Thus
∫_MF_nV =colim_U∈Disk_n/M⊕_i≥ 0(C_*(Conf_iU)⊗_Σ_iV^⊗ i)
=⊕_i≥ 0colim_U∈Disk_n/M(C_*(Conf_iU)⊗_Σ_iV^⊗ i)
Let Disk^fr_n/M
denote the ordinary category of framed n-disks in M.
We will show things for the ordinary category Disk^fr_n/M,
instead of for the ∞-category Disk^fr_n/M.
It turns out that this is sufficient
by <cit.>:
The functor Disk^fr_n/M → Disk^fr_n/M from the ordinary category to the ∞-category is a localization. Hence factorization homology can be computed as a colimit over Disk^fr_n/M.
For a more direct proof in the ∞-category case,
see <cit.>.
To compute this colimit,
we use a hypercover argument.
This is theorem A.3.1 in <cit.>;
see also <cit.> or <cit.>.
Let X be a topological space.
Let Opens(X) denote the poset of open subsets of X.
Let C be a small category and let
F: C → Opens(X) be a functor.
For every x∈ X, let C_x
denote the full subcategory of C
spanned by those objects C∈C
such that x∈ F(C).
If for every x∈ X, the simplicial set N(C_x)
is weakly contractible,
then the canonical map
colim_C∈CSing(F(C)) → Sing(X)
exhibits the simplicial set
Sing(X) as a homotopy colimit of the diagram
{Sing(F(C))}_C∈C.
To use the Seifert-van Kampen theorem, consider the commutative diagram in which the composite Disk^fr_n/M → Disk^fr_n →^Conf_i(-) Ch_Q also factors through Opens(M) and through Opens(Conf_iM).
Let x̅=(x_1,…,x_i)∈Conf_iM.
The category (Disk^fr_n/M)_x̅
contains embedded disks U↪ M
so that {x_1,…,x_i} is in U.
By the Seifert-van Kampen theorem, if
B(Disk^fr_n/M)_x̅≃ *,
then
colim_U∈Disk_n/MConf_iU≃Conf_iM
To show that this category is contractible, we will show it is cofiltered.
A nonempty ordinary category C is cofiltered if
1) for every pair U,V∈C there exists W∈C
and maps W → U and W → V, and
2) given two maps u,v:X → Y in C,
there exists Z∈C and a map w:Z → X so that uw=vw.
Let U,V∈(Disk^fr_n/M)_x̅.
We need to find a finite disjoint union of Euclidean spaces
W ↪ M containing (x_1,…,x_i)
and maps W → U and W → V.
Note that U∩ V contains x̅,
but may not be a disjoint union of euclidean spaces.
However, we can find a small disk around
each x_i and still in U∩ V.
The second condition is satisfied since
Disk^fr_n/M is a poset.
Thus (Disk^fr_n/M)_x̅ is cofiltered,
and hence contractible.
Applying the Seifert-van Kampen theorem,
(and adding in a few details about V) we get
∫_MF_nV≃⊕_i≥ 0C_*(Conf_iM)⊗_Σ_iV^⊗ i.
For more details, see <cit.>.
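As a small consistency check (assuming n=1 and V=Q placed in degree 0, so that F_1Q is the free associative algebra Q[x]), the theorem gives
∫_S^1F_1Q≃⊕_i≥ 0C_*(Conf_iS^1)⊗_Σ_iQ,
and since the unordered configuration space Conf_iS^1/Σ_i is homotopy equivalent to S^1 for every i≥ 1, the homology is Q⊕⊕_i≥ 1(Q⊕Q[1]).
Weight by weight, this matches the Hochschild homology Q[x]⊕Q[x]· dx of Q[x] from the earlier example.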
§ EXERCISES
§.§.§ Warm Up
What is the factorization homology
∫_[0,1]A
of an associative algebra A?
What can you say about ∫_S^2A for A a 2-disk algebra?
Make sense of the tensor product in excision.
Which of the pieces are modules, which are algebras, and so on?
§.§.§ Poincaré Duality for Factorization Homology
Yesterday, you saw that Ω^nX was an E_n-algebra.
Show that C_∙(Ω^n X) is an E_n-algebra.
Write C_∙(Ω^nX) and Ω^nX as n-disk algebras
(valued in chain complexes and spaces, respectively).
Write them as factorization algebras as well.
Note that
Ω^n X=𝖬𝖺𝗉_c(R^n, X).
Let X be an n-connective space.
Convince yourself that compactly supported maps
𝖬𝖺𝗉_c(-,X)
satisfies ⊗-excision.
That is,
given a collar glueing
M= U⋃_V×R U',
there is an equivalence
𝖬𝖺𝗉_c(U,X)×_𝖬𝖺𝗉_c(V×R,X)𝖬𝖺𝗉_c(U',X)≃𝖬𝖺𝗉_c(M,X).
You can do this by just skimming the proof given in <cit.>,
if you want;
or by trying it out in a few easier cases.
The following is a theorem of Salvatore <cit.>, Segal <cit.>, and Lurie <cit.>,
in various contexts.
We are following the proof of Ayala-Francis <cit.>.
[Nonabelian Poincaré Duality]
Let X be an n-connective space.
Show that
∫_MΩ^nX≃𝖬𝖺𝗉_c(M,X).
Say M is compact.
If X is an Eilenberg-MacLane space,
the right-hand side looks like cohomology.
This motivates the relationship between nonabelian Poincaré duality
and usual Poincaré duality.
§.§.§ Enveloping Algebras
Let 𝔤 be a Lie algebra.
Recall the Chevalley-Eilenberg complex
C_∙^Lie(𝔤)
from yesterday.
Define a Lie algebra structure on
𝖬𝖺𝗉_c(R^n,𝔤).
Show that
C_∙^Lie(𝖬𝖺𝗉_c(R^n,𝔤))
forms an n-disk algebra.
Call it U_n𝔤.
Show that U_1𝔤 is the enveloping algebra U𝔤.
Check this with your understanding of U𝔤
as an E_1-algebra from yesterday.
Thus U_n𝔤 gives us a version of the enveloping algebra
in higher dimensions, a “higher enveloping algebra".
This is a key example in field theory.
Many field theories have observables that look similar to
a higher enveloping algebra construction.
For example, any free theory has this property.
Compute
∫_M U_n𝔤.
§.§.§ Spare Questions
Show that ∫_MA has a canonical action of 𝖣𝗂𝖿𝖿(M).
Let H⊂GL(n) be a sub-Lie-group.
You can think of SO(n) if you want.
Define a notion of an H-oriented TFT in the functorial setting.
* Can you define a notion of an H-oriented E_n-algebra?
* How about an H-oriented factorization algebra?
PART:
Day 4
§ FUNCTORIAL THINGS
As we have seen in the past two days,
factorization algebras, E_n-algebras, and n-disk algebras
are all related by the fundamental transformation
of thinking of embedded disks, or points at their centers.
By similar reasoning, observables are sometimes called point observables.
To record what we learned,
we have a Costello-Gwilliam approach
and a special case when the field theory is topological.
The observables are a factorization algebra in general,
and in the topological case we
get an E_n-algebra in 𝖢𝗁.
point observables — Costello-Gwilliam: a factorization algebra on spacetime; topological case: an E_n-algebra in 𝖢𝗁.
Indeed,
say our field theory is topological
and lives on R^n.
The factorization algebra of observables only depends on the data of
ℱ(D^n)∈𝖢𝗁
and the multiplications coming from inclusions of disks into bigger disks.
If we draw this,
we see that ℱ(D^n) is the value at a disk centered at some point
and then inclusion of disks can be pulled out to be a bordism between
spheres.
This sphere is the linking sphere of the disk.
The result, which I will denote
ℱ(D^n)=𝒵(S^n-1)
for its dependence on the linking sphere
is the E_n-algebra in 𝖢𝗁.
§.§ Definition of Line Operators
Observables gave us some cool algebras to think about,
but there is some structure of the field theory that it misses.
Consider G-gauge theory on M.
Given a loop C in M,
we can define a map
𝖡𝗎𝗇^∇_GM → R
sending a principal G-bundle P → M
to the trace of the holonomy map
Hol_C(P): 𝔤 → 𝔤.
This is called a Wilson loop operator.
So given a loop in spacetime,
we get a function on fields,
which is something the observables could know about,
but the observables do not see any of the dependence on loops in M.
In general,
constructions of this type, giving observables
that depend on loops or lines in spacetime,
are called line operators.
I know this is super vague.
It is my understanding that line operators
are still partially in the physics art stage rather than
being fully mathematically understood.
Let's think about what type of structure
the set of line operators would have.
One piece of structure comes from stacking lines.
This gives us a way of composing line operators.
We should get a category of line operators with
* objects: line operators (a pair of a line in spacetime and the operator), and
* morphisms: 𝖧𝗈𝗆(L,L') is the set of point observables
that can be inserted between L and L' to form a new line operator.
Is there any algebraic structure on the category of line operators?
For observables,
the E_n-algebra structure came from looking at the linking sphere
of the point we were at.
Let's replicate that with lines.
In R^n, the linking sphere of a line is S^n-2.
Now instead of including disks into bigger disks,
we have lines colliding
(analogous to points colliding).
The result is a bordism between copies of S^n-2.
This is an E_n-1-algebra structure.
For a topological theory on R^ n,
line operators form an E_n-1-monoidal category
𝒵(S^n-2).
point observables — Costello-Gwilliam: a factorization algebra on spacetime; topological case: 𝒵(S^n-1), an E_n-algebra in 𝖢𝗁.
line operators — Costello-Gwilliam: ?; topological case: 𝒵(S^n-2), an E_n-1-algebra in 𝖢𝖺𝗍.
§.§ Line Operator Constructions
How do we say anything about line operators on the non-topological side?
Let F be a field theory on M.
There is an ansatz from physics that
a line operator for C⊂ M
is the same as the data of boundary theory for
F restricted to M∖𝖭𝗈𝗋𝗆(C).
That is,
the field theory works the same away from C
and has a “defect" at C;
see <cit.> for a discussion in this vein.
Thinking in terms of observables,
in the topological R^n case,
we would like an E_n-algebra
away from C.
What is the data of observables of a boundary theory for
F on M∖𝖭𝗈𝗋𝗆(C)?
The following is part of <cit.>.
Let A be an E_n-algebra.
The data needed to produce a new E_n-algebra
that agrees with A on R^n∖R^k is
an object in
𝖫𝖬𝗈𝖽(∫_S^n-k-1×R^k+1A).
To state this theorem,
we are using the E_1-algebra structure on
∫_S^n-k-1×R^k+1A.
This comes from the R^k+1 direction by stacking.
This is related to the exercise from yesterday
on making sense of excision.
For line operators,
we are looking at modules over
∫_S^n-2A.
This S^n-2 is the linking sphere of the line C that we encountered before.
Thus, we have an approximation to the category of line operators
𝖫𝗂𝗇𝖾𝖮𝗉≈𝖬𝗈𝖽(∫_S^n-2A).
This is not a good approximation for non-perturbative field theories.
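For instance (a heuristic special case, not spelled out in the lecture), take n=2, so that a line has linking sphere S^0.
Since ∫_S^0×R^2A≃ A⊗ A^op, the ansatz predicts that line operators in a 2-dimensional topological theory are approximated by A-bimodules.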
§ ATIYAH-SEGAL APPROACH TO TFTS
Returning to the topological side,
we start to see a pattern.
Continuing down,
we could ask for the data of
an E_n-algebra 𝒵(S^n-1),
an E_n-1-monoidal category 𝒵(S^n-2),
and so on.
We will use a functorial TQFT perspective to
package this data together into a single functor.
For X an oriented manifold, let X̅ denote X with the opposite orientation.
The following is <cit.>.
Let n be a positive integer. We define a category 𝐂𝐨𝐛(n) as follows:
* objects: (n-1)-dimensional oriented manifolds
* morphisms from M to N are given by equivalence classes of n-dimensional oriented manifolds with boundary B together with an orientation-preserving diffeomorphism
B≃M̅⊔ N
Two morphisms B and B' are equivalent if there is an orientation-preserving diffeomorphism B → B' that restricts to the identity on the boundaries, i.e. commutes with the identifications of B and B' with M̅⊔ N.
Composition is given by gluing cobordisms along their shared boundary.
View 𝐂𝐨𝐛(n) as a symmetric monoidal category under disjoint union.
For k a field, let 𝐕𝐞𝐜𝐭(k) denote the symmetric monoidal category of k-vector spaces with tensor product.
The following comes from <cit.>.
Let k be a field. A topological field theory (TFT) of dimension n is a symmetric monoidal functor Z𝐂𝐨𝐛(n)𝐕𝐞𝐜𝐭(k).
In particular, Z(∅)=k.
The value Z(S^n-1) is an E_n-algebra.
The pair of pants bordism gives the
E_n(2)⊗ Z(S^n-1)^⊗ 2 Z(S^n-1)
map.
More legs, means more points.
Note that Atiyah's definition considers all possible spacetimes at once, instead of working on one specific n-manifold at a time. We could instead consider the category of bordism submanifolds within a fixed manifold.
For V a k-vector space, let V^∨ denote the linear dual, V^∨=Hom(V,k).
Let Z be an n-dimensional TFT. Given an oriented (n-1)-manifold M, the product manifold M×[0,1] can be viewed as a morphism in 𝐂𝐨𝐛(n) in multiple ways.
* As a morphism M → M, the product M×[0,1] maps to the identity map
id: Z(M) → Z(M).
* As a morphism M⊔M̅ → ∅, the product M×[0,1] determines an evaluation map
ev: Z(M)⊗ Z(M̅) → k.
* As a morphism ∅ → M̅⊔ M, the product M×[0,1] determines a coevaluation map
coev: k → Z(M̅)⊗ Z(M).
Recall that a pairing V⊗ W → k is perfect if it induces an isomorphism V → W^∨.
The following is <cit.>.
Let Z be a topological field theory of dimension n. For every (n-1)-manifold M, the vector space Z(M) is finite dimensional. The evaluation map Z(M)⊗ Z(M̅) → k, induced from the cobordism M×[0,1], is a perfect pairing.
§ CLASSIFYING TOPOLOGICAL FIELD THEORIES
§.§ Low Dimensions
[Dimension 1]
Let Z: 𝐂𝐨𝐛(1) → 𝐕𝐞𝐜𝐭(k) be a 1-dimensional TFT. Let P denote a single point with positive orientation and Q=P̅. Let Z(P)=V. This finite-dimensional vector space determines Z on objects. By Proposition <ref>, Z(Q)=Z(P̅)=V^∨. A general object of 𝐂𝐨𝐛(1) looks like
M=∐_S_+P⊔∐_S_-Q
for S_+,S_- sets. Since Z is symmetric monoidal, we have
Z(M)=(⊗_S_+V)⊗(⊗_S_-V^∨)
What about morphisms? A morphism in 𝐂𝐨𝐛(1) is a 1-dimensional manifold with boundary B. Using the monoidal structure, it suffices to describe Z(B) where B is connected. There are five possibilities
* B is an interval viewed as a morphism P → P. Then Z(B)=Id_V.
* B is an interval viewed as a morphism Q → Q. Then Z(B)=Id_V^∨.
* B is an interval viewed as a morphism P⊔ Q → ∅. Then Z(B): V⊗ V^∨ → k. By Proposition <ref>, this is the canonical pairing of V and V^∨,
Z(B)(x⊗ f)=f(x)
Under the isomorphism V⊗ V^∨≅End(V), the morphism Z(B) corresponds to taking the trace.
* B is an interval viewed as a morphism ∅ → P⊔ Q. Then Z(B) is the map
k → V⊗ V^∨≅End(V)
sending λ∈ k to λId_V.
* B=S^1 is a circle viewed as a morphism ∅ → ∅. Then Z(S^1) is a linear map k → k; i.e., multiplication by some γ∈ k. To determine γ, view S^1 as the union of two semi-circles along P⊔ Q. This determines a decomposition of S^1 into the composite of two cobordisms,
∅ → P⊔ Q → ∅
By the above cases, Z maps this to the composite
k → End(V) → k
of the map λ↦λId_V and the trace map. Thus Z(S^1) is the scaling by Tr(Id_V)=dim(V) map.
In 1-dimension, the observables are given by Z(S^0)=End(V). Notice that this has the structure of an associative algebra.
Thus in dimension 1, we see that the vector space Z(P) determines the TFT Z. Does every V∈𝐕𝐞𝐜𝐭(k) appear as Z(P) for some 1-dimensional TFT? Nope, only the finite-dimensional ones. We get an equivalence of categories
Fun^⊗(𝐂𝐨𝐛(1),𝐕𝐞𝐜𝐭(k)) ≃ 𝐕𝐞𝐜𝐭^fin(k)
by evaluating on the point.
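For example (a made-up instance just to make the bookkeeping concrete), take V=k^2.
Then Z sends P to k^2, Q to (k^2)^∨, the object P⊔ P⊔ Q to V⊗ V⊗ V^∨ (of dimension 8), and each circle to multiplication by dim(V)=2, so a disjoint union of m circles goes to multiplication by 2^m.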
Let's try to do something similar in dimension 2.
[Dimension 2]
Following <cit.> we have the following.
Let Z be a 2-dimensional TFT. The only objects in 𝐂𝐨𝐛(2) are the empty set and disjoint unions of copies of S^1. We do not get a new object S̅^1 since the circle has an orientation-reversing diffeomorphism. The observables A=Z(S^1) determine Z on objects.
What about morphisms? A morphism in 𝐂𝐨𝐛(2) is 2-dimensional oriented manifold with boundary.
* The pair of pants cobordism determines a map m: A⊗ A → A. One can check that m defines a commutative, associative multiplication on A.
* The disk D^2 viewed as a cobordism S^1 → ∅ determines a linear map Tr: A → k.
* The disk D^2 viewed as a cobordism ∅ → S^1 determines a linear map k → A. The image of 1∈ k under this map acts as a unit for the multiplication. Indeed, we can glue D^2 to one of the legs of the pants. The resulting manifold is diffeomorphic to S^1×[0,1]. But S^1×[0,1] maps to Id_A under Z.
Note that the composite of
A⊗ A →^m A →^Tr k
comes from the cobordism S^1×[0,1] viewed as a map S^1⊔ S^1 → ∅. By Proposition <ref>, the map Tr∘ m is a nondegenerate pairing.
A commutative Frobenius algebra over k is a finite-dimensional commutative k-algebra A, together with a linear map Tr: A → k such that the bilinear form (a,b)↦Tr(ab) is nondegenerate.
The category of 2-dimensional oriented TFTs is equivalent to the category of Frobenius algebras.
A detailed proof can be found in <cit.> and a good expository account can be found in <cit.>.
See <cit.> for details in the fully extended case.
In 2 dimensions, the observables Z(S^1) of a TFT Z have the structure of a Frobenius algebra.
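As a concrete example (a standard one, not discussed above), take A=k[x]/(x^2) with Tr(a+bx)=b.
The pairing Tr((a+bx)(c+dx))=ad+bc is nondegenerate, so A is a commutative Frobenius algebra and hence determines a 2-dimensional TFT.
For any 2-dimensional TFT, gluing the two ends of the cylinder S^1×[0,1] produces the torus, so Z(T^2) equals the linear-algebra trace of the identity on Z(S^1), that is dim_kZ(S^1); in this example Z(T^2)=2.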
§ EXERCISES
Let V be a finite dimensional vector space.
Let Z_V be the 1-dimensional TQFT determined by V.
What is the corresponding associative algebra of observables,
in terms of V?
Let Z be an n-dimensional field theory.
Show that Z(S^n-1) acts on Z(N) for every (n-1)-manifold N.
For G-gauge theory on M,
we defined a Wilson loop operator.
Given a representation ρ of G,
define a version of a Wilson loop operator on 𝖦𝖺𝗎𝗀𝖾_GM.
This should be an assignment of a real number to a principal G-bundle
P M with connection.
As a first step, you should build the associated vector bundle to P using ρ.
Let A be an E_n-algebra.
For example, A=𝖮𝖻𝗌^q the observables of a TQFT on R^n.
The line operators of this field theory should
be a E_n-1-monoidal category.
As guesses,
build two different categories
out of A.
Do these categories have any monoidal structure?
§.§.§ Classification of 3d TQFTs
We saw that,
up to reversing orientation and taking disjoint unions,
𝖢𝗈𝖻(1) had a unique object P, the point, and
𝖢𝗈𝖻(2) had a unique object S^1.
Examples of objects in 𝖢𝗈𝖻(3) include the torus T^2 and the sphere S^2.
Using the classification of surfaces,
describe the set of objects of 𝖢𝗈𝖻(3).
Recall the operation
of connected sum for oriented closed 2-manifolds.
Show that under connect sum, one can construct every object of 𝖢𝗈𝖻(3)
from T^2 and S^2.
Note that objects of 𝖢𝗈𝖻(3)
determine morphisms in 𝖢𝗈𝖻(2);
we can view a closed 2-manifold as a
cobordism from the empty set to itself.
As morphisms in 𝖢𝗈𝖻(2),
what does the operation you thought of in
Question <ref> correspond to categorically?
Breaking complicated 2-dimensional manifolds
down into easy pieces (like S^2 and T^2)
is very helpful.
If we want a classification of 3-dimensional TQFTs,
we would like to be able to do this in 𝖢𝗈𝖻(3).
Can you think of a way to encode the operation
from Question <ref> into a categorical setting
involving 3-dimensional bordisms?
(Any guesses or ideas are fine, this one is to just get you thinking!)
PART:
Day 5
Yesterday we talked about how for TQFTs
the factorization algebra of observables
(which is E_n)
can be situated in the Atiyah-Segal definition of a TQFT.
We ended by classifying TQFTs in dimensions 1 and 2.
§ COBORDISM HYPOTHESIS - HIGHER DIMENSIONS
The problem when we try to classify
TFTs in higher dimensions is that the
objects become too complicated.
Up to reversing orientation and taking disjoint unions,
the categories 𝐂𝐨𝐛(1) and 𝐂𝐨𝐛(2)
have a unique object, P and S^1, respectively.
For n=3, there are infinitely many oriented 2-manifolds,
one for each genus g.
We do not think of genus g surfaces as being that complicated.
In fact, we usually think of Σ_g, the genus g surface,
as coming from g connect sums of the torus.
A closely related way to say this,
is that Σ_g has a relatively easy handle-body decomposition.
But what happens when we view Σ_g under its handle-body decomposition?
We are really viewing it as a composition of cobordisms;
i.e., as a morphisms in 𝐂𝐨𝐛(2).
Similarly, when we tried to understand the value of a 2-dimensional field theory on S^1,
we broke S^1 into the union of two semi-circles,
that is to say, into its handle-body decomposition.
If we want to understand an
n-dimensional field theory by breaking manifolds down,
using their handle-body decompositions,
into lower-dimensional manifolds,
we need the TFT to know about manifolds of dimension <n-1.
In particular, we would like some sort of data assigned to
every (n-2)-dimensional manifold and
we would like this data to have something to do with the values on
(n-1)-manifolds.
In particular, from our discussion yesterday,
we expect S^n-2 to be assigned a category.
The way to encode all this data is the language of higher categories.
A strict n-category is a category C enriched over (n-1)-categories.
For n=2, this means that for objects A,B∈C the morphisms Hom_C(A,B) is itself a category.
The strict 2-category 𝐕𝐞𝐜𝐭_2(k) has objects cocomplete k-linear categories and morphisms
Hom_𝐕𝐞𝐜𝐭_2(k)(C,D)=𝐅𝐮𝐧_k^cocon(C,D)
the functor category of cocontinuous, k-linear functors.
The strict 2-category 𝐂𝐨𝐛_2(n) has
* objects: closed, oriented manifolds of dimension n-2.
* morphisms: Hom_𝐂𝐨𝐛_2(n)(X,Y)=:C should be the category with
* objects: cobordisms X → Y
* morphisms: Hom_C(B,B') is equivalence classes of bordisms from B to B'
The big problem here is making the composition law strictly associative. One would like to define composition by gluing bordisms, but get messed up in defining a smooth structure on the result, and things that used to be equalities are now just homeomorphisms. The solution will be to get rid of the “strictness" and move to (∞,2)-categories.
Let 𝖢𝗈𝖻_n(n) denote an (∞,n)-category version of the cobordism category,
see Calaque-Scheimbauer <cit.>.
Let C be a symmetric monoidal (∞,n)-category. An extended C-valued n-dimensional TFT is a symmetric monoidal functor
Z: Cob_n(n) → C
Take 𝒞 so that
Ω^n𝒞=C,
Ω^n-1𝒞=𝖢𝗁_C,
and
Ω^n-2𝒞=𝖫𝗂𝗇𝖢𝖺𝗍_C.
Then Z(S^n-2) is an (∞,1)-category.
It has an E_n-1-monoidal structure from the pair of pants bordism.
This is line operators.
Similarly, Z(S^n-k) is an E_n-k-1-monoidal (∞,n-k-1)-category.
It describes (k+1)-dimensional defects (i.e. operators).
The purpose of this definition is to allow us to reduce n-dimensional TFTs down to information about 1-dimensional TFTs. As we saw before, a 1-dimensional TFT is determined by its value on a point. Thus we might make the following guess.
An extended field theory is determined by its value on a single point. Moreover, evaluation on a point determines an equivalence of categories between TFTs valued in C and C.
There are two problems with this guess.
* Even in 1-dimension, not every vector space determined a TFT.
We needed to restrict to finite-dimensional ones.
The analogue in higher dimensions will be something called “fully dualizable objects."
* Orientation in dimension 1 is the same as a framing.
This is not true in higher dimensions.
We actually wanted a framing,
not just an orientation so that we could say that locally M^k was canonically diffeomorphic to R^k
(via the exponential map).
Thus we need a version of Cob_n(n) that works with framed manifolds instead of oriented ones.
The following conjecture is due to Baez and Dolan <cit.>.
Let C be a symmetric monoidal (∞,n)-category with duals. Then the evaluation functor Z↦ Z(*) induces an equivalence
Fun^⊗(𝐁𝐨𝐫𝐝_n^fr,C) → C^∼
between framed extended n-dimensional TFTs valued in C and the fully dualizable subcategory of C.
Partial and complete proofs are due to Hopkins-Lurie, Lurie <cit.>, Grady-Pavlov <cit.>.
Many others have worked on this as well.
See <cit.> for an expository account of the cobordism hypothesis
and quantum field theory.
This leads to the natural question,
given Z(pt),
how does one obtain Z(N)?
It is expected that there is a version of factorization homology
for (∞,n)-categories so that
∫_N Z(pt)=Z(N)
for all manifolds N of dimension 0,…,n.
This is discussed in work of Ayala-Francis;
see <cit.>,
where it is shown how to prove the cobordism hypothesis assuming the
existence of an upgraded version of factorization homology
known as “β-factorization homology."
Take 𝒞 to be a suitable choice of an (∞,n)-category
of algebras up to Morita equivalences
𝒞=𝖬𝗈𝗋𝗂𝗍𝖺_n.
See Scheimbauer's thesis <cit.> for a factorization homology
reconstruction of a fully extended TQFT with Morita target.
The fully dualizable objects of 𝖬𝗈𝗋𝗂𝗍𝖺_n-1
are certain types of E_n-1-algebras.
Thus, you can describe an n-dimensional TQFT
by just giving an E_n-1-algebra (satisfying certain conditions)
that the field theory assigns to a point.
Usually, we are thinking of an n-dimensional TQFT
as corresponding to its E_n-algebra of observables,
so what is up with the E_n-1-algebra?
How do we get from Z(pt) to Z(S^n-1)?
§.§ Drinfeld Centers
To answer this question,
we are going to use a version of
a notion Delaney talked about yesterday <cit.>.
You saw yesterday that Drinfeld centers created braided monoidal structures from just monoidal structures.
You can think of that as saying that the center
of an E_1-category is E_2.
More generally,
we have the higher version of the Deligne conjecture due to
Kontsevich, <cit.>.
The E_n-Drinfeld center of an E_n-category is an E_n+1-category.
This was proven in full generality in <cit.>.
Specifically, see <cit.>.
One can show that the E_n-Drinfeld center of
Z(pt) is Z(S^n-1),
answering our previous question.
As a good geometric example, we have the following theorem.
Let X be a perfect stack.
The center of quasi-coherent sheaves on X is
sheaves on the free loop space,
𝖢𝖾𝗇𝗍_E_n(QC(X))≃ QC(ℒ^nX).
See <cit.>.
§ HOLOMORPHIC FIELD THEORIES
Now I want to switch gears and talk about
non-topological field theories.
These will provide our first example of factorization algebras
that are not E_n-algebras.
Recall that a function
f: C → C
is holomorphic
if it is complex differentiable at every point.
Equivalently,
if we write
f(x+iy)=u(x,y)+iv(x,y),
and assume f is continuous,
then f is holomorphic if and only if f
satisfies the Cauchy-Riemann equations
∂ u/∂ x=∂ v/∂ y
and
∂ u/∂ y=-∂ v/∂ x.
This is the Looman-Menchoff theorem.
Basically these are incredibly nice complex functions.
In order to make sense of holomorphic conditions,
our holomorphic field theories will have spacetime C^n.
Topological field theories were particularly nice
because of their invariance property on observables,
𝖮𝖻𝗌^q(D_1)∼𝖮𝖻𝗌^q(D_2).
To be able to get somewhere with holomorphic theories,
we will additionally assume an invariance property:
translation invariance.
Let V be a holomorphically translation-invariant vector bundle on C^n.
This means that we are given a holomorphic isomorphism between V
and a trivial bundle.
Our space of fields will be
Ω^0,*(C^n,V).
Note that by Dolbeault's theorem,
this complex has cohomology
H^ 0,*(C^n,V)=H^*(C^n,Ω^0⊗ V)
where Ω^0 is the sheaf of holomorphic functions on C^n.
Thus, our space of fields is a derived model for the mapping space
of holomorphic functions
𝖬𝖺𝗉_hol(C^n,V).
Let
η_i=∂/∂z̅_i∨ (-): Ω^0,k(C^n,V) → Ω^0,k-1(C^n,V)
be the contraction operator.
A field theory on Ω^0,*(C^n,V)
is holomorphically translation invariant if
the action functional
S: Ω^0,*(C^n,V) → C
is translation-invariant and satisfies
η_i S=0
for all i=1,…,n.
As with topological theories,
we are interested in how the
holomorphically translation-invariant
condition on a field theory
impacts the factorization algebra of observables.
To make the following definition precise
requires more background and time than we have.
Here is the idea;
details can be found in <cit.> and <cit.>.
A factorization algebra ℱ on C^n
is holomorphically translation invariant
if we have isomorphisms
τ_x: ℱ(U)≃ℱ(τ_x U)
for all x∈C^n and open U⊂C^n.
These isomorphisms are required to vary
holomorphically in x and satisfy
τ_x∘τ_y=τ_x+y
and commute with the factorization algebra maps.
Note that 𝖮𝖻𝗌^q will be a
factorization algebra on C^n.
The following is <cit.>.
The observables 𝖮𝖻𝗌^q
of a holomorphically translation-invariant
field theory
is a holomorphically translation-invariant factorization algebra.
Say we are working over C.
Assume that our field theory additionally
is S^1-invariant.
That is,
that there is an S^1 action on V
which,
together with the S^1 action on Ω^0,*(C),
gives an action on the space of fields,
and all the structures of the field theory are invariant under this.
In this situation, we will get a nice algebraic
description of the observables,
like we did for locally constant factorization algebras
as E_n-algebras.
The following is <cit.>.
A holomorphically translation-invariant and S^1-invariant
factorization algebra on C
determines what is called a vertex algebra.
A vertex algebra is the following data:
* a vector space V over C (the state space);
* a nonzero vector |Ω⟩∈ V (the vacuum vector);
* a linear map T V V (the shift operator);
* a linear map
Y(-,z): V → 𝖤𝗇𝖽(V)[[z,z^-1]]
such that
* (vacuum axiom) Y(|Ω⟩ ,z)=id_V and
Y(v,z)|Ω⟩∈ v+zV[[z]]
for all v∈ V;
* (translation axiom) [T,Y(v,z)]=∂_zY(v,z) for every v∈ V
and T|Ω⟩=0;
* (locality axiom) for any pair of vectors v,v'∈ V,
there exists a nonnegative integer N such that
(z-w)^N[Y(v,z),Y(v',w)]=0
as an element of 𝖤𝗇𝖽V[[z^± 1,w^± 1]].
See <cit.> for a good reference on vertex algebras.
Vertex algebras were around before factorization algebras.
They have the benefit of being super computational.
If you are really good at power series manipulations,
or more familiar with representation theory methods,
vertex algebras might be better suited for you.
Factorization algebras are more geometric
and closer to the topologists E_n-algebras.
As motivation for the proof,
recall how we got an E_n-algebra A
from a locally constant factorization algebra ℱ_A on R^n.
The underlying space of A is given by
ℱ_A(U)
where U is any disk in R^n.
For convenience,
we can take U=B_1(0),
the unit ball around the origin.
The structure maps are from the inclusions
of disjoint disks in to B_1(0),
ℱ_A(B_r(0))⊗ℱ_A(B_r'(0)) → ℱ_A(B_1(0)).
We will get a vertex algebra from
a holomorphically translation-invariant
factorization algebra by a similar process,
with one important difference.
Vertex algebras act more like Lie algebras.
They have structures like Lie brackets,
rather than multiplications like groups do.
If evaluating on B_1(0)
gave us a group-like structure,
then to get a Lie algebra structure,
we should somehow take the tangent space at 0.
This is in analogy with how given
a Lie group G,
one obtains the Lie algebra
as T_eG.
Let ℱ be the
holomorphically translation-invariant and S^1-invariant
factorization algebra on C.
Let
ℱ_k(B_r(0))
be the weight k eigenspace of the S^1 action.
To zoom in on 0,
we take the limit
V_k=lim_r→ 0 H^∙(ℱ_k(B_r(0))).
(CG assume the maps in this limit are quasi-isomorphisms.)
(They also assume that V_k=0 for k≫ 0.)
The underlying vector space of the
vertex algebra will be
V=⊕_k∈ZV_k.
The vacuum element is given by the image of
the unit in ℱ(∅).
The translation map is given by
the derivation ∂/∂ z
from infinitesimal translation in the z direction.
The state-field map
Y: V → 𝖤𝗇𝖽(V)[[z,z^-1]]
comes from the inclusion of two disjoint disks into a bigger disk,
and expanding the holomorphic map into a Laurent series.
One can attach a factorization algebra to a certain type of
vertex algebra;
see <cit.>.
Factorization homology in the language of vertex algebras
is related to “conformal blocks."
§.§ Examples
A commutative ring V with derivation T
determines a vertex algebra
with state space V,
translation operator T,
and state-field correspondence
Y(u,z)v=uv.
An important example of a vertex algebra
is the Virasoro vertex algebra.
For a detailed description of the associated factorization algebra
and a great example of Theorem <ref>,
see <cit.>.
Given a manifold X,
differential operators 𝖣𝗂𝖿𝖿_X is an associative algebra,
so a type of factorization algebra on R.
A 2-dimensional analogue would
be something like a factorization algebra on R^2;
for example, a vertex algebra.
Such a vertex algebra was constructed by
Malikov-Schechtman-Vaintrob <cit.>
and is called chiral differential operators.
The factorization algebra analogue was built
by Gorbounov-Gwilliam-Williams <cit.>.
This is an example of a factorization algebra
built locally on the target from a Lie algebra in an enveloping algebra-esque construction.
§.§ Stolz-Teichner Program
The field theory 𝒞𝒟𝒪_X whose observables
is chiral differential operators on X
is holomorphic Chern-Simons theory
(a.k.a. a curved βγ system) <cit.>.
It is a 2-dimensional field theory.
An invariant that we have not talked about for field theories
is called the partition function.
The partition function of 𝒞𝒟𝒪_X
recovers an invariant of X known as the Witten genus.
This appears in <cit.>.
The Witten genus has close ties to the geometry of elliptic curves.
In fact, there is a cohomology theory built from elliptic curves,
called tmf <cit.>,
that encodes the Witten genus as an orientation.
An important aspect of tmf is that it has
chromatic height 2.
That is,
it is part of a hierarchy of more and more complex cohomology theories
of increasing chromatic height.
Similarly, the associative algebra analogue of chiral differential operators
is differential operators.
Differential operators on X are related to the observables
of a 1-dimensional theory 𝒟_X
called 1-dimensional Chern-Simons theory.
The partition function of 𝒟_X recovers the Â-genus of X,
<cit.>.
The Â-genus is encoded in ordinary cohomology.
Ordinary cohomology has chromatic height 1.
dimension 1: 1d Chern-Simons theory; partition function related to the Â genus; ordinary cohomology H^∙(-); chromatic height 1.
dimension 2: holomorphic Chern-Simons theory; partition function related to the Witten genus; elliptic cohomology tmf; chromatic height 2.
In summary, we have examples of a relationship between
the dimension of a field theory
and the chromatic height of the cohomology theory in which the partition function is encoded.
A conjecture of Stolz and Teichner proposes a deeper relationship
between chromatic height and dimension of field theories;
see <cit.>.
For recent progress on this program see <cit.> and other works of Berwick-Evans.
§ DUALITY
We understand field theories by studying their
observables.
This translation is great for a few things:
* a precise definition of topological field theory
* take advantage of the structure of factorization homology
* quantization algebraically becomes deformation
I want to end by talking about another benefit of this viewpoint.
Basically in all fields of math I think about,
notions of duality are super exciting.
For example, Poincaré duality in manifold theory.
In your problem sessions,
you saw a version of this called non-abelian Poincaré duality.
Are there notions of dualities in field theory and factorization algebras?
We are going to start on the algebra side.
The cool type of duality for algebras is called Koszul duality.
Let A be an associative algebra.
The Koszul dual of A is the linear dual
D(A)=(1⊗_A1)^∨.
Here 1 denotes the trivial (left or right) A-module.
We can recover this construction using factorization homology.
Indeed,
there is an equivalence
∫_D^1A=1⊗_A1
for any E_1-algebra A.
This motivates a definition of Koszul duality for E_n-algebras.
Let A be an E_n-algebra.
The Koszul dual of A is
D(A)=(∫_D^nA)^∨.
This is explained in <cit.>.
There is a different original definition of the Koszul dual
of an E_n-algebra, due to Ginzburg-Kapranov and Lurie
in different contexts.
Really Ayala-Francis' result is that the definition I
gave above agrees with these previous definitions.
Recently, Ching and Salvatore <cit.>
proved a long standing conjecture
regarding the Koszul dual of the operad E_n.
The E_n-operad is Koszul dual to itself.
This was previously known at the level of chain complexes;
see <cit.>.
Ching and Salvatore's result also recovers a previously known result on the level of algebras.
The Koszul dual of an E_n-algebra is an E_n-algebra.
This can be found several places,
including <cit.>.
Let 𝔤 be a Lie algebra.
The Koszul dual of the enveloping algebra U(𝔤)
is the Lie algebra cochains,
D(U𝔤)=C^∙_Lie(𝔤).
This is also true for the E_n-enveloping algebra U_n(𝔤).
The Koszul dual of U_n(𝔤) is C^∙_Lie(𝔤),
viewed as an E_n-algebra,
see <cit.>.
Thus, you can ask if the factorization homology
of dual algebras are related.
The following is the main theorem of <cit.>.
Under conditions,
there is an equivalence
∫_MA≃(∫_M^+D(A))^∨.
The changing of M to M^+ reflects,
for example, the difference in Poincaré duality for manifolds with
boundary.
Turning back to the field theory side,
note that
the Koszul dual of observables on R^n
has the right algebraic structure to be observables of
a different field theory on R^n.
That is,
let 𝖮𝖻𝗌_X denote the observables of a field theory X.
We would like to know if there is another field theory Y so that
the Koszul dual of observables on X is observables on Y;
in symbols
D(𝖮𝖻𝗌_X)≃𝖮𝖻𝗌_Y.
The holomorphically twisted version of AdS/CFT duality
gives an example of Koszul duality for observables.
See <cit.> and the references therein.
To make this conjecture precise,
one would need a notion of Koszul duality for factorization algebras.
That is an open problem.
Checking this conjecture in examples is an active
area of research by lots of people.
§ EXERCISES
Let Z be a fully extended n-dimensional TQFT
valued in 𝒞 with
Ω^n𝒞=C,
Ω^n-1𝒞=𝖢𝗁_C,
and
Ω^n-2𝒞=𝖫𝗂𝗇𝖢𝖺𝗍_C.
Recall that
Z(S^n-1) is an E_n-algebra in 𝖢𝗁
Z(S^n-2) is an E_n-1-algebra in 𝖫𝗂𝗇𝖢𝖺𝗍.
Show the analogous statement for Z(S^n-k).
Let K be a (n-1)-manifold with boundary.
Show that K determines an object in Z(∂ K).
Can you describe the unit of Z(S^n-2)?
Let A be an E_n-algebra.
Recall from yesterday's exercises that you constructed
an (∞,n)-category from A.
Use this construction to write down sensible values
for an n-dimensional field theory Z
with Z(S^n-1)=A.
In particular, write down Z(S^n-k) for all k,
and Z(pt).
Show that a vertex algebra (V,Y,T) with
Y(u,z)∈𝖤𝗇𝖽V[[z]]
is equivalent to one formed by a commutative ring with derivation.
PART:
Solutions and Hints for Exercises
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
The open cover consisting of U_1=(-1,∞)×R
and U_2=(-∞,1)×R is not a Weiss cover.
The finite set {(-6,0),(6,0)} is not fully contained in any open set of the cover.
§.§.§ Exercise <ref>
Take the open cover consisting of just M itself.
More interestingly, you could take the open cover
consisting of all disjoint unions of embedded open disks in M.
§.§.§ Exercise <ref>
An open cover of 𝖱𝖺𝗇(M)
is a collection of open sets {U_i}
so every finite set of points in M is contained in some U_i.
§.§.§ Exercise <ref>.
Let A be a commutative algebra (say in chain complexes).
The constant precosheaf defines a functor
F_1𝖮𝗉𝖾𝗇(M)𝖢𝗁
sending every open set to A.
We can view this as a precosheaf valued in algebras in chain complexes,
F_2𝖮𝗉𝖾𝗇(M)𝖢𝗈𝗆𝗆(𝖢𝗁).
We can cosheafify F_2 using the Weiss topology to get a Weiss cosheaf
F_3𝖮𝗉𝖾𝗇(M)𝖢𝗈𝗆𝗆(𝖢𝗁).
Since coproducts and tensor products are the same in algebras in chain complexes,
we have
F_3(U⊔ V)≃ F_3(U)⊗ F_3(V).
Forgetting back down,
we have a factorization algebra
F_4𝖮𝗉𝖾𝗇(M)𝖢𝗁.
The functor F_4 is still a Weiss cosheaf since the forgetful functor
𝖢𝗈𝗆𝗆(𝖢𝗁)𝖢𝗁
preserves reflexive coequalizers.
§.§.§ Exercise <ref>
Let A be an associative algebra.
This should be a factorization algebra on R.
It assigns the associative algebra A to every open interval.
The factorization algebra structure comes from the multiplication on A.
§.§.§ Exercise <ref>
When the factorization algebra ℱ
sends every inclusion (a,b)⊂ (c,d) to
an equivalence.
§.§.§ Exercise <ref>
Your answer should involve right and left modules.
§.§.§ Exercise <ref>
The space of fields is a vector space,
say W=R^n.
The critical locus of S is then the critical locus of a function
SR^nR.
§.§.§ Exercise <ref>
The critical locus is the set of points with dS(p)=0.
The zero section of T^*Y is the set of pairs (p,0).
The graph of dS is the set of pairs (p, dS(p)).
§.§.§ Exercise <ref>
If S=0, then dS=0.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
An E_n-algebra is an E_m-algebra for m<n.
If you have already done Exercise <ref>,
you can see this geometrically by May's recognition principle.
The space Ω^nX is the same as
Ω^mY where Y=Ω^n-mX.
§.§.§ Exercise <ref>
Draw pictures of disjoint unions of disks including into other disks.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
See <cit.>.
Alternatively, one can see this on the level of operads
as is done in <cit.>.
§.§.§ Exercise <ref>
See <cit.>.
Alternatively, one can see this on the level of operads
as is done in <cit.>.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
See <cit.>.
Alternatively, one can see this on the level of operads
as is done in <cit.>.
§.§.§ Exercise <ref>
Think of Ω^nX as
compactly supported maps R^n X
and using inclusion of disjoint unions of disks into other disks.
See <cit.>.
§.§.§ Exercise <ref>
Start by checking that disjoint union of disks is sent to a tensor product.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
See <cit.>..
§.§.§ Exercise <ref>
Use excision and the description of a factorization algebra on [0,1] from a previous exercise.
§.§.§ Exercise <ref>
Use excision.
§.§.§ Exercise <ref>
Here F(V×R) inherits an
E_1-algebra structure from the copy of R^1,
Emb^fr(∐_IR,R)⊗ F(V×R)^⊗ I≃Emb^fr(∐_IR,R)⊗ F(V×(∐_IR)) → F(V×R)
The tensor product
F(U)⊗_F(V×R)F(U') F(W)
is then the tensor product in modules over the E_1-algebra F(V×R).
§.§.§ Exercise <ref>
Show that C_∙ takes E_n-algebras in spaces
to E_n-algebras in chain complexes.
§.§.§ Exercise <ref>
The proof is given in <cit.>.
§.§.§ Exercise <ref>
Use the fact that every homology theory for manifolds
looks like factorization homology for some E_n-algebra.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
See <cit.>.
§.§.§ Exercise <ref>
Use both the fact that C_∙^Lie commutes with
factorization homology
and nonabelian Poincaré duality.
§.§.§ Exercise <ref>
In general, we should take Z(S^n-1).
Here the dimension is n=1 so we are looking at Z(S^0).
Since Z_V sends a positively oriented point to V
and a negatively oriented point to V^∨,
we have Z(S^0)=𝖤𝗇𝖽(V).
§.§.§ Exercise <ref>
Consider the bordism N×[0,1].
Remove an open disk from N×{1/2}.
This creates a bordism from N⊔ S^n-1 to N.
§.§.§ Exercise <ref>
Let ρ G𝖠𝗎𝗍(W) be the representation.
Given a principal G-bundle P M,
build the associated bundle P× _GW.
Take the holonomy as when constructing Wilson loop operators.
The fiber of P×_GW is W
so the holonomy gives a map W W.
Take the trace of this map as a linear map.
§.§.§ Exercise <ref>
One category is 𝖬𝗈𝖽_A of modules.
Another is BA with a single object and morphisms A.
§.§.§ Exercise <ref>
Oriented closed 2-manifolds, all of which look like
disjoint unions of spheres and multi-hole tori.
The empty set is also an object.
§.§.§ Exercise <ref>
This is the classification of oriented surfaces.
§.§.§ Exercise <ref>
We can think of building the two-holed torus
as a connect sum as analogous to splitting the
two-holed torus into a composition of morphisms in 𝖢𝗈𝖻(2).
§.§.§ Exercise <ref>
Read the next part for the answer!
§.§.§ Exercise <ref>
Use the pair of pants bordisms of dimension n-k+1.
That is, the bordism from multiple disjoint copies of S^n-k
to a single copy of S^n-k.
§.§.§ Exercise <ref>
View K as a morphism in the bordism category from the empty set to ∂ K.
The functor Z takes this morphism to a morphism
from the unit C-linear category 𝖢𝗁_C to Z(∂ K).
The object C is sent to an object in Z(∂ K).
This is the object K corresponds to.
§.§.§ Exercise <ref>
Consider the object coming from the bordism D^n-1.
§.§.§ Exercise <ref>
Consider the (∞,k)-categories k𝖬𝗈𝖽_A
of iterated modules.
§.§.§ Exercise <ref>
See <cit.>.
Nonlinear internal gravity waves in the atmosphere: Rogue waves, breathers and dark solitons
(arXiv:2307.01116v1 [nlin.PS], 3 July 2023)
Volodymyr M. Lashkin (corresponding author, vlashkin62@gmail.com)
Institute for Nuclear Research, Pr. Nauki 47, Kyiv 03028, Ukraine
Space Research Institute, Pr. Glushkova 40 k.4/1, Kyiv 03187, Ukraine
Oleg K. Cheremnykh
Space Research Institute, Pr. Glushkova 40 k.4/1, Kyiv 03187, Ukraine
We study nonlinear internal gravity waves (IGWs) in the
atmosphere. The reductive perturbation method is used to derive a
system of two-dimensional nonlinear equations for the envelope of
velocity stream function and the mean flow. In the one-dimensional
case, we obtain a nonlinear Schrödinger (NLS) equation
corresponding to both horizontal and vertical propagation of IGWs.
Depending on the characteristic wavelengths, the NLS equation is
focusing or defocusing. In the focusing case, non-stationary
solutions in the form of the Peregrine soliton, the Akhmediev
breather and the Kuznetsov-Ma breather are considered as potential
candidates for the modeling of rogue waves in the atmosphere. In
the defocusing case, stationary nonlinear IGWs are considered in
the form of nonlinear periodic waves and dark solitons.
Keywords: internal gravity waves; atmosphere; rogue waves; breather; dark soliton
§ INTRODUCTION
Internal gravity waves (IGWs) in the atmosphere of the Earth, in
the solar atmosphere, as well as in planetary atmospheres
constitute the most intense part of the spectrum of
acoustic-gravity waves and have been the subject of a large number
of experimental and theoretical studies for many years
<cit.>. The
IGWs are low frequency disturbances associated with the density
and velocity perturbations of the atmospheric fluid in the
presence of the equilibrium pressure gradient that is maintained
by the gravity force. These waves play a significant role in the
formation of atmospheric convection and turbulence and have an
essential influence both on a dynamics of the atmosphere and
coupling of the upper atmosphere with ionosphere. The study of
IGWs is also motivated by the need to obtain accurate predictions
of atmospheric dynamics under various meteorological conditions.
The linear theory of IGWs has been developed in great detail
(see, e.g., <cit.>, reviews
<cit.> and references therein). Such
effects as the modulation instability of IGWs leading to the
emergence of zonal flows <cit.>, the influence of
the Coriolis and Ampére forces (for the case of an ionized
atmosphere) <cit.>, existence of evanescent
acoustic-gravity waves with a continuous spectrum
<cit.>, and the presence of a random temperature
profile resulting in the threshold instability of IGWs
<cit.> were studied.
In many cases, however, it is not possible to confine ourselves to
considering only linear IGWs. The dynamics of the atmosphere is
governed by the totality of all motions, taking into account their
nonlinear interaction. In particular, in the Earth's atmosphere,
the amplitudes of IGWs grow exponentially with increasing
altitude. Finite amplitude IGWs in the atmosphere have been
considered in a fairly large number of works. The resonant and
nonresonant interactions between gravity waves and vortical modes
in the atmosphere were investigated in <cit.>.
The nonlinear ionospheric response to IGWs was studied in
<cit.> and distortions in the waveform of ionospheric
disturbances caused by nonlinear effects are predicted. In
<cit.>, the interaction of atmospheric gravity waves
with ion-acoustic waves in the F region of the ionosphere was
studied and a coupled pair of Korteweg-de Vries equations was
derived. It was shown that nonlinear atmospheric gravitational
solitary waves can be excited as a result of ion-neutral
collisions. In <cit.>, a nonlinear saturation of
atmospheric gravity waves was considered and it was shown that the
amplitude of the vertical velocity perturbation of IGW which would
exponentially grow with altitude in the linear approximation was
restricted by a nonlinear stabilization. The stabilization of the
collapse (breaking) of the nonlinear IGW in an inhomogeneous
atmosphere due to the effects of viscosity was discussed in
<cit.>. Intensive numerical modeling of the dynamics
of atmospheric IGWs in the framework of nonlinear fluid equations
was carried out in
<cit.>.
Simplified two-dimensional and three-dimensional nonlinear
equations for describing the dynamics of IGWs in the atmosphere
were obtained by Stenflo
<cit.>. Based on these
equations, vortex-like coherent nonlinear structures of IGWs were
studied. Two-dimensional dipole vortices in the form of a
cyclone-anticyclone pair, analogous to Larichev-Reznik solitons
(modons), were found analytically
<cit.>. Solutions in
the form of tripole vortices and vortex chains of IGWs were
obtained in <cit.>. Nonlinear IGWs
were also considered in <cit.>, where,
neglecting dispersion, the so-called dust devils (rotating columns
of rising dust) were studied. Recently, the two-dimensional
Stenflo equations have been generalized to the case of a weakly
ionized ionosphere, taking into account the Ampére force,
transverse (Pedersen) and Hall conductivities, and solutions in
the form of dipole vortices have also been found
<cit.>.
One of the remarkable and intriguing phenomena discovered in
recent years in fluid physics is the possibility of rogue waves
(also known as "freak" waves or "killer" waves). The rogue wave
is a short-lived high-amplitude wave that suddenly appears against
a constant background and then disappears. Rogue waves are now
recognized as proper intrinsically nonlinear structures (beyond an
initial attempt to identify them as superposed linear modes).
First discovered in the ocean <cit.>,
these waves were subsequently experimentally discovered and then
theoretically studied in optics
<cit.>, superfluid helium
<cit.>, Bose-Einstein condensates <cit.>,
plasmas <cit.>, molecular systems during
chemical reaction <cit.>, and even finance
<cit.>. However, as far as we know, no theoretical studies
of rogue waves in the atmosphere have been reported yet, with the
exception of a short report by Stenflo and Marklund
<cit.>, where it is simply indicated that the
description of rogue waves in the ocean by the nonlinear
Schrödinger (NLS) equation is very similar to the description
of atmospheric disturbances and it is noted that the study of
these nonlinear wave structures in the atmosphere is of undoubted
interest.
In this paper, we consider the Stenflo equations for atmospheric
IGWs in the envelope approximation and, using the reductive
perturbation method, derive a system of two-dimensional nonlinear
equations for the velocity stream function and the mean flow. In
the one-dimensional case, we obtain the NLS equation for the
envelope corresponding to both horizontal and vertical propagation
of IGWs. Depending on the ratio of horizontal and vertical
wavelengths, this equation can be of either focusing (the signs of
the dispersion and nonlinear terms are the same) or defocusing
type. In the focusing case, non-stationary solutions in the form
of the Peregrine soliton (rogue wave), the Akhmediev breather and
the Kuznetsov-Ma breather are considered as potential candidates
for the modeling of rogue waves in the atmosphere. In the
defocusing case, stationary nonlinear IGWs are considered in the
form of nonlinear periodic waves and dark solitons.
The paper is organized as follows. In Section <ref> the
Stenflo equations are presented and commented. Reductive
perturbation analysis is given in Section <ref>. In Section
<ref> we derive focusing and defocusing NLS equations.
Solutions in the form of the breathers and Peregrine soliton are
presented in Section <ref>, and the nonlinear periodic waves
and dark solitons are considered in Section <ref>. The
conclusion is made in Section <ref>.
§ MODEL EQUATIONS
Nonlinear Stenflo equations <cit.>
governing the dynamics of atmospheric IGWs in the two-dimensional
version have the form
∂/∂
t(Δψ-1/4H^2ψ)+{ψ,Δψ}+∂χ/∂ x=0,
∂χ/∂
t+{ψ,χ}-ω_g^2∂ψ/∂ x=0,
where Δ=∂^2/∂ x^2+∂^2/∂
z^2 is the two-dimensional Laplacian, and the Poisson bracket
(the Jacobian) {f,g} defined by
{f,g}=∂ f/∂ x∂ g/∂ z
-∂ f/∂ z∂ g/∂ x.
Here, ψ(x,z) is the velocity stream function, χ (x,z) is
the normalized density perturbation, H is the density scale
height (reduced atmospheric height), ω_g=(g/H)^1/2 is
the Brunt-Väisälä or buoyancy frequency, g is the
free fall acceleration. Equations (<ref>) and (<ref>)
depend only on two Cartesian coordinates x and z, where the
z axis is directed upward against the gravitational acceleration
𝐠=-g𝐳̂, where 𝐳̂ is the
unit vector along the z direction and the x axis lies in a
plane perpendicular to the z axis. They do not take into account
the curvature of the planet and the rotation of the atmosphere.
Therefore, the atmosphere in the (x, y) plane is considered
isotropic and the dependence on the coordinate y can be
eliminated by the corresponding rotation of the coordinate system
around the axis so that the x axis is directed along the
horizontal component of the fluid velocity, so that
v_x=∂ψ/∂ z and v_z=-∂ψ/∂
x.
In the linear approximation, taking ψ∼exp
(i𝐤·𝐱-iω t) and χ∼exp
(i𝐤·𝐱-iω t), where 𝐱=(x,z),
ω and 𝐤=(k_x,k_z) are the frequency and wave
number respectively, Eqs. (<ref>) and (<ref>) yield
the dispersion relation of the gravity waves
ω^2=ω_g^2k_x^2/(k^2+1/(4H^2)),
where k^2=k_x^2+k_z^2. In Eqs. (<ref>) and
(<ref>), the Coriolis force is neglected, and in the IGWs
dynamics is valid for ω≫Ω_0, where Ω_0 is
the angular rotation velocity of the planet. Thus, we exclude from
consideration the case of very small horizontal wave numbers
k_x≪ω_gΩ_0/g. We also consider altitudes at
which the Ampére force can be neglected, and where the effect
of the geomagnetic field is of the same order as the effect due to
the Coriolis force <cit.>. In addition, the
Brunt-Väisälä frequency is assumed to be independent
of the vertical coordinate z, that is, further we consider an
isothermal atmosphere. For the Earth's atmosphere, in particular,
this corresponds to altitudes ≳ 200 km. Then, the lower
limit for wavelengths (due to the dissipation of short-wave
harmonics) for IGWs is about ∼ 10 km at altitudes
∼ 200-300 km while typical characteristic values are
hundreds of kilometers.
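To get a feel for the numbers, the dispersion relation (<ref>) is easy to evaluate directly. The short Python sketch below uses illustrative values of g, H and of the wavelengths (these particular numbers are assumptions chosen only for illustration, not taken from the text) and confirms that the IGW frequency always lies below the buoyancy frequency ω_g.

import numpy as np

g = 9.8        # m/s^2, free-fall acceleration (illustrative value)
H = 30.0e3     # m, density scale height (assumed order of magnitude)
omega_g = np.sqrt(g / H)   # Brunt-Vaisala (buoyancy) frequency

def omega(kx, kz):
    """Linear IGW frequency: omega^2 = omega_g^2 kx^2 / (kx^2 + kz^2 + 1/(4 H^2))."""
    k2 = kx**2 + kz**2
    return np.sqrt(omega_g**2 * kx**2 / (k2 + 1.0 / (4.0 * H**2)))

# horizontal wavelength 300 km, vertical wavelength 50 km (sample values)
kx = 2.0 * np.pi / 300.0e3
kz = 2.0 * np.pi / 50.0e3
print(omega(kx, kz) / omega_g)   # always < 1, since kx^2 < k^2 + 1/(4 H^2)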
§ REDUCTIVE PERTURBATION ANALYSIS
To investigate the nonlinear behavior of the IGWs, we use
reductive perturbation method (sometimes also called the
multiscale expansion method) <cit.> which is often used
in the theory of nonlinear waves. This method usually leads to
asymptotic evolution equations, sometimes more adequate to the
given problem. Following this technique, we expand the space and
time variables as
𝐱=𝐱+ε𝐗+… and
t=t+ε T+ε^2τ+… respectively, where
𝐗=(X,Z), and ε is the small dimensionless
parameter scaling the weakness of dispersion and nonlinearity. As
will be shown later, to obtain a non-trivial evolution, it
suffices to restrict ourselves to expanding the time variable up
to the second order and the space variable up to the first order
in ε. Thus, we have
∂/∂𝐱→∂/∂𝐱+ε∂/∂𝐗, ∂/∂ t→∂/∂
t+ε∂/∂ T+ε^2∂/∂τ.
We then expand the fields ψ and χ in powers in
ε as
ψ=εψ^(1)+ε^2ψ^(2)+ε^3ψ^(3)+…,
χ=εχ^(1)+ε^2χ^(2)+ε^3χ^(3)+…,
where ψ^(1)=ψ̃^(1)+ψ̅,
χ^(1)=χ̃^(1)+χ̅,
ψ̃^(1)=Ψ
(𝐗,T,τ)e^i𝐤·𝐱-iω
t+c. c.,
χ̃^(1)=Φ
(𝐗,T,τ)e^i𝐤·𝐱-iω
t+c. c..
Secondary mean flows ψ̅ and χ̅ depend only on
slow variables 𝐗, T and τ. Our goal is to obtain
nonlinear evolution equation for the envelope Ψ. Acting by
the operator ∂/∂ t on Eq. (<ref>), and then
using Eq. (<ref>), one can obtain
ℒψ=𝒩,
where
ℒ=∂^2/∂ t^2(Δ-1/4H^2)+ω_g^2∂^2/∂ x^2,
𝒩=∂/∂
x{ψ,χ}-∂/∂ t{ψ,Δψ},
and the linear part of Eq. (<ref>) contains only ψ. The
operator
ℒ(∂_t+ε∂_T+ε^2∂_τ,
∂_𝐱+ε∂_𝐗) can be
expanded in terms of ε,
ℒ=ℒ_0+εℒ_1+ε^2ℒ_2.
Then substituting Eqs. (<ref>), (<ref>) and
(<ref>) into Eq. (<ref>) and keeping terms up to
first order in ε, we get
ℒ_0ψ̃^(1)=0, ℒ_0χ̃^(1)=0,
that is
ℒ_0(ω,𝐤)=ω^2(k^2+1/4H^2)-ω_g^2k_x^2=0
gives the dispersion relation (<ref>). In the next order
O(ε^2) we have
ℒ_0ψ̃^(2)=ℒ_1ψ̃^(1),
or
ℒ_0ψ̃^(2)=
(∂ℒ_0/∂ω∂/∂
T+∂ℒ_0/∂𝐤∂/∂𝐗)
ψ̃^(1).
Similar equations hold for χ. Note that despite the quadratic
nature of the nonlinearity in Eq. (<ref>), the right hand
sides of Eq. (<ref>) in the ε^2 order
do not contain nonlinear terms. This is due to the specific type
of nonlinearity in Eq. (<ref>) in the form of the Poisson
bracket, when the corresponding nonlinear terms disappear
identically. As usual <cit.>, the ε^2 order
secular terms, that is the right hand side of Eq.
(<ref>), represent the group motion of the envelope
and can be eliminated by transforming to a frame moving with the
group velocity
𝐯_g=∂ℒ_0/∂𝐤/∂ℒ_0/∂ω=∂ω/∂𝐤,
and thus we can put ψ̃^(2)=0. Next, we introduce a
coordinate system moving with group velocity 𝐯_g, so
that
∂/∂
T=-𝐯_g·∂/∂𝐗,
and the spatial variable 𝐗 is replaced by
𝐗^'=𝐗-𝐯_gT (and the prime
will be further omitted).
In the O(ε^3), one can obtain
ℒ_0ψ̃^(3)=-ℒ_2ψ̃^(1)+∂/∂
x(∂ψ̃^(1)/∂
x∂χ̅/∂ Z-∂ψ̃^(1)/∂ z∂χ̅/∂
X)
+(∂ψ̅/∂
X∂χ̃^(1)/∂ z-∂ψ̅/∂ Z∂χ̃^(1)/∂
x) -(∂ψ̅/∂
X∂Δψ̃^(1)/∂
z-∂ψ̅/∂ Z∂Δψ̃^(1)/∂ x) ,
where
ℒ_2=∂ℒ_0/∂ω(i∂/∂τ+1/2∂^2ω/∂
k_x^2∂^2/∂ X^2
+1/2∂^2ω/∂
k_z^2∂^2/∂ Z^2 +
∂^2ω/∂ k_x∂
k_z∂^2/∂ X∂ Z),
and ∂ℒ_0/∂ω=2ω
(k^2+1/4H^2). Removing the secular terms, that is, equating
to zero the right hand side of Eq. (<ref>), and using
Eqs. (<ref>) and (<ref>), we have
ℒ_2Ψ+k_xΨ(k_x∂χ̅/∂
Z-k_z∂χ̅/∂ X)+k_xΦ(k_z∂ψ̅/∂
X-k_x∂ψ̅/∂ Z)
+ ω k^2Ψ(k_x∂ψ̅/∂
Z-k_z∂ψ̅/∂ X)=0.
To make further progress, we use the linear response for Φ and
χ̅ from equation (<ref>),
Φ=-k_xω_g^2/ωΨ, ∂χ̅/∂ T-ω_g^2∂ψ̅/∂ X=0.
Next, in the second equation we use Eq. (<ref>). As noted
above, in the following we will be interested in obtaining a
one-dimensional NLS equation for the envelope Ψ containing
either X or Z space variables. Then, from Eq.
(<ref>) we have
χ̅=-ω_g^2ψ̅/v_gx, where
v_gx=∂ω/∂ k_x, if ∂/∂
Z=0, and χ̅=0 if ∂/∂ X=0. Thus, Eq.
(<ref>) becomes
ℒ_2Ψ+k_z(k_xω_g^2/v_gx-ω
k^2-k_x^2ω_g^2/ω)Ψ∂ψ̅/∂
X+k_x(ω k^2
+k_x^2ω_g^2/ω)Ψ∂ψ̅/∂
Z=0 .
At order O(ε^3), Eq. (<ref>) gives for the mean
flow the equation
ω_g^2∂^2ψ̅/∂
X^2-1/4H^2∂^2ψ̅/∂
T^2=∂_x{ψ,χ}-∂_t{ψ,Δψ},
where the bar means averaging over the fast variables. From Eq.
(<ref>), using Eqs. (<ref>) and (<ref>), we
get
ω_g^2∂^2ψ̅/∂
X^2-1/4H^2(v_gx^2∂^2ψ̅/∂
X^2+v_gz^2∂^2ψ̅/∂
Z^2) = (ω
k^2+k_x^2ω_g^2/ω)(k_z∂
|Ψ|^2/∂ X-k_x∂ |Ψ|^2/∂
Z),
where for v_gx and v_gz=∂ω/∂ k_z we
have
v_gx=ω_g(k_z^2+1/4H^2)/(k^2+1/4H^2)^3/2,
v_gz=-ω_gk_xk_z/(k^2+1/4H^2)^3/2.
Equations (<ref>) and (<ref>) are a closed system
of nonlinear equations for the envelope Ψ and the mean flow
ψ̅.
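Before specializing to one dimension, the quoted group-velocity components can be checked symbolically. The short sympy sketch below (ours) differentiates ω(k_x,k_z) obtained from the dispersion relation and compares the result with the closed forms above.

```python
import sympy as sp

kx, kz, wg, H = sp.symbols('k_x k_z omega_g H', positive=True)
a = 1 / (4 * H**2)
k2 = kx**2 + kz**2
omega = wg * kx / sp.sqrt(k2 + a)          # dispersion relation omega(k_x, k_z)

v_gx = sp.simplify(sp.diff(omega, kx))
v_gz = sp.simplify(sp.diff(omega, kz))

# Closed forms quoted in the text
v_gx_ref = wg * (kz**2 + a) / (k2 + a)**sp.Rational(3, 2)
v_gz_ref = -wg * kx * kz / (k2 + a)**sp.Rational(3, 2)

print(sp.simplify(v_gx - v_gx_ref))   # 0
print(sp.simplify(v_gz - v_gz_ref))   # 0
```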
§ DERIVATION OF NONLINEAR SCHRÖDINGER EQUATION
In this section, we obtain a one-dimensional NLS equation for two
cases. In the first case, the spatial dependence corresponds to
the X coordinate (horizontal propagation), and in the second case
to the Z coordinate (vertical propagation).
Neglecting in Eq. (<ref>) the dependence on the
spatial coordinate Z, we have
2ω(k^2+1/4H^2)(i∂Ψ/∂τ
+1/2∂^2ω/∂
k_x^2∂^2Ψ/∂ X^2)
+k_z(k_xω_g^2/v_gx-ω
k^2-k_x^2ω_g^2/ω)Ψ∂ψ̅/∂
X=0,
where
∂^2ω/∂
k_x^2=-3k_xω_g(k_z^2+1/4H^2)/(k_x^2+k_z^2+1/4H^2)^5/2.
Omitting Z-dependence in Eq. (<ref>), one can obtain
∂ψ̅/∂
X=k_z(ω
k^2+k_x^2ω_g^2/ω)/ω_g^2-v_gx^2/(4H^2)|Ψ|^2.
Next, we introduce dimensionless variables τ^',
X^', and Ψ^' (recall that Ψ is the
envelope velocity stream function) by
τ^'=ω_gτ ,
X^'=X/H , Ψ^'=Ψ/ω_gH^2 ,
and further the primes are omitted. Then, substituting Eq.
(<ref>) into Eq. (<ref>), and taking into account the
explicit expressions (<ref>) and (<ref>) for
ω and v_gx respectively, we have the NLS equation,
i∂Ψ/∂τ+P∂^2Ψ/∂
X^2+Q|Ψ|^2Ψ=0,
where the dimensionless coefficients P and Q are defined as
P=-12q_x(1+4q_z^2)/(1+4q^2)^5/2,
and
Q=q_xq_z^2(1+8q^2)(4q_x^4-4q_z^4-q_z^2)/(1+4q_z^2)(1+4q^2)^3/2[1-(1+4q_z^2)^2/(1+4q^2)^3],
respectively, with q_x=k_xH, q_z=k_zH and
q^2=q_x^2+q_z^2. Note that depending on k_x and
k_z, that is, on the horizontal and vertical wavelengths, and
the effective height of the atmosphere H, the Q value can be
either positive or negative. The contour plot of the function
Q(q_x,q_z) on the q_x-q_z plane is shown in
Fig. <ref>.
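For orientation, the coefficients P and Q are easy to tabulate numerically. The helper below (ours) evaluates them at the (q_x,q_z) pairs used for the figures in this and the following sections and reports the focusing or defocusing character from the sign of PQ.

```python
import numpy as np

def P_coeff(qx, qz):
    q2 = qx**2 + qz**2
    return -12.0 * qx * (1.0 + 4.0 * qz**2) / (1.0 + 4.0 * q2)**2.5

def Q_coeff(qx, qz):
    q2 = qx**2 + qz**2
    num = qx * qz**2 * (1.0 + 8.0 * q2) * (4.0 * qx**4 - 4.0 * qz**4 - qz**2)
    den = (1.0 + 4.0 * qz**2) * (1.0 + 4.0 * q2)**1.5 \
          * (1.0 - (1.0 + 4.0 * qz**2)**2 / (1.0 + 4.0 * q2)**3)
    return num / den

for qx, qz in [(1.0, 3.0), (1.5, 1.5), (1.0, 0.5)]:
    P, Q = P_coeff(qx, qz), Q_coeff(qx, qz)
    kind = 'focusing' if P * Q > 0 else 'defocusing'
    print(f'q_x={qx}, q_z={qz}:  P={P:+.3f}, Q={Q:+.3f}  ({kind})')
```

In particular, (q_x,q_z)=(1,3) and (1.5,1.5) fall in the focusing region used for the breather and rogue-wave solutions below, while (1,0.5) gives PQ<0 and belongs to the defocusing regime of the dark-soliton section.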
For the vertical propagation, neglecting in Eq.
(<ref>) the dependence on the spatial coordinate X,
we have
2ω(k^2+1/4H^2)(i∂Ψ/∂τ
+1/2∂^2ω/∂
k_z^2∂^2Ψ/∂ Z^2)
+k_x(ω k^2
+k_x^2ω_g^2/ω)Ψ∂ψ̅/∂
Z=0,
where
∂^2ω/∂
k_z^2=k_xω_g(2k_z^2-k_x^2-1/4H^2)/(k^2+1/4H^2)^5/2.
Then, from Eq. (<ref>) one obtains
∂ψ̅/∂
Z=4k_xH^2(ω
k^2+k_x^2ω_g^2/ω)/v_gz^2|Ψ|^2.
Introducing the dimensionless variable Z^'=Z/H (then the
prime is omitted), and using the dimensionless variables Eq.
(<ref>) for τ and Ψ, we obtain NLS
equation
i∂Ψ/∂τ+P̃∂^2Ψ/∂
Z^2+Q̃|Ψ|^2Ψ=0,
where the dimensionless coefficients P̃ and Q̃
are defined as
P̃=4q_x(8q_z^2-4q_x^2-1)/(1+4q^2)^3,
and
Q̃=q_x(1+8q^2)(1+4q^2)^3/2/64q_z^2,
respectively. The P̃ coefficient at the dispersion term
of Eq. (<ref>) has an indefinite sign. The contour plot of
the function f(q_x,q_z)=8q_z^2-4q_x^2-1 is presented
in Fig. <ref>.
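The analogous check for vertical propagation (again an illustrative sketch of ours): since Q̃>0 for q_x>0, the focusing or defocusing character is controlled by the sign of f(q_x,q_z)=8q_z^2-4q_x^2-1.

```python
import numpy as np

def Pt_coeff(qx, qz):
    q2 = qx**2 + qz**2
    return 4.0 * qx * (8.0 * qz**2 - 4.0 * qx**2 - 1.0) / (1.0 + 4.0 * q2)**3

def Qt_coeff(qx, qz):
    q2 = qx**2 + qz**2
    return qx * (1.0 + 8.0 * q2) * (1.0 + 4.0 * q2)**1.5 / (64.0 * qz**2)

# The sign of Ptilde, and hence of Ptilde*Qtilde, follows f = 8*qz^2 - 4*qx^2 - 1.
for qx, qz in [(1.0, 0.2), (1.0, 1.0)]:
    f = 8 * qz**2 - 4 * qx**2 - 1
    print(qx, qz, Pt_coeff(qx, qz), Qt_coeff(qx, qz), 'f =', f)
```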
§ BREATHERS AND ROGUE WAVES
If the coefficients at the dispersion and nonlinear terms in Eqs.
(<ref>) and (<ref>) have the same signs, so that PQ>0
and P̃Q̃>0 , then the corresponding equations have
the form of a focusing NLS equation, otherwise, if PQ<0 and
P̃Q̃<0, the NLS equation has a defocusing type. In
what follows, for definiteness, we will consider for the time
being Eq. (<ref>) (horizontal propagation). The exact solution
of Eq. (<ref>) in the form of a plane wave with the frequency
depending on the amplitude A is
Ψ=Ae^iQA^2τ.
The standard linear stability analysis then shows that a linear
modulation with the frequency Ω and the wave number
κ obeys the dispersion relation
Ω^2=Pκ^2(Pκ^2-2QA^2),
whose right-hand side is positive if PQ<0 and then Ω is
real. In this case, the modulations of the plane wave are stable,
and for nonvanishing boundary conditions the defocusing NLS
equation has solutions in the form of the so-called dark solitons
<cit.>. Otherwise, if PQ>0, the plane
wave turns out to be unstable with respect to modulations
κ<√(2Q/P)A, and for boundary conditions falling off at
infinity, this corresponds, in particular, to bright solitons.
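The instability band can be made explicit with a few lines (ours): for PQ>0 the growth rate Im Ω is nonzero exactly for κ<√(2Q/P)A and is maximal at κ=√(Q/P)A.

```python
import numpy as np

def mi_growth_rate(kappa, P, Q, A):
    """Im(Omega) from Omega^2 = P*kappa^2*(P*kappa^2 - 2*Q*A^2); zero where Omega^2 >= 0."""
    omega2 = P * kappa**2 * (P * kappa**2 - 2.0 * Q * A**2)
    return np.sqrt(np.maximum(-omega2, 0.0))

P, Q, A = -0.04, -24.7, 1.0                  # focusing example values quoted in the text
kappa = np.linspace(0.0, 1.5 * np.sqrt(2 * Q / P) * A, 7)
for k, g in zip(kappa, mi_growth_rate(kappa, P, Q, A)):
    print(f'kappa={k:6.2f}   growth rate={g:8.3f}')
# The growth rate vanishes for kappa >= sqrt(2Q/P)*A and peaks at kappa = sqrt(Q/P)*A.
```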
Other solutions of the NLS equation (both focusing and defocusing)
include nonlinear periodic cnoidal waves that can be expressed in
terms of the Jacobi elliptic functions and theta functions
<cit.>. Since in our case P<0, the type
of the NLS equation (<ref>) depends on the sign of Q. In
this section we consider the case Q<0.
Then, the nonlinear stage of the modulation instability
<cit.> of a plane wave Eq. (<ref>) (also known
as a Benjamin-Feir instability) results in the so-called Akhmediev
breather
<cit.>,
Ψ_A
(X,τ)=Ae^iQA^2τ[ν^2cosh
(QA^2στ)+iσsinh (QA^2στ)/cosh
(QA^2στ)-√(1-ν^2/2)cos (KX)-1],
where A is the background amplitude,
σ=ν√(2-ν^2) is the modulation growth rate and
K=ν√(QA^2/P) is the amplitude-dependent wave number of
the envelope. This solution is localized in the temporal variable
τ and is periodic in the spatial variable X. Both A and
ν are free real parameters. The solution exists only if
ν^2<2, that is, if K<√(2Q/P)A, which is fully
consistent with the modulation instability condition of the plane
wave presented above. Thus, the Akhmediev breather can be treated as
a non-stationary soliton excitation against a constant background
(plane wave). This excitation results in the amplification of the
background wave amplitude. The maximum amplitude occurs at
τ=0 and at the locations given by the condition cos (KX)=1.
At these locations, the so-called amplification factor (ratio of
maximum amplitude to background) is given by
F_A=ν^2/1-√(1-ν^2/2)-1.
For 0<ν^2<2, the amplification factor F_A ranges from
1 to 3. The wave energy flux is defined as
J=iP(Ψ∂Ψ^∗/∂
X-Ψ^∗∂Ψ/∂ X)
and is related to the energy density |Ψ|^2 by the
conservation law
∂ |Ψ|^2/∂τ+∂ J/∂ X=0.
For the Akhmediev breather we have,
J_A(X,τ)=√(2)PA^2Kν(2-ν^2)sin (KX)sinh
(wτ)/[cosh(wτ)-√(1-ν^2/2)cos(KX)]^2,
where w=|Q|A^2σ. The wave energy flux of the Akhmediev
breather is exponentially localized in time and represents a burst
of energy with a characteristic duration τ_A∼ 1/w. At
the same time, a periodic redistribution of energy occurs in space
with a spatial period X_A∼ 2π/K. In a part of space, the
energy flux J_A is positive due to the modulation instability
of the background, and in another part, the flux is negative due
to the nonlinear stabilization of the instability. The contour
plot of the wave energy flux (<ref>) with A=1, ν=1,
q_x=1 and q_z=3 (herewith P=-0.04 and Q=-24.7) is
shown in Fig. <ref>. For the Earth's atmosphere, the
effective height of the atmosphere at the considered altitudes
≳ 200 km (i.e. for an isothermal atmosphere) is H∼ 40
km, i.e. the values of q_x and q_z correspond to
horizontal and vertical wavelengths ∼ 40 km and ∼ 13
km, respectively.
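The breather amplitude itself is straightforward to evaluate; the sketch below (ours) transcribes the expression for Ψ_A with the same parameters as in the figure and checks the amplification factor F_A numerically.

```python
import numpy as np

def akhmediev(X, tau, A, nu, P, Q):
    """Akhmediev breather Psi_A(X,tau) of i Psi_tau + P Psi_XX + Q|Psi|^2 Psi = 0 (P*Q > 0)."""
    sigma = nu * np.sqrt(2.0 - nu**2)
    K = nu * np.sqrt(Q * A**2 / P)
    num = nu**2 * np.cosh(Q * A**2 * sigma * tau) + 1j * sigma * np.sinh(Q * A**2 * sigma * tau)
    den = np.cosh(Q * A**2 * sigma * tau) - np.sqrt(1.0 - nu**2 / 2.0) * np.cos(K * X)
    return A * np.exp(1j * Q * A**2 * tau) * (num / den - 1.0)

A, nu = 1.0, 1.0
P, Q = -0.04, -24.7                          # values used for the flux contour plot
X = np.linspace(-0.5, 0.5, 2001)
peak = np.abs(akhmediev(X, 0.0, A, nu, P, Q)).max()
F_A = nu**2 / (1.0 - np.sqrt(1.0 - nu**2 / 2.0)) - 1.0
print(peak, F_A)     # both ~2.414 for nu = 1, within the range 1 < F_A < 3
```

The energy flux J_A quoted above can be transcribed in exactly the same way if one wishes to reproduce the contour plot.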
Another non-stationary solution of Eq. (<ref>) with Q<0 is
known as the so-called Kuznetsov-Ma breather (often referred to as
the Ma breather), which is periodic in time and localized in the space
variable,
Ψ_M
(X,τ)=Ae^iQA^2τ[1+μ^2cos
(QA^2ρτ)+iρsin (QA^2ρτ)/cos
(QA^2ρτ)-√(1+μ^2/2)cosh (K̃X)],
where A is the amplitude, ρ=μ√(2+μ^2) and
K̃=μ√(QA^2/P). Free real parameters in (<ref>)
are A and μ. This solution was first found by Kuznetsov
using the inverse scattering transform method
<cit.>, and was rediscovered in
<cit.> and in other works later on. The period of
oscillations in the Kuznetsov-Ma breather is
T=2π/(|Q|A^2μ√(2+μ^2)).
The corresponding wave energy flux is
J_M(X,τ)=√(2)PA^2K̃μ(2+μ^2)sinh
(K̃X)sin
(wτ)/[cos(wτ)-√(1+μ^2/2)cosh(K̃X)]^2,
where w=|Q|A^2ρ.
The wave energy flux for the Kuznetsov-Ma breather is localized in
space with a characteristic size X_M∼ 1/K̃ and
represents periodic bursts of wave energy with a period
τ_M∼ 2π/w.
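The Kuznetsov-Ma envelope can be transcribed in the same way; the sketch below (ours) evaluates its period T and its peak amplitude over the background for an illustrative choice μ=1.

```python
import numpy as np

def kuznetsov_ma(X, tau, A, mu, P, Q):
    """Kuznetsov-Ma breather Psi_M(X,tau); periodic in tau, localized in X (P*Q > 0)."""
    rho = mu * np.sqrt(2.0 + mu**2)
    Kt = mu * np.sqrt(Q * A**2 / P)
    num = mu**2 * np.cos(Q * A**2 * rho * tau) + 1j * rho * np.sin(Q * A**2 * rho * tau)
    den = np.cos(Q * A**2 * rho * tau) - np.sqrt(1.0 + mu**2 / 2.0) * np.cosh(Kt * X)
    return A * np.exp(1j * Q * A**2 * tau) * (1.0 + num / den)

A, mu, P, Q = 1.0, 1.0, -0.04, -24.7
T = 2.0 * np.pi / (abs(Q) * A**2 * mu * np.sqrt(2.0 + mu**2))
print('period T =', T)
print('peak |Psi| =', np.abs(kuznetsov_ma(0.0, 0.0, A, mu, P, Q)))
# For this choice of mu the peak exceeds the Peregrine amplification factor 3.
```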
The limiting case of both the Akhmediev breather (<ref>)
with ν→ 0, and the Kuznetsov-Ma breather (<ref>)
with μ→ 0 corresponds to the Peregrine soliton
<cit.>
Ψ_P
(X,τ)=Ae^iQA^2τ[4(1+2iQA^2τ)/1+4Q^2A^4τ^2
+2QA^2X^2/P-1].
In fact, this non-stationary solution, strictly speaking, is not a
soliton, but is a rogue wave that appears against a constant
background from nowhere and disappears without a trace. The
characteristic lifetime of the rogue wave can be estimated as
τ_c∼ 1/(2|Q|A^2). The amplification factor for the
Peregrine soliton is F_P=3. Despite some idealization of the
model, the solution in the form of a Peregrine soliton agrees very
well with observations and experimental data. For example,
numerous observations, starting from the very first on rogue waves
in the ocean (including the first famous sighting on the Draupner
platform in the North Sea off the coast of Norway on 1 January
1995) <cit.>, show an amplification
factor ∼ 3. The rogue solution (<ref>) with A=1,
q_x=1.5 and q_z=1.5 (herewith P=-0.11 and Q=-0.34) is
shown in Fig. <ref>. Such values of q_x and q_z
correspond to horizontal and vertical wavelengths ∼ 20 km.
The peak and two troughs are visible against the background, which
corresponds to the conservation of the norm N=∫
(|Ψ|^2-A^2)dX for the NLS equation (<ref>).
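The conservation of N can also be checked numerically. The sketch below (ours) evaluates the Peregrine profile with the parameters quoted for the figure; the maximum reaches 3A at τ=0 and N stays constant (it is zero up to the truncation of the slowly decaying 1/X^2 tails of the integrand).

```python
import numpy as np

def peregrine(X, tau, A, P, Q):
    """Peregrine rogue-wave solution of i Psi_tau + P Psi_XX + Q|Psi|^2 Psi = 0 (P*Q > 0)."""
    D = 1.0 + 4.0 * Q**2 * A**4 * tau**2 + 2.0 * Q * A**2 * X**2 / P
    return A * np.exp(1j * Q * A**2 * tau) * (4.0 * (1.0 + 2j * Q * A**2 * tau) / D - 1.0)

A, P, Q = 1.0, -0.11, -0.34                  # q_x = q_z = 1.5 values quoted in the text
X = np.linspace(-200.0, 200.0, 400001)
dX = X[1] - X[0]
for tau in (0.0, 1.0, 5.0, 20.0):
    psi = peregrine(X, tau, A, P, Q)
    N = np.sum(np.abs(psi)**2 - A**2) * dX
    print(f'tau={tau:5.1f}   max|Psi|={np.abs(psi).max():.3f}   N={N:+.2e}')
# max|Psi| equals 3*A at tau=0 (amplification factor 3); N vanishes up to the
# finite-domain truncation error of the 1/X^2 tails.
```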
For the vertical propagation corresponding to equation
(<ref>), the breather solutions and the rogue solution are
obtained from Eqs. (<ref>), (<ref>) and Eq.
(<ref>) by replacing X→ Z, and
P→P̃, Q→Q̃.
§ PERIODIC NONLINEAR WAVES AND DARK SOLITONS
In this section, we will consider the case when PQ<0 and
P̃Q̃<0 in Eqs. (<ref>) and (<ref>)
respectively. As follows from Eqs. (<ref>) and (<ref>), this
corresponds, in particular, to sufficiently long vertical
wavelengths compared to the effective height H and horizontal
wavelengths. For example, for vertical propagation, the condition
k_z^2<k_x^2/2+1/8H^2 must be met. As in the previous
section, for definiteness, we consider the case of horizontal
propagation, i.e. equation (<ref>). The case of vertical
propagation is obtained in the following equations by replacing
X→ Z, and P→P̃,
Q→Q̃.
The defocusing NLS equation (<ref>) with PQ<0 has a
stationary solution in the form of a nonlinear periodic (cnoidal)
wave <cit.>,
Ψ
(X,τ)=√(2)Ae^iQA^2τm/√(1+m^2)sn( A√(Q/P(1+m^2))X,m),
where A is the free real parameter, sn(u,m) is the
Jacobi elliptic sine with the modulus m. In the particular case
m=1, solution (<ref>) takes the form of a dark soliton,
Ψ (X,τ)=Ae^iQA^2τtanh(A√(Q/2P)X).
Since the NLS equation is Galilean invariant, solutions moving
with the velocity V can be obtained from solutions at rest
(<ref>) and (<ref>) by replacing X→
X-Vτ and replacing the exponential factor
exp(iQA^2τ)→exp(iQA^2τ+i√(Q/2P)V/2X-iQV^2/4τ).
The dark soliton (<ref>) has a dip of depth A (reaching zero) against a uniform
background of amplitude A and is the so-called black soliton. A more general
solution (the gray soliton), corresponding to a dip of
depth smaller than A against the background A, has the form
Ψ
(X,τ)=A{isinφ+cosφtanh[a√(Q/2P)
(X-Vτ)]}
×exp[i√(Q/2P)c/2X+iQ(A^2-c^2/4)τ],
where c is the free real parameter, tanφ=(c-V)/a, and
the soliton parameters are connected by the constraint
a^2+(c-V)^2=A^2, otherwise written as a=±
Acosφ. Thus, the soliton is characterized by three
independent parameters A, V (or c), and φ. The
soliton therefore exists in the domain -A+c<V<A+c.
It describes a localized kink structure moving with the velocity
V on a background plane wave; A gives the amplitude of the
background. Across the soliton, there is a phase jump in the
background wave of π-2φ. For the intensity |Ψ|^2
we have
|Ψ
(X,τ)|^2=A^2{1-cos^2φ sech^2[a√(Q/2P)(X-Vτ)]}.
The contrast of the dark soliton, defined as the depth of the intensity
dip relative to the background (maximum) intensity, is given by cos^2φ.
For φ=0 we have the black soliton. The nonlinear periodic
waves (<ref>) with the modulus m=0.1 and m=0.9, and
the dark solitons with φ=0.7 (gray soliton) and
φ=0 (black soliton) are presented in Fig. <ref>.
Other parameters are A=1, q_x=1 and q_z=0.5 (herewith
P=-0.27 and Q=0.33). For the Earth's atmosphere, such values
of q_x and q_z correspond to horizontal and vertical
wavelengths ∼ 40 km and ∼ 80 km, respectively.
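The profiles of the figure can be reproduced with scipy's Jacobi elliptic functions. In the sketch below (ours) the modulus of Q/(2P) and of Q/(P(1+m^2)) is taken so that the spatial arguments remain real in the defocusing case P<0, Q>0 considered here; this is how we read the expressions above (and it is consistent with the m→1 limit, where the cnoidal wave reduces to the black soliton).

```python
import numpy as np
from scipy.special import ellipj

P, Q, A = -0.27, 0.33, 1.0            # q_x = 1, q_z = 0.5 (defocusing, P*Q < 0)
X = np.linspace(-30.0, 30.0, 1201)

def cnoidal(X, A, m, P, Q):
    """Nonlinear periodic (cnoidal) wave envelope; sn is the Jacobi elliptic sine."""
    u = A * np.sqrt(np.abs(Q / (P * (1.0 + m**2)))) * X
    sn, cn, dn, ph = ellipj(u, m**2)   # scipy's parameter is the square of the modulus m
    return np.sqrt(2.0) * A * m / np.sqrt(1.0 + m**2) * sn

def gray_soliton_intensity(X, A, phi, P, Q, V=0.0, tau=0.0):
    """|Psi|^2 of the gray soliton; phi = 0 gives the black soliton."""
    a = A * np.cos(phi)
    arg = a * np.sqrt(np.abs(Q / (2.0 * P))) * (X - V * tau)
    return A**2 * (1.0 - np.cos(phi)**2 / np.cosh(arg)**2)

print(np.abs(cnoidal(X, A, 0.9, P, Q)).max())     # approaches sqrt(2)*A*m/sqrt(1+m^2)
print(gray_soliton_intensity(0.0, A, 0.7, P, Q))  # dip minimum A^2*sin(phi)^2 at X=0
print(gray_soliton_intensity(0.0, A, 0.0, P, Q))  # black soliton: intensity 0 at X=0
```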
§ CONCLUSION
In this work, we have applied the reductive perturbation method to
obtain model nonlinear equations describing the dynamics of IGWs
in the atmosphere. A system of two-dimensional nonlinear equations
for the velocity stream function and the mean flow has been
derived in the envelope approximation. In the one-dimensional
case, we have obtained the NLS equation for the envelope
corresponding to both horizontal and vertical propagation of IGWs.
Depending on the values of the horizontal and vertical
wavelengths, this equation can be either focusing (the signs of
the dispersion and nonlinear terms are the same) or defocusing. In
the focusing case, non-stationary solutions in the form of the
Peregrine soliton (rogue wave), the Akhmediev breather and the
Kuznetsov-Ma breather have been considered as potential candidates
for the modeling of rogue waves in the atmosphere. In the
defocusing case, stationary nonlinear IGWs have been found in the
form of nonlinear periodic waves and dark solitons.
We have considered the approximation of an isothermal atmosphere,
which, for the Earth's atmosphere, in particular, is fully
justified at altitudes ≳ 200 km. Note, however, that other
altitude intervals can be distinguished in the Earth's atmosphere,
where the temperature changes so slowly that its change can be
neglected within sufficiently thin layers (the so-called
isothermal layers). The propagation of waves at such altitudes can
also be described in terms of the theory of an isothermal
atmosphere.
Note that in this paper we have restricted ourselves to
one-dimensional nonlinear structures in the framework of the
one-dimensional NLS equation. An analysis of the dynamics and the
possibility of the existence of nonlinear two-dimensional
structures within the framework of the system of equations
(<ref>) and (<ref>) will be addressed in a future
work.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial
interests or personal relationships that could have appeared to
influence the work reported in this paper.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
V. M. Lashkin: Conceptualization, Methodology,
Validation, Formal analysis, Investigation. O. K.
Cheremnykh: Conceptualization, Methodology, Validation, Formal
analysis, Investigation.
§ DATA AVAILABILITY
No data was used for the research described in the article.
§ ACKNOWLEDGMENTS
The work was supported by the National Research Foundation of
Ukraine, grant 2020.02/0015.
Hines1960
C. O. Hines, Internal atmospheric gravity waves at ionospheric
heights, Can. J. Phys. 38, 1441-1481 (1960).
Tolstoy1967
I. Tolstoy, Long-period gravity waves in the atmosphere, J.
Geophys. Res. 72, 4605-4610 (1967).
Liu1974
K. C. Yeh, C. H. Liu, Acoustic-gravity waves in the upper
atmosphere, Rev. Geophys. Space Phys. 12, 193-216
(1974).
Beer1974
T. Beer, Atmospheric Waves (John Wiley, New York, 1974).
Gossard1975
E. E. Gossard, W. H. Hooke, Waves in the Atmosphere:
Atmospheric Infrasound and Gravity Waves: Their Generation and
Propagation (Elsevier Scientific Publishing Company, 1975).
Sutherland2015
B. R. Sutherland, Internal Gravity Waves (Cambridge
University Press, Cambridge, 2015).
Francis1975
S. H. Francis, Global propagation of atmospheric gravity waves: A
review, J. Atmos. Sol.-Terrestrial Phys. 37, 1011-1054
(1975).
Fritts2003
D. C. Fritts and M. J. Alexander, Gravity wave dynamics and
effects in the middle atmosphere, Rev. Geophys. 41,
1003-1062 (2003).
Horton-zonal2008
W. Horton, T. D. Kaladze, J. W. Van Dam, and T. W. Garner, Zonal
flow generation by internal gravity waves in the atmosphere, J.
Geophys. Res. 113, A08312 (2008).
Kaladze2008
T. D. Kaladze, O. A. Pokhotelov, H. A. Shan, M. I. Shan,
L. Stenflo, Acoustic-gravity waves in the Earth's ionosphere, J.
Atmos. Sol.-Terrestrial Phys. 70, 1607-1616 (2008).
Cheremnykh2021
O. K. Cheremnykh, A. K. Fedorenko, Y. A. Selivanov, S. O.
Cheremnykh, Continuous spectrum of evanescent acoustic-gravity
waves in an isothermal atmosphere, Mon. Notic. Roy. Astron. Soc.
503, 5545-5553 (2021).
Lashkin2023
V. M. Lashkin and O. K. Cheremnykh, Acoustic-gravity waves in
quasi-isothermal atmospheres with a random vertical temperature
profile, Wave Motion 119, 103140 (2023).
Dong1988
B. Dong and K. C. Yeh, Resonant and nonresonant wave-wave
interactions in an isothermal atmosphere, J. Geophys. Res.
93, 3729-3744 (1988).
Fritts1992
D. C. Fritts, S. Sun, D.-Y. Wang, Wave-wave interactions in a
compressible atmosphere 1. A general formulation including
rotation and wind shear, J. Geophys. Res. 97, 9975-9988
(1992).
Huang1991
C. S. Huang and J. Li, Weak nonlinear theory of the ionospheric
response to atmospheric gravity waves in the F-region, Journ.
Atmosphere and Terrest. Phys. 53, 903-908 (1991).
Huang1992
C. S. Huang and J. Li, Interaction of atmospheric gravity solitary
waves with ion acoustic solitary waves in the ionospheric
F-region, Journ. Atmosphere and Terrest. Phys. 54,
951-956 (1992).
Nekrasov1994
A. K. Nekrasov, Nonlinear saturation of atmospheric gravity waves,
Journ. Atmosphere and Terrest. Phys. 56, 931-937 (1994).
Nekrasov2005
A. K. Nekrasov, N. S. Erokhin, Self-influence of the collapsing
internal gravity wave in the inhomogeneous atmosphere, Phys. Lett.
A 335, 417-423 (2005).
Gavrilov2005
S. P. Kshevetskiia, N. M. Gavrilov, Vertical propagation, breaking
and effects of nonlinear gravity waves in the atmosphere, J.
Atmos. Sol.-Terrest. Phys. 67, 1014-1030 (2005).
Huang2014
K. M. Huang, S. D. Zhang, F. Yi, C. M. Huang, Q. Gan, Y. Gong, and
Y. H. Zhang, Nonlinear interaction of gravity waves in a
nonisothermal and dissipative atmosphere, Ann. Geophys.
32, 263-275 (2014).
Fritts2015
D. C. Fritts, B. Laughman, T. S. Lund, and J. B. Snively,
Self-acceleration and instability of gravity wave packets: 1.
Effects of temporal localization, J. Geophys. Res. Atmos.
120, 8783-8803 (2015).
Snively2017
J. B. Snively, Nonlinear gravity wave forcing as a source of
acoustic waves in the mesosphere, thermosphere, and ionosphere,
Geophys. Res. Lett. 44, 12020-12027 (2017).
Fritts2019
T. Mixa, D. Fritts, T. Lund, B. Laughman, L. Wang, and L. Kantha,
Numerical simulations of high-frequency gravity wave propagation
through fine structures in the mesosphere, J. Geophys. Res. Atmos.
124, 9372-9390 (2019).
Stenflo1987
L. Stenflo, Acoustic solitary waves, Phys. Fluids 30,
3297-3299 (1987).
Stenflo1990
L. Stenflo, Acoustic gravity vortices, Phys. Scripta 41,
641-642 (1990).
Stenflo2009
L. Stenflo and P. K. Shukla, Nonlinear acoustic-gravity waves, J.
Plasma Physics 75, 841-847 (2009).
Shukla1998
P. K. Shukla and A. A. Shaikh, Dust-acoustic gravity vortices in a
nonuniform dusty atmosphere, Phys. Scripta T75, 247-248
(1998).
Fedun2013
O. Onishchenko, O. Pokhotelov, and V. Fedun, Convective cells of
internal gravity waves in the earth's atmosphere with finite
temperature gradient, Ann. Geophys. 31, 459-462 (2013).
Jovanovic2001
D. Jovanović, L. Stenflo, and P. K. Shukla, Acoustic gravity
tripolar vortices, Phys. Lett. A 279, 70-74 (2001).
Jovanovic2002
D. Jovanović, L. Stenflo, and P. K. Shukla, Acoustic-gravity
nonlinear structures, Nonlin. Proc. Geophys. 9, 333-339
(2002).
Fedun2016
O. G. Onishchenko, W. Horton, O. A. Pokhotelov, and V. Fedun,
"Explosively growing" vortices of unstably stratified atmosphere,
J. Geophys. Res. Atmos. 121, 11,264-11,268 (2016).
Fedun2021
O. Onishchenko, V. Fedun, I. Ballai, A. Kryshtal and G. Verth,
Generation of localised vertical streams in unstable stratified
atmosphere, Fluids 6, 454 (2021).
Misra2022IEEE
A. P. Misra, A. Roy, D. Chatterjee, T. D. Kaladze, Internal
gravity waves in the Earth's ionosphere, IEEE Transactions in
Plasma Science 50, 2603-2608 (2022).
Misra2022AdvSpace
T. D. Kaladze, A. P. Misra, A. Roy, D. Chatterjee, Nonlinear
evolution of internal gravity waves in the Earth's ionosphere:
Analytical and numerical approach, Adv. Space Research
69, 3374-3385 (2022).
Dysthe2008
K. Dysthe, H. E. Krogstad, and P. Müller, Oceanic Rogue
Waves, Annu. Rev. Fluid Mech. 40, 287-310 (2008).
Pelinovski2009
C. Kharif, E. Pelinovsky, and A. Slunyaev, Rogue Waves in
the Ocean (Springer, Berlin, 2009).
Solli2007
D. R. Solli , C. Ropers, P. Koonath, and B. Jalali, Optical rogue
waves, Nature 450, 1054-1057 (2007).
Frisquet2016
B. Frisquet, B. Kibler, P. Morin, F. Baronio, M. Conforti, G.
Millot, and S. Wabnitz, Optical dark rogue wave, Sci. Rep.
6, 20785 (2016).
Baronio2018
F. Baronio, B. Frisquet, S. Chen, G. Millot, S. Wabnitz, and B.
Kibler, Observation of a group of dark rogue waves in a
telecommunication optical fiber, Phys. Rev. A 97, 013852
(2018).
Ganshin2008
A. N. Ganshin, V. B. Efimov, G. V. Kolmakov, L. P. Mezhov-Deglin,
and P. V. E. McClintock, Observation of an inverse energy cascade
in developed acoustic turbulence in superfluid helium, Phys. Rev.
Lett. 101, 065303 (2008).
Bludov2009
Yu. V. Bludov, V. V. Konotop, and N. Akhmediev, Vector rogue waves
in binary mixtures of Bose-Einstein condensates, Eur. Phys. J.
Special Topics 185, 169-180 (2010).
Shukla2011
W. Moslem, P. Shukla, B. Eliasson, Surface plasma rogue waves,
Europhys. Lett. 96 25002 (2011).
Bailung2011
H. Bailung, S. K. Sharma, and Y. Nakamura, Observation of
Peregrine solitons in a multicomponent plasma with negative ions,
Phys. Rev. Lett. 107 255005 (2011).
Tlidi2016
M. Tlidi, Y. Gandica, G. Sonnino, E. Averlant, K. Panajotov,
Self-replicating spots in the brusselator model and extreme events
in the one-dimensional case with delay, Entropy 18, 64
(2016).
Yan2011
Z. Yan, Vector financial rogue waves, Phys. Lett. A 375,
4274-4279 (2011).
Marklund2010
L. Stenflo and M. Marklund, Rogue waves in the atmosphere, J.
Plasma Phys. 76, 293-295 (2010).
Dodd1982
R. K. Dodd, J. C. Eilbeck, J. D. Gibbon, and H. C. Morris,
Solitons and Nonlinear Wave Equations (Academic Press, London,
1982).
Faddeev1987
L. D. Faddeev and L. A. Takhtadjan, Hamiltonian Methods in
the Theory of Solitons (Springer-Verlag, Berlin, 1987).
Akhmediev1997
N. Akhmediev and A. Ankiewicz, Solitons, Nonlinear Pulses
and Beams (Chapman and Hall, London, 1997).
Chow1995
K. W. Chow, A class of exact, periodic solutions of nonlinear
envelope equations, J. Math. Phys. 36, 4125-4137 (1995).
Zakharov2013
V. E. Zakharov and A. A. Gelash, Nonlinear stage of modulational
instability, Phys. Rev. Lett. 111, 054101 (2013).
Ahmediev1985
N. N. Akhmediev, V. M. Eleonskii, N. E. Kulagin, Generation of
periodic trains of picosecond pulses in an optical fiber: exact
solutions, Sov. Phys. JETP 62, 894-899 (1985).
Ahmediev1986
N. N. Akhmediev, V. I. Korneev, Modulation instability and
periodic solutions of the nonlinear Schrödinger equation,
Theor. Math. Phys. 69, 1089-1093 (1986).
Ahmediev1987
N. N. Akhmediev, V. M. Eleonskii, N. E. Kulagin, Exact first-order
solutions of the nonlinear Schrödinger equation, Theor. Math.
Phys. 72, 809-818 (1987).
Kuznetsov1977
E. A. Kuznetsov, Solitons in a parametrically unstable plasma,
Sov. Phys. Dokl. 22, 507-508 (1977).
Kawata1978
T. Kawata, H. Inoue, Inverse scattering method for the nonlinear
evolution equations under nonvanishing conditions, J. Phys. Soc.
Jpn. 44 1722-1729 (1978).
Ma1979
Y.-C. Ma, The perturbed plane-wave solutions of the cubic
Schrödinger equation, Stud. Appl. Math. 60, 43-58
(1979).
Peregrine1983
D. H. Peregrine, Water waves, nonlinear Schrödinger equations
and their solutions, J. Aust. Math. Soc. Series B, Appl. Math.
25, 16-43 (1983).
|
http://arxiv.org/abs/2307.02919v1
|
20230706111618
|
On the degrees of freedom of gravitational radiation with positive cosmological constant
|
[
"Francisco Fernández-Álvarez"
] |
gr-qc
|
[
"gr-qc",
"hep-th"
] |
Results on the isolation of the radiative degrees of freedom of the gravitational field with a positive cosmological constant in full General Relativity are put forward. Methods employed in a recent geometric characterisation of gravitational radiation are used and, inspired by Ashtekar's work on asymptotically flat space-times, a space of connections is defined. Ground differences emerge due to the space-like character of the conformal boundary, and one has to put into play a fundamental result by Friedrich concerning the initial value problem for space-times with a positive cosmological constant. Based on this, half of the radiative degrees of freedom are identified; remarkably, they utterly determine the gravitational radiation content for space-times with algebraically special rescaled Weyl tensor at infinity. Directions for defining the phase space in the general case are proposed.
§ INTRODUCTION
Gravitational radiation is a non-linear feature of the full theory of General Relativity. Theoretical developments in the study of this phenomenon trace back to Einstein's quadrupole formula <cit.>, followed by great achievements in the non-linear regime during the 50s <cit.> and 60s <cit.>. However, it is not until the 70s when a robust geometrical understanding was available thanks to the work of Geroch <cit.>. The perspective adopted there does not depend on the choice of coordinates nor any other gauge and is built using the tools of the conformal compactification by Penrose <cit.>. Arguably, this approach is better suited to address underlying conceptual questions. As an example, Ashtekar's way of quantising the asymptotic gravitational field <cit.> was grounded on this geometrical formulation, upon which the radiative degrees of freedom at infinity were determined <cit.> and symplectic methods were developed <cit.>.
It is important to point out that most of these advances are only applicable when the cosmological constant vanishes and, while detection of gravitational waves <cit.> urges us to study in detail this aspect of gravity, evidence of a positive cosmological constant <cit.> in our universe requires new progress in the theory —this issue was already exposed by Penrose in <cit.>. A new geometrical description of gravitational radiation at infinity in full General Relativity, that applies to the case with non-negative cosmological constant, is now available <cit.> —see also references therein and <cit.> for review on other approaches too. Some of the open questions to be addressed include: the identification of a phase space of radiative modes; an abstract formulation of the characteristic problem in conformal space-time that could be connected with the presence of gravitational waves at infinity according to the new characterisation; a general formulation of energy-momentum and angular-momentum, as well as a mass-loss formula describing the energy carried by gravitational waves that arrive at infinity. This paper focuses on the first of these issues.
In the first part of the work, a class of differential operators associated to triads of vector fields is introduced on an abstract three-dimensional Riemannian manifold. The space of all such operators is shown to have the structure of an affine space where each point is labelled by two functions. Also, it is proved that for C^∞ metrics the Levi-Civita connection belongs to that space and is determined by a set of three families of two-dimensional connections associated to a triad of vector fields. In the second part, the conformal boundary of space-times with positive cosmological constant is considered. Based on results by Friedrich <cit.> and using the new characterisation of gravitational radiation in full General Relativity, it is argued how the two `coordinates' of the space of differential operators correspond to half of the gravitational radiative degrees of freedom. Remarkably, for space-times having an algebraically special rescaled Weyl tensor at infinity, the gravitational radiation is fully determined by these two degrees of freedom. For the general case, it is suggested how another pair has to come from the TT-tensor in the conformal initial data. Some examples of exact solutions are given and, also, a comparison with Ashtekar's radiative phase space in asymptotically flat space-times is presented.
§ TWO-DIMENSIONAL CONNECTIONS IN THREE-DIMENSIONAL RIEMANNIAN MANIFOLDS
In the next two sections, an abstract 3-dimensional Riemannian manifold with metric _ab will be considered. Let denote a congruence of curves on given by a unit vector field m^a and parameter v that can be viewed as a function of the coordinates —a specific coordinate system will not be introduced though. The quotient / is a two-dimensional differentiable[It can be the case that is not globally differentiable.] manifold, not Riemannian in general because it is not endowed with a standard metric. This is called the `projected surface' —the reader is referred to appendix A in <cit.> for a detailed formulation. Let E^a_A and W_a^A be a couple of linearly independent vector fields and forms on , orthogonal to m_a and m^a and, hence constituting a pair of dual bases on , where indices A, B, C...=2,3. Using these fields, the projector to the space orthogonal to m_a is constructed as[Underlining of tensors indicates that they are orthogonal to some vector field giving a congruence; in this case, m^a.]
P^a_bE^a_CW_b^C, P^c_bm_c=0=P^a_cm^c, P^c_c=2 ,
which in terms of m_a reads
P_ab=_ab-m_am_b .
The acceleration, expansion tensor, and vorticity of m^a are denoted by
a_bm^c_cm_b , κ_abP^c_aP^d_b_(cm_d) , ω_abP^c_aP^d_b_[cm_d] ,
respectively, and the shear is defined as
Σ_abκ_ab-1/2P_abκ , κP^cdκ_cd .
Since these tensors are orthogonal to m^a, they can be expressed as
a_a=W_a^Aa_A , κ_ab=W_a^AW_b^Bκ_AB , Σ_ab=W_a^AW_b^BΣ_AB , ω_ab=W_a^AW_b^Bω_AB .
It is possible to define a one-parameter family —depending on v— of metrics _AB, connections γ^A_BC and volume forms ϵ_AB in –see appendix A in <cit.>. From now on, for the sake of briefness, they are referred to as metric, connection and volume form. Nevertheless, one has to bear in mind their true significance; it will be explicitly remarked when necessary. In particular, the metric _AB on and its inverse ^AB can be written as
_AB=E^a_AE^b_B_ab , ^AB=W_a^AW_b^B^ab .
Also, one introduces a two-dimensional connection γ^C_AB as
γ^C_ABE^a_AW_c^C_aE^c_B ,
where _a stands for the covariant derivative associated to the three-dimensional Levi-Civita connection Γ^a_bc of ,_ab. The connection γ^C_AB is used to define a covariant derivative _A on in the usual way
_Av^BE^a_A∂_av^B+γ^B_ACv^C, with v^A=v^aW_a^A, v^am_a=0.
This derivative operator is metric and volume-preserving and for a general tensor field T^a_1 ...a_r_b_1...b_q on one has
W_m_1^A_1...W_m_r^A_rE^n_1_B_q...E^n_q_B_qE^r_C_rT^m_1...m_r_n_1...n_q=_CT^A_1...A_r_B_1...B_q
+∑_i=1^rT^A_1...A_i-1s A_i+1...A_r_B_1...B_qm_sκ_C^A_i+ω_C^A_i+∑_i=1^qT^A_1...A_r_B_1...B_i-1s B_i+1...B_qκ_CB_i+ω_CB_im^s.
§ TRIPLETS OF TWO-DIMENSIONAL CONNECTIONS AND TRIADS
If one considers different families of curves on ,_ab, each one has its own projected surface. For the rest of the work, it is convenient to introduce the following definition involving three orthogonal congruences:
[Triad and triplet of connections]
Let e^a_i= m^a,m^a,m^a be a triad —with i=1,2,3—, a set of unit vector fields giving three orthogonal congruences on ,_ab or on an open subset Δ⊂, and denote by , and their associated projected surfaces. Then, a triplet of connections is a set _A,_,_ where each element represents the one-parameter family of two-dimensional connections in , and , respectively.
To prevent confusion, decorations are used in quantities and Latin indices associated with and , respectively. The following geometric identities follow by direct computation from definitions in <ref>:
Let ,_ab be a three-dimensional Riemannian manifold. Then, the kinematic quantities of a triad e^a_i= m^a,m^a,m^a obey the identities
Σ_ =ω_m+ω_m , Σ_m=ω_+ω_ m , Σ_m=ω_+ω_ m ,
Σ_ =-a_m-1/2κ=-Σ_=a_m+1/2κ,
Σ_ =-a_-1/2κ=-Σ_mm=a_+1/2κ,
Σ_ =-a_-1/2κ=-Σ_mm=a_+1/2κ.
Observe that, due to the orthonormality of the triad e^a_i, one has
m^a=E^a_m^=E^a_m^ , m^a=E^a_Am^A=E^a_m^ , m^a=E^a_m^=E^a_Am^A .
The subindices m, and denote contraction with m^a,m^a,m^a of the corresponding kinematic quantities. For instance, Σ_=^A^BΣ_AB.
Since the vorticities are antisymmetric, summation of the three relations in <ref> produces another identity that will be used later:
Let ,_ab be a three-dimensional Riemannian manifold. Then, the shears of a triad e^a_i= m^a,m^a,m^a fulfil
Σ_+Σ_ m+Σ_ m=0 .
Next, one of the main results is presented,
Let be a three-dimensional Riemannian manifold endowed with metric _ab, and consider a triad of unit vector fields e^a_i= m^a,m^a,m^a as in <ref>. Then, associated to this triad, there exists a unique well-defined torsion-free differential operator _a whose action on tensor fields T_b_1...b_n^c_1...c_q reads
_aT_b_1...b_n^c_1...c_q=D_aT_b_1...b_n^c_1...c_q+∑_i=1^nt^r_ab_iT_b_1...r...b_n^c_1...c_q-∑_j=1^qt^c_j_arT_b_1...b_n^c_1...r...c_q ,
where t^a_bc is a tensor field that is antisymmetric on its covariant indices, vanishes under index contraction, depends on the vorticities of e^a_i and can be expressed as
t^a_bc 2m^aω_bc+2m^aω_bc+2m^aω_bc ,
and D_aT_b_1...b_n^c_1...c_q is a tensor field depending only on the triplet of two-dimensional connections of <ref> associated with e^a_i and acting on components of T_b_1...b_n^c_1...c_q. Also, this differential operator is related to the Levi-Civita connection _a of ,_ab by
_aT_b_1...b_n^c_1...c_q=_aT_b_1...b_n^c_1...c_q-∑_i=1^ns^r_ab_iT_b_1...r...b_n^c_1...c_q+∑_j=1^qs^c_j_arT_b_1...b_n^c_1...r...c_q ,
where s^a_bc is a tensor field symmetric on its covariant indices, traceless in all of them, that can be written in terms of the shears of e^a_i as
s^a_bc 2Σ_m^am_(bm_c)+2Σ_ mm^am_(bm_c)+2Σ_mm^am_(bm_c) .
By `well defined differential operator' it is meant that the typical properties –e.g., see <cit.>– are fulfilled: linearity, Leibnitz rule, commutativity with contraction of indices, coinciding with the partial derivative when acting on functions, and torsion-free.
The tensor field s^a_bc cannot be written in terms of the triplet of connections _A,_,_ because Σ_, Σ_ m, Σ_m depend on `purely three-dimensional' data of the Levi-Civita connection. More concretely, the Christoffel symbols given by Γ^m_, Γ^_m and Γ^_ m are involved.
Begin by introducing an auxiliary differential operator D_a defined by
D_aT_b_1...b_n^c_1...c_q_aT_b_1...b_n^c_1...c_q-∑_i=1^ns^r_ab_i+t^r_ab_iT_b_1...r...b_n^c_1...c_q+∑_j=1^qs^c_j_ar+t^c_j_arT_b_1...b_n^c_1...r...c_q .
Using the properties of _a it is straightforward to check that D_a is unique and satisfies all properties of <ref> except for having torsion –i.e., for a general differentiable function f, D_aD_bf-D_bD_af=-2t^c_abcf≠ 0. The next step is to show that D_aT_b_1...b_n^c_1...c_q depends only on the action of the triplet of two-dimensional connections of <ref> on T_b_1...b_n^c_1...c_q. To see this, consider first the action on a one-form v_a,
D_av_b =_av_b-s^r_ab+t^r_abv_r
=W_a^AW_b^B_Av_B+m_am_bm^m^_m^ev_em_+m_am_bm^m^_m^ev_em_
+ m_am_bm^m^_v_+m_bm_am^m^_v_+m_am_bm^m^_v_+m_bm_am^m^_v_
+ m_am_bm^m^_v_+m^m^_m_m^ev_e .
In the computation above, one decomposes _av_b into its tangent and orthogonal parts with respect to m^a —or equivalently w.r.t. m^a or m^a— and then does the same with the mixed terms, but this time with respect to the other two elements of the triad —m^a and m^a. Afterwards, one introduces the derivative operators _A,_,_ where possible. There are terms which cannot be written in terms of these derivatives —for the reasons given in <ref>—, which precisely compensate the term s^r_ab+t^r_abv_b. The same type of algebraic manipulation leads to a formula for contravariant vector fields v^a,
D_av^b =_av^b+s^b_ar+t^b_arv^r
=W_a^AE^b_B_Av^B+m_am^bm^m__m^ev_em^+m_am^bm^m__m^ev_em^
+ m_am^bm^m__v^+m^bm_am_m^_v^+m_am^bm^m__v^+m^bm_am_m^_v^
+m_am^bm^m__v^+m^m__m^m^ev_e+
+^rbs^e_ar+t^e_arv_e+s^b_ar+t^b_arv^r .
By application of <ref>, the last line vanishes. One proceeds in the same way with tensor fields of arbitrary rank. This shows that D_aT_b_1...b_n^c_1...c_q depends only on the action of _A,_,_ on T_b_1...b_n^c_1...c_q. Then, using D_aT_b_1...b_n^c_1...c_q in <ref>, it follows that _aT_b_1...b_n^c_1...c_q is the desired torsion-free differential operator.
There is a classical result —see[See also Corollary 3.4 in <cit.> where analyticity of the metric is required, or <cit.> for an extension to pseudo-Riemannian manifolds.] theorem 4.2 in <cit.>— that plays an important role in what comes next,
Let be a three-dimensional Riemannian manifold endowed with metric _ab of class C^∞. Then, every point p of lies in a neighbourhood _p on which there exists a C^∞ coordinate chart x^a in which the metric takes the diagonal form
h=h_11x^1x^1+h_22x^2x^2+h_33x^3x^3 .
This, together with <ref>, implies the following result:
Let be a three-dimensional Riemannian manifold endowed with metric _ab of class C^∞. Then, every point p of lies in a neighbourhood _p on which there exists a triad of unit vector fields e^a_i=m^a,m^a,m^a such that the associated differential operator _a of <ref> coincides with the Levi-Civita connection of ,_ab:
_aT_b_1...b_n^c_1...c_q_aT_b_1...b_n^c_1...c_q ,
and its action on arbitrary tensor fields T_b_1...b_n^c_1...c_q depends only on the triplet of two-dimensional connections _A,_,_ of <ref> associated to that triad of vector fields.
Using the coordinate chart x^a of <ref>, one has that the triad given by
m^a=1/√(_11)δ^a_1 , m^a=1/√(_22)δ^a_2 , m^a=1/√(_33)δ^a_3
has vanishing vorticities –see also <ref>–
ω_ab=ω_ab=ω_ab=0 .
Then, t^a_bc=0. According to <ref>, this proves that the action of the associated _a on an arbitrary tensor field only depends on the triplet of two dimensional connections. Also, a direct calculation shows that
Γ^m_Γ^_mΓ^_ m 0 .
Using this, one has
Σ_=Σ_ m=Σ_ m=0 ,
which, alternatively, could have been proved by using t^a_bc=0 in <ref>.
Hence, s^a_bc=0 and, as follows from <ref>,
_aT_b_1...b_n^c_1...c_q_aT_b_1...b_n^c_1...c_q .
By this result, the action of the covariant derivative _a defined by the Levi-Civita connection of ,_ab is determined in every neighbourhood _p by the action of a triplet of two-dimensional connections. In principle, working by patches, one can cover , however different charts would have to be used, meaning that the metric would be diagonalised by different triads on different regions of . Apart from that, it is interesting to notice that all the curvature-related quantities of ,_ab can then be computed on each _p by the action of three two-dimensional connections.
An immediate consequence of <ref> is that a triad e^a_i= m^a,m^a,m^a associated to the _a that coincides with _a on each _p according to <ref> reads
m_a =√(_11)ax^1 ,
m_a =√(_22)ax^2 ,
m_a =√(_33)ax^3 .
These forms, each one being proportional to a gradient, have vanishing vorticities there. Thus, they characterise a set of 3 orthogonal foliations on _p.
A natural thing to do is to introduce a connection associated to _a as
γ^a_bcΓ^a_bc+s^a_bc ,
such that
_av^b=av^b+γ^b_acv^c.
Some properties of _a that can be computed straightforwardly are
_[a_b]f =0 ,
_[av_b] =_[av_b]=[av_b] ,
_[a_b]v_c =_[a_b]v_c+ _[as^r_b]cv_r+s^f_c[as^r_b]fv_r
=_[a_b]v_c+ _[as^r_b]cv_r+s^f_c[bs^r_a]fv_r ,
_a_bc =s^r_bc_ra .
Let e^a_i be a triad defining _a. Then, for any v_a=v_AW^A_a,
E^a_AE^b_B_av_b=E^a_AE^b_B_av_b=_Av_B ,
and the decorated version of this relation holds for v_a=v_W^_a and v_a=v_W^_a, respectively.
To conclude this section, consider conformal transformations of the metric _ab,
_ab=ω^2_ab ,
where ω is a positive definite function. This change also affects the metric on ,
_AB=ω^2_AB .
Accordingly, the kinematic quantities <ref> transform too —see Appendix E in <cit.>. In particular, one has that m^a=ω^-1m^a and
Σ_AB=ωΣ_AB , ω_AB=ωω_AB .
This should be expected, since umbilicity and surface-orthogonality are conformally invariant properties. Thus, the conformal invariance of s^a_bc and t^a_bc follows,
s^a_bc=s^a_bc , t^a_bc=t^a_bc ,
and, also, this shows that γ^a_bc undergoes the same change as Γ^a_bc:
γ^a_bcγ^a_bc +C^a_bc , C^a_bc=1/ω^at2_
t(bω_c)-_cbω_t ,
where ω_a_aω. All the results above, including <ref> are invariant under these kind of transformations.
§.§ The space of connections
Different triads can lead to different operators _a, according to <ref>. To study this, start with the following definition:
[Space of connections]
Define the space Ξ as the set of all possible differential operators _a of <ref> on or on an open Δ∈ .
From now on, no distinction between and Δ∈ will be made, but one has to take into account that these operators may not be defined globally according to <ref>. The difference between any two elements of Ξ can be characterised as the difference of their action on an arbitrary form v_b on . From <ref> one has
_a-_av_b=d^e_abv_e , d^e_abs^e_ab-s'^e_ab .
To know how many functions have to be specified to determine a point in Ξ, one just has to study the algebraic properties of d^a_bc. First note that lowering the contravariant index s_abc=_das^d_bc one has
* s_abc=s_acb .
* s_abc+s_bca+s_cab=0 .
* ^abs_abc=0 .
* ^acs_abc=0 .
* s_[abc]=0=s_(abc) .
* P^abs_abc=0=P^acs_abc, for any of the three projectors associated to the elements of the triad e^a_i= m^a,m^a,m^a defining s_abc.
Not all the properties are independent from each other: from <ref>, one can derive <ref>; whereas <ref> is independent from the others. <Ref> provide with 9, 10 and 3 constraints, respectively. And from <ref> one gets 3 additional independent equations, giving a total of 25 constraints that leave 2 independent components in s_abc. The same can be shown if one looks to <ref>; it turns out that all the information that has to be specified is encoded in 3 scalars: Σ_, Σ_ m and Σ_ m. But recall that from <ref> one has <ref>, which reduces the number of independent components to just 2 functions. However, this counting does not apply directly to d_abc=_dad^d_bc. The reason is that while <ref> are satisfied by this tensor too, <ref> does not. Still, it is possible to find three additional independent constraints. One proceeds first by writing d_abc using a generic basis ẽ^a_i –with i running from 1 to 3–,
d_ijk= 2Σλ^3_iλ^1_(jλ^2_k)+2Σλ^1_iλ^2_(jλ^3_k)+2Σλ^2_iλ^1_(jλ^3_k)
- 2Σ'ω^3_iω^1_(jω^2_k)-2Σω^1_iω^2_(jω^3_k)-2Σω^2_iω^1_(jω^3_k) ,
where λ^i_j and ω^i_j relate the basis ẽ^a_i with e^a_i=m^a,m^a,m^a and e'^a_i= m^a,m^a,m'^a, respectively:
ẽ^a_i =λ^j_ie^a_j ,
ẽ^a_i =ω^j_ie'^a_j .
Each of these transformation matrices can be expressed in terms of three Euler angles[The conventions for the definition of the Euler angles of <cit.> are used.]; denote them by ϕ,θ,ψ and β,γ,α, each set defining λ^i_j and ω^i_j, respectively. Then, one can express explicitly the components d_ijk as functions of Σ, Σ, Σ,Σ', Σ, Σ and the two sets of Euler angles. Observe that the following decomposition applies,
d_ijk=A_ijkϕ,θ,ψ,Σ,Σ,Σ̂+B_ijkβ,γ,α,Σ',Σ,Σ .
Using this, a direct algebraic manipulation shows that[<Ref> has to be used to derive <ref>.]
ψ+αd_133 =-d_211 ,
ψ+αd_123 =-d_322 ,
d_231 =-d_123-2∫A_322ψ-2∫B_322α .
Summing up, <ref> can be used to reduce the independent components to: d_123, d_231, d_133, d_211, d_322. Now, using <ref>, it is enough to specify 2 components —d_133 and d_123— to determine the other 3. These properties can be straightforwardly generalised to any linear combination of tensor fields d_abc with constant coefficients.
It is interesting to see that Ξ has the structure of an affine space[See <cit.> for example for the definition and nomenclature used here.] with underlying vector space Ξ⃗ (the vector space of tensors d^a_bc) over ℝ. One can define the map Φ of `translations' of Ξ as –omitting the indices in parenthesis containing d^a_bc and _a, for clearness–
Φ(,d)_av_b=_av_b+d^e_abv_e .
And for all _a and _a in Ξ there exists another map d: Ξ×Ξ→Ξ⃗ such that _av_b=_av_b+d(,)^e_abv_e. Observe that the symbol d(,)^e_ab is used in an abuse of notation, since d^e_ab are the elements of Ξ⃗. Notice also that, for any _a, _a and ”_a in Ξ,
d( ,)^e_ab+d(,”)^e_abv_e=d( ,”)^e_abv_e .
Also, one can fix an origin[The choice of origin is arbitrary, so `vectorialising' Ξ can be seen as a gauge fixing from the physical point of view.] _a in Ξ —see <ref>. Then, any other _a is characterised with respect to this origin by a tensor field s^a_bc, fulfilling <ref> —no matter the particular choice one does, the number of functions that coordinatize Ξ is two. One could think that the natural choice of origin in Ξ on each neighbourhood _p is given by the kind of triad of <ref> that —at least locally— diagonalises the metric, for which one has s^a_bc 0, _a_a:
_a-_av_bs^e_abv_e .
Importantly, one has that a triad e^a_i=m^a,m^a,m^a defines a unique _a, but the contrary is not true. Choosing ω_i^j=δ_i^j, one has that
ϕ=aπ/2 , θ=bπ/2 , ψ=cπ/2 ,
with a, b, c natural numbers, leads to d_abc=0. That, is two triads differing by a flip of sign and/or a swap of one or more elements produce the same s^a_bc, and hence the same _a. Hence, one has that the map φ from the space of possible triads Υ to the set of triplets of connections Ξ
φ:Υ ⟶Ξ
e^a_i ⟶_a
is a surjection —see <ref>. It is possible, indeed, to have more general transformations of a given triad that do not modify the associated _a; an example is having two different triads for which s^a_bc=0. This is illustrated in the case of the C-metric in <ref>.
Introduce now the following subset of triads,
The strict set of triads is denoted by Υ_s⊂Υ and consists of those triads e^a_i for which t^a_bc=0.
This subset Υ_s is of interest as it can be put in connection with the existence of the so called first components of news –see <ref> and <ref> later. Observe that by <ref> one has that
t^a_bc=0s^a_bc=0
and therefore that the restriction φ_s of φ acting on Υ_s maps every triad to _a,
φ_s:Υ_s ⟶Ξ
e^a_i ⟶_a
For any C^∞ metric _ab, one has Υ_s≠∅, since the triad given by <ref>, which diagonalises the metric according to <ref>, has t^a_bc=0 and, consequently, belongs to Υ_s.
As a final note, consider the conformal rescalings (<ref>). While _a changes according to <ref>, `distances' between points in Ξ are preserved, that is:
d(,)^e_ab=d(,)^e_ab ,
as follows from <ref>.
§ RADIATIVE DEGREES OF FREEDOM
Results presented so far apply to general three-dimensional Riemannian manifolds, some of them requiring a C^∞ metric. This second part of the article is devoted to show their connection with the characterisation of gravitational radiation in full General Relativity with a positive cosmological constant. Specifically, the aim is to work out a strategy for determining the radiative degrees of freedom at infinity. To that end, consider a physical space-time[Greek letters are used for abstract indices of quantities in four dimensional space-time. The signature of the metric is chosen to be -,+,+,+.] M̂,g_αβ admitting a conformal completion à la Penrose M,g_αβ, where the unphysical metric is related to the physical one by g_αβ=Ω^2g_αβ. The conformal factor Ω is strictly positive in M̂, vanishes at the conformal boundary[For a detailed description see e.g. <cit.>.] and is not unique; there exist the freedom of rescaling Ω by a positive definite function ω. This ambiguity is gauge, and as such, physical statements must not depend on the particular fixing[From this point of view, the fact that these transformations preserve Ξ —in the sense of <ref>— is a favourable feature.]. As already anticipated in the introduction, the case of interest to this work is that of g_αβ being a solution to the Einstein Field Equations with a positive cosmological constant Λ> 0. In this context, is a three-dimensional space-like hypersurface with metric[To be considered C^∞ when necessary for the application of <ref> and <ref>.] _ab —that transforms as <ref> under gauge changes— and normal N_α_αΩ. In general, has a past and a future component that are disconnected. All the results in this paper apply equally to both of them, hence no distinction will be made.
Under this set-up, it is well known that the Weyl tensor[The conventions for the curvature tensor are R_αβγ^μv_μ=_α_β-_β_αv_γ and R_αβ=R_αμβ^μ.] C_αβγ^δ vanishes at and that the rescaled version
d_αβγ^δΩ^-1C_αβγ^δ
is regular and in general different from zero there. The Bel-Robinson tensor <cit.> _αβγδ vanishes at too, so one constructs a rescaled version as
_αβγδ=d_αμγ^νd_δνβ^μ+^*d_αμγ^ν^*d_δνβ^μ ,
where the star denotes the hodge dual operation on the first couple of covariant indices. With it, one defines the asymptotic supermomentum as
p^α-N^μN^νN^ρ_μνρ^α .
Based on this object, a new way of characterising gravitational radiation in full General Relativity at in the presence of a non-negative Λ was put forward in <cit.> and later developed in <cit.> —for a review, see <cit.>. For Λ> 0, N^α is future-pointing and timelike in a neighbourhood of , and there one can work instead with the unit vector field n^α N^-1N^α, with -N^2=N_μN^μ. All the information of the rescaled Weyl tensor at is contained in its electric and magnetic parts defined with respect to this vector field,
D_ab e^α_ae^β_bn^μn^νd_μανβ ,
C_ab e^α_ae^β_bn^μn^ν^*d_μανβ ,
where e^α_a is any set of linearly independent vector fields at , orthogonal to n_α.
One can also introduce a `canonical' version of the asymptotic supermomentum,
^α=-n^μn^νn^ρ_μνρ^α ,
and write the decomposition
^αN^α+e^α_a^a.
The tangent part ^a to is called asymptotic super-Poynting vector field, and the scalar , asymptotic superenergy density[Supernergy formalism shares many properties with electromagnetism <cit.>. For a general treatment, see <cit.>. For the precise content used in the study of gravitational radiation at infinity, see section 2 of <cit.> —and references therein for other applications too.]. Using ^a, there is a way of determining the presence of gravitational radiation at in full General Relativity that we recall here[The criterion has an alternative, more geometric description in terms of principal directions of the rescaled Weyl tensor that is not going to be used here. For more details on this, see <cit.>.]:
[Asymptotic gravitational-radiation condition with Λ>0]
Consider a three-dimensional open connected subset Δ⊂.
There is no radiation on Δ if and only if the asymptotic super-Poynting vanishes there
^a 0 No gravitational radiation on Δ .
The radiation condition is equivalently stated in terms of the commutator of the electric and magnetic parts of the rescaled Weyl tensor,
DC_ab=0 ^a=0.
Importantly, <ref> is invariant under conformal gauge transformations (<ref>).
This characterisation is the first clue in the itinerary towards the isolation of the radiative degrees of freedom of the gravitational field; the second one is the results by Friedrich <cit.>, by means of which a three-dimensional Riemannian manifold endowed with a conformal class of metrics and a traceless and divergence-free (TT) tensor constitute initial/final data that determine a solution of the Λ-vacuum Einstein field equations. The Riemannian manifold turns out to be the conformal boundary, and the TT-tensor, the electric part (<ref>) of the rescaled Weyl tensor. Hence, all the available information about gravitational radiation has to be encoded in the triplet ,_ab,D_ab. An additional insight comes form calculating the magnetic part of d_αβγ^δ at , which shows that
C_ab=-√(3/Λ)Y_ab ,
with Y_ab being the Cotton-York tensor of ,_ab. This proves that C_ab is completely determined by the curvature of ,_ab, as
Y_ab=-1/2ϵ_a^cd_[cS_d]b
where the volume form ϵ_abc of and the intrinsic Schouten tensor S_ab have been introduced. Therefore, gravitational radiation is determined by an interplay —<ref>— between the intrinsic geometry of ,_ab and an extra piece of information —the TT-tensor D_ab. This is the perspective adopted here and largely illustrated in <cit.>.
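Since C_ab is fixed by the Cotton-York tensor of the boundary geometry, it can be computed directly from a candidate 3-metric. The sketch below (ours) implements the displayed formula Y_ab = -(1/2) ε_a^{cd} ∇_[c S_d]b with sympy for an illustrative conformally flat metric (the conformal factor is our assumption), for which Y_ab, and hence C_ab, must vanish; a non-conformally-flat metric would instead give a nonzero magnetic part.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = [x, y, z]
w = sp.exp(x + 2*y - z)                       # illustrative conformal factor (assumption)
h = sp.diag(w**2, w**2, w**2)                 # conformally flat 3-metric h_ab = w^2 delta_ab
hinv = h.inv()

# Christoffel symbols Gamma^a_{bc} of h_ab
Gamma = [[[sum(hinv[a, d]*(sp.diff(h[d, b], X[c]) + sp.diff(h[d, c], X[b])
                           - sp.diff(h[b, c], X[d])) for d in range(3)) / 2
           for c in range(3)] for b in range(3)] for a in range(3)]

def ricci(a, b):
    return sum(sp.diff(Gamma[c][a][b], X[c]) - sp.diff(Gamma[c][a][c], X[b])
               + sum(Gamma[c][c][d]*Gamma[d][a][b] - Gamma[c][b][d]*Gamma[d][a][c]
                     for d in range(3)) for c in range(3))

Ric = sp.Matrix(3, 3, lambda a, b: ricci(a, b))
R = sum(hinv[a, b]*Ric[a, b] for a in range(3) for b in range(3))
S = (Ric - R*h/4).applyfunc(sp.simplify)      # three-dimensional Schouten tensor

def DS(c, d, b):                              # covariant derivative nabla_c S_{db}
    return sp.diff(S[d, b], X[c]) - sum(Gamma[e][c][d]*S[e, b] + Gamma[e][c][b]*S[d, e]
                                        for e in range(3))

eps_low = lambda a, b, c: sp.sqrt(h.det()) * sp.LeviCivita(a, b, c)   # volume form eps_{abc}

def Y(a, b):                                  # Y_ab = -(1/2) eps_a^{cd} nabla_[c S_d]b
    return -sp.Rational(1, 4) * sum(hinv[c, e]*hinv[d, f]*eps_low(a, e, f)
                                    * (DS(c, d, b) - DS(d, c, b))
                                    for c in range(3) for d in range(3)
                                    for e in range(3) for f in range(3))

print(sp.Matrix(3, 3, lambda a, b: Y(a, b)).applyfunc(sp.simplify))   # zero matrix
```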
Yet another point that deserves attention is the existence of a `first component of news' V_AB associated to a foliation given by some m^a. It has been shown in <cit.> that this object must be part of any possible news tensor. It is convenient to recall some definitions and results from that work.
[Equipped ]
An open, connected, subset Δ⊂ with the same topology than is said to be equipped when it is endowed with a congruence of curves characterised by a unit vector field m^a. The projected surface Δ/ and are characterised by the conformal family of pairs
P_ab,m_a,
where P_ab is the projector to . Two members belong to the same family if and only if P'_ab,m'_a=Ψ^2P_ab,Ψm_a, where Ψ is a positive function on .
[Strictly equipped ]
One says that is strictly equipped when it is equipped and the unit vector field m^a is surface-orthogonal, providing a foliation by cuts.[By `cut' one means any two-dimensional Riemannian manifold with induced metric _AB.]
When considering a foliation (ω_ab=0), one can always write
m_a=F_av with 1/F=__m⃗v,
where each leaf _v is labelled by a constant value of the parameter v along the curves. Next, define S_ABE^a_AE^b_BS_ab and introduce the following combination — where Σ^2Σ_ABΣ^AB and ω^2ω_ABω^AB:
U_AB S_AB+1/2κΣ_AB+L_AB,
L_AB 1/8κ^2-1/4Σ^2+3/4ω^2_AB.
Two key results read as follows:
Assume is strictly equipped with v the parameter along the curves and such that <ref> holds. If the leaves have 𝕊^2-topology, there is a unique tensor field ρ_ab on orthogonal to m^a (equivalently, a one-parameter family of symmetric tensor fields ρ_ABvE^a_AE^b_Bρ_ab on the projected surface ) whose behaviour under conformal rescalings (<ref>) is
ρ_AB=ρ_AB-a1/ω_Aω_B+2a/ω^2ω_Aω_B-a/2ω^2ω_Cω^C_AB ,
where ω_A_Aω, a∈ℝ, and satisfies the equation
P^d_aP^e_bP^f_c_[fρ_d]e= 0
in any conformal frame. This tensor field must have a trace ρ^e_eP^aeρ_ae=aK and reduces, at each leaf, to the corresponding tensor ρ_AB with all its properties.
In particular, it is given for the round-sphere one-parameter family of metrics by ρ_ab=P_abaK/2.
Here, K is the family of Gaussian curvatures on —see appendix A in <cit.>.
Assume is strictly equipped with v the parameter along the curves and such that <ref> holds. If the leaves have 𝕊^2-topology, there is a one-parameter family of symmetric traceless gauge-invariant tensor fields
V_ABU_AB-ρ_AB,
that satisfies the gauge-invariant equation
_[AU_B]C=_[AV_B]C,
where ρ_AB is the family of tensor fields of <ref> (for a=1). Besides, V_AB is unique with these properties.
In case the topology of the projected surface is not 𝕊^2, one has the generalisations of <ref> and <ref> —corollary 6.3 and 6.4 in <cit.>, respectively. Standard topologies such as 𝕊^2, ℝ×𝕊^1 or ℝ^2 are permitted.
In addition, for algebraically special rescaled Weyl tensor at , and under appropriate conditions, V_AB determines the presence of gravitational radiation (i.e., V_AB=0^a=0, and V_AB is the `whole' news tensor, see Theorem 2 in that reference) —as an example of this, the C-metric is discussed in <ref>. It is worth noting that the news tensor of Λ=0 conformal infinity enters in the expression of the (Bondi-Trautman) energy-momentum derived by Geroch <cit.> and angular momentum using the symplectic formalism <cit.> —see also <cit.>. Hence, any deeper understanding of this class of tensor fields is of interest if one aims at a mass-loss formula in the scenario with Λ>0.
After this brief review, observe that if one knows the Levi-Civita connection ∇_a of (, h_ab), then C_ab can be computed. Thus, from the viewpoint of <ref>, ∇_a must contain one part of the radiative degrees of freedom. In fact, one can be tempted to say that it should be characterised by half of the dof, noticing the symmetry between the role played by C_ab and D_ab in <ref>. Next, consider the affine space Ξ of differential operators _a on (, h_ab). By <ref>, on each neighbourhood _p, the Levi-Civita connection belongs to this space, ∇_a∈Ξ, and it is determined with respect to a fixed origin ∇̊_a∈Ξ by two functions encoded in s^c_ab,
(∇_a - ∇̊_a) v_b = s^e_ab v_e .
These represent one part of the asymptotic radiative degrees of freedom of the gravitational field in full general relativity with a positive cosmological constant. Note that s^a_bc corresponds to d,^a_bc and, complementarily, _a can be given coordinates 0,0 in Ξ —see <ref>. Remarkably, there is a broad class of space-times for which these two degrees of freedom suffice to determine the content of gravitational radiation at infinity; those having an algebraically special d_αβγ^δ at —see <ref>. Noting that t^a_bc=0 imposes that each element of e^a_i∈Υ_s is surface-orthogonal (i.e., defines a foliation), <ref> is applicable to each of the elements e^a_i=m^a,m^a,m^a of the triad, leading to the following result
Let m^a,m^a,m^a be the elements of a triad e^a_i∈Υ_s. Then, provided that the corresponding projected surfaces , and associated with each of the elements satisfy the topology restriction of <ref> —or its generalisation to other topologies indicated in <ref>—, there exists a set of three first components of news, V_AB, V_, V_, corresponding to each of the elements of the triad.
Recalling <ref> and using <ref>, it follows that locally, and provided that the topology requirements of <ref> are met, it is always possible to find three first components of news.
One further justification of Ξ containing part of the radiative degrees of freedom comes from <ref>. Given a foliation by umbilical cuts —Σ_AB=0— defined by m^a, and the set of all possible triads containing this vector field, there is an associated set of operators _a that induce the covariant derivative _A in . And this determines the associated V_AB. A different umbilical foliation, given by m'^a, has a different V'_AB, and its connection _A' is determined now by a different set of operators _a. From the point of view of Ξ, any element of _a is determined from any element of _a by two functions.
§.§ The algebraically special case
The case with d_αβγ^δ algebraically special at —i.e., when the rescaled Weyl tensor has at least one repeated principal null direction at — is worthy of attention. A result[The wording has been adapted to suit better the present work.] of <cit.> (Lemma 6.8) is applicable, and allows to translate this condition to an equation involving just ,_ab,D_ab and a vector field m^a,
Assume that d_αβγ^δ is algebraically special at and denote by m^a the unit vector field aligned with the projection to of (any of the) repeated principal null directions. Then
D_ab - (1/2) D_ef m^e m^f (3m_a m_b - h_ab) = m^d ϵ_ed(a[ C_b)^e + m_b) m^f C_f^e ].
Thus, in this case, the TT-tensor D_ab is determined by C_ab except for the component m^am^bD_ab. Nevertheless, the latter is not involved in the radiation condition, as one has that <ref> implies
𝒫^a = 0 ⟺ D_ab = (1/2) D_ef m^e m^f (3m_a m_b - h_ab) ⟺ C_ab = (1/2) C_ef m^e m^f (3m_a m_b - h_ab) .
This last statement follows from Corollary 6.7 in <cit.>, where the radiant superenergy formalism was used[With equations (2.50, 2.52) and property (c) on page 16 in that work, one recasts the result as (<ref>)]. Notice that, due to the fact that D_ab and C_ab are traceless, one has
P^abD_ab+m^am^bD_ab=0 , P^abC_ab+m^am^bC_ab=0 ,
so that, indeed, no m^am^bD_ab (m^am^bC_ab) component is involved in <ref> . Hence, in this particular scenario, the presence of radiation is fully determined by the Cotton-York tensor of (, h_ab), that is, by the two degrees of freedom[In <cit.> it is stated that “The condition C_ab=0 removes `half the radiative degrees of freedom' in the gravitational field and, in addition, the gravitational waves it does allow can not carry any of the de Sitter momenta across ”. From the perspective of <ref> , C_ab=0 not only cuts the degrees of freedom by half, but it kills gravitational radiation —which somehow solves the puzzle of having waves that `can not carry any of the de Sitter momenta across '. Instead, what is stated in this work is that half of the radiative degrees of freedom are removed by condition (<ref>) in a very different way, as neither C_ab nor D_ab vanish in general, so that the presence of gravitational waves at infinity is completely feasible —and, indeed, that is the case except for d_αβγ^δ having two repeated principal null directions (type D) at co-planar with the normal N^α <cit.>.] of Ξ. Notably, this situation is in correspondence with what happens in the scenario with Λ=0, where the phase space of connections introduced by Ashtekar <cit.> at has precisely 2 degrees of freedom, as the transport equations for the relevant curvature fields along the generators are sourced by the fields themselves[ This last point also suggests that the missing pair of dof in the general case may appear at by some kind of time-derivative of the intrinsic fields of a previous space-like hypersurface.]. Working out further details, one can introduce a notion of incoming radiation, and the analogy with Λ=0 goes deeper, as in that case the `electric' part of the curvature is determined by the `magnetic' part except for the Coulomb component –see remark 6.7 in <cit.> for more details. Hence, this situation may be the closest one –from the point of view of gravitational radiation– to the asymptotically flat case. The analogy does not come free of charge, as condition (<ref>) constrains the radiative degrees of freedom. Thus, it should not be surprising that adapting non-geometric methods of the Λ=0 scenario —such as coordinate systems à la Bondi or boundary conditions on metric coefficients <cit.>— may give answers that differ from the general ones obtained in the present approach.
Some relevant examples of exact solutions for which d_αβγ^δ is algebraically special at include the Robinson-Trautman metrics and the C-metric —both radiative— and the Kerr-de Sitter like solutions[The Kerr-de Sitter like solutions are characterised by having a KVF whose associated rescaled Mars-Simon tensor vanishes. In general, they have non-vanishing D_ab and C_ab at , and their form is that of <ref> with m^a being the conformal KVF induced by the space-time KVF, therefore they do not have gravitational radiation at . There is a generalisation called asymptotically Kerr-de Sitter like, in which the rescaled Mars-Simon tensor is required to vanish at but not necessarily everywhere; this more broad class can contain gravitational radiation at infinity.] <cit.> —non radiative—, whose content on gravitational radiation was analysed in <cit.> —see also <cit.>. In what follows the Kerr-de Sitter metric and C-metric with Λ>0 are used to illustrate the relation between shears of different triads, serving as examples for <ref>, <ref>, and also to give cases of the surjection φ of <ref>. The basic radiative properties are concisely reviewed and conventions are followed according to section 8 of <cit.>.
§.§.§ Kerr-de Sitter
Kerr-de Sitter is an example of a space-time with conformally flat metric _ab at , hence the Cotton-York vanishes and condition (<ref>) is satisfied, meaning that there is no gravitational radiation arriving at infinity. The metric at reads
h = N^2 dt^2 + (sin^2θ/(1+N^2a^2)) dϕ^2 - (N^2 a sin^2θ/(1+N^2a^2)) (dϕ dt + dt dϕ) + (1/(1+N^2a^2cos^2θ)) dθ^2 ,
where N^2=Λ/3. Now, consider the following triad e^a_i,
m^a = (1+N^2a^2)/(N(1+N^2a^2cos^2θ)^1/2) (δ^a_t + aN^2 δ^a_ϕ) ,
m^a = (1+a^2N^2)^1/2/sinθ δ^a_ϕ ,
m^a = (1+a^2N^2cos^2θ)^1/2 δ^a_θ .
It has Σ_=Σ_ m=Σ_ m=0, which trivially fulfils <ref> and shows that s^a_bc=0. Then, according to <ref>, the associated _a coincides with the Levi-Civita _a connection. Not only that, but one can compute the vorticities to get ω_=ω_ m=ω_ m=0, which is in agreement with <ref> and implies that t^a_bc=0. Thus, e^a_i∈Υ_s and, according to <ref>, not only _a coincides with _a, but its action on arbitrary tensor fields is completely determined by the triplet of two-dimensional connections associated to this triad:
_av_b=_av_b =W_a^AW_b^B_Av_B+m_am_bm^m^_m^ev_em_+m_am_bm^m^_m^ev_em_
+ m_am_bm^m^_v_+m_bm_am^m^_v_+m_am_bm^m^_v_+m_bm_am^m^_v_
+ m_am_bm^m^_v_+m^m^_m_m^ev_e .
In passing by, notice that t^a_bc=0 implies that one can find scalar functions u, v, w, A, B, C, depending on the coordinates t,ϕ,θ, such that
m_a = A du , m_a = B dv , m_a = C dw .
Indeed, one can easily identify
u=t , v=ϕ-aN^2t , w=θ
and
A = N(1+a^2N^2cos^2θ)^1/2/(1+a^2N^2) , B = sinθ/(1+a^2N^2)^1/2 , C = 1/(1+N^2a^2cos^2θ)^1/2 .
Then, u,v,w diagonalises the metric
h = N^2(1+a^2N^2cos^2w)/(1+a^2N^2)^2 du^2 + sin^2w/(1+a^2N^2) dv^2 + 1/(1+N^2a^2cos^2w) dw^2 ,
which also illustrates <ref>. In addition to this, it is possible to compute <cit.> the first component of news V_AB associated to this m^a and check that
V_AB=0 .
As a different choice of triad, make a rotation around m^a to get a new couple of elements,
m^a =1/Nδ^a_t ,
m^a =1/1+a^2N^2cos^2θ^1/2asinθδ^a_t+1+a^2N^2δ^a_ϕ .
Observe that m^a is a Killing vector field, thus one has Σ_=0. For the other two shears one finds
Σ_ m=-Σ_ m=aNcosθ ,
which again satisfies <ref>. That is, in this case s^a_bc≠ 0 and the operator _a is different from the previous one, so that it does not coincide with _a. This _a can be labelled with the values of any of the three shears' components, e.g. Σ_, Σ_ m=0,aNcosθ.
§.§.§ C-metric with positive cosmological constant
The C-metric (here assuming Λ>0) represents two accelerating black holes <cit.>. Hence, it is not surprising that it contains gravitational radiation at <cit.>. In the gauge choices of <cit.>, the metric at reads
—see also <cit.> for a recent treatment of the generalised black-hole metrics of Petrov type D, which include the C-metric:
h = (a^2S+N^2) dτ^2 + N^2/(S(a^2S+N^2)) dp^2 + S dσ^2 .
Here 2ma< 1, S(p) (1-p^2)(1-2a mp)≥ 0 with p∈(-1,1], σ∈[0,2π /(1-2am)) and N^2=Λ/3 as before. The rescaled Weyl tensor is algebraically special at but the metric is not conformally flat. Thus <ref> holds and all the information about gravitational radiation can be encoded in the Cotton-York tensor Y_ab –see <ref>–, and having a non-vanishing asymptotic super-Poynting vector field,
𝒫^a = √(3/Λ) 18 a m^2 S (1 + (6/Λ) S a^2) δ_p^a .
It was shown in <cit.> that, associated to the foliation given by
m^a=-N/Tδ_τ^a-a S/Nδ_p^a
the first component of news V_ab reads
V_AB = (H/S^2) ∇_A p ∇_B p - H ∇_Aσ ∇_Bσ ,
where on each leaf
H ≐ ∫ 3a m S dp = 3a m (1/2 a m p^4 - 1/3 p^3 - a m p^2 + p) + 3/2 m^2a^2 - 2a m .
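As a quick consistency check of the bracket placement in the expression for H above (our reconstruction), one can verify symbolically that H'(p) = 3amS(p) and that the constant term makes H vanish at p = 1. A minimal sympy sketch, with variable names that are ours and not from the paper:

import sympy as sp

p, a, m = sp.symbols('p a m', real=True)

S = (1 - p**2) * (1 - 2*a*m*p)
# Reconstructed antiderivative of 3*a*m*S (constant chosen so that H(1) = 0)
H = 3*a*m*(sp.Rational(1, 2)*a*m*p**4 - sp.Rational(1, 3)*p**3 - a*m*p**2 + p) \
    + sp.Rational(3, 2)*a**2*m**2 - 2*a*m

assert sp.expand(sp.diff(H, p) - 3*a*m*S) == 0   # H'(p) = 3 a m S(p)
assert sp.expand(H.subs(p, 1)) == 0              # H vanishes at p = 1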
Indeed, not only m^a, C_ab and D_ab obey <ref>, but also
V_AB = 0 ⟺ 𝒫^a = 0 ⟺ no gravitational radiation (a=0) .
One can complete a triad by choosing, for instance,
m^a = aS^1/2/(Sa^2+N^2) δ^a_τ + S^1/2 δ^a_p ,
m^a = 1/S^1/2 δ^a_σ .
The explicit computation of the shears gives s^a_bc=0. The vorticity of m_a was computed in <cit.>, and it vanishes. Thus, using these results and <ref> one has also that t^a_bc=0. That is, e^a_i∈Υ_s, so that _a=_a and this connection is determined by the triplet of two-dimensional derivative operators _A,_,_. In addition, it serves as a non-trivial example of the surjection φ from the space of triads Υ to the space of connections Ξ –see <ref>–, as the triad
m_a = (a^2S+N^2)^1/2 ∇_a τ ,
m_a = (N^2/(S(a^2S+N^2)))^1/2 ∇_a p ,
m_a = S^1/2 ∇_a σ,
which diagonalises the metric in the given coordinate system, is also in Υ_s and produces the same _a.
It is worth showing that a different triad can be defined, such that s^a_bc≠ 0≠t^a_bc. Keeping the m^a of <ref>, now choose
m^a = a/(N^2+a^2S) δ^a_τ + δ^a_p - (√(S-1)/S) δ^a_σ ,
m^a = √(S-1) a/(a^2S+N^2) δ^a_τ + √(S-1) δ^a_p + (1/S) δ^a_σ .
This triad is only defined for S>1, but for fixed a and m (obeying 2am<1) one can always find a region on (a range of values for the coordinate p) in which this condition is satisfied; enough for the purposes of this example. The new components of the shears read
Σ_ m=-Σ_ m=-a/2N√(S-1)pS .
This leads to a different connection _a≠_a, and can be given coordinates Σ_, Σ_ m=0,a2N√(S-1)^-1pS.
§.§ Comparison with Ashtekar's phase space
The investigation by Ashtekar <cit.> of the radiative degrees of freedom in asymptotically flat space-times has fundamental differences with respect to the one put forward here. One of them, which underlies everything else, is the lightlike nature of when Λ=0. Yet, inspired by Ashtekar's work, the present work aims at isolating the asymptotic radiative degrees of freedom by constructing a space of connections. Thus, some similarities arise and the following brief comparison between Ashtekar's phase space[Elements of Ashtekar's phase space are equivalence classes of connections arising from some restricted conformal rescaling. Here, such identification is unnecessary.] Γ and the space Ξ can be drawn:
At with Λ=0,
* Each of the (equivalence classes of) connections in the space Γ determines a curvature ^*K^ab —^NC^ab in the notation of <cit.>— and a news tensor. When seen from the viewpoint of the space-time, ^*K^ab is the pullback of the `magnetic' part —w.r.t. N^a— of the rescaled Weyl tensor, and the induced connection on selects one of these connections.
* The space Γ has an affine structure and can be parametrised by two functions. These correspond to the 2 degrees of freedom of the radiative components of the gravitational field. In this case, due to the lightlike character of , they represent all the degrees of freedom of the phase space of gravitational radiation. This is supported by the radiation condition based on the vanishing of the news tensor —or, equivalently, by the vanishing of the tangent part of the asymptotic supermomentum p^a <cit.>.
* To specify the induced connection with respect to a suitably fixed point in Γ, it is enough to give the shear of a one-form ℓ_b that satisfies ℓ_bN^b=-1 —where small Latin indices are used, as the normal to is also tangent.
At with Λ > 0,
* On each neighbourhood U_p∈, there is a differential operator _a∈Ξ that coincides with the Levi-Civita connection _a of ,_ab. This determines[One could also take a different perspective; namely, defining with each _a a curvature and a Cotton-York like tensor.] the Cotton-York tensor Y_ab of the manifold. Also, from the viewpoint of the space-time, Y_ab is the magnetic part of the rescaled Weyl tensor at , and _a, the induced connection —in this sense, _a on each _p `selects' a _a∈Ξ.
* The space Ξ has an affine structure and can be parametrised by two functions. Due to the space-like character of , they cannot represent the complete set of radiative degrees of freedom in the general case. This is supported by the radiation <ref> based on the vanishing of the tangent part of the asymptotic supermomentum p^a —which involves D_ab too. However, for algebraically special rescaled Weyl tensor at , these 2 degrees of freedom determine completely the gravitational radiation.
* To specify the induced connection with respect to an arbitrary point _a in Ξ, it is enough to give the crossed components of two shears of a triad e^a_i.
§ DISCUSSION
It has been shown that an affine space Ξ of differential operators _a emerges in Riemannian manifolds from the space Υ of possible triads e^a_i, such that the action of each _a on arbitrary tensor fields depends on the action of a triplet of two-dimensional connections _A,_A,_A and an antisymmetric term t^a_bc containing the vorticities of e^a_i. Also, that there is a surjective map φ from Υ to Ξ. Remarkably, for C^∞ Riemannian metrics, the Levi-Civita connection belongs to Ξ, and is fully determined by a triplet _A,_A,_A; hence, the subset Υ_s⊂Υ of triads that have t^a_bc=0 and are mapped to _a is not empty. Apart from that, points in Ξ can be labelled by 2 functions which have been argued to represent half of the degrees of freedom of the radiative gravitational field at infinity with a positive cosmological constant. Further investigation has to be carried out to extract all the possible structure in Ξ.
One way of understanding the spirit of this work comes from the dimension and causal character of . It can be argued that local methods do not suffice to determine the radiative degrees of freedom of gravity —e.g., see discussion in chapter 9 of <cit.>. Also, it is reasonable to think of gravitational radiation as linked to two-dimensional submanifolds. As an example, the theorems for the existence of news tensors —either the one of the asymptotically flat case or the first component of news with Λ>0— depend on two-dimensional cuts —e.g., see how the dimension intervenes in the proof of Corollary 5.2 in <cit.>. Additionally, observables such as the energy-momentum or mass-loss formula involve integration over a two-dimensional surface, which also relates to topology: when the cosmological constant vanishes, has ℝ×𝕊^2 topology <cit.> and the null generators naturally provide the conformal boundary with a 1+2 decomposition. For Λ>0, however, the topology of is not unique <cit.> and, in general, there is no natural choice of an intrinsic evolution direction equipping infinity with a 1+2 splitting. The space of triads Υ represents this directional freedom and induces a space of differential operators Ξ. From it, the geometry of the two-dimensional projected surfaces associated with a triad is used to show that the Levi-Civita connection belongs to Ξ, in a way that the curvature of can be determined by two-dimensional connections —see <ref>. Indeed, this result —<ref>— depends critically on the dimensionality, and generalisations of <ref> to dimension greater than 3 require restrictions on the curvature <cit.>.
On a different note, one of the big pieces missing is how to account for the total number of degrees of freedom in the general case —that is, when d_αβγ^δ is algebraically general at . A possible way out is to construct a map from one detached abstract Riemannian manifold to another, such that the image of Y_ab on the second manifold is identified with D_ab. The intuition behind this proposal is the following: the electric and magnetic parts of the rescaled Weyl tensor can be expressed as
C_ab = √(3/Λ)ϵ^pq_a_[pσ_q]b+1/2ϵ^p_ba_cσ^c_p,
D_ab =1/2√(3/Λ)σ̈_ab,
with σ_ab being the shear of n^α, and where the dot denotes covariant differentiation along this vector field. This suggests the possibility of identifying the missing pair of degrees of freedom as the evaluation at of a time derivative of the other pair —or as a map between detached three-dimensional Riemannian manifolds. Then, the two pairs would represent the 4 degrees of freedom of the phase space of gravitational radiation with Λ>0. This matter has to be tackled rigorously elsewhere.
Yet another important issue is how asymptotic symmetries come into play, and how they preserve/change the structure of Ξ. As an example, distances between points of Ξ are left invariant by conformal transformations of the Riemannian metric —in the sense that operators _a change, but the labels s^a_bc do not—, see <ref>. Thus, it is to be expected that basic infinitesimal asymptotic symmetries, i.e., those generated by CKVF ξ^a of ,_ab that satisfy
£_ξ⃗ D_cd = -(1/3)(∇_a ξ^a) D_cd,
act simply and transitively on Ξ. Additionally, it is necessary to understand the interplay between translations, the first piece of news associated with a congruence of curves <cit.> and Ξ —<ref> goes in that direction.
The final goal is to identify a complete phase space for the radiative gravitational field at with Λ>0 that could be used to shed light on open problems, such as the formulation of the total energy-momentum carried away by gravitational waves in an accelerating expanding universe. Hereby, this work is just a first step on that itinerary; other important pieces, as the ones pointed out above, are yet to be fit into the jigsaw.
§.§ Acknowledgments
The paper was completed during a one-year stay at Queen Mary University of London. The author acknowledges the hospitality of the Geometry, Analysis and Gravitation research group and, especially, of Juan A. Valiente Kroon. Also, he is very grateful to Iñaki Garay for encouragement and interesting discussions during the early stages of this work, and to José M. M. Senovilla for valuable comments, including suggestions on the first draft of the manuscript. Work supported under Grant Margarita Salas MARSA22/20 (Spanish Ministry of Universities and European Union), financed by European Union – Next Generation EU.
|
http://arxiv.org/abs/2307.02941v1
|
20230706121340
|
Benign landscapes of low-dimensional relaxations for orthogonal synchronization on general graphs
|
[
"Andrew D. McRae",
"Nicolas Boumal"
] |
math.OC
|
[
"math.OC",
"90C26, 90C30, 90C35, 90C46"
] |
Orthogonal group synchronization is the problem of estimating n elements Z_1, …, Z_n from the orthogonal group O(r)
given some relative measurements R_ij ≈ Z_i Z_j^-1.
The least-squares formulation is nonconvex.
To avoid its local minima, a Shor-type convex relaxation squares the dimension of the optimization problem from O(n) to O(n^2).
Burer–Monteiro-type nonconvex relaxations have generic landscape guarantees at dimension O(n^3/2).
For smaller relaxations, the problem structure matters.
It has been observed in the robotics literature that nonconvex relaxations of only slightly increased dimension seem sufficient for SLAM problems.
We partially explain this.
This also has implications for Kuramoto oscillators.
Specifically, we minimize the least-squares cost function in terms of estimators Y_1, …, Y_n.
Each Y_i is relaxed to the Stiefel manifold St(r, p) of r × p matrices with orthonormal rows.
The available measurements implicitly define a (connected) graph G on n vertices.
In the noiseless case, we show that second-order critical points are globally optimal as soon as p ≥ r+2 for all connected graphs G.
(This implies that Kuramoto oscillators on St(r, p) synchronize for all p ≥ r + 2.)
This result is the best possible for general graphs; the previous best known result requires 2p ≥ 3(r + 1).
For p > r + 2, our result is robust to modest amounts of noise (depending on p and G).
When local minima remain, they still achieve minimax-optimal error rates.
Our proof uses a novel randomized choice of tangent direction to prove (near-)optimality of second-order critical points.
Finally, we partially extend our noiseless landscape results to the complex case (unitary group), showing that there are no spurious local minima when 2p ≥ 3r.
§ INTRODUCTION AND RESULTS
We examine the optimization landscape of a class of quadratically constrained quadratic programs (QCQPs) that arise from the orthogonal group synchronization problem.
This widely-studied problem has applications notably in simultaneous localization and mapping (SLAM) <cit.>, cryo-electron microscopy (cryo-EM) <cit.>, computer vision <cit.>, and phase retrieval <cit.>.
It also connects mathematically with oscillator networks <cit.>.
Our main results in this paper are presented in this section as follows:
<Ref> presents the (real) orthogonal synchronization problem on a graph and our main optimization landscape results for the resulting QCQPs. <Ref> connects our work to oscillator networks on (real) Stiefel manifolds and gives a new and optimal result in this field. <Ref> partially extends these results to the complex case.
<Ref> details some specific implications for the low-dimensional groups that are of primary interest in many applications.
<Ref> gives additional implications of our analysis that may be of independent interest.
§.§ Problem setup and optimization landscape results
The orthogonal synchronization problem we study is the following:
Let G = (V, E) be a connected, undirected graph on the vertices V = {1, …, n} for some integer n ≥ 1.
Each vertex i is associated with an unknown orthogonal matrix Z_i ∈ O(r) = { U ∈ ℝ^r × r : UU^⊤ = I_r }.
We want to estimate Z_1, …, Z_n from (potentially noisy) measurements of the form R_ij = Z_i^ Z_j^⊤ + Δ_ij∈^r × r for each edge (i, j) ∈ E,
where Δ_ij represents measurement error/noise.
Since the measurements are relative, estimation can only be done up to a global orthogonal transformation.
A simple least-squares estimate of Z_1, …, Z_n can be obtained from the following optimization problem:
min_Y ∈ O(r)^n ∑_(i,j) ∈ E ‖Y_i - R_ij Y_j‖_F^2,
where ‖·‖_F denotes the matrix Frobenius (elementwise ℓ_2) norm.
Although the cost function itself is convex in Y, the constraint set ^n is nonconvex.
In general, the problem has spurious local minima in which local search methods (such as gradient descent) can get stuck.
Due to the orthogonality constraints, the above problem is equivalent to
max_Y ∈ O(r)^n ∑_(i,j) ∈ E ⟨R_ij, Y_i Y_j^⊤⟩,
where ⟨A, B⟩ = tr(A B^⊤) is the Hilbert–Schmidt (or Frobenius) matrix inner product.
To write this more compactly, let C ∈^rn × rn denote the (incomplete) measurement matrix with blocks
[Throughout this paper, indices into a matrix dimension of length rn refer to blocks of r rows or columns. Thus, for C ∈^rn × rn, C_ij refers to the (i,j)th r × r block; for Y ∈^rn × p, Y_i refers to the ith r × p block of Y, etc.]
C_ij = R_ij if (i,j) ∈ E,
0 otherwise.
With the convention that R_ij = R_ji^⊤, the matrix C is symmetric.
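For concreteness, here is a minimal sketch (not from the paper; function and variable names are illustrative) of how the block measurement matrix C can be assembled from a dictionary of pairwise measurements under exactly this convention:

import numpy as np

def build_cost_matrix(n, r, measurements):
    """Assemble the rn x rn symmetric measurement matrix C.

    `measurements` maps an edge (i, j), 0-indexed, to the r x r block R_ij;
    the transpose goes in the (j, i) block, and diagonal blocks stay zero.
    """
    C = np.zeros((r * n, r * n))
    for (i, j), R_ij in measurements.items():
        C[r*i:r*(i+1), r*j:r*(j+1)] = R_ij
        C[r*j:r*(j+1), r*i:r*(i+1)] = R_ij.T
    return C

# Tiny noiseless example on the path 0 - 1 - 2 with r = 2
rng = np.random.default_rng(0)
r, n = 2, 3
Z = [np.linalg.qr(rng.standard_normal((r, r)))[0] for _ in range(n)]
C = build_cost_matrix(n, r, {(0, 1): Z[0] @ Z[1].T, (1, 2): Z[1] @ Z[2].T})
assert np.allclose(C, C.T)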
Then, we can rewrite the problem as
max_Y ∈ ℝ^rn × r ⟨C, Y Y^⊤⟩ subject to Y_i Y_i^⊤ = I_r, i = 1, …, n.
The matrix YY^⊤ is positive semidefinite with rank at most r.
Noting that Y_i^ Y_i^⊤ = (Y Y^⊤)_ii,
the classical (Shor) convex relaxation consists in replacing YY^⊤ with a positive semidefinite matrix X, ignoring the rank constraint:
max_X ∈ ℝ^rn × rn, X ≽ 0 ⟨C, X⟩ subject to X_ii = I_r, i = 1,…, n.
This allows the rank of X to grow up to rn.
Alternatively, we can allow Y to have p ≥ r columns in (<ref>).
This effectively allows YY^⊤ to have rank up to p, providing a more gradual relaxation as we increase p.
This yields the rank-p Burer–Monteiro relaxation:
max_Y ∈ ℝ^rn × p ⟨C, Y Y^⊤⟩ subject to Y_i Y_i^⊤ = I_r, i = 1, …, n.
We can view the rank p ≥ r as a hyperparameter that interpolates the original nonconvex problem (<ref>) (in the case p = r) and the full SDP relaxation (<ref>) (which corresponds to p ≥ rn).
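To illustrate how the rank-p relaxation is typically used in practice, the following sketch runs a plain Riemannian gradient ascent on the relaxed problem: the Euclidean gradient 2CY is projected onto the tangent space of each row-orthonormal block and a polar retraction restores feasibility after every step. This is only an illustrative first-order method with assumed step size and iteration count, not the algorithm of any of the cited works.

import numpy as np

def polar_retract(M):
    """Nearest matrix with orthonormal rows (polar factor via SVD)."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def block_tangent_project(Y, G, n, r):
    """Project G onto the tangent space of the product of Stiefel blocks at Y."""
    P = G.copy()
    for i in range(n):
        Yi, Gi = Y[r*i:r*(i+1)], G[r*i:r*(i+1)]
        sym = (Gi @ Yi.T + Yi @ Gi.T) / 2
        P[r*i:r*(i+1)] = Gi - sym @ Yi
    return P

def rank_p_ascent(C, n, r, p, steps=500, eta=0.05, seed=0):
    """Illustrative Riemannian gradient ascent for max <C, Y Y^T> over St(r, p)^n."""
    rng = np.random.default_rng(seed)
    Y = np.vstack([polar_retract(rng.standard_normal((r, p))) for _ in range(n)])
    for _ in range(steps):
        G = block_tangent_project(Y, 2 * C @ Y, n, r)   # Riemannian gradient
        Y = np.vstack([polar_retract(Y[r*i:r*(i+1)] + eta * G[r*i:r*(i+1)])
                       for i in range(n)])
    return Y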
This paper primarily considers the optimization landscape of the rank-relaxed nonconvex problem (<ref>) for various values of p.
In particular, we ask:
Q: How large does p need to be so (<ref>) has no spurious local optima?
Much is known (see <Ref> for a summary in the case r = 1):
For general cost matrices C, we need[Big-oh notation O(·) is with respect to n, treating r as a small constant.] p = O(n) to guarantee such a benign landscape (resulting in O(n^2) variables, which is the same order as the full SDP relaxation);
with the assumption that C is “generic” (outside of a zero measure set), we can reduce this to p = O(n^1/2) (resulting in O(n^3/2) variables);
for lower values of p, however, “bad” matrices C are plenty: benign landscapes require a structured C.
Remarkably, for cost matrices C arising in specific applications, it has been observed that p can be taken much smaller—just slightly above r.
This was observed and partially explained theoretically for models related to ours (complete-graph synchronization and stochastic block models) in prior works such as <cit.>.
In the case of synchronization on a general graph for SLAM (robotics), similar empirical observations were reported in <cit.>; this is the inspiration for our work.
We prove that benign nonconvexity occurs for small values of p for cost matrices C arising from the general-graph synchronization problem; this gives additional theoretical support to the algorithms and empirical observations of <cit.>.
Setting p = O(1) results in an optimization problem with O(n) variables, similar to the original problem (<ref>) but tractable despite being nonconvex.
Specifically, we give conditions under which every
second-order critical point (in particular, every local optimum) of (<ref>) is a global optimum.
We define such a point more precisely in <Ref>, but it is, essentially, a point Y ∈^rn × p where, subject to the constraints, the gradient at Y is zero and all eigenvalues of the Hessian are nonpositive.
We first consider the case with no measurement error (i.e., R_ij = Z_i^ Z_j^⊤).
Clearly, a globally optimal solution to the original least-squares problem (<ref>) is
Z
Z_1
⋮
Z_n
∈^rn × r.
Absent noise, the computational task is trivial: fix Z_1 = I_r arbitrarily (since recovery is up to global orthogonal transformation), then traverse any spanning tree of G, recursively applying the measured relative differences to infer the other Z_i.
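A minimal sketch of this spanning-tree recovery (our naming; it assumes exact measurements stored in both orientations and fixes the gauge at vertex 0):

import numpy as np
from collections import deque

def recover_from_spanning_tree(n, r, R):
    """Exact recovery, up to a global orthogonal factor, from noiseless data.

    `R[(i, j)]` holds R_ij = Z_i Z_j^T for every edge, in both orientations.
    """
    neighbours = {i: [] for i in range(n)}
    for (i, j) in R:
        neighbours[i].append(j)
    Z_hat = {0: np.eye(r)}                    # fix the gauge at vertex 0
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in neighbours[i]:
            if j not in Z_hat:
                Z_hat[j] = R[(i, j)].T @ Z_hat[i]   # Z_j = R_ij^T Z_i when noiseless
                queue.append(j)
    return [Z_hat[i] for i in range(n)]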
However, in general, the landscapes of (<ref>), (<ref>) have spurious local optima.
What is the effect of relaxation then?
Our first main result states that, without noise, relaxing to p = r+2 eliminates spurious optima.
Suppose G is connected.
If the measurements are exact, i.e., R_ij = Z_i^ Z_j^⊤ for all (i,j) ∈ E,
then, if p ≥ r + 2, any second-order critical point Y of (<ref>) satisfies Y Y^⊤ = Z Z^⊤.
Equivalently, Y = Z U for some r × p matrix U satisfying U U^⊤ = I_r.
This result is tight.
Indeed, at p = r and p = r+1, there exist spurious local optima for certain connected graphs G;
see <Ref> for further details and discussion.
We next consider the effect of measurement error (noise).
Our results do not apply to arbitrary noise (otherwise they would apply to arbitrary C)
but depend on the noise level relative to the graph connectivity.
To quantify the noise, denote by Δ the rn × rn matrix with the errors Δ_ij in the appropriate places for (i,j) ∈ E, setting Δ_ij = 0 for (i,j) ∉ E.
Thus, Δ is the portion of the cost matrix C that is due to measurement error.
Our results are stated in terms of Δ (where · denotes matrix operator norm).
We quantify the connectivity of G in a spectral sense.
Let L = L(G) be the unnormalized graph Laplacian of G, defined as L = (A ) - A, where A is the adjacency matrix of G and ∈^n is the all-ones vector.
The matrix L is positive semidefinite with eigenvalues 0 = λ_1 ≤λ_2 ≤⋯≤λ_n = L.
If G is connected, λ_2 > 0.
The better G is connected, the larger λ_2 is;
λ_2 is often called the algebraic connectivity or Fiedler value of G.
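For concreteness, λ_2 can be computed directly from the adjacency matrix; a small illustrative snippet:

import numpy as np

def fiedler_value(A):
    """Second-smallest eigenvalue of the unnormalised Laplacian L = diag(A 1) - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)[1]

# Example: the 4-cycle has lambda_2 = 2
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
print(fiedler_value(A))   # ~2.0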
With these definitions in place, we can state our first noisy landscape result:
Suppose G is connected and p > r + 2.
Define
C_p ≐ 2(p + r - 2)/(p - r - 2).
Then any second-order critical point Y of (<ref>) satisfies
rank(Y) ≤ r + 5 C_p^2 (‖Δ‖/λ_2)^2 rn.
If, furthermore, p > r + 5 C_p^2 (‖Δ‖/λ_2)^2 rn,
then Y is a globally optimal solution to (<ref>),
and Y Y^⊤ is an optimal solution to the SDP (<ref>).
This result quantitatively bounds how large p needs to be so that (<ref>) has a benign landscape and yields an exact solution to the full SDP relaxation.
The bound depends on the effective signal-to-noise ratio Δ / λ_2.
When <Ref> ensures that (Y) = r exactly, we obtain a stronger result:
Suppose G is connected and p > r + 2.
Let C_p be as defined in <Ref>.
If
‖Δ‖ < λ_2/(√(5) C_p √(rn)),
then any second-order critical point Y of (<ref>) satisfies the following:
* Y is the unique solution to (<ref>) up to a global orthogonal transformation.
* Y has rank r and hence can be factored as Y = U for some ∈^rn × r and U ∈^r × p such that U U^⊤ = I_r. Moreover, is the unique solution of (<ref>) up to a global orthogonal transformation.
* Y Y^⊤ is the unique solution to the SDP (<ref>).
First, note that if the measurement error Δ is small enough, then p = r + 3 suffices to obtain a benign landscape.
This provides baseline robustness for <Ref>.[The noiseless result <Ref> is already robust to noise when p = r + 2, but the required bound on Δ might be much worse. See <Ref> for details.]
Furthermore, the second-order critical points yield a global solution to the original, unrelaxed problem (<ref>), which does not (in general) have a benign landscape.
If the measurement graph is complete, λ_2 = n - 1, so the condition (<ref>) becomes Δ≲_p,r√(n).
If Δ consists of i.i.d. (say, Gaussian) random variables with zero mean and variance σ^2,
we have, with high probability, Δ≈σ√(n) (see, e.g., <cit.>),
so the condition becomes σ^2 ≲_p, r 1.
See <Ref> for comparison to existing work in the complete graph case.
More generally, if G is an graph with edge probability q,
and q ≳log n/n (which is necessary for G to be connected),
we will have, with high probability, λ_2 ≈ nq (see <cit.>).
In the i.i.d. noise case, we have, with high probability, Δ≈σ√(nq),
so the condition on the noise variance becomes σ^2 ≲_p,r q.
§.§ Implications for oscillator network synchronization
Another way to view the problem we have just described is oscillator synchronization.
We briefly describe this connection here and spell out a corollary from <Ref>.
See, for example, <cit.> for more detailed discussion and derivations.
Given a connected graph G defined as before,
a simple version of the Kuramoto model for an oscillator network on G is the following:
we have time-varying angles θ_1(t), …, θ_n(t) associated with the n vertices,
and these angles follow the dynamics model
θ̇_i = -∑_j ∈ N_i sin(θ_i - θ_j), i = 1, …, n,
where N_i is the set of neighbors of vertex i in G.
One can easily check that these dynamics are the gradient flow for the following optimization problem:
max_θ∈^n 1/2∑_i,j = 1^n A_ijcos(θ_i - θ_j).
Clearly, the optima of (<ref>) are the “synchronized” states θ_1 = … = θ_n (mod 2π).
Many papers have studied this model,
particularly with the following question in mind:
Q:
For which graphs G does the dynamical system (<ref>) converge to a synchronized state as t →∞ for “generic” initial conditions?
Generic means “except for a zero-measure set” so that this happens with probability 1 if θ_1(0), …, θ_n(0) are chosen uniformly at random over [0, 2π).
To connect this problem to our work,
note that (<ref>) is a reparametrization of the problem (<ref>) in the case r = 1, p = 2 when Z_i = 1 for all i and the measurements are exact.[Explicitly, θ_i ↦ Y_i = [cos(θ_i), sin(θ_i)] so that cos(θ_i - θ_j) = Y_i^ Y_j^⊤. In this case, the cost matrix C is equal to the adjacency matrix A of G. The differential of the change of variable is surjective, hence it does not introduce spurious critical points <cit.>; thus the landscapes in θ and in Y are qualitatively the same.]
There are many examples (see, e.g., <cit.>) of connected graphs G under which (<ref>) has spurious local optima with strictly[Modulo the trivial direction that shifts all angles equally.] negative definite Hessian (and to which, therefore, gradient flow will converge if initialized in some positive-measure neighborhood).
This implies that (<ref>) does not always have a benign landscape for r = 1, p = 2.
However, this changes when one studies the “synchronization” landscape for higher-dimensional “oscillators.”
For r ≥ 1, consider (<ref>) in the case that Z_1 = ⋯ = Z_n = I_r.[This is without loss of generality as we can always smoothly change variables to bring the ground truth to this position without affecting the landscape: see <Ref> for details.]
Furthermore, assume that there is no measurement error,[The simple connection to oscillators outlined in this section is most meaningful in the noiseless case.] so R_ij = Z_i^ Z_j^⊤ = I_r for all (i,j) ∈ E.
Note, furthermore, that the feasible points Y = [Y_i]_i lie in a product of Stiefel manifolds (we develop this connection further in the proofs of our main results):
(r, p) = { U ∈^r × p : U U^⊤ = I_r }.
With these simplifications and notation, problem (<ref>) becomes (within a factor of 2)
max_Y ∈(r,p)^n 1/2∑_i,j=1^n A_ij(Y_i^ Y_j^⊤),
where A is the adjacency matrix of G.
Again, the global optima are precisely the Y such that Y_1 = ⋯ = Y_n ∈(r, p).
Any trajectory Y(t) of the gradient flow on the constraint manifold satisfies the following system of differential equations:
Ẏ_i = -Π_T_Y_i( ∑_j ∈ N_i (Y_i - Y_j) ) , i = 1, …, n.
Here, T_U denotes the tangent space of (r,p) at U (see <Ref>),
and Π_T_U is the (Euclidean) orthogonal projection onto T_U.
This dynamics model is (a simple version of) the Kuramoto model for a network of Stiefel-manifold valued oscillators.
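A minimal numerical sketch of this flow (explicit Euler with an assumed step size, followed by a polar retraction so the iterates stay on the manifold; names are ours, not from the cited works):

import numpy as np

def kuramoto_stiefel_step(Y, A, r, dt=0.01):
    """One explicit Euler step of the St(r, p)-valued Kuramoto flow on graph A.

    Y stacks the blocks Y_i (each r x p with Y_i Y_i^T = I_r) into an (r*n) x p array.
    """
    n = A.shape[0]
    Y_new = np.empty_like(Y)
    for i in range(n):
        Yi = Y[r*i:r*(i+1)]
        drift = np.zeros_like(Yi)
        for j in range(n):
            if A[i, j]:
                drift += A[i, j] * (Y[r*j:r*(j+1)] - Yi)   # - sum_j A_ij (Y_i - Y_j)
        sym = (drift @ Yi.T + Yi @ drift.T) / 2
        step = drift - sym @ Yi                             # tangent-space projection
        U, _, Vt = np.linalg.svd(Yi + dt * step, full_matrices=False)
        Y_new[r*i:r*(i+1)] = U @ Vt                         # retract back onto St(r, p)
    return Y_new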
Once again, we ask the question: when does the Stiefel-manifold valued Kuramoto oscillator network governed by (<ref>) synchronize (i.e., converge to a synchronized state Y_1 = … = Y_n as t →∞ for generic initialization)?
The remarkable results of <cit.> show that for certain manifolds, Kuramoto oscillator networks synchronize for any connected graph G.
Specifically, the paper <cit.> shows that for all p ≥ 3, oscillator networks on (1, p) (equivalently on the d-sphere S^d for d ≥ 2) always synchronize for connected G.
This was extended in <cit.> to show that (r, p) oscillator networks synchronize if 2p ≥ 3(r+1).
In this paper, we prove that (r, p)-valued oscillator networks synchronize under the weaker condition p ≥ r + 2. This is a simple corollary of the noiseless orthogonal group synchronization result, <Ref>.
For any connected graph G,
the (r, p)-valued Kuramoto oscillator network on G synchronizes if p ≥ r + 2.
This result is tight and thus positively solves a conjecture in <cit.>.
More precisely, we have shown that general connected oscillator networks on (r,p) synchronize if and only if the manifold is simply connected (this is equivalent to p ≥ r + 2: see, e.g., <cit.>).
If the manifold is not simply connected, one can easily construct graphs G such that (<ref>) has a stable equilibrium other than the synchronized state <cit.>.
To obtain <Ref> from <Ref>, note that the theorem implies that at every non-optimal critical point of (<ref>), the Hessian has at least one strictly positive[The positive eigenvalue is important because we have phrased this as a maximization problem. In the more common minimization formulation (which we will use in our analysis), we want a strictly negative eigenvalue.] eigenvalue.
Intuitively, this should imply that gradient flow converges to a global optimum for all initializations except possibly on a set of measure zero,
though proving this rigorously is rather technical and rarely done.
One can find sketches of such a proof in <cit.> (in the case r = 1) and <cit.>.
§.§ The complex case
We can extend the previous (real) orthogonal matrix estimation problem to the complex case.
Here, we seek to estimate unitary matrices Z_1, …, Z_n ∈(r) { U ∈^r × r : U U^* = I_r }
given measurements of the form R_ij≈ Z_i^ Z_j^*.
We form our cost matrix C ∈^rn × rn (C is now Hermitian, i.e., C = C^*)
and consider the relationships between the original unitary group least-squares problem
max_Y ∈^rn × r CY Y^* Y_i^ Y_i^* = I_r, i = 1, …, n,
the SDP relaxation
max_X ∈^rn × rn
X ≽ 0 CX X_ii = I_r, i = 1,…, n,
and the rank-p relaxation
max_Y ∈^rn × p CY Y^* Y_i^ Y_i^* = I_r, i = 1, …, n.
We denote the complex r × p Stiefel manifold by
(r, p, ) = { U ∈^r × p : U U^* = I_r }.
For simplicity, we only consider the noiseless landscape,
and we obtain the following result (which, again, has implications for Kuramoto oscillators):
Suppose G is connected, and the measurements are exact (i.e., R_ij = Z_i^ Z_j^* for (i,j) ∈ E).
If 2p ≥ 3r,
then any second-order critical point Y of (<ref>) satisfies Y Y^* = Z Z^*.
Consequently, the (r, p, )-valued Kuramoto oscillator network on G synchronizes.
Due to the 3/2 factor and certain similarities in the proof, this can also be seen as a complex adaptation of the result in <cit.>.
Curiously, the innovations that allow us to improve that previous result in the real case do not easily carry over to the complex case; see <Ref>.
As discussed in <Ref>, our results for the real case show that we obtain a benign landscape (and all connected oscillator networks synchronize) as soon as the real Stiefel manifold (r, p) is simply connected.
In the complex case, the Stiefel manifold (r, p, ) is simply connected as soon as p ≥ r + 1 (again, see <cit.>).
We conjecture that this is the correct condition in <Ref>, but we have been unable to prove it, and it is unclear how to test it empirically.
[The standard counterexamples to benign landscape/synchronization results are cycle graphs (indeed, the paper <cit.> uses cycle graphs to show that networks of oscillators taking values in a non–simply-connected manifold do not synchronize in general). We do not expect this counterexample to work in the complex case when p ≥ r + 1, because a cycle graph corresponds geometrically to the unit circle, that is, (1), and our result shows that, if r = 1, p ≥ 2 = r + 1 suffices in the complex case. It is not clear how to construct a higher-dimensional equivalent as a candidate counterexample to our conjecture. For example, intuition from homotopy theory would suggest that we try a graph corresponding geometrically to a higher-dimensional sphere.]
§.§ Discussion of small-r synchronization conditions
It is interesting to consider the implications of the noiseless landscape results <Ref>
for small values of the matrix dimension r, as this covers many applications.
See <Ref> for a summary.
For synchronization of rotations (rather than orthogonal transformations) in the plane ^2, we can adopt two perspectives.
We could view (2) as one of the two connected components of (2), in which case <Ref> allows us to relax to synchronization on (4, 2) (a 5-dimensional manifold).
Alternatively, we can view (2) as isomorphic to (1) (a circle, S^1), in which case <Ref> allows us to relax to synchronization on the complex Stiefel manifold (2, 1, ), which is isomorphic to S^3 (a 3-dimensional sphere).
We have a similar situation for synchronization of rotations in ^3.
Viewing (3) as a subgroup of (3),
we can relax the synchronization to (5, 3), which is a 9-dimensional manifold.
Alternatively, it is well known that we can embed[More precisely, there is a double covering group homomorphism from (2) to (3).] (3) in the special unitary group (2),
which is the subgroup of (2) of matrices with determinant 1.
With this formulation, we can synchronize in (2) by relaxing to (3, 2, ), which is an 8-dimensional manifold.
§.§ Additional results
Our analysis yields several additional results of independent interest.
For the nonconvex relaxation (<ref>), we show second-order critical points can be good approximate solutions even if they are not globally optimal.
For the SDP relaxation (<ref>), we show a tightness result (rank recovery) and an approximation result.
§.§.§ Error bounds for all second-order critical points
Regardless of whether the conditions of <Ref> are satisfied,
we can obtain useful error bounds for all second-order critical points Y of (<ref>).
Recall that ‖·‖ and ‖·‖_F respectively denote the operator and Frobenius matrix norms.
The quality metric we use for a candidate solution Y is the correlation ⟨Z Z^⊤, Y Y^⊤⟩ = ‖Z^⊤ Y‖_F^2.
The maximum value this can take is n^2 r (because ‖Z Z^⊤‖ = n, and tr(Y Y^⊤) = rn),
and this value is reached if and only if Y Y^⊤ = Z Z^⊤ (as in <Ref>).
Assume G is connected and p > r + 2.
Then any second-order critical point Y of (<ref>) satisfies
⟨Z Z^⊤, Y Y^⊤⟩ ≥ (1 - C_p^2 ‖Δ‖^2/λ_2^2) n^2 r,
where C_p is defined in the statement of <Ref>.
Comparable error bounds have been derived previously for the eigenvector method (see <Ref>).
In the graph case with i.i.d. zero-mean random noise (see discussion after <Ref>),
we obtain, with high probability,
1 - ⟨Z Z^⊤, Y Y^⊤⟩/(n^2 r) ≲ σ^2/(q n),
where q is the edge probability and σ^2 is the noise variance.
This is comparable to previous error bounds with other methods; again, see <Ref> for references.
§.§.§ Consequences for the SDP relaxation
For the full SDP relaxation (<ref>), we first have an exactness result (tight relaxation, rank recovery):
Assume G is connected. If ‖Δ‖ < λ_2/(2√(5)√(rn)),
then
* The SDP relaxation (<ref>) has a unique solution , and () = r.
* The unrelaxed problem (<ref>) has a unique (up to orthogonal transformation) solution ∈^rn × r.
* = ^⊤.
Next, we have a general error bound for the SDP relaxation that applies even if the exactness result above does not:
Assume G is connected.
Any solution X to (<ref>) satisfies
⟨X, Z Z^⊤⟩ ≥ (1 - 4 ‖Δ‖^2/λ_2^2) n^2 r.
These corollaries follow from <Ref> in the limit p →∞.
To be precise, let X be an optimal solution to (<ref>).
For any p ≥ rn, there exists Y ∈^rn × p such that X = Y Y^⊤.
The fact that X is feasible implies that Y is feasible for (<ref>).
Furthermore, the optimality of X implies that Y is a global optimum and therefore is a second-order critical point <cit.>.
We then apply <Ref> and take p →∞, noting that C_p → 2.
§ RELATED WORK
The literature on orthogonal synchronization is vast, appearing in multiple communities such as robotics, image processing, signal processing, and dynamical systems.
We highlight a few salient references here; see also <cit.> for partial surveys.
Many of the tools we use in our analysis have been used before.
We point this out in our analysis along the way.
§.§ Rank relaxation for synchronization
Low-rank factorizations of SDPs (which, in our case, correspond to partial rank relaxations of the synchronization problem)
have a long history.
This approach is often called Burer–Monteiro factorization after the pioneering work of those authors (e.g., <cit.>).
The report <cit.>, along with the more general results in <cit.>, provides a theoretical framework for analyzing Burer–Monteiro factorizations (like ours) of SDPs with (block-)diagonal constraints. The papers <cit.> and <cit.> develop fast algorithms (generalized to the special Euclidean group case in <cit.>), showing that the “Riemannian staircase” approach (iteratively increasing the relaxation rank) proposed in <cit.> provides an exact solution to the SDP relaxation, but they do not specify what relaxation rank (p in our notation) suffices. Our results provide an upper bound (optimal in the noiseless case) on how much such algorithms must relax the rank constraint for the synchronization problem.
As far as we are aware, landscape results similar to ours have previously only been proved in the complete measurement graph case.
For this case, the papers <cit.> provide error bounds and benign landscape results for rank-relaxed optimization like (<ref>) (for the case r = 1, p = 2 in <cit.>).
The paper <cit.> analyzes the landscape of (<ref>) in the same general case that we do and is thus the most comparable work to ours.
We can directly compare our <Ref> with <cit.>.
In the complete-graph case, λ_2 = n - 1, so the condition (<ref>) of <Ref> becomes Δ≲_p,r√(n).
This prior result <cit.>, in the adversarial-noise case (i.e., only assuming a bound on Δ),
has a comparable requirement.
With additional assumptions on Δ, the prior result improves this to Δ≲ n^3/4,
but the techniques used do not easily carry over to our general-graph case.
In addition, the paper <cit.> requires p > 2r for all its results.
To the best of our knowledge, the partial benign landscape result of <Ref> (which gives conditions under which (<ref>) has a benign landscape and yields an exact solution to the SDP even when the solution is not necessarily rank-r) is new even in the complete-graph case.
In a different vein, <cit.> provides a general bound on how well rank-p Burer–Monteiro factorizations for (r) optimization can approximate the full SDP relaxation in terms of objective function value.
Their results bound the approximation error by a term proportional to r/p.
Our results show that, in some cases, we already obtain perfect approximation with p only slightly larger than r.
An interesting parallel work on low-rank Burer–Monteiro factorization landscapes is <cit.>.
However, their setting is quite different from ours (requiring a strongly convex objective and no constraints), so their results are not directly comparable.
§.§ The spectral approach and previous error bounds
Our work follows a large body of prior results that use spectral properties of the measurement graph.
In particular, these results use the eigenvalues and eigenvectors of the graph Laplacian matrix L (or, in some cases, the adjacency matrix A) of the graph G.
A particularly important quantity is the graph connection Laplacian matrix (defined in <cit.> for different purposes) which can be formed directly from our observations R_ij (this is precisely the matrix in our notation—see <Ref> for the definition).
The eigenvector method was introduced in <cit.> for the purpose of rotation synchronization;
this method directly uses the eigenvectors of the graph connection Laplacian (or, in the original paper, the adjacency matrix equivalent).
Closely related to this, the paper <cit.> studies the relationship between the eigenvalues/vectors of the graph connection Laplacian and the optimal objective function value of (<ref>).
The paper <cit.>, by analyzing eigenvalues of , provides conditions under which the SDP relaxation gives exact recovery for _2 (i.e., (1)) synchronization on an graph.
The paper <cit.> considers a robust version of the SDP relaxation that uses the sum of absolute errors rather than least-squares.
They provide conditions for exact recovery for an random graph and sparse errors.
The papers <cit.> and <cit.> contain a variety of error bounds for the eigenvector method in terms of the graph Fiedler value λ_2(L) and various norms of the measurement error matrix Δ.
This is extended to (r) synchronization (special Euclidean group) in <cit.>.
These results are the most comparable to our error bounds, because they apply to general graphs and measurement errors.
The bound in <cit.> is, within constants, identical to our bounds in <Ref>.
The papers <cit.> show that the SDP relaxation, the (rounded) eigenvector method, and generalized power method (eigenvector method followed by iterative refinement) all achieve asymptotically minimax-optimal error with Gaussian noise on Erdős–Rényi random graphs (interestingly, <cit.>, like the older paper <cit.>, analyzes the adjacency matrix leading eigenvector). These results agree with our error bounds within constants (see <Ref>), though our results apply to much more general situations.
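For reference, a minimal sketch of the eigenvector method in this notation (our naming; the exact scaling and rounding used in the cited works may differ): take the r eigenvectors of the graph connection Laplacian with smallest eigenvalues and round each r × r block to the nearest orthogonal matrix.

import numpy as np

def eigenvector_method(L_conn, n, r):
    """Spectral estimate: bottom-r eigenvectors of the graph connection Laplacian,
    with each r x r block rounded to the nearest orthogonal matrix."""
    _, vecs = np.linalg.eigh(L_conn)          # ascending eigenvalues
    V = vecs[:, :r]                           # rn x r
    estimates = []
    for i in range(n):
        U, _, Wt = np.linalg.svd(V[r*i:r*(i+1)])
        estimates.append(U @ Wt)              # polar rounding to O(r)
    return estimates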
§.§ Oscillator synchronization literature
Oscillator synchronization is a large field of research. See <cit.> for a recent broad survey. Our work touches the small subset corresponding to simplified Kuramoto oscillators.
As discussed in <Ref>, the classical and most well-studied Kuramoto oscillator is that of angular synchronization (i.e., on the unit circle S^1).
One line of research studies which connected graphs synchronize on S^1.
This includes deterministic guarantees based on the density of a graph <cit.> and high-probability properties of random graphs <cit.>.
For example, <cit.> shows that every graph synchronizes in which every vertex is connected to at least 3/4 of the other vertices.
The paper <cit.> gives more general deterministic conditions based on spectral expander properties and shows that, asymptotically, graphs that are connected also synchronize.
A complementary line of research studies for which manifolds do all connected oscillator networks synchronize.
It was shown in <cit.> that Kuramoto networks on the d-sphere S^d synchronize for any d ≥ 2.
This corresponds to the Stiefel manifold (r, p) for r = 1 and p ≥ 3.
More generally, the papers <cit.>
show that networks on (r, p) synchronize if 2p ≥ 3(r + 1).
Our <Ref> improves this condition to p ≥ r + 2,
which is optimal according to <cit.>
and confirms a conjecture in <cit.>.
See the discussion around <Ref> for further details.
See also <Ref> for similar discussion in the complex case.
§ KEY MATHEMATICAL TOOLS
In this section, we make precise and fill out the mathematical framework for our analysis and results.
We only consider the real case in this section and the next.
We make the necessary adjustments for the complex case in <ref>.
§.§ Graph Laplacian formulation
In each optimization problem we consider,
the orthogonality/block diagonal constraint ensures that the r× r diagonal blocks of the cost matrix C have no effect.
We therefore replace C by another matrix that will be more convenient for analysis.
Let A ∈^n × n be the adjacency matrix of G (i.e., the symmetric matrix with A_ij = (i,j) ∈ E).
Let L be the graph Laplacian matrix defined by
L_ij = ∑_k ≠ i A_ik if i = j,
-A_ij if i ≠ j.
It is well known that L is a positive semidefinite (PSD) matrix whose smallest eigenvalue is λ_1(L) = 0 with corresponding eigenvector v_1 = 1_n. The measurement graph is connected if and only if the second-smallest eigenvalue λ_2 = λ_2(L) > 0.
Recall that Z_1, …, Z_n ∈(r) are the ground-truth matrices that we want to estimate.
Let
D = D(Z)
Z_1
Z_2
⋱
Z_n
.
With ⊗ denoting Kronecker product, let
L_Z ≐ D (L ⊗ I_r) D^⊤ and L̃ ≐ L_Z - Δ,
where Δ is the symmetric matrix containing the measurement noise blocks Δ_ij (with Δ_ij = 0 if (i,j) ∉ E).
Note that the eigenvalues of L_Z are identical to those of L (with multiplicities multiplied by r),
and, if G is connected, the r-dimensional subspace corresponding to λ_1(L) = 0 is precisely the span of the columns of Z.
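A small sketch (illustrative names) that builds L_Z and checks this kernel property on a toy example:

import numpy as np
from scipy.linalg import block_diag

def graph_connection_laplacian(L, Z_blocks):
    """Noiseless graph connection Laplacian L_Z = D (L kron I_r) D^T,
    with D = blockdiag(Z_1, ..., Z_n)."""
    r = Z_blocks[0].shape[0]
    D = block_diag(*Z_blocks)
    return D @ np.kron(L, np.eye(r)) @ D.T

# Check on a small complete graph: the columns of Z lie in the kernel of L_Z
rng = np.random.default_rng(1)
n, r = 4, 2
A = np.ones((n, n)) - np.eye(n)
L = np.diag(A.sum(axis=1)) - A
Z_blocks = [np.linalg.qr(rng.standard_normal((r, r)))[0] for _ in range(n)]
L_Z = graph_connection_laplacian(L, Z_blocks)
assert np.allclose(L_Z @ np.vstack(Z_blocks), 0)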
The matrix L̃ was introduced in <cit.> under the name graph connection Laplacian.
Note that C_ij = -L̃_ij for i ≠ j,
and the diagonal blocks of the cost matrix have no effect on the optimization landscape of problems (<ref>) and (<ref>) due to the block diagonal constraint.
Therefore, the SDP (<ref>) has the exact same landscape in the variable X as
min_X ≽ 0 ⟨L̃, X⟩ subject to X_ii = I_r, i = 1,…, n.
Likewise, the (relaxed) nonconvex problem (<ref>) has the same landscape as
min_Y ∈ ℝ^rn × p ⟨L̃, Y Y^⊤⟩ subject to Y_i Y_i^⊤ = I_r, i = 1, …, n.
From now on, we use those formulations.
§.§ Manifold of feasible points and necessary optimality conditions
Optimization problems of the form (<ref>) have been well studied.
See, for example, <cit.> for an overview.
We summarize the relevant facts in this section.
First, note that the constraints in (<ref>) and (<ref>) apply to the r × r diagonal blocks of a matrix.
To simplify notation, we denote the symmetric block-diagonal projection symblockdiag: ℝ^rn × rn → ℝ^rn × rn by
symblockdiag(X)_ij
= (X_ii + X_ii^⊤)/2 if i = j,
0 if i ≠ j.
We can then write the semidefinite problem (<ref>) more compactly as
min_X ≽ 0 ⟨L̃, X⟩ subject to symblockdiag(X) = I_rn.
Similarly, we can write (<ref>) as
min_Y ∈ ℝ^rn × p ⟨L̃, Y Y^⊤⟩ subject to symblockdiag(Y Y^⊤) = I_rn.
The symmetrizing aspect of the projection operator is, for now, completely redundant but will become useful as we continue.
The feasible points of (<ref>) form a smooth submanifold of ^rn × p:
(r, p)^n = { Y ∈^rn × p : Y_i^ Y_i^⊤ = I_r for i = 1, …, n}.
The most important object for us to understand is the tangent space T_Y at a point Y ∈.
An rn × p matrix Ẏ is in T_Y if and only if its r × p blocks satisfy
Ẏ_i Y_i^⊤ + Y_i Ẏ_i^⊤ = 0 ∈ ℝ^r × r for all i = 1, …, n.
The orthogonal projection of an arbitrary W ∈ ℝ^rn × p onto T_Y is given by
Π_T_Y(W) = W - symblockdiag(W Y^⊤) Y.
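In code, the symmetric block-diagonal extraction and this projection can be written as follows (a sketch with our naming, mirroring the formula above):

import numpy as np

def sym_block_diag(X, n, r):
    """Keep only the r x r diagonal blocks of X, symmetrised."""
    out = np.zeros_like(X)
    for i in range(n):
        B = X[r*i:r*(i+1), r*i:r*(i+1)]
        out[r*i:r*(i+1), r*i:r*(i+1)] = (B + B.T) / 2
    return out

def project_tangent(Y, W, n, r):
    """Orthogonal projection of W onto T_Y: W - symblockdiag(W Y^T) Y."""
    return W - sym_block_diag(W @ Y.T, n, r) @ Y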
Local minima of (<ref>) satisfy first- and second-order criticality conditions <cit.>, which encode the fact that the Riemannian gradient is zero and the Riemannian Hessian is positive semidefinite (on the tangent space).
To spell these out, let
S(Y) ≐ L̃ - symblockdiag(L̃ Y Y^⊤).
Then, one can work out that any local minimum Y satisfies both of the following <cit.>.
* First order condition: S(Y) Y = 0. This is proportional to the T_Y-projected gradient of the objective function.
* Second order condition: for all Ẏ ∈ T_Y, ⟨S(Y) Ẏ, Ẏ⟩ ≥ 0. This says that the Riemannian Hessian of the objective function is positive semidefinite.
A point Y that satisfies the first condition is a first-order critical point.
A point Y that satisfies both conditions is a second-order critical point.
Note that second-order criticality is independent of whether we write the problem as (<ref>) or (<ref>)
(in fact, S(Y) is unchanged if we replace L̃ by -C).
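Reusing the helpers from the sketch above, both conditions can be probed numerically at a candidate point Y. Sampling random tangent directions only gives evidence for, and cannot certify, positive semidefiniteness of the Hessian; this is a diagnostic sketch with illustrative names.

import numpy as np

def S_of_Y(L_tilde, Y, n, r):
    """S(Y) = L_tilde - symblockdiag(L_tilde Y Y^T)."""
    return L_tilde - sym_block_diag(L_tilde @ Y @ Y.T, n, r)

def probe_criticality(L_tilde, Y, n, r, trials=200, seed=0):
    """Return the first-order residual ||S(Y) Y|| and the smallest sampled value
    of <S(Y) V, V> over random tangent directions V (nonnegative at a
    second-order critical point)."""
    rng = np.random.default_rng(seed)
    S = S_of_Y(L_tilde, Y, n, r)
    residual = np.linalg.norm(S @ Y)
    smallest = np.inf
    for _ in range(trials):
        V = project_tangent(Y, rng.standard_normal(Y.shape), n, r)
        smallest = min(smallest, float(np.sum((S @ V) * V)))
    return residual, smallest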
§.§ Dual certificates and global optimality
A natural question that remains is the following: how do we know if a candidate solution Y to (<ref>) is, in fact, globally optimal?
This is a well-studied problem (again, see <cit.> for further discussion and references), and here we list some facts that will be useful throughout our analysis.
To study optimality conditions for (<ref>), it is useful to consider the full SDP relaxation (<ref>).
To show that a feasible point X of (<ref>) is optimal,
it suffices to provide a dual certificate.
For (<ref>), a dual certificate is a matrix S of the form
S = L̃ - symblockdiag(Λ)
for some rn × rn matrix Λ with the conditions that (a) S ≽ 0 and (b) SX = 0 (or S X = 0, which is equivalent because S,X ≽ 0).
If such a matrix S exists, we have, for any feasible point X' ≽ 0,
⟨L̃, X'⟩ - ⟨L̃, X⟩ = ⟨L̃, X' - X⟩
= ⟨S, X' - X⟩ + ⟨symblockdiag(Λ), X' - X⟩
= ⟨S, X'⟩ - ⟨S, X⟩_= 0 + ⟨Λ, symblockdiag(X' - X)⟩_= 0
= ⟨S, X'⟩ ≥ 0.
Thus X is optimal. Furthermore, if X' is also optimal (i.e., equality holds above),
we must have S X' = 0; we use this fact later.
To show that Y is a globally optimal solution to (<ref>),
it clearly suffices to show that Y Y^⊤ is an optimal solution to (<ref>).
Thus we want to find a dual certificate proving optimality of Y Y^⊤.
A natural candidate is the matrix S(Y) = L̃ - symblockdiag(L̃ Y Y^⊤) from (<ref>).
First-order criticality implies that S(Y) Y Y^⊤ = 0.
Thus it remains to show that S(Y) ≽ 0.
We do this by leveraging the following fact (see, e.g., <cit.>):
If Y is a second-order critical point of (<ref>) that is rank deficient (i.e., (Y) < p),
then S(Y) ≽ 0, and consequently Y Y^⊤ is an optimal solution to the SDP (<ref>),
and Y is a globally optimal solution to (<ref>).
This can be proved with an argument from <cit.>:
if u is any unit-norm vector in the null space of Y, then, for any z ∈ℝ^rn,
the matrix z u^⊤ is in T_Y; hence, second-order criticality implies
z^⊤ S(Y) z = ⟨ S(Y) (z u^⊤), z u^⊤⟩≥ 0.
Thus the key step in showing that Y is globally optimal is to show that it is rank deficient.
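The certificate argument above translates directly into a numerical check. The sketch below is illustrative (it is not the authors' code); here C stands for the symmetric rn × rn cost matrix of the problem, whichever symbol one uses for it. It forms S(Y) for a candidate Y and tests first-order criticality, rank deficiency, and positive semidefiniteness.

```python
import numpy as np

def S_of_Y(C, Y, n, r):
    """S(Y): subtract the symmetrized r x r diagonal blocks of C Y Y^T from C."""
    M = C @ (Y @ Y.T)
    S = C.astype(float).copy()
    for i in range(n):
        sl = slice(i * r, (i + 1) * r)
        B = M[sl, sl]
        S[sl, sl] -= (B + B.T) / 2
    return S

def check_global_optimality(C, Y, n, r, tol=1e-8):
    """Numerically test the certificate discussed above for a candidate Y:
    first-order criticality S(Y) Y = 0, rank deficiency of Y, and S(Y) >= 0 (PSD)."""
    S = S_of_Y(C, Y, n, r)
    scale = max(1.0, np.linalg.norm(C))
    first_order = np.linalg.norm(S @ Y) <= tol * scale
    rank_deficient = np.linalg.matrix_rank(Y, tol=1e-8) < Y.shape[1]
    psd = np.linalg.eigvalsh((S + S.T) / 2).min() >= -tol * scale
    return first_order, rank_deficient, psd
```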
§ PROOFS (REAL CASE)
We (still) denote by ‖A‖, ‖A‖_F, and ‖A‖_* the operator, Frobenius, and nuclear norms of a matrix A.
Our proof strategy is as follows.
First, we show that each second-order critical point Y of (<ref>) satisfies certain error bounds (<Ref>).
To do this, we can use inequalities of the form S(Y) ≥ 0 for all ∈ T_Y—these express the fact that the Riemannian Hessian at Y is positive semidefinite, with S(Y) as in (<ref>).
We did not find a way to select a single tangent vector to prove our claims.
Instead, the key enabling proof idea is this:
we design a suitable probability distribution over the tangent space T_Y,
and we exploit the fact that S(Y) is still nonnegative in expectation over the random choice of .
This yields a valid inequality which may not be expressible using a single .
Second, to prove our landscape results (<Ref>),
the critical step is to show that second-order critical points have low rank (after which we apply <Ref>).
To do this, we show (using the error bounds previously derived) that the matrix S(Y) has high rank and then apply the first-order criticality condition S(Y) Y = 0.
To simplify notation in our proofs, we assume that Z_i = I_r for all i = 1, …, n.
This is without loss of generality because it can be arranged with a smooth and smoothly invertible change of variable.
More explicitly, if Z is arbitrary (with Z_i ∈), consider the change of variable π→ defined by π(Y)_i = Z_i Y_i.
Since π is a Riemannian isometry, the landscapes of F(Y) = YY^⊤ (<ref>) and of = F∘π are the same <cit.>, in the sense that
Y is first-order critical / second-order critical / locally optimal / globally optimal for if and only if π(Y) is so for F.
Using the definition of , the expression for simplifies to (Y) = L ⊗ I_r - YY^⊤, where _ij = Z_i^⊤Δ_ijZ_j.
This is exactly F in the event that Z_1 = ⋯ = Z_n = I_r, including a change of variable on the noise matrix Δ↦ which has no effect on its eigenvalues (hence on any of the claims we make about noise later on).
Accordingly, we proceed with the following simplified notation:
Z_i = I_r ∀ i,
D = I_rn,
L_Z = L ⊗ I_r, = L_Z - Δ.
The next step is to design a distribution of random tangent vectors.
§.§ A distribution of random tangent vectors
We analyze critical points of (<ref>).
Recall from <Ref> that, setting S(Y) = - ( Y Y^⊤),
a feasible point Y is first- and second-order critical if S(Y) Y = 0 and, for every ∈ T_Y, S(Y)^⊤≥ 0.
From here on, assume Y is such a point.
Recall that is in T_Y if and only if _i^ Y_i^⊤ + Y_i^_i^⊤ = 0 for i = 1, …, n.
In other words, each Y_i^_i^⊤ must be skew-symmetric.
This is true if and only if we can write
_i = Γ_i (I - Y_i^⊤ Y_i^) + S_i Y_i
for some r × p matrix Γ_i and some skew-symmetric r × r matrix S_i.
The first term is the row-wise orthogonal projection of Γ_i onto (Y_i).
If we choose S_i = Γ_i^ Y_i^⊤ - Y_i^Γ_i^⊤,
we obtain
_i = Γ_i - Y_i^Γ_i^⊤ Y_i^ = Y_i (Y_i^⊤Γ_i^ - Γ_i^⊤ Y_i^).
In fact, this last formulation covers all possible _i,
because any r × r skew-symmetric matrix S_i can be written in the given form,
and the components of Γ_i in (Y_i^⊤) and (Y_i) can be chosen independently.
We choose a common Γ_i = Γ, where Γ is a random r × p matrix whose entries are i.i.d. standard normal random variables.
This results in a random ∈ T_Y whose components _1, …, _n are related.
Because the second-order criticality inequality holds for all ∈ T_Y, we can take an expectation to obtain
S(Y)^⊤ = S(Y)^⊤≥ 0.
Simple calculations yield
_i^_j^⊤ = ( Γ - Y_i Γ^⊤ Y_i )(Γ - Y_j Γ^⊤ Y_j)^⊤
= *ΓΓ^⊤ - Y_i Γ^⊤ Y_i Γ^⊤ - Γ Y_j^⊤Γ Y_j^⊤ + Y_i^Γ^⊤ Y_i^ Y_j^⊤Γ Y_j^⊤
= (p - 2) I_r + (Y_i^ Y_j^⊤) Y_i^ Y_j^⊤.
We have used the facts that (1) for any r × p matrix U, Γ^⊤ U Γ^⊤ = U^⊤,
and (2) for any r × r matrix B, Γ^⊤ B Γ = (B) I_p.
When i = j, this simplifies to _i^_i^⊤ = (p + r - 2) I_r. This implies (^⊤) = (p + r - 2) I_rn.
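This second-moment identity is easy to validate by Monte Carlo. The sketch below is illustrative; D_i stands for the random tangent block Γ − Y_i Γ^⊤ Y_i constructed above, and the empirical average of D_i D_j^⊤ is compared against (p − 2) I_r + tr(Y_i Y_j^⊤) Y_i Y_j^⊤ for randomly drawn Y_i, Y_j.

```python
import numpy as np

def random_orthonormal_rows(r, p, rng):
    """A random r x p matrix with orthonormal rows (a point on the Stiefel manifold)."""
    Q, _ = np.linalg.qr(rng.standard_normal((p, r)))
    return Q.T

def check_second_moment(r=3, p=7, num_samples=100_000, seed=0):
    """Monte Carlo check of E[D_i D_j^T] = (p - 2) I_r + tr(Y_i Y_j^T) Y_i Y_j^T,
    where D_i = Gamma - Y_i Gamma^T Y_i and Gamma has i.i.d. standard normal entries."""
    rng = np.random.default_rng(seed)
    Yi = random_orthonormal_rows(r, p, rng)
    Yj = random_orthonormal_rows(r, p, rng)
    acc = np.zeros((r, r))
    for _ in range(num_samples):
        G = rng.standard_normal((r, p))
        Di = G - Yi @ G.T @ Yi
        Dj = G - Yj @ G.T @ Yj
        acc += Di @ Dj.T
    empirical = acc / num_samples
    predicted = (p - 2) * np.eye(r) + np.trace(Yi @ Yj.T) * (Yi @ Yj.T)
    return np.abs(empirical - predicted).max()   # should be small (Monte Carlo error)
```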
Using a random ∈ T_Y has appeared before in <cit.>.
Those papers chose (in our notation) _i = Γ(I - Y_i^⊤ Y_i), which is only the portion of Γ in (Y_i).
Another similar approach to ours appears in <cit.>,
where, rather than considering a random choice of ,
the authors analyze the quadratic form Γ↦S(Y)(Γ) (Γ)^⊤ with _i(Γ) = _T_Y_i(Γ).
Their results do not use randomness and scale differently the portions of each _i that are in (Y_i) and (Y_i^⊤) (this is related to the choice of metric on the Stiefel manifold).
§.§ Noiseless case
For <Ref>, we have Δ = 0, so = L_Z = L ⊗ I_r, and S(Y) = L_Z - (L_Z Y Y^⊤).
We first calculate
( Y Y^⊤)^⊤ = L_Z Y Y^⊤(^⊤)
= (p + r - 2) (L_Z Y Y^⊤)
= (p + r - 2) ∑_i,j=1^n L_ij(Y_i^ Y_j^⊤).
Next, by (<ref>), we derive
L_Z^⊤ = ∑_i,j=1^n L_ijI_r_i^_j^⊤
= ∑_i,j=1^n L_ij ((p-2)r + ^2(Y_i^ Y_j^⊤) )
= ∑_i,j=1^n L_ij^2(Y_i^ Y_j^⊤),
where the last equality follows from the fact that ∑_i L_ij = 0 for all j.
To proceed, note that
Y_i^ Y_j^⊤ + Y_j^ Y_i^⊤
= 2 I_r - (Y_i - Y_j)(Y_i - Y_j)^⊤,
so that
(Y_i^ Y_j^⊤) = 1/2(Y_i^ Y_j^⊤ + Y_j^ Y_i^⊤) = r - 1/2Y_i - Y_j^2.
We then find (again using the fact that all rows and columns of L sum to zero)
L_Z^⊤ = ∑_i,j=1^n L_ij* r - 1/2Y_i - Y_j^2 ^2
= -r ∑_i,j=1^n L_ijY_i - Y_j^2 + 1/4∑_i,j=1^n L_ijY_i - Y_j^4
= 2 r ∑_i,j=1^n L_ij(Y_i^ Y_j^⊤) - 1/4∑_i,j=1^n A_ijY_i - Y_j^4.
All combined, the condition S(Y)^⊤≥ 0 then implies
(p - r - 2) ∑_i,j=1^n L_ij(Y_i^ Y_j^⊤) + 1/4∑_i,j=1^n A_ijY_i - Y_j^4 ≤ 0.
The second term on the left-hand side is nonnegative. If p ≥ r + 2, the first term is also nonnegative (the sum equals L_ZY Y^⊤).
Therefore, both terms on the left-hand side of (<ref>) must be 0, so Y_i = Y_j for all (i,j) ∈ E.
If G is connected, this implies that the Y_i's are identical, and therefore
⟨ Z Z^⊤, Y Y^⊤⟩ = ‖ Z^⊤ Y ‖_F^2 = ‖ n Y_1 ‖_F^2 = n^2 r.
This finishes the proof of <Ref>.
When p = r + 2, the robustness to (small) perturbations comes from the quartic terms in (<ref>).
This robustness is much weaker than what we prove in <Ref> for the noisy results (where we use the quadratic terms that arise when p > r + 2).
§.§ Noisy case
We now consider the case where the noise matrix Δ is nonzero.
We prove <Ref> first (high correlation) and then use it to prove <Ref> (global optimality).
To aid our analysis, we write Y = Z R + W, with R = 1/n Z^⊤ Y ∈^r × p and W orthogonal to Z (i.e., Z^⊤ W = ∑_i W_i = 0). This gives Y_i = R + W_i.
Note that
‖ Z^⊤ Y ‖_F^2 = ‖ n R ‖_F^2 = n ‖ Z R ‖_F^2 = n^2 r - n ‖ W ‖_F^2.
Thus, we set out to use second-order criticality of Y to infer that W is small.
§.§.§ Error bound for all second-order critical points
The second-order criticality condition is now, for all ∈ T_Y,
L_Z - Δ^⊤ - ((L_Z - Δ) Y Y^⊤)^⊤≥ 0.
We use the same random as in the previous section.
We have already analyzed the terms involving L_Z in <Ref>, so it remains to analyze the terms involving Δ.
Note that because we have assumed that every Z_i = I_r, we have
∑_i,j=1^n Δ_ijI_r = ∑_i,j=1^n Δ_ijZ_i^ Z_j^⊤ = ΔZ Z^⊤.
From (<ref>), we calculate
Δ^⊤ = ∑_i,j = 1^n Δ_ij(p-2) I_r + (Y_i^ Y_j^⊤) Y_i^ Y_j^⊤
= (p-2)ΔZ Z^⊤ + ∑_i,j=1^n * r - 1/2Y_i - Y_j^2 Δ_ijY_i^ Y_j^⊤
= (p-2)ΔZ Z^⊤ + r ΔY Y^⊤ - 1/2∑_i,j = 1^n Y_i - Y_j^2 Δ_ijY_i^ Y_j^⊤.
The second equality uses (<ref>) and (<ref>).
Next, (^⊤) = (p + r - 2) I_rn implies
(Δ Y Y^⊤)^⊤ = (p + r - 2) ΔY Y^⊤.
The difference is
(Δ Y Y^⊤)^⊤ - Δ^⊤
= (p - 2) ΔY Y^⊤ - Z Z^⊤ + ∑_i,j = 1^n *Δ_ij1/2Y_i - Y_j^2 Y_i^ Y_j^⊤
= (p - 2) ΔY Y^⊤ - Z Z^⊤ + Δ(Q ⊗_r^_r^⊤) ∘ (Y Y^⊤),
where ∘ is entrywise product and Q_ij = 1/2Y_i - Y_j^2.
We aim to bound (in nuclear norm) the matrices that are in an inner product with Δ.
Note that the r × r blocks of Y Y^⊤ - Z Z^⊤ are
(Y Y^⊤ - Z Z^⊤)_ij = Y_i Y_j^⊤ - I_r = Y_i(Y_j - Y_i)^⊤ = Y_i(W_j - W_i)^⊤,
so we can write Y Y^⊤ - Z Z^⊤ = Y W^⊤ - H,
where H_ij = Y_i^ W_i^⊤.
By the matrix Hölder inequality for Schatten p-norms, ‖ Y W^⊤‖_* ≤‖ Y ‖_F ‖ W ‖_F = √(rn) ‖ W ‖_F.
Furthermore,
H =
Y_1^ W_1^⊤
⋮
Y_n^ W_n^⊤ Z^⊤.
Because each Y_i has operator norm 1, the left factor in the above expression has Frobenius norm at most W.
Also, Z = √(rn).
Thus ‖ H ‖_* ≤√(rn) ‖ W ‖_F. We conclude that ‖ Y Y^⊤ - Z Z^⊤‖_* ≤ 2 √(nr) ‖ W ‖_F.
We also need a bound on
(Q ⊗_r^_r^⊤) ∘ (Y Y^⊤)(a)≤(Q ⊗_r^_r^⊤)(b)= r Q.
Inequality (a) follows from a basic inequality on singular values of Hadamard products <cit.>, using the fact that each row of Y has norm 1.
Equality (b) follows from the eigenvalue characterization of Kronecker products.
To bound Q, note that Q_ij is simply the trace of the (i,j)th block of Z Z^⊤ - Y Y^⊤.
Thus Q is a partial trace of Z Z^⊤ - Y Y^⊤,
and therefore (see <cit.>) Q≤Z Z^⊤ - Y Y^⊤≤ 2 √(nr)W.
Putting all the nuclear norm bounds together, we obtain, by Hölder's inequality for matrix inner products applied on (<ref>) (von Neumann's trace inequality),
(Δ Y Y^⊤)^⊤ - Δ^⊤≤ 2(p + r - 2) Δ√(rn)W.
Combining this with (<ref>) and the calculations of <Ref>,
we obtain
(p - r - 2)∑_i,j L_ijY_iY_j + 1/4∑_i,j=1^n A_ijW_i - W_j^4
≤ 2(p + r - 2) Δ√(rn)W.
Dropping the nonnegative quartic terms and using the fact that
∑_i,j L_ijY_iY_j = ∑_i,j L_ijW_iW_j≥λ_2 W^2
(where λ_2 = λ_2(L) is the Fiedler value of G), we get
(p - r - 2) λ_2 W^2 ≤ 2(p + r - 2) Δ√(rn)W.
Therefore,
W^2 ≤*2(p + r - 2) Δ/(p - r - 2)λ_2^2 rn
= C_p^2 rn Δ^2/λ_2^2.
Combining this with the identities (<ref>) finishes the proof of <Ref>.
§.§.§ Bound on solution rank
We next prove <Ref>.
To show that a second-order critical point Y has low rank,
recall that the first-order criticality condition states S(Y) Y = 0.
Therefore, it suffices to show that S(Y) has high rank.
Recall
S(Y) = - ( Y Y^⊤) = L_Z - Δ - (L_Z Y Y^⊤) + (Δ Y Y^⊤).
Because G is connected, λ_2 is positive.
Thus L_Z has an r-dimensional null space,
and its remaining (n-1)r eigenvalues are at least λ_2.
Assume Δ < λ_2 (as otherwise the theorem statement is true but vacuous).
Given a matrix A, let σ_k(A) denote its kth singular value, in decreasing order.
To bound the rank of Y, we count how many singular values of S(Y) can possibly be equal to zero.
Explicitly, for any c ∈ (0, 1), it holds (see the explanation below):
(Y)
≤(S(Y))
≤ r + {ℓ : σ_ℓ((L_Z Y Y^⊤) - (Δ Y Y^⊤)) ≥λ_2 - Δ}
≤ r + {ℓ: σ_ℓ((L_Z Y Y^⊤)) ≥ c (λ_2 - Δ) }
+ {ℓ: σ_ℓ((Δ Y Y^⊤)) ≥ (1-c) (λ_2 - Δ) }
≤ r + (L_Z Y Y^⊤)/c(λ_2 - Δ) + (Δ Y Y^⊤)^2/(1 - c)^2 (λ_2 - Δ)^2.
The third inequality is an application of the following fact: if matrices A and B (of the same size) have, respectively, at most k_A singular values larger than M_A and k_B singular values larger than M_B, then A + B has at most k_A + k_B singular values larger than M_A + M_B.
This follows from <cit.>, which implies σ_k_A + k_B + 1(A+B) ≤σ_k_A+1(A) + σ_k_B+1(B) < M_A + M_B.
To bound this last quantity, first note that, because Y_i = 1 for each i,
(Δ Y Y^⊤)^2 ≤∑_i (Δ Y)_i^ Y_i^⊤^2 ≤Δ Y^2 ≤Δ^2 Y^2 = rn Δ^2.
Next, using L_Z = L ⊗ I_r, note that
((L_Z Y Y^⊤))_ii = 1/2∑_j=1^n L_ij (Y_i^ Y_j^⊤ + Y_j^ Y_i^⊤)
= ∑_j=1^n L_ij*I_r - 1/2 (Y_i - Y_j)(Y_i - Y_j)^⊤
= 1/2∑_j=1^n A_ij (Y_i - Y_j)(Y_i - Y_j)^⊤
≽ 0.
Therefore, (L_Z Y Y^⊤) is positive semidefinite and it follows that its nuclear norm is bounded as:
(L_Z Y Y^⊤) = ((L_Z Y Y^⊤))
= ∑_i,j=1^n L_ij(Y_i^ Y_j^⊤)
(i)≤ C_p Δ√(rn)W
(ii)≤ C_p^2 rn Δ^2/λ_2.
Inequality (i) comes from (<ref>) in the proof of <Ref>.
Inequality (ii) follows from the bound on W provided by (<ref>) in that same proof.
Plugging (<ref>) and (<ref>) into (<ref>),
we obtain
(Y)
≤ r + *C_p^2/c λ_2 (λ_2 - Δ) + 1/(1 - c)^2 (λ_2 - Δ)^2Δ^2 rn.
Assume Δ≤λ_2 / 4.
Choosing c = 1/2 and using the fact that C_p ≥ 2,
we see that
(Y)
≤ r + *8/3 C_p^2/λ_2^2 + 64/91/λ_2^2Δ^2 rn
≤ r + 5 *C_p Δ/λ_2^2 rn.
That last bound still holds if Δ > λ_2 / 4 since (Y) ≤ rn.
This completes the rank-bound portion of <Ref>.
If p is larger than this rank bound, then Y is rank-deficient,
and we apply <Ref> to obtain the rest of the result.
§.§.§ Solution uniqueness
To prove <Ref>,
note that, under the assumptions of <Ref>, <Ref> and its proof imply that (S(Y)) ≥ rn - r and, by first-order criticality, (Y) ≤ r.
These rank inequalities are, in fact, equalities, because the constraint Y_1^ Y_1^⊤ = I_r implies (Y) ≥ r.
Thus (S(Y)) = rn - r, and (Y) = r.
Since p > r, <Ref> again implies that Y Y^⊤ solves the SDP (<ref>),
and S(Y) ≽ 0 is its dual certificate.
We set out to prove that Y Y^⊤ is the unique solution.
First, note that because Y has rank r we can write Y = U, with ∈(r)^n and U ∈^r × p such that U U^⊤ = I_r.
Then Y Y^⊤ = ^⊤.
Furthermore, as discussed in <Ref>, any optimal solution X of the SDP must satisfy S(Y)X = 0.
Because the columns of span the kernel of S(Y),
we can write X = ( B)( B)^⊤ = (B B^⊤) ^⊤ for some r × r matrix B.
The block-diagonal constraint implies that
0 = (Y Y^⊤ - X)_11 = (^⊤ - (B B^⊤)^⊤)_11
= Z_1 (I_r - B B^⊤) Z_1^⊤.
Because Z_1 ∈(r), we must have B B^⊤ = I_r, and therefore X = Y Y^⊤.
Thus Y Y^⊤ = ^⊤ is the unique SDP solution.
The fact that Y Y^⊤ = ^⊤ is the optimal solution to (<ref>) (and therefore of (<ref>)) implies that and Y are, respectively, global optima of (<ref>) and (<ref>).
For uniqueness, note that for any other global optima Z' of (<ref>) and Y' of (<ref>),
Z' Z'^⊤ and Y' Y'^⊤ are feasible points of (<ref>) with the same objective function value as ^⊤ = Y Y^⊤. Therefore, by the uniqueness of the SDP solution, we have Z' Z'^⊤ = Y' Y'^⊤ = Y Y^⊤ = ^⊤,
implying that Z' = and Y' = Y up to global orthogonal transformations.
This completes the proof of <Ref>.
§ EXTENSION OF PROOFS TO THE COMPLEX CASE
We extend the previous section's argument to the complex case in a direct way,
only considering noiseless measurements.
This yields the apparently new yet possibly suboptimal result stated in <Ref>.
We simply substitute complex quantities for real ones in the previous arguments,
replacing transposes (A^⊤) by their Hermitian counterparts (A^*).
To avoid ambiguity, we will define the matrix inner product as AB = ((A B^*)),
writing out the trace explicitly when we need to consider its imaginary part.
We still assume, without loss of generality, that Z_1 = ⋯ = Z_n = I_r, so that = L_Z = L ⊗ I_r in the noiseless case.
Once again, we use the fact that the Riemannian Hessian is PSD at a second-order critical point.
Similarly to (<ref>), we now take S(Y) = - ( Y Y^*), where now keeps the Hermitian part of the r× r diagonal blocks (again making the off-diagonal blocks zero).
Second-order criticality means that S(Y)Y = 0 and S(Y)^*≥ 0 for all in the tangent space at Y ∈(r, p, )^n.
Details appear in <cit.>.
For the time being (we shall modify this later in this section), we directly translate the construction in <Ref> and choose tangent vectors _i = Γ - Y_i Γ^* Y_i to the points Y_i ∈(p, r, ),
where now Γ is an r × p matrix of i.i.d. complex standard normal random variables.
This is indeed tangent as _i^ Y_i^* + Y_i^_i^* = 0.
By a similar calculation to before,
_i^_j^* = ( ΓΓ^* - Y_i Γ^* Y_i Γ^* - Γ Y_j^* Γ Y_j^* + Y_i^Γ^* Y_i^ Y_j^* Γ Y_j^* )
= p I_r + (Y_i^ Y_j^*) Y_i^ Y_j^*.
The first difference from the real case is that the terms with two factors of Γ or Γ^* have zero expectation, because each entry in these terms is a polynomial in standard complex normal random variables and thus is radially symmetric.[If z is a standard normal random variable (scalar), then in the real case z^2 = 1 but in the complex case z^2 = 0.]
Note that _i^_i^* = (p+r) I_r.
First, we can compute, similarly to before,
(L_Z Y Y^*)^* = (p + r) ∑_i,j=1^n L_ijY_iY_j.
Next, we have (see <Ref>)
L_Z^* = ∑_i,j=1^n (^2(Y_i^ Y_j^*) ).
We now come to the second significant difference from the real case:
the calculation (<ref>) fails here,
because (Y_i^ Y_j^*) is not necessarily real.[If we ignored this issue, we could immediately “prove” that p = r suffices to obtain a benign landscape. This is clearly false, as the case r = 1 (angular synchronization) demonstrates.]
By considering the real and imaginary parts of (Y_i^ Y_j^*), we obtain
L_Z^* = 1/4∑_i,j=1^n L_ij*^2(Y_i^ Y_j^* + Y_j^ Y_i^*) - (Y_i^ Y_j^* - Y_j^ Y_i^*)^2 .
The first term can be handled akin to (<ref>) in the real case.
The optimal way to handle the second term is unclear.
One way is to use |(A)|^2 ≤A^2 ≤ r A^2 (twice) and A+A^*^2 + A-A^*^2 = 4A^2 to show that
(Y_i^ Y_j^* - Y_j^ Y_i^*)^2
≤ r Y_i^ Y_j^* - Y_j^ Y_i^*^2
= 4 r Y_i^ Y_j^*^2 - r Y_i^ Y_j^* + Y_j^ Y_i^*^2
≤ 4 r Y_i^ Y_j^*^2 - ^2( Y_i^ Y_j^* + Y_j^ Y_i^* ).
Both sides of the above inequality are zero when i = j.
For i ≠ j, recall that L_ij≤ 0.
Therefore,
L_Z^*≤1/4∑_i,j=1^n L_ij* 2^2(Y_i^ Y_j^* + Y_j^ Y_i^*) - 4 r Y_i^ Y_j^*^2 .
Additionally, ∑_ij L_ijY_i^ Y_j^*^2 = ∑_ij L_ijY_i^* Y_i^Y_j^* Y_j^≥ 0, hence we can remove the last term while preserving the inequality.
We now combine this and (<ref>) into the inequality S(Y)^*≥ 0 to obtain
(p + r) ∑_i,j=1^n L_ijY_iY_j ≤1/2∑_i,j=1^n L_ij^2( Y_i^ Y_j^* + Y_j^ Y_i^* )
= 4r ∑_i,j=1^n L_ijY_iY_j - 1/2∑_i,j=1^n A_ijY_i - Y_j^4.
(The equality follows just as in the real case, e.g., by (<ref>).)
This latter inequality yields perfect recovery provided p ≥ 3r.
This condition seems much worse than what we obtained in the real case.
We can improve the result by making a slightly different choice of the _i's.
Note that we can write our choice as
_i = Γ - Y_i Γ^* Y_i
= Γ(I_p - Y_i^* Y_i^) + (Γ Y_i^* - Y_i^Γ^*) Y_i.
The first term has rows orthogonal to Y_i, while the second has a skew-symmetric matrix left-multiplying Y_i (rather than right-multiplying as before): both terms are therefore tangent vectors.
We can rescale them arbitrarily, as
_i = a Γ(I_p - Y_i^* Y_i^) + b (Γ Y_i^* - Y_i^Γ^*) Y_i
for any numbers a ∈ and b ∈.
For a, b > 0 (which turns out to be the only sensible choice), this is related to the choice of metric on the Stiefel manifold.
We choose a = 2 and b = 1, which makes the (Euclidean) orthogonal projection of 2 Z Γ onto T_Y.
Intrinsically, this is quite similar to a complex adaptation of the proof in <cit.>, though that argument is not phrased in terms of randomness.
Now, for Γ chosen randomly as before,
_i^_j^*
= 4 Γ(I_p - Y_i^* Y_i^) (I_p - Y_j^* Y_j^) Γ^* + (Γ Y_i^* - Y_i^Γ^*) Y_i^ Y_j^* (Y_j^Γ^* - Γ Y_j^*)
+ 2 Γ(I_p - Y_i^* Y_i^) Y_j^* (Y_j^Γ^* - Γ Y_j^*) + 2 (Γ Y_i^* - Y_i^Γ^*) Y_i (I_p - Y_j^* Y_j^) Γ^*
= 4 I_p - Y_i^* Y_i^ I_p - Y_j^* Y_j^ I_r + Y_i^* Y_i^Y_j^* Y_j^ I_r + (Y_i^ Y_j^*) Y_i^ Y_j^*
+ 2 I_p - Y_i^* Y_i^Y_j^* Y_j^ I_r + 2 Y_i^* Y_i^I_p - Y_j^* Y_j^ I_r
= * 4 (p - r) + Y_i^ Y_j^*^2 I_r + (Y_i^ Y_j^*) Y_i^ Y_j^*.
We have used the fact that (A B^*) is always real when A and B are Hermitian.
Similar calculations as before, combined with (<ref>), yield
(4p - 2r) ∑ L_ijY_iY_j ≤∑ L_ij ( r Y_i^ Y_j^*^2 + ^2(Y_i^ Y_j^*) )
≤∑ L_ij* r Y_i^ Y_j^*^2 + 1/2^2( Y_i^ Y_j^* + Y_j^ Y_i^* ) - r Y_i^ Y_j^*^2
= 4r ∑ L_ijY_iY_j - 1/2∑ A_ijY_i - Y_j^4.
Thus, if G is connected, 2p ≥ 3r implies Y_1 = ⋯ = Y_n, or, equivalently, Y Y^* = Z Z^*.
The key benefit to this choice of is that we exactly cancel (rather than drop) the Y_i^ Y_j^*^2 terms that arise in (<ref>).
§ SIMULATIONS
We implemented an algorithm for solving (<ref>) and ran experiments on several graphs.
We used Matlab with the Manopt toolbox <cit.> to optimize over a product of Stiefel manifolds with the default second-order trust-region algorithm.
We ran experiments on three graphs:
* A circulant graph on 400 vertices, each having degree 10 (results in <Ref>).
This graph topology is known to have spurious local minima (see, e.g., <cit.>).
* A single realization of the random graph on 400 vertices, each vertex having expected degree 10 (<Ref>).
Because graphs behave spectrally much like complete graphs, we expect them to be less prone to spurious local minima than the circulant graph (see <cit.>).
* The pose graph from the Freiburg Building 079 () SLAM dataset in robotics,[Recorded by Cyrill Stachniss and available at <https://www.ipb.uni-bonn.de/datasets/>.] which has 989 vertices with average degree 3.4 (<Ref>).
All experiments are in the real number case and use r = 2. All experiments use random noise and initialization. In the case p = r = 2, the initial point is chosen to be in the same connected component as the ground truth.
The noise matrix Δ is chosen with i.i.d. (0, σ^2) entries in the nonzero blocks (with the constraint Δ = Δ^⊤).
The reported (Y) is the number of singular values that are at least 10^-3√(n) (note that Z's nonzero singular values are all √(n)).
The singular value tolerance did not qualitatively change the results (there was only a very slight effect at the phase transition).
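For readers who prefer Python, the following minimal sketch reproduces the optimization step with plain Riemannian gradient descent and a polar retraction; it is illustrative only and is not the Matlab/Manopt trust-region implementation used for the reported experiments. The cost matrix C is assumed to be available as a symmetric rn × rn array (e.g., L ⊗ I_r minus the noise matrix).

```python
import numpy as np

def retract_block(M):
    """Map an r x p block back to the Stiefel manifold (orthonormal rows) via the polar retraction."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def riemannian_gradient_descent(C, n, r, p, steps=2000, lr=1e-2, seed=0):
    """Minimal sketch: Riemannian gradient descent for min <C, Y Y^T> over the product of
    Stiefel manifolds St(r, p)^n (the reported experiments use a trust-region solver instead)."""
    rng = np.random.default_rng(seed)
    Y = np.vstack([retract_block(rng.standard_normal((r, p))) for _ in range(n)])
    for _ in range(steps):
        G = 2.0 * C @ Y                        # Euclidean gradient of <C, Y Y^T>
        grad = G.copy()
        for i in range(n):
            sl = slice(i * r, (i + 1) * r)
            B = G[sl] @ Y[sl].T                # (i, i) block of G Y^T
            grad[sl] -= (B + B.T) / 2 @ Y[sl]  # subtract symmetrized block-diagonal part: tangent projection
        Ynew = Y - lr * grad                   # gradient step
        Y = np.vstack([retract_block(Ynew[i * r:(i + 1) * r]) for i in range(n)])  # retract blockwise
    return Y
```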
The results are summarized in <Ref>.
A few points worth highlighting are the following:
* There is a clear phase transition as the noise standard deviation σ increases;
the recovery performance, solution rank, and algorithm running time all change dramatically in approximately the same place.
* The estimates cease to be rank-r at only slightly lower noise levels than where the recovery performance noticeably begins to degrade.
This suggests that, for these experiments, the prediction of <Ref> (rank-r) is quite pessimistic compared to the known-to-be-optimal prediction of <Ref> (correlation).
* Other properties show markedly different trends from one graph to another.
* For the circulant graph (<Ref>), the algorithm running time is highest close to the phase transition and decreases for larger σ. Furthermore, the solution rank is almost never larger than 8 even at very high noise levels.
* For the random and SLAM graphs (<Ref>),
there is less clear structure in the running times,
and the solution is never rank-deficient for noise levels past the phase transition.
* There is no apparent problem with converging to bad local optima when p is small (even in the case p = r, though in this case the experiments are artificially aided by initializing the solution to the correct connected component of ).
This is likely due to the fact that, especially for such large graphs, the likelihood of a random initialization landing in the basin of attraction of a spurious local optimum is small (though it is certainly nonzero, at least in the case of the circulant graph).
* Even along the phase transition, the choice of p has no apparent effect on the solution quality (correlation with Z). The case p = r = 2 is a curious exception, as the correlation is higher (however, note the previous caveat on initialization in this case).
§ ACKNOWLEDGMENTS
We thank Pedro Abdalla, Afonso Bandeira, David Rosen, and Alex Townsend for helpful conversations.
ieeetr
|
http://arxiv.org/abs/2307.00630v3
|
20230702175853
|
Two distinct transitions in a population of coupled oscillators with turnover: desynchronization and stochastic oscillation quenching
|
[
"Ayumi Ozawa",
"Hiroshi Kori"
] |
nlin.AO
|
[
"nlin.AO"
] |
Corresponding author: aozawa@g.ecc.u-tokyo.ac.jp
Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8561, Japan
Synchronization, which is caused by mutual coupling, and turnover, which is the replacement of old components with new ones, are observed in various open systems consisting of many components. Although these phenomena can co-occur, the interplay of coupling and turnover has been overlooked. Here, we analyze coupled phase oscillators with turnover and reveal that two distinct transitions occur, depending on both coupling and turnover: desynchronization and what we name stochastic oscillation quenching. Importantly, the latter requires both the turnover and coupling to be sufficiently intense.
Two distinct transitions in a population of coupled oscillators with turnover: desynchronization and stochastic oscillation quenching
Hiroshi Kori
August 1, 2023
Introduction
The emergence of order in open systems comprising many interacting units is widely observed in nature. In particular, mutual synchronization is observed in a wide range of coupled oscillator systems <cit.>, from biological systems <cit.> to social <cit.> and artificial ones <cit.>. Another phenomenon broadly observed in open many-body systems is turnover owing to the addition and removal of components. Examples include the protein and cell turnover in biological systems <cit.>. Other examples are found in social <cit.> and bio-inspired chemical systems <cit.>. Further, a growing population may be effectively modeled as a system with turnover when the growth causes the dilution of components <cit.>.
These two phenomena, mutual synchronization and turnover, may manifest in the same system, and their time scales are not always clearly separated. For example, KaiC proteins in a cyanobacterial cell exhibit a collective rhythm of phosphorylation and dephosphorylation with a period of approximately 24h, and their average half-life has been estimated to be approximately 10h <cit.>.
Recent studies have revealed that turnover can
deteriorate the collective oscillation; it has been shown numerically <cit.> and experimentally <cit.> that the synchronous phosphorylation-dephosphorylation cycle of KaiC proteins loses robustness when the turnover rate is a sufficiently large constant. Moreover, Ref. <cit.> has suggested that their collective oscillation disappears when the turnover rate is further increased. However, little attention has been paid to the synergistic effect of the interaction among oscillators, which is essential for mutual synchronization, and the turnover. In particular, these previous studies lack analyses on how the effect of the turnover changes as the properties of the coupling are varied.
In this regard, we study a simple model of coupled phase oscillators with turnover and show that
their collective oscillation disappears
via two distinct transitions depending on both the coupling and turnover. For sufficiently small coupling strengths, the collective oscillation is lost via desynchronization <cit.> as the turnover rate increases, while for stronger coupling strengths, we may observe what we refer to in this study as stochastic oscillation quenching (SOQ), which can be interpreted as a stochastic analog of oscillation quenching <cit.>. Interestingly, SOQ may be induced not only by increasing the turnover rate but also by strengthening the interaction among oscillators.
Thus, this extinction of the collective oscillation is
caused by a synergistic effect of the turnover and coupling. Our model is based on the Kuramoto model <cit.>, which has been successfully applied to investigate synchronization in various systems <cit.>, and incorporates the effect of the turnover as stochastic resetting <cit.>. The tractability of the model enables us to obtain transition curves.
The Kuramoto model with turnover
The dynamics of N identical Kuramoto oscillators are given as follows <cit.>:
dθ_i/dt = ω +
κ/N∑_j=1^Nsin(θ_j - θ_i),
where θ_i (i=1,2,…,N) is the phase of the ith oscillator, ω≠ 0 is the natural frequency of the oscillators, and κ≥ 0 represents the coupling strength.
Suppose one of the oscillators is randomly chosen and replaced by a new oscillator. This event is equivalent to resetting the phase of the selected oscillator to that of the newly added oscillator.
Hence, as a model for oscillators with turnover, we adopt a system that involves phase resetting.
Specifically, we consider a population of Kuramoto oscillators where each oscillator experiences a reset event with probability α dt during an infinitely short time width dt. In other words, the resetting events are such that α N oscillators are expected to be substituted per unit time.
Such dynamics can be described by the following Itô stochastic differential equation with jumps:
dθ_i = [ω
+ κ/N∑_j=1^Nsin(θ_j - θ_i)
] dt
+[ -θ_i + ϕ(γ_i)] dP_i(γ_i;α),
where the last term describes the stochastic resetting of θ_i to a new phase ϕ that is drawn from the distribution f(ϕ) at each reset. More precisely, this term is the differential of a Poisson process with state-dependent and mark-dependent amplitudes <cit.>, where γ_i is the underlying mark variable and α is the constant jump rate. Hereinafter, we call α the turnover rate.
The counting processes { P_i(γ_i;α) }_i=1^N are pairwise independent. That is, the timings of the resets of two different oscillators are independent. The distributions of the mark variables are uniform in [0,2π), and the function ϕ(γ) is determined such that the distribution of ϕ is given by f(ϕ).
Consistent with previous work <cit.>, the increase of the turnover rate α in Eq. (<ref>) causes the extinction of the macroscopic oscillation. The snapshots of the phase distribution in fig:dynamics(a,b) indicate that the distribution for α=0.1 has a sharp peak and changes with time, whereas that for α=0.3 is almost uniform and steady. In the numerical simulation, we adopted f(ϕ) with the form of a Poisson kernel:
f(ϕ) = 1/2 π1-σ^2/1 - 2 σcosϕ + σ^2,
where σ determines the sharpness of f(ϕ); f(θ) = 1/(2π) for σ=0 and f(θ) →δ(θ) as σ→ 1.
Similar results were obtained for other unimodal distributions (not shown here).
Let us quantify the macroscopic oscillation by introducing the complex order parameter r, Kuramoto order parameter R, and mean phase Θ as follows <cit.>:
r(t) = R(t)e^iΘ(t)≡1/N∑_j=1^N e^iθ_j(t).
According to this definition, R reflects the coherence of the phases; R=0 when the phases are uniformly distributed, and R=1 when all oscillators have the same phase. Note that the system without turnover, Eq. (<ref>), has a stable synchronized oscillatory solution with R(t)=1 and Θ(t)=Θ(0)+ω t. The time series of r oscillates for α=0.1, whereas it remains constant for α=0.3; see Fig. <ref>(c).
We define the intensity of the macroscopic oscillation as the fluctuation of r:
Q ≡√(⟨ | r - ⟨ r ⟩ |^2 ⟩),
where ⟨·⟩ denotes the long-time average. Note that Q=0 if r is constant. As shown in Fig. <ref>, the dependence of Q on phase resetting changes qualitatively as the coupling strength κ varies. For sufficiently small κ, the value of Q is hardly affected by the width σ of the distribution f and gradually decreases as α increases. In contrast, for larger values of κ, σ also affects Q; when σ≃ 0, Q gradually decreases as α increases (Fig. <ref>(a)), whereas Q suddenly drops to near 0 when σ≃ 1 (Fig. <ref>(b)).
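To make the numerical setup concrete, the following Python sketch (ours; parameter values are illustrative) integrates the oscillator dynamics with a simple Euler scheme, implements turnover as per-step resetting events of probability α dt, draws new phases from the Poisson kernel f via SciPy's wrapped-Cauchy distribution (whose density matches Eq. (<ref>) with shape parameter σ), and evaluates the complex order parameter and the fluctuation measure Q.

```python
import numpy as np
from scipy.stats import wrapcauchy

def simulate_kuramoto_with_turnover(N=1000, omega=1.0, kappa=1.0, alpha=0.1,
                                    sigma=0.5, dt=0.01, steps=20000, seed=0):
    """Euler integration of N Kuramoto oscillators with stochastic phase resetting:
    each oscillator is reset with probability alpha*dt per step to a phase drawn from
    the Poisson-kernel (wrapped-Cauchy) distribution with sharpness sigma."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    r_hist = np.empty(steps, dtype=complex)
    for t in range(steps):
        r = np.mean(np.exp(1j * theta))                          # complex order parameter
        r_hist[t] = r
        drift = omega + kappa * np.imag(r * np.exp(-1j * theta)) # (kappa/N) sum_j sin(theta_j - theta_i)
        theta = (theta + drift * dt) % (2.0 * np.pi)
        reset = rng.random(N) < alpha * dt                       # turnover (resetting) events
        n_reset = int(reset.sum())
        if n_reset > 0:
            if sigma > 0.0:
                theta[reset] = wrapcauchy.rvs(sigma, size=n_reset, random_state=rng)
            else:
                theta[reset] = rng.uniform(0.0, 2.0 * np.pi, n_reset)
    r_tail = r_hist[steps // 2:]                                 # discard transients
    Q = np.sqrt(np.mean(np.abs(r_tail - r_tail.mean()) ** 2))    # fluctuation of r
    return Q, r_hist
```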
Mean-field approximation
To analyze these transitions,
let us reduce the system by employing a mean-field approximation in the same manner
as in <cit.>.
First, let p(*θ, t) and p_i(θ_i, t) be the probability density function for *θ≡ (θ_1, θ_2,…, θ_N) at time t and its marginalization over all phase variables except θ_i, respectively.
From Eq. (<ref>), the time evolution of p_i(θ_i, t) is given by
∂ p_i/∂ t =
- ∂/∂θ_i[∫_S_i p(*θ, t)v_i(*θ) d*θ_i] + α f- α p_i,
where
*θ_i ≡ ( θ_1, θ_2,…, θ_i-1, θ_i+1,…, θ_N ),
∫_S_i·d*θ_i ≡∫_0^2π⋯∫_0^2π·dθ_1 dθ_2 ⋯ dθ_i-1 dθ_i+1⋯ dθ_N, and
v_i(*θ) ≡ω + κ N^-1∑_j=1^Nsin( θ_j - θ_i ) <cit.>.
Now we invoke the mean-field approximation; we assume p_i(θ_i,t)p_j(θ_j,t) - p_i,j(θ_i, θ_j, t)→ 0 and p_i(θ_i, t) - p_j(θ_j, t) → 0 as N →∞ for any i ≠ j, where p_i,j(θ_i,θ_j, t) is the joint probability distribution function for θ_i and θ_j.
Then, from Eq. (<ref>), the time evolution equation of the phase distribution p(θ, t) is obtained as
∂ p/∂ t=
- ∂/∂θ[pv]+ α f- α p,
v=v(θ, t) ≡ω
+ κ∫_0^2π p(θ̃,t)sin(θ̃ - θ) dθ̃.
In the following, we set ω=1 without loss of generality by normalizing t, κ, and α by ω.
Desynchronization
When α=0, Eq. (<ref>) has the uniform steady solution p^(0)(θ) ≡ 1/(2π). This solution corresponds to the desynchronized state in the sense that the cohesiveness of the phases is completely lost.
For α≠ 0, p^(0)(θ) is no longer a solution of Eq. (<ref>); however, fig:dynamics(b) suggests the existence of a steady solution obtained by slightly deforming p^(0)(θ).
The stabilization of such a solution, formally analyzed below, explains the extinction of the collective oscillation observed for small κ and α in Fig. <ref>.
Let ρ be the deviation from a steady solution
p̂, i.e., ρ = p - p̂.
Linearizing Eq. (<ref>) around p̂ yields
∂ρ/∂ t ≃ -∂/∂θ{p̂𝒰[ρ]+
( 1+𝒰[p̂] )ρ} - αρ≡ℒ[ρ],
where the linear functional 𝒰 is defined by
𝒰[ρ](θ) ≡κ∫_0^2πsin( θ̃ - θ)ρ(θ̃) dθ̃.
Assume further that p̂ and ℒ are expanded as follows for sufficiently small values of α:
p̂(θ;α,κ) = p̂^(0)(θ) + ∑_l=1^∞α^l p^(l)(θ;κ),
ℒ = ∑_l=0^∞α^l ℒ^(l).
The expression for ℒ^(l) is obtained by
inserting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) and collecting the terms of α^l.
It is straightforward to find that ℒ^(0) u_m^(0)(θ) = λ_m^(0) u_m^(0)(θ) holds for any integer m, where u_m^(0)(θ) = e^i m θ and λ_m^(0) = - i m + κ/2δ_|m|,1.
Now, we define an inner product ⟨·, ·⟩ of smooth 2π-periodic functions z_1(θ) and z_2(θ) as
⟨ z_1, z_2 ⟩≡1/2π∫_0^2π z_1(θ) z̄_2(θ) dθ,
where z̄_2(θ) is the complex conjugate of z_2(θ). The adjoint operator ℒ^(0)† of ℒ^(0) defined on this inner-product space satisfies ℒ^(0)† u_m^(0) = λ_-m^(0) u_m^(0). Thus, u_m^(0)(θ) is an eigenvector of both ℒ^(0) and ℒ^(0)†.
This allows us to apply the Rayleigh–Schrödinger perturbation theory <cit.> to evaluate the eigenvalues of ℒ. Specifically,
we assume that the eigenvalues and eigenvectors of ℒ can be expanded around those of ℒ^(0):
ℒ u_m = λ_m u_m,
u_m = ∑_l=0^∞α^l u_m^(l),
λ_m = ∑_l=0^∞α^l λ_m^(l).
To ensure the uniqueness of the expansion, we also impose the orthogonality condition u_m^(0)u_m^(l) = δ_0,l.
Inserting Eqs. (<ref>) and (<ref>) into (<ref>) and extracting the terms of α, we obtain
λ_m^(1) u_m^(0) = ℒ^(0) u_m^(1)+ ℒ^(1) u_m^(0) -
λ_m^(0) u_m^(1).
Then, taking the inner product of both sides with u_m^(0) yields
λ_m^(1) = -1.
Hence, λ_m = - i m + δ_|m|,1κ/2 - α to the order of α, and the maximum of the real parts of the eigenvalues exceeds 0 when
κ > 2 α.
Figure <ref> implies that, for small α, the boundary on which Q vanishes agrees with the curve κ = 2α, which is delineated by the white dotted curves.
Consistent with the numerical simulation, this boundary depends on α but not on σ, the width of f(θ).
It should be noted that the stability analysis presented here is formal but not mathematically rigorous
because (1) the validity of the perturbative calculation is assumed without any proof and (2) the continuous and residual spectra are ignored.
Nevertheless, the agreement with numerical simulation indicates the validity of the analysis.
Stochastic oscillation quenching
The dependence of Q on the turnover rate α for σ=0, shown in Fig. <ref>(a), does not change qualitatively when κ is increased. However, when σ approaches 1, the transition observed for large κ is no longer explained by desynchronization discussed above; see Fig. <ref>(b).
Numerical simulations indicate that the phase distribution after this transition is steady but far from uniform, as shown in Fig. <ref>(a). Furthermore, the velocity field
v(θ) ≡ω + κ∫sin(θ'-θ)p(θ') dθ'
is qualitatively different from that of the desynchronized state; the transition involves the emergence of
the zeros of v(θ), and these continue to exist when κ or α is further increased. See the blue dashed and green dot-dashed curves in Fig. <ref> for v(θ) just after and far beyond the transition, respectively. In contrast, v(θ) in the desynchronized state, which is indicated by the black solid curve in Fig. <ref>, has no zero.
In deterministic systems, the zeros of the velocity field of the oscillators imply oscillation quenching, the phenomenon where individual oscillators cease their oscillation <cit.>. Although our system involves stochastic resetting, we can make an analogy as follows.
Consider a “test oscillator,” i.e., an oscillator that changes its phase according to v(θ) but does not affect the system nor undergo phase resetting. If we incorporate it into the system, its phase changes toward a point θ_q at which v(θ_q)=0 and dv(θ_q)/dθ≤ 0 hold, and then becomes almost static after initial transients. Thus, the steady distribution with which v(θ) has zeros implies the quenching of the test oscillator, and we refer to the realization of such a distribution as stochastic oscillation quenching (SOQ).
In the limit f(θ) →δ(θ), we can determine the condition under which the line v=0 is tangent to v(θ), or equivalently, dv(θ_q)/dθ=0. To this end, we use the following self-consistency argument. Setting ∂ p/∂ t=0 in Eq. (<ref>) and imposing v(θ_q)= dv(θ_q)/dθ=0 yields
d/dθ{
p(θ)[ 1-cos(θ_q - θ) ]} + α p(θ) - αδ(θ) = 0.
We seek a solution p(θ) such that p(θ)>0 in [0,θ_q) and p(θ)=0 in [θ_q, 2π) because
the phase of a new oscillator continues to approach θ_q from 0 when SOQ occurs. This ansatz is consistent with the phase distribution obtained by numerical simulation for σ=0.99 ≃ 1, where most of the oscillators are located in the range [0,θ_q), as shown in fig:death(a).
Noting that the derivative at a discontinuous point yields Dirac's delta function, we obtain such a solution as follows:
p(θ) =
αexp[α( 1/tan(θ_q/2) - 1/tan((θ_q - θ)/2) )] / [1-cos(θ_q - θ)] (0 ≤θ < θ_q),
0 (otherwise),
which is continuous at θ=θ_q but discontinuous at θ=0.
Inserting Eq. (<ref>) into Eq. (<ref>) and imposing v(θ_q)= dv(θ_q)/dθ=0, we find that the parameters κ and α satisfy
κ =
g_1((α);α)(α),
where
g_1(c; α) ≡
(
2 α e^α c/√(1 - c^2)∫_c^1
x e^-α x/√(1 - x^2)/(1-x^2) dx
)^-1,
and c = cos(θ_q/2) is a zero of the function
g_2(c;α) =
∫_c^1(2x^2 - 1)
e^-α x/√(1-x^2)/(1-x^2)^3/2 dx.
We solve g_2=0 numerically and insert the solutions into Eq. (<ref>) to obtain the SOQ transition curve, which is plotted as a yellow solid curve in fig:Q(b).
The curve agrees with the value of α at which Q vanishes for large κ, supporting the assertion that the disappearance of the collective oscillation is due to SOQ.
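A sketch of this numerical procedure is given below (illustrative, and based on the forms of g_1 and g_2 written above): it evaluates both functions by quadrature and locates a zero of g_2 in c ∈ (0, 1) by scanning for a sign change and refining with Brent's method.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def g1(c, alpha):
    """g_1(c; alpha) evaluated by numerical quadrature (illustrative)."""
    integrand = lambda x: x * np.exp(-alpha * x / np.sqrt(1.0 - x * x)) / (1.0 - x * x)
    val, _ = quad(integrand, c, 1.0)
    return 1.0 / (2.0 * alpha * np.exp(alpha * c / np.sqrt(1.0 - c * c)) * val)

def g2(c, alpha):
    """g_2(c; alpha) evaluated by numerical quadrature (illustrative)."""
    integrand = lambda x: (2.0 * x * x - 1.0) * np.exp(-alpha * x / np.sqrt(1.0 - x * x)) / (1.0 - x * x) ** 1.5
    val, _ = quad(integrand, c, 1.0)
    return val

def soq_root(alpha, grid=np.linspace(1e-3, 0.999, 400)):
    """Locate a zero of g_2(., alpha) in (0, 1) by scanning for a sign change and refining with brentq."""
    vals = np.array([g2(c, alpha) for c in grid])
    for k in range(len(grid) - 1):
        if vals[k] == 0.0 or vals[k] * vals[k + 1] < 0.0:
            return brentq(g2, grid[k], grid[k + 1], args=(alpha,))
    return None   # no sign change found for this alpha
```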
Conclusion
We have analyzed a model of coupled oscillators with turnover and shown that the collective oscillation disappears via two types of transitions, namely, desynchronization and SOQ, depending on both the coupling strength and turnover rate. Importantly, SOQ occurs only when the coupling strength is sufficiently large and is promoted by increasing the turnover rate and/or the coupling strength.
Thus, this transition is not caused by the turnover alone but by a synergistic effect of the turnover and mutual coupling. While it has already been reported that turnover may eliminate collective oscillation <cit.>, understanding how this elimination occurs is essential for two reasons. First, it helps avoid the unexpected disappearance of synchronous oscillations. In the original Kuramoto model, increasing the coupling strength enhances phase cohesion and promotes the collective oscillation, so one would expect the same to be true for oscillators with turnover.
However, if the turnover rate is sufficiently large, increasing the coupling strength causes the collective oscillation to disappear through SOQ.
Second, the type of transition may be exploited for designing appropriate forcing to steer the system to a desirable state, as has been done in Ref. <cit.>. Hence, our results have potential implications for controlling and designing the collective behavior of oscillatory assemblies.
Previous experimental studies have proposed various
open reactors that sustain non-equilibrium chemical reactions by controlling the fluxes into and out of the system <cit.>.
Desynchronization and SOQ may be observed if self-sustained oscillators are incorporated into such an apparatus and parameters are varied. A candidate system might be constructed by combining techniques to maintain constant protein turnover <cit.> with protein molecules whose states change periodically <cit.>.
Our work might also be relevant to tissue formation in multicellular organisms because proliferation and differentiation sometimes involve phase resetting of cellular oscillators <cit.>.
Finally, our model can be easily extended in many ways. A natural extension is to replace the Kuramoto model with other phase oscillator models. For example, our analyses can be straightforwardly extended to the case of the Kuramoto–Sakaguchi model, which will be reported elsewhere <cit.>.
The authors thank Hiroshi Ito and Keiko Imai for helpful discussions on protein turnover, Ryota Kobayashi for illuminating comments on the Poisson process, and Namiko Mitarai, Tetsuhiro Hatakeyama, and Yuting Lou for insightful discussions on the application of the theory. This study was supported by JSPS KAKENHI Grant Number JP22KJ0899 to A.O. and JSPS KAKENHI Grant Number JP21K12056 to H.K.
A.O. and H.K. conceptualized the work; A.O. performed analyses and wrote the manuscript with support from H.K.
100
Pikovsky2001book
A Pikovsky, M Rosenblum, and J Kurths.
Synchronization: A Universal Concept in Nonlinear
Sciences.
Cambridge University Press, Cambridge, 2001.
Glass2001
Leon Glass.
Synchronization and rhythmic processes in physiology.
Nature, Vol. 410, No. 6825, pp. 277–284, 2001.
winfree01
A T Winfree.
The Geometry of Biological Time.
Springer New York, New York, 2001.
Neda2000
Z Néda, E Ravasz, Y Brechet, T Vicsek, and A.-L. Barabási.
The sound of many hands clapping.
Nature, Vol. 403, No. 6772, pp. 849–850, 2000.
Soriano2013
Miguel C. Soriano, Jordi García-Ojalvo, Claudio R. Mirasso, and Ingo
Fischer.
Complex photonics: Dynamics and applications of delay-coupled
semiconductors lasers.
Reviews of Modern Physics, Vol. 85, No. 1, pp. 421–470, March
2013.
Wiesenfeld1996
Kurt Wiesenfeld, Pere Colet, and Steven H. Strogatz.
Synchronization Transitions in a Disordered Josephson Series
Array.
Physical Review Letters, Vol. 76, No. 3, pp. 404–407, January
1996.
Rolfs2021
Zach Rolfs, Brian L Frey, Xudong Shi, Yoshitaka Kawai, Lloyd M Smith, and
Nathan V Welham.
An atlas of protein turnover rates in mouse tissues.
Nature Communications, Vol. 12, No. 1, p. 6778, 2021.
Pellettieri2007
Jason Pellettieri and Alejandro Sánchez Alvarado.
Cell Turnover and Adult Tissue Homeostasis: From Humans
to Planarians.
Annual Review of Genetics, Vol. 41, No. 1, pp. 83–105,
December 2007.
Burnett2011
Chris Burnett, Timothy Norman, and Katia Sycara.
Trust Decision-Making in Multi-Agent Systems.
In Proceedings of the 22nd International Joint Conference on
Artificial Intelligence, p. 120, Barcelona, Catalonia, Spain, January
2011.
Gualdi2015
Stanislao Gualdi, Jean-Philippe Bouchaud, Giulia Cencetti, Marco Tarzia, and
Francesco Zamponi.
Endogenous Crisis Waves: Stochastic Model with Synchronized
Collective Behavior.
Physical Review Letters, Vol. 114, No. 8, p. 088701, February
2015.
Sugiura2016
Haruka Sugiura, Manami Ito, Tomoya Okuaki, Yoshihito Mori, Hiroyuki Kitahata,
and Masahiro Takinoue.
Pulse-density modulation control of chemical oscillation far from
equilibrium in a droplet open-reactor system.
Nature Communications, Vol. 7, No. 1, p. 10212, 2016.
Zwicker2010
David Zwicker, David K Lubensky, and Pieter Rein ten Wolde.
Robust circadian clocks from coupled protein-modification and
transcription–translation cycles.
Proceedings of the National Academy of Sciences, Vol. 107,
No. 52, pp. 22540–22545, December 2010.
YuWood2015
Wen Yu and Kevin B. Wood.
Synchronization and phase redistribution in self-replicating
populations of coupled oscillators and excitable elements.
Vol. 91, No. 6, p. 062708, June 2015.
Imai2004
Keiko Imai, Taeko Nishiwaki, Takao Kondo, and Hideo Iwasaki.
Circadian Rhythms in the Synthesis and Degradation of a
Master Clock Protein KaiC in Cyanobacteria.
Journal of Biological Chemistry, Vol. 279, No. 35, pp.
36534–36539, August 2004.
Teng2013
Shu-Wen Teng, Shankar Mukherji, Jeffrey R Moffitt, Sophie de Buyl, and Erin K
O'Shea.
Robust Circadian Oscillations in Growing Cyanobacteria Require
Transcriptional Feedback.
Science, Vol. 340, No. 6133, pp. 737–740, 2013.
Kuramoto1984
Y Kuramoto.
Chemical Oscillations, Waves, and Turbulence.
Springer, Berlin, 1984.
Zou2021
Wei Zou, D V Senthilkumar, Meng Zhan, and Jürgen Kurths.
Quenching, aging, and reviving in coupled dynamical networks.
Physics Reports, Vol. 931, pp. 1–72, 2021.
Kuramoto1975
Yoshiki Kuramoto.
Self-entrainment of a population of coupled non-linear oscillators.
In Huzihiro Araki, editor, International Symposium on
Mathematical Problems in Theoretical Physics, pp. 420–422, Berlin,
Heidelberg, 1975. Springer Berlin Heidelberg.
Kuramoto2019
Yoshiki Kuramoto and Hiroya Nakao.
On the concept of dynamical reduction: The case of coupled
oscillators.
Philosophical Transactions of the Royal Society A: Mathematical,
Physical and Engineering Sciences, Vol. 377, No. 2160, p. 20190041, October
2019.
Strogatz2000
Steven H Strogatz.
From Kuramoto to Crawford: Exploring the onset of
synchronization in populations of coupled oscillators.
Physica D: Nonlinear Phenomena, Vol. 143, No. 1, pp. 1–20,
2000.
Acebron2005
Juan A. Acebrón, L. L. Bonilla, Conrad J. Pérez Vicente, Félix
Ritort, and Renato Spigler.
The Kuramoto model: A simple paradigm for synchronization
phenomena.
Reviews of Modern Physics, Vol. 77, No. 1, pp. 137–185, April
2005.
Zhai2004
Yumei Zhai, István Z Kiss, Hiroaki Daido, and John L Hudson.
Extracting order parameters from global measurements with application
to coupled electrochemical oscillators.
Physica D: Nonlinear Phenomena, Vol. 199, No. 3, pp. 387–399,
2004.
Kiss2002
István Z Kiss, Yumei Zhai, and John L Hudson.
Emerging Coherence in a Population of Chemical
Oscillators.
Science, Vol. 296, No. 5573, pp. 1676–1678, May 2002.
Evans2020
Martin R Evans, Satya N Majumdar, and Grégory Schehr.
Stochastic resetting and applications.
Journal of Physics A: Mathematical and Theoretical, Vol. 53,
No. 19, p. 193001, 2020.
Nagar2023
Apoorva Nagar and Shamik Gupta.
Stochastic resetting in interacting particle systems: A review.
Journal of Physics A: Mathematical and Theoretical, Vol. 56,
No. 28, p. 283001, June 2023.
Hanson2007
Floyd B. Hanson.
Applied Stochastic Processes and Control for Jump-diffusions
: Modeling, Analysis, and Computation.
Society for Industrial and Applied Mathematics, Philadelphia,
2007.
Crawford1999
John D. Crawford and K. T. R. Davies.
Synchronization of globally coupled phase oscillators: Singularities
and scaling for general couplings.
Physica D: Nonlinear Phenomena, Vol. 125, No. 1, pp. 1–46,
January 1999.
Sakurai2011
Jun Sakurai and Jim Napolitano.
Modern Quantum Mechanics.
Addison-Wesley, Boston, 2nd ed edition, 2011.
Murayama2017
Yoriko Murayama, Hiroshi Kori, Chiaki Oshima, Takao Kondo, Hideo Iwasaki, and
Hiroshi Ito.
Low temperature nullifies the circadian clock in cyanobacteria
through Hopf bifurcation.
Proceedings of the National Academy of Sciences, Vol. 114,
No. 22, pp. 5641–5646, May 2017.
Epstein1989
Irving R. Epstein.
The role of flow systems in far-from-equilibrium dynamics.
Journal of Chemical Education, Vol. 66, No. 3, p. 191, March
1989.
Spirin2004
Alexander S. Spirin.
High-throughput cell-free systems for synthesis of functionally
active proteins.
Trends in Biotechnology, Vol. 22, No. 10, pp. 538–545, October
2004.
Niederholtmeyer2013
Henrike Niederholtmeyer, Viktoria Stepanova, and Sebastian J. Maerkl.
Implementation of cell-free biological networks at steady state.
Proceedings of the National Academy of Sciences, Vol. 110,
No. 40, pp. 15985–15990, October 2013.
Kimchi2020
Ofer Kimchi, Carl P. Goodrich, Alexis Courbet, Agnese I. Curatolo, Nicholas B.
Woodall, David Baker, and Michael P. Brenner.
Self-assembly–based posttranslational protein oscillators.
Science Advances, Vol. 6, No. 51, p. eabc1939, December 2020.
Nakajima2005
Masato Nakajima, Keiko Imai, Hiroshi Ito, Taeko Nishiwaki, Yoriko Murayama,
Hideo Iwasaki, Tokitaka Oyama, and Takao Kondo.
Reconstitution of Circadian Oscillation of Cyanobacterial KaiC
Phosphorylation in Vitro.
Science, Vol. 308, No. 5720, pp. 414 LP – 415, April 2005.
Fukuda2012
Hirokazu Fukuda, Kazuya Ukai, and Tokitaka Oyama.
Self-arrangement of cellular circadian rhythms through
phase-resetting in plant roots.
Physical Review E, Vol. 86, No. 4, p. 041917, October 2012.
Yagita2010
Kazuhiro Yagita, Kyoji Horie, Satoshi Koinuma, Wataru Nakamura, Iori Yamanaka,
Akihiro Urasaki, Yasufumi Shigeyoshi, Koichi Kawakami, Shoichi Shimada, Junji
Takeda, and Yasuo Uchiyama.
Development of the circadian oscillator during differentiation of
mouse embryonic stem cells in vitro.
Proceedings of the National Academy of Sciences, Vol. 107,
No. 8, pp. 3846–3851, February 2010.
Ozawainpreparation
Ayumi Ozawa and Hiroshi Kori.
in preparation.
|
http://arxiv.org/abs/2307.03213v1
|
20230706164243
|
Trajectory sampling and finite-size effects in first-principles stopping power calculations
|
[
"Alina Kononov",
"Thomas Hentschel",
"Stephanie B. Hansen",
"Andrew D. Baczewski"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
Center for Computing Research, Sandia National Laboratories, Albuquerque NM, USA
School of Applied & Engineering Physics, Cornell University, Ithaca NY, USA
Pulsed Power Sciences Center, Sandia National Laboratories, Albuquerque NM, USA
Center for Computing Research, Sandia National Laboratories, Albuquerque NM, USA
Real-time time-dependent density functional theory (TDDFT) is presently the most accurate available method for computing electronic stopping powers from first principles.
However, obtaining application-relevant results often involves either costly averages over multiple calculations or ad hoc selection of a representative ion trajectory.
We consider a broadly applicable, quantitative metric for evaluating and optimizing trajectories in this context.
This methodology enables rigorous analysis of the failure modes of various common trajectory choices in crystalline materials.
Although randomly selecting trajectories is common practice in stopping power calculations in solids, we show that nearly 30% of random trajectories in an FCC aluminium crystal will not representatively sample the material over the time and length scales feasibly simulated with TDDFT, and unrepresentative choices incur errors of up to 60%.
We also show that finite-size effects depend on ion trajectory via “ouroboros” effects beyond the prevailing plasmon-based interpretation, and we propose a cost-reducing scheme to obtain converged results even when expensive core-electron contributions preclude large supercells.
This work helps to mitigate poorly controlled approximations in first-principles stopping power calculations, allowing 1 – 2 order of magnitude cost reductions for obtaining representatively averaged and converged results.
Trajectory sampling and finite-size effects in first-principles
stopping power calculations
Andrew D. Baczewski
August 1, 2023
§ INTRODUCTION
High-performance computing has revolutionized materials science, enabling prediction, design, and unprecedented understanding of materials properties to complement and accelerate experimental efforts.
Modeling dynamic, nonlinear responses to stimuli such as laser and particle irradiation falls among the most computationally demanding types of materials simulations, requiring real-time evolution of extended systems containing hundreds of atoms and thousands of quantum-mechanical electrons <cit.>.
For materials in extreme conditions, high temperatures result in orders-of-magnitude increases in the number of partially occupied electronic orbitals, either requiring additional approximations <cit.> or further escalating computational resource requirements to millions of CPU-hours or more per calculation <cit.>.
In this context, deliberate design of simulations is crucial for maximizing insight while maintaining feasible computational costs.
Here, we focus on calculations of electronic stopping power, the rate at which a moving particle loses energy to electrons.
This fundamental quantity is critically important to diverse fields.
For example, radiation therapy relies on stopping powers to predict particle ranges and precisely target tumors <cit.>.
Stopping powers also underlie radiation damage to materials in space and nuclear energy applications <cit.>.
In materials imaging and processing techniques, energy deposition by focused ion beams relates to electron emission, sample damage, and defect engineering <cit.>.
Finally, achieving ignition in fusion energy research relies on fusion products redepositing their kinetic energy into the fuel <cit.>.
First-principles simulations using real-time time-dependent density functional theory (TDDFT) can offer accurate predictions of electronic stopping powers and insights into underlying physical processes <cit.>, often in more detail than possible experimentally.
However, computing average stopping powers that are comparable to experimentally observable and practically relevant values can pose a challenge.
While an individual TDDFT calculation simulates a single projectile traversing a specific path, stopping power experiments measure energy loss distributions for finite-width ion beams, often incident on polycrystalline or disordered samples <cit.>.
Moreover, applications either also employ finite-width ion beams (e.g., materials imaging and processing) or involve randomly oriented radiation (e.g., equipment in space and fusion fuel).
Therefore, sensitivities to the projectile's trajectory can limit the utility of TDDFT stopping power predictions.
Such sensitivities often occur for projectile velocities beyond the Bragg peak and mainly affect contributions from core electrons, or more generally, spatially non-uniform electronic orbitals <cit.>.
Meaningful average stopping powers can still be obtained from first principles by averaging <cit.> or carefully integrating <cit.> the results of several TDDFT calculations using distinct projectile trajectories.
The significant computational cost of this approach makes it tempting to select a single trajectory presumed to be representative of an ensemble average.
For solids, one possible choice is the centroid trajectory, wherein the projectile travels along a crystallographic direction with a path given by the geometric centroid of a symmetry-irreducible cross-section of the crystal structure <cit.> (see Fig. <ref>).
This method appears adequate when core electron contributions are small, but becomes inaccurate for fast projectiles and high-Z targets <cit.>.
Alternatively, a randomly chosen trajectory can achieve good agreement with empirical data even when core-electron contributions are important <cit.>.
In this case, trajectory choice may constitute an uncontrolled approximation, thus necessitating quantitative methods for assessing any given choice.
Recently, Gu et al. developed an innovative pre-sampling approach <cit.> that averages results from several short trajectories carefully selected such that, in aggregate, they representatively sample a disordered system.
Here, we present a complementary method that uses a quantitative metric to guide a priori selection of a single, representative projectile trajectory for first-principles calculations of electronic stopping power.
Using proton stopping in aluminum as an exemplar, we demonstrate the utility of such an approach even in a crystalline material, as we find that achieving agreement with empirical data requires a high-quality trajectory despite wide-spread presumptions that a random choice suffices.
While both methods can reduce computational costs associated with averaging over multiple long trajectories and help obtain reliable results even in the absence of experimental data to validate against, we expect that our approach will be more efficient than the one proposed by Gu et al. <cit.> in the case of heavy ions that experience relatively long transient behavior <cit.> before entering the steady-state stopping regime.
In addition to trajectory-dependent core-electron contributions, finite-size effects limit the accuracy of first-principles stopping power calculations, particularly for fast projectiles with velocities above the Bragg peak.
Typically, TDDFT calculations are expected to underestimate stopping powers by neglecting long-wavelength plasmonic excitations that a finite periodic supercell cannot support <cit.>.
We examine variations in computed stopping powers for different supercell sizes and projectile trajectories and reveal significant departures from this model of finite-size errors.
Finally, we extend our trajectory optimization framework with a second quantitative metric that enables a priori selection of trajectories that minimize finite-size effects.
Together, the two trajectory metrics developed in this work allow deeper understanding and new opportunities for mitigation of previously poorly controlled approximations in TDDFT simulations of energetic particles traversing matter.
This contribution advances the accuracy and efficiency of first-principles stopping power calculations with wide-ranging implications for computational studies relating to radiation therapy, materials in extreme conditions, ion-beam imaging and patterning techniques, and self-heating of fusion fuel.
§ RESULTS
§.§ Sampling close collisions
Obtaining an application-relevant result from a single TDDFT stopping calculation requires choosing a trajectory along which the projectile experiences an environment that quantitatively resembles those likely to occur in that application.
In particular, close collisions between the projectile and host nuclei can involve (semi)core electron excitations that introduce sharp features in the stopping forces and can dominate the average stopping power <cit.>.
Thus, representatively sampling close collisions is essential for computing accurate stopping powers, and we expect that the distance between the projectile and the nearest host nuclei at all points along the trajectory provides a compact description of the environment that determines the average stopping power.
This notion previously inspired a method for assessing the quality of different trajectories in disordered systems <cit.>.
In what follows, we present a quantitative metric that enables rigorous comparisons among different trajectories and their attendant stopping powers.
Furthermore, we will show that such an approach is critical for selecting trajectories even in crystalline materials because not all randomly-oriented off-channeling trajectories are equally representative of an average environment over the course of a typical few-fs simulation time.
Unrepresentative finite-length off-channeling trajectories lead to poor estimates of average stopping power and could skew averages based on naive trajectory sampling.
To evaluate the quality of a given trajectory, we first calculate the distribution P_traj(δ_NN) of nearest-neighbor distances δ_NN between the host nuclei and the projectile along the trajectory, excluding data during the first 4 Å of the projectile's path since the ultimate stopping power extraction will ignore this early transient regime (see Sec. <ref> of the supplementary information (SI)).
This value is specific to a proton projectile, and higher Z projectiles would have longer transient regimes <cit.>, further motivating optimization of a single representative trajectory.
Computing P_traj(δ_NN) relies only on the geometric specification of the supercell and does not require expensive TDDFT simulations.
We then compare P_traj(δ_NN) to an ideal distribution P_ideal(δ_NN) generated by calculating distances to nearest host nuclei from randomly sampled points within the supercell.
This choice of P_ideal(δ_NN) represents the distribution experienced by randomly oriented radiation or a focused ion beam interacting with randomly oriented grains within a polycrystalline sample.
Earlier work by Gu et al. instead generated a reference distribution by sampling along a 500 Å-long trajectory <cit.>, a method that should lead to the same distribution provided that the selected reference trajectory representatively samples the entire supercell.
Other choices may be more suitable depending on the specific application, e.g. a focused ion beam aligned with a lattice vector of a single crystal.
Section <ref> in the SI discusses numerical sampling of P_ideal and P_traj.
The Hellinger distance D_H between the two nearest-neighbor distributions, given by
D_H^2 = 1 - ∫_0^∞√(P_traj(δ_NN) P_ideal(δ_NN)) dδ_NN,
provides a quantitative measure of how well a trajectory samples the simulation cell.
Notably, D_H is bounded between 0 and 1, with D_H=1 achieved when the two distributions have no overlap and D_H=0 achieved only when the two distributions are identical.
Minimizing D_H will enable selection of optimally representative trajectories.
We chose the Hellinger distance over other possible ways to quantify the similarity of two distributions because it satisfies the mathematical properties of a metric.
In particular, D_H is symmetric and obeys the triangle inequality, allowing sensible comparisons of two different trajectories by replacing P_ideal(δ_NN) in Eq. (<ref>) with the nearest-neighbor distribution of the second trajectory.
We compare the D_H metric used in this work with the overlap index used by Gu et al. <cit.> in Section <ref> of the SI.
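For concreteness, the discretized form of Eq. (<ref>) takes only a few lines; the following is a minimal Python sketch under the assumption that both distributions have already been histogrammed as normalized densities on a common grid of δ_NN bins (the function name and argument conventions are ours, not part of any published workflow).

import numpy as np

def hellinger_distance(p_traj, p_ideal, bin_width):
    # p_traj, p_ideal: nearest-neighbor distance histograms normalized to unit area,
    # defined on a common grid of delta_NN bins of width bin_width (in angstrom).
    # Discretized Eq. (<ref>): D_H^2 = 1 - sum_i sqrt(P_traj,i * P_ideal,i) * bin_width.
    bhattacharyya = np.sum(np.sqrt(p_traj * p_ideal)) * bin_width
    return np.sqrt(max(0.0, 1.0 - bhattacharyya))

In practice, this quantity is simply re-evaluated as a candidate trajectory is extended.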
As the proton travels, P_traj(δ_NN) evolves and D_H becomes a function of the total distance traversed by the projectile.
All but a few pathological random trajectories will asymptotically approach D_H=0 as the total distance traversed tends to infinity, but the impact of finite-size effects (e.g., spurious interactions between the projectile and its wake) will also grow with the total distance.
Thus, a “good” trajectory should achieve a small D_H after the projectile travels a relatively short distance so that a single, reasonably short TDDFT calculation suffices to obtain an accurate estimate of the average stopping power.
In the following, we investigate the behavior of D_H for a range of trajectories and the implications for stopping power results.
In Fig. <ref>a we consider the case of FCC aluminum and compare the nearest-neighbor distances and corresponding distributions P_traj(δ_NN) occurring along several trajectories, including those illustrated in Fig. <ref>.
Both the hyperchanneling and centroid trajectories lie parallel to a lattice vector and thus sample periodic environments, producing nearly static nearest-neighbor distributions and asymptotically constant D_H values of about 0.67 and 0.33, respectively (see Fig. <ref>b).
Of course, D_H remains relatively large for the hyperchanneling trajectory because its nearest-neighbor distribution is severely skewed toward large δ_NN.
While the centroid trajectory significantly improves D_H over the hyperchanneling trajectory, its nearest-neighbor distribution is bimodal, oversampling near points of closest and furthest approach at δ_NN ≈ 0.75 and 1.5 Å while lacking close collisions with δ_NN < 0.5 Å.
Failure to capture the ideal nearest-neighbor distribution explains the poor performance reported for the centroid trajectory in regimes where core electron excitations contribute significantly to electronic stopping power <cit.>.
We also find that other channeling trajectories with different impact parameters are similarly restricted to D_H>0.3 (see Fig. <ref>a) and therefore do not representatively sample the supercell.
In contrast, both the off-channeling trajectory considered by Schleife et al. <cit.> and the other “good” off-channeling trajectory identified in this work approximate the ideal distribution more closely and continue to reduce D_H as the proton travels farther, reaching much lower D_H values of 0.05 – 0.06 by 80 Å.
Even when the projectile starts at a high-symmetry point within the crystal structure, the vast majority of possible directions of motion achieve D_H < 0.3 by the time the projectile travels 80 Å (see Fig. <ref>b).
Some but not all exceptions lie at channeling directions.
For a finite simulation length, D_H can be very sensitive to trajectory direction, with small changes in the trajectory angle sometimes leading to order-of-magnitude changes in D_H.
Among fully random trajectories where the projectile's initial position and direction of motion are both uniformly sampled, only 1.3% perform as poorly as channeling trajectories with D_H still exceeding 0.3 after the proton traverses 80 Å (see Fig. <ref>).
To verify the utility of our trajectory metric and deduce a threshold D_H value for accurate stopping power predictions, we also consider three “bad” random trajectories that either undersample (bad 1 and bad 2) or oversample (bad 3) close collisions with δ_NN < 0.5 Å (see Fig. <ref>a).
Unlike channeling trajectories, D_H continues to decrease along these bad off-channeling trajectories, but much more slowly than along the good off-channeling trajectories, only achieving D_H = 0.17 – 0.35 by the time the proton travels 80 Å.
Finally, we perform TDDFT stopping power calculations as described in Sec. <ref> for the off-channeling trajectories examined above.
Although the two good trajectories exhibit differing dynamical behavior, the average stopping powers extracted after the proton travels 80 Å agree within 1% and reproduce empirical data from the SRIM database <cit.> within 3% (see Fig. <ref>).
Meanwhile, stopping powers computed using the bad trajectories deviate from empirical data by up to 60%.
As expected, trajectories that undersample close collisions (bad 1 and bad 2) underestimate stopping power while the trajectory that oversamples close collisions (bad 3) overestimates stopping power.
Based on these findings, we propose that D_H≲ 0.1 suffices for representative sampling of the nearest-neighbor distribution in these stopping power calculations.
Notably, 27% of random trajectories still exceed this threshold after the projectile travels 80 Å (see Fig. <ref>), highlighting the importance of careful trajectory selection for accurate and efficient stopping power predictions.
§.§ Mitigating finite-size effects
Finite periodic supercells limit the wavelength of plasmonic excitations that a plane-wave TDDFT calculation can capture, often leading to underestimated electronic stopping powers at high projectile velocities <cit.>.
Finite-size errors caused by this plasmon cutoff can be estimated from linear response theory <cit.>,
which describes the stopping power in terms of the frequency and wave-vector dependent dielectric function ϵ(k, ω):
S(v) = 2Z^2/π v^2∫_0^∞dk/k∫_0^kv dω ω Im[ -1/ϵ(k, ω)],
where Z=1 and v are the projectile charge and velocity, respectively.
Here, we employ the Mermin model dielectric function <cit.> with a constant electron-ion collision frequency of 0.1 at. u.
We estimate the contribution of the long-wavelength plasmons that the TDDFT calculations neglect by imposing an upper limit of k_cut = 2π/L for the k integral in Eq. (<ref>), where L is the length of the cubic supercell, and evaluating the integrals numerically as recently described in Ref. hentschel:2023.
This portion of the total stopping power becomes significant when the integration limits contain the plasmon pole, i.e., when k_cut v ≳ ω_p, where ω_p ≈ 16 eV is the aluminum plasma frequency.
Indeed, for L = 12.15 Å the linear response formalism predicts sizeable finite-size errors for velocities above ω_p/k_cut ≈ 2 at. u. (see Fig. <ref>).
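The onset of these errors can be anticipated without any TDDFT calculation; the short sketch below (unit conversions are ours) reproduces the ≈ 2 at. u. estimate quoted above and the corresponding thresholds for the larger supercells considered later.

import numpy as np

BOHR_PER_ANGSTROM = 1.8897259886
HARTREE_PER_EV = 1.0 / 27.211386

def plasmon_cutoff_velocity(L_angstrom, omega_p_eV=16.0):
    # Velocity (at. u.) above which k_cut * v > omega_p, i.e. the k < 2*pi/L cutoff
    # excludes the plasmon pole and finite-size errors are expected to grow.
    k_cut = 2.0 * np.pi / (L_angstrom * BOHR_PER_ANGSTROM)  # bohr^-1
    omega_p = omega_p_eV * HARTREE_PER_EV                   # hartree
    return omega_p / k_cut

for L in (12.15, 16.2, 20.25):
    print(L, round(plasmon_cutoff_velocity(L), 2))  # ~2.1, ~2.9, ~3.6 at. u.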
However, we find that differences between stopping powers computed using TDDFT with different size supercells only loosely correlate with finite-size errors estimated from linear response.
As shown in Fig. <ref>, the relative difference between TDDFT results calculated using (12.15 Å)^3 and (16.2 Å)^3 cubic supercells often significantly exceeds the values predicted using the dielectric model.
Moreover, for some proton velocities, the smaller supercell actually produces greater stopping powers than the larger supercell, a result entirely inconsistent with the plasmon-based interpretation of finite-size effects.
As it turns out, the choice of trajectory not only affects fidelity of close collision sampling, but also influences finite-size errors, an effect not captured by the plasmonic model.
In Fig. <ref>, we compare stopping powers computed using three different size supercells and four different proton trajectories that each achieve comparably small D_H values (see Fig. <ref>).
Contributions from free (conduction) and core (2s and 2p) electrons were isolated through the use of different pseudopotentials (see Methods for more details).
Close agreement of the converged core-electron stopping powers verifies that each trajectory adequately samples close collisions with aluminum ions.
While the core-electron contribution shows little variation with supercell size, the free-electron stopping powers are quite sensitive to finite-size effects.
Similar to the negative finite-size errors appearing in Fig. <ref>, the computed free-electron stopping powers do not always grow monotonically with increasing supercell size: for two of the trajectories, the largest, (20.25 Å)^3 supercell leads to somewhat smaller free-electron stopping powers than the intermediate, (16.2 Å)^3 supercell (see Fig. <ref>a and b).
Meanwhile, for another trajectory, the smallest, (12.15 Å)^3 supercell produces a greater free-electron stopping power than the (16.2 Å)^3 supercell.
The magnitude of discrepancies between results computed with different size supercells also depends on the projectile trajectory.
We attribute the surprising trajectory-dependent and nonmonotonic behavior of finite-size effects in TDDFT stopping power calculations to artificial interactions with previously excited electrons.
In particular, if the projectile passes near its earlier path after re-entering a periodic supercell, then it interacts with an excited electron density rather than pristine material, distorting stopping power results.
Such “ouroboros” effects [We suggest “Pac-Man” effects as an alternative to “ouroboros” effects for any ophidiophobic readers.] are especially severe for channeling trajectories: upon reentry, the projectile traverses the exact same path, at which point the instantaneous stopping power begins to depend on supercell size <cit.>.
Ouroboros effects can also pollute results for off-channeling trajectories, since the projectile has a finite interaction radius that may partially overlap with previously excited regions.
Even if the projectile remains relatively far from previously traversed material, it could still interact with earlier electronic excitations that have propagated into its path.
We distinguish these different types of artificial interactions as static and dynamic ouroboros effects, respectively.
The prevalence of ouroboros effects can be estimated by considering the minimum distance D_O between periodic images of the projectile's path.
This distance is upper-bounded by the simulation cell dimensions and decreases as the projectile crosses periodic boundaries.
In the limit of large D_O, the projectile remains far from earlier excited material and ouroboros effects should be minimal.
In a metallic system, localized excitations only involve core atomic orbitals and are well-confined within the Wigner-Seitz radius r_WS of the host nuclei.
Therefore, we expect that ensuring D_O > r_WS (1.58 Å for FCC aluminum) eliminates static ouroboros effects.
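A rough a priori estimate of D_O requires only geometry; the sketch below idealizes the path as an infinite straight line in a cubic supercell and takes the minimum perpendicular distance to its periodic images (the n_max cutoff, standing in for the finite length of the actual simulated path, is our simplification).

import numpy as np
from itertools import product

def min_image_path_distance(direction, L, n_max=3):
    # Minimum perpendicular distance D_O (angstrom) between an idealized straight
    # projectile path and its periodic images in a cubic supercell of side L.
    # Only images within n_max cells of the original path are searched.
    v_hat = np.asarray(direction, dtype=float)
    v_hat /= np.linalg.norm(v_hat)
    d_min = np.inf
    for ijk in product(range(-n_max, n_max + 1), repeat=3):
        if ijk == (0, 0, 0):
            continue
        R = L * np.array(ijk, dtype=float)
        perp = R - np.dot(R, v_hat) * v_hat  # image offset perpendicular to the path
        d_min = min(d_min, np.linalg.norm(perp))
    return d_min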
Even for the lowest D_O tested, 1.3 Å for the smallest supercell in Fig. <ref>d, finite-size effects do not influence the core-electron stopping power because the 2s and 2p electrons are further localized within about 0.5 Å of aluminum nuclei.
Dynamic ouroboros effects, on the other hand, are much harder to characterize and avoid.
Given a Fermi velocity of about 20 Å/fs in aluminum, single-particle excitations could traverse the (16.2 Å)^3, 256-atom supercells used throughout this work in less than 1 fs, a time scale comparable to the duration of the TDDFT simulations.
First-principles calculations of plasmon dispersion in aluminum <cit.> suggest similar propagation speeds for collective excitations.
Since an off-channeling projectile must travel around 80 Å in order to adequately sample the nearest-neighbor distribution in this material (see Sec. <ref>), it crosses the periodic boundaries multiple times, leading to typical D_O distances of around 3 Å that electronic excitations traverse over an even shorter time scale.
However, proton projectiles can be expected to induce fairly weak charge perturbations, as evidenced by the relative success of linear response treatments of proton stopping powers <cit.>.
So, dynamic ouroboros effects could be small compared to other sources of error.
Furthermore, alternating artificial interactions with excitations that involve excess or reduced electron density relative to the pristine material could have partially cancelling influences on the average stopping power.
Indeed, the free-electron stopping powers reported in Fig. <ref> differ among each other by at most 31% of the total stopping power, whereas poor sampling of close collisions affected total stopping powers by almost a factor of 2 in Fig. <ref>.
In Fig. <ref>, we show that D_O ≳ 3 Å already achieves acceptable convergence of free-electron stopping powers.
The fact that finite-size effects predominantly influence free-electron contributions to stopping power can be exploited to reduce computational costs associated with first-principles stopping power calculations.
Core contributions typically dominate computational costs because explicitly modeling core electrons dramatically increases the spectral range of the Kohn-Sham Hamiltonian, generally requiring higher plane-wave cutoff energies, smaller time steps, and/or more solver iterations in implicit time-stepping algorithms such as the one used in this work <cit.>.
Since core contributions are not very sensitive to ouroboros effects or supercell size, they can be efficiently calculated using smaller supercells.
Meanwhile, free-electron contributions can be separately converged with respect to the D_O metric and supercell size while pseudizing core electrons to allow cheaper time evolution.
In this work, applying this scheme to separately compute core and free-electron contributions using (12.15 Å)^3 and (16.2 Å)^3 supercells, respectively, would have allowed a seven-fold speedup over using (16.2 Å)^3 supercells to compute both core and free-electron contributions simultaneously.
In fact, this strategy would have still resulted in a nearly three-fold speedup if the larger, (20.25 Å)^3 supercell had been used to further reduce finite-size effects in the free-electron contribution.
These savings are relative to the roughly 5 × 10^5 CPU-hour cost per production calculation in this work, which included core contributions in (16.2 Å)^3 supercells.
§ DISCUSSION
Fig. <ref> compares our final electronic stopping results as a function of proton velocity to those reported in an earlier TDDFT study <cit.> and the SRIM empirical model <cit.>.
Overall, we find good quantitative agreement between the two TDDFT datasets, with modest discrepancies arising from a combination of partially cancelling factors.
First, Ref. schleife:2015 verified convergence with respect to plane-wave cutoff energy for a channeling trajectory, but we show in of the SI that converging energy transferred during close collisions occurring along off-channeling trajectories requires higher cutoffs.
In particular, we find that the 680 eV cutoff used in Ref. schleife:2015 underestimates the high-velocity stopping power by about 5%.
Another source of discrepancies arises from different pseudopotentials: this work used the PAW method <cit.> within an extension of VASP <cit.>, while Ref. schleife:2015 used norm-conserving pseudopotentials <cit.> within Qb@ll <cit.>.
In Sec. <ref> of the SI, we show that even when all other parameters are fixed or separately converged as appropriate, the PAW and VASP methodology used in this work produces about 10% lower stopping powers than the harder, norm-conserving pseudopotentials within Qb@ll.
Additionally, a 2.5% uncertainty in extracted stopping powers arises from variations with respect to the data range included in the stopping power extraction (see Sec. <ref> of the SI).
Further benchmarking of different TDDFT codes, pseudopotentials, basis sets, and other methodological details will be important for reducing uncertainties in first-principles stopping data <cit.>.
To enable detailed interpretation of discrepancies in computed results and reduce computational costs associated with obtaining representative average stopping powers, we presented a quantitative metric to evaluate the quality of ion trajectories in first-principles electronic stopping power calculations.
The approach resembles the one proposed by Ref. gu:2020 and considers the distribution of nearest-neighbor distances experienced by the ion along its path, which allows scrutiny of how representatively different trajectories sample the close collisions that determine core-electron contributions.
Optimizing trajectories via this metric can help compute stopping powers more accurately and efficiently, particularly for high-Z elements requiring careful sampling of core-electron excitations.
We expect that straightforward extensions of this metric to disordered systems at high temperatures and heterogeneous systems including compounds, alloys, and mixtures will be even more impactful.
We also identified a cost-reducing scheme to systematically characterize finite-size errors.
Analyzing velocity- and trajectory-dependent finite-size effects revealed behavior inconsistent with the prevailing plasmonic model <cit.>, which we explain through “ouroboros" effects wherein the projectile fictitiously interacts with previously excited electrons.
Thus, we propose considering convergence with increasing distance between periodic images of the projectile's path, rather than increasing supercell size alone.
Finally, we find that finite-size errors primarily influence free-electron contributions to stopping power, enabling convergence without the high computational costs incurred by explicitly modeling core electrons in large supercells.
Overall, our combination of approaches facilitates systematic control and analysis of two important approximations in first-principles stopping power calculations: the choice of projectile trajectory and finite supercells.
These strategies will not only enhance the accuracy and efficiency of TDDFT stopping power calculations, but also prove valuable for higher levels of theory <cit.> and quantum simulation algorithms on quantum computers <cit.> as their viabilities improve.
§ METHODS
The real-time TDDFT calculations were performed with an in-house extension <cit.> of the Vienna ab initio simulation package (VASP) <cit.>.
Ground-state orbitals from density functional theory with a Fermi smearing of 100 served as the initial condition for solving the time-dependent Kohn-Sham (KS) equations.
Plane-wave cutoff energies of 750 eV achieved sufficiently converged results, and large supercells allowed reciprocal space sampling using the Γ point only.
Sec. <ref> in the SI describes convergence of these parameters in more detail.
The adiabatic local density approximation <cit.> was used for exchange and correlation (XC) after verifying accuracy relative to adiabatic versions of PBE <cit.> and SCAN <cit.> (see Sec. <ref> in the SI).
The electron-ion interaction was treated with the projector augmented-wave (PAW) method <cit.>, explicitly including 3 or 11 valence electrons per aluminum atom.
The two aluminum pseudopotentials allowed access to different contributions to the stopping power, with 3-electron calculations isolating the response of free electrons, 11-electron calculations additionally including core contributions, and the difference between 11-electron and 3-electron results isolating core contributions.
Sec. <ref> in the SI further discusses the influence of the pseudopotential approximation beyond the number of electrons explicitly modeled.
The TDDFT simulations held ionic velocities fixed while propagating the KS orbitals according to the Crank-Nicolson algorithm.
For protons with velocities of 1.5 at. u. or more, the time step was chosen to scale inversely with proton velocity such that the proton traverses about 0.02 Å within each time step.
For slower protons, smaller time steps of 0.3 – 0.4 as were needed to achieve converged results (see Sec. <ref> of SI for more details).
The ultimate stopping powers were computed from the time-dependent force on the proton, including Hellmann-Feynman and Pulay terms, but not the fully energy-conserving form derived in Ref. ojanpera:2012, whose correction is expected to be insignificant over the few-fs time scales simulated in this work.
Data during the first 4 Å of the proton's motion were excluded from the analysis to allow for dynamical ionization of the suddenly accelerated proton.
The time-dependent force was integrated along the proton's path to obtain the stopping work, or cumulative energy deposited by the proton into the electronic system.
The slope of the least-squares linear fit of the stopping work then produced the average stopping power.
In Sec. <ref> of the SI, we compare different procedures for extracting an average stopping power from TDDFT data and show that the methodology described here converges to within 2% after the proton traverses 75 Å, whereas other procedures can be much more sensitive to the precise endpoint of the analysis and take longer to converge.
§ DATA AVAILABILITY
The datasets computed and analyzed during the current study are available from the corresponding author upon reasonable request.
We thank André Schleife, Cheng-Wei Lee, Susan Atlas, Alexandra Olmstead, and Alexander White for helpful technical discussions.
We are also grateful to Joel Stevenson for technical support, Raymond Clay for feedback on the manuscript, and Heath Hanshaw for pre-publication review.
AK, ADB, and SBH were partially supported by the US Department of Energy Science Campaign 1.
SBH and TH were partially supported by the US Department of Energy, Office of Science Early Career Research Program, Office of Fusion Energy Sciences under Grant No. FWP-14-017426.
All authors were partially supported by Sandia National Laboratories' Laboratory Directed Research and Development (LDRD) Project No. 218456.
This article has been co-authored by employees of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The authors own all right, title and interest in and to the article and are solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan <https://www.energy.gov/downloads/doe-public-access-plan>.
Supplementary Information
§ STOPPING POWER EXTRACTION
An average stopping power may be extracted from the time-dependent data computed within TDDFT through various approaches.
One way is to time-average the force acting against the projectile's motion,
S_avg = -1/T-t_0∫_t_0^T 𝐅(t)·𝐯̂ dt,
where 𝐅(t) is the time-dependent force on the projectile, 𝐯̂ is the direction of the projectile's velocity, T is the total simulation time, and t_0>0 typically excludes a transient regime at the beginning of the simulation.
The integral over time in Eq. (<ref>) can be equivalently written as an integral over distance:
S_avg = -1/x_T - x_0∫_x_0^x_T𝐅(x)·𝐯̂ dx
= W(x_T) - W(x_0)/x_T - x_0,
where x is the distance traveled by the projectile, x_T = vT, and
W(x) = -∫𝐅(x)·𝐯̂ dx
is the so-called stopping work or the energy deposited by the projectile into the electronic system.
We see from Eq. <ref> that S_avg amounts to a two-point approximation of the average energy deposition rate.
Therefore, the average stopping power extracted in this fashion can be sensitive to the precise choice of x_0 and x_T, particularly when core electron excitations during close collisions with host nuclei lead to sharp features in W(x) (see Fig. <ref>).
A scheme to eliminate these sharp features such as the baseline method <cit.> suggested by Ref. yost:2016 reduces sensitivity to x_0 and x_T, but S_avg still converges rather slowly with increasing x_T.
Instead of simply averaging the time-dependent force, one may perform a linear fit
W(x) ≈ S_fit x + W_0 for x_0 ≤ x ≤ x_T
and take the resulting slope S_fit as the average stopping power.
Although the two methods should eventually converge to the same value,
we find that this latter method produces average stopping powers that evolve more smoothly as the simulation proceeds, i.e., as x_T increases, and approach a constant average stopping power earlier (see Fig. <ref>).
Therefore, the linear fitting procedure allows more precise determination of stopping powers with shorter simulation times and thus smaller supercells.
We similarly find that linear fitting reduces sensitivity to x_0, as keeping x_T ≥ 80 Å fixed and varying x_0 from 0 to 10 Å produces stopping powers within 2% of the converged value shown in Fig. <ref>.
Throughout the main text, we employ least-squares linear fits with x_0=4 Å and x_T≈ 80 Å.
Since the baseline filtering of the stopping work barely alters the fitted stopping power for x_T>80 Å, we do not bother with this complication in the main text.
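A minimal sketch of this fitting procedure, assuming the force component along the velocity is available on a grid of traveled distances (array names, trapezoidal integration, and default bounds are ours), is:

import numpy as np

def stopping_power_fit(x, force_along_v, x0=4.0, xT=80.0):
    # x: distance traveled by the projectile (angstrom), monotonically increasing.
    # force_along_v: F(x).v_hat, force component along the velocity direction.
    # Stopping work W(x) = -integral of F.v_hat dx (trapezoidal rule); the slope of a
    # least-squares linear fit of W over x0 <= x <= xT is the average stopping power.
    dW = -0.5 * (force_along_v[1:] + force_along_v[:-1]) * np.diff(x)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    mask = (x >= x0) & (x <= xT)
    S_fit, W_intercept = np.polyfit(x[mask], W[mask], 1)
    return S_fit  # in the units of force_along_v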
§ SAMPLING NEAREST-NEIGHBOR DISTRIBUTIONS
In the main text, the ideal nearest-neighbor distribution P_ideal(δ_NN) is sampled using 50,000 random points and histogrammed using a 0.1 Å bin width.
The Hellinger distance between this distribution and an ideal distribution sampled using 60,000 random points is only 0.002, well below the 0.1 threshold established in the main text for selecting representative trajectories.
Thus, the ideal nearest-neighbor distribution used throughout the main text is very well-converged for our purposes in this work.
We note that the number of samples needed depends on the bin width, with smaller bins requiring more points.
We sample the nearest-neighbor distribution P_traj(δ_NN) achieved by a particular trajectory by taking discrete steps of about 0.02 Å along that trajectory.
This step length matches the typical distance traveled by the projectile in each time step of the ultimate TDDFT simulations, and it produces around 10 times fewer samples than the number used to construct P_ideal(δ_NN).
However, sampling more points along the trajectory without increasing its total length does not significantly alter P_traj because the nearest-neighbor distance evolves continuously along the projectile's path and thus closely-spaced samples are highly correlated.
In fact, the step size only begins to noticeably affect computed D_H values when it exceeds the histogram bin width of 0.1 Å (see Fig. <ref>).
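Both distributions require only minimum-image nearest-neighbor distances; a deliberately naive sketch (brute force over all host atoms, with our parameter choices mirroring those quoted above) is:

import numpy as np

def nn_distances(points, host_positions, L):
    # Nearest-neighbor distance from each sample point to any host nucleus, using the
    # minimum-image convention in a cubic supercell of side L (angstrom). For large
    # sample sets, evaluate in chunks to limit memory use.
    d = points[:, None, :] - host_positions[None, :, :]
    d -= L * np.round(d / L)
    return np.sqrt((d ** 2).sum(axis=-1)).min(axis=1)

def histogram_density(delta_nn, bin_width=0.1, d_max=5.0):
    # Normalized density histogram on a fixed grid so that P_ideal and P_traj share bins.
    bins = np.arange(0.0, d_max + bin_width, bin_width)
    hist, _ = np.histogram(delta_nn, bins=bins, density=True)
    return hist

# P_ideal: 50,000 points drawn uniformly in the supercell.
# P_traj: points stepped about 0.02 angstrom apart along the candidate trajectory.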
§ COMPARISON BETWEEN HELLINGER DISTANCE AND OVERLAP INDEX
As described in Section <ref> of the main text, we evaluate the extent to which individual trajectories representatively sample the simulation cell using the Hellinger distance D_H (see Eq. (<ref>) in the main text).
Gu et al. <cit.> took an alternative approach based on the overlap index
O = 1-1/2∫|P_traj(δ_NN) - P_ideal(δ_NN)| dδ_NN,
which quantifies the overlapping area under P_traj and P_ideal.
Minimizing D_H, as proposed in this work, and maximizing O, as done in Ref. <cit.>, represent different ways to optimize trajectories with the same underlying principle that the nearest-neighbor distribution sampled along the trajectory should resemble the ideal or reference distribution.
The D_H method offers the advantages of a mathematical metric: it satisfies the triangle inequality and thus allows rigorous comparisons among different trajectories in addition to evaluating the similarity of P_traj and P_ideal.
Although O does not have the mathematical properties of a metric, we note that 1-O does.
Furthermore, we find that O and D_H are strongly correlated (see Fig. <ref>), and we expect that either option will result in suitably representative trajectory selections.
However, detailed comparisons between similar quality trajectories could depend on the metric used, since D_H and O can lead to opposite conclusions for which of two trajectories matches the ideal distribution more closely.
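On the same histogrammed distributions, the overlap index is a one-line complement to the D_H sketch given for the main-text metric (again only a schematic of Eq. (<ref>)):

import numpy as np

def overlap_index(p_traj, p_ideal, bin_width):
    # O = 1 - (1/2) * integral |P_traj - P_ideal| d(delta_NN); note that 1 - O,
    # unlike O itself, satisfies the properties of a metric.
    return 1.0 - 0.5 * np.sum(np.abs(p_traj - p_ideal)) * bin_width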
§ CONVERGENCE OF TDDFT SIMULATIONS
To establish computational parameters that achieve converged stopping powers, we performed various tests for off-channeling protons with 4 at. u. of velocity traversing aluminum.
Since this velocity lies above the Bragg peak (see Fig. <ref> in the main text), these tests probe convergence of contributions from both free and core electrons in aluminum.
Smaller supercells containing 64, 96, or 108 aluminum atoms were used to reduce computational costs.
First, we considered the influence of plane-wave cutoff energy and found that results computed with the 750 eV cutoff used throughout the main text are converged within 1.7% relative to a 1000 eV cutoff.
Throughout most of the simulation, the instantaneous forces on the proton are extremely well-converged even with a lower cutoff of 500 eV, which suffices to capture the free-electron contribution to stopping power given by the slope of the nearly linear portions of the stopping work plotted in Fig. <ref>.
However, converging core-electron contributions requires higher cutoff energies: a 500 eV cutoff significantly underestimates energy transferred to core electrons during close collisions such as the one occurring at around 37 Å in Fig. <ref>, resulting in an 18% underestimation of the ultimate stopping power.
These findings illustrate the need for careful scrutiny of core-electron contributions during convergence testing of electronic stopping calculations.
Details of the pseudopotentials naturally influence the convergence behavior of core excitations.
For instance, under the harder, norm-conserving pseudopotentials used within Qb@ll, stopping powers computed with 750 eV and 1000 eV cutoff energies differ by 3%, somewhat more than in the VASP tests described above.
Therefore, in Qb@ll calculations we used a higher cutoff of 1000 eV to obtain results converged within 1% relative to a 1250 eV cutoff.
Next, we find that reciprocal-space sampling has a negligible effect on the accuracy of our results.
For a 108-atom aluminum supercell, the difference between stopping power computed with the Γ point only and with a Γ-centered 2× 2× 2 k-point grid is only 0.2%.
The influence of k-point sampling should be even more minuscule for the 256-atom supercell used in production calculations.
Finally, we considered convergence with respect to numerical time step.
For protons with at least 1.5 at. u. of velocity, we scaled the time step inversely with velocity such that the projectile traversed about 0.02 Å in each step.
For slower protons, smaller steps of 0.3 – 0.4 as were needed to adequately resolve the electron dynamics.
These choices produced stopping powers converged within 1.6% or less relative to tests where the time step was halved.
§ PSEUDOPOTENTIALS
Pseudopotentials are widely known to influence stopping power calculations through their inclusion or exclusion of core excitations depending on the number of valence electrons explicitly modeled.
We exploit this fact in the main text to isolate stopping power contributions from core and free electrons, allowing a cost-reducing scheme for converging finite-size effects.
However, we also find a more subtle dependence on the type and details of the pseudopotentials even with a fixed number of valence electrons.
Fig. <ref> compares results computed using PAW <cit.> within VASP <cit.> and norm-conserving pseudopotentials <cit.> within Qbox/Qb@ll <cit.>.
In addition to aluminum pseudopotentials with 3 or 11 valence electrons per atom, we also tested two hydrogen pseudopotentials within the VASP calculations: the standard one among those distributed with the package (labeled “H") and a harder one (labeled “H_h").
The Al and H pseudopotentials used in the Qb@ll calculation both have smaller core radii than the VASP pseudopotentials, though the augmentation charge within the PAW method complicates the comparison.
The plane-wave cutoff energy and time step were separately converged for the Qb@ll calculation, where the harder pseudopotentials required a higher cutoff of 1000 eV, and a time step of about 1 as sufficed.
Other parameters, including supercell size, projectile trajectory, exchange-correlation functional, and Fermi smearing remained consistent across the VASP and Qb@ll calculations so as to isolate the influence of the pseudopotentials.
We find significant differences among stopping powers computed with the different pseudopotentials even when the number of valence electrons is held fixed.
Within VASP, the standard, softer H pseudopotential leads to stopping powers 0.3 (0.8) eV/Å or 8% (13%) below those computed with the harder H pseudopotential when explicitly treating 3 (11) electrons per Al atom.
The even harder norm-conserving pseudopotentials used in the Qb@ll calculation further increase the stopping power by 0.6 eV/Å or 10% relative to the hard VASP calculation.
Dynamics during close collisions (e.g., at around 11 and 37 Å in Fig. <ref>) contribute significantly to these discrepancies, as softer pseudopotentials tend to underestimate energy transferred during these events.
These findings indicate the need for future work to further characterize the influence of the pseudopotential approximation on stopping power calculations and develop computationally efficient approaches for mitigating this relatively large source of uncertainty in computed results.
§ EXCHANGE AND CORRELATION
As with all DFT-based methods, the choice of exchange-correlation (XC) functional remains a central approximation in our work with few avenues for improving accuracy.
We assessed the sensitivity of our results to the XC functional by comparing adiabatic LDA <cit.>, PBE <cit.>, and SCAN <cit.> for the case of an off-channeling proton moving through aluminum with 4 at. u. of velocity.
When using the same pseudopotentials and otherwise identical parameters, stopping powers computed with these XC functionals differ among each other by 0.5% or less.
Notably, these variations are much smaller than an earlier report of up to 18% differences between stopping powers computed with LDA and PBE for SiC <cit.>, a semiconductor that can be expected to exhibit higher sensitivity to XC effects.
We conclude that for metals, the influence of the XC functional appears very minor compared to other factors typically limiting accuracy of stopping power calculations, namely finite-size effects, trajectory dependence, and the pseudopotential approximation.
However, we note that non-adiabatic XC effects on electronic stopping power remain unexplored.
entry_id: http://arxiv.org/abs/2307.00817v1
published: 20230703075714
title: β-decay half-lives as an indicator of shape-phase transition in neutron-rich Zr isotopes with particle-vibration coupling effect
authors: Kenichi Yoshida, Yifei Niu, Futoshi Minato
primary_category: nucl-th
categories: nucl-th, nucl-ex
Kenichi Yoshida (E-mail: kyoshida@rcnp.osaka-u.ac.jp)
Research Center for Nuclear Physics, Osaka University, Ibaraki, Osaka 567-0047, Japan
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan
Yifei Niu (E-mail: niuyf@lzu.edu.cn)
MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, China
School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China
Futoshi Minato (E-mail: minato.futoshi@phys.kyushu-u.ac.jp)
Department of Physics, Kyushu University, Fukuoka 819-0395, Japan
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan
Background
β-decay half-life is sensitive to the shell structure near the Fermi levels.
Nuclear deformation thus impacts the β-decay properties.
Purpose
A first-order shape-phase transition in neutron-rich Zr isotopes is predicted by some models.
We investigate the β-decay half-lives of neutron-rich nuclei around ^110Zr, where the shape-phase transition is predicted to occur, to see if the β-decay half-life can be an indicator of the shape changes.
Method
The proton–neutron quasiparticle random-phase approximation (RPA) is adopted to calculate the Gamow–Teller transitions.
In addition, we apply the quasiparticle phonon-vibrational coupling (PVC) to consider the phonon couplings.
Results
The spherical and oblate configurations
give similar half-lives but shorter ones than the prolate configuration at the RPA level.
The PVC effect further reduces the half-lives in general, but the effect is smaller for the deformed configuration than that for the spherical one.
As a result, it makes the shape change from the oblate configuration to the spherical configuration visible.
Therefore, a sudden shortening of β-decay half-lives is always found at the nuclear shape changes.
Conclusions
β-decay half-life is an indicator of the shape-phase transition.
The shape mixing and the roles of the triaxial deformation are subject to study in the future.
β-decay half-lives as an indicator of shape-phase transition in neutron-rich Zr isotopes with particle-vibration coupling effect
Kenichi Yoshida, Yifei Niu, and Futoshi Minato
August 1, 2023
§ INTRODUCTION
The physics of exotic nuclei has been one of the major subjects in the field of nuclear science with the upgrading and constructing of the radioactive-ion (RI) beam accelerator facilities around the world.
Recent progress in the development of experimental techniques for spectroscopic studies has unveiled the nuclear structure of exotic nuclei <cit.>, and
much interest has been attracted to how the shape of a nucleus changes as a function of the numbers of neutrons and protons.
Empirical observables revealing the evolution of nuclear shape are
the excitation energies of the 2^+_1 and 4^+_1 states and their ratio together with the E2 transition strengths.
To explore the evolution of nuclear shells and deformations,
the SEASTAR project <cit.> has been undertaken at RIKEN RIBF, aiming at
a systematic search for new 2^+ energies in the wide range of neutron-rich nuclei.
Besides that, the two-neutron separation energies, the monopole transition strengths, and the isotope shifts also reflect the structural changes of neutron-rich nuclei <cit.>.
Nuclear deformation also has a substantial impact on
the high-frequency excitation modes,
such as in the photoabsorption cross-sections <cit.>.
The Zr isotopes with A ≃ 100 have been of
theoretical and experimental interest in nuclear structure as a region of competition
between various coexisting prolate, oblate, and spherical nuclear shapes <cit.>.
The first-order phase transition occurs uniquely in this region, while we usually see a gradual change of deformation with
an increase in the neutron/proton number in other regions such as in the rare-earth nuclei.
The mean-field calculations rooted in nuclear density-functional theory (DFT) <cit.>,
the macroscopic-microscopic calculation <cit.> as well as the recent shell model calculation <cit.> describe well the sudden change from the spherical to the deformed shape at ^100Zr.
The deformed region has been confirmed up to ^110Zr by
observing a low E(2_1^+) value and the R_4/2 value being greater than three <cit.>.
Furthermore, the calculations <cit.> predict the shape transition from the deformed to the spherical configuration around N=74.
The β-decay half-life is one of the most experimentally accessible physical quantities for RI beam facilities and plays a decisive role in determining the time scale of the r-process nucleosynthesis <cit.>.
Observed short half-lives around A ≃ 110 region speed up the r-matter flow <cit.>.
There have been a considerable amount of works on the roles of the nuclear deformation on the Gamow–Teller (GT) strength distributions <cit.>.
Then, it has been found that nuclear deformation plays an important role in the β-decay half-lives.
These works are, however, based on the random-phase approximation (RPA) that considers coherent one-particle-one-hole excitations.
To understand nuclear excitations quantitatively, one sometimes needs to take into account effects beyond RPA, namely higher-order configurations such as phonon-coupling effects and coherent two-particle-two-hole excitations.
Important roles of these beyond-RPA effects have also been recognized for the GT strengths, which have a crucial influence on β-decays.
The PVC effect is essential for reproducing the width of GT resonances <cit.> and improving the β-decay half-lives <cit.>.
We propose in this work the β-decay half-lives as an indicator of the shape-phase transition, which may also have an impact on the r-process.
We will demonstrate it within the Skyrme Hartree–Fock–Bogoliubov (HFB) approach and proton–neutron quasiparticle-random-phase approximation (pnQRPA) under the condition of an axially-deformed shape.
We also discuss that one can confirm the shape-phase transition of neutron-rich Zr isotopes from the β-decay half-lives even in the presence of the phonon-coupling effect within the quasiparticle-vibration coupling (QPVC).
The paper is organized as follows.
In Sec. <ref>, we briefly explain the models for evaluating the β-decay half-lives.
In Sec. <ref>, we show the results and discuss the roles of nuclear deformation and the effects of phonon coupling.
Section <ref> summarizes the paper.
§ NUCLEAR ENERGY-DENSITY FUNCTIONAL METHOD FOR Β-DECAY PROPERTIES
§.§ Skyrme Hartree–Fock–Bogoliubov approach for nuclear deformation
In the framework of the nuclear energy-density functional (EDF) method we employ,
the ground state of a mother nucleus is described by solving the
HFB equation <cit.>.
The single-particle and pair Hamiltonians
are given by the functional derivative of the EDF with respect to the particle density and the pair density, respectively.
An explicit expression of the Hamiltonians is found in the Appendix of Ref. <cit.>.
The average particle number is fixed at the desired value by adjusting the chemical potential.
Assuming the system is axially symmetric,
the HFB equation is block diagonalized
according to the quantum number Ω, the z-component of the angular momentum.
§.§ Proton–neutron quasiparticle random-phase approximation
Since the details of the formalism can be found in Refs. <cit.>,
here we briefly recapitulate the basic equations relevant to the present study.
The excited states | f ⟩ in a daughter nucleus are described as
one-phonon excitations built on the ground state |RPA⟩ of the mother nucleus as
| f⟩ = Γ̂^†_f | RPA⟩,
Γ̂^†_f = ∑_αβ{
X_αβ^f â^†_α, nâ^†_β, p
-Y_αβ^f â_β, pâ_α, n},
where â^†_ n (â^†_ p) and â_ n (â_ p) are
the neutron (proton) quasiparticle (labeled by α and β) creation and annihilation operators
that are defined in terms of the solutions of the HFB equation
with the Bogoliubov transformation.
The phonon states, the amplitudes X^f, Y^f and the vibrational frequency ω_f,
are obtained in the pnQRPA with a cutoff at 60 MeV.
The residual interactions entering into the pnQRPA equation
are given by the EDF self-consistently
except for the J^2 term:
the J^2 term
in the EDF is neglected in the HFB calculation but included in the pnQRPA calculation.
§.§ Quasiparticle vibration coupling in spherical nuclei
The QPVC model includes correlations beyond the spherical pnQRPA model by taking into account the quasiparticle-phonon coupling.
The self-energy of pnQRPA states is obtained by considering the coupling of doorway states consisting of a two-quasiparticle excitation coupled to a collective vibration. The properties of these collective vibrations, i.e., phonons | nL⟩, are obtained by computing the QRPA response for states of natural parity
J^π =0^+, 1^-, 2^+, 3^-, 4^+, 5^-, and 6^+, where those phonons with energy less than 20 MeV and absorbing a fraction of the non-energy-weighted isoscalar or isovector sum rule (NEWSR) strength larger than 5% are included in the model space.
The self-energy of the pnQRPA state |f⟩ is given as
Σ_f(E) = ∑_αβα' β' W^↓_αβ,α' β'(E) X_αβ^f
X_α' β'^f
+ W^↓ *_αβ,α' β'(-E)
Y_αβ^f Y_α' β'^f,
where W^↓_αβ,α' β'(E) represents the spreading terms associated with the coupling of two-quasiparticle configurations with the doorway states,
and the detailed expressions are given in Ref. <cit.>; X^f and Y^f are the forward and backward pnQRPA amplitudes, respectively, as defined in the last subsection but for the spherical case. To calculate the β-decay half-lives,
we use Gaussian smearing to get the GT strength distribution,
S(E) = ∑_n 1/σ_n √(2π) e^-(E-E_n-Δ E_n)^2/2σ_n^2 B_n,
where σ_n = (Γ_n/2 + η) /√(2 ln 2), with Δ E_n = ReΣ_n(E) and Γ_n = -2 ImΣ_n(E), and B_n is the pnQRPA transition probability for state |n⟩. η is the averaging parameter in W^↓ introduced to avoid divergence, taken as 200 keV. The details of the QPVC formulas can be found in Refs. <cit.>.
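As an illustration, Eq. (<ref>) can be evaluated on an energy grid with the short sketch below; the array names are ours, and all energies, shifts, and widths are assumed to be given in MeV.

import numpy as np

def smeared_strength(E_grid, E_n, B_n, dE_n, Gamma_n, eta=0.2):
    # E_n, B_n: pnQRPA energies and GT transition probabilities of the states |n>.
    # dE_n = Re Sigma_n(E), Gamma_n = -2 Im Sigma_n(E): QPVC energy shift and width.
    # eta: averaging parameter (0.2 MeV = 200 keV).
    sigma = (Gamma_n / 2.0 + eta) / np.sqrt(2.0 * np.log(2.0))
    gauss = np.exp(-(E_grid[:, None] - E_n - dE_n) ** 2 / (2.0 * sigma ** 2)) \
            / (sigma * np.sqrt(2.0 * np.pi))
    return gauss @ B_n  # S(E) on E_grid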
§.§ Calculation of the β-decay half-lives
The β-decay half-life T_1/2 can be calculated with the
Fermi's golden rule as <cit.>,
1/T_1/2 = λ_β/log 2
=(g_A/g_V)^2_eff/D ∑_{E_f^* < Q_β} f(Z, Q_β-E_f^*)
| ⟨ f | F̂ | RPA⟩ |^2,
where D=6147.0 s and we set (g_A/g_V)_eff=1 rather than its actual
value of 1.26 to account for the quenching of the spin matrix elements in nuclei.
The transition matrix element
for the GT operator ⟨ f|F̂| RPA⟩ is evaluated
by the quasi-boson approximation as
⟨ f|F̂| RPA⟩≃⟨ 0|[Γ̂_f,F̂]|0⟩,
where |0⟩ denotes the HFB ground state.
The Fermi integral f(Z,Q_β-E_f^*) in Eq. (<ref>)
including screening and finite-size effects is given by
f(Z,W_0) = ∫_1^W_0 p W(W_0 - W)^2 λ (Z,W) dW,
with
λ(Z,W)=2(1+γ)(2pR)^-2(1-γ)e^πν|Γ(γ + iν)/Γ(2γ +1)|^2,
where γ=√(1-(α Z)^2), ν=α ZW/p, α is the fine structure
constant, R is the nuclear radius.
W is the total energy of β particle, W_0 is the total energy available in m_e c^2 units,
and p=√(W^2-1) is the momentum in m_e c units <cit.>.
Here, the energy released in the transition from the ground state of the target nucleus
to an excited state in the daughter nucleus is given approximately by <cit.>
Q_β - E_f^* ≃λ_ν - λ_π + Δ M_n-H - ω_f.
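To make the evaluation of these expressions concrete, a schematic numerical implementation is sketched below; the unit conventions (energies in MeV converted to m_e c^2 units, the nuclear radius R expressed in units of ħ/m_e c so that 2pR is dimensionless) and the use of the daughter charge for Z are our assumptions rather than prescriptions from the text.

import numpy as np
from scipy.special import loggamma
from scipy.integrate import quad

ALPHA = 1.0 / 137.036  # fine-structure constant
MEC2 = 0.510999        # electron rest energy in MeV

def fermi_lambda(Z, W, R):
    # lambda(Z, W) with W in m_e c^2 units, p in m_e c units, R in hbar/(m_e c) units;
    # the gamma-function ratio is evaluated in log space to avoid overflow near W -> 1.
    g = np.sqrt(1.0 - (ALPHA * Z) ** 2)
    p = np.sqrt(W ** 2 - 1.0)
    nu = ALPHA * Z * W / p
    log_ratio = 2.0 * (loggamma(g + 1j * nu).real - loggamma(2.0 * g + 1.0).real)
    return 2.0 * (1.0 + g) * (2.0 * p * R) ** (-2.0 * (1.0 - g)) * np.exp(np.pi * nu + log_ratio)

def fermi_integral(Z, W0, R):
    # f(Z, W0) = int_1^W0 p W (W0 - W)^2 lambda(Z, W) dW
    integrand = lambda W: np.sqrt(W ** 2 - 1.0) * W * (W0 - W) ** 2 * fermi_lambda(Z, W, R)
    value, _ = quad(integrand, 1.0 + 1e-9, W0)
    return value

def half_life(Z, R, Q_minus_Ef, B_GT, D=6147.0, gA_over_gV=1.0):
    # 1/T_1/2 = (g_A/g_V)^2_eff / D * sum_f f(Z, Q_beta - E_f*) B_GT(f), restricted to
    # final states with Q_beta - E_f* > 0 (Q_minus_Ef in MeV, B_GT dimensionless).
    total = sum(fermi_integral(Z, dE / MEC2 + 1.0, R) * B
                for dE, B in zip(Q_minus_Ef, B_GT) if dE > 0.0)
    return D / (gA_over_gV ** 2 * total)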
§.§ EDF employed in the numerical calculations
We employ in the actual calculations a Skyrme-type
EDF for the particle-hole channel.
The SkM* functional <cit.> is mainly used for the present investigation, and the SLy4 functional <cit.> is used to supplement the discussion.
The pairing is considered by using the mixed-type
contact interaction
V_pp(r, r') = V_0[1 - ρ(r)/(2ρ_0)]δ(r - r')
with V_0=-225 MeV fm^3 and
-290 MeV fm^3 for the SkM* and SLy4 functionals, respectively, and ρ(r) and ρ_0 being the isoscalar density and
the saturation density 0.16 fm^-3.
The pairing strengths in the deformed HFB calculation here are determined to be consistent with the pairing energy in the spherical HFB calculation for QPVC, where the pairing strengths are adjusted to reproduce the experimental pairing gap of ^114Zr obtained from the three-point formula.
In the pnQRPA calculations,
we include the proton–neutron pairing interaction as
Eq. (<ref>) with the same strength.
§ RESULTS AND DISCUSSION
Figure <ref> shows the potential energy surfaces of
the neutron-rich Zr isotopes with N=72 – 78 calculated by
using the SkM* functional,
where the shape transition is predicted to occur with an increase in the neutron number <cit.>.
The prolate and oblate configurations compete in energy in ^112, 114Zr,
while the spherical and oblately deformed configurations compete in energy in ^116,118Zr.
We find a similar feature in the results obtained by employing the SLy4 functional,
where the spherical and oblate configurations compete in energy.
A standard probe of the shape change from the prolate to oblate deformations is
a sign change of the spectroscopic quadrupole moment of the 2^+_1 state.
However, it is challenging to measure the spectroscopic quadrupole moment for these neutron-rich nuclei <cit.>.
The β-decay half-life is a rather
experimentally accessible quantity even for very neutron-rich nuclei,
and
the calculated half-lives are shown in Fig. <ref>.
The observed half-lives up to N=72 are well reproduced by
the calculation using the SkM* functional with the prolate configuration.
We see that the half-lives calculated assuming the prolate shape shorten monotonically beyond N=72,
whereas a sudden drop occurs at N=74 when the nuclear shape
changes to the oblate deformation.
In the case of SLy4,
the results underestimate the measurements.
However, we see the half-lives for the oblate configuration
are shorter than for the prolate configuration as in the case of SkM*.
We discuss the mechanism for the shortening of the
half-lives due to the shape change
from the prolate to oblate deformations.
Figure <ref> shows
the distributions of the partial decay rate
associated with the GT transitions
in ^112,114Zr.
The GT states appear in low energies for the oblate configuration.
In ^114Zr, the GT strengths for the oblate configuration are
larger than those for the prolate configuration.
The GT strengths for the spherical configuration
appear relatively higher in energy but are much
larger than those for the deformed configuration,
leading to the half-lives being as short as
for the oblate configuration.
We show in Figs. <ref> and <ref> the results considering the
quasiparticle-phonon coupling for the spherical configuration denoted as QPVC.
Comparing with the results of half-lives for the spherical configuration,
those for QPVC are shortened.
This is because the GT states couple with other phonon states, shifting the strength distributions to lower energies.
We will discuss the mechanism in more detail later on.
It is considered that the PVC effect is weaker in deformed nuclei than
in spherical nuclei because the quadrupole correlation is mostly
described as a deformed mean field <cit.>.
Let us study the effect of PVC in the present case.
The low-lying phonon excitations are shown in Fig. <ref>.
The first-excited quadrupole state appears at 1.7 MeV with a strength of 6.4 × 10^3 fm^4 for the spherical configuration,
while we see the K=2 state located around 0.6 MeV with a strength of 1.3 × 10^3 fm^4 for the prolate configuration.
For the oblate configuration, the strength is ∼ 6.0 × 10^3 fm^4, which is
much larger than that for the prolate configuration because these nuclei show
softness against the triaxial deformation <cit.>.
We also show in the right panel of Fig. <ref> the octupole excitations, the major states coupling with the GT states.
The calculated lowest-lying octupole state appears at 1.1 MeV both in the spherical
and oblate configurations, while the strengths for the oblate and prolate configurations
are more than one order of magnitude smaller than the case of the spherical configuration.
Since ^114Zr has different behaviors for spherical and deformed shapes with respect to the quadrupole and octupole strengths, we need to investigate the interweaving roles of quadrupole and octupole phonons in PVC.
To distinguish the role of quadrupole and octupole phonons in PVC for ^114Zr, we consider a simple model.
In this simple model, we do not include the pairing correlations, as well as the momentum-dependent interactions in the PVC vertex calculation such that the transition densities of phonons could be used directly in the PVC vertex calculation <cit.>.
With this approximation, we could estimate the PVC effect for deformed configurations by using the phonon energies of deformed configurations and rescaling the transition densities from the spherical configuration to the deformed one to adjust the transition strength. The corresponding results are shown in Tab. <ref>.
From the spherical to oblate configurations,
the lowest quadrupole-phonon energy is shifted downwards, and the strength increases,
which should give a stronger PVC effect for the oblate configuration.
This is confirmed in the simple model by including only the lowest quadrupole phonon in the PVC calculation,
which gives 3.9 ms for the spherical configuration and 2.6 ms for the oblate configuration, compared with 6.4 ms in the RPA calculation.
However,
the lowest octupole phonon energy is nearly the same, but the strength is reduced by more than one order of magnitude, which would give a smaller PVC effect for the oblate configuration.
This is confirmed by including only the lowest octupole phonon in the PVC calculation,
which gives 3.9 ms for the spherical configuration and 5.4 ms for the oblate configuration.
From the spherical to oblate configurations, the quadrupole and octupole phonons play different roles.
Then we further include both phonons, and obtain 0.9 ms for the spherical configuration and 2.4 ms for the oblate configuration.
It is clear to see that the PVC effect is much smaller for oblate configuration than that for the spherical configuration.
As for the prolate configuration, the energy and strength of the lowest octupole phonon are similar to those in the oblate configuration (Fig. <ref>(b)),
while for the lowest quadrupole phonon the energy of the prolate shape is higher than that of the oblate shape and the strength is smaller (Fig. <ref>(a)).
Thus, one can expect the PVC effect for the prolate configuration to be smaller than that for the oblate configuration.
Therefore, after considering the PVC effect, the sudden change of half-lives from the prolate configuration to the spherical configuration remains, and the sudden change from the oblate to spherical configurations will also appear, which is not seen at the RPA level.
For the SkM* functional, the shape change from the prolate to oblate configurations is observed at N=74, and the half-life is shortened already at the RPA level.
With the further inclusion of the PVC effect, the change in half-life will be more apparent.
For the SLy4 functional, the shape changes from oblate to spherical, and no significant shortening is observed in half-life at the RPA level,
but with the further inclusion of the PVC effect, the sudden shortening of half-life will also manifest around N=74.
We have ignored triaxial deformation in the present study.
Looking at the PES in the two dimensions of β and γ
in Ref. <cit.>,
some nuclei are soft against triaxial deformation.
The β-decay half-lives need to be investigated again after considering the triaxial degree of freedom as well as the shape-mixing effect.
§ SUMMARY
We have investigated the β-decay half-lives in the Zr isotopes with shape changes.
The GT strength distributions were evaluated in the proton–neutron QRPA and QPVC approaches.
The spherical and oblate configurations give similar half-lives, and the half-life of the oblate configuration is shorter than that of the prolate one at the RPA level.
The PVC effect can further reduce the half-lives; however, the effect is smaller for a deformed configuration than for a spherical one.
When a sudden drop of half-lives around N=74 is observed experimentally,
it is an indication of the shape transition.
However, the present model does not take into account the triaxial shape and shape mixing.
Considering these effects remains a challenge for future work.
This work was supported by the National Key Research and Development (R&D) Program of China (Grant No. 2021YFA1601500), JSPS KAKENHI (Grants No. JP19K03824 and No. JP19K03872) and the JSPS/NRF/NSFC A3 Foresight Program “Nuclear Physics in the 21st Century”, as well as the National Natural Science Foundation of China (Grant No. 12075104).
The numerical calculations were performed on the computing facilities
at the Yukawa Institute for Theoretical Physics, Kyoto University,
and at the Research Center for Nuclear Physics, Osaka University.
§ REFERENCES

[nak17] T. Nakamura, H. Sakurai, and H. Watanabe, Exotic nuclei explored at in-flight separators, Prog. Part. Nucl. Phys. 97, 53 (2017).
[SEASTAR] Shell Evolution And Search for Two-plus energies At RIBF: SEASTAR project, https://www.nishina.riken.jp/collaboration/SUNFLOWER/experiment/seastar/index.php
[cam02] P. Campbell, H. L. Thayer, J. Billowes, P. Dendooven, K. T. Flanagan, D. H. Forest, J. A. R. Griffith, J. Huikari, A. Jokinen, R. Moore, A. Nieminen, G. Tungate, S. Zemlyanoi, and J. Äystö, Laser Spectroscopy of Cooled Zirconium Fission Fragments, Phys. Rev. Lett. 89, 082501 (2002).
[Yoshida:2010zu] K. Yoshida and T. Nakatsukasa, Dipole responses in Nd and Sm isotopes with shape transitions, Phys. Rev. C 83, 021304 (2011), arXiv:1008.1520 [nucl-th].
[Oishi:2015lph] T. Oishi, M. Kortelainen, and N. Hinohara, Finite amplitude method applied to the giant dipole resonance in heavy rare-earth nuclei, Phys. Rev. C 93, 034329 (2016), arXiv:1512.09146 [nucl-th].
[lal99] G. A. Lalazissis, S. Raman, and P. Ring, Ground-State Properties of Even-Even Nuclei in the Relativistic Mean-Field Theory, Atom. Data Nucl. Data Tabl. 71, 1 (1999).
[sto03] M. V. Stoitsov, J. Dobaczewski, W. Nazarewicz, S. Pittel, and D. J. Dean, Systematic study of deformed nuclei at the drip lines and beyond, Phys. Rev. C 68, 054312 (2003), arXiv:nucl-th/0307049.
[mol95] P. Moller, J. R. Nix, W. D. Myers, and W. J. Swiatecki, Nuclear ground state masses and deformations, Atom. Data Nucl. Data Tabl. 59, 185 (1995), arXiv:nucl-th/9308022.
[tog16] T. Togashi, Y. Tsunoda, T. Otsuka, and N. Shimizu, Quantum Phase Transition in the Shape of Zr isotopes, Phys. Rev. Lett. 117, 172502 (2016).
[pau17] N. Paul et al., Are There Signatures of Harmonic Oscillator Shells Far from Stability? First Spectroscopy of ^110Zr, Phys. Rev. Lett. 118, 032501 (2017).
[bla05] A. Blazkiewicz, V. E. Oberacker, A. S. Umar, and M. Stoitsov, Coordinate space HFB calculations for the zirconium isotope chain up to the two-neutron dripline, Phys. Rev. C 71, 054321 (2005), arXiv:nucl-th/0502063.
[mol16] P. Moller, A. Sierk, T. Ichikawa, and H. Sagawa, Nuclear ground-state masses and deformations: FRDM(2012), At. Data Nucl. Data Tab. 109-110, 1 (2016).
[Nishimura:2012mh] S. Nishimura, Beta-gamma spectroscopy at RIBF, Prog. Theor. Exp. Phys. 2012, 03C006 (2012).
[nis11] S. Nishimura et al., β-Decay Half-Lives of Very Neutron-Rich Kr to Tc Isotopes on the Boundary of the r-Process Path: An Indication of Fast r-Matter Flow, Phys. Rev. Lett. 106, 052502 (2011).
[Sarriguren:1999vj] P. Sarriguren, E. Moya de Guerra, and A. Escuderos, Shapes and beta decay in proton rich Ge, Se, Kr and Sr isotopes, Nucl. Phys. A 658, 13 (1999), arXiv:nucl-th/9907085.
[Sarriguren:2005nk] P. Sarriguren, O. Moreno, R. Alvarez-Rodriguez, and E. Moya de Guerra, Nuclear shape dependence of Gamow-Teller distributions in neutron-deficient Pb isotopes, Phys. Rev. C 72, 054317 (2005), arXiv:nucl-th/0510040.
[Sarriguren:2010bi] P. Sarriguren and J. Pereira, Beta-decay properties of Zr and Mo neutron-rich isotopes, Phys. Rev. C 81, 064314 (2010), arXiv:1006.1478 [nucl-th].
[Sarriguren:2014oba] P. Sarriguren, A. Algora, and J. Pereira, Gamow-Teller response in deformed even and odd neutron-rich Zr and Mo isotopes, Phys. Rev. C 89, 034311 (2014), arXiv:1403.1049 [nucl-th].
[Sarriguren:2015lga] P. Sarriguren, β-decay properties of neutron-rich Ge, Se, Kr, Sr, Ru, and Pd isotopes from deformed quasiparticle random-phase approximation, Phys. Rev. C 91, 044304 (2015), arXiv:1504.01640 [nucl-th].
[Ha:2014fra] E. Ha and M.-K. Cheoun, Gamow–Teller strength distributions in ^76Ge, ^76,82Se, and ^90,92Zr by the deformed proton–neutron QRPA, Nucl. Phys. A 934, 73 (2014).
[Ha:2016dqb] E. Ha and M.-K. Cheoun, Effects of deformation and neutron-proton pairing on the Gamow-Teller transitions for ^24,26Mg in a deformed quasiparticle random-phase approximation, Phys. Rev. C 94, 054320 (2016).
[Ha:2017pek] E. Ha and M.-K. Cheoun, A study of Gamow-Teller transitions for N=Z nuclei, ^24Mg, ^28Si, and ^32S, by a deformed QRPA, Eur. Phys. J. A 53, 26 (2017).
[Niu2014] Y. F. Niu, G. Colò, and E. Vigezzi, Gamow-Teller response and its spreading mechanism in doubly magic nuclei, Phys. Rev. C 90, 054328 (2014).
[Niu2016] Y. F. Niu, G. Colò, E. Vigezzi, C. L. Bai, and H. Sagawa, Quasiparticle random-phase approximation with quasiparticle-vibration coupling: Application to the Gamow-Teller response of the superfluid nucleus ^120Sn, Phys. Rev. C 94, 064328 (2016).
[Robin2019] C. Robin and E. Litvinova, Time-reversed particle-vibration loops and nuclear Gamow-Teller response, Phys. Rev. Lett. 123, 202501 (2019).
[Niu2015] Y. F. Niu, Z. M. Niu, G. Colò, and E. Vigezzi, Particle-vibration coupling effect on the β decay of magic nuclei, Phys. Rev. Lett. 114, 142501 (2015).
[Niu2018] Y. Niu, Z. Niu, G. Colò, and E. Vigezzi, Interplay of quasiparticle-vibration coupling and pairing correlations on β-decay half-lives, Phys. Lett. B 780, 325 (2018).
[Litvinova2020] E. Litvinova, C. Robin, and H. Wibowo, Temperature dependence of nuclear spin-isospin response and beta decay in hot astrophysical environments, Phys. Lett. B 800, 135134 (2020).
[dob84] J. Dobaczewski, H. Flocard, and J. Treiner, Hartree-Fock-Bogolyubov description of nuclei near the neutron-drip line, Nucl. Phys. A 422, 103 (1984).
[kas21] H. Kasuya and K. Yoshida, Hartree–Fock–Bogoliubov theory for odd-mass nuclei with a time-odd constraint and application to deformed halo nuclei, Prog. Theor. Exp. Phys. 2021, 013D01 (2021), arXiv:2005.03276 [nucl-th].
[yos13] K. Yoshida, Spin–isospin response of deformed neutron-rich nuclei in a self-consistent Skyrme energy-density-functional approach, Prog. Theor. Exp. Phys. 2013, 113D02 (2013), arXiv:1308.0424 [nucl-th].
[yos13e] K. Yoshida, Erratum: Spin–isospin response of deformed neutron-rich nuclei in a self-consistent Skyrme energy-density-functional approach, Prog. Theor. Exp. Phys. 2021, 019201 (2021).
[gov71] N. Gove and M. Martin, Log-f tables for beta decay, At. Data Nucl. Data Tab. 10, 205 (1971).
[eng99] J. Engel, M. Bender, J. Dobaczewski, W. Nazarewicz, and R. Surman, β decay rates of r-process waiting-point nuclei in a self-consistent approach, Phys. Rev. C 60, 014302 (1999).
[bartel82] J. Bartel, P. Quentin, M. Brack, C. Guet, and H.-B. Håkansson, Towards a better parametrisation of Skyrme-like effective forces: A critical study of the SkM force, Nucl. Phys. A 386, 79 (1982).
[Chabanat:1997un] E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer, A Skyrme parametrization from subnuclear to neutron star densities. 2. Nuclei far from stabilities, Nucl. Phys. A 635, 231 (1998); Erratum: Nucl. Phys. A 643, 441 (1998).
[gen03] L.-S. Geng, H. Toki, S. Sugimoto, and J. Meng, Relativistic mean field theory for deformed nuclei with the pairing correlations, Prog. Theor. Phys. 110, 921 (2003), arXiv:nucl-th/0306038.
[gar21] P. E. Garrett, M. Zielinska, and E. Clement, An experimental view on shape coexistence in nuclei, Prog. Part. Nucl. Phys. 103931 (2021).
[lor15] G. Lorusso et al., β-Decay Half-Lives of 110 Neutron-Rich Nuclei across the N=82 Shell Gap: Implications for the Mechanism and Universality of the Astrophysical r Process, Phys. Rev. Lett. 114, 192501 (2015).
[mol03] P. Möller, B. Pfeiffer, and K.-L. Kratz, New calculations of gross β-decay properties for astrophysical applications: Speeding-up the classical r process, Phys. Rev. C 67, 055802 (2003).
[nla.cat-vn1768774] V. G. Soloviev, Theory of Complex Nuclei, translated by P. Vogel, 1st ed. (Pergamon Press, Oxford; New York, 1976).
[Gogny_table] Hartree-Fock-Bogoliubov results based on the Gogny force, https://www-phynu.cea.fr/science_en_ligne/carte_potentiels_microscopiques/carte_potentiel_nucleaire_eng.htm
[Niu2012] Y. F. Niu, G. Colò, M. Brenna, P. F. Bortignon, and J. Meng, Gamow-Teller response within Skyrme random-phase approximation plus particle-vibration coupling, Phys. Rev. C 85, 034314 (2012).
Online Linear Regression Based on Weighted Average
Mohammad Abu-Shaira and Greg Speegle
Baylor University, Waco TX 76798, USA
mohammad_abu-shaira1@baylor.edu, greg_speegle@baylor.edu
August 1, 2023
==================================================
Machine Learning requires a large amount of training data in order to build
accurate models. Sometimes the data arrives over time, requiring significant
storage space and recalculating the model to account
for the new data. On-line learning addresses these issues by incrementally modifying
the model as data is encountered, and then discarding the data. In this study
we introduce a new online linear regression approach. Our approach combines newly arriving
data with a previously existing model to create a new model.
The introduced model, named OLR-WA (OnLine Regression with Weighted Average), uses user-defined weights to provide flexibility in the face of changing data, biasing the results in favor of old or new data.
We have conducted 2-D and 3-D experiments comparing OLR-WA to a static model using the entire data set.
The results show that for consistent data, OLR-WA and the static batch model perform similarly and for varying data,
the user can set the OLR-WA to adapt more quickly or to resist change.
§ INTRODUCTION
In Machine Learning, the conventional batch approach operates under the assumption that all data is accessible for every computation. This allows
many benefits such as repeatedly accessing the data and cross-validation by leaving portions of the data out.
Furthermore, the batch learning approach assumes <cit.>:
* the whole training set can be accessed to adjust the model;
* there are no time restrictions, meaning we have enough time to wait until the model is completely trained;
* the data distribution does not change; it is typically assumed to be independently and identically distributed (iid). After the model is calibrated, it can produce accurate results without the need for further adjustments.
However, these assumptions limit the applicability of the batch approach <cit.>. For example
consider the scenario of a machine learning model that is trained to predict stock prices. The model is initially trained on historical stock market data and is used to make predictions about future stock prices. However, as time passes, the stock market changes. For example, the economy can go through a recession or a period of high inflation, new companies can go public, and old companies can go bankrupt. These changes in the stock market mean that the initial training set used to train the model is no longer valid. In order to continue making accurate predictions, the model must be updated to adapt it to the new conditions. Another scenario is in streaming environments where predictions are required at any given moment during execution. In such cases, the batch model must be recreated each time a prediction is required <cit.>. For example, consider the scenario of a traffic management system in a smart city. In this case, the system needs to provide real-time predictions for traffic flow and congestion levels at different locations. As traffic conditions can change rapidly due to accidents, road closures, or unexpected events, the system must recreate its prediction model each time a new prediction is required. By continuously updating the model with the latest traffic data, the system can offer accurate and up-to-date information to drivers, allowing them to choose the most efficient routes and alleviate congestion in real-time.
By considering these restrictions, we come to the realization that the applicability of machine learning is greatly constrained. Many significant applications of learning methods in the past 50 years would have been impossible without relaxing these restrictions. On the other hand, online learning assumes <cit.>:
* only a portion of the data is available at any one time;
* the response should be timely;
* the data distribution can change over time.
This study introduces a novel online linear regression model, Online Regression with Weighted Average (OLR-WA), which is based on the weighted average of a base model that represents the already-seen data and an incremental model that represents the new data. OLR-WA eliminates the challenge of storing large amounts of data while providing an effective solution for large-scale problems. Additionally, it does not require any assumptions about the distribution of the data and can instead work in an adversarial scenario, making it adaptable to a wide range of situations where the data may not be independently and identically distributed.
This paper makes two significant contributions. First, the OLR-WA model performs comparably to a batch model over data consistent with the batch model expectations. Second, the OLR-WA model provides flexibility not found in the batch model and many other online models.
§ RELATED WORK
In this section, we review some of the work related to online learning and to linear regression.
§.§ Stochastic Gradient Descent (SGD)
Gradient descent is a commonly used optimization algorithm for training machine learning models. It is based on the idea of iteratively adjusting the model's parameters in the direction of the negative gradient of the loss function, thereby minimizing the loss. Stochastic gradient descent (SGD) is a variation of gradient descent that uses a random sample of the data, rather than the full dataset, to compute the gradient at each iteration. This makes SGD more computationally efficient. SGD is one of the most widely used
techniques for online optimization in machine learning <cit.>. Incremental algorithms, like stochastic gradient descent (SGD), have been found to be more effective on large data sets than batch algorithms, and are widely used <cit.>.
Mini-batch gradient descent combines the two approaches by performing an update for every
mini-batch of n training examples <cit.>.
SGD and mini-batch SGD are well suited to online learning and widely used in industry <cit.>. While SGD and mini-batch SGD can be used for online learning, they work under the assumption that all the observed data up to the present moment is consistently accessible. They process the data one sample or a small batch of samples at a time, updating the model parameters after each sample or batch. In addition, these methods suffer from several limitations <cit.>, such as sensitivity to the choice of the learning rate, which affects the convergence speed. A learning rate that is too small results in slow convergence, while a learning rate that is too large may cause the algorithm to oscillate or not converge at all. Additionally, SGD might get stuck in a local minimum, which can result in poor predictions.
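To make the mechanics concrete, the following is a minimal, illustrative sketch of mini-batch SGD for least-squares linear regression; the function and parameter names are ours, not taken from any of the works cited above, and setting batch_size=1 recovers plain SGD.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=50, batch_size=1, seed=0):
    """Mini-batch SGD for least-squares linear regression (batch_size=1 gives plain SGD)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])          # prepend a bias column
    w = np.zeros(d + 1)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            err = Xb[batch] @ w - y[batch]        # prediction error on the mini-batch
            grad = 2 * Xb[batch].T @ err / len(batch)
            w -= lr * grad                        # step against the gradient
    return w
```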
§.§ Linear Regression
Linear Regression <cit.> <cit.> is one of the most common and comprehensive statistical and machine learning algorithms. It is used to find the relationship between one or many independent variables and one dependent variable. It is a
mathematical approach used to perform predictive analysis and can
be used to determine causal relationships in some cases.
Regression may either be simple or multiple regression.
Simple linear regression studies the relationship between two continuous (quantitative) variables. One variable, denoted `x', is regarded as the predictor, explanatory, or independent variable and the other variable, denoted `y', is regarded as the response, outcome, or dependent variable <cit.>. The model equation is represented by
y= β_0 + β_1x + ϵ
Multivariate linear regression (MLR) is used to predict the result of an answer variable using a number of explanatory variables. The basic model for MLR is
y= β_0 + β_1x_1 + ⋯ + β_mx_m + ϵ.
The closed-form solution for the coefficient vector, usually obtained via the pseudo-inverse (normal equations), is <cit.>
β̂ = (X^TX)^-1X^T𝐲
where
β=
[ β_0; β_1; ⋮; β_m ], X= [ 1 x_11 x_12 … x_1m; 1 x_21 x_22 … x_2m; ⋮ ⋮ ⋮ ⋮ ⋮; 1 x_n1 x_n2 … x_nm ], y = [ y_1; y_2; ⋮; y_n ]
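As a reference point for the online variants discussed below, a small sketch of this batch (pseudo-inverse) fit is shown here; the helper name is ours and purely illustrative.

```python
import numpy as np

def batch_linear_regression(X, y):
    """Closed-form least-squares fit via the normal equations: beta = (X^T X)^{-1} X^T y."""
    n = X.shape[0]
    Xb = np.hstack([np.ones((n, 1)), X])   # column of ones for the intercept beta_0
    # np.linalg.pinv is more robust than an explicit inverse when X^T X is ill-conditioned
    return np.linalg.pinv(Xb.T @ Xb) @ Xb.T @ y
```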
§.§ On-line Linear Regression Models
In general, on-line learning models follow the framework in Algorithm <ref>, which aims to minimize the total loss incurred.
§.§.§ Widrow-Hoff
One of the on-line linear regression models is the Widrow-Hoff algorithm, also known as Least-Mean-Square (LMS) Algorithm. LMS combines stochastic gradient descent techniques with a linear regression objective function by considering data one point at a time.
For each data point, the algorithm makes successive corrections to the weight vector in the direction of the negative of
the gradient vector. This eventually leads to the minimum mean square error. The LMS update rule is:
w_t+1 = w_t - α (w_t . x_t - y_t) x_t
where α is the learning rate, and w_t is the weight vector of the current iteration.
This update rule is derived so that w_t+1 stays close to w_t, since w_t embodies all of the training examples seen so far. In our work, we call this confidence bias, which can be modeled within OLR-WA <cit.><cit.>.
The Widrow-Hoff algorithm is a commonly used adaptive algorithm due to its simplicity and good performance. However, the value of the learning rate parameter must be chosen carefully to ensure the algorithm converges. As an iterative algorithm, it can adapt to rapidly changing data environments, but its convergence speed may be slower than other algorithms.
Additionally, the LMS algorithm has a fixed step size for each iteration and may not perform well in situations where the input signal's statistics are not well understood or are bursty <cit.>.
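The update rule above translates directly into code; the one-line helper below is an illustrative sketch (the name lms_update is ours).

```python
import numpy as np

def lms_update(w, x, y, lr=0.01):
    """Single Widrow-Hoff (LMS) step: w <- w - lr * (w.x - y) * x."""
    return w - lr * (np.dot(w, x) - y) * x
```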
§.§.§ Online Support Vector Regression
Support Vector Regression (SVR) was introduced in the 1990s by Vladimir Vapnik and his colleagues (Drucker, Cortes, & Vapnik, 1996) <cit.> while working at AT&T Bell Labs. The detailed exploration of SVR can be found in Vapnik's book (Vapnik, 1999) <cit.>. Vapnik's SVR model distinguished itself from standard regression models by fitting a tube, commonly known as the ϵ-Insensitive Tube, instead of a line. This tube, with a width denoted as ϵ > 0, defines two sets of points: those falling inside the tube, which are considered ϵ-close to the predicted function and are not penalized, and those falling outside the tube, which are penalized based on their distance from the predicted function. This penalty mechanism bears similarity to the penalization used by Support Vector Machines (SVMs) in classification tasks. In addition to the tube-based approach, SVR incorporates a kernel function. This kernel function allows SVR to capture nonlinear relationships between the input features and the target variable. It provides the flexibility to choose appropriate transformations for diverse types of data and problem domains.
The online variant of SVR employs stochastic gradient descent (SGD), which applies the concept of updating the dual variables incrementally based on the deviation between the predicted and target values. The dual variables, typically represented by α, are optimization variables associated with each data point. These variables measure the importance or weights assigned to each data point in the training set. The values of the dual variables determine the influence of each data point on the final decision function. Data points with non-zero dual variables, referred to as support vectors, play a significant role in defining the decision boundary or regression surface.
Our approach fundamentally diverges from Online Support Vector Regression (Online SVR) in several key aspects. Firstly, Online SVR randomly selects one data point at a time from the entire pool of previously observed data using stochastic gradient descent. In contrast, our approach requires a minimum number of data points, forming a mini-batch, to construct a model. Moreover, our approach encompasses the ability to forget previously seen data points while preserving their associated metadata in the form of a weighted average generated model. This distinguishing feature of our approach proves advantageous, particularly in adversarial scenarios, as the weights can be tailored to favor user-specified criteria. By leveraging this weighted average model, our approach demonstrates flexibility and adaptability in such scenarios.
§.§.§ Recursive Least-Squares (RLS) Algorithm
The Recursive Least-Squares (RLS) algorithm is a type of adaptive filter algorithm that is used to estimate the parameters of an online linear regression model. It is a recursive algorithm that uses a least-squares criterion to minimize the error between the desired output and the estimated output of the system.
In the context of online linear regression, the RLS algorithm is used to estimate the coefficients of the linear model. The algorithm starts with an initial estimate of the coefficients and updates them in real time based on new data points. The algorithm uses a recursive update rule to adjust the coefficients based on the current data point and the previous estimates. The update rule minimizes the (exponentially weighted) least-squares error between the desired output and the estimated output.
The RLS algorithm uses a forgetting factor, which is a scalar value between 0 and 1, to balance the trade-off between the importance of the current data point and the importance of previous data points. A higher forgetting factor value gives more weight to the current data point, while a lower forgetting factor value gives more weight to previous data points. Similar weights are used in OLR-WA. In summary, the RLS algorithm is an efficient and robust algorithm that can estimate the parameters of online linear regression models in real time. It has fast convergence and good performance in terms of stability and robustness. However, it can be computationally expensive when the number of input variables is large <cit.>.
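For concreteness, a compact sketch of RLS with a forgetting factor is shown below; the class name and default hyperparameters are ours and are purely illustrative.

```python
import numpy as np

class RecursiveLeastSquares:
    """RLS with forgetting factor lam in (0, 1]; lam = 1 weights all past data equally."""
    def __init__(self, dim, lam=0.99, delta=1e3):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)   # estimate of the inverse correlation matrix
        self.lam = lam

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)          # gain vector
        self.w += k * (y - x @ self.w)        # correct by the a priori error
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w
```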
Our approach is fundamentally different from either LMS or RLS in that OLR-WA computes an incremental model based upon a collection of inputs and then integrates the incremental model into an existing base model representing all of the previous data.
This allows model integration instead of data integration, which means larger bursts of data points can be handled and we can emphasize either the previous model, the incremental model or neither.
While we intend to compare both performance and accuracy of OLR-WA with other techniques in the future, in this paper we validate OLR-WA by comparing it to linear regression with all data available.
§.§.§ Online Ridge Regression
Ridge regression is a regression technique that addresses the issue of overfitting in linear regression models by introducing a regularization term. Overfitting occurs when a model fits the training data too closely, resulting in poor performance on new, unseen data. Ridge regression adds a penalty term to the loss function during training to constrain the model's coefficients, thus reducing overfitting <cit.>. The cost function in ridge regression consists of two parts: the residual sum of squares (RSS) term (y - Xw)^T(y - Xw) that measures the discrepancy between the predicted and actual values, and the regularization term λ w^Tw that penalizes large coefficient values to prevent overfitting. The regularization term helps in controlling the complexity of the model and reducing the impact of irrelevant features <cit.>. By applying the stochastic gradient technique to ridge regression, we can derive a similar algorithm. In each round, the weight vector is updated with a quantity based on the prediction error (y_t - X_tw_t). The main idea behind Online Ridge Regression is to update the model's parameters in an incremental fashion while incorporating regularization. This is achieved by adapting the standard ridge regression algorithm to handle streaming data. The difference between regular ridge regression and online ridge regression lies in the way the data is processed and updated. In online ridge regression, the data is processed sequentially, updating the coefficient vector w after each observation. The cost function minimized by (online) ridge regression is:
J(w)=(y - Xw)^T(y - Xw)+ λ w^Tw
where J(w) represents the cost function, y represents the vector of observed or target values, X represents the matrix of predictor variables or features, w represents the coefficient vector or parameter vector, which contains the regression coefficients for each feature, λ represents the regularization parameter, which controls the trade-off between fitting the training data and preventing overfitting, (y - Xw)^T(y - Xw) represents the squared residual term, which measures the difference between the observed values y and the predicted values obtained by multiplying the predictor variables X with the coefficient vector w, and finally λ w^Tw represents the regularization term, which penalizes the magnitude of the coefficient vector w to prevent overfitting.
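A minimal sketch of one such incremental update, a single SGD step on the ridge objective above, is shown here; the function name and learning-rate default are ours.

```python
import numpy as np

def online_ridge_step(w, x, y, lr=0.01, lam=0.1):
    """One SGD step on the ridge objective (y - x.w)^2 + lam * ||w||^2."""
    err = x @ w - y
    grad = 2 * err * x + 2 * lam * w
    return w - lr * grad
```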
Our methodology takes a fundamentally different approach compared to Online Ridge Regression across several critical aspects. Firstly, whereas Online Ridge Regression utilizes stochastic gradient descent to randomly select individual data points from the entire pool of previously observed data, our approach requires a minimum number of data points to form a mini-batch for constructing the model. Furthermore, our method allows for the forgetting of previously encountered data points while retaining their associated metadata through the generation of a weighted average model. This distinctive feature of our approach offers clear advantages, particularly in adversarial scenarios, as the weights can be tailored to prioritize specific user-defined criteria. By leveraging this weighted average model, our approach demonstrates remarkable flexibility and adaptability in such challenging situations.
§.§.§ The Online Passive-Aggressive (PA) algorithms
The Passive-Aggressive (PA) algorithms are a family of algorithms for online learning. They are often used for classification tasks but can also be applied to regression. The algorithm updates the model's parameters in a way that minimizes the loss using the update rule below
w_t+1 = argmin_w ∈ℝ^n 1/2‖ w - w_t‖^2 s.t. l(w;(x_t,y_t)) = 0

The algorithm remains passive whenever the loss is zero, that is, w_t+1 = w_t; on rounds where the loss is positive, it aggressively forces w_t+1 to satisfy the constraint l(w_t+1;(x_t,y_t)) = 0.
In addition, the algorithm has two other variants, PA-I and PA-II, which add the terms Cξ and Cξ^2, respectively, to the objective. The aggressiveness parameter C is a positive parameter that controls the influence of the slack term ξ on the objective function. Larger values of C imply a more aggressive update step. In other words, the parameter C acts as a regularization parameter and determines how strongly the model is penalized for an incorrect prediction.
By repeating the training process for each training example, the Online Passive-Aggressive algorithm adapts its parameters to minimize the loss while considering the aggressiveness of the updates. The aggressiveness of the updates allows the algorithm to quickly adapt to new patterns in the data. Overall, the Online Passive-Aggressive algorithm is a useful tool for online learning tasks, including online linear regression. It can adapt to changing data streams and make updates to the model's parameters based on the aggressiveness determined by the training examples encountered.<cit.>
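The sketch below shows the standard regression variant of the PA update with the ε-insensitive loss, following the usual formulation by Crammer et al.; the function name and default values are ours.

```python
import numpy as np

def pa_regression_update(w, x, y, eps=0.1, C=1.0, variant="PA-I"):
    """Passive-Aggressive update for regression with the eps-insensitive loss."""
    loss = max(0.0, abs(np.dot(w, x) - y) - eps)
    if loss == 0.0:                      # passive: prediction already within eps of y
        return w
    sq_norm = np.dot(x, x)
    if variant == "PA":
        tau = loss / sq_norm
    elif variant == "PA-I":
        tau = min(C, loss / sq_norm)
    else:                                # PA-II
        tau = loss / (sq_norm + 1.0 / (2 * C))
    return w + np.sign(y - np.dot(w, x)) * tau * x
```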
The Passive-Aggressive online algorithm is an efficient approach for learning on the fly in scenarios involving a continuous stream of data with labeled documents arriving sequentially. An illustrative use case is monitoring the entire Twitter feed 24/7, where each individual tweet holds valuable insights for prediction purposes. Because it is impossible to store or retain all tweets in memory, the algorithm processes each tweet by promptly learning from it and subsequently discarding it. The PA online algorithm, similar in some respects to OLR-WA, possesses a mechanism for discarding data points. OLR-WA distinguishes itself from the PA algorithm, however, by incorporating weights that can favor either the base or the incremental model, whereas the PA algorithm exhibits a default behavior that leans towards favoring newly arriving data points.
§ OLR-WA METHODOLOGY
The OLR-WA algorithm creates a new linear regression model for each data sample. The sample model and the existing model are merged to form a new linear regression model, which can be used to make predictions until the next sample arrives. We compare OLR-WA to a batch linear regression approach, which waits until all of the data arrives to build a model. In addition to being able to make predictions sooner, the final result of OLR-WA is similar to that of the batch model.
§.§ Data Sets
In this study we use relatively small synthetic and real-world data sets. Each synthetic data set is drawn from either a two-dimensional or a three-dimensional data distribution. For our experiments, we consider three types of distributions. The first is where all of the data is from the same linear distribution with a low noise factor as variance.
The second is also a linear distribution, but the variance is higher. The third data set is a combination of two data sets, each with a different linear distribution. This is an adversarial situation representing a change in the data distribution.
Each experiment is represented by
the number of data points N, the variance Var, the correlation Cor (positive or negative), and the step size between data points Step; a sketch of such a generator is given after the figures below. The following figures show some distribution samples of the dataset with 2-D settings.
[Six figures omitted: sample 2-D synthetic data distributions.]
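Under the parameterization above (N, Var, Cor, Step), a data set of this kind can be generated with a few lines of code; the sketch below is our own illustrative generator, not the exact one used in the experiments.

```python
import numpy as np

def make_linear_data(n=200, var=1.0, cor="positive", step=0.5, seed=0):
    """Generate 2-D points y = slope*x + noise; cor flips the sign of the slope."""
    rng = np.random.default_rng(seed)
    x = np.arange(n) * step
    slope = 1.0 if cor == "positive" else -1.0
    y = slope * x + rng.normal(0.0, var, size=n)
    return x.reshape(-1, 1), y
```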
In addition to that, for the purpose of validating our model with real world data sets, we used two public data sets explained in Section <ref>.
§.§ Method
Our approach includes two parts: a base model and an incremental model. The base model is the initial set of data points used as a starting point for linear regression. There is no specific limit on the number of data points beyond the minimum necessary to create a model (e.g., 2 points for a 2-D model, 3 points for a 3-D model). The incremental model is a linear regression model that is created with each new set of data points. Again, there is no limit on the number of points used to create a model, but we use the same guidelines as for the base model (e.g., 2 points for a 2-D model, 3 points for a 3-D model). In our experiments, we allocate 10% of the data points to the base model, while the remaining points are added incrementally at a rate of 10 points per increment.
Once both models are created, we calculate their weighted average. For 2D, this means averaging two lines; for 3D, averaging two planes; and in higher dimensions, averaging hyperplanes.
The weighted average is computed by assigning user-defined weights to each model. This allows the user to modify the results based upon their knowledge of past and future data. If the incremental model is given higher weight, the model adapts to changing data more quickly. If the base model is given higher weight, the model is more resistant to transient changes.
By default, the weights are equal, where both w-base and w-inc are assigned a fixed static number. Later in the paper, we will elaborate on various options for weight settings.
Although the techniques for computing the weighted average of lines, planes, or spaces may differ, the basic equation used remains consistent.
V-Avg = (w-base·v-base + w-inc·v-inc)/(w-base + w-inc)
where w-base represents the weight we assign to the base model, w-inc represents the weight we assign to the incremental model,
v-base is the vector of the base model and v-inc is the vector of the incremental model.
The weights here are scalar values, but the models will differ based on the number of dimensions.
For example, in the 2D case, the base model is a line, so v-base will be computed using the equation
V = ⟨ x2-x1, y2-y1 ⟩
where (x1, y1) and (x2, y2) are the coordinates of the tail and head of the line respectively.
Similarly, in 3D, the normal vector of the plane 13x + 3y - 6z = 15 is V = ⟨ 13, 3, -6 ⟩. The same applies for v-inc.
V-Avg is the computed average vector, which we will use later, together with the intersection point of the two models, to construct the average line, plane, or space.
We can compute two weighted average vectors from the base model and the incremental model. Figure <ref> shows the idea in the two-dimensional plane, in which the linear regression models for the base and the increment are represented by lines. The formula generates two averages, labeled avg1 and avg2 in Figure <ref>. OLR-WA picks one of these two average vectors by detecting which one fits the data better. Since only the incremental data is available for evaluation, OLR-WA generates previous data from the base model. Using the incremental data and the generated data, the model selects the best fit. For example, in Figure <ref>, avg1 should be retained as the new base model, since it will have a lower mean square error. Algorithm <ref> shows the steps of OLR-WA.
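The 2-D case of this merge step can be sketched as follows; this is our own reading of the procedure (the variable names and the number of points sampled from the base model are ours), not the reference implementation.

```python
import numpy as np

def mse(slope, intercept, X, y):
    return np.mean((slope * X + intercept - y) ** 2)

def olr_wa_merge_2d(base, inc, X_inc, y_inc, w_base=1.0, w_inc=1.0, n_sample=20):
    """Merge a base line and an incremental line, each given as (slope, intercept),
    by a weighted average of their direction vectors through their intersection
    point, keeping whichever of the two candidate averages fits the data better."""
    (m_b, b_b), (m_i, b_i) = base, inc
    x0 = (b_i - b_b) / (m_b - m_i)       # intersection point (lines assumed non-parallel)
    y0 = m_b * x0 + b_b
    v_b = np.array([1.0, m_b]); v_b /= np.linalg.norm(v_b)   # unit direction of base line
    v_i = np.array([1.0, m_i]); v_i /= np.linalg.norm(v_i)   # unit direction of incremental line
    candidates = []
    for v2 in (v_i, -v_i):                                   # yields the two averages avg1, avg2
        v = (w_base * v_b + w_inc * v2) / (w_base + w_inc)
        if abs(v[0]) < 1e-12:                                # skip a degenerate (vertical) average
            continue
        m = v[1] / v[0]
        candidates.append((m, y0 - m * x0))                  # line through the intersection point
    # evaluate on the incremental data plus points sampled from the base model
    xs = np.linspace(X_inc.min(), X_inc.max(), n_sample)
    X_eval = np.concatenate([X_inc, xs])
    y_eval = np.concatenate([y_inc, m_b * xs + b_b])
    return min(candidates, key=lambda c: mse(c[0], c[1], X_eval, y_eval))
```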
We explore different techniques for assigning weights to data points, each with its own feasibility, correctness, and significance. Let's discuss each assignment technique along with short examples:
a) Time-based: <cit.>, this technique assigns weights to data points based on their age or recency. The weight of a data point decreases as it becomes older. This method acknowledges that recent data points are more likely to be relevant to the current situation and gives them more importance. For instance, in a stock market prediction system, recent stock prices might carry more weight in determining the future trend compared to older prices.
b) Confidence-based: <cit.>, this technique assigns weights based on the confidence or accuracy of the data point. Data points that are known to be more accurate or reliable are given higher weights. For example, in a sentiment analysis task, if certain labeled data points have been verified by experts or trusted sources, those points could be assigned higher weights due to their higher confidence level.
c) Fixed-based: <cit.>, our model here offers the flexibility for two variations. The first variation assigns fixed equal weights to all data points, treating them equally in terms of importance. This implies that every data point contributes equally to the model. For instance, in a weather prediction model, each recorded temperature measurement might have the same weight. In this variation, the weight of the base model, w-base, is updated after each iteration by adding the weight of the incremental model, w-inc. This update is necessary because the current generated model incorporates both the base data and the incremental data, reflecting their combined influence. The second variation of the fixed-based technique assigns fixed equal weights to both the base model and the incremental model. This signifies that the existing state of the base model holds equal weight to the new model generated based on the incremental data. This ensures a balanced integration of the existing and new information. For example, in a machine translation system, both the previously trained model and the additional training data have equal importance in generating accurate translations.
One of the most common techniques is called the Decay Factor, which is a value used to reduce the weight of older data points in an online learning algorithm. The decay factor is multiplied by the weight of each data point at each iteration, so older data points have a lower weight than newer data points. This approach gives more importance to recent data points, as they are more likely to be relevant to the current situation.<cit.> The decay factor is considered a time-based weighting scheme.
It is important to note that the choice of weighting scheme depends on the specific problem and dataset at hand, and it's important to experiment with different weighting schemes to find the one that works best for the specific use case. In the experiments section, we will demonstrate some techniques for adjusting weights to achieve the desired outcomes.
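As a rough illustration of how these schemes differ in bookkeeping between iterations, consider the sketch below; the scheme names and the decay interpretation are our own shorthand for the options described above, not fixed terminology.

```python
def update_model_weights(w_base, w_inc, scheme="fixed-equal-points", decay=0.9):
    """Return the (w_base, w_inc) pair to use at the next iteration.
    - "fixed-equal-points": every point counts equally, so the base model absorbs
       the incremental weight after each merge (w_base += w_inc).
    - "fixed-equal-models": the base and incremental models keep fixed equal weights.
    - "time-decay": the accumulated base weight is decayed each iteration so that
       newer data gradually dominates (one possible reading of a decay factor)."""
    if scheme == "fixed-equal-points":
        return w_base + w_inc, w_inc
    if scheme == "fixed-equal-models":
        return w_base, w_inc
    if scheme == "time-decay":
        return w_base * decay + w_inc, w_inc
    raise ValueError(f"unknown scheme: {scheme}")
```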
§.§ Time Complexity
The algorithm presented can be applied to any form of linear regression, but using the pseudo-inverse is a particularly common choice as it runs in polynomial time. We focus on this approach in our analysis of the algorithm. The algorithm executes one pseudo-inverse linear regression for the base model and T pseudo-inverse linear regressions for the incremental data; the pseudo-inverse linear regression equation, as stated earlier, is β̂ = (X^TX)^-1X^T𝐲.
Let X be an M by N matrix, where M is the number of samples and N the number of features. The matrix multiplications each require O(N^2M),
while multiplying a matrix by a vector is O(NM). Computing the inverse requires O(N^3) in order to compute the LU (or Cholesky) factorization.
Asymptotically, O(N^2M) dominates O(NM), so we can ignore that calculation. Since we are using the normal equation, we assume that M > N; otherwise the matrix X^TX would be singular (and hence non-invertible), which means that O(N^2M) asymptotically dominates O(N^3). Therefore, the total complexity of the pseudo-inverse is O(N^2M).
Our proposed algorithm uses a linear regression technique, specifically pseudo-inverse linear regression, to process the data incrementally. This allows smaller batches of data to be processed at a time, resulting in a lower data size per fit. Compared to the traditional batch version of pseudo-inverse linear regression, which has a time complexity of O(N^2M), our online model's time complexity is O(KN^2(M/K)), where K is the number of iterations. Thus the total time complexity of OLR-WA is about the same as for the batch model run once on all of the data.
§.§ Evaluation Metric
The coefficient of determination, usually denoted by R2 or r2, is the proportion of variation of one
variable (objective variable or response) explained by other variables (explanatory variables) in regression <cit.>. This is a widely used measure of the strength of the relationship in regression. It describes how well the model fits the data. An r^2 close to 1 implies an almost perfect relationship between the model and the data <cit.>. This coefficient is defined as <cit.>
r^2 = 1 - ∑_i=1^n(y_i - ŷ_i)^2 / ∑_i=1^n(y_i - y̅)^2 = 1 - SE_ŷ / SE_y̅
where ŷ_i denotes the value of the objective variable (y) predicted by the regression for the ith data point and y̅ denotes the mean of y. The second term of this expression is the residual sum of squares divided by the total sum of squares of y about its mean.
It is highly recommended to use the coefficient of determination as the standard metric for evaluating regression analyses in any scientific field because it is more informative and accurate than SMAPE, and does not have the interpretability limitations of other metrics such as MSE, RMSE, MAE, and MAPE <cit.>. The coefficient of determination is used as the evaluation metric for our model.
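For reference, the metric can be computed directly; the helper below is a straightforward sketch (analogous to scikit-learn's r2_score) and the name is ours.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```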
§ DISCUSSION
In this research, we carried out experiments utilizing both 2-dimensional and 3-dimensional datasets. We employed the versatility and management capabilities of our dataset generator to perform multiple experiments. We will now examine the model's behavior and implications.
§.§ 2-Dimensional
Figure <ref>
provides a comprehensive illustration of a 2D scenario. The green points represent the base model points, which are shown for visualization purposes only; in reality we do not retain them, and the only thing we maintain about those base points is the model itself, represented here by the green line. The blue points are the newly arriving (incremental) points, and the blue line represents the linear regression line for the incremental points. The yellow dashed line ending with a red star at its tail represents the first computed average line, and the gray dashed line ending with a red star at its tail represents the second computed average line.
In the 2D model, we define the average lines (the yellow and the gray in Figure <ref>) by the two norm vectors resulting from Equation <ref> and a point of intersection between the base and the incremental lines. It is worth noting that it is highly unlikely for the base and the incremental lines to have exactly the same slope, making them parallel and thus having no intersection point. In this scenario, the current algorithm simply ignores the case and updates itself on the next iteration. Although this is a rare occurrence, there are several solutions that give more accurate results, which we will consider in future work.
As illustrated earlier, one of those two lines will be selected and will represent our current model; the other will be discarded. Selection is based on the minimum mean square error (MSE) over the newly arriving data and some data sampled from the base model.
§.§ 3 Dimensional
Figure <ref>
provides a comprehensive illustration of a 3D scenario. The blue plane represents the base model, the brown plane represents the incremental model, the orange plane represents the first computed average plane, and the green plane represents the second computed average plane. The 3D case differs from the 2D case in its coefficients: in 2D the coefficients are the slope m_0 and the y-intercept b, while in 3D the coefficients are m_0, m_1, and the intercept b. In 3D we define the average planes (the orange and the green in Figure <ref>) by the two norm vectors resulting from Equation <ref> and a point of intersection between the base and the incremental planes. In this case, the intersection of two planes is a line, so we can use any point on the line. In general, the intersection point of two planes or spaces can be determined by solving the equations of both the base and incremental models, which will provide a point that is located within both planes or spaces.
§.§ Generalization to Higher Dimensions (N-Dimensional)
While OLR-WA is straightforward in 2-D and 3-D, the same procedure can be applied to higher dimensions. We only need to find a point in the intersection and the direction vector, and using those two elements we can compute the new hyperplane. Obtaining the point of intersection differs from one dimensionality to another. For example, in 2-D, finding the intersection of two lines requires solving two equations with two variables, which results in exactly one point. However, in 3-D, finding the intersecting line requires solving two equations with three variables, which produces an infinite number of solutions, namely the points on that line. In higher dimensions, the intersection of two hyperplanes in N-dimensional space is, in general, an (N-2)-dimensional affine subspace, obtained by solving two equations in N variables; a small sketch of finding one such point is given below. This can be done using methods like Gaussian elimination, matrix inversion, or software packages. The solution will be a point (x_1, x_2, ..., x_n) that satisfies both equations, representing the intersection of the two hyperplanes. Note that there can be multiple solutions, depending on the hyperplanes' orientation and position in the N-dimensional space.
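One minimal way to obtain such a point, under the assumption that each hyperplane is written in the form n·x = d, is the minimum-norm least-squares solution of the stacked 2×N system; the helper below is our own sketch, not part of the published algorithm.

```python
import numpy as np

def hyperplane_intersection_point(n1, d1, n2, d2):
    """Return one point satisfying n1.x = d1 and n2.x = d2, i.e., a point on the
    (N-2)-dimensional intersection, via the minimum-norm least-squares solution."""
    A = np.vstack([n1, n2])            # 2 x N system of the two hyperplane equations
    b = np.array([d1, d2], dtype=float)
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point
```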
§.§ Experiments
We conducted several experiments in 2D and 3D, using 10% of the total points for the base model and 10 points per iteration for the incremental model. The following subsections summarize these experiments and the outcomes obtained.
§.§.§ Experiment 1
In this experiment we generated 5 different datasets of 200 random, positively correlated points. The r^2 for the batch model ranged from 0.9152 (in 3D) to 0.9558 (also in 3D), while the r^2 for OLR-WA ranged from 0.8991 (in 2D) to 0.9517 (in 3D). Tables 1a and 1b show sample runs of the experiment.
§.§.§ Experiment 2
In this experiment, we again generated 5 different datasets of 200 random, positively correlated points. However, this time the first 100 data points were generated with one variance and the remaining 100 with a different variance. Not surprisingly, neither model was able to fit the data as accurately as in Experiment 1, with the r^2 for the batch model ranging from 0.8222 (in 3D) to 0.9190 (in 2D), while the OLR-WA r^2 ranges from 0.8107 (in 3D) to 0.8904 (in 2D). Tables 2a and 2b show sample runs of the experiment.
§.§.§ Experiment 3
The aim of this experiment is to expose OLR-WA, represented by Algorithm <ref>, to real-world datasets and validate its performance. We used two public datasets:
* <cit.> 1000 Companies dataset. The dataset includes sample data on the operating costs and profits of 1000 startup companies and is well formatted for building ML regression pipelines. It is used here to predict profit from R&D spend and marketing spend.
* <cit.> Math Student dataset. This dataset, from the UCI repository, contains the final scores of students at the end of a mathematics program, together with several features that may or may not impact the students' future outcomes. It is used here to predict secondary-school students' performance (final grade) from the first-period and second-period grades.
Tables <ref> and <ref> show the performance of OLR-WA versus the standard batch model on the two aforementioned datasets.
§.§.§ Experiment 4
In this experiment we show an adversarial scenario. As in Experiment 2, we generate 200 random data points; however, in this case the first 100 points are positively correlated while the next 100 points are negatively correlated. As expected, the batch model does not produce strong r^2 results (and it can be argued that this is the correct behaviour); see Figure <ref>. In the online approach, however, the user can supply weights to indicate their preference for the data. For example, older points can be given higher, lower or the same weight as newer points. We classify the weights as time-based, in which older points have lower weights, confidence-based, in which older points have higher weights, and fixed-based, in which the relative weights are established a priori.
Fixed-Based Weights
The default case, known as the fixed-based weights case, presents two options for the user's consideration. Firstly, all points are assumed to be treated equally, resulting in equal weights for each point. Secondly, both models are assigned fixed equal weights throughout the process. We will delve into these two cases in the following sections:
In the first case, assuming equal weights for all points, the base model begins with 40 points, denoted as w-base=40. Meanwhile, the incremental model processes 10 points during each iteration, denoted as w-inc=10. After each iteration, the weight of the base model increases as it accumulates the incremental points, given by the equation w-base += w-inc. Consequently, the base model progressively carries a higher weight than the incremental model. As a result, the model's regression plane aligns with the base model, as illustrated in Figure <ref>.
In the second case, the user assigns fixed equal weights to both models from the outset. Specifically, the user designates w-base = 1 and w-inc = 2. In this scenario, the regression plane gradually shifts towards the incremental model and aligns with it, as depicted in Figure <ref>.
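The two fixed-weight options can be summarized by the following toy sketch (our own illustration using the numbers quoted above; the candidate-averaging step itself is omitted):

```python
# Case 1: every point weighted equally -- the base model absorbs the weight
# of each processed incremental batch, so it gradually dominates the average.
w_base, w_inc = 40, 10          # initial base size and per-iteration batch size
for _ in range(5):              # five incremental batches
    # ... build incremental model, form the two average planes, select by MSE ...
    w_base += w_inc             # base weight grows: 50, 60, 70, ...

# Case 2: fixed relative weights chosen by the user and kept constant,
# e.g. the incremental model always counts twice as much as the base model.
w_base, w_inc = 1, 2
```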
Time-Based Weights
Figure <ref> shows the same distribution (but different data points), but this time the incremental model has a weight 20 times greater than the existing model. This represents a scenario in which the model trusts the data will continue with the new distribution. Note that the plane is a good fit for the incremental model.
Confidence-Based Weights
Figure <ref> again has the same data distribution, but different data points. In this case, the weight of the existing model is 20 times higher than the weight of the incremental model. This scenario entails placing greater trust in the initial data while assigning minimal weight to the incremental data in order to diminish its impact. Note that the plane is a good fit for the existing model.
Among the three weighting schemes, the time-based approach is the most dynamic. For example, a third increment with yet another distribution would cause the model to change to match the new data, while the fixed weights would change less and the confidence weights the least.
§ CONCLUSION AND FUTURE WORK
We have conducted 2D and 3D experiments using OLR-WA and a batch model. In general, OLR-WA performed comparably to the batch model. For example, in the 2D case with constant variance, the r^2 values for the batch model varied from 0.9251 to 0.9492 while the r^2 for OLR-WA varied from 0.8991 to 0.9349. Similar results hold for the experiments with shifting variance and for the 3D experiments. Furthermore, the time complexity of OLR-WA is on par with the batch version. Additionally, it provides flexibility that is not a standard part of the batch model by allowing dynamic adjustment of weights. In other words, the standard batch model lacks the flexibility of OLR-WA for handling adversarial scenarios. For instance, as illustrated in Figure <ref>, when incremental data undergoes a significant shift that calls for a new model, the r^2 of the standard batch model will be considerably low.
The work can be extended in many ways. First, while the model should adapt to any dimensionality, we have not tested the implementation beyond 3D. Second, while the time complexity of the model is on par with the batch model, we have not run performance tests to determine the impact of incremental evaluation, especially with respect to very large datasets. Third, the datasets were designed to find good fits; we need to compare the batch method with OLR-WA on datasets with significantly greater noise (much lower r^2 values). One issue specific to OLR-WA that can be addressed is the case where the two models do not intersect. One option is to use the point of closest approach between the two lines instead; another is to increase the incremental model size to resolve the problem of parallelism.
While OLR-WA shows promise when compared to the batch model, we also need to compare it to other incremental learning techniques, such as LMS and RLS. The comparisons need to consider data distributions, adversarial situations and large data sets (both in terms of the number of features and the number of points).
Finally, OLR-WA has potential for interesting extensions, specifically in the area of weight selection. Currently, weights (w-base, and w-inc) are chosen by the user, but it would be fascinating to give OLR-WA the ability to select weights based on both user preferences and/or observed data. By recording data such as the number of points, variance, and correlation of the base model and each incremental model, as well as the user's preferred weight classes (time, confidence, or fixed), OLR-WA can automatically select weights.
For instance, if the user favors time-based weights and the incremental model significantly deviates from the existing model, OLR-WA could automatically increase the weight of the incremental data, w-inc. Likewise, when presented with adversarial data, OLR-WA could automatically increase the size of the incremental batch, or consider many mini-batches, to allow more points to accumulate and determine whether the new data fits a different model or represents an outlier. If it is highly likely that the adversarial data constitutes a new, different model and the user favors time-based weights, OLR-WA would automatically increase w-inc in favor of the new model; similarly, if the user favors the base model to a certain extent, OLR-WA would automatically increase w-base, the weight of the base model accumulated up to that point in time.
Another example would be computing the weight of an incoming tweet or post based on the likes or the positive/negative comments it receives. Furthermore, introducing a forgetting factor can enhance this capability: when the model detects, over a series of incremental mini-batches, the emergence of a new model, it can be programmed to forget the old model and begin considering the new one. This approach increases the flexibility of the model and enables it to adapt to changing data distributions more effectively.
|
http://arxiv.org/abs/2307.02343v3
|
20230705145650
|
The cryogenic RWELL: a stable charge multiplier for dual-phase liquid-argon detectors
|
[
"A. Tesi",
"S. Leardini",
"L. Moleri",
"D. Gonzalez-Diaz",
"A. Jash",
"A. Breskin",
"S. Bressler"
] |
physics.ins-det
|
[
"physics.ins-det"
] |
The cryogenic RWELL: a stable charge multiplier for dual-phase liquid-argon detectors
A. Tesi, S. Leardini, L. Moleri, D. Gonzalez-Diaz, A. Jash, A. Breskin, S. Bressler
=====================================================================================
§ INTRODUCTION
In recent years, the Thick Gaseous Electron Multiplier (THGEM) <cit.> and its derivatives have become a leading technology in the field of particle and radiation detection. Thanks to their robustness and relatively low cost, these detectors have been proposed and are currently employed in numerous fields, as reviewed in <cit.>. Amongst its derivatives, several attempts were made to couple the THGEM to resistive materials in order to enhance the detector's immunity against electrical discharges. Closed, WELL-like configurations <cit.> were proposed with a single-sided THGEM structure attached to a resistive anode, decoupling the multiplier from the readout electrode. Amongst them is the Resistive WELL (RWELL) <cit.>, with a thin resistive-layer anode decoupled from the readout by an insulator; another is the Resistive Plate WELL (RPWELL) <cit.>, with the resistive-plate anode directly in contact with the readout electrode. Others include the Segmented RWELL (SRWELL) <cit.>, a variant of the RWELL featuring a metallic grid under the resistive film (for faster charge evacuation), which, together with the RPWELL, was proposed as a potential sampling element for Digital Hadronic Calorimetry (DHCAL) <cit.>.
All these THGEM and THGEM-like concepts were extensively investigated at room temperature. THGEM detectors (LEM, Large Electron Multipliers) were investigated very intensively in dual-phase liquid argon TPCs <cit.>, as potential candidates for the DUNE neutrino experiment <cit.>. Results of extensive systematic studies of THGEM-based detectors in noble-liquid vapors are reported in <cit.>. Others were studied at liquid-xenon temperatures in counting gases, e.g. Ne/CH_4; among them in Gas Photo-Multipliers (GPM) <cit.>. The RPWELL was successfully investigated in Ne/CH_4 down to 163K <cit.> and resistive materials have been explored also for large-volume experiments as a possible solution to electrical discharges <cit.>.
Notwithstanding, the charge readout in dual-phase detectors has been practically abandoned, at least in pure Ar vapors, due to instabilities even at very low gains <cit.>. A successful concept for attaining robust operation at higher charge gains could permit, e.g. in neutrino experiments, lowering detection thresholds thus reaching sensitivities superior to those of current single-phase devices <cit.>. A small increase of the charge gain, though, would likely remain insufficient for dark-matter experiments, dealing with very low WIMP-deposited energies <cit.>. The possibility of integrating resistive materials into detector assemblies, to mitigate the disruptive effects of sparks, turned out to be less straightforward than expected - due to the insulating behavior of resistive materials at low temperatures. So far, only the operation of an RPWELL equipped with a Fe_2O_3/YSZ ceramic plate was reported in Ne/CH_4 at liquid xenon temperature (163 K) <cit.>. A game-changing solution that made possible the operation of resistive detectors at liquid argon temperature (90 K) was provided by Diamond-Like-Carbon (DLC) layers <cit.>. This material has been employed lately for spark protection in various other gas-avalanche detectors at room temperature <cit.>.
By adjusting the thickness of the DLC layer, it was possible to cover a broad range of resistivities suitable for spark protection, down to the liquid-vapor coexistence point of argon (87.5 K at 1 bar).
In this work, the results of the successful operation of a cryogenic RWELL are summarized and an overview of the detector properties in comparison to its non-protected counterparts (THGEM, THWELL) is provided.
§ EXPERIMENTAL SETUP
§.§ Detector assembly and readout
During this study, three detector concepts were investigated: THGEM, THWELL, and RWELL; they are schematically presented in Fig. <ref>.
The first investigated configuration consisted of a double-sided 3x3 cm^2, 0.8 mm thick THGEM (0.5 mm diameter holes distributed in a hexagonal pattern with 1 mm pitch, and ∼0.1 mm hole rim) followed by an induction gap of 2 mm and an induction field E_i = 5 kV/cm (see Fig. <ref>a). In all the cases, the drift field was E_d = 0.5 kV/cm.
The second structure was a 0.8 mm thick THWELL detector: a 3x3 cm^2, 0.8 mm thick single-sided THGEM structure (0.5 mm diameter holes distributed in a hexagonal pattern with 1 mm pitch and ∼0.1 mm hole rim) directly coupled to a metallic anode (see Fig. <ref>b).
The RWELL was realized with a 3x3 cm^2, 0.8 mm thick single-sided THGEM electrode (with the same geometry of the THWELL) pressed onto a metallic anode via a resistive layer deposited on a 0.1 mm thick Kapton insulating foil (see Fig. <ref>c).
Experiments were conducted using DLC films with different surface resistivities R_S at RT (i.e., from choosing different thicknesses during sputtering) in order to assess the quenching properties at cryogenic temperatures. More details about the DLC production process can be found in <cit.>. These layers exhibited a stretched-exponential behavior as a function of temperature, with a coefficient a=1/3 (attributable to two-dimensional variable-range electron hopping).
Additional details regarding the properties of the DLC coatings at cryogenic temperature can be found in <cit.>. The responses of RWELL configurations with lower resistivity (e.g., R_S^DLC∼165 kΩ/□ at 298 K and R_S^DLC ∼10MΩ/□ at 90 K) were found to be similar to those of non-resistive readouts and thus are not reported here.
In Table <ref>, the values for the relevant R_S at 298 K and at 90 K are given.
To create electrical contacts, the DLC-coated Kapton foil was fixed onto a PCB and the DLC layer was connected to two sides of copper lines using cryogenic conductive epoxy[Master Bond EP21TDCS-LO]. An electrically-insulating cryogenic epoxy[Stycast 2850FT], was subsequently used to encapsulate the components (see Fig. <ref>).
All the electrodes (i.e. cathode, THWELL, resistive layer and readout electrode) were biased by a HV power supply[CAEN N1471H] or via a charge-sensitive preamplifier[Cremat: Model CR-110 with CR-150-R5 evaluation board].
The latter was equipped with a protection circuit composed of discharge-suppressor elements in parallel and a series resistor of 220 Ω.
The time evolution of the detector gain was studied in order to establish stable operation conditions. For gain stabilization studies, the signals from the preamplifier were fed to an amplifier[Ortec Model 450]; the amplified signals were digitized by a multi-channel analyzer[Amptek MCA 8000D]. For all the other measurements, waveforms from the preamplifier were acquired using a digital oscilloscope[Tektronix MSO 5204B] and post-processed using dedicated Matlab scripts. Histograms of the preamplifier signal amplitude were also acquired using the oscilloscope.
Currents from the electrodes were read out by a digitizer[NI USB-6008] and monitored using LabVIEW SignalExpress <cit.>.
§.§ Cryostat
The measurements were performed in a dedicated LAr cryostat, WISArD (Weizmann Institute of Science liquid Argon Detector) described in detail in <cit.>. In all the experiments, an ^241Am source emitting 5.5 MeV alpha particles with a rate of about 10 Hz was installed on a metallic plate cathode at a distance of 15 mm from the multiplying electrode. The source emission was opportunely confined using a metallic collimator of 4 mm thickness and 5 mm diameter. A 12 μ m Mylar foil was installed at the collimator's emitting face to attenuate the alpha energy to about 4 MeV. This granted full containment of the alpha-particles in the drift volume (≈10 mm range in gaseous argon at 90 K, 1.2 bar calculated with SRIM <cit.>).
Each detector configuration was operated in the saturated vapor phase of argon at 90 K. This was realized by inserting the detector assembly inside a Teflon cup, as depicted in Fig. <ref>. Argon was liquefied into the cryostat using a cryocooler[Cryomec PT90], till the upper edge of the cup. The level of the liquid was monitored using temperature sensors located in the proximity of the cup edges, inside the cup adjacent to the detector anode, and in other parts of the system. This prevented the liquid from penetrating into the detector region. The chosen operation within the vapor enclosed in the cup rather than in the liquid phase minimized losses due to electron recombination and extraction (see Fig. 3 in <cit.>), thus yielding a higher number of ionization electrons and facilitating the measurements.
The temperature inside the system was constantly monitored using a temperature controller[CryoCon model 24]. A conceptual scheme of the cryogenic setup is depicted in Fig. <ref>.
§ METHODOLOGY
§.§ Physics of the detector
The 4 MeV alpha particles (attenuated ^241Am source) generate ∼10^5 primary electrons in the drift region mostly in a track perpendicular to the detector surface (W-value of 26.3 eV in Ar <cit.>).
These are drifted towards the THGEM holes under a field E_d = 0.5 kV/cm. Increasing the bias across the multiplier electrode, defined as Δ V_THGEM results in a high electric field within the holes, allowing for the onset of avalanche multiplication of the primary charges. The amplification factor (i.e. detector gain) increases exponentially with Δ V_THGEM, typically in the range of 2.7-3.2 kV in our configuration. Charge from avalanche electrons is evacuated to the ground through the anode in the THGEM and THWELL multipliers and along the resistive-anode surface in the RWELL (see Fig. <ref>). An electrical signal is induced onto all the electrodes by the movement of charges (electrons and ions) during the entire process, as described by the Shockley-Ramo theorem <cit.>.
We refer to the collection signal as the one induced by the movement of the primary charges in the drift gap. This was recorded from the cathode in specific measurements where Δ V_THGEM = 0 kV. We refer to the amplification signal as the one induced by the movement of the avalanche charges. It was recorded from the anode with Δ V_THGEM in the range of detector operation. The effective detector gain can be estimated by the following normalization:
G_Eff = P_Amplif/P_Coll
where P_Amplif and P_Coll are the Gaussian mean values of the spectra of amplification and collection signals, respectively.
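As a back-of-the-envelope illustration of these quantities (our own sketch; only the 26.3 eV W-value and the ∼4 MeV deposited alpha energy are taken from the text, the spectral means below are placeholder values):

```python
E_alpha_eV = 4.0e6           # energy deposited by the attenuated 241Am alpha
W_eV = 26.3                  # W-value of argon
n_primary = E_alpha_eV / W_eV          # ~1.5e5 primary electrons

P_coll = 1.5e5               # Gaussian mean of the collection spectrum (placeholder)
P_amplif = 4.5e6             # Gaussian mean of the amplification spectrum (placeholder)
G_eff = P_amplif / P_coll    # effective gain, here ~30
```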
Notice that the anode signal of a THGEM operated in multiplication mode is mostly due to the fast movement of electrons within the induction gap - typically 4 μs in our setup at cryogenic temperature - whereas in THWELL and RWELL the signals contain both electron and slower ion components <cit.>.
It is important to mention that the resistive anode, decoupled from the readout electrode by a thin insulator (here 0.1 mm thick Kapton), causes a small decrease in the effective gain because of a reduced weighting potential. This effect is negligible when the insulator is thin <cit.>.
§.§ Gas purity
Gas purity plays a crucial role in LAr-TPCs, where a sub-ppb level of electronegative impurities is targeted <cit.> in order to minimize electron capture during the drift in the liquid or gas. The maximum achievable gain of gaseous multipliers in Ar is strongly affected by the presence of impurities, quenching for instance avalanche-induced VUV photons that are responsible for secondary avalanche feedback <cit.>. Therefore, for a realistic comparison of the multipliers, they should be investigated in very pure Ar vapor. For this reason, prior to any measurement, the system was vacuum-pumped for at least 12 h reaching a pressure of ∼1×10^-4 mbar.
A residual gas analyzer[SRS RGA200] was used to measure the level and composition of residual impurities. It was found that the main sources of contamination were H_2O, Ne, HF, N_2, CO. When filling the system with ∼1 bar of Ar, the residual impurities got diluted to a concentration of 0.1 ppm. This should be compared with the level of impurities in the Ar gas bottle: 99.999% with H_2O ≤ 2 ppm, O_2 ≤ 2 ppm, C_nH_m ≤ 0.5 ppm and N_2 ≤ 5 ppm, as specified by the producer. During operation, the gas was recirculated and purified with a hot getter[Entegris HotGetter PS3-MT3-R-2] in order to grant a nominal impurity content of the order of 1 ppb.
§.§ Liquefaction and thermal stabilization
The cryostat was filled up with liquid argon until a temperature of 90 K was measured in the gaseous phase at the level of the assembly (temperature sensor in Fig. <ref>). The pressure inside the cryostat throughout all the operations corresponded to the saturation pressure P_s = 1.2 bar at 90 K.
In order to ensure thermal stability, based on our experience, the detector was not operated during the first six hours following liquefaction. In parallel, the gas was purified at a rate of 2 L per hour for at least 24 hours.
§.§ Gain stabilization and voltage scan
The gain of each detector configuration was characterized by scanning the voltage across the RWELL, Δ V_RWELL, in the operational range while E_d = 0.5 kV/cm was kept constant.
An increase of Δ V_RWELL corresponded to an increase of the detector gain followed by a small reduction (about 20%), up to stabilization after several hours. This effect is due to the accumulation of charges on the THGEM's insulating substrate (holes' walls and rims) <cit.>.
For each voltage configuration, a minimum of 100 spectra of 120 s each were acquired. In Fig. <ref>, the gain stabilization for the RWELL detector operated at 90 K, 1.2 bar is shown as an example. Δ V_RWELL was set to 2.7 kV, sufficient to obtain well-defined spectra above noise, and it was subsequently increased up to 3.175 kV in steps of 50 V or 100 V.
After a certain value of Δ V_RWELL, the detector entered a region of electrical instability. This was different for each detector configuration. For the THGEM and THWELL, a single discharge made the detector unstable by removing the charging-up, and a power cycle was required.
For the RWELL, discharges were quenched by the resistive layer and the detector could be operated stably despite a loss in energy resolution due to small gain fluctuations (e.g. Δ V_RWELL = 3.15 and 3.175 kV in Fig. <ref>).
For the RWELL, three regions of operation were identified: i) discharge-free behavior up to 3 kV, ii) stable operation in the presence of quenched discharges for 3 kV < Δ V_RWELL < 3.175 kV and iii) presence of constant currents above 3.175 kV. The measurement was stopped when the detector entered the latter region. For the THGEM and THWELL, two regions of operation were identified: discharge-free behavior up to Δ V_THGEM = 3.2 kV and Δ V_THWELL = 2.9 kV, and an unstable region of intense discharges above it; there the measurements were stopped.
§ RESULTS
§.§ Collection signals, spectra, and efficiency
The process of primary charge drift and collection is independent of the adopted detector configuration.
An average collection signal from a dataset of 1000 waveforms is shown in Fig. <ref>, left.
The average risetime of the typical collection signal (around ∼5.5 μs) is in agreement with the theoretical value (around ∼5.8 μs) calculated for this geometry within a 5% error (from Magboltz <cit.>, v_e = 2.6 mm/μs at 90 K, 1.2 bar with E_d = 0.5 kV/cm).
In Fig. <ref>, right, a typical spectrum of collection-signal amplitudes recorded for a sample of 4.5 × 10^3 waveforms is shown. The spectrum was normalized to its maximum and calibrated by injecting a known amount of charge through a 2 pF capacitor. The mean value of the distribution corresponds to ∼1.5 x 10^5 primary electrons.
In Fig. <ref>, left, a scan of the WELL detector operated at 90 K, 1.2 bar was carried out in terms of the anode-signal amplitude from the preamplifier as a function of Δ V_WELL, with E_d = 0.5 kV/cm. The cathode-signal amplitude recorded at Δ V_WELL = 0 kV and E_d = 0.5 kV/cm, is also reported for comparison; it is the same for the RWELL (Fig. <ref>, left). Each point was recorded using the digital oscilloscope and represents the mean value of the distribution of signal amplitudes, each one containing 10^3 entries. One can observe in Fig. <ref>, right, a rise of the collection efficiency reaching a plateau at about Δ V_WELL = 1.2 kV, remaining constant until the onset of charge multiplication at about Δ V_WELL = 1.5 kV.
A charge gain ∼8 can be derived from Fig. <ref>, left, at Δ V_WELL = 2.9 kV.
Thus, for Δ V_WELL≥ 1.5 kV, the detector operates with full collection efficiency and, consequently, G_Eff (defined in Eqn. <ref>) is equivalent to the absolute detector gain G_Abs. These conclusions apply to the RWELL case as well.
§.§ 20GΩ-RWELL at 90K - Amplification signals and spectra
In Fig. <ref>, left, averages of ∼1000 normalized waveforms recorded from the RWELL anode are shown.
The detector voltage was scanned in the range Δ V_RWELL = 2.7-3.2 kV. The waveforms were prepared for averaging by applying a smoothing moving-average algorithm and subsequently by removing the signal pedestal, normalizing to the waveform maximum and subtracting time offsets at a fixed amplitude fraction.
In Fig. <ref>, right, the calibrated charge spectra are shown. The histograms were recorded using the MCA and each one contained ∼5×10^3 entries. All spectra were normalized to their area and to the maximum at Δ V_RWELL = 2.7 kV. The low-energy tail is attributed to the partial energy deposition of alpha particles exiting the collimator at large angles.
For detailed risetime studies, a waveform-by-waveform analysis was made, and the most probable value was taken from the risetime distribution (defined from 10% to 90% of the signal maximum). An increase of the risetime as a function of Δ V_RWELL was observed in Fig. <ref> (orange circles), from around 4 μs to 8 μs. A drift velocity analysis was carried out using the technique introduced in <cit.> for α-particles and a constant value of ∼1.8 · 10^3m/s at E_d = 0.5 kV/cm was found (about 30% less than the nominal value). This confirms that the main
contribution to the observed risetimes comes from the α-particles range, while the additional increase with Δ V_RWELL stems from the detector response. A quadratic subtraction of the low-field risetimes, performed in order to isolate the effect, is superimposed in Fig. <ref> (blue circles). The time constant associated with photon feedback <cit.> is shown for illustration (magenta dashed line).
Even if a good match is visible, the observed agreement might be coincidental.
Field modifications due to ion space charge or charge accumulation at the plate may induce a similar effect given that the contribution from the ion drifts within the THGEM structure accounts for ≈1 μs. Therefore, field reductions down to 20%, even if a bit extreme a priori, would cause a similar increasing trend with Δ V_RWELL. Ion feedback from the cathode, at the timescale of ms, would lead to spurious signals that were only present at Δ V_RWELL = 3.175 kV (up to 40% of the events, with a discharge rate of ≈1%). In closing, the exact mechanisms leading to the breakdown above this value could not be unambiguously elucidated.
Fig. <ref> depicts the RWELL gain as a function of ΔV_RWELL at different temperatures (presented in Fig. 2a of <cit.>).
Since the gas pressure was kept constant along the experiments, in first approximation the gas density is only temperature-dependent, ρ_g∼ T^-1. The DLC resistivity decreased with increasing temperature, from 20 GΩ/□ at 90 K to 10 GΩ/□ at 100 K and 3.5 GΩ/□ at 115 K <cit.>.
The detector characterization was performed first at 90 K, then the LAr level was reduced by warming up the system and extracting gas until the temperature reached the levels of 100 K and 115 K.
The maximal achievable stable gain in the presence of quenched discharges was approximately 30 regardless of the temperature and DLC resistivity.
The three curves manifest the same exponential trend, which is a feature of avalanche multiplication. Above a detector gain close to 18, the presence of quenched discharges was observed (≈ 0.25 μC); they did not prevent the RWELL operation and did not affect its performance. Above a gain of 30, the appearance of constant currents led to the detector tripping.
§.§ Comparative study
Past studies with LEM structures in dual-phase argon TPCs were carried out in different conditions: 87 K, 0.98 bar, cosmic muons, and unknown argon purity <cit.>.
Thus, such results in terms of maximum achievable gain cannot be directly compared to the ones obtained in this study.
In order to demonstrate the potential advantages of the resistive configuration, the RWELL detector response was compared with that of THWELL and THGEM (LEM) in the same experimental system, irradiation source and conditions, at 90 K and 1.2 bar. Note that the RWELL was operated with DLC anodes of 200 MΩ/□ and 20 GΩ/□. The data were recorded after a gain stabilization cycle in purified Ar (see above). In all cases, E_d was kept at 0.5 kV/cm and the induction field in the THGEM configuration was set to 5 kV/cm.
Gain curves for all the tested detectors are shown in Fig. <ref>. It is possible to observe that both RWELL detectors outperform their non-protected counterparts.
The maximal stable-gain values achieved are summarized in Table <ref>:
The respective 2- and 4-fold gain increase of the RWELL configuration compared to the THWELL can be explained by the discharge quenching features of the resistive layer. Discharges occurring in the two non-protected detectors, WELL and THGEM, generally led to the tripping of the detector and often damaged the preamplifier. In both RWELL configurations, the presence of quenched discharges was observed: at 10 < G < 15 for the 200 MΩ-RWELL and 18 < G < 30 for the 20 GΩ-RWELL. Discharges could be sustained throughout the detector operation without affecting the stable gain, the discharge charge (≈0.25 μC) was ∼15-fold lower than the one of unprotected detectors (≥ 3.75 μC), and destructive effects to the electronics were strongly mitigated. Beyond ΔV_RWELL = 3.05 kV for the 200MΩ-RWELL and ΔV_RWELL = 3.175 kV for the 20GΩ-RWELL, both RWELL configurations were not operable due to the presence of constant currents.
§.§ Discharge probability
We define the discharge probability P_d as the number of discharges per detected event:
P_d = N_d/(E_r× t)
where N_d represents the number of discharges recorded, E_r is the rate of detected events and t is the measurement time. E_r was measured using a surface-barrier silicon detector[Ortec F Series Partially Depleted Silicon Surface Barrier Radiation Detector] and was found to be ≈10 Hz.
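For illustration (our own sketch; the ≈10 Hz event rate is from the text, while the discharge count and duration are placeholder values chosen to give a ∼1% probability):

```python
N_d = 36              # number of discharges counted (placeholder)
E_r = 10.0            # detected event rate in Hz (alpha source)
t = 360.0             # measurement time in seconds (placeholder)
P_d = N_d / (E_r * t)         # discharges per detected event, here 0.01 (1%)
```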
In Fig. <ref>, the discharge probability of the RWELL is depicted as a function of gain at different temperatures (90 K - 200 MΩ/□, 90 K - 20 GΩ/□, 100 K - 10 GΩ/□, 115 K - 3.5 GΩ/□).
Note the fast rise in discharge probability with the lowest resistivity RWELL, while the other three behave similarly. This is related to the fact that, for a higher R_S, the charge evacuation process is slower and consequently, the effective electric field at the bottom of the hole is reduced during the development of the avalanche, leading to quenching.
In all cases, at maximal gain values, the discharge probability reaches ∼1%.
§ SUMMARY & DISCUSSION
The availability of DLC films with tunable surface resistivity has permitted, for the first time, the implementation of the cryogenic RWELL. It was operated under 4 MeV alpha particles in purified Ar at 90 K and 1.2 bar. With resistive anodes of 200 MΩ/□ and 20 GΩ/□, the RWELL exhibited discharge-quenching capabilities and could reach gains of ∼15 and ∼30, respectively. Relative to RWELL detectors operated at room temperature, much higher R_S values were needed to reach lower discharge quenching factors <cit.>.
A comparison with a THWELL and with a THGEM (LEM) + 2mm induction gap demonstrated the superiority of the resistive configurations in terms of stability and gain (the latter was 6 and 8 for the THGEM and THWELL, respectively).
The RWELL was operated also in the presence of moderate discharges, the latter being ∼15-fold smaller than those in THWELL.
In general, non-resistive configurations could not be operated in the presence of discharges, which also happened to damage the electronic readout. For all the configurations with R_S > 3.5 GΩ/□, it was possible to operate the detector with P_d≤ 0.01% (upper bound estimation limited by the source rate and measurement time) up to a gain ∼18, after which the discharge probability sharply increased to ≈1% at a gain of ∼30.
The dependence of the discharge probability on R_S cannot be explained by simple considerations based on the streamer mechanism, which would lead to quenching resistances lower than the ones employed here (in line with observations in quenched mixtures at RT). For this reason, the process of discharge quenching at cryogenic conditions requires further investigation.
Stable gain values a factor of 2.5-5 higher were reached with the RWELL compared to a bare THGEM/THWELL in our current setup under the same conditions. This demonstrates the potential advantage of coupling a resistive anode to the multiplier, also in noble gas at cryogenic conditions. A direct comparison with the results presented in the literature for LEM detectors <cit.> was not possible due to the different conditions of operation (e.g. gas density and purity, type of primary ionization, different electrode parameters, etc.).
To conclude, the successful operation of the RWELL detector at cryogenic
temperature represents an important milestone. Though requiring further investigations, it has the potential for improved scalable readout in future large-volume dual-phase detectors. The scaling up depends on industrial THGEM-electrode production limits and DLC-coating facilities. Electrode sizes reaching 50x50 cm^2 were already reported <cit.>; DLC-film coating currently reaches 20×100 cm^2 <cit.>. DLC films proved to be robust, minimally affected by aging, and cheap to produce <cit.>.
We would like to thank Prof. Zhou Yi from the University of Science and Technology of China for the production of the DLC layers, Mr. Y. Asher for the technical help with the mechanical productions, Mr. Y. Shahar and Dr. M. Rappaport for their support with the use of the vacuum technology and cryogenics, and Dr. Ryan Felkai for his assistance during the measurements.
This work was supported by Sir Charles Clore Prize, by the Nella and Leon Benoziyo Center for High Energy Physics, and CERN-RD51 ‘common project’ fund, the Xunta de Galicia (Centro singular de investigación de Galicia, accreditation 2019- 2022), and the “María de Maeztu” Units of Excellence program MDM-2016-0692. DGD was supported by the Ramón y Cajal program (Spain) under contract number RYC-2015-18820.
|
http://arxiv.org/abs/2307.04769v1
|
20230707022456
|
Wave generation and energetic electron scattering in solar flares
|
[
"Hanqing Ma",
"James F. Drake",
"Marc Swisdak"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"physics.plasm-ph"
] |
Hanqing Ma (ORCID: 0000-0002-0786-7307), Department of Physics, University of Maryland, College Park, MD 20740, USA
J. F. Drake (ORCID: 0000-0002-9150-1841), Department of Physics, University of Maryland, College Park, MD 20740, USA
M. Swisdak (ORCID: 0000-0002-5435-3544), IREAP, University of Maryland, College Park, MD 20742, USA
We conduct two-dimensional particle-in-cell simulations to investigate
the scattering of electron heat flux by self-generated oblique electromagnetic waves. The
heat flux is modeled as a bi-kappa distribution with a T_∥>T_⊥ temperature
anisotropy maintained by
continuous injection at the boundaries. The anisotropic distribution
excites oblique whistler waves and filamentary-like Weibel instabilities. Electron velocity distributions taken after the system has reached a steady state show that these instabilities
inhibit the heat flux and drive the total distributions towards
isotropy. Electron trajectories in velocity space show a circular-like
diffusion along constant energy surfaces in the wave frame. The key parameter controlling the scattering rate is
the drift speed of the heat flux v_d compared with the electron Alfvén speed v_Ae, with higher drift speeds producing stronger fluctuations and a more
significant reduction of the heat flux. Reducing the density of the
electrons carrying the heat flux by 50% does not significantly affect
the scattering rate. A scaling law for the electron scattering rate versus v_d/v_Ae is deduced from the simulations. The implications of these results for understanding
energetic electron transport during solar flare energy release are discussed.
§ INTRODUCTION
In solar flares electrons can be accelerated to energies above 100 keV <cit.>, three orders of magnitude higher than typical ambient coronal values. Because the X-rays and microwaves produced by solar energetic electrons are observed in both the corona (the location of the initial energization) and the chromosphere, transport, a crucial process that remains poorly understood, must be a key component of the dynamics of flare energy release. Observations point to a mechanism (or mechanisms) that scatter and inhibit electron transport <cit.>. Transport effects can also modify the energy distribution of the propagating electrons and the observed X-ray spectra, which complicates the identification of acceleration mechanisms.
Magnetic reconnection is thought to control the release rate of magnetic energy in flares <cit.>, occurring on lengthscales reaching 10^4 km over timescales on the order of tens of seconds. Thus, in the absence of scattering, a relativistic electron will take less than 0.1 s to escape from a flare region. However, observations of above-the-looptop sources reveal that the lifetime of energetic electrons is two orders of magnitude longer than the free-streaming transit time <cit.>, which suggests suppression of the transport of energetic electrons.
Observations have revealed the existence of looptop hard X-ray sources during flares <cit.>. These sources are situated at or distinctly above the top of the soft X-ray flaring loops. Increased detection sensitivity indicates that the above-the-looptop hard X-ray emission is likely a common feature of all flares <cit.>. Such sources require that the transport of energetic electrons be inhibited in the acceleration region so that bremsstrahlung can produce the fluxes of hard X-rays. Further, it has been shown that the fluxes of energetic electrons in the looptop are several times (a factor of 1.7 – 8) higher than the fluxes of electrons reaching the footpoints during the flare impulsive phase <cit.>, suggesting an accumulation of electrons in the looptop. Thus, <cit.> concluded that accelerated electrons must be subjected to a trapping mechanism that holds a significant fraction of them inside the coronal loops.
A long-standing issue in solar physics <cit.> is that the observed cooling times of soft X-ray sources are much longer than the predictions from models where cooling proceeds by collision-dominated <cit.> conduction. This finding gives a powerful motivation to consider the possible limitation of heat conduction by a confinement mechanism that inhibits the transport of energetic electrons. The turbulent suppression of heat conduction is one possibility. Turbulent fluctuations in the ambient magnetic field act to enhance the angular scattering rate and thus reduce the rate of escape of energetic electrons from the acceleration region <cit.>. But, numerical results show that to account for the observations the turbulence generated by the energy release should persist beyond the impulsive phase, which requires an extended release of magnetic energy <cit.>.
The above theoretical and observational evidence has led people to consider potential mechanisms that can scatter and confine energetic electrons in the corona. One possible mechanism is the formation of a highly localized electrostatic potential drop, in the form of a double layer (DL), that inhibits the transport of energetic electrons <cit.>. However, <cit.> found that the effectiveness of confinement by a DL is proportional to the magnitude of the DL potential drop, which scales as T_eh, the hot electron temperature. Since T_eh≪ m_ec^2, the strength of the DL is not sufficient to inhibit the transport of nonthermal electrons. Another possibility is magnetic trapping, for which an effective mirror requires significant perpendicular electron velocities. The observations of strong microwave-producing gyrosynchrotron emission, which arises mainly from a trapped population of electrons spiraling in coronal magnetic loops, also requires significant perpendicular electron energy <cit.>. However, recent studies of reconnection <cit.> suggested that the primary energy gain during the acceleration process is parallel to the ambient magnetic field. Therefore, to account for the observations and facilitate magnetic trapping, a mechanism must develop to scatter the energetic electron parallel motion into the perpendicular direction.
A potential mechanism is scattering by oblique whistler waves <cit.>. The cold plasma dispersion relation of a whistler wave is
ω=|k_∥|kd_e^2Ω_e/(1+k^2d_e^2) or v_∥/v_Ae=kd_e/(1+k^2d_e^2)
where ω is the wave frequency, k_∥ is the wave number along the direction of the uniform background magnetic field B_0, v_Ae is the electron Alfvén speed and d_e is the electron skin depth. The parallel phase speed v_∥ first increases then decreases with kd_e, with a maximum of 0.5 v_Ae when kd_e=1. Thus, the characteristic phase speed of whistler waves is ∼ v_Ae. According to the quasilinear theory of whistler wave scattering <cit.>, resonant particles diffuse in velocity space along curves of constant energy in the wave frame (i.e., the reference frame that moves with the wave parallel phase speed v_ph=ω/k_∥ along B_0). Thus, the diffusive flux of particles is locally tangent to semicircles centered on the parallel phase velocity v_ph of whistler waves given by
(v_∥-v_ph)^2+v_⊥^2=constant
For diffusion along this path, the particle's energy is conserved in the whistler wave frame, but not conserved in the lab frame, where the particle's kinetic energy is transferred to the wave's growth. Waves and particles can participate in resonant wave–particle interactions when they fulfill the resonance condition
ω-k_∥ v_∥-nΩ_e=0
where Ω_e=eB_0/m_ec is the electron cyclotron frequency, n is an integer that can take on positive and negative values <cit.>, and B_0 is the guiding field strength. n=0 gives the Landau resonance condition while n=1,-1 correspond to the normal and anomalous cyclotron resonances, respectively.
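A short numerical sketch of these relations (our own illustration; velocities are normalized to v_Ae, wavenumbers to 1/d_e and frequencies to Ω_e, using v_Ae = Ω_e d_e):

```python
import numpy as np

# Parallel phase speed of whistlers (dispersion relation above):
# peaks at 0.5 v_Ae when k*d_e = 1.
kde = np.linspace(0.01, 10.0, 1000)
vph_over_vAe = kde / (1.0 + kde**2)
k_at_max = kde[np.argmax(vph_over_vAe)]     # ~1.0, where v_ph reaches 0.5 v_Ae

def v_resonant(omega, k_par, n):
    """Resonant parallel velocity from omega - k_par*v_par - n*Omega_e = 0,
    in normalized units (omega/Omega_e, k_par*d_e, v_par/v_Ae)."""
    return (omega - n) / k_par

# Example: a wave with k*d_e = 1 propagating at 45 degrees to B_0.
k_par = 1.0 / np.sqrt(2.0)                  # k_par * d_e
omega = k_par * 1.0 / (1.0 + 1.0**2)        # |k_par| k d_e^2 / (1 + k^2 d_e^2)
v_landau = v_resonant(omega, k_par, 0)      # n = 0 (Landau), equals omega/k_par
v_anomalous = v_resonant(omega, k_par, -1)  # n = -1 (anomalous cyclotron)
```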
Significant scattering at resonant velocities can lead to the appearance of “horn-like” structures in the electron distribution function <cit.> in which the tail exhibits circular bumps at each resonant velocity. <cit.> show that the necessary condition to have wave growth is 0<v_ph<U_0s, with U_0s the heat flux average bulk velocity. Only in this way can the resonant particles lose kinetic energy as they diffuse and transfer energy to the growing resonant wave. Furthermore, as the wave amplitude increases, the resonance widths get larger and can eventually overlap <cit.>. The particle motion before the resonances overlap is periodic, but after the overlap it becomes diffusive. As a result, particles starting at small pitch-angles can scatter past 90 degrees if the wave amplitude is above the overlap threshold.
Heat flux suppression by oblique whistler waves in the fast solar wind is under active study <cit.>. In the solar wind, the heat flux is carried by the strahl, which appears as a field-aligned beam in the electron distribution. Observations reveal evidence that the strahl is scattered into the halo <cit.>, the superthermal component of the solar wind. Particle-in-cell simulations <cit.> and quasi-linear analysis <cit.> have shown that for typical parameters the strahl electrons are capable of generating oblique whistler waves via the first anomalous cyclotron resonance (see equation <ref>), which is known as the fan instability or the instability of runaway electrons <cit.>. The oblique whistler waves were shown to be able to drive pitch-angle scattering of the strahl, suppressing the heat flux and resulting in formation of the halo. Thus, the observations of the scattering of the strahl by whistler waves suggest that reconnection-driven energetic electrons should also drive and be scattered by oblique whistler waves.
Another possibility is the Weibel (also known as the filamentation) instability <cit.>, which is driven by an electron thermal anisotropy and can self-generate transverse magnetic perturbations with wave vectors perpendicular to the direction of the higher temperature. In the reconnection context the instability is driven when T_∥≫ T_⊥, the wavevector k is perpendicular to the ambient magnetic field and the magnetic field perturbation is perpendicular to both k and the local magnetic field. The signature of this instability is a pattern of current filaments and transverse magnetic fields stretched along the ambient field. The Weibel instability has been intensively studied theoretically <cit.>, numerically <cit.>, and experimentally <cit.> and has been considered a possible mechanism for heat transfer inhibition and thermal conduction limitation in laser-plasma experiments <cit.>.
In this study, under sufficiently strong heat flux conditions, the temperature anisotropy within the kappa distribution can trigger the Weibel instability. The competition between whistler and Weibel instabilities is complicated, and the Weibel instability is typically found to be transient, eventually giving way to the development of whistler waves once the temperature anisotropy has been reduced.
In previous simulations treating high-β systems <cit.> a temperature gradient was imposed across the system to drive the heat flux so that the resulting whistler waves could be evolved to steady state. In later simulations with β∼ 1 oblique whistlers were driven by an initial anisotropic kappa distribution function so the waves grew to large amplitude but then decayed due to the fast relaxation of the initial anisotropy <cit.>. In this study, we consider a system in which the heat flux arises from electrons with a bi-kappa distribution propagating parallel to the ambient magnetic field. In order to drive the system to a steady state, electrons reaching the boundaries are injected on the opposite ends of the computational domain with new velocities that ensure a continual input of heat flux. Oblique whistlers are initially excited by the anisotropic distribution through the n = -1 anomalous cyclotron resonance. Weibel perturbations are also evident early in the simulations. At later times, the influence of the initial distribution decays and the heat flux injection from the boundary sources continues to drive the oblique whistlers. The waves resonate with and scatter the most energetic electrons through overlapping resonances, reducing the skewness of the total distribution function and limiting the heat flux.
§ SIMULATION METHOD
We carry out 2D simulations (in the x-y plane) using the PIC code p3d <cit.>, which calculates particle trajectories using the relativistic Newton-Lorentz equations and advances the electromagnetic fields using Maxwell's equations. There is an initial uniform magnetic field 𝐁=B_0 𝐲̂ threading the plasma, so that v_y is the initial parallel velocity and v_x and v_z are the initial perpendicular velocities. The initial electron distribution function has two components. The first is a bi-kappa distribution with a heat flux in the positive v_y direction:
f_κ(v_∥,v_⊥)=n_0/[(πκ)^3/2 θ_∥θ_⊥^2] · Γ(κ+1)/Γ(κ-1/2) · [1+v_∥^2/(κθ_∥^2)+v_⊥^2/(κθ_⊥^2)]^-(κ+1) Θ(v_∥)
where n_0 is the electron density, Γ is the gamma function, κ is a parameter that tunes the steepness of the nonthermal tail of the distribution, and
θ_n^2=v_th,n^2(κ-3/2)/κ
is the effective thermal speed, v_th,n^2=2T_n/m_e is the regular thermal speed, where T_n is the temperature in the n-direction, m_e is the electron mass and Θ(v_∥) is the Heaviside step function.
The second component is a drifting Maxwellian distribution that represents the cold return current beam (moving against B_0)
f_M(v_∥,v_⊥)=n_0/{π^3/2 v_T^3 [1+ erf(v_d/v_c)]} exp{-[(v_∥+v_d)^2+v_⊥^2]/v_c^2} Θ(-v_∥)
where v_c^2=2T_c/m_e is the cold thermal speed, v_d is a drift speed that ensures initial zero current, while the error function erf(v_d/v_c) makes the density of hot and cold particles equal. This distribution choice is motivated by flare observations <cit.> that suggest a power-law distribution of nonthermal electrons in the energy-release and electron acceleration region.
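A minimal sketch of these two initial components evaluated on a gyrotropic (v_∥, v_⊥) grid (our own helper code; we take the thermal speed in the Maxwellian normalization to be v_c, and the prefactors simply follow the expressions above):

```python
import numpy as np
from math import gamma, erf, pi

def f_kappa(v_par, v_perp, n0, kappa, theta_par, theta_perp):
    """Bi-kappa heat-flux component, nonzero only for v_par > 0."""
    norm = (n0 / ((pi * kappa)**1.5 * theta_par * theta_perp**2)
            * gamma(kappa + 1.0) / gamma(kappa - 0.5))
    core = (1.0 + v_par**2 / (kappa * theta_par**2)
                + v_perp**2 / (kappa * theta_perp**2))**(-(kappa + 1.0))
    return norm * core * (v_par > 0)

def f_return(v_par, v_perp, n0, v_c, v_d):
    """Cold drifting-Maxwellian return-current component, nonzero for v_par < 0."""
    norm = n0 / (pi**1.5 * v_c**3 * (1.0 + erf(v_d / v_c)))
    core = np.exp(-((v_par + v_d)**2 + v_perp**2) / v_c**2)
    return norm * core * (v_par < 0)
```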
For parameters, we chose κ = 4, T_y = 20T_c = 40T_x, v_th,y/v_Ae = √(2), c/v_Ae = 5, β_y = 8π n_0T_y/B_0^2 = 2. While this is a relatively high β for the corona, specific flare observations show a hard X-ray source β of approximately 1 <cit.>. Recent simulation results also demonstrate that magnetic reconnection can drive the outflow to a marginal firehose state in which β_∥-β_⊥≈2 <cit.>. When the high energy tail of an anisotropic kappa distribution with κ=4 is scattered to an isotropic one, particle number conservation implies the resulting energy spectral index will be 4.5 because the isotropic distribution occupies a larger phase space (E^0.5) than the anisotropic one (E^-0.5). This value is consistent with the results of a statistical study of flare parameters, which found that the spectral index value is around 4 based on a single power law fit <cit.>. Additionally, the large value of T_∥/T_⊥ is motivated by 3D PIC simulations of reconnection showing P_∥/P_⊥ approximately equal to 100 <cit.>.
To determine the nonthermal β, we separate the kappa distribution into a combination of a Maxwellian component and a nonthermal component, where the temperature of the Maxwellian component (T_M) is related to the kappa distribution temperature (T_κ) by the equation T_M=(κ-1.5)T_κ/κ <cit.>. <cit.> define the number density of the Maxwellian component (N_M) by requiring it to match the differential flux of the kappa distribution at energy T_M. The result is
N_M=n_0exp(1)(1/κ)^3/2Γ(κ+1)/Γ(κ-1/2)[1+1/κ]^-(κ+1)
The density of the nonthermal component for the parameters under consideration can be calculated as N_NT≈0.2n_0. The nonthermal energy density is calculated with the energy relation n_0 T_κ=N_MT_M+N_NTT_NT with T_κ≈ T_∥/3. The resulting nonthermal beta is 0.17, which is slightly larger than the observed value of around 0.1 near the flare looptop <cit.>, but quite close to the 0.16 of another observation <cit.>.
The simulation domain lengths are L_y = 163.84 d_e and L_x = L_y/4, where d_e = c/ω_pe is the electron skin depth, and ω_pe=(4π n_0e^2/m_e)^1/2 is the electron plasma frequency. The simulation time step is dt = 0.008Ω_e^-1, where Ω_e = eB_0/m_ec is the electron cyclotron frequency. Ions are initialized with a Maxwellian distribution of temperature T_i = T_c, and do not play an important role due to the large mass ratio m_i/m_e = 1600. The simulation uses 560 particles per species per cell and has a grid of 1024 by 4096 cells.
We set up a boundary source to continuously inject heat flux and drive the system to a steady state. Due to the initial distribution function (the sum of equations (<ref>) and (<ref>)), the heat flux moves upwards in the box while a cold return current moves downwards. Electrons that encounter the top boundary are reinjected at the bottom of the box with random positions and new velocities that satisfy the kappa flux distribution
f_κf(v_∥,v_⊥)=2(κ-1)/[πκ(θ_∥θ_⊥)^2] · v_∥ [1+v_∥^2/(κθ_∥^2)+v_⊥^2/(κθ_⊥^2)]^-(κ+1) Θ(v_∥)
where θ_n^2 and other parameters are the same as the initial kappa distribution, Eq. (<ref>). The kappa flux distribution models particles escaping from a source. Similarly, when electrons hit the bottom boundary, they are re-injected at the top of the box with random positions and new velocities that satisfy the drift Maxwellian distribution with identical parameters as in the initial distribution. The boundary condition in the x direction is periodic and ions, which are essentially only a charge-neutralizing background, simply follow periodic boundary conditions in all directions. The electromagnetic boundary conditions are set to be periodic in both the parallel and perpendicular directions. The ratio v_d/v_Ae, where v_d is the mean parallel drift speed of the heat flux (see equations <ref> and <ref>), is expected to be a key parameter controlling the degree to which the electron distribution is in resonance with the excited waves. In the simulations presented here we vary this parameter by changing the density n_0.
§ SIMULATION RESULTS
The initial distribution drives magnetic fluctuations unstable in the system. Figure <ref> shows the box-averaged magnetic field fluctuation ⟨ B_z^2 ⟩ versus time for different values of v_d/v_Ae, where v_d is the mean parallel drift speed of the heat flux (Eq. (<ref>)). The magnetic fluctuations grow rapidly, reach a peak value and, after a short decay, again grow slowly until flattening to a steady state at late time. The magnetic fluctuations reach larger amplitude for higher drift speeds, where more electrons have a velocity greater than the whistler phase speed and are therefore able to drive the waves.
Figure <ref> shows the temperature anisotropy averaged over x versus distance y at the end of the simulations for different drift speeds. The higher drift speed cases (red, green and blue curves) exhibit roughly the same anisotropy at y=0 as the systems have lost all information regarding the t=0 initial conditions, while the slower drift speed case (black curve) is still affected by the initial distribution due to the weakness of the scattering. As a result, the anisotropy remains large. For higher drift speeds (green and blue curves), the temperature anisotropy decreases quickly with distance from the boundary and reaches a minimum value around 1.5. The final distribution is more isotropic at larger value of y because the injected electrons must propagate some distance into the domain before the waves are able to isotropize them. For the slower drift speed case (black curve), the anisotropy remains large due to the small amplitude of the waves, resulting in a correspondingly weak level of scattering.
Shown in Figure <ref> is the ratio of the final to initial average y-directed heat flux for different drift speeds. The red point corresponds to a case where 25% of electrons, rather than 50%, initially satisfy the kappa distribution (Eq. (<ref>)), while the remaining 75% are from the drifting Maxwellian distribution (Eq. <ref>)). In this case, all other distribution function parameters remain unaltered, with the exception of a reduction in the drift speed (see Eq. <ref>) that is necessary for current balance. In this case the density of nonthermal electrons carrying the heat flux is significantly reduced <cit.>. The heat flux at steady state decreases with increasing drift speed, reaching a minimum value around 40% of the initial heat flux for the highest drift speed. Notably, the reduced kappa density case (red point) does not significantly impact the reduction of the heat flux.
We now address the nature of the instabilities that drive electron scattering and specifically show that a strong Weibel instability develops in the high heat flux cases while whistlers dominate scattering at lower drift speeds. In Figure <ref> we show the structure of the out-of-plane magnetic fluctuations and the corresponding space-time plot for v_d/v_Ae = 1.06. Shown in Fig. <ref>(a) is the growth phase when the large anisotropy of the initial distribution excites the Weibel instability and produces filaments that fill the box along the heat flux direction. The spacing of these filaments along the y direction is around d_e. The magnetic structure of these filaments, with the wavevector k transverse to B and the magnetic perturbation dominantly in the z direction, matches the expected structure of a Weibel instability that is driven by the strong parallel heat flux. Such Weibel-like structures are soon replaced by larger-amplitude oblique whistler waves that propagate at angles of roughly 70^∘ from the parallel direction, as shown in Figure <ref>(b). The whistlers are excited by the fan instability, which is driven by the n = -1 anomalous cyclotron resonance (see Eq. (<ref>)) with the tail of the electron distribution. Figure <ref>(b) corresponds in time to the initial peak in the wave amplitude in Figure <ref>, so the initial peak and subsequent decay phase of the fluctuations should be associated with the drive from the initial distribution function. Figure <ref>(c) is at the time when the red curve in Figure <ref> begins to flatten. Oblique whistler waves with normal angles around 45^∘ are excited at the bottom of the box by heat flux injection and propagate upward. Filaments oriented along the y direction, which are excited by the Weibel instability, are also observed near the lower boundary; however, they occupy only a limited region of the simulation domain.
Figure <ref>(d) is taken at the end of the simulation and appears similar to panel (c), showing that the system has reached a steady state.
Shown in Fig. <ref>(e) is a space-time plot of B_z, where the data are collected from a strip extending over the entire y range of the domain at a fixed x position (vertical black line in Figure <ref>(a)). The initial peak-and-decay phase of the magnetic field fluctuation appears as a transient period before tΩ_e = 100, after which upward-moving whistler waves appear throughout the entire box. After around tΩ_e = 100, the bottom boundary continuously generates upward-moving whistler waves, showing that the instability from heat flux injection is responsible for the fluctuations at late time. The phase speed of the whistlers over most of the domain remains nearly constant during the entire simulation following the transient Weibel instability phase at very early time.
Figure <ref> shows the power spectrum of the out-of-plane magnetic field for the simulation shown in Fig. <ref>. Figure <ref>(a) corresponds to the oblique whistler waves induced by the initial distribution (see Fig. <ref>(b)), exhibits peaks around k_xd_e∼1.5 and k_yd_e∼0.6, and shows power extending over a broad range of wave numbers. The wave normal angle for the highest-amplitude mode is around 70^∘, but the range of wave numbers indicates the coexistence of several oblique waves with slightly different phase speeds. The parallel phase speed ∼ 0.5 V_Ae measured in Figure <ref>(b) corresponds to kd_e ∼ 1 from the dispersion relation of Eq. (<ref>), a value consistent with the power spectrum in Fig. <ref>(a). In the steady state, there are two peaks in the power spectrum, as shown in Figure <ref>(b). One is the oblique mode centered around k_xd_e∼ 0.3 and k_yd_e∼ 0.2, which corresponds to the oblique whistler wave. The other is a perpendicular mode centered around k_xd_e∼0.6 and k_yd_e = 0, which arises from the filaments appearing near the bottom boundary of Figure <ref>(d). The parallel phase speed ∼ 0.2V_Ae measured at the end of the simulation gives kd_e ∼ 0.1 from the dispersion relation of Eq. (<ref>), which is consistent with Figure <ref>(b).
The electron velocity-space distribution functions of Fig. <ref> demonstrate the impact of the scattering of electrons by the wave activity. Shown in panels (a-c) are distributions from the end of simulation at different positions in the box. Figure <ref>(a), from the bottom of the simulation domain, is a kappa-like distribution in the upper half of the panel because of heat flux injection at the bottom boundary. The return current in the negative v_y plane is broadened in the v_x direction, due to scattering by whistler waves through the cyclotron resonance.
In the upper half of the box, shown in Fig. <ref>(b), the total distribution has become nearly isotropic. The upward moving electrons have been strongly scattered. There is no obvious horn-like structure, which is likely a consequence of the overlap of several resonance circles <cit.> and is implied by the broad range of wave numbers shown in Figure <ref>. At the top of the box, panel (c), the cold current injection from the top boundary produces a beam feature in the negative v_y component, while the positive v_y distribution is even more isotropic than in panel (b). Panels (d-f) show the difference between the final and initial distributions at the respective positions. Panel (d) shows that the return current has been heated, and in panels (e,f) the decrease in the parallel heat flux and the significant increase in the perpendicular velocity are direct evidence of the scattering of the nonthermal electrons by whistler waves.
Shown in Fig. <ref> is the structure of the out-of-plane magnetic fluctuations at the end of the simulations for the reduced number density case and the corresponding baseline run (both with v_d/v_Ae=1.50). The initial peak and decay phases are similar to Figure <ref> (a-b) with features such as the initial filament and subsequent oblique whistler mode, and therefore we omit those times here for simplicity. For the case with equal number densities carrying the heat flux and return current in Fig. <ref>(a), the filament structure from the Weibel instability forms at the bottom of the domain and extends upward, threading the entire system. In Fig. <ref>(b), where the number density of electrons carrying the heat flux is reduced, the Weibel instability is weaker and filaments only extend to the middle of the box before they are replaced by oblique whistler waves.
Shown in Fig. <ref> is the time evolution of the box-averaged magnetic fluctuation energy ⟨ B_z^2 ⟩ for the base case with v_d/v_Ae=1.50 and the case with reduced number density carrying the heat flux (corresponding to the red point in Fig. <ref> and Fig. <ref> panel (b)). When the initial heat flux is reduced by half (with the same drift speed), the wave energy before tΩ_e = 400 is small due to the reduced population of electrons available to drive the instability. Eventually the slowly growing waves reach finite amplitude and the fluctuation level flattens around tΩ_e = 800. The late time value of the wave amplitude is only modestly reduced from that of the base case, demonstrating that strong oblique whistler waves can be excited even when the number density of the electrons carrying the heat flux is reduced.
Shown in Fig. <ref> are the self-consistent trajectories of several electrons in the x-y and v_∥-v_⊥ planes, extracted from the particle-in-cell simulation with v_d/v_Ae=1.50. The electrons all start from the bottom boundary with initial parallel velocities significantly greater than their perpendicular velocities and so are typical of electrons carrying the heat flux. In panel (a), the waves in the system are strong enough to completely reverse the direction of the electrons (they are scattered past 90^∘ in pitch angle). In panel (b), the trajectories of cases (1) and (3) exhibit diffusion along a nearly complete semi-circle (see Eq. (<ref>)) and scatter over 90 degrees in pitch angle, reversing their direction along the ambient magnetic field. The particle in case (2) also scatters past 90 degrees but reveals a more complex trajectory as it moves from a larger to a smaller radius in velocity space. The trajectories demonstrate that the reduction of electron parallel velocity is accompanied by an increase of perpendicular velocity. Thus, scattering by whistler waves has significant consequences: it directly inhibits the heat flux while at the same time boosting the potential for magnetic trapping by increasing the pitch angle. Both effects will boost the confinement of energetic electrons in flares.
We now compute the wave-particle scattering rate that leads to the reduction in electron anisotropy and examine its scaling with parameters. There is no single formalism for determining the rate because of the nonlinear nature of the different scattering processes (e.g., weak scattering as described by quasilinear theory versus trapping in finite amplitude waves) <cit.>. For example, <cit.> used quasilinear theory to determine the scattering rate of the whistler anisotropy instability while others have approximated the scattering due to the proton cyclotron instability by finding the maximum exponential decay rate of the temperature anisotropy <cit.>. Here we use the Lorentz scattering operator, which describes electrons scattering off a stationary center, to model the relaxation of the temperature anisotropy in the simulations and to deduce the scattering rate. The Lorentz operator is given by
(∂ f_e/∂ t)_e = (ν_e/2) ∂/∂ξ[ (1-ξ^2) ∂ f_e/∂ξ ]
where ξ=cosθ and ν_e is the electron scattering rate. By taking moments of this equation to determine the rate of relaxation of the temperature anisotropy of the electron velocity distribution, we obtain a formula for ν_e,
ν_e = (dR/dt)/(2 - R - R^2)
where R = T_∥/T_⊥ is the temperature anisotropy of the total electron distribution. We use this equation to model the relaxation of the temperature anisotropy near the lower boundary of the simulation domain that is shown at late time in Fig. <ref>. The anisotropy profile has reached a steady state so the total time derivative in Eq. (<ref>) reduces to the convective derivative
ν_e = v_y (∂ R/∂ y)/(2 - R - R^2),
where v_y is a constant equal to the heat flux drift speed at the lower boundary. The temperature anisotropy curves in Fig. <ref> were smoothed with a Gaussian filter and the calculated scattering rates were averaged over the ranges of 0≲ y ≲ 80d_e, 0≲ y ≲ 60d_e, 0≲ y ≲ 10d_e,0≲ y ≲4d_e, for the cases of drift speed v_d/v_Ae=0.75, 1.06, 1.50, and 2.12, respectively. In Fig. <ref> we plot the wave-particle scattering rate for electron anisotropy reduction from the four simulations (Fig. <ref>). For the weakest drift case v_d/v_Ae=0.75, the scattering rate is very small (∼10^-4Ω_e), while for larger drift cases the scattering rates are significantly larger, growing to ∼ 6×10^-2Ω_e. The figure reveals that the scattering rate scales as a linear function of v_d/v_Ae, the ratio of the drift speed to electron Alfvén speed, above a threshold given by v_d/v_Ae∼1.
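For reference, the following minimal Python sketch evaluates Eq. (<ref>) for ν_e from a tabulated steady-state anisotropy profile R(y) using SciPy's Gaussian filter; the profile, drift speed, and smoothing width below are synthetic placeholders rather than simulation output.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scattering_rate(y, R, v_y, sigma=2.0):
    """Estimate nu_e(y) = v_y (dR/dy) / (2 - R - R^2) from a steady-state
    anisotropy profile R(y), after Gaussian smoothing of the profile."""
    R_s = gaussian_filter1d(R, sigma)
    dR_dy = np.gradient(R_s, y)
    return v_y * dR_dy / (2.0 - R_s - R_s**2)

# Synthetic placeholder profile: anisotropy relaxing from ~3 toward 1.5 with y.
y = np.linspace(0.0, 80.0, 400)          # distance in units of d_e
R = 1.5 + 1.5 * np.exp(-y / 10.0)
nu = scattering_rate(y, R, v_y=1.5)      # v_y: heat-flux drift speed (placeholder)
print("mean nu_e over 0 < y < 10 d_e:", nu[y < 10.0].mean())
```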
§ CONCLUSIONS
Observations and theory imply that there is suppression of the transport of energetic electrons during solar flares. We have explored pitch-angle scattering by oblique whistler waves and Weibel-driven disturbances as a mechanism for reducing the electron heat flux. The energetic electron heat flux escaping from a flare-like system excites strong oblique whistler waves through the anomalous cyclotron resonance. These waves pitch-angle scatter the electrons on a rapid timescale of hundreds of electron cyclotron periods, suppressing the heat flux and increasing the perpendicular velocities of electrons. At high mean drift speeds compared with v_Ae the nonthermal electrons can also drive the Weibel instability. The resulting scattering mechanisms can operate under the generic conditions of reconnection-driven electron energization and can reduce the field-aligned electron energy flux by up to a factor of two. Of greater significance is that this scattering process increases the energetic electrons’ perpendicular velocity so that the electrons will more effectively mirror and can be trapped in the large-scale magnetic fields of the flare energy release region. This scattering mechanism can therefore suppress the escape of electrons from the energy release regions of flares, which is required for them to reach the relativistic velocities documented in observations.
In previous simulations treating whistler waves in flares, the fluctuations damp out after scattering is complete and the heat flux from the initial electron distribution is reduced <cit.>. In contrast, in the current simulations the oblique whistler waves and Weibel perturbations are maintained in a steady-state due to the continual injection of heat flux from the boundary. This mimics the continued energy release from reconnection during flares. Such a setup represents a more physically reasonable set of initial conditions and avoids the influence of arbitrary discontinuities in the initial distribution function, giving more confidence to the interpretation of the final results. We suggest, on the basis of the present simulations, that oblique whistler waves and Weibel disturbances can grow to large amplitude during flares and scatter electrons to reduce their heat flux.
Although our model was designed to study the transport of energetic electrons in flares, it is relevant to any β∼ 1, weakly collisional plasma with a high heat flux due to the presence of nonthermal electrons. Astrophysical applications might include accretion disc coronae <cit.>. Scattering by oblique whistlers of the solar wind strahl into the halo population is currently under active investigation <cit.>. Observations at 1 AU have shown that the pitch angle widths of the strahl in the presence of whistler mode waves are up to 12^∘ larger than those in the absence of whistler waves <cit.>. Large-amplitude, oblique whistlers that would limit the electron heat flux have been detected in the solar wind <cit.>, in the magnetosphere <cit.>, and in flares <cit.>. Simulations show that oblique whistler waves excited by the strahl component under solar wind conditions can scatter the strahl distribution into the halo and reduce the strahl-carried heat flux <cit.>. Our simulations are focused on the flare system, where the electron heat flux is much greater than in the case of the solar wind strahl. Nevertheless, our model clarifies the mechanism for scattering of energetic electrons by whistlers that is also active in the strahl scattering problem.
We note that our results may be limited by the common disadvantages of particle-in-cell simulations such as small domains and short timescales compared with those of real flares. On the other hand, the scale size of turbulence associated with whistlers and the Weibel instability is at the kinetic scale so the typical scale separation issue is less of a problem than in the exploration of electron energy gain during magnetic reconnection. The initial electron velocity distribution contains an artificially sharp gradient near v_y = 0, but this gradient is transient in the system’s evolution and its influence disappears by the time the system reaches a steady state. Further investigation is required to elucidate the transition mechanism from the Weibel instability to oblique whistler wave. Additionally, it is worth noting that magnetic field inhomogeneities resulting from turbulence or inherent mirror structure may affect transport suppression. The confinement of energetic electrons seems to result from a combination of magnetic trapping and wave scattering, so the exploration of particle scattering in a mirror geometry should be pursued in future studies.
The authors were supported by NASA grants 80NSSC20K0627, 80NSSC20K1813, and 80NSSC20K1277
and NSF grant PHY2109083.
|
http://arxiv.org/abs/2307.01663v2
|
20230704115156
|
Exploring Transformers for On-Line Handwritten Signature Verification
|
[
"Pietro Melzi",
"Ruben Tolosana",
"Ruben Vera-Rodriguez",
"Paula Delgado-Santos",
"Giuseppe Stragapede",
"Julian Fierrez",
"Javier Ortega-Garcia"
] |
cs.CV
|
[
"cs.CV"
] |
^1 Universidad Autonoma de Madrid, Spain
^2 University of Kent, UK
The application of mobile biometrics as a user-friendly authentication method has increased in the last years. Recent studies have proposed novel behavioral biometric recognition systems based on Transformers, which currently outperform the state of the art in several application scenarios. On-line handwritten signature verification aims to verify the identity of subjects, based on their biometric signatures acquired using electronic devices such as tablets or smartphones. This paper investigates the suitability of architectures based on recent Transformers for on-line signature verification. In particular, four different configurations are studied, two of them rely on the Vanilla Transformer encoder, and the two others have been successfully applied to the tasks of gait and activity recognition. We evaluate the four proposed configurations according to the experimental protocol proposed in the SVC-onGoing competition.
The results obtained in our experiments are promising, and promote the use of Transformers for on-line signature verification.
Exploring Transformers for
On-Line Handwritten Signature Verification
Pietro Melzi,^1 Ruben Tolosana,^1 Ruben Vera-Rodriguez,^1 Paula Delgado-Santos,^1,2
Giuseppe Stragapede,^1 Julian Fierrez,^1 Javier Ortega-Garcia^1
^1 Universidad Autonoma de Madrid, Spain; ^2 University of Kent, UK
========================================================================================================================================================
§ INTRODUCTION
On-line handwritten signature verification is a biometric modality that aims to verify the authenticity of subjects based on their personal signatures. Handwritten signatures have been traditionally used for biometric personal recognition, as they contain unique behavioral patterns that can serve as reliable identifiers <cit.>.
Early approaches focused on extracting dynamic or local features <cit.>, such as pen pressure, stroke sequences, speed, and acceleration, and leveraging machine learning and pattern recognition techniques, such as Dynamic Time Warping (DTW) <cit.> and Hidden Markov Models (HMM) <cit.>. With the integration of Deep Learning (DL) <cit.>, on-line signature verification systems have achieved remarkable performance, exhibiting robustness against various forms of forgeries <cit.> and improving user experience <cit.>.
This technology has wide-ranging applications in areas such as e-commerce, digital banking, and document verification, contributing to the prevention of identity fraud and improving the overall security of on-line transactions <cit.>.
Despite the success of DL approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) <cit.>, recent studies have explored the application of Transformer architectures for other behavioral biometric traits such as gait and keystroke, outperforming the state of the art <cit.>. Among the multiple advantages of Transformers, we highlight the ability to capture long-term dependencies and interactions, which is especially attractive for time series modeling <cit.>.
In this paper we explore the use of Transformers for on-line signature verification, in which signatures are acquired with pen tablet devices able to capture X and Y spatial coordinates, pen pressure, and timestamps. We investigate four different Transformer configurations: i) a Vanilla Transformer encoder <cit.>, ii) the THAT Transformer successfully applied to activity recognition <cit.>, iii) a Transformer successfully applied to gait recognition <cit.>, and iv) a novel configuration based on the Temporal and Channel modules proposed in THAT <cit.>, but with the Transformer encoder proposed in Vanilla <cit.>. To obtain a fair comparison with the literature, we evaluate the proposed configurations according to the experimental protocol proposed in the SVC-onGoing competition <cit.>. In particular, we compare the results with our recent Time-Aligned Recurrent Neural Network (TA-RNN) <cit.>. TA-RNN combines the potential of DTW and RNNs to train more robust systems against forgeries.
§ METHODS
We explore a Siamese architecture with four different Transformer configurations for on-line signature verification. From the original time signals acquired by the device (X and Y spatial coordinates and pen pressure), we extract the set of 23 local features proposed in <cit.>, obtaining additional time signals related to velocity, acceleration, geometric aspects of the signature, etc. These time signals are used as input to our Siamese architecture.
§.§ Transformer Configurations
Unlike other behavioral biometrics (e.g., gait and keystroke), on-line signatures usually consist of longer sequences. Hence, processing the time signals with Transformer-based architectures is computationally expensive. Similarly to some approaches proposed for voice analysis <cit.>, we add convolutional layers before the Transformers to aggregate temporal features of the input signals and reduce their dimensionality. A general representation of the proposed architecture is provided in Figure <ref>.
The CNN front-end combines 1D convolutional and MaxPooling layers. The convolutional layers extract temporal features by combining consecutive timesteps and increase the feature size to 64 output channels. The MaxPooling layers reduce the number of timesteps, making the signals more suitable for Transformers.
Using the Siamese architecture proposed in Figure <ref>, we consider four different configurations depending on the Transformer module selected. First, we consider the original Vanilla Transformer encoder <cit.>, with Gaussian Range Encoding instead of Positional Encoding, given its suitability for the time series of interest <cit.>. The Vanilla encoder processes inputs with size 250 × 64 and generates a vector of size 64 at each timestep. These vectors are processed by an RNN, whose final state of size 92 is concatenated in the Siamese architecture (see Figure <ref>).
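A minimal PyTorch sketch of one branch of this first configuration is given below. The kernel sizes, pooling factors, number of encoder layers, and attention heads are placeholder choices of ours, and a plain learnable positional embedding stands in for the Gaussian Range Encoding; only the tensor sizes (23 input features, 250 × 64 after the CNN, a 92-dimensional recurrent state, and a 184-dimensional concatenated pair representation) follow the description above.

```python
import torch
import torch.nn as nn

class SignatureBranch(nn.Module):
    """One branch of the Siamese model: 23 local features over 2,000 aligned
    time samples -> CNN front-end (250 x 64) -> Transformer encoder -> GRU,
    whose final hidden state (size 92) is the signature embedding."""

    def __init__(self, in_feats=23, d_model=64, seq_len=250, emb_dim=92):
        super().__init__()
        # Two conv+pool stages reducing 2,000 timesteps to 250 (placeholder design).
        self.cnn = nn.Sequential(
            nn.Conv1d(in_feats, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),                      # 2000 -> 500
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                      # 500 -> 250
        )
        # Learnable positional embedding (stand-in for the Gaussian Range Encoding).
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.rnn = nn.GRU(d_model, emb_dim, batch_first=True)

    def forward(self, x):                         # x: (batch, 2000, 23)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (batch, 250, 64)
        h = self.encoder(h + self.pos)            # (batch, 250, 64)
        _, h_n = self.rnn(h)                      # final state: (1, batch, 92)
        return h_n.squeeze(0)

class SiameseVerifier(nn.Module):
    """Concatenates the two branch embeddings (size 184) and scores the pair."""
    def __init__(self):
        super().__init__()
        self.branch = SignatureBranch()
        self.head = nn.Sequential(nn.Linear(2 * 92, 1), nn.Sigmoid())

    def forward(self, sig_a, sig_b):
        emb = torch.cat([self.branch(sig_a), self.branch(sig_b)], dim=-1)
        return self.head(emb)
```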
The second and third approaches are based respectively on the THAT Transformer proposed for activity recognition <cit.> and the Transformer proposed for gait recognition <cit.>.
Finally, we explore a novel configuration based on the Temporal and Channel modules proposed in THAT <cit.>, but with the Transformer encoder proposed in Vanilla <cit.>. The Temporal branch is analogous to the one considered in the first configuration. The Channel branch processes inputs with size 64 × 250 and generates a vector of size 250 for each channel. We average these vectors and apply a Fully-Connected layer to reduce the size of the output to 92 (the same size as the Temporal branch). The outputs of the Temporal and Channel branches are concatenated, yielding a final vector of size 184 (Figure <ref>).
§ EXPERIMENTAL SETUP
We evaluate our configurations with the experimental protocol proposed in the SVC-onGoing competition <cit.>. Two publicly available datasets are considered in the competition: DeepSignDB <cit.>, used for development and validation, and SVC2021_EvalDB <cit.>, used for the final evaluation. Different subjects are considered in each dataset. In particular, we focus on the office-like scenario where subjects had to perform signatures using a pen tablet device.
To train our four configurations, we generate random pairs of matching and non-matching signatures from the Development DeepSignDB dataset provided by the SVC-onGoing competition. We consider both random and skilled non-matching pairs for training. Finally, following our previous TA-RNN approach <cit.>, for each signature comparison (i.e., genuine-genuine or genuine-impostor) we align the 23 time signals of each signature pair with DTW, zero-padding them to obtain a fixed length of 2,000 time samples for each signature.
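The following sketch illustrates the alignment step, assuming each signature is already represented as a (time, 23) feature array; it uses a straightforward O(T_aT_b) DTW implementation and is not the exact alignment code used in the TA-RNN pipeline.

```python
import numpy as np

def dtw_path(a, b):
    """Return the DTW warping path between feature sequences a (Ta, D) and b (Tb, D)."""
    Ta, Tb = len(a), len(b)
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        d = np.linalg.norm(a[i - 1] - b, axis=1)   # distances to every frame of b
        for j in range(1, Tb + 1):
            cost[i, j] = d[j - 1] + min(cost[i - 1, j], cost[i, j - 1],
                                        cost[i - 1, j - 1])
    # Backtrack from (Ta, Tb) to (1, 1) along the minimum-cost predecessors.
    i, j, path = Ta, Tb, [(Ta - 1, Tb - 1)]
    while (i, j) != (1, 1):
        i, j = min(((i - 1, j), (i, j - 1), (i - 1, j - 1)), key=lambda s: cost[s])
        path.append((i - 1, j - 1))
    return path[::-1]

def align_pair(a, b, target_len=2000):
    """Warp both signatures along the DTW path and zero-pad/crop to target_len."""
    path = dtw_path(a, b)
    wa = np.stack([a[i] for i, _ in path])
    wb = np.stack([b[j] for _, j in path])
    out = np.zeros((2, target_len, a.shape[1]))
    T = min(len(path), target_len)
    out[0, :T], out[1, :T] = wa[:T], wb[:T]
    return out   # shape (2, 2000, 23): the aligned input pair for the Siamese model
```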
§ RESULTS
The results obtained by evaluating our four Transformer configurations according to the protocol of the SVC-onGoing competition are reported in Table <ref>. We consider non-match comparisons made of random signatures (R), skilled signatures (S), and an overall combination of the two (O). Random signatures are the type of impostor that always yields the lowest EER, except for the Vanilla encoder with Temporal and Channel branches evaluated on SVC2021_EvalDB, which gives 4.42% EER for skilled non-match comparisons and 4.58% EER for random ones.
The Transformer configurations achieve similar performance compared to the TA-RNN previously presented. From the four Transformer configurations explored, we observe that the Vanilla encoder achieves the best overall results, 4.64% EER and 3.80% EER for the DeepSignDB and SVC2021_EvalDB, respectively. Focusing on the SVC2021_EvalDB dataset considered for the final evaluation of the competition, we can observe how the Vanilla encoder achieves a relative improvement of 7% in comparison to the TA-RNN approach (4.08% EER), showing a better generalisation ability to the new scenarios not considered in training.
The results in Table <ref> show that more complex configurations do not improve the evaluation results. Overall EERs rise from 4.64% to 5.36% for DeepSignDB and from 3.80% to 4.49% for SVC2021_EvalDB when the Channel branch is added to the configuration based on the Vanilla encoder. Similar observations apply to the other two configurations, with only the Transformer proposed for gait recognition coming close to the best performance, with overall EERs of 5.13% and 4.10% on DeepSignDB and SVC2021_EvalDB, respectively.
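For completeness, the EER values reported above can be computed from raw comparison scores with a simple threshold search, as sketched below; the assumption that higher scores indicate genuine signatures, and the synthetic scores in the usage example, are ours.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: operating point where the false acceptance and false rejection
    rates are (approximately) equal. Scores: higher = more likely genuine."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        frr = np.mean(genuine < t)      # genuine comparisons rejected
        far = np.mean(impostor >= t)    # forgery comparisons accepted
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, 0.5 * (far + frr)
    return eer

# Toy usage with synthetic scores (illustration only).
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(0.8, 0.1, 1000), rng.normal(0.5, 0.1, 1000)))
```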
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860813 - TReSPAsS-ETN. With support also from projects INTER-ACTION (PID2021-126521OB-I00 MICINN/FEDER) and HumanCAIC (TED2021-131787B-I00 MICINN).
|
http://arxiv.org/abs/2307.00235v1
|
20230701055344
|
Evaluating The Interference Potential in 6 GHz: An Extensive Measurement Campaign of A Dense Indoor Wi-Fi 6E Network
|
[
"Seda Dogan-Tusha",
"Muhammad Iqbal Rochman",
"Armed Tusha",
"Hossein Nasiri",
"James Helzerman",
"Monisha Ghosh"
] |
cs.NI
|
[
"cs.NI"
] |
The 17th ACM Workshop on Wireless Network Testbeds, Experimental evaluation & CHaracterization (WiNTECH '23), October 6, 2023, Madrid, Spain
Evaluating The Interference Potential in 6 GHz: An Extensive Measurement Campaign of A Dense Indoor Wi-Fi 6E Network
University of Notre Dame
stusha@nd.edu
University of Chicago
muhiqbalcr@uchicago.edu
University of Notre Dame
atusha@nd.edu
University of Notre Dame
hnasiri2@nd.edu
University of Michigan
jarhelz@umich.edu
University of Notre Dame
mghosh3@nd.edu
The Federal Communications Commission (FCC) has allocated the 6 GHz band (5.925 - 7.125 GHz) for unlicensed, shared use in the US. Incumbents in the band are protected via Low Power Indoor (LPI) rules that do not require the use of
an Automatic Frequency Control (AFC) mechanism and Standard Power (SP) rules which do.
Wi-Fi 6E implements the LPI rules and deployments have been increasing,
but there is limited research examining the real-world interference potential of dense LPI deployments to fixed links, which remains a concern for incumbents. We address this gap by conducting a first-of-its-kind extensive measurement campaign of a dense indoor Wi-Fi 6E network at the University of Michigan. The campaign includes walking, driving, and drone measurements to assess outdoor beacon Received Signal Strength Indicator (RSSI), building entry loss (BEL), channel utilization, and appropriate enabling signal level for a proposed client-to-client (C2C) mode in 6 GHz.
Our detailed measurements under various conditions show median outdoor RSSI between -75 dBm and -85 dBm, BEL between 12 dB and 16 dB through double-pane low-emission windows, and only 5% of indoor Basic Service Set Identifiers (BSSIDs) observed outdoors. Our overall conclusion is that the probability of interference to incumbent fixed links is low, but more research is required to determine the appropriate signal level for the C2C enabling signal.
==================
§ INTRODUCTION
§.§ Unlicensed use of 6 GHz in the US
The increasing bandwidth demands of new wireless applications and use cases prompted the U.S. Federal Communications Commission (FCC), in 2020, to allocate the 6 GHz band from 5.925 GHz to 7.125 GHz for unlicensed use on a shared basis with existing incumbents, primarily fixed microwave links, cable television relay services (CTRS), satellite and mobile Broadcast Auxiliary Services (BAS) <cit.>. While some countries have only allocated the lower 500 MHz on an unlicensed basis reserving the upper portion for possible future auctions and licensing <cit.>, the large number of incumbents in the band (> 48,000) made the prospect of relocating incumbents prior to licensing a major challenge for the U.S. Hence, the most expedient way of making this band available for commercial applications was to develop rules for unlicensed devices to use this band while sharing with incumbents.
Since the majority of wireless traffic, approximately 60%, is handled by Wi-Fi <cit.>,
allocating this band for unlicensed use also relieves the growing congestion in the existing 2.4 GHz and 5 GHz unlicensed bands.
The 6 GHz band encompasses four U-NII (Unlicensed National Information and Infrastructure) bands: U-NII-5 to U-NII-8, as listed with its incumbents in Table <ref>.
Incumbents are protected via two sets of rules that unlicensed devices must follow: low power indoor (LPI) and standard power (SP).
LPI operation is permitted across the entire 6 GHz band without the need for an Automatic Frequency Control (AFC) system, but access points (APs) must be installed indoors. SP APs can be installed anywhere, but are limited to U-NII-5 and U-NII-7 and require an AFC to avoid interference with incumbents.
Thus, Wi-Fi 6E devices can utilize 14 additional 80 MHz channels and 7 additional 160 MHz channels over the total 1200 MHz span
as shown in Fig. <ref>. This paper focuses on LPI deployments since the AFC is still under development and SP APs have not yet been deployed.
Unlike the 5 GHz band regulations, Wi-Fi 6E APs operating under LPI rules in the 6 GHz band must adhere to a maximum power spectral density (PSD) of 5 dBm/MHz, regardless of the channel bandwidth <cit.>.
This corresponds to maximum transmit (Tx) powers shown in Table <ref>. While APs are limited to indoor use, client devices (STAs) can be anywhere, including outdoors, and are therefore required to transmit 6 dB less power than the AP.
In the Further Notice of Proposed Rulemaking (FNPRM) <cit.> and the Public Notice <cit.>, the FCC is considering enhancing the 6 GHz rules in the future by (i) raising the PSD limit to 8 dBm/MHz, (ii) adding a Very Low Power (VLP) option with a PSD of -8 dBm/MHz and maximum transmit power of 14 dBm that can be used anywhere without requiring AFC access, and (iii) implementing a client-to-client (C2C) mode that enables direct connections between client devices via an enabling signal from a LPI AP. These options require further research to ensure that incumbents will continue to be protected: the results of this paper will inform this process.
§.§ Related Work
While various coexistence scenarios in 6 GHz have been studied in the academic literature, there is a lack of research on analyzing interference to incumbents. In <cit.>, the use of multi-user orthogonal frequency division multiple access (MU-OFDMA) for uplink Wi-Fi 6E is proposed for coexistence with 3GPP-based unlicensed technologies, i.e., 5G NR-U.
Authors in <cit.>
consider the impact of Wi-Fi 6E on ultra-wideband (UWB) communications and ranging.
In <cit.>, the authors study the adjacent channel interference between Wi-Fi 6E in 6 GHz and C-V2X (Cellular Vehicle-to-Everything) in the adjacent 5.9 GHz.
Studies of coexistence with incumbents are mainly conducted by various industry stakeholders, particularly fixed links operators and unlicensed proponents.
In <cit.> the additive impact of unlicensed LPI operations is assessed by analyzing Wi-Fi 6E APs operating co-channel within the main beam of an operational
system in the FirstEnergy network. In <cit.>, a predicted potential interference analysis over 5 years has been presented for Pacific Gas & Electric’s deployments. In <cit.>, the fade margin degradation of a 6 GHz link in the presence of a Wi-Fi 6E AP in the path is demonstrated. However, the experimental set-ups in studies by the incumbents are quite contrived, e.g., APs intentionally placed near windows on the same channel as an incumbent and within the main beam, which are not reflected in real-world deployments. Hence our goal is to understand the statistics of interference based on a dense real-world deployment, instead of worst-case scenarios.
§.§ Motivation & Main Contributions
Given the above discussion, the aim of this study is to evaluate, in an unbiased manner, the potential for interference to outdoor fixed links from a real-world, densely deployed 6 GHz LPI network. The main contributions of this paper are:
∙
A first-of-its-kind, extensive measurement campaign undertaken on the main campus of the University of Michigan (UMich) in Ann Arbor which has more than 16,000 Wi-Fi 6E LPI APs deployed across 225 buildings. This is the largest such deployment in the world today.
∙
Generating heat-maps at ground level of outdoor Received Signal Strength Indicator (RSSI) measured on the 20 MHz beacon frames transmitted by LPI APs, using measurements obtained by walking and driving on the main campus area (MCA) and the nearby residential area (RA).
∙
Drone measurements around buildings near the path of 6 GHz fixed links to assess outdoor RSSI levels at higher altitudes where these links are deployed.
∙
We demonstrate median outdoor RSSI levels of -82 dBm while driving and -77 dBm while walking on campus. We show that the number of APs within a building, the positioning of the APs in relation to nearby windows, construction type, and window materials are all crucial in determining outdoor RSSI levels.
∙
Despite the significant number of deployed indoor APs, each with an average of two Basic Service Set Identifiers (BSSIDs) per AP, only 5% of these BSSIDs are observed outdoors thus indicating that the potential for interference is limited to a smaller number than the deployed number.
∙
Measurements of building loss in two buildings on campus demonstrate building entry loss (BEL) of 12 - 16 dB through double-pane low-E windows.
§ TOOLS AND METHODOLOGY
Fig. <ref> shows the Wi-Fi 6E deployments in the MCA and the RA of UMich. The campus is located in Ann Arbor with a high density of pedestrian and vehicular traffic, serving as an ideal location to assess potential interference from densely deployed indoor 6 GHz networks. The majority of the buildings in the MCA have double-pane low-E windows. Only 227 APs are deployed in the RA, which is a less dense deployment compared to the MCA which has a few thousand deployed APs.
§.§ Measurement Tools
Client devices were used to capture signal information in various environments, using two tools, SigCap and Wireshark, on smartphones and laptops respectively, to extract various signal parameters as shown in Table <ref>.
SigCap is a custom Android app that passively collects time and geo-stamped wireless signal parameters (cellular and Wi-Fi) through APIs without root access <cit.>. Wi-Fi parameters, such as RSSI, channel, BSSID, etc. are collected from the beacon frames every 5 seconds. Optional beacon elements with information on Tx signal power, number of stations connected to each BSSID and channel utilization (percentage of time that the AP senses the channel to be busy) are also collected: fortunately, all the Wi-Fi 6E APs deployed in UMich broadcast these optional elements, thus facilitating our analysis. Wireshark is an open source tool that we used for capturing both beacon and data frames using a Lenovo ThinkPad P16 Gen1 with the Intel(R) Wi-Fi AX211 Wi-Fi adapter.
We intentionally did not use spectrum analyzers for this work since the bursty nature of Wi-Fi traffic and the low outdoor RSSI levels are better captured using the above tools. Using smartphones enables mobile data collection which is difficult to do even with a handheld spectrum analyzer.
§.§ Methodology
The measurements were conducted in two campaigns, as described below.
§.§.§ Measurement Campaign 1 (MC1):
MC1 took place on January 7-9, 2023, during which measurements were conducted while driving and in a fixed location on campus.
Driving Measurements were conducted in the MCA as shown in Fig. <ref> between 9:50 pm and 00:50 am, at a speed of 20 miles per hour. Data was collected with SigCap running on the five smartphones listed in Table <ref>. Due to the cold weather, walking measurements were not conducted in MC1.
Fixed Location 1 (FL1) measurements were taken inside and outside a building with an open indoor area with high occupancy. Fig. <ref> shows the position of the Wi-Fi 6E LPI AP in the space. The AP is positioned 6 meters away from double pane low-E windows. The indoor measurements were taken by placing the phones near the window while the outdoor measurement location is 1.5 meters from the window.
Wireshark and Sigcap were both used for measurements, as shown in Fig. <ref>. The AP transmit power was 15 dBm over a 160 MHz channel bandwidth, which is considerably lower than the regulatory limits specified in Table <ref>. This reduction in transmit power is due to the dense deployment of LPI APs, since many users need to be supported in this area.
§.§.§ Measurement Campaign 2 (MC2):
MC2 was conducted on May 24-27, 2023 with drone, driving, walking, and fixed location measurements. The deployment had been changed from mostly 160 MHz channels observed in MC1 to mostly 80 MHz channels during MC2. This was done by UMich Information and Technology Services (ITS) to serve more users with a higher quality of service. However, since our analysis depends on measurements of the 20 MHz beacon channel RSSI, this change did not affect our results or comparisons.
Drone Measurements: There are five active, fixed links in the MCA, as shown by the black lines in Fig. <ref>. Three of these links have their transmitters (referred to as Tx1, Tx2 and Tx3 in the figure) located within the MCA, while the transmitters of the other two links (Tx4 and Tx5) are positioned at a significant distance away from the campus. Rx4 is the only receiver (Rx) on campus but the link direction is away from the buildings with dense deployments. Nine buildings, indicated by the orange pins in Fig. <ref>, were chosen for drone measurements due to their proximity to Links 1 and 2, operating at center frequencies 7037.5 MHz and 6212.065 MHz with bandwidths of 25 MHz and 56 MHz respectively <cit.>.
Table <ref> provides information on the height of these buildings and the number of Wi-Fi 6E LPI APs deployed in each. On average, we assume two BSSIDs per AP in 6 GHz as determined by UMich ITS. The drone measurements were conducted during daylight hours over a period of three days. As shown in Fig. <ref>, a Samsung S22+ smartphone with SigCap was tied to the drone for data collection. The drone moved vertically up and down, parallel to the wall of a given building.
Driving Measurements: In order to validate the driving measurements conducted in MC1, we replicated the same route as closely as possible. The measurements were carried out between 10:00 pm to 12:00 am, mirroring the timeframe of MC1 using the same 5 phones with SigCap.
Walking Measurements:
The center of the campus, where Wi-Fi 6E is densely deployed, offers only pedestrian access. Hence RSSI measurements were collected in this area by walking with hand-held phones running SigCap (Fig. <ref>).
Fixed Location 2 (FL2):
The measurement area is a conventional classroom on the first floor of a building, shown in Fig. <ref>. The single AP in the room is center-mounted on the ceiling, and the room has a north facing exterior wall. The outdoor measurement location is 7 meters away from this wall due to trees obstructing closer access.
§ RESULTS & DISCUSSIONS
We present statistical analyses of the measurements under different conditions. The discussions are categorized into two groups: (i) ground level driving & walking measurements and (ii) aerial drone measurements.
§.§ Driving and Walking Measurements
Outdoor RSSI Levels: Fig. <ref> shows the map of outdoor beacon RSSI levels measured during the driving and walking campaigns.
The minimum and maximum RSSI values measured across the MCA are -94 dBm and -62 dBm for the driving measurements, and -92 dBm and -55 dBm for the walking measurements, respectively. Transmit power levels ranging from P_TX= 15 dBm to P_TX= 21 dBm were observed within the MCA, with P_TX= 16 dBm being the most frequently used. 73% and 95% of the RSSI measurements were with P_TX≤ 18 dBm for the driving and walking measurements in the MCA, respectively.
Statistical analyses of the measurements in the MCA and RA, using cumulative distribution function (CDF) plots of the measured RSSI at different transmit power levels, are shown in Fig. <ref>. Fig. <ref> shows the CDF of driving measurements within the MCA for MC1 (S1) and MC2 (S2). While P_TX represents the transmit power for the AP, the maximum power of the 20 MHz beacon frames is 18 dBm for P_TX≥ 18 dBm as shown in Table <ref>. MC1 measurements showed a transmit power of 15 dBm: this changed when we returned in May for MC2. The median outdoor RSSI level is -85 dBm for both S1 and S2 under P_TX= 15 dBm, while the highest median RSSI value is -81 dBm for S2 under P_TX= 16 dBm, the most frequently used transmit power.
Fig. <ref> shows the CDF of outdoor RSSI levels recorded during walking measurements (only in MC2) in the MCA (S2) and the RA (S3). A single transmit power of P_TX= 21 dBm was observed in the RA deployment, which is less dense than the MCA and hence each AP can transmit at a higher power without interference. This is still 3 dB less than the maximum allowed power of 24 dBm for 80 MHz channels. Due to the proximity of the walking measurement locations to the buildings, an increase of 1-9 dBm is observed for the median RSSI values in the walking measurements compared to the driving measurements in the case of S2. Fig. <ref> shows the results obtained for each of the 80 MHz channels with Tx power of P_TX= 16 dBm: all the channels exhibit similar behavior.
Finally, Fig. <ref> shows the outdoor RSSI heatmap for the MCA and RA. As expected, the areas with a high concentration of APs have higher outdoor RSSI levels compared to areas with fewer APs. The gray areas show the regions where the 6 GHz beacon frames were not captured at all.
Appropriate Enabling Signal Level for C2C mode: In the proposed C2C mode, clients that can receive an enabling signal from any Wi-Fi 6E AP can directly communicate with each other, at STA LPI power levels, bypassing the need for data transmission through the AP. Device sharing is an example application that benefits from the C2C mode, reducing air time occupancy and latency. While the intended use of C2C is to improve indoor performance, care must be taken to set an appropriate level for the enabling signal so that client devices that are outdoors do not transmit to each other. The proposals submitted to the FCC recommended using -86 dBm/20 MHz and -82 dBm/20 MHz as enabling signal levels <cit.>. Based on our walking results above, where the median outdoor RSSI level varies between -75 dBm and -85 dBm, even a level of -82 dBm could trigger > 50% of outdoor devices to communicate with each other, which is not desirable. Hence, further measurements and analyses should be performed to determine an appropriate enabling signal level for C2C that minimizes the probability of interference.
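One simple way to quantify this concern from the walking data is to compute, for each candidate enabling-signal level, the fraction of outdoor beacon RSSI samples at or above that level; the sketch below does this for the two levels proposed to the FCC, with a synthetic placeholder standing in for the measured RSSI samples.

```python
import numpy as np

def fraction_enabled(outdoor_rssi_dbm, thresholds_dbm=(-86.0, -82.0)):
    """Fraction of outdoor samples where a client would hear an enabling
    signal at or above each candidate level (per 20 MHz beacon)."""
    rssi = np.asarray(outdoor_rssi_dbm, dtype=float)
    return {t: float(np.mean(rssi >= t)) for t in thresholds_dbm}

# Placeholder for the walking-measurement RSSI samples (median near -77 dBm).
rssi_samples = np.random.default_rng(1).normal(-77.0, 6.0, 5000)
print(fraction_enabled(rssi_samples))  # roughly 0.9 at -86 dBm and 0.8 at -82 dBm
                                       # for this synthetic sample
```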
Channel Utilization and Number of Unique BSSIDs:
Channel utilization and number of unique BSSIDs observed outdoors help to understand the potential interference impact of a dense deployment. A higher channel utilization and larger number of unique BSSIDs on a particular frequency point to increased potential for interference on that frequency. CDFs of channel utilization for different P_Tx are shown in Fig. <ref>. Fig. <ref> shows that the median channel utilization is about 5% for most of the P_Tx scenarios during the driving measurements.
In the walking measurements, Fig. <ref>, higher maximum channel utilization is observed but the median value is still approximately 10%. The discrepancy between driving and walking could be due to the fact that the driving measurements were taken in the night around the periphery of the campus whereas the walking measurements were during the daytime in the heart of campus. The highest channel utilization corresponds to scenarios with low P_Tx, in the range 15-18 dBm, while walking in the MCA. Since lower Tx power is usually used in dense deployments, this result is expected. The median channel utilization while walking in the RA is lower than the corresponding scenario in the MCA, which indicates more usage on campus, which is also expected since there are fewer users in the RA.
Fig. <ref> shows the number of unique BSSIDs in each 80 MHz channel observed in the MCA and RA, demonstrating a similar pattern for the walking measurements in both areas and, as expected, a reduced number in the driving measurements in the MCA. The key takeaway from this result is that while there is a slightly higher number of unique BSSIDs observed outdoors on channel 135 in both areas, overall, all channels are used relatively uniformly, thus reducing the probability of interference to an outdoor fixed link that overlaps with a particular 80 MHz channel.
§.§ Drone Measurements
Driving and walking measurements obtained at ground level alone do not offer a comprehensive understanding of the interference potential in the 6 GHz band since most outdoor fixed links are deployed at higher altitudes. Hence, the drone experiments provide insights into the RSSI levels as a function of altitude.
Outdoor RSSI vs. Altitude: Fig. <ref> summarizes the RSSI measured at different altitudes near the nine buildings listed in Table <ref>.
The observed range of RSSI is between -93 dBm and -55 dBm. RSSI values greater than -60 dBm were not observed above a height of 20m. Above a height of 30m, the RSSI values are less than -68 dBm.
In order to provide an in-depth analysis of the relationship between RSSI and factors such as number of Wi-Fi 6E APs, construction material, and altitude, Fig. <ref> shows RSSI vs. altitude for four representative buildings: BLD2, BLD4, BLD5 and BLD6 with 368, 800, 68 and 92 BSSIDs respectively, as shown in Table <ref>. BLD2 and BLD4 have many more APs compared to the other two. From Fig. <ref> and Fig. <ref> we see that the drone measurements near BLD2 and BLD4 provide a larger number of data samples up to 60m compared to BLD5 and BLD6 which have fewer APs. However, there is a uniform decrease in the number of samples and RSSI with increasing altitude for all four buildings. Despite having fewer APs than BLD6, there are more data samples observed near BLD5 with higher RSSI: this is because, unlike most buildings on campus, BLD5 is a historical building with single pane windows, resulting in lower loss.
Number of Unique BSSIDs:
Fig. <ref> shows a high RSSI value of -45 dBm obtained at 10m near BLD4, which we investigate further. Fig. <ref> shows the relative location of this data sample and the corresponding BSSID/AP inside the building. The AP is in a room on the first floor and there is line-of-sight (LOS) through a corner window, resulting in the high RSSI measured at the outdoor location. It is important to note, however, that not all APs will contribute to significant signal emissions outdoors. In addition to the number of APs within a given building, the likelihood of these APs having LOS conditions through nearby windows plays a vital role in the resulting outdoor RSSI levels, and hence the potential for interference. Figs. <ref> and <ref> illustrate the number of unique BSSIDs vs. altitude for the nine buildings and for BLD4, respectively. Although the number of unique BSSIDs observed within the altitude ranges of 0-20 m and 20-40 m is fairly comparable, there is a noticeable decrease in the number of unique BSSIDs as the altitude range extends to 40-60 m and 60-80 m, thus indicating reduced potential for interference at higher altitudes. Finally, Fig. <ref> shows the CDF of RSSI for BLD4. While the median outdoor RSSI values remain consistent across the three altitude intervals, there is a decrease in the maximum outdoor RSSI level as the altitude increases.
Interference with fixed links:
We evaluate the interference potential to Links 1 and 2 which overlap with Wi-Fi channels 215 and 55 respectively (Link 2 has < 1 MHz overlap with the edge of channel 39 which we ignore since the Wi-Fi signal drops off at the band-edge). Fig. <ref> shows the CDF of the RSSI on these channels at different altitudes. As the altitude increases, RSSI level decreases, thus reducing the interference potential to these links. To further evaluate the interference level, we calculate approximately the ratio of interference to noise power (I/N) for these links as I/N = 10log_10(BW_i/20)+ RSSI_Outdoor+G_rx-NF-PL, where BW_i is the link bandwidth, G_rx is the Rx antenna gain, NF is the noise floor and PL is the free space path loss. These are computed from the link parameters in <cit.>. We assume worst case conditions: highest outdoor RSSI measured of -68 dBm and
-58 dBm for Links 1 and 2 respectively, in the main Rx beam. I/N is calculated to be -72 dB for Link 1 and -66 dB for Link 2, much lower than the harmful interference threshold of I/N = -6 dB.
Although Rx4 is located in the MCA, the link points away from the densely deployed region and thus we did not calculate the interference level at Rx4.
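The I/N estimate above can be reproduced with the short helper below; the receive antenna gain, noise floor, and path loss in the usage example are hypothetical placeholders, since the actual values come from the licensed-link parameters in <cit.>.

```python
import math

def interference_to_noise_db(bw_mhz, rssi_outdoor_dbm, g_rx_dbi, nf_dbm, pl_db):
    """I/N = 10 log10(BW/20) + RSSI_outdoor + G_rx - NF - PL (dB/dBm).

    The 10 log10(BW/20) term scales the 20 MHz beacon RSSI to the
    incumbent link bandwidth."""
    return (10.0 * math.log10(bw_mhz / 20.0)
            + rssi_outdoor_dbm + g_rx_dbi - nf_dbm - pl_db)

# Hypothetical placeholder link parameters, for illustration only; they are not
# the licensed values, so the output does not reproduce the -72/-66 dB results.
G_RX_DBI, NF_DBM, PL_DB = 35.0, -96.0, 150.0
print(interference_to_noise_db(25, -68.0, G_RX_DBI, NF_DBM, PL_DB))   # Link 1
print(interference_to_noise_db(56, -58.0, G_RX_DBI, NF_DBM, PL_DB))   # Link 2
```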
§.§ Indoor-Outdoor BEL Measurements
BEL near a double-pane low-E Window: Fig.<ref> shows the CDF of indoor and outdoor RSSI values for the fixed location FL1 which is the open area shown in Fig. <ref>. We only consider RSSI measurements where the client devices are connected to the BSSID associated with the AP in the room, which is one of the few APs with three BSSIDs.
A 12 dB BEL is observed for BSSID_1 and BSSID_2, while BSSID_3 exhibits a higher entry loss of 16 dB.
BEL near a solid brick wall: Fig. <ref> shows the results obtained for the FL2 shown in Fig. <ref>. Inside the measurements room, the devices were able to connect to BSSID_1 and BSSID_2.
However, these two BSSIDs were not detected outside due to the solid brick wall. BSSID_3 was observed outside since it is associated with the AP located in the adjacent room, which has a window pointing out towards the outdoor measurement location. Moreover, 391 APs, corresponding to 782 BSSIDs, are deployed in the entire building, of which only 159 BSSIDs are observed within the measurement room, and only 8 of these, i.e., 5%, are observed outside at this location, indicating a very high loss through the brick wall.
§ CONCLUSIONS & FUTURE RESEARCH
We conducted an extensive measurement campaign via drone, driving, walking, and indoor-outdoor measurements at the world's largest indoor Wi-Fi 6E deployment on the UMich campus, investigating the interference potential of densely deployed LPI APs. To the best of the authors' knowledge, this is the first such measurement campaign conducted on a real-world Wi-Fi 6E deployment. In-depth analyses of the relationship between outdoor RSSI levels and factors such as the number of APs, the positioning of the APs in relation to nearby windows, and altitude are provided. Most LPI APs within a building cannot be received outdoors, but a few APs with LOS through windows can result in high outdoor RSSI levels in a very small number of locations, e.g. only 5% of the indoor BSSIDs in one building are observed outdoors in a location near a solid brick wall.
The BEL near double-pane low-E windows was 12 - 16 dB. Drone measurements show the number of unique BSSIDs and outdoor RSSI levels decrease with increasing altitude, further reducing interference potential. Based on median outdoor RSSI levels, further measurements and analyses are required to determine an appropriate enabling signal level for C2C mode. Future research will investigate collaborations with fixed-link providers to quantify interference at the incumbent receiver.
§ ACKNOWLEDGEMENTS
We thank the Information and Technology Services of UMich for their help throughout the measurement campaign and support of this research.
The research was funded in part by NSF Grant# CNS-2229387.
|
http://arxiv.org/abs/2307.01134v1
|
20230703161410
|
Bayesian variable selection using an informed reversible jump in imaging genetics: an application to schizophrenia
|
[
"Djidenou Montcho",
"Daiane Zuanetti",
"Thierry Chekouo",
"Luis Milan"
] |
stat.AP
|
[
"stat.AP",
"stat.CO",
"stat.ME"
] |
Bayesian variable selection using an informed reversible jump in imaging genetics: an application to schizophrenia
===================================================================
Modern attempts at providing predictive risk for complex disorders, such as schizophrenia, integrate genetic and brain information in what is known as imaging genetics. In this work, we propose inferential and predictive methods to relate the presence of a complex disorder, schizophrenia, to genetic and imaging features and predict its risk for new individuals. Given functional Magnetic Resonance Imaging (fMRI) and Single Nucleotide Polymorphism (SNP) information from healthy individuals and people diagnosed with schizophrenia, we use a Bayesian probit model to select discriminating variables, while to estimate the predictive risk, the most promising models are combined using a Bayesian model averaging scheme. For these purposes, we propose an informed reversible jump Markov chain Monte Carlo, named data driven or informed reversible jump,
which is scalable to high-dimensional data where the number of covariates is much larger than the sample size.
§ INTRODUCTION
Schizophrenia is a multifactorial disease whose etiology and pathophysiology are not completely elucidated. It affects almost 1% of the population, and commonly observed signs and symptoms are hallucinations, delusions, impairment of cognitive functions, disordered thinking and akinesia <cit.>. Furthermore, to date there is no medical test for its diagnosis, which remains symptom-based, and some limitations of such methods have been pointed out in <cit.>. Moreover, early detection of prodromal symptoms could provide good insights for better targeted treatment and possibly prevent functional degradation, delaying or avoiding the transition to psychosis <cit.>. Therefore, the development of new methods or tests that could assist existing medical tools is of great public health relevance.
Modern attempts in this direction integrate neurological and genetic information in what is known as imaging genetics, connectome genetics and variants of these terminologies. Imaging genetics is a growing field of research in which both neuroimaging and genetic datasets are integrated to unravel the impact of genetic variants on brain structure and function. Such integration has a multitude of potential benefits, ranging from a better understanding of psychiatric disorders to their prevention. However, this integration has brought new statistical challenges because of the complexity and high dimensionality of such datasets, both in the number of covariates and in sample size. <cit.> and <cit.> provide a comprehensive review of statistical methods and challenges in imaging genetics, respectively.
Here, we have available fMRI (functional Magnetic Resonance Imaging) and SNP (Single Nucleotide Polymorphism) information on healthy individuals and patients diagnosed with schizophrenia. fMRI was mainly designed to identify the brain's response to tasks by detecting regional neuronal activity captured by blood oxygenation level-dependent (BOLD) variations. It is at the core of neuroimaging for studying schizophrenia because of its low invasiveness, absence of radiation and relatively high resolution. SNPs are substitutions of a single nucleotide at a specific position in the genome that occur in at least 1% of the population. They are frequently used in Genome Wide Association Studies (GWAS) to find possible associations with diseases and phenotypes <cit.>.
<cit.> used principal and independent component analysis and found evidence of relevant association between fMRI and SNPs. <cit.> extended this inferential problem and developed an integrative Bayesian hierarchical mixture model and applied it to link brain connectivity, through fMRI, to genetic information from SNPs of healthy and schizophrenic patients. <cit.> developed a Bayesian predictive model that includes ROIs (regions of interest) based network and a new network capturing relations between SNPs and ROIs to quantify a subject's risk of being schizophrenic based on fMRI and SNPs information. Auxiliary indicator variables with spike-slab priors and a Bayesian model averaging were used for model selection and prediction, respectively.
In the Bayesian framework, model selection can be done using a variety of techniques. <cit.> and <cit.> provide a review of some of those methods together with worked examples for illustration and comparison. Some of them are based on information criteria such as the AIC (Akaike Information Criterion; Akaike, 1973), BIC (Bayesian Information Criterion; Schwarz, 1978), DIC (Deviance Information Criterion; Spiegelhalter et al., 2002) and WAIC (Widely Applicable Information Criterion; Watanabe, 2013), among others. Other Bayesian methodologies such as Bayes factors <cit.>, selection with spike-and-slab priors <cit.>, selection with shrinkage priors <cit.> and RJ (Reversible Jump Markov chain Monte Carlo; Green, 1995) are also widely used for model selection. However, when doing prediction, one does not need to restrict attention to a single model. Bayesian model averaging <cit.> takes model uncertainty into account and computes the prediction by averaging over all competing models with weights given by their posterior probabilities, leading to more trustworthy predictions.
Building on these methods, here we propose a new Bayesian predictive risk model for schizophrenia, based on a sparse set of ROIs and SNPs selected using an informed RJ. The proposed method requires neither auxiliary indicator variables for each available covariate, updated at every MCMC iteration, nor the estimation of all possible models. This makes the algorithm scalable to high-dimensional data when a huge number of covariates or features is considered in the analysis.
Though widely known for its ability to perform joint model selection and parameter estimation, traditional RJ lacks a straightforward way of designing efficient proposals for inter- and intra-model moves. Usually, candidate models are proposed using a uniform distribution, which is not the best option if the model space is very large (for example, when selecting covariates from a large set), and parameters are sampled from some vague Gaussian or uniform distribution. Including information about the target distribution can increase the efficiency of MCMC (Markov Chain Monte Carlo) when compared with methods based on naive uniform or random-walk proposals. For instance, this is done in Hamiltonian Monte Carlo <cit.> and Metropolis-adjusted Langevin dynamics <cit.> using information from the gradient of the joint distribution. In the specific context of RJ, many works have been dedicated to overcoming these limitations <cit.>. Recently, <cit.> proposed locally balanced proposals for discrete spaces, on top of which <cit.> builds another informed RJ. A particular informed RJ strategy proposed in <cit.> and also used in <cit.>, named DDRJ (Data Driven Reversible Jump), makes use of the data to inform the choice of the next candidate model and was originally proposed for mapping QTLs (Quantitative Trait Loci), i.e. selecting relevant categorical genetic covariates that regulate quantitative traits. This methodology leads to better mixing and improves the chain dynamics and effective sample size.
In this work, our main contribution is to build on the DDRJ and extend it to the context where we have numerical or both categorical and numerical covariates. It is also worth mentioning that the search for the next candidate can be done in parallel, can rely on batches of the dataset and can be spread over multiple threads to accelerate the search. Finally, we also combine the most visited models, using Bayesian model averaging, to create a classifier for future individuals, and we compare its performance in terms of misclassification error and area under the receiver operating characteristic curve to our benchmark results in <cit.>, the LASSO <cit.> and random forests <cit.>.
This manuscript is organized as follows: Section <ref> presents the Bayesian model under consideration to jointly select ROIs (quantitative variables) and SNPs (categorical variables). The DDRJ algorithm and the variable selection and prediction procedures are presented in Section <ref>. Section <ref> shows their efficiency on simulated data and a comparison with other selection and prediction methods. Finally, Sections <ref> and <ref> contain the application of the methodology to the Mind Clinical Imaging Consortium (MCIC) dataset and a discussion of the results, and final considerations, respectively.
§ MODELS FOR DICHOTOMOUS TRAITS
Given n independent individuals, let Y= (Y_1, …, Y_n) be the set of binary random variables characterizing their disease status, healthy or diagnosed with schizophrenia. Also consider the covariate matrices X=[X_ip]_n × g and Z=[Z_ik]_n × m containing, respectively, g ROI-based summaries of BOLD intensity and m genetic features (SNP genotypes) for the n subjects.
We rely on probit data augmentation <cit.>, introducing a continuous non-observable latent random variable Y_i^*, normally distributed, and classifying the individual's outcome according to whether its value is above a threshold or not. The variable Y_i^* is viewed as a hidden process that depends on ROIs and SNPs, such that when its value is positive, the patient is classified as schizophrenic, and as healthy otherwise. Assuming the probit model leads to convenient conditional distributions for all other random variables and allows the use of Gibbs sampling to estimate the model's parameters. Alternatively, one could have followed the data augmentation model proposed in <cit.> and used a Bayesian logit model with a Metropolis-Hastings step for updating the parameters.
Under the proposed probit model, the non-observable Y_i^* is normally distributed and defined as
Y_i^* = β_0 + ∑_p ∈ 𝒢β_p X_ip + ∑_k ∈ℳα_k Z_ik + ∑_k ∈ℳδ_k (1-|Z_ik|) + ξ_i, ξ_i iid∼ N(0,1),
where Y_i = 1(Y^*_i > 0), with 1(.) the indicator function, and Z_ik∈{-1,0,1 } for SNPs having genotype aa, aA and AA, respectively. The sets 𝒢 and ℳ contain the indexes of the important ROIs and SNPs, respectively. More specifically, β_p = 0 if p ∉𝒢, α_k =δ_k = 0 if k ∉ℳ, and β_p, α_k, δ_k are non-zero otherwise. Regarding the coefficients, β_0 is the intercept, β_p is the effect of ROI p, while α_k and δ_k account for the additive and dominant effects of SNP k, for every k=1,…,m, respectively.
One could have simply used dummy variables to encode the categorical variables, such as the genotype of the SNPs; however, in biology the genetic interpretation is easier when the SNPs are encoded as above <cit.>.
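To make the encoding and the latent-variable construction concrete, the following minimal Python sketch (illustrative only; the released software accompanying this work is in R) simulates Y^* and Y under Equation (<ref>) with hypothetical coefficients and randomly generated covariates:

# Minimal sketch of the probit data-augmentation model: simulate the latent
# variable Y* and the observed disease status Y for hypothetical coefficients.
import numpy as np

rng = np.random.default_rng(0)
n, g, m = 210, 116, 81                      # subjects, ROIs, SNPs (as in the real data)
X = rng.normal(size=(n, g))                 # ROI-based BOLD summaries
Z = rng.choice([-1, 0, 1], size=(n, m))     # SNP genotypes aa/aA/AA coded as -1/0/1

G, M = [5, 20], [3]                         # illustrative non-null ROIs and SNPs
beta0, beta = 0.2, {5: 1.0, 20: -0.8}       # intercept and ROI effects
alpha, delta = {3: 0.9}, {3: 0.4}           # additive and dominant SNP effects

eta = beta0 + sum(beta[p] * X[:, p] for p in G)
eta = eta + sum(alpha[k] * Z[:, k] + delta[k] * (1 - np.abs(Z[:, k])) for k in M)
y_star = eta + rng.normal(size=n)           # latent variable Y*
y = (y_star > 0).astype(int)                # observed status: 1 = schizophrenia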
The goal of this work is to select, under a Bayesian framework, a set of discriminatory ROIs and SNPs from the sets of g available ROIs and m available SNPs, respectively. We also aim at providing estimates of the coefficients β_0 and β_p, and of the additive and dominant effects α_k and δ_k, for the selected features.
Let us denote the unknown parameters by θ=(γ,K,P) with β^T=(β_0, β_1, …, β_P)^T,
α^T=(α_1, …, α_K)^T, δ^T=(δ_1, …, δ_K)^T, γ=(β^T, α^T, δ^T), K= |ℳ| and P= |𝒢| . The likelihood function for θ is given by
L(θ| Y^*,X,Z) = ∏_i=1^n P(Y_i^*|θ, X,Z) = 1/(√(2π) )^n exp[-1/2∑_i=1^nξ_i^2],
where ξ_i = Y^*_i - β_0 - ∑_p ∈𝒢β_p X_ip - ∑_k ∈ℳα_k Z_ik - ∑_k ∈ℳδ_k (1-|Z_ik|).
We complete the model by assigning an independent prior distribution to each parameter, so that the joint prior distribution is defined by
π(θ)=π(K)π(P) π(β|P)π(α|K)π(δ|K),
where, we assume that,
K ∼ Unif{0,1,…,m},
P ∼ Unif{0,1,…,g}, β∼ N_P+1(0,σ^2_β𝐈_P+1 ), α∼ N_K(0,σ^2_α𝐈_K ), δ∼ N_K(0,σ^2_δ𝐈_K ),
with all hyperparameters σ^2_β, σ^2_α, σ^2_δ fixed, where 𝐈_d represents an identity matrix of dimension d.
The model in Equation (<ref>) is a classical regression model in which we assume Gaussian priors for the coefficients. Hence, all full conditional posterior distributions are easily derived as
β|Y^*,X,Z, α, δ ∼ N(β^*, Γ_1), β^* = Γ_1 [1|X]^T {Y^*- Zα - [1- |Z|]δ} ,
Γ_1 = {1/σ_β^2I_P+1 + [1|X]^T[1|X] }^-1;
α|Y^*,X,Z, β, δ ∼ N(α^*, Γ_2), α^* = Γ_2 Z^T {Y^*- [1|X]β - [1- |Z|]δ} ,
Γ_2 = {1/σ_α^2𝐈_K + Z^TZ}^-1;
δ|Y^*,X,Z, β, α ∼ N(δ^*, Γ_3), δ^* = Γ_3[1-|Z|]^T{Y^*- [1|X]β - Zα},
Γ_3 = {1/σ_δ^2𝐈_K + [1-|Z|]^T[1-|Z|] }^-1;
Y^*_i|θ, Y_i=1, X_i,Z_i ∼ Nt([1|X_i]β + Z_iα + (1-|Z_i|)δ, 1 ,left=0) and
Y^*_i|θ, Y_i=0,X_i,Z_i ∼ Nt([1|X_i]β + Z_iα + (1-|Z_i|)δ, 1 ,right=0),
with Nt denoting the truncated normal distribution. [1|X] is a matrix of dimension n×(P+1) having ones in the first column, and X_i and Z_i are the ith rows of the design matrices X and Z, respectively, restricted to the columns corresponding to the important features.
Since the full conditional posteriors are available in closed form, we use a Gibbs sampling scheme for the intra-model moves, updating the parameters iteratively given K and P. In the next section, we describe the data driven reversible jump algorithm (DDRJ) used to propose the inter-model moves efficiently, where the candidate model consists of the previous model with the inclusion (birth) or removal (death) of one covariate.
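As an illustration of these updates, the following Python sketch performs one Gibbs sweep over the full conditionals above; it assumes that the design matrices already contain only the currently selected columns, and all variable names are ours rather than those of the accompanying R code.

# Sketch of one Gibbs sweep, assuming Xg = [1|X] (n x (P+1), intercept column plus
# selected ROIs), Zm (n x K, selected SNPs), D = 1 - |Zm| (dominance design),
# y (0/1 labels) and the current beta, alpha, delta, y_star.
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweep(y, Xg, Zm, D, beta, alpha, delta, y_star,
                s2_beta=25.0, s2_alpha=25.0, s2_delta=25.0, rng=None):
    rng = rng or np.random.default_rng()

    def draw_coef(A, resid, s2):
        # Conditional N(mean, Gamma) with Gamma = (I/s2 + A'A)^-1, mean = Gamma A' resid
        Gamma = np.linalg.inv(np.eye(A.shape[1]) / s2 + A.T @ A)
        mean = Gamma @ A.T @ resid
        return rng.multivariate_normal(mean, Gamma)

    beta = draw_coef(Xg, y_star - Zm @ alpha - D @ delta, s2_beta)
    alpha = draw_coef(Zm, y_star - Xg @ beta - D @ delta, s2_alpha)
    delta = draw_coef(D, y_star - Xg @ beta - Zm @ alpha, s2_delta)

    mu = Xg @ beta + Zm @ alpha + D @ delta
    # Truncated-normal update of the latent Y*: positive if y = 1, negative if y = 0
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    y_star = mu + truncnorm.rvs(lo, hi, size=len(y))
    return beta, alpha, delta, y_star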
§ DATA DRIVEN REVERSIBLE JUMP FOR UPDATING K AND P
Despite its generality, RJ's performance relies on the probability of visiting the next model and on the proposal distribution used to obtain the next set of parameters within each model. Indeed, bad proposals will usually lead to a high rejection rate and low mixing, and consequently more iterations are needed for convergence.
One way to understand this is to consider a move from a set of parameters having high density in a bad model to a set of parameters of low density in a good model. Such a move has a high probability of being rejected, and if the proposals are poor, moves of this kind may be frequent and rarely accepted.
The main idea is to include in, or exclude from, the current model a single covariate in a more efficient way. Thus, first, we decide whether to include a new covariate (birth) or to exclude (death) one that is present in the current model. Obviously, when there is no covariate in the model, i.e. a model with an intercept only, we opt for a birth move with probability 1 and, at the other extreme, when the model is saturated with all possible covariates (m+g), we opt for a death move with probability 1. After that, we define a measure, roughly understood as a criterion, to choose the next candidate, i.e. the covariate that should be included in or excluded from the current model. After obtaining the candidate model, we sample its set of parameters, also using a data-informed mechanism, and test its acceptance. A succinct description of the full algorithm can be found in Section 1 of the supplementary material.
As the method aims at selecting quantitative (ROIs) and categorical (SNPs) variables in an integrative manner, i.e. jointly, there are essentially three alternatives to perform this joint variable selection. As a first option, we could select all relevant ROIs and then select SNPs, i.e. run the method considering only ROIs, and then run it for selecting SNPs conditional on the selected ROIs. As a second option, we could select all relevant SNPs first and then select ROIs conditional on these selected SNPs. The last option is to randomly alternate between selecting quantitative and categorical variables. Options 1 and 2 are special cases of the last option, so we focus on describing how to carry out the latter. However, we highlight that options 1 and 2 may be computationally more efficient and show better convergence when dealing with very high-dimensional data.
More importantly, instead of using a uniform distribution to choose which SNP or ROI will be included in or excluded from the model, we prioritize the covariates that appear most (or least) associated with the trait conditional on the current model. To measure this association we use different statistics for categorical and continuous covariates: for the ROIs, we use the Pearson correlation coefficient between each covariate and the residuals of the current model, while for the SNPs we use the Kruskal-Wallis (KW) statistic between each variable and the residuals of the current model. One could choose a different criterion to measure the quality of a candidate, and our experiments indicate that the efficiency of the DDRJ also depends on this choice. The KW measure was used by <cit.> for QTL mapping with categorical covariates; our innovation here is to use the KW statistic and the Pearson correlation for jointly selecting categorical and continuous covariates. At each stage of the process, we randomly alternate between ROIs and SNPs in the following manner. With probability s=g/(m+g) we work on ROIs, and with probability 1-s=m/(m+g) we work on SNPs. This step allows us to jump into the ROI or SNP space and then work on them separately. This is fair if m ≈ g, since then s ≈ 0.5. However, if one dimension dominates the other, it may be better to select the variables separately, or simply to design an informed probability that favors the desired space. If the ROI space has been selected, we apply the method described in Section <ref> conditional on the SNPs and ROIs already selected up to this stage. If the SNP space has been selected, we apply the method described in Section <ref> conditional on the ROIs and SNPs already selected at this point.
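As a concrete sketch of this scoring step (in Python, with hypothetical variable names), the birth-move selection probabilities for the remaining ROIs and SNPs can be computed from the residuals of the current model as follows; the corresponding acceptance probabilities are detailed in the next two subsections.

# Sketch of the data-driven birth proposal: score each remaining covariate
# against the current residuals, using |Pearson correlation| for ROIs and the
# Kruskal-Wallis statistic for SNPs, then normalise into probabilities.
import numpy as np
from scipy.stats import kruskal

def roi_birth_probs(X_rem, resid):
    # X_rem holds the ROI columns not currently in the model
    scores = np.array([abs(np.corrcoef(X_rem[:, j], resid)[0, 1])
                       for j in range(X_rem.shape[1])])
    return scores / scores.sum()           # p_bj

def snp_birth_probs(Z_rem, resid):
    # Z_rem holds the SNP columns (coded -1/0/1) not currently in the model
    scores = []
    for k in range(Z_rem.shape[1]):
        groups = [resid[Z_rem[:, k] == lvl] for lvl in (-1, 0, 1)]
        groups = [grp for grp in groups if len(grp) > 0]
        stat, _ = kruskal(*groups)          # KW statistic used as a score, not a test
        scores.append(stat)
    scores = np.array(scores)
    return scores / scores.sum()            # p_bk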
§.§ What if we jump into ROIs space?
Suppose that the current model contains P= |𝒢| ROIs and K =|ℳ| SNPs, with parameters θ=(β^T,α^T, δ^T) and we decide to jump to ROIs space. If P=0 then a birth (b) movement is proposed with probability p(b|P=0)=1, when 0<P<g, a birth or death movement is proposed with probability p(b|P)=p(d|P)=1/2 and finally if P=g then a death (d) movement is proposed with probability p(d|P=g)=1.
* Birth:
Suppose that a birth move has been chosen. We propose to choose the next candidate from the remaining ROIs in X_-𝒢 = { X_p : p ∉𝒢} with probability p_bj = |cor(ξ, X_j)|/∑_X_p ∈X_-𝒢|cor( ξ, X_p)|, where cor(ξ, X_p) is the correlation between a candidate ROI X_p and the residuals ξ from the current model in Equation (<ref>). Instead of choosing uniformly from the set of remaining ROIs, the main idea of our data driven proposal is to favor the ROIs that are most correlated with the residuals of the current model. To speed up computation, one could use only part of the data to compute p_bj.
After selecting a ROI X^b, with index b to be added to 𝒢, we sample θ^b from the conditional posterior distributions of β^b, α^b and δ^b and test its acceptance with probability ψ^b= min(1,A^b), where
A^b= L( θ^b|X_𝒢∪{b}, Z_ℳ,Y^* )π(θ^b)q(θ|θ^b) /L(θ|X_𝒢,Z_ℳ,Y^* )π(θ) q(θ^b|θ),
q(θ^b|θ)= p(b|P) p_bjπ(θ^b|X_𝒢∪{b}, Z_ℳ,Y^*), q(θ|θ^b )= p(d|P+1)p_djπ(θ| X_𝒢, Z_ℳ, Y^*), X_𝒢∪{b} of dimension n × (P+1) is the updated design matrix including the new ROI X^b, and p_dj, the probability of death (exclusion), is defined in the death step.
The proposal distribution q(θ^b|θ)= p(b|P) p_bjπ(θ^b|X_𝒢∪{b}, Z_ℳ,Y^*) is a simple application of conditional probabilities, since the new model and parameters are obtained from a sequence of three conditional steps. First, we choose a birth move with probability p(b|P); then we choose the variable to be included, obtaining the new model, with probability p_bj; finally, we sample the new parameters from the full conditional with density π(θ^b|X_𝒢∪{b}, Z_ℳ,Y^*). This reasoning applies regardless of whether the movement is a birth or a death.
* Death:
If, on the other hand, a death move has been selected, a possible way of choosing the candidate ROI to be deleted is to compare the sizes of the coefficients after normalizing the design matrix. Thus, we propose to select a ROI to be excluded with probability p_dj= (1/|β_j|) /∑_p ∈𝒢(1/|β_p|). The larger the coefficient of a given ROI, the lower its probability of being deleted from the current model.
After selecting a ROI X^d with index d to be deleted from 𝒢, we sample θ^d from the conditional posterior distributions of β^d, α^d and δ^d and test its acceptance with probability ψ^d= min(1,A^d), where
A^d= L(θ^d|X_𝒢∖{d},Z_ℳ,Y^* )π(θ^d)q(θ|θ^d) /L(θ|X_𝒢, Z_ℳ,Y^* )π(θ)q(θ^d|θ) ,
q(θ^d|θ)= p(d|P)p_djπ(θ^d|Y^*,X_𝒢∖{d}, Z_ℳ), q(θ|θ^d )= p(b|P-1)p_bjπ(θ|Y^*,X_𝒢,Z_ℳ) and X_𝒢∖{d} of dimension n × (P-1) is the updated design matrix without the deleted ROI X^d.
§.§ What if we jump into SNPs space?
Under the same setting, suppose that the current model contains P= |𝒢| ROIs and K =|ℳ| SNPs, with parameters θ=(β^T,α^T, δ^T) and we decide to jump into SNPs space. In the same way as we did for ROIs, if K=0 a birth (b) movement is proposed with probability p(b|K=0)=1, if 0<K<m then a birth or death movement is proposed with probability p(b|K)=p(d|K)=1/2 and when K=m then a death (d) movement is proposed with probability p(d|K=m)=1.
* Birth:
The choice of the next SNP to be included is guided by its association with the residuals ξ from the model in Equation (<ref>). Each SNP Z_k is a factor with 3 levels, so its association with the current residuals can be measured using the Kruskal-Wallis (KW) statistic. Therefore Z_k is selected from the set of remaining SNPs Z_-ℳ = { Z_k : k ∉ℳ} with probability p_bk=KW(ξ, Z_k) /∑_ Z_k∈Z_-ℳKW(ξ, Z_k). It is worth mentioning that we are not testing hypotheses but only using the test statistic as a measure of the level of association.
After selecting a SNP Z^b with index b to be added to ℳ, we sample θ^b from the conditional posterior distributions of α^b, δ^b and β^b and test its acceptance with probability ψ^b= min(1,A^b), where
A^b= L( θ^b|X_𝒢, Z_ℳ∪{b},Y^* )π(θ^b)q(θ|θ^b) /L(θ|X_𝒢,Z_ℳ,Y^* )π(θ) q(θ^b|θ),
q(θ^b|θ)= p(b|K) p_bkπ(θ^b|X_𝒢, Z_ℳ∪{b},Y^*), q(θ|θ^b )= p(d|K+1)p_dkπ(θ| X_𝒢, Z_ℳ, Y^*), Z_ℳ∪{b} of dimension n × (K+1) is the updated design matrix with the new SNP Z^b and p_dk the probability of death (exclusion) defined in the death step.
* Death: As Z_k only takes values in {-1,0,1}, the absolute values of the coefficients α_k and δ_k in Equation (<ref>) give a measure of its importance. We propose to select a SNP to be excluded from the current model with probability p_dk= (1/(|α_k|+|δ_k|))/∑_k ∈ℳ(1/(|α_k|+|δ_k|)). The larger the effect of the SNP, the lower its probability of being deleted. A small sketch of these death-move selection probabilities is given at the end of this subsection.
After selecting a SNP Z^d with index d to be excluded from ℳ, we sample θ^d from conditional posterior distributions for α^d, δ^d and β^d and test its acceptance with probability ψ^d= min(1,A^d), where
A^d= L(θ^d|X_𝒢,Z_ℳ∖{d},Y^* )π(θ^d)q(θ|θ^d) /L(θ|X_𝒢, Z_ℳ,Y^* )π(θ)q(θ^d|θ) ,
q(θ^d|θ)= p(d|K)p_dkπ(θ^d|Y^*,X_𝒢, Z_ℳ∖{d}), q(θ|θ^d )= p(b|K-1)p_bkπ(θ|Y^*,X_𝒢,Z_ℳ), and Z_ℳ∖{d} of dimension n × (K-1) is the updated design matrix without the deleted SNP Z^d.
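For completeness, the corresponding death-move selection probabilities can be sketched as follows (hypothetical variable names; covariates with small absolute effects are the most likely to be proposed for removal):

# Sketch of the death-move proposal probabilities for ROIs and SNPs.
import numpy as np

def roi_death_probs(beta_sel):
    # beta_sel: coefficients of the currently selected ROIs (intercept excluded)
    w = 1.0 / np.abs(beta_sel)
    return w / w.sum()                     # p_dj

def snp_death_probs(alpha_sel, delta_sel):
    # alpha_sel, delta_sel: additive and dominant effects of the selected SNPs
    w = 1.0 / (np.abs(alpha_sel) + np.abs(delta_sel))
    return w / w.sum()                     # p_dk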
§.§ Variable selection and prediction procedures
As stated at the beginning of the manuscript, our goal is to use the proposed method for variable selection and to carry out prediction for new individuals as well. For variable selection, the full dataset is used for training, which gives a larger sample size to assess the inferential performance of the method. We select as relevant only those covariates whose marginal posterior probability of inclusion (mppi), estimated as their relative frequency of being present in the visited models, is above a threshold (0.5, for instance). We use a 5-fold cross-validation approach to assess the model's predictive performance.
To predict the disease status for a new individual having ROIs and SNPs given by X^new and Z^new, first we need to predict its non-observable variable Y^*_new via a Bayesian Model Averaging as
Ŷ^*_new= ∑_t(β̂_0^t + ∑_p ∈𝒢^tβ̂_p^t X_p^new + ∑_k ∈ℳ^tα̂_k^t Z_k^new + ∑_k ∈ℳ^tδ̂_k^t (1-|Z_k^new|) )P(M_t|Y),
where the index t represents each of the models M_t visited during the MCMC iterations, the parameter estimates for each one are set to their posterior means and P(M_t|Y) is the marginal posterior probability of model M_t. Then, the posterior predictive probability of disease for the new individual is computed as P(Y_new=1|Ω)= Φ(Ŷ^*_new), where Φ(.) represents the standard normal cumulative distribution function and Ω denotes all parameters and data.
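A minimal Python sketch of this model-averaged prediction, with a hypothetical in-memory representation of the visited models (the released software is in R), is:

# Sketch of the Bayesian-model-averaging prediction for a new subject.
# `models` is a hypothetical list of tuples (posterior_weight, G, M, beta0,
# beta, alpha, delta) for the models visited by the chain, with the parameters
# set to their posterior means; x_new, z_new are the new subject's covariates.
from scipy.stats import norm

def predict_risk(models, x_new, z_new):
    y_star = 0.0
    for weight, G, M, beta0, beta, alpha, delta in models:
        eta = beta0 + sum(beta[p] * x_new[p] for p in G)
        eta += sum(alpha[k] * z_new[k] + delta[k] * (1 - abs(z_new[k])) for k in M)
        y_star += weight * eta              # model-averaged latent prediction
    return norm.cdf(y_star)                 # posterior predictive P(Y_new = 1)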
From these posterior predictive probabilities and the predicted latent variables, we can compute the AUC (area under the ROC curve) and the MCE (misclassification error) to assess the predictive performance of the method.
§ SIMULATION STUDY
This section summarizes a simulation study to demonstrate the efficiency of the proposed method for performing variable selection using DDRJ and for making predictions for future individuals. For each scenario, 35,000 MCMC iterations were run with a burn-in period of 5,000 iterations, keeping one sample in every ten. To assess convergence, monitored through the log posterior, we ran two chains with randomly chosen initial points. This section contains two types of studies: in the first, we test the proposed method on a simulated dataset that mimics the real dataset to be analyzed, with the same number of ROIs and SNPs; in the second, we increase the number of ROIs and SNPs to verify the algorithm's performance on high-dimensional data. The reported results apply the method for jointly selecting ROIs and SNPs. Furthermore, Section 2 of the supplementary material contains additional results on simulated data where ROIs and SNPs are selected separately.
We also use the posterior probability of each model to compare DDRJ to the traditional reversible jump with uniform proposals between models (RJ), expecting DDRJ to assign a higher posterior probability to the true model than RJ. Finally, we compare DDRJ to the LASSO and random forest (RF) in terms of MCE and AUC using 5-fold cross-validation. All the results were obtained using the R software <cit.> on an Intel(R) Core(TM) i7-8565U CPU at 1.80 GHz, with the KW statistic coded in C++ to accelerate the computation of the proposals.
For the joint selection of ROIs and SNPs, the first dataset simulates g=116 ROIs from a multivariate normal distribution with empirical mean and covariance matrix retrieved from the real ROI design matrix, and m=81 SNPs from independent discrete distributions with probabilities retrieved from the real SNP dataset, for n=210 individuals. The second group of datasets is simulated from a standard multivariate normal distribution and independent discrete distributions with an increased number of ROIs (300, 500, 1000) and SNPs (300, 500, 1000), respectively. A very small number of ROIs and SNPs were chosen to have non-null effects, summarized in Table <ref>, so as to maintain the observed proportion of healthy and schizophrenia-diagnosed individuals. The disease status was generated using the probit model in Equation (<ref>) with prior variances set to σ_β^2=σ_α^2=σ_δ^2 = 25. As the number of candidate variables under consideration grows (600, 1000, 2000) for joint selection, we observed that a two-step procedure, with a separate pre-selection phase using a low threshold for the mppi, provides better convergence. More specifically, in the first step, we run our method separately to pre-select ROIs and SNPs using a low threshold (0.1) for the mppi. This strategy reduces the number of covariates to approximately 10-15%, on average. The selected variables are then used together in the second step for joint selection and prediction.
In summary, DDRJ performed well in all scenarios, selecting all the relevant variables and providing good estimates with small standard errors, summarized in Table <ref>. Furthermore, our method usually selects the true model with a higher posterior probability than the RJ with uniform proposals <cit.>, as shown in Table <ref>, with the differences probably due to the smarter way of proposing the next model to be visited. Finally, regarding predictive performance, the MCE and AUC computed from the Bayesian model averaging in Table <ref> show that DDRJ generally outperforms the random forest <cit.> and is comparable to the LASSO <cit.>, another well-established method for variable selection.
§ MCIC DATA ANALYSIS
The available dataset was collected by the MCIC <cit.> as part of an effort to better understand mental disorders. It contains imaging data on activation patterns measured with fMRI during a sensorimotor task, together with the genotypes of multiple SNPs previously implicated in schizophrenia, for 118 healthy controls and 92 individuals affected by the disorder. None of the individuals has a history of substance abuse, and all are free of any other medical, neurological or psychiatric illnesses. Following the same approach as <cit.> and <cit.>, 5-fold cross-validation, with 94 healthy controls and 74 patients in the training set and 24 healthy controls and 18 patients in the validation set, is used for the predictive performance analysis.
The goal of the MCIC study, a joint effort of four research teams from Boston, Iowa, Minnesota and New Mexico, was to identify regions of interest (ROIs) in the brain with activation patterns that discriminate between cases and controls, and to relate them to a relevant set of SNPs able to explain these variations, which is clearly a model selection problem. The data were preprocessed in SPM5 (<http://www.fil.ion.ucl.ac.uk/spm>): realigned to correct for the individuals' movements, spatially normalized to correct for anatomic variability, and spatially smoothed to improve the signal-to-noise ratio. For each of the 116 ROIs, the activation level was summarized as the median of the statistical parametric map values <cit.> for that region.
The genetic information in the available dataset is given by 81 SNPs already known to be related to schizophrenia, retrieved from the Schizophrenia Research Forum (<http://www.schizophreniaforum.org/>). In the original dataset, the SNP information was coded as the number of minor alleles for the genotypes aa, aA and AA, respectively. More details of the experimental study and preprocessing can be found in <cit.> and <cit.>.
For each scenario, 35,000 MCMC iterations were run with a burn-in period of 5,000 iterations, keeping one sample in every ten. The prior variances are set to σ_α^2=σ_β^2=σ_δ^2= 25 to ensure that the prior is neither too informative nor too vague. We ran three independent analyses: two that consider only ROIs or only SNPs as covariates, and a third for joint selection.
When considering ROIs as the only available covariates, the selected variables are ROIs 61 and 115, with mppi 0.837 and 0.932, respectively, while ROI 35, with mppi 0.416, is also suggested for further investigation, as shown in Table <ref>.
ROIs 35 (left posterior cingulate region) and 61 (left inferior parietal region) were also selected by <cit.> and <cit.> and are known to be related to schizophrenia. In particular, ROI 115 (posterior inferior vermis, lobule IX) is a new finding that could narrow future research on lobules I to X. <cit.> found one additional region, ROI 57, which has not been selected here but was present in the top 3 models. A more cautious approach could include all the covariates that appear in the top 3 models, considering ROIs 35, 57, 61, 96 and 115.
Considering only the SNPs, as shown in Table <ref>, the selected variables are SNPs 22 and 61, with mppi 0.96 and 0.72, respectively. Although its mppi of 0.34 is below 0.5, we also suggest SNP 32 for further investigation. SNP 22 (rs3737597) is located in the gene DISC1 (chromosome 1), known to be strongly associated with schizophrenia, and was also found by <cit.> and <cit.>, who additionally found SNPs 10 and 38 to be discriminatory.
For the joint selection of ROIs and SNPs, again ROIs 35, 61 and 115 and SNP 22 are identified as discriminatory variables with mppi 0.291, 0.794, 0.968 and 0.955, respectively. In Table <ref>, we summarize the mppi, estimates and standard errors for each coefficient. Although the ROI 35 presents an mppi of less than 0.50 in the joint model, we keep it in the fitted model.
Regarding prediction, evaluated using a 5-fold cross-validation strategy, Table <ref> shows that DDRJ combined with Bayesian model averaging performs well compared to the results from <cit.>, the LASSO and the random forest, even though it is not a method focused primarily on prediction.
§ DISCUSSION
In this work, we have proposed a data driven reversible jump for variable selection in a Bayesian probit model, more specifically for identifying relevant variables that impact and regulate dichotomous traits in genetics, for which thousands of genetic, environmental and imaging covariates are available nowadays. The proposed method requires neither auxiliary indicator variables for each available covariate, updated at every MCMC iteration, nor the estimation of all possible models, which makes the algorithm scalable to high-dimensional data when a huge number of covariates is considered.
Our goals, selecting ROIs and SNPs and assessing the predictive risk for schizophrenia based on fMRI and SNP information, have been reached. Most of the selected variables (ROIs 35, 57, 61 and 115 and SNP 22) are in accordance with results from other authors and are known to be related to the disease, while some new findings (ROI 96 and SNPs 32 and 61) have been suggested and could be the subject of deeper research. Compared to other predictive methodologies such as the traditional LASSO and random forest, the DDRJ also performs well in terms of predictive accuracy when predictions are made using Bayesian model averaging, even though prediction is not usually its main focus.
From a methodological perspective, we noticed that the measure (KW or Pearson correlation) used inside the DDRJ to propose the candidate model can improve or degrade the efficiency of the algorithm, as these measures mainly capture linear associations. One could therefore use a kernel-based measure that accounts for non-linear relations when proposing the new feature.
Regarding extensions, another direction of study would be to test other priors, such as the shrinkage priors mentioned earlier, to improve the current methodology. As we have also mentioned, a distance matrix between ROIs is available and has not been used in this work. This information could be included either as part of the DDRJ to make better jumps, or through a Markov random field type of prior for the ROIs, again applying the DDRJ to perform variable selection and prediction for future subjects. Another extension worth investigating is to perform clustering while selecting discriminating ROIs and SNPs; again, the DDRJ could be used to select the number of clusters and estimate the parameters. Software in the form of R code, together with the datasets, is available at github.com/hansamos/DDRJ.
Conflict of Interest: None declared.
|
http://arxiv.org/abs/2307.02593v1
|
20230705184217
|
Superpositions of thermalisation states in relativistic quantum field theory
|
[
"Joshua Foo",
"Magdalena Zych"
] |
quant-ph
|
[
"quant-ph",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.01269v1
|
20230703180111
|
Properties of the Line-of-Sight Velocity Field in the Hot and X-ray Emitting Circumgalactic Medium of Nearby Simulated Disk Galaxies
|
[
"J. A. ZuHone",
"G. Schellenberger",
"A. Ogorzalek",
"B. D. Oppenheimer",
"J. Stern",
"A. Bogdan",
"N. Truong",
"M. Markevitch",
"A. Pillepich",
"D. Nelson",
"J. N. Burchett",
"I. Khabibullin",
"C. A. Kilbourne",
"R. P. Kraft",
"P. E. J. Nulsen",
"S. Veilleux",
"M. Vogelsberger",
"Q. D. Wang",
"I. Zhuravleva"
] |
astro-ph.GA
|
[
"astro-ph.GA",
"astro-ph.HE"
] |
Corresponding author: John A. ZuHone, john.zuhone@cfa.harvard.edu

John A. ZuHone (ORCID 0000-0003-3175-2347): Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
Gerrit Schellenberger (ORCID 0000-0002-4962-0740): Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
Anna Ogorzalek (ORCID 0000-0003-4504-2557): NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA; Department of Astronomy, University of Maryland, College Park, MD 20742, USA
Benjamin D. Oppenheimer (ORCID 0000-0002-3391-2116): University of Colorado, Boulder, 2000 Colorado Ave, Boulder, CO 80305, USA
Jonathan Stern (ORCID 0000-0002-7541-9565): School of Physics & Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
Ákos Bogdán (ORCID 0000-0003-0573-7733): Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
Nhut Truong (ORCID 0000-0003-4983-0462): NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA; Center for Space Sciences and Technology, University of Maryland, Baltimore County, MD 21250, USA; Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
Maxim Markevitch (ORCID 0000-0003-0144-4052): NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
Annalisa Pillepich (ORCID 0000-0003-1065-9274): Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
Dylan Nelson (ORCID 0000-0001-8421-5890): Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik, 69120 Heidelberg, Germany
Joseph N. Burchett (ORCID 0000-0002-1979-2197): New Mexico State University, Department of Astronomy, Las Cruces, NM 88001, USA
Ildar Khabibullin (ORCID 0000-0003-3701-5882): Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr. 1, 81679 München, Germany; Space Research Institute (IKI), Profsoyuznaya 84/32, Moscow 117997, Russia; Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany
Caroline A. Kilbourne (ORCID 0000-0001-9464-4103): NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
Ralph P. Kraft (ORCID 0000-0002-0765-0511): Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
Paul E. J. Nulsen (ORCID 0000-0003-0297-4493): Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA; ICRAR, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009, Australia
Sylvain Veilleux (ORCID 0000-0002-3158-6820): Department of Astronomy, University of Maryland, College Park, MD 20742, USA
Mark Vogelsberger (ORCID 0000-0001-8593-7692): Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Q. Daniel Wang (ORCID 0000-0002-9279-4041): University of Massachusetts Amherst, Amherst, MA 01003, USA
Irina Zhuravleva (ORCID 0000-0001-7630-8085): Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL 60637, USA
The hot, X-ray-emitting phase of the circumgalactic medium in galaxies is believed to be the reservoir of baryons from which gas flows onto the central galaxy and into which feedback from AGN and stars inject mass, momentum, energy, and metals. These effects shape the velocity fields of the hot gas, which can be observed by X-ray IFUs via the Doppler shifting and broadening of emission lines. In this work, we analyze the gas kinematics of the hot circumgalactic medium of Milky Way-mass disk galaxies from the TNG50 simulation, and produce synthetic observations to determine how future instruments can probe this velocity structure. We find that the hot phase is often characterized by outflows outward from the disk driven by feedback processes, radial inflows near the galactic plane, and rotation, though in other cases the velocity field is more disorganized and turbulent. With a spectral resolution of ∼1 eV, fast and hot outflows (∼200-500 km s^-1) can be measured using both line shifts and widths, depending on the orientation of the galaxy on the sky. The rotation velocity of the hot phase (∼100-200 km s^-1) can be measured using line shifts in edge-on galaxies, and is slower than that of colder gas phases but similar to stellar rotation velocities. By contrast, the slow inflows (∼50-100 km s^-1) are difficult to measure in projection with these other components. We find that the velocity measured is sensitive to which emission lines are used. Measuring these flows will help constrain theories of how the gas in these galaxies forms and evolves.
§ INTRODUCTION
The circumgalactic medium (CGM) is the gas within the dark matter (DM) halos of galaxies, at distances from the center in between the disc/stellar halo/interstellar medium and the virial radius, and is believed to be the reservoir of low-density gas populated by inflows from the intergalactic medium (IGM) between galactic halos, which can cool and condense into the galactic halo <cit.>. As star formation, evolution, and death enrich and transform the ISM gas, feedback from supernovae and active galactic nuclei (AGN) inject mass, momentum, energy, and metals back into the CGM <cit.>, which alters its thermodynamic, kinematic, and chemical properties. Ultimately, these processes regulate the growth and quenching of galaxies, making the CGM one of the primary drivers of galaxy evolution as a whole. However, despite representing a large mass and volume of material, the low density of the CGM makes it difficult to observe in emission.
The CGM is multiphase, and the cool (T ≲ 10^5 K) and warm (10^5 K ≲ T < 10^6 K) phases of the CGM in z < 1 galaxies can be probed via emission and absorption lines of hydrogen and metals in the UV <cit.>. Observations of the CGM in the UV absorption lines of background quasar spectra have been performed by Hubble's Cosmic Origins Spectrograph (COS) <cit.>, and have shown that most of the baryons associated with galaxies are likely in the CGM <cit.> and that most of the metals released by stars are in the CGM as well <cit.>.
The hot phase (T ≳ 10^6 K) of the CGM, which is expected to be dominant in galaxies with halos more massive than ∼ 10^12 M_⊙, can only be probed via X-ray observations. In emission, the brightest X-ray signatures of the CGM are to be found in the lines of metal ions such as O VII, O VIII, Fe XVII, and Ne IX, all of which have rest-frame energies in the 0.5-1.0 keV (∼12-25 Å) band. For galaxies that are nearby and thus the easiest to detect and study, their redshifted emission lines are in the same band as the Milky Way's (MW) own CGM, which shines brightly in the same emission lines <cit.>. The CGM must also be distinguished from other sources of X-ray emission in galaxies, such as the hot ISM, AGN, and X-ray binaries. Some detections in emission of individual galaxies have been made <cit.>, and stacking analyses of galaxies from surveys can reveal the general properties of the X-ray emitting CGM <cit.>. These studies have been able to constrain the amount of hot gas present in the CGM, as well as the average temperature to a certain extent.
The main obstacle to more detailed studies of the hot CGM in X-ray emission is a lack of spectral resolution. The CCD imaging arrays aboard previous and current X-ray telescopes, including Chandra, XMM-Newton, Suzaku, and eROSITA, have spectral
resolutions of ∼100 eV (R ∼ 10 at 1 keV), which is far too coarse to resolve individual emission lines. For these instruments, the lines from the CGM not only blend with each other, but they blend into and are overwhelmed by the lines in the MW foreground emission. The diffraction gratings on these observatories have the requisite spectral resolution, but for extended sources such as the CGM, the dispersed spectrum on the CCDs is convolved with the spatial distribution of the emission from the source, smearing out spectral features. Gratings observations also lack the effective area required to detect the faint emission from the CGM.
In order to map the CGM at the required spectral resolutions, we require an integral field unit
(IFU) instrument in the X-ray band, which is a capability that can be provided by a
microcalorimeter. Microcalorimeters detect X-ray photons and measure their energies by sensing the
heat generated when they are absorbed and thermalized. To achieve the energy resolutions of
∼1-5 eV required for line emission studies in X-rays, microcalorimeters must be kept at
extremely low temperatures. The capability of microcalorimeters to resolve detailed thermodynamic,
kinematic, and chemical properties of hot space plasmas was demonstrated most recently by the
observations of the Perseus cluster of galaxies taken by the Soft X-ray Spectrometer (SXS)
microcalorimeter on board the Hitomi spacecraft
<cit.>. Unfortunately, the
Hitomi spacecraft was lost shortly after this observation, but will be followed up by the
launch of an essentially identical instrument on the XRISM spacecraft <cit.>.
Other planned and proposed microcalorimeter instruments include the X-ray Integral Field Unit
(X-IFU) on Athena <cit.>, the Lynx X-ray Microcalorimeter (LXM) on
Lynx <cit.>, Hot Universe Baryon Surveyor (HUBS) <cit.>, and
Line Emission Mapper (LEM) <cit.>. Of these, HUBS and LEM
have the large field of view necessary for studying the CGM (widths of 1 degree and
0.5 degree, respectively), so that the hot gas of the entire galaxy could be imaged in a single
pointing in the case of nearby systems. The proposed angular resolution of ∼10 arcseconds for
LEM is necessary to resolve the emission from bright background AGN point sources which can
contaminate the CGM signal.
The same high spectral resolution of X-ray IFUs that will enable more detailed study of the hot CGM's thermodynamic and chemical properties using emission lines will also make it possible to measure the velocity of the hot gas via the Doppler shifting and broadening of these same lines. Determining the kinematic properties of the hot CGM is essential to understanding its physics. Measuring its bulk (or mean) velocities via line shifts can help determine the nature of feedback from AGN and supernovae, as well as determining if there is any significant rotation in the hot phase and how it compares to other phases in the gas, as well as the rotation of the stellar disk. Line broadening measurements can probe gas turbulence, but may also reveal a complex of bulk flows at different velocities projected along a common sight line <cit.>. To make predictions for future observations of the velocity field of the hot CGM, we can use hydrodynamical simulations of the CGM in galaxies in the cosmological context that also have models for feedback from AGN and stars.
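As a rough illustration of how such measurements translate into velocities, the following Python sketch fits a single Gaussian to a spectrally resolved emission line and converts the fitted centroid shift and width into a line-of-sight bulk velocity and velocity dispersion; the choice of line, redshift, and array names here are assumptions for the example rather than values taken from this work.

# Minimal sketch: fit a Gaussian to a background-subtracted spectrum around a
# single emission line and convert the centroid shift and width to velocities.
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.99792458e5          # speed of light in km/s
E_REST = 0.6536               # e.g. O VIII Ly-alpha rest energy in keV (approximate)
Z_SRC = 0.01                  # assumed cosmological redshift of the galaxy

def gaussian(E, amp, mu, sigma, bkg):
    return amp * np.exp(-0.5 * ((E - mu) / sigma) ** 2) + bkg

def line_velocity(energy, counts):
    # energy (keV) and counts are hypothetical arrays covering the line
    p0 = [counts.max(), energy[np.argmax(counts)], 1e-3, np.median(counts)]
    (amp, mu, sigma, bkg), _ = curve_fit(gaussian, energy, counts, p0=p0)
    e_expected = E_REST / (1.0 + Z_SRC)               # centroid with no peculiar motion
    v_bulk = C_KMS * (e_expected - mu) / e_expected   # > 0 for redshifted (receding) gas
    v_disp = C_KMS * sigma / e_expected               # thermal + turbulent + bulk widths
    return v_bulk, v_disp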
We have almost no observational constraints on the velocity of the hot CGM. Tangential motion of the hot phase of the MW CGM has been observed by <cit.> using O VII absorption line measurements against bright background AGNs with XMM-Newton, and it was found to be comparable to the rotation velocity of the stellar disk. Simulations of the hot CGM indicate the presence of a complex combination of motions: rotation/tangential motions, turbulence, outflows, and inflows. <cit.> and <cit.> have shown that the hot CGM of low-z galaxies is primarily supported against gravity by tangential motions in the inner regions (within ∼50 kpc), while in high-z galaxies the hot gas is primarily outflowing. These simulations also found that these tangential motions are primarily in the form of coherent rotation for the cold gas, while for the hot gas there is a combination of coherent rotation and uncorrelated motions. <cit.> showed that the inner hot CGM of disk galaxies from the TNG100 simulation has rotation similar to the stars and the ISM for galaxies with high angular momentum. <cit.> found that the inner hot CGM in disk galaxies from the FIRE simulation is largely supported by thermal pressure with a slow inflow component. The rotation velocity increases inward, reaching values comparable to stars at the disk edge, as in the idealized hot inflow solution described in <cit.>. Given the lack of observational constraints, the results of simulations in this area depend strongly on the specific implementations of the underlying astrophysical processes, especially feedback. This motivates the need for future X-ray observations to confront the simulations.
Some investigations of this type have already been carried out or are in progress. <cit.> investigated the possibility that resonant scattering of O VIIr emission line photons could boost the CGM signal from this line to be significantly brighter than the intrinsic emission alone, using galaxies from the Illustris TNG50-1 (hereafter TNG50) simulations. <cit.> showed, using mock LEM observations of galaxies from the Magneticum simulations, that O VII and O VIII absorption lines can be detected out to very large radii. Comparisons between galaxies from IllustrisTNG, SIMBA, and EAGLE demonstrate that emission lines from oxygen and iron in the X-ray band can be used to distinguish between different models of AGN feedback and determine the role of feedback from supernovae and black holes in regulating star formation (Truong et al., submitted). Finally, an analysis by Schellenberger et al. (in preparation) of the CGM from galaxies in the IllustrisTNG, SIMBA, and EAGLE simulations (using mock X-ray observations very similar to those employed in this work) demonstrated that, using spectroscopically resolved emission lines, the CGM can be traced out to large radii, and maps of temperature, velocity, and abundance ratios can be produced.
Milky Way and M31-like disk galaxies in the TNG50 simulation have complex CGM structure <cit.>, in part due to fast outflows driven by feedback processes <cit.> that produce bubble-like features in X-ray morphology similar to the eROSITA bubbles in the MW <cit.>. These are associated with velocities directed away from the disk in the several hundreds to thousands of km s^-1. Consequently, <cit.> and <cit.> showed that strong outflows in the CGM of disk galaxies produce anisotropic signatures in the X-ray in terms of surface brightness (SB), temperature, and metallicity. Anisotropies are also expected due to rotational support in the hot gas, as shown by the idealized hot rotating CGM models of <cit.> and <cit.>.
In this work, we analyze the thermodynamic and kinematic properties of the hot CGM plasma from disk galaxies in the TNG50 simulation. These galaxies are part of the sample chosen by <cit.> for possessing large bubbles driven by feedback processes, and our small subsample is chosen from those that have large outflow velocities. We first focus on projected quantities which would be observable in X-rays or derived from such observations, such as surface brightness, temperature, line-of-sight velocity, and line-of-sight velocity dispersion. We also examine the general properties of the velocity field of the hot gas in comparison to the warm and cold phases and to the stellar disk. These will show what features to expect from high spectral resolution X-ray observations of the CGM and the physical processes that produce them. We then follow up with synthetic microcalorimeter observations of the CGM, to determine to what extent these properties would be discernible by an instrument with capabilities such as LEM.
This paper is organized as follows. In Section <ref> we describe briefly the properties of the TNG50 simulation and the galaxies that were selected for study, as well as the procedure for determining the X-ray emission from the CGM of these galaxies and simulating observations. In Section <ref> we present the properties of the X-ray emission from the CGM and the results of the synthetic observation study. In Section <ref> we discuss the results and present our conclusions. As in the TNG50 simulation, we assume a flat ΛCDM cosmology with h = 0.6774, Ω_m = 0.3089, and Ω_Λ = 0.6911, consistent with the Planck 2015 <cit.> results. Unless otherwise noted, all error bars refer to 1-σ uncertainties.
§ METHODS
§.§ Simulation: TNG50
The galaxies we examine in this work were selected from the TNG50 simulation
<cit.>, a magnetohydrodynamics (MHD) cosmological simulation in a cube
∼50 comoving Mpc on a side with periodic boundaries, and a successor to the original Illustris
simulations <cit.>.[The IllustrisTNG simulations,
including TNG50, are publicly available at <www.tng-project.org/data> <cit.>.] The
simulation size and mass resolution is optimized for studies of galaxy formation and evolution <cit.>. The
simulations are performed with the code <cit.>, which combines a TreePM
gravity solver with a quasi-Lagrangian, Voronoi- and moving-mesh based method for the fluid
dynamics. The calculations include prescriptions for the evolution of magnetic fields
<cit.>, gas cooling and heating, star formation and evolution, metal enrichment,
feedback from supernovae, and for the creation, growth, and feedback of supermassive black holes
(SMBHs) <cit.>. The gas mass resolution of TNG50 is m_ gas = 8.5
× 10^4 M_⊙, which is sufficient to resolve the multiphase structure of the CGM for the
galaxies considered here (the gas in each galaxy contains ∼10^5-10^6 particles).
The original sample from TNG50 from which our galaxies originate was presented in
<cit.> and was selected to represent MW/M31-type galaxies: having a stellar mass of
M_*(< 30 kpc) = 10^10.5-10^11.2 M_⊙, having a disk-like stellar morphology, having
no other massive galaxy with M_* > 10^10.5 M_⊙ within 500 kpc, and that the mass of their
host halo is limited to M_200c < 10^13 M_⊙ (to avoid galaxies sitting in massive groups or
clusters). All galaxies were selected from the z = 0 snapshot. From this sample, we have selected
6 galaxies to focus on in this work. Their properties are listed in Table <ref>. As
shown in <cit.>, these galaxies have powerful outflows driven by AGN and stellar
feedback, launching giant overpressurized bubbles and shell features perpendicular to the disk in
opposite directions. Our small sample contains galaxies with particularly fast outflows, as defined
in <cit.>.
For all of the subsequent analysis, the coordinates and velocities of the particles and cells from
each galaxy have undergone a coordinate transformation such that the origin for each is the
potential minimum of the galaxy, and the z-axis of the new Cartesian coordinate system points in
the direction of the normalized spin axis of the galaxy's disk. This axis is determined by computing
the total angular momentum vector of the star particles within a sphere of radius 15 kpc centered on
the galaxy's potential minimum. The direction of the x-axis is chosen to be perpendicular to the
z-axis, but otherwise arbitrarily, and the direction of the y-axis is then determined to give
the axes a right-handed orientation. The rest frame of the particles and cells is determined by
computing the mass-weighted mean velocity of the star particles from the same spherical region and
subtracting this velocity from the velocities of all of the particles and cells (the results are not
particularly sensitive to the choice of radius for the spherical region).
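A minimal Python sketch of this transformation, assuming the star particle positions and velocities have already been re-centred as described above (array names are illustrative), is:

# Compute the total stellar angular momentum within r_max of the potential
# minimum and rotate coordinates so the new z-axis points along the spin axis.
import numpy as np

def disk_aligned_frame(pos, vel, mass, r_max=15.0):
    inside = np.linalg.norm(pos, axis=1) < r_max
    L = np.sum(mass[inside, None] * np.cross(pos[inside], vel[inside]), axis=0)
    z_hat = L / np.linalg.norm(L)

    # Build an orthonormal basis with z_hat as the third axis: x chosen arbitrarily
    # perpendicular to z, and y completing a right-handed system.
    trial = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(trial, z_hat)) > 0.9:
        trial = np.array([0.0, 1.0, 0.0])
    x_hat = trial - np.dot(trial, z_hat) * z_hat
    x_hat /= np.linalg.norm(x_hat)
    y_hat = np.cross(z_hat, x_hat)

    R = np.vstack([x_hat, y_hat, z_hat])   # rows are the new basis vectors
    return pos @ R.T, vel @ R.T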
For the purposes of this paper, we make a distinction throughout between gas cells which we designate as “hot” with T ≥ 5 × 10^5 K, and those we designate as “warm/cold” with T < 5 × 10^5 K. The boundary value of T = 5 × 10^5 K corresponds to kT ∼ 0.04 keV, which in terms of photon energy is a rough boundary between the extreme UV and X-ray bands and is also close to the typical lower energy range of X-ray detectors. Since our main focus is the hot X-ray-emitting gas in this work, we do not distinguish further between warm and cold phases.
§.§ X-ray Emission and Mock Observations
For modeling the X-ray emission from the CGM, we assume collisional ionization equilibrium (CIE) and
use the Astrophysical Plasma Emission Code (APEC) <cit.>. This approximation is
valid for the temperatures and densities that we are examining in this work, since we are focusing
on measuring the velocities within the inner ∼200 kpc of the halo where we expect the effects
of hot outflows and rotation to be most significant. We verify that the only regions for which the
assumption of CIE can make a significant difference to the emitted SB are at radii larger than those
of interest to us in this work. The elemental abundance ratio table assumed for the emission model
is from <cit.>.
The code <cit.> is also used in this work to produce synthetic X-ray
observations from the CGM of the galaxies. The galaxies are placed at a redshift of z = 0.01, at
which the radius of r_500c for our galaxies fits roughly within a half-degree wide field of view
on the sky (at an angular diameter distance of ∼44 Mpc), and at which the emission lines from
the source we are interested in detecting are sufficiently redshifted away from the bright MW
foreground lines. Using the APEC emission model described above, we generate a cosmologically
redshifted spectrum for each X-ray-emitting gas cell in the galaxy, which are chosen by including
only non-star-forming gas cells with T > 3 × 10^5 K and ρ < 5 ×
10^-25 g cm^-3 (a value of the density close to the star formation density threshold in the
TNG simulations). In each galaxy, there is also a small set of isolated gas cells which are
abnormally bright in X-rays–these typically have extreme values of cooling time and/or thermal pressure, and on this basis are excluded from the analysis to improve visualizations, but we
do not find that leaving them in changes any of our conclusions. Inputs to these spectra include the
electron and proton number densities, temperatures, and metallicities <cit.> of the cells. We then use each spectrum to generate an initial sample of
photons with specified values of exposure time t_ exp = 1 Ms and telescope collecting area
A = 0.5 m^2. The value for A is energy-independent and is only used to ensure that the initial
sample of photons that will be drawn from in the instrument simulation step is large.
For each mock observation, the positions of this photon sample are projected onto the sky plane
along a chosen sight line, and the energies of the photons are Doppler-shifted using the component
of the gas cell velocity along the sight line. We use the Tübingen/Boulder neutral absorption
model <cit.> with a hydrogen column density of N_H = 1.8 ×
10^20 cm^-2 <cit.> to
remove a random subset of photons that will be absorbed by neutral gas in the MW. This creates a
large initial random sample of photons for each galaxy and each sight line that will be later used
by the instrument simulator to draw subsamples of photons to create “observed” X-ray events.
The instrument simulation step is carried out by the
code[<https://hea-www.cfa.harvard.edu/soxs>] <cit.>, which takes the set of
photons produced by and passes them through an instrument model to produce observed
X-ray “events.” This includes convolving with the energy-dependent effective area (auxiliary
response file or “ARF”) of the combined telescope and instrument, convolving with the response
matrix (“RMF”) that converts photon energies into spectral channels, and scattering of photon
positions by the telescope PSF. For this work, we use an instrument model with capabilities similar
to the LEM probe concept, with a field of view of 32 arcminutes, 0.9 eV spectral
resolution, and an effective area of ∼0.2 m^2 in the 0.5-2.0 keV (∼6-25 Å) band
<cit.>.
also includes events from background and foreground models[More details about
the background models in can be found at
<http://hea-www.cfa.harvard.edu/soxs/users_guide/background.html>.]. For the non-X-ray particle
background (NXB), a constant value of 4 counts s^-1 keV^-1 deg^-2 is assumed. For the
cosmic X-ray background (CXB), we include resolved point sources with numbers and fluxes determined
by a logN-logS distribution from <cit.>. For the Galactic foreground, we assume
a model with two APEC <cit.> components, one absorbed and thermally broadened for the
“hot halo” (using the same absorption model and value for N_H as above) and another (unabsorbed)
for the “Local Hot Bubble”, taken from <cit.>. We add to this
another absorbed and thermally broadened APEC component for the hot halo with kT = 0.7 keV (T
≈ 8.1 × 10^6 K), Z = 1 Z_⊙, and a normalization parameter roughly 0.12×
that of the first hot halo component, suggested by Halosat observations <cit.>.[Evidence for a such a hot component was also found in eROSITA observations by <cit.>.] Each galaxy and the included background and foreground emission is exposed for 1 Ms by the instrument simulator.
The astrophysical background model used here does not include the contribution of line emission coming from heliospheric solar wind charge exchange <cit.>. This background component originates from the interaction of the ionized particles of the solar wind with the flow of neutral ISM through the heliosphere. It is time-variable and much harder to predict and model, but one might expect it to affect mostly the O VII triplet, especially its forbidden component, and higher-energy lines as well, though to a smaller degree <cit.>. To zeroth order, the presence of this component would correspond to a moderate enhancement of the Galactic O VII line emission, which should not significantly affect the results presented here.
§ RESULTS
§.§ Maps of Projected Quantities from the Simulations
For three of the disk galaxies, we make projected maps of several quantities along the line of
sight, which are shown in Figures <ref>-<ref>. Each galaxy is projected
along “edge-on” and “face-on” sight lines, defined with respect to the stellar disk. The
top-left panel of each sight line of these figures shows the projected stellar mass density. The stellar streams observed in these galaxies at large radii indicate past or ongoing merging activity with satellites.
The other four panels in each figure show projected quantities associated with the X-ray emitting
gas. The top-center panels of each sight line of Figures <ref>-<ref>
show X-ray SB in the 0.5-1.0 keV band, which spans the prominent emission lines for the hot CGM as
noted in Section <ref>. In the edge-on projections (upper panels of Figures
<ref>-<ref>), there are clear indications in the SB maps of outflows
perpendicular to the galactic disk, inflating cavities and entraining dense gas in their wake. The
face-on projections (lower panels of Figures <ref>-<ref>), do not show
any such axisymmetry in their SB maps (as expected), but instead show roughly concentric edges along
various directions, likely generated from the expansion of the outflowing gas perpendicular to the
disk axis or from satellite merger activity.
Much stronger indications of the nature of the various X-ray features can be seen in the projected
temperature and velocity maps (which are weighted by the X-ray SB in the 0.5-1.0 keV band). Outflows
are clearly associated with regions of higher temperature (top-right panels of Figures
<ref>-<ref>). Most of the hot CGM has a projected temperature of T ∼
3.5-4.6 × 10^6 K (∼ 0.3-0.4 keV), whereas the regions associated with the outflows and
bubbles in the SB maps range from T ∼ 5.8-11.6 × 10^6 K (∼ 0.5-1.0 keV).
The bottom-left panels of Figures <ref>-<ref> show the line-of-sight
bulk velocity. In the edge-on projections of Figures <ref> and <ref> for
galaxies 1 and 2, the mean velocity maps show clear signs of rotation of the CGM in the inner r
∼ 50 kpc for both galaxies. Rotation speeds measure up to ∼200-300 km s^-1. Outside of
this radius, the bulk flows do not tend to follow a clear pattern of rotation, and any measured velocities can be radial (whether inflowing or outflowing) or tangential. On the other hand,
galaxy 6 (Figure <ref>) does not show a clear rotation pattern in the
emission-weighted line-of-sight velocity map in the edge-on projection, instead exhibiting a complex
pattern of velocities. Interestingly, it also does not show a symmetric pattern of outflows on
either side of the galactic disk in the SB map, indicating that the hot outflow is not simply
directed perpendicular to the disk in this particular case. The face-on projections of each galaxy
show a complex pattern of mean velocities in both directions near the center of the galaxy, ranging from ∼ -600 to 600 km s^-1, indicating that, though the majority of the flow is outward away from the disk, it is complex enough that in projection different sides of the galaxy may dominate the emission at particular spatial locations. At larger projected radii, the mean velocities are smaller, at ∼ -200 to 200 km s^-1.
The line-of-sight velocity dispersion maps are shown in the bottom-center panels of Figures
<ref>-<ref>. In the edge-on projections, velocity dispersions of
∼200-1000 km s^-1 are seen primarily in the regions dominated by the outflows. These
correspond primarily to the expansion of bubbles and outflow regions along the sight line and not
typically to regions of increased turbulence. In the face-on projections, the whole inner r
≲ 100 kpc region has projected velocity dispersions of ∼300-1000 km s^-1, primarily
due to the directed hot outflows on either side of the disk. This too is very patchy, reflecting the
complexity of the outflows as seen in projection. Outside of these regions, the velocity dispersion is much smaller, around σ∼ 100-200 km s^-1. We will see later (Section <ref>) that the faster velocities (both mean and dispersion) near the center are associated with outflows, while the slower velocities at larger projected radii are associated with inflows.
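The projected maps described above are, in essence, weighted first and second moments of the line-of-sight velocity in each sky pixel. A minimal sketch of this calculation (with synthetic cell data standing in for the simulation output) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gas cells: sky-plane coordinates (kpc), LOS velocity (km/s),
# and an emission weight (e.g., the 0.5-1.0 keV luminosity of each cell).
n = 200_000
x, y = rng.uniform(-150, 150, (2, n))
v = rng.normal(0, 250, n)
w = rng.lognormal(0, 1, n)

bins = np.linspace(-150, 150, 129)

# Weighted first and second moments per pixel: <v> and sqrt(<v^2> - <v>^2)
W, _, _ = np.histogram2d(x, y, bins=[bins, bins], weights=w)
Wv, _, _ = np.histogram2d(x, y, bins=[bins, bins], weights=w * v)
Wv2, _, _ = np.histogram2d(x, y, bins=[bins, bins], weights=w * v**2)

with np.errstate(invalid="ignore", divide="ignore"):
    v_mean = Wv / W
    v_disp = np.sqrt(np.maximum(Wv2 / W - v_mean**2, 0.0))

print(np.nanmean(v_mean), np.nanmedian(v_disp))
```

Swapping the emission weight for the cell mass gives the corresponding mass-weighted maps.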
Projected maps in the face-on and edge-on directions for the other three galaxies (3, 4, and 5) are
presented in Figures <ref>, <ref>, and <ref> in
Appendix <ref>. In general, these maps show similar features in the different projections to galaxies 1, 2, and 6.
§.§ Velocity Profiles
In this Section, we further examine the properties of the CGM velocity field, focusing on outflows,
inflows, and rotation. For this purpose, we adopt a cylindrical coordinate system where the vertical z-axis is perpendicular to the disk, and the radial R and angular ϕ directions define planes parallel to the disk. The origin of the coordinate system is defined to be at the center of the galaxy, where in particular z = 0 defines the galactic plane. Note that in the following we will also refer to the coordinate r = √(R^2+z^2), which is the radial coordinate in the spherical coordinate system, as it is easier to distinguish inflows and outflows in this coordinate.
§.§.§ 2D Velocity Profiles
Figures <ref>-<ref> show 2D profiles of the mass-weighted velocity in
the spherical-r direction (left sub-panels), representing inflows and outflows, and the cylindrical-ϕ direction (right sub-panels), representing rotation and general tangential motions. The profiles are a function of z and R and are azimuthally averaged over the ϕ-direction, for both the “hot” (T ≥ 5 × 10^5 K) and “warm/cold” (T < 5 × 10^5 K) gas phases in both galaxies.
The left panels of Figures <ref>-<ref> show the azimuthally averaged
mass-weighted spherical radial velocity v_r for galaxies 1, 2, and 6. Galaxies 1 and 2 (Figures
<ref> and <ref>) display a straightforward geometry – in conical
regions above and below the disk plane at z = 0, aligned with the z-axis and with an opening
angle of approximately 45^∘, the hot phase (left sub-panels) flows outward at speeds of
∼400-500 km s^-1 or more. These regions are dominated by feedback. The hot gas flows in
this direction will be most easily observed in face-on disk galaxies, though the morphological
features in X-ray SB and temperature which accompany these flows will of course be observed most
easily in edge-on disk galaxies. This basic structure in SB and temperature in TNG50 galaxies was
previously described in detail in <cit.>.
Outside of these conical regions, closer to the galactic plane, the hot phase is mostly slowly
inflowing with a velocity of ∼100 km s^-1. In these two galaxies, there is not a
significant amount of gas in the warm/cold phase, but it is largely confined to the volume away from
the conical hot outflow regions and is mostly inflowing at velocities of ∼200-300 km s^-1. Galaxy 6
(Figure <ref>), however, is quite different. Essentially all of the hot phase is
flowing radially outward at velocities of ∼300-500 km s^-1, and the warm/cold phase, which
makes up a larger fraction of the mass of the CGM in this galaxy than galaxies 1 and 2, is mostly
inflowing above the galactic plane and mostly outflowing below it, with similar speeds.
The right panels of Figures <ref>-<ref> show the azimuthally
averaged mass-weighted velocity in the ϕ-direction. In galaxies 1 and 2, the hot phase (left
sub-panels) shows coherent rotation of the CGM within a cylindrical radius of at least ∼50 kpc
and a height above the disk out to ∼75 kpc, though for Galaxy 2 it extends somewhat further
out. The majority of the warm/cold phase (right sub-panels) rotates in a disk of ∼50 kpc radius
and ∼20-30 kpc thickness near the center, with other parts of the cold gas phase at large radii
largely co-rotating with the hot phase. The rotation of the gas will obviously be most easily
observed in edge-on disk galaxies. The hot phase in Galaxy 6 shows far less coherent rotation and
instead is moving mostly randomly in the azimuthal direction, and very slowly with speeds of
∼50 km s^-1. The warm/cold phase has a more coherent rotation (though counter to the direction of rotation of the stars, see Section <ref>), with speeds of
∼100-200 km s^-1.
In summary, galaxies 1 and 2 are very similar, showing coherent hot outflows directed above and
below the disk that push hot gas outward in the R and z-directions in conically-shaped regions
on either side of the galaxy <cit.>,
while hot gas flows slowly inward closer to the galactic plane. Both galaxies also show coherent
rotation of both phases, albeit at slightly different velocities (this will be explored more in
Section <ref>). Galaxy 6 does not have any coherent rotation of the hot phase, and does not have any significant amount of gas which is inflowing. This is consistent with the disturbed appearance of the velocity field in the maps in Figure <ref> in Section <ref>. Phase plots for the other three galaxies in the sample (3, 4, and 5) are shown in Figures <ref>, <ref>, and <ref> in Appendix <ref>. Galaxies 3 and 4 appear very similar to galaxies 1 and 2, whereas galaxy 5 appears more disturbed.
§.§.§ 1D Velocity Profiles
How are the properties of the velocity field seen in the 2D profiles in the previous Section and the properties of the velocity field in projection seen in the maps in Section <ref> connected? To illustrate this, and to motivate the discussion in this and the following Sections, in Figure <ref> we show a schematic representation of the major components of the velocity field for the galaxies in our sample (1-4) which have clearly distinguished regions of inflow (light green), outflow (light red), and rotation (light blue), shown in the left image with vectors indicating the directions of flow in the different regions. Given the cylindrical geometry, and the fact that X-ray spectra must be extracted over regions large enough to contain a statistically significant number of X-ray counts, two natural choices for regions to analyze the X-ray emission in cylindrical radial profiles in two different projections are also shown. The small magenta cylinder would be a logical choice for studying the radial profile of the rotation curve of the galaxy in the edge-on projection using line shifts, in which it would appear as a rectangular region (center image), where the smaller inset rectangles represent the radial bins and the regions from which spectra would be extracted. In addition to rotation in the inner hot CGM, in these regions the radial inflows in the outer regions would also have velocity components along the sight line, fully aligned with it at the very center and decreasing with projected radius. Assuming cylindrical symmetry, the radial inflow would produce a small line broadening, strongest near the center. In the face-on projection (right image), the larger yellow cylinder would represent measuring radial profiles in a set of circular annuli, which will probe the fast outflows near the center but also the slower inflows in the outer regions. Again assuming symmetry, both the outflows and the inflows would produce line broadening.
Motivated by these considerations, in this Section we produce azimuthally and height-averaged 1D
profiles of the first two moments of the velocity field along the three coordinate directions R,
ϕ, and z in the cylindrical coordinate system, as a function of cylindrical radius R from
the center of the galaxy. We also compare the profiles of the gas velocities to those from the stars.
In what follows, for the R and ϕ directions we extract 1D velocity profiles for the gas and
stars (which will be viewed in edge-on projections) from a thin cylinder with a half-height of
30 kpc and radius of 100 kpc (represented by the magenta cylinder in Figure <ref> and
shown with dashed lines in Figures <ref>-<ref>). For the
z-direction, we extract 1D velocity profiles (which will be viewed in the face-on projection) from
a cylinder of the same radius but a half-height of 1000 kpc, which corresponds to a long cylinder
with the axis projected along the line of sight (corresponding to the yellow cylinder in Figure
<ref>).
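A minimal sketch of how such 1D profiles can be constructed, including the difference between mass weighting and emission-band weighting (all inputs below are synthetic placeholders), is:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hot-phase cells: cylindrical coordinates (kpc), v_phi (km/s),
# mass, and an emission-band weight (e.g., O VII + O VIII band luminosity).
n = 300_000
R = rng.uniform(0, 150, n)
z = rng.uniform(-150, 150, n)
v_phi = rng.normal(150, 80, n)
mass = rng.lognormal(0, 1, n)
L_band = rng.lognormal(0, 1.5, n)

# Thin "edge-on" extraction cylinder: |z| < 30 kpc, R < 100 kpc
sel = (np.abs(z) < 30) & (R < 100)
R_bins = np.linspace(0, 100, 21)

def binned_moments(Rs, vs, ws, bins):
    """Weighted mean and dispersion of v in radial bins."""
    W = np.histogram(Rs, bins, weights=ws)[0]
    Wv = np.histogram(Rs, bins, weights=ws * vs)[0]
    Wv2 = np.histogram(Rs, bins, weights=ws * vs**2)[0]
    with np.errstate(invalid="ignore", divide="ignore"):
        mu = Wv / W
        sigma = np.sqrt(np.maximum(Wv2 / W - mu**2, 0.0))
    return mu, sigma

mu_mass, sig_mass = binned_moments(R[sel], v_phi[sel], mass[sel], R_bins)
mu_emis, sig_emis = binned_moments(R[sel], v_phi[sel], L_band[sel], R_bins)
print(mu_mass[:3], mu_emis[:3])
```

The same binning applied within the long face-on cylinder, with the z-component of the velocity, gives the profiles discussed below.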
We first show mass-weighted velocity profiles of the stars (green), hot gas (red), and warm/cold gas
(blue) for all six galaxies in Figure <ref>. The top panels of Figure
<ref> show the mean velocity (left) and the velocity dispersion (right) in the
R-direction. In this direction, a complication from the geometrical considerations discussed above
immediately arises. For the galaxies with coherent hot outflows, the top and bottom of the thin
cylinder used to extract the profiles (30 kpc away from the galactic plane) intersects with the
boundary of this outflow at a radius of roughly ∼30-50 kpc (see Figures
<ref>-<ref>). Within this radius, the hot phase is outflowing with an
average velocity μ_R∼ +50-200 km s^-1. Outside of this region, the hot gas is
either outflowing with a similar velocity, or inflowing with μ_R ∼ -50-100 km s^-1,
depending on whether this phase has the simple outflow/inflow structure, which is the case for
galaxies 1, 2, 3, and 4. The warm/cool phase gas, regardless of radius, is mostly inflowing with an
average velocity of μ_R ∼ -100-300 km s^-1. The velocity dispersion in the radial
direction is typically larger for the hot phase, with σ_R ∼ 100-500 km s^-1, than the
warm/cold gas, which has σ_R ∼ 50-100 km s^-1.
The middle panels of Figure <ref> show the same profiles for the ϕ-component
of the velocity. The middle-left panel shows the mean ϕ-velocity–essentially the rotation
curves of the different phases. Though there is a clear spread, in general the rotational speed of
the cold gas is faster than the hot gas within a radius of ∼50-kpc, with the former rotating at
∼200-400 km s^-1 and the latter rotating at ∼50-200 km s^-1. The stellar disks are
rotating at ∼100-200 km s^-1. The middle-right panel shows that the ϕ-velocity
dispersions within ∼50 kpc for the hot gas are slightly higher than the cold
gas–∼100-150 km s^-1 versus ∼50-100 km s^-1, respectively. The lack of complete
rotational support for the hot CGM in these TNG50 galaxies is consistent with previous results from
other works <cit.>. We also note the fact that there is more angular momentum in the warm/cold gas than the stars, in agreement with previous studies <cit.> and shown to be common to simulations of galaxy formation by <cit.>, arising at least in part from cold, high-angular-momentum streams of infalling gas.
The bottom panels show the same profiles in the z-direction, which would be seen if a galaxy were
viewed face-on. The mean z-velocity profile hews very closely to zero for nearly all of the
profiles. This is expected if the outflows are nearly equal and opposite on either side of the disk
in galaxies observed face-on. Exceptions to this are most prominent in the very center (r ≲
10-20 kpc), where the volumes of the radial annuli are small enough that the average can be dominated by a few cells with high velocity (see also the bottom-left panels of Figures
by a few cells with high velocity (see also the bottom-left panels of Figures
<ref>-<ref> in Section <ref>, which show large velocity
shifts near the center). Deviations from zero velocity mean are more pronounced in the warm/cold
phase, which is sometimes dominated by large and coherent parcels of gas (see the phase plots in
Section <ref>). The z-velocity dispersion profiles show a clear separation
between the hot phase and the warm/cold phase–the former has velocity dispersions within
∼40 kpc of ∼300-500 km s^-1, and the latter has very low dispersions of
≲100 km s^-1, with one outlier curve with a dispersion of ∼200 km s^-1 (galaxy
5) over almost the entire radial range. The high dispersions in the hot phase come from the
oppositely directed outflows on either side of the galaxy.
We now briefly look in more detail at the 1D velocity profiles of the individual galaxies. These are
shown for galaxies 1, 2, and 6 in Figures <ref>-<ref>. Here, we
also show the velocity profiles for the hot gas weighted by the X-ray emission in specific
source-frame bands around prominent emission lines in the CGM: O VII and O VIII (0.558-0.656 keV
band, orange lines), and Fe XVII and Ne IX (0.723-0.924 keV band, purple lines). These weightings
are significant since they correspond more closely to what X-ray microcalorimeter instruments will
be able to measure. The arrangement of the panels in Figures
<ref>-<ref> is the same as in Figure <ref>.
In the top panels of each figure, we show the first and second moments of the R-component of the
velocity. For galaxies 1 and 2 (Figures <ref> and <ref>), the hot gas is outflowing within ∼50 kpc with velocities up to ∼100-200 km s^-1, depending on the weighting used. For example, in galaxy 2 the hotter gas probed by the higher-energy emission lines of Fe XVII and Ne IX is moving faster than both the cooler hot phase probed by the lower-energy O emission lines and the gas probed by the mass weighting. At these inner radii, this is indicative of the hot, outflowing gas from the central SMBH. The warm/cold phase in this region has essentially zero radial velocity. Beyond this radius in these two galaxies, the hot gas is inflowing at ∼-50-100 km s^-1, with slightly slower speeds for the phase weighted by the Fe XVII-Ne IX emission. Parcels of warm/cold gas are also inflowing at these radii with higher velocities near ∼150-300 km s^-1. Galaxy 6 is very different in that the hot gas is strongly outflowing at all radii with velocities of ∼100-400 km s^-1 depending on the weighting. The cold phase is inflowing at all radii, especially near the center, with velocities up to ∼100 km s^-1.
In the ϕ-direction (middle panels of Figures <ref>-<ref>),
the mean and dispersion of the velocity between the different weightings for the gas are all more
similar for galaxies 1 and 2. This is expected, since the azimuthal direction is least affected by the hot outflow. Galaxy 6 is once again seen to be quite different from the other two: both its warm/cold and hot phases are counter-rotating with respect to the stars.
The bottom panels of each figure shows the moments of the z-component of velocity. We note again
(as seen in Figure <ref>) that in this projection the mass-weighted mean velocities
of both the hot and warm/cold phases are close to zero (as expected), and that the mass-weighted
velocity dispersion is higher for the hot phase. In the emission-weighted profiles, the absolute
value of the mean z-velocity can be significant in places, up to ∼150 km s^-1. Similar to the R-component, the velocity dispersion in the z-direction weighted by the Fe XVII
and Ne IX lines can be noticeably higher than that weighted by the O lines. Similar trends between the profiles are seen in the other three galaxies (3, 4, and 5), as seen in
Figures <ref>, <ref>, and <ref> in Appendix
<ref>.
As already noted, azimuthally averaged profiles such as these are motivated not only by the geometry
but also by the number of X-ray counts available for an observation. This immediately introduces a
complicating factor–the mean and the standard deviation of the velocity in such a large region may
either arise from the velocity distribution along the sight line or from the velocity distribution
across the sky plane within the region. To check for this effect, we plot profiles of the velocity
in the z-direction for Galaxy 1 in the face-on projection in 8 different azimuthal sectors of
width 45^∘ each in Figure <ref> (to be compared to the top panels of Figure
<ref>.) This figure clearly shows that there is an effect on both the measured
velocity mean and dispersion from the azimuthal averaging (which can also be predicted from the
bottom panels of Figure <ref>). This should be taken into consideration when
interpreting the results of the azimuthally averaged profiles, and mitigated by splitting into
subregions if there are enough counts to do so.
§.§ Off-Axis Projections
Of course, most galaxies will not be inclined either perfectly edge-on or face-on to our sight line.
In off-axis projections, components of the velocity field from both the rotating CGM and the hot outflows will be observable together. In the two projections we have examined so far, the outflow
velocities could not be easily measured via line shifts, either because they were mainly out of the
sight line in the edge-on case, or in the face-on case the oppositely directed outflows largely
canceled out the overall line shift. In the case of an off-axis projection, these outflow velocities
could be measured, and if the inclination angle can be constrained from the stellar disk the total
outflow velocity may be estimated.
Figure <ref> shows maps of the same quantities as shown in Section <ref>,
except along a sight line 45^∘ away from both the edge-on and face-on projections, for
galaxy 2, to give an example. The most intriguing of these images are the projected mean
velocity maps (bottom-left panels for each galaxy). The outflow velocities on either side of the
galaxy are clearly seen, and can be spatially matched with features in X-ray SB (top-middle panel
for each galaxy) showing outflows and cavities. The pattern of the velocity field in the map is also
twisted from a purely vertical dipole, showing the effect of both CGM rotation and outflows in the
same projection.
This last effect is particularly interesting, as it reveals how the different baryonic phases can
have very different kinematic properties. Figure <ref> shows line-of-sight mean
velocity maps for the stars and warm/cold and hot gas phases, for both the edge-on and inclined
projections. In the edge-on projection, the stars and both gas phases show a common rotation pattern
in the inner ∼30 kpc region of the galaxy, though they may rotate at slightly different speeds
as noted in Section <ref>. In the inclined projection, though the stars and
warm/cold gas show the same axis of rotation as before, the velocity pattern of the CGM is tilted
with respect to both due to the hot outflow signature combined with the rotation signature. This is an intriguing prediction which can only be tested with an X-ray microcalorimeter.
§.§ Mock X-ray Observations
In this Section, we produce synthetic X-ray observations of galaxies 1 and 2 using the procedure
described in Section <ref>, using a model with instrument characteristics similar to
LEM <cit.>. In the spectral analysis that follows,
the brightest ∼50-100 CXB point sources have been identified using
<cit.> and removed.
§.§.§ Velocity Maps from Spectral Fitting
If statistics permit, X-ray IFUs will be most useful in producing maps of projected quantities from
model fits to spectra. To demonstrate this, we carry out such a procedure on two of our model event
files to produce maps of line-of-sight mean velocity. This analysis is carried out using the CIAO
<cit.> and Sherpa packages <cit.>.
We first extract a background spectrum from the region outside a radius of 12.5' from the galaxy center (i.e., removing all emission within that radius). We fit this spectrum with a combined model for the MW foreground (the same foreground absorption model described in Section <ref> applied to an APEC CIE model with thermal line broadening), one power-law component for the (unresolved) CXB, and another component for the NXB.
With the background determined, we proceed to produce the velocity map. We then bin the counts
images in the O VIIf[Only the O VII forbidden line is sufficiently redshifted away from the
MW foreground lines at z = 0.01 to be used for this purpose.], O VIII, and Fe XVII lines (defined
by narrow bands around the line centroids at z = 0.01 with width 3 eV) into 30" pixels (twice the
size of the pixels in the simulated instrument). Each of these larger pixels is the center of a
circular region where the radius is expanded until it reaches a SNR of 7, where the maximum allowed
radius of each circle is 4.5' (18 pixels). Spectra are extracted from these regions, and grouped so
that there is at least one count in each energy bin of the spectra.
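The adaptive region growth can be sketched as follows; the toy counts image, the flat background level, and the specific signal-to-noise definition (net counts over the square root of total counts) are assumptions for illustration, not the exact criterion used in the analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy counts image in a narrow band around one emission line, 30" pixels;
# bkg_per_pix is an assumed flat background level per pixel.
img = rng.poisson(0.3, size=(64, 64)).astype(float)
img[24:40, 24:40] += rng.poisson(2.0, size=(16, 16))
bkg_per_pix = 0.3

yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]

def growth_radius(cx, cy, target_snr=7.0, r_max=18):
    """Grow a circular region until the (assumed) SNR target is reached."""
    for r in range(1, r_max + 1):
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        total = img[mask].sum()
        net = total - bkg_per_pix * mask.sum()
        snr = net / np.sqrt(max(total, 1.0))
        if snr >= target_snr:
            return r, snr
    return r_max, snr

r, snr = growth_radius(32, 32)
print(f"radius = {r} pixels, SNR = {snr:.1f}")
```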
Each circular region is then fit to a single-component model for the source,
with the parameters for the model components corresponding to the MW foreground and the NXB frozen to the values obtained from the background-only fit (rescaled by area), and the normalization
of the CXB component free to vary to account for the variable CXB contributions in each localized
circular region. We fit each spectrum in 8 eV bands around the O VII, O VIII, and Fe XVII lines. For
the source model, the temperature, redshift, line width, and normalization parameters are free to
vary. The hydrogen column density for foreground galactic absorption is fixed to N_H = 1.8 × 10^20 cm^-2, and the abundance parameter is fixed to Z = 0.3 Z_⊙, assuming <cit.> relative abundances. These parameters are fixed given the narrow spectral bands we use in the fits; neither of them will be well-constrained by the fit and the measurement of the line shift is not sensitive to their value in any case. We fit by minimizing the Cash statistic <cit.>. Once a best fit for a region is found, we refine it by running a Markov Chain Monte Carlo (MCMC) analysis with 2500 steps.
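For reference, the fit statistic for Poisson-distributed counts can be written, in the form commonly implemented in X-ray fitting packages (which differs from Cash's original expression only by a model-independent constant), as

C = 2 \sum_i \left[ m_i - d_i + d_i \ln\!\left(\frac{d_i}{m_i}\right) \right],

where d_i are the observed counts and m_i the model-predicted counts in spectral bin i, and bins with d_i = 0 contribute 2 m_i.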
Figure <ref> shows the result of this procedure on the observation of galaxy 2 with
the sight line facing edge-on (left subpanels) and inclined 45^∘ to the plane of the galactic
disk (right subpanels). The top subpanels show maps of SB in the O VIII line, with the idealized SB
map projected from the simulation in the top-left subpanels and the counts map from the mock
observation in the top-right subpanels. The bottom subpanels show the line-of-sight mean velocity,
computed from the simulation by weighting by the emission in the 0.5-1 keV band (bottom-left
subpanels), and produced from the fitted line centroid as described above (bottom-right subpanels).
Typical uncertainties on the mean velocity from the fits are ± 20-30 km s^-1. There is
remarkable agreement between the simulated and fitted velocity maps, with the model fits reproducing
the overall shape of the velocity distribution as well as the magnitude of the velocity in either
direction. The most extreme values of the idealized map are slightly underestimated in absolute value because they occur in small regions with faint emission that do not contribute greatly to the spectra of their respective circular regions. The reproduction of the
general features in the map demonstrates that an X-ray IFU with ∼1 eV spectral resolution will
be able to map the velocity field of the CGM to sufficient detail to observe the effects of rotation
and outflows.
§.§.§ Velocity Distributions in Regions from Spectral Fitting
Figure <ref> shows a 1 Ms exposure of galaxy 1 in the face-on and edge-on
projections, where the plotted events have been restricted to the 0.646-0.649 keV band, which bounds
the redshifted O VIII line at z = 0.01. As noted in Section <ref>, all backgrounds are
included in this image. Also overlaid on the two panels in Figure <ref> are numbered
regions from which spectra are extracted for fitting to emission models for the analysis in Section
<ref>. In the edge-on image (left panel), the regions are made of rectangles so
that the velocity profile of the CGM can be measured across the disk. In the face-on image, the
regions are made of annuli, reflecting the approximate cylindrical symmetry along this sight line.
We extract spectra from the regions shown in Figure <ref> and fit them using <cit.>. In each region, we model the CGM emission using a single component, where the hydrogen column density for foreground galactic absorption and the metallicity parameters are fixed as above, and the temperature, redshift, velocity broadening, and normalization parameters are free to vary. For the galactic foreground emission, we assume the model given in Section <ref>, holding all parameters fixed except an overall constant normalization which is free to vary. A power-law component is included to model the CXB, with its photon index and normalization parameters free to vary. Finally, the normalization of the constant particle background component is also free to vary. We fit within the 0.64-0.83 keV band (covering the O VIII and Fe XVII lines), and use the Cash <cit.> statistic for minimization.
The result is shown in Figure <ref>, where the blue lines show the mean (left panel) and standard deviation (right panel) of the velocity as determined from the spectral fitting, and the orange lines show the same quantities projected directly from the simulation weighted by the X-ray emission in the 0.5-1.0 keV band. The shape of the mean velocity measurements clearly shows the rotation curve, and is in excellent agreement with the simulation projection and broad agreement with the azimuthally averaged curve of the same quantity in the middle-left panel of Figure <ref>. The measured velocity dispersion is also in broad agreement with the simulation projection. For the reasons discussed in Section <ref>, this quantity will be dominated by the oppositely directed radial (R) inflows near the center of the galaxy, while at larger projected radius the contribution of differences along the azimuthal (ϕ) direction will become more important. The numbers measured here are consistent with those from the 1D radial profiles in the top-right and middle-right panels of Figure <ref>.
For the face-on projection, the story is somewhat different. In this case, where we project along
the z-axis of the cylinder, there is a complex distribution of outflows and inflows with different
temperatures and velocities. As we have already seen (Figure <ref>), azimuthally
averaging within an annular region also combines different phases in a non-trivial way. We find for
many of the annular regions shown in Figure <ref> that a single thermal emission
model component does not adequately represent the observed emission from the CGM within them. To
this end, we have fit these 9 regions with multiple components to attempt to capture multiple
temperature and velocity components in the hot gas from both the outflows and the inflows, which we
would predict to be observed especially along this sight line from the results of Sections
<ref> and <ref>. For these fits, we have successively added
additional components until no more components were statistically required. In order
to make sure we can also correctly detect and characterize weaker emission components, we used the
whole energy range (0.3-2 keV) and kept background parameters free to vary in this part of the
analysis. This kind of deep, multi-component analysis will only be possible with
microcalorimeter-quality data.
The results are shown in Figure <ref>, for which all regions are numbered for
reference back to Figures <ref>. The left panel shows the fitted gas temperature for
each region, for fits with 1, 2, and/or 3 components. Not all of the annular regions
were well-fit by a second or a third component—only regions where a new component was
statistically required are shown. It can be seen that there are three distinct gas temperatures
recovered by the fits (left panel)—the dominant Component 1 is at T ∼ 2.3 × 10^6 K,
(∼ 0.2 keV), with Component 2 at T ∼ 4.6-5.8 × 10^6 K (∼ 0.4-0.5 keV), and
Component 3 at T ∼ 7.0-9.3 × 10^6 K (∼ 0.6-0.8 keV). The mean velocities for these
three components are shown in the center panel, with Components 1 and 2 averaging around zero
velocity with a range of ∼ -100-100 km s^-1. Component 3, which is the hottest gas, has
mean velocities as high as ∼ -200 km s^-1 near a radius of ∼10-20 kpc, but these
values are also more uncertain. The right panel shows the velocity dispersion for each component,
which is ∼100-200 km s^-1 for Components 1 and 2, and ∼400 km s^-1 for Component
3, but again these latter values are more uncertain. The lower values of the mean and dispersion at
lower temperature (Components 1 and 2) are consistent with the fact that the slower inflows are
cooler, and the higher values for both of these quantities of Component 3 are consistent with the
fact that this component is associated with the hot outflowing gas that we have seen in the previous
sections, especially Figure <ref>, which showed the same for the velocity profiles
weighted by the higher energy band, which is more sensitive to hotter gas. The spectra for Region 4
in the 0.3-2 keV band, with the best-fit model overlaid with three components, are shown in Figure
<ref>.
To further examine the consistency of the fitted values with the data from the simulation, we show in Figure <ref> the phase space of temperature vs. line-of-sight velocity for 3 of the cylindrical annuli corresponding to the numbered face-on regions shown in the right panel of Figure <ref>. The top panels only show gas which is inflowing with v_r < 0 and the bottom panels only show gas which is outflowing with v_r > 0 (see also Section <ref>). The colormap indicates summed emission measure at each point, with yellow indicating the highest values. For each region, the general trend is for the phase space to be most concentrated at temperatures of T ∼ 3-6 × 10^6 K (∼ 0.25-0.5 keV), where the spread of velocities is also the lowest. As we move to higher temperatures, the spread of velocities increases, though the phase space is less populated in these regions.
Overplotted on each phase space panel are magenta points indicating the position of the fitted temperature and mean velocity from the components shown in Figure <ref>, whereas the vertical error bars on each point indicate the velocity dispersion. For all of the panels, the coldest magenta point (Component 1 from Figure <ref>) is usually consistent with the highest emission measures in the region. We can also see from the top row of panels that this component is also consistent with inflowing gas, especially Region 8, which is at large radius and the gas with the highest emission measure should be near the disk and thus inflowing. However, there is also outflowing gas in these regions in projection (bottom row of panels) at the same velocity/temperature phase, so the distinction is not always clear-cut. Where present, Components 2 and 3 also appear generally consistent with the data, though once again there are larger uncertainties. These temperatures, especially Component 3, are hotter and are more consistent with the outflowing gas. This particular analysis relied on combinations of single temperature models, but it is likely that better models accounting for the temperature and velocity distributions in a more general way will need to be developed to properly model calorimeter-quality data in the future.
§ SUMMARY
The hot, X-ray-emitting phase of the CGM has so far eluded detailed study, even for nearby galaxies,
due to the brightness of the MW's own CGM and the lack of X-ray instruments with sufficient spectral resolution to distinguish the emission lines of the MW's CGM from those of the galaxies under study. In disk galaxies with mass greater than or equal to that of the MW, the hot phase will be dominant. In the coming decades,
determining the properties of this phase of the CGM will be crucial to the further development of
our understanding of the processes of galaxy formation and evolution. If our own galaxy is any indication, many of these galaxies may be expected to possess hot outflows (as evidenced by the
cavities seen in the eROSITA all-sky survey) and rotational motions in their CGM. Only
X-ray IFUs will be able to map the velocity field of these galaxies to observe these processes in
action.
Using a small selection of disk galaxies from the TNG50 simulation that were previously shown to have cavities, we have demonstrated that the CGM of such galaxies can exhibit velocities representing gas outflows, inflows, and rotation that can be mapped by future microcalorimeter instruments. Our main conclusions are as follows:
* A number of the TNG50 galaxies examined in this sample (1-4) have a simple geometrical structure in the hot phase of the CGM, comprised of oppositely directed hot and fast outflows in conical regions on either side of the disk, coherent rotation in the inner hot CGM (within ∼50 kpc), and slower inflows of hot gas (∼100-150 km s^-1) close to the galactic plane at larger radii. When viewed exactly edge-on, line shift maps exhibit the rotation curve clearly, with velocities of ∼100-200 km s^-1. Hot outflows can also be seen edge-on in line shift (∼400-600 km s^-1) if they are not launched exactly in the plane of the sky, and the expansion of the outflow along the sight line can be seen in line broadening measurements (∼200-400 km s^-1). When viewed exactly face-on, line shift maps of the hot CGM of the same galaxies show a turbulent line-of-sight velocity structure with mean velocities of ∼± 200-500 km s^-1, and velocity dispersions of ∼ 300-1000 km s^-1 within ∼50-100 kpc of the galactic center, and ∼100-200 km s^-1 at larger radii.
* The hot CGM of these galaxies rotates in the same direction as the stellar disk and has a similar rotation speed (∼100-200 km s^-1), but is slower than the colder CGM and ISM (∼200-400 km s^-1). Conversely, the velocity dispersion in the azimuthal direction of the hot phase is greater (σ_ϕ∼100-150 km s^-1) than the warm/cold phase (σ_ϕ∼50-100 km s^-1). Outside of the rotation and outflow regions and closer to the disk, both phases of gas are inflowing, the hot phase at ∼100-150 km s^-1 and the warm/cold phase at ∼100-300 km s^-1.
* If viewed face-on, the mean line-of-sight velocity in these galaxies azimuthally averages out to zero (assuming as we did in this work that the velocities are measured with respect to the center-of-mass frame). The velocity dispersion in this direction averages out to σ∼ 200-500 km s^-1 within ∼50-100 kpc and σ∼ 100-200 km s^-1 at larger radii. This structure is indicative of a complex pattern of flows that nevertheless when averaged over the azimuthal direction is composed of conically-shaped outflows away from the disk near the center and inflows at larger projected radii. We find that the velocity dispersion that is obtained is sensitive to which emission lines are used, since these probe different gas phases from each other and from the mass-weighted average.
* Not all of the TNG galaxies we investigated display coherent rotation or inflow patterns in the hot CGM (in particular galaxies 5 and 6). These galaxies appear to have strong outflows not entirely aligned with the rotation axis of the stellar disk that may have disrupted the rotation and inflow of the hot CGM. These two galaxies are also the lowest-mass galaxies in our sample, and thus have less CGM gas in the hot phase which is more easily disrupted by feedback processes. Determining the precise reasons for the differences in the CGM velocity structure of these galaxies will require future work.
* For X-ray observations, using regions larger than the angular resolution of the detector will often be necessary to obtain sufficient counts to measure the line centroid shift and broadening. This will not only measure velocity differences along the sight line, but also across the sky plane within the region, which also contributes to the measured centroid shift and broadening. We find that these sky-plane contributions can be just as significant to the overall measurement as those along the sight line. To separate out these effects, splitting up the regions as finely as count-rate statistics allow may be necessary.
* When our galaxies are viewed at an angle inclined away from the disk, signs of both rotation and hot outflows are observed, the latter of which will be especially prominent in line shift measurements (∼± 200-500 km s^-1) in regions of X-ray SB which show evidence of cavities and bubbles. The combination of these effects produce a velocity pattern in our simulated galaxies that is distinct from the stellar and ISM velocity patterns, as the velocity fields of the latter two are dominated by rotation.
* We produced mock X-ray microcalorimeter observations of galaxy 2 and used a spectral fitting technique to produce maps of the mean velocity field along two sight lines; edge-on and tilted 45^∘ to the rotation axis of the stellar disk. In both cases we are able to reproduce the features of the mean velocity field of the simulation to high-accuracy, enabling us to determine the properties of the rotation curve and the hot outflows.
* We produced similar mock observations of galaxy 1 along the edge-on and face-on sightlines. We then selected regions in each projection to measure the first two moments of the velocity field by extracting spectra from these regions and fitting thermal emission models to them. We find that in the edge-on projection that the mean and standard deviation of the velocity are well-fit by a single thermal emission model, enabling us to measure the rotation curve of the CGM from line shifts and estimate the inflow velocity using line widths. In the face-on projection, the different phases of the gas have different velocities, and thus we require multiple thermal emission components to reproduce their properties. We find that the lower-temperature hot phase is consistent with lower velocity dispersions, and the higher-temperature gas is consistent with higher velocity dispersions. The former may be consistent with inflows (especially at large projected radii), whereas the latter is consistent with outflows, but projection effects make unambiguous identification of these two different phases difficult.
Our results show that future microcalorimeter observations of the hot CGM of galaxies will be able to measure the temperature and velocity fields of the gas, and determine if the hot CGM has the main structures we identified in this work: inflows, outflows, and rotation, or if it is dominated by a chaotic and turbulent flow, as in two of the galaxies in our sample. Detecting hot outflows and measuring their velocities will help determine the mass and energy fluxes of these outflows, and thus their impact on the evolution of the galaxy and its environment. Measuring the rotation and inflow velocities of the hot CGM, especially in comparison to measurements of the cooler phase in the UV, will help determine how gas accretes from the CGM onto the galaxy itself in its different temperature phases and drives its evolution.
This analysis could be extended in a number of ways. The spectral analysis of the mock observations in this work merely scratched the surface of what is possible. The closest analog to studies of the hot halos of galaxies are their more massive counterparts in groups and clusters of galaxies in the intragroup and intracluster media. These are much brighter in X-rays and hence easier to study. In the era of Chandra, XMM-Newton, Suzaku, NuSTAR, and now eROSITA, spectral analysis of these extended sources has been largely limited to the ∼100 eV resolution of the imaging instruments on these telescopes. This prevents analysis of the velocity field in groups and clusters, and limits the ability to distinguish between different gas phases of different temperatures and compositions. This latter issue has not been a major limitation for most studies of the hot gas in groups and clusters, since for most applications it is well-approximated by a single-temperature phase over relevant spatial regions. However, this is not the case for the CGM, and to characterize it adequately we will need microcalorimeter instruments that can resolve the velocity field and the different gas phases. Extensions of the work presented here should focus on improvements to the process of extracting and fitting spectra to decompose the emission into these multiple thermodynamic and kinematic components, which will likely require more sophisticated statistical methods than have been required for spectra with CCD-like energy resolution, and/or machine learning techniques.
We have also only used disk galaxies from the TNG simulations, which prescribe particular modes of AGN and stellar feedback. It would be instructive to perform similar analyses on other simulated galaxies, including from cosmological simulations such as EAGLE <cit.>, SIMBA <cit.>, FIRE <cit.>, Magneticum <cit.>, ChaNGa <cit.>, and FOGGIE <cit.>, or idealized simulations <cit.>.
JAZ, AB, PEJN, and RPK are funded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. JS was supported by the Israel Science Foundation (grant No. 2584/21). DN acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). IK acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679.
This material is based upon work supported by NASA under award number 80GSFC21M0002.
The TNG50 simulation was run with compute time granted by the Gauss Centre for Supercomputing (GCS) under Large-Scale Projects GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS).
§ PLOTS FOR GALAXIES 3, 4, AND 5
This appendix shows the main figures for galaxies 3, 4, and 5, mentioned previously in Section <ref>. These galaxies are lower-mass and not as bright in X-rays as galaxies 1 and 2 (though galaxy 6 is the lowest-mass galaxy in our sample). In general, galaxies 3 and 4 are very similar to 1 and 2 in the sense that they have the same general pattern of inflows, outflows, and rotation. Galaxy 5 is similar to galaxy 6 in the sense that it has an outflow that is not symmetric on either side of the disk and its velocity field appears more turbulent.
|
http://arxiv.org/abs/2307.01441v1
|
20230704022504
|
A Generic Multi-Player Transformation Algorithm for Solving Large-Scale Zero-Sum Extensive-Form Adversarial Team Games
|
[
"Chen Qiu",
"Yulin Wu",
"Weixin Huang",
"Botao Liu",
"Shaohuai Shi",
"Xuan Wang"
] |
cs.GT
|
[
"cs.GT"
] |
§ INTRODUCTION
Games have been critical testbeds for exploring how effectively machines can make sophisticated decisions since the early days of computing <cit.>. Finding an equilibrium in games has become a significant criterion for evaluating the capability of artificial intelligence. In recent years, many strong results based on the Nash Equilibrium (NE) <cit.> have been obtained in the field of 2-player zero-sum (2p0s) games with incomplete information <cit.>. However, solving for equilibria in multi-player zero-sum games with three or more players remains a hard challenge. There are three main reasons for this: first, CFR-like algorithms for finding an NE are widely used in 2p0s games, but the literature provides no theoretical guarantee that they can be applied directly to multi-player games <cit.>; second, NEs are not unique in multi-player games, and independently computed strategies of the individual players do not readily combine into a single NE <cit.>; third, computing an NE is PPAD-complete for multi-player zero-sum games <cit.>.
The Team-maxmin Equilibrium (TME) <cit.> is a solution concept for multi-player games. In this paper, we are concerned with adversarial team multi-player games, which model a situation in which a team of players sharing the same utility function plays against an adversary. Based on the various forms of correlation among team members, this concept has been extended to extensive-form games <cit.>. We focus on the ex ante coordination scenario: team members are allowed to coordinate and agree on a common strategy before the game starts, but they cannot communicate during the game. The variant of TME in this scenario is called the Team-maxmin Equilibrium with Correlation (TMECor), and its computation is FNP-hard <cit.>. To the best of our knowledge, Celli and Gatti <cit.> were the first, in 2018, to propose a linear-programming-based algorithm capable of computing a TMECor. The essence of this algorithm is hybrid column generation, in which the team members and the adversary use different forms of strategy representation. The Associated Recursive Asynchronous Multiparametric Disaggregation Technique (ARAMDT) proposed by <cit.> also uses a mixed-integer linear program (MILP) to find a TMECor. Since the feasible solution space of the MILP is very large, these approaches are significantly limited in the size of games they can handle efficiently.
A TMECor can be regarded as an NE between the team and the adversary that maximizes the team's utility. In adversarial team multi-player games, the advantage of TMECor over NE is that it is unique, which considerably reduces the difficulty of finding optimal strategies. Nonetheless, far fewer algorithms exist for computing a TMECor. It is therefore an interesting and worthwhile research direction to leverage the algorithms for solving Nash equilibrium in two-player zero-sum extensive-form games (e.g., CFR, CFR+) to find a TMECor. Inspired by <cit.>, we attempt to build a connection between large-scale adversarial team multi-player games and 2-player zero-sum games, achieving better performance and faster convergence than state-of-the-art algorithms.
Main Contributions. The main contribution of our work is a generic multi-player transformation algorithm (MPTA) that converts the game tree of an adversarial team multi-player game (ATMG) into the game tree of a 2-player zero-sum game, with theoretical guarantees. Thus, classical and efficient algorithms for 2-player zero-sum games can be used to compute a TMECor in large-scale ATMGs (e.g., CFR <cit.>, CFR+ <cit.>, DCFR <cit.>, MCCFR <cit.>, etc.). One of the reasons large-scale ATMGs are difficult to solve is that the action space grows exponentially with the number of players. To design this algorithm, we propose a new structure, called the private information pre-branch, which reduces the growth of the action space from exponential to constant. In theory, the more players there are, the greater the reduction in the action space. Therefore, the game tree produced by our transformation is smaller than those of similar algorithms, which speeds up the computation of TMECor and makes it usable in larger team games that are closer to real-world problems. Furthermore, to provide a primary theoretical guarantee for our work, we prove the equilibrium equivalence between the game before and after the transformation. Finally, experiments demonstrate the strong performance of our method: the team-maxmin strategy profile obtained by our algorithm is closer to a TMECor than those of the baseline algorithms given the same running time, and our algorithm significantly reduces the running time for the same number of iterations.
§ RELATED WORK
Many studies have focused on adversarial team multi-player games (ATMGs) in an attempt to find solutions since the concept of the Team-maxmin Equilibrium (TME) was introduced in 1997. Kannan et al. <cit.> adapt the idea of a team game and develop an algorithm for finding optimal paths in information networks. Hansen et al. <cit.> prove that the task of obtaining a TME is FNP-hard and are the first to derive its theoretical time complexity. According to the communication capabilities of the team members, Celli and Gatti <cit.> are the first to define three different scenarios and the corresponding equilibria in extensive-form ATMGs. In particular, they are the first to compute the TMECor of ATMGs (i.e., the equilibrium in the scenario where team members are allowed to communicate before the game starts) using a column generation algorithm combined with a hybrid strategy representation.
Thereafter, <cit.> propose a series of improvements to the Hybrid Column Generation (HCG) algorithm. The main disadvantage of this method is that the feasible solution space of the integer or mixed-integer linear program is too large, which severely limits the size of games it can solve and the speed at which it can solve them. Basilico et al. <cit.> propose a modified version of the quasi-polynomial-time algorithm and a novel anytime approximation algorithm named IteratedLP. IteratedLP maintains a current solution that provides, for each team member, a strategy that can be returned at any time. Farina et al. <cit.> adopt a new realization-form representation for mapping the problem of finding an optimal ex-ante-coordinated strategy for the team to the problem of finding an NE. Zhang and Sandholm <cit.> devise a tree-decomposition algorithm for solving team games. To reduce the number of constraints required, the authors apply a tree decomposition to the constraints and represent the team's strategy space by a polytope of correlated strategies. Since the team needs to sample a strategy for each player from a joint probability distribution, Farina et al. <cit.> propose a model for computing the optimal outcome distribution, allowing the team to obtain a higher payoff by increasing the upper limit on the number of strategy profiles. Cacciamani et al. <cit.> and Celli et al. <cit.> use multi-agent reinforcement learning approaches; the former adds a game-theoretic centralized training regimen and uses a buffer of past experiences. Unfortunately, these reinforcement learning methods can only be applied in particular circumstances. The idea of representing the team by a single coordinator, as used by Carminati et al. <cit.>, is closely related to ours. However, the size of the game tree generated by their algorithm grows exponentially as the action space increases, which makes it difficult to apply to large-scale team games.
In our work, we propose an approach for transforming a team multi-player game tree into a 2-player game tree, where the 2-player game tree is constructed in a form distinct from that of <cit.>. In the converted game tree, the coordinator represents strategies in the same way as the team members. At the same time, the number of new nodes is greatly reduced, allowing the game tree to avoid exponential growth in size.
§ PRELIMINARIES
This section briefly introduces some of the basic concepts and definitions used in this paper. To learn more details, see also <cit.>. For clarity and intuition, detailed descriptions of some variables are shown in Table <ref>.
§.§ Extensive-Form Games and Nash Equilibrium
An extensive-form game G is the tree-form model of imperfect-information games with sequential interactions <cit.>.
A finite extensive-form game G is a tuple ⟨ N, A, H, Z, 𝒜, P, u, I⟩:
* A set N of players, N = {1, 2, ..., n};
* A set A of all possible actions, A = ∪_p ∈ N A_p, where A_p denotes the set of actions available to player p;
* A set H of nodes in the game tree; each node h ∈ H can also be represented by its action history (the sequence of actions from the root node to h);
* A set Z containing all the leaf (terminal) nodes of the game tree, Z ⊆ H;
* For each decision node h ∈ H, the function 𝒜(h) returns the set of actions available at h;
* Given a node h, the function P(h) returns the player who takes an action after node h;
* For each player p ∈ N, the utility function u_p maps each terminal node z ∈ Z to a real-valued payoff in ℝ;
* For each player p ∈ N, a collection I_p of information sets; all nodes h, h^' in the same information set I ∈ I_p are indistinguishable to p.
The Nash equilibrium is a common solution concept in zero-sum extensive-form games and is a strategy profile for player i denoted as σ^*_i. Given the strategy profile for n players, σ_1, σ_2, …, σ_n, an NE can be formally represented as
u_i(σ) ≥ max _σ_i^* ∈Σ_i u_i(σ_i^*, σ_-i)
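To make the condition concrete, the following minimal sketch (in Python, assuming a two-player zero-sum normal-form game rather than the general n-player extensive form above) checks whether a candidate profile satisfies the inequality by comparing each player's payoff with the best payoff attainable by a unilateral deviation; the payoff matrix is purely illustrative.

import numpy as np

# Payoff matrix of player 1 in matching pennies; player 2 receives the negative.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def is_nash(sigma1, sigma2, tol=1e-9):
    """Check u_i(sigma) >= max_{sigma_i^*} u_i(sigma_i^*, sigma_-i) for both players."""
    u1 = sigma1 @ A @ sigma2           # player 1's expected payoff under the profile
    u2 = -u1                           # zero-sum game
    best1 = np.max(A @ sigma2)         # best unilateral deviation of player 1
    best2 = np.max(-(sigma1 @ A))      # best unilateral deviation of player 2
    return u1 >= best1 - tol and u2 >= best2 - tol

print(is_nash(np.array([0.5, 0.5]), np.array([0.5, 0.5])))   # True: uniform play is an NE
print(is_nash(np.array([1.0, 0.0]), np.array([0.5, 0.5])))   # False: player 2 can deviate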
§.§ Adversarial Team Multi-Player Games and Team-Maxmin Equilibrium with Correlation
In this paper, we concentrate on extensive-form adversarial team multi-player games. Formally, an adversarial team multi-player game (ATMG) 𝒢 has N (N ≥ 3) players, in which a team of N-1 members plays against a single adversary 𝒪. We refer to 𝒞 and 𝒯 as the chance player and the team (coordinator), respectively. We restrict attention to zero-sum extensive-form ATMGs, where u_𝒯 = -u_𝒪. Whether the team wins or loses, its members share the outcome equally and have the same utility function, that is, u_i(l)=u_j(l), ∀ i, j ∈𝒯. Moreover, this work focuses on games with perfect recall, where every player recalls its previous actions and the corresponding information sets.
A behavioral strategy σ_p of player p∈ N is a function that assigns a distribution over all the available actions 𝒜(I_p) to each I_p.
A behavioral strategy profile σ is composed of each player's behavioral strategy, where σ = {σ_1, σ_2, ⋯, σ_n}. The extensive-form ATMGs also provide additional strategy representation:
A normal-form plan π_p ∈×_I_p𝒜(I_p) of player p is a tuple specifying one action for each information set of player p.
Furthermore, the normal-form strategy of player p ∈ N is denoted μ_p and is a probability distribution over normal-form plans. Similarly, a normal-form strategy profile is μ = {μ_1, μ_2, ⋯, μ_n}. Given a normal-form strategy μ_p of player p ∈ N, μ_-p refers to all normal-form strategies in μ except μ_p. A TMECor is proven to be a Nash equilibrium (NE) that maximizes the team's utility <cit.>. In ATMG settings, TMECor differs from NE in that it always exists and its value is unique. Team members use behavioral strategies when computing a TME, because no coordination among them is required there. Our work, however, focuses on the scenario of ex ante correlation: if behavioral strategies were still adopted, the correlation between the team members' normal-form strategies could not be captured accurately owing to the lack of coordination <cit.>. Therefore, in order to compute a TMECor, the team members must adopt normal-form strategies.
A TMECor can be found by solving a linear program formulated over the normal-form strategy profiles of all players:

max_μ_𝒯 min_μ_𝒪 ∑_z ∈ Z ∑_π_p∈Π_p(z), p ∈𝒯, π_𝒪∈Π_𝒪(z) μ_𝒯[π_p] μ_𝒪[π_𝒪] u_𝒯(z)

s.t. μ_𝒯∈Δ(×_p ∈𝒯Π_p)
μ_𝒪∈Δ(Π_𝒪)
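For intuition about the max-min structure of this program, the sketch below solves the analogous problem for a small zero-sum game in which the team's joint normal-form plans have already been enumerated as the rows of a utility table. It uses scipy's linprog, the table is illustrative, and none of the machinery of the actual TMECor formulation over exponentially many plans is modelled.

import numpy as np
from scipy.optimize import linprog

def team_maxmin(U):
    """Max-min over the probability simplex: rows of U index the team's joint
    plans, columns index the adversary's plans, and U[i, j] is the team utility."""
    n_team, n_adv = U.shape
    # Decision variables: mu (n_team probabilities) followed by the scalar game value v.
    c = np.zeros(n_team + 1)
    c[-1] = -1.0                                   # maximize v  <=>  minimize -v
    # For every adversary plan j:  v - sum_i mu_i U[i, j] <= 0
    A_ub = np.hstack([-U.T, np.ones((n_adv, 1))])
    b_ub = np.zeros(n_adv)
    A_eq = np.hstack([np.ones((1, n_team)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_team + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_team], res.x[-1]               # team strategy and guaranteed value

mu, value = team_maxmin(np.array([[1.0, -1.0], [-2.0, 3.0], [0.5, 0.5]]))  # toy utility table
print(mu, value)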
§ METHOD
§.§ Overview
To solve adversarial team multi-player games, the core of our work is to transform a multi-player game satisfying the ATMG definition into a 2-player game, and then use the well-established algorithms for 2-player games to find an NE equivalent to the TMECor, as shown in Figure <ref>. To achieve this, we first construct a new structural representation for the team member nodes in the original multi-player game tree. Second, we use this structure to design an algorithm named MPTA, which transforms an original multi-player game tree into a 2-player game tree. MPTA consists of two phases: 1) traversing the whole original game tree and transforming all nodes to obtain a 2-player game tree; 2) merging information sets with temporary private information and pruning the 2-player game tree.
§.§ The Structure of Private Information Pre-Branch
We add coordinator players and temporary chance nodes during the MPTA process; together they form a new structure that we call the private information pre-branch (PIPB), shown in the triangular box in Figure <ref>. The coordinator only makes decisions for one team member at a time, so we provide it with all possible hand-card information in advance. Extensive-form games involve sequential decisions, and we assume that the adversary acts first. According to this structure, the parent of each coordinator node must be a temporary chance node; in other words, the coordinator makes decisions based on the PIPB structure.
Given a transformed game 𝒢, each occurrence of a coordinator node c in the game tree increases the action space by | cards |×| A |, where | cards | indicates the number of cards.
§.§ Phase 1: Game Tree Transformation
The primary distinction between MPTA and previous approaches is that the coordinator in the transformed game tree represents the team member who is currently playing rather than the entire team. Team members have both private information, e.g., their hand cards, and public information that can be observed by the other players, including the adversary, e.g., the game history. Note that the coordinator only knows the private information of the team member it currently represents and the public information of the current situation, not the other players' private information. The pseudo-code of the transformation process is given in Algorithm <ref>.
Since our algorithm is built on a game tree, we first construct a complete game tree for an improved generic game scenario (e.g., Kuhn Poker <cit.>), as illustrated in the dashed box on the left of Figure <ref>. The chance player who deals the cards is the root node of the original game tree, and its branches represent the different dealing outcomes. As the number of cards rises, the number of the chance player's child nodes typically increases as well. The nodes below the root are the players' decision nodes, where the players choose their actions in turn according to the rules of a sequential game. The payoff for a round is calculated once all players have finished their actions and reached a terminal node. In particular, team members share the outcome whether they win or lose, because they use the same utility function in an ATMG.
Then, an original game tree G is passed as a parameter to MPTA. For the root node, we add each of its branches to the transformed game tree 𝒢. Player decision nodes in G fall into two categories: decision nodes of team members and decision nodes of the adversary. Adversary decision nodes are left unmodified; that is, the same branches are built in 𝒢 for them. We introduce a coordinator player to replace the team member who is taking action. More specifically, we transform each team member decision node into a coordinator node that provides the current player's strategies. However, when first choosing an action on behalf of a team member, the coordinator is not aware of that member's hand cards. We therefore construct temporary chance nodes to represent all potential hand-card situations for the team member, as shown in Figure <ref>. We also design an information pool that stores the team members' private information, which is used to optimize our approach in Subsection <ref>.
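A minimal sketch of this first phase is given below (Python). It is only illustrative: the node representation, the enumeration of possible hands, and the handling of the root chance node are placeholders, and the bookkeeping of the information pool in Algorithm <ref> is omitted.

from dataclasses import dataclass, field

TEAM, ADVERSARY, CHANCE, COORDINATOR, CHANCE_TMP, LEAF = (
    "team", "adv", "chance", "coord", "chance_tmp", "leaf")

@dataclass
class Node:
    kind: str                                      # who acts at this node
    history: tuple = ()                            # public action history
    private: object = None                         # acting player's private information
    children: dict = field(default_factory=dict)   # action -> child node
    payoff: float = 0.0                            # used only at leaves

def transform(node, possible_hands):
    """Phase 1 sketch: copy chance, adversary and leaf nodes unchanged; replace each
    team-member node by a temporary chance node whose branches enumerate the hands
    the coordinator might have to decide for, each leading to a coordinator node
    with the same public history (the PIPB structure)."""
    if node.kind in (LEAF, ADVERSARY, CHANCE):
        copy = Node(node.kind, node.history, node.private, payoff=node.payoff)
        copy.children = {a: transform(ch, possible_hands) for a, ch in node.children.items()}
        return copy
    if node.kind == TEAM:
        tmp = Node(CHANCE_TMP, node.history)
        for hand in possible_hands:                # pre-branch on virtual private information
            coord = Node(COORDINATOR, node.history, private=hand)
            coord.children = {a: transform(ch, possible_hands)
                              for a, ch in node.children.items()}
            tmp.children[hand] = coord
        return tmp
    raise ValueError(f"unexpected node kind: {node.kind}")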
§.§ Phase 2: Merging and Pruning
Merging information sets with temporary private information. The information set, which encodes both the public action history and the private information of a player, is an indispensable ingredient of imperfect-information games. Each branch of the root node represents a different distribution of cards. In the original game tree, no two nodes within the same branch belong to the same information set. After the transformation, however, the situation is quite different. Starting at the root node, we select two branches of the transformed game tree in which a given player holds the same hand cards, and traverse the players' nodes in turn, skipping temporary chance nodes. Nodes of the same player at the same depth belong to a single information set whenever their game histories coincide, even though they carry different virtual private information. In our work, nodes are assigned the same identifier to mark membership in the same information set.
Pruning the transformed tree. Within the same branch of the root node, the coordinator lacks a team member's private information only when it makes the first decision on behalf of that member. Once every team member has been replaced by the coordinator, their private information is stored in the information pool, and the coordinator is then allowed to extract the current team member's hand cards from the pool when making subsequent decisions. During the transformation, the coordinator is only authorized to extract the private information of one player at a time, which reflects the fact that the team members' private information is mutually independent. In this way, we can prune the transformed game tree to reduce its size, as shown in Algorithm <ref>.
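A toy sketch of the merging step, continuing the Node class from the previous sketch, is given below. It groups coordinator nodes purely by their public histories, mirroring the rule that nodes with the same game history share an information set despite different virtual private information; the full algorithm also distinguishes nodes by the hand genuinely dealt to the represented team member (tracked in the information pool), which is not modelled here.

from collections import defaultdict

def assign_information_sets(root):
    """Group coordinator nodes with identical public histories into one information
    set, regardless of the virtual private information introduced by the temporary
    chance nodes, and give each group a shared identifier."""
    groups = defaultdict(list)
    stack = [root]
    while stack:
        node = stack.pop()
        if node.kind == COORDINATOR:
            groups[node.history].append(node)
        stack.extend(node.children.values())
    for set_id, members in enumerate(groups.values()):
        for n in members:
            n.info_set = set_id          # nodes sharing an id are indistinguishable
    return groups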
§ EQUILIBRIUM EQUIVALENCE PROOF
As a primary guarantee for this work, we prove theoretically that the game trees before and after the transformation are essentially equivalent. It follows that a TMECor of the adversarial team multi-player game corresponds to an NE of a 2-player zero-sum extensive-form game. For simplicity, we denote the original game tree by G, and the transformed game tree 𝒢 is the return value of MPTA(G) in Algorithm <ref>. σ^* denotes the TMECor in G. Given a player p, σ_p denotes its strategy in G and μ_p its normal-form strategy in 𝒢. Let H_σ_p be the set of decision nodes reached under player p's strategy σ_p, and let c be the coordinator player in 𝒢. Note that a temporary chance node 𝒞_t is not a player's decision node. The equilibrium equivalence is proved through the following lemmas and theorems.
Given a multi-player game tree G that satisfies the definition of ATMGs, and the transformed game tree 𝒢 = MPTA(G), for any player p, any decision node h_p in G can be mapped to the decision node h^'_p in 𝒢. Formally, ∀ p ∈ N in G, h ∈ H and h^'∈ H^', H_p⫋ H^'_p.
We can prove that Lemma <ref> holds according to the process of transformation in Algorithm <ref>, using the characteristics of the game tree structure. To avoid notational confusion, let Z and Z^' represent the set of leaf nodes in G and 𝒢, respectively.
* Leaf nodes: because leaf nodes correspond to the players' payoffs, they are added directly to the game tree without any changes. Formally, Z = Z^'.
* Adversary nodes: σ_𝒪 in G is mapped to μ_𝒪 in 𝒢 by our method, which guarantees the equality of H_σ_𝒪 and H_μ_𝒪.
* Chance node: the root node of the game tree is not changed during the transformation, so 𝒞 in G and 𝒞^' in 𝒢 are the same.
* Team member nodes: for any decision node of a team member p, there is always a corresponding decision node of the coordinator player c with the same game history in the transformed game tree.
Since the coordinator player does not know the private information of all players, temporary chance node of the PIPB structure in 𝒢 will provide the coordinator player with all potential situations of hand cards.
This is the end of the proof.
Given a multi-player game tree G that satisfies the definition of ATMGs, and the transformed game tree 𝒢 = MPTA(G), for any player p, any decision node h_p in 𝒢 corresponds to a decision node h^'_p in G. Formally, ∀ p ∈ N in G, h ∈ H and h^'∈ H^', H_p⫋ H^'_p.
The proof is similar to that of Lemma <ref>, so the shared steps are not repeated here. Furthermore, the coordinator nodes added under 𝒞_t in 𝒢 belong to the same information set, so their correspondence with decision nodes in G still holds.
This is the end of the proof.
Given a multi-player game tree G that satisfies the definition of ATMGs, and the transformed game tree 𝒢 = MPTA(G), every strategy of any player p in the original game is mapped to a strategy in the transformed game. Formally,
∀ p ∈𝒯: σ_p(h) = μ_c(h)
σ_𝒪(h) = μ_𝒪'(h)
σ_𝒞 = μ_𝒞
It follows from Lemma <ref> and Lemma <ref> that the players' strategies in the original game and in the transformed game are equivalent.
Given a multi-player game tree G that satisfies the definition of ATMGs, and the transformed game tree 𝒢 = MPTA(G), for any team member p and opponent player 𝒪, we have:
∀ p ∈ N_G, c ∈ N_𝒢 : u_p(σ)= u_c(μ)
𝒪∈ N_G, 𝒪^'∈ N_𝒢 : u_𝒪(σ) = u_𝒪^'(μ)
where the strategy μ is a mapping of strategy σ from G to 𝒢.
It follows from Lemma <ref>, Lemma <ref> and Corollary <ref> that the players' payoffs in the original game and in the transformed game are equivalent. In fact, the terminal nodes carrying the utilities are added directly to 𝒢 in the transformation steps of Algorithm <ref>.
Given a multi-player game tree G that satisfies the definition of ATMGs, and the transformed game tree 𝒢 = MPTA(G), a Nash equilibrium in 𝒢 has equilibrium equivalence with TMECor in G. Formally, μ^*_c = σ^*_𝒯.
Assume that σ^*_𝒯 is a TMECor. Then, according to Equation <ref>, we obtain:

σ^*_𝒯∈arg max_σ_𝒯 min_σ_𝒪 ∑_z ∈ Z ∑_π_p∈Π_p(z), p ∈𝒯, π_𝒪∈Π_𝒪(z) σ_𝒯[π_p] σ_𝒪[π_𝒪] u_𝒯(z)

According to Corollary <ref>, the above formula can be converted into

μ^*_c∈arg max_μ_c min_μ_𝒪 ∑_z ∈ Z ∑_π_c∈Π_c(z), π_𝒪∈Π_𝒪(z) μ_c[π_c] μ_𝒪[π_𝒪] u_c(z)
Let min_TMECor(σ_𝒯) and min_NE(μ_c) denote the inner minimization problems in the definitions of TMECor and NE, respectively. Suppose there exists a μ_c^' whose value is larger than that of σ_𝒯^*, i.e., min_NE(μ_c^') > min_NE(σ^*_𝒯). By Theorem <ref>, this can be converted into min_TMECor(μ^'_c) > min_TMECor(σ^*_𝒯).
This contradicts the hypothesis that σ_𝒯^* attains the maximum.
Hence, we obtain:
μ_c^*∈max_p ∈𝒯min_TMECor(σ_𝒯[ π_p])
This is the end of the proof.
§ EXPERIMENTAL EVALUATION
In this section, we describe the experimental scenarios and the methods we compare against. The performance of our method is verified by comparing it with the state-of-the-art algorithm in different scenarios.
§.§ Experimental Setting
We conduct our experiments on standard testbeds for adversarial team multi-player games (ATMGs). More precisely, we use modified versions of Kuhn Poker <cit.> and Leduc Poker <cit.>. By taking the number of players and cards as parameters and modifying the team's utility function, these games can be made to satisfy the definition of ATMGs. We set up scenarios in which a group of players forms a team against a single player and unify the team's utility function accordingly. Furthermore, by varying the number of players, suits, and cards, we generate experimental platforms of different complexity. We adopt a total of 12 scenarios of varying difficulty, 6 each for Kuhn Poker and Leduc Poker, where the default maximum number of bets allowed per betting round is 1. In Kuhn Poker, the number of players ranges from 3 to 5 and the number of cards is 3, 4, 6, or 8. In Leduc Poker, the number of players ranges from 3 to 5, the number of cards is 3, 4, or 6, and the number of suits is 3. It is worth noting that the more complex 5-player scenarios have not been attempted before. All experiments are run on a machine with an 18-core 2.7GHz CPU and 250GB of memory.
We use Counterfactual Regret Minimization plus (CFR+), a well-established approach for finding Nash equilibria in 2-player zero-sum extensive-form games, as the solver interfaced with both MPTA and the baseline algorithm; our method could equally use other CFR-like algorithms. In this paper, the baseline is the Team-Public-Information Conversion Algorithm (TPICA), the previous best method closest to our work. TPICA gives the coordinator the information that is common to the whole team and provides all team members with the corresponding actions for each possible private state.
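For reference, the regret-matching+ update at the heart of CFR+ is sketched below on a zero-sum matrix game (Python). The full CFR+ algorithm applies this update at every information set of the extensive-form tree using counterfactual values, which is not shown here, and the payoff matrix is illustrative.

import numpy as np

def regret_matching_plus(A, iterations=5000):
    """Self-play with regret-matching+ and linear strategy averaging on a
    zero-sum matrix game with payoff matrix A for the row player."""
    n, m = A.shape
    Q1, Q2 = np.zeros(n), np.zeros(m)            # clipped cumulative regrets
    S1, S2 = np.zeros(n), np.zeros(m)            # weighted strategy sums
    for t in range(1, iterations + 1):
        s1 = Q1 / Q1.sum() if Q1.sum() > 0 else np.full(n, 1.0 / n)
        s2 = Q2 / Q2.sum() if Q2.sum() > 0 else np.full(m, 1.0 / m)
        u1 = A @ s2                               # row player's action values
        u2 = -(s1 @ A)                            # column player's action values
        Q1 = np.maximum(Q1 + u1 - s1 @ u1, 0.0)   # regret-matching+ clipping
        Q2 = np.maximum(Q2 + u2 - s2 @ u2, 0.0)
        S1 += t * s1                              # CFR+-style linear averaging
        S2 += t * s2
    return S1 / S1.sum(), S2 / S2.sum()

avg1, avg2 = regret_matching_plus(np.array([[2.0, -1.0], [-1.0, 1.0]]))
print(avg1, avg2)     # both approach the equilibrium mix (~0.4, ~0.6) of this toy game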
§.§ Experimental Results
Total runtime for finding a TMECor.
In Table <ref>, we report in detail the sizes of the game trees in the different scenarios, both for the original game tree and for the tree transformed by our method, and compare them with the baseline method (TPICA) proposed by Carminati et al. <cit.>. TPICA shares our goal of finding a TMECor in ATMGs using the effective tools of 2p0s games. The data in Table <ref> show that our method yields substantially smaller transformed game trees than TPICA. In Kuhn Poker, the sizes of 21K3, 21K4, and 21K6 are reduced by 9.25 ×, 431.72 ×, and 1,476.26 ×, respectively. Blank cells in Table <ref> indicate that TPICA fails to convert the game because it runs out of memory and thus cannot produce a valid game tree. In the four cases 21K3, 21K4, 21K6, and 21L33, where both MPTA and TPICA work, the total time required by our approach to compute a TMECor is 0.76s, 9.26s, 144s, and 240s respectively, which is 182.89 ×, 168.47 ×, 694.44 ×, and 233.98 × faster than TPICA. This shows that our method is effective in reducing the action space, improving the solving speed by several orders of magnitude. It is worth noting that MPTA remains applicable in other large-scale scenarios where TPICA cannot transform the original game trees. In particular, 41K6 and 41L33 are 5-player cases that have never been used in experiments by previous algorithms because of their sheer size. In addition, we observe that the speed-up is mainly due to the proposed structure, which greatly reduces the number of adversary nodes and temporary chance nodes.
Solving efficiency.
Exploitability is widely used as an evaluation criterion for a strategy profile. Informally, it measures the gap between the current policy and the optimal policy, so a smaller exploitability indicates that the current strategy is closer to the TMECor of the original multi-player game. Beyond comparing total solution times, we also want insight into how efficiently the algorithms solve the transformed game trees as they run. For this purpose, we select three cases each from Kuhn Poker and Leduc Poker and track how the exploitability of MPTA and of the baseline varies over time within a running-time limit of 100,000 seconds, as shown in Figure <ref>. In the three comparable cases, 21K3, 21K6, and 21L33, the curve for MPTA always lies below that of TPICA, and the gap between the two curves widens as the size gap between the game trees grows. This suggests that CFR+ solves the game trees transformed by MPTA more efficiently. In the remaining scenarios, the convergence to equilibrium is faster in 41K6 than in 31L43 and 41L33, and the MPTA curve fluctuates more sharply in 41L33, indicating that computing an equilibrium becomes harder as the game scale increases.
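As a minimal illustration of the quantity being plotted, the sketch below computes exploitability for a zero-sum matrix game as the total gain available to best-responding on both sides (Python); the paper evaluates the analogous quantity on the extensive-form trees, which requires best-response traversals rather than this matrix shortcut.

import numpy as np

def exploitability(A, s1, s2):
    """Zero iff (s1, s2) is a Nash equilibrium of the zero-sum game with payoff
    matrix A for the row player; positive otherwise."""
    br1_value = np.max(A @ s2)      # best the row player could do against s2
    br2_value = np.min(s1 @ A)      # best the column player can hold the row player to
    return br1_value - br2_value

A = np.array([[2.0, -1.0], [-1.0, 1.0]])
print(exploitability(A, np.array([0.4, 0.6]), np.array([0.4, 0.6])))   # ~0 at the equilibrium
print(exploitability(A, np.array([0.5, 0.5]), np.array([0.5, 0.5])))   # 0.5: uniform play is exploitable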
Execution efficiency.
Finding a TMECor is an iterative process, and Figure <ref> compares the time taken by the algorithms over the same number of iteration rounds. As the figure shows, in all comparable cases the game trees transformed by MPTA take considerably less time to complete the computation of a normal-form strategy profile than those transformed by TPICA, for any number of iteration rounds. 21K6 and 21L33 are an order of magnitude larger than 21K3 and 21K4, yet the results in Figure <ref> and Figure <ref> show that the advantage of our approach is even more evident in these two large-scale games.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we present a generic multi-player transformation algorithm (MPTA) that transforms a multi-player game tree satisfying the definition of ATMGs into a 2-player game tree, thereby building a bridge between 2p0s games and multi-player games. In addition, we show theoretically that the proposed structure limits the growth of the transformed game's action space from exponential to a constant factor, and we prove the equilibrium equivalence between the original and transformed game trees, which provides a theoretical guarantee for our work. We experiment with several scenarios of varying complexity and show that our method finds a TMECor several orders of magnitude faster than the state-of-the-art baseline. To the best of our knowledge, this work is the first to solve ATMGs with 5 or more players.
In the future, we will test our algorithm in more challenging and complex real-world scenarios. We also plan to leverage the data-processing and generalization capabilities of deep neural networks to provide real-time strategies for agents.
|
http://arxiv.org/abs/2307.03142v1
|
20230706171124
|
Tidal disruption of white dwarfs in a modified gravity theory with SPH
|
[
"Debojyoti Garain",
"Pritam Banerjee",
"Shaswata Chowdhury",
"Tapobrata Sarkar"
] |
gr-qc
|
[
"gr-qc",
"astro-ph.HE"
] |
1]Debojyoti Garain, dgarain@iitk.ac.in
2]Pritam Banerjee, pritam@phy.iitkgp.ac.in
1]Shaswata Chowdhury, shaswata@iitk.ac.in
1]Tapobrata Sarkar, tapo@iitk.ac.in
[1]Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India
[2]Department of Physics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India.
Tidal disruption of white dwarfs in a modified gravity theory with SPH
=======================================================================
Low-energy imprints of modifications to general relativity often appear in the pressure balance equations inside stars. Such modifications are then amenable to
tests via astrophysical phenomena, using observational effects in stellar astrophysics that depend crucially on these equations. One such effect is the tidal disruption of stars in the vicinity of black holes. In this paper, using a numerical scheme based on smoothed particle hydrodynamics, we study the real-time tidal disruption of a class of white dwarfs by intermediate-mass black holes in the low-energy limit of a theory of modified gravity that alters the internal physics of white dwarfs,
namely the Eddington-inspired Born-Infeld theory. In this single-parameter extension of general relativity,
the mass-radius relation of white dwarfs as well as their tidal disruption radius depend on the modified gravity parameter, and these quantities capture the effect of modifications to general relativity.
Our numerical simulations incorporating them show that departures from general relativity in these scenarios may be observationally significant and should
therefore be contrasted with data. In particular, we study observationally relevant
physical quantities, i.e., the tidal kick velocity and trajectory deviation of the remnant core and the fallback rates of the tidal debris, in this theory and compare them with the Newtonian limit of general relativity. We also comment on
the qualitative differences between the modified gravity theory and one with stellar rotation.
§ INTRODUCTION
Einstein's theory of general relativity (GR) remains the most successful theory of gravity to date. However, it is now widely believed that
modifications to GR are probably necessary and unavoidable. On the phenomenological side, issues of
cosmic acceleration and the cosmological constant have motivated many groups to search for possible modifications of GR, especially as an
alternative explanation to dark energy, for recent reviews see, e.g. <cit.>.
At a more fundamental level lies the issue of singularities. Indeed, singularities are known to arise
in gravitational collapse processes, a paradigmatic example being a star that, with its core fuel exhausted, collapses to a black hole.
Singularities, often manifest as geodesic incompleteness of space-times (see e.g. <cit.>), signal a breakdown or the limits of applicability of GR
(for comprehensive overviews, see <cit.> or the more recent review in <cit.>).
Singularities have been studied ever since the inception of GR, and in spite of several celebrated works, the issue of their resolution remains unclear.
In this context, it is believed that for singularity-free theories, extra degrees of freedom should
possibly manifest themselves in the regime of strong gravity. A ubiquitous statement in this context is that quantum
effects should smooth out classical singularities, but it is fair to say that we are quite far from reaching a consistent quantum
theory of gravity. Now, in the absence of a well established quantum version of GR, a natural alternative is to construct a classical theory
of gravity itself that is free of singularities. The astrophysical consequences of such a recently discovered theory of modified gravity – called the
Eddington inspired Born-Infeld (hereafter EiBI) gravity – will be the main focus of this paper. In particular, we demonstrate the dynamics of tidal disruption of white dwarfs (WDs) whose interiors are modelled by the Newtonian limit of EiBI gravity, due to an intermediate-mass black hole (IMBH).
This is done by numerically incorporating effects of modified gravity in smoothed particle hydrodynamics (SPH).
As we have mentioned, any modification to GR, at cosmological or Planck scales, is associated with incorporating extra degrees of freedom in the theory.
These are naturally associated with extra parameters that needs to be introduced, the simplest case being a one-parameter modification
of GR which occurs in EiBI gravity. Now, any such modification has its imprints in the low energy
Newtonian limit, and hence will have observationally important consequences. Indeed, there is a sizeable body of literature by now that seeks to
constrain these parameters from astrophysical observations (for recent reviews see <cit.>). In this paper,
we study a somewhat different perspective. Namely, assuming the established constraints on EiBI theory, we seek to understand
real time stellar dynamics within the ambits of such constraints and establish how physical quantities are modified due to the freedom in
choosing an extra tuneable parameter in the theory. Importantly, it allows us to compute such physical quantities as a function of the modified gravity parameter so that we can quantify any deviation from their expected GR values.
Any observable difference of measurable astrophysical quantities from the standard Newtonian case should point to a possibility of
a modification of GR in general. Conversely, if such features are not observed observationally, it rules out the possible modification
being considered.
To set the stage and to establish the mathematical formalism used in the rest of the paper, let us first briefly review the
construction of EiBI gravity, that attempts to classically regularise singularities in GR. To this end, recall that
attempts at regularising singularities have a long history, starting from the celebrated work of
Born and Infeld <cit.>, who constructed a version of electromagnetic theory free from the divergences
associated with the Maxwell formalism.
Applied to GR, one such attempt in the recent past is the construction of the theory by Banados and Ferreira (hereafter BF) <cit.>, and
builds upon that of <cit.> and <cit.>. Let us briefly review this construction. Recall that the standard version
of Born-Infeld electrodynamics replaces the Maxwell Lagrangian ℒ = -F^2/4 by
ℒ = b^2(1-√(| det(η + F/b)|)), where F is the Maxwell field tensor, η the Minkowski metric, and
b is the Born-Infeld parameter. This theory in principle eliminates the infinite self energy associated with a point particle in Maxwell's theory,
and the Maxwell Lagrangian is recovered for F ≪ b. In a similar spirit, Eddington <cit.> proposed
a variant of the Einstein-Hilbert action for GR, and in this formalism, the Lagrangian (apart from pre-factors)
is taken to be √(| det R_μν|), with R_μν being the Ricci tensor.
Here, the affine connection is considered as a dynamical variable (the so called Palatini formalism), and variation of the action
with respect to the connection gives Einstein's equation in the presence of a cosmological constant. Note that
this latter equation can alternatively be obtained from a variation of the Einstein-Hilbert Lagrangian proportional to
√(| detg_μν|)(R - 2Λ), with the metric g_μν being the dynamical variable, and where
R = g^μνR_μν being the Ricci scalar and Λ is the cosmological constant.
This line of reasoning was revived in the late 1990s by <cit.> who considered a gravitational action of the
square root form, but with an additional tensor field that needed to be tuned order by order to remove
ghost instabilities. Motivated by <cit.> and subsequent works of <cit.> which used the Palatini formalism,
BF considered a Born-Infeld type of action, with a minimal coupling of gravity with matter
fields. The BF Lagrangian reads ℒ∼√(| det[g_μν + ϵ R_(μν)]|)-λ√(| det g_μν|), apart from the matter contribution, and one considers the
symmetric part of the Ricci tensor in the Lagrangian, denoted by the parentheses around the indices (see e.g., <cit.>). In this formalism, 1/ϵ is the
Born-Infeld mass M_BI≪ M_Pl, the Planck mass. Also, λ is a dimensionless non-zero parameter,
related to the cosmological constant, with λ = 1 giving asymptotically flat solutions.
The BF formalism leads to singularity-free cosmology, and it was shown by <cit.> that this avoids singularities that arise
in gravitational collapse (see also <cit.>). The BF modification to GR has come to be known in the literature as EiBI gravity.
In the non-relativistic limit, EiBI gravity gives rise to a modified Poisson equation, with the modification
of the low energy limit of Einstein gravity being
characterised by a coupling term that is non-zero only in the presence of matter. Since the Poisson equation
is used as a basic input in many formulas of stellar observables, it is then natural that EiBI theories can have important
consequences in stellar astrophysics. Indeed, there has been a variety of works in the recent past in this direction. The work of <cit.>
proposed tests for the theory using solar constraints. Further, <cit.> studied such constraints using cosmological
and astrophysical scenarios, and <cit.> obtained bounds on EiBI theories by demanding that electromagnetic
forces dominate gravitational ones in nuclear reaction. More recently, the work of <cit.> has put constraints on the theory
from an analysis of WDs, and <cit.> studied gravitational waves in non-singular EiBI cosmological models.
Further recent studies on constraining EiBI theories appear in <cit.>.
A recent comprehensive review of EiBI gravity and related phenomenological tests appear in the work of <cit.>.
To be specific, in the low energy limit, EiBI gravity introduces a correction term inside a
matter source, while GR is recovered outside matter. In the Newtonian limit, the modified Poisson equation takes the form
∇^2 Φ = 4 π G ρ + κ/4∇^2ρ ,
where Φ represents the gravitational potential, ρ is the matter density, and G denotes the gravitational constant.
The parameter κ represents the correction term introduced by the EiBI theory. Equation (<ref>) is attractive from a numerical
point of view. Namely, it does not require us to assume spherical symmetry, unlike other known examples of modified
gravity theories such as the well-studied beyond-Horndeski class (see e.g., <cit.>). In fact, the form of
Equation (<ref>) allows us to work in Cartesian coordinates, which is a big advantage over this latter class of theories, since
spherical symmetry no longer holds for a tidally disrupted star. We do, however, first need to construct a spherically
symmetric star within EiBI gravity before studying it in the presence of tidal fields to simulate its tidal disruption event.
In order to numerically simulate tidal disruption events, we first need to prepare a spherical WD which is in equilibrium.
To do this, we note that in EiBI gravity the radial acceleration in the spherically
symmetric case can be expressed as
a(r) = -G m(r)/r^2 - κ/4dρ/dr ,
where m(r) represents the enclosed mass within a radius r; the additional term arises because Equation (<ref>) implies that, inside matter, the potential is the Newtonian one shifted by κρ/4 (up to a harmonic contribution), so the density gradient contributes directly to the acceleration. Then, the hydrostatic equilibrium
equation can be derived by considering the balance between gravitational forces and pressure gradients
in the system. Starting from the modified Poisson equation (Equation (<ref>)), we derive the hydrostatic
equilibrium equation for a spherical star as
dP/dr = -G m(r) ρ/r^2 - κ/4ρdρ/dr .
This equation allows us to study the equilibrium state of the system, incorporating the EiBI correction on the pressure distribution.
The presence of an additional term in the hydrostatic equilibrium equation introduces modifications to the interiors of spherical stars in
comparison to GR. From Equation (<ref>), we see that since dρ/dr is always negative, a positive (negative) value of
κ effectively weakens (strengthens) gravity inside a stellar object. This changes the mass-radius relation of WDs.
The fact that, for a WD of given mass, EiBI gravity changes its compactness by changing its radius is of primary importance in tidal disruption events. For a WD to be tidally disrupted by a black hole without being captured as a whole, its tidal radius r_t, estimated by the equation (see <cit.>),
r_t ≈(M/M_wd)^1/3 R_wd,
must lie outside the event horizon of the black hole, or to be more precise, its innermost stable orbit.
Here M_wd and R_wd represent the mass and radius of the WD, respectively, and M represents the mass of the black hole.
Note that this is a useful but approximate formula that does not take into account the hydrodynamics of the stellar interior.
It is however important to note that the tidal radius is proportional to M^1/3, while the event horizon radius is proportional to M. As a result, there
exists a maximum limit for the mass of the black hole, beyond which no disruption occurs, and the WD is captured as a whole,
without undergoing disruption.
This limiting black hole mass lies below the supermassive range, in the regime of intermediate-mass black holes (𝒪(10^2)-𝒪(10^5) M_⊙) <cit.>.
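The following sketch (Python) evaluates this estimate and the resulting upper limit on the disrupting black hole mass obtained by requiring r_t to exceed the innermost stable circular orbit of a Schwarzschild black hole, r_isco = 6GM/c^2; the white dwarf radius used here is purely illustrative.

import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def tidal_radius(M_bh, M_wd, R_wd):
    """Order-of-magnitude tidal radius r_t ~ (M/M_wd)^(1/3) R_wd (no hydrodynamics)."""
    return (M_bh / M_wd) ** (1.0 / 3.0) * R_wd

def max_bh_mass(M_wd, R_wd):
    """Setting r_t = r_isco = 6GM/c^2 and solving for M gives
    M_max = (R_wd c^2 / 6G)^(3/2) / sqrt(M_wd)."""
    return (R_wd * c**2 / (6.0 * G)) ** 1.5 / np.sqrt(M_wd)

M_wd, R_wd = 0.5 * M_sun, 1.0e7              # an illustrative 0.5 M_sun WD with R ~ 10^4 km
print(tidal_radius(1.0e3 * M_sun, M_wd, R_wd))   # ~1e8 m for a 10^3 M_sun IMBH
print(max_bh_mass(M_wd, R_wd) / M_sun)           # ~1e5 M_sun: the IMBH regime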
Now, from Equation (<ref>) and our earlier discussion, it follows that for a WD of a given mass, its radius R_wd changes
due to the effect of EiBI gravity, and thus r_t changes. Hence, modified gravity has a direct impact in tidal disruption events.
This makes tidal disruption events an excellent tool for testing the effects arising from EiBI gravity. Tidal disruption of WDs, in particular, serves as a crucial astrophysical phenomenon for investigating EiBI theory. Their well-known chemical composition, thermodynamic properties,
and well-understood behaviour in the weak field regime of GR make them ideally suited for performing precise tests of modified gravity
via tidal disruption events.
Once spherical WDs are prepared after incorporating modified gravity, their dynamics can be studied, including the effects of
tidal disruption due to an IMBH. In the context of tidal disruption events involving WDs, IMBHs hold particular significance.
In the following sections of the paper, we will set up the framework for studying tidal disruption events and analyze the associated observables.
Specifically, we will investigate how these observables are influenced by the modified gravity parameter κ.
The paper is structured as follows: In Section <ref>, we provide an overview of WD physics and its extension to incorporate EiBI gravity.
Section <ref> outlines the methodology for simulating tidal encounters between a WD, modeled with EiBI gravity, and an IMBH using SPH code.
The simulation results and analysis are presented in Section <ref>. Finally, Section <ref> concludes with a summary, discussion on the
significance of the study, and future prospects.
§ MODELING WDS WITH EIBI GRAVITY
WDs represent the end stage of the evolution of low and intermediate-mass stars with masses ranging from
∼ 0.1-8 M_⊙. During the evolution of these stars, they consume fuel in their core, lose energy, and shrink,
while the outer layers of the star expand and the star becomes a red giant. When the outer layers of the red giant are released into space, the hot and dense
core of the star, primarily composed of carbon and oxygen, remains behind, eventually becoming a WD. These WDs are
dense, with densities of ∼ 10^6 g cm^-3, and have masses comparable to that of the Sun, but compressed into a volume
roughly the size of the Earth. The strong gravity of the WD is balanced by the pressure of the degenerate electron gas. We refer the reader to the
reviews <cit.> for detailed discussions on WDs.
To study the properties of carbon-oxygen WDs, we employ a model based on the formalism developed by <cit.>,
which we extend to incorporate the EiBI theory, following <cit.>.
In this model, we neglect electrostatic interactions and work in the Newtonian limit. We assume a WD in a completely ionized
state with degenerate electrons at zero temperature. Our goal is to derive an equation of state (EOS) that relates the pressure and density
within a WD.
We begin with the fact that the number density of degenerate electrons is given by n_e = m_e^3 c^3 x^3/(3π^2 ħ^3),
where the `relativity parameter' (dimensionless Fermi momentum) is x = p_F/(m_e c) (not to be confused with the Cartesian coordinate x introduced later)
with p_F being the Fermi momentum, m_e the electron mass, c the speed
of light, and ħ is the reduced Planck constant. The total density is the sum of the densities of electrons and carbon ions,
ρ = ρ_C + ρ_e, and is dominated by ρ_C due to the negligible mass of the electron as compared to the carbon atom. This ρ_C is related to the number
density of the electrons by the relation ρ_C = m_C n_e/6, with m_C being the mass of ionized carbon (6 being the atomic number of a
carbon atom). Thus, the total density is related to the relativity parameter through the relation
ρ≈ m_C m_e^3 c^3 x^3/(18 π^2 ħ^3) = 1.9479 × 10^9 x^3 kg m^-3 .
As the pressure due to non-relativistic carbon ions is much smaller than the pressure contribution from the relativistic electrons,
so the total pressure is P ≈ P_e (the Chandrasekhar approximation, see, e.g., <cit.>).
The degenerate pressure of the electrons is calculated using the kinetic theory of gases, and is given as
P ≈ m_e^4 c^5 ϕ(x)/ħ^3 = 1.4218×10^24ϕ(x) N m^-2 ,
where
ϕ(x) = 1/8π^2[x(1+x^2)^1/2(2x^2/3-1)+ log_e[x+(1+x^2)^1/2] ] .
Thus, we obtain an EOS for the WD that relates pressure and density through the parameter x.
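The resulting EOS is easy to tabulate; the following minimal sketch (Python) encodes the density, pressure, and ϕ(x) relations above, together with the sound speed dP/dρ quoted later, using the numerical prefactors given in the text. The chosen value of x is illustrative.

import numpy as np

def phi(x):
    """Chandrasekhar pressure function."""
    return (x * np.sqrt(1.0 + x**2) * (2.0 * x**2 / 3.0 - 1.0)
            + np.log(x + np.sqrt(1.0 + x**2))) / (8.0 * np.pi**2)

def density(x):
    """Total mass density in kg m^-3 as a function of the relativity parameter."""
    return 1.9479e9 * x**3

def pressure(x):
    """Degenerate electron pressure in N m^-2."""
    return 1.4218e24 * phi(x)

def sound_speed(x):
    """sqrt(dP/drho) in m s^-1, used later as the SPH sound speed."""
    return np.sqrt(8.2173e12 * x**2 / np.sqrt(1.0 + x**2))

x0 = 1.0                            # an illustrative central relativity parameter
print(density(x0), pressure(x0), sound_speed(x0))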
Inserting the expressions of density and pressure into the mass continuity equation, dm(r)/dr = 4π r^2 ρ, and Equation (<ref>)
results in two first-order coupled differential equations given by
dm/dr = 2 m_C m_e^3 c^3 r^2 x(r)^3/9πħ^3
dx/dr = -24 π ^2 G ħ^3 m_C m(r) √(1+x(r)^2)/r^2[144π^2 ħ^3 c^2 m_e x(r) +
κ c^3 m_C^2 m_e^3 x(r)^2 √(1+x(r)^2)] .
These equations can be solved numerically with initial conditions m(0) = 0, and x(0) = x_0, where x_0 is related to the
central density. The radius of the star can be calculated using the condition that the pressure at the surface of the star vanishes,
P[x(R)] = 0, which implies x(R) = 0 and the total mass of the star can be obtained by M = m(R). From the above equations,
it is evident that in the EiBI theory, the mass and radius of the star depend on both x_0 and κ, whereas in GR, they depend only on x_0.
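A minimal integration of these two equations is sketched below (Python, scipy). The physical constants, the surface threshold on x, and the central value x_0 are illustrative, and the chosen κ values are illustrative picks within the observational bounds quoted below.

import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8            # SI units
hbar, m_e = 1.0546e-34, 9.109e-31
m_C = 12.0 * 1.6605e-27              # mass of a carbon ion (kg)

def wd_structure(x0, kappa):
    """Integrate m(r) and x(r) outward from the centre for a central relativity
    parameter x0 and EiBI parameter kappa; stop where x -> 0 (the P = 0 surface).
    Returns total mass (kg) and radius (m)."""
    def rhs(r, y):
        m, x = y
        dm = 2.0 * m_C * m_e**3 * c**3 * r**2 * x**3 / (9.0 * np.pi * hbar**3)
        denom = r**2 * (144.0 * np.pi**2 * hbar**3 * c**2 * m_e * x
                        + kappa * c**3 * m_C**2 * m_e**3 * x**2 * np.sqrt(1.0 + x**2))
        dx = -24.0 * np.pi**2 * G * hbar**3 * m_C * m * np.sqrt(1.0 + x**2) / denom
        return [dm, dx]
    surface = lambda r, y: y[1] - 1e-4          # x ~ 0 marks the stellar surface
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(rhs, [1.0, 5.0e8], [0.0, x0], events=surface, rtol=1e-8, atol=1e-12)
    return sol.y[0, -1], sol.t[-1]

M_sun = 1.989e30
for kappa in (-700.0, 0.0, 1660.0, 4858.0):     # m^5 kg^-1 s^-2
    M, R = wd_structure(x0=1.0, kappa=kappa)
    print(f"kappa = {kappa:7.0f}:  M = {M / M_sun:.3f} M_sun,  R = {R / 1.0e3:.0f} km")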
In <cit.>, an upper bound on the modified gravity parameter is obtained using solar constraints, resulting in κ≲ 3× 10^5 m^5kg^-1s^-2.
In the context of neutron stars, <cit.> found that κ < 10^-2 m^5kg^-1s^-2. Considering cosmological and
astrophysical scenarios, <cit.> derives the bound κ≲ GR^2, where R is related to the Hubble radius or the radius of the compact
object. In <cit.>, the analysis of brown dwarf mass and radii yields -1.51 × 10^2 m^5kg^-1s^-2 < κ < 0.81 ×
10^2 m^5kg^-1s^-2 at the 1σ confidence level and -1.59 × 10^2 m^5kg^-1s^-2 < κ < 1.16 × 10^2 m^5kg^-1s^-2 at the 5σ confidence level. By utilizing the mass-radius data of cataclysmic variables, <cit.> derives the
bounds 0.005 ≤κ/GR_⊙^2 ≤ 0.352 at the 1σ level and -0.315 ≤κ/GR_⊙^2 ≤ 0.597 at the 5σ level.
Important in our context will be the study by <cit.>, where κ is constrained through a χ^2 analysis of observational data from twelve
WDs <cit.>. The results yield a bound of -0.7 × 10^3 m^5kg^-1s^-2 < κ < 1.66 × 10^3 m^5kg^-1s^-2 at the 1σ confidence level and -1.598 × 10^3 m^5kg^-1s^-2 < κ < 4.858 ×
10^3 m^5kg^-1s^-2 at the 5σ confidence level. These WDs, with masses in the range ∼ 0.5-1 M_⊙
(left panel of Figure <ref>), form a viable set for our study in this paper, as the error bars in their mass measurements can
be attributed to the presence of modified gravity. We should also point out that the same study
reports that when considering super-Chandrasekhar white dwarfs with masses up to 2.8 M_⊙ <cit.>, κ < 0.35 × 10^2 m^5kg^-1s^-2 (5σ). These will however be excluded here in the absence of a well known equation of state and possible
effects of magnetic fields in such exotic stars.
In our study, we thus focus on WDs in the mass range ∼ 0.5-1.0 M_⊙, utilizing the 5σ bound obtained by <cit.> mentioned above. We employ these
established bounds on the modified gravity parameter, κ, to investigate the observational effects of EiBI theory on WDs, using the physics of tidal disruptions.
§ FORMALISM AND METHODOLOGY
In this section, we describe the formalism and the methodology employed to investigate the effects of EiBI gravity on tidal disruption events of
WDs by an IMBH. Tidal disruption events occur when a star comes close to a black
hole and experiences non-local disruptive forces due to the strong gravitational field. Our objective is to analyze the observational
signatures of tidal disruption events in the presence of EiBI gravity. To achieve this, we performed three-dimensional hydrodynamical
simulations based on SPH. The reader is referred to <cit.>
for a detailed description of the numerical methods and the code employed to simulate tidal disruption events.
§.§ Hydrodynamics
SPH is a Lagrangian method that models fluid stars via a set of particles. In this method, the fluid properties, such as density, pressure, and velocity, are calculated for each particle. These properties are `smoothed' over a
fixed number of neighbouring particles with an M6 quintic spline kernel. The forces acting on each particle are determined through a
binary tree algorithm, which employs a tree opening angle of θ = 0.5 to restrict the number of neighbouring particles taken into account.
To account for the dissipation of energy due to the viscosity of the fluid, artificial viscosity is introduced with standard artificial viscosity
parameters, α^AV = 1.0 and β^AV = 2.0. To calculate the external gravitational force exerted on each
particle by the black hole, we followed the same approach as in <cit.>, in which each particle
experiences the relativistic acceleration in Schwarzschild space-time. This approach takes into account the effects of general relativity
and is therefore more accurate than Newtonian gravity when modelling tidal forces. Finally, the SPH equations are evolved using the leapfrog approach at each time step, and a global time step is employed to ensure numerical stability.
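As a minimal illustration of the kernel-weighted density estimate that underlies these steps (Python), the sketch below uses a standard cubic spline kernel and a brute-force neighbour loop in place of the M6 quintic kernel, adaptive smoothing lengths, and tree-based neighbour search of the production code.

import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic spline kernel (a stand-in for the M6 quintic used in the paper)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(positions, masses, h):
    """Brute-force SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (masses[None, :] * cubic_spline_W(dists, h)).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.random((1000, 3))                       # toy particles in a unit box
rho = sph_density(pos, np.full(1000, 1.0 / 1000), h=0.1)
print(rho.mean())                                 # ~1 up to edge effects for unit total mass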
§.§ White dwarf EOS in SPH
To incorporate the zero-temperature equation of state for the electron gas in SPH, we follow a few key steps. First, we
estimate the density of each particle using a kernel function that assigns weights to neighbouring particles based on their distances.
Next, we compute the relativity parameter, x for each particle from density using Equation (<ref>). Once we have the
relativity parameter for each particle, the pressure is calculated using Equation (<ref>) and Equation (<ref>) for each particle. Taking into account the zero-temperature equation of state, the sound speed is updated, and its relationship to the relativity parameter is given by
c_s^2 = ∂ P/∂ρ = 2 m_e c^2/m_Cx^2/√(1+x^2)
= 8.2173 × 10^12x^2/√(1+x^2) m^2 s^-2 .
This sound speed, determined by the above equation, is crucial for accurately capturing the dynamics in SPH simulations.
Finally, the computed pressure values are then utilized to determine the forces acting on each particle over time. The above EOS is derived with the assumption that the degeneracy pressure is significantly higher than the thermal pressure of the gas.
This is justified due to the fact that during tidal disruption, the high compression of matter leads to high densities and low temperatures.
At these low temperatures, most of the electrons are in their lowest energy state and degeneracy pressure dominates
over thermal pressure.
§.§ Implementation of EiBI gravity
In SPH, self-gravitational forces can be calculated using the near-field and the far-field approaches.
In the near-field approach, the gravitational force on each particle is determined by summing over the contributions from its
neighbouring particles within a certain smoothing length. The gravitational softening kernel, which is based on the distance
between particles and the smoothing length, is used to weight the contributions from each neighbour. In contrast, the far-field approach
calculates the gravitational force on a particle due to a group of distant particles using a multipole expansion.
To incorporate the EiBI theory into the SPH framework, it is necessary to use the modified Poisson equation
given in Equation (<ref>). This equation directly affects the calculation of self-gravitational forces between particles.
The modified Poisson equation introduces an extra term κ/4∇^2ρ, which modifies the gravitational softening
kernel within the smoothing length. This modified kernel captures the gravitational interaction within the vicinity of a particle,
providing a more accurate representation of the EiBI gravity effects.
However, outside the smoothing length, the gravitational softening kernel remains the same as in standard gravity and is proportional
to 1/r^2. This is because, beyond the smoothing radius, particles do not contribute to the density or to the hydrodynamic force on the particle of interest and are therefore not part of the fluid element of interest. Since the EiBI modification operates only within the matter source (here, the fluid element), only the near-field gravity is modified; in vacuum EiBI reduces to GR, so the far-field gravity is calculated using Newtonian gravity without any modification.
Now, the modified gravitational softening kernel is related to the density kernel using the modified Poisson equation, given by
W(r,h) = 1/4π r^2[∂/∂ r(r^2∂ϕ(r,h)/∂ r) -
κ'∂/∂ r(r^2∂ W(r,h)/∂ r)] ,
where W(r,h) and ϕ(r,h) are the density and gravitational softening kernel respectively and κ' = κ/(4 G).
By integrating the above equation, we can obtain the derivative of the softening kernel, ∂ϕ/∂ r, which is given by
∂ϕ(r,h)/∂ r = 4π/r^2∫_^r r'^2 W(r') dr'+ κ'∂ W(r,h)/∂ r + C_1/r^2 .
The constant C_1 is determined by imposing the condition that the standard Newtonian inverse square law is recovered
beyond the smoothing length of the kernel.
Further integrating the equation for ∂ϕ(r,h)/∂ r yields the softening kernel, expressed as
ϕ(r,h) = ∫_^r(4π/r̃^2∫_^r̃ r'^2 W(r')dr')dr̃ + κ' W(r,h) - C_1/r + C_2 .
Here, the constant C_2 is determined by considering the asymptotic behaviour (ϕ→ 0 as r →∞) of the softening kernel.
The incorporation of EiBI theory into the SPH framework allows an accurate representation of the effects of EiBI gravity within a star.
This inclusion has significant implications for the dynamics of tidal disruption events, which will be explored in the subsequent section.
In Appendix <ref>, we present the analytical forms of the softening kernel, ϕ(r,h), and the derivative of the softening kernel
∂ϕ/∂ r, which are used in our simulations.
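The appendix expressions can be reproduced numerically; the sketch below (Python) evaluates ∂ϕ/∂ r on a grid by cumulative quadrature of the density kernel and adds the κ' ∂ W/∂ r term, using a cubic spline kernel as a stand-in for the M6 quintic and an arbitrary illustrative value of κ' in kernel units. The integration constant is taken to be zero so that the inverse-square law is recovered beyond the kernel's compact support.

import numpy as np

def cubic_spline_W(r, h):
    """3D cubic spline density kernel (stand-in for the M6 quintic of the paper)."""
    q = np.asarray(r) / h
    sigma = 1.0 / (np.pi * h**3)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def softening_force_kernel(r, h, kappa_prime):
    """d(phi)/dr: Newtonian part from the cumulative mass of the density kernel
    plus the EiBI term kappa' dW/dr, with kappa' = kappa / (4 G)."""
    W = cubic_spline_W(r, h)
    shell = 0.5 * (r[1:] - r[:-1]) * (r[1:]**2 * W[1:] + r[:-1]**2 * W[:-1])
    enclosed = 4.0 * np.pi * np.concatenate([[0.0], np.cumsum(shell)])
    return enclosed / r**2 + kappa_prime * np.gradient(W, r)

h = 1.0
r = np.linspace(1e-3, 4.0 * h, 4000)
newtonian = softening_force_kernel(r, h, kappa_prime=0.0)
modified = softening_force_kernel(r, h, kappa_prime=0.05)       # illustrative kappa'
print(newtonian[-1] * r[-1]**2)              # ~1: inverse-square law recovered outside 2h
print(np.max(np.abs(modified - newtonian)))  # the modification acts only inside the kernel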
§.§ Initial density profile
In order to obtain the initial density profile, we employ a numerical solution of Equations (<ref>) and (<ref>) as discussed
in Section <ref>. These equations yield the mass-radius relation for a chosen value of the parameter κ.
The influence of κ on the mass-radius relationship is shown in Figure <ref> (left panel), where we present
the mass-radius relations for various κ values. As discussed in the introduction, for κ > 0, the additional term counteracts
the self-gravitational term, allowing the WD to support more mass. Conversely, for κ < 0, the opposite behaviour occurs,
resulting in the WD being able to support less mass. Once the mass and radius for a specific κ are determined,
the density profile of the star is obtained using the profile x(r) derived from Equations (<ref>) and (<ref>).
Finally, by using the x(r) profile in Equation (<ref>), we obtain the initial density profile. Figure <ref> (right panel)
displays the radial density profiles for WDs with a mass of 0.50 M_⊙ for various κ values.
After obtaining the initial density profile, we incorporate it into the SPH code. Initially, the particles are placed within a close-packed sphere,
which is then stretched using the stretch-mapping technique (see <cit.>) to match the desired density profile. Subsequently, the particle distribution evolves
in isolation to attain a relaxed configuration. Once the relaxed profiles are obtained, we plot them in Appendix <ref>, alongside
the initial density profiles generated through the procedure mentioned above.
Furthermore, it is important to note that the central densities involved in our work exceed 10^5 g cm^-3. In a study by <cit.>, it was highlighted that temperature effects become significant when the density drops below 10^5 g cm^-3. Thus, the assumption of neglecting finite temperature effects proves to be a valid approximation for our study.
§.§ Simulation details
To investigate the effects of EiBI gravity on tidal disruption observables, we performed 19 simulations of tidal disruption events.
The central IMBH is modelled as a Schwarzschild black hole with a mass of M = 10^3 M_⊙, placed at the origin of the coordinate system.
The Schwarzschild radius is denoted by r_s = 2GM/c^2. Any particle that crosses this radius is removed from the system.
We construct relaxed WDs with masses of 0.50, 0.75, 1.00 M_⊙. From Figure <ref> (left panel),
it becomes apparent that the influence of EiBI gravity increases as we move towards higher masses, justifying our choice of WD mass values. It should be noted that making κ more negative reduces the maximum mass limit.
As we lower the κ value from -0.7 × 10^3 m^5kg^-1s^-2 to -1.598 ×
10^3 m^5kg^-1s^-2, the maximum mass for which a stable WD can exist decreases.
Consequently, for κ = -0.7 × 10^3 m^5kg^-1s^-2 say, there is no WD with a mass of
1.00 M_⊙, while for κ = -1.598 × 10^3 m^5kg^-1s^-2, WDs with
masses of 0.75 M_⊙ and 1.00 M_⊙ do not exist.
In this work, we place the relaxed stars in parabolic orbits around the black hole. To ensure a meaningful comparison and to
isolate the effects of EiBI gravity, we begin by fixing the pericenter distance (r_p) from the black hole and passing different white
dwarfs around it. By doing so, we maintain a constant physical distance, ensuring that the tidal field strength experienced by the various
WDs at the pericenter remains the same. Consequently, any differences in the observables arise solely from the influence
of EiBI gravity. In another approach, we fix the impact parameter, β = r_t/r_p, where r_t represents the tidal radius. This approach allows
us to fix the average strength of the tidal field experienced by the WDs at the pericenter position relative to the tidal radius.
In our simulations, we set the initial separation as 350 r_g for cases with a fixed pericenter distance and 5r_t for cases with a
constant β. Here, r_g=G M/c^2 represents the gravitational radius. The initial positions and velocities in Cartesian coordinate
are obtained by the relativistic description given in <cit.>.
Tables <ref> and <ref> present the parameter space for our tidal disruption simulations. Both tables provide information on the mass, radius,
κ value, and tidal radius of each WD. Here, we calculate the tidal radii of WDs by using Equation (<ref>). In Table <ref>,
we maintain a fixed pericenter distance of r_p = 70 r_g. As r_t varies for different WDs having a fixed r_p, the β values differ for
different stars. In Table <ref>, we consider two different values for the impact parameter: β = 0.80 and β = 1.00 for a fixed WD
mass (M_wd = 0.75 M_⊙). This set of parameter values allows us to study the effect of EiBI gravity in both partial and full disruption scenarios.
As we set up the star in a trajectory, our goal is to compute tidal disruption observables and find out their dependence on the modified gravity parameter κ. A key observable of interest is the peak fallback rate, defined as the rate at which the disrupted
debris falls back towards the pericenter position. Following <cit.>, we calculate the peak fallback rate directly from the simulation by capturing the mass flux towards the pericenter as the disrupted debris falls back. Directly measuring the rate at which debris is accreted onto the black hole allows us to track the fallback rate accurately, and overcomes the limitations of the frozen-in approximation, which neglects the self-gravity of the debris (see <cit.>). Once the disrupted debris moves beyond the pericenter position, it is eventually accreted by the black hole. To keep the simulation efficient, we enlarge the accretion radius to r_acc≃ 3 r_t so as to remove the debris destined for accretion. Once any bound debris falls back to this accretion radius, it is removed from the system and contributes to the fallback rate. We note that the fallback rate obtained through this radius may differ from the true accretion rate, which would require modeling the accretion flow around the black hole and the disk formation. However, if the debris accretes onto the black hole rapidly enough and there is no significant delay in the circularization process (<cit.> found that this delay time is very small in observed tidal disruption events), then the fallback rates computed from our simulations closely
correspond to the true accretion rates.
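For orientation, a common post-processing estimate of the fallback rate, distinct from the direct accretion-radius measurement used here, bins the specific orbital energies of the bound debris and converts them into return times via Kepler's third law; a minimal sketch (Python) acting on a debris snapshot is given below.

import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2

def fallback_rate(masses, positions, velocities, M_bh, n_bins=200):
    """Frozen-in-style estimate of dM/dt from a debris snapshot taken relative to
    the black hole (this neglects debris self-gravity, unlike the direct
    measurement used in the paper)."""
    r = np.linalg.norm(positions, axis=1)
    v2 = np.einsum("ij,ij->i", velocities, velocities)
    eps = 0.5 * v2 - G * M_bh / r                      # specific orbital energy
    bound = eps < 0.0
    a = -G * M_bh / (2.0 * eps[bound])                 # semi-major axes of bound debris
    t_return = 2.0 * np.pi * np.sqrt(a**3 / (G * M_bh))
    hist, edges = np.histogram(np.log10(t_return), bins=n_bins, weights=masses[bound])
    t_mid = 10.0 ** (0.5 * (edges[1:] + edges[:-1]))
    return t_mid, hist / np.diff(10.0 ** edges)        # times (s), dM/dt (kg s^-1)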
Additionally, in partial disruption, outer layers are ejected and a fraction of the star remains
as a self-bound core. We adopt a methodology similar to the one described in <cit.>,
to calculate the self-bound core using an iterative approach based on the particle energy.
After identifying the core particles, we compute the core properties such as mass, specific energy, specific angular momentum, etc.
From these core properties, we further compute observables such as kick velocity and trajectory deviations (see <cit.>).
These could have implications in various areas, including black hole mass determination, understanding hypervelocity stars etc.
In partial disruption scenarios, the presence of a high-density core leads to extremely small time steps.
This makes it computationally expensive to simulate the fallback process over longer durations. To address this, once the bound core moves
a significant distance away from the black hole (> 45r_t), the core particles are replaced by a sink particle following <cit.>.
The position and velocity of the sink particle are set to the center of mass position and velocity of all the core particles. The accretion radius of
the sink particle is equal to the maximum distance of any bound particle from the core's center of mass. We have crosschecked this by introducing
the sink particle at different distances from the black hole and found no difference in the peak fallback rates. This approach allows us to deal with the
computational challenges posed by the high-density core effectively and continue the simulations with improved efficiency and accuracy.
We use 5× 10^5 particles to simulate the stars. Importantly, it should be noted that we performed additional simulations using 1×10^5 and 1×10^6 particles, and we observe that the results remain consistent across different resolutions.
§ RESULTS
When a WD approaches a black hole, the gravitational force from the black hole is stronger on the side of the WD closest to it than on the farther side. This difference in gravitational force deforms the WD, causing it to become elongated in the radial direction and compressed in the vertical and azimuthal directions. As the WD gets closer to the black hole, the deformation grows and the tidal force exerted on the WD increases. At the pericenter, which represents the closest point of approach, the extent of disruption is determined by the impact factor, the ratio of the tidal radius to the pericenter distance. If the WD penetrates well within the tidal radius, the tidal forces overcome the WD's self-gravity, leading to full disruption: the WD is completely torn apart and its material forms a stream of disrupted debris. On the other hand, if the WD approaches the black hole from a distance outside the tidal radius, only a portion of the WD is torn apart, resulting in partial disruption. In such cases, the central core can either remain bound to the black hole or gain enough energy to escape its gravitational influence, possibly becoming a hypervelocity star.
Additionally, the disrupted debris from both full and partial disruptions that is bound to the black hole experiences fallback onto the black hole, forming an accretion disk. As the debris circularizes, it releases gravitational potential energy, emitting radiation across various wavelengths. The rate at which the debris falls back, known as the fallback rate, determines the luminosity of the tidal disruption event. The light curve exhibits characteristic features, such as an initial rise in brightness followed by a peak and subsequent fading over time, providing valuable insights into the dynamics and properties of the disrupted WD, the accretion processes, black hole mass, etc.
In the following two subsections, we will discuss the results obtained from our tidal disruption simulations, employing two different approaches: fixing the pericenter distance and fixing the impact parameter. In both of the approaches, we study the observed effects of the modified gravity in partial and full disruption scenarios.
§.§ Fixed pericenter distance (r_p) simulations
In this subsection, we focus on the simulations performed with a fixed pericenter distance of r_p = 70 r_g, where r_g represents the gravitational radius as mentioned earlier. This approach holds greater observational significance, as it maintains constant physical distances from the black hole for all WDs, ensuring a uniform tidal field strength and effectively isolating the effects of EiBI gravity. As r_p remains constant, we vary the impact parameter, β for different stars, and the extent of disruption varies among the stars. Interestingly, during the simulations, we observed a distinct core formation occurring when β≲ 0.90. From Table <ref>, it is evident that four WDs fall within this category. For these particular stars, partial disruption takes place, leading to the formation of the core with asymmetric tails. This asymmetry arises due to the lower mass ratio q = M/M_wd∼ 10^3.
In Figure <ref>, in the Top Left panel, we present the variations in bound core masses (m_core) relative to the initial WD masses (M_wd) over time. The time is normalized to the time at which the different WDs reach their pericenter positions. As we discussed previously, in the case of partial disruption, the outer layers of the WD are torn apart, leaving behind a self-gravitating core. As a result, the bound core mass fraction gradually decreases from its initial value of 1.0 as the initial WD loses mass. Eventually, the core separates from the tails and the mass fraction stabilizes at a saturated value. To extend the simulations for a longer duration, we replace the core particles with a sink particle to account for the fallback onto the black hole. The figure demonstrates that with increasing β values there is an increase in mass loss, because deeper encounters lead to a greater loss of mass from the initial WD.
Moving to the Top Right panel of Figure <ref>, we present the mass difference (Δ m) between the two tails relative to the initial WD mass. To understand this behavior, we need to consider the variation of asymmetry with two parameters: β and q. As q decreases, the difference in the tidal field across the star increases, enhancing asymmetry. Similarly, increasing β also contributes to increased asymmetry. These effects are evident in the figure. Among the four WDs, the one with a mass of 1.00 M_⊙ and κ = 4858 m^5kg^-1s^-2 has the highest β value of 0.876 and a lower q value, resulting in the highest observed asymmetry. However, for the WDs with masses of 1.00 M_⊙ and κ = 1660 m^5kg^-1s^-2 and 0.75 M_⊙ and κ = -700 m^5kg^-1s^-2, the β values are almost the same. Here, due to the decrease in q, the WD with 1.00 M_⊙ exhibits higher asymmetry compared to the 0.75 M_⊙ WD.
In partial disruption, when only a portion of the star is torn apart, an interesting phenomenon occurs due to the conservation of linear momentum. The momentum carried away by the bound tail imparts a `kick' on the remaining self-bound core. As a result, there is an increase in core velocity that translates into an increase in the specific orbital energy and the specific angular momentum of the core. The kick velocity, which quantifies the increase in specific orbital energy of the core, is defined as v_kick= √(2(ϵ_core-ϵ_in)),
where ϵ_core represents the specific orbital energy of the core and ϵ_in represents the initial specific orbital energy. The variation of kick velocity over time is depicted in the Bottom Left panel of Figure <ref>. Asymmetry in the mass loss plays a significant role in increasing the specific orbital energy and, consequently, the kick velocity. From the figure, it is evident that as the asymmetry increases, so does the kick velocity. These kick velocities typically reach values on the order of ∼ 10^3 km s^-1, a range in which several observed hypervelocity stars fall.
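A direct transcription of this definition is given below; the helper for the specific orbital energy and the SI value of G are added for completeness and are not part of the original text.

import numpy as np

G_SI = 6.674e-11  # m^3 kg^-1 s^-2

def specific_orbital_energy(v, r, M_bh, G=G_SI):
    """epsilon = v^2/2 - G*M_bh/|r| for the core's centre of mass."""
    return 0.5 * np.dot(v, v) - G * M_bh / np.linalg.norm(r)

def kick_velocity(eps_core, eps_in):
    """v_kick = sqrt(2*(eps_core - eps_in)); returns 0 if there is no net energy gain."""
    gain = 2.0 * (eps_core - eps_in)
    return np.sqrt(gain) if gain > 0.0 else 0.0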
Another observable related to partial disruption is the deviation of the core's trajectory from its initial parabolic trajectory. These deviations arise from the increase in specific orbital energy and specific orbital angular momentum of the core. The Bottom Right panel of Figure <ref> displays these deviations, which are directly influenced by the asymmetry. In the zoomed portion, it becomes apparent that the WD with an initial mass of 1.00 M_⊙ and κ = 4858 m^5kg^-1s^-2 exhibits the highest asymmetry, resulting in the most significant deviation in its core trajectory. The mass ratio q plays a significant role in influencing the trajectories, as demonstrated in the quantitative analysis by <cit.>. Our choice of parameters falls within the range found in <cit.>, thereby producing significant deviations in the trajectory. In the figure, the x and y axes are normalized to the pericenter distance (r_p = 70 r_g).
Figure <ref> illustrates the fallback rates onto the black hole in Solar masses per hour as a function of time in hours. The left panel displays the fallback curves for fully disrupted WDs, while the right panel shows the curves for four partially disrupted WDs. As the initial mass of the WD increases, the magnitude of the peak also increases, indicating a larger amount of debris falling back onto the black hole. For fully disrupted WDs, the late-time slope follows the expected scaling of t^-5/3 (see <cit.>). However, we observe variations in the late-time slope for partially disrupted WDs. Specifically, two WDs with β values around 0.72, corresponding to mass 1.00 M_⊙ with κ = 1660 m^5kg^-1s^-2 and mass 0.75 M_⊙ with κ = -700 m^5kg^-1s^-2, exhibit a late-time slope scaling of t^-9/4, which is in agreement with <cit.>. Another partially disrupted WD, with β = 0.81, mass of 0.75 M_⊙ and κ = 0, initially follows a t^-5/3 scaling, transitioning to a t^-2 temporal scaling, and faintly showing a t^-9/4 behavior at very late times. Finally, the WD with β = 0.876, mass of 1.00 M_⊙, and κ = 4858 m^5kg^-1s^-2 exhibits a late-time slope of t^-5/3. A comprehensive understanding of the behavior of the fallback rate at late times requires further investigation, exploring different parameter regimes that yield varying values of β. However, we leave this as a topic for future study.
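The late-time indices quoted above can be extracted with a simple log-log fit over the tail of the fallback curve; the sketch below is illustrative, with the cut-off time t_min left as a free choice.

import numpy as np

def late_time_slope(t, mdot, t_min):
    """Power-law index n of Mdot ∝ t^n from a log-log linear fit for t > t_min;
    n ≈ -5/3 is expected for full disruptions and steeper values (down to -9/4) for partials."""
    mask = (t > t_min) & (mdot > 0)
    n, _ = np.polyfit(np.log(t[mask]), np.log(mdot[mask]), 1)
    return n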
§.§ Fixed impact parameter (β) simulations
In this subsection, we explore the results of simulations with fixed impact parameters set at values of 0.80 and 1.00 for the WD with a mass of 0.75 M_⊙. These selected values of β ensure that both partial and full disruptions occur during the simulations. We remind the reader of the following caveat before we begin the analysis.
Note that, as we have mentioned after Equation (<ref>) (see the discussion after this equation), the tidal radius computed from this equation is an estimate.
It does not take into account the stellar structure and hydrodynamics, and should be modified in the presence of EiBI gravity. Hence β = r_t/r_p computed from
Equation (<ref>) is only approximate and might vary from its presumed fixed value. Nonetheless, within this approximation, fixed-β simulations give us useful insights, as we
can compare the behaviour of stars experiencing approximately the same relative strength of the tidal field at the pericenter. We will proceed with this caveat in mind.
Starting with the Top Left panel of Figure <ref>, we present the time evolution of the bound core mass fraction. Significantly, an increase in the modified gravity parameter κ results in an increased bound core mass fraction.
Additionally, we observe a deviation in the core mass corresponding to different κ values from the core mass at κ = 0. For instance, the deviation increases up to 4.8% when κ rises from κ = 0 to κ = 4858 m^5kg^-1s^-2. Furthermore, we study the mass difference between the two tidal tails in the Top Right panel of Figure <ref>. Interestingly, we find that as κ increases, the mass difference between the tails also increases.
We also analyze the variation of kick velocity over normalized time in the Middle Left panel of Figure <ref>. Notably, the kick velocity exhibits an opposite trend compared to the mass difference between the tails. This is due to the fact that, after the tidal interaction, the core orbital energy and mass both increase with higher κ. However, the gain in mass exceeds the gain in orbital energy, resulting in a lower specific orbital energy gain for higher κ values.
Due to the formation of asymmetric tails during partial disruption, there are changes in the core's specific orbital energy and angular momentum, subsequently altering the trajectory of the bound core's center of mass from its initial trajectory. These trajectory deviations are depicted in the Middle Right panel of Figure <ref>. While we obtained trajectory deviations for all κ values, we chose to present the deviations specifically for κ = 4858 m^5kg^-1s^-2 and κ = 0 for better visualization. These specific values
help us highlight the variations in the trajectory deviations more clearly. Notably, the x and y axes in the plots are normalized to the tidal radius of the respective stars. The figure clearly demonstrates that higher κ values correspond to greater trajectory deviations. This is because, although the gain in specific orbital energy is lower with increasing κ, there is a prominent increase in specific angular momentum; this larger change in specific angular momentum deflects the trajectory more for higher κ values.
In the Bottom panels of Figure <ref>, we present the behaviors of the fallback rates in both partial (Right panel) and full disruption (Left panel) scenarios, considering varying values of κ. It is observed that increasing κ leads to a decrease in the peak magnitude of the fallback rate, along with an increase in the time of peak and the return time of the most bound debris. The observed trend can be explained by the increase in r_p as κ rises to maintain a fixed β. As a result, the tidal field strength acting on the less compact star, which has a higher κ, diminishes, resulting in a decreased amount of material being torn apart from the star. Additionally, in full simulations, the temporal scaling at late times follows a t^-5/3 power law. However, in partial disruption scenarios, the presence of the core introduces a deviation from the t^-5/3 slope. After the peak, all partial disruption simulations initially exhibit a t^-5/3 scaling for a few hours, but the decline subsequently steepens and transitions to a t^-2 slope, eventually reaching a t^-9/4 decline at the end.
§ DISCUSSION AND SUMMARY
The methods of smoothed particle hydrodynamics provide an invaluable tool to study stellar dynamics, have been immensely popular over the decades, and have provided several useful insights. In this work we have extended the scope of SPH further, by incorporating the effects of a class of modified gravity theories, in particular to study the tidal disruption dynamics of WDs in the background of intermediate-mass black holes. The interiors of the WDs have been modelled by incorporating EiBI gravity in this study. While there are several works in the literature that seek to constrain modified gravity using astrophysical tests, here we have used an allowed range of parameters, explored the effects of modified gravity in a realistic tidal disruption scenario, and quantified how various tidal disruption event observables depend on modified gravity. We believe that this is the first work of this kind to appear in the literature. As we have mentioned in the introduction, any modification of gravity is associated with possible extra degrees of freedom and leaves a low-energy imprint via parameters (κ in our case) that typically alter the pressure balance equation inside stellar objects. In this sense, our work can be thought of as generic, and should be applicable to a wide range of modified gravity theories, the caveat being that assumptions of spherical symmetry might make other theories more challenging than the present study.
In this paper, we have used a zero-temperature EOS to relate the pressure and density, an improvement over a polytropic EOS. As a check, we modelled the lower-mass WDs (0.5 M_⊙) using a polytropic EOS with n = 5/3, where n is the polytropic index, and found that the polytropic EOS gave almost the same results for the tidal observables; however, for the higher-mass WDs (1 M_⊙) neither n = 5/3 nor n = 1.35 gave satisfactory results. Thus, the zero-temperature EOS is valuable for modelling higher-mass WDs without assuming a polytropic approximation.
In this context, note also that we selected WDs with three different masses: 0.50, 0.75 and 1.00 M_⊙. The choice of increasing mass values was motivated by the mass-radius relation, which indicates that the deviations from κ = 0 become larger relative to the 5σ bounds as the mass increases.
In this study, we investigate tidal disruption events involving different white dwarf stars with various κ values, employing two different approaches. Firstly, from an observational perspective, we maintain a constant pericenter distance for all white dwarfs, ensuring the same tidal field strength. Interestingly, we find that white dwarfs with different κ values display distinct behaviors in partial disruptions. With increasing κ, initial white dwarfs of the same mass experience greater mass loss, and the mass difference between the two tails becomes larger due to deeper encounters. This asymmetric mass loss imparts a kick velocity to the remnant core, which increases with κ and can reach values of up to ∼ 5×10^3 km s^-1 for an initial white dwarf mass of 1.00 M_⊙. Moreover, the changes in specific energy and specific angular momentum of the core lead to deviations from its initial trajectory, and these deviations are also observed to increase with κ. Additionally, the peak magnitude, time of the peak, and return time of the most bound debris show variations among white dwarfs with different κ values.
Furthermore, we conduct simulations with a fixed impact parameter, β, for white dwarfs of mass 0.75 M_⊙. Although the determination of β requires the use of an approximate formula for the tidal radius (Equation (<ref>)), we explore the behavior of observables in both partial and full disruptions within this approximation. In the case of partial disruption, the core mass can vary depending on the κ value. For a white dwarf with a mass of 0.75 M_⊙, we find that the core mass increases by approximately 5% when κ is increased from 0 to 4858 m^5kg^-1s^-2. We also performed additional simulations with a white dwarf of mass 1 M_⊙ and observed an increase in the core mass of up to 20% for the same κ values, which represents a significant change allowed by this class of modified gravity theories. Additionally, as κ increases, the asymmetry in the two tidal tails induces a kick velocity in the core, resulting in deviations of its trajectory from the initial trajectory. Regarding the peak fallback rate, we find that as κ increases, the peak magnitude decreases while the time of peak and return time of the most bound debris increase. The difference in peak magnitude between κ = 4858 m^5kg^-1s^-2 and κ = 0 reaches up to 28.5% for β = 1.0 and up to 23.5% for β = 0.80 for the 0.75 M_⊙ white dwarf.
These results demonstrate the impact of EiBI gravity on the observables in tidal disruption events. In the near future, upcoming missions like LISA (Laser Interferometer Space Antenna) will provide valuable observational data, offering an opportunity to compare various numerical models for tidal disruptions. In this context, our analysis holds significance as it allows us to explore different effects of modified gravity, such as those predicted by the EiBI theory. By studying the influence on the dynamics and observables of tidal disruption events with different modified gravity parameters, our analysis contributes to a better understanding of gravitational theories beyond the standard framework.
As always, it is useful to analyse possible degeneracies that can arise in our analysis, from other effects. Here, we have taken a well known EOS of WDs,
so that changes to the EOS (compared to say polytropic ones) are not relevant. Further, in the mass range that we consider, magnetic fields are not
known to play a significant role. The only other physical variable that we need to analyse is stellar rotation. SPH in the presence of such rotation
was recently analysed in <cit.>, where it was found that the direction of stellar spin helps (hinders) tidal disruption depending on whether
the spin is prograde (retrograde). Note that as we have mentioned in the introduction, the effect of κ is qualitatively similar, i.e., it either strengthens
or reduces gravity depending on its sign, see Equation (<ref>). Crucially however, the tidal radius is also non-trivially
modified by κ, see Equation (<ref>). To simplify the analysis, let us consider the situation for a fixed impact parameter β, with a positive κ.
Then, although a star of a certain mass is less compact compared to the Newtonian case, since EiBI gravity increases its radius compared to the Newtonian value, it is also disrupted at a greater distance, as r_t and hence r_p both increase due to the effect of EiBI gravity in order to keep β fixed. These two factors together result in a later occurrence of the peak fallback rate with a diminished magnitude, as is apparent from Figure <ref>.
A detailed analysis of the interplay between modified gravity and stellar rotations is an issue that is worth investigating in the future.
Acknowledgements
We acknowledge the support and resources provided by PARAM Sanganak under the National Supercomputing Mission, Government of India, at the Indian Institute of Technology Kanpur. The work of DG is supported by grant number 09/092(1025)/2019-EMR-I from the Council of Scientific and Industrial Research (CSIR). PB acknowledges financial support from Science and Engineering Research Board, Government of India, File Number PDF/2022/000332.
Data Availability Statement
The data underlying this article will be shared upon reasonable request to the corresponding author.
§ APPENDIX A
The M6 kernel function used in SPH, is given by (see <cit.>):
W(r,h) =
1/π h^3(11/20 - x^2/2 + x^4/4 - x^5/12) 0 ≤ x ≤ 1
1/π h^3(17/40 + 5x/8 - 7x^2/4 + 5x^3/4 - 3x^4/8 + x^5/24) 1 ≤ x ≤ 2
1/π h^3(81/40 - 27x/8 + 9x^2/4 - 3x^3/4 + x^4/8 - x^5/120) 2 ≤ x ≤ 3
0 x ≥ 3
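For reference, the piecewise kernel above can be transcribed directly as follows; the Python/NumPy form and the function name are ours, and the expression simply reproduces the coefficients listed above.

import numpy as np

def m6_kernel(r, h):
    """M6 kernel W(r, h) with x = r/h, as written above (compact support for x < 3)."""
    x = r / h
    sigma = 1.0 / (np.pi * h**3)
    if x < 1.0:
        w = 11/20 - x**2/2 + x**4/4 - x**5/12
    elif x < 2.0:
        w = 17/40 + 5*x/8 - 7*x**2/4 + 5*x**3/4 - 3*x**4/8 + x**5/24
    elif x < 3.0:
        w = 81/40 - 27*x/8 + 9*x**2/4 - 3*x**3/4 + x**4/8 - x**5/120
    else:
        w = 0.0
    return sigma * w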
For this M6 kernel, the derivative of the gravitational softening kernel, ∂ϕ(r,h)/∂ r, is given below:
∂ϕ(r,h)/∂ r =
1/h^2(11x/15 - 2x^3/5 + x^5/7 - x^6/24)
+ κ'/π h^4(-x + x^3 - 5x^4/12) 0 ≤ x ≤ 1
1/h^2(1/336 x^2 + 17 x/30 + 5 x^2/8 - 7 x^3/5 + 5 x^4/6 - 3 x^5/14 + x^6/48)
+ κ'/π h^4(5/8 - 7 x/2 + 15 x^2/4 - 3 x^3/2 + 5 x^4/24) 1 ≤ x ≤ 2
1/h^2(-169/560 x^2 + 27 x/10 - 27 x^2/8 + 9 x^3/5 - x^4/2 + x^5/14 - x^6/240)
+ κ'/π h^4(-27/8 + 9 x/2 - 9 x^2/4 + x^3/2 - x^4/24) 2 ≤ x ≤ 3
1/r^2 x ≥ 3
where x = r/h, κ' = κ/(4G), and the constants are determined to ensure piecewise continuity of the softening kernel. Finally, the gravitational softening kernel for the M6 kernel is given by:
ϕ(r,h) =
1/h(-239/210 + 11x^2/30 - x^4/10 + x^6/42 - x^7/168)
+κ'/π h^3(11/20 - x^2/2 + x^4/4 - x^5/12) 0 ≤ x ≤ 1
1/h(-473/420 - 1/336x + 17x^2/60 + 5x^3/24 - 7x^4/20 + x^5/6 - x^6/28 + x^7/336)
+κ'/π h^3(17/40 + 5x/8 - 7x^2/4 + 5x^3/4 - 3x^4/8 + x^5/24) 1 ≤ x ≤ 2
1/h(-243/140 + 169/560x + 27x^2/20 - 9x^3/8 + 9x^4/20 - x^5/10 + x^6/84 - x^7/1680)
+κ'/π h^3(81/40 - 27x/8 + 9x^2/4 - 3x^3/4 + x^4/8 - x^5/120) 2 ≤ x ≤ 3
-1/r x ≥ 3
§ APPENDIX B
CliftonRev
Clifton T., Ferreira P. G., Padilla A., Skordis C., 2012, Physics Reports, 513, 1. https://doi.org/10.1016/j.physrep.2012.01.001
NOORev
Nojiri S., Odintsov S. D., Oikonomou V. K., 2017, Physics Reports, 692, 1. https://doi.org/10.1016/j.physrep.2017.06.001 arXiv:1705.11098
LangloisRev
Langlois D., 2019, International Journal of Modern Physics D, 28, 1942006. https://doi.org/10.1142/S0218271819420069
IshakRev
Ishak M., 2019, Living Reviews in Relativity, 22, 1. https://doi.org/10.1007/s41114-018-0017-4
KaseRev
Kase R., Tsujikawa S., 2019, International Journal of Modern Physics D, 28, 1942005. https://doi.org/10.1142/S0218271819420057
Penrose
Penrose R., 1965, Physical Review Letters, 14, 57. https://doi.org/10.1103/PhysRevLett.14.57
Hawking
Hawking S. W., 1966, Physical Review Letters, 17, 444. https://doi.org/10.1103/PhysRevLett.17.444
HawkingEllis
Hawking S. W., Ellis G. F. R., 1973, ‘The Large Scale Structure of Space-Time’, Cambridge University Press.
Wald
Wald R. M., 1984, ‘General Relativity’, The University of Chicago Press.
Senovilla
Senovilla J. M. M., 1998, General Relativity and Gravitation, 30, 701. https://doi.org/10.1023/A:1018801101244
SaksteinRev
Baker T., Barreira A., Desmond H., Ferreira P., Jain B., Koyama K., Li B., Lombriser L., Nicola A., Sakstein J., Schmidt F., 2021, Reviews of Modern Physics, 93, 015003. https://doi.org/10.1103/RevModPhys.93.015003
OlmoRev1
Olmo G. J., Rubiera-Garcia D., Wojnar A., 2020, Physics Reports, 876, 1. https://doi.org/10.1016/j.physrep.2020.07.001 arXiv:1912.05202
Born1
Born M., 1933, Nature, 132, 282. https://doi.org/10.1038/132282a0
Born2
Born M., 1934, Proceedings of the Royal Society of London Series A, 143, 410. https://doi.org/10.1098/rspa.1934.0010
BI1
Born M., Infeld L., 1934, Proceedings of the Royal Society of London Series A, 144, 425. https://doi.org/10.1098/rspa.1934.0059
BF
Banados M., Ferreira P. G., 2014, Physical Review Letters, 113, 119901. https://doi.org/10.1103/PhysRevLett.113.119901
DG
Deser S., Gibbons G. W., 1998, Classical and Quantum Gravity, 15, L35. https://doi.org/10.1088/0264-9381/15/5/001 arXiv:hep-th/9803049
Vollick1
Vollick D. N., 2004, Physical Review D, 69, 064030. https://doi.org/10.1103/PhysRevD.69.064030 arXiv:gr-qc/0309101
Vollick2
Vollick D. N., 2005, Physical Review D, 72, 084026. https://doi.org/10.1103/PhysRevD.72.084026 arXiv:gr-qc/0506091
Eddington
Eddington A. S., 1924, ‘The Mathematical Theory of Relativity’, Cambridge University Press.
Schrodinger
Schrodinger E., 1985, ‘Space-time Structure’, Cambridge University Press.
new4
Jimenez J. B., Delhom A., 2020, European Physical Journal C, 80, 585. https://doi.org/10.1140/epjc/s10052-020-8143-z arXiv:2004.11357
Pani1
Pani P., Cardoso V., Delsate T., 2011, Physical Review Letters, 107, 031101. https://doi.org/10.1103/PhysRevLett.107.031101 arXiv:1106.3569
Delsate1
Delsate T., Steinhoff J., 2012, Physical Review Letters, 109, 021101. https://doi.org/10.1103/PhysRevLett.109.021101 arXiv:1201.4989
Casanellas1
Casanellas J., Pani P., Lopes I., Cardoso V., 2012, The Astrophysical Journal, 745, 15. https://doi.org/10.1088/0004-637X/745/1/15 arXiv:1109.0249
Avelino1
Avelino P. P., 2012, Physical Review D, 85, 104053. https://doi.org/10.1103/PhysRevD.85.104053 arXiv:1201.2544
Avelino2
Avelino P. P., 2012, Journal of Cosmology and Astroparticle Physics, 11, 022. https://doi.org/10.1088/1475-7516/2012/11/022 arXiv:1207.4730
Banerjee2017
Banerjee S., Shankar S., Singh T. P., 2017, Journal of Cosmology and Astroparticle Physics, 10, 004. https://doi.org/10.1088/1475-7516/2017/10/004 arXiv:1705.01048
Olmo1
Olmo G. J., Rubiera-Garcia D., Sanchez-Puente A., 2015, Physical Review D, 92, 044047. https://doi.org/10.1103/PhysRevD.92.044047 arXiv:1508.03272
Pani2012
Pani P., Delsate T., Cardoso V., 2012, Physical Review D, 85, 084020. https://doi.org/10.1103/PhysRevD.85.084020 arXiv:1201.2814
new1
Delhom-Latorre A., Olmo G. J., Ronco M., 2018, Physics Letters B, 780, 294. https://doi.org/10.1016/j.physletb.2018.03.002 arXiv:1709.04249
new5
Feng W.-X., Geng C.-Q., Luo L.-W., 2019, Chinese Physics C, 43, 083107. https://doi.org/10.1088/1674-1137/43/8/083107 arXiv:1810.06753
Rosyadi2019
Rosyadi A. S., Sulaksono A., Kassim H. A., Yusof N., 2019, European Physical Journal C, 79, 1030. https://doi.org/10.1140/epjc/s10052-019-7560-3
new2
Delhom A., Miralles V., Penuelas A., 2020, European Physical Journal C, 80, 340. https://doi.org/10.1140/epjc/s10052-020-7880-3 arXiv:1907.05615
new3
Jimenez J. B., Delhom A., Olmo G. J., Orazi E., 2021, Physics Letters B, 820, 136479. https://doi.org/10.1016/j.physletb.2021.136479
Banerjee2022
Banerjee P., Garain D., Paul S., Shaikh R., Sarkar T., 2022, The Astrophysical Journal, 924, 20. https://doi.org/10.3847/1538-4357/ac324f arXiv:2105.09172
OlmoRev
Jimenez J. B., Heisenberg L., Olmo G. J., Rubiera-Garcia D., 2018, Physics Reports, 727, 1. https://doi.org/10.1016/j.physrep.2017.11.001 arXiv:1704.03351
KobayashiRev
Kobayashi T., 2019, Reports on Progress in Physics, 82, 086901. https://doi.org/10.1088/1361-6633/ab2429 arXiv:1901.07183
Hills1975
Hills J. G., 1975, Nature, 254, 295. https://doi.org/10.1038/254295a0
Maguire2020
Maguire K., Eracleous M., Jonker P. G., MacLeod M., Rosswog S., 2020, Space Science Reviews, 216, 39. https://doi.org/10.1007/s11214-020-00661-2
Koester1990
Koester D., Chanmugam G., 1990, Reports on Progress in Physics, 53, 837. https://doi.org/10.1088/0034-4885/53/7/001
Isern2022
Isern J., Torres S., Rebassa-Mansergas A., 2022, Frontiers in Astronomy and Space Sciences, 9, 6. https://doi.org/10.3389/fspas.2022.815517
Fontaine2013
Fontaine G., Brassard P., Charpinet S., Randall S. K., Van Grootel V., 2013, European Physical Journal Web of Conferences, 43, 05001. https://doi.org/10.1051/epjconf/20134305001
shapiro
Shapiro S. L., Teukolsky S. A., 1983, ‘Black Holes, White Dwarfs, and Neutron Stars: The Physics of Compact Objects’, Wiley.
Jain
Jain R. K., Kouvaris C., Nielsen N. G., 2016, Physical Review Letters, 116, 151103. https://doi.org/10.1103/PhysRevLett.116.151103 arXiv:1512.05946
Boshkayev
Boshkayev K., 2018, Astronomy Reports, 62, 847. https://doi.org/10.1134/S106377291812017X arXiv:1807.00332
Holberg2012
Holberg J. B., Oswalt T. D., Barstow M. A., 2012, The Astronomical Journal, 143, 68. https://doi.org/10.1088/0004-6256/143/3/68 arXiv:1201.3822
Taubenberger
Taubenberger S., Benetti S., Childress M., Pakmor R., Hachinger S., Mazzali P. A., Stanishev V., Elias-Rosa N., Agnoletto I., Bufano F., Ergon M., Harutyunyan A., Inserra C., Kankare E., Kromer M., Navasardyan H., Nicolas J., Pastorello A., Prosperi E., Salgado F., Sollerman J., Stritzinger M., Turatto M., Valenti S., Hillebrandt W., 2011, Monthly Notices of the Royal Astronomical Society, 412, 2735. https://doi.org/10.1111/j.1365-2966.2010.18107.x arXiv:1011.5665
Banerjee2023
Banerjee P., Garain D., Chowdhury S., Singh D., Joshi R., Sarkar T., 2023, Monthly Notices of the Royal Astronomical Society. https://doi.org/10.1093/mnras/stad1284 arXiv:2212.09122
Herant
Herant M., 1994, Memorie della Societa Astronomica Italiana, 65, 1013. https://ui.adsabs.harvard.edu/abs/1994MmSAI..65.1013H
Coughlin2015
Coughlin E. R., Nixon C., 2015, The Astrophysical Journal, 808, L11. https://doi.org/10.1088/2041-8205/808/1/L11 arXiv:1506.08194
Golightly2019
Golightly E. C. A., Nixon C. J., Coughlin E. R., 2019, The Astrophysical Journal, 882, L26. https://doi.org/10.3847/2041-8213/ab380d arXiv:1907.05895
Miles2020
Miles P. R., Coughlin E. R., Nixon C. J., 2020, The Astrophysical Journal, 899, 36. https://doi.org/10.3847/1538-4357/ab9c9f arXiv:2006.09375
Cufari2022
Cufari M., Coughlin E. R., Nixon C. J., 2022, The Astrophysical Journal, 924, 34. https://doi.org/10.3847/1538-4357/ac32be arXiv:2110.11374
Mockler2019
Mockler B., Guillochon J., Ramirez-Ruiz E., 2019, The Astrophysical Journal, 872, 151. https://doi.org/10.3847/1538-4357/ab010f arXiv:1801.08221
Guillochon2013
Guillochon J., Ramirez-Ruiz E., 2013, The Astrophysical Journal, 767, 25. https://doi.org/10.1088/0004-637X/767/1/25 arXiv:1206.2350
Manukian2013
Manukian H., Guillochon J., Ramirez-Ruiz E., O'Leary R. M., 2013, The Astrophysical Journal, 771, L28. https://doi.org/10.1088/2041-8205/771/2/L28 arXiv:1305.4634
Gafton2015
Gafton E., Tejeda E., Guillochon J., Korobkin O., Rosswog S., 2015, Monthly Notices of the Royal Astronomical Society, 449, 771. https://doi.org/10.1093/mnras/stv350 arXiv:1502.02039
PHANTOM
Price D. J., Wurster J., Tricco T. S., et al., 2018, Publications of the Astronomical Society of Australia, 35, e031. https://doi.org/10.1017/pasa.2018.25 arXiv:1702.03930
Rees1988
Rees M. J., 1988, Nature, 333, 523. https://doi.org/10.1038/333523a0
Coughlin2019
Coughlin E. R., Nixon C. J., 2019, The Astrophysical Journal, 883, L17. https://doi.org/10.3847/2041-8213/ab412d arXiv:1907.03034
Golightly2
Golightly E. C. A., Coughlin E. R., Nixon C. J., 2019, The Astrophysical Journal, 872, 163. https://doi.org/10.3847/1538-4357/aafd2f arXiv:1901.03717
|
http://arxiv.org/abs/2307.02058v1
|
20230705064740
|
Glass-like thermal conductivity and narrow insulating gap of EuTiO$_3$
|
[
"Alexandre Jaoui",
"Shan Jiang",
"Xiaokang Li",
"Yasuhide Tomioka",
"Isao H. Inoue",
"Johannes Engelmayer",
"Rohit Sharma",
"Lara Pätzold",
"Thomas Lorenz",
"Benoît Fauqué",
"Kamran Behnia"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cond-mat.stat-mech",
"cond-mat.str-el"
] |
Equal contribution
Present address: Fakultät für Physik, Ludwig-Maximilians Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany
JEIP, USR 3573 CNRS, Collège de France, PSL Research University, 11, Place Marcelin Berthelot, 75231 Paris Cedex 05, FranceLaboratoire de Physique et d'Étude des Matériaux
(ESPCI - CNRS - Sorbonne Université), PSL Research University, 75005 Paris, France
Equal contribution
Laboratoire de Physique et d'Étude des Matériaux
(ESPCI - CNRS - Sorbonne Université), PSL Research University, 75005 Paris, France
Present address: Wuhan National High Magnetic Field Center and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
Laboratoire de Physique et d'Étude des Matériaux
(ESPCI - CNRS - Sorbonne Université), PSL Research University, 75005 Paris, France
National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8565, Japan
National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8565, Japan
II. Physikalisches Institut, Universität zu Köln, 50937 Köln, Germany
II. Physikalisches Institut, Universität zu Köln, 50937 Köln, Germany
II. Physikalisches Institut, Universität zu Köln, 50937 Köln, Germany
II. Physikalisches Institut, Universität zu Köln, 50937 Köln, Germany
JEIP, USR 3573 CNRS, Collège de France, PSL Research University, 11, Place Marcelin Berthelot, 75231 Paris Cedex 05, France
Laboratoire de Physique et d'Étude des Matériaux
(ESPCI - CNRS - Sorbonne Université), PSL Research University, 75005 Paris, France
Crystals and glasses differ by the amplitude and the temperature dependence of their thermal conductivity. However, there are crystals known to display glass-like thermal conductivity. Here, we show that EuTiO_3, a quantum paraelectric known to order antiferromagnetically at 5.5 K, is one such system. The temperature dependence of resistivity and Seebeck coefficient yield an insulating band gap of ∼ 0.22 eV. Thermal conductivity is drastically reduced. Its amplitude and temperature dependence are akin to what is seen in amorphous silica. Comparison with non-magnetic perovskite solids, SrTiO_3, KTaO_3, and EuCoO_3, shows that what impedes heat transport are 4f spins at Eu^2+ sites, which couple to phonons well above the ordering temperature. Thus, in this case, superexchange and valence fluctuations, not magnetic frustration, are the drivers of the glass-like thermal conductivity.
Glass-like thermal conductivity and narrow insulating gap of EuTiO_3
Kamran Behnia
August 1, 2023
====================================================================
§ INTRODUCTION
In most insulating crystals, the flow of heat can be understood by considering the response of a gas of phonons to a temperature gradient. This picture, first drawn by Peierls <cit.>, using a linearized Boltzmann equation, is remarkably successful in describing the thermal conductivity, κ, of most insulators <cit.>. It explains why at an intermediate temperature, κ peaks <cit.>, separating a high-temperature decrease by warming (due to anharmonicity), and a low-temperature decrease by cooling (due to phonon depopulation). In amorphous solids, on the other hand, there is no such peak in κ (T) and heat diffuses thanks to off-diagonal coupling across harmonic branches <cit.>.
Some crystals, however, display glass-like thermal conductivity <cit.>. The thermal conductivity of these materials can be very low and/or feature a monotonic temperature dependence (lacking the T^-1 decrease at high temperature). They are sought after, since a low lattice thermal conductivity would lead to a large thermoelectric figure of merit <cit.>.
EuTiO_3 (ETO) is a perovskite with a cubic structure at room temperature <cit.>. Its electric permittivity increases upon cooling without giving rise to a ferroelectric instability. This quantum paraelectric behavior <cit.> is akin to what has been seen in other ABO_3 compounds such as SrTiO_3 (STO) and KTaO_3 (KTO). In contrast to them, however, it magnetically orders at T_N=5.5 K, with an antiferromagnetic alignment of the nearest neighbor Eu^2+ spins <cit.>. Like STO, but at higher temperatures, it goes also through an antiferrodistortive transition where adjacent TiO_6 octahedra rotate in opposite directions <cit.>. Upon doping, either with
oxygen vacancies <cit.> or La substitution of Ti <cit.>, it becomes a dilute metal <cit.>, but without a superconducting ground state.
Here, we show that the temperature dependence of thermal conductivity in ETO is glass-like and its amplitude, over a broad temperature range starting from room temperature and extending to cryogenic temperatures, is lower than in STO and KTO. The drastic attenuation of thermal conductivity occurs well above the Néel temperature and when the magnetic entropy is saturated to its expected value. This points to an unusual version of spin-phonon coupling such as the phonon-paramagnon hybridization postulated by Bussmann-Holder et al. <cit.>.
Evidence for spin-lattice coupling in ETO has been around for more than two decades <cit.>. However, it was restricted to temperatures comparable to the magnetic ordering. According to our findings, the phonon mean free path is affected by spins at Eu sites even at high temperature when there is neither a magnetic order nor a field-dependent entropy.
Monitoring the temperature dependence of electrical conductivity and thermopower in insulating EuTiO_3, we find an activation gap of 0.11 eV, indicating that the intrinsic transport band gap in EuTiO_3 is as low as ∼0.2 eV, much smaller than the gaps found by optical studies, but close to what is expected by Ab Initio calculations <cit.>.
The narrow gap between the chemical potential and the Eu^2+ energy level, together with the random orientation of large magnetic moments in the paramagnetic state, emerge as the principal suspects for shortening the lifetime of heat-carrying phonons at elevated temperatures.
In contrast to many other crystals displaying glass-like conductivity, ETO is not a spin liquid candidate, but a simple G-type antiferromagnet and there is no magnetic frustration. In this context, the glassy behavior could be framed in a picture of off-diagonal coupling across harmonic modes <cit.>, which in this case would be phonons and paramagnons.
§ RESULTS
§.§ Activation and band gap
Fig.<ref>a shows the temperature dependence of electrical resistivity ρ in as-grown EuTiO_3 (ETO) single crystals. One can see that ρ increases by seven orders of magnitude upon cooling from 360 K to 50 K.
An Arrhenius activation behavior becomes visible by plotting lnρ as a function of the inverse of temperature (Fig.<ref>b). The activation gap is ∼ 0.11 eV.
The Hall resistivity (see the supplement <cit.>) also shows an Arrhenius behavior with a comparable Δ. At low temperature, the resistivity begins to saturate when the Hall carrier density is as low as 10^10 cm^-3 (see the supplement <cit.>). This implies an extremely low level of extrinsic donors. An activated behavior in the longitudinal and Hall resistivities of as-grown ETO crystals was previously reported by Engelmayer et al. <cit.>, whose study focused on the emergence of metallicity in oxygen-deficient ETO.
This activated behavior is also confirmed by our measurements of the Seebeck coefficient (see Fig.<ref>(c)) and of the temperature dependence of the electric permittivity <cit.>.
The thermoelectric power has a negative sign and an amplitude in the range of mV/K, typical of a narrow gap semiconductor. The Seebeck coefficient in an intrinsic semiconductor is expected to be ≈k_B/eΔ/k_BT <cit.>. Thus, by plotting S as a function of T^-1 (see the inset), one expects to see a straight line whose slope yields Δ. As seen in the inset of Fig.<ref>c, this is indeed what our data yields, with Δ≈ 0.1 eV. Note that this temperature dependence is qualitatively distinct from what is expected in extrinsic semiconductors <cit.> as seen, for example, in the case of Nb-doped STO <cit.>.
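Both gap estimates amount to straight-line fits against 1/T; the sketch below illustrates the procedure, with the data arrays, function names and unit conventions (ρ in arbitrary units, S in V/K) chosen for illustration only.

import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

def gap_from_arrhenius(T, rho):
    """rho ∝ exp(Delta/(k_B*T)): the slope of ln(rho) versus 1/T is Delta/k_B."""
    slope, _ = np.polyfit(1.0 / T, np.log(rho), 1)
    return slope * k_B  # activation gap Delta in eV

def gap_from_seebeck(T, S):
    """|S| ≈ Delta/(e*T) for an intrinsic semiconductor: the slope of |S| (in V/K)
    versus 1/T gives Delta/e, i.e. Delta in eV."""
    slope, _ = np.polyfit(1.0 / T, np.abs(S), 1)
    return slope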
Thus, the temperature dependence of electric and thermoelectric conductivity both point to a similar energy gap between the chemical potential and the conduction band. The Hall and the Seebeck coefficients are both negative, indicating that carriers are electron-like and thermally excited to live in the conduction band originating from Ti-d orbitals.
Both ρ and S show an anomaly near 260 K, which we identify as the temperature of the structural transition in ETO <cit.>. Like STO <cit.>, this transition makes ETO tetragonal <cit.>. We cannot rule out a very small difference in the amplitude of the activation gap between the cubic and the tetragonal phases (See the inset of Fig<ref>b). We also detected an unexpected and reproducible hysteresis near this phase transition. Taken at its face value, this indicates that this structural phase transition is first order (see the supplement).
Assuming that the chemical potential lies halfway between the conduction and valence bands leads us to conclude that the band gap of ETO is ≈ 0.22 eV. This is remarkably smaller than the 3.2 eV gap of STO <cit.>, but only slightly smaller than what a recent DFT calculation <cit.> found (0.27-0.33 eV). According to another and earlier theoretical study <cit.>, the magnitude of the band gap in ETO depends on the amplitude of the Hubbard U, and a realistic U (5-6 eV) would lead to a band gap of 0.2-0.4 eV. Thus, the narrow gap detected by our transport studies is close to what is intrinsically expected in this solid. Optical probes, however, have detected a larger gap (0.8-0.9 eV) <cit.>. Such larger energy scales are possible indications that the Density of States (DOS) near the Eu-f level is not featureless <cit.>.
Let us keep in mind the contrast between ETO on one hand and STO and KTO on the other. The former has a valence band originating from Eu-f orbitals close to the chemical potential, while the latter two have a valence band emanating from O-p orbitals, lying much further away from the chemical potential.
§.§ Thermal conductivity
Fig. <ref> shows the temperature dependence of thermal conductivity, κ, of two ETO crystals from slightly above room temperature down to dilution refrigerator temperatures. We measured several crystals and found that the thermal conductivity of all lies somewhere between the two cases shown in this figure. Moreover, we could not detect a correlation between the amplitude of κ and a weak variation of the saturation magnetization observed across various ETO samples (see the supplement <cit.>).
In contrast to typical crystalline insulators <cit.>, κ does not show a prominent peak. As seen in the inset, there is a clear anomaly at T_AFD, and below this temperature κ rises by about ten percent upon cooling. However, there is no sign of a kinetic regime with κ∝ T^-1, as seen in many other insulators <cit.>.
§.§ Crystals, glasses and glass-like crystals
In order to put our observation in a proper context, we compare our data with what has been reported in the case of SiO_2, which shows a spectacular difference in thermal conductivity between its crystalline and amorphous structures <cit.>. As seen in Fig.<ref>, κ in amorphous silica monotonically decreases with cooling, in contrast to crystalline quartz, which has a prominent peak. At any given temperature, the crystal conducts heat at least an order of magnitude better than the glass <cit.>. Not only is the thermal conductivity of ETO similar to that of silica in its temperature dependence, but in the cryogenic temperature range surrounding the Néel temperature (T_N≃ 5.5K) the crystalline ETO samples conduct heat less well than amorphous silica.
The order of magnitude of the thermal conductivity and its temperature dependence in ETO are comparable with those of other crystalline solids displaying a glass-like thermal conductivity, such as Tb_2Ti_2O_7 <cit.>, Tb_3Ga_5O_12 <cit.>, Na_4Ir_3O_8 <cit.>, Pr_2Ir_2O_7 <cit.> and La_0.2Nd_0.4Pb_0.4MnO_4 <cit.>. In order to allow a direct comparison, Fig.<ref> includes the data for Tb_2Ti_2O_7 <cit.>, the compound with the lowest thermal conductivity among these frustrated magnets. Note that, in contrast to other members of this club of materials, EuTiO_3 is a G-type antiferromagnet and is not frustrated.
As a further comparison, we also include κ of EuCoO_3 <cit.>, which displays a temperature dependence typical of a crystalline insulator. EuCoO_3 is a perovskite like ETO, but it has an orthorhombic symmetry, and the valence is Eu^3+ with only 6 electrons in the 4f shell. According to Hund's rules, this causes a vanishing magnetic moment (J=L-S=0), in agreement with the experimentally observed van Vleck susceptibility <cit.>, drastically different from the large local moments of 7 μ_B/Eu^2+ in ETO.
§.§ ETO compared to its non-magnetic sisters
Fig.<ref> compares the thermal conductivity of ETO and two other ABO_3 perovskites. SrTiO_3 (STO) and KTaO_3 (KTO) are also quantum paraelectric, but not magnetic, solids. Our new data on these two materials is in good agreement with previous studies of heat transport in STO <cit.> and in KTO <cit.>. In both cases, there is also a visible sample dependence, which is more pronounced near the peak. However, this sample dependence is much smaller than the difference between the three compounds. At room temperature, this difference is small, yet visible: κ (300 K) is ≈ 9 W/(K.m) in ETO, ≈ 11 W/(K.m) in STO and ≈ 17 W/(K.m) in KTO. Much more drastic is the difference in the temperature dependence between the three sister compounds. The enhancement with cooling observed in the other perovskites is absent in ETO. This difference extends over the full temperature range above the magnetic ordering.
It is instructive to scrutinize the specific heat of the three sister compounds. Fig. <ref>(a) shows that C/T of STO, ETO and KTO approach each other towards room temperature and reach C ≈ 100 J/(mol.K). With five atoms per formula unit, one expects the specific heat to saturate at the Dulong-Petit value of 5 N_A k_B = 125 J/(mol.K), which is indeed what happens to STO heated to 1800 K <cit.>, a remarkably high temperature, broadly compatible with the highest energy scale in the phonon spectrum of these solids (≈ 100 meV) <cit.>. A systematic difference in the specific heat evolves at lower temperatures. As shown in the inset of Fig. <ref>(a), all three solids show a peak in the temperature dependence of C/T^3. In the case of STO, this peak is known to be caused by the presence of Einstein modes <cit.>. Similar peaks are visible for KTO and for ETO, and the position of this peak shifts with increasing molar mass: in STO (184 g/mol) it occurs at 30 K, in ETO (248 g/mol) at 25 K, and in KTO (267 g/mol) at 12 K. In the case of ETO, a distinct additional contribution shoots up below 15 K, which signals strong magnetic fluctuations above the Néel ordering temperature of the Eu^2+ spins.
§.§ Field dependence
As reported previously <cit.>, the specific heat in ETO displays a strong dependence on magnetic field. In order to separate the magnetic C_mag and the phononic C_phon contribution, we subtracted from the total specific heat of EuTiO_3 the measured specific heat of SrTiO_3 (with a temperature re-scaled by a factor of 0.83 such that the positions of the C/T^3 maxima of both materials match). Note that this scaling factor is close to the ratio of the molar masses √(M_STO/M_ETO)=0.86, the expected ratio of their Debye temperatures. Figure <ref>(b) shows the magnetic heat capacity C_mag/T of EuTiO_3 for different magnetic fields from 0 up to 10T applied along a [100]_c direction of the cubic room temperature phase. The sharp zero-field anomaly signals the antiferromagnetic order at T_N=5.5K that gets strongly broadened already in a field of 1T. This reflects the magnetic-field-induced switching to the polarized magnetic state, which sets in around 1.2 T in ETO (see, e.g., the supplement <cit.> or <cit.>) and, consequently, the C_mag/T data in larger fields no longer signal a spontaneous magnetic ordering transition, but a continuous evolution of magnetic entropy as it is the case for ferromagnets in an external magnetic field. In fact, the C_mag/T data of EuTiO_3 strongly resembles the corresponding data obtained on the Eu^2+-based ferromagnet EuC_2, which orders at T_C=14 K <cit.>. The magnetic entropy obtained by integrating C_mag/T is displayed in the inset of Fig. <ref>(b) and reveals that the full magnetic entropy N_A k_B ln(2S+1) of a spin 7/2 system is reached above about 15K for zero field and also for 1 T, whereas for fields above 2 T the entropy evolution drastically broadens and extends to much higher temperatures.
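The subtraction and the entropy integration described above can be summarised as follows; the interpolation scheme, the trapezoidal integration and the array interface are illustrative assumptions, with the rescaling factor 0.83 taken from the text.

import numpy as np

R_GAS = 8.314  # J/(mol K)

def magnetic_entropy(T_eto, C_eto, T_sto, C_sto, scale=0.83):
    """C_mag(T) = C_ETO(T) - C_STO(T/scale), i.e. the STO phonon background with its
    temperature axis rescaled so that the C/T^3 maxima coincide; S_mag is the running
    integral of C_mag/T (trapezoidal rule, temperatures assumed sorted and > 0).
    Expected saturation: R*ln(2S+1) = R*ln(8) for S = 7/2."""
    C_phonon = np.interp(T_eto / scale, T_sto, C_sto)
    C_mag = C_eto - C_phonon
    y = C_mag / T_eto
    S_mag = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(T_eto))))
    return C_mag, S_mag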
The field dependence of κ, shown in Fig. <ref>, further points to an intricate coupling between Eu^2+ spins and heat-carrying phonons. Thermal conductivity begins to depend on magnetic field below 15 K. Interestingly, as seen in the inset of Fig. <ref>, this is the temperature below which there is a significant magnetic entropy.
Above T_N, a magnetic field slightly amplifies κ, indicating a weakening of the spin-induced localization of phonons.
This field-induced amplification of κ in ETO is modest, in contrast to the thirty-fold field-induced amplification of the ultra-low thermal conductivity in Tb_2Ti_2O_7 <cit.>.
Below 3 K, well below the Néel temperature, the field dependence becomes non-monotonic (see Fig. <ref>). We leave the quantitative understanding of the field dependence of κ in EuTiO_3 as a subject for future investigations.
§.§ Thermal diffusivity and phonon mean-free-path
Replacing Sr by Eu reduces the thermal conductivity over a wide temperature range and enhances the specific heat below 15 K. Therefore, the ratio of thermal conductivity (in W/(K.m)) to specific heat per volume (in J/(K.m^3)), i.e. the thermal diffusivity D (in m^2/s), is drastically modified. It is plotted in Fig. <ref>. The most striking feature of D(T) is its two-orders-of-magnitude drop at the Néel temperature. Upon entering the antiferromagnetically ordered phase at 5.5 K, it becomes exceptionally low. Its minimum, 0.03 mm^2/s, is almost two orders of magnitude lower than the thermal diffusivity of a typical glass <cit.>.
This low thermal diffusivity, a consequence of the combination of an unusually low lattice thermal conductivity and a very large magnetic entropy may find applications in heat management in a cryogenic context.
The thermal conductivity of an insulator can be written as:
κ=∑_s,q C_s(q)v^2_s(q) τ_s(q)
The index s refers to different modes and q is the wave-vector. C_s, v_s and τ_s are the specific heat, velocity and scattering time of each mode. There are modes that contribute to the total specific heat (C=∑_s C_s), but not to the thermal conductivity, because of their negligible velocity. Theoretically <cit.>, paramagnons in the paramagnetic state have no dispersion and therefore do not carry heat. They can, however, reduce the phonon thermal transport. In ETO, phonons not only dominate the thermal conductivity, but also, at least down to 15 K, the specific heat. Therefore, one can simply write κ=1/3 C v_s ℓ_ph. This neglects the q dependence of the scattering time and assumes that all modes have the same velocity. Taking v_s = 6.8 km/s, the measured sound velocity in STO <cit.>, in reasonable agreement with the dispersion of the acoustic branches in ETO <cit.>, one can estimate
ℓ_ph, shown in the inset of Fig. <ref>. Comparing it to the inverse of the wave-vector of thermally excited phonons, q_s=k_BT/(ħ v_s), one finds that below 20 K q_sℓ_ph≅ 1, reminiscent of Anderson localization. There is a drop at 15 K, below which the specific heat is no longer phonon dominated.
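The numbers behind this estimate follow directly from the kinetic formula; a minimal sketch is given below, assuming SI units, with C_vol the specific heat per unit volume and v_s = 6.8 km/s as quoted above.

import numpy as np

hbar, k_B = 1.0546e-34, 1.3807e-23  # SI units

def phonon_mfp(kappa, C_vol, v_s=6.8e3):
    """l_ph from kappa = (1/3) * C_vol * v_s * l_ph, with C_vol in J K^-1 m^-3."""
    return 3.0 * kappa / (C_vol * v_s)

def q_times_l(kappa, C_vol, T, v_s=6.8e3):
    """q_s * l_ph with q_s = k_B*T/(hbar*v_s); values of order unity point to a
    breakdown of well-defined phonon propagation (Anderson-like localization)."""
    return (k_B * T / (hbar * v_s)) * phonon_mfp(kappa, C_vol, v_s)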
§ DISCUSSION
Evidence for a coupling between spin and lattice degrees of freedom in this compound was first reported by Katsufuji and Takagi <cit.>, who found that when the spins order at 5.5 K, the electric permittivity of ETO drops by 7 percent and this drop is suppressed by the application of a magnetic field of the order of 3 T. This magneto-electric effect implies coupling between Eu^2+ spins and the soft mode governing the electric permittivity. Reuvekamp et al. <cit.> have found a quantitative agreement between the amplitude of the magneto-electric effect and the low-temperature magnetostriction of the system.
Our main finding is that lattice-spin coupling drastically attenuates the propagation of heat in ETO, even at temperatures where magnetic ordering is absent and magnetic entropy is practically saturated at its maximum value k_Bln(2S+1)/spin. This implies that even when the spins are randomly oriented, heat-carrying phonons couple to Eu^2+ states and their large magnetic moments (6.9-7 μ_B). According to ref. <cit.>, without incorporating the loss of spin symmetry, the DFT calculations cannot explain the absence of metallicity and the finite band gap of the paramagnetic phase.
The random orientation of magnetic moments at Eu sites (See Fig. <ref>) is the most plausible source of phonon localization. The superexchange interaction between Eu spins occurs through Ti ions <cit.>. The inter-atomic force constant can depend on the relative orientation of spins. The calculated phonon frequencies for parallel or anti-parallel alignment of adjacent spins are not the same <cit.>, which implies that phonons cannot keep a well-defined dispersion in presence of random spin orientation.
The narrow energy separation between the Eu^2+ energy level and the chemical potential may also play a role. In some Eu-based metals, the thermoelectric response has been linked to the temperature dependence of Eu valence <cit.>. Remarkably, a theoretical study <cit.> has concluded that the contribution of optical phonons to the overall lattice thermal conductivity is unusually large in this lattice structure. A coupling between heat-carrying optical phonons and Eu valence may be the source of observed κ attenuation.
Inelastic neutron scattering is a promising probe of this physics. In the case of Tb_3Ga_5O_12, for example, it documented the coupling between spin and lattice <cit.>. A recent study on STO <cit.> has found evidence for an unusual hybridization between acoustic and optic phonon branches. No neutron scattering data is presently available to compare ETO with STO.
Finally, let us note that the glass-like thermal conductivity of EuTiO_3, in contrast to spin-liquid crystals, occurs in a solid with a simple G-type anti-ferromagnetic ground state <cit.>. A formal theoretical treatment may be achieved by complementing the picture drawn by Eq.<ref> with an `off-diagonal' coupling between different vibrational states <cit.>:
κ_od=∑_ss',q^s≠ s' C_ss'(q)v^2_ss'(q) τ_ss'(q)
This equation was conceived for non-magnetic crystals, in which harmonic coupling occurs across phonon branches <cit.>. It can be extended to magnetic modes.
§ ACKNOWLEDGEMENTS
We thank Annette Bussman-Holder, Yo Machida, Igor Mazin and Nicola Spaldin for stimulating discussions. This work was supported in France by JEIP-Collège de France, by the Agence Nationale de la Recherche (ANR-18-CE92-0020-01; ANR-19-CE30-0014-04) and by a grant from the Ile de France regional council. In Germany, it was supported by the DFG (German Research Foundation) via Project No. LO 818/6-1. In Japan, I.H.I. was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. 18H03686 and the Japan Science and Technology Agency (JST) CREST Grant No. JPMJCR19K2.
Supplementary Materials for Glass-like thermal conductivity and narrow insulating gap of EuTiO_3
§ CRYSTAL GROWTH
EuTiO_3 single crystals were grown in two different labs.
In Cologne, the floating-zone technique was used as described in Ref. <cit.>. In order to prevent the formation of the Eu_2Ti_2O_7 pyrochlore phase, the growth was performed in an argon atmosphere and without any preliminary powder reaction, similar to Ref. <cit.>. Powders of Eu_2O_3, TiO, and TiO_2 were mixed in a ratio chosen to reach EuTiO_3-δ, pressed into a rod, and installed in the floating-zone furnace. In a series of growth attempts, it turned out to be necessary to mix the above materials with a nominal oxygen deficiency of δ = 0.05. This allowed us to compensate possible off-stoichiometries of the starting materials or an oxygen capture during the growth process. Single crystallinity of the grown crystal was confirmed by Laue images and phase purity was verified by X-ray powder diffraction.
In Tsukuba, similarly to the above description, EuTiO_3 crystals were prepared by the floating zone method <cit.>.
§ MAGNETIZATION
Figure <ref>(a) displays magnetization data M(T) of a EuTiO_3 single crystal for different magnetic fields between 10 and 500 mT applied along a [100]_c direction of the cubic room-temperature phase. At the lowest fields, the antiferromagnetic ordering causes a maximum in M(T). This peak becomes a kink at an intermediate field of 500 mT. The corresponding inverse susceptibility follows a straight line over the entire temperature range from 300 K down to about 7 K (see panels (b) and (c)). The Curie-Weiss fit yields an effective magnetic moment of μ_eff=7.8 μ_B/f.u. and a positive Weiss temperature of θ_W=3.55 K, which signals a net ferromagnetic coupling. These values agree with the literature. The positive θ_W has been explained by the fact that in EuTiO_3 the antiferromagnetic coupling of the Eu^2+ spins to 6 nearest neighbors (NN) is overcompensated by a larger ferromagnetic coupling to 12 next-nearest neighbors <cit.>.
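For the interested reader, a minimal sketch of such a Curie-Weiss analysis is given below (Python with NumPy assumed). The data file name, column layout, fit window, and the use of molar CGS units for χ are illustrative assumptions and not details of the analysis performed here.

# Minimal Curie-Weiss sketch: 1/chi = (T - theta_W)/C, with mu_eff ≈ 2.828 sqrt(C) mu_B
# when chi is the molar susceptibility in emu/mol (CGS). All inputs are hypothetical.
import numpy as np

T, chi = np.loadtxt("chi_vs_T.dat", unpack=True)   # hypothetical data file: T (K), chi (emu/mol)
mask = (T > 20) & (T < 300)                        # fit window above the magnetic transitions

# Linear fit of 1/chi versus T: slope = 1/C, intercept = -theta_W/C
slope, intercept = np.polyfit(T[mask], 1.0 / chi[mask], 1)
C = 1.0 / slope                                    # Curie constant (emu K/mol)
theta_W = -intercept * C                           # Weiss temperature (K)
mu_eff = np.sqrt(8.0 * C)                          # effective moment in units of mu_B
print(f"C = {C:.3f} emu K/mol, theta_W = {theta_W:.2f} K, mu_eff = {mu_eff:.2f} mu_B")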
Note that there is no magnetic frustration resulting from the ferromagnetic and antiferromagnetic couplings because both couplings support the G-type antiferromagnetic order. Nevertheless, a comparatively weak magnetic field of about 1.2 T is sufficient to induce a polarized magnetic state. This is because only the weaker NN antiferromagnetic coupling needs to be overcome. As shown in Fig. <ref>(d), the saturation magnetization M_sat≃ 6.73 μ_B/f.u. reaches about 96 % of the expected spin-only value of M_sat≃ gμ_B S_z =7μ_B for Eu^2+ with half-filled 4f shell. An analogous 4 % reduction is obtained from the Curie-Weiss fit in the paramagnetic phase for the Curie constant, which results in a 2 % reduction of the effective magnetic moment μ_eff=7.8 μ_B/f.u. in comparison to the expected μ_eff=g√(S(S+1))μ_B=7.94 μ_B for S=7/2.
As shown in Fig. <ref>(e), this weakly reduced saturation magnetization has been observed in various pristine insulating samples obtained from different growth attempts. In contrast, other EuTiO_3 samples that were cut from the same mother crystals but made conducting via vacuum annealing at high temperature, see e.g. <cit.>, reached the expected saturation magnetization of 7μ_B/Eu^2+. Since vacuum annealing of STO and ETO causes electron doping, which is traced back to a reduction of the oxygen content, the different saturation magnetizations of pristine versus doped ETO can be interpreted as follows: apparently, the pristine ETO samples have a weak oxygen surplus that results in a partial oxidation of Eu^2+ to Eu^3+, which has a vanishing magnetic moment and thereby reduces the overall magnetization. Conversely, with decreasing oxygen content the magnetization of pristine ETO should first increase while the sample remains insulating, until all Eu^3+ ions are reduced to Eu^2+; only with further oxygen reduction does a metal-insulator transition set in via electron doping into the 3d band of Ti <cit.>.
§ METHODS FOR MEASURING THERMAL CONDUCTIVITY
We measured the thermal conductivity with a standard one-heater-two-thermometers method. Above 2 K, the measurements were carried out with a home-built setup in a Quantum Design PPMS. Down to 100 mK, the measurements were performed in a dilution refrigerator. The sensors used for the measurement of ETO were Cernox Cx-1030 chips and RuO_2 thermometers. Above 20 K and up to room temperature, type E thermocouples were used. Close attention was paid to the overlap between different sets of experimental data. The thermometers were glued directly to the samples with Dupont 4922N silver paste. These sensors were also thermally isolated from the copper sample holder by long manganin wires with very low thermal conductance. The ETO/STO samples were connected to a heat sink (copper finger) with Dupont 4922N silver paste on one side and to a RuO_2 resistor (used as a heater) on the other side. A heating current was applied by passing a current I through the heater from a DC current source (Keithley 6220). We evaluated the heating power as I × V, with V the voltage measured across the heater by a digital multimeter (Keithley 2000). κ was checked to be independent of the applied thermal gradient Δ T by varying Δ T/T; (Δ T/T)_max was kept below 10%. Finally, the sensors were calibrated during each experiment and the calibration was verified.
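For reference, the conversion from the measured quantities to κ in this steady-state method amounts to a single geometry factor; the following sketch spells it out, with all numerical values being placeholders rather than parameters of the samples studied here.

# Steady-state conversion for the one-heater-two-thermometers method:
# kappa = (P / dT) * (L / A), with P = I * V the Joule power dissipated in the heater.
# All values below are placeholders, not actual sample parameters.
I = 1.0e-4      # heater current (A)
V = 5.0e-2      # voltage across the heater (V)
dT = 2.0e-2     # temperature difference between the two thermometers (K)
L = 2.0e-3      # distance between the thermometer contacts (m)
A = 1.0e-6      # sample cross-section (m^2)

P = I * V                    # heating power (W)
kappa = (P / dT) * (L / A)   # thermal conductivity (W K^-1 m^-1)
print(f"kappa = {kappa:.3f} W/(K m)")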
§ SAMPLE DEPENDENCE
As discussed above, the magnetization data of pristine ETO crystals indicate the presence of a certain mixture of Eu^2+ and Eu^3+ valencies, and it is known from many intermetallic materials that Eu (or other rare-earth) valence instabilities can drastically influence the magnetic and/or transport properties <cit.>. Therefore, we studied several EuTiO_3 crystals in order to check whether there is some correlation between the surprisingly low, glass-like thermal conductivity and the reduced saturation magnetization. As can be seen from Fig. <ref>(a), the overall variation of the κ(T) curves obtained on the different samples is covered by the data obtained on samples C1 and T1, which are discussed in the main article. The magnetization data of the pristine samples C2 and T1 are essentially identical to each other and agree with the data of the other pristine samples shown in Fig. <ref>. While this is not surprising for C2, which is another piece from the same mother crystal JE111, this was not necessarily to be expected for T1, which was grown independently under somewhat different conditions (see above), and it is evident from panel (a) that overall the κ(T) curve of T1 lies above those measured on the samples C1, C2, and C3. On the other hand, the magnetization of C3 is the only one which essentially reaches 7μ_B. This can be traced back to the fact that this sample was vacuum annealed for 1 h at 700^∘C. Resistivity measurements before and after annealing did not show a measurable difference, whereas the enhanced magnetization indicates a significant variation of the Eu valence towards almost pure Eu^2+. Nevertheless, the thermal conductivity of this particular C3 sample remains as low and glassy as those of all the other ETO samples. Thus, we conclude that the weak Eu^2+/3+ off-stoichiometry is not the main mechanism that induces the unusual glass-like phonon heat transport in the ETO crystals.
We also confirmed the activated behavior of the resistivity in several samples measured in three different laboratories. Figure <ref> shows the temperature dependence of the resistivity ρ of two EuTiO_3 crystals prepared by the floating zone method <cit.>. Tiny anomalies can be discerned near 260 K, which is the transition between the cubic and tetragonal structures, consistent with the discussion in Sec. II.A of the main paper. The inset shows lnρ vs. T^-1 for these crystals. The estimated value of the activation gap Δ is comparable to that of samples T1 and T2 in Figure 1(b).
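A minimal sketch of such an Arrhenius analysis is shown below (Python with NumPy assumed). The data file, fit window, and the factor of 2 in ρ = ρ_0 exp(Δ/2k_BT) (the intrinsic-semiconductor convention) are assumptions that may differ from the convention used for the values quoted above.

# Minimal Arrhenius sketch: fit ln(rho) versus 1/T; the slope is Delta/(2 k_B) in the
# intrinsic-semiconductor convention rho = rho_0 exp(Delta / (2 k_B T)).
# Data file, columns and fit window are hypothetical.
import numpy as np

k_B = 8.617333e-5                                  # Boltzmann constant (eV/K)
T, rho = np.loadtxt("rho_vs_T.dat", unpack=True)   # hypothetical data file: T (K), rho (Ohm cm)
mask = (T > 100) & (T < 300)                       # activated regime

slope, intercept = np.polyfit(1.0 / T[mask], np.log(rho[mask]), 1)
Delta = 2.0 * k_B * slope                          # activation gap (eV)
print(f"Delta = {Delta * 1e3:.0f} meV")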
§ THE HALL COEFFICIENT
As shown in Fig. <ref>, we found that the Hall carrier density displays an Arrhenius behavior from 180 K down to 70 K. A deviation starts below the latter temperature, where the resistance was in the range of 10^8 Ω, rendering the data unreliable. The corresponding carrier density was 10^10 cm^-3. At very low temperature, the saturated Hall number corresponds to a carrier density of ≈ 10^9 cm^-3.
§ HYSTERESIS NEAR THE STRUCTURAL TRANSITION
The structural transition in ETO is believed to be similar to the structural transition in STO: an antiferrodistortive transition in which adjacent octahedra rotate in opposite orientations <cit.>. In the case of STO, this transition is known as a rare case of a second-order structural phase transition. However, in our study of the resistivity, we found that the anomaly at ≈ 260 K presents a hysteresis, which was found to be reproducible. The data are shown in Fig. <ref>. This may point to a first-order nature of this structural transition in ETO.
entry_id: http://arxiv.org/abs/2307.00510v2
published: 20230702081528
title: Inflation with shallow dip and primordial black holes
authors: ["Bao-Min Gu", "Fu-Wen Shu", "Ke Yang"]
primary_category: astro-ph.CO
categories: ["astro-ph.CO"]
entry_id: http://arxiv.org/abs/2307.01883v1
published: 20230704190110
title: Gromov-Witten invariants in complex-oriented generalised cohomology theories
authors: ["Mohammed Abouzaid", "Mark McLean", "Ivan Smith"]
primary_category: math.SG
categories: ["math.SG", "math.AG", "53D45, 14N35, 55N20"]
Given a closed symplectic manifold X, we construct Gromov-Witten-type invariants valued both in (complex) K-theory and in any complex-oriented cohomology theory which is K_p(n)-local for some Morava K-theory K_p(n). We show that these invariants satisfy a version of the Kontsevich-Manin axioms, extending Givental and Lee's work for the quantum K-theory of complex projective algebraic varieties. In particular, we prove a Gromov-Witten type splitting axiom, and hence define quantum K-theory and quantum -theory as commutative deformations of the corresponding (generalised) cohomology rings of X; the definition of the quantum product involves the formal group of the underlying cohomology theory. The key geometric input of these results is a construction of global Kuranishi charts for moduli spaces of stable maps of arbitrary genus to X. On the algebraic side, in order to establish a common framework covering both ordinary K-theory and K_p(n)-local, we introduce a formalism of `counting theories' for enumerative invariants on a category of global Kuranishi charts.
§ INTRODUCTION
Cohomological Gromov-Witten invariants for smooth algebraic varieties were introduced in <cit.>, based on the theory of virtual fundamental classes for moduli problems with perfect obstruction theories. The theory of Kuranishi atlases for moduli spaces of pseudo-holomorphic curves led to a generalisation of Gromov-Witten theory to arbitrary compact symplectic manifolds, cf. <cit.>. Corresponding K-theoretic Gromov-Witten invariants were introduced in the algebraic setting in <cit.>, based on the virtual structure sheaf, but their symplectic counterparts were never developed; the local-to-global construction of the symplectic virtual fundamental class did not seem well-suited to constructing a replacement for the virtual structure sheaf.
Fix an arbitrary compact symplectic manifold (X,ω).
* In Section <ref> we set up a general formalism of `counting theories' on a category of global Kuranishi charts, which may be useful elsewhere. We explain how to obtain such theories from both Morava-local cohomology theories and from complex K-theory.
* In Section <ref>, we extend the theory of `global Kuranishi charts' <cit.> from moduli spaces of rational stable maps to X to stable maps of arbitrary genus[Global charts for higher genus curves have been independently constructed, by a slightly different method, by Amanda Hirschi and Mohan Swaminathan <cit.>.]. The basic construction is implemented in Definition <ref>, while a variant that we use to prove independence of choices (in an appropriate sense) is given in Section <ref>.
* In Section <ref> we define Gromov-Witten invariants
I_g,n(β) ∈ H_*(_g,n× X^n;)
when is either (orbifold) complex K-theory or a Morava-local complex-oriented theory. Taking to be the Morava K-theory K_p(n) at large height n≫0 approximates having a theory of Gromov-Witten invariants over a finite field _p.[Following ideas of Fukaya and Ono, Bai and Xu <cit.> introduced integral-valued Gromov-Witten invariants in genus 0. We expect that their work can be extended to higher genus, but, even in genus 0, the comparison with the _p invariants that we produce is not currently understood.]
* In Section <ref> we obtain associative quantum ordinary and Morava K-theory rings (in the Morava case at primes p>2) for general symplectic manifolds. A fundamental insight of Givental is that these algebraic structures require the use of corrected fundamental classes[More precisely, in quantum K-theory Givental <cit.> and Lee <cit.> correct the metric, rather than the fundamental class; correcting the latter seems essential in the general case.] on the moduli spaces of curves, as discussed in Section <ref>.
* In Section <ref> we establish a splitting axiom (Theorem <ref> and Proposition <ref>) for an `operational suboperad' of invariants I_g,n(β), which involves `tautological integrals' over strata of split curves, organised by the formal group associated to .
The main results are summarised by Theorem <ref>, Corollary <ref> and Theorems <ref> and <ref>.
Three technical aspects of the construction may be worth highlighting. First, we find it convenient to leverage known results in algebraic cobordism for spaces of stable maps to projective space. Second, the counting theory for complex K-theory relies on Atiyah's `transverse-equivariant K-theory' <cit.>, which is the home for symbols of operators which are elliptic in the directions transverse to the orbits of a compact Lie group action; in contrast, for Morava-local theories we work Borel-equivariantly. Finally, formal group laws play an essential role in extracting algebraic structures from the virtual fundamental classes of the moduli spaces of maps; this is consistent with Givental's approach to the subject <cit.>.
For the quantum product / splitting axiom we restrict to Morava-local theories at primes p>2, since the ring spectrum K_2(n) is not homotopy commutative. We prove a splitting but not a genus reduction axiom for Gromov-Witten theory, to avoid the more involved combinatorics of boundary divisors under self-gluing. That combinatorial complexity is irrelevant for the simplest (i.e. additive and multiplicative) formal groups, so the results presented here yield the `full' Gromov-Witten axioms without descendents for rational cohomology and complex K-theory.
Non-vanishing of Gromov-Witten invariants defined over other cohomology theories can be used to constrain those automorphisms of cohomology of (X,ω) arising from symplectomorphisms, and hence the image of π_0(X) →π_0(X). As an example, if X = Y × S^2, a diffeomorphism f: X → X preserving the class [S^2] and its Poincaré dual induces an endomorphism of H^*(X;) = H^*(Y;) ⊕ H^*-2(Y;) which is upper triangular (since it preserves the cup-product). There is a degree -2 endomorphism of any K_p(n)-local cohomology of X coming from the sweepout map associated to the obvious moduli space of rational curves {pt}×^1, and (varying p and working at large height) this shows that f^* is also lower triangular. If H^*(Y;) contains p-torsion in degrees i and i-2 then this yields constraints on f^* not detected by rational Gromov-Witten theory. One can construct related examples starting from the virtual classes of moduli spaces of higher genus curves which are not seen by genus zero invariants (for instance in products of surfaces and four-manifolds of general type).
Example <ref> aside, this paper is foundational[The struggle itself towards the heights is enough to fill a man's heart; one must imagine Sisyphus happy.]; a first application of these ideas was given in <cit.>, and further applications will appear elsewhere.
We are grateful to Shaoyun Bai, Amanda Hirschi, Yuan-Pin Lee, Rahul Pandharipande, Oscar Randal-Williams, Dhruv Ranganathan and Mohan Swaminathan for helpful conversations and correspondence.
M.A. was partially supported by MSRI / SLMath, NSF award DMS-2103805, and the Simons Collaboration on Homological Mirror Symmetry.
M.M. was partially supported by NSF award DMS-2203308.
I.S. was partially supported by MSRI / SLMath, by the Clay Foundation as a Clay Senior Scholar, and by an EPSRC Frontier Research grant (in lieu of an ERC advanced grant).
§ BACKGROUND AND COEFFICIENTS
We briefly recall some results on K-theories and on formal groups, simultaneously fixing conventions and notation.
§.§ Morava K-theories
Fix a prime p and an integer n>0. The Morava K-theory K_p(n) is a complex-oriented cohomology theory with coefficients
_* := K_p(n)_* = _p[v,v^-1] |v| = 2(p^n-1).
When n=1 it is a summand of mod p complex K-theory, but for larger n it is constructed algebraically. The {K_p(n)}_n≥ 0 are the `primes' in the stable homotopy category, and are intrinsic to its triangulated structure; they are the only theories (besides the Eilenberg-Maclane theories Hk for fields k) which satisfy Künneth isomorphisms for arbitrary finite cell complexes
H^*(X× Y;) = H^*(X;) ⊗__* H^*(Y;).
Recall that for any spectrum X we have homotopy orbits and homotopy fixed points
X_hG = X ∧_G EG_+ X^hG = F(EG_+, X)^G
and a norm map X_hG→ X^hG. A basic fact <cit.> is that the map
X_hΓ∧→ X^hΓ∧
induced by the collapse from EG_+ to S^0 is an equivalence when Γ is finite and is K_p(n)-local. One also has the Adams isomorphism
Y/G ≃ (Σ^-gι_*Y)^G
for G-free spectra, with ι the inclusion of a G-fixed universe 𝒰^G ↪𝒰. Together with the freeness of the G-action on the Borel construction EG_+ ∧ X, this underlies a theorem of Cheng <cit.>:
Let a compact Lie group G act on a closed smooth manifold with finite stabilisers. Then there is a Poincaré duality isomorphism
H_*^G(;) ≃H^-*_G(^-T⊕g;).
The groups and the isomorphism depend on the orbifold := /G and not its presentation as a global quotient.
For = K_p(n) this is proved by Cheng in op. cit. The extension to K_p(n)-local theories is well known to experts, and a proof is provided in <cit.>.
Stably almost complex vector bundles are oriented for Morava K-theories, and so stable almost complex structures on the virtual bundle T-g will play a role in the sequel. There is also a version of the previous proposition for non-compact manifolds respectively manifolds with boundary involving cohomology with compact supports respectively cohomology of / ∂, see <cit.>.
§.§ Equivariant K-theory
If X is compact Hausdorff,
K(X) will denote the /2-graded group consisting of K^0(X) and K^1(X), with the even group obtained from complex stable vector bundles on X, and the odd group from those on S^1 × X with a trivialisation along the product of a point with X. If X is locally compact with X^+ its 1-point compactification, we define[This notation conflicts with that of <cit.>, but allows us to have uniform notation with the Borel-equivariant Morava theory later.] K_c(X) := K̃(X^+) to be K-theory with compact support.
Vector bundles with a lifting of the structure group from the orthogonal group to Spin^c (in particular stably almost complex bundles) are K-oriented. If j: Z ⊂ X is a closed submanifold of a compact manifold X of codimension k and the normal bundle ν_Z is K-oriented, then we have a fundamental class [Z]=j_!(1_Z) ∈ K^k(X). If X is a complex algebraic variety and Z⊂ X is a closed subvariety, then [Z] is the image of [𝒪_Z] under the natural map from algebraic K-theory to topological K-theory.
Let G be a compact Lie group, and X a compact G-CW complex. There is again a /2-graded theory K_G(X) constructed from G-equivariant vector bundles.
If W is a locally compact space, we set K_G,c(W) := K̃_G,c(W^+) to be the K-theory with compact supports. This is contravariantly functorial for proper G-maps, and covariantly functorial for open embeddings of G-invariant open subsets. The natural map
K_G,c(U) → K_G,c(W)
is an isomorphism (c.f. <cit.>), where U runs through relatively compact open G-subspaces of W ordered by inclusion.
As explained in Section <ref> below, for a quasi-projective algebraic complex orbifold Y we will sometimes write K^0(Y) for the orbifold K-theory; if Y = /G is presented as a global quotient, this is canonically isomorphic to K^0_G(), which in particular does not depend on the presentation, cf. <cit.>. Defining K^1(Y) = cok(K^0(Y) → K^0(S^1× Y)) and K(Y) = K^0(Y) ⊕ K^1(Y) again yields a 2-periodic theory.
§.§ Transverse K-theory
Suppose a compact Lie group G acts on a manifold M; let T_G^*M denote the conical subset of T^*M of covectors which vanish on tangent vectors to G-orbits. Let P: Γ(E) →Γ(E) be a G-invariant pseudo-differential operator which is elliptic transverse to the G-orbits, meaning its symbol is invertible on T_G^*M\{0}. Following Atiyah <cit.>, we call such an operator transversely elliptic. Such an operator is typically not Fredholm, and has infinite-dimensional kernel, but – after restricting to a submanifold of M with compact closure, which we shall do without further comment – each G-representation occurs in the kernel with finite multiplicity. We shall only be interested in the multiplicity of the trivial representation, which can be interpreted as a map
(ind_M)^G: K_G(T^*_GM) → K_*.
This has three properties we recall:
* If j: U ↪ M is a G-invariant open embedding into a compact manifold, there is a push-forward
j_!: K_G(T_G^*U) → K_G(T_G^*M)
and (ind_M)^G ∘ j_! is intrinsic to U (independent of choice of M); this extends the theory to the non-compact case.
* If Q ↪ M is a G-equivariant embedding of closed manifolds, then T(ν_Q/M) = ν⊕ν is naturally almost complex. Clifford multiplication Λ^ev T^1,0ν→Λ^odd T^1,0ν defines a symbol class over the total space of Tν. Given any transversely elliptic symbol on Q, i.e. bundles E, F→ Q and σ∈Hom(π_TQ^*E,π_TQ^*F) invertible transverse to orbits, we can pull back under Tν→ TQ, multiply by the Clifford complex on Tν, and then extend by zero from ν_Q/M to M. This then satisfies
(ind_M)^G ∘ j_! = ind_Q : K_G(T_G^*Q) → K_*.
* If Y = M/G is a global quotient presentation of an orbifold, so G acts with finite stabilisers, an orbifold elliptic differential operator P on Y lifts to a transversely elliptic operator P̃ on M, and Kawasaki's orbifold index <cit.> ind_Y applied to the symbol σ(P) (in the orbifold K-theory of Y) is given by
ind_Y(σ(P)) = (ind_M)^G (P̃).
As indicated by our notation, the index (ind_M)^G can be obtained from a more elaborate structure which considers all G-representations. This is usually encoded as a distribution on G, using the fact that the multiplicities of the representations appearing in the index grow in a sufficiently controlled manner. More precisely, if ξ_1,…,ξ_r are an orthonormal basis of g and Y_i denotes the first order differential operator on E given by Lie derivative ℒ_ξ_i, then the Casimir
Δ_E = 1 - ∑_j Y_j^2
has symbol injective in directions tangent to the orbits. For each λ, P determines a Fredholm operator P_λ with domain the λ-eigenspace of Δ_E, and (P_λ) is finite-dimensional (and comprises smooth sections, so is independent of choices of Sobolev exponents in the construction). The formal sum ∑_λind(P_λ) converges to a well-defined distribution, yielding an index map
ind_M: K_G(T^*_GM) → C^-∞(G)^Ad(G)
The index (ind_M)^G can then be recovered by pairing against the identify function on G, and the three properties listed above have natural formulations in this context.
In <cit.>, Atiyah does not assume that the action of G is locally free. Under that assumption, which we always impose, the orbits of the G-action are submanifolds with tangent space g, and a choice of Riemannian metric induces an isomorphism T^*_G ⊕ g ≅ T, and hence an isomorphism K_G(T_G^*) = K_G(^T- g).
§.§ Formal groups
Let L:=L_(u,v) ∈_* u,v denote the formal group associated to a complex oriented cohomology theory . We write
L(u,v) = u+_L v; [n]·_L u = u +_L u +_L ⋯ +_L u
and
L^n_1,…,n_m(u_1,…,u_m) = [n_1]·_L u_1 +_L ⋯ +_L [n_m]·_L u_m
and, simplifying notation,
L(u_1,…,u_n) = u_1 +_L u_2 +_L ⋯ +_L u_n.
For J = (j_1,…,j_m) ∈{0,1}^m there are uniquely defined power series L^n_1,…,n_m_J with
L^n_1,…,n_m(u_1,…,u_m) = ∑_J u^J · L_J^n_1,…,n_m(u_1,…,u_m).
For instance,
L^1,1(u,v) = L(u,v) = ∑ a_ij u^i v^j = u+v+ (u v) ·∑_i, j≥ 1 a_ij u^i-1v^j-1
shows
L^1,1_0,0 = 0; L^1,1_0,1 = 1; L^1,1_1,0 = 1; L^1,1_1,1 = ∑_i,j ≥ 1 a_ij u^i-1v^j-1.
H has the additive formal group L(x,y) = x+y. Complex K-theory has the multiplicative formal group L(x,y) = x+y-xy.
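To make the decomposition (<ref>) concrete in a case where all the relevant series are polynomials, the following illustrative sketch (not part of the paper; Python with SymPy assumed) computes L^{2,1}(u,v) = [2]·_L u +_L v for the multiplicative formal group law of complex K-theory and extracts the pieces L^{2,1}_J.

# Illustrative sketch: the J-decomposition of L^{2,1}(u, v) = [2].u +_L v for the
# multiplicative formal group law L(u, v) = u + v - u*v of complex K-theory.
# (For a formal group law with infinitely many nonzero a_ij one would truncate at a fixed order.)
import sympy as sp

u, v = sp.symbols('u v')

def fgl(x, y):
    # multiplicative formal group law x +_L y
    return sp.expand(x + y - x * y)

def n_series(n, x):
    # [n].x = x +_L ... +_L x  (n summands)
    out = 0
    for _ in range(n):
        out = fgl(out, x)
    return out

L21 = fgl(n_series(2, u), n_series(1, v))   # [2].u +_L v = 2u - u^2 + v - 2uv + u^2 v

# Split into pieces indexed by J in {0,1}^2, where J records whether a monomial
# involves u (resp. v); then L^{2,1} = sum_J u^J * L^{2,1}_J.
pieces = {(0, 0): sp.Integer(0), (1, 0): sp.Integer(0),
          (0, 1): sp.Integer(0), (1, 1): sp.Integer(0)}
for (i, j), c in sp.Poly(L21, u, v).terms():
    J = (int(i > 0), int(j > 0))
    pieces[J] += sp.Integer(c) * u**i * v**j

for J, p in pieces.items():
    LJ = sp.cancel(p / (u**J[0] * v**J[1]))
    print(J, sp.expand(LJ))
# result: L_{(0,0)} = 0, L_{(1,0)} = 2 - u, L_{(0,1)} = 1, L_{(1,1)} = u - 2

The vanishing of L^{2,1}_{(0,0)} and the constancy of L^{2,1}_{(0,1)} reflect the identities L^{2,1}(0,0) = 0 and L^{2,1}(0,v) = v.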
Morava K-theory K_p(n) has coefficients K_p(n)_* = _p[v,v^-1] with |v| = 2(p^n-1). The formal group law satisfies
x +_{K_p(n)} y = x + y - v ∑_{i=1}^{p-1} (1/p)(p choose i) x^{i p^{n-1}} y^{(p-i) p^{n-1}} + O(x^{p^n}, y^{p^n}).
and is essentially characterised by the height n property [p]·_L x = v x^{p^n}. Since
x +_{K_p(n)} y = x + y ∈ K_p(n)_* x,y / ⟨ p^{n-1}-st powers⟩
for n≫ 0 this behaves somewhat like a finite field version of the additive formal group.
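As an illustrative sanity check on the height property in the classical height 1 case (recall that K_p(1) is a summand of mod p complex K-theory, whose formal group is multiplicative), the following sketch (Python with SymPy assumed; not part of the paper) computes the [3]-series of x +_L y = x + y - xy and reduces it mod 3.

# Illustrative sketch: height 1 of the multiplicative formal group law.
# Iterating x +_L y = x + y - x*y gives [p].x = 1 - (1 - x)^p, which reduces to
# x^p mod p, matching [p].x = v x^{p^n} with n = 1.
import sympy as sp

x = sp.symbols('x')
p = 3

def fgl(a, b):
    return sp.expand(a + b - a * b)

p_series = 0
for _ in range(p):
    p_series = fgl(p_series, x)

print(sp.expand(p_series))                        # equals 3x - 3x^2 + x^3 = 1 - (1 - x)^3
print(sp.Poly(p_series, x, modulus=p).as_expr())  # x**3, i.e. [3].x ≡ x^3 (mod 3)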
We will sometimes write L^n_1,…,n_m_(u_1,…,u_m)
instead of L^n_1,…,n_m(u_1,…,u_m) and analogously
L^n_1,…,n_m;J_(u_1,…,u_m)
instead of L^n_1,…,n_m_J(u_1,…,u_m)
if we wish to specify which complex oriented cohomology theory we are using.
§.§ Algebraic cobordism
Recall that algebraic cobordism Ω^* is a universal oriented Borel-Moore cohomology theory on smooth schemes over , which has cycles for Ω^*(W) being tuples
[Z,ι; _1,…, _r]
with Z a smooth scheme, ι: Z → W a morphism, and the _i line bundles on Z. In particular, a line bundle → W defines a class [W,𝕀; ], and more generally an operation on cycles
c̃_1(): [Z,ι; _1,…, _r] ↦ [Z,ι; _1,…, _r, ι^*].
Fundamental relations on cycles include the `dimension axiom' which is the vanishing condition
r > _(Z) ⇒ [Z,ι; _1,…, _r] = 0
(note that this holds without any assumption that the _i be pairwise distinct), a relation equating
[Z,ι; _1,…,_r, ] and [D, ι∘incl_D; (_1)|_D,…, (_r)|_D]
when s ∈ H^0(Z;) vanishes transversely along D; and (imposed by hand) a formal group law relation asserting that
c̃_1(⊗')[Z,ι; _1,…, _r] = L_(c̃_1(), c̃_1(')) [Z,ι; _1,…, _r]
with L_ the Lazard universal formal group law.
Let W be a smooth quasi-projective variety over . There is a comparison map η: Ω^*(W) → H^*(W;MU) from algebraic to complex cobordism, which yields base-change maps to H^*(W;) for any complex-oriented . Since Ω^*(W) ⊗__* = CH^*(W), algebraic cobordism determines the Chow ring of W so η is usually not an isomorphism.
The model of equivariant algebraic cobordism we use is defined by lifting to the stages of an algebraic approximation to the Borel construction, see <cit.> and <cit.>. In particular, while it is equipped with a natural map to Borel equivariant complex cobordism, and hence to Borel equivariant complex oriented cohomology theories, it does not map to (genuine) equivariant K-theory. This requires us to give separate arguments, in various places, in order to establish the validity of statements about equivariant K-theory.
If we have a polynomial P(t_1,⋯,t_r) = ∑_{i_1,⋯,i_r} a_{i_1,⋯,i_r} t_1^{i_1}⋯ t_r^{i_r} with coefficients in _* for some
multiplicative complex oriented cohomology theory , then
we define
[Z,ι;P(_1,⋯,_r)] := ∑_{i_1,⋯,i_r} a_{i_1,⋯,i_r} [Z,ι;_1,⋯,_1 (i_1 copies), _2,⋯,_2 (i_2 copies), ⋯, _r,⋯,_r (i_r copies)]
to be the corresponding element in H^*(W;).
§.§ Normal crossing divisors
We will say that a normal crossing divisor is strict if its irreducible components are smooth and meet pairwise cleanly, with all iterated intersections of irreducible components being smooth and locally analytically modelled on {z_1⋯ z_r=0}⊂^n. If is any complex-oriented theory, and Y a smooth variety, a divisor D⊂ Y defines a line bundle 𝒪(D) and hence a class [𝒪_Y(D)] := c̃_1(𝒪(D)) ∈ H^*(Y;). More generally, cycles of the form (<ref>) define classes in -cohomology.
Let D = ∪_i=1^m n_i D_i be a strict normal crossing divisor on a smooth quasi-projective variety Y, and let _i = _Y(D_i). For J ∈{0,1}^m let ι_J: D_J ↪ Y be the inclusion of the stratum labelled by J into Y, and let _i^J denote ι_J^*_i. Then the following identity holds in H^*(Y;)
[_Y(D)] = ∑_J [D_J, ι_J;L_^n_1,…,n_m;J(_1^J, …, _m^J)].
If the algebraic group GL(r,) acts on Y preserving the -orientation and preserving D stratum-wise, the identity (<ref>) lifts to H_U(r)^*(Y;).
Note that although the terms in (<ref>) involve power series in the Picard groups of the D_J, the `dimension axiom' for cycles (<ref>) ensures that the operational first Chern class c̃_1() is always a locally nilpotent operator, so the terms in the sum are actually finite. Note also that (<ref>) is indeed a graded identity, because the coefficients a_ij∈_* in the formal group law (<ref>) themselves have non-trivial degree.
The corresponding identity for algebraic cobordism Ω_alg^*(Y) is proved as Proposition 3.1.9 in <cit.>. As noted in Remark <ref>, for schemes Y over a field k of characteristic zero, an embedding k↪ induces a comparison map from algebraic cobordism Ω^*(Y) to complex cobordism H^*(Y;MU) (working integrally) by <cit.>. (Note that we are working with schemes over , but the characteristic of the coefficients of the cohomology theory is not constrained.)
The result for complex oriented then follows from that for MU by base-change.
The lift to equivariant algebraic cobordism follows from <cit.>. Let G = U(r).
Note BG= ∪_N Gr(r, ^N) is an increasing union of algebraic G_-varieties, and the same filtration defines a filtration {Y_N} of Y×_G EG, so Y_N → Gr(r, ^N) is the associated Y-bundle over the finite-dimensional Grassmannian. Since the G_-action preserves D, it defines a divisor D_N ⊂Y_N, to which we can apply the previous result (for fixed N). The natural map
H^*_G(Y;) = H^*(Y×_G EG_+; ) →lim_N H^*(Y_N;)
is an isomorphism. The inclusion of D_N →Y_N induces a morphism of inverse limits which is compatible under increasing N.
This establishes the result for Borel-equivariant complex oriented cohomology theories. The case of genuine equivariant K-theory follows from the Koszul resolution (c.f. Equation (<ref>)).
Even in the reduced case, note that in (<ref>) the `coefficient' of a stratum D_J ⊂ Y (more precisely, the coefficients of the various line bundles over that stratum which define the actual cycle) depends not just on the underlying scheme, or even its inclusion into Y, but on the particular way it is expressed as an intersection of irreducible components in D.
Suppose D= D_1 ∪ D_2 ⊂ W is reduced with two smooth irreducible components which meet cleanly along a smooth Z = D_1 ∩ D_2. Then, suppressing the inclusion maps ι_J, we have:
[𝒪_W(D)] = [D_1]+[D_2] + ∑_i,j ≥ 1 a_ij· (D_1)|_Z^i-1·(D_2)|_Z^j-1
with the 3rd term supported on Z and a_ij as in (<ref>). This case already contains all of the information of the formal group law L_; indeed algebraic cobordism can be reconstructed from relations associated to such `double point degenerations', compare to <cit.>.
For the next statement, we find it convenient to extend the notation of Equation (<ref>) to generalised cohomology theory, so that given a pair (Z,c) consisting of a subvariety Z of Y and a class c in the -cohomology of D, we denote by [Z;c] the pushforward of c to the -cohomology of Y. Recall that a normal crossing divisor has a cone complex, which is the dual intersection complex if it has strict normal crossings but is in general more complicated, see <cit.>.
Let D → Y be a normal crossing divisor on a smooth quasi-projective variety Y with smooth irreducible strata D_J. There is an identity in H^*(Y;)
[_Y(D)] = ∑_J [D_J;c_J]
for coefficients c_J ∈_*Pic(D_J) determined inductively in J by L_ and the cone complex to D. If D is GL(r,)-equivariant then the equality lifts to H^*_U(r)(Y;).
It suffices to prove the result in algebraic cobordism, since we can then pushforward to any complex oriented cohomology theory (in particular, the coefficients c_J lie in the image of the maps MU_*Pic(D_J)→_*Pic(D_J)). For a strict normal crossing divisor, the result holds – with the strata J labelled by {0,1}^m when D has m irreducible components and the coefficients c_J determined universally by the formal group law – by expanding equation (<ref>). The general case is reduced to this by `explosion'.
Given any normal crossing divisor D with cone complex Δ(D), we can form the blow up of Y along the minimal stratum of D, then the strict transforms of the next-to-minimal strata, etc. Iteratively this is the `explosion' of D, see for instance <cit.>; it yields a variety Y' → Y on which the proper transform of D has cone complex the barycentric subdivision of Δ(D). Exploding twice, one obtains a morphism f: Y”→ Y in which the pullback f^*D has strict normal crossings. The natural pullback map
Ω^*(Y) →Ω^*(Y”)
is a split injection, and the same holds G-equivariantly in the presence of a G_-action by working on the stages of the Borel construction. The transform f^*D has strata which are iterated projective bundles over bases obtained from strata of D by blowing up along substrata. Using the projective bundle formula for algebraic cobordism and induction, one thus obtains an implicit formula as in (<ref>). Compare to <cit.>.
For = H one has the formal group L(x,y) = x+y, which gives that [_Y(D)] = ∑ n_i [_Y(D_i)].
In complex K-theory, with conventions for the Thom isomorphism compatible with <cit.>, the Euler class e^KU(V) = ∑ (-1)^i Λ^i V^* and the first Chern class c_1^KU() = 1 - ^*. Then
c_1^KU(⊗') = c_1^KU() + c_1^KU(') - c_1^KU(⊗')
giving the multiplicative formal group in the normalisation
L(x,y) = x+y-xy.
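Writing L and L' for the line bundles, this normalisation can be checked in one line from c_1^KU(L) = 1 - L^*:
c_1^KU(L) + c_1^KU(L') - c_1^KU(L) c_1^KU(L') = (1 - L^*) + (1 - L'^*) - (1 - L^*)(1 - L'^*) = 1 - L^* ⊗ L'^* = c_1^KU(L ⊗ L'),
so the formal group of KU is indeed multiplicative in this normalisation.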
This leads to an `inclusion-exclusion' identity one can obtain more directly: the exact sequence of sheaves
0 →/(x_1⋯ x_k) →∑_i /(x_i) →∑_i<j/(x_i,x_j) →⋯→/(x_1,…,x_k) → 0
for {x_1⋯ x_k = 0}⊂^k shows that a reduced normal crossing divisor D = ∪_i D_i ⊂ M of a smooth quasi-projective variety gives an identity
[_D] - ∑_i [_D_i] + ∑_i<j [_D_i∩ D_j] + ⋯ + (-1)^k [_D_1 ∩…∩ D_k] = 0 ∈ K_alg(M)
which one can push forward to K_c(M).
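For k = 2 this is exactly the specialisation of the formula in Example <ref>: for the multiplicative formal group law one has a_11 = -1 and all other a_ij = 0, so the correction term supported on Z = D_1 ∩ D_2 is just -[Z], giving
[𝒪_W(D)] = [D_1] + [D_2] - [D_1 ∩ D_2],
in agreement with the inclusion-exclusion identity above.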
Concretely, it is hard to apply Proposition <ref> because it is inexplicit, but Example <ref> shows that in the special case of complex K-theory the fundamental class of a normal crossing divisor is given by the inclusion/exclusion formula, whether or not the normal crossing divisor is strict. This enables us to give slightly stronger results in this case.
§ COUNTING THEORIES
We introduce a category of global (Kuranishi) charts, and `counting theories' on that category, to give a setting for generalised cohomological Gromov-Witten invariants which simultaneously encompasses Morava theories and complex K-theory.
§.§ Global charts
The category of stably almost complex smooth global charts is defined as follows:
* an object is a quadruple ≡ (G, , V, s), where
(i) G is a compact Lie group, (ii) is a smooth G-manifold with finite stabilisers, (iii) V is a smooth G-vector bundle, and
(iv) s is a G-equivariant section of V. We fix in addition a G-equivariant stable almost complex structure on each of V and on the virtual bundle T -.
* a morphism f : ' → consists of (i) a map G' → G of Lie groups, (ii) a G' equivariant map ' →, and (iii) a G'-equivariant bundle map V' → f^* V. We require that the pullback of s agree with the pushforward of s' under f, and that the data is compatible with the homotopy classes of the stable almost complex structures.
A morphism is said to be proper if the induced map s'^-1(0) → s^-1(0) on zero-loci is proper. We say that a chart is proper if the projection to the point is such.
We emphasise that the stable almost complex structure is on the virtual bundle T - and not T; it is the orbifold /G which is naturally almost complex. Since we will often work G-equivariantly on itself, rather than passing to the orbifold quotient, this means that we naturally encounter `transverse almost complex structures', cf. Definition <ref>.
We shall write Z for the zero locus s^-1(0); we do not assume that Z is compact, and this is the reason why it is essential to introduce the class of proper morphisms, since the fundamental constructions of Gromov-Witten theory naturally lead to global charts which are proper, or more generally which are proper over a parameter space. Those charts which are not proper arise in an auxiliary though unavoidable way given the approach we take, because the moduli space of pre-stable Riemann surfaces is not proper, and we perform many constructions parametrically over approximations of these moduli spaces. Non-proper charts also appear when one needs to compare the global approach to local Kuranishi descriptions of moduli spaces of curves, following <cit.> (c.f. Appendix <ref>).
We will refer to as the thickening of either Z or of Z/G, and refer to V → as the obstruction bundle. The category is symmetric monoidal, with the product of global charts _1 = (G_1, _1, V_1, s_1) and _2 = (G_2, _2, V_2, s_2) given by
_1 ×_2 ≡ (G_1 × G_2, _1 ×_2, V_1 ⊕ V_2 , s_1 ⊕ s_2),
where V_1 ⊕ V_2 denotes the Whitney sum of the pullbacks of V_1 and V_2 to the product, and s_1 ⊕ s_2 the sum of the sections. The product of morphisms is given by the evident maps, as are the associativity and commutativity structure maps.
We shall later use the fact that the subcategory of proper morphisms is closed under the monoidal structure, as well as the fact that every chart has a canonical diagonal morphism
→×,
given by the diagonal at the level of groups, spaces, and vector bundles. The diagonal map is always proper.
An equivalence f : ' → is a morphism with the property that (i) the induced map (Z',G') → (Z,G) of zero loci is an equivalence of orbispaces (i.e. an isomorphism on quotients and on stabiliser groups) and (ii) the following sequence of vector bundles on ' is exact near the 0-locus:
' →⊕ T ' → T⊕ V' → V.
It is straightforward to define composition of morphisms and to check its associativity. In practice, we will often have a particular Hausdorff topological orbispace M in mind, and will `present' it via a global chart in the sense of equipping a global chart (G,,V,s) with a homeomorphism Z/G → M compatible with the data of the pointwise automorphism groups.
We write
= - V - G
for the `virtual dimension' of . Exactness of (<ref>) implies that this is preserved by equivalences.
For later discussion, it is convenient to decompose equivalences into the following, which include the `germ equivalence', `stabilisation' and `group enlargement' moves of <cit.>.
Every equivalence factors as a composition of the following maps or their inverses:
* A free quotient →/H, with H a normal subgroup of G. This replaces (G,,V,s) by (G/H, /H, V/H,s) where V/H is viewed as a G/H-bundle on /H.
* The induced map →×_G G' associated to an embedding G → G'. This replaces (G,,V,s) by (G', ×_G G', p^*V, p^*s) where p: ×_G G' → is the natural projection.
* The stabilisation map →× W associated to a G-representation. This replaces (G,,V,s) by (G,⊕ W, V ⊕ p^*W, Δ^*s) where p: V→ and Δ is the canonical diagonal section.
* An open embedding ' → of a neighbourhood of Z'. This replaces (G, U, V|_U, s|_U) by (G,,V,s) for U⊂ open containing Z.
A morphism ' → involves a homomorphism ρ: G'→ G of groups. If the morphism is an equivalence, the resulting isomorphism of orbispaces shows that the kernel of ρ acts freely; dividing this out by the first class of equivalence, one reduces to the case of an injection of groups. The second class of equivalence then reduces us to the case where G' = G and the group homomorphism is the identity. At this point, replacing by × V' and changing the G-equivariant map ρ : ' → to (ρ,s'), the exact sequence (<ref>) can be assumed to inject T' ↪ T in a neighbourhood of Z' ⊂'.
Such a map is, up to the fourth kind of equivalence, a stabilisation.
It is possible to work with weaker assumptions. For example, one requires only a complex structure on the formal difference T - V - g, but we prefer to work with separate data to simplify the discussion. This does not result in any significant loss of generality, since stabilising a chart by an appropriate G-representation yields an equivalent chart in which the two separate complex structures are given (see <cit.>).
The localisation of at equivalences is a model for the category of `derived stably almost complex orbifolds', but we will not need to explicitly pass to the localisation in this paper.
Two charts _i = (G, _i, E_i, s_i) are cobordant if there is a G-equivariant cobordism of thickenings _i carrying a stably almost complex G-bundle and section s restricting to the given data over the ends. The cobordism is proper if s^-1(0) is compact.
For the next definition, we use the complexification homomorphism G → G_ which is defined and injective for any compact Lie group.
A transverse complex G-structure on a smooth G-manifold X is a G_-equivariant complex analytic structure on X ×_G G_.
Any G_-orbit then carries its natural integrable complex structure. Morphisms from X to Y in the category of transverse complex G-manifolds are G-maps for which the induced map X×_G G_→ Y×_G G_ is holomorphic.
The category of complex G-manifolds embeds into the category of transverse complex G-manifolds, and the category of transverse complex G-manifolds with finite stabilisers embeds into , compatibly with the subcategory of proper maps.
If X is a complex G-manifold we equip X× G with the diagonal action, where G acts on itself by conjugation (on the submanifold X ×{e} this loses no information about stabiliser groups). X× G then has a natural transverse complex structure, and holomorphic maps of complex manifolds then define transverse complex maps of the associated transverse complex manifolds. For the second statement, a transverse complex G-manifold Z defines a global chart (G,Z,0,0).
In Lemma <ref> one could alternatively replace X by X×g, where g carries the adjoint action. Under the hypothesis that the G-action on X has finite stabilisers this is also the vector bundle over X given by the tangent spaces to the orbits. The resulting transverse complex G-structures define equivalent global charts by passing to a neighbourhood of e∈ G and appealing to germ equivalence.
In line with Lemma <ref>, we will freely talk about G-equivariant `divisors' on transverse complex manifolds and on the associated global charts; such a divisor has an associated G-equivariant complex line bundle.
§.§ Axioms of counting theories
We use the term graded group to refer to either /2- or -graded groups.
A counting theory on global charts consists of the following data:
* a symmetric monoidal contravariant functor E^* (cohomology) from to the category of graded abelian groups;
* a symmetric monoidal covariant functor E_* (homology) from to graded abelian groups, together with a module structure over E^* which we denote by ∩;
* on the subcategory of proper maps, a symmetric monoidal covariant functor E^lf_* (locally finite homology, i.e. Borel-Moore homology) to graded abelian groups, together with a module structure over E^*, and a natural isomorphism E_*() ≅ E^lf_*() for proper charts;
* a class [] ∈ E^lf_() (the virtual fundamental class) for each object;
* an isomorphism of symmetric monoidal functors E^*() ≅ E^lf_ - *() on the subcategory of proper maps of (transverse) almost complex G-manifolds, which maps the virtual fundamental class to the unit; and
* a (commutative rank 1) formal group law L = L_ with coefficients in E^*(pt).
These data are supposed to satisfy the following axioms:
* (Functoriality) The morphisms of homology, cohomology, and locally finite homology groups associated to an equivalence are isomorphisms, and preserve fundamental classes;
* (Products) the virtual class [_1 ×_2] is the image of [_1] ⊗ [_2] under the monoidal structure;
* (Pullback) For each pullback diagram whose base consists of manifolds
f: _0 →_1 covering g: F_0 → F_1, with vertical maps π_0: _0 → F_0 and π_1: _1 → F_1,
in which the maps of groups are isomorphisms, the map g is a proper immersion, and the map of G-manifolds underlying π_1 is a submersion, we have
f_* [_0] = [_1] ∩π_1^*( g_* [F_0])
where we use the identification E^lf_*(F_1) ≅ E^ - *(F_1) to consider the image of [F_0] as a cohomology class on F_1;
* (Chern class) For an almost complex G-manifold M, there is a homomorphism
_G(M) → E^*(M); ↦c̃()
satisfying
c̃(_1 ⊗_2) = L(c̃(_1), c̃(_2))
and for which, if ι: W ⊂ M is the zero-set of an equivariant section of which vanishes transversely, then ι_*[W] = c̃() under the isomorphism E_*^lf(M) → E^(M)-*(M);
* (Normal crossing) Given a diagram of G manifolds with G-transverse complex analytic structures
p: F → M and g: N → M,
in which N is a smooth divisor and p^-1(N) is a strict normal crossings divisor, with top strata F_1, …, F_k that are G-invariant, we have
p^* (g_* [N]) = L([F_1], …, [F_k]) ∈ E^* F.
We clarify the statement that the functor E_* (and similarly for E^lf_*) is a module over E^*: first, note that the diagonal map lifts E^* to a functor valued in graded rings. We then require that E_*() be equipped with a module structure over E^*(), that the morphism E_*() → E_*(') be an E^*(')-module map, and finally that the morphism
E_*(_1) ⊗ E_*(_2) → E_*(_1 ×_2)
be a morphism of E^*(_1) ⊗ E^*(_2)-modules. The first of these can be expressed in formulae as follows: given a map f : ' →, and classes α∈ E^* and β∈ E_* ', we have
f_* ( f^* α∩β) = α∩ f_* β.
A similar formula corresponds to the fact that E^lf_* is a module over the restriction of E^* to proper maps. The G-invariance of the top strata F_i in the final axiom (i.e. the fact that they are not non-trivially permuted) is automatic if G is connected.
Suppose a proper cobordism of charts and ' is induced by taking regular values of a projection → B for some finite-dimensional G-manifold B. Then for any counting theory, the push-forwards of [] and of ['] to E_*^lf() agree.
This follows from the pullback axiom.
It would be more natural to formulate a version of Axiom <ref> that applies when p^-1(N) is a non-strict normal crossing divisor, and which appeals in concrete cases to Proposition <ref>. The resulting expression for p^*g_*[N] is unfortunately not explicit.
In the examples considered below, the `Chern class' and `normal crossing' axioms are closely related; essentially the former implies the latter, by reducing[This can be made precise for K_p(n)-local theories, where we work Borel equivariantly as in algebraic cobordism, but is not strictly true for complex K-theory where we work with equivariant bundles.] to Proposition <ref>. For clarity we have kept both axioms, rather than trying to minimise the hypotheses. One might achieve more formal clarity by modelling the axioms after those for an oriented bivariant cohomology theory in the sense of <cit.>.
By imposing the condition that the (co)-homology theories entering in the definition of a counting theory be symmetric monoidal, we have excluded examples arising from cohomology theories with a commutative but non-associative multiplication. One natural example is given by the Morava K-theories K(n) at the prime 2. We expect that, just as one can understand the failure of these theories to define a commutative product in ordinary cohomology by carefully keeping track of the side along which one performs each product <cit.> and by working in an appropriate category of bimodules over the coefficients of K(n), it should be possible to extend our results to this context.
§.§ Borel equivariant K(n)-local theories
We write |Z for the pair (, ∖ Z). Given a K(n)-local complex oriented (commutative) multiplicative cohomology theory , define
H^*(; ) ≡ H^*_G(Z; ) = H^*(Z ×_G EG; )
H^lf_*(; ) ≡ H^/G -*_G(|Z; ) = H^/G -*(|Z ∧_G EG_+; )
H_*(; ) ≡ H^/G -*_G,c(|Z; ) = H^/G -*(^+|Z ∧_G EG_+; )
to be the Borel equivariant co-homology groups (relative or with compact support), where /G = - G (and ^+ denotes the 1-point compactification). Furthermore, we define the virtual class
[] := e_G(V) ∈ H^lf_*(;)
to be the G-equivariant Euler class of the obstruction space V.
We expect the homology group to be isomorphic to the corresponding (equivariant) generalised Steenrod-Sitnikov homology groups of the zero locus Z. We are not aware of a discussion of this equivariant statement in the literature.
The cohomology ring H^*(; ) is a contravariantly functorial symmetric monoidal functor, the homology ring H_*(; ) is a symmetric monoidal covariant functor, which is a module over cohomology, as is the locally finite homology group H^lf_*(; ), for proper maps. The maps associated to equivalences are isomorphisms which preserve the fundamental class in H^lf_*().
The contravariant functoriality of cohomology is straightforward, and the fact that it is symmetric monoidal follows from the assumption that is multiplicative, and the homotopy equivalence
(Z_1 ×_G_1 EG_1) ×(Z_2 ×_G_2 EG_2) ≅(Z_1 × Z_2 ) ×_G_1 × G_2 E(G_1 × G_2).
The action of cohomology on locally finite homology follows from the continuity property of cohomology and the excision property for homology: every class in H^*_G(Z; ) lifts to some neighbourhood ν Z of Z in , and the desired map is induced by the (relative) cup product
H^*_G(ν Z; ) ⊗ H^*_G(ν Z|Z; ) → H^*_G(ν Z|Z; )
and the isomorphism H^*_G(ν Z|Z; ) ≅ H^*_G(|Z; ). Applying the same argument for compactly supported cohomology yields the action of cohomology on homology.
In order to prove the covariant functoriality of locally finite homology, it will be convenient to first use the Thom isomorphism
H^/G -*_G(|Z; ) ≅ H^-*_G(|Z^g-T ; )
where ^g-T denotes the Thom spectrum of the (virtual) difference between the tangent space T and the Lie algebra of G. Since we work Borel equivariantly, by definition
H^-*_G(|Z^g-T ; ) = π_*(F(EG_+ ∧|Z^g-T, )^G)
We use adjunction to write
F(EG_+ ∧|Z^g-T, )^G = F(EG_+,Σ^-gF(|Z^-T,))^G.
We use the norm map EG_+ ∧ X → F(EG_+,X), which induces an isomorphism on derived fixed-points when X is K(n)-local cf. (<ref>), with X = Σ^-gF(|Z^-T,), and then the Adams isomorphism Y/G = (Σ^-gι_*Y)^G cf. (<ref>) noting that Y = EG_+∧ X is G-free, to obtain
H^lf_*(; ) ≅π_*( EG ∧_G F(|Z^-T ; ) ).
We have a similar description of homology as
H_*(; ) ≅π_*( EG ∧_G F(^+|Z^-T ; ) ),
where we can model F(^+|Z^-T ; ) as the spectrum of sections of the tangent spherical fibration of , which are trivial outside a compact set and on the complement of Z.
Having reformulated the homology theories, we are ready to prove their functoriality, so we consider a map →'. This includes the datum of a G-map →', with the property that Z maps to Z'. By choosing a (complex) G-representation W, and a G-embedding → W ×' whose composition with the projection to ' lifts the map →', Thom collapse defines a G-map
'|Z'^W→|Z^W + T ' - T ,
whenever Z → Z' is proper, which dually induces a map
F(|Z^-T ; ) → F('|Z'^-T '; ).
Finally, the homomorphism G → G' yields a map
EG ∧_G F(|Z^-T ; ) → EG' ∧_G' F('|Z'^-T '; )
which on homotopy groups defines the desired map
H^lf_*(; ) → H^lf_*('; ).
For a non-proper map →', the Thom collapse map in Equation (<ref>) takes value in the 1-point compactification of the Thom space |Z^W + T ' - T, and factors through the 1-point compactification of '|Z'^W. It thus induces a map
H_*(; ) → H_*('; ).
Suppose we have open neighbourhoods Z ⊂ν Z ⊂ and ' ⊂ν Z' ⊂' for which the map →' sends ν Z into ν Z' (and Z into Z'). All the steps above are compatible with the associated excision isomorphisms, which shows that the push-forward is a module map, i.e. that Equation (<ref>) holds. Invariance of the fundamental class under equivalences follows from Lemma <ref>; for all but the first the Euler class is obviously invariant, whilst for the first class of equivalences this uses that if Γ acts freely on M then the Γ-equivariant Euler class of a Γ-bundle W → M is the pullback of the ordinary Euler class of W/Γ→ M/Γ under H_Γ^*(M;) ≅ H^*(M/Γ;).
Finally, the symmetric monoidal structure on both homology and locally finite homology follows from the multiplicativity of the Borel construction formulated in Equation (<ref>), the fact that the module action is compatible with the external product maps from the external multiplicativity of the Adams isomorphism, and the fact that it is a module map (c.f. <cit.>).
A multiplicative complex oriented K(n)-local cohomology theory defines a counting theory in the sense of Definition <ref>.
There are three remaining properties to prove. To prove the formula for the virtual fundamental class of the pullback in Diagram <ref>, it suffices (by stabilising with G-representations) to consider the case when F_0 → F_1 and _0 →_1 are embeddings. Since the diagram is a pullback, the obstruction bundle over _0 is f^*V where V→_1 is the obstruction bundle, so we must compute
f_*(e_G(f^*V)) = f_*(f^*e_G(V) ∩ 1__0)
where 1__0 denotes the unit. Since π_1 is a submersion and the diagram is a pullback, the Thom class of _0⊂_1 is the pullback of [F_0], and the result now follows from (<ref>).
Since by hypothesis is multiplicative and complex-oriented, there are Chern classes for complex line bundles and a formal group law L associated to the complex orientation determined by the equality
c_1(_1 ⊗⋯⊗_k) = L(c_1(_1), …, c_1(_k)).
Considering the context of Diagram <ref>, the image g_* [N] of the fundamental class of N agrees with the first Chern class of the corresponding complex line bundle over the Borel construction. The fact that the inverse image of N is a normal crossings divisor gives an isomorphism between the pullback of this line bundle and the tensor product of the G-equivariant complex line bundles _F_i associated to its top strata. Identifying the virtual fundamental class of F_i with the first Chern class of _F_i, the result now follows from Equation (<ref>).
§.§ Equivariant K-theory
Next, we consider equivariant K-theory, and define[Recall that /G is a (stably) complex orbifold, so the shifts could be suppressed since we work 2-periodically.]
K^*() ≡ K^*_G(Z)
K^lf_*() ≡ K^/G -*_G(|Z)
K_*() ≡ K^/G -*_G,c(|Z).
Note the formal similarity with the definition in Section <ref>; the subscript G does not refer anymore to Borel-equivariant cohomology, but instead to the fact that we are considering G-equivariant vector bundles. The stable complex structure of V, and the choice of section which does not vanish away from Z, yields an equivariant Euler class in K^/G -*_G(|Z) which we define to be the virtual fundamental class of .
The ring structure on cohomology, and its action on the homology theories is defined in the same way as for the Borel equivariant theories using the continuity of cohomology. The functoriality of cohomology is easy to establish. For homology (or locally finite homology, analogously), we proceed as follows: we first identify
K^/G -*_G,c(|Z) ≅ K_G,c(|Z^ T - g).
The work of Atiyah and Singer <cit.> identifies every element of the right hand side with the symbol of a G-equivariant transversally elliptic operator on , relative those on the complement of Z. This point of view will be essential in the following discussion, because the notion of G-equivariant elliptic operator, which is usually used to construct pushforwards in equivariant K-theory, is not relevant to our context, since it corresponds to K_G,c(|Z^ T ).
Continuing with our discussion of the functoriality of homology, consider the embedding G → G × G' given by the graph of the homomorphism G → G'. We choose a complex G' representation W', and a lift of the graph of the map →' to a G × G' equivariant embedding
×_G(G × G') →× W' ×',
where × W' ×' is equipped with the product action of G × G'.
We have natural maps
K_G,c(|Z^ T - g) ≅ K_G × G',c((×_G(G × G')|Z ×_G(G × G') )^ T ×_G(G × G') - g - g' )
→ K_G × G',c((× W' ×' | Z ×_G(G × G') )^T - g + T W' ×' - g')
→ K_G × G',c((× W' ×' |Z × W' × Z' )^T - g + T W' ×' - g')
→ K_G × G',c((×' |Z × Z' )^T - g + T ' - g')
where the isomorphism in the top row is given by the Thom homomorphism of <cit.>, the first vertical arrow by <cit.>, the second by the fact that the inclusion of the image of Z in Z' implies that complexes which are acyclic in the complement of Z ×_G(G × G') are necessarily acyclic in the complement of W × Z', and the last by the (inverse) Thom isomorphism associated to the inclusion of the origin in W' (this uses that W' was chosen to be complex).
If G acts on and G' acts on ', both with finite stabilisers, the map ×_G (G× G') →×' arising from the graph of a G-equivariant map →' can still fail to be an embedding but be only finite-to-one; this is why we need to further introduce the representation W' above.
Note that, while the action of G' on a representation W' cannot be locally free if G' is not finite or W' is not trivial, the product W' ×' has a locally free action, so that the difference T W' ×' - g' still gives the tangent bundle to T_G' W' ×'.
Next, we consider the projection ×' →', with fibre the G-manifold . At each point in ', an element of K_G × G',c((×' |Z × Z' )^T - g + T ' - g') determines a G-transversely elliptic operator on , with symbol invertible away from Z (as well as outside a compact set), so that the Atiyah-Singer index for transversely elliptic operators <cit.> assigns to each such operator a virtual vector space given by the G-invariant part of the kernel and co-kernel. (Explicitly one can take the kernel and cokernel of the associated Spin^c-Dirac operator.) Performing this construction in families <cit.> yields the family index map
K_G × G',c((×')^T - g + T ' - g') → K_G',c( '| Z'^T ' - g').
We summarise the above discussion as follows:
The complex structures on T - g and T ' - g' determine a map
K^/G -*_G,c(|Z) → K^'/G' -*_G',c('|Z')
associated to each morphism →' of global Kuranishi charts. If this morphism is an equivalence, the image of [] under this map agrees with ['].
The first point which is missing from the above discussion is the independence of our map from the choice of representation W'; this follows in a standard way by using the direct sum of representations to reduce the problem to showing that the push-forward maps associated to an embedding of representations W'_0 ⊂ W'_1 agree, which is an application of the Thom homomorphism.
The fact that equivalences preserve fundamental classes follows by checking that they are preserved by each of the elementary moves of Lemma <ref>; for the first two moves, this is a consequence of the compatibility of (equivariant) Thom classes with change of groups; for the next, it follows from the multiplicativity of such classes and their compatibility with Thom isomorphisms; and in the last case it follows from locality (i.e. naturality of the Euler class).
The theory of Chern classes and the formal group for K-theory are standard. We now discuss the last two axioms:
Given a pullback diagram (<ref>), we have f_* [_0] = [_1] ∩π_1^*( g_* [F_0]).
This is a variant of results which have already been used in the study of algebraic K-theoretic Gromov-Witten invariants, e.g. <cit.>. The map F_0 → F_1 is a G-embedding of complex manifolds, hence has a push-forward in K_G-theory coming from the Thom class of the normal bundle, and the K-theory Thom class is given by the Clifford complex. Since we have a pullback diagram, the map _0 →_1 is also an embedding of G-manifolds, so the construction of the push-forward in transverse K-theory follows the more elementary template from Section <ref> and is again associated to the Clifford complex of the normal bundle. The result then follows from the naturality of the Euler class.
A complex analytic normal crossing divisor has a fundamental class, which is compatible with pullback, given by applying the multiplicative formal group law to the fundamental classes of its top strata.
This was explained in Section <ref> and Example <ref>.
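For instance (an illustrative two-component case, using the common normalisation in which the K-theoretic first Chern class is c_1(L) = 1 - [L^\vee], so that the multiplicative formal group law reads F(x,y) = x + y - xy): if the divisor has two smooth top strata D_1, D_2 meeting in normal crossings, its fundamental class is
\[
[D_1 \cup D_2] \;=\; F\big([D_1],[D_2]\big) \;=\; [D_1] + [D_2] - [D_1]\cdot [D_2],
\]
with the correction term supported on the double stratum D_1 \cap D_2.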
For a finite group G, K_G(pt) = R(G) carries an integer-valued trace, defined by taking the dimension of the invariants; since characters are orthonormal this induces a non-degenerate pairing on R(G), which can be interpreted as a Poincaré duality statement for the K-theory of the quotient of the point by G, considered as an orbifold. For example, let G=C_2 be the cyclic group of two elements. Then R(G) is free with basis 1, x, where x is the difference of the sign and the trivial representation. The pairing has matrix
\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}
in this basis, and is unimodular.
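To spell out the computation behind this matrix: writing \sigma for the sign representation, so that x = \sigma - 1, the trace of a virtual representation is the dimension of its invariants, and
\[
\operatorname{tr}(1) = 1, \qquad \operatorname{tr}(x) = 0 - 1 = -1, \qquad x^2 = (\sigma - 1)^2 = 2 - 2\sigma = -2x, \ \text{so } \operatorname{tr}(x^2) = 2 .
\]
These are the entries displayed above, and the determinant 1\cdot 2 - (-1)^2 = 1 gives unimodularity.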
The Borel equivariant K-theory in this case is given by completion at the augmentation ideal, R(G)^∧_I = ℤ⟨1⟩ ⊕ ℤ_2⟨x⟩, where ℤ_2 denotes the 2-adic integers. This has no non-degenerate pairing associated to an integer-valued trace. By contrast, for the first Morava K-theory K_2(1) at the prime 2, we have H^*(BG;K_2(1)) = ℤ/2⟨1⟩ ⊕ ℤ/2⟨x⟩ and the trace induces the non-degenerate pairing
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.
This illustrates that there is no Poincaré duality isomorphism for orbifolds in Borel-equivariant K-theory (even though there is one prime at a time), and that, for K-theory, working Borel-equivariantly does not yield a counting theory.
Shaoyun Bai suggested that, by working consistently at the level of the orbifold /G rather than G-equivariantly on , one could replace the use of transverse K-theory with push-forward maps for orbifold K-theory (for representable maps these are constructed in <cit.>). Again the theory is underpinned by a suitable family index theorem.
§.§ Standing notation
We will write for a counting theory on , and denote by E^*() (and E_*(), E_*^lf() etc) the associated (co)homology groups on global charts; this notation encompasses both the cases H^*(;) arising from a Morava-local theory and K^*() arising from complex K-theory. Furthermore, if is a complex orbifold (in particular, a Deligne-Mumford moduli space of stable curves), we will write E^*() for the output of the counting theory on a global chart () = (G,,0,0) arising from a presentation of = /G as a global quotient. This both simplifies notation and reflects the fact that the invariants we consider are naturally associated to the underlying complex orbifold. (Thus, K^*() really denotes the orbifold K-theory K_G^*(), but we will not labour that.)
A compact symplectic 2n-manifold X has an almost complex structure which is canonically defined up to homotopy. It defines a global chart with G={e} and trivial obstruction bundle. The axioms of a counting theory include a duality isomorphism
E^*(X× X) ≅ E_4n-*(X× X)
and the diagonal Δ⊂ X× X defines a class in E^2n(X× X). We will write Δ for the class in either homology or cohomology, depending on the context.
§ GLOBAL KURANISHI CHARTS FOR GROMOV-WITTEN THEORY
In this section we construct global charts for moduli spaces of stable J-holomorphic maps from nodal curves to a symplectic manifold X with tame almost complex structure J in a fixed homology class β∈ H_2(X;). The thickening will admit a submersion over a smooth quasi-projective open in the space of stable maps to some projective space, through which the stabilisation map →_g,h to the moduli space of domains will factor. The resulting structure →→_g,h, with the first map a submersion to a smooth G-manifold and the second a map of G-spaces with trivial obstruction bundle, naturally fits with the axioms of a counting theory.
* Sections <ref> to <ref> construct global charts for the moduli space of holomorphic maps from curves of a fixed genus to a target symplectic manifold, representing a prescribed homology class. The output of the construction has total space which is naturally a G-almost complex manifold, which can be made transversely almost complex by the general trick from Lemma <ref>.
* Sections <ref> to <ref> generalize the global Kuranishi chart construction
so that it can be applied to related moduli spaces arising either in comparing choices or establishing axioms.
* Section <ref> shows that the global Kuranishi chart construction for _g,h(X,J,β) does not depend on choices up to equivalence.
* Section <ref> discusses the forgetful map.
* Section <ref> respectively <ref> construct global Kuranishi charts for split respectively self-glued curves.
* Finally, Section <ref> explains how to incorporate stabilisation maps to moduli spaces of domains, when these are themselves presented as global quotient orbifolds.
To aid readability, the following points are deferred to an Appendix:
* Section <ref> states the gluing theorem needed for the construction of these global charts.
* Sections <ref> and <ref> compare our construction (locally) with Fukaya, Oh, Ohta, and Ono's approach which relies on stabilising divisors.
§.§ A Preliminary Lemma
We will consider spaces of perturbed holomorphic curves, where the perturbations to the -operator are drawn from large finite-dimensional subspaces of the universal space of perturbations. The following definition captures this recurring set-up:
Let G be a compact Lie group acting smoothly on a manifold B, and π : V B be a smooth G-vector bundle.
A finite dimensional approximation scheme
(V_μ,λ_μ)_μ∈
for C^∞_c(V)
is a sequence of finite dimensional G-representations (V_μ)_μ∈
and a sequence of
G-equivariant linear maps:
λ_μ : V_μ→ C^∞_c(V), μ∈
to the space of smooth sections of V,
satisfying
* V_μ is a subrepresentation of V_μ+1 for each μ,
* λ_μ|_V_μ-1 = λ_μ-1 for each μ and
* the union of the images λ_μ(V_μ) is dense in C^∞(V)
with respect to the C^∞_loc-topology.
A finite dimensional approximation scheme exists.
We will first prove this when the base B is closed.
Choose a G invariant metric and compatible connection ∇ on V.
Let Δ : C^∞(V) → C^∞(V) be the Laplacian given by the trace of ∇^2.
We define V_μ to be the sum of the first μ real non-negative eigenspaces of Δ
and λ_μ : V_μ↪ C^∞(V) the natural inclusion map.
The G action preserves V_μ and λ_μ.
Also, since Δ is a self-adjoint elliptic operator, the sum of these eigenspaces is C^∞-dense
in C^∞(V) and hence the union of the images of the λ_μ is too.
Now let us prove this lemma in the case where B is the interior of a compact manifold with boundary B̄, with the property that the G-action and the G-bundle V extend to B̄ and a bundle V̄ respectively.
We can glue two copies of B̄ along their boundaries giving the double B_2,
and V̄ also doubles to a G-vector bundle V_2.
Now construct G-equivariant subspaces V_μ,2⊂ C^∞(V_2)
whose union is dense in C^∞(V_2), and let λ_μ,2 : V_μ,2→ C^∞(V_2)
be the natural inclusion maps.
Choose a sequence of compactly supported
bump functions ρ_μ : B → [0,1], μ∈, so that ∪_μ∈ρ_μ^-1(1) = B.
Then the maps
λ_μ : V_μ,2→ C^∞_c(V), λ_μ(v) := ρ_μ·(λ_μ,2(v)|_B)
satisfy the desired properties.
Finally, if B is a general open manifold, then it is a countable union of open subsets (B_j)_j ∈
whose closures are codimension 0 submanifolds with boundary.
Let λ_μ,j : V_μ,j→ C^∞_c(V|_B_j) ⊂ C^∞_c(V)
be G-equivariant linear maps satisfying properties
(1)-(3) for V|_B_j.
Then the sums λ_μ := ⊕_j=1^μλ_μ,j, μ∈,
satisfy the desired properties for V.
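As a toy instance of the closed-base construction above (with all choices made explicit, and not needed elsewhere): take B = S^1, the trivial bundle V = S^1 \times \mathbb{R} with its flat connection, and G trivial, so that \Delta = -d^2/d\theta^2. Up to reindexing, the eigenspace construction of the proof produces
\[
V_\mu = \operatorname{span}\{\, 1,\ \cos(k\theta),\ \sin(k\theta) \ :\ 1 \le k \le \mu \,\}, \qquad \lambda_\mu : V_\mu \hookrightarrow C^\infty(S^1),
\]
and density of \bigcup_\mu \lambda_\mu(V_\mu) in the C^\infty-topology is the classical fact that trigonometric polynomials approximate smooth periodic functions together with all of their derivatives.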
§.§ Moduli Spaces of Curves in Projective Space
Fix g,h,d ∈.
Let _g,h,d⊂_g,h(^d-g,d)
be the subspace of stable nodal genus g curves ϕ : Σ→^d-g of degree d with h marked points
so that
*
we have that H^1(ϕ^* O(1)) = 0 and
* the automorphism group of each map is trivial.
By the Euler exact sequence, such curves ϕ satisfy
the regularity condition H^1(ϕ^*T^d-g) = 0, which in turn implies that _g,h,d is a smooth quasi-projective variety. This would suffice in this section, but the stronger vanishing condition
is useful later, cf. Lemma <ref>.
We define π_g,h,d : _g,h,d_g,h,d
to be the universal curve
and ^o_g,h,d the complement of its marked points and nodes.
We let _g,h,d|_ϕ (resp. ^o_g,h,d|_ϕ) be the fiber of _g,h,d (resp. ^o_g,h,d) over ϕ∈_g,h,d.
We let ω__g,h,d/_g,h,d
be the relative dualizing sheaf of this universal curve.
The group GL_d-g+1()
acts on all of these spaces making
π_g,h,d equivariant.
It can be shown that each point ϕ∈_g,h,d admits a neighborhood U_ϕ
equipped with a submersion U_ϕ→_g,h+r for some r.
For the construction of the thickening of moduli spaces of maps, we shall need a variant of this construction: define ^(2)_g,h,d⊂_g,h((^d-g)^2,(d,d))
to be the subspace of stable nodal genus g curves u : Σ→ (^d-g)^2 of bidegree (d,d) with h marked points
so that if we compose u with either projection map to ^d-g,
we get an element of _g,h,d.
This can be seen as a space of pairs of maps (u,v) where u,v : Σ→^d-g are each in _g,h,d. There is a natural diagonal GL_d-g+1() action on ^(2)_g,h,d
given by sending (u,v) to (u · g, v · g) for each g ∈ GL_d-g+1().
Let Δ : _g,h,d^(2)_g,h,d be the diagonal embedding
sending u : Σ^d-g to the map (u,u).
Let H_g,h,d_g,h,d be the vector bundle whose fiber over u
is H^0(u^*T ^d-g).
Let Π : ^(2)_g,h,d_g,h,d
be the projection map to the first factor sending (u,v) to u.
On a neighborhood U_g,h,d⊂^(2)_g,h,d of the image of Δ,
this map is a submersion
and the tangent space of the fiber of Π|_U_g,h,d over a diagonal element (u,u), u : Σ^d-g
is H^0(u^*T ^d-g).
As a result, after shrinking U_g,h,d,
we can choose a U(d-g+1)-equivariant fiber preserving diffeomorphism
Δ : H_g,h,d→ U_g,h,d
whose restriction to the zero section is Δ.
§.§ Domain Metrics
We wish to put a coherent family of metrics on all
domains of appropriate maps from a nodal curve to
a symplectic manifold. We shall introduce a general framework for such a construction in this section, which we will later specialise to different settings. We thus find it convenient to consider a compact Lie group G, and write G_ for its complexification.
A G_-equivariant family of nodal curves
is a flat family of nodal curves π_ : → (possibly with marked points defined by sections of π_)
where is a smooth quasi-projective variety and where G_
acts on the domain and codomain of π_ making this map equivariant.
The marked point sections are also required to be G_-equivariant.
We define ^o to be the complement of the nodes and marked points.
We have that _g,h,d→_g,h,d
defined in Section <ref>
is a G_-equivariant family of nodal curves where G = U(d-g+1)
and G_ = GL_d-g+1().
Let (X,ω) be a closed symplectic manifold, let
β∈ H_2(X;).
Let J be an ω-compatible almost complex structure.
Let π_ : →
be a G_-nodal family as in Definition <ref>
with h marked point sections p_1,⋯,p_h : →.
Gromov compactness implies there is a positive lower bound on the energy (i.e. the integral of ω) over all unstable irreducible components Σ' ⊂Σ of J-holomorphic stable maps u : Σ→ X in class β. We denote this bound by e > 0.
Let
_ be the infinite dimensional space of
all tuples (ϕ,u) where ϕ∈ and u : |_ϕ X
is a smooth map representing β with the property that
* ∫_Σ' u^*ω≥ 0 for each
irreducible component Σ' of the domain of u;
* and ∫_Σ' u^*ω≥ e if Σ' is an unstable component of Σ.
The topology on _ is defined to be the induced
Hausdorff topology on graphs of such maps u viewed as subsets of × X.
Let _ be the pullback of to _.
There is a natural continuous G_ action on
_ and _ induced from the one on and
making the natural map π_ : __
equivariant.
A fiberwise metric on _
is a continuous map
μ : _×___→_≥ 0
so that the restriction of μ to each irreducible component of each fiber |_ϕ, ϕ∈_,
is a distance metric induced from a smooth Riemannian metric in the conformal class determined by the complex structure.
Let ω__/_(p_1,⋯,p_h) be the pullback of the dualizing
line bundle ω_/ to _.
A consistent domain metric for consists of a fiberwise metric on _
together with a Hermitian metric on ω__/_(p_1,⋯,p_h)
with the property that both are invariant under the
G_-action on _.
Here p_1,⋯,p_h are the marked point sections.
We wish to equip the fibres of π_ with such a consistent domain metric. We emphasise that this requires equivariance under the non-compact group G_, hence cannot be achieved by a naive averaging argument. The construction will go via family of metrics on the fibres of a certain algebraic moduli space, which are only required to be invariant under a compact group. We give that construction next, and return to the construction of the consistent domain metric for in Lemma <ref>.
Let d' ∈ and let r ≫ 1 be a large natural number relative to g,h,d'.
Let be the substack of _g,h+r(^d',d') satisfying the following
two properties:
* Each element of is automorphism free;
* If we remove the last r marked points, then the resulting map to ^d' is still stable.
In other words, the forgetful map forgetting the last r marked points collapses no bubbles.
Let be the corresponding universal curve over .
Let q_1,⋯,q_h+r : → be the marked point sections,
let ^o be the complement of the nodes and marked points, and
set ω_/ to be the line bundle over
whose restriction to a fiber |_ϕ, ϕ∈
is the dualizing line bundle of that curve.
We will sometimes think of the marked point sections q_1,⋯,q_h+r as divisors inside
given by their images.
Let U ⊂ be a subset.
A unitarily invariant fiberwise metric for
is a continuous map
μ : |_U ×_|_U →_≥ 0
so that the restriction of μ to each irreducible component of each fiber |_ϕ, ϕ∈ U,
is a distance metric induced from a smooth Riemannian metric compatible with the complex structure.
A consistent domain metric on consists of a fiberwise metric on
together with a Hermitian metric on ω_/(q_1,⋯,q_h)
with the property that both are invariant under the
U(d'+1)-action on and so that
if ϕ : Σ→^d'
and ϕ̌ : Σ→^d'
are elements of satisfying ϕ(σ) = ϕ̌(σ) for each σ∈Σ
and where the first h marked points for these curves are identical,
then the corresponding metrics associated to ϕ and ϕ̌ agree.
In other words, a consistent domain metric is a fiberwise metric and a Hermitian metric on the dualizing bundle
twisted by the first h marked points which only depend on the first h marked points.
So, if we move the last r marked points around, these metrics do not change.
admits a consistent domain metric.
Choose an embedding of varieties Q : ↪^N
and let μ : ×_→_≥ 0
be the induced fiberwise metric on .
Choose a Hermitian metric ν on ω_/(q_1,⋯,q_h).
We wish to modify both of these metrics so that they are U(d'+1)-invariant
and so that they do not depend on the last r marked points.
We will first modify these metrics so that they do not depend on the last r marked points.
By an averaging argument, we can assume that both of these metrics do not change
when permuting the last r marked points.
Consider the universal curve _g,h→_g,h.
Let P : →_g,h be the natural projection map sending a curve to its domain
and then collapsing all the unstable bubbles on this domain.
For each ϕ∈, let Γ_ϕ : |_ϕ→_g,h×^d'
be the map sending σ to (P(σ),u(σ)) after identifying |_ϕ with the domain of u.
Since all elements of are automorphism free, one can always canonically identify the domain of ϕ∈
with |_ϕ.
For each ϕ∈,
choose a codimension two hypersurface D_ϕ
in _g,h×^d' and a neighborhood V_ϕ⊂ of ϕ in
so that
* q_i+h(ϕ) ∈ D_ϕ for each i=1,⋯,r
(recall that q_1,⋯,q_h+r : → are the marked point sections).
* For each ϕ' ∈ V_ϕ,
we have that D_ϕ intersects Γ_ϕ'
transversally so that there exist additional marked points w_1^ϕ',⋯,w_r^ϕ'
in Γ_ϕ'^-1(D_ϕ)
so that q_1(ϕ'),⋯,q_h(ϕ'),w_1^ϕ',⋯,w_r^ϕ'
together with the map ϕ' gives an element of .
* V_ϕ deformation retracts onto ϕ.
An element ϕ”∈ is called special if there exists ϕ' ∈ V_ϕ
together with marked points w_1^ϕ',⋯,w_r^ϕ'∈Γ_ϕ'^-1(D_ϕ)
so that ϕ” = ϕ' as maps, but the domain of ϕ” has marked points
q_1(ϕ'),⋯,q_h(ϕ'),w_1^ϕ',⋯,w_r^ϕ'.
Let V'_ϕ⊂ be the subspace of special curves
and V”_ϕ⊂ the connected component of V'_ϕ containing ϕ.
Fix a deformation retraction of V_ϕ to {ϕ}.
We have a natural projection
Π_ϕ : V_ϕ→ V”_ϕ
given by sending the curve ϕ' ∈ V_ϕ, ϕ' : Σ→^d'
to the unique curve Π_ϕ(ϕ') : Σ→^d' in V”_ϕ
satisfying Π_ϕ(ϕ')(σ) = ϕ'(σ),
and with the same first h marked points in Σ.
Such a curve exists and is unique for the following reason:
consider the path of curves (ϕ_t)_t ∈ [0,1] in V_ϕ
connecting ϕ and ϕ' coming from the
deformation retraction of V_ϕ onto {ϕ}.
Since ϕ_t is transverse to D_ϕ for each t ∈ [0,1]
we get a unique continuous map e_i : [0,1] →
with e_i(t) ∈|_ϕ_t,
satisfying e_i(0) = q_i+h(ϕ) and mapping via ϕ_t to D_ϕ,
for each i = 1,⋯, r.
Then the (h+i)th marked point for Π_ϕ(ϕ') is e_i(1) for each i=1,⋯,r.
We let μ_ϕ : |_V”_ϕ×_V”_ϕ|_V”_ϕ→_≥ 0 be
the restriction of μ.
Similarly let ν_ϕ be the restriction of ν
to ω_/(q_1,⋯,q_h)|_|_V”_ϕ.
Let μ̃_ϕ and ν̃_ϕ be the pullbacks via Π_ϕ
of μ_ϕ and ν_ϕ respectively to |_V_ϕ.
Then by construction, these pulled-back metrics do not depend on the last r marked points
among curves in V_ϕ.
Choose a partition of unity (ρ_ϕ)_ϕ∈
subordinate to the cover (V_ϕ)_ϕ∈.
Then we have a fiberwise metric ∑_ϕ∈ρ_ϕμ̃_ϕ on the universal curve,
as well as a Hermitian metric ∑_ϕ∈ρ_ϕν̃_ϕ
on ω_/(q_1,⋯,q_h).
These metrics have the property that they do not depend on the last r marked points.
By an averaging argument we can make such metrics U(d'+1) invariant too.
Therefore we have constructed a consistent domain metric.
Returning to Definition <ref>, we then have:
admits a consistent domain metric.
Choose a Hermitian line bundle L → X whose curvature Ω is close to a multiple of ω.
After possibly replacing L with a tensor product L^⊗ k for some large k,
we can assume that u^*L is basepoint free for each (ϕ,u) ∈_.
For instance this can be achieved when ∫_Σ' u^*Ω≥ 0 for each (ϕ,u) ∈_
and each irreducible component Σ' ⊂_|_ϕ and where this integral is > 2g -2 if Σ' is an unstable irreducible component.
Let d' := Ω(β)+1.
Then for each (ϕ,u) ∈_,
u^*L is basepoint free and so a unitary basis (f_0,⋯,f_d_u) defines a map
_|_ϕ→^d_u, σ↦ [f_0(σ),⋯,f_d_u(σ)]
where d_u := (H^0(u^*L)) ≤ d'. Note that, while there is no uniform way of choosing these unitary bases over the entire space of maps, the set of possible choices depends only on the G_ orbit, and admits a transitive action of the unitary group.
Since we have a linear embedding ^d_u↪^d'
given by the first d_u+1 coordinates,
we get a natural map ψ_ϕ,u : _|_ϕ^d'.
We now add r marked points to _|_ϕ making it automorphism free
and hence ψ_ϕ,u represents an element of .
Here r can be chosen to be independent of ψ.
Choose a consistent domain metric D on
by Lemma <ref>.
This gives us a metric μ_ψ on |_ψ and a Hermitian metric ν_ψ on ω_|_ψ(q_1,⋯,q_h) for each ψ∈. The fact that μ_ψ and ν_ψ are unitarily equivariant implies that the resulting metrics μ_ϕ, ν_ϕ on _|_ϕ are independent of the choice of unitary basis.
These metrics do not depend on the choice of additional r
marked points.
Such metrics vary continuously with respect to ϕ
and hence define consistent metric data on _ since, as noted above, the set of framings depends only on the isomorphism class of the map to X, yielding G_-invariant data.
§.§ Basic Construction of Global Kuranishi Charts
Let (X,ω) be a closed symplectic manifold, let
β∈ H_2(X;) and let g,h,d ∈.
For now, d is arbitrary but later on we will fix it.
Let J be an ω-tame almost complex structure.
To avoid clutter, we write := _g,h,d,
:= _g,h,d, ^o := ^o_g,h,d,
π := π_g,h,d and
^(2) := ^(2)_g,h,d.
Also let H_g,h,d, U_g,h,d and Δ be as in Equation <ref>.
These manifolds are naturally G-manifolds where G := U(d-g+1).
For any element ϕ in one of these manifolds and any g ∈ G
we write ϕ· g for the corresponding element acted on by g.
Let Y := Ω^0,1_^o/⊗_ TX
be the G vector bundle over ^o × X whose fiber over a point (c,x) ∈^o × X is the space of anti-holomorphic maps from T_c(^o|_π(c)) to T_x X.
Choose a finite dimensional approximation scheme (V_μ,λ_μ)_μ∈
for C^∞_c(Y) as in Definition <ref>.
We define the pre-thickened moduli space
^ = ^_g,h,d(β,V_μ,λ_μ)
to be the space of triples (ϕ,u,e),
where (ϕ,u) ∈_ (Definition <ref>) and e ∈ V_μ, satisfying the following equation:
∂_J u|_^o|_ϕ + (λ_μ(e)) ∘Γ_u =0
where
Γ_u : ^o|_ϕ→^o × X, Γ_u(σ) := (σ,u(σ))
is the graph map.
The space ^ carries a topology coming from the natural topology on V_μ and the Hausdorff distance topology
on the graphs given by the closure of the image of Γ_u in × X. Whilst _ was infinite-dimensional, imposing the perturbed Cauchy-Riemann equation (<ref>) means that ^ is a finite-dimensional space.
The group G acts on ^
by sending
(ϕ,u,e) ↦ (ϕ· g,u · g,e · g)
for each g ∈ G where u · g : |_ϕ· g→ X is the composition of the natural map |_ϕ· g· g^-1|_ϕ with u.
Note that every point in the region where e = 0 is fixed by the diagonal circle S^1 ⊂ G. When constructing the actual thickening of the moduli spaces of curves, we will enlarge ^, with one effect being that the stabiliser groups all become finite.
The pre-obstruction bundle
E^ := E^_g,h,d(β,V_μ,λ_μ)
over ^
is the direct sum of the pullback of
H_g,h,d to ^
with the trivial bundle V_μ.
The group G acts on H_g,h,d
in the natural way and hence we have a natural G-action on E^ coming from the action above as well as the action on V_μ.
The virtual dimension of _g,h(X,β,J) is not equal to (^) - (E^).
To proceed with our construction of the genuine thickening of the moduli spaces of curves, we need to imitate the main construction of <cit.>, and realise ^d-g as the (projectivisation) of a space of holomorphic sections of a line bundle on the domain of the perturbed pseudo-holomorphic maps that we are considering. In order to formulate this precisely, we need some more notation.
A choice of line bundle data associated to (g,h) consists of a triple = (L,k,𝔇)
where
* L is
a Hermitian line bundle
over X with curvature -2π i Ω where Ω tames J,
* k is a large integer,
* 𝔇 is a consistent domain metric for as in Definition <ref>.
For each (ϕ,u,e) ∈^,
define
L_u, to be the Hermitian line bundle over |_ϕ
equal to
L_u, = (ω__g,h,d/_g,h,d(p_1,⋯,p_h)|_ϕ⊗ u^* L)^⊗ k
where ω__g,h,d/_g,h,d(p_1,⋯,p_h) is the relative dualizing bundle
of _g,h,d→_g,h,d and p_1,⋯,p_h are the divisors in _g,h,d
corresponding to the marked points.
The Hermitian structure ⟨ -,-⟩ on L_u, comes from the Hermitian structure on L and the Hermitian structure on ω__g,h,d/_g,h,d (p_1,⋯,p_h)|_ϕ = ω___g,h,d/__g,h,d(p_1,⋯,p_h)|_(ϕ,u)
coming from 𝔇. The choice of 𝔇 also equips each fibre |_ϕ with a volume form Ω_|_ϕ.
We define:
d = d_ := k(⟨ [Ω], β⟩ + 2g - 2 + h),
which is the degree of L_u, (previously d was some unspecified integer, but now we fix it via this formula).
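For orientation, with purely illustrative numbers (these are not choices made anywhere else in the text): if \langle [\Omega],\beta\rangle = 5, g = 1, h = 2 and k = 3, then
\[
d = k\big( \langle [\Omega], \beta\rangle + 2g - 2 + h \big) = 3\,(5 + 0 + 2) = 21,
\]
so the framings below produce maps to \mathbb{P}^{d-g} = \mathbb{P}^{20} and the relevant unitary group is U(d-g+1) = U(21).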
A holomorphic -framing on (ϕ,u,e) ∈^ is a complex basis F = (F_0,⋯,F_d-g) of H^0(L_u,) for which the Hermitian matrix H_F of inner products of basis elements, with (i,j)-th entry
∫_|_ϕ⟨ F_i, F_j⟩Ω_|_ϕ,
has positive eigenvalues.
We call this a unitary -framing if H_F is the identity matrix (i.e. if F is a unitary basis).
The underlying L^2-inner product featuring in (<ref>) is intrinsic, i.e. independent of the choice of holomorphic frame, since the metric coming from D is invariant under the whole of G_. That invariance also gives the following basic naturality property of the framing matrices:
For any A ∈ GL_d-g+1(), we have H_{F · A} = A^* H_F A.
This holds because the consistent domain metric 𝔇 for is invariant under the full group GL_d-g+1() = G_.
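A short check of this transformation rule, with the convention that the Hermitian pairing is conjugate-linear in its first slot, and using the G_\mathbb{C}-invariance of the domain metric to compare the volume forms on the two domain curves:
\[
(H_{F\cdot A})_{ij} = \int_\Sigma \Big\langle \sum_k F_k A_{ki},\ \sum_l F_l A_{lj} \Big\rangle\, \Omega_\Sigma = \sum_{k,l} \overline{A_{ki}}\,(H_F)_{kl}\,A_{lj} = (A^* H_F A)_{ij},
\]
where \Sigma denotes the common domain curve. In particular H_{F\cdot A} is again positive, and if H_F is the identity then H_{F\cdot A} is the identity precisely when A is unitary.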
We shall now make use of the following construction: starting with a curve ϕ∈, whose domain curve Σ_ϕ is tautologically equipped with a map
ϕ : Σ_ϕ^d-g,
each -framing determines a possibly different map
ϕ_F : Σ_ϕ^d-g, ϕ_F(σ) := [F_0(σ),⋯,F_d-g(σ)].
Since by taking k≫0 we can ensure that the bundle L_u, is very ample on all components of Σ, the map ϕ_F is injective, which means that it is automorphism-free. In particular, ϕ_F defines another element of the space .
The thickened moduli space
= _g,h,d(β,V_μ,λ_μ)
is the space of quadruples (ϕ,u,e,F)
where
(ϕ,u,e) ∈^ and
F is a holomorphic framing of (ϕ,u,e)
satisfying the property that the pair (ϕ,ϕ_F) lies in the previously fixed neighbourhood U_g,h,d of the diagonal.
The obstruction bundle
E = E_g,h,d(β,V_μ,λ_μ)
is the direct sum of a copy of the space of (d-g+1) × (d-g+1) Hermitian matrices with the pullback of E^ to via the natural projection map forgetting the framing.
Explicitly, the fiber of the obstruction bundle E over (ϕ,u,e,F) is H_g,h,d|_ϕ⊕ V_μ⊕.
The next step is to define a G-equivariant section of the obstruction bundle. We have an exponential map exp from the space of Hermitian matrices to the space of Hermitian matrices with positive eigenvalues, and this map is a diffeomorphism.
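Concretely, and as a standard linear-algebra reminder rather than anything particular to this construction: diagonalising a Hermitian matrix as H = U\,\mathrm{diag}(\lambda_1,\dots,\lambda_n)\,U^* with the \lambda_i real,
\[
\exp(H) = U\,\mathrm{diag}\big(e^{\lambda_1},\dots,e^{\lambda_n}\big)\,U^*,
\]
which is Hermitian with positive eigenvalues, and taking logarithms of the eigenvalues inverts the map. In particular \exp^{-1}(H_F) = 0 exactly when H_F is the identity, i.e. when the framing F is unitary, which is how the section defined below detects unitary framings.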
The global Kuranishi chart associated to _g,h(X,J,β) is
= _g,h,d(β,V_μ,λ_μ) = (G,,E,s),
where the section of the obstruction bundle E over is given by
s : E, s(ϕ,u,e,F) := (Δ^-1((ϕ,ϕ_F)),e, exp^-1(H_F)) ∈ E|_ϕ = H_g,h,d|_ϕ⊕ V_μ⊕.
Note that the above definition currently involves some abuse of terminology, as we have not explained in any sense what our construction has to do with _g,h(X,J,β). This is remedied by the next result, where as in <cit.>, it is crucial to the final point (`relative smoothness') that elements of p^-1(ϕ) are maps with a fixed domain Σ = |_ϕ, where p : is the natural forgetful map taking (ϕ,u,e,F) ↦ϕ.
Assuming that k and μ have both been chosen sufficiently large, the space is a topological manifold in a neighbourhood of s^-1(0), on which G acts with finite stabilisers and finitely many orbit types. Moreover, we have a natural homeomorphism
s^-1(0) / G = _g,h(X,J,β)
which is an isomorphism of orbispaces (i.e. respects stabiliser groups),
and the natural forgetful map p is a topological submersion with a C^1_loc-structure.
We discuss the proof of the homeomorphism in Equation (<ref>) first.
Along s^-1(0), we have that e=0, so that the underlying curve is a solution to the original holomorphic curve equation ∂_J u= 0.
Hence we have a natural projection map s^-1(0) →_g,h(X,J,β).
We wish to show that G acts transitively on the fibers of this map (in fact the stabilizer group exactly corresponds to the automorphism group of u).
So, fix u : Σ→ X in _g,h(X,J,β) and let Q ⊂ s^-1(0)
be the preimage of u.
For each (ϕ,u,e,F) ∈ Q, we have e=0 and ϕ=ϕ_F
and so all such elements are of the form (ϕ_F,u,0,F)
and hence only depend on the framing F, which is a basis of H^0(L_u).
Also, since exp^-1(H_F) = 0, we have that H_F is the identity matrix, i.e. F is a unitary framing.
Fix such an element, (ϕ_F,u,0,F) ∈ Q.
Any other element in Q must be equal to
(ϕ_F · A,u,0,F · A) = (ϕ_{F · A},u,0,F · A) for some A ∈ GL_d-g+1().
Now H_{F · A} = A^* H_F A by Lemma <ref>, and since both H_F and H_{F · A}
are the identity, this implies that A must be unitary and hence G
acts transitively on Q.
We can therefore identify the quotient of this zero-locus with the desired moduli space.
To justify the statement about stabiliser groups and isomorphism of orbispaces, note that an element of s^-1(0) is a tuple (ϕ_F, u, 0, F) which is completely determined by the framing F. An automorphism f: Σ→Σ as a stable map to X, so with u ∘ f = u, will pull back the framing F to some F · g with ϕ_F ∘ f = ϕ_F· g. On the other hand, for g∈ U(d-g+1) to stabilise (ϕ_F,u,0,F) we need some automorphism f: Σ→Σ of u with ϕ_F ∘ f = g ∘ϕ_F = ϕ_F· g. It follows that the stabiliser groups agree as claimed.
We will now show that the natural forgetful map p is a submersion with a C^1_loc-structure.
By unique continuation and elliptic regularity
we have for each (ϕ,u,0) ∈^ that C^∞_c(Y) surjects onto the cokernel of the linearized ∂-operator D : W^1,2(ũ^*TX) → L^2(Ω^0,1(ũ^*TX))
where ũ is the composition of the normalization map Σ̃→Σ with u.
Since (V_μ,λ_μ)_μ∈ is a finite dimensional approximation scheme,
we get that λ_μ(V_μ) surjects onto the cokernel of D for any fixed large enough μ. Gromov compactness then implies that μ may be chosen uniformly so that surjectivity holds for all curves.
Hence by the gluing result Corollary <ref>, we have that '^→ admits a C^1_loc structure where '^⊂^ is an open neighborhood of the region where e=0.
Let V → be the vector bundle over whose fiber
over u : Σ→^d-g is H^0(u^*𝒪(1)).
Choose a connection for V.
Let P be the principal GL_d-g+1()-bundle over
whose fiber over u : Σ^d-g consists of all possible bases for V.
Let π : ^→ send (ϕ,u,e) to ϕ.
Then the thickening naturally embeds into the pullback of P to ^ as an open subset since we have natural isomorphisms
H^0(L_u,) ≅ H^0(ϕ_F^* O(1)) ≅ H^0(ϕ^*O(1)) = π^*V|_(ϕ,u,e)
where the second isomorphism is given by parallel transport along the straight line
joining ϕ and ϕ_F inside H_g,h,d|_ϕ≅Π(Δ(H_g,h,d|_ϕ)) ⊂
where Π is the projection map sending (ψ_1,ψ_2) to ψ_1.
Hence ' → also has a C^1_loc-structure coming from the one on π^*P|_^ where ' is the preimage of '^.
Along e=0, all maps in the thickening are non-constant on any unstable component of a domain (since the finite automorphism hypothesis implies that these are stable maps). By continuity, and since the degree of u^*L on a component is cohomological, this then holds throughout an open neighbourhood of e=0; compare to Definition <ref>.
<cit.>
A topological global Kuranishi chart is a tuple (G',T',E',s')
where G' is a compact Lie group, T' is a topological G'-manifold with finite stabilizers
and E' is a G'-vector bundle with a G' equivariant section s'.
We have that (G,',E',s') is a topological global Kuranishi chart
where ' ⊂ is a neighborhood of e=0,
E' := E|_' and s' := s|_'. This chart is equivalent to a smooth global Kuranishi chart ^sm for which the natural evaluation map ^sm→ X^h × is a smooth G-equivariant map.
Hence the natural evaluation map ^sm→ X^h ×_g,h is smooth.
It is explained in <cit.> that given a topological global chart for which T' → admits a G-equivariant C^1_loc-topological submersion over a smooth G-manifold, one can apply abstract smoothing theory to an equivalent chart (obtained by stabilization and germ equivalence) to yield a smooth global chart. By construction, the obstruction bundle is stably almost complex and ' carries a transverse almost complex structure. The rest follows from <cit.>
and <cit.>.
The smooth global chart ^sm does not quite fit with the set-up of Section <ref>, since ' is naturally stably almost complex rather than transverse complex, and the obstruction bundle E' is not a complex vector bundle (because it carries a copy of the real vector space of Hermitian matrices); but this can be repaired by the trick from Lemma <ref>, adding a copy of the Lie algebra g of skew-Hermitian matrices to both ' and E'.
The global chart here differs, when g=0, from that constructed in <cit.>. There, the framing data F was used in the perturbation of the ∂-equation defining the thickening. Here, the framing appears only in the section of the obstruction bundle. In any formulation of the higher genus case, as here or in the slightly different approach in <cit.>, a key point is to address issues around the non-uniqueness of a holomorphic line bundle on Σ with given degree, once g(Σ) > 0.
We call a chart as constructed above a distinguished global Kuranishi chart for the moduli space of stable maps. A distinguished chart is associated to the data of and (V_μ,λ_μ)_μ∈ as well as a choice of natural numbers k and μ.
§.§ General Pre-Thickened Moduli Spaces
Let (X,ω) be a closed symplectic manifold, let
β∈ H_2(X;). For the proof that the invariants we extract from the global Kuranishi charts from Section <ref> do not depend on choices, we shall need many variants of this basic construction. We introduce an abstract framework which covers all the cases that we need, so we start by fixing a compact Lie group G.
For any G_-equivariant family of nodal curves → (Definition <ref>)
we let Y_
be the G vector bundle over ^o × X whose fiber over a point (c,x) ∈^o × X is the space of anti-holomorphic maps from T_c(^o|_π(c)) to T_x X.
A finite dimensional approximation scheme for
is a tuple (V_μ,λ_μ)_μ∈ where (V_μ)_μ∈ are G-representations satisfying V_μ-1⊂ V_μ
and λ_μ : V_μ→ C^∞_c(Y_) are G-equivariant linear maps satisfying λ_μ|_V_μ-1 = λ_μ-1 for each μ∈, and so that for each ϕ∈ F, we have that
(V_μ,λ_μ|_|_ϕ) is a finite dimensional approximation scheme for Y_|_|_ϕ
as in Definition <ref>.
We could have chosen a finite dimensional approximation scheme for C^∞_c(Y_) as in Definition <ref>.
However, the definition above has the advantage of being compatible with pullbacks, cf. Definition <ref> below.
The pre-thickened moduli space, ^ =^(β,λ_μ)
is the space of tuples (ϕ,u,e) where
(ϕ,u) ∈_ (Definition <ref>) and e ∈ V_μ, satisfying the following equation:
∂_J u|_^o|_ϕ + (λ_μ(e)) ∘Γ_u =0
where
Γ_u : ^o|_ϕ→^o × X, Γ_u(σ) := (σ,u(σ))
is the graph map, and for which the stabilizer group G_(ϕ,u,e)⊂ G of (ϕ,u,e) is finite.
The topology is the induced Hausdorff metric topology on the closures of the images of the graphs Γ_u in × X,
combined with the natural linear topology on V_μ.
There is a natural G-action on this space sending (ϕ,u,e) to (ϕ· g, u · g, e · g)
where u · g is the composition |_ϕ· g· g^-1|_ϕu X.
An element (ϕ,u,e) is regular if
the image of λ_μ surjects onto the cokernel of the linearized ∂-operator
associated to u.
We define ^,⊂^ to be the subspace of regular elements.
Let K ⊂ be a compact subset.
Then for all μ sufficiently large,
we have that ^, contains the subspace
^|_K,e=0 := {(ϕ,u,0) ∈^ : ϕ∈ K}⊂^.
For each J-holomorphic curve u : |_ϕ→ X,
we have that V_μ surjects onto the cokernel of the linearized ∂-operator
for all μ≥μ_ϕ,u for some μ_ϕ,u∈.
Hence it holds
for all (ϕ',u') in a neighborhood U_ϕ,u of (ϕ,u) with respect to the Hausdorff topology on graphs Γ_u'.
Using that K is compact and Gromov compactness, we can cover ^|_K,e=0 by a finite number of open subsets (U_ϕ_i,u_i)_i ∈ I
and hence we choose μ to be larger than max_i ∈ Iμ_ϕ_i,u_i.
The following proposition follows from Corollary <ref>.
Let ^ be a pre-thickened moduli space as above.
Then the map ^,→ admits a C^1_loc-structure.
Also, if there exists another G_-equivariant family of nodal curves ' →^(2) and a G_-equivariant submersion →^(2)
so that is the pullback
of '
then the composite map
^,→^(2) admits a C^1_loc-structure.
The pre-thickened moduli space from Definition <ref>
is equal to the pre-thickened moduli space from Definition <ref>
with = _g,h,d (Section <ref>).
The group G here acting on this space is U(d-g+1) with G_ = GL_d-g+1().
We also have parameterized versions of Definition <ref>
and Propositions <ref> and <ref>.
These are needed to show that our Gromov-Witten invariants do not depend on the choice of almost complex structure.
Let (J_t)_t ∈ [0,1] be a smooth family of ω-tame almost complex structures.
(More generally, throughout the discussion below the interval [0,1] can be replaced with any compact manifold with boundary.)
The parameterised pre-thickened moduli space, ^_(J_t) =^_(J_t)(β,λ_μ)
is the space of tuples (t,ϕ,u,e) where t ∈ [0,1],
(ϕ,u) ∈_ (Definition <ref>) and e ∈ V_μ, satisfying the following equation:
∂_J_t u|_^o|_ϕ + (λ_μ(e)) ∘Γ_u =0
where
Γ_u : ^o|_ϕ→^o × X, Γ_u(σ) := (σ,u(σ))
is the graph map, and for which the stabilizer group G_(ϕ,u,e)⊂ G of (ϕ,u,e) is finite
for each (ϕ,u,e) ∈^.
The topology is the induced Hausdorff metric topology on the images of the graphs Γ_u
combined with the topology on [0,1] and V_μ.
There is a natural G-action on this space sending (t,ϕ,u,e) to (t,ϕ· g, u · g, e · g)
where u · g is the composition |_ϕ· g· g^-1|_ϕu X.
An element (ϕ,u,e) is regular if
the image of λ_μ surjects onto the cokernel of the linearized ∂-operator
associated to u.
We define ^,_(J_t)⊂^_(J_t) to be the subspace of regular elements.
We can use the weaker notion of regularity requiring only surjectivity onto the cokernel of the parametrised linearized ∂-operator, i.e. the quotient of the cokernel of the linearized ∂-operator by the image of the tangent space of the parameter space.
Exactly as for Lemma <ref>, and again using Corollary <ref>, we get:
Let K ⊂ be a compact subset.
Then for all μ sufficiently large,
we have that T_(J_t)^, contains the subspace
^_(J_t)|_K,e=0 := {(t,ϕ,u,0) ∈^ : ϕ∈ K}⊂^_(J_t).
Moreover, the map ^,_(J_t)→ admits a C^1_loc-structure.
§.§ General Construction of Global Kuranishi Charts
Let (X,ω) be a closed symplectic manifold, let
β∈ H_2(X;).
For any compact Lie group G,
we let exp : i𝔤→ G_/G be the exponential map in the imaginary direction.
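For the unitary groups appearing in our charts this map is concrete; the following is an illustration for G = U(n), where i\mathfrak{g} is the space of Hermitian matrices and G_\mathbb{C} = GL_n(\mathbb{C}). The polar decomposition gives
\[
GL_n(\mathbb{C}) = \exp\big(i\,\mathfrak{u}(n)\big)\cdot U(n), \qquad G_\mathbb{C}/G \;\cong\; \{\text{positive-definite Hermitian matrices}\},
\]
and under this identification \exp : i\mathfrak{g} \to G_\mathbb{C}/G is the matrix exponential already used, via \exp^{-1}(H_F), in the section of Definition <ref>.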
Let G, Ǧ be compact Lie groups.
Let B be a Ǧ-space. A Ǧ-equivariant principal G-bundle
is a principal G-bundle P → B together with a Ǧ-action on P commuting with the G-action on P
so that the map P → B is Ǧ-equivariant.
Given a principal G-bundle P, we set P_ := P ×_G G_ to be the corresponding principal G_ bundle.
Let U_G ⊂ ig be a G-invariant neighborhood of 0
and Q_G ⊂ G_ a G-invariant neighborhood of G
so that the map exp|_U_G : U_G → Q_G / G is a diffeomorphism.
We let
log_P : P_ i𝔤
be the partially defined map
whose restriction to each fiber P_|_x ≅ G_
is the composition
Q_G ↠ Q_G / G exp^-1 i 𝔤.
We call the bundle P ×_G Q_G the domain of log_P.
Let G be a compact Lie group.
A smooth G-microbundle over a smooth G-manifold B is a triple
(E,i,p) where
* E is a smooth G-manifold,
* i : B → E is a smooth G-equivariant embedding and
* p : E → B is a smooth G-equivariant map satisfying p ∘ i = 𝕀.
A microbundle isomorphism ρ : E → E' between microbundles (E,i,p) and (E',i',p') over B
is a smooth open embedding ρ : U → E' where U ⊂ E is an open neighborhood of i(B)
satisfying ρ∘ i = i' and p|_U = p' ∘ρ.
We will sometime write E instead of (E,i,p) if the context is clear.
An example of a G-microbundle is a smooth G-vector bundle. This is actually the universal example: since p is a smooth submersion, any smooth microbundle has a vertical tangent bundle, to which it is isomorphic. The microbundle language simplifies notation since it avoids picking an isomorphism from a neighbourhood of i(B) to such a normal bundle. We encountered a G-microbundle ^(2) in Section <ref>, with its projection map Π to, and diagonal map Δ from, the base B =.
Let G be a compact Lie group and → a G_-equivariant family of nodal curves (Definition <ref>).
We define pre-Kuranishi data over → to be a tuple (G,E,λ_μ)
where
* E = (E,i,p) is a smooth G-microbundle over ,
* λ_μ arise from a finite dimensional approximation scheme (V_μ,λ_μ)_μ∈
for (Definition <ref>).
Let p : R → B be a topological submersion from a topological manifold to a smooth manifold, which admits a G-equivariant C^1_loc-structure.
A C^1_loc-submanifold of R is a topological submanifold R' ⊂ R
so that for each x ∈ R', letting b=p(x), there is a product neighborhood ι : W ∼⟶ W|_b × p(W) for the total space whose restriction
ι|_R' : W|_R'→ (W|_R')|_b × p(W) is a product neighborhood of R'.
(<cit.>).
In Definition <ref> note that the image p(R') ⊂ B is open; there is a natural generalisation to C^1_loc-submanifolds with boundary, but again we are only concerned with the case when their image in B has codimension zero.
Let (G,E,λ_μ) be pre-Kuranishi data over some G_-nodal family → with E = (E,i,p).
We define
global chart data associated to (G,E,λ_μ)
to be a tuple (,s)
where
* = P_ for some principal G-bundle P over a closed C^1_loc submanifold B_ of ^ possibly with boundary where ^ :=^(β,λ_μ),
* equipping with the diagonal G-action (inside its natural G× G-action), s is a G-equivariant continuous map → E with s^-1(0) compact and contained in ^, (Definition <ref>)
and so that P_ = p ∘ s where P_ : → and p : E → are the natural projection maps.
Equivalently, s is a G-equivariant section of the pullback of E to .
Given global chart data (,s) as above, and an element (ϕ,u,e) ∈^,
a -framing of (ϕ,u,e) is an element F ∈|_(ϕ,u,e).
Let N_ be the pullback of the normal bundle N_ of i() (Definition <ref>)
to via the projection map →.
Let Q_⊂ P_ be the domain of log_P (Definition <ref>).
Choose an isomorphism of microbundles ρ : N_→ E
where N_ is the normal bundle of i() inside E.
Let U ⊂ E be the image of ρ.
The global Kuranishi chart associated to (,s)
is the tuple
(G,s^-1(U) ∩ Q_, i𝔤⊕N_⊕ V_μ, log_P ⊕ (ρ^-1∘ s) ⊕ P_μ)
where P_μ : → V_μ is the natural projection map.
The following proposition is a consequence of Lemma <ref> combined with the fact that a
G-equivariant principal G-bundle whose base admits a C^1_loc structure over a smooth G-manifold also admits a C^1_loc structure
over the same smooth G-manifold.
The global Kuranishi chart associated to (,s) is a topological global Kuranishi chart
whose base admits a C^1_loc structure over B_⊂.
If → is another G_-equivariant family of nodal curves and ρ : →
is a smooth G_-equivariant morphism which is a submersion so that is the pullback of ,
then admits a C^1_loc structure over .
The global Kuranishi chart does not depend on the choice of ρ in the sense that
if we choose a different ρ, we get an isomorphic obstruction bundle over the same thickening and this isomorphism
sends the section of one global Kuranishi chart to the other.
Let = _g,h,d, = _g,h,d,
^(2) = ^(2)_g,h,d, Δ as in Section <ref>
with G = U(d-g+1).
Let P_ : ^(2)→ be the natural projection map.
Then ^(2) = (^(2),Δ,P_) is a smooth G-microbundle over .
Choose a finite dimensional approximation scheme (V_μ,λ_μ)_μ∈ for .
Then (G,^(2),λ_μ) is pre-Kuranishi data over .
The principal bundle P →^ has fiber over (ϕ,u,e) ∈^ equal to the space of unitary -framings on (ϕ,u,e) (Definition <ref>).
Then = P_. The space of framed curves from Definition <ref> is an open subset of containing P.
We let s : →^(2) be the map sending (ϕ,u,e,F) to (ϕ,ϕ_F) as in Definition <ref>.
Then (,s) is global chart data associated to (G,^(2),λ_μ).
The corresponding global Kuranishi chart is the one described in Definition <ref>.
The map Δ from Equation (<ref>) identifies the microbundle ^(2) with the normal bundle of inside ^(2).
§.§ Parametrized families and cobordism invariance
We also have parameterized versions of Definition <ref>
and Proposition <ref>.
These are needed to show that our Gromov-Witten invariants do not depend on the choice of almost complex structure.
Let (J_t)_t ∈ [0,1] be a smooth family of ω-tame almost complex structures.
More generally, the interval [0,1] can be replaced with any compact manifold with boundary.
Let (G,E,λ_μ) be pre-Kuranishi data over some G_-nodal family → with E = (E,i,p).
We define the
parameterized global chart data associated to (G,E,λ_μ)
to be a tuple (',s')
where
* ' = P'_ for some principal G-bundle P' over a closed C^1_loc submanifold B_ of ^_(J_t) possibly with boundary where ^_(J_t) :=^_(J_t)(β,λ_μ) (Definition <ref>),
* s' is a G-equivariant continuous map ' → E with s^-1(0) compact and contained in ^,_(J_t) (Definition <ref>)
and so that P'_ = p ∘ s' where P'_ : → and p : E → are the natural projection maps.
Equivalently, s' is a G-equivariant section of the pullback of E to '.
Given parameterized global chart data (',s') as above, and an element (t,ϕ,u,e) ∈^_(J_t),
a '-framing of (t,ϕ,u,e) is an element F ∈'|_(t,ϕ,u,e).
Let N_ be the pullback of the normal bundle N_ of i() (Definition <ref>)
to ' via the projection map →.
Let Q_'⊂ P'_ be the domain of log_P' (Definition <ref>).
Choose an isomorphism of microbundles ρ : N_→ E
where N_ is the normal bundle of i() inside E.
Let U ⊂ E be the image of ρ.
The global Kuranishi chart associated to (',s')
is the tuple
(G,s'^-1(U) ∩ Q_', i𝔤⊕N_⊕ V_μ, log_P'⊕ (ρ^-1∘ s') ⊕ P'_μ)
where P'_μ : ' → V_μ is the natural projection map.
The following proposition is a consequence of Proposition <ref> combined with the fact that
G-equivariant principal G-bundles whose base admits a C^1_loc structure over a smooth G-manifold also admit a C^1_loc structure
over the same smooth G-manifold.
The global Kuranishi chart associated to (',s') is a topological global Kuranishi chart
whose base admits a C^1_loc structure over B_'⊂.
To deduce from this the invariance of smooth global charts, we include a brief discussion on uniqueness of smoothings. Let M → B be a topological submersion of a topological manifold over a smooth manifold, with a C^1_loc-structure. The entire discussion below is meant to be G-equivariant, but we suppress further discussion of the group action. In <cit.> the construction of a smooth chart from a C^1_loc chart relies on lifting the tangent microbundle of M to a vector bundle. Lashof's theory <cit.> implies that, perhaps after further stabilisation (by a G-representation), the smoothing is uniquely determined up to diffeomorphism by the stable isomorphism class of the vector bundle lift. The bundle lift constructed in <cit.> in turn depends on two pieces of data: the C^1_loc-structure on M → B, and the choice of a fibrewise submersion in the sense of <cit.> (which is a map from an open neighbourhood of the diagonal in M × M to M satisfying a raft of conditions). There is a natural notion of stabilising a fibrewise submersion: one for M → B induces one on M×^m → B ×^m. It follows from the extension result of Bai and Xu <cit.> that any S^k-parametrized family of fibrewise submersions M → B extends over ^k+1 after stabilising M and B by ^k+1 in this way, so the space of fibrewise submersions up to stabilisation is contractible.
It follows that the stable vector bundle lift of the tangent microbundle is completely determined by the C^1_loc-structure. Since this is controlled by Proposition <ref>, the virtual class associated to a stable smoothing of a C^1_loc-global chart is indeed invariant under cobordism.
§.§ Equivalences Between Kuranishi Data
We will be constructing global Kuranishi charts using various global Kuranishi chart data
as in Definition <ref>.
We next describe some operations on such charts which yield equivalences, e.g. which then induce the stabilisation, germ equivalence or group enlargement operations from <cit.> (which were shown to be equivalences in op. cit., and which are C^1_loc-analogues of the moves from Section <ref>, which they induce on smoothings). As usual we fix a closed symplectic manifold (X,ω) and
β∈ H_2(X;).
Let → be a G_-equivariant family of nodal curves and let f : ^(2)→ be a smooth G_-equivariant morphism
so that f^*→^(2) is the pullback of .
Let (V_μ,λ_μ)_μ∈ be a finite dimensional approximation scheme for (Definition <ref>).
Then the pullback f^*λ_μ : V_μ→Ω^0,1('/^(2);TX) is uniquely defined by the following property:
f^*λ_μ(e)
restricted to (f^*)|_ϕ' is equal to λ_μ(e) restricted to the isomorphic curve |_ϕ, where ϕ is the image of ϕ', for each e ∈ V_μ and ϕ' ∈^(2).
We call (V_μ,f^*λ_μ)_μ∈ the pullback of (V_μ,λ_μ)_μ∈ along ^(2)→.
Note that the pullback of a finite dimensional approximation scheme
is a finite dimensional approximation scheme.
Let (G,E,λ_μ)
be pre-Kuranishi data associated to a G_-equivariant family of nodal curves →.
A principal bundle enlargement of (G,E,λ_μ)
consists of the pre-Kuranishi data
(G ×Ǧ,Ě,λ̌_μ)
over a G_-nodal family →
where
* there exists a G_-equivariant principal Ǧ-bundle P →^(2)
so that = P_ and is the pullback of ,
* Ě is the pullback of E to P_ and
* λ̌_μ is the pullback of λ_μ along →.
Now let (,s) be global chart data associated to (G,E,λ_μ).
An enlargement of (,s) consists of global Kuranishi data
(,š) for a principal bundle enlargement (G ×Ǧ, Ě,λ_μ)
as above
where
* is a G-equivariant principal Ǧ-bundle over ,
* š composed with the projection map to E is equal to the projection map to composed with s.
Let (,s) and (,š) be as in Definition <ref> above.
The global Kuranishi chart associated to (,š) is obtained from the one associated to (,s)
via group enlargement (in the sense of <cit.> or Section <ref>).
Let (G,E,λ_μ) be pre-Kuranishi data over a G_-nodal family → as in Definition <ref>.
Let (V̌_μ,λ̌_μ)_μ∈ be another finite dimensional approximation scheme for (Definition <ref>).
Let (,s) be global Kuranishi data associated to (G,E,ρ,λ_μ) and (',s') global Kuranishi data associated to (G,E,λ_μ⊕λ̌_μ).
We say that (',s') is an approximation stabilization of (,s) if the restriction of (',s') to ^(β,λ_μ)
is (,s).
If (',s') is an approximation stabilization of (,s) as above then
the global Kuranishi chart associated to (',s') is obtained from the one associated to (,s) via stabilization.
Let p : M → F admit a C^1_loc structure. We define
T(M/F) to be the corresponding vertical tangent bundle.
Let p_0 : M_0 → F, p_1 : M_1 → F admit C^1_loc structures
over a smooth manifold F.
A fiberwise C^1_loc map f : M_0 → M_1 is a continuous map where
* p_0 = p_1 ∘ f,
* the restriction f|_ϕ : M_0|_ϕ→ M_1|_ϕ for each ϕ∈ F is smooth and
* there is a continuous map D^verf : T(M_0/F) → T(M_1/F) whose restriction to T(M_0|_ϕ) is the derivative of f|_ϕ for each ϕ∈ F.
For the next definition, Let (G,E,λ_μ) be pre-Kuranishi data over a G_-nodal family → as in Definition <ref>.
Let Q : → be a smooth surjective G_-equivariant morphism
and let → be the corresponding pullback by Q.
We say that pre-Kuranishi data (G,Ě,λ̌_μ) over → is a Q-lift
of (G,E,λ_μ)
if
* There is a map Q fitting into a commutative square with top horizontal arrow Q : Ě→ E, bottom horizontal arrow Q between the bases of the two nodal families, and vertical arrows the natural projection maps, and
* λ̌_μ is the pullback of λ_μ via Q.
We now explain how to relate global Kuranishi data in this setting: let (,s) be global Kuranishi data for (G,E,λ_μ), and, to simplify the notation, let ^ :=^(β,λ̌_μ) and ^ :=^(β,λ_μ).
.
A general stabilization of (,s) consists of global Kuranishi data (,š) for a Q-lift (G,Ě,λ̌_μ) as above
so that
* there exists a map P fitting into a commutative diagram whose two columns consist, from top to bottom, of Ě and E, of the two thickenings, of the two pre-thickenings, and of the bases of the two nodal families; the horizontal arrows are Q : Ě→ E at the top, P between the thickenings, the induced map between the pre-thickenings, and Q between the bases at the bottom, while the sections š and s map the thickenings into Ě and E respectively,
where unlabelled arrows are the natural projection maps, all spaces in the diagram admit C^1_loc structures over and where all the morphisms are C^1_loc maps over .
* all the maps in the diagram are G-equivariant (cf. the action from Definition <ref>) and P maps š^-1(0) homeomorphically to s^-1(0) (respecting stabiliser groups).
* For each x ∈š^-1(0), D^š : T(/)|_x → T(Ě/E)|_š(x) is an isomorphism.
Let (,š) be a general stabilisation of (,s) as in Definition <ref>.
Then the global Kuranishi charts associated to (,s), (,š)
are related by stabilization and germ equivalence.
First of all, we can replace E and Ě with any isomorphic smooth G-microbundles.
As a result, we can think of them as vector bundles over ^ and ^ respectively.
The pullback of these vector bundles Ě' and E' to and respectively
have natural sections š' and s' induced by š and s respectively.
We can also assume that Q is a linear submersion and hence Ě' is isomorphic to E^⊥⊕ E”
where E” is the pullback P^*E.
Then š' = s^⊥⊕ s” with respect to this splitting where s” is the pullback of s'.
Also s^⊥ is transverse to zero along š^-1(0).
§.§ Independence of Choices
We need to show the global Kuranishi
charts constructed in
Section <ref> are independent of choices of finite dimensional approximation scheme,
consistent domain metrics and Hermitian line bundles, etc.
We will do this by using global Kuranishi chart data from Section <ref>.
Dependence on J was already dealt with in Proposition <ref> above.
The general template for building equivalences follows an idea from <cit.>; given two charts associated to two choices of data, we find group enlargements of each which are simultaneously stabilisation-equivalent to a global chart of `doubly framed' curves (in which both sets of choices are carried through the construction at once). Equivalences therefore appear as zig-zags in such a diagram. This template will recur several times in the coming sections.
Let g,h,d,ď∈.
Let = _g,h,d,
= _g,h,d,
^(2) = ^(2)_g,h,d,
:= _g,h,ď,
:= _g,h,ď and
^(2) := ^(2)_g,h,ď
be as Section <ref>.
We let G := U(d-g+1) and Ǧ := U(ď-g +1).
Choose finite dimensional approximation schemes
(V_μ,λ_μ)_μ∈
and (V̌_μ,λ̌_μ)_μ∈
for and respectively.
We have associated pre-thickenings
^ :=^(β,λ_μ)
and ^ :=^(β,λ̌_μ)
for some large μ.
Now choose consistent domain metrics for and
as in Definition <ref>
and Hermitian line bundles L → X and Ľ→ X
whose curvatures tame J,
and fix a large integer k ≫ 1.
We have a principal G-bundle P →^
whose fiber over (ϕ,u,e) ∈^
is the space of unitary bases of H^0(L_u), where
L_u := (ω_/(p_1,⋯,p_h)|_ϕ⊗ u^* L)^⊗ k
and p_1,⋯,p_h are the marked point sections.
We let d be the size of this basis.
We have a similarly defined principal Ǧ-bundle P̌→^
whose fiber over (ϕ,u,e) ∈^
is the space of unitary bases of H^0(Ľ_u), where
Ľ_u := (ω_/(p̌_1,⋯,p̌_h)|_ϕ⊗ u^* Ľ)^⊗ k
and p̌_1,⋯,p̌_h are the marked point sections.
We let ď be the size of this basis.
Define := P_ and := P̌_.
Each unitary basis F gives us a G-equivariant map ϕ_F from the domain of u
to ^d-g and hence an element (ϕ,ϕ_F) of ^(2).
Therefore we have a G-equivariant map
P →^(2) which extends to a G_-equivariant map s : →^(2).
We define š : →^(2) analogously.
Then (G,,s) is global Kuranishi data associated to (G,^(2),λ_μ)
and (Ǧ,,š) is global Kuranishi data associated to (Ǧ,^(2),λ̌_μ).
(G,,s) and (Ǧ,,š)
are related by a sequence of
principal bundle enlargements (Definition <ref>)
and general stabilizations (Definition <ref>) and their inverses.
Hence their associated global Kuranishi charts are equivalent.
This is done via three intermediate global Kuranishi data.
The first is constructed as follows:
We start with pre-Kuranishi data (G ×Ǧ,^(2),λ_μ)
where G ×Ǧ acts by projecting to G first.
We let _1 be the principal Ǧ-bundle
over whose fiber over a point in |_(ϕ,u,e),
(ϕ,u,e) ∈^
is a unitary basis F̌
of H^0(Ľ_u) using the consistent
domain metric for .
We let s_1 be the composition of the natural projection from _1 with s : →^(2).
Then (_1,s_1) is global Kuranishi data associated to
(G ×Ǧ,^(2),λ_μ).
It is also a principal bundle enlargement of (,s) (Definition <ref>).
We will now construct the second global Kuranishi chart data.
We start with the space _2
of genus g curves with h
marked points mapping to ^d-g×^ď-g
whose projection to ^d-g
is a member of
and whose projection to ^ď-g
is in .
We let _2 →_2
be the corresponding universal curve
with marked point sections p^2_1,⋯,p^2_h.
This is a G_×Ǧ_-nodal family.
We let _2^(2) be the space
of genus g curves with h
marked points mapping to (^d-g)^2 × (^ď-g)^2
whose projection to (^d-g)^2
is a member of ^(2)
and whose projection to (^ď-g)^2
is in ^(2).
We let (V_μ,2,λ_μ,2)_μ∈
be the finite dimensional approximation scheme
for _2
given by the sum of the pullback (<ref>) of
(V_μ,λ_μ)_μ∈
and (V̌_μ,λ̌_μ)_μ∈
respectively.
We have an associated pre-thickening
^_2 :=^(β,λ_μ,2)
for some large μ.
We have a principal G ×Ǧ-bundle P_2 →^_2
whose fiber over (ϕ,u,e) ∈^_2 is
the space of pairs F,F̌
of unitary bases of H^0(L_u)
and H^0(Ľ_u)
respectively.
So elements of P_2 are tuples of the form (ϕ,u,e,F,F̌).
We also have a map
s_2 : _2 →_2^(2)
sending (ϕ,u,e,F,F̌) to (ϕ,ϕ_F,ϕ̌,ϕ_F̌), where ϕ and ϕ̌ denote the two projections of the underlying curve.
Then (_2,s_2) is global Kuranishi data associated to
(G ×Ǧ,_2,λ_μ,2).
Also (_2,s_2)
is a general stabilization (Definition <ref>)
followed by an approximation stabilization (Definition <ref>)
of (_1,s_1).
Finally we construct the third global Kuranishi data.
This is the same as the first one,
but with and
swapped.
We start with pre-Kuranishi data (G ×Ǧ,^(2),λ̌_μ)
where G ×Ǧ acts by projecting to Ǧ first.
We let _3 be the principal G-bundle
over whose fiber over a point in |_(ϕ,u,e),
(ϕ,u,e) ∈^
is a unitary basis F
of H^0(L_u) using the consistent
domain metric for .
We let s_3
be the composition _3 →š.
Then (_3,s_3) is global Kuranishi data associated to
(G ×Ǧ,^(2),λ̌_μ).
It is also a principal bundle enlargement of (,š) (Definition <ref>)
and (_2,s_2) is a generalized stabilization (Definition <ref>)
followed by an approximation stabilization (Definition <ref>)
of (_3,s_3).
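In summary, the zig-zag of moves relating the two sets of data reads
(,s) ∼ (_1,s_1) ∼ (_2,s_2) ∼ (_3,s_3) ∼ (,š),
where the outer steps are principal bundle enlargements and each of the two inner steps is a general stabilization followed by an approximation stabilization.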
§.§ Forgetful Maps
We wish to show that if one forgets a marked point
then one gets an appropriate morphism of global Kuranishi charts.
For simplicity, we will forget the last marked point.
The same argument holds when forgetting other marked points
since one can relabel them and all of our constructions
are equivariant under such relabelings.
Our treatment of forgetful maps relies on the following trick. In the usual global chart construction, we use that for a stable map u: C → X with marked points p_1,…,p_h, and L→ X having curvature -2iπΩ, the bundle ω_C(p_1,…,p_h) ⊗ u^*L has strictly positive degree on all irreducible components. Now note that
L̂_u :=ω_C(p_1,…,p_h-1) ⊗ u^*L
has positive degree on an irreducible component C' of C except in three cases:
* C' is genus zero with one marked point p_h and two nodal points;
* C' is genus zero with marked points p_h, p_i for some i<h, and one nodal point;
* C' is genus one with one marked point p_h and no nodes.
In each of these cases, after forgetting p_h the log canonical bundle of C' has degree zero, u(C') represents the trivial homology class, and L̂_u has degree zero on C'. The map ϕ_F associated to a framing F of H^0(L̂_u^k) is regular and automorphism free in both the first two cases, whilst in the third case the curve obtained by forgetting p_h would have genus one and no marked points, in which case there is no moduli space of stable constant maps since there is no Deligne-Mumford stack _1,0 (so we don't expect a good theory of forgetful maps anyway).
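To illustrate the first case with a short degree count: since ω_C|_C' = ω_C'(nodes on C'), we have
deg(ω_C(p_1,…,p_h-1)|_C') = (2· 0 - 2) + 2 + 0 = 0,
the three summands being the contributions of the genus of C', its two nodal points, and the marked points p_1,…,p_h-1 (none of which lie on C'). Hence L̂_u fails to have positive degree on C' precisely when deg(u^*L|_C') = ⟨[Ω], u_*[C']⟩ ≤ 0, which is the situation described above in which u(C') represents the trivial homology class; the other two cases are entirely similar.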
With this in mind, we will now construct three different sets of global Kuranishi data.
Let g,h,d ∈. In light of the previous discussion, we will assume that (g,h) ≠ (1,1).
* We let G, , , ^(2), ^, P →^, , s, L_u be as in Section <ref>.
Then we have that (G,^(2), λ_μ) is pre-Kuranishi data associated to → for μ large
and (,s) is global Kuranishi data associated to this pre-Kuranishi data.
The associated global Kuranishi chart (Definition <ref>) describes the moduli space _g,h(X,β).
* Let d' = d-k and choose a finite dimensional approximation scheme (V'_μ,λ'_μ)
for _g,h-1,d'→_g,h-1,d'.
We also have pre-Kuranishi data (G,^(2)_g,h-1,d',λ'_μ).
Let G' := U(d'-g+1).
Choose consistent domain metric for _g,h-1,d' (Definition <ref>).
We let P' →^(β,λ'_μ) be the principal G'-bundle
whose fiber over (ϕ,u,e) is the space of unitary bases F of the space of sections of the k-th power of the line bundle
L'_u := u^*(ω__g,h-1,d'/_g,h-1,d'(p'_1,⋯,p'_h-1) ⊗ u^* L)
where p'_1,⋯,p'_h-1 are the corresponding marked point sections. (Thus, if d is the degree of L_u^k then d' is the degree of (L'_u)^k since we are twisting by one fewer marked point.)
Then we have a thickening ' := P'_.
For each such basis F we have an element (ϕ,ϕ_F) ∈^(2)_g,h-1,d'
giving us a map P' →^(2)_g,h-1,d'
which extends to a G'_-equivariant map →^(2)_g,h-1,d' for d' = d-k.
Then (',s') is global Kuranishi chart data for (G',^(2)_g,h-1,d',λ'_μ)
whose associated global Kuranishi chart ' describes the moduli space _g,h-1(X,β).
* We let be the moduli space of curves ϕ : Σ→^d'-g
with h marked points so that if we forget the last marked point, we get an element of _g,h-1,d'.
We let → be the corresponding universal curve
and we let (V_μ,λ_μ)_μ∈
be the pullback of (V'_μ,λ'_μ)_μ∈ via the forgetful map →^(2).
We let ^(2) be the space of curves mapping to (^d'-g)^2
whose projection to each factor ^d'-g is an element of .
We then have pre-Kuranishi data (G',^(2),λ_μ).
We have a principal G'-bundle P→^(β,λ'_μ)
given by the pullback of P' via the map to ^(β,λ'_μ).
We also have a natural map s : P→
of the form (ϕ,ϕ_F) where F is the framing pulled back from P'.
Then (,s) is global Kuranishi data for (G',^(2),λ'_μ),
whose associated global Kuranishi chart describes _g,h(X,β) (see Lemma <ref> below).
There is a natural morphism of global Kuranishi charts →'
given by forgetting the last marked point.
This is the forgetful map.
The global Kuranishi chart data associated to (,s) and (,s)
are related by a sequence of germ equivalences, approximation stabilizations and generalized stabilizations
as well as their inverses.
The proof of this lemma is identical to the proof of Proposition <ref>
where Ǧ is replaced with G',
is replaced with ,
^(2) is replaced with ^(2)
etc.
§.§ Split Curves
This section gives two different global Kuranishi chart presentations
for the space of split nodal curves, i.e. the space of connected nodal curves in X obtained
by gluing genus g_1 and g_2 nodal curves along a marked point, and shows that they are equivalent. These two charts correspond geometrically to the following two descriptions of a split stable map:
* One considers those stable maps of genus g=g_1+g_2 whose domains lie in the boundary divisor D_g_1,g_2⊂_g;
* One considers separately curves of genus g_1 and of genus g_2 each with one marked point, and then takes the preimage of the diagonal in X× X under evaluation from the product moduli space.
Fix natural numbers g_1, g_2, h_1, h_2 corresponding to the
genus and number of marked points of each component.
We let (X,ω) be a closed symplectic manifold with β∈ H_2(X;)
and J an ω-compatible almost complex structure.
We will now construct two different global Kuranishi data for split curves.
* Let G = U(d_1+d_2-g+1) where d_1,d_2 are to be determined soon and g = g_1 + g_2.
We let be the moduli space of pairs of curves ϕ_1 : Σ_1 →^d-g,
ϕ_2 : Σ_2 →^d-g with h_1+1 and h_2+1
marked points p_1,⋯,p_h_1+1, q_1,⋯,q_h_2+1 respectively
so that
* their degrees sum up to d,
* ϕ_1(p_h_1+1) = ϕ_2(q_h_2+1),
* the resulting curve ϕ_1 #ϕ_2 : Σ_1 ∪_p_h_1+1=q_h_2+1Σ_2 →^d-g, ϕ_1 #ϕ_2|_Σ_j = ϕ_j, j=1,2,
satisfies H^1((ϕ_1 #ϕ_2)^*(O(1))) = 0 and is automorphism free.
Let → be the corresponding universal curve whose fiber over ϕ_1, ϕ_2 as above
is the union Σ_1 ∪_p_h_1+1=q_h_2+1Σ_2.
We let ^(2) be the moduli space of tuples (ϕ_1,ϕ_2),(ϕ'_1,ϕ'_2) mapping to (^d-g)^4
with the property that (ϕ_1,ϕ_2) and (ϕ'_1,ϕ'_2) are in respectively.
Choose a finite dimensional approximation scheme (V_μ, λ_μ) for →.
Choose a Hermitian line bundle L → X with curvature -2iπΩ with Ω taming J, k ≫ 1 a large integer
and a consistent domain metric for as in Definition <ref>.
For each ((ϕ_1,ϕ_2),u) in ^(β,λ_μ),
we let L_u := (ω_|_ϕ/|_ϕ(p_1,⋯,p_h_1+1,q_1,⋯,q_h_2+1) ⊗ u^*L).
Let P →^(β,λ_μ)
be the principal G-bundle whose fiber over (ϕ_1,ϕ_2,u) is a unitary basis F for H^0(L_u^⊗ k).
We now let d_j be the degree of L_u restricted to Σ_j where Σ_j is the domain of ϕ_j for each j=1,2.
Then we have a thickening := P_ which consists of tuples (ϕ_1,ϕ_2,u,F) where (ϕ_1,ϕ_2,u) ∈^(β,λ^0_μ)
and F is a basis of H^0(L_u^⊗ k).
For each (ϕ_1,ϕ_2,u,F) ∈_0,
let ϕ_F : Σ_1 ∪_p_h_1+1=q_h_2+1Σ_2 →^d-g be as in Equation (<ref>).
Let ϕ_F,j be the restriction of ϕ_F to Σ_j for j=1,2.
We let s : →^(2) send (ϕ_1,ϕ_2,u,F) to ((ϕ_1,ϕ_2),(ϕ_F,1,ϕ_F,2)).
Then for k, μ large enough, (,s) is global Kuranishi chart data associated to (G,^(2),λ_μ)
(Definition <ref>).
* Let Ǧ = U(d_1-g_1+1) × U(d_2-g_2+1) with d_1,d_2 to be determined later.
We let be the moduli space of pairs of curves
ϕ_1 : Σ_1 →^d_1-g_1, ϕ_2 : Σ_2 →^d_2-g_2 with h_1+1 and h_2+1
marked points p_1,⋯,p_h_1+1, q_1,⋯,q_h_2+1 respectively
so that
* their degrees are d_1 and d_2 respectively,
* H^1((ϕ_j)^*(O(1))) = 0 for j=1,2
* and ϕ_1, ϕ_2 are automorphism free.
We let → be the curve whose fiber over
(ϕ_1,ϕ_2) as above is Σ_1 ∪_p_h_1+1=q_h_2+1Σ_2.
Let _j → be the curve over whose fiber over (ϕ_1,ϕ_2)
is Σ_j for each j=1,2.
We let ^(2) be the moduli space of tuples (ϕ_1,ϕ_2),(ϕ'_1,ϕ'_2) mapping to (^d-g)^4
with the property that (ϕ_1,ϕ_2) and (ϕ'_1,ϕ'_2) are in respectively.
Choose a finite dimensional approximation scheme (V̌_μ,λ̌_μ) for
→.
Let L,k be as above and choose a consistent domain metric for _j →_1 as in Definition <ref> for each j=1,2.
For each ((ϕ_1,ϕ_2),u) in ^(β,λ̌_μ),
we let L_u,1 := (ω__1|_ϕ_1/|_ϕ_1(p_1,⋯,p_h_1+1) ⊗ u^*L|__1|_ϕ_1)
and
L_u,2 := (ω__2|_ϕ_2/|_ϕ_2(q_1,⋯,q_h_2+1) ⊗ u^*L|__2|_ϕ_2).
Let P̌→^(β,λ̌_μ)
be the principal Ǧ-bundle whose fiber over (ϕ_1,ϕ_2,u) is a pair of unitary bases (F_1,F_2)
of H^0(L_u,1^⊗ k) and H^0(L_u,2^⊗ k) respectively.
Then we have a thickening := P̌_ which consists of tuples (ϕ_1,ϕ_2,u,F_1,F_2)
where (ϕ_1,ϕ_2, u) ∈^(β,λ̌_μ)
and (F_1,F_2) are bases of H^0(L_u,1^⊗ k) and H^0(L_u,2^⊗ k) respectively.
Let ϕ_F_1 : Σ_1 →^d_1-g_1, ϕ_F_2 : Σ_2 →^d_2-g_2
be as in Equation (<ref>).
We let š : →^(2) send (ϕ_1,ϕ_2,u,F_1,F_2) to (ϕ_1,ϕ_2,ϕ_F_1,ϕ_F_2).
Then for k, μ large enough, (,š) is global Kuranishi chart data associated to (Ǧ,^(2),λ̌_μ).
The Kuranishi data (G,,s) and (Ǧ,,š)
are related by a sequence of
enlargements (Definition <ref>)
and general stabilizations (Definition <ref>) and their inverses.
Hence their associated Kuranishi charts are equivalent.
This is done via three intermediate global Kuranishi data.
The first is constructed as follows:
We start with pre-Kuranishi data (G ×Ǧ,^(2),λ_μ) where G ×Ǧ acts by projecting to G first.
We let _1 be the principal Ǧ-bundle
over whose fiber over a (ϕ_1,ϕ_2,u,F) ∈ is a pair of unitary bases
(F_1,F_2)
of H^0(L_u,1^⊗ k) and H^0(L_u,2^⊗ k) respectively.
We let s_1 be the composition
_1 →s^(2).
Then (_1,s_1) is global Kuranishi data associated to
(G ×Ǧ,^(2),λ_μ).
It is also an enlargement of (,s) (Definition <ref>).
We will now construct the second global Kuranishi chart data.
We start with the space _2
of pairs of curves (ϕ_1,ϕ_2) mapping to ^d-g×^d_1-g_1 and ^d-g×^d_2-g_2 respectively
so that (Π_1 ∘ϕ_1, Π_1 ∘ϕ_2) ∈ and (Π_2 ∘ϕ_1, Π_2 ∘ϕ_2) ∈
where Π_1 and Π_2 are the projection maps to the first and second factors of ^d-g×^d_1-g_1 and ^d-g×^d_2-g_2 respectively.
We let _2 →_2
be the curve
over _2 whose fiber over
ϕ_1 : Σ_1 →^d-g×^d_1 -g_1,
ϕ_2 : Σ_2 →^d-g×^d_2 -g_2
is Σ_1 ∪_p_h_1+1 = q_h_2+1Σ_2
where p_1,⋯,p_h_1+1, q_1,⋯,q_h_2+1
are the marked points on the domains of ϕ_1 and ϕ_2 respectively.
We let ^(2)_2 be the space of pairs of curves (ϕ_1,ϕ_2) mapping to (^d-g×^d_1-g_1)^2 and (^d-g×^d_2-g_2)^2 respectively
so that (P_1 ∘ϕ_1, P_1 ∘ϕ_2)
and (P_2 ∘ϕ_1, P_2 ∘ϕ_2)
are in _2, where P_1 and P_2 denote the projections of (^d-g×^d_1-g_1)^2 and of (^d-g×^d_2-g_2)^2 onto their first and second factors respectively.
Let (V_μ,2,λ_μ,2)_μ∈
be a finite dimensional approximation scheme for _2
given by the sum of the pullback (<ref>) of
(V_μ,λ_μ)_μ∈
and (V̌_μ,λ̌_μ)_μ∈
respectively.
We have an associated pre-thickening
^_2 :=^(β,λ_μ,2)
for some large μ.
We have a principal G ×Ǧ-bundle P_2 →^_2
whose fiber over (ϕ,u,e) ∈^_2
is the space of triples (F,F_1,F_2)
of unitary bases of H^0(L_u), H^0(L_u,1) and H^0(L_u,2)
respectively.
So elements of ^_2 are tuples of the form (ϕ_1,ϕ_2,u,F,F_1,F_2).
We also have a map
s_2 :^_2 →_2^(2)
sending (ϕ_1,ϕ_2,u,F,F_1,F_2) to (ϕ_1,ϕ_2,(ϕ_F|_Σ_1×ϕ_F_1,ϕ_F|_Σ_2×ϕ_F_2))
where Σ_1 ∪_p_h_1+1=q_h_2+1Σ_2 is the domain of u.
Then (_2,s_2) is global Kuranishi data associated to
(G ×Ǧ,_2,λ_μ,2).
Also (_2,s_2)
is a general stabilization (Definition <ref>)
followed by an approximation stabilization (Definition <ref>)
of (_1,s_1).
Finally we construct the third global Kuranishi data.
This is the same as the first one,
but with and
swapped.
We start with pre-Kuranishi data (G ×Ǧ,^(2),λ̌_μ)
where G ×Ǧ acts by projecting to Ǧ first.
We let _3 be the principal G-bundle
over whose fiber over a point
(ϕ_1,ϕ_2,u,e,F_1,F_2) ∈
is a unitary basis F̌
of H^0(L_u^⊗ k) using the consistent
domain metric for .
We let s_3
be the composition _3 →š.
Then (_3,s_3) is global Kuranishi data associated to
(G ×Ǧ,^(2),λ̌_μ).
It is also a group enlargement of (,š) (Definition <ref>)
and (_2,s_2) is a generalized stabilization (Definition <ref>)
followed by an approximation stabilization (Definition <ref>)
of (_3,s_3).
§.§ Genus Reduction
This section gives two different global Kuranishi chart presentations
for the space of curves together with a special non-separating node and shows that they are equivalent. These two charts correspond geometrically to the following two descriptions of a self-glued stable map:
* One considers those stable maps of genus g whose domains map to the boundary divisor of _g of curves with a non-separating node;
* One considers curves of genus g-1 with two additional marked points, and then takes the preimage of the diagonal in X× X under the evaluation map corresponding to these two marked points.
Fix natural numbers g, h corresponding to the
genus and number of marked points.
We let (X,ω) be a closed symplectic manifold with β∈ H_2(X;)
and J an ω-compatible almost complex structure.
We will now construct two different global Kuranishi data for these curves.
* Let G = U(d-g+1) where d is to be determined soon.
We let be the moduli space of genus g-1 curves ϕ : Σ→^d-g,
with h+2
marked points p_1,⋯,p_h+2 respectively
so that
* their degree is d,
* ϕ(p_h+1) = ϕ(p_h+2),
* the resulting curve ϕ^# : Σ^# := Σ / {p_h+1=p_h+2}→^d-g, where ϕ is the composition Σ↠Σ^#ϕ^#⟶^d-g,
satisfies H^1((ϕ^#)^*(O(1))) = 0 and is automorphism free.
Let → be the corresponding universal curve whose fiber over ϕ as above
is the glued curve Σ^#.
We let ^(2) be the moduli space of tuples (ϕ,ϕ') mapping to (^d-g)^2
with the property that ϕ, ϕ' are in respectively.
Choose a finite dimensional approximation scheme (V_μ, λ_μ) for →.
Choose a Hermitian line bundle L → X with curvature -2iπΩ with Ω taming J, k ≫ 1 a large integer
and a consistent domain metric for as in Definition <ref>.
For each (ϕ,u) in ^(β,λ_μ),
we let L_u := (ω_|_ϕ/|_ϕ(p_1,⋯,p_h+1,p_h+2) ⊗ u^*L).
Let P →^(β,λ_μ)
be the principal G-bundle whose fiber over (ϕ,u) is a unitary basis F for H^0(L_u^⊗ k).
We now let d be the degree of L_u restricted to the domain Σ of ϕ.
Then we have a thickening := P_ which consists of tuples (ϕ,u,F) where (ϕ,u) ∈^(β,λ^0_μ)
and F is a basis of H^0(L_u^⊗ k).
For each (ϕ,u,F) ∈_0,
let ϕ'_F : Σ^#→^d-g be as in Equation (<ref>)
and let ϕ_F be the composition Σ↠Σ^#ϕ'_F⟶^d-g.
We let s : →^(2) send (ϕ,u,F) to (ϕ,ϕ_F).
Then for k, μ large enough, (,s) is global Kuranishi chart data associated to (G,^(2),λ_μ)
(Definition <ref>).
* Let Ǧ = U(ď) with ď to be determined later.
We let be the moduli space of genus g-1 curves
ϕ : Σ→^ď-g+1 with h+2
marked points p_1,⋯,p_h+1,p_h+2 respectively
so that
* the degree of ϕ is ď,
* H^1(ϕ^*(O(1))) = 0
* and ϕ is automorphism free.
We let → be the curve whose fiber over
ϕ as is Σ^# := Σ / {p_h+1=p_h+2}.
We let → be the curve whose fiber over ϕ is its domain Σ.
We let ^(2) be the moduli space of tuples (ϕ,ϕ') mapping to (^ď-g+1)^2
with the property that ϕ and ϕ' are in .
Choose a finite dimensional approximation scheme (V̌_μ,λ̌_μ) for
→.
Let L,k be as above and choose a consistent domain metric for →^(2) as in Definition <ref>.
For each (ϕ,u) in ^(β,λ̌_μ),
we let Ľ_u := ω_|_ϕ/|_ϕ(p_1,⋯,p_h+2) ⊗ u^*L.
We let ď be the degree of Ľ_u.
Let P̌→^(β,λ̌_μ)
be the principal Ǧ-bundle whose fiber over (ϕ,u) is a unitary basis F
of H^0(Ľ_u^⊗ k).
Then we have a thickening := P̌_ which consists of tuples (ϕ,u,F)
where (ϕ, u) ∈^(β,λ̌_μ)
and F is a basis of H^0(Ľ_u^⊗ k).
Let ϕ_F : Σ→^ď-g+1
be as in Equation (<ref>).
We let š : →^(2) send (ϕ,u,F) to (ϕ,ϕ_F).
Then for k, μ large enough, (,š) is global Kuranishi chart data associated to (Ǧ,^(2),λ̌_μ).
The proof of the following proposition is similar to the proof of Proposition <ref>
above and so we omit the proof.
The Kuranishi data (G,,s) and (Ǧ,,š)
are related by a sequence of
enlargements (Definition <ref>)
and general stabilizations (Definition <ref>) and their inverses.
Hence their associated Kuranishi charts are equivalent.
§.§ Notational conventions
Suppose X is a point, and fix a single pair (g,h). All stable maps are constant. Our general construction inputs a relatively very ample Hermitian line bundle ℒ on the universal curve over _g,h (for instance the 3rd power of the relative log canonical bundle) and outputs a global chart for _g,h. Although the map λ in the approximation scheme vanishes so the thickening is entirely composed of constant maps together with ℒ-framings, there is a non-trivial obstruction bundle, coming from H^0(u^* T^q), where q+1 = dim H^0(Σ, ℒ), together with the space of (q+1) × (q+1)-Hermitian matrices.
One can relate the output of the general construction to a simpler global chart. The space of unitary ℒ-framed curves itself gives a global quotient presentation
_g,h = / H
for the unitary group H = U(q+1). The space carries a natural H-transverse complex structure, and is naturally an element of the category . This `unitary' global chart, with trivial obstruction bundle, is related to the one obtained from our construction by the stabilisation procedure for the H-bundle H_g,h,q+g⊕.
There is a larger global chart for _g,h(X,β,J) in which we frame both L_u and also the bundle (st)^* where →_g,h is relatively ample over the universal curve over Deligne-Mumford space and is the classical stabilisation map. In the notation above, this leads to a chart which is G-complex and H-transversely complex, and for which there is an H-equivariant forgetful map → which plays the role of the stabilisation map →_g,h. Combined with the trick from Lemma <ref>, this gives us stabilisation maps inside the category (where carries a trivial G-action for the original group G arising from the framing choice F in the construction of ).
For simplicity, we will use the notation _g,h(^d-g,d) and _g,h(β) for the (smooth) spaces of regular stable curves in projective space and a thickening of _g,h(β) when the precise choices in the construction are not central to the immediate context. We will also usually suppress the additional framings of line bundles over domain curves, and the choice of global H-covering for Deligne-Mumford space, and simply write stabilisation as a map →→_g,h. We thus have maps
_g,h(β) →_g,h(^d-g, d) →_g,h
(where d depends on ω(β) but also various choices in the construction) whose composite we denote by .
As discussed after Corollary <ref>, a global chart constructed from framed stable maps is not naturally smooth but the thickening admits a fibrewise C^1_loc-map → to a quasi-projective variety. As explained in <cit.>, this is enough to show that the tangent microbundle T_μ admits a vector bundle lift; by further stabilising, one can then obtain smooth global charts, well-defined up to equivalence in the sense of Definition <ref>.
We will assume we have done this henceforth. After smoothing and stabilisation to be transversely stably complex, our charts naturally belong to the category considered previously.
By <cit.> we can also replace a map between the thickenings of such charts by a smooth
submersion.
We summarise the results of this section:
The moduli spaces _g,n(X,J,β) of stable maps admit a distinguished system of global charts = (G,,E,s) with thickenings := _g,n(X,J,β) for which
* the virtual bundles T - g and E admit G-equivariant stable almost complex structures;
* evaluation ev: _g,n(X,J,β) → X^n and stabilisation : _g,n(X,J,β) →_g,n both extend to G-equivariant maps on _g,n(X,J,β);
* evaluation and stabilisation maps, and the stable complex structures, are entwined by equivalences of distinguished global charts coming from varying auxiliary data or changing J, so the charts are well-defined up to isomorphism in the sense of Section <ref>.
§ GROMOV-WITTEN INVARIANTS AND TAUTOLOGICAL MODULES
§.§ Invariants as generalised homology classes
Take a counting theory on the category of global charts as in Section <ref>. We write L_ for the formal group associated to .
Fix a compact symplectic (X,ω), a class β∈ H_2(X;), a pair (g,n) and an almost complex structure J taming ω. We have a global chart _g,n(β) with zero-set Z such that Z/G is the compactified moduli space _g,n(β,J) of J-holomorphic stable maps. Evaluation and stabilisation define a map _g,n(β) → X^n ×_g,n so one can push-forward the virtual class to a class
I_g,n(β) ∈ E_*(X^n ×_g,n) ≅ E^*(X^n ×_g,n),
where we suppress the dimension shift in the second isomorphism.
The above class is a well-defined symplectic invariant of (X,ω, β,g,n), and does not depend on the choice of global chart or taming almost complex structure J.
The global chart for _g,n(β,J) is well-defined up to equivalence for fixed J by Proposition <ref>, and up to cobordism for varying J by the discussion of Section <ref>. The associated virtual class [_g,n(β)] ∈ E_*^lf(_g,n(β,J)) is preserved by equivalence of global charts from the formalism of counting theories, and its push-forward is invariant under cobordism by Lemma <ref>.
Again by cobordism invariance, I_g,n(β) only depends on the deformation equivalence class of ω, i.e. its connected component in the space of symplectic forms.
§.§ First properties
We discuss analogues of the effectivity, symmetry, point-mapping and fundamental axioms for Gromov-Witten invariants.
If ⟨ [ω],β⟩ < 0 then I_g,n(β) = 0.
The moduli space _g,n(β,J) = ∅ for any taming J, hence the zero-set Z of the section s is empty and so the equivariant Euler class [] vanishes, for any global chart.
The symmetric group _n acts on X^n ×_g,n by permuting factors in X^n respectively marked points in _g,n. This yields a representation
_n →_E^*(pt)E_*(X^n ×_g,n)
to the E^*(pt)-linear automorphisms.
The class I_g,n(β) is _n-invariant.
Given data μ defining a global chart for _g,n(β), we can pullback by σ∈_n to obtain new data σ^*μ. The virtual classes for the two sets of data are equivalent by Proposition <ref>.
Going beyond Proposition <ref>, one can incorporate finite-group equivariance into the construction of global charts and build _n-equivariant global charts for moduli spaces of stable maps, obtaining a refined invariant I_g,n(β)^_n∈ E^*__n(X^n ×_g,n). This is relevant to a program of Givental <cit.>.
Consider the maps
X^n+1×_g,n+1→ X^n ×_g,n+1← X^n ×_g,n
which respectively forget the last factor and are induced by a section of the forgetful map _g,n+1→_g,n determined by a choice of marked point.
The pushforward of I_g,n(β) to E_*(X^n ×_g,n+1) agrees with the cap product of the fundamental class of _g,n with I_g,n+1(β).
The construction from Section <ref> allows us to construct global charts _g,n+1(β) and _g,n(β) which are compatible with the forgetful map. Since the obstruction bundle (and the section) on _g,n(β) are obtained by pullback from _g,n+1(β) under the inclusion associated to a choice of section, the result follows from the pullback axiom.
One can formulate and prove a stronger forgetful axiom with respect to the composition of maps
X^n+1×_g,n+1→ X^n ×_g,n+1→ X^n ×_g,n .
In this context, one can show that the pushforward of I_g,n+1(β) to E_*(X^n ×_g,n+1) agrees with the pullback of I_g,n(β). The proof requires a generalisation of the axioms we gave for a counting theory, because it uses the fact that the map
_g,n+1(β) →_g,n(β)
has the property that the inverse image of the zero-locus of the section on _g,n(β) is the zero locus on _g,n+1(β). We obtain an induced wrong way map
E^lf_*(_g,n(β)) → E^lf_*+2(_g,n+1(β))
which respects virtual fundamental classes. This is a consequence of a version of the pullback axiom (3) for counting theories, generalised to the setting where the base in Diagram (<ref>) consists of global charts.
We now consider the case of maps in the trivial homology class.
As usual, let /H be a global chart presentation for _g,n. The orbibundle λ_Hodge := R^1p_*𝒪_⊠ TX →_g,n× X has a natural lift, which we still denote by λ_Hodge, to an H-equivariant bundle over .
Let β = 0 ∈ H_2(X;). The space _g,n(X,0,J) admits a global chart
(G,, V,s) = (H, X ×, λ_Hodge, 0).
For β = 0 ∈ H_2(X;), the moduli space _g,n(β,J) agrees with the product X ×_g,n, since all stable maps are constant (however this space typically has the wrong virtual dimension). The stabilisation map →_g,n is the natural projection, whilst the evaluation map → X^n
is the composition of second projection and inclusion of the small diagonal = × X pr_2⟶ X Δ⟶ X^n.
Construct the global chart by the usual procedure from Section <ref>. Over the thickening ^(0,λ_μ) there is a G-bundle V with fibre the cokernel of the linearised ∂-operator. The image of λ_μ surjects onto V and the moduli space of constants is cut out `cleanly' (in the fibrewise C^1_loc-sense) by the section s. Whenever the zero-section of a global chart is cut out cleanly in this sense, the chart is stabilisation equivalent to the zero-locus equipped with the identically vanishing section of the obstruction bundle. Identifying V with the Hodge bundle gives the result.
In the counting theories associated to Morava K-theories or complex K-theory, I_g,n(X,0) ∈ E_*(X^n ×_g,n) is given by the Euler class of λ_Hodge.
§.§ Tautological modules
Suppose (X,J) is a smooth quasi-projective algebraic variety. There is a collection of `tautological rings' associated to the moduli spaces _g,n(X,β,J) of stable maps, which generalise the tautological rings R^*(_g,n) ⊂ H^*(_g,n) of the moduli spaces of curves, see <cit.> and <cit.>. The definition of the product in the ring is somewhat non-trivial because of the singularities of the spaces _g,n(X,β,J), and proceeds via Fulton's `operational Chern-classes'. Since the operational Chern class exists in algebraic cobordism, one can define the tautological ring in any complex-oriented theory .
The tautological rings have natural analogues for any counting theory on . Suppose now (X,J) is a compact symplectic manifold with compatible almost complex structure J, and fix a curve class β. Fix a global chart (G,,β,E,s) for a moduli space _g,n(β,J), with thickening = _g,n(β) and virtual class [_g,n(β)].
We may consider the thickening _g,n(β) as a non-proper stably almost complex global chart, equipped with the trivial vector bundle. As such, we have a forgetful map →, which induces a pullback
E^*(_g,n(β) ) → E^*(_g,n(β)).
We also have a forgetful map _g,n(β) to _g,n(^d-g,d), and from there to _g,n(^d-g,d) and to _g,n.
The tautological ring R^*(_g,n(β)) of _g,n(β) is the subring generated by
* the pullback of the first Chern class of the i-th cotangent line bundle T_i^* ∈ E^*(_g,n),
* the classes of the divisors in _g,n(β) associated to splitting (β,g,n) as (β_1 + β_2, g_1 + g_2, n_1 + n_2 + 2), and choosing an integer 1 ≤ i ≤ n.
To clarify the construction of the second type of classes, observe that _g,n(^d-g,d) inherits (from _g,n(^d-g,d)) divisors associated to splitting (β,g,n) as (β_1 + β_2, g_1 + g_2, n_1 + n_2 + 2), and choosing an integer 1 ≤ i ≤ n. We formulate the divisors in _g,n(β) instead because the splitting by homology class is finer, in general, than the splitting by degree (area).
In practice, we shall need to consider `tautological modules' which lie in homology rather than cohomology. Because the moduli spaces of stable maps are compact, we have a natural isomorphism
E^*_G( | Z) = E^lf_*() ≅ E_*().
The tautological module R_*() ⊂ E_*() is the submodule
R^g,n_*(β;) := R^*(_g,n(β)) ∩ [_g,n(β)] ⊂ E^lf_*().
We will say that classes in the tautological ring, respectively module, which are not in the span of the unit, respectively virtual class, have `positive (co)dimension'.
An equivalence of distinguished global charts →' for _g,n(β) yields an isomorphism of tautological modules.
By definition the equivalence respects the virtual classes [] and ['], and from Corollary <ref> is compatible with evaluation and stabilisation maps.
In fact, these equivalences commute with the natural submersions from the thickenings to
_g,h,d. Since the cotangent line bundles T^*_i as well as the divisors in _g,n(β)
are pullbacks of the corresponding divisors in _g,h,d,
we get a corresponding isomorphism of tautological modules.
Working with pullbacks from the spaces _g,h,d is analogous, in our setting, to working with pullbacks of gluing diagrams of the Artin stacks of `stable modular curves with degree' discussed in <cit.>.
A corrected virtual class for a moduli space _g,n(β) presented by a global chart (G,,E,s) is a class
[_g,n(β)]^corr = [_g,n(β)] + α∈ E_*^lf()
where α belongs to the tautological module and has positive codimension.
§ QUANTUM GENERALISED COHOMOLOGY
Let denote a counting theory on the category of global charts.
§.§ Morphisms associated to correspondences and their compositions
The construction of operations in Gromov-Witten theory starts with a global chart equipped with evaluation maps
e_±⟶ X_1 × X_2.
This defines a map
Φ_[]: E^*(X_1) → E^*(X_2), α↦ D_X_2∘ (e_+)_* (e_-^*α∩ [])
where D_X_2: E_*(X_2) → E^dim(X_2)-*(X_2) is the duality isomorphism. In order to formulate properties of the resulting operations, we have to be able to geometrically understand the composition of such maps using the product global chart.
Given two global charts e_±: _e → X_1× X_2 and f_±: _f → X_2× X_3 as above, consider the product chart _e ×_f equipped with the evaluation maps
\[
\begin{tikzcd}
& _e ×_f \arrow[dl, "e_-"'] \arrow[d, "e_+ × f_-"] \arrow[dr, "f_+"] & \\
X_1 & X_2 × X_2 & X_3
\end{tikzcd}
\]
Using the middle map to pullback the dual of the fundamental class of the diagonal on the X_2, we obtain a class
((e_+ × f_-)^* Δ_e_+,f_-) ∩ ([_e]×[_f])
which agrees with the pushforward of the fundamental class of the fibre product.
The composition Φ_[_f]∘Φ_[_e] is equal to the map
α↦ D_X_3∘ (f_+)_* (e_-^*(α) ∩ ((e_+ × f_-)^* Δ_e_+,f_-) ∩ ([_e]×[_f])).
This is standard from Fourier-Mukai theory if the fundamental classes are ordinary and not virtual. The extension to global charts follows from the symmetric monoidal hypothesis on , the equality [_e ×_f] = [_e] ⊗ [_f], and the pullback axiom for the restriction to the diagonal.
§.§ A correlator
For a class β∈ H_2(X;) we have a thickening _0,3(β) with evaluation maps
\[
\xymatrix{
& _0,3(β) \ar[ld]^{ev_0,1} \ar[rd]_{ev_∞} & \\
X× X & & X
}
\]
Let
κ: E^*(X) ⊗_E^*(pt) E^*(X) → E^*(X× X)
be the natural cross-product map, and write D_X: E_*(X) ∼⟶ E^2n-*(X) for the duality isomorphism coming from the canonical deformation class of almost complex structures on X.
We obtain a map
μ_β: E^*(X) ⊗_E^*(pt) E^*(X) → E^*(X)
by sending
a ⊗ b ↦ D_X ((ev_∞)_* [_0,3(β)] ∩ ev_0,1^*(κ(a⊗ b)) ).
If β=0 then all stable maps are constant, the virtual class [_0,3(β)] ∈ E_*(X^3) is the class of the small diagonal, and μ_β=0(a⊗ b) = a ⌣ b recovers the usual cup product on E^*(X), compare to <cit.>.
Let Λ__* denote the one-variable Novikov ring with formal variable q over the coefficient ring _* = E^*(pt), i.e. the _*-module whose elements are series
Λ__* = {∑_i a_i q^t_i | a_i ∈_*, t_i ∈, lim_i t_i = + ∞}.
The quantum product on E^*(X;Λ__*) = E^*(X) ⊗__*Λ__* is an _*-linear product which `corrects' the (non-associative) naive product
(a,b) ↦∑_βμ_β(a⊗ b) q^ω(β).
We will explain how this correction arises in this section.
§.§ From the mesosphere
The construction of an associative quantum product on H^*(X;) proceeds via the following general template, which in particular applies to the classical cases of ordinary cohomology and K-theory. Consider the projection
: _0,4(β) →_0,4
which forgets the map and stabilises the domain. The fact that _0,4 is connected implies that the expression
[^-1(λ)] ∩ [_0,4(β)]
is independent of λ∈^1 \{0,1,∞}. Following the standard proof of associativity, one specialises to the (non-generic) case where λ∈{0,1,∞}⊂^1 = _0,4 represents one of the three reducible points. In this case, the fibre ^-1(λ) is a strict normal crossing divisor D_λ⊂_0,4(β) whose top irreducible components are indexed by splittings β = β_1+β_2. These divisors intersect so that the codimension m strata are closures of the loci where the corresponding (thickened) stable map has domain which contains a chain of m≥ 0 rational curves between the stable components which are contracted to the node in ^1∨^1 under the stabilisation map. The axioms of a counting theory express this as
L({Δ∩ ([_0,3(β_1)]× [_0,3(β_2)]) }_β_1 + β_2 = β),
where Δ is the pullback of the appropriate diagonal under evaluation.
If this expression agrees with the sum, over splittings, of the expressions for the fundamental classes of the fibre products (given by Equation (<ref>)), then associativity holds. But comparing the two expressions shows that this can only hold when L is the additive formal group (or if the higher order terms of the formal group law vanish by some coincidence). The higher order terms are supported on higher order strata, as illustrated in Figure <ref>.
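To see the failure in the simplest situation, suppose that only two splittings β = β_1+β_2 and β = β_1'+β_2' contribute, with associated divisor classes D_1 and D_2; the following sketch assumes the multiplicative formal group law x +_ y = x + y + u xy (the case of K-theory, up to normalisation of the parameter u). Then
[^-1(λ)] = L_(D_1, D_2) = D_1 + D_2 + u D_1 D_2,
so the class of the degenerate fibre differs from the sum of the classes of the two fibre products by the term u D_1 D_2, which is supported on the intersection D_1 ∩ D_2, i.e. on configurations whose domain contains a contracted rational curve between the two stable components, as in the figure referred to above.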
To obtain an associative product, we then seek `corrected' virtual classes
[_0,4(β)]^corr∈ E^*(_0,4(β)) and [_0,3(β_i)]^corr∈ E^*(_0,3(β_i))
with the following fundamental property:
the pullback of [_0,4(β)]^corr to the (β_1,β_2)-component of the domain (normalization) of a normal crossing boundary fibre factorises in the sense that it has the form ([_0,3(β_1)]^corr× [_0,3(β_2)]^corr) ∩ [Δ_X].
In order to also achieve commutativity as well, we shall also require that [_0,4(β)]^corr is invariant under permutations of the first 3 marked points.
We will write D_β_1,β_2 for the divisor in _0,4(β) associated to a splitting β = β_1 + β_2.
If = H is rational singular cohomology then the uncorrected class [_0,4(β)] satisfies both requirements. This is because although the map from the disjoint union of components of the normal crossing fibre is only componentwise-injective, the homology class defined by its image is given by the sum of the (virtual analogues of the) classes D_β_1,β_2 = [_0,3(β_1) ×_ev_0,3(β_2)].
If = KU is complex topological K-theory, then the class ∑ [D_β_1,β_2] satisfies the second requirement – its pullback to a component by definition factorises as the product of the virtual classes of the two factors of the fibre product – but it has no reason to be invariant under the involution exchanging marked points.
The class L_({[D_β_1,β_2]}_β_1+β_2=β) ∈ E^*(_0,4(β)) is σ_13-invariant.
This is immediate from Equation (<ref>).
§.§ Some universal algebra
The axioms of a counting theory imply that a (transverse, stably) almost complex codimension one submanifold D ⊂ M of a (transverse, stably) almost complex G-manifold has an associated class [D] = c̃(𝒪(D)) ∈ E^*(M) which depends only on the associated complex line bundle 𝒪(D). Here c̃ denotes the first Chern class in the theory .
The corrected fundamental classes [_0,3(β_1)]^corr alluded to in the previous section will be obtained by multiplying the naive fundamental class by a power series in the Chern classes associated to boundary divisors corresponding to sphere bubbling at the output marked point. In order to understand the properties which such a power series needs to satisfy, recall that the geometry underlying Equation (<ref>) is that the general fibre of the map : _0,4(β) →_0,4 = ^1 represents ^*c̃(𝒪_^1(1)), and
𝒪(^-1(λ)) = ⊗_β_1+β_2=β𝒪(D_β_1,β_2).
In particular, if one restricts the LHS to any given component D_β_1,β_2 one gets the trivial bundle, and hence
⊗_β_1+β_2=β𝒪(D_β_1,β_2) |_D_β_1,β_2≅𝒪.
Ordering the irreducible components / divisors as D_0, D_1,… one sees
𝒪(D_i)|_D_i = ( ⊗_j≠ i𝒪(D_j)^-1)|_D_i.
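Granting the projection formula [D_i]· y = (ι_D_i)_!(y|_D_i) (an assumption we invoke only for this heuristic), multiplication by the class [D_i] := c̃(𝒪(D_i)) depends only on restrictions to D_i, so the displayed triviality forces
[D_i]^2 = (ι_D_i)_!(c̃(𝒪(D_i))|_D_i) = (ι_D_i)_!(c̃((⊗_j≠ i𝒪(D_j)^-1)|_D_i)),
and rewriting the right hand side through the formal group law of produces quadratic relations among the divisor classes.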
Recalling in addition that c̃(⊗_j ℒ_j) = L_(c̃(ℒ_j)) for any collection of line bundles, we are led to introduce the following quotient ring. Take finitely many ordered variables D_0, D_1, D_2, …, D_N, and define the ring R to be the quotient
_*[[D_0,… , D_N]] / ⟨ D_i^2 = D_i · L_(-D_0, -D_1,…, -D_i-1, \widehat{-D_i}, -D_i+1,…, -D_N), 0 ≤ i ≤ N ⟩
Here L_(x_1,…,x_k) denotes the power series defining x_1 +_⋯ +_ x_k and the hat indicates that the corresponding variable is omitted. For notation, we will write
r_j := D_j^2 - D_j · L_(-D_0, -D_1,…, -D_j-1, \widehat{-D_j}, -D_j+1,…, -D_N)
so the ring R = _*[[D_0,… , D_N]] / ⟨ r_j, 0 ≤ j ≤ N⟩. The power series that we need to correct the virtual fundamental class arises as follows:
There is a power series f(D_0,D_1,…, D_N) = 1+⋯ for which L_(D_0, D_1,…,D_N) is given by
D_0 · f(0,…) + D_1 · f(D_0, 0,…) + D_2 · f(D_0, D_1,0,…) + ⋯ + D_N · f(D_0,…, D_N-1, 0)
in the quotient R.
We will first construct power series f_0=1, f_1,…, f_N for which
L_(D_0, D_1,…, D_N) = 1 · D_0 + f_1(D_0) · D_1 + f_2(D_0,D_1) · D_2 + ⋯ + f_N(D_0,…,D_N-1) · D_N.
Consider the following algorithm: given any monomial appearing in L_(D_0, D_1,…, D_N), if j is the largest index such that D_j occurs in that monomial, we have two options:
* D_j occurs with multiplicity one, so the monomial is D_j ·ϕ where ϕ is a monomial in D_0,…, D_j-1;
* D_j occurs with multiplicity >1 so the monomial is D_j^k≥ 2·ϕ with ϕ a monomial in D_0,…, D_j-1.
In the first case, we do nothing, and the term ϕ is put in f_j. In the second case, we use the relation for D_j^2 to decrease the power with which D_j occurs, more precisely we replace D_j^k ϕ by D_j · (L_(-D_0, -D_1,…, -D_j-1, -D_j, -D_j+1,…, -D_N))^k-1·ϕ. Note that in this expression, every term is divisible by exactly one power of D_j. Some of the terms only involve D_i with i<j, and they are put into f_j, whilst all others involve some monomial D_ℓ with ℓ > j. These are `pushed down' in the induction and treated later. We then iterate, inductively with increasing j. Thus, at the first iteration, we consider only terms D_0^i, and the coefficient of D_0 defines f_0 and all terms D_0^i with i>1 are pushed down into the next stage; then turn to monomials whose highest index is some power of D_1 (some of which come from the original expression, and some from the re-expression of terms D_0^i with i>1 in the previous stage). At the final stage, when re-expressing terms of the form D_N^j ϕ with j>1 we get terms D_N · L_(-D_0, -D_1,…, -D_N-1)^j-1·ϕ, and the whole expression L_(-D_0, -D_1,…, -D_N-1)^j-1·ϕ is then incorporated into f_N, so the algorithm indeed terminates.
This constructs a sequence of power series f_j satisfying (<ref>). Note the algorithm involves making no choice. Moreover, if
R_N := _*[[D_0,D_1,…, D_N]] / ⟨ D_i^2 = D_i · L_(-D_0,…, \widehat{-D_i}, …, -D_N), 0 ≤ i ≤ N⟩
is the ring introduced above on N+1 variables,
then there is a natural quotient map q: R_N → R_N-1 on setting D_N = 0, for which
q(L_(D_0,…,D_N)) = L_(D_0,…,D_N-1).
In the algorithm re-expressing L_(D_0,…,D_N) in R_N, if a monomial contains D_N^k for some k>1 then all the terms occurring when it is replaced are divisible by D_N (since the basic relation in (<ref>) has that property), so the algorithm is `compatible' with the homomorphism q. Put differently, if one views the algorithm as producing functions f_j = f_j^(N) that depend on the number of variables, then f_j^(N-1) = f_j^(N)|_{D_N = 0}. It follows that the f_j are specialisations of a single power series f.
Lemma <ref> constructs a distinguished power series in the sense that the algorithm makes no choice beyond the total ordering of the variables D_i. For the additive formal group,
r_j = D_j^2 - D_j((-D_0) + ⋯ + \widehat{(-D_j)} + ⋯ + (-D_N)) = D_j(D_0+⋯+D_N).
Therefore all elements in the ideal generated by the r_j belong to the principal ideal ⟨ D_0+⋯+D_N⟩; but clearly an element f(D_0,…,D_N-1)· D_N does not belong to this ideal. This shows that the power series f is unique in that case. We conjecture that the solution is unique over the universal formal group.
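For the multiplicative formal group law the algorithm is equally explicit. As an illustration, take the normalisation x +_ y = x + y + u xy with a formal parameter u (which covers complex K-theory up to conventions). No monomial of
L_(D_0,…,D_N) = (1/u)(∏_i=0^N(1+u D_i)-1) = ∑_j=0^N D_j ·∏_i<j(1+u D_i)
contains a repeated variable, so the relations of R are never invoked and the algorithm returns f(D_0,D_1,…) = ∏_i(1+u D_i). Setting u = -1 gives f = ∏_i(1-D_i) = ∑_m≥ 0(-1)^m e_m(D_0,D_1,…), which is the pattern behind the alternating sum ∑_m≥ 0 (-1)^m[D^m] in the K-theoretic small quantum product recalled below.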
Suppose that one changes the total order on the variables {D_j} by swapping D_i =: D_i+1' and D_i+1 =: D_i'. Let f and f' be the power series produced by the algorithm constructed in Lemma <ref>. Then f = f' in the quotient ring R / (D_i · D_i+1 = 0).
Let D” := D_i + D_i+1 and consider the ordered variables
D_0 < D_1 < ⋯ < D_i-1 < D” < D_i+1 < ⋯ < D_N.
Then both f and f' agree with the output of the algorithm applied to this set of variables, because in the quotient ring R / (D_i · D_i+1 = 0) the relation D_i · D_i+1 = 0 ensures that L_(D_i, D_i+1) = D_i + D_i+1.
§.§ Associativity
Consider _0,3(β). For each δ∈ H_2(X;), there is an associated divisor
D̂_δ⊂_0,3(β)
of curves which (in the top stratum) have a δ-bubble at the output marked point and main component in class β-δ. We enumerate the D̂_δ so that D̂_i < D̂_j if ω(δ_i) > ω(δ_j), again extended arbitrarily to a total order. We then define the corrected virtual class
[_0,3(β)]^corr := [_0,3(β)]· f(D̂_0,D̂_1,…)
where f is the power series produced in Lemma <ref>. Similarly, we define a correction to the virtual class of [_0,4(β)] by multiplying by f(E_0,E_1,…) where the E_i enumerate divisors corresponding to bubbling at the output marked point. To prove that these expressions are well-defined, we need:
The power series f appearing in Lemma <ref> does not depend on the choices made in resolving the partial order to a total order, so depends only on (X,ω,J) and β.
Note that if ω(δ) = ω(δ') then Gromov compactness shows the corresponding divisors D̂_δ and D̂_δ' of broken curves in class β are disjoint. The result then follows from Lemma <ref>.
If one adds another variable D_δ” (anywhere in the partial order) and the moduli spaces of J-curves in class δ” is empty, then the resulting corrected virtual class does not change. This follows from the fact that the f_j produced by the algorithm of Lemma <ref> specialise correctly under setting variables to zero, i.e. are induced by a single power series f.
The restriction of the `naive' virtual class [_0,4(β)] to the boundary stratum associated to a splitting β = β_1 + β_2 agrees with
([_0,3(β_1)]^corr× [_0,3(β_2)]) ∩ [Δ_X].
The argument is summarised in the schematic of Figure <ref>, where black crosses are the output of an operation with grey dots as inputs, and blue dots denote locations of contracted rational tails. The left picture indicates that the divisors of interest in _0,4(β), when understanding the restriction of its naive virtual class to a normal crossing fibre of the stabilisation map, are those related to bubbling at the central node; the central picture uses these to define a corrected product, with the correction `concentrated at the output'; the right image is the composition of two such operations (this counts configurations which are more involved than those on the left, and amounts to correcting the virtual class of _0,4(β) by the divisors living at the output). On both left and right, there is no correction at the inputs exchanged by σ_13, which ensures the first of the two required properties.
Elaborating, fix attention on a particular component D_β_1,β_2⊂_0,4(β) corresponding to the given splitting; say this corresponds to some D_i in our enumeration. Each of the other divisors D_j with j<i meets D_i (if at all) in a divisor whose top open stratum corresponds to broken curves with a single contracted rational curve at the glued marked point, the three components having classes β_1-δ_j, δ_j, β_2 for some δ_j of area > ω(δ_i). Thus, D_j pulls back to the fibre product
_0,3(β_1) ×_Δ_0,3(β_2)
exactly as (D̂_j × [_0,3(β_2)]) ∩Δ. Because the ordering is by area, the restriction of D̂_j to D̂_i with i<j is simply trivial (the intersection is empty), which is the geometric analogue of the fact that the series f_j constructed in Lemma <ref> are specialisations of a single function f. This yields the required property for (<ref>).
The terms
(a∗ b)_β := D((ev_∞)_* (ev_0,1^* (a⊗ b) · [_0,3(β)]^corr))
give rise to an associative product on E^*(X;Λ__*), which is the Λ__*-linear extension of (a,b) ↦∑_β (a∗ b)_β q^ω(β).
The product a∗ b is graded commutative, because the corrected virtual class is by hypothesis invariant under permuting the first and second marked points. Lemma <ref> implies that the compositions (a ∗ b)∗ c and a ∗ (b ∗ c) can both be identified with a count on _0,4(β) of curves with a fixed cross-ratio, after its virtual class has been corrected by the divisors corresponding to bubbling at the output.
The quantum product is independent of the choice of J.
By breaking a path (ω_t, J_t) of compatible pairs into short segments constant in one or the other variable, it is sufficient to show invariance under small deformation of either J or ω. For J we already have cobordism invariance of the virtual class for the moduli space of curves in any fixed class β, so it suffices to prove well-definedness of the corrected virtual class. Take a path J_t all taming ω. We can consider all classes β which contain a J_t-holomorphic representative for any t, and then partially / totally order all possible classes D_β_1,β_2 of broken curve configurations in class β by the value ω(β_1) which is independent of t. The algorithm of Lemma <ref> and Definition <ref> produces a corrected virtual class which will agree, on specialisation to the end-points t_0, t_1 of the interval of complex structures, with those associated to J_t_0 and J_t_1, by Remark <ref>.
The quantum product depends only on ω up to deformation equivalence.
As we vary ω from ω_0 to ω_1 with J fixed, since all forms are assumed to tame the same J, the associated filtrations at the end-points are both specialisations of a single more refined filtration, to which one can apply the corrected virtual class algorithm. More concretely, if there are classes β_0 and β_1 for which the ordering by area changes, ω_0(β_0) < ω_0(β_1) but ω_1(β_0) > ω_1(β_1), then since we work with fixed J we immediately see that the divisors of broken J-curves D_β_0 and D_β_1 in the space of curves in class β are disjoint. This reduces us to the case covered by Lemma <ref>.
Take = KU. The symplectic small quantum product ⋆ on K(X) is defined by
e ⋆ f = ∑_d (e⋆ f)_d q^ω(d)
where
(e⋆ f)_d = _3* (_1^*(e) ⊗_2^*(f) ⊗∑_m≥ 0 (-1)^m [D^m])
where D^m is the codimension m locus in _0,3(d) of curves with a length m rational tail at the output marked point. The group π_0(X) acts on QK^*(X) by ring automorphisms; this action was not previously known even in the case of X being smooth algebraic.
One could define a second (associative) quantum product on E^*(X;Λ__*), which corrects the classes [_0,3(β)]^naive by contributions from contracted rational tails at the two inputs of the product operation rather than from those at the single output. Breaking symmetry to choose between input and output corrections seems to be a choice one is free to make universally.
§ SPLITTING AND OPERATIONS
The axioms of a counting theory in Section <ref> are well-adapted to deal with strict normal crossing divisors, but less well adapted to general normal crossing divisors (because of the difference in explicitness between Proposition <ref> and Proposition <ref>). For this reason, we shall give two formulations of the splitting axiom: an implicit general one in the form of Proposition <ref> below, and an explicit one in the form of Theorem <ref>, asserting the existence of an algebra structure over the truncation of the Deligne-Mumford operad given by moduli spaces of curves with at least one input and one output. In the special cases of ordinary cohomology and K-theory, no restriction is required, and one can give an explicit general formula.
§.§ Divisors of reducible stable curves
We fix a genus g and number of marked points n, and a non-trivial decomposition
(g,n) = (g_1,n_1) + (g_2,n_2).
For each partition σ of {1,…,n} into complementary subsets of sizes n_1 and n_2
there is a `gluing' map of compact complex orbifolds
ι_σ := ι: _g_1,n_1+1×_g_2,1+n_2→_g,n
with image a finite quotient of a normal crossing divisor D^σ_g_1,g_2⊂_g,n. For notational simplicity we will often suppress the partition σ and/or implicitly assume that σ = {1,…,n_1}⊔{n_1+1,…,n_1+n_2=n}.
The map ι need not be generically injective: when n_1=0=n_2 and g_1=g_2 the map factors through the second symmetric product ^2(_g_1,1).
The map ι: _g,1×_g,2→_2g,1 is generically injective. However, there is a divisor _g,1≅ D_red⊂_g,2 of curves with a ^1-bubble carrying both marked points, and the map ι restricted to _g,1× D_red≅_g,1×_g,1 factors through the second symmetric product of _g,1. The image of ι is normal crossing but not strict normal crossing.
§.§ Reducible framed and thickened curves
Let :=_g,n(^r,d) denote any Zariski open subset of the moduli stack of degree d stable maps to projective space ^r that is contained in the locus of maps u with H^1(u^*O(1)) = 0. Then is smooth and the divisor of singular stable curves meets in a normal crossing divisor.
The Euler sequence shows that H^1(u^*T^r) = 0, which ensures that is smooth. The stronger cohomological condition means the forgetful morphism to the stack M_g,n of all prestable curves is a submersion. The stabilisation map : M_g,n→_g,n is dominant, and locally toroidal (`log smooth') because it is an iterated curve fibration. We claim it is also flat. Indeed, since the preimage of the boundary under is contained in the boundary, there is an induced map on dual complexes. This sends any given ray to a ray or collapses it; flatness then follows from <cit.>. Since the map is flat and log smooth it pulls back the normal crossing boundary to something with normal crossings. The result then follows from the fact that the boundary of Deligne-Mumford space is normal crossing, and that is a smooth scheme.
Define _g_1,g_2^ □ by the pullback diagram
\[
\xymatrix{
_g_1,g_2^ □ \ar[r]_-{ι_{}} \ar[d] & _g,n(^d-g,d) \ar[d] \\
_g_1,n_1+1×_g_2,1+n_2 \ar[r]_-{ι_σ} & _g,n
}
\]
Then ι_ is a closed immersion; ι_ (_g_1,g_2^ □) ⊂_g,n(^d-g,d) is a divisor with normal crossings; and _g_1,g_2^ □ is its normalisation and is smooth.
Note that the pullback of a normal crossings divisor is normal crossings.
The previous diagram induces another pullback diagram
\[
\xymatrix{
_g_1,g_2^ □ \ar[r] \ar[d] & _g,n(β) \ar[d] \\
_g_1,g_2^ □ \ar[r]_-{ι_{}} & _g,n(^d-g,d)
}
\]
(now in the category of global charts, so we also pull back the obstruction space and section in the top row). Since the map _g,n→_g,n is a submersion, the image of the top arrow again has `normal crossing singularities' in an obvious sense (though this now takes place in the category of G-transverse complex manifolds), and _g_1,g_2^ □ plays the role of the normalisation of that image, and is a union of smooth manifolds.
For each r ≥ 0,
let _g_1,g_2^ □,r⊂_g_1,g_2^ □ be the subspace consisting of those curves in which the stabilization map
to _g,n contracts a chain of r
irreducible components to the distinguished node.
Let _g_1,g_2^ □,r⊂_g_1,g_2^ □ be the preimage of _g_1,g_2^ □,r under the natural map.
For r≥ 0, the maps ι_|__g_1,g_2^ □,r and ι_|__g_1,g_2^ □,r are (r+1):1
covering maps over their image in _g,n( P^d,d) and _g,n( P^d,d) respectively.
Consider an element of _g,n(β) which lies over the divisor D^σ_g_1,g_2. The domain Σ contains a chain of r ≥ 0 rational curves which are contracted to the distinguished node by the stabilisation map (these rational components, being unstable, contain no other marked point). If a stable curve in D^σ_g_1,g_2⊂_g,n is described as the image of another curve under the gluing map ι (<ref>),
then it has a distinguished node corresponding to the gluing region.
The key point is that this chain of rational curves contains r+1 nodes that separate Σ into a genus g_1 and g_2 nodal curve respectively.
Combining the previous discussions, we have a diagram
\[
\xymatrix{
_g_1,g_2^ □ \ar[r]_-{ψ_{}} \ar[d]_{π_0} & _g,n(β) \ar[d]_{π_1} \\
_g_1,g_2^ □× X^n \ar[r]_-{ψ_{}} \ar[d]_{p_0} & _g,n(^d-g,d) × X^n \ar[d]_{p_1} \\
_g_1,n_1+1×_g_2,1+n_2× X^n \ar[r]_-{ψ_σ} & _g,n× X^n
}
\]
where the first line indicates that we have a pullback in the category of global charts, defined by a G-map on thickenings which is compatible with obstruction bundles. We can write _g_1,g_2^ □ as a finite disjoint union
_g_1,g_2^ □ = ⋃_β_1+β_2 = β _g_1,g_2^ □ (β_1,β_2)
according to the decomposition of the homology classes, where finiteness follows from Gromov compactness.
§.§ Calculus of split curves
We next identify the virtual class for a smooth connected component _g_1,g_2^ □(β_1,β_2) of _g_1,g_2^ □ in terms of products of classes of simpler moduli spaces.
Let ψ_,σ denote the natural map
ψ_,σ: _g_1,n_1+1(β_1) ×_g_2,1+n_2(β_2) →_g,n(β),
whose domain is the thickening for a global chart for the product moduli space
_g_1,n_1+1(β_1) ×_g_2,1+n_2(β_2).
Passing to E_*-homology, we note that the monoidal structure induces a map
E_*(_g_1,n_1+1(β_1)) ⊗_E_*(pt) E_*(_g_2,1+n_2(β_2)) ⟶ E_*(_g_1,n_1+1(β_1) ×_g_2,1+n_2(β_2)),
and that the diagonal Δ⊂ X× X defines a class in E^*(X× X). Let Δ_σ∈ E^*(_g_1,n_1+1(β_1) ×_g_2,1+n_2(β_2)) denote the pullback of Δ_X ⊂ X× X under the composition of the evaluation map to X^n_1+2+n_2 with the projection X^n_1+2+n_2→ X^2 which picks out the two factors indexed by the glued marked points.
For the relevant component of (<ref>) we have
[_g_1,g_2^ □] = (ψ_,σ)_*(([_g_1,n_1+1(β_1)] ⊗ [_g_2,1+n_2(β_2)]) ∩Δ_σ) ∈ E_*(_g,n(β)).
By the equivalence of the two charts constructed in Section <ref>
we have that ^□_g_1,g_2 is isomorphic to a global Kuranishi chart _
which is the pullback of the diagonal Δ_X ⊂ X × X
under the evaluation map on the product chart _g_1,n_1+1×_g_2,1+n_2
corresponding to a choice of marked points on each curve.
§.§ Implicit splitting axiom
The Gromov-Witten correlator for the class β is by definition
I_g,n(β) := (p_1∘π_1)_* [_g,n(β)] ∈ E_*(_g,n× X^n).
Write D_σ∈ E^*(_g,n× X^n) for the class dual to the divisor which is the image of ψ_σ. If the map ψ_ defines a strict normal crossing divisor, or the counting theory is either H or KU,
we can apply Axioms <ref> and <ref> of a counting theory respectively to the upper and lower squares in this diagram. In the upper part, we obtain
(ψ_)_*[_g_1,g_2^□(β_1,β_2)] = [_g,n(β)] ∩π_1^* (ψ_)_*(∑_β_1 + β_2 = β [^□_g_1,g_2(β_1,β_2)]),
while the lower part yields
p_1^*(D_σ) = L ({(ψ_)_*[^□_g_1,g_2(β_1,β_2)]}_β_1 + β_2 = β).
Here the formal group law L(∙) is applied to the collection of (first Chern classes of) irreducible divisors indicated, indexed by the splittings β = β_1 + β_2 compatible with the splitting of genus and marked points. In the case where ψ_ defines a non-strict normal crossing divisor, the analogue of (<ref>) is (even more) implicit, since it relies on Proposition <ref> as indicated in Remark <ref>. We obtain a preliminary version of the splitting axiom: as above fix a decomposition g=g_1+g_2 and associated partition σ of the n=n_1+n_2 marked points defining a divisor D_σ⊂_g,n.
The cap-product [_g,n(β)] ∩ (× ev)^* (D_σ× X^n) is determined, via the formal group L and the cone complex to the boundary divisor, by the tautological modules associated to the spaces with virtual classes [_g_1,n_1+1(β_1)] and [_g_2,1+n_2(β_2)] indexed by decompositions β = β_1+β_2 ∈ H_2(X;).
This follows on combining Corollary <ref> with Equations (<ref>) and (<ref>).
Recall that, for any fixed J, only finitely many decompositions are realised by stable maps, so the formal group law is only applied to finite collections of irreducible divisors. The codimension r strata in the nc-divisor ι_(_g_1,g_2^ □) exactly correspond to stable maps in which a length r chain of rational curves was contracted to the distinguished node.
Expanding out the formal group, (<ref>) gives a formula for [_g,n(β)] ∩π^*D_σ as a weighted sum over virtual classes indexed by
* tuples (β_1,γ, β_2) with β_1 + ∑γ_j + β_2 = β, where the γ_i index rational curves contracted at the distinguished node, and in particular involves all invariants of the form [_0,2(X,γ_i)] (capped against suitable multidiagonals); and
* elements of the Picard group of the moduli spaces of stable maps in these classes, coming from the restriction to some stratum D_J of the various line bundles 𝒪(D_i) and their powers (which appear from the formal group expansion).
These are by definition cap-products against appropriate multi-diagonals of elements of the tautological modules for β_1 and β_2.
The normal (orbifold line) bundle to the image of
ι_σ: _g_1,n_1+1×_g_2,1+n_2→_g,n_1+n_2
is given by the tensor product of the line bundles T_±^* defined by the cotangent line classes at the two marked points p_± glued to give the distinguished node at a curve in the target, so
⊗_β_1+β_2=β𝒪(_g_1,g_2^□(β_1,β_2)) ≅ T_+^* ⊗ T_-^*.
Therefore the tautological integrals arising from expanding the formal group will include terms which replace the virtual class of a stratum with its cap-product with powers of cotangent line classes.
For the additive or multiplicative formal group, the expansion of L(∙) contains only one term on any given stratum of (× ev)^-1(D_σ), and one can push forward (<ref>) to E_*(_g,n× X^n) to obtain the `usual' splitting theorems. In general there is no natural way to describe the terms in the sum after pushforward, since they involve tautological integrals which are not cap products of fundamental classes of strata with classes pulled back from the base.
Remark <ref> implies that Proposition <ref> does not yield an identity in E_*(X^n ×_g,n), and does not give a recursive way of computing the genus g Gromov-Witten class for β in terms of lower degree and genus invariants except when = H^*(-,) or = KU^*(-). Rather, it is recursive only if one knows the entire tautological modules, in the sense of Section <ref>, for the moduli spaces at lower genus and degree.
§.§ Operadic corrections
Consider moduli spaces of curves with 1+1 marked points, one being distinguished as an input and one as an output. The virtual class [_g,1+1(β)] defines an endomorphism
Φ_g,β: E^*(X) → E^*(X), λ↦ D ( (ev_out)_*(ev_in^* λ∩ [_g,1+1(β)]) )
by pull-push (D denotes duality E_*(X) → E^*(X) as usual). Just as the naive virtual classes for spaces [_0,3(β)] do not lead to an associative quantum product, the naive classes at higher genus do not give E^*(X) the structure of an algebra over the Deligne-Mumford operad, the simplest failure of which is that gluing of moduli spaces does not match compositions of the maps Φ_g,β. Since we are now considering operations, divisors of split curves have strict normal crossings (since components carry ordered input and output marked points), which makes the geometry underlying Proposition <ref> sufficiently explicit to correct the higher genus virtual classes to restore the operadic structure.
Recall that if D_g_1,g_2⊂_g,n is the divisor associated to gluing
_g_1,n_1+1×_g_2,1+n_2→_g,n
then the normal bundle 𝒪(D_g_1,g_2) = T_in^* ⊗ T_out^* is the tensor product of the cotangent line bundles at the distinguished marked points. Following the same recipe as in Lemma <ref>, we have:
Let R' denote the quotient ring
R' := _* S,T, D_0,D_1,… / ⟨ D_i^2 = D_i · L_(S,T, -D_0, -D_1,…, -D_i-1, -D_i, -D_i+1,…) ⟩
Then there is a power series F(D_0,…), with coefficients in _*[[S,T]], so that
L_(D_0,…,D_N) = D_0 · F(0) + D_1 · F(D_0, 0,…) + ⋯ ∈ R'
and F(0) = 1.
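As an illustrative sanity check (not part of the argument above, and subject to the same normalisation caveat on β), the two familiar formal group laws give the power series F explicitly. For the additive law one has L(D_0, …, D_N) = ∑_i D_i, so F ≡ 1. For the multiplicative law the telescoping identity
( 1 - ∏_{i=0}^{N} (1 - β D_i) ) / β = D_0 + (1 - β D_0) D_1 + (1 - β D_0)(1 - β D_1) D_2 + ⋯
gives F(D_0, …, D_{i-1}, 0, …) = ∏_{j < i} (1 - β D_j), and in particular F(0) = 1, as required by the Lemma.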
A graded virtual class for _g,n+1(β) with respect to a global chart (G,,E,s) is a virtual class in E^*(_g,n+1(β)) w_1, …, w_n, z with the property that the coefficient of each monomial belongs to the tautological module of the thickening.
We think of the formal variables as attached to the input and output marked points, in an obvious way. Given an element
λ = ∑_α∈^nλ_α w^α∈ E^*(_g,n+1× X ×…× X) z_1, …, z_n
and a graded virtual class ∑_α [M_α i] w^α z^i for _g,n+1(β), we obtain an element of E^*(X) z as a sum
∑_i ≥ 0 Dev_* (ev^*λ∩ [M_α i]) z^i.
This uses a residue-type pairing of the input cohomology class formal variable w_i against the input variable z_i (i.e. view z_i = w_i^-1, and extract constant terms of the product). Assuming convergence, extending linearly yields a Gromov-Witten operation
E^*(_g,n+1) ⊗( E^*(X) z )^⊗ n→ E^*(X) z .
In the examples below, the linear extension will be well-defined (all coefficients finite) because of finiteness of contributions coming from Gromov compactness.
Returning to Lemma <ref>, view T = T^*_out be the cotangent line at the input to the second component of genus g_2 whilst S = T^*_in is that at the output of the first component of genus g_1, and the D_j are indexing divisors of broken curves according to decompositions β = β_0 + β_1 ordered intrinsically by area of β_0. Now expand
F = ∑_i≥ 0 F_i(T^*_out,D_0,…) (T^*_in)^i
in powers of S = T^*_in. (Breaking symmetry between S and T is analogous to the symmetry breaking by choosing outputs rather than inputs in Remark <ref>.)
The corrected virtual class of a moduli space of higher genus maps is the series
[_g,n+1(β)]^corr := ∑_i ≥ 0 [_g,1+1(β)] · F_i(T_out^*, D̂_0, D̂_1,…) · z^i ·∏_j = 1^n 1/(1-T^*_j,in w_j) ,
considered as an element of E_*() w_1, …, w_n, z .
The corrected graded virtual classes given by Equation (<ref>) equip E^*(X) z with the structure of an algebra over the E-homology of the Deligne-Mumford operad _g,n+1 (with 0 < n).
Using the module structure of cohomology over homology, it suffices to prove the result for the fundamental class of the moduli spaces. The naive virtual class [_g,n+1(β)]
has the property that if ι: D_(g_1,n_1)∘_k (g_2,n_2)(β_0,β_1) →_g,n+1(β) is the inclusion of one irreducible stratum in the boundary divisor over the divisor D_(g_1,n_1) ∘_k (g_2,n_2)⊂_g,n+1 of split curves for which gluing takes place at the kth marked point, then
[_g,n +1(β)] ∩ D_(g_1,n_1)∘_k (g_2,n_2)(β_0,β_1) = [_g_1,n_1+1(β_0)]^out-corr∙ [_g_2,n_2+1(β_1)]^in-corr
factorises into corrections of the corresponding virtual classes which involve data intrinsically defined on the smaller moduli spaces and which are `concentrated' at the output respectively input, and where ∙ is the previously introduced residue pairing. More precisely,
[_g,n+1] ∩^*D_(g_1,n_1)∘_k (g_2,n_2) = [_g,n+1] ∩ L_({D_(g_1,n_1)∘_k (g_2,n_2)(β_0,β_1)})
by Proposition <ref> and Example <ref>,
and so Lemma <ref> writes this formal group expression as
∑ F_i(T^*_out,D_0, D_1,…) (T^*_in)^i
where the D_i label the divisors D_(g_1,n_1)∘_k (g_2,n_2)(β_0,β_1) ordered by ω(β_0). We can therefore set
[_g_1,n_1+1(β_0)]^out-corr = ∑_i≥ 0 [_g_1,n+1(β_0)] F_i(T^*_out, D̂_0, D̂_1,…) · z^i
and
[_g_2,1+1(β_1)]^in_k-corr = ∑_j ≥ 0 [_g_2,1+1(β_1)] (T^*_in)^j w^j_k = [_g_2,1+1(β_1)]·1/1-T^*_in w_k
to ensure that Equation (<ref>) is satisfied, where the D̂_i are the divisors of broken curves with a bubble at the output marked point, ordered intrinsically by ω-area of the bubble. Universality of the construction shows that (<ref>) is sufficient to imply compatibility with composition when defining the corrected virtual class as in (<ref>) by incorporating all input and output corrections at once.
§.§ Genus reduction
The image of the gluing map _g-1,n+2→_g,n defines a divisor which self-intersects rather than being strict normal crossings. There is again a diagram of pullback squares
_g-1,n+2^□ --ι_--> _g,n(β)
      |                          |
      v                          v
_g-1,n+2^□ × X^n --ι_--> _g,n(^d-g,d) × X^n
      |                          |
      v                          v
_g-1,n+2 × X^n --ι_σ--> _g,n × X^n
and now the maps ι_ and ι_ have smooth irreducible domains but images in which r sheets self-intersect along the locus of maps where a chain of r-1 rational curves is contracted under stabilisation at the distinguished node. The `genus reduction axiom' for the virtual fundamental class by necessity appeals to Equation (<ref>), which is not explicit, cf. Remark <ref>. This again expresses
[_g,n(β)] ∩^*D_σ
for D_σ∈ E^*(X^n ×_g,n) dual to the image of ι_σ above in terms of tautological classes capped against [_g-1,n+2(β')]∩Δ_σ, where Δ_σ is the diagonal at the two marked points, and the β' range over classes with β' + γ = β for a rational chain γ at the marked point. The coefficients of the simpler invariants will now depend on the formal group law L(∙) of the theory and on the cone complex of D_σ. The cases of H and KU are simpler, stemming from Examples <ref> and <ref>, and the `usual' genus reduction axiom (as in the algebraic case) holds for these theories.
§ GLUING AND COMPARISONS OF CHARTS
§.§ Parameterized Gluing Theorem
Let (X,J) be a smooth almost complex manifold and let β∈ H_2(X).
Let g, h ∈ and let _g,h be a smooth étale map (in the orbifold sense)
from a smooth manifold and define
a family of curves via the pullback square:
[pullback diagram: the family of curves maps to _g,h, its base maps to _g,h, and the vertical arrows are the projections]
We let ^o ⊂ be the region where the fiber is smooth (i.e. the complement of the nodes)
and let Y_ := Ω^0,1_^o/⊗_ TX
be the bundle over ^o × X whose fiber over (c,x)
is the space of linear anti-holomorphic maps from the vertical tangent space of ^o to (TX,J).
Let U be a smooth manifold, W a real vector space and let
λ : W → C^∞_c(U ×^o × X, P^*Y_)
be a linear map where P : U ×^o × X ^o × X is the natural projection map.
For each ν∈, let |_ν be the fiber of over ν.
Let
^(X) := { (s, u, w) : s = (a,ν) ∈ U ×, u : |_ν → X, w ∈ W, such that u is smooth, u_*(|_ν) = β, ∂(u(·)) + λ(w)(a,·,u(·)) = 0, and u is regular }.
Let Π : ^(X) → U ×
be the natural projection map sending (s,u,w) to s. The following result can be proved using the comparison with slice charts discussed in Section <ref> below, but it can also be directly extracted from Pardon's account of gluing <cit.>.
For each (s,u,w) ∈^(X), s = (a,ν), and any small neighborhoods
K ⊂Π^-1(Π(s,u,w)) of (s,u,w) and
Q ⊂ U × of its image under Π, there is a continuous map g fitting into the diagram:
Q × K --g--> ^(X)
   | proj              | Π
   v                   v
   Q --j--> U ×
where j is the natural inclusion map and g is
an open embedding.
The restriction of g to each fiber of the projection map Q × K Q is smooth and it is equivariant under the natural action on K given by the stabilizer group of the image of ν in _g,h if K is equivariant under this group.
The choices involved in constructing such a product neighborhood g ensure that if g' is another such neighborhood, then g and g' are C^1_loc-compatible as in <cit.>.
The fibre Π^-1(Π(s,u,w)) is a moduli space of curves with a fixed domain, so is a smooth manifold by the regularity assumption on the maps u in ^reg(X) and classical elliptic regularity; taking K to be a ball, one sees in particular that ^(X) is a topological manifold.
We wish to generalize the above theorem to allow for families of curves
that do not necessarily submerse onto _g,h.
Let P : ' →_g,h be a smooth map from a smooth manifold and define
a family of curves ' ' via the pullback square:
[pullback diagram: ' maps to _g,h, ' maps to _g,h, and the vertical arrows are the projections]
Let '^o ⊂ ' be the complement of all nodes
and let Y_' := Ω^0,1_'^o/'⊗_ TX.
Let
λ' : W' → C^∞_c('^o × X, Y_') be a linear map.
For each ν∈', let '|_ν be the fiber of ' over ν.
Let
'^(X) := { (ν, u, w) : ν∈', u : '|_ν → X, w ∈ W, such that u is smooth, u_*('|_ν) = β, ∂(u(·)) + λ'(w)(·,u(·)) = 0, and u is regular }.
Let Π' : '^(X) → '
be the natural projection map sending (ν,u,w) to ν.
For each (ν,u,w) ∈'^(X), and any small neighborhoods
K ⊂Π^-1(Π(ν,u,w)) of (ν,u,w) and
Q ⊂' there is a continuous map g fitting into the diagram:
Q × K --g--> '^(X)
   | proj              | Π
   v                   v
   Q --j--> '
where j is the natural inclusion map and g is
an open embedding.
The restriction of g to each fiber of the projection map Q × K Q is smooth and it is equivariant under the natural action on K given by the stabilizer group of the image of ν in _g,h if K is equivariant under this group.
The choices involved in constructing such a product neighborhood g ensure that if g' is another such neighborhood, then g and g' are C^1_loc-compatible as in <cit.>.
Choose a complete metric on _g,h and let exp : T_g,h→_g,h
be the exponential map.
Define
T := exp∘ DP : T' →_g,h.
Define ” := T' and
let ”→” be the pullback of _g,h via T.
Then since a submersion is locally a projection map,
we have that Theorem <ref>
applies to ”→”.
Since ' →' is the restriction of ”→”
to the zero section of ” = T',
the corollary follows since
a C^1_loc structure restricted to a smooth submanifold of the base also admits a C^1_loc structure.
Let ' →' be a flat algebraic family of curves over a smooth base.
Then the conclusion of Corollary <ref> holds for this family of curves.
Since the statement of Corollary <ref> is local in nature,
it is sufficient to find for each ν∈', a neighborhood U of ν
so that '|_U → U is the pullback of _g,h→_g,h
via a smooth map f : U →_g,h.
For the same reason, we can assume that the fibers of '|_U → U are connected.
Such a map is constructed as follows:
Choose local sections s_1,⋯,s_h disjoint from the nodes on a neighborhood U of ν
so that for each ν' ∈ U, we have that Σ_ν' := ('|_ν',s_1(ν'),⋯,s_h(ν'))
is an automorphism free marked curve.
Then f is the map sending ν' to Σ_ν'∈_g,h.
§.§ Local Slice Charts
We wish to compare our construction of a global Kuranishi chart
with that of local `slice' Kuranishi charts constructed by Fukaya, Oh, Ohta, and Ono <cit.>.
In this subsection we will recall such a construction. This section and the next are not required for the results of this paper, but may be helpful to readers already familiar with <cit.> and Kuranishi atlases.
Let (X,ω) be a closed symplectic manifold, let
β∈ H_2(X;) and let g,h ∈.
Let J be an ω-tame almost complex structure.
Let D ⊂ X be a smooth codimension 2 submanifold with boundary whose closure is a compact submanifold with boundary and let r ∈.
A (D,r)-slice curve
is a smooth map u : Σ X from a nodal curve Σ
with h + r marked points p_1,⋯,p_h+r
satisfying the following properties:
* u is transverse to D meaning that all the nodes of Σ map to X - D and u restricted to the smooth locus of Σ is transverse to D,
* u^-1(D) = {p_h+1,⋯,p_r} and
* the marked nodal curve (Σ,p_1,⋯,p_h+r) has trivial automorphism group.
We let S_r denote the permutation group on r elements, and let
S_r act by permuting the last r marked points p_h+1,⋯,p_h+r.
We let '_g,h+r⊂_g,h+r be the subspace of automorphism free curves
and '_g,h+r the corresponding universal curve and ^'o_g,h+r the complement of its nodes.
Let Y_g,h+r := Ω^0,1_^'o_g,h+r/'_g,h+r⊗_ TX
be the S_r vector bundle over ^'o_g,h+r× X whose fiber over a point (c,x) ∈^'o_g,h+r× X is the space of anti-holomorphic maps from T_c(^'o_g,h+r |_Π(c)) to T_x X, where Π is the natural projection map from ^'o_g,h+r to '_g,h+r.
Choose a finite dimensional approximation scheme (W_μ,λ_μ)_μ∈
for C^∞_c(Y_g,h+r).
For the next definition, we use the fact that the domain of a (D,r)-slice curve has no automorphisms and so we can canonically identify the smooth locus of its domain with a fibre of Π.
We define
_g,h = _g,h(β,D,r,μ)
to be the space of triples (ϕ,u,e) where ϕ∈'_g,h+r, u : '_g,h+r|_ϕ X is a (D,r)-slice curve
and e ∈ W_μ, satisfying the following equation:
∂_J u|_^'o_g,h+r|_ϕ + (λ_μ(e)) ∘Γ_u =0
where
Γ_u : ^'o_g,h+r|_ϕ→^'o_g,h+r× X, Γ_u(σ) := (σ,u(σ))
is the graph map.
The symmetric group S_r on r elements acts in a natural way on _g,h
by permuting the last r marked points.
We call (S_r,_g,h,_g,h× W_μ,0) a (D,r) slice chart.
For μ large enough, we have that (S_r,_g,h,_g,h× W_μ,0)
is a topological Kuranishi chart for an open subset of the moduli space _g,h(X,J,β).
This is a consequence of Theorem <ref>.
§.§ Comparing Global Kuranishi Charts with Slice Charts
We will now compare appropriate open subsets of our global Kuranishi chart construction from Subsection <ref>
and the slice charts from Section <ref>.
In order to do this, we first need to describe
the slice chart construction from Section <ref> in terms of global
Kuranishi data (Definition <ref>).
In addition to the choices made in the previous section, let us fix an integer d ∈.
We will now construct a Kuranishi chart
of curves which intersect D transversally.
This will be a certain codimension 0 submanifold
of the global Kuranishi chart constructed
in Section <ref>.
Before we do this, we will first construct the open subset of the global Kuranishi data (,s)
from Example <ref> that will be related to our slice chart above.
We let G, , , ^(2), Δ, (V_μ,λ_μ)_μ∈, , s
be as in Example <ref>. Let μ≫ 1 be large.
We let B_⊂^(β,λ_μ)
be the subspace of tuples (ϕ,u,e)
where u : Σ→ X
is transverse to D and for which there are marked points p_1,⋯,p_h+r on Σ
where p_1,⋯,p_h are the marked points coming from |_ϕ
and where the additional r marked points
make u into a (D,r)-slice curve as in Definition <ref>.
We let _D ⊂ be the preimage of B_ in under the natural projection map →^.
We let s_D := s|__D.
Then (_D,s_D) is Kuranishi chart data describing the moduli space of J-holomorphic
curves that are transverse to D and which are (D,r)-slice
when adding an additional r marked points.
Let us now describe the slice chart from Section <ref>
above in terms of Kuranishi chart data.
Let Ǧ be the permutation group on r elements.
We let := _g,h+r
be the moduli space of genus g curves
with h+r marked points
and let → be the corresponding universal curve.
We let ^(2) := Ǧ×
and we let ^(2) := (^(2),i,p) be the corresponding microbundle
where p : ^(2)→ is the natural projection map and i : →^(2)
sends ϕ to id ×ϕ.
Choose a finite dimensional approximation scheme (_μ,λ̌_μ)_μ∈
for .
Fix μ∈ large.
Let B_⊂^(β,λ̌_μ)
be the subspace of (D,r)-slice curves as in Definition <ref>.
We now let = Ǧ× B_.
We let š : →^(2)
be the natural projection map to Ǧ× = ^(2). By construction:
The Kuranishi chart associated to (,š)
is equal to a (D,r)-slice chart as in Definition <ref>.
The following proposition tells us that the global Kuranishi chart
for our moduli space of curves constructed in Section <ref>
is locally equivalent up to stabilization, germ equivalence and group enlargement
to the (D,r)-slice chart from Definition <ref>.
(G,,s) and (Ǧ,,š)
are related by a sequence of
group enlargements
and general stabilizations (Definition <ref>) and their inverses.
Hence their associated Kuranishi charts are isomorphic.
This is done in three stages.
We let G_1 := G ×Ǧ.
We let _1 := _g,h+r,d
and _1 →_1 the corresponding universal curve.
We let ^(2)_1 := Ǧ×^(2)_g,h+r,d (see Section <ref>).
We think of this as a smooth microbundle ^(2)_1 = (^(2)_1,i,p) over _1
where i sends ϕ to (id,ϕ) and p is the natural projection map to ^(2)_1.
The group G_1 acts on these moduli spaces as follows: G acts in the natural way and
Ǧ permutes the last r marked points.
We let (V_μ,1,λ_μ,1)_μ∈ be the pullback of (V_μ,λ_μ)_μ∈
to _1 (Definition <ref>).
We let B__1⊂^(β,λ_μ,1)
be the subspace of (D,r)-slice curves.
We let P' → B__1 be the principal G-bundle over B__1
whose fiber over (ϕ,u) is the space of unitary bases of H^0(L_u^⊗ k).
We let P_1 := Ǧ× P' be the corresponding G_1-bundle over B__1.
We let _1 := (P_1)_.
Then elements of _1 correspond to tuples (ǧ,ϕ,u,e,F)
where ǧ∈Ǧ, (ϕ,u) ∈^(β,λ_μ,1),
e ∈ V_μ,1 and F is a basis of H^0(L_u^⊗ k).
We let s_1 : _1 →^(2)_1 send (ǧ,ϕ,u,e,F)
to (id,(ϕ,ϕ_F)).
Then (_1,s_1) is Kuranishi data for (G_1,^(2)_1,λ_μ,1).
Since all the curves involved are (D,r)-slice curves,
we have that the Kuranishi chart associated to (_1,s_1)
is a group enlargement of the one associated to (_D,s_D).
(This group enlargement is not an enlargement obtained from Lemma <ref>, i.e. it is not induced by a principal bundle enlargement.)
Now let (V_μ,2,λ_μ,2)_μ∈ be the direct sum of
(V_μ,1,λ_μ,1)_μ∈
and the pullback of (_μ,λ̌_μ)_μ∈
under the natural projection map _1 → given by sending a curve to its domain.
We now let P_2 be the pullback of P_1 to ^(β,λ_μ,2).
Let _2 := (P_2)_ which is the pullback of _1.
Let s_2 be the composition of the projection map to _1 with s_1.
Then (_2,s_2) is an approximation stabilization of (_1,s_1)
as in Definition <ref>.
Let (V_μ,3,λ_μ,3)_μ∈ be the pullback of (_μ,λ̌_μ)_μ∈
under the natural projection map _1 → given by sending a curve to its domain.
We let B__3⊂^(β,λ_μ,3)
be the subspace of (D,r)-slice curves.
We let P_3 → B__3 be the Ǧ-equivariant principal G-bundle
whose fiber over (ϕ,u) is a unitary basis of H^0(L_u^⊗ k).
Let _3 = (P_3)_
and let s_3 : _3 →^(2)_1 send (ǧ,ϕ,u,e,F)
to (id,(ϕ,ϕ_F)).
Then (_2,s_2) is an approximation stabilization of (_3,s_3)
as in Definition <ref>. In addition, it is a group enlargement of (,š).
|
http://arxiv.org/abs/2307.01651v1
|
20230704111527
|
Task Planning Support for Arborists and Foresters: Comparing Deep Learning Approaches for Tree Inventory and Tree Vitality Assessment Based on UAV-Data
|
[
"Jonas-Dario Troles",
"Richard Nieding",
"Sonia Simons",
"Ute Schmid"
] |
cs.CV
|
[
"cs.CV",
"cs.CY"
] |
University of Bamberg, Cognitive Systems Group, 96049 Bamberg, Germany (jonas.troles@uni-bamberg.de); TU Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
Task Planning Support for Arborists and Foresters: Comparing Deep Learning Approaches for Tree Inventory and Tree Vitality Assessment Based on UAV-Data. (Special thanks to our cooperation partner Smart City Bamberg. The project BaKIM is supported by Kommunal?Digital! funding of the Bavarian Ministry for Digital Affairs. Project funding period: 01.01.2022 - 31.03.2024.)
Jonas Troles^1 (0000-0001-8686-5168), Richard Nieding^1, Sonia Simons^2, Ute Schmid^1 (0000-0002-1301-0326)
August 1, 2023
Climate crisis and correlating prolonged, more intense periods of drought threaten tree health in cities and forests. In consequence, arborists and foresters suffer from increasing workloads and, in the best case, a consistent but often declining workforce. To optimise workflows and increase productivity, we propose a novel open-source end-to-end approach that generates helpful information and improves task planning of those who care for trees in and around cities. Our approach is based on RGB and multispectral UAV data, which is used to create tree inventories of city parks and forests and to deduce tree vitality assessments through statistical indices and Deep Learning. Due to EU restrictions regarding flying drones in urban areas, we will also use multispectral satellite data and fifteen soil moisture sensors to extend our tree vitality-related basis of data. Furthermore, Bamberg already has a georeferenced tree cadastre of around 15,000 solitary trees in the city area, which is also used to generate helpful information. All mentioned data is then joined and visualised in an interactive web application allowing arborists and foresters to generate individual and flexible evaluations, thereby improving daily task planning.
|
http://arxiv.org/abs/2307.01511v1
|
20230704064746
|
Cost-Efficient High-Resolution Linear Absorption Spectra Through Extrapolating the Dipole Moment from Real-Time Time-Dependent Electronic-Structure Theory
|
[
"Eirill Hauge",
"Håkon Emil Kristiansen",
"Lukas Konecny",
"Marius Kadek",
"Michal Repisky",
"Thomas Bondo Pedersen"
] |
physics.chem-ph
|
[
"physics.chem-ph"
] |
We present a novel function fitting method for approximating the propagation of the time-dependent electric dipole moment from real-time electronic structure calculations. Real-time calculations of the electronic absorption spectrum require discrete Fourier transforms of the electric dipole moment. The spectral resolution is determined by the total propagation time, i.e. the trajectory length of the dipole moment, causing a high computational cost. Our developed method uses function fitting on shorter trajectories of the dipole moment, achieving arbitrary spectral resolution through extrapolation. Numerical testing shows that the fitting method can reproduce high-resolution spectra using short dipole trajectories. The method converges with as little as 100 dipole trajectories for some systems, though the difficulty converging increases with the spectral density. We also introduce an error estimate of the fit, reliably assessing its convergence and hence the quality of the approximated spectrum.
§ INTRODUCTION
The rapid advancement of laser technology in the past decades allows us to probe matter on spatiotemporal scales that approach the
characteristic time and length scales of the electron, opening the field of attosecond science <cit.>.
This development has forced quantum chemists to shift their attention
from the time-independent to the time-dependent Schrödinger and Dirac
equations <cit.>.
Numerical approaches to laser-driven electron dynamics are often labelled real-time methods to distinguish
them from the response-theoretical methods to the time-dependent Schrödinger/Dirac equation, the latter solving the equations of motion perturbatively in the frequency domain <cit.>.
Even without explicit reference to results derived from perturbation theory such as, e.g., Fermi's golden rule, it is still possible to
extract linear and low-order nonlinear optical properties from nonperturbative real-time simulations,
including electronic absorption spectra and time-resolved pump-probe absorption spectra
that would be hard or impossible to compute using response
theory—see Refs.
for recent examples.
In this work, we focus on electronic absorption spectra extracted from electron-dynamics simulations driven by a Dirac-delta impulse,
which excites the molecule into all dipole-allowed excited states simultaneously <cit.>.
Due to the nonperturbative nature of real-time methods,
the resulting spectrum may contain nonlinear (e.g., two-photon) as well as linear absorption lines <cit.>, depending on the field strength of the broad-band laser pulse.
In practice, the induced electric dipole moment is recorded in the
course of the simulation and subsequently transformed to the frequency domain to yield the absorption cross section. When using the
conventional discrete Fourier transform to process the signal, the spectral resolution
is inversely proportional to the number of time steps N and the time-step length Δ t, as Δω = 2π /(N Δ t).
Obtaining sufficient spectral resolution typically requires tens to hundreds of thousands of time steps since Δ t
cannot be increased beyond a certain limit if rapid oscillations of the electron density are to be captured.
Moreover, increasing Δ t reduces the accuracy and stability of the numerical integration scheme used to propagate
the electronic degrees of freedom.
As the computational effort in each time step requires multiple rebuilds of the Hamiltonian matrix, it is comparable to several iterations of a ground-state optimization within the chosen
electronic-structure model. Hence, there is considerable interest in decreasing the number of time steps required to achieve sufficient spectral resolution.
Previous efforts to improve the spectral resolution have been made by estimating excitation energies through various signal processing
techniques <cit.>. More recently,
<cit.>
investigated the use of Padé approximants to interpolate the discrete Fourier transforms used for the absorption spectrum.
These are all methods operating in the frequency domain, leaving no other validation options than comparison with a fully propagated spectrum.
The original periodic signal is typically damped using a decaying exponential function to reduce unwanted artefacts arising when the discrete Fourier transform is applied to oscillating functions in simulations with finite trajectory length. In the time domain, the number of time points can be increased by padding the damped signal with zeros, leading to finer spectra. However, this artificial extension of the trajectory length can only be applied on sufficiently damped signals.
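As a rough illustration of the damping-plus-padding approach described above, the following sketch (not the authors' code; array names and parameter values are placeholders) applies an exponential damping to a sampled dipole signal, zero-pads it, and evaluates the damped spectrum with a fast Fourier transform.

import numpy as np

def damped_padded_spectrum(mu, dt, gamma=5e-4 * np.pi, pad_factor=4):
    """Damp a sampled induced dipole moment and zero-pad it before the DFT.

    mu         : 1D array, induced dipole moment on a uniform time grid
    dt         : time step of the grid (atomic units)
    gamma      : artificial inverse lifetime used for damping
    pad_factor : total length of the padded signal relative to the original
    """
    n = mu.size
    t = dt * np.arange(n)
    damped = mu * np.exp(-gamma * t)           # Lorentzian broadening in frequency
    padded = np.zeros(pad_factor * n)
    padded[:n] = damped                        # artificial extension by zeros
    # NumPy's FFT uses the e^{-i w t} sign convention; conjugate the result if
    # the e^{+i w t} convention of the text is required.
    spectrum = np.fft.rfft(padded) * dt / (2.0 * np.pi)
    freqs = 2.0 * np.pi * np.fft.rfftfreq(padded.size, d=dt)
    return freqs, spectrum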
In this work, we investigate a
more sophisticated and powerful alternative: the extrapolation of a short signal.
The discrete Fourier transform of an extrapolated signal achieves increasingly higher spectral resolution as the extrapolation length increases.
This requires the development of a stable and reliable method for time-series forecasting.
The inherently harmonic character of the time-dependent wave function in the absence of an external field suggests that such forecasting of molecular properties should be possible.
Importantly, the forecasted signal can be verified in the time domain by comparing it with relatively few additional time steps.
To the best of our knowledge, no published work exists on improving the spectral resolution by such extrapolation of
the time-dependent dipole moment.
The current success and popularity of machine learning is undeniable, including use cases in chemistry <cit.>, and one might be tempted to leverage artificial neural networks for forecasting the time-dependent electric-dipole
moment.
However, while artificial neural networks are powerful tools for pattern detection in large data sets, they struggle
with precise and reliable extrapolations <cit.>.
Although the universal approximation theorem <cit.> tells us that an excellent interpolation can be achieved,
it does not guarantee a stable extrapolation.
In order to achieve a stable extrapolation, over-fitting must be avoided by enforcing sufficient restrictions.
In this paper, we present a novel approach for obtaining high-resolution absorption spectra from real-time simulations of laser-driven electron dynamics
by exploiting a priori knowledge of the form of the dipole function from quantum mechanics in a finite-dimensional Hilbert space.
The form of the dipole function thus is motivated by the underlying physics, with unknown parameters to be determined by fitting
a short dipole trajectory from a real-time simulation.
The fitted function may be evaluated at any point in time, meaning that it can be extrapolated in the
time domain to arbitrary future time. This further implies that we can achieve arbitrary spectral resolution. For sufficiently weak Dirac-delta impulse, the evaluation of absorption spectra based on these fitted functions
may use analytical expressions for the linear response function <cit.>.
Working in the time domain, a quantitative error measure of the fitted dipole function can be monitored during the course of the real-time simulation
and used to evaluate convergence.
This way, an unnecessarily long real-time propagation can be avoided by automatically terminating the propagation upon the convergence of the fit.
The developed method is independent of the quantum mechanical model and is tested with several molecular systems using mainly real-time
time-dependent density-functional theory
(RT-TDDFT) <cit.>.
Despite certain flaws arising mainly from the almost universally adopted adiabatic density-functional
approximation <cit.>, RT-TDDFT
is the far most widely used electronic-structure method for laser-driven electron dynamics.
With computational costs comparable to (or below) time-dependent Hartree-Fock theory <cit.>,
RT-TDDFT takes into account electron-correlation effects
that would otherwise require advanced and computationally expensive wave-function theories <cit.>.
To demonstrate the independence of the underlying electronic-structure theory, we also present results obtained from real-time
time-dependent configuration interaction singles (RT-TDCIS) <cit.> theory.
We will start with a short presentation of the electric-dipole approximation within real-time simulations
before introducing the proposed method for fitting the time-dependent electric-dipole moment.
After briefly laying out the simulation details for the real-time simulation of a selection of systems,
the results of the fitting method on these systems are presented and discussed.
Finally, we reflect on the performance of the fitting method and discuss potential future improvements.
§ THEORY
In this work, we employ the following conventions:
Subscripts u, v denote Cartesian components,
vectors are typed in boldface, and quantum-mechanical operators are denoted by a hat.
Following the convention of response theory by <cit.>,
we define the Fourier transform and its inverse according to
f̃(ω) = ℱ[f(t)] = 1/2π∫^∞_-∞ f(t) e^{iω t} dt ,
f(t) = ℱ^-1[f̃(ω)] = ∫^∞_-∞ f̃(ω) e^{-iω t} dω .
Atomic units are used throughout unless otherwise specified.
§.§ Real-time simulations of absorption spectra
Within the clamped-nucleus Born-Oppenheimer approximation, real-time simulations of electronic absorption spectra typically assume the electric-dipole approximation, where a molecule is subjected to a time-dependent spatially uniform electric field, F(t).
The time-dependent Hamiltonian reads
Ĥ(t) = Ĥ_0 + V̂(t),
where Ĥ_0 is the time-independent electronic Hamiltonian, and the interaction operator is given by
V̂(t) = - μ̂·u F(t),
where μ̂ is the electric dipole operator.
The linear polarization direction of the electric field is determined by the real unit vector u, such that the field aligns with one of the Cartesian axes.
This implies the form V̂(t) = -μ̂_u F(t), where μ̂_u is its component along the polarization direction.
We assume that the electronic system is in the ground state |0⟩ at time t < 0, and that the external field F(t) is only active between t = 0 and time t_0 ≥ 0. At time t_0, the Hamiltonian reduces to the time-independent Hamiltonian such that Schrödinger's equation for t ≥ t_0 becomes
Ĥ_0 |Ψ(t)⟩
= d/d t|Ψ(t)⟩.
The time-dependent wave function in the absence of the external field oscillates around the solution at time t_0, |Ψ(t_0)⟩ = ∑_n k_n(t_0) |n⟩, as given by
|Ψ(t)⟩ = e^{-iĤ_0(t - t_0)}|Ψ(t_0)⟩
= ∑_n k_n(t_0) e^{-iE_n (t - t_0)}|n⟩,
where |n⟩ denotes a normalized eigenfunction of the unperturbed Hamiltonian,
Ĥ_0 |n⟩ = E_n |n⟩ <cit.>.
This formulation is exact when the electronic continuum is excluded, e.g., by choosing a fixed, finite basis as commonly done in quantum chemistry.
Actual simulations are not performed in the energy eigenbasis but in, e.g., a basis of Slater determinants, implying that the
coefficients k_n(t_0) are not known.
In order to obtain the electronic absorption spectrum averaged over all molecular orientations relative to the
electric field, the time-dependent electric dipole moment
μ_u(t) = ⟨Ψ(t)|μ̂_u|Ψ(t)⟩
is calculated in three separate simulations
with the electric field polarized in each of the three Cartesian directions (u = x, y, z).
The absorption cross-section is then obtained from the Fourier transform of the dipole moments,μ̃_u(ω), as <cit.>
S(ω) = 4πω/3c Im[
μ̃_x(ω)F̃^*(ω)/|F̃(ω)|^2
+ μ̃_y(ω)F̃^*(ω)/|F̃(ω)|^2
+ μ̃_z(ω)F̃^*(ω)/|F̃(ω)|^2 ],
where c is the speed of light. The resulting spectrum contains both linear (one-photon
transitions between the ground and excited states) and nonlinear (multi-photon transitions between the ground and excited states,
and one- and multi-photon transitions between excited states) absorptions, as recently stressed by <cit.>
We note that only the induced dipole moment, that is the total dipole moment with the static ground-state part subtracted, contributes to the absorption cross section but, for notational convenience,
we will only distinguish between that and the total dipole moment when it is strictly required.
Since the dipole moment is calculated on a finite discrete time grid, the Fourier transforms are replaced by
discrete Fourier transforms, thus introducing artificial periodic boundary conditions.
To avoid artefacts from these,
the dipole moment is multiplied by a damping factor before the discrete Fourier transform, i.e.,
μ̃_u(ω) = ℱ[μ_u(t) e^{-γ|t|}]
= 1/2π∫_0^∞ μ_u(t) e^{i(ω + iγ)t} dt,
where we have used that the induced dipole moment vanishes for t<0. The Fourier transform thus becomes a Laplace transform.
The parameter γ∈ℝ_+ can be interpreted as a common (inverse) lifetime of all excited states, giving rise to Lorentzian line shapes
in the simulated absorption spectra <cit.>.
The discrete Fourier transform, however, requires a very large, often prohibitive, number of
time steps to achieve sufficient spectral resolution. In the following sections, we will describe an extrapolation technique
aiming at high resolution with short simulation time.
§.§ The expected form of the electric dipole moment
Once the external field is turned off, the time-dependent electric dipole moment oscillates with the Bohr frequencies ω_n m = E_n - E_m according to
μ_u(t)
= 2 ∑_n > m{ Re[⟨n|μ̂_u|m⟩ k_n(t_0)^*k_m(t_0)] cos(ω_n m(t - t_0))
- Im[⟨n|μ̂_u|m⟩ k_n(t_0)^*k_m(t_0)] sin(ω_n m(t - t_0))
}
+ ∑_n |k_n(t_0)|^2 ⟨n|μ̂_u|n⟩,
for time t ≥ t_0.
The functional form of the approximated dipole moment μ̅_u(t) ≈ μ_u(t) will therefore be given by
μ̅_u(t) = c_0^u + ∑_i = 1^N_ω^u[
c_i^u cos(ω_i^u (t - t_0)) + c^u_{N_ω^u + i} sin(ω_i^u (t - t_0))
],
where N_ω^u is the number of participating frequencies ω_i^u.
If we can determine these frequencies and their corresponding real coefficients c_i^u and c^u_{N_ω^u + i} from a short dipole time series, we obtain a continuous dipole function and, hence,
infinite spectral resolution.
As will be described in detail below, we estimate the participating frequencies using the poles of a Fourier-Padé approximant, while the
linear coefficients are determined using linear regression in a subsequent step.
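A minimal sketch of the second step — determining the linear coefficients once a set of frequencies is available — assuming the general cosine/sine form above; the function and variable names are illustrative only.

import numpy as np

def fit_linear_coefficients(t, mu, omegas):
    """Least-squares fit of c_0 and the cosine/sine coefficients for given frequencies.

    t      : 1D array of times (with t_0 already subtracted)
    mu     : sampled dipole moment on the same grid
    omegas : 1D array of estimated Bohr frequencies
    """
    # Design matrix: constant term, then cos(w_i t) and sin(w_i t) columns.
    cols = [np.ones_like(t)]
    cols += [np.cos(w * t) for w in omegas]
    cols += [np.sin(w * t) for w in omegas]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, mu, rcond=None)
    c0 = coeffs[0]
    c_cos = coeffs[1:1 + len(omegas)]
    c_sin = coeffs[1 + len(omegas):]
    return c0, c_cos, c_sin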
§.§ Estimating Bohr frequencies
In order to estimate the frequencies of the dipole moment, we will investigate the singular points of the Fourier-Padé approximant,
originally introduced in real-time quantum simulations by <cit.>
In general, the Padé approximant is used to accelerate the convergence of a truncated power series. The discrete Fourier transform can be written as the power series
μ̃_u(ω) = Δ t/2 π∑_n = 0^N_t - 1μ_u(t_n) z^n,
where z depends on the frequency according to
z ≡ z(ω) = e^{(iω - γ)Δ t}.
The diagonal Fourier-Padé approximates the Fourier transform using two polynomials P_u(z) and Q_u(z) of degree M = (N_t-1)/2,
[M/M]_μ_u(ω) = Δ t/2 πP_u(z(ω))/Q_u(z(ω)),
where the coefficients of the polynomials create a Toeplitz linear system. For details see Ref. .
The Fourier-Padé poles, denotedz_p^u, are found by
Q_u(z_p^u) = 0,
where the damping parameter γ is set to zero. The Bohr frequencies are positive and real-valued, while the frequencies corresponding to roots of Q_u(z) will be complex. The number of roots of Q_u(z) (which amounts to M roots) should also significantly exceed the number of Bohr frequencies. The potential frequencies are given by
ω_p^u = |ln(z_p^u)/Δ t|,
considering only roots with Im(z_p^u) > 0, as the complex conjugate root theorem states that complex roots will form conjugate pairs.
These conjugate pairs yield duplicates of the real-valued frequencies.
Real-valued roots z_p^u yield purely complex frequencies, and are therefore also excluded. The potential frequencies ω_p^u discard the imaginary component and should represent extrema of the Fourier-Padé spectrum, not singular points like z_p^u.
Estimating the potential frequencies uses the Python NumPy <cit.> library to compute the eigenvalues of the companion matrix <cit.> of the polynomial Q_u(z) to determine its roots. This method exhibits poor scaling with respect to the number of time points N_t, representing a computational bottleneck of the dipole-moment μ̅_u(t) fitting procedure. In real-time simulations using very small time steps, one may safely increase the step length on the dipole data used to create the Fourier-Padé approximant to alleviate the computational cost.
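The following sketch indicates how the Q_u coefficients and the candidate frequencies might be obtained in practice; the Toeplitz setup follows the standard diagonal Fourier-Padé construction, but the exact indexing conventions of the authors' implementation are not reproduced here, so treat this as an assumption-laden outline.

import numpy as np
from scipy.linalg import solve_toeplitz

def pade_candidate_frequencies(mu, dt):
    """Candidate frequencies from the poles of a diagonal Fourier-Pade approximant.

    mu : sampled induced dipole moment, length N_t (odd length is convenient)
    dt : time step
    Returns the complex roots z_p of Q and the corresponding candidate frequencies.
    """
    n_t = mu.size
    m = (n_t - 1) // 2
    d = mu
    # Toeplitz system  sum_{k=1}^{M} b_k d_{n-k} = -d_n  for n = M+1, ..., 2M
    col = d[m:2 * m]                         # d_M, ..., d_{2M-1}   (first column)
    row = d[m::-1][:m]                       # d_M, d_{M-1}, ..., d_1 (first row)
    rhs = -d[m + 1:2 * m + 1]
    b = solve_toeplitz((col, row), rhs)      # b_1, ..., b_M   (b_0 = 1)
    q_coeffs = np.concatenate(([1.0], b))    # Q(z) = 1 + b_1 z + ... + b_M z^M
    # Roots via the companion matrix; np.roots expects highest degree first.
    z_p = np.roots(q_coeffs[::-1])
    keep = np.imag(z_p) > 0                  # one representative per conjugate pair
    omegas = np.abs(np.log(z_p[keep]) / dt)
    return z_p, omegas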
As shown by <cit.>, the convergence of the Fourier-Padé approximant is mostly impacted by the trajectory length N_t Δt, and not the time step itself. However, the discrete Fourier transform, and hence also the Fourier-Padé, is periodic with a cycle length of 2π/Δt. Peaks above π/Δt will fold back due to anti-symmetry and appear as negative duplicates polluting the spectrum. Therefore, it is crucial to keep the time step Δt < π/ω_max, where ω_max is the largest significant frequency in the signal.
The potential frequencies ω_p^u must be classified as either an estimated frequency or a redundant root.
The classification is based on the assumption that ln(z_p^u)/(Δt) should have a significant
imaginary component if ω_p^u is a redundant root, while it should lie close to the real axis if ω_p^u corresponds to an actual Bohr frequency. This further means that Q_u(z(ω_p^u)) should be close to zero and
that [M/M]_μ_u(ω_p^u) should be large for estimated frequencies.
Hence, we create a two-dimensional representation r^u_p of the prospective frequencies ω_p^u given by
[r_p^u]_x = 1 - ( X(ω_p^u) - min_{ω_q^u} X(ω_q^u) ) / ( max_{ω_q^u} X(ω_q^u) - min_{ω_q^u} X(ω_q^u) ),
[r_p^u]_y = ( Y(ω_p^u) - min_{ω_q^u} Y(ω_q^u) ) / ( max_{ω_q^u} Y(ω_q^u) - min_{ω_q^u} Y(ω_q^u) ),
where the unnormalized features are defined as
X(ω_p^u) = log_10|[M/M]_μ_u(ω_p^u)|,
Y(ω_p^u) = log_10|Q_u(z(ω_p^u))| .
The base-10 logarithm is used to manage the extreme scaling of both features, as prospective frequencies should cause Q_u(z(ω_p^u)) to approach zero and hence be a nearly singular point of [M/M]_μ_u(ω_p^u). The features are constructed such that estimated frequencies should be close to r^u_p = (0, 0), while redundant roots should be closer to r^u_p = (1, 1).
We use theK-means clustering algorithm (see e.g. ), implementation from the Python SciKit-Learn <cit.> library, withK=2to classify prospective frequencies.
The 2-means clustering algorithm is a computationally inexpensive way to separate a set into two categories. The centroid for the cluster of potential frequencies should be closer to (0, 0),
whereas the centroid for the redundancy cluster should be closer to (1, 1).
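A possible realisation of this classification step, again as an assumption-laden sketch (the feature construction follows the normalised X and Y above; the helper names are invented):

import numpy as np
from sklearn.cluster import KMeans

def classify_frequencies(omega_p, pade_value, q_value):
    """Separate estimated Bohr frequencies from redundant Pade roots with 2-means.

    omega_p    : candidate frequencies
    pade_value : |[M/M]_mu(omega_p)| evaluated at each candidate
    q_value    : |Q(z(omega_p))| evaluated at each candidate
    """
    def minmax(v):
        return (v - v.min()) / (v.max() - v.min())

    x = 1.0 - minmax(np.log10(pade_value))   # small for near-singular points
    y = minmax(np.log10(q_value))            # small where Q is close to zero
    r = np.column_stack((x, y))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(r)
    # The cluster whose centroid lies closer to (0, 0) holds the estimated frequencies.
    centroids = np.array([r[labels == k].mean(axis=0) for k in (0, 1)])
    good = int(np.argmin(np.linalg.norm(centroids, axis=1)))
    return omega_p[labels == good]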
§.§ Determining the linear coefficients
Once the frequencies are estimated, the linear coefficients are determined using linear regression. The coefficients are optimized by minimizing the cost function <cit.>,
R(c^u)
= ∑_n = 0^N_t - 1[μ_u(t_n) - μ̅_u(t_n; c^u)]^2 .
Using the general form of the dipole moment in <ref>, the linear coefficients may be optimized using a simple least squares optimization. The only restraint on the optimization of these coefficients is that they are real. This fitting procedure is general for any type of external field,F(t). However,
as shown by <cit.>,
restricting the coefficients is crucial to avoid over-fitting the dipole moment.
The formμ̅_u(t)in <ref> is based on the full dipole moment, correct through all orders in perturbation theory and is independent of the electric field. In this work, we will use a Dirac delta-type impulse <cit.> of strengthκ,
F(t) = κδ(t),
which has an infinitely wide frequency distribution and thus generates the full absorption spectrum for the given polarization direction. This implies thatt_0 = 0. Further, we assume that the electric field strength is sufficiently weak,
such that we may regard the interaction operatorV̂(t)as a time-dependent perturbation and assume that the interaction only induces one-photon transitions from the ground state—i.e., a linear absorption spectrum. The electric dipole moment should then be of the formμ_u(t) ≈μ_u^(0) + μ_u^(1)(t),
where the zeroth order dipole moment corresponds to the ground-state value,μ_u^(0) = μ_u(t = 0). We will now investigate an analytical expression for the first-order correction to the dipole moment induced by a weak Dirac delta impulse.
We start with the exact expression for the linear response function <cit.>,
⟨⟨μ̂_u; V̂_u(ω)⟩⟩_{ω + iγ}
=
- 2 F̃(ω) ∑_n ≠ 0 ⟨0|μ̂_u|n⟩^2 ω_n 0/((ω + iγ)^2 - ω_n 0^2) ,
where we have used the Fourier transform of the interaction operatorV̂(t),
V̂_u(ω) = μ̂_u F̃(ω),
F̃(ω) = κ/2π.
The linear response function and the first-order correction to the dipole moment are related by ⟨⟨μ̂_u; V̂_u(ω)⟩⟩_{ω + iγ} = ℱ[μ_u^(1)(t) e^{-γ|t|}].
Sinceμ_u^(1)(t < 0) = 0, we get the relation
∫_0^∞ μ_u^(1)(t) e^{-(γ - iω)t} dt
= 2κ∑_n ≠ 0 ⟨0|μ̂_u|n⟩^2 ω_n 0/((ω + iγ)^2 - ω_n 0^2).
By using well-known Laplace transforms, the first-order correction is identified as a linear combination of sine waves:
μ_u^(1)(t)= ∑_n ≠ 0 B_n^u sin(ω_n 0t), B_n^u = 2κ⟨0|μ̂_u|n⟩^2.
We have also used that the first-order perturbation correction to the dipole moment should only include one-photon transitions. This further means that the approximated dipole moment, when using a weak Dirac delta impulse, should have the form
μ̅_u(t) = c_0^u + ∑_i = 1^N_ω^u
c_i^u sin(ω_i^u t),
where all sine coefficients are positive.
The coefficients ofμ̅_u(t), approximating the dipole moment from the Dirac delta impulse, are optimized using the least absolute shrinkage and selection operator (LASSO) <cit.> method. The coefficients are determined according to
c^u_LASSO =
argmin_{c^u} { 1/2 R(c^u)
+ λ∑_i |c_i^u|
} ,
whereλis the shrinkage parameter restricting the magnitude of the coefficients,c_i^u. In contrast to the
ordinary linear least-squares algorithm,
the LASSO method is iterative and therefore somewhat less computationally efficient.
In return, this makes it possible to enforce positive coefficients, as in the implementation by SciKit-Learn <cit.>. This makes the method less prone to over-fitting.
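For the Dirac-delta driven case, where only positive sine coefficients are expected, a sketch using the scikit-learn LASSO with a positivity constraint might look as follows. Note that scikit-learn normalises the residual term by the number of samples, so the shrinkage value below is a placeholder and not directly the λ of the cost function above.

import numpy as np
from sklearn.linear_model import Lasso

def fit_delta_kick_coefficients(t, mu_induced, omegas, shrinkage=1e-6):
    """LASSO fit of positive sine coefficients for a weak Dirac-delta impulse."""
    A = np.column_stack([np.sin(w * t) for w in omegas])
    model = Lasso(alpha=shrinkage, positive=True, fit_intercept=True, max_iter=50000)
    model.fit(A, mu_induced)
    c0 = model.intercept_        # constant offset c_0^u
    c_sin = model.coef_          # positive coefficients c_i^u
    return c0, c_sin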
§.§ Molecular orbital decomposition
The electric dipole moment can be written as a sum of contributions from elementary molecular orbital (MO) transitions <cit.>,
μ_u (t) = ∑_i aμ_u^i a (t),
whereiandalabel occupied and virtual MOs, respectively.
The componentsμ_u^i a (t)are then approximated separately.
This MO decomposition can divide a dense spectrum into a series of sparser spectra and aid in the assignment of absorption
lines <cit.>.
Clustering the MO components into groups can be used to offset the increased memory consumption <cit.>. For the fitting method, creating clusters with well-separated frequencies could also
reduce the accumulation of errors when summing the component fits.
When fitting the individual components, the assumptions on the sign of the linear coefficients are no longer valid.
As is clear from the underlying theory and as demonstrated in practice by <cit.>,
the same frequencies may be found in several componentsμ_u^i a, and their corresponding partial spectra may contain negative peaks.
Only the full spectrum, i.e., the sum of the components, is guaranteed to contain positive peaks exclusively.
The ordinary least squares method must therefore be used when optimizing the linear coefficients of the individual components,
which may introduce additional errors due to over-fitting in each component.
Alternatively, the fitting algorithm may estimate the frequencies of each component separately and then optimize the linear coefficients for the full dipole moment. This way, the additional coefficient restrictions can be used in the optimization. In our experience, however,
this produces a vast number of estimated frequencies leading to problems with over-fitting even when enforcing positive linear coefficients.
§.§ Convergence criterion
The goal of the fitting method is to accurately construct the functionμ̅(t)using the shortest possible dipole trajectory.
A given trajectory is divided into two parts, a fitting domain and a verification domain. The linear coefficients are optimized using only the fitting domain, while the error is calculated on the verification domain. Measuring the error in the fitting domain gives the interpolation error, which is artificially low in cases of over-fitting, whereas the error in the verification window indicates the reliability of the extrapolated dipole moment.
The error of the fit is estimated using one minus the coefficient of determination,R^2, i.e.,
E_u = 1 - R^2
= ∑_n [μ_u(t_n) - μ̅_u(t_n)]^2 / ∑_n [μ_u(t_n) - μ_u^m]^2,
where μ_u^m is the mean value of the induced dipole moment. The error measure is unitless and independent of the magnitude of the dipole moment. The fitting method can be run in parallel with real-time simulations, which are terminated once E_u drops below a pre-defined threshold value. The computational cost of the fitting method is not insignificant, and we recommend that it is run once per time interval of 50 - 100 when used to automatically terminate the real-time simulation.
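The error measure itself is straightforward to evaluate on the verification window; a minimal sketch (variable names are illustrative):

import numpy as np

def verification_error(mu_ref, mu_fit):
    """E_u = 1 - R^2, both arrays sampled on the verification window."""
    residual = np.sum((mu_ref - mu_fit) ** 2)
    total = np.sum((mu_ref - np.mean(mu_ref)) ** 2)
    return residual / total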
The error measureE_ucannot distinguish error contributions from different parts of the spectrum, preventing termination once the desired
frequency region is converged.
In order to focus on valence excitations in the low-frequency region, we apply a low-pass (smoothing) filter to remove frequencies above a
cut-off frequencyω_maxfrom the dipole moment in the time domain. We used a Butterworth filter, implemented by SciPy <cit.>, which removes the high-energy part of the spectrum while leaving the lower-energy part almost unchanged.
If bound core excitations are the main targets, a high-pass filter must be used instead.
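A sketch of the frequency filtering in the time domain using SciPy's Butterworth filter; the filter order is a placeholder, as it is not specified here.

import numpy as np
from scipy.signal import butter, filtfilt

def frequency_filter(mu, dt, omega_cut, kind="low", order=4):
    """Remove dipole oscillations above (or below) an angular cut-off frequency."""
    nyquist = np.pi / dt                     # largest resolvable angular frequency
    wn = omega_cut / nyquist                 # normalised cut-off in (0, 1)
    b, a = butter(order, wn, btype="lowpass" if kind == "low" else "highpass")
    return filtfilt(b, a, mu)                # zero-phase filtering of the signal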
§ COMPUTATIONAL DETAILS
We test the dipole extrapolation scheme using RT-TDDFT simulations, supplemented by a few RT-TDCIS simulations to demonstrate
its applicability to wave-function-based theories.
The RT-TDDFT simulations are performed using the ReSpect program <cit.>, while the
RT-TDCIS calculations are performed using the Hylleraas Quantum Dynamics (HyQD) software <cit.>.
The RT-TDDFT and RT-TDCIS simulations are performed with analytic integration att = 0, as described in Ref. .
The subsequent time steps are performed numerically using the Magnus integrator for the RT-TDDFT simulations <cit.>
and the three-stage Gauss-Legendre integrator <cit.> as described in Ref.
with the residual norm convergence criterion10^-14for the implicit equations
for the RT-TDCIS simulations.
The RT-TDCIS simulations are performed with time step Δt = 0.01 and field strength κ = 10^-3. The RT-TDDFT simulations for the organic molecules CH4, CH2O, CH3OH, C2H6, and C6H6
are performed with time step Δt = 0.01 and field strength κ = 10^-4, while Δt = 0.01 and κ = 10^-3 are used for CO2, H2O, and NH3.
For the smallest systems, He, H2, Be, and LiH, Δt = 0.1 and κ = 10^-3 are used.
Molecular geometries are found in the supplementary information.
The simulations were performed in Dunning's cc-pVXZ and aug-cc-pVXZ, X = D,T, basis sets <cit.>
(uncontracted in the case of RT-TDDFT calculations).
The RT-TDDFT simulations were performed using the PBE0 exchange–correlation potential <cit.>
in the adiabatic approximation.
Implementation of the fitting algorithm can be found at <https://github.com/HyQD/absorption-spectrum>.
§ RESULTS
All reference spectra in this paper are produced from low-pass filtered electric dipole moments with a trajectory length of 4000,
such that the spectral resolution becomes Δω = 1.6·10^-3.
In this paper, the resolution of the fitted spectrum S̅(ω) is the same as its reference spectrum. This is to allow direct comparisons of the two spectra, though the resolution of S̅(ω) could be made arbitrarily fine. The Fourier transform of the approximated dipole moment μ̅_u is calculated according to
μ̃_u(ω)
=
- 1/2π∑_i c_i^u ω_i^u/((ω + iγ)^2 - (ω_i^u)^2).
The half-life parameter was always set to γ = 0.5·10^-3 π, and the spectra were cut at an estimated ionization energy of 0.5 - ϵ_HOMO, where
the energies of the highest occupied molecular orbital ϵ_HOMO of all systems are listed in the supplementary information.
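The fitted spectrum can then be evaluated on an arbitrarily fine frequency grid directly from the closed-form transform above; a sketch, assuming the delta-kick (sine-only) form and the same broadening convention (the function name is invented):

import numpy as np

def fitted_spectrum(omega_grid, omegas, c_sin, gamma=0.5e-3 * np.pi):
    """Analytic damped Fourier transform of the fitted dipole moment."""
    w = omega_grid[:, None] + 1j * gamma             # broadened frequency argument
    terms = c_sin * omegas / (w ** 2 - omegas ** 2)
    return -np.sum(terms, axis=1) / (2.0 * np.pi)    # complex mu_tilde_u(omega)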
Using the low-pass filter will leave the lower energy part of the absorption spectrum unaltered, while the higher energy part is removed and set to zero. Differences between filtered and unfiltered spectra are shown in the supplementary information.
The low-pass filter does not give a clean cut-off at the cut-off frequency ω_max but rather a gradual lowering of the peak intensity around ω_max. The cut-off frequency should therefore be set somewhat higher than the desired range of the spectrum. We have used ω_max = 4 for all systems.
When fitting the dipole moment, the available trajectory is from when the external field is turned off at t = 0 to time t = T_ver^u.
The linear coefficients are determined on the time interval [0, T_fit^u], where T_fit^u = 0.75 T_ver^u. The error E_u is only evaluated on (T_fit^u, T_ver^u]. The error in the spectrum E_S is calculated the same way as in the time domain, as given in <ref>.
§.§ Performance on a selection of systems
For each spatial direction, the convergence of the fit is tested every 50 in time of the trajectory length, starting from T_min = 100. The simulation is terminated when the fit has converged below a given threshold or when the trajectory length reaches T_max = 1000. The convergence criterion was set to E_u < 10^-3, a strict threshold corresponding to a near-perfect fit. Since the real-time calculations on a given system using three spatial directions of the external field are independent, the trajectory length needed for a converged fit might vary between the three simulations.
The required trajectory length of each spatial direction T_ver^u and their corresponding verification error E_u as well as the error in the spectrum E_S are listed in <ref>.
The fitting of the dipole moment from RT-TDCIS calculations is shown in <ref>, while the fitting of RT-TDDFT data is found in <ref>. Figures of the approximated spectra of all systems can be found in the supplementary information.
The fitting method reached the strict threshold for most systems, with a maximum spectral error of E_S ≤ 3·10^-3. For all converged systems, the approximated functions for the dipole moment μ̅_u(t) reliably reproduce its reference spectrum. The systems with very sparse spectra (He, H2, and Be) converged instantly (T_ver = 100), providing approximated spectra indistinguishable from their reference spectra. In these cases, the fitting method achieved a speedup of 10 times compared to the max trajectory length of T_max = 1000, or
40 times compared to computing the reference spectra (using 4000). The reduction in computational cost achieved by using the fitting method is relative to the desired spectral resolution. Should the method converge at the preset max trajectory length, one may argue that no simulation time was spared. In this case, one still achieves arbitrary improvement in the spectral resolution.
Systems with relatively sparse spectra (CH4, CO2, H2O, LiH, and NH3) also converged nicely with short dipole trajectories (T_ver^u ≤350). As the spectral density increases, the fitting method struggles to converge. Systems like C2H6 and CH2O only converged when using a double-zeta basis set, while the fitting of C6H6 and CH3OH did not achieve errors below the low threshold.
The CH2O molecule with a double-zeta basis set converged for both real-time methods. The spectra of the fit in both cases are nearly indistinguishable from their reference spectra. A comparison between the approximated and reference spectrum from RT-TDDFT calculations is shown in <ref>. The RT-TDDFT triple-zeta case nearly reached the error threshold (E_z = 2·10^-3), also providing a very low spectral error (E_S = 8·10^-4).
Among the converged systems, NH3 from RT-TDCIS calculations showed the largest error compared to its reference spectrum (E_S = 3·10^-3). Its spectrum is shown in <ref>, and was the approximated spectrum with the most visible deviation from its reference spectrum among the converged systems.
The approximated spectrum shows a deviation in a peak atω≈0.75, but the rest of the peaks correspond well to the reference spectrum.
The fitting method only partially converged for C6H6, as well as C2H6 and CH2O with triple-zeta basis, meaning that the error of the fit was below the set threshold in only one or two of the spatial directions.
Still, the spectral error in all three cases is quite low. The result of the fitting of benzene is shown in <ref>, which had the largest spectral error (E_S = 4·10^-2) of the three.
The trajectory length needed for the fitting method to converge strongly depends on the spectral density. We observed that the fitting becomes increasingly difficult as the spectral density increases. Increasing either the number of electrons in the system or the size of the basis set will in general require longer real-time simulations before the fitting method converges. The trend with increasing basis set size is clearly seen from the fitting of CO2 from RT-TDCIS calculations. The simulation using the cc-pVDZ basis set converges faster (T_ver^u = 100) than when using the larger basis sets like the aug-cc-pVDZ basis set (T_ver^u ≤ 250) or the aug-cc-pVTZ basis set (T_ver^u ≤ 300).
Only the fit of CH3OH did not get errors below the convergence threshold in any of the spatial directions. This was true for both the double- and triple-zeta basis (from RT-TDDFT calculations). The result using a triple-zeta basis set is shown in <ref> and is the case with the highest error in the time domain, E_u ∼ 10^-1. There is a significant deviation from the reference spectrum, though the main features are intact. Of all systems in this paper, this gave the worst approximation to the reference spectrum. Despite this, the spectrum S̅(ω) still provides a reasonable coarse approximation.
These results are promising in all cases, as the converged fit seems to reproduce its reference spectrum reliably with only minor deviations in the peak intensities. The error of the fit E_u also correlates with the spectral error, E_S.
This predictability is crucial if the convergence criterion is used to automatically terminate real-time simulations. Our results also indicate that the convergence criterion used in this study is stricter than necessary. A slight relaxation in the criterion might lead to faster convergence without significantly impacting the quality of the approximated spectrum.
For the estimated dipole moment, the frequencies and their corresponding linear coefficients are known. For a successful fit, one may therefore obtain the transition probability ⟨0|μ̂_u|n⟩^2 of a transition with energy E_n - E_0 directly from the linear coefficient, as ⟨0|μ̂_u|n⟩^2 = B_n^u/(2κ). This could be used to calculate the oscillator strength and create stick spectra. However, estimated frequencies in different spatial directions but corresponding to the same transition will have a small error associated with the frequencies. In order to compute the oscillator strength, one would therefore have to assess which estimated frequencies across spatial directions correspond to the same transition.
The convergence of the dipole moment fitting depends primarily on the frequency estimation. When the fit does not converge, it follows that the Fourier-Padé approximant is not sufficiently converged to accurately capture the Bohr frequencies. The quality of the Fourier-Padé depends on the dipole trajectory length t_N rather than the number of steps or step length <cit.>. However, there is no given final time t_N ensuring convergence; the necessary trajectory length depends on the spectral density. High spectral density can cause the Fourier-Padé to fail, even for relatively long simulations. The fitting method introduces a measure of the error E_u which does not rely on any reference spectrum. This introduces a more reliable way of estimating the error in the approximated spectrum.
§.§ Fitting using MO decomposition
We assessed the performance of the fitting procedure used to extrapolate the components of the dipole moment of C6H6 decomposed into MO pairs, μ̅_u^i a, in the RT-TDDFT calculation. Instead of creating a fitting function for each individual MO pair, which would increase the memory overhead, we clustered the components μ̅_u^i a into groups of ten. These groups are formed so that the overall sparsity of the spectra obtained for each cluster is maintained. This is accomplished by spreading the individual constituents of each cluster across the energy range.
A fitted function for each cluster was then created from the sum over the MO pairs that the cluster contains, and the total error is measured for the full dipole moment, μ_u(t). Some of the MO pairs were omitted entirely, making the MO decomposition act as a low-pass filter; when fitting the components, a separate low-pass filter is therefore not needed.
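One possible way to realize this spreading, sketched below in Python, is to sort the MO-pair components by an energy estimate (e.g., the orbital-energy difference) and assign them to clusters in a round-robin fashion. The cluster size of ten follows the description above, while the concrete sorting criterion and the example energies are assumptions.

import numpy as np

def cluster_mo_pairs(pair_energies, cluster_size=10):
    # pair_energies: 1D array with one energy estimate per MO pair (i, a)
    order = np.argsort(pair_energies)              # sort pairs across the energy range
    n_clusters = int(np.ceil(order.size / cluster_size))
    clusters = [[] for _ in range(n_clusters)]
    for rank, idx in enumerate(order):
        clusters[rank % n_clusters].append(idx)    # round-robin keeps each cluster spread over the range
    return clusters

# example: 35 hypothetical MO pairs with random excitation-energy estimates
clusters = cluster_mo_pairs(np.random.uniform(0.2, 2.0, size=35))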
Fitting the decomposed signal, however, did not improve the convergence compared to when the full dipole moment was used for extrapolation. Simulations for both directions μ_x and μ_y reached the maximum trajectory length (t = 1000) without the fitting error going below the error threshold. The errors (E_x = 1·10^-2 and E_y = 1·10^-2) were only slightly lower than when fitting without the MO decomposition. The last spatial direction, μ_z, converged at T_ver = 650 (E_z = 4·10^-4), which is somewhat slower than without MO decomposition. The corresponding spectral error was low, E_S = 9·10^-3.
Although the MO decomposition did not lead to accelerated convergence of the fitting method, we still observed improvements. For example, the simulation with T^u_ver = 600 has a lower error of the fit for the decomposed dipole moment (E_x = 6·10^-2, E_y = 4·10^-2 and E_z = 1·10^-3) in all spatial directions than the fit on the full dipole moment (E_x = E_y = 3·10^-1 and E_z = 2·10^-3).
The decomposed fit in <ref> (E_S = 3·10^-2) is visibly improved compared to the fit using the full dipole moment in <ref> (E_S = 2·10^-1).
It is important to note that the scope of our testing of the fitting method using MO decomposition was limited. Previous success using the Fourier-Padé approximant in combination with the MO decomposition on RT-TDDFT data suggests that this approach is very effective in many cases <cit.>. Our system could be particularly unfavourable in this regard; nevertheless, our study raises cause for caution regarding the use of the Fourier-Padé approximant, which can struggle even when using MO decomposition. The unknown amount of error introduced into the final spectrum by this procedure remains an open problem that the user should be aware of when analyzing spectra with the Fourier-Padé method combined with the MO decomposition.
The particular form of the components μ_u^p q also depends strongly on the quantum mechanical method used to compute the time-dependent wave function. Using MO decomposition on the electric dipole moment from RT-TDCCSD calculations leads to large overlaps in frequencies among different components <cit.>. The usefulness of such a decomposition might therefore vary between the different quantum mechanical frameworks.
§ CONCLUSION
We have developed a novel method for creating functions that approximate the electric dipole moment from real-time calculations. The fitted functions for the dipole moment in the three spatial directions can then be used to produce absorption spectra with arbitrarily high resolution. Real-time calculation of absorption spectra requires the discrete Fourier transform, demanding long simulation times to obtain high spectral resolution. In our work, we have shown that the length of the real-time simulations, and hence the computational cost, can be greatly reduced by the developed fitting method.
We introduced a quantitative error measure to evaluate the convergence of the fit. For all systems in this work, a converged fit reliably reproduced the reference spectrum from long real-time calculations. In order to reduce the computational cost of calculating absorption spectra, the real-time calculations should be automatically terminated once the convergence criterion is reached.
In this work, we set the verification window to be 25% of the available dipole trajectory. The critical step of the method is determining the frequencies, which always uses all available data. For the linear optimization, the verification window should include an entire period of the smallest frequency in the signal, as an insufficiently large verification window may lead to misleading error estimates. In future work, the verification window should depend on an estimate of the smallest frequency in the signal based on differences in the molecular orbital energies.
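As a rough illustration of how such a criterion could look, the Python sketch below chooses the verification window as the larger of 25% of the available trajectory and one full period of the smallest expected frequency, here taken from an orbital-energy difference; the specific numbers are placeholders and this is not the rule used in the present implementation.

import math

def verification_window(t_total, omega_min, fraction=0.25):
    # require at least one full period of the smallest frequency in the signal
    return max(fraction * t_total, 2.0 * math.pi / omega_min)

# example: 300 a.u. of trajectory and a smallest orbital-energy difference of 0.25 a.u. (assumed)
print(verification_window(300.0, 0.25))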
The fitting method converged with trajectories as short as 100 in time for systems with sparse spectra. Convergence slows down as the spectral density increases, in some cases even leading to a failure to converge. The current version of the fitting method shows encouraging results for smaller systems, although aspects of the method require further investigation.
Our testing of the fitting method using molecular orbital decomposition of a single system gave mixed results. The decomposition did not enable the fit to meet the convergence criterion, although we observed improvements in the approximated spectrum. This motivates the need for further investigations.
An apparent weakness of the current implementation is the way frequencies are estimated. Future versions should not rely on the Fourier-Padé but rather investigate other methods of estimating frequencies. This could include other methods for harmonic inversion, or letting the functional form of the fitted dipole moment be a truncated Fourier series based on an estimate of the fundamental frequency.
The same frequencies can appear in all spatial directions, which could be exploited to improve the frequency estimation. In particular, in cases where the frequencies are successfully estimated in one spatial direction, knowledge of these existing frequencies could be used to alleviate the search in the other spatial directions with potentially higher spectral density.
Improving the frequency estimation is crucial for stabilizing the fitting method for systems with high spectral density.
This work has focused on the Dirac delta impulse, though the general fitting algorithm may be used on systems with any type of external field. Using a laser pulse targeting a specific spectral region may provide both an upper and lower bound when estimating the frequencies. Any a priori information about the frequencies should be exploited by the fitting algorithm.
Additionally, the Dirac delta impulse targets all excitation energies, maximizing the spectral density.
It is therefore plausible that a narrow-band laser pulse would somewhat ease the fitting by reducing the spectral density.
This work was supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262695, and its Mobility Grant scheme (project nos. 301864 and 314814).
The simulations were performed on resources provided by Sigma2—the National Infrastructure for High Performance Computing
and Data Storage in Norway, Grant No. NN4654K. T. B. P. acknowledges the support of the Centre for Advanced Study in Oslo,
Norway, which funded and hosted the CAS research project Attosecond Quantum Dynamics Beyond the Born-Oppenheimer Approximation
during the academic year 2021-2022. In addition, this project received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 945478 (SASPRO2), and the Slovak Research and Development Agency (Grant No. APVV-21-0497).
Molecular geometries and HOMO energies for all systems. Comparisons between the spectra of filtered versus unfiltered dipole moments. Spectra of the fitted functions of the dipole moments compared with the corresponding reference spectrum.
|
http://arxiv.org/abs/2307.03373v1
|
20230707035121
|
All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment
|
[
"Chunhui Zhang",
"Xin Sun",
"Li Liu",
"Yiqian Yang",
"Qiong Liu",
"Xi Zhou",
"Yanfeng Wang"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment
Chunhui Zhang, Xin Sun, Li Liu, Member, IEEE, Yiqian Yang, Qiong Liu, Xi Zhou, Yanfeng Wang
Chunhui Zhang, and Xin Sun are with the Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China and the CloudWalk Technology Co., Ltd, 201203, China. Emails: chunhui.zhang@sjtu.edu.cn, huntersx@sjtu.edu.cn.
Li Liu is with the Hong Kong University of Science and Technology (Guangzhou), Guangzhou, 511458, China. E-mail: avrillliu@hkust-gz.edu.cn.
Yiqian Yang is with the Northwestern Polytechnical University, Xi'an, 710072, China. E-mail: frank.stuart@mail.nwpu.edu.cn.
Qiong Liu, and Xi Zhou are with the CloudWalk Technology Co., Ltd, 201203, China. E-mails: liuqiong@cloudwalk.com, zhouxi@cloudwalk.cn.
Yanfeng Wang is with the Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China and the Shanghai AI Lab. E-mail: wangyanfeng@sjtu.edu.cn.
Corresponding author: Li Liu.
Abstract
Current mainstream vision-language (VL) tracking frameworks consist of three parts: a visual feature extractor, a language feature extractor, and a fusion model. To pursue better performance, a natural modus operandi for VL tracking is to employ customized and heavier unimodal encoders and multi-modal fusion models. Albeit effective, existing VL trackers separate feature extraction and feature integration, resulting in extracted features that lack semantic guidance and have limited target-aware capability in complex scenarios, e.g., similar distractors and extreme illumination. In this work, inspired by the recent success of exploring foundation models with unified architecture for both natural language and computer vision tasks, we propose an All-in-One framework, which learns joint feature extraction and interaction by adopting a unified transformer backbone. Specifically, we mix raw vision and language signals to generate language-injected vision tokens, which we then concatenate before feeding into the unified backbone architecture. This approach achieves feature integration in a unified backbone, removing the need for carefully-designed fusion modules and resulting in a more effective and efficient VL tracking framework. To further improve the learning efficiency, we introduce a multi-modal alignment module based on cross-modal and intra-modal contrastive objectives, providing more reasonable representations for the unified All-in-One transformer backbone. Extensive experiments on five benchmarks, i.e., OTB99-L, TNL2K, LaSOT, LaSOT_Ext and WebUAV-3M, demonstrate the superiority of the proposed tracker against existing state-of-the-art methods on VL tracking. Codes will be made publicly available.
Unified vision-language tracking, Multi-modal alignment, Transformer, Foundation model.
§ INTRODUCTION
Vision-language (VL) tracking <cit.>, one of the fundamental and challenging problems at the intersection of computer vision and natural language understanding, aims to locate the object in each frame of a video based on a given natural language prompt and an initial object box. It plays a crucial role in human-machine interaction, transportation surveillance, virtual reality, autonomous driving, delivery, etc. Compared with traditional visual object tracking <cit.>, which uses a bounding box to describe the object of interest, VL tracking has the potential to achieve more robust tracking by leveraging the complementary superiority of multiple modalities.
In the past few years, two-stream VL trackers <cit.>, which extract visual features and language features separately and then perform feature interaction in a fusion model (as shown in Fig. <ref>(a)), have emerged as the dominant framework and achieved significant progress. For instance, Feng et al. <cit.> proposed a Siamese natural language region proposal network for multi-stage feature extraction, and then applied an aggregation module to dynamically combine predictions from both visual and language modalities. Guo et al. <cit.> suggested an asymmetrical modeling architecture to learn adaptive VL representations. Following the two-stream pipeline, the latest transformer-based VL tracker JointNLT <cit.> formulates grounding and tracking as a unified task of establishing relations between visual-language references and test images via a multi-source relation modeling architecture.
Despite the convincing designs of existing two-stream VL trackers, they still suffer from the fundamental challenge of learning target-aware capability in complex and corner scenarios, e.g., similar distractors, occlusion, and extreme illumination <cit.>. Firstly, the separation of feature extraction and integration prevents the model from performing early multi-modal feature interaction, resulting in limited object-background discriminative power <cit.>. Although some works have attempted to design complicated <cit.> or multi-stage <cit.> fusion models to enhance the associations between modalities, the lack of mutual interaction remains an insurmountable gap. More seriously, heavy fusion models increase the number of parameters, leading to significant computational inefficiency. Secondly, performing feature interaction directly ignores the huge distribution discrepancies between the vision and language modalities in the feature space <cit.>, leading to significant inefficiency in VL representation learning.
To tackle the above issues, we propose a unified framework (as shown in Fig. <ref>(b)), namely All-in-One, for VL tracking. The core idea is to establish bidirectional information flow between well-aligned visual and language signals as early as possible via a unified transformer backbone. Our All-in-One framework brings multiple new benefits for multi-modal VL tracking. (1) The unified architecture not only simplifies the model, but also leads to more efficient VL representation learning. (2) It has great potential to serve as a foundation model for VL tracking. With this framework, we develop a general VL tracking model, which generalizes well to complex, user-defined language descriptions/prompts on various VL tracking datasets. (3) Compared with two-stream vision-language foundation models (e.g., CLIP <cit.>), our All-in-One framework follows the simple and general one-stream route <cit.>.
Specifically, we introduce a versatile All-in-One transformer, as shown in Fig. <ref>, to embed raw visual and language signals into joint VL representations, and the produced visual search region features can be directly used for object localization without an additional fusion model. The visual inputs (search region and template) and the language input are first mapped by a patch embedding and a text tokenizer, respectively, and then flattened into embeddings of the same dimension. A modal mixup operation is used to inject language information into the visual embeddings (template embeddings and search region embeddings), followed by a stack of transformer encoder layers enabling iterative feature integration between template and search region embeddings under language guidance. Thus, both template and search region embeddings can be enhanced dynamically with strong target-aware capability. In addition, we introduce a multi-modal alignment (MMA) module to alleviate the huge distribution discrepancies between modalities based on contrastive learning (CL) <cit.>. The MMA module includes cross-modal alignment (CMA) and intra-modal alignment (IMA), forcing the visual and language signals from the same video to be close in the feature space, while making the distribution of multi-modal features more uniform and reasonable across the entire feature space, which promotes feature integration. In conclusion, our main contributions can be summarized as follows:
* We propose a simple, compact and effective one-stream framework for VL tracking, namely All-in-One, which learns VL representations from raw visual and language signals end-to-end in a unified transformer backbone.
* We develop a novel multi-modal alignment module incorporating cross-modal and intra-modal alignments to enable efficient multi-modal learning by aligning multiple signals in the feature space before learning.
* Extensive experiments demonstrate that the proposed approach achieves higher accuracy than state-of-the-art (SOTA) trackers.
§ RELATED WORK
§.§ Vision-Language Tracking
In recent years, the two-stream framework <cit.> has emerged as the dominant VL tracking paradigm (see Fig. <ref>(a)). Such methods first extract features using two independent unimodal feature extractors, and then model the relation between visual features and language features in a sequential manner with a lightweight <cit.> or relatively heavy <cit.> network. Early work <cit.> contains a visual specification network and a lingual specification network, and further selectively focuses on parts of language prompts using a lingual specification attention network. Later, GTI <cit.> and <cit.> decompose the VL tracking problem into three sub-tasks of visual tracking, grounding and integration. VLT_TT <cit.> suggests learning VL representations through an asymmetrical modeling architecture. JointNLT <cit.> introduces a joint visual grounding and tracking framework by localizing the referred object based on the visual-language references. However, these works rely on separate visual and language encoders to extract multi-modal features, leading to limited information interaction. We note that several works <cit.> introduce a one-stream framework for visual object tracking. Different from them, we extend the one-stream framework to multi-modal VL tracking by training jointly on videos and language prompts. As shown in Fig. <ref>(b), for the first time we seamlessly integrate feature extraction and interaction into a unified backbone architecture for VL tracking. The proposed framework not only enables information flow from language to vision, but also allows bidirectional integration of information between visual and language features.
§.§ Transformer for Unified Architecture
Thanks to its scalability to very large models and its capability to handle sequential and non-sequential data, the transformer has become a prevailing architecture in both the natural language <cit.> and computer vision <cit.> communities. Following ViT <cit.>, a series of ViT variants have been developed to improve performance on vision tasks, including reducing computational cost <cit.> and architecture design <cit.>. Additionally, transformers have been extensively used in various multi-modal tasks <cit.>.
Considering the capacity of a unified transformer model to handle unimodal or multi-modal input with a shared encoder, a few pioneering works have explored unified multi-modal encoders <cit.>. For instance, ViLT <cit.> proposed a vision-language transformer without using regional features or deep convolutional visual embeddings. VATT <cit.> developed a video-audio-text transformer using multi-modal self-supervised pre-training to improve the performance of video action recognition and audio event classification. In this paper, we follow the trend of unified architectures for multi-modal data. To the best of our knowledge, the proposed All-in-One transformer is the first unified backbone architecture for multi-modal VL tracking.
§.§ Multi-Modal Learning
Recently, a promising multi-modal learning paradigm is to adopt transformers to process and relate information from multiple modalities <cit.>. CLIP <cit.> applied language prompts as supervisory signals to learn better visual representations. VisualBERT <cit.>, VilBERT <cit.>, and Unicoder-VL <cit.> combined visual and textual features in transformers to capture cross-modal relationships. However, previous works mainly focus on how to learn multi-modal representations by exploiting the complementary advantages of multiple modalities, or on fusing multi-modal features for prediction. Multi-modal alignment, i.e., discovering relationships and correspondences between fine-grained elements (e.g., objects and words) of instances (e.g., images and language descriptions) from two or more modalities, has rarely been explored. In this work, we propose the MMA module with CMA and IMA based on self-supervised CL <cit.> to explore efficient multi-modal learning for VL tracking.
§ PROPOSED METHOD
This section presents the All-in-One, a simple yet effective framework for the VL tracking task. Our All-in-One framework consists of an All-in-One transformer backbone, a multi-modal alignment module, and a tracking head, as shown in Fig. <ref>. The All-in-One transformer backbone is used to achieve feature extraction and information interaction between visual inputs (visual search region and visual template) and language input simultaneously in a unified architecture. Before that, visual embeddings and language embeddings are aligned via a multi-modal alignment module, providing more reasonable feature embeddings in the feature space. The output features of the visual search region are sent to the tracking head to predict the location of the target.
§.§ Problem Formulation
Before detailing the architecture of our All-in-One framework, we briefly review transformer tracking <cit.>, which achieves remarkable tracking performance. Given a video with a pair of visual template and visual search region 𝒳_xz and an initial target box ℬ_0, transformer tracking can be formulated as F_trans:{𝒳_xz,ℬ_0}→ℬ, where F_trans is the transformer tracker and ℬ is the predicted box of the target in all subsequent search frames. In general, the transformer tracker F_trans can be decomposed into Φ∘ f, where f:{𝒳_xz,ℬ_0}→ℋ denotes the backbone (e.g., ViT <cit.>) for the feature extraction and interaction function, ℋ represents the output features of the visual search region, and the tracking head Φ:ℋ→ℬ is adopted to predict the target box.
Specifically, a pair of images, namely visual search region 𝐱∈ℝ^3 × H_x× W_x and visual template 𝐳∈ℝ^3 × H_z× W_z are divided into N_x and N_z non-overlapping image patches of resolution P × P, where N_x=H_xW_x/P^2 and N_z=H_zW_z/P^2 are the number of patches of search region and template, respectively. Then, a linear projection is applied to these image patches to generate 1D tokens ℋ_x∈ℝ^N_x× D and ℋ_z∈ℝ^N_z× D, where D is the token dimension. Two learnable positional embeddings are added to ℋ_x and ℋ_z to retain the position information. After that, these tokens are concatenated as a sequence ℋ_xz^0 = [ℋ_x; ℋ_z] and fed to a L-layer transformer encoder. Here, we represent ℋ_xz^l-1 as inputs to the l-th encoder layer E^l. Formally, the forward operation of the l-th encoder layer can be written as:
ℋ^l_xz=E^l(ℋ^l-1_xz),l = 1,2,3,..., L
ℬ=Φ(ℋ^L_xz),
where each transformer encoder layer contains a multi-head self-attention (MHSA) and a feed-forward network (FFN). Each sub-layer is constructed as a residual connection, where layer normalization (LN) is followed by the residual connection. The visual search region tokens ℋ^L_x of the last transformer encoder layer are taken as the input of the tracking head Φ for target box prediction.
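For concreteness, a minimal PyTorch-style sketch of this tokenization and joint encoding is given below; the convolutional patch projection and the toy input sizes (matching the 256×256 search region and 128×128 template used in the implementation details) are common implementation choices, not a prescription of the exact code used by any tracker discussed here.

import torch
import torch.nn as nn

P, D = 16, 768
patch_embed = nn.Conv2d(3, D, kernel_size=P, stride=P)   # linear projection of P x P patches

x = torch.randn(1, 3, 256, 256)                           # visual search region
z = torch.randn(1, 3, 128, 128)                           # visual template
h_x = patch_embed(x).flatten(2).transpose(1, 2)           # (1, N_x, D) with N_x = 256
h_z = patch_embed(z).flatten(2).transpose(1, 2)           # (1, N_z, D) with N_z = 64
h_xz = torch.cat([h_x, h_z], dim=1)                       # joint sequence H_xz^0 = [H_x; H_z]

encoder_layer = nn.TransformerEncoderLayer(d_model=D, nhead=12, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=12)
h_out = encoder(h_xz)                                      # H_xz^L; the first N_x tokens go to the head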
For VL tracking <cit.>, an extra language prompt 𝒯 is introduced for each video to express the attributes, behavior, position (relative location), and surroundings of the target. Accordingly, VL tracking can be formulated as F_VL:{𝒳_xz,ℬ_0,𝒯}→ℬ, where F_VL is the VL tracker. Similarly, the VL tracker F_VL can also be decomposed into Φ∘ f^*, where Φ is the tracking head, and f^* represents the proposed unified backbone architecture in this work.
§.§ Unified Vision-Language Tracking
Fig. <ref> gives an overview of our All-in-One framework for VL tracking. To optimize the VL tracker F_VL, a pair of visual template and visual search region 𝒳_xz={𝒳_x, 𝒳_z}, and an extra language prompt 𝒯 are first fed to a patch embedding layer (a linear projection) and a text tokenizer <cit.>, respectively. They are mapped and flattened into D-dimensional embeddings, where D=768. We denote them as vision tokens (ℋ^0_x and ℋ^0_z), where ℋ^0_x∈ℝ^N_x× D and ℋ^0_z∈ℝ^N_z× D are visual search region tokens and visual template tokens, and language tokens ℋ^0_t∈ℝ^N_t× D, where N_t is the number of language tokens. Following <cit.>, a special classification token ([CLS]) is attached at the beginning of the language tokens. Then, ℋ^0_x, ℋ^0_z, and ℋ^0_t are aligned with the multi-modal alignment module (see section <ref>) in the embedding space. It is worth noting that the well aligned vision embeddings and language embeddings can facilitate multi-modal representation learning and interaction <cit.>. Here, we still refer to the aligned vision embeddings and language embeddings as ℋ^0_x, ℋ^0_z, and ℋ^0_t, respectively. Afterwards, we perform a modal mixup operation <cit.> between the aligned vision embeddings and language embeddings as follows:
𝐅^0_x=ℋ^0_x⊙ Linear(ℋ^0_t)+ℋ^0_x,
𝐅^0_z=ℋ^0_z⊙ Linear(ℋ^0_t)+ℋ^0_z,
where ⊙ represents the Hadamard product, Linear(·) is a linear projection layer. In this way, the language information is injected into vision embeddings via the modal mixup operation. Moreover, Eqs. (<ref>)-(<ref>) also construct a bidirectional information flow between vision and language modalities that allows mutual guidance for multi-modal feature extraction and interaction. By establishing a bidirectional information flow between well aligned visual and language signals as early as possible via a unified transformer backbone, we can avoid the loss of discriminative information and thus make the extracted features highly target-aware <cit.>.
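A minimal PyTorch sketch of this mixup step is given below. Pooling the language tokens to their mean (the default choice in the ablation study later) is an assumption needed to make the Hadamard product in the mixup equations shape-compatible, and all tensor sizes are illustrative.

import torch
import torch.nn as nn

class ModalMixup(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.proj = nn.Linear(dim, dim)          # Linear(.) applied to the language tokens

    def forward(self, vis_tokens, lang_tokens):
        # vis_tokens: (B, N_v, D); lang_tokens: (B, N_t, D)
        lang = self.proj(lang_tokens).mean(dim=1, keepdim=True)   # pooled language embedding (B, 1, D)
        return vis_tokens * lang + vis_tokens    # F^0 = H^0 (Hadamard) Linear(H_t^0) + H^0

mixup = ModalMixup()
f_x = mixup(torch.randn(2, 256, 768), torch.randn(2, 12, 768))   # language-injected search tokens
f_z = mixup(torch.randn(2, 64, 768), torch.randn(2, 12, 768))    # language-injected template tokens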
Formally, the operations of the l-th encoder of our All-in-One transformer backbone can be expressed as:
𝐐=𝐊=𝐕=[𝐅^l_x; 𝐅^l_z],
[𝐅^' l_x; 𝐅^' l_z]= LN([𝐅^l_x; 𝐅^l_z]+ MHSA(𝐐,𝐊,𝐕)),
[𝐅^l+1_x; 𝐅^l+1_z]= LN([𝐅^' l_x; 𝐅^' l_z]+ FFN([𝐅^' l_x;𝐅^' l_z])),
where 𝐐, 𝐊, and 𝐕 represent the query, key, and value embeddings, [;] denotes the concatenation operation, 𝐅^l_x and 𝐅^l_z are the input embeddings of the l-th transformer encoder. Therefore, the language information-injected vision embeddings are jointly processed by the transformer encoder, enabling seamless multi-modal feature extraction and integration. Finally, the visual search region embeddings of the last layer of the transformer encoder are reshaped into a 2D feature map. The feature map is fed into the tracking head to predict the location of the target.
To model the interaction between language and vision features, recent VL trackers <cit.> adopted a customized fusion model to directly serialize vision and language embeddings into sequences to learn a joint multi-modal embedding. Although our All-in-One transformer backbone, which uses the pretrained ViT <cit.>, is able to model long-range dependencies in sequential data and thus alleviates the negative effects of modal differences on multi-modal learning, the fact that vision embeddings and language embeddings lie in different feature spaces still makes it challenging for the transformer encoder to learn their interactions <cit.>. To tackle this limitation, we further propose a self-supervised MMA module, which is used before feature extraction and integration. The MMA module includes CMA and IMA, which can efficiently learn more reasonable feature distributions as shown in Fig. <ref>.
§.§ Multi-Modal Alignment Module
Cross-modal Alignment.
Since the vision and language embeddings from the same video are distributed in different feature spaces, a natural thought is to enforce them to be close in the feature space to reduce the difficulty of multi-modal interaction. With this in mind, we introduce the CMA to pull matched vision and language embeddings closer in the feature space, while pushing away mismatched pairs. In essence, the goal of CMA is to maximize the mutual information (MI) <cit.> between matched vision and language embeddings, which contain the same semantics. Fig. <ref> presents an example: the high-level language embedding (green star) and sparse vision embedding (yellow star) from the same video are pulled closer in the feature space. Specifically, visual search region tokens ℋ^0_x, visual template tokens ℋ^0_z and language tokens ℋ^0_t are projected into the same dimension through three linear projections, which we denote as 𝐟_x∈ℝ^C, 𝐟_z∈ℝ^C, and 𝐟_t∈ℝ^C, respectively, where C=256. To maximize the MI between vision and language tokens, we optimize the InfoNCE loss <cit.> between vision and language, which is a lower bound on their MI. Formally, the InfoNCE losses of vision-to-language are defined as:
ℒ_x2t(𝐟^i_x,𝐟^i_t,𝐟_t)=-𝔼_(𝐟_x,𝐟_t)logexp(sim(𝐟^i_x,𝐟^i_t)/τ)/∑_j=1^N-1exp(sim(𝐟^i_x,𝐟^j_t)/τ),
ℒ_z2t(𝐟^i_z,𝐟^i_t,𝐟_t)=-𝔼_(𝐟_z,𝐟_t)logexp(sim(𝐟^i_z,𝐟^i_t)/τ)/∑_j=1^N-1exp(sim(𝐟^i_z,𝐟^j_t)/τ),
where 𝐟^i_x, 𝐟^i_z, and 𝐟^i_t are the two vision tokens and the language token of the same video, respectively, 𝐟_t={𝐟^1_t,...,𝐟^N-1_t} is a set of negative language examples for 𝐟^i_x or 𝐟^i_z, N is the batch size, sim(𝐟^i_x, 𝐟^i_t)=𝐟^i_x·𝐟^i_t/(||𝐟^i_x|| ||𝐟^i_t||), and τ is a temperature parameter. The InfoNCE losses of language-to-vision, ℒ_t2x(𝐟^i_t,𝐟^i_x,𝐟_x) and ℒ_t2z(𝐟^i_t,𝐟^i_z,𝐟_z), can be calculated similarly. Hence, the CMA loss can be formulated as ℒ_cma=1/2[ℒ_x2t(·)+ℒ_z2t(·)]+1/2[ℒ_t2z(·)+ℒ_t2x(·)].
Intuitively, by optimizing the CMA loss, vision and language embeddings can be well aligned in the feature space as in Fig. <ref>. However, the CMA ignores the significant intra-modal supervisory signals (i.e., visual template and visual search region) for learning the desired multi-modal features. Aligning the visual template with the visual search region enables learning temporal-invariant features <cit.>, which are crucial to enhance the discriminative ability of tracking models. To this end, we further propose the IMA to fully utilize the intra-modal temporal supervision information.
Intra-modal Alignment. The language prompt mainly describes the global/static semantics of the target, while the visual modality contains the temporal information of the target (e.g., motion and appearance variation through the video) <cit.>. As mentioned earlier, IMA aims to learn temporal-invariant features within the same modality from positive and negative samples. Therefore, we only consider the visual modality in IMA. Specifically, we consider visual search region tokens 𝐟_x∈ℝ^C and visual template tokens 𝐟_z∈ℝ^C from the same video as positive pairs, and tokens from different videos as negative pairs. We also apply the contrastive loss to maximize the MI between 𝐟_x and 𝐟_z. Formally, the InfoNCE losses between vision tokens can be defined as:
ℒ_x2z(𝐟^i_x,𝐟^i_z,𝐟)=-𝔼_(𝐟_x,𝐟_z)logexp(sim(𝐟^i_x,𝐟^i_z)/τ)/∑_j=1^2(N-1)exp(sim(𝐟^i_x,𝐟^j)/τ),
ℒ_z2x(𝐟^i_z,𝐟^i_x,𝐟)=-𝔼_(𝐟_z,𝐟_x)logexp(sim(𝐟^i_z,𝐟^i_x)/τ)/∑_j=1^2(N-1)exp(sim(𝐟^i_z,𝐟^j)/τ),
where 𝐟={𝐟^1_x,...,𝐟^N-1_x,𝐟^1_z,...,𝐟^N-1_z} is a set of negative examples for 𝐟^i_x or 𝐟^i_z, N is the batch size. Then, the IMA loss can be formulated as ℒ_ima = 1/2[ℒ_x2z(·)+ℒ_z2x(·)].
The IMA loss encourages learning representations by aligning temporal-invariant positive pairs within the visual modality. Importantly, it enforces the uniformity of vision and language, resulting in a uniform distribution across the whole feature space <cit.>. Therefore, CMA and IMA have complementary advantages in multi-modal learning: on the one hand, CMA pulls matched vision and language embeddings close in the feature space; on the other hand, IMA maximizes the temporal-invariant features between visual tokens and makes the multi-modal features evenly distributed in the feature space. As shown in Fig. <ref>, combining them makes the learned representations more reasonable, and further facilitates joint multi-modal feature learning and interaction.
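The sketch below gives a compact PyTorch implementation of both alignment losses with in-batch negatives. It simplifies the IMA negative set to the other-video tokens of the opposite visual branch (the full 2(N-1) negative set described above would also include same-branch tokens), so it should be read as an approximation of the objectives rather than the exact training code.

import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.5):
    # a, b: (N, C) projections; row i of a and row i of b form the positive pair
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau                               # cosine similarities scaled by temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def mma_losses(f_x, f_z, f_t, tau=0.5):
    # cross-modal alignment: vision-to-language and language-to-vision terms
    l_cma = 0.5 * (info_nce(f_x, f_t, tau) + info_nce(f_z, f_t, tau)) \
          + 0.5 * (info_nce(f_t, f_x, tau) + info_nce(f_t, f_z, tau))
    # intra-modal alignment: search region and template of the same video are positives
    l_ima = 0.5 * (info_nce(f_x, f_z, tau) + info_nce(f_z, f_x, tau))
    return l_cma, l_ima

l_cma, l_ima = mma_losses(torch.randn(32, 256), torch.randn(32, 256), torch.randn(32, 256))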
§.§ Tracking Head and Loss
Following <cit.>, the tracking head is decomposed into two branches for classification and bounding box regression. As shown in Fig. <ref>, the learned visual search region tokens are first reshaped into a 2D feature map according to the original spatial resolution, followed by a 4-layer fully convolutional network to predict the target classification score map. In the classification branch, a weighted focal loss ℒ_cls <cit.> is adopted to enhance the model's ability to distinguish objects from the background. The bounding box regression branch is used to predict the center coordinate offset of the object and the size of the object. To regress the center coordinate offset and size of objects, we combine the ℓ_1 loss and the generalized IoU loss ℒ_giou <cit.>. The regression loss is calculated as ℒ_reg=λ_giouℒ_giou+λ_1ℒ_1, where λ_giou and λ_1 are two hyper-parameters.
To train our model in an end-to-end manner, we convert it into a multi-task optimization problem <cit.>, simultaneously optimizing classification loss, regression loss, CMA loss, and IMA loss. Finally, the overall loss function for our model is defined as:
ℒ_total=ℒ_cls+ℒ_reg+λ_cmaℒ_cma+λ_imaℒ_ima,
where λ_cma and λ_ima are trade-off weights to balance the multi-task optimization problem.
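Putting the pieces together, a minimal sketch of the overall training objective with the weight values reported later in the experimental setup (λ_giou = 2, λ_1 = 5, λ_cma = λ_ima = 1) might look as follows; the individual loss terms are assumed to be computed elsewhere.

def total_loss(l_cls, l_giou, l_l1, l_cma, l_ima,
               lam_giou=2.0, lam_l1=5.0, lam_cma=1.0, lam_ima=1.0):
    # L_reg combines the generalized IoU loss and the l1 loss
    l_reg = lam_giou * l_giou + lam_l1 * l_l1
    # L_total = L_cls + L_reg + lambda_cma * L_cma + lambda_ima * L_ima
    return l_cls + l_reg + lam_cma * l_cma + lam_ima * l_ima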
§ EXPERIMENTS
To demonstrate the effectiveness and generalization ability of our approach, we conduct experiments on all five public VL tracking benchmarks to date, including UAV scenes (WebUAV-3M <cit.>), generic scenes (LaSOT <cit.>, LaSOT_Ext <cit.>, OTB99-L <cit.>), and real-synthetic scenes (TNL2K <cit.>).
§.§ Implementation Details
We adopt ViT-Base <cit.> as the architecture of the All-in-One transformer backbone. It is a stack of L = 12 transformer encoder layers, and each layer contains two sub-layers, a multi-head self-attention layer and a feed-forward network. Each sub-layer is a residual connection structure followed by layer normalization. To accelerate convergence, we initialize our backbone with MAE-pretrained weights <cit.>. The visual template and visual search region are 2^2 times and 4^2 times the size of the target bounding box, and are then resized to 128×128 and 256×256, respectively. We use the bert-base-uncased tokenizer <cit.> to tokenize language prompts.
Our experiments are conducted on an Ubuntu server with two NVIDIA RTX 3090 GPUs. The training data includes the training splits of LaSOT <cit.>, GOT-10k <cit.>, TrackingNet <cit.>, COCO <cit.>, OTB99-L <cit.>, TNL2K <cit.>, WebUAV-3M <cit.>, and VisualGenome <cit.>. For several datasets (<cit.>) without natural language prompts, we use class names as pseudo language labels, similar to <cit.>. The tracker is optimized using an AdamW optimizer <cit.> with an initial learning rate of 4×10^-4. The total number of epochs is 300. The weight decay factor is 1×10^-4 after 240 epochs. Following <cit.>, the hyper-parameters λ_giou and λ_1 are set to 2 and 5, while λ_cma and λ_ima are both set to 1 without parameter tuning. We set the temperature parameter τ=0.5. The batch size N is 32. Following <cit.>, we adopt the one-pass evaluation with five metrics, i.e., precision (P), normalized precision (P_norm), success rate (AUC), complete success rate (cAUC), and accuracy (ACC), to measure the tracking performance.
§.§ Ablation Study and Analysis
We first conduct ablation experiments trained on the LaSOT training set and evaluated on the LaSOT test set to validate different components of our approach.
Impact of All-in-One Transformer (AOT). To analyze the impact of the AOT, we train two trackers, AOT-[CLS] and AOT-[Mean], using the [CLS] token and the mean token of the language prompt in the AOT, respectively. From Tab. <ref>, we can find that the AOT clearly boosts the tracking performance. Specifically, the P scores are improved by 1.6% (from 66.3% to 67.9%) and 2.1% (from 66.3% to 68.4%), respectively, compared with the baseline method. Importantly, using the mean token is slightly better than the [CLS] token. We speculate that the possible reason is that the mean token provides more semantic information about the target than the [CLS] token. Therefore, the mean token is used as our default setting in the AOT.
Impact of Cross-modal Alignment (CMA). From Tab. <ref>, we can see that the CMA component improves tracking performance by 0.1%, 0.1%, and 0.2% in terms of AUC, P_norm, and P scores, respectively. This validates that the CMA is beneficial for aligning vision and language embeddings in the feature space and improving tracking accuracy.
Impact of Intra-modal Alignment (IMA). By combining the CMA and IMA, we improve the tracking AUC by 0.4% (from 64.0% to 64.4%), P_norm by 0.2% (from 72.6% to 72.8%), and P by 0.4% (from 68.4% to 68.8%), as shown in Tab. <ref>. The significant performance gains demonstrate that the MMA module makes the distributions of vision and language embeddings more reasonable in the feature space, and facilitates feature learning and interaction.
Visualization.
To further investigate the effectiveness of our All-in-One framework, we visualize the response maps and the tracking results in Fig. <ref>. With the AOT, our approach highlights the target region thanks to the language prompt, even with complex background distractions. Combining AOT and MMA, our approach produces a more unambiguous and discriminative response, and predicts a more precise bounding box. The previous visual search regions also demonstrate that our approach can focus on the real target when facing complex scenarios, such as occlusion and background clutter.
Sentence Prompts vs. Class Prompts. To analyze the impact of language prompts, we train two trackers, Ours-S and Ours-C, with sentence (original language prompt) and class (class name of the video) prompts on the LaSOT training set. From Tab. <ref>, we have some observations. First, better tracking results are achieved when the language prompts for training and testing are consistent, i.e., training using sentences/classes and testing using sentences/classes. Second, the best performance (64.6% in AUC, 73.2% in P_norm, 68.9% in P) is obtained using class prompts for both training and testing on the LaSOT dataset. We speculate that trackers are sensitive to ambiguous language prompts. Compared with sentence prompts, class prompts may introduce less ambiguity for both training and evaluation <cit.>. Additionally, as shown in Fig. <ref>, given ambiguous language prompts our trackers fail to localize the real object. A potential solution is to provide clear sentence prompts or clear class prompts (see Fig. <ref>), both of which enable our trackers to accurately localize the real object.
Speed Analysis. Real-time tracking is urgently demanded in many practical applications <cit.>. Our one-stream framework achieves joint multi-modal feature extraction and interaction with high efficiency. Tab. <ref> shows that the average inference speed of our approach is around 60 frames per second (FPS) without model acceleration. This is clearly faster than many state-of-the-art (SOTA) real-time trackers <cit.> and common video frame rates <cit.>, demonstrating that the time cost of applying our approach in real-world applications is negligible.
§.§ Evaluation on UAV Scenes
WebUAV-3M. WebUAV-3M <cit.> is the latest million-scale UAV tracking dataset with visual bounding box, language and audio annotations, which contains 4,500 videos and covers over 200 target categories. UAV tracking scenes are extremely challenging due to continuous viewpoint changes, motion blur, low resolution, etc. As reported in Tab. <ref>, All-in-One outperforms other visual trackers and VL trackers in tracking accuracy. Furthermore, our tracker improves P/AUC/P_norm/cAUC by a large margin as shown in Fig. <ref>. Notably, with a simple and general unified model architecture, our tracker outperforms the most competitive VL tracker VLT_TT <cit.> by 7.7% in P, 9.5% in P_norm, 9.8% in AUC, and 9.9% in cAUC.
§.§ Evaluation on Generic Scenes
LaSOT. LaSOT <cit.> is a densely annotated large-scale VL tracking dataset that contains 1,120 videos for training and 280 long-term videos for evaluation. In this dataset, objects disappear and reappear frequently, making long-term tracking in generic scenes highly challenging. From Tab. <ref>, we can observe that our approach sets a new SOTA on LaSOT, which provides compelling evidence for long-term tracking and suggests that our approach is capable of recognizing objects in extremely long videos. Fig. <ref> demonstrates that All-in-One outperforms other trackers on eight challenging attributes, i.e., background clutter, motion blur, illumination variation, low resolution, fast motion, full occlusion, deformation, and aspect ratio change.
LaSOT_Ext. LaSOT_Ext <cit.> is the extended version of <cit.>, which comprises 150 manually annotated videos. Tab. <ref> indicates that All-in-One surpasses all previous advanced trackers and obtains the best AUC score of 71.7%, a significant improvement of 2.6% over the current SOTA one-stream tracker OSTrack <cit.>.
OTB99-L. OTB99-L <cit.> is an early VL tracking dataset that contains 51 videos for training and 48 videos for public evaluation. As shown in Tab. <ref>, compared with recent SOTA trackers, our tracker achieves comparable results, which validates the effectiveness of our tracker.
§.§ Evaluation on Real-Synthetic Scenes
TNL2K. TNL2K <cit.> is a recently released dataset, which comprises 1,300 videos for training and 700 videos for evaluation in real and synthetic (cartoon videos and virtual game videos) scenes with diverse challenging factors, such as significant appearance variation and adversarial samples. Results in Tab. <ref> show that our approach achieves the highest AUC (55.3%) and P (57.2%) scores, demonstrating the generalization ability of All-in-One.
§.§ Qualitative Performance
As shown in Fig. <ref>, we compare All-in-One with three SOTA trackers (VLT_TT <cit.>, TransT <cit.>, and SiamRPN++ <cit.>) on three videos from the LaSOT test set, in which the main challenges include similar distractors, severe viewpoint changes, background clutter, appearance variation, occlusion and extreme illumination. We can see that All-in-One is more robust than the other methods. For instance, the previously most competitive VL tracker VLT_TT gradually loses the target as the target appearance varies in the video of Sepia-16 (the second video in Fig. <ref>). By contrast, All-in-One provides accurate and stable prediction results, demonstrating the effectiveness of our unified framework in complex environments.
§ CONCLUSION AND DISCUSSION
Conclusion.
In this work, we present All-in-One, a new unified framework for multi-modal VL tracking, which includes the All-in-One transformer and the multi-modal alignment module. The core insight is to establish bidirectional information flow between well-aligned visual and language signals as early as possible via a unified transformer backbone. Besides, the multi-modal alignment module, based on cross-modal and intra-modal contrastive objectives, enables learning more reasonable VL representations, which effectively facilitates joint multi-modal feature learning and interaction. Extensive experiments on multiple VL tracking benchmarks have demonstrated the effectiveness and generalization of our approach against state-of-the-art trackers.
Discussion. We first provide a discussion to demonstrate that developing a foundation model, All-in-One, for VL tracking is valuable in the era of large language/vision models. (1) As the echoes of the remarkable success of large language models (e.g., ChatGPT <cit.>, GPT-4 <cit.>) continue to permeate the natural language community, formidable counterparts such as ViT-22B <cit.> have emerged in the computer vision community. Although they have emergent abilities <cit.>, the huge training cost (thousands of GPUs) and environmental unfriendliness cannot be ignored <cit.>. Instead, we believe that training a foundation model for a specific task is more flexible and affordable for research purposes. (2) Despite the breakthroughs in large multi-modal models <cit.>, they have not achieved the same success as large language models, highlighting the need to explore foundation models in the multi-modal domain. All-in-One is designed to be such a foundation model for multi-modal VL tracking. (3) All-in-One has great potential to become a foundation model for multi-modal tracking because it enables more accurate and efficient processing of multi-modal data by fully utilizing both vision and language information. Our model not only learns all modalities in one backbone (All-in-One), but also trains once and generalizes well to all VL tracking datasets (Once-for-All) with complex and user-defined language prompts. (4) Additionally, having a streamlined and standardized foundation model for multi-modal tracking can facilitate the development of more complex and specialized models in the future, allowing for even more sophisticated analysis and understanding of multi-modal data.
Our work still has the following two limitations. (1) Our approach is designed to localize objects of interest based on object boxes and language prompts. Inevitably, it suffers from inaccurate language prompts, such as ambiguous language descriptions, or cases where the states (position and appearance) of objects change significantly in videos, making them inconsistent with the language prompts. (2) While All-in-One is a unified framework for multi-modal VL tracking, it currently focuses mainly on language prompts. In fact, All-in-One has great potential to be extended to leverage more types of prompts, such as audio, point, mask, and scribble prompts <cit.>. We leave this for future work.