entry_id: http://arxiv.org/abs/2307.02845v1
published: 2023-07-06 08:22:50
title: Modelling language ideologies for the dynamics of languages in contact
authors: Pablo Rosillo-Rodes, Maxi San Miguel, David Sanchez
primary_category: physics.soc-ph
categories: physics.soc-ph, cond-mat.stat-mech
Modelling language ideologies for the dynamics of languages in contact

Pablo Rosillo-Rodes, Maxi San Miguel, David Sánchez
Institute for Cross-Disciplinary Physics and Complex Systems IFISC (UIB-CSIC), Campus Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain
prosillo@ifisc.uib-csic.es
August 1, 2023

In multilingual societies, it is common to encounter different language varieties. Various approaches have been proposed to discuss different mechanisms of language shift. However, current models exploring language shift in languages in contact often overlook the influence of language ideologies. Language ideologies play a crucial role in understanding language usage within a cultural community, encompassing shared beliefs, assumptions, and feelings towards specific language forms. These ideologies shed light on the social perceptions of different language varieties expressed as language attitudes. In this study, we introduce an approach that incorporates language ideologies into a model for contact varieties by considering speaker preferences as a parameter. Our findings highlight the significance of preference in language shift, which can even outweigh the influence of language prestige associated, for example, with a standard variety. Furthermore, we investigate the impact of the degree of interaction between individuals holding opposing preferences on the language shift process. Quite expectedly, our results indicate that when communities with different preferences mix, the coexistence of language varieties becomes less likely. However, variations in the degree of interaction between individuals with contrary preferences notably lead to non-trivial transitions from states of coexistence of varieties to the extinction of a given variety, followed by a return to coexistence, ultimately culminating in the dominance of the previously extinct variety. By studying finite-size effects, we observe that the duration of coexistence states increases exponentially with network size. Ultimately, our work constitutes a quantitative approach to the study of language ideologies in sociolinguistics.

§ INTRODUCTION

Modelling language shift is valuable because it can unveil the mechanisms that lead to language death or its maintenance <cit.>. The pioneering model of Abrams and Strogatz <cit.> assumed that language shift is mostly driven by a prestige parameter, which quantifies the relative strength between two linguistic varieties in contact but with different sociolinguistic statuses <cit.>. When the transition rate for speakers to change their initial language is proportional to the number of people that speak the target language, the only stable fixed point of the model implies the extinction of the variety with the lower prestige. Interestingly, the extinction processes of languages have analogs with the evolutionary properties of biological species <cit.>. Since then, different mechanisms <cit.> have been proposed to enable the coexistence of varieties seen in reality, which in fact is a rather common situation in multilingual societies <cit.>. For instance, a community of bilingual speakers may help stabilize a fixed point with different fractions of monolingual speakers. Another possibility is to introduce a volatility parameter, which accounts for the fact that a speech community can be more opaque to the influence of speakers with a different variety. However, all these theoretical approaches (reviewed in, e.g., Refs.
<cit.>) do not fully take into account the role of language ideologies, a social factor that is currently considered a key concept in understanding language use and attitudes within a cultural group. This is the gap we want to fill with our work. Language ideologies comprise a wide spectrum of beliefs, assumptions and feelings that a group of speakers socially share about certain language forms <cit.>. As such, ideologies lead to linguistic attitudes <cit.> and values that express, through explicit actions, degrees of favor or disfavor toward a language or a variety. These psychological tendencies generate prejudices, stereotypes, biases, etc. A commonplace case refers to languages that have undergone a standardization process in which the standard variety is advocated in school, government offices and mass media against the vernacular variety or dialect spoken in a particular region <cit.>. Typically, this leads to an overt prestige that encourages speakers to use the standard variety by penalizing utterances that depart from the linguistic norms. However, there also exists a covert prestige <cit.> that describes a positive disposition towards forms socially considered lower, due to cultural attachment or group identity with regard to the vernacular variety. This can happen owing to the presence of ethnic differences (e.g., African-American English <cit.>) or the influence of a third variety (e.g., bilingual Basque-Spanish speakers preferring on average Basque Spanish to Standard Spanish <cit.>), among other causes.

From the viewpoint of mathematical modeling, an equivalent situation considers the competition between a global and a local language (the latter may be endangered), where these two languages play the roles of the standard and vernacular varieties indicated above. Further, one could envisage two ways of speaking (young versus old generations, high versus low socioeconomic classes, etc.) associated with distinct sociological parameters. Our theoretical proposal is thus completely general in this respect and simply considers two speech communities with different linguistic preferences and two language varieties in contact with different prestige. In this way, our findings can be applied to a broad range of sociolinguistic situations.

Our model builds upon previous efforts <cit.> that consider communities of binary agents with different states. The agents can change their states by interacting with their neighbors following predefined rules. As a consequence, the state of the population evolves in time until a consensus is reached (or not). In our case, the state is the language or variety spoken by the agent, while the transition rates for variety adoption reflect the influence of the surrounding individuals in terms of the variety prestige and the fraction of those individuals speaking either of the two varieties. Crucially, the agents can hold one of two internal preferences caused by their language ideologies. These preferences for the standard or the vernacular variety determine in turn the values assigned to each variety prestige. In short, the model accounts not only for what language the individuals speak but also for what language they prefer to speak. Our findings reveal that in some cases the agents' preference can counteract the force of the most prestigious variety, thus leading to the survival (or even dominance) of the local variety relative to the standard variety.
More strikingly, our model shows a rich constellation of phases: upon increasing the coupling between the two communities with different preferences, we find a transition from a social state where the vernacular (majority) language dominates to a phase where this variety becomes extinct, sandwiched between intermediate regions for which the coexistence between varieties is possible, and finally a phase where the standard (minority) language is dominant across the society. These results can be better understood in the mean-field limit where the agents are connected all to all. Yet we also investigate finite-size effects with the aid of agent-based modeling and calculate the survival times. Below, we give more details on this complex landscape, which both deepens our knowledge of the dynamics of languages in contact and may have an impact on the design of appropriate language policies that seek to revitalize endangered languages.

§ MODEL

Our goal is to quantify the influence that the linguistic preference of the speakers may have over the distribution of speakers among the different varieties of a language. For this purpose, we propose a mathematical framework which models a society in which only one language with two different varieties, the standard and the vernacular, exists. As explained in the Introduction, this model is also valid for two languages or for two ways of speaking induced by sociological factors. Speakers therefore speak either one variety or the other, but they may prefer one variety over the other. Let X (Y) denote the standard (vernacular) variety, while the preference is labeled with 1 or 2. This implies that we have four groups of speakers: x_1, x_2, y_1, and y_2. On the one hand, x_1 is the fraction of standard speakers that prefer the standard variety whereas x_2 is the fraction of standard speakers that prefer the vernacular variety. On the other hand, y_2 is the fraction of vernacular speakers that prefer the vernacular variety whereas y_1 is the fraction of vernacular speakers that prefer the standard variety. This depiction of a society with one language, two varieties and four population groups is the minimal model that captures the essential influence that preferences have on language shift. Since we are dealing with population fractions, we consider a society consisting of a large number of interacting speakers. The dynamics of the system when the speakers interact all to all (mean-field approximation) are given by the rate equations

dx_1/dt = P_y_1 → x_1 y_1 - P_x_1 → y_1 x_1,
dx_2/dt = P_y_2 → x_2 y_2 - P_x_2 → y_2 x_2,
dy_1/dt = P_x_1 → y_1 x_1 - P_y_1 → x_1 y_1,
dy_2/dt = P_x_2 → y_2 x_2 - P_y_2 → x_2 y_2,

where the transition rates to shift from variety X (Y) to variety Y (X) are accordingly proportional to the total number of Y (X) speakers:

P_x_1 → y_1 = (1-s_1)(y_1+y_2),
P_x_2 → y_2 = s_2(y_1+y_2),
P_y_1 → x_1 = s_1(x_1+x_2),
P_y_2 → x_2 = (1-s_2)(x_1+x_2).

Importantly, the shift probabilities given by Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) include the parameters s_1 and s_2, which account for the prestige of the standard variety for the vernacular speakers and vice versa. Quite generally, we take s_1,s_2>0.5 to model the fact that those speakers whose preference is not aligned with their language switch more easily than those speakers whose preference is aligned. For instance, vernacular speakers who prefer to speak language X (i.e., the group y_1) change with a rate proportional to s_1 [Eq.
(<ref>)] whereas standard speakers who speak their preferred language (i.e., the group x_1) change with a smaller rate, since this is proportional to 1-s_1 [Eq. (<ref>)]. This ingredient is absent from previous models and emphasizes the importance of preference alignment or misalignment in language shift processes. On the other hand, we take s_1>s_2 to reflect the fact that overt prestige, associated with the higher-status language or standardized variety, is higher than covert prestige, associated with the lower-status language or vernacular variety. However, the mechanism for preference alignment operates as before: those speakers who prefer variety Y (i.e., the group x_2) are more likely to shift [Eq. (<ref>)] than those vernacular speakers whose preference agrees with their variety (i.e., the group y_2), see Eq. (<ref>). The fractions in Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) obviously obey x_1 + x_2 + y_1 + y_2 = 1. We note that transitions are not allowed between groups of different preferences. Thus, Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) constitute a fixed-preference model. In Fig. <ref> we illustrate the transitions between the different population groups x_1, y_1, x_2 and y_2, which occur only between groups of speakers with the same preference, i.e., x_1 ↔ y_1 and x_2 ↔ y_2. This is especially relevant for populations that may change their language but not their preference. Of course, preferences can evolve with time, but language ideologies are typically maintained in a population over a generation <cit.>, much longer than changes in language usage, which can occur at a significantly higher rate <cit.>. Therefore, our results are restricted to time ranges over which language shift can take place but preferences remain constant. As we mentioned, the preference of the speakers is fixed beforehand. We define the constant α as the total fraction of speakers who prefer the standard variety, α = x_1 + y_1, so that dα/dt = dx_1/dt + dy_1/dt = 0. Using Eq. (<ref>), this also determines the fraction of speakers who prefer the vernacular variety, x_2 + y_2 = 1-α.

As for the definition of the specific dependence on s_1 and s_2 of each of the rates in Eqs. (<ref>)-(<ref>), we have taken into account the influence of the preference of the speakers. For example, in P_x_1 → y_1 the dependence on s_1 is justified by the preference of the speakers for the standard variety; in this sense, they are sensitive to the prestige that the variety they prefer may have from the point of view of the speakers who do not use that variety. As P_x_1 → y_1 consists of a change from the standard variety to the vernacular one, the dependence is such that the higher the prestige, the lower the rate. The opposite happens in the case of P_x_2 → y_2. When the speakers prefer the vernacular variety, they are sensitive to s_2, and as P_x_2 → y_2 involves a switch from the standard variety to the vernacular one, the higher the prestige of the latter, the higher the rate will be. In these parameters we encode the information regarding situations in which speakers globally attribute a higher authority or correctness to the standard variety, and that is why we impose s_2 < s_1. In addition, we set s_1 > 0.5 and s_2 > 0.5 to avoid situations in which a speaker who prefers a given variety switches faster to the other one. For example, if x_1 + x_2 = y_1 + y_2 = 0.5 and s_1 < 0.5, following Eqs.
(<ref>) and (<ref>) we would have P_x_1 → y_1 > P_y_1 → x_1, meaning that speakers would be more likely to switch to the variety they do not prefer, a situation which we want to avoid. Thus, the influence of preferences on the dynamics is also encoded in this restriction. These rates are proportional to x_1+x_2 and y_1+y_2, i.e., the total fraction of standard or vernacular variety speakers. This is because we consider that the influence that the prestige of a variety has in causing a speaker to switch to it depends on the pressure exerted by the total number of speakers of that variety.

§ FIXED POINTS

To understand the results more easily, it is convenient to make the change of variables (X, Y, ω, z) = (x_1+x_2, y_1+y_2, x_1-x_2, y_2-y_1), where X and Y are clearly the total numbers of speakers of the standard and vernacular varieties, respectively, and ω and z quantify how many speakers of X and Y, respectively, are aligned with their internal preferences. Due to the constraints imposed by Eqs. (<ref>) and (<ref>), our original set of four independent variables x_1, x_2, y_1 and y_2 turns into a set with only two independent variables, chosen to be X and ω. Thus, the dynamics of the system are governed by the rate equations

dX/dt = (s_1-s_2)X(1-X) - 1/2(s_1+s_2-1)[(1-2α)X + (2X-1)ω],

dω/dt = 1/2[(s_1-s_2-1)ω + 2(s_2-s_1)Xω + 2(s_1+s_2-1)X(1-X) + (2α-1)(1+s_1-s_2)X],

which result from properly combining Eqs. (<ref>), (<ref>), (<ref>) and (<ref>). The two terms constituting Eq. (<ref>) have a clear interpretation. The first corresponds to a logistic equation describing the unhindered growth of the most prestigious variety (since s_1 > s_2), with X = 1 as its only stable fixed point. However, the inclusion of preferences introduces a deviation from this unrestrained growth. The sign of dX/dt is then determined by the second term, since [(1-2α)X + (2X-1)ω] may be either positive, negative or null, while 0 < s_1+s_2-1 < 1 holds at all times because 0.5 < s_2 < s_1 ≤ 1.

We show in Table <ref> the analytical expressions for the fixed points of Eqs. (<ref>) and (<ref>) for these two independent variables X and ω. Fixed points with IDs E and D imply the extinction of one of the varieties: E describes a situation in which all speakers employ the vernacular variety, while D implies that all individuals speak the standard variety. In turn, C implies coexistence of speakers of both varieties. This is the first remarkable result as compared with Ref. <cit.>, where coexistence is not possible for linear transition rates. The fraction of speakers of each variety and their preference distribution depend on s_1, s_2 and α. In contrast, the extinction and dominance fixed points, E and D respectively, are independent of the parameters of the model. Indeed, they constitute absorbing states in a stochastic simulation. We will later elaborate on this observation when we discuss our agent-based simulations. The fixed points of extinction (E) and dominance (D) of the standard variety always exist within the limits of the phase plane, i.e., 0 ≤ X ≤ 1, -1 ≤ ω ≤ 1. However, the fixed point implying coexistence of varieties (C) only lies inside the existence range of the chosen variables when α_1 ≤ α ≤ α_2, with α_1 = (1-s_1)(2s_2-1)/(s_1+s_2-1) and α_2 = s_1(2s_2-1)/(s_1+s_2-1). As for the stability of the fixed points, the two eigenvalues λ_1 and λ_2 of the Jacobian matrix that results from the linearization of the dynamical equations around one of these fixed points are given by Eq. (<ref>).
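As an illustration of how Eqs. (<ref>) and (<ref>) can be explored numerically, the following minimal Python sketch (our own, not part of the original analysis) integrates the reduced system for (X, ω) as transcribed above and estimates the local stability of the long-time state from a finite-difference Jacobian; the parameter values, initial condition and tolerances are arbitrary illustrative choices.

```python
# Minimal sketch: integrate the reduced mean-field equations for (X, omega)
# as written above and check the stability of the long-time state.
# Parameter values are illustrative, not results from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, v, s1, s2, alpha):
    X, w = v
    dX = (s1 - s2) * X * (1 - X) \
         - 0.5 * (s1 + s2 - 1) * ((1 - 2 * alpha) * X + (2 * X - 1) * w)
    dw = 0.5 * ((s1 - s2 - 1) * w + 2 * (s2 - s1) * X * w
                + 2 * (s1 + s2 - 1) * X * (1 - X)
                + (2 * alpha - 1) * (1 + s1 - s2) * X)
    return [dX, dw]

def jacobian_eigs(v, s1, s2, alpha, eps=1e-6):
    """Eigenvalues of a finite-difference Jacobian at v = (X, omega)."""
    J = np.zeros((2, 2))
    f0 = np.array(rhs(0.0, v, s1, s2, alpha))
    for j in range(2):
        vp = np.array(v, dtype=float)
        vp[j] += eps
        J[:, j] = (np.array(rhs(0.0, vp, s1, s2, alpha)) - f0) / eps
    return np.linalg.eigvals(J)

s1, s2, alpha = 0.7, 0.6, 0.4        # example values with 0.5 < s2 < s1
sol = solve_ivp(rhs, (0, 500), [0.5, 0.0], args=(s1, s2, alpha),
                rtol=1e-9, atol=1e-12)
X_inf, w_inf = sol.y[:, -1]
print(f"long-time state: X = {X_inf:.4f}, omega = {w_inf:.4f}")
print("Jacobian eigenvalues:", jacobian_eigs([X_inf, w_inf], s1, s2, alpha))
```

Negative real eigenvalues at the long-time state are consistent with the unique stable fixed point discussed next.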
A computational analysis of the expressions for the fixed points in Table <ref> and their stability following Eq. (<ref>) yields two important results. First, the eigenvalues λ are always real. We can then exclude dynamic states such as cycles. Second, there is always one and only one stable fixed point for each parameter configuration. Depending on the parameters, there will be only one steady state, characterised by the extinction of the standard variety (E), its dominance (D), or the coexistence of the two varieties (C). When α < α_1, extinction of the standard variety (E) is stable, and when α > α_2 standard dominance (D) is stable. Remarkably, the set of parameters that implies the stability of the coexistence fixed point (C), computed by imposing λ_1 < 0 and λ_2 < 0 for X^* = X^*_C and ω^* = ω^*_C, is also given by Eq. (<ref>), meaning that, whenever coexistence is possible, it is stable over time.

We will now investigate the influence of the speakers' preferences on the particular state achieved by the system in the long-time limit. As we mentioned before, there is always one and only one stable fixed point for every possible value of the triad (s_1, s_2, α). Figure <ref>(a) shows an interesting case: despite the fact that the standard variety has a higher prestige, the stable solution corresponds to all speakers using the vernacular variety (E). This is because, for particular values of α<0.5, the community preference is biased towards the vernacular variety. Consequently, a sufficiently low value of α can counteract the strength of a higher-prestige variety. Both X and ω are null in this case, as all the standard variety speakers switch to the vernacular variety. In Fig. <ref>(b) we depict an expected case: if s_1 is sufficiently large compared with s_2, the preference α cannot prevent the extinction of the vernacular variety (D), and X = 1. However, the speakers are still biased towards the vernacular variety. In this case, ω^* = 2α-1<0, and since ω = x_1-x_2 < 0, we have x_2>x_1, meaning that there are more standard variety speakers who prefer the vernacular variety. Figures <ref>(c) and (d) are representative cases of coexistence states (C). We can further study their nature as s_1, s_2 and α vary. Intuitively, for extreme values of the preference parameter such as α = 0 (α = 1), coexistence is not possible, as the absence (dominance) of speakers with a preference for the standard variety drives the system to a state with extinction (dominance) of the standard variety. While α = 0 or α = 1 certainly do not allow for coexistence, 0<α<1 may allow for it depending on the values of s_1 and s_2.

To characterise the phases of the system, we compute the boundaries in phase space which separate the phases of coexistence (C), extinction of the standard variety (E), and its dominance (D). To compute the boundaries we simply compare the expressions for the fixed points in Table <ref>, as there is one and only one stable fixed point for each parameter configuration. When X^*_D = X^*_C, the subindex referring to the ID of the fixed point, we are on the transition line between the dominance of the standard variety and coexistence between the two varieties. In this way, we obtain the dominance-coexistence (DC) transition line

s_2^DC(s_1,α) = (s_1-α+s_1 α)/(2s_1-α).

Similarly, for X^*_E = X^*_C we obtain the extinction-coexistence (EC) transition line

s_2^EC(s_1,α) = (1-α)(s_1-1)/[α + 2(s_1-1)].

Both Eqs. (<ref>) and (<ref>) take the value s_2 = 1/2 at s_1 = 1/2; Eq.
(<ref>) intersects with s_2(s_1,α) = s_1, a limit of the phase space, at s_1 = α, and Eq. (<ref>) does so at s_1 = 1-α. This means that when α < 0.5 only Eq. (<ref>) intersects with the border s_2 = s_1; when α > 0.5, only Eq. (<ref>) exists within the limits of the phase plane and it intersects with the border s_2 = s_1. We thus have a clear distinction of the phase space depending on whether α < 0.5 or α > 0.5. This may be seen in Fig. <ref>, where we plot the value of X at the stable fixed point, X^*_st, for each parameter configuration. These values form the phase space for two general values of the preference, α < 0.5 and α > 0.5. Only for α < 0.5 does a phase with extinction of the standard variety exist and, in the case of α > 0.5, the greater α, the smaller the area of coexistence.

To better illustrate the influence of α, in Fig. <ref> we plot the boundaries of the phase space and the value of X at the stable fixed point for each parameter configuration for three chosen values of α. Fig. <ref>(a) depicts a situation with α = 0.25, i.e., a quarter of the population prefers the standard variety. This allows for the existence of three regions: the extinction of the standard variety if s_1 is sufficiently low and s_2 is sufficiently close to s_1, the dominance of the standard variety if s_2 is sufficiently low, and coexistence of both varieties for a wide range of values of s_1 and s_2, with a predominant use of the vernacular variety over the standard one. Interestingly, Figs. <ref>(b) and (c), which account for α = 0.5 and α = 0.75, respectively, only show regions in which there exists either coexistence of the two varieties or domination of the standard variety. These figures allow us to observe another direct effect of preferences. As α ≥ 0.5 in Figs. <ref>(b) and (c), 0.5 ≤ X^*_st < 1 in the coexistence zone: vernacular speakers will, at best, equal in number the standard speakers. Zones of coexistence in which vernacular speakers outnumber standard speakers are no longer allowed, in contrast to Fig. <ref>(a). Additionally, the region of coexistence in Fig. <ref>(c) has a considerably smaller area than the one in Fig. <ref>(b), which suggests that a higher preference for the standard variety reduces the number of parameter configurations which allow for coexistence.

From Eqs. (<ref>) and (<ref>) we can compute the area of the parameter space with coexistence, σ^c_st. A straightforward integration yields

(1/2)^3 σ_st^c = ∫_1/2^1-α s_2^EC(s_1,α) ds_1 + ∫_1-α^1 s_1 ds_1 - ∫_1/2^1 s_2^DC(s_1,α) ds_1 = 1/4 (1-α) α log[(2-α)/α]

for α < 0.5, and

(1/2)^3 σ_st^c = ∫_α^1 s_1 ds_1 - ∫_α^1 s_2^DC(s_1,α) ds_1 = 1/2 (1-α) α atanh(1-α)

for α > 0.5. Eqs. (<ref>) and (<ref>) are plotted in Fig. <ref>. We would expect the maximum proportion of situations of stable coexistence to occur at α = 0.5, as the proportions of speakers with a preference for one variety or the other would then be equal. Nevertheless, we may notice several relevant facts in Fig. <ref>, where we plot, as a function of α, the area in parameter space with 0 < X^*_st < 1, implying coexistence; with X^*_st = 1, implying the dominance of the standard variety; and with X^*_st = 0, implying its extinction. Firstly, the maximum of the curve for 0 < X^*_st < 1 is located at α < 0.5. This makes sense, as the standard variety has a higher prestige than the vernacular one; a quick numerical check of this maximum, based on the expressions above, is sketched below.
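The following short Python sketch (our own illustrative check, not part of the paper) evaluates the right-hand sides of Eqs. (<ref>) and (<ref>) on a grid of α and locates the value of α that maximizes the coexistence area; the grid resolution is an arbitrary choice.

```python
# Minimal sketch: evaluate the coexistence area formulas above as a function
# of alpha and locate the maximizing alpha numerically (expected near 0.27).
import numpy as np

def coexistence_area(alpha):
    """Returns (1/2)^3 * sigma_st^c as given above, for 0 < alpha < 1."""
    if alpha < 0.5:
        return 0.25 * (1 - alpha) * alpha * np.log((2 - alpha) / alpha)
    return 0.5 * (1 - alpha) * alpha * np.arctanh(1 - alpha)

alphas = np.linspace(1e-3, 1 - 1e-3, 10_000)
sigma = np.array([coexistence_area(a) for a in alphas])
print(f"alpha maximizing coexistence ~ {alphas[np.argmax(sigma)]:.2f}")
```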
Because of this, coexistence occurs more probably at values of α which imply a higher preference for the vernacular variety than for the standard one, i.e., α < 0.5. The preference acts as a counterforce against the differences in prestige, and its effect is maximum at α_max = 0.27. Secondly, for α > 0.5 we stop finding stable fixed points in which X^*_st = 0: there are more people who prefer the standard variety than the vernacular one, and this, in addition to the difference in prestige, prevents the standard variety from losing all of its speakers. Finally, the curve for the extinction of the vernacular variety never vanishes except at α = 0. This happens because of the fixed hierarchy of prestiges, i.e., s_1 > s_2 holds true at all times. As the standard variety is always more prestigious than the vernacular one, it does not matter how high the preference for the vernacular variety is among the speakers: there will always be a set of parameters which leads to stable situations in which the standard variety dominates. Indeed, since the standard prestige is always higher than the vernacular one, no matter how small the fraction of speakers with a preference for the standard variety is, it is enough to reach stable situations in which X^*_st = 1. Even though the maximum of the curve is around α = 0.27, the specific range of α in which coexistence is stable for a particular parameter configuration depends on s_1 and s_2, following Table <ref> and Eq. (<ref>).

To sum up, within a model with a large number of speakers interacting all to all, the existence of internal preferences due to the speakers' ideology brings the possibility of coexistence between two varieties with different prestige. This result agrees with the sociolinguistic situation of many countries and regions where different speech communities show distinct language attitudes. However, societies are not generally made up of completely interconnected speech communities. A more realistic approach takes into account different degrees of coupling between speech communities.

§ COUPLING

We now want to address the following question: how does the level of connection between people with different preferences impact the behavior of the system? In other words, we want to investigate the effects of varying degrees of interaction between individuals who have diverse preferences on the overall dynamics of the model. To do so, we propose a modification of the model with the implementation of a degree of interconnectivity, γ. This parameter γ represents the proportion of all possible links between speakers with different preferences which are actually present. The situation is depicted in Fig. <ref>. The system is made up of two networks. The speakers of each network have exclusively one preference, i.e., we have a community exclusively of speakers who prefer the standard variety and another community exclusively of speakers who prefer the vernacular variety. According to Eq. (<ref>), the size of the community with a preference for the standard variety relative to the total population is α, and the size of the other community is 1-α. To study the dynamics of the system analytically, we assume that each community is fully connected. We can then approximate the dynamics by applying the mean-field approach that we followed in Eqs. (<ref>)-(<ref>), rescaling the interactions between speakers with different preferences by a factor γ.
The new rates then become

P_x_1 → y_1 = (1-s_1)(y_1+γ y_2),
P_x_2 → y_2 = s_2(γ y_1+y_2),
P_y_1 → x_1 = s_1(x_1+γ x_2),
P_y_2 → x_2 = (1-s_2)(γ x_1+x_2).

As a consequence, the rate equations read

dx_1/dt = (1-s_1)(x_1-α-γ y_2)x_1 + s_1(α-x_1)[x_1-γ(α-1+y_2)],

dy_2/dt = (1-s_2)(α-1+y_2-γ x_1)y_2 - s_2(α-1+y_2)[y_2+γ(α-x_1)],

whereas the equations for x_2 and y_1 can be obtained from Eqs. (<ref>) and (<ref>). Alternatively, we can work with the rate equations

dX/dt = 1/2 ( X{(1-2α) + s_2[2(α-1)+(1+γ)X-γ] + s_1[2α-(1+γ)X+γ]} + ω{2[(1-s_1)X + s_1α - s_2(X+α-1)] - (s_1-s_2)(2α-1)γ + (s_1-s_2)(γ-1)ω - 1} ),

dω/dt = 1/4 { [(1-2s_2)(X-ω) + (2-s_1-s_2)γ(X+ω)][X-ω+2(α-1)] - [(s_1+s_2)γ(X-ω) + (2s_1-1)(X+ω)](X+ω-2α) }.

We will study X and ω to obtain a global overview of the dynamics of the system, and x_1/α and y_2/(1-α) to get insight into what happens inside each community. These two approaches are equivalent, as our system is described by only two independent variables. In Table <ref> we show the fixed points of Eqs. (<ref>)-(<ref>). As in the model without coupling, we find three kinds of fixed points: coexistence of the standard and vernacular varieties (C), extinction of the standard variety (E), and its dominance (D). For the study of the stability of the fixed points, in Eq. (<ref>) we show the eigenvalues of the Jacobian matrix. The linear stability analysis of the fixed points yields an important result: as in the model without coupling, there always exists a unique stable fixed point. Thus, we can study stability diagrams as in the model without coupling.

To study the effects of coupling on the phase space of the model we may adopt two approaches. Firstly, in Fig. <ref> we plot the boundaries in s_1-s_2 space between the different stable fixed points in terms of α and γ. We have computed these boundaries numerically. Fig. <ref>(a) shows the phase diagram for α = 0.25 < 0.5, and Fig. <ref>(b) does so for α = 0.55 > 0.5. Both values of α have been arbitrarily chosen and depict the general behavior of the phase space for α < 0.5 and α > 0.5, respectively. The case with γ = 1, i.e., the case in which both communities are completely connected, is equivalent to the previous model given by Eqs. (<ref>)-(<ref>). We may then already make an observation on the influence of the coupling on the dynamics of the system: the decrease of γ, i.e., the increase in the isolation between the two communities, enlarges the area of the phase space which allows for coexistence. In other words, an increase in the interconnectivity between communities with opposite preferences decreases the area of the parameter space allowing for coexistence. This is an expected result <cit.> that our model captures as a validity check.

Secondly, we compute the boundaries between the different phases characterised by the stable fixed points in the preference-coupling space, i.e., the α-γ parameter space, in terms of given values of s_1 and s_2. The mathematical details of their computation are available in Appendix <ref>. In Fig. <ref> we show representative phase diagrams for the model under two different parameter configurations. There are two boundaries which separate coexistence from either vernacular or standard dominance. If we focus on a single value of α, the variation of the coupling γ allows us to go from one phase to another. For example, let us focus on the case with s_1 = 0.7 and s_2 = 0.6 of Fig. <ref>.
Given a fixed value of α, we can make a transition from coexistence (C) to standard dominance (D) or from coexistence to vernacular dominance (E). We also have the option to remain in the coexistence phase for every value of γ. However, for some other values of the prestige parameters s_1 and s_2, γ allows us to witness more than a single transition. In the case of s_1 = 0.58 and s_2 = 0.51 of Fig. <ref>, for a given set of values of α, e.g., α = 0.15, we may witness several transitions as we increase γ: from coexistence to vernacular dominance, then again to coexistence, and then to standard dominance. This is due to the fact that the boundary between standard extinction (E) and coexistence (C) has a local maximum α_max at γ_max. In Appendix <ref> we give further details of its calculation and the parameter sets for which this maximum exists.

§.§ Regime transitions

As seen in Fig. <ref>, some parameter configurations allow us to witness three transitions as the coupling of the two communities increases. In the aforementioned case of s_1 = 0.58 and s_2 = 0.51, the line α = 0.15 crosses the boundaries between phases at three intersection points given by γ_1 = 0.13, γ_2 = 0.34 and γ_3 = 0.79 (see Appendix <ref> for details about their calculation). When these three intersection points γ_i, i = 1,2,3, exist, the regimes in which the system may be found are the following:

* Regime I) null coupling: In this regime, γ = 0 and we have two isolated communities in which the only spoken variety is the one preferred by their members. The system is then in phase C, the proportion of standard (vernacular) speakers being determined by the size of their preference community, α (1-α). This could describe the situation of an elite that occupies a land but, e.g., does not establish relations with the local people.

* Regime II) small coupling: When the coupling is increased to 0 < γ ≤ γ_1, this small amount of coupling is enough to allow for coexistence due to the influence of each community on the other one. Nevertheless, inside each community, the dominant variety is the majority one. The system remains in phase C. This regime could correspond to the ruling elite increasing its exchanges with the local people. In these cases, there exists a language shift but it is not dramatic.

* Regime III) medium coupling: In this regime, with γ_1 < γ ≤ γ_2, the coupling is enough for the majority with less prestige to dominate over the minority with higher prestige. The system is then in phase E. This could correspond to cases such as the Norman elite, who, after the conquest of England, gradually abandoned their more prestigious French language in favor of the English language preferred by the majority. Another example would be the rise of Hindi (lower status) versus the decline of English (higher status) in present-day India <cit.>.

* Regime IV) reasonably high coupling: Interestingly, the increase in the coupling for γ_2 < γ ≤ γ_3 benefits the prestigious minority in comparison with the previous regime. The system re-enters the coexistence phase C and reaches a state in which coexistence is allowed again, but inside each community the dominant variety is the one preferred by its speakers. There are many examples of this regime nowadays; e.g., in Belgium there are two interacting communities, each keeping its own language.

* Regime V) almost total coupling: With γ_3 < γ ≤ 1 the coupling is enough for the prestigious minority to dominate in the whole society, so the system arrives at phase D.
A historical example of this is the death of many indigenous languages in Latin America and the survival of Spanish or Portuguese, originally spoken by the ruling minority. Finally, for γ = 1 we recover the results from the first model. These regimes are illustrated in Fig. <ref>.

Once the attributes of the different regimes have been described, we focus on what happens in the system while transitioning from one regime to another using two different approaches: firstly, by the numerical integration of the rate equations (<ref>) and (<ref>) with abrupt changes of γ in time; secondly, by studying the analytical expressions of the stable fixed points in Table <ref> in terms of γ. Thus, we integrate numerically Eqs. (<ref>) and (<ref>) and see how the different regimes arise. One example is shown in Fig. <ref>, where we can see how the increase of the coupling affects each group of speakers. The transition from Regime I (γ = 0 in Fig. <ref>) to Regime II (γ = 0.1) occurs as a result of a rapid decline in the number of speakers of the standard variety (x_1 and x_2 in Fig. <ref>). This decline is attributed to the small size of the community with a preference for the standard variety and its gradual integration into a much larger community that favors the vernacular variety, as in the example of the Norman conquest. As the interconnection between the two communities increases, the transition from Regime II (γ = 0.1) to Regime III (γ = 0.2) leads to vernacular dominance, despite the higher prestige of the standard variety. These regime changes are a direct effect of the interconnection between the two communities, and they are depicted in a more illustrative way in Fig. <ref>. However, when the interconnection reaches a sufficiently high level, an interesting re-entrant transition from vernacular dominance to a coexistence phase (Regime IV with γ = 0.6) takes place. This transition is characterised by a rapid decrease in the number of speakers of the vernacular variety and a simultaneous increase in the number of speakers of the standard variety. Surprisingly, the increase in interconnection now has the opposite effect: even though the community with a preference for the vernacular variety is larger than the community favouring the standard variety, the higher prestige of the standard variety noticeably impacts the community with a preference for the vernacular variety, as we can see in the rapid increase in x_2 and decrease in y_2. Finally, the transition from Regime IV (γ = 0.6) to Regime V (γ = 0.9) demonstrates a clear dominance of x_2 and x_1 over y_2 and y_1, respectively, as in the case of Latin American countries.

To understand how the system reaches the aforementioned transitions, we may study the variation of the stable fixed point with γ for fixed values of s_1, s_2 and α. In Fig. <ref>(a) we plot the value of the stable fixed point in terms of γ. As γ increases, we observe the aforementioned phases of coexistence, vernacular dominance and standard dominance, through the change from one regime to another. The interest here lies in the evolution of the steady state as each different phase is reached. For that, we also define the linguistic happiness H as H = x_1 + y_2, which clearly refers to the proportion of speakers who are satisfied because their language and preference are aligned (see the faces of the polygons in Fig. <ref>). The linguistic unhappiness may be defined as U = 1-H. These quantities are plotted in Fig. <ref>(b).
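To make the γ sweep concrete, the following minimal Python sketch (our own illustration, not the authors' code) integrates the coupled rate equations built from the rates of Eqs. (<ref>)-(<ref>) for the example values s_1 = 0.58, s_2 = 0.51 and α = 0.15 quoted above, and prints the long-time fraction of standard speakers X and the happiness H for the representative couplings discussed in the text; the initial condition and integration time are our own choices.

```python
# Minimal sketch: sweep the coupling gamma in the coupled mean-field model,
# using the transition rates written above, and record the long-time
# standard fraction X = x1 + x2 and happiness H = x1 + y2.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, v, s1, s2, gamma):
    x1, x2, y1, y2 = v
    dx1 = s1 * (x1 + gamma * x2) * y1 - (1 - s1) * (y1 + gamma * y2) * x1
    dx2 = (1 - s2) * (gamma * x1 + x2) * y2 - s2 * (gamma * y1 + y2) * x2
    return [dx1, dx2, -dx1, -dx2]        # dy1/dt = -dx1/dt, dy2/dt = -dx2/dt

s1, s2, alpha = 0.58, 0.51, 0.15
for gamma in (0.0, 0.1, 0.2, 0.6, 0.9):  # couplings representative of Regimes I-V
    v0 = [alpha, 0.0, 0.0, 1 - alpha]    # each community starts with its preferred variety
    sol = solve_ivp(rhs, (0, 5000), v0, args=(s1, s2, gamma),
                    rtol=1e-9, atol=1e-12)
    x1, x2, y1, y2 = sol.y[:, -1]
    print(f"gamma = {gamma:.1f}:  X = {x1 + x2:.3f}   H = {x1 + y2:.3f}")
```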
The first observation we can make is that the transitions from one phase to another are smooth. The evolution of the linguistic happiness shows that an increase in the coupling causes a decrease in linguistic happiness, as the dynamics of the system rely on the willingness of the speakers to set aside their preferences. However, there is a narrow region just before Regime III (the phase in which everyone speaks Y, so that x^*_1,st = 0 and y^*_2,st = 1-α) in which the linguistic happiness increases. This is due to the fact that the coupling and the relative sizes of the communities allow, by virtue of P_x_1 → y_1 and P_x_2 → y_2 [Eqs. (<ref>) and (<ref>), respectively], for a flow from x_1 to y_1 and then from x_2 to y_2. As 1-α ≫ α with α = 0.15, the increase in the number of y_2 speakers has a greater impact on H than the decrease in the number of x_1 speakers, and the system becomes happier as it approaches the phase with vernacular dominance. However, reaching a phase with vernacular dominance is not necessary for this momentary increase in linguistic happiness to take place. As we see in Fig. <ref>(a), even for an α greater than α_max, the largest value for which we can observe the vernacular dominance phase, there exists a minimum in y^*_2,st. This is due to the relative size of the communities. The minimum in y^*_2,st is located at γ_min, which increases with α as seen in Fig. <ref>(b). Once the minimum in y^*_2,st ceases to exist, the linguistic happiness decreases monotonically as the coupling increases.

In summary, the analytical exploration of different phases and transitions in the mean-field model with coupling lays the groundwork for understanding the dynamics of societies with languages in contact. However, this approach is limited by the fact that societies have a finite number of speakers. To account for finite-size effects and intricate details, we complement this analysis with an agent-based model implemented on complex networks. This approach allows us to validate the analytical analysis and investigate the influence of network structures, providing a comprehensive understanding of the aforementioned dynamics in realistic social contexts.

§.§ Finite-size effects

We have thus far neglected fluctuation effects, since populations are assumed to be large. The results of our deterministic approximation are valid in the thermodynamic limit of infinite systems. To model a more realistic substratum, we now proceed by conducting agent-based simulations of the model, implementing the coupling on complex networks. By doing so, we can explore and validate our previous findings while also examining finite-size effects on the dynamics of language competition. To this end, we define a network of N nodes constituted by two fully connected sub-networks (the so-called communities) with a fixed preference for the standard or the vernacular variety. Their sizes are N_1 = α N and N_2 = (1-α) N, respectively. These sub-networks are connected following a random process of link assignment between speakers with different preferences, as in Fig. <ref>. For that, we activate a fraction γ of all the possible links between speakers with different preferences; the number of active links, N_a, is given by N_a = γ N_1 N_2 = γ α (1-α) N^2. The simulation of the model with coupling on the aforementioned networks proceeds as follows. Each Monte Carlo step of the simulation consists of a sequential update of all the nodes in the network.
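As a rough agent-based counterpart (our own illustrative sketch, not the authors' implementation), the snippet below builds the two fully connected communities, activates a random fraction γ of the inter-community links, and performs the sequential update just described, evaluating the transition probabilities of Eqs. (<ref>)-(<ref>) on local neighbourhood densities as detailed in the next paragraph; sizes, parameter values and number of steps are arbitrary illustrative choices.

```python
# Minimal agent-based sketch: two fully connected communities with fixed
# preferences, joined by a random fraction gamma of inter-community links.
# Each Monte Carlo step updates all nodes sequentially using the transition
# probabilities above evaluated on local neighbourhood densities.
import numpy as np

rng = np.random.default_rng(0)
N, alpha, gamma = 200, 0.15, 0.6
s1, s2 = 0.58, 0.51
N1 = int(alpha * N)                                   # community preferring the standard variety

pref = np.array([1] * N1 + [2] * (N - N1))            # fixed preferences (1: standard, 2: vernacular)
lang = np.where(pref == 1, 'X', 'Y')                  # start speaking the preferred variety

# adjacency lists: all intra-community links plus a fraction gamma of inter-community links
neigh = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if pref[i] == pref[j] or rng.random() < gamma:
            neigh[i].append(j)
            neigh[j].append(i)

def switch_prob(i):
    """Probability that node i adopts the other variety, from local densities."""
    nb = neigh[i]
    if not nb:
        return 0.0
    dens = sum(lang[j] != lang[i] for j in nb) / len(nb)   # local density of the other variety
    if lang[i] == 'X':                                     # standard speaker -> vernacular
        return (1 - s1) * dens if pref[i] == 1 else s2 * dens
    return s1 * dens if pref[i] == 1 else (1 - s2) * dens  # vernacular speaker -> standard

for step in range(200):                                    # Monte Carlo steps
    for i in rng.permutation(N):                           # sequential update of all nodes
        if rng.random() < switch_prob(i):
            lang[i] = 'Y' if lang[i] == 'X' else 'X'

print("fraction of standard speakers X =", np.mean(lang == 'X'))
```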
The change of the state of an agent during an update is governed by the transition probabilities of Eqs. (<ref>)-(<ref>), with the global proportions of speakers replaced by the local densities of each kind of speaker within the neighbourhood of the agent being updated, i.e., among the agents connected by a direct link to it. In Fig. <ref> we plot the phase diagram for X resulting from simulations with different sets of parameters. We can see a clear agreement between the simulations on networks and the mean-field approach shown in Fig. <ref>, meaning that the conclusions drawn from the analytical analysis of the rate equations remain valid. However, a main difference between the rate-equation description and the finite-size simulation is that the phases D and E are absorbing states of the stochastic dynamics, whereas C is not an absorbing state: a finite-size fluctuation will eventually take the system from phase C to either E or D. These absorbing states imply the extinction of either of the varieties, which can have significant societal implications. The relevant question is then: what is the lifetime of phase C for a finite system? As we can see in Fig. <ref>, survival times scale exponentially with network size, and the exponential growth rate decreases with coupling, meaning that the coexistence between varieties in a society of a given size has a lifetime which decreases as the interconnection of communities with different preferences, or social mixing, increases.

§ CONCLUSION

To sum up, we have explored the role of speakers' linguistic preferences in contexts that involve language shift. We did so by proposing a model for two language varieties in contact, accounting for the preferences that speakers may have towards one variety or the other. We have first considered, within a mean-field approach, the case of a fully connected population. We have shown that although the standard variety is always more prestigious than the vernacular variety, the speakers' preference quite generally determines the dynamics of the system, allowing for language coexistence in situations in which prestige alone would have led the system towards the extinction of the vernacular variety. Secondly, we have considered a varying degree of interconnectivity or coupling between the two speech communities with different preferences. The degree of coupling measures the extent to which the two communities communicate with each other. We have found that increasing the coupling makes language coexistence less likely. This is due to the fact that a stronger connection between speakers with opposing preferences favors the more prestigious variety while reducing the number of individuals aligned with their internal preference. By increasing the coupling parameter, for fixed prestige values and fixed sizes of the communities with different preferences, we have identified transitions between extinction, dominance, and coexistence phases, which can be related to real-world scenarios. For example, today's linguistic coexistence in Belgium persists in spite of a reasonably high coupling. Additionally, historical sociolinguistic events such as the disappearance of Old French in England or the death of many indigenous languages in Latin American countries depend not only on prestige but also on the degree of coupling between the speech communities. Beyond the mean-field approximation, we have also conducted agent-based simulations of the model on complex networks.
These simulations validate the mean-field results and allow for the study of finite-size effects. Remarkably, we have found a nice agreement between the network simulations and the results obtained from the mean-field approximation as far as the behavior with preference and interconnectivity is concerned. We have also found that the lifetime of the coexistence states depends exponentially on system size.

Our model has a number of limitations. First, it considers that the society is spatially homogeneous. However, the varieties spoken in urban and rural areas differ, along with their prestige and preferences <cit.>. Therefore, there is considerable latitude for the incorporation of a spatial degree of freedom in our model <cit.>. Further, it would be interesting to study the dependence of our results on the interconnectivity within each community. Another limitation is that we do not consider bilingual speakers, which are known to alter the transition rates of the model and consequently its fixed points <cit.>. This could be fixed by adding a third population to the dynamics. Finally, we neglect the extent of volatility <cit.> and interlinguistic similarity <cit.>, which could be modeled with a parameter scaling the transitions. More importantly, to achieve predictive power one would require reliable data on the evolution of language usage and language preference. Available fieldwork data are sparse and restricted to small networks <cit.>. Social digital datasets have much larger sizes but they are subject to biases <cit.>, and it is not clear to us how to operationalize both language prestige and individual preferences from them. Nevertheless, this is indeed an interesting research avenue that we plan to explore in the future. Overall, we highlight the importance of other sociolinguistic parameters beyond the well-studied effect of language prestige. In this paper we have discussed the relevant effect of language ideologies and of different degrees of interconnectivity between speech communities. Our findings might have practical implications for policymakers, particularly in the context of minority language preservation and language planning <cit.> in contemporary societies.

§ FIXED POINTS AND EIGENVALUES

The eigenvalues of the first model without coupling [Eqs. (<ref>)-(<ref>)] are given by

λ_1,2 = 1/2[α(1-s_1) ∓ 1/2√(A) + 3X^*(s_2-s_1) + s_1(1-ω^*) + s_2(α-ω^*-2) + ω^*],

where

A = 4(α + s_1(3X^* + ω^* - α - 1) + s_2(ω^* - α - 3X^* + 2) - ω^*)^2 + 8(-2α + s_1(2α + 4s_2(1-2X^*)^2 + X^*(7 - 2α - 8X^*) - ω^* - 2) - s_2(-2α + X^*(2α + 8X^* - 11) + ω^* + 4) + X^*(2α + 4X^* - 5) + ω^* + 2).

Note that Eqs. (<ref>) and (<ref>) depend only on the parameters and on the specific values of X^* and ω^*, because Y and z can be eliminated using Eqs. (<ref>) and (<ref>). The fixed points in Table <ref> are given by

X^*_C(s_1,s_2,α,γ) = 1/[4(1-2s_1)^2(1-2s_2)^2(1+γ)] [(-1+2s_1)(1-2s_2)^2(-2+2α-γ)γ - (4α-2)√(G(s_1,s_2,γ)) + (1-2s_1)^2(2s_2-1)(-2+4s_2(1+γ) + γ(-2+2α+γ))],

where

G(s_1,s_2,γ) = (1-2s_1)^2(1-2s_2)^2[(1-2s_1)^2(1-2s_2)^2 - 2(-1+2s_1)(2s_2-1)(-s_2+s_1(2s_2-1))γ^2 + (s_1-s_2)^2γ^4],

X^*_4(s_1,s_2,α,γ) = 1/[2(1-2s_1)^2(1-2s_2)^2(1+γ)] [(1-2α)√(G(s_1,s_2,γ)) + 2s_1^2(2s_2-1)(-2+4s_2(1+γ) + γ(-2+2α+γ)) + (2s_2-1)(-1+2(α-1)γ + s_2(2+γ(4-2α+γ))) - s_1(2s_2-1)(γ - 4(6(-1+α)+γ) + 2s_2(4+γ(6-2α+γ)))].
ω^*_3,4(s_1,s_2,α,γ) = γ(-2α(γ-1)+γ)/(2s_2-1) ∓ 2√(H(s_1,s_2,γ))/[(1-2s_1)^2(1-2s_2)^2] 1/4(γ-1) + [-2 + 4s_1(2α-1)(γ-1) + 2α(γ-2)(γ-1) - (γ-4)γ]/(2s_1-1),

where

H(s_1,s_2,γ) = [-2 + 4s_1(2α-1)(γ-1) + 2α(γ-2)(γ-1) - (γ-4)γ]/(2s_1-1).

The eigenvalues of the model with coupling [Eqs. (<ref>) and (<ref>)] are given by

λ_1,2 = 1/2( (s_1+s_2)[2(α-ω^*)+γ(ω^*-α)] + s_1γ - 2s_2 + γ(α-ω^*-1) + 2(ω^*-α) - (γ+2)X^*(s_1-s_2) + 1 ∓ 1/2√(A_c) ),

where

A_c = 8( -2(2s_1-1)(2s_2-1)(α-1)α 2(-2s_2 + (α-1)^2 - s_1(α-1)^2 + s_2(4-3α)α + s_1 s_2(2γ + 4(α-1)α)) - 2(s_1-s_2)(α-1)αγ^2 + 2(2s_1-1)(2s_2-1)(X^*)^2(1+γ) + X^*(2αγ - 2 - 3γ + s_2(4+γ(6-2α+γ)) - s_1(8s_2 - 4 + (1+γ) + γ(2α-4+γ))) + (2(2s_1-1)(2s_2-1)(2α-1) + (4s_1-3)(2s_2-1)γ + 2(s_1(3-8s_2)+5s_2-2)αγ + (s_1-s_2)(2α-1)γ^2)ω^* + 2(2s_1-1)(2s_2-1)(γ-1)(ω^*)^2 ) + 4( 1 - 2α - γ + γ(α-ω^*) + 2ω^* + s_1(2α + γ - αγ - X^*(2+γ) + (γ-2)ω^*) + s_2(-2 + 2α - αγ + X^*(2+γ) + (γ-2)ω^*) )^2.

They depend only on the parameters and on the specific values of the fixed points X^* and ω^*.

§ PHASE TRANSITIONS DUE TO COUPLING

By computing the analytical expressions for the curves which define the boundaries of the several phases in the α-γ parameter space of the model with coupling we can analyze some interesting results. For that, we revisit the fixed points in Table <ref>. If we focus on the value of X, and as there exists one and only one stable fixed point for each parameter configuration, we can compute the boundary between the phases of coexistence (C) and dominance (D) of the standard variety by solving X^*_D(s_1,s_2,α,γ) = X^*_C(s_1,s_2,α,γ), which yields

α^DC(s_1,s_2,γ) = 1 + √(B(s_1,s_2,γ)) + 2s_1^2(2s_2-1)(4s_2(1+γ)-2/2((2s_1-1)(s_1+s_2-1)(2s_2-1)γ + √(B(s_1,s_2,γ))) - γ(2+γ)) + s_1(2s_2-1)(4+γ(2+γ)+2s_2((γ-2)γ)-4)/2((2s_1-1)(-1+s_1+s_2)(2s_2-1)γ + √(B(s_1,s_2,γ))) + s_2((γ-4)^2-2s_2(γ-2)^2)/2((2s_1-1)(s_1+s_2-1)(2s_2-1)γ + √(B(s_1,s_2,γ))),

where

B(s_1,s_2,γ) = (1-2s_1)^2(1-2s_2)^2[(1-2s_1)^2(1-2s_2)^2 - 2(2s_1-1)(2s_2-1)(-s_2+s_1(2s_2-1))γ^2 + (s_1-s_2)^2γ^4].

We can proceed in the same way to compute the boundary between the phases of coexistence (C) and extinction (E) of the standard variety by solving X^*_E(s_1,s_2,α,γ) = X^*_C(s_1,s_2,α,γ), which yields

α^EC(s_1,s_2,γ) = 2(2s_1-1)s_2^2(2+4s_1(γ-1)+(γ-4)γ)/[2(2s_1-1)(2s_2-1)(1+s_1(4s_2-γ-2)+s_2(γ-2))(γ-1) + √(C(s_1,s_2,γ))] + (2s_1-1)(1-2γ+s_1(γ-2(2+γ)))/[2(2s_1-1)(2s_2-1)(1+s_1(4s_2-γ-2)+s_2(γ-2))(γ-1)] - (2s_1-1)s_2(4+(γ-8)γ+2s_1(γ-4(4+γ)))/[2(2s_1-1)(2s_2-1)(1+s_1(4s_2-γ-2)+s_2(γ-2))(γ-1)],

where

C(s_1,s_2,γ) = (1-2s_1)^2(1-2s_2)^2[(1-2s_1)^2(1-2s_2)^2 - 2(2s_1-1)(2s_2-1)(s_1(2s_2-1)-s_2)γ^2 + (s_1-s_2)^2γ^4].

Firstly, after a long but simple derivation, we can see that dα^DC(s_1,s_2,γ)/dγ < 0 ∀γ, hence α^DC(s_1,s_2,γ) is monotonically decreasing. Interestingly, α^EC(s_1,s_2,γ) may have a maximum, as there exists a set 𝒞 of pairs {s_1,s_2} for which dα^EC(s_1,s_2,γ)/dγ = 0, at

γ_max = √[(1-2(s_1+s_2)+4s_1s_2)/(s_1-s_2)],

so that α_max(s_1,s_2) = α^EC(s_1,s_2,γ_max). To compute this set 𝒞, we impose 0 ≤ γ_max ≤ 1 and we arrive at the condition s_2 ≤ (3s_1-1)/(4s_1-1). This condition defines the region described by the dotted curve in Fig. <ref>. A maximum in α^EC(s_1,s_2,γ) implies the existence of at least 3 different phases for some values of α. If the minimum value of α^DC(s_1,s_2,γ), i.e., α^DC(s_1,s_2,1), is such that α^DC(s_1,s_2,1) < α_max(s_1,s_2), we can find 4 phases for α ∈ [α^DC(s_1,s_2,1), α_max(s_1,s_2)].
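As a quick numerical check of these conditions (our own, using the example values s_1 = 0.58 and s_2 = 0.51 quoted in the main text), one can verify that γ_max falls inside [0, 1] and that the condition on s_2 is met:

```python
# Check gamma_max and the existence condition for the EC-boundary maximum
# for the example prestige values quoted in the main text.
import numpy as np

s1, s2 = 0.58, 0.51
gamma_max = np.sqrt((1 - 2 * (s1 + s2) + 4 * s1 * s2) / (s1 - s2))
print("gamma_max =", round(gamma_max, 3))                         # ~0.21, within [0, 1]
print("s2 <= (3*s1 - 1)/(4*s1 - 1):", s2 <= (3 * s1 - 1) / (4 * s1 - 1))
```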
If we define the quantity Δα(s_1,s_2) = α_max(s_1,s_2) - α^DC(s_1,s_2,1), we have that Δα(s_1,s_2) > 0 only for certain values of s_1, s_2, which are given by

s_2 < 7 s_1 - 7 s_1^2 + 11 s_1^3-3/3 (2 s_1 - s_1^2 + 4 s_1^3-1) + E(s_1)/3 (2 s_1 - s_1^2 + 4 s_1^3-1) + 4 s_1^-1 - 8 + 5 s_1 + 2 s_1^2 - 23 s_1^3/3 (2 s_1 - s_1^2 + 4 s_1^3-1) D(s_1),

where

D(s_1) = s_1^3 (24 s_1 - 54 s_1^2 + 167 s_1^3 - 261 s_1^4 + 249 s_1^5 - 181 s_1^6 - 8 + 6 √(3) s_1 (E(s_1))^1/6),

and

E(s_1) = 4 - 36 s_1 + 142 s_1^2 - 361 s_1^3 + 726 s_1^4 - 1178 s_1^5 + 1518 s_1^6 - 1633 s_1^7 + 1378 s_1^8 - 864 s_1^9 + 416 s_1^10.

These values of s_1 and s_2 are depicted by the solid red line in Fig. <ref>. We are now in a position to compute analytically the values of γ at which a given value of α, i.e., a horizontal line in the α-γ phase space, intersects the vernacular and standard boundaries. For the standard boundary, we have

γ^s_1 = s_1 + (s_2-1)α^2 + F(s_1,s_2,α)/2(s_1-s_2)(α-1)α + √(4(2s_1-1)(s_1-s_2)(2s_2-1)(α-1)^2α^2 + [s_1 + (s_2-1)α^2 + G(s_1,s_2,α)]^2)/2(s_1-s_2)(α-1)α,

where F(s_1,s_2,α) = s_1α(3α-2) - s_1 s_2[2 + 4(α-1)α]. For the vernacular boundary, we have proceeded numerically.

This work was partially supported by the Spanish State Research Agency (MCIN/AEI/10.13039/501100011033) and FEDER (UE) under project APASOS (PID2021-122256NB-C21) and the María de Maeztu project CEX2021-001164-M, and by the Government of the Balearic Islands CAIB fund ITS2017-006 under project CAFECONMIEL (PDR2020/51).

§ REFERENCES

[Krauss1992] M. Krauss, The world's languages in crisis, Language 68, 4 (1992).
[CrystalDeath] D. Crystal, Language Death (Cambridge University Press, 2000).
[Mufwene2004] S. S. Mufwene, Language birth and death, Annu. Rev. Anthropol. 33, 201 (2004).
[Abrams2003] D. M. Abrams and S. H. Strogatz, Modelling the dynamics of language death, Nature 424, 900 (2003).
[Chambers1998] J. K. Chambers and P. Trudgill, Dialectology (Cambridge University Press, 1998).
[Lieberman2007] E. Lieberman, J.-B. Michel, J. Jackson, T. Tang, and M. A. Nowak, Quantifying the evolutionary dynamics of language, Nature 449, 713 (2007).
[Atkinson2008] Q. D. Atkinson, A. Meade, C. Venditti, S. J. Greenhill, and M. Pagel, Languages evolve in punctuational bursts, Science 319, 588 (2008).
[Steele2010] J. Steele, P. Jordan, and E. Cochrane, Evolutionary approaches to cultural and linguistic diversity, Philosophical Transactions of the Royal Society B: Biological Sciences 365, 3781 (2010).
[Patriarca2004] M. Patriarca and T. Leppänen, Modeling language competition, Physica A: Statistical Mechanics and its Applications 338, 296 (2004).
[Mira2005] J. Mira and Á. Paredes, Interlinguistic similarity and language death dynamics, Europhysics Letters 69, 1031 (2005).
[Castello2006] X. Castelló, V. M. Eguíluz, and M. San Miguel, Ordering dynamics with two non-excluding options: bilingualism in language competition, New Journal of Physics 8, 308 (2006).
[Minett2008] J. W. Minett and W. S. Wang, Modelling endangered languages: The effects of bilingualism and social structure, Lingua 118, 19 (2008).
[Patriarca2009] M. Patriarca and E. Heinsalu, Influence of geography on language competition, Physica A: Statistical Mechanics and its Applications 388, 174 (2009).
[Kandler2010] A. Kandler, R. Unger, and J. Steele, Language shift, bilingualism and the future of Britain's Celtic languages, Philosophical Transactions of the Royal Society B: Biological Sciences 365, 3855 (2010).
[Patriarca2012] M. Patriarca, X. Castelló, J. R. Uriarte, V. M. Eguíluz, and M. San Miguel, Modeling two-language competition dynamics, Advances in Complex Systems 15, 1250048 (2012).
[Isern2014] N. Isern and J. Fort, Language extinction and linguistic fronts, Journal of the Royal Society Interface 11, 20140028 (2014).
[Prochazka2017] K. Prochazka and G. Vogl, Quantifying the driving factors for language shift in a bilingual region, Proceedings of the National Academy of Sciences 114, 4365 (2017).
[Luck2020] J.-M. Luck and A. Mehta, On the coexistence of competing languages, The European Physical Journal B 93, 1 (2020).
[Maffi2005] L. Maffi, Linguistic, cultural, and biological diversity, Annu. Rev. Anthropol. 34, 599 (2005).
[Fincher2008] C. L. Fincher and R. Thornhill, A parasite-driven wedge: infectious diseases may explain language and other biodiversity, Oikos 117, 1289 (2008).
[Louf2021] T. Louf, D. Sánchez, and J. J. Ramasco, Capturing the diversity of multilingual societies, Physical Review Research 3, 043146 (2021).
J. Ramasco, title title Capturing the diversity of multilingual societies, @noop journal journal Physical Review Research volume 3, pages 043146 (year 2021)NoStop [Seoane and Mira(2022)]Seoane2022 author author L. F. Seoane and author J. Mira, title title Are dutch and french languages miscible?, @noop journal journal The European Physical Journal Plus volume 137, pages 836 (year 2022)NoStop [Wang and Minett(2005)]Wang2005 author author W. S. Wang and author J. W. Minett, title title The invasion of language: emergence, change and death, @noop journal journal Trends in ecology & evolution volume 20, pages 263 (year 2005)NoStop [Solé et al.(2010)Solé, Corominas-Murtra, and Fortuny]Sole2010 author author R. V. Solé, author B. Corominas-Murtra, and author J. Fortuny, title title Diversity, competition, extinction: the ecophysics of language change, @noop journal journal Journal of The Royal Society Interface volume 7, pages 1647 (year 2010)NoStop [Baronchelli et al.(2012)Baronchelli, Loreto, and Tria]Baronchelli2012 author author A. Baronchelli, author V. Loreto, and author F. Tria, title title Language dynamics, @noop journal journal Advances in Complex Systems volume 15, pages 1203002 (year 2012)NoStop [Boissonneault and Vogt(2021)]Boissonneault2021 author author M. Boissonneault and author P. Vogt, title title A systematic and interdisciplinary review of mathematical models of language competition, @noop journal journal Humanities and Social Sciences Communications volume 8, pages 21 (year 2021)NoStop [Albury(2020)]AlburyAttitudes author author N. Albury, @noop title Handbook of Home Language Maintenance and Development. Chapter 18: Language attitudes and ideologies on linguistic diversity, edited by editor A. C. Schalley and editor S. A. Eisenchlas (publisher De Gruyter Mouton, year 2020) pp. pages 357–376NoStop [Garrett(2001)]Garrett2001 author author P. Garrett, title title Language attitudes and sociolinguistics, @noop journal journal Journal of Sociolinguistics volume 5, pages 626 (year 2001)NoStop [Garrett(2007)]GarrettAttitudesBook author author P. Garrett, @noop title The Routledge Companion to Sociolinguistics. Chapter 14: Language attitudes, edited by editor P. S. Carmen Llamas, Louise Mullany (publisher Routledge, year 2007) pp. pages 133–139NoStop [Milroy(2007)]MilroyIdeologyStandard author author J. Milroy, @noop title The Routledge Companion to Sociolinguistics. Chapter 16: The ideology of the standard language, edited by editor P. S. Carmen Llamas, Louise Mullany (publisher Routledge, year 2007) pp. pages 133–139NoStop [Labov(1972)]LabovSP author author W. Labov, @noop title Sociolinguistic Patterns (publisher University of Pennsylvania Press, address Philadelphia, year 1972)NoStop [White et al.(1998)White, Vandiver, Becker, Overstreet, Temple, Hagan, and Mandelbaum]White1998 author author M. J. White, author B. J. Vandiver, author M. L. Becker, author B. G. Overstreet, author L. E. Temple, author K. L. Hagan, and author E. P. Mandelbaum, title title African american evaluations of black english and standard american english, @noop journal journal Journal of Black Psychology volume 24, pages 60 (year 1998)NoStop [Elordieta and Romera(2021)]Elordieta2021 author author G. Elordieta and author M. Romera, title title The influence of social factors on the prosody of Spanish in contact with Basque, @noop journal journal International Journal of Bilingualism volume 25, pages 286 (year 2021)NoStop [Liggett(1985)]Liggett1985 author author T. M. 
Liggett, @noop title Interacting particle systems, Vol. volume 2 (publisher Springer, year 1985)NoStop [Castellano et al.(2009)Castellano, Fortunato, and Loreto]Castellano2009 author author C. Castellano, author S. Fortunato, and author V. Loreto, title title Statistical physics of social dynamics, @noop journal journal Reviews of modern physics volume 81, pages 591 (year 2009)NoStop [Masuda et al.(2010)Masuda, Gibert, and Redner]Masuda2010 author author N. Masuda, author N. Gibert, and author S. Redner, title title Heterogeneous voter models, @noop journal journal Physical Review E volume 82, pages 010103 (year 2010)NoStop [Masuda and Redner(2011)]Masuda2011 author author N. Masuda and author S. Redner, title title Can partisan voting lead to truth?, @noop journal journal Journal of Statistical Mechanics: Theory and Experiment , pages L02002 (year 2011)NoStop [Baronchelli(2018)]Baronchelli2018 author author A. Baronchelli, title title The emergence of consensus: a primer, @noop journal journal Royal Society open science volume 5, pages 172189 (year 2018)NoStop [Redner(2019)]Redner2019 author author S. Redner, title title Reality-inspired voter models: A mini-review, @noop journal journal Comptes Rendus Physique volume 20, pages 275 (year 2019)NoStop [Hart and Case()]polygonparable author author V. Hart and author N. Case, @noop title Parable of the Polygons, howpublished <https://ncase.me/polygons/>NoStop [McIntosh(2014)]McIntosh2014 author author J. McIntosh, title title Linguistic atonement: Penitence and privilege in white Kenyan language ideologies, @noop journal journal Anthropological Quarterly volume 87, pages 1165 (year 2014)NoStop [W. C. So and Lau(2013)]So2013 author author D. W. C. So and author C.-f. Lau, title title Rapid large scale intra-nationality language shift in Hong Kong, @noop journal journal Journal of Chinese Linguistics volume 41, pages 21 (year 2013)NoStop [Gorenflo et al.(2012)Gorenflo, Romaine, Mittermeier, and Walker-Painemilla]Gorenflo2012 author author L. J. Gorenflo, author S. Romaine, author R. A. Mittermeier, and author K. Walker-Painemilla, title title Co-occurrence of linguistic and biological diversity in biodiversity hotspots and high biodiversity wilderness areas, @noop journal journal Proceedings of the National Academy of Sciences volume 109, pages 8032 (year 2012)NoStop [De Silva et al.(2020)De Silva, Basheer, Antwi-Fordjour, Beauregard, Chand, and Parshad]Desilva2020 author author K. De Silva, author A. Basheer, author K. Antwi-Fordjour, author M. A. Beauregard, author V. Chand, and author R. D. Parshad, title title The “higher” status language does not always win: The fall of English in India and the rise of Hindi, @noop journal journal Advances in Complex Systems volume 23, pages 2050021 (year 2020)NoStop [Gonçalves and Sanchez(2014)]Goncalves2014 author author B. Gonçalves and author D. Sanchez, title title Crowdsourcing dialect characterization through Twitter, @noop journal journal PLOS ONE volume 9, pages e112074 (year 2014)NoStop [Vazquez et al.(2010)Vazquez, Castelló, and San Miguel]Vazquez2010 author author F. Vazquez, author X. Castelló, and author M. San Miguel, title title Agent based models of language competition: macroscopic descriptions and order–disorder transitions, @noop journal journal Journal of Statistical Mechanics: Theory and Experiment volume 2010, pages P04007 (year 2010)NoStop [Milroy and Llamas(2013)]Milroy2013 author author L. Milroy and author C. 
Llamas, title title Social networks, @noop journal journal The handbook of language variation and change , pages 407 (year 2013)NoStop [Olteanu et al.(2019)Olteanu, Castillo, Diaz, and Kıcıman]Olteanu2019 author author A. Olteanu, author C. Castillo, author F. Diaz, and author E. Kıcıman, title title Social data: Biases, methodological pitfalls, and ethical boundaries, @noop journal journal Frontiers in Big Data volume 2, pages 13 (year 2019)NoStop [Kaplan and Baldauf(1997)]Kaplan1997 author author R. B. Kaplan and author R. B. Baldauf, @noop title Language planning from practice to theory, Vol. volume 108 (publisher Multilingual Matters, year 1997)NoStop
http://arxiv.org/abs/2307.01015v1
20230703134524
CGAM: Click-Guided Attention Module for Interactive Pathology Image Segmentation via Backpropagating Refinement
[ "Seonghui Min", "Won-Ki Jeong" ]
cs.CV
[ "cs.CV" ]
Tumor region segmentation is an essential task for the quantitative analysis of digital pathology. Recently presented deep neural networks have shown state-of-the-art performance in various image-segmentation tasks. However, because of the unclear boundary between the cancerous and normal regions in pathology images, despite using modern methods, it is difficult to produce satisfactory segmentation results in terms of the reliability and accuracy required for medical data. In this study, we propose an interactive segmentation method that allows users to refine the output of deep neural networks through click-type user interactions. The primary method is to formulate interactive segmentation as an optimization problem that leverages both user-provided click constraints and semantic information in a feature map using a click-guided attention module (CGAM). Unlike other existing methods, CGAM avoids excessive changes in segmentation results, which can lead to the overfitting of user clicks. Another advantage of CGAM is that the model size is independent of input image size. Experimental results on pathology image datasets indicated that our method performs better than existing state-of-the-art methods. Interactive segmentation, digital pathology § INTRODUCTION Segmenting tumor area in whole-slide images (WSI) is an important task in digital pathology, as it serves as a basis for the diagnosis of a target lesion.
However, the difference in visual features, including the color and texture of malignant and normal regions, is insignificant in particular histopathological images. Because of this innate biological property, even experts in this domain need considerable time to accurately distinguish these areas with the naked eye. In addition, it is difficult to capture precise boundaries for classifying malignant regions using conventional automated segmentation methods that mostly rely on edges. To this end, interactive segmentation modifies automated segmentation methods to enable user interactions <cit.>. This allows users to quickly obtain high-quality segmentation results by providing interactions that reflect their intentions. Commonly used user-interaction types include bounding boxes <cit.>, scribbles <cit.>, and clicks <cit.>. Among these, we specifically focused on click-based interactive segmentation in which users provide positive/negative clicks to differentiate between foreground and background regions. Prior to deep learning, conventional approaches <cit.> considered interactive segmentation as an optimization problem. Because semantic information has not been fully exploited with many built-in heuristics, these methods require large amounts of user interactions. Deep learning-based interactive segmentation methods <cit.> improve the segmentation accuracy of deep neural networks <cit.> by incorporating user interactions. While showing outstanding performance compared to conventional methods, existing deep learning-based interactive segmentation methods rely heavily on high-level semantic priors and perform poorly for object classes not seen during training. Recently, backpropagating refinement scheme (BRS) <cit.> addressed this issue by integrating optimization-based and deep learning-based methods. BRS sets the interaction maps entered into the network as trainable parameters. By backpropagating the loss calculated by prediction and user clicks, BRS fine-tunes the interaction maps in an online manner. Feature backpropagating refinement scheme (f-BRS) <cit.> is an improvement to the previous method in terms of inference time and computational budget by inserting a set of auxiliary parameters after the intermediate network layer for optimization. Backpropagation only through a subpart of the network improves the efficiency of f-BRS. In a follow-up study, Lin et al. <cit.> proposed generalized backpropagating refinement scheme (G-BRS), advanced layer architectures that enable more delicate refinement. However, the above methods optimize additional modules only through the minimization of loss calculated by limited user interactions, causing unwanted overall changes owing to overfitting. In this study, we propose a new click-guided attention module (CGAM) for BRS-based interactive segmentation. CGAM addresses the overfitting issue of existing BRS-based methods by directly receiving click maps and a feature map from where it is inserted as a module input and utilizing them for optimization. CGAM enforces the desired specification on the result of a deep learning model by restricting the feature space with self-attention and the additional guidance of click maps. Furthermore, in contrast to G-BRS, the model size of CGAM is independent of the input image size, allowing the method to easily handle large-scale images. We demonstrate the segmentation performance of CGAM over existing deep learning-based interactive segmentation refinement methods on the PAIP2019 challenge dataset <cit.>. 
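To make the backpropagating-refinement idea described above concrete, the following minimal PyTorch sketch shows a generic BRS-style test-time loop: a small auxiliary module inserted into a frozen network is optimized so that the output logits agree with the user clicks while the prediction stays close to its initial state. The function name refine_with_clicks, the squared-hinge click term, and the regularizer weight are illustrative assumptions rather than the exact formulations used by BRS, f-BRS, or G-BRS.

import torch
import torch.nn.functional as F

def refine_with_clicks(head, feature, clicks, aux_module, steps=20, lr=5e-2, lam=1.0):
    """Generic BRS-style test-time refinement (illustrative sketch).

    head       : frozen network head h(.) mapping a feature map to foreground logits
    feature    : frozen intermediate feature map, shape (1, c, h, w)
    clicks     : list of (row, col, label) with label in {+1, -1}, in logit resolution
    aux_module : small trainable module inserted after `feature`
    """
    opt = torch.optim.Adam(aux_module.parameters(), lr=lr)
    with torch.no_grad():
        initial = head(aux_module(feature))              # prediction before refinement
    for _ in range(steps):
        logits = head(aux_module(feature))               # (1, 1, H, W)
        click_loss = logits.new_zeros(())
        for (r, c, label) in clicks:
            # Squared hinge: positive clicks push the logit above +1,
            # negative clicks push it below -1 (one common convention).
            click_loss = click_loss + F.relu(1.0 - label * logits[0, 0, r, c]) ** 2
        # Keep the refined prediction close to the initial one to avoid global changes.
        reg = (logits - initial).pow(2).mean()
        loss = click_loss + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head(aux_module(feature))

In f-BRS the auxiliary parameters are channel-wise scales and biases; CGAM, introduced in the next section, replaces them with a click-aware attention module.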
§ METHOD §.§ Architecture Overview An overview of the proposed method is shown in Fig. <ref>. To compare the proposed model, f-BRS, and G-BRS directly, we chose the standard DeepLabV3+ with ResNet-101 containing a distance maps fusion module (DMFM) proposed in <cit.> as the basis architecture. As in <cit.>, CGAM is inserted after the atrous spatial pyramid pooling (ASPP) layer in the DeepLabV3+ decoder. CGAM modifies the feature map at that location by receiving the feature map and click maps as inputs. The output logits are generated as the modified feature map passes through the rest of the network. In this study, we define interactive segmentation as an optimization problem and solve it with respect to the parameters of CGAM. Thus, the optimization loss is computed from the output logits and user-provided clicks. By backpropagating the loss, CGAM is updated for better segmentation performance in an online manner. Assume f is the function implemented by the basis network. With input image I and click maps C, the intermediate feature map at the location where CGAM is inserted is defined as g(I,C). Using h to denote the function that the network head implements, f can be represented as f(I,C):= h(g(I,C)). We express the whole process f̂ with CGAM parameterized by θ inserted as follows: f̂(I,C;θ) = h(θ(g(I,C), C)). We define the set of user-provided clicks as {(u_i,v_i,l_i,r_i)}^N_i=1, where (u,v), l∈{-1,1}, and r represent the coordinates, label, and radius of each click, respectively. M∈{0,1}^H× W is a binary mask generated using the newest click that selects the region outside of r. The optimization problem is formulated as the minimization of the following loss, similar to that of <cit.>: ℒ_t(I,C_t) = min_θ_t𝔼_i∈[1,t] [max(l_i - f̂(I,C_t;θ_t)_u_i,v_i, 0)]^2 + λ‖M⊙(f̂(I,C_t; θ_t-1)-f̂(I,C_t;θ_t))‖^2_2, where ⊙ is the Hadamard product and t∈[1,N] is the interaction step. The first term enforces the correct output segmentation corresponding to the user-provided clicks, and the second term prevents excessive modification to avoid overfitting. The scaling constant λ regulates the trade-off between the two terms. §.§ Click-Guided Attention Module CGAM is a self-attention module that specializes in interactive segmentation, inspired by self-attention methods <cit.>. With the additional guidance of click maps, CGAM highlights feature responses in regions reflecting user intention. Fig. <ref> (left) illustrates the pipeline of CGAM. Denoting the input feature map as g(I,C)=m∈ℝ^c× hw, the attention matrix α∈ℝ^c× hw is obtained as follows: α = ψ^T (ReLU (W_C^T C_d + W_m^T m)), where C_d∈ℝ^2× hw represents the click maps downsampled to the resolution of m. The linear transformations with weight matrices W_C∈ℝ^2× c/2, W_m∈ℝ^c× c/2, and ψ∈ℝ^c/2× c are implemented as 1×1 convolutions. The output of CGAM, the modified feature map m̂∈ℝ^c× hw, is then calculated as follows: m̂ = m ⊙ α. CGAM preserves the initial behavior of the network before it learns through backpropagation by setting the initial value of α to one. It can be observed from the attention heat map in Fig. <ref> that CGAM focuses on important regions by exploiting information in the click maps. The attention matrix assigns element-wise weights to the feature map, which enables local refinement. This operation also frees CGAM from the dependency on the input image size that constrains G-BRS.
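A minimal PyTorch sketch of the module described above is given below. It follows the stated equations (two 1×1 convolutions on the click maps and the feature map, a ReLU, a third 1×1 convolution ψ, and element-wise gating m̂ = m ⊙ α); the bilinear downsampling of the click maps and the zero-weight/unit-bias initialization used to realize the α = 1 starting point are assumptions about implementation details not spelled out in the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CGAM(nn.Module):
    """Click-Guided Attention Module (illustrative sketch, not the authors' released code)."""

    def __init__(self, channels: int):
        super().__init__()
        mid = channels // 2
        self.w_c = nn.Conv2d(2, mid, kernel_size=1)         # W_C : 2-channel click maps -> c/2
        self.w_m = nn.Conv2d(channels, mid, kernel_size=1)  # W_m : feature map          -> c/2
        self.psi = nn.Conv2d(mid, channels, kernel_size=1)  # psi : c/2 -> c
        # Assumed realization of the alpha = 1 initialization: with zero weights and
        # unit bias, the module starts as an identity gating of the feature map.
        nn.init.zeros_(self.psi.weight)
        if self.psi.bias is not None:
            nn.init.ones_(self.psi.bias)

    def forward(self, m: torch.Tensor, clicks: torch.Tensor) -> torch.Tensor:
        # m      : (1, c, h, w) feature map taken after the ASPP layer
        # clicks : (1, 2, H, W) positive/negative click maps at input resolution
        c_d = F.interpolate(clicks, size=m.shape[-2:], mode="bilinear", align_corners=False)
        alpha = self.psi(F.relu(self.w_c(c_d) + self.w_m(m)))   # attention matrix
        return m * alpha                                         # element-wise gating, m_hat

Because the number of parameters depends only on c, the module size is independent of the input image size; during refinement only these parameters are optimized with ℒ_t, for instance by binding the click maps with functools.partial and passing the result as the auxiliary module to a loop like the one sketched in the introduction.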
§ EXPERIMENT §.§ Data Description The whole-slide image (WSI) dataset used in our experiment was from the PAIP2019 challenge <cit.>. After scaling the slides to 5× magnification, we extracted patches in which the tumor area accounts for 20% to 80% of the total area; these patches were regarded as boundary regions, since interactive refinement of segmentation results is mainly required at the boundary of the tumor. §.§ Implementation Details We trained our network on the pathology dataset with 5190 patches using the normalized focal loss proposed in <cit.>. We sampled the clicks during training following the standard procedure of <cit.>. The maximum number of clicks per image was set as 20, limiting the number of positive and negative clicks to less than 10. We used the Adam optimizer with β_1 = 0.9, β_2 = 0.999, and trained the networks for 120 epochs. We set the learning rate as 5 × 10^-4 for the first 100 epochs, and 5 × 10^-5 for the last 20 epochs. For the inference-time optimization, we also used the Adam optimizer with β_1 = 0.9 and β_2 = 0.999. We performed back-propagation for 20 iterations for each click. We set the learning rate as 5 × 10^-2 and λ as 1. We conducted the experiments on a Windows 10 PC equipped with an NVIDIA RTX 3090 GPU. §.§ Evaluation Protocol For fair comparisons, we used the automatic click generation strategy proposed in <cit.>: The class of the following click was determined based on whether the dominant prediction error type was false positive or false negative. The click was placed at the point where the corresponding error region had the maximum Euclidean distance from its boundary. The distance was set as the radius of the click. This process was repeated until the target Intersection over Union (IoU) or the maximum number of clicks was reached. §.§ Evaluation Metrics We set the target IoU as 85% and 90%. We limited the maximum number of clicks to 20. We reported the mean number of clicks (NoC) required to achieve the target IoU. We reported the number of failures (NoF), indicating the number of cases in which the target IoU was not reached with the maximum number of clicks. We reported the seconds per click (SPC) to measure the response time for each click. Finally, we reported the total time required to process the entire dataset. § RESULTS §.§ Comparison We evaluated 131 patches of WSI whose initial predictions had IoU scores between 50% and 70%. We compared CGAM with f-BRS and G-BRS, state-of-the-art BRS-based methods. For G-BRS, we selected the G-BRS-bmconv layer with the best performance reported in <cit.>. Table <ref> presents the average NoC, NoF, SPC, and total time of the three methods for target IoUs of 85% and 90%. CGAM outperformed the other methods in nearly all metrics. The NoC results show that users can obtain satisfactory segmentation masks with less effort using CGAM. The NoF results show that, compared with the other methods, CGAM successfully reached the target IoU in most cases. For speed-related metrics, the SPC of CGAM was slightly slower than that of G-BRS. However, CGAM reduced the total time required to reach the target IoU with fewer clicks. §.§ Ablation Study We conducted an ablation study to assess how click maps contribute to the performance of CGAM. We tested the following three scenarios: the first scenario received clicks on the appropriate coordinates and classes, generated by the automatic click-generation strategy described in Subsection <ref>. The second scenario assumed inappropriate (incorrect) clicks by generating clicks on random coordinates and classes.
The third scenario assumed that no clicks were provided using a zero tensor with the same shape as that of the click maps. As shown in Table <ref>, CGAM achieved the best performance in all metrics when proper click maps were provided. Considering that the results of random clicks are worse than those of zero-tensor case, we can confirm that CGAM leverages the information in the click maps for optimization. §.§ Discussion CGAM outperformed other methods in terms of both accuracy and time. In particular, the model size of CGAM is fixed and constant regardless of the input size, unlike G-BRS, which expands linearly in proportion to the height and width of the input image; this decreases the number of parameters from 93k to 84k even for small images with 400 × 400 pixels. These results show the efficiency of CGAM and further demonstrate its potential for extending to multi-scale and large image segmentation tasks (e.g., segmentation of WSI). § CONCLUSION In this study, we proposed CGAM for interactive image segmentation through back-propagating refinement. Exploiting the information in click maps by using it as input, CGAM increases the utility of user-provided clicks in interactive segmentation tasks. Experiments showed the improved performance of CGAM in pathology image segmentation as compared to other state-of-the-art methods. In future work, we plan to extend the current framework to address entire WSI such that it can be flexibly applied in real-world situations. § COMPLIANCE WITH ETHICAL STANDARDS This research study was conducted retrospectively using human subject data made available in open access by PAIP2019. Ethical approval was not required as confirmed by the license attached with the open access data. § ACKNOWLEDGMENTS This work is supported by the National Research Foundation of Korea (NRF-2019M3E5D2A01063819, NRF-2021R1A6 A1A13044830), the Institute for Information & Communications Technology Planning & Evaluation (IITP-2023-2020-0-01819), and the Korea Health Industry Development Institute (HI18C0316). IEEEbib
http://arxiv.org/abs/2307.01715v2
20230704133447
Align With Purpose: Optimize Desired Properties in CTC Models with a General Plug-and-Play Framework
[ "Eliya Segev", "Maya Alroy", "Ronen Katsir", "Noam Wies", "Ayana Shenhav", "Yael Ben-Oren", "David Zar", "Oren Tadmor", "Jacob Bitterman", "Amnon Shashua", "Tal Rosenwein" ]
cs.CL
[ "cs.CL", "cs.LG", "cs.SD", "eess.AS" ]
OrCam Technologies LTD, Jerusalem, Israel Connectionist Temporal Classification (CTC) is a widely used criterion for training supervised sequence-to-sequence (seq2seq) models. It enables learning the relations between input and output sequences, termed alignments, by marginalizing over perfect alignments (that yield the ground truth), at the expense of imperfect alignments. This binary differentiation of perfect and imperfect alignments falls short of capturing other essential alignment properties that hold significance in other real-world applications. Here we propose Align With Purpose, a general Plug-and-Play framework for enhancing a desired property in models trained with the CTC criterion. We do that by complementing the CTC with an additional loss term that prioritizes alignments according to a desired property. Our method does not require any intervention in the CTC loss function, enables easy optimization of a variety of properties, and allows differentiation between both perfect and imperfect alignments. We apply our framework in the domain of Automatic Speech Recognition (ASR) and show its generality in terms of property selection, architectural choice, and scale of training dataset (up to 280,000 hours). To demonstrate the effectiveness of our framework, we apply it to two unrelated properties: emission time and word error rate (WER). For the former, we report an improvement of up to 570ms in latency optimization with a minor reduction in WER, and for the latter, we report a relative improvement of 4.5% WER over the baseline models. To the best of our knowledge, these applications have never been demonstrated to work on a scale of data as large as ours. Notably, our method can be implemented using only a few lines of code[The code will be made publicly available in the supplementary materials.], and can be extended to other alignment-free loss functions and to domains other than ASR. § INTRODUCTION Sequence-to-sequence (seq2seq) tasks, in which the learner needs to predict sequence of labels from unsegmented input data, are prevalent in various domains, e.g. handwriting recognition <cit.>, automatic speech recognition <cit.>, audio-visual speech recognition <cit.>, neural machine translation <cit.>, and protein secondary structure prediction <cit.>, to name a few. For many years, the segmentation issue was a bottleneck as finding the input-output relations, termed alignments, is the most difficult aspect of many seq2seq tasks <cit.>. Two main approaches were introduced to overcome the absence of explicit supervision of the input segmentation, namely soft and hard alignment. Soft alignment methods use attention mechanism <cit.> that softly predict the alignment using attention weights, while hard alignment methods learn in practice an explicit alignment <cit.>, by marginalizing over all alignments that create the ground truth labels. As streaming audio and video become prevalent <cit.>, architectures that can work in a streaming fashion gain attention. Although soft alignment techniques can be applied in chunks for streaming applications <cit.>, their implementation is not intuitive and less computationally efficient compared to hard alignment methods, which are naturally designed for streaming processing. Among the hard alignment methods, the CTC criterion <cit.> is a common choice due to its simplicity and interpretability.
During training, CTC minimizes the negative log-likelihood of the ground truth (GT) sequence. To overcome the segmentation problem, CTC marginalizes over all possible input-GT output pairings, termed perfect alignments. This is done using an efficient and differentiable forward-backward algorithm, which is the core algorithm in CTC. Note that <cit.> and <cit.> showed that the CTC posteriors tend to be peaky, and hence the posterior of one certain alignment is dominant over all others. Thus, as a by product, in practice CTC learns to predict an alignment without a direct supervision related to the alignment, While convenient, the implicit alignment learning comes with the cost of inability to control desired properties of the learned alignment. This can be explained by the inherent nature of CTC that marginalizes solely over all perfect alignments. Therefore, the CTC does not induce further prioritization between perfect alignments, nor does it prioritize between imperfect alignments. However, many real-world seq2seq applications may come with a property that induces, and sometimes requires, such prioritization. For example, in the contexts of ASR and OCR, a standard metric to test the quality of the system is the word error rate (WER). Therefore, prioritizing imperfect alignments with low WER can improve the performance of a system measured by this metric, and by that reduce the gap between the training and the testing criteria <cit.>. Another example is a low-latency ASR system. Here, even a perfect CTC score can only guarantee a perfect transcription while completely ignoring the latency of the system. Clearly, under this setting, for an application that requires fast response, prioritizing alignments with fast emission time is crucial. Figure <ref> visualizes the above mentioned properties. To exemplify the importance of prioritization, Table <ref> shows that a CTC score is not a good proxy for some properties of the output alignment. It shows two different models with a similar training loss that have different WER and emission time, although trained on the same data. In general, there are many other properties that also necessitate prioritization between alignments, whether perfect or imperfect. A similar phenomenon, where some property remains indistinguishable based on the training criteria, was also observed with the maximum likelihood criterion in natural language processing (NLP) domain. More specifically, <cit.> suggested BRIO, a simple and elegant mitigation technique, that achieved state-of-the-art results in abstraction text summarization, overcoming the challenge of multiple ground-truths. Essentially, this technique adds an additional loss term that prioritizes sequences that have a high ROUGE <cit.> score. To complement the CTC with additional prioritization between alignments, we aim to control the learned alignments by taking inspiration from BRIO. To achieve such controllability, we propose Align With Purpose (AWP) - a Plug-and-Play framework, which allows enhancing a given property in the outputs of models that are trained using the CTC criterion, while maintaining the transcription abilities of the model. We add an additional loss term, L_AWP, that expresses a more subtle differentiation between alignment, so that the final loss becomes L = L_CTC + α L_AWP. Specifically, for a given property, we design a function f_prop that receives an alignment as an input, and outputs an improved alignment, with respect to the property. 
Then, we sample N alignments based on the output probabilities of the pre-trained CTC model and apply f_prop to the sampled alignments to create N pairs of alignments. Finally, we calculate L_AWP using a hinge loss over the N pairs, thus encouraging the model to increase the probability mass of the preferable alignments, as described in Figure <ref>. Our work can be seen as an extension of BRIO to the CTC case, in which we aim to control properties unrelated to the diversity of the target distribution. The controllability goal for hard alignment criteria (such as CTC) was suggested in prior work, where many of these solutions involve intervention in the forward-backward algorithm <cit.>. As a consequence, these methods cannot address imperfect alignments, unlike AWP, which supports them naturally. Additionally, this intervention requires good engineering and consumes a considerable amount of development time and optimization. Other methods that do address imperfect alignments, such as <cit.>, still suffer from the latter, unlike AWP, which can be implemented using a few lines of code. To summarize, our main contributions are: * Align With Purpose - a simple and general Plug-and-Play framework to enhance a desired property in the outputs of a CTC model. * We show promising results in two properties that are independent of each other: we report an improvement of up to 570ms in latency optimization, and a relative improvement of 4.5% WER over the baseline models for the minimum WER optimization. * We show the generality of our framework in terms of property selection, scale of training dataset, and architectural choice. To the best of our knowledge, these applications have never been demonstrated to work on a scale of data as large as ours. * The framework enables prioritization between perfect alignments, as well as between imperfect alignments. We apply our approach to the ASR domain, specifically to models that are trained with the CTC criterion. However, this method can be extended to other alignment-free objectives, as well as to other domains besides ASR. § CTC AND ALIGN WITH PURPOSE The outline of this section is as follows: We will start with a description of the CTC loss in subsection <ref>, followed by a detailed explanation of the proposed "Align With Purpose" method in subsection <ref>, and finally we will showcase two applications: low latency in subsection <ref> and mWER in subsection <ref>. §.§ CTC The Connectionist Temporal Classification criterion <cit.> is a very common choice for training seq2seq models, as it does not require input segmentation, i.e., frame-level alignment between transcript and audio pairs. To relax the requirement of segmentation, an extra blank token ∅ that represents a null emission is added to the vocabulary V, so that V' = V ∪{∅}. Given a T-length input sequence x = [x_1, ..., x_T] (e.g. audio), the model outputs T vectors z_t ∈ ℝ^|V'|, each of which is normalized using the softmax function, so that z_t^k can be interpreted as the probability of emitting the token k at time t. An alignment a is a T-length sequence of tokens taken from V', and P(a|x) is defined by the product of its elements: P(a|x) = ∏_t=1^T p(a_t|x). The probability of a given target sequence (e.g. text) of length U, y = [y_1, ..., y_U] where U ≤ T, is the sum over the alignments that yield y: P(y|x) = ∑_a ∈ ℬ^-1(y) P(a|x), where ℬ is the collapse operator that first removes repetitions of tokens and then removes blank symbols. The CTC objective function is to minimize the negative log-likelihood of the alignments that yield y, as seen in Eq. <ref>: L_CTC(x) = -log P(y|x). Therefore, by definition, the CTC criterion only takes into account perfect alignments, meaning that all imperfect alignments are equally bad, as stated in <cit.>. In addition, the CTC criterion does not prioritize between perfect alignments, as they are equally good, as stated in <cit.>.
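As a small illustration of the definitions above, the snippet below implements the collapse operator ℬ and the alignment probability P(a|x) for a single utterance. The blank index and the toy alignments are arbitrary choices, and an efficient CTC implementation would of course marginalize over all perfect alignments with the forward-backward algorithm rather than enumerate them.

import numpy as np

BLANK = 0  # index of the blank token in V'

def collapse(alignment):
    """Collapse operator B: merge repeated tokens, then drop blanks."""
    out, prev = [], None
    for tok in alignment:
        if tok != prev:
            out.append(tok)
        prev = tok
    return [t for t in out if t != BLANK]

def alignment_prob(posteriors, alignment):
    """P(a | x) = prod_t p(a_t | x) from a (T, |V'|) matrix of per-frame posteriors."""
    return float(np.prod([posteriors[t, tok] for t, tok in enumerate(alignment)]))

# Toy check: both alignments below collapse to the same label sequence [2, 3];
# P(y|x) marginalizes over all such alignments.
a1 = [2, 2, BLANK, 3]
a2 = [BLANK, 2, 3, 3]
assert collapse(a1) == collapse(a2) == [2, 3]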
§.§ Align with Purpose In this section, we present Align With Purpose (AWP), a method that aims to overcome the lack of controllability of the CTC criterion. AWP complements the CTC loss with an additional loss term that enables a more subtle differentiation between alignments. Importantly, AWP is a general Plug-and-Play framework for enhancing a desired property in models trained with the CTC criterion, while maintaining the seq2seq capabilities of the model. Specifically, given a desired property to enhance, one needs to design a property-specific function f_prop that takes an alignment a and improves it to obtain â, which is considered better w.r.t. the property. During training, we sample random alignments according to the distribution induced by the output of the seq2seq model. Then we apply f_prop on the random alignments to obtain â. Finally, we prioritize â over a by applying a hinge loss on their probabilities. See Fig. <ref> for an illustration of the proposed framework. As pointed out in previous works <cit.>, sampling from a randomly initialized model is less effective, as the outputs are completely random. Therefore, we train the model to some extent with the CTC loss as in Eq. <ref>, and proceed training with the proposed method. Formally, we define the property-specific function that takes as input an alignment and returns an alignment of the same length: f_prop: V'^T → V'^T. Then, at each training step we sample N random alignments according to the distribution induced by the output of the seq2seq model, such that a^i_t ∼ z_t for t ∈ [1..T] and i ∈ [1..N]. We then apply â^i = f_prop(a^i) to obtain better alignments. This creates N pairs of alignments (a^i, â^i), such that a^i is the least favored in each pair. Finally, to enhance the desired property we encourage the model to increase the probability mass of â^i by applying a hinge loss on the alignment pairs: L_AWP(x) = 1/N ∑_i=1^N max{ P(a^i|x) - P(â^i|x) + λ , 0 }, where λ is a margin determined on a validation set. To enable differentiation during the sampling process, we utilize Gumbel-Softmax sampling, as proposed by <cit.>. Putting it all together, the training loss then becomes: L(x) = L_CTC(x) + α L_AWP(x), where α is a tunable hyper-parameter that controls the trade-off between the desired property and the CTC loss.
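A hedged sketch of how the AWP term could be computed for one utterance is shown below: N alignments are drawn with straight-through Gumbel-Softmax from the per-frame posteriors, each is improved by a property-specific function, and the hinge of the equation above is applied to the pair probabilities. Function names, the toy property function, and the choice to accumulate probabilities through log-space are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def awp_loss(logits, f_prop, n_samples=4, margin=0.0, tau=0.5):
    """Illustrative AWP term for a single utterance; the total loss is L_CTC + alpha * this.

    logits : (T, |V'|) per-frame scores of the acoustic model
    f_prop : maps a length-T list of token ids to an improved length-T list
    """
    log_probs = F.log_softmax(logits, dim=-1)                   # (T, |V'|)
    loss = logits.new_zeros(())
    for _ in range(n_samples):
        # Differentiable sampling of one alignment a_t ~ z_t (straight-through Gumbel-Softmax).
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)  # (T, |V'|)
        a = one_hot.argmax(dim=-1).tolist()
        a_hat = f_prop(a)
        logp_a = (one_hot * log_probs).sum()                    # log P(a|x), keeps gradients
        logp_hat = log_probs[list(range(len(a_hat))), a_hat].sum()
        # Hinge on the pair, as in the equation above (probabilities recovered from logs).
        loss = loss + F.relu(logp_a.exp() - logp_hat.exp() + margin)
    return loss / n_samples

# Toy property function in the spirit of the low-latency application described next:
# drop the first repeated token and pad with a trailing blank. The paper samples the
# repetition position at random; taking the first one simply keeps the example short.
def shift_left_at_first_repetition(a, blank=0):
    for j in range(1, len(a)):
        if a[j] == a[j - 1]:
            return a[:j - 1] + a[j:] + [blank]
    return a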
§.§ Applications: Low Latency Streaming ASR with low latency is an active research field, as it serves as a key component in many real-world applications such as personal assistants, smart homes, real-time transcription of meetings, etc. <cit.>. To measure the overall latency of a system, three elements should be taken into account: data collection latency (DCL), which is the future context of the model, drift latency (DL), and computational latency (CL), as defined by <cit.>. We leave the CL component out of the scope of this work, as it is sensitive to architectural choice, hardware, and implementation. Thus, we denote by TL=DCL+DL the total latency of the system. Several techniques were suggested to reduce the TL: input manipulation <cit.>, loss modification <cit.>, loss regularization <cit.>, and architectural choice <cit.>. These methods were specific to the low-latency setting, or required intervention in the forward-backward algorithm, unlike AWP, which is a general plug-and-play method. One way to reduce the DCL is by limiting the future context of the model. In attention-based models this can be achieved by left-context attention layers <cit.>, and in convolutional NNs it can be achieved using asymmetrical padding <cit.>. However, <cit.> have shown that training with limited future context results in a drift (delay) in the emission time of tokens (DL), as can be seen in Fig. <ref>. The cause of the drift was explained by <cit.>, who made the observation that less future context deteriorates performance. Therefore, by delaying the emission time, the model effectively gains more context, which in turn improves the performance. To apply AWP for mitigating the drift effect (DL), given an alignment a, we sample a random position within it and shift the tokens one step to the left from that position to obtain â, as seen in Fig. <ref>. Clearly, the token emission times of â are one time step earlier than those of a, from the random position onward. By limiting the initial shift position to correspond to tokens that are repetitions, we ensure that the collapsed text of â remains the same as that of a. To make â a T-length alignment, we pad it with a trailing blank symbol. Formally, we define the function f_low_latency as follows. Given an alignment a, define the subset of indices [j_1, .., j_T'] ⊆ [2..T] as all the indices such that a_j_k = a_j_k-1, meaning that a_j_k is a repetition of the previous token. Then we sample a random position j from [j_1, .., j_T'] and obtain â as in Eq. <ref>: â_t = a_t if t < j - 1, â_t = a_t+1 if j-1 ≤ t < T, and â_T = ∅. §.§ Applications: Minimum Word Error Rate The most common metric used to assess an ASR system is the word error rate (WER). As pointed out by <cit.>, there is a gap between the training and testing criteria, as the CTC objective function does not prioritize between imperfect alignments. Therefore, adding such prioritization of alignments w.r.t. their WER could improve the system's performance. Prior work addressed this issue: <cit.> suggested approaching it by minimizing the expected WER, and <cit.> suggested a similar objective for training with the cross entropy (CE) loss. As illustrated in Figure <ref>, to apply AWP for minimum WER (mWER) training, we define f_mWER. Given a sampled imperfect alignment a and a ground truth transcription y, to obtain â we select the word in the collapsed text ℬ(a) which requires the minimum number of substitutions to correct. Then we fix the alignment of this word according to the ground truth, so that the number of word errors in ℬ(â) is smaller by 1. § EXPERIMENTAL SETUP Our proposed framework is evaluated on two end-tasks: low latency and mWER, by conducting experiments using multiple architectures and different scales of datasets. General settings are detailed in Sec. <ref>, Sec. <ref> describes the low latency experiment, and Sec. <ref> describes the mWER experiment. §.§ General settings Datasets. To examine our framework on different scales of data, we train on small, medium, and large scale datasets. For the small scale, we train models on the LibriSpeech dataset <cit.>, which consists of 960 training hours (LS-960).
For the medium scale, we train models on a 35K hours curated subset of LibriVox[<http://www.openslr.org/94/>] (LV-35K), where samples with low confidence of a reference model were filtered out. For the large scale, we train models on an internal dataset of 280K hours of audio-transcript pairs (Internal-280K), which is, to the best of our knowledge, the largest dataset that was used to train either a low-latency model, or a model aiming directly to reduce the WER. We test our framework on the test splits of LibriSpeech. Audio is sampled at 16KHz, 16 bits/sample. Architecture. We trained Stacked ResNet and Wav2Vec2 models <cit.>. We used a pretrained version of the base Wav2Vec2 model (90M parameters) available on HuggingFace [<https://huggingface.co/facebook/wav2vec2-base-100k-voxpopuli>]. The model was pre-trained for 30 epochs on the 100K hours from VoxPopuli dataset <cit.>. The model recieves the raw audio as an input, and outputs 29 English lower-case characters, including apostrophe, space, and blank tokens. Regarding the Stacked ResNet model, we extracted 80-channel Mel filter-banks features computed from a 32ms window with a stride of 16ms. For each frame, we stacked the filter banks with a first and second derivatives, resulting in a 240 dimensional input vector. We down-sample the audio input from 16ms to 32ms by applying MaxPool layer within the first layer of the first Stacked ResNet block, then stacked 20 ResNet blocks <cit.> with kernel size of 5. Skip connections are added every 4 ResNet blocks. The head of the model results in 29 English lower-case characters, including apostrophe, space, and blank tokens. The model consists of 66M parameters in total. This architecture induces a 6.4 seconds of context in total. Results shown are using an exponential moving average (EMA) model, which is aggregated alongside the model. Decoding. Models were decoded using an in-house implementation of a beam search decoder described in <cit.>, using a beam size of 100, and two language models: an open-source 5-gram language model[<https://www.openslr.org/11/>] (WordLM) trained on the Librispeech LM corpus, and a character-level language model (CharLM) that we trained on the same corpus. The beam search picks transcriptions y which maximize the quantity L(y) defined by: L(y)=P_acoustic(y|x)+β P_CharLM(y)+γ P_WordLM(y) where β=0.8 and γ = 0.8 are the CharLM and WordLM weights, respectively. Text Normalization. We used an in-house implementation of text normalization to remain in a vocabulary of 29 English characters. Sampling Method. To make the sampling process differentiable, we applied Gumbel-Softmax <cit.> during the sampling of alignments. Empirically, the Gumbel-softmax had no effect on results. Training. To test the effectiveness of AWP, we train the models for several steps, and then apply our framework, namely adding the AWP loss to the CTC loss as stated in Eq. <ref>. The epoch in which we start to apply our framework is denoted as 'start epoch' in tables <ref>, <ref>. We repeated this process several times, each time with a different 'start epoch'. §.§ Low Latency Architecture. All experiments (online and offline models) detailed in this section are conducted with a Stacked ResNet architecture mentioned in Sec. <ref>. This architecture can be implemented in a streaming manner and can be highly optimized for edge devices. Therefore, it's a natural choice for a system that works in an online fashion in a low resources environment. 
The offline model has 6.4 seconds of context in total, divided equally between past and future contexts. Although the model can be implemented in a streaming fashion, it would have a very large (3.2s) latency, due to its generous future context. The online model has a similar architecture and total context, but only 430 ms of the context relates to the future, which can be achieved by asymmetric padding as suggested in <cit.>. The small future context of the model makes it feasible to deploy as a streaming online ASR system. Measuring DL. Measuring the DL of the online models is relative to the offline model. To measure the DL, we force-align the target (GT) transcript of the offline and online models independently and compare the first appearance of each token in the two force-aligned texts. Then, we take the average difference between the occurrences. Training. When training AWP on LS-960, LV-35K, and Internal-280K, the hyper-parameters α and λ were set to 0.001, 0.001, 0.0005 and 0.01, 0, 0, respectively. RAdam optimizer <cit.> with α=0.9, β=0.999, and weight decay of 0.0001 was used. We set the LR to 0.001, with a ReduceLROnPlateau scheduler [<https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html>]. Models were trained for 20 epochs on LS-960 (small scale), 3 epochs on LV-35K (medium scale), and 1 epochs on Internal-280K (large scale). §.§ Minimum Word Error Rate Architecture. In this setting, to verify that our framework is not architecture-specific, we trained Stacked ResNet model, as well as a Wav2Vec2 model as described in Sec. <ref>. The Stacked Resnet model that was used for enhanceing mWER property is the same as the offline model described in subsection <ref>. Training. The baseline Stacked Resnet model was pre-trained on the Internal-280K dataset. Then we continue its training solely on LS-960 for 4.3 epochs before we apply AWP. The AWP hyperparameters were α = 0.1, λ = 0. The baseline and the model with AWP were trained for 4.2 additional epochs, reaching 8.5 epochs in total. We used the RAdam optimizer <cit.> with the same hyper parameters as in Sec. <ref>. The Wav2Vec2 baseline model was finetuned with SpecAugment <cit.> (with p=0.05 for time masking and p=0.0016 for channel masking) solely on LS-960 for 2.3 epochs before we applied AWP, and both the baseline and the AWP models were trained for another 27.5 epochs. We used the Adam optimizer <cit.> for this training, as well as a flat learning rate (LR) scheduler <cit.>. AWP hyper-parameters were set to α = 0.05, λ = 0. While training all models with AWP, we used a softmax temperature of 0.5 for the sampling of the N alignments. § RESULTS In this section we present the results achieved by training using Align With Purpose framework on the low latency and mWER applications mentioned earlier. §.§ Low Latency Table  <ref> shows the results when training on small, medium and large scales of data. We can see a clear trend across all scales where the AWP training successfully decreases the DL. More than that, it even manage to achieve a negative DL, thus decreasing the TL below the maximal TL expected by the architectural choice. In most cases, achieving such low TL solely by reducing the architectural future context using another padding optimization would not have been possible. In almost all the experiments, the WER increases with the latency reduction. This is a known trade-off between latency and accuracy as reported in prior work <cit.>. 
The choice of the operating point in terms of the balance between latency and accuracy can be determined by the weight of the AWP loss, α, and the scheduling of when we add the AWP loss ('start epoch'). We can also see that as the scale of the data increases, the WER decreases. This statement holds independently for the offline models and for the online models, and remains valid also after adding the AWP loss. This shows that AWP does not affect the ability of the model to improve its basic transcription capabilities using larger scales of data, which aligns with previous observations on large scale training . §.§ Minimum Word Error Rate Table <ref> shows a significant relative improvement of 4-4.5% in Word Error Rate (WER) when using AWP. This enhancement demonstrates the effectiveness of AWP in enhancing ASR performance. Moreover, we observe that the degree of improvement is influenced by the difficulty level of the benchmark. As the benchmark becomes more challenging, the gain achieved by our method becomes more pronounced. Furthermore, our proposed framework proves to be versatile, as it successfully operates on both streaming (Stacked ResNet) and offline (Wav2Vec2) architectures. The ability of our approach to adapt to different architectures highlights its applicability across various ASR systems. § DISCUSSION & FUTURE WORK The results obtained from our study provide valuable insights regarding the potential for improvement in ASR models trained with CTC criterion. Although not tested, this framework could be easily applied to other hard-alignment criteria such as Transducer <cit.>. Furthermore, by adapting and extending the concepts from our framework, it may be possible to enhance soft-alignment methods, even in domains beyond ASR. In addition, an intriguing aspect for future research is the formalization of the properties that can be enhanced using Align With Purpose. By establishing a formal framework, researchers can systematically identify, define, and prioritize the properties to be enhanced. This can lead to targeted improvements and a deeper understanding of the impact of different properties on ASR performance. Finally, our study showcases the capability of enhancing a single property at a time. In some applications, multiple properties should be enhanced simultaneously, potentially leading to better performance. § LIMITATION & BROADER IMPACT Although the AWP framework is relatively easy to use, its main limitation is that one needs to think carefully about the property function f_prop. When done in an elegant fashion, the implementation is straight forward. The proposed AWP framework enables one to enhance a desired property of an ASR model trained with CTC. As mentioned in <ref>, this method can be applied or adapted to domains other than ASR. On the choice of the property to enhance, especially in generative AI, one should be thoughtful not to increase bias, malicious or racist content of models. § CONCLUSIONS The binary differentiation of CTC between perfect and imperfect alignments highlights its limitation in capturing additional alignment properties, which is a key-requirement in many real-world applications. To overcome this limitation, we introduce Align With Purpose, a general Plug-and-Play framework designed to enhance specific properties in models trained using the CTC criterion. Our experimental results demonstrate promising outcomes in two key aspects: latency and minimum Word Error Rate (WER) optimization. 
Importantly, these optimizations are independent of each other, highlighting the versatility of our framework. The reduced latency achieved by our approach indicates faster transcription while maintaining transcription quality even with significantly reduced drift. Furthermore, using minimum WER (mWER) training emphasizes the importance of incorporating imperfect alignments, further enhancing the transcription quality of CTC systems. One of the strengths of our framework lies in its generality. It offers flexibility in selecting specific alignment properties, applies to large-scale training datasets, and is versatile to architectural choice. Our method does not require modifications to the CTC loss function and can be implemented using only a few lines of code.
http://arxiv.org/abs/2307.01567v1
20230704084258
Once-Training-All-Fine: No-Reference Point Cloud Quality Assessment via Domain-relevance Degradation Description
[ "Yipeng Liu", "Qi Yang", "Yujie Zhang", "Yiling Xu", "Le Yang", "Xiaozhong Xu", "Shan Liu" ]
eess.IV
[ "eess.IV" ]
Full-reference (FR) point cloud quality assessment (PCQA) has achieved impressive progress in recent years. However, as reference point clouds are not available in many cases, no-reference (NR) metrics have become a research hotspot. Existing NR methods suffer from poor generalization performance. To address this shortcoming, we propose a novel NR-PCQA method, Point Cloud Quality Assessment via Domain-relevance Degradation Description (D^3-PCQA). First, we demonstrate our model's interpretability by deriving the function of each module using a kernelized ridge regression model. Specifically, quality assessment can be characterized as a leap from the scattered perceptual domain (reflecting subjective perception) to the ordered quality domain (reflecting mean opinion score). Second, to reduce the significant domain discrepancy, we establish an intermediate domain, the description domain, based on insights from subjective experiments, by considering the domain relevance among samples located in the perception domain and learning a structured latent space. The anchor features derived from the learned latent space are generated as cross-domain auxiliary information to promote domain transformation. Furthermore, the newly established description domain decomposes the NR-PCQA problem into two relevant stages. These stages include a classification stage that gives the degradation descriptions to point clouds and a regression stage to determine the confidence degrees of descriptions, providing a semantic explanation for the predicted quality scores. Experimental results demonstrate that D^3-PCQA exhibits robust performance and outstanding generalization ability on several publicly available datasets. The code in this work will be publicly available at https://smt.sjtu.edu.cn. Point cloud, blind quality assessment, subjective modeling, learning-based metric Once-Training-All-Fine: No-Reference Point Cloud Quality Assessment via Domain-relevance Degradation Description Yipeng Liu, Qi Yang, Yujie Zhang, Yiling Xu, Le Yang, Xiaozhong Xu, Shan Liu. This paper is supported in part by National Natural Science Foundation of China (61971282, U20A20185). The corresponding author is Yiling Xu (e-mail: yl.xu@sjtu.edu.cn). Y. Liu, Y. Zhang and Y. Xu are from the Cooperative Medianet Innovation Center, Shanghai Jiaotong University, Shanghai, 200240, China (e-mail: liuyipeng@sjtu.edu.cn, yujie19981026@sjtu.edu.cn, yl.xu@sjtu.edu.cn). Q. Yang, X. Xu, S. Liu are from Media Lab, Tencent, Shenzhen, China (e-mail: chinoyang@tencent.com, xiaozhongxu@tencent.com, shanl@tencent.com). L. Yang is from the Department of Electrical and Computer Engineering, University of Canterbury, Christchurch 8041, New Zealand (e-mail: le.yang@canterbury.ac.nz). Corresponding author: Y. Xu
§ INTRODUCTION Recently, point cloud data has emerged as a promising representation format for representing 3D objects in various applications <cit.>. A point cloud is a collection of non-uniformly scattered 3D points that may suffer from impairments in both geometry and attributes (e.g., color) during processing, resulting in perceptual degradation. To facilitate quality of experience (QoE)-oriented tasks (e.g., compression <cit.> and enhancement <cit.>), point cloud quality assessment (PCQA) has gained significant attention among researchers. PCQA can be achieved through subjective experiments or objective metrics. However, although subjective experiments can provide the ultimate prediction, they can be expensive in terms of time, cost and testing conditions <cit.>. Therefore, designing effective objective metrics has become a hotspot in recent research. Objective metrics can be categorized as full-reference (FR), reduced-reference (RR) and no-reference (NR) methods. FR and RR metrics require the entire original samples or partial features as a reference, which may not be readily available in most scenarios. Thus, we focus on NR metrics as they are designed for scenarios where the high-quality original point cloud is not available. §.§ Motivation Current NR metrics are mostly based on deep learning. The common strategy is to use well-designed deep neural networks to map the input point clouds into the feature space, and then regress the final scores using the obtained latent features <cit.>, which can be formulated as q = f(ϕ(x)), where x and q represent the input point cloud and final objective score, and ϕ(·) and f(·) represent the feature extraction and quality mapping operation. However, the performance of this architecture, especially in terms of generalization, is far from satisfactory. The main reason is that this paradigm ignores or weakens some important intermediate developments of subjective evaluation. The process of quality assessment entails the transformation between different domains. Based on the study of the human visual system (HVS), we know that vision begins with the cone cells in the retina. The layer of nerve cells transmits visual signals to the brain, culminating in the generation of the final quality perception <cit.>. We define the distribution of immediate visual stimuli of samples perceived by the HVS as the perception domain, and the distribution of final subjective quality scores as the quality domain.
The perception domain has high dimensionality due to the presence of massive visual information, while the quality domain is a hierarchically ordered space with a limited range typically from 0-5 or 0-10 depending on the settings of the subjective experiment <cit.>. The HVS does not acquire a straightforward mapping from the perception domain to the quality domain. Instead, it requires a training session before rating and the inherent experience to cultivate the prior knowledge, signifying the intrinsic relevance within each quality level <cit.>, as auxiliary information to facilitate domain transformation. Subsequently, the individual viewer can foster a basic judgment regarding the degradation degree, which, however, is insufficient for accurately representing a testing sample in the quality domain due to personal limitations and biases. Therefore, current subjective experiments incorporate confidence correction to refine this basic judgment based on averaging scores obtained from multiple participants, leading to the establishment of mean opinion score (MOS). Objective methods align with subjective perception in their shared objective of achieving domain transformation. However, NR-PCQA methods face a first problem stemming from the scattered distribution of training data in the perception domain, which hinders model fitting. Point clouds exhibit greater complexity in terms of geometry and attributes compared with images. Nevertheless, existing PCQA datasets, such as PointXR <cit.>, IRPC <cit.>, ICIP2020 <cit.>, M-PCCD <cit.>, SJTU-PCQA <cit.>, and WPC <cit.> typically consist of merely a few hundred distorted point clouds, resulting in only a few dozen samples for each distortion type. Considering the significant variability in content and distortion types, the available distorted point clouds are usually dispersedly scattered in the high-dimensional perception domain, resulting in a huge domain discrepancy with the ordered quality domain. The huge domain discrepancy poses challenges for domain transformation. The second problem arises from the disparity between subjective experiments, which require a physiological mechanism, and the current NR-PCQA frameworks that attempt to establish a direct mapping relationship between the perception domain and the quality domain. The HVS prefers to generate discrete semantic descriptions for perceived quality based on the prior knowledge <cit.>, using integers to represent the perceived quality level as shown in Table <ref>, such as “5" to describe “the distortion is almost imperceptible”. However, the final MOSs are typically floating numbers that are the statistic results obtained from multiple participants, e.g., at least 16 reliable subjective scores are required to calculate the MOS <cit.>. The integral part of the MOS signifies the fundamental quality level, referring to the discrete semantic descriptions, whereas the decimal part can be interpreted as a confidence degree, which naturally establishes an intermediate domain between the perception domain and the quality domain. We define this intermediate domain as the description domain which pertains to the feature distribution for discrete semantic descriptions. The clustering of perception domain features corresponding to quality-aware information gives rise to the generation of the description domain, and the elements in the quality domain serve as the fine-grained expression of the description domain features. 
However, most current objective methods neglect the expression of this description domain, resulting in neural networks needing to span a huge domain discrepancy and being more susceptible to content-aware information rather than quality-aware information, as shown in Fig. <ref>. The third problem is that most NR-PCQA methods utilize a projection-based backbone which converts point clouds to images with predefined resolutions. This choice is made due to its lower memory usage and faster inference speed when compared with consuming 3D schemes <cit.>. However, these projection images exhibit different perceptions compared with original point clouds, primarily due to information loss and masked distortions, limiting the performance potential of projection-based methods. §.§ Our Approach We propose a novel NR-PCQA method called Point Cloud Quality Assessment via Domain-relevance Degradation Description (D^3-PCQA) regarding the above problems. D^3-PCQA establishes the ignored description domain which decomposes the quality prediction into two stages. For better explanation, we set the generation of quality level as Stage-1 and the generation of confidence degree as Stage-2. First, to implement the proposed method, the training set is divided into a support set and a query set. The support set which is divided into five quality levels according to Table <ref> is used to establish the description domain, and the query set is used to train the quality prediction network which will be introduced in the following part. In Stage-1, we predict the quality level by learning a structured latent space, which reproduces the function of the description domain. We improve SCNN, a lightweight neural network backbone, to generate the perception domain features with a HVS-based projection method which represents point clouds by emulating the effect of observation distance on HVS. To establish the description domain, the perception domain features are first disentangled into the domain-invariant features by a proposed Residual Transformer Network. Then the required structured latent space is learned by promoting the regular distribution of the disentangled features using the support set with a series of constraints. This process exploits the intrinsic relevance among different samples of the same quality level (namely intra-domain relevance). The clustering centers corresponding to each quality level, which contain the representative feature for the local description domain, can be used as the anchor features to promote domain transformation. Finally, we measure the relevance between the query set samples and the five anchor features (namely inter-domain relevance) and map it into the quality level by a proposed classification network. In stage-2, we assess the confidence degree of the assigned quality level by measuring the feature relevance within the local description domain corresponding to a specific quality level. To do so, we utilize the perception domain features as information compensation. The measurement and mapping of feature relevance between the query set sample and the support set samples of determined quality level (which is yet another instance of intra-domain relevance) are approached as a regression problem, leading to the determination of the confidence degree. Finally, we obtain the final quality score by combining the quality level generated in Stage-1 with the confidence degree derived from Stage-2. 
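The decomposition underlying the two stages can be illustrated with a few lines of NumPy. The floor-based split into an integral quality level and a decimal confidence degree follows the wording above; other rounding conventions would be equally plausible.

```python
# A small illustrative sketch of the label decomposition the method builds on:
# a MOS y is split into an integer quality level y_L (the degradation
# description) and a decimal confidence degree y_R, and the final prediction
# recombines the two stages, q_hat = q_L + q_R. The floor-based split follows
# the "integral part / decimal part" wording in the text.
import numpy as np

def decompose_mos(mos: np.ndarray):
    level = np.floor(mos)          # Stage-1 target: discrete degradation level
    confidence = mos - level       # Stage-2 target: fine-grained confidence degree
    return level.astype(int), confidence

def combine(pred_level: np.ndarray, pred_confidence: np.ndarray):
    return pred_level + pred_confidence   # continuous quality score

mos = np.array([2.6, 4.1, 3.0])
y_L, y_R = decompose_mos(mos)              # -> [2, 4, 3], [0.6, 0.1, 0.0]
print(combine(y_L, y_R))                   # reconstructs the original MOS
```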
To elucidate the roles and functionalities of individual modules within the artificial neural network, we derive the pipeline of each module from a kernelized ridge regression model. §.§ Contributions The contributions of this paper are summarized as follows: * We propose a novel NR-PCQA method called D^3-PCQA to emulate the working mechanisms of subjective experiments. * We greatly improve the generalization ability for quality prediction by exploiting the intra-domain relevance to establish the description domain. * The proposed D^3-PCQA shows reliable performance and outstanding generalization ability. Additionally, further experiments demonstrate the model's scalability when integrated with 3D backbones. The rest of this paper is organized as follows. The related work is surveyed in Section <ref>. Section <ref> derives the function of each module from the solutions to a kernelized ridge regression problem. Section <ref> presents the network implementation of the proposed D^3-PCQA, with its performance evaluation given in Section <ref>. Finally, the conclusion is drawn in Section <ref>. § RELATED WORK This section reviews existing PCQA metrics. For FR-PCQA, Moving Picture Experts Group (MPEG) has applied point-to-point (p2point) <cit.>, point-to-plane (p2plane) <cit.> and PSNRyuv <cit.> in point cloud compression (PCC) standardization. Other point-wise metrics, such as those proposed in <cit.>,  <cit.> and <cit.>, have also been made available. Considering the geometry and color attributes simultaneously, Meynet et al. <cit.> proposed a metric that pools curvature statistics and color lightness together via optimally-weighted linear combination <cit.>. Viola et al. <cit.> suggested quantifying point cloud quality using color histograms. Alexiou et al. <cit.> incorporated four types of point cloud attributes into the form of SSIM <cit.>. Yang et al. <cit.> utilized color gradient to estimate point cloud quality based on graph signal processing. Zhang et al. <cit.> proposed a HVS-based multi-scale method that can be integrated into several PCQA metrics. Javaheri et al. <cit.> developed a point-to-distribution metric to measure point cloud quality. Another approach for FR-PCQA is to project the 3D point cloud onto a number of 2D planes and then represent point cloud quality using the weighted indices of these image planes. Torliget al. <cit.> proposed real-time voxelization and projection techniques to present point clouds and evaluated IQA metrics for PCQA. Alexiou et al. <cit.> measured the distortion using the angles between tangent planes perpendicular to point normals. Yang et al. <cit.> combined global and local features of projection planes to estimate point cloud quality. Javaheri et al. <cit.> proposed a joint geometry and color projection and applied 2D quality metrics to reflect point cloud quality. For RR-PCQA, Viola et al. <cit.> inferred point cloud quality using statistical information of geometry, color and normal vector. Q. Liu et al. <cit.> and Y. Liu et al. <cit.> estimated quality using compression parameters to guide PCC strategy with certain rate constraints. The above-mentioned PCQA metrics are categorized into FR and RR metrics, which necessitate both distorted point clouds and their reference versions as inputs. However, acquiring the reference version can be challenging in practical scenarios, prompting the development of NR metrics. Like in FR-PCQA, NR metrics can be performed either over the 2D projection of point clouds or directly on the raw data. 
For methods conducted over point cloud projection, Tao et al. <cit.> employed multi-scale feature fusion to predict the quality of point clouds. Liu et al. <cit.> proposed to leverage distortion classification information as an auxiliary feature to assist in the training of the network. Yang et al. <cit.> bridged conventional images and point cloud projection via domain adaptation to expand the scale of trainable point cloud data. Fan et al. <cit.> and Zhang et al. <cit.> integrated the point cloud projection into a video, followed by the utilization of video quality assessment methods for the purpose of evaluating the quality of point clouds. For methods over raw 3D data, Liu et al. <cit.> adopted an end-to-end sparse convolution network to learn the quality representation of point clouds. Shan et al. <cit.> extracted anti-perturbation features for point clouds using a graph neural network. In addition, other algorithms have been developed that leverage both point cloud projection and raw 3D data to extract integrated features, as exemplified by the work of Zhang et al. <cit.>. Most existing NR-PCQA methods adopt a uniform architecture of feature extraction using a range of techniques, followed by regression into score values that fail to accurately emulate the intricate human visual mechanism. In this work, we aim to introduce a fresh approach to the NR-PCQA problem by mapping extracted features to align with human perception. § PROBLEM FORMULATION In this section, we illustrate each module of D^3-PCQA from the perspective of a kernelized ridge regression to better understand the architecture of the proposed method. Given the training samples {x_i,y_i} _i = 1^N, we can formulate NR-PCQA as a generalized linear ridge regression problem: w = min_w ∑_i = 1^N (y_i - w^Tϕ (x_i))^2 + λ/2 w ^2, where x_i is the i-th distorted sample, y_i is the continuous quality score, ϕ(·) represents the nonlinear feature mapping; N is the total number of training samples. Besides, w is the regression vector, and λ signifies the trade-off parameter for the regularization term. The solution to (<ref>) is the weighted sum of the training samples (i.e., w = ∑_i=1^N α _i^ϕ (x_i)), according to the well-known representer theorem <cit.>. In order to establish a new intermediate domain between the perception domain and the quality domain, we may express the label y and response r=w_^Tϕ (x) in (<ref>) as the summation of an integer quality level and decimal degree of confidence. As a result, we can decompose the original NR-PCQA task aimed at producing a continuous objective score into the combination of a multi-class classification problem that finds the integer quality level corresponding to the degradation quantification and a regression problem that generates the degree of confidence. Mathematically, we have y = y_L + y_R, r = w_^Tϕ (x) = r_L + r_R, where y_L and r_L indicate the integer quality level of y and r, and y_R and r_R represent the decimal degree of confidence of y and r. §.§ Stage-1: Semantic Degradation Description Prediction The degradation description defined by BT.500 <cit.> (Table <ref>) of a testing sample x_i can be determined by classifying y_i,L∈{1,2,3,4,5}. In order to establish the description domain corresponding to y_i,L, the training samples are grouped according to the distribution of their normalized quality scores. 
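As a point of reference for the derivations that follow, the sketch below shows the textbook dual (representer-theorem) solution of a kernelized ridge regression in NumPy. The RBF kernel and the toy data are our own illustrative choices, and the paper itself realizes these operations with neural networks rather than a closed-form solve.

```python
# A compact sketch of kernelized ridge regression. By the representer theorem
# the regressor is a weighted sum over training samples, so only the kernel
# matrix is needed; the dual coefficients alpha = (K + lambda*I)^{-1} y are
# the standard closed-form solution, shown here only to ground the
# formulation above.
import numpy as np

def rbf_kernel(A: np.ndarray, B: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # k(a, b) = exp(-gamma * ||a - b||^2), a positive definite kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X: np.ndarray, y: np.ndarray, lam: float = 0.1) -> np.ndarray:
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual weights alpha

def predict(X_train: np.ndarray, alpha: np.ndarray, X_test: np.ndarray) -> np.ndarray:
    # response r(x) = sum_i alpha_i k(x_i, x) -- a weighted sum over training samples
    return rbf_kernel(X_test, X_train) @ alpha

# Toy usage with random "perception domain" features and MOS-like labels
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 8)), rng.uniform(1.0, 5.0, size=50)
alpha = fit_kernel_ridge(X, y)
print(predict(X, alpha, X[:3]))
```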
Correspondingly, the summands in (<ref>) can be organized in groups and the quality level can then be determined by concatenated ridge regression models: w_L = min_w_L∑_j = 1^N/K[ ∑_i = 1^K (y_i,j,L - w_j,L^Tϕ (x_i,j))^2 + λ/2w_j,L^2], where x_i,j signifies the i-th samples in the j-th quality level group, and K is the number of training samples for each quality level. w_L is defined as w_L = {w_j,L}_j=1^N/K. Note that in (<ref>), the regressand has been changed to y_i,j,L, the class label corresponding to the degradation description of y_i,j. In other words, it is now a multi-class classification problem and we aim at identifying the sample's quality level (in terms of y_i,j,L∈{1,2,3,4,5}). The regression vector for the j-th quality level is, again from the representer theorem <cit.>, w_j,L = ∑_i = 1^K α _i,j,L^ϕ (x_i,j). Here, x_i,j is the i-th distorted sample with its quality level equal to j. Using (<ref>) transforms (<ref>) into α_L = min_α_L∑_j = 1^N/K[∑_i = 1^K (y_i,j,L - ∑_k = 1^K α _k,j,Lϕ^T (x_k,j)ϕ (x_i,j) )^2 + λ/2∑_i = 1,k = 1^K α _i,j,Lα _k,j,Lϕ^T (x_k,j)ϕ (x_i,j)]. Let k(x_k,j,x_i,j) = ϕ^T (x_k,j)ϕ (x_i,j) be the positive definite kernel function. As a result, for sample x_i,j, the response under a particular quality level j would be r_i,j,L = ∑_k = 1^K α _k,j,Lk(x_k,j,x_i,j) = ∑_k = 1^K α _k,j,Lϕ^T (x_k,j)ϕ (x_i,j). In other words, w_j,L = ∑_k = 1^K α _k,j,Lϕ (x_k,j) can be considered as the common feature extracted from the samples with the j-th quality level, which can characterize a representative feature for the local description domain corresponding to a specific degradation description. Let ϕ_j (x_m) = w_j,L = ∑_k = 1^K α _k,j,Lϕ (x_k,j). We call ϕ_j (x_m) the anchor feature deriving from the intra-domain relevance of a specific quality level. The response in (<ref>) can be rewritten as r_i,j,L = ϕ^T_j (x_m)ϕ (x_i,j) = k_j(x_m,x_i). The kernel function k_j(x_m,x_i) now measures the inter-domain relevance between an input distorted sample x_i and the anchor feature ϕ_j (x_m) for j-th quality level. Therefore, we indeed utilize the relevance measurement to classify the degradation description for quality assessment. The kernel function k_j(x_m,x_i) in (<ref>) may be evaluated using the following quadratic form to further take into account the interaction between elements in the feature vectors ϕ_j(x_m^) and ϕ (x_i,j^): k'_m,i,j = ϕ^T_j (x_m^)β_j,Lϕ (x_i,j^), as long as the trainable matrix β_j,L is at least positive semidefinite. In the alternating iteration method, the method in  <cit.> can be invoked to approximate the estimated β_j,L using the nearest (in terms of Frobenius norm ||·||_F) positive semidefinite matrix. Substituting (<ref>) into (<ref>), we obtain the following optimization problem α_L ,β_L = min_α_L ,β_L∑_j=1^N/K[∑_i=1^K (y_i,j,L - ϕ^T_j (x_m^)β_j,Lϕ (x_i,j^) )^2 + λ _1/2∑_i = 1,k = 1^K α _i,j,Lα _k,j,Lk'_i,k,j + λ _2/2||β_j,L||^2_F ]. The solution to (<ref>) can be obtained by leveraging the anchor feature through evaluating ϕ_j (x_m) = ∑_k = 1^K α _k,j,Lϕ (x_k,j), j=1,2,...,5. For an unseen distorted sample x_i, its degradation description can then be determined using r_i,j,L = ϕ^T_j (x_m^)β_j,Lϕ (x_i^). The value of j corresponding to the largest r_i,j,L would be output as the quality level of x_i. §.§ Stage-2: Confidence Degree Prediction We can adopt the same nonlinear ridge regression framework in (<ref>) to map the coarse-grained quality level obtained in Stage-1 into the accurate quality score in the quality domain. 
Specifically, with slight abuse of notations, we aim at solving α_R ,β_R = min_α_R ,β_R∑_j = 1^N/K [∑_i = 1^K (y_i,j,R - ∑_k = 1^K α _k,j,Rϕ^T (x_k,j^)β_j,Rϕ (x_i,j^) )^2 + λ _1/2∑_i = 1,k = 1^K α _i,j,Rα _k,j,Rk'_i,k,j + λ _2/2||β_j,R||^2_F ]. The regressand y_i,j,L, which is the class label for quality level prediction, is replaced with y_i,j,R, which is the decimal degree of confidence. Note that the main difference is that the anchor feature ϕ_j (x_m) for j-th quality level is not calculated first. Instead, the entire formula is employed to measure the intra-domain relevance between ϕ (x_i,j) and sample features of a specific quality level. This is because measuring feature relevance is critical for fine-tuning. Since here both α_R and β_j,R contribute to mapping the relevance, β_j,R in (<ref>) is fixed to be identity matrix of an appropriate size to reduce the computational burden. We thus only need to find α_R such that the estimated decimal degree of confidence for an unseen sample x_i, whose quality level was found to be equal to j, can be calculated using ∑_k = 1^K α _k,j,Rϕ^T (x_k,j^)ϕ (x_i^). Combining this result with the obtained quality level yields the final estimate of the continuous quality score for a testing sample. §.§ Functions of Derived Modules Based on the above derivation, several modules can be identified from the response terms of (<ref>) and (<ref>), as illustrated in Fig. <ref>. The proposed method includes the following modules: * Feature Extraction: The original feature of sample x_i is extracted from a feature extraction backbone, denoted as ϕ(x_i). * Anchor Feature Generation: The description domain is established, where the anchor feature for j-th quality level is generated by measuring the intra-domain relevance among sample features in the perception domain, which is ϕ_j (x_m) = ∑_k = 1^K α _k,j,Lϕ (x_k,j), j=1,2,...,5. * Relevance Mapping: In Stage-1, the quality level is determined by measuring the inter-domain relevance between the sample feature and five anchor features, i.e., r_i,L= max_j{ϕ^T_j (x_m^)β_j,Lϕ (x_i^), j=1,2,...,5}. In Stage-2, the confidence degree is determined by measuring the intra-domain relevance among samples of the determined quality level, which is r_i,R = ∑_k = 1^K α _k,j,Rϕ^T (x_k,j^)ϕ (x_i^). * Quality Combination: The quality level and the confidence degree are combined to generate the continuous quality score, given as r_i = r_i,L + r_i,R. § NETWORK IMPLEMENTATION We implement the equivalent functions for the response terms of (<ref>) and (<ref>) using neural networks. The proposed D^3-PCQA aims at expressing the ignored description domain for performance improvement, whose overall architecture is depicted in Fig. <ref>. The proposed framework consists of several modules, including the feature extraction module (symbolized as ϕ(·)), the anchor feature generation module, the relevance mapping module (denoted as G( · ) and H( · )) and the quality combination module. Among them, the anchor feature generation module involves the feature disentanglement (represented by Φ ( · )), the description domain establishment, and the feature aggregation (indicated by F( · )). In the following subsections, we will detail each of these modules. To implement the proposed method, inspired by <cit.>, the training samples are partitioned into a support set and a query set. We use the support set to establish the description domain and to generate the anchor features. Then, we train the quality prediction network using the query set. 
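A minimal sketch of this partition is given below (NumPy); the 80/20 split ratio and the MOS-rounding rule used to bin support samples into the five levels are assumptions made only for illustration.

```python
# A minimal sketch of the support/query partition assumed by the network
# implementation: support samples are binned into the five quality levels of
# the degradation description to build the description domain, while the
# remaining query samples train the prediction network. Split ratio and
# MOS-to-level binning are illustrative assumptions.
import numpy as np

def split_support_query(features, mos, support_ratio=0.8, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(mos))
    n_support = int(support_ratio * len(mos))
    sup_idx, qry_idx = idx[:n_support], idx[n_support:]

    # Bin support samples into quality levels 1..5 from their (1-5 scaled) MOS
    levels = np.clip(np.round(mos[sup_idx]).astype(int), 1, 5)
    support = {lvl: features[sup_idx[levels == lvl]] for lvl in range(1, 6)}
    query = (features[qry_idx], mos[qry_idx])
    return support, query

rng = np.random.default_rng(1)
feats, mos = rng.normal(size=(200, 256)), rng.uniform(1.0, 5.0, size=200)
support, (q_feats, q_mos) = split_support_query(feats, mos)
print({lvl: len(v) for lvl, v in support.items()})   # samples per quality level
```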
We illustrate the transformation between different domains of the output of each module in Fig. <ref>. The feature extraction module ϕ(·) extracts the scattered perception domain features that reflect the visual stimuli. The feature disentanglement module Φ ( · ) extracts the description domain-related components from the perception domain features, and then the anchor feature generation using F( · ) and the quality level prediction using G( · ) can be performed in the description domain which is established with a series of constraints. Finally, leveraging the perception domain features, H( · ) predicts the confidence degree for generating the elements in the quality domain, i.e., the continuous quality scores. The network implementation also takes into account the functions of the regularization terms in (<ref>) and (<ref>). The regularization term which is intended to reduce overfitting is achieved through weight decay during network training. Meanwhile, the regularization term for ensuring that the kernel matrix is positive semidefinite, thereby guaranteeing that the output is nonnegative, is accomplished through the use of imposed activation functions. To provide better illustration, we use the established description domain to decompose our proposed model into two stages, i.e., Stage-1 to predict the quality level and Stage-2 to generate the confidence degree. Note that although we have staged our demonstration for clarity, our proposed method is an end-to-end framework, and the two stages mutually reinforce each other. §.§ Feature Extraction Module The proposed D^3-PCQA architecture is versatile and not limited to either 2D or 3D features. In this work, we focus on the projection-based backbone to extract the perception domain features, which consumes less memory and has a faster inference speed compared with the 3D-based schemes. Besides, in Section <ref>, we also test 3D-based backbones to demonstrate the generalization ability and scalability of our method. In this subsection, we introduce some important visual characteristics to improve the 2D backbone to obtain projection images that are consistent with human perception for feature extraction. §.§.§ Simulation of HVS Observation Mechanism The information loss in immediate point cloud projection derives from the mismatch between the point cloud size and projection plane size. Additionally, 2D encoders usually require regular-size input images, which are typically much smaller than the point cloud size for mini-batch training. Existing methods, such as the cropping method in <cit.> and the folding method in <cit.>, have been used to maintain consistent input sizes across samples, but they distort the whole perception and introduce extra distortions. To address these issues, our work calls upon the HVS-based multi-scale representation of point clouds. Specifically, projecting one point cloud onto one small viewing window can be simulated as how human eyes perceive distant point clouds. As the viewing distance increases, our perception is affected by several visual phenomena, i.e., scale reduction, loss of detail and blurring. These factors contribute to the construction of point cloud presentation at different scales. According to <cit.>, point clouds with a relatively small scale, resulting from long observation distances, still match well with subjective scores. 
Based on this, reducing the original point clouds to a small scale based on the HVS mechanism can contribute to the generation of projection images with a regular and small size. By simulating the above three visual phenomena when human eyes observe point clouds from a distance, a new 2D representation of point clouds is established. The generation process of this new 2D representation consists of three steps, i.e., region rescaling, projection and low-pass filtering. Region Rescaling. In order to map the point cloud P=[P^C, P^O]∈R^N×6 onto the 2D plane Z, all points of the point cloud are initially moved towards the center of the bounding box until matching the size of the projection images, thereby simulating the size change with increased viewing distance. We define this operation as region rescaling, denoted as Ψ (·). The geometry attribute P^C of P is modified, while the color attribute P^O remains unchanged. The resulting point cloud can be represented by P_1 = [Ψ(P^C),P^O] ∈R^N × 6. To ensure that the points of the point cloud and the pixels of the projection image align, the points are converted to voxels by rounding the coordinates. The resulting point cloud can be signified as P_2 = [Γ(Ψ(P^C)),P^O] ∈R^N × 6, where Γ is the round-off operation which leads to the blurring in geometrical coordinates. In this process, the point density ρ of the point cloud will be changed, which is defined as the number of points per unit volume. Specifically, for a local patch P_s located within the corresponding sphere region N(s, R_s), the point density can be calculated as ρ = | P_s|/4/3π R_s^3. Given the bounding box size of the point cloud, [[X_min,X_max],[Y_min,Y_max],[Z_min,Z_max]], and the projection image size, [[x_min,x_max],[y_min,y_max]], we can calculate the scaling factor, symbolized as δ, using the equation δ = min ([| Δ x |,| Δ y |])/max ([| Δ X |,| Δ Y |,| Δ Z |]), where |Δ x|=x_max-x_min, and Δ x and Δ y are usually the same. The point density of the resulting point cloud can be denoted as ρ ' = | P_s|/4/3π(δ·R_s)^3. Projection. The point cloud P_2 is mapped orthographically onto a pre-determined 2D plane Z to generate a texture projection image I_1 ∈N^W × H × 3 and a depth projection image I_2 ∈N^W × H × 3 where W=x_max-x_min and H=y_max-y_min. Each pixel i^1 in I_1 is filled by the color attribute of the corresponding point in the point cloud, while each pixel i^2 in I_2 is filled by the geometry attribute of the corresponding point in the point cloud. Due to the scaling down of the point cloud during the projection process, some points corresponding to the same pixel are discarded, resulting in loss of detail. Then inspired by <cit.>, we splice together six different perspectives to generate a multi-perspective image to address the uneven information density across different perspectives, as illustrated in Fig. <ref>. Low-pass Filtering. To further simulate the color blurring as the scale decreases, the blurring operation is applied to the projection images. To introduce the additional blurring distortion, we use a low-pass filtering operation defined as f(·). The texture projection image I_1 is filtered into I'_1, while the depth projection image I_2 is filtered into I'_2. The filtered projection image can be represented by I'_1 = f(I_1) ∈N^W × H × 3, and I'_2 = f(I_2) ∈N^W × H × 3. The low-pass filter can be implemented by averaging the neighborhood N(x,R) within a certain radius of R which varies linearly with the change in point density: f(x) = 1/|N(x,R)|∑_i ∈ N(x,R)x_i s.t. 
R = k(τ - ρ') if ρ' < τ, and R = 0 if ρ' ≥ τ, where τ is an empirical critical point density for the water-tight surface, which represents the threshold at which a change in point density no longer affects subjective perception. When the point clouds are normalized to a consistent scale and the sphere region N(s, R_s) is sufficiently large to encompass the complete point cloud, the radius R can be approximated as R = k(τ - ρ') = k(|P_τ|/(4/3π R_s^3) - |P'|/(4/3π R_s^3)) = k'(|P_τ| - |P'|) when |P'| < |P_τ|, and R = 0 when |P'| ≥ |P_τ|. In other words, the degree of introduced blurring can be determined by the change in point number caused by the distortions. When |P'| < |P_τ|, (<ref>) can be rewritten as R = k'(|P_τ| - |P'|) = k'|P_τ|(1 - |P'|/|P_τ|) = R_m(1 - |P'|/|P_τ|), where R_m indicates the filter radius corresponding to the blurring distortion. §.§.§ Subjective Experiment We conduct a compact subjective experiment to demonstrate the validity of the proposed HVS-based point cloud projection. Specifically, point clouds with down-sampling distortion are selected from the SJTU-PCQA <cit.> dataset for the subjective experiment. These distorted point clouds are projected onto planes of size 224 × 224 from six perspectives using two projection methods. The first method involves an immediate projection without any additional operations, while the second method involves the three steps described in Sec. <ref>. The resulting six-perspective projection images are spliced together to generate a multi-perspective image for the subjective experiment <cit.>. To evaluate the projection quality, these projection images are mixed and shuffled and then evaluated by human viewers using the subjective experiment process defined in BT.500 <cit.>. The correlation performance, calculated from the scores obtained in the subjective experiment and the ground truth MOS, is presented in Table <ref>. Additionally, defining the scoring error as the absolute difference between the obtained score and the MOS, a statistical analysis of the scoring errors is conducted, and the results are shown in Table <ref>. The scale of the obtained scores and MOS ranges from 1 to 10. According to Table <ref> and Table <ref>, it is evident that the proposed HVS-based projection correlates better with subjective perception than the immediate projection. §.§.§ Backbone After performing the HVS-based projection for point clouds, we utilize a backbone (denoted as ϕ(·)) to extract the raw features of these projections. These raw features represent the immediate visual stimuli perceived by the HVS. We call them the perception domain features. Although the feature extraction backbone is not the main focus of this paper, it is required to be lightweight and to extract robust features for PCQA. Following the work in <cit.>, we propose a modified lightweight network based on <cit.>, as shown in Fig. <ref>. Each convolutional layer in Fig. <ref> is characterized by the input channel, output channel, kernel size, stride and padding, and is followed by batch normalization and a nonlinear activation function (ReLU).
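To summarize the preprocessing of this module, the following sketch (NumPy/SciPy) walks through region rescaling, round-off, orthographic projection into texture and depth images, and the density-driven blur R = R_m(1 - |P'|/|P_τ|). The single front view, the 224 × 224 window and the uniform box blur are simplifications of ours rather than the exact pipeline.

```python
# An illustrative sketch of the HVS-based projection described above:
# (i) region rescaling of the geometry, (ii) round-off so points align with
# pixels, (iii) orthographic projection into texture/depth images, and
# (iv) a blur whose radius grows with the number of lost points,
# R = R_m * (1 - |P'| / |P_tau|). Simplified to a single front view.
import numpy as np
from scipy.ndimage import uniform_filter

def hvs_projection(xyz, rgb, img_size=224, R_m=10, n_ref_points=None):
    # (i) region rescaling: shrink the bounding box to the projection window
    mins, maxs = xyz.min(0), xyz.max(0)
    delta = (img_size - 1) / (maxs - mins).max()
    scaled = (xyz - mins) * delta

    # (ii) round-off: voxelize coordinates so points align with pixels
    vox = np.round(scaled).astype(int)

    # (iii) orthographic projection onto the xy-plane, z kept as depth;
    #       points are written in increasing z so the largest z wins per pixel
    texture = np.zeros((img_size, img_size, 3))
    depth = np.zeros((img_size, img_size))
    order = np.argsort(vox[:, 2])
    u, v, z = vox[order, 0], vox[order, 1], vox[order, 2]
    texture[v, u] = rgb[order]
    depth[v, u] = z

    # (iv) blur radius driven by the loss of points (masked-distortion surrogate)
    n_ref = n_ref_points if n_ref_points is not None else len(xyz)
    n_visible = np.count_nonzero(depth)
    radius = max(R_m * (1.0 - min(n_visible / n_ref, 1.0)), 0.0)
    if radius > 0:
        texture = uniform_filter(texture, size=(int(radius) * 2 + 1,) * 2 + (1,))
    return texture, depth

pts = np.random.rand(5000, 3)
col = np.random.rand(5000, 3)
tex, dep = hvs_projection(pts, col)
print(tex.shape, dep.shape)
```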
§.§ Anchor Feature Generation Module Referring to (<ref>), the anchor features can be considered the common features for a specific quality to facilitate the domain transformation and can be extracted from a weighted combination of sample features that share the same degradation description, denoted as ϕ_j (x_m) = ∑_k = 1^K α _k,jϕ (x_k,j)=W_αϕ_j (x^(k)), where j signifies the j-th quality level, K is the size of a mini-batch, ϕ_j (x^(k))∈R^K ×d_f represents the sample features of the j-th quality level, and d_f is the feature dimension. This operation requires extracting common characteristics but cannot handle the scattered distribution in the perception domain features. Therefore, in the network implementation, we first disentangle the perception domain features using a proposed Residual Transformer Network into the domain-invariant features. Then a structured latent space is learned to establish the description domain by regularizing the distribution of the disentangled features. Finally, a series of anchor features can be aggregated from the learned structured latent space to drive domain transformation for quality level prediction. To learn the anchor features, we assume that the samples in the support set follow a uniform distribution across various impairment scales, and we divide the support set into 5 groups corresponding to five-grade degradation descriptions <cit.> based on their MOS. §.§.§ Feature Disentanglement The attainment of anchor features necessitates the structured distribution within a learned latent space which reproduces the function of the description domain. However, directly restricting the distribution of the perception domain feature ϕ (x^(k)), which is extracted from the backbone, may not yield the desired results. As explained earlier, ϕ (x^(k)) represents the differentiated stimuli perceived by the HVS and scatters in the perception domain. Thus, we propose a Residual Transformer Network to disentangle the perception domain features into the domain-invariant features and the domain-specific features. The domain-invariant features which capture the cross-sample common characteristics for a specific quality level are utilized for description domain establishment. Domain-invariant Feature Disentanglement. The perception domain feature ϕ (x^(k)) is considered to contain two components, i.e., the domain-invariant features that are shared among samples located in the local description domain corresponding to the same quality level, and the domain-specific features related to the specific manifestation from different source domains. The former exists based on the common characteristics of the same quality level. The latter, such as the specific texture information of the sample itself, is responsible for the huge discrepancy among samples in the perception domain. Hence, the domain-invariant features are first extracted before regularizing the feature distribution to learn the structured latent space. To fully consider the interaction between samples, inspired by <cit.>, we employ a Residual Transformer <cit.> Network to combine features. This module projects f_p^(k) = ϕ (x^(k)) ∈R^K ×d_f into queries Q ∈R^K ×d, keys K ∈R^K ×d and values V ∈R^K ×d using linear projections, where Q= ϕ (x^(k))W_q, K= ϕ (x^(k))W_k, and V= ϕ (x^(k))W_v. To compute the weighted sum of the values v as the combined output, the scaled dot-product is applied to obtain the cross-sample attention weight: α_A = softmax (QK^T/√(d)), which generates the similarity weight based on K and Q. 
The weighted feature as output is given by Y =α_AV -1/K[1] ϕ (x^(k)) =α_A ϕ (x^(k))W_v -1/K[1] ϕ (x^(k)) ≈ (α _A-1/K[1]) ϕ (x^(k)), where [1] signifies the matrix with all elements equal to 1, and K is the length of the mini-batch. ϕ and W_v both map the single features, leading to the approximate ignorance of W_v <cit.>, and (<ref>) achieves the interaction between different features required by (<ref>). The proposed Residual Transformer Network can extract the domain-invariant features across samples by measuring the similarity between f_p and other features in the mini-batch, thus leveraging the intra-domain relevance among samples with the same quality level. Additionally, during training, the shuffle operation has been implemented to introduce dynamic mini-batches, thereby diminishing the reliance of output features on specific content of individual samples as the training progresses. Module Structure. This module denoted as Φ( · ) can be formulated as y_0 = [ ϕ(x^(1)),ϕ(x^(2)), … ,ϕ(x^(K))], Q_i = K_i = V_i = FC( y_i - 1), y_i^' = MSA( Q_i,K_i,V_i), y_i = FFN( y_i^'), i = 1, … ,l, [ f_E_1,f_E_2, … ,f_E_K] = y_l - mean(y_0), f_E = [ f_E_1,f_E_2, … ,f_E_K], where y_0 represents the input mini-batch containing K extracted perception domain features of the same quality level, f_E = Φ(ϕ(x^(k))) gives the output domain-invariant features, and l is the number of layers which equals 2 in this work. FC means the fully connected layer, and FFN means the feed-forward network <cit.>, which corresponds to the feature mapping in (<ref>). MSA signifies the multi-head self-attention module <cit.> which refers to the feature interaction in (<ref>) and (<ref>). §.§.§ Description Domain Establishment Since each quality level shares the same degradation description, there exists intra-domain relevance among samples within the same quality level. Therefore, we make an assumption that the extracted domain-invariant features for different quality levels can exhibit a structured distribution in a latent space. Such a structured latent space is learned in the support set by imposing a series of constraints to reproduce the function of the description domain, which is crucial for improving the generalization ability. Distribution Regularization. We expect the disentangled features from the same quality level to cluster together in the latent space as they refer to the same abstract semantic characteristics. For instance, features of score 5 represent “imperceptible distortion”, and features of score 1 represent “seriously annoying distortion”. Conversely, the features from different quality levels should be mutually exclusive. To enforce this, we use a normalized temperature-scaled cross-entropy loss <cit.> to restrict the distribution of disentangled features, i.e., L_dis = 1/K∑_i = 1^K 1/|P(i)|∑_j ∈ P(i) - logexp( sim( f_i,f_j)/τ)/∑_k = 1^K 1_k iexp( sim( f_i,f_k)/τ) , where f_i=Φ(ϕ (x_i)) represents the extracted domain-invariant features, sim( f_i,f_j) = f_i^Tf_j/ f _2f_j_2 is to measure the similarity between f_i and f_j, K is the number of point clouds in the mini-batch, 1 is the indicator function, τ is the temperature parameter, P(i) is a set containing point cloud indices belonging to the same quality level as y_i (but excluding the index i) and |P(i)| is its cardinality. By imposing the distribution regularization, the disentangled features that are causally associated with the degradation description can be regularly distributed in the learned latent space. Structured Ranking Regularization. 
To further regularize the learning of the latent space, we utilize the rank information among different samples. In quality assessment, accurate ranking is more important than precise classification. For example, a point cloud with a score of 2.6 can be classified into either level 2 or level 3, but it is important to place it between samples with scores of 2 and 3. The ranking performance can be measured by the Spearman rank order correlation coefficient (SROCC), which is defined as follows: SROCC( q,q̂) = 1 - 6∑_i = 1^L (m_i - n_i)^2/L(L^2 - 1), where q is the true MOS, q̂ is the predicted quality score, L is the number of distorted point clouds, m_i is the rank of q_i in the MOS, and n_i is the rank of q̂_i in the predicted quality scores. SROCC can also be computed from the Pearson linear correlation coefficient (PLCC): SROCC( q,q̂) = PLCC( R( q),R( q̂)), where R represents the rank function, defined as R(x)=∑_x_i ∈ x^(k) H(x-x_i), with H(x) the Heaviside step function. PLCC is defined as PLCC( q,q̂) = ∑_i=1^L(q_i - q_m)(q̂_i - q̂_m)/√(∑_i=1^L(q_i - q_m)^2)√(∑_i=1^L(q̂_i - q̂_m)^2), where q_m and q̂_m are the arithmetic means of the MOS q and the predicted scores q̂. Given a mini-batch, (<ref>) can be transformed into the loss function loss_plcc = ∑_k = 1^K ( q_k - q_m)( q̂_k - q̂_m)/√(∑_k = 1^K ( q_k - q_m)^2)√(∑_k = 1^K ( q̂_k - q̂_m)^2), where K is the length of the mini-batch. Then the structured ranking loss can be denoted as ℒ_rank = - loss_srocc( q^(k), q̂_L^(k)) = - loss_plcc( R( q^(k)), R( q̂_L^(k))), where q̂_L^(k) gives the predicted quality levels across the mini-batch, q^(k) indicates the ground truth MOS, and R represents the rank function. Existing methods such as <cit.> have utilized PLCC as the loss function because loss_plcc is differentiable. However, the required loss_srocc is not differentiable due to the Heaviside step function. We utilize a constrained linear program, inspired by Google Research's fast differentiable sorting <cit.>, to approximate the rank function in (<ref>), denoted as R_ε Q(x) = P_ε Q(-x, ρ) = P_Q(-x / ε, ρ), with P_Q(z, w) = argmax_μ∈𝒫(w) [⟨z, μ⟩ - Q(μ)] = argmin_μ∈𝒫(w) 1/2‖μ - z‖^2, where ρ=(n, n-1, …, 1), argmax_μ∈𝒫(w)⟨z, μ⟩ is the linear program, Q(μ)=1/2‖μ‖^2 is the quadratic regularization, and 𝒫(w)=conv({w_σ: σ∈Σ}) ⊂ℝ^n represents the convex hull of the permutations of w. §.§.§ Feature Aggregation After enforcing the learned latent space to exhibit a structured distribution, the clustering center characterizes the local description domain corresponding to a specific quality level and can serve as an anchor feature to distinguish different quality levels. These anchor features provide auxiliary information beyond the perception domain, thereby helping sample features achieve domain transformation more easily. To generate the anchor feature, the regularized domain-invariant features from samples of the same quality level are aggregated to further reduce dependence on sample characteristics. This module, denoted as F(·), can be formulated as f_R = 1/K∑_k^K Φ (ϕ (x^(k))) ≈ W'_αϕ (x^(k)) with W'_α = 1/K e(α _A-1/K[1]), where e is the unit vector and f_R is the obtained anchor feature for a specific quality level. (<ref>), which exhibits a similar formulaic expression to that of (<ref>), illustrates that the network implemented for generating the anchor features aggregates the common features for a specific quality level.
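The two regularizers that shape the latent space can be sketched compactly in PyTorch. Note that the pairwise-sigmoid soft rank below is a simple differentiable stand-in of our own choosing rather than the constrained-linear-program sorting used above, and the temperature values are placeholders.

```python
# A condensed sketch of the two latent-space regularizers: a supervised
# NT-Xent-style distribution loss that pulls together features sharing a
# quality level, and a ranking loss computed as a negative PLCC over soft
# ranks. The pairwise-sigmoid soft rank is a simplified surrogate, not the
# fast-sorting linear program used in the paper.
import torch
import torch.nn.functional as F

def distribution_loss(feats, levels, tau=0.1):
    f = F.normalize(feats, dim=1)
    sim = f @ f.t() / tau                                    # scaled cosine similarities
    self_mask = torch.eye(len(f), dtype=torch.bool, device=f.device)
    pos_mask = (levels[:, None] == levels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float('-inf')),
                                     dim=1, keepdim=True)
    pos_count = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask.float()).sum(1).div(pos_count).mean()

def plcc(a, b, eps=1e-8):
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def soft_rank(x, beta=10.0):
    # differentiable rank surrogate: R(x_i) ~ sum_j sigmoid(beta * (x_i - x_j))
    return torch.sigmoid(beta * (x[:, None] - x[None, :])).sum(1)

def ranking_loss(pred, mos):
    return -plcc(soft_rank(pred), soft_rank(mos))            # negative "soft SROCC"

feats = torch.randn(16, 256, requires_grad=True)
levels = torch.randint(1, 6, (16,))
mos = torch.rand(16) * 4 + 1
pred = feats.mean(dim=1)                                     # stand-in predicted scores
loss = distribution_loss(feats, levels) + ranking_loss(pred, mos)
loss.backward()
```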
§.§ Relevance Mapping Module for Stage-1 §.§.§ Inter-domain Relevance Mapping Referring to the response term in (<ref>), the inter-domain relevance between the testing sample feature and five anchor features is measured and then mapped into the quality level, denoted as r_i,L= max_j{ϕ^T_j (x_m^)β_j,Lϕ (x_i^), j=1,2,...,5}, where the relevance is measured through element-wise multiplication and is mapped using the matrix β_j,L in the alternating iteration method. However, this mapping operation, denoted as F_1(A, B) =(A ⊙ B)× W_1 where ⊙ represents the element-wise multiplication, only maps the relevant results and cannot make full use of the entire information from the samples and the description domain. In contrast, we use a network to perform both functions: relevance measurement and relevance mapping. To make full use of the anchor features derived from the support set, we concatenate them with the testing sample features from the query set. This compensates the sample feature with the information in the description domain. Subsequently, we use a mapping network to measure the inter-domain relevance between the testing sample and five anchor features and mapped it into the probability scores of each quality level that the testing samples belong to. The quality levels of testing samples can be determined by the one with maximum. In fact, we demonstrate that the proposed network operation denotes as F_2(A, B) =[A ; B]× W_2 is a special case of the operation F_1 in (<ref>) (i.e. Theorem <ref>). For F_1(A, B) =(A ⊙ B)× W_1, F_2(A, B) =[A ; B]× W_2, if W_1=[W_2./B , W_2./A], F_1=F_2^T. Substituting W_1=[W2./B , W2./A] into F_2(A, B) =[A ; B]× W_2, we obtain the following formula: F_2 (A, B) = [A ; B] × W_2 =[ A × W_2; B× W_2] = ([ A ⊙ B ] × [W_2./B , W_2./A])^T = ([ A ⊙ B ] × W_1)^T = F_1^T(A,B) Theorem <ref> explains that the output of the operation F_2 in the proposed network, represented by F_2(A, B) =[A ; B] × W_2, can be expressed as the output of the operation in (<ref>), denoted as F_1(A, B) =(A ⊙ B) × W_1, with a special weight matrix W_1. This proves that F_2 is a special case of F_1. Module Structure. This module denoted as G( · ) can be formulated as g_s = cat([f_x_i⊕f_R,j, j = 1, … ,5]), g_1 = FC(256× 2 × 5, 64)(g_s), g_2 = Relu(g_1), g_3 = FC(64, 5)(g_2), g_c = argmax(Softmax(g_3)), where g_c is the output of G identified as the predicted quality level, f_x_i=Φ(ϕ (x_i)) in which the sample x_i from the query set is mapped into the learned latent space to prevent additional errors due to offsets in the feature space, f_R,j gives the anchor feature for the j-th quality level, and cat and ⊕ represent the concatenation operation. FC means the fully-connected layers which are characterized by the input and output channel number. The forward propagation for Stage-1 can be represented as q̂_i,L=G(F(Φ(ϕ(x^(k)))), Φ(ϕ(x_i))), where x^(k) signifies the support set samples, x_i denotes the testing sample in the query set, and q̂_i,L is the predicted quality level. §.§.§ Boundary Regularization We use the ground truth quality level of the testing sample in the query set to constrain the training of the networks involved in Stage-1. This ensures that the networks are trained with correct quality level information. Additionally, the objective function for classification can promote the learning of bounded features for different quality levels in the latent space. 
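Before introducing the classification loss, the Stage-1 head G described above can be sketched as follows (PyTorch). The feature width of 256 follows the module structure, while weight initialization and other training details are omitted.

```python
# A minimal sketch of the Stage-1 relevance mapping head G: the query feature
# is concatenated with each of the five anchor features, the stacked vector
# is mapped through a small MLP, and the arg-max over the five logits gives
# the predicted quality level.
import torch
import torch.nn as nn

class QualityLevelHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * 2 * 5, 64), nn.ReLU(), nn.Linear(64, 5)
        )

    def forward(self, query_feat: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
        # query_feat: (B, D); anchors: (5, D) -- one anchor per quality level
        B = query_feat.shape[0]
        paired = torch.cat(
            [query_feat.unsqueeze(1).expand(-1, 5, -1),
             anchors.unsqueeze(0).expand(B, -1, -1)],
            dim=-1,
        )                                          # (B, 5, 2D): query paired with each anchor
        return self.mlp(paired.reshape(B, -1))     # (B, 5) class scores

head = QualityLevelHead()
logits = head(torch.randn(8, 256), torch.randn(5, 256))
level = logits.argmax(dim=1) + 1                   # quality levels in {1, ..., 5}
```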
To train the network for 5-class classification, the following cross entropy loss is adopted: L_cls = 1/K∑_i = 1^K∑_j = 1^5 - q_i,j,Llogq̂_i,j,L - ( 1 - q_i,j,L)log( 1 - q̂_i,j,L), where q̂_i,j,L gives the predicted quality level, q_i,j,L represents the ground truth quality level, and K is the length of the mini-batch. By applying the boundary regularization, the clustering centers in the learned latent space connect with the optimization objectives, i.e., the quality levels. This promotes the development of distinctions between features with different quality levels in the learned latent space. §.§ Relevance Mapping Module for Stage-2 §.§.§ Intra-domain Relevance Mapping Referring to the response term in (<ref>), the intra-domain relevance between the testing sample in the query set and the samples of the same j-th quality level in the support set is measured to obtain the confidence degree using r_i,R = ∑_k = 1^K α _k,j,Rϕ^T (x_k,j^)ϕ (x_i^) , which quantifies the relevance between the sample x_i and other samples around x_i within the local description domain. The domain-invariant features extracted in the previous subsection are designed specifically for the description domain. Besides, the disentangled features can cause information loss, which can limit the quality prediction from the description domain to the quality domain. Therefore, in this stage, we leverage the perception domain features ϕ(x) to measure the relevance of neighboring points in the description domain. Similar to the operation in the previous subsection, we measure and map the relevance using the concatenated features by a mapping network. In this way, the network can perceive the feature components related to the quality difference, and the quality domain can be connected with a smaller discrepancy. Module Structure. This module denoted as H( · ) is formulated as h_s = ϕ _x_i⊕ϕ _x,j^(k), h_k = FC(256 × 2, 64)(h_s^(i)), h_k = Relu(h_k), h_k = FC(64, 1)(h_k), h_k = Sigmoid(h_k)-0.5, k = 1, ⋯ ,K, h_r = mean([h_1,h_2, ⋯ ,h_K]), where h_r gives the output of H identified as the predicted confidence degree, ϕ _x_i=ϕ (x_i) represents the extracted perception domain features of the sample x_i in the query set, ϕ _x,j^(k) = ϕ (x_j^(k)) indicates the extracted perception domain features of samples in the support set identified as the j-th quality level by Stage-1, and K is the sample number for the j-th quality level in the mini-batch. The forward propagation for Stage-2 is q̂_i,R=H(ϕ(x^(k)),ϕ(x_i)), where x^(k) signifies the support set samples, x_i is the testing sample in the query set, and q̂_i,R represents the predicted confidence degree. §.§.§ Quality-aware Regularization We use the PLCC loss and SROCC loss defined above as loss functions to explicitly improve them, i.e., ℒ_reg1 = - loss_plcc( q̂^(k),q^(k)), and ℒ_reg2 = - loss_srocc( q̂^(k),q^(k)). Here, q̂^(k)=q̂^(k)_L+q̂^(k)_R is the predicted continuous quality scores across a mini-batch which is the sum of the quality levels and the confidence degree values. q^(k) gives the ground truth MOS. Besides, the modules involved in Stage-1 operate on the extracted domain-invariant features. The imposed quality-aware regularization can ensure that the quality-aware common components exist in the original features. §.§ Cross-stage Training Mechanism The two stages of the proposed architecture are not independent and mutually reinforce each other. 
Stage-1 serves as the foundation for Stage-2, and Stage-2, in turn, facilitates the establishment of the latent space in Stage-1. Specifically, when the network is inadequately trained and predicts the quality level inaccurately, the quality-aware regularization seeks to achieve a large value of H(x) which separates the feature ϕ(x) of the testing sample from those of “similar" samples determined in Stage-1. Essentially, it increases the differences between mismatched samples. Conversely, under accurate quality level prediction, H(x) facilitates the feature differences based on the quality differences, promoting a structured distribution in the learned latent space. §.§ Quality Combination Module Given the support set samples x^(k), the continuous quality score of the sample x_i in the query set can be obtained by q̂_i =q̂_i,L+q̂_i,R =G(F(Φ(ϕ(x^(k)))), Φ(ϕ(x_i))) + H(ϕ(x^(k)),ϕ(x_i)), where q̂_i is the desired predicted continuous quality score. §.§ Overall Loss The overall training loss is the sum of distribution loss, structured ranking loss, boundary loss and two quality-aware losses, which leads to ℒ = ℒ_dis + ℒ_rank + ℒ_cls + ℒ_reg1 + ℒ_reg2. § EXPERIMENTAL RESULTS AND ANALYSES §.§ Datasets To evaluate the performance of the proposed D^3-PCQA, we conduct evaluation experiments on three independent datasets, i.e., LS-PCQA <cit.>, SJTU-PCQA <cit.> and WPC datasets <cit.>. LS-PCQA. The LS-PCQA  <cit.> dataset consists of 104 reference point clouds with 31 different types of distortions under 7 levels. These distortions include geometry distortions, attribute distortions and compression distortions, resulting in a total of 22,568 distorted point cloud samples. SJTU-PCQA. The SJTU-PCQA <cit.> dataset consists of 9 reference point clouds with 7 different types of distortions under 6 levels. These distortions comprise 4 individual distortions and 3 superimposed distortions, resulting in a total of 378 distorted point cloud samples. WPC. The WPC <cit.> dataset consists of 16 reference point cloud samples with V-PCC distortion, considering the combination between 5 geometry quantization steps and 5 texture quantization steps, resulting in a total of 400 distorted samples. To compare the proposed D^3-PCQA with other learning-based NR-PCQA metrics, we split the LS-PCQA, SJTU-PCQA and WPC datasets into the training sets and the testing sets. The training set and the testing set from LS-PCQA contain the distorted samples generated from 100 and 4 reference point clouds respectively to avoid overlapping. For SJTU-PCQA and WPC, we split the reference point clouds of the two datasets such that 75% of the samples are used for training, while the remaining 25% are for testing. To augment the training sets, for projection-based methods, each point cloud is projected to 12 versions of images by rotating it to 12 different viewpoints. The 12 viewpoints are uniformly placed around the object based on the 12 polyhedron vertices of a regular icosahedron <cit.>. For 3D-based methods, random rotation within the range of [0^∘,360^∘) is invoked during training. §.§ Overall Performance The correlation metrics (PLCC and SROCC) are used to quantify the performance of the objective methods. 
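For reference, both criteria can be computed directly with SciPy as sketched below; some PCQA studies additionally fit a logistic mapping before computing PLCC, which we omit here for brevity (an assumption on our part).

```python
# A short sketch of the two evaluation criteria used below, computed from
# predicted scores and ground-truth MOS.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(pred: np.ndarray, mos: np.ndarray):
    plcc, _ = pearsonr(pred, mos)     # linear correlation (prediction accuracy)
    srocc, _ = spearmanr(pred, mos)   # rank correlation (prediction monotonicity)
    return plcc, srocc

rng = np.random.default_rng(0)
mos = rng.uniform(1.0, 5.0, size=100)
pred = mos + rng.normal(scale=0.3, size=100)   # a hypothetical model's outputs
print("PLCC %.3f, SROCC %.3f" % evaluate(pred, mos))
```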
We compare the performance of proposed D^3-PCQA with prevalent FR metrics, including MSE-PSNR-P2point (M-p2po) <cit.>, Hausdorff-PSNR-P2point (H-p2po) <cit.>, MSE-PSNR-P2plane (M-p2pl) <cit.>, Hausdorff-PSNR-P2plane (H-p2pl) <cit.>, PSNRyuv <cit.>, Hausdorff-PSNRyuv (H-PSNRyuv) <cit.>, PCQM <cit.>, GraphSIM <cit.> and MPED <cit.>, prevalent 3D-based NR methods, including ResSCNN <cit.>, GPA-Net <cit.> and MM-PCQA <cit.>, and prevalent projection-based NR methods, including PQA-Net <cit.> and IT-PCQA <cit.>. Additionally, to evaluate the average performance of these methods across multiple datasets, we present the weighted mean values of PLCC and SROCC based on the scale of the testing sets. The results are summarized in Table <ref>, with the best results highlighted in bold and the second-best results highlighted with underline. We can see from Table <ref> that: i) the proposed D^3-PCQA exhibits robust and outstanding performance across all three datasets. In contrast, the performance of existing FR and NR methods varies significantly across different datasets. For example, the proposed D^3-PCQA achieves an SROCC of approximately 0.8 on all three datasets. Conversely, GPA-Net performs well on the SJTU-PCQA dataset with an SROCC of 0.87 but poorly on the LS-PCQA and WPC dataset with an SROCC of 0.60; ii) the proposed D^3-PCQA improves the model fitting in the presence of huge domain discrepancy in the training data, leading to outstanding performance on the LS-PCQA and WPC datasets. For example, on LS-PCQA, a dataset with large domain discrepancy, the proposed D^3-PCQA exhibits a high SROCC over 0.7, while other NR methods yield inferior performance with an SROCC below 0.6. Additionally, the proposed D^3-PCQA achieves the best performance among the projection-based methods, albeit slightly inferior to 3D-based methods, on the less complex SJTU-PCQA dataset where 3D-based backbones offer greater advantages in utilizing the training data; iii) the performance of the proposed D^3-PCQA can approach or even exceed that of existing FR metrics. For example, when comparing the (PLCC, SROCC) values of D^3-PCQA and the FR metrics with the best performance on the three datasets, we obtain (0.75, 0.75) for D^3-PCQA vs. (0.63, 0.62) for PSNRyuv on LS-PCQA, (0.80, 0.82) for D^3-PCQA vs. (0.91, 0.89) for GraphSIM and MPED on SJTU-PCQA, and (0.81, 0.79) for D^3-PCQA vs. (0.74, 0.75) for PCQM and GraphSIM on WPC. While some NR methods may exhibit better performance than FR metrics, this does not necessarily mean that these NR methods are “superior", but rather that they have better fitting performance. Therefore, we further evaluate the generalization performance of D^3-PCQA and existing NR-PCQA methods in the following subsection. Additionally, we show some examples of distorted point clouds with the subjective MOS, the predicted quality score of the proposed D^3-PCQA and the semantic degradation description corresponding to the predicted quality level in Fig. <ref>. It can be observed that the predicted quality score and the semantic degradation description align with subjective perception. §.§ Generalization Performance In practice scenarios, the generalization ability assumes heightened significance. In this subsection, we evaluate the generalization ability of the proposed D^3-PCQA through the cross-dataset experiment. 
Specifically, we consider two testing conditions: training on a small-scale dataset and testing on a small-scale dataset, and training on a large-scale dataset and testing on a small-scale dataset. In the first testing condition, the proposed D^3-PCQA and other NR-PCQA methods are trained on the SJTU-PCQA dataset and tested on the WPC dataset. Then the two datasets are switched and the experiment is repeated. In the second testing condition, the proposed D^3-PCQA and other NR-PCQA methods are trained on the LS-PCQA dataset and tested on the SJTU-PCQA and WPC datasets, respectively. The cross-dataset evaluation results are shown in Table <ref>. The best results are highlighted in bold, and the second-best results are highlighted with underline. We can see from Table <ref> that: i) the proposed D^3-PCQA exhibits superior generalization ability compared with other NR-PCQA methods under all testing conditions, highlighting its effectiveness. The SROCC achieved by the proposed D^3-PCQA surpasses the second-best result under all testing conditions by approximately 0.2. Conversely, most existing learning-based NR methods exhibit unsatisfactory generalization ability. For example, in the first testing condition, the proposed D^3-PCQA exhibits an SROCC of 0.7, while other NR-PCQA methods exhibit an SROCC of only approximately 0.5; ii) the utilization of the training data significantly affects the generalization performance of NR-PCQA models. In general, 3D-based NR-PCQA methods outperform projection-based methods, which is explained by the high efficiency in utilizing the training data of 3D-based backbones. However, in this case, the performance of the proposed D^3-PCQA, despite using a projection-based backbone, still demonstrates superior generalization performance, showcasing the effectiveness of our method; iii) the coverage of the training data also affects the generalization performance of NR-PCQA models. Methods trained on the large-scale datasets benefit from training data with broader coverage, leading to higher generalization ability. §.§ Performance Compared with Regression-based Architecture The existing NR methods typically have a similar architecture. Generally, these methods extract features from raw point cloud data and then map these features into quality scores. In this subsection, we compare the overall performance and generalization performance between the proposed D^3-PCQA with the conventional regression-based architecture. Specifically, the regression-based architecture can be achieved by G(ϕ (x^(k))) where ϕ represents the feature extraction backbone and G signifies the regression network. ϕ and G of the testing network are kept identical to those in this paper. The results are shown in Table <ref>. We can see from Table <ref> that: i) the proposed D^3-PCQA outperforms the conventional regression-based architecture under both the single-dataset testing condition and the cross-dataset testing condition. This demonstrates the effectiveness of our proposed method in quality prediction; ii) the proposed D^3-PCQA has a greater gain in the cross-dataset evaluation. This is explained by the effective strategy of exploring domain relevance in improving generalization ability. §.§ Effect of Loss Functions The proposed network is trained using the distribution loss, the structured ranking loss, the boundary loss and the two quality-aware losses. In this subsection, we evaluate the effect of each loss function on the WPC dataset. We use ℒ_cls to train the network as a benchmark. 
To demonstrate the effectiveness of establishing the description domain, ℒ_cls + ℒ_dis and ℒ_cls + ℒ_dis + ℒ_rank are used as the loss function to repeat the trial, as the distribution loss and the structured ranking loss are highly related to the formation of structured latent space. To demonstrate the effectiveness of the cross-stage training mechanism, ℒ_cls + ℒ_dis + ℒ_rank + ℒ_reg is used as the loss function to train the network, which achieves the final performance of the proposed model. The overall performance and cross-dataset performance are shown in Table <ref> and Table <ref>, respectively. We can see from Table <ref> and Table <ref> that: i) the sole use of ℒ_cls leads to poor performance. For point clouds with similar impairment degrees, a few distortion levels may not suffice to accurately describe their ranking relationship, thus reducing the correlation performance; ii) ℒ_dis and ℒ_rank significantly improve the generalization ability for quality prediction by exploiting the intra-domain relevance within different samples and establishing the proposed structured latent space to reproduce the function of the description domain; iii) ℒ_reg leads to obvious gain in performance, which demonstrates that ℒ_reg can promote the learning of the structured latent feature space and quality-aware features, and the proposed cross-stage training mechanism is effective for improving the performance of the model. §.§ Effect of Projection Strategies In this work, we propose two methods to address the issue of information loss caused by the projection operation, i.e., the depth projection and the HVS-based projection. First, in addition to the texture projection, the proposed method in this work incorporates an extra depth projection image to enhance the projection of the point cloud. Second, we introduce a new representation at a small scale, which aligns with the way human eyes observe point clouds from a distance, to handle information loss. In this subsection, we conduct evaluation experiments to demonstrate the effectiveness of these two methods. Specifically, the network using only the texture projection and the network without the HVS-based projection are trained and tested under the same condition as the original network. The performance comparison is shown in Table <ref>. We can see from Table <ref> that both proposed methods to handle information loss exhibit improvements in performance. The proposed depth projection enhances the utilization of training samples, while the HVS-based projection allows the model to respond to the masked distortions from a different perspective, thus enhancing the quality prediction capability of the model. §.§ Selection of Maximum Blurring Degree In the proposed feature extraction module, we adopt the blurring distortion as a substitute for the distortions masked by projection. To determine the maximum filter radius or blurring degree, denoted as R_m, we need to carefully choose an appropriate value. If R_m is set too small, the degree of blurring will not be enough to replace the masked distortions. Conversely, if R_m is set too large, the degree of blurring will not match the masked distortions, which will result in a discontinuity in the predicted quality scores. To determine the appropriate value for R_m, we conduct experiments in this subsection with the same setups as in Section <ref> but with different R_m values. The overall performance and cross-dataset performance are shown in Table <ref> and Table <ref> respectively. 
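To make the blurring substitution concrete, here is a minimal sketch of how a projection image could be blurred with a filter radius capped at R_m; the uniform sampling of the radius and the use of a Gaussian filter are illustrative assumptions, not the authors' exact procedure.

```python
import random
from PIL import Image, ImageFilter

def blur_projection(img: Image.Image, r_max: float = 10.0) -> Image.Image:
    """Apply Gaussian blur with a radius drawn from [0, r_max].

    r_max plays the role of R_m above: too small and the blur cannot stand in
    for the distortions masked by projection; too large and the substitution
    overshoots them, causing discontinuities in the predicted scores.
    """
    radius = random.uniform(0.0, r_max)
    return img.filter(ImageFilter.GaussianBlur(radius=radius))

# Example usage (the file path is illustrative):
# proj = Image.open("texture_projection.png")
# blurred = blur_projection(proj, r_max=10.0)
```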
We can see from Table <ref> and Table <ref> that R_m = 10 is the most appropriate setting, yielding the best overall performance and generalization ability. §.§ Scalability Demonstration The proposed D^3-PCQA is a versatile framework. While this paper mainly focuses on projection-based NR-PCQA methods, the proposed framework is modular and can be integrated with other backbones. In this subsection, we combine the proposed framework with other backbones to illustrate its effectiveness and scalability. Specifically, we replace the 2D backbone in the proposed framework with 3D backbones (ResSCNN and PointNet). To accelerate training, the 3D point clouds are downsampled to 1% of the original point number for the ResSCNN backbone and to 2,500 points for the PointNet backbone. The results of the experiments are shown in Table <ref> and Table <ref>. We can see from Table <ref> and Table <ref> that the proposed framework improves the performance of both 3D and 2D backbones, highlighting its generalization and scalability. § CONCLUSION In this study, we propose a novel NR-PCQA method called D^3-PCQA, which treats quality assessment as a domain transformation from the perception domain to the quality domain. To reduce domain discrepancy, we establish a new intermediate domain, namely the description domain, by exploiting domain relevance and learning a structured latent space. The anchor features derived from the learned structured latent space serve as cross-domain auxiliary information to promote domain transformation. Furthermore, the established description domain decomposes quality prediction into degradation description prediction and confidence degree prediction, providing a semantic explanation for the predicted quality scores. Experimental results demonstrate the effectiveness of D^3-PCQA, which achieves robust overall performance and outstanding generalization ability compared with existing NR-PCQA methods.
http://arxiv.org/abs/2307.00924v1
20230703105044
Semi-supervised multi-view concept decomposition
[ "Qi Jiang", "Guoxu Zhou", "Qibin Zhao" ]
cs.LG
[ "cs.LG", "cs.CV" ]
Semi-supervised multi-view concept decomposition
Qi Jiang (qi.jiang.gdut@qq.com), Guoxu Zhou (gx.zhou@gdut.edu.cn), Qibin Zhao (qibin.zhao@riken.jp)
School of Automation, Guangdong University of Technology, Guangzhou 510006, China
Corresponding authors: Guoxu Zhou, Qibin Zhao
Summary: Concept Factorization (CF), as a novel paradigm of representation learning, has demonstrated superior performance in multi-view clustering tasks. It overcomes limitations such as the non-negativity constraint imposed by traditional matrix factorization methods and leverages kernel methods to learn latent representations that capture the underlying structure of the data, thereby improving data representation. However, existing multi-view concept factorization methods fail to consider the limited labeled information inherent in real-world multi-view data. This often leads to significant performance loss. To overcome these limitations, we propose a novel semi-supervised multi-view concept factorization model, named SMVCF. In the SMVCF model, we first extend the conventional single-view CF to a multi-view version, enabling more effective exploration of complementary information across multiple views. We then integrate multi-view CF, label propagation, and manifold learning into a unified framework to leverage and incorporate valuable information present in the data. Additionally, an adaptive weight vector is introduced to balance the importance of different views in the clustering process. We further develop targeted optimization methods specifically tailored for the SMVCF model. Finally, we conduct extensive experiments on four diverse datasets with varying label ratios to evaluate the performance of SMVCF. The experimental results demonstrate the effectiveness and superiority of our proposed approach in multi-view clustering tasks.
Keywords: Concept decomposition; Multi-view clustering; Label propagation; Manifold learning
§ INTRODUCTION With the rapid growth of data, data sources and features have become increasingly diverse. For example, a news article may be reported by multiple media outlets, a facial image can be captured from different angles, and a web page may contain various elements such as images, text, and hyperlinks. These data, described by different source domains or features, are referred to as multi-view data. Although the data from each view can be used to design single-view representation learning models, this approach fails to leverage the information provided by other views, thereby limiting further improvement in algorithm performance. Therefore, effectively utilizing the information from multi-view data to enhance clustering performance is an important challenge. Multi-view learning has been widely applied in various fields <cit.>, including computer vision, natural language processing, bioinformatics, and health informatics. In the field of multi-view clustering, matrix factorization (MF) methods are commonly employed, particularly Nonnegative Matrix Factorization (NMF) <cit.>. NMF is a dimensionality reduction technique that can extract latent features from high-dimensional multi-view data. By decomposing the original data into a low-rank representation <cit.>, NMF reduces the dimensionality and captures the underlying structure of the data. This facilitates the clustering process by revealing the intrinsic relationships and patterns among the multi-view samples.
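As a point of reference for the factorization just described, here is a minimal NumPy sketch of plain (single-view) NMF with the standard multiplicative updates, the same rules restated formally in the Related Work section; initialization and stopping criteria are simplified for illustration.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-10, seed=0):
    """Minimal NMF: X (m x n, nonnegative) ~ U V^T with U (m x k), V (n x k),
    using the standard multiplicative update rules."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + eps
    V = rng.random((n, k)) + eps
    for _ in range(n_iter):
        U *= (X @ V) / (U @ V.T @ V + eps)      # update of the basis matrix
        V *= (X.T @ U) / (V @ U.T @ U + eps)    # update of the coefficient matrix
    return U, V

# U @ V.T approximates X; each row of V is a k-dimensional representation of the
# corresponding sample (column of X), which can then be fed to a clustering step.
```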
Furthermore, Concept Factorization (CF) <cit.> as a variant of NMF, inherits all the advantages of NMF and has additional strengths as it can handle both positive and negative values and operate in any data representation space, including kernel feature space. In recent years, various multi-view clustering methods based on non-negative matrix factorization have been proposed. Liu et al. <cit.> proposed Multi-View Clustering via Joint NMF, which employs a consensus constraint to encourage the coefficient matrices learned from different views to converge towards a consistent consensus matrix. Subsequently, Khan et al. <cit.> introduced the Weighted Multi-View Data Clustering via Joint NMF method, which incorporates adaptive view weights to enhance the clustering performance. Wang et al. <cit.> presented the Adaptive Multi-View Semi-Supervised NMF, which extends traditional multi-view NMF to the semi-supervised setting by incorporating label information as hard constraints, aiming to achieve better clustering discriminability. In addition, Wang et al. <cit.> proposed Diverse NMF, a multi-view clustering method that introduces a diversity term to orthogonalize different data vectors and reduce redundancy in multi-view representations. The accumulated result integrates complementary information from multiple views. Liu et al. <cit.> introduced Partially Shared Latent Factor Learning (PSLF), a partially shared multi-view learning approach. PSLF assumes that different views share common latent factors while having their specific latent factors. By considering both the consistency and complementarity of multi-view data, PSLF learns a comprehensive partially shared latent representation that enhances clustering discriminability. Ou et al. <cit.> incorporated co-regularization and correlation constraints into multi-view NMF. They leverage the complementarity between different views and propose imposing correlation constraints on the shared latent subspace to obtain shared latent representations when a particular view is corrupted by noise. This approach demonstrates good performance in handling noisy multi-view data. Based on manifold learning <cit.>, Cai et al. <cit.> proposed a graph-constrained non-negative matrix factorization model, which highlights the importance of considering manifold learning. Since then, this approach has been widely applied in the field of multi-view learning. Zhang et al. <cit.> proposed Graph-Regularized NMF, a multi-view clustering method that incorporates graph regularization and orthogonal constraints. The orthogonal constraints help eliminate relatively less important features, while the graph regularization learns more relevant local geometric structures. Liang et al. <cit.> introduced a graph-regularized partially shared multi-view NMF method. Building upon the PSLF model <cit.>, this approach incorporates manifold learning and constructs affinity graphs for each view to approximate the geometric structure information in the data. As a variant of NMF, CF inherits all the advantages of NMF and has additional strengths, making CF a natural choice for the multi-view domain. Wang et al. <cit.> first introduced the Multi-View CF method, extending the traditional single-view CF methods to the multi-view scenario. Subsequently, Zhan et al. <cit.> proposed the Adaptive Multi-View CF method, which utilizes a view-adaptive weighting strategy to automatically update the weights for each view, further enhancing the ability of CF to handle multi-view problems. 
However, the aforementioned studies on multi-view concept factorization overlook an important aspect, which is the presence of a small amount of labeled information in real-world multi-view data. Leveraging the available labeled information can significantly enhance the clustering performance of our model. Therefore, the maximum utilization of limited labeled information becomes a crucial problem to address. To tackle this issue, we propose a novel multi-view concept factorization method called Semi-supervised Multi-View Concept Factorization (SMVCF) model. The framework structure of SMVCF is illustrated in Figure <ref>, and its main contributions can be summarized as follows: * We introduce a new multi-view concept factorization method, SMVCF, which extends the single-view CF to the multi-view scenario and integrates multi-view CF, label propagation, and manifold learning into a unified framework. Moreover, our method combines concept factorization, label propagation, and manifold learning to solve a unified optimization problem. * We develop a novel multi-view label learning strategy to utilize the available labeled information. For datasets with a small number of labeled instances, label propagation methods can propagate labels to unlabeled data, thereby improving the model's performance. * The proposed SMVCF incorporates an adaptive weight strategy in the learning process to balance the importance of each view, mitigating the adverse effects of information imbalance. * We conduct extensive experiments on four different datasets with varying label proportions. The results demonstrate that SMVCF outperforms several state-of-the-art semi-supervised multi-view clustering methods, showcasing its superior performance. § RELATED WORK §.§ Notations For the sake of readability, this section provides a summary of the commonly used mathematical symbols throughout the entire paper in the table<ref> below. §.§ NMF In matrix factorization-based learning methods, Non-negative Matrix Factorization <cit.> is a technique that approximates a non-negative matrix of sample data by decomposing it into a basis matrix 𝐔 and a coefficient matrix 𝐕. Given a data matrix 𝐗 = [ x_1,x_2, ⋯ ,x_n] ∈ℝ^m × n, where each column of 𝐗 represents a sample vector, NMF aims to decompose 𝐔=[u_i k] ∈ℝ^m × k and a coefficient matrix 𝐕=[v_j k] ∈ℝ^n × k. This can be expressed as follows: 𝐗≈𝐔𝐕^⊤. Furthermore, we can define the objective function of NMF as follows: 𝐉_𝐍 𝐌 𝐅= 𝐗-𝐔 𝐕^⊤_F^2 s.t. 𝐔,𝐕≥ 0. Based on the paper <cit.>, we can derive the update equations for the basis matrix 𝐔 and the coefficient matrix 𝐕 as follows: u_i k^t+1=u_i k^t(𝐗 𝐕)_i k/(𝐔 𝐕^⊤𝐕)_i k v_j k^t+1=v_j k^t(𝐗^⊤𝐔)_j k/(𝐕𝐔^⊤, 𝐔)_j k. §.§ CF Non-negative Matrix Factorization <cit.> has gained significant attention in the field of clustering <cit.> over the past few decades. While NMF is effective in extracting latent components from non-negative data, it encounters limitations when applied to real-world data where non-negativity is not always preserved due to noise or outliers. Converting data to non-negative form may disrupt the linear relationships among the data. Additionally, NMF cannot be easily kernelized using kernel methods <cit.>, as many kernel methods are not applicable to NMF. To overcome these drawbacks, Xu et al. <cit.> proposed the concept factorization approach as an alternative to NMF. CF not only eliminates the constraint of non-negativity but also leverages kernel methods to learn the latent representation of data. 
By incorporating kernelization, CF can capture nonlinear relationships in the data, which enhances its flexibility compared to NMF. It is worth noting that concept decomposition-based <cit.> methods have demonstrated superior performance in handling problems across various domains. Given a data matrix 𝐗 = [ x_1,x_2, ⋯ ,x_n] ∈ℝ^m × n, where x_i represents the i-th m-dimensional feature vector of the data samples, each basis vector u_j can be represented as a linear combination of the data samples: u_j = ∑_i w_ijx_i, where w_ij≥ 0. Let 𝐖 = [ w_ij] ∈ℝ^n × c. The objective of CF is to find an approximation as follows: 𝐗≈𝐗𝐖𝐕^⊤. To measure the reconstruction error, the objective function of CF can be rewritten as follows: 𝐉_𝐂 𝐅=𝐗-𝐗𝐖𝐕^⊤_F^2 s.t. 𝐖≥ 0, 𝐕≥ 0. According to the paper <cit.>, we can obtain the update rules for problem (<ref>) as follows: w_i j^t+1← w_i k^t(𝐊𝐕)_i k/(𝐊𝐖𝐕^⊤𝐯)_i k v_j k^t+1← v_j k^t(𝐊𝐖)_j k/(𝐕𝐖^⊤𝐊𝐖)_j k, where 𝐊 = 𝐗^⊤𝐗 ∈ℝ^n × n. These update rules only involve the inner product of 𝐗. However, it is possible to incorporate a kernel function into the matrix to introduce nonlinearity. A detailed explanation can be found in the paper <cit.>. §.§ Multi-view Clustering The general expression of matrix factorization-based multi-view models is given as follows: min _𝐔^(v), 𝐕^(v) ∑_v=1^m𝐗^(v)-𝐔^(v)𝐕^(v)_F^2+Ψ(𝐕^(v), 𝐕^*) s.t. 𝐔^(v), 𝐕^(v), 𝐕^*≥ 0, Where 𝐗^(v) represents the data matrix of the v-th view. 𝐔^(v) is the basis matrix for the v-th view, and 𝐕^(v) represents the coefficient matrix for the v-th view. Ψ(·) is a function that combines different 𝐕^(v) matrices to obtain a consistent consensus matrix 𝐕^*. § SEMI-SUPERVISED MULTI-VIEW CONCEPT DECOMPOSITION §.§ Label Propagation In the context of multi-view learning, datasets often contain partial label information, and effectively leveraging this limited label information becomes crucial for improving algorithm performance. Label propagation techniques have been widely demonstrated to be effective in previous research <cit.>. Compared to traditional label learning methods, label propagation methods have several advantages. Firstly, label propagation methods can leverage a large amount of unlabeled data for learning without requiring additional manual labeling costs. Secondly, for datasets with a small number of labels, label propagation methods can propagate label information from known labeled samples to unknown labeled samples through similarity propagation in the data space, thereby improving model performance. Lastly, label propagation methods exhibit high flexibility and robustness, being able to adapt to various data types and task types while being less susceptible to noise and outlier data. We can establish an undirected graph 𝐆(V,E) and a weight matrix 𝐒={s(i,j),i,j = 1,2,⋯, I_N} to describe the neighboring relationships between samples. The weight matrix 𝐒 can be constructed in the following way: s(i, j)={[ e^-X_i-X_j_F^2/σ^2, if X_i∈N_p(X_j) and X_j∈N_p(X_i); 0, otherwise ]. Where N_p(X_i) and N_p(X_j) denote the sets of p nearest samples to X_i and X_j in the graph G, respectively. σ is a hyperparameter. Previous studies have shown <cit.> that samples that are close in the sample space should have the same label. 
Therefore, if the dataset consists of I_N samples and contains label information, the label propagation problem can be rewritten as follows: min ∑_i=1^I_N∑_j=1^I_N𝐁(i,:)-𝐁(j,:)_2^2 s(i, j) +∑_i=1^I_N𝐁(i,:)-𝐘(i,:)_2^2 a(i, i), Where 𝐁∈ℝ^I_N× k is the predicted label matrix, 𝐘∈ℝ^I_N× k is the true label matrix, and 𝐘(i,:)=[0,0, ⋯, 1, ⋯, 0,0]^⊤ ∈R^1 × k. Here, k represents the number of classes for the samples. 𝐀={a(i, i), i, j ∈ 1,2, ⋯, I_N}∈ℝ^I_N× I_N denotes the diagonal indicator matrix. a(i, i)={[ 1, if X_i labeled.; 0, otherwise. ]. When given labeled samples X_i and unlabeled samples X_j, if s(i, j) is sufficiently large, minimizing (<ref>) ensures that the predicted label 𝐁(j,:) for sample X_j will be very close to the true label 𝐘(i,:) of sample X_i. §.§ Objective Function of SMVCF We integrate multi-view CF, label propagation, and manifold learning into a unified framework and propose a semi-supervised multi-view concept factorization model. Given a dataset with n_v views {𝐗^(v)}_v=1^n_v, where 𝐗^(v)=[x_1^(v), x_2^(v), ⋯, x_n^(v)] is the input matrix of the v-th view. It is important to note that a good low-dimensional representation vector 𝐕^(v)(i,:) should ideally have a small Euclidean distance to its corresponding label vector, resulting in better discriminative power. Therefore, the objective function of our SMVCF model is formulated as follows: min _𝐖^(v), 𝐕^(v), α^v ∑_v=1^n_vα^v(𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2. +λ∑_i=1^n_v∑_j=1^n_v𝐕^(v)(i,:)-𝐕^(v)(j,:)_2^2 s(i, j) .+β∑_i=1^n_v𝐘^(v)(i,:)-𝐕^(v)(i,:)_2^2 a(i, i)) s.t. ∀ v, 𝐖^(v)≥ 0, 𝐕^(v)≥ 0, α^v≥ 0, ∑_v=1^n_vα^v=1, Where 𝐕^(v)(i,:) and 𝐕^(v)(j,:) represent the i-th and j-th rows of the factor matrix 𝐕^(v) in the v-th view. Equation (<ref>) can also be rewritten in the following form: min _𝐖^(v), 𝐕^(v), α^v∑_v=1^n_v α^v(𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2. +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v)) .+βTr((𝐕^(v)-𝐘^(v))^⊤𝐀^(v)(𝐕^(v)-𝐘^(v)))) s.t. ∀ v, 𝐖^(v)≥ 0, 𝐕^(v)≥ 0, α^v≥ 0, ∑_v=1^n_vα^v=1. However, it is important to note that when one of the views has a weight of 1 and the weights of other views are all 0, Equation (<ref>) will have an invalid solution in terms of α^v. However, if we solve Equation (<ref>): min _α^v∑_v=1^n_v(α^v)^2 s.t. ∀ v, α^v≥ 0, ∑_v=1^n_vα^v=1, The optimal solution is for all views to have equal weights: 1/n^v. By combining Equations (<ref>) and (<ref>), we can avoid the occurrence of invalid solutions. In summary, the final objective function can be formulated as follows: min _𝐖^(v), 𝐕^(v), α^v∑_v=1^n_v α^v(𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2. +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v))+γ∑_v=1^n_v(α^v)^2 .+βTr((𝐕^(v)-𝐘^(v))^⊤𝐀^(v)(𝐕^(v)-𝐘^(v)))) s.t. ∀ v, 𝐖^(v)≥ 0, 𝐕^(v)≥ 0, α^v≥ 0, ∑_v=1^n_vα^v=1. The diagonal matrix can be represented as 𝐃={d(i, i) . . =∑_j=1^n_v s(i, j), i, j ∈ 1,2, ⋯, n_v.}∈ℝ^n_v × n_v. The Laplacian matrix is defined as 𝐋=𝐃-𝐒. In Equation (<ref>), λ, β, γ are hyperparameters. §.§ Optimization of SMVCF Problem We have designed an iterative update algorithm to solve problem (<ref>). This iterative update algorithm can be roughly divided into three steps:1) Fix 𝐕^(v) and α^v, update 𝐖^(v); 2) Fix 𝐖^(v) and α^v, update 𝐕^(v); 3) Fix 𝐖^(v) and 𝐕^(v), update α^v. 1) Fix 𝐕^(v) and α^v, update𝐖^(v). 𝒪_1=min _𝐖^(v), 𝐕^(v) 𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2 s.t. ∀ v, 𝐖^(v)≥ 0, 𝐕^(v)≥ 0. By defining 𝐊^(v)=(𝐗^(v))^⊤𝐗^(v), equation (<ref>) can be rewritten as: [ 𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2; = Tr(𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤)^⊤; (𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤); = Tr(𝐈-𝐖^(v)(𝐕^(v))^⊤)^⊤𝐊^(v)(𝐈-𝐖^(v)(𝐕^(v))^⊤); = Tr(𝐊^(v)-2𝐕^(v)(𝐖^(v))^⊤𝐊^(v).; . 
+𝐕^(v)(𝐖^(v))^⊤𝐊^(v)𝐖^(v)(𝐕^(v))^⊤). ] Let Ψ^(v) = [ψ _ik^(v)] be the Lagrange multipliers for 𝐖^(v)≥ 0, then we can obtain the Lagrangian equation ℒ_1: [ ℒ_1=Tr(𝐊^(v)-2 𝐕^(v)(𝐖^(v))^⊤𝐊^(v)+Ψ^(v)(𝐖^(v))^⊤.; .+𝐕^(v)(𝐖^(v))^⊤𝐊^(v)𝐖^(v)(𝐕^(v))^⊤). ] Taking the first-order partial derivative of ℒ_1 with respect to 𝐖^(v) yields: ∂ℒ_1/∂𝐖^(v) = -2𝐊^(v)𝐕^(v) +2𝐊^(v)𝐖^(v)(𝐕^(v))^⊤𝐕^(v) +Ψ^(v). By applying the KKT (Karush-Kuhn-Tucker) conditions, ψ _ik^(v)w_ik^(v) = 0, we can obtain: (-𝐊^(v)𝐕^(v)+ 𝐊^(v)𝐖^(v)(𝐕^(v))^⊤𝐕^(v))_ikw_ik^(v)= 0. Therefore, we can obtain the update rule for w_ik^(v) as follows: w_ik^(v)← w_ik^(v)(𝐊^(v)𝐕^(v))_i k/(𝐊^(v)𝐖^(v)(𝐕^(v))^⊤𝐕^(v))_i k. 2) Fix 𝐖^(v) and α^v, update𝐕^(v). 𝒪_2 = min _𝐖^(v), 𝐕^(v) 𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2 +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v)) +βTr((𝐕^(v)-𝐘^(v))^⊤𝐀^(v)(𝐕^(v)-𝐘^(v))) s.t. ∀ v, 𝐖^(v)≥ 0, 𝐕^(v)≥ 0. By defining 𝐊^(v)=(𝐗^(v))^⊤𝐗^(v), equation (<ref>) can be rewritten as: [ 𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2+λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v)); +βTr((𝐕^(v)-𝐘^(v))^⊤𝐀^(v)(𝐕^(v)-𝐘^(v))); = Tr(𝐊^(v)-2𝐕^(v)(𝐖^(v))^⊤𝐊^(v).; . +𝐕^(v)(𝐖^(v))^⊤𝐊^(v)𝐖^(v)(𝐕^(v))^⊤) +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v)); +βTr𝐀^(v)( (𝐕^(v))^⊤𝐕^(v)-(𝐕^(v))^⊤𝐘^(v)-(𝐘^(v))^⊤𝐕^(v).; .-(𝐘^(v))^⊤𝐘^(v)) . ] Let Φ^(v) = [φ _jk^(v)] be the Lagrange multipliers for 𝐕^(v)≥ 0. Then we can obtain the Lagrange equationℒ_2: ℒ_2 = Tr(𝐊^(v)-2𝐕^(v)(𝐖^(v))^⊤𝐊^(v). .+𝐕^(v)(𝐖^(v))^⊤𝐊^(v)𝐖^(v)(𝐕^(v))^⊤) +βTr𝐀^(v)( (𝐕^(v))^⊤𝐕^(v)-(𝐕^(v))^⊤𝐘^(v). .-(𝐘^(v))^⊤𝐕^(v)-(𝐘^(v))^⊤𝐘^(v)) +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v)) +Tr(Φ^(v)(𝐕^(v))^⊤). To solve for the first-order partial derivative of ℒ_2 with respect to 𝐕^(v), we obtain: ∂ℒ_2/∂𝐕^(v) = -2𝐊^(v)𝐖^(v) +2𝐊^(v)𝐕^(v)(𝐖^(v))^⊤𝐖^(v) +2λ𝐋^(v)𝐕^(v) +2β(𝐀^(v)𝐕^(v) - 𝐀^(v)𝐘^(v)) +Φ^(v). By applying the KKT conditions, ϕ _jk^(v)v_jk^(v) = 0, we can obtain: [ (-𝐊^(v)𝐖^(v)+𝐊^(v)𝐕^(v)(𝐖^(v))^⊤𝐖^(v).; .+λ𝐋^(v)𝐕^(v)+β(𝐀^(v)𝐕^(v)-𝐀^(v)𝐘^(v)))_j k v_j k^(v)=0, ] Where 𝐋 = 𝐃 - 𝐒, equation (<ref>) can be rewritten as: [ (-𝐊^(v)𝐖^(v)+𝐊^(v)𝐕^(v)(𝐖^(v))^⊤𝐖^(v)+λ𝐃^(v)𝐕^(v).; . -λ𝐒^(v)𝐕^(v) +β(𝐀^(v)𝐕^(v)-𝐀^(v)𝐘^(v)))_j k v_j k^(v)=0. ] we can derive the update rule for v_jk^(v) as follows: v_jk^(v)← v_jk^(v)(𝐊^(v)𝐖^(v) +λ𝐒^(v)𝐕^(v) +β𝐀^(v)𝐘^(v))_j k/(𝐊^(v)𝐕^(v)(𝐖^(v))^⊤𝐖^(v) +λ𝐃^(v)𝐕^(v) +β𝐀^(v)𝐕^(v))_j k. 3) Fix 𝐖^(v) and 𝐕^(v), update α^v. 𝒪_3 = min _α^v∑_v=1^n_v α^v(𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2. . +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v))+γ∑_v=1^n_v(α^v)^2. .+βTr((𝐕^(v)-𝐘^(v))^⊤𝐀^(v)(𝐕^(v)-𝐘^(v)))) s.t. ∀ v, α^v≥ 0, ∑_v=1^n_vα^v=1. Letf^(v) = 𝐗^(v)-𝐗^(v)𝐖^(v)(𝐕^(v))^⊤_F^2 +λTr(𝐕^(v)^⊤𝐋^(v)𝐕^(v)) +βTr((𝐕^(v)-𝐘^(v))^⊤𝐀^(v)(𝐕^(v)-𝐘^(v))), equation (<ref>) can be rewritten as: min _αα+1/2 γ f_2^2 s.t. α≥ 0, 1^⊤α=1, Where α=[α^1, α^2, …, α^n_v]^⊤ and f=[f^1, f^2, …, f^n_v]^⊤. The Lagrangian equation for problem (<ref>) is given by: ℒ_3 = α+1/2 γ f_2^2+ρ(1-1^⊤ α)+ ζ^⊤(- α), Where ρ and ζ are Lagrange multipliers, with ρ being a scalar and ζ being a column vector. According to the KKT conditions, the optimal solution for α is given by: α=(-1/2 γ f+ζ 1)_+ § EXPERIMENTS In this section, we compare SMVCF with advanced semi-supervised multi-view algorithms. All experiments were conducted on a PC with an Intel i5 9500T CPU and 16GB of RAM. §.§ Datasets The detailed information about the datasets used in this experiment is summarized in Table <ref>. (1) 𝐍𝐆𝐬: The 20Newsgroups dataset consists of news articles categorized into 20 topics. NGs is a subset of the 20Newsgroups dataset, comprising 500 news articles. 
The dataset is divided into three views based on three preprocessing methods, and for detailed preprocessing steps, please refer to the reference <cit.>. Each view has the same dimensionality, with 𝐗∈ℝ^2000 × 500. (2) 𝐁𝐁𝐂𝐒𝐩𝐨𝐫𝐭: The BBCSport dataset consists of 544 news articles from five different sports categories. Each news article in the dataset has two views, where 𝐗^(1)∈ℝ^3183 × 544 and 𝐗^(2)∈ℝ^3203 × 544. (3) 𝐁𝐁𝐂: The BBC dataset consists of 685 news articles from five different topic domains. Each news article in the dataset has four views, where 𝐗^(1)∈ℝ^4659 × 685, 𝐗^(2)∈ℝ^4633 × 685, 𝐗^(3)∈ℝ^4655 × 685, and 𝐗^(4)∈ℝ^4684 × 685. (4) 3𝐒𝐨𝐮𝐫𝐜𝐞𝐬: The 3Sources dataset consists of 169 news articles from six different topic domains. All the news articles are reported by three news agencies: The Guardian, Reuters, and BBC. Each news agency corresponds to one view, where 𝐗^(1)∈ℝ^3560 × 169, 𝐗^(2)∈ℝ^3631 × 169, and 𝐗^(3)∈ℝ^3068 × 169. §.§ Compared Algorithms To evaluate the performance of SMVCF, we compared it with the following four semi-supervised multi-view methods. * 𝐃𝐈𝐂𝐒 <cit.>: This is a semi-supervised multi-view learning method based on NMF. The proposed approach aims to explore both discriminative and non-discriminative information present in the common and view-specific components across different views through joint non-negative matrix factorization. It also incorporates graph regularization and orthogonal constraints. The orthogonal constraints help eliminate relatively less important features, while the graph regularization learns more relevant local geometric structures. * 𝐏𝐒𝐋𝐅 <cit.>: This is a semi-supervised multi-view learning method based on NMF. The approach assumes that different views share common latent factors while also having their specific latent factors. By considering both the consistency and complementarity of multi-view data, PSLF learns a comprehensive partially shared latent representation that enhances clustering discriminability. * 𝐆𝐏𝐒𝐍𝐌𝐅 <cit.>: This is a semi-supervised multi-view learning method based on NMF. Building upon the PSLF model, GPSNMF incorporates manifold learning and constructs affinity graphs for each view to approximate the geometric structure information in the data. Additionally, an efficient L_2,1-norm regularized regression matrix is employed to learn from labeled samples. * 𝐌𝐕𝐒𝐋 <cit.>: This is a novel semi-supervised multi-view semantic subspace learning method based on NMF. The proposed approach achieves joint analysis of multi-view data by sharing semantic subspaces across multiple views. It employs a novel graph regularization approach to preserve the geometric structure of the data and utilizes non-negative matrix factorization to learn the semantic subspaces for each view. §.§ Parameter Sensitivity To test the impact of parameter variations on SMVCF, we conducted sensitivity experiments on the NGs, BBCSport, BBC, and 3sources datasets. By analyzing the SMVCF model, we identified the following parameters: (1) Explicit parameters: λ, β, and γ. Here, λ is the coefficient for graph regularization, controlling the strength of the graph constraint. β balances the relationship between the SMVCF reconstruction term and the label propagation term. γ controls the weight distribution among different views. (2) Implicit parameter: the number of nearest neighbors p for the undirected graph. We first analyzed the explicit parameters and then performed targeted analysis for the implicit parameter. 
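A sketch of the kind of one-at-a-time sweep used for this sensitivity analysis is given below; the parameter grids are copied from the analysis that follows, while `run_smvcf` and its metric outputs are assumed helpers rather than the paper's code.

```python
def sweep(run_smvcf, which, grid, lam=1.0, beta=1.0, gamma=100.0):
    """One-at-a-time sensitivity sweep: fix two explicit parameters, vary the third.

    `run_smvcf` is an assumed callable (not part of the paper's code) that trains
    SMVCF with the given hyper-parameters and returns averaged (ACC, NMI, Purity).
    """
    results = {}
    for value in grid:
        params = {"lam": lam, "beta": beta, "gamma": gamma}
        params[which] = value
        results[value] = run_smvcf(**params)
    return results

# Grids used in the analysis that follows (label ratio fixed at 20%, p = 5):
# sweep(run_smvcf, "lam",   [1, 10, 100, 1000, 10000], beta=1, gamma=100)
# sweep(run_smvcf, "beta",  [0.01, 0.1, 1, 10, 100],   lam=1,  gamma=100)
# sweep(run_smvcf, "gamma", [1, 10, 100, 1000, 10000], lam=1,  beta=1)
```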
(1) Analysis of explicit parameters: We divided the experiments into three scenarios: 1) Fixing β=1 and γ=100, with a label ratio set at20%, we searched for the optimal parameter within the range of λ∈[1,10,100,1000,10000]. 2) Fixing λ=1 and γ=100, with a label ratio set at 20%, we searched for the optimal parameter within the range of β∈[0.01,0.1,1,10,100]. 3) Fixing λ=1 and β=1, with a label ratio set at 20%, we searched for the optimal parameter within the range of γ∈[1,10,100,1000,10000]. For each experimental result, we ran SMVCF 10 times and reported its average performance. The specific experimental results can be seen in Figure <ref>, Figure <ref>, and Figure <ref>. From Figure <ref>, it can be observed that when λ varies within the experimental range, SMVCF exhibits relatively stable performance in terms of ACC, NMI, and Purity. Moreover, at λ=1, SMVCF achieves the best performance across the NGs, BBC, and 3sources datasets. Figure <ref> shows that SMVCF maintains a high level of stability in ACC, NMI, and Purity metrics within the range of β∈[0.01,0.1,1,10]. In most cases, SMVCF performs optimally when β=1. In Figure <ref>, it is evident that SMVCF's performance in ACC, NMI, and Purity exhibits noticeable fluctuations as γ varies. This is expected since γ is the parameter that influences the weight distribution among different views. Specifically, when γ=100, we generally obtain relatively excellent performance. However, both excessively large and small values of γ have varying degrees of impact on the clustering performance, highlighting the crucial role of multi-view weight allocation in the model. (2) Implicit Parameter Analysis: The number of nearest neighbors in the undirected graph, denoted as p, is also related to label propagation, and its variation can affect the clustering performance of the SMVCF model. We tested the impact of p on clustering performance on different datasets, while keeping λ=1, β=1, γ=100, and the label proportion set to 20%. We explored the range of p∈[2,5,8,11,14] to find the optimal parameter. The detailed experimental results can be seen in Figure <ref>. It is evident that within the range of p∈[5,8,11], SMVCF's semi-supervised clustering performance exhibits relatively low fluctuations in terms of ACC, NMI, and Purity. Considering stability, we suggest setting p to 5. By analyzing the above experimental results, we recommend setting λ=1, β=1, γ=100, and p=5 as the default values for the SMVCF model. §.§ Results and Analysis In this section, we compare the performance of SMVCF with four other semi-supervised multi-view clustering models (DICS, PSLF, GPSNMF and MVSL) on four publicly available multi-view datasets. It is worth noting that, according to the literature <cit.>, for the partially structure-sharing methods PSLF and GPSNMF, we set the dimension of the partially shared latent representation, denoted as K, to 100 and set the common factor ratio λ=0.5. Specifically, we have K_c+K_s× P=100, K_c /(K_s+K_c)=0.5. In the following experiments, PSLF^w and GPSNMF^w use the regression coefficient matrix 𝐖 to obtain clustering labels. Given the latent factor 𝐯_i, the clustering label y is computed as y=max _c y_c, i where y_i=𝐖^⊤𝐯_i. The remaining algorithms, including DICS, PSLF^k, GPSNMF^k, MVSL, and SMVCF, obtain clustering labels through K-means clustering using the obtained latent representations. Regarding the parameter settings, for SMVCF, we set λ=1, β=1, γ=100 and p=5. 
For the other four comparison algorithms, we follow the suggested parameter settings from the literature <cit.>. In terms of label usage, we conduct experiments in four different semi-supervised scenarios with label proportions of 10%, 20%, 30%, and 40%. The specific clustering experimental results can be found in Tables <ref>, <ref> and <ref>. Based on the aforementioned experimental results, we can draw the following conclusions: (1) In the case of 10% labeled data, comparing the results of different algorithms on the four datasets, we observe that even under the constraint of limited labeled data, SMVCF demonstrates remarkable clustering performance in most cases compared to several semi-supervised multi-view methods based on the NMF framework. On the NGs dataset, we outperform the second-best algorithm MVSL, achieving approximately 25.4% improvement in ACC, 30.41% improvement in NMI, and 25.4% improvement in Purity. This further confirms the superiority of the CF framework in handling multi-view problems. (2) In the scenarios with 20%, 30% and 40% label ratios as shown in Table <ref>, Table <ref> and Table <ref>, our proposed SMVCF achieved the best performance in all metrics compared to the competing algorithms. Particularly, on the NGs dataset, SMVCF achieved exceptional performance with ACC, NMI, and Purity reaching 99.60%, 98.60% and 99.60% respectively. This demonstrates the superior advantage of label propagation techniques when the amount of labeled information increases, compared to traditional semi-supervised label learning methods. (3) In this experiment, our proposed SMVCF consistently demonstrated superior performance compared to state-of-the-art methods in most cases, and it exhibited excellent stability across different scenarios. This confirms the robustness and effectiveness of SMVCF. It further emphasizes the necessity of considering better label learning approaches within the context of multi-view data, under the premise of learning a more comprehensive low-dimensional representation using the CF framework. §.§ Convergence analysis In this section, we investigate the monotonic convergence property of SMVCF. For the experiments related to convergence, we selected the NGs and BBCSport datasets as representative datasets. As shown in Figure <ref>, it can be observed that on both datasets, the SMVCF algorithm reaches convergence within approximately 30 iterations, demonstrating the excellent convergence performance of our algorithm. § CONCLUSIONS In this paper, we propose a semi-supervised multi-view concept factorization model. Specifically, we integrate multi-view concept factorization, label propagation, and manifold learning into a unified framework to capture more useful information present in the data. Additionally, we introduce an adaptive weight vector to balance the importance of different views. Finally, we conduct extensive experiments on four different datasets with varying label proportions. The results validate the effectiveness of the SMVCF method. § ACKNOWLEDGMENTS This work was supported in part by the National Natural Science Foundation of China under Grant 62073087, 62071132, and 62203124.
http://arxiv.org/abs/2307.01402v1
20230703234723
Multilinear fractional Calderón-Zygmund operators with Dini type kernel
[ "J. Wu", "P. Zhang" ]
math.CA
[ "math.CA", "42B20, 42B35" ]
Abstract: In this paper, the main purpose is to consider a number of results concerning boundedness of multilinear fractional Calderón-Zygmund operators with kernels of mild regularity. Let T_α be a multilinear fractional Calderón-Zygmund operator of type ω(t) with ω being nondecreasing and ω∈(1). The end-point weak-type estimates for the multilinear operator T_α are obtained. Moreover, some boundedness properties of the multilinear fractional operators are also established on variable exponent Lebesgue spaces. Keywords: Calderón-Zygmund operators; multilinear fractional integral; variable exponent Lebesgue space AMS(2020) Subject Classification: 42B20; 42B35 § INTRODUCTION AND MAIN RESULTS The multilinear Calderón-Zygmund theory was first studied by Coifman and Meyer in <cit.>. This theory was then further investigated by many authors in the last few decades, see, for example, <cit.> for the theory of multilinear Calderón-Zygmund operators with kernels satisfying the standard estimates. Multilinear fractional integral operators were first studied by Grafakos <cit.>, followed by Kenig and Stein <cit.> et al. The importance of fractional integral operators is due to the fact that they have been widely used in various areas, such as potential analysis, harmonic analysis, and partial differential equations <cit.>. The fractional Calderón-Zygmund operators and related problems have been studied by a number of authors, see, for instance, Lin and Lu <cit.>, Cruz-Uribe, Moen and Van Nguyen <cit.>, Wang and Xu <cit.>, et al. In 2009, Moen <cit.> presented a weighted theory for multilinear fractional integral operators and maximal functions. In 2014, Lu and Zhang <cit.> established some boundedness results for multilinear Calderón-Zygmund operators of type ω(t) and their commutators in variable Lebesgue spaces. Recently, Dalmasso et al. <cit.> proved boundedness results for integral operators of fractional type and their higher-order commutators between weighted spaces, where the kernels of such operators satisfy a certain size condition and a Lipschitz type regularity, and the symbol of the commutator belongs to a Lipschitz class. They also deal with commutators of fractional type operators with less regular kernels satisfying a Hörmander's type inequality. Inspired by the works above, the main goal of this paper is to consider a number of results concerning boundedness of multilinear fractional Calderón-Zygmund operators T_α with kernels of mild regularity. In particular, the corresponding conclusions can be found in <cit.> when α= 0. In what follows, let ℝ^n be an n-dimensional Euclidean space and (ℝ^n)^m= ℝ^n×⋯×ℝ^n be an m-fold product space (m∈ℕ). We denote by 𝒮(ℝ^n) the space of all Schwartz functions on ℝ^n and by 𝒮'(ℝ^n) its dual space, the set of all tempered distributions on ℝ^n. Let C_c^∞(ℝ^n) denote the set of smooth functions with compact support in ℝ^n. In <cit.>, in order to facilitate the study of certain classes of pseudodifferential operators, Yabuta introduced certain ω-type Calderón-Zygmund operators. Let ω(t): [0,∞) → [0,∞) be a non-negative and non-decreasing function with 0<ω(1)<∞. For a>0, * ω is said to satisfy the (a) condition, i.e. 
ω∈(a), if |ω|_(a) ≜∫_0^1ω^a(t) dt/t <∞. * ω is said to satisfy the log^m-(a) condition if the following inequality holds ∫_0^1ω^a(t) (1+log t^-1)^mdt/t <∞, where m∈ℤ^+. It is easy to check that the log^m-(a) condition is stronger than the (a) condition, and if 0 < a_1 < a_2 < ∞, then (a_1) ⊂(a_2). In particular, if ω∈(1), then ∑_j=0^∞ω(2^-j) ≈∫_0^1ω(t) dt/t <∞. And if ω∈log-(1), that is _0^1ω(t) (1+log t^-1) dt/t < ∞, then ω∈(1) and ∑_k=1^∞ k ω(2^-k) ≈∫_0^1ω(t) (1+log t^-1) dt/t <∞. The following gives the definition of the multilinear fractional Calderón-Zygmund operators of type ω(t). Let 0 ≤α < mn. A locally integrable function K_α(x, y_1,… , y_m), defined away from the diagonal x= y_1 = ⋯ = y_m in (ℝ^n)^m+1, is called an m-linear Calderón-Zygmund kernel of type ω(t), if there exists a constant A > 0 such that the following conditions are satisfied. * Size estimate: | K_α(x,y⃗)| ≤A(|x-y_1|+⋯+|x-y_m|)^mn-α for all (x,y_1,… , y_m)∈ (ℝ^n)^m+1 with x≠ y_j for some j∈{1,2,…,m}. * Smoothness estimate: assume that for each j∈{1,2,…,m}, there are regularity conditions |K_α(x,y⃗) -K_α(x',y⃗) | ≤A (∑_j=1^m|x-y_j| )^mn-αω( |x-x'|/ |x-y_1|+⋯+|x-y_m|) whenever |x-x'| ≤12max_1≤ j≤ m |x- y_j|. And for each fixed j with 1≤ j ≤ m, |K_α(x,y_1,…,y_j, … , y_m) -K_α(x,y_1,…,y_j', … , y_m) | ≤A (∑_j=1^m|x-y_j| )^mn-αω( |y_j-y_j'|/ |x-y_1|+⋯+|x-y_m|) whenever |y_j-y_j'| ≤12max_1≤ j≤ m |x- y_j|. We say T_α:𝒮(ℝ^n)×⋯×𝒮(ℝ^n) →𝒮'(ℝ^n) is an m-linear fractional singular integral operator with an m-linear fractional Calderón-Zygmund kernel of type ω(t), K_α(x,y_1,… , y_m), if T_α (f⃗)(x) = _(ℝ^n)^m K_α(x,y_1,… , y_m) ∏_i=1^mf_i(y_i) dy⃗ whenever x∉⋂_j=1^m f_j and each f_j∈ C_c^∞(ℝ^n), j=1,…,m. Let 0<α<mn and T_α be an m-linear fractional singular integral operator defined by equ:wm-frac-CZO. Suppose that 1≤ p__1,…, p__m < ∞ such that 1/p = 1/p__1 + 1/p__2 +⋯+1/p__m and 1/q = 1/p - α/n >0. Then T_α is called as an m-linear fractional Calderón-Zygmund operator of type ω (abbreviated to m-linear ω_α-CZO) if the following conditions are satisfied: * For some given numbers 1<p_1,…,p_m<∞, T_α maps L^p_1(ℝ^n) ×⋯× L^p_m(ℝ^n) into L^q(ℝ^n). * For some given numbers 1≤ p_1,…,p_m<∞ and min_1≤ j≤ m{p_j}=1, T_α maps L^p_1(ℝ^n) ×⋯× L^p_m(ℝ^n) into L^q,∞(ℝ^n). * Obviously, the m-linear ω_α-CZO is exactly the multilinear Calderón-Zygmund operator studied by Grafakos and Torres in <cit.> when α=0 and ω(t)=t^ε for some ε >0. * When α=0, the m-linear ω_α-CZO is exactly the multilinear Calderón-Zygmund operator studied by Lu and Zhang in <cit.>. * Using size estimate (<ref>) with α=0, from the proof of Lemma 2 in <cit.>, the following condition can be obtained _ℝ^n1(|x-y_1|+⋯+|x-y_m|)^nm d y_m≤C/(|x-y_1|+⋯+|x-y_m-1|)^n(m-1). By symmetry, it is also true if we freeze any other variable in K instead of y_m. Throughout this paper, the letter C always stands for a constant independent of the main parameters involved and whose value may differ from line to line. A cube Q ⊂ℝ^n always means a cube whose sides are parallel to the coordinate axes and denote its side length by l(Q). For some t>0, the notation tQ stands for the cube with the same center as Q and with side length l(tQ)=t l(Q). Denote by |S| the Lebesgue measure and by χ__ S the characteristic function for a measurable set S⊂ℝ^n. B(x,r) means the ball centered at x and of radius r, and B_0=B(0,1). X ≈ Y means there is a constant C>0 such that C^-1Y ≤ X ≤ C Y. For any index 1< q(x)< ∞, we denote by q'(x) its conjugate index, namely, q'(x)=q(x)/q(x)-1. 
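Before fixing the remaining notation, we record a standard example behind the definitions above (this example is not stated in the original text and is included here only for orientation). The kernel of the multilinear fractional integral of Kenig-Stein type, K_α(x,y_1,…,y_m) = (|x-y_1|+⋯+|x-y_m|)^-(mn-α) with 0<α<mn, satisfies the size estimate with A=1; moreover, by the mean value theorem, |K_α(x,y⃗) -K_α(x',y⃗)| ≤ C_m,n,α |x-x'| (|x-y_1|+⋯+|x-y_m|)^-(mn-α)-1 whenever |x-x'| ≤1/2max_1≤ j≤ m |x- y_j|, and similarly in each variable y_j. Hence the smoothness estimates hold with ω(t)= Ct, which satisfies the (1) condition (indeed the log^m-(1) condition), since ∫_0^1 t (1+log t^-1)^m dt/t <∞.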
And we will occasionally use the notational f⃗=(f_1,… , f_m), T(f⃗)=T(f_1,… , f_m), dy⃗=dy_1⋯ dy_m and (x,y⃗)=(x,y_1,… , y_m) for convenience. For a set E and a positive integer m, we will use the notation (E)^m=E×⋯× E_m sometimes. §.§ Boundedness of m-linear ω_α-CZO The first result for multilinear fractional operators T_α with multilinear fractional Calderón-Zygmund kernel of type ω is the following end-point weak-type estimates on the product of Lebesgue spaces. And the Calderón-Zygmund decomposition is the key tool used in obtaining endpoint weak type results for the m-linear ω_α-CZO. Let 0<α<mn and T_α be an m-linear ω_α-CZO with ω∈(1). Suppose that for some 1≤ p__1,p__2,…,p__m≤∞ and some 0<p,q<∞ with 1/p = 1/p__1 + 1/p__2 +⋯+1/p__m, 1/q = 1/p - α/n, T_α maps L^p_1(ℝ^n) ×⋯× L^p_m(ℝ^n) into L^q,∞(ℝ^n). Then T_α can be extended to a bounded operator from the m-fold product L^1(ℝ^n) ×⋯× L^1(ℝ^n) into L^n/mn-α,∞(ℝ^n). Moreover, there is a constant C _m, n,|ω|_(1) (that depends only on the parameters indicated) such that T_α_L^1×⋯× L^1→ L^n/mn-α,∞ ≤ C _m, n,|ω|_(1) (A+ T_α_L^p_1×⋯× L^p_m→ L^q,∞), where A is the constant appearing in equ:w-CZK-frac-size-estimate,equ:w-CZK-frac-regularity-1,equ:w-CZK-frac-regularity-2. * When α=0, <ref> was proved in <cit.>. * When α=0 and ω(t) = t^ε for some ε>0, <ref> was proved in <cit.>. * For the bilinear case, <ref> was proved in <cit.> when α=0, ω is concave and ω∈(1/2). To state the weighted norm inequalities for the multilinear fractional Calderón-Zygmund operators of type ω, we first recall some notation and definition on weights. The class of A_P⃗ can be found in <cit.>, and the class of A_P⃗,q can be found in <cit.>. For m exponents p_j, let P⃗ = (p_1,… ,p_m) and 1/p= 1/p_1+ ⋯ + 1/p_m with 1≤ p_j < ∞  (j = 1,…,m). Given w⃗ = (w_1,… ,w_m) with nonnegative function w_1,… ,w_m∈ℝ^n. * (Class of A_P⃗) We say that w⃗ satisfies the A_P⃗ condition, i.e. w⃗∈ A_P⃗, if sup_Q(1/|Q|∫_Q u_w⃗(x) dx )^1/p∏_j=1^m(1/|Q|∫_Q( w_j (x) )^1-p'_j dx )^ 1/p'_j <∞, where the supremum is taken over all cubes Q ⊂ℝ^n, u_w⃗ = ∏_j=1^m w_j^p/p_j, and (1/|Q|∫_Q( w_j (x) )^1-p'_j dx )^ 1/p'_j in the case p_j = 1 is understood as (inf_Q w_j)^-1. * (Class of A_P⃗,q) Let q be a number 1/m < p ≤ q < ∞. We say that w⃗ satisfies the A_P⃗,q condition if sup_Q(1/|Q|∫_Q(v_w⃗(x))^q dx )^1/q∏_j=1^m(1/|Q|∫_Q( w_j (x) )^-p'_j dx )^ 1/p'_j <∞, where the supremum is taken over all cubes Q in ℝ^n, q>0, v_w⃗ = ∏_j=1^m w_j, and (1/|Q|∫_Q( w_j (x) )^-p'_j dx )^ 1/p'_j is understood as (inf_Q w_j)^-1 when p_j = 1. In addition, For 0 < p < ∞ and w ∈ A_∞, denote by L^p (w)= L^p (ℝ^n,w) the collection of all functions f satisfying f_L^p (w) = ( _ℝ^n |f(x)|^p w(x) dx )^1/p < ∞. And, denote by L^p,∞ (w)= L^p,∞ (ℝ^n,w) the weak space with norm f_L^p,∞ (w) = sup_t>0 t w({x∈ℝ^n: |f(x)|>t})^1/p, where w(E)=∫_E w(x) dx for a measurable set E ⊂ℝ^n. Now, we state the multiple-weighted norm inequalities and weak-type estimates for the multilinear fractional Calderón-Zygmund operators of type ω. Let 0<α<mn and T_α be an m-linear ω_α-CZO with ω∈(1). Given P⃗ = (p_1,… ,p_m) and 1/p= 1/p_1+ ⋯ + 1/p_m with 1≤ p_j < ∞  (j = 1,…,m). Suppose that 1/q = 1/p - α/n >0 and w⃗∈ A_P⃗,q. * If 1< p__j < ∞ for all j=1,…,m, then T_α(f⃗)_L^q(v_w⃗^q) ≤ C ∏_j=1^mf_j_L^p_j(w_j^p_j). * If 1 ≤ p__j < ∞ for all j=1,…,m, and at least one p__j =1 for some j=1,…,m, then T_α(f⃗)_L^q,∞(v_w⃗^q) ≤ C ∏_j=1^mf_j_L^p_j(w_j^p_j). As a consequence of the theorem above the following result is obtained. 
The proof is left to the reader since it is simple. Let 0<α<mn and T_α be an m-linear ω_α-CZO with ω∈(1). Suppose that 1/q = 1/p__1 + 1/p__2 +⋯+1/p__m - α/n >0. * If 1< p__j < ∞ for all j=1,…,m, then T_α(f⃗)_L^q(ℝ^n) ≤ C ∏_j=1^mf_j_L^p_j(ℝ^n). * If 1 ≤ p__j < ∞ for all j=1,…,m, and at least one p__j =1 for some j=1,…,m, then T_α(f⃗)_L^q,∞(ℝ^n) ≤ C ∏_j=1^mf_j_L^p_j(ℝ^n). * When α=0, <ref> was proved in <cit.>. * When α=0 and ω(t) = t^ε for some ε>0, <ref> was proved in <cit.>. * when α=0, ω is concave and ω∈(1/2), the first part of <ref> was proved in <cit.>. §.§ On variable exponent Lebesgue spaces In this section, we will study the boundedness properties of m-linear ω_α-CZO with mild regularity on variable exponent Lebesgue spaces. We first recall some definitions and notations. Let  q(·): ℝ^n→[1,∞) be a measurable function. * The variable exponent Lebesgue spaces L^q(·)(ℝ^n) is defined by L^q(·)(ℝ^n)={f : F_q(f/η)<∞  η>0}, where F_q(f):=∫_ℝ^n |f(x)|^q(x)dx is a convex functional modular. The Lebesgue space L^q(·)(ℝ^n) is a Banach function space with respect to the Luxemburg type norm f_L^q(·)(ℝ^n)=inf{η>0: F_q(f/η)=∫_ℝ^n( |f(x)|/η)^q(x)dx ≤ 1 }. * The space L_^q(·)(ℝ^n) is defined by L_^q(·)(ℝ^n)={f  : f∈ L^q(·)(E)    E⊂ℝ^n}. * The weighted Lebesgue space L_w^q(·)(ℝ^n) is defined by as the set of all measurable functions for which f_L^q(·)_w(ℝ^n)=w f_L^q(·)(ℝ^n)<∞. Next we define some classes of variable exponent functions. Given a function f∈ L_^1(ℝ^n), the Hardy-Littlewood maximal operator M is defined by Mf(x)= sup_Q∋ x1|Q| ∫_Q |f(y)| dy. Given a measurable function q(·) defined on ℝ^n. For E⊂ℝ^n, we write q_-(E):=_x∈ E q(x), q_+(E):= _x∈ E q(x), and write q_-(ℝ^n) = q_- and q_+(ℝ^n) = q_+ simply. (i) q'_-=_x∈ℝ^n q'(x)=q_+/q_+-1, q'_+= _x∈ℝ^n q'(x)=q_-/q_--1. (ii) Denote by 𝒫_0(ℝ^n) the set of all measurable functions q(·): ℝ^n→(0,∞) such that 0< q_-≤ q(x) ≤ q_+<∞, x∈ℝ^n. (iii) Denote by 𝒫_1(ℝ^n) the set of all measurable functions q(): ℝ^n→[1,∞) such that 1≤ q_-≤ q(x) ≤ q_+<∞, x∈ℝ^n. (iv) Denote by 𝒫(ℝ^n) the set of all measurable functions q(·): ℝ^n→(1,∞) such that 1< q_-≤ q(x) ≤ q_+<∞, x∈ℝ^n. (v) The set ℬ(ℝ^n) consists of all measurable functions q(·)∈𝒫(ℝ^n) satisfying that the Hardy-Littlewood maximal operator M is bounded on L^q(·)(ℝ^n). Let q(·) be a real-valued function on ℝ^n. (i) Denote by 𝒞^log_loc(ℝ^n) the set of all local log-Hölder continuous functions q() which satisfies |q(x)-q(y)| ≤-C/ln(|x-y|), |x-y|≤ 1/2, x,y ∈ℝ^n, where C denotes a universal positive constant that may differ from line to line, and C does not depend on x, y. (ii) The set 𝒞^log_∞(ℝ^n) consists of all log-Hölder continuous functions q(·) at infinity satisfies |q(x)-q_∞| ≤C_∞/ln(+|x|), x ∈ℝ^n, where q_∞=lim_|x|→∞q(x). (iii) Denote by 𝒞^log(ℝ^n):=𝒞^log_loc(ℝ^n)∩𝒞^log_∞(ℝ^n) the set of all global log-Hölder continuous functions q(·). * The 𝒞^log_∞(ℝ^n) condition is equivalent to the uniform continuity condition |q(x)-q(y)| ≤C/ln(+|x|), |y|≥|x|, x,y ∈ℝ^n. The 𝒞^log_∞(ℝ^n) condition was originally defined in this form in <cit.>. * In what follows, we denote 𝒞^log(ℝ^n) ∩𝒫(ℝ^n) by 𝒫^log(ℝ^n). The theory of function spaces with variable exponent were first studied by Orlicz <cit.>, and it has been intensely investigated in the past twenty years since some elementary properties were established by Kováčik and Rákosník in <cit.>, and because of its connection with the study of variational integrals and partial differential equations with non-standard growth conditions (see, for instance, <cit.>). 
In 2003, Diening and Růz̆ic̆ka <cit.> studied the Calderón-Zygmund operators on variable exponent Lebesgue spaces and gave some applications to problems related to fluid dynamics. In 2006, by applying the theory of weighted norm inequalities and extrapolation, Cruz-Uribe et al. <cit.> showed that many classical operators in harmonic analysis are bounded on the variable exponent Lebesgue space. For more information on function spaces with variable exponent, we refer to <cit.>. For m-linear ω_α-CZO, we have the following result. Let 0<α<mn and T_α be an m-linear ω_α-CZO with ω∈(1). Given 1/p(·)= 1/p_1(·)+ ⋯ + 1/p_m(·) with p(·), p_j(·) ∈𝒫^log(ℝ^n)  (j = 1,…,m). Suppose that 0< 1/q(·) = 1/p(·) - α/n <1. Then there exists a positive constant C such that T_α(f⃗)_L^q(·)(ℝ^n ) ≤ C ∏_j=1^mf_j_L^p_j(·)(ℝ^n). § NOTATION AND PRELIMINARIES §.§ Sharp maximal function and A_p weights The following concepts are needed. Let f be a locally integral function defined on ℝ^n. Denote by M the usual Hardy-Littlewood maximal operator, for a cube Q ⊂ℝ^n and δ>0, the maximal functions M_δ is defined by M_δ (f)(x) = [ M(|f|^δ) (x) ]^1/δ = (sup_Q∋ x1|Q|_Q |f(y)|^δd y )^1/δ. Let M^♯ be the standard sharp maximal function of Fefferman and Stein<cit.>, that is M^♯f (x) = sup_Q∋ xinf_c1|Q|∫_Q |f(y)-c| dy ≈sup_Q∋ x1|Q|∫_Q |f(y)-f_Q| dy, where, as usual, f_Q denotes the average of f over Q, and the supremum is taking over all the cubes Q containing the point x. The operator M_δ^♯ is defined by M_δ^♯f (x) = [ M^♯(|f|^δ)(x) ]^1/δ. Let w be a nonnegative locally integrable function defined in ℝ^n. * For 1<p< ∞, we say that w is in the Muckenhoupt class A_p (namely, w∈ A_p), if there exists a constant C>0 (depending on the A_p constant of w) such that for any cube Q, there has (1|Q|∫_Q w(x) dx ) (1|Q|∫_Q w(x)^1-p'dx )^p-1≤ C. * We say that w∈ A_1 if there exists a constant C>0 (depending on the A_1 constant of w) such that Mw(x) ≤ C w(x) almost everywhere x∈ℝ^n. * Define the A_∞ by A_∞ = ⋃_p≥ 1 A_p. See <cit.> or <cit.>(Chapter 7) for more information about the Muckenhoupt weight class A_p. The following relationships between M_δ^♯ and M_δ to be used is a version of the classical ones due to Fefferman and Stein <cit.> ( see also <cit.> or P. 1228 in <cit.>). Let 0<p,δ< ∞ and w be any A_∞-weight. * Then there exists a constant C>0 (depending on the A_∞ constant of w), such that the inequality ∫( M_δ (f)(x) )^p w(x) dx ≤ C ∫( M_δ^♯(f)(x) )^p w(x) dx holds for any function f for which the left hand side is finite. * Similarly, there exists another constant C > 0 (depending on the A_∞ constant of w), such that M_δ (f)_L^p,∞(w) ≤ C M_δ^♯(f)_L^p,∞(w) holds for any function f for which the left hand side is finite. §.§ Some auxiliary lemmas In this part we state some auxiliary propositions and lemmas which will be needed for proving our main theorems. And we only state partial results we need. Let p(·)∈𝒫(ℝ^n). * If p(·)∈𝒞^log(ℝ^n), then we have p(·)∈ℬ(ℝ^n). * (see Lemma 2.3 in <cit.>) The following conditions are equivalent: * p(·)∈ℬ(ℝ^n), * p'(·)∈ℬ(ℝ^n). * p(·)/p_0∈ℬ(ℝ^n) for some 1<p_0<p_-, * (p(·)/p_0)'∈ℬ(ℝ^n) for some 1<p_0<p_-. The first part in <ref> is independently due to Cruz-Uribe et al. <cit.> and to Nekvinda<cit.> respectively. The second of <ref> belongs to Diening<cit.> (see Theorem 8.1 or Theorem 1.2 in <cit.>). The following gives the generalized Hölder's inequality. * Let p(),q(),r()∈𝒫_0(ℝ^n) satisfy the condition 1r(x) = 1p(x) + 1q(x) x∈ℝ^n. 
* Then, for all f ∈ L^p()(ℝ^n) and g∈ L^q()(ℝ^n), one has fg_r() ≤ Cf_p()g_q(). * When r()=1, then p'() = q(), hence, for all f ∈ L^p()(ℝ^n) and g∈ L^p'()(ℝ^n), one has ∫_ℝ^n|fg| ≤ Cf_p()g_p'(). * The generalized Hölder's inequality in Orlicz space: * Let r_1,…,r_m≥ 1 with 1/r=1/r_1+⋯+1/r_m and Q be a cube in ℝ^n. Then 1/|Q|∫_Q|f_1(x)⋯ f_m(x)g(x)| dx ≤ Cf_1_exp L^r_1,Q⋯f_m_exp L^r_m,Qg_L(log L)^1/r,Q. * Let t≥ 1, then 1/|Q|∫_Q|f(x)g(x)| dx ≤ Cf_exp L^t,Qg_L(log L)^1/t,Q. * Let q(),q_1(·),…, q_m(·)∈𝒫(ℝ^n) satisfy the condition 1q(x) = 1q_1(x)+⋯ + 1q_m(x) x∈ℝ^n. Then, for any f_j∈ L^q_j()(ℝ^n) , j=1,…,m, one has f_1⋯ f_m_q() ≤ Cf_1_q_1(·)⋯f_m_q_m(·). In <ref>, the first part is known as the generalized Hölder's inequality on variable exponent Lebesgue spaces, and the proof can be found in <cit.>(see also P.27-30 in <cit.> or P.81-82, Lemma 3.2.20 in <cit.>); the second part is generalized Hölder's inequality in Orlicz space (for details and the more general cases see <cit.>); and the third part see Lemma 9.2 in <cit.>. The following inequalities are also necessary (see (2.16) in <cit.> or Lemma 2.3 in <cit.> or Lemma 4.6 in <cit.> or page 485 in <cit.>). Let 0<p<q<∞, cube Q ⊂ℝ^n. Using L^q,∞(Q) denotes the weak space with norm f_L^q,∞(Q) = sup_t>0 t|{ x∈ Q: |f(x)|>t}|^1/q. * Then there is a positive constant C=C_p,q such that for any measurable funcction f there has |Q|^-1/pf_L^p(Q) ≤ C |Q|^-1/qf_L^q,∞(Q). * If 0<α <n and 1/q = 1/p - α/n. Then there is a positive constant C=C_p,q such that for any measurable function f there has f_L^p(Q) ≤ C |Q|^α/nf_L^q,∞(Q). §.§ Multilinear fractional maximal functions and multiple weights For all locally integrable functions f⃗=(f_1,f_2,…,f_m) and x∈ℝ^n, let 0 ≤α < mn. * The multilinear fractional maximal functions ℳ_α and ℳ_α,r are defined by ℳ_α(f⃗)(x) = sup_Q∋ x |Q|^α/n∏_j=1^m1/|Q|_Q |f_j(y_j)| d y_j = sup_Q∋ x∏_j=1^m1/|Q|^1-α/(nm)_Q |f_j(y_j)| d y_j, and ℳ_α, r(f⃗)(x) = sup_Q∋ x |Q|^α/n∏_j=1^m(1/|Q|_Q |f_j(y_j)|^r d y_j)^1/r, for  r>1, where the supremum is taken over all the cubes Q containing x. * the multilinear fractional maximal functions related to Young function Φ(t)=t(1+log^+t) are defined by ℳ_α, L(log L)^i(f⃗)(x) = sup_Q∋ x |Q|^α/nf_i_L(log L) ,Q∏_j=1 j≠ i^m1/|Q|_Q |f_j(y_j)| d y_j , and ℳ_α, L(log L) (f⃗)(x) = sup_Q∋ x |Q|^α/n∏_j=1^mf_j_L(log L) ,Q , where the supremum is taken over all the cubes Q containing x, and ·_L(log L),Q is the Luxemburg type average defined via g_L(log L),Q = inf{λ>0: 1/|Q|∫_Q|g(x)|/λlog(e+|f|/λ) dx ≤ 1 }. * If we take f≡ 1 in (<ref>) with t=1, it follows that for every α∈ [0,mn) the inequality ℳ_α(f⃗)(x) ≤ C ℳ_L(log L)^i (f⃗)(x) ≤ C_1ℳ_α,L(log L) (f⃗)(x). * In <cit.>, the authors prove that M_α(M^k) ≈ M_α,L(log L)^k with k ∈ℕ, where M^k is the iteration of the Hardy-Littlewood maximal operator k times. In particularly, for α=0 and k=1, one have M_L(log L)≈ M^2=M∘ M  (see also <cit.> or <cit.>). The following gives the characterization of the multiple-weight class A_P⃗ and A_P⃗,q, Separately. Let w⃗ = (w_1,… ,w_m), P⃗ = (p_1,… ,p_m) and 1/p= 1/p_1+ ⋯ + 1/p_m with 1≤ p_j < ∞  (j = 1,…,m). * w⃗∈ A_P⃗ if and only if { w_j^1-p'_j∈ A_mp'_j (j = 1,…,m) u_w⃗∈ A_mp ., where w_j^1-p'_j∈ A_mp'_j in the case p_j = 1 is understood as w_j^1/m∈ A_1. * Let 0<α≤ mn and 1/q= 1/p-α/n. Suppose w⃗∈ A_P⃗,q, then { w_j^-p'_j∈ A_mp'_j (j = 1,…,m) v_w⃗^q∈ A_mq ., The first part in <ref> is due to Lerner et al. in <cit.> (see Theorem 3.6). 
The second of <ref> was introduced by Moen <cit.> independently (see Theorem 3.4 in <cit.> or Theorem 2.1 in <cit.>). Let w⃗ = (w_1,… ,w_m), P⃗ = (p_1,… ,p_m) and 1/p= 1/p_1+ ⋯ + 1/p_m with 1≤ p_j < ∞  (j = 1,…,m). * When m = 1, A_P⃗,q will be degenerated to the classical A_p,q weights, and A_P⃗ reduces to the classical A_p weights. * (see P.1232 in <cit.>) If w_j∈ A_p_j (j=1,…,m), then by Hölder's inequality we have (1/|Q|∫_Q u_w⃗(x) dx )^1/p∏_j=1^m(1/|Q|∫_Q( w_j (x) )^1-p'_jdx )^ 1/p'_j = (1/|Q|∫_Q∏_j=1^m( w_j(x) )^p/p_jdx )^1/p∏_j=1^m(1/|Q|∫_Q( w_j (x) )^1-p'_jdx )^ 1/p'_j ≤∏_j=1^m(1/|Q|∫_Q w_j(x) dx )^1/p_j(1/|Q|∫_Q( w_j (x) )^1-p'_jdx )^ 1/p'_j< ∞, so we have ∏_j=1^m A_p_j ⊊ A_P⃗. * (see Remark 3.3 and 7.5 in <cit.>) If p_j≤ q_j, w_j∈ A_p_j,q_j (j=1,…,m), and 1/q= 1/q_1+ ⋯ + 1/q_m, then by Hölder's inequality we have (1/|Q|∫_Q(v_w⃗(x))^q dx )^1/q∏_j=1^m(1/|Q|∫_Q( w_j (x) )^-p'_j dx )^ 1/p'_j = (1/|Q|∫_Q(∏_j=1^m w_j(x) )^qdx )^1/q∏_j=1^m(1/|Q|∫_Q( w_j (x) )^-p'_jdx )^ 1/p'_j ≤∏_j=1^m(1/|Q|∫_Q( w_j(x) )^q_jdx )^1/q_j(1/|Q|∫_Q( w_j (x) )^-p'_jdx )^ 1/p'_j < ∞, and therefore, ⋃_q_1,⋯,q_m∏_j=1^m A_p_j, q_j ⊊ A_P⃗,q, where the union is over all q_j≥ p_j that satisfy 1/q= 1/q_1+ ⋯ + 1/q_m. § PROOF OF <REF> Let B = T_α_L^p_1×⋯× L^p_m→ L^q,∞. Fix λ> 0 and consider functions f_j∈ L^1(ℝ^n) for 1≤ j ≤ m. Without loss of generality, we may assume that f_1_L^1(ℝ^n) = ⋯ = f_m_L^1(ℝ^n)=1. it need to show that there is a constant C=C _m, n,|ω|_(1) >0 such that |{x∈ℝ^n: |T_α(f⃗)(x)|> λ}| ≤ C (A+ B/λ)^n/mn-α, Set γ be a positive real number to be determined later. Applying the Calderón-Zygmund decomposition to each function f_j at height (λγ)^n/mn-α to obtain “good” function g_j and “bad” function b_j with a sequence of pairwise disjoint cubes {Q_j,k_j}_k_j=1^∞ such that f_j = g_j + b_j = g_j + ∑_k_j b_j,k_j for all j=1,…,m, where * (b_j,k_j) ⊂ Q_j,k_j, * _ℝ^n b_j,k_j(x) dx =0, * _ℝ^n |b_j,k_j(x)| dx ≤ C (λγ)^n/mn-α |Q_j,k_j|, * |⋃_k_j Q_j,k_j| = ∑_k_j |Q_j,k_j | ≤ C (λγ)^-n/mn-α, * b_j_L^1(ℝ^n) ≤ C, * g_j_L^s(ℝ^n) ≤ C (λγ)^n/(mn-α)s' for any 1≤ s ≤∞. Let c_j,k_j be the center of cube Q_j,k_j and l(Q_j,k_j) be its side length. Set Q_j,k_j^* = 8√(n) Q_j,k_j, Ω_j^* =⋃_k_j Q_j,k_j^* (j=1,…,m), and Ω^*= ⋃_j=1^mΩ_j^*. Now let E_1 = {x∈ℝ^n: |T_α(g_1,g_2,…, g_m)(x)|> λ/2^m}, E_2 = {x∈ℝ^n∖Ω^*: |T_α(b_1,g_2,…, g_m)(x)|> λ/2^m}, E_3 = {x∈ℝ^n∖Ω^*: |T_α(g_1,b_2,…, g_m)(x)|> λ/2^m}, ⋯ ⋯ E_2^m = {x∈ℝ^n∖Ω^*: |T_α(b_1,b_2,…, b_m)(x)|> λ/2^m}, where each E_s = {x∈ℝ^n∖Ω^*: |T_α(h_1,h_2,…, h_m)(x)|> λ/2^m} with h_j∈{g_j,b_j} and all the sets E_s are distinct. It follows from property enumerate:CZ-decom-4 that |Ω^* | ≤∑_j=1^m |Ω_j^* | ≤ C ∑_j=1^m∑_k_j |Q_j,k_j | ≤ C (λγ)^-n/mn-α. Let us first estimate E_1 which is the easiest.Note that 1/q = 1/p - α/n, by the Chebyshev's inequality, the L^p_1(ℝ^n) ×⋯× L^p_m(ℝ^n)→ L^q,∞(ℝ^n) boundedness of T_α and property enumerate:CZ-decom-6 to obtain |E_1 | = |{x∈ℝ^n: |T_α(g_1,g_2,…, g_m)(x)|> λ/2^m} | ≤(2^mB/λ)^q∏_j=1^mg_j_L^p_j(ℝ^n) ^q ≤(2^mB/λ)^q∏_j=1^m (λγ)^nq/(mn-α)p'_j ≤ C (B/λ)^q (λγ)^(m-1/p)nq/(mn-α) =C B^qλ^-n/mn-αγ^q-n/mn-α. Since |{x∈ℝ^n: |T_α(f⃗)(x)|> λ}| ≤∑_s=1^2^m |E_s | + C |Ω^* | ≤∑_s=2^2^m |E_s | + C B^qλ^-n/mn-αγ^q-n/mn-α +C (λγ)^-n/mn-α. Thus, it will need to give the appropriate estimates for each |E_s | with 2≤ s≤ 2^m to guarantee the validity of equ:weak-norm-estimate. For the sake of clarity, we split the proof into two cases. Case 1: when m=2, Ω_1^* =⋃_k_1 Q_1,k_1^*, Ω_2^* =⋃_k_2 Q_2,k_2^*, Ω^*= Ω_1^*⋃Ω_2^*, and dy⃗ =dy_1dy_2. 
There leaves only the following three terms to be considered E_2 = {x∈ℝ^n∖Ω^*: |T_α(b_1,g_2)(x)|> λ/4}, E_3 = {x∈ℝ^n∖Ω^*: |T_α(g_1,b_2)(x)|> λ/4}, E_4 = {x∈ℝ^n∖Ω^*: |T_α(b_1,b_2)(x)|> λ/4}. The following, for s=2,3,4, will show that |E_s| ≤ C A (γ/λ)^n/2n-αγ^-α/2n-α. Now, for the term |E_2 |, by Chebyshev's inequality and property enumerate:CZ-decom-2, we have |E_2| = |{x∈ℝ^n∖Ω^*: |T_α(b_1,g_2)(x)|> λ/4}| ≤_{x∈ℝ^n∖Ω^*: |T_α(b_1,g_2)(x)|> λ/4}|T_α(b_1,g_2)(x)|λ/4dx ≤4/λ∑_k_1_ℝ^n∖Ω^* |T_α(b_1,k_1,g_2)(x)| dx ≤4/λ∑_k_1_ℝ^n∖Ω^*|_ (ℝ^n)^2(K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) ) b_1,k_1(y_1) g_2(y_2)dy⃗| dx ≤4 g_2_L^∞(ℝ^n) /λ∑_k_1_Q_1,k_1 | b_1,k_1(y_1)| _ℝ^n_ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx dy⃗. For any fixed k_1, let 𝒬_1,k_1^i= (2^i+2√(n) Q_1,k_1) ∖ (2^i+1√(n) Q_1,k_1) with i=1,2,…. Clearly we have ℝ^n∖Ω^*⊂ℝ^n∖ Q_1,k_1^*⊂⋃_i=1^∞𝒬_1,k_1^i. For any y_1∈ Q_1,k_1 and y_2∈ℝ^n, since ω is nondecreasing, then it follows from equ:w-CZK-frac-regularity-2 that _ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx ≤ A _ℝ^n∖Ω^*ω( |y_1-c_1,k_1|/ |x-y_1|+|x-y_2|) (|x-y_1| + |x-y_2| )^2n-αdx ≤ A ∑_i=1^∞_𝒬_1,k_1^iω( |y_1-c_1,k_1|/ |x-y_1|+|x-y_2|) (|x-y_1| + |x-y_2| )^2n-αdx ≤ A ∑_i=1^∞ω(2^-i) _𝒬_1,k_1^i 1 (|x-y_1| + |x-y_2| )^2n-αdx, where in the last step we use the facts that, for x ∈𝒬_1,k_1^i and any y_1∈ Q_1,k_1, there has |y_1-c_1,k_1| ≤1/2√(n) l( Q_1,k_1) and |x-y_1| ≥ 2^i-1√(n) l( Q_1,k_1) . Substituting (<ref>) into (<ref>), note that the fact (see (<ref>)) _ℝ^n1 (|x-y_1| + |x-y_2| )^2n-αdy_2 ≤C |x-y_1| ^n-α, and applying properties enumerate:CZ-decom-3,enumerate:CZ-decom-4,enumerate:CZ-decom-6, we have |E_2| ≤4 g_2_L^∞(ℝ^n) /λ∑_k_1_Q_1,k_1 | b_1,k_1(y_1)| _ℝ^n_ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx dy⃗ ≤C A/λ (λγ)^n/2n-α∑_k_1∑_i=1^∞ω(2^-i) _Q_1,k_1 | b_1,k_1(y_1)| _𝒬_1,k_1^i1|x-y_1|^n-αdx dy_1 ≤C A/λ (λγ)^n/2n-α∑_k_1∑_i=1^∞ω(2^-i) _Q_1,k_1 | b_1,k_1(y_1)| _2^i+2√(n) Q_1,k_11|2^i-1√(n) Q_1,k_1|^1-α/ndx dy_1 ≤C A/λ (λγ)^n/2n-α∑_k_1∑_i=1^∞ω(2^-i)2^-iα |Q_1,k_1|^α/n_Q_1,k_1 | b_1,k_1(y_1)| dy_1 ≤ C A (γ/λ)^n/2n-αγ^-α/2n-α. Similarly, we can obtain that |E_3| ≤ C A (γ/λ)^n/2n-αγ^-α/2n-α. The following estimate |E_4|. By Chebyshev's inequality, properties enumerate:CZ-decom-1,enumerate:CZ-decom-2, we have |E_4| = |{x∈ℝ^n∖Ω^*: |T_α(b_1,b_2)(x)|> λ/4}| ≤4/λ∑_k_1,k_2_ℝ^n∖Ω^* |T_α(b_1,k_1,b_2,k_2)(x)| dx ≤4/λ∑_k_1,k_2_ℝ^n∖Ω^*|_ (ℝ^n)^2(K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) ) b_1,k_1(y_1) b_2,k_2(y_2)dy⃗| dx ≤4/λ∑_k_1,k_2_ℝ^n∖Ω^*_Q_2,k_2_Q_1,k_1|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | |b_1,k_1(y_1) b_2,k_2(y_2)| dy⃗dx ≤4/λ∑_k_1,k_2_Q_2,k_2_Q_1,k_1( _ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx ) |b_1,k_1(y_1) b_2,k_2(y_2)| dy⃗. For any fixed k_2, let 𝒬_1,k_1^i be as above, denote by 𝒬_2,k_2^h= (2^h+2√(n) Q_2,k_2) ∖ (2^h+1√(n) Q_2,k_2) with h=1,2,…. Then ℝ^n∖Ω^* ⊂ℝ^n∖(Q_1,k_1^*⋃ Q_2,k_2^*) ⊂⋃_h=1^∞⋃_i=1^∞(𝒬_1,k_1^i⋂𝒬_2,k_2^h). For any ( y_1 , y_2 ) ∈ Q_1,k_1× Q_2,k_2, similar to (<ref>), we have _ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx ≤ A _ℝ^n∖Ω^*ω( |y_1-c_1,k_1|/ |x-y_1|+|x-y_2|) (|x-y_1| + |x-y_2| )^2n-αdx ≤ A ∑_h=1^∞∑_i=1^∞_𝒬_1,k_1^i⋂𝒬_2,k_2^hω( |y_1-c_1,k_1|/ |x-y_1|+|x-y_2|) (|x-y_1| + |x-y_2| )^2n-αdx ≤ A ∑_h=1^∞∑_i=1^∞ω(2^-i) _𝒬_1,k_1^i⋂𝒬_2,k_2^h 1 (|x-y_1| + |x-y_2| )^2n-αdx. 
Note that, for any x ∈𝒬_1,k_1^i⋂𝒬_2,k_2^h and ( y_1 , y_2 ) ∈ Q_1,k_1× Q_2,k_2, there has |x-y_1| ≈ 2^i+1√(n) l( Q_1,k_1) and |x-y_2| ≈ 2^h+1√(n) l( Q_2,k_2), then, for any ( y_1 , y_2 ) ∈ Q_1,k_1× Q_2,k_2, the following holds _𝒬_1,k_1^i⋂𝒬_2,k_2^h 1 (|x-y_1| + |x-y_2| )^2n-αdx ≈|𝒬_1,k_1^i⋂𝒬_2,k_2^h| (2^i+1√(n) l( Q_1,k_1)+ 2^h+1√(n) l( Q_2,k_2) )^2n-α := ℋ(i,,k_1;h,,k_2). From estimates equ:weak-CZD-E4-kernel,equ:weak-CZD-E4-kernel-1 to obtain _ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx ≤ C A ∑_h=1^∞∑_i=1^∞ω(2^-i) ℋ(i,,k_1;h,,k_2). Then, by (<ref>) and property enumerate:CZ-decom-3 one has |E_4| ≤4/λ∑_k_1,k_2_Q_2,k_2_Q_1,k_1( _ℝ^n∖Ω^*|K_α(x,y_1,y_2)-K_α(x,c_1,k_1,y_2) | dx ) |b_1,k_1(y_1) b_2,k_2(y_2)| dy⃗ ≤CA/λ∑_k_1,k_2_Q_2,k_2_Q_1,k_1( ∑_h=1^∞∑_i=1^∞ω(2^-i) ℋ(i,,k_1;h,,k_2) ) |b_1,k_1(y_1) b_2,k_2(y_2)| dy⃗ ≤CA/λ (λγ)^n/2n-α (λγ)^n/2n-α∑_i=1^∞ω(2^-i) ∑_k_1,k_2 |Q_1,k_1| |Q_2,k_2| ( ∑_h=1^∞ℋ(i,,k_1;h,,k_2) ) ≤CA/λ (λγ)^2n/2n-α∑_i=1^∞ω(2^-i) ∑_k_1,k_2_Q_1,k_1_Q_2,k_2( ∑_h=1^∞ℋ(i,,k_1;h,,k_2) ) dy⃗. Applying equ:weak-CZD-E4-kernel-1 again and noting that for any fixed k_2, the sequence {𝒬_2,k_2^h}_h=1^∞ is pairwise disjoint, it follows from property enumerate:CZ-decom-4 and estimate (<ref>) that |E_4| ≤CA/λ (λγ)^2n/2n-α∑_i=1^∞ω(2^-i) ∑_k_1,k_2_Q_1,k_1_Q_2,k_2( ∑_h=1^∞ℋ(i,,k_1;h,,k_2) ) dy⃗ ≤CA/λ (λγ)^2n/2n-α∑_i=1^∞ω(2^-i) ∑_k_1,k_2_Q_1,k_1_Q_2,k_2( _𝒬_1,k_1^i 1 (|x-y_1| + |x-y_2| )^2n-αdx ) dy⃗ ≤CA/λ (λγ)^2n/2n-α∑_i=1^∞ω(2^-i) ∑_k_1_Q_1,k_1_𝒬_1,k_1^i 1 |x-y_1| ^n-αdx dy_1 ≤CA/λ (λγ)^2n/2n-α∑_i=1^∞ω(2^-i) ∑_k_1_Q_1,k_1_2^i+2√(n) Q_1,k_11|2^i√(n) Q_1,k_1|^1-α/ndx dy_1 ≤CA/λ (λγ)^2n/2n-α∑_i=1^∞ω(2^-i) 2^-iα∑_k_1 |Q_1,k_1|^1+α/n ≤ C A (γ/λ)^n/2n-αγ^-α/2n-α. It is easy to see that the constants C involved depend only on m, n and |ω| _ ( 1 ). So, (<ref>) is proven. Set γ = ( A + B ) ^-1, it follows from equ:weak-CZD-Es,equ:weak-CZD-Es-2 that |{x∈ℝ^n: |T_α(f_1,f_2)(x)|> λ}|≤∑_s=2^2^2 |E_s | + C B^qλ^-n/2n-αγ^q-n/2n-α +C (λγ)^-n/2n-α ≤ C A (γ/λ)^n/2n-αγ^-α/2n-α + C B^qλ^-n/2n-αγ^q-n/2n-α +C (λγ)^-n/2n-α ≤ C λ^-n/2n-α( A (A+B)^α-n/2n-α + B^q (A+B)^n/2n-α-q + (A+B)^n/2n-α) ≤ C (A+B/λ)^n/2n-α( A (A+B)^α-2n/2n-α + B^q (A+B)^-q + 1 ) ≤ C (A+B/λ)^n/2n-α( A (A+B)^-1 + B^q (A+B)^-q + 1 ) ≤ C (A+B/λ)^n/2n-α, which is the desired result. The proof of the case m = 2 is completed. Case 2: when m≥ 3, we need to estimate | E_s | for 2 ≤ s ≤ 2^m. Suppose that for some 1 ≤ℓ≤ m, we have ℓ bad functions and m-ℓ good functions appearing in T_α(h_1,h_2,…, h_m) with h_j∈{g_j,b_j}. For matters of simplicity, without loss of generality, we may assume that the bad functions appear at the entries 1,…, ℓ, and denote the corresponding term by | E _s^(ℓ)| to distinguish it from the other terms. The following will consider |E_s^(ℓ)| = |{x∈ℝ^n∖Ω^*: |T_α(b_1,…,b_ℓ, g_ℓ+1,…, g_m)(x)|> λ/2^m}|, and the other terms can be estimated similarly. We will show |E_s^(ℓ)| ≤ C A (γ/λ)^n/mn-αγ^(m-2)n-α/mn-α. Recall that (b_1,k_1) ⊂ Q_1,k_1 and c_1,k_1 be the center of cube Q_1,k_1. Denote by ∏_r=1^ℓ Q_r,k_r =Q_1,k_1×⋯× Q_ℓ,k_ℓ and y⃗_*=(c_1,k_1, y_2, …, y_m) for simplicity. 
Then it follows from properties enumerate:CZ-decom-2,enumerate:CZ-decom-6 that, for any x∈ℝ^n∖Ω^*, |T_α(b_1,…,b_ℓ,g_ℓ+1,…, g_m)(x)| ≤∑_k_1,…,k_ℓ|_ (ℝ^n)^m K_α(x,y⃗) ∏_r=1^ℓ b_r,k_r(y_r) ∏_r=ℓ+1^m g_r(y_r)dy⃗| ≤∑_k_1,…,k_ℓ_ (ℝ^n)^m| K_α(x,y⃗) - K_α(x,y⃗_*)| ∏_r=1^ℓ |b_r,k_r(y_r)| ∏_r=ℓ+1^m |g_r(y_r)|dy⃗ ≤ C ∏_r=ℓ+1^mg_r_L^∞(ℝ^n) ∑_k_1,…,k_ℓ_ (ℝ^n)^m| K_α(x,y⃗) - K_α(x,y⃗_*)| ∏_r=1^ℓ |b_r,k_r(y_r)| dy⃗ ≤ C (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ_ (ℝ^n)^m| K_α(x,y⃗) - K_α(x,y⃗_*)| ∏_r=1^ℓ |b_r,k_r(y_r)| dy⃗. This together with Chebychev's inequality gives |E_s^(ℓ)| = |{x∈ℝ^n∖Ω^*: |T_α(b_1,…,b_ℓ, g_ℓ+1,…, g_m)(x)|> λ/2^m}| ≤2^m/λ_ℝ^n∖Ω^* |T_α(b_1,…,b_ℓ, g_ℓ+1,…, g_m)(x)| dx ≤C/λ (λγ)^n(m-ℓ)/mn-α_ℝ^n∖Ω^*(∑_k_1,…,k_ℓ_ (ℝ^n)^m| K_α(x,y⃗) - K_α(x,y⃗_*)| ∏_r=1^ℓ |b_r,k_r(y_r)| dy⃗) dx ≤C/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ_ (ℝ^n)^m(_ℝ^n∖Ω^*| K_α(x,y⃗) - K_α(x,y⃗_*)| dx ) ∏_r=1^ℓ |b_r,k_r(y_r)| dy⃗ ≤C/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ_ (ℝ^n)^m-ℓ_∏_r=1^ℓ Q_r,k_r∏_r=1^ℓ |b_r,k_r(y_r)| (_ℝ^n∖Ω^*| K_α(x,y⃗) - K_α(x,y⃗_*)| dx ) dy⃗. Let 𝒬_r,k_r^i_r= (2^i_r+2√(n) Q_r,k_r) ∖ (2^i_r+1√(n) Q_r,k_r) for r=1,2,…,ℓ and i_r=1,2,…. Then ℝ^n∖Ω^* ⊂ℝ^n∖( ⋃_r=1^ℓ Q_r,k_r^*) ⊂⋃_i_1=1^∞⋯⋃_i_ℓ=1^∞(𝒬_1,k_1^i_1⋂⋯⋂𝒬_ℓ,k_ℓ^i_ℓ) = ⋃_i_1,…,i_ℓ=1^∞(⋂_r=1^ℓ𝒬_r,k_r^i_r). For any ( y_1,… , y_ℓ ) ∈∏_r=1^ℓ Q_r,k_r and any ( y_ℓ+1,… , y_m ) ∈ℝ^n)^m-ℓ, applying (<ref>) and the fact that ω is nondecreasing, similar to equ:weak-CZD-E2-kernel,equ:weak-CZD-E4-kernel, we have _ℝ^n∖Ω^*| K_α(x,y⃗) - K_α(x,y⃗_*) | dx ≤ A _ℝ^n∖Ω^*ω( |y_1-c_1,k_1|/∑_j=1^m |x-y_j|) (∑_j=1^m |x-y_j| )^mn-αdx ≤ A ∑_i_1,…,i_ℓ=1^∞_⋂_r=1^ℓ𝒬_r,k_r^i_rω( |y_1-c_1,k_1|/ |x-y_1|) (∑_j=1^m |x-y_j| )^mn-αdx ≤ A ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _⋂_r=1^ℓ𝒬_r,k_r^i_r 1 (∑_j=1^m |x-y_j| )^mn-αdx, Then, |E_s^(ℓ)| ≤CA/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _ (ℝ^n)^m-ℓ_∏_r=1^ℓ Q_r,k_r∏_r=1^ℓ |b_r,k_r(y_r)| ×(_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 (∑_j=1^m |x-y_j| )^mn-αdx) dy⃗ ≤CA/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r∏_r=1^ℓ |b_r,k_r(y_r)| ×(_⋂_r=1^ℓ𝒬_r,k_r^i_r( _ (ℝ^n)^m-ℓ 1 (∑_j=1^m |x-y_j| )^mn-αdy_ℓ+1⋯dy_m) dx) d y_1⋯d y_ℓ ≤CA/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r∏_r=1^ℓ |b_r,k_r(y_r)| ×(_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 (∑_j=1^ℓ |x-y_j| )^nℓ-αdx) d y_1⋯d y_ℓ. On the other hand, similar to (<ref>), for any ( y_1,… , y_ℓ ) ∈∏_r=1^ℓ Q_r,k_r, there has _⋂_r=1^ℓ𝒬_r,k_r^i_r 1 (∑_r=1^ℓ |x-y_r| )^nℓ-αdx ≈|⋂_r=1^ℓ𝒬_r,k_r^i_r| (∑_r=1^ℓ 2^i_r+1√(n) l( Q_r,k_r) )^nℓ-α. Then by (<ref>) and the property enumerate:CZ-decom-3, we have |E_s^(ℓ)| ≤CA/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r∏_r=1^ℓ |b_r,k_r(y_r)| ×(_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 (∑_j=1^m |x-y_r| )^nℓ-αdx) d y_1⋯d y_ℓ ≤CA/λ (λγ)^n(m-ℓ)/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r∏_r=1^ℓ |b_r,k_r(y_r)| ×|⋂_r=1^ℓ𝒬_r,k_r^i_r| (∑_r=1^ℓ 2^i_r+1√(n) l( Q_r,k_r) )^nℓ-αd y_1⋯d y_ℓ ≤CA/λ (λγ)^nm/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) ∏_r=1^ℓ |Q_r,k_r| |⋂_r=1^ℓ𝒬_r,k_r^i_r| (∑_r=1^ℓ 2^i_r+1√(n) l( Q_r,k_r) )^nℓ-α ≤CA/λ (λγ)^nm/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r|⋂_r=1^ℓ𝒬_r,k_r^i_r| (∑_r=1^ℓ 2^i_r+1√(n) l( Q_r,k_r) )^nℓ-αd y_1⋯d y_ℓ. 
Applying (<ref>) again, we can see that |E_s^(ℓ)| is dominated by A/λ (λγ)^nm/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r|⋂_r=1^ℓ𝒬_r,k_r^i_r| (∑_r=1^ℓ 2^i_r+1√(n) l( Q_r,k_r) )^nℓ-αd y_1⋯d y_ℓ =A/λ (λγ)^nm/mn-α∑_k_1,…,k_ℓ∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) _∏_r=1^ℓ Q_r,k_r(_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 (∑_r=1^ℓ |x-y_r| )^nℓ-αdx ) d y_1⋯d y_ℓ =A/λ (λγ)^nm/mn-α∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) ∑_k_1_ Q_1,k_1(_⋂_r=1^ℓ𝒬_r,k_r^i_r( ∑_k_2,…,k_ℓ_∏_r=2^ℓ Q_r,k_rd y_2⋯d y_ℓ (∑_r=1^ℓ |x-y_r| )^nℓ-α) dx ) d y_1. Since for any fixed r, the family { Q_r,k_r}_k_r=1^∞ is a sequence of pairwise disjoint cubes, then for any x ∈⋂_r=1^ℓ𝒬_r,k_r^i_r and y_1∈ Q_1,k_1, using estimate (<ref>), there has ∑_k_2,…,k_ℓ_∏_r=2^ℓ Q_r,k_r 1 (∑_r=1^ℓ |x-y_r| )^nℓ-αd y_2⋯d y_ℓ ≤∑_k_2,…,k_ℓ-1_∏_r=2^ℓ-1 Q_r,k_r( _ℝ^n 1 (∑_r=1^ℓ |x-y_r| )^nℓ-αd y_ℓ) d y_2⋯d y_ℓ-1 ≤ C∑_k_2,…,k_ℓ-1_∏_r=2^ℓ-1 Q_r,k_r 1 (∑_r=1^ℓ-1 |x-y_r| )^n(ℓ-1)-αd y_2⋯d y_ℓ-1 ≤⋯≤ C∑_k_2_ Q_2,k_2 1 ( |x-y_1|+ |x-y_2| )^2n-αd y_2 ≤ C _ℝ^n 1 ( |x-y_1|+ |x-y_2| )^2n-αd y_2 ≤ C |x-y_1| ^n-α. Therefore, |E_s^(ℓ)| ≤CA/λ (λγ)^nm/mn-α∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) ×∑_k_1_ Q_1,k_1(_⋂_r=1^ℓ𝒬_r,k_r^i_r( ∑_k_2,…,k_ℓ_∏_r=2^ℓ Q_r,k_rd y_2⋯d y_ℓ (∑_r=1^ℓ |x-y_r| )^nℓ-α) dx ) d y_1 ≤CA/λ (λγ)^nm/mn-α∑_i_1,…,i_ℓ=1^∞ω(2^-i_1) ∑_k_1_ Q_1,k_1(_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 |x-y_1| ^n-αdx ) d y_1 ≤CA/λ (λγ)^nm/mn-α∑_i_1=1^∞ω(2^-i_1) ∑_k_1_ Q_1,k_1(∑_i_2,…,i_ℓ=1^∞_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 |x-y_1| ^n-αdx ) d y_1. On the other hand, noting that for any fixed r the sequence {𝒬_r,k_r^i_r}_i_r=1^∞ is also pairwise disjoint, then for any y_1∈ Q_1,k_1, there has ∑_i_2,…,i_ℓ=1^∞_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 |x-y_1| ^n-αdx = ∑_i_2,…,i_ℓ-1=1^∞( _(⋂_r=1^ℓ-1𝒬_r,k_r^i_r) ⋂(⋃_i_ℓ=1^∞𝒬_ℓ,k_ℓ^i_ℓ) 1 |x-y_1| ^n-αdx ) ≤∑_i_2,…,i_ℓ-1=1^∞( _⋂_r=1^ℓ-1𝒬_r,k_r^i_r 1 |x-y_1| ^n-αdx ) ≤⋯≤∑_i_2=1^∞( _𝒬_1,k_1^i_1⋂𝒬_2,k_2^i_2 1 |x-y_1| ^n-αdx ) ≤_𝒬_1,k_1^i_1 1 |x-y_1| ^n-αdx. Substituting the above estimate into (<ref>) and applying property enumerate:CZ-decom-4, we have |E_s^(ℓ)| ≤CA/λ (λγ)^nm/mn-α∑_i_1=1^∞ω(2^-i_1) ∑_k_1_ Q_1,k_1(∑_i_2,…,i_ℓ=1^∞_⋂_r=1^ℓ𝒬_r,k_r^i_r 1 |x-y_1| ^n-αdx ) d y_1 ≤CA/λ (λγ)^nm/mn-α∑_i_1=1^∞ω(2^-i_1) ∑_k_1_ Q_1,k_1(_ 2^i_1+2√(n) Q_1,k_1 1 | 2^i_1√(n) Q_1,k_1 | ^1-α/ndx ) d y_1 ≤CA/λ (λγ)^nm/mn-α∑_i=1^∞ω(2^-i_1) 2^-i_1α∑_k_1 |Q_1,k_1|^1+α/n ≤CA/λ (λγ)^nm/mn-α (λγ)^-n(1+α/n)/mn-α∑_i=1^∞ω(2^-i_1) 2^-i_1α ≤ C A (γ/λ)^n/mn-αγ^(m-2)n-α/mn-α. This shows that (<ref>) holds. Now, we have proved that each |E_s | satisfies |E_s | ≤ C A (γ/λ)^n/mn-αγ^(m-2)n-α/mn-α. So, by equ:weak-CZD-Es,equ:weak-CZD-Es-m, set γ = ( A + B ) ^-1, we have |{x∈ℝ^n: |T_α(f⃗)(x)|> λ}| ≤∑_s=2^2^m |E_s | + C B^qλ^-n/mn-αγ^q-n/mn-α +C (λγ)^-n/mn-α ≤ C A (γ/λ)^n/mn-αγ^(m-2)n-α/mn-α + C B^qλ^-n/mn-αγ^q-n/mn-α +C (λγ)^-n/mn-α ≤ C(1/λ)^n/mn-α( A γ^(m-1)n-α/mn-α + B^qγ^q-n/mn-α + γ^-n/mn-α) = C(A+B/λ)^n/mn-α. The proof of <ref> is finished. § PROOF OF <REF> In order to prove <ref>, we first establish the following pointwise estimates on the δ-sharp maximal function acting on T__α(f⃗) controlled in terms of the multilinear maximal function ℳ_α Let m≥ 2, 0<α<mn, T_α be an m-linear ω_α-CZO with ω∈(1). Assume that 0<δ< 1 and 0<δ< n/(mn-α). Then for all f⃗ in any product space L^p_1(ℝ^n) ×⋯× L^p_m(ℝ^n) with 1≤ p_j< ∞ (j=1,2,…,m), there exists a positive constant C such that M_δ^♯(T__α (f⃗)) (x) ≤ C ℳ_α(f⃗)(x). For a fixed point x and a cube Q ∋ x. 
Due to the fact | | a|^γ - | c|^γ| ≤ | a-c|^γ for 0<γ<1, it suffices to prove that, for 0<δ< min{1,n/(mn-α)}, there exists a positive constant C such that (1|Q|_Q| T__α(f⃗)(z) -c |^δdz )^1/δ≤ C ℳ_α(f⃗)(x), where the constant c is to be determined later. For each j, we decompose f_j=f_j^0 + f_j^∞ with f_j^0=f_jχ_Q^* and Q^* = 8√(n) Q. Then ∏_j=1^m f_j(y_j) = ∏_j=1^m( f_j^0(y_j) + f_j^∞(y_j) ) = ∑_ρ_1,…,ρ_m∈{0,∞} f_1^ρ_1(y_1) ⋯ f_m^ρ_m (y_m) = ∏_j=1^m f_j^0(y_j) + ∑_(ρ_1,…,ρ_m)∈ρ f_1^ρ_1(y_1) ⋯ f_m^ρ_m (y_m), where ρ={ (ρ_1,…,ρ_m): there is at least one ρ_j≠ 0}. It is easy to see that T__α(f⃗)(z) = T_α(f_1^0,…, f_m^0)(z) + ∑_(ρ_1,…,ρ_m)∈ρ T_α(f_1^ρ_1,…, f_m^ρ_m)(z) . Furthermore, we have (1|Q|_Q| T__α(f⃗)(z) -c |^δdz )^1/δ ≤(1|Q|_Q| T_α(f_1^0,…, f_m^0)(z) |^δdz )^1/δ + (1|Q|_Q| ∑_(ρ_1,…,ρ_m)∈ρ T_α(f_1^ρ_1,…, f_m^ρ_m)(z) -c |^δdz )^1/δ := I_1+I_2. Let us first estimate I_1. Since T_α is an m-linear ω_α-CZO with ω∈(1), then it follows from <ref> that T_α maps L^1(ℝ^n) ×⋯× L^1(ℝ^n) into L^n/mn-α,∞(ℝ^n). Applying Kolmogorov's inequality (see <ref>) with p=δ and q=n/mn-α, we have I_1 = (1|Q|_Q| T_α(f_1^0,…, f_m^0)(z) |^δ dz )^1/δ ≤ C |Q|^-mn-α/nT_α(f_1^0,…, f_m^0)_L^n/mn-α,∞(ℝ^n) ≤ C ∏_j=1^m1/|Q^*|^1-α/mn_Q^* |f_j(z)| d z ≤ C ℳ_α(f⃗)(x). To estimate the remaining terms in I_2, we choose c = ∑_(ρ_1,…,ρ_m)∈ρ T_α(f_1^ρ_1,…, f_m^ρ_m)(x), and it suffices to show that, for any z ∈ Q, the following estimates hold ∑_(ρ_1,…,ρ_m)∈ρ| T_α(f_1^ρ_1,…, f_m^ρ_m)(z) - T_α(f_1^ρ_1,…, f_m^ρ_m)(x) | ≤ C ℳ_α(f⃗)(x). we consider first the case when ρ_1=⋯ =ρ_m=∞. For any x, z∈ Q, there has |T_α(f_1^∞,…, f_m^∞)(z) - T_α(f_1^∞,…, f_m^∞)(x) | ≤∫_(ℝ^n∖ Q^*)^m |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=1^m| f_j^∞(y_j)| dy⃗ ≤∑_k=1^∞∫_(𝒬_k)^m |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=1^m| f_j^∞(y_j)| dy⃗, where 𝒬_k= ( 2^k+3√(n)Q) ∖ (2^k+2√(n)Q) for k=1,2,…. Note that, for x,z∈ Q and any (y_1,…,y_m) ∈ (𝒬_k)^m, there has 2^k√(n)l(Q)≤ |z-y_j|  and |z-x|≤√(n) l(Q), and recalling that ω is nondecreasing, and applying (<ref>), we have |K_α(z,y⃗)-K_α(x,y⃗)| ≤A (∑_j=1^m|z-y_j| )^mn-αω( |z-x|/∑_j=1^m|z-y_j|) ≤C ω (2^-k)|2^k√(n) Q |^m-α/n. Then |T_α(f_1^∞,…, f_m^∞)(z) - T_α(f_1^∞,…, f_m^∞)(x) | ≤∑_k=1^∞∫_(𝒬_k)^m |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=1^m| f_j^∞(y_j)| dy⃗ ≤ C ∑_k=1^∞ω (2^-k) ∫_(𝒬_k)^m1|2^k√(n) Q |^m-α/n∏_j=1^m| f_j^∞(y_j)| dy⃗ ≤ C ∑_k=1^∞ω (2^-k) |2^k+3√(n)Q|^α/n∏_j=1^m1|2^k+3√(n)Q|∫_2^k+3√(n)Q| f_j(y_j)| dy_j ≤ C |ω|_(1)ℳ_α(f⃗)(x). Now, for (ρ_1,…,ρ_m)∈ρ, let us consider the terms (<ref>) such that at least one ρ_j=0 and one ρ_i=∞. Without loss of generality, we assume that ρ_1=⋯=ρ_ℓ=0 and ρ_ℓ+1=⋯=ρ_m=∞ with 1≤ℓ<m. Thus |T_α(f_1^ρ_1,…, f_m^ρ_m)(z) - T_α(f_1^ρ_1,…, f_m^ρ_m)(x) | ≤∫_(ℝ^n)^m |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=1^m| f_j^ρ_j(y_j)| dy⃗ ≤∫_( Q^*)^ℓ∏_j=1^ℓ| f_j^0(y_j)| ∫_(ℝ^n∖ Q^*)^m-ℓ |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=ℓ+1^m| f_j^∞(y_j)| dy⃗ ≤∫_(Q^*)^ℓ∏_j=1^ℓ| f_j^0(y_j)| ∑_k=1^∞∫_(𝒬_k)^m-ℓ |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=ℓ+1^m| f_j^∞(y_j)| dy⃗. Since for x, z∈ Q, and any y_j∈𝒬_k with ℓ+1≤ j ≤ m, there has 2^k√(n) l(Q)≤ |z-y_j|, then, similar to (<ref>), we obtain that |K_α(z,y⃗)-K_α(x,y⃗)| ≤A (∑_j=1^m|z-y_j| )^mn-αω( |z-x|/∑_j=1^m|z-y_j|) ≤A (∑_j=ℓ+1^m|z-y_j| )^mn-αω( |z-x|/∑_j=ℓ+1^m|z-y_j|) ≤C ω (2^-k)|2^k√(n) Q |^m-α/n. 
This together with (<ref>) gives |T_α(f_1^ρ_1,…, f_m^ρ_m)(z) - T_α(f_1^ρ_1,…, f_m^ρ_m)(x) | ≤ C ∫_( Q^*)^ℓ∏_j=1^ℓ| f_j^0(y_j)| ∑_k=1^∞∫_(𝒬_k)^m-ℓ |K_α(z,y⃗)-K_α(x,y⃗)| ∏_j=ℓ+1^m| f_j^∞(y_j)| dy⃗ ≤ C ∫_( Q^*)^ℓ∏_j=1^ℓ| f_j^0(y_j)| ∑_k=1^∞ω (2^-k)∫_(𝒬_k)^m-ℓ1|2^k√(n) Q |^m-α/n∏_j=ℓ+1^m| f_j^∞(y_j)| dy⃗ ≤ C ∑_k=1^∞ω (2^-k)1|2^k√(n) Q |^m-α/n(∏_j=1^ℓ∫_ Q^*| f_j^0(y_j)| dy_j) ( ∏_j=ℓ+1^m∫_2^k+3√(n) Q| f_j^∞(y_j)| dy_j) ≤ C ∑_k=1^∞ω (2^-k) |2^k+3√(n) Q |^α/n(∏_j=1^m1|2^k+3√(n) Q |∫_2^k+3√(n) Q| f_j(y_j)| dy_j) ≤ C |ω|_(1)ℳ_α(f⃗)(x). So, (<ref>) is proven. Therefore, we have I_2 =(1|Q|_Q| ∑_(ρ_1,…,ρ_m)∈ρ T_α(f_1^ρ_1,…, f_m^ρ_m)(z) -c |^δ dz )^1/δ ≤ C ℳ_α(f⃗)(x). Combining the above estimates we get the desired result. The proof is completed. In 2014, Grafakos et al. <cit.> proved the following result in the context of RD-spaces, which serves as an analog of the classical Fefferman-Stein inequalities (see Lemma 4.11 in <cit.>). Here, we rewrite their result as follows. Let 0< q_0≤ q <∞ and w∈ A_∞. Then there exists a positive constant C (depending on n, q and the A_∞ constant of w), such that for all functions f∈ L_^1(ℝ^n) with M(f) ∈ L^q_0,∞(w), * when q_0< q, we have M (f)_L^q(w) ≤ C M^♯(f)_L^q(w) * when q_0≤ q, we have M (f)_L^q,∞(w) ≤ C M^♯(f)_L^q,∞(w). Now, by <ref>, we can get the following result. Since the argument is almost the same as the proof of Proposition 4.13 in <cit.> (see also Theorem 6.2 in <cit.>), we omit the proof. Let m≥ 2, 0<α<mn, T_α be an m-linear ω_α-CZO with ω∈(1), n/(mn-α)≤ q <∞ and w∈ A_∞. Then for all bounded functions f⃗ with compact support, there exists a constant C>0 (depending on n, q and the A_∞ constant of w), * when n/(mn-α)< q, we have T_α(f⃗)_L^q(w) ≤ C ℳ_α(f⃗)_L^q(w) * when n/(mn-α)≤ q, we have T_α(f⃗)_L^q,∞(w) ≤ C ℳ_α(f⃗)_L^q,∞(w). The weighted norm inequalities for ℳ_α were established in <cit.> ( see also <cit.> or <cit.>). Suppose that 0<α<mn and 1≤ p__1,p__2,…,p__m < ∞ with 1/p = 1/p__1 + 1/p__2 +⋯+1/p__m, such that 1/q = 1/p - α/n >0 and 1/m <p≤ q<∞. * If 1< p__j < ∞ for all j=1,…,m. Then w⃗∈ A_P⃗,q if and only if ℳ_α can be extended to a bounded operator ℳ_α(f⃗)_L^q(v_w⃗^q) ≤ C ∏_j=1^mf_j_L^p_j(w_j^p_j). * If 1 ≤ p__j < ∞ for all j=1,…,m, and at least one p__j =1 for some j=1,…,m. Then, for w⃗∈ A_P⃗,q, there is a constant C > 0 independent of f⃗ such that ℳ_α(f⃗)_L^q,∞(v_w⃗^q) ≤ C ∏_j=1^mf_j_L^p_j(w_j^p_j). of <ref> Similar to the reason as in the proof of Theorem 1.2 in <cit.>, it is enough to prove <ref> is valid for f⃗ being bounded functions with compact supports. By enumerate:multiple-weights-fract in <ref>, for w⃗∈ A_P⃗,q there has v_w⃗^q∈ A_∞ . Then <ref> follows from <ref> (the weighted boundedness of ℳ_α with multiple-weights). § PROOF OF <REF> The following result, an extrapolation theorem originally due to Cruz-Uribe et al. <cit.>, are also necessary. Here, we use the following form, see Theorem 7.2.1 in <cit.>. Let ℱ denote a family of ordered pairs of measurable functions (f,g). Suppose that for some fixed q_0 with 0< q_0< ∞ and every weight w ∈ A_1 such that _ℝ^n |f(x)|^q_0 w(x)dx ≤ C_0_ℝ^n |g(x)|^q_0 w(x)dx . Let q(·)∈𝒫 (ℝ^n) with q_0≤ q_-. If (q(·)/q_0)'∈ℬ(ℝ^n), then there exists a positive constant C, such that for all (f,g) ∈ℱ, f _q(·) ≤ C g _q(·). The following result was given in <cit.> (see Theorem 1.3 in <cit.> for details). Let 0<α<n and p(·), q(·) ∈𝒫^log(ℝ^n) with p_+< n/α and 1/q(·)= 1/p(·) - α/n. Then M_α(f)_ q(·) ≤ C f_ p(·). 
In the setting of classical Lebesgue spaces, <ref> follows immediately from the boundedness of the Hardy-Littlewood maximal operator. In fact, using Hölder's inequality it is straightforward to show that M_α(f)(x) ≤ f_L^p(ℝ^n)^1-p/q M(f)(x)^p/q , x∈Ω. We also need the following density property ( see Theorem 3.4.12 in <cit.>). If q(·) ∈𝒫 (ℝ^n), then C_c^∞(ℝ^n) is dense in L^q(·)(ℝ^n). Now, we have all the ingredients to prove <ref>. of <ref> Since q(·) ∈ℬ(ℝ^n) then, by <ref>, there exists a q_0 with 1 < q_0 <q_- such that (q(·)/q_0)'∈ℬ(ℝ^n). On the other hand, by <ref> we see that, for this q_0 and any w ∈ A_1, _ℝ^n |T_α(f⃗)(x)|^q_0 w(x)dx ≤ C _ℝ^n |ℳ_α(f⃗)(x)|^q_0 w(x)dx holds for all m-tuples f⃗ = ( f__1,f__2,…,f__m) of bounded functions with compact support. Apply <ref> to the pair (T_α(f⃗),ℳ_α(f⃗)) and obtain T_α(f⃗)_q(·) ≤ C ℳ_α(f⃗) _q(·). Let 0< α_1,…, α_m<n with α = α_1+⋯+ α_m and p_i(·)<n/α_i such that 1/q_i(·)= 1/p_i(·) - α_i/n (i=1,…,m) and 1/q(·) = 1/q_1(·) +⋯+ 1/q_m(·). By <ref>, it is easy to see that (see P.89 in <cit.>) ℳ_α(f⃗)(x) ≤∏_i=1^m M_α_i f_i(x) for x∈ℝ^n. This, together with (<ref>), <ref> and the generalized Hölder's inequality (see <ref>), yields T_α(f⃗)_q(·) ≤ C ∏_i=1^mM_α_i f_i_q_i(·)≤ C ∏_i=1^m f_i_p_i(·). Now, we have showed that <ref> is valid for all bounded functions f__1,f__2,…,f__m with compact support. <ref> concludes the proof of <ref>. §.§.§ Funding information: This work is financially supported by the Science and Technology Fund of Heilongjiang (No.2019-KYYWF-0909), the NNSF of China (No.11571160), the Reform and Development Foundation for Local Colleges and Universities of the Central Government (No.2020YQ07) and the Scientific Research Fund of MNU (No.D211220637). §.§.§ Conflict of interest: The authors state that there is no conflict of interest. §.§.§ Data availability statement: All data generated or analysed during this study are included in this manuscript. §.§.§ Author contributions: All authors contributed equally to this work. All authors read the final manuscript and approved its submission. tocsectionReferences tugboat -0.5 em
http://arxiv.org/abs/2307.02631v2
20230705200413
An explainable model to support the decision about the therapy protocol for AML
[ "Jade M. Almeida", "Giovanna A. Castro", "João A. Machado-Neto", "Tiago A. Almeida" ]
cs.LG
[ "cs.LG", "cs.AI" ]
empty Springer Copyright Notice Copyright (c) 2023 Springer This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Accepted to be published in: Proceedings of the 12th Brazilian Conference on Intelligent Systems (BRACIS'23), Sep. 25–29, 2023. Cite as: 0.95J. M. Almeida, G. A. Castro, J. A. Machado-Neto and T. A. Almeida, “An explainable model to support the decision about the therapy protocol for AML,” in Proceedings of the 12th Brazilian Conference on Intelligent Systems (BRACIS'23), Belo Horizonte, MG, Brazil, 2023, pp. 1–15. BibTeX: 1.2 @InProceedings{BRACIS_2023_JMAlmeida, author = {J. M. {Almeida} and G. A. {Castro} and J. A. {Machado-Neto} and T. A. {Almeida}}, title = {An explainable model to support the decision about the therapy protocol for AML}, pages = {1–15}, booktitle = {Proc. of the 12th Brazilian Conference on Intelligent Systems (BRACIS'23)}, address = {Belo Horizonte, MG, Brazil}, month = {Sep. 11–15}, year = {2023}, publisher = {{Springer}}, } Explainable support decision about the therapy protocol for AML J.M. Almeida et al. ^1Department of Computer Science (DComp-So) Federal University of São Carlos (UFSCar) 18052-780, Sorocaba, São Paulo – Brazil ^2Institute of Biomedical Sciences The University of São Paulo (USP) 05508-000, São Paulo – Brazil jade.almeida@dcomp.sor.ufscar.br, giovannacastro@estudante.ufscar.br, jamachadoneto@usp.br, talmeida@ufscar.br An explainable model to support the decision about the therapy protocol for AMLSupported by CAPES, CNPq, and FAPESP grant #2021/13325-1. Jade M. Almeida^a Giovanna A. Castro^a João A. Machado-Neto^b Tiago A. Almeida^a August 1, 2023 ========================================================================================================================================= =0.936 Acute Myeloid Leukemia (AML) is one of the most aggressive types of hematological neoplasm. To support the specialists' decision about the appropriate therapy, patients with AML receive a prognostic of outcomes according to their cytogenetic and molecular characteristics, often divided into three risk categories: favorable, intermediate, and adverse. However, the current risk classification has known problems, such as the heterogeneity between patients of the same risk group and no clear definition of the intermediate risk category. Moreover, as most patients with AML receive an intermediate-risk classification, specialists often demand other tests and analyses, leading to delayed treatment and worsening of the patient's clinical condition. This paper presents the data analysis and an explainable machine-learning model to support the decision about the most appropriate therapy protocol according to the patient's survival prediction. In addition to the prediction model being explainable, the results obtained are promising and indicate that it is possible to use it to support the specialists' decisions safely. Most importantly, the findings offered in this study have the potential to open new avenues of research toward better treatments and prognostic markers. 
§ INTRODUCTION Acute Myeloid Leukemia (AML) is one of the most aggressive types of hematological neoplasm, characterized by the infiltration of cancer cells into the bone marrow. AML has decreasing remission rates regarding the patient's age, and its average overall survival rate is just 12 to 18 months <cit.>. In 2010, the European LeukemiaNet (ELN) published recommendations for diagnosing and treating AML <cit.>, which became a field reference. A significant update to these recommendations was published in 2017 <cit.> and 2022 <cit.>, incorporating new findings concerning biomarkers and subtypes of the disease combined with a better understanding of the disease behavior. For a diagnosis of AML, at least 10% or 20% of myeloblasts must be present in the bone marrow or peripheral blood, depending on the molecular subtype of the disease <cit.>. This analysis is performed according to the Classification of Hematopoietic and Lymphoid Tissue Tumors, published and updated by the World Health Organization. In addition to the diagnosis, the patient with AML receives a prognostic of outcomes, often divided into three risk categories: favorable, intermediate, and adverse. Cytogenetic and molecular characteristics define such stratification <cit.>. The cytogenetic characteristics come from certain chromosomal alterations. In turn, the molecular ones are determined according to mutations in the NPM1, RUNX1, ASXL1, TP53, BCOR, EZH2, SF3B1, SRSF2, STAG2, and ZRSR2 genes. Specialists commonly use the ELN risk classification to support critical decisions about the course of each treatment, which can directly impact patients' quality of life and life expectancy. Patients with a favorable risk prognosis generally have a good response to chemotherapy. On the other hand, those with adverse risk tend not to respond well to this therapy, needing to resort to other treatments, such as hematopoietic stem cell transplantation <cit.>. The problem with the current risk prognosis is the high rate of heterogeneity between patients of the same risk group. In addition, there is no clear definition regarding the intermediate risk since these patients do not show a response pattern to treatments. Most patients with AML receive an intermediate-risk classification <cit.>. Unfortunately, this makes specialists demand more information, such as the results of other tests and analyses, to support their decisions regarding the most appropriate treatment, even with little or no evidence of efficacy. This process can result in delayed initiation of treatment and consequent worsening of the patient's clinical condition. To overcome this problem, this study presents the result of a careful analysis of real data composed of clinical and genetic attributes used to train an explainable machine-learning model to support the decision about the most appropriate therapy protocol for AML patients. The model is trained to identify the treatment guide that maximizes the patient's survival, leading to better outcomes and quality of life. § RELATED WORK The decision on therapy for patients with AML is strongly based on the prediction of response to treatment and clinical outcome, often defined by cytogenetic factors <cit.>. However, the current risk classification can be quite different among patients within the same risk groups, in which the result can range from decease within a few days to an unexpected cure <cit.>. Since the mid-1970s, the standard therapy for patients with AML has been chemotherapy, with a low survival rate. 
However, with advances, various data on mutations and gene expressions began to be collected, analyzed, and made available, accelerating the development of therapeutic practices. In 2010, the European LeukemiaNet (ELN) proposed a risk categorization based on cytogenetic and molecular information, considering the severity of the disease <cit.>. This classification comprises four categories: favorable, intermediate I, intermediate II, and adverse. In 2017, a significant update to the ELN's risk classification was published <cit.>. The updated risk classification grouped patients into three categories (favorable, intermediate, and adverse) and refined the prognostic value of specific genetic mutations. Since then, specialists have commonly used this stratification to support important decisions about the course of each treatment, which can directly impact the patient's quality of life and life expectancy. In 2022, the ELN's risk classification was updated again. The main change provided is related to the expression of the FLT3-ITD gene. All patients with high expression but without any other characteristics of the adverse group are classified as intermediate risk. Another significant change is that mutations in BCOR, EZH2, SF3B1, SRSF2, STAG2, and ZRSR2 genes are related to the adverse risk classification <cit.>. Specialists often rely on the ELN risk classification to define the treatment guidelines given to the patient shortly after diagnosis. Patients with a favorable risk generally present a positive response to chemotherapy. In contrast, patients with an adverse risk tend not to respond well to this therapy, requiring other treatments, such as hematopoietic stem cell transplantation <cit.>. However, there is no clear definition regarding the therapeutic response of AML patients with intermediate risk. The problem with using the current risk classifications as a guide for deciding the most appropriate treatment is that there can be significant variability of patients in the same risk group, with different characteristics such as age and gender. For example, patients under 60 tend to respond better to high-dose chemotherapy. On the other hand, patients over 60 years old tend to have a low tolerance to intense chemotherapy and may need more palliative therapies <cit.>. Several studies suggest that age is a relevant factor when deciding the treatment for a patient, a fact that is not considered by the current risk classification. However, as most patients with AML receive the intermediate risk, specialists often require additional information, such as the results of other tests and analyses, to decide the most appropriate treatment, even with little or no evidence of efficacy <cit.>. This process can lead to a delay at the start of treatment and worsen the patient's clinical condition. Studies have emphasized the significance of analyzing mutations and gene expression patterns in families of genes to determine the therapeutic course in AML. Over 200 genetic mutations have been identified as recurrent in AML patients through genomic research <cit.>. With genetic sequencing, the patient profile for AML has transitioned from cytogenetic to molecular <cit.>. However, due to the heterogeneity of the disease, it is difficult to manually analyze the various genetic alterations that may impact the course of the disease. 
To overcome these challenges, recent studies have sought to apply machine learning (ML) techniques to automatically predict the outcome after exposure to specific treatments and complete remission of the disease. For example,  <cit.> trained supervised ML models with data extracted from RNA sequencing and clinical information to predict complete remission in pediatric patients with AML. The k-NN technique obtained the best performance, with an area under the ROC curve equals to 0.81. The authors also observed significant differences in the gene expressions of the patients concerning the pre-and post-treatment periods. Later, <cit.> used clinical and genetic data to train a random forest classifier capable of automatically predicting the survival probability. According to the authors, the three most important variables for the model were patient age and gene expression of the KDM5B and LAPTM4B genes, respectively. The authors concluded that applying ML techniques with clinical and molecular data has great predictive potential, both for diagnosis and to support therapeutic decisions. In the study of  <cit.>, a statistical decision support model was built for predicting personalized treatment outcomes for AML patients using prognostic data available in a knowledge bank. The authors have found that clinical and demographic data, such as age and blood cell count, are highly influential for early death rates, including death in remission, which is mainly caused to treatment-related mortality. Using the knowledge bank-based model, the authors concluded that roughly one-third of the patients analyzed would have their treatment protocol changed when comparing the model's results with the ELN treatment recommendations. The success reported in these recent studies is an excellent indicator that recent ML techniques have the potential to automatically discover patterns in vast amounts of data that specialists can further use to support the personalization and recommendation of therapy protocols. However, one of the main concerns when applying machine learning in medicine is that the model can be explainable, and experts can clearly understand how the prediction is generated <cit.>. In this context, this study presents the result of a careful analysis of real data composed of clinical and genetic attributes used to train an explainable machine-learning model to support the decision about the most appropriate therapy protocol for AML patients. Our main objective is to significantly reduce the subjectivity involved in the decisions specialists must make and the time in the treatment decision processes. This can lead to robust recommendations with fewer adverse effects, increasing survival time and quality of life. § MATERIALS AND METHODS This section details how the data were obtained, processed, analyzed, and selected. In addition, we also describe how the predictive models were trained. §.§ Datasets The data used to train the prediction models come from studies by The Cancer Genome Atlas Program (TCGA) and Oregon Health and Science University (OHSU). These datasets are known as Acute Myeloid Leukemia <cit.> and comprise clinical and genetic data of AML patients. Both are real and available in the public domain at <https://www.cbioportal.org/>. We used three sets with data collected from the same patients: one with clinical information (CLIN), another with gene mutation data (MUT), and another with gene expression data (EXP). Table <ref> summarizes these original data. 
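As a rough illustration of how the three views of the cohort might be brought together before analysis, the sketch below loads and aligns them on a shared patient identifier; the file names, separator and `PATIENT_ID` column are assumptions for illustration only and are not part of the original studies.

```python
import pandas as pd

# Hypothetical file names and separator; the actual cBioPortal export
# layout may differ. Each table is assumed to carry a patient identifier.
clin = pd.read_csv("data_clinical.txt", sep="\t")         # clinical features (CLIN)
mut = pd.read_csv("data_mutations.txt", sep="\t")         # gene mutation calls (MUT)
expr = pd.read_csv("data_mrna_expression.txt", sep="\t")  # gene expression values (EXP)

# Keep only patients present in all three tables, so that the clinical,
# mutation and expression views describe the same cohort.
common_ids = (set(clin["PATIENT_ID"])
              & set(mut["PATIENT_ID"])
              & set(expr["PATIENT_ID"]))
clin = clin[clin["PATIENT_ID"].isin(common_ids)]
mut = mut[mut["PATIENT_ID"].isin(common_ids)]
expr = expr[expr["PATIENT_ID"].isin(common_ids)]

print(f"{len(common_ids)} patients with complete CLIN/MUT/EXP records")
```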
§.§ Data cleaning and preprocessing Since the data comes from two sources, we have processed them to ensure consistency and integrity. With the support of specialists in the application domain, we removed the following spurious data: * Samples not considered AML in adults observed by (i) the age of the patient, which must not be less than 18 years, and (ii) the percentage of blasts in the bone marrow, which should be greater or equal to 20%; * Samples without information on survival elapsed time after starting treatment (Overall Status Survival); * Duplicate samples; and * Features of patients in only one of the two databases. We used the 3-NN method to automatically fill empty values in clinical data features (CLIN). We used the features with empty values as the target attributes and filled them using the value predicted from the model trained with other attributes. Nevertheless, we removed the features of 37 genes with no mutations. Subsequently, we kept only the samples in which all the variables are compatible, observing data related to the exams and treatment received by the patients, as these affect the nature of the clinical, mutation, and gene expression data. Of the 872 initial samples in the two databases, 272 were kept at the end of the preprocessing and data-cleaning processes. Of these, there are 100 samples from patients who remained alive after treatment and 172 who died before, during, or after treatment. Cytogenetic information was normalized and grouped by AML specialists. Moreover, the same specialists analyzed and grouped the treatments in the clinical data into four categories according to the intensity of each therapy: * Target therapy – therapy that uses a therapeutic target to inhibit some mutation/AML-related gene or protein; * Regular therapy – therapy with any classical chemotherapy; * Low-Intensity therapy – non-targeted palliative therapy, generally recommended for elderly patients; and * High-Intensity therapy – chemotherapy followed by autologous or allogenic hematopoietic stem cell transplantation. Finally, the specialists checked and validated all the data. §.§ Feature selection This section describes how we have analyzed and selected the features used to represent clinical, gene mutation, and gene expression data. §.§.§ Clinical data Among the clinical attributes common in the two databases, specialists in the data domain selected the following 11 according to their relevance for predicting clinical outcomes. In Table <ref>, we briefly describe all selected clinical features, and Table <ref> summarizes the main statistics of those with a continuous nature. Figures <ref> and <ref> summarize their main statistics. Among the clinical attributes, in line with several other studies, the only noticeable highlight is that the patient's age seems to be a good predictor of the outcome. The older the patient, the lower the chances of survival. All other attributes showed similar behavior for both classes, with subtle differences. §.§.§ Gene mutation data After cleaning and preprocessing the data, 281 gene mutation features remained. Then, we employed the χ^2 statistical method to select a subset of these features. We chose to use the χ^2 test because it has been widely used in previous studies to analyze the correlation between genetic mutations and certain types of cancer <cit.>. We defined the following hypotheses: H0 – patient survival is independent of gene mutation; and H1 – both groups are dependent. 
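A minimal sketch of how such a per-gene test of independence against the survival outcome might be run is given below (SciPy assumed; the `mut_df` and `survived` objects and their column layout are hypothetical, not the authors' code).

```python
import pandas as pd
from scipy.stats import chi2_contingency

def chi2_pvalue(mutation: pd.Series, outcome: pd.Series) -> float:
    """p-value of the chi-squared test of independence between a binary
    mutation indicator (H0: independent of survival) and the outcome."""
    table = pd.crosstab(mutation, outcome)     # 2x2 contingency table
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# mut_df: one 0/1 column per gene; survived: 0/1 outcome vector (hypothetical names).
selected = [gene for gene in mut_df.columns
            if chi2_pvalue(mut_df[gene], survived) < 0.05]
```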
Using p < 0.05, only two features were selected: PHF6 and TP53 gene mutations. The TP53 mutation is the best known among the two gene mutations selected. Several studies show the relationship between TP53 mutation with therapeutic response and prognosis. The TP53 gene is considered the guardian of genomic stability, as it controls cell cycle progression and apoptosis in situations of stress or DNA damage, and mutations in this gene are found in approximately half of the cancer patients <cit.>. Although mutations in TP53 are less common in AML patients (about 10%), they predict a poor prognosis <cit.>. The mutation in the PHF6 gene has been identified as a genetic alteration associated with hematologic malignancies<cit.>. PHF6 is a tumor suppressor gene, and several studies have shown a high mutation frequency in the adverse risk group of AML <cit.>. These observations suggest that PHF6 mutations may have a significant role in the development and progression of AML and may serve as a potential prognostic marker for the disease<cit.>. To further investigate the potential of gene mutation data on outcome prediction, we have enriched the set of gene mutation features with well-known genes already highlighted in studies in the literature <cit.> and used by the ELN <cit.>. The literature features used were: FLT3, NPM1, DNMT3A, IDH1, IDH2, TET2, ASXL1, RUNX1, CEBPA, NRAS, KRAS, SF3B1, U2AF1, SRSF2. §.§.§ Gene expression data After data cleaning and preprocessing, 14,712 gene expression features remained. To select the most relevant features for outcome prediction, we have employed a method similar to Lasso Regression <cit.>: we have trained an SVM model with L1 regularization. This method estimates the relevance of the features by assigning a weight coefficient to each of them. When a feature receives a zero coefficient, it is irrelevant enough for the problem the model was trained for. As a consequence, these features are not selected. The method was trained with all 14,712 gene expression features, from which 22 were selected. The final datasets we have used to train and evaluate the outcome prediction models are publicly available at <https://github.com/jdmanzur/ml4aml_databases>. It is composed of 272 samples (patient data) consisting of 11 clinical features (), 22 gene expression features (), and 16 gene mutation features (). Table <ref> summarizes each of these datasets. §.§ Training the outcome prediction models Since interpretability is a crucial pre-requisite for machine-learning models in medicine <cit.>, we have employed the well-known Explainable Boosting Machine (EBM) technique <cit.>. EBM is a machine learning approach that combines the strengths of boosting techniques with the goal of interpretability. It is designed to create accurate and easily understandable models, making it particularly useful in domains where interpretability and transparency are important. EBM extends the concept of boosting by incorporating a set of interpretable rules. Instead of using complex models like neural networks as weak learners, EBM employs a set of rules defined by individual input features. These rules are easily understandable and can be represented as “if-then” statements. During training, EBM automatically learns the optimal rules and their associated weights to create an ensemble of rule-based models. The weights reflect the importance of each rule in the overall prediction, and the ensemble model combines their predictions to make a final prediction. 
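As a minimal, hedged sketch of how such a model can be trained in practice, the snippet below uses the open-source InterpretML implementation of EBM together with a simple stratified split; `X` and `y` stand in for one of the feature sets and the binary survival outcome and are not taken from the original data.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.model_selection import train_test_split

# X: feature matrix (e.g., the gene expression set), y: 0/1 survival outcome.
# A stratified train/validation/test split keeps the class balance in each part.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=0)

# Fit the glass-box EBM; each feature contributes through a learned shape function.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)
print("held-out accuracy:", ebm.score(X_test, y_test))
```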
The interpretability of EBM comes from its ability to provide easily understandable explanations for its predictions. Using rule-based models, EBM can explicitly show which features and rules influenced the outcome, allowing AML specialists to understand the underlying decision-making process. EBM has been applied successfully in various domains, such as predicting medical conditions, credit risk assessment, fraud detection, and predictive maintenance, where interpretability and transparency are paramount <cit.>. We have used the EBM classification method from the InterpretML library[InterpretML is a Python library that provides a set of tools and algorithms for interpreting and explaining machine learning models. The documentation is available at <https://interpret.ml/docs>.] to train seven outcome prediction models: one per dataset (CLIN, MUT, EXP) and four using all possible combinations (CLIN+MUT, CLIN+EXP, MUT+EXP, CLIN+MUT+EXP). §.§ Performance evaluation We evaluated the performance of the prediction models using holdout <cit.>. For this, we have divided the data into three parts 80% was randomly separated for training the models, 10% of the remaining data was randomly selected for model and feature selection, and the remaining 10% was used to test. The data separation was stratified; therefore, each partition preserves the class balance of the original datasets. We must highlight we performed the feature selection processes using only training and validation partitions. We calculated the following well-known measures to assess and compare the performance obtained by the prediction models: accuracy, recall (or sensitivity), precision, F1-Score, and the Area Under the ROC Curve (AUC). § RESULTS AND DISCUSSION First, we trained the outcome prediction models using only the best-known genes consolidated by studies in the literature, both for the expression and mutation contexts. These genes are FLT3, NPM1, DNMT3A, IDH1, IDH2, TET2, ASXL1, RUNX1, CEBPA, NRAS, KRAS, SF3B1, U2AF1, and SRSF2. Table <ref> presents the prediction performance obtained. The model that achieved the best result was the one that combined clinical and genetic mutation data. When analyzing the models trained with individual datasets, the ones based on gene mutation and expression showed the best performances. However, the overall results obtained are low and unsatisfactory for predicting the outcomes of AML patients. Surprisingly, the genes most known in the literature seem not strongly associated with outcomes prediction. We then trained the outcome prediction models using the data resulting from the pre-processing, data analysis, and feature selection process described in Section <ref> (Table <ref>). Table <ref> shows the results obtained. The performance of the model trained only with the mutation data deteriorated slightly compared to the one obtained only with the genes highlighted in the literature. However, the performance of the model trained only with the expression data showed a remarkable improvement since all performance measures were up about 30%, and figuring as the best model we achieved. This strong increase in model performance is probably due to the careful KDD (Knowledge Discovery in Databases) process performed on the data and the new genes discovered to be good predictors. Since gene expression data are expensive to obtain, they are usually absent on the first visit with specialists <cit.>. 
In this case, the outcome prediction model trained with clinical data and genetic mutations can be used as an initial guide to support the first therapeutic decisions. The main advantage of using EBMs is that they are highly intelligible because the contribution of each feature to an output prediction can be easily visualized and understood. Since EBM is an additive model, each feature contributes to predictions in a modular way that makes it easy to reason about the contribution of each feature to the prediction. Figure <ref> shows the local explanation for two test samples correctly classified as positive and negative using the classification model trained with the EXP feature set. Figure <ref> presents the top-15 attributes according to their importance in generating the prediction of outcome using gene mutation (Fig <ref>), gene expression (Fig <ref>), and clinical data (Fig <ref>), respectively. The attribute importance scores represent the average absolute contribution of each feature or interaction to the predictions, considering the entire training set. These contributions are weighted based on the number of samples within each group. The four most influential clinical features are (i) when low-intensity treatment is chosen by the specialist; (ii) the patient's age; (iii) when high-intensity treatment is chosen; and (iv) the ELN risk classification. It is well-known that the age at diagnosis and the ELN risk classification can potentially impact the patient's outcome <cit.>. Considering that specialists often do not have access to the most suitable treatment intensity during model prediction, the predictions are automatically generated for the four categorized treatment types (Section <ref>), and the one that best optimizes the patient's survival time is selected as the recommended therapy. Regarding genetic mutation data, the mutations in the TP53 and PHF6 genes are ranked as the most influential, followed by the gene mutations already well-known in the literature. If, on the one hand, the mutation in the TP53 gene was already expected, to the best of our knowledge, there are no studies in the literature associating the PHF6 gene with predicting outcomes in the context of AML. Therefore, laboratory tests should be performed to confirm whether this gene may serve as a potential prognostic marker. Among the most influential genetic expression features for model prediction, the following stand out KIAA0141, MICALL2, and SLC9A2. Unlike the other genes, such as PPM1 and LTK, which are already related in several AML studies, as far as we know, there is no study in the literature relating any of the three genes mentioned in the context of AML. In particular, the gene KIAA0141, also known as DELE1, has been recently identified as a key player <cit.>. In a pan-cancer analysis, MICALL2 was highly expressed in 16 out of 33 cancers compared to normal tissues <cit.>. The role of SLC9A2 in cancer is still an area of active research, and the exact relationship between SLC9A2 and cancer development or progression is not fully understood. However, some studies have suggested potential associations between SLC9A2 and certain types of cancer, such as colorectal, breast, and gastric cancer. The findings presented in this paper suggest that the biological role of these genes in the pathogenesis and progression of AML deserves future functional studies in experimental models and may provide insights into the prognosis and the development of new treatments for the disease. 
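For reference, the kind of global importance ranking and per-sample explanation discussed above can be read directly off a fitted EBM; a brief sketch is shown below, assuming a trained classifier `ebm` and held-out data `X_test`, `y_test` as in the earlier snippet (illustrative only, not the exact code behind the figures).

```python
from interpret import show

# Global view: mean absolute contribution of each feature/rule over the training
# data, comparable to the importance rankings discussed above.
global_exp = ebm.explain_global(name="EBM global importances")

# Local view: additive per-feature contributions for individual test samples,
# comparable to the local explanations of the two correctly classified samples.
local_exp = ebm.explain_local(X_test[:2], y_test[:2], name="EBM local explanations")

show(global_exp)   # opens the interactive InterpretML visualization
show(local_exp)
```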
§ CONCLUSION To support the decision on the therapy protocol for a given AML patient, specialists usually resort to a prognostic of outcomes according to the prediction of response to treatment and clinical outcome. The current ELN risk stratification is divided into favorable, intermediate, and adverse. Despite being widely used, it is very conservative since most patients receive an intermediate risk classification. Consequently, specialists must require new exams, delaying treatment and possibly worsening the patient's clinical condition. This study presented a careful data analysis and explainable machine-learning models trained using the well-known Explainable Boosting Machine technique. According to the patient's outcome prediction, these models can support the decision about the most appropriate therapy protocol. In addition to the prediction models being explainable, the results obtained are promising and indicate that it is possible to use them to support the specialists' decisions safely. We showed that the prediction model trained with gene expression data performed best. In addition, the results indicated that using a set of genetic features hitherto unknown in the AML literature significantly increased the prediction model's performance. The finding of these genes has the potential to open new avenues of research toward better treatments and prognostic markers for AML. For future work, we suggest collecting more data to keep the models updated regarding the disease variations over time. Furthermore, the biological role of the genes KIAA0141, MICALL2, PHF6, and SLC92A in the pathogenesis and progression of AML deserves functional studies in experimental models. splncs04 10 Arber-2022 Arber, D.A., et al.: International Consensus Classification of Myeloid Neoplasms and Acute Leukemias: integrating morphologic, clinical, and genomic data. Blood 140(11), 1200–1228 (2022) Caruana-2015 Caruana, R., et al.: Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1721–1730. ACM, Sydney NSW Australia (2015) Charrot-2020 Charrot, S., et al.: AML through the prism of molecular genetics. British Journal of Haematology 188(1), 49–62 (2020) Combi-2022 Combi, C., et al.: A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine 133, 102423–102423 (2022) Dohner-2010 Döhner, H., et al.: Diagnosis and management of acute myeloid leukemia in adults: recommendations from an international expert panel, on behalf of the European LeukemiaNet. Blood 115(3), 453–474 (2010) Dohner-2017 Döhner, H., et al.: Diagnosis and management of AML in adults: 2017 ELN recommendations from an international expert panel. Blood 129(4), 424–447 (2017) Dohner-2022 Döhner, H., et al.: Diagnosis and management of AML in adults: 2022 recommendations from an international expert panel on behalf of the ELN. Blood 140(12), 1345–1377 (2022) Eisa-2023 Eisa, Y.A., et al.: The Role of PHF6 in Hematopoiesis and Hematologic Malignancies. Stem Cell Reviews and Reports 19(1), 67–75 (2023) Estey-2019 Estey, E.A.: Acute myeloid leukemia: 2019 update on risk-stratification and management. American journal of Hematology 93(10), 1267–1291 (2019) Ophir-2019 Gal, O., et al.: Predicting complete remission of acute myeloid leukemia: Machine learning applied to gene expression. 
Cancer Informatics 18,  1–5 (2019) grob-2022-35108372 Grob, T., et al.: Molecular characterization of mutant TP53 acute myeloid leukemia and high-risk myelodysplastic syndrome. Blood 139(15), 2347–2354 (2022) kastenhuber-2017-28886379 Kastenhuber, E.R., Lowe, S.W.: Putting p53 in Context. Cell 170(6), 1062–1078 (2017) Kurzer-2021 Kurzer, J.H., Weinberg, O.K.: PHF6 Mutations in Hematologic Malignancies. Frontiers in Oncology 11, 704471–704471 (2021) Lagunas-Rangel-2017 Lagunas-Rangel, F.A., et al.: Acute Myeloid Leukemia—Genetic Alterations and Their Clinical Prognosis. International Journal of Hematology-Oncology and Stem Cell Research 11, 328–339 (2017) Ley-2008 Ley, T.J., et al.: DNA sequencing of a cytogenetically normal acute myeloid leukaemia genome. Nature 456(7218), 66–72 (2008) lin-2022-35281853 Lin, W., et al.: Identification of MICALL2 as a Novel Prognostic Biomarker Correlating with Inflammation and T Cell Exhaustion of Kidney Renal Clear Cell Carcinoma. Journal of Cancer 13(4), 1214–1228 (2022) Gerstung-2017 M., G., et al.: Precision oncology for acute myeloid leukemia using a knowledge bank approach. Nature Genetics 49, 332–340 (2017) Mitchell-1997 Mitchell, T.M.: Machine Learning. McGraw-Hill (1997) Mosquera-2021 Mosquera Orgueira, A., et al.: Personalized Survival Prediction of Patients With Acute Myeloblastic Leukemia Using Gene Expression Profiling. Frontiers in Oncology 11, 657191–657191 (2021) pmlr-v139-nori21a Nori, H., Caruana, R., Bu, Z., Shen, J.H., Kulkarni, J.: Accuracy, interpretability, and differential privacy via explainable boosting. In: Meila, M., Zhang, T. (eds.) Proc. of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8227–8237. PMLR (18–24 Jul 2021) Pelcovits-2020 Pelcovits, A., Niroula, R.: Acute Myeloid Leukemia: A Review. Rhode Island Medical Journal 103(3), 38–40 (2020) Pimenta-2021 Pimenta, R.J.G., et al.: Genome-wide approaches for the identification of markers and genes associated with sugarcane yellow leaf virus resistance. Scientific Reports 11(1), 15730 (2021) Rahman-2019 Rahman, M.M., et al.: Association of p53 Gene Mutation With Helicobacter pylori Infection in Gastric Cancer Patients and Its Correlation With Clinicopathological and Environmental Factors. World Journal of Oncology 10(1), 46–54 (2019) Sharon-2023 Sharon, D., et al.: DELE1 loss and dysfunctional integrated stress signaling in TP53 mutated AML is a novel pathway for venetoclax resistance [abstract]. Cancer Research 83,  2530 (2023) Genomic-2013 The Cancer Genome Atlas Research Network: Genomic and Epigenomic Landscapes of Adult De Novo Acute Myeloid Leukemia. New England Journal of Medicine 368(22), 2059–2074 (2013) Tibshirani-1996 Tibshirani, R.: Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58(1), 267–288 (1996) Nature-2018 Tyner, J.W., et al.: Functional genomic landscape of acute myeloid leukaemia. Nature 562(7728), 526–531 (2018) Van-Vlierberghe-2011 Van Vlierberghe, P., et al.: PHF6 mutations in adult acute myeloid leukemia. Leukemia 25(1), 130–134 (2011)
http://arxiv.org/abs/2307.01057v1
20230703143640
Robust Beamforming Design for Energy Efficiency and Fairness Maximization in RIS-Assisted mmWave Communications
[ "Ahmed Magbool", "Vaibhav Kumar", "Mark F. Flanagan" ]
eess.SP
[ "eess.SP" ]
top=25.4mm,left=19.1mm, right= 19.1mm,bottom =19.1mm Robust Beamforming Design for Energy Efficiency and Fairness Maximization in RIS-Assisted mmWave Communications Ahmed Magbool, Graduate Student Member, IEEE, Vaibhav Kumar, Member, IEEE, and Mark F. Flanagan, Senior Member, IEEEThis publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant Number 13/RC/2077_P2. The authors are with the School of Electrical and Electronic Engineering, University College Dublin, Belfield, Dublin 4, Ireland. Email: ahmed.magbool@ucdconnect.ie, vaibhav.kumar@ieee.org, mark.flanagan@ieee.org. August 1, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The use of reconfigurable intelligent surfaces (RISs) has been proposed in the past few years to achieve a better communication system performance by creating a programmable wireless propagation environment. In this paper, we target maximizing both energy efficiency and user fairness in RIS-assisted millimeter-wave systems with imperfect channel state information. We formulate the energy efficiency and fairness maximization problem as a multi-objective optimization problem. We split the corresponding multi-objective optimization problem into two stages using a lexicographic approach. In the first stage, the energy efficiency is maximized; then in the second stage, the fairness is maximized subject to a maximum reduction in the optimal value of the energy efficiency. We propose a projected gradient ascent based alternating optimization procedure to solve the optimization problem in each stage. We further employ the penalty dual decomposition method to address the challenging energy efficiency constraint in the second stage. Simulation results show that the proposed algorithm can achieve a better trade-off between energy efficiency and fairness compared to the methods that target only one of those metrics. Reconfigurable intelligent surfaces, mmWave communications, energy efficiency, user fairness, imperfect CSI, lexicographic approach, projected gradient ascent, penalty dual decomposition method. § INTRODUCTION The upcoming sixth generation (6G) of cellular wireless communication networks has prominent key performance indicators, including providing extreme data rates, further enhanced spectral efficiency and coverage, and enhanced energy efficiency <cit.>. Nevertheless, the available spectrum in the sub-[6]GHz band is limited compared to that available in the higher frequency bands, which eventually limits the network's capacity. This motivates exploring higher frequency bands such as millimeter wave (mmWave) and terahertz (THz) bands. However, signals at such high frequencies experience extreme propagation conditions which feature ill-conditioned channels <cit.>. Multiple architectural solutions have been shown to be effective to overcome this issue. One of these is the use of multiple antennas at the transmitter and the receiver, better known as multiple-input-multiple-output (MIMO), to compensate for the severe path loss and enhance the multiplexing gain <cit.>. 
The use of reconfigurable intelligent surfaces (RISs) is another promising solution that can be helpful in overcoming blockage issues and enhancing the channel condition <cit.>. RISs can create a programmable propagation environment by controlling the phase of the incident signal using nearly passive reflecting elements. As a result, higher spectral efficiency can be achieved with reduced power consumption compared to conventional mmWave systems <cit.>. The vast majority of papers that have investigated resource allocation for RIS-assisted communications have focused on the maximization of the sum rate. The authors in <cit.> proposed two methods to maximize the sum rate in THz systems based on local search and cross-entropy. The former was found to provide better performance but requires higher complexity compared to the latter. The sum rate maximization problem was tackled in <cit.> for mmWave systems based on statistical position information. The advantage of this work stems from the fact that neither instantaneous channel state information (CSI) nor second-order channel statistics is required for resource allocation. A block coordinate search approach was proposed in <cit.> for RIS-assisted THz communications, where the RIS matrix, RIS position, spectrum allocation, and power control were jointly optimized to maximize the sum rate. A weighted sum rate maximization-based approach was proposed in <cit.> under both perfect and imperfect CSI scenarios to address the case of users with different quality-of-service (QoS) requirements. A hybrid beamforming scheme was proposed in <cit.>, where the optimal achievable rates were obtained for discrete RIS phase shifts. The problem of sum rate maximization has also been investigated under more general settings such as multiple RISs <cit.>, cell-free massive MIMO <cit.>, unmanned aerial vehicles <cit.> and dynamic user access <cit.>. Sustainability is a key aspect in 6G systems with a target of reaching an energy efficiency (EE) of [1]Tbit/Joule <cit.>. Nevertheless, an increasing amount of power is needed for massive data transmission in 6G networks. This has motivated several researchers to target EE maximization <cit.>. A milestone paper in EE optimization for RIS-aided systems is <cit.>, where two solutions were provided to find the optimal power allocation and RIS reflection coefficient matrix; one is a gradient-based and the other is a sequential fractional programming-based alternating approach. A genetic approach was proposed in <cit.> based on a covariance matrix adaptation evolution strategy and Dinkelbach's method to maximize the system EE for THz systems. The optimal beamformer design was obtained in <cit.> along with an asymptotic channel capacity characterization under hardware impairments. The authors in <cit.> addressed the secrecy EE problem in RIS-assisted MIMO wiretap channels with a multiple-antenna eavesdropper, where the optimal solution was obtained using a penalty dual decomposition based approach. Furthermore, machine learning tools have been investigated as candidate solutions to optimize the EE in <cit.>, where a long short-term network has been utilized to boost the EE. However, the aforementioned works ignored the issue of providing an adequate degree of fairness to users. As different users may have different quality-of-service (QoS) requirements, proportional fairness should be imposed such that the weighted rates of all users should be similar <cit.>. 
When this aspect is ignored, optimization algorithms tend to allocate minimal resources to users with weaker channels, as these users make only a minor contribution to the EE objective function <cit.>. This issue is even more pronounced at higher frequency bands, due to the severe path loss that results in users close to the transmitter having much stronger channel gains than those far from it. While incorporating rate constraints into the optimization problem, e.g., by forcing the rate of each user to be greater than a specific threshold, can help in imposing user fairness, simulation results in <cit.> showed that these threshold constraints can severely reduce the EE of the system. Another issue with with the rate constraints is that depending on the choices of these rate thresholds, a feasible solution to the optimization problem may not exist. To avoid this, a multi-objective function was formulated in <cit.> which combines the sum rate and Jain's fairness index; a successive convex approximation method was proposed to solve the resulting non-convex optimization problem. The authors in <cit.> maximized the minimum rate in a system with a finite-resolution RIS. User fairness was investigated from an EE perspective in <cit.>, where the minimum user-wise EE is maximized. In this paper, we target maximizing both EE and user fairness for mmWave multiuser systems with an array-of-subarrays (AoSA) hybrid beamforming architecture. We adopt this architecture because it is more energy efficient than the digital beamformer based fully-connected architecture <cit.>. The proposed solution method is robust to CSI imperfections and is based on the assumption of bounded CSI error. The main contributions of this paper can be summarized as follows: * We formulate the EE and fairness maximization problem as a multi-objective optimization problem. Then we use a lexicographic approach to separate the original problem into two stages. In Stage 1, we optimize the EE only, and in Stage 2, we optimize the minimum weighted rate in the system subject to a controlled reduction in the optimal EE value obtained in Stage 1. * The optimization targets the imperfect CSI case with bounded CSI error. For robust design, we optimize a lower bound on the EE and fairness obtained using the triangle and Cauchy-Schwarz inequalities. * We use a projected gradient based alternating optimization (AO) algorithm to find the optimal digital precoder, analog precoder, RIS reflection coefficient matrix, and analog combiner that optimize the EE in Stage 1. The projection onto the feasible set can be implemented with low complexity for the considered optimization problems. * We convert the non-differentiable objective function in Stage 2 into a smooth function using the log-sum-exp (LSE) approximation. We then use the penalty dual decomposition (PDD) method with an AO-based projected gradient ascent algorithm to address the challenging EE constraint and optimize the user fairness in Stage 2. * We provide extensive numerical simulation results to show the performance of the proposed method, and demonstrate its superiority over the state-of-the-art EE and fairness maximization techniques of <cit.> and <cit.>. A related problem was addressed in our previous work <cit.>; however, that work was restricted to the case of single-antenna receivers and MUI-free reception, while the present work considers the AoSA architecture, multiple-antenna receivers, and the presence of MUI. 
Also, <cit.> assumed that perfect CSI is available at the BS, whereas in this work we focus in the robust system design with channel imperfections. Given the nature of the more complicated optimization problems in this work, we use projected gradient ascent and penalty dual decomposition methods as the simpler approaches employed in <cit.> (i.e., Dinkelbach's method and beam alignment) are not suitable for these optimization problems. The remainder of the paper is organized as follows. Section <ref> presents the system model. Section <ref> formulates the robust EE and fairness optimization problem using a lexicographic method and lower bounds on the EE and fairness. Section <ref> presents a detailed description of the proposed AO framework. Simulation results and discussions are presented in Section <ref>. Finally, the paper is concluded in Section <ref>. Notations: Bold lowercase and uppercase letters denote vectors and matrices, respectively. |·| represents the magnitude of a complex number (for a complex vector, it is assumed to operate element-wise). [𝐚]_i denotes the i-th element of the vector 𝐚. ||· ||_2 and ||· ||_𝖥 represent the Euclidean vector norm and the Frobenius matrix norm, respectively. (·)^𝖳, (·)^𝖧 and (·)^-1 denote the matrix transpose, matrix conjugate transpose, and matrix inverse, respectively. 𝐈 represents the identity matrix. 1 denotes a column vector all of whose elements are equal to one. diag(a_1,…,a_N) denotes a diagonal matrix with the elements a_1,…,a_N on the main diagonal and zeros elsewhere, while blkdiag(𝐚_1,…,𝐚_N) represents a block diagonal matrix with the vectors 𝐚_1,…,𝐚_N on the main block diagonal and zeros elsewhere. ℂ indicates the set of the complex numbers, and j = √(-1) is the imaginary unit. 𝔼{·} stands for the expectation operator. 𝒞𝒩(0,𝐁) represents a complex Gaussian random vector with a mean of 0 and a covariance matrix of 𝐁. ∇_𝐚 f(·) represents the gradient of the real-valued function f(·) with respect to 𝐚. 𝒪(·) represents the big-O notation to denote the computational complexity of an algorithm. § SYSTEM MODEL We consider a multi-user downlink mmWave system in which a base-station (BS) serves K users as shown in Fig. <ref>. The BS employs hybrid digital and analog precoding, where the analog precoder comprises M radio-frequency (RF) chains, each connected to a sub-array (SA) of N_𝖳 antenna elements (AEs). We assume that the direct paths between the BS and the user equipment (UEs) are blocked and the communication is achieved via an RIS with N_RIS reflecting elements. At the receiver side, each UE is assumed to have N_𝖱 AEs and uses an analog combiner to combine the received signals. In the following subsections, we describe the channel model and the signal model. §.§ Channel Model We adopt the widely used far-field Saleh-Valenzuela (SV) channel model <cit.> to represent the BS-RIS and RIS-UE channels. We can write the BS-RIS channel as 𝐇_𝖳 = √(MN_𝖳 N_RIS/L_𝖳)∑_ℓ = 0^L_𝖳-1α_𝖳,ℓ𝐚_RIS (ϕ_𝖱,ℓ) 𝐚_BS (φ_𝖳,ℓ)^𝖧, where L_𝖳 is the number of paths between the BS and the RIS with ℓ=0 representing the LoS path and ℓ=1,…,L_𝖳-1 representing the NLoS paths, α_𝖳,ℓ is the complex path gain of the ℓ-th path, and 𝐚_RIS (ϕ_𝖱,ℓ) and 𝐚_BS (φ_𝖳,ℓ) are the beam-steering (array response) vectors which are functions of the angle of arrival (AoA) to the RIS ϕ_𝖱,ℓ and angle of departure (AoD) from BS φ_𝖳,ℓ of the ℓ-th path, respectively. 
The general array response vector of an array of AEs is 𝐚 (ϕ) = 1/√(N) [1, e^j 2 π d_1/λ_cϕ,…, e^j 2 π d_N-1/λ_cϕ]^𝖳, where N is the number of AEs in the array, d_n is the distance between the reference antenna, which is assumed to be the antenna with index n = 0, and the n-th antenna, ϕ∈ [-1,1), and λ_c is the carrier wavelength. The array response of the n-th AE at the m-th SA at the BS can be deduced as a_n,m (φ_𝖳,ℓ) = 1/√(N_𝖳)exp( j 2 π ((n-1) d_AE + (m-1) d_SA) /λ_cφ_𝖳,ℓ), n∈{ 0,…, N_𝖳}, m∈{ 0,…, M }, where d_AE and d_SA denotes the distance between two consecutive AEs and SAs, respectively. On the other hand, the array response of the element in the n_x-th row and n_y-th column in the uniform planar array (UPA) of the RIS is <cit.> a_n_x,n_y (ϕ_𝖱,ℓ) = 1/√(N_RIS)exp( j 2 π√(((n_x-1) d_x)^2 + ((n_y-1) d_y)^2)/λ_cϕ_ℓ,k), where d_x and d_y are the horizontal and vertical distance between two consecutive RIS elements, respectively. Similarly, the channel between the RIS and the k-th UE is 𝐇_𝖱,k = √(N_RISN_𝖱/L_𝖱,k)∑_ℓ = 0^L_𝖱,k-1α_𝖱,ℓ,k𝐚_UE,k (ϑ_𝖳,ℓ,k) 𝐚_RIS (Ξ_𝖳,ℓ,k)^𝖧, where L_𝖱,k is the number of paths between the RIS and the k-th UE, α_𝖱,ℓ,k is the path gain of the ℓ-th path between the RIS and the k-th UE, 𝐚_UE,k (ϑ_𝖳,ℓ,k) is array reponse vector for the k-th UE, and ϑ_𝖳,ℓ,k and Ξ_𝖳,ℓ,k are the AoA to the k-th UE and the AoD of the RIS of the ℓ-th path, respectively. §.§ Signal Model The signal transmitted by the BS can be expressed as 𝐱 = 𝐅_𝖱𝖥𝐅_𝖡𝖡𝐬, where 𝐬 = [s_1,…,s_K]^𝖳∈ℂ^K × 1 is a vector containing the K transmitted symbols, with s_k representing the symbol to be transmitted to the k-th user, and 𝔼{𝐬𝐬^𝖧} = 𝐈. Also, 𝐅_𝖡𝖡 = [𝐟_𝖡𝖡,1,…,𝐟_𝖡𝖡,K] ∈ℂ^M × K is the digital (baseband) precoder satisfying the power budget constraint, i.e., || 𝐅_𝖡𝖡 ||_𝖥^2 ≤ P_max, with P_max denoting the total transmit power budget and 𝐟_𝖡𝖡,k∈ℂ^M × 1 denoting the digital precoder for the k-th user. In addition, 𝐅_𝖱𝖥 = blkdiag(𝐟_𝖱𝖥,1 ,…, 𝐟_𝖱𝖥,M ) ∈ℂ^MN_𝖳× M is the analog (RF) precoder, with 𝐟_𝖱𝖥,m∈ℂ^N_𝖳× 1 representing the analog precoder employed by the m-th RF chain, with elements obeying the constant modulus (CM) constraint, i.e., |[ 𝐟_𝖱𝖥,m ]_i| = 1 / √(N_𝖳). At the other end, the post-combining received signal by the k-th user is y_k = 𝐰_𝖱𝖥,k^𝖧𝐆_k (Θ) 𝐱 + 𝐰_𝖱𝖥,k^𝖧𝐧_k, where 𝐰_𝖱𝖥∈ℂ^N_𝖱× 1 is the analog combiner employed by the k-th user, with elements obeying the CM constraint, i.e., |[𝐰_𝖱𝖥,k]_ℓ| = 1 / √(N_𝖱), and 𝐧_k ∼𝒞𝒩 (0, σ^2 𝐈) is the complex additive white Gaussian noise (AWGN). Furthermore, 𝐆_k (Θ) ∈ℂ^N_𝖱× MN_𝖳 denotes the overall channel matrix for the k-th user, which can be written as 𝐆_k (Θ) = 𝐇_𝖱,kΘ𝐇_𝖳 = 𝐇̂_𝖱,kΘ𝐇̂_𝖳 + Δ_k, where 𝐇̂_𝖳∈ℂ^N_RIS× MN_𝖳 is the estimated BS-RIS channel, 𝐇̂_𝖱,k∈ℂ^N_𝖱× N_RIS is the estimated RIS-user k channel and Θ= diag(θ_1,…,θ_N_RIS) ∈ℂ^N_RIS× N_RIS is the RIS reflection coefficient matrix with | θ_n| = 1, ∀ n ∈{1,…, N_RIS}. Due to channel estimation error, RIS imperfections, limited feedback or/and quantized RIS phase shift values, perfect CSI may not be available at the transmitter. For the k-th user, these imperfections can modeled by the error matrix Δ_k, which we assume to have a bounded Frobenius norm (i.e., || Δ_k ||_𝖥≤δ_k). § PROBLEM FORMULATION In this section, we formulate the multi-objective optimization problem that targets both EE and user fairness. We then use a lower bound on the EE and fairness expressions in order to find a robust solution in the presence of CSI errors. 
We further separate the multi-objective optimization problem into two single-objective optimization problems using a lexicographic method <cit.>. §.§ EE and Fairness Multi-Objective Formulation We start by expressing the rate of the k-th user as[Since the elements of 𝐧_k are independent and identically distributed, the post-combining noise power remains unchanged because the analog combiners obey the CM condition. This can be mathematically justified as follows. var{𝐰_𝖱𝖥,k^𝖧𝐧_k } = var{∑_i=1^N_𝖱 [𝐰_𝖱𝖥,k]_i^* [𝐧_k]_i } = ∑_i=1^N_𝖱var{ [𝐰_𝖱𝖥,k]_i^* [𝐧_k]_i } + 2 ∑_i< ℓcov{ [𝐰_𝖱𝖥,k]_i^* [𝐧_k]_i , [𝐰_𝖱𝖥,k]_ℓ^* [𝐧_k]_ℓ} = 1/N_𝖱∑_i=1^N_𝖱var{ [𝐧_k]_i } = σ^2. ] R_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k) = log_2 ( 1 + | 𝐰_𝖱𝖥,k^𝖧𝐆_k (Θ) 𝐅_𝖱𝖥𝐟_𝖡𝖡,k |^2/∑_i=1, i ≠ k^K | 𝐰_𝖱𝖥,k^𝖧𝐆_k (Θ) 𝐅_𝖱𝖥𝐟_𝖡𝖡,i |^2 + σ^2). The total power consumption of the system can be expressed as (c.f. <cit.>) P_tot (𝐅_𝖡𝖡) = P_BS+ ξ|| 𝐅_𝖡𝖡||_𝖥^2 + MN_𝖳 P_RF,T_BS power consumption+ N_RIS P_θ_RIS power consumption +∑_k=1^K ( P_UE,k + N_𝖱 P_RF,R,k )_UEs power consumption , where P_BS and P_UE,k are the total hardware static power consumption at the BS and the k-th UE, respectively, ξ is the power amplification factor at the BS, P_RF,T, P_RF,R,k and P_θ are the power consumption of each phase shifter at the BS, the k-th UE and the RIS, respectively. We can then express the system EE as the sum rate divided by the total power consumption as η (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = R_sum (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥)/P_tot (𝐅_𝖡𝖡), where R_sum (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = ∑_k=1^K R_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k) and 𝐖_𝖱𝖥 = [𝐰_𝖱𝖥,1,…, 𝐰_𝖱𝖥,K]. While maintaining high EE by maximizing (<ref>) is important from a system design point of view, an acceptable QoS is of primary importance to end-users. Nevertheless, users in mmWave systems tend to have very different channel strengths due to the severe path loss <cit.>. In such a case, weak users who cannot make significant contributions to the EE objective function (<ref>) will be treated as unwanted interference to the stronger users and, as a result, will be allocated very limited resources. To address this issue, we aim also to optimize user fairness by maximizing the minimum weighted rate in the system, which is expressed as ℱ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = min_k ∈{ 1,…,K}R_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k)/v_k, where v_k is a weight given to the k-th user by the system provider and represents the QoS for that user. The objective is to allocate the available resources to maximize both EE and user fairness. Accordingly, the following multi-objective optimization problem is formed: max_𝐅_𝖡𝖡, 𝐅_𝖱𝖥,Θ, 𝐖_𝖱𝖥 [ η (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥), ℱ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) ] s.t. || 𝐅_𝖡𝖡||_𝖥^2 ≤ P_max, | θ_n | = 1, ∀ n ∈{ 1,…,N_RIS}, | [𝐟_𝖱𝖥,m]_i | = 1/√(N_𝖳), ∀ m ∈{ 1,…,M }, ∀ i ∈{ 1,…,N_𝖳}, | [𝐰_𝖱𝖥,k]_ℓ | = 1/√(N_𝖱), ∀ k ∈{ 1,…,K }, ∀ℓ∈{ 1,…,N_𝖱}. The optimization problem (<ref>) is very challenging to solve directly for two main reasons. First, the two objective functions in (<ref>) usually have a negative correlation trend (i.e., maximizing EE reduces fairness and vice versa). Second, the transmitter has no knowledge of the error matrices Δ_1,…, Δ_K. In the following subsections, we tackle these issues by finding a tractable lower bound on the users' rates, and then we use a lexicographic approach to separate the multi-objective optimization problem (<ref>) into two single-objective optimization problems. §.§ Lower Bound on the EE and Fairness We start by finding a lower bound on the users' rates. 
To do so, we apply the method proposed in <cit.> with additional modifications to fit our system model. To this end, we have the following proposition. The desired signal term in (<ref>) can be lower bounded as | 𝐰_𝖱𝖥,k^𝖧𝐆_k (Θ) 𝐅_𝖱𝖥𝐟_𝖡𝖡,k | ≥ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | - δ_k √(M) || 𝐟_𝖡𝖡,k ||_2. given that | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | ≥δ_k √(M) || 𝐟_𝖡𝖡,k ||_2. Also, each of the interfering signal terms in (<ref>) can be upper bounded as | 𝐰_𝖱𝖥,k^𝖧𝐆_k (Θ) 𝐅_𝖱𝖥𝐟_𝖡𝖡,i | ≤ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,i | + δ_k √(M) || 𝐟_𝖡𝖡,i ||_2. See Appendix <ref>. Using Proposition <ref>, we can find a lower bound on the k-th user's rate as R_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k) ≥R̅_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k) = log_2 ( 1 + ( | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | - δ_k √(M) || 𝐟_𝖡𝖡,k ||_2 )^2/∑_i=1, i ≠ k^K (| 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,i | + δ_k √(M) || 𝐟_𝖡𝖡,i ||_2)^2 + σ^2), which can then be used to obtain lower bounds on the system EE and fairness as η (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) ≥η̅ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = R̅_sum (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥)/P_tot (𝐅_𝖡𝖡), and ℱ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) ≥ℱ̅ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = min_k ∈{ 1,…,K}R̅_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k)/v_k, respectively, where R̅_sum (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = ∑_k=1^K R̅_k (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k). §.§ Lexicographic Approach The second challenge is to address the two objective functions in (<ref>). The lexicographic method <cit.> is one of the most powerful tools to tackle such a multi-objective optimization problem by dividing (<ref>) into two stages. In the first stage, we target EE maximization by solving the following optimization problem: η̅^* = max_𝐅_𝖡𝖡, 𝐅_𝖱𝖥,Θ, 𝐖_𝖱𝖥 η̅ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) s.t. (<ref>), (<ref>), (<ref>), (<ref>), | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | ≥δ_k √(M) || 𝐟_𝖡𝖡,k ||_2, ∀ k ∈{1,…,K }, where the constraint (<ref>) is included to ensure that the lower bound in (<ref>) is non-negative. In the second stage, we maximize the fairness subject to an additional constraint on the maximum reduction in the EE as max_𝐅_𝖡𝖡, 𝐅_𝖱𝖥,Θ, 𝐖_𝖱𝖥 ℱ̅ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), η̅ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) ≥ρη̅^*, where ρ∈ [0,1] is a design parameter. The constraint (<ref>) is included to control the maximum reduction in the optimal EE. In other words, the minimum weighted rate of the system should be maximized at the expense of reducing the EE by a factor of at most 1-ρ from its optimal value η̅^*. Solving (<ref>) and (<ref>) is still very challenging due to the coupling between the optimization variables in the objective functions and some of the constraints. In the following section, we propose projected gradient ascent based AO procedures to solve both (<ref>) and (<ref>). § PROPOSED SOLUTION In this section, we present AO procedure to solve the optimization problems (<ref>) and (<ref>) by fixing the values of a subset of the variables while optimizing the others. In this AO framework, the projected gradient ascent method is used to find the optimal solution, where the gradient ascent update is performed for the desired variables, then this update is projected onto the feasible set. §.§ Stage 1: EE Maximization 1. Digital Precoder: We first assume that all of the variables other than the digital precoder are fixed (i.e., the values of 𝐅_𝖱𝖥,Θ, 𝐖_𝖱𝖥 are held fixed). We can then write the EE maximization problem with respect to the digital precoder as [In this paper, having some arguments omitted in a multi-variable function indicates that their values are being held fixed.] max_𝐅_𝖡𝖡 η̅ (𝐅_𝖡𝖡) s.t. (<ref>), (<ref>). 
To solve this, we begin by re-expressing R̅_k (𝐅_𝖡𝖡) in (<ref>) as R̅_k (𝐅_𝖡𝖡) = log_2 (∑_i=1^K (| 𝐛_k^𝖧𝐟_𝖡𝖡,i | + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i ||_2)^2 + σ^2 ) - log_2 (∑_i=1,i≠ k^K (| 𝐛_k^𝖧𝐟_𝖡𝖡,i | + δ_k √(M) || 𝐟_𝖡𝖡,i ||_2)^2 + σ^2 ) where 𝐛_k = 𝐅_𝖱𝖥^𝖧𝐇̂_𝖳^𝖧Θ^* 𝐇̂_𝖱,k^𝖧𝐰_𝖱𝖥,k and s_i,k = -1, if i = k, 1, if i ≠ k. Next, we have the following proposition. The gradient of R̅_k with respect to 𝐟_𝖡𝖡,ℓ is ∇_𝐟_𝖡𝖡,ℓR̅_k (𝐟_𝖡𝖡,ℓ)= 2/ln 2𝐂_ℓ,k𝐟_𝖡𝖡,ℓ 𝗂𝖿 ℓ = k, 2/ln 2 ( 𝐂_ℓ,k - 𝐂̅_ℓ,k) 𝐟_𝖡𝖡,ℓ 𝗂𝖿 ℓ≠ k, where 𝐂_ℓ,k = (1 + s_ℓ,kδ_k √(M) || 𝐟_𝖡𝖡,ℓ ||_2/| 𝐛_k^𝖧𝐟_𝖡𝖡,ℓ| ) 𝐛_k 𝐛_k^𝖧 + ( δ_k^2 M + s_ℓ,kδ_k √(M) | 𝐛_k^𝖧𝐟_𝖡𝖡,ℓ |/ || 𝐟_𝖡𝖡,ℓ ||_2) 𝐈/∑_i=1^K (| 𝐛_k^𝖧𝐟_𝖡𝖡,i | + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i ||_2)^2 + σ^2 , and 𝐂̅_ℓ,k = (1 + s_ℓ,kδ_k √(M) || 𝐟_𝖡𝖡,ℓ ||_2/| 𝐛_k^𝖧𝐟_𝖡𝖡,ℓ|) 𝐛_k 𝐛_k^𝖧 + ( δ_k^2 M + s_ℓ,kδ_k √(M) | 𝐛_k^𝖧𝐟_𝖡𝖡,ℓ |/ || 𝐟_𝖡𝖡,ℓ ||_2) 𝐈/∑_i=1, i≠ k^K (| 𝐛_k^𝖧𝐟_𝖡𝖡,i | + δ_k √(M) || 𝐟_𝖡𝖡,i ||_2)^2 + σ^2 . See Appendix <ref>. The gradient of R̅_k (𝐅_𝖡𝖡) with respect to 𝐅_𝖡𝖡 can be written as ∇_𝐅_𝖡𝖡R̅_k ( 𝐅_𝖡𝖡) = [ ∇_𝐟_𝖡𝖡,1 R̅_k (𝐟_𝖡𝖡,1 ),…, ∇_𝐟_𝖡𝖡,K R̅_k (𝐟_𝖡𝖡,K ) ], and that of R̅_sum (𝐅_𝖡𝖡) with respect to 𝐅_𝖡𝖡 is ∇_𝐅_𝖡𝖡R̅_sum ( 𝐅_𝖡𝖡) = ∑_k=1^K ∇_𝐅_𝖡𝖡R̅_k ( 𝐅_𝖡𝖡). We can also obtain the gradients of P_tot (𝐅_𝖡𝖡) with respect to 𝐅_𝖡𝖡 as ∇_𝐅_𝖡𝖡 P_tot (𝐅_𝖡𝖡) = 2 ξ𝐅_𝖡𝖡. Then the gradient of η̅(𝐅_𝖡𝖡) with respect to 𝐅_𝖡𝖡 is ∇_𝐅_𝖡𝖡η̅ (𝐅_𝖡𝖡) = P_tot (𝐅_𝖡𝖡) ∇_𝐅_𝖡𝖡R̅_sum (𝐅_𝖡𝖡) - R̅_sum (𝐅_𝖡𝖡) ∇_𝐅_𝖡𝖡 P_tot (𝐅_𝖡𝖡) /P_tot^2 (𝐅_𝖡𝖡). Therefore, the update of 𝐅_𝖡𝖡 should follow 𝐅_𝖡𝖡^(t+1) = 𝐅_𝖡𝖡^(t) + α∇_𝐅_𝖡𝖡η̅ (𝐅_𝖡𝖡^(t)), where α>0 is the step size and 𝐅_𝖡𝖡^(t) is the value of 𝐅_𝖡𝖡 at the t-th iteration of the gradient ascent algorithm. Next, we note that the projection of the updated variable 𝐅_𝖡𝖡^(t+1) onto the constraint (<ref>) is given by 𝐅̂_𝖡𝖡^(t+1) = min( 1,√(P_max)/|| 𝐅_𝖡𝖡^(t+1) ||_𝖥) 𝐅_𝖡𝖡^(t+1). Projecting onto the constraint (<ref>), on the other hand, is not straightforward. A simple way to address this constraint is to reject the projected update 𝐅̂_𝖡𝖡^(t+1) if it violates the constraint (<ref>) and keep it if it does not. This approach in general provides a low-complexity alternative when determining the projection is complex or analytically challenging[Another way to address the constraint (<ref>) is by using the PDD method. However, we observed that with a proper initialization, the constraint (<ref>) was never violated in our simulations, so the additional complexity of implementing a PDD-based approach can be removed without any loss in performance. The PDD method will be used later in this paper to address the EE constraint (<ref>) in Stage 2.]. This update rejection method will be applied for all other variables in a similar manner to address the constraint (<ref>). 2. Analog Precoder: Since the power consumption expression (<ref>) is a function of the digital precoder only, it is enough to maximize the sum rate for the other variables. Assuming that the values of 𝐅_𝖡𝖡, Θ and 𝐖_𝖱𝖥 are fixed, the sum rate optimization for the analog precoder is max_𝐅_𝖱𝖥 R̅_sum (𝐅_𝖱𝖥), s.t. (<ref>), (<ref>). To derive the gradient expression of the sum rate with respect to 𝐅_𝖱𝖥, we begin by expressing the k-th user's hybrid precoding vector as 𝐟_k = 𝐅_𝖱𝖥𝐟_𝖡𝖡,k = [ [𝐟_𝖡𝖡,k]_1 𝐟_𝖱𝖥,1^𝖳,…, [𝐟_𝖡𝖡,k]_M 𝐟_𝖱𝖥,M^𝖳]^𝖳 = 𝐅_𝖡𝖡,k𝐟_𝖱𝖥, where 𝐅_𝖡𝖡,k = diag ( [ [𝐟_𝖡𝖡,k]_1 1^𝖳 ,…, [𝐟_𝖡𝖡,k]_M 1^𝖳]^𝖳), and 𝐟_𝖱𝖥 = [𝐟_𝖱𝖥,1^𝖳,…,𝐟_𝖱𝖥,M^𝖳]^𝖳. 
Subsequently, the rate of the k-th user can be written as R̅_k (𝐟_𝖱𝖥) = log_2 ( ∑_i=1^K (| 𝐝_i,k^𝖧𝐟_𝖱𝖥 | + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i||_2)^2 + σ^2 ) - log_2 ( ∑_i=1, i ≠ k^K (| 𝐝_i,k^𝖧𝐟_𝖱𝖥 | + δ_k √(M) || 𝐟_𝖡𝖡,i||_2)^2 + σ^2 ), where 𝐝_i,k = 𝐅_𝖡𝖡,i^𝖧𝐇̂_𝖳^𝖧Θ^* 𝐇̂_𝖱,k^𝖧𝐰_𝖱𝖥,k. We can apply a similar derivation to that used in the proof of Proposition <ref> to find the gradient of R̅_k (𝐟_𝖱𝖥) with respect to 𝐟_𝖱𝖥 as ∇R̅_k (𝐟_𝖱𝖥) = 2/ln 2[ ∑_i=1^K (1 + s_i,kδ_k √(M)|| 𝐟_𝖡𝖡,i||_2/| 𝐝_i,k^𝖧𝐟_𝖱𝖥 | ) 𝐝_i,k𝐝_i,k^𝖧/∑_i=1^K (| 𝐝_i,k^𝖧𝐟_𝖱𝖥 | + s_i,kδ_k √(M)|| 𝐟_𝖡𝖡,i||_2)^2 + σ^2 - ∑_i=1,i≠ k^K (1 + δ_k √(M)|| 𝐟_𝖡𝖡,i||_2/| 𝐝_i,k^𝖧𝐟_𝖱𝖥 | ) 𝐝_i,k𝐝_i,k^𝖧/∑_i=1,i≠ k^K (| 𝐝_i,k^𝖧𝐟_𝖱𝖥 | + δ_k √(M)|| 𝐟_𝖡𝖡,i||_2)^2 + σ^2] 𝐟_𝖱𝖥, and thus the gradient of (<ref>) with respect to 𝐟_𝖱𝖥 is given by ∇_𝐟_𝖱𝖥R̅_sum (𝐟_𝖱𝖥) = ∑_k=1^K ∇_𝐟_𝖱𝖥R̅_k. Hence, the update of 𝐟_𝖱𝖥 should follow 𝐟_𝖱𝖥^(t+1) = 𝐟_𝖱𝖥^(t) + α∇_𝐟_𝖱𝖥R̅_sum (𝐟_𝖱𝖥^(t)), and the projection of 𝐟_𝖱𝖥^(t+1) onto the constraint (<ref>) can be performed as 𝐟̂_𝖱𝖥^(t+1) = 1/√(N_𝖳)diag(|𝐟_𝖱𝖥^(t+1)|)^-1𝐟_𝖱𝖥^(t+1) . 3. RIS Reflection Coefficient Matrix: Next, assuming that the values of 𝐅_𝖡𝖡,𝐅_𝖱𝖥 and 𝐖_𝖱𝖥 are fixed, the RIS reflection coefficient optimization sub-problem can be stated as max_Θ R̅_sum (Θ), s.t. (<ref>), (<ref>). We can find the gradient of R̅_sum (Θ) with respect to Θ by writing the k-th user's rate as R̅_k (Θ) = log_2 ( ∑_i=1^K ( |𝐞_k^𝖧Θ𝐦_i | + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i||_2 )^2 + σ^2 ) - log_2 ( ∑_i=1, i ≠ k^K ( | 𝐞_k^𝖧Θ𝐦_i | + δ_k √(M) || 𝐟_𝖡𝖡,i||_2)^2 + σ^2 ), where 𝐞_k = 𝐇̂_𝖱,k^𝖧𝐰_𝖱𝖥,k and 𝐦_i = 𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,i. Then we can express R̅_k (Θ) as R̅_k(θ) = log_2 ( ∑_i=1^K ( |𝐠_i,k^𝖧θ | + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i||_2 )^2 + σ^2 ) - log_2 ( ∑_i=1, i ≠ k^K ( | 𝐠_i,k^𝖧θ𝐦_i | + δ_k √(M) || 𝐟_𝖡𝖡,i||_2)^2 + σ^2 ), where 𝐠_i,k = diag(𝐦_i^𝖧) 𝐞_k, and θ = [θ_1,…,θ_N_RIS]^𝖳. Then, the gradient of R̅_k (θ) with respect to θ is ∇_θR̅_k (θ) = 2/ln 2[ ∑_i=1^K (1 + s_i,kδ_k √(M)|| 𝐟_𝖡𝖡,i||_2/| 𝐠_i,k^𝖧θ | ) 𝐠_i,k𝐠_i,k^𝖧/∑_i=1^K (| 𝐠_i,k^𝖧θ | + s_i,kδ_k √(M)|| 𝐟_𝖡𝖡,i||_2)^2 + σ^2 - ∑_i=1,i≠ k^K (1 + δ_k √(M)|| 𝐟_𝖡𝖡,i||_2/| 𝐠_i,k^𝖧θ | ) 𝐠_i,k𝐠_i,k^𝖧/∑_i=1,i≠ k^K (| 𝐠_i,k^𝖧θ | + δ_k √(M)|| 𝐟_𝖡𝖡,i||_2)^2 + σ^2] θ, and that of R̅_sum (θ) is ∇_θR̅_sum (θ) = ∑_k=1^K ∇_θR̅_k(θ). So, the update of θ should follow θ^(t+1) =θ^(t) + α∇_θR̅_sum (θ^(t)), and the projection of θ^(t+1) onto the constraint (<ref>) can be performed as θ̂^(t+1) = diag(|θ^(t+1)|)^-1θ^(t+1) . 4. Analog Combiner: Since each user applies its analog combiner independently of the other users, the analog combiner sub-problem can be formulated as k independent optimization problems. Assuming that the values of 𝐅_𝖡𝖡,𝐅_𝖱𝖥 and Θ are fixed, the k-th user's analog combiner optimization sub-problem can be written as max_𝐰_𝖱𝖥,k R̅_k (𝐰_𝖱𝖥,k), s.t. (<ref>), (<ref>). We start by expressing the rate of user k as R̅_k (𝐰_𝖱𝖥,k) = log_2 ( ∑_i=1^K (|𝐪_i,k^𝖧𝐰_𝖱𝖥,k| + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i||_2 )^2 + σ^2 ) - log_2 ( ∑_i=1,i ≠ k^K (|𝐪_i,k^𝖧𝐰_𝖱𝖥,k| + δ_k √(M) || 𝐟_𝖡𝖡,i||_2 )^2 + σ^2 ), where 𝐪_i,k =𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k. The gradient of R̅_k (𝐰_𝖱𝖥,k) with respect to 𝐰_𝖱𝖥,k is then ∇_𝐰_𝖱𝖥,kR̅_k (𝐰_𝖱𝖥,k) = 2/ln 2[ ∑_i=1^K (1 + s_i,kδ_k √(M)|| 𝐟_𝖡𝖡,i||_2/| 𝐪_i,k^𝖧𝐰_𝖱𝖥,k | ) 𝐪_i,k𝐪_i,k^𝖧/∑_i=1^K (| 𝐪_i,k^𝖧𝐰_𝖱𝖥,k | + s_i,kδ_k √(M)|| 𝐟_𝖡𝖡,i||_2)^2 + σ^2 - ∑_i=1,i≠ k^K (1 + δ_k √(M)|| 𝐟_𝖡𝖡,i||_2/| 𝐪_i,k^𝖧𝐰_𝖱𝖥,k | ) 𝐪_i,k𝐪_i,k^𝖧/∑_i=1,i≠ k^K (| 𝐪_i,k^𝖧𝐰_𝖱𝖥,k | + δ_k √(M)|| 𝐟_𝖡𝖡,i||_2)^2 + σ^2] 𝐰_𝖱𝖥,k. 
So, the update of 𝐰_𝖱𝖥,k should follow 𝐰_𝖱𝖥,k^(t+1) = 𝐰_𝖱𝖥,k^(t) + α∇_𝐰_𝖱𝖥,kR̅_k (𝐰_𝖱𝖥,k^(t)), and the projection of 𝐰_𝖱𝖥,k^(t+1) onto the constraint (<ref>) should follow 𝐰̂_𝖱𝖥,k^(t+1) = 1/√(N_𝖱)diag(|𝐰_RF,k^(t+1)|)^-1𝐰_RF,k^(t+1). Algorithm <ref> summarizes the projected gradient ascent based AO procedure for EE maximization in Stage 1. §.§ Stage 2: Fairness Maximization After obtaining the optimal EE value η̅^*, we target solving the fairness maximization problem (<ref>) in the second stage. Unlike (<ref>), the projected gradient ascent algorithm cannot be applied here directly for two reasons. First, the gradient of the minimum weighted rate (<ref>) has discontinuities at the points at which R̅_i(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,i) / v_i = R̅_k(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k) / v_k for some i≠ k. Second, the projection onto the EE constraint (<ref>) is not straightforward to determine. On the one hand, the discontinuity issue can be tackled using the commonly used LSE approximation to approximate (<ref>) as ℱ̅ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) ≈ℱ̂ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) = - 1/ζln( ∑_k=1^K exp( - ζR̅_k(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐰_𝖱𝖥,k)/v_k) ), which is valid for a sufficiently large smoothing parameter ζ. On the other hand, the PDD method <cit.> can be employed to address the challenging EE constraint (<ref>). The PDD method operates by incorporating challenging constraints into the objective function with penalty parameters to ensure that these constraints are satisfied. For the problem at hand, to obtain the penalty function we first rewrite the constraint (<ref>) as ρη̅^* P_tot (𝐅_𝖡𝖡) - R̅_sum (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) ≤ 0. Then we define the following penalty function: 𝒢(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ) = ρη̅^* P_tot (𝐅_𝖡𝖡) - R̅_sum (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) + μ, where μ≥ 0 is a slack variable used to eliminate the penalty function 𝒢(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ) whenever the condition (<ref>) is satisfied. We then introduce the following augmented Lagrangian function <cit.>: ℋ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ) ≜ℱ̂ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥) - γ𝒢(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ) - 1/2 ω𝒢^2(𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ), where γ is the Lagrange multiplier and ω > 0 is a penalty parameter. For fixed values of γ and ω, we solve the following equivalent form of (<ref>): max_𝐅_𝖡𝖡, 𝐅_𝖱𝖥,Θ, 𝐖_𝖱𝖥 , μ ℋ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ) s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), μ≥ 0. Once convergence is achieved for specific values of γ and ω, they can be updated as <cit.> γ←γ + 1/ωℋ (𝐅_𝖡𝖡,𝐅_𝖱𝖥, Θ, 𝐖_𝖱𝖥, μ), and ω←ψω, where ψ∈ (0,1] is a design parameter that controls the speed of attenuation of the penalty parameter ω. We detail the AO optimization procedure to solve the optimization problem (<ref>) in the following. 1. Digital precoder: Assuming that the values of 𝐅_𝖱𝖥,Θ,𝐖_𝖱𝖥 and μ are fixed, the digital precoder optimization sub-problem can be written as max_𝐅_𝖡𝖡 ℋ (𝐅_𝖡𝖡) s.t. (<ref>) , (<ref>). The gradient of ℱ̂ (𝐅_𝖡𝖡) and 𝒢(𝐅_𝖡𝖡) with respect to 𝐅_𝖡𝖡 can be written as ∇_𝐅_𝖡𝖡ℱ̂ (𝐅_𝖡𝖡) = ∑_k=1^K ∇_𝐅_𝖡𝖡R̅_k(𝐅_𝖡𝖡)/v_kexp( - ζR̅_k(𝐅_𝖡𝖡)/v_k)/∑_k=1^K exp( - ζR̅_k(𝐅_𝖡𝖡)/v_k), and ∇_𝐅_𝖡𝖡𝒢(𝐟_𝖡𝖡) = ρη^* ∇_𝐅_𝖡𝖡 P_tot (𝐅_𝖡𝖡) - ∇_𝐅_𝖡𝖡R̅_sum (𝐅_𝖡𝖡), respectively. Thus, the gradient of ℋ (𝐅_𝖡𝖡) with respect to 𝐅_𝖡𝖡 is ∇_𝐅_𝖡𝖡ℋ (𝐅_𝖡𝖡) =∇_𝐅_𝖡𝖡ℱ̂ (𝐅_𝖡𝖡) - γ∇_𝐅_𝖡𝖡𝒢(𝐅_𝖡𝖡) - 1/ω𝒢(𝐅_𝖡𝖡) ∇_𝐅_𝖡𝖡𝒢(𝐅_𝖡𝖡), and the update of 𝐅_𝖡𝖡 should follow 𝐅_𝖡𝖡^(t+1) = 𝐅_𝖡𝖡^(t) + α∇_𝐅_𝖡𝖡ℋ (𝐅_𝖡𝖡^(t)). 2. Analog Precoder: Next, assuming that the values of 𝐅_𝖡𝖡, Θ, 𝐖_𝖱𝖥 and μ are fixed, we consider the following analog precoder optimization sub-problem: max_𝐅_𝖱𝖥 ℋ (𝐅_𝖱𝖥) s.t. (<ref>) , (<ref>). 
The gradient of ℱ̂ (𝐟_𝖱𝖥) with respect to 𝐟_𝖱𝖥 can be found as ∇_𝐟_𝖱𝖥F̂ (𝐟_𝖱𝖥) = ∑_k=1^K ∇_𝐟_𝖱𝖥R̅_k(𝐟_𝖱𝖥)/v_kexp( - ζR̅_k(𝐟_𝖱𝖥)/v_k)/∑_k=1^K exp( - ζR̅_k(𝐟_𝖱𝖥)/v_k). We can also find the gradient of 𝒢(𝐟_𝖱𝖥) with respect to 𝐟_𝖱𝖥 as ∇_𝐟_𝖱𝖥𝒢(𝐟_𝖱𝖥) = - ∇_𝐟_𝖱𝖥R̅_sum (𝐟_𝖱𝖥). From (<ref>) and (<ref>), we can deduce the gradient of ℋ (𝐟_𝖱𝖥) with respect to 𝐟_𝖱𝖥 as ∇_𝐟_𝖱𝖥ℋ (𝐟_𝖱𝖥) =∇_𝐟_𝖱𝖥ℱ̂ (𝐟_𝖱𝖥) - γ∇_𝐟_𝖱𝖥𝒢(𝐟_𝖱𝖥) - 1/ω𝒢(𝐟_𝖱𝖥) ∇_𝐟_𝖱𝖥𝒢(𝐟_𝖱𝖥), so that 𝐟_𝖱𝖥 can be updated as 𝐟_𝖱𝖥^(t+1) = 𝐟_𝖱𝖥^(t) + α∇_𝐟_𝖱𝖥ℋ (𝐟_𝖱𝖥^(t)). 3. RIS Reflection Coefficient Matrix: Assuming that the values of 𝐅_𝖡𝖡,𝐅_𝖱𝖥,𝐖_𝖱𝖥 and μ are fixed, the RIS reflection coefficient matrix optimization sub-problem can be written as max_Θ ℋ (Θ) s.t. (<ref>), (<ref>). The gradients of F̂ (θ) and 𝒢(θ) with respect to θ are ∇_θF̂ (θ) = ∑_k=1^K ∇_θR̅_k(θ)/v_kexp( - ζR̅_k(θ)/v_k)/∑_k=1^K exp( - ζR̅_k(θ)/v_k). and ∇_θ𝒢(θ) = - ∇_θR̅_sum (θ), respectively. Then, we can find the gradient of ℋ (θ) with respect to θ as ∇_θℋ (θ) =∇_θF̂ (θ) - γ∇_θ𝒢(θ) - 1/ω𝒢(θ) ∇_θ𝒢(θ), so the update of θ should follow θ^(t+1) = θ^(t) + α∇_θℋ (θ^(t)). 4. Analog Combiner: Since it is required to maximize the rate of the user with the minimum weighted rate, we can follow the same procedure used in EE maximization (Stage 1) to obtain the analog combiner. In this manner, each user's rate will be independently maximized including the minimum weighted rate, which helps in maximizing both EE and user fairness. 5. The Parameter μ: The fairness optimization with respect to the slack parameter μ is max_μ ℋ (μ) s.t. (<ref>). To find the optimal value of μ, we equate the partial derivative of ℋ (μ) with respect to μ to zero to obtain ∂ℋ (μ)/∂μ = - γ - 1/ω(ρη̅^* P_tot - R̅_sum + μ) = 0, which has the solution μ = R̅_sum - ρη̅^* P_tot -γω. Then the update of μ after projecting onto the constraint (<ref>) is μ^(t+1) = max( 0,R̅_sum - ρη̅^* P_tot -γω). Algorithms <ref> and <ref> summarize the projected gradient ascent based AO procedure for fairness maximization and the overall two-stage EE and fairness maximization algorithm, respectively. §.§ Complexity Analysis In this subsection, we present an analysis of the computational complexity of the proposed method, which is defined as the number of the required complex multiplications. Each variable update and projection for both Algorithm <ref> and <ref> are dominated by specific vector multiplications (inner and outer products). For example, the complexity of updating 𝐅_𝖱𝖥 is dominated by forming the vector 𝐝_i,k, computing the inner products 𝐝_i,k^𝖧𝐟_𝖱𝖥, computing the Euclidean norm || 𝐅_𝖱𝖥 ||_2 and computing the outer products 𝐝_i,k𝐝_i,k^𝖧. Table <ref> presents the complexity of updating and projecting each variable. In practical systems, the number of RIS elements is very large compared to the number of BS antennas, UE antennas, and the number of users, i.e., N_RIS≫max(MN_𝖳,N_𝖱,K). In this situation, the computational complexities of one iteration of Algorithm <ref> and <ref> are dominated by the term 𝒪( K^2N_RIS^2 (MN_𝖳 + N_𝖱) ). If I_1, I_2 denotes the number of iterations needed for Algorithm <ref> and Algorithm <ref> to converge, respectively, and I_3 denote the number of iterations needed for the PDD method to converge, then the complexity of Algorithm <ref> is dominated by the term 𝒪( (I_1 + I_2I_3) K^2N_RIS^2 (MN_𝖳 + N_𝖱) ). § NUMERICAL SIMULATIONS In this section, we conduct extensive simulations to demonstrate the trade-offs between EE and fairness that are achievable via the proposed method. 
Two state-of-the-art benchmark schemes are considered for comparison, the EE-only maximization scheme of <cit.> (without and with rate constraints), and the EE fairness maximization scheme of <cit.>. The main differences between the proposed method and the benchmark schemes are summarized in Table <ref>. In our simulations, the performance metric is averaged over 1000 independent channel realizations, where the small-scale fading, user locations, and QoS weights are randomly varied within confined ranges. Table <ref> summarizes the simulation parameters (unless otherwise stated). Two performance metrics are considered; EE and Jain's fairness index for proportional rates, which is defined as <cit.> F_J = (∑_k=1^K R_k / w_k)^2/K ∑_k=1^K (R_k / w_k)^2∈ [1/K,1], with F_J = 1 indicating perfect fairness (i.e., all users have the same proportional rate). §.§ Convergence We first plot the EE and the minimum weighted rate versus the iteration number for the perfect (i.e., δ_k = 0 ∀ k ∈{1,…,K }) and imperfect CSI cases in Fig. <ref> to examine the convergence of the proposed algorithm. When the EE only is targeted in Stage 1, the minimum weighted rate converges to zero. The reason behind this is that the weak users interfere with the strong ones and since they do not contribute much to the EE objective function, their rates will be suppressed by the EE optimization algorithm. This clearly demonstrates an acute need to address the issue of user fairness. In Stage 2, we start by setting ω = 10 and although the augmented Lagrangian function converges, the EE constraint (<ref>) is not satisfied. The parameter ω is then reduced by a factor of 10, which leads to (<ref>) being satisfied. It can be noted that at the final iteration, the value of the augmented Lagrangian function converges to the value of the fairness objective function. This behavior is expected since both functions should have the same value when (<ref>) is satisfied. For the perfect and imperfect CSI cases, a 30% reduction in EE leads to an increase of the minimum weighted rate from 0 to 59 Mbit/sec and from 0 to 27 Mbit/sec, respectively. These trade-offs can be controlled by adjusting the EE-fairness trade-off parameter, as will be shown in Subsection <ref>. Moreover, it can be observed that only a few iterations (∼ 45) are needed for convergence, which indicates that the solution can be obtained in a reasonable time. §.§ Effect of Varying EE-Fairness Trade-off Parameter ρ Fig. <ref> demonstrates the EE-fairness trade-off that is achieved by varying the value of ρ. The general trend shows that as ρ increases, the EE increases and Jain's fairness index decreases. This is because ρ controls the maximum reduction of EE when optimizing fairness. At one extreme, perfect fairness (Jain's fairness index of one) can be achieved when ρ = 0, at the expense of reducing the EE by 55% of its optimal value. At the other extreme, the maximum EE can be achieved with a very poor fairness. The value of ρ can be tuned to achieve achieve a favorable and flexible trade-off between these two extremes. Compared to existing EE and/or fairness maximization methods, the proposed method offers the flexibility of tuning ρ to achieve the desired operating point of EE and fairness based on available resources and QoS requirements. For instance, if the system in Fig. <ref> requires an EE above [100]Mbit/sec/Joule and a fairness index above 0.6, ρ can be chosen to be between 0.6 and 0.8 while none of the other methods can provide this flexibility. 
§.§ Effect of Varying the Transmit Power Budget Fig. <ref> shows the average EE and Jain's fairness index as the maximum transmit power budget increases. It can be noted that both EE and fairness saturate when the transmit power budget exceeds [40]dBm. This indicates that the extra power is not being used as it does not help in optimizing the EE or fairness. This can be attributed to the fact that the numerator of the EE objective function is a logarithmic function of the transmit power while the denominator is a linear function of the transmit power. In other terms, the rate of increase of the numerator (sum-rate) decreases with increasing P_max, while that of the denominator (total power consumption) is constant. Furthermore, it can be noted that Stage 1 of the proposed method can achieve better EE than the state-of-art methods for three main reasons. First, the AoSA is architecture more energy-efficient than the fully-connected one. This conclusion matches the observation of <cit.>, where the AoSA and fully-connected architectures are compared from this perspective. Second, the proposed system has a greater capability of increasing the EE as it features two additional sets of variables: the analog precoder at the BS and the analog combiners at the UEs. Third, unlike the benchmark methods, zero-forcing (ZF) and orthogonal transmission have not been used. While orthogonal transmission clearly limits the system capabilities, ZF is not ideal in the case of correlated, ill-conditioned, and rank-deficient mmWave channels. Nevertheless, considering only EE leads to very poor fairness, as in most practical cases all resources are allocated to serve one user only. This issue can be resolved by targeting fairness optimization directly as in <cit.>, however, the resulting EE is poorer than that achieved by other methods. The proposed two-stage approach can achieve a better EE and fairness trade-off than these existing approaches, as a 30% reduction in EE can lead to an increase in Jain's fairness index from 0.25 to 0.87. On the one hand, although incorporating rate constraints can help to improve fairness, it can degrade the system EE significantly when these constraints are made power budget dependent. On the other hand, setting the values of rate constraints independently from the transmit power budget may lead to feasibility issues. §.§ Effect of Varying the Number of RIS elements In Fig. <ref>, we plot the average EE and Jain's fairness index as the number of RIS elements increases for two values of the power consumption at each RIS element: P_θ = [1]dBm and P_θ = [10]dBm. We can notice that adding extra RIS elements provides a substantial increase in the EE for all cases when the number of RIS elements is small. However, as the number of RIS elements becomes larger than 64, the increase starts to slow down in the case of P_θ = [1]dBm while the average EE starts to decrease for P_θ = [10]dBm. This eventual decrease in the EE occurs due to the fact that although more RIS elements offer more degrees of freedom to enhance the system performance, it has a logarithmic increase in the numerator of the EE but a linear increase in the denominator. Similar trends can be observed in the behavior of the fairness as a function of the number of RIS elements. §.§ Effect of Varying the CSI Error Bound We finally show the effect of increasing the CSI error bound in Fig. <ref>, where the CSI error bound of each user is modeled as δ_k = β || 𝐇_𝖳𝐇_𝖱,k||_𝖥, for some β≥ 0. 
In each figure, we plot three curves: the actual value of the EE or fairness returned by the proposed algorithm (i.e., when the lower bound is optimized), the corresponding value of the optimized lower bound, and the value of the EE or fairness for the case where the BS assumes no error exists in the CSI (i.e., the BS assumes δ_k = 0 ∀ k). In the first and last cases, the performance was averaged over 100 error matrices with independent and identically distributed (i.i.d) Gaussian entries that satisfy (<ref>). It can be noticed that large channel estimation errors can affect system performance. In addition, compared to the case where the BS assumes perfect CSI knowledge, optimizing the lower bound instead of the original function can offer improvements in both EE and fairness; the higher the error bound, the greater the improvement obtained. However, this comes at at an additional cost in complexity, as more terms must be included in the gradient calculations. It can be concluded that, if the imperfect CSI bound is small, it might be better for the BS to assume perfect CSI knowledge as the improvement is not significant. On the other hand, optimizing the EE and fairness lower bounds offer a good level of improvement for scenarios where the CSI uncertainty is relatively high. § CONCLUSION This paper presents a method for robust design of the hybrid analog and digital precoder at the BS, RIS reflection coefficient matrix, and analog combiner at the UEs in order to maximize both the EE and user fairness in RIS-assisted mmWave systems with imperfect channel state information. To achieve this, a lower bound based on the triangle and Cauchy-Schwarz inequalities is used and a lexicographic method is employed to separate the original multi-objective optimization problems into two single-objective optimization problems. First, the EE is maximized using a projected gradient based alternating optimization procedure, and then, the user fairness is maximized subject to a tunable reduction in the optimal value of the EE. The penalty dual decomposition method is used to address the additional challenging EE constraint. The simulation results show that the proposed method can offer flexibility in tuning the EE and user fairness and in prioritizing one of these metrics over the other. § PROOF OF PROPOSITION 1 We use the reverse triangle inequality (i.e., |x+y| ≥| |x| - |y| |) to obtain | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k + 𝐰_𝖱𝖥,k^𝖧Δ_k 𝐅_𝖱𝖥𝐟_𝖡𝖡,k | ≥ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | - |𝐰_𝖱𝖥,k^𝖧Δ_k 𝐅_𝖱𝖥𝐟_𝖡𝖡,k |. By applying the Cauchy–Schwarz inequality and using the facts || 𝐰_𝖱𝖥,k||_2 = 1, || 𝐅_𝖱𝖥||_2 = √(M), and || Δ_k ||_𝖥≤δ_k, the following inequality holds: | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | - |𝐰_𝖱𝖥,k^𝖧Δ_k 𝐅_𝖱𝖥𝐟_𝖡𝖡,k | ≥ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | - || 𝐰_𝖱𝖥,k ||_2 || Δ_k ||_𝖥 || 𝐅_𝖱𝖥 ||_𝖥 ||𝐟_𝖡𝖡,k ||_2 ≥ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,k | - δ_k √(M) || 𝐟_𝖡𝖡,k ||_2. In a similar manner, we can find an upper bound for the interference term in (<ref>) using the triangle inequality followed by the Cauchy–Schwarz inequality as | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,i + 𝐰_𝖱𝖥,k^𝖧Δ_k 𝐅_𝖱𝖥𝐟_𝖡𝖡,i | ≤ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,i | + |𝐰_𝖱𝖥,k^𝖧Δ_k 𝐅_𝖱𝖥𝐟_𝖡𝖡,i | ≤ | 𝐰_𝖱𝖥,k^𝖧𝐇̂_𝖱,kΘ𝐇̂_𝖳𝐅_𝖱𝖥𝐟_𝖡𝖡,i | + δ_k √(M) || 𝐟_𝖡𝖡,i ||_2. § PROOF OF PROPOSITION 2 We start by finding the gradient of (<ref>) with respect to 𝐟_𝖡𝖡,ℓ when ℓ=k. 
The second logarithmic term becomes irrelevant and the only relevant term inside the sum argument of the first logarithmic term can be rewritten as ( |𝐛_k^𝖧𝐟_𝖡𝖡,ℓ| + s_ℓ,kδ_k √(M) || 𝐟_𝖡𝖡,ℓ||_2)^2 = |𝐛_k^𝖧𝐟_𝖡𝖡,ℓ|^2_g_1(𝐟_𝖡𝖡,ℓ) + δ_k^2 M || 𝐟_𝖡𝖡,ℓ||_2^2_g_2(𝐟_𝖡𝖡,ℓ) + 2 s_ℓ,kδ_k √(M)|𝐛_k^𝖧𝐟_𝖡𝖡,ℓ| || 𝐟_𝖡𝖡,ℓ||_2_g_3(𝐟_𝖡𝖡,ℓ). Then we can find the following gradients ∇_𝐟_𝖡𝖡,ℓg_1(𝐟_𝖡𝖡,ℓ) = 2 𝐛_k 𝐛_k^𝖧𝐟_𝖡𝖡,ℓ, ∇_𝐟_𝖡𝖡,ℓ g_2(𝐟_𝖡𝖡,ℓ) = 2 δ_k^2 M 𝐟_𝖡𝖡,ℓ, ∇_𝐟_𝖡𝖡,ℓ g_3(𝐟_𝖡𝖡,ℓ) = 2 s_ℓ,kδ_k √(M) || 𝐟_𝖡𝖡,ℓ||_2 𝐛_k 𝐛_k^𝖧𝐟_𝖡𝖡,ℓ/|𝐛_k^𝖧𝐟_𝖡𝖡,ℓ| + 2 s_ℓ,kδ_k √(M) |𝐛_k^𝖧𝐟_𝖡𝖡,ℓ| 𝐟_𝖡𝖡,ℓ/|| 𝐟_𝖡𝖡,ℓ||_2. Thus, the gradient of R̅_k (𝐅_𝖡𝖡) with respect to 𝐟_𝖡𝖡,ℓ is ∇_𝐟_𝖡𝖡,ℓR̅_k (𝐟_𝖡𝖡,ℓ) = 1/ln 2∇_𝐟_𝖡𝖡,ℓg_1(𝐟_𝖡𝖡,ℓ) + ∇_𝐟_𝖡𝖡,ℓ g_2(𝐟_𝖡𝖡,ℓ) + ∇_𝐟_𝖡𝖡,ℓ g_3(𝐟_𝖡𝖡,ℓ)/∑_i=1^K (| 𝐛_k^𝖧𝐟_𝖡𝖡,i | + s_i,kδ_k √(M) || 𝐟_𝖡𝖡,i ||_2)^2 + σ^2 , which can be rearranged to obtain (<ref>). A similar procedure can be applied to obtain the gradient for the case of ℓ≠ k by considering the relevant term inside the sum argument of the second logarithmic term in (<ref>). IEEEtran
http://arxiv.org/abs/2307.03057v1
20230706152354
Corrected Hill Function in Stochastic Gene Regulatory Networks
[ "Manuel Eduardo Hernández-García", "Jorge Velázquez-Castro" ]
q-bio.MN
[ "q-bio.MN", "q-bio.BM", "92-11", "I.6" ]
Hydrodynamic atmospheric escape in HD 189733 b: Signatures of carbon and hydrogen measured with the Hubble Space Telescope [ ========================================================================================================================== Describing reaction rates in stochastic bio-circuits is commonly done by directly introducing the deterministically deduced Hill function into the master equation. However, when fluctuations in enzymatic reaction rates are not neglectable, the Hill function must be derived, considering all the involved stochastic reactions. In this work, we derived the stochastic version of the Hill function from the master equation of the complete set of reactions that, in the macroscopic limit, lead to the Hill function reaction rate. We performed a series expansion around the average values of the concentrations, which allowed us to find corrections for the deterministic Hill function. This process allowed us to quantify the fluctuations of enzymatic reactions. We found that the underlying variability in propensity rates of gene regulatory networks has an important non-linear effect that reduces the intrinsic fluctuations of the mRNA and protein concentrations. Keywords: Hill function, stochastic system, fluctuations, genetic networks, stationary distribution. Hydrodynamic atmospheric escape in HD 189733 b: Signatures of carbon and hydrogen measured with the Hubble Space Telescope [ ========================================================================================================================== § INTRODUCTION The study of genetic regulatory networks has increased in recent years, owing to their potential applications in novel disease treatments. Understanding the dynamics and functional relationships among the components of the regulatory networks of some genetic diseases <cit.> can provide insights into their causes and lead to new treatments. These systems are complex, and contain several components. However, the transcription factors and protein concentrations involved in network dynamics are generally low. The effect of fluctuations around average concentrations can propagate between many components or even play a functional role. Thus, a deterministic description of the evolution of the network concentrations is an approximation. Many models have been proposed in the context of stochastic processes <cit.> using the master equation. Most of these approaches employ a deterministically derived Hill function to represent certain chemical rates, implying that they are approximations of a complete reaction network. This problem was previously addressed by <cit.>; however, its consequences and the problem itself have yet to be exhaustively studied. However, some developments have been made in this regard <cit.>. The formalism of multivariable birth-dead processes <cit.> is typically employed to describe reaction kinetics. However, in addition to the simple reactions, the master equation is difficult to solve analytically. Furthermore, as the reaction network becomes more complex, the numerical solutions become more computationally intensive. This lack of practicality in finding a solution to the master equation results in a linear noise approximation that leads to the Fokker-Planck equation, which is the standard framework for making inferences about stochastic systems <cit.>. The Fokker-Planck equation is useful for a wide range of situations and systems. 
In <cit.> a similar approximation was considered, although only second-order reactions were also considered. Although we work with a model of any order, it is necessary for the system to be sufficiently large. The results obtained contain a corrective term. In <cit.> a similar approximation was considered, although they only worked with second-order reactions, whereas we worked with a model of any order; only for reactions of order greater than 2, it is necessary that the size of the system is large enough for the approximation to the second order to be sufficient. The results obtained contain a corrective term. However, before the approximation is made, it is possible to take advantage of the separation between fast and slow reactions in some scenarios. In this work, we obtained general expressions for the master equation based on the assumption that there are fast and slow reactions in the chemical network, where the fast reactions have already reached their equilibrium distributions. This procedure provides a more accurate description than inserting a Hill function to describe an enzymatic response, as is commonly performed. The master equation for the slow reactions was then used to obtain the evolution of the average reactant concentrations as a series of expansions of relative fluctuations. This calculation was necessary because the mean concentrations were generally associated with deterministic reaction kinetics equations. Thus, this procedure allows us to correct the deterministic dynamics of small systems where the effect of intrinsic fluctuations cannot be neglected. This method was illustrated using three examples. First, we found the corrections of the Hill function due to reactant concentration fluctuations in small systems using a Toggle Switch. With this gained experience, an expression for the master equation of a general gene regulatory network was provided and used to analyze a repressilator and an activator-repressor clock. The proposed methodology helps find a more accurate description of complex reaction kinetics for small systems than deterministic models. It is also an improvement over the simplistic application of the Hill function for describing the reaction rates of enzymatic reactions in a stochastic manner. Additionally, the resulting ordinary differential equations are significantly more computationally efficient than the expensive computer simulations of the Gillespie algorithm <cit.>. This approximation will allow for simulation and thus, a better understanding of more complex systems and the role of fluctuations in chemical networks. In particular, it is possible to analyze the effect of intrinsic noise in complex gene regulatory networks and the potential emergent properties from the inherent noise in these types of systems. The remainder of this paper is structured as follows. In Section <ref>, we briefly describe the multivariable life and death processes and propose a useful variation of the method presented in <cit.> for determining the stationary distribution of this type of system. The methodology is used in the following sections because fast reactions are assumed to have reached their stationary distribution. Section <ref> describes a method for obtaining deterministic equations from a stochastic model, particularly for multivariable life and death systems. In addition, we find the corresponding fluctuation-dissipation theorem, which allows us to quantify the fluctuations of the variables throughout the time evolution. 
Section <ref> briefly explains how the Hill function was obtained from a stochastic process. In particular, we first analyzed the toggle switch to explain in more detail how the Hill function was obtained. To achieve this, we assumed fast and slow reactions. Using a similar procedure, we calculated the Hill function with corrections owing to the stochastic effects. Section <ref> presents a generic gene regulation network. This study was divided into two broad categories. First, we present a general network with one transcription factor, and then the study is broadened to consider many transcription factors. In Section <ref>, we review the results and provide concluding remarks. § CHEMICAL MASTER EQUATION In deterministic models, the law of mass action is used to build a set of differential equations that describe the concentration dynamics of the species for some network of chemical processes. However, this approach may not always be the most suitable for small systems, because the relative fluctuations become significant. The mass action law can be generalized in the stochastic realm as a multivariable birth-dead-like process <cit.>. In this section, the methodology and notation used to describe the birth-dead process are reviewed. We consider N species 𝒮_j (j ∈ 1,2,..N), and M reactions ℛ_i (i ∈ 1,2,..M) through which these species are transformed, that is, ℛ_i : ∑_j=1^Nα_ij𝒮_j [k_i^-]k_i^+⇄∑_j=1^Nβ_ij𝒮_j. The coefficients α_ij and β_ij are positive integers known as stoichiometric coefficients. The stoichiometric matrix is defined as Γ_ji= β_ij - α_ij . Through collisions (or interactions) of the species, they may be transformed; therefore, the transformation rates are proportional to the collision probability. The propensity rates can be expressed as follows <cit.> t_i^+(𝐒) = k_i^+∏ _lS_l !/Ω ^α_il(S_l- α_il )!, t_i^-(𝐒)= k_i^-∏ _lS_l !/Ω ^β_il(S_l- β_il )!. ( index i labels the reaction that occurs ℛ_i). Where S_l is the number of molecules of species 𝒮_l and 𝐒 = (S_1,S_2,..., S_N) is the state vector. These ratios are the transition probabilities between different states of the system. Ω is closely related to the system size. According to (<ref>), we labeled reactions that go from left to right as t^+_i(𝐒) and when they move in the opposite direction as t^-_i(𝐒). With all of the above elements, we can write the master equation of the system, which describes the dynamic evolution of the probability distribution of the states of the system. The master equation can then be expressed as ∂_t P(𝐒,t)= Ω∑ _i ( t_i^-(𝐒+Γ_i) P(𝐒+Γ_i,t) - t_i^+(𝐒) P(𝐒,t) . +.t_i^+(𝐒-Γ_i) P(𝐒-Γ_i,t) - t_i^-(𝐒) P(𝐒,t) ). Where Γ_i denotes the i column of the stoichiometric matrix. The sum is made over all the reactions of the system. This equation is also known as the master chemical equation and is the stochastic version of the law of mass action. Let 𝒮= {𝒮_i} be the set of species, ℛ= {ℛ_i} be the set of reactions, 𝐭= {t_i^+, t_i^-} be the set of reaction rates of the system, and {𝒮, ℛ, 𝐭} be called the chemical reaction network (CRN). §.§ Calculation of the stationary distribution We will assume that the fast reactions reach their stationary state well before the slow reactions. Thus, when describing the dynamics of slow reactions, we assume that fast reactions are already stationary. In this section, we develop the methodology used to calculate the stationary distribution of the master equation. 
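Before doing so, the objects just introduced can be made concrete. The following minimal Python sketch (our own illustration, not part of the paper) encodes a CRN through its stoichiometric coefficient matrices alpha and beta and evaluates the propensity rates t_i^+(S), t_i^-(S) of Eq. (<ref>) for a reversible dimerisation 2 S_1 <-> S_2; the rate constants and system size are arbitrary placeholder values.

import numpy as np
from math import factorial

def falling_factorial(s, k):
    # s! / (s - k)!, zero when s < k (the reaction cannot fire)
    return 0 if s < k else factorial(s) // factorial(s - k)

def propensities(S, alpha, beta, k_plus, k_minus, Omega):
    """Propensity rates t_i^+(S), t_i^-(S) of each reaction, as in Eq. (<ref>)."""
    M = alpha.shape[0]
    t_plus, t_minus = np.zeros(M), np.zeros(M)
    for i in range(M):
        t_plus[i], t_minus[i] = k_plus[i], k_minus[i]
        for j, s in enumerate(S):
            t_plus[i] *= falling_factorial(s, alpha[i, j]) / Omega ** alpha[i, j]
            t_minus[i] *= falling_factorial(s, beta[i, j]) / Omega ** beta[i, j]
    return t_plus, t_minus

# Illustrative network: 2 S1 <-> S2 (one reversible reaction, two species).
alpha = np.array([[2, 0]])            # left-hand stoichiometric coefficients alpha_ij
beta = np.array([[0, 1]])             # right-hand stoichiometric coefficients beta_ij
Gamma = (beta - alpha).T              # stoichiometric matrix Gamma_ji = beta_ij - alpha_ij
k_plus, k_minus, Omega = [1.0], [0.5], 100.0

print(propensities(np.array([40, 10]), alpha, beta, k_plus, k_minus, Omega))

The matrix Gamma computed here is the stoichiometric matrix that enters the master equation (<ref>).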
The stationary distribution is the solution of (<ref>) when its LHS is set to zero 0= ∑ _i ( t_i^-(𝐒+Γ_i) P(𝐒+Γ_i,t) - t_i^+(𝐒) P(𝐒,t) . +.t_i^+(𝐒-Γ_i) P(𝐒-Γ_i,t) - t_i^-(𝐒) P(𝐒,t) ). Several proposals have been made to solve (<ref>) <cit.>. We follow a procedure similar to <cit.> with a slight variation. As shown in <cit.>, a stationary distribution of a CRN that is at least weakly reversible and has zero deficiency can be expressed in the form P(S)_S=∏_i P_i(S_i). Where P_i(S_i) represents the probability of finding S_i molecules of species i. Then we proceed as follows, 1.- We define ν^+_i(𝐒) = t_i^+(𝐒) P(𝐒), ν^-_i(𝐒) = t_i^-(𝐒) P(𝐒). 2.- By complex balance <cit.>, the stationary distribution obey the following system of equations ∑_j Γ_ij (ν^+_j(𝐒)θ(Γ_ij)+ ν^-_j(𝐒) θ(-Γ_ij)) = ∑_j Γ_ij (ν^+_j(𝐒-Γ_j)θ(-Γ_ij)+ ν^-_j(𝐒+Γ_j)θ(Γ_ij)) . where Γ_ij is the stoichiometric matrix, and θ(x) is the Heaviside step function. 3.- There might be equations that are linearly dependent on others, meaning that, depending on the particular form of Γ_ij we could have more equations than unknowns in the system (<ref>). In this case, we can transform the similarity to represent Γ_ij in its reduced echelon form Γ_ij'. If there are linearly dependent columns, this means that there are conserved quantities. Thus the linear independent system of equation is ∑_j Γ_ij' (ν^+_j(𝐒)θ(Γ_ij')+ ν^-_j(𝐒) θ(-Γ_ij')) = ∑_j Γ_ij' (ν^+_j(𝐒-Γ_j')θ(-Γ_ij')+ ν^-_j(𝐒+Γ_j')θ(Γ_ij')) . 4.- Substituting (<ref>) in (<ref>) and taking the average over all variables except a single specie S_i we obtain an equation for P_i(S_i) ∑_j Γ_ij' (⟨t^+_j(𝐒)|_⟩l ≠ i P_i(S_i)θ(Γ_ij')+ ⟨t^-_j(𝐒)|_⟩l ≠ i P_i(S_i) θ(-Γ_ij')) = ∑_j Γ_ij' ⟨t^+_j(𝐒)|_⟩l ≠ i P_i(S_i-Γ_ij)θ(-Γ_ij')+ ⟨t^+_j(𝐒)|_⟩l ≠ i P_i(S_i+Γ_ij)θ(Γ_ij')) . This procedure can be repeated for the remaining species of the system. After solving for each species it is possible to find the stationary distribution around a stationary state P(𝐒)= ∏_i M c_i^S_i/S_i !∏_l δ _∑_iγ_li S_i - N_l,0. Here the coefficient M is a normalization constant, c_i characterise the mean concentration of specie i, γ_li are the null vectors of the stoichiometric matrix, N_l,0 are the conserved quantities of the system. We introduce the Kronecker delta to consider that there could be some dependent variables in 𝐒 labeled by index l. § DETERMINISTIC APPROXIMATION AND THE DISSIPATION FLUCTUATION THEOREM Deterministic models of chemical reactions typically describe the dynamics of average concentrations of the species. Thus, the deterministic model associated with a master equation is obtained by calculating the average number of molecules divided by the system size ⟨S_j|⟩/Ω using the master equation (<ref>). Thus, the temporal evolution of the concentration of the species 𝒮_j is ∂/∂t( ⟨S_j|⟩/Ω)= ∑ _i Γ_ij⟨t_i^+(𝐒) -t_i^-(𝐒)|,⟩ where the bracket notation ⟨f(𝐒)|=⟩∑_𝐒f(𝐒)P(𝐒,t) is the average of all the variables. In the following, we denote the average concentrations with the lower letters s_j= ⟨S_j|⟩/Ω. To obtain an approximate expression of the RHS in (<ref>), we use the Taylor expansion of a function around the average of its argument, that is, ⟨f(𝐗)|≈⟩ ⟨ f(⟨𝐗|)⟩ + ∑_i (X_i-⟨X_i|)⟩[ ∂ f(𝐗)/∂ X_i]_𝐗=⟨𝐗|⟩ + ∑_i ∑_j (X_i-⟨X_i|)⟩(X_j-⟨X_j|)⟩/2[ ∂^2 f(𝐗)/∂ X_i ∂ X_j]_𝐗=⟨𝐗|⟩⟩ = f(⟨𝐗|)⟩ + ∑_i ∑_j σ^2(X_i,X_j)/2[∂^2 f(𝐗)/∂ X_i ∂ X_j]_𝐗=⟨𝐗|⟩, for large enough Ω, where ⟨𝐗|=⟩ (⟨X_1|,⟩⟨X_2|,⟩..., ⟨X_N|)⟩ and σ^2(X_i,X_j)=⟨ (X_i-⟨X_i|)⟩(X_j-⟨X_j|)⟩⟩ = ⟨X_i X_j|-⟩⟨X_i|⟨%s|%s⟩⟩X_j. as been defined. 
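As a quick sanity check of this second-order expansion, one can compare it with a Monte Carlo estimate for the falling-factorial monomial f(x) = x(x - 1), which appears in second-order propensities, sampled under a Poisson law. This is a hedged illustration of ours; the choice of f and of the sampling law is made only for checking purposes.

import numpy as np

rng = np.random.default_rng(0)

# f(x) = x (x - 1): the falling-factorial factor of a second-order propensity.
f = lambda x: x * (x - 1.0)
d2f = lambda x: 2.0                    # second derivative of f

def second_order_average(f, d2f, mean, var):
    """<f(X)> approximated by f(<X>) + (1/2) var * f''(<X>) (single-variable case)."""
    return f(mean) + 0.5 * var * d2f(mean)

lam = 7.3                                        # Poisson mean (= variance)
samples = rng.poisson(lam, size=200_000).astype(float)
print("Monte Carlo  <f(X)> :", f(samples).mean())
print("2nd-order expansion :", second_order_average(f, d2f, lam, lam), " (lambda^2 =", lam**2, ")")

Both numbers agree with lambda^2, consistent with the fact, used below, that the expansion is exact for second-order reactions.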
Thus, we can express the averages rates ⟨t^±(𝐒)|$⟩ using the previous 2nd order expansion to obtain ⟨t_i^+(𝐒)|≈⟩ k_i^+( ∏ _j s_j^α_ij + ∑_l_1∑_l_2σ^2(s_l_1,s_l_2)/2[ ∂^2/∂ s_l_1∂ s_l_2( ∏ _j s_j^α_ij) ]) , ⟨t_i^-(𝐒)|≈⟩ k_i^-( ∏ _j s_j^β_ij + ∑_l_1∑_l_2σ^2(s_l_1,s_l_2)/2[ ∂^2/∂ s_l_1∂ s_l_2( ∏ _j s_j^β_ij) ]), where we have also made use of Stirling's approximation to eliminate factorials, i.e. t_i^+(⟨𝐒|)⟩= k_i^+∏ _j⟨S_j|!⟩/Ω ^α_ij(⟨S_j|-⟩α_ij )!≈ k_i^+∏ _j( ⟨S_j /Ω|⟩)^α_ij = k_i^+∏ _j s_j^α_ij≡ R_i^D+ (𝐬) , t_i^-( ⟨𝐒|)⟩= k_i^-∏ _j⟨S_j|!⟩/Ω ^β_ij(⟨S_j|-⟩β_ij )!≈ k_i^-∏ _j( ⟨S_j /Ω|⟩)^β_ij = k_i^-∏ _j s_j^β_ij≡ R_i^D- (𝐬). Finally, if we denotek_i^+ ∏_j s_j^α_ij=R_i^D+ (𝐬)andk_i^- ∏_j s_j^β_ij=R_i^D- (𝐬)as the deterministic reaction rates whenΩ→∞we write the evolution of concentrations_jas d s_j /dt= ∑_iΓ_ji( R_i^D+(𝐬) - R_i^D-(𝐬) + ∑_j_1∑_j_2σ^2(s_j_1, s_j_2)/2∂^2/∂s_j_1∂s_j_2 (R_i^D+(𝐬) - R_i^D-(𝐬)) ). This equation provides the first stochastic correction to the deterministic evolution of the reaction kinetics. However, it is not yet a closed system of equations; we also need the evolution of covarianceσ^2(s_j_1, s_j_2)to solve it. As before, we use the master equation to find the evolution of⟨X_i X_j|$⟩, and then we use the expansion (<ref>) on the reactions rates R_i's to obtain ∂/∂tσ^2(s_l_1, s_l_2) = ∑ _i ( Γ_l_1 iΓ_l_2i(R_i^D+(𝐬)+R_i^D-(𝐬))/Ω. + ∑_j_1( σ^2(s_l_1, s_j_1)Γ_l_2 i∂/∂s_j_1 +σ^2(s_j_1, s_l_2)Γ_l_1 i∂/∂s_j_1) (R_i^D+(𝐬)-R_i^D-(𝐬)) + . ∑_j_1∑_j_2{σ^2(s_j_1, s_j_2) Γ_l_1 iΓ_l_2i/2 Ω∂^2/∂s_j_1∂s_j_2 (R_i^D+(𝐬)+R_i^D-(𝐬)) ), It is worth noting that, in contrast to the common Linear Noise Approximation, the cross-reaction terms from equations (<ref>) and (<ref>) give us a closed system, and thus the exact expressions for second-order reactions <cit.>. As the intrinsic fluctuations of the species concentrations are given by η^2_i = ⟨ξ_i^2|⟩/Ω^2= σ^2(X_i,X_i)/Ω^2 , where ξ_i^2= (X_i-⟨X_i|)⟩^2. Subsequently, the system of differential equations formed by (<ref>) and (<ref>) describes the mean dynamics of the system and quantifies its intrinsic fluctuations. Thus, instead of solving the entire master equation, it is possible to solve the more simple equations (<ref>) and (<ref>). In the following sections, we use the method to describe representative stochastic systems and demonstrate its advantages. § HILL FUNCTION AND ITS FLUCTUATIONS-INDUCED CORRECTION The Hill function is widely used in systems in which an enzymatic reaction occurs, or, in our case, to capture the transcription rate of factors affecting mRNA synthesis. Generally, transcription factors act as activators or suppressors. In the case of an activating factor, the following Hill function is usually used, H= ê^n/K^n+ ê^n. Here, ê are the concentrations of the factors and n is known as the Hill coefficient. Figure <ref> shows how the Hill function behaves for an activator with different values of n. In the case of a repressor, the Hill function used is of the form D=K^n/(K^n+ê). Hill functions are derived by analyzing the deterministic dynamics of enzymatic reactions, and their use is widely extended, even in stochastic descriptions. However, its direct use as a transcription rate in the master equation of a stochastic system is just an approximation <cit.>. In this section, we derive the exact transcription rate due to stationary enzymatic reactions by setting up the master equation of the entire reaction network of the system. 
This procedure allows us to find corrections of the Hill function to consider the intrinsic fluctuations of enzymatic reactions. First, we will analyze the Toggle switch as a particular case, and then derive a general expression for the corrections. §.§ Toggle switch Now, we will analyze the Toggle switch genetic regulatory network to show a common approach used in the literature. Then we will pose the master equation of the complete reaction network corresponding to the system. A systematic treatment of the master equation will allow us to show that the commonly used Hill function is a first approximation and that corrections can be made. The standard approach to describe the Toggle switch is to use the following chemical network ∅ R_1, ∅ R_2, R_1 ∅, R_2∅. where D_d is the deterministic derived Hill function D_d, defined by D_d(x)= (x/K_R)^n/1+(x/K_R)^n. The corresponding master equation of the system is given by <cit.> d/dt P(r_1,r_2,t) = αΩ D_d(r_2/Ω) P(r_1-1,r_2,t)- βr_1 P(r_1,r_2,t) + β (r_1+1) P(r_1+1,r_2,t) - αΩ D_d(r_2/Ω) P(r_1,r_2,t) + αΩ D_d(r_1/Ω) P(r_1,r_2-1,t)- βr_2 P(r_1,r_2,t) + β (r_2+1) P(r_1,r_2+1,t) - αΩ D_d(r_1/Ω) P(r_1,r_2,t). In the previous description of the Toggle switch, the Hill Function in the master equation (<ref>) is introduced because of the assumption that the enzymatic reaction leading to the production of R_1 and R_2 implicitly involves an enzyme at a fixed concentration <cit.>. For a more accurate description, the intrinsic fluctuations of the available active molecules should be considered, even if the enzyme concentration is stationary. We included the binding reactions between R_1 and R_2 to the corresponding enzyme to account for these enzymatic fluctuations. The reaction network of the Toggle Switch then becomes. P_R_1 +n R_2[k_-]k_+⇄ P_R_1^*, P_R_2 +n R_1[k_-]k_+⇄ P_R_2^*, P_R_1 R_1, P_R_2 R_2, R_1 ∅, R_2∅. We will denote r_1 as the number of molecules of the chemical species R_1 and p_1 for the active polymerase P_R_1, and p_1^* for the deactivated polymerase P_R_1^*. Similarly, r_2 represents the number of molecules of species R_2 and p_2, p_2^* is the corresponding active and inactivated polymerase P_R_2, P_R_2^*. To clarify the first reactions, we must remember that in the Toggle Switch, R_1 inhibits R_2 and vice versa. Thus, if n molecules of R_2 bind to the promoter region of P_R_1, they deactivate it. The same process is true for bidding R_1 and P_R_2. The reactions that occur in the promoter are fast (the first line of reactions in (<ref>)); thus, we consider that they have already reached an equilibrium. On the other hand, protein synthesis is a slow process. Therefore, we separate the master equation into a stationary part corresponding to fast reactions and a dynamic part describing slow reactions <cit.>. Ṗ(r_1, r_2,p_1,p_2,p_1^*,p_2^*,t) = α' (p_2+1) P(r_1-1,r_2,p_1,p_2,p_1^*,p_2^*,t)- α' p_1 P(r_1,r_2,p_1,p_2,p_1^*,p_2^*,t) + α' (p_1+1) P(r_1,r_2-1,p_1,p_2,p_1^*,p_2^*,t) - α' p_2 P(r_1,r_2,p_1,p_2,p_1^*,p_2^*,t) + β (r_1+1) P(r_1+1,r_2,p_1,p_2,p_1^*,p_2^*,t)- βr_1 P(r_1,r_2,p_1,p_2,p_1^*,p_2^*,t) + β (r_2+1) P(r_1,r_2+1,p_1,p_2,p_1^*,p_2^*,t) - βr_2 P(r_1,r_2,p_1,p_2,p_1^*,p_2^*,t), 0= Ω^n k_-(p_1^*+1) P(r_1,r_2-n,p_1-1,p_2,p_1^*+1,p_2^*)- k_+p_1 r_2!/(r_2-n)! P(r_1,r_2,p_1,p_2,p_1^*,p_2^*) + k_+(p_1+1) (r_2+n)!/r_2! P(r_1,r_2+n,p_1+1,p_2,p_1^*,p_2^*-1) - Ω^n k_- p_1^* P(r_1,r_2,p_1,p_2,p_1^*,p_2^*) + Ω^n k_-(p_2^*+1) P(r_1-n,r_2,p_1,p_2-1,p_1^*,p_2^*+1) - k_+p_2 r_1!/(r_1-n)! P(r_1,r_2,p_1,p_2,p_1^*,p_2^*) + k_+(p_2+1) (r_1+n)!/r_1! 
P(r_1+n,r_2,p_1,p_2+1,p_1^*,p_2^*-1) - Ω^n k_- p_2^* P(r_1,r_2,p_1,p_2,p_1^*,p_2^*), in these equations, we denote r_1 as the number of elements of the chemical species R_1 and p_1 to the active polymerase, p_1^* to the deactivated polymerase, and we denote the remaining variables similarly. We now take the average over the stationary variables p_1 and p_2 on the equation (<ref>) obtaining and effective master equation for only the variables r_1 and r_2, Ṗ(r_1,r_2,t) = α' ⟨p_2|_⟩s P(r_1-1,r_2,t)- α' ⟨p_1|_⟩s P(r_1,r_2,t) + α' ⟨p_1|_⟩s P(r_1,r_2-1,t) - α' ⟨p_2|_⟩s P(r_1,r_2,t) + β (r_1+1) P(r_1+1,r_2,t)- βr_1 P(r_1,r_2,t) + β (r_2+1) P(r_1,r_2+1,t) - βr_2 P(r_1,r_2,t). On the other hand, the solution of the stationary distribution (<ref>) can be explicitly obtained (see Section 2), and thus we can calculate α' ⟨p_2|_⟩s =αΩ D_1(r_1,Ω) and α' ⟨p_1|_⟩s = αΩ D_1(r_2,Ω) where D_1(x,Ω)= K^n/K^n + ⟨x!/(x-n)!|_⟩x/Ω^n with K≡k_+/k_-. Substituting this result we obtain a closed master equation describing r_1 and r_2 Ṗ(r_1,r_2,t) = αΩ D_1(r_1,Ω) P(r_1,r_2-1,t)- αΩ D_1(r_2,Ω) P(r_1,r_2,t) + αΩ D_1(r_2,Ω) P(r_1-1,r_2,t) - αΩ D_1(r_1,Ω) P(r_1,r_2,t) + β (r_1+1) P(r_1+1,r_2,t)- βr_1 P(r_1,r_2,t) + β (r_2+1) P(r_1,r_2+1,t) - βr_2 P(r_1,r_2,t). Furthermore, using the fact that ⟨f(𝐗)|≈⟩ f(⟨𝐗|)⟩ + ∑_i ∑_j (σ^2(X_i,X_j)/2[ ∂^2 f(𝐗)/∂ X_i ∂ X_j]_𝐗=⟨𝐗|⟩, we approximate D_1(x,Ω)= K^n/K^n + ⟨x!/(x-n)!|⟩/Ω^n ≈K^n/K^n + ⟨x|^⟩n + n(n-1) σ^2(x,x) ⟨x|^⟩n-2/Ω^n . where σ^2(x,x) is the variance of x around its average ⟨x|$⟩. This is the first correction to the Hill function due to fluctuations in species concentrations. With this example, it is clear that introducing the Hill function directly into the master equation does not consider fluctuations (σ^2(x,x)=0). Furthermore, to make the first corrections to the Hill function, the following transformation is necessary. x̂^n →x̂^n + n(n-1) σ^2(x̂,x̂) x̂^n-2. In the following, we refer to (<ref>) as the Hill function with stochastic corrections. § STOCHASTIC GENETIC REGULATION NETWORKS As we are interested in describing the stochastic version of a gene regulatory network, we will briefly explain how a transcription/translation module (TTM) works. When several of these modules are connected, they form a gene regulatory network; more strictly speaking, one of the proteins produced by one of these modules acts as a transcription factor for other TTMs. Figure <ref> is a simplified schematic view of a TTM, where aXprotein is the input of the module acting as a transcription factor, the transcription process occurs in which the mRNA is synthesized, and translation occurs where some newYproteins are synthesized from the mRNA. Hill function is typically used to express the production rate of a protein in terms of its activator. More specifically, it helps to describe the number of active promoters as a function of transcription factor concentration. §.§ Stochastic genetic regulatory network The most simple description of a MTT can be described with the following reactions x H→mRNAk_2→ y mRNA γ_m→ 0, y γ_p→ 0. The first reaction indicates that mRNA is synthesized by transcription factorxwith a reaction rate proportional to the Hill function. Subsequently, it is synthesized into a proteiny, and the last two reactions indicate that the mRNA and protein are degraded and diluted. In the reaction rates,His a Hill function, which can be an activator or a suppressor depending on the system. 
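Before moving on, the correction just derived can be stated compactly. The following hedged Python sketch implements the repressive Hill function with the stochastic correction of Eq. (<ref>), that is, the replacement x^n -> x^n + n(n-1) sigma^2(x,x) x^(n-2) stated above, written here in concentration units; the numerical values at the end are arbitrary and only meant to show the size of the effect.

def hill_repressor(x, K, n):
    """Deterministic repressive Hill function K^n / (K^n + x^n), x a mean concentration."""
    return K**n / (K**n + x**n)

def hill_repressor_corrected(x, var_x, K, n):
    """Repressive Hill function with the stochastic correction used in the text:
    x^n is replaced by x^n + n (n - 1) var(x) x^(n - 2)."""
    return K**n / (K**n + x**n + n * (n - 1) * var_x * x**(n - 2))

# Arbitrary illustrative numbers: mean concentration 1.2, variance 0.6, K = 1, n = 2.
x, var_x, K, n = 1.2, 0.6, 1.0, 2
print("uncorrected:", hill_repressor(x, K, n), "  corrected:", hill_repressor_corrected(x, var_x, K, n))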
Hill function would be a repressor if the protein suppressed gene activation; otherwise, it would be an activator. In a more general sense, many transcription factors can act on the same gene giving effectively a more complex Hill function of the form H_i(𝐩)= ∑_j | A_ij|( p_j/Ω K_j)^n_ij * A_ij/1 + ∑_j | A_ij|( p_j/Ω K_j)^n_ij * A_ij, whereA_ijis the connections matrix with elementsA_il=1iflactivatesi, andA_ij=-1ifjrepressi, otherwiseA_ij=0. The numbersn_ijare called to the Hill coefficients, andK_jare the Michelson-Menten constants. The general expression of the master equation describing a gene regulatory network is given by (<ref>), ∂ P( 𝐦, 𝐩,t)/∂ t= ∑_i ( Ω k_1i( H_i(𝐩,Ω)P(𝐦,𝐩,m_i-1,t)- H_i(𝐩,Ω)P(𝐦, 𝐩,t)) . + m_i+1/τ_1iP(𝐦,𝐩,m_i+1,t) -m_i/τ_1iP(𝐦, 𝐩,t) + k_2im_i P(𝐦,𝐩,p_i-1,t)- k_2im_i P(𝐦, 𝐩,t) . + p_i+1/τ_2iP(𝐦,𝐩,p_i+1,t) -p_i/τ_2iP(𝐦, 𝐩,t)) , whereτ_1i= 1/γ_m_iandτ_2i= 1/γ_p_iare mRNA and protein degradation times, respectively. Indexilabels each MTT, and there is a sum over all network modules. In expression (<ref>), it is customary to use operators to supply the Hill function <cit.> or directly use the deterministic Hill function <cit.>; however, we have the option of using a generalized Hill function with stochastic corrections. Making the LHS of eq. <ref>, the fluctuations in the steady state can been exactly calculated to be η_m_i^2 = ⟨m_i|⟩/Ω^2, η_p_i^2 = ⟨p_i|⟩/Ω^2. This particular expression, which is proportional to the first moment of the variables, originates from the fact that a Poisson distribution purely describes the stationary distribution. In this case⟨m_i|$⟩ and ⟨p_i|$⟩ are the number of molecules, but in terms of concentrations, the fluctuations are proportional to the inverse of the system size in contrast to its square as in (<ref>). To illustrate the proposed analysis and its advantages, we will present two examples. §.§ Repressilator Figure <ref> shows a graphical representation of the repressilator. This simple but important system has been experimentally developed <cit.>. With the help of figure <ref> we build its connection matrixAA_ij= [ 0 0 -1; -1 0 0; 0 -1 0 ], the first row of matrixAindicates that a protein synthesized in module 2 acts as a suppressor in module 0 and similarly in the other modules. The corresponding master equation is ∂ P(𝐦,𝐩,t)/∂ t= ∑_i=1^3( ( k_1^-Ω( H_i(𝐩,Ω)P(𝐦,𝐩,m_i-1,t)-H_i(𝐩,Ω) P(𝐦,𝐩,t) ) ). + m_i+1/τ_1iP(𝐦,𝐩,m_i+1,t) -m_i/τ_1iP(𝐦,𝐩,t) + k_2im_i P(𝐦,𝐩,p_i-1,t)- k_2im_i P(𝐦,𝐩,t) . + p_i+1/τ_2iP(𝐦,𝐩,p_i+1,t) -p_i/τ_2iP(𝐦,𝐩,t)) . The Hill functionsH_i(𝐩,Ω)are repressors. We can now analyze the system using three different approaches. First, we simulated the system dynamics using the Gillespie algorithm, assuming deterministic Hill functions for reaction rates. Second, the description can be performed by approximating the deterministic dynamics in conjunction with TFD using the deterministically derived Hill function. Finally, we can describe the system dynamics using the deterministic approximation and dynamics of the fluctuations but using the Hill function with stochastic correction, an expression similar to (<ref>). A Gillespie simulation of the system is shown in figures <ref>. In this case, we used the deterministic Hill function, and the size of the system is 200 (as in the Elowitz experiment <cit.>). We plotted the protein and mRNA concentrations. This is simply a realization of a stochastic system; thus, the amplitudes of the oscillations are not uniform. 
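A single realization of this kind can be generated with a standard Gillespie (SSA) loop. The sketch below is our own illustrative implementation for the repressilator, using the deterministic repressive Hill function as the transcription propensity, as in the simulations described here; all parameter values are placeholder guesses rather than the ones used for the figures.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative repressilator parameters (placeholder guesses, in molecule numbers).
k1, k2 = 30.0, 5.0        # transcription and translation rates
gm, gp = 1.0, 0.2         # mRNA and protein degradation rates
K, n = 40.0, 2            # repression threshold (molecules) and Hill coefficient
repressor_of = [2, 0, 1]  # gene i is repressed by the protein of gene repressor_of[i]

def hill_repressor(p):
    return K**n / (K**n + p**n)

def gillespie_repressilator(T):
    m = np.array([10.0, 0.0, 0.0])   # mRNA copy numbers
    p = np.array([0.0, 0.0, 0.0])    # protein copy numbers
    t, traj = 0.0, []
    while t < T:
        a = np.concatenate([
            k1 * hill_repressor(p[repressor_of]),  # transcription of m_i
            gm * m,                                # mRNA degradation
            k2 * m,                                # translation into p_i
            gp * p,                                # protein degradation
        ])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)
        r = rng.choice(12, p=a / a0)
        i = r % 3
        if r < 3:
            m[i] += 1
        elif r < 6:
            m[i] -= 1
        elif r < 9:
            p[i] += 1
        else:
            p[i] -= 1
        traj.append((t, *m, *p))
    return np.array(traj)

print(gillespie_repressilator(T=200.0)[-1])   # final time, mRNA and protein numbers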
By creating an ensemble of simulations, the average or deterministic dynamics as well as the fluctuations were obtained (see Fig. <ref>). In Fig. <ref> it is plotted the concentration of the mRNA and the region of the fluctuations. In figure <ref> (a), we plot the analytically obtained deterministic dynamics and deduce the region of fluctuations using FDT <cit.>. Finally, using the Hill function with stochastic correction, we obtain Figure <ref>. b. The most notable result is that the size of the fluctuations is considerably reduced compared to Figures <ref> and <ref>. a. At first sight, this might be counterintuitive, but it shows that the stochastic effect in the reaction rates might have no linear effects on gene regulatory networks that make them robust with respect to intrinsic noise. We can compare the prediction of fluctuations in the steady state of the approximations with the exact result. In fig. <ref>, we plotted the characteristic size of the fluctuations using the Hill function and the Hill function with stochastic corrections, and it is compared with the exact calculation. It is observed that the corrected Hill function provides a much better description of the stochastic system. §.§ Activator-repressor clock A genetic network in which several transcription factors participate is the activator-repressor clock. First, we analyze the system with the Gillespie algorithm using the deterministic Hill function, then we use the FDT with the deterministic Hill function, and finally, we analyze it using stochastic corrections to the Hill function. Figure <ref> presents a graphical representation of the model. The matrix of connections of the system is A_ij= [ 1 -1; 1 0 ]. In module 0, the protein synthesized by module 1 acts as a suppressor, whereas the protein synthesized by module 0 acts as an activator. The protein synthesized by module 0 acts as an activator of module 1. We assume thatk_1i=k_1, alson_11=n_12=n_21=2andn_22=0,β_11= β21=1,β_11= 0.0004so the master equation is ∂ P(𝐦,𝐩,t)/∂ t = ∑_i=1^2( k_1iΩ( H_g^i(𝐩,Ω)P(𝐦,𝐩,m_i-1,t)-H_g^i(𝐩,Ω) P(𝐦,𝐩,t) ). + m_i+1/τ_2iP(𝐦,𝐩,m_i+1,t) -m_i/τ_2iP(𝐦,𝐩,t) + k_3im_i P(𝐦,𝐩,p_i-1,t)- k_3im_i P(𝐦,𝐩,t) + . p_i+1/τ_3iP(𝐦,𝐩,p_i+1,t) -p_i/τ_3iP(𝐦,𝐩,t)) . Where the Hill functions take the form H_g^1(p_1,p_2,Ω)= β_1 (p_1/K_1' Ω)^n_11 + α_0 /1 + (p_1/K_1' Ω)^n_11 + (p_2/K_2 Ω)^n_12, H_g^2(p_1,Ω)= β_2 (p_1/K_1 Ω)^n_21/1 + (p_1/K_1 Ω)^n_21. By employing the Gillespie algorithm to simulate the system and using the deterministic Hill function, we obtain Figure <ref>. By analyzing an ensemble of Gillespie simulations, Figure <ref> (a) is obtained. The figure shows the average dynamics and range of fluctuations. It is worth noting that averaging many fluctuating trajectories from the output of the Gillespie simulations makes the average value tend to be stationary in the long run. This occurs even if a single realization is constantly oscillating. This shows that this approach does not help describe the characteristic dynamics of a single system, as it represents the collective behavior of many systems. The deterministic approximation and the fluctuation-dissipation theorem (FDT) <cit.>, using the deterministic Hill function, provide the characteristic size of the fluctuations, as shown in Fig. <ref>(b). Furthermore, the characteristic dynamics and size of the fluctuations using the corrected Hill functions are shown in Fig. <ref>(c). 
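For reference, the two transcription inputs H_g^1 and H_g^2 used in these simulations can be coded directly. This is a hedged sketch: the constants K_1, K_1', K_2 and the basal term alpha_0 are placeholder values of our choosing, since the text fixes only some of the parameters.

def H_g1(p1, p2, Omega, beta1=1.0, alpha0=4e-4, K1p=1.0, K2=1.0, n11=2, n12=2):
    """Input of module 0: activated by its own protein p1, repressed by p2."""
    x1 = (p1 / (K1p * Omega)) ** n11
    x2 = (p2 / (K2 * Omega)) ** n12
    return (beta1 * x1 + alpha0) / (1.0 + x1 + x2)

def H_g2(p1, Omega, beta2=1.0, K1=1.0, n21=2):
    """Input of module 1: activated by p1 only (n22 = 0)."""
    x1 = (p1 / (K1 * Omega)) ** n21
    return beta2 * x1 / (1.0 + x1)

# Evaluation at illustrative protein numbers and system size.
print(H_g1(p1=150, p2=80, Omega=100), H_g2(p1=150, Omega=100))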
Again, we see that the reaction rate fluctuations have feedback effects that compensate for fluctuations in the other variables. Fluctuations around the steady state are shown in fig. <ref> and compared with the exact result. § RESULTS AND CONCLUSIONS The proposed second-order expansion of the reaction rates around the average dynamics of a stochastic system allowed us to describe the mesoscopic dynamics and fluctuations of the system more accurately than the Linear Noise Approximation and the commonly used dissipation fluctuation theorem (<cit.>). Furthermore, the approach describes the dynamics of the covariances exactly for second-order reactions. This approximation allowed us to obtain a corrected Hill function that considers the change in the effective reaction rate that it describes due to the underlying fluctuations in the enzymatic processes. The corrected Hill function can be introduced straightforwardly to describe gene regulatory networks, where standard Hill functions are typically used. As the corrected Hill function depends on the average and variance of the concentrations, it is not introduced in the master equation but in the deterministic description (mesoscopic dynamics) and characterization of the size of its fluctuations. We used the proposed algorithm to analyse the dynamics and fluctuations in gene regulatory networks and compare the different approximations. Specifically, by examining the size of the fluctuations using the corrected Hill function and the fluctuations obtained by the Gillespie algorithm and the TFD with the traditional Hill function, it was observed that the fluctuations in Hill-type propensity rates, have a feedback effect that reduces the intrinsic fluctuations in the dynamics of the species involved in gene regulatory networks. In particular, this effect was observed in both the repressilator and the activator-repressor clock. This approach will allow us to study the intrinsic noise effect in more complex and larger network structures, as it is significantly more computationally efficient than the Gillespie algorithm. § ACKNOWLEDGMENTS One of the authors appreciates the support provided by CONACYT during the course of the master's degree. Partial financial support was received from VIEP-BUAP 2023 [00226] § STOCHASTIC DERIVATION OF HILL FUNCTION Now, our objective is to derive the Hill function from a stochastic process. If we remember that the derivation or deterministic function relies on the steady state, we will also make a similar choice, although in this case, we will look at the stationary distribution. We use the method presented in Section 2, and the stationary distribution allows us to determine the fluctuations of the system. We suppose that we have a proteinpto whichnenzymesecan bind, formings=pe_n, we also ask that the enzymeearrive or go to the system and this can also degrade, so our system is described by the following reversible processes ∅ [λ_-]λ_+⇄ e, p + n e [k_-]k_+⇄ s. Note that the derivations made in this section are valid only when only two previous reactions are found in the system. 
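Before carrying out the derivation, this two-reaction network can also be simulated directly. The hedged sketch below (our own illustration, with arbitrary parameter values) runs a Gillespie simulation of the enzyme arrival/degradation and binding/unbinding reactions and compares the time-averaged bound fraction <s>/N_0 with the deterministic Hill function e_hat^n/(K^n + e_hat^n) introduced in the main text.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (our choice): n enzymes e bind a protein p to form s = p e_n.
n = 2
Omega = 50.0
lam_p, lam_m = 2.0, 1.0     # enzyme arrival / degradation rates (lambda_+, lambda_-)
k_p, k_m = 1.0, 4.0         # binding / unbinding rates (k_+, k_-)
N0 = 20                     # conserved total number of proteins, p + s = N0

def falling(e, k):
    return np.prod([e - i for i in range(k)]) if e >= k else 0.0

def time_averaged_s(T):
    e, s, t, s_acc = 0, 0, 0.0, 0.0
    while t < T:
        p = N0 - s
        rates = np.array([
            lam_p * Omega,                        # 0 -> e
            lam_m * e,                            # e -> 0
            k_p * p * falling(e, n) / Omega**n,   # p + n e -> s
            k_m * s,                              # s -> p + n e
        ])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        s_acc += s * dt
        t += dt
        r = rng.choice(4, p=rates / total)
        if r == 0:
            e += 1
        elif r == 1:
            e -= 1
        elif r == 2:
            e, s = e - n, s + 1
        else:
            e, s = e + n, s - 1
    return s_acc / t

e_hat = lam_p / lam_m                       # stationary enzyme concentration <e>/Omega
H_det = e_hat**n / (k_m / k_p + e_hat**n)   # deterministic Hill function, K^n = k_-/k_+
print("simulated <s>/N0 :", time_averaged_s(500.0) / N0)
print("deterministic H  :", H_det)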
As we have already explained, the master equation of a chemical network can be obtained from the formalism developed in section 2 (multivariable life and death processes), for this, we first define our stoichiometric coefficient matrices and the stoichiometric matrix α_ij = [ 0 0 0; n 1 0 ] , β_ij = [ 1 0 0; 0 0 1 ], Γ_ij = [ 1 -n; 0 -1; 0 1 ], With the help of these, we calculate the propensity rates and we get t_1^+ = λ_+ , t_1^-= λ_- e1/Ω, t_2^+ = k_+ p e!/(e-n)!1/Ω^n+1, t_2^-= k_- s1/Ω. (The subscript 1 corresponds to the reaction on the left of (<ref>), while 2 corresponds to the right.) To find the master equation of the system, we substitute all the quantities we have calculated, we finally get our master equation Ṗ(e,p,s,t) = k_-(s+1) P(e-n,p-1,s+1,t)- k_+p e!/(e-n)!1/Ω^n P(e,p,s,t) + k_+(p+1) (e+n)!/e!1/Ω^n P(e+n,p+1,s-1,t) - k_- s P(e,p,s,t) + λ_+Ω P(e-1,p,s,t) - λ_- e P(e,p,s,t) + λ_- (e+1) P(e+1,p,s,t) - λ_+Ω P(e,p,s,t), if we consider that our system reaches equilibrium very quickly, then it is enough to look at the stationary distribution so that the previous relationship will be zero, then we need to find the stationary distribution, for this, we use the method that we have presented previously since we have a conserved quantity, the total number of active and inactive proteins is constant, that isN_0= p+s(to make contact with the deterministic part it is convenient to defineN_0= N_0d Ω, beingN_0d the deterministic initial condition), if we substitute the constant we will obtain 0 = k_-(s+1) P(e-n,s+1)- k_+(N_0-s) e!/(e-n)!1/Ω^nP(e,s) + k_+(N_0-s+1) (e+n)!/e!1/Ω^nP(e+n,s-1) - k_- s P(e,s) + λ_+Ω P(e-1,s) - λ_- e P(e,s) + λ_- (e+1) P(e+1,s) - λ_+Ω P(e,s). Continuing with the method, we will finally have the stationary distribution P(e,p,s) = 1/(1+ k_+/k_-( λ_+/λ_-)^n)^N_0( k_+/k_-( λ_+/λ_-)^n)^s N_0! /s!(N_0 - s)!1/e!(Ωλ_+/λ_-)^e e^-Ωλ_+/λ_-δ_p,N_0 -s. This distribution is the multiplication of a Poisson and a variant of a binomial, which corresponds to the arrival of the enzyme in the system, whereas the variant of the binomial indicates the joining and separation of the proteins and enzymes. With this last equation, we calculate the average of the variablesande, ⟨s| ⟩= N_0 k_+/k_-( λ_+/λ_-)^n/1+ k_+/k_-( λ_+/λ_-)^n, ⟨e|=⟩Ωλ_+/λ_-. If we remember that the Hill function can also be calculated asH= ⟨s|⟩/⟨s+p|⟩= ⟨s|⟩/N_0, supporting us from the two previous relationships we have H= k_+/k_-( λ_+/λ_-)^n/1+ k_+/k_-( λ_+/λ_-)^n= 1/Ω^n⟨e|^⟩n/k_-/k_++ 1/Ω^n⟨e|^⟩n = ê^n/K^n+ê^n, Thus, we recover the deterministic Hill function withK^n=k_-/k_+andê=⟨e|⟩/Ω. We realize that the deterministic Hill function can also be derived from the stochastic formalism, and we can calculate how our function fluctuates in the stationary state and calculate the fluctuations ofs/N_0because this variable is, in fact, the Hill function. Using the steady-state distribution, the fluctuations of the Hill function are as follows: η(H)=1/Ω√(σ^2(s/N_0))= 1/Ω√(H(1-H) /Ω N_0^d K^n ) , We usedN_0= N_0d Ω, in which it can be seen that when the system becomes large enough, the fluctuations disappear becauseΩappears in the denominator; whenHapproaches a value of 0 or 1, the fluctuations become small. 
In other words, because there are few enzymese(the value ofHis close to 0), few can bind to the protein, creating a small fluctuation in the number of active proteins, whereas when there are many enzymese(the value ofHapproaches 1), the system has many active proteins but also many proteins available to bind to proteins, so the fluctuations become small. Note that whenH=1/2the fluctuations reach their maximum value. A similar derivation can be made for the case of a repressor, in this case, we would analyze a set of biochemical reactions similar to those we started in this section, and since the process is very similar (we omit the calculations) the Hill function would be obtained for a repressor and the same magnitude of the fluctuations although these would be given by η(D)=1/Ω√(σ^2(s/N_0))= 1/Ω√(D(1-D) /Ω N_0^d K^n ) . BecauseH+D=1, we can say that the fluctuations inHandDhave the same magnitude. We performed a simulation of the process from which we started using the Gillespie algorithm <cit.> (figure <ref>), where the initial condition of moleculeseis zero, which resembles the Hill function, although what really matters is the steady state. § GENERAL HILL FUNTION In this appendix, we derive the Hill function exclusively when several proteins act as transcription factors. In figure <ref> we can see that there are three ways in which mRA is synthesized: basal transcription is always present, whereas when transcription factors appear, transcription is faster (activator) or inhibits it (repressor). This model was inspired by <cit.>. For this, we assume that two types of proteinsP_1andP_2act as transcription factors as activators and suppressors respectively, all the processes involved are given in the following biochemical reactions Basal union: G + P_r [λ_1^-]λ_1^+⇄ GP_r. Activator transcription factor: n_1P_1 + P_r [k_1^-]k_1^+⇄ P_rp_1. Suppressor transcription factor: n_2 P_2 + P_r [k_2^-]k_2^+⇄ P_rp_2. Binding with transcription factors: G + P_rp_1[λ_2^-]λ_2^+⇄ GP_rp_1. Basal mRNA synthesis: GP_r δ_1→ R. mRNA synthesis with transcription factors: GP_rp_1δ_2→ R.Grepresents the gene,P_ris the promoter,Ris the mAR,GP_ris the gene binding to the promoter,P_rp_1andP_rp_2indicate when the transcription factor has joined the promoter. The first reaction at the top is a reversible process, indicating that the gene binds to the promoter regardless of the presence of transcription factors. The second reaction is also a reversible process, in which transcription factors bind to the promoter in such a way that they create a new molecule that accelerates mRNA synthesis. In the third reaction, a transcription factor creates a new molecule that cannot bind to a gene. In the fourth reaction,Gbinds to theP_rp_1molecule. The last two reactions indicate how mRNA is synthesized by a one-way process. We used the law of mass action to build a set of differential equations that describe the concentrations of our system, wheregrepresents the gene concentration,p_ris the promoter concentration,ris the mAR concentration, s_1is the concentration ofGP_r,s_2ands_3at concentrations ofP_rp_1andP_rp_2respectively,s_4is the concentration ofGP_rp_1. 
The differential equations we obtain are the following, d s_1/dt= λ_1^+ gp_r - λ_1^-s_1, d s_2/dt= k_1^+ p_1^n_1p_r - k_1^-s_2 - λ_2^+ gs_2 + λ_1^-s_4, d s_3/dt= k_2^+ p_2^n_2 p_r - k_2^-s_3, d s_4/dt= λ_2^+ gs_2 - λ_1^-s_4, d p_r/dt= -λ_1^+ gp_r + λ_1^- - k_1^+ p_1^n_1 + k_1^-p_r - k_2^+ p_2^n_2 + k_2^-p_r, d r/dt= δ_1 s_1 + δ_2 s_4, From these differential equations, we can see that we have a conserved quantity, and that the initial concentration of the promoters is constant,p_0= p_r + s1+s2+s3 + s4. Of the reactions that we have, we consider that the first 4 are practically in equilibrium, so the first four differential equations are considered in equilibrium, from which we obtain the following conditions s_1= λ_1^+/λ_1^- g p_r= λ_1 g p_r , s_2= k_1^+/k_1^- p_1^n_1 p_r = p_1^n_1/ K_1^n_1 p_r, s_3= k_2^+/k_2^- p_2^n_2 p_r = p_2^n_2/ K_2^n_2p_r, s_4= λ_2^+/λ_2^- g s_2 = λ_2 g s_2. Our goal is to find a relationship that describess_1ands_4in terms of transcription factors and basal synthesis since these are closely related to mRNA synthesis as can be seen in (<ref> ), then with the help of the above relations we can write the following H_1= s_1/p_0= s_1/p_r+s_1+s_2+s_3+s_4, H_2= s_4/p_0= s_4/p_r+s_1+s_2+s_3+s_4, when evaluating the quantities obtained in (<ref>) these relations we obtain H_1= λ_1 g/1 + λ_1 g + p_1^n_1/ K_1^n_1 + p_2^n_2/ K_2^n_2 + λ_2 g p_1^n_1/ K_1^n_1, H_2= λ_2 g p_1^n_1/ K_1^n_1/1 + λ_1 g + p_1^n_1/ K_1^n_1 + p_2^n_2/ K_2^n_2 + λ_2 g p_1^n_1/ K_1^n_1. Remember that the synthesis of the mRNA is given by the termδ_1 s_1 + δ_2 s_4, so now we will have δ_1 s_1 + δ_2 s_4 = δ_1 p_0 s_1/p_0 + δ_2 p_0 s_4/p_0 = δ_1 p_0 H_1 + δ_2 p_0 H_2 = H, where H= δ_1 p_0 λ_1 g + δ_2 p_0 λ_2 g p_1^n_1/K_1^n_1/1 + λ_1 g + p_1^n_1/ K_1^n_1 + p_2^n_2/ K_2^n_2 + λ_2 g p_1^n_1/ K_1^n_1 = β_0 K_0 + β_11p_1^n_1/K_1^n_1/1 + K_0 + β_21p_1^n_1/ K_1^n_1 + β_22p_2^n_2/ K_2^n_2 , (β_0 = δ_1 p_0 ,K_0 = λ_1 g ,β_11= δ_2 p_0 λ_2 g ,β_21=(1 + λ_2 g ), β_22=1). The functionHthat we defined corresponds to the Hill function of our system, which indicates how the ARm is synthesized in terms of the transcription factorsP_1andP_2, in addition to a basal constantK_0.β_0 K_0denotes the basal synthesis rate. Now we can give a generalization of this expression, for this we suppose that we have a system withmproteins that acts both as activators or suppressors, then our generalized Hill function (H_g) will be of the form H_g= β_0 K_0 + ∑_j β_1j(p_j^n_j/K_j^n_j)^q/1 + K_0 + ∑_j β_2jp_j^n_j/K_j^n_j, we addedqto the numerator because if the protein acts as an activator,q=1, whereas if it acts as a suppressor,q=0. In this way, we have made a generalization of the Hill function, which will be very useful for analyzing any transcription-translation module in which many proteins participate. We can even introduce stochastic corrections to this Hill function, it will only be necessary to change top_j^n_jas follows p_j^n_j→ p_j^n_j + σ^2(p_j,p_j) n_j(n_j-1)p_j^n_j-2, This change is made according to the manner in which the Hill function with stochastic corrections is obtained. § HILL FUNCTION IN A STOCHASTIC PROCESS A methodology that explains how to properly introduce a Hill function in any stochastic biochemical reaction, such that the result can be generalized to other types of chemical reaction networks, is needed. To carry out the derivation, we rely on the ideas already presented up to this point and <cit.>, where a separation is made between slow and fast variables. 
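Returning briefly to the generalized Hill function H_g of Eq. (<ref>), it and the stochastic correction just described can be implemented in a few lines. This is a hedged sketch: the function name, argument layout and the numerical values in the example call are our own choices.

import numpy as np

def generalized_hill(p, var_p, n, K, q, beta1, beta2, beta0=1.0, K0=0.0,
                     stochastic_correction=True):
    """Generalized Hill function H_g for several transcription factors, Eq. (<ref>).

    p, var_p : mean concentrations of the transcription factors and their variances
    n, K     : Hill coefficients and Michaelis-Menten-type constants
    q        : 1 for an activator, 0 for a suppressor (the exponent in the numerator)
    """
    p, var_p, n, K = map(np.asarray, (p, var_p, n, K))
    pn = p**n
    if stochastic_correction:
        # replacement p^n -> p^n + n (n - 1) var(p) p^(n - 2) stated above
        pn = pn + n * (n - 1) * var_p * p**(n - 2)
    x = pn / K**n
    num = beta0 * K0 + np.sum(np.asarray(beta1) * x**np.asarray(q))
    den = 1.0 + K0 + np.sum(np.asarray(beta2) * x)
    return num / den

# One activator (q = 1) and one suppressor (q = 0); all numbers are illustrative.
print(generalized_hill(p=[0.8, 0.3], var_p=[0.05, 0.02], n=[2, 2], K=[1.0, 1.0],
                       q=[1, 0], beta1=[1.0, 0.0], beta2=[1.0, 1.0], K0=0.1))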
For our derivation, we first suppose that the probability distribution of our system is composed of the multiplication of a stationary part and a dynamic part, in such a way that we will have something of the following form P(𝐱,t)= P_s(x_1,..,x_s)P(x_s+1,...,x_l,t), in <cit.> conditional probabilities are used, but because we directly consider that a part of the process is stationary, a separation between them can be made; in this case, we can assume that it is stationary because the concentrations of the variables associated with it tend to be very fast and asymptotically to the stationary value (the behavior of the Hill function). Under these considerations, the master equation of the system will be divided into a dynamic and stationary part that will be as follows P_s(S_1,..,S_s) dP(S_s+1,...,S_j,t)/dt= Ω∑ _i ( t_i^-(S_j+Γ_ji) P_s(S_1+Γ_ii,..,S_s +Γ_si) P(S_s+1+Γ_s+1,i,... S_j+Γ_j,i,t) . - t_i^+(S_j) P_s(S_1,..,S_s) P(S_s+1,...,S_j,t) +t_i^+(S_j-Γ_ji) P_s(S_1-Γ_ii,..,S_s -Γ_si) P(S_s+1-Γ_s+1,i,... S_j -. Γ_j,i,t) - t_i^-(S_j) P_s(S_1,..,S_s) P(S_s+1,...,S_j,t) ), P(S_s+1,...,S_j,t) dP_s(S_1,..,S_s)/dt = Ω∑ _i=rap( t_i^-(S_j+Γ_ji) P_s(S_1+Γ_ii,..,S_s +Γ_si) P(S_s+1+Γ_s+1,i,... S_j+Γ_j,i,t) . - t_i^+(S_j) P_s(S_1,..,S_s) P(S_s+1,...,S_j,t) +t_i^+(S_j-Γ_ji) P_s(S_1-Γ_ii,..,S_s -Γ_si) P(S_s+1-Γ_s+1,i,... S_j -.Γ_j,i,t) - t_i^-(S_j) P_s(S_1,..,S_s) P(S_s+1,...,S_j,t) )=0. (Note that only fast reactions are considered in the second equation) Remember that the stationary part is the one associated with the Hill function, that is, this part is given by the following reaction p + n e [k_-]k_+⇄ pe_n. These reactions are generally fast; therefore, the Hill function is determined if a deterministic analysis is performed. However, because we are using a stochastic approach, we would have to proceed in another way. Assuming that we are already in the stationary state, from these reactions and with the help of (<ref>) we would obtain something similar to the following k_-(s+1) P_s(e-n,p-1,s+1)- k_+p e!/(e-n)!1/Ω^n P_s(e,p,s) + k_+(p+1) (e+n)!/e!1/Ω^n P_s(e+n,p+1,s-1) - k_- s P_s(e,p,s)=0. (e,pandsare the number of moleculese,pandpe_nrespectively) Now, the question is what to do with this equation and how the Hill function would emerge, and there are two methods, as we will see below. §.§.§ Exact Hill function In this first derivation, from the equation (<ref>) a stationary distribution is obtained, we use the method that we explained to obtain it, P_s(e,p,s) = (1/Ω^nk_+/k_-⟨e!/(e-n)!|_⟩e )^sN_0 ! /s! (N_0 -s )! P_s(0). Now we average the first equation of (<ref>) with respect to this stationary distribution, we define the average with respect to the stationary part as⟨t^± |_⟩Sof this way our first equation becomes dP(S_s+1,...,S_j,t)/dt= Ω∑ _i ( ⟨t_i^-(𝐒+Γ_i)|_⟩s P(S_s+1+Γ_s+1,i,... S_j+Γ_j,i,t) - ⟨t_i^+(𝐒)|_⟩s P(S_s+1,...,S_j,t) . +.⟨t_i^+(𝐒-Γ_i)|_⟩s P(S_s+1-Γ_s+1,i,... S_j-Γ_j,i,t) - ⟨t_i^-𝐒|_⟩s P(S_s+1,...,S_j,t) ), This equation was used to model the dynamics of this type of system; only now are the reaction rates averaged with respect to the stationary part, for more details on the developments that can be reviewed <cit.>. Hill functions are expected to appear in the averages with respect to the stationary part, as observed in the case of Toggle Switch. 
However, because averages appear within this expression, it is not practical to use Gillespie's algorithm with it; nevertheless, we can still find the average concentrations and quantify the intrinsic fluctuations by using the deterministic approximation and the fluctuation-dissipation theorem. §.§.§ Hill function variants In the case in which we want to perform Gillespie-type simulations, and also to explain how the deterministic Hill function appears, we can derive other types of Hill functions. From (<ref>) we find the following two relations s+1= p 1/K^ne!/(e-n)!1/Ω^nP(e,p,s)/P(e-n,p-1,s+1), s= (p+1) 1/K^n(e+n)!/e!1/Ω^nP(e+n,p+1,s-1)/P(e,p,s), where we define k_-/k_+= K^n. Several assumptions can be made to determine the Hill function, involving the size of the system and/or the shape of the distribution (these conditions are chosen so that the previous equations can be treated as algebraic), as outlined in Table <ref>. In this table we have four types of Hill functions; the type depends on the assumptions that are made: if we make none, we have the stochastic case; if assumptions are made on the probability distribution, we have the semi-stochastic case, and so on. These names were chosen according to the degree of relevance of the stochastic ingredients in each formalism. For example, in the stochastic case one mainly considers the master equation, which is a differential equation for distributions. Semi-deterministic case First, we analyze the semi-deterministic case, assuming that the system is sufficiently large. The probability distribution can then be approximated as P(e,p,s)/P(e-n,p-1, s+1) ≈1. We consider two cases because we have two expressions in (<ref>) and p+s=N_0: a) For this case, we use the first equation of (<ref>); s and N_0 are very large (because the system is large), so s+1 ≈ s and also P(e,p,s)/P(e-n,p-1,s+1)≈ 1, so we would have s=p 1/K^ne!/(e-n)!1/Ω^n, and substituting in the definition of the Hill function H= s/(s+p) we obtain H_1=e!/Ω^n K^n (e-n)!+ e!. b) For this other case, we use the second equation of (<ref>); p and N_0 are very large (because the system is large), so p+1 ≈ p. In other words, s has to be very small because of the relation N_0=s+p. Furthermore, we suppose P(e+n,p+1,s-1)/P(e, p,s)≈ 1, so we would have s=p 1/K^n(e+n)!/e!1/Ω^n, and substituting in the definition of the Hill function we get H_2=(e+n)!/Ω^n K^n e!+ (e+n)!. In this way we have obtained our Hill function in two regions, one valid when s is small and the other when it is large, so we define a Hill function for all possible values of s as follows H_sd= e!/Ω^n K^n (e-n)!+ e! +( (e+n)!/Ω^n K^n e!+ (e+n)!-e!/Ω^n K^n (e-n)!+ e!) θ(1/2-s/N_0), where a Heaviside function was introduced to separate the two regions, so that the function reduces to H_1 or H_2 depending on the value of s and thus covers all possible values of s. We call this the semi-deterministic Hill function because the assumptions made on the probabilities are almost deterministic. Moreover, when e and/or Ω become very large, H_sd≈ H_d= e^n/Ω^n/K^n + e^n/Ω^n, so the deterministic Hill function is recovered in this limit.
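The piecewise function H_sd and its large-Ω limit H_d can be compared numerically. The following hedged sketch (our own, with illustrative values of e, Ω, K and n) evaluates H_1, H_2, H_sd and H_d; the factorials are taken literally, so e is an integer with e >= n.

from math import factorial

def H1(e, Omega, K, n):
    a = factorial(e) // factorial(e - n)       # e!/(e-n)!
    return a / (Omega**n * K**n + a)

def H2(e, Omega, K, n):
    b = factorial(e + n) // factorial(e)       # (e+n)!/e!
    return b / (Omega**n * K**n + b)

def H_sd(e, s, N0, Omega, K, n):
    """Semi-deterministic Hill function: H2 when s/N0 < 1/2, H1 otherwise."""
    theta = 1.0 if s / N0 < 0.5 else 0.0       # Heaviside theta(1/2 - s/N0)
    return H1(e, Omega, K, n) + (H2(e, Omega, K, n) - H1(e, Omega, K, n)) * theta

def H_d(e, Omega, K, n):
    x = (e / Omega)**n
    return x / (K**n + x)

e, Omega, K, n, N0 = 30, 20, 1.0, 2, 10
for s in (2, 8):
    print("s =", s, " H_sd =", round(H_sd(e, s, N0, Omega, K, n), 4),
          " H_d =", round(H_d(e, Omega, K, n), 4))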
Stochastic case In the case that we want to use a stochastic Hill function, we can construct a completely stochastic one, for this we return to the relations found in (<ref>) but without making any assumptions, we operate in a similar way to the case above, to cover the different values ofs, we finally obtain our stochastic Hill function H_s = e!/Ω^n K^n (e-n)!P(e-n,s+1)/P(e,s)+ e! + ( (e+n)!/Ω^n K^n e! P(e,s)/P(e+n,s-1) + (e+n)!-e!/Ω^n K^n (e-n)!P(e-n,s+1)/P(e,s)+ e! ) θ(1/2-s/N_0). This is the stochastic Hill function, which depends on the probability distribution of the system and, using this function as defined within the master equation, becomes more complex. Semi-Stochastic case Because it is more complicated to use the Hill function in a stochastic process, we approximated the probabilities. So we suppose the following P(e+e_1,s+s_1)/P(e,s)≈ e^-( e_1+ s_1/Ω), We used this approximation because when the system is large, the binomial distribution that appears in the steady state tends to be a Poisson distribution, and whenΩis sufficiently large, we find the deterministic assumption of cases a) and b). Substituting these relations in the previous equation we obtain our semi-stochastic Hill function, because we make a consideration on the distributionP(e,s), H_ss = e!/e^( n-1/Ω)Ω^n K^n (e-n)!+ e! + ( (e+n)!/e^( n-1/Ω)Ω^n K^n e! + (e+n)!-e!/e^( n-1/Ω)Ω^n K^n (e-n)!+ e! ) θ(1/2-s/N_0). We call this function the semi-stochastic Hill function because we made a consideration only on the probability distribution, this function coincides with the semi-deterministic whenn=1or when the value ofΩis very great, lim_Ω→∞ H_ss = e^n/Ω^n/K^n + e^n/Ω^n=H_d, Therefore, to perform simulations of a certain stochastic system and when the system size is small, we recommend using the semi-stochastic Hill function because it represents the system more faithfully. This function is considerably easier to use in stochastic simulations. To observe the behavior of the Hill functions that we have calculated,H_d,H_sdandH_ss, we made some graphs that can be seen in figure <ref>, in which we notice that whenn=1it is observed that the three functions are practically the same, and when the value ofeincreases, the values of the three are almost identical, whereas at small values, there is a large difference, as shown in the figures. Another feature they share is that they all practically become step functions whennis large. However, the location where the step varies, because factorials appear is semi-deterministic or semi-stochastic. The considerations we have made for the appearance of Hill functions may not be completely fulfilled (even more so when the size of the system is small). Therefore, if one wants a process that is as realistic as possible, it is better to use the relations presented at the beginning of this subsection. However, other complications are possible if the constants are unknown in detail. Therefore, the best approach would be to use an exact derivation. XXBuloaRoces, M. E. Á. B., Martínez-García, J. C., Dávila-Velderrain, J., Domínguez-Hüttinger, E., & Martínez-Sánchez, M. E. (2018). Modeling Methods for Medical Systems Biology. AlonAlon, U. (2019). An introduction to systems biology: design principles of biological circuits. CRC press. WalWalczak, A. M., Mugler, A., & Wiggins, C. H. (2012). Analytic methods for modeling stochastic regulatory networks. Computational Modeling of Signaling Networks, 273-322.k. PaulPaulsson, J. (2005). Models of stochastic gene expression. 
Physics of life reviews, 2(2), 157-175. ThomasThomas, P., Straube, A. V., & Grima, R. (2012). The slow-scale linear noise approximation: an accurate, reduced stochastic description of biochemical networks under timescale separation conditions. BMC systems biology, 6(1), 1-23. UriGómez-Uribe, C. A., Verghese, G. C., & Tzafriri, A. R. (2008). Enhanced identification and exploitation of time scales for model reduction in stochastic chemical kinetics. The Journal of chemical physics, 129(24), 244112. Santi Santillán, M. (2014). Chemical kinetics, stochastic processes, and irreversible thermodynamics (Vol. 2014). Heidelberg: Springer. GarGardiner, C.W., et al.: Handbook of Stochastic Methods vol. 3. Springer, Berlin; New York (2004) TomThomas, P., Matuschek, H., & Grima, R. (2013). How reliable is the linear noise approximation of gene regulatory networks?. BMC genomics, 14(4), 1-15. GillespieGillespie, D. T. (1976). A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of computational physics, 22(4), 403-434. Lecca Lecca, P. (2013). Stochastic chemical kinetics. KurtzAnderson, D. F., Craciun, G., & Kurtz, T. G. (2010). Product-form stationary distributions for deficiency zero chemical reaction networks. Bulletin of mathematical biology, 72(8), 1947-1970. Ramon GRIMA, Ramon. Linear-noise approximation and the chemical master equation agree up to second-order moments for a class of chemical systems. Physical Review E, 2015, vol. 92, no 4, p. 042124. GomezGomez-Uribe, C. A., & Verghese, G. C. (2007). Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions through coupled mean-variance computations. The Journal of chemical physics, 126(2), 024109. ScotScott, M. (2012). Applied stochastics Processes in science and engineering Sauro Iglesias, P. A., & Ingalls, B. P. (Eds.). (2010). Control theory and systems biology. MIT press. EloElowitz, M., and Leibler, S. (2000). A synthetic oscillatory network of transcriptional regulators. Nature 403:335–338. RS Loinger, A., & Biham, O. (2007). Stochastic simulations of the repressilator circuit. Physical Review E, 76(5), 051917. TSSLoinger, A., Lipshtat, A., Balaban, N. Q., & Biham, O. (2007). Stochastic simulations of genetic switch systems. Physical Review E, 75(2), 021904. VecchioMDel Vecchio, D., & Murray, R. M. (2014). Biomolecular feedback systems. In Biomolecular Feedback Systems. Princeton University Press.
http://arxiv.org/abs/2307.02822v1
20230706074243
A note on stable toric sheaves of low rank
[ "Carl Tipler" ]
math.AG
[ "math.AG", "14M25" ]
Stable toric sheaves of low rank] A note on stable toric sheaves of low rank C. Tipler]Carl Tipler Univ Brest, UMR CNRS 6205, Laboratoire de Mathématiques de Bretagne Atlantique, France carl.tipler@univ-brest.fr Kaneyama and Klyachko have shown that any torus equivariant vector bundle of rank r over ^n splits if r < n. In particular, any such bundle is not slope stable. In contrast, we provide explicit examples of stable equivariant reflexive sheaves of rank r on any polarised toric variety (X,L), for 2≤ r< (X)+((X)), and show that the dimension of their singular locus is strictly bounded by n-r. [ [ August 1, 2023 ================== § INTRODUCTION In his study of low codimension subvarieties of ^n, Hartshorne conjectured that for n≥ 7, any rank 2 vector bundle on ^n should split (see <cit.>). While this conjecture still remains open in general, a lot of progress have been made in the equivariant context. Considering ^n as a toric variety, Kaneyama <cit.> and Klyachko <cit.> have shown that any torus equivariant vector bundle of rank r<n over ^n splits as a direct sum of line bundles. More recently, Ilten and Süss extended this result for bundles equivariant with respect to a lower rank torus action <cit.>. As split vector bundles are not simple, they are in particular not slope stable (see Section <ref> for precise definitions). Reflexive sheaves can be considered as midly singular versions of locally free ones <cit.>. For a reflexive sheaf over a complex variety X, we denote by ()⊂ X the singular locus of , that is the complement in X of the open set where is locally free. In contrast with the previously cited results, we have the following theorem. Let (X,L) be a smooth polarised toric variety of dimension n and Picard rank p. Then, for any 2≤ r < n+p, there is an equivariant stable reflexive sheaf _r of rank r on (X,L). Moreover, if r < n, its singular locus satisfies (Sing(_r)) < n - r and if r≥ n, _r is locally free. Allowing for singularities provides a much greater flexibility in the constructions of (equivariant) sheaves, and the above result motivates the following question : is there a lower bound on the dimension of the singular locus of a stable equivariant reflexive sheaf of low rank on a toric variety? In Proposition <ref>, we show that (())=n-3 for any rank 2 equivariant stable sheaf on ^n, n≥ 3. Thus in that case the bound from Theorem <ref> is actually 'optimal', but also the worst bound one could hope for, given that the singular locus of a reflexive sheaf on a smooth variety is always of codimension greater or equal to 3. On the other hand, Dasgupta, Dey and Khan provided examples of rank 2 stable equivariant bundles over specific polarised Bott towers <cit.>. Thus, if such a lower bound existed, it would depend on invariants of (X,L). We believe that it would be interesting to understand better the relationship between stability and singularities for (equivariant) reflexive sheaves over (toric) varieties, and this note is a first step in that direction. §.§ Acknowledgments The author would like to thank Achim Napame for his careful reading of the first version of the paper and his comments. The author is partially supported by the grants MARGE ANR-21-CE40-0011 and BRIDGES ANR–FAPESP ANR-21-CE40-0017. § BACKGROUND Let X be a smooth and complete toric variety of dimension n over . We will use the standard notations from toric geometry, following <cit.>. 
In particular, we denote by T_N=N⊗_^* the torus of X, N the rank n lattice of its one-parameter subgroups, M=_(N,) its character lattice and Σ its fan of strongly convex rational polyhedral cones in N_=N⊗_ (see <cit.>). The variety X is then covered by the T_N-invariant affine varieties U_σ=([M∩σ^∨]), for σ∈Σ. §.§ Equivariant sheaves Let α : T_N × X → X, π_1 : T_N× X → T_N and π_2 : T_N × X → X be the T_N-action, the projection on T_N and the projection on X respectively. A coherent sheaf on X is T_N-equivariant (or equivariant for short) if there is an isomorphism φ : α^*→π_2^* satisfying some cocycle condition (see for example <cit.>). Klyachko provided a simple description of equivariant reflexive sheaves on toric varieties <cit.> (see also <cit.>). Any such sheaf is uniquely described by a family of filtrations that we denote (E,E^ρ(i))_ρ∈Σ(1),i∈. Here, E is a finite dimensional complex vector space of dimension (), and for each ray ρ∈Σ(1), (E^ρ(i))_i∈ is a bounded increasing filtration of E (note here that we will use increasing filtrations as in <cit.>, rather than decreasing ones as in <cit.>). Then, one recovers an equivariant reflexive sheaf by setting for each σ∈Σ : Γ(U_σ, ):=⊕_m∈ M⋂_ρ∈σ(1) E^ρ(⟨ m,u_ρ⟩)⊗χ^m where u_ρ∈ N is the primitive generator of ρ and ⟨·,·⟩ the duality pairing. Finally, from <cit.> (or <cit.>), will be locally free if and only if the family of filtrations (E^ρ(∙))_ρ∈Σ(1) satisfies Klyachko's compatibility criterion, namely that for each σ∈Σ, there exists a decomposition E= ⊕_[m]∈ M/(M∩σ^⊥) E^σ_[m] such that for each ray ρ∈σ(1) in σ : E^ρ(i)=⊕_⟨ m,u_ρ⟩≤ i E^σ_[m]. §.§ Slope stability Assume now that L→ X is an ample line bundle on X. Recall that a reflexive sheaf on X is said to be slope stable if for any coherent and saturated subsheaf ⊂ with ()<(), one has μ_L()<μ_L(), where for any coherent torsion-free sheaf , the slope μ_L() is the intersection number μ_L()=c_1()· L^n-1/()∈. If is equivariant with associated family of filtrations (E,E^ρ(∙))_ρ∈Σ(1), from Klyachko's formula for the first Chern class (see e.g. <cit.>) we obtain μ_L()=-1/()∑_ρ∈Σ(1)ι_ρ() _L(D_ρ), where _L(D_ρ) is the degree with respect to L of the divisor D_ρ associated to the ray ρ∈Σ, and where ι_ρ():=∑_i∈ i ((E^ρ(i))- (E^ρ(i-1))). Moreover, in that equivariant case, from Kool's work <cit.> (see also <cit.>), to check stability for , it is enough to compare slopes with equivariant and saturated reflexive subsheaves. By <cit.>, any such subsheaf is associated to a family of filtrations of the form (F, F∩ E^ρ(i))_ρ∈Σ(1),i∈ for some vector subspace F⊊ E. To summarize, we have The equivariant reflexive sheaf associated to the family of filtrations (E,E^ρ(i))_ρ∈Σ(1), i∈ is slope stable if and only if for any vector subspace F⊊ E, we have 1/(F)∑_ρ∈Σ(1)ι_ρ(F) _L(D_ρ) > 1/(E)∑_ρ∈Σ(1)ι_ρ() _L(D_ρ), where ι_ρ(F):=∑_i∈ i ((F∩ E^ρ(i))- (F∩ E^ρ(i-1))). § THE EXAMPLES Once the above settled, the proof of Theorem <ref> is fairly simple and relies on elementary observations. To produce the examples, we will need the following lemma. Let (r,m)∈^2 with r≥ 2 and m≥ 1. There exists (v_i)_1≤ i≤ m∈(^r)^m such that for any d≤min{ r, m } and any { i_1,…, i_d}⊂{ 1,2,…,m}, the vectors (v_i_1, … , v_i_d) are linearly independent. If m≤ r the statement is obvious. For m≥ r, we use induction on m. 
Assuming that we have (v_i)_1≤ i≤ m∈(^r)^m satisfying the conclusion of the lemma, we can pick v_m+1∈^r in the complementary of the finite union of hyperplanes defined by the equations (v_i_1,…,v_i_r-1, x)=0 where the set of indices { i_1,… , i_r-1} runs through all subsets of { 1, …, m } with r-1 elements. Let r∈ with 1 < r < n+p, where we recall that p=((X)). From <cit.>, n+p=|Σ(1)| is the number of rays in Σ, so we have 2≤ r ≤|Σ(1)|-1. By Lemma <ref>, we can fix m=|Σ(1)| vectors (v_ρ)_ρ∈Σ(1) in ^r such that any d-dimensional subspace F⊂^r contains at most d elements from { v_ρ, ρ∈Σ(1) }. We also set for ρ∈Σ(1) : m_ρ = ρ'≠ρΠ_L(D_ρ')∈^*, so that there is a positive constant c∈^* such that for any ρ∈Σ(1), m_ρ _L(D_ρ) =c. Now, define _r to be the equivariant reflexive sheaf associated to the family of filtrations (^r,E^ρ(i))_ρ∈Σ(1),i∈ with E^ρ(i)={[ { 0 } if i < 0; · v_ρ if 0≤ i < m_ρ; ^r if m_ρ≤ i . ]. By construction, using formula (<ref>), we have [ - μ_L(_r) = 1/r∑_ρ∈Σ(1) (r-1) m_ρ _L(D_ρ); = c r-1/r|Σ(1)|. ] On the other hand, for a d-dimensional subspace F⊊^r, we compute ι_ρ(F) _L(D_ρ)={[ c (d-1) if v_ρ∈ F; c d if v_ρ∉ F ]. and then [ 1/(F)∑_ρ∈Σ(1)ι_ρ(F) _L(D_ρ) = c |Σ(1) | - ∑_v_ρ∈ Fc/d; ≥ c |Σ(1)| - c ] where the last inequality comes from the fact that F contains at most d elements amongst (v_ρ)_ρ∈Σ(1). As r < |Σ(1) |=n+p, we then conclude that - μ_L(_r) < 1/(F)∑_ρ∈Σ(1)ι_ρ(F) _L(D_ρ) and by Proposition <ref>, _r is slope stable. We now turn to the singular locus (_r)⊂ X. Assume first that r≤ n. Let σ∈Σ(r) a r-dimensional cone in Σ. As X is smooth, we can find an isomorphism N≃^n such that the elements (e_1,…, e_r) from the canonical basis of ^n span the rays (ρ_1,…, ρ_r) of σ(1). We can then identify M/(M∩σ^⊥)≃· e_1^*⊕…⊕· e_r^* for (e_i^*)_1≤ i≤ n the dual canonical basis. For j∈ [[ 1, r ]], define E^σ_j to be the vector space · v_ρ_j⊂^r together with the weight μ^j-action of T_N, where μ^j:=∑_1≤ i≤ r, i≠ j m_ρ_i e_i^*∈· e_1^*⊕…⊕· e_r^*. Then, by choice of the (v_ρ)_ρ∈Σ(1), we have ^r=⊕_j=1^r E^σ_j, and by choice of the weights (μ^j)_1≤ j≤ r, for any ρ_k∈σ(1), we infer that E^ρ_k(i)= ⊕_μ^j_k≤ i E^σ_j where μ^j_k stands for the k-th coordinate of μ^j in ⊕_i=1^n · e_i^*. Thus, Klyachko's criterion for locally freeness is satisfied by (_r)_| U_σ, the restriction of _r to U_σ. Hence, we have (_r)⊂ X ∖⋃_σ∈Σ(r) U_σ. By the orbit-cone correspondence (see <cit.>), we deduce that (_r)⊂⋃_(τ)> r(τ) where (τ) is the (n-(τ))-dimensional orbit associated to τ∈Σ. Hence, ((_r)) < n-r. The case for r > n can be dealt with a similar argument, and this concludes the proof. In view of Hartshorne's conjecture, it is natural to try to build rank 2 stable reflexive sheaves with a singular locus of the smallest possible dimension. Unfortunately, on ^n, the bound ((_2))≤ n-3 is actually optimal in the torus equivariant case. Let be a rank 2 stable equivariant reflexive sheaf on ^n, n≥ 3. Then (())=n-3. In the proof, we specify to X=^n, but keep all previous notations (e.g. Σ will denote the fan of ^n). Consider (E,E^ρ(∙))_ρ∈Σ(1) the family of filtrations associated to . Up to an isomorphism, we can assume E=^2. For each ρ∈Σ(1), there exist integers n_ρ≤ m_ρ and a vector v_ρ∈^2 such that E^ρ(i)={[ { 0 } if i< n_ρ; · v_ρ if n_ρ≤ i< m_ρ; ^2 if m_ρ≤ i . ]. Up to tensoring by (∑_ρn_ρ D_ρ), we can further assume that n_ρ=0 for all ρ∈Σ(1), see <cit.>. We then have E^ρ(i)={[ { 0 } if i< 0; · v_ρ if 0≤ i< m_ρ; ^2 if m_ρ≤ i . ]. If for all ρ∈Σ(1), m_ρ=0, then ≃_^n⊕_^n, which contradicts stability. 
Hence, there is at least one line · v_ρ appearing in the family of filtrations of . We claim now that there must be at least three different such lines. Indeed, if for all ρ∈Σ(1) with m_ρ >0 we have · v_ρ = F_1 for a given line F_1⊂^2, then we have on one hand (we assume _L(D_ρ)=1 for all ρ∈Σ(1)) : -μ_L()=1/2∑_m_ρ≠ 0 m_ρ >0 , while ∑_ρ∈Σ(1)ι_ρ(F_1) = 0, which contradicts stability by Proposition <ref>. If there are two different lines F_1, F_2⊂^2 such that for any ρ∈Σ(1) with m_ρ >0, · v_ρ = F_1 or · v_ρ = F_2, then -μ_L()=1/2(∑_· v_ρ= F_1 m_ρ+∑_· v_ρ= F_2 m_ρ), while ∑_ρ∈Σ(1)ι_ρ(F_1) = ∑_· v_ρ= F_2 m_ρ and ∑_ρ∈Σ(1)ι_ρ(F_2) = ∑_· v_ρ= F_1 m_ρ. Again, this contradicts stability. Hence, we can find at least three different lines F_1, F_2, F_3 ⊂^2 together with (ρ_1, ρ_2, ρ_3)∈(Σ(1))^3 such that for 1≤ i≤ 3, · v_ρ_i = F_i. As n≥ 3, the cone σ:=∑_i=1^3 ρ_i belongs to Σ. Klyachko's criterion for locally freeness cannot be satisfied on U_σ, as it would imply that F_1⊕ F_2⊕ F_3 ⊂^2, which is absurd. It follows that ()∩ U_σ≠∅. On the other hand, arguing as in the proof of Theorem <ref>, we have that for any face τ⊊σ, is locally free on U_τ⊂ U_σ. We conclude by invariance of the singular locus and the orbit-cone correspondence that (σ)⊂(), and as ((σ))=n-3, (())≥ n-3. The result then follows from general theory, the singular locus of a reflexive sheaf on a smooth complex manifold being always of codimension at least 3. amsplain
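As a concrete illustration of the construction used in the proof of Theorem <ref> (the explicit choices below are ours and serve only as an example), take X=ℙ^2 with L=𝒪(1), so that n=2, p=1 and Σ(1) consists of three rays ρ_0, ρ_1, ρ_2, each with deg_L(D_ρ_j)=1; hence m_ρ_j=1 and c=1. Choosing v_ρ_0=(1,0), v_ρ_1=(0,1), v_ρ_2=(1,1), any two of which are linearly independent, each filtration jumps from 0 to ℂ· v_ρ_j at i=0 and from ℂ· v_ρ_j to ℂ^2 at i=1, so that -μ_L = (1/2)(2-1)· 3 = 3/2 for the resulting rank 2 sheaf. A line F ⊂ ℂ^2 contains at most one of the v_ρ_j, so ∑_ρ ι_ρ(F) deg_L(D_ρ) equals 2 or 3 and in either case exceeds 3/2; by Proposition <ref> the sheaf is slope stable, and since r=2 ≥ n it is locally free, in agreement with the theorem.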
http://arxiv.org/abs/2307.00529v1
20230702094605
New intelligent defense systems to reduce the risks of Selfish Mining and Double-Spending attacks using Learning Automata
[ "Seyed Ardalan Ghoreishi", "Mohammad Reza Meybodi" ]
cs.CR
[ "cs.CR", "cs.LG" ]
New intelligent defense systems to reduce the risks of Selfish Mining and Double-Spending attacks using Learning Automata Seyed Ardalan Ghoreishi, and Mohammad Reza Meybodi August 1, 2023 ========================================================================================================================= In this paper, we address the critical challenges of double-spending and selfish mining attacks in blockchain-based digital currencies. Double-spending is a problem in which the same funds are spent multiple times in digital currency transactions, while selfish mining is an intentional manipulation of a blockchain to increase the rewards of one miner or a group of miners. We introduce a new attack that combines both of these attacks and propose a machine learning-based solution to mitigate the risks associated with them. Specifically, we use the learning automaton, a powerful online learning method, to develop two models, namely SDTLA and WVBM, which can effectively defend against selfish mining attacks. Our experimental results show that the SDTLA method increases the profitability threshold of selfish mining up to 47%, while the WVBM method performs even better and is very close to the ideal situation in which each miner's revenue is proportional to their share of hash processing power. Additionally, we demonstrate that both methods can effectively reduce the risks of double-spending by tuning the Z parameter. Our findings highlight the potential of SDTLA and WVBM as promising solutions for enhancing the security and efficiency of blockchain networks. Bitcoin, Blockchain, Double-spending attack, Learning automata, Selfish mining, Reinforcement Learning § INTRODUCTION Blockchain can be considered a distributed ledger where all approved transactions are stored in its blocks <cit.>. This chain grows constantly with the addition of new blocks. For example, Bitcoin, the best-known blockchain, relies on an incentive mechanism called mining. Every node that participates in the mining process is called a miner. Miners try to produce blocks and broadcast them, and a miner who mines a block receives a reward. Miners often pool their resources (computing and processing power) to form a mining pool. By doing this, they can mine more blocks and thus share the mining reward. When more than one block extends the previous block, the main chain is determined by a fork-resolving policy: the miner selects the longest chain or, in the case of multiple chains with the same length, the chain that receives the next block is considered the main chain. This forked situation is called a block race, and an equal-length block race is called a tie. As long as more than half of the mining power follows the protocol, the probability that a miner will gain the next block reward is equal to the miner's computing power <cit.>. These rules provide the opportunity to launch a selfish mining attack, as first described by Eyal et al. <cit.>. Selfish mining refers to the efforts of a malicious miner to increase their share of the mining reward. The attacker hides mined blocks for a period of time and then releases several blocks at once, causing the other miners to lose their blocks <cit.>. Double-spending is one of the most common attacks and takes place by abusing the transaction confirmation mechanism.
All transactions on the platform of a blockchain must be approved by other users to be recognized as valid and approved transactions, and of course, this process takes some time; attackers can use this time to take advantage and trick the system into using the same coins in other transactions as well. In general, the chances of double-spending attacks succeeding are low. However, if the attacker continues to try these attacks alternately with the α computing power, he will eventually succeed <cit.>. Instead of focusing on the likelihood of success, we should focus on the cost of these attacks. Any failed attempt to double-spending attack would cost the attacker the equivalent of losing the reward he would receive if he honestly released his blocks instead of hiding them. This is where selfish mining can help the double-spending attacker. An intelligent strategy for an attacker is to launch a series of selfish mining attacks and, once successful, combine them with a double-spending attack. This can be done by conducting regular public transactions while always having conflicting items hidden in the attacker's secret blocks. It is always possible that the recipient will accept the payment until a successful selfish mining attack is completed. In this case, in addition to the selfish mining, the double spending attack will also be successful. Thus, having a miner that for him a selfish mining strategy is at least as profitable as honest mining generally undermines the security of Bitcoin payments <cit.>, as the attacker pays no cost to attempt double-spending attacks and will eventually succeed in shaping the attack. On the other hand, an attacker who cannot benefit from selfish mining alone may find it profitable to combine this strategy with double-spending attacks, which has potentially serious consequences for the selfish mining profitability threshold. This paper proposes a combined attack model and two models to defend against this attack. Our methods are based on a new fork-resolving policy that uses a new weighting algorithm to choose the winning chain in forks. Our defense methods replace the original Bitcoin fork-resolving policy, represented by length FRP, with smart FRP. At any moment, according to the conditions of the network, miners choose one of two weight or length criteria to determine the winning chain in a fork. In weighted FRP, the miner assigns a weight to the blocks in each chain according to the time stamps, and the chain with the most weight is selected as the main chain. On the other hand, we will see that the weight criterion will only sometimes be the best choice, and conditions must be created to use this criterion only in exceptional cases. Furthermore, the algorithms presented in this paper intelligently set a parameter known as the number of confirmations required by the service provider to send the goods in order to reduce the risk of double-spending attacks. Compared with existing defenses, our defense methods have many advantages. Our methods are backward-compatible, decentralized, effective, and can reduce the probability of choosing a winning chain under the influence of eclipse attacks. For evaluation, we extend the SM1 developed by Eyal et al. <cit.> Results show that our methods successfully reduce the risk of SM1 and double-spending attacks. § PRELIMINARIES In this section, we first describe some definitions, then mention the defenses proposed before. 
§.§ Bitcoin Blockchain and Mining To better understand the presented algorithms, in this section we summarize the basic features of Bitcoin by reviewing the original paper by Nakamoto <cit.> and the book by Narayanan et al. <cit.> to have a complete view of the system. All nodes follow the same block and transaction validation rules to ensure participants' consensus on valid transactions. A typical Bitcoin transaction consists of at least one input and one output. The transaction fee, which is the difference between the total amount of inputs and outputs in a transaction, is paid to the miner, who records the transaction in the blockchain. §.§ Selfish Mining In this attack, a set of miners use dishonesty in the consensus process and conspire with each other to inject a set of decisions and fake blocks into the network. The scenario of this attack is that when honest miners discover a new block, (1) if the size of the public chain (the honest branch) is longer than the selfish branch (the private chain created by the attacker), then selfish miners try to set their private branch to the public branch. (2) If the selfish branch is one block longer than the public branch, then selfish miners will fully publish their private chain. (3) If the selfish branch is more than one block longer than the public branch, then selfish miners only publish the first block of their private branch. When selfish miners discover a new block, they keep it private, and when competing with honest miners, they publish their private branch to win the race <cit.>. §.§ Double-Spending attack Bigam et al. <cit.> have described a double-spending attack in five steps. A double-spending attack refers to the fact that a different number of transactions have occurred where the cryptocurrencies are the same. It means that one unit of digital currency has been spent twice in two different transactions. The five steps for a double-spending attack that Begam et al. described are as follows. * The process of adding blocks. First, the user requests a transaction through his wallet. This unconfirmed transaction is placed in a pool of unconfirmed transactions. We know that miners select transactions and add blocks to the blockchain by solving complex mathematical proof-of-work problems. So this block will be added only if other miners confirm the obtained hashes. * Once the honest miners have verified the block and the block has been added to the main blockchain, the attacker's miner starts his chain with the verified block. This miner spends all his currency and sends this information to the main blockchain but does not put it on his private chain. * In this step, the attacker selects transactions and adds the block to his private chain by verifying them with his powerful computing power faster than honest miners can add the block to the real blockchain. * When the attacker broadcasts his private chain transactions in the main blockchain, if the private chain is larger than the real chain, honest miners on the real chain will try to add their block to the newly discovered chain as well. * The rule governing the blockchain states that blocks are added to the head block of the larger chain by removing previous records. Since the head block of the real blockchain has information about the transaction in which the corrupt miner spent his currency, the private chain does not know about the first transaction. Therefore, the previous transaction information is deleted when the private chain wins. 
Thus, in the new private chain, the attacker can re-spend all the coins he spent once in the real blockchain. According to the proposed solutions to deal with double-spending attacks, today, before sending the service, the service providers wait until the confirmation of a certain number of blocks to reduce the possibility of invalidating the confirmed blocks, and then proceed to send the service. This threshold limit is considered equal to 6 in Bitcoin, so to create such an attack, the attacker's private chain needs to be larger than the actual chain, in addition to being larger than this threshold value, so that the multi-confirmation double-spending attack occurs. §.§ Combined attack As it is known, the double-spending attack alone requires very high computing power and cost, but by combining it with selfish mining, this attack can be carried out with far less computing power <cit.>. In order to check our proposed solution to deal with these attacks, we first need to have a proper simulation of these attacks. In this paper, it is suggested that in selfish mining, if the attacker's chain surpasses the chain of honest miners and the length of the public chain is more than the number of confirmations needed for the merchant to send goods, it is a strategic moment for executing a double-spending attack. This is due to the possibility that the merchant has already sent the goods to the attacker after receiving the necessary confirmation. In our defensive strategies, we try to avoid such situations as much as possible. Algorithm <ref> is the pseudocode of this section. <ref> shows the notations used in all of the pseudocodes used in this paper. §.§ Learning Automata A learning automaton(LA)<cit.> is a machine that can perform a finite number of operations. A possible environment evaluates every action chosen, and the result of this evaluation is given to the learning automaton in the form of a positive or negative signal, and the automaton is affected by this response to choose the next action. The ultimate aim of the process is for the automaton to learn how to select the optimal action from its available options, by maximizing the probability of obtaining rewards from the environment. In this paper, we use two types of Variable Depth Hybrid Learning Automaton (VDHLA) proposed by Nikhalat-Jahromi et al <cit.>. <ref> illustrates the interaction between the learning automaton and the environment. §.§ Properties of an Ideal Defense We can enumerate desirable properties of an ideal defense by explaining the problems and weaknesses of existing defenses <cit.>. * Decentralization: The introduction of a trusted server would lead to a new single point of failure, which would be contrary to Bitcoin’s fundamental principles. * Incentive Compatibility: The expected relative revenue of a miner should be proportional to his mining power. * Backward compatibility: Individuals who do not engage in mining activities and are unable to upgrade their clients are still able to take part in the network, which is particularly vital for hardware products like Bitcoin ATMs. It is crucial that the following regulations remain unchanged. * Block validity rules: A block that meets the requirements of the current Bitcoin protocol must also meet the requirement of the new method. * Reward distribution policy: All blocks in the main chain and no other block receive block rewards. * Eventual consensus: Despite potential attacks, both new and old clients are ultimately expected to reach a consensus on the main chain. 
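To make the automaton-environment interaction from the learning automata subsection above concrete, the following Python sketch implements a generic finite-action learning automaton with a linear reward-inaction update; it is only an illustration under our own simplifying choices (three placeholder actions, a fixed learning rate, a toy environment) and is not the VDHLA of Nikhalat-Jahromi et al., which additionally adapts its internal depth.

import random

class LearningAutomaton:
    """Finite-action learning automaton with a linear reward-inaction update.

    The automaton keeps a probability vector over its actions, samples an
    action, and updates the vector from the binary reinforcement signal
    (beta = 1 reward, beta = 0 penalty) returned by the environment.
    """

    def __init__(self, actions, learning_rate=0.1):
        self.actions = list(actions)
        self.lr = learning_rate
        n = len(self.actions)
        self.probs = [1.0 / n] * n          # start from a uniform vector
        self.last = None                     # index of the last chosen action

    def choose_action(self):
        r, acc = random.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r <= acc:
                self.last = i
                return self.actions[i]
        self.last = len(self.probs) - 1
        return self.actions[-1]

    def update(self, beta):
        """Linear reward-inaction: move probability toward the rewarded action."""
        if beta != 1 or self.last is None:
            return                           # inaction on penalty
        i = self.last
        for j in range(len(self.probs)):
            if j == i:
                self.probs[j] += self.lr * (1.0 - self.probs[j])
            else:
                self.probs[j] *= (1.0 - self.lr)

# Toy environment: action "Grow" is rewarded 70% of the time, the others 30%.
if __name__ == "__main__":
    la = LearningAutomaton(["Grow", "Stop", "Shrink"])
    reward_prob = {"Grow": 0.7, "Stop": 0.3, "Shrink": 0.3}
    for _ in range(2000):
        a = la.choose_action()
        beta = 1 if random.random() < reward_prob[a] else 0
        la.update(beta)
    print(dict(zip(la.actions, (round(p, 3) for p in la.probs))))

In the defenses proposed below, the environment is the blockchain network itself, and the reinforcement signal β is computed from the observed stale-block statistics, as defined in the corresponding subsections.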
§.§ Existing Defenses To prevent a combined attack, selfish mining must be prevented. To do this, we will first talk about the ways these attacks can be prevented. §.§.§ Uniform tie-breaking Eyal and Sirer proposed a defense mechanism for resolving ties in the mining process. This involves miners randomly selecting which chain to mine on in the event of a tie. The authors demonstrated in their research that this approach effectively increases the profit threshold for earning unfair block rewards within the selfish mining strategy to 25% <cit.>. The defense is also referred to as uniform tie-breaking <cit.>. There are two notable drawbacks to this particular defense mechanism. Firstly, it can aid attackers with an advertisement factor of less than 0.5, which is the fraction of computing power from honest nodes that would accept a block from a selfish miner over one from an honest miner (represented by parameter γ). Secondly, even with a profitable threshold of 25%, this approach remains risky in the context of small blockchains. §.§.§ Ethan Heilman’s proposed method Heilman proposed a method based on unforgeable timestamps against selfish mining called "Preferred Freshness." This method requires miners to add unforgeable timestamps to blocks, encourages honest miners to choose blocks with the latest timestamp, and invalidates blocks hidden by attackers. However, the disadvantage of this method is that it requires a valid timestamp agent to generate unforgeable timestamps, which requires honest miners to log all last timestamp release reports <cit.>. §.§.§ Publish or Perish Zhang and Preneel proposed a weighted fork-resolving policy. When a fork occurs, a weight is calculated for each chain, and it is recommended that honest miners not rely on chain length when choosing the main chain but choose the chain with the highest weight <cit.>. Below are listed the disadvantages of this method. * The excessive emphasis is placed on the weight criterion for selecting the winning chain in a fork. * This method relies on a fixed value for parameter K, which plays a crucial role. * The potential for developing novel forms of attacks. * Despite the paper's assertion that there has been no change in the structure, it is imperative to make significant alterations to both the blocks' structure and the consensus protocol. §.§.§ Saad’s proposed method By measuring transaction size, transaction fees, and other factors, Saad et al. assigned an "expected confirmation height" (i.e., the expected height of the block containing a given transaction) to each transaction. The smaller the gap between the actual confirmation and expected height, the less likely the selfish mining behavior is <cit.>. Some modifications to the Bitcoin block structure and transactions are required for this method, which may become its primary drawback. §.§.§ Lee’s proposed method By adding transaction creation time to the transaction data structure, Lee and Kim increased the total hash required for the selfish mining profit threshold from 25% to 33% <cit.>. This approach may have some drawbacks, such as requiring modifications to the transaction data structure, which could be a significant disadvantage. Additionally, the profit threshold equal to 33% may still pose a risk for smaller blockchains. §.§.§ Nik Defense Nikhalat-Jahromi et al. proposed a time stamp-based weighted fork-resolving policy. 
When a fork occurs, a weight is calculated for each chain and based on a safe parameter calculated by learning automata; honest miners choose the main chain based on the chain’s length or weight <cit.>. Below are listed the disadvantages of this method. * The excessive emphasis is placed on the weight criterion for selecting the winning chain in a fork. * The inadequate definition of the reinforcement signal, coupled with the failure to account for sudden changes in the attacker's hash rate, can lead to an inappropriate setting of the K parameter. * The potential for developing novel forms of attacks. * The weight criterion contains calculations that are not necessary. There are also some studies that try to identify factors in order to detect selfish mining attacks <cit.>. These papers use existing data of selfish mining attacks to create training and test data. But because of the stochastic nature of blockchain, it seems that we cannot be sure about the validity of these kinds of datasets. There are several studies that can be used to prevent double-spending attacks on the blockchain. For example, we can refer to the papers by Ghassan et al. <cit.> and the paper by Podolanko et al. <cit.> to prevent double-spending attacks in fast payments. In these papers, solutions such as using enhanced observers and nearby peers to detect and warn about double-spending attacks are proposed. Bamert et al. <cit.> have also proposed two countermeasures against double-spending attacks. For example, the merchant must connect to a sufficient number of random nodes in the Bitcoin network. This makes it difficult for an attacker to inject false transaction information or a transaction containing a double-spending attack because the attacker does not know which nodes the merchant is connected to. Additionally, the merchant should not accept direct input connections. Therefore, the attacker cannot directly send a transaction to the merchant and is forced to broadcast it over the network. Nodes transmitting that transaction will check and detect if there is a double-spending attempt. Subsequent transactions from the attacker using the same input (address) are ignored by those nodes, making it difficult for the attacker to perform a double-spending attack. The main solution to prevent double-spending attacks is waiting for multiple confirmations. In this method, the vendor should wait for multiple confirmations before releasing the product or providing a service to the client <cit.>. The idea of waiting for more confirmations to prevent double-spending attacks has been discussed in many papers; this value is set to 6 for Bitcoin traders by contract. Arthur Gervais et al. <cit.> have investigated the changes of v_d, which is the lowest value of a double-spending transaction for the profitability of a combined attack, with different values of this parameter. It is also mentioned in another paper <cit.> that it is possible to change the mentioned parameter in order to increase security and reduce the risk of double spending in the blockchain. The paper states that one way to counter these attacks is to allow servers to require more confirmations for larger transactions and make the attack more difficult. § PROPOSED ALGORITHMS The proposed algorithms are presented in this section. We begin by describing how combined attacks and the proposed defense work. The proposed algorithms are then explained with the required definitions. Lastly, we demonstrate the proposed algorithms. 
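Before describing the proposed algorithms, the effect of the confirmation count discussed above can be quantified. The sketch below evaluates the attacker-success probability from Nakamoto's original analysis for an attacker with hash share q after the merchant waits for z confirmations; the sample values of q and z are arbitrary, and the function is provided only as background, not as part of the proposed defenses.

import math

def double_spend_success(q, z):
    """Probability that an attacker with hash share q eventually overtakes the
    honest chain after the merchant has waited for z confirmations (a Poisson
    race combined with a gambler's-ruin catch-up probability)."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# Example: residual risk for a 10% and a 30% attacker as the confirmation count grows.
for q in (0.10, 0.30):
    print(q, [round(double_spend_success(q, z), 6) for z in (1, 2, 6, 12)])

Waiting for more confirmations therefore trades confirmation latency for security, which is exactly the trade-off that the Z parameter of the proposed methods is meant to automate.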
§.§ Main Idea As previously discussed, the combination of double-spending and selfish mining attacks poses a significant threat. Therefore, an effective defense system must be capable of detecting such attacks. Various studies, including Gervis et al.'s experiments <cit.>, have established a direct correlation between selfish mining and the rate of stale blocks. When an attacker is present in the network, the rate of stale blocks increases as a result of forks being won by one chain over another. This heightened rate of stale blocks increases the risk of double-spending attacks, as confirmed by <cit.>. By observing and analyzing the rate of stale blocks generated by forks, we can gauge the vulnerability of the network and use this data to train our model. It is important to focus on stale blocks generated by forks for two reasons: * We cannot know about stale blocks that the attacker chooses not to publish. * Stale blocks resulting from an attacker's surrender without competition do not necessarily indicate danger to the network. Such attacks only pose a threat if the attacker has infiltrated an honest pool. In this paper, we propose the utilization of the stale block rate to train our intelligent model as our primary idea. §.§ System Model We propose a model that prioritizes miner nodes as the key players in the blockchain network while disregarding other nodes such as super nodes, light nodes, and others. In this model, a selfish mining attack occurs when mining nodes unite to form a mining pool with the goal of obtaining more revenue than they are entitled to. To create the optimal and most effective model, we consider two groups of miners: those who follow a selfish mining strategy, possessing less than 50% of the total computing power, and those who adhere strictly to the Bitcoin mining protocol. We explain our model by expanding the one described in Nikhalat-Jahromi et al. <cit.>. To describe how computing power is distributed in the proposed model, we assume that the selfish mining pool holds a fraction α of the total computing power, while the remaining honest miners hold a fraction of 1 - α. Consequently, the probability that a newly discovered block belongs to the selfish pool is α, and the probability that it belongs to other miners is 1 - α. In our proposed model, we have made the assumption of disregarding the block propagation delay in order to improve the clarity of network connectivity. This assumption is justifiable as Bitcoin miners strive to transmit and receive blocks promptly, as any delay in this process can interrupt their mining schedule and affect their efficiency in finding new blocks. Moreover, there are ongoing efforts by researchers and network developers to minimize the propagation delay in the Bitcoin network, as evidenced by various published articles and improvement proposals <cit.>. In the subsequent paragraph, we will delve into the topic of block order in the network. The blocks in a Bitcoin node are organized in a tree structure, with each block containing a reference to the previous one. To simplify our analysis, we will focus on just two branches of the block tree: the main chain, which is the longest chain agreed upon by consensus among nodes, and a private branch created by selfish miners. It is impossible for an honest node to differentiate between these two branches. The following paragraphs will detail the miner behavior within this network model. 
In decentralized networks such as Bitcoin that operate on a proof-of-work consensus mechanism, the creation of a new block can be regarded as an event that is not influenced by the passage of time. As a result, the mining process is considered both discrete and memoryless. This implies that at the moment of discovering a new block, every miner, regardless of their honesty or selfishness, makes a decision that persists until the next block is found. Our proposed model involves the selfish miner utilizing their computing power to create a private chain. At a given time t, the selfish miner must decide which block from the main chain to extend their private chain with and which block to release in order to increase the selfish pool's revenue. If the selfish miner becomes aware that an honest miner has discovered a new block, they may attempt to substitute their private block. The advertisement factor, represented by parameter γ, is a crucial element of this model. γ is the fraction of computing power belonging to honest nodes that would accept a block from a selfish miner instead of an honest miner. When considering the block's height within Bitcoin's network (h), the probability of the selfish miner's block being accepted by honest nodes at that height is γ(1-α). Each miner node within the network utilizes one or two learning automata. The following sections will detail the proposed algorithm's definitions based on the i^th miner in the network. §.§ Required Definitions In this section, the required definitions of the algorithm are defined in order to explain the proposed algorithm. * Decision-making time (τ): This definition outlines the concept of fork decision-making time from the perspective of the i^th miner. This involves a designated period for the miner to assess whether any forks exist and, if so, to choose between them. To specify this time parameter in the proposed method, it is referred to as τ. * Time Window: The interval during which the miner chooses to adjust the safe parameters K and Z is referred to as the time window. This window is determined by an integer factor of the time parameter τ. * SM safe parameter (K): If the difference in length between the two longest chains in the fork is less than K, the chain with the higher weight wins in the fork. * DS safe parameter (Z): In the proposed methods, the parameter Z denotes the value that merchants are recommended to obtain confirmation for prior to dispatching goods, to mitigate the risk of double spending. * Timestamp: Current time as seconds in the universal time since January 1, 1970. Each block contains a timestamp whose main function is to determine the exact moment in which the block has been mined and validated by the blockchain network. In the subsequent sections, we will present the proposed methods based on the system model and the definitions outlined. §.§ Smart defense system with two learning automata (SDTLA) This section will provide a detailed overview of the first proposed algorithm. The algorithm will be initially explained through a sequence of events that occurred during the defense process, followed by a discussion of sub-algorithms. The proposed algorithm is capable of responding to the following events: * One Block Receive Event * If a fork exists, a new block's relation to existing forks will be checked by the previous hash parameter. Forks will be created if necessary by the miner. 
* Decision-Making Time (τ) Event * The existence of a fork will be checked, and if present, it is essential to determine the selection criteria, either by length or weight. * Time Window Event * The existence of a fork will be checked, and if present, it is essential to determine the selection criteria, either by length or weight. * The reinforcement signals will be used to update the learning automatons. * The next actions will be determined by the learning automatons, and the safe parameters will be updated accordingly. The proposed algorithm consists of five sub-algorithms. These five sub-algorithms will describe in the following. Algorithm <ref> is the pseudocode of this section. §.§.§ Length Calculation Selfish mining creates a fork, as we all know. The length of every chain created by the fork condition is one characteristic that can be used to defend against it. For calculating the length of each chain created by the fork, first get the height of the last block before the fork was created, then calculate the difference between that height and the height of the last block before the fork <cit.>. §.§.§ Weight Calculation To defend against selfish mining, the weight of every chain created by the fork condition can also be used. In this paper, we propose calculating the weights by considering the first ten blocks of each chain in the fork and assigning greater weight to older blocks. The rationale behind this approach is that the attacker's selfish behavior becomes evident in the initial blocks, which were previously concealed. For each chain created by forking, the following steps will be taken: * The first ten blocks (the oldest) are selected from each chain. * Based on the maximum length, evaluate blocks of different chains but of the same height. The chain with the most recent timestamp will win the race; so, among the blocks present at the same height, the weight of the chain that has the highest time stamp, according to the height at which it is located, is added to the value that is the result of multiplying one unit by the corresponding coefficient of that height (because the lower timestamp means that the block is older and more likely to be hidden by the attacker, and also the heights associated with older blocks should have a higher coefficient because they have a higher value in weighting). * The calculation in part 2 will continue until the maximum height or the tenth block has been reached; we consider ten blocks for weighting. A shorter chain will not conclude in comparisons of blocks with higher heights if the others are longer. In the end, if the length of the chains is more than ten, taking into account that, in this case, the superiority of the length of the chain is not considered in the weight, it is necessary to take action to solve this problem. Therefore, the amount of difference is calculated from ten and is added to the weight of the chain by multiplying by 0.5 (a value less than the lowest weight of the first ten blocks). Algorithm <ref> is the pseudocode of this section. §.§.§ Chain Selection The miner must make a decision among the chains created by the fork condition. Our first method employs the chain selection approach proposed by Nikhalat-Jahromi et al. <cit.>. The chain selection algorithm of the proposed defense can be described as follows: * Calculating the chain length. * Sorting chains based on length in descending order. * If one chain is longer than the others by K; it will select it for the next mining event. 
* If no chain is longer than the others by K, the weight of all chains will be calculated by the algorithm described before. * Sorting chain based on the weight in descending order * Heaviest chain will be selected. Algorithm <ref> is the pseudocode of this section. §.§.§ Action selection by LA With the end of the time window, considering that the time window is an integer coefficient of the time interval τ and in each time interval τ, unique events have happened, it is necessary to make a decision about the outcome of these events. Decision-making, in this case, is the responsibility of learning automata. The learning automatons at the end of the time window, according to the selected action, adjust the value of safe parameters according to the determined interval. In this method, we use two learning automata. * Action selection by SM-LA: In order to update the SM safe parameter, we use a learning automaton. This learning automaton is responsible for setting the K parameter. The action that the learning automaton chooses is one of the following three actions: * Grow: This action happens if the network is considered to be under a selfish mining attack, which means that the rate of change of stale blocks has increased compared to K and should be fixed. In this case, the K parameter increases by one unit. * Stop: This action happens when the learning automaton concludes, according to the received signal, that the value of the safe parameter related to selfish mining was appropriate in the previous time period. So there will be no need to update K. * Shrink: This action happens if the network is less exposed to selfish mining attacks, which means that the rate of change of stale blocks decreases compared to K. In this case, the K parameter is reduced by one unit. Algorithm <ref> is the pseudocode of this section. * Action selection by DS-LA In order to update the DS safe parameter, we use a learning automaton. This learning automaton is responsible for setting the Z parameter. The action that the learning automaton chooses is one of the following three actions: * Increase: This action happens if the network is under attack by selfish mining and double-spending attackers, which means that the rate of changes of stale blocks has increased compared to Z and should be fixed. In this case, determining the Z parameter has two modes. * If the Stale Blocks Change Rate is greater than 0.75, it indicates that the network is heavily exposed to attack and the need to quickly reduce the rate of change is felt, so we multiply the value of Z by 2 to have a sudden jump. * Otherwise, even though the rate of change is increasing and the need to reduce it is felt, there is no need for drastic changes, and it is enough to increase the Z parameter by 2 units. * No Change: This action happens when the learning automaton concludes, according to the received signal, that the value of the safe parameter related to the double-spending attack in the previous time period was appropriate. So there will be no need to update Z. * Decrease: This action happens if the network is less under the attack of selfish mining and double spending, which means that the rate of changes of stale blocks has a downward trend compared to Z and the speed of transaction confirmation can be increased by reducing Z. In this case, determining the Z parameter has two modes. * If Z is greater than 6 and divisible by 2, we divide this parameter by 2. * Otherwise, we subtract 2 units from it. 
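The update rules listed above can be summarized in a few lines; the sketch below (variable names and the small guards against non-positive values are ours) assumes that the chosen action is supplied by the corresponding learning automaton.

def apply_sm_action(k, action):
    """Update the selfish-mining safe parameter K according to the SM-LA action."""
    if action == "Grow":      # stale-block rate rising relative to K: raise K
        return k + 1
    if action == "Shrink":    # attack pressure receding: lower K
        return max(k - 1, 0)  # (non-negative floor added as our guard)
    return k                  # "Stop": K was adequate, keep it

def apply_ds_action(z, action, stale_blocks_change_rate):
    """Update the double-spending safe parameter Z according to the DS-LA action."""
    if action == "Increase":
        # Heavy exposure (SBCR > 0.75) triggers a jump; otherwise a small step.
        return z * 2 if stale_blocks_change_rate > 0.75 else z + 2
    if action == "Decrease":
        # Undo an earlier jump when possible; otherwise step down gently.
        return z // 2 if z > 6 and z % 2 == 0 else max(z - 2, 1)  # (floor of 1 is our guard)
    return z                  # "No Change": Z was adequate, keep it

# Example trace: a heavily attacked window followed by two calmer ones.
z = 6
z = apply_ds_action(z, "Increase", 0.8)   # Z jumps to 12
z = apply_ds_action(z, "Decrease", 0.4)   # 12 is even and greater than 6, so Z halves to 6
z = apply_ds_action(z, "Decrease", 0.3)   # Z steps down to 4
print(z)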
The reason for handling the Z parameter in this way is that divisibility by 2 can be a sign that Z previously underwent an instantaneous jump (it was multiplied by 2 in the Increase action) and has not yet experienced a sharp decrease through a division by 2. In this section, we use the term "Stale Blocks Change Rate" (SBCR), which is defined based on the current Z in the given time window. <ref> shows the calculation of this term. StaleBlockRatePerZ = (#Fork Stale Blocks in Window) / (Current Z) SBCR = StaleBlockRatePerZ_New / (StaleBlockRatePerZ_New + StaleBlockRatePerZ_Old) <ref>, defined here, is also utilized for computing the reinforcement signal of the double-spending-related learning automaton, as detailed in the subsequent sections. Algorithm <ref> is the pseudocode of this section. §.§.§ Calculate Update Signals As mentioned before, we have to compute the update signals: one for the SM learning automaton and another for the DS learning automaton. This section defines the calculation of these signals from the i^th miner's point of view. The learning automata must calculate the reinforcement signal. In this paper, the signal is denoted by β. The reinforcement signal is feedback from the environment that the learning automaton uses to update its probability vector. In the proposed method, this signal is calculated from the analysis of each decision interval τ within a time window. * Calculate Update Signals for the SM learning automaton: At the end of the time window, since the learning automaton chose its action according to the previous window, before selecting the next action it needs to receive a reward if it chose correctly and a penalty if it chose incorrectly. For this purpose, the reinforcement signal is used to update the learning automaton. In this section, the reinforcement signal of the learning automaton related to selfish mining is defined for the first algorithm. StaleBlockRatePerK = (#Fork Stale Blocks in Window) / (Current K) β_1 = (Number of Weight Decisions) / (Number of Height Decisions + Number of Weight Decisions) β_2 = StaleBlockRatePerK_Old / (StaleBlockRatePerK_New + StaleBlockRatePerK_Old) If β_1 (respectively β_2) is greater than 0.5, it is set to one; otherwise, it is set to zero. Now we can calculate β for the SM learning automaton: β = β_1 AND β_2. Finally, a value of one for β means that the automaton's selection was correct and it should be rewarded; otherwise, the learning automaton is penalized. As a result, the automaton is rewarded only when the change rate of stale blocks has a downward trend relative to the K parameter. In this way, changes in the attacker's mining power affect the model's training. * Calculate Update Signals for the DS learning automaton: In this section, the reinforcement signal of the learning automaton related to double-spending is defined for the first algorithm. β = StaleBlockRatePerZ_Old / (StaleBlockRatePerZ_New + StaleBlockRatePerZ_Old) If β is greater than 0.5, the automaton's selection is correct and it is rewarded; otherwise, the learning automaton is penalized. As before, the automaton is rewarded only when the rate of change of stale blocks has a downward trend relative to the Z parameter. In this way, changes in the attacker's mining power affect the model's training. In the experiment section, we will see that this algorithm has several problems.
To solve these problems, we introduce another model in the following. §.§ Weight validating based method (WVBM) In this section, we present a completely different solution that uses a new concept called weight threshold to deal with selfish mining attacks and somehow validates the chains. Unlike the previous system, this part of the system does not use learning automata and only uses learning automata to deal with double-spending attacks. §.§.§ Motivation The defense system presented in this section is designed to reduce calculations dependent on learning automata and get the results as close as possible to the ideal scenario, no individual would be able to earn revenue greater than their proportionate processing power. This would remove any incentive for rational miners to act selfishly. The two main features of this system are as follows: * Using the weight threshold concept to validate the largest chain in a fork. * Using just one learning automaton to calculate the Z parameter and thus reduce double-spending attacks. The proposed weight threshold concept is independent of the weight calculated to select the winning chain, and the same innovative weighting proposed in the defense system of the previous model will be used for the weight. The defense system presented in this section must be able to provide all of the following: * Preserve the properties of the blockchain. * Do not impose too much additional load on the nodes. * Do not reduce the speed of transaction approval too much. * The more nodes follow it, the more successful it will be. * React correctly against the dynamic changes in the attacker's mining power. * Train the Z parameter correctly and reduce the risk of double-spending attacks. * Reduce the success rate of selfish mining attacks as close as possible to the ideal scenario in which no individual would be able to earn revenue greater than their proportionate processing power by using the weight threshold concept. * As much as possible, do not act randomly in selecting the winning chain in a fork The most important thing about this system is that larger chains will win the fork if they have newer equivalent blocks than other chains in at least a certain percentage of their initial ten blocks. This prevents selfish miners who tried to hide their old blocks from winning the fork just because their private chain is bigger. If the largest chain does not have enough weight, the winner is selected based on the innovative weight introduced in the previous model. §.§.§ The validating weight calculation To show the differences between this model and the former, we first present the validating weight. The validating weight calculation is given below. * The first ten blocks (the oldest) are selected from each chain. * Based on the maximum length, evaluate blocks of different chains but of the same height. The chain with the most recent timestamp will win the race. So, among the blocks present at the same height, the weight of the chain that has the highest time stamp will be added by one unit. * Step 2 calculations continue for all ten heights considered in step one. Algorithm <ref> is the pseudocode of this section. §.§.§ Chain Selection This part is entirely different from the previous system. Chain selection in this algorithm includes the following steps: * Calculate the length of the chains. * Calculate the weight of the chains. * The validation weight calculation. * Sort chains by decreasing the length. 
* If the validation weight calculated for the longest chain exceeds the threshold, that chain is selected. Otherwise, the chain with the highest weight is selected. * If the lengths of the chains are equal, the weight is the criterion for choosing the winning chain. Algorithm <ref> is the pseudocode of this section. In this system, we set the weight threshold equal to a quarter of the length of the chain. In general, this threshold should be a function of the chain's length, so that it can be chosen based on the length of the chain and the level of risk posed to the network if the chain belongs to the attacker. Since the validation weight is an integer, a quarter of the length is rounded up; for example, a chain of length 10 must have a validation weight of at least 3. These two parts are the main differences between our new model and the former one. Algorithm <ref> is the pseudocode of this section. § EVALUATION In order to evaluate the efficiency of the presented algorithms, it is first necessary to introduce evaluation metrics. In the next section, experiments are designed, and by analyzing the results of these experiments according to the mentioned metrics, the quality of the proposed solution can be measured. Attackers cannot be identified and, like other nodes, they have access to information about the safe parameters; to be fair, we therefore consider the possibility that attackers release their private branches when the difference exceeds K. This kind of attack was proposed in <cit.> and is called Modified SM1. Taking inspiration from previous defense mechanisms and their simulators <cit.>, we simulated the proposed algorithm by converting the mining model into a Monte Carlo simulation process. This conversion enables the distribution of newly discovered blocks among selfish and honest miners without solving a cryptographic puzzle. The simulator generated 10,000 blocks for each experiment, testing varying selfish pool sizes. All the tests in this paper were performed on a computer equipped with an Intel Core i7 9750H processor with a working frequency of 2.6 GHz, using the Python programming language. §.§ Evaluation metrics In this section, several evaluation metrics are introduced to evaluate the intelligent defense model. These metrics will be used in the next section for experiments designed to validate the presented algorithms. §.§.§ Relative revenue of selfish miners Considering the attack by selfish miners, it is necessary to obtain the relative revenue of the selfish miners. <ref> can be used to calculate the relative revenue of selfish miners. Relative Revenue = #SelfishMinerWinBlocks / (#HonestMinerWinBlocks + #SelfishMinerWinBlocks) × 100 §.§.§ The number of times a double-spending attack can occur A double-spending attack occurs when the length of the attacker's chain exceeds the length of the honest chain and the number of confirmations required before the merchant sends the goods. To calculate this metric, we put a counter in the code that is incremented every time the mentioned conditions are met. Comparing the resulting count for a network with the defense system and a network without it gives a suitable measure of the power of our defense system. In order to properly check our defense systems, we have also obtained this value in the presence of the Nik-Defense system and compared it with our defense systems.
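The first two metrics reduce to simple bookkeeping inside the Monte Carlo simulator; the sketch below shows one possible expression of them (the counter names and the sample figures at the end are ours and purely illustrative, and the opportunity check follows the combined-attack condition given in the attack model).

def relative_revenue(selfish_main_blocks, honest_main_blocks):
    """Relative revenue of the selfish pool: its share (in %) of main-chain blocks."""
    return 100.0 * selfish_main_blocks / (selfish_main_blocks + honest_main_blocks)

def is_double_spend_opportunity(private_len, public_len, z_confirmations):
    """Combined-attack condition: the attacker's private chain is ahead of the
    public chain while the public chain already exceeds the confirmation count
    the merchant waits for before shipping."""
    return private_len > public_len and public_len > z_confirmations

# Hypothetical fork snapshots observed during one simulated run.
double_spend_count = sum(
    is_double_spend_opportunity(private_len, public_len, z_confirmations=6)
    for private_len, public_len in [(3, 2), (1, 4), (8, 7)]
)
print(relative_revenue(selfish_main_blocks=310, honest_main_blocks=690), double_spend_count)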
§.§.§ The upper bound of the relative revenue of selfish miners If the attack is carried out under ideal conditions, <ref> is used to calculate the upper bound of the revenue of selfish miners. α / (1 - α) §.§.§ Profitable threshold The minimum ratio of hash power at which selfish mining brings a miner more reward than honest mining is called the profitable threshold. §.§ Experiments In this section, experiments have been designed and implemented to check the performance of the proposed models against double-spending and selfish mining attacks. Each experiment consists of 1000 iterations, meaning that we consider 1000 mined blocks. Additionally, we repeat each experiment 50 times and report the average of the results as the final outcome to ensure a fair analysis. In our selfish mining experiments, we conducted a detailed analysis to accurately determine the profit threshold. We achieved this by examining the relative revenue of the selfish pool, considering processing power values (α) ranging from 0.20 to 0.5 in steps of 0.02. These experiments were performed on networks fortified with our proposed defenses. In the double-spending experiments, we analysed the frequency of successful double-spending attacks. Specifically, we examined the behavior of a selfish pool with processing power α ranging from 0.20 to 0.45, in steps of 0.05. These experiments were performed on networks equipped with our proposed defenses. §.§.§ The first experiment This experiment aims to assess the effectiveness of the proposed models in protecting against selfish mining attacks. The results of this evaluation are presented in <ref>, which illustrates the relative revenue earned by attackers as a function of their hash rates. The network is assumed to be equipped with the proposed models and is compared to other defensive mechanisms, including the Nik-Defense, uniform tie-breaking, and publish or perish systems. In <ref>, we can see the results of this experiment for both proposed methods. The black dotted line shows the ideal scenario, i.e., the situation in which each miner is rewarded according to the amount of processing power they have contributed. The blue line shows Nik-Defense, the strongest previously proposed method, against which we compare our own. In addition, we present the results obtained with the uniform tie-breaking and publish or perish techniques <cit.> to facilitate a more comprehensive analysis. As we can see in <ref>, our first proposed method increases the profit threshold compared to previous works. However, as is also clear from <ref>, both our SDTLA and Nik-Defense severely penalize the attacker even when the attacker holds very little power. This indicates that a relatively high value is selected for the safe parameter K even when the network is not at serious risk, which can be hazardous: always choosing the winning chain by the weight criterion can open the way to new attacks that are even easier than selfish mining. For instance, if an attacker persists in mining a block instead of abandoning their private chain, even when they lag two blocks behind, a high value for parameter K and the consequent selection of the winning chain based on the weight criterion can lead to the attacker winning. To address this issue, we present WVBM, which only applies the weight criterion when necessary.
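To make that distinction concrete, the following sketch condenses the WVBM fork-resolving rule of the previous section; it assumes chains are lists of block objects ordered from the fork point (oldest first) with a timestamp field, that weight is the timestamp-based weighting function described for SDTLA, and the helper names are ours.

import math

def validating_weight(chain, competitors, depth=10):
    """Count the heights, among the chain's first `depth` blocks, at which its
    block carries the most recent timestamp of all competing blocks at that height."""
    score = 0
    for h in range(min(depth, len(chain))):
        rivals = [c[h].timestamp for c in competitors if len(c) > h]
        if all(chain[h].timestamp >= t for t in rivals):
            score += 1
    return score

def wvbm_select(chains, weight):
    """WVBM fork resolution: keep the longest chain only if its validating weight
    reaches a quarter of its length (rounded up); otherwise fall back to the
    heaviest chain. When lengths tie, the weight decides."""
    ordered = sorted(chains, key=len, reverse=True)
    longest = ordered[0]
    if len(ordered) > 1 and len(longest) == len(ordered[1]):
        return max(chains, key=weight)          # tie in length: weight decides
    others = ordered[1:]
    threshold = math.ceil(len(longest) / 4)
    if validating_weight(longest, others) >= threshold:
        return longest
    return max(chains, key=weight)

Because the length criterion remains the default and the weight criterion is consulted only when the longest chain fails this timestamp check, the rule stays close to Bitcoin's original fork-resolving policy while still discarding chains whose early blocks were withheld.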
Our weight validation-based approach predominantly utilizes the length criterion for selecting the winning chain, ensuring that newer attacks cannot surpass selfish mining in terms of success rate. As depicted in <ref>, the results of this method are comparable to those of the ideal scenario, establishing it as a robust defensive system against selfish mining. In an ideal scenario, no individual would be able to earn revenue greater than their proportionate processing power, and as a result, there is no incentive for rational miners to act selfishly. In <ref>, the profit threshold for each defensive mechanism is displayed. Our proposed methods, particularly WVBM, demonstrate the most favorable outcomes. §.§.§ The second experiment The objective of this experiment is to assess the effectiveness of the proposed models in preventing double-spending attacks. This section presents a graphical representation of the number of successful double-spending attacks as a function of the attacker's hash rate. The evaluation is conducted in a scenario where the network is equipped with the proposed methods, and the results are compared to other scenarios, such as a network that simply uses a fixed Z or a network equipped with a Nik-Defense system improved by using a fixed Z to reduce double-spending attacks. The table presented as <ref> displays the parameter values of the proposed systems used in this experiment. Notably, WVBM offers the key advantage of being able to determine the optimal value for Z: in the best case, Z is equal to 2, while in the worst case, it is equal to 12. In comparison, SDTLA performs differently, with Z being 3 in the best cases and 24 in the worst cases. The following figure shows the results of the second experiment for our proposed methods, where we can see the effects of our methods compared to previous works. As we can see in <ref>, SDTLA handles DS attacks better than the improved Nik-Defense with a fixed Z. The Z parameter in this method is often set between 6 and 12. We can therefore conclude that this method not only confirms transactions faster than Nik-Defense but also reduces the risk of DS attacks more effectively. Aside from the substantial reduction of double-spending attacks, it is crucial to consider the value of the parameter Z. Customers typically prefer prompt service, so increasing the value of Z may not be favorable for them. In Bitcoin, a block is mined roughly every ten minutes, implying that a rise in Z from 6 to 12 would result in a 60-minute extension of the service time. According to the results, WVBM demonstrates superior performance in managing DS attacks compared to SDTLA and Nik-Defense. This technique sets the maximum allowable value for Z at 12, but it frequently adjusts this parameter to 6, 4, or even 2. The effectiveness of the proposed methods can be evaluated by examining the average values of the Z parameter and the time required for the service provider to dispatch goods, as shown in <ref>. Therefore, we can assert that WVBM is the best method proposed in this paper. SDTLA uses a high Z parameter to reduce the likelihood of double-spending attacks, but this approach may not be appropriate for services that demand quick delivery. §.§.§ The third experiment The objective of this experiment is to assess the impact of variations in the time interval τ and the Time Window on the performance of the two methods presented in this paper.
As these parameters are directly associated with the learning automata employed in the proposed methods, the evaluation of SDTLA covers its performance against both double-spending and selfish mining attacks, whereas for WVBM the evaluation focuses solely on double-spending attacks.
* The impact of the time interval τ on the proposed methods. Based on <ref>, the impact of this parameter on the SDTLA model's ability to counter selfish mining is not particularly significant, and a value of 5 yields the most favorable outcomes. The rationale for a small value is to enhance responsiveness in adapting to changes in the attacker's strength. However, this does not hold for double-spending attacks: <ref> and <ref> clearly show that a higher value of τ yields better results for both methods. Nonetheless, a larger value is not automatically better. <ref> and <ref> report the values of the Z parameter in the conducted experiments, and according to these tables the best trade-off occurs when τ is set to 5. Considering the importance of transaction confirmation speed for both service providers and customers, we designate 5 as the appropriate value for this parameter. The tables also indicate that when the model responds to the attacker's changes at a slower pace, larger values are assigned to Z; although this reduces the risk of attacks, it also slows transaction confirmation, which is undesirable for clients.
* The impact of the Time-Window interval on the proposed methods. According to <ref>, the influence of this parameter on the SDTLA model's capacity to combat selfish mining is relatively insignificant, and a Time-Window size of 6τ produces the most advantageous results. The reasoning behind this choice is again to let the system adapt promptly to variations in the attacker's capabilities. However, this does not hold for double-spending attacks. <ref> and <ref> clearly demonstrate the superior performance of WVBM in mitigating double-spending attacks. Although a Time-Window size of 6τ yields excellent results in terms of the mean value of Z, it also leaves the system more exposed to weaker attackers than a window size of 12τ does. Upon careful examination of <ref>, <ref>, <ref>, and <ref>, a Time-Window size of 12τ is the most prudent choice for both proposed methods: it reduces the risk of attacks to an appropriate level without excessively inflating the value of Z.

§ CONCLUSION
In this section, based on the findings from the experiments, we present a comprehensive conclusion on the performance of the two proposed methods.

§.§ SDTLA
The experimental results indicate that the SDTLA algorithm significantly reduces the risks of double-spending attacks and selfish mining. The algorithm outperforms the Nik-Defense system and is trained effectively because the two learning automata are independent of each other. The results demonstrate the power of the learning automata in setting the safe parameters and the intelligent response of the algorithm to network changes.
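As a rough illustration of how a learning automaton can tune a safe parameter of this kind, the sketch below implements a generic linear reward-penalty update over a small set of candidate values for Z. This is a minimal example included for illustration under stated assumptions (the candidate set, step sizes, and reward signal are placeholders), not the VDHLA used in SDTLA or WVBM.

```python
import random

class SafeParameterAutomaton:
    """Minimal linear reward-penalty automaton choosing among candidate safe-parameter values.

    Illustrative stand-in for the learning automata discussed in the text; the
    candidate values and step sizes below are arbitrary placeholders.
    """

    def __init__(self, candidates=(2, 4, 6, 8, 12), a=0.1, b=0.05):
        self.candidates = list(candidates)                    # possible values of, e.g., Z
        self.p = [1.0 / len(candidates)] * len(candidates)    # action probabilities
        self.a, self.b = a, b                                 # reward and penalty step sizes

    def choose(self) -> int:
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def update(self, i: int, rewarded: bool) -> None:
        n = len(self.p)
        if rewarded:   # reinforce action i, shrink the others
            self.p = [pj + self.a * (1 - pj) if j == i else pj * (1 - self.a)
                      for j, pj in enumerate(self.p)]
        else:          # penalize action i, redistribute probability to the others
            self.p = [pj * (1 - self.b) if j == i else
                      self.b / (n - 1) + pj * (1 - self.b)
                      for j, pj in enumerate(self.p)]

# One tuning round: the reward would come from observing whether the chosen Z
# stopped attacks without slowing down transaction confirmation too much.
automaton = SafeParameterAutomaton()
idx = automaton.choose()
z = automaton.candidates[idx]
no_attack_observed = True            # placeholder for the real environment feedback
automaton.update(idx, rewarded=no_attack_observed)
```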
Compared to previous systems, SDTLA has a lower computational cost and a greater ability to reduce the risk of attacks. One important consideration in SDTLA is the possibility of a learning automaton getting stuck in a state where it repeatedly chooses the same actions with high probability. To address this issue, the learning automata are reset after a period of time and their parameters are returned to their initial values. While the main feature of VDHLA is the dynamic adjustment of the maximum operating depth, an additional time interval is included to completely eliminate the risk of being stuck in a repeating state. The duration of this interval is chosen to be shorter than the timescale on which the attacker's processing power changes. It is essential to set this parameter carefully to avoid problems such as always choosing large values for Z. Another solution is to withhold rewards from the learning automata when they repeatedly choose the same actions, which ensures that the probability of choosing a particular action does not exceed a reasonable limit.
In summary, the advantages of SDTLA include its ability to effectively reduce double-spending attacks, its lower computational cost, and its intelligent response to network changes. The main potential disadvantage is the need for careful parameter setting to avoid getting stuck in repeating states; however, solutions such as resetting the learning automata and withholding rewards for repeated actions effectively mitigate this issue. The advantages and disadvantages of the system are listed below.
§.§.§ Advantages
* Preservation of the advantages of previous systems
* Simultaneous training of two learning automata with no adverse effects on each other
* Mitigation of the risk of selecting the winning chain in the presence of an eclipse attack
§.§.§ Disadvantages
* Choosing the winning chain in a fork based on the weight criterion when it is not necessary.
* Choosing a high value for the Z parameter, which results in slower transaction and trade processes.
* The potential for new attack methods to arise.
By the possibility of new attacks we mean that in this method the weight criterion plays a crucial role in most cases, and this can be dangerous. For instance, if the attacker is two blocks behind and, instead of relinquishing, mines another block, an incorrect setting of the safe parameter associated with selfish mining and a poor selection of the winning chain based on the weight criterion can lead to the attacker winning. This situation can give rise to new types of attacks, including new selfish mining and double-spending attacks. The proposed solution is to design a system that prevents these attacks; therefore, we propose the WVBM system.

§.§ WVBM
Based on the experimental results, our findings indicate that the proposed solution is highly effective in mitigating selfish mining attacks. Despite a lower profitability threshold in some cases compared to SDTLA, the overall profitability for the attacker is significantly reduced and becomes comparable to the ideal scenario, in which no individual can earn revenue greater than their proportionate processing power and there is consequently no incentive for rational miners to act selfishly. While no defense can fully achieve the conditions of the ideal scenario, WVBM can be considered the best outcome of this paper.
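To make the fork-resolution policy behind WVBM more concrete, the sketch below shows the general idea described above: prefer the longer chain, and consult the weight criterion only when the length criterion cannot decide. It is an illustrative sketch, not the actual implementation; the chain representation and the `weight_of` function are placeholders, and the precise condition under which WVBM falls back to the weight criterion is simplified here to a length tie.

```python
from typing import Callable, List

def resolve_fork(chain_a: List[dict], chain_b: List[dict],
                 weight_of: Callable[[List[dict]], float]) -> List[dict]:
    """Illustrative fork resolution in the spirit of WVBM.

    Length is the primary criterion; the weight criterion is consulted only in the
    exceptional case where length cannot decide. `weight_of` stands in for the
    paper's chain-weighting policy.
    """
    if len(chain_a) != len(chain_b):
        return chain_a if len(chain_a) > len(chain_b) else chain_b
    # Lengths tie: fall back to the weight criterion. Because the weighting policy
    # makes exact ties in weight essentially impossible, a decision is reached
    # without waiting for the next mined block.
    return chain_a if weight_of(chain_a) >= weight_of(chain_b) else chain_b
```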
It is worth noting that the success of the double-spending attack is highly dependent on the success of the selfish mining attack, and the use of WVBM has shown a considerable reduction in the possibility of such attacks, as evidenced by the experimental results. To provide a comprehensive evaluation of this system, we also present its advantages and disadvantages in the following sections. §.§.§ Advantages * Preservation of the advantages of previous systems. * Precise adjustment of the Z parameter. * Optimal defense performance against selfish mining. * Reduced likelihood of selecting a winning chain during an eclipse attack. * Elimination of potential vulnerabilities that could lead to new attacks. * Generally, the length criterion is preferred, while the weight criterion is considered only in exceptional circumstances. §.§.§ Disadvantage * The potential for new attack methods to arise. § SUMMARY AND FUTURE WORKS Double-spending attacks are one of the most significant risks that digital currencies face. The risk of this type of attack increases when combined with selfish mining, so by preventing selfish mining, the risk of another attack can be significantly reduced. This paper introduced two intelligent, back-compatible decentralized defenses that use a new weight policy to select the winning chain. We propose two defensive systems to reduce the risks of selfish mining and double-spending attacks. We use learning automata as a light and fast learning tool to help our systems detect the possibility of these attacks. The results of experiments show that our models can reduce the risks of these kinds of attacks, and we can use them to improve the current blockchains. The SDTLA and WVBM methods have been proposed as effective defense mechanisms against selfish mining in blockchain networks. The SDTLA method has been shown to increase the profitability threshold of selfish mining up to 47%, while the WVBM method performs even better and is very close to the ideal scenario where each miner's revenue is proportional to their shared hash processing power. Moreover, both methods can effectively mitigate the risks of double-spending through the tuning of the Z Parameter. These findings highlight the potential of SDTLA and WVBM as promising solutions for enhancing the security and efficiency of blockchain networks. Because of the weighting method used in defense systems presented in this paper, there will never be conditions under which two chains in a fork are completely equal in terms of weight. We know that normally when the lengths of two chains are equal, the winner will be the chain on which the next block is mined, which can lead to the attacker increasing his power by using the eclipse attack. Therefore, it can be said that the methods proposed in this paper have been able to eliminate the situation in which the attacker can use the eclipse attack to strengthen his combined attack. Future research could explore the potential for new attacks arising from changes in system policies. One potential way for investigation is designing an attack where an automaton can switch between honest and selfish behavior at any moment. Such an attack could potentially disrupt the proper training of defense system automata and catch them off guard during the attack. As blockchain technology continues to evolve, there is an increasing need for in-depth exploration and analysis of its potential applications. 
Notably, the incorporation of artificial intelligence and machine learning into blockchain technology has emerged as a fascinating area for further research. Therefore, the following recommendations are proposed for future studies in this domain: * Investigating the efficacy of alternative reinforcement learning algorithms in mitigating blockchain attacks. * Developing novel attack strategies utilizing learning automata to dynamically optimize the attacker's performance and response to defensive mechanisms. * To enhance the security and robustness of distributed systems, future research could focus on developing advanced consensus algorithms based on machine learning techniques. These algorithms should be designed to make it significantly more challenging and costly for attackers to launch successful attacks against the system. * Further work could be done to improve the efficiency and effectiveness of learning automata algorithms used in the proposed systems. This could involve developing more intelligent approaches for calculating the reinforcement signal that enables these algorithms to learn and adapt to changing environments. Such advancements could enhance the performance and reliability of these systems, leading to more widespread adoption and application in real-world scenarios. * Develop a new algorithm for calculating the weight of chains to improve accuracy and efficiency. * Investigate additional strategies to combat the combined attack of selfish mining, eclipse, and double-spending, which was proposed by Gervais et al.<cit.>, such as exploring alternative consensus mechanisms or implementing additional security measures. * One potential area of future work could be the exploration of an intelligent approach for determining the optimal number of blocks used in the weighting of chains, based on network conditions. This approach would aim to improve the efficiency and effectiveness of the chain weighting process by leveraging intelligent algorithms to identify the optimal number of blocks for a given set of network conditions. Such an approach could potentially enhance the scalability and performance of blockchain networks, and warrants further investigation in future research. * An area for future exploration involves leveraging reinforcement learning algorithms, such as learning automata, to intelligently calculate the chain length coefficient for obtaining a weighted threshold in WVBM. This approach has the potential to improve the accuracy and efficiency of calculations unsrtnat [ < g r a p h i c s > ]Seyed Ardalan Ghoreishi He earned his Bachelor of Science in Electrical Engineering from Sadjad University of Technology in 2014. In 2022, he earned his Master of Science in Computer Engineering from Amirkabir University of Technology in Tehran, Iran. His areas of expertise include Machine Learning, Deep Learning, and Blockchain technology. [ < g r a p h i c s > ]Mohammad Reza Meybodi He earned his Bachelor of Science and Master of Science in Economics from Shahid Beheshti University in Tehran, Iran in 1973 and 1977 respectively. He later received his Master of Science and Doctorate in Computer Science from Oklahoma University in Norman, OK, USA in 1980 and 1983 respectively. Afterward, he served as an Assistant Professor at Western Michigan University in Kalamazoo, MI from 1983 to 1985, and then as an Associate Professor at Ohio University in Athens, OH from 1985 to 1991. 
Presently, he holds the position of a Full Professor at the Computer Engineering Department of Amirkabir University of Technology in Tehran. His areas of expertise encompass Wireless Networks, Fault-Tolerant Systems, Learning Systems, Parallel Algorithms, Soft Computing, and Software Development.
http://arxiv.org/abs/2307.02225v1
20230705120627
Efficient Information Reconciliation for High-Dimensional Quantum Key Distribution
[ "Ronny Mueller", "Domenico Ribezzo", "Mujtaba Zahidy", "Leif Katsuo Oxenløwe", "Davide Bacco", "Søren Forchhammer" ]
quant-ph
[ "quant-ph" ]
Efficient Information Reconciliation for High-Dimensional Quantum Key Distribution
^1 Department of Electrical and Photonics Engineering, Technical University of Denmark, Lyngby, Denmark ^2 National Institute of Optics of National Research Council (CNR-INO), Florence, Italy ^3 University of Naples Federico II, Naples, Italy ^4 Department of Physics and Astronomy, University of Florence, 50019 Sesto Fiorentino, Italy ronmu@dtu.dk
The Information Reconciliation phase in quantum key distribution has a significant impact on the range and throughput of any QKD system. We explore this stage for high-dimensional QKD implementations and introduce two novel methods for reconciliation. The methods are based on nonbinary LDPC codes and the Cascade algorithm, and achieve efficiencies close to the Slepian-Wolf bound on q-ary symmetric channels.

§ INTRODUCTION
Quantum Key Distribution (QKD) protocols allow for the secure transmission of information between two entities, Alice and Bob, by distributing a symmetric secret key via a quantum channel <cit.>. The process involves a quantum stage where quantum information is distributed and measured. This quantum stage is succeeded by post-processing: in this purely classical stage, the results of the measurements undergo a reconciliation process to rectify any discrepancies before a secret key is extracted during the privacy amplification phase. The emphasis of this paper is on the information reconciliation phase, which has a significant impact on the range and throughput of any QKD system.
Despite the considerable development of QKD technology using binary signal formats, its high-dimensional counterpart (HD-QKD) <cit.> has seen significantly less research effort so far. However, HD-QKD offers several benefits, including higher information efficiency and increased noise resilience <cit.>. Although the reconciliation phase for binary QKD has been extensively researched, little work has been done to analyze and optimize this stage for HD-QKD, apart from the introduction of the layered scheme in 2013 <cit.>. This study addresses this gap by introducing two novel methods for information reconciliation in high-dimensional QKD and analyzing their performance.
Unlike the majority of channel coding applications, the HD-QKD scenario places weaker demands on latency and throughput while placing significant emphasis on minimizing information leakage. Spurred by this unique setting, the superior decoding performance of nonbinary LDPC codes <cit.>, and their inherent compatibility with high dimensions, we investigate the design and use of nonbinary LDPC codes for post-processing in HD-QKD protocols as the first method.
The second method we investigate is the Cascade protocol <cit.>, one of the earliest proposed methods for reconciling keys. While the many rounds of communication required by Cascade, and concerns about the resulting limitations on throughput, have led to a focus on syndrome-based methods <cit.> over the past decade, recent research has shown that sophisticated software implementations can enable Cascade to achieve high throughput even with realistic latency on the classical channel <cit.>. Motivated by these findings, we explore the usage of Cascade in the reconciliation stage of HD-QKD and propose a modification that enables high reconciliation efficiency for the respective quantum channel.
§ BACKGROUND
In this section, we describe the general setting and channel model and introduce the relevant figures of merit. We then describe the two proposed methods in more detail.

§.§ Information reconciliation
The goal of the information reconciliation stage in QKD is to correct any discrepancies between the keys of the two parties while minimizing the information leaked to potential eavesdroppers. Generally, Alice sends a random string 𝐱=(x_0,...,x_n-1), x_i = 0,...,q-1, of n qudits of dimension q to Bob, who measures them and obtains his version of the string 𝐲=(y_0,...,y_n-1), y_i = 0,...,q-1. We assume that the quantum channel can be accurately represented by a substitute channel where 𝐱 and 𝐲 are correlated as a q-ary symmetric channel, since errors are typically uncorrelated and symmetric. The transition probabilities of such a channel are
P(y_i|x_i) = 1-p if y_i=x_i, and P(y_i|x_i) = p/(q-1) otherwise.
Here, the parameter p represents the channel transition probability. We refer to the symbol error rate between 𝐱 and 𝐲 as the quantum bit error rate (QBER), in a slight abuse of notation but consistent with experimental works on HD-QKD. In our simulations, we assume the QBER to be an inherent channel property, making it equivalent to the channel parameter p. In addition to the qudits, Alice also sends messages, e.g. syndromes or parity bits, which are assumed to be error-free. From a coding perspective, this is equivalent to asymmetric Slepian-Wolf coding with side information at the receiver, where the syndrome 𝐬 represents the compressed version of 𝐱 and 𝐲 is the side information. A more detailed explanation of this equivalence can be found in <cit.>, while for an interpretation of Cascade in the context of linear block codes see <cit.>.
Any information leaked to a potential eavesdropper at any point during the quantum key distribution must be subtracted from the final secret key during privacy amplification <cit.>. The information leaked during the information reconciliation stage will be denoted by leak_IR. In the case of LDPC codes, assuming no rate adaptation, it can be upper-bounded by the syndrome length in bits, leak_IR≤ m, with m being the syndrome length times log_2(q). In the case of Cascade, it can be upper-bounded by the number of parity bits sent from Alice to Bob <cit.>. Using the Slepian-Wolf bound <cit.>, the minimum amount of leaked information required to reconcile successfully with an arbitrarily low failure probability in the asymptotic limit of infinite length is given by the conditional entropy: leak_IR≥ nH(X|Y). The conditional entropy (base q) of the q-ary symmetric channel, assuming independent and identically distributed input X, can be expressed as
H(X|Y) = -(1-p)log_q(1-p) - p·log_q(p/(q-1)).
A code's performance in terms of relative information leakage can be measured by its efficiency f, given by
f = leak_IR/(nH(X|Y)).
It is important to note that an efficiency of f>1 corresponds to leaking more bits than required by the theoretical minimum of f=1, which represents the best possible performance according to the Slepian-Wolf bound. In practice, systems have f>1 due to the difficulty of designing optimal codes, finite-size effects, and the inherent trade-off between efficiency and throughput. In the following sections, we restrict ourselves to q being a power of 2.
Both approaches can function without this restriction, but it allows for more efficient implementation of the reconciliation and is commonly seen in physical implementations of the quantum stage due to symmetries. §.§ Nonbinary LDPC codes §.§.§ Codes & Decoding We provide here a short overview over nonbinary LDPC codes and their decoding based on the concepts and formalism of binary LDPC codes. For a comprehensive review of those, we refer to <cit.>. Nonbinary LDPC codes can be described by their parity check matrix 𝐇, with m rows and n columns, containing elements in a Galois Field (GF) of order q. To enhance clarity in this section, all variables representing a Galois field element will be marked with a hat, for instance, â. Moreover, let ⊕, ⊖, ⊗, and ⊘ denote the standard operations on Galois field elements. An LDPC code can be depicted as a bipartite graph, known as the Tanner graph. In this graph, the parity-check equations form one side, called check nodes, while the codeword symbols represent the other side, known as variable nodes. The Tanner graph of a nonbinary LDPC code also has weighted edges between check and variable nodes, where the weight corresponds to the respective entry of 𝐇. The syndrome 𝐬 of the q-ary string 𝐱 is computed as 𝐬 = 𝐇𝐱. For decoding, we employ a log-domain FFT-SPA <cit.>. In-depth explanations of this algorithm can be found in <cit.>, but we provide a summary here for the sake of completeness. Let Z represent a random variable taking values in GF(q), such that P(Z_i = k) indicates the probability that qudit i has the value k=0,...,q-1. The probability vector 𝐩=(p_0,...p_q-1), p_j = P(Z=j), can be converted into the log-domain using the generalized equivalent of the log-likelihood-ratio (LLR) in the binary case, 𝐦=(m_0,...,m_q-1), m_j = logP(Z=0)/P(Z=j) = log(p_0/p_j). Given the LLR representation, probabilities can be retrieved through p_j = exp(-m_j)/∑_k=0^q-1exp(-m_k). We use p(·) and m(·) to denote these transforms. To further streamline notation, we define the multiplication and division of an element â in GF(q) and an LLR message as a permutation of the indices of the vector: â·𝐦 := (m_0̂⊘â,...,m_q̂-̂1̂⊘â) 𝐦 / â := (m_0̂⊗â,...,m_q̂-̂1̂⊗â), where the multiplication and division of the indices occur in the Galois Field. These permutations are necessary as we need to weigh messages according to their edge weight during decoding. We further define two transformations involved in the decoding, ℱ(𝐦, Ĥij) = ℱ(p(Ĥij·𝐦)) ℱ(𝐦, Ĥij)^-1 = m(ℱ^-1(𝐦))/Ĥij, where ℱ represents the discrete Fourier transform. Note that for q being a power of 2, the Fast Walsh Hadamard Transform can be utilized. The decoding process then consists of two iterative message-passing phases, from check nodes to variable nodes and vice versa. The message update rule at iteration l for the check node corresponding to the parity check matrix entry at (i,j) can be expressed as 𝐦^(l)_ij,CV = 𝒜(ŝ^'_i) ℱ^-1( j'∈ℳ(i)/jΠℱ(𝐦^(l-1)_ij', Ĥ_ij'), Ĥ_ij), where ℳ(i) denotes the set of all check nodes in row i of 𝐇. 𝒜, defined as 𝒜_kj(â) = δ( â⊕ k ⊖ j) - δ(a⊖ j), accounts for the nonzero syndrome <cit.>. The weighted syndrome value is calculated as ŝ^'_i = ŝ_i ⊘Ĥ_ij. The a posteriori message of column j can be written as 𝐦^(l)_j = 𝐦^(0)(j) + i^'∈𝒩(j)∑𝐦^(l)_i^'j,CV, where 𝒩(j) is the set of all check nodes in column j of 𝐇. The best guess 𝐱 at each iteration l can be calculated as the minimum value of the a posteriori, x_j^(l) = argmin (𝐦^l_j). 
The second message passings, from variable to check nodes, are given by 𝐦_ij, VC^(l) = 𝐦^(l)_j - 𝐦^(l)_ij, CV. The message passing continues until either 𝐇𝐱 = 𝐬 or the maximum number of iterations is reached. To allow for efficient reconciliation for different QBER values, a rate-adaptive scheme is required. We use the blind reconciliation protocol <cit.>. A fixed fraction δ of symbols is chosen to be punctured or shortened. Puncturing refers to replacing a key bit with a random bit that is unknown to Bob, for shortening the value of the bit is additionally send to Bob over the public channel. Puncturing, therefore, increases the code rate, while shortening lowers it. The rate of a code with p punctured and s shortened bits is then given by R = n-m-s/n-p-s. To see how rate adaption influences the bounding of leak_IR see <cit.>. The blind scheme introduces interactivity into the LDPC reconciliation. Given a specific code, we start out with all bits being punctured and send the respective syndrome to Bob. Bob attempts to decode using the syndrome. If decoding fails, Alice transforms ⌈ n(0.028 - 0.02R)⌉ <cit.> punctured bits into shortened bits, and resends the syndrome. This value is a heuristic expression and presents a trade-off between the number of communication rounds and the efficiency. Bob tries to decode again and requests more bits to be shortened in case of failure. If there are no punctured bits left to be turned into shortened bits, Alice reveals key bits instead. This continues until either decoding succeeds or the the whole key is revealed. §.§.§ Density Evolution In the case of a uniform edge weight distribution, the asymptotic decoding performance of LDPC codes for infinite code length is entirely determined by two polynomials <cit.>: λ(x) = ∑_i=0^d_v, maxλ_i x^i-1 ρ(x) = ∑_i=0^d_c, maxρ_i x^i-1. In these expressions, λ_i (ρ_i) represents the proportion of edges connected to variable (check) nodes with degree i, while d_v, max (d_c, max) indicates the highest degree of the variable (check) nodes. Given these polynomials, we can then define the code ensemble ℰ(λ, ρ), which represents all codes of infinite length with degree distributions specified by λ and ρ. The threshold p_t(λ, ρ) of the code ensemble ℰ(λ, ρ) is defined as the worst channel parameter (QBER) at which decoding remains possible with an arbitrarily small failure probability. This threshold can be estimated using Monte-Carlo Density Evolution (MC-DE), which is thoroughly described in <cit.>. This technique repeatedly samples node degrees according to λ and ρ, and draws random connections between nodes for each iteration. With a sufficiently large sample size, this simulates the performance of a cycle-free code. Note that MC-DE is particularly well suited for nonbinary LDPC codes, as the distinct edge weights aid in decorrelating messages <cit.>. During the simulation, we track the average entropy of all messages. When it falls below a certain value, decoding is considered successful. If this does not occur after a maximum number of iterations, the evaluated channel parameter is above the threshold of ℰ(λ,ρ). Utilizing a concentrated check node distribution (which is favorable according to <cit.>) and a fixed code rate, we can further simplify to ℰ(λ). The threshold can then be employed as an objective function to optimize the code design, which is commonly achieved using the Differential Evolution algorithm <cit.>. 
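Before moving on to Cascade, the short numerical sketch below ties together the figures of merit defined in this section: the conditional entropy of the q-ary symmetric channel, the reconciliation efficiency f, and the rate of a rate-adapted code with shortened and punctured symbols. It is an illustrative calculation only (the example numbers are arbitrary and the entropy is expressed in bits per symbol, i.e., the base-q value multiplied by log2(q)); it is not part of the reconciliation software.

```python
import math

def conditional_entropy_bits(p: float, q: int) -> float:
    """H(X|Y) in bits per symbol for a q-ary symmetric channel with symbol error rate p.

    Dividing by log2(q) gives the base-q entropy used in the text.
    """
    if p == 0.0:
        return 0.0
    return -(1 - p) * math.log2(1 - p) - p * math.log2(p / (q - 1))

def efficiency(leak_bits: float, n: int, p: float, q: int) -> float:
    """Reconciliation efficiency f = leak_IR / (n H(X|Y)), everything counted in bits."""
    return leak_bits / (n * conditional_entropy_bits(p, q))

def adapted_rate(n: int, m: int, s: int, punctured: int) -> float:
    """Rate R = (n - m - s) / (n - p - s) of a code with s shortened and p punctured symbols."""
    return (n - m - s) / (n - punctured - s)

# Example with arbitrary numbers: dimension q = 8, n = 30000 symbols, QBER of 10%.
q, n, qber = 8, 30000, 0.10
h_bits = conditional_entropy_bits(qber, q)
min_leak = n * h_bits                                  # Slepian-Wolf minimum leakage in bits
print(f"H(X|Y) = {h_bits:.3f} bits/symbol, minimum leakage = {min_leak:.0f} bits")
print(f"f if 10% more than the bound is leaked: {efficiency(1.1 * min_leak, n, qber, q):.2f}")
print(f"rate with m = 9000, s = 1000, no puncturing: {adapted_rate(n, 9000, 1000, 0):.3f}")
```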
§.§ Cascade §.§.§ Binary Cascade Cascade <cit.> is one of the earliest schemes proposed for information reconciliation and has seen widespread use due to its simplicity and high efficiency. Cascade operates in several iterative steps. Alice and Bob divide their strings into top-level blocks of size k_1 and calculate their parity, where the size k_1 usually depends on the QBER and the specific version of Cascade. They send and compare their parities over a noiseless classical channel. If the parities for a single top-level block do not match, they perform a binary search on this block. There, the block is further divided into two, and parities are calculated and compared again. One of the two sub-blocks will have a different parity than the corresponding sub-block of Alice. We continue the binary search on this sub-block until we reach a sub-block that has size one, which allows us to locate and correct one error per mismatched top-level block. Alice and Bob then move on to the next iteration, where they shuffle their strings and choose new top-level blocks of size k_2. They then repeat the binary search on those. After correcting a bit in iteration i, except for i=1, Bob can look for blocks in previous iterations that contain this specific bit. The parity of these blocks has now changed as the bit got flipped, mismatching now with Alice's parity. This allows Bob to perform another binary search on them and correct additional bits. He can then again look for these additional bits in all earlier iterations, allowing for detected errors to "cascade" back. Successive works on the original Cascade protocol have been trying to increase its performance by either substituting the parity exchange with error-correction methods <cit.>, or by optimizing parameters like the top-level block sizes <cit.>. All our comparisons and modifications are applied to a high performing modifications <cit.> achieving efficiencies of up to f=1.025. This version has also been the basis for a recent high-throughput implementation, reaching a throughput of up to 570 Mbps <cit.>. §.§.§ High-Dimensional Cascade We propose the following modification to use Cascade for high-dimensional data, which we will denote by high-dimensional Cascade (HD-Cascade). We only highlight the differences compared to the best-performing approach<cit.> in terms of efficiency designed for binary QKD. * Initially, we map all symbols to an appropriate binary representation. Prior to the first iteration, we shuffle all bits while maintaining a record of which bits originate from the same symbol. This mapping effectively reduces the expected QBER used for block-size calculations to QBER_BIN = q/(2(q-1)) QBER_SYM. * Upon detecting an error, we immediately request the values of all bits originating from the same symbol, if not already known. The conditional probability to be a one given the values of all previously transmitted bits for these bits is close to 1/2. To be precise, it is equal to 1/2 for bits that have not yet participated in any parity checks and then varies with the length of the smallest block they participated in <cit.>. If any of these bits are erroneous, the blocks they have been participating in now have a mismatching parity. We can therefore immediately run the cascading step on those requested bits in all iterations including the current one, detecting more errors. Note that this allows for a cascading process in the first iteration already. 
* The fraction of errors corrected in the first iteration is significantly higher (often >95% in our simulations) compared to the binary version. This is due to the possibility of running a cascading process in the first iteration already. Consequently, we need to increase the block sizes for the following iterations as the dimensionality increases, see Table <ref>. § RESULTS §.§ Nonbinary LDPC codes While the code-design and decoding techniques described above are feasible for any dimension q, we focus on q=4, 8 as those are common in current implementations <cit.>. Nine codes were designed with code rates between 0.50 and 0.90 for q=4 (q=8), corresponding to a QBER range between 0 and 18% (24.7%). We used 100000 nodes with a maximum of 150 iterations for the MC-DE, the QBER was swept in 20 steps in a short range below the best possible threshold. In the Differential Evolution, population sizes between 15 and 50, a differential weight of 0.85, and a crossover probability of 0.7 were used. A sparsity of at most 10 nonzero coefficients in the polynomial was enforced, with the maximum node degree chosen as d_v, max=40. The sparsity allowed for reasonable optimization complexity, the maximum node degree was chosen to avoid numerical instability which we observed for higher values. The results of the optimization can be found in Table <ref> in form of the node degree distributions and their performance according to density evolution. The efficiency was evaluated for the highest sustainable QBER. The all-zero codeword assumption was used for the optimization and evaluation, which holds for the given scenario of a symmetric channel <cit.>. For all rates, the designed thresholds are close to the theoretical bound. LDPC codes with a length of n=30000 symbols were constructed using Progressive Edge Growth <cit.>, and a log-FFT-SPA decoder was used to reconcile the messages. The simulated performance of the finite-size codes can be seen in Figure <ref> for a span of different QBER values, each data point being the mean of 100 samples. We used the blind reconciliation scheme for rate-adaption. The mean number of decoding tries required for Bob to successfully reconcile is also shown. The valley pattern visible in the efficiency of the LDPC codes is due to the switching between codes of different rate, and a slight degradation in performance for high ratios of puncturing or shortening. The decoder used a maximum of 100 decoding iterations. As expected for finite-size codes, they do not reach the asymptotic ensemble threshold but show sub-optimal performance <cit.>. §.§ High-dimensional Cascade The performance of HD-Cascade was evaluated on the q-ary symmetric channel for dimensions q = 4, 8, 32, and for a QBER ranging from 1% to 20%. The results are shown in Figure <ref>. For comparison, a direct application of the best-performing Cascade modification on a binary mapping is also included. The proposed high-dimensional Cascade uses the same base Cascade with the additional adaptations discussed earlier. For q=2, HD-Cascade reduces to binary Cascade, resulting in equal performance. Both methods use the same block size of n = 2^16 bits for all cases. The used top-level block sizes k_i for each iteration i can be seen in Table <ref>, where [·] denotes rounding to the nearest integer. Additionally, the layered scheme is included as a reference. All data points have a frame error rate below 1% and show an average of 1000 samples. 
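As a concrete illustration of the symbol-to-bit bookkeeping that the HD-Cascade modification described above relies on, the sketch below maps q-ary symbols to bits, records which bit positions originate from the same symbol, and evaluates the expected binary error rate QBER_BIN = q/(2(q-1)) QBER_SYM used for the block-size calculation. It is a minimal illustrative example, not the simulation code used for the results above.

```python
import math
import random

def symbols_to_bits(symbols, q):
    """Map q-ary symbols to a bit string and remember which bits share a symbol."""
    width = int(math.log2(q))
    bits, symbol_of_bit = [], []
    for s_idx, s in enumerate(symbols):
        for k in reversed(range(width)):
            bits.append((s >> k) & 1)
            symbol_of_bit.append(s_idx)   # needed to request sibling bits when an error is found
    return bits, symbol_of_bit

def expected_binary_qber(qber_sym: float, q: int) -> float:
    """QBER_BIN = q / (2(q-1)) * QBER_SYM, the expected bit error rate after the mapping."""
    return q / (2 * (q - 1)) * qber_sym

# Example: dimension q = 8, symbol error rate of 10%.
q, qber_sym = 8, 0.10
print(f"expected binary QBER: {expected_binary_qber(qber_sym, q):.4f}")

symbols = [random.randrange(q) for _ in range(4)]
bits, owner = symbols_to_bits(symbols, q)
# If bit j is found to be in error, all bits with the same owner[j] would be requested.
j = 0
siblings = [i for i, s in enumerate(owner) if s == owner[j]]
print("bits from the same symbol as bit 0:", siblings)
```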
The wave pattern observable for the efficiency of Cascade in Figure <ref> and Figure <ref> is due to the integer rounding operation when calculating the block-sizes. This seems to be unavoidable, as block-sizes being a power of two have been shown to be optimal for the binary search in this setting <cit.>. The increase in both the range and secret key rate resulting from using HD-Cascade instead of directly applying binary Cascade is depicted in Figure <ref>. The improvement in the relative secret key rate r obtained using HD-Cascade is shown in Figure <ref>. This is calculated as r = skr_HD-Cascade/skr_Cascade-1. The used protocols are <cit.>, and experimental parameters for the simulation are derived from <cit.> for q=2, 4, where a combination of polarization and path is used to encode the qudits. For q=4, we also analyzed the performance of HD-Cascade on a subset of the actual experimental data, which confirms the simulated performance. For q=8, 32 we used a generalization, additional losses might transpire due to increased experimental complexity which are not considered in the simulation. § DISCUSSION §.§ Nonbinary LDPC codes Nonbinary LDPC codes are a natural candidate for the information reconciliation stage of HD-QKD, as their order can be matched to the dimension of the used qudits, and they are known to have good decoding performance <cit.>. Although they typically come with increased decoding complexity, this drawback is less of a concern in this context, since the keys can be processed and stored before being employed in real-time applications, which reduces the significance of decoding latency. Nevertheless, less complex decoder algorithms like EMS <cit.> or TEMS <cit.> can be considered to allow the usage of longer codes and for increasing the throughput. The node degree distributions we constructed show ensemble efficiencies close to one, 1.037 - 1.067 for q=4 and 1.024-1.080 for q=8. Note, that to the best of our knowledge, there is no inherent reason for the efficiencies of q=8 to be lower than for q=4, it is rather just a heuristic result due to optimization parameters fitting better. Although the ensembles we found display thresholds near the Slepian-Wolf bound, we believe that even better results could be achieved by expanding the search of the hyperparameters involved in the optimization, such as the enforced sparsity and the highest degree of λ, and by performing a finer sweep of the QBER during density evolution. The evaluated efficiency of finite-size codes shows them performing significantly worse than the thresholds computed with density evolution, with efficiencies ranging from 1.078 to 1.14 for QBER values in a medium range. This gap can be reduced by using longer codes and improving the code construction, e.g. using improved versions of the PEG algorithm <cit.>. The dependency of the efficiency on the QBER can further be reduced, i.e. flatting the curve in Figure <ref>, by improving the position of punctured bits <cit.>. While working on this manuscript, the usage of nonbinary LDPC codes for information reconciliation has also been proposed in <cit.>. They suggest mapping symbols of high dimensionality to symbols of lower dimensionality but still higher than 2 if beneficial, in similarity to the layered scheme. This can further be used to decrease computational complexity if required. §.§ HD-Cascade HD-Cascade has improved performance on high-dimensional QKD setups compared to directly applying binary Cascade. 
We can see significant improvement in efficiency, with mean efficiencies of f_HD-Cascade=1.06, 1.07, 1.12 compared to f_Cascade=1.22, 1.36, 1.65 for q=4, 8, 32, respectively. Using the parameters of a recent experimental implementation of 4-dimensional QKD<cit.>, a resulting improvement in range and secret key rate can be observed, especially for higher dimensions. For q=32, an increase of more than 10% in secret key rate and an additional 2.5 dB in tolerable channel loss is achievable according to our simulation results. Our approach demonstrates high efficiency across all QBER values but we noted that the time required for executing the correction increases significantly with higher error rates. Apart from the inherent scaling of Cascade with the QBER that is also present for binary implementations, this is additionally attributable to the immediate cascading of same-symbol bits. While the many rounds of communication required by Cascade have raised concerns about resulting limitations on throughput, recent research has shown that sophisticated software implementations can enable Cascade to achieve high throughput even with realistic latency on the classical channel <cit.>. We expect HD-Cascade to reach similar rates as its classical counterpart, as we expect the main difference with respect to throughput being an increased difficulty to batch together parity requests for parallelization due to the additional serial cascading for the same symbol bits while keeping the resulting penalty to efficiency minimal. Moreover, we believe that significant improvements in efficiency can still be achieved by further optimizing the choice of block sizes. §.§ Comparison Before comparing HD-Cascade and nonbinary LDPC codes, we want to mention the layered scheme, a binary LDPC code based scheme introduced in 2013. The layered scheme is based on decoding bit layers separately using ⌈log_2(q)⌉ binary LDPC codes. It is similar in concept to the multilevel coding and multistage decoding methods used in slice reconciliation for continuous-variable (CV) QKD <cit.>. While the layered scheme allows for reconciliation using binary LDPC codes only, it brings its own drawbacks, like error propagation, bit mapping, and interactive communication. Its performance can be seen for q=32 in Figure <ref>, notably for a much smaller block length (data read off Figure 5 <cit.>). Later experimental implementations report efficiencies of 1.25 <cit.> (q=3, n=1944, p=8%) and 1.17 <cit.> (q=1024, n=4000, p=39.6%). These papers report their efficiencies in the β-notation. β is commonly used in the continuous-variable QKD community, whereas f is more widespread with respect to discrete-variable QKD. They can be related via β(H(X)-H(X|Y)) = H(X)-fH(X|Y). Overall, HD-Cascade and nonbinary LDPC codes show good efficiency over all relevant QBER values, with HD-Cascade performing slightly better in terms of efficiency (see Figure <ref>). HD-Cascade shows a flat efficiency behavior over all ranges, compared to the LDPC codes, which have a bad performance for very low QBER values and an increase in performance with increasing QBER. This behavior can also be observed in LDPC codes used in binary QKD <cit.>. While the focus of this work lies in introducing new methods for high-dimensional information reconciliation with good efficiencies, the throughput is another important measure, especially with continuously improving input rates from advancing QKD hardware implementation. 
While an absolute and direct comparison of throughput strongly depends on the specific implementation and setup parameters, relative performances can be considered. Cascade has low computational complexity but high interactivity which can limit throughput in scenarios where the classical channel has a high latency. For a constant efficiency, as approximately observed for Cascade, the number of messages exchanged scales with the QBER as it is proportional to H(X|Y). Nonbinary LDPC codes, on the other hand, have low requirements on interactivity (usually below 10 syndromes per frame using the blind scheme) but high computational costs at the decoder. Their decoding complexity scales with q but not with the QBER, as its main dependence is on the number of entries in its parity check matrix and the node degrees. It should be noted that the QBER is usually fairly stable until the loss approaches the maximum range of the setup, e.g. see Figure <ref>, and that higher dimensions tend to operate at higher QBER values. It should be emphasized that for QKD, latency is not a big issue as keys do not need to be available immediately but can be stored for usage. QKD systems are usually significantly bigger and more expensive than setups for classical communication. This allows for reconciliation schemes with comparatively high latency and high computational complexity, for example by extensive usage of pipelining <cit.>. § CONCLUSION We introduced two new methods for the information reconciliation stage of high dimensional Quantum Key Distribution. The nonbinary LDPC codes we designed specifically for the q-ary symmetric channel allow for reconciliation with good efficiency with low interactivity. High-dimensional Cascade on the other hand uses a highly interactive protocol with low computational complexity. It shows significant improvement compared to directly applying Cascade protocols designed for binary Quantum Key Distribution, e.g. more than 10% for a 32-dimensional system for all possible channel losses. The Center of Excellence SPOC (ref DNRF123). § COMPETING INTERESTS The authors declare no competing financial or non-financial interests. § DATA AVAILABILITY All data used in this work are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2307.02692v1
20230705234032
Pair-instability supernovae from rapidly rotating metal-enriched progenitors
[ "Hideyuki Umeda", "Chris Nagele" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.HE" ]
Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan In this paper we revisit metal-enriched pair instability supernovae (PISNe) models which undergo chemically homogeneous evolution (CHE). By calculating multiple models, we intend to clarify mass ranges for the PISNe, ^56Ni masses from the PISNe, and mass loss histories of CHE-PISNe models for metallicities consistent with the Small Magellanic Cloud (SMC) and with the Large Magellanic Cloud (LMC). We show that for an initial velocity of v_ i/ v_ k = 0.1, these models undergo CHE and He-rich (Type Ib) PISNe occur in a lower mass range (M_ i∼ 110-170) than for more slowly rotating models. Interestingly, bright PISNe which have ^56Ni masses larger than 10 occur in a relatively small mass range, M_ i∼ 140-170. Another notable characteristic of CHE-PISNe is the large late time mass loss rates; consequently, CSM interaction may be observable in their light curves. We also show some examples of O-rich (Type Ic) CHE-PISNe produced by v_ i/ v_ k = 0.2 models. We expect these models to exhibit interaction with O-rich CSM, behavior which is consistent with the observed properties of the recently discovered PISN candidate, SN2018ibb. Finally, we present a collapsing v_ i/ v_ k = 0.2 model which has sufficient angular momentum to be regarded as a candidate for a Super-Kilonova. § INTRODUCTION A pair-instability supernova (PISN) is a thermonuclear explosion of a massive Oxygen core induced by the creation of electron-positron pairs. After this mechanism was initially proposed <cit.>, detailed calculations confirmed the viability of the explosion <cit.>. These studies mostly focused on metal free Pop III stars because the mass range of PISNe is large (about 140 - 300 for metal free stars) and stars massive enough to experience the pair instability are thought to be rare in the present universe because of large wind mass loss for metal enriched stars. Although there are some calculations for metal enriched stars <cit.>, all but two of these works (discussed below) assume that the main properties of PISNe, i.e., the explosion energy and the amount of ^56Ni production, depend solely on CO-core mass and not on metallicity. <cit.> and <cit.> calculated hydrodynamics and nucleosynthesis for metal-enriched models (Z ≳ 0.001) during the exploding phase, but only two mass models are shown for each metal-enriched case. Observationally it is not clear if there has been any evidence for the existence of PISNe. Since the nucleosynthetic pattern is quite different from core collapse SNe, if an extremely metal poor star was formed from the gas of PISN ejecta, it should not be difficult to identify <cit.>. Recently, a single star with the signature of PISN enrichment was reported <cit.>, but no other metal poor stars with evidence of PISN enrichment have been found, although there is a suggestion that PISN abundance patterns might be hidden in more metal rich stars <cit.>. A PISN can also be observed as a nearby supernova since in principle PISNe are possible for Z ≲ Z_⊙/3 <cit.>. The first suggestion for such a discovery was reported as a Type I super luminous supernova (SLSN-I), SN2007bi <cit.>. However, spectroscopic models of PISNe were incompatible with the observations both in the photospheric <cit.> and nebular <cit.> phases. 
Since then, magnetar models have more frequently been considered to explain SLSNe-I, and PISNe models are rarely taken seriously. Nevertheless there remain some SLSNe-I, such as PS1-14bj, PTF10nmn, OGLE14, and SN2020wnt, which cannot be well explained by the magnetar models and these could well be PISNe. If these candidates are PISNe, the ejected ^56Ni mass falls in the range of ∼ 0.5 - 10M_⊙ <cit.>. Very recently, there was a report that SN2018ibb might be the best PISN candidate discovered thus far <cit.>. SN2018ibb is a Hydrogen poor super-luminous SN at z=0.166, and if it is a PISN, the expected ^56Ni mass is more than 30. <cit.> also mention that there is a signature of interaction with oxygen-rich CSM. As discussed e.g., in <cit.>, if a nearby PISN with metallicity Z=0.004 ejects more than a few solar masses ^56Ni, the initial mass should be more than 500M_⊙ for standard mass loss rates and this poses a problem, as these massive stars should be exceedingly rare. This potential problem can be remedied if we consider fast rotating stars which undergo quasi chemically homogeneous evolution (CHE) <cit.>. The CHE model was proposed for the progenitors of GRBs since CHE stars tend to have much more massive and faster rotating CO-cores. Because of this property, PISNe occur for metal-enriched CHE progenitors with a much lower mass range as shown in <cit.> and <cit.>. In this paper we revisit these "CHE-PISN" models to find more precise properties of these PISNe, such as mass range, ^56Ni mass and mass loss histories. This is motivated by the paucity of models in the existing literature. Although many groups are currently working on rotating stellar models which can reproduce similar results for CHE and GRB progenitors, it is not necessarily the case that these results reflect nature. This is because angular momentum transfer is not well understood and some observations, such as the rotational velocities of red giant cores, cannot be explained in the conventional formalism. Therefore, in this paper, our stance is as follows. We use a similar formalism and parameter settings as other groups such that we can produce similar GRB progenitors, i.e., fast rotating massive CO-cores for low metallicity. Although there is no proof that these are the GRB progenitors, this is one of the most promising models. Our CHE-PISN models shown here are obtained by simply extending the GRB progenitor models to higher masses. Therefore we would say that if our CHE-PISNe models fit the observations, CHE-GRB progenitor models can be supported and vice versa. The remainder of the paper is organized as follows. In Section <ref>, we summarize our numerical methods, including initial rotational profiles (<ref>), mass loss induced by rapid rotation (<ref>), angular momentum transfer (<ref>), and finally the hydrodynamical calculations of the PISNe (<ref>). Section <ref> shows mass loss histories (<ref>), stellar evolution of CHE models (<ref>), hydrodynamics (<ref>), nucleosynthetic post processing (<ref>) and comparison to observation (<ref>). We then discuss our results in the context of previous work (Sec. <ref>) before concluding in Sec. <ref>. § METHODS In this paper we calculate evolution of sub-solar metallicity stars in the PISN mass range for two metallicity cases: LMC metallicity (Z = Z_⊙ / 3) and SMC metallicity (Z = Z_⊙ / 5). Here, Z_⊙ is the solar metallicity and we adopt Z_⊙ = 0.0141. For this study, we use the HOSHI (HOngo Stellar Hydrodynamics Investigator) code described e.g. in <cit.> and <cit.>. 
for the stellar evolution. We follow the nuclear burning using a nuclear reaction network of 153 species of nuclei <cit.>. Nuclear reaction rates are taken from the JINA reaclib database v1 <cit.>, except for the ^12C(α,γ)^16O rate which is taken to be 1.5 times the value given in <cit.>. In this paper we adopt the overshooting parameter f_ ov=0.01, which is called the `M' model in <cit.>. Then we use the hydrodynamical code described in <cit.> and <cit.> for the collapse and PISN explosions after the model approaches the electron-positron pair-creation instability region. Calculation methods for stellar evolution are mostly the same as in our previous work, but there are differences in the setting of initial models, and the treatments in the rotation-induced mass loss rates. Previously we started stellar evolution from a zero age main sequence (ZAMS) model and set the initial rotation speed for that ZAMS model. However, the definition of ZAMS is somewhat uncertain and it is thus difficult to compare with other works which may adopt a slightly different definition for ZAMS. Usually this small difference does not matter, but in this work initial rotation speed is critical, so we adopt a more concrete definition. §.§ Initial rotation We now describe the determination of initial rotation in this work. First we construct an expanded pre-mainsequence model which has central temperature below log T_ c < 6.0 (K). Then we begin the stellar evolution calculation, and when the central temperature reaches log T_ c = 6.0 (K), we set the initial rotational velocity. We assume rigid rotation initially and the ratio of the surface velocity, v_ i, to the Keplerian velocity, v_ K≡√(GM/R), is taken as a parameter. As shown below v_ i/v_ K = 0.1 corresponds to fast rotation and we use the value 0.1 for the fiducial fast-rotating case. Figure <ref> shows typical evolution for the ratio of the surface velocity to the Keplerian velocity, v_ rot/v_ k. The three lines are for initial masses M= 120, 150 and 180 M_⊙ with LMC metallicity and v_ i/v_ k = 0.1. As shown in this figure, v_ rot/v_ k increases with the initial stellar shrinkage, and has a local minimum around log T_ c∼ 6.6 (K). Then this ratio increases as the model moves towards ZAMS, at which point the ratio becomes maximal. Since the maximal ratio is roughly 0.35 and above, we may say that theses models correspond to "fast-rotation". In Table <ref>, we summarize the models we show in this paper and the minimal v_ rot/v_ k around log T_ c∼ 6.6 (K), and maximal v_ rot/v_ K near ZAMS, which we denote by (v_ rot/v_ k)_ min and (v_ rot/v_ k)_ max, respectively. From this table we see a good correlation between (v_ rot/v_ k)_ min and (v_ rot/v_ k)_ max. This suggests that if we had set the rotation speed at the local minimum, we would have obtained a more homogeneous set. We do not attempt this in the current paper as we believe it is more instructive if there are some variations in (v_ rot/v_ k)_ max in order to understand the effect of small variations in the initial rotation velocity. §.§ Rotation induced mass loss For the non-rotating cases we use the same mass loss rate as our previous studies as described e.g., in <cit.>. Namely, we adopt <cit.> as the mass loss rate of a main sequence star where the effective temperature is higher than log T_ eff= 4.05 and the surface Hydrogen mass fraction is higher than or equal to 0.3. 
The mass loss rate of <cit.> is adopted for Wolf-Rayet stars where the effective temperature is higher than log T_ eff = 4.05 and the surface Hydrogen mass fraction is lower than 0.3. When the surface temperature is lower than log T_ eff = 3.90, we adopt <cit.> as the red supergiant mass loss rate. We also set the lower bound for the mass loss rate to be 10^-14 M_⊙ yr^-1 as in our previous work, even when we say that a model has no mass-loss. For fast rotation, mass loss is enhanced due to the nearly-critical rotation at the surface (the Ω - Γ limit, ). According to <cit.>, the enhanced mass loss rate is calculated as Ṁ= - min[|Ṁ (v_ rot=0)|×( 1-v_ rot/v_ crit)^-0.43, 0.3 M/τ_ KH], where v_ rot, v_ crit≡√(GM(1-L/L_ Edd)/R), τ_ KH, L_ edd are the surface rotation velocity, the critical rotation velocity, the Kelvin-Helmholtz timescale, and the Eddington luminosity, respectively. There is an ambiguity in the definition of the Kelvin-Helmholtz timescale ∼ GM^2/RL, so we introduce a parameter of O(1), f_ KH, such that τ_ KH = f_ KH GM^2/RL. In this paper we adopt f_ KH=0.5 <cit.>. As shown above, we set an upper limit on the magnitude of the mass loss rate and refer to it as Ṁ_ up≡ 0.3M/τ_ KH. We refer to the magnitude of the non-critical rate as Ṁ_ rot≡ |Ṁ (v_ rot=0)|× ( 1- v_ rot/v_ crit)^-0.43. In general, the rotation induced mass loss rate increases when a star is shrinking, because both the v_ rot and L tend to increase. The reader might think that in these stages, Ṁ_ rot increases gradually and when it reaches Ṁ_ up, we replace it with the upper limit. However, this is not the typical realization. Since we cannot adopt very short time steps to follow the star over evolutionary timescales, in a realistic calculation, the ratio v_ rot/v_ crit often approaches and then exceeds 1. We use Ṁ_ up if v_ rot/v_ crit exceeds 1. In the usual calculations, Ṁ_ rot≪Ṁ_ up even if v_ rot/v_ crit is very close to 1. Therefore, in practice we use Ṁ_ rot for v_ rot/v_ crit < 1 and use Ṁ_ up for v_ rot/v_ crit≥ 1. This inclusion of the upper limit for mass-loss rate is likely the biggest difference with recent similar works (for lower mass ranges) by <cit.>, where they instead set the upper limit to the value v_ rot/v_ crit to be 0.98. In the results section (Figure <ref>), we will show the actual variation of the mass loss rate according to this procedure. §.§ Angular momentum transfer In the HOSHI code, the diffusion approximation is applied for the transportation of angular momentum, similar to the codes in <cit.> which were used to calculate progenitor models of long gamma-ray bursts (GRBs). As in these works, we assume a magnetic model and the Tayler-Spruit dynamo (TS dynamo, ), is applied. With the TS dynamo, angular momentum is transferred efficiently when differential rotation exists. Because of this effect, stellar cores after He burning stages are relatively slowly rotating, and it is difficult to produce GRB progenitors using magnetic models (Heger, Langer and Woosley 2000). <cit.> proposed a solution of this problem. If we consider an initially rapidly rotating star, the evolution through the H and He burning stages are roughly chemically homogeneous due to rotational matter mixing, which translates to almost rigid rotation throughout the star. Then the rapid angular momentum transfer from the core to surface by the TS dynamo can be avoided and relatively fast rotating cores can be kept. 
We have confirmed that these previous findings on CHE can be reproduced in our code, and we apply the same method to a higher mass range, the PISN mass range, to obtain the predictions of these models. We are aware that there are some tensions between the predictions of stellar evolution models with the TS dynamo and the core rotation speeds of red giants (e.g., <cit.> and references therein). In this paper, however, we are more interested in the CHE stars, for which the angular momentum transfer by the TS dynamo is thought to be inefficient. Therefore most of our results will apply even if the TS dynamo model is modified in the future. §.§ Hydrodynamical and nucleosynthesis calculation In this paper, we aim to calculate the explosion and nucleosynthesis of pair-instability supernovae. Such calculations were done in <cit.> for Pop III PISNe, and we use basically the same method. We calculate the stellar evolution with the HOSHI code until the central temperature reaches around log T_ c = 9.2, and then we switch to the hydrodynamical code. The HOSHI code includes the acceleration term in the equation of motion and could in principle calculate hydrodynamics. However, since it is a Henyey-type stellar evolution code, energy conservation is not as good as in a hydrodynamical code, and it is not suitable for PISN simulations. This is the reason we switch to a hydrodynamical code. The hydrodynamics code is a one-dimensional Lagrangian general relativistic hydrodynamics code <cit.>. The code includes energy changes due to nuclear reactions and neutrino cooling, and we use the same 153-isotope network as in HOSHI. In this paper, we use 255 radial meshes, and the HOSHI models are mapped to this grid using the same procedure as in previous works <cit.>, while the numerical settings are identical to those used in <cit.>. After shock breakout, the timesteps become large and we terminate the simulation. For one LMC model (100 M_⊙), a pulsation occurs instead of an explosion, and for this model we terminate the calculation after shock breakout, as we are not primarily interested in pulsations in this paper. After the hydrodynamics calculation is completed, we post-process the nucleosynthesis with a network consisting of 300 nuclei, as in <cit.>. The ^56Ni masses reported in this paper are from the post-processing, and are slightly lower than the values obtained from the hydrodynamics. After post-processing the hydrodynamics, additional post-processing is carried out with fixed temperature and density until the composition is fully decayed. The reported elemental yields use the abundances at the end of this additional post-processing. § RESULTS §.§ Mass loss histories First, using 160 M_⊙ models with SMC metallicity, we compare the evolution of the non-rotating (v_ i/v_ K =0) and the fast-rotating (v_ i/v_ K =0.1) cases. We define the He core as the region inside the mass coordinate with X(^1 H)>0.01 and the CO core as the region inside the mass coordinate with X(^4 He)>X(^16 O). As shown in Table <ref>, the final mass is larger for the non-rotating case, but the core masses are larger for the fast-rotating case. This suggests that the fast-rotating models undergo CHE. In Figures <ref> and <ref>, we compare the mass loss histories of these models. Figure <ref> shows the total mass as a function of central temperature. Both models lose significant amounts of mass during the H-burning stage. After the He burning stage, which starts at around log T_ c = 8.2 (K), the non-rotating models do not lose much mass.
On the other hand, the fast-rotating models continue to lose mass after the end of H-burning. Figure <ref> shows the mass loss rates plotted against central temperature. Orange dots are for the non-rotating case and blue dots connected by dashed lines are for the fast-rotating case. In the non-rotating case, the mass loss rate is roughly constant until the end of He burning. At the end of He-burning, the rate jumps for a short period as the star shrinks. However, this jump ends quickly, and the total amount of mass lost during this period is not large. For the fast-rotating case, the mass loss rate shows violent variation after the late H-burning stage. This is because the star is shrinking at this stage and v_ rot/v_ crit tends to increase. Since Ṁ_ rot is not large enough to stop the increase of v_ rot/v_ crit, the ratio eventually exceeds one, and we replace the rate by Ṁ_ up at this point (Ṁ_ rot and Ṁ_ up are defined in Sec. <ref>). Until the end of the He burning stage, Ṁ_ up is large enough to stop the increase of v_ rot/v_ crit, and thus v_ rot quickly becomes sub-critical and the mass loss becomes small until v_ rot exceeds the critical value once again. In Figure <ref> we show v_ rot/v_ crit and v_ rot/v_ k against the same horizontal axis for the v_ i/v_ k =0.1 model. As shown there, although v_ rot/v_ crit reaches 1, the ratio to the Keplerian velocity stays below 0.4, showing the importance of the L/L_ Edd ratio in determining the critical value. This variation of the mass loss rate is not seen in <cit.>, since they set an upper limit on the value of v_ rot/v_ crit. This difference during the He burning stage is unlikely to impact the final results, because the averaged mass loss rate shown below is not large in this stage. However, we posit that this difference, if present, may be significant for the mass loss rate after the end of the He-burning stage, where the averaged rate is set by Ṁ_ up or may exceed it. Since the actual mass loss rate in Figure <ref> varies rapidly, we show in Figure <ref> the averaged rate calculated as follows. First we calculate the cumulative mass loss, M_ loss^ cum, shown by the blue line, from the time when the central temperature is T_ c to the time when the central temperature reaches 10^9.4 (K) (at the right edge of the figure). The averaged mass loss rate, shown by the green line, is then calculated as the cumulative mass lost divided by the remaining lifetime of the star, τ. Here we define τ(T_ c) as the time interval between a central temperature of T_ c and a central temperature of 10^9.4 (K). For comparison, we also show Ṁ_ up with the orange line. This figure shows that the averaged mass loss rate is 3 or 4 orders of magnitude smaller than Ṁ_ up until the end of He burning. At this time, the rate increases and converges to Ṁ_ up for log T_ c (K) ≳ 9 (after the late C-burning stage). We note that in this late stage, v_ rot/v_ crit > 1 and is increasing. This suggests that we may need to increase the mass loss rate beyond Ṁ_ up in this stage. In this specific example, however, the lifetime after log T_ c∼ 9.2 is very short (see the later section for the hydrodynamical results) because of the pair instability, thus we do not try to remedy this problem as long as we can continue the calculation. One exception is the v_ i/ v_ k=0.2 model introduced in Subsection <ref>. For this model, v_ rot/v_ crit exceeds one earlier and maintains larger values than in the v_ i/ v_ k=0.1 case.
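As an illustration of the averaging defined above, the following minimal sketch (our own notation, not the production code) computes the averaged rate from a tabulated mass-loss history by dividing the mass still to be lost after each epoch by the remaining lifetime up to the endpoint of the calculation.

```python
import numpy as np

def averaged_mass_loss_rate(t, mass, t_end=None):
    """t: stellar age [yr] (increasing); mass: total mass [Msun] at each age.
    Returns tau (remaining lifetime to the last tabulated epoch, e.g. where
    log T_c = 9.4) and the averaged rate <Mdot> = M_loss^cum / tau [Msun/yr]."""
    t = np.asarray(t, dtype=float)
    mass = np.asarray(mass, dtype=float)
    t_end = t[-1] if t_end is None else t_end
    tau = t_end - t                      # remaining lifetime at each epoch
    m_loss_cum = mass - mass[-1]         # mass still to be lost after each epoch
    with np.errstate(divide="ignore", invalid="ignore"):
        mdot_avg = np.where(tau > 0, m_loss_cum / tau, np.nan)
    return tau, mdot_avg
```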
Because of this behaviour of v_ rot/v_ crit, in Subsection <ref> we show a single example, the 155B model, for which Ṁ is allowed to exceed Ṁ_ up. Figure <ref> shows the same averaged mass loss rate with the horizontal axis replaced by the remaining lifetime of the star, τ. In order to see the effects of mass and metallicity on the averaged mass loss rates, we also show two other v_ i/ v_ k=0.1 models in this figure, specifically the 110 M_⊙ SMC and 160 M_⊙ LMC models. The 110 M_⊙ model shows very similar evolution to the 160 M_⊙ SMC model, though Ṁ_ up is slightly smaller. In the SMC, v_ i/ v_ k=0.1 case, the mass loss histories of the 160 and 110 M_⊙ models are similar, as summarized in Table <ref>, but this is not always the case. In this table, the "final mass loss rate", Ṁ_ fin, defined as the mass loss rate at log T_ c = 9.2, is given in units of M_⊙ yr^-1. From this table we find that the 110, 140, 150 and 160 M_⊙ models have large mass loss rates at the end of the evolution, while the 120, 130 and 170 M_⊙ models do not. This difference can be understood as a consequence of the smaller initial velocities, as measured by (v_ rot/v_ k)_ max in Table <ref>. Since the mass loss at the end of the evolution is caused by the Ω - Γ limit, slight differences in the initial velocity can affect the final mass loss rate. In the PISN mass range, more massive and faster rotating models tend to have larger final mass loss rates, though there are exceptions to this trend. In Figure <ref>, a 160 M_⊙ LMC model is shown. Interestingly, this model has a small |Ṁ_ fin| even though its (v_ rot/v_ k)_ max is larger than that of the 160 M_⊙ SMC model. In general, a higher-metallicity model tends to reach a more slowly rotating final state because the amount of mass lost is larger. As shown in Table <ref>, a large |Ṁ_ fin| is not realized for the LMC metallicity models except for the 110 M_⊙ model. It is not clear why the 110 M_⊙ LMC model has a larger |Ṁ_ fin|, since the 100 M_⊙ LMC model has a small |Ṁ_ fin|. The 110 M_⊙ LMC model has a relatively large (v_ rot/v_ k)_ max, and this could be one reason why it has a large |Ṁ_ fin|. However, we do not try to extend the parameter study further, since the main purpose of this paper is to find general trends in these models. §.§ Chemically homogeneous evolution (CHE) In this subsection, we show how CHE affects the stellar evolution. In order to do so, we compare SMC metallicity 160 M_⊙ models with v_ i/ v_ k=0.1 and v_ i/ v_ k=0.03, which we refer to as model 0.1 and model 0.03, respectively. We show that model 0.1 undergoes CHE but model 0.03 does not. The effects of (quasi-) CHE have already been described in <cit.> for lower mass models, and we confirm that their arguments apply to our higher mass models. In Figures <ref> and <ref>, we show the distributions of angular velocity, M_ r vs Ω, and specific angular momentum, M_ r vs j, at several epochs for models 0.1 and 0.03. The five epochs shown are the times when the central H mass fraction is X = 0.5 and 0.1, when the central He mass fraction after the onset of central He burning is Y= 0.9 and 0.1, and near the end of the calculation (log T_ c = 9.4). We refer to these as epochs 1 through 5, and some properties at these epochs are summarized in Table <ref>. In the table, the total stellar mass M, central temperature T_ c, stellar radius R, surface abundance, and surface rotation velocity divided by the Keplerian velocity, v_ rot/v_ k, are shown at each epoch. In the column for the surface composition, the name of the most abundant element and its mass fraction are shown in parentheses. Abundance distributions for these epochs are shown in Figures <ref> and <ref>.
From Figures <ref> and <ref>, we find that the specific angular momentum in the core decreases with time though this is not the case for Ω. The bottom panels in Figures <ref> and <ref> also show a curve labeled j_ LSO, Kerr. This is the specific angular momentum needed to get into the last stable orbit around a maximally rotating Kerr-black hole of rest-mass equal to the mass coordinate <cit.>. This curve gives a rough measure of the amount of angular momentum necessary to be a GRB progenitor candidate in the collapsar scenario <cit.>. These figures show that, initially, both models 0.1 and 0.03 have angular momentum, j, larger than j_ LSO, Kerr. However, this is not true of the later evolution. For model 0.1, j is larger than j_ LSO, Kerr in the whole star at epoch 3, while in the model 0.03, j is already close to j_ LSO, Kerr at epoch 2. The reason for this difference can be seen in the figures of Ω. In the stellar interior up to epoch 3 in model 0.1, Ω is roughly constant while in model 0.03, Ω is smaller for M_r > 90 than the value in core. Therefore, differential rotation occurs, and angular momentum is transferred from core to surface by the ST-dynamo. Then, core angular momentum is lost efficiently, even with a small mass-loss rate. The reason for the smaller Ω in the envelope of model 0.03 can be understood as follows. Since the initial rotation speed is slower, rotational matter mixing is less effective in model 0.03. Evidence of this can be seen by comparing the top panels of Figures <ref> and <ref>. In model 0.1, H and He abundances in the core change continuously to the surface values. On the other hand, in model 0.03, there is a jump at around M_ r =130 between the abundance of core and envelope. This suggests that in model 0.1, matter mixing between core and envelope is effective, but not in model 0.03. Because of this mixing, the surface He abundance in model 0.1 is always larger than that of model 0.03. Indeed model 0.1 has a He-rich surface already in epoch 2, while model 0.03 is still H-rich. This is the reason why evolution similar to model 0.1 can be called (quasi-) CHE, though the abundance distribution is not really homogeneous, especially in the later stages. Since model 0.03 has a H-rich envelope and He-rich core, the Hydrogen envelope tends to expand due to shell H-burning, and this expansion causes the decrease of Ω in the envelope. As shown in these examples, a typical outcome of CHE is that the star becomes a He-star during the He-burning stage and angular momentum remains large during the He-burning stage. CHE-stars have larger core masses than the corresponding slower rotators, as shown in Table <ref> (for non-rotating and v_ i/ v_ k=0.1 models). Therefore, the PISN mass range, which is mostly determined by the CO core mass, is lower for CHE-stars. Another important property of CHE stars is the large late time mass-loss. This topic will be discussed in more detail in Subsection <ref>. There is a gap between ”this definition of CHE" and being a candidate for a GRB progenitor. Indeed, model 0.1 loses angular momentum after epoch 4 and the final core has much smaller j than j_ LSO, Kerr. In all the models with v_ i/ v_ k=0.1 which we have tested within and beyond PISN mass range, we have had final cores rotating slower than j_ LSO, Kerr. On the other hand, for lower mass models, we could rather easily produce fast rotating GRB candidate progenitors with v_ i/ v_ k=0.1. 
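To make the j_ LSO, Kerr criterion used in this comparison concrete, the sketch below evaluates the specific angular momentum of the last stable prograde orbit around a maximally rotating Kerr black hole of rest mass equal to the mass coordinate, j_ LSO, Kerr = 2GM_r/(√3 c) (the standard Bardeen-Press-Teukolsky expression), and flags the mass shells whose specific angular momentum exceeds it. The array names and the placeholder j(M_r) profile are ours, used purely for illustration.

```python
import numpy as np

G, C, M_SUN = 6.674e-8, 2.998e10, 1.989e33  # cgs

def j_lso_kerr(m_r):
    """Specific angular momentum [cm^2/s] of the last stable (prograde) orbit
    around a maximally rotating Kerr BH of rest mass m_r [g]."""
    return 2.0 * G * m_r / (np.sqrt(3.0) * C)

# Illustrative check: which shells keep enough angular momentum to form a disk?
m_r = np.linspace(1.0, 118.0, 200) * M_SUN          # enclosed mass coordinate
j_shell = 2.0e17 * (m_r / m_r[-1]) ** (2.0 / 3.0)   # placeholder j(M_r) profile
can_form_disk = j_shell > j_lso_kerr(m_r)
```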
These results suggest that CHE-PISNe should be rarer than GRBs if CHE is the dominant channel for producing GRBs. §.§ Hydrodynamical calculations and PISN explosions For most models, when the central temperature reaches 10^9.2 K, we switch to the hydrodynamics code. The exceptions are the v_i/v_k=0.2 models. After this switch, all models collapse immediately, reaching their peak temperatures within 200 to 700 seconds. After this, the velocity reverses and a shock forms, which propagates from the center of the star to the surface in 10-100 seconds. The maximum temperatures and explosion energies (Table <ref>) fall in roughly the same range as in the Pop III case <cit.>. However, the CO core-mass range for explosion is lower than in the Pop III case, both at the high end (M_ CO core∼ 115 M_⊙) and the low end (M_ CO core∼ 70 M_⊙). Note that models which fail to explode because they pulse or collapse to a black hole are denoted by "PPISN" and "Collapse", respectively, in the final three columns of Table <ref>. Furthermore, the ^56Ni mass is higher than for comparable CO core masses in the Pop III case. Both of these facts may follow from the presence of seed metals not present in the Pop III case (Subsection <ref>), although it is hard to say for certain (Section <ref>). The ^56Ni mass of the M = 170 M_⊙, LMC, v_i/v_k=0.1 model is smaller than that of lower mass models (15 ), but this is simply due to the small value of (v_ rot/v_ k)_ min. We computed an additional model for this mass and metallicity with slightly faster rotation (v_i/v_k=0.17^*, Table <ref>), and this model produces the expected amount of ^56Ni. §.§ Nucleosynthesis during PISN explosions After performing the hydrodynamical simulations with 153 isotopes, we post-process the trajectories of those simulations with a larger network (300 isotopes). As the temperature increases, isotopes above the iron group begin to photodisintegrate. Note that this process does not occur in the Pop III case, as there are no heavy isotopes. By log T ≈ 9.5, the composition has moved away from stability towards the proton-rich side (p side), forming a continuous distribution in the neutron-proton plane. As the temperature increases further, the composition shifts to higher mass and towards the neutron-rich side, so that, at maximum temperature (log T ≈ 9.8 in the extreme case), the composition spans nearly the entire 300-isotope network. Once the temperature and density decrease, the distribution retreats to the p side, before eventually decaying back to stability. Figure <ref> shows the elemental yields for all models (grouped as in Table <ref>) and Table <ref> shows the isotopic yields for the M =170 M_⊙, Z = SMC, v_i/v_k=0.1 model. As in the Pop III case, the main signature of the PISNe is the sawtooth pattern derived from the preference for even elements <cit.>. This sawtooth pattern extends to heavier elements for more massive progenitors (Figure <ref>), which reach higher peak temperatures (Table <ref>). §.§ Late time mass loss and interaction with SN ejecta In Subsection <ref> we described the full mass loss histories of our CHE-PISN models. Specifically, we showed that in some models the late-time mass loss rate, which is roughly determined by Ṁ_ up, is very large, > 10^-3 M_⊙ yr^-1, though this large mass loss rate lasts for only a few years before the explosion. In this subsection, we explore this phenomenon in more detail. The late-time mass loss from rapidly rotating massive stars was recently discussed in <cit.>.
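Since these late-time rates act only during the last few years before the explosion, a rough estimate of the resulting circumstellar material (CSM), of the kind relevant to the interaction discussion below, can be sketched as follows. This is our own back-of-the-envelope illustration, and the wind velocity is a free assumption rather than a value taken from our models.

```python
import numpy as np

SEC_PER_YR = 3.156e7

def csm_mass_and_radius(tau, mdot, v_wind_kms=20.0):
    """tau: remaining lifetime before explosion [yr]; mdot: |mass-loss rate|
    [Msun/yr] at each tau. Returns the total CSM mass [Msun] and the outer
    radius [cm] reached by material lost at the earliest tabulated epoch,
    assuming a constant wind velocity (v_wind_kms is a free parameter)."""
    tau = np.asarray(tau, dtype=float)
    mdot = np.asarray(mdot, dtype=float)
    order = np.argsort(tau)                    # integrate from smallest tau outward
    m_csm = np.trapz(mdot[order], tau[order])  # Msun lost over the tabulated interval
    r_out = v_wind_kms * 1.0e5 * tau.max() * SEC_PER_YR
    return m_csm, r_out
```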
The mass range considered in that work is different, however, so we do not make a direct comparison here. For the PISN progenitor models in <cit.>, no large late-time mass loss was found. However, <cit.> mention strong outbursts at the end of the stellar lives in the models of <cit.>, and these correspond to the mass loss increase at the end of H-burning often seen in rotating massive star models. Since the GENEVA code used in <cit.> uses a different formalism for angular momentum transfer than ours, and since they did not include the TS dynamo, it is difficult to identify the reason for the disagreement with our models. In Table <ref>, we summarize the late-time mass loss rates, surface Keplerian velocities, and final surface compositions for the 140-170 M_⊙ SMC, v_ i/ v_ k=0.1 models. We show the Keplerian velocities as a reference, since they are sometimes used to discuss the CSM structure formed by the lost mass. The surface composition is mostly unchanged for τ≲ 10 years and is the same as the final value shown in the table. Some v_ i/ v_ k=0.2 models are also shown in the table for the discussion in Subsection <ref>. Here we explain only the v_ i/ v_ k=0.1 models. As described in Subsection <ref>, the 140 to 160 M_⊙ models have large late-time mass loss, but the 170 M_⊙ model does not. For these large mass loss models, Ṁ typically increases as τ decreases, and a small amount of Hydrogen remains at the surface, although He is dominant. The Keplerian velocity is roughly unchanged for τ≤ 100 years, and the 140 to 160 M_⊙ models have larger v_ k than the 170 M_⊙ model, since the latter has a larger stellar radius. Although in this paper we do not attempt to calculate the interaction of the SN ejecta with the mass lost at late times, there is a possibility that such an interaction may be observable. For example, <cit.> used their late-time mass loss models to discuss the evolution of interaction luminosities. Among their models, the 39 M_⊙ model is closest to ours, since it is relatively massive and has a large late-time mass loss rate, Ṁ∼ 10^-3 - 10^-2 M_⊙/yr after log T_ c = 9. By assuming an SN explosion with an explosion energy of 4× 10^51 erg and a small wind velocity of 0.2 km/s, they showed that the interaction luminosity, which decreases relatively slowly, stays above roughly 5 × 10^42 erg/s up to 200 days after the explosion. This luminosity is much smaller than the peak luminosity of a typical PISN; however, the luminosity from ^56Ni and ^56Co decays declines exponentially, so the interaction luminosity may become dominant in late phases. Although the situation here is not exactly the same, and it is not clear whether the assumption of the slow wind velocity is reasonable, we consider it worthwhile to investigate the shock interaction in future work. § DISCUSSION First we compare our results with the Pop III PISN calculations in <cit.>. For the Pop III models, PISNe occurred in the CO core mass range M_ CO∼ 72 - 134 M_⊙, though the range depends on the models. From Table <ref>, the upper CO-core mass limit for metal-enriched PISNe seems to be lower, M_ CO∼ 120 M_⊙, though the lower M_ CO limit may be similar. There is also a difference in the relation between M_ CO and the ^56Ni mass or explosion energy. Our metal-enriched PISN models tend to have higher ^56Ni masses and explosion energies than the Pop III models. Without further investigation, we cannot say at this moment that these differences are metallicity effects, since some adopted parameters, including the ^12C(α,γ)^16O rate, are different from <cit.>.
Nevertheless, we can say that it is dangerous to simply assume that a metal-rich PISN is the same as a Pop III PISN with the same M_ CO. Next we compare with the PISN models in <cit.>, which calculated metal-enriched PISN progenitors. The PISN progenitor models of <cit.> are used in <cit.> and <cit.> to simulate PISN collapse, explosions and light curves. Though only a limited number of PISN progenitors are shown in <cit.>, we can still see some significant differences. They calculate models for SMC and LMC metallicities, as we do, and show the mass range for PISNe in each case. The PISN mass range for SMC metallicity shown in their Figure 18 is wider than ours. The main reason is that they assume that the upper end of the mass range is determined by M_ CO = 130 M_⊙. Their 200 M_⊙ SMC model is inside the PISN region while our model (200 M_⊙, SMC, v_ i/ v_ k=0.1) is not. Their results for LMC models are totally different from ours. In our results for the fast-rotating case, 110 to 170 M_⊙ LMC stars end up as PISNe, while in Yusof et al. (2013), stars in this mass range do not become PISNe and only a 500 M_⊙ star becomes a PISN. Although this appears to be a severe inconsistency, we suggest that the reason might simply be a difference in the definition of the LMC metallicity. We adopt LMC and SMC metallicities of Z = Z_⊙ / 3 = 0.0047 and Z = Z_⊙ / 5= 0.00282, respectively, while <cit.> adopted Z = 0.006 and 0.002, respectively. Their choice of the LMC metallicity might be above the critical metallicity for realizing CHE-PISNe. §.§ Type Ic PISN candidate SN2018ibb Very recently, <cit.> reported that SN2018ibb is the best PISN candidate to date. Though they only say that it is Hydrogen-poor, it is likely a Type Ic event, since they show evidence that this SN interacts with Oxygen-rich CSM. If it were a He-rich Type Ib SN, our SMC, v_ i/ v_ k=0.1 models around 150-160 M_⊙ might fit the observations well, specifically the large inferred ^56Ni mass and the evidence of CSM interaction, since they have large mass-loss rates at the end of their evolution. However, if it is a Type Ic, we may need to modify our models to explain the SN. A simple change would be to enhance the initial rotation velocity, since if the rotational mass loss is enhanced at some stages, the star may become a CO star. Indeed we find some examples of Type Ic CHE-PISNe by enhancing the initial rotation velocities. Here we show two such examples, namely 150 and 155 M_⊙ SMC models with v_ i/ v_ k=0.2. We have added the results of these models to Tables <ref> and <ref>. As shown in Table <ref>, the 160 M_⊙, v_ i/ v_ k=0.2 model collapses, indicating that the 155 M_⊙ model is close to the upper mass limit for this case. Unlike the v_ i/ v_ k=0.1 models, these 150 and 155 M_⊙, v_ i/ v_ k=0.2 models have a Ṁ which decreases with time, because Ṁ_ up slightly decreases. However, it is not clear that we should strictly enforce the upper limit Ṁ_ up, since in these models v_ rot/ v_ crit greatly exceeds one in the late stages. Therefore, we also show in Table <ref> an example (model 155B) for which we relax the upper limit on the mass loss rate. Model 155B is exactly the same as the 155 M_⊙ model until the end of the central He-burning stage. After that, the mass loss rate is doubled if v_ rot/ v_ crit > 1.05 in the previous stage. Eventually the ratio v_ rot/ v_ crit decreases; we stop the doubling once v_ rot/ v_ crit falls between 1 and 1.05, and we set Ṁ = Ṁ_ rot for v_ rot/ v_ crit < 1.
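A minimal sketch of this switching scheme for model 155B is given below. The function name is ours; the thresholds (1.0, 1.05) and the doubling factor come from the text, and the assumption that the rate is simply held once the ratio drops into the 1-1.05 band is our reading of "stop the doubling".

```python
def mdot_155B_step(mdot_prev, mdot_rot, vrot_over_vcrit_prev):
    """One step of the relaxed mass-loss prescription for model 155B.
    mdot_prev: (positive) rate magnitude adopted in the previous step;
    vrot_over_vcrit_prev: v_rot/v_crit taken from the previous stage."""
    if vrot_over_vcrit_prev > 1.05:   # far above critical: keep doubling
        return 2.0 * mdot_prev
    if vrot_over_vcrit_prev >= 1.0:   # between 1 and 1.05: stop doubling, hold the rate
        return mdot_prev
    return mdot_rot                   # sub-critical again: back to the standard rate
```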
In Table <ref>, we show the averaged mass loss rate, defined in Subsection <ref>, for model 155B, since Ṁ calculated in this way varies rapidly. As shown in Table <ref>, model 155B has a much larger Ṁ than the original 155 M_⊙ model, and Ṁ increases for smaller τ, as it does for the v_ i/ v_ k=0.1 models. In either case, the late-time mass loss rates are much larger than those of more slowly rotating models, and these rotating CHE-PISN models may explain the CSM interaction observed in SN2018ibb. Since calculations of the CSM interaction light curves are beyond the scope of this work, we will consider them in a separate paper. §.§ A Super-Kilonova progenitor candidate In the previous subsection, we introduced two v_ i/ v_ k=0.2 models which explode as (Type Ic) CHE-PISNe. In addition to these two models, we calculated a model with M=160 M_⊙ which collapsed instead of exploding, likely due to a slightly larger CO core mass (Table <ref>). Unlike the collapsing models with v_ i/ v_ k=0.1, the v_ i/ v_ k=0.2 model has enough angular momentum that the infalling material will form an accretion disk around the black hole (Fig. <ref>). The possibility of such a massive core forming an accretion disk qualifies it as a progenitor for a "Super-Kilonova" <cit.>, a possible transient powered by radioactive decays originating from collapsar outflows. <cit.> investigated this phenomenon using a semi-analytic model for the collapse of a massive star with a Helium core mass of 125 M_⊙, using a range of plausible rotational profiles. In comparison, the model presented here has an Oxygen core mass of 118 M_⊙ and does not have a Hydrogen or Helium envelope, due to extreme mass loss. Its rotational profile is also quite different, as it has a Keplerian fraction of only a few percent, except near the surface. In addition, this model does not display an obvious break radius inside of which rotation is suppressed. Despite these differences, the fact that the specific angular momentum exceeds j_ LSO,Kerr throughout the star indicates that disk formation is possible. We caution, however, that rotation may bear on the explosion outcome, as its inclusion in the hydrodynamics would likely lengthen the collapse timescale and possibly result in a PISN. We also reiterate the rarity of these events, as the progenitors require higher initial rotation rates than CHE-GRB progenitors. § CONCLUDING REMARKS In this paper we revisit metal-enriched PISN models which undergo CHE. Previously, only a few such models had been calculated in the literature, and the general properties of these models were not clear. By calculating several models, we intend to clarify, for example, the mass ranges, produced ^56Ni masses, and mass loss histories of CHE-PISN models for SMC and LMC metallicities. Since we would like to know precise values for the ^56Ni mass, for example, we carefully perform hydrodynamical simulations with a sufficiently large nuclear reaction network, as well as stellar evolution calculations. In contrast to other works, we start the calculation from a pre-main-sequence stage, because the definition of ZAMS is rather uncertain and it is thus dangerous to set the initial velocity at ZAMS. We have shown that, in the velocity ratio v_ rot/ v_ k, there is a good correlation between the local minimum around log T_ c = 6.6 (K) and the maximum near ZAMS, around log T_ c = 7.7 (K).
Therefore, we could produce a homogeneous set if we set the initial velocity at around log T_ c = 6.6 (K), though we do not do that in this paper, since it is more instructive to have some variation in rotation. We have shown that when the initial velocity is set to v_ i/ v_ k = 0.1, both the SMC and LMC models undergo CHE, and He-rich (Type Ib) PISNe occur in a lower mass range (M_ i∼ 110-170 M_⊙) than for slowly rotating models (e.g., M_ i∼ 150-240 M_⊙ for non-rotating SMC models). We find that bright PISNe with ^56Ni masses larger than 10 M_⊙ are possible in a relatively small mass range, M_ i∼ 140-170 M_⊙, for these CHE progenitors. Another notable difference between the CHE-PISNe and more slowly rotating PISNe is the large late-time mass loss rates. Since the rate for τ < 100 years is large (Ṁ > 10^-3 M_⊙/yr), CSM interaction may be observable for these PISNe. We also show some examples of O-rich (Type Ic) CHE-PISNe produced by v_ i/ v_ k = 0.2 models. These models, with their interaction with O-rich CSM, may explain the observed properties of the recently discovered PISN candidate SN2018ibb. We note that we have obtained these results by adopting mostly the same method as used for models of GRB progenitors undergoing (quasi-) CHE. Indeed, we have confirmed that we would obtain GRB progenitor models, i.e., rapidly rotating CO-star models, if we applied the same method to lower mass ranges. We are aware that the CHE model is very promising for GRB progenitors, but it is not proven that this model reflects nature. One concern is the uncertainty in angular momentum transfer, including the validity of the TS-dynamo theory. It is important, therefore, that the CHE-PISN models in this paper be confronted with observations, such as the existence (or the rate) of nearby PISNe, the ^56Ni mass, explosion energy, surface compositions and CSM interactions. We posit that if these CHE-PISNe are consistent with observations, it would support the CHE GRB-progenitor models, and vice versa. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. § ACKNOWLEDGEMENTS We thank K. Maeda for useful discussions. This study was supported in part by the Grant-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (JSPS, No. JP21H01123).
http://arxiv.org/abs/2307.01957v1
20230704232801
Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane
[ "Kun Han", "Shanlin Sun", "Xiaohui Xie" ]
cs.CV
[ "cs.CV" ]
Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane
Kun Han1, Shanlin Sun1, Xiaohui Xie1
1University of California, Irvine, USA {khan7,shanlins,xhx}@uci.edu
August 1, 2023
Figure: Shape Representation and Shape Generation. The left half presents the diffeomorphic deformation from the learned template (t=0) to instance shapes (t=1), with color highlighting the dense correspondence captured by triplane features. The right half presents the denoising process for shape generation. The shapes are generated as deformed templates, and the 3D deformation is controlled by the triplane features generated by diffusion.
Deep Implicit Functions (DIFs) have gained popularity in 3D computer vision due to their compactness and continuous representation capabilities. However, addressing dense correspondences and semantic relationships across DIF-encoded shapes remains a critical challenge, limiting their applications in texture transfer and shape analysis. Moreover, recent endeavors in 3D shape generation using DIFs often neglect correspondence and topology preservation. This paper presents HNDF (Hybrid Neural Diffeomorphic Flow), a method that implicitly learns the underlying representation and decomposes intricate dense correspondences into explicit, axis-aligned triplane features. To avoid suboptimal representations trapped in local minima, we propose hybrid supervision that captures both local and global correspondences. Unlike conventional approaches that directly generate new 3D shapes, we further explore the idea of shape generation with a deformed template shape via diffeomorphic flows, where the deformation is encoded by the generated triplane features. Leveraging a pre-existing 2D diffusion model, we produce high-quality and diverse 3D diffeomorphic flows through generated triplane features, ensuring topological consistency with the template shape. Extensive experiments on medical image organ segmentation datasets evaluate the effectiveness of HNDF in 3D shape representation and generation. § INTRODUCTION 3D geometry representation is critical for numerous computer vision tasks, including 3D model reconstruction, matching and manipulation. Deep implicit functions (DIFs) have emerged as promising alternatives to traditional representation methods such as voxel grids, point clouds and polygon meshes. DIFs offer several advantages such as compactness, continuity, and the ability to capture fine geometric details. They enable efficient computation while leveraging deep neural networks for end-to-end training, enhancing shape representation and understanding. However, despite the promising results in direct object modeling using DIFs, it is important to consider the common shape features and semantic correspondences shared among objects. Conventional DIFs face challenges in establishing correspondences between different shapes, limiting their applicability in domains like medical image segmentation <cit.> and texture transfer <cit.>. Previous methods <cit.> have proposed shape modeling as conditional deformations of a template DIF to address this limitation. However, these methods still have limitations, such as being topology-agnostic or lacking the capability to capture correspondences for local details. Recent research has also explored the integration of DIFs into 3D shape generation <cit.>.
Compared to point clouds and polygon meshes, DIF-based generation offers continuous representations with high quality and resolution. However, existing approaches primarily focus on direct shape generation without considering underlying point correspondence and topology preservation. To overcome these challenges, we introduce Hybrid Neural Diffeomorphic Flow (HNDF) for shape representation and generation. HNDF models shapes as conditional deformations of a template DIF, similar to previous work <cit.>. However, HNDF encodes diffeomorphic deformations into axis-aligned triplane features to enhance representation capability. Local deformations are controlled through interpolation of triplane features with a shared feature decoder. Nevertheless, the direct application of triplanes may lead to local optimization issues and defective deformations, resulting in inaccurate representations. To address this, we propose a hybrid supervision approach that considers both local and global correspondences, along with additional modifications and regularization to preserve the diffeomorphism property of the represented deformations. This combination of triplane feature exploration and supervision enables high representation capabilities and accurate dense correspondences. Unlike conventional 3D shape generation works which primarily focus on direct shape generation, we explore the idea of deformation-based shape generation, where the template shape is deformed based on newly generated diffeomorphic deformations. This approach ensures that the newly generated shapes maintain the same topology as the template shape, preserving topological consistency while offering a wide range of diverse shapes. To achieve this, we represent deformations using optimized per-object triplane features, which encode diffeomorphic deformations as three axis-aligned 2D feature planes. We concatenate the triplane features as multi-channel images and leverage the existing 2D diffusion models to generate new triplane features. By applying the new diffeomorphic deformations encoded in the triplane features, we deform the template shape to generate novel 3D shapes while preserving their topological characteristics. The contributions of this paper are as follows: * We propose HNDF, which leverages axis-aligned triplane features to provide high representation capability and capture dense correspondences accurately. * We demonstrate that hybrid supervision and regularization are essential for ensuring correct deformation representation and preventing the representation from local optima. * Rather than directly generating 3D shapes, we explore the concept of shape generation through diffeomorphic deformations and provide a baseline method utilizing 2D diffusion model. The topology and correspondences are preserved in newly generated 3D shapes. § RELATED WORKS Deep Implicit Function Deep implicit functions, or neural fields, have enabled the parameterization of physical properties and dynamics through simple neural networks <cit.>. DeepSDF <cit.> serves as an auto-decoder model, commonly used as a baseline for shape representation <cit.>. NeRF <cit.> presents a novel approach for synthesizing photorealistic 3D scenes from 2D images. Occupancy Network <cit.> constructs solid meshes through the classification of 3D points, while Occupancy Flow <cit.> extends this idea to 4D with a continuous vector field in time and space. 
Recent trends incorporate locally conditioned representations <cit.>, utilizing small MLPs that are computationally and memory-efficient while capturing local details effectively. One such representation is the hybrid triplane <cit.>, which represents features on axis-aligned planes and aggregates them using a lightweight implicit feature decoder. In our work, we adopt the expressive triplane representation. However, instead of decoding the 3D object itself, we utilize triplane features to decode complex diffeomorphic deformations, allowing us to represent new 3D objects by deforming the template shape using the encoded deformation. Point Correspondence and Topology preservation Capturing dense correspondences between shapes remains a significant challenge and a critical area of interest in the 3D vision community. Various approaches have been proposed to address point correspondence, including template learning, elementary representation, and deformation field-based methods. Among them, mesh-based methods <cit.> face difficulties in handling topological changes, sensitivity to mesh connectivity, and challenges in capturing fine-grained details. Elementary-based methods <cit.>, on the other hand, may struggle with capturing high-level structural features due to the simplicity of the elements used. DIT <cit.> and NDF <cit.> exemplify deformation field-based methods, with DIT exhibiting smoother deformations using LSTM <cit.> and NDF employing NODE <cit.> for achieving diffeomorphic deformation. ImplicitAtlas <cit.> integrates multiple templates to improve the shape representation capacity at a negligible computational cost. In our work, we follow the NDF framework but enhance the representation's capacity to capture accurate correspondences by leveraging more powerful triplane representation. Experimental results highlight the importance of incorporating triplane features with hybrid supervision, which prevents local optimization issues, provides significantly more accurate correspondences, and ensures the preservation of topology. 3D Shape Generation Generative models, such as GANs, autoregressive models, score matching models, and denoising diffusion probabilistic models, have been extensively studied for 3D shape generation. However, GAN-based methods <cit.> still outperform alternative approaches. Voxel-based GANs <cit.>, for example, directly extend the use of CNN generators from 2D to 3D settings with high memory requirement and computational burden. In recent years, there has been a shift towards leveraging expressive 2D generator backbones, such as StyleGAN2 <cit.>. EG3D <cit.> combines a hybrid explicit-implicit triplane representation to improve computational efficiency while maintaining expressiveness. Get3D<cit.> incorporates the deformable tetrahedral grid for explicit surface extraction and triplane representation for differentiable rendering to generate textured 3D shapes. Compared to the existing GAN-based approaches for 3D generation, the development of 3D diffusion models is still in its early stages. Several notable works have explored the application of diffusion models in generating 3D shapes. PVD <cit.> proposed the use of a point-voxel representation combined with PVConv<cit.> to generate 3D shapes through diffusion. DPM <cit.> introduced a shape latent code to guide the Markov chain in the reverse diffusion process. MeshDiffusion <cit.> utilized the deformable tetrahedral grid parametrization for unconditionally generating 3D meshes. 
3D-LDM <cit.> integrated DeepSDF<cit.> into diffusion-based shape generation, leveraging diffusion to generate a global latent code and improve the conditioning of the neural field. NFD <cit.> extended the use of 2D diffusion into 3D shape generation, exploring the potential of diffusion models in capturing and generating complex 3D shapes with Occupancy Network <cit.>. While existing approaches in shape generation focus on directly generating 3D shapes, they often neglect the preservation of underlying topology. This oversight can lead to artifacts in the generated shapes and limit their applicability in scenarios where topology is important. In our work, we introduce a baseline diffusion-based method that deforms a template to generate new shape. The diffeomorphic deformation is encoded by the generated triplane features. Our approach focuses on producing visually coherent and realistic shapes while preserving point correspondence and underlying topology. § PRELIMINARIES Diffeomorphic Flow is a continuous and smooth mapping that transforms a given manifold or space while preserving its differentiable structure. In the context of 3D geometry, diffeomorphic flow plays a crucial role in establishing dense point correspondences between 3D shapes and ensuring the preservation of their underlying topology during deformation. Mathematically, the forward diffeomorphic flow Φ(p, t): ℝ^3 ×[0,1] →ℝ^3 describes the trajectory of a 3D point p over the interval [0,1], where the starting point p is located in the space of instance shape S and the destination point corresponds to the target shape T. The velocity field 𝐯(p, t): ℝ^3 ×[0,1] →ℝ^3 represents the derivative of deformation of 3D points. The diffeomorphic flow Φ is obtained by solving the initial value problem (IVP) of an ordinary differential equation (ODE), ∂Φ/∂ t(p, t)=𝐯(Φ(p, t), t) s.t. Φ(p, 0)=p Similarly, the inverse flow Ψ can be calculated by solving a corresponding ODE with negative velocity field -𝐯, allowing for the transformation from the template space to the instance space ∂Ψ/∂ t(p, t)=-𝐯(Ψ(p, t), t) s.t. Ψ(p, 0)=p where p is the starting point on the target shape. The property of topology preservation is achieved through the Lipschitz continuity of the velocity field. The forward and backward diffeomorphic deformation can be calculated by the integration of the velocity field by solving the equation <ref> <ref>, respectively. Diffusion Probabilistic Model (DPM)<cit.> is a parameterized Markov chain designed to learn the underlying data distribution p(X). During the Forward Diffusion Process (FDP), the diffused data point X_t is obtained at each time step t by sampling from the conditional distribution: q(X_t | X_t-1)=𝒩(X_t; √(1-β_t) X_t-1, β_t I) where X_0 is sampled from the initial distribution q(X_0), and X_T follows a Gaussian distribution N(X_T ; 0, I). The parameter β_t ∈(0,1) represents a variance schedule that gradually introduces Gaussian noise to the data. By defining α_t=1-β_t and α̅_t=∏_s=1^t(1-β_t), X_t can be sampled conditionally on X_0 as q(X_t | X_0)=𝒩(X_t ; √(α̅_̅t̅) X_0,(1-α̅_̅t̅) I), providing a distribution for sampling X_t from the initial data X_0. In contrast, the Reverse Diffusion Process aims to approximate the posterior distribution p(X_t-1 | X_t) to recreate a realistic X_0 starting from random noise X_T. 
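Before formulating the reverse process, the closed-form forward sampling q(X_t | X_0) given above can be written as a short sketch; the linear β schedule and tensor shapes are assumptions chosen for illustration, not the authors' settings.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # assumed linear variance schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

def q_sample(x0, t, noise=None):
    """Sample X_t ~ q(X_t | X_0) = N(sqrt(abar_t) X_0, (1 - abar_t) I).
    x0: (B, C, L, L) concatenated triplane features; t: (B,) integer timesteps."""
    noise = torch.randn_like(x0) if noise is None else noise
    abar = alphas_bar[t].view(-1, 1, 1, 1)
    return abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise
```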
The Reverse Diffusion Process is formulated as a trajectory of posterior distributions starting from X_T: p(X_0: T)=p(X_T) ∏_t=1^T p_θ(X_t-1| X_t) The conditional distribution p_θ(X_t-1 | X_t) is approximated by a neural network with parameters θ: p_θ(X_t-1| X_t)=𝒩(X_t ; μ_θ(X_t, t), Σ_θ(X_t, t)) § METHOD In this section, we present our Hybrid Neural Diffeomorphic Flow (HNDF) for shape representation and generation. Section <ref> reviews our baseline method <cit.>. In Section <ref>, we introduce the utilization of triplane features, and the hybrid supervision for capturing local and global correspondences. Finally, in Section <ref>, we describe our proposed method for generating topology-preserving shapes. §.§ Review of NDF NDF <cit.>, similar to DeepSDF<cit.>, represents a 3D shape S_i using a continuous signed distance field (SDF) ℱ. Given a random 3D point p and a one-dimensional latent code c_i of length k, ℱ outputs the distance from the point p to the closest surface of shape S_i. However, unlike DeepSDF, which directly represents 3D shapes, NDF uses a deform code c_i to control the deformation of each instance shape from the template shape. As a result, the conditional continuous SDF ℱ can be decomposed into 𝒯∘𝒟, where 𝒟: ℝ^3 ×ℝ^k ↦ℝ^3 provides the deformation mapping from the coordinates of p in the instance space of S_i to a canonical position p' in the template space. The function 𝒯 represents a single shape DeepSDF that models the implicit template shape. §.§ Hybrid Shape Representation via Triplane As shown in <cit.>, previous methods <cit.> utilizing a single latent vector to control the entire shape or deformation space could not be able to capture the details of the complex 3D shape or the deformation. Motivated by recent advancements in hybrid representation <cit.>, we propose to encode complex diffeomorphic deformations as a set of three axis-aligned 2D feature planes, as shown in Fig. <ref>. This enables us to capture fine-grained details and variations in the shape space more effectively. The triplane representation is a hybrid architecture for neural fields that combines explicit and implicit components <cit.>. For each instance shape S_i, it employs three axis-aligned orthogonal feature planes (X_i = [F_x y^i, F_x z^i, F_y z^i]), each with a resolution of L × L × C. These planes serve as the encoded representations of the deformation. To query a deformation, the position of given point p_i is projected onto each of the feature planes, and the corresponding feature vectors are retrieved using bilinear interpolation. Subsequently, a lightweight multilayer perceptron (MLP) decoder is employed to interpret the aggregated features as corresponding velocity vector v_i. The diffeomorphic deformation d_i for point p_i can be calculated by integrating the velocity vector using an explicit Runge-Kutta solver <cit.>, as defined in Eq. <ref>. In contrast to the approach in <cit.>, where feature aggregation is performed through summation, we have found that concatenating the interpolated features from the triplane yields better results. §.§.§ Training In our method, we represent the instance shape S_i as a deformed template shape (𝒯∘𝒟_i). To capture the continuous shape of S_i, we employ two modules: a continuous diffeomorphic deformation module 𝒟 and a template shape representation 𝒯. As discussed in Sec. <ref>, the diffeomorphic deformation d_i of a point p_i is obtained by integrating the velocity field. 
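As a concrete illustration of the query path just described (projection of a point onto the three planes, bilinear interpolation, concatenation, and a small MLP that outputs a velocity which is then integrated), a minimal PyTorch-style sketch might look as follows. Tensor shapes and function names are our own assumptions, not the authors' implementation, and a stationary velocity field with simple Euler steps is used for brevity, whereas the paper integrates a (possibly time-dependent) field with an explicit Runge-Kutta solver.

```python
import torch
import torch.nn.functional as F

def query_triplane(planes, points):
    """planes: (3, C, L, L) feature planes [xy, xz, yz]; points: (N, 3) in [-1, 1].
    Returns concatenated per-point features of shape (N, 3C)."""
    coords = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                        # (1, N, 1, 2) sampling grid
        f = F.grid_sample(plane[None], grid, mode="bilinear",
                          align_corners=True)              # (1, C, N, 1)
        feats.append(f[0, :, :, 0].t())                    # (N, C)
    return torch.cat(feats, dim=-1)

def deform(points, planes, velocity_mlp, n_steps=8):
    """Integrate the decoded velocity field from t=0 to t=1 (Euler steps)."""
    dt = 1.0 / n_steps
    x = points
    for _ in range(n_steps):
        v = velocity_mlp(query_triplane(planes, x))        # (N, 3) velocity
        x = x + dt * v
    return x                                               # points mapped to template space
```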
The signed distance field (SDF) value of p_i is determined by evaluating the implicit template shape module 𝒯 at the transformed point p_i', where p_i' = p_i + d_i. During training, our method jointly optimizes the deformation module 𝒟, template DeepSDF shape 𝒯, and per-object triplane features X_i to represent a training set of S objects. The triplane representation provides an expressive representation power, allowing us to achieve accurate deformation and correspondence. Unlike NDF <cit.>, which requires multiple deformation modules, our method only requires one deformation module. This not only enables more accurate deformation representation but also reduces the memory and computation requirements. The training objective function includes a reconstruction loss and a regularization loss: ℒ_train=ℒ_rec +λ_reg ℒ_reg where ℒ_rec shows the reconstruction loss between the ground truth SDF value s_i and the represented SDF value s_i', and ℒ_reg includes a series of regularization terms. Specifically, reconstruction loss ℒ_rec can be written as ℒ_rec = ∑_i=1^S ∑_j=1^N L_1 (𝒯∘𝒟_i(p_i,j), s_i, j) where S is the number of instance shapes in the training set, N is the number of sampling points for each shape, p_i,j is the j-th point on the i-th shape and s_i, j is the corresponding ground truth SDF value. In addition to the point-wise deformation regularization (∑_i,j𝒯∘𝒟_i(p_i,j) - s_i, j_2) and the L_2 norm feature regularization (F_x y^i_2+F_y z^i_2+F_x z^i_2), the inclusion of total variation (TV) regularization <cit.> is crucial for simplifying the triplane representation and ensuring smooth deformations. The overall regularization term in the training objective is defined as: ℒ_reg = λ_PW ℒ_PW + λ_L2 ℒ_L2 + λ_TV ℒ_TV §.§.§ Hybrid Supervision for Inference Time Reconstruction In contrast to previous methods <cit.> that utilize a single latent vector for shape reconstruction, the incorporation of triplane representation in our work introduces specific challenges when reconstructing new shapes. Specifically, during the optimization process, the features interpolated from the triplane representation for different positions p_i are optimized locally. Since the final diffeomorphic deformation is the integration of velocity vectors along the trajectory in the entire space, the optimized deformation can become trapped in local optima, leading to incorrect global correspondence, as shown in Fig. <ref>. As a consequence, the reconstructed shape and deformation may exhibit artifacts, and the overall correspondence may be compromised. Therefore, we introduce a hybrid supervision strategy that incorporates both global and local correspondence. In addition to randomly sampled points that provide local supervision, we downsample the entire N × N × N coordinate grid with predefined step size and include these regularly sampled points for global supervision during optimization. The reconstruction loss during inference is defined as: ℒ_rec = ℒ_rec^grid + λ_random ℒ_rec^random where λ_random is initialized as 0 and gets increased as the optimization continues. After we get the grid-structure deformation Φ, we utilize two additional regularization terms to ensure the diffeomorphism of the deformation field and maintain structural integrity. The first term, selective Jacobian determinant regularization (ℒ_Jdet), enforces local orientation consistency. 
ℒ_J d e t=1/N∑_p relu(-|J_Φ(p)|) where the Jacobian matrix J_Φ is defined as: J_Φ(p)=[ ∂Φ_x(p)/∂ x ∂Φ_x(p)/∂ y ∂Φ_x(p)/∂ z; ∂Φ_y(p)/∂ x ∂Φ_y(p)/∂ y ∂Φ_y(p)/∂ z; ∂Φ_z(p)/∂ x ∂Φ_z(p)/∂ y ∂Φ_z(p)/∂ z ] The second term, deformation regularization (ℒ_def), discourages excessively skewed deformations that may lead to unnatural shapes. ℒ_def= ∑_p∇Φ(p)^2 The combination of global and local supervision provides comprehensive guidance during optimization, enabling the model to capture both fine-grained details and global structural consistency. §.§.§ Point Correspondence and Shape Registration During inference, our method utilizes the learned template shape from training and the diffeomorphic deformation encoded by the triplane feature to establish point correspondence and shape registration between different instance shapes. For each point p_t on the template shape, we apply the inverse diffeomorphic flow Ψ, as defined in Eq. <ref>, to obtain the corresponding points p_i and p_j on instance shapes S_i and S_j respectively, based on their respective triplane features X_i and X_j. This process allows us to accurately capture point correspondence and establish registration between the instances, facilitating tasks such as shape comparison, shape synthesis, and texture transfer. §.§ Topology-preserving Shape Generation In this section, we present our proposed method for topology-preserving shape generation. Rather than directly generating shapes from scratch, our approach focuses on generating new shapes by deforming a template shape using synthesized diffeomorphic deformations. §.§.§ Training a Diffusion Model After the training of the diffeomorphic deformation module 𝒟 and the template shape representation 𝒯, as described in Section <ref>, we can leverage the hybrid supervision introduced in Section <ref> to obtain the corresponding per-shape triplane features for the dataset. These optimized sets of triplane features, denoted as X ∈ℝ^N × (L × L × 3C), will be utilized to train our generative model, where N denotes the number of shapes in the dataset, L is the dimension of triplane features and C is the number of channels for each 2D plane (F_x y^i, F_x z^i, F_y z^i). In our framework, the triplane feature is composed of three 2D plane features. We concatenate these feature planes and takes advantage of the strong generative capability of existing 2D diffusion models. Following Sec. <ref>, we train a diffusion model to learn the reverse diffusion process and predict the added noise from its noisy input by minimizing the following loss function: Loss(θ)= 𝔼_X_0 ∼ q(X), ϵ∼𝒩(0, I), t [ϵ-ϵ_θ(√(α̅_t) X_0+√(1-α̅_t)ϵ, t)^2] where ϵ_θ is predicted noise and θ represents the model parameters. §.§.§ New Shape Generation During the inference phase, the generation of a new shape involves deforming the template shape based on the diffeomorphic deformation encoded by the sampled triplane features. Following <cit.>, we initiate the process by sampling a random Gaussian noise X_T ∼𝒩(0, I) ∈ℝ^L × L × 3C. Subsequently, we perform iterative denoising for a total of T steps as: X_t-1=1/√(α_t)(X_t-1-α_t/√(1-α̅_t)ϵ_θ(X_t, t))+σ_t ϵ where ϵ∼𝒩(0, I) if t > 1, else, ϵ = 0. After sampling, the concatenated triplane feature is split into three axis-aligned 2D planes (F_xy^i, F_xz^i, F_yz^i). This generated triplane feature can be interpreted as the diffeomorphic deformation. By following the trajectory defined by the ODE function in Eq. 
<ref>, each point on the template shape is displaced towards its corresponding destination point in the instance space. Consequently, the new generated shape, known as the deformed template, retains the same underlying topology as the template shape, ensuring consistent connectivity. § EXPERIMENTS In this section, we present the experiments conducted to evaluate our proposed Hybrid Neural Diffeomorphic Flow (HNDF) for shape representation and generation tasks. Datasets: To assess the effectiveness of our shape representation, we utilize the same medical datasets as <cit.>: Pancreas CT <cit.> and Inhouse Liver <cit.>, as these datasets exhibit clear common topology while demonstrating shape variation, making them suitable for our evaluation. For shape generation evaluation, we employ the Abdomen1k dataset <cit.>, consisting of 573 valid liver data and 693 pancreas data after preprocessing and filtering. Please refer to the supplementary material for detailed data sources and preprocessing information. Shape Representation Evaluation: We evaluate HNDF for shape representation through two experiments. First, we demonstrate the expressive power of triplane representation and the importance of our hybrid supervision. Evaluation metrics include Chamfer distance (CD) and normal consistency (NC). Second, we evaluate point correspondence and shape registration accuracy, incorporating self-intersection (SI) as an additional metric for geometrical fidelity. Shape Generation Evaluation: For shape generation evaluation, following <cit.>, we adopt an adapted version of Frechet inception distance (FID). This metric considers rendered shading images of our generated meshes, taking human perception into account. As discussed in <cit.>, shading-image FID overcomes limitations of other mesh-based evaluation metrics. FID is computed across 20 views and averaged to obtain a final score FID=1/20[∑_i=1^20μ_g^i-μ_r^i^2+Tr(Σ_g^i+Σ_r^i-2(Σ_r^i Σ_g^i)^1/2)] Additionally, precision and recall scores are reported using the method proposed by <cit.>. Precision reflects the quality of the rendered images, while recall measures the diversity of the generative model. Baseline Methods We compare our proposed Hybrid Neural Diffeomorphic Flow (HNDF) with several baselines for the shape representation task. This includes DIT<cit.>, DIF-Net<cit.>, and NDF<cit.>, which share the same representation formula as ours, where the shape is represented as a deformed template. We also include AtlasNet<cit.>, which uses explicit mesh parameterization for shape reconstruction. Additionally, we compare with DeepSDF<cit.> and NFD <cit.>, which directly represent 3D shapes from scratch. For the shape generation task, we explore different sampling strategies and generative models. We compare against DeepSDF<cit.> and NDF<cit.>, which assume a Gaussian distribution for the global latent vector. We sample new shapes by randomly sampling global vectors from a Gaussian distribution or performing PCA analysis on optimized global latent vectors. We also compare with recent generative models such as point-cloud-based PVD<cit.>, and neural-field-based 3D-LDM <cit.> and NFD <cit.>. However, it's important to note that these models do not consider the preservation of underlying topology. §.§ Shape Representation We evaluate our shape representation through two evaluations: representation on training data and reconstruction on unseen data, following the setting of <cit.>. For each point p in the instance space, according to Eq. 
<ref>, we can get the corresponding destination point p' in the template space, and the trained template module will return the sign distance value for this point. After retrieving the sign distance value for all the grid points, we can then utilize the marching cube algorithm <cit.> to extract the mesh for each instance. In the representation comparison, we utilize the trained per-object latent feature to assess the effectiveness of different representation methods. In the reconstruction comparison, we independently optimize the per-object latent feature while keeping the network parameters fixed to evaluate the generability of the methods in shape reconstruction. Fig. <ref> shows the reconstruction results of different methods. According to Table <ref>, DIF-Net achieves the best results on the training data representation but worse results on the shape reconstruction tasks, indicating the overfitting on the training data. Our method and NFD achieve similar overall performance, benefiting from the enhanced representation power of the triplane feature. Comparing with NDF, our method achieves superior performance even with a single deformation module, outperforming NDF with 4 consecutive deformation modules. The ablation study conducted on regularization, as shown in Tab. <ref>, demonstrates the significance of our proposed hybrid supervision in achieving accurate reconstruction for new shapes reconstruction. §.§ Point Correspondence and Shape Registration As the methods DeepSDF and NFD can only represent the shape without capturing point correspondence, we compare the remaining methods in Table <ref> for shape registration evaluation and the instance shape is represented by deforming the template, as described in Sec. <ref>. Following the trajectory defined by the ODE function in Eq. <ref>, each point on the template shape moves towards the corresponding destination point on the instance space. As a result, the instance shape, defined as the deformed template, shares the same underlying topology as the template shape, ensuring consistent connectivity. The diffeomorphic deformation from the template towards instance shapes is shown in the left half of Fig. 1. To evaluate the point correspondence and shape registration results, we compare the deformed template with the corresponding ground truth instance shape. We also utilize self-intersection as a metric to assess the preservation of topology and geometric fidelity during the deformation. To ensure a fair comparison, we remesh the template meshes to have the same number of vertices (5000), following the approach in <cit.>. Based on the comparison presented in Table <ref>, our proposed method achieves better registration accuracy and correct dense correspondence, with only slight self-intersection, which can be considered negligible given the large number of vertices and faces in the template shape. §.§ Shape Generation Table <ref> presents the evaluation of shape generation across different methods. For DeepSDF and NDF, we sample global latent vectors from a Gaussian distribution and perform PCA analysis, where the parameters are determined by grid search. However, similar to the results in previous experiments, the shapes sampled from DeepSDF and NDF tend to be smoother compared to real instance shapes. PVD is capable of generating variable shapes, but it is limited by its nature to generate only coarse object shapes. 
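As a concrete reference for how new triplane features are drawn in the generation experiments above, the following is a minimal NumPy sketch of the iterative denoising update quoted earlier (the X_{t-1} equation). It is not the authors' implementation: `eps_model` is a hypothetical stand-in for the trained noise predictor ε_θ, and the schedule arrays (alphas, alpha_bars, sigmas) are assumed to follow a standard DDPM convention.

```python
import numpy as np

def sample_triplane(eps_model, T, alphas, alpha_bars, sigmas, L, C, seed=None):
    """Reverse-diffusion sampling of a concatenated triplane feature of shape (L, L, 3C)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((L, L, 3 * C))               # X_T ~ N(0, I)
    for t in range(T, 0, -1):
        eps_hat = eps_model(x, t)                         # predicted noise eps_theta(X_t, t)
        mean = (x - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 1 else 0.0
        x = mean + sigmas[t] * noise                      # X_{t-1}
    # Split the sampled feature back into the three axis-aligned planes.
    F_xy, F_xz, F_yz = np.split(x, 3, axis=-1)
    return F_xy, F_xz, F_yz
```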
3D-LDM attempts to capture the distribution of the global latent vectors of DeepSDF, but still suffers from the smoothing issue inherent to a global latent vector. NFD can also generate variable shapes. However, compared to our method, the shapes generated by NFD may not preserve topology, resulting in potentially separated components in the generated shapes, as shown in Fig. <ref>. In contrast, our method focuses on generating diffeomorphic deformations encoded by triplane features. The new shapes are generated by deforming the template, allowing us to achieve high fidelity and variability while preserving the underlying topology. §.§ Ablation Study Supervision Table <ref> highlights the significance of our global supervision in shape reconstruction, mitigating the risk of local minima. While incorporating additional mesh supervision improved the results marginally, it also increased computational and memory demands. Thus, we opted to utilize global supervision in our approach. Feature Representation We explored the use of 3D voxel-grid features as an alternative to triplane features and found that they yielded similar results, as shown in Table <ref>. However, voxel-grid features required more computation and memory resources for representation and generation tasks. In contrast, the triplane feature representation achieved high reconstruction accuracy with improved memory and computation efficiency. § CONCLUSION In this paper, we introduce Hybrid Neural Diffeomorphic Flow (HNDF) as a novel approach for topology-preserving shape representation and generation. Our method leverages the expressive power of the triplane representation, enabling accurate dense correspondence and high representation accuracy. The proposed hybrid supervision plays a crucial role in capturing both local and global correspondence. Unlike existing methods that primarily focus on directly generating shapes, we explore the concept of generating shapes from deformed templates so as to preserve the underlying topology. We present a baseline method for topology-preserving shape generation and will continue to explore more complex shapes and scenarios. We hope this work contributes to the 3D vision community and offers insights into the potential of topology-preserving shape representation and generation.
http://arxiv.org/abs/2307.03250v1
20230706184716
Gravitational Waves from Binary Neutron Star Mergers with a Spectral Equation of State
[ "Alexander Knight", "Francois Foucart", "Matthew D. Duez", "Mike Boyle", "Lawrence E. Kidder", "Harald P. Pfeiffer", "Mark A. Scheel" ]
astro-ph.HE
[ "astro-ph.HE", "gr-qc" ]
In numerical simulations of binary neutron star systems, the equation of state of the dense neutron star matter is an important factor in determining both the physical realism and the numerical accuracy of the simulations. Some equations of state used in simulations are C^2 or smoother in the pressure/density relationship function, such as a polytropic equation of state, but may not have the flexibility to model stars or remnants of different masses while keeping their radii within known astrophysical constraints. Other equations of state, such as tabular or piece-wise polytropic, may be flexible enough to model additional physics and multiple stars' masses and radii within known constraints, but are not as smooth, resulting in additional numerical error. We will study in this paper a recently developed family of equation of state, using a spectral expansion with sufficient free parameters to allow for a larger flexibility than current polytropic equations of state, and with sufficient smoothness to reduce numerical errors compared to tabulated or piece-wise polytropic equations of state. We perform simulations at three mass ratios with a common chirp mass, using two distinct spectral equations of state, and at multiple numerical resolutions. We evaluate the gravitational waves produced from these simulations, comparing the phase error between resolutions and equations of state, as well as with respect to analytical models. From our simulations we estimate that the phase difference at merger for binaries with a dimensionless weighted tidal deformability difference greater than ΔΛ̃ = 55 can be captured by the SpEC code for these equations of state. Gravitational Waves from Binary Neutron Star Mergers with a Spectral Equation of State Mark A. Scheel August 1, 2023 ====================================================================================== § INTRODUCTION Great progress has been made in gravitational wave astrophysics starting with the binary black hole (BBH) merger event GW150914 <cit.> and the detection of the binary neutron star merger event GW170817 by the LIGO/Virgo/KAGRA collaboration <cit.>. Additional mergers of black hole-neutron star (BHNS) and binary neutron star (BNS) have been observed, such as GW190425 <cit.>, GW200115 <cit.>, and GW191219 <cit.>. We have also observed merger events between objects with properties outside of what was previously expected, such as GW190814 <cit.>, which has either the heaviest NS or lightest BH detected so far, with a mass of 2.50-2.67 M_⊙, partnered with a 22.2-24.3 M_⊙ BH. Gravitational waves from these mergers carry information about the system's chirp mass, as well as the angular momentum and mass of each compact object. In the presence of a neutron star, they also provide information on how matter behaves at densities higher than nuclear saturation density. To interpret this information, we need to quickly generate millions of theoretical gravitational wave signals. Even with modern methods and technologies, however, creating simulations that sample the available parameter space densely enough to allow us to perform parameter estimation on observed binaries remains prohibitive in time and computational cost. As a result, simulations are instead used to test and/or train analytical models. 
While analytical models have high accuracy during the inspiral of a binary even without including any information from merger simulations, they can become inaccurate in the last few orbits before merger – which also happens to be when the impact of the finite size of a neutron star is most noticeable on the waveform. Numerical relativity simulations are thus used to test or calibrate these models for better accuracy during these critical stages of a merger event. As the accuracy of these analytical models (or, at the very least, our ability to test said accuracy) is then dependent on the accuracy of merger simulations, a reduction of potential numerical errors below the expected accuracy of current and future gravitational wave detectors is highly desirable. For GW170817, the only event so far to put meaningful constraints on the equation of state (EoS) of dense matter, modeling errors were most likely unimportant <cit.>. However, with planned improvements to the sensitivity of current detectors and the construction of new observatories, the frequency of gravitational wave detections will increase, and modeling errors will potentially become an important source of uncertainty. In the case of BNS mergers, high accuracy simulations are still relatively new, and even the best simulations have a lower accuracy and a higher computational cost than for BBH mergers. BNS mergers require the evolution of the equations of general relativistic hydrodynamics in addition to Einstein's equations. Beyond the cost of evolving this additional system of equations, the presence of surface discontinuities and/or shocks in the fluid makes it impossible to use for neutron star mergers the high-order methods that have allowed BBH simulations to produce high-accuracy waveforms at a reasonable computational cost, or at least impossible to consistently use such methods across the entire computational domain. There are at least two ways to limit the unphysical impacts of discontinuities in the evolution of the fluid. One is the use of numerical methods capable of maintaining high-order accuracy while capturing shocks and surfaces. Methods demonstrating third order convergence (for beta-equilibrium EoS) <cit.> and fourth order convergence (for piecewise polytropes) <cit.> in the phase of the gravitational wave signal have been published so far. The other is to improve the smoothness of the EoS used to describe high-density matter <cit.>. Smoother EoS typically lead to lower truncation errors. A simple Γ=2 polytropic EoS, for example, is much preferable in terms of numerical accuracy to the more complex piecewise-polytropic EoS, or to the tabulated, composition and temperature dependent EoS needed in simulations that evolve neutrinos or that aim to capture changes in the fluid composition. There is however a significant trade-off, in that a simple polytropic EoS matching the desired mass and radius of a specific neutron star will often not be consistent with known physical constraints, for example the maximum mass of neutron stars, or the radius of neutron stars of different masses. Additionally, these simple EoS are much farther from satisfying constraints on the nuclear EoS derived from theoretical and experimental nuclear physics results. With constraints on the neutron star from studies such as <cit.>, a smooth EoS that is more consistent with at least the known physical constraints on the macroscopic properties of cold neutron stars (mass-radius relation, maximum mass) is desirable. 
For these reasons, this paper will be evaluating our ability to perform high-accuracy simulations with a relatively new EoS utilizing a spectral representation of the nuclear EoS, which results in a smoother EoS with more flexibility in setting the pressure P(ρ) and specific internal energy ϵ(ρ) for dense matter than in simple polytropes. We note that while this does allow us to construct an EoS that more closely matches any chosen set of constraints on the properties of cold matter in beta-equilibrium, the temperature dependence of our EoS remains extremely simplified, and it does not include any composition dependence <cit.>. A method to expand a cold, beta-equilibrium EoS with a more physically motivated temperature and composition dependence has been proposed in <cit.>, but is not currently implemented in the SpEC code used in this work. Our ability to accurately evolved these spectral EoS was first tested in <cit.> over short simulations; this manuscript presents our first full inspiral-merger simulations using these EoS. As we are in particular interested in estimating our ability to capture tidal effects in waveforms, this manuscript includes simulations of systems with identical neutron star masses but performed with two EoS with ∼ 20% differences in their dimensionless tidal deformability (∼ 570 vs ∼ 710 for GW170817-like systems). We will show that the phase difference between the resulting waveforms at merger is resolved in our simulations. In addition to our study of this spectral EoS, we will in this manuscript evaluate the performance of a new time stepping method for SpEC evolutions, where the evolution of Einstein's equations is permitted to take smaller steps than the evolution of the fluid equations. For the SpEC code, this offers reduced computational cost, as these two systems of equations are evolved on different numerical grids, with different time step constraints. The fluid equations are typically more costly to solve in each individual time step while Einstein's equations often require a shorter step to reach a desired accuracy. Here, we demonstrate that uncoupling these time stepping systems results in simulations equivalent (within expected numerical errors) to those obtained when Einstein's equations and the equations of hydrodynamics use the same time stepping algorithm. We simulate six distinct physical configurations, using two different EoS each evolved at three mass ratios. For two of these cases, we also use both time stepping methods. We then extrapolate the gravitational waves to future null infinity, and compare these gravitational waves to analytical models. All waveforms presented here will will become public as part of the next data release by the SxS collaboration. § METHODS §.§ Evolution For the simulations presented in this manuscript, we use the Spectral Einstein Code (SpEC) <cit.>, which evolves Einstein's equations using the Generalized Harmonic formalism <cit.> on a pseudospectral grid, with p-type adaptive mesh refinement <cit.>. The general relativistic hydrodynamical equations are evolved on a separate grid using the high-order shock capturing scheme described in Radice and Rezzolla (2012) <cit.>. This scheme uses the fifth-order, shock capturing MP5 reconstruction to interpolate from cell centers to cell-faces, and a Roe solver to calculate numerical fluxes at cell faces. It has been shown to result in third-order convergence of the solution when used in neutron star merger simulations <cit.>. 
Einstein's equations and the fluid equations are both evolved in time using a third-order Runge-Kutta algorithm. In the algorithm previously used in SpEC, both systems of equations use the same time step, chosen to meet a target time discretization error (see Appendix A, section 3 of <cit.>). Coupling source terms are communicated between the grids at the end of each time step. Linear extrapolation from the current step and previous steps is used to determine the values of the source terms during the intermediate steps of the Runge-Kutta algorithm. In this manuscript, we also use for the first time an algorithm where the evolution of Einstein's equations is allowed to use a smaller time step than the evolution of the fluid. In that algorithm, the fluid equations take a time step Δ t = α_CFLmin(Δ x/c_ max), with Δ x the grid spacing, c_ max the maximum characteristic speed of the fluid equations in grid coordinates in that cell (in absolute value), and α_ CFL=0.25 a constant chosen to maintain stability of the evolution. The time step used for Einstein's equations is chosen in the same way as in our standard algorithm, with the minor modification that we require each time step of the fluid evolution to be an integer number of time steps of the metric evolution. In this new method, source terms are communicated between the two grids at the end of each time step of the fluid evolution. This results in less frequent communication than in our previous algorithm, and in a lower number of time steps for the fluid equations. At intermediate times, we again use extrapolation from previous time steps to calculate the source terms. The order of extrapolation used in this algorithm is freely specifiable. So far, we found no significant impact on the accuracy of our simulations as long as we use at least first order extrapolation. In the rest of this manuscript, we will refer to the simulations using the same time step on both grids as Shared Time Step (ShTS), and the simulations using a different time step on each grid as Split Time Step (SpTS). Interpolating from the spectral grid to the finite-difference grid to communicate source terms is done by refining the spectral grid by approximately a factor of 3 in each dimension, and then using third-order polynomial interpolation from the colocation points in the refined spectral grid to the finite-difference grid. Interpolation from the finite-difference grid to the spectral grid uses fifth-order polynomial interpolation, limited so that interpolation does not create any new extremum in the fluid variables. The pseudospectral and finite difference grids both rotate and contract to follow the binary system. The finite-difference grid is rescaled when the grid spacing decreases by a factor of 0.8 in the inertial frame, in order to keep a consistent resolution during all phases of the evolution. The finite-difference grid removes subdomains where no significant matter (max(ρ) < 6.2×10^9g/cm^3) is located, and adds back subdomains as higher density matter flows close to the boundary of the removed subdomains over the course of the simulation. These two methods result in a reduced computational cost for our simulations. For a full explanation on SpEC's methods for the evolution of the hydrodynamical and general relativistic grids, we refer the reader to Duez (2008) <cit.>, as well as appendix A of Foucart (2013) <cit.>. 
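To illustrate the split time stepping (SpTS) bookkeeping described above, here is a schematic sketch pairing one CFL-limited fluid step with an integer number of smaller metric steps. It shows only the control flow, not the production algorithm: `advance_metric`, `advance_fluid`, and `exchange_source_terms` are hypothetical callables standing in for the corresponding SpEC operations.

```python
import math

def split_time_step(state, dx_min, c_max, dt_metric_target,
                    advance_metric, advance_fluid, exchange_source_terms,
                    alpha_cfl=0.25):
    # Fluid step from the CFL condition: dt_fluid = alpha_CFL * min(dx / c_max).
    dt_fluid = alpha_cfl * dx_min / c_max
    # Require each fluid step to contain an integer number of metric steps.
    n_sub = max(1, math.ceil(dt_fluid / dt_metric_target))
    dt_metric = dt_fluid / n_sub
    for _ in range(n_sub):
        # Coupling source terms at intermediate times come from extrapolation of earlier steps.
        state = advance_metric(state, dt_metric)
    state = advance_fluid(state, dt_fluid)
    # The two grids communicate source terms once per fluid step.
    exchange_source_terms(state)
    return state, dt_fluid, dt_metric
```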
We will limit ourselves here to a brief discussion of the methods most relevant to the use of spectral EoS in simulations aiming to produce high-accuracy numerical waveforms. We define the neutron star matter as a perfect fluid with stress-energy tensor T^μν = (ρ + u + P)u^μ u ^ν + P g^νμ. In this equation, we have the pressure P, baryon density ρ, internal energy density u, 4-velocity u^μ, and the inverse metric g^μν. The evolution equations are derived from the conservation of baryon number [The baryon density is defined as ρ=m_b n, with n the baryon number density and m_b an arbitrarily chosen reference mass for baryons; accordingly, our evolution equation represents conservation of baryon number, not conservation of mass.] ∇_μ (ρ u^μ) = 0 and the energy-momentum conservation ∇_μ T^μν=0, which give 5 equations for 6 independent variable (e.g. ρ,u,P and 3 independent components of the velocity). We close the system of equations with an EoS, which introduces two functions P(ρ,T) and u(ρ,T) (defined below), with temperature T. While such an EoS introduces a new variable, the two additional equations are sufficient to close the system of equations. Practically, we evolve the 'conserved' variables ρ_* = -√(γ)n_μ u^μρ, τ = √(γ) n_μ n_ν T^μν-ρ_*, S_k = -√(γ) n_μ T^μ_k, where γ is the determinant of the spacial metric γ_ij=g_ij+n_i n_j, and n^μ is the future directed unit normal to a constant time slice. The integral of ρ_* (“total baryonic mass”) is conserved over the entire domain, up to losses at the domain boundary. Recovering the “primitive” variables (ρ_0,T,u_i) from the conserved variables requires multi-dimensional root-finding. We follow the 2D root-finding method of Noble et al <cit.>, with corrections in low-density and high-velocity regions where due to numerical errors in the conservative variables the inversion may not be possible <cit.>. The evolution equations are written in “conservative” form, i.e. as a set of five coupled equations of the form ∂F^0(u)/∂ t +∑_i=1^3∂F^i(u)/∂ x^i = S(u) with the primitive variables u, vector of conserved variables F^0(u), fluxes F^i, and source terms S(u). These fluxes and source terms are calculated at cell centers, and the fluxes (as well as the physical variables ρ_0, T, and u_i) are interpolated to cell faces. The calculation of the divergence of the fluxes from the values of u at cell centers follows the previously mentioned method of Radice & Rezzolla <cit.>. §.§ Numerical Implementation of Spectral Equation of State The EoS used in these simulations to describe matter inside the neutron star was developed in Lindblom (2010) <cit.>, and modified in Foucart (2019) <cit.> for computational use. As already mentioned, the choice of EoS in numerical simulations is often a trade-off between the ability to capture more physics and a wider range of possible models on one side, and the numerical accuracy of the simulations on the other side. Spectral EoS are smoother than both tabulated and piecewise polytropic EoS, which results in higher accuracy simulations (as will be seen in the results section). On the other hand, our existing spectral EoS are limited to matter in beta-equilibrium, use an extremely simplified model for the thermal pressure, and are not suitable for coupling to neutrino evolution. 
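For concreteness, the conserved variables defined above can be expressed through the Lorentz factor W = -n_μ u^μ and the specific enthalpy h = 1 + ε + P/ρ (with u = ρε). The short sketch below evaluates ρ_*, τ and S_k from primitive data using those textbook identities; it is a generic Valencia-type expression and may differ from SpEC's internal conventions in details such as index placement.

```python
import numpy as np

def primitives_to_conserved(rho, eps, P, W, u_lower, sqrt_gamma):
    """rho, eps, P: baryon density, specific internal energy, pressure;
    W = -n_mu u^mu (Lorentz factor seen by the normal observer);
    u_lower: spatial components u_k; sqrt_gamma = sqrt(det gamma_ij)."""
    h = 1.0 + eps + P / rho                              # specific enthalpy, so rho + u + P = rho*h
    rho_star = sqrt_gamma * W * rho                      # conserved baryon density
    tau = sqrt_gamma * (rho * h * W**2 - P) - rho_star   # energy variable
    S = sqrt_gamma * rho * h * W * np.asarray(u_lower)   # momentum density S_k
    return rho_star, tau, S
```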
We consider this a reasonable trade-off when attempting to generate high-accuracy gravitational wave signals from the inspiral, merger, and early post-merger evolution of BNS and BHNS binaries, but acknowledge that spectral EoS would be a poor choice for simulations attempting e.g. to model the outcome of r-process nucleosynthesis in mergers. Single polytropic EoS, for their part, lead to higher accuracy simulations than more complex spectral EoS at a given resolution[Or at least than the spectral EoS used in this manuscript; single polytropic EoS are themselves a subset of the spectral models, but one that does not allow much flexibility on the functional form of the EOS.]. However, their use to simulate asymmetric binaries and/or the merger and post-merger phase of the evolution of a binary may be problematic. Indeed, while it is possible to construct a single polytropic EoS for which a neutron star of a given mass has the desired radius, or the desired tidal deformability, it is typically difficult to do this for two neutron stars of distinct mass, or to make sure that the EoS at the same time support massive neutron stars M_ NS≳ 2M_⊙. In this section, we give a reduced explanation of the theory of spectral EOS. A full explanation can be found in the previous work by Foucart (2019) <cit.> and Lindblom (2010) <cit.>. We choose the spectral expansion as in Foucart (2019) <cit.> by writing the pressure P and specific internal energy ϵ as P(x,T)= P_0 exp(Γ_0 x + η_2 x^3/3 + η_3 x^4/4) + ρ T x>0 P_0 exp(Γ_0 x) + ρ T x≤ 0 and ϵ(x,T)= ϵ_0 + ∫_0^x dξP(ξ,0)/ρ_0e^-ξ+T/Γ_th-1 x>0 P(x,0)/ρ(Γ_0-1)+T/Γ_th-1 x ≤ 0 with some reference density ρ_0, reference adiabatic index Γ_0, reference pressure P_0, temperature T, and where we define x = log(ρ/ρ_0). We note that despite its name, used here to match standard conventions, T does not scale linearly with the physical temperature of the fluid; it is simply defined so that the thermal pressure is P_ thermal = ρ T. These equations give free parameters of η_2, η_3, ρ_0, P_0, and Γ_0. Many possible sets of values for these would result in sound waves in dense matter moving at superluminal velocities and/or behavior that does not conform to known nuclear physics. In Foucart (2019) <cit.>, a Marko-Chain Monte Carlo method was used to determine values of these parameters resulting in causal EoS, and values of the pressure at high-density within the range of values currently allowed by nuclear physics. We also found that the choice Γ_0=2 led to higher accuracy than lower values of Γ_0, possibly due to the simple behavior of the density close to the surface for that choice of Γ_0. We make this choice here as well, even though this results in values of the pressure and internal energy at ρ≪ρ_0 that are inconsistent with the known behavior of dense neutron rich matter at low density. This is reasonable for our purpose here because the gravitational wave signal is mostly sensitive to the EoS at high density. From <cit.>, we choose two EoS with free parameters shown in table <ref>, with mass-radius curves shown in figure <ref>. The first EoS parameter set displays a higher maximum mass and lower maximum radius (hMlR) while the second has a lower maximum mass and a higher maximum radius (lMhR). The hMlR EoS gives neutron stars with a maximum Schwartzschild radius of 12.05 km and maximum baryonic mass of 2.719 M_⊙, while the lMhR EoS gives neutron stars with a maximum Schwartzschild radius of 12.41 km and maximum baryonic mass of 2.191 M_⊙. 
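As a concrete reading of the pressure equation above, the snippet below evaluates the cold plus thermal pressure of the spectral EoS with x = log(ρ/ρ_0). Parameter names mirror the free parameters (P_0, Γ_0, η_2, η_3, ρ_0); consistent units are assumed and no values from the table are hard-coded.

```python
import numpy as np

def spectral_pressure(rho, T, P0, Gamma0, eta2, eta3, rho0):
    """Pressure of the spectral EoS: cold spectral expansion plus thermal term rho*T."""
    x = np.log(np.asarray(rho, dtype=float) / rho0)
    cold = np.where(
        x > 0.0,
        P0 * np.exp(Gamma0 * x + eta2 * x**3 / 3.0 + eta3 * x**4 / 4.0),
        P0 * np.exp(Gamma0 * x),
    )
    # "T" is defined so that the thermal pressure is simply rho * T.
    return cold + rho * T
```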
§.§ Initial conditions For both the ShTS and SpTS time stepping algorithm, we evolve binary neutron star systems with the same chirp mass M_ chirp=1.18M_⊙, chosen to match the chirp mass of GW170817. Our chosen configurations for the ShTS method consist of two systems, with mass ratios of 1 and 2 and the hMlR EoS, separated by a distance of 53.1km, and no initial neutron star spin. The SpTS simulations have mass ratios of 0, 1, and 2, separated by 54.6km, with no initial spin, and are performed for both EoS. We construct out initial data utilizing our SPELLS code <cit.> adapted for binary neutron star systems <cit.>, which generates a binary system in quasi-circular orbit. From this, we iteratively adjust the initial angular and radial velocity of the neutron stars to reduce the initial eccentricity of the orbits to ≲0.002 utilizing the methods of Pfeiffer (2007) <cit.>. With these methods, and initial separation distance, we reach merger at about 10.5 orbits for the ShTS and about 11.5 orbits for the SpTS. Parameters for all simulations can be found in table <ref>. With these EoS, we have two systems with a mass averaged dimensionless tidal deformability Λ̃≈ 570 for the hMlR EoS and Λ̃≈ 710 for the lMhR EoS. For the hMlR EoS, the dimensionless tidal deformabilities ranges are Λ_1=318.655-588.389 and Λ_2=588.389-990.603, and the lMhR EoS ranges from Λ_1=529.503-713.648 and Λ_2=713.648-1224.13. These values rest comfortably within the 90% probability region of low spin systems (and very close to the 50% region in some cases) for the Λ_1 and Λ_2 relationship from the LIGO and Virgo constraints <cit.>. §.§ Domain/Grid Setup The initial finite-difference hydrodynamical domain construction consists of a rectangular, bar-shaped Cartesian grid space with the neutron stars located at each end. We have three resolutions for the 1 and 2 hMlR ShTS and the 0 lMhR SpTS simulations. For the 1 and 2 simulations, we have grid spacings of Δ x_FD = 298m, 239m, 191m with number of grid points along each dimension of (369×185×185), (457×229×229), and (577×289×289), respectively. The 0 simulation has grid spacings of Δ x_FD=273m, 218m, and 174m, with grid points of (401×201×201), (505×253×253), and (649×325×325), respectively. For future ease, we will refer to the lowest resolution of each simulation set as Lev0, the middle Lev1, and the highest resolution Lev2. The SpTS hMlR EoS simulations were run at the Lev1 resolution, while the SpTS lMhR 1 and 2 were run at the Lev1 and Lev2 resolutions. In all SpTS simulations, the grid spacing is chosen so that we have N=(72,90,112) grid points across the diameter of the neutron stars at (Lev0,Lev1,Lev2), averaging over both stars at t=0[Note that our initial data is in a coordinate system close to isotropic coordinates, in which the neutron star radius is smaller than in Schwarzschild coordinates. Hence N×Δ x ≠ 2R_ NS if R_ NS is the areal (Schwarzschild) radius used, by convention, in our description of the neutron stars.]. This domain is divided into 8 equal segments along the shorter axes, and 16 segments along the long axis, resulting in 512 subdomains of equal size. The spectral grid construction consists of a ball and 5 shells covering each of the neutron stars. A set of distorted cylinders, with the rotational axis along the line between the neutron stars, connects the sets of ball and shells around the neutron stars. These cylinders also connect to 12 shells covering the outer regions, which are centered on the center of mass of the system. 
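Returning briefly to the initial conditions above: the component masses follow from the chirp mass and mass ratio through the standard definition M_chirp = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}. The snippet below simply inverts that textbook relation; nothing in it is specific to SpEC or to the configurations in the table.

```python
def component_masses(m_chirp, q):
    """q = m1/m2 >= 1; returns (m1, m2) in the same units as m_chirp."""
    m2 = m_chirp * (1.0 + q) ** 0.2 / q ** 0.6
    return q * m2, m2

# Example with the GW170817-like chirp mass used here and q = 1.2:
m1, m2 = component_masses(1.18, 1.2)   # roughly (1.49, 1.24) solar masses
```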
After merger, the area interior to the outer shells is replaced by distorted cubic subdomains. We refer the reader to Foucart et al (2013) <cit.> and Szilágyi (2014) <cit.> for a more detailed explanation and graphics of the pseudospectral grid construction. The number of basis functions used in each subdomain is adaptively chosen to reach a user defined maximum error, which is estimated from the spectral coefficients of the evolved variables. The user-defined accuracy on the pseudospectral grid is chosen such that it scales as (Δ x_FD^0)^5, and as such errors on this grid converge faster than those on the finite-difference hydrodynamical grid. §.§ Waveform Extrapolation The simulations evolve through inspiral, plunge, and merger, then continue until the peak of the gravitational waves resulting from the merger event progress past the outer edge of the pseudospectral grid at a radius of 2047.5 km for the ShTS simulations and 2074.7 km for the SpTS simulations. The method to extrapolate the gravitational wave signal to null infinity from the metric at finite radii follows the procedure outlined by Boyle & Mroue (2009) <cit.>. The Newman-Penrose scalar Ψ_4 and metric perturbation h are estimated on spheres of constant inertial radii and decomposed into spin=-2 spherical harmonics components. At 24 radii from R_i=211.2 km to R=2,015.6 km equidistant in 1/R, we compute a retarded time t_ret(t,R_i) to approximate the travel time for the wave from the merging neutron stars to R_i. From here, we fit the ansatz A_lm(t_ret,r) = ∑_j=0^N A_lm,j(t_ret)r^-j ϕ_lm(t_ret,r) = ∑_j=0^N ϕ_lm,j(t_ret)r^-j to the amplitude A_lm and phase ϕ_lm of the (l,m) component of the spherical harmonic decomposition of the gravitational wave at fixed retarded times. We then estimate the (l,m) mode at infinite radius to be A_lm,1e^iϕ_lm,0. § RESULTS §.§ Error Analysis As in Foucart (2019) <cit.> and Foucart (2021) <cit.>, we present and utilize a standard method of error estimation that likely overestimates the potential errors. We take into account three potential sources of error in our simulations: finite resolution of our computational domain, extrapolation of the gravitational wave to infinity, and mass lost during the simulation at the boundaries. For a more detailed overview of how these error sources are evaluated, we recommend Foucart (2019) <cit.>, but we will review the fundamentals here, and show the error estimates for the (2,2) mode of the extrapolated waveforms. We estimate the errors due to finite resolution by comparing the three resolutions, Lev0 (low), Lev1 (medium), and Lev2 (high). First, we use the phases of the Lev0 and Lev2 simulations in a Richardson extrapolation to infinite resolution, assuming a 2nd-order convergence. We then take the difference between the Lev2 waveform and the extrapolated waveform as a first estimate of the numerical error. We repeat this process on the Lev2 and Lev1 resolutions, and keep the worse of these two error estimates. We note that this is typically conservative because the methods used within the SpEC code converge at 3rd order or better. However, the hybrid spectral/finite volume methods utilized by SpEC causes different errors to dominate at different phases of the simulation. As a result, when considering only two resolutions, it is not uncommon for multiple sources of error to cancel each other during a simulation. 
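The Richardson-extrapolation error estimate described above can be written compactly as follows; the phases are assumed to be arrays sampled on a common retarded-time grid, and the assumed convergence order is a parameter.

```python
import numpy as np

def richardson_phase_error(phi_coarse, phi_fine, dx_coarse, dx_fine, order=2):
    """Extrapolate the phase to infinite resolution assuming error ~ dx**order,
    and return the estimated error of the finer run together with the extrapolant."""
    ratio = (dx_coarse / dx_fine) ** order
    phi_extrapolated = phi_fine + (phi_fine - phi_coarse) / (ratio - 1.0)
    return np.abs(phi_fine - phi_extrapolated), phi_extrapolated

# Following the text, the quoted error is the worse of the (Lev0, Lev2) and (Lev1, Lev2)
# pairwise estimates:
# err = np.maximum(richardson_phase_error(phi0, phi2, dx0, dx2)[0],
#                  richardson_phase_error(phi1, phi2, dx1, dx2)[0])
```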
In our experience so far comparing numerical simulations with different numerical setups, comparing with results from other collaborations, and in case were additional higher accuracy simulations were performed, using three resolutions and considering the worse error estimate when comparing the simulations pairwise offers a comfortable buffer against this issue. The errors from extrapolation to null infinity are estimated by comparing the phase difference between the 2nd and 3rd order extrapolation in r^-1, between t=0 and t_peak, where the (2,2) mode of the waveform reaches its maximum amplitude. The maximum phase difference is conservatively chosen to be the associated error. This error is typically smaller than the finite resolution error, except at the very beginning of a simulation. Finally, mass lost during the evolution results in gravitational waves emitted from a system different from the initially intended system. Here, we use an estimate of the resulting error in the phase of the waveform derived in Boyle (2007) <cit.>. In our simulations, mass loss was minimal, and resulted in a negligible phase error comparative to the error from finite resolution. The q=1.1 ShTS simulation lost approximately 5.42×10^-6 M_⊙ while the q=1.2 simulation lost approximately 8.71×10^-6 M_⊙. The lMhR 0 SpTS simulation lost 1.52×10^-5 M_⊙. The other SpTS simulations were performed at either 1 or 2 resolutions, as opposed to the 3 required for the previous error analysis. The simulations with 2 resolutions demonstrated similar phase difference between resolutions as the 0 lMhR SpTS. In figure <ref>, we can see the three sources of error, as well as the total error ϕ_T. In all of these figures, clearly the finite resolution error (ϕ_dis) of the simulation dominates ϕ_T from a few hundred t/M after the start of the simulation to past merger. At the time of merger, the hMlR ShTS 1 and 2 simulations have approximately 1-2 radians of phase error, but the longer lMhR SpTS 0 simulation peaks at approximately 4 radians at the time of merger. The extrapolation error (ϕ_ext) provides a constant error estimate at approximately 0.01 radians, two orders of magnitude smaller than the discretization error at the time of merger, and is the only significant error for the first few hundred t/M. The error estimate from the loss of mass during the simulation is negligible, even at its maximum value, which occurs after t_peak (the time of merger), indicated in the plots by the vertical dashed line. In figure <ref>, we compare the phase difference between resolutions for the hMlR 2 systems. We see a significant difference between the higher two resolutions (Lev1 and Lev2) and the low resolution (Lev0). The Lev1 and Lev2 simulations are on the other hand very close to each other. This is most likely due to error cancellations between the two highest resolution as the error changes sign twice during the evolution. We see similar behavior in the hMlR 1 waveform. This is a fairly common issue when evaluating simulation errors with the SpEC code, and the main reason that any estimate of the phase error in the waveform requires three or more resolutions. For the lMhR 0 case (figure <ref>), we see a different behavior, as the Lev1 and Lev2 simulations minimally differ. However, between approximately a quarter and halfway through the simulation, the Lev1 phase begins to drift towards the Lev0 simulation. 
The accumulation of this error is likely due to the changing signs of some canceling errors, similar to the ShTS q=1.2 simulation. As the simulation progresses, different errors dominate, and this figure is another example of the necessity of more than two resolutions. This is a natural consequences of the resolution choices made in SpEC, where different sources of error (spectral, finite difference, time stepping) are kept at roughly the same level in order to avoid wasting computational resources on over-resolving one sector of our simulations – but as a result may occasionally cancel out when comparing simulations at two resolutions only. §.§ Discussion Given the relatively minimal database of similar BNS simulations, we compare the accuracy of our simulations to the BNS simulations in Foucart et al (2019) <cit.>. Compared to <cit.>, we can see that our total error at merger is comparable to the 0 simulations performed with the piecewise polytropic EoS MS1b, and slightly higher than for the single polytrope with Γ=2 case. However, the MS1b evolved for 8.5 orbits, shorter than our present simulations of about 10.5 for the ShTS and 11.5 for the SpTS, which themselves are shorter than the 12.5 orbits of the single polytrope simulation. From this, we can see that we accumulate error slower than the MS1b EoS, and faster than the polytropic EoS. We additionally can directly compare our two ShTS systems against two analytical models, IMRPhenomD_NRTidalv2 and SEOBNRv4Tidal, generated using the LALSuite <cit.>. Both analytical models were generated using our simulations' parameters (ADM mass, tidal deformability, etc.) and show a high level of agreement in waveforms (see figures <ref> and <ref>, discussed below). In order to perform meaningful comparisons with analytical models and between simulations of different lengths or using different equations of state, the gravitational waves from our simulations are time and phase matched to the Lev2 resolution simulations by choosing two set times in the reference waveform, and minimizing the phase difference within that time frame among all transformations t'=t+δ t, ϕ'=ϕ+δϕ. Before doing so, in order to remove any difference in time steps between gravitational wave sets, we interpolate the highest resolution gravitational wave (referred from here as our 'target' wave) to a standardized time series using a cubic 1D interpolator. For each gravitational wave we wish to match to our target wave (referred from here as the 'adjusted' wave) we go through the following procedure: Each wave is interpolated to the standardized time series. An allowed maximum time shift Δ t is chosen for the adjusted wave, and a minimizing algorithm then determines the proper time and phase shifts to reduce the phase error between the adjusted wave and the target wave within the chosen matching window. We can see in figures <ref> and <ref> the analytical and numerical relativity waveforms overlaid, and, for waveforms using the same equation of state, the close agreement between them during the inspiral and plunge phases of the simulation. Differences between simulations and models are most apparent near merger, but this is not unexpected as analytical models often do not accurately predict the merger portion of simulations. Simulations using different equations of state are clearly distinguishable at merger on these plots. 
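A minimal version of the time-and-phase matching procedure described below in the discussion (interpolate the adjusted waveform, then minimize the phase mismatch over shifts t' = t + δt, ϕ' = ϕ + δϕ within a chosen window) can be sketched as follows. For a fixed time shift the optimal phase shift is just the mean residual, so only the time shift needs a numerical search; the actual analysis code differs in details.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

def align_phase(t_ref, phi_ref, t_adj, phi_adj, t1, t2, max_shift):
    """Return (dt, dphi) aligning phi_adj onto phi_ref in the window [t1, t2]."""
    window = (t_ref >= t1) & (t_ref <= t2)
    tw, phi_ref_w = t_ref[window], phi_ref[window]
    phi_interp = interp1d(t_adj, phi_adj, kind="cubic",
                          bounds_error=False, fill_value="extrapolate")

    def mismatch(dt):
        dphi = phi_interp(tw + dt) - phi_ref_w
        dphi -= dphi.mean()                 # best constant phase shift removed
        return np.sum(dphi ** 2)

    res = minimize_scalar(mismatch, bounds=(-max_shift, max_shift), method="bounded")
    dt_best = res.x
    dphi_best = np.mean(phi_ref_w - phi_interp(tw + dt_best))
    return dt_best, dphi_best
```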
Using the peak amplitude of the extrapolated and analytical waveforms as the merger time, we can see in figure <ref> that the analytical model has an oscillatory amplitude before merger, and reaches merger before the other hMlR 1 systems. We see similar behavior for the 2 systems. As an additional avenue of waveform analysis, we use a new method from Read (2023) <cit.>. Gravitational waves are Fourier transformed using the Stationary Phase Approximation, and matched in time and phase at a reference coalescence frequency f_c (chosen to be the minimum peak frequency among our simulations and analytical models in our analysis). We then compare the resulting phase differences of the Fourier transform. Quite importantly, the method is constructed so that the phase difference is fairly insensitive to numerical noise in the calculation of frequencies and of their derivatives, and relies on the choice of a single reference frequency for matching waveforms. In <cit.>, a variation of this method is also used to assess whether waveforms are distinguishable in various gravitational wave detectors – however, this requires knowledge of the waveforms over the entire frequency range accessible to the detector, while our numerical waveforms only provide data for f≳ 400 Hz. Another important advantage of this method is that it is relatively insensitive to errors in the early phase of the evolution, when simulations can have a hard time resolving high-frequency noise, and instead provide a more direct comparison of waveforms in the range in which finite size effects are the largest. We can see in figure <ref> that the hMlR model is within our numerical error. This is not the case for the lMhR simulation. This is due to the fact that, in this plot, we are matching waveforms at the peak frequency of the lMhR configuration, which is about 100 Hz below the peak frequency of the hMlR configuration. A similar comparison matching waveform at the peak frequency of the hMlR configuration shows the model now inconsistent with the numerical simulations. This is another clear indication that any difference between the model and the numerical result is due to the behavior of the model close to merger. Additionally we see that our simulations can clearly distinguish between the hMlR and lMhR EoS. However, we must note that the difference between the two EoS mostly comes from the behavior of the waveform above 1 kHz, outside the sensitive range of LIGO and Virgo. Accordingly, these results do not indicate anything about the detectability of these differences by current gravitational wave detectors. They only tell us that the numerical relativity waveforms are distinguishable well outside of their numerical errors. This can be better understood if we note that applying a time and phase shift on these waveforms allow us to change any curve on figure <ref> according to ϕ→ϕ + Af + B for any constant A,B. Waveforms are thus only distinct on this plot if they differ in their second derivative d^2ϕ/df^2. Clearly, this is only the case at high frequency (f≳ 900 Hz). Similarly, in figures <ref> and <ref>, we are clearly able to see a distinct phase difference between our hMlR EoS and the lMhR EoS within our current numerical error. 
Assuming a linear dependence of the phase at merger in Λ̃, and for the specific EoS and mass ratios simulated in this manuscript, we estimate that we are able to distinguish with SpEC different gravitational waveforms down to a dimensionless tidal deformability difference of ΔΛ̃≈ 55, well below current constraints from the observation of GW170817 <cit.>. This shows that the spectral EoS is a promising option to train analytical models. Moving to a comparison of the time stepping methods, we see a very close agreement in the extrapolated waveforms using the ShTS method and the SpTS method. In both the 1 and 2 cases, the SpTS and ShTS simulations behave nearly identically in waveform and peak amplitude at merger, with differences well below our estimated numerical errors. As the ShTS and SpTS simulations were run on different clusters, specifically the University of Texas' Frontera cluster and the University of New Hampshire's Plasma cluster, a direct comparison of computational cost is non-trivial. We therefore compare two main components to estimate computational cost: the number of time steps the grid evolving Einstein's equations took and the CPU hours spent during the Δ t=5000 (about 0.025 seconds) preceding merger. Using this time period will avoid the initial numerical errors and junk radiation at simulation start from affecting the computational time, as well as the extra orbit in the SpTS case. We compared the Lev1 1 simulations. We found the ShTS simulation to have taken 251823 steps during this Δ t=5000 period, and cost 60341 CPU hours. In comparison, the SpTS took 357506 steps, with only a cost of 42707 CPU hours. We also must note that the SpTS simulation had an approximately 10% increase in resolution compared to the ShTS case, which should result, everything else being equal, in approximately 40% increased computational expense for the fluid evolution. Despite this additional cost and approximately 42% additional time steps for the evolution of Einstein's equations, we can see an approximately 30% reduction in computational time cost in the SpTS case. From standardized speed tests performed on both machines for BNS evolutions, we can also determine that Frontera is roughly 12% faster than Plasma, further increasing the CPU hour cost reduction of the SpTS method. While this comparison is relatively rough, we find it sufficient to state that the SpTS method does save on computational resources in simulations such as the ones used in this paper. § CONCLUSION From our work, we have found the spectral EoS to be a promising option for numerical BHNS or BNS waveform studies. It is capable in our systems to generate gravitational waves from BNS that agree within expected error with state-of-the-art analytical models up to the merger event, where analytical models become less accurate. The defining parameters of spectral EoS can be adjusted to produce a range of neutron star EoS candidates, offering a large amount of flexibility for future systems as we further refine the constraints on the EoS of a neutron star. It offers an improved ability to generate stars with appropriate macroscopic properties when compared with a polytropic EoS, and provides better numerical accuracy compared to a discontinuous EoS, at least in the SpEC code. We find in particular that, for two distinct methods of matching the waveforms in time and phase, we are capable of clearly capturing differences in the gravitational wave signals produced by binaries with tidal deformabilities of Λ̃≈ 550 and Λ̃= 700. 
Assuming a linear dependence of the phase differences with Λ̃, our results indicate that variations of ΔΛ̃≈ 50 could lead to numerical waveforms whose behavior close to merger differ by more than our current finite-resolution errors. As for numerical methods, our preliminary comparison between the SpTS and ShTS methods indicates potential computational cost savings by uncoupling the finite difference and spectral grid time steps. In this manuscript, we measured a greater than 30% decrease in CPU hours used for a simulation with ≈ 10% increased resolution on a cluster with slower hardware. Clearly, further testing on simulations conducted on the same cluster, utilizing the same simulation parameters such as resolution and initial conditions is required for a definitive answer, but our test here has shown the comparison to be worth closer inspection of a method that could potentially have significant savings in computational cost. There is still a great deal of experimentation that can be done with the spectral EoS, including higher resolution simulations to verify the accuracy of our current error estimates. Additionally, generating NS with varying radii, mass, and tidal deformability by adjusting the Γ_0, η_2, η_3, Γ_th, ρ_0 and P_0 may prove useful in determining its viability in a range of systems, allowing for a smooth EoS for SpEC and other codes sensitive to discontinuous EoS. The spectral EoS offers a new avenue for simulations, expanding our potential tools for more accurate and better resolved simulations, to aid in eventually better understanding the detected gravitational waves from merger events between compact objects. A.K and F.F. gratefully acknowledge support from the Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-05CH11231 and from the NSF through grant AST-2107932. M.D. gratefully acknowledges support from the NSF through grant PHY-2110287. M.D. and F.F. gratefully acknowledge support from NASA through grant 80NSSC22K0719. M.S. acknowledges funding from the Sherman Fairchild Foundation and by NSF Grants No. PHY-1708212, No. PHY-1708213, and No. OAC-1931266 at Caltech. L.K. acknowledges funding from the Sherman Fairchild Foundation and by NSF Grants No. PHY-1912081, No. PHY-2207342, and No. OAC-1931280 at Cornell. Computations for this manuscript were performed on the Plasma cluster, a Cray CS500 supercomputer at UNH supported by the NSF MRI program under grant AGS-1919310, and on the Wheeler cluster at Caltech, supported by the Sherman Fairchild Foundation. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin and the NSF for providing resources on the Frontera cluster <cit.> that have contributed to the research results reported within this paper. Computations were also performed on ACCESS resources through grant No PHY990002. ieeetr
http://arxiv.org/abs/2307.03297v1
20230706211222
Patterns in Knot Floer Homology
[ "Ekaterina S. Ivshina" ]
math.GT
[ "math.GT", "math.GN" ]
1]Ekaterina S. Ivshina [1]Department of Mathematics, Princeton University, Princeton, NJ 08544, USA Patterns in Knot Floer Homology [ ================================ Submitted to Experimental Mathematics Based on the data of 12-17-crossing knots, we establish three new conjectures about the hyperbolic volume and knot cohomology: (1) There exists a constant a∈R_>0 such that log r(K) < a ·(K) for all knots K where r(K) is the total rank of knot Floer homology (KFH) of K and (K) is the hyperbolic volume of K. (2) Fix a small cut-off value d of the total rank of KFH and let f(x) be defined as the fraction of knots whose total rank of knot Floer homology is less than d among the knots whose hyperbolic volume is less than x. Then for sufficiently large crossing numbers, the following inequality holds f(x) < L/1 + exp(-k·(x-x_0)) + b where L ,x_0, k, b are constants. (3) There exist constants a,b∈R such that log(K) < a ·(K) +b for all knots K where (K) is the knot determinant of K. Keywords: knot Floer homology, hyperbolic volume, knot determinant § INTRODUCTION A major goal of knot theory is to relate the geometric structure of a knot complement to the knot’s topological properties. Nathan Dunfield <cit.> documented a nearly linear relationship between log and the hyperbolic volume of K for all alternating knots K with at most 13 crossings and samples of 14-16-crossing alternating knots ( denotes the knot determinant of K). He then considered log/log() versus the hyperbolic volume of K for all alternating knots K with at most 13 crossings and samples of 14-16-crossing alternating knots ( denotes the degree of the Jones polynomial of K). Dunfield again observed the points to cluster around a straight line. The plots, however, become more scattered when including non-alternating knots. Furthermore, Mikhail Khovanov <cit.> observed a correlation between log(H(K)- 1) and the hyperbolic volume of 10- and 11-crossing non-alternating knots where H(K) is the total rank of Khovanov homology H(K) of K. In this work, we are interested in investigating the relationship between the hyperbolic and homological measures of the knot complexity. Recall that the vast majority of knots are hyperbolic <cit.> and that the volume of a knot complement serves as a proxy for the complexity of the knot. Another proxy of the complexity of a knot is the total rank of its knot Floer homology. In particular, the unknot is the only known knot whose total rank of knot Floer homology is equal to 1. In this paper, we quantify the strength of the nearly linear relation between the hyperbolic volume and the logarithm of the total rank of knot Floer homology of knots with 12-17 crossings. We then perform a similar analysis considering the hyperbolic volume versus the logarithm of the knot determinant. We find that the relationship between the hyperbolic volume and the logarithm of the total rank of knot Floer homology is stronger compared to the relationship between the hyperbolic volume and the logarithm of the knot determinant. Finally, we provide experimental evidence for a special pattern in the density of knots with small total ranks of knot Floer homology. This paper is organized as follows. Section <ref> provides a brief overview of the concepts relevant to this work. In Section <ref>, we discuss how we constructed our dataset. 
In Section <ref>, we provide experimental evidence for and formulate a conjecture that there exists an inequality between the hyperbolic volume and the logarithm of the total rank of knot Floer homology. In Section <ref>, we conjecture the existence of an inequality between the hyperbolic volume and the logarithm of the knot determinant. Section <ref> establishes a conjecture about the density of knots with small total ranks of knot Floer homology based on our experimental data. Section <ref> provides a summary of our findings. § BACKGROUND INFORMATION Our primary focus in this paper is on the hyperbolic volume of the knot complement, knot Floer homology, and the knot determinant. We now briefly discuss each of these concepts. Knot Floer homology was introduced independently by Ozsváth-Szabó <cit.> and Rasmussen <cit.> around 2002. Knot Floer homology categorifies the Alexander polynomial and contains information about several non-trivial geometric properties of the knot (genus, slice genus, fiberedness, effects of surgery, and others). Knot Floer homology provides more information about knots/links compared to the Alexander polynomial. For example, it detects the genus of a knot while the Alexander polynomial gives only bounds for it. In addition, knot Floer homology detects fibered knots while the Alexander polynomial gives obstructions about it. Another advantage of knot Floer homology is that it is computable. Next, we define the hyperbolic volume of the complement of a hyperbolic link. A hyperbolic link is a link in the 3-sphere whose complement (the space formed by removing the link from the 3-sphere) can be given a complete Riemannian metric of constant negative curvature, giving it the structure of a hyperbolic 3-manifold, a quotient of hyperbolic space by a group acting freely and discontinuously on it. The components of the link will become cusps of the 3-manifold, and the manifold itself will have a finite volume. Mostow's Rigidity Theorem (see <cit.>) implies that the hyperbolic structure on a finite volume hyperbolic 3-manifold is unique up to isometry. In particular, any invariants which are defined in terms of the hyperbolic structure of such a manifold are topological invariants of the 3-manifold. One of such invariants is the hyperbolic volume of the manifold. If the manifold has been decomposed into ideal hyperbolic tetrahedra, the volume will simply be the sum of the volumes of the tetrahedra <cit.>. The hyperbolic volume of the complement of a knot is a knot invariant. The knot determinant has multiple equivalent definitions <cit.>. The determinant (K) of a knot K is defined as the absolute value of the determinant of the matrix V +V^T where V is a Seifert matrix of K. Equivalently, it can also be defined as |Δ_K(-1)| (i.e., the absolute value of the Alexander polynomial of K evaluated at -1), which is the same as |J_-1(K)| (i.e., the absolute value of the Jones polynomial of K evaluated at -1). Finally, (K) is equal to the number of elements in the first homology group of the double cover of S^3 branched over K <cit.>. § DATA We used the knot database provided in <cit.>, which contains all prime knots up to 19 crossings. In this work, we considered all knots with 12-17 crossings. 
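As a small illustration of the first definition recalled above, the knot determinant can be computed directly from a Seifert matrix as |det(V + V^T)|. The example matrix is the standard genus-one Seifert matrix of the trefoil and is included purely for illustration.

```python
import numpy as np

def knot_determinant(V):
    """det(K) = |det(V + V^T)| for a Seifert matrix V of K."""
    V = np.asarray(V, dtype=float)
    return int(round(abs(np.linalg.det(V + V.T))))

V_trefoil = np.array([[-1, 1],
                      [ 0, -1]])
print(knot_determinant(V_trefoil))   # 3
```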
There are 1,288 alternating and 888 non-alternating 12-crossing knots; 4,877 alternating and 5,108 non-alternating 13-crossing knots; 19,536 alternating and 27,433 non-alternating 14-crossing knots; 85,262 alternating and 168,023 non-alternating 15-crossing knots; 379,799 alternating and 1,008,895 non-alternating 16-crossing knots; 1,769,978 alternating and 6,283,385 non-alternating 17-crossing knots. We converted the identification of the knots in this database to the Dowker–Thistlethwaite notation using the Regina <cit.> Python package. We then used Snappy <cit.> to compute the following invariants for each knot: the knot Floer homology, the hyperbolic volume, and the knot determinant. We have provided online[<doi.org/10.5281/zenodo.7879466>] the resulting dataset used in this work. Figures <ref>, <ref>, and <ref> show the probability distribution functions of the total rank of knot Floer homology, the hyperbolic volume, and the knot determinant, respectively, for 12-17-crossing knots. Note that for a fixed crossing number, the mean value of a given knot invariant is always lower for the non-alternating knots compared to the alternating knots. § VOLUME AND COHOMOLOGY In all of our experiments, we considered knots of each crossing number (ranging between 12-17) and each type (alternating versus non-alternating) separately. We used least-squares linear regression to fit the hyperbolic volume versus the following two invariants: the logarithm of the total rank of knot Floer homology and the logarithm of the knot determinant. The outputs of the linear regression fits included the estimated slope with its standard error, the intercept with its standard error, the Pearson product-moment correlation coefficient, and the R^2 value. We also plotted the hyperbolic volume versus the number of diagonals that contain nontrivial knot Floer homology groups of a given knot. We did not observe any clear pattern in this case and did not include these results in the manuscript. §.§ Volume and Knot Floer Homology Figure <ref> shows plots of the hyperbolic volume versus the logarithm of the total rank of knot Floer homology for 12-17-crossing knots. We see a linear trend in the case of alternating and non-alternating knots, although the data have more spread in the case of non-alternating knots. Table <ref> summarizes the results of the linear regression fits of the hyperbolic volume versus the logarithm of the total rank of knot Floer homology for knots of a fixed crossing number (ranging between 12-17) and a fixed type (alternating versus non-alternating). Overall, the correlation is strong in both cases, ranging between 0.97 and 0.982 for alternating knots and between 0.903 and 0.95 for non-alternating knots. The R^2 ranges between 0.941 and 0.964 for alternating knots and between 0.815 and 0.902 for non-alternating knots. For alternating knots, the intercept value overall increases with increasing crossing number (except for 14-crossing knots), going from 2.94± 0.01 for 12-crossing knots to 3.989 ± 0.0007 for 17-crossing knots. For non-alternating knots, on the other hand, the intercept value overall decreases with increasing crossing number (except for 14-crossing knots), going from 1.56± 0.03 for 12-crossing knots to 0.124± 0.001 for 17-crossing knots. For alternating knots, the slope value decreases with increasing crossing number, going from 0.1284± 0.0007 for 12-crossing knots to 0.12083 ± 2 × 10^-5 for 17-crossing knots. 
For non-alternating knots, on the other hand, the slope value overall increases with increasing crossing number (except for 14-crossing knots), going from 0.185± 0.002 for 12-crossing knots to 0.19507± 4× 10^-5 for 17-crossing knots. Our experimental data presented in this Section supports the following new conjecture relating the total rank of knot Floer homology and the hyperbolic volume: There exists a constant a∈R_>0 such that log r(K) < a · Vol(K) for all knots K where r(K) is the total rank of knot Floer homology of K and Vol(K) is the hyperbolic volume of K. For each crossing number c between 12-17, we also computed the minimum slope a_ such that all of the knots in Figure <ref> with crossing number c lie below the line with the slope equal to a_ and the intercept equal to 0. The results are shown in Table <ref>. a_ slowly increases with increasing crossing number, going from 0.852 for 12-crossing knots to 0.948 for 17-crossing knots. §.§ Volume and Knot Determinant Figure <ref> shows plots of the hyperbolic volume versus the logarithm of the knot determinant for 12-17-crossing knots. Recall that the total rank of knot Floer homology is the same as the knot determinant for all alternating knots, so the data shown in Figures <ref> and <ref> coincide for alternating knots. Comparing the two Figures, we see that in Figure <ref>, the data for the non-alternating knots have substantially more spread. Table <ref> summarizes the results of the linear regression fits of the hyperbolic volume versus the logarithm of the knot determinant for non-alternating knots of a fixed crossing number ranging between 12 and 17. Overall, the R^2 (which ranges between 0.461 and 0.608) and the correlation (which ranges between 0.679 and 0.78) are much lower than the ones we observed for non-alternating knots in Section <ref>, in which case R^2 ranged between 0.815 and 0.902 and the correlation ranged between 0.903 and 0.95. These metrics suggest that the hyperbolic volume of the knot complement correlates more strongly with the logarithm of the total rank of knot Floer homology compared to the logarithm of the knot determinant. Our experimental data presented in this Section supports the following conjecture relating the knot determinant and the hyperbolic volume: There exist constants a,b∈R such that log(K) < a · Vol(K) +b for all knots K where (K) is the determinant of K and Vol(K) is the hyperbolic volume of K. <cit.> conjectures that there must be the following inequality for all knots K: log(K) /log(J) < a · Vol(K) + b where J is the Jones polynomial of K. §.§ Knot Floer Homology Total Rank Density Non-alternating knots with small total ranks of knot Floer homology usually have very special properties (e.g., they can be torus knots) and are, therefore, of mathematical interest. We performed the following analysis to study the distribution of non-alternating knots with small total ranks of knot Floer homology. We fixed the cut-off value of the total rank of knot Floer homology to be d (say, 50). Next, we ordered the non-alternating knots of a fixed crossing number by the hyperbolic volume and considered all knots whose hyperbolic volume is less than a given value x. Among these knots, we then computed the fraction of knots whose total rank of knot Floer homology was less than d. We called the function defined in this way f(x). Figure <ref> shows a plot of this function when d=50. 
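A minimal sketch of how f(x) and the subsequent sigmoid fit can be computed is given below. It assumes that the hyperbolic volumes and HFK total ranks of the non-alternating knots of a fixed crossing number are already available as arrays (named volumes and ranks here for illustration), uses the cut-off d = 50 as above, and fits the sigmoid with scipy.optimize.curve_fit; it is an illustration of the procedure rather than the exact script used for the reported fits.

import numpy as np
from scipy.optimize import curve_fit

def density_f(volumes, ranks, x_grid, d=50):
    # f(x): fraction of knots with total HFK rank < d among knots with volume < x
    volumes, ranks = np.asarray(volumes), np.asarray(ranks)
    f = np.full(len(x_grid), np.nan)
    for k, x in enumerate(x_grid):
        below = volumes < x
        if below.any():
            f[k] = np.mean(ranks[below] < d)
    return f

def sigmoid(x, L, k, x0, b):
    return L / (1.0 + np.exp(-k * (x - x0))) + b

# volumes, ranks: one entry per non-alternating knot of a fixed crossing number
# x_grid = np.linspace(volumes.min(), volumes.max(), 200)
# f_vals = density_f(volumes, ranks, x_grid)
# keep = ~np.isnan(f_vals)
# (L_fit, k_fit, x0_fit, b_fit), _ = curve_fit(sigmoid, x_grid[keep], f_vals[keep],
#                                              p0=[1.0, 1.0, float(np.median(volumes)), 0.0])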
Having observed a consistent sigmoid-like shape of f(x) for all non-alternating knots with the crossing number ranging between 12-17, we fitted the data with a sigmoid function defined as g(x) = L/1 + exp(-k·(x-x_0)) + b. The best-fit parameters are reported in Table <ref>. The results of our fits lead us to conjecture that such density can be bounded by a sigmoid function in the limit of large crossing numbers. We summarize this finding in the following conjecture: Fix a small cut-off value d of the total rank of knot Floer homology. Let f(x) be defined as the fraction of knots whose total rank of knot Floer homology is less than d among the knots whose hyperbolic volume is less than x. Then for sufficiently large crossing numbers, the following inequality must hold f(x) < L/1 + exp(-k·(x-x_0)) + b where L ,x_0, k, b are constants. § CONCLUSION By analyzing data of 12-17-crossing knots, we experimentally examined and quantified the strength of the following two knot invariant relationships: (i) hyperbolic volume versus the logarithm of the total rank of knot Floer homology and (ii) hyperbolic volume versus the logarithm of the knot determinant. We concluded that relationship (i) is stronger than relationship (ii), with substantially higher R^2 and correlation values. Based on our experimental findings, we formulated a new conjecture that there exists an inequality between the hyperbolic volume and the logarithm of the total rank of knot Floer homology (Conjecture <ref>). We also conjecture that there is an inequality between the hyperbolic volume and the logarithm of the knot determinant (Conjecture <ref>). Further, having carried out computational experiments, we discovered Conjecture <ref> that, for sufficiently large crossing numbers, provides a sigmoid bound for the density of non-alternating knots with small total ranks of knot Floer homology. We have provided online the dataset[<doi.org/10.5281/zenodo.7879466>] and code[<https://github.com/eivshina/patterns-in-knot-floer-homology>] used in this work. We hope that these resources will be useful to anyone interested in further studying and understanding knot Floer homology and the relationships between different knot invariants. § CODE AND DATASET The code is available on GitHub at <github.com/eivshina/patterns-in-knot-floer-homology>. The dataset is archived on Zenodo at <doi.org/10.5281/zenodo.7879466>. § ACKNOWLEDGMENTS Thank you to Zoltán Szabó for advising this work. Thank you to Ian Zemke and Peter Ozsváth for reading the draft of this work. authordate1
http://arxiv.org/abs/2307.01579v1
20230704091859
With Trail to Follow: Measurements of Real-world Non-fungible Token Phishing Attacks on Ethereum
[ "Jingjing Yang", "Jieli Liu", "Jiajing Wu" ]
cs.CR
[ "cs.CR" ]
[ Bernard Bercu, Jérémie Bigot and Gauthier Thurin August 1, 2023 at  ==================================================== With the popularity of Non-Fungible Tokens (NFTs), NFTs have become a new target of phishing attacks, posing a significant threat to the NFT trading ecosystem. There has been growing anecdotal evidence that new means of NFT phishing attacks have emerged in Ethereum ecosystem. Most of the existing research focus on detecting phishing scam accounts for native cryptocurrency on the blockchain, but there is a lack of research in the area of phishing attacks of emerging NFTs. Although a few studies have recently started to focus on the analysis and detection of NFT phishing attacks, NFT phishing attack means are diverse and little has been done to understand these various types of NFT phishing attacks. To the best of our knowledge, we are the first to conduct case retrospective analysis and measurement study of real-world historical NFT phishing attacks on Ethereum. By manually analyzing the existing scams reported by Chainabuse, we classify NFT phishing attacks into four patterns. For each pattern, we further investigate the tricks and working principles of them. Based on 469 NFT phishing accounts collected up until October 2022 from multiple channels, we perform a measurement study of on-chain transaction data crawled from Etherscan to characterizing NFT phishing scams by analyzing the modus operandi and preferences of NFT phishing scammers, as well as economic impacts and whereabouts of stolen NFTs. We classify NFT phishing transactions into one of the four patterns by log parsing and transaction record parsing. We find these phishing accounts stole 19,514 NFTs for a total profit of 8,858.431 ETH (around 18.57 million dollars). We also observe that scammers remain highly active in the last two years and favor certain categories and series of NFTs, accompanied with signs of gang theft. Keywords: Non-Fungible Token, Ethereum, phishing attack, taxonomy, measurement study § INTRODUCTION Blockchain <cit.> is the underlying support technology for cryptocurrency and its platforms, with the characteristics of decentralization, public transparency, and tamper-resistance. Decentralization refers to using distributed nodes to store data through consensus mechanism <cit.>, where each node has the right to verify and record transaction information. Once a transaction is successfully packaged into the block, it becomes irrevocable, which makes the blockchain system tamper-resistant and traceable. By introducing the Ethereum Virtual Machine (EVM) technology <cit.>, Ethereum <cit.> has become currently the largest programmable blockchain platform which supports smart contracts <cit.>. Due to its self-executed contract programs, diverse decentralized applications can be deployed on Ethereum. This is also the reason why Ethereum is referred to as Blockchain 2.0 <cit.>. The native cryptocurrency created by Ethereum, namely Ether, is now the second largest cryptocurrency by market capitalization <cit.>. An Non-Fungible Token (NFT) <cit.> is an ownership record of cryptographic digital assets stored on the blockchain. Each NFT has a unique identifier and metadata which makes it indivisible and unique, and also ensures the traceability of digital assets in the trading process. CryptoPunks, one of the first non-fungible token projects, was launched on the Ethereum blockchain in 2017 <cit.>. 
Until 2021, a JPG file “Everydays: The First 5000 Days" composed of 5000 images, was sold as an NFT for $69.3 million <cit.>, NFTs have once again been brought to the public's attention. The NFT market generated around 24.7 billion dollars worth of organic trading volume in 2022 across blockchain platforms and marketplaces <cit.>. It has also achieved remarkable results in applications of the metaverse <cit.>, gaming and other fields, with a group of star NFT projects such as Bored Ape Yacht Club (BAYC) constantly springing up <cit.>. Their large community base, valuable collectibility, and unit valuations reaching tens of thousands of dollars have attracted a large number of investors. However, at the same time, they have also become targets for phishing scammers <cit.>. Phishing scammers often spread phishing links by attacking official accounts of social media platforms such as Discord <cit.> and Twitter <cit.>, and lure users into revealing their sensitive information or transferring money to scammers. Financial losses caused by NFT phishing scams account for 50 percent of all scams in the field of NFTs <cit.>, with NFT phishing events occurring almost every day. As is shown in Figure 1, on 26 October 2022, the phishing scam gang Monkey Drainer created multiple forged Twitter accounts to promote a fake airdrop and posted links of phishing websites, stealing approximately 700 ETH (about 1 million dollars) worth of cryptocurrencies and NFTs <cit.>. Figure 2 shows a recent case on 28 January 2023, the official Twitter account of Azuki, a well-known NFT project, was hacked, causing its followers to connect to phishing links, resulting in over 122 NFTs being stolen, with a loss of over 780,000 dollars <cit.>. NFT phishing scams continue to become more frequent and increasingly innovative, while existing studies have only focused on one or two types of fraud. Kim et al. <cit.> describes two typical patterns of NFT phishing attacks conducted by stealing login credentials and unlimited approval. Wang et al. <cit.> carries out a systematic and quantitative analysis on the prevalence and disadvantages of the unlimited approval mechanism for ERC20 tokens on Ethereum. Due to the scarcity of existing research on systematic analysis of the patterns of NFT phishing attacks, users lack an in-depth understanding of this ecosystem. Therefore, it is urgent to understand the current means and patterns of existing NFT phishing attacks. To this end, we attempt to fill the research gap through case retrospective analysis of phishing scams that have occurred. This work. Our goal is to systematically summarize and investigate different patterns of NFT phishing attacks, measure their impacts and characterize the modus operandi and preferences of scammers. To this end, we first make efforts to create a pattern taxonomy of NFT phishing attacks (see Section 5). By resorting to scam reports obtained from Chainabuse and security reports of several blockchain security companies, we have conducted case retrospective analysis of four patterns of NFT phishing attack means. The four patterns are: 1) deceptive signature, 2) authorization fraud, 3) stealing login credentials, and 4) fraudulent transaction. To demystify these patterns of NFT phishing attacks, we investigate the tricks and working principles of them. 
Based on 469 NFT phishing accounts collected up until October 2022 from multiple channels, we further measure the characteristics and impacts of NFT phishing scams by log parsing and transaction record parsing (see Section 6). We find these phishing accounts stole 19,514 NFTs for a total profit of 8,858.431 ETH (around 18.57 million dollars). We also observe that scammers remain highly active in the last two years and favor certain categories and series of NFTs, accompanied with signs of gang theft. To the best of our knowledge, we are the first to conduct a systematic and comprehensive case retrospective analysis to reveal real-world NFT phishing attack means, which shed light on identifying phishing scams in NFT trading ecosystem. § BACKGROUND Accounts and transactions <cit.>. Accounts are a core component of the blockchain system, and transactions are records of change the state of an account. Accounts are divided into two categories: 1) external accounts, which are controlled by users who have the corresponding private key; and 2) internal accounts, which are controlled by the code of smart contracts. The address of account in this article is represented by four-character identifiers beginning with 0x. There are two types of transactions depending on the sender of message: Those initiated by external accounts are called “external transactions", which are stored on the blockchain and can be obtained by parsing blocks. The other type, which is sent from a smart contract to another account, is called “internal transactions". Internal transactions are typically triggered by external transactions and are not stored on the blockchain. Smart contracts and event logs. Smart contracts are programs that run on the Ethereum blockchain, and events stored in log data are a mechanism for smart contracts to interact with external users provided by Ethereum. Calling a function of a smart contract to execute a transaction modifies the event and log data of the corresponding transaction (such as account balances), thus changing the state of the smart contract. Ethereum provides event and log registers to track the internal state of smart contracts. Each external transaction contains all the events emitted by the transaction. As shown in Figure 3, if a transaction calls a contract function and the contract emits an event, the event will be recorded in the log. Each event contains multiple topics, making it easy for users to query specific events, with the first topic usually being the hash signature of the event name. Figure 4 shows an example log of the event Transfer. Topic 0 is the hash of the Transfer event 0xddf2. Topic 1 and topic 2 are the addresses of the sender and the receiver, respectively. Topic 3 is the identifier tokenId of the token being transferred. Non-Fungible Token (NFT). Token is the digital proof of ownership which is deployed on the blockchain through smart contracts, and can be circulated and traded on the blockchain. There are multiple token standards <cit.> on Ethereum, which are a set of rules that allow cryptocurrency to be issued on different blockchain protocols. Among these standards, ERC721 <cit.> and ERC1155 <cit.> are used to create NFTs, providing interfaces that allow NFTs to be issued, owned, and circulated. ERC721 allows only one NFT to be minted or transferred per transaction, while ERC1155 allows for batch minting and transfer of multiple types and quantities of NFTs per transaction, which can save a lot of gas fees. 
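The event-log mechanics described above can be checked with a few lines of code. The sketch below, written against web3.py (version 6 naming assumed; earlier releases spell the helper toChecksumAddress), recomputes topic 0 for the ERC-721 Transfer event and reads the sender, receiver and tokenId back out of the indexed topics of a log entry; the log passed in is whatever eth_getLogs or a transaction receipt returns, and how a given provider represents topics may differ slightly.

from web3 import Web3

# topic 0 is the keccak256 hash of the canonical event signature
TRANSFER_TOPIC = Web3.keccak(text="Transfer(address,address,uint256)").hex()
# equals 0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef

def decode_erc721_transfer(log):
    # read sender, receiver and tokenId back out of the indexed topics
    assert log["topics"][0].hex() == TRANSFER_TOPIC
    sender   = Web3.to_checksum_address("0x" + log["topics"][1].hex()[-40:])  # topic 1
    receiver = Web3.to_checksum_address("0x" + log["topics"][2].hex()[-40:])  # topic 2
    token_id = int(log["topics"][3].hex(), 16)                                # topic 3
    return sender, receiver, token_id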
The indivisible and unique nature of NFTs, as well as the immutable, traceable and decentralized characteristics of blockchain technology, guarantee the uniqueness, authenticity and permanence of NFT assets. Token operations. ERC721 and ERC1155 standard interfaces define a set of mandatory and optional API methods provided by token contracts for implementing four types of token operations: mint, transfer, authorize, and burn. Mint. NFTs are created through minting, which is the process of storing digital assets on the blockchain. Each NFT is uniquely identified by a _tokenId, and the corresponding metadata _url for NFT can be obtained through the tokenURI() interface provided by ERC721. Transfer. Transferring an NFT from one account address to another marks a change in ownership of the NFT. This is done by calling the transfer() interface. Authorize. The owner of an NFT can grant control of their digital assets to another account address. The approve() method grants control of a single NFT with _tokenId to the address _to; while the setApprovalForAll() method grants control of all NFTs to the address _operator. Both transferFrom() and safeTransferFrom() can be used to transfer NFTs for which control has been granted. There is a slight difference between the two: calling transferFrom() must confirm that the _to address is able to receive NFTs, otherwise the NFTs will be lost. safeTransferFrom() provides an error message when attempting to transfer NFTs to the zero address 0x0000 (which is the black hole in Ethereum and any assets transferred to it cannot be retrieved). Burn. The operation of token burning is usually done by transferring the corresponding amount of tokens to the zero address 0x0000. § RELATED WORK In this section, we discuss related work on NFT phishing attacks from two aspects: 1) measurement of blockchain token ecosystem, and 2) detection of phishing attacks. §.§ Measurement of Blockchain Token Ecosystem With the continuous development and application of blockchain technology, the blockchain token ecosystem has become increasingly active and diversified, giving rise to various types of tokens such as cryptocurrencies, stablecoins and non-fungible tokens(NFTs). This has attracted numerous researchers, with Chen et al. <cit.> exploring the distribution, circulation and correlation among different tokens in the Ethereum ERC20 token ecosystem, as well as the stability of the token ecosystem through graph analysis. Zheng et al. <cit.> analyzed the transaction data and smart contract code of tokens in the EOSIO blockchain token ecosystem, revealing the structure and characteristics of token relationships and transaction liquidity in the EOSIO token ecosystem. There are also researches on NFT ecosystem security. Das et al. <cit.> discussed security issues such as smart contract vulnerability, exchange security and wallet security in the NFT ecosystem, and proposed corresponding solutions such as auditing smart contracts and using multi-signature wallets, so as to enhance the security of the NFT ecosystem. §.§ Detection of Phishing Attacks Researchers have also proposed many works in detecting phishing attacks. Chen et al. <cit.> proposed a transaction graph-based cascading feature extraction method and a lightGBM <cit.> -based double sampling ensemble integration algorithm to build a detection model for identifying phishing accounts. Wu et al. 
<cit.> proposed a novel network embedding algorithm called trans2vec to extract the features of the addresses and adopt the one-class support vector machine (SVM) <cit.> to detect phishing accounts. Research studies related to NFT phishing detection have also emerged. Roy et al. <cit.> discussed the promotion of NFTs and phishing scams, and tested the effectiveness of existing phishing detection tools against phishing websites. The authors also extracted a series of features for phishing detection based on phishing websites to improve the detection accuracy of existing tools. Kim et al. <cit.> proposed to capture the transaction and social context of NFTs based on graph neural networks to implement an automatic detection system of NFT phishers. However, existing researches lack a systematic review and retrospective analysis of NFT phishing attacks. Different from the above research, our work provides a comprehensive pattern analysis of NFT phishing attacks, and further conducts a measurement study of analyzing the modus operandi, preferences of NFT phishing scammers as well as their economic impact based on on-chain transaction data. § DATA COLLECTION Collecting NFT phishing cases. We manually collect scam cases from Chainabuse and security reports of several blockchain security companies such as SlowMist and PeckShield, and then systematically classify these phishing attack means into four patterns. Crawling NFT phishing accounts. We crawl and filter all the NFT phishing scam addresses up until October 2022 from three websites: Twitter, Etherscan and Chainabuse. Twitter. From reported events of the account of Scam Sniffer on the social media platform Twitter, we obtain 126 NFT phishing scam addresses. Etherscan and Chainabuse. We crawl 6657 and 187 phishing scam addresses respectively from Etherscan and Chainabuse. To identify phishing accounts related to NFT transactions, we use the txs_eth_bfs crawler from BlockchainSpider <cit.> and the interface alchemy_getAssetTransfers provided by the blockchain development platform Alchemy to obtain all phishing scam addresses for ERC721 and ERC1155 transactions. Among them, the phishing addresses with ERC721 or ERC1155 transactions are the NFT phishing scam accounts. After the above process, we label 469 unique accounts as NFT phishing accounts (summarized in Table 1). § CASE RETROSPECTIVE ANALYSIS OF NFT PHISHING ATTACKS In this section, based on the scam reports obtained from Chainabuse and security reports of several blockchain security companies, we conduct a detailed case analysis of NFT phishing attacks and classified them into four types based on their implementation patterns, i.e., 1) deceptive signature, 2) authorization fraud, 3) stealing login credentials, and 4) fraudulent transaction. Figure 5 illustrates the process of phishing attacks in each pattern. To demystify these patterns of NFT phishing attacks, we also investigate the tricks and working principles of them. §.§ Deceptive Signature The transaction signature on Ethereum refers to the process where a user digitally signs the transaction data using their private key to verify the legitimacy and ensure the security of the transaction. However, digital signatures are subject to certain security risks, and attackers typically use fraudulent means to lure users into signing a seemingly normal transaction, but the actual transaction content can result in the loss of users' funds. OpenSea, a decentralized trading marketplace, supports blind signatures when executing NFT transactions. 
Blind signature means the signed message content is pure hexadecimal unreadable data, and the signer cannot see the specific content of the message to be signed. Phishing scammers use blind signatures precisely to make the underlying behavioral logic of a smart contract inaccessible to users who sign it interactively through their wallets, thus signing transactions without knowing exactly what they are signing. The following describes how the scam works. When purchasing an NFT on OpenSea, there are usually two ways to buy it: Buy Now and Make Offer. In the Buy Now mode, the buyer's purchase order is matched with the sell order in the OpenSea database. While in the Make Offer mode, many purchase orders are stored in the database, and the sell order is matched with these purchase orders. Both ways complete the transaction by calling the atomicMatch_ method to match orders of both buyers and sellers. If the attacker purchases NFTs on OpenSea through the normal process, they use the seller's order and signature in the OpenSea central database, that is, the purchase price of NFT is not 0 ETH. Therefore, the attacker needs to bypass the regular process of purchasing NFTs using illegitimate means: 1) The attacker obtains the sell order information hung on OpenSea, sets the sell price in the order information to 0 ETH, and then generates the data to be signed; 2) After obtaining the seller's signature, the attacker constructs the corresponding purchase order information and then performs order matching to purchase NFTs with 0 ETH. The specific steps are as follows. Step 1: The attacker obtains the signature. Tracking the function call stack, the constructed unsigned data by attacker is shown in Figure 6 below. The signature is calculated as: keccak256(“\x19Ethereum Signed Message:\n32", hashOrder(order)). This signature method adds a message prefix “\x19Ethereum Signed Message:\n32" before the order to ensure that the signature cannot be used outside of Ethereum. Afterwards, the complete data with the message prefix is hashed using keccak256, and then signed with a private key. Figure 7 illustrates the function hashOrder involved in the signature. The order structure contains sensitive information such as the order amount and target address. However, the value hashed by keccak256 is just a string of hexadecimal characters, that is a blind signature, which cannot be identified by users. The attacker can arbitrarily set the amount of the basePrice parameter involved in the hashOrder to 0, and set target address to themselves, etc. Step 2: The attacker invokes OpenSea contract to steal NFTs. After obtaining the signature, the scammer, acting as the buyer, performs order matching by calling the atomicMatch_ function of the OpenSea contract, which in turn calls the atomicMatch function as shown in Figure 8. In this function, the parameters are the orders and signatures of the buyer and seller. When the phishing account as a buyer call this function, the buyer's order is first verified, and as long as the order built by the buyer has not expired, it will pass successfully. Next, it verifies the seller's order and signature. As long as the seller's order has not expired or been cancelled, the scammer will be able to buy the victim's NFTs hanging on OpenSea at the price in the signature. 
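The role of the message prefix can be illustrated with a short sketch using the eth-account library (assumed here; the order hash and the private key are throwaway placeholders rather than real OpenSea data). encode_defunct prepends exactly the "\x19Ethereum Signed Message:\n32" prefix to a 32-byte digest before signing, so the wallet can only show the victim the opaque hexadecimal digest; this is why an order whose basePrice has been zeroed out is indistinguishable, at signing time, from a legitimate one.

from eth_account import Account
from eth_account.messages import encode_defunct
from web3 import Web3

# hashOrder(order): in the attack, the basePrice and target fields feeding this
# hash are chosen by the phisher; the victim only ever sees the resulting digest
order_hash = Web3.keccak(text="placeholder-for-hashOrder(order)")   # stand-in, not a real order hash

# encode_defunct prepends "\x19Ethereum Signed Message:\n32" to the 32-byte digest
signable = encode_defunct(primitive=bytes(order_hash))

victim_key = "0x" + "11" * 32                     # throwaway example key, never reuse
signed = Account.sign_message(signable, private_key=victim_key)

print(order_hash.hex())                           # the opaque hex string shown by the wallet
print(signed.signature.hex())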
In a real-world case of deceptive signature, the phisher lures users to visit the phishing link by forging a phishing email informing them to migrate the sell order under the pretext of OpenSea upgrade (as the example shown in Figure 9). Figure 10 shows The interface of wallet as seen by users, where the data to be signed is a hexadecimal string. Figure 11 shows a transaction record of this pattern. After the victim 0x1Ba9 was deceived into signing by phisher Fake_Phishing5169, the phisher bought 29 ERC721 tokens and 4 ERC1155 tokens at a transaction price of 0 ETH. From Etherscan's event logs, the OrdersMatched event triggered by the atomicMatch_ function of the OpenSea contract can be queried, as is shown in Figure 12. §.§ Authorization Fraud The second pattern of NFT phishing is that scammers establish phishing websites to deceive users into granting them authorization. These phishing websites are mainly designed to imitate the domain name and content of official websites of NFT projects in almost the same way, including: 1) changing the top-level domain name of the original official website; 2) adding words or symbols to the main domain name; 3) adding subdomains for obfuscation. Scammers hijack social media platform servers such as Discord, Twitter and Instagram, posting phishing links or fraudulent links for fake airdrops to lure users. After users click on the link and enter the phishing website, they are usually required to connect their wallet to check their wallet balance (to ensure profitability) before being subjected to subsequent fraudulent activities. Specifically, after the user connects the wallet, the phishing website will require the user to execute the approve or setApprovalForAll function in the ERC721 contract to authorize a single NFT or all NFTs to the scammer. Figure 13 shows an example of the interface of wallet which contains the funtion setApprovalForAll. The approve function only requires that the authorized account to not to be the owner of the NFT and that the function caller is the owner of the NFT. Once these two conditions are met, this function calls the _approve function to execute the actual authorization operation, triggering the Approval event to authorize the execution of a single NFT. The setApprovalForAll function operates similarly by calling the _setApprovalForAll function to execute the actual authorization operation. The ApprovalForAll event is triggered by checking that the owner of the NFT is not the authorized account. The mentioned functions setApprovalForAll and _setApprovalForAll are shown in Figure 14. After the scammer obtains authorization, he can call the transferFrom or safeTransferFrom function to transfer the NFTs from the victim's account, as is shown in Figure 15. In this way, the scammer can transfer the NFTs from the victim's account to his own account, successfully committing the fraud. §.§ Stealing Login Credentials Private key and mnemonic are both login credentials, with any one of them, scammers can access the victim's account and transfer NFTs to their own wallets. Private key determines the ownership of digital assets. Possessing the private key allows one to freely dispose of the corresponding digital assets in the wallet. The private key is composed of 64 hexadecimal characters generated by the encryption algorithm. However, due to the complexity of storing and remembering private keys in this form, they are often converted into a form of several common English words to facilitate memory, known as mnemonic. 
In other words, mnemonic is another form of private key. In this pattern, scammers often use the following means: 1) pretending to be administrators of social platforms such as Discord, sending private messages to members in bulk to help solve problems or sending false phishing websites claiming to offer free NFTs to lower users' vigilance and try to lure them into giving away their mnemonics, as the example shown in Figure 16; 2) using malicious npm packages to poison and steal mnemonics of victims. Since the npm package uploaded on npmjs.com does not need to be audited, and the basic information of the package can be filled in arbitrarily, attackers can construct malicious npm packages and forge the basic information of the npm package to confuse users. After victims install the “opensea-wallet-provider" npm package and then use mnemonics, the malicious npm package will automatically send the mnemonics to the attacker's server through network transmission, thus stealing the mnemonics of the victim. After obtaining login credentials, the scammer can randomly transfer all types of tokens from the victim's wallet, including Ether, fungible tokens and NFTs. Figure 17 shows the transaction records of the scammer Fake_Phishing4938 transferring ERC-721 tokens and ERC-20 tokens of the victim 0x925d to his own wallet after acquiring the victim's mnemonics. §.§ Fraudulent Transaction This pattern of fraud is characterized by the victim directly transferring an NFT or NFTs to the scammer's wallet without any other gains. A typical scenario is that the scammer pretends to be a buyer to conduct OTC transaction with the user, the scammer does not complete the payment but tells the user that “his payment has been made and there is a delay in the system", and urges the user to transfer NFT to the scammer's account by faking a screenshot of a successful payment transaction. The transaction behavior exhibited in this fraud pattern is different from that of stealing login credentials. Although the initiator of the transaction in both patterns is shown as the victim, the difference between the two is that in this pattern, it is the victim who actively transfers NFTs to the other side, and the number of NFTs transferred is usually around one to two; whereas in the pattern of stealing login credentials, it is the phisher who accesses the victim's account with the stolen key to transfer NFTs, and usually transfers all types of tokens from the victim's wallet, with the number of tokens transferred being relatively large. § MEASUREMENT STUDY In this section, based on 469 NFT phishing accounts collected up until October 2022 from multiple channels, we perform a measurement study of on-chain transaction data crawled from Etherscan to characterizing NFT phishing scams by analyzing the modus operandi and preferences of NFT phishing scammers, as well as economic impacts and whereabouts of stolen NFTs. §.§ Pattern Analysis By log parsing and transaction record parsing, we classify NFT phishing transactions into one of the four patterns. We obtain the number of victims, phishers, stolen NFTs and transactions involved in each pattern, as is shown in Table 2. A phisher typically lures multiple victims, and steal multiple NFTs in several transactions. Among them, the number of victims and stolen NFTs in the pattern of fraudulent transaction are relatively small, i.e., it is relatively rare for a user to actively transfer an NFT to a phisher without any gain. 
This is also in line with reality, after all, the initiative to transfer precious NFTs to others is a move that most users would think twice about. The two patterns of deceptive signature and authorization fraud, both of which have around 1,800 victims, show that many users still lack basic security awareness of Ethereum transactions, and lack vigilance or may even be completely ignorant of blind signature and authorization operations. Moreover, the number of stolen NFTs in these two patterns are much larger than the number of corresponding victims, which indicates that phishers often covet multiple NFTs in victims' wallets and successfully transfer them after obtaining users' signature and authorization. It must be mentioned that stealing login credentials is still widely used by phishers in NFT trading scenarios, where 8,912 NFTs were stolen, causing serious financial losses. §.§ The Category, Series and Number of Stolen NFTs For NFTs issued by OpenSea, we have access to their categories, including collectibles, domain names, art, gaming, music, etc. Based on the address of token contract, we make statistics on the number and categories of stolen NFTs, as shown in Figure 18. We can see that scammers mostly favor NFTs of the Collectibles category, which is inseparable from the rarity and high value of collectibles. Figure 19 shows the power-law distribution of stolen NFTs, which indicates that a few types have a large number of stolen NFTs, implying that scammers are more inclined to steal certain specific types of NFTs. NFTs of the same series share similar characteristics such as theme and limited availability, but also with unique variations from each other. Therefore, we also subdivide them from the perspective of NFT series to see the difference in the stolen number of them. We select the series with relatively large number of stolen NFTs, as shown in Table 3. We can see that scammers prefer the current popular, high value, rare NFT series. This also indicates from the side that scammers have filtered the victim group before designing phishing scams and are particularly jealous of those with NFTs of high market value. §.§ Active Period of Phishing Accounts We crawl the timestamps of all ERC721 transactions with phishing accounts using txs_eth_bfs. The block number of ERC1155 transactions is crawled using alchemy_getAssetTransfers, and then the timestamps of transactions are crawled using eth_getBlockByNumber. The distribution of active times of phishing accounts is shown in Figure 20. As can be seen from the figure, NFT phishers became active again in early 2021 and showed a continuous trend of high activity, which corresponds to the prosperity of the NFT market. In early 2021, the NFT digital art collection “Everydays: The First 5,000 Days" was sold for $69.3 million <cit.>, bringing NFTs back into the public eyes. By 2022, NFTs had achieved remarkable results in applications of the metaverse, gaming and other fields, and a group of star NFT projects such as BAYC have emerged <cit.>. The value and popularity of NFTs continues to expand, attracting large numbers of investors, and at the same time becoming the target of phishers again. §.§ The Amount of Stolen NFTs Each NFT has a unique identifier that can be traced, and market supply and demand can cause significant fluctuations in the price of NFTs. Therefore, phishers can only sell the stolen NFTs on exchanges to determine their price and make a profit. Phishers usually exchange NFTs for fungible tokens such as ETH and WETH. 
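The transfer records behind these measurements were collected per phishing address; a hedged sketch of one such query, using the alchemy_getAssetTransfers JSON-RPC method named in the data collection section, is shown below. The endpoint, API key and address are placeholders, and the parameter names follow Alchemy's public documentation at the time of writing, so they may change.

import requests

ALCHEMY_URL = "https://eth-mainnet.g.alchemy.com/v2/<YOUR_API_KEY>"   # placeholder endpoint

def nft_transfers_from(address):
    # pull all ERC721/ERC1155 transfers sent by the given (phishing) address
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "alchemy_getAssetTransfers",
        "params": [{
            "fromBlock": "0x0",
            "toBlock": "latest",
            "fromAddress": address,
            "category": ["erc721", "erc1155"],
            "withMetadata": True,        # includes block timestamps per transfer
            "maxCount": "0x3e8",
        }],
    }
    resp = requests.post(ALCHEMY_URL, json=payload, timeout=30)
    return resp.json()["result"]["transfers"]

# transfers = nft_transfers_from("0x0000000000000000000000000000000000000000")  # placeholder address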
For all NFT transactions (including ERC721 and ERC1155) based on phishing addresses, we crawl all external transactions (NFTs exchanged for ETH) and internal transactions (NFTs exchanged for ERC20 tokens) corresponding to these transactions, and calculate the total amount of stolen NFTs, as shown in Table 4. Based on the number of transactions in which stolen NFTs are exchanged for different fungible tokens, scammers tend to exchange most of stolen NFTs into ETH or WETH, which is related to the low risk and stable value of ETH and WETH. §.§ Sell, Gift-out or Remain Generally, there are three whereabouts of stolen NFTs: Sell, Gift-out or Remain. We respectively obtain the statistics of three whereabouts under ERC1155 transaction and ERC721 transaction, including the number of transactions, the number of stolen NFTs and the amount obtained by selling NFTs, as is shown in Table 5. A total of 19,514 NFTs are stolen by scammers. Among them, 31.95% are sold directly, earning a total profit of 8,858.431 ETH. Phishers choose to transfer 57.50% NFTs to their accomplices for further operations. The remaining 10.56% are held by phishers in their own wallets, mainly because the market price of these NFTs does not satisfy them and they choose to wait for better opportunities. We also filter the top-10 phishing accounts that make the most profit of selling NFTs, as shown in Table 6. It is worth noting that the top-5 phishing accounts have been marked by Etherscan as phishing scam addresses. Among them, the most profitable phishing address has exchanged NFTs 1,180 ETH, which is about 2.125 million dollars. §.§ Exchange Distribution Scammers exchange the stolen NFTs into other types of cryptocurrencies on exchanges. We have made statistics on the amount of tokens exchanged on different exchanges, as shown in Figure 21. We also find that most scammers choose to sell NFTs on the OpenSea trading platform, which is related to the fact that OpenSea is the largest NFT trading platform in the world. § CONCLUSION In this paper, we are the first to conduct case retrospective analysis of NFT phishing attacks. We provided an in-depth analysis of fraud techniques and working principles behind each pattern, in order to increase users' understanding of elaborate phishing scams and improve the awareness of risk prevention. Further, we performed a measurement study of on-chain transaction data to characterize NFT phishing scams by analyzing the modus operandi and preferences of NFT phishing scammers, as well as economic impacts and whereabouts of stolen NFTs. We hope that more scholars will pay attention to NFT ecosystem security. unsrt
http://arxiv.org/abs/2307.01171v1
20230703173009
Quantum Neural Estimation of Entropies
[ "Ziv Goldfeld", "Dhrumil Patel", "Sreejith Sreekumar", "Mark M. Wilde" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech", "cs.IT", "cs.LG", "math.IT" ]
APS/123-QED School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14850, USA Department of Computer Science, Cornell University, Ithaca, New York, 14850, USA Institute for Quantum Information, RWTH Aachen University, Aachen, Germany School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14850, USA School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14850, USA Entropy measures quantify the amount of information and correlations present in a quantum system. In practice, when the quantum state is unknown and only copies thereof are available, one must resort to the estimation of such entropy measures. Here we propose a variational quantum algorithm for estimating the von Neumann and Rényi entropies, as well as the measured relative entropy and measured Rényi relative entropy. Our approach first parameterizes a variational formula for the measure of interest by a quantum circuit and a classical neural network, and then optimizes the resulting objective over parameter space. Numerical simulations of our quantum algorithm are provided, using a noiseless quantum simulator. The algorithm provides accurate estimates of the various entropy measures for the examples tested, which renders it as a promising approach for usage in downstream tasks. Quantum Neural Estimation of Entropies Mark M. Wilde July 3, 2023 ====================================== § INTRODUCTION Entropy plays a fundamental role in quantifying uncertainty or the level of disorder in physical systems <cit.>, with its roots tracing back to thermodynamics and statistical mechanics <cit.>. Since its inception, it has been central for studying various fields of physics and beyond, including cosmology, meteorology, physical chemistry, thermodynamics, and information science. One of its critical applications is in explaining the inefficiency of heat-work dynamics and the irreversibility of physical systems via the second law of thermodynamics <cit.>. Moreover, the concept of entropy increasing over time provides a basis for causality and the arrow of time <cit.>, which are fundamental to our understanding of the universe. From an information-theoretic perspective, Shannon developed the concept of entropy to measure the information content of a source <cit.>, and Rényi later extended it to a single-parameter family of information measures <cit.>. Various discrepancy measures that are intimately connected to entropy were introduced throughout the years in the information theory and mathematical statistics communities, encompassing relative entropy <cit.> and f-divergences <cit.>. Quantum counterparts of these measures include the von Neumann entropy <cit.>, quantum Rényi entropy, quantum relative entropy <cit.>, and various quantum f-divergences <cit.>. In particular, the entropy of a quantum state quantifies how mixed it is, which is an important concept in quantum mechanics; cf. <cit.> for a detailed survey of the role of entropy in physics. While the aforementioned measures are essential for determining the amount of information and correlation present in classical and quantum systems, the state of the system is often unknown in practice and is only accessible through sampling. In such situations, one must resort to estimation of the entropy or divergence terms, which is the task at the core of this work. 
§.§ Contributions Here we propose a variational quantum algorithm (VQA) for estimating the von Neumann and Rényi entropies of an unknown quantum state (see (<ref>) and (<ref>) for definitions). Our approach also extends to the measured Rényi relative entropy between two unknown quantum states, which, in particular, captures the measured relative entropy and fidelity as special cases (see (<ref>), (<ref>), and (<ref>), respectively). The proposed VQA is developed based on variational formulae for these entropy measures <cit.>, which represent them as optimizations of certain objectives over the space of Hermitian operators. The key idea behind our approach is to parameterize this optimization domain using two models: (i) a classical neural network to parameterize the spectrum (eigenvalues) of a Hermitian operator, and (ii) a quantum circuit to parameterize the eigenvectors. Having that, we present a sampling procedure for execution on a quantum computer, using which the parameterized objective is approximated by employing independent copies of the unknown quantum states. This results in an optimization problem over the parameter space, which is solvable using classical tools and whose solution yields an estimate of the entropy measure of interest. Our VQA-based estimation technique, termed quantum neural estimation, builds upon the possibility of VQAs realizing quantum speedup <cit.> and is fully trainable using classical optimizers. Analogous variational methods based on neural network parameterization have seen immense success in recent years for estimating classical entropies and divergences <cit.>. The appeal of such neural estimators stems from their scalability to high-dimensional problems and large datasets, as well as their compatibility with modern gradient-based optimization techniques. We expect the proposed quantum neural estimation framework to enjoy similar virtues. Numerical simulations that demonstrate the performance of our method on small-scale examples are provided, revealing accurate estimates and convergence of the training algorithm. We also discuss extensions of our method to quantum relative entropy <cit.> and sandwiched Rényi relative entropy <cit.>, surfacing key challenges and outlining potential avenues to overcome them, which are left for future work. §.§ Literature Review Estimation of the von Neumann and Rényi entropies has attracted significant interest throughout the years. A naïve approach is to use quantum tomography to estimate the entire density matrix and then evaluate the entropic quantities based on the estimate. However, the time complexity of this approach is linear in the dimension of the state (or, equivalently, exponential in the number of qubits) <cit.> and is thus untenable. To address this problem, many quantum algorithms for estimating these quantities have been proposed, and their computational complexities have been investigated under different input models <cit.>. The authors of <cit.> studied the problem of estimating the von Neumann and Rényi entropies, given access to independent copies of the input quantum state. They demonstrated that the sample complexity (i.e., the number of independent copies of the quantum state) of this task grows exponentially with the number of qubits. Entropy estimation under the quantum query model, i.e., where one has access to an oracle that prepares the input quantum state, was explored in <cit.>. 
These works established that the query complexity (i.e., the number of times the oracle is queried) for estimating the von Neumann and Rényi entropies is also exponential in the number of qubits. Variational methods are often used in physics to find approximate solutions to problems that are hard to solve exactly <cit.>, e.g., computing ground-state energies for quantum systems <cit.>. As such, they underlie various computational methods, including the Hartree–Fock method <cit.> and the configuration interaction method <cit.>. VQAs are an extension of classical variational methods, and have become a prominent research area in quantum computing <cit.>. VQAs are hybrid quantum-classical algorithms that are driven by classical optimizers and only call a quantum subroutine for tasks that are (presumably) hard for a classical machine. These algorithms apply the variational principle to find approximate solutions, by first preparing a quantum trial state and then optimizing its parameters using a classical computer. VQAs have now been applied to a variety of problems, including quantum simulation <cit.>, optimization <cit.>, and machine learning <cit.>. A VQA is also at the heart of the quantum neural estimation method developed herein. Neural estimation techniques have been at the forefront of research on classical entropy and divergence estimation. The idea is to parameterize a variational form of the measure of interest by a neural network, approximate expectations by sample means, and optimize the resulting empirical objective over parameter space. The appeal of neural estimators stems from excellent scalability and computational efficiency observed in practice, which is in line with the success of neural nets for language models <cit.>, computer vision <cit.>, and generative modeling <cit.>. Various neural estimators of Shannon's mutual information <cit.>, neural net distances <cit.>, and classical f-divergences <cit.> have been developed and analyzed for their performance. Neural estimators of quantum entropy measures have not appeared in the literature yet, and we make strides towards harnessing this promising methodology in this work (see also the concurrent work discussed below). Note on concurrent work. The independent and concurrent work <cit.> appeared on the arXiv a few days before our preprint was uploaded. It introduced a method for estimating von Neumann entropy reminiscent of ours, but with a few key differences. First, we treat several prominent quantum entropy measures—von Neumann entropy, Rényi entropy, measured relative (Rényi) entropy, and fidelity—while <cit.> only accounts for the von Neumann entropy. Second, we parameterize the space of all Hermitian operators using parameterized quantum circuits and classical neural networks, whereas <cit.> rewrites the variational formula for von Neumann entropy in terms of an optimization over parameterized quantum states and uses only the quantum circuit component. We believe that the incorporation of classical neural networks is crucial for scalable estimation of a broad class of quantum entropy and divergence measures. § QUANTUM ENTROPIES AND DIVERGENCES In this section, we define the quantum entropy and discrepancy measures of interest and present their variational forms, which are subsequently used for the proposed quantum neural estimation method. Throughout, fix d∈ℕ, and let ρ and σ be d-dimensional quantum states. 
§.§ Measured Relative Entropy and von Neumann Entropy The measured relative entropy between ρ and σ is defined as <cit.> _M(ρ‖σ)sup_,( Λ_x) _x∈∑_x∈Tr[Λ_xρ]ln( Tr [Λ_xρ]/Tr[Λ_xσ]), where the supremum is over all finite sets and positive operator-valued measures (POVMs) ( Λ_x)_x∈ indexed by .[Ref. <cit.> defined the quantity with an optimization over just projective measurements, but Ref. <cit.> generalized the definition to include an optimization over all possible measurements.] That is, it is equal to the largest classical relative entropy between the probability distributions (Tr [Λ_xρ])_x ∈𝒳 and (Tr [Λ_xσ])_x ∈𝒳 that result from performing a measurement (Λ_x)_x ∈𝒳 on the respective states ρ and σ.[Here, we identify a probability distribution p on with the ||-dimensional simplex vector (p(x))_x∈.] As a consequence of the data-processing inequality for the classical relative entropy, it suffices to perform the optimization in (<ref>) over rank-one measurements <cit.> (i.e., where each Λ_x, for x∈, is a rank-one operator). A variational form for _M(ρ‖σ) was derived in <cit.> (see also <cit.>), whereby _M(ρ‖σ) =sup_H{Tr[Hρ ]-lnTr[e^Hσ]}, where the supremum is over all d × d Hermitian operators (i.e., H such that H = H^†). Eq. (<ref>) can be understood as a quantum generalization of the Donsker–Varadhan formula for classical relative entropy <cit.>. An important special case of the measured relative entropy is the von Neumann entropy (ρ) -Tr[ρlnρ]. It is obtained from _M(ρ‖σ) by setting σ to be the maximally mixed state π_d I/d, where I is the identity operator. Namely, we have (ρ)=ln d-_M(ρ‖π_d), which follows from <cit.> (see also <cit.>). Inserting the variational representation from (<ref>) into (<ref>), the von Neumann entropy is expressed in the following variational form: (ρ) =ln d- sup_H{Tr[Hρ]-lnTr[e^Hπ_d]}. §.§ Measured Rényi Relative Entropy and Rényi Entropy The measured Rényi relative entropy of order α∈ (0, 1) ∪ (1,∞) between ρ and σ is defined as follows <cit.>: _M,α(ρ‖σ) sup_,( Λ_x) _x∈1/α-1ln∑_x∈Tr[Λ_xρ]^αTr[Λ_xσ]^1-α , where the supremum is over all finite sets and POVMs ( Λ_x)_x∈. Akin to the measured relative entropy case, _M,α(ρ‖σ) is equal to the largest classical Rényi relative entropy between probability distributions of the form (Tr [Λ_xρ])_x ∈𝒳 and (Tr [Λ_xσ])_x ∈𝒳 that result from performing a measurement (Λ_x)_x ∈𝒳 on ρ and σ. As before, it suffices to perform the optimization in (<ref>) over rank-one measurements. The definition in (<ref>) recovers the measured relative entropy _M(ρ‖σ) by taking the limit α→ 1. The following variational representation was established in <cit.>: _M,α(ρ‖σ) =sup_H{α/α-1lnTr[e^( α-1) Hρ ]-lnTr[e^α Hσ]}. Eq. (<ref>) is a quantum generalization of the more recently derived variational formula for classical Rényi relative entropy <cit.> (i.e., the classical case arises by plugging in commuting density operators ρ and σ). Instantiating σ in (<ref>) as the maximally mixed state gives rise to the quantum Rényi entropy _α(ρ)1/1-αlnTr[ρ^α], which admits the form _α(ρ)=ln d-_M,α(ρ‖π_d). Leveraging the variational formula in (<ref>), we obtain _α(ρ)-2mu=-2muln d - sup_H{α/α-2mu--2mu1lnTr[e^(α-2mu--2mu1) Hρ]-lnTr[ e^α Hπ_d]}. Another interesting special case of the measured Rényi relative entropy is obtained for α = 1/2: _M,1/2(ρ‖σ) = -lninf_,( Λ_x) _x∈[∑_x∈√(Tr[Λ_xρ]Tr[Λ_xσ])]^2 = -ln(ρ,σ), where the fidelity of ρ and σ is defined as <cit.> (ρ,σ)√(ρ)√(σ)_1^2. 
The equality in (<ref>) was established in <cit.>, which indicates that the fidelity of quantum states ρ and σ is achieved by a measurement (i.e., minimizing the classical fidelity of the distributions (Tr [Λ_xρ])_x ∈𝒳 and (Tr [Λ_xσ])_x ∈𝒳 over all possible measurements). It thus follows from (<ref>) and (<ref>) that the negative logarithm of the fidelity has the following variational form: -ln(ρ,σ) =-inf_H{lnTr[e^- Hρ ]+lnTr[e^ Hσ]}. Alternatively, by making use of the variational form in <cit.>, we find that the root fidelity has the form √((ρ,σ)) = 1/2inf_H{Tr[e^- Hρ ]+Tr[e^ Hσ]}, which coincides with the expression from <cit.>. § QUANTUM NEURAL ESTIMATION OF ENTROPIES We develop variational estimators for the entropy and measured relative entropy terms defined in the previous section. Our approach assumes access to a black-box procedure for repeatedly preparing the quantum states ρ and σ. The key idea is to parameterize the set of Hermitian operators in (<ref>) using a classical neural network and a quantum circuit. The parameterization procedure, sampling step, and the quantum algorithm to optimize the resulting objective are described next. §.§ Measured Relative Entropy Parameterization. The spectral theorem implies that any Hermitian operator H can be decomposed as H = ∑_i=1^dλ_i |λ_i⟩⟨λ_i|, where λ_1,…,λ_d are the eigenvalues and |λ_1⟩,…,|λ_d⟩ are the corresponding (orthonormal) eigenvectors. Our first step is to approximate the set of eigenvalues using a classical neural network f_w:{1,…,d}→ with a parameter vector w∈ℝ^p, where p∈ℕ. The neural network output f_w(i), for i∈{1,…,d}, serves as a proxy for the eigenvalue λ_i. We keep the architecture of the neural net (viz., nonlinearity, width, depth, etc.) implicit, in order to maintain flexibility of the approach. Next, we approximate the set of eigenvectors using a parameterized quantum circuit U(θ) with a parameter vector θ∈ [0, 2π]^q, where q∈ℕ. In practice, the total number of neural network and quantum circuit parameters, i.e., p+q, should scale like O(poly(log d)), so that the optimization of the ensuing VQA is efficient with respect to the number of qubits specifying ρ and σ. See <cit.> for a similar approach for parameterizing the set of mixed quantum states. The above procedure defines a set of parameterized Hermitian operators, specified as H(w,θ)=∑_i=1^d f_w(i)U(θ)|i⟩⟨ i|U^†(θ), where (w,θ) ∈^p× [0,2π]^q and {|i⟩}_i=1^d denotes the computational basis. Using this parameterization, we approximate the measured relative entropy from below as follows: _M(ρ‖σ)≥sup_w,θ{Tr[H(w,θ)ρ ]-lnTr[e^H(w,θ)σ]}, which follows from (<ref>) and because {H(w,θ)}_w,θ is a subset of all d× d Hermitian operators. To further simplify the parameterized objective and arrive at a form that lands well for sampling on a quantum computer, define the following probability distributions on {1,…,d}: p_θ^ρ(i) Tr[|i⟩⟨ i|U^†(θ)ρ U(θ)] , q_θ^σ(i) Tr[|i⟩⟨ i|U^†(θ)σ U(θ)]. Using (<ref>), (<ref>), and (<ref>), we can write the trace terms from the right-hand side of (<ref>) as Tr[H(w,θ)ρ] =∑_i=1^d p_θ^ρ(i)f_w(i), Tr[e^H(w,θ)σ] =∑_i=1^d q_θ^σ(i)e^f_w(i), where (<ref>) follows because e^β H(w,θ)=∑_i=1^d e^β f_w(i) U(θ) |i⟩⟨ i|U^†(θ), for all β∈ℝ. Inserting (<ref>)–(<ref>) into the right-hand side of (<ref>) yields an objective function that is readily estimated using a quantum computer, as described next. Sampling. We use a quantum computer to sample from the distributions p_θ^ρ and q_θ^σ. As the procedures are similar, we only describe the steps for the former. 
We prepare the state ρ, act on it with the parameterized unitary U^†(θ), and then measure in the computational basis to obtain a sample from p_θ^ρ. Repeating this process n times for each distribution, we obtain the samples i_1(θ),…,i_n(θ) and j_1(θ),…,j_n(θ) from p_θ^ρ and q_θ^σ, respectively. With that, we approximate the trace values in (<ref>)–(<ref>) by sample means, to arrive at the following objective _n(w,θ)1/n∑_ℓ=1^nf_w(i_ℓ(θ))-ln1/n∑_ℓ=1^n e^f_w(j_ℓ(θ)), and the resulting estimator is thus _M^nsup_w,θ_n(w,θ). Algorithm. We present a VQA for performing the optimization in (<ref>). The algorithm uses the parameter-shift rule <cit.> to update the quantum circuit parameters and standard backpropagation <cit.> to update the weights of the neural network. The pseudocode of our algorithm is as follows. §.§ Von Neumann Entropy The von Neumann entropy H(ρ) can be estimated via a similar approach by appealing to (<ref>). This leads to the upper bound (ρ)≤ln d - sup_w,θ{Tr[H(w,θ)ρ ]-lnTr[e^H(w,θ)π_d]}, and we thus take the estimator to be _nln d-sup_w,θ{1/n∑_ℓ=1^nf_w(i_ℓ(θ))-ln1/n∑_ℓ=1^n e^f_w(j̃_ℓ(θ))}, where j̃_ℓ(θ),…, j̃_ℓ(θ) are samples from the distribution q̃_θ, with q̃_θ(i)Tr[|i⟩⟨ i|U^†(θ)π_d U(θ)] = 1/d. Given that the distribution q̃_θ is simply the uniform distribution, a quantum computer is not required to sample from it, and so the term 1/n∑_ℓ=1^n e^f_w(j̃_ℓ(θ)) in (<ref>) can be evaluated exclusively by a classical sampling approach. §.§ Measured Rényi Relative Entropy We estimate the measured Rényi relative entropy via a similar approach using the variational representation in (<ref>). A few minor modifications to the steps in Section <ref> are required, as delineated next. Using the parameterization of Hermitian operators given in (<ref>), we obtain the variational lower bound _M,α(ρ‖σ) ≥sup_w,θ{α/α-1lnTr[e^( α-1) H(w,θ)ρ ]-lnTr[e^α H(w,θ)σ]}. With the same definitions of the distributions p_θ^ρ and q_θ^σ as in (<ref>)–(<ref>), we rewrite the trace terms in (<ref>) as Tr[e^(α-1)H(w,θ)ρ] =∑_i=1^d p_θ^ρ(i)e^(α-1)f_w(i), Tr[e^α H(w,θ)σ] =∑_i=1^d q_θ^σ(i)e^α f_w(i). Following the same sampling step as in Section <ref>, we approximate the expected values in (<ref>)–(<ref>) by sample means and arrive at the estimator _M,α^nsup_w,θ_α^n(w,θ), where _α^n(w,θ)α/α-1ln1/n∑_k=1^ne^(α-1)f_w(i_k(θ)) -ln1/n∑_k=1^n e^α f_w(j_k(θ)). §.§ Rényi Entropy We briefly state the variational estimator for the Rényi entropy. From (<ref>), along with our parameterization procedure, we obtain _α(ρ)≤ln d - sup_w,θ{α/α-1 lnTr[e^(α-1) H(w,θ)ρ] -lnTr[ e^α H(w,θ)π_d]}. The estimator that results from replacing expectations with sample means is thus ^n_αln d -sup_w,θ{α/α-1 ln1/n∑_k=1^ne^(α-1)f_w(i_k(θ)) -ln1/n∑_k=1^ne^α f_w(j̃_k(θ))}, where j̃_ℓ(θ),…, j̃_ℓ(θ) are samples from the uniform distribution. As in the case of estimating the von Neumann entropy, a quantum computer is not required to sample from the uniform distribution, and so the term 1/n∑_k=1^ne^α f_w(j̃_k(θ)) above can be evaluated using a classical sampling approach. §.§ Root Fidelity Lastly, we estimate the root fidelity √((ρ,σ)) via a similar approach using the variational representation in (<ref>). Employing the same parameterization of Hermitian operators given in (<ref>), we obtain the variational upper bound √((ρ,σ))≤1/2inf_w,θ{Tr[e^- H(w,θ)ρ ]+Tr[e^ H(w,θ)σ]}, and then rewrite the trace terms as Tr[e^-H(w,θ)ρ] =∑_i=1^d p_θ^ρ(i)e^-f_w(i), Tr[e^ H(w,θ)σ] =∑_i=1^d q_θ^σ(i)e^ f_w(i), where p_θ^ρ and q_θ^σ are given in (<ref>)–(<ref>). 
Following the same sampling step as in Section <ref>, we approximate the expected values in (<ref>)–(<ref>) by sample means and arrive at the estimator for the root fidelity _ninf_w,θ_F^n(w,θ), where _F^n(w,θ)1/2n∑_k=1^n(e^-f_w(i_k(θ)) + e^ f_w(j_k(θ))). Alternative variational methods for estimating fidelity have been proposed recently <cit.>. Some of the methods rely on Uhlmann's theorem <cit.>, which involves a maximization. However, those approaches require purifications of the states ρ and σ in order to estimate their fidelity. Since state purifications are not easily attainable from samples, the fact that our algorithm avoids this need represents an advantage. Another variational method was proposed in <cit.>. While this algorithm does not require purifications, the estimator employed there for the objective function is biased. Our approach, on the other hand, estimates the objective function from the right-hand side of (<ref>) in an unbiased fashion, as given in (<ref>). § IMPLEMENTATION AND EXPERIMENTS We numerically simulate our quantum neural estimation algorithm and assess its accuracy for estimating the measured relative entropy, von Neumann entropy, measured Rényi relative entropy, and Rényi entropy. For each quantity, we benchmark the performance against an estimator that avoids the use of a classical neural network and instead explicitly stores the eigenvalues. The latter approach is more expressive than the former; however, storing all eigenvalues is infeasible for systems with a large number of qubits. The quantum neural estimation approach circumvents this overhead by parameterizing the eigenvalues with a reasonably-sized classical neural network. The comparison between the two approaches aims to surface the potential performance loss that results for the neural net parameterization, but no noticeable drop in performance is seen in our simulations. For our simulations, we use Pennylane's default-qubit as our quantum circuit simulator <cit.>. We next describe our methodology and then follow with the results. §.§ Methodology All our simulations follow the approach described below. Preparing input quantum states. We consider examples for which the input states are two-qubit mixed states. To prepare these states, we first prepare purifications thereof and then trace out the reference systems. Formally, let ρ_S be a two-qubit mixed state of a system S, and let | ψ⟩⟨ψ|_RS be a purification of ρ_S, such that Tr_R[ |ψ⟩⟨ψ|_RS] = ρ_S, where R is a reference system (for our purposes, it suffices to consider two additional qubits for the reference system). With this approach, we generate two input instances for each quantity of interest. Parameterized quantum circuits. Our algorithm samples from certain distributions by applying a quantum circuit U^†(θ) to the input state (cf. e.g., (<ref>)). We use Pennylane's subroutine to prepare a parameterized quantum circuit with a random structure and then keep this structure fixed throughout multiple runs of a specific simulation. We only change the structure when it is not sufficiently expressive, in the sense that the set of generated unitaries is not comparable to the set of all unitaries. The subroutine creates a parameterized quantum circuit with multiple layers, each built by randomly selecting a subset of qubits and applying single-qubit or two-qubit parameterized quantum gates to them. We use three layers, each with three to four parameterized quantum gates. This results in 9 to 12 total quantum circuit parameters. 
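To make the methodology concrete, the following PennyLane sketch prepares a two-qubit mixed state by entangling the system with a two-qubit reference (a stand-in for the purification step described above), applies a random-structure parameterized circuit playing the role of U†(θ), and measures the system in the computational basis to draw samples from p_θ^ρ. The circuit structure, gate angles, and variable names are illustrative assumptions rather than the exact choices used in the experiments.

```python
import numpy as np
import pennylane as qml

n_shots = 100
# Wires 0,1 carry the system; wires 2,3 act as the reference of the purification.
dev = qml.device("default.qubit", wires=4, shots=n_shots)

# Three layers with four rotation parameters each (12 circuit parameters in total).
theta = np.random.uniform(0, 2 * np.pi, size=(3, 4))

@qml.qnode(dev)
def sample_p_rho(theta):
    # Entangle system and reference so that the reduced state on wires 0,1 is mixed.
    for w in range(4):
        qml.RY(0.4 * (w + 1), wires=w)
    qml.CNOT(wires=[0, 2])
    qml.CNOT(wires=[1, 3])
    # A random-structure ansatz playing the role of U^dagger(theta); the seed inside
    # RandomLayers keeps the gate structure fixed across calls, as described above.
    qml.RandomLayers(theta, wires=[0, 1])
    # Measuring the system wires in the computational basis yields samples from p_theta^rho.
    return qml.sample(wires=[0, 1])

bits = sample_p_rho(theta)                  # shape (n_shots, 2), one row per shot
indices = 2 * bits[:, 0] + bits[:, 1]       # basis index i in {0,...,3} of each outcome
```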
To evaluate the gradient of a given cost function with respect to these parameters, we use the parameter-shift rule <cit.>. Classical neural network. The neural estimator uses a classical neural network to parameterize eigenvalues of Hermitian observables. For our simulations, we consider a 2–10–1 fully connected architecture with sigmoidal activations (here 2 and 1 are the input and output dimensions, respectively). A sigmoid is not applied at the output since it restricts the values to [0,1], while the eigenvalues that the neural networks aim to approximate may be outside this interval. Gradients are evaluated using the PyTorch automatic differentiation subroutine . Number of runs. We estimate expectations of the form Tr[·], which appear in the objective functions of our quantities, using sample means (cf. e.g., (<ref>) or (<ref>)). We find that 100 samples suffice for our experiments, and so we use that sample size throughout. We plot the mean (solid line) and standard deviation (shaded area) of each experiment over 10 runs. §.§ Results Von Neumann entropy. Figure <ref>(a) shows that our estimator, both with and without the neural network, accurately retrieves the von Neumann entropy with only a small error. We note, however, that the convergence rate is quite slow in this case, as it takes around 600 epochs to come within 1% error of the ground truth. To evaluate the latter, we use the closed-form expression of the von Neumann entropy in (<ref>). Measured relative entropy. Figure <ref>(b) plots the quantum neural estimation error curves for the case of measured relative entropy. To compute the ground truth, we use the fact that the measured relative entropy is equal to the quantum relative entropy when the quantum states are positive definite and commute with each other. This perspective allows us to work around the issue that there is no closed-form expression for the measured relative entropy, and use that of quantum relative entropy instead: (ρσ) Tr[ρ (lnρ - lnσ)]. Rényi entropy. Figure <ref>(c) presents simulations for the Rényi entropy of order α = 2.5. The closed-form expression from (<ref>) is used to compute the ground truth. Again, we observe that the neural estimator converges fast and accurately recovers the true Rényi entropy value. Root fidelity. We simulate quantum neural estimation of the root fidelity (see (<ref>)) and show the results in Figure <ref>(d). The ground truth fidelity value is computed from (<ref>). We see that the model that employs the neural network approaches the ground truth faster than the one that directly optimizes the eigenvalues. For the second instance, the neural estimator fluctuates around the ground truth from the onset, while the estimator without the neural net approaches it from above. The effect persist in repeated runs of the simulation. Measured Rényi relative entropy. Figure <ref>(e) plots the quantum neural estimation error curves for the case of measured Rényi relative entropy with α=2.5. To compute the ground truth, we use the fact that the measured Rényi relative entropy is equal to the sandwiched Rényi relative entropy when the quantum states are positive definite and commute with each other. This perspective allows us to get around the fact that there is no closed-form expression for measured Rényi relative entropy and instead use that of sandwiched Rényi relative entropy: _α(ρσ) 1/α-1lnTr[(σ^1-α/2αρσ^1-α/2α)^α]. 
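For concreteness, the following PyTorch sketch implements the sample-mean objective of the von Neumann entropy estimator in (<ref>) together with the 2–10–1 network. The sampled indices stand in for the circuit measurements (here they are drawn from a fixed classical distribution for brevity), the 2-bit encoding of the basis index as the network input is our assumption, and only the network weights are optimized in this simplified loop.

```python
import numpy as np
import torch

d, n = 4, 100
rng = np.random.default_rng(0)

# Stand-ins for the measured samples: i_l(theta) from p_theta^rho (here a fixed
# classical distribution) and j~_l from the uniform distribution associated with pi_d.
i_samples = rng.choice(d, size=n, p=[0.4, 0.3, 0.2, 0.1])
j_samples = rng.integers(0, d, size=n)

def encode(idx):
    # 2-bit binary encoding of the basis index, matching the 2-10-1 input dimension.
    return torch.tensor([[(int(i) >> 1) & 1, int(i) & 1] for i in idx], dtype=torch.float32)

f_w = torch.nn.Sequential(            # the 2-10-1 network; sigmoid only on the hidden layer
    torch.nn.Linear(2, 10), torch.nn.Sigmoid(), torch.nn.Linear(10, 1)
)
opt = torch.optim.SGD(f_w.parameters(), lr=0.05)

for _ in range(300):
    term_rho = f_w(encode(i_samples)).mean()                 # (1/n) sum_l f_w(i_l)
    term_pi = torch.logsumexp(f_w(encode(j_samples)).squeeze(), 0) - np.log(n)
    objective = term_rho - term_pi                           # maximised over w
    opt.zero_grad()
    (-objective).backward()
    opt.step()

print("entropy estimate:", np.log(d) - objective.item())
```

Because the supremum in (<ref>) is only approached from below, the resulting estimate upper-bounds the true entropy after any finite number of optimization steps, up to sampling fluctuations.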
§ CONCLUDING REMARKS AND FUTURE WORK This work proposed a quantum neural estimation algorithm for estimating quantum information measures, spanning various entropies and measured relative entropies. Our estimator utilizes parameterized quantum circuits and classical neural networks to approximate a variational form of the measure of interest, which enables efficient estimation by sampling and optimization. Numerical experiments that validate the accuracy of the proposed approach on simple two-qubit scenarios were provided. Scalability of this approach to larger instances and extensions to other quantum entropies and divergences are important future avenues that we briefly discuss below. Scalability and the barren plateau problem. To achieve the desired scalability, one would have to overcome the barren plateau problem <cit.>. It refers to the phenomenon that the gradients of a cost function with respect to the quantum circuit parameters in variational quantum algorithms tend to become exponentially small as the circuit depth and the number of qubits grow. This can have a significant impact on the performance and scalability of variational quantum algorithms, limiting their ability to find optimal solutions efficiently. Addressing the barren plateau problem is an active area of research, and various promising heuristic approaches towards mitigating its impact have been proposed <cit.>. In the current paper, we did not explicitly investigate the presence of barren plateaus in the context of our quantum neural estimation algorithm. Even though our numerical simulations show good performance, they do not involve sufficiently many qubits for observing the barren plateaus. We speculate that the presence of a classical neural network may alleviate the barren plateau problem. This is due to the fact that the cost functions considered in this work are dependent not only on the quantum circuit parameters but also on the neural network parameters. As a result, while gradients with respect to quantum circuit parameters may vanish exponentially with the number of qubits, this may not be the case for gradients with respect to neural network parameters (which are independent of the number of qubits and the type of parameterized quantum circuit used). We leave for future work an in-depth study of this possibility, and generally, of the optimization landscapes. In addition, we plan to explore potential restrictions and formal limitations that may result from barren plateaus and design regularization methods to mitigate them. Other quantum relative entropies. Another important direction is to extend the quantum neural estimation approach to the relative entropy <cit.> and the sandwiched Rényi relative entropy <cit.>, defined respectively, in (<ref>) and (<ref>). These quantities admit the following variational forms (see <cit.> and <cit.>, respectively): (ρσ) = sup_H{Tr[Hρ]-lnTr [exp( H + lnσ) ]}, _α(ρ‖σ) = sup_H-2mu{α/α-3mu--3mu1lnTr[H-2muρ]-1mu--1mulnTr[ -2mu( H^1/2σ^α-1/αH^1/2)^-3muα/α-1-1mu]-1mu}. The difficulty in estimating these objective functions has to do with the second terms in (<ref>)–(<ref>) due to noncommutativity. A possible approach for evaluating the second term in (<ref>) is to employ the Lie–Trotter product formula, which implies Tr [exp( H + lnσ) ] = lim_ℓ→∞Tr[ (e^H/ℓσ^1/ℓ)^ℓ]. 
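The finite-ℓ behaviour of the product formula in (<ref>) is easy to check classically for small dimensions; the following SciPy sketch (illustrative only) shows the approximation error shrinking as ℓ grows.

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

rng = np.random.default_rng(3)
d = 4
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
sigma = a @ a.conj().T
sigma /= np.trace(sigma).real                       # a full-rank density matrix
b = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (b + b.conj().T) / 2                            # a Hermitian operator

lhs = np.trace(expm(H + logm(sigma))).real
for ell in (1, 10, 100, 1000):
    step = expm(H / ell) @ fractional_matrix_power(sigma, 1 / ell)
    rhs = np.trace(np.linalg.matrix_power(step, ell)).real
    print(ell, abs(lhs - rhs))                      # the error shrinks as ell grows
```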
To realize (a finite approximation of) the right-hand side of (<ref>), one could employ a variant of multivariate trace estimation <cit.> along with quantum singular value transformation <cit.> as a means to realize fractional powers of the density operator σ from block encodings of it. This exploration and the associated error analysis is left for future work. An alternative approach could use the following relations between the measured and unmeasured quantities <cit.>: _M(ρ‖σ) ≤(ρ‖σ) ≤_M(ρ‖σ)+2ln|spec(σ)| , _M,α(ρ‖σ) ≤_α(ρ‖σ) ≤_M,α(ρ‖σ)+2ln|spec(σ)| , with the latter holding for all α∈(0,1)∪(1,∞). In the above, spec(σ) denotes the set of distinct eigenvalues of σ. Now, for n∈ℕ, using the fact that |spec(σ^⊗ n)| ≤ (n+1)^d-1, as well as the additivity of the unmeasured relative entropies, we obtain |_M(ρ^⊗ n‖σ^⊗ n)/n -(ρ‖σ)| ≤2(d-1)/nln(n+1) , |_M,α(ρ^⊗ n‖σ^⊗ n)/n -_α(ρ‖σ) | ≤2(d-1)/nln(n+1) . One can then rewrite (<ref>) and (<ref>) as _M(ρ^⊗ n‖σ^⊗ n) -2mu =-4musup_ω^(n)>0-3mu{Tr[(lnω ^(n))ρ^⊗ n]-2mu--2mulnTr[ω^(n)-2muσ^⊗ n]}, _M,α(ρ^⊗ n‖σ^⊗ n) =sup_ω^(n)>0{α/α-1lnTr[(ω^(n))^α-1/αρ^⊗ n ]-lnTr[ω^(n)σ^⊗ n]}. It follows from operator concavity of x↦ln x and x ↦ x^α-1/α for α > 1, the operator convexity of x ↦ x^α-1/α for α∈ (1/2,1), and the permutation invariance of the tensor-power states ρ^⊗ n and σ^⊗ n, that in (<ref>)–(<ref>) it suffices to optimize only over permutation invariant observables ω^(n). This significantly simplifies the optimization since the parameter space occupied by permutation invariant observables grows only polynomially in the number of copies n (this is related to some recent observations of geometric quantum machine learning <cit.>). However, in spite of this reduction, the error bounds in (<ref>)–(<ref>) scale as d/n, and thus the number of copies n must be larger than the dimension d to obtain a good approximation. As d is exponential in the number of qubits, this approach may not be feasible, and new ideas are needed to address this problem. We thank Paul Alsing, Daniel Koch, Saahil Patel, Shannon Ray, and Soorya Rethinasamy for insightful discussions. DP and MMW acknowledge support from the National Science Foundation under Grant No. 1907615. ZG is partially supported by NSF grants CCF-2046018, DMS-2210368, and CCF-2308446, and the IBM Academic Award. SS acknowledges support from the Excellence Cluster - Matter and Light for Quantum Computing (ML4Q). unsrt
http://arxiv.org/abs/2307.05697v1
20230706092538
A Machine-Learned Ranking Algorithm for Dynamic and Personalised Car Pooling Services
[ "Mattia Giovanni Campana", "Franca Delmastro", "Raffaele Bruno" ]
cs.IR
[ "cs.IR", "cs.LG" ]
A Machine-Learned Ranking Algorithm for Dynamic and Personalised Car Pooling Services Mattia Giovanni Campana, Franca Delmastro and Raffaele Bruno IIT-CNR Via G. Moruzzi 1, 56124, Pisa, ITALY {m.campana,f.delmastro,r.bruno}@iit.cnr.it *This work has been partially supported by the EC under the H2020-SC Lighthouse Project n. 691735, REPLICATE. ============================================================================================================================================================================================================================================================================= empty empty Car pooling is expected to significantly help in reducing traffic congestion and pollution in cities by enabling drivers to share their cars with travellers with similar itineraries and time schedules. A number of car pooling matching services have been designed in order to efficiently find successful ride matches in a given pool of drivers and potential passengers. However, it is now recognised that many non-monetary aspects and social considerations, besides simple mobility needs, may influence the individual willingness of sharing a ride, which are difficult to predict. To address this problem, in this study we propose GoTogether, a recommender system for car pooling services that leverages on learning-to-rank techniques to automatically derive the personalised ranking model of each user from the history of her choices (i.e., the type of accepted or rejected shared rides). Then, GoTogether builds the list of recommended rides in order to maximise the success rate of the offered matches. To test the performance of our scheme we use real data from Twitter and Foursquare sources in order to generate a dataset of plausible mobility patterns and ride requests in a metropolitan area. The results show that the proposed solution quickly obtain an accurate prediction of the personalised user's choice model both in static and dynamic conditions. § INTRODUCTION Car pooling (aka ride-sharing) consists in the sharing of private cars and related journeys with one or more people who have similar mobility needs. Car pooling is commonly considered a sustainable transportation mode since it reduces the number of travelling cars, which is beneficial to lower traffic congestion on roads, the need of parking spaces and total carbon emissions <cit.>. Car pooling is not a novel concept. In the past, local authorities already tried to promote ride-sharing for commuters, starting with the construction of high-occupancy vehicle lanes in early 1980s. However, only recently car pooling started to gain momentum through the development of online and mobile services that allow drivers with spare seats to connect with people wishing to share a ride on very short notice or even en-route (e.g., BlaBlaCar, carpooling.com, gomore.com). In order to be successful, car pooling applications need efficient matching algorithms able to automatically provide suitable and real-time ride matches to their users <cit.>. Typically, proximity in time and space is a necessary condition to have a good match between trips <cit.>. Clearly, private car-pooling providers want to generate revenues and maximise the number of participants. Public providers may also have a societal objective and aim at maximising a system-wide benefit (e.g., reduction of congestion). 
Thus, when determining matches between drivers and riders in a ride-sharing system, it is essential to effectively combine system-wide optimisations with user-based benefits and constraints on the feasibility of ride matches. It is important to recognise that reduced travel costs may not necessarily be the only or most important reason for a user to accept a ride-sharing suggestion <cit.>, especially in case of short distances. Many other aspects may be relevant for the user's choice, and determine whether a particular shared ride would be accepted or not (e.g., safety considerations, social similarity between driver and passengers, etc.). For these reasons, many recommendation systems and incentive models have been recently proposed to increase the success probability of ride-sharing suggestions, for instance on the basis of monetary negotiation <cit.>, measurements of ride enjoyability <cit.> and utility of the user's desired activity at the destination <cit.>. However, the majority of existing solutions assume to know a priori the most relevant reasons to accept or reject a shared trip for each user, typically on the basis of stated-preference travel surveys <cit.>. Furthermore, users' preferences may change over time making the users' profiles difficult to maintain. In this work we propose GoTogether, a dynamic and personalised car pooling solution that is able to learn the individual acceptance model of each user in an automated and transparent (for the user) way. We start by observing that any online car pooling system provides the passengers with an ordered list of the top ride matches to choose from. The user can accept one of the suggested offers (not necessarily the top ranked) or reject all of them. The user's choices over time provide invaluable information on her personal preferences. For this reason, we leverage on machine-learned ranking (also Learning-to-Rank or LR) techniques <cit.> to reconstruct the initially unknown ranking model that is implicitly adopted by each individual user to determine the relevance of a ride match for a specific request of the user. Then, GoTogheter builds a personalised list of recommended shared rides for each user in order to maximise the success rate of the offered ride matches. To investigate the effectiveness of the proposed solution we used a data-driven validation methodology generating a data set that merge topological information with the social characteristics of the visited places and of possible car poolers. To this aim, we extracted data from FourSquare and Twitter online social networks as explained in Section  <ref>. The results show that the proposed solution can obtain an accurate prediction of the personalised user's choice model after a few replications of the same car pooling requests. Furthermore, our learning algorithm quickly reacts to variations of the users' profiles and dynamically adjust the users' ranking models. The rest of this paper is structured as follows. Section <ref> provides an overview of related work. Section <ref> presents GoTogether and the proposed learning framework. In Section <ref>, we present numerical results for the analysed case study. In Section <ref>, we describe GoTogether mobile application, currently in use for a pilot testing. Finally, in Section <ref> we draw our conclusions and present directions for future research. § RELATED WORKS There is large body of work on the carpooling problem. 
A thread of studies focuses on determining the potential of carpooling services in urban transportation scenarios mining big mobility data. For instance, authors in <cit.> estimate the percentage of sharable traffic for private cars in Tuscany by extracting mobility profiles and route similarity between routine trips from GPS-based car trajectories. In <cit.>, the benefits of vehicle pooling for taxi services in New York is quantified as a function of tolerable passenger discomfort. Mobile and online social data is used in <cit.> to assess the potential of ride-sharing for reducing traffic in the cities of Barcelona and Madrid. All the aforementioned studies show that a range from 30% to 70% of existing trips can be typically shared. Many carpooling works are related to the design of efficient algorithms for matching passengers and drivers with similar mobility needs, and scheduling riders' pickup and delivery, in order to maximise the benefits of carpooling (e.g., minimising the total travelled distances or maximising the number of carpoolers) considering a range of constraints and rider preferences (e.g., maximum waiting time or social distance). A survey of optimisation frameworks for the dynamic carpooling problem can be found in <cit.>. For instance, integer programming is used in <cit.> to solve the carpooling problem. Genetic algorithms are proposed in <cit.> to reduce computational times. Frequency-correlated algorithms for rider selection and route merging are developed in <cit.>. A stochastic carpooling model that considers the influence of stochastic travel times is formulated in <cit.>. Recently, other studies focus on designing recommendation systems to improve the acceptance probability of a carpooling match and to encourage participants to use the carpooling service. The authors in <cit.> develops a model for the carpooling problem that incorporates pre-matching information (e.g., previous accepted passengers). Network analytics is used in <cit.> to determine subpopulations of travellers in a given territory with a higher change to create a carpooling community, and the predisposition of users to be either drivers or passengers in a shared car. A measure of enjoyability for a carpooling ride is defined in <cit.> based on social similarities between any two users and tendency of a person to group with similar ones. In <cit.> an route planning algorithm is proposed to generate the top-k personalised routes based on route familiarity for each user. Our work differs from the aforementioned studies because we leverage on the history of user's interactions with the carpooling system to incrementally learn the acceptance model of each user. § GOTOGETHER: A DYNAMIC AND PERSONALISED CAR POOLING SERVICE In this section we describe the system architecture of GoTogether and we present its core functionalities, focusing on the learning algorithm used to infer the users' personal ranking model. §.§ System architecture Figure <ref> illustrates the system architecture of GoTogether, highlighting the operation flows between the user and the system during the ride selection process. The basic component of the system is a spatial database that stores all the offered trips. A passenger's query for a shared trip triggers the ride searching process, which generates a list of possible ride matches. Then, the candidate trips are ranked according to the estimated user' ranking model in order to maximise the success probability of a ride match. 
The passenger's query must provide a series of parameters to define the ride search. Specifically, it is necessary to specify at least: i) the departure place (q_sp), ii) the destination place (q_dp), and iii) the desired departure time (q_dt). Typically, the query can also include the user's preferences for the ride, such us the tolerance for pick-up/drop-off distances, the tolerance for the deviation from the preferred departure time, and desired trip and driver's characteristics. To obtain the list of candidate shared trips GoTogether applies the following procedure. First of all, it defines the pickup area and drop-off area of the potential passenger as the circles of radius δ around the q_sp and q_dp points, respectively[δ is a system parameter that defines the maximum walking distance from the passenger's departure/arrival locations to the pickup/drop-off points.]. In addition, we denote with τ the maximum delay of the shared trip with respect to the desired departure time. Then, for each retrieved ride in the database, say r_i, we compute the shortest paths between q_sp and q_dp. The intersections between the shortest paths originated from q_sp and q_dp and r_i are the pickup points and drop-off points of the passenger, respectively. The pick-up delay is obtained as the difference between the desired departure time of the passenger and the time instant at which the driver reaches the pickup point following r_i. Finally, r_i is a candidate ride match for the passenger's query if the pickup and drop-off points fall within the pickup and drop-off areas, and the pick-up delay is shorter than τ. Figure <ref> illustrates an example of the above-described ride selection process for a given request (solid line is a candidate ride, while dashed line no). The list of candidate rides extracted from the ride database needs to be ordered based on the passengers' preferences. To this end, the user's personal ranking model (also called ranker) is applied to this list to assign a ranking score to each shared trip. Typically, this score is obtained by a combination of utility functions associated with a set of ride features. Clearly, the system does not have the complete and exact knowledge of the user's ranker but it has to rely on an estimated model. In this study, we advocate the use of the history of users' choices to predict the users' rankers. Specifically, we leverage on LR techniques for automatically learning the ranking model, and therefore optimise the car pooling recommendations. As better explained in Section <ref>, when the user accepts a ride from the proposed ranking, the system generates a training data which is then used by the learning algorithm to produce the ranking model. §.§ The ranking model Before describing the GoTogether learning algorithm of the individual ranking models, it can be useful to provide a brief overview of Learning-to-Rank (LR) techniques. §.§.§ Background on Learning-to-Rank Learning-to-Rank (LR) was originally proposed for Information Retrieval (IR) systems, i.e., collections of data objects (text documents, images, trajectories, etc.), which can be queried by multiple users to obtain ranked lists of objects that match the queries with different degrees of relevance. Then, machine learning techniques can be applied to IR systems in order to automatically discover the users' ranking models <cit.>. 
Most of LR methods employ offline supervised learning approach, i.e., rankers are estimated before deploying the IR system using training data that has been created in advance <cit.>. This approach has two main drawbacks: (i) it requests a large amount of manually annotated data (i.e., the training and test sets) needs to be available before deployment, and (ii) it is difficult and costly to track dynamic behaviours in a timely manner. On the contrary, online LR techniques allow the system to learn directly from the users' interactions, e.g. via click actions[Typically, IR systems are web-based and a click corresponds to the user's choice of a data object in the ranked list or to an expression of interest in a specific data object.]  <cit.>. This type of solutions are typically based on reinforcement learning techniques, meaning that the system test new rankers, and learns from users' interactions with the presented rankings. We believe that the online approach is best suited for a car pooling system, since collecting a large amount of training data before the system's deployment is not feasible. Furthermore, car pooling users may show dynamic behaviours, and the rankers should be able to self-adapt during the system lifetime. Two of the most successful approaches to LR are the listwise and pairwise methods, which differentiate on the basis of the type of users' feedbacks and cost functions used to evaluate the performance of the learned ranking functions. More precisely, listwise approaches directly operate on the entire ranked list of data objects associated with a query. In pairwise approaches the learning procedure consider as input pairs of objects, and it assigns a label to the pair representing the relative relevance of the two objects for the user. In this case the LR method learns a classifier that predicts these labels for each possible pair of data objects in the query result. We believe that pairwise LR techniques fits better a car pooling system because each query generates a single output, i.e., the selected trip. Thus, for a pairwise LR approach, it is easier to generate a sufficiently large sequence of training data from a single query, while the system is running. Finally, it is important to point out that online LR methods intrinsically suffer from the exploitation-exploration dilemma. In other words, an LR algorithm needs to both explore new solutions to obtain feedback for effective learning, and exploit what has already been learned to produce results that are acceptable for the users. A well-known method for balancing exploration and exploitation is the ϵ-greedy strategy <cit.>, in which the agent selects at each time step the greedy action[In GoTogether an action is the selection of a ride match.] (i.e., the action with the highest currently estimated value) with a constant probability 1-ϵ, and a random action with probability ϵ. However, in <cit.> it has been shown that implicit feedback can be biased towards the top results displayed to the user. The user may not choose the most relevant ride simply because it is located in the lower section of the proposed list. §.§.§ The learning algorithm GoTogether uses an online and pairwise LR approach to define the learning algorithm, which is inspired by the technique developed in <cit.>. Our algorithm, which is described in Algorithm <ref>, takes as input a user u, the set of candidate rides R fetched from the database with a randomised order for a specific query, the learning rate η, and the probability ϵ∈ [0,1]. 
As better explained in the following, R is the explorative list of ride matches because rides are not yet sorted based on their relevance and their position in the list is random. The algorithm starts by extracting the vector of features 𝐱 = ϕ(r) from each candidate ride r ∈ R. The set of features used in this study to rank the potential ride matches is explained in Section <ref>, but it can be further extended. We associate a weight w_k with each feature x_k∈ϕ(r). Then, the candidate rides are ranked using a weighted linear combination of these features. Specifically, the estimated user's ranker at time step t corresponds to the vector of ranking weights, say 𝐰_𝐭-1, learned so far. Then the learning algorithm construct the exploitative list L by sorting the list R of candidate rides using the estimated ranker. Finally, a recommendation list I is selected from L and R as follows. For each ranking position, the algorithm selects the corresponding ride from the exploitative list L with probability 1-ϵ; otherwise, with probability ϵ, the algorithm selects a ride from the explorative list R. At this point, the system shows the resulting recommendation list to the user, and it observes the user's feedback. Two types of feedbacks are possible. On the one hand, the user can reject the entire recommendation list if the relevance of all shown results is too low (i.e., below a critical relevance threshold). On the other hand, the user can accept one of the proposed rides, not necessarily the one ranked first. If the user accepts a ride, the algorithm infers all the possible labeled ride pairs P using the pairwise labelling method described hereafter. For the sake of presentation clarity, we introduce the operator ≻, and r_i ≻ r_j means that the ride r_i is more relevant than the ride r_j for the user. Let us assume that the recommendation list that is showed to the user contains four ride matches (r_1, r_2, r_3,r_4) and the user accepts ride r_3. Then, we can infer that r_3 ≻ r_1, r_3 ≻ r_2, and r_3 ≻ r_4, but we can not say anything about the relevance between r_1 and r_2. From these observations, three training pairs can be obtained as (r_1, r_3, -1), (r_2, r_3, -1), and (r_3, r_4, +1), where the labels “-1” and `+1” mean that they are negative and positive learning instances, respectively. In other words, the learning algorithm should update the user's ranker in order to prefer a ride similar to r_3 (assigning a higher rank to it) than a ride like r_2 or r_3 the next time the user makes a similar query q. More formally, for each training pair (𝐱_𝐚, 𝐱_𝐛, y) in the training data P, the algorithm measures how much the current model has mis-labeled the examples. If the labels don't match, the weight vector is updated with the unregularized Stochastic Gradient Descent <cit.> update rule: 𝐰_𝐭 = 𝐰_𝐭-1 + η y_i (𝐱_𝐚 - 𝐱_𝐛), where 𝐱_𝐚 and 𝐱_𝐛 are the features vectors of the ride pair. The update rule adjusts the model weights in order to minimise the number of mis-labeled pairs. The parameters η influences the rate of learning but also the convergence speed of the learner and its tuning is essential to avoid excessive fluctuations of the learner weights. § EXPERIMENTAL EVALUATION In order to assess the performance of our learning algorithm we generate synthetic users and mobility traces using real-world data sources. Our dataset, evaluation methodology and experimental results are described in the following sections. 
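Before moving to the experimental evaluation, the following sketch illustrates the recommendation and update steps described above. Feature extraction is abstracted away, all training pairs are oriented with the accepted ride first (so their labels are +1, which is equivalent to the mixed-sign labelling of the example), and the function and variable names are ours rather than the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def recommend(w, features, eps):
    """Build the recommendation list by mixing the exploitative and explorative orders."""
    explorative = list(rng.permutation(len(features)))     # randomised order of candidates
    exploitative = list(np.argsort(-(features @ w)))       # sorted by the current ranker
    chosen, used = [], set()
    for _ in range(len(features)):
        source = explorative if rng.random() < eps else exploitative
        ride = int(next(r for r in source if r not in used))   # skip rides already placed
        chosen.append(ride)
        used.add(ride)
    return chosen

def update(w, features, ranking, accepted_pos, eta=0.1):
    """One pairwise update from the ride accepted at position accepted_pos."""
    x_acc = features[ranking[accepted_pos]]
    for pos, ride in enumerate(ranking):
        if pos == accepted_pos:
            continue
        x_other = features[ride]
        # Perceptron-style step: update only if the pair is currently mis-ranked.
        if w @ (x_acc - x_other) <= 0:
            w = w + eta * (x_acc - x_other)                 # label +1: the accepted ride wins
    return w

features = rng.normal(size=(10, 4))     # ten candidate rides described by four features
w = np.zeros(4)                         # ranker initialised with all weights set to zero
ranking = recommend(w, features, eps=0.2)
w = update(w, features, ranking, accepted_pos=3)            # the user picked the 4th ride
```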
§.§ Data sources Nowadays, Online Social Networks (OSNs) can be effectively used to study different aspects of human behaviours, as well as to obtain information regarding individual mobility patterns. In this study we jointly use Twitter and Foursquare as data sources <cit.>. Specifically, Foursquare is a location-based OSN that motivates registered users to share their check-ins at different places. A check-in is often characterised not only through raw GPS coordinates, but also with contextual information such as the location name (e.g., “Starbucks”) and its semantic description (e.g., coffee shop). Foursquare does not provide an API to fetch the check-ins generated in a given geographic area. However, Foursquare users typically share their check-ins also with other OSNs like Twitter. Furthermore, Twitter provides supplemental information about social connections and interest similarities between users. The following methodology is used to obtain the dataset for our experiments. First, we leverage on the Twitter streaming APIs to get a set of geolocated tweets sharing Foursquare check-ins within the metropolitan area of New York for two weeks at the end of February 2016. In this way, we collect the check-ins of 56 users. For each user, we also download the tweet history and we use the TagMe annotation tool <cit.> to extract the users' topics of interest[We remind that Twitter APIs allow to freely download only the last 3200 tweets of a user.]. Finally, we employ Foursquare APIs to expand the set of topics of each user with the semantic categories of his check-ins. To infer a plausible mobility traces from the users' check-ins, we proceed as follows. First, for each user in our trace we aggregate all check-ins in a single day and we sort them by their timestamp. Then, we use the Google Maps Directions APIs in order to determine the most plausible car trajectory between pairs of consecutive locations in each day. Finally, we prune all the trips with a duration shorter than 20 minutes, which results in maintaining a total of 3679 trips. Figure <ref> shows the spatial distribution of these trips on the analysed geographical area, while Figure <ref> shows the hourly distribution of the trips over a day. Typical peak and off-peak behaviours can be observed. §.§ User's choice model In the transportation field various discrete choice models have been proposed to characterise the probability of individuals choosing a given transportation option from a finite set of alternatives <cit.>. To represent the attractiveness of the alternatives, the concept of utility is typically used, and the observable utility is usually defined as a linear combination of features associated to each transportation alternative. Furthermore, a weight is associated to each feature to quantify the relevance of that feature for an individual. In this study, we use the following four features to rank a ride offer: ∙=1em =-0.5em * the walking distance from the trip origin to the pickup point (d_p); * the walking distance from the drop-off point to the trip destination (d_d); * the pickup delay (t_p); * the social similarity between the driver and the passenger. It is intuitive to recognise that walking distances may have different degrees of utility for each user. In general, the shorter the walking distance and the higher the utility. To represent this variability, we describe the walking distance as the combination of three features, which correspond to three non-overlapping distance ranges. 
Specifically, ranges [0,d^1], (d^1,d^2] and (d^2,d^3] correspond to short, medium and long walking distances, respectively. Then, a weight ω^1 (d_p), ω^2 (d_p), ω^3 (d_p) for the walking distance from the trip origin to the pickup point, and ω^1 (d_d), ω^2 (d_d), ω^3 (d_d) for the walking distance from the drop-off point to the trip destination, are assigned to each one of the previous ranges, respectively. Similarly, we model the pickup delay as the combination of three features, which correspond to three non-overlapping time ranges. Specifically, ranges [0,t^1], (t^1,t^2] and (t^2,t^3] correspond to short, medium and long delays, respectively. Then, a weight ω^1 (t_p), ω^2 (t_p), and ω^3 (t_p) is assigned to each one of the previous ranges, respectively[In the following experiments d^1=1 Km, d^2=2 Km and d^3=3 Km. Similarly, t^1=30 minutes, t^2=60 minutes and t^3=90 minutes.]. The fourth feature is a measure of the common interests between users, as in <cit.>. Specifically, for each pair of users u and v we can build two vectors of topics, say t⃗_u and t⃗_v, from their tweets, where each topic is weighted by its relative importance (i.e., frequency) within the tweets. The similarity between these two vectors is estimated using the cosine similarity, i.e., the cosine of the angle between the vectors of topics: sim(t⃗_u,t⃗_v) = t⃗_u·t⃗_v/||t⃗_u|| || t⃗_v || From the social similarity we can also derive the homophily of user u, say h_u, which is defined as the median of the social similarity between this user and all his friends. If h_u ≈ 1, we say that u is homophilous, while if h_u ≈ -1 we call u heterophilous. In the former case, the user tends to associate and bind with similar others, while in the latter case with individuals that have different interests. Thus, we expect that this property may also influence users' choices of attractive ride shares. Clearly, varying degrees of homophilous and heterophilous behaviours can be identified. Finally, we can express the total utility, for a user u, of a ride r offered by driver v as follows: U_u(r) = h_u · sim(t⃗_u,t⃗_v) + ∑_j = 1^3 ω^j (t_p) ·1{ t_p ∈ [t^(j-1),t^j] } + ∑_x= p,d∑_j = 1^3 ω^j (d_x) ·1{ d_x ∈ [d^(j-1),d^j ]}, where 1{z} is the indicator function of a condition z: 1{z} = 1 if z = true, and 1{z} = 0 otherwise. In other words, for the sake of simplicity the utility associated with each feature is equal to one, but different weights are assigned to each feature. It is important to note that we do not need to learn the utility functions but only their weights. Based on the values of the weights we have defined four categories of typical users: ∙=1em =-0.5em * Homophilous and lazy users (U_1). They have a high level of homophily and they prefer rides with a short walking distance, and a short pickup delay: h_i = 0.9; ω^1 (t_p) = ω^1 (d_p) = ω^1 (d_d) = 0.8; ω^2 (t_p) = ω^2 (d_p) = ω^2 (d_d) = 0.15; ω^3 (t_p) = ω^3 (d_p) = ω^3 (d_d) = 0.05. * Homophilous and active users (U_2). They have a high level of homophily and they are willing to walk longer distances to reach the driver, and wait a longer time: h_i = 0.9; ω^1 (t_p) = ω^1 (d_p) = ω^1 (d_d) = 0.05; ω^2 (t_p) = ω^2 (d_p) = ω^2 (d_d) = 0.15; ω^3 (t_p) = ω^3 (d_p) = ω^3 (d_d) = 0.8. * Heterophilous and lazy users (U_3). They have a low level of homophily, and they prefer a short walking distance and a short pickup delay: h_i = 0.1; ω^1 (t_p) = ω^1 (d_p) = ω^1 (d_d) = 0.8; ω^2 (t_p) = ω^2 (d_p) = ω^2 (d_d) = 0.15; ω^3 (t_p) = ω^3 (d_p) = ω^3 (d_d) = 0.05. 
* Heterophilous and active users (U_4). They have a low level of homophily, and they are willing to walk a long distance and to wait a longer time: h_i = 0.1; ω^1 (t_p) = ω^1 (d_p) = ω^1 (d_d) = 0.05; ω^2 (t_p) = ω^2 (d_p) = ω^2 (d_d) = 0.15; and ω^3 (t_p) = ω^3 (d_p) = ω^3 (d_d) = 0.8. §.§ Evaluation methodology and results To test the performance of the proposed car pooling system we use the following methodology. First, we assume that the users in our dataset are commuters, who perform the same set of ride-sharing requests over several consecutive days. Then, we uniformly distribute the users among the previously described categories (i.e., U_1, U_2, U_3, and U_4). The requests of shared rides for each user are generated as follows. We consider the mobility trace of each user in the dataset and we cluster both the origin and the destination points of the trips in the trace. We assume that two points belong to the same cluster if the distance between them is shorter than 400 meters. Then, we use the centroids of these clusters as the origin and destination points of the queries performed by that user. To avoid searching for unpopular and short routes, we also require that the requested ride is not shorter than 10 km, and that there are at least fifteen ride matches for that query in the mobility database. To assess the load of our car pooling service, Figure <ref> shows the average number of feasible requests generated by each user on an hourly basis. As expected, the car pooling service has a load peak in the middle of the day. Clearly, the number of feasible requests varies across users. To avoid a bias towards users that are much more active than others, we randomly select at most 100 queries per hour for each user from the set of feasible ride-sharing requests. Finally, we assume that the recommendation list consists of ten suggested ride matches, and that the user selects one of the recommended rides only if the ride utility, as defined in Equation (<ref>), is greater than a critical threshold called C. Considering all the feasible rides, the average ride utility per user varies between 0.1 and 2.77. However, the utility values are concentrated in the lowest part of this range: for instance, only 4.26% of ride offers have a utility greater than 2. For this reason, it does not seem reasonable to select high values of the threshold C. Consequently, to evaluate our system we consider three different acceptance thresholds, namely C = 0, 1, 2. Clearly, if C=0 then users always select one of the proposed ride matches in the recommended list. The larger the value of C, the higher the probability of rejecting an offer. §.§.§ Metrics We evaluate GoTogether in terms of two performance metrics. The first one is the average ranking of the best ride match of each query. In principle, an ideal ranker should always place the best ride match at the top of the recommended ride list. The second one is the success probability of a ride request, computed as the fraction of requests for which the user accepts one of the recommended rides, i.e., one minus the ratio between the number of rejected requests (recommendation lists without acceptable offers) and the total number of requests. §.§.§ Static scenario The first set of experiments is carried out in a static scenario in which each user is characterised by a choice model with time-invariant parameters. Then, we evaluate the convergence time of the learning algorithm as a function of the exploration rate ϵ.
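Before turning to the results, the following sketch makes the choice model of Equation (<ref>) and the four synthetic user categories concrete. The weights are those listed above; the single weight list per category exploits the fact that the three weight vectors coincide in each category, while the topic vectors and ride feature values are illustrative.

```python
import numpy as np

D_BINS = [1.0, 2.0, 3.0]          # short / medium / long walking distance (km)
T_BINS = [30.0, 60.0, 90.0]       # short / medium / long pickup delay (minutes)

CATEGORIES = {                    # (homophily, [w_short, w_medium, w_long])
    "U1": (0.9, [0.8, 0.15, 0.05]),   # homophilous, lazy
    "U2": (0.9, [0.05, 0.15, 0.8]),   # homophilous, active
    "U3": (0.1, [0.8, 0.15, 0.05]),   # heterophilous, lazy
    "U4": (0.1, [0.05, 0.15, 0.8]),   # heterophilous, active
}

def bucket_weight(value, bins, weights):
    """Weight of the range [0,b1], (b1,b2] or (b2,b3] that contains value."""
    for b, w in zip(bins, weights):
        if value <= b:
            return w
    return 0.0                    # outside the tolerated range

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def utility(category, d_pickup, d_dropoff, t_pickup, topics_passenger, topics_driver):
    h, w = CATEGORIES[category]
    return (h * cosine(topics_passenger, topics_driver)
            + bucket_weight(t_pickup, T_BINS, w)
            + bucket_weight(d_pickup, D_BINS, w)
            + bucket_weight(d_dropoff, D_BINS, w))

rng = np.random.default_rng(5)
tp, td = rng.random(20), rng.random(20)            # illustrative topic vectors
print(utility("U1", d_pickup=0.5, d_dropoff=1.5, t_pickup=20.0,
              topics_passenger=tp, topics_driver=td))
```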
Figures <ref>, <ref>, and <ref> show the average ranking in the recommendation list of the best ride match for acceptance thresholds equal to 0, 1, and 2, respectively. Note that we do not assume any a priori knowledge of the users' choice model, and the users' rankers are initialised with all weights set to 0. Several important observations can be derived from these results. First, our learning algorithm quickly improves its prediction performance and after a few iterations (i.e., days) it is able to place the best ride among the top positions of the recommendation list. Clearly, the convergence speed to a stationary behaviour depends on the exploration rate ϵ. Generally, the lower the exploration rate, the better the learning performance. Intuitively, this can be explained by observing that in a static scenario the users always apply the same choice model and the ranker continues to learn and adapt to the users' profile. Thus, exploitative actions should be preferred over explorative actions. For instance, a purely random strategy that selects only explorative actions (i.e., ϵ = 1) is unable to learn the users' ranking models and it could even fail to include the best ride match in the list of the ten recommended rides. On the other hand, ϵ=0.2 and ϵ=0.1 provide the highest rankings for the best ride match. Interestingly, a strategy that selects only exploitative actions (i.e., ϵ = 0) performs worse than a strategy that still allows an exploration phase. Furthermore, the learning performance significantly degrades when increasing the value of the acceptance threshold C. In particular, with C=2 the best ride match never ranks higher than sixth, even with the best setting of the exploration rate. A possible explanation of this behaviour is that there are many rejected offers and the learning algorithm has too few examples to learn from. To validate our intuition about the learning degradation, in Figure <ref> we show the average success probability of a ride request for three different cases: i) an ideal ranker that always classifies the best ride match at the top of the recommended list, ii) the fully explorative learning algorithm, and iii) the learning algorithm with the best setting of the exploration rate (i.e., ϵ=0.2). We can observe that, when C=2, even an ideal learning algorithm would obtain a low success probability (around 28%). The performance of our exploitative learning algorithm with ϵ=0.2 is close to that of the ideal solution, even if the best ride match is not top ranked (see Figure <ref>). On the contrary, a random choice of the recommended list (i.e., ϵ=1) leads to worse performance, with a 30% decrease in the success probability. In general, the learning algorithm needs the users' choices to improve its estimate of the users' ranking models. If many ride requests fail, then the learning algorithm gets stuck with inaccurate estimates. §.§.§ Dynamic scenario In this section we consider a dynamic scenario, in which each user periodically changes their choice model. Specifically, every 5 days a user randomly switches to a different user category. Figure <ref> shows the variation of the average ranking of the best ride match in the recommendation list. Clearly, after a radical change of the user's choice model, the ranking model provides wrong estimates. However, our learning algorithm quickly detects this change and correctly updates the weights of the ranking function.
Interestingly, we can observe that after the first change of user category, the learning algorithm appears slightly slower in updating the user's ranking model. This can be explained by observing that the update rule of the learning algorithm may have some inertia. However, the error introduced by the subsequent changes of user category tends to decrease. As for the static scenario, exploration rates ϵ=0.2 and ϵ=0.1 provide the best learning performance. § GOTOGETHER MOBILE APPLICATION In order to experimentally evaluate GoTogether with real users, we developed an Android mobile app implementing the recommendation system described above. It has been recently launched in the CNR campus area in Pisa as a corporate carpooling service. The campus hosts more than 1200 working people, several of them commuting every day. The GoTogether app provides several functionalities: ride search and offer operations, visualisation of the current user's rides (both as a driver and a passenger) as well as the most popular shared routes, the possibility to set a reminder to be automatically notified when a plausible trip is available (see Figure <ref> for some screenshots). Note that the users' profiles can also be characterised in terms of travel preferences (i.e., listen to music, travel with smokers, colleagues, neighbours), in addition to the temporal and spatial constraints for the requested ride. The application is currently available on the Playstore[https://play.google.com/store/apps/details?id=it.cnr.iit.smartmobility] and we are collecting real data from its usage to further evaluate the system. § CONCLUSIONS In this work we have shown that machine-learned ranking techniques can be effectively used to improve the quality of the recommendation system of a car pooling service. In particular, we have designed an online, pairwise learning-to-rank algorithm that leverages on the history of users' selections among the offered rides to predict the individual ranking model of the users. Then, we have used Twitter and Foursquare as data sources to generate a dataset of plausible mobility patterns and ride requests. Finally, we have used this dataset to evaluate our learning algorithm in terms of learning speed and accuracy, both in static and dynamic scenarios. The shown results confirm the validity and robustness of the proposed solution. As future work, we plan to extend our methodology to consider additional data sources and ride features. Furthermore, we are collecting real data from a prototype implementation of the GoTogether system to evaluate our solution in the real world. Another avenue of research is to design a more sophisticated learning framework that could work in multi-modal scenarios in which car pooling is one of the available on-demand mobility services, in addition to, for instance, car and bike sharing. IEEEtran
http://arxiv.org/abs/2307.02135v1
20230705092448
Differentially Private Adversarial Auto-Encoder to Protect Gender in Voice Biometrics
[ "Oubaïda Chouchane", "Michele Panariello", "Oualid Zari", "Ismet Kerenciler", "Imen Chihaoui", "Massimiliano Todisco", "Melek Önen" ]
eess.AS
[ "eess.AS" ]
EURECOM, Sophia Antipolis, France: oubaida.chouchane@eurecom.fr, michele.panariello@eurecom.fr, oualid.zari@eurecom.fr, Ismet.Kerenciler@eurecom.fr, Imen.Chihaoui@eurecom.fr, massimiliano.todisco@eurecom.fr, melek.onen@eurecom.fr

Over the last decade, the use of Automatic Speaker Verification (ASV) systems has become increasingly widespread in response to the growing need for secure and efficient identity verification methods. The voice data encompasses a wealth of personal information, which includes but is not limited to gender, age, health condition, stress levels, and geographical and socio-cultural origins. These attributes, known as soft biometrics, are private and the user may wish to keep them confidential. However, with the advancement of machine learning algorithms, soft biometrics can be inferred automatically, creating the potential for unauthorized use. As such, it is crucial to ensure the protection of these personal data that are inherent within the voice while retaining the utility of identity recognition. In this paper, we present an adversarial Auto-Encoder–based approach to hide gender-related information in speaker embeddings, while preserving their effectiveness for speaker verification. We use an adversarial procedure against a gender classifier and incorporate a layer based on the Laplace mechanism into the Auto-Encoder architecture. This layer adds Laplace noise for more robust gender concealment and ensures differential privacy guarantees during inference for the output speaker embeddings. Experiments conducted on the VoxCeleb dataset demonstrate that speaker verification tasks can be effectively carried out while concealing speaker gender and ensuring differential privacy guarantees; moreover, the intensity of the Laplace noise can be tuned to select the desired trade-off between privacy and utility.

CCS concepts: Security and privacy (Biometrics; Privacy-preserving protocols; Privacy protections); Computing methodologies (Machine learning).

Differentially Private Adversarial Auto-Encoder to Protect Gender in Voice Biometrics Melek Önen
=====================================================================================

§ INTRODUCTION

Voice is a unique biometric trait that is widely recognized for its capability to efficiently and securely identify individuals <cit.>. The use of voice as a biometric modality has been deployed in Automatic Speaker Verification (ASV) systems that have been incorporated into a range of applications like personal database access, credit card authorization, voice banking, and funds transfer.
In speaker verification, also known as speaker authentication, a user claims their identity, and the system evaluates the truthfulness of that claim by comparing the speaker's biometric characteristics with the stored representation of the claimed identity. The system seeks to establish a match between the speaker's features and the claimed identity that surpasses a specified threshold. In instances where a match is not found, the speaker is rejected. Moreover, the voice does not only contain unique identity information but also physiological or psychological aspects like age, gender, emotions, accent, ethnicity, personality, and health condition, referred to as soft biometrics <cit.>, that can be detected automatically using machine learning systems <cit.>. The utilization of these soft biometric traits in conjunction with primary biometrics can provide additional information for the recognition process and improve recognition accuracy <cit.>. Studies in  <cit.> also show that speakers' short recordings can be used to reconstruct their average-looking facial images that embody their physical characteristics such as age, gender, and ethnicity. However, despite their potential use for legitimate processing purposes, soft biometrics are susceptible to malicious utilization. This can occur through unauthorized data processing that puts individuals at risk of privacy concerns such as discrimination, invasive advertising, extortion, and other forms of abuse. As a specific illustration, the finance sector has been shown to exhibit gender-based biases in loan provision <cit.>. This raises concerns regarding the potential existence of discriminatory lending practices that pose greater barriers to women than to men in the pursuit of starting a business enterprise <cit.>. Solutions based on cryptographic primitives <cit.>, while effective, produce completely garbled messages. Data obfuscation techniques, on the other hand, provide a more balanced approach to privacy preservation, protecting sensitive information without rendering the entire message content unrecognizable. Moreover, the voice is recognized as personal and sensitive and is therefore subject to protection under the General Data Protection Regulation (GDPR or Regulation 2016/679)[https://gdpr-info.eu/] together with numerous other data protection legislation, worldwide. The GDPR considers gender as well as a form of personal data and imposes an obligation to safeguard its protection. In light of the increasing concerns surrounding privacy, there has been a growing effort to protect private information like soft biometrics. This effort has led to multiple research initiatives aimed at developing and implementing effective techniques for protecting the privacy of soft biometric attributes <cit.>. Among these, techniques based on the differential privacy (DP) notion <cit.> have received significant attention. Differentially private solutions (also referred to as global or centralized DP) were proposed for more than a decade and regarded as a privacy protection tool for different areas <cit.>. While global DP mechanisms consist of a trusted central party/data curator collecting the users' data, aggregating them, and further protecting the aggregated information by adding some calibrated noise before releasing it to the public, local DP (LDP) solutions  <cit.> protect the input data immediately to prevent the data curator from discovering the real, individual data. The noise is derived from a DP mechanism (e.g. Laplace mechanism). 
In this paper, we aim to address the challenge of protecting gender information while preserving the efficiency of speaker verification. Our approach is based on adding a calibrated noise drawn from the Laplace distribution during the training of an Adversarial Auto-Encoder (AAE) architecture. The noise is injected into the latent space (i.e. the output of the encoder) in order to assure that the model is ϵ-differentially private and to enhance the capability of the adversary in obscuring gender information. The speaker makes use of the private AAE locally to conceal their gender prior to the dissemination of their speaker features for the purpose of authentication. Our experiments conducted on the VoxCeleb 1 and VoxCeleb 2 datasets demonstrate the feasibility of executing speaker verification tasks effectively while disrupting adversarial attempts of gender recognition. To the best of our knowledge, this is the first work that uses differentially private solutions to protect gender information while preserving identity in biometrics. § RELATED WORK In recent years, there has been a proliferation of academic literature pertaining to the topic of soft biometrics protection in biometric recognition systems. A significant number of researchers have centered their efforts on developing technical solutions that are capable of preventing the extraction of soft biometric attributes and are either directly applied to the collected biometric data like face images and voice signals (i.e. at sample level) <cit.> or to the extracted features (i.e. at feature level) <cit.>. Mirjalili et al. <cit.> proposed a Semi-Adversarial Network (SAN) based on an adversarial Convolutional Auto-Encoder (CAE) in order to hide the gender information from face images while retaining the biometric matching utility. In a follow-up work  <cit.>, the same authors introduced an ensemble of SANs that are constituted of multiple auxiliary gender classifiers and face matches that generates diverse perturbations for an input face image. The idea behind this approach is that at least one of the perturbed images succeeds in fooling an arbitrary gender classifier. In <cit.>, Mirjalili et al. also attempted to combine a variety of face perturbations in an effort to improve the generalization capability of SAN models. Despite the successful privacy preservation of gender attributes by the aforementioned techniques, their robustness to arbitrary classifiers is limited. In a more recent study, Tang et al. <cit.> presented an alternative gender adversarial network model that effectively masks gender attributes while preserving both image quality and matching performance. Besides, this model demonstrates the ability to generalize to previously unseen gender classifiers. Further work was proposed by Bortolato et al. <cit.> to leverage the privacy-preservation of face images on the template level also using the AE technique. The authors suggested an AE-based solution that effectively separates gender attribute information from identity, resulting in good generalization performance across a variety of datasets. Additionally, Terhöst et al. <cit.> introduced an Incremental Variable Eliminations (IVE) algorithm that trains a set of decision trees to determine the importance of the variables that are crucial for predicting sensitive attributes. These variables were then incrementally removed from the facial templates to suppress gender and age features while maintaining high face-matching performance. In  <cit.> Melzi et. al. 
extended this approach to protect multiple soft biometrics (i.e. gender, age, and ethnicity) present in facial images. In speech-related literature, Aloufi et al. <cit.> built a Voice Conversion (VC) system that can conceal the emotional state of the users while maintaining speech recognition utility for voice-controlled IoT. The model is based on a Cycle-Generative Adversarial Network (GAN) architecture. Similarly, in <cit.>, the authors introduced a neural VC architecture that can manipulate gender attributes present in the voice signal. This proposed VC architecture involves multiple Auto-Encoders that transform speech into independent linguistic and extra-linguistic representations. These representations are learned through an adversarial process and can be adjusted during VC. On a template level, Noé et al. <cit.> proposed an adversarial Auto-Encoder architecture that disentangles gender attributes from x-vector speaker embeddings <cit.>. The AE is combined with an external gender classifier that attempts to predict the attribute class from the encoded representations. The proposed solution succeeds in concealing gender-related information in the embedding while maintaining good ASV performance. Nonetheless, our experimental findings indicate that using speaker embeddings other than x-vectors, such as those generated by the ECAPA-TDNN model <cit.>, yields inconsistent performance, implying potential challenges in achieving generalization. We hypothesize that this may be attributed to the superior representational capabilities of ECAPA-TDNN embeddings, which have largely superseded x-vectors in recent speaker modeling.

§ GENDER CONCEALMENT

In this section, we present the building blocks of the proposed gender concealment technique. First, we describe the architecture of the AAE and highlight its limitations in the concealment task. Second, we briefly introduce local differential privacy, a concept that is instrumental in improving the gender concealment capabilities of the model. Lastly, we illustrate how to combine the AAE and LDP to obtain a more effective technique for suppressing gender information in speaker embeddings, with a tunable privacy-utility trade-off and sound theoretical guarantees.

§.§ Gender-Adversarial Auto-Encoder

Let x be an embedding representing a speaker identity. The goal of a Gender-Adversarial Auto-Encoder is to process x so as to produce a new embedding that still encodes the identity of that same speaker, but is devoid of any information about their gender. In this section, we describe our implementation of this system, which mostly follows the one proposed in <cit.>. Given an input embedding x ∈ℝ^d, we create a compressed representation of it by means of z = e_θ_e(x) ∈ℝ^l, where e_θ_e(·) is a feed-forward neural network parameterized by θ_e and l < d. The disentanglement of gender-related information from z depends on an adversarial “discriminator” module c_ψ(·) (also a feed-forward neural network) that attempts to infer the gender of the speaker associated with z. During training, we optimize ψ to minimize the objective: ℒ_disc(z, y, ŷ | ψ) = - y log(ŷ) - (1-y) log(1-ŷ), where y ∈{0,1} is the ground-truth gender label (0 for male, 1 for female) and ŷ = c_ψ(z) ∈ [0,1] represents the predicted probability of z having been produced by a female speaker. The suppression of the gender-related information is performed by adversarially training the encoder to “fool” the discriminator, i.e. to make it so that it is not capable of accurately predicting the speaker's gender from z.
In practice, this is achieved by optimizing the same objective as (<ref>), except that the probability predicted by the discriminator is inverted: ℒ_adv(z, y, ŷ | θ_e) = - y log(1 - ŷ) - (1-y) log(ŷ). A decoder feed-forward module d_θ_d(·) attempts to reconstruct the original input embedding from z, producing x̂ = d_θ_d(z). The role of the decoder is to guarantee that the reconstructed embedding can still be used for other tasks, e.g. speaker verification, despite the suppression of gender-related attributes. Thus, the Auto-Encoder is optimized end-to-end according to a further “reconstruction” objective: the cosine distance between the original input embedding and the reconstructed one, ℒ_rec(x, x̂ | ϕ) = 1 - cos(x, x̂). Overall, we aim to strike a balance between privacy protection (optimizing ℒ_disc, ℒ_adv) and utility (optimizing ℒ_rec) of the processed embeddings. The overall system is trained by alternating gradient descent steps on the parameters of the Auto-Encoder ϕ = {θ_e, θ_d} and the parameters of the discriminator ψ: ϕ←ϕ - ∇_ϕ(ℒ_adv + ℒ_rec), ψ←ψ - ∇_ψℒ_disc. At test time, we produce a protected embedding by passing x through the Auto-Encoder: x̂ = d_θ_d(e_θ_e(x)). The privacy preservation capability of the Auto-Encoder is evaluated upon the ability of an attacker to infer the gender of the original speaker from the protected utterance x̂. To measure it, we train an external gender classifier on a separate set of clean embeddings, then report its gender classification performance on the original test embeddings and their privacy-protected version: the difference between the two represents the effectiveness of gender concealment. The utility preservation is evaluated by comparing the performance of the same ASV system on the original and protected speaker embeddings. We perform a preliminary evaluation of the reconstructed speaker embeddings of the Gender-AAE and obtain an Area Under the ROC Curve (AUC) for gender recognition of 98.45 · 10^-2 and an Equal Error Rate (EER) of 1.86% for ASV performance. In order to ensure that the predictions of the gender classifier are truly random, the AUC must be close to 50%. Therefore, it is necessary to strengthen the adversarial performance to conceal gender information. In this work, we investigate the impact of adding noise derived from a Laplace mechanism, which is well-studied for noise addition and calibration and also provides DP guarantees. The latent vectors are locally differentially private thanks to the Laplace mechanism and, subsequently, the reconstructed vectors are differentially private by the post-processing property of DP <cit.>.

§.§ Local Differential Privacy

Local differential privacy plays a crucial role in protecting personal data like soft biometrics and assessing the privacy risks. In this section, we provide a brief description of the underlying concepts of local differential privacy and the Laplace mechanism. Definition. Local differential privacy is a state-of-the-art privacy model and consists in protecting individual input data before its collection. LDP ensures privacy for each user locally (i.e. each individual record is protected rather than the entire dataset as a whole) by adding noise without the necessity of trusting a central authority. Formally, (ϵ)-local differential privacy is defined as follows.
A randomized algorithm ℳ satisfies (ϵ)-LDP if and only if for any pair of input values x, x' ∈𝒳 in the domain of ℳ, and for all possible outputs S ⊆ Range(ℳ), we have: Pr[ℳ(x) ∈ S] ≤ e^ϵ· Pr[ℳ(x') ∈ S], where Pr denotes the probability and ϵ (ϵ > 0) is known as the privacy budget that provides a measure of the privacy loss incurred by the DP algorithm. The smaller the value of ϵ, the smaller the privacy loss (i.e. the stronger the privacy protection) and vice versa. Sensitivity. The sensitivity <cit.>, denoted as Δ f, is a measure of the maximum influence that a single data point can have on the result of a numeric query f. In an LDP mechanism, the sensitivity can be defined as shown in (<ref>), where x and x' represent two adjacent records in a dataset 𝒳 and ‖·‖_1 denotes the ℓ_1 norm of a vector: Δ f = max_x, x' ∈𝒳‖ f(x) - f(x') ‖_1. The sensitivity is the maximum difference between two adjacent records in a dataset and it provides an upper bound on the potential impact of an individual record. It defines the magnitude of the noise needed in order to meet the (ϵ)-LDP requirements. Laplace Mechanism. The Laplace mechanism <cit.> is a widely adopted technique for achieving (ϵ)-LDP. The mechanism works by adding random noise, sampled from the Laplace distribution, to the output of a function in order to obscure any sensitive information about individual records in the database. The amount of noise added is determined by the sensitivity Δ f of the function and the privacy budget ϵ. Formally, given a database 𝒳 and a function f:𝒳→ℝ^d that maps the database to d real numbers, the Laplace mechanism is defined as: ℳ(f(x), ϵ) = f(x) + (n_1, n_2, ..., n_d), where each n_i ∼ Lap(Δ f / ϵ) is drawn from the zero-centered Laplace distribution with scale Δ f / ϵ. The Laplace mechanism has been demonstrated to be particularly effective in the context of numerical queries (e.g. counting queries, histogram queries, and classification queries) with low sensitivity <cit.>. In our work, we use the Laplace mechanism to perturb each component of the latent speaker embedding with noise drawn from the Laplace distribution. This approach successfully conceals the speaker's gender while retaining the usefulness of the feature vectors for ASV tasks.

§.§ Gender-Adversarial Auto-Encoder with Laplace noise

We improve the gender concealment capability of the AAE by applying the Laplace mechanism to the latent space learned by the encoder. More specifically, during training, we pass the latent embedding z through a Laplace layer dp(·) that adds to its input a noisy vector n ∼ Lap(0, Δ f / ϵ). Figure <ref> graphically depicts the system. As there is no prior bound on the ℓ_1-norm of the vector z, we use the same clipping procedure described in <cit.>: it consists in scaling z by a coefficient 1 / max(1, ‖ z ‖_1 / C), where C is the clipping threshold. This method ensures that if ‖ z ‖_1 ≤ C, z remains unchanged, while if ‖ z ‖_1 > C, it is scaled down to have a norm of C. The purpose of the clipping is to ensure that the sensitivity between any pair of vectors z and z' is Δ f ≤ 2C. In practice, one pragmatic approach to determine an appropriate value for C is to compute the median of the norm of unclipped vectors throughout the training phase. Thus, the Laplace layer is defined as dp(z) = z / max(1, ‖ z ‖_1 / C) + n and has no learnable parameters. It is applied before z is passed to the decoder d_θ_d(·) and to the discriminator c_ψ(·). The rest of the forward pass, the loss computation, and the overall training method then proceed as reported in Section <ref>.
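To make the interplay between the ℓ_1 clipping, the Laplace layer, and the alternating adversarial updates concrete, the sketch below implements one training step in PyTorch. It is a minimal illustration rather than the authors' code: variable names are ours, the noise scale 2C/ϵ follows directly from the stated bound Δf ≤ 2C, and the layer sizes, clipping threshold, and optimizer settings are taken from the experimental setup quoted later in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LaplaceLayer(nn.Module):
    """dp(z) = z / max(1, ||z||_1 / C) + n, with n ~ Lap(0, 2C / eps)."""
    def __init__(self, C, eps):
        super().__init__()
        self.C, self.eps = C, eps

    def forward(self, z):
        norm1 = z.abs().sum(dim=1, keepdim=True)         # per-sample l1 norm
        z = z / torch.clamp(norm1 / self.C, min=1.0)     # l1 clipping
        scale = 2.0 * self.C / self.eps                  # sensitivity bound <= 2C
        noise = torch.distributions.Laplace(0.0, scale).sample(z.shape)
        return z + noise

d, l = 192, 64                                            # ECAPA dim, latent dim
enc = nn.Sequential(nn.Linear(d, l), nn.ReLU(), nn.BatchNorm1d(l))
dec = nn.Sequential(nn.Linear(l, d), nn.Tanh())
disc = nn.Sequential(nn.Linear(l, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
dp = LaplaceLayer(C=18.35, eps=15.0)                      # values quoted in the text

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(x, y):
    """x: (B, 192) speaker embeddings; y: (B, 1) float gender labels in {0, 1}."""
    # update the auto-encoder (adversarial + reconstruction objectives)
    z = dp(enc(x))
    loss_adv = F.binary_cross_entropy(disc(z), 1.0 - y)   # inverted labels
    loss_rec = (1.0 - F.cosine_similarity(x, dec(z), dim=1)).mean()
    opt_ae.zero_grad()
    (loss_adv + loss_rec).backward()
    opt_ae.step()
    # update the discriminator on the (detached) protected latents
    loss_disc = F.binary_cross_entropy(disc(dp(enc(x)).detach()), y)
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()
    return loss_rec.item(), loss_disc.item()
```

At inference one would drop `disc` and compute `dec(dp(enc(x)))`, possibly with a different ϵ in the Laplace layer, as discussed below.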
Once the model has been trained, the adversarial module c_ψ(·) is removed. The value of ϵ can be chosen according to the desired balance between privacy protection and the utility of the produced embeddings. The goal of adding Laplace noise into the system is twofold. At test time, its purpose is to provide privacy protection theoretical guarantees as previously described; however, at training time, it also serves as a regularizer for the adversarial module and the decoder. Indeed, our experiments show that applying the Laplace mechanism at training time only (i.e. removing the Laplace layer at test time) is sufficient to greatly enhance the gender concealment capabilities of the system. To better explore the functional difference between the Laplace noise at test and at training time, we perform experiments by independently varying the value of ϵ during the training phase (ϵ_tr) and during the testing phase (ϵ_ts). Our results show that changing ϵ_tr is the most convenient way to roughly set the balance between the empirical capabilities of gender concealment and ASV performance; however, by definition, ϵ_tr does not provide full control over the DP budget of the embeddings at test time. Changing ϵ_ts then offers a flexible means of fine-tuning the privacy budget of the embeddings even once the model has been trained and deployed. In Section <ref>, we show that both ϵ_tr and ϵ_ts are equally relevant in determining the behavior of the system. Privacy Guarantees for the Gender-AAE. One of the main strengths of differential privacy lies in its property of post-processing, which ensures that the privacy guarantee offered by a DP mechanism remains unaltered regardless of the arbitrary computations performed on its output. More formally, let ℳ be an ϵ-differentially private mechanism and g be an arbitrary mapping from the set of possible outputs to an arbitrary set. Then, g ∘ℳ is ϵ-differentially private. Similarly to the work in <cit.>, we add noise to the latent space of the Auto-Encoder during the training, and use the same privacy proof, thanks to the post-processing property: d_θ_d∘ dp satisfies ϵ-DP, and so does the Auto-Encoder d_θ_d∘ dp ∘ e_θ_e.

§ EXPERIMENTAL SETUP AND RESULTS

In this section, we discuss the experimental configurations and results. The feature extractor used to produce the speaker embeddings is the ECAPA-TDNN, whose output feature size is d=192. The modules of the proposed encoder and decoder models are single-layer fully-connected neural networks and the gender classifiers (i.e. discriminator and external) are two-layer fully-connected neural networks. The encoder is followed by a ReLU activation and batch normalization, and the decoder is followed by a tanh activation function. We set the latent space to be of size l=64. The adversarial classifier is composed of two fully-connected layers: the first one has 64 input units with a ReLU activation function, and the second one has 32 input units with a sigmoid activation function. An external gender classifier, used by an attacker to infer gender, is employed to assess privacy protection and has the same architecture as the discriminator, with 192 input units in the first layer and 100 input units in the second layer. The ASV assessment is done by first creating a model for each speaker; trial scores are then obtained by comparing trial embeddings with the respective speaker models by means of cosine similarity. The training process is carried out with the Adam optimizer using a learning rate of 1 · 10^-3 and a minibatch size of 128.
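The ASV assessment described above (cosine scoring of trial embeddings against speaker models, summarized by the EER) can be sketched as follows. The speaker-model construction and the trial protocol are simplified assumptions here; only the cosine scoring and the EER definition follow the text.

```python
import numpy as np

def cosine_scores(models, trials):
    """Row-wise cosine similarity between speaker models and trial embeddings."""
    a = models / np.linalg.norm(models, axis=1, keepdims=True)
    b = trials / np.linalg.norm(trials, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def equal_error_rate(scores, labels):
    """EER: point where false-acceptance and false-rejection rates cross.
    labels: 1.0 for target (same-speaker) trials, 0.0 for non-target trials."""
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]
    n_tar, n_non = labels.sum(), len(labels) - labels.sum()
    frr = np.cumsum(labels) / n_tar                 # targets at or below threshold
    far = (n_non - np.cumsum(1.0 - labels)) / n_non  # non-targets above threshold
    idx = np.argmin(np.abs(far - frr))
    return 0.5 * (far[idx] + frr[idx])

# toy usage with random embeddings of ECAPA-TDNN dimensionality (d = 192)
rng = np.random.default_rng(0)
models, trials = rng.normal(size=(1000, 192)), rng.normal(size=(1000, 192))
labels = rng.integers(0, 2, size=1000).astype(float)
print(f"EER = {100 * equal_error_rate(cosine_scores(models, trials), labels):.2f}%")
```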
The training dataset of the AAE is a subset of VoxCeleb2 <cit.> development partition (397032 segments per class). The testing is conducted using a subset of the VoxCeleb1 <cit.> test partition (2900 segments per class). The external sex classifier is trained using a subset of the VoxCeleb1 development partition (61616 segments per class). To select the clipping threshold C, we compute the median of the norm of all unclipped z vectors during the training, which is C=18.35. We initially explore the behavior of the system by setting ϵ_ts=∞ (i.e. no DP protection) and for increasing values of ϵ_tr: Figure <ref> shows the achieved ASV EER and gender classification AUC. We experimentally determine the noise scale and prioritize higher ϵ_tr resolution for the region with significant privacy/utility changes, while lower resolution suffices for regions with minor variations. As expected, privacy and utility scores inversely mirror one another. Specifically, ϵ_tr = 15 seems to strike a satisfactory balance between the two, resulting in a 0.55 gender classification AUC while achieving an ASV EER of 8.1%. For comparison, the same gender classifier and ASV system obtain an AUC of nearly 1 and an EER of 1.1% on the original ECAPA embeddings, respectively. We pick the model weights trained with ϵ_tr = 15 and ϵ_tr = 20 and experiment with values of ϵ_ts < ∞ to add DP protection to the speaker embeddings. Setting ϵ_ts = ϵ_tr further enhances the level of gender concealment: AUC scores drop from 0.55 to 0.50 (from 0.76 to 0.55 respectively) for ϵ_tr = ϵ_ts = 15 (ϵ_tr = ϵ_ts = 20 respectively). However, ASV EER degrades by around 20 percentage points in both scenarios. By increasing ϵ_ts by 20 units, it is possible to restore the ASV EER to around 10% (for both model versions) while achieving satisfactory AUC values of 0.55 and 0.68 for ϵ_tr=15 and ϵ_tr=20, respectively. In general, these results show the level of flexibility that the system can achieve even after training, all while providing DP guarantees over the produced embeddings. Informal experiments run with ϵ_tr=∞ have resulted in rapid erasure of all meaningful information from the speaker embeddings even for high values of ϵ_ts: this is indicative of the relevance of including the Laplace noise during training for the DP protection to be applicable at test time. § CONCLUSIONS We have presented an AE-based system to conceal gender-related information in speaker embeddings while retaining their utility for a speaker verification task. We perform the concealment by means of an adversarial game between an Auto-Encoder and an external gender classifier, and we improve upon previous work by introducing a Laplace-noise–addition layer within the architecture. The Laplace noise regularizes the training and allows for more robust gender concealment, while also endowing the output speaker embedding with DP guarantees at inference time. The tuning of the ϵ parameter of the Laplace layer allows selecting the desired balance of privacy protection and utility, even after the training process has finished. Experimental results show that the proposed solution is effective in preserving gender privacy while maintaining utility for speaker verification tasks. Furthermore, the flexible trade-off between privacy and utility provided by our approach can be adapted to individual needs, making it a promising solution for privacy-preserving applications. 
This work is supported by the TReSPAsS-ETN project funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860813. It is also supported by the ANR-DFG RESPECT project. ACM-Reference-Format
http://arxiv.org/abs/2307.00737v1
20230703034800
Analytical Constraints on the Radius and Bulk Lorentz Factor in the Lepto-Hadronic One-Zone Model of BL Lacs
[ "ZhiPeng Ma", "Kai Wang" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Astronomy, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China 0000-0003-4976-4098] Kai Wang Department of Astronomy, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China In this work, we study the parameter space of neutrino-emitting BL Lacs under the framework of the one-zone lepto-hadronic model. We show that constraints on the model come from various aspects of observations such as the variability timescale of blazar flares, gamma-ray opacity and the spectral energy distribution of electromagnetic emission, as well as the inferred neutrino emissivity of the blazar. We apply our method to two potential neutrino sources, i.e., TXS 0506+056 and PKS 0735+178, which are BL Lacs. Then, we explore and summarize the allowed range of parameters such as the bulk Lorentz factor and the blob radius under different distributions of injected protons. We find that the parameter space that is available to explain the BL Lac–neutrino association is sensitive to the proton distribution, and usually, an injected proton luminosity significantly exceeding the Eddington luminosity is required for both sources. Our results suggest that the simple lepto-hadronic one-zone model may not be a reasonable interpretation for BL Lac–neutrino associations. § INTRODUCTION In September 2017, a high-energy neutrino event, IceCube-170922A, with the energy of ∼290 TeV was detected by the IceCube Observatory <cit.>. With good angular resolution, it had a 3σ chance correlation with object TXS 0506+056, a BL Lac object which was in a gamma-ray flaring state. Extensive efforts have been made to understand their  <cit.>. The event IceCube-170922A (hereafter IC170922A) and the following electromagnetic observations, for the first time, directly indicate that blazars are potential sources of high-energy neutrinos <cit.>. Blazars are the most extreme form of active galactic nuclei (AGN), which have their jet pointing to observers approximately <cit.>. As one of the most powerful astrophysical persistent objects, blazars are widely considered as source candidates for the origin of extragalactic high-energy cosmic rays and neutrinos <cit.>. They are sub-classified as flat spectrum radio quasars (FSRQs) and BL Lacs objects depending on differences between their optical emission line features <cit.>. The most significant characteristic of the spectral energy distributions (SEDs) of such objects is the two-hump structure, with different interpretations in various models. In leptonic models, the low-energy hump of the SED is considered as a result of the synchrotron radiation of relativistic electrons in the jet, while the high-energy hump originates from inverse Compton (IC) scattering between high-energy electrons and low-energy photons from an external photon field (external Compton, EC) <cit.> or the electron-synchrotron radiation field (synchrotron-self Compton, SSC) <cit.>. Such conventional models achieve great success in explaining the SED of blazars in the literature <cit.> but fail to explain neutrino emission because of the lack of hadronic processes. Hence, hadronic processes, i.e., the photomeson process (pγ) or the proton-proton collision process (pp), have to be involved to be responsible for the neutrino emissions of blazars by considering the accelerated proton component <cit.>. 
In these models, the low-energy bump of the SED is still dominated by the synchrotron radiation of accelerated electrons, as is the same as for leptonic models, while the high-energy hump could be from the superposition of the EC, the SSC, proton synchrotron radiation and/or the cascade emission of secondaries of hadronic processes <cit.>. Based on the number of emission regions, the theoretical models can be classified as a one-zone model and a two-zone (or multi-zone) model <cit.>. More parameters are invoked in the two-zone model to explain the multi-messenger observations (electromagnetic radiations and neutrinos) of blazars. Here, we consider a simple one-zone model with fewer parameters to explore the allowed parameter space by comparing the theoretical expectations with the observations of electromagnetic radiation and high-energy neutrinos. Then, the conclusions can help us to differentiate whether the one-zone model is valid or whether the two-zone model has to be invoked. For further simplification, we consider a spherical blob region in the jet where the accelerations and interactions of all electrons and protons take place. Such a scenario is the so-called one-zone lepto-hadronic model, which has been well developed to explain the SED and the neutrino emission of blazars. For the case of TXS 0506+056, <cit.> modeled the SED and neutrino flux in both the proton-synchrotron scenario and the pγ scenario by using the developed method for hadronic processes in blazars (see <cit.>). Their solutions have degeneracy in parameters, especially in a big parameter space of magnetic field strength B and blob radius R. Recently, <cit.> developed a time-dependent code that follows the time evolution of the isotropic distribution functions of all particles involved. The parameters in most numerical calculations can vary by several orders of magnitude <cit.>. This uncertainty may lead to quite different physical conditions of blazar jets and create difficulties for us to study the AGN environment. Here, we provide an analytical method to explore the viable parameter space and study the consequent constraints given by the observations. <cit.> constrained the location of the blob and the bulk Lorentz factor through the electromagnetic observations of blazars in the framework of the leptonic process. However, the observations of high-energy neutrinos from blazars could be another criterion to explore the model parameter space. In this paper, we adopt a similar analytical method to constrain the radius of the blob and the bulk Lorentz factor in the framework of the lepto-hadronic one-zone model for BL Lacs by focusing on the pγ scenario. We use a combination of constraints from the observed variability timescale of flare t_ var, the SSC luminosity L_ SSC, the optical depth for gamma-ray photon τ _γγ, the gamma-ray photon luminosity L_ pγ and the neutrino luminosity L_ν produced by the hadronic process. Then, we apply our method to TXS 0506+056 and PKS 0735+178 for further studies. We emphasize that our method is based on the simple lepto-hadronic model without an external photon field (such as radiation from a broad line region or dust torus) and FSRQs have a non-negligible radiation component from BLR; hence, we only select BL Lacs as potential sources. The cosmological parameters H_0=69 km s^-1 Mpc^-1, Ω_M=0.286 and Ω_Λ=0.714 <cit.> are applied. This paper is organized as follows. We demonstrate our derivation in Section <ref>. Then, we present the application to TXS 0506+056 in Section <ref>. 
A further study on another BL Lacs, i.e., PKS 0735+178, is presented in Section <ref>. Our results and conclusions are discussed in Section <ref>. § DERIVATION OF CONSTRAINTS First, we set the rules for notation: Physical quantities measured in a co-moving frame will be indicated by a prime, those in an AGN frame will be indicated by a superscript ‘star’ (e.g.., t' and t^*) and those in an observer frame will be indicated by nothing. The peak luminosity of the SED for all kinds of radiation is L_ i=ν L_i,ν, in contrast, the bolometric luminosity is presented as L_i, bol=∫ L_i, ν dν. We assume a spherical uniform emission region (blob) in a jet with radius R', a propagating velocity β =v/c (c is the speed of light) and a Lorentz factor Γ =( 1-β ^2 ) ^-1/2. For an observer on Earth, we should use the Doppler factor δ _ D=[ Γ( 1-βcosθ _ obs) ] ^-1 for transformations. The relation between δ _ D and Γ depends on the observer's viewing angle θ _ obs with respect to the blob propagating direction and opening angle θ of the blob, see more details in Section 2 of <cit.>. §.§ Constraint from Variability Timescale In order to avoid temporal integrations over different portions of the blob, the timescale for variation in the radiation, t'_ var, should be longer than the light crossing time t'_ lc∼ R'/c <cit.>, i.e., R'≲ c· t'_ var=cδ _ Dt_ var/( 1+z ) . Then, Γ≳R'( 1+z )/ct_ var·( δ _ D/Γ) ^-1 , where t_ var is the variation timescale of the light curve, varying from days to months for different sources. §.§ Constraint from SSC Luminosity In a p-γ scenario, gamma-ray photons from high-energy humps are mainly produced by the SSC process with a possible additional contribution from hadronic interactions <cit.>; therefore, the peak of the SSC spectrum should be less than the high-energy peak of the SED (which is in GeV, measured by Fermi-LAT), i.e., L_ SSC≤ L_γ . We have the relation L_ SSC/L_ syn∼ g_ SSC( u'_ syn/u'_ B) , because the input radiation field of the SSC process is the same photon field produced by electron-synchrotron radiation, where u_ B'=B'^2/8π and u'_ syn=L_ syn/(4π cδ _ D^4R'^2) are the energy densities of magnetic field and synchrotron radiation, respectively. g_ SSC=( L_ SSC/L_ syn) /( L_ SSC,bol/L_ syn,bol) ∼3/4 is a bolometric correction factor (mainly due to the spectral shape and source geometry) <cit.>. Furthermore, for electron-synchrotron radiation, we have the peak frequency ν _ l,p∼ 3× 10^6 γ^'_ e^2B'δ _ D/( 1+z ), and ν _ h, p∼γ^'_ e^2ν _ l, p is the peak frequency of inverse Compton scattering. The combination of the two relations yields B'∼( 1+z )/3× 10^6δ _ D·( ν _ l, p^2/ν _ h, p) . Combining Equations (<ref>) and (<ref>), we obtain a constraint on Γ: Γ≃[ 3× 10^6/( 1+z )](ν _ h,pL_ syn/ν _ l, p^2R')( 2g_ SSC/cL_ SSC) ^1/2( δ _ D/Γ) ^-1 , with the constraint condition in Equation (<ref>). §.§ Constraint from Optical Depth The high-energy photon may be absorbed by the low-energy photon field, leading to the maximum observed photon energy. The peak cross-section for γγ annihilation is σ _γγ∼σ _ T/5, where σ _ T is the Thomson cross-section. In the observer frame, we have a relation between the energy of the gamma-ray photon and the soft photon: E_ soft∼3.6( m_ ec^2 ) ^2δ _ D^2/( 1+z ) ^2E_γ. The optical depth for γγ annihilation is estimated as: τ _γγ=σ _γγn'_ sR'∼( 1+z ) σ _TL_ softE_γ/72π( m_ ec^2 ) ^2cδ _ D^5R' . From the SED of TXS 0506+056 <cit.>, we can obtain the maximum gamma-ray energy of E_max∼ 5× 10^11 eV. 
To obtain L_ soft, we set a critical point E_ X,0, which is the demarcation point between two humps in the SED. Then, we connect the soft photon luminosity and the critical point luminosity L_ X,0 via a spectral index α (indicating the slope of the SED, i.e., ν F_ν∝ν^α), with the value of α depending on the location of E_ soft (E_ soft>E_ X,0 or E_ soft<E_ X,0), i.e., L_ soft=L_ X,0( E_ soft/E_ X,0) ^α. Bringing this relation back to Equation (<ref>), we obtain τ _γγ=σ _ T3.6^α( 1+z ) ^1-2 αE_max^1-αL_ X,0/72π( m_ ec^2 ) ^2-2αcδ _ D^5-2 αE_ X,0^αR'. The observations of gamma-ray photons with the maximum energy imply that . Note that the gamma-ray photon energy with τ_γγ( E_γ)≃ 1 may be around the cut-off position of the SED and smaller than the observed maximum gamma-ray energy E_max; however, it will not affect our estimation significantly since the value of τ _γγ( E_max) is at most around a few. Then, we achieve our formula: Γ≳[ σ _ T3.6^α( 1+z ) ^1-2αE_max^1-αL_ X,0/72π( m_ ec^2 ) ^2-2αcE_ X,0^αR'] ^1/( 5-2α)( δ _ D/Γ) ^-1. We stress again that in the general case, E_ soft could be larger or less than E_ X,0, leading to a different spectral index in Equation (<ref>) and thus a different constraint. The value of E_ soft is approximately estimated from Equation (<ref>) with respect to different sources; we will briefly discuss this in the next section. §.§ Constraint from the Hadronic Process The pγ process in the blob will produce gamma-ray photons and high-energy neutrinos through the photomeson process and Bethe–Heitler (BH) pair production. Since the BH process will not produce neutrinos and contribute less to the final gamma-ray radiation, we will focus our study on the photomeson process. The total produced gamma-ray photon luminosity (L_ pγ) and the neutrino luminosity (L_ν) in the blob are calculated via: L_ pγ=4/3π R'^3δ _ D^4m_ pc^2∫_1^γ' _ p,max5/8f_ pγ( γ' _ p) γ' _ pQ( γ' _ p) dγ' _ p and L_ν=4/3π R'^3δ _ D^4m_ pc^2∫_1^γ' _ p,max3/8f_ pγ( γ' _ p) γ' _ pQ( γ' _ p) dγ' _ p, where Q( γ^'_ p) is the proton injection spectrum with the form of Q( γ' _ p) =Q_0γ^'_ p^-q, in which q is the injection spectrum index, γ' _ p is the Lorentz factor of protons in a co-moving frame and γ^'_ p,max is the maximum Lorentz factor. f_ pγ( γ^'_ p) is the efficiency of the photomeson process which has a complicated integral formation <cit.>; however, we have a simple relation between the gamma-ray energy and the proton energy as long as they interact with the same soft photon field <cit.>: E_ p∼ 3× 10^5E_γ and relation between f_ pγ( γ _ p) and τ _γγ: f_ pγ( γ _ p) ∼ 10^-3τ _γγ . Notice that both relations are valid in the observer frame. Combining the relations above and Equation (<ref>) together and changing the reference frame to a co-moving frame, we achieve a simplified formation for f_ pγ( γ' _ p): f_ pγ∼ 10^-3·( 3× 10^5 ) ^α -1· σ _ T3.6^α( 1+z ) ^-α( m_ pc^2 ) ^1-αL_ X,0γ^'_ p^( 1-α)/72π( m_ ec^2 ) ^2-2αcδ _ D^4-αE_ X,0^αR' Furthermore, we have the proton luminosity L_ p^* in the AGN frame, which is estimated as the lowest power for the jet: L_ p^*=π R'^3m_ pc^2Γ ^2∫_1^γ' _ p,maxγ' _ pQ( γ' _ p) dγ' _ p. With Equations (<ref>), (<ref>) and (<ref>), functions for L_pγ and R' are obtained: Γ ^2-α∼4/3·5/8· 10^-3·( 3× 10^5 ) ^α -1·2-q/3-α -q· γ^'_ p,max^3-α -q-1/γ^'_ p,max^2-q-1·L_ p^*/L_pγ·( δ _ D/Γ) ^α· B( R' ), where B( R' ) =σ _ T3.6^α( 1+z ) ^-α( m_ pc^2 ) ^1-αL_ X,0/72π( m_ ec^2 ) ^2-2αcE_ X,0^αR'. 
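The constraint expressions derived above can be evaluated directly. The snippet below codes the variability, SSC, and γγ-opacity limits on Γ exactly as they are written in this section (with δ_D/Γ kept as a free ratio); the constants are in cgs units and the numerical prefactors are transcribed verbatim. The sample inputs at the bottom are illustrative placeholders, not the observational values of the tables, apart from E_max, E_X,0, and α, which follow the numbers quoted for TXS 0506+056.

```python
import numpy as np

# cgs constants (assumed; the prefactors below are transcribed from the text)
c      = 2.998e10        # speed of light [cm s^-1]
sigmaT = 6.652e-25       # Thomson cross-section [cm^2]
mec2   = 8.187e-7        # electron rest energy m_e c^2 [erg]
eV     = 1.602e-12       # 1 eV in erg

def gamma_from_variability(Rp, z, t_var, dD_over_G=1.0):
    """Lower limit on Gamma from R' <= c * delta_D * t_var / (1 + z)."""
    return Rp * (1.0 + z) / (c * t_var) / dD_over_G

def gamma_from_ssc(Rp, z, nu_lp, nu_hp, L_syn, L_ssc, g_ssc=0.75, dD_over_G=1.0):
    """Gamma implied by requiring the SSC peak not to exceed the observed L_gamma."""
    return (3.0e6 / (1.0 + z)) * (nu_hp * L_syn / (nu_lp**2 * Rp)) \
        * np.sqrt(2.0 * g_ssc / (c * L_ssc)) / dD_over_G

def gamma_from_opacity(Rp, z, E_max, E_X0, L_X0, alpha, dD_over_G=1.0):
    """Lower limit on Gamma from gamma-gamma transparency at E_max."""
    num = sigmaT * 3.6**alpha * (1.0 + z)**(1.0 - 2.0 * alpha) \
        * E_max**(1.0 - alpha) * L_X0
    den = 72.0 * np.pi * mec2**(2.0 - 2.0 * alpha) * c * E_X0**alpha * Rp
    return (num / den)**(1.0 / (5.0 - 2.0 * alpha)) / dD_over_G

# illustrative placeholders: blob radius [cm], redshift, variability time [s], L_X,0 [erg/s]
Rp, z = 1.0e16, 0.3365
print(gamma_from_variability(Rp, z, t_var=1.0e5))
print(gamma_from_opacity(Rp, z, E_max=5.0e11 * eV, E_X0=4.0e3 * eV,
                         L_X0=1.0e46, alpha=-0.48))
```

Scanning such functions over a grid of (R', Γ) and intersecting the resulting inequalities is what produces the allowed regions discussed in the next section.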
For neutrino luminosity L_ν, the formula is similar but the prefactor 5/8 is replaced with 3/8 and L_ pγ with L_ν. Due to the possibility of other contributions (e.g., the SSC and the BH process), the constraints can be obtained by the fact that the observed peak luminosity (L_γ) of Gev gamma-ray band by Fermi-LAT should be larger than the contributions of the photomeson process which would be generally cascaded peaking in the GeV band. For the high-energy neutrino, the expected luminosity from the photomeson process should be larger than the actual observed value or conservatively larger than 0.003 times the observed value to ensure the detection of the high-energy neutrino is at the 3σ significance level. Note that L_ν in Equation (<ref>) is the luminosity for all-flavor neutrinos, while many observed values in the literature are only for muon neutrinos and anti-muon neutrinos. In conclusion, we have the constraints: L_ pγ<L_γ=4π D_ L^2F_γ, L_ν>4π D_ L^2F_ν, or more conservatively using L_ν>4π D_ L^2F_ν× 0.003, where F_ν is the flux of all-flavor neutrinos based on one high-energy neutrino detection during the corresponding time window. Then, combining these with Equation (<ref>) by replacing L_pγ (L_ν) therein, one can give constraints for the hadronic process. § APPLICATION TO TXS 0506+056 In this section, we will apply the above-derived constraints to the specific neutrino source TXS 0506+056, which coincides with the IC170922A event. Observational values can be obtained from the SED shown in <cit.> and summarized in Table <ref>. In addition, we assume δ _ D/Γ∼1 for all constraints. Here, since the photon field presents two different spectral indexes, i.e., α∼-0.48 and 0.31 (here, ν F_ν∝ν^α), below and above the critical photon energy E_ X,0, respectively, for the TXS 0506+056 observations, we evaluate the energy of soft photon field by whether they can participate in the γγ annihilation (for the constraint from optical depth) and the photomeson process (for the constraint from the hadronic process). For γγ annihilation, the maximum gamma-ray energy and critical photon energy are estimated as E_max∼ 5× 10^11 eV and E_ X,0∼ 4× 10^3 eV. One has the typical energy E_ soft∼ 1×δ _ D^2 eV from Equation (<ref>) to attenuate the gamma-rays with maximum energies. Hence, E_ soft<E_X,0 when δ _ D≲ 63 and E_ soft is located in the soft X-ray band and the spectral index is adopted as ∼-0.48. As the results demonstrate below, since for δ _ D≳ 63, the constraints obtained by the optical depth of maximum energy photons become negligible so that it will not affect the final parameter space, we achieve constraints through the opacity of maximum energy photons considering the soft photon field with the spectral index of α∼ -0.48 only. For the photomeson process, E_ soft depends on γ'_ p via Equations (<ref>) and (<ref>). To obtain a observational neutrino energy with tje range of 1 TeV–1 PeV, γ _ p∼δ_ Dγ'_ p should be in the range of 10^3-10^6, indicating that the soft photon energy that exceeds the threshold of photomeson process is E_ soft≳ 10^4 (γ_ p/10^6)^-1(δ_ D/10)^2 eV. For δ_ D slightly larger than a few, all concerned protons will interact with the soft photons with energies above E_ X,0, so constraints from the hadronic process can be reached by considering the soft photon field with a spectral index of α∼ 0.31 only. Other parameters needed are shown in Table <ref>, where we have multiplied (anti-)muon neutrino flux from <cit.> by 3 to obtain all flavor fluxes. 
We have four free parameters: γ' _ p,max, γ' _ p,min, q and L_ p^*. Here, we take γ' _ p,min=1 and γ' _ p,max=10^6 to normalize the proton luminosity. In addition, such a range of γ'_ p can achieve the optimistic neutrino production around PeV, yielding relatively conservative constraints from the hadronic process. In addition, we keep q and L_ p^* free to explore their influence on the constraints of the parameter space. Among all situations concerned, we find that the injected proton luminosity should exceed 2.5×10^49 erg/s, about 42 L_ Edd, while the Eddington luminosity for this source is estimated as ∼ 10^47.8 erg/s by assuming the mass of the central black hole is 5×10^9 M_⊙, since the mass of this source is uncertain. The injected luminosity for a proton must be larger than 2.5×10^49 erg/s for q= 1.8, 5.0×10^49 erg/s for q= 2.0 and 2.0×10^50 erg/s for q= 2.2, otherwise there is no allowed parameter space. The required lower limits of L_ p^* in our results are higher than in <cit.>, which may be caused by the different γ' _ p,max selection. We note that with a higher γ' _ p,max, the allowed value for Γ will increase. However, this will overestimate the neutrino luminosity. A sample result for the constraints and parameter space is demonstrated in Figure <ref> under the conditions of q= 2.0, L_ p^*= 6×10^50 erg/s≃ 10^3L_ Edd, where the allowable parameter space is highlighted. With a fixed q, the area of allowed parameter space increases with the injected proton luminosity, as shown in Figure <ref>, where L_ p^*= 10^3L_ Edd, 10^2 L_ Edd and 10 L_ Edd, respectively. The constraints of time variability, SSC and opacity will not vary with q and L_ p^*; hence, we only demonstrate the constraints given by a neutrino flux larger than 0.3% of the observed flaring neutrino flux (Equation (<ref>)) for different L_ p^* values (dashed, solid and dot-dashed line in red) and ignore the inconsequential constraints of gamma-rays from the photomeson process (Equation (<ref>)) and neutrino flux is the same as the observed flaring neutrino flux (Equation (<ref>)). The constraint of gamma-rays from the photomeson process could affect the parameter space only in extreme conditions: the injected luminosity for a proton must be larger than for q= 1.8, 9.0×10^50 erg/s for q= 2.0 and 4.0×10^51 erg/s for q= 2.2, which are not used here. Hereafter, we take L_ p^*= 10^2L_ Edd, q= 2.0 as a benchmark for comparison. From Figure <ref>, one may find that the area of allowed parameter space varies dramatically by changing L_ p^* and the available values for Γ vary from ∼10 to ∼100. We summarize the allowed value range in Table <ref>. Note that the values summarized are only upper and lower limits for Γ and R'. When L_ p^* is fixed, the area of the allowed parameter space decreases through increasing q, as shown in Figure <ref>. Similar to Figure <ref>, we consider three different values of q, i.e., 1.8, 2.0 and 2.2, and a fixed proton luminosity of L_ p^*= 10^2L_ Edd. Comparing the two figures above, we found that the variation of the parameter is less sensitive to q than to L_ p^*. § APPLICATION TO PKS 0735+178 For further study, we also investigated another BL Lac possibly associated with a neutrino event: PKS 0735+178 and IC-211208A. IC-211208A is a muon track event with an estimated energy of 172 TeV with a large statistical 90% localization error region of ∼13 square degrees. 
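The normalisation of the injected proton spectrum mentioned above (Q(γ'_p) = Q_0 γ'_p^-q with γ'_p,min = 1 and γ'_p,max = 10^6) can be obtained by inverting the expression for L_p^* given in the derivation section. The sketch below does this for a power-law injection; the radius and Lorentz factor at the bottom are illustrative placeholders, the proton power corresponds to the 10^2 L_Edd benchmark quoted in the text, and the units of Q_0 follow directly from the formula as printed.

```python
import numpy as np

m_p_c2 = 1.503e-3          # proton rest energy m_p c^2 [erg]

def q0_from_proton_power(L_p_star, R_prime, Gamma, q, g_min=1.0, g_max=1.0e6):
    """Normalisation Q_0 of Q(g) = Q_0 g^-q obtained from
    L_p* = pi R'^3 m_p c^2 Gamma^2 * integral of g * Q(g) over [g_min, g_max]."""
    if np.isclose(q, 2.0):
        integral = np.log(g_max / g_min)
    else:
        integral = (g_max**(2.0 - q) - g_min**(2.0 - q)) / (2.0 - q)
    return L_p_star / (np.pi * R_prime**3 * m_p_c2 * Gamma**2 * integral)

# benchmark case: q = 2.0 and L_p* = 10^2 L_Edd ~ 6.3e49 erg/s; R', Gamma assumed
print(q0_from_proton_power(L_p_star=6.3e49, R_prime=1.0e16, Gamma=20.0, q=2.0))
```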
The potential source PKS 0735+178 (z∼0.65), an intermediate synchrotron peaked BL Lac (ISP), is located slightly outside of the error region of IC-211208A <cit.>. However, it was an outburst in γ rays, X-rays and optical-UV at the time of the neutrino alert. Moreover, it might be associated with three other neutrino events detected by Baikal-GVD <cit.>, the Baksan Underground Scintillation Telescope <cit.> and Km3NeT undersea neutrino detectors <cit.>. To be consistent with the analysis of TXS 0506+056, we still used IceCube data to study this source. Similar to the case of TXS 0506+056, we first confirm some key values in our method. For the neutrino flux F_ν, we estimate the effective area of the neutrino detector for IC-211208A as A∼160 m^2 using Figure 5 from <cit.> and the neutrino energy as ∼172 TeV. The duration time for neutrino emission is taken as the multiwavelength flare duration time of 3 weeks <cit.>. With this duration time and effective area, the neutrino flux can be calculated as . For the spectral index α, we first estimate the critical photon energy as E_ X,0∼5.8× 10^3 eV and the maximum photon energy as E_ max∼3.4× 10^9 eV from the SED of PKS 0735+178 <cit.>. From , we have E_ soft∼ 100 δ _ D^2 eV and for any δ _ D≳ 7.5, E_ soft will be larger than the critical energy E_X,0. Furthermore, in this case, the constraints from time variation and SSC already imply that δ _ D must be larger than 26.0, so the condition δ _ D≳ 7.5 is always satisfied and we can estimate the spectral index α for Equation (<ref>) as ∼ 0.44. The spectral index for Equation (<ref>) can be obtained in a similar way. The adopted parameters and the results are summarized in . Similar to the results of TXS 0506+056, a super-Eddington luminosity for proton power is required (M_ SMBH∼6.3×10^8 M_⊙). There is an allowed parameter space only when L_ p^* is larger than 200L_ Edd. When L_ p^* exceeds 2×10^3L_ Edd, there is an allowed parameter space under all conditions. Our results are consistent with the parameters chosen in <cit.>. § CONCLUSIONS AND DISCUSSION In this work, we study the parameter space of the radius and bulk Lorentz factor of blobs with an analytical method in the framework of the lepto-hadronic one-zone model for BL Lacs. We use a combination of constraints from the observed variability timescale t_ var, synchrotron self-Compton (SSC) luminosity L_ SSC, optical depth for gamma-rays τ _γγ, photon luminosity L_ pγ and neutrino luminosity L_ν in the hadronic process. We apply our method to TXS 0506+056 and PKS 0735+178, then explore the allowed parameter space. We find that the allowed values for Γ and R' vary with different injected proton powers L_ p^* and injection indexes q, and are more sensitive to L_ p^*. For two studied BL Lac–neutrino associations, a proton luminosity significantly exceeding the Eddington luminosity is required to have an allowed parameter space for the simple lepto-hadronic one-zone model. Our analytical constraints on the allowed parameter space based on the lepto-hadronic one-zone model should be more conservative than from the detailed numerical fitting to the electromagnetic radiation and neutrino spectrum of BL Lacs, since only some key spectral characteristics are selected to limit the parameter space. However, our conservative constraints have introduced a quite large proton luminosity compared to the Eddington luminosity, probably suggesting that the actual condition may disfavor the simple one-zone lepto-hadronic model. 
As a result, the more complicated model may be invoked to explain the BL Lac–neutrino association event. Future multi-messenger observations could help us to determine the actual physical condition further. In addition to the selected BL Lac–neutrino associations in this paper, some other possible BL Lac–neutrino associations have been reported as well, e.g., IC-200107A associated with 4FGL J0955.1+3551 <cit.> and IC-141209A with GB6 J1040+0617 <cit.>. However, the data on their broadband electromagnetic radiation are inadequate to provide effective constraints. Besides, potential associations between high-energy neutrino events and FSRQs, such as IC-35 and PKS B1424-418 <cit.>, IC-190730A and PKS 1502+106 <cit.>, have been also reported. For FSRQs, the extra external photon field, especially Broad Line Region (BLR) radiation, could serve as the seed photons of the EC process contributing to the observed gamma-rays and the pγ process contributing to the high-energy neutrino radiation. The latter constraint on the proton luminosity is more stringent and could be alleviated to allow a smaller proton luminosity to a certain extent. aasjournal
http://arxiv.org/abs/2307.00861v1
20230703090236
Perch a quadrotor on planes by the ceiling effect
[ "Yuying Zou", "Haotian Li", "Yunfan Ren", "Wei Xu", "Yihang Li", "Yixi Cai", "Shenji Zhou", "Fu Zhang" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY" ]
Perch a quadrotor on planes by the ceiling effect
===============================================

Perching is a promising solution for a small unmanned aerial vehicle (UAV) to save energy and extend operation time. This paper proposes a quadrotor that can perch on planar structures using the ceiling effect. Compared with existing work, this perching method does not require any claws, hooks, or adhesive pads, leading to a simpler system design. Nor does this method restrict perching by surface angle or material. The design of the quadrotor, which only uses its propeller guards for surface contact, is presented in this paper. We also discuss the automatic perching strategy, including trajectory generation and power management. Experiments are conducted to verify that the approach is practical and the UAV can perch on planes with different angles. Energy consumption in the perching state is assessed, showing that more than 30% of power can be saved. Meanwhile, the quadrotor exhibits improved stability while perching compared to when it is hovering.

§ INTRODUCTION

Unmanned aerial vehicles (UAVs) have numerous applications such as aerial photography, surveying, and monitoring. However, UAVs suffer from certain constraints, and one of the most significant challenges is the limited flight time, which leads to reduced mission efficiency. A feasible solution to overcome this limitation is to develop the ability for UAVs to perch on environmental structures. Research has shown that perching a drone can significantly reduce energy consumption, enhance drone stability, and enable new applications <cit.>. Perching is a common behavior observed in birds and insects, where they land and rest on natural objects using their feet or claws, conserving energy and stabilizing their position. In the case of UAVs, many perching mechanisms such as grippers and hooks are bio-inspired <cit.>. These mechanisms can be either actively controlled by servo motors or passively actuated by the weight of the UAVs. Most of them are applicable for perching on branches, ropes, and fences. In the meantime, ongoing research is exploring strategies for perching on planar structures such as walls, buildings, and bridges, which are commonly encountered in urban environments. In the work of <cit.>, adhesive pads are used to help UAVs perch on walls temporarily. Mellinger et al. <cit.> proposed the use of Velcro to attach UAVs to specific inclined surfaces, while Ji et al. <cit.> utilized magnets to apply pressure for perching on iron surfaces. Nonetheless, the above solutions do have some limitations. Firstly, incorporating additional mechanisms, including but not limited to grippers and adhesive pads, adds complexity to the design and extra weight to the UAVs. Although energy consumption can be reduced during perching, more power will be consumed during hovering and flight. Secondly, the mechanisms are mainly installed on the bottom side of the UAVs, which is typically where cameras are mounted for aerial photography. Those grippers or pads not only constrain the views of the cameras but also potentially cause mechanical interference with the sensors. Last but not least, in all of the above methods, the UAVs must turn their bottom side towards branches or walls, which further obstructs the camera views, leading to undesired filming results or even mission suspension.
Recently, Hsiao and Chirarattananon <cit.> proposed a novel method of perching small rotorcraft by utilizing the ceiling effect <cit.>. The ceiling effect is an aerodynamic phenomenon that appears when a UAV approaches a ceiling: a relatively low-pressure region forms between the ceiling and the UAV, pulling the rotors towards the surface and making perching possible. Moreover, this effect also reduces the drag on the propellers, leading to higher rotating speeds and increased thrust. Experiments of perching a quadrotor under bridges were presented in <cit.>, where the quadrotor was able to maintain altitude with lower throttle input. This method requires minimal mechanisms for UAVs to perch and barely affects the flight mission. Similarly, when a quadrotor is close to planar structures other than a ceiling, it can experience the ceiling effect as long as it is aligned parallel to the surface. Therefore, we propose to perch a quadrotor on planes of varying incline angles, including ceilings, walls, slopes, and grounds, as shown in Fig. <ref>. Only propeller guards, which already exist on many drones for safety purposes, are required as supporting structures when contacting planar structures. The quadrotor can thus save energy by perching in a wider range of locations. Besides, this concept introduces a distinctive landing strategy: flipping the quadrotor upside down and using the propeller guards as the landing gear. As the traditional landing gear can be eliminated, the composition becomes even simpler, and onboard cameras have a larger range of view in both perching and flight. One of the challenges is controlling the quadrotor to reach and maintain abnormal attitudes. Although modern electronic speed controllers (ESCs) enable bi-directional thrust generation in flight by changing the motor's rotation direction (known as 3D mode) <cit.>, which allows UAVs to tilt to upright and upside-down postures as well as to make firm contact on inclined surfaces, current research on perching trajectories mainly focuses on reaching the target position with the bottom side of the quadrotor and using single-direction thrust <cit.>. Plans for approaching the surfaces with the top side and using bi-directional thrust are rarely investigated in existing works. Thus, we present a coherent trajectory generation and control framework to address this challenge. Although several studies indicate that the ceiling effect has the potential to save energy, the actual power that can be saved on a complete quadrotor has not been assessed yet. The evaluation should be conducted while the quadrotor is perching stably with as little thrust as possible. Since we demonstrate a throttle control logic for perching, the power consumption under different conditions can be measured in experiments. We summarize the main contributions of this paper as follows: * We design a quadrotor that can utilize the ceiling effect to perch on planar structures with its top side. * We develop novel perching procedures for the quadrotor to reach different incline angles and verify their feasibility through experiments. * We evaluate the energy efficiency and stability of our perching method. § METHODOLOGY §.§ Design and control In this section, we first discuss the components and dimensions of the quadrotor, as shown in Fig. <ref>. Then, we elaborate on how the controller works to manage bi-directional thrust in general flight.
§.§.§ Structure design This quadrotor is assembled from an off-the-shelf fuselage, and its propulsion system, consisting of four T-MOTOR F60 Pro motors and GEMFAN 513D propellers, is also commonly available on the market. A T-MOTOR F60A 4-in-1 ESC with the DShot1200 communication protocol is connected to all motors. Unlike variable-pitch propellers that produce reversible thrusts <cit.>, our method of using symmetrical-blade propellers and an open-source ESC keeps the mechanism simple, ensuring effective bi-directional thrust while maximizing compatibility with existing UAVs. Customized propeller guards are made of two 3D-printed parts and long screws. The forces supporting the quadrotor during surface contact are borne by the screws. A few rubber tapes are added on top of the propeller guards to increase friction for perching on inclined planes as well as to dampen the collision force. Light nylon ducts are used on this quadrotor to improve propulsion efficiency and enhance safety, by surrounding the propellers and enlarging the airflow velocity difference around them <cit.>. Other components, including a Pixhawk 4 Mini flight controller and a Jasper Lake N5105 onboard computer, are attached at the center of the quadrotor's body. A 6S 1850 mAh battery weighing 270 g is mounted beneath the fuselage. The total weight of the quadrotor is 1131 g and the thrust-to-weight ratio is 2.5. §.§.§ System control This quadrotor uses a cascaded control framework based on PX4 that is also widely used in existing UAVs. The position and velocity controller produces the desired acceleration, which provides the desired thrust and attitude for the attitude controller and subsequent control loops. In order to reduce computational demands and complexity, we impose the constraint that the thrust generated by each propeller is always in the same direction, either all pointing upward or all pointing downward at the same time with respect to the quadrotor body. As a result, for one desired acceleration, there are two sets of desired thrust and attitude: positive thrust with a normal attitude and negative thrust with a reversed attitude. The quadrotor only chooses the desired attitude, and the corresponding thrust, that is closer to its current attitude, preventing unnecessary flips. §.§ Perching trajectory and strategy This section comprehensively introduces our automatic perching procedures for different cases: Case (a), perching on near-vertical planes with incline angles ranging from 60° to 120°; Case (b), perching on ceilings with incline angles of less than 60°; and Case (c), perching on grounds with incline angles greater than 120°. An example for each scenario is presented in Fig. <ref>. We divide the whole perching process into steps, in each of which the thrust directions of all propellers are unified and consistent. Case (a) requires three steps, starting with trajectory tracking using only positive thrust, followed by model predictive control (MPC) perching adjustment with reversed thrust, and then establishing and maintaining perching with positive thrust. Case (b) is the same except that positive thrust is used in the MPC perching adjustment. Case (c) involves trajectory tracking with reversed thrust and perching establishment with positive thrust. §.§.§ Perching trajectory generation The first step of perching is to tilt the quadrotor from hovering to be parallel to the target surface.
Since the thrust direction is fixed in this process for all cases, one of the state-of-the-art trajectory optimizers, MINCO <cit.>, can be employed. Our controller, as mentioned above, allows the quadrotor to choose reversed thrust and attitude based on the desired acceleration input in Case (c). The initial state and the final state of the quadrotor are the input to the optimizer, while a smooth trajectory with constraints on mechanical properties is the output. Considering that the plane axes are located at the center of the surface and rotated accordingly (shown in Fig. <ref>), the initial hover position always lies in the -z direction. The final position of the trajectory has offsets with respect to the center of the plane. As the quadrotor is tilted, its control effectiveness in the plane's y direction is weakened and the motion is mainly dominated by the component of gravity in this direction. Space needs to be reserved for subsequent steps in view of this. §.§.§ MPC perching adjustment At the end of the trajectory tracking step, the quadrotor is parallel to the perching surface but is not ready to perch yet. On the one hand, there are tracking errors, especially in Case (a) and Case (b), which have aggressive trajectories, so the quadrotor position may not be ideal for perching. On the other hand, since the thrust points toward the plane in both Case (a) and Case (b), there may be significant velocity in the plane's +z direction, making a deceleration procedure necessary. Therefore, we set up the MPC perching adjustment to actively guide the quadrotor towards the surface in these two cases. In this control stage, the state vector comprises the quadrotor position with respect to the plane center, d_z, d_y, and d_x as shown in Fig. <ref>, and the relative velocity v_z, v_y, and v_x: x= [ d_z v_z d_y v_y d_x v_x ]^T. The control input consists of the thrust acceleration a_T, the relative roll angle ϕ, and the pitch angle θ (also indicated in Fig. <ref>): u= [ cosθcosϕ a_T sinϕcosθ a_T -sinθ a_T ]^T. With the gravitational acceleration g and the incline angle of the plane α, the model is written as v̇_z = cosθcosϕ a_T - gcos(α), v̇_y = sinϕcosθ a_T + gsin(α), v̇_x = -sinθ a_T. We consider a total of n computational cycles in this adjustment period, with a cycle interval of Δ t seconds, so the total duration of the adjustment period is nΔ t seconds. After discretization, we have x(k+1)= Ax(k) + Bu(k) + d, k ∈ℕ, where A∈ℝ^6 × 6, B∈ℝ^6 × 3, and d∈ℝ^6 × 1 can be computed from (<ref>). The last state predicted from cycle k can be represented as x(n|k)= A^n-kx(k) + ∑_j=0^n-k-1A^n-k-1-j (Bu(k+j|k)+d). We define U_k= [ u(k|k) u(k+1|k) … u(n-1|k) ], and our problem becomes min J = U_k^T I U_k (7a) s.t. 0 ≤ d_z(n|k) ≤ d_z_max (7b), 0 ≤ v_z(n|k) ≤ v_z_max (7c), d_y_min ≤ d_y(n|k) ≤ d_y_max (7d), d_x_min ≤ d_x(n|k) ≤ d_x_max (7e), ϕ(n-1|k) = 0 (7f), θ(n-1|k) = 0 (7g), a_T_min ≤ a_T(j) ≤ a_T_max, j ∈{ k, …, n-1} (7h), ϕ_min ≤ ϕ(j) ≤ ϕ_max, j ∈{ k, …, n-1} (7i), θ_min ≤ θ(j) ≤ θ_max, j ∈{ k, …, n-1} (7j), where we seek the minimum-effort input that fulfils all constraints, including hard constraints on d_z, v_z, d_y, and d_x in the last state ((7b)-(7e)), hard constraints on ϕ and θ in the last input ((7f)-(7g)), and constraints on a_T, ϕ, and θ for all inputs ((7h)-(7j)). Both a_T_min and a_T_max are negative in Case (a). In every cycle, we only take the first group of the desired attitude and thrust as the command.
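To make this concrete, the following is a minimal Python sketch, not the authors' implementation, of how the discretised prediction model and a simplified version of problem (7) could be set up. It assumes a forward-Euler discretisation of the dynamics (the paper does not state the discretisation scheme) and approximates the attitude and thrust bounds (7h)-(7j) by box constraints directly on the transformed input vector u, which sidesteps the nonconvex mapping in (2); the use of cvxpy and all numerical limits are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def discretise(dt, alpha, g=9.81):
    """Forward-Euler discretisation of the relative dynamics (sketch).
    State x = [d_z, v_z, d_y, v_y, d_x, v_x], transformed input u = [u1, u2, u3]."""
    A = np.eye(6)
    for pos, vel in [(0, 1), (2, 3), (4, 5)]:
        A[pos, vel] = dt                        # position integrates velocity
    B = np.zeros((6, 3))
    B[1, 0] = B[3, 1] = B[5, 2] = dt            # velocity integrates the input
    d = dt * np.array([0.0, -g * np.cos(alpha), 0.0, g * np.sin(alpha), 0.0, 0.0])
    return A, B, d

def mpc_adjustment(x0, n, dt, alpha, u_lo, u_hi, dz_max, vz_max, dy_lim, dx_lim):
    """Solve a box-constrained surrogate of problem (7) over the remaining n cycles."""
    A, B, d = discretise(dt, alpha)
    U = cp.Variable((3, n))                      # stacked inputs u(k|k), ..., u(n-1|k)
    x = x0
    for j in range(n):                           # propagate the prediction, cf. (6)
        x = A @ x + B @ U[:, j] + d
    constraints = [
        0 <= x[0], x[0] <= dz_max,               # terminal distance to the plane (7b)
        0 <= x[1], x[1] <= vz_max,               # terminal perpendicular velocity (7c)
        dy_lim[0] <= x[2], x[2] <= dy_lim[1],    # terminal offset along y (7d)
        dx_lim[0] <= x[4], x[4] <= dx_lim[1],    # terminal offset along x (7e)
        U[1, n - 1] == 0, U[2, n - 1] == 0,      # phi = theta = 0 at the last input (7f)-(7g)
        U >= u_lo, U <= u_hi,                    # box surrogate for (7h)-(7j)
    ]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(U)), constraints)  # cf. (7a)
    problem.solve()
    return U.value[:, 0]                         # only the first input is applied
```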
By repeating this control process for n cycles, the influence of the tracking error from the previous step should be minimized, allowing the quadrotor to approach the center of the plane with low perpendicular velocity. While there are no constraints on the velocity along the planar surface, it will eventually be eliminated by friction. §.§.§ Throttle control in perching In order to establish perching and maintain it with reduced power consumption, we control the throttle of the quadrotor and give zero angular-rate commands. Once the quadrotor reaches the plane, all propellers generate high positive thrust to push the quadrotor against the surface for a short period of time. Following this, the throttle is gradually reduced as long as the quadrotor remains stable in contact with the plane. The throttle value that no longer holds the quadrotor in position is marked as T_min. If such a T_min exists, high throttle is applied once again to re-establish the contact until the quadrotor stabilizes. The throttle is then gradually reduced again, but this time it stops reducing at T_perch=T_min+0.05. However, in Case (c), T_min may not exist; in that case, all rotors can be turned off during perching to further conserve power. § EXPERIMENTS §.§ Perch on different planes To verify the feasibility of our method, we carry out several indoor perching tests. Transparent acrylic boards with a width of 120 cm are placed at different incline angles to represent the various cases, as shown in Fig. <ref> and <ref>. Both the perching target planes and the quadrotor receive pose feedback from the motion capture system. Fig. <ref> also displays snapshots of the quadrotor during the perching processes, and Fig. <ref> depicts the roll command and feedback in the experiment of perching on a 90° wall. The quadrotor usually takes less than 2 seconds to reach the targeted angles. The experiments confirm that the quadrotor can tilt to be parallel to the planes and make contact successfully. In addition, our results have proven to be highly repeatable. Despite the relatively smooth surfaces of the acrylic boards, the quadrotor successfully makes contact. Our strategy is proven to be fast, safe, and robust. §.§ Throttle control In all perching experiments, we controlled the throttle of the quadrotor based on the methodology described above once it reached the target planes. Fig. <ref> and <ref> show the process in one of the tests demonstrating perching on the ceiling. As depicted in Fig. <ref> (a), the quadrotor is in contact with the acrylic board. The throttle of the quadrotor is gradually decreased from 0.5 to 0.29, as reported in Fig. <ref>. The quadrotor drops in altitude at t=4.3 s but quickly recovers by increasing the throttle to over 0.6. Then, the throttle is gradually reduced again to 0.34, which maintains the quadrotor's position with an average power of around 340 W. Our method ensures comparatively low power consumption and stable perching of the quadrotor at the same time. §.§ Power saving The power consumption comparison among different states is reported in Fig. <ref>. The average power for the quadrotor to hover is 517 W, measured when the aircraft hovers away from the ground, walls, or ceiling. The power consumption data for perching on planes with incline angles ranging from 0° to 90° are obtained when the quadrotor uses a throttle equal to T_perch. The mean power values for perching on 0°, 45°, and 90° planes are 340 W, 394 W, and 348 W, respectively.
Compared to the hovering state, the perching strategy results in energy savings of around 35%, 24%, and 33% in the three perching states, respectively. These results demonstrate the effectiveness of perching by the ceiling effect in improving efficiency while remaining simple and not requiring complicated mechanisms. Besides, in the cases where the quadrotor perches on planes with 135° and 180° incline angles, the rotors can be fully turned off, leading to power consumption close to zero. This novel landing manner is not only easy to achieve but also highly valuable. §.§ Stability In the disturbance tests, a fan capable of producing a wind speed of 5 m/s is positioned 1 m away from the quadrotor. The detailed setup is displayed in Fig. <ref>. During both the hovering test and the perching test, the fan is turned on for a duration of 25 seconds. Fig. <ref> illustrates that the position errors during hovering are considerably greater than those during perching, which are all nearly zero. The root-mean-square error is 0.06 m in hovering, while it is 0.003 m in perching. Perching demonstrates strong stability and anti-interference ability in this experiment. § CONCLUSION In this work, we proposed to perch a quadrotor on planes by the ceiling effect as a means of saving power and enhancing stability. We designed a quadrotor that can use its propeller guards to make contact with planar structures, thereby eliminating the need for not only landing gear but also grippers, hooks, or adhesive pads. Compared to the existing perching mechanisms of UAVs, our method reduces the complexity of the design and is not limited by the angle or material of the perching planes. We have also developed practical perching procedures, including trajectory tracking, MPC perching adjustment, and throttle control on surfaces, to handle the different cases. The power that can be saved by perching via the ceiling effect is evaluated: around 30% of the energy consumption can be cut while guaranteeing excellent stability of the quadrotor. We acknowledge that our current quadrotor design may not be optimal in utilizing the ceiling effect. For example, the distance between the propellers and the perching planes could be adjusted to achieve higher efficiency. Therefore, we plan to further optimize our design and explore more varied scenarios. We will also extend our work to plane detection and autonomous navigation. § ACKNOWLEDGMENT This work is supported by the Hong Kong Research Grants Council (RGC) General Research Fund (GRF) (no. 17206920), the Hong Kong Research Grants Council (RGC) Early Career Scheme (ECS) (no. 27202219), and a DJI research donation.
http://arxiv.org/abs/2307.02743v1
20230706030641
Loop corrections as marginal deformations in celestial holography
[ "Song He", "Pujian Mao", "Xin-Cheng Mao" ]
hep-th
[ "hep-th", "gr-qc" ]
http://arxiv.org/abs/2307.01566v1
20230704083737
Last layer state space model for representation learning and uncertainty quantification
[ "Max Cohen", "Maurice Charbit", "Sylvain Le Corff" ]
stat.ML
[ "stat.ML" ]
As sequential neural architectures become deeper and more complex, uncertainty estimation is more and more challenging. Efforts in quantifying uncertainty often rely on specific training procedures, and bear additional computational costs due to the dimensionality of such models. In this paper, we propose to decompose a classification or regression task into two steps: a representation learning stage to learn low-dimensional states, and a state space model for uncertainty estimation. This approach allows us to separate representation learning from the design of the generative model. We demonstrate how predictive distributions can be estimated on top of an existing, trained neural network, by adding a state space-based last layer whose parameters are estimated with Sequential Monte Carlo methods. We apply our proposed methodology to the hourly estimation of Electricity Transformer Oil temperature on a publicly benchmarked dataset. Our model accounts for the noisy data structure, due to unknown or unavailable variables, and is able to provide confidence intervals on predictions. Recurrent neural networks, Representation learning, Uncertainty quantification, Sequential Monte Carlo. § INTRODUCTION Recurrent Neural Networks (RNNs) were first introduced as an efficient and convenient architecture for problems with short-term time dependencies. They have since been consistently improved to capture longer-term memory and to optimize their implementations <cit.>. Current deep learning frameworks allow stacking an arbitrarily high number of recurrent layers, whose parameters are estimated by gradient descent through automated differentiation procedures, as shown in <cit.>.
However, many critical applications, such as medical diagnosis or drug design discovery, require not only accurate predictions, but a good estimate of their uncertainty (<cit.>). Fostering the dissemination of deep learning-based algorithms to such fields requires to design new approaches for uncertainty quantification. Bayesian statistics are able to approximate the distributions of future observations and to provide uncertainty estimation <cit.>. Several architectures inspired by Variational Inference (VI, see <cit.>) emerged by considering latent states as random variables and approximating their posterior distribution. The authors of <cit.> built on a traditional recurrent architecture by modelling temporal dependencies between these latent random states. Results presented in <cit.> yield improved performances when considering local gradient information for computing the posterior. In <cit.>, a prior model based on a Markov chain is estimated in the latent space of an Auto Encoder in order to compute uncertainty estimation on the observation. Sequential Monte Carlo (SMC) methods have also been successfully applied to Recurrent Neural Networks. Instead of computing a single latent vector at each time step, a set of particles representing the distribution of the latent space are propagated, and associated with importance weights. In <cit.>, the authors were able to model complex distributions on dependant data. We turn to <cit.> for an example using more complex neural architectures, such as the Transformer. In <cit.>, the authors considered weights as random variables and proposed approximations of their posterior distributions allowing more robust predictions. Such Bayesian neural networks have been proposed and studied in a variety of works, see for instance <cit.>. However, these methods are computationally intensive for high dimensional models and we do not have statistical guarantees on their ability to capture the target posterior distribution, see <cit.>. Monte Carlo Dropout (MC Dropout) methods offer to capture uncertainty by leveraging Dropout during both training and evaluation tasks, producing variable predictions from a single trained recurrent model, see <cit.>. In the recent years, MC Dropout methods have been applied in many industrial fields, such as flight delay prediction <cit.> or molecular simulations <cit.>. Alternatively, ensemble methods consist in training distinct networks to obtain a combined prediction, as shown in <cit.>. However, these frequentist approaches fail to guarantee proper calibration of the model, as highlighted by <cit.>, and suffer various limitations, see <cit.>. In an effort to provide an alternative strategy with limited computation overhead, <cit.> suggests splitting representation learning and uncertainty estimation to solve classification problems for independent data. A deep classifier is first trained to obtain task dependent representations of the data, on which ensemble models are fitted to approximate the distribution of the observations. Their experiments indicate that performing uncertainty estimation on the last-layer of the model outperforms baseline networks and is an appealing trade-off between computational cost and uncertainty quantification. However, as this method is restricted to independent data, it cannot be directly applied to time series. Inspired by <cit.>, we propose a last layer approach to split uncertainty quantification from representation learning, in the context of dependent data. 
This new method combines high expressivity, quality uncertainty estimation, and ease of training. Our main contributions are as follows. * We propose a decoupled architecture composed of an arbitrary sequential model and a state space model layer. * This last layer allows us to introduce complex predictive distributions for the observations. Its parameters are estimated through approximate sampling using Sequential Monte Carlo methods, as the likelihood of the observations is not available explicitly in such a setting. * Our methodology allows for arbitrarily deep architectures, and does not suffer from the overconfidence of frequentist approaches. § LAST LAYER DECOUPLING Estimating the parameters of potentially high-dimensional models with unobserved (i.e. noisy) layers is a challenging task. We therefore propose to first train an input model following traditional deep learning approaches, and then use Monte Carlo methods in a lower-dimensional state space to account for uncertainty, with tractable and computationally efficient simulation-based methods. The two-stage training algorithm is presented in Algorithm <ref>, and the architecture of the model is described in Figure <ref>. In the following, for any sequence (a_m,…, a_n) with n≥ m, we use the short-hand notation a_m:n = (a_m,…, a_n). Let T≥ 1 be a given time horizon. We consider a regression task with observations Y_1:T associated with inputs U_1:T. §.§ Representation learning In this paper, we consider an arbitrary multi-layer neural network h_φ with unknown parameters φ, responsible for extracting high-level features from the input time series: Ũ_1:T = h_φ(U_1:T) , input model. We produce an estimate φ̂ during the first training stage, by introducing an auxiliary function κ_ψ to model the observations as follows: for all 1 ≤ k ≤ T, Y_k = κ_ψ(Y_k-1, Ũ_k) + ϵ_k and Y_0 = κ_ψ(Ũ_0) + ϵ_0, where (ϵ_k)_k≥ 0 are independent centered Gaussian random variables with unknown variance. The input model is trained on a simple deterministic regression task, by performing gradient descent on the mean squared error, leading to a first estimate of φ and ψ. We keep the estimated parameters φ̂, while the auxiliary function κ_ψ and its parameters, only designed to model the observations, are discarded. §.§ State space model The next step is to define a state space model taking as input the previously extracted features Ũ_1:T. Let X_1:T be a sequence of stochastic hidden states computed recursively and Y_k their associated predictions. For all k ≥ 1, the model is defined as: X_k = g_θ(X_k-1, Ũ_k) + η_k , state model, Y_k = f_θ(X_k) + ϵ_k , observation model, where θ are the unknown real-valued parameters of the network (weights and biases) and f_θ and g_θ are nonlinear parametric functions. We chose (η_k)_k≥ 1 and (ϵ_k)_k≥ 1 as two sequences of independent centered Gaussian random variables with covariance matrices Σ_x and Σ_y, although any distribution can be substituted. This decoupled approach aims at reducing the number of parameters in θ, compared to φ, in order to estimate them using Sequential Monte Carlo methods. In the next section, we describe this second training procedure for the last layer only, keeping φ̂ fixed. § SEQUENTIAL MONTE CARLO LAYER In this section, we detail how to estimate the parameters θ, Σ_x and Σ_y in the model introduced in Section <ref>, from a record of observations Y_1:T. This is challenging because the likelihood of the observations is not available explicitly, as it would require integrating over the hidden states X_1:T.
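To make this explicit, and writing p_θ(x_k | x_k-1, Ũ_k) and p_θ(y_k | x_k) as a shorthand (not the paper's own notation) for the Gaussian transition and observation densities implied by the state and observation models above, the quantity that would be needed is the marginal likelihood p_θ(Y_1:T) = ∫ p_θ(x_1:T, Y_1:T) dx_1:T = ∫∏_k=1^T p_θ(x_k | x_k-1, Ũ_k) p_θ(Y_k | x_k) dx_1:T, a high-dimensional integral that admits no closed form as soon as g_θ or f_θ is nonlinear.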
Consequently, the score function is intractable. We propose to optimize a Monte Carlo estimator of this score function, using Fisher's identity <cit.>: ∇_θlog p_θ(Y_1:T) = 𝔼_θ[ ∇_θlog p_θ(X_1:T, Y_1:T) | Y_1:T] , where 𝔼_θ designs the expectation under the model parameterized by θ (the dependency on the input U_1:T is kept implicit here for better clarity). In the following paragraphs, we denote by Ψ_μ, Σ the Gaussian probability density function with mean vector μ and covariance matrix Σ. §.§ Particle filter The conditional distribution of X_1:T given Y_1:T is not available explicitly for a nonlinear state space model, but it can be approximated using a family of N particles (ξ^ℓ_1:T)_ℓ=1^N associated with importance weights (ω^ℓ_T)_ℓ=1^N. At k = 0, (ξ^ℓ_0)_ℓ=1^N are sampled independently from ρ_0 = Ψ_0, Σ_x, and each particle ξ^ℓ_0 is associated with the standard importance sampling weight ω_0^ℓ∝Ψ_Y_0, Σ_y(f_θ(ξ^ℓ_0)). Then, for k≥ 1, using {(ξ^ℓ_k-1,ω^ℓ_k-1)}_ℓ=1^N, we sample pairs {(I^ℓ_k,ξ^ℓ_k)}_ℓ=1^N of indices and particles from the instrumental distribution: π_k(ℓ,x) ∝ω_k-1^ℓ p_k(ξ^ℓ_k-1,U_k,x) . In this application we use for p_k(ξ^ℓ_k-1,U_k,·) the prior kernel Ψ_g_θ(ξ^ℓ_k-1, Ũ_k), Σ_x. For ℓ∈{1,…,N}, ξ^ℓ_k is associated with the importance weight ω^ℓ_k ∝Ψ_Y_k, Σ_y(f_θ(ξ^ℓ_k)). Such a particle filter with multinomial resampling is referred to as the bootstrap algorithm, see <cit.>. It has been extended and analyzed in many directions in the past decades, see <cit.>. In other lines of works, the adaptive tuning of the Monte Carlo effort has been analyzed in order to adapt the number of particles on-the-fly, see <cit.>. §.§ Particle smoother and online estimation Our framework allows the use of any particle smoother to estimate (<ref>). In this paper, we first describe the Path-space smoother <cit.> for its simplicity, in order to illustrate our approach. In practice, it often leads to particle path degeneracy <cit.>, which can be mitigated by substituting a more complex smoother such as the Forward Filtering Backward Smoothing <cit.> or the Forward Filtering Backward Simulation algorithm <cit.>. Additionally, because estimating (<ref>) amounts to computing a smoothed expectation of an additive functional, we can also use very efficient forward-only SMC smoothers such as the PaRIS algorithm and its pseudo-marginal extensions <cit.>. With ξ^i_1:T the ancestral line of ξ^i_T, the score function (<ref>) can be estimated as follows using automated differentiation: S^N_θ(Y_1:T) = ∑_ℓ=1^N ω_T^ℓ∇_θlog p_θ(ξ^ℓ_1:T, Y_1:T) , where p_θ is the joint probability density function of (X_1:T, Y_1:T) for the model described in Section <ref>. The degeneracy relative to the smoothing problem can be overcome using backward sampling. It is specifically designed for additive functionals so it is well suited to our setting (<ref>) since ∇_θlog p_θ(x_1:T, y_1:T) = ∑_t=1^T ∇_θlog m_θ(x_t-1,u_t;x_t) r_θ(x_t,y_t), where m_θ(x_t-1,u_t;·) is the transition density of the state model and r_θ(x_t,·) is the density of the conditional distribution of y_t given x_t and by convention m_θ(x_0,u_1;·) = ρ_0(·). The Monte Carlo estimator of the score function can be obtained online by setting, S^N_θ(y_1:T) = ∑_i = 1^N ω_T^iτ_T^i, where the statistics {τ_s^i}_i = 1^N satisfy the recursion τ_s + 1^i = τ_s^I_s+1^i + h̃_s(ξ_s^I_s+1^i, ξ_s + 1^i), where h̃_s(x_s,x_s+1) = ∇_θlog m_θ(x_s,u_s+1;x_s+1) r_θ(x_s+1,y_s+1). 
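As an illustration only, the filtering recursion and the additive statistics τ_s^i above can be summarized in the following NumPy sketch of the bootstrap filter with multinomial resampling (path-space variant, i.e. without the backward sampling discussed next). The helper functions sample_prior, sample_m, log_r, grad_log_m, and grad_log_r are assumed to be provided, e.g. via automatic differentiation of the Gaussian log-densities; their names and signatures are not from the paper.

```python
import numpy as np

def bootstrap_filter_score(y, u_feat, sample_prior, sample_m, log_r,
                           grad_log_m, grad_log_r, n_particles=100):
    """Bootstrap particle filter returning a Monte Carlo estimate of the score (Fisher's identity).

    y: observations, shape (T, d_y); u_feat: extracted features, shape (T, d_u).
    sample_prior(N) draws N particles from rho_0; sample_m(parents, u) propagates them
    through the state model; log_r(x, y) is the observation log-density per particle;
    grad_log_m / grad_log_r return per-particle gradients w.r.t. theta, shape (N, n_params).
    """
    T = len(y)
    xi = sample_prior(n_particles)                       # initial particles
    logw = log_r(xi, y[0])                               # importance weights at k = 0
    w = np.exp(logw - logw.max()); w /= w.sum()
    tau = grad_log_r(xi, y[0])                           # additive statistics tau_0^i

    for k in range(1, T):
        idx = np.random.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
        parents = xi[idx]
        xi = sample_m(parents, u_feat[k])                # propagate with the prior kernel
        logw = log_r(xi, y[k])                           # reweight with the observation density
        w = np.exp(logw - logw.max()); w /= w.sum()
        # path-space update of the additive statistic (no backward sampling)
        tau = tau[idx] + grad_log_m(parents, u_feat[k], xi) + grad_log_r(xi, y[k])

    return (w[:, None] * tau).sum(axis=0)                # weighted sum = score estimate
```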
Following <cit.>, the degeneracy of the path-space smoother can be overcome by performing an online PaRIS update of the statistics τ_s + 1^i, 1≤ i≤ N, using the backward kernel of the hidden Markov chain. An appealing application of the last layer approach is recursive maximum likelihood estimation, i.e., where new observations are used only once to update the estimator of the unknown parameter θ. In <cit.>, the authors used in particular Stochastic Gradient Descent (SGD) and Stochastic Gradient Langevin Dynamics to update the estimate of θ and perform uncertainty quantification. In state space models, recursive maximum likelihood estimation produces a sequence {θ_k}_k≥ 0 of parameter estimates by setting, for each new observation Y_k, k≥ 1, θ_k = θ_k-1 + γ_k ∇_θℓ_θ(Y_k | Y_0:k - 1) , where ℓ_θ(Y_k | Y_0:k - 1) is the loglikelihood of the new observation given all the past, and {γ_k}_k≥ 1 are positive step sizes such that ∑_k ≥ 1γ_k = ∞ and ∑_k ≥ 1γ_k^2 < ∞. The practical implementation of such an algorithm, where ∇_θℓ_θ(Y_k | Y_0:k - 1) is approximated using the weighted samples {(ξ^ℓ_k,ω^ℓ_k)}_ℓ=1^N, can be found for instance in <cit.>. The PaRIS algorithm proposed in <cit.> allows using the weighted samples {(ξ^ℓ_k,ω^ℓ_k)}_ℓ=1^N and the statistics {τ^ℓ_k}_ℓ=1^N on the fly to approximate ∇_θℓ_θ(Y_k | Y_0:k - 1). Although this algorithm is very efficient for updating parameters recursively, it is computationally intensive and therefore fits our last layer approach particularly well, as it would be intractable for very high-dimensional latent states. § EXPERIMENTS §.§ Data and model We benchmarked our approach on the public Electricity Transformer Temperature (ETT) dataset, designed in <cit.> to forecast oil temperature based on hourly power load records (ETTh1 subset). The input model is an L=3 layered GRU model, as defined in the deep learning framework PyTorch[https://pytorch.org/docs/stable/generated/torch.nn.GRU.html]: for all 1 ≤ℓ≤ L and all 1 ≤ k ≤ T, r^ℓ_k = σ(W_ir Ũ^ℓ - 1_k + b_ir + W_hr Ũ^ℓ_k-1 + b_hr) , z^ℓ_k = σ(W_iz Ũ^ℓ - 1_k + b_iz + W_hz Ũ^ℓ_k-1 + b_hz) , n^ℓ_k = tanh(W_in Ũ^ℓ - 1_k + b_in + r^ℓ_k (W_hn Ũ^ℓ_k-1 + b_hn)) , Ũ^ℓ_k = (1-z^ℓ_k) n^ℓ_k+z^ℓ_k Ũ^ℓ_k-1 , where φ = {(W_is, b_is, W_hs, b_hs), s ∈{r, z, n}} are unknown parameters, and σ: x ↦ 1/(1+e^-x) is the sigmoid function. The first layer of the network is assimilated to the input vectors, Ũ^0_k ≡ U_k and Ũ^ℓ_0 ≡ 0. The input dimension d_in=6 corresponds to the number of power load records in the dataset, and we set the output dimension to 6. In order to estimate the parameters φ, we introduce an auxiliary GRU layer responsible for computing oil temperature predictions. During training, we minimize the cost function ℒ_input(φ, ψ) = ∑_i=1^N‖Ŷ^i_1:T - Y^i_1:T‖^2, where Ŷ^i_1:T denotes the prediction obtained with this deterministic model (the auxiliary GRU layer applied to h_φ(U^i_1:T)), for each sample of the dataset. The state space model is implemented using the PyTorch implementations of RNN and Linear layers. We chose the following forms for f_θ and g_θ: g_θ : (X_k-1, Ũ_k) ↦tanh(W_gx X_k-1 + b_gx + W_guŨ_k + b_gu) , f_θ : X_k ↦σ(W_f X_k + b_f) , where θ = {W_gx, b_gx, W_gu, b_gu, W_f, b_f, Σ_x, Σ_y} are unknown parameters. All following experiments are conducted with N=100 particles and a batch size of 32, using the Adam optimizer introduced in <cit.>. The learning rate was chosen using a simple grid search. We train models for a maximum of 50 epochs, and employ early stopping to prevent overfitting.
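As a rough sketch only (the paper specifies the forms of f_θ and g_θ above but not the full implementation), such a state space last layer can be written as a small PyTorch module; the dimensions, the diagonal parameterisation of Σ_x and Σ_y, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SMCLastLayer(nn.Module):
    """State space last layer sketch: X_k = g(X_{k-1}, U~_k) + eta_k, Y_k = f(X_k) + eps_k."""

    def __init__(self, feat_dim=6, state_dim=6, obs_dim=1):
        super().__init__()
        self.lin_gx = nn.Linear(state_dim, state_dim)          # W_gx, b_gx
        self.lin_gu = nn.Linear(feat_dim, state_dim)           # W_gu, b_gu
        self.lin_f = nn.Linear(state_dim, obs_dim)             # W_f, b_f
        self.log_std_x = nn.Parameter(torch.zeros(state_dim))  # diagonal Sigma_x (assumption)
        self.log_std_y = nn.Parameter(torch.zeros(obs_dim))    # diagonal Sigma_y (assumption)

    def g(self, x_prev, u_feat):
        return torch.tanh(self.lin_gx(x_prev) + self.lin_gu(u_feat))

    def f(self, x):
        return torch.sigmoid(self.lin_f(x))

    @torch.no_grad()
    def sample(self, feats):
        """Sample one trajectory of observations given extracted features of shape (T, feat_dim)."""
        x = torch.zeros(self.lin_gx.out_features)
        ys = []
        for u_feat in feats:
            x = self.g(x, u_feat) + torch.exp(self.log_std_x) * torch.randn_like(x)            # eta_k
            y = self.f(x) + torch.exp(self.log_std_y) * torch.randn(self.lin_f.out_features)   # eps_k
            ys.append(y)
        return torch.stack(ys)
```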
§.§ Evaluations In this section, we illustrate the ability of our model to capture the distribution of future observations by evaluating the benchmarked models using the following protocol. We draw 48-hour-long samples (u_1:48, y_1:48) from the validation dataset, composed of a 24-hour-long lookback window (u_1:24, y_1:24), containing historic commands and observations, and a prediction window where only future commands are available (u_25:48). Each model produces N=100 24-hour-long forecasts (y_25:48^(i))_i=1^N. We compute the Root Mean Squared Error (RMSE) between the observations and the average of the forecasts: RMSE^2 = T^-1∑_k=1^T (Y_k - N^-1∑_i=1^N y_k^(i))^2. Additionally, we evaluate the Prediction Interval Coverage Probability (PICP, see <cit.>), which measures the ratio of observations falling within a 95% confidence interval: PICP = T^-1∑^T_k=11_[y_L^k, y_U^k](Y_k), where y_U^k (resp. y_L^k) is the upper (resp. lower) bound of the confidence interval. Both criteria are reported in Table <ref>. For our proposed model, predictions can be performed by approximating the predictive density p_θ,φ(y_k+1|U_1:k+1,Y_1:k) by p^N_θ,φ(y_k+1)= ∑_i=1^Nω_k^i p_θ,φ(y_k+1|ξ_k^i,U_k+1) , where p_θ,φ(y_k+1|ξ_k^i,U_k+1) is the predictive distribution of Y_k+1 described in Section <ref>. In order to explore longer ranges, we run our model to obtain N samples for any time horizon. The associated intervals containing 95% of the samples are displayed in Figure <ref> for 24-hour forecasts. We compared our model with MC Dropout methods, by implementing recurrent dropout layers as described in <cit.>. The optimal dropout rate p_drop=0.01 that we tuned by grid search is smaller than the value proposed in the original paper, which may be due to our much longer time series, similarly to the results presented in <cit.>. Additionally, we evaluate the model with p_drop=0.05, which slightly degrades performance. The training procedure is similar to that of traditional recurrent models; during inference, we draw 100 samples from the dropout layers and compute the same average forecasts and intervals as for our model. Despite being based on the same deep learning architecture, the MC Dropout model is still largely overconfident, while our proposed model provides more credible empirical confidence intervals. We also experimented with a Gaussian linear Hidden Markov Model (HMM) whose parameters are estimated with the Kalman smoother using the Expectation Maximization (EM, <cit.>) algorithm. Out of a range of possible latent dimension sizes in {1, 2, 4, 6}, we selected a latent dimension of 4 as it yielded the best performance. § CONCLUSION In this paper, we introduced a decoupled architecture for uncertainty estimation on a time series dataset. Our deep neural network backbone is responsible for extracting high-level features, while particle filtering in the last layer allows modelling recurrent nonlinear uncertainty. Our proposed model does not suffer from the overconfidence of MC Dropout methods, while significantly improving on the performance of Hidden Markov Models. We demonstrate the potential of implementing latent space models as a modified RNN cell; more complex architectures, such as the GRU network used in the input model, or LSTM cells, could be considered. Our decoupled architecture enables incorporating uncertainty estimation into an already trained network. This opens the door to multiple cheap fine-tunings of the last layer parameters from a single global pretraining.
http://arxiv.org/abs/2307.00892v1
20230703094410
Tales from the Git: Automating the detection of secrets on code and assessing developers' passwords choices
[ "Nikolaos Lykousas", "Constantinos Patsakis" ]
cs.SE
[ "cs.SE", "cs.CR" ]
Typical users are known to use and reuse weak passwords. Yet, as cybersecurity concerns continue to rise, understanding the password practices of software developers becomes increasingly important. In this work, we examine developers' passwords on public repositories. Our dedicated crawler collected millions of passwords from public GitHub repositories; however, our focus is on their unique characteristics. To this end, this is the first study to investigate developers' traits in password selection across different programming languages and contexts, e.g. email and database. Despite the fact that developers may have carelessly leaked their code on public repositories, our findings indicate that they tend to use significantly more secure passwords, regardless of the underlying programming language and context. Nevertheless, when the context allows, they often resort to similar password selection criteria as typical users. The public availability of such information in a cleartext format indicates that there is still much room for improvement and that further targeted awareness campaigns are necessary. § INTRODUCTION One of the cornerstones of security is undoubtedly authentication, as it guarantees that only the prescribed entities are granted access to a given piece of information. Since credentials can be forgotten, intercepted, and stolen, there is a lot of work on biometrics and physically unclonable functions. The major advantage of these approaches is that one does not need to remember or carry anything and simply authenticates with what she is. Nevertheless, regardless of how seamless these modalities are, credentials are still the prevalent method to authenticate to a service. Contrary to their criticality, users, in general, do not make wise choices when selecting passwords. More precisely, users tend to choose guessable passwords, e.g. "password", celebrity names, bands, songs, dates, heroes from films and novels, or easy-to-type passwords. This has been repeatedly reported and exploited, leading to the compromise of various systems and services. Therefore, service providers resort to password policies intended to guarantee that passwords are difficult to guess, forcing users to include upper- and lower-case characters, digits, and special characters, to use at least 8 characters, and so on. However, even this does not solve the problem. A typical user would resort to a simple variation of a typical password. Regardless of the choices that users make about their passwords, a key issue is how these passwords are stored.
The reason is that one has to consider that the passwords must not be accessible from a malicious insider or a compromised host. Therefore, it is "prohibited" to store passwords in plaintext form. To allow for authentication without revealing the passwords, hashing and salting mechanisms are used and prevent, or at least significantly delay, attackers from performing offline attacks in case the password file has been leaked. The above are well-known and widely studied problems. Going a step further, one has to consider the case of developers. A key difference here is that hard-coded credentials are distributed with their binaries and can greatly expose their organisation and infrastructure. There are numerous cases of such exposures, e.g., firmware [<https://nakedsecurity.sophos.com/2021/01/06/zyxel-hardcoded-admin-password-found-patch-now/>], security mechanisms[<https://thehackernews.com/2016/01/fortinet-firewall-password-hack.html>], ICS controllers[<https://www.techtarget.com/searchsecurity/news/252442369/Yokogawa-Stardom-vulnerability-leaves-hardcoded-creds-in-ICS-controllers>]. Even more, the adoption of DevOps and Git has introduced another exposure as careless developers commit code where credentials, private keys, tokens and other possible sensitive information is exposed <cit.>. To detect credentials, tokens, and passwords, the mainstream approach is to look for high entropy strings or use regular expressions. Nevertheless, developers are not always so constrained in their writing. Their source code does not always comply with specific conventions and norms. Based on the above, we investigate whether developers follow the same patterns that common users do when selecting passwords. Theoretically, since developers are more accustomed to computers and security, their choices should be significantly more secure and less predictable. To the best of our knowledge, no concrete study is exploring this problem. Merely asking people working on computers to select passwords triggers a bias as they feel that they are tested on this aspect and know how to properly address it. However, this does not necessarily mean that they would use the same criteria when they are not being monitored for this aspect and freely select them for the comfort of their daily work. To answer the above questions, we leverage a large-scale dataset, the biggest available to date, containing more than 2 million secrets that were committed in public repositories on GitHub. This dataset contains labels according to programming language, type of secret (e.g., token, password), usage (e.g., mail, database) etc. Based on this dataset, we provide a thorough analysis of the passwords that developers actively use to detect patterns. In the next section, we provide a brief overview of the related work. Then, we describe our data collection methodology and our dataset. Afterwards, in Section <ref>, we outline our credential extraction methodology. In Section <ref>, we analyse the dataset and discuss the structure and emerging patterns. Finally, the article concludes by summarising our contributions, limitations of our work, and possible extensions for future work. § RELATED WORK As already discussed, users often choose predictable passwords for their services <cit.>. Some studies suggest that people with a computer science background select stronger passwords than others <cit.>. However, users tend to have misconceptions regarding password generation. For instance, the interviews conducted by Ur et al. 
<cit.> uncovered several of them, e.g. that adding a symbol or using a 'non-public' date or name would lead to a secure password. Most of these misconceptions can be attributed to users' lack of understanding of either the common patterns in which people think or attackers' capacity to brute-force variations of keywords known to be associated with an individual. In fact, passwords reflect several traits of ourselves and our culture <cit.>. From the above, one can derive several lines of research to protect users, as they still seem reluctant to use, e.g., password managers <cit.>. In fact, user passwords are shown to adhere to specific models, e.g. Zipf-like models <cit.>. One way to make users use strong passwords is to prevent them from using weak ones by enforcing specific policies on the characters used in the password, e.g. mandating longer passwords and the use of letters (both small and capital), digits, and symbols. Despite the fact that this sounds like a valid policy, users often end up using passwords of predictable forms, which cannot be considered much more secure. Therefore, one research line is to measure the strength of passwords without context, trying to determine the entropy of the string, its length, the complexity and length of the underlying patterns, the positions of the characters on the keyboard, etc. There are various approaches <cit.> with varying accuracy <cit.>, leveraging, among others, probabilistic context-free grammars, Markov models, and neural networks. The fact that users tend to use a limited set of rules to generate passwords <cit.> makes them guessable <cit.>, with users often failing to understand the reasons why <cit.>. The guessability of user passwords has emerged as another research line that tries to quantify the extent of this problem. To this end, researchers exploit data-driven methods to assess how similar passwords are to leaked passwords <cit.>, but also to train password crackers <cit.> and to prevent such attacks <cit.>. Nevertheless, developers are neither typical users nor merely savvy ones. They develop software solutions, and regulations such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States force them to apply secure policies on the collection, processing, and handling of personal data. Nevertheless, their practices are not the best. For instance, they do not store passwords properly <cit.>, misconfigure the underlying infrastructure <cit.>, and do not sanitise input properly <cit.>. The wide adoption of DevOps pipelines by the industry and the use of Git have given rise to another issue: developers often make public commits with their code containing secrets and passwords. One of the first studies is that of Meli et al. <cit.>, who harvested thousands of API keys, private keys, and secrets from public GitHub repositories. This has sparked the development of targeted crawlers for code repositories and source code scanners to extract secrets and passwords <cit.>. § DATA COLLECTION To collect hardcoded credentials from GitHub, we employ a method similar to <cit.> and use a set of targeted queries to collect candidate files possibly containing various types of secrets. Several of these queries were derived from the authentication-related code snippets provided by Feng et al. <cit.>. Moreover, we defined additional queries capturing authentication contexts not considered in <cit.>, e.g. login automation using Selenium WebDriver.
Given that, at the time of writing, the GitHub API allowed only scoped code searches (within a specific repository, user, organization, etc.), we crafted a crawler leveraging the search interface of <http://github.com> instead of the API, and collected all the returned results. The dataset was collected over 6 months, from May to Oct. 2021. In total, approximately 30M files were collected. Our queries targeted a wide range of programming languages and configuration file formats. § CREDENTIAL EXTRACTION The inherent diversity of passwords makes detection approaches based on regular expressions and entropy-related heuristics ineffective <cit.>. Naturally, passwords lack a well-defined structure and frequently do not comply with strong password policies (which would ensure their distinctiveness compared to regular strings found in source code). As such, to effectively extract credentials from our dataset, we leverage methods from the PassFinder model for password leakage detection in source code presented in <cit.>, while introducing several improvements to account for the broader set of target languages we consider compared to Feng et al. <cit.>. More precisely, PassFinder employs an ensemble of two text convolutional neural network models, namely the Context Model and the Password Model. The Context Model is trained to classify source code snippets surrounding specific seed elements (including names of methods, variables, constants, etc.) that can be potentially relevant for various authentication contexts. To establish the optimal number of lines of these snippets, Feng et al. performed a series of experiments and set the context window size at 6, which provides a good enough representation of the context in which these seed keywords appear. The Password Model is trained to classify strings extracted from the previously identified code snippets as potential passwords, machine-generated secrets (e.g. API keys, JWT tokens, etc.), or ordinary strings. To enhance performance and accuracy, we made a series of improvements, including: introducing a series of novel features for the Password Model to assess the degree of human memorability of strings, drawing upon the research presented in Casino et al. <cit.>; augmenting the candidate password extraction from source code with an extensive set of regular expressions capturing cases where passwords are not wrapped in quotes (e.g. within connection strings, URLs, comments, etc.); as well as expanding the datasets used to train its two components. Nevertheless, these improvements fall outside the scope of the current work, and thus a detailed exploration of their impact is reserved for future research. Finally, we also altered the architecture of the neural network tasked with modelling the context semantics, i.e. the Context Model, to classify code snippets belonging to the following categories of credentials: * Database: Credentials used to authenticate connections to various data stores, including MySQL, Microsoft SQL Server, PostgreSQL, Oracle, MongoDB, etc. * Mail: Credentials used to authenticate connections to mail servers, including the SMTP, IMAP, and POP3 protocols. * Automation: Strings submitted to log-in web forms using browser automation libraries, such as Selenium. * Web Service: Credentials used to perform HTTP basic authentication or proxy authentication, as well as credentials included in HTTP request bodies or parameters.
* Other: Authentication-related code not fitting the previous categories. To assess the performance of both components of our enhanced version of PassFinder, we manually examined the results on a random sample of 2,000 authentication snippets, which included code in languages not considered in <cit.>. The performance of our Password Model was better than that of Feng et al. (Macro-F_1 score of 98.1% vs 96.79%), while the augmented Context Model achieved a Macro-F_1 score of 88.7% in terms of credential category classification. It is important to note that we do not consider strings containing non-ASCII characters as potential passwords. § DATASET Next, we present the password dataset extracted by applying the credential extraction approach described in Section <ref> to the collected source code files (see Section <ref>), as well as the dataset of leaked passwords we use for comparison. §.§ Hardcoded credentials on GitHub In total, we extracted 2,093,488 unique strings that were classified as passwords. Note that, for candidate passwords, we considered strings with a length of at least 5. From these, 425,071 were classified as machine-generated, that is, API keys, tokens, encrypted strings, and hashes, and 280,750 were included in wordlists of default credentials for various platforms[<https://github.com/danielmiessler/SecLists>]. To perform a fair comparison with the predominantly human-generated passwords in leaked credential databases, typically used for authenticating online accounts, we excluded these strings, resulting in a total dataset of 1,387,667 hardcoded passwords from 294,975 developers. In our dataset, 78.95% of developers (232,890) leaked a single password, while the maximum number of unique passwords associated with a single developer was 1,718, indicating their use in a unit testing context. Note that we do not examine password reuse across developers in this study, as many developers may have used the same credentials for the same service multiple times, shared them within a team, or added them to multiple unit tests. Thus, we focus on the 439,204 unique passwords, which constitute 31.65% of the human-generated passwords in our dataset. For the rest of this paper, we refer to this dataset as the Developers dataset. Note that, given the large scale of our dataset, there might be instances of machine-generated passwords that could not be filtered out using the employed heuristics. Nevertheless, we expect their impact on our comparison experiments to be negligible. §.§ Leaked passwords For leaked user passwords, we use the so-called RockYou2021 dataset. The dataset is a compilation of previously leaked databases and contains 8.4 billion password entries. The dataset was published on RaidForums, which has since been seized by the U.S. Department of Justice (DOJ)[<https://www.justice.gov/usao-edva/pr/us-leads-seizure-one-world-s-largest-hacker-forums-and-arrests-administrator>], as the DOJ notes that: Members could also earn credits through other means, such as by posting instructions on how to commit certain illegal acts. While researchers claim that password choices do not vary greatly over time, we opted for this dataset because, beyond its size and variety, it is expected to reflect the latest password policies on the selection of user passwords. For tractability reasons, in our experiments we consider a random sample of 10M unique passwords. § PASSWORD ANALYSIS AND COMPARISON In Figure <ref>, the Top-15 languages in the dataset are illustrated.
We observe that, although the majority of the most popular programming languages <cit.> are present, a significant number of credentials are contained in configuration files in standard formats, including JSON, Java Properties, YAML, and INI. Moreover, we notice a non-trivial number of passwords in Text files, meaning that developers can leak authentication snippets and credentials inside plaintext files, commonly used for note-taking purposes. The distribution of passwords across the different categories considered in Section <ref> is illustrated in Figure <ref>. We observe that `Database' and `Mail' are the most prominent categories across all the top languages, while passwords in the `Automation' category do not appear in configuration files. Nevertheless, these distributions are tightly related to the queries we performed for collecting the studied dataset, and as such, it is important to acknowledge the potential bias introduced by our specific choice of queries, which, while capturing a representative sample of the various password categories, may not provide a comprehensive view of all possible scenarios. Next, we study the distribution of password lengths across the Top-15 programming languages and the 5 credential types in our dataset. Accordingly, we plot these distributions in Figure <ref>. In Figure <ref> we observe that the majority of passwords have relatively similar median lengths, ranging from 9 to 11 characters. Notably, the TypeScript language stands out with a higher mean length of 13.12 characters, while the INI and JSON formats exhibit slightly shorter average password lengths of 10.40 and 10.46 characters, respectively. Figure <ref> focuses on the distribution of password lengths for each credential type. Here, we observe a more noticeable variation in password lengths across the different types. The `Web Service' and `Other' credential types exhibit the longest mean password lengths, at 14.27 and 13.49 characters, respectively. In contrast, `Automation' credentials have a shorter mean length of 10.39 characters, while the `Database' and `Mail' types show moderately longer average password lengths, with means of 11.41 and 11.74 characters, respectively. These differences in password lengths across credential types suggest that the context in which a password is used may play a more substantial role in determining its length, with some authentication contexts requiring or encouraging the use of longer, and potentially more secure, passwords. We proceed to compare the characteristics of the passwords in the Developers and RockYou2021 datasets. In Figure <ref> we plot the cumulative distribution function (CDF) of the password lengths for each dataset. A CDF displays the probability that a random variable takes a value less than or equal to a specific value. In this context, the CDFs illustrate the proportion of passwords with lengths less than or equal to a given length for both datasets. It is evident that developers use significantly longer passwords compared to typical users. For example, the probability of observing a password of length 10 or shorter is 53.46% for the Developers dataset, whereas the same probability for RockYou2021 is significantly higher, at 82.27%. This trend continues for longer password lengths as well, with the developers' passwords consistently exhibiting lower cumulative probabilities at each length compared to normal users. To further investigate the differences in password strength between the Developers and RockYou2021 datasets, we employ the well-known zxcvbn password strength estimator <cit.>.
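As an illustration of how such estimates can be obtained, the following short sketch queries the Python port of zxcvbn (assuming the zxcvbn package available on PyPI; the example passwords are arbitrary and not from our datasets):

```python
from zxcvbn import zxcvbn  # Python port of the zxcvbn estimator (assumed installed via pip)

def strength_stats(passwords):
    """Return the zxcvbn guess counts and scores for a list of passwords (illustrative sketch)."""
    guesses, scores = [], []
    for pw in passwords:
        result = zxcvbn(pw)                 # dict with 'guesses', 'score', crack-time estimates, ...
        guesses.append(result["guesses"])
        scores.append(result["score"])      # 0 (weakest) .. 4 (strongest)
    return guesses, scores

# Example usage on two obviously different passwords
g, s = strength_stats(["password123", "T7#rq!Vd2pLx"])
print(list(zip(g, s)))
```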
Zxcvbn is a widely-used password strength estimator that evaluates the strength of passwords by estimating the number of guesses an attacker would require to crack them, how long it would take, and by providing a password score ranging from 0 (weakest) to 4 (strongest). To this end, we plot the CDFs of the estimated number of guesses for both datasets in Figure <ref>. We observe that it follows the same trend as the password lengths (Figure <ref>), indicating that the developers' passwords are generally stronger and more resistant to brute-force attacks. Moreover, in Figure <ref> we plot the proportion of passwords with each zxcvbn score, from 0 to 4, for both datasets. It is clear that the developers' passwords dominate the higher scores (3 and 4), further confirming the enhanced security awareness among developers, who tend to create stronger and more secure passwords compared to normal users. Interestingly, for the lowest score (0), we also observe a higher fraction of developers' passwords compared to , albeit very small (<1%). This can be attributed to the fact that several database servers, etc., do not enforce password complexity requirements, allowing for weak passwords to be used. On the contrary, most of the online services from which the passwords of the dataset were leaked have enforced stricter password policies for user accounts preventing users from having passwords with a very low score. In Figure <ref> we compare the entropy of the passwords of the two datasets per password length. Clearly, the developers' passwords have more entropy than the passwords that typical users select for all lengths. Notably, this difference increases as the password length increases. Thus, we can again conclude that the developers' passwords are more random and thus more secure than the ones of users. Finally, we focus on four key features of passwords: the number of uppercase ASCII characters, lowercase ASCII characters, digits, and symbols. These features have been widely studied in the context of password security and strength <cit.>. A diverse combination of these character types is crucial to creating more secure and less predictable passwords, as it increases the search space for potential attacks and makes it more challenging for attackers to guess passwords using brute-force or dictionary-based methods. We present the pairwise correlation matrices (Pearson) of these features for the and datasets in Figure <ref>. We observe a significant difference in the correlation patterns between the two classes of passwords. For the Developers class, the correlation between uppercase and lowercase characters is 0.158, while for the RockYou class, it is -0.261. The positive correlation suggests that developers tend to incorporate a better mix of uppercase and lowercase letters in their passwords. Furthermore, the correlation between uppercase characters and symbols in is 0.120, compared to only 0.027 for . This indicates that developers are more likely to include a combination of uppercase letters and symbols, another factor contributing to stronger passwords. On the other hand, the correlation between lowercase letters and digits in is -0.134, whereas it is -0.652 for . This negative correlation in the RockYou class implies that as the number of lowercase letters increases, the number of digits decreases, leading to a less diverse and more predictable password composition. 
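To make the character-class comparison above concrete, the following minimal sketch shows one way such per-password feature counts and their pairwise Pearson correlations could be computed. This is our own illustration with made-up sample passwords and default pandas settings, not the authors' analysis code; the variable and column names are ours.

```python
import string
import pandas as pd

def char_class_features(passwords):
    """Count uppercase, lowercase, digit, and symbol characters in each password."""
    return pd.DataFrame([{
        "upper":   sum(c.isupper() for c in pw),
        "lower":   sum(c.islower() for c in pw),
        "digits":  sum(c.isdigit() for c in pw),
        "symbols": sum(c in string.punctuation for c in pw),
    } for pw in passwords])

# Toy stand-ins for the two password sets (not the real data).
developer_like = ["Sup3r$ecretDB!", "Xk9#mQ2vL@77", "staging-Passw0rd!", "N0t-4-Pr0d_use"]
rockyou_like   = ["princess1", "iloveyou!!", "Password123", "qwerty2020"]

for name, sample in [("Developers", developer_like), ("RockYou", rockyou_like)]:
    print(name)
    print(char_class_features(sample).corr(method="pearson").round(3))
```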
In contrast, the weaker negative correlation between lowercase letters and digits in the Developers class suggests a more balanced distribution of these two character types. Based on the above, we conclude that the stronger cross-category correlations observed in developer passwords imply that they are more likely to contain a diverse mix of character types. As a result, the developers' passwords are more secure than those of typical users. § CONCLUSIONS While shifting from DevOps to DevSecOps, many concepts, tools, methods, and pipelines have to be revised to prevent vulnerabilities from reaching final products and services. Password leaks from source code are a common security issue that significantly impacts numerous organisations worldwide. In this work, we perform the first analysis of real-world developers' passwords, leveraging a large-scale dataset that we collected from public GitHub repositories. Additionally, we compare them to leaked user passwords from the RockYou2021 dataset. Our findings highlight that developers generally exhibit stronger password practices, with longer and more complex passwords. However, we also observed cases where developers' passwords were weak, particularly when certain systems did not enforce password complexity requirements. These results emphasise the need for continued education and awareness about secure password practices among developers, as well as the importance of enforcing password policies across all types of systems and services. They also underline the carelessness of developers who do not consider the risks to which they expose their clients when committing their code to public repositories, acting as if their code were hosted on their own premises and not accessible to anyone else. § ACKNOWLEDGEMENTS This work was supported by the European Commission under the Horizon Europe Programme, as part of the project LAZARUS (<https://lazarus-he.eu/>) (Grant Agreement no. 101070303). The content of this article does not reflect the official opinion of the European Union. Responsibility for the information and views expressed therein lies entirely with the authors. alsabah2018your Mashael AlSabah, Gabriele Oligeri, and Ryan Riley. Your culture is in your password: An analysis of a demographically-diverse password dataset. Computers & Security, 77:427–441, 2018. braz2021don Larissa Braz, Enrico Fregnan, Gül Çalikli, and Alberto Bacchelli. Why don't developers detect improper input validation?'; drop table papers;–. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pages 499–511. IEEE, 2021. casino2021intercepting Fran Casino, Nikolaos Lykousas, Ivan Homoliak, Constantinos Patsakis, and Julio Hernandez-Castro. Intercepting hail hydra: real-time detection of algorithmically generated domains. Journal of Network and Computer Applications, 190:103135, 2021. cass2020top Stephen Cass. The top programming languages: Our latest rankings put python on top-again-[careers]. IEEE Spectrum, 57(8):22–22, 2020. di2022revenge Alessia Michela Di Campi, Riccardo Focardi, and Flaminia L Luccio. The revenge of password crackers: Automated training of password cracking tools. In European Symposium on Research in Computer Security, pages 317–336. Springer, 2022. diakopoulos2015interactive Nick Diakopoulos and Stephen Cass. Interactive: The top programming languages 2015. IEEE Spectrum, online, July 20, 2015. dietrich2018investigating Constanze Dietrich, Katharina Krombholz, Kevin Borgolte, and Tobias Fiebig.
Investigating system operators' perspective on security misconfigurations. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 1272–1289, 2018. dinev2006extended Tamara Dinev and Paul Hart. An extended privacy calculus model for e-commerce transactions. Information systems research, 17(1):61–80, 2006. feng2022automated Runhan Feng, Ziyang Yan, Shiyan Peng, and Yuanyuan Zhang. Automated detection of password leakage from public github repositories. In International Conference on Software Engineering (ICSE’22), 2022. golla2018accuracy Maximilian Golla and Markus Dürmuth. On the accuracy of password strength meters. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 1567–1582, 2018. HuZG20a Yimin Guo and Zhenfeng Zhang. Corrigendum to "lpse: Lightweight password-strength estimation for password meters" [computers & security, volume 73, 2018, pages 507-518]. Comput. Secur., 94:101879, 2020. hitaj2019passgan Briland Hitaj, Paolo Gasti, Giuseppe Ateniese, and Fernando Perez-Cruz. Passgan: A deep learning approach for password guessing. In Applied Cryptography and Network Security: 17th International Conference, ACNS 2019, Bogota, Colombia, June 5–7, 2019, Proceedings 17, pages 217–237. Springer, 2019. HoushmandA12 Shiva Houshmand and Sudhir Aggarwal. Building better passwords using probabilistic techniques. In Robert H'obbes' Zakon, editor, 28th Annual Computer Security Applications Conference, ACSAC 2012, Orlando, FL, USA, 3-7 December 2012, pages 109–118. ACM, 2012. JakobssonD12 Markus Jakobsson and Mayank Dhiman. The benefits of understanding passwords. In Patrick Traynor, editor, 7th USENIX Workshop on Hot Topics in Security, HotSec'12, Bellevue, WA, USA, August 7, 2012. USENIX Association, 2012. malone2012investigating David Malone and Kevin Maher. Investigating the distribution of password choices. In Proceedings of the 21st international conference on World Wide Web, pages 301–310, 2012. mayer2022users Peter Mayer, Collins W Munyendo, Michelle L Mazurek, and Adam J Aviv. Why users (don't) use password managers at a large educational institution. In 31st USENIX Security Symposium (USENIX Security 22), pages 1849–1866, 2022. 2508859.2516726 Michelle L. Mazurek, Saranga Komanduri, Timothy Vidas, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor, Patrick Gage Kelley, Richard Shay, and Blase Ur. Measuring password guessability for an entire university. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, CCS '13, page 173–186, New York, NY, USA, 2013. Association for Computing Machinery. MeliMR19 Michael Meli, Matthew R. McNiece, and Bradley Reaves. How bad can it git? characterizing secret leakage in public github repositories. In 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019. The Internet Society, 2019. melicher2016fast William Melicher, Blase Ur, Sean M Segreti, Saranga Komanduri, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor. Fast, lean, and accurate: Modeling password guessability using neural networks. In 25th USENIX Security Symposium (USENIX Security 16), pages 175–191, 2016. naiakshina2020conducting Alena Naiakshina, Anastasia Danilova, Eva Gerlitz, and Matthew Smith. On conducting security developer studies with cs students: Examining a password-storage study with cs students, freelancers, and company developers. 
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–13, 2020. 3134082 Alena Naiakshina, Anastasia Danilova, Christian Tiefenau, Marco Herzog, Sergej Dechand, and Matthew Smith. Why do developers get password storage wrong? a qualitative usability study. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17, page 311–328, New York, NY, USA, 2017. Association for Computing Machinery. pal2019beyond Bijeeta Pal, Tal Daniel, Rahul Chatterjee, and Thomas Ristenpart. Beyond credential stuffing: Password similarity models using neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 417–434. IEEE, 2019. pasquini2021improving Dario Pasquini, Ankit Gangwal, Giuseppe Ateniese, Massimo Bernaschi, and Mauro Conti. Improving password guessing via representation learning. In 2021 IEEE Symposium on Security and Privacy (SP), pages 1382–1399. IEEE, 2021. saha2020secrets Aakanksha Saha, Tamara Denning, Vivek Srikumar, and Sneha Kumar Kasera. Secrets in source code: Reducing false positives using machine learning. In 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), pages 168–175. IEEE, 2020. 3025453.3026050 Blase Ur, Felicia Alfieri, Maung Aung, Lujo Bauer, Nicolas Christin, Jessica Colnago, Lorrie Faith Cranor, Henry Dixon, Pardis Emami Naeini, Hana Habib, Noah Johnson, and William Melicher. Design and evaluation of a data-driven password meter. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, page 3775–3786, New York, NY, USA, 2017. Association for Computing Machinery. 2858036.2858546 Blase Ur, Jonathan Bees, Sean M. Segreti, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor. Do users' perceptions of password security match reality? In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, page 3748–3760, New York, NY, USA, 2016. Association for Computing Machinery. UrNBSSBCC15 Blase Ur, Fumiko Noma, Jonathan Bees, Sean M. Segreti, Richard Shay, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor. "i added '!' at the end to make it secure": Observing password creation in the lab. In Lorrie Faith Cranor, Robert Biddle, and Sunny Consolvo, editors, Eleventh Symposium On Usable Privacy and Security, SOUPS 2015, Ottawa, Canada, July 22-24, 2015, pages 123–140. USENIX Association, 2015. von2013survival Emanuel Von Zezschwitz, Alexander De Luca, and Heinrich Hussmann. Survival of the shortest: A retrospective analysis of influencing factors on password composition. In Human-Computer Interaction–INTERACT 2013: 14th IFIP TC 13 International Conference, Cape Town, South Africa, September 2-6, 2013, Proceedings, Part III 14, pages 460–467. Springer, 2013. wang2022segments Chuanwang Wang, Junjie Zhang, Ming Xu, Haodong Zhang, and Weili Han. # segments: A dominant factor of password security to resist against data-driven guessing. Computers & Security, 121:102848, 2022. 7961213 Ding Wang, Haibo Cheng, Ping Wang, Xinyi Huang, and Gaopeng Jian. Zipf’s law in passwords. IEEE Transactions on Information Forensics and Security, 12(11):2776–2791, 2017. wang2019birthday Ding Wang, Ping Wang, Debiao He, and Yuan Tian. Birthday, name and bifacial-security: understanding passwords of chinese web users. In 28th USENIX Security Symposium (USENIX Security 19), pages 1537–1555, 2019. 2976749.2978339 Ding Wang, Zijian Zhang, Ping Wang, Jeff Yan, and Xinyi Huang. Targeted online password guessing: An underestimated threat. 
In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, page 1242–1254, New York, NY, USA, 2016. Association for Computing Machinery. 10063545 Elliott Wen, Jia Wang, and Jens Dietrich. Secrethunter: A large-scale secret scanner for public git repositories. In 2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pages 123–130, 2022. Wheeler16 Daniel Lowe Wheeler. zxcvbn: Low-budget password strength estimation. In Thorsten Holz and Stefan Savage, editors, 25th USENIX Security Symposium, USENIX Security 16, Austin, TX, USA, August 10-12, 2016, pages 157–173. USENIX Association, 2016. xia2019genpass Zhiyang Xia, Ping Yi, Yunyu Liu, Bo Jiang, Wei Wang, and Ting Zhu. Genpass: a multi-source deep learning model for password guessing. IEEE Transactions on Multimedia, 22(5):1323–1332, 2019. yan2004password Jeff Yan, Alan Blackwell, Ross Anderson, and Alasdair Grant. Password memorability and security: Empirical results. IEEE Security & privacy, 2(5):25–31, 2004.
http://arxiv.org/abs/2307.03321v2
20230706222511
Quantum Entanglement & Purity Testing: A Graph Zeta Function Perspective
[ "Zachary P. Bradshaw", "Margarite L. LaBorde" ]
quant-ph
[ "quant-ph", "math-ph", "math.MP" ]
Zachary P. Bradshaw (corresponding author, zbradshaw@tulane.edu), Department of Mathematics, Tulane University, New Orleans, USA; Margarite L. LaBorde, Department of Physics & Astronomy, Louisiana State University, Baton Rouge, USA. We assign an arbitrary density matrix to a weighted graph and associate to it a graph zeta function that is both a generalization of the Ihara zeta function and a special case of the edge zeta function. We show that a recently developed bipartite pure state separability algorithm based on the symmetric group is equivalent to the condition that the coefficients in the exponential expansion of this zeta function are unity. Moreover, there is a one-to-one correspondence between the nonzero eigenvalues of a density matrix and the singularities of its zeta function. Several examples are given to illustrate these findings. Quantum Entanglement & Purity Testing: A Graph Zeta Function Perspective Zachary P. Bradshaw Margarite L. LaBorde ================================================================================================== § INTRODUCTION One of the most interesting and often discussed properties arising in quantum information theory is that of quantum entanglement <cit.>, wherein a bipartite quantum system is described by a joint state in such a way that the state of one subsystem cannot be described independently of the other, no matter the physical distance between them. In the restricted case of pure states that we consider here, a state |ψ⟩ is called entangled if it cannot be written as a product state |ϕ⟩⊗|χ⟩. States which do not possess this property are called separable. In recent decades, a number of criteria for separability have been developed, including the positive partial transpose (PPT) criterion <cit.> and k-extendibility <cit.>, and the problem of determining whether a state is separable or entangled has been shown to be NP-hard in many cases <cit.>. Additionally, quantum algorithms which test for separability, such as the SWAP test <cit.> and its generalizations <cit.>, are under continuous development. Meanwhile, another perspective has arisen in which quantum properties are framed in a graph-theoretic setting. This study began with the work of Braunstein, Ghosh, and Severini <cit.>, who defined the density matrix of a graph as the normalized Laplacian associated to it. In their work, they give a graph-theoretic criterion for the entanglement of the associated density matrix; however, not all density matrices can be encoded into a graph in this way, thus limiting the field of applicability of this criterion. The work of Hassan and Joag <cit.> addresses this problem by associating an arbitrary density matrix to a weighted graph. They then define a modified tensor product of graphs in such a way that an arbitrary quantum state is a product state if and only if it is the density matrix of a modified tensor product of weighted graphs. The perspective of Braunstein et al. produces a graph-theoretic separability criterion known as the degree criterion <cit.>, which was shown to be equivalent to the PPT criterion in <cit.>. In the limited case of density matrices which can be associated to a graph in this sense, the PPT criterion can be replaced by a simple graph-theoretic criterion.
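As a quick illustration of the construction recalled above (the density matrix of a graph as its combinatorial Laplacian divided by its trace, a formula made explicit later in the paper), here is a minimal numpy sketch. It is our own code, not part of either cited work, and it only checks that the resulting matrix is a valid quantum state.

```python
import numpy as np

def graph_density_matrix(adjacency):
    """Density matrix of a graph: (degree matrix - adjacency matrix), normalized by the trace."""
    A = np.asarray(adjacency, dtype=float)
    D = np.diag(A.sum(axis=1))   # degree matrix
    L = D - A                    # combinatorial Laplacian
    return L / np.trace(D)       # normalize to unit trace

# Path graph on 3 vertices, edges {v1, v2} and {v2, v3}.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
rho = graph_density_matrix(A)
print(np.isclose(np.trace(rho), 1.0))              # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semi-definite
```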
Here we will take another step in this direction by establishing a connection between a family of bipartite pure state separability tests, the cycle index polynomials of the symmetric group, and a new graph zeta function associated to a density matrix. The crux of the argument is that this zeta function is the generating function for the cycle index polynomial of the symmetric group evaluated at the moments of the density matrix, and this is exactly the acceptance probability of these tests. Moreover, we show that there is a one-to-one correspondence between the singularities of this zeta function and the nonzero eigenvalues of the associated density matrix, thereby establishing a variant of the Hilbert-Pólya conjecture associated to this zeta function. In Section <ref>, we review the separability tests defined in <cit.> related to previous work in <cit.>. We then review previous constructions of the density matrix of a graph but ultimately take a different approach by naturally assigning an arbitrary density matrix to a weighted graph in Section <ref>. The relevant graph zeta function is then introduced in Section <ref>, and in Section <ref>, we prove that our separability tests are equivalent to the condition that the expansion of the corresponding zeta function has unit coefficients. This establishes a simple graph-theorectic test for the entanglement of pure bipartite states. We also derive a determinant representation for this function, and use it to prove the correspondence between its singularities and the nonzero eigenvalues of the associated density matrix in Section <ref>. Finally, we give concluding remarks in Section <ref>. § REVIEW OF SEPARABILITY TESTS The separability tests outlined in <cit.> are examples of G-Bose symmetry tests, where G is some finite group. Let U:G→𝒰(ℋ) be a unitary representation of G on the Hilbert space ℋ. A state ρ is called G-Bose symmetric if Π_GρΠ_G^†=ρ, where Π_G:=1/| G|∑_g∈ GU(g) is the projection onto the G-symmetric subspace, or the space of states |ψ⟩ such that U(g)|ψ⟩=|ψ⟩ for all g∈ G. To be clear, in general both mixed and pure states may demonstrate this property. In both <cit.> and <cit.>, it was shown how to test a state for G-Bose symmetry using a quantum computer, and we review a special case of this procedure here. The G-Bose symmetry property is equivalent to the condition ‖Π^G_S|ψ_S⟩‖_2=1. To test for this, we generate a superposition control state |+⟩_C = 1/√(|G|)∑_g∈ G|g⟩ wherein we take a superposition over some set of computational basis elements labelled with a corresponding group element G, and this can be done in a general sense with a quantum Fourier transform, as discussed in <cit.>. This control state is used to implement a corresponding unitary U(g) if the control qubit is in the state |g⟩. Afterwards, applying an inverse quantum Fourier transform and measuring all control qubits, accept if the outcome |0⟩⟨0|_C occurs and reject otherwise. The acceptance probability, given a pure state, is then given by ‖(⟨+|_C⊗ I_S) (1/√(|G|)∑_g∈ G|g⟩_C⊗(U_S(g)|ψ⟩_S))‖_2^2 =‖1/|G|∑_g∈ GU_S(g)|ψ⟩_S‖_2^2 =Π_S^G|ψ⟩_S_2^2 =[Π_S^G|ψ⟩⟨ψ|_S]. This result is easily generalized to mixed states by convexity, and we find that the acceptance probability is equal to [Π_S^Gρ_S]. Consider the circuit in Fig <ref>. To test for the separability of a bipartite pure state ψ_AB, we prepare k copies of the state ψ_AB and label the k copies of the A system by A_1 ⋯ A_k and the k copies of the B system by B_1 ⋯ B_k. 
We then perform an S_k-Bose symmetry test on the state ψ_AB^⊗ k, wherein we identify S with A_1B_1⋯ A_kB_k and U_S(π) with I_A_1⋯ A_k⊗ W_B_1⋯ B_k(π), where π∈ S_k and W_B_1⋯ B_k:S_k→𝒰(ℋ_B_1⋯ B_k) is the standard unitary representation of S_k which acts on ℋ_B_1⋯ B_k≡ℋ_B_1⊗⋯⊗ℋ_B_k by permuting the Hilbert spaces according to the corresponding permutation. Define ρ_B:= _A[ψ_AB]. The acceptance probability for the bipartite pure-state separability algorithm is given by p^(k) := [Π_B_1⋯ B_kρ_B^⊗ k] where Π_B_1⋯ B_k:= 1/k!∑_π∈ S_kW_B_1⋯ B_k(π) is the projection onto the symmetric subspace. The acceptance probability p^(k) = 1 for all k if and only if ψ_AB is separable. Thus, the S_k-Bose symmetry test is indeed a separability test for pure bipartite states. In <cit.>, this test is generalized to any finite group G. Moreover, the acceptance probability of this generalized test is shown to be given by the cycle index polynomial of G, which is defined by 1/| G|∑_g∈ Gx_1^c_1(g)⋯ x_k^c_k(g) for permutation groups, where c_j(g) is the number of j-cycles in the cycle decomposition of g. This definition is easily extended to any finite group by Cayley's theorem <cit.>. When we specialize to the case G=S_k, this acceptance probability takes the form p^(k)=∑_a_1+2a_2+⋯ +ka_k=k∏_j=1^k([ρ_B^j])^a_j/j^a_ja_j!, where the sum is taken over the partitions of k. § DENSITY MATRICES AND WEIGHTED GRAPHS The study of the density matrix of a graph was initiated by Braunstein et al. in <cit.>. A graph X is a set V(X)={v_1,…,v_n} of labeled vertices along with a set E(X) of pairs {v_i,v_j} of vertices called edges. The (vertex) adjacency matrix A_X of the graph is given by (A_X)_ij:= 1, if {v_i,v_j}∈ E(X) 0, otherwise, and it encodes the information about the edges of the graph. The degree d(v_i) of a vertex v_i is the number of edges which include the vertex v_i. We define the degree matrix Δ_X of the graph to be the diagonal matrix consisting of the degrees of each vertex; that is, Δ_X:=diag(d(v_1),…,d(v_n)). The combinatorial Laplacian of the graph is defined to be the difference between the degree matrix and the adjacency matrix: L_X:=Δ_X-A_X, which is both positive semi-definite and symmetric; however, it does not have unit trace. For this reason, we define the density matrix of a graph to be ρ_X:=Δ_X-A_X/(Δ_X). We may prescribe an arbitrary orientation to the edges so that we may label them by e_1,…,e_| E(X)| and their respective inverses by e_| E(X)|+1,…,e_2| E(X)|. A path in the graph X is a sequence of edges a_1⋯ a_s such that the origin vertex of a_j+1 is the terminal vertex of a_j. The primes of a graph are defined to be the equivalence classes of closed, backtrackless, tailless, primitive paths. By this we mean equivalence classes of paths such that the origin vertex of a_1 is the terminal vertex of a_s, a_s a_1^-1, a_j+1 a_j^-1, and such that the path is not the power of another path. The equivalence classes are given by the cyclic shifts of a_1⋯ a_s. That is, we define [a_1⋯ a_s]=[a_sa_1⋯ a_s-1]=⋯=[a_2⋯ a_sa_1]. Clearly, there are density matrices which do not fit the graph-theoretic description outlined above. This point has been addressed by Hassan and Joag <cit.> by generalizing the association of a density matrix to a graph to include a subset of weighted graphs. A weighted graph is a graph equipped with a weight function ω:E(X)→ℂ. In what follows, we use the notation ω_ij:=ω({v_i,v_j}) for the weight of the edge connecting v_i to v_j. 
Define the adjacency matrix of a weighted graph (X,ω) by (A_X,ω)_ij:=|ω_ij|, if {v_i,v_j}∈ E(X) 0, otherwise, and the degree of the vertex v_i by d(v_i):=∑_j|ω_ij|. The degree matrix is defined the same way as before. An approach similar to that in <cit.> is to define the density matrix of a weighted graph by ρ_X,ω:=L_X,ω/(L_X,ω), where L_X,ω:=Δ_X,ω-A_X,ω is the weighted Laplacian of the graph. For this to make sense, we need the weighted Laplacian to be nonzero, and graphically this is equivalent to the condition that the graph does not consist only of loops. In order to insure that the density matrix is positive semi-definite, we could restrict our attention to weight functions with magnitudes unchanged by a swap of the indices. That is, weight functions satisfying |ω_ij|=|ω_ji|. Note that this condition is satisfied by both symmetric and conjugate symmetric weight functions. Let ω be a weight function satisfying |ω_ij|=|ω_ji|. Then the Laplacian of the weighted graph (X,ω) is positive semi-definite. Let u∈ℂ^n be an arbitrary vector upon which L_X,ω acts and denote its components by u_i. Then u^† L_X,ωu =u^†Δ_X,ωu-u^† A_X,ωu =∑_id(v_i)| u_i|^2-∑_i,j|ω_ij| u_i^*u_j =∑_i,j|ω_ij|| u_i|^2-∑_i,j|ω_ij| u_i^*u_j Now the first term in (<ref>) can be written as ∑_i|ω_ii|| u_i|^2+∑_i<j|ω_ij|| u_i|^2+∑_i<j|ω_ij|| u_j|^2. Similarly, the second term in (<ref>) is given by ∑_i|ω_ii|| u_i|^2+∑_i<j|ω_ij| u_i^*u_j+∑_i<j|ω_ij| u_iu_j^*. Thus, we have u^† L_X,ωu =∑_i<j|ω_ij|(| u_i|^2+| u_j|^2-u_i^*u_j-u_iu_j^*) =∑_i<j|ω_ij|| u_i-u_j|^2, from which it follows that the Laplacian is positive semi-definite. Since (<ref>) obviously has unit trace, it does indeed define a density matrix. Notice, however, that the entries of this density matrix are positive real numbers. By following the proof of Proposition <ref>, the reader will find that if |ω_ij| is replaced by ω_ij in the weighted adjacency matrix, then no conclusion can be made about the positive semi-definiteness of ρ_X,ω. It is therefore not obvious how to extend this construction to take into account density matrices with complex coefficients. For this reason, we will take a different approach by naturally assigning an arbitrary density matrix to a weighted graph. This shortcoming is noted by Hassan and Joag in their version of the construction; although, they show that the property of having a density matrix is invariant under isomorphism of graphs. Moreover, several conditions for which a weighted graph does or does not have a density matrix are given in their work <cit.>. For our purposes, a simpler approach can be taken which includes all density matrices at the expense of having to exclude further graphs. Indeed, it is not hard to construct a weighted graph (X,ω) from a density matrix ρ. We construct it as follows: * The number of vertices is the dimension n of the Hilbert space upon which ρ acts. Label them 1,…,n. * If ρ_ij is nonzero, include an edge from vertex i to vertex j with weight ω_ij:=ρ_ij. Notice that in the second point, an edge is included for both ρ_ij and ρ_ji, so that primes involving an edge from i to j followed by an edge from j to i can be considered (see Fig <ref>). It is this graph which we will use to construct our zeta function in the next section. In fact, the above procedure works for any matrix, not just density matrices. However, we will restrict our attention to those matrices which are positive semi-definite with unit trace so that relationships with quantum information can be developed. 
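To illustrate the two-step construction just described, the sketch below simply lists the vertices and the weighted edges read off from the nonzero entries of a density matrix. It is our own illustrative code, not the authors'; the example matrix is the rank-one state with all entries equal to 1/2 that reappears as an example later in the paper.

```python
import numpy as np

def weighted_graph_from_density_matrix(rho, tol=1e-12):
    """Vertices 0..n-1 and an edge (i, j) of weight rho[i, j] for every nonzero entry."""
    rho = np.asarray(rho, dtype=complex)
    n = rho.shape[0]
    vertices = list(range(n))
    edges = {(i, j): rho[i, j]
             for i in range(n) for j in range(n)
             if abs(rho[i, j]) > tol}
    return vertices, edges

plus_state = 0.5 * np.ones((2, 2))   # all entries equal to 1/2
V, E = weighted_graph_from_density_matrix(plus_state)
print(V)   # [0, 1]: two vertices, one per dimension
print(E)   # loops at 0 and 1, plus the edges (0, 1) and (1, 0), all of weight 1/2
```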
From this perspective, the only weighted graphs that we associate to a density matrix are those for which the matrix with entries ω_ij given by the weight function is a density matrix. While it is immediately apparent that the weights of the loops satisfy ω_ii≥0 with ∑_iω_ii=1, and the remaining weights satisfy ω_ij=ω_ji^*, a complete classification of this subset of weighted graphs remains an open question. § GRAPH ZETA FUNCTIONS The most famous zeta function is that of Bernhard Riemann <cit.>, which is defined as the function ζ(s):=∑_n=1^∞1/n^s for Re(s)>1 and its analytic continuation elsewhere. Associated to this function is the Riemann hypothesis, which states that all nontrivial zeros of the zeta function are contained on the vertical line with Re(s)=1/2. Proving this hypothesis is one of the most important problems in mathematics as many results rely on the assumption of its truth. One potential approach to solving this problem is the Hilbert-Pólya conjecture, which states that the imaginary parts of the nontrivial zeros of ζ are the eigenvalues of a self-adjoint operator. This function was connected to the theory of prime numbers by Euler upon proving the identity ζ(s)=∏_p prime(1-p^-s)^-1. Subsequently, several analogous functions have been defined, including the Ihara zeta function <cit.> associated to a graph X, which is defined by ζ_X(u):=∏_[P](1-u^ν(P))^-1, where the product is over the primes in X, and ν(p) denotes the length (number of edges) of P. There is a determinant formula for this function which can be traced back to Bass <cit.> and Hashimoto <cit.> given by ζ_X(u)^-1=(1-u^2)^r-1(I-Au+(Δ-I)u^2), where r=| E(X)|-| V(X)|+1 is the rank of the fundamental group of the graph. A generalization of the Ihara zeta function called the edge zeta function can be defined as follows: For a graph X, there is an associated edge matrix W with entries given by the variable ω_ab if the terminal vertex of edge a is the origin vertex of edge b and b a^-1, and zero otherwise. The edge zeta function is defined by ζ_E(W,X)=∏_[P](1-Ñ_E(P))^-1, where Ñ_E(C)=ω_a_1a_2ω_a_2a_3⋯ω_a_sa_1 is the edge norm and C=a_1⋯ a_s is a closed path in X. One can formulate analogs to the prime number theorem and the Riemann hypothesis for these functions, making them interesting in their own right. Here we will define a different but related zeta function ζ_ρ(u) associated to a density matrix. Indeed, we define ζ_ρ(u)=∏_[P](1-N_E(P)u^ν(P))^-1, where the product is over equivalence classes of primes in the weighted graph associated to ρ, ν(P) again denotes the length of the prime representative P, and N_E(P) is the product of the weights of the edges which make up P. If the weights of the graph were all unity, then the Ihara zeta function would be recovered. Of course, this cannot be true for a density matrix with rank greater than one since its trace is unity. However, this point is worth noting when (<ref>) is extended to all matrices, in which case the condition can certainly hold. Note that while (<ref>) is a generalization of the Ihara zeta function, it differs from the natural generalization seen in the book by Terras <cit.>. However, by setting ω_ab=ρ_iju where i denotes the origin vertex of edge a and j denotes the origin vertex of edge b (the terminal vertex of a), the density matrix zeta function is recovered. Indeed, the edge norm becomes the product of the weights of the closed path in the weighted graph associated to the density matrix multiplied by an extra factor of u^ν(P). 
Therefore, the density matrix zeta function defined by (<ref>) is a special case of the edge zeta function. In the next section, we will show that the separability tests in <cit.> and <cit.> corresponding to the symmetric group are equivalent to the conditions 1/n![d^n/du^nζ_ρ(u)]_u=0=1 for the zeta function of the graph associated to the reduced density matrix of a pure state. § EQUIVALENCE OF TEST WITH ZETA FUNCTION CRITERION Our main result relies on the fact that the zeta function associated to a density matrix is the generating function for the cycle index polynomial of the symmetric group evaluated at x_j=[ρ^j] for j=1,…,n. To see this, we will derive an exponential expression for this function from (<ref>). We will also prove the following theorem, which gives a determinate formula for this zeta function. Throughout, we will assume that | u| is small enough to force the convergence of the series involved. The proof is similar to that in <cit.> for the edge zeta function. Let ρ be a density matrix and let ζ_ρ denote the associated zeta function. Then ζ_ρ(u)=(I-uρ)^-1. Starting from the definition (<ref>), we have ζ_ρ(u) =∏_[P](1-N_E(P)u^ν(P))^-1. Now taking the logarithm, this becomes log(ζ_ρ(u)) =-∑_[P]log(1-N_E(P)u^ν(P)) =∑_[P]∑_j=1^∞(N_E(P)u^ν(P))^j/j =∑_j,m=1^∞∑_P ν(P)=m(N_E(P)u^ν(P))^j/jm, where in the second equality we have expanded the logarithm, and in the third, the inner sum is now over all prime paths of length m (the equivalence class of such a prime contains m elements, explaining the factor of m in the denominator). Let us define an operator by L:=∑_k,lρ_kl∂/∂ρ_kl and observe that N_E(P)=ρ_i_1j_1⋯ρ_i_ν(P)j_ν(P) for i_1,…,i_ν(P),j_1,…,j_ν(P) the vertices in P. Then L(N_E(P))^j=jν(P)(N_E(P))^j, so that we have Llog(ζ_ρ(u)) =∑_j,m=1^∞∑_P ν(P)=m(N_E(P)u^ν(P))^j =∑_j,m=1^∞∑_P ν(P)=mN_E(P^j)u^jν(P), where we have used the fact that (N_E(P))^j=N_E(P^j) since P^j is just j copies of each edge in P. Now, we are doing nothing more than summing over all the primes of any given length and of any given power. Thus, (<ref>) is equivalent to a sum over all closed, backtrackless, tailless paths. That is, we can drop the primitive assumption and write (<ref>) as Llog(ζ_ρ(u))=∑_CN_E(C)u^ν(C), where the sum is over the paths mentioned above. The ij-th component of the m-th power of uρ is given by (u^mρ^m)_ij=u^m∑_i_2,…,i_mρ_ii_2ρ_i_2i_3⋯ρ_i_m-1i_mρ_i_mj, and now we recognize the summand as the value of N_E(C) for some path C=e_ii_2e_i_2i_3⋯ e_i_m-1i_me_i_mj. Letting i=j, this becomes a closed, backtrackless, tailless path; therefore, we have shown (u^mρ^m)_ii=u^m∑_C o(C)=i ν(C)=mN_E(C), where o(C) denotes the origin vertex of C. Now summing over i, we have [ρ^m]u^m=u^m∑_C ν(C)=mN_E(C), so that summing over m produces Llog(ζ_ρ(u))=∑_m=1^∞[ρ^m]u^m. On the other hand, L log((I-uρ)^-1) =Llog((exp(log(I-uρ)^-1))) =Llog(exp(-(log(I-uρ)))) =-L(log(I-uρ)) =L(∑_m=1^∞ρ^m/mu^m) =L∑_m=1^∞(ρ^m)/mu^m =∑_m=1^∞(ρ^m)u^m, where in the last line, we have used L(ρ^m)=m(ρ^m). The argument is similar to the one for the relation L(N_E(P))^j=jν(P)N_E(P) used earlier. Thus, we have shown that Llog(ζ_ρ(u))=Llog((I-uρ)^-1). To finish off the proof, we note that with ρ_ij=0 for all i,j, we have ζ_ρ(u)=1=(I-uρ)^-1, so that applying the method of characteristics yields log(ζ_ρ(u)/(I-uρ)^-1)=0, which implies ζ_ρ(u)=(I-uρ)^-1. Theorem <ref> tells us that the reciprocals of the nonzero eigenvalues of ρ coincide with the zeros of 1/ζ_ρ and therefore the singularities of ζ_ρ. 
It thereby establishes a variant of the Hilbert-Pólya conjecture wherein, instead of the imaginary parts of the nontrivial zeros of a zeta function, we consider the singularities of the zeta function. Then the corresponding statement is that the singularities of this zeta function are given by the reciprocal eigenvalues of a self-adjoint operator, namely the matrix ρ. In fact, it is clear from (<ref>) that ζ_ρ has no zeros; although, it does vanish asymptotically. Note that the matrix representation of a quantum state is basis dependent, and under a change of basis, the weighted graph associated to this matrix will change too. However, the zeta function is unchanged, and this fact follows from Theorem <ref>. Indeed, if P is a change of basis matrix and ρ'=Pρ P^-1, then ζ_ρ'(u)=(I-uρ')^-1=(PP^-1-uPρ P^-1)^-1=(P)^-1(I-uρ)(P)=(I-uρ)=ζ_ρ(u). We can therefore rest assured that ζ_ρ(u) is well-defined. Let ρ be a density matrix and let ζ_ρ denote the associated zeta function. Then ζ_ρ(u)=exp(∑_m=1^∞[ρ^m]/mu^m). Starting from Theorem <ref>, we have ζ_ρ(u) =(I-uρ)^-1 =(exp(log(I-uρ)^-1)) =exp(-(log(I-uρ))) =exp((∑_m=1^∞ρ^m/mu^m)) =exp(∑_m=1^∞[ρ^m]/mu^m) The next theorem shows that ζ_ρ(u) is the generating function for the cycle index polynomial Z(S_n)(1,[ρ^2],…,[ρ^n]). This is the key to the relationship between this zeta function and the separability tests discussed in Section <ref>. Let ρ be a density matrix and let ζ_ρ denote the associated zeta function. Then ζ_ρ is the generating function for Z(S_n)(1,[ρ^2],…,[ρ^n]). By Corollary <ref>, we have the identity ζ_ρ(u)=exp(∑_m=1^∞[ρ^m]/mu^m). Let us split up this exponential into a product and then expand. This gives us ζ_ρ(u) =∏_m=1^∞exp([ρ^m]/mu^m) =∏_m=1^∞∑_j_m=0^∞[ρ^m]^j_m/m^j_mj_m!u^mj_m Then the n-th coefficient is given by the sum over all terms where the exponent ∑_k=1^∞kj_k of u is equal to n, each of which is given by a partition of n. That is, we have that the n-th coefficient is ∑_j_1+⋯+nj_n=n∏_k=1^n([ρ^k])^j_k/k^j_kj_k!, which is exactly the cycle index polynomial Z(S_n)(1,[ρ^2],…,[ρ^n]). Let ψ_AB be a bipartite pure state and let ρ_B:=_A[ψ_AB] be the reduced density matrix. Then ψ_AB is separable if and only if 1/n![d^n/du^nζ_ρ(u)]_u=0=1 for all n∈ℤ_≥0. In Section <ref> (and in <cit.>) it was shown that a bipartite pure state ψ_AB is separable if and only if the acceptance probability of the symmetric group separability algorithm reviewed there is 1 for all n, and that this is equivalent to the statement that the state is separable if and only if Z(S_n)(1,[ρ^2],…,[ρ^n])=1 for all n. By Theorem <ref>, the corollary then follows. By considering the density matrix zeta function in the original form (<ref>), Corollary <ref> exchanges the computation and evaluation of the cycle index polynomial of S_n for a graph-theorectic calculation. It follows that the tests in <cit.> can be viewed from a graph-theorectic perspective just as the PPT criterion was shown to be equivalent to the graph-theoretic degree criterion <cit.>. To illustrate this point, let us compute a few of the coefficients. For n=0, the condition for separability holds trivially since ζ_ρ(0)=∏_[P] ν(P)=0(1-N_E(P))^-1, but there are no primes with zero edges, so that the product evaluates to unity. For n=1, we use the fact that d/duζ_ρ(u)=ζ_ρ(u)d/dulog(ζ_ρ(u)) and note that d/dulog(ζ_ρ(u)) =-d/du∑_[P]log(1-N_E(P)u^ν(P)) =∑_[P]N_E(P)ν(P)u^ν(P)-1/1-N_E(P)u^ν(P). 
Now letting u=0, the only terms that survive are those with ν(P)=1, and we have [d/duζ_ρ(u)]_u=0 =[ζ_ρ(u)d/dulog(ζ_ρ(u))]_u=0 =∑_[P] ν(P)=1N_E(P). The only primes with ν(P)=1 are the loops, which correspond to the diagonal entries of the density matrix. Therefore, [d/duζ_ρ(u)]_u=0 is the sum of the diagonal entries of ρ; that is, [d/duζ_ρ(u)]_u=0=[ρ]=1, which is consistent with the cycle index polynomial calculation. For the n=2 case, the coefficient is given by evaluating 1/2d^2/du^2ζ_ρ(u) at u=0. Observe that 1/2d^2/du^2ζ_ρ(u) is given by 1/2(ζ_ρ(u)(d/dulog(ζ_ρ(u)))^2+ζ_ρ(u)d^2/du^2log(ζ_ρ(u))) so that evaluating at u=0 yields 1/2(1+[d^2/du^2log(ζ_ρ(u))]_u=0), where we have used the previous results for n=0,1 to simplify. We obtain the graph-theorectic version of the second term from (<ref>) in a similar way to the n=1 case. Indeed, using the standard quotient rule and evaluating at u=0 yields [d^2/du^2 log(ζ_ρ(u))]_u=0 =2∑_[P] ν(P)=2N_E(P)+∑_[P] ν(P)=1(N_E(P))^2, and the n=2 coefficient is therefore given by 1/2(1+2∑_[P] ν(P)=2N_E(P)+∑_[P] ν(P)=1(N_E(P))^2). Now, the corresponding cycle index polynomial calculation produces the value 1/2(1+[ρ^2]). Therefore, it must be the case that [ρ^2]=2∑_[P] ν(P)=2N_E(P)+∑_[P] ν(P)=1(N_E(P))^2, and this is easy to see when ρ is written in the diagonal basis (which exists by the spectral theorem). Indeed, in this case the graph consists only of loops so that the ν(P)=2 term vanishes and the ν(P)=1 term is the sum of the squares of the eigenvalues of ρ. Consider the density matrix ρ=1/2[ 1 1; 1 1 ], which is the pure state |+⟩⟨+|, so that all coefficients should be equal to unity. The weighted graph associated to ρ is given in Fig <ref>. As expected, the n=1 coefficient is ∑_[P] ν(P)=1N_E(P)=1/2+1/2=1 and the n=2 coefficient is 1/2 (1+2∑_[P] ν(P)=2N_E(P)+∑_[P] ν(P)=1(N_E(P))^2) =1/2(1+2(1/2·1/2)+(1/2)^2+(1/2)^2)=1. Next consider the maximally mixed state ρ=1/2I on one qubit. Since this state is mixed, every purification of it is entangled. The Bell state 1√(2)(|00⟩+|11⟩) is such a purification since tracing out either subsystem gives the state 1/2(|0⟩⟨0|+|1⟩⟨1|)=1/2I. Therefore, by computing the n=2 coefficient of ζ_ρ(u), we get a graph-theorectic proof that the Bell state is entangled. Indeed, the weighted graph associated to ρ is given in Fig <ref> and consists only of loops since the density matrix is diagonal. There are therefore no primes with ν(P)=2 and the only two primes with ν(P)=1 are the loops themselves. Thus, the n=2 coefficient is 1/2(1+0+(14+14))=3/4<1, from which it follows that ψ=1√(2)(|00⟩+|11⟩) is entangled. Note that the n=2 case is the graph-theorectic equivalent to the well-known SWAP test <cit.>. It is perhaps worth noting that these tests for each n double as tests for the purity of a density matrix. The n=2 case measures the purity exactly. In fact, this is really the mechanism behind the separability test since a pure bipartite state is separable if and only if its reduced state is pure. § SINGULARITY-EIGENVALUE CORRESPONDENCE Observe that (I-uρ) is a polynomial in u of degree at most equal to the dimension of the Hilbert space upon which ρ acts. Then the number of zeros of the reciprocal zeta function 1/ζ_ρ is given by the fundamental theorem of algebra, and this coincides with the number of singularities of ζ_ρ. Suppose that ψ_AB is a pure bipartite state and let ρ_B denote its reduced density matrix given by tracing out the A-subsystem. If ρ_B is pure, it is a rank one operator such that the only nonzero eigenvalue is λ=1. 
Since the reciprocals of the nonzero eigenvalues of ρ_B coincide with the zeros of 1/ζ_ρ by Theorem <ref>, we have another criterion for the separability of pure bipartite states. Let ψ_AB be a pure bipartite state and let ρ_B denote its reduced density matrix by tracing out one of the systems. Then ψ_AB is separable if and only if the only singularity of ζ_ρ(u) is at u=1. By Theorem <ref>, we have 1/ζ_ρ_B(u)=(I-uρ_B). Notice that if u=0, then ζ_ρ0. Then the condition that 1/ζ_ρ(u)=0 is equivalent to the eigenvalue problem ρ_B|ϕ⟩=1/u|ϕ⟩. Now, ψ_AB is separable if and only if ρ_B is a pure state, which is true if and only if the only nonzero eigenvalue is 1/u=1, so that we have u=1. This is equivalent to the only zero of the reciprocal zeta function being at u=1, which is equivalent to the only singularity of ζ_ρ_B being at u=1. To illustrate this point, we now examine the plots of the zeta function associated to different choices of density matrices. In Fig <ref>, we have an archetypal example of a pure state, |+⟩ = 1/√(2)(|0⟩+|1⟩), and the singularity at u=1 is marked by a vertical line. Let us consider something a little more interesting. Since the W-state given by |W⟩=1/√(3)(|001⟩+|010⟩+|100⟩) is a pure state, its zeta function should look the same as that in Fig <ref>. However, this state is entangled, so that we expect to pick up at least one singularity which is not at u=1 in the zeta function after the first system is traced out. This is exactly what we see in Fig <ref>. If we instead take ψ_AB to be the GHZ-state given by |GHZ⟩=1/√(2)(|000⟩+|111⟩), we find a similar result to |W⟩, except that the zeta function looks somewhat different. Here it turns out that both zeros of the reciprocal zeta function of the reduced state ρ_B are located at u=2, so that this is the only singularity that appears in Fig <ref>. Another interesting choice for ρ_B is given by the isotropic state that continuously transforms between the maximally entangled and maximally mixed states. Indeed, we define ρ(p)=[ 2-p/4 0 0 1-p/2; 0 p/4 0 0; 0 0 p/4 0; 1-p/2 0 0 2-p/4 ], from which we recover the maximally entangled state on two qubits at p=0 and the maximally mixed state at p=1. A three dimensional plot of ζ_ρ(u,p) is given in Fig <ref>. The maximally entangled state is a pure state, so that the cross section at p=0 looks like Fig <ref>. The zeta function of the maximally mixed state, being the cross section at p=1, is shown in Fig <ref>. We see from Fig <ref> that as p varies from zero to one, a second pole comes from infinity and recombines with the original u=1 pole when p=1, where the state is maximally mixed; however, the merging of the poles does not happen at u=1, as the first pole shifts to the right as p increases. Instead, the merging happens at the dimension of the Hilbert space (in this case 4), and this can be seen from (<ref>) with p=1, where the only non-zero eigenvalue is 1/4. It should be noted that neither Corollary <ref> nor Corollary <ref> apply to mixed states in general. To see this, define σ:=[ 1/3 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 2/3 ], and set ρ_B:=σ⊗σ⊗σ. Then there are singularities at u=27/8,27/4,27/2, and 27, as can be seen in Fig <ref>. These singularities correspond to the three-fold products of the nonzero reciprocal eigenvalues of σ, as expected. Moreover, the coefficients in the expansion of the zeta function go to zero as seen in Fig <ref>. By tracing out the two extra copies of σ, we recover the original state σ, which is mixed. 
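This example is easy to reproduce numerically without the Mathematica code that the authors provide (see the data availability statement below). The sketch that follows is our own and purely illustrative; it checks both claims at once: the singularities of the zeta function of ρ_B=σ⊗σ⊗σ sit at the reciprocals of its nonzero eigenvalues, and the determinant formula ζ_ρ(u)=det(I-uρ)^-1 agrees with the exponential expansion for small | u|.

```python
import numpy as np

sigma = np.diag([1/3, 0, 0, 2/3])
rho_B = np.kron(np.kron(sigma, sigma), sigma)   # the mixed example above (64 x 64)

# Singularities of zeta_rho = reciprocals of the nonzero eigenvalues of rho_B.
nonzero = [e for e in np.linalg.eigvalsh(rho_B) if e > 1e-12]
print(sorted(1 / e for e in nonzero))   # 27/8, then 27/4 and 27/2 (each three times), then 27

# Determinant formula versus the exponential expansion at a small value of u.
u = 0.1
zeta_det = 1.0 / np.linalg.det(np.eye(64) - u * rho_B)
zeta_exp = np.exp(sum(np.trace(np.linalg.matrix_power(rho_B, m)) * u**m / m
                      for m in range(1, 200)))
print(np.isclose(zeta_det, zeta_exp))   # True
```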
The corresponding plots for ζ_σ and its coefficients are shown in Fig <ref> and Fig <ref>, respectively. This shows that the tests we have developed for pure states do not hold for mixed states. Therefore, another method will have to be used to check for entanglement in this case. § CONCLUSION A connection between the zeta function ζ_ρ defined in (<ref>) and the separability of bipartite pure states was derived. However, this result does not carry over to the more general case where the joint state is mixed. Establishing a similar result in the mixed-state setting is worth an investigation of its own. On paper, to test whether a pure bipartite state is entangled, it suffices to compute the n=2 case, since the reduced state has unit purity if and only if the joint state is separable. From the graph-theoretic perspective, this means that the only relevant primes when testing for entanglement are those with length ν(P)=1 or 2. The higher-order tests in <cit.> are introduced because the acceptance probability decays as n→∞, so when noise is introduced, it may be beneficial to perform a higher-order test. Through the determinant formula (<ref>), it was shown that the nonzero eigenvalues of a density matrix ρ are in correspondence with the singularities of ζ_ρ. Such a connection has the potential to transfer the study of the spectra of random density matrices to a graph-theoretic setting, and we leave this to future work. The reader is directed to <cit.> for more about the connections between random matrix theory and the various graph zeta functions. § DATA AVAILABILITY STATEMENT The Mathematica code used in this work is available in the following GitHub repository: <https://github.com/mlabo15/ZetaFunctions>. § ACKNOWLEDGEMENTS ZPB and MLL acknowledge support from the Department of Defense SMART scholarship.
http://arxiv.org/abs/2307.02101v1
20230705082128
An Explicit Uniform Mordell Conjecture over Function Fields of Characteristic Zero
[ "Jiawei Yu" ]
math.NT
[ "math.NT", "math.AG" ]
An Explicit Uniform Mordell Conjecture over Function Fields of Characteristic Zero Jiawei Yu ============================================================================================== § INTRODUCTION Let C be a geometrically connected smooth projective curve of genus g>1 over ℚ. Mordell <cit.> conjectured that C(ℚ) is finite. The Mordell conjecture for curves over complex function fields was proved independently by Manin <cit.> and Grauert <cit.>, and Faltings <cit.> proved it over number fields. Vojta <cit.> gave an alternative proof with Diophantine approximation in both cases. Based on Vojta's proof, Dimitrov-Gao-Habegger <cit.> and Kühne <cit.> proved the following theorem (cf. <cit.>). For any integer g>1, there is a constant c(g) with the following property. Let K be a field of characteristic 0, C/K a geometrically connected smooth projective curve of genus g, J the Jacobian variety of C/K, and P_0∈ C(K) a rational point. Then for any subgroup Γ⊆ J(K) of finite rank ρ, ♯(i_P_0(C(K))∩Γ)≤ c(g)^ρ+1. Here i_P_0 is the Abel-Jacobi map i_P_0:C⟶ J, P⟼ P-P_0. The rank of Γ is the dimension of the ℚ-vector space Γ⊗ℚ. Note that Γ is not required to be finitely generated. In this article, we determine the constant c(g) in the non-isotrivial case explicitly. Our main theorem is as follows. Let K be a field of characteristic 0, C/K a geometrically connected smooth projective curve of genus g>1, J the Jacobian variety of C/K, and α a line bundle on C of degree 1. If C is non-isotrivial over ℚ, then for any subgroup Γ⊆ J(K) of finite rank ρ, ♯(i_α(C(K))∩Γ)≤(16g^2+32g+188)(20g)^ρ. In particular, if J(K) is of finite rank ρ, then ♯ C(K)≤(16g^2+32g+188)(20g)^ρ. A curve C is non-isotrivial over ℚ if it is not isomorphic to the base change of a curve from ℚ̅ to K̅. We will reduce the main theorem to the following theorem. Let k be an algebraically closed field of characteristic 0, K the function field of a smooth projective connected curve B/k, C/K a smooth projective geometrically connected curve non-isotrivial over k and of genus g>1, J the Jacobian variety of C/K, and α a line bundle on C of degree 1. Then for any subgroup Γ⊆ J(K̅) of rank ρ, ♯(i_α(C(K̅))∩Γ)≤(16g^2+32g+188)(20g)^ρ. In particular, if the K/k-trace of J is trivial, then ♯ C(K)≤(16g^2+32g+188)(20g)^ρ_LN. Here the Lang-Néron rank ρ_LN is the rank of J(K). The K/k-trace of an abelian variety A/K is the final object in the category of pairs (B,f) consisting of an abelian variety B/k and a morphism f:B_K→ A. It exists if k is algebraically closed in K (cf. <cit.>). With the curve B, we have a Weil height on C(K̅). The rational points are divided into two parts based on height and counted separately. We modify Vojta's proof of the Mordell conjecture to count points with large height. Vojta constructed an effective divisor with the Grothendieck-Riemann-Roch theorem. Then he gave a Diophantine approximation inequality on its index to derive an ineffective upper bound on the height. An obstruction to giving an explicit inequality is that the Weil height can be determined only up to a bounded function. Zhang <cit.> introduced the adelic line bundle, the height associated to which is the canonical height. We use it to refine Vojta's estimates and bound uniformly the number of points with large height. We also use Siu's theorem <cit.> to avoid dealing with higher cohomologies and simplify the construction of the divisor.
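Since the point of the main theorem is that the constant is completely explicit, the following trivial sketch (ours, not part of the paper) merely evaluates the stated bound for a few small values of the genus g and the rank ρ; it does not, of course, compute any rational points.

```python
def explicit_mordell_bound(g, rho):
    """The bound (16g^2 + 32g + 188)(20g)^rho from the main theorem."""
    return (16 * g**2 + 32 * g + 188) * (20 * g) ** rho

for g in (2, 3):
    for rho in (0, 1, 2):
        print(f"g={g}, rho={rho}: {explicit_mordell_bound(g, rho)}")
# For instance, g = 2 and rho = 1 give 316 * 40 = 12640.
```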
On the other hand, a conjecture of Bogomolov states that there are only finitely many points in C(K̅) with small height. Ullmo <cit.> proved it with the equidistribution theorem of Szpiro-Ullmo-Zhang <cit.>. Zhang <cit.> introduced the admissible adelic line bundle and showed that the positivity of the admissible self-intersection number implies the Bogomolov conjecture. Based on it, Looper-Silverman-Wilms <cit.> gave a quantitive uniform result on the Bogomolov conjecture over function fields, and Yuan <cit.> proved uniform Bogomolov conjecture over global fields independently. We combine the theorem of Looper-Silverman-Wilms and the modified Vojta's inequality to deduce our main theorem. Throughout this article, let 𝒞 be the minimal regular model of C over B. We may assume that C has split semi-stable reduction. Otherwise, replace K by a finite extension. The dualizing sheaf ω̅=ω_𝒞/B is a line bundle on 𝒞. Let X=C×_KC be the product, p_1,p_2:X→ C the projections, and Δ⊆ X the diagonal. We are grateful to Xinyi Yuan for his vital suggestions and patient help. We thank Zheng Xiao and Chunhui Liu for teaching the author Diophantine approximation. We thank Yinchong Song for reviewing the draft of this article. We thank Joseph Silverman and Robert Wilms for helpful comments. § DIFFERENCE BETWEEN HEIGHTS In this section we recall the adelic line bundles introduced in <cit.> and admissible metrics in <cit.>. We refer to <cit.> for a detailed treatment on admissible metrics. Then we compare the height associated to ω̅ with the canonical height. For v∈ B, let K_v be the local field and 𝒪_K_v its valuation ring. For a projective variety Z/K and a line bundle L on Z, a model (𝒵,ℒ) of (Z,L^⊗ n) over 𝒪_K_v induces a metric ‖·‖ on L_K_v as follows: For z∈ Z(K̅_v), it extends to z̅:Spec(𝒪_K̅_v)⟶𝒵. For ℓ∈ z^*L, define ‖ℓ‖=inf_a∈K̅_v{| a|:ℓ^n∈ az̅^*ℒ}. A metric ‖·‖' on L_K_v is continuous and bounded if ‖·‖'/‖·‖ is continuous and bounded for some metric ‖·‖ induced by a model. An adelic metric on L is a collection {‖·‖_v} of continuous and bounded metrics ‖·‖_v on L_K_v for all v∈ B, such that ‖·‖_v is induced by a model of (Z,L) over an open subvariety U⊆ B for all v∈ U. An adelic line bundle is a pair L̅=(L,{‖·‖_v}) consisting of a line bundle L and an adelic metric {‖·‖_v} on L. We say that an adelic metric {‖·‖_v} on L is the limit of a sequence of adelic metrics {‖·‖_n,v}(n=1,2,…), if ‖·‖_n,v is independent of n for v in some open subvariety of B, and ‖·‖_n,v/‖·‖_v convenges uniformly on X(K̅_̅v̅) for each v∈ B. A model (𝒵,ℒ) of (Z,L^⊗n) on B is relatively nef if ℒ is nef on special fibers of 𝒵. An adelic metric {‖·‖_v} is relatively nef if it is the limit of a sequence of adelic metrics induced by relatively nef models over B. An adelic line bundle is integrable if it is the tensor quotient of two relatively nef adelic line bundles. Denote by Pic(Z)_int the group of isometry classes of integrable adelic line bundles on Z. If Z=Spec(K), the degree of L̅∈Pic(Z)_int is defined as (L̅)=∑_v∈ B-log‖ s‖_v, where s is any non-zero section of L. If d=X, there is a Deligne pairing Pic(Z)_int^d+1⟶Pic(K)_int, (L̅_1,…,L̅_d+1)⟼π_*⟨L̅_1,…,L̅_d+1⟩ with respect to the structure morphism π:Z→Spec(K) (cf. <cit.>). The intersection number of d+1 line bundles is the degree of their Deligne pairing. In particular, we have the degree of an adelic line bundle on K̅. The height associated to L̅ is h_L̅:Z(K̅)⟶ℝ, z⟼(z^*L̅)/(z). Here (z) is the degree of the residue field of z over K. 
Choose a line bundle α_0 of degree 1 on C satisfying (2g-2)α_0=ω_C/K. Let θ be the image of C^g-1⟶ J, (x_1,…,x_g-1)⟼ i_α_0(x_1)+…+i_α_0(x_g-1). It is a divisor on J. The line bundle Θ=𝒪(θ)+[-1]^*𝒪(θ) is symmetric. Zhang <cit.> applied Tate's limiting argument to construct an integrable adelic metric on Θ. Denote the adelic line bundle by Θ̅. The canonical height is defined as the associated height ĥ=h_Θ̅:J(K̅)⟶ℝ. It is positive and |·|=ĥ(·)^1/2 extends to a norm on J(K̅)_ℝ=J(K̅)⊗ℝ satisfying the parallelogram law. Denote the corresponding inner product by ⟨·,·⟩_Θ. By abuse of language, we write ĥ(i_α_0(x)) as ĥ(x). For v∈ B, let Γ_v be the reduction graph of the special fiber 𝒞_v, i.e. the vertexes and edges of Γ_v represent the components and nodes of 𝒞_v respectively, and each edge is of length 1. Denote by F(Γ_v) the space of continuous and piecewise smooth function on Γ_v. For f∈ F(Γ_v), the Laplacian operator gives a measure Δ f=-f”(x)dx-∑ d_vf(P)δ_P. Here x represents a canonical coordinate on each edge. The summation is over P∈Γ_v and tangent directions v at P, and δ_P is the Dirac measure supported at P. For each probability measure μ, there exists a unique symmetric function g_μ:Γ_v^2→ℝ, called the Green function associated to μ, satisfying g_μ(x,·)∈ F(Γ), Δ g_μ(x,·)=δ_x-μ, ∫_Γ_vg_μ(x,·)μ=0. It can be seen directly from definition that g_μ(x,x)=sup_y∈Γ_vg_μ(x,y)≥ 0. The canonical divisor K_Γ_v of Γ_v is a formal linear combination K_Γ_v=∑(ω̅|_F_ξ)ξ. The summation is over all vertexes ξ, and F_ξ is the component of 𝒞_v represented by ξ. There is a unique probability measure μ satisfying g_μ(K_Γ_v,x)+g_μ(x,x) is a constant independent of x. Denote the measure by μ_v and the Green function by g_v. Let C_K_v^an be the analytic space introduced in <cit.>. With the retraction map C_K_v^an→Γ_v, we can view g_v as a function on (C_K_v^an)^2. Denote e_v=|ϖ_v|^-1 where ϖ is a uniformizer of K_v. The model (𝒞,ω̅) induces an adelic metric {‖·‖_Ar,v} on ω_C/K. The canonical admissible metric {‖·‖_a,v} of ω_C/K defined by ‖·‖_a,v(x)=‖·‖_Ar,v(x)· e_v^g_v(x,x), x∈ C_K_v^an. is an integrable adelic metric. Denote by ω_a the canonical admissible adelic line bundle. The self-intersection number ω_a^2 is non-negative. Similarily, there is an integral canonical admissible adelic line bundle 𝒪(Δ)_a=(𝒪(Δ),{‖·‖_Δ,v})∈Pic(X)_int determined by ‖·‖_Δ,a(x,y)=e_v^-i(x,y)-g_v(x,y), x,y∈ C(K̅), x y. Here, i(x,y) is the stable intersection number, i.e. if x,y∈ C(K') for some finite extension K'/K, and x̅,y̅ are their closures in the minimal regular model 𝒞' over the smooth projective curve B' with function field K', then i(x,y)=(x̅·y̅)/[K':K]. Denote by δ(Γ_v) the total length of Γ_v. Zhang <cit.> introduced the φ-invariant as φ(Γ_v)=-1/4δ(Γ_v)+1/4∫_Γ_vg_v(x,x)((10g+2)μ_v-δ_K_Γ_v). He also showed in the loc. cit that ω_a^2≥2g-2/2g+1∑_v∈ Bφ(Γ_v). (1)For any P_0∈ C(K̅), 0≤ h_ω̅(P_0)-g-1/gĥ(P_0)≤19/2ω_a^2. (2)For any P=(P_1,P_2)∈(X\Δ)(K̅), -19/2ω_a^2≤ i(P_1,P_2)-ĥ(P_1)/2g-ĥ(P_2)/2g+⟨ P_1,P_2⟩_Θ≤37/4ω_a^2. (1) Consider the morphism i_ω:C⟶ J, x⟼(2g-2)x-ω_C/K. By <cit.>, i_ω^*Θ̅=4g(g-1)ω_a-π^*π_*⟨ω_a,ω_a⟩ in Pic(C)⊗ℚ. Here π_*⟨·,·⟩ is the Deligne pairing for the structure morphism π:C→Spec(K). Since i_ω=[2g-2]∘ i_α_0, we have (2g-2)^2ĥ(P_0)=4g(g-1)h_ω_a(P_0)-ω_a^2. Hence, h_ω̅(P_0)-g-1/gĥ(P_0)= h_ω̅(P_0)-h_ω_a(P_0)+1/4g(g-1)ω_a^2 = ∑_v∈ Bg_v(P_0,P_0)+1/4g(g-1)ω_a^2 ≥ 0. By <cit.>, sup_x∈Γ_vg_v(x,x)≤15/4φ(Γ_v). 
Therefore, h_ω̅(P_0)-g-1/gĥ(P_0)= ∑_v∈ Bg_v(P_0,P_0)+1/4g(g-1)ω_a^2 ≤ 15/4∑_v∈ Bφ(Γ_v)+1/4g(g-1)ω_a^2 ≤ 30g+15/8g-8ω_a^2+1/4g(g-1)ω_a^2 ≤ 19/2ω_a^2. (2) Consider the morphism j:X⟶ J, (x,y)⟼ y-x. By <cit.>, j^*Θ̅=2𝒪(Δ)_a+p_1^*ω_a+p_2^*ω_a in Pic(X)⊗ℚ. Hence ĥ(P_2-P_1)=2h_𝒪(Δ)_a(P)+h_ω_a(P_1)+h_ω_a(P_2). We get i(P_1,P_2)-ĥ(P_1)/2g-ĥ(P_2)/2g+⟨ P_1,P_2⟩_Θ = i(P_1,P_2)-h_𝒪(Δ)_a(P)+∑_i=1,2((g-1)ĥ(P_i)/2g-h_ω_a(P_i)/2) = -∑_v∈ Bg_v(P_1,P_2)-1/4g(g-1)ω_a^2 ≥ -15/4∑_v∈ Bφ(Γ_v)-1/4g(g-1)ω_a^2 ≥ -19/2ω_a^2. By <cit.>, g_v(y,z)≥-sup_x∈Γ_vg_v(x,x). As a consequence i(P_1,P_2)-ĥ(P_1)/2g-ĥ(P_2)/2g+⟨ P_1,P_2⟩_Θ = -∑_v∈ Bg_v(P_1,P_2)-1/4g(g-1)ω_a^2 ≤ 15/4∑_v∈ Bφ(Γ_v)-1/4g(g-1)ω_a^2 ≤ 37/4ω_a^2. § VOJTA'S INEQUALITY In this section we modify Vojta's proof <cit.> to give a uniform inequality. The following is the main theorem of this section. For P_1,P_2∈ C(K̅), if | P_2|≥√(1000g)| P_1| and | P_1|≥500√(gω_a^2), then ⟨ P_1,P_2⟩_Θ/| P_1|| P_2|≤4/5. Let Z be the singular locus of 𝒞→ B. By <cit.>, the blow-up 𝒳 of 𝒞×_B𝒞 at Z×_BZ is regular, and if β:Δ̃→Δ̅ is the strict transform of the diagonal Δ̅⊆𝒞×_B𝒞, then β^*ω̅=𝒪(-Δ̃)|_Δ̃. By abuse of notations, we denote by p_1,p_2:𝒳→𝒞 the composition of the blow-up and projections 𝒞×_B𝒞→𝒞. Their restriction to the generic fiber X→ C coincides with the previous definition. Let M be the pull-back to 𝒳 of a line bundle on B of degree 1. Consider the line bundle L=d_1p_1^*ω̅+d_2p_2^*ω̅+d((2g-2)𝒪(Δ̃)-p_1^*ω̅-p_2^*ω̅)+cM∈Pic(𝒳), where d_1,d_2,d,c are positive integers to be decided. If d_1≥2gd, d≥ d_2, and (d_1d_2-gd^2)c≥(39g+20)d_1d^2ω_a^2, then L is big. By <cit.> and <cit.>, ω̅^2≤ω_a^2+(2g-2)∑_v∈ Bδ(Γ_v). Combining it with <cit.> and <cit.>, we have ω̅^2≤ω_a^2+39(2g-2)∑_v∈ Bφ(Γ_v)≤(78g+40)ω_a^2. Take L_1=(d_1-d)p_1^*ω̅+d(2g-2)𝒪(Δ̃)+cM, L_2=(d-d_2)p_2^*ω̅. Note that ω̅ is nef, 𝒪(Δ̃) is effective, and (𝒪(Δ̃)+p_1^*ω̅)|_Δ̃=0. Therefore, both L_1 and L_2 are nef. By Siu's theorem <cit.>, we have vol(L) ≥ (L_1)^3-3(L_1)^2L_2 = 6(2g-2)^2(d_1d_2-gd^2)c+ (2g-2)(3d_1^2d_2-6gd_1d^2+(4g^2+4g-2)d^3-(6g-3)d^2d_2)ω̅^2 ≥ 6(2g-2)^2(d_1d_2-gd^2)c-(2g-2)3gd_1d^2ω̅^2 > 0. For P_i∈ C(K)(i=1,2), we have P=(P_1,P_2)∈ X(K) and sections P̅_̅i̅⊂𝒞, P̅⊂𝒳 of B. Choose a local coordinate x_i on C at P_i. Then for any effective divisor D∈Div(X), D is defined near P by a formal power series ∑_i_1,i_2≥0a_i_1,i_2x_1^i_1x_2^i_2. Recall that the index of D at P with respect to a pair of positive numbers (e_1,e_2) is ind(D,P,e_1,e_2)=min{i_1/e_1+i_2/e_2:a_i_1,i_2 0}. It is independent of the choice of x_1,x_2. The theorem is invariant after replacing K by a finite extension. We may assume P_1,P_2∈ C(K). Take d_1=⌈√(g+1/250)| P_2|/| P_1|d⌉, d_2=⌈√(g+1/250)| P_1|/| P_2|d⌉, c=⌈250(39g+20)d_1ω_a^2⌉, where ⌈·⌉ is the ceiling function, i.e. ⌈ x⌉ is the least integer not less than x. For d large enough, L is big. There is a positive integer n such that nL admits a section s. By <cit.>, there is a finite extension B' of B with function field K' and a regular surface 𝒞_i' over B' for i=1,2 satisfying (1)there is a morphism 𝒞_i'→𝒞×_BB'; (2)its restriction to the generic fiber C_i'→ C×_KK' is of degree 2d_3-i and unramified outside P_i; (3)there are two points of C_i' lying over P_i, both defined over K' and of ramification index d_3-i. Let 𝒳' be the blow-up of 𝒞_1'×_B'𝒞_2' such that 𝒳' is regular and there is a morphism f:𝒳'⟶𝒳×_BB' extending C_1×_K'C_2→ X×_KK'. Choose arbitary P'=(P_1',P_2')∈ C_1'×_K'C_2' lying over P. 
Since P_i' lies in the smooth locus of 𝒞_i' over B' for i=1,2, we may assume the center of the blow-up 𝒳'→𝒞_1'×_B'𝒞_2' does not meet P̅'. Then the conormal sheaf 𝒩_P̅'/𝒳'^∨ is a direct sum 𝒩_P̅'/𝒳'^∨=𝒩_P̅_1'/𝒞_1'^∨⊕𝒩_P̅_2'/𝒞_2'^∨. By <cit.>, ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2) = ind(f^*div(s),P',(2g-2)nd_1d_2,(2g-2)nd_1d_2) = ind(f^*div(s),P',1,1)/(2g-2)nd_1d_2. Together with <cit.>, we have ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2) ≥ 1/(2g-2)nd_1d_2-2(nf^*L|_P̅')/(ω_𝒞_1'/B'|_P̅_1')+(ω_𝒞_2'/B'|_P̅_2') = -2(L|_P̅)/(2g-2)d_1(ω̅|_P̅_1)+(2g-2)d_2(ω̅|_P̅_2) = -1/g-1+1/g-1d((ω̅|_P̅_1)+(ω̅|_P̅_2)-(2g-2)(𝒪(Δ̃)|_P̅))-c/d_1(ω̅|_P̅_1)+d_2(ω̅|_P̅_2). By Proposition <ref>, g-1/g| P_i|^2≤(ω̅|_P̅_i)≤g-1/g| P_i|^2+19/2ω_a^2≤1000/999g-1/g| P_i|^2. Note that (𝒪(Δ̃)|_P̅)=i(P_1,P_2). Hence, (ω̅|_P̅_1)+(ω̅|_P̅_2)-(2g-2)(𝒪(Δ̃)|_P̅)≥(2g-2)(⟨ P_1,P_2⟩_Θ-19/2ω_a^2). We may assume ⟨ P_1,P_2⟩_Θ≥0. Then ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2) ≥ -1/g-1+0.999g/(g-1)2d⟨ P_1,P_2⟩_Θ/d_1| P_1|^2+d_2| P_2|^2-g/(g-1)^2(19g-19)dω_a^2+c/d_1| P_1|^2+d_2| P_2|^2 ≥ -1.001/g-1+0.999g/g-12d⟨ P_1,P_2⟩_Θ/d_1| P_1|^2+d_2| P_2|^2-g/(g-1)^2c/d_1| P_1|^2+d_2| P_2|^2. As a consequence, lim inf_d→∞ ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2) ≥ -1.001/g-1+0.999g/(g-1)√(g+1/250)⟨ P_1,P_2⟩_Θ/| P_1|| P_2|-g/(g-1)^2125(39g+20)ω_a^2/| P_1|^2 ≥ -1.05/g-1+0.995g/(g-1)√(g)⟨ P_1,P_2⟩_Θ/| P_1|| P_2|. By Dyson's lemma (cf. <cit.>), V(ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2))≤d_1d_2-gd^2/d_1d_2+(2g-1)d_2/2d_1. Here V(t)=∫_x,y∈[0,1],x+y≤ tdxdy. Taking limit we have lim inf_d→∞ V(ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2)) ≤ 1/250g+1+(2g-1)| P_1|^2/2| P_2|^2 ≤ 1/200g By the monotonicity of V, we have lim inf_d→∞ ind(div(s),P,(2g-2)nd_1,(2g-2)nd_2)≤1/10√(g). Combining two inequalities on index we have -1.05/g-1+0.995g/(g-1)√(g)⟨ P_1,P_2⟩_Θ/| P_1|| P_2|≤1/10√(g). Therefore, ⟨ P_1,P_2⟩_Θ/| P_1|| P_2|≤1000/995(g-1/10g+1.05/√(g))≤4/5. § PROOF OF THE MAIN THEOREM In this section we finish the proof of Theorem <ref> and Theorem <ref>. Since Γ is of finite rank, there is a finitely generated subgroup Γ_0⊆Γ satisfying Γ_0⊗ℚ=Γ⊗ℚ. Then we can find a finitely generated extension K/ℚ such that C and Γ_0 are defined over K. For any P∈Γ, there is Q∈Γ_0 and an integer n such that nP=Q. The morphism [n]:J↦ J is finite. Therefore P∈ J(K̅). Denote by M_g the coarse moduli scheme of curves of genus g over ℚ̅. Let ι:Spec(K̅)⟶ M_g be the K̅-point corresponding to C_K̅. Since C is non-isotrivial over ℚ, x does not factor throught Spec(ℚ̅). Note that ∩ k=ℚ̅, where the intersection is over all algebraically closed subfield k⊆K̅ satisfying K̅/k is of transcendental degree 1. There is a k such that ι does not factor through Spec(k). Replace K by Kk. Then K/k is finitely generated of transcendental degree 1, and hence the function field of a smooth projective connected curve B/k. Theorem <ref> follows from Theorem <ref>. We need the following proposition to count points with large height in the proof of Theorem <ref> Let P_1,P_2∈ C(K̅) be two distinct points. If | P_2|≥| P_1|≥(5√(2)-1/4)√(ω_a^2) and ⟨ P_1,P_2⟩_Θ/| P_1|| P_2|≥4/5, then | P_2|/| P_1|≥9g/10. Since P_1 P_2, we have 0 ≤ i(P_1,P_2) ≤| P_1|^2/2g+| P_2|^2/2g-⟨ P_1,P_2⟩_Θ+37/4ω_a^2 ≤| P_1|^2/2g+| P_2|^2/2g-4/5| P_1|| P_2|+37/4ω_a^2 ≤| P_1|^2/2g+| P_2|^2/2g-3/5| P_1|| P_2|. Solving it we have | P_2|/| P_1|≥3g/5+√((3g/5)^2-1)≥9g/10. Denote by Γ_1 the subgroup of J(K̅) generated by Γ and α-α_0. We have the vector spaces V=Γ⊗ℝ and V_1=Γ_1⊗ℝ. Let W=V+α-α_0 be the coset of V in V_1. Then ♯(i_α(C(K̅))∩Γ)= ♯{P∈ C(K̅):P-α∈ V} = ♯{P∈ C(K̅):P-α_0∈ W}. 
For any point x∈ V_1 and positive number r, denote by B(x,r) the closed ball with center x and radius r in V_1. By <cit.>, a ball of radius R_0=√(ω_a^2/8(g^2-1)) contains at most 16g^2+32g+124 rational points in C(K̅). Let R_1=(5√(2)-1/4)√(ω_a^2). We cover B(O,R_1)∩ W with balls of radius R_0 in an inductive manner. Choose an arbitrary x_1∈ B(O,R_1)∩ W. After selecting x_1,…,x_t, if B(O,R_1)∩ W is not covered by B(x_1,R_0),…,B(x_t,R_0), choose an arbitrary x_t+1∈ (B(O,R_1)∩ W)-(⋃_i=1^tB(x_i,R_0)). Note that the balls B(x_i,R_0/2)∩ W are pairwise disjoint and all contained in B(O,R_1+R_0/2)∩ W. Since W is the translation of a ρ-dimensional vector space, calculating volumes in W we find that B(O,R_1)∩ W can be covered by at most (R_1+R_0/2)^ρ/(R_0/2)^ρ≤(20g)^ρ balls of radius R_0. Therefore, ♯{P∈ C(K̅):P-α_0∈ V, | P|≤ R_1}≤(16g^2+32g+124)(20g)^ρ. The above inequality implies the first assertion in Theorem <ref> in the case ρ=0. Now suppose ρ>0. For any 0≤ϕ≤π/2, by <cit.>, V_1 can be covered by 2ρ/sin(ϕ/4)^(V_1)cos(ϕ/4) sectors such that any two points in the same sector form an angle of at most ϕ. Here (V_1)≤ρ+1. Let ϕ=arccos(4/5). Then 2ρ/sin(ϕ/4)^(V_1)cos(ϕ/4)≤13^{ρ+1}. Since 500√(gω_a^2)/R_1<(9g/10)^8, by Proposition <ref>, for any sector S, ♯{P∈ C(K̅):P-α∈ S,R_1≤| P|≤500√(gω_a^2)}≤8. Similarly, by Theorem <ref>, we have ♯{P∈ C(K̅):P-α∈ S,| P|≥500√(gω_a^2)}≤7. In conclusion, ♯(C(K̅)∩Γ)≤ (16g^2+32g+124)(20g)^ρ+15·13^{ρ+1} ≤ (16g^2+32g+188)(20g)^ρ. This proves the first assertion. By the Lang-Néron theorem (cf. <cit.>), J(K)/tr_K/k(J)(k) is a finitely generated group, where tr_K/k(J) is the K/k-trace of J. If tr_K/k(J)=0, then J(K) is finitely generated. Taking Γ=J(K), we finish the proof.
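As an editorial sanity check of the final absorption step above (the case ρ=0 was handled separately, and we assume g≥2 as in the setting of the main theorem), the second summand is indeed dominated by the enlargement of the constant from 124 to 188:
```latex
% For \rho \ge 1 and g \ge 2:
15\cdot 13^{\rho+1} \;=\; 195\cdot 13^{\rho}
   \;\le\; 64\cdot 40^{\rho}
   \;\le\; 64\,(20g)^{\rho},
\quad\text{since }\ \tfrac{64}{195}\bigl(\tfrac{40}{13}\bigr)^{\rho}
   \ \ge\ \tfrac{64}{195}\cdot\tfrac{40}{13}\ \approx\ 1.01\ >\ 1,
\text{ hence }\
(16g^2+32g+124)(20g)^{\rho}+15\cdot 13^{\rho+1}
   \;\le\;(16g^2+32g+188)(20g)^{\rho}.
```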
http://arxiv.org/abs/2307.00682v1
20230702232700
Tools for Verifying Neural Models' Training Data
[ "Dami Choi", "Yonadav Shavit", "David Duvenaud" ]
cs.LG
[ "cs.LG", "cs.CR" ]
Rapid mixing of global Markov chains via spectral independence: the unbounded degree case Antonio Blanca Pennsylvania State University. Email: ablanca@cse.psu.edu. Research supported in part by NSF grant CCF-2143762. Xusheng ZhangPennsylvania State University. Email: xushengz@psu.edu. Research supported in part by NSF grant CCF-2143762. August 1, 2023 ========================================================================================================================================================================================================================================================================== It is important that consumers and regulators can verify the provenance of large neural models to evaluate their capabilities and risks. We introduce the concept of a “Proof-of-Training-Data”: any protocol that allows a model trainer to convince a Verifier of the training data that produced a set of model weights. Such protocols could verify the amount and kind of data and compute used to train the model, including whether it was trained on specific harmful or beneficial data sources. We explore efficient verification strategies for Proof-of-Training-Data that are compatible with most current large-model training procedures. These include a method for the model-trainer to verifiably pre-commit to a random seed used in training, and a method that exploits models' tendency to temporarily overfit to training data in order to detect whether a given data-point was included in training. We show experimentally that our verification procedures can catch a wide variety of attacks, including all known attacks from the Proof-of-Learning literature. § INTRODUCTION How can we verify the capabilities of large machine learning models? Today, such claims are based on trust and reputation: customers and regulators believe that well-known companies building AI models wouldn't lie about the training data used in their models. However, as the ability to build new AI models proliferates, users need to trust an ever-larger array of model providers at their word, and regulators may increasingly face malicious AI developers who may lie to appear compliant with standards and regulations. Worse, countries developing militarily-significant AI systems may not trust each others' claims about these systems' capabilities, making it hard to coordinate on limits. AI developers can enable greater trust by having a third party verify the developer's claims about their system, much as the iOS App Store checks apps for malicious code. Current black-box approaches to model auditing allow some probing of capabilities <cit.>, but these audits' utility is limited and a model's capabilities can be hidden <cit.>. An auditor can more effectively target their examination if they also know the model's training data, including the total quantity, inclusion of data likely to enable specific harmful capabilities (such as texts on cyber-exploit generation), and inclusion of safety-enhancing data (such as instruction-tuning <cit.>). However, if such data is self-reported by the AI developer, it could be falsified. This uncertainty limits the trust such audits can create. In this work, we define the problem of Proof-of-Training-Data (PoTD): a protocol by which a third-party auditor (the “Verifier”) can verify which data was used to train a model. 
Our verification procedures assume that the Verifier can be given access to sensitive information and IP (e.g., training data, model weights) and is trusted to keep it secure; we leave the additional challenge of simultaneously preserving the confidentiality of the training data and model weights to future work. In principle, one could solve PoTD by cryptographically attesting to the results of training on a dataset using delegated computation <cit.>. However, in practice such delegation methods are impractically slow, forcing us to turn to heuristic verification approaches. Inspired by the related literature on “Proof-of-Learning” (PoL)<cit.>, we propose that model-trainers disclose a training transcript to the Verifier, including training data, training code, and intermediate checkpoints. In Section <ref>, we provide several verification strategies for a Verifier to confirm a training transcript's authenticity, including new methods that address all published attacks in the Proof-of-Learning literature. We demonstrate the practical effectiveness of our defenses via experiments on two language models (Section <ref>). Our methods can be run cheaply, adding as little as 1.3% of the original training run's compute. Further, we require no change to the training pipeline other than fixing the data ordering and initialization seeds, and storing the training process seeds for reproducibility. Still, like PoL, they sometimes require re-running a small fraction of training steps to produce strong guarantees. The verification strategies we describe are not provably robust, but are intended as an opening proposal which we hope motivates further work in the ML security community to investigate new attacks and defenses that eventually build public confidence in the training data used to build advanced machine learning models. § RELATED WORK We build on <cit.>, which sketches a larger framework for verifying rules on large-scale ML training. It defines, but does not solve, the “Proof-of-Training-Transcript” problem, a similar problem to Proof-of-Training-Data that additionally requires verifying hyperparameters. Proof-of-Learning. <cit.> introduce the problem of Proof-of-Learning (PoL), in which a Verifier checks a Prover's ownership/copyright claim over a set of model weights by requesting a valid training transcript that could have led to those weights. The Verifier is able to re-execute training between selected checkpoints to check those segments' correctness, although subsequent works have shown vulnerabilities in the original scheme <cit.>. Proof-of-Training-Data is a stricter requirement than Proof-of-Learning, as PoL only requires robustness to adversaries that can use less computation than the original training run, whereas PoTD targets all computationally-feasible adversaries. Further, any valid PoTD protocol can automatically serve as a solution to PoL. As we show in Sections <ref> and <ref>, our defenses address all scalable published attacks in the PoL literature. <cit.> show that forged transcripts can support false claims about the training set, and demonstrate the ability to forge transcripts on small neural nets in the less-restrictive PoL setting, though these attacks are also ruled out by our data-ordering-precommitment defense (Section <ref>). Memorization during training. 
<cit.> introduce the notion of counterfactual memorization (the average difference in model performance with and without including a specific point in training) that is most similar to our own, and use it to investigate different training points' effects on final model performance. <cit.> examine which datapoints are most strongly memorized during training by using influence functions, but they focus on the degree of memorization only at the end of training. <cit.> show that per-datapoint memorization of text (as measured by top-1 recall) can be somewhat reliably predicted based on the degree of memorization earlier in training. <cit.> analyze pointwise loss trajectories throughout training, but do not focus specifically on the phenomenon of overfitting to points in the training set. § FORMAL PROBLEM DEFINITION In the Proof-of-Training-Data problem, a Prover trains an ML model and wants to prove to a Verifier that the resulting target model weights W^* are the result of training on data D^*. If a malicious Prover used training data that is against the Verifier's rules (e.g., terms of service, regulatory rules) then that Prover would prefer to hide D^* from the Verifier. To appear compliant, the Prover will instead lie and claim to the Verifier that they have used some alternative dataset D ≠ D^*. However, the Prover will only risk this lie if they believe that with high probability they will not get caught (making them a “covert adversary” <cit.>). The goal of a Proof-of-Training-Data protocol is to provide a series of Verifier tests that the Prover would pass with high probability if and only if they truthfully reported the true dataset that was used to yield the model W^*. Let D ∈^n be an ordered training dataset. Let contain all the hyperparameters needed to reproduce the training process, including the choice of model, optimizer, loss function, random seeds, and possibly details of the software/hardware configuration to maximize reproducibility. A valid Proof-of-Training-Data protocol consists of a Prover protocol , Verifier protocol , and witnessing template that achieves the following. Given a dataset D^* and hyperparameters ^*, an honest Prover uses to execute a training run and get (W^*, J^*) = (D^*, ^*, c_1), where W^* ∈^d is a final weight vector, J^* ∈𝕁 is a witness to the computation, and c_1 ∼ C_1 is an irreducible source of noise. The Verifier must accept this true witness and resulting set of model weights with high probability: _c_1 ∼ C_1, c_2 ∼ C_2[(D^*, ^*, J^*, W^*, c_2) = 1 ] ≥ 1 - δ_1 , where δ_1 ≪ 1/2 and c_2 is the randomness controlled by the Verifier. Conversely, ∀ computationally-feasible probabilistic adversaries 𝒜 which produce spoofs (D, M, J) = 𝒜(D^*, ^*, J^*, W^*, c_3) where D ≠ D^* and c_3 ∼ C_3 is the randomness controlled by the adversary, the Verifier must reject all such spoofs with high probability: _c_1 ∼ C_1, c_2 ∼ C_2, c_3 ∼ C_3[(D, M, J, W^*) = 0] ≥ 1- δ_2 where δ_2 ≪ 1/2. In practice, our proposal does not yet provide a provably-correct PoTD protocol, but instead provides a toolkit of heuristic approaches. Following the literature on the related Proof-of-Learning problem <cit.>, we use as a witness the series of m model weight checkpoints J^* = = (W_0, W_1, …, W_m-1, W^*). Model weight checkpoints are already routinely saved throughout large training runs; we assume a checkpoint is saved after training on each k = n/m-datapoint segment. [For simplicity, we assume n is evenly-divisible by m.] 
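To fix ideas, the training transcript disclosed by the Prover can be pictured as a simple container holding the three ingredients named above. This is an editor's schematic Python sketch; the field names are illustrative and are not taken from the authors' implementation.
```python
from dataclasses import dataclass
from typing import Any, Dict, List
import numpy as np

@dataclass
class TrainingTranscript:
    """Schematic transcript T = {D, M, W-checkpoints} handed to the Verifier."""
    data_hashes: List[bytes]         # hashes of the ordered training data D
    hyperparameters: Dict[str, Any]  # M: model, optimizer, loss, random seeds, ...
    checkpoints: List[np.ndarray]    # (W_0, W_1, ..., W_{m-1}, W*), one per k data points

    def final_weights(self) -> np.ndarray:
        # The last checkpoint must equal the claimed target weights W*.
        return self.checkpoints[-1]
```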
During verification, the Prover provides[ Throughout this work we assume that the Prover provides the full training transcript to the Verifier, but as we discuss in Section <ref>, in practice secure versions of these methods will be needed maintain the confidentiality of the Prover's sensitive data and IP. ] the Verifier with the training transcript = {D, , }, which the Verifier will then test to check its truthfulness. In practice, in order to achieve the guarantee from Definition <ref>, the Prover and Verifier protocols must satisfy two conditions: * Uniqueness: Using , the Prover must not be able to find a second D ≠ D^* and that would honestly yield W^* via a valid sequence of checkpoints , even given a large amount of time. This is a stronger requirement than in PoL, which protects only against adversarial Provers that use less compute than the original training run. Since it is not hard to create a fake transcript for a training run in general (e.g., by declaring that the weights are initialized at W_0 =W^*), will need to constrain the set of acceptable training runs. The Verifier needs to be able to confirm that the Prover's reported training run followed these constraints (Section <ref>). * Faithfulness: If the Prover provides a fake sequence of checkpoints that could not result from actually training on D^* via a valid and , the Verifier should be able to detect such spoofing. Our tools for ensuring a transcript's uniqueness are presented in Section <ref>; all other verification strategies in this paper address faithfulness. As a brute-force solution to Proof-of-Training-Data, the Verifier could simply re-execute the complete training process defined by , and check that the result matches W^*. However, beyond technical complications[This would also fail in practice because of irreducible hardware-level noise which means that no two training runs return exactly the same final weight vector <cit.>. a transcript could still be examined piecewise, as done in <cit.>; for more, see Section <ref>.], doing so is far too computationally expensive to be done often; a government Verifier would need to be spending as much on compute for audits as every AI developer combined. Therefore any verification protocol must also be efficient, costing much less than the original training run. Inevitably, such efficiency makes it near certain that the Verifier will fail to catch spoofs D≠ D^* if D only differs in a few data points; in practice, we prioritize catching spoofs which deviate on a substantial fraction of points in D^*. Though we do not restrict to a particular definition of dataset deviations, we list several possibilities relevant for different Verifier objectives in Appendix <ref>. § VERIFICATION STRATEGIES We provide several complementary tools for detecting whether a transcript is spoofed. Combined, these methods address many different types of attacks, including all current attacks from the PoL literature <cit.>. §.§ Existing Tools from Proof-of-Learning Our protocol will include several existing spoof-detection tools from the Proof-of-Learning literature <cit.>, such as looking for outliers in the trajectory of validation loss throughout training, and plotting the segment-wise weight-change W_i - W_i-1_2 between the checkpoints . The most important of these existing tools is the segment-wise retraining protocol of <cit.>. 
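Before turning to the retraining protocol defined next, the simpler weight-change check mentioned above (plotting the per-segment displacement ‖W_i - W_{i-1}‖_2 and looking for outliers) can be made automatic. The outlier rule and threshold below are illustrative editorial choices, not the authors' criterion.
```python
import numpy as np

def flag_displacement_outliers(checkpoints, n_sigma=4.0):
    """Return indices i of segments whose jump ||W_i - W_{i-1}|| is an outlier."""
    deltas = np.array([np.linalg.norm(checkpoints[i] - checkpoints[i - 1])
                       for i in range(1, len(checkpoints))])
    center, spread = np.median(deltas), np.std(deltas)
    return [i + 1 for i, d in enumerate(deltas) if abs(d - center) > n_sigma * spread]
```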
Let R(W_i-1, Π_i, c; ) be the model training operator that takes in a weight checkpoint W_i-1, updates it with a series of gradient steps based on training data sequence Π_i (describing the order in which the Prover claims data points were used in training between checkpoints W_i-1 and W_i, which may be different from the order of the dataset D^*), hyperparameters , and hardware-noise-randomness c ∼ C, and then outputs the resulting weight checkpoint W_i. Transcript segment i is (ϵ, δ)-reproducible if for the pair of checkpoints (W_i-1, W_i) in , the reproduction error (normalized by the overall segment displacement) is small: _c ∼ C( Ŵ_i - W_i _2/Ŵ_i - W_i-1_2 + W_i - W_i-1_22 < ϵ) > 1-δ where Ŵ_i = R(W_i-1, Π_i, c; ). The values ϵ and δ trade off false-positive vs. false-negative rates; see <cit.> for discussion. The Verifier can use this retraining procedure as a ground-truth for verifying the faithfulness of a suspicious training segment. However, this test is computationally-intensive, and can thus only be done for a small subset of training segments. Our other verification strategies described in Sections <ref> and <ref> will be efficient enough to be executable on every training segment. §.§ Memorization-Based Tests The simplest way for a Prover to construct a spoofed transcript ending in W^* is to simply make up checkpoints rather than training on D^*, and hope that the Verifier lacks the budget to retrain a sufficient number of checkpoints to catch these spoofed checkpoints. To address this, we demonstrate a heuristic for catching spoofed checkpoints using a small amount of data, based on what is to the best of our knowledge a previously-undocumented phenomenon about local training data memorization. Machine learning methods notoriously overfit to their training data D, relative to their validation data D_v. We can quantify the degree of overfitting to a single data point d on a loss metric : ×^|W|→ relative to a validation set D_v via a simple memorization heuristic : (d, W) = 𝔼_d' ∈ D_v[(d', W)]- (d, W). Recall that Π_i is the sequence of data points corresponding to the ith segment of the training run. One would expect that in checkpoints before data segment i, for data points d ∈Π_i, memorization (d, W_j<i) would in expectation be similar to the validation-set memorization; after data-segment i, one would expect to see higher degrees of overfitting and therefore (d, W_j ≥ i) would be substantially higher. We find evidence for this effect in experiments on GPT-2-Small <cit.> and the Pythia suite <cit.>). As shown in Figures <ref> and <ref>, when a Prover reports the true training data, on average the greatest memorization occurs where Π_i and W_j=i match. We corroborate this finding with additional experiments on a range of models in Appendix <ref>. The finding is even clearer if we look at jumps in memorization level, which we call the Memorization Delta : (d, i; , D_v, ) = (d, W_i) - (d, W_i-1). To test whether each reported checkpoint W_i resulted from training on at least some of the segment training data Π_i, a Verifier can compute a memorization plot like the one shown in Figure <ref>. Such plots can be computed more efficiently by sampling only a small fraction α of the training data Π, and by plotting only a few checkpoints W_i-β, …, W_i+β for each segment Π_i. 
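A minimal sketch of these two quantities follows; loss_fn(model, example) is a hypothetical helper returning the scalar loss of a single data point under a given checkpoint, and the average is taken over (a sample of) the validation set D_v.
```python
import numpy as np

def memorization(example, model, val_set, loss_fn):
    """mem(d, W): mean validation loss under W minus the loss of d under W."""
    val_loss = np.mean([loss_fn(model, d_v) for d_v in val_set])
    return val_loss - loss_fn(model, example)

def memorization_delta(example, model_i, model_prev, val_set, loss_fn):
    """Jump in memorization of d between consecutive checkpoints W_{i-1} and W_i."""
    return (memorization(example, model_i, val_set, loss_fn)
            - memorization(example, model_prev, val_set, loss_fn))
```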
We can further harness this memorization phenomenon to test whether on segment i, rather than training on the full claimed data sequence Π_i and yielding W_i, the Prover secretly skipped training on at least a κ-fraction of the points in Π_i and yielded W_i'. Consider the odds that, for d∼Π_i, (d, W_i) happens to fall in the bottom p-probability quantile of the validation set D_v's values on W_i: (d, p, W_i) =𝕀( _d' ∼ D_v𝕀((d', W_i) > (d, W_i)) ≤ p ) (Π, p, W_i) = _d ∼Π [ (d, p, W_i) ], where 𝕀 is the indicator function, stands for “Point is Below Quantile”, and stands for “Fraction Below Quantile”. We can see in Figure <ref> that, as expected, when the points in Π_i are all included in training, (Π_i, p, W_i) is small compared to (Π_j ≠ i, p, W_i)). If many points were secretly excluded, as in W_i', we should expect that (Π_i, p, W_i') should be higher and closer to (D_v, p, W_i'), where D_v is the validation set. If the Prover secretly excluded a greater than κ-fraction of data points in Π_i thus yielding W_i', then we should expect that: (Π_i, p, W_i') = (1-κ)(Π_i, p, W_i) + κ(D_v, p, W_i') ≥κ·(D_v, p, W_i') Rearranging terms, we get λ(Π_i, p, W_i') := (Π_i, p, W_i')/(D_v, p, W_i')≥κ. λ(Π_i, p, W_i') can be estimated using a small fraction of training and validation datapoints, and can serve as an upper-bound estimate on κ, the fraction of Π_i secretly excluded from training W_i'.[The percentile-threshold p is left unspecified, but should be kept ≪ 0.5. The test can be strengthened by varying the chosen fraction p and rerunning the analysis to confirm its insensitivity.] In Section <ref> we show that this heuristic can detect even small data subtractions in practice, and in Appendix <ref> we show the test's effectiveness across a range of percentiles p and segment-lengths k. We also observe that gradually decreases over time from an initial peak immediately after the point's training segment. This echoes the many findings on “forgetting” in deep learning <cit.>. We show in Section <ref> how this can be used to catch gluing attacks. §.§ Fixing the Initialization and Data Order As mentioned in Section <ref>, a Proof-of-Training-Data protocol needs to ensure a transcript's uniqueness, and make it difficult for a malicious Prover to produce a second transcript with D ≠ D^* that, if training was legitimately executed, would also end in W^*. There are two well-known types of attacks the Prover might use to efficiently produce such spoofs: * Initialization attacks: An attacker can choose a “random” initialization that places W_0 in a convenient position, such as close to the target W^*. Even if the Verifier uses statistical checks to confirm that the initialization appears random, these are sufficiently loose that an adversary can still exploit the choice of initialization <cit.>. * Synthetic data/data reordering attacks: Given the current weight vector W_i, an attacker can synthesize a batch of training datapoints such that the resulting gradient update moves in a direction of the attacker's choosing, such as towards W^*. This could be done through the addition of adversarial noise to existing data points <cit.>, generating a new dataset <cit.>, or by carefully reordering existing data points in a “reordering attack” <cit.>. We propose methods for preventing both of these attacks by forcing the Prover to use a certified-random weight initialization, and a certified-random data ordering. 
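Before moving on to the data-ordering defenses described next, here is a minimal sketch of the subtraction upper-bound heuristic λ(Π_i, p, W_i) introduced above. Following the verbal description (membership in the bottom p-quantile of the validation distribution), the scores passed in would be the memorization deltas of the preceding subsection; the implementation details are the editor's, not the authors'.
```python
import numpy as np

def subtraction_upper_bound(train_scores, val_scores, p=0.1):
    """lambda(Pi_i, p, W_i): upper-bound estimate on the fraction kappa of
    segment points secretly excluded from training, per the bound above."""
    train_scores = np.asarray(train_scores)  # e.g. memorization deltas of Pi_i at W_i
    val_scores = np.asarray(val_scores)      # the same scores for validation points
    threshold = np.quantile(val_scores, p)   # bottom-p cutoff of the D_v distribution
    fbq_train = np.mean(train_scores <= threshold)  # FBQ(Pi_i, p, W_i)
    fbq_val = np.mean(val_scores <= threshold)      # FBQ(D_v, p, W_i), roughly p
    return fbq_train / fbq_val
```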
The randomized data ordering guarantees that the adversarial Prover cannot construct synthetic datapoints that induce a particular gradient, because it does not know the corresponding weights W at the time of choosing the datapoints D.[This does not address hypothetical methods for constructing synthetic data points that would induce a particular gradient with respect to any weight vector that would be encountered across many possible training runs with high probability. However, no approaches to constructing such “transferrable” synthetic-gradient-attack data points are currently known.] Given a fixed data ordering, we discuss in Appendix <ref> why it may be super-polynomially hard to find a certified-random weight initialization that, when fully trained, results in a particular W^*. The Verifier can produce this guaranteed-random initialization and data order by requiring the Prover to use a particular random seed , constructed as a function of the dataset D itself. This produces the initialization W_0 = (s) ∈𝕏^n and data ordering S = () using a publicly known pseudorandom generators and . [ is a cryptographically-secure pseudorandom d-length vector generator, with postprocessing defined in the hyperparameters , and is a publicly-agreed pseudorandom n-length permutation generator. can be modified to repeat data multiple times to train for multiple epochs, or according to a randomized curriculum.] [ In practice, the statistical test to verify that the certified ordering was used will only be able to distinguish whether each data point d_i ∼ D was trained in the assigned segment S_i or not. Therefore, for this protocol to apply a checkpoint must be saved at least twice per epoch, k ≤ n/2.] The Prover can also construct a verifiable validation subset D_v by holding out the last n_v data-points in the permutation S from training. The Prover constructs as follows. Assume that the dataset D has some initial ordering. Let be a publicly-known cryptographic hash function. We model as a random oracle, so that when composed with or , the result is polynomial-time indistinguishable from a random oracle.[Since the random oracle model is known to be unachievable in practice, we leave the task of finding a more appropriate cryptographic primitive as an interesting direction for future work.] This means that if a Prover wants to find two different seeds _1, _2 that result in similar initializations W_0; 1, W_0; 2 or two similar permutations S_1, S_2, they can find these by no more efficient method than guessing-and-checking. For large d and n, finding two nontrivially-related random generations takes exponential time. We construct the dataset-dependent random seed as (D, s_rand) = ((d_1) ∘(d_2) ∘…∘(d_a) ∘ s_rand), where {d_1, …, d_a} = D, ∘ is the concatenation operator, and s_rand is a Prover-chosen 32-bit random number to allow the Prover to run multiple experiments with different seeds.[To enable a Prover to only reveal the required subset of data to the Verifier, it may be best to construct using a Merkle hash tree.] A Verifier given access to D (or only even just the hashes of D) can later rederive the above seed and, using the pseudorandom generators, check that it produces the reported W_0 and S. The important element of this scheme is that given an initial dataset D^* and resulting data order S, modifying even a single bit of a single data point in D^* to yield a second D will result in a completely different data order S' that appears random relative to S. 
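A minimal sketch of this construction (editor's illustration): the data points are assumed to be serialized to bytes, SHA-256 stands in for the public hash function, and numpy's generator stands in for GenInit and GenPerm; the Gaussian post-processing of the initialization is only a placeholder for whatever the hyperparameters M prescribe.
```python
import hashlib
import numpy as np

def dataset_seed(datapoints, s_rand: int) -> bytes:
    """seed(D, s_rand) = Hash(Hash(d_1) o Hash(d_2) o ... o Hash(d_a) o s_rand)."""
    concat = b"".join(hashlib.sha256(d).digest() for d in datapoints)
    return hashlib.sha256(concat + s_rand.to_bytes(4, "big")).digest()

def derive_init_and_order(seed: bytes, n_params: int, n_points: int):
    """Stand-ins for GenInit and GenPerm, driven by the dataset-dependent seed."""
    rng = np.random.default_rng(int.from_bytes(seed, "big"))
    w_0 = 0.02 * rng.standard_normal(n_params)  # placeholder post-processing of W_0
    order = rng.permutation(n_points)           # data ordering S
    return w_0, order
```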
Thus, if we can statistically check that a sequence of checkpoints matches a data order S^* and dataset D^* better than a random ordering, this implies that D^* is the only efficiently-discoverable dataset that, when truthfully trained [It is still possible to construct multiple data sets D_1, D_2, and train on both, interleaving batches. This is not a uniqueness attack, but a data addition attack, and will be addressed in Section <ref>.] , would result in the checkpoints and final weights W^*. We provide this statistical test in Appendix <ref>. This same approach can be extended to the batch-online setting, where a Prover gets a sequence of datasets D^*_1, D^*_2, … and trains on each before seeing the next. The Prover simply constructs a new seed (D^*_i, s_rand) for each dataset D^*_i, and continues training using the resulting data ordering. This works so long as each D^*_i is large enough for a particular data-ordering to not be brute-forceable. §.§ Putting It All Together In Appendix <ref> we sketch a complete protocol for combining these defenses complementarily to detect all of the attacks discussed in Section <ref>. The overall computational cost for the Verifier is O(n) training data-point hashes, O(α n) model inferences for computing losses, and O(|Q|n) gradient computations for retraining transcript segments (where |Q| depends on hyperparameters that can be adjusted according on the Verifier's compute budget). Importantly, the Verifier's cost grows no worse than linearly with the cost of the original training run. If we run our tests using an α=0.01 fraction of the points in each segment as done in our experiments below, the verification cost of computing our new tests in Sections <ref> and <ref> totals just 1.3% of the original cost of training, assuming inference is 3× cheaper than training. o #1 Or Or(#1) o #1 𝒱 𝒱(#1) § EXPERIMENTAL SETUP Our main experiments are run on GPT-2 <cit.> with 124M parameters and trained on the OpenWebText dataset <cit.>. We use a batch size of 491,520 tokens and train for 18,000 steps (∼8.8B tokens), which is just under 1 epoch of training, saving a checkpoint every 1000 steps. See Appendix <ref> for additional details. The data addition attack experiments in Section <ref> further use the Github component of the Pile dataset <cit.> as a proxy for a Prover including additional data that is different from reported data. In addition to training our own models, we also evaluate Pythia checkpoints <cit.> published by EleutherAI, as they publish the exact data order used to train their models. We chose the 70M, 410M, and 1B-sized Pythia models trained on the Pile dataset with deduplication applied. All experiments were done using 4 NVIDIA A40 GPUs. § EMPIRICAL ATTACKS AND DEFENSES Below, we show that our methods address existing attacks from the literature (Glue-ing and Interpolation), and also demonstrate our method's response to two new attacks (Data Addition and Subtraction). We omit the synthetic initialization and synthetic data attacks of <cit.> as we addressed those in Section <ref>. All plots are from experiments using GPT-2; we include additional experiments in Appendix <ref>. We do not claim that the attacks studied here are exhaustive, but provide them as a starting point to motivate future work. Glue-ing Attack A known attack against Proof-of-Learning, which also applies to PoTD, is to “glue” two training runs and together and report a combined sequence of checkpoints = (_0, …, _i, _j≫0, …, _final). 
The resulting model _final can be trained on undisclosed data prior to segment j, with the Prover never reporting this data to the Verifier. As highlighted by <cit.>, the size of the glued segment _j-_i _2 will generally appear as an outlier in weight-space. We demonstrate this phenomenon in Figure <ref>. Following <cit.>, a Verifier could then check such suspicious segments via retraining. We demonstrate a second verification option using inference instead of training: the Verifier can check whether the checkpoint _j has memorized not only the most recent data Π_i, but also the preceding data segments Π_i-1, Π_i-2, … The absence of long-term memorization is visible in the memorization heatmap in Figure <ref>. Interpolation Attack To avoid the spike in weight-space shown in Figure <ref> when jumping from _i to _j, the attacker can break up the large weight-space jump into smaller jumps by artificially constructing intermediate checkpoints a_j + (1-a)_i for several values of a. However, these interpolated checkpoints fail our memorization tests, as they are artificial and not the result of actual training (Figure <ref>).[A Prover could fix this memorization-plot signature by fine-tuning each interpolated checkpoint on data segment Π_i, but this would add a large additional weightspace displacement, which may itself be identifiable in a weightspace-magnitude plot as in Figure <ref>.] Data Addition Attack An important class of attacks for Proof-of-Training-Data is when the Prover, in addition to training on the declared dataset D and data sequence Π, trains on additional data D' without reporting it to the Verifier.[Undisclosed data can be added within existing batches, or placed in new batches and interleaved.] This attack cannot be detected using memorization analysis (Figure <ref>), because the Verifier does not know and cannot test points d' ∈ D'. However, we see in Figure <ref> that even small amounts of data addition (whether from the same distribution or a different distribution) can be detected by segment retraining. Still, this raises the problem of how the Verifier can find which segments to retrain. If the data addition is done uniformly throughout a large fraction of the training run, then choosing a small number of segments randomly should be sufficient to catch at least one offending segment with high probability. If instead the data addition is done in only a few segments, this leaves a signature in the “weight-changes” plot which can be used to select segments to re-verify (Figure <ref>). Unfortunately, these defenses would not detect an attacker that adds a modest amount of data within a small number of segments. Data Subtraction Attack A final attack is data subtraction: when a Prover claims the model has been trained on more points than it truly has. Detecting data subtraction attacks could enable a Verifier to detect overclaiming by model providers, and would enable Proofs-of-Learning.
Subtraction can also be used to hide data addition attacks, as combining the two attacks would mean the segment was still trained on the correct number of datapoints, thus suppressing the weight-change-plot signature used to catch data addition (as in Figure <ref>). We demonstrate the effectiveness of an efficient memorization-based approach for detecting subtraction, described in Section <ref>. Leveraging the subtraction-upper-bound test from Equation <ref>, we see in Figure <ref> that the upper-bound heuristic λ(Π, p, W_i) is surprisingly tight, consistently differentiating between no-subtraction segments and even small subtraction attacks. Still, even if λ(Π_i, p, )>z for some large z ≫ 0, this is only an upper bound on the quantity of data subtraction, and does not prove that a z-fraction of points were subtracted. The Verifier can instead use this test as an indicator to flag segments for retraining, which would confirm a subtraction attack. (That retraining would result in a different weight vector can be inferred from the plot of the 50%-addition attack in Figure <ref>). Appendix <ref> explores the test's performance on the suite of Pythia models. § DISCUSSION AND LIMITATIONS This work contributes to an emerging societal effort to develop practical and robust tools for accountability in the large-scale development of AI models. The statistical tests we introduce are best taken as an opening proposal. Future work could propose clever new attacks that break this protocol, or better yet, create new defenses that efficiently detect more, and subtler, attacks and enable trustworthy verification of ML models' training data. Experimental Limitations This work provides suggestive evidence for the local-memorization phenomenon, but further study is needed across additional modalities, architectures, and training recipes in order to determine its broad applicability. Encouragingly, we find in Appendix <ref> that local-memorization gets even stronger as models get larger, though memorization appears weaker near the end of training as the learning rate shrinks. The paper's experiments only include language models, in part because they are a current priority for audits. The memorization tests used may need to be adjusted models trained with less data on many epochs, such as image models <cit.>. Attacks Our Protocol Does Not Catch There are several remaining directions for attacks. The attacks explored above can be composed in new ways, and it may be possible for compositions of attacks to undermine the defenses that would otherwise detect each attack individually. The method also does not address small-scale data additions, and thus cannot yet detect copyright violations or spot inserted backdoors <cit.>. It also cannot detect attacks based on small-norm modifications to the weights, which could be used to insert backdoors <cit.>. Finally, attacks could masked with cleverly chosen hyperparameters, such as by using a temporary lower-than-reported learning rate to shrink large changes in W. Exploring whether such attacks are feasible without degrading learning performance – and identifying defenses – is an interesting direction for future work. Applicability to Different Training Procedures We attempted to make our procedure as agnostic as possible to the details of the training procedure, and believe it will be compatible with most training procedures for large models in use today. 
However, our protocol does not apply to online or reinforcement learning, or to schemes that require multiple models to be co-trained <cit.>, as the data is unknown in advance. This means the uniqueness defense cannot be applied (Section <ref>). Finding methods for defending against non-uniqueness attacks even in the online setting is a valuable direction for future work. Maintaining Privacy and Confidentiality One significant challenge to using this protocol in practice is that it requires that the Prover disclose confidential information to the Verifier, including training data, model weights, and code. It would be valuable for future work to modify this protocol to reduce data leakage, such as by running the protocol on a Prover-and-Verifier-trusted air-gapped cluster, thereby minimizing the possibility of data leakage <cit.>. In principle, the Prover may only need to disclose hashes of the data and weights to the Verifier, with the matching full data and weights only ever supplied on the secure cluster during verification. It would also be interesting to explore whether the described memorization effect persists under differentially-private model training. Verifier Hardware One expensive requirement of our protocol is that the Verifier must have hardware that can reproduce segments of the original training run, though it does not require exact bit-wise reproducibility. In the scenario where the Prover is using a specialized, proprietary, or prohibitively expensive hardware configuration, it might be infeasible for the Verifier to independently acquire the hardware needed to reproduce even segments of such a run. Exploring the limits of “light” variants of the protocol that do not require re-training segments is a desirable direction for future work. Particularly interesting would be a protocol in which the Verifier requests that the Prover retrain a chosen segment on their own cluster and save and report closer-spaced checkpoints, and then call the PoTD procedure recursively on these closer-spaced checkpoints until verification becomes affordable to the Verifier. § ACKNOWLEDGEMENTS We thank Nicolas Papernot, Anvith Thudi, Jacob Austin, Cynthia Dwork, Suhas Vijaykumar, Rachel Cummings Shavit, Shafi Goldwasser, Hailey Schoelkopf, Keiran Paster, Ariel Procaccia, and Edouard Harris for helpful discussions. DC was supported by NSERC CGS-D, and DC and YS are supported by Open Philanthropy AI Fellowships. alpha § COMBINED VERIFICATION PROTOCOL We can unify the defenses of Section <ref> into a combined defense protocol, which catches a wide swath of attacks, including all current attacks on from the Proof-of-Learning literature <cit.>. A Prover gives the Verifier a transcript T={D, , } and a final weight vector W^*. The verifier proceeds to verify whether T is a valid training transcript through the following checks: * Check that ends in the claimed final weights W^*. * Given the dataset D, hash it to yield the seed s as in Section <ref>, and use that seed compute the resulting data order Π and validation subset D_v. (Alternatively, these hashes can be provided by the Prover, and only verified when each point is needed for the protocol.) * Check that W_0 matches (s). If this fails, reject the transcript. * Create an empty list Q to store all suspicious-looking segments to retrain. For each segment W_i, Π_i, include it in the list Q to retrain if it fails any of the following checks: * Randomly select an α fraction (e.g., 1% of k) of points Π_i, α from Π_i. 
For each such point d ∼Π_i, α, compute the losses on W_i and W_i-1, shorthanded as sets _Π_i, i and _Π_i, i-1. Similarly, pick an α fraction [To reduce noise when comparing validation performance across checkpoints, this α subset of D_v should be the same across all evaluated checkpoints.] of points from the validation set D_v and compute these points' losses on W_i, shorthanded as _D_v, i. (The validation loss on W_i-1, _D_v, i-1, should've already been computed when looping on the previous segment.) Also, randomly select an α k subset of data points D_t from across all training segments Π, and compute these points' losses on W_i, _D_t, i. * If the Verifier wants to plot complete memorization plots, for example as a sanity check or to use in checking for a Glue-ing attack as described in Section <ref>, they can also compute the losses on 2β nearby weight checkpoints W_i-β, …, W_i+β-1. However, this is not part of the core protocol, and will not be counted in its sample complexity. * Compare the values in _Π_i, i and _D_t, i using the one-sided binomial hypothesis test described in Appendix <ref>, to check that the model used the correct data ordering. If the test cannot reject the null hypothesis that Π_i does not match W_i better than randomly-selected training points D_t, then run the test on an even larger fraction of data points from Π_i and D to double-check the statistical significance of the null. If this still fails, it suggests that the transcript may have been spoofed. Investigate further (such as by reexecuting training on this segment) and/or reject the transcript. * Using _Π_i, i, _Π_i, i-1, _D_v, i, and _D_v, i-1, calculate a data subtraction upper bound heuristic as in Section <ref>. If this heuristic fails to bound the data subtraction to below an acceptable small limit (which depends on how strongly the Verifier wants to catch even small subtraction attacks, and on the Verifier's retraining budget), add this segment to Q. * Using {_D_v, j | j ∈ 0, …, m}, compute the mean validation set loss across time, and check that it is smooth at W_i and doesn't increase or decrease discontinuously. If it does, add this segment to Q. (The tolerance should depend on the retraining budget.) * Compute the distance W_i - W_i-1 from the previous checkpoint. Check that this delta is similar to nearby segments' weight-space deltas. If it is not, add this segment to Q. (The tolerance should depend on the retraining budget. We leave the question of how best to set this threshold, and of σ below, to minimize false negatives while avoiding reducing false positives that increase retraining costs, to future work.) * Randomly sample σ additional data segments from throughout training, and add them to Q. These additional segments are important to establish baseline estimates of segments' weight-space deltas across training, to ensure that there were no systematic data addition attacks at every segment. (Illegal data additions in every segment would shift the entire weight-change delta magnitude plot, thus suppressing anomalies in any one segment). * For each segment in the list Q, execute retraining and verify that the resulting weights Ŵ_i are within an ϵ-ball of the original reported weights W_i reported in the transcript. If any values in the re-trained weights fail to come within the tolerance ϵ, that is significant evidence that the transcript has been spoofed, and warrants further investigation. 
For example, the segment can be retrained more times, to confirm that the weight-space variance across retraining results Ŵ^(1), Ŵ^(2), … is sufficiently smaller than ϵ such that the reported W_i is a clear outlier. If all these tests pass, accept the transcript. §.§ Complexity The time costs of training, borne by the the Prover are: * h × |D|, where h is the cost of a hash, for generating the initial random seed. * s × n, where s is the cost of a single gradient computation, and n is the number of training data points. In comparison, the time costs to the Verifier (assuming the transcript is accepted) are: * h × |D| hashes for verifying the initial weights. * (2 + 1 + 1) ×α×s/3× n operations for computing the loss of an α fraction of datapoints in Π_i on W_i and W_i-1, and another 2α fraction of points in D_t and D_v. We also assume that computing the loss requires 1/3 the number of operations as computing a gradient update, which is the standard ratio of inference vs. training when using backpropagation. * s × n × |Q|/m operations for retraining, where m is the total number of checkpoints in the training run. § DATA ORDER STATISTICAL TEST We want a statistical test that will tell the Verifier whether, for a given training dataset D and data ordering S, which together yield a data sequence Π, and for a given weight checkpoint W_i, the data segment sequence Π_i ∈𝒳^k explains the memorization pattern of W_i better than a random data order/sequence Π'_i (which we assume is drawn randomly from D). In particular, based on results from Section <ref>, we know that datapoints from the most recent training segment d ∈Π_i tend to have higher memorization delta values than the average point d' ∈ D from the overall training distribution D. Conversely, points from Π'_i would have no reliably greater than the rest of D. We will use the following test, where Π^? = Π_i' is the null hypothesis and the alternative hypothesis is Π^? = Π_i. Let z = _d ∈ D((d, W_i)), estimated via a small number of samples from D. For Pick n_t datapoints from data sequence Π^?, and for each data point d ∈Π^?, check if it's >z. Under the null hypothesis, the probability that each point passes this check is 0.5. Let the test statistic be t: t(Π^?) = ∑_d_j ∼Π^?, j=1,…,n_t𝕀((d_j, W_i)>z) where 𝕀 is the indicator function. The value of t(Π'_i), the statistic under the null hypothesis, is distributed as a binomial with biased coin probability c=1/2 and n_t samples. However, we expect that t(Π_i) is a binomial with a larger c. To compute our confidence that W_i was trained using the data order Π_i, we can use a one-sided binomial hypothesis test, computing a p-value as 1 - CDF_binomial(c=1/2, n_t, k<t) where k is the value up to which to calculate the CDF. This statistic can be computed jointly across all checkpoints (requiring relatively few samples per checkpoint) to prove that the overall data ordering matches the one defined in Section <ref>. Note that this test is similar to the “subtraction upper bound” heuristic from Section <ref>, with the key difference being that in this test we compare against the distribution of all training points D (since the counterfactual is a randomly selected subset of training data), whereas the subtraction test compares against points from the validation set D_v (since the counterfactual is that the points are never included in training). 
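A sketch of this one-sided test follows (assuming scipy is available; the memorization deltas would be computed as in the main text, with the reference scores drawn from randomly selected training points across all of D):
```python
import numpy as np
from scipy.stats import binom

def data_order_p_value(segment_deltas, reference_deltas):
    """p-value for H0: the reported segment ordering does not match W_i better
    than a random subset of training points."""
    z = np.median(reference_deltas)                  # median over points from D
    t = int(np.sum(np.asarray(segment_deltas) > z))  # test statistic
    n_t = len(segment_deltas)
    return binom.sf(t - 1, n_t, 0.5)                 # P(X >= t), X ~ Binomial(n_t, 1/2)
```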
As an additional note, this same test can be generalized by replacing the median with a quantile, which may improve sample efficiency depending on the shape of the distribution on Π_i vs. Π_i'. § VERIFIER OBJECTIVES TABLE lightgray Transcript Use-Case Attacker Motivation Definition of Defender Success Check whether a model W was trained on data from a disallowed distribution (e.g., relating to backdoors, cyberexploit generation, or enabling an undisclosed modality such as images). A Prover wants to claim that W lacks a certain ability in order to avoid scrutiny, and does so by claiming W has only been trained on data from distribution 𝒟 and not on distribution 𝒟'. A test such that, given target weight checkpoint W, confirms that its training data did not include, in addition to a known number of data points n from a known distribution 𝒟, an additional kn training points from a different distribution 𝒟'. Check whether a model W^* was trained on greater than a certain number of data points, in case policy oversight targets the total training compute of a model (e.g. as part of compute usage reporting). Underreport total training time to avoid triggering oversight. A test such that, given a target weight checkpoint W^* and claimed sequence of n data points Π, detects whether the model was in fact trained on >kn data points, for some k>1. Check whether a model W was initialized without using weights obtained from previously-trained models. A Prover might wish to start training using weights obtained from a previous training run, hiding the fact that more data or compute was used than reported, in order to avoid scrutiny, or to save compute by copying another's work. A test such that, given a desired initialization W_0 (up to hidden unit permutations), makes it cryptographically hard to construct a transcript that results in W_0 being an initialization compatible with the resulting transcript. Check whether a model has a backdoor, i.e. an improbable input that yields a disallowed behavior. An attacker might wish to hide capabilities, or give themselves unauthorized access to systems that will be gatekept by deployed versions of their models. A test such that, given a transcript, allows reliable detection of backdoors through code or data audits. Check whether a model was trained using at least a certain quantity of data, e.g., as part of a Proof-of-Learning meant to verify the original owner of a model, or to verify that certain safety-best-practice training was done. A Prover may wish to save on compute costs by doing less training, or to prevent their model from being trained on required data. A test such that, given a target weight checkpoint W^* and a claimed sequence of n data points Π, detects whether the model was in fact trained on <cn data points, for some c<1. Check whether a model was trained using a particular datapoint. A Prover may wish to train on copyrighted content, or un-curated datasets, or obfuscate which training data were used. A test such that, given a transcript and a target datapoint x, detects whether the model was in fact trained on x. 
§ HARDNESS OF SPOOFING A WEIGHT INITIALIZATION To recap, by requiring that the Prover initialize a model's weights at a specific value in high-dimensional space W_0 ∈^d drawn from a pseudorandom vector generator , we seek to disallow a class of spoofing attacks based on the Prover hand-picking an initial weight vector Ŵ_0 that will after training end up close to W_f, for example by picking an initialization that is already close to W_f (Attack 2 in <cit.>). The simplest setting in which defense is impossible, and the Prover can reliably find a random initialization that will converge to a given W_f, is in realizable linear models (models with only a single linear layer). Since their loss function is strongly convex, any initialization will converge to a neighborhood of the same final value W_f, making it straightforward to construct tweaked datasets with certified-random initializations that result in approximately the same final model. Another counterexample occurs when datasets have a single degenerate solution: it is possible to construct a 2-layer neural network with training data covering the input space and where all the labels are 0, such that the model always converges to a weight vector of all 0s, independent of initialization. We will focus our discussion on the usual case of multi-layer NNs with non-degenerate solutions, as described below. Below, we will sketch an informal argument that for some radius r, for a fixed training data sequence Π, the probability that a training run initialized at a pseudorandomly-generated [Assuming that s is chosen randomly, based on assumptions described in Section <ref>.] weight vector W_0 = (s) ends in a final weight vector W_f that is within distance r of a particular target vector A, is less than some small value δ < õ(1/poly(d)), where d is the dimension of the neural network. This means that a Prover would need to sample a super-polynomial (in d) number of random seeds to find one that would, via training on Π, result in a fully-valid training transcript that ends close to the weight vector W_f from a previous training run with a different initialization, and therefore that it is exponentially hard to violate the “uniqueness” property from Section <ref> if the Prover uses a certified random initialization. To understand whether this is the case, we can examine the counterfactual claim: that independent of weight initialization, all NNs tend to converge to a small (polynomial) number of modes in weight space. This is indeed the case with linear regression: regardless of the initialization, given sufficient full-rank data all linear model training runs will converge to a neighborhood of the same loss minimum in weight-space. If this were also true for neural networks, then even a small number of randomly-sampled weight initializations would likely yield at least one weight initialization that, after training, converged to a mode close to the target A (assuming A is close to at least one mode, which is the case when A is the outcome of a previous training run W^f). Yet, empirically, many works have found that large NNs converge to many different modes <cit.>. The many modes of the NN loss landscape can be understood through permutation symmetries <cit.>. Neural networks are equivariant (“equivariant” means that a function changes symmetrically under a group action) under specific permutations of their matrices' columns and rows. 
Nearly all neural networks have the following permutation symmetries: given a single hidden layer M_1 σ(M_2 x), where M_1 ∈ ℝ^{a × b}, M_2 ∈ ℝ^{b × c}, and σ: ℝ^b → ℝ^b is an elementwise nonlinearity, and given any permutation matrix F ∈ ℝ^{b × b} (such that FZ permutes the rows of Z), then by simple algebra M_1 F^T σ(F M_2 x) = M_1 σ(M_2 x) for all x (using F^T F = I and the fact that an elementwise nonlinearity commutes with permutations). This means that for any pair of successive NN matrices M_1, M_2, there are at least b! possible permutations with identical input-output behavior. For a neural network W with k nonlinear layers and hidden dimension b in each layer, there could be k-1 different permutation matrices F_1, F_2, …, F_{k-1}, and we denote the operation of permuting the flattened weight vector W using a particular choice of these Fs by P: ℝ^d → ℝ^d. Each P is drawn from the overall set of valid permutations for a particular architecture, which we denote 𝒫, and we know that |𝒫| = Ω(2^{kb log b}). A second important property is that gradient descent is itself equivariant under the described permutations. Let R be the training operator, such that W_f = R(W_0, Π) is the result of training initial weights W_0 on a data sequence Π. [We omit the noise and hyperparameters inherent in R for brevity.] Then it is true that for all P ∈ 𝒫, P(W_f) = P(R(W_0, Π)) = R(P(W_0), Π) = W_f^P, where W_f^P is the result of training from the permuted initialization. This is simply a consequence of the fact that the gradient operator commutes with any constant matrix (including the permutation matrix), and that the training process R consists of repeated calls to the gradient operator, additions, and scalar multiplications (both of which also commute with the permutation matrix). [It is in principle possible to construct optimizers for which this is not the case, but this should hold for all common gradient-based NN training optimizers.] Now, assume that the initialization distribution is radially symmetric (as is the case with all common initialization schemes, e.g., those based on Gaussians), so that the probability that the initialization starts at W_0 is the same as the probability that it starts at P(W_0), for all P ∈ 𝒫. Then the probability that the post-training final weights reach W_f or P(W_f) is also the same. If we knew that, over the pseudorandom draw of W_0, the probability that ‖W_f - P(W_f)‖ > 2r is greater than 1 - δ for some r and small δ, then this derivation would tell us that there are many different weight-space modes into which training could converge, each of which is far apart from the others. (For convenience, let's refer to the number of such far-apart permuted modes as m.) Again, our goal is to show that a random initialization is unlikely to converge after training to within a neighborhood around some vector A. Assume that B is one of these modes, and ‖A - B‖ < r. [If this is untrue for all modes B, then by definition there is no initialization that leads close to A, which satisfies our original objective of bounding the probability of the final weights converging to a neighborhood of A.] According to the assumption from the previous paragraph on the distance between post-training modes, for any second mode C, we know that ‖C - B‖ > 2r with high probability. By the triangle inequality, we know that: ‖A - C‖ ≥ ‖C - B‖ - ‖A - B‖ > 2r - r = r. Therefore there is some minimum distance ‖A - C‖ > r between the target A and all other m disjoint modes (each associated with a permutation) of the post-training weight distribution. If the number of such far-apart permutations m is super-polynomial, then no polynomial number of weight initialization samples will result in a final model close to A.
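Both properties used in this argument — that permuting hidden units leaves the network's input-output behavior unchanged, and that permuted copies of the same weight vector are generically far apart in weight space — are easy to check numerically. The following is a small illustrative sketch (not this paper's code); the random matrices here merely stand in for a trained W_f, so the printed distances only illustrate the non-degenerate case.

import numpy as np

rng = np.random.default_rng(0)
a, b, c = 3, 64, 5                       # output dim, hidden dim, input dim
M1 = rng.normal(size=(a, b))             # second-layer matrix
M2 = rng.normal(size=(b, c))             # first-layer matrix
x = rng.normal(size=(c, 100))            # a batch of inputs

def relu(z):
    return np.maximum(z, 0.0)

# 1) Output invariance: M1 F^T relu(F M2 x) equals M1 relu(M2 x).
F = np.eye(b)[rng.permutation(b)]        # permutation matrix; F @ Z permutes rows of Z
diff = np.abs(M1 @ relu(M2 @ x) - (M1 @ F.T) @ relu(F @ (M2 @ x))).max()
print("max output difference under permutation:", diff)   # ~1e-13 (float round-off)

# 2) Distances between a weight vector and its permuted copies.
def flatten(m1, m2):
    return np.concatenate([m1.ravel(), m2.ravel()])

W_f = flatten(M1, M2)
dists = []
for _ in range(200):
    Fp = np.eye(b)[rng.permutation(b)]
    dists.append(np.linalg.norm(W_f - flatten(M1 @ Fp.T, Fp @ M2)))
print("min / median distance to permuted copies:", np.min(dists), np.median(dists))

With identical or all-zero hidden units (the degenerate counterexample discussed next), these distances collapse to zero, which is exactly the case a non-degeneracy assumption has to exclude.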
However, this argument is predicated on a sometimes-invalid assumption: that there are super-polynomially many permutations m = ω(poly(d)) of W_f, each at least a distance 2r from the others. In the case of the counterexample from the beginning, where all initializations converge after training to the weight vector of all 0s, all such permutations are in fact equal, and therefore there is no such distance r. Instead, one may need to make an assumption about the non-degeneracy of the distribution of final weight vectors W_f, such that permutations of these weight vectors are far apart from each other. We leave analysis of which assumptions fulfill this property as future work. Note that for any specific training transcript which includes a specific W_f, the distribution of distances of permutations of W_f can be estimated empirically by manually permuting W_f's matrices. § EXPERIMENT DETAILS For the GPT-2 experiments, we use a cosine learning rate schedule that decays by a factor of 10 by the end of training, with a linear warmup of 2000 steps to a peak learning rate of 0.0006. For the Pythia evaluation experiments, we choose checkpoints from 3 contiguous blocks out of 144 checkpoints: early (first 19 checkpoints), mid (checkpoints at steps 62000 to 80000), and late (last 19 checkpoints). § MORE MEMORIZATION PLOTS In the following subsections, we plot the memorization delta, the fraction of points with a delta above the median, and the fraction of points with a delta below the 10th percentile. For GPT-2, we use 100% of the data to generate Figures <ref>, <ref>, and <ref>, while for Pythia, we use 10% of the data to generate Figure <ref>. In this section, we show results for smaller sampling rates to highlight that with 1%, or sometimes even 0.1%, of the original data, we can still observe the memorization effect. From the Pythia 70M results (Subsections <ref>, <ref>, and <ref>) we can see that as training progresses, the memorization effect becomes less pronounced, such that with a smaller data sampling rate, less of the diagonal gets highlighted (Figures <ref>, <ref>, <ref>), and the histograms closely overlap (Figure <ref>) for the last 18 checkpoints. At the same time, we observe that as the model size increases, the memorization effect becomes clearer, even with a 0.1% data sampling rate. In fact, for the 1B-parameter Pythia model, the memorization effect is still clear for the last few checkpoints (Figures <ref>, <ref>, <ref>, and <ref>), unlike in the 70M-parameter case. §.§ Memorization §.§ Fraction of Samples Above 50th Percentile §.§ Fraction of Samples Below 10th Percentile §.§ Memorization Delta Histograms § MORE ATTACK PLOTS §.§ Data Addition Attack We repeat the data addition attack on the 70M-parameter Pythia model. As shown in Figure <ref>, similarly to the case of GPT-2 in the main body of the paper, segment retraining is able to distinguish data addition. §.§ Interpolation Attack We repeat the interpolation attack experiment from the main body of the paper with the 1B-parameter Pythia model, and observe from Figure <ref> that the interpolated checkpoints indeed fail our memorization tests. §.§ Data Subtraction Attack Tests In the following subsections, we plot the subtraction-upper-bound heuristic λ(Π_i, p, W_i) with varying values of p, for different subtraction rates. We observe that for sufficiently large models, λ is a tight upper bound when no subtraction has happened. For Pythia with 70M parameters, our smallest model, λ does not provide a tight upper bound.
However, for GPT-2 with 124M parameters, Pythia with 410M parameters, and Pythia with 1B parameters, λ provides a tight upper bound. For the 1B-parameter Pythia model, we further plot the upper-bound heuristic for varying values of the checkpoint interval (the number of training steps between each checkpoint). From Figure <ref>, we observe that even though λ increases as the interval increases, it is still a good upper bound (∼0.05 for a checkpoint interval of 5000 steps) for p=0.1 and p=0.2. This means that we can save checkpoints less frequently and still use the heuristic to detect data subtraction. §.§.§ GPT-2 §.§.§ Pythia (70M) §.§.§ Pythia (410M) §.§.§ Pythia (1B) § BROADER IMPACTS We intend this work to be a step towards meaningful and transparent public oversight of large AI systems, especially those with capabilities whose irresponsible use could significantly harm the public. Our protocol is a sketch of a technical framework for a system by which AI developers can prove properties of their training data, and may thereby enable the effective enforcement of a broader set of policies than those relying solely on black-box queries to models. While enabling many possible positive rules, this could also be misused by coercive states to detect and enforce harmful restrictions on beneficial AI development. However, in most cases, such authoritarian states would already have a means for policing domestic AI developers' behavior, and verification tools demanding so much cooperation from the Prover are unlikely to meaningfully increase existing surveillance powers. Another issue is that requirements for complying with monitoring and enforcement tend to favor large companies, for whom the cost of compliance can more easily be amortized. This motivates efforts to keep verification schemes simple, flexible, and cheap. We hope that this protocol can also be useful for verifying agreements between untrusting countries. The protocol itself does not provide a means for identifying that an AI model was developed in the first place unless it is disclosed. In this sense, it more closely parallels a process for an AI-developing country to allow its counterpart to retroactively inspect a developed system (paralleling the New START treaty's inspections of nuclear launchers), rather than to proactively detect when a new system is developed (paralleling the IAEA's monitoring of the process of uranium enrichment). Because our protocol supports multiple independent auditors reviewing the same transcripts, we hope that these tools will support the development of trust between competing companies and countries. Ultimately, we hope such protocols will support the development of a larger governance ecosystem representing many parties.
http://arxiv.org/abs/2307.01793v1
20230704160325
Constraining the binarity of black hole candidates: a proof-of-concept study of Gaia BH1 and Gaia BH2
[ "Toshinori Hayashi", "Yasushi Suto", "Alessandro A. Trani" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR", "gr-qc" ]
Toshinori Hayashi toshinori.hayashi@yukawa.kyoto-u.ac.jp 0000-0003-0288-6901]Toshinori Hayashi Yukawa Institute for Theoretical Physics, Kyoto university, Kyoto 606-8267, Japan Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan 0000-0002-4858-7598]Yasushi Suto Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033, Japan 0000-0001-5371-3432]Alessandro A. Trani Niels Bohr Institute, University of Copenhagen, Blegdamsvej 172100 Copenhagen, Denmark Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033, Japan Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan Nearly a hundred of binary black holes (BBHs) have been discovered with gravitational-wave signals emitted at their merging events. Thus, it is quite natural to expect that significantly more abundant BBHs with wider separations remain undetected in the universe, or even in our Galaxy. We consider a possibility that star-BH binary candidates may indeed host an inner BBH, instead of a single BH. We present a detailed feasibility study of constraining the binarity of the currently available two targets, Gaia BH1 and Gaia BH2. Specifically, we examine three types of radial velocity (RV) modulations of a tertiary star in star-BBH triple systems; short-term RV modulations induced by the inner BBH, long-term RV modulations induced by the nodal precession, and long-term RV modulations induced by the von Zeipel-Kozai-Lidov oscillations. Direct three-body simulations combined with approximate analytic models reveal that Gaia BH1 system may exhibit observable signatures of the hidden inner BBH if it exists at all. The methodology that we examine here is quite generic, and is expected to be readily applicable to future star-BH binary candidates in a straightforward manner. § INTRODUCTION Since the first discovery of GW150914<cit.>, more than 90 candidates for binary black holes (BBHs) have been reported so far <cit.>. The formation and evolution of such BBHs are one of the important unsolved questions in astrophysics, and there are a variety of proposed scenarios, including formation from isolated binary stars <cit.>, dynamical capture <cit.>, and binary formation of primordial black holes <cit.>. Regardless of those different formation scenarios, their progenitors are expected to have a longer orbital period. The subsequent dynamical evolution decreases their orbital energy and angular momentum, and eventually leads to the BBH merger events that are detectable using gravitational wave (GW) observations. Therefore, it is natural to expect that more abundant wide-separation BBHs remain undetected in the universe, or even in our Galaxy. In order to search for BBHs with relatively long orbital periods that cannot be probed with GWs, <cit.> and <cit.> pointed out that a BBH orbited by a tertiary star would be detectable in optical spectroscopic surveys from the radial velocity (RV) modulations of the tertiary star; the inner BBH produces the observable RV modulations of the star in short (a half of the orbital period of the BBH) and long (a nodal precession timescale and/or the von Zeipel-Kozai-Lidov timescale) terms. 
They examined the feasibility of the strategy from three-body simulations for hypothetical triple systems of an inner BBH and an outer tertiary star, and proposed that the methodology can distinguish between a single black hole (BH) and a BBH when applied to future star-BH binary candidates from the on-going Gaia <cit.> and TESS <cit.> surveys <cit.>. Indeed, the star-BH binary candidates Gaia BH1 and Gaia BH2, recently discovered from Gaia DR3 astrometric data <cit.>, provide a good opportunity to directly check the methodology. Gaia BH1 is a binary of a ∼ 1M_⊙ main sequence star and a ∼ 10M_⊙ dark companion, with an orbital period P_obs ∼ 190 days <cit.>. Gaia BH2 was first discovered by <cit.> using Gaia astrometry, and later identified more robustly by combining the follow-up RV observations of <cit.>. Gaia BH2 is a binary of a ∼ 1M_⊙ red giant and a ∼ 9M_⊙ dark companion, with P_obs ∼ 1300 days. The best-fit values of their system parameters are listed in Table <ref>. Due to the limited precision and duration of the current spectroscopic monitoring observations, it is not possible to prove the presence of an inner BBH, instead of a single dark companion, in either system. Nevertheless, those systems are useful as a proof of concept in constraining the binarity of the dark companion for future star-BH binary candidates. We first consider the short-term RV modulations on the timescale of half the inner orbital period. We next move on to the long-term RV modulations, which become important for inclined triples. As one application, we put constraints on the binarity of the dark companions in Gaia BH1 and Gaia BH2. For reference, Figure <ref> shows the configuration of a triple that we consider in the present paper. The rest of the paper is organized as follows. Section <ref> examines the short-term RV modulations. We first discuss the short-term semi-amplitude predicted from an analytic approximation for coplanar and circular triples. Then, using direct three-body simulations, we show that outer eccentricities as large as those of Gaia BH1 and Gaia BH2 significantly increase the simple prediction. Next, section <ref> focuses on the long-term RV modulations induced by the nodal precession for moderately inclined triples. We discuss analytic predictions first, and then examine their validity using three-body simulations. Section <ref> considers more significantly inclined triples in which the von Zeipel-Kozai-Lidov (ZKL) oscillations <cit.> play an important role. Finally, we summarize the constraints on Gaia BH1 and Gaia BH2, and discuss future prospects in section <ref>.

Best-fit parameters for the Gaia BH1 and BH2 systems:
parameter | symbol | Gaia BH1 | Gaia BH2
star mass | m_* | 0.93±0.05 M_⊙ | 1.07±0.19 M_⊙
companion mass | m_c | 9.62±0.18 M_⊙ | 8.94±0.34 M_⊙
eccentricity | e_obs | 0.451±0.005 | 0.5176±0.0009
pericenter argument | ω_obs | 12.8±1.1 deg | 130.9±0.4 deg
longitude of ascending node | Ω_obs | 97.8±1.0 deg | 266.9±0.5 deg
RV semi-amplitude | K_obs | 66.7±0.6 km s^-1 | 25.23±0.04 km s^-1
orbital inclination | I_obs | 126.6±0.4 deg | 34.87±0.34 deg
orbital period | P_obs | 185.59±0.05 days | 1276.7±0.6 days
The best-fit values are adopted from <cit.> for Gaia BH1 and <cit.> for Gaia BH2.

§ SHORT-TERM RV MODULATIONS For a coplanar triple system, the inner binary efficiently induces short-term wobbles of the tertiary, with a period of about half the inner orbital period P_in. For inclined triples, however, additional long-term RV modulations are generated due to the misalignment between the inner and outer orbital angular momenta.
This section focuses on coplanar triples, and discusses the amplitude of the short-term RV modulations using an analytic approximation and numerical simulations. The long-term RV modulations for inclined triples will be discussed in later sections. §.§ Analytic estimates The short-term RV modulations for coplanar and circular triples are, to leading order in a_in/a_out, analytically approximated as <cit.> V_short(t) = -(15/16) K_short cos[(2ν_in - 3ν_out)t + 2(f_in,0+ω_in) - 3(f_out,0+ω_out)] + (3/16) K_short cos[(2ν_in - ν_out)t + 2(f_in,0+ω_in) - (f_out,0+ω_out)], where ν_in, ν_out, ω_in, ω_out, f_in,0 and f_out,0 are the orbital frequencies, pericenter arguments, and initial true anomalies of the inner and outer orbits, respectively. In equation (<ref>), K_short corresponds to a characteristic semi-amplitude of the short-term RV modulations defined as K_short ≡ (m_1 m_2/m_12^2) √(m_123/m_12) (a_in/a_out)^{7/2} V_{0,0} sin I_obs = (m_1 m_2/m_12^2)(m_12/m_123)^{2/3}(P_in/P_out)^{7/3} V_{0,0} sin I_obs, where m_12 ≡ m_1+m_2, m_123 ≡ m_12+m_*, and V_{0,0} ≡ (m_12/m_123) a_out ν_out = (2π𝒢 m_12^3/(m_123^2 P_out))^{1/3}, with 𝒢 being Newton's gravitational constant, and a_in, a_out, P_in, P_out and I_obs are the semi-major axes and orbital periods of the inner and outer orbits, and the observed inclination, respectively. Figure <ref> plots the contours of K_short in the q_21 ≡ m_2/m_1 – P_in plane for Gaia BH1 (left) and Gaia BH2 (right). The shaded regions indicate dynamically unstable configurations according to the stability condition for coplanar triples derived by <cit.> (hereafter, MA01): a_out/a_in > 2.8(1-0.3 i_mut/180^∘)[(1+m_*/m_12)(1+e_out)/√(1-e_out)]^{2/5}. The condition (<ref>) turned out to be a good approximation for coplanar triples (i_mut=0^∘). <cit.> examined the Lagrange stability timescales of triples in general, and found that the condition (<ref>) needs to be improved especially for inclined triples that exhibit the ZKL oscillations. Figure <ref> indicates that the expected values of K_short (dotted contours) are fairly small; at most 𝒪(10) m/s for Gaia BH1, and 𝒪(1) m/s for Gaia BH2. In reality, however, the observed semi-amplitude should be sensitive to the mutual phases of the three bodies, in particular for eccentric outer orbits as in the cases of both Gaia BH1 and Gaia BH2. While K_short is derived for circular orbits, the effect of the outer eccentricity may be partly taken into account by replacing a_out by a_out(1-e_out) in equation (<ref>), i.e., K_short(1-e_obs)^{-7/2}, as plotted in red solid contours in Figure <ref>. In the next subsection, we perform three-body simulations and show that the phase-dependent RV modulation amplitudes become even larger for Gaia BH1 and BH2 around the pericenter passages, due to their relatively large e_obs. §.§ Numerical results In order to predict the short-term RV modulations for Gaia BH1 and Gaia BH2 more quantitatively, we perform three-body simulations using TSUNAMI <cit.>. The details of the procedure are described in <cit.>. Figure <ref> shows the results of simulations assuming an equal-mass inner binary (m_1=m_2=m_c/2) with the initial phases M_in=30^∘, M_out=45^∘, ω_in=0^∘, and ω_out=ω_obs.
In order to remove possible transient behavior due to the choice of initial phase angles, we first evolve the system over 100 outer orbital periods P_out (=P_obs), and then plot the resulting RV curve for t=100 P_obs to 101 P_obs (top panels). We also fit the simulated RV data with the public code RadVel <cit.>, so as to remove the overall Kepler motion of the tertiary star. The resulting residuals (middle and bottom panels) represent the short-term RV modulations. The left and right panels of Figure <ref> correspond to Gaia BH1 with P_in=10 days and Gaia BH2 with P_in=50 days, both of which satisfy the dynamical stability condition (Figure <ref>). Red and blue curves show the results for initial inner eccentricities of e_in=0 and e_in=0.2, respectively. The difference in e_in produces a small phase shift of the total and residual RV, but does not affect their amplitudes much. For reference, we plot the analytic short-term modulation semi-amplitude ± K_short, equation (<ref>), and also ± K_short(1-e_obs)^{-7/2}; see the magenta and cyan regions in the middle and bottom panels of Figure <ref>. Clearly, K_short significantly underestimates the simulated amplitudes. Indeed, the simulated RV modulations become even larger around the pericenter passage of the tertiary; the short-term RV modulations for Gaia BH1 and Gaia BH2 amount to ∼ 300 m/s and ∼ 100 m/s around that epoch. Those values are about 10–100 times larger than the analytic approximation K_short, equation (<ref>), and may be detectable for Gaia BH1 from the observed RV residuals according to Figure 4 of <cit.>. § LONG-TERM RV MODULATIONS FOR MODERATELY INCLINED SYSTEMS: NODAL PRECESSION Consider next non-coplanar triples, i.e., triples whose inner and outer orbits are mutually inclined. <cit.> pointed out that the long-term RV modulations of the tertiary body due to the nodal precession and the ZKL oscillations may carry interesting signatures of the hidden inner binary. The details of the inclined three-body dynamics are described in the previous literature <cit.>. In this section, we focus on the nodal precession in moderately inclined systems (i_mut ≲ 50^∘). First, we consider analytic approximations for the timescale and the RV modulation amplitude of the nodal precession. Then, we perform three-body simulations to present more quantitative predictions, and discuss the observational feasibility. §.§ Analytic estimates §.§.§ Nodal precession timescale If e_in is initially small and i_mut is moderate (i_mut ≲ 50^∘), the outer ascending node Ω_out regularly precesses with the following timescale P_Ω <cit.>: P_Ω = 2π/Ω̇_out = π G_in G_out/(6 C_quad G_tot cos i_mut), where C_quad is the quadrupole strength coefficient: C_quad ≡ (𝒢/16)(m_1 m_2/m_12) m_*/(1-e_out^2)^{3/2} (a_in^2/a_out^3), and G_in, G_out, and G_tot are the inner, outer, and total angular momenta: G_in = μ_in ν_in a_in^2 √(1-e_in^2), G_out = μ_out ν_out a_out^2 √(1-e_out^2), G_tot = √(G_in^2 + G_out^2 + 2G_in G_out cos i_mut). In equations (<ref>) and (<ref>), μ_in and μ_out denote the reduced masses: μ_in ≡ m_1 m_2/m_12 = q_21 m_12/(1+q_21)^2, μ_out ≡ m_12 m_*/m_123, where q_21 ≡ m_2/m_1 is the mass ratio of the inner binary. It is convenient to introduce the ratio of the inner to outer angular momenta: ξ ≡ G_in/G_out = [q_21/(1+q_21)^2] √((1-e_in^2)/(1-e_out^2)) (m_12/m_*)(m_123 P_in/(m_12 P_out))^{1/3}, which is a key parameter that characterizes the long-term modulation due to the nodal precession. Figure <ref> plots ξ against P_in for Gaia BH1 and Gaia BH2 in solid and dashed lines; different colors correspond to (q_21, e_in) = (1, 0), (1, 0.3), (0.1, 0), and (0.1, 0.3).
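As a quick numerical cross-check of the quantities just defined, the short Python sketch below (illustrative only, not the code used in this paper) evaluates ξ and P_Ω from the formulas above, assuming an equal-mass, circular inner binary, taking the outer orbital elements from Table <ref>, and adopting the fiducial P_in and i_mut=20^∘ used later in the text.

import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
DAY = 86400.0          # s
YEAR = 365.25 * DAY

def xi_and_P_Omega(m1, m2, mstar, P_in, P_out, e_in, e_out, i_mut_deg):
    """Angular-momentum ratio xi = G_in/G_out and nodal precession timescale P_Omega."""
    m12, m123 = m1 + m2, m1 + m2 + mstar
    a_in = (G * m12 * (P_in / (2 * np.pi))**2)**(1 / 3)
    a_out = (G * m123 * (P_out / (2 * np.pi))**2)**(1 / 3)
    nu_in, nu_out = 2 * np.pi / P_in, 2 * np.pi / P_out
    mu_in, mu_out = m1 * m2 / m12, m12 * mstar / m123
    G_in = mu_in * nu_in * a_in**2 * np.sqrt(1 - e_in**2)
    G_out = mu_out * nu_out * a_out**2 * np.sqrt(1 - e_out**2)
    i_mut = np.radians(i_mut_deg)
    G_tot = np.sqrt(G_in**2 + G_out**2 + 2 * G_in * G_out * np.cos(i_mut))
    C_quad = (G / 16) * (m1 * m2 / m12) * mstar / (1 - e_out**2)**1.5 * a_in**2 / a_out**3
    P_Omega = np.pi * G_in * G_out / (6 * C_quad * G_tot * np.cos(i_mut))
    return G_in / G_out, P_Omega

# (star mass, companion mass, P_out [d], e_out, assumed P_in [d]) following Table <ref>.
for name, mstar, mc, P_out_d, e_out, P_in_d in [
        ("Gaia BH1", 0.93, 9.62, 185.59, 0.451, 10.0),
        ("Gaia BH2", 1.07, 8.94, 1276.7, 0.5176, 50.0)]:
    xi, P_Om = xi_and_P_Omega(mc / 2 * MSUN, mc / 2 * MSUN, mstar * MSUN,
                              P_in_d * DAY, P_out_d * DAY, 0.0, e_out, 20.0)
    print(f"{name}: xi ~ {xi:.2f}, P_Omega ~ {P_Om / YEAR:.0f} yr")

With these assumptions the sketch returns ξ ≈ 1.1 and P_Ω ≈ 50 yr for Gaia BH1, consistent with the maximum ξ of about 1.2 and the P_Ω/2 ≈ 26 yr quoted below for that system, while the corresponding P_Ω for Gaia BH2 comes out far longer.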
As equation (<ref>) indicates, ξ is sensitive to q_21, but not to e_in as long as e_in^2 ≪ 1. The realistic range of ξ values is shown in this figure for a given set of q_21 and P_in. Due to dynamical stability, ξ cannot exceed about 1.2 and 0.9 for Gaia BH1 and Gaia BH2, respectively. By rewriting equation (<ref>) in terms of ξ as P_Ω = (π/6) ξ G_out/(C_quad cos i_mut) × 1/√(1+2ξ cos i_mut+ξ^2), equation (<ref>) reduces to P_Ω/P_out = [4 q_21^3/(3(1+q_21)^6)] (m_12^2 m_123^2/m_*^4) (1-e_in^2)^2/(ξ^3 cos i_mut) × 1/√(1+2ξ cos i_mut+ξ^2). Equation (<ref>) implies that the nodal precession timescale is very sensitive to ξ. Figure <ref> plots P_Ω/P_out as a function of cos i_mut for Gaia BH1 (solid) and BH2 (dashed) with e_in=0 and q_21 = 1.0. The plot also shows that P_Ω/P_out is not sensitive to i_mut as long as moderately inclined triples are considered. Figure <ref> shows P_Ω/P_out as a function of ξ for Gaia BH1 (solid) and BH2 (dashed) with q_21=1 and e_in=0. Since P_Ω is a strongly decreasing function of ξ, triples with a larger value of ξ are preferable for a successful detection of long-term RV modulations. §.§.§ Relation between inclination angles The long-term RV modulations due to the nodal precession are computed as a function of the inclination angles illustrated in Figure <ref>. First, we derive a relation between i_out and i_mut, which proves to be useful in the later discussion. If we neglect the ZKL oscillations and simply consider the nodal precession alone, i_mut is nearly constant, and i_in and i_out simply satisfy i_in + i_out = i_mut and sin i_out = ξ sin i_in. Thus, for moderately inclined triples of i_mut ≲ 50^∘, we obtain sin i_out = ξ sin i_mut/√(1+2ξ cos i_mut+ξ^2), cos i_out = (1+ξ cos i_mut)/√(1+2ξ cos i_mut+ξ^2). Figure <ref> plots the outer inclination i_out against i_mut for different values of ξ. The figure indicates that only a small i_out is allowed for moderately inclined triples, except for very large values of ξ, which are not permitted for Gaia BH1 and Gaia BH2. Note that ξ=1.2 and 0.9 roughly correspond to the maximum possible values of ξ from the viewpoint of dynamical stability for Gaia BH1 and Gaia BH2, respectively. For moderately inclined triples in which the nodal precession (i.e., Ω_out precession) dominates the dynamics, Ω_out and ϕ(t) (see Figure <ref>) change gradually from 0^∘ to 360^∘ with timescale P_Ω. Thus, I_obs(t) varies within the following range: |I_los - i_out| < I_obs(t) < min{I_los + i_out, 360^∘ - (I_los + i_out)}. We can insert in equation (<ref>) the expression for i_out in terms of i_mut (<90^∘) using the relation i_out = tan^{-1}[ξ sin i_mut/(1+ξ cos i_mut)], derived from equations (<ref>) and (<ref>). Figure <ref> shows the constraints on the inclination angles of Gaia BH1 (left) and Gaia BH2 (right) from the observed value of I_obs. If future observations detect any change of the RV semi-amplitude, or equivalently of I_obs, this plot is useful in inferring the geometric configuration of the corresponding triple system. The observed RV semi-amplitude K(t) is proportional to sin I_obs(t). In the case of the nodal precession alone, we obtain from Figure <ref> cos I_obs(t) = sin I_los sin i_out cos ϕ(t) + cos I_los cos i_out. Thus, sin I_obs(t) = √(1 - sin^2 I_los sin^2 i_out (cos ϕ(t)+Γ)^2), where Γ ≡ cot i_out cot I_los, and the precession angle ϕ(t) varies from 0^∘ to 360^∘ periodically with the timescale of P_Ω as Ω_out precesses. If -1 ≤ Γ ≤ 1, equation (<ref>) becomes maximum (unity) when cos ϕ = -Γ. If Γ < -1 or Γ > 1, it becomes maximum when cos ϕ = +1 and -1, respectively. Similarly, equation (<ref>) becomes minimum when cos ϕ = -1 and +1, for Γ < 0 and Γ > 0, respectively.
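The following short sketch (again illustrative only) evaluates sin I_obs over one precession cycle from the relation just derived, for assumed values of ξ, i_mut, and I_los; the maximum and minimum it reports correspond to the K_max/V_0 and K_min/V_0 discussed next.

import numpy as np

def sin_I_obs(phi, i_out, I_los):
    """sin I_obs from cos I_obs = sin I_los sin i_out cos(phi) + cos I_los cos i_out."""
    cos_I = np.sin(I_los) * np.sin(i_out) * np.cos(phi) + np.cos(I_los) * np.cos(i_out)
    return np.sqrt(1.0 - cos_I**2)

# Assumed values: xi and i_mut chosen by hand; I_los close to the observed I_obs of Gaia BH1.
xi, i_mut, I_los = 1.1, np.radians(20.0), np.radians(120.0)
i_out = np.arctan2(xi * np.sin(i_mut), 1.0 + xi * np.cos(i_mut))   # tan i_out relation

phi = np.linspace(0.0, 2.0 * np.pi, 2001)
s = sin_I_obs(phi, i_out, I_los)
Gamma = 1.0 / (np.tan(i_out) * np.tan(I_los))                      # cot(i_out) cot(I_los)
print(f"i_out = {np.degrees(i_out):.1f} deg, Gamma = {Gamma:.2f}")
print(f"K_max/V_0 = {s.max():.2f}, K_min/V_0 = {s.min():.2f}, Delta_K = {s.max() - s.min():.2f}")

For these assumed numbers the output is Δ_K ≈ 0.18, close to the Δ_K ≈ 0.2 quoted later for the Gaia BH1 example with I_los=120^∘ and i_mut=20^∘.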
It is amusing to note that the periodic change of the above RV semi-amplitude is basically identical to the photometric variations of an oblique rotating star with surface inhomogeneities <cit.>. The above argument is simply summarized as K_max/V_0 = 1 for -1 ≤ Γ ≤ 1, |sin(I_los - i_out)| for Γ < -1, and |sin(I_los + i_out)| for 1 < Γ, and K_min/V_0 = |sin(I_los + i_out)| for Γ < 0, and |sin(I_los - i_out)| for 0 < Γ, where V_0 is the RV semi-amplitude for an edge-on system: V_0 ≡ V_{0,0}/√(1-e_out^2) = (1/√(1-e_out^2))(2π𝒢 m_12^3/(m_123^2 P_out))^{1/3}. If future long-term RV monitoring (over a duration exceeding P_Ω) identifies the RV modulation of Gaia BH1 and BH2, equations (<ref>) and (<ref>) determine the inclination angles of the line of sight and the outer orbit, I_los and i_out, separately. If the dark companion is a single BH, instead of a BBH, then i_out=0^∘ and I_los=I_obs always (see Figure <ref>). The inner binarity of the dark companion may therefore be revealed by i_out ≠ 0^∘. Note that there is a parameter degeneracy of I_obs ↔ 180^∘ - I_obs in the RV observation, but the astrometry indeed breaks this degeneracy. Figure <ref> summarizes the expected fractional change of the RV semi-amplitude, Δ_K, in the P_in – P_Ω plane for Gaia BH1 (left) and Gaia BH2 (right). Specifically, we define Δ_K using equations (<ref>) and (<ref>): Δ_K ≡ (K_max - K_min)/V_0. For simplicity, we here assume e_in=0 and q_21=1 for both Gaia BH1 and Gaia BH2. In addition, we fix I_los=120^∘ and 30^∘ for Gaia BH1 and Gaia BH2, respectively, corresponding to values close to their I_obs; see Table <ref>. The cyan regions correspond to the dynamically unstable region from MA01 (see equation (<ref>)). We note that for high mutual inclination (i_mut ≳ 50^∘), the analytic discussion based on the nodal precession becomes invalid since the ZKL oscillations become important. For moderate inclination, however, we can safely estimate Δ_K, and the corresponding P_in and P_Ω. Figure <ref> implies that Δ_K = 0.2–0.4 variations are expected within 100 yrs for Gaia BH1 if P_in=5–10 days, while an unrealistically long observational duration is required to detect a similar level of variations for Gaia BH2. §.§ Numerical results In order to discuss the observational feasibility, we perform three-body numerical simulations with TSUNAMI, and present examples of the expected long-term RV modulations. We fix the initial phases (M_in=30^∘, M_out=45^∘, ω_in=0^∘, ω_out=ω_obs), and assume m_1=m_2, P_in = 10 days (Gaia BH1) and P_in = 50 days (Gaia BH2), and e_in=0. We additionally assume I_los=120^∘ (Gaia BH1) and I_los=30^∘ (Gaia BH2), and i_mut=20^∘. The top panels of Figure <ref> show the simulated RV semi-amplitudes against t/P_Ω for Gaia BH1 (left) and Gaia BH2 (right), respectively. The red and blue curves indicate the envelope of the radial velocity K(t)/V_0, i.e., neglecting the periodic changes over P_out, which we define as RV_max/V_0 and RV_min/V_0. The normalized RV semi-amplitude K/V_0 ≡ (RV_max - RV_min)/(2V_0) is plotted in solid black curves, which should be compared with the analytic prediction, equation (<ref>) with equations (<ref>) and (<ref>). In the plots, we show the analytically estimated Δ_K as magenta regions, and the expected semi-amplitude change from equation (<ref>) as dotted green curves. We chose the initial phase to be consistent with those of the simulations at t=0. As expected, the ZKL oscillations are negligible for the present case (i_mut=20^∘ initially), and the mutual inclination is nearly constant over the period of P_Ω.
The simulated RV semi-amplitude changes almost sinusoidally with a period of ∼ P_Ω (black curve), and its fractional change Δ_K is indeed in good agreement with the value predicted from the analytic approximation; Δ_K ≈ 0.2 for Gaia BH1, and Δ_K ≈ 0.3 for Gaia BH2, see Figure <ref>. The example for Gaia BH1 indicates an RV semi-amplitude change of as large as 17 km/s, corresponding to Δ_K = 0.2, within P_Ω/2 ≈ 26 yrs, depending on the phase. Furthermore, the zero-point of the RV curve also changes significantly. Thus, future long-term RV monitoring of Gaia BH1 should provide strong constraints on, or even detect, its inner BBH. In contrast, the case of Gaia BH2 is very difficult because its P_Ω is too long. § LONG-TERM RV MODULATIONS FOR SIGNIFICANTLY INCLINED SYSTEMS: ZKL OSCILLATIONS Finally, we consider triple systems whose inner binary orbit is significantly inclined, i_mut > 50^∘, relative to the outer orbit. In this case, an analytic discussion is not easy due to the strong ZKL oscillations. Thus, we present examples of numerical simulations alone. The middle and bottom panels of Figure <ref> are the same as the top panels except that their initial mutual inclinations are i_mut=60^∘ and i_mut=90^∘, respectively. Note that they are plotted against t/P_out, and their long-term modulation period is roughly consistent with the quadrupole ZKL timescale <cit.>: T_ZKL/P_out = (m_123/m_3)(P_out/P_in)(1-e_out^2)^{3/2} ≈ 130 (m_123/10M_⊙)(m_3/1M_⊙)^{-1}[(P_out/P_in)/20][(1-e_out^2)/(1-0.5^2)]^{3/2}. The middle panels, with i_mut=60^∘ initially, indicate that the amplitude of the ZKL oscillations is still modest in this example, and the resulting semi-amplitude change (black curves) is roughly sinusoidal, as expected for nodal precession alone. Moreover, the analytic prediction of Δ_K, equation (<ref>), agrees with the simulated value within ten percent. In contrast, the bottom plots, with i_mut=90^∘ initially, show non-trivial RV curves, due to the strong ZKL oscillations. For most of the time, the system stays near mutually orthogonal orbits, but suddenly moves to i_mut ≈ 40^∘. While the change of the mutual inclination is quite periodic, roughly with the ZKL timescale T_ZKL, equation (<ref>), the corresponding RV semi-amplitude changes are no longer periodic. Therefore, long-term RV monitoring of such systems may detect a significant change of the RV semi-amplitude even for relatively short timescales, or barely any change over a long duration, depending on the phase of the observation relative to the sporadic behavior represented in the bottom panels of Figure <ref>. § SUMMARY AND DISCUSSION Triple systems are ubiquitous in the universe, and trigger a wide variety of interesting observable events in astronomy. While nearly a hundred BBHs have been discovered from the GWs emitted at the final instant of their coalescence, there is no candidate for triples including two BHs yet. Needless to say, such triples are fascinating targets for observational astronomy. Furthermore, star-BBH or even triple BH systems may provide an important mechanism to accelerate the GW merger of the detected BBHs <cit.>. The formation and evolution of stellar triples are fundamental, but theoretically challenging, problems in broad areas of astrophysics. Their proper understanding requires many complicated physical processes, including the evolution of common envelope phases, supernova explosions, and the subsequent dynamics of the resulting compact objects <cit.>.
Thus, future discoveries of star-BH binaries and star-BBH triples of the kind considered in the present paper would provide complementary observational insights that are useful in constructing and testing theoretical models. <cit.> and <cit.> have proposed a methodology to discover a hidden inner BBH in star-BH binary candidates from the radial velocity modulations of the orbiting (tertiary) star. Recent discoveries of such systems, Gaia BH1 and BH2 <cit.>, provide a great opportunity to examine the feasibility of their methodology in detail as a proof of concept. Even if the dark companions of Gaia BH1 and BH2 turn out to be single BHs instead of BBHs, the analysis presented here is readily applicable to future star-BH candidates that remain to be discovered. The results of our proof-of-concept study are summarized below. (1) Short-term RV modulations induced by the inner BBH. An inner BBH generates a small-amplitude modulation of period P_in on the RV of the tertiary star. The semi-amplitudes based on an analytic approximation are 𝒪(10) m/s for Gaia BH1, and 𝒪(1) m/s for Gaia BH2, if the tertiary is on a coplanar and circular orbit. In reality, however, the relatively large eccentricities of e_obs ∼ 0.5 for both systems are expected to significantly increase the semi-amplitude. Our numerical simulations indicate that the semi-amplitude of the short-term RV modulation increases by more than a factor of (1-e_out)^{-7/2} (≈ 11) near the pericenter passage. Thus, the resulting amplitudes amount to ∼ 300 m/s for Gaia BH1, and ∼ 100 m/s for Gaia BH2, at their pericenter passage phases. We conclude that high-cadence and precise RV follow-ups near the pericenter passages of the star are promising for searching for possible inner BBHs in star-BH candidates with large e_obs. (2) Long-term RV modulations induced by the nodal precession. If the orbit of the inner BBH is moderately inclined relative to that of the tertiary, i_mut ≲ 50^∘, the nodal precession generates long-term modulations of the radial velocity, or equivalently of the inclination I_obs of the tertiary relative to the observer's line of sight. Unlike the short-term RV modulation, the nodal precession changes the RV semi-amplitude of the tertiary by a factor of sin I_obs. Thus, the change of the RV semi-amplitude, Δ_K V_0, is significantly larger than that of the short-term modulation, but its modulation period P_Ω may be unrealistically long. Our examples from three-body simulations (an equal-mass circular BBH with P_in=10 days) predict an RV semi-amplitude change of 17 km/s within ∼ 26 yrs for Gaia BH1, assuming that the line-of-sight inclination I_los = 120^∘ is close to the observed inclination I_obs = 126.6^∘. For Gaia BH2, the nodal precession timescale is too long to be detectable within a reasonable observation duration. More importantly, we confirm that our simple analytic estimates of Δ_K and P_Ω reproduce the simulation results well. (3) Long-term RV modulations induced by the ZKL oscillations. For highly inclined triples, the ZKL oscillations induce drastic and non-periodic RV semi-amplitude changes, and the analytic approximation becomes less reliable than in the case with the nodal precession alone. Thus, numerical simulations are required to make quantitative predictions. We confirm that the timescale of the corresponding RV modulations is consistent with the ZKL timescale T_ZKL, which is roughly ∼ 100 P_out for our fiducial cases for Gaia BH1 (P_out ∼ 190 days) and Gaia BH2 (P_out ∼ 1300 days).
Because of the rather sporadic and abrupt changes of the RV semi-amplitude induced by the ZKL oscillations, we may be able to detect the signatures of the long-term RV modulation depending on the observational phase. We have demonstrated the feasibility of detecting an inner BBH from RV follow-ups of star-BH binary candidates, if some of them are indeed star-BBH triples. We studied the presently available best targets, Gaia BH1 and Gaia BH2, as a proof of concept, and found that future monitoring of Gaia BH1 may indeed detect an inner BBH within a reasonable timescale. The three observable signatures of the RV modulations of the tertiary discussed in the above summary are quite generic, and can be applied to more abundant candidates from future Gaia data in a straightforward manner. We also mention that this method is applicable to tertiary pulsar-BBH triple systems using pulsar timing analysis, instead of RV monitoring <cit.>. It is not clear if such star-BBH and even pulsar-BBH triples exist within our reachable horizon. Nevertheless, we would like to conclude by referring to a universal principle that "everything not forbidden is compulsory" <cit.>.

§ ACKNOWLEDGMENTS T.H. thanks Kareem El-Badry for fruitful discussion on the possible binary companion in Gaia BH1 during the workshop "The Renaissance of Stellar Black-Hole Detections in The Local Group", held from June 26 to 30, 2023, at the Lorentz Center in Leiden University. T.H. gratefully acknowledges the fellowship by the Japan Society for the Promotion of Science (JSPS). This work is supported partly by the JSPS KAKENHI grant Nos. JP19H01947 and JP23H01212 (Y.S.), JP21J11378 and JP23KJ1153 (T.H.), and JP21K13914 (A.A.T.).

§ REFERENCES

Aarseth, S. J., & Mardling, R. A. 2001, Astronomical Society of the Pacific Conference Series, Vol. 229, The Formation and Evolution of Multiple Star Systems, ed. P. Podsiadlowski, S. Rappaport, A. R. King, F. D'Antona, & L. Burderi (Astronomical Society of the Pacific), 77
Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016, Physical Review Letters, 116, 061102, 10.1103/PhysRevLett.116.061102
Antognini, J. M. O. 2015, 452, 3610, 10.1093/mnras/stv1552
Belczynski, K., Dominik, M., Repetto, S., Holz, D. E., & Fryer, C. L. 2012, arXiv e-prints, 1208.0358
Belczynski, K., Holz, D. E., Bulik, T., & O'Shaughnessy, R. 2016a, 534, 512, 10.1038/nature18322
Belczynski, K., Kalogera, V., & Bulik, T. 2002, 572, 407, 10.1086/340304
Belczynski, K., Repetto, S., Holz, D. E., et al. 2016b, 819, 108, 10.3847/0004-637X/819/2/108
Belczynski, K., Taam, R. E., Kalogera, V., Rasio, F. A., & Bulik, T. 2007, 662, 504, 10.1086/513562
Bird, S., Cholis, I., Muñoz, J. B., et al. 2016, Physical Review Letters, 116, 201301, 10.1103/PhysRevLett.116.201301
Breivik, K., Chatterjee, S., & Larson, S. L. 2017, 850, L13, 10.3847/2041-8213/aa97d5
Dominik, M., Belczynski, K., Fryer, C., et al. 2012, 759, 52, 10.1088/0004-637X/759/1/52
Dominik, M., Belczynski, K., Fryer, C., et al. 2013, 779, 72, 10.1088/0004-637X/779/1/72
El-Badry, K., Rix, H.-W., Cendes, Y., et al. 2023a, 521, 4323, 10.1093/mnras/stad799
El-Badry, K., Rix, H.-W., Quataert, E., et al. 2023b, 518, 1057, 10.1093/mnras/stac3140
Fragione, G., Martinez, M. A. S., Kremer, K., et al. 2020, 900, 16, 10.3847/1538-4357/aba89b
Fulton, B. J., Petigura, E. A., Blunt, S., & Sinukoff, E. 2018, 130, 044504, 10.1088/1538-3873/aaaaa8
Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, 595, A1, 10.1051/0004-6361/201629272
Gaia Collaboration, Arenou, F., Babusiaux, C., et al. 2022, arXiv e-prints, arXiv:2206.05595
Hayashi, T., & Suto, Y. 2020, 897, 29, 10.3847/1538-4357/ab97ad
Hayashi, T., & Suto, Y. 2021, 907, 48, 10.3847/1538-4357/abcec6
Hayashi, T., Trani, A. A., & Suto, Y. 2022, 939, 81, 10.3847/1538-4357/ac8f48
Hayashi, T., Trani, A. A., & Suto, Y. 2023, 943, 58, 10.3847/1538-4357/acac1e
Hayashi, T., Wang, S., & Suto, Y. 2020, The Astrophysical Journal, 890, 112, 10.3847/1538-4357/ab6de6
Ioka, K., Tanaka, T., & Nakamura, T. 1999, 60, 083512, 10.1103/PhysRevD.60.083512
Kawanaka, N., Yamaguchi, M., Piran, T., & Bulik, T. 2016, Proceedings of the International Astronomical Union, 12, 41, 10.1017/S1743921316012606
Kinugawa, T., Inayoshi, K., Hotokezaka, K., Nakauchi, D., & Nakamura, T. 2014, 442, 2963, 10.1093/mnras/stu1022
Kinugawa, T., Miyamoto, A., Kanda, N., & Nakamura, T. 2016, 456, 1093, 10.1093/mnras/stv2624
Kocsis, B., Suyama, T., Tanaka, T., & Yokoyama, S. 2018, 854, 41, 10.3847/1538-4357/aaa7f4
Kozai, Y. 1962, 67, 591, 10.1086/108790
Lidov, M. L. 1962, 9, 719, 10.1016/0032-0633(62)90129-0
Liu, B., & Lai, D. 2018, 863, 68, 10.3847/1538-4357/aad09f
Mardling, R., & Aarseth, S. 1999, in NATO Advanced Science Institutes (ASI) Series C, Vol. 522, ed. B. A. Steves & A. E. Roy (Springer), 385
Mashian, N., & Loeb, A. 2017, 470, 2611, 10.1093/mnras/stx1410
Masuda, K., & Hotokezaka, K. 2019, 883, 169, 10.3847/1538-4357/ab3a4f
Morais, M. H. M., & Correia, A. C. M. 2008, 491, 899, 10.1051/0004-6361:200810741
Morais, M. H. M., & Correia, A. C. M. 2012, 419, 3447, 10.1111/j.1365-2966.2011.19986.x
Naoz, S. 2016, 54, 441, 10.1146/annurev-astro-081915-023315
O'Leary, R. M., Kocsis, B., & Loeb, A. 2009, 395, 2127, 10.1111/j.1365-2966.2009.14653.x
Portegies Zwart, S. F., & McMillan, S. L. W. 2000, 528, L17, 10.1086/312422
Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, ed. J. Oschmann, Jacobus M., M. Clampin, G. G. Fazio, & H. A. MacEwen, 914320, 10.1117/12.2063489
Rodriguez, C. L., Haster, C.-J., Chatterjee, S., Kalogera, V., & Rasio, F. A. 2016, 824, L8, 10.3847/2041-8205/824/1/L8
Sagan, C. 1985, Contact (New York: Simon and Schuster)
Sasaki, M., Suyama, T., Tanaka, T., & Yokoyama, S. 2016, Physical Review Letters, 117, 061101, 10.1103/PhysRevLett.117.061101
Sasaki, M., Suyama, T., Tanaka, T., & Yokoyama, S. 2018, Classical and Quantum Gravity, 35, 063001, 10.1088/1361-6382/aaa7b4
Shikauchi, M., Kumamoto, J., Tanikawa, A., & Fujii, M. S. 2020, 10.1093/pasj/psaa030
Spera, M., Mapelli, M., Giacobbo, N., et al. 2019, 485, 889, 10.1093/mnras/stz359
Suto, Y., Sasaki, S., Aizawa, M., Fujisawa, K., & Kashiyama, K. 2023, 75, 103, 10.1093/pasj/psac093
Suto, Y., Sasaki, S., Nakagawa, Y., & Benomar, O. 2022, 74, 857, 10.1093/pasj/psac039
Tagawa, H., Umemura, M., & Gouda, N. 2016, 462, 3812, 10.1093/mnras/stw1877
Tanikawa, A., Hattori, K., Kawanaka, N., et al. 2023, 946, 79, 10.3847/1538-4357/acbf36
The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, Abbott, R., Abbott, T. D., Acernese, F., et al. 2021
Kuroyanagi, Kusayanagi, Kuwahara, Kwak, Lagabbe, Laghi, Lalande, Lam, Lamberts, Landry, Lane, Lang, Lange, Lantz, La Rosa, Lartaux-Vollard, Lasky, Laxen, Lazzarini, Lazzaro, Leaci, Leavey, Lecoeuche, Lee, Lee, Lee, Lee, Lee, Lee, Lehmann, Lemaître, Leonardi, Leroy, Letendre, Levesque, Levin, Leviton, Leyde, Li, Li, Li, Li, Li, Li, Lin, Lin, Lin, Lin, Lin, Linde, Linker, Linley, Littenberg, Liu, Liu, Liu, Liu, Llamas, Llorens-Monteagudo, Lo, Lockwood, Loh, London, Longo, Lopez, Lopez Portilla, Lorenzini, Loriette, Lormand, Losurdo, Lott, Lough, Lousto, Lovelace, Lucaccioni, Lück, Lumaca, Lundgren, Luo, Lynam, Macas, MacInnis, Macleod, MacMillan, Macquet, Magaña Hernandez, Magazzù, Magee, Maggiore, Magnozzi, Mahesh, Majorana, Makarem, Maksimovic, Maliakal, Malik, Man, Mandic, Mangano, Mango, Mansell, Manske, Mantovani, Mapelli, Marchesoni, Marchio, Marion, Mark, Márka, Márka, Markakis, Markosyan, Markowitz, Maros, Marquina, Marsat, Martelli, Martin, Martin, Martinez, Martinez, Martinez, Martinovic, Martynov, Marx, Masalehdan, Mason, Massera, Masserot, Massinger, Masso-Reid, Mastrogiovanni, Matas, Mateu-Lucena, Matichard, Matiushechkina, Mavalvala, McCann, McCarthy, McClelland, McClincy, McCormick, McCuller, McGhee, McGuire, McIsaac, McIver, McRae, McWilliams, Meacher, Mehmet, Mehta, Meijer, Melatos, Melchor, Mendell, Menendez-Vazquez, Menoni, Mercer, Mereni, Merfeld, Merilh, Merritt, Merzougui, Meshkov, Messenger, Messick, Meyers, Meylahn, Mhaske, Miani, Miao, Michaloliakos, Michel, Michimura, Middleton, Milano, Miller, Miller, Miller, Millhouse, Mills, Milotti, Minazzoli, Minenkov, Mio, Mir, Miravet-Tenés, Mishra, Mishra, Mistry, Mitra, Mitrofanov, Mitselmakher, Mittleman, Miyakawa, Miyamoto, Miyazaki, Miyo, Miyoki, Mo, Modafferi, Moguel, Mogushi, Mohapatra, Mohite, Molina, Molina-Ruiz, Mondin, Montani, Moore, Moraru, Morawski, More, Moreno, Moreno, Mori, Morisaki, Moriwaki, Morrás, Mours, Mow-Lowry, Mozzon, Muciaccia, Mukherjee, Mukherjee, Mukherjee, Mukherjee, Mukherjee, Mukund, Mullavey, Munch, Muñiz, Murray, Musenich, Muusse, Nadji, Nagano, Nagano, Nagar, Nakamura, Nakano, Nakano, Nakashima, Nakayama, Napolano, Nardecchia, Narikawa, Naticchioni, Nayak, Nayak, Negishi, Neil, Neilson, Nelemans, Nelson, Nery, Neubauer, Neunzert, Ng, Ng, Nguyen, Nguyen, Nguyen, Nguyen Quynh, Ni, Nichols, Nishizawa, Nissanke, Nitoglia, Nocera, Norman, North, Nozaki, Nuño Siles, Nuttall, Oberling, O'Brien, Obuchi, O'Dell, Oelker, Ogaki, Oganesyan, Oh, Oh, Oh, Ohashi, Ohishi, Ohkawa, Ohme, Ohta, Okada, Okutani, Okutomi, Olivetto, Oohara, Ooi, Oram, O'Reilly, Ormiston, Ormsby, Ortega, O'Shaughnessy, O'Shea, Oshino, Ossokine, Osthelder, Otabe, Ottaway, Overmier, Pace, Pagano, Page, Pagliaroli, Pai, Pai, Palamos, Palashov, Palomba, Pan, Pan, Panda, Pang, Pang, Pankow, Pannarale, Pant, Panther, Paoletti, Paoli, Paolone, Parisi, Park, Park, Parker, Pascucci, Pasqualetti, Passaquieti, Passuello, Patel, Pathak, Patricelli, Patron, Paul, Payne, Pedraza, Pegoraro, Pele, Peña Arellano, Penn, Perego, Pereira, Pereira, Perez, Périgois, Perkins, Perreca, Perriès, Petermann, Petterson, Pfeiffer, Pham, Phukon, Piccinni, Pichot, Piendibene, Piergiovanni, Pierini, Pierro, Pillant, Pillas, Pilo, Pinard, Pinto, Pinto, Piotrzkowski, Piotrzkowski, Pirello, Pitkin, Placidi, Planas, Plastino, Pluchar, Poggiani, Polini, Pong, Ponrathnam, Popolizio, Porter, Poulton, Powell, Pracchia, Pradier, Prajapati, Prasai, Prasanna, Pratten, Principe, Prodi, Prokhorov, Prosposito, Prudenzi, Puecher, Punturo, Puosi, Puppo, Pürrer, Qi, Quetschke, 
Quitzow-James, Qutob, Raab, Raaijmakers, Radkins, Radulesco, Raffai, Rail, Raja, Rajan, Ramirez, Ramirez, Ramos-Buades, Rana, Rapagnani, Rapol, Ray, Raymond, Raza, Razzano, Read, Rees, Regimbau, Rei, Reid, Reid, Reitze, Relton, Renzini, Rettegno, Reza, Rezac, Ricci, Richards, Richardson, Richardson, Riemenschneider, Riles, Rinaldi, Rink, Rizzo, Robertson, Robie, Robinet, Rocchi, Rodriguez, Rolland, Rollins, Romanelli, Romano, Romel, Romero-Rodríguez, Romero-Shaw, Romie, Ronchini, Rosa, Rose, Rosińska, Ross, Rowan, Rowlinson, Roy, Roy, Roy, Rozza, Ruggi, Ruiz-Rocha, Ryan, Sachdev, Sadecki, Sadiq, Sago, Saito, Saito, Sakai, Sakai, Sakellariadou, Sakuno, Salafia, Salconi, Saleem, Salemi, Samajdar, Sanchez, Sanchez, Sanchez, Sanchis-Gual, Sanders, Sanuy, Saravanan, Sarin, Sassolas, Satari, Sathyaprakash, Sato, Sato, Sauter, Savage, Sawada, Sawant, Sawant, Sayah, Schaetzl, Scheel, Scheuer, Schiworski, Schmidt, Schmidt, Schnabel, Schneewind, Schofield, Schönbeck, Schulte, Schutz, Schwartz, Scott, Scott, Seglar-Arroyo, Sekiguchi, Sekiguchi, Sellers, Sengupta, Sentenac, Seo, Sequino, Sergeev, Setyawati, Shaffer, Shahriar, Shams, Shao, Sharma, Sharma, Shawhan, Shcheblanov, Shibagaki, Shikauchi, Shimizu, Shimoda, Shimode, Shinkai, Shishido, Shoda, Shoemaker, Shoemaker, ShyamSundar, Sieniawska, Sigg, Singer, Singh, Singh, Singha, Sintes, Sipala, Skliris, Slagmolen, Slaven-Blair, Smetana, Smith, Smith, Soldateschi, Somala, Somiya, Son, Soni, Soni, Sordini, Sorrentino, Sorrentino, Sotani, Soulard, Souradeep, Sowell, Spagnuolo, Spencer, Spera, Srinivasan, Srivastava, Srivastava, Staats, Stachie, Steer, Steinhoff, Steinlechner, Steinlechner, Stevenson, Stops, Stover, Strain, Strang, Stratta, Strunk, Sturani, Stuver, Sudhagar, Sudhir, Sugimoto, Suh, Sullivan, Sullivan, Summerscales, Sun, Sun, Sunil, Sur, Suresh, Sutton, Suzuki, Suzuki, Swinkels, Szczepańczyk, Szewczyk, Tacca, Tagoshi, Tait, Takahashi, Takahashi, Takamori, Takano, Takeda, Takeda, Talbot, Talbot, Tanaka, Tanaka, Tanaka, Tanaka, Tanaka, Tanasijczuk, Tanioka, Tanner, Tao, Tao, Tapia San Martín, Taranto, Tasson, Telada, Tenorio, Terhune, Terkowski, Thirugnanasambandam, Thomas, Thomas, Thomas, Thompson, Thondapu, Thorne, Thrane, Tiwari, Tiwari, Tiwari, Toivonen, Toland, Tolley, Tomaru, Tomigami, Tomura, Tonelli, Torres-Forné, Torrie, Tosta e Melo, Töyrä, Trapananti, Travasso, Traylor, Trevor, Tringali, Tripathee, Troiano, Trovato, Trozzo, Trudeau, Tsai, Tsai, Tsang, Tsang, Tsao, Tse, Tso, Tsubono, Tsuchida, Tsukada, Tsuna, Tsutsui, Tsuzuki, Turbang, Turconi, Tuyenbayev, Ubhi, Uchikata, Uchiyama, Udall, Ueda, Uehara, Ueno, Ueshima, Unnikrishnan, Uraguchi, Urban, Ushiba, Utina, Vahlbruch, Vajente, Vajpeyi, Valdes, Valentini, Valsan, van Bakel, van Beuzekom, van den Brand, Van Den Broeck, Vander-Hyde, van der Schaaf, van Heijningen, Vanosky, van Putten, van Remortel, Vardaro, Vargas, Varma, Vasúth, Vecchio, Vedovato, Veitch, Veitch, Venneberg, Venugopalan, Verkindt, Verma, Verma, Veske, Vetrano, Viceré, Vidyant, Viets, Vijaykumar, Villa-Ortega, Vinet, Virtuoso, Vitale, Vo, Vocca, von Reis, von Wrangel, Vorvick, Vyatchanin, Wade, Wade, Wagner, Walet, Walker, Wallace, Wallace, Walsh, Wang, Wang, Wang, Ward, Warner, Was, Washimi, Washington, Watchi, Weaver, Webster, Weinert, Weinstein, Weiss, Weller, Weller, Wellmann, Wen, Weßels, Wette, Whelan, White, Whiting, Whittle, Wilken, Williams, Williams, Williams, Williamson, Willis, Willke, Wilson, Winkler, Wipf, Wlodarczyk, Woan, Woehler, Wofford, Wong, Wu, Wu, Wu, Wu, Wysocki, Xiao, Xu, Yamada, Yamamoto, 
Yamamoto, Yamamoto, Yamamoto, Yamashita, Yamazaki, Yang, Yang, Yang, Yang, Yang, Yap, Yeeles, Yelikar, Ying, Yokogawa, Yokoyama, Yokozawa, Yoo, Yoshioka, Yu, Yu, Yuzurihara, Zadrożny, Zanolin, Zeidler, Zelenova, Zendri, Zevin, Zhan, Zhang, Zhang, Zhang, Zhang, Zhang, Zhao, Zhao, Zhao, Zhao, Zheng, Zhou, Zhou, Zhu, Zhu, Zimmerman, Zlochower, Zucker, & Zweizig]LIGO2021 The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, et al. 2021, arXiv e-prints, arXiv:2111.03606. 2111.03606 [Thorne & Zytkow(1975)]TZ1975 Thorne, K. S., & Zytkow, A. N. 1975, , 199, L19, 10.1086/181839 [Toonen et al.(2021)Toonen, Boekholt, & Portegies Zwart]Toonen2021 Toonen, S., Boekholt, T. C. N., & Portegies Zwart, S. 2021, arXiv e-prints, arXiv:2108.04272. 2108.04272 [Trani et al.(2022)Trani, Rastello, Di Carlo, Santoliquido, Tanikawa, & Mapelli]Trani2022 Trani, A. A., Rastello, S., Di Carlo, U. N., et al. 2022, , 511, 1362, 10.1093/mnras/stac122 [Trani & Spera(2023)]Trani2023 Trani, A. A., & Spera, M. 2023, IAU Symposium, 362, 404, 10.1017/S1743921322001818 [von Zeipel(1910)]Zeipel1910 von Zeipel, H. 1910, Astronomische Nachrichten, 183, 345, 10.1002/asna.19091832202 [White(1939)]White1939 White, T. 1939, The Once and Future King (London: Collins) [Yamaguchi et al.(2018)Yamaguchi, Kawanaka, Bulik, & Piran]Yamaguchi2018 Yamaguchi, M. S., Kawanaka, N., Bulik, T., & Piran, T. 2018, , 861, 21, 10.3847/1538-4357/aac5ec
http://arxiv.org/abs/2307.02712v1
20230706012601
Multi-Similarity Contrastive Learning
[ "Emily Mu", "John Guttag", "Maggie Makar" ]
cs.LG
[ "cs.LG" ]
Multi-Similarity Contrastive Learning Emily Mu Massachusetts Institute of Technology John Guttag Massachusetts Institute of Technology Maggie Makar University of Michigan ================================================================================================================================================================================== empty Given a similarity metric, contrastive methods learn a representation in which examples that are similar are pushed together and examples that are dissimilar are pulled apart. Contrastive learning techniques have been utilized extensively to learn representations for tasks ranging from image classification to caption generation. However, existing contrastive learning approaches can fail to generalize because they do not take into account the possibility of different similarity relations. In this paper, we propose a novel multi-similarity contrastive loss (MSCon), that learns generalizable embeddings by jointly utilizing supervision from multiple metrics of similarity. Our method automatically learns contrastive similarity weightings based on the uncertainty in the corresponding similarity, down-weighting uncertain tasks and leading to better out-of-domain generalization to new tasks. We show empirically that networks trained with MSCon outperform state-of-the-art baselines on in-domain and out-of-domain settings. § INTRODUCTION Contrastive methods learn embeddings by pushing similar examples together and pulling dissimilar examples apart. Embeddings trained using contrastive learning have been shown to achieve state-of-the-art performance on a variety of computer vision tasks <cit.>. In contrastive learning, representations are trained to discriminate pairs of similar images (positive examples) from a set of dissimilar images (negative examples). Supervised contrastive learning approaches consider all instances with the same label to be positive examples and all examples with different labels to be negative examples <cit.>. Existing contrastive learning methods can fail to generalize because the learned embeddings are too simplistic, reflecting limited similarities between different examples. This limitation exists because current contrastive learning methods only consider a single way of defining similarity between examples. In settings where multiple notions of similarity are available, relying on only one notion of similarity represents a missed opportunity to learn more general representations <cit.>. A challenge of generalizing to multiple notions is that training using multiple tasks adds complexity when tasks have different levels of uncertainty. Incorporating noisy similarity measures can lead to worse generalization performance. In multi-task and meta learning, it has been demonstrated that assigning different weights based upon relative task uncertainty can help models focus on tasks with low uncertainty, potentially leading to better classification accuracy and generalization towards new tasks and datasets <cit.>. In this work, we propose multi-similarity contrastive loss (MSCon), a novel loss function that utilizes supervision from multiple similarity metrics and learns to down-weight more uncertain similarities. Throughout, we will use shoe classification as a motivating example. Each of the shoes in Figure <ref> is associated with distinct category, closure, and gender attributes. 
For example, images 1 and 2 are similar in category but are dissimilar in closure and gender, while images 2 and 3 are similar in gender but dissimilar in category and closure. We refer to such a dataset as a multi-similarity dataset. Other examples of multi-similarity datasets include multiple disease labels associated with chest radiographs <cit.> and relational tables associated with website text <cit.>. For convenience, we will refer to the similarity function induced by the labels of a task as the similarity metric of that task. Suppose we are training a model using all three tasks: category, closure, and gender. Closure might be a task with low noise and low uncertainty, while gender might be a task with high noise and higher uncertainty. We find that our approach learns a higher weight for closure than for gender, ensuring that the model focuses more on closure during training. Our framework is shown in Figure <ref>. MSCon uses multiple projection heads to learn embeddings based on different metrics of similarity. In this way, we are able to represent examples that are positive examples in one projected subspace and negative examples in a different projected subspace. Additionally, we model similarity-dependent uncertainty by first constructing a pseudo-likelihood function. Since our contrastive loss uses a non-parametric approach to learn the similarities between two inputs, we use the pseudo-likelihood function to approximate label uncertainty in the learned similarity spaces. We then learn a weighting parameter for each similarity metric that maximizes this pseudo-likelihood. In extensive experiments, we show that our weighting scheme allows models to learn to down-weight more uncertain similarity metrics, which leads to better generalization of the learned representation to novel tasks. We also show that embeddings trained with our multi-similarity contrastive loss outperform embeddings trained with traditional self-supervised and supervised contrastive losses on two multi-similarity datasets. Finally, we show that embeddings trained with MSCon generalize better to out-of-domain tasks than do embeddings trained with multi-task cross-entropy. Our main contributions are: * We propose a novel multi-similarity contrastive learning method for utilizing supervision based on multiple metrics of similarity. * We propose a weighting scheme to learn robust embeddings in the presence of possibly uncertain similarities induced by noisy tasks. Our weighting scheme learns to down-weight uninformative or uncertain tasks leading to better out-of-distribution generalization. * We empirically demonstrate that a network trained with our multi-similarity contrastive loss performs well for both in-domain and out-of-domain tasks and generalizes better than multi-task cross-entropy methods to out-of-domain tasks. § RELATED WORK §.§ Contrastive Representation Learning Our work draws from existing literature in contrastive representation learning. Many of the current state-of-the-art vision and language models are trained using contrastive losses <cit.>. Self-supervised contrastive learning methods, such as MoCo and SimCLR, maximize agreement between two different augmentations or views of the same image <cit.>. Recently, vision-language contrastive learning has allowed dual-encoder models to pretrain with hundreds of millions of image-text pairs <cit.>. The resulting learned embeddings achieve state-of-the-art performance on many vision and language benchmarks <cit.>. 
Supervised contrastive learning, SupCon, allows contrastive learning to take advantage of existing labels <cit.>. Contrastive learning has also been adapted to learn from both labels and text <cit.> and from hierarchies of labels <cit.>. The method most similar to ours is conditional similarity networks <cit.>. In conditional similarity networks, masks are learned or assigned to different embedding dimensions with respect to different metrics of similarity. These masks are learned jointly with the convolutional neural network parameters during training time. Conditional similarity networks differ from our work in two major ways. First, unlike our work, conditional similarity networks uses triplet loss, a specialized version of contrastive loss. At training time, it requires triplets based on each similarity metric. Second, we automatically learn separate projection spaces and weights for each metric of similarity, whereas they learn a linear transformation from the embedding space for each similarity and do not consider weighting metrics. As we show in Section <ref>, our multiple similarity contrastive networks consistently outperforms conditional similarity networks. §.§ Multi-Task Learning Multi-task learning aims to simultaneously learn multiple related tasks and often outperforms learning each task alone <cit.>. However, if tasks are weighted improperly during training, the performance on some tasks suffer. Various learned task weighting methods have been proposed for multi-task learning in the vision and language domains <cit.>. These methods learn task weightings based on different task characteristics in order to improve the generalization performance towards novel tasks <cit.>. This is done by regularizing the task variance using gradient descent <cit.> or by using adversarial training to divide models into task-specific and generalizable parameters <cit.>. Overwhelmingly, these methods are built for multiple tasks trained with likelihood-based losses, such as regression and classification. One of the most popular of these methods models task uncertainty to determine task-specific weighting and automatically learns weights to balance this uncertainty <cit.>. In our work, we adapt automatically learned task weighting to our multi-similarity contrastive loss by predicting similarity uncertainty. This is not straightforward since the contrastive loss is trained in a pairwise fashion and there is a lack of absolute labels in the learned output (a set of embedding vectors) <cit.>. §.§ Uncertainty in Contrastive Learning Adapting uncertainty estimation techniques to contrastive learning remains an active area of research. This is because contrastive learning learns abstract embedding vectors rather than absolute labels and because of the pairwise training of contrastive models. Given access to training data and labels, previous work has proposed estimating the density and consistency of the hypersphere embedding space distribution as metrics to estimate uncertainty <cit.>. The density of the embedding space at a point captures the amount of data the model has observed during training, and can serve as a proxy for epistemic, or model, uncertainty <cit.>. The consistency of the embedding space at a point uses k-nearest-neighbors to measure the extent to which the training data mapped closest to that point have consistent labels, and can serve as a proxy for aleatoric, or data-dependent, uncertainty <cit.>. 
Other recent work proposes learned temperature as a metric of heteroscedastic, or input-dependent, uncertainty to identify out-of-distribution data for labeled datasets <cit.>. To our knowledge, we are the first to model similarity-dependent uncertainty, i.e., the relative confidence between different training tasks, in the contrastive setting. § METHOD §.§ Multi-Similarity Setup We assume that during training we have access to a dataset 𝒟 = {x_i, Y_i}_{i=1}^M, where x_i is an image and Y_i = {y_i^1, ..., y_i^C} are distinct categorical attributes associated with the image. We aim to learn an embedding function f: x → ℝ^d that maps x to an embedding space. We define h_i = f(x_i) to be the embedding of x_i. In the typical contrastive training setup, training proceeds by selecting a batch of N randomly sampled data {x_i}_{i=1,...,N}. We randomly sample two distinct label-preserving augmentations (e.g., from rotations, crops, flips) for each x_i, (x̃_2i and x̃_2i-1), to construct 2N augmented samples, {x̃_j}_{j=1,...,2N}. Let A(i) = {1, ..., 2N} ∖ {i} be the set of all samples and augmentations not including i. We define g to be a projection head that maps the embedding to the similarity space represented as the surface of the unit sphere 𝕊^d = {v ∈ ℝ^d: ||v||_2=1}. Finally, we define v_i = g(h_i) as the mapping of h_i to the projection space. Supervised contrastive learning uses labels to implicitly define the positive sets of examples. Specifically, supervised contrastive learning encourages samples with the same label to have similar embeddings and samples with a different label to have different embeddings. We follow the literature in referring to samples with the same label as an image i as the positive samples, and samples with a different label than that of i as the negative samples. Supervised contrastive learning (SupCon) <cit.> proceeds by minimizing the loss: L^supcon = ∑_{i ∈ I} (-1/|P(i)|) ∑_{p ∈ P(i)} log[ exp(v_i^T v_p/τ) / ∑_{a ∈ A(i)} exp(v_i^T v_a/τ) ], where |S| denotes the cardinality of the set S, P(i) denotes the positive set containing all other samples with the same label as x_i, i.e., P(i) = {j ∈ A(i): y_j = y_i}, I denotes the set of all samples in a particular batch, and τ ∈ (0, ∞) is a temperature hyperparameter. In contrast to SupCon, our multi-similarity contrastive (MSCon) approach proceeds by jointly training an embedding space using multiple notions of similarity. We do so by training the embedding with multiple projection heads g^c that map the embedding to C projection spaces, where each space distinguishes the image based on a different similarity metric. We define v^c_i = g^c(h_i) to be the mapping of h_i to the projection space by projection head g^c. Because each projection space is already normalized, we assume that each similarity loss is similarly scaled. We define the multi-similarity contrastive loss to be the summation of the supervised contrastive loss over all conditions, L^mscon = ∑_{c ∈ C} ∑_{i ∈ I} L^mscon_c,i, where each conditional L^mscon_c,i is defined as in equation <ref>. Specifically, L^mscon_c,i = -1/|P^c(i)| ∑_{p ∈ P^c(i)} log[ exp(v_i^cT v^c_p/τ) / ∑_{a ∈ A(i)} exp(v_i^cT v^c_a/τ) ], where P^c(i) is defined as the positive set under similarity c such that for all j ∈ P^c(i), y_j^c = y_i^c. §.§ Contrastive Task Weighting In the above formulation of our multi-similarity contrastive loss function, each similarity is weighted equally.
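As a point of reference for the weighting scheme developed below, the following is a minimal PyTorch-style sketch of this equally weighted objective. It assumes the batch embeddings, the C projection heads, and the per-similarity labels are already available; all function and variable names are illustrative rather than taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def supcon_loss(v, y, tau=0.1):
    """Supervised contrastive loss for a single similarity metric.

    v: (2N, d) L2-normalised projections of a batch and its augmentations.
    y: (2N,)   labels of those samples under this similarity metric.
    """
    sim = v @ v.t() / tau                                   # pairwise similarities scaled by 1/tau
    self_mask = torch.eye(v.size(0), dtype=torch.bool, device=v.device)
    sim = sim.masked_fill(self_mask, -1e9)                  # A(i) excludes the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask   # positive set P(i)
    # Average log-probability over each anchor's positive set
    # (anchors with no positives contribute zero).
    loss_per_anchor = -(log_prob * pos_mask.float()).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss_per_anchor.sum()

def mscon_loss(h, proj_heads, labels, tau=0.1):
    """Equally weighted MSCon: sum of per-similarity SupCon losses.

    h:          (2N, D) embeddings from the shared encoder f.
    proj_heads: list of C projection heads g^c (e.g. small MLPs).
    labels:     list of C label tensors y^c, each of shape (2N,).
    """
    total = 0.0
    for g_c, y_c in zip(proj_heads, labels):
        v_c = F.normalize(g_c(h), dim=1)                    # project onto the unit hypersphere
        total = total + supcon_loss(v_c, y_c, tau)
    return total
```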
However, previous work in multi-task learning for both vision and language have demonstrated that model performance can deteriorate when one or more of the tasks is noisy or uncertain. One way to tackle this is to learn task weights based on the uncertainty of each task. However, model performance can be sensitive to weight selection <cit.>, and manually searching for optimal weightings is expensive in both computation and time. Previous work has suggested using irreducible uncertainty of task predictions in a weighting scheme. For example, tasks where predictions are more uncertain are weighted lower because they are less informative<cit.>. Such notions of uncertainty are typically predicated on an assumed parametric likelihood of a label given inputs. However, this work is not easily adapted to multi-similarity contrastive learning because 1) contrastive training does not directly predict downstream task performance and 2) the confidence in different similarity metrics has never been considered in this setting. In contrastive learning, the estimate of interest is a similarity metric between different examples rather than a predicted label, which means that downstream task performance is not directly predicted by training results. Furthermore, previous work in contrastive learning has only focused on modeling data-dependent uncertainty, or how similar a sample is to negative examples within the same similarity metric. To our knowledge, we are the first to utilize uncertainty in the training tasks and their corresponding similarity metrics as a basis for constructing a weighting scheme for multi-similarity contrastive losses. We do this in two ways: 1) we construct a pseudo-likelihood function approximating task performance and 2) we introduce a similarity dependent temperature parameter to model relative confidence between different similarity metrics. We present an extension to the contrastive learning paradigm that enables estimation of the uncertainty in similarity metrics. In addition to providing useful information about the informativeness of each similarity metric, our estimate of uncertainty enables us to weight the different notions of similarity such that noisy notions of similarity are weighted lower than more reliable notions. Our approach proceeds by constructing a pseudo-likelihood function which approximates task performance. We show in the supplement that maximizing our pseudo-likelihood also maximizes our MSCon objective function. This pseudo-likelihood endows the approach with a well-defined notion of uncertainty that can then be used to weight the different similarities. Let v_i^c be the model projection head output for similarity c for input x_i. Let Y^c be the cth column in Y. We define P^c_y = {x_j ∈𝒟 : Y^c_j = y} to be the positive set for label y under similarity metric c. We define the classification probability p(y|v_i^c, D, τ) as the average distance of the representation v_i^c from all representations for inputs conditioned on the similarity metric. Instead of directly optimizing equation <ref>, we can maximize the following pseudo-likelihood: p(y|v_i^c, D, τ) ∝1/|P^c_y|∑_p ∈ P^c_yexp(v_i^cT v_p^c/τ). Note that optimizing <ref> is equivalent to optimizing <ref> by applying Jensen's inequality (as shown in the supplement). By virtue of being a pseudo-likelihood, equation <ref> provides us with a well-defined probability associated with downstream task performance that we can use to weight the different tasks. 
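To make this concrete, the sketch below reads such a class probability directly off the projection space by averaging exponentiated similarities over each class's positive set and normalising. It is only a schematic rendering of the pseudo-likelihood above, with illustrative names, and assumes access to the projected training data under similarity metric c.

```python
import torch

def pseudo_likelihood(v_query, v_bank, y_bank, tau=0.1):
    """Class probabilities under one similarity metric c, following
    p(y | v_i^c) ∝ (1/|P^c_y|) * sum_{p in P^c_y} exp(v_i^c · v_p^c / tau).

    v_query: (d,)   normalised projection of the query example.
    v_bank:  (M, d) normalised projections of the training data under metric c.
    y_bank:  (M,)   labels of the training data under metric c.
    """
    sims = torch.exp(v_bank @ v_query / tau)        # exponentiated similarity to every training example
    classes = y_bank.unique()
    scores = torch.stack([sims[y_bank == k].mean() for k in classes])  # average over each positive set
    probs = scores / scores.sum()                   # normalise away the proportionality constant
    return classes, probs
```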
We will next outline how to construct this uncertainty from the pseudo-likelihood defined in equation <ref>. We assume that v^c is a sufficient statistic for y^c, meaning that y^i is independent of all other variables conditional on v^i. Such an assumption is not unrealistic, it simply reflects the notion that v^c is an accurate estimation for y^c. Under this assumption the pseudo-likelihood expressed in <ref> factorizes as follows: p(y^1, ... y^C|v_i^1, ... v_i^C, D, τ) = p(y^1|v_i^1, D, τ) ... p(y^C|v_i^C, D, τ). Previous work in contrastive learning modifies the temperature to learn from particularly difficult data examples <cit.>. Inspired by this, we adapt the contrastive likelihood to incorporate a similarity-dependent scaled version of the temperature. We introduce a parameter σ_c^2 for each similarity metric controlling the scaling of temperature and representing the similarity dependent uncertainty in Equation <ref>. p(y|v_i^c, D, τ, σ_c^2) ∝1/|P^c_y|∑_p ∈ P^c_yexp(v_i^cT v_p^c/τσ_c^2) The negative log-likelihood for this contrastive likelihood can be expressed as Equation <ref>. - log p(y|v_i^c, D, τ, σ_c^2) ∝1/σ_c^2∑_i=I L^mscon_c,i + 2log(σ_c) We provide a detailed derivation of this equation in the supplement. Extending this analysis to consider multiple similarity metrics, we can adapt the optimization objective to learn weightings for each similarity as in Equation <ref>. argmin_f, g_1, ... g_C, σ_1, ... σ_C (∑_c ∈ C (1/σ_c^2∑_i=I L^mscon_c,i + 2log(σ_c))) During training, we learn the σ_c weighting parameters through gradient descent. § EXPERIMENTS In this section, we evaluate the performance of our approach: 1) under varying levels of uncertainty in similarity metrics induced by varying levels of task noise and 2) across in-domain and out-of-domain classification tasks. We show that our multi-similarity contrastive loss significantly outperforms existing self-supervised and single-task supervised contrastive networks and outperforms multi-task cross-entropy networks on novel tasks. We also demonstrate that our method is able to learn to down-weight more uncertain similarities, and that compared to using equal weights, our weighted multi-similarity contrastive loss is more robust to similarity metric uncertainty and generalizes better to novel tasks under increasing uncertainty. §.§ Datasets and Implementation Datasets. We use two datasets: Zappos50k <cit.> and MEDIC <cit.>. Sample images are provided in the supplement. Zappos50k consists of 50,000 136 × 102 images of shoes. We focus our analysis on three tasks: the category of shoe (shoes, boots, sandals, or slippers), the suggested gender of the shoe (for women, men, girls, boys), and the closing mechanism of the shoe (buckle, pull on, slip on, hook and loop, or laced). We fine-tune the embedding space to predict the brand of the shoe for the out-of-domain experiment. We split the images into 70% training, 10% validation, and 20% test sets and resize all images to 112 × 112. MEDIC is the largest multi-task learning disaster-related dataset, extending the CRISIS multi-task image benchmark dataset <cit.>. MEDIC consists of ≈71,000 images of disasters collected from Twitter, Google, Bing, Flickr, and Instagram. 
The dataset includes four disaster-related tasks that are relevant for humanitarian aid: the disaster type (earthquake, fire, flood, hurricane, landslide, other disaster, and not a disaster), the informativeness of the image for humanitarian response (informative or not informative), categories relevant to humanitarian response (having affected, injured, or dead people, infrastructure and utility damage, rescue volunteering or donation effort, and not needing humanitarian response), and the severity of the damage of the event (severe damage, mild damage, and little to no damage). For the out-of-domain analysis, we hold out each task from training and then attempt to predict the hold-out task during evaluation. These tasks were generated from a crowd sourcing annotation platform and the images are split already into 69% training, 9% validation and 22% test sets. All images were resized to 224 × 224. Training Details. Consistent with previous work <cit.>, images are augmented by applying various transformations to increase dataset diversity. We train using standard data augmentations, including random crops, flips, and color jitters. An embedding network consisting of a shared encoder and multiple projection heads is then trained using MSCon with multiple similarity metrics defined by different tasks as shown in Figure <ref>. The resulting vectors are normalized to the unit hypersphere, which allows us to use an inner product to measure distances in the projection space. Zappos50k encoders use ResNet18 backbones with projection heads of size 32. MEDIC encoders use ResNet50 backbones with projection spaces of size 64 <cit.>. All models are pretrained on ImageNet <cit.>. All networks are trained using a SGD with momentum optimizer for 200 epochs with a batch-size of 64 and a learning rate of 0.05, unless otherwise specified. We use a temperature of τ = 0.1. After training the multi-similarity contrastive network, we discard the projection heads and freeze the encoder network. We then evaluate the performance of the embedding network on downstream tasks by training a linear classifier on the embedding features. We train a linear classifier for 20 epochs and evaluate top-1 accuracy. Standard deviations are computed by bootstrapping the test set 1000 times. Additional implementation details can be found in the supplement. We will release code for implementing MSCon. Models. We compare the unweighted and weighted versions of our Multi-Similarity Contrastive Network (MSCon) with the following baselines: * Cross-Entropy Networks (XEnt) We train separate cross-entropy networks with each of the available tasks. We also train a multitask cross-entropy network with all available tasks. We train each network with a learning rate of 0.01. We select the best model using the validation accuracy. * Conditional Similarity Network (CSN) We train a conditional similarity network that learns the convolutional filters, embedding, and mask parameters together. 10,000 triplets are constructed from the similarities available in the training dataset. We follow the training procedure specified in <cit.>. * SimCLR and SupCon Networks We train a self-supervised contrastive network for each dataset and individual supervised contrastive networks with each of the similarity metrics represented in the training dataset. We pretrain with a temperature of 0.1 for all contrastive networks which is the typical temperature used for SimCLR and SupCon <cit.>. 
For evaluation, we fine-tune a classification layer on the frozen embedding space. §.§ Role of Weighting in Achieving Robustness to Task Uncertainty In this subsection, we evaluate the robustness of our learned embeddings to similarity uncertainty. Since the true level of task noise (similarity metric uncertainty) is unobserved, we use a semi-simulated approach, where we simulate uncertain similarities in both the Zappos50k and MEDIC datasets. For the Zappos50k dataset, we train the encoder using the category, closure, and gender similarity metrics. To introduce task uncertainty, we randomly corrupt the closure task by proportion ρ. We randomly sample ρ of the closure labels and randomly reassign the label amongst all possible labels. Note that when ρ = 1.0, all labels are randomly sampled equally from the available closure labels. When ρ = 0.0, all labels are identical to the original dataset. For the MEDIC dataset, we train the encoder using the disaster types, humanitarian, and informative similarity metrics. We corrupt the disaster type task in order to introduce task uncertainty. As ρ increases in Figure <ref>, we find that MSCon learns to down-weight the noisy task for both the Zappos50k and MEDIC datasets. For the Zappos50k dataset, we evaluate the top-1 classification accuracy on an out-of-domain task, brand classification, and on an in-domain task, the corrupted closure classification. Similarly, for the MEDIC dataset, we evaluate the top-1 classification accuracy on an out-of-domain task, damage-severity classification, and on an in-domain task, the corrupted disaster-type classification. Figure <ref> shows the results from this analysis. The top panel shows how the weights change as we change task uncertainty on the x-axis. The middle and bottom panels shows how out-of-domain and in-domain evaluation accuracy changes as we change task uncertainty. As expected, as ρ increases to 1, the in-domain classification accuracy for both the equal-weighted and weighted MSCon learned embeddings decreases to random. However, the out-of-domain classification accuracy for the weighted MSCon learned embeddings is more robust to changes in ρ than the unweighted MSCon learned embeddings. This is because the weighted version of MSCon automatically learns to down-weight uncertain or more uninformative tasks during encoder training. §.§ Classification Performance In this section, we evaluate in- and out-of-domain performance of various methods. We find that our multi-similarity contrastive network significantly outperforms all other contrastive methods on in-domain tasks and outperforms multi-task cross-entropy learning on out-of-domain tasks. We also show how performance changes with variation in hyperparameter selection. More qualitative analysis of the learned similarity subspaces (i.e., TSNE visualizations) can be found in the supplement. In-domain Performance. To evaluate the quality of the learned embedding spaces, we measure top-1 classification accuracy on all tasks for both the Zappos50k and MEDIC datasets. We report the average accuracy and the standard deviation for all tasks in Table <ref> and Table <ref>. For the Zappos50k dataset, MSCon has the highest top-1 classification accuracy of the models. For MEDIC, MSCon out performs all of the contrastive learning techniques on all tasks. However, for three of the tasks, the best performance is achieved by one of cross-entropy methods (but different methods dominate for different tasks). 
We hypothesize that this may be due to the inherent uncertainty of some of the tasks <cit.>. For both datasets, CSN achieves accuracies that are lower than the single-task supervised networks. We believe this is because conditional similarity loss is trained with triplet loss <cit.>, which has been shown to be outperformed by N-pairs loss and supervised contrastive learning for single-task learning <cit.>. Out-of-domain Performance. Here, we test how well different approaches are able to generalize to previously unseen tasks. We compare MSCon to multi-task cross-entropy (XEnt MT). For the Zappos50k dataset, we train embedding spaces with the category, closure, and gender similarity metrics. We then select the top 20 brands in the Zappos dataset with the most examples, and fine-tune a classification layer on the frozen embedding with the brand labels. We report top-1 brand classification accuracy of the fine-tuned network on the test set and the standard deviation in Table <ref>. We find that MSCon significantly improves upon XEnt MT in the out-of-domain setting. More detailed top-1 classification results for all cross-entropy and contrastive networks are provided in the supplement. To evaluate generalization on the MEDIC dataset, we hold out each of the four tasks. We then train an embedding space with the remaining three similarity metrics. Next, we fine-tune a classification layer on the frozen embedding with the hold-out task. Table <ref> reports the top-1 classification accuracy and the standard deviation for the hold-out task on the test set. We observe that, except for the informative task, our approach is able to generalize to new tasks with higher accuracy than the multi-task cross-entropy learned embedding space. We hypothesize that this is because the informative task is the only binary task and the most ambiguous. Hyperparameter Analysis. We test if there exists a specific temperature that leads to optimal performance of MSCon for multiple similarity metrics. In Figure <ref>, we plot the top-1 classification accuracy for each of the category, closure, and gender tasks as a function of pretraining temperature for MSCon. We also plot the top-1 classification accuracy as a function of training epochs. We find that a pretraining temperature of τ=0.1 and training for 200 epochs works well for all tasks. These hyperparameter settings are consistent with optimal hyperparameter settings for SimCLR and SupCon. Note that previous work for SimCLR and SupCon have found the large batch sizes consistently result in better top-1 accuracy <cit.>. We hypothesize that larger batch sizes would also improve performance for MSCon loss. We include hyperparameter analyses on MSCon for the MEDIC dataset in the supplement. § CONCLUSION In this work, we propose multi-similarity contrastive loss (MSCon). Existing contrastive learning methods learn a representation based on a single similarity metric. However, it is often the case that multiple tasks are available, each implying a different similarity metric. We show how to leverage multiple similarity metrics in a contrastive setting to learn embeddings that generalize well to unseen tasks. We additionally extend uncertainty based task weighting to the contrastive framework. We do this by 1) modeling downstream classification performance for each similarity by using a psuedo-likelihood and 2) by representing similarity dependent uncertainty as a temperature scaling factor for each similarity metric. 
We demonstrate that our MSCon learned embeddings outperform all contrastive baselines and generalizes better than multi-task cross-entropy to novel tasks. There are many interesting directions for future work. Firstly, we do not consider data-dependent uncertainty in our framework. It would be interesting to consider what would happen if we have variance in the uncertainty of our input data. Can we account for both similarity-dependent and input-dependent uncertainty? Another interesting direction for future work would be to see if we could incorporate non-categorical labels in our multi-similarity learning scheme. Currently, we define similarity metrics using multiple categorical tasks. However, some applications use continuous metrics of similarity (e.g. heart rate measurements available in patient electronic health record data or heel height associated with shoes). Defining positive and negative examples for continuous variables with different scales is not straightforward. Thus, an interesting follow-up question may be how to incorporate both categorical and continuous similarity metrics under a single contrastive framework. Finally, we note that our method will not necessarily generalize well to any novel task. Sometimes, multi-task learning can degrade performance when models are unable to learn representations that generalize towards all tasks <cit.>. Our work does not address criteria for the selection of tasks for training or evaluation. ieee_fullname
http://arxiv.org/abs/2307.01386v1
20230703224925
Spatial-temporal Graph Based Multi-channel Speaker Verification With Ad-hoc Microphone Arrays
[ "Yijiang Chen", "Chengdong Liang", "Xiao-Lei Zhang" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Spatial-temporal Graph Based Multi-channel Speaker Verification With Ad-hoc Microphone Arrays Yijiang Chen, Chengdong Liang, and Xiao-Lei Zhang Yijiang Chen and Xiao-Lei Zhang are with the School of Marine Science and Technology, Northwestern Polytechnical University, 127 Youyi West Road, Xi'an, Shaanxi 710072, China (e-mail: orangechen@mail.nwpu.edu.cn, xiaolei.zhang@nwpu.edu.cn). Chengdong Liang is currently with Horizon Robotics, Beijing, China. The work was done when Chengdong Liang was with the Northwestern Polytechnical University, China (e-mail: chengdong01.liang@horizon.ai). 9 June 2023 The performance of speaker verification degrades significantly in adverse acoustic environments with strong reverberation and noise. To address this issue, this paper proposes a spatial-temporal graph convolutional network (GCN) method for multi-channel speaker verification with ad-hoc microphone arrays. It includes a feature aggregation block and a channel selection block, both of which are built on graphs. The feature aggregation block fuses speaker features across different time frames and channels by a spatial-temporal GCN. The graph-based channel selection block discards the noisy channels that may contribute negatively to the system. The proposed method is flexible in incorporating various kinds of graphs and prior knowledge. We compared the proposed method with six representative methods in both real-world and simulated environments. Experimental results show that the proposed method achieves a relative equal error rate (EER) reduction of 15.39% over the strongest referenced method on the simulated datasets, and of 17.70% on the real datasets. Moreover, its performance is robust across different signal-to-noise ratios and reverberation times. Far-field speaker verification, ad-hoc microphone arrays, graph convolution networks, channel selection. § INTRODUCTION Speaker verification aims to identify whether a speaker is the target speaker. It finds important applications in privacy protection, identity authentication, smart homes, etc. The research on speaker verification dates back to the 1960s <cit.>, followed by a series of statistical-model-based approaches, such as the Gaussian-mixture-model-based universal background model (GMM-UBM) <cit.> and i-vectors <cit.>. With the rise of the deep learning era, speaker feature extraction with neural networks has become the mainstream <cit.>. Although deep-learning-based speaker verification has achieved significant breakthroughs, far-field speaker verification is still challenging. When a microphone is placed far away from a speaker, the recorded speech signal is not only severely attenuated but also corrupted by background noise, reverberation and other interfering sound sources. Eventually, the performance of a speaker verification system in far-field conditions drops sharply.
To compensate the negative effect caused by noise and reverberation, a common approach for speaker verification is to add a deep-learning-based speech noise reduction front-end <cit.>. For example, Kolboek et al. <cit.> first used a masking-based front-end to compute the posterior probability of a frame vector belonging to a speaker for the GMM-UBM-based speaker verification system. Then, Chang and Wang <cit.> used long short-term memory as a masking-based front-end to estimate clean speech for speaker recognition. Novotny et al. <cit.> learned a mapping from noisy speech to clean speech using an encoder, and subsequently applied the estimated speech to extract segment-level speaker representations. Another class of approaches treat far-field speaker recognition as a domain mismatch problem. It regards clean speech as the source domain and noisy data as the target domain, and solves the domain mismatching problem by domain adaptation methods <cit.>. The aforementioned methods are based on a single-channel microphone, which does not explore important spatial information. To address this issue, multi-channel speaker verification based on fixed arrays utilize azimuth information for further performance improvement. Taherian et al. <cit.> used a deep-learning-based minimum variance distortionless response to get the enhanced speech for speaker verification, where the deep neural network is used to learn a time-frequency mask for estimating the noise component of each channel. Then, they further explored the effect of combining different beamforming front-ends with i-vector/x-vector-based recognition back-ends. Yang and Chang <cit.> jointly optimized the deep-learning-based beamforming and speaker verification. Cai et al.<cit.> took the multichannel noisy speech as the input of a two-dimensional-convolutional-neural-network-based end-to-end speaker recognition directly, which yields lower equal error rate (EER) than single-channel speaker recognition systems. He et al. <cit.> extracted vocal pattern information and orientation information simultaneously by a multi-channel front-end, and applied the orientation information to a direction-of-arrival (DOA) estimation for speaker identification. Similar works which combine other orientation information for speaker identification were also developed in <cit.>. Wang et al.<cit.> obtained spatially encoded s-vectors by DOA estimation of the multichannel input, and identified speakers according to the similarity matrix of the s-vectors and x-vectors. It is worthy noting that the above multi-channel methods used fixed arrays with small array apertures. When a speaker is far from the array, then serious signal attenuation and strong interference of reverberation are still hard to prevent. To reduce the occurrence probability of extremely hard far-field problems, grouping multiple distributed devices together as an ad-hoc microphone array becomes a new type of effective methods, where each device in the ad-hoc microphone array, denoted as an ad-hoc node, contains either a single microphone or a conventional fixed microphone array. Compared to the fixed arrays, a key advantage of the deep-learning-based ad-hoc microphone array processing is that it is able to utilize spatial distance information via channel reweigting and selection. An early work used a deep neural network to estimate the signal-to-noise ratio at each randomly placed microphone array for the channel reweighting and selection <cit.>. However, the channel selection approach is not optimized. 
Recently, many works explored advanced channel selection approaches for speech enhancement <cit.>, speech separation <cit.>, speech recognition <cit.>, and speaker recognition <cit.>. Particularly, Liang et al. <cit.> and Cai et al. <cit.> independently proposed end-to-end speaker verification with ad-hoc microphone arrays, where an inter-channel attention-based channel reweighting method was developed to fuse utterance-level speaker features from all channels. However, the spatial-temporal connection between the nodes were not fully explored by simply the attention mechanism. Furthermore, because the ad-hoc nodes that are far away from speech sources might be too noisy to contribute negatively to the system, taking all channels into account may not be the best choice. Recently, some methods explored graphs to reweight and fuse the multi-channel signals collected by distributed microphone arrays for speech enhancement <cit.>. However, they simply used complete graphs without further exploring different kinds of graphs that can incorporate flexible prior knowledge. Their effectiveness was not studied in speaker verification as well. To fully exploit the spatial-temporal information, in this paper, we propose an end-to-end multichannel speaker verification framework based on graph convolutional networks (GCN). It includes a graph-based spatial-temporal multi-channel feature aggregation block and a graph-based channel selection block. The former learns to enhance the speaker characteristics of each frame of a channel by aggregating spatial-temporal information from its neighboring frames and channels, while the latter discards strongly-noisy channels to further improve the performance. The core contributions of the paper are as follows. * A graph-based spatial-temporal aggregation framework is proposed for multi-channel speaker verification with ad-hoc microphone arrays. Unlike existing works <cit.> which regard microphones as vertices of a graph for noise reduction, the proposed method takes both channels and time frames as vertices of a graph for speaker verification. It can capture the relationship between time frames across channels and thus has stronger modeling capability than simply modeling the relationship between channels. * Several spatial-temporal GCNs that not only accelerate the modeling process significantly but also are flexible in incorporating prior knowledge are proposed under the framework. Unlike the methods <cit.> which focus on applying graph neural networks (GNNs) without exploring adjacent matrices of graphs, this paper constructs adjacent matrices of graphs that are flexible in utilizing more abundant prior information, such as the spatial labeling information of datasets, temporal labeling information of speakers. Moreover, this paper describes two efficient backbone spatial-temporal GCNs. Experimental results showed that proposed methods outperform six representative comparison methods significantly in highly-reverberant and low signal-to-noise ratio environments on both a simulated dataset and two real-world datasets. This paper differs from our preliminary work <cit.> in several major aspects, which includes the design of the channel selection block and various methods to construct graphs (but not in <cit.>), which improves the performance over <cit.> vitally. Consequently, many new experimental scenarios were studied beyond that in <cit.>. The rest of the paper is organized as follows. 
Section <ref> briefly reviews the research progress of graph neural networks and introduces some preliminaries of modeling ad-hoc microphone arrays with graph neural networks. Section <ref> introduces the proposed framework. Detailed description of the proposed algorithm is presented in Section <ref>, <ref>, <ref> respectively. Section <ref> describes the experimental settings. Section <ref> reports experimental results. Finally, Section <ref> concludes the paper. § RELATED WORK Graph is expressed as G=(𝒱, ℰ), where 𝒱 represents a set of nodes, and ℰ⊆𝒱×𝒱 represents a set of edges between nodes. Given a node v ∈𝒱, its neighboring nodes are defined as 𝒩(v) = {u ∈𝒱 | e_vu∈ℰ}, where e_vu means that the node v is connected with its neighbor node u. The graphs with attributes are called attributed graphs. In GNNs, the input of an attributed-graph is expressed as (𝐗_𝒱, 𝐗_ℰ, 𝐀), where 𝐗_𝒱∈ℝ^D_𝒱 denotes the attributes of nodes, 𝐗_ℰ∈ℝ^D_ℰ denotes the attributes of edges, D_𝒱 and D_ℰ represent the feature dimensions of the nodes and edges respectively. The adjacent matrix 𝐀∈ℝ^|𝒱|×|𝒱| contains the edges between any pairs of nodes in the graph. If the node v and node u are connected, the uth row and vth column of 𝐀 are set to 1. Existing studies on GNNs can be divided into the following four categories <cit.>: recurrent graph neural networks (RecGNNs), convolutional graph neural networks (ConvGNNs), graph autoencoders (GAEs), and spatial-temporal graph neural networks (STGNNs), all of which can be trained at the node level, edge level or graph level, given (𝐗_𝒱, 𝐗_ℰ, 𝐀) as the inputs. RecGNNs learn node representations of graph data iteratively by using the same graph recurrent layer until the representation reaches a stable resolution. ConvGNNs iteratively learn node representations using different graph convolutional layers. It can be further categorized into the spectral-based and spatial-based ones. The convolution in spectral-based ConvGNNs has strong theoretical basis in graph signal processing <cit.>, while the graph convolution in the spatial-based methods is defined by the information propagation between the center nodes and its neighbors. The spatial-based ConvGNNs inherently relate to the spectral-based ones <cit.>, and are widely used in real-world scenarios in recent years <cit.>. GAEs encode graph data into latent vectors, and then reconstruct the graphs from the latent representations. Different from the above approaches that are built on static graphs, STGNNs aim to learn the spatial-temporal dependencies of time-varying data over dynamic graphs, which finds applications in action detection <cit.>, 3D point clouds processing <cit.>, and traffic forecasting <cit.>. Common types of graphs include: * Directed graphs versus undirected graphs: According to whether the relationship between a pair of nodes is bi-directional, graphs can be divided into directed graphs and undirected graphs. For a directed graph, the edge from node v to node u is different from the edge from node u to node v, and therefore 𝐀[v,u] is unnecessarily equal to 𝐀[u,v] in the adjacent matrix. A graph is undirected if and only if 𝐀 is symmetric. * Dense graphs versus sparse graphs: According to the denseness of the edges in a graph, graphs can be classified into sparse graphs and dense graphs. In a dense graph, each node tends to be linked to any other nodes. A special case of dense graphs is the complete graph, in which every node takes all other nodes as neighbors. 
In a sparse graph, most nodes are not mutually connected. * Static graphs versus dynamic graphs: According to whether the nodes or edges of a graph change over time, graphs can be classified into static graphs and dynamic graphs. If all variables of a graph do not change over time, then it is a static graph. A graph is dynamic if its nodes or edges change over time, which can be denoted as G=(𝒱^(t), ℰ^(t)), ∀ t=1,2,…, T, where 𝒱^(t) and ℰ^(t) are the nodes and edges respectively of the graph at time t. The attributes of the graph can be represented as (𝐗^(t)_𝒱, 𝐗^(t)_ℰ, 𝐀^(t)). The proposed method is flexible in incorporating various kinds of graphs. In this paper, we will discuss the applications of the undirected graph, dense graph, sparse graph, and static graph to ad-hoc microphone arrays. Particularly, unlike STGNNs <cit.> which model time-varying data via dynamic graphs, the proposed method models the time-dependency between frames via static graphs, so that the overall spatial-temporal data can be modeled via static graphs as well. This novel graph modeling method on time-varying data is simpler and has less variables to be estimated than STGNN. § FRAMEWORK As shown in Fig. <ref>, an ad-hoc microphone array of C randomly distributed devices are placed around a speaker. In this paper, we consider a situation where each device contains a single microphone. With the interference of reverberation and additive noise, the signal collected from the single channel of a device can be formulated as: x_c(t)=r_c(t)*s_c(t)+n_c(t) , ∀ c=1,2,…,C where s_c(t) and n_c(t) denote the clean speech and additive noise of the cth channel respectively, the symbol “*” denotes the convolution operation, and r_c(t) denotes the room impulse function (RIR). §.§ Two-stage training Given that the data collected from single-channel devices are much more sufficient than that from ad-hoc microphone arrays, we adopt a two-stage training strategy as in <cit.>, so as to prevent the model overfitting to the small-scale data collected from the ad-hoc arrays. In both stages of training, softmax is used as the output layer, and Mel-filterbanks are used as the acoustic features. The first-stage training aims to train a frame-level feature extractor. Specifically, we first train a standard speaker verification system using a large number of single-channel speech data, as shown in Fig. <ref>. Then, we retain the frame-level feature extractor, and discard all other part of the system. The second-stage training is to train the channel fusion module in Fig. <ref> using the spatial-temporal speech data collected from ad-hoc microphone arrays, with the frame-level feature extractor trained in the first stage fixed. Specifically, the frame-level feature extractor is first applied to each channel of the ad-hoc microphone arrays, which generates the frame-level speaker embeddings of each channel: 𝐗_c∈ℝ^T × D, c=1,2,…,C where D is the dimension of the frame-level speaker embeddings, and T is the number of frames. The speaker embeddings of all C channels can be represented as: 𝐗_CT = {𝐗_1,…,𝐗_C}∈ℝ^C× T × D §.§ Static graph formulation of spatial-temporal data In the second-stage training, there are many ways to formulate 𝐗_CT as a graph data, however, this problem seems far from explored yet. To our knowledge, existing works <cit.> do not fully explore the temporal connections between frames. To address this issue, one possible way is to formulate 𝐗_CT as a dynamic graph defined in Section <ref> directly. 
However, this way is too complicated since that both the nodes, edges, and adjacent matrix of a dynamic graph are time-varying. To prevent this overcomplicated formulation, we propose to reformulate each frame of the spatial-temporal data 𝐗_CT as a node of a static graph, and define its adjacent matrix 𝐀_CT as a boolean matrix: 𝐀_CT∈𝔹^(C T)× (C T) where its element A_CT[i,j] = 0 means that the ith node does not have a direct connection with the jth node. To this end, we have formulated 𝐗_CT as a static graph G_CT = {𝐗_CT, 𝐀_CT}. To our knowledge, this is the first time that the spatial-temporal data is formulated as a static graph learning problem. This formulation not only can still grasp the spatial-temporal dependency, such as the relative time delay and SNR differences between the channels, but also is easily trained with common graph neural networks. One difficulty of the above formulation is that 𝐀_CT is very large, which causes high computational and storage complexities. For example, a speech signal of 10 seconds collected with 40 ad-hoc nodes has an 𝐀_CT of as large as 40000× 40000, if the frame-shift is 10 milliseconds. How to approximate 𝐀_CT efficiently is one of the core issues, which will be introduced in Section <ref>. §.§ Network architecture To grasp the spatial-temporal dependency between the frames in 𝐗_CT, we design a graph-based spatial-temporal aggregation block ℋ^1(·) to transform 𝐗_CT to another multichannel feature 𝐙_CT∈ℝ^(CT)× D: 𝐙_CT = ℋ^1(𝐗_CT,𝐀_CT) See Section <ref> for two implementations of ℋ^1(·). To filter out the channels that contribute negatively to the speaker verification system, we further design a graph-based channel selection algorithm ℋ^2(·) to automatically select K channels (K≤ C) from 𝐙_CT: (𝐙̂_CT,𝐀̂_CT) = ℋ^2(𝐙_CT,𝐀_CT) with 𝐙_CT∈ℝ^(KT)× D. An important novelty and advantage of the graph-based channel selection is that prior information can be easily injected into 𝐀_CT for the performance improvement. See Section <ref> for the details of ℋ^2(·) in the presence or absence of prior information. Finally, we calculate the utterance-level speaker embedding 𝐒 by the average pooling over all channels and frames of 𝐙̂_CT: 𝐒= 1/KT∑_i=1^KT𝐙̂_CT[i,:] which is used as the input of the utterance-level feature extractor. § GRAPH-BASED SPATIAL-TEMPORAL AGGREGATION BLOCK To reduce the high computational and storage complexities of the adjacent matrix 𝐀_CT in (<ref>), in this paper, as shown in Fig. <ref>, we decompose ℋ^1(·) in (<ref>) into two successive blocks—a temporal module, denoted as ℋ^1_t(·), and a spatial module, denoted as ℋ^1_s(·). The overall procedure is as follows: §.§ Temporal module The temporal module first takes each channel of 𝐗_CT as its input: 𝐗^c=[𝐱_1^c,…,𝐱_t^c,…,𝐱_T^c], ∀ c=1,…,C where 𝐱_t^c∈ℝ^D is the speaker embedding of the tth frame at the cth channel. Then, it builds a static graph on each channel, denoted as G_temporal^c = {𝐗^c, 𝐀_temporal}, where 𝐀_temporal is the adjacent matrix of the temporal static graph, and the frame 𝐱_t^c is a node of the static graph. Finally, the temporal module is defined as: 𝐘^c= ℋ^1_t(𝐗^c, 𝐀_temporal), ∀ c=1,…,C We then aggregate 𝐘^c into: 𝐘_CT={𝐘^1,…,𝐘^c,…,𝐘^C} which is used as the input of the spatial module. A core problem here is the design of 𝐀_temporal∈𝔹^T × T which is used to constrain the mutual connections between neighboring frames. 
Because speaker verification requires as many frame-level speaker embeddings as possible for a reliable utterance-level speaker embedding, in this paper, we set 𝐀_temporal to a complete-graph adjacent matrix: A_temporal[i,j] = 1, ∀ i=1,…,T, ∀ j = 1,…, T We also study a common alternative definition in the ablation study of this paper: A_temporal[i,j] = {[ 1, if i∈span(j,δ); 0, otherwise ]., where span(j,δ)= {j-δ,…,j-1,j, j+1,…,j+δ} is a time span of the jth frame with a half-window length of δ. §.§ Spatial module The spatial module first partitions 𝐘_CT along the temporal dimension: 𝐘^t =[𝐲^t_1,…,𝐲^t_c,…,𝐲^t_C], ∀ t=1,…, T where 𝐲_c^t∈ℝ^D represents the frame-level speaker embedding of the cth channel at the tth frame. Then, it builds a static graph on each frame, denoted as G_spatial^t = {𝐘^t, 𝐀_spatial}, where 𝐀_spatial is the adjacent matrix of the spatial static graph, and the frame 𝐲_c^t is a node of the static graph. Finally, the spatial module is defined as: 𝐙^t= ℋ^1_s(𝐘^t, 𝐀_spatial) We aggregate 𝐙^t into: 𝐙_CT= {𝐙^1,…,𝐙^t,…,𝐙^T} which is used as the input of the graph-based channel selection algorithm. The adjacent matrix 𝐀_spatial ∈𝔹^C × C is used to constrain the mutual connections between the channels of the ad-hoc microphone arrays. Apart from using a complete graph, where 𝐀_spatial is an all-one adjacent matrix, we can also construct 𝐀_spatial according to the relative positions between the channels and the speaker: A_spatial[u,v] = {[ 1, if v ∈𝒩_k(u); 0, otherwise ]., where 𝒩_k(u) represents the k nearest neighboring nodes of channel u. Various methods can be applied to obtain the k nearest neighbors; see Section <ref> for the details.
Given the adjacent matrix 𝐀, a local attentive score Ê^m[n,i] from the neighbor node v_i to the central node v_n is calculated by: Ê^m[n,i] = exp(𝐄^m[n,i])/∑_𝐀[n,j]=1exp(𝐄^m[n,j]) where the denominator collects the information from all neighbor nodes of v_n defined by 𝐀, i.e. {j|v_j ∈𝒩(n),∀ j = 1,…,N}. The hidden state of the mth head is calculated by: 𝐇^m = Ê^m𝐕^m The output of ℋ^1_ε(·) is a concatenation of the hidden states of all attention heads: 𝐇 = concat [𝐇^1,…,𝐇^m, … ,𝐇^M] §.§ Graph convolution network based aggregation (GCN-agg) As shown in Fig. <ref>b, GCN-agg aims to learn new attributes 𝐇 by the multi-head self-attention mechanism based on a GCN layer <cit.>. Different from SAM-agg, the attention mechanism updates the attribute of each node by attending to its neighbors using its own representation as the query. Specifically, the mth attention head first projects 𝐗 into a d-dimensional space using learnable parameters 𝐖_l^m∈ℝ^D × d and 𝐖_r^m∈ℝ^D × d: 𝐠_l^m = 𝐗𝐖_l^m , 𝐠_r^m = 𝐗𝐖_r^m where 𝐠_l^m and 𝐠_r^m denote the query and key matrices respectively. The score for the query-key pair is calculated by: 𝐄^m[i,j] = β^⊤LeakyReLU(concat(𝐠_li^m,𝐠_rj^m)) where 𝐄^m∈ℝ^N × N, and β∈ℝ^2d is a learnable vector. Given the adjacent matrix 𝐀, the local attention weight from a neighbor node v_i ∈𝒩(n) to a central node v_n, denoted as α_ni^m, is calculated by: α_ni^m = exp(𝐄^m[n,i] )/∑_𝐀[n,j]=1exp(𝐄^m[n,j]) For the mth head, the aggregated output of the node v_n is given by 𝐡_n^m: 𝐡_n^m = ∑_i ∈𝒩(n)α_ni^m𝐠_ri^m We concatenate the aggregated features of all nodes into 𝐇^m = [𝐡_1^m,…,𝐡_n^m,…,𝐡_N^m]. The output of the aggregation module is: 𝐇 = concat [𝐇^1,…,𝐇^m, … ,𝐇^M] § GRAPH-BASED CHANNEL SELECTION The graph-based channel selection algorithm ℋ^2(·) in (<ref>) aims to select effective microphones from the spatial graph G_spatial^t by reconstructing the adjacent matrix 𝐀_spatial. In this section, we design two versions of ℋ^2(·) according to whether the positions of the microphones or speakers are known. §.§ Graph-based channel selection with no prior information (gPool) When the positions of the microphones or speakers are unknown, we employ a graph-based pooling layer to select channels <cit.>. The procedure of the algorithm is shown in Fig. <ref>. Specifically, it first maps 𝐙^t to a one-dimensional space using a learnable parameter 𝐩∈ℝ^D: 𝐪 = (𝐙^t)'𝐩/‖𝐩‖_2 where 𝐪∈ℝ^C and ‖·‖_2 is the ℓ_2-norm. Then, it selects the channels that correspond to the K largest elements of 𝐪 as the most effective channels. Suppose the indices of the selected channels are: idx = rank(𝐪,K) The reconstructed spatial graph is formulated as: 𝐙̂^t[i,:] = 𝐙^t[i,idx] ⊙(sigmoid(𝐪[idx]))', ∀ i = 1,…,D 𝐀̂_spatial = 𝐀_spatial[idx,idx] where the symbol “:” means that all elements at the row or column of the matrix are selected. Finally, we have: 𝐙̂_CT= {𝐙̂^1,…,𝐙̂^t,…,𝐙̂^T} §.§ Graph-based channel selection with prior knowledge (𝐀_prior) The direct estimation of the positions of the microphones and speakers is a hard issue. However, as shown in Fig. <ref>, if the positions are known a priori, then they may improve the performance significantly once properly utilized. In this situation, we rename 𝐀_spatial as 𝐀_prior to emphasize the utilization of the prior knowledge. There are many ways to initialize 𝐀_prior with the different types of graphs described in Section <ref>. Here we choose a simple undirected graph, which initializes 𝐀_prior with the relative distances between the microphones and the speakers.
Specifically, we denote the distance between the speaker position and the ith microphone as D_(i,spk). The maximum of the distances is denoted as D_max. Then, we construct 𝐀_prior using a pre-defined scalar ρ to choose the K-nearest channels: 𝐀_prior[i,:] = {[ 1 , if D_(i,spk)/D_max < ρ; 0, otherwise ]. and set 𝐙̂_CT = 𝐙_CT Note that, if more abundant prior information is available, the adjacent matrix can be constructed in a more multiplicative way. For example, we can also utilize the location of the noise sources or the speaker orientation as a mask, which is further discussed in the ablation study in Section <ref>. § EXPERIMENTAL SETUP This section presents the experimental datasets as well as the parameter settings. §.§ Datasets The experiments were conducted on two simulation datasets—LibriSIMU-noise and LibriSIMU-reverb, as well as two real multi-channel datasets—Libri-adhoc40<cit.> and Hi-mia<cit.>. The detailed settings of the datasets are listed as follows. LibriSIMU-noise is used to simulate the working scenario of speaker verification in noisy environment. For each utterance, we simulated a room environment. The width, length, and height of the room were selected randomly from ranges of [8,10], [12,14], and [3,5] meters respectively. The reverberation environment was generated by the image-source library[https://github.com/DavidDiazGuerra/gpuRIR]. The reverberation time T_60 was selected randomly from a range of [0.2,0.5] seconds. A single point source of noise, a single speaker, and an ad-hoc microphone array with 40 ad-hoc nodes were put randomly in the three dimensional space of the room, which implies a harsh experimental conditions. The speech source was from the train-clean-100, test, and dev subsets of Librispeech <cit.>. The noise data was from a mixed noise dataset <cit.> which includes classical bus noise, street noise, white noise, etc. The signal-to-noise ratio was in a range of [-5,20] dB. LibriSIMU-reverb is used to simulate the working scenario of speaker verification in strong reverberation environment. The simulation is similar to the LibriSIMU-noise dataset, except that the reverberation time T_60 was in a range of [0.2,1.2] seconds, and no additive noise was added. Libri-adhoc40<cit.>: It is a replayed version of the Librispeech corpus in a real office environment. The recording environment is an office room with a size of 9.8× 10.3 × 4.2 meters. The room is highly reverberant with T_60 around 0.9 second and little additive noise. Each replayed utterance was recorded by an ad-hoc microphone array of 40 ad-hoc nodes. The locations of both the microphones and speakers of the training, evaluation, and test data are different. The distances between the speakers and the microphones were ranged from 0.8 meter to 7.4 meters, which makes the dataset suitable for the study of far-field speech processing. Hi-mia<cit.>: It is a real text-dependent dataset for smart homes. It uses one close-talking microphone and six 16-channel microphone arrays to collect speech data. The training set takes AIshell-wakeup[https://www.aishelltech.com/wakeup_data] as the speech source, which contains 254 speakers. The test set takes AIshell-2019B-eval[https://www.aishelltech.com/aishell_2019_eval] as the speech source, which contains 44 speakers. The text content of the speech source is 'ni hao mi ya' in Chinese and 'Hi Mia' in English with different speaking speed. 
In <cit.>, a speaker verification system was first trained with the text-independent single-channel AIshell-2[https://www.aishelltech.com/aishell_2] data and data augmentation, and then fine-tuned with the text-dependent multichannel Hi-mia data. We follow the same procedure in this paper. §.§ Parameter settings The proposed method was implemented via the voxceleb_trainer toolbox[https://github.com/DavidDiazGuerra/gpuRIR]. We stacked two spatial-temporal blocks. The number of attention heads is 4. In the first training stage, the single-channel speaker verification system uses the architecture in <cit.>. It was pre-trained with the Librispeech <cit.> train-clean-100 subset when the test sets of the LibriSIMU-noise, LibriSIMU-reverb, and Libri-adhoc40 datasets were used as the test data, and it was pre-trained with the iOS subset of AIShell-2 <cit.> when the test set of the Hi-mia dataset was used as the test data. The best model among the 200 training epochs of the single-channel system was used to initialize the multichannel system for the second-stage training. In the second training stage, we randomly selected 20 channels for each training utterance as the multichannel training data. For each utterance of a test set, we randomly selected {8, 16, 32, 40} channels respectively for evaluation. Multiple variants of the proposed method were used for evaluation, which are denoted as follows: * Self-attention aggregation with graphs as masks (SAM-agg): It only contains the SAM-agg spatial-temporal aggregation block. No channel selection block is added. * Graph convolution network based aggregation (GCN-agg): It only contains the GCN-agg spatial-temporal aggregation block. No channel selection block is added. * Channel selection based on graph pooling with the GCN-agg mechanism (GCN-agg+gPool): It adds the gPool channel selection block after the GCN-agg block. * Channel selection based on the prior adjacent matrix with the GCN-agg mechanism (GCN-agg+𝐀_𝐩𝐫𝐢𝐨𝐫): It adds the 𝐀_prior channel selection block after the GCN-agg block. We used the equal error rate (EER) as the evaluation metric, and reported the number of parameters of the comparison methods. §.§ Comparison methods We compared with 6 referenced methods, which can be categorized into the following two classes. The first class aims to generate a single-channel speech signal from the multiple channels of an ad-hoc microphone array, and then applies it to a single-channel speaker verification system: * Oracle one-best: It selects the physically closest channel to the speech source. * Beamforming <cit.>: It uses conventional delay-and-sum beamforming to aggregate all channels into a single channel. * EV <cit.>: It selects the channel whose energy envelope has the highest variance among all channels. The second class uses a multichannel speaker verification system to handle the speech signals from ad-hoc microphone arrays directly: * Utterance-level channel mean aggregation (MEAN-uttr-agg): It extracts an utterance-level speaker embedding from each channel, and then averages the speaker embeddings of all channels into a single speaker embedding. * Utterance-level channel aggregation based on multihead attention (MHA-uttr-agg) <cit.>: It extracts an utterance-level speaker embedding from each channel, and then aggregates the speaker embeddings using the multi-head attention mechanism.
* Utterance-level channel aggregation based on attentive pooling (AP-uttr-agg)<cit.>: It extracts an utterance-level speaker embedding from each channel, and then conducts the weighted average over the speaker embeddings of all channels where the weights of the channels are calculated by an attention pooling layer. § EXPERIMENTAL RESULTS In this section, we first show the main comparison results in Section <ref>, then show the robustness of the proposed method to the variation of the noise and reverberation environments in Section <ref>, and finally study the effects of the components of the proposed method on performance in Sections <ref> and <ref>. §.§ Main results Table <ref> lists the comparison results on the simulated datasets. From the table, we see that, the proposed graph-based methods outperform all referenced methods. We take the 8-channels test scenario as an example. SAM-agg achieves a relative EER reduction of 7.66% over MHA-uttr-agg, and 7.64% over the AP-uttr-agg. Even the GCN-agg, which performs the poorest among the variants of the proposed method, outperforms the best utterance-level method AP-uttr-agg by a relative EER reduction of 2.71%. A similar phenomenon is observed on the libriSIMU-reverb dataset as well. Comparing the variants of the proposed methods, we observe the following phenomena. GCN-agg+gpool obtains 2.09% relative EER reduction over GCN-agg on libriSIMU-reverb. However, this advantage was not transferred to libriSIMU-noise. GCN-agg+A_prior outperforms GCN-agg by a relative EER reduction of 15.65% on LibriSIMU-noise, and 20.02% on LibriSIMU-reverb. Moreover, it achieves a relative EER reduction of 12.99% over the runner-up method SAM-agg on LibriSIMU-reverb, and 6.40% on LibriSIMU-noise, which demonstrates the importance of prior knowledge in the study of ad-hoc microphone arrays. Table <ref> lists the comparison results on the real datasets. From the table, we see that the proposed methods significantly outperform the referenced methods, which is similar to that on the simulated test data. Comparing the variants of the proposed methods, we further observe the following phenomena. GCN-agg outperforms SAM-agg on Hi-mia, e.g. by a relative EER reduction of 14.36% on the 8-channels test scenario. GCN-agg+gPool outperforms GCN-agg by at most a relative EER reduction of 4.64% on Libri-adhoc40, while GCN-agg+A_prior obtains a relative EER reduction of 18.94% over GCN-agg, on the 8-channels test scenario. Fig. <ref> analyzes the performance of speaker verification at each node of an ad-hoc microphone array with respect to the distance between the node and a speaker, where the model GCN-agg is used as the speaker verification system. From the figure, we see that the EER of the microphone nodes is correlated with the distance from the microphones to the speaker and point noise source. For example, comparing Fig. <ref>a with Fig. <ref>d, we see that the channels close to the speaker yield good performance on the Libri-adhoc40. Similar phenomenon is observed on the LibriSIMU-reverb as well. Comparing Fig. <ref>b with Fig. <ref>e, we see that most of the channels that yield good performance are not only close to the speaker, but also far away from the point noise source. However, an important phenomenon is that some channels that are close to the wall also yield excellent performance, though they are far away from the speaker. At last, we observe that GCN-agg is able to grasp the spatial-temporal difference between the channels. 
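Since all of the comparisons above and below are reported in terms of EER, we include a minimal sketch (our own illustration, assuming per-trial similarity scores and binary target/impostor labels) of how the EER can be computed from verification scores:

import numpy as np

def compute_eer(scores, labels):
    # scores: (N,) similarity scores, higher means more likely same speaker.
    # labels: (N,) 1 for target (same-speaker) trials, 0 for impostor trials.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)                  # sort trials by descending score
    labels = labels[order]
    n_tar = labels.sum()
    n_non = len(labels) - n_tar
    tp = np.cumsum(labels)                       # accepted targets at each threshold
    fp = np.cumsum(1 - labels)                   # accepted impostors at each threshold
    fnr = 1.0 - tp / n_tar                       # miss rate
    fpr = fp / n_non                             # false-alarm rate
    idx = np.argmin(np.abs(fnr - fpr))           # point where the two error rates cross
    return 0.5 * (fnr[idx] + fpr[idx])

# toy usage with slightly separable random scores
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)
scores = rng.normal(size=1000) + 0.5 * labels
print(compute_eer(scores, labels))

The EER is the operating point at which the miss rate equals the false-alarm rate, so lower values indicate better verification performance.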
§.§ Results in different noise and reverberation conditions To study how the proposed method behaves in different noise and reverberation conditions, we preset the SNR and reverberation time T_60 of the test scenarios. Table <ref> lists the EER of the comparison methods on the 20-channels test scenario of LibriSIMU-noise with different SNR levels. From the table, we see that, although the performance of all comparison methods becomes worse as the test scenarios become more challenging, the proposed methods outperform the referenced methods, and GCN-agg+A_prior performs the best in all cases. Specifically, when SNR∈ [-5,0]dB, SAM-agg obtains a relative EER reduction of 10.86% over the best referenced method MHA-uttr-agg. When the SNR is enlarged to, e.g., [15,20]dB, the relative EER reduction is still 9.96%. GCN-agg+A_prior achieves a relative EER reduction of 11.0% over SAM-agg. Table <ref> lists the EER of the comparison methods on the 20-channels test scenario of LibriSIMU-reverb with different reverberation times. The experimental phenomena are similar to those in Table <ref>. Specifically, when the reverberation time T_60∈ [0.2,0.4] seconds, SAM-agg obtains a relative EER reduction of 6.53% over the best referenced method MHA-uttr-agg. GCN-agg+A_prior further reduces the EER by a relative 6.90% over SAM-agg. When T_60∈ [1.0,1.2] seconds, SAM-agg obtains a relative EER reduction of 10.83% over MHA-uttr-agg. GCN-agg+A_prior further outperforms SAM-agg by a relative EER reduction of 11.39%. §.§ Effects of the adjacent matrices on performance The adjacent matrices 𝐀_temporal and 𝐀_spatial can be constructed in different ways, where 𝐀_spatial is rewritten as 𝐀_prior when it is constructed with prior spatial information. In this section, we study how the construction methods affect the performance. In particular, we denote 𝐀_prior constructed with the prior knowledge "X" as 𝐀_prior^X. Table <ref> lists the effect of the construction methods of the adjacent matrices on the Libri-adhoc40 dataset. Specifically, 𝐀_temporal is constructed in the two ways described in Section <ref>, where the parameter δ in span(j,δ) is set to {0,1}. From the table, we see that constructing 𝐀_temporal with the complete graph is better than with the sparse graph span(j,δ). 𝐀_prior^pos is constructed using the prior position information of the ad-hoc microphone arrays and the sound sources, as presented in Section <ref>, where the tunable parameter ρ determines the number of selected channels. A small ρ means that the selected channels are close to the speaker. As shown in Table <ref>, when ρ decreases from 0.9 to 0.3, the performance is improved; however, when ρ is further reduced to 0.1, the performance decreases because only a very limited number of channels is selected. 𝐀_prior^pos+ori uses the speaker orientation as an additional prior, where the ad-hoc nodes that are placed behind the speaker are masked off by further setting the corresponding elements of 𝐀_prior^pos to zero. As shown in Table <ref>, when ρ = 0.9, 𝐀_prior^pos+ori outperforms 𝐀_prior^pos on the 40-channel test scenario. However, when ρ is gradually reduced, the advantage of 𝐀_prior^pos+ori over 𝐀_prior^pos is limited. Tables <ref> and <ref> list the effect of the construction methods of the adjacent matrices on the LibriSIMU-noise and LibriSIMU-reverb datasets respectively, where 𝐀_prior^pos+noise_pos denotes that the microphone nodes that are close to the point noise source are further masked off.
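As a concrete illustration of how such positional priors can be encoded, the following sketch (our own; the exact noise-masking rule and the numeric thresholds are assumptions, since the text above only states that nodes close to the noise source are masked off) builds the distance-based adjacency 𝐀_prior^pos with an optional noise-position mask:

import numpy as np

def build_prior_adjacency(mic_pos, spk_pos, rho, noise_pos=None, noise_radius=None):
    # mic_pos: (C, 3) microphone coordinates; spk_pos: (3,) speaker coordinates.
    # rho: relative distance threshold; a node is kept if its distance to the
    #      speaker is below rho times the maximum speaker-microphone distance.
    # noise_pos, noise_radius: optional point-noise position and an assumed radius;
    #      nodes closer than the radius to the noise source are masked off.
    mic_pos = np.asarray(mic_pos, dtype=float)
    d_spk = np.linalg.norm(mic_pos - spk_pos, axis=1)
    keep = d_spk / d_spk.max() < rho                 # A_prior^pos rule
    if noise_pos is not None:                        # assumed A_prior^{pos+noise_pos} rule
        d_noise = np.linalg.norm(mic_pos - noise_pos, axis=1)
        keep &= d_noise > noise_radius
    C = len(mic_pos)
    A = np.zeros((C, C), dtype=int)
    A[keep, :] = 1                                   # kept nodes connect to all nodes
    return A

# toy usage: 40 nodes placed randomly in a 10 x 12 x 4 m room
rng = np.random.default_rng(2)
mics = rng.uniform([0, 0, 0], [10, 12, 4], size=(40, 3))
A = build_prior_adjacency(mics, spk_pos=np.array([5, 6, 1.6]), rho=0.5,
                          noise_pos=np.array([1, 1, 1.5]), noise_radius=2.0)

The speaker-orientation variant 𝐀_prior^pos+ori would apply an analogous mask that zeroes out the rows of nodes located behind the speaker.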
From the tables, we see that 𝐀_prior^pos+noise_pos improves the performance in the low-SNR scenarios, obtaining a relative EER reduction of 2.94% over 𝐀_prior^pos. §.§ Effect of the graph-based spatial-temporal aggregation block on performance The graph-based spatial-temporal aggregation block consists of two components—a temporal module and a spatial module. In this section, we analyze the effects of the temporal module and the spatial module respectively. Specifically, we compare the graph-based temporal module with a method of simply averaging the features along the time dimension, i.e. “mean pooling over time”, and compare the graph-based spatial module with a method of averaging the features along the spatial dimension, i.e. “mean pooling over space”, which yields the following three comparison methods: * MEAN-uttr-agg: Both the graph-based temporal and spatial modules of the proposed method are replaced by their corresponding mean pooling strategies. * GCN-agg-temporal: The graph-based spatial module is replaced by the mean pooling over space. * GCN-agg-spatial: The graph-based temporal module is replaced by the mean pooling over time. Table <ref> lists the EER performance of the GCN-agg variants on the real-world data. From the table, we see that the proposed method performs the best; compared with “MEAN-uttr-agg”, the performance improvement produced by “GCN-agg-spatial” is more significant than that produced by “GCN-agg-temporal”. We take the 8-channels test scenario as an example. “GCN-agg-spatial” outperforms “MEAN-uttr-agg” by a relative EER reduction of 40.93% on Libri-adhoc40 and 26.22% on Hi-mia respectively, while “GCN-agg-temporal” outperforms “MEAN-uttr-agg” by a relative EER reduction of only 35.97% on Libri-adhoc40 and 19.20% on Hi-mia respectively. § CONCLUSIONS In this paper, we have proposed a graph-based frame-level multi-channel speaker verification system with ad-hoc microphone arrays. It consists of two components—a graph-based spatial-temporal aggregation block and a graph-based channel selection block. The spatial-temporal aggregation block first uses graphs to model the interdependencies of the frame-level multichannel speaker embeddings, and then aggregates the embeddings along both the temporal and spatial dimensions. The channel selection block further chooses the channels that are most helpful for improving the performance. The core novelties are that, to our knowledge, the proposed method is the first to model multichannel speaker verification with graphs, and that it constructs the adjacent matrices of the graphs flexibly from environmental priors. We have compared the proposed method with a number of representative algorithms on both simulated and real-world datasets. Experimental results in both scenarios show that the proposed method significantly outperforms the referenced methods. For example, the proposed method outperforms the best referenced method by a relative EER reduction of at least 15.39% in the simulated noisy environment, and 18.54% in the simulated reverberant environment. Moreover, the proposed method is robust against variations of the reverberation and SNR levels.
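As a concluding illustration of the masked attention that underlies the SAM-agg block, the following minimal single-head sketch (our own; the multi-head version described in the paper concatenates several such heads, and the projection sizes here are arbitrary) shows how an adjacent matrix restricts the attention weights during aggregation:

import numpy as np

def masked_self_attention(X, A, W_q, W_k, W_v):
    # X: (N, D) node attributes (frame-level speaker embeddings).
    # A: (N, N) 0/1 adjacent matrix; attention is restricted to neighbours with A=1.
    # W_q, W_k, W_v: (D, d) learnable projection matrices.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = Q.shape[1]
    E = Q @ K.T / np.sqrt(d)                     # raw attention scores
    E = np.where(A > 0, E, -1e9)                 # mask off non-neighbour nodes
    E = E - E.max(axis=1, keepdims=True)         # numerically stable softmax
    W = np.exp(E)
    W = W / W.sum(axis=1, keepdims=True)
    return W @ V                                 # aggregated node features

# toy usage: 6 nodes, 8-dimensional attributes, complete-graph adjacency
rng = np.random.default_rng(3)
X = rng.normal(size=(6, 8))
A = np.ones((6, 6), dtype=int)
H = masked_self_attention(X, A, *(rng.normal(size=(8, 4)) for _ in range(3)))

With a complete-graph adjacency this reduces to ordinary self-attention; sparser adjacencies such as span(j,δ) or 𝒩_k(u) simply zero out the corresponding attention weights.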
http://arxiv.org/abs/2307.02649v1
20230705204723
Periodic discrete Darboux transforms
[ "Joseph Cho", "Katrin Leschke", "Yuta Ogata" ]
math.DG
[ "math.DG", "(2020): 53A70 (Primary) 58J72 (Secondary)" ]
We express Darboux transformations of discrete polarised curves as parallel sections of discrete connections in the quaternionic formalism. This immediately leads to the linearisation of the monodromy of the transformation. We also consider the integrable reduction to the case of discrete bicycle correspondence. Applying our method to the case of discrete circles, we obtain closed-form discrete parametrisations of all (closed) Darboux transforms and (closed) bicycle correspondences. Periodic discrete Darboux transforms Joseph Cho, Katrin Leschke, Yuta Ogata August 1, 2023 ================================================================================================= § INTRODUCTION In computational modelling, discretisation has been central to a number of applications in the form of polygonal meshes: for example, computer graphics uses triangular meshes to represent 3–dimensional models; freeform architecture greatly benefits from a systematic analysis of polygonal meshes (see, for example, <cit.>). The primary objective of polygonal meshes is to approximate a given smooth surface via polygons. In view of its manifold applications in computational modelling, the field of discrete differential geometry, with the central ethos of integrable discretisation, has recently experienced a surge in interest. In contrast to classical numerical approaches for mesh generation, integrable discretisation in its nascence <cit.> sought to recover the integrable system structure of smooth solitonic theory in its discrete counterparts. As surface theory became modernised via the solitonic approach, integrable discretisation began to take shape in the form of discrete surfaces, with discrete pseudospherical surfaces <cit.> and discrete isothermic surfaces <cit.> being the seminal examples. These discrete surfaces with integrability approximate the smooth surfaces capably; more importantly, integrable discretisations were quickly found to possess a rich mathematical structure rivaling that of the smooth counterpart, giving birth to the field of discrete differential geometry <cit.>. With the growth of the field, discrete differential geometry no longer merely seeks to replicate the mathematical structure of the smooth theory; the field is now quickly becoming a key ingredient in understanding the smooth theory. For example, a solution to the Björling problem for isothermic surfaces was obtained via discrete isothermic surfaces in <cit.>; remarkably, discrete differential geometry was also essential to the resolution of the long-standing global Bonnet problem in <cit.>. Much of the interest in discrete differential geometry has centered around the local theory; in contrast, the global theory of discrete surfaces from the viewpoint of integrable discretisations has received comparatively little interest. In this work, we seek to focus on the global aspects of discrete differential geometry. As a starting point, we will investigate periodic Darboux transforms of discrete polarised curves. Darboux transformations of smooth polarised curves were defined in <cit.> in the context of interpreting the semi-discrete isothermic surfaces <cit.> in terms of transformation theory.
In fact, it has been shown that the integrable reductions of such Darboux transformations include the bicycle correspondences, a mathematical model of the pair of tire tracks of a bicycle, with various connections to the Hashimoto or smoke-ring flow and the filament equations <cit.> as well as the modified Korteweg–de Vries equations <cit.>. The integrable discretisation of the Darboux transformation was obtained in <cit.> in the case of plane curves, motivated by discrete isothermic surfaces <cit.>; meanwhile, the monodromy of discrete bicycle correspondences was investigated in <cit.> (see also <cit.>) while that of the discrete Hashimoto flows was examined in <cit.>. Building on these results, we will investigate the monodromy of discrete Darboux transformations via the gauge theoretic approach to integrability, where the zero curvature formalism is expressed via the existence of a 1-parameter family of (flat) connections on the trivial bundle <cit.>, as the approach has been shown to be amenable to discretisations <cit.>. In Section <ref>, we reinterpret the Darboux transformations of smooth polarised curves of <cit.> in the quaternionic setting to serve as a motivation for the discrete case. The Darboux transformations of smooth polarised curves can be expressed via a Riccati-type equation <cit.>; via a suitably defined 1-parameter family of (flat) connections, we will show in Theorem <ref> that Darboux transformations can be characterised as the parallel sections of the connection, recovering a quaternionic analogue of the result in <cit.>. The interpretation of Darboux transformations via parallel sections is key to linearising the monodromy problem, a process that we explain in Section <ref>. We finish the introductory section by considering the integrable reduction to the bicycle correspondences and the bicycle monodromy in Section <ref>. In fact, the quaternionic formalism for Darboux transformations allows us to obtain closed-form parametrisations for the transforms of a circle in Examples <ref> and <ref>, as the quaternionic approach yields second-order ordinary differential equations with constant coefficients from the linearisation of the Riccati-type equation (see Remark <ref>). As we will see, this approach will also allow us to obtain the closed-form discrete parametrisations, a comparatively rare result in the discrete theory. The next Section <ref> is devoted to the Darboux transformations of discrete polarised curves, characterised as parallel sections of discrete (flat) connections. Motivated by the Darboux transformations of smooth curves, we define the discrete connections associated with discrete polarised curves in Definition <ref>, and show in Theorem <ref> that the parallel sections correspond to discrete Darboux transformations obtained via a discrete Riccati-type equation, coming from the well-known cross-ratios condition of discrete isothermic surfaces <cit.>. The discrete connections approach immediately yields the linearisation of the discrete monodromy problem (see Section <ref>). Then in Theorem <ref>, we obtain the integrable reduction to the case of discrete bicycle correspondences. In Examples <ref> and <ref>, we test the robustness of our discretisation by considering the case of discrete circles.
Surprisingly, our methods efficiently yield closed-form discrete parametrisations of: the Darboux transformations, the closed Darboux transformations, the bicycle correspondences, and the closed bicycle correspondences of the discrete circle (see Figure <ref>). Integrable system structures are at the core of many problems in physics, chemistry and biology; for example, the Korteweg–de Vries equation models waves on shallow water. Our results can be viewed as prototypes of how to obtain efficient numerical periodic solutions in terms of recurrence equations by discretising the integrable system structure. § DARBOUX TRANSFORMATIONS OF SMOOTH SPACE CURVES In this section, we adapt the Darboux transformations of smooth polarised curves in ℝ^n from <cit.> to the special case of 3–space ℝ^3 and 4–space ℝ^4 using a quaternionic formalism, with an eye on efficiently obtaining explicit parametrisations of transformations. (For details on the quaternionic setting, we refer the readers to works such as <cit.>.) Recall that the space of quaternions is given by ℍ = span_ℝ{1, i, j, k} where i^2 = j^2 = k^2 = ijk = -1 so that the multiplication is not commutative. We identify the 4–space with the quaternions, while we identify the 3–space with the imaginary quaternions Im ℍ = span_ℝ{i, j, k}. Under the identification, we have Re(a b̅) = ⟨ a, b⟩, where the standard Euclidean inner product in 4–space is denoted by ⟨·, ·⟩, and the norm by | · |. §.§ Darboux transformations via parallel sections Let I ⊂ ℝ be a smooth interval and q a non-vanishing real quadratic differential acting as a polarisation on I. We will refer to the pair (I, q) as a polarised domain as in <cit.>*Definition 2.1 <cit.>*p. 190. Suppose now that a regular curve x : (I,q) →ℝ^4 ≅ℍ is defined on the polarised domain. Then a curve x^d : I →ℍ is called a dual curve <cit.> if dx dx^d = q. When we need to fix a parameter t of the domain I to consider explicit examples, we will define a non-vanishing m : I →ℝ by q = 1/m dt^2. With the notion of duality, the Darboux transform x̂ : (I, q) →ℍ of x with spectral parameter μ is given by a Riccati equation in <cit.>: dx̂ = μ (x̂ - x) dx^d (x̂ - x) = μ T dx^d T for a real constant μ and T := x̂ - x, and we call x, x̂ : (I, q) →ℍ a Darboux pair. We note that as shown in <cit.>, the Riccati equation (<ref>) is a reformulation of the tangential cross-ratios, and implies that a Darboux pair is a Ribaucour pair, namely, they must envelop a common circle congruence. Darboux pairs of polarised curves are Möbius invariant notions; therefore, to view the transformation within the realm of conformal geometry, we consider ℍ∪{∞}≅ℍℙ^1 := ℙ(ℍ^2) as the model for the conformal 4-sphere, where we view ℍ^2 as a quaternionic right vector space. In this paper, we take advantage of the Möbius invariance and take affine coordinates to associate the conformal 4-sphere ℍℙ^1 with points in ℝ^4 ≅ℍ via ℍ∋ x ∼ L := ψℍ := [ x; 1 ]ℍ∈ℍℙ^1. Therefore, any polarised space curve is now represented as L : (I, q) →ℍℙ^1, also considered as a 1-dimensional subbundle of the trivial bundle ℍ^2 := I ×ℍ^2. Under this setting, we now aim to understand how the Darboux transformations of polarised curves can be interpreted in terms of parallel sections of flat connections defined on the trivial bundle ℍ^2, recovering the quaternionic analogue of the result in <cit.>. To do this, consider a family of (flat) connections 𝒟_λ defined on the trivial bundle ℍ^2 given by 𝒟_λ := d + [ 0 dx; λ dx^d 0 ], λ∈ℝ.
The family of connections 𝒟_λ is trivially flat as the domain is 1-dimensional; however, the ground for the emphasis on the flatness is twofold: to mirror the integrable structure of isothermic surfaces in the polarised curve theory, and to note that the parallel sections are well-defined. Then we have that ϕ := [ α; β ] is a parallel section of 𝒟_μ for some μ∈ℝ, that is, 𝒟_μϕ = 0, if and only if [ α; β ] = - [ x β; μ x^d α ]. Under this setting, the 𝒟_μ–parallel sections can be characterised as follows: Given a polarised curve x : (I, q) →ℍ, we have x̂ := x + αβ^-1 is a Darboux transform of x with parameter μ if and only if ϕ := [ α; β ] is 𝒟_μ–parallel. First, assuming that ϕ is 𝒟_μ–parallel, define x̂ := x + αβ^-1. Then it is straightforward to see via the differential equations on α and β (<ref>) that x̂ = x + α β^-1 - αβ^-1β β^-1 = μ (x̂ - x)x^d (x̂ - x), so that x̂ solves the Riccati equation (<ref>). On the other hand, let x̂ be a solution to the Riccati equation (<ref>). Set T := x̂ - x and define β so that β solves ( + T^-1x̂) β = 0. Putting α := T β, we have 0 = β + T^-1x̂ β = β + μx^d α, while 0 = (x̂ - μ T x^d T)β = x̂ β - μ T x^d α = x β + T β + T β = x β + α. Therefore, ϕ is 𝒟_μ–parallel. Now consider the gauge transformation _λ := 𝒢∙𝒟_λ, where 𝒢 = ( e ψ) for e = [ 1; 0 ]. This gives a 1-parameter family of (flat) connections _λ = + λη, with η := [ x x^d - x x^d x; x^d - x^d x ], satisfying η =η = ψℍ. Then ϕ = [ α; β ] is 𝒟_μ–parallel if and only if φ := 𝒢ϕ = e α + ψβ = ψ̂β is _μ–parallel where L̂ := ψ̂ℍ = [ x̂; 1 ]ℍ = [ x + αβ^-1; 1 ]ℍ. Therefore, we conclude: The Darboux transforms of a polarised curve x : (I, q) →ℍ with parameter μ are given by the _μ–parallel sections. The family of connections _λ defined on ℍ^2 is the quaternionic analogue of the family of connections defined on I ×ℝ^n+1,1 introduced in <cit.>. We can identify the sufficient condition for the Darboux transform x̂ of a polarised curve in any 3-sphere to take values again in the same 3-sphere: Given a polarised curve x : (I, q) → S^3 in some 3-sphere S^3 ⊂ S^4 with associated connection _λ, let φ = ψ̂β be _μ–parallel. Then x̂: (I, q) → S^3 if and only if x̂(t_0) ∈ S^3 for some t_0 ∈ I. Since the necesscity is obvious, we show the sufficiency. Applying a suitable stereographic projection to the S^3, we will prove the statement for curves in ℍ≅ℝ^3. Now consider the hermitian form ( [ a; b ], [ c; d ]) = a̅ d + b̅ c, for a, b, c, d ∈ℍ. Then we have that (ϕλ, ϕ̃λ̃) = λ̅(ϕ, ϕ̃) λ̃ for λ, λ̃∈ℍ and ϕ, ϕ̃∈ℍ^2; furthermore, it is straightforward to check that ( [ α; 1 ], [ α; 1 ]) = 0 if and only if α = 0. Thus, if x is as given, then we have (ψ, ψ) = 0 for ψ = [ x; 1 ]. Now since φ = e α + ψβ is _μ–parallel, we have ((φ, φ)) = -μ( (ηφ , φ) + (φ, ηφ)) = 0, where we used that ηφ = η eα = ψx^d α. Thus, (φ,φ) is constant. If at t = t_0, we have x̂ = x + αβ^-1 is pure imaginary, then αβ^-1 must also be pure imaginary for t = t_0, so that (φ,φ) = α̅β +β̅α = 2(αβ̅) = 2ββ̅(αβ^-1) =0. Hence, (φ,φ) ≡ 0 on I, and (ψ̂, ψ̂) = β̅^-1 (φ,φ)β^-1≡ 0, giving us the desired conclusion. In fact, a similar result holds for curves in any 2-sphere: Let x : (I, q) → S^2 be a curve into a 2-sphere S^2. Then the Darboux transform x̂ takes values in the same 2-sphere if and only if x(t_0) ∈ S^2 for some t_0 ∈ I. 
Viewing the given 2-sphere S^2 as the intersection of 1-parameter family of 3-spheres, also called the elliptic sphere pencil of 3-spheres (see <cit.> for example), if x̂(t_0) ∈ S^2, then x̂(t_0) takes value in every 3-sphere of the elliptic sphere pencil. Thus Lemma <ref> implies that x̂ must be in the intersection of all 3-spheres in the elliptic sphere pencil, the starting 2-sphere S^2. Given a polarised curve x with its associated connection _λ, the next proposition shows that one can gauge _λ to obtain the 1-parameter family of connections associated with the Darboux transform x̂ of x. Let L, L̂ : (I, t^2/m) →ℍℙ^1 be a Darboux pair with spectral parameter μ, with respective associated connections _λ and _λ. For the splitting ℍ^2 = L ⊕L̂, denote by π and π̂ the projections onto L and L̂ respectively. Defining r_λ := π + μ - λ/μπ̂, one has that _λ = r_λ∙_λ. As L and L̂ are Darboux pair with spectral parameter μ, we have _μ–parallel φ and _μ–parallel φ̂ such that L = φ̂ℍ while L̂ = φℍ. Thus, we only need to show that _λ and r_λ∙_λ agree on φ and φ̂. Using the fact that _μφ = 0 and η̂φ = 0, we have that (r_λ∙_λ) φ = r_λ (φ + ηφλ) μ/μ - λ = -ηφμ = φ = _λφ while _μφ̂ = 0 and ηφ̂ = 0 implies that (r_λ∙_λ) φ̂ = r_λ (φ̂) = r_λ (-μη̂φ̂) = η̂φ̂ (λ - μ) = _λφ̂, giving us the desired conclusion. The gauge r_λ^-1 has a simple pole at λ = μ; however, the above Proposition <ref> shows that _λ is real–analytic across λ = μ. §.§ Monodromy of Darboux transforms The current setup of treating Darboux transforms as parallel sections of a connection allows us to reduce the problem of monodromy to finding sections with multipliers: Assuming that a polarised space curve x with associated connection _λ has period M, then its Darboux transform x̂ with spectral parameter μ has period M if and only if _μ–parallel φ is a section with multiplier, that is, φ(t + M) = φ(t) h where h ∈ℍ_*. Along with the fact that parallel sections of _μ for some fixed spectral parameter μ form a vector space, one can calculate the monodromy of Darboux transforms (see Figures <ref> and <ref>). We will illustrate this explicitly with the next example of a circle. In this example, we consider the Darboux transforms of a singly-wrapped circle: let x(t) = e^ t be polarised by q = t^2 = t^2/m as in Remark <ref>. Noting that the 𝒟_μ–parallel condition (<ref>) can be reformulated as α” = - x”β - x' β' = x”(x')^-1α' + μ/mα, β' = -μ (x^d)' α, where ' denotes the differentiation with respect to t, the solutions to the differential equation for α =: α_0 + α_1 (<ref>) can be written as α = (c_0^- α_0^- + c_0^+ α_0^+) + (c_1^- α_1^- + c_1^+ α_1^+) for some constants of integration c_0^±, c_1^±∈ℂ, where α_0^± = e^/2(-1±√(1-4μ))t, α_1^± = e^/2(1±√(1-4μ))t. Writing s := √(1-4μ), we then have β = -(x')^-1α' =: β_0 + j β_1 for β_0 = -1/2 e^- t(c_1^- (1 - s)α_1^- + c_1^+ (1 + s)α_1^+) β_1 = 1/2e^ t(c_0^- (1 + s)α_0^- + c_0^+ (1 - s)α_0^+). Noting that T = αβ^-1 = (α_0 + α_1)(β_0 + β_1)^-1 = 1/|β|^2((α_0 β_0 + α_1β_1) + (α_1 β_0 - α_0β_1)). one obtains a Darboux transforms taking values in ℍ≅ℝ^3 by choosing constants of integration so that (c_0^- c_1^- - c_0^+ c_1^+) = 0 via Lemma <ref>. Similarly, if one chooses constants of integration so that c_0^- c_1^- - c_0^+ c_1^+ = 0, then the resulting Darboux transform takes values in the –plane via Corollary <ref>. In particular, for c_0^± = 0 so that α_0 = 0 = β_1, we obtain for s = √(1 - 4μ), x̂ = x + T = (e^ t + α_1 β_0^-1) = (-e^ t( c_1^+ (1 - s) e^ s t + c_1^- (1+s))/c_1^+ (1+s) e^ s t + c_1^- (1 - s )). 
To consider the monodromy, first note that since β = -(x')^-1α', we see that φ = eα + ψβ is periodic if and only if α is. Now, α_*^±(t+ 2π) = α_*^±(t) h^± for * = 0, 1 with h^± = - e^±π√(1-4μ). Therefore, at the resonance points, i.e. h^+ = h^-, we have μ=1-k^2/4 for some k ∈ℤ, and we get a ℍℙ^1–worth of closed Darboux transforms, that is, every choice of initial conditions gives a closed Darboux transform. Restricting to those Darboux transforms in the –plane, we obtain the following explicit paremetrisations x̂= (-e^ t(c_1^+ (1 - k) e^ k t + c_1^- (1+k))/c_1^+ (1 + k) e^ k t+c_1^-(1-k) ) which are clearly 2π–periodic for k ∈ℤ. For examples of closed Darboux transforms of the circle in both 3–space and the plane, see Figure <ref>. We note here that by Corollary <ref>, every Darboux transform of a circle must be contained in some 2-sphere, determined by the circle and an initial point of the Darboux transform. §.§ Arc-length polarisations and the bicycle monodromy We now consider the integrable reduction of Darboux transformations by requiring that both curves of the Darboux pair are arc-length polarised: A curve x : (I, q) →ℝ^4 ≅ℍ is arc-length polarised if q = |x|^2. Given an arc-length polarised curve x, the condition for the Darboux transform x̂ to be again arc-length polarised is identified in <cit.> in the case of plane curves. Excluding the trivial case of curves reflected across a certain plane (see Figure <ref>), the analogous statement for space curves can be proven similarly; for the sake of completeness, we give an independent argument here. Let x, x̂ : (I, q) →ℍ be a (non-trivial) Darboux pair with spectral parameter μ, and further assume that x is arc-length polarised. Then x̂ is also arc-length polarised if and only if |x̂ - x|^2 = 1/μ > 0 at one point t_0 ∈ I. We first gather some conditions coming from the given assumption that x is arc-length polarised. For an arc-length polarised curve x so that x^d = q x^-1 = |x|^2 x^-1 = x̅, we calculate using the Riccati equation (<ref>) (|T|^2) = 2 (TT) = 2 (T(-x + μ T x̅ T)) = - 2 (Tx) + 2 μ |T|^2 (x̅ T) = 2 μ (Tx) ( |T|^2 - 1/μ). The uniqueness of the solutions to ordinary differential equations tells us that |T|^2 = 1/μ holds at one point if and only if |T|^2 ≡1/μ on I. Thus, the necessary direction is immediately justified. To see the sufficiency, assume that x̂ is arc-length polarised so that |x̂|^2 = q. Noting that the Riccati equation (<ref>) implies |x̂|^2 = μ^2 |T|^4 |x^d|^2 = μ^2 |T|^4 |x|^2 , the assumption that both x and x̂ is arc-length polarised tells us |T|^4 ≡1/μ. Hence, we only need suppose for contradiction that |T|^2 ≡ -1/μ. Then via the ordinary differential equation (<ref>), we must have (Tx) ≡ 0, implying via the Riccati equation (<ref>) that (x̂ T) = μ |T|^2 (T x̅) = - (Tx) = 0, on I, so that ⟨x̂, T ⟩ = ⟨x, T ⟩≡ 0. Therefore, Remark <ref> implies that x̂ is always parallel to x, telling us that this is the trivial case. Such integrable reduction is known as the tractrix construction or the bicycle correspondence, while the monodromy of the bicycle correspondence is called the bicycle monodromy (see, for example, <cit.>). As bicycle correspondences are special cases of Darboux transformations, the bicycle monodromy can also be considered in terms of Darboux transformations (see Figure <ref>). The closed bicycle correspondences in the plane of a circle are called circletons in <cit.>; in this example, we obtain explicit parametrisations of all circletons. 
To make direct comparison with the results in <cit.>, we consider the transformations in the plane given by {1, }≅ℂ. For an arc-length polarised circle parametrised via x(t) = e^ t, recall that the Darboux transforms x̂ with respect to spectral parameter μ are given by _μ–parallel section φ =eα+ψβ so that we have α = c^- α^- + c^+α^+ with α^± = e^/2(1±√(1-4 μ)) t for some constants of integrations c^±∈ℂ. Hence, we see that β = -1/2 e^- t(c^- (1-√(1-4 μ)) α^- + c^+ (1+√(1-4 μ)) α^+ ). Excluding the trivial case of μ = 1/4, by Lemma <ref>, we then have that x̂ is also arc-length polarised (and thus arc-length parametrised) if and only if |αβ^-1|^2 = 1/μ > 0, a condition when evaluated at t =0 becomes | c^+ + c^-|^2 = 1/4μ|c^-(1-√(1-4μ)) + c^+(1 + √(1-4μ))|^2 so that c^+ + c^- = e^τ/2√(μ)(c^-(1-√(1-4μ)) + c^+(1 + √(1-4μ))) for some τ∈ℝ. Thus, if 1- e^τ/2√(μ) (1 + √(1-4μ)) ≠ 0, that is μ≠1/4cos^2(τ), we obtain c^+ =χ c^- with χ = -2 √(μ) + e^τ(1-√(1-4μ))/2√(μ) - e^τ(1+√(1-4μ)). Otherwise, we obtain c^-=0 for any choice of c^+. Therefore, all arc-length polarised Darboux transforms x̂ of an arc-length polarised circle x are given by x̂(t) = e^ t( χ(√(1-4 μ)-1) e^√(1-4μ) t-(√(1-4μ)+1))/χ(√(1-4 μ)+1) e^√(1-4 μ) t-(√(1-4 μ)-1). We now investigate the monodromy of the Darboux transformation x̂ over a single period. Since we have explicit formulas, it is straightforward to calculate that since we have μ > 0, the Darboux transformation is a closed curve if and only if either c^- = 0 or χ = 0. To investigate the resonance points of the transformation to obtain non-trivial transforms, we first calculate as in Example <ref> that α_±(t+ 2π) = α_±(t) e^π e^±π√(1-4μ). Thus, α_± have the same multiplier if and only if √(1-4μ)= k ∈ℤ, a contradiction in the current example since now we have μ > 0. Thus, we consider the ℓ–fold cover of [0, 2π] and calculate α_±(t+ 2πℓ) = α_±(t) e^ℓπ e^±√(1-4μ)ℓπ, allowing us to deduce that, α_± have the same multiplier if and only if ℓ√(1-4μ)= k ∈ℤ. Thus, we have circletons over ℓ–fold cover of [0, 2π] if and only if μ=ℓ^2-k^2/4 ℓ^2 and ℓ > k > 0 by Lemma <ref>, recovering the result of <cit.>. For examples of circletons, see Figure <ref>. The linearisation of Riccati-type equations for finding Darboux transforms of polarised curves and the subsequent investigation of monodromy can also be theoretically carried out in the lightcone model of Möbius geometry as introduced in <cit.>. However, a quick calculation in the lightcone model using circles polarised by arc-length yields second-order ordinary differential equations with non-constant coefficients, as opposed to that with constant coefficients in the quaternionic model. Thus, the quaternionic approach enables us to efficiently obtain closed-form solutions, which is central to the explicit investigation of monodromy. § MONODROMY OF DISCRETE DARBOUX TRANSFORMATIONS Having discussed the smooth theory in detail, we now aim to investigate the monodromy of Darboux transformations of a discrete closed curve. The gauge theoretic description of Darboux transformations was central to the consideration of the monodromy in the smooth case; therefore, we briefly review the discrete gauge theory here (for a more detailed introduction, see, for example, <cit.>). Let now denote a discrete interval, simply-connected in the sense of <cit.>. A bundle on assigns a set V_i to each vertex i ∈, and we call σ : →∪ V_i a section if σ_i ∈ V_i for all i ∈. 
A discrete connection assigns a bijection r_ji: V_i → V_j on each oriented edge (ij) so that r_ij r_ji = _i, while a discrete gauge transformation 𝒢 acts on a connection r_ji via (𝒢∙ r)_ji = 𝒢_j ∘ r_ji∘𝒢_i^-1 where 𝒢_i : V_i → V_i is an automorphism defined at every vertex i ∈. A section σ is r–parallel if on any oriented edge (ij), we have r_jiσ_i = σ_j. Note that since we are working with discrete intervals, we have that every discrete connection is flat. §.§ Flat connection of a discrete polarised curve Let x : (, 1m) →ℝ^4 ≅ℍ be a discrete curve defined on a polarised domain (, 1m) for a strictly positive or negative function m defined on (unoriented) edges. Recalling the definition of exterior derivatives <cit.> for 0-forms x x_ij := x_i - x_j, the dual curve x^d : (, 1m) →ℍ of a discrete polarised curve x : (, 1m) →ℍ is defined via the discrete 1-form x^d_ij = 1/m_ijx_ij^-1. As in the smooth case, let ψ = [ x; 1 ] so that we can view L = ψℍ as a line subbundle of the trivial vector bundle ℍ^2 := I ×ℍ^2. On every edge (ij), define a linear isomorphism 𝒟^λ_ji : {i}×ℍ^2 →{j}×ℍ^2 for some λ∈ℝ via 𝒟^λ_ji := _ji + [ 0 x_ij; λx^d_ij 0 ] for the identity map _ji : {i}×ℍ^2 →{j}×ℍ^2. Then it is immediate (with some abuse of notation on the identity map) that 𝒟^λ_ij𝒟^λ_ji = 𝒟^λ_ji𝒟^λ_ij = ( 1 - λ/m_ij) , implying that 𝒟^λ does not define a discrete connection. Therefore, we instead consider the projective transformation (𝒟^λ)^P_ji : {i}×ℍℙ^1 →{j}×ℍℙ^1 induced by 𝒟^λ_ji, so that (𝒟^λ)^P_ji is a (flat) connection defined on the trivial bundle ℍℙ^1. Alternatively, one could normalise 𝒟^λ_ji so that 𝒟^λ_ji defines a discrete connection on the trivial bundle ℍ^2. The choice for the projective bundle ℍℙ^1 is made to avoid the introduction of square root terms in the discrete connection, and to keep the similarity in the expression of 𝒟_λ of the smooth case (<ref>) and 𝒟^λ in the discrete case (<ref>). Now for 𝒢 = (e ψ), we have that ^λ_ji := (𝒢∙𝒟^λ)_ji = _ji + λ[ x_j x^d_ij - x_j x^d_ij x_i; x^d_ij - x^d_ij x_i ] =: _ji + λη_ji where η_ji = [ x_j; 1 ]ℍ and η_ji = [ x_i; 1 ]ℍ. Defining (^λ)^P_ji : {i}×ℍℙ^1 →{j}×ℍℙ^1 to be the projective isomorphism induced by ^λ_ji : {i}×ℍ^2 →{j}×ℍ^2, we call the discrete connection (^λ)^P_ji the discrete (flat) connection associated to x. The central ethos of discrete differential geometry dictates that the discrete integrable structure is inherent in the transformations and permutability of the smooth integrable structure. In our case, the integrable structure of discrete polarised curves, represented by the discrete (flat) connections, should be related to the Darboux transformations of smooth polarised curves, represented by the gauge transformations introduced in Proposition <ref>. The next proposition clarifies this relationship: the corresponding points of successive Darboux transforms of a smooth curve give the discrete curve, the spectral parameters of the Darboux transformations become the discrete polarisation (see Figure <ref>), and the gauge transformation of the Darboux transformations induces the discrete (flat) connection: For the splitting ℍ^2 = L_i ⊕ L_j, denote by π the projection onto the line bundle L defined at every vertex. Then we have that ^λ_ji = π_i + m_ij - λ/m_ijπ_j. Due to the splitting, we only need to see that the two sides agree for ψ_i and ψ_j. Note that ^λ_jiψ_i = ( _ji + λη_ji) ψ_i = ψ_i since ψ_i ∈η_ji, so that they agree on ψ_i. 
On the other hand, using the fact that η_jiψ_j = ψ_j (x^d_ij x_j - x^d_ij x_i) = - ψ_j x^d_ijx_ij = - 1/m_ijψ_j, we see ^λ_jiψ_j = (_ji + λη_ji) ψ_j = ψ_j + λη_jiψ_j = ψ_j -λ/m_ijψ_j = (m_ij - λ/m_ij)ψ_j, giving us the desired conclusion. §.§ Discrete Darboux transformations via parallel sections Now we recall the definition of Darboux transformations of discrete polarised curves given in <cit.>, adapted for curves in ℝ^4 ≅ℍ: Two discrete polarised curves x, x̂ : (I, 1/m) →ℍ are called a Darboux pair with spectral parameter μ if on every edge (ij), the cross-ratios of the four points x_i, x_j, x̂_j, x̂_i, denoted by (x_i, x_j, x̂_j, x̂_i), satisfy (x_i, x_j, x̂_j, x̂_i) = (x_i - x_j) (x_j - x̂_j)^-1 (x̂_j - x̂_i) (x̂_i - x_i)^-1 = μ/m_ij. Throughout the paper, we assume for non-degeneracy that μ≠ m_ij for any edge (ij). The cross-ratios condition (<ref>) implies that the four points x_i, x_j, x̂_j, x̂_i are concircular. The cross-ratios condition (<ref>) is equivalent to the discrete Riccati equation: x̂_ij = μ/m_ij (x̂_j - x_j)(x_i - x_j)^-1(x̂_i - x_i) = μ (x̂_j - x_j)x^d_ij(x̂_i - x_i), and defining T := x̂ - x, we have that T_ij = - x_ij + μ T_jx^d_ijT_i. As in the smooth case, we show that Darboux transforms of a discrete polarised curve can be obtained via the parallel sections of its associated family of connections. Given a discrete polarised curve x : (I, 1/m) →ℍ, a discrete polarised curve x̂ : (I, 1/m) →ℍ is a Darboux transform of x with spectral parameter if and only if L̂ = ψ̂ℍ = [ x̂; 1 ]ℍ is (^μ)^P–parallel. First assume L̂ = ψ̂ℍ = [ x̂; 1 ]ℍ is (^μ)^P–parallel, that is, on any fixed edge (ij), (^μ)^P_jiL̂_i = L̂_j. Then since we have ^μ_ji = (𝒢∙𝒟^μ)_ji, there is [ a; b ] = ϕ : I →ℍ^2 such that 𝒟^μ_jiϕ_i = ϕ_j with L̂ = 𝒢ϕℍ; hence, we have x̂ = x + a b^-1. Noting that the condition (<ref>) can be restated as [ a; b ]_ij = -[ x_ij b_i; μx^d_ij a_i ], we deduce with T := a b^-1 that - x_ij + μ T_jx^d_ijT_i = a_ij b^-1_i + a_j(b^-1)_ij = a_i b^-1_i - a_j b^-1_j = T_ij. Therefore, x̂ is a solution to the discrete Riccati equation (<ref>), and thus a Darboux transform of x. Conversely, assume that x̂ solves the Riccati equation (<ref>), that is, there is T : I →ℍ satisfying the Riccati equation (<ref>) and x̂ = x + T. On any edge (ij), define b : I →ℍ recursively via the equation b_j = (1 + T^-1_j x̂_ij)b_i, and let a := T b. Then we have - μx^d_ij a_i = - μx^d_ij T_i b_i = - μ T^-1_j T_j x^d_ij T_i b_i = - T^-1_j x̂_ij b_i = b_ij, while - x_ij b_i = (T_ij - μ T_j x^d_ij T_i) b_i = a_i - T_j b_i - μ T_j x^d_ij a_i = a_i - T_j b_i + T_j b_ij = a_i - T_j b_i + T_j b_i - T_j b_j = a_ij. Therefore, ϕ := [ a; b ] satisfies (<ref>), i.e., 𝒟^μ_jiϕ_i = ϕ_j. But since we have that L̂ = ψ̂ℍ = 𝒢ϕℍ, ^μ_jiψ̂_i ℍ = 𝒢_j 𝒟^μ_ji𝒢^-1_i 𝒢_i ϕ_i ℍ = 𝒢_j 𝒟^μ_jiϕ_i ℍ = 𝒢_j ϕ_j ℍ = ψ̂_j ℍ, telling us that L̂ is (^μ)^P–parallel. Similar to the smooth case, the choice of initial condition completely determines whether the Darboux transform x̂ of a discrete curve in some 3-sphere S^3 again takes values in the same S^3: Given a discrete polarised curve x : (I, 1/m) → S^3 taking values in some 3-sphere S^3 and (^μ)^P–parallel L̂ = ψ̂ℍ = [ x̂; 1 ]ℍ, the Darboux transform x̂ takes values in S^3 if and only if x̂_i ∈ S^3 for some i ∈ I. As in the smooth case, we will apply a suitable stereographic projection and prove the statement for curves in ℍ≅ℝ^3. Letting ϕ = [ a; b ] so that L̂ = 𝒢ϕℍ with 𝒟^μ_jiϕ_i = ϕ_j, define φ := 𝒢ϕ = ea + ψ b so that ^μ_jiφ_i = φ_j. 
Then we can directly verify that η_jiφ_i = ψ_j x_ij^d a_i, while using the hermitian form (<ref>), (ψ_j, φ_i) = (ψ_j, ea_i + ψ_i b_i) = a_i + x_ij b_i. Thus, on any edge (ij), using that (ψ_i, ψ_i) = 0, we have (φ_j, φ_j) = (^μ_jiφ_i, ^μ_jiφ_i) = (φ_i, φ_i) + μ(x_ij^d a_i( ψ_j , φ_i) + (φ_i, ψ_j )x_ij^d a_i) = (φ_i, φ_i) - μ/m_ij(a_i b_i + b_i a_i) = (1 - μ/m_ij) (φ_i, φ_i). Thus the non-degeneracy condition iterated in Remark <ref> tells us that (φ, φ) ≡ 0 on I if and only if it vanishes on one vertex i ∈ I. Finally, the relation (ψ̂, ψ̂) = b^-1 (φ, φ) b^-1 allows us to obtain the desired conclusion. In fact, we also obtain the discrete counterpart of Corollary <ref> on curves in the 2-sphere, where the proof is verbatim: Let x : (I, 1/m) → S^2 be a curve into some 2-sphere S^2. Then the Darboux transform x̂ takes values in the same 2-sphere if and only if x̂_i ∈ S^2 for some i ∈ I. §.§ Discrete monodromy The discrete monodromy was investigated in <cit.> (see also <cit.>) for the case of Darboux transformations of planar curves with normalised polarisation so that m_ij≡ 1. In this section, we obtain a generalisation of this result; in fact, the result is immediate due to the parallel sections formulation of Darboux transformations. As in the smooth case, we call a (^μ)^P–parallel section L̂ a global parallel section if L̂_n = L̂_n + M for all n ∈ I for some fixed M ∈ℤ, that is, ϕ = [ a; b ] is a section with a multiplier. Supposing that L̂ is a Darboux transform of L, that is, L̂ is (^μ)^P–parallel, we have that L̂_n + M = ψ̂_n + Mℍ = (∏_κ = n^n+M - 1^μ_( κ , κ+1)) ψ̂_nℍ = L̂_n. Therefore, denoting the monodromy matrix as ℳ_r, μ := ∏_κ = n^n+M - 1^μ_(κ,κ+1), we see that finding sections with multipliers amounts to finding the eigenvectors of ℳ_r, μ (see, for example, Figures <ref> and <ref>). We illustrate this with the next example: In this example, we offer an explicit discrete parametrisation of all planar Darboux transforms of the discrete polarised circle. Suppose x: (I, 1/m) →ℍ given by x_n = e^2 π/M n for some M ∈ℕ is polarised by m_ij = | 1 - e^2π/M|^-2 for any edge (ij). To calculate the Darboux transforms, we will find ϕ = [ a; b ] satisfying the difference relations (<ref>). To do this, first we eliminate the b from the pair of difference relations by noting that on any three consecutive vertices (ijk), we have b_ij = b_i - b_j = -(x_ij)^-1a_ij + (x_jk)^-1a_jk so that we obtain a second order linear recurrence relation on a: μx^d_ij a_i -(x_ij)^-1a_ij + (x_jk)^-1a_jk = 0. Conversely, every solution a to the recurrence relation (<ref>), and b defined via (<ref>) gives a Darboux transform of x. In the specific case of discrete circles, the recurrence relation (<ref>) reads e^2π/M a_k - (1 + e^2π/M)a_j + (1 - μ̂| 1 - e^2π/M|^2) a_i = 0. Writing a = a_0 + a_1 for some complex valued discrete functions a_0 and a_1 on I, we then have a = (c_0^- a_0^- + c_0^+ a_0^+) + (c_1^- a_1^- + c_1^+ a_1^+) for some constants of integration c_0^±, c_1^±∈ℂ where a_0,n^± = (1/2(e^-2π/M (1 ∓ s) + (1 ± s)))^n a_1,n^± = (1/2(e^2π/M (1 ± s) + (1 ∓ s)))^n, where s = √(1 - 4μ) as before. Then using b_i = -(x_ij)^-1a_ij, we find that b =: b_0 + b_1 for b_0,n = -1/2 e^-2π/Mn(c_1^- (1- s) a_1,n^- + c_1^+ (1 + s) a_1,n^+) b_1,n = 1/2 e^2π/Mn(c_0^- (1+ s) a_0,n^- + c_0^+ (1 - s) a_0,n^+). 
Therefore, as in the smooth case, Lemma <ref> ensures that the Darboux transform takes values in ℍ≅ℝ^3 by choosing the constants so that (c_0^- c_1^- - c_0^+ c_1^+) = 0, while Corollary <ref> implies that the Darboux transform takes values in the plane spanned by 1 and i by choosing the constants so that c_0^- c_1^- - c_0^+ c_1^+ = 0. For example, letting c_0^± = 0 so that a_0 ≡ 0 ≡ b_1, we have that x̂_n = -e^2πi n/M(c_1^+ (1 - s)(e^2πi/M(1 + s) + (1 - s))^n + c_1^- (1+s)(e^2πi/M(1 - s) + (1 + s))^n )/(c_1^+ (1 + s)(e^2πi/M(1 + s) + (1 - s))^n + c_1^- (1-s)(e^2πi/M(1 - s) + (1 + s))^n ). To consider the monodromy, note that by the difference relations (<ref>) on a and b, we see that φ is periodic if and only if a is. Since the period of the discrete circle is M, we calculate that (a_*^±)_n+M = (a_*^±)_n h^± for * = 0, 1 where h^± = (a_0^±)_M = (a_1^±)_M. Therefore, the resonance points occur when h^+ = h^-, that is, ((e^2πi/M (1 + s) + (1 - s))/(e^2πi/M (1 - s) + (1 + s)))^M = e^2πi k for some k ∈ℤ. Hence, we have resonance points when μ = 1/4(1 - cot^2(π/M) tan^2(k π/M)) for k ∈ℤ. For spatial and planar examples of closed Darboux transforms of the discrete circle, see Figures <ref> and <ref>, respectively. We remark here that, similar to the smooth case, Corollary <ref> implies every Darboux transform of the discrete circle must be contained in some 2-sphere, determined by the circle and an initial point of the Darboux transform. §.§ Discrete bicycle correspondence and bicycle monodromy Analogous to the smooth case, the discrete case also allows for an integrable reduction to the case when both of the curves forming the Darboux pair are arc-length polarised. To see this, we first recall the definition of a discrete arc-length polarisation <cit.>. A discrete polarised curve x : (I, 1/m) →ℍ is arc-length polarised if on any edge (ij), |x_ij|^2 = 1/m_ij. As in the smooth case, reflections of certain discrete arc-length polarised curves are Darboux transforms (see Figure <ref>); treating these cases as trivial cases, we obtain a characterisation of non-trivial Darboux transformations keeping the arc-length polarisation: Let x : (I, 1/m) →ℍ be arc-length polarised. Then a (non-trivial) Darboux transform x̂: (I, 1/m) →ℍ with spectral parameter μ is also arc-length polarised if and only if |x̂_i - x_i|^2 = 1/μ on some vertex i ∈ I. As in the statement of the theorem, let x be a discrete arc-length polarised curve. To show the necessary direction, assume that |T_i|^2 = |x̂_i - x_i|^2 = 1/μ for some i ∈ I. From the discrete Riccati equation (<ref>), we can calculate that on the edge (ij) x̂_j = (μ x_j x̄_ijT_i + x̂_i)(1 + μx̄_ijT_i)^-1, so that x̂_ij = μ(x̂_i - x_j)x̄_ijT_i (1 + μx̄_ijT_i)^-1. On the other hand, since x is arc-length polarised, |1 + μx̄_ijT_i|^2 = 1 + μ(T̄_i x_ij + x̄_ijT_i) + μ/m_ij = μ(T_i T̄_i + T̄_i x_ij + x̄_ijT_i + x̄_ijx_ij) = μ |T_i + x_ij|^2 = μ |x̂_i - x_j|^2. Therefore, we have that |x̂_ij|^2 = |μ(x̂_i - x_j)x̄_ijT_i|^2 |1 + μx̄_ijT_i|^-2 = μ|x̂_i - x_j|^2 |x_ij|^2/μ|x̂_i - x_j|^2 = 1/m_ij. Then, the discrete Riccati equation (<ref>) allows us to see that |T_j|^2 = 1/μ. Propagating the above proof for any edge (ij), we have that x̂ is discrete arc-length polarised. To see the sufficient direction, assume that |x_ij|^2 = |x̂_ij|^2 = 1/m_ij; hence, the cross-ratios condition (<ref>) implies |T_i|^2 |T_j|^2 = 1/μ^2.
On the other hand, by Remark <ref>, we have that the circular quadrilateral formed by x_i, x_j, x̂_j, x̂_i is an isosceles trapezoid and hence symmetric; therefore, we must have either |T_i|^2 = |T_j|^2 or |x̂_i - x_j|^2 = |x̂_j - x_i|^2. If |T_i|^2 ≠ |T_j|^2 on any one edge (ij), then by (<ref>) we must have that |T_i|^2 ≠ |T_j|^2 on every edge (ij). Therefore, on every edge, we have |x̂_i - x_j|^2 = |x̂_j - x_i|^2 so that T_i ∥ T_j, and the symmetry of quadrilaterals then forces the discrete curve x̂ to be a reflection of x, allowing us to exclude this case. Hence, we see that T_i and T_j are not parallel on every edge (ij) so that the circular quadrilateral is non-embedded. Then the cross-ratios condition (<ref>) reads (x_i, x_j, x̂_j, x̂_i) = μ/m_ij > 0. Now since 1/m is an arc-length polarisation, we have that m_ij > 0 on every edge (ij) so that μ > 0. Thus, we conclude that |T_i|^2 = |T_j|^2 = 1/μ on every vertex i ∈ I. As in the smooth case, the discrete Darboux transformation keeping the arc-length polarisation is known as a discrete bicycle correspondence <cit.>. Applying the monodromy problem, we can obtain the discrete bicycle monodromy with examples given in Figure <ref>. In this example, we recover the discrete analogue of the smooth case in Example <ref>: Consider the planar bicycle correspondences of the arc-length polarised discrete circle as in Example <ref>, that is x_n = e^2πi n/M∈ℂ with m_ij = | 1 - e^2πi/M|^-2. Then the recurrence relation on the complex–valued function a (<ref>) becomes e^-2πi/M a_k - (1 + e^-2πi/M) a_j + (1 - μ̂ | 1 - e^2πi/M|^2) a_i = 0 on any three consecutive vertices (ijk). Therefore, a = c^- a^- + c^+ a^+ where a_n^± = (1/2(e^2πi/M (1 ± s) + (1 ∓ s)))^n so that b_i = -(x_ij)^-1a_ij implies b_n = -1/2 e^-2πi n/M(c^- (1- s) a^-_n + c^+ (1 + s) a^+_n) for s = √(1 - 4μ). To find the Darboux transforms x̂ that are also discrete arc-length polarised, we use Theorem <ref> and require that |ab^-1|^2 = 1/μ at n = 0, that is, |c^- + c^+|^2 = 1/(4μ) |c^- (1-s) + c^+ (1+s)|^2 implying that c^- + c^+ = e^iτ/(2√(μ)) (c^- (1-s) + c^+ (1+s)) for some τ∈ℝ. Therefore, if μ≠1/4 cos^2 τ, we have c^+ = χ c^- with χ = (-2√(μ) + e^iτ(1 - s))/(2√(μ) - e^iτ(1 + s)); otherwise, we have c^- = 0. Thus, all the planar bicycle transformations of the discrete circle are given by x̂_n = -e^2πi n/M(χ (1 - s)(e^2πi/M(1 + s) + (1 - s))^n + (1+s)(e^2πi/M(1 - s) + (1 + s))^n )/(χ (1 + s)(e^2πi/M(1 + s) + (1 - s))^n + (1-s)(e^2πi/M(1 - s) + (1 + s))^n). To find the discrete circletons, i.e. closed bicycle correspondences of the discrete circle, we calculate the monodromy as before and require that a is periodic over the ℓ–fold cover of M. Noting that a^±_n+ℓ M = a^±_n h^± where h^± = a^±_ℓ M, we see that we have resonance points when h^+ = h^-, i.e. ((e^2πi/M (1 + s) + (1 - s))/(e^2πi/M (1 - s) + (1 + s)))^ℓ M = e^2πi k for some k ∈ℤ. Thus, for μ = 1/4(1 - cot^2(π/M) tan^2(k π/ℓ M)) we obtain discrete circletons. Examples of discrete circletons with M = 15 were given in Figure <ref>; for other values of M, see Figures <ref> and <ref>. § SUMMARY For a smooth integrable system, the introduction of the spectral parameter allows one to consider an infinite system of linear partial differential equations to solve a partial differential equation of higher order.
It is this point of view which yields efficient discrete models for higher order partial differential equations which form an integrable system: rather than solving differential equations by typical numerical methods, e.g., the Runge–Kutta method, discretising the integrable system structure gives a solution to the equation by recurrence equations, which can be easily implemented and avoid singularities. In this paper we were concerned with periodic discrete solutions of an integrable system. Writing the system in terms of an associated family of connections, new solutions to the underlying partial differential equations are obtained from parallel sections of the family of connections. The question of finding periodic smooth solutions can then be approached by finding parallel sections with multipliers, Section <ref>. As an example of this strategy, we provided a new computational model for periodic, discrete (and smooth) solutions given by the discrete Darboux transformation for the special case of polarised curves, Section <ref>. The integrable reductions of this system include the bicycle correspondences, linking our results to the smoke ring flow, the filament equations and the modified Korteweg-de Vries equations, modeling shallow water waves. In particular, we provided in Section <ref> all discrete, periodic planar and spatial curves which are iso–spectral to the circle as well as all circletons which preserve arc-length polarisations. The implementation with quaternions is particularly relevant for obtaining explicit solutions to the recurrence equations in the case of discrete circles. Our results serve as templates for generalisations to other integrable systems, to provide computational models for periodic solutions of problems in physics, chemistry and biology, for example in modelling shallow water waves, fiber optics applications, and low-frequency collective motion in proteins and DNA. Acknowledgements. We are thankful to the referee for many indispensable comments. We gratefully acknowledge the support from the Leverhulme Trust Network Grant IN-2016-019 and the JSPS Research Fellowships for Young Scientist 21K13799. bobenko_compact_2021article author=Bobenko, Alexander I., author=Hoffmann, Tim, author=Sageman-Furnas, Andrew O., title=Compact Bonnet pairs: isometric tori with the same curvatures, date=2021, eprint=2110.06335, url=http://arxiv.org/abs/2110.06335, bobenko_discrete_1996-1article author=Bobenko, Alexander I., author=Pinkall, Ulrich, title=Discrete isothermic surfaces, date=1996, journal=J. Reine Angew. Math., volume=475, pages=187208, review=1396732, doi=10.1515/crll.1996.475.187, bobenko_discrete_1996article author=Bobenko, Alexander I., author=Pinkall, Ulrich, title=Discrete surfaces with constant negative Gaussian curvature and the Hirota equation, date=1996, journal=J. Differential Geom., volume=43, number=3, pages=527611, review=1412677, doi=10.4310/jdg/1214458324, bobenko_discrete_2008book author=Bobenko, Alexander I., author=Suris, Yuri B., title=Discrete differential geometry, series=Graduate Studies in Mathematics, publisher=American Mathematical Society, address=Providence, RI, date=2008, number=98, ISBN=978-0-8218-4700-8, review=2467378, bor_tire_2020article author=Bor, Gil, author=Levi, Mark, author=Perline, Ron, author=Tabachnikov, Sergei, title=Tire tracks and integrable curve evolution, date=2020, journal=Int. Math. Res. Not. 
IMRN, number=9, pages=26982768, review=4095423, doi=10.1093/imrn/rny087, bucking_constructing_2016incollection author=Bücking, Ulrike, author=Matthes, Daniel, title=Constructing solutions to the Björling problem for isothermic surfaces by structure preserving discretization, date=2016, book= title=Advances in discrete differential geometry, editor=Bobenko, Alexander I., publisher=Springer, address=Berlin,, pages=309345, review=3587191, doi=10.1007/978-3-662-50447-5_10, burstall_notes_2017incollection author=Burstall, Francis E., title=Notes on transformations in integrable geometry, date=2017, book=title=Special metrics and group actions in geometry, editor=Chiossi, Simon G., editor=Fino, Anna, editor=Musso, Emilio, editor=Podestà, Fabio, editor=Vezzoni, Luigi, series=Springer INdAM Ser., volume=23, publisher=Springer, address=Cham, pages=5980, doi=10.1007/978-3-319-67519-0_3, review=3751962 burstall_conformal_2010article author=Burstall, Francis E., author=Calderbank, David M. J., title=Conformal submanifold geometry I-III, date=2010, eprint=1006.5700, url=http://arxiv.org/abs/1006.5700, burstall_discrete_2020article author=Burstall, Francis E., author=Cho, Joseph, author=Hertrich-Jeromin, Udo, author=Pember, Mason, author=Rossman, Wayne, title=Discrete Ω-nets and Guichard nets via discrete Koenigs nets, date=2023, journal=Proc. Lond. Math. Soc. (3), volume=126, number=2, pages=790–836, review=4550152, doi=10.1112/plms.12499, burstall_isothermic_2011article author=Burstall, Francis E., author=Donaldson, Neil M., author=Pedit, Franz, author=Pinkall, Ulrich, title=Isothermic submanifolds of symmetric R-spaces, date=2011, journal=J. Reine Angew. Math., volume=660, pages=191–243, review=2855825, doi=10.1515/crelle.2011.075, burstall_conformal_2002book author=Burstall, Francis E., author=Ferus, Dirk, author=Leschke, Katrin, author=Pedit, Franz, author=Pinkall, Ulrich, title=Conformal geometry of surfaces in S^4 and quaternions, series=Lecture Notes in Mathematics, publisher=Springer-Verlag, address=Berlin, date=2002, volume=1772, ISBN=978-3-540-43008-7, review=1887131, doi=10.1007/b82935, burstall_semi-discrete_2016article author=Burstall, Francis E., author=Hertrich-Jeromin, Udo, author=Müller, Christian, author=Rossman, Wayne, title=Semi-discrete isothermic surfaces, date=2016, journal=Geom. Dedicata, volume=183, pages=4358, review=3523116, doi=10.1007/s10711-016-0143-7, burstall_discrete_2018article author=Burstall, Francis E., author=Hertrich-Jeromin, Udo, author=Rossman, Wayne, title=Discrete linear Weingarten surfaces, date=2018, journal=Nagoya Math. J., volume=231, pages=5588, review=3845588, doi=10.1017/nmj.2017.11, burstall_discrete_2014incollection author=Burstall, Francis E., author=Hertrich-Jeromin, Udo, author=Rossman, Wayne, author=Santos, Susana D., title=Discrete surfaces of constant mean curvature, date=2014, book=title=Development in differential geometry of submanifolds, editor=Kobayashi, Shim-Pei, series=RIMS Kôkyûroku, volume=1880, publisher=Res. Inst. Math. Sci. (RIMS), address=Kyoto,, pages=133179, burstall_discrete_2015article author=Burstall, Francis E., author=Hertrich-Jeromin, Udo, author=Rossman, Wayne, author=Santos, Susana D., title=Discrete special isothermic surfaces, date=2015, journal=Geom. 
Dedicata, volume=174, pages=111, review=3303037, doi=10.1007/s10711-014-0001-4, cho_infinitesimal_2020article author=Cho, Joseph, author=Rossman, Wayne, author=Seno, Tomoya, title=Infinitesimal Darboux transformation and semi-discrete mKdV equation, journal=Nonlinearity, volume=35, number=4, pages=2134 2146, year=2022, review=4407235, doi=10.1088/1361-6544/ac591f cho_discrete_2021-1article author=Cho, Joseph, author=Rossman, Wayne, author=Seno, Tomoya, title=Discrete mKdV equation via Darboux transformation, date=2021, journal=Math. Phys. Anal. Geom., volume=24, number=3, pages=25:111, review=4287306, doi=10.1007/s11040-021-09398-y, hasimoto_soliton_1972article author=Hasimoto, Hidenori, title=A soliton on a vortex filament, date=1972, journal=J. Fluid Mech., volume=51, number=3, pages=477485, review=3363420, doi=10.1017/S0022112072002307, hertrich-jeromin_introduction_2003book author=Hertrich-Jeromin, Udo, title=Introduction to Möbius differential geometry, series=London Mathematical Society Lecture Note Series, publisher=Cambridge University Press, address=Cambridge, date=2003, volume=300, review=2004958, hertrich-jeromin_mobius_2001article author=Hertrich-Jeromin, Udo, author=Musso, Emilio, author=Nicolodi, Lorenzo, title=Möbius geometry of surfaces of constant mean curvature 1 in hyperbolic space, date=2001, journal=Ann. Global Anal. Geom., volume=19, number=2, pages=185205, review=1826401, doi=10.1023/A:1010738712475, hertrich-jeromin_remarks_1997article author=Hertrich-Jeromin, Udo, author=Pedit, Franz, title=Remarks on the Darboux transform of isothermic surfaces, date=1997, journal=Doc. Math., volume=2, pages=313333, review=1487467, hoffmann_discrete_2008incollection author=Hoffmann, Tim, title=Discrete Hashimoto surfaces and a doubly discrete smoke-ring flow, date=2008, book=title=Discrete differential geometry, editor=Bobenko, Alexander I., editor=Schröder, Peter, editor=Sullivan, John M., editor=Ziegler, Günter M., series=Oberwolfach Semin., volume=38, publisher=Birkhäuser, address=Basel, pages=95115, review=2405662, doi=10.1007/978-3-7643-8621-4_5, kilian_dressing_2015article author=Kilian, Martin, title=Dressing curves, date=2015, eprint=1508.00378, url=http://arxiv.org/abs/1508.00378 levi_backlund_1980article author=Levi, D., author=Benguria, R., title=Bäcklund transformations and nonlinear differential difference equations, date=1980, journal=Proc. Nat. Acad. Sci. U.S.A., volume=77, number=9, part 1, pages=50255027, review=587276, doi=10.1073/pnas.77.9.5025, matthaus_discrete_2003thesis author=Matthäus, Lars, title=Discrete curves with low spectral genus, type=Diplomarbeit, date=2003, organization=Technische Universität Berlin, muller_semi-discrete_2013article author=Müller, Christian, author=Wallner, Johannes, title=Semi-discrete isothermic surfaces, date=2013, journal=Results Math., volume=63, number=3-4, pages=13951407, review=3057376, doi=10.1007/s00025-012-0292-4, musso_laguerre_1999article author=Musso, Emilio, author=Nicolodi, Lorenzo, title=Laguerre geometry of surfaces with plane lines of curvature, date=1999, journal=Abh. Math. Sem. Univ. Hamburg, volume=69, pages=123138, review=1722926, doi=10.1007/BF02940867, pember_discrete_2021article author=Pember, Mason, author=Polly, Denis, author=Yasumoto, Masashi, title=Discrete Weierstrass-type representations, journal = To appear in Discrete Comput. 
Geom., eprint=2105.06774, url=http://arxiv.org/abs/2105.06774, pinkall_new_2007article author=Pinkall, Ulrich, author=Springborn, Boris, author=Weißmann, Steffen, title=A new doubly discrete analogue of smoke ring flow and the real time simulation of fluid flow, date=2007, journal=J. Phys. A, volume=40, number=42, pages=1256312576, review=2392889, doi=10.1088/1751-8113/40/42/S04, pottmann_architectural_2007book author=Pottmann, Helmut, author=Asperl, Andreas, author=Hofer, Michael, author=Kilian, Axel, title=Architectural geometry, publisher=Bentley Institute Press, address=Exton, PA, date=2007, ISBN=978-1-934493-04-5, quispel_linear_1984article author=Quispel, G. R. W., author=Nijhoff, F. W., author=Capel, H. W., author=van der Linden, J., title=Linear integral equations and nonlinear difference-difference equations, date=1984, journal=Phys. A, volume=125, number=2-3, pages=344380, review=761644, doi=10.1016/0378-4371(84)90059-1, tabachnikov_bicycle_2017article author=Tabachnikov, Serge, title=On the bicycle transformation and the filament equation: results and conjectures, date=2017, journal=J. Geom. Phys., volume=115, pages=116123, review=3623617, doi=10.1016/j.geomphys.2016.05.013, tabachnikov_discrete_2013article author=Tabachnikov, Serge, author=Tsukerman, E., title=On the discrete bicycle transformation, date=2013, journal=Publ. Mat. Urug., volume=14, pages=201219, review=3235356,
http://arxiv.org/abs/2307.01086v1
20230703150743
A collider test of nano-Hertz gravitational waves from pulsar timing arrays
[ "Shao-Ping Li", "Ke-Pan Xie" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
http://arxiv.org/abs/2307.04814v1
20230701155842
Controlling the electron-phonon heat exchange in a metallic film by its position in a dielectric slab
[ "D. V. Anghel", "M. Dolineanu", "J. Bergli", "I. J. Maasilta" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
Controlling the electron-phonon heat exchange in a metallic film by its position in a dielectric slab

D. V. Anghel (Institutul National de Cercetare-Dezvoltare pentru Fizica si Inginerie Nucleara Horia Hulubei, 077125 Magurele, Ilfov, Romania; Research Institute of the University of Bucharest (ICUB), 050663 Bucharest, Romania; BLTP, JINR, Dubna, Moscow region, 141980, Russia; dragos@theory.nipne.ro), M. Dolineanu (Institutul National de Cercetare-Dezvoltare pentru Fizica si Inginerie Nucleara Horia Hulubei, 077125 Magurele, Ilfov, Romania; Doctoral School of Physics, University of Bucharest, Faculty of Physics, 077125 Magurele, Ilfov, Romania; mircea.dolineanu@theory.nipne.ro), J. Bergli (Department of Physics, University of Oslo, PO Box 1048, Blindern, 0316 Oslo, Norway; joakim.bergli@fys.uio.no), and I. J. Maasilta (Nanoscience Center, Department of Physics, University of Jyvaskyla, FI-40014 Jyväskylä, Finland; ilari.j.maasilta@jyu.fi)

August 1, 2023

We theoretically study the heat flux between electrons and phonons in a thin metallic film embedded in a suspended dielectric slab (called a membrane, in accordance with the established nomenclature), forming a layered structure. The thickness of the membrane is much smaller than the other two dimensions and, in the considered temperature range, is comparable to the dominant phonon wavelength. The thickness of the metallic layer is an order of magnitude smaller than the thickness of the membrane. While the dependence of the heat exchange on the thicknesses of the film and of the membrane has been studied before, it is not yet known how this depends on the position of the film inside the membrane. Here we show that the position strongly influences the heat exchange. If we denote by T_e the effective temperature of the electrons in the metal and by T_ph the effective temperature of the phonons (assumed to be uniform in the entire system), then we may write in general the heat power as P ≡ P^(0)(T_e) - P^(0)(T_ph), where P^(0)(T) ≡ P_s^(0)(T) + P_a^(0)(T), with P_s^(0)(T) and P_a^(0)(T) being the contributions of the symmetric and antisymmetric Lamb modes, respectively. In the low temperature limit, we may write P_s^(0)(T) ≡ C_s T^4 and P_a^(0)(T) ≡ C_a T^3.5, where C_s is independent of the position of the film inside the membrane, whereas C_a increases with the distance between the mid-plane of the film and the mid-plane of the membrane, being zero when the film is at the center of the membrane. Our examples show that by changing the position of the film inside the membrane one may change the electron-phonon heat power by orders of magnitude, depending on the dimensions and the temperature range.
§ INTRODUCTION Nanosystems are of great importance for current technological applications. Therefore, understanding their physical properties is necessary for both basic science and technology development. One such property is the electron-phonon coupling and heat exchange in nanoscopic systems consisting of metallic films in contact with dielectric suspended membranes, since structures like these appear, for example, in ultrasensitive detectors <cit.> and microrefrigerators <cit.>. At low temperatures, the electron-phonon heat exchange becomes weak enough that one can consider the electrons and the acoustic phonons in separate thermal equilibrium, at effective temperatures T_e and T_ph, respectively. Then, the heat exchange may be written in general as P(T_e, T_ph) ≡ P^(0)(T_e) - P^(1)(T_ph), but the thermal equilibrium condition P(T, T) = 0 implies that P^(0)(T) = P^(1)(T) is the same function. If the “exponent” x ≡ dln[P^(0)(T)]/dln T = dln[P^(1)(T)]/dln T is constant over a wide temperature range (orders of magnitude), then one may use the approximation P ∝ T_e^x - T_ph^x. For example, in clean three-dimensional (3D) bulk systems, where the electron mean free path is longer than the thermally dominant phonon wavelength, x = 5 <cit.>, whereas for (clean, non-disordered) two-dimensional (2D) phonons in graphene, x=4 <cit.>, and for a quasi one-dimensional (1D) phonon system x=3 <cit.> (clean limit). Thus, it would at first seem that x = s+2, where s is the dimensionality of the phonon gas. However, the above statement is not generally true, as was shown in previous theoretical studies of the electron-phonon heat exchange in thin quasi-2D suspended layered nano-structures <cit.>. In those studies, the structure consists of a metallic film, of a thickness of the order of 10 nm, on top of a dielectric membrane, of a thickness of the order of 100 nm. Then, in the low temperature limit (which, for the parameters mentioned above, is of the order of 100 mK or below), the heat power flow between electrons and phonons obeys the simple power-law dependence on temperature P(T_e, T_ph) ∝ T_e^3.5 - T_ph^3.5 – so, x = 3.5 and s=1.5 <cit.>. But as the temperature increases, x starts to vary in a wide range, from 3.5 reaching approximately 4.7 at around 0.5 K <cit.>. This is close to the experimentally observed value of x ∼ 4.5, measured for both SiN/Cu <cit.> and SiO_2/Au <cit.> suspended membrane devices. In contrast to previous work <cit.>, where the metallic film was located on top of a dielectric membrane, here we study the effect of the position of the metallic film inside the dielectric membrane on the electron-phonon heat exchange. We observe that while in the high temperature range (roughly above 1 K for the parameters used here) the heat exchange is almost independent of the position of the metallic film, in the low temperature sub-Kelvin limit the heat power flow decreases as the metal film is placed closer and closer to the center of the membrane, by up to one order of magnitude at 10 mK. This provides an additional method to control the electron-phonon heat exchange, which is an important characteristic for the responsivity and noise of bolometric detectors and the effectiveness of microrefrigerators, without changing the materials or the thickness of the layers. The article is organized as follows: in Section <ref> we describe the system and the models used, in Section <ref> we present the numerical results, and in Section <ref> we draw the conclusions.
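To make the role of the exponent x concrete, the short Python sketch below evaluates the logarithmic derivative x(T) = d ln P^(0)(T)/d ln T for a toy low-temperature model P^(0)(T) = C_s T^4 + C_a T^3.5 of the type obtained later in the paper; the coefficients C_s and C_a used here are arbitrary placeholders (they are not the values computed in Section <ref>), chosen only to show how x interpolates between 3.5 and 4 as the relative weight of the two contributions changes.

# Illustrative sketch with made-up coefficients (not the paper's numerics):
# effective exponent x(T) = d ln P0 / d ln T for P0(T) = C_s*T**4 + C_a*T**3.5.
import math

def P0(T, C_s=1.0, C_a=0.1):          # placeholder coefficients (assumption)
    return C_s * T**4 + C_a * T**3.5

def effective_exponent(T, rel=1e-4):
    """Logarithmic derivative d ln P0 / d ln T via a central difference."""
    lo, hi = T * (1.0 - rel), T * (1.0 + rel)
    return (math.log(P0(hi)) - math.log(P0(lo))) / (math.log(hi) - math.log(lo))

if __name__ == "__main__":
    for T in (0.01, 0.03, 0.1, 0.3):   # temperatures in kelvin
        print(f"T = {T:5.2f} K   x = {effective_exponent(T):.3f}")
    # With C_a -> 0 (as for a film in the middle of the membrane at low T) the
    # exponent tends to 4, while a dominant C_a pushes it towards 3.5.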
§ METHODS §.§ System description The system, of total dimensions L_x × L_y × L_z, is schematically represented in Fig. <ref> and consists of a metallic layer (red) embedded within a suspended dielectric slab. We consider that L_x, L_y ≫ L_z and L_z may be comparable to the dominant phonon wavelength in the temperature range of interest. The metallic layer has the dimensions L_x × L_y × d, where d = d_2-d_1 is the metal layer thickness, and -L_z/2 ≤ d_1 < d_2 ≤ L_z/2. Although the following equations are general, we consider in the numerical examples that L_z is 100 nm and d is 10 nm, which are dimension scales relevant for real devices. We assume that the electron mean free path is longer than d <cit.> and that the phonon mean free path is longer than L_z, and assume smooth interfaces and surfaces without diffusive scattering. In the x and y directions the electron wavefunction ψ is periodic (free motion), whereas at z=d_1, d_2 we assume Dirichlet boundary conditions (ψ = 0)–this is a good approximation for metals which have a tall potential barrier at the surface so that the electron wavefunction does not extend much outside of the metallic layer. Then, we can write the electron wavefunction as ψ __∥,n_z(,t) ≡ ψ __∥,k_z(,t) = ϕ _k_z(z)e^i(_∥_∥-ϵ__∥,nt/ħ )/√(A), where ϕ _k_z(z) = {[ √(2/d)sin[ ( z - d_1) k_z] , if z ∈ [d_1, d_2] ,; 0 , if z ∉ [d_1, d_2] , ]. where _∥ and k_z are the wave vector components parallel and perpendicular to the metal film, respectively. The boundary conditions quantize the components of the wavevector to k_x = 2π n_x/L_x, k_y = 2π n_y/L_y, and k_z = π n_z/L, where n_x,y∈ (integer), whereas n_z ∈ (positive integer). These quantization conditions induce a constant (but non-isotropic) density of states (DOS) in the space, namely, σ_≡σ_k_xσ_k_yσ_k_z, where σ_k_x≡ L_x/(2π), σ_k_y≡ L_y/(2π), and σ_k_z≡ d/π. Similarly, we denote σ__∥≡σ_k_xσ_k_y and since σ_k_x, σ_k_y≫σ_k_z, we shall say that the states of constant k_z form quasi-continuous 2D conduction bands, with a band index n_z. If we denote by m_e the electron's effective mass, then its energy is ϵ_𝐤= ħ ^2k^2/2m_e = ħ^2k_∥^2/2m_e + ħ ^2k_z^2/2m_e≡ϵ _k_∥,k_z≡ϵ _k_∥,n_z, where k_∥≡ |_∥|. The minimum energy in the band n_z is ϵ _k_∥ = 0,n_z = ħ ^2k_z^2/(2m_e) = (ħπ n_z)^2/(2m_e d^2) and the difference in energy between two consecutive bands, at the same k_∥, is Δϵ _k_∥,n_z≡ϵ _k_∥,n_z+1 - ϵ _k_∥,n_z = ħ^2 π^2 (2n_z + 1) /(2m_e d^2). We denote the Fermi energy by ϵ_F and define n_F ≡⌊√(2m_eϵ_F)/πħ d ⌋ , where ⌊ x⌋ is the biggest integer smaller or equal to x. Then, ϵ _k_∥ = 0,n_z≤ϵ_F if and only if n_z≤ n_F. Therefore, at T ≪Δϵ _k_∥,n_F/k_B (k_B is the Boltzmann constant), only the bands of n_z ≤ n_F will be populated, plus, eventually, the band n_z=n_F+1, if ϵ_F is close enough to ϵ _0,n_F+1. To describe the phonons in our system, we assume that the whole slab (from z = -L_z/2 to L_z/2) may be treated as a homogeneous isotropic elastic material <cit.>. Although a real slab would consist of different materials with differing elastic properties, our simplifying assumption is accurate enough to emphasize the qualitative features of the electron-phonon heat exchange we investigate. The phonon modes in slabs have been studied before <cit.> and they differ from the phonon modes in bulk materials. There are three types or polarizations: horizontal shear (h), symmetric (s), and antisymmetric (a) phonon modes (known as Lamb waves) <cit.>. 
All these modes propagate in the direction parallel to the (x,y) plane and are stationary waves along the z axis. The h modes are simple transverse horizontal shear modes, with a displacement field parallel to the (x,y) plane. Their wave vector ≡_∥ + q_th has the components parallel _∥≡ q_∥ x + q_∥ y and perpendicular to the membrane q_th, where q_∥ x = 2πν_x/L_x, q_∥ y = 2πν_y/L_y, and q_th = πν_z /L (notice that here th signifies t= transverse and h= horizontal shear). The quantization conditions ν_x, ν_y = …, -1, 0, 1, …, and ν_z = 0, 1, … are imposed by the periodic boundary conditions in the and directions and free boundary conditions in the direction <cit.>. As in the case of electrons, the phonon modes with the same ν_z and any _∥ form 2D bands <cit.>. The s and a Lamb modes, in contrast, are a superposition of transverse and longitudinal waves, with displacement fields oscillating in a plane perpendicular to the (x,y) plane. Both, the longitudinal and the transverse partial waves have the same component _∥ of the wave vector parallel to the (x,y) plane, whereas the components parallel to the z axis, q_l and q_t, respectively, satisfy the equation <cit.> - 4q_∥^2 q_l q_t/(q_∥^2- q_t ^2)^2 = [ tan( q_tL/2)/tan( q_lL/2)]^± 1, where the exponents 1 and -1 on the right hand side (r.h.s) of Eq. (<ref>) correspond to the symmetric (s) and antisymmetric (a) modes, respectively. Equation (<ref>) relate q_l and q_t for any q_∥ and for each polarization s and a. Another relation that has to be satisfied <cit.> for q_t and q_l is the Snell's law ω_q_∥ = c_l√(q_l^2+q_∥^2) = c_t√(q_t^2+q_∥^2) , where ω_q_∥ is the angular frequency-wave vector dispersion relation of the mode. Solving Eqs (<ref>) we obtain an infinite, countable set of solutions [q_t,ν_z,σ(q_∥), q_l,ν_z,σ(q_∥)], where σ stands for the polarization s or a, and ν_z = 0,1,…. The components q_t,ν_z,σ(q_∥) and q_l,ν_z,σ(q_∥) take either real or imaginary values, but never complex values, with both, real and imaginary components <cit.>; when they are imaginary, we use the notation q_t,ν_z,σ≡ i p_t,ν_z,σ and q_l,ν_z,σ≡ i p_l,ν_z,σ. To make the notations uniform, in the following we make use of the doublets ξ≡ (ν_z,σ), where ν_z = 0,1,… and σ = h,s,a. Then, the displacement fields of all the phonon modes are of the form __∥ξ(, t) ≡e^i (_∥_∥ - ω_q_∥ξ t)/2π__∥ξ(z) . The z dependence of the displacement field of the phonon modes __∥ξ(z) are normalized and explicitly given, for example, in Refs. <cit.>. §.§ Electron-phonon interaction Hamiltonian For the electron-phonon interaction we use the deformation potential model <cit.> Ĥ_ def = E_a ∫_V_eld^3𝐫 Ψ̂^†(𝐫)Ψ̂ (𝐫)∇·(𝐫) . where ∇·(𝐫) is the dilatation field operator, V_el=A× d is the volume of the metallic layer, E_a is a constant, usually taken as E_a = (2/3) ϵ_F <cit.>, whereas the electron field annihilation and creation operators are Ψ̂ (𝐫,t) = ∑__∥, k_zψ__∥, k_z(, t) ĉ__∥, k_z and Ψ̂^†(𝐫,t) = ∑_𝐤_∥,k_zψ__∥, k_z^* (,t) ĉ_𝐤_∥,k_z^†, respectively. The operators ĉ_𝐤_∥,k_z and ĉ__∥, k_z^† are the electron k-space annihilation and creation operators on the state ψ __∥,k_z. From Eq. (<ref>) we write the phonon field operator () = ∑_ξ,_∥√(ħ/2 ρω__∥ξ) e^i (_∥_∥ - iω__∥ξt )[ â__∥ξ__∥ξ(z) + â_-_∥ξ^†^*__∥ξ(z) ] , where â__∥ξ^† and â__∥ξ are the phonon creation and annihilation operators, respectively. §.§ Electron-phonon heat flow We follow the prescription of Ref. <cit.> to calculate the electron-phonon heat power flow. We apply the Fermi golden rule to obtain from Eq. 
(<ref>) the transition rate Γ_i→ f = (2π/ħ) |⟨ f| Ĥ_ def| i⟩ |^2 δ(E_f - E_i) between the initial (i) and final (f) state, of energies E_i and E_f. Using the transition rates and assuming Fermi and Bose distributions of the electrons and phonons, respectively, we calculate the heat power flow, which may be written as (Eqs. 17 of Ref. <cit.>) P(T_e, T_ph) ≡ P^(0)(T_e) - P^(1)(T_e, T_ph) P^(0) ( T_e ) ≡ P^(0)_s ( T_e ) + P^(0)_a ( T_e ) , P^(1) ( T_e, T_ph ) ≡ P^(1)_s ( T_e, T_ph ) + P^(1)_a ( T_e, T_ph ) , P^(0)_α ( T_e ) ≡ 4π/ħ∑__∥_∥', n, n'^_∥, νħω __∥, α, ν |g__∥, α, ν^n',n|^2 [f(β_e ϵ_𝐤_∥ -𝐪_∥, n') - f(β_e ϵ_k_∥,n) ] n(β_e ϵ_q_∥, ν) , P^(1)_α ( T_e, T_ph) ≡ 4π/ħ∑__∥_∥', n, n'^_∥, νħω __∥, α, ν |g__∥, α, ν^n',n|^2 [f(β_e ϵ_𝐤_∥ -𝐪_∥, n') - f(β_e ϵ_k_∥,n) ] n(β_phϵ_q_∥, ν) , where β_e = 1/(k_BT_e), β_ph = 1/(k_BT_ph), T_e is the electron temperature, T_ph is the phonon temperature, k_B is Boltzmann constant, P^(0)_α and P^(1)_α are the contributions of the α modes, where α = s,a is the polarization. Purely transverse waves do not contribute to the electron-phonon heat exchange in our model, so the h modes do not contribute to P in Eq. (<ref>). Note also that the terms P^(0)(T_e) and P^(1)(T_e, T_ph) are not the heat powers from electrons to phonons and from phonons to electrons, respectively, since some terms, which cancel out are not explicitly written in Eq. (<ref>). Furthermore, ω __∥, α, ν are given by Eq. (<ref>), with q_l,ν_z,σ(q_∥) and q_t,ν_z,σ(q_∥) being the solutions of Eqs. (<ref>). In Eqs. (<ref>), we also used the notation for the coupling constant g__∥, ξ^n',n = E_a N_q_∥, ξ√(ħ/2ρω __∥, ξ)∫_d_1^d_2ϕ _n^'^* (z)ϕ _n(z) [ i_∥·__∥, ξ(z)+d w__∥, ξ, z(z)/d z] dz, where w__∥, ξ, z is the component of __∥, ξ along the z axis, and the normalization constants are 1/N_q_∥, s, ν^2 = A { 4|q_t|^2 q_∥^2 |cos( q_t L/2)|^2 [ ( |q_l|^2+q_∥^2 ) sinh(p_lL)/2 p_l - ( |q_l|^2-q_∥^2 ) sin(q̅_lL)/2q̅_l] . + | q_t^2-q_∥^2 |^2 |cos(q_lL/2)| [ (|q_t|^2+q_∥^2) sinh(p_tL)/2p_t + (|q_t|^2 - q_∥^2)sin(q̅_tL)/2q̅_t] . -4q_∥^2 |cos(q_lL/2)|^2 [ p_t (|q_t|^2 + k_∥^2) sinh(p_t L) - q̅_t(|q_t|^2 - q_∥^2)sin(q̅_tL) ] } , 1/N_q_∥, a, ν^2 = A{4 |q_t|^2 q_∥^2 |sin( q_tL/2)|^2 [(|q_l|^2+q_∥^2)sinh(p_lL)/2 p_l +(|q_l|^2-q_∥^2)sin(q̅_lL)/2 q̅_l] +| q_t^2-q_∥^2|^2 |sin(q_lL/2)|^2 [(|q_t|^2+q_∥^2)sinh(p_tL)/2p_t - (|q_t|^2 - q_∥^2)sin(q̅_tL)/2q̅_t] - 4 q_∥^2 |sin(q_lL/2) |^2 [ p_t(|q_t|^2+q_∥^2)sinh(p_tL) + q̅_t (|q_t|^2 - q_∥^2) sin(q̅_tL) ] } . In Eqs. (<ref>) q̅_t and q̅_l are the real and parts of q_t and q_l, respectively. Since q_l and q_t may be either real or imaginary, the expressions (<ref>) should be interpreted as a limit, when the redundant component goes to zero: lim_p_t/l→ 0sinh(p_t/lL)/(2p_t/l) = L/2 and lim_q̅_t/l→ 0sin(q̅_t/lL)/(2q̅_t/l) = L/2. Combining Eqs. 
(<ref>), (<ref>) and (<ref>) we obtain (see, for example, <cit.>) P_s^(0) = 4 A/π^2 LE_a^2/ρ c_l^42m/ħ^2∑_n∑_n'∑_ν∫_0^∞ dx_∥ x_∥I_s,ν^(0) (x_∥)/2x_∥ n(β_e ħω_s,ν,q_∥) I_P , P_a^(0) = 4 A/π^2 LE_a^2/ρ c_l^42m/ħ^2∑_n∑_n'∑_ν∫_0^∞ dx_∥ x_∥I_a,ν^(0) (x_∥)/2x_∥ n(β_e ħω_a,ν,q_∥) I_P , where we use the dimensionless notations y_∥≡ (L/2) k_∥, x_∥≡ (L/2) q_∥, x_l,ξ≡ x_l,ξ(q_∥) ≡ q_l,ξ(q_∥) (L/2), x_t,ξ≡ x_t,ξ(q_∥) ≡ q_t,ξ(q_∥) (L/2), x̅_l,ξ≡x̅_l,ξ(q_∥) ≡q̅_l,ξ(q_∥) (L/2), x̅_t,ξ≡x̅_t,ξ(q_∥) ≡q̅_t,ξ(q_∥) (L/2), χ_l,ξ≡χ_l,ξ(q_∥) ≡ p_l,ξ(q_∥) (L/2), χ_t,ξ≡χ_t,ξ(q_∥) ≡ p_t,ξ(q_∥) (L/2), z_∥≡β_e (ħ^2/2m)(2/L)^2 y_∥^2, z_1 ≡β_e (ħ^2/2m)(nπ/L)^2, z_ min≡β_e (ħ^2/2m)(2/L)^2 y_ min^2, z_ph≡β_e ħω_ξ,q_∥, and z_ϵ_F≡β_e ϵ_F, and y_ min ≡ L/4 q_∥| 2m/ħ^2ħω_ξ,q_∥ + q_∥^2 + (n'^2 - n^2) ( π/L)^2 | . In Eqs. (<ref>) we have the integral I_P = 1/2√(k_BT 2m/ħ^2)L/2∫_0^∞dz'_∥/√(z'_∥){1/e^z'_∥ - ( z_ϵ_F + z_ph - z_1 - z_ min) + 1 - 1/e^z'_∥ - (z_ϵ_F - z_1 - z_ min) + 1} and the notations I_s,ν^(0) (x_∥) = ∑_n∑_n' |x_t|^2 x_∥^2 |cos(x_t)|^2 |G_s, ν, q_∥(n, n')|^2 ħω_s, ν,q_∥^4 ×{ 4 |x_t|^2 x_∥^2 |cos(x_t)|^2 [ (|x_l|^2+x_∥^2) sinh(2 χ_l)/2 χ_l + (x_∥^2 - |x_l|^2) sin(2x̅_l)/2 x̅_l]. + |x_∥^2 - x_t^2|^2 |cos(x_l)|^2 [ (|x_t|^2+x_∥^2) sinh(2 χ_t)/2 χ_t - (x_∥^2 - |x_t|^2) sin(2x̅_t)/2 x̅_t] . - 4x_∥^2 |cos(x_l)|^2 [ χ_t(|x_t|^2+x_∥^2) sinh(2 χ_t) + x_t(x_∥^2 - |x_t|^2) sin(2x̅_t) ] }^-1 , I_a,ν^(0) (x_∥) = ∑_n∑_n' |x_t|^2 x_∥^3 |sin(x_t)|^2 |G_a, ν, q_∥(n,n')|^2 ħω^4_a, ν,q_∥ ×{ 4|x_t|^2 x_∥^2 |sin(x_t)|^2( (|x_l|^2+q_∥^2)sinh(2 χ_l)/2 χ_l +(|x_l|^2-x_∥^2)sin(2x̅_l)/2x̅_l). +|x_t^2-x_∥^2|^2 |sin(x_l)|^2( (|x_t|^2+x_∥^2)sinh(2 χ_t)/2 χ_t -(|x_t|^2-x_∥^2)sin(2x̅_t)/2x̅_t) -4x_∥^2 |sin(x_l)|^2( χ_t(|x_t|^2+x_∥^2) sinh(2 χ_t)+x̅_t(|x_t|^2-x_∥^2)sin(2x̅_t)) .}^-1 , with G_q_∥, s, ν(n,n') = 2/d∫^d_2_d_1 dz sin[(z-d_1)nπ/d] sin[(z-d_1)n'π/d] cos[q_l, s, ν(q_∥) z] , = - 8 π^2 n_1 n_2 x_l /[ π^2 ( n_1 - n_2 )^2 ( L/d_2 - d_1)^2 - 4 x_l^2 ] [ π^2 ( n_1 + n_2 )^2 ( L/d_2 - d_1)^2 - 4 x_l^2 ] ×[ ( -1 )^n_1 + n_2sin( 2 x_l d_2/L) -sin( 2 x_l d_1/L) ] ( L/d_2-d_1)^3 G_q_∥, a, ν(n,n') = 2/d∫^d_2_d_1 dz sin[(z-d_1)nπ/d] sin[(z-d_1)n'π/d] sin[q_l, a, ν(q_∥) z] . = 8 π^2 n_1 n_2 x_l /[ π^2 ( n_1 - n_2 )^2 ( L/d_2 - d_1)^2 - 4 x_l^2 ] [ π^2 ( n_1 + n_2 )^2 ( L/d_2 - d_1)^2 - 4 x_l^2 ] ×[ (-1)^n_1 + n_2cos( 2 x_l d_2/L) - cos( 2 x_l d_1/L) ] ( L/d_2 - d_1)^3 In general, z_ϵ_F - z_1 - z_ min≫ 1 <cit.>, so we can use the approximation (see the Appendix of Ref. <cit.>) I_P ≈ 1/2√(k_BT_e 2m/ħ^2)L/2z_ph/√(z_ϵ_F - z_1 - z_ min) = √(2m/ħ^2)L/4ħω_ξ,q_∥/√(ϵ_F - ħ^2/2m(2/L)^2 [ (nπ/2)^2 - y_ min^2 ]) and observe that I_P does not depend on temperature. In such a case, the only temperature dependence in the expressions (<ref>) is in the phonon populations n(β_e ħω_σ,ν,q_∥). The term P^(1) may be calculated similarly as P^(0), but replacing T_e by T_ph in the phonon populations n(β_phϵ_q_∥, ν) of Eq. (<ref>), as shown in detail in Refs. <cit.>. Therefore, in the limit z_ϵ_F - z_1 - z_ min≫ 1, P^(1)_α ( T_e, T_ph) remains only a function of T_ph, in such a way that we may write in general P^(1)_s(T) = P^(0)_s(T), P^(1)_a(T) = P^(0)_a(T), so P^(1)(T) = P^(0)(T) . Notice that these simplifications are valid only outside the very narrow crest regions, as they are defined in <cit.>. Therefore, in the region of applicability of Eq. (<ref>), Eq. (<ref>) simplifies to P(T_e, T_ph) = P^(0)(T_e) - P^(0)(T_ph) . 
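Before taking the low-temperature limit, it is instructive to look at the overlap factors G_{q_∥,σ,ν}(n,n') defined above, since the position of the film enters P^(0) only through them. The following Python sketch evaluates G_s(1,1) and G_a(1,1) by direct numerical quadrature for a 10 nm film inside a 100 nm membrane; the purely imaginary value of q_l used here is an arbitrary long-wavelength sample (it is not obtained by solving the dispersion relation (<ref>)), and the numbers serve only to illustrate that the antisymmetric-mode factor vanishes for a centred film and grows with the offset, while the symmetric-mode factor stays close to unity.

# Quadrature sketch (independent of the paper's own numerics) for the overlap
# factors: G = (2/d) * int_{d1}^{d2} sin^2((z-d1) n pi/d) f(q_l z) dz,
# with f = cos for the symmetric and f = sin for the antisymmetric modes.
# The chosen q_l is an arbitrary small, purely imaginary sample value.
import cmath, math

def overlap(n, d1, d2, q_l, kind, steps=4000):
    d = d2 - d1
    f = cmath.cos if kind == "s" else cmath.sin
    total = 0.0 + 0.0j
    for j in range(steps):                       # midpoint rule
        z = d1 + (j + 0.5) * d / steps
        total += math.sin((z - d1) * n * math.pi / d) ** 2 * f(q_l * z)
    return (2.0 / d) * total * (d / steps)

if __name__ == "__main__":
    L = 100e-9                                   # membrane thickness (m)
    d = 10e-9                                    # film thickness (m)
    q_l = 1j * 0.2 / (L / 2)                     # sample value, chi_l = 0.2 (assumption)
    for label, d1 in (("film centred       ", -d / 2), ("film at the surface", L / 2 - d)):
        d2 = d1 + d
        Gs, Ga = overlap(1, d1, d2, q_l, "s"), overlap(1, d1, d2, q_l, "a")
        print(f"{label}  |G_s(1,1)| = {abs(Gs):.4f}   |G_a(1,1)| = {abs(Ga):.4f}")
    # |G_a(1,1)| is ~0 for the centred film and ~chi_l*|d_1+d_2|/L otherwise,
    # while |G_s(1,1)| stays close to 1, matching the low-temperature limits below.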
In the low temperature limit, only the lowest phonon band is populated and the expressions (<ref>) are simplified to G_q_∥, s, ν(n,n') = {[ 1 , if n - n' = 0 ,; 4 χ_l^2 (d_2-d_1)^2/π^2 L^2{ 1 /(n-n')^2 - 1/(n+n')^2} , if n - n' = 2 k ,; - 4 χ_l^2 (d_1+d_2) (d_2-d_1)/π^2 L^2{ 1 /(n-n')^2 - 1 /(n+n')^2} , if n - n' = 2k+1 , ]. G_q_∥, a, ν(n,n') = {[ i χ_l d_2 + d_1/L , if n - n' = 0 ,; 4/π^2 i χ_l^3 (d_2 + d_1) (d_2-d_1)^2/L^3{ 1 /(n-n')^2 - 1 /(n+n')^2} , if n - n = 2k ,; -4/π^2 iχ_l d_2-d_1/L{ 1 /(n-n')^2 - 1 /(n+n')^2} , if n - n' = 2k + 1 . ]. The main contribution to the heat power flow (especially in the low temperature limit) comes from the cases n=n', since in the other cases the heat exchange involve phonons of very high energy (for a typical 10 nm thick Cu metallic film, the lowest energy difference between two bands is Δϵ_k_∥=0, n_z = 1≈ 131 K <cit.>). From Eq. (<ref>) we observe that in the low temperature limit G_q_∥, s, ν(n,n), and therefore P_s^(0), is independent of the position of the metallic layer in the dielectric membrane. On the other hand, from Eq. (<ref>) we notice that G_q_∥, a, ν(n,n) = 0 for d_1=-d_2 (when the metallic film is in the middle of the membrane). § RESULTS Let us consider a 10 nm thick Cu film at an arbitrary location inside a 100 nm thick suspended SiN_x dielectric slab. The density of SiN_x is 3290 kg/m^3, whereas the longitudinal and transversal sound velocities are 10300 m/s and 6200 m/s, respectively. The Fermi energy in Cu is 7 eV and the 10 nm thick Cu film is outside the crest region <cit.>, so we can use the expression (<ref>) for I_P. In the temperature range of interest (from 10 mK to 10 K) we can use only the terms n=n' in the summations (<ref>) and (<ref>). In Fig. <ref> we plot P_s^(0), P_a^(0), and P^(0) = P_s^(0) + P_a^(0) as functions of T, for different positions of the Cu film in the membrane, specified by (d_1+d_2)/2. We notice that in the low temperature range (say, around 100 mK and below) P_s^(0) is practically independent of the position of the film, confirming Eq. (<ref>), whereas P_a^(0) strongly depends on it in the whole temperature range investigated, giving no contribution when the film is exactly in the middle, P_a^(0) = 0 at d_1=-d_2. This can be seen more clearly in Fig. <ref>, where we plot P_s^(0), P_a^(0), and P^(0) as functions of the Cu film position (d_1+d_2)/2 at three different temperatures: T=0.01 K, T=0.1 K, and T=10 K. We notice that in the sub-K temperature range, there is a crossover from the symmetric-mode domination for close-to-central metal film locations, to the antisymmetric-mode domination in the opposite limit. As it was noticed also in Refs. <cit.>, in the low temperature limit, P_s^(0) decreases faster than P_a^(0) with decreasing temperature, so, for any d_1+d_2 0, there is a crossover temperature T_c(d_1+d_2), such that P_s^(0) < P_a^(0) for T<T_c(d_1+d_2). Therefore, at low enough temperatures, the heat power exchanged by the electrons with the antisymmetric phonons dominates the heat power exchanged with the symmetric phonons at any |d_1+d_2|/2 > 0. Due to this variation of P^(0)_a with the position of the metallic film, at T=10 mK the total heat exchange power P^(0) decreases by as much as an order of magnitude when moving the metallic film from the surface of the slab to the middle of it. In addition, in Fig. 
<ref> we plot the exponent of the temperature dependence for the different components of the heat power flow, defined as x_s ≡∂ln P_s^(0)(T)/∂ln T, x_a ≡∂ln P_a^(0)(T)/∂ln T, and x ≡∂ln P^(0)(T)/∂ln T . One can show that <cit.> lim_T→ 0 x_s = 4 and lim_T→ 0 x_a = 3.5 , so, at low enough temperatures x_s>x_a, as mentioned above. At higher temperatures, the exponents x_s, x_a and x have a non-monotonous temperature dependence and do not reach the 3D limit x=5 even at T=10 K, although the phonon 2D-3D crossover temperature for a 100 nm slab is T_C ≈ 240 mK <cit.>. This is due to the fact that although the phonon gas in the 100 nm thick slab is quasi-3D at 10 K, the energy of an average phonon is much smaller than the energy difference between the 2D electronic bands, so in this temperature range the electrons are still scattered only within the same band, n=n'. Therefore, the higher temperature range corresponds here to the heat power exchange between a collection of 2D electron gases, with n≤ n_F, and a 3D phonon gas. In this case, the exponent x approaches four, satisfying the ansatz x = s+2, but with s being the smaller dimensionality of the two subsystems–in our case, s=2 is the dimensionality of the electron subsystem. § CONCLUSIONS We studied the heat exchange between electrons and phonons in a suspended geometry, where a Cu film of thickness d=10 nm is placed inside a dielectric SiN_x membrane of thickness L=100 nm, forming a layered structure. We focused on investigating on how the location of the metal film influences the power flow, and found that at low temperatures it can change significantly – at 10 mK it changes by an order of magnitude. At sub-Kelvin temperatures, this metal film location dependence arises only from the coupling to the antisymmetric Lamb phonon modes of the membrane, whereas the symmetric Lamb-modes give a constant, location independent contribution. Moreover, the contribution of the antisymmetric modes goes to zero, if the metal film is placed at the center of the membrane. The physical reason for this is that–by definition–the displacement field in the antisymmetric Lamb-modes is zero in the middle plane of the membrane. In the low temperature limit, the temperature dependence of the symmetric mode contribution is P^(0)_s ∝ T^4, whereas for the antisymmetric mode, P^(0)_a ∝ T^3.5. Therefore, if the metal film is not close to the center of the membrane, at low enough temperatures P_a prevails over P_s and the total heat power flux has the simple temperature dependence P(T_e, T_ph) ∝ T_e^3.5 - T_ph^3.5. In the opposite case, the symmetric mode dominates and P(T_e, T_ph) ∝ T_e^4 - T_ph^4. A consequence of this is that electrons and phonons can be much more efficiently decoupled at low temperatures by placing the metallic film in the center of the membrane. This may also help considerably for electron cooling and noise reduction in ultrasensitive nanosensors. In a wider temperature range, the exponent x of the temperature dependence has a complicated, non-monotonous dependence on the temperature and on the metal film location. For the antisymmetric mode, it varies from ∼ 3.5 to ∼ 7, whereas for the symmetric mode, it varies from from ∼ 4 to ∼ 5.7. The bulk 3D limit, corresponding to x=5, was not achieved even at T=10 K, due to the high energy difference between the 2D electronic bands, but instead, the limit of x=4 is approached at T=10 K. § ACKNOWLEDGMENTS D.V.A. and M.D. 
acknowledge financial support by the Ministry of Education, UEFISCDI projects PN23210101 and PN23210204. I.J.M. acknowledges support by the Academy of Finland project number 341823. 10 Enss Cryogenic Particle Detection, edited by Ch. Enss (Springer,New York, 2005). RevModPhys.78.217.2006.Giazotto F. Giazotto, T. T. Heikkilä, A. Luukanen, A. M. Savin, and J. P. Pekola. Opportunities for mesoscopics in thermometry and refrigeration: Physics and applications. Rev. Mod. Phys., 78:217, 2006. PhysRevApplied.16.034051 P. J. de Visser, S. A..H. de Rooij, V. Murugesan, D. J. Thoen and J. J. A. Baselmans. Phonon-Trapping-Enhanced Energy Resolution in Superconducting Single-Photon Detectors. Phys. Rev. Appl., 16:034051, 2021. Quaranta_2013 O. Quaranta, T. W. Cecil, L. Gades, B. Mazin and A. Miceli X-ray photon detection using superconducting resonators in thermal quasi-equilibrium. Supercond. Sci. Technol., 26:105021, 2013. ApplPhysLett.78.556.2001.Anghel D. V. Anghel, A. Luukanen, and J. P. Pekola. Performance of cryogenic microbolometers and calorimeters with on-chip coolers. Appl. Phys. Lett., 78:556, 2001. RepProgrPhys.75.046501.2012.Muhonen J. T Muhonen, M. Meschke, and J. P Pekola. Micrometre-scale refrigerators. Rep. Progr. Phys., 75:046501, 2012. ApplSupercond.5.227.1998.Leivo M. M. Leivo, A. J. Manninen, and J. P. Pekola. Microrefrigeration by normal-metal/insulator/superconductor tunnel junctions. Appl. Supercond., 5:227, 1997. ApplPhysLett.70.1885.1997.Manninen A. J. Manninen, M. M. Leivo, and J. P. Pekola. Refrigeration of a dielectric membrane by superconductor/insulator/normal-metal/insulator/superconductor tunneling. Appl. Phys. Lett., 70:1885, 1997. ApplPhysLett.92.163501.2008.Miller N. A. Miller, G. C. O’Neil, J. A. Beall, G. C. Hilton, K. D. Irwin, D. R. Schmidt, L. R. Vale, and J. N. Ullom. High resolution X-ray transition-edge sensor cooled by tunnel junction refrigerators. Appl. Phys. Lett., 92:163501, 2008. Vercuyssen N. Vercruyssen, R. Barends, T. M. Klapwijk, J. T. Muhonen, M. Meschke, and J. P. Pekola. Substrate-dependent quasiparticle recombination time in superconducting resonators. Appl. Phys. Lett. 99:062509, 2011. Nguyen H. Q. Nguyen, M. Meschke, and J. P. Pekola. A robust platform cooled by superconducting electronic refrigerators. Appl. Phys. Lett. 106:012601, 2015. SovPhysJETP.4.173.1957.Kaganov M. I. Kaganov, I. M. Lifshitz, and L. V. Tanatarov. Relaxation between electrons and the crystalline lattice. Sov. Phys. JETP, 4:173, 1957. PhysRevLett.59.1460.1987.Allen P. B. Allen. Theory of thermal relaxation of electrons in metals. Phys. Rev. Lett., 59:1460, 1987. PhysRevB.49.5942.1994.Wellstood F. C. Wellstood, C. Urbina, and J. Clarke. Hot-electron effects in metals. Phys. Rev. B, 49:5942, 1994. PhysRevB.81.245404.2010.Viljas J. K. Viljas and T. T. Heikkilä. Electron-phonon heat transfer in monolayer and bilayer graphene. Phys. Rev. B, 81:245404, 2010. PhysRevB.77.033401.2008.Hekking F. W. J. Hekking, A. O. Niskanen, and J. P. Pekola. Electron-phonon coupling and longitudinal mechanical-mode cooling in a metallic nanowire. Phys. Rev. B, 77:033401, 2008. JApplPhys.119.085101.2016.Gall Daniel Gall. Electron mean free path in elemental metals. J. Appl. Phys., 119:085101, 2016. SolidStateCommun.227.56.2016.Anghel D.V. Anghel and S. Cojocaru. Electron-phonon heat exchange in layered nano-systems. Solid State Commun., 227:56, 2016. PhysRevB.93.115405.2016.Cojocaru S. Cojocaru and D. V. Anghel. Low-temperature electron-phonon heat transfer in metal films. Phys. Rev. 
B, 93:115405, 2016. EurPhysJB.90.260.2017.Anghel D. V. Anghel and S. Cojocaru. Electron–phonon heat exchange in quasi-two-dimensional nanolayers. Eur. Phys. J. B, 90:260, 2017. PhysScr.94.105704.2019.Anghel D. V. Anghel, C. Caraiani, and Y. M. Galperin. Crossover temperature in electron–phonon heat exchange in layered nanostructures. Phys. Scr., 94:105704, 2019. PhysRevLett.99.145503 J. T. Karvonen and I. J. Maasilta. Influence of Phonon Dimensionality on Electron Energy Relaxation. Phys. Rev. Lett. 99:145503, 2007. Saira2020 O.-P. Saira, M. H. Matheny, L. Wang, J. Pekola, and M. Roukes. Modification of electron-phonon coupling by micromachining and suspension. J. Appl. Phys. 127:024307, 2020. PhysRevB.70.125425.2004.Kuhn T. Kühn, D. V. Anghel, J. P. Pekola, M. Manninen, and Y. M. Galperin. Heat transport in ultrathin dielectric membranes and bridges. Phys. Rev. B, 70:125425, 2004. JPhysA.40.10429.2007.Anghel D. V. Anghel and T. Kühn. Quantization of the elastic modes in an isotropic plate. J. Phys. A: Math. Theor., 40:10429, 2007. cond-mat/0611528. Auld:book B. A. Auld. Acoustic Fields and Waves in Solids, 2nd Ed. Robert E. Krieger Publishing Company, 1990. Ziman:book J. M. Ziman. Electrons and Phonons. Oxford University Press, 1960.
http://arxiv.org/abs/2307.00562v1
20230702130339
A MIL Approach for Anomaly Detection in Surveillance Videos from Multiple Camera Views
[ "Silas Santiago Lopes Pereira", "José Everardo Bessa Maia" ]
cs.CV
[ "cs.CV" ]
A MIL Approach for Anomaly Detection in Surveillance Videos from Multiple Camera Views Silas Santiago Lopes Pereira, José Everardo Bessa Maia August 1, 2023 =================================================================== Occlusion and clutter are two scene states that make it difficult to detect anomalies in surveillance video. Furthermore, anomaly events are rare and, as a consequence, class imbalance and lack of labeled anomaly data are also key features of this task. Therefore, weakly supervised methods are heavily researched for this application. In this paper, we tackle these typical problems of anomaly detection in surveillance video by combining Multiple Instance Learning (MIL) to deal with the lack of labels and Multiple Camera Views (MC) to reduce occlusion and clutter effects. In the resulting MC-MIL algorithm we apply a multiple camera combined loss function to train a regression network with Sultani's MIL ranking function. To evaluate the MC-MIL algorithm first proposed here, the multiple camera PETS-2009 benchmark dataset was re-labeled for the anomaly detection task from multiple camera views. The result shows a significant performance improvement in F1 score compared to the single-camera configuration.
§ INTRODUCTION
In video surveillance scenarios for detecting anomalous events, the use of a single camera view to identify suspicious activities and abnormal behavior brings with it a set of limitations and difficulties for automating this task. In addition to variability factors such as lighting conditions, background clutter, and low viewing resolution, the information captured is also dependent on the proper calibration of the camera for the target environment. This dependence on perspective can often generate uncertainty regarding the interpretation of actions and behaviors. Another limitation is the occlusion of people or objects, which can impair the recognition of anomalous activity. In this sense, the use of multiple overlapping cameras to capture and monitor the same scene can provide a general perspective of the whole scenario, a richer representation of information, and a greater amount of data from multiple perspectives. However, video anomaly detection (VAD) is a challenging problem in the computer vision area. The definition of an anomaly involves subjectivity, depends on localization and context, and can vary in duration and content. Thus, defining an anomaly can itself become a complex problem. Beyond that, there is the additional challenge of capturing anomaly examples. Performing frame- and pixel-level annotations is a tedious and expensive human activity, which leads to frequently unbalanced databases and the widespread use of one-class classification methods, despite the existence of works in the literature on binary classification <cit.>. Multiple Instance Learning (MIL) is applicable when the knowledge about label categories and training samples is incomplete <cit.>. In MIL, each pattern in a dataset is represented by a bag containing multiple instances instead of an individual instance. From a binary perspective, a bag can have a normal or an anomalous label, which can be used to train a model with an appropriate machine-learning technique. Different factors can impact the performance of MIL approaches. First, predictions can be made at the bag or instance level, and these two levels have distinct misclassification costs. Second, the composition of each bag, such as the proportion of examples from each category and the relation between examples, impacts the performance of MIL methods.
Third, ambiguity in instance labels can be related to label noise as well as to instances not belonging to any class. Finally, class distributions can also affect MIL algorithms depending on their assumptions about the data (<cit.>). Recent studies have shown the performance efficiency of the MIL approach in detection and recognition tasks. Although using weakly supervised approaches mitigates the need for labeled training data, acquiring video data labeled even at the video level is still an exhaustive and challenging task, and anomaly detection in realistic scenes is still an open problem. The vast majority of video anomaly detection works adopt single-camera approaches and do not exploit the intrinsic information existing in multiple camera views of the same scene. In this sense, since collected video events can be described by multiple overlapped camera perspectives of the same scene, it is crucial to explore multi-camera strategies to learn the underlying semantics of each camera view of the same scene. Thus, we propose a multi-camera multiple-instance training scheme in this work. We employ the MIL algorithm of <cit.> in addition to a combined loss function to take into account the multiple views of the same data during the network weight adjustment. We consider a multi-camera video anomaly detection dataset generated from the PETS 2009 dataset for evaluation and for comparison of the proposed multi-camera approach with the vanilla single-camera case. The main contributions of this work are summarized as follows:
* We have developed a MIL training strategy with multiple camera views which improves performance over the single-camera configuration;
* To evaluate the proposed approach, we re-label the multiple camera PETS-2009 benchmark dataset for the anomaly detection task from multiple camera views;
* Since <cit.> trains their regression network with video bags with a fixed number of segments of varying length, we provide an adaptation in the source code to enable, instead, training with video bags with a variable number of video clips of the same length.
This work is organized as follows: Section 2 describes the dataset and the proposed multi-camera multiple-instance method. In Section 3, the experimental results are presented and discussed. Section 4 presents some relevant related works associated with video anomaly detection. Section 5 presents the conclusions and directions for future research.
§ DATA AND METHODS
This section describes the main steps for data preparation, modeling, and evaluation of our proposed multi-camera multiple-instance video anomaly detection approach. First, we describe the formation of the multi-camera video anomaly dataset used in our experiments. Then, we explain the modeling and evaluation process. In our work, we consider the multi-camera video anomaly detection problem from the perspective of a regression problem. We describe this problem as follows: Let X = {x_i}_{i=1}^{n} be a dataset consisting of n video scenes. Each video scene x_i is composed of multiple videos corresponding to multiple overlapped camera perspectives of the same scene. Each video x_i also has a duration t_i, so that T = {t_i}_{i=1}^{n} is the set of temporal durations of the dataset. Let Y = {y_i}_{i=1}^{n} be the binary labels for each video in dataset X. We aim to build a predictive model which receives a given video x_test and produces an anomaly score as inference.
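To make this setting concrete, the following minimal sketch shows one possible way of representing a multi-camera scene and the clip-level scoring interface implied by the formulation above; all names and the feature layout are illustrative assumptions rather than the actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class MultiCameraScene:
    """One video scene x_i: per-camera clip features plus a weak, scene-level label y_i."""
    clip_features: List[np.ndarray]  # one array of shape (n_clips, feat_dim) per camera view
    label: int                       # 1 = anomalous scene, 0 = normal scene

def score_scene(model: Callable[[np.ndarray], np.ndarray],
                scene: MultiCameraScene) -> List[np.ndarray]:
    """Apply a clip-level scoring model independently to each camera view.

    Each returned array holds one anomaly score per clip of that view; how the
    per-camera scores are fused is discussed later in the paper.
    """
    return [model(feats) for feats in scene.clip_features]
```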
§.§ The Multiple Camera PETS-2009 Benchmark Dataset Re-labeled PETS-2009[<https://cs.binghamton.edu/ mrldata/pets2009>] is a benchmark dataset that aggregates different scene sets with multiple overlapped camera views and distinct events involving crowds (<cit.>). We use these frame sequences to derive a new dataset for the multi-camera video anomaly detection task. We consider the first four cameras in the original frame sequences, which provide different overlapped visions of the same scene from varying positions and angles, as illustrated in Figure <ref>. The scenes were labeled at frame level as anomaly or normal events. Scenes with background, people walking individually or in a crowd, and regular passing of cars are considered as normal patterns. Frames with occurrences of people running (individually or in crowd), crowding of people in the middle of the traffic intersection, and people in the counterflow were considered anomalous patterns. In summary, there are 27 scenes, where 19 scenes reflect normal events while 8 scenes have some anomalous activity. In these 27 scenes, there is a total of 528 clips for each one of the four viewpoints. In Table <ref>, we summarize the distribution of normal and anomalous patterns in the video anomaly detection dataset. Since the number of frames is sometimes different among the four cameras of the same scene in some of the videos, we complete each frame sequence with background frames so that the four camera views have the same number of frames, and the number of frames is a multiple of 16. A set of RGB I3D (Inflated 3D) attributes were obtained for each sequence of 16-frame video clips in the videos. For this step, we use the Video Features library [<https://github.com/v-iashin/video_features>] that uses a pre-trained model on the Kinetics 400 dataset. We describe the composition of training and test splits for modeling and performance evaluation as follows. Initially, we load the dataset 𝐃 = {(X_i, y_i, yf_i)}_i=1^N with the processed videos scenes. Then, we partitioned 𝐃 into training and test datasets, and we used 50% of data for further training with a holdout procedure. We build both partitions so that we maintain the same proportion of anomalous and normal instances in training and test partitions. The training split contains 9 normal videos and 4 anomalous ones for each camera. The test set contains 10 normal videos and 4 anomalous videos for each camera. We made this processed dataset available at the following link: <https://github.com/santiagosilas/MC-VAD-Dataset-BasedOn-PETS2009>. §.§ Modeling and Evaluation The Multiple Instance Learning (MIL) problems in the binary classification context can be formally specified as follows: Consider an instance space X=ℝ^d and a set of labels Y= {0, 1}. A model is then built from a dataset with m bags β = {β_1, β_2, …, β_m}. Each bag β_i = {x_i1, …, x_ij, …, x_in_i} is a set with n_i instances and x_ij∈ X. During the training step, each bag β_i has only the information about the associated bag label y_i ∈ Y, but instance labels are unknown. The learning goal is to predict the label of an unseen bag and also predict the label of its instances (<cit.>). SultaniEtAl2018 In <cit.>, video anomaly detection was treated as a regression problem under MIL in which features are mapped to anomaly scores by the use of a 3-layer fully connected neural network. 
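A regression network of this shape can be sketched as below. The layer widths, dropout rate, and input feature dimension are illustrative assumptions (the original work uses C3D features, while the experiments reported here use I3D clip features), so this is a sketch of the architecture described above rather than the exact network.

```python
import torch.nn as nn

class AnomalyScorer(nn.Module):
    """3-layer fully connected regressor mapping one clip/segment feature
    vector to an anomaly score in [0, 1] (sketch; widths are illustrative)."""
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Dropout(0.6),
            nn.Linear(512, 32), nn.ReLU(), nn.Dropout(0.6),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x):   # x: (n_clips, feat_dim)
        return self.net(x)  # (n_clips, 1) anomaly scores
```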
The authors utilized a coarse-grained approach in which videos are divided into a fixed number of video segments during the training phase and each video segment is an instance of a bag. They propose a deep MIL ranking loss as a hinge-loss formulation that also considers sparsity and temporal smoothness constraints. Anomalous and normal surveillance videos were partitioned into segments so that a video (bag) contains multiple segments (instances of a bag). To build and evaluate the proposed method, the authors consider the large-scale video anomaly dataset UCF-Crime which is composed of multiple anomalous events. From the obtained results, the proposed approach overcomes other anomaly detection state-of-the-art approaches in performance with 75.41% and 1.9% in terms of AUC and false alarm rate. For training and evaluating our proposed multi-camera training strategy, we adopt the MIL baseline model of <cit.> as the backbone. This approach corresponds to a regression network trained under the MIL paradigm. In <cit.> the authors handle the anomaly detection task as a regression problem under MIL since only video labels are considered for model training. Their proposed solution takes a 3-layer fully connected neural network where the loss function contains restrictions of sparsity (anomaly scores are sparse since an anomaly usually occurs in a short time period) and smoothness (the anomaly score will vary smoothly). The MIL objective function is expressed as follows: ℒ(𝒲) = max(0, 1 - max_i ∈ℬ_a f(𝒱^i_a) + max_i ∈ℬ_n f(𝒱^i_n)) + λ_1∑_i^(n-1)( f(𝒱^i_a) - f(𝒱^i+1_a) )^2 + λ_2∑_i^nf(𝒱^i_a) + λ_3 ||𝒲||_F Since it is expected that f(𝒱^i_a) > f(𝒱^i_n) and the instance labels are unknown, the strategy consists of a MI ranking loss where max_i ∈ℬ_a f(𝒱^i_a) > max_i ∈ℬ_n f(𝒱^i_n), which is expressed in the hinge loss formulation in the loss function equation. In the original paper, each video is divided into 32 non-overlapping segments, where each segment is an instance of the bag. To a feature vector for the whole segment, the authors take the average of all clip features within each segment. During the training phase, they randomly select a batch with 30 negative and 30 positive bags for loss computation and backpropagation. In the context of surveillance videos, data is highly imbalanced in weakly supervised anomaly detection. Differently from <cit.>, a characteristic of our multi-camera video anomaly detection preprocessed I3D dataset is that each video has a varying amount of clips. We trained the MIL approach of <cit.> based on the code available on this link[https://github.com/ekosman/AnomalyDetectionCVPR2018-Pytorch], which provides a re-implementation of their approach in PyTorch[https://pytorch.org/] framework. For the training of the regressor network, <cit.> consider video bags with a fixed number of 32 non-overlapping segments, where each segment is an instance of the bag. Differently from <cit.>, we consider that each video in our approach has a varying amount of clips, such as <cit.>. Since the utilized preprocessed I3D data contains bags with a variable number of video clips, we then adapt the source code to allow the MIL training from video bags with a variable number of clips. It is important to note that our proposal does not increase the dimensions of the original regression network. The following outlines our multi-camera training scheme: Let's consider two overlapped camera views 1 and 2. 
For each camera view i, where i={1, 2}, the regression network receives a batch composed with a set of normal clip bags {β^N_c_i} and a set of abnormal ones, {β^A_c_i}. The regression network outputs a set of normal clip scores { S^N_c_i} and a set of abnormal ones, { S^A_c_i}. Then, we obtain a loss l_c_i from the computed clip scores. A combined loss function l_final of l_c_1 and l_c_2 is used to adjust the network parameters. We evaluate loss combinations by minimum, maximum and mean. In our experiments, the loss combination by maximum achieves the best results. In the Figure <ref>, we present the proposed training scheme for the multi-camera video anomaly detection task. As a result, the multi-camera regression network is able to produce an anomaly score for each of the cameras given a new unseen multi-camera clip. A late score fusion function can be used to combine the anomaly outputs yielded by each one of two camera view. We consider the following three late score fusion strategies to combine the anomaly scores yielded by regression network: linear combination (LC) (S_LC←β S_C_1 + (1-β) S_C_2, β∈ [0,1]), maximum (Max) (S_MAX←max(S_C_1, S_C_2)), and minimum (Min) (S_MIN←min(S_C_1, S_C_2)). In our experiments, the score combination by maximum achieves the best results. For the evaluation at the frame level, we describe the test segment partition by 𝐗_test = { (X_i, yfs_i) }, where yfs_i is a sequence of 16 frame labels obtained from ground truth variable yf_i. Since we build the generated models at the clip level, we mapped each predicted test clip output to the corresponding output frame sequence (each video clip corresponds to a sequence of 16 frames) to allow evaluation at the frame level. We consider six different performance metrics to report the achieved results. Let TP be the number of true positives, FP the number of false positives, TN the number of true negatives, and FN the number of false negatives. For each method and for each experiment, we computed the following 6 metrics for performance evaluation: Area under Curve (AUC), False Positive Rate (FPR=FP/FP+TN) or False Alarm Rate (FAR), Accuracy (ACC = TP+TN/TP+TN+FP+FN), Precision (PREC=TP/TP+FP), Recall (REC=TP/TP+FN), and F1-score (F1=2 ·PREC · REC/PREC + REC). We reported FPR metric at 50% threshold. We compare our proposed multi-camera approach with the vanilla single-camera multiple instance approach (SC-MIL) with the MIL ranking loss function referred to in the previous section which was trained and evaluated for each one of the four cameras. Since we have four cameras in the video anomaly detection dataset, we evaluate the proposed multiple-camera learning scheme for all six combinations of two cameras: (C1,C2), (C1,C3), (C1,C4),(C2,C3),(C2,C4), and (C3,C4). To obtain a final prediction for a pair of cameras, we compute the score combination with the functions Linear Combination (LC), Maximum (Max), and Minimum (Min) of multiple camera anomaly output scores. In our experiments, the score combination by maximum achieves the best results. § RESULTS AND DISCUSSION This section presents the quantitative results of our proposed multi-camera multiple-instance (MC-MIL) approach for multi-camera video anomaly detection. We summarize in Table <ref> the best performance results for each pair of cameras and the late decision function of output scores. We also provide the baseline results for the single-camera MIL (SC-MIL) setting for further analysis and comparison. 
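The pieces described above (the MIL ranking objective, the per-camera loss combination by maximum, and the late score fusion) can be sketched as follows. This is an illustrative sketch, not the exact training code: the λ values are placeholders, the Frobenius-norm term is assumed to be handled by the optimizer's weight decay, and mini-batching over bags is omitted.

```python
import torch

def mil_ranking_loss(scores_anom, scores_norm, lam1=8e-5, lam2=8e-5):
    """MIL ranking objective for one anomalous/normal bag pair.

    `scores_anom` and `scores_norm` are 1-D tensors of clip scores (bags may
    have different lengths). lam1/lam2 weight the temporal smoothness and
    sparsity terms; their values here are placeholders, and the ||W||_F term
    is assumed to be handled via the optimizer's weight decay.
    """
    hinge = torch.relu(1.0 - scores_anom.max() + scores_norm.max())
    smoothness = ((scores_anom[1:] - scores_anom[:-1]) ** 2).sum()
    sparsity = scores_anom.sum()
    return hinge + lam1 * smoothness + lam2 * sparsity

def mc_mil_loss(model, anom_bags, norm_bags):
    """Two-camera combined loss: per-camera MIL ranking losses merged by max.

    `anom_bags` and `norm_bags` each hold one (n_clips, feat_dim) tensor per
    camera view of the same scene pair.
    """
    per_camera = []
    for anom, norm in zip(anom_bags, norm_bags):
        l = mil_ranking_loss(model(anom).squeeze(-1), model(norm).squeeze(-1))
        per_camera.append(l)
    return torch.stack(per_camera).max()

def late_fusion(s1, s2, mode="max", beta=0.5):
    """Late fusion of per-camera anomaly scores: LC, Max, or Min."""
    if mode == "lc":
        return beta * s1 + (1.0 - beta) * s2
    return torch.maximum(s1, s2) if mode == "max" else torch.minimum(s1, s2)
```

At inference time, `late_fusion(scores_cam1, scores_cam2, mode="max")` corresponds to the maximum-score combination that achieved the best results in our experiments.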
The performance evaluation of the compared models considered ground truths at the frame level. As the training loss combination scheme, we only present in this table the cost function combined by taking the maximum of the two loss functions, given that this combination generated the best results in terms of the AUC metric. When we consider only one camera in the inference phase, i.e., either camera C1, C2, C3, or C4 in SC-MIL or MC-MIL training, we observe that the proposed MC-MIL scheme outperforms three of the four single-camera baselines. For the first camera, there is a performance gain from 91.97% (SC-MIL) to 94.10% (MC-MIL through the pair C1 and C4) in terms of AUC. For the second camera, this gain is from 95.16% (SC-MIL) to 96.47% (MC-MIL through the pair C2 and C3). For the third camera, the AUC goes from 93.44% to 94.66% (MC-MIL through the pair C3 and C4). For the fourth camera, the baseline SC-MIL achieves better results in terms of AUC. In terms of F1-Score, we can also observe gains when we compare SC-MIL and MC-MIL results for only one camera in the inference phase. For instance, we can note this improvement in F1-Score in cameras C1 (35.97% to 36.64%), C2 (42.44% to 59.88%), C3 (38.24% to 46.57%) and C4 (29.49% to 37.76%) in at least one of the six pairs of camera combinations. But it is when we fuse the decision scores of the two cameras by the maximum decision that the harmonic mean between precision and recall becomes more expressive, in five of the six pairs of camera combinations. When we pay attention to the decision inference in MC-MIL for two cameras, we note a positive difference of 2.18%, 13.39%, 19.78%, 19.67%, and 5.9% for the pairs (C1,C2), (C3, C4), (C1,C3), (C2,C3), and (C2,C4), respectively, although the pair (C1,C4) has no performance gain in F1-Score. In Figure <ref> we present the frame-level AUC ROC curves for our proposed MC-MIL approach in comparison with the baseline SC-MIL models for each pair of cameras. Although, for all pairs of cameras, the curves of the SC-MIL and MC-MIL approaches seem similar and overlap in many regions of the plot of each camera combination, for the pairs C1-C3 and C2-C4 there are still some regions of the plot where the MC-MIL approach outperforms the SC-MIL approach, even if by a marginal gain. Despite the ROC curves being close, we can verify subtle differences in the F1-Score and AUC metrics, indicating the slightly superior performance of the MC-MIL approach over SC-MIL. These results provide an indication of the greater robustness of the MC-MIL approach in terms of AUC and F1-Score in the context of using a single camera in the inference phase. Also, in the context where we have multiple cameras to support the decision inference, the simple late fusion by maximum is able to improve the tradeoff between precision and recall.
§.§ Qualitative Analysis
To further exemplify the performance of our proposed multi-camera approach against the single-camera baseline, we visualize the temporal predictions of the models for the anomaly test scene S3-Multiple-Flows–Time-14-13 for the camera pair (C2, C3), as shown in Figure <ref>. We present the analysis of our proposed approach in comparison with the single-camera baselines. In the video S3-Multiple-Flows–Time-14-13, pedestrians standing in the middle of the intersection are considered an anomaly. The video begins with a small crowd of people walking down the street.
In front of them, three other people stand immobile, lined up side by side in part of the traffic intersection. These three people remain motionless throughout the entire video, which is considered an anomaly here. When the flow approaches, it goes around the three people without bumping into them. The video is challenging since their spatial and motion patterns are very similar to those in the normal videos. This event is similar to a normal scene where people in a crowd simply walk. We mark with red bounding boxes the position of the three people remaining motionless. At frame 100, it is no longer possible to clearly distinguish the crowd from the people standing in the middle of the street. We can observe that the single-camera models performed poorly, while the multi-camera approach was able to detect the anomaly event even when the two groups of people were very close together.
§ RELATED WORK
There is a considerable number of research studies on video surveillance and video anomaly detection. Comprehensive reviews that expand the coverage of this section can be found in <cit.>. This section overviews related studies about binary classification and multiple instance learning in video anomaly detection. <cit.> presents a wrapper-based multiple instance learning approach for video anomaly detection that applies a LightGBM model built with publicly available deep features constructed with a clip-based instance generation strategy. They evaluate the approach with the single-camera ShanghaiTech dataset. To mitigate the redundant information in highly correlated deep features in the dataset, the authors remove the most highly correlated features based on the computation and analysis of the Pearson correlation matrix of the training data at clip level. They compare the results against other commonly used methods and with the state-of-the-art literature. The authors observe that the proposed approach is able to surpass the frame-level results of the literature in terms of the AUC metric, although the technique suffers from a high false positive rate. In <cit.> the authors approach the task of video anomaly detection with a model based on two streams to deal with RGB and optical flow data, with specific neural networks for each information modality. The final anomaly score is obtained from the weighted fusion of the anomaly scores generated by each network. The intention is that the information from each stream can complement the other and thus improve the performance of the final detection. The task is treated as a regression problem under the multi-instance paradigm. For the training of each network, the authors make use of the MIL ranking cost function proposed by <cit.>. The authors also evaluate the performance impact regarding the use of the same or a different number of layers in the networks. They observed that the merging of streams with a different number of layers obtained better results than the use of networks with the same number of layers. The authors argue that merging in this way not only allows the use of the complementary information of the two streams but also makes it possible to exploit multi-scale information. <cit.> present a deep neural network model to recognize human activities from multiple cameras by utilizing raw images to feed the network. The approach includes a feature extractor and a discriminator in order to capture the local and temporal information in the data.
The approach contains three components: a convolutional neural network to extract spatial information, an LSTM network to decode temporal information (MSLSTMRes - Multiple Stacked Long Short-term Memory Residual), and a dense layer with softmax activation to classify the feature patterns of all views. The approach performs a late data fusion by the concatenation of the processed data through network blocks CNN and LSTM for each camera before delivering the information to the dense layer. In this sense, the final layer seeks to relate the information of multiple cameras to categorize the action. The approach has five input units, corresponding to the images of each camera. The authors also employ an attention mechanism to get information about the area of a moving object. In this work, we evaluate the application of a multi-camera multiple-instance strategy to optimize the robustness of predictive models by combining loss function values of each camera view in an aggregated loss function for backpropagation and weight adjustment. The intuition of our proposal is that with the employment of a combined loss function that considers multiple camera views rather than conventional MIL training with one single camera, we will be able to improve the final capacity of the predictive model. From a multi-camera video anomaly dataset composed from the benchmark PETS 2009 dataset preprocessed as inflated 3D (I3D) clips in a fine-grained clip-based manner, we get promising results in comparison to the single-camera scenario. § CONCLUSION In this work, we explore the anomaly detection problem in surveillance video by combining Multiple Instance Learning (MIL) with Multiple Camera Views (MC). We are the first to propose the multiple-camera multiple-instance training scheme for the MIL algorithm of Sultani et al. [2018] which uses a combined loss function to take into account the multiple camera views of the same scene during the network weight adjustment. Our proposal does not increase the dimensions of the original regression network. Due to the lack of a suitable dataset available, the PETS-2009 dataset has been re-labeled to perform proof-of-concept testing. The result shows a significant performance improvement in F1 score compared to the single-camera configuration. The use of multiple camera views of the same scene opens up an avenue of research opportunities for improving the performance of anomaly detection in surveillance video. A number of loss functions derived from the original score rank function of Sultani et al. [2018] was published after them (<cit.>, <cit.>, <cit.>). So, in addition to investigating the conception of a new multiple-camera multiple-instance loss function, we are investigating multitask and experimenting to apply these other previously published loss functions for the setting of multiple-camera multiple-instance learning. unsrtnat
http://arxiv.org/abs/2307.03223v1
20230706180001
Neural Network Field Theories: Non-Gaussianity, Actions, and Locality
[ "Mehmet Demirtas", "James Halverson", "Anindita Maiti", "Matthew D. Schwartz", "Keegan Stoner" ]
hep-th
[ "hep-th", "cs.LG" ]
http://arxiv.org/abs/2307.01993v1
20230705025303
Freezing transition in particle-conserving East model
[ "Cheng Wang", "Zhi-Cheng Yang" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "quant-ph" ]
School of Physics, Peking University, Beijing 100871, China zcyang19@pku.edu.cn School of Physics, Peking University, Beijing 100871, China Center for High Energy Physics, Peking University, Beijing 100871, China Quantum kinetically constrained models can exhibit a wealth of dynamical phenomena ranging from anomalous transport to Hilbert-space fragmentation (HSF). We study a class of one-dimensional particle number conserving systems where particle hoppings are subjected to an East-like constraint, akin to facilitated spin models in classical glasses. While such a kinetic constraint leads to HSF, we find that the degree of fragmentation exhibits a sharp transition as the average particle density is varied. Below a critical density, the system transitions from being weakly fragmented where most of the initial states thermalize diffusively, to strongly fragmented where the dynamics are frozen and the system fails to thermalize. Remarkably, the East model allows for both efficient numerical simulations and analytic solutions of various diagnostics of the phase transition, from which we obtain a set of exact critical exponents. We find that the freezing transition in particle-conserving East models belongs to the same universality class as dipole-conserving fracton systems. Our results provide a tractable minimal model for filling-induced freezing transitions associated with HSF, which can be readily tested in state-of-the-art quantum platforms. Freezing transition in particle-conserving East model Zhi-Cheng Yang August 1, 2023 ===================================================== Introduction.- The research field of nonequilibrium quantum many-body dynamics has been a fruitful source of intriguing fundamental questions in theoretical physics over the past few years. While the notion of universality has proved to be a powerful tool in equilibrium statistical mechanics, identifying universality classes in out-of-equilibrium dynamical properties has remained a challenging task. Generic non-integrable quantum many-body systems are expected to thermalize to a maximal entropy state subjected to constraints from conservation laws. Introducing additional ingredients (e.g. disorder, kinetic constraints), however, can impede thermalization and result in a variety of nonequilibrium dynamical phenomena. For example, the Rydberg-blockaded atom array harbors atypical high-energy eigenstates that lead to nonthermal behaviors starting from certain initial states, a phenomenon now known as quantum many-body scars <cit.>. More generally, one can consider quantum kinetically constrained models, where local dynamical moves are restricted. Such systems are either nonergodic and fail to thermalize <cit.>, or exhibit anomalously slow relaxation to thermal equilibrium <cit.>. One paradigmatic example is given by fracton systems <cit.>, where particle moves are subjected to both charge (particle number) and dipole moment (center of mass) conservations. It was shown in Refs. <cit.> that the combination of these two conservation laws and locality lead to Hilbert-space fragmentation (HSF): the Hilbert space within a particular symmetry sector further fractures into many disconnected subspaces, giving rise to exponentially many Krylov subsectors in total which cannot be uniquely labelled by the quantum numbers of the conserved charges. One can further quantify the degree of fragmentation and distinguish between strong and weak fragmentation. 
Weakly fragmented systems have a dominating Krylov subspace within the symmetry sector, such that typical initial states are able to explore most of the Hilbert space and thermalize. On the other hand, in strongly fragmented systems, an arbitrary initial state is only able to explore a vanishingly small fraction of the entire Hilbert space, and the dynamics is essentially frozen. Interestingly, it was recently demonstrated in Refs. <cit.> that strong and weak fragmentation in fractonic models are separated by a continuous phase transition as the charge density is varied. A natural question that follows is whether such a freezing transition associated with HSF is special to fractonic systems, or does it happen in a broader class of kinetically constrained models. If so, do they belong to the same universality class as fractonic models? In this Letter, we study a class of one-dimensional systems with a conserved particle number, where particle hoppings are subjected to an East-like constraint, as illustrated in Fig. <ref>(a). As a result of the kinetic constraint, the Hilbert space within a given particle number sector further fractures into Krylov subspaces <cit.>. We find that the degree of HSF undergoes a sharp transition as the average particle density n=N/L is varied, similarly to fractonic models. While determining the universal scaling properties of the transition in fractonic models has proved to be quite involved <cit.>, the situation is surprisingly simple in East models. We obtain analytic expressions for the critical filling, the size of the largest Krylov subsectors D_ max for r=1, and develop efficient algorithms for computing D_ max for arbitrary r up to L ∼ 10^3. We are also able to simulate the exact dynamics of thermal inclusion at infinite times up to L∼ 10^5. Despite the simplicity of the model, we show that the transition belongs to the same universality class as fracton models with identical critical exponents. Furthermore, we study the Krylov-sector-restricted dynamical structure factor of the model, and find that charge transport is diffusive in the thermal phase, which is to be contrasted with previous results without resolving the Krylov sectors <cit.>. Our results provide a tractable minimal model of disorder-free dynamical phase transition that can be readily tested in state-of-the-art quantum platforms using controlled-unitary gates. Model.- We study a one-dimensional system of N hardcore bosonic particles with nearest-neighbor hopping on L lattice sites. Each site i can host n_i=0 or 1 particle, and one can equivalently consider a qubit or spin-1/2 system where the computational or spin-z basis configurations correspond to particle occupations. We use open boundary condition unless otherwise specified. We further impose an East-like kinetic constraint on the dynamics: a particle can hop to the right only when there is an occupied site within a distance r to its left (i.e., an occupied site can mobilize nearby particles to its “east"), as illustrated in Fig. <ref>(a) for r=1 and r=2. Such a constraint is inspired by the East model <cit.>, or more generally, facilitated spin models in classical glasses <cit.>, where spin flips are facilitated by an adjacent spin along a particular orientation. Recently, its quantum versions (without particle number or S^z conservation) have been proposed as candidates for slow thermalization and localization without disorder <cit.>. 
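To make the kinetic constraint concrete, a random-sequential-update sketch of the allowed moves is given below. This is only an illustration of the facilitated hopping rule (both hop directions are allowed when a facilitating particle sits within distance r to the left of the bond), not the exact layered circuit used for the numerics.

```python
import numpy as np

def east_sweep(config, r=1, rng=None):
    """One sweep of East-constrained, particle-conserving hops on a 0/1 array.

    The occupations of sites i and i+1 may be exchanged only if at least one
    of the r sites to the left of site i is occupied (open boundaries).
    Sketch with random sequential updates, not the exact brickwork circuit.
    """
    rng = rng or np.random.default_rng()
    L = len(config)
    for _ in range(L):
        i = rng.integers(1, L - 1)           # left site of the candidate exchange
        if config[max(0, i - r):i].any():    # facilitating particle to the left
            if rng.random() < 0.5:           # attempt the exchange with probability 1/2
                config[i], config[i + 1] = config[i + 1], config[i]
    return config
```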
Introducing an additional U(1) particle number conservation allows for the study of HSF <cit.> as well as transport properties of the conserved charge <cit.>. While one can construct Hamiltonians generating the constrained dynamics in Fig. <ref>(a), the essential physics of HSF and the freezing transition does not require a time-independent Hamiltonian. Therefore, we consider more generally dynamics generated by classical Markovian circuits, which is equivalent to quantum automaton circuits when starting from a single particle configuration with fixed occupation numbers on each site. The circuit consists of consecutive layers of (r+2)-site gates. Take r=1 as an example: local three-site gates implement the moves: ∙∘∙↔∙∙∘, where ∙ and ∘ denote an occupied and empty site, respectively. As a result of the kinetic constraint, not all particle configurations belonging to the same charge sector can be connected to one another under the dynamics, and hence the Hilbert space further fractures into Krylov subsectors. To see this, notice that according to Fig. <ref>(a), the position of the leftmost particle is conserved under the dynamics, and thus configurations with distinct leftmost particle positions cannot be connected by the dynamical moves even if they have the same particle number. Freezing transition.- Intuitively, it is easy to understand why the average particle density can affect the degree of fragmentation in this model. At low fillings, particles in the system are well isolated from one another, and it is very unlikely to find an occupied site to the left of a particle to trigger hopping. Thus, most of the particles are frozen and the Hilbert space is strongly fragmented. At high fillings, it is almost always possible to find a nearby occupied site, and the constraint essentially becomes ineffective. Therefore, we expect the structure of the Hilbert space to change qualitatively as the density is varied. To quantify the degree of HSF, it is useful to consider the ratio of the dimension of the largest Krylov sector and that of the entire symmetry sector D_ max/D_ sum. While the total size of a symmetry sector with N particles is simply D_ sum = L N, an analytic expression for D_ max is usually hard, and numerically enumerating all configurations within a Krylov sector is only possible for very small system sizes. In fact, it is in general difficult to even identify the largest Krylov sector within each symmetry sector. However, for the East models, we are able to develop a simple algorithm for computing D_ max recursively, and even analytic solutions in certain cases. To begin with, it is easy to show that within a symmetry sector of N particles, the largest Krylov sector is generated from the following root configuration: ∙∙∙⋯∙_N ∘∘⋯∘_L-N, i.e., a domain wall configuration with all N particles occupying the first N sites from the left. The reason is that this sector has only one frozen particle which is the leftmost particle, and hence one active block. If there are more than one active blocks separated by frozen regions, one can always form a different Krylov sector by concatenating the active blocks and moving all frozen regions to the right. The resulting Krylov sector is necessarily larger than the original one, and hence the largest Krylov sector is generated by particle configuration (<ref>). After identifying the largest Krylov sector, we have yet to compute its size. This can be done recursively in the East model, as illustrated in Fig. <ref>(a). 
First of all, starting from the root configuration (<ref>), the longest distance that the particles can spread is given by L_ max=(r+1)N - r, corresponding to the most dilute particle configuration: ∙∘⋯∘_r+1∙∘⋯∘_r+1∙⋯. Denote the dimension of the largest Krylov sector with N particles on L sites as D^ max_N,L. Apparently, we have D^ max_N,L = D^ max_N, L_ max for L > L_ max. For L≤ L_ max, one can obtain D^max_N,L from the dimensions of Krylov sectors of the same type [i.e. those that are generated from the root configuration (<ref>)] with (N-1) particles on L-1, L-2, …, N-1 lattice sites, which corresponds to fixing the rightmost particle at all possible positions [see Fig. <ref>(a)]: D^ max_N,L = D^ max_N-1, L-1 + D^ max_N-1, L-2 + ⋯ + D^ max_N-1, N-1. To summarize, we have the following recursion relation: D^ max_N,L= D^ max_N,L_ max, L > L_ max ∑_i=N-1^L-1 D^ max_N-1, i, L ≤ L_ max. Carrying out the above recursion relation up to system size L with particle numbers N≤ L requires only 𝒪(L^2) operations, which allows us to efficiently compute the size of the largest Krylov sector up to L∼ 10^3 and obtain clear signatures of a phase transition. In Fig. <ref>(b)-(d), we show numerical results for r=1 using the recursive algorithm described above. Fig. <ref>(b) clearly shows a transition in the ratio D_ max/D_ sum as the average density is varied. For n<0.5, D_ max constitutes a small fraction of the entire symmetry sector, indicating strong fragmentation. For n>0.5, the ratio approaches order one, indicating weak fragmentation. We further consider the scaling of the ratio D_ max/D_ sum with L below and above the critical filling, as shown in Fig. <ref>(c). In the weakly fragmented phase, this ratio saturates to a constant of order unity as L increases. In the strongly fragmented phase, the ratio decays exponentially with L, which implies that even the size of the largest Krylov subsector is vanishingly small compared with the full symmetry sector in the thermodynamic limit. At the critical point, we find that D_ max/D_ sum exhibits a power-law decay with system size: D_ max/D_ sum∼ 1/L, as shown in Fig. <ref>(d). In the Supplemental Material (SM) <cit.>, we prove that for r=1, D_ max at the critical point (with L=2N) is precisely given by the N-th Catalan number: D^ max_N, 2N= C_N ≡1/N+12N N = 1/N+1 D_ sum. Thus, the ratio D_ max/D_ sum∼ L^-1, which explains our numerical finding in Fig. <ref>(d). The qualitative change in the structure of the Hilbert space as diagnosed by the ratio D_ max/D_ sum has a direct consequence on the dynamics of the system, starting from an initial state at a given filling n. We consider the average density of frozen sites ⟨ n_F⟩, defined as the fraction of sites whose occupation numbers remain unchanged under the circuit dynamics at infinite times <cit.>. This quantity is averaged over all initial states within the same charge sector, and serves as an order parameter for the transition. Notice that a site is frozen if its occupation is the same in all configurations within the same Krylov sector, and hence ⟨ n_F ⟩ is closely related to the connectivity of the Hilbert space. This order parameter, however, is in general hard to compute. Usually one has to either enumerate all Krylov sectors for small system sizes (infinite time, finite size regime), or simulate the dynamics for large systems at early times (infinite system, finite time regime) <cit.>. 
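For concreteness, the recursion above for D^max_{N,L} can be implemented with a few lines of memoized code; the sketch below (the helper names are ours) treats the range r as a parameter and uses D^max_{1,l} = 1 as the base case.

```python
from functools import lru_cache

def largest_krylov_dim(N, L, r=1):
    """Dimension of the largest Krylov sector for N particles on L sites
    (East model with range r), via the recursion in the text. Sketch."""
    @lru_cache(maxsize=None)
    def D(n, l):
        if n <= 1:
            return 1                      # a single (leftmost, frozen) particle
        l_max = (r + 1) * n - r           # maximal spread of n particles
        if l > l_max:
            return D(n, l_max)
        # fix the rightmost particle at each allowed position
        return sum(D(n - 1, i) for i in range(n - 1, l))
    return D(N, L)

# e.g. for r = 1 and L = 2N this reproduces the Catalan numbers:
# largest_krylov_dim(3, 6) == 5, largest_krylov_dim(4, 8) == 14
```

An iterative fill of the (n, l) table gives the same values with O(L^2) entries and avoids deep recursion for large L.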
Fortunately, the East model allows us to access both infinite time and infinite system limit simultaneously via an efficient way of simulating the dynamics of growing thermal bubbles. We defer the detailed algorithm to the next section, and show our numerical result of ⟨ n_F ⟩ in Fig. <ref>(b), which we obtain from sampling 10^3 different configurations of size L=10^3 at a given filling and compute the average n_F. The result clearly shows that ⟨ n_F ⟩ is zero for n>0.5 (thermal phase) and becomes nonzero for n<0.5 (frozen phase). Furthermore, we find that near the transition, ⟨ n_F ⟩∼ (n_c-n)^β with β =1. The critical density n_c turns out to be quite straightforward to compute for the East model. Consider the largest Krylov sector (<ref>). Since the longest distance that the particles can spread is L_ max, for L<L_ max, all sites are necessarily active; for L>L_ max, there will be sites on the right end that cannot be reached by particles, and a non-zero fraction of frozen sites will emerge. Therefore, the critical density is given by n_c = N/L_ max1/1+r. For r=1, this is in agreement with our numerical results. In the SM <cit.>, we provide numerical results for r=2 which also shows perfect agreement with Eq. (<ref>). Thermal inclusion.- For n ≲ n_c, a large sample of the system typically contains local thermal regions with n>n_c, as well as frozen regions with n<n_c. Under time evolution, excess particles in the thermal region will propagate into nearby frozen regions and absorb them into a larger thermal region. Of course, this process will decrease the charge density of the thermal region, and hence the growth of a thermal bubble stops once its average filling decreases to n_c. We study this thermal inclusion process for the East model near the critical point. We find an efficient way of figuring out the maximal size that an initial thermal seed can grow into at infinite times for the East model. For a random initial particle configuration, we use a pointer that starts from the leftmost site and moves towards the right, until we reach the first particle and start counting the size of the current thermal bubble. We then add more sites that can be absorbed into the bubble by moving the pointer further to the right according to the following rule. We compute the total number of particles N currently in the bubble, and move the pointer to site (r+1)N+1=L_ max+(r+1) counting from the leftmost site in the bubble, which is the farthest site that the current bubble can affect. The above step is repeated until no additional particle is encountered between two consecutive moves of the pointer, indicating that the thermal bubble cannot grow any further to the right at this point. We record the length of this thermal region, and start over by moving our pointer to the right until we reach a new particle, and start counting the size of the next thermal region. The procedure continues until we reach the rightmost site of the system. We give a concrete example of this algorithm in Fig. <ref>(a). Apparently, this procedure requires only 𝒪(L) operations, and can be carried out for extremely large system sizes. In Fig. <ref>(b), we find that the ultimate sizes of the thermal regions follow a power-law distribution near the critical point P(l)∼ l^-3/2 for l<ξ, where ξ is identified as the correlation length. We can further extract ξ from the moments of P(l): ξ = ⟨ l^2 ⟩ / ⟨ l ⟩. We find that the correlation length diverges as ξ∼ (n_c-n)^-ν with ν≈ 2 near the critical point. 
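A minimal implementation of this pointer rule is sketched below. Note that "the length of this thermal region" is interpreted here as the maximal spread (r+1)N - r of the N particles in the final bubble, which is one possible reading and an assumption of the sketch; the function returns the list of final bubble lengths used to build P(l).

```python
import numpy as np

def bubble_lengths(config, r=1):
    """Final sizes of the thermal bubbles grown from a 0/1 occupation array,
    following the pointer rule described in the text (sketch)."""
    config = np.asarray(config, dtype=int)
    L = len(config)
    lengths, i = [], 0
    while i < L:
        while i < L and config[i] == 0:   # scan right for the next seed particle
            i += 1
        if i == L:
            break
        start, pointer = i, i + 1         # bubble currently covers sites [start, pointer)
        while True:
            n = int(config[start:pointer].sum())
            new_pointer = min(start + (r + 1) * n + 1, L)  # farthest site the bubble affects
            if config[pointer:new_pointer].sum() == 0:
                break                     # no new particle absorbed: bubble is final
            pointer = new_pointer
        lengths.append((r + 1) * n - r)   # assumed definition of the region length
        i = new_pointer                   # resume the scan beyond this bubble
    return lengths
```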
Interestingly, the critical exponents we obtained by explicitly growing all thermal bubbles microscopically is in perfect agreement with a simplified effective model constructed for the fracton model <cit.>. In the SM, we further show that these exponents remain the same for r=2. We are thus led to conclude that the universality class of the freezing transition is largely independent of the microscopic details of the model, as long as it is driven by charge density, and the underlying physics is captured by the growth of local thermal bubbles with n>n_c until they self-tune to the critical density. We now have an explicit example where the validity of the effective model proposed in Ref. <cit.> is confirmed via exact numerical simulations of the microscopics. Krylov-sector-restricted transport.- Finally, we study transport properties of the particle-conserving East model in the thermal phase. We compute the infinite-temperature autocorrelation function of the charge density on a given site C(0,t) ≡ tr[n_i(t) n_i(0)]/D, where the trace is restricted to configurations within a given charge sector, and D denotes the size of this sector. We use periodic boundary condition and average over all sites. In Fig. <ref>, we find that the autocorrelation function at long times decay as C(0,t)∼ t^-1/2, consistent with diffusive transport with z=2. This result is easy to understand: since the kinetic contraint is ineffective in the thermal phase, particles hop around as in an unconstrained U(1) symmetric system, and hence charge transport obeys diffusion. Notice, however, that this result is in sharp contrast to previous studies where this correlator is averaged over all symmetries sectors, which leads to a diverging dynamical exponent at late times <cit.>. Our results clarify the origin of this distinction: the dynamics within each U(1) sector actually undergo a phase transition, and hence it is crucial to study the Krylov-sector resolved transport properties. Recently, the existence of diffusive Krylov sectors in subdiffusive dipole-conserving systems has also been demonstrated <cit.>. Summary and outlook.- We study a particle-conserving East model in which particle hoppings are facilitated by the presence of other particles to its left. We find that the structure of the Hilbert space and the dynamical properties exhibit a sharp transition as the average particle density is varied, going from weakly fragmented and thermal at high fillings n>n_c to strongly fragmented and frozen at low fillings n<n_c. The special feature of the model allows for both analytic solutions and efficient numerical simulations which are combined to characterize the universal properties at the transition. Despite its simplicity, we find that the transition belongs to the same universality class as in dipole-conserving fracton models, where the microscopics are much more complicated. Our results thus provide a tractable minimal model for filling-induced freezing transitions in quantum many-body systems. The East-like constraint can be implemented via controlled-unitary gates, and hence the physics explored in this work can be readily tested in state-of-the-art quantum platforms such as trapped ions and superconducting qubits, using random circuit evolutions. Acknowledgments.- We thank Jingwu Tang for helpful discussions on Catalan number. Z.-C.Y. is supported by a startup fund at Peking University. Numerical simulations were performed on High-performance Computing Platform of Peking University. 
§.§ Supplemental Material for “Freezing transition in particle-conserving East model" § ANALYTIC EXPRESSIONS FOR THE SIZE OF THE LARGEST KRYLOV SECTOR FOR R=1 In this section, we give analytic expressions for the sizes of the largest Krylov sectors for r=1. We start from the critical filling n=0.5, or L=2N. The largest Krylov sector is generated from the root configuration: ∙∙∙⋯∙_N ∘∘⋯∘_N. Since this sector is fully connected, configurations belonging to this sector cannot have any frozen site in the bulk that separates the entire system into disconnected regions. Therefore, the allowed configurations must satisfy the following condition: for any bipartitioning of the system into A=[1,k] and A=[k+1,L], there cannot be more empty sites than occupied sites within region A. For example, ∙∘∙∘∘∙⋯ cannot reside within this subsector, and is hence forbidden. Counting the total number of configurations satisfying the abovementioned condition is a well-known problem in combinatorics. The problem is equivalent to counting the number of Dyck words of length 2N, with occupied and empty sites corresponding to two different alphabets. The solution is given by the N-th Catalan number: C_N ≡1/N+12N N. To generalize the above result to n>0.5, it is useful to first introduce an alternative interpretation of the combinatorial problem. Consider a square lattice grid as depicted in Fig. <ref>. For a configuration with N particles and N holes, we start from the origin of the lattice, and draw a horizontal arrow → each time we see a particle, and a vertical arrow ↑ for each hole. We end up with a monotonic path connecting the origin (0,0) and site (N,N) on the lattice. Here, monotonicity simply means that there is no left or down pointing arrow, and the path contains precisely 2N steps. However, due to the constraint that there cannot be more empty sites than occupied sites for any contiguous subregions including the leftmost site, the allowed paths can only stay in the yellow shaded region depicted in Fig. <ref>. In particular, they cannot touch or cross the red dashed line y=x+1. To count the total number of allowed paths, we need to substract from all paths connecting the two points those that are disallowed. There is a simple way of counting the number of disallowed paths. As we illustrate in Fig. <ref>(a), suppose a path touches or crosses the red dashed line. We do a reflection of the path starting from the crossing point about the red line [orange path in Fig. <ref>(a)]. The reflected path now connects the origin to the reflected site (N-1, N+1). It is easy to see that, disallowed paths have a one-to-one correspondence to all possible paths connecting the origin and the reflected site (N-1, N+1). Hence, we interpret the Catalan number as the subtraction of disallowed paths from all possible paths: C_N = 2N N - 2N N-1. Using this interpretation, it is straightforward to obtain an analytic expression for the size of the largest Krylov sector for n>0.5. In this case, we have N particles and L-N<N holes, and configurations belonging to this Krylov sector can be mapped to all paths connecting the origin and site (N, L-N) restricted in the yellow shaded region, as depicted in Fig. <ref>(b). Using the same trick of mapping disallowed paths to paths connecting the origin and the mirror-reflected point, we obtain the total number of allowed paths: D^ max_N,L = L N - L N+1. § NUMERICAL RESULTS FOR R=2 In this section, we present additional numerical results for the East model with range r=2. 
We will see that the essential physics discussed in the main text is independent of the range r. We start by showing our diagnostics for the phase transition in Fig. <ref>. Notice that in this case, we do not have analytic expressions for the size of the largest Krylov sector as in the case of r=1. We therefore implement the recursive algorithm outlined in the main text, which works independently of the range r. The ratio D_ max/D_ sum exhibits a qualitative change at the critical density n_c ≈ 0.33 [Fig. <ref>(a)]. For n<n_c, the largest Krylov sector constitutes a vanishingly small fraction of the full symmetry sector in the thermodynamic limit, indicative of strong fragmentation. The ratio decays exponentially with the system size L in this regime [Fig. <ref>(b)]. For n>n_c, the ratio approaches order one, indicating weak fragmentation, and the system thermalizes with high probability from a random initial state. At the critical point, D_ max/D_ sum again shows a power-law decay with system size as L^-1. We can similarly consider the fraction of frozen sites averaged over all configurations in a symmetry sector as a function of the filling, which serves as an order parameter for the transition. This order parameter changes from zero to nonzero at the critical n_c, as shown in Fig. <ref>(d). Notice that the position of the critical point is again in excellent agreement with the general expression n_c=1/(r+1), which is equal to 1/3 for r=2. We also consider the process of thermal inclusion in this case, for which numerical results are summarized in Fig. <ref>. We find that the distribution of the sizes of the thermal bubbles again obeys P(l)∼ l^-3/2 for l<ξ, and the correlation length itself diverges as ξ∼ (n_c-n)^-ν with ν≈ 2. Finally, we confirm that charge transport is diffusive in the thermal phase by computing the autocorrelation function restricted to a specific symmetry sector, as shown in Fig. <ref>.
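As a complementary small-scale check of the fragmentation diagnostics discussed above, the Krylov sectors of a short chain can be enumerated by brute force. The sketch below is not the recursive algorithm of the main text; it assumes the pair-exchange move "swap sites (i, i+1) if at least one of the r sites to the left of site i is occupied" (open boundary), which is our reading of the constraint, and compares the largest sector against the r=1 analytic formula:

```python
# Hedged sketch: exhaustive enumeration of Krylov sectors for a tiny chain,
# under the *assumed* constrained exchange move described in the lead-in.
from itertools import combinations
from math import comb

def krylov_sectors(L, N, r):
    states = set()
    for occ in combinations(range(L), N):
        cfg = [0] * L
        for i in occ:
            cfg[i] = 1
        states.add(tuple(cfg))
    sectors, seen = [], set()
    for s in states:
        if s in seen:
            continue
        stack, sector = [s], {s}
        while stack:                              # flood-fill one connected component
            cur = stack.pop()
            for i in range(L - 1):
                facilitated = any(cur[i - d] for d in range(1, r + 1) if i - d >= 0)
                if cur[i] != cur[i + 1] and facilitated:
                    nxt = list(cur)
                    nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
                    nxt = tuple(nxt)
                    if nxt not in sector:
                        sector.add(nxt)
                        stack.append(nxt)
        seen |= sector
        sectors.append(len(sector))
    return sorted(sectors, reverse=True)

L, N, r = 10, 5, 1
sizes = krylov_sectors(L, N, r)
# These two numbers should match if the assumed move rule is the one used in the paper:
print(sizes[0], comb(L, N) - comb(L, N + 1))
print(sizes[0] / comb(L, N))   # D_max / D_sum diagnostic for this tiny size
```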
http://arxiv.org/abs/2307.02278v1
20230705132451
Smooth Particle Mesh Ewald-integrated stochastic Lanczos Many-body Dispersion algorithm
[ "Pier P. Poier", "Louis Lagardère", "Jean-Philip Piquemal" ]
physics.chem-ph
[ "physics.chem-ph" ]
AIP/123-QED Smooth Particle Mesh Ewald-integrated stochastic Lanczos Many-body Dispersion algorithm]Smooth Particle Mesh Ewald-integrated stochastic Lanczos Many-body Dispersion algorithm pier.poier@sorbonne-universite.fr Sorbonne Université, Laboratoire de Chimie Théorique, 75005, Paris, France Sorbonne Université, Laboratoire de Chimie Théorique, 75005, Paris, France Sorbonne Université, IP2CT, FR 2622 CNRS, Paris, France Sorbonne Université, Laboratoire de Chimie Théorique, 75005, Paris, France The University of Texas at Austin, Department of Biomedical Engineering, TX, USA jean-philip.piquemal@sorbonne-universite.fr We derive and implement an alternative formulation of the Stochastic Lanczos algorithm to be employed in connection with the Many-Body Dispersion model (MBD). Indeed, this formulation, which is only possible due to the Stochastic Lanczos' reliance on matrix-vector products, introduces generalized dipoles and fields. These key quantities allow for a state-of-the-art treatment of periodic boundary conditions via the 𝒪(Nlog(N)) Smooth Particle Mesh Ewald (SPME) approach which uses efficient fast Fourier transforms. This SPME-Lanczos algorithm drastically outperforms the standard replica method which is affected by a slow and conditionally convergence rate that limits an efficient and reliable inclusion of long-range periodic boundary conditions interactions in many-body dispersion modelling. The proposed algorithm inherits the embarrassingly parallelism of the original Stochastic Lanczos scheme, thus opening up for a fully converged and efficient periodic boundary condition treatment of MBD approaches. [ Jean-Philip Piquemal^* August 1, 2023 ========================== § INTRODUCTION Electron correlation is one of the most fascinating and difficult phenomenon to model. Dispersion in particular originates from the long-range electronic correlation among distant electron densities and represents the purely attractive contribution in van der Waals interactions. These are ubiquitous in nature: they can be for example observed in milk as they drive the formation of lipid droplets that, through light scattering, give to milk its typical white color. Geckos and spiders, on the other hand, also take advantage of dispersion for supporting their entire weight on smooth vertical surfaces. From the microscopic point of view, dispersion interactions are crucial in many processes driven by non-covalent phenomena such as protein folding, protein-protein interactions, supra molecular and inter-molecular interactions in general. An exact modelization of dispersion requires the analytical solution of the electronic Schrödinger equation, which is unfortunately impossible for practical cases. In the past decades, very accurate numerical wave function-based quantum chemical methods have been developed to tackle electron correlation, thus implicitly capable of describing dispersion and intermolecular interactions.<cit.> These methodologies, however, can only be applied to molecules composed of very few atoms, thus preventing the study of chemically and biologically relevant systems. The advent of Density Functional Theory (DFT) represents a milestone in quantum chemistry as it provides a cheap way of including electronic correlation, as its computational cost is similar to that of the Hartree-Fock method. Nevertheless, the intrinsic local nature of common exchange-correlation functionals, makes DFT inadequate for describing long-range correlation effects, thus dispersion. 
To retain the DFT scaling benefits, extensive efforts have been spent in the past years in developing dispersion corrections able to improve the DFT capability of describing intermolecular interactions, crucial in material design and molecular modelling in general. Many of these correction techniques rely on simple empirical pairwise treatments of dispersion, similar to those embraced in force fields. Their simplicity, together with the negligible computational cost and the good accuracy improvement, made possible for these methods to be included in most of the quantum chemistry softwares.<cit.> Despite their large diffusion, these pairwise corrections completely neglect the many-body nature of dispersion interactions inherited from the long-range electronic correlation on the basis of these phenomena. In recent years, the interest towards Many-Body Dispersion correction models has risen<cit.>. In particular the MBD@rSCS model by Tkatchenko, Di Stasio and Ambrosetti, together with its variations, has become especially popular by virtue of its high accuracy obtained despite of the absence of empirical parameters except for a single range-separation parameter for the coupling between the long-range MBD energy and the chosen DFT functional.<cit.> The MBD@rsSCS model can be summarized as follows. First, a set of atomic dipole polarizabilities are obtained from the partitioning of the molecular electron density or, alternatively, retrieved from a deep-neuronal network as recently proposed.<cit.> Secondly, the polarizabilies are made frequency-dependent via Padé approximation and subsequently a Dyson-like self-consistent screening linear equation is solved for a selected set of frequencies. Lastly, the set of screened frequency-dependent polarizabilities are used as key quantities in building the MBD interaction matrix which spectrum is used to express the final many-body dispersion energy. Compared to the 𝒪(N^4) scaling of Kohn-Sham equations' resolution, the MBD@rsSCS model involves a small additional computational cost. However, for increasingly large systems, the 𝒪(N^3) scaling of the diagonalization procedure becomes no longer negligible and, it can even become a burden if coupled to 𝒪(N) DFT methods. Recently, we have proposed and implemented an alternative resolution of the MBD key equations that overcomes this scaling issue that is based on the state-of-art Stochastic Lanczos (SL) trace estimation.<cit.> Due to the the sparsity of the matrices involved, it exhibits linear-scaling with the system size. The proposed stochastic Lanczos MBD approach (SL-MBD) further benefits from an embarrassingly parallel implementation arising from its stochastic nature and this allows for reaching system sizes of hundred thousands atoms within a few minutes' time. <cit.> Compared to a simple pairwise description, this many-body treatment of dispersion interactions in systems such as solvated proteins has revealed a higher degree of delocalization as well as a collective solute-solvent character leading to remarkable long-range interactions.<cit.> The potentially longer-range of MBD interactions stresses the importance of the inclusion of a coherent full periodic boundary condition (PBC) treatment, especially in highly ordered and periodic systems. In this direction, recent efforts have been spent in past years.<cit.> By virtue of the above mentioned long-range nature of MBD interactions, it is of broad interest to generalize the SL-MBD approach to a full PBC treatment. 
However, the quadratic-scaling approaches typically employed in connection with MBD models are clearly not suitable to be integrated into the SL-MBD methodology, in terms of both memory requirements and computational efficiency, given the large systems targeted. A more sophisticated approach therefore has to be developed. In the context of long-range electrostatics modelling, this scaling limitation was addressed via Ewald summation techniques, which formally scale as 𝒪(N^2), although a proper optimization lowers the scaling to 𝒪(N^3/2). Ewald summation techniques replace the original conditionally convergent energy summation with absolutely convergent ones, consisting of a real-space and a reciprocal-space summation as well as a self-interaction term. The Particle Mesh Ewald (PME) method proposed by Darden, York, and Pedersen drastically improved the performance of the Ewald summation technique.<cit.> Its idea relies on the efficient calculation of the reciprocal-space energy contribution thanks to fast Fourier transforms scaling as 𝒪(Nlog(N)). The PME method with its different variants (especially the Smooth Particle Mesh Ewald (SPME)<cit.>) has become the standard algorithm implemented in nearly all the most efficient Molecular Dynamics packages thanks to its scaling features. In this work, we derive and present a modification of the SL-MBD method based on a PME treatment of periodic boundary conditions. The resulting Smooth Particle Mesh Ewald stochastic Lanczos (SPME-SL) MBD approach is suitable for large systems as it exhibits the typical 𝒪(Nlog(N)) scaling inherited from the PME method. In the next section, we review the MBD model as well as the stochastic Lanczos method in its standard form. A theory section is then dedicated to the derivation of the modified SPME-based Lanczos quadrature scheme, followed by a section dedicated to numerical results where the computational performance of the method is discussed and compared to that of the standard replica method. 
For the explicit expression of 𝐓' we refer to the work in reference.<cit.> The eigenvalues (λ_i) of the MBD interaction matrix 𝐕, shown for the ij block in eq.(<ref>), are required to obtain the MBD energy ℰ_MBD via the plasmonic formula shown in eq.(<ref>) that represents the correlation energy of the interacting fluctuating dipoles. ℰ_MBD=1/2∑_i=1^3N√(λ_i)-3/2∑_i=1^Nω_i We note in passing that in the MBD@rSCS model, a Dyson-like self-consistent screening equation is solved to obtain a set of screened parameters (α_i,ω_i). For the sake of simplicity we will not consider, in the following discussion, this additional procedure as this does not alter the generality of our derivations, later presented. The solution of eq.(<ref>) is bound to the 𝒪(N^3) scaling of the diagonalization step that, as mentioned earlier, strongly limits the applicability of the method to large systems. The SL-MBD method bypasses the diagonalization of 𝐕 by exploiting the alternative but equivalent expression of the plasmonic formula, eq.(<ref>), where the sum over the whole spectrum of 𝐕 is rewritten in term of its trace, that is invariant under any change of basis, namely ∑_i=1^3N√(λ_i)=Tr(√(Λ))=Tr(√(𝐕)) where Λ is the diagonal form of 𝐕 obtained via the unitary transformation Λ=𝐖^†𝐕𝐖. ℰ_MBD=1/2Tr(√(𝐕))-3/2∑_i=1^Nω_i The evaluation of the trace of a symmetric matrix function such as Tr[√(𝐕)] is, in the proposed SL-MBD, based on two main assumptions. First, the stochastic Hutchinson trace estimator (HTE) <cit.> is invoked, Eq.(<ref>), 𝐯_l being one of the R normalized random vectors of dimension D (in our case D=3N), which entries follow a Rademacher distribution, i.e. they can assume values of either 1 or -1 with the same probability. Tr[√(𝐕)]≈D/R∑_l=1^R𝐯_l^†√(𝐕)𝐯_l 𝐯_l =𝐮_l/𝐮_l u_l,i =  1,   Pr= 1/2 -1,  Pr= 1/2 Second, each of the R scalar expectation values in Eq.(<ref>) can be expressed in terms of Tr[√(Λ)] and the unitary transformation 𝐖 as reported in Eq.(<ref>) where we introduced μ_l=𝐖^†𝐯_l. 𝐯_l^†√(𝐕)𝐯_l=𝐯_l^†𝐖√(Λ)𝐖^†𝐯_l=∑_i^Dμ_l,i^2√(λ_i) The last equality in Eq.(<ref>) corresponds to the Riemann–Stieltjes integral<cit.> defined in Eq.(<ref>) which is approximated via the general (M+1)-points quadrature shown in eq.(<ref>), {τ_k} and {θ_k} representing the unknown weights and nodes respectively. ∑_i^Dμ_l,i^2√(λ_i) =∫_a^b √(t)dμ(t) μ(t) = 0            ,    t<a=λ_1 ∑_j=1^i-1μ_j^2   ,   λ_i-1≤ t < λ_i ∑_j=1^Dμ_j^2   ,   b=λ_n<t 𝐯_l^†√(𝐕)𝐯_l=∫_a^b √(t)dμ(t)≈∑_k=1^M+1τ^(l)_k√(θ_k) By inserting Eq.(<ref>) in Eq.(<ref>), one can identify the complete expression for the stochastic trace estimation, Eq.(<ref>). Tr[√(𝐕)]≈D/R∑_l=1^R∑_k=1^M+1τ^(l)_k√(θ_k^(l)) In the stochastic Lanczos algorithm, the nodes and weights for the quadrature relative to each of the l-th terms in the first summation, are identified as the eigenvalues {λ_k^(l)} and the first entry (squared) of the eigenvectors {[U_1,k^(l)]^2} of the tridiagonal Δ^(l) matrix which is the representation of the original MBD potential matrix 𝐕 in the M+1 Krylov subspace 𝒦_M+1={𝐲_1,𝐲_2,…,𝐲_M+1} where the basis vectors are gathered as the 𝐘^(l) matrix's columns. Δ^(l)=𝐘^†^(l)𝐕𝐘^(l) Λ^(l)=𝐔^(l)†Δ^(l)𝐔^(l) The solution of eq.(<ref>) represents the crucial part of the algorithm in terms of efficiency while eq.(<ref>), by virtue of the small matrices involved (Krylov subspace dimension rarely exceeding 15), is inexpensive and it is solved by means of standard libraries. 
Eq.(<ref>) is practically solved as follows: For each of the R terms employed in the HTE, 𝐯 (from now on the superscript (l) is dropped for simplicity) is taken as the first basis vector of the Krylov subspace (𝐲_1), while the remaining basis vectors {𝐲_k } (columns of 𝐘) and the diagonal (Δ_kk) and off-diagonal (Δ_(k-1)k=Δ_k(k-1)) elements of Δ are retrieved recursively as shown in eq.(<ref>), where the asterisk denotes the unnormalized k-th basis vector. 𝐲_1 = 𝐯 b_k𝐲_k =𝐲^*_k= 𝐥_k-1-a_k-1𝐲_k-1-b_k-1𝐲_k-2 𝐥_k =𝐕𝐲_k b_k =√(𝐲^*_k^†𝐲^*_k)=Δ_(k-1)k=Δ_k(k-1) a_k =𝐲_k^†𝐕𝐲_k=𝐲_k^†𝐥_k=Δ_kk Δ^(l) = [ Δ_11^(l) Δ_12^(l) 0 0 0; Δ_21^(l) ⋱ ⋱ 0 0; 0 ⋱ Δ_kk^(l) ⋱ 0; 0 0 ⋱ ⋱ Δ_(M)(M+1)^(l); 0 0 0 Δ_(M+1)(M)^(l) Δ_(M+1)(M+1)^(l); ] In general, the k-th iteration retrieves the Δ_kk diagonal element as well as the contiguous upper/lower Δ_(k-1)k and Δ_k(k-1) ones. In the next section, expressions for 𝐲, a_k and b_k in the case of full PBC enforced via the PME method will be derived. § THEORY The easiest strategy for including PBC in the MBD model consists in looping over a selected number of cell vectors 𝐧, each denoting a periodic image of the central simulation cell U defined by its edges (𝐚_1,𝐚_2 ,𝐚_3 ) and with volume V=𝐚_1 · (𝐚_2 ×𝐚_3). This results in the modified dipole-dipole interaction matrix 𝐓^pbc shown in eq.(<ref>), where 𝐓'_ij(j∈0) represents the ij interaction block belonging to the central simulation cell, while 𝐓'_ij(j∈𝐧) represents the interaction between particle i and particle j, this time belonging to the cell's periodic replica identified by 𝐧. In particular, the list of cells (and therefore their associated 𝐧 vectors) is chosen according to a cutoff radius, as pictorially represented in Fig.<ref> 𝐓^pbc_ij =𝐓'_ij(j∈0)+∑_𝐧≠0𝐓'_ij(j∈𝐧) 𝐧 =n_1𝐚_1 + n_2𝐚_2+n_3𝐚_3    n_1,n_2,n_3∈ℤ^3 The substitution of 𝐓'_ij with 𝐓^pbc_ij inside 𝐕 (often referred to as the replica method) and the subsequent use of its eigenvalues in eq.(<ref>) was discussed in reference.<cit.> However, the use of truncated methods based on eq.(<ref>) involves the problems listed and discussed below. First, the summation in eq.(<ref>) represents a slowly and conditionally convergent series that characterizes not only dipole-dipole interactions, but also charge-charge, charge-dipole and charge-quadrupole Coulomb interaction kernels.<cit.> Consequently, the slow convergence of eq.(<ref>) strongly limits the applicability of the SL-MBD algorithm, where the efficient “on-the-fly” computation of each 𝐕_ij block is crucial for the evaluation of the 𝐕𝐲_k products discussed in connection with eq.(<ref>). The Ewald summation (ES) method, as well as its more efficient PME variants, was designed to improve over eq.(<ref>), since the conditionally convergent features of long-range electrostatic interactions of periodic systems are replaced by an absolutely convergent treatment. Let us consider a set of N interacting dipoles belonging to the central simulation cell U and gathered into the 3N-dimensional array 𝐝. The corresponding electric field array 𝐄=𝐓^pbc𝐝 arising from the dipoles in both the central simulation cell and all its periodic images is, in the ES method, expressed as the sum of three components, eq.(<ref>). 𝐄 = 𝐓^pbc𝐝⟶𝐄^⋆=𝐄^dir+𝐄^rec+𝐄^self 𝐄^dir represents the direct-space contribution to the Ewald electric field, 𝐄^rec is the long-range term computed in Fourier (reciprocal) space, while 𝐄^self represents the so-called self-interaction term. 
The explicit expressions for each of these terms will be given later in the discussion; however, it is important to stress that each of these field components consists of absolutely convergent contributions, as does the resulting 𝐄^⋆ field. Our strategy is thus to identify and isolate from the SL-MBD equations, eq.(<ref>), an electric field-like term that can then be evaluated according to the three absolutely convergent contributions in eq.(<ref>), thus allowing us to include PBC in a robust and efficient manner. To do so, we will now start by partitioning 𝐕 into its diagonal and out-of-diagonal contributions given below, where 𝐈_3 is a (3,3) identity matrix. 𝐕_ij =ω_iω_j√(α_i(0)α_j(0))𝐓'_ij 𝐕_ii =𝐈_3 ω^2_i Due to the fact that the diagonal blocks 𝐕_ii are themselves diagonal, we introduce the identity in eq.(<ref>), where Ω is the diagonal matrix defined below and 𝐕 is the hollow matrix composed of the off-diagonal entries of 𝐕. These quantities will prove useful later in the discussion. 𝐕= Ω + 𝐕 Ω= ⊕_i^N 𝐕_ii We further introduce the 𝐠 vector (of dimension 3N), defined as the concatenation of N three-dimensional vectors-of-ones (1_3), each scaled by ω_i√(α_i(0)), as shown in Eq.(<ref>). 𝐠 =⊕_i^Nω_i√(α_i(0))1_3 At this point, we use the newly introduced quantities defined in eq.(<ref>) to rewrite the diagonal a_k term as shown in eq.(<ref>). a_k=𝐲_k^†Ω𝐲_k+𝐲_k^†𝐕𝐲_k One can now easily prove that the second term on the right-hand side of eq.(<ref>) can be rewritten in terms of 𝐠, eq.(<ref>), where ⊙ denotes the Hadamard product. 𝐲_k^†𝐕𝐲_k=(𝐲_k⊙𝐠 )^†𝐓'(𝐠⊙𝐲_k) By inserting Eq.(<ref>) into (<ref>), we obtain an expression for a_k which will soon become crucial for the discussion. a_k=𝐲_k^†Ω𝐲_k+(𝐲_k⊙𝐠 )^†𝐓'(𝐠⊙𝐲_k) The 3N-dimensional term (𝐠⊙𝐲_k) can be thought of as a generalized dipole array 𝐝_k that, via the interaction tensor 𝐓, generates the generalized field 𝐄_k=𝐓𝐝_k, which can then be computed according to eq.(<ref>). a_k=𝐲_k^†Ω𝐲_k+𝐝_k^†𝐄^⋆_k We note in passing that this generalized field can be used in different situations, as it allows us to couple our system with an external perturbation that, as discussed in the references, could arise from an implicit solvent contribution.<cit.> At this point we note from eq.(<ref>) (last equality) that a_k is related to 𝐥_k via a differentiation with respect to the basis vector 𝐲_k. We can therefore differentiate eq.(<ref>) to finally obtain eq.(<ref>), where the rule for the differentiation of a commuting Hadamard product has been applied, eq.(<ref>). We note that a similar approach based on differentiation was adopted by Stamm and co-workers in deriving the Ewald summation for arbitrary orders of multipoles, with particular emphasis on the self term, for which different expressions can be found in the literature.<cit.> ∂ (𝐲_k⊙𝐠 )/∂𝐲_k=∂Diag(𝐠)/∂𝐲_k𝐲_k+Diag(𝐠)∂𝐲_k/∂𝐲_k=Diag(𝐠) 𝐥_k=1/2∂ a_k/∂𝐲_k=Ω𝐲_k+Diag(𝐠)𝐓' (𝐠⊙𝐲_k) Once again we use the definition of the generalized dipole and field to finally obtain eq.(<ref>). 𝐥_k= Ω𝐲_k+ Diag(𝐠)𝐓'𝐝_k = Ω𝐲_k+ Diag(𝐠)𝐄^⋆_k Eq.(<ref>) can therefore be rewritten in terms of the generalized electric field 𝐄_k^⋆ through the quantities derived above, eq.(<ref>). 
𝐲_1 = 𝐯 b_k𝐲_k =𝐲^*_k= 𝐥_k-1-a_k-1𝐲_k-1-b_k-1𝐲_k-2 𝐥_k =Ω𝐲_k+ Diag(𝐠)𝐄^⋆_k b_k =√(𝐲^*_k^†𝐲^*_k)=Δ_(k-1)k=Δ_k(k-1) a_k =𝐲_k^†Ω𝐲_k+𝐝_k^†𝐄^⋆_k=Δ_kk 𝐄^⋆_k can be evaluated by ES and the explicit expressions for 𝐄^dir, 𝐄^self and 𝐄^rec are shown below, however , for a broader discussion and derivation we refer to the following references.<cit.> Starting from the direct component, we identify the three dimensional electric field 𝐄_i,k^dir at the atomic position 𝐑_i arising from the generalized dipole array 𝐝_k, where its three-dimensional contribution related to the j-th atom is denoted 𝐝⃗_j,k, as shown in eq.(<ref>). L_j,k =𝐝⃗_j,k∇_j 𝐄_i,k^dir = -∑_𝐧∑_j=1^N^* L_j,k∂/∂𝐑_i(erfc(τ|𝐑_j-𝐑_i + 𝐧|)/|𝐑_j-𝐑_i + 𝐧|       +∑_j=1^N^*L_j,k(1-s_ij)𝐓_ij𝐝_j,k) In the above, τ represents a real parameter governing the balance between the direct and reciprocal contributions. For a cubic cell of side h, it is typically taken to be 5/h.<cit.> τ is commonly chosen so that the direct term convergence is fast as the reciprocal contribution can be efficiently computed via FFT. This makes the summation over 𝐧 fastly converging, and only particles belonging to neighboring periodic images are therefore usually considered. 𝐄^dir is practically computed by means of neighbor lists based on the choice of τ determining the suitable cutoff and this ensures an efficient and linear-scaling evaluation. The self term 𝐄_i,k^self consists in the single term shown in eq.(<ref>) which evaluation involves a negligible computational effort. 𝐄_i,k^self= 2τ^3/3√(π)𝐝_i,k From a computational point of view, with standard τ parameters, the most expensive and thus crucial term to evaluate is represented by the 𝐄_i,k^rec contribution. In order to discuss its explicit expression, we introduce the reciprocal conjugate vectors (𝐚^*_1,𝐚^*_2 ,𝐚^*_3 ) which are related to their dual set by 𝐚^*_α·𝐚_β=δ_αβ, with α,β = {1,2,3} and δ_αβ being the Kronecker delta. In analogy to what done for 𝐧, we define 𝐦. 𝐦=m_1𝐚^*_1 + m_2𝐚^*_2+m_3𝐚^*_3   m_1,m_2,m_3 ∈ℤ^3 We further introduce the structure factor S(𝐦), defined in eq (<ref>) for a given 𝐦 is defined in . S(𝐦)=∑_j=1^N 𝐝⃗_j,k·𝐦exp(2iπ𝐦·𝐑_j) In the Ewald summation method the reciprocal component of the field is finally given in eq.(<ref>) 𝐄_i,k^rec=-1/π V∑_𝐦≠0∂/∂𝐑_i(exp(-π^2 𝐦^2/τ^2)/𝐦^2 S(𝐦)exp(-2iπ𝐦·𝐑_i)) The optimal choice of τ makes the evaluation of eq.(<ref>) (and therefore of the whole ES method) 𝒪(N^3/2) scaling, however, the PME method sensibly improves the scaling by approximating the complex exponentials via interpolation. In the Smooth PME method (SPME) in particular, the complex exponentials are first rewritten in terms of the scaled fractional coordinates u_α j, eq <ref>, and then interpolated by a p-degree B-spline function θ_p(u_α j-n_α ) on a grid of size K_1× K_2 × K_3 and the final contribution due to the reciprocal space is given in eq.(<ref>) u_α j=K_α𝐚_α^*·𝐑_j               α = {1,2,3}  ,  K_α∈ℕ^+ exp(2i𝐦·𝐑_j)=∏ _α=1^3exp(2iπ m_αu_α_j/K_α) The 𝐄_i,k^rec is finally given by eq.(<ref>). 𝐄_i,k^rec≈ -∂/∂𝐑_i∑_𝐧∏_α=1 ^3 θ_p (u_α,i-n_α)(G^R * D^R)(𝐧) The (G^R * Q^R) term is the convolution between the pair potential G^R discussed by Sagui et al. and the real space dipole array D^R defined below.<cit.> The use of fast Fourier transtorms in the evaluation of (<ref>) ensures an overall 𝒪(Nlog(N)) scaling. 
D^R_k(k_1,k_2,k_3)=∑_𝐧 ∑_jL_j,kθ_p(u_1,j-k_1-K_1n_1)θ_p(u_2,j-k_2-K_2n_2) θ_p(u_3,j-k_3-K_3n_3) The above algorithm was implemented in the Tinker-HP molecular dynamics package<cit.> and will, in the following section, be numerically analyzed. The replica method (eq.(<ref>)) has also been implemented and coupled to the SL-MBD method, as this allows us to perform a direct comparison with the newly proposed SPME version for a few test cases. All numerical results refer to a fixed Ewald τ parameter (τ=0.544590), corresponding to a real-space cutoff of 7 Ångstrom. § NUMERICAL RESULTS We start by considering results related to the simple replica method based on Eq.(<ref>). In particular, for all the results we choose as a measure the first diagonal element of the Δ matrix calculated from the same fixed initial vector 𝐲_1=𝐯, chosen as usual from a Rademacher distribution. This choice allows us to eliminate the stochastic noise from the computed Δ_11 values, which would otherwise make it harder to interpret and compare the effects arising from long-range interactions introduced via either the replica or the SPME method. The first system analyzed is a small cubic box of dimension 18.64 Ångstrom containing 216 water molecules in the liquid phase. Figure <ref> shows the evolution of Δ_11 as a function of the cutoff radius R_cut that is used to determine the replicas identified by the set of {𝐧} vectors to be included in eq.(<ref>). Even for a not highly symmetric system such as bulk water, convergence is reached only for a cutoff radius of nearly 30 Ångstrom, thus confirming the slow (and conditional) convergence rate that characterizes the replica method. The large cutoff radius required by the replica method, because of its consequent quadratic scaling, has a direct impact on the computational time, as shown in Fig.<ref>. In particular, for a 30 Ångstrom cutoff the CPU time required for the computation of the diagonal element chosen as observable reaches 1 second. The situation is quite different when the SPME-based algorithm is employed, since in this case the overall convergence is determined by the number of grid points used in the solution of the reciprocal field contribution (K_1,K_2,K_3 in eq.(<ref>)), which also represents the computationally most expensive part of the algorithm, as the direct summation part is computed very efficiently in a linear-scaling fashion. Fig. <ref> shows the convergence of our target quantity Δ_11 as a function of the number of grid points for the box of water taken as the test system. We stress that, given our choice to fix the Ewald τ parameter, the only quantity governing the convergence is the grid size. We first note that the convergence has a monotonic behavior, since a smaller grid size does not involve a physical truncation of space, and thus of the interactions, unlike the replica case, which in fact shows an oscillatory behavior. It is now interesting to compare the computational cost required by the SPME-based approach to that of the replica method. In particular, for an 18-point grid, for which convergence is observed, the CPU time is 10^-2 seconds, a factor of 100 faster than the cumbersome replica method. The slow convergence rate observed for the replica method is further exacerbated when highly symmetric systems are taken into consideration. Fig. <ref> shows the evolution of Δ_11 as a function of the cutoff radius, this time for a 14.2 Ångstrom-sided cubic box of diamond. 
In this case the cutoff radius reaches the extremely large value of 60 Ångstrom before convergence is reached, with a huge impact on the resulting computational cost, as shown in Fig. <ref>. The system dependence of the proper cutoff radius observed for the replica method does not affect the SPME-Lanczos method, as can be seen in Fig. <ref>, which shows the Δ_11 convergence as a function of the number of grid points. Even in this case, convergence is observed starting from circa 20 points, similarly to what is observed for water, as both boxes have quite similar sizes. In fact, convergence is ensured when a certain density of grid points is provided, independently of the system. In general, a density of 1.2 points/Ångstrom (for each of the three box dimensions) is enough to ensure convergence, and this is the default value chosen in our implementation. For highly periodic systems, for which the replica method is particularly slow to converge, the computational gain provided by the SPME alternative becomes even more marked. For an 18-point grid, the Δ_11 computation via SPME-Lanczos is nearly 350 times faster than its replica counterpart. Although our analysis focused, for the sake of clarity, on Δ_11, the same results hold for the convergence of the off-diagonal terms Δ_(k-1)(k). Moreover, we note that the solution of the SPME-Lanczos equations, Eq.(<ref>), does not spoil the orthogonality of the Krylov subspace basis vectors 𝐲^†_j𝐲_k=δ_jk, as the set of vectors remains orthogonal by construction, as in the original algorithm. Furthermore, we stress that for an accurate resolution of eq.(<ref>), the number of quadrature points, i.e. the dimension (M+1) of the Krylov subspace 𝒦_M+1, can be set to 15, regardless of the system size. This implies that the SPME-Lanczos algorithm does not suffer from the numerical instability (loss of orthogonality among basis vectors) of the standard Lanczos algorithm<cit.>, typically encountered in applications where very large Krylov subspaces, and thus many basis vectors, are required.<cit.> Since the construction of the tridiagonal matrix Δ is the bottleneck of the overall algorithm, it is of interest to probe its scaling as a function of the system size, as shown in Fig. <ref> for increasingly large boxes of liquid water. The plot shows that the SPME-SL algorithm deviates from linearity for larger system sizes, which is explained by the Nlog(N) scaling of the SPME method employed to compute the generalized field vectors, which are the key ingredients in the construction of the Δ matrix. The deviation from linearity is, however, rather contained even for the largest system considered, composed of approximately 100000 water molecules, which is completely out of reach for the standard replica method discussed earlier. For one single core, the overall time necessary to compute the final energy is equivalent to the time required to build Δ (Fig.<ref>) multiplied by the number of random samples R involved in Hutchinson's trace estimator. For large systems on the order of 10^4 atoms or above, R can be taken to be around 300, with a resulting low relative standard deviation (0.5%). However, the SPME-SL algorithm's strength lies in its embarrassingly parallel nature: the random samples can be divided among the available processes, and only a simple reduction is required before the final trace evaluation (eq.(<ref>)); a minimal sketch of this sample-splitting and reduction pattern is given below. 
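The following hedged sketch illustrates the sample-splitting and reduction pattern only; the quadratic forms are evaluated here with a direct eigendecomposition as a stand-in for the Lanczos quadrature of the earlier sketch, and all sizes and names are illustrative rather than taken from Tinker-HP.

```python
# Hedged sketch of the embarrassingly parallel pattern: the R Hutchinson
# samples are split across worker processes and a single reduction yields
# the final trace estimate.
import numpy as np
from multiprocessing import Pool

def partial_trace_sum(args):
    V, seed, n_local = args
    rng = np.random.default_rng(seed)
    D = V.shape[0]
    w, W = np.linalg.eigh(V)                    # stand-in for the Lanczos quadrature
    sqrt_w = np.sqrt(np.clip(w, 0.0, None))
    acc = 0.0
    for _ in range(n_local):
        v = rng.choice([-1.0, 1.0], size=D) / np.sqrt(D)   # normalized Rademacher probe
        acc += (v @ W) ** 2 @ sqrt_w                       # v^T sqrt(V) v
    return acc

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    D, R, n_proc = 120, 512, 8
    A = rng.standard_normal((D, D))
    V = A @ A.T / D + np.eye(D)                 # small SPD stand-in, not an MBD matrix
    chunks = [(V, seed, R // n_proc) for seed in range(n_proc)]
    with Pool(n_proc) as pool:
        partial_sums = pool.map(partial_trace_sum, chunks)
    trace_estimate = D / R * sum(partial_sums)  # the final reduction
    print(trace_estimate, np.sum(np.sqrt(np.linalg.eigvalsh(V))))
```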
Since the parallelization scheme is essentially the same as the one discussed for the original SL-MBD algorithm, we refer to a previous work<cit.> for an in-depth analysis of the scalability with respect to the number of processes as well as a detailed discussion of the parallelization strategy. § CONCLUSIONS We have derived, implemented, and discussed the SPME-SL algorithm, in which the stochastic Lanczos trace estimation scheme is coupled to the state-of-the-art Smooth Particle Mesh Ewald method. This was made possible by isolating a generalized field contribution within the Lanczos iterative equations. Our combined approach allows for an embarrassingly parallel computation of many-body dispersion energies with the full inclusion of long-range interactions arising from all periodic images of the central simulation cell. The proposed algorithm clearly outperforms truncation-based approaches such as the replica method, which is affected by slow and conditional convergence as well as by quadratic-scaling double loops that make the computation highly inefficient for large systems. The parallelism of the SPME-SL algorithm, together with its Nlog(N) scaling with the system size, allows for a fast many-body dispersion treatment of very large periodic systems composed of hundreds of thousands of atoms and more. This work represents the natural extension to long-range PBC of our recent stochastic Lanczos MBD algorithm<cit.> and focuses solely on the energy evaluation. Our focus will now turn to extending these achievements to nuclear energy gradients, towards large-scale condensed-phase molecular dynamics simulations including many-body dispersion effects. This work has been funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant No 810367), project EMC2 (JPP). Computations have been performed at GENCI (IDRIS, Orsay, France and TGCC, Bruyères le Châtel) on grant no A0070707671.
http://arxiv.org/abs/2307.00885v1
20230703093150
An Explainable Deep Framework: Towards Task-Specific Fusion for Multi-to-One MRI Synthesis
[ "Luyi Han", "Tianyu Zhang", "Yunzhi Huang", "Haoran Dou", "Xin Wang", "Yuan Gao", "Chunyao Lu", "Tan Tao", "Ritse Mann" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Explainable Task-Specific Fusion Network L. Han et al. Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Nijmegen, The Netherlands Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD, Maastricht, The Netherlands Faculty of Applied Science, Macao Polytechnic University, 999078, Macao, China Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK Biomedical Imaging Department, Leeds Institute for Cardiovascular and Metabolic Medicine (LICAMM), School of Medicine, University of Leeds, Leeds, UK {taotanjs}@gmail.com An Explainable Deep Framework: Towards Task-Specific Fusion for Multi-to-One MRI Synthesis Luyi Han1,2 Tianyu Zhang1,2,3Tianyu Zhang and Luyi Han contributed equally to this work. Yunzhi Huang5 Haoran Dou6,7 Xin Wang2,3 Yuan Gao2,3 Chunyao Lu1,2 Tao Tan2,4() Ritse Mann1,2 August 1, 2023 ========================================================================================================================================================================================= Multi-sequence MRI is valuable in clinical settings for reliable diagnosis and treatment prognosis, but some sequences may be unusable or missing for various reasons. To address this issue, MRI synthesis is a potential solution. Recent deep learning-based methods have achieved good performance in combining multiple available sequences for missing sequence synthesis. Despite their success, these methods lack the ability to quantify the contributions of different input sequences and estimate the quality of generated images, making it hard to be practical. Hence, we propose an explainable task-specific synthesis network, which adapts weights automatically for specific sequence generation tasks and provides interpretability and reliability from two sides: (1) visualize the contribution of each input sequence in the fusion stage by a trainable task-specific weighted average module; (2) highlight the area the network tried to refine during synthesizing by a task-specific attention module. We conduct experiments on the BraTS2021 dataset of 1251 subjects, and results on arbitrary sequence synthesis indicate that the proposed method achieves better performance than the state-of-the-art methods. Our code is available at <https://github.com/fiy2W/mri_seq2seq>. § INTRODUCTION Magnetic resonance imaging (MRI) consists of a series of pulse sequences, e.g. T1-weighted (T1), contrast-enhanced (T1Gd), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair), each showing various contrast of water and fat tissues. The intensity contrast combination of multi-sequence MRI provides clinicians with different characteristics of tissues, extensively used in disease diagnosis <cit.>, lesion segmentation <cit.>, treatment prognosis <cit.>, etc. However, some acquired sequences are unusable or missing in clinical settings due to incorrect machine settings, imaging artifacts, high scanning costs, time constraints, contrast agents allergies, and different acquisition protocols between hospitals <cit.>. 
Without rescanning or affecting the downstream pipelines, the MRI synthesis technique can generate missing sequences by leveraging redundant shared information between multiple sequences <cit.>. Many studies have demonstrated the potential of deep learning methods for image-to-image synthesis in the field of both nature images <cit.> and medical images <cit.>. Most of these works introduce an autoencoder-like architecture for image-to-image translation and employ adversarial loss to generate more realistic images. Unlike these one-to-one approaches, MRI synthesis faces the challenge of fusing complementary information from multiple input sequences. Recent studies about multi-sequence fusion can specifically be divided into two groups: (1) image fusion and (2) feature fusion. The image fusion approach is to concatenate sequences as a multi-channel input. Sharma et al. <cit.> design a network with multi-channel input and output, which combines all the available sequences and reconstructs the complete sequences at once. Li et al. <cit.> add an availability condition branch to guide the model to adapt features for different input combinations. Dalmaz et al. <cit.> equip the synthesis model with residual transformer blocks to learn contextual features. Image-level fusion is simple and efficient but unstable – zero-padding inputs for missing sequences lead to training unstable and slight misalignment between images can easily cause artifacts. In contrast, efforts have been made on feature fusion, which can alleviate the discrepancy across multiple sequences, as high-level features focus on the semantic regions and are less affected by input misalignment compared to images. Zhou et al. <cit.> design operation-based (e.g. summation, product, maximization) fusion blocks to densely combine the hierarchical features. And Li et al. <cit.> employ self-attention modules to integrate multi-level features. The model architectures of these methods are not flexible and difficult to adapt to various sequence combinations. More importantly, recent studies only focus on proposing end-to-end models, lacking quantifying the contributions for different sequences and estimating the qualities of generated images. In this work, we propose an explainable task-specific fusion sequence-to-sequence (TSF-Seq2Seq) network, which has adaptive weights for specific synthesis tasks with different input combinations and targets. Specially, this framework can be easily extended to other tasks, such as segmentation. Our primary contributions are as follows: (1) We propose a flexible network to synthesize the target MRI sequence from an arbitrary combination of inputs; (2) The network shows interpretability for fusion by quantifying the contribution of each input sequence; (3) The network provides reliability for synthesis by highlighting the area the network tried to refine. § METHODS Figure <ref> illustrates the overview of the proposed TSF-Seq2Seq network. Our network has an autoencoder-like architecture including an encoder 𝐄, a multi-sequence fusion module, and a decoder 𝐆. Available MRI sequences are first encoded to features by 𝐄, respectively. Then features from multiple input sequences are fused by giving the task-specific code, which identifies sources and targets with a binary code. Finally, the fused features are decoded to the target sequence by 𝐆. Furthermore, to explain the mechanism of multi-sequence fusion, our network can quantify the contributions of different input sequences and visualize the TSEM. 
To leverage shared information between sequences, we use 𝐄 and 𝐆 from Seq2Seq <cit.>, which is a one-to-one synthetic model that integrates arbitrary sequence synthesis into single 𝐄 and 𝐆. They can reduce the distance between different sequences at the feature level to help more stable fusion. Details of the multi-sequence fusion module and TSEM are described in the following sections. §.§ Multi-Sequence Fusion Define a set of N sequences MRI: 𝒳={X_i|i=1,...,N } and corresponding available indicator 𝒜⊂{1,...,N} and 𝒜≠∅. Our goal is to predict the target set 𝒳_𝒯={X_i|i∉𝒜} by giving the available set 𝒳_𝒜={X_i|i∈𝒜} and the corresponding task-specific code c = { c_src, c_tgt}∈ℤ^2N. As shown in Fig. <ref>, c_src and c_tgt are zero-one codes for the source and the target set, respectively. To fuse multiple sequences at the feature level, we first encode images and concatenate the features as f = {𝐄(X_i)|i=1,...,N }. Specifically, we use zero-filled placeholders with the same shape as 𝐄(X_i) to replace features of i∉𝒜 to handle arbitrary input sequence combinations. The multi-sequence fusion module includes: (1) a task-specific weighted average module for the linear combination of available features; (2) a task-specific attention module to refine the fused features. §.§.§ Task-Specific Weighted Average. The weighted average is an intuitive fusion strategy that can quantify the contribution of different sequences directly. To learn the weight automatically, we use a trainable fully connected (FC) layer to predict the initial weight ω_0∈ℝ^N from c. ω_0 = softmax(c𝐖+𝐛)+ϵ where 𝐖 and 𝐛 are weights and bias for the FC layer, ϵ=10^-5 to avoid dividing 0 in the following equation. To eliminate distractions and accelerate training, we force the weights of missing sequences in ω_0 to be 0 and guarantee the output ω∈ℝ^N to sum to 1. ω = ω_0 · c_src/⟨ω_0, c_src⟩ where · refers to the element-wise product and ⟨·,·⟩ indicates the inner product. With the weights ω, we can fuse multi-sequence features as f̂ by the linear combination. f̂ = ⟨f, ω⟩ Specially, f̂≡𝐄(X_i) when only one sequence i is available, i.e. 𝒜={i}. It demonstrates that the designed ω can help the network excellently inherit the synthesis performance of pre-trained 𝐄 and 𝐆. In this work, we use ω to quantify the contribution of different input combinations. §.§.§ Task-Specific Attention. Apart from the sequence-level fusion of f̂, a task-specific attention module 𝐆_A is introduced to refine the fused features at the pixel level. The weights of 𝐆_A can adapt to the specific fusion task with the given target code. To build a conditional attention module, we replace convolutional layers in convolutional block attention module (CBAM) <cit.> with HyperConv <cit.>. As shown in Fig. <ref>, channel attention and spatial attention can provide adaptive feature refinement guided by the task-specific code c to generate residual attentional fused features f_A. f_A = 𝐆_A(f|c) §.§.§ Loss function. To force both f̂ and f̂+f_A can be reconstructed to the target sequence by the conditional 𝐆, a supervised reconstruction loss is given as, ℒ_rec= λ_r·X'-X_tgt_1 + λ_p·ℒ_p(X', X_tgt) + λ_r·X_A'-X_tgt_1 + λ_p·ℒ_p(X_A', X_tgt) where X'=𝐆(f̂|c_tgt), X'_A=𝐆(f̂+f_A|c_tgt), X_tgt∈𝒳_𝒯, ·_1 refers to a L_1 loss, and ℒ_p indicates the perceptual loss based on pre-trained VGG19. λ_r and λ_p are weight terms and are experimentally set to be 10 and 0.01. 
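A hedged PyTorch sketch of the task-specific weighted average defined by the equations above is given below (our own illustrative implementation, not the authors' released code): a single FC layer maps the task-specific code to per-sequence weights, missing sequences are masked out using c_src, the weights are renormalized to sum to one, and the available feature maps are combined linearly into f̂.

```python
# Hedged sketch of the task-specific weighted average (Eqs. for omega_0, omega, f_hat).
import torch
import torch.nn as nn

class TaskSpecificWeightedAverage(nn.Module):
    def __init__(self, n_seq: int, eps: float = 1e-5):
        super().__init__()
        self.fc = nn.Linear(2 * n_seq, n_seq)   # task code c -> initial weights omega_0
        self.eps = eps

    def forward(self, feats: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # feats: (B, n_seq, C, H, W) with zero-filled placeholders for missing inputs
        # c:     (B, 2 * n_seq) binary task code, with c_src = c[:, :n_seq]
        n_seq = feats.shape[1]
        c_src = c[:, :n_seq].float()
        omega0 = torch.softmax(self.fc(c.float()), dim=-1) + self.eps
        omega = omega0 * c_src                                   # zero weight for missing inputs
        omega = omega / (omega0 * c_src).sum(-1, keepdim=True)   # weights sum to one
        return (feats * omega[:, :, None, None, None]).sum(dim=1)  # fused features f_hat

# toy usage: 4 sequences, inputs 0 and 2 available, sequence 1 as the target
fuse = TaskSpecificWeightedAverage(n_seq=4)
feats = torch.randn(2, 4, 32, 24, 24)
feats[:, [1, 3]] = 0.0                            # zero placeholders for missing sequences
c = torch.tensor([[1, 0, 1, 0, 0, 1, 0, 0]] * 2)  # [c_src, c_tgt]
f_hat = fuse(feats, c)
print(f_hat.shape)                                # torch.Size([2, 32, 24, 24])
```

By construction, when only one sequence is available the fused output reduces to that sequence's encoded features, consistent with the single-input behavior described above.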
§.§ Task-Specific Enhanced Map As f_A is a contextual refinement for fused features, analyzing it can help us understand more what the network tried to do. Many studies focus on visualizing the attention maps to interpret the principle of the network, especially for the transformer modules <cit.>. However, visualization of the attention map is limited by its low resolution and rough boundary. Thus, we proposed the TSEM by subtracting the reconstructed target sequences with and without f_A, which has the same resolution as the original images and clear interpretation. TSEM = | X'_A - X' | § EXPERIMENTS §.§ Dataset and Evaluation Metrics We use brain MRI images of 1,251 subjects from Brain Tumor Segmentation 2021 (BraTS2021) <cit.>, which includes four aligned sequences, T1, T1Gd, T2, and Flair, for each subject. We select 830 subjects for training, 93 for validation, and 328 for testing. All the images are intensity normalized to [-1, 1] and central cropped to 128×192×192. During training, for each subject, a random number of sequences are selected as inputs and the rest as targets. For validation and testing, we fixed the input combinations and the target for each subject. The synthesis performance is quantified using the metrics of peak signal noise rate (PSNR), structural similarity index measure (SSIM), and learned perceptual image patch similarity (LPIPS) <cit.>, which evaluate from intensity, structure, and perceptual aspects. §.§ Implementation Details The models are implemented with PyTorch and trained on the NVIDIA GeForce RTX 3090 Ti GPU. The 𝐄 and 𝐆 from Seq2Seq are pre-trained using the Adam optimizer with an initial learning rate of 2× 10^-4 and a batch size of 1 for 1,000,000 steps, taking about 60 hours. Then we finetune the TSF-Seq2Seq with the frozen 𝐄 using the Adam optimizer with an initial learning rate of 10^-4 and a batch size of 1 for another 300,000 steps, taking about 40 hours. §.§ Quantitative Results We compare our method with one-to-one translation, image-level fusion, and feature-level fusion methods. One-to-one translation methods include Pix2Pix <cit.> and Seq2Seq <cit.>. Image-level fusion methods consist of MM-GAN <cit.>, DiamondGAN <cit.>, and ResViT <cit.>. Feature-level fusion methods include Hi-Net <cit.> and MMgSN-Net <cit.>. Figure <ref> shows the examples of synthetic T2 of comparison methods input with the combinations of T1Gd and Flair. Table <ref> reports the sequence synthesis performance for comparison methods organized by the different numbers of input combinations. Note that, for multiple inputs, one-to-one translation methods synthesize multiple outputs separately and average them as one. And Hi-Net <cit.> and MMgSN-Net <cit.> only test on the subset with two inputs due to fixed network architectures. As shown in Table <ref>, the proposed method achieves the best performance in different input combinations. §.§ Ablation Study We compare two components of our method, including (1) task-specific weighted average and (2) task-specific attention, by conducting an ablation study between Seq2Seq, TSF-Seq2Seq (w/o f_A), and TSF-Seq2Seq. TSF-Seq2Seq (w/o f_A) refers to the model removing the task-specific attention module. As shown in Table <ref>, when only one sequence is available, our method can inherit the performance of Seq2Seq and achieve slight improvements. For multi-input situations, the task-specific weighted average can decrease LPIPS to achieve better perceptual performance. 
In addition, the task-specific attention can refine the fused features to achieve the best synthesis results. §.§ Interpretability Visualization The proposed method not only achieves superior synthesis performance but also has good interpretability. In this section, we visualize the contribution of different input combinations and the TSEM. §.§.§ Sequence Contribution. We use ω in Eq. <ref> to quantify the contribution of different input combinations for synthesizing different target sequences. Figure <ref> shows the bar chart of the sequence contribution weight ω for different task-specific codes c. As shown in Fig. <ref>, T1 and T1Gd contribute greatly to the synthesis of each other, which is expected because T1Gd is a T1-weighted scan acquired after contrast agent injection, and the enhancement between these two sequences is indispensable for cancer detection and diagnosis. The smaller contribution of T2, when combined with T1 and/or T1Gd, is consistent with the clinical findings <cit.> that T2 can be well synthesized from T1 and/or T1Gd. §.§.§ TSEM vs. Attention Map. Figure <ref> shows the proposed TSEM and the attention maps extracted by ResViT <cit.>. As shown in Fig. <ref>, TSEM has a higher resolution than the attention maps and can highlight the tumor area, which is hard for the networks to synthesize. Table <ref> reports the PSNR for regions highlighted or not highlighted by TSEM with a threshold of the 99th percentile. To assist the deployment of synthesis models in clinical settings, TSEM can be used as an attention and uncertainty map that alerts clinicians to possibly unreliable synthesized areas. § CONCLUSION In this work, we introduce an explainable network for multi-to-one synthesis with extensive experiments and interpretability visualization. Experimental results based on BraTS2021 demonstrate the superiority of our approach compared with the state-of-the-art methods. In future work, we will explore the use of the proposed method to assist downstream applications for multi-sequence analysis.
http://arxiv.org/abs/2307.02875v1
20230706092455
Reference-based Motion Blur Removal: Learning to Utilize Sharpness in the Reference Image
[ "Han Zou", "Masanori Suganuma", "Takayuki Okatani" ]
cs.CV
[ "cs.CV" ]
Reference-based Motion Blur Removal Learning to Utilize Sharpness in the Reference Image Han Zou^1,2     Masanori Suganuma^1,2     Takayuki Okatani^1,2 ^1Graduate School of Information Sciences, Tohoku University     ^2RIKEN Center for AIP {hzou, suganuma, okatani}@vision.is.tohoku.ac.jp ================================================================================================================================================================================================================== Despite the recent advancement in the study of removing motion blur in an image, it is still hard to deal with strong blurs. While there are limits to removing blur from a single image, there is more potential in using multiple images, e.g., using an additional image as a reference to deblur a blurry image. A typical setting is deblurring an image using a nearby sharp image(s) in a video sequence, as in the studies of video deblurring. This paper proposes a better method to use the information present in a reference image. The method does not require strong assumptions on the reference image. We can utilize an alternative shot of the same scene, just like in video deblurring, or we can even employ a distinct image from another scene. Our method first matches local patches of the target and reference images and then fuses their features to estimate a sharp image. We employ a patch-based feature matching strategy to solve the difficult problem of matching the blurry image with the sharp reference. Our method can be integrated into pre-existing networks designed for single-image deblurring. The experimental results show the effectiveness of the proposed method. § INTRODUCTION The removal of motion blur in an image is one of the fundamental problems of image restoration. Researchers have considered the problem in several different settings: whether the blur kernel is known or unknown; whether the blur kernel is spatially constant or varying in the input image; and whether the input is a single image or multiple images. The adoption of deep learning has led to great success even in the most challenging setting, i.e., single-image deblurring in the case of an unknown, spatially varying blur kernel. However, there is a limit to single-image methods, since it is hard to restore information lost due to a large motion blur. There is the same trade-off as in super-resolution, i.e., the trade-off between the naturalness of output images and their precision (i.e., the error from the ground truths)<cit.>; an excessive restoration attempt will lead to “hallucination,” i.e., the generation of fake image textures. A promising way to overcome this limit is to use multiple images. A typical example is video deblurring, that is, the removal of blur in the image(s) contained in a video with the aid of other images in the same video. The community has considered two problem settings. One is to remove blur in an image in the video by utilizing the contents of adjacent sharp image(s), assuming their availability. The other is to attempt to recover a “latent” image from a sequence of (all) blurry images. In this paper, we consider the problem of removing motion blur in an image of a scene using an additional image as a reference. The reference image is ideally a sharp image of the same scene, and we design our method to work best in that case. However, it also works well when using an image of a different scene as a reference. 
Our experiments show that the method can utilize a blurry image of the same or even a different scene as a good reference. It is vital from a practical standpoint to have fewer restrictions on the reference images. Methods that assume the availability of multiple images of the same scene are only applicable to deblurring videos. On the other hand, our method can effectively use an image of a different scene as a reference, which widens its applicability almost to the level of single-image deblurring methods. The reason an image of a different scene can serve as a reference is that deblurring can be viewed as spatially local inference performed in an image. Research on natural image statistics dictates that local image patches of natural images have relatively low degrees of freedom <cit.>; they are constrained to a low-dimensional manifold in the high-dimensional space of local patches. Thus, even if the reference is a different scene image, we can utilize its local patches to deblur the target image, as long as the reference image patches have some similarities with the target image patches. We propose a method to utilize this local information from the reference images. Specifically, we first match each local patch of the target image with one of the reference image patches. Their matching is deterministic, while we also use the confidence of each match in subsequent steps. Now, suppose the ideal case, in which we are given a sharp reference image of the same scene as the target. In this case, the matching of the patches must be aligned with the correspondences of scene points between the images, similarly to optical flow. In our case, however, one image is blurry and the other is sharp, making the matching difficult. To cope with this, we employ a multi-scale strategy. Specifically, we deal with the problem coarse-to-fine, i.e., starting from the largest (i.e., coarsest) scale and gradually moving to smaller scales. The underlying idea is that blur has a smaller impact on matching patches at a coarser resolution. Going one step further, we attempt to predict the sharp image at each scale and use it for patch matching. Specifically, to match local patches between the target and reference images, we use the predicted sharp image instead of the down-sampled target image itself. This further mitigates the above concern, since it should be easier to match the sharp image with the reference image, provided that the predicted sharp image is sufficiently accurate. Considering the excellent performance of recent methods for single-image deblurring, we choose to extend them to utilize a reference image to further improve their performance. We extend them by enriching the local features of the target blurry image with those of the reference image. Specifically, we first match the local patches of the target and reference images, as mentioned above. We then warp the feature map of the reference image using the matching result, which spatially aligns the feature map with that of the target image. Next, we augment the target image feature map with the warped reference image feature map. Note that the single-image deblurring methods only use the pre-augmentation features to infer sharp images. Augmenting them with the reference features leads to feature enrichment and, hopefully, a better restoration result. We employ a coarse-to-fine strategy for patch matching and feature augmentation. 
Conveniently, this approach is well aligned with different state-of-the-art methods, i.e., MIMO-UNet <cit.> and NAFNet <cit.>. We employ them as base architectures and design several modules that can be integrated into them, which perform the above patch matching and feature enrichment steps. We train these modules and the base network as a whole in an end-to-end manner. Note that the proposed modules can be integrated into any architecture having the same multi-scale, coarse-to-fine approach. Thus, if a better single-image deblurring architecture is developed in the future, our method can, in principle, be integrated into it to gain further performance improvement. § RELATED WORK §.§ Image Deblurring Image deblurring has been studied for a long time. Deep-learning-based methods have proved successful in image deblurring. Nah et al. <cit.> propose a multi-scale architecture based on a coarse-to-fine strategy. They also propose the GOPRO dataset, consisting of pairs of blurry and sharp image sequences of the same scene; they synthesize the blurry sequences by averaging successive sharp frames. Tao et al. <cit.> adopt a recurrent structure to extract features on different scales and recover a sharp image in a coarse-to-fine manner. Gao et al. <cit.> follow the multi-scale architecture and adopt an encoder-decoder network similar to U-Net. They use DenseBlocks to build their network and propose nested skip connections to learn the higher-order residual. Park et al. <cit.> propose a method that iteratively removes blur through a single UNet. The feature maps from the previous iteration are fed into the encoders of the next iteration to generate sharp results progressively. Cho et al. <cit.> revisit recent methods based on the coarse-to-fine framework and propose MIMO-UNet (multi-input multi-output U-Net), which deals with multi-scale inputs using a single network. Chen et al. <cit.> decompose the SOTA methods and identify the essential components. They propose the NAFNet block based on these components and build a strong, simple baseline model. §.§ Reference-based Image Restoration Reference-based image restoration uses an extra reference image to improve restoration quality. The task for which it is the most widely employed is super-resolution (SR). Previous studies have shown the effectiveness of transferring features from a high-resolution reference image and combining them with the features of low-resolution images. Zhang et al. <cit.> use PatchMatch <cit.> for matching and transferring features obtained through a feature extractor based on the VGG network <cit.>. Yang et al. <cit.> adopt attention mechanisms based on feature fusion and further improve their models by integrating features across scales. Aligning the target and reference images is prone to errors. Wang et al. <cit.> propose an aligned attention method for better fusion of features. Their modules preserve high-frequency features well via spatial alignment operations. In addition to SR, the use of additional reference inputs has been successful in deblurring. Xiang et al. <cit.> improve video deblurring performance by learning the sharpness of a reference video. They extract sharp information from the reference video and fuse it with an optical-flow-based deblurring network to generate better results. Li et al. <cit.> and Li et al. <cit.> adopt feature matching and fusion methods on blurry and sharp reference image pairs. Li et al. <cit.> propose a selective fusion module to guide feature fusion, while Li et al.
<cit.> use a rank module to explore and transfer more useful information from the reference. Liu et al. <cit.> decouple the reference-based deblurring task into a single-image deblurring task and a reference transfer task to better utilize the reference input. Although their concept is similar to ours, their method shows considerably lower deblurring performance than current single-image deblurring methods and, therefore, than our method. § REFERENCE-GUIDED DEBLURRING The problem we consider here is to estimate a sharp image of the blurry input image I^blur of a scene, given an additional reference image I^ref. Note that I^ref may be either a sharper image of the same scene or an image of a different scene. §.§ Outline Instead of designing a whole new network for the problem, we design a module to be integrated into a backbone network, i.e., an existing single-image deblurring network. The backbone network is originally designed to receive I^blur of a scene and output its sharp version by itself. Our module is integrated into such a backbone, where it updates the backbone's intermediate feature and sends the resulting feature back to the backbone, aiming to improve the backbone's performance. Although the module has a separate design, we train the integrated model (i.e., the backbone plus our module) in an end-to-end fashion. Specifically, our module works as follows. First, an intermediate feature F^blur that the backbone extracts from I^blur is fed into our module. Next, receiving also I^ref and I^blur, our module compares their local features, extracts the necessary information from I^ref, and fuses it with F^blur, yielding an updated intermediate feature F^fusion. Finally, F^fusion is fed back to the backbone, where it replaces F^blur and is used to estimate the sharp image. For the backbone network, we primarily consider DeblurNet <cit.>, NAFNet <cit.>, and MIMO-UNet <cit.>, which achieve state-of-the-art performance in the single-image deblurring task. Our module consists of multiple sub-modules that compare/extract/fuse the (features of the) blurry and reference images, as above, independently at each scale, as shown in Fig. <ref>. Our method can be used with any backbone network having a similar architecture. §.§ Reference-guided Feature Enrichment §.§.§ Patch Matching on Multi-scale Outputs As mentioned above, our module is designed to update the feature F^blur. The basic idea is as follows. First, comparing I^blur (rigorously, the latest estimate of the sharp image) and I^ref, we match their local patches, i.e., find the most similar patch in the latter for each patch of the former (Sec.<ref>). Then, extracting a feature map F^ref from I^ref using a reference encoder, we spatially divide it into a set of feature vectors and use the above patch-level matches to rearrange them, creating a transformed feature map F^trans. By construction, F^trans is aligned with F^blur but differs in that it contains features of the sharper reference I^ref. Finally, we fuse F^trans with F^blur, replacing F^blur in the backbone network (Sec.<ref>). An issue with the above idea is the difficulty of matching (local patches of) I^blur and I^ref in a meaningful way, since I^blur is blurry and I^ref is sharp. To cope with this, we use the coarse-to-fine strategy of MIMO-UNet <cit.>, i.e., gradually improving estimates from coarser to finer scales while supervising the model to predict the sharp image at each scale.
The experimental results in <cit.> show that supervision on outputs at different scales helps to generate sharper intermediate outputs (outputs at lower scales) as well as sharper final outputs. Inspired by multi-scale supervision, we design a feature matching strategy that matches features between the intermediate outputs, denoted by I^inter, and the reference image I^ref. Since most of the blur has already been removed from I^inter, it tends to differ less from I^ref than the downscaled blurry input does, which makes the matching of their local patches more accurate and mitigates the above difficulty caused by the gap between I^blur and I^ref. §.§.§ Local Patch Matching of Blurry and Reference Images As mentioned above, we employ a coarse-to-fine strategy. We downscale I^blur(∈ℝ^H× W× 3) and I^ref(∈ℝ^H× W× 3) with the factor of 1/2^k-1 (k=1,…,K), obtaining I^blur_k(∈ℝ^H_k× W_k× 3) and I^ref_k(∈ℝ^H_k× W_k× 3), respectively; H_k=H/2^k-1 and W_k=W/2^k-1. (Note I^blur_1=I^blur and I^ref_1=I^ref.) Starting from the coarsest scale k=K, we move from a coarser scale to a finer scale, as in k=K-1,…,1. At each scale k, we obtain an estimate of the sharp image, which we denote by I^inter_k(∈ℝ^H_k× W_k× 3). Note that I^inter_1 is the final estimate of the sharp image. On scale k, we calculate the features of I^inter_k and I^ref_k that will be used to calculate I^inter_k-1. We first embed I^inter_k and I^ref_k into feature maps F̃^inter_k and F̃^ref_k using a shared encoder ϕ. Then we extract patches of size 3× 3 from the two feature maps with stride 1, yielding P^inter_k={p^inter_k,i}_i=1,…,H_k W_k and P^ref_k={p^ref_k,i}_i=1,…,H_k W_k, respectively. For matching the extracted patches, we calculate the cosine similarity r_k,i,j between the i-th element of P^inter_k and the j-th element of P^ref_k. The index t_k,i of the patch most similar to the i-th element of P^inter_k and its confidence s_k,i are given by t_k,i = argmax_j r_k,i,j, s_k,i = max_j r_k,i,j. §.§.§ Feature Fusion We fuse the features of I^ref_k and I^blur_k to obtain a feature map that will be used to calculate I^inter_k-1. For the feature map of I^blur_k, we borrow F^blur_k(∈ℝ^H_k× W_k× C) that is computed for the inference of I^inter_k in the backbone deblurring network. For the feature map of I^ref_k, we compute a new one F^ref_k(∈ℝ^H_k× W_k× C); we input I^ref_k into a shallow encoder consisting of 3 stacks of base blocks. We then create a new feature map F^trans_k from F^ref_k using the correspondences between I^inter_k and I^ref_k represented by t_k,i of (<ref>). To be specific, we generate F^trans_k as follows. We denote the spatial coordinates of the feature maps F̃^inter_k and F̃^ref_k by x and y; (x,y)∈ [1,W_k]×[1,H_k]. Note that F̃^inter_k(x,y) and F̃^ref_k(x,y)∈ℝ^C. We then denote the mapping of (x,y) to the patch index by index_k(x,y) (:ℝ^2↦[1,H_k· W_k]) and its inverse mapping from an index i to (x,y) by (x_k(i),y_k(i)). We then compute F_k^trans(x,y) as F_k^trans(x,y) = F_k^ref(x_k(t_k,i) ,y_k(t_k,i)), where i=index_k (x,y). We then fuse F_k^trans obtained above with F^blur_k. We compute a fused feature map F_k^fusion(∈ℝ^H_k× W_k× C) as follows: F^fusion_k = conv_1(F^blur_k, F^trans_k) ⊙conv_2(S_k) + F^blur_k, where conv_1 and conv_2 are each a single convolutional layer, and S_k∈ℝ^H_k× W_k× 1 is the confidence map at the resolution of H_k× W_k, defined by S_k(x,y)=s_k,index_k(x,y). The resulting map F^fusion_k is upsampled with a transposed convolution and fed to the next (finer) scale k-1.
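To make the scale-k matching and fusion steps above concrete, the following is a minimal PyTorch-style sketch; it only illustrates the equations above and is not the actual implementation. The names match_and_fuse, conv1, and conv2 are ours, the global H_kW_k × H_kW_k similarity (without the acceleration introduced later) is assumed, and feeding the concatenation of F^blur_k and F^trans_k to the first convolution is our reading of the fusion equation.

import torch
import torch.nn.functional as F

def match_and_fuse(feat_inter, feat_ref, F_blur, F_ref, conv1, conv2):
    # feat_inter, feat_ref: (B, C, H, W) embeddings of I^inter_k and I^ref_k from the shared encoder.
    # F_blur: (B, C', H, W) backbone feature map; F_ref: (B, C', H, W) reference-encoder feature map.
    B, C, H, W = feat_inter.shape
    # Extract 3x3 patches with stride 1 and L2-normalize them, so that inner
    # products between patch vectors become cosine similarities r_{k,i,j}.
    p_int = F.normalize(F.unfold(feat_inter, kernel_size=3, padding=1), dim=1)  # (B, 9C, HW)
    p_ref = F.normalize(F.unfold(feat_ref, kernel_size=3, padding=1), dim=1)    # (B, 9C, HW)
    sim = torch.bmm(p_int.transpose(1, 2), p_ref)   # (B, HW, HW)
    conf, idx = sim.max(dim=2)                      # s_{k,i} = max_j r, t_{k,i} = argmax_j r
    # Warp the reference features: for each target position, take the reference
    # feature vector at the best-matching patch location (this builds F^trans_k).
    Cr = F_ref.shape[1]
    F_ref_flat = F_ref.reshape(B, Cr, H * W)
    F_trans = torch.gather(F_ref_flat, 2, idx.unsqueeze(1).expand(-1, Cr, -1)).reshape(B, Cr, H, W)
    # Confidence-weighted residual fusion, cf. the fusion equation above.
    S = conf.reshape(B, 1, H, W)
    return conv1(torch.cat([F_blur, F_trans], dim=1)) * conv2(S) + F_blur

In the full model, the returned map would replace F^blur_k inside the backbone, and at finer scales the exhaustive similarity computation would be restricted to a local window as described in the acceleration section below.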
In (<ref>), our aim is to predict only a `residual' component with the two conv. layers to update F^blur_k, similarly to the skip connection of a ResBlock. §.§ Acceleration of Feature Matching When performing feature matching and fusion across multiple scales, the computational cost can become substantial at larger scales. To mitigate the computational burden associated with feature matching, we leverage index maps from lower scales and propose a coarse-to-fine approach for acceleration. Images at lower scales contain the same content but with fewer details than images at higher scales. This observation suggests that the index map obtained at lower scales can serve as a guide for feature matching at higher scales. By utilizing the coarse index map, there is no longer a necessity for global matching at larger scales. Instead, a localized feature matching around the position indicated by the coarse index map suffices, as shown in Fig. <ref>. As discussed in Section <ref>, the computation of the cosine similarity between all extracted patches P^inter_k={p^inter_k,i}_i=1,…,H_k W_k and P^ref_k={p^ref_k,i}_i=1,…,H_k W_k is required for patch matching at scale k. This process involves H_k × W_k × H_k × W_k operations. However, by utilizing the coarse index map as a guide, the number of operations is reduced to H_k × W_k × L × L, where L is a constant that indicates the side length of the square block within which patch matching is performed. §.§ Loss Function We train the integrated model, i.e., the backbone network and the proposed feature enrichment module. We follow MIMO-UNet <cit.> for the training of the integrated network. It yields an estimate of the sharp image at each scale k=1,…,K. For brevity, we denote the sharp image at scale k by I_k and its estimate by Î_k=I^inter_k. We consider the following two losses between I_k and Î_k. The first is the Charbonnier loss defined by L_cb = ∑^K_k=1√(‖I_k - Î_k‖_2^2 + ϵ^2). The second is the frequency reconstruction loss given by L_fr = ∑^K_k=1‖ℱ(I_k) - ℱ(Î_k)‖_1, where ℱ denotes the fast Fourier transform. We use the following weighted sum as the total training loss: L_total = α L_cb + β L_fr, where we set α and β to 1 and 0.01, respectively, in our experiments. § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets We use three datasets in our experiments: GOPRO <cit.>, RealBlur <cit.>, and HIDE <cit.>. We primarily use the GOPRO <cit.> dataset for the training of the proposed method. The GOPRO <cit.> dataset contains 2,013 training pairs of 22 different scenes and 1,111 test pairs of 11 different scenes. The RealBlur <cit.> dataset contains 3,758 training pairs of 182 different scenes and 980 image pairs of 50 different scenes. It contains two subsets: RealBlur-J <cit.>, a set of JPEG images processed by camera ISPs, and RealBlur-R <cit.>, those generated from camera raw images. The HIDE <cit.> dataset contains 8,422 pairs of realistic blurry and ground truth images. As these datasets do not officially provide reference images, we choose them as follows. For the GOPRO <cit.> dataset, we randomly sample another frame from the same scene as the reference. Specifically, we choose a reference from the range of [-30,30] frames before and after the target frame. We do the same for the RealBlur <cit.> and HIDE <cit.> datasets, where we randomly choose an image of the same scene as the target image as its reference.
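As a concrete illustration of the training objective defined above (the Charbonnier term plus the frequency reconstruction term summed over the K scales), a minimal PyTorch sketch could look as follows; the per-pixel averaging and the value of ϵ are implementation assumptions, since the text only specifies α=1 and β=0.01.

import torch

def total_loss(preds, gts, alpha=1.0, beta=0.01, eps=1e-3):
    # preds, gts: lists of (B, 3, H_k, W_k) tensors, one pair per scale k = 1, ..., K.
    l_cb, l_fr = 0.0, 0.0
    for pred, gt in zip(preds, gts):
        diff = pred - gt
        # Charbonnier term: a smooth, robust L1-like penalty on the pixel residual.
        l_cb = l_cb + torch.sqrt(diff ** 2 + eps ** 2).mean()
        # Frequency reconstruction term: L1 distance between the 2D FFTs.
        l_fr = l_fr + (torch.fft.fft2(pred) - torch.fft.fft2(gt)).abs().mean()
    return alpha * l_cb + beta * l_fr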
We train our models so that they can effectively utilize not only sharp images but also blurry images as references. Therefore, we include blurry reference images in the training for both datasets. Specifically, we randomly choose a sharp or a blurry image for each target image; we set their ratio to 8:2 (sharp:blurry). We use the ground-truth sharp images as the sharp references and the input blurry images at the frames chosen as above as the blurry references. The side length L used in matching acceleration is set to 16. §.§.§ Implementation Details We primarily use DeblurNet <cit.>, two variants of MIMO-UNet <cit.>, and two variants of NAFNet <cit.> as backbones. MIMO-UNet and MIMO-UNet+ employ eight and twenty ResBlocks <cit.> in the encoder and decoder, respectively. NAFNet32 and NAFNet64 employ 32 and 64 channels, respectively. As shown in Fig. <ref>, we augment each backbone with three components, i.e., a feature matching module (Fig. <ref>), a feature fusion module (Fig. <ref>), and an encoder for extracting features from the input reference image. The feature matching module first extracts features from I^inter (or I^blur at the coarsest scale) and I^ref. We use an ImageNet <cit.> pretrained VGG19 <cit.> network for this purpose. To extract features from I^ref, we use a stack of three convolutional layers followed by four ResBlocks in Ref-DeblurNet and Ref-MIMO-UNet, and a stack of three convolutional layers followed by two NAFBlocks in Ref-NAFNet. §.§.§ Training We train all components as a whole in an end-to-end manner using the Adam optimizer <cit.> with β_1=0.9 and β_2=0.999. Setting the initial learning rate to 2 × 10^-4, we employ a learning rate scheduler based on cosine annealing <cit.>; the learning rate decreases steadily to 1 × 10^-6. Following previous studies of single-image deblurring, we set the input image size to 256 × 256 at training time. (We input the images with their original sizes into the network at test time.) As the original images have larger sizes, we randomly crop 256 × 256 square regions from these images; we crop an identical square from the blurry image and its reference image. We apply random horizontal and vertical flipping with a probability of 0.5 for data augmentation. §.§ Experimental Results §.§.§ Quantitative Comparison We first evaluate our proposed method on the GOPRO <cit.>, HIDE <cit.>, and RealBlur <cit.> datasets. The compared methods fall into two groups: single-image methods and reference-based methods. We borrow their results from the respective papers, where a network is trained and tested in the same setting. Note that reference-based methods, including ours, only need a single frame as additional input. We train our network on the training set of the GOPRO dataset and test it on the test sets of the GOPRO and HIDE datasets. Table <ref> shows the quantitative results. Our method is integrated into five base models: DeblurNet, MIMO-UNet, MIMO-UNet+, NAFNet32, and NAFNet64. The proposed method shows improvements of 1.13dB (30.55 vs. 31.68), 0.8dB (32.53 vs. 31.73), 0.73dB (33.18 vs. 32.45), 0.37dB (33.22 vs. 31.85), and 0.44dB (34.13 vs. 33.69) in PSNR, respectively, on GOPRO <cit.>. When testing the same networks on HIDE <cit.>, our method yields further improvements, e.g., 0.79dB (30.07dB vs. 29.28dB) and 0.92dB (30.91dB vs. 29.99dB). The comparison results on RealBlur <cit.> are presented in Table <ref>.
Our network is trained using the GOPRO dataset and subsequently evaluated directly on RealBlur-R and -J <cit.>. Despite the substantial domain gap between the RealBlur dataset and the GOPRO dataset, the proposed module demonstrates the potential for performance improvement by utilizing a reference image. This improvement is notable considering that the RealBlur dataset consists of non-synthesized blurry images. We have omitted the performance evaluation of DeblurNet and NAFNet64 due to specific reasons. DeblurNet lacks performance data on the RealBlur dataset, while NAFNet64 produces unusual images in certain cases of the RealBlur dataset. §.§.§ Qualitative Comparison Figure <ref> shows examples of deblurred images for several challenging images that have been used in the literature. It shows the results of our Ref-NAFNet64 and the SOTA single image deblurring methods. We can see that our models achieve the best results; Figures <ref>,  <ref>, and  <ref> compare the original MIMO-UNet+<cit.> and our modified Ref-MIMO-UNet+ on the HIDE <cit.> and RealBlur <cit.> datasets. We also compared the deblurred results of our model without reference, which shows the effectiveness of using a reference image. §.§ Other Results §.§.§ Ablation Study of Feature Matching We employ a multi-output architecture and create a feature matching module that conducts matching on the intermediate output. In order to assess the efficacy of this approach, we compare it to models that directly perform feature matching on the blurry input and reference input. The results in Table <ref> indicate that utilizing the intermediate output for feature matching yields a 0.1dB improvement in PSNR performance on GOPRO dataset. This demonstrates the effectiveness of the proposed design. §.§.§ Detailed Comparison and Results on Different SOTA methods To further analyze the effectiveness of proposed methods on other SOTA deblurring models, we conduct several experiments with different backbones and test different configurations of the feature fusion module. We report their results in Table <ref>; fusion num. =1, 2 means that the reference features are fused with those of the blurry image on a scale of 0.5, 0.25. We can see that fusing on more scales achieves a better result. The effectiveness of the proposed coarse-to-fine design is showcased. §.§.§ Impacts of Reference Selection The choice of a reference image will affect the result of our method. Figure <ref> demonstrates the impact of the choice of reference images on Ref-MIMO-UNet. We select reference images with different properties here. The image in the first row is from the GOPRO dataset. The specified reference image is chosen from the same sequence as the input; it is less blurry than the input. Column (c) shows the results obtained by the model that does not use a reference, which illustrates the upper bound performance of single image deblurring methods. Although the reference is blurry, using it as a reference leads to a better result; the edges of the windows, etc. are more straight and textures become finer. The images in the second and third rows are from HIDE. For the second image, we specify an image contains a car of the same model seen in the input image; we choose it from a different source from HIDE. We can see that the result in (d) reconstructs slightly more accurate texture of the wheel, although the reference is not so sharp. For the third image, we specify an image of the same road but from a considerably different viewpoint. 
Using the reference (in (d)) leads to clearly better results, such as the precise reconstruction of text. §.§.§ Application to Video Deblurring In real-world deblurring, we may not be able to find an ideal sharp image as a reference. Real-world videos often consist of frames with different degrees of blur. In that case, we can find sharp (or mildly blurry) frames and use them as references to deblur the blurry frames. To do this, we adopt a simple method for evaluating the sharpness of an image <cit.>. The sharpness of an image I of size M× N is defined as the number of components of |CFT(I)| whose magnitude is larger than thres = max(|CFT(I)|)/1000, divided by M× N, where CFT denotes the centered Fourier transform. Sharper images have higher sharpness scores. Figure <ref> shows several images with different levels of blur and their evaluated sharpness scores. To evaluate the proposed methods on blurry reference inputs, we conduct experiments using the GOPRO dataset, which contains images with different amounts of blur in each image sequence. Specifically, for each input, we consider the range from -30 to 30 frames in the same sequence, and then choose a frame as a reference in the following three ways: * The most blurry frame * The frame with intermediate blur * The sharpest frame All the frames in the above range (60 in total) are sorted by sharpness. The above three images are chosen from the top, the middle, and the bottom of the sorted list. Table <ref> shows the results when we specify each of the above three images as references. We can see that our method yields better results as the sharpness of the reference image increases. §.§.§ Runtime and Params Comparison We compare the number of parameters and the computational cost of our modified reference-based models with those of the original models. The results are listed in Table <ref>. The MACs column compares the computational cost of processing a 256 × 256 patch. §.§ Summary and Conclusion We have proposed a new method to deblur a blurry image with the help of a reference image. While the reference image is ideally a sharp image of the same scene as the input image, our method can utilize a less blurry image of the same scene or even an image of a different one. The method employs a coarse-to-fine approach, in which the sharp image at each resolution is estimated and used for subsequent steps. This approach mitigates the difficulty of aligning (or, rigorously, matching local patches between) the blurry input image and the sharp reference image, leading to the proper fusion of their local features to recover the sharp details of the blurry input patches. We have designed our method in the form of modules that augment state-of-the-art single-image deblurring architectures. Thus, it can, in principle, be integrated into any single-image deblurring method following the same coarse-to-fine approach, including those to be developed in the future. The experimental results confirm the effectiveness of our approach. barnes2009patchmatch Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph., 28(3):24, 2009. blau2018perception Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6228–6237, 2018. chen2022simple Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. arXiv preprint arXiv:2204.04676, 2022.
chen2021hinet Liangyu Chen, Xin Lu, Jie Zhang, Xiaojie Chu, and Chengpeng Chen. Hinet: Half instance normalization network for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 182–192. IEEE, 2021. cho2021rethinking Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4641–4650. IEEE, 2021. de2013image Kanjar De and V Masilamani. Image sharpness measure for blurred images in frequency domain. Procedia Engineering, 64:149–158, 2013. gao2019dynamic Hongyun Gao, Xin Tao, Xiaoyong Shen, and Jiaya Jia. Dynamic scene deblurring with parameter selective sharing and nested skip connections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3848–3856. IEEE, 2019. he2016deep Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. kingma2014adam Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. krizhevsky2012imagenet Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. kupyn2019deblurgan Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8878–8887. IEEE, 2019. li2022learning Dasong Li, Yi Zhang, Ka Chun Cheung, Xiaogang Wang, Hongwei Qin, and Hongsheng Li. Learning degradation representations for image deblurring. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XVIII, pages 736–753. Springer, 2022. li2022reference Yaowei Li, Ye Luo, and Jianwei Lu. Reference-guided deep deblurring via a selective attention network. Applied Intelligence, 52(4):3867–3879, 2022. li2022deep Yaowei Li, Jinshan Pan, Ye Luo, and Jianwei Lu. Deep ranking exemplar-based dynamic scene deblurring. IEEE Transactions on Image Processing, 31:2245–2256, 2022. liu2023reference Cunzhe Liu, Zhen Hua, and Jinjiang Li. Reference-based dual-task framework for motion deblurring. The Visual Computer, pages 1–15, 2023. loshchilov2016sgdr Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. nah2017deep Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3883–3891, 2017. park2020multi Dongwon Park, Dong Un Kang, Jisoo Kim, and Se Young Chun. Multi-temporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training. In Proceedings of European Conference on Computer Vision, pages 327–343. Springer, 2020. purohit2020region Kuldeep Purohit and AN Rajagopalan. Region-adaptive dense network for efficient motion deblurring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11882–11889, 2020. rim2020real Jaesung Rim, Haeyun Lee, Jucheol Won, and Sunghyun Cho. 
Real-world blur dataset for learning and benchmarking deblurring algorithms. In Proceedings of European Conference on Computer Vision, pages 184–201. Springer, 2020. shen2019human Ziyi Shen, Wenguan Wang, Xiankai Lu, Jianbing Shen, Haibin Ling, Tingfa Xu, and Ling Shao. Human-aware motion deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5572–5581, 2019. simoncelli2001natural Eero P Simoncelli and Bruno A Olshausen. Natural image statistics and neural representation. Annual review of neuroscience, 24(1):1193–1216, 2001. simonyan2014very Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. tao2018scale Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8174–8182, 2018. wang2021dual Tengfei Wang, Jiaxin Xie, Wenxiu Sun, Qiong Yan, and Qifeng Chen. Dual-camera super-resolution with aligned attention modules. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2001–2010. IEEE, 2021. xiang2020deep Xinguang Xiang, Hao Wei, and Jinshan Pan. Deep video deblurring using sharpness features from exemplars. IEEE Transactions on Image Processing, 29:8976–8987, 2020. yang2020learning Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, and Baining Guo. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5791–5800. IEEE, 2020. zamir2021multi Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14821–14831. IEEE, 2021. zhang2019deep Hongguang Zhang, Yuchao Dai, Hongdong Li, and Piotr Koniusz. Deep stacked hierarchical multi-patch network for image deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5978–5986. IEEE, 2019. zhang2019image Zhifei Zhang, Zhaowen Wang, Zhe Lin, and Hairong Qi. Image super-resolution by neural texture transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7982–7991. IEEE, 2019. zhou2019davanet Shangchen Zhou, Jiawei Zhang, Wangmeng Zuo, Haozhe Xie, Jinshan Pan, and Jimmy S Ren. Davanet: Stereo deblurring with view aggregation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10996–11005. IEEE, 2019.
http://arxiv.org/abs/2307.02111v1
20230705083800
Event Rate of Fast Radio Burst from Binary Neutron-star Mergers
[ "Zhi-Lin Chen", "Rui-Chong Hu", "Da-Bin Lin", "En-Wei Liang" ]
astro-ph.HE
[ "astro-ph.HE" ]
Da-Bin Lin lindabin@gxu.edu.cn 0000-0002-0926-5406]Zhi-Lin Chen Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China 0000-0002-6442-7850]Rui-Chong Hu Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China 0000-0003-1474-293X]Da-Bin Lin Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China 0000-0002-7044-733X]En-Wei Liang Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China It is proposed that one-off fast radio burst (FRB) with periodic structures may be produced during the inspiral phase of a binary neutron-star (BNS) merger. In this paper, we study the event rate of such kind of FRB. We first investigate the properties of two one-off FRBs with periodic structures (i.e., FRB 20191221A and FRB 20210213A) in this scenario, by assuming the fast magnetosonic wave is responsible for their radio emission. For the luminosities and periods of these bursts, it is found that the pre-merger BNS with magnetic field strength B≳ 10^12 Gs is required. This is relatively high compared with that of the most of the BNSs observed in our Galaxy, of which the magnetic field is around 10^9 Gs. Since the observed BNSs in our Galaxy are the binaries without suffering merger, a credited event rate of BNS-merger originated FRBs should be estimated by considering the evolution of both the BNS systems and their magnetic fields. Based on the population synthesis and adopting a decaying magnetic field of NSs, we estimate the event rate of BNS-mergers relative to their final magnetic fields. We find that the rapid merged BNSs tend to merge with high magnetization, and the event rate of BNS-merger originated FRBs, i.e., the BNS-mergers with both NSs' magnetic field being higher than 10^12 Gs is ∼8×10^4 yr^-1 (19 % of the total BNS-mergers) in redshift z<1. § INTRODUCTION The merger of binary neutron-star (BNS) systems can lead to rich electromagnetic phenomena, in addition to the strong emission of gravitational waves <cit.>. In the post-merger phase, a short gamma-ray burst GRB 170817A <cit.>, the associated afterglows <cit.>, and a kilonova AT2017gfo peaking at ∼1 day <cit.> have been observed as the electromagnetic counterparts of gravitational wave event GW 170817 <cit.>, which is the first gravitational wave signal from a BNS merger detected by the advanced LIGO and Virgo detectors <cit.>. During the pre-merger phase, the magnetospheres of two neutron-stars (NSs) in a BNS would interact with each other and thus energetic Poynting-flux would be driven from the BNS system. The dissipation of the driven Poynting-flux would appear as multi-band electromagnetic precursors <cit.>. This scenario has been confirmed by general relativistic magnetohydrodynamic simulations (e.g., ). Compared with the post-merger multi-band emission, radiation signals from the pre-merger BNS may provide more detailed information about the equation of state for a NS <cit.> and the magnetospheres interaction in a BNS system <cit.>. The interaction of the magnetospheres in a BNS during its late inspiral phase could form strong Poynting-flux by extracting the orbit kinetic energy of the system <cit.>. 
It is shown that the power of the Poynting-flux during this phase could range from ∼ 10^38 erg· s^-1 to 10^44 erg· s^-1 <cit.>, which is strong enough to form the detectable radio emission from a cosmology distance. Many efforts have been made for studying the emission from a pre-merger BNS and the association with fast radio bursts (FRBs). <cit.> proposed that the BNS-mergers with high magnetic field (B≳10^12 Gs) could produce the observed FRBs through the coherent radio mechanism like that in a isolated pulsar. <cit.> found that a unipolar inductor model <cit.> with a high magnetized primary NS (B≳10^12 Gs) and a weak magnetized companion could produce FRBs by accelerating electrons in coherent slices. <cit.> investigated the pulsar-like emission within polar gap model <cit.> and showed the coherent millisecond radio bursts could be detected in Gpc distances by next-generation radio facilities if one NS has a magnetic field higher than 10^12 Gs. The fast magnetosonic wave has been considered in magnetar <cit.> and double neutron star system <cit.> as the radio emission mechanism for FRBs. <cit.> presented the detailed behavior of the flare arising from the magnetic reconnection in the common magnetospheres of BNS with magnetic fields of ∼ 10^11 Gs, and demonstrated that the flare interacting with the orbital current sheet could produce the radio transients with sub-millisecond quasi-periodic structure like FRB 20201020A <cit.>. The emissions formed in a pre-merger BNS would appear with periodicity and the corresponding event would be one-off <cit.>. Such kind of temporal behavior has been proposed as a possible explanation for the periodicity in the sub-bursts of some one-off FRBs, e.g., FRB 20191221A and FRB 20210213A <cit.>. The BNS-merger originated FRB with periodic sub-pulses may account for a fraction of the FRB population, in spite of that the estimated rate of BNS mergers (∼10^3 Gpc^-3 yr^-1; ) was significantly less than the rate of FRBs (∼10^4 Gpc^-3 yr^-1; ). In this paper, we investigate the event rate of BNS-merger originated FRBs. We first study the properties of FRBs 20191221A and FRB 20210213A in the scenario that a pre-merger BNS is responsible for their radio emission. The properties of the corresponding BNS is our focus. Then, based on the population synthesis and adopting a decaying magnetic field of NSs, we estimate the event rates for BNS-mergers relative to their magnetic fields. The rest of this paper is organized as follows. In Section <ref>, the luminosity of the Poynting-flux from a pre-merger BNS and the corresponding radio emission are presented. Based on the luminosities and periods of two one-off periodic FRBs (FRB 20191221A and FRB 20210213A), we estimate the magnetic fields of NSs in BNS. In Section <ref>, we estimate the event rate of BNS-merger originated FRBs, based on the population synthesis and adopting a decaying magnetic field of NSs. In Section <ref>, the conclusion and a brief discussion are presented. § PRE-MERGER BNS FOR ONE-OFF FRBS §.§ Luminosity of the Poynting-flux and the corresponding radio emission For a BNS system, the Poynting-flux driven in the pre-merger phase is related to the magnetic field of NSs and the orbital separation of BNS <cit.>. Then, the primary NS with a magnetic dipole moment, μ _∗ = B_∗R_∗^3, and the companion NS with μ _c = B_cR_c^3 are adopted, where B and R are respectively the dipolar magnetic field and radius of a NS, and R_∗ = R_c = 13.6 km is adopted. 
In this paper, the subscript “∗” (“c”) represents the parameters of the primary (companion) NS in a BNS, and the primary (companion) NS refers to the NS with the heavier (lighter) mass in the BNS. For general BNS systems, the magnetic field has a negligible effect on the inspiral behavior of BNSs <cit.>. Then, the evolution of the BNS's orbital separation a is mainly associated with the gravitational-wave radiation of the system and thus can be described as (), ȧ=-64 G^3( M_∗^2M_c+ M_∗M_c^2)/5c^5 a^3, where G is the gravitational constant, M_∗ = M_c = 1.4 M_⊙ is adopted with M_⊙ being the solar mass, and c is the speed of light. The radius and mass of the NS are obtained with the LORENE library[LORENE home page, http://www.lorene.obspm.fr/http://www.lorene.obspm.fr/.], which assumes a polytropic equation of state P=Kρ^Γ with Γ=2 and K=123. Besides the orbital separation, the Poynting-flux from the system also depends on the magnetic field, the dipolar orientations (with respect to the orbital angular momentum), and the ratio of the magnetic moments (). We consider three basic cases in this paper: the U/u case, in which the magnetic field at one NS is dominated by that from the other NS, i.e., max{μ_∗,μ_c}/a^3 > min{B_∗,B_c}; the U/U case, in which the two NSs have equal magnetic moments aligned with the orbital angular momentum; and the U/-U case, in which one magnetic moment is anti-aligned compared to the U/U case. Here, both “U” and “u” symbols represent the magnetic dipole moment of a NS, and “U” (“u”) represents the NS with the higher (lower) magnetic dipole moment in the BNS when “U” appears together with “u”. In addition, we would like to clarify that the three cases discussed below are particular flavours of the inspiral scenario. Other flavours are not discussed in this paper, e.g., the pulsar revival model (). * In the case of U/u, a unipolar induction model was studied by <cit.>, in which one of the two NSs in the BNS is assumed to be a perfect conductor with a negligible magnetic field. Following equation (22) of <cit.>, the maximum power of the Poynting-flux from the BNS system is given by L_BNS≈ 4.0 × 10^43(max{B_∗,B_c}/10^12 Gs)^2(a/27.2  km )^-7 erg·s^-1, under the assumption that the resistance of the BNS system is dominated by the magnetospheres, where 27.2 km=2R_∗=2R_c is the minimum orbital distance for the two NSs. * <cit.> studied the case of U/U. Based on equation (5) of , the power of the Poynting-flux extracted from the system can be read as L_BNS≈ 1.8 × 10^44(0.19 / η-0.08) (B_∗/10^12 Gs)^2 (a/27.2 km )^-9 / 2 erg·s^-1, where η = Δ r/r, with Δ r being the thickness of the compacted region in the common magnetosphere and r=a/[1+(μ_c/μ_∗)^1/3]=a/2 representing the distance at which the magnetic fields of the two NSs come into contact; η = 0.1 is adopted from <cit.>, and this parameter affects the energy stored in the compacted region. It should be noted that μ_c=μ_∗ and B_c=B_∗ in this case. * For the case U/-U, the twisted magnetic flux loop that connects the two NSs can be broken by the orbital motion. Correspondingly, a flare is ejected by the magnetic reconnection. The energy stored in the twisted field lines can be estimated as Δ E_twist ≈ B^2_∗R^3_∗ (2R_∗/a)^(2+β) <cit.> and the release time-scale of the magnetic field is Δ t≃ 2a/v_rec <cit.>, where v_rec=0.3c is the reconnection velocity.
Thus, the luminosity of the flare can be estimated as L_BNS≈Δ E_twist /Δ t=4.1 × 10^45 (a /27.2 km)^-(3+β) (B_∗/10^12 Gs )^2 (v_rec/0.3c ) erg·s^-1, where β=1/2 is an index introduced to match the scaling relation L_flare∝ a^-7/2 found in <cit.>. It should be noted that μ_c=μ_∗ and B_c=B_∗ in this case. For the more common case μ_c≠μ_∗, it is possible to generalize the luminosity estimated in Equations (<ref>) and (<ref>). We take the situation with μ_c < μ_∗ as an example. Following previous works (e.g., ), a magnetic balance sphere will exist around the companion NS with an effective radius a_eff =a/[(μ_∗/μ_c)^1/3+1], and the luminosity from the system with μ_c < μ_∗ is nearly equal to that from the system with μ_∗ = μ_c and a=2a_eff. That is to say, Equations (<ref>) and (<ref>) can be used in a system with unequal magnetic fields by replacing B_∗ with B_c and a with 2a_eff. It should be noted that the luminosity of the Poynting-flux is governed by the minimum magnetic moment of the NSs (i.e., min{μ_∗,μ_c}) for both the U/-U and U/U cases, and by the maximum magnetic moment of the NSs (i.e., max{μ_∗,μ_c}) for the U/u case. In this paper, we consider BNSs in which the two NSs have equal magnetic fields, so the inferred fields should be treated as lower limits on the magnetic fields in the BNS system. In addition, the effect of the NS spin on the Poynting-flux from the BNS system () is not considered in this paper. For FRBs originating from pre-merger BNSs, the radio emission is formed during the dissipation of the Poynting-flux driven during the inspiral phase of a BNS-merger. We assume that the fast magnetosonic wave is responsible for the radio emission, which has been used to explain giant radio pulses from pulsars (). In the spirit of <cit.>, we take the following relation between the FRB luminosity L_FRB and the Poynting-flux luminosity L_BNS, i.e., L_FRB=f (v_rec/c) L_BNS/[1/(4Γ^2_BNS)] =2.4×10^-3 (f/0.002) (v_rec/0.3c) Γ^2_BNS L_BNS, where f is the efficiency of the magnetic reconnection driving the fast magnetosonic wave and Γ_BNS is the bulk Lorentz factor of the Poynting-flux. According to <cit.>, an outward Poynting-flux arriving at the light cylinder R_LC = c/Ω_orbit with magnetic field strength B_BNS=√(L_BNS/c )/R_LC could have a bulk Lorentz factor Γ_BNS=max [√(B_BNS/B_LC)/2 ,Γ_BNS,min], where Ω_orbit =[G M_∗(1+M_c/M_∗)/a^3 ]^1/2 is the orbital frequency and Γ_BNS,min=2 is the minimum Lorentz factor. Once the outflow moves across the light cylinder, the perturbation in the current sheet could trigger magnetic reconnection, and the resulting fast magnetosonic wave will escape into the vacuum and convert into coherent emission <cit.>. In this process, local kinetic simulations <cit.> showed that the conversion efficiency of the reconnection energy into the fast magnetosonic wave energy is f ≃ 0.002. §.§ Properties of BNS for two one-off FRBs There are three one-off FRBs with periodic sub-pulses, i.e., FRBs 20191221A (216.8 ms), 20210206A (2.8 ms), and 20210213A (10.7 ms), reported by <cit.>, where the period signal significance of FRB 20191221A (6.5 σ) is higher than that of FRB 20210206A (1.3 σ) and FRB 20210213A (2.4 σ). The origin of their periodicity is an open question <cit.>. It has been proposed that such FRBs (like FRB 20191221A and FRB 20210213A) may be produced during the inspiral phase of a BNS-merger (), and we note that FRB 20210206A is disfavoured in a BNS-merger scenario because its expected period derivative is not consistent with the observed sub-pulse separation ().
In this scenario, we study the properties of the corresponding BNS system based on the periodicity and luminosity of these two FRBs (FRB 20191221A and FRB 20210213A)[ FRB 20201020A was detected with a sub-millisecond period (∼0.415 ms) and a high period signal significance (2.5 σ) <cit.>. However, this burst is not considered in this section since its period may be too small compared with the minimum orbital period of BNSs ().]. Periodicity Estimation. The direction of the Poynting-flux depends on the relative inclination of the magnetic moments and their ratio <cit.>. In general, the magnetic dipole axis is not aligned with the spin axis in the pulsar magnetic field configuration, which suggests that the timescale of the Poynting-flux-powered flares along the line of sight should be close to the orbital period P_orbit of the corresponding BNS system <cit.>. For simplicity, we assume that the period P_FRB of these two FRBs corresponds to the orbital period of the corresponding BNS system, P_orbit=2π[G(M_∗+M_c)/a^3 ]^-1/2, i.e., P_FRB=P_orbit. Luminosity. The isotropic peak luminosity of these bursts is estimated as follows <cit.>, L_FRB≃ 4π× 10^42 (D_L/10^28 cm)^2 F_p/Jyν_cf/GHz erg·s^-1, where F_p is the peak flux, D_L is the luminosity distance of the burst, and ν_cf is the central frequency in the observed bandwidth. The value of D_L is estimated from the redshift z, and the redshift of FRBs with a host-galaxy location is found to be related to the excess dispersion measure DM_excess, i.e., the total dispersion measure without the contribution from our Galaxy <cit.>. The relation between z and DM_excess obtained from appendix A of <cit.>, i.e., DM_excess=1028z+84.34, is used in this paper, where the contribution of our Galaxy to the dispersion measure for these two FRBs is taken from <cit.>[https://www.atnf.csiro.au/research/pulsar/ymw16/https://www.atnf.csiro.au/research/pulsar/ymw16/]. Correspondingly, the obtained redshifts of FRB 20191221A and FRB 20210213A are ∼ 0.19 and ∼ 0.35, respectively. Since the maximum redshift of observed FRBs is ∼ 1.0 (FRB20220610A, ), the BNS-mergers within a redshift of 1 are our focus. Properties of BNSs for FRBs. The luminosity of a pre-merger BNS is related to the value of a and thus to the appearance time of FRBs preceding the BNS-mergers. We therefore first estimate the appearance times of our studied FRBs preceding the BNS-mergers based on the periods of these FRBs. In panel (a) of Figure , we plot the relation between the periods of FRBs and the orbital period of the BNS with a black line. With the periods of our studied FRBs, the orbital periods of the corresponding BNSs are estimated. From this panel, one can find that the appearance times of FRBs 20191221A (red triangle) and 20210213A (blue triangle) are ∼-5000 s and -1.6 s, respectively. Here, the merger time of the BNS is set as the zero time. In panels (b) and (c) of Figure , we show the minimum magnetic fields of NSs in the BNS for our studied FRBs. In these panels, the luminosity of the Poynting-flux relative to the pre-merger time of the BNS in the cases of U/u, U/U, and U/-U with the same set of B_* and B_c is shown with gray, red, and orange lines, respectively. From these lines, the luminosity of the Poynting-flux in the case of U/-U (orange lines) is higher than that in the other two cases. Thus, the minimum strength of the magnetic fields of the NSs is estimated based on the case of U/-U and shown in the upper-left part of these two panels, i.e., 8×10^12 Gs and 2.4×10^12 Gs for FRB 20191221A and FRB 20210213A, respectively.
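As a rough numerical sketch of how such minimum fields are obtained, one can invert the U/-U scaling: the FRB period gives the orbital separation through Kepler's third law, and the observed L_FRB fixes the required B_∗ through Equations (<ref>) and (<ref>). The Python snippet below is only indicative and is not the code used to produce the figure: it assumes f=0.002 and v_rec=0.3c, and it uses a fixed illustrative value of Γ_BNS instead of evaluating it from B_BNS/B_LC.

import numpy as np

G = 6.674e-8            # gravitational constant, cgs
c = 2.998e10            # speed of light, cm/s
Msun = 1.989e33         # solar mass, g
M1 = M2 = 1.4 * Msun
R_ns = 13.6e5           # NS radius in cm (13.6 km), so 2*R_ns = 27.2 km

def separation_from_period(P_orb):
    # Kepler's third law: a = [G (M1 + M2) P^2 / (4 pi^2)]^(1/3)
    return (G * (M1 + M2) * P_orb**2 / (4.0 * np.pi**2))**(1.0 / 3.0)

def L_bns_UmU(B, a, beta=0.5):
    # U/-U flare luminosity: L ~ 4.1e45 (a/27.2 km)^-(3+beta) (B/1e12 Gs)^2 erg/s (for v_rec = 0.3c)
    return 4.1e45 * (a / (2.0 * R_ns))**(-(3.0 + beta)) * (B / 1e12)**2

def min_B(P_frb, L_frb, f=0.002, v_rec_over_c=0.3, gamma=10.0, beta=0.5):
    # Invert L_FRB = 4 f (v_rec/c) Gamma^2 L_BNS and use L_BNS ∝ B^2 to solve for the field.
    a = separation_from_period(P_frb)
    L_bns_needed = L_frb / (4.0 * f * v_rec_over_c * gamma**2)
    return 1e12 * np.sqrt(L_bns_needed / L_bns_UmU(1e12, a, beta))

# Example with the 10.7 ms period quoted above and a hypothetical isotropic luminosity of 1e42 erg/s.
print(min_B(1.07e-2, 1e42))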
Then, we can conclude that the minimum strength of the magnetic field of the NSs in the BNS responsible for these two one-off FRBs should be ≳10^12 Gs. Such high magnetic fields required for explaining one-off FRBs have also been proposed in a series of works, e.g., <cit.>, <cit.>, and <cit.>. We also find that it is difficult for the case U/u to produce FRBs like FRB 20191221A and FRB 20210213A, since a magnetar with ≳10^14-16 Gs would have to be involved in the corresponding BNS-merger. Such BNS-mergers would constitute just a tiny fraction (∼0.35 %) of the total BNS-mergers at z<1 based on the estimation in Section <ref> and thus are not the focus of Section <ref>. Hereafter, “the BNS-merger originated FRBs” refers to the FRBs formed in the case U/-U or U/U. We have found that a high magnetic field (≳10^12 Gs) in a pre-merger BNS is needed to produce the one-off FRBs we studied. Such a high magnetic field is often observed in Galactic isolated pulsars, while it is relatively rare for the pulsars in Galactic BNS systems (). This implies that most of the BNSs in our Galaxy would merge with low magnetic fields[https://www.atnf.csiro.au/research/pulsar/psrcat/https://www.atnf.csiro.au/research/pulsar/psrcat/] (∼10^9 Gs; ) and could not power the FRBs we studied. To obtain a credible event rate of BNS-merger originated FRBs, i.e., BNS-mergers with both NSs' magnetic field strength being higher than 10^12 Gs, we use population synthesis and adopt a decaying magnetic field of NSs to study the event rate of BNS-mergers in Section <ref>. We also provide the event rate of BNS-mergers with at least one magnetized NS (>10^12 Gs), which could be useful in other scenarios, e.g., <cit.>. § MERGER RATE OF BNSS RELATIVE TO THEIR MAGNETIC FIELD Magnetic Field Prescription for a NS. In order to estimate the event rate of BNS-merger originated FRBs, the event rate of BNS-mergers relative to the final magnetic fields of the NSs should be given. We use the data of the BNS-mergers from Model M33.A in <cit.>; the main features of the binary evolution models for Model M33.A can be found in their table 2, and the binary evolution calculations were performed with the upgraded population synthesis code StarTrack[The StarTrack population synthesis code is not an open source code, and a basic description of the code can be found in <cit.> and <cit.>. The improvements of StarTrack are given in Appendix A.7 of <cit.>. The StarTrack population synthesis code was developed for the study of double compact object mergers based on binary evolution calculations. Given models of the cosmic star formation history and metallicity evolution, the StarTrack simulation tracks the mergers of double compact objects at different redshifts. ] <cit.>. Based on the data of Model M33.A, the BNS-merger rate density (Gpc^-3 yr^-1) at different redshifts can be obtained and is plotted in the right panel of Figure  with a black line. The local merger rate densities of BNS-mergers and binary black hole mergers from Model M33.A (<cit.>) are consistent with the observational limits of the LIGO-Virgo O1/O2/O3 observing runs (). In addition, the magnetic field of the NSs in the BNS-mergers is vital for our estimation. Thus, the magnetic field of a newborn NS and its evolution during the lifetime of the corresponding NS should be prescribed. <cit.> modeled the population of BNSs in our Galaxy by using the binary population synthesis code <cit.>, and found that the scenario with an exponentially decaying magnetic field for NSs is consistent with the observations.
Then, the magnetic field evolution of NSs is taken as B_∗/c= ( B_init-B_min ) exp ( -t_∗/c/Δ ) +B_min, where B_init is the magnetic field of the newborn NS, Δ is the decay timescale of the magnetic field (; see for a review), and t_∗ (t_c) is the lifetime of the primary NS (the companion NS) in BNSs. The value of B_init/Gs is randomly selected from a lognormal distribution with mean μ_B_init= 12.66 and variance σ_B_init= 0.35; the value of Δ/Myr is also randomly selected from a lognormal distribution with mean μ_Δ=0.56 and variance σ_Δ= 0.075[The value σ_Δ = 0.075 is estimated by fitting the distribution of log ( Δ /Myr ) in the “Power-law model” of figure 4 in <cit.>.] (). It should be noted that the lognormal distributions of the initial magnetic field and the corresponding decay timescale are obtained based on the isolated pulsars from simulations () and observations (). The observation of Galactic NSs reveals that the isolated NSs and the NSs from BNSs present almost the same behavior in the relation between the magnetic field and the NS's characteristic age[https://www.atnf.csiro.au/research/pulsar/psrcat/https://www.atnf.csiro.au/research/pulsar/psrcat/] <cit.>. We set a maximum value B_init,max=10^14 Gs (i.e., B_init⩽ B_init,max) for the initial magnetic field of the newborn NS, where 10^14 Gs is the maximum value we use according to the high-B pulsars (a rough definition is given in Section 3.4 of ). The minimum magnetic field B_min is drawn from a logarithmic uniform distribution between 10^7 Gs and 10^8 Gs <cit.>. The lifetime of the primary NS (the companion NS) in BNSs, i.e., t_∗ (t_c), was obtained based on the data of Model M33.A in <cit.>, in which both the birth time and the merger time of the NSs in every BNS-merger are recorded. It should be noted that the impact of the NS's magnetic field on the binary evolution is not included in the StarTrack population synthesis code and thus not in this work. We note that the BNS-merger rates below were calculated in the local universe, i.e., z=0, and the calculation was based on section 2.2 of <cit.>. Event Rate of BNS-mergers relative to their final magnetic fields. In the left panel of Figure , we plot the event rate density of BNS-mergers relative to the magnetic field strength of the primary NS (B_*) and that of the companion NS (B_c) within the redshift of 1.0. Here, the dashed line represents the relation B_∗ = B_c, and the event rate of BNS-mergers for B_∗∈ [B_∗,l, B_∗,u] and B_c∈[B_c,l, B_c,u] can be obtained by multiplying the event rate density by log(B_c,u/B_c,l)×log(B_∗,u/B_*,l). Based on the result shown in this panel, one can find the following facts. (1) The BNS-mergers concentrate in the region around (B_*, B_c)∼ (10^12 Gs, 4× 10^12 Gs). The event rate of BNS-mergers in this region, i.e., B_∗∈[0.32,1.68]×10^12 Gs and B_c∈[0.32,1.68]×10^12.6 Gs, is ≃ 4.6×10^4 yr^-1, i.e., 11 % of the total BNS-mergers. (2) The event rates of the BNS-mergers with both B_* and B_c being higher than 10^11 Gs, 10^12 Gs, and 10^13 Gs can be estimated, and are ∼1.6×10^5 yr^-1 (40 % of the total BNS-mergers), ∼ 8×10^4 yr^-1 (19 % of the total BNS-mergers), and ∼ 567 yr^-1 (0.1 % of the total BNS-mergers), respectively. (3) The event rates of the BNS-mergers with either B_* or B_c being higher than 10^11 Gs, 10^12 Gs, and 10^13 Gs are ∼ 2.2×10^5 yr^-1 (52 % of the total BNS-mergers), ∼ 2.0×10^5 yr^-1 (47 % of the total BNS-mergers), and ∼ 3.1×10^4 yr^-1 (7 % of the total BNS-mergers), respectively.
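To illustrate how this prescription maps NS lifetimes onto final magnetic fields, a minimal Monte Carlo sketch (NumPy) is given below; the lifetime samples are placeholders standing in for the Model M33.A birth-to-merger times, the quoted σ values are used as the standard deviations of the log10 distributions, and the printed fractions are therefore only illustrative, not the values reported in the text.

import numpy as np

rng = np.random.default_rng(0)

def final_field(t_gyr):
    # Evolve B over a lifetime t (in Gyr): B = (B_init - B_min) exp(-t/Delta) + B_min
    n = t_gyr.size
    B_init = np.minimum(10.0**rng.normal(12.66, 0.35, n), 1e14)  # lognormal in log10(B_init/Gs), capped at 1e14 Gs
    Delta_myr = 10.0**rng.normal(0.56, 0.075, n)                  # decay timescale, lognormal in log10(Delta/Myr)
    B_min = 10.0**rng.uniform(7.0, 8.0, n)                        # floor field, log-uniform in [1e7, 1e8] Gs
    return (B_init - B_min) * np.exp(-t_gyr * 1e3 / Delta_myr) + B_min

# Placeholder lifetimes (Gyr) standing in for the StarTrack Model M33.A samples.
n = 100_000
t_primary = 10.0**rng.uniform(-5.0, 1.0, n)
t_companion = 10.0**rng.uniform(-4.0, 1.0, n)

B_primary = final_field(t_primary)
B_companion = final_field(t_companion)
both_high = np.mean((B_primary > 1e12) & (B_companion > 1e12))
either_high = np.mean((B_primary > 1e12) | (B_companion > 1e12))
print(f"both > 1e12 Gs: {both_high:.2f}, either > 1e12 Gs: {either_high:.2f}")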
We also calculate the event rate within the maximum redshift of our studied FRBs, i.e., z⩽0.35. It is found that the fraction of BNS-mergers related to different final magnetic fields of NSs is nearly consistent with the result estimated above. In the left panel of Figure , more than half of the BNS-mergers with both B_* and B_c being higher than 10^11 Gs are distributed below the dashed line, i.e., B_*=B_c. This reveals that the companion NS tends to have a higher magnetic field strength than the primary NS. This is no surprise because the companion NS in a BNS system is born after the primary NS, and thus the decay of its magnetic field is relatively weak compared with that of the primary NS. In the middle panel of Figure , we show the event rate density distributions relative to the lifetime of the primary NS (blue line) and the companion NS (orange line) in BNS-mergers. The event rate for a specific lifetime interval (in units of Gyr) can be obtained by multiplying the event rate density by the corresponding logarithmic (dimensionless) time interval. It is shown that the lifetime distribution of the companion NS in the BNS-mergers has two peaks, i.e., a peak at t_c∼ 10^-4 Gyr with an event rate density of ∼ 2×10^4 yr^-1 and a peak at t_c∼ 10^0.5 Gyr with an event rate density of ∼ 2×10^4 yr^-1. The lifetime distribution of the primary NS in the BNS-mergers has three peaks, i.e., peaking at t_*∼ 10^-5 Gyr with an event rate density of ∼5×10^2 yr^-1, at t_*∼ 10^-2 Gyr with an event rate density of ∼5×10^4 yr^-1, and at t_*∼ 10^0.5 Gyr with an event rate density of ∼2×10^4 yr^-1. This reveals that the BNS-mergers can be divided into two kinds based on the lifetime of the companion NS: the rapid merged BNSs and the slow merged BNSs. In the rapid merged BNSs, the lifetime of the companion NS peaks at t_c∼ 10^-4 Gyr. Since the lifetime of the companion NS is less than the decay timescale of a NS (i.e., Δ∼ 10^-2.4 Gyr), the magnetic field of the companion NS in the rapid merged BNSs is almost the same as that at its birth, i.e., the initial magnetic field. However, the magnetic field of the primary NS generally suffers significant decay in both kinds of mergers. Then, the following two facts can be understood. (1) The rapid merged BNSs always harbor a strongly magnetized companion NS and a weakly magnetized primary NS. Based on the middle panel of Figure , one can find that the rapid merged BNSs account for 47 % of the total BNS-mergers. Then, one can expect that the BNS-mergers with one NS's final magnetic field being higher than 10^12 Gs should be around 50 % of the total BNS-mergers. This is indeed what is obtained. (2) The BNS-mergers with both NSs' magnetic fields being higher than 10^12 Gs should be related to the rapid merged BNSs with both NSs' lifetimes being close to the decay timescale of a NS (i.e., Δ∼ 10^-2.4 Gyr). Based on the middle panel of Figure , one can find that such rapid merged BNSs account for around 19 % of the total BNS-mergers. In the right panel of Figure , we plot the total BNS-merger rate density (black line) and the BNS-merger rate density with both NSs' magnetic fields being larger than 10^12 Gs (red line), both of which refer to redshift z<1. Magnetars were not considered in Figure , because the exact condition for a progenitor to produce the magnetic field of a magnetar (or a pulsar) is still unclear <cit.>.
Here, we show that including magnetars in our estimation of the event rate of BNS-mergers relative to the final magnetic field would not appreciably change the results presented above. A magnetar is a strongly magnetized (10^14-10^15 Gs) NS with a period of several seconds <cit.>, whose magnetic field may have an unusual evolutionary behavior compared with that of a pulsar. The decay timescale of a magnetar (Δ_mag=4 kyr; ) is found to be around three orders of magnitude shorter than that of a pulsar (Δ≃4 Myr; ). Although the companion of a magnetar has not been found[https://www.physics.mcgill.ca/ pulsar/magnetar/main.htmlhttps://www.physics.mcgill.ca/ pulsar/magnetar/main.html] <cit.>, a magnetar may still be born in a BNS system through core-collapse supernovae <cit.> or the accretion-induced collapse of ONe white dwarfs <cit.>. We assume that magnetars can be born in BNS systems and estimate the event rate of rapidly merging BNSs hosting a magnetar (≳10^14 Gs), which should be the BNSs with merger times ≲Δ_mag=4 kyr. This is because the magnetic field of a magnetar would decay significantly in BNSs with merger times larger than Δ_mag=4 kyr. The event rate of BNS-mergers with merger times ≲Δ_mag=4 kyr is estimated to be R_ mag≈ 1.4× 10^3(f_ mag/0.1) yr^-1, or 0.35 % of the total BNS-mergers, where f_mag=0.1 is adopted as the minimum fraction of magnetars in the young NS population <cit.>. Compared with the rapidly merging BNSs, the fraction of BNS-mergers with merger times ≲Δ_mag=4 kyr (i.e., the BNS-mergers with a magnetar) can be neglected.
§ DISCUSSIONS AND CONCLUSIONS It is proposed that one-off FRBs with periodic structures may be produced during the inspiral phase of a BNS-merger. In this paper, we study the event rate of this kind of FRB. We first investigate the properties of some one-off FRBs with periodic sub-pulses (i.e., FRB 20191221A and FRB 20210213A) based on the scenario that the Poynting flux from pre-merger BNSs drives the observed radio emission. Three basic cases for producing the Poynting flux, according to the orientations of the NSs' magnetic moments with respect to the orbital angular momentum of the BNS, are discussed. By assuming that the fast magnetosonic wave is responsible for the radio emission, the minimum magnetic fields of NSs in pre-merger BNSs required to explain the observed periods and luminosities of FRB 20191221A and FRB 20210213A are estimated to be ∼8×10^12 Gs and ∼2.4×10^12 Gs, respectively. Thus, we conclude that the minimum magnetic fields in BNS mergers required to produce one-off periodic FRBs like these bursts should be as high as ≳10^12 Gs. Neutron stars with high magnetic fields (≳ 10^12 Gs) are relatively rare in the observed BNSs of our Galaxy, most of which consist of two low-magnetized NSs[https://www.atnf.csiro.au/research/pulsar/psrcat/https://www.atnf.csiro.au/research/pulsar/psrcat/] (∼10^9 Gs; ). This means that the BNSs responsible for the one-off FRBs should be systems other than the observed BNSs of our Galaxy, which would eventually merge with low magnetic fields. In order to obtain a credible event rate of BNS-mergers related to their final magnetic fields, we consider the evolution of both the BNSs and their magnetic fields. Based on population synthesis and adopting a decaying magnetic field for NSs, we estimate the event rate of BNS-mergers relative to their final magnetic fields.
It is found that the rapid merged BNSs, of which the merger time is generally less than the decay timescale of the magnetic field in a pulsar (or magnetar), tend to merge with high magnetization. In the rapid merged BNSs, the companion will merge with the nearly initial magnetic field, while the magnetization of the primary NS is often lower, which makes them become an ideal energy reservoir for the pre-merger electromagnetic counterparts like FRBs. In Milky Way, a rapidly merging BNS population is required for describing the observed heavy element abundances (), i,e., at least 40 % of the entire BNS population should merge within 1 Gyr. The possible effect of accretion-induced magnetic field decay during the stages of mass transfer is not considered in this paper (). The mass transfer from a stripped post-helium-burning donor star (case BB) onto a NS is likely occurring in a dominant channel to ensure the NS' recycling happens (). <cit.> estimated the amount of accreted mass ΔM_ NS by the NS during the case BB in different orbital periods and helium star masses, and ΔM_ NS= 5×10^-5-3×10^-3 M_⊙ was obtained. The accreted mass in case BB is significantly low compared with the magnetic-field-decay mass-scale (ΔM_ decay≃ 0.01-0.02 M_⊙; ). That is to say, the accretion of the primary NS in case BB has negligible effect on its magnetic field. In the same reason, the effect of the accreted mass in the high-mass X-ray binaries (ΔM_ NS= a few×10^-3 M_⊙; ) or NS-helium star stages (ΔM_ NS< 4 ×10^-4 M_⊙; ) on the suppression of the magnetic field in the primary NS can also be neglected. The detailed evolution of the common envelope phase is still unclear (see , for reviews). <cit.> studied the recycling pulsar in Galaxy and found that the mass transfer onto the primary NS during the common envelope is at most 0.02 M_⊙. If all of the primary NS have accreted ∼ 0.02 M_⊙ mass and takes the magnetic field decay mass-scale ΔM_ decay=0.02 M_⊙ (see the Section 4.2 of <cit.>), the BNS-mergers rate with both NSs' magnetic field being higher than 10^12 Gs is reduced to 3.5×10^5 yr^-1 (∼ 9 % of the total BNS-mergers) in redshift z<1. However, the BNS-mergers rate with one NS's final magnetic field being higher than 10^12 Gs is not affected. Some studies () found that the decay timescale of the magnetic field can be up to 500-1000 Myr, which means that 77-83 % of the total BNS-mergers will merge with their initial magnetic fields within z=1 in our calculation. Such a high rate of BNS-mergers with strong magnetic fields may be challenged by the lack of direct detection of the per-merger electromagnetic counterparts. The pre-merger electromagnetic counterparts (e.g., X-ray/radio emission) of the BNS-mergers could be detected as a precursor of short gamma-ray bursts (). <cit.> revealed that around 8-10 % of the short gamma-ray bursts are accompanied with a precursor. <cit.> performed a stringent search (≳ 4.5σ of the signal significance) on for the precursor of short gamma-ray bursts and found the rate of ∼3 % for bursts with a precursor. The periodic radio emission as a precursor of BNS mergers is still undetected yet. The event rate of BNS-merger-originated FRBs, i.e., BNS-mergers with both NSs' magnetic field being higher than 10^12 Gs, is ∼8×10^4 yr^-1 (19 % of the total BNS-mergers) in redshift z<1. 
Our estimation about the BNS-mergers shows that nearly 19 % of the BNSs could merge with high magnetic fields (≥ 10^12 Gs) for both the primary NS and the companion NS, which implies that nearly one of five detected gravitational-wave events from BNS-mergers could produce the one-off radio signal (like FRB 20191221A and FRB 20210213A) if the beaming angle of the radio emission is not considered. This work is supported by the National Natural Science Foundation of China (grant Nos. 12273005, 11673006, U1938116, U1938201, U1731239, and U1938106), the Guangxi Science Foundation (grant Nos. 2018GXNSFFA281010, 2017AD22006, 2018GXNSFGA281007, and 2018GXNSFDA281033), and China Manned Spaced Project (CMS-CSST-2021-B11). natexlab#1#1 [Abbott et al.(2017a)Abbott, Abbott, Abbott, Acernese, Ackley, Adams, Adams, Addesso, Adhikari, Adya, Affeldt, Afrough, Agarwal, Agathos, Agatsuma, Aggarwal, Aguiar, & Aiello]Abbott-2017ApJ Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2017a, , 848, L13 [Abbott et al.(2017b)Abbott, Abbott, Abbott, Acernese, Ackley, Adams, Adams, Addesso, Adhikari, LIGO Scientific Collaboration, & Virgo Collaboration]Abbott-2017PhRvL —. 2017b, , 119, 161101 [Abbott et al.(2017c)Abbott, Abbott, Abbott, Acernese, Ackley, Adams, Adams, Addesso, Adhikari, Adya, Affeldt, Afrough, Agarwal, Agathos, Agatsuma, Aggarwal, Aguiar, Aiello, Ain, Ajith, Allen, & South Africa/MeerKAT]Abbott-2017ApJ...848L..12A —. 2017c, , 848, L12 [Abbott et al.(2019)Abbott, Abbott, Abbott, Abraham, Acernese, Ackley, Adams, LIGO Scientific Collaboration, & Virgo Collaboration]2019PhRvX...9c1040A —. 2019, Physical Review X, 9, 031040 [Abbott et al.(2020a)Abbott, Abbott, Abbott, Abraham, Acernese, Ackley, Adams, Adhikari, Adya, Affeldt, Agathos, Agatsuma, Aggarwal, Aguiar, Aiello, Ain, Ajith, Allen, Allocca, Aloy, Altin, Amato, & Anand]Abbott-2020ApJ —. 2020a, , 892, L3 [Abbott et al.(2020b)Abbott, Abbott, Abbott, Abraham, Acernese, Ackley, Adams, Adya, Affeldt, Kagra Collaboration, & VIRGO Collaboration]2020LRR....23....3A —. 2020b, Living Reviews in Relativity, 23, 3 [Ablimit(2022)]2022MNRAS.509.6061A Ablimit, I. 2022, , 509, 6061 [Alexander et al.(2017)Alexander, Berger, Fong, Williams, Guidorzi, Margutti, Metzger, Annis, Blanchard, Brout, Brown, Chen, Chornock, Cowperthwaite, Drout, Eftekhari, Frieman, Holz, Nicholl, Rest, Sako, Soares-Santos, & Villar]2017ApJ...848L..21A Alexander, K. D., Berger, E., Fong, W., et al. 2017, , 848, L21 [Belczynski et al.(2002)Belczynski, Kalogera, & Bulik]2002ApJ...572..407B Belczynski, K., Kalogera, V., & Bulik, T. 2002, , 572, 407 [Belczynski et al.(2008)Belczynski, Kalogera, Rasio, Taam, Zezas, Bulik, Maccarone, & Ivanova]2008ApJS..174..223B —. 2008, , 174, 223 [Belczynski et al.(2016)Belczynski, Repetto, Holz, O'Shaughnessy, Bulik, Berti, Fryer, & Dominik]2016ApJ...819..108B —. 2016, , 819, 108 [Belczynski et al.(2020)Belczynski, Klencki, Fields, Olejak, Berti, Meynet, Fryer, Holz, O'Shaughnessy, Brown, Bulik, Leung, Nomoto, Madau, Hirschi, Kaiser, Jones, Mondal, Chruslinska, & Drozda]2020A A...636A.104B —. 2020, , 636, A104 [Belloni & Schreiber(2023)]2023arXiv230308997B Belloni, D., & Schreiber, M. R. 2023, arXiv e-prints, arXiv:2303.08997 [Beniamini et al.(2019)Beniamini, Hotokezaka, van der Horst, & Kouveliotou]2019MNRAS.487.1426B Beniamini, P., Hotokezaka, K., van der Horst, A., & Kouveliotou, C. 2019, , 487, 1426 [Beniamini & Kumar(2022)]2022arXiv221107669B Beniamini, P., & Kumar, P. 
2022, arXiv e-prints, arXiv:2211.07669 [Bisnovatyi-Kogan & Komberg(1974)]1974SvA....18..217B Bisnovatyi-Kogan, G. S., & Komberg, B. V. 1974, , 18, 217 [Chattopadhyay et al.(2020)Chattopadhyay, Stevenson, Hurley, Rossi, & Flynn]2020MNRAS.494.1587C Chattopadhyay, D., Stevenson, S., Hurley, J. R., Rossi, L. J., & Flynn, C. 2020, , 494, 1587 [Chattopadhyay et al.(2021)Chattopadhyay, Stevenson, Hurley, Bailes, & Broekgaarden]2021MNRAS.504.3682C Chattopadhyay, D., Stevenson, S., Hurley, J. R., Bailes, M., & Broekgaarden, F. 2021, , 504, 3682 [Cherkis & Lyutikov(2021)]Cherkis-2021ApJ Cherkis, S. A., & Lyutikov, M. 2021, , 923, 13 [CHIME/FRB Collaboration et al.(2018)CHIME/FRB Collaboration, Amiri, Bandura, Berger, Bhardwaj, & Boyce]2018ApJ...863...48C CHIME/FRB Collaboration, Amiri, M., Bandura, K., et al. 2018, , 863, 48 [CHIME/FRB Collaboration et al.(2021)CHIME/FRB Collaboration, Amiri, Andersen, Bandura, Berger, Bhardwaj, Boyce, Boyle, Brar, Breitman, Cassanelli, Chawla, Chen, Cliche, Cook, Cubranic, Curtin, Deng, Dobbs, Dong, Eadie, Fandino, Fonseca, Gaensler, Giri, Good, Halpern, Hill, Hinshaw, Josephy, Kaczmarek, Kader, Kania, Kaspi, Landecker, Lang, Leung, Li, Lin, Masui, McKinven, Mena-Parra, Merryfield, Meyers, Michilli, Milutinovic, Mirhosseini, Münchmeyer, Naidu, Newburgh, Ng, Patel, Pen, Petroff, Pinsonneault-Marotte, Pleunis, Rafiei-Ravandi, Rahman, Ransom, Renard, Sanghavi, Scholz, Shaw, Shin, Siegel, Sikora, Singh, Smith, Stairs, Tan, Tendulkar, Vanderlinde, Wang, Wulf, & Zwaniga]2021ApJS..257...59C CHIME/FRB Collaboration, Amiri, M., Andersen, B. C., et al. 2021, , 257, 59 [CHIME/FRB Collaboration et al.(2022)CHIME/FRB Collaboration, Bandura, Bhardwaj, Boyle, Brar, & Breitman]2022Natur.607..256C CHIME/FRB Collaboration, Andersen, B. C., Bandura, K., Bhardwaj, M., et al. 2022, , 607, 256 [Choudhuri & Konar(2002)]2002MNRAS.332..933C Choudhuri, A. R., & Konar, S. 2002, , 332, 933 [Chrimes et al.(2022)Chrimes, Levan, Fruchter, Groot, Jonker, Kouveliotou, Lyman, Stanway, Tanvir, & Wiersema]2022MNRAS.513.3550C Chrimes, A. A., Levan, A. J., Fruchter, A. S., et al. 2022, , 513, 3550 [Cieślar et al.(2020)Cieślar, Bulik, & Osłowski]2020MNRAS.492.4043C Cieślar, M., Bulik, T., & Osłowski, S. 2020, , 492, 4043 [Cooper et al.(2023)Cooper, Gupta, Wadiasingh, Wijers, Boersma, Andreoni, Rowlinson, & Gourdji]2023MNRAS.519.3923C Cooper, A. J., Gupta, O., Wadiasingh, Z., et al. 2023, , 519, 3923 [Coulter et al.(2017)Coulter, Foley, Kilpatrick, Drout, Piro, Shappee, Siebert, Simon, Ulloa, Kasen, Madore, Murguia-Berthier, Pan, Prochaska, Ramirez-Ruiz, Rest, & Rojas-Bravo]Coulter-2017Sci Coulter, D. A., Foley, R. J., Kilpatrick, C. D., et al. 2017, Science, 358, 1556 [Cui et al.(2022)Cui, Zhang, Li, Zhang, Peng, Zhu, Strom, Wang, Wang, Wu, Wang, & Yang]2022Ap SS.367...66C Cui, X.-H., Zhang, C.-M., Li, D., et al. 2022, , 367, 66 [Cumming et al.(2001)Cumming, Zweibel, & Bildsten]2001ApJ...557..958C Cumming, A., Zweibel, E., & Bildsten, L. 2001, , 557, 958 [Daugherty & Harding(1982)]1982ApJ...252..337D Daugherty, J. K., & Harding, A. K. 1982, , 252, 337 [Dominik et al.(2012)Dominik, Belczynski, Fryer, Holz, Berti, Bulik, Mandel, & O'Shaughnessy]2012ApJ...759...52D Dominik, M., Belczynski, K., Fryer, C., et al. 2012, , 759, 52 [Duncan & Thompson(1992)]Duncan-1992ApJ Duncan, R. C., & Thompson, C. 1992, , 392, L9 [Enoto et al.(2019)Enoto, Kisaka, & Shibata]2019RPPh...82j6901E Enoto, T., Kisaka, S., & Shibata, S. 
2019, Reports on Progress in Physics, 82, 106901 [Falanga et al.(2015)Falanga, Bozzo, Lutovinov, Bonnet-Bidaud, Fetisova, & Puls]2015A A...577A.130F Falanga, M., Bozzo, E., Lutovinov, A., et al. 2015, , 577, A130 [Faucher-Giguere & Kaspi(2006)]faucher2006birth Faucher-Giguere, C.-A., & Kaspi, V. M. 2006, The Astrophysical Journal, 643, 332 [Gaensler et al.(2005)Gaensler, McClure-Griffiths, Oey, Haverkorn, Dickey, & Green]2005ApJ...620L..95G Gaensler, B. M., McClure-Griffiths, N. M., Oey, M. S., et al. 2005, , 620, L95 [Ghirlanda et al.(2019)Ghirlanda, Salafia, Paragi, Giroletti, Yang, & Marcote]2019Sci...363..968G Ghirlanda, G., Salafia, O. S., Paragi, Z., et al. 2019, Science, 363, 968 [Goldreich & Lynden-Bell(1969)]1969ApJ...156...59G Goldreich, P., & Lynden-Bell, D. 1969, , 156, 59 [Gonthier et al.(2002)Gonthier, Ouellette, Berrier, O'Brien, & Harding]2002ApJ...565..482G Gonthier, P. L., Ouellette, M. S., Berrier, J., O'Brien, S., & Harding, A. K. 2002, , 565, 482 [Gonthier et al.(2004)Gonthier, Van Guilder, & Harding]2004ApJ...604..775G Gonthier, P. L., Van Guilder, R., & Harding, A. K. 2004, , 604, 775 [Hallinan et al.(2017)Hallinan, Corsi, Mooley, Hotokezaka, Nakar, Kasliwal, Kaplan, Frail, Myers, Murphy, De, Dobie, Allison, Bannister, Bhalerao, Chandra, Clarke, Giacintucci, Ho, Horesh, Kassim, Kulkarni, Lenc, Lockman, Lynch, Nichols, Nissanke, Palliyaguru, Peters, Piran, Rana, Sadler, & Singer]2017Sci...358.1579H Hallinan, G., Corsi, A., Mooley, K. P., et al. 2017, Science, 358, 1579 [Hankins et al.(2003)Hankins, Kern, Weatherall, & Eilek]2003Natur.422..141H Hankins, T. H., Kern, J. S., Weatherall, J. C., & Eilek, J. A. 2003, , 422, 141 [Hansen & Lyutikov(2001)]Hansen-2001MNRAS Hansen, B. M. S., & Lyutikov, M. 2001, , 322, 695 [Hessels et al.(2006)Hessels, Ransom, Stairs, Freire, Kaspi, & Camilo]2006Sci...311.1901H Hessels, J. W. T., Ransom, S. M., Stairs, I. H., et al. 2006, Science, 311, 1901 [Hotokezaka et al.(2018)Hotokezaka, Beniamini, & Piran]Hotokezaka-2018 Hotokezaka, K., Beniamini, P., & Piran, T. 2018, International Journal of Modern Physics D, 27, 1842005 [Igoshev et al.(2022)Igoshev, Frantsuzova, Gourgouliatos, Tsichli, Konstantinou, & Popov]2022MNRAS.514.4606I Igoshev, A. P., Frantsuzova, A., Gourgouliatos, K. N., et al. 2022, , 514, 4606 [Igoshev et al.(2021)Igoshev, Popov, & Hollerbach]2021Univ....7..351I Igoshev, A. P., Popov, S. B., & Hollerbach, R. 2021, Universe, 7, 351 [Ioka & Taniguchi(2000)]2000ApJ...537..327I Ioka, K., & Taniguchi, K. 2000, , 537, 327 [Ivanova et al.(2013)Ivanova, Justham, Chen, De Marco, Fryer, Gaburov, Ge, Glebbeek, Han, Li, Lu, Marsh, Podsiadlowski, Potter, Soker, Taam, Tauris, van den Heuvel, & Webbink]2013A ARv..21...59I Ivanova, N., Justham, S., Chen, X., et al. 2013, , 21, 59 [Jahan Miri & Bhattacharya(1994)]1994MNRAS.269..455J Jahan Miri, M., & Bhattacharya, D. 1994, , 269, 455 [Jawor & Tauris(2022)]2022MNRAS.509..634J Jawor, J. A., & Tauris, T. M. 2022, , 509, 634 [Kaspi & Beloborodov(2017)]Magnetars-2017ARA A Kaspi, V. M., & Beloborodov, A. M. 2017, , 55, 261 [Konar & Bhattacharya(1997)]1997MNRAS.284..311K Konar, S., & Bhattacharya, D. 1997, , 284, 311 [Konar & Bhattacharya(1999)]1999MNRAS.308..795K —. 1999, , 308, 795 [Konar & Choudhuri(2004)]2004MNRAS.348..661K Konar, S., & Choudhuri, A. R. 2004, , 348, 661 [Lai(2012)]Lai-Dong-2012 Lai, D. 2012, , 757, L3 [Lattimer & Prakash(2004)]2004Sci...304..536L Lattimer, J. M., & Prakash, M. 
2004, Science, 304, 536 [Levan et al.(2006)Levan, Wynn, Chapman, Davies, King, Priddey, & Tanvir]2006MNRAS.368L...1L Levan, A. J., Wynn, G. A., Chapman, R., et al. 2006, , 368, L1 [Lovelace et al.(2005)Lovelace, Romanova, & Bisnovatyi-Kogan]2005ApJ...625..957L Lovelace, R. V. E., Romanova, M. M., & Bisnovatyi-Kogan, G. S. 2005, , 625, 957 [Luo et al.(2020)Luo, Men, Lee, Wang, Lorimer, & Zhang]2020MNRAS.494..665L Luo, R., Men, Y., Lee, K., et al. 2020, , 494, 665 [Lyubarsky(2019)]2019MNRAS.483.1731L Lyubarsky, Y. 2019, , 483, 1731 [Lyubarsky(2020)]Lyubarsky-2020ApJ —. 2020, , 897, 1 [Lyutikov(2019)]Lyutikov-2019MNRAS Lyutikov, M. 2019, , 483, 2766 [Lyutikov(2022)]2022arXiv221114433L —. 2022, arXiv e-prints, arXiv:2211.14433 [Macquart et al.(2020)Macquart, Prochaska, McQuinn, Bannister, Bhandari, Day, Deller, Ekers, James, Marnoch, Osłowski, Phillips, Ryder, Scott, Shannon, & Tejos]2020Natur.581..391M Macquart, J. P., Prochaska, J. X., McQuinn, M., et al. 2020, , 581, 391 [Mahlmann et al.(2022)Mahlmann, Philippov, Levinson, Spitkovsky, & Hakobyan]Mahlmann-2022ApJL Mahlmann, J. F., Philippov, A. A., Levinson, A., Spitkovsky, A., & Hakobyan, H. 2022, , 932, L20 [Makarenko et al.(2021)Makarenko, Igoshev, & Kholtygin]2021MNRAS.504.5813M Makarenko, E. I., Igoshev, A. P., & Kholtygin, A. F. 2021, , 504, 5813 [Manchester et al.(2005)Manchester, Hobbs, Teoh, & Hobbs]ATNF-2005AJ Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, , 129, 1993 [Mandel & Broekgaarden(2022)]2022LRR....25....1M Mandel, I., & Broekgaarden, F. S. 2022, Living Reviews in Relativity, 25, 1 [Margutti & Chornock(2021)]2021ARA A..59..155M Margutti, R., & Chornock, R. 2021, , 59, 155 [Mooley et al.(2018)Mooley, Nakar, Hotokezaka, Hallinan, Corsi, Frail, Horesh, Murphy, Lenc, Kaplan, de, Dobie, Chandra, Deller, Gottlieb, Kasliwal, Kulkarni, Myers, Nissanke, Piran, Lynch, Bhalerao, Bourke, Bannister, & Singer]2018Natur.554..207M Mooley, K. P., Nakar, E., Hotokezaka, K., et al. 2018, , 554, 207 [Most & Philippov(2020)]Most-2020ApJ Most, E. R., & Philippov, A. A. 2020, , 893, L6 [Most & Philippov(2022a)]2022MNRAS.515.2710M —. 2022a, , 515, 2710 [Most & Philippov(2022b)]Most-2022arXiv —. 2022, arXiv e-prints, arXiv:2207.14435 [Muno et al.(2006)Muno, Clark, Crowther, Dougherty, de Grijs, Law, McMillan, Morris, Negueruela, Pooley, Portegies Zwart, & Yusef-Zadeh]2006ApJ...636L..41M Muno, M. P., Clark, J. S., Crowther, P. A., et al. 2006, , 636, L41 [Nakano et al.(2015)Nakano, Murakami, Makishima, Hiraga, Uchiyama, Kaneda, & Enoto]2015PASJ...67....9N Nakano, T., Murakami, H., Makishima, K., et al. 2015, , 67, 9 [Neill et al.(2022)Neill, Tsang, van Eerten, Ryan, & Newton]2022MNRAS.514.5385N Neill, D., Tsang, D., van Eerten, H., Ryan, G., & Newton, W. G. 2022, , 514, 5385 [Olausen & Kaspi(2014)]2014ApJS..212....6O Olausen, S. A., & Kaspi, V. M. 2014, , 212, 6 [Osłowski et al.(2011)Osłowski, Bulik, Gondek-Rosińska, & Belczyński]2011MNRAS.413..461O Osłowski, S., Bulik, T., Gondek-Rosińska, D., & Belczyński, K. 2011, , 413, 461 [Palenzuela et al.(2013a)Palenzuela, Lehner, Liebling, Ponce, Anderson, Neilsen, & Motl]Palenzuela-2013PhRvD Palenzuela, C., Lehner, L., Liebling, S. L., et al. 2013a, , 88, 043011 [Palenzuela et al.(2013b)Palenzuela, Lehner, Ponce, Liebling, Anderson, Neilsen, & Motl]Palenzuela-2013PhRvL Palenzuela, C., Lehner, L., Ponce, M., et al. 2013b, , 111, 061105 [Pan et al.(2022)Pan, Yang, & Yagi]2022arXiv220808808P Pan, Z., Yang, H., & Yagi, K. 
2022, arXiv e-prints, arXiv:2208.08808 [Parfrey et al.(2013)Parfrey, Beloborodov, & Hui]2013ApJ...774...92P Parfrey, K., Beloborodov, A. M., & Hui, L. 2013, , 774, 92 [Pastor-Marazuela et al.(2022)Pastor-Marazuela, van Leeuwen, Bilous, Connor, Maan, Oostrum, Petroff, & Straal]2022arXiv220208002P Pastor-Marazuela, I., van Leeuwen, J., Bilous, A., et al. 2022, arXiv e-prints, arXiv:2202.08002 [Patricelli et al.(2022)Patricelli, Bernardini, Mapelli, D'Avanzo, Santoliquido, Cella, Razzano, & Cuoco]2022MNRAS.513.4159P Patricelli, B., Bernardini, M. G., Mapelli, M., et al. 2022, , 513, 4159 [Peters(1964)]1964PhRv..136.1224P Peters, P. C. 1964, Physical Review, 136, 1224 [Philippov et al.(2019)Philippov, Uzdensky, Spitkovsky, & Cerutti]2019ApJ...876L...6P Philippov, A., Uzdensky, D. A., Spitkovsky, A., & Cerutti, B. 2019, , 876, L6 [Piro(2012)]2012ApJ...755...80P Piro, A. L. 2012, , 755, 80 [Ponce et al.(2014)Ponce, Palenzuela, Lehner, & Liebling]Palenzuela-2014PhRvD Ponce, M., Palenzuela, C., Lehner, L., & Liebling, S. L. 2014, , 90, 044007 [Roepke & De Marco(2022)]2022arXiv221207308R Roepke, F. K., & De Marco, O. 2022, arXiv e-prints, arXiv:2212.07308 [Radice et al.(2020)Radice, Bernuzzi, & Perego]2020ARNPS..70...95R Radice, D., Bernuzzi, S., & Perego, A. 2020, Annual Review of Nuclear and Particle Science, 70, 95 [Ravi et al.(2019)Ravi, Catha, D'Addario, Djorgovski, Hallinan, Hobbs, Kocz, Kulkarni, Shi, Vedantham, Weinreb, & Woody]2019Natur.572..352R Ravi, V., Catha, M., D'Addario, L., et al. 2019, , 572, 352 [Ruderman & Sutherland(1975)]1975ApJ...196...51R Ruderman, M. A., & Sutherland, P. G. 1975, , 196, 51 [Ryder et al.(2022)Ryder, Bannister, Bhandari, Deller, Ekers, Glowacki, Gordon, Gourdji, James, Kilpatrick, Lu, Marnoch, Moss, Prochaska, Qiu, Sadler, Simha, Sammons, Scott, Tejos, & Shannon]2022arXiv221004680R Ryder, S. D., Bannister, K. W., Bhandari, S., et al. 2022, arXiv e-prints, arXiv:2210.04680 [Savchenko et al.(2017)Savchenko, Ferrigno, Kuulkers, Bazzano, Bozzo, Brandt, Chenevez, Courvoisier, Diehl, Domingo, Hanlon, Jourdain, von Kienlin, Laurent, Lebrun, Lutovinov, Martin-Carrillo, Mereghetti, Natalucci, Rodi, Roques, Sunyaev, & Ubertini]Savchenko-2017ApJ Savchenko, V., Ferrigno, C., Kuulkers, E., et al. 2017, , 848, L15 [Suvorov et al.(2022)Suvorov, Kuan, & Kokkotas]2022A A...664A.177S Suvorov, A. G., Kuan, H. J., & Kokkotas, K. D. 2022, , 664, A177 [Szary et al.(2014)Szary, Zhang, Melikidze, Gil, & Xu]szary2014radio Szary, A., Zhang, B., Melikidze, G. I., Gil, J., & Xu, R.-X. 2014, The Astrophysical Journal, 784, 59 [Taam & van den Heuvel(1986)]1986ApJ...305..235T Taam, R. E., & van den Heuvel, E. P. J. 1986, , 305, 235 [Tauris et al.(2015)Tauris, Langer, & Podsiadlowski]2015MNRAS.451.2123T Tauris, T. M., Langer, N., & Podsiadlowski, P. 2015, , 451, 2123 [Tauris et al.(2017)Tauris, Kramer, Freire, Wex, Janka, Langer, Podsiadlowski, Bozzo, Chaty, Kruckow, van den Heuvel, Antoniadis, Breton, & Champion]2017ApJ...846..170T Tauris, T. M., Kramer, M., Freire, P. C. C., et al. 2017, , 846, 170 [The LIGO Scientific Collaboration et al.(2021)The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, Abbott, Abbott, Acernese, Ackley, & Adams]2021arXiv211103634T The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, et al. 2021, arXiv e-prints, arXiv:2111.03634 [Totani(2013)]2013PASJ...65L..12T Totani, T. 2013, , 65, L12 [Troja et al.(2010)Troja, Rosswog, & Gehrels]2010ApJ...723.1711T Troja, E., Rosswog, S., & Gehrels, N. 
2010, , 723, 1711 [Tsang et al.(2012)Tsang, Read, Hinderer, Piro, & Bondarescu]2012PhRvL.108a1102T Tsang, D., Read, J. S., Hinderer, T., Piro, A. L., & Bondarescu, R. 2012, , 108, 011102 [Urpin et al.(1997)Urpin, Konenkov, & Urpin]1997MNRAS.292..167U Urpin, V., Konenkov, D., & Urpin, V. 1997, , 292, 167 [Vigna-Gómez et al.(2018)Vigna-Gómez, Neijssel, Stevenson, Barrett, Belczynski, Justham, de Mink, Müller, Podsiadlowski, Renzo, Szécsi, & Mandel]2018MNRAS.481.4009V Vigna-Gómez, A., Neijssel, C. J., Stevenson, S., et al. 2018, , 481, 4009 [Wang et al.(2022)Wang, Li, Dai, & Wu]2022arXiv221009930W Wang, J.-S., Li, X., Dai, Z., & Wu, X. 2022, arXiv e-prints, arXiv:2210.09930 [Wang et al.(2018)Wang, Peng, Wu, & Dai]Wang-2018ApJ Wang, J.-S., Peng, F.-K., Wu, K., & Dai, Z.-G. 2018, , 868, 19 [Wang et al.(2020)Wang, Peng, Zou, Zhang, & Zhang]2020ApJ...902L..42W Wang, J.-S., Peng, Z.-K., Zou, J.-H., Zhang, B.-B., & Zhang, B. 2020, , 902, L42 [Wang et al.(2016)Wang, Yang, Wu, Dai, & Wang]WangJS-2016ApJ Wang, J.-S., Yang, Y.-P., Wu, X.-F., Dai, Z.-G., & Wang, F.-Y. 2016, , 822, L7 [Webbink(1984)]1984ApJ...277..355W Webbink, R. F. 1984, , 277, 355 [White et al.(2022)White, Burrows, Coleman, & Vartanyan]2022ApJ...926..111W White, C. J., Burrows, A., Coleman, M. S. B., & Vartanyan, D. 2022, , 926, 111 [Xiao et al.(2022)Xiao, Zhang, Zhu, Xiong, Gao, Xu, Zhang, Peng, Li, Zhang, Lu, Lin, Liu, Zhang, Ge, Tuo, Xue, Fu, Liu, Li, Wang, Zheng, Wang, Jiang, Li, Liu, Cao, & Cai]2022arXiv220502186X Xiao, S., Zhang, Y.-Q., Zhu, Z.-P., et al. 2022, arXiv e-prints, arXiv:2205.02186 [Yao et al.(2017)Yao, Manchester, & Wang]2017ApJ...835...29Y Yao, J. M., Manchester, R. N., & Wang, N. 2017, , 835, 29 [Zhang(2018)]2018ApJ...867L..21Z Zhang, B. 2018, , 867, L21 [Zhang & Kojima(2006)]2006MNRAS.366..137Z Zhang, C. M., & Kojima, Y. 2006, , 366, 137 [Zhang et al.(2022)Zhang, Yi, Zhang, Xiong, & Xiao]2022ApJ...939L..25Z Zhang, Z., Yi, S.-X., Zhang, S.-N., Xiong, S.-L., & Xiao, S. 2022, , 939, L25
http://arxiv.org/abs/2307.01740v2
20230704141649
Synchronous Image-Label Diffusion Probability Model with Application to Stroke Lesion Segmentation on Non-contrast CT
[ "Jianhai Zhang", "Tonghua Wan", "Ethan MacDonald", "Bijoy Menon", "Aravind Ganesh", "Qiu Wu" ]
cs.CV
[ "cs.CV" ]
1. University of Calgary 2. Huazhong University of Science and Technology Synchronous Image-Label Diffusion Probability Model with Application to Stroke Lesion Segmentation on Non-contrast CT Jianhai Zhang^1 Tonghua Wan^2 Ethan MacDonald^1 Bijoy Menon^1 Aravind Ganesh^1 Qiu Wu^2 ===================================================================================================================== Stroke lesion volume is a key radiologic measurement for assessing the prognosis of Acute Ischemic Stroke (AIS) patients, which is challenging to be automatically measured on Non-Contrast CT (NCCT) scans. Recent diffusion probabilistic models have shown potentials of being used for image segmentation. In this paper, a novel Synchronous image-label Diffusion Probability Model (SDPM) is proposed for stroke lesion segmentation on NCCT using Markov diffusion process. The proposed SDPM is fully based on a Latent Variable Model (LVM), offering a complete probabilistic elaboration. An additional net-stream, parallel with a noise prediction stream, is introduced to obtain initial noisy label estimates for efficiently inferring the final labels. By optimizing the specified variational boundaries, the trained model can infer multiple label estimates for reference given the input images with noises. The proposed model was assessed on three stroke lesion datasets including one public and two private datasets. Compared to several U-net and transformer based segmentation methods, our proposed SDPM model is able to achieve state-of-the-art performance. The code is publicly available. § INTRODUCTION Stroke lesion volume is a key radiologic measurement in assessing prognosis of Acute Ischemic Stroke (AIS) patients <cit.>. Early assessment of patient outcome is beneficial to inform patients about future perspectives as soon as possible, and to enable treating physician to adapt and personalize the treatments and rehabilitation plans. Ischemic stroke lesions, such as hemorrhagic and ischemic infarct, are typically measured on post treatment Non-Contrast CT (NCCT) scans. Manual contouring of stroke lesions are still clinically deemed as gold standard for volume measurement even though it is time consuming and observer dependent <cit.>. Regardless of many attempts to automate segmentation for stroke lesions <cit.>, there are still no methods well-established for NCCT, as cerebral CT is limited due to low signal to noise ratio, low contrast of soft tissues, partial volume effects, and acquisition variability across different scanners <cit.>. This study aims to develop a Diffusion Probabilistic Model (DPM) based approach <cit.> to accurately segmenting ischemic or hemorrhagic lesion on NCCT. Recently, the methods <cit.> based on DPM have received increasing attention for medical image segmentation. The attention is warranted because of the powerful denoising mechanism against even high degrees of noise contamination. In essence, a progressive denoising process <cit.> for image generation (or potential image labels) is foundationally different from the previous techniques outputting results immediately. Moreover, the introduction of denoising for training indeed improves the robustness of the prediction because of the massive observations with different degrees of noises. Thus, the stability of segmentation performance is able to be guaranteed. 
Nevertheless, the current diffusion models for medical image segmentation <cit.> are simply treating the image as the conditional input fed to the models without probabilistic interpretability. In this paper, based on a fully generative Latent Variable Model (LVM), a Synchronous image-label Diffusion Probability Model (SDPM) is proposed for efficiently inferring segmentation labels. To this end, we developed a specified variant of variational inference method and a set of related strategies, including synchronous image-label diffusion process illustrated in Fig.1 and a two-stream network of predicting initial noisy label where the inference process starts, to fit our segmentation task, efficiently restoring the segmentation labels from noisy initials. Accordingly, the label inference methods are implemented in four different ways with their own strength, making SDPM more flexible and applicable. Our contributions briefly include: 1) SDPM is proposed using a synchronous Markov diffusion process for medical image segmentation task. SDPM is based on the fully generative latent variable model with derivational interpretability. 2) A specified variational inference and the involved strategies are proposed for training SDPM. Four inference algorithms in different ways are proposed to efficiently obtain final labels. § METHODOLOGY §.§ Revisit diffusion model As a LVM, diffusion model<cit.> has been successfully applied to the field of image generation with the form p_θ(x_0) ≜∫ p_θ(x_0:T)dx_1:T, where x_1:T are the latent variables, x_0 follows an approximately sampled distribution q(x_0). The joint distribution p_θ(x_0:T) is called the reverse process <cit.> modeled by a first-order Markov chain starting at x_T∼𝒩(0,𝐈). The diffusion process q(x_1:T|x_0) is also a Markov chain, which adds standard normal noise with a variance schedule β_t. All the mathematical notations are as same as in the paper <cit.>: p_θ(x_0:T)≜ p(x_T)∏_t=1^Tp_θ(x_t-1|x_t), q(x_1:T|x_0)≜∏_t=1^Tq(x_t|x_t-1) Minimizing the Kullback-Leibler (KL) Divergence between q(x_1:T|x_0) and p_θ( x_0:T) will optimize the variational boundary on the negative log-likelihood. To efficiently implement this minimization, two properties are utilized: 1) the observation x_t is sampled at any time in a closed form: q(x_t|x_0)≜𝒩(x_t|√(α̅_t)x_0,γ_t𝐈),α̅_t=∏_k=1^tα_k, α_t=1-β_t, γ_t=1-α̅_t 2) a vicarious posterior q(x_t-1|,x_t, x_0) with condition x_0 is used for an intractable posterior q(x_t-1|x_t) when training the model: q(x_t-1|,x_t, x_0)≜𝒩(x_t-1|μ̃_t(x_t,x_0),β̃_t𝐈) μ̃_t(x_t,x_0)=√(α̅_t-1)β_tγ_t^-1x_0+√(α_t)γ_t-1γ_t^-1x_tβ̃_t=β_tγ_t-1γ_t^-1 After simplifying the loss function, the optimization process is equivalent to predict the noise ε_t from the diffused image x_t using a neural network: ℒ=𝔼_x_t,ε_t∼𝒩(0,𝐈)[2σ^2_tα_tβ_t^-2γ_t‖ε_t-ε̂_t(x_t,t)‖^2] The convergent model is capable of inferring the unseen images from the random noise following the standard normal distribution. §.§ SDPM for Semantic Segmentation We extend the diffusion model to segmentation tasks with the form p_θ(y_0|x_0)=p_θ(y_0,x_0)/p_θ(x_0)∝ p_θ(y_0,x_0), where p_θ(y_0,x_0)≜∫ p_θ(x_0:T, y_0:T)dx_1:T,y_1:T, and y_1:T are new members of latent variables. 
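Before moving to the joint image-label model, the closed-form forward sample and the simplified noise-prediction objective recalled above can be sketched in a few lines of numpy. The sigmoid-shaped β_t schedule below is illustrative only; it is not the tuned schedule used in the experiments, and the random eps_hat in the usage line stands in for the output of a trained network.

```python
import numpy as np

T = 500
# Sigmoid-shaped variance schedule beta_t (illustrative values only)
s = 1.0 / (1.0 + np.exp(-np.linspace(-6.0, 6.0, T)))
betas = 1e-4 + (2e-2 - 1e-4) * s
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)        # \bar{alpha}_t = prod_k alpha_k
gamma = 1.0 - alpha_bar               # gamma_t

def q_sample(x0, t, eps):
    """Closed-form forward sample: x_t = sqrt(abar_t) x0 + sqrt(gamma_t) eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(gamma[t]) * eps

def noise_prediction_loss(eps, eps_hat):
    """Simplified training objective ||eps - eps_hat||^2 (time-dependent weights dropped)."""
    return float(np.mean((eps - eps_hat) ** 2))

# Usage: diffuse a toy image to step t = 300 and score a (here random) noise estimate
rng = np.random.default_rng(0)
x0 = rng.random((32, 32))
eps = rng.standard_normal((32, 32))
x_t = q_sample(x0, 300, eps)
print(noise_prediction_loss(eps, rng.standard_normal((32, 32))))
```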
In the new model, the joint distribution p_θ(x_0:T, y_0:T) is defined as the reverse process, and it can be factorized as an original DDPM <cit.> part p_θ(x_0:T) and a conditional reverse process p_θ( y_0:T|x_0:T), which is a series of terms by Markov chains starting at p_θ(y_T|x_T): p_θ( y_0:T|x_0:T)=p_θ(y_T|x_T)∏_t=1^T p_θ( y_t-1|y_t,x_t-1) The diffusion process is an approximate conditional posterior q(y_1:T|y_0,x_0:T): q_( y_1:T|y_0,x_0:T)=∏_t=1^T q( y_t|y_t-1,x_t) A KL Divergence is defined between the diffusion process q_( y_1:T|y_0,x_0:T) and the reverse process p_θ( y_0:T|x_0:T) to obtain the variational upper bound on the negative log likelihood because of the non-negative property of KL divergence <cit.>: 𝕂𝕃(q(y_1:T|y_0,x_0:T)‖ p_θ(y_1:T|y_0,x_0:T))-𝔼_q[p_θ(y_0|x_0)] ⩾ 0 The loss function is then defined as minimizing the KL divergence of the conditional posterior and the unnormalized distribution: ℒ ≜ 𝔼_q[p_θ(y_0|x_0)/p_θ(y_T|x_T)+∑_t=1^Tq(y_t|y_t-1,x_t)/p_θ(y_t-1| y_t, x_t-1)-p_θ(y_0|x_0)] = 𝔼_q[q(y_T|y_0)+∑_t=2^Tq(y_t-1|y_t,y_0,x_t)/p_θ(y_t-1| y_t, x_t-1)]-p_θ(y_0|y_1,x_0)p_θ(y_T|x_T) Eq.(<ref>) indicates the acquisition of y_0 could be the traditional way of outputting the label x_0↦ y_0, or a generative way of Markov chains by inference x_T↦ y_0. Namely, as long as an initial label estimate y_T exists, the trained diffusion model can infer back and get the label y_0. To this end, an additional sub-network for estimating the noisy label y_T from the image x_T is introduced into the network. Unfortunately, it is difficult to estimate a proper initial y_T for the subsequent inference, because image information is severely destroyed. It is less likely to predict y_T from the image x_T nearly following the distribution 𝒩(0,𝐈). Thus, -p_θ(y_T|x_T) is a very strong restriction for predicting the initial y_T. Using the final initial y_T could produce a poor result and degrade the segmentation performance notably. To obtain a good initial y_t for inferring y_0, we add a time window of length T_p to train the model at each time (the loss ℒ_p in Eq.(<ref>)), guaranteeing that the label y_0 could be efficiently restored. Introducing the prospect to predict an initial y_t, the acquisitions of y_t-1 do not solely depend on the immediate outputs of the network fed by the noisy image x_t-1 any longer. The term 𝔼_q[q(y_T|y_0)] is constant and thus can be omitted. Therefore, the loss function is further simplified as: ℒ=𝔼_q[∑_t=2^Tq(y_t-1|y_t,y_0,x_t)/p_θ(y_t-1| y_t)]_ℒ_d-∑_t=0^T_pp_θ(y_t|x_t)_ℒ_p-p_θ(y_0|y_1)_ℒ_d_0 For the loss ℒ_d, the term p_θ(y_t-1|y_t) in reverse process is still compared using KL divergence by the conditional posterior q(y_t-1|y_t,y_0,x_t) in the diffusion process introduced in <cit.>. To make the distribution q(y_t-1|y_t,y_0,x_t) tractable, a further assumption is made that images and labels are both overlain by the same Gasussian noise during the diffusion process, i.e., synchronous image-label diffusion process. Thus, the posterior q(y_t-1|y_t,y_0,x_t) is degenerated as q(y_t-1|y_0,ε_t), which only relies on the noise at time t and y_0: q(y_t-1|y_0,ε_t) = 𝒩(y_t-1|μ̃_t, β̃_t𝐈) μ̃_t = √(α̅_t-1)y_0+√(α_t)(γ_t-1γ_t^-1/2)ε_t, β̃_t =β_tγ_t-1γ_t^-1 Since the diffusion process for images and labels are shared with the same noise ε_t∼𝒩(0, 𝐈), the images and labels at time t can thus be sampled as: x_t = √(α̅_t)x_0+√(γ_t)ε_t,y_t = √(α̅_t)y_0+√(γ_t)ε_t The loss ℒ_d_0 is for optimizing the last step of generating y_0. 
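A minimal sketch of the synchronous diffusion step and of the degenerate posterior q(y_{t-1}|y_0, ε_t) written out above: image and label share a single noise draw, and the posterior mean and variance follow μ̃_t and β̃_t. The schedule arrays alphas and alpha_bar are passed in explicitly (they can be built as in the previous sketch); the indexing assumes t ≥ 1.

```python
import numpy as np

def q_sample_synchronous(x0, y0, t, eps, alpha_bar):
    """Diffuse image and label with the *same* noise draw:
       x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps, and y_t likewise."""
    scale = np.sqrt(alpha_bar[t])
    sigma = np.sqrt(1.0 - alpha_bar[t])           # sqrt(gamma_t)
    return scale * x0 + sigma * eps, scale * y0 + sigma * eps

def label_posterior(y0, eps_t, t, alphas, alpha_bar):
    """Mean and variance of q(y_{t-1} | y_0, eps_t), valid for t >= 1:
       mu = sqrt(abar_{t-1}) y0 + sqrt(alpha_t) * gamma_{t-1} / sqrt(gamma_t) * eps_t,
       var = beta_t * gamma_{t-1} / gamma_t."""
    gamma = 1.0 - alpha_bar
    mu = (np.sqrt(alpha_bar[t - 1]) * y0
          + np.sqrt(alphas[t]) * gamma[t - 1] / np.sqrt(gamma[t]) * eps_t)
    beta_t = 1.0 - alphas[t]
    var = beta_t * gamma[t - 1] / gamma[t]        # tilde{beta}_t
    return mu, var
```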
Some cumbersome discrete segmentation points might degrade the performance to some degree. To refrain from the influence from those points, in addition to applying the same strategy of stopping adding the uncertainty inference of variance β̃_1 in <cit.>, we also append a self-attention module with a convolutional layer of frozen parameters of all ones to get rid of the discrete points with a proper threshold, as illustrated in Fig.<ref>. §.§ Optimizing SDPM The choice for the transition term p_θ(y_t-1|y_t) in the reverse process is still a Gaussian distribution, i.e., 𝒩(μ_θ(y_t,t), σ_t^2𝐈), where θ is the trainable parameters and σ_t^2=β_t or σ_t^2=β̃_̃t̃ are suggested in <cit.>. The loss ℒ_d is reparameterized as: ℒ_d=𝔼_q[1/2σ_t^2‖μ̃_t(y_0, ε_t)-μ_θ(y_0, ε_t)‖^2]-2^-1Dβ̃+Dσ_t_C where D is image dimensionality. The loss function reveals that as long as the model μ_θ is able to predict μ̃_t given the label y_0 with the shared ε_t in the posterior, the final label y_0 is able to be inferred by Markov process. To further reduce the uncertainty of inferring y_0 through stochastic samplings of q(y_t-1|y_0,ε_t), different from the original DDPM <cit.>, SDPM adds another loss function to restrict the difference between the true and predicted final labels y_0: ℒ_d=𝔼_y_0,ε_t∼𝒩(0,𝐈)[α̅_t-1/2σ^2_t‖ y_0-ŷ_0(ŷ_t,ε̂_t,t)‖^2_ℒ_d_1+α_tγ_t-1^2/2σ^2_tγ_t‖ε_t-ε̂_t(y_t,t)‖^2_ℒ_d_2]+ξ where ŷ_0=ŷ_t/√(α̅_t)-√(γ_t/α̅_t)ε̂_t and ξ is a binomial residue term. As the losses ℒ_d_1 and ℒ_d_2 are getting optimized, the ξ is also getting optimized and thus has been ignored for now. Notice that it is impossible to predict ε_t from the label y_t, because there is no image information from y_t at all. Fortunately, from <cit.>, the network predicting the noise at each diffusion process also carries the label information. Thus, the model can utilize x_t to predict the shared noise ε̂_t(x_t,t). For the loss ℒ_p, since the network can predict stochastic noise, the noisy labels y_t can thus be predicted with the supervision of the noised label: ℒ_p=𝔼_x_t∼ q(x_t|x_0)[‖ y_t-ŷ_t(x_t,t)‖^2] To further improve the segmentation performance, a classic dice loss can also be applied to this composite loss function with the treatment of adding an activation function of sigmoid directly at the last layer of outputting y_0. The supervised loss functions are illustrated in Fig.<ref>. §.§ Inferring the Labels by SDPM The final label y_0 can be inferred in four ways. The first is the fast and easy way by directly outputting: ŷ_0^𝐚𝐯𝐠=ŷ_0, where ŷ_0 is the output of the trained network given the clean image x_0. The second is based on the salient weight ψ_t over the time window T_i: ŷ_0^𝐬𝐚𝐥=1/N∑_n=1^N(∑_t=0^T_i<Tŷ_0ψ_t), where ψ_t=1-(t/T_i)^ν, ν>1 are gradually degraded coefficients. The third is based on the Markov chain inference starting at time T_i: ŷ_0^𝐢𝐧𝐟𝐞𝐫=1/N∑_n=1^N(𝐈𝐋(d_i, T_i)), where d_i∈ℝ^(0,1), T_i∈{0,⋯,T} and 𝐈𝐋(·) is the Algorithm 1. The last is the union of all the results: ŷ_0^𝐚𝐥𝐥=ŷ_0^𝐚𝐯𝐠∪ŷ_0^𝐬𝐚𝐥∪ŷ_0^𝐢𝐧𝐟𝐞𝐫. Note that the second and third inferences are performed N times because of the randomness of noise. The average value of N time is used as the final result. § EXPERIMENTS AND RESULTS Datasets and Pre-cessing: Two datasets were involved: 1) A private dataset, named Infarct, containing 195 AIS patient NCCT scans (5 mm) were included. Of 195 patients, 123 images were used for training while the remained 72 for testing. 
2) Another private dataset comprising of 331 patients with acute intracranial hemorrhage confirmed by NCCT (2.5mm), called Hemorrhage, was also included patients with acute ICH confirmed by NCCT (2.5 mm thickness). Of 331 patient scans, 241 scans were used for training and validation, and 90 were used for testing. Hyper-parameters Settings: Online data augmentations was performed, including adding noise, rotations, scalings, inplane flipping, etc. Learning rate was reduced non-linearly from 1e-4 to 6e-5 with the Adam optimizer, where lr=lr_init*(1-i_c/i_max)^0.9+lr_min*(i_c/i_max)^0.9, i_c is the current iteration and i_max is the maximum number of iteration. The variance schedule β_t is the sigmoid curve<cit.>. P2 Weighting coefficients during the training suggested in <cit.>. The repeated times are 100 and 50 for ŷ_0^𝐬𝐚𝐥 and ŷ_0^𝐢𝐧𝐟𝐞𝐫 in the inference process. Diffusion period T is 500 and the initial time T_i=T/2. Evaluation Metrics: Three metrics<cit.> including Dice, Volume Correlation(VC) based on the Pearson product-moment correlation coefficient, and Volume Difference Percentages(VDP), were used to quantitatively assess the performance of the model prediction at a voxel level compared to manual contouring. Five other methods, including SegResNet, UNETR, SwinUNETR, nnUNet, nnUnet++, were also applied on the same three datasets. For fair comparisons, the best performance with the optimized parameters for each method was reported. Results: A few segmentation examples are visualized in Fig.<ref>. Quantitative results in Table.<ref> show SDPM with four inferences obtained the best performance on the whole with Dice, VC and VDP. The method 'CDPM w/o noise' is with the same neural network architecture but without diffusion process. In our private Infarct dataset, although SDPM w/o noise and ŷ_0^𝐚𝐥𝐥 have obtained nearly same dice of 0.4, SDPM-ŷ_0^𝐚𝐥𝐥 have better VC-0.619 and lower VDP-0.545. In hemorrhage dataset, the best performance by ŷ_0^𝐢𝐧𝐟𝐞𝐫 is Dice-0.931, VC-0.985 and VDP-0.032. Additionally, Table.<ref> also suggested that our method was able to greatly reduce the VDP scores across three datasets. § DISCUSSION AND CONCLUSION A novel probabilistic SDPM is proposed to automatically segment stroke lesions of hemorrhage and infarct on NCCT, in order to alleviate the tedious manual contouring currently used in clinic <cit.>. The segmentation labels are output by SDPM in a fully probabilistic generative way. With the proposed several inference methods, the model was able to efficiently recover the lesion labels. Compared to the reference standard of manual contouring, quantitative evaluations demonstrate the efficacy of the proposed SDPM, outperforming several CNN and transformer based methods. This study represents the first study to use a completely probabilistic inference model based on DPM to automatically segment infarct and hemorrhage on NCCT. Table.<ref> has shown the proposed SDPM with four inference methods obtained the start-of-the-art performance on two datasets. All the ablation studies in Table.<ref> revealed every inference method has its own strength, where salient weighting estimate ŷ_0^𝐚𝐥𝐥 reached the best VDP of 0.540 and 0.032, ŷ_0^𝐬𝐚𝐥 obtained the best Dice in two infarct datasets. In the hemorrhage dataset, inference method ŷ_0^𝐢𝐧𝐟𝐞𝐫 reached the best results of Dice and VC. Generally, the inference of immediately outputting the labels performed worse than other three inference methods. This study has several limitations. First, our datasets are limited. 
More training samples may further improve segmentation accuracy and generalizability. Second, the final label inference is slightly affected by stochastic factors, and obtaining an average prediction over several inference runs is time-consuming. Third, the labels produced by the different inference methods were simply averaged; more advanced label fusion techniques may improve the performance. In conclusion, a synchronous image-label diffusion model based on a LVM is proposed to segment infarct and hemorrhage stroke lesions on NCCT. Experiments on three datasets demonstrate the efficacy of the proposed method, suggesting its potential for use in stroke lesion volume measurement.
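For readers who want to prototype the inference procedures described above, the sketch below implements the per-step reconstruction ŷ_0 = ŷ_t/√(α̅_t) − √(γ_t/α̅_t) ε̂_t, the salient-weighted estimate ŷ_0^sal with ψ_t = 1 − (t/T_i)^ν, and the stated polynomial learning-rate decay. Normalising the weights to sum to one and choosing ν = 2 are our own illustrative choices (the text only requires ν > 1), and the averaging over N stochastic repetitions is left to the caller.

```python
import numpy as np

def reconstruct_y0(y_t_hat, eps_hat, t, alpha_bar):
    """hat{y}_0 = y_t / sqrt(abar_t) - sqrt(gamma_t / abar_t) * eps_hat."""
    gamma_t = 1.0 - alpha_bar[t]
    return y_t_hat / np.sqrt(alpha_bar[t]) - np.sqrt(gamma_t / alpha_bar[t]) * eps_hat

def salient_weighted_label(y0_per_step, T_i, nu=2.0):
    """
    hat{y}_0^{sal}: combine per-step y_0 estimates for t = 0..T_i with the
    decaying weights psi_t = 1 - (t / T_i)^nu (nu > 1).
    y0_per_step: array of shape (T_i + 1, ...) holding the estimate at each t.
    """
    t = np.arange(T_i + 1)
    psi = 1.0 - (t / T_i) ** nu
    w = psi / psi.sum()                       # normalised here for convenience
    return np.tensordot(w, y0_per_step, axes=(0, 0))

def poly_lr(i_cur, i_max, lr_init=1e-4, lr_min=6e-5, power=0.9):
    """lr = lr_init*(1 - i/i_max)^0.9 + lr_min*(i/i_max)^0.9, as stated in the text."""
    frac = i_cur / i_max
    return lr_init * (1.0 - frac) ** power + lr_min * frac ** power
```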
http://arxiv.org/abs/2307.01341v1
20230703202806
Polynomial-time Approximation of Independent Set Parameterized by Treewidth
[ "Parinya Chalermsook", "Fedor Fomin", "Thekla Hamm", "Tuukka Korhonen", "Jesper Nederlof", "Ly Orgo" ]
cs.DS
[ "cs.DS" ]
HPC-driven computational reproducibility Yufeng Luo^1,2,3,4,6, Qian Zhang^5, Roland Haas^4,3, Zachariah B. Etienne^7, Gabrielle Allen^2,1,4 ======================================================================================================== We prove the following result about approximating the maximum independent set in a graph. Informally, we show that any approximation algorithm with a “non-trivial” approximation ratio (as a function of the number of vertices of the input graph G) can be turned into an approximation algorithm achieving almost the same ratio, albeit as a function of the treewidth of G. More formally, we prove that for any function f, the existence of a polynomial time (n/f(n))-approximation algorithm yields the existence of a polynomial time O(·logf()/f())-approximation algorithm, where n and denote the number of vertices and the width of a given tree decomposition of the input graph. By pipelining our result with the state-of-the-art O(n · (loglog n)^2/log^3 n)-approximation algorithm by Feige (2004), this implies an O(· (loglog)^3/log^3 )-approximation algorithm. § INTRODUCTION An independent set of a graph is a subset of pairwise non-adjacent vertices. The Maximum Independent Set problem, which asks to find an independent set of maximum cardinality of a given input graph on n vertices, has been among the most fundamental optimization problems that appeared in many research areas of computer science and has been a canonical problem of study in algorithms. In the field of approximation algorithms, the problem is notoriously hard: It has no O(n/2^log^3/4 n)-approximation algorithm running in polynomial time unless NP can be solved in randomized quasi-polynomial time by the work of Khot and Ponnuswami <cit.> (building on earlier work by among others Håstad <cit.>). The best known polynomial time approximation algorithm is an Õ(n/log^3 n)-approximation by Feige <cit.>, which is almost twenty years old; here the Õ-notation hides factors polynomial in loglog n. Besides measuring the approximation ratio as a function of n, two other directions have been suggested in the literature. One of the directions is to measure the ratio as a function of the maximum degree d of the input graph. The first improvement over the naive greedy (d+1)-approximation to o(d) was given by Halldorsson and Radhakrishnan <cit.> in 1994. After this, several improvements to this approximation were made <cit.>, culminating in the currently best Õ(d/log^1.5 d)-approximation by Bansal, Gupta, and Guruganesh <cit.> with an almost matching lower bound of Ω(d/log^2 d) under the Unique Games Conjecture (UGC) by Austrin, Khot, and Safra <cit.>; here the Õ-notation hides factors polynomial in loglog d. Another direction is to measure the approximation ratio as a function of the treewidth of the input graph. Here, a simple greedy algorithm that is based on the fact that graph of treewidth are -degenerate (see Lemma <ref>) achieves an approximation ratio of (+1). This was improved by Czumaj, Halldórsson, Lingas, and Nilsson <cit.> in 2005, who gave a (/ log n)-approximation algorithm when a tree decomposition of width is given with the input graph. Their algorithm is quite elegant and follows easily from the observation that one can greedily partition the vertices of the graph into sets V_1,…,V_r such that the treewidth of G[V_i] is at most /r. 
Combined with dynamic programming for independent set on graphs of bounded treewidth, this gives a 2^/r n^O(1) time r-approximation for any r, and therefore runs in polynomial time when we set r=/log n, resulting in the (/ log n)-approximation algorithm. Contrary to the degree-direction of approximating independent set, there has been no progress in the two other directions measuring the approximation ratio as a function on the number of vertices or the treewidth since the milestone results of Feige <cit.> and Czumaj et al. <cit.>. It is easy to show that one cannot improve the result of Czumaj et al. <cit.> to a polynomial time (/(f()log n))-approximation for any diverging positive function f, assuming the Exponential Time Hypothesis (ETH). In particular, given an input graph G on n_0 vertices we can create a graph G' on n=2^n_0 / f(n_0) vertices by adding n-n_0 vertices of degree 0. Then G' has treewidth n_0 and the assumed algorithm is a n^O(1)=2^o(n_0)-time r-approximation for r=n_0/(f(n_0) log n)=1, which violates the lower bound that Maximum Independent Set cannot be solved exactly in 2^o(n_0) time on graphs with n_0 vertices, assuming ETH (see e.g. for an equivalent lower bound for Vertex Cover <cit.> ) . This ETH lower bound naturally brings us to the question of what is the best approximation ratio in terms of treewidth only. In this paper, we essentially resolve this question by relating the approximation ratio parameterized by treewidth tightly to the approximation ratio parameterized by n. Formally, as our main result we prove the following theorem: theoremmaintheoremrestate Let f: ℕ→ℕ be a function such that there exists an n/f(n)-approximation algorithm for Maximum Independent Set, where n is the number of vertices of the input graph[We make mild assumptions on the properties of f, which are detailed in <Ref>. Any “reasonable” function f satisfies these assumptions.]. Then there exists an O(·logf()/f())-approximation algorithm for Maximum Independent Set, where is the width of a given tree decomposition of the input graph. Let γ(n) be the approximability function of Maximum Independent Set for n-vertex graph (i.e., the function for which O(γ(n))-approximation exists and o(γ(n))-approximation is hard). As mentioned before, the current state of the art has provided the lower and upper bounds γ(n) = Ω(n/2^log^3/4 n) <cit.> and γ(n) = Õ(n/log^3 n) respectively <cit.>. Similarly, one can consider Maximum Independent Set parameterized by and define τ() as the approximability function of Maximum Independent Set on the setting when a tree decomposition of width is given. Our result implies that the approximability functions γ and τ are essentially the same function, so this closes the treewidth-direction of Maximum Independent Set approximation. We find this phenomenon rather surprising. For some other parameters, such relations do not hold, e.g., when we consider the degree parameter d of the input graph, the approximability function of Maximum Independent Set is Ω(d/log^2 d) assuming UGC <cit.>, while the Õ(n/log^3 n)-approximation of Feige <cit.> exists. Combining Theorem <ref> with the result of Feige <cit.>, we obtain the following corollary. There exists an O(· (loglog)^3/log ^3 )-approximation algorithm for Maximum Independent Set, where is the width of a given tree decomposition of the input graph. This improves over the result of Czumaj et al. <cit.> when log^1/3 n = o(log/loglog), i.e., when is larger than exp(Ω̃(log^1/3 n)). 
It is better than the algorithm of Feige <cit.> whenever = o(n/loglog n), so overall it improves the state-of-the-art in the range of parameters exp(Ω̃(log^1/3 n)) ≤≤ o(n/loglog n). These results assume that the tree decomposition is given as part of the input. To remove this assumption, we can use the algorithm of Feige et al. <cit.> to O(√(log))-approximate treewidth. In particular, their algorithm combined with <Ref> yields the following corollary in the setting when a tree decomposition is not assumed as a part of the input. There exists an O(· (loglog)^3/log^2.5)-approximation algorithm for Maximum Independent Set, where is the treewidth of the input graph. *Techniques On a high level, our technique behind Theorem <ref> is as follows: First we delete a set of vertices of size at most /2 from the graph so that each of the remaining components can be partitioned into subinstances with pathwidth at most and subinstances with tree decompositions of width O() and depth O(log f()). For the subinstances of small pathwidth, we partition the vertices into O(log f()) levels based on in how many bags of the path decomposition they occur. Similarly, for the subinstances with O(log f())-depth tree decompositions, we partition the vertices in levels based on the depth of the highest bag of the tree decomposition they occur in. In both subinstances we argue that all vertices of all but one level can be removed, in order to make the vertices in the remaining level behave well in the decomposition, after which the remaining level can be chopped into components of size roughly O() such that the size of maximum independent again does not decrease significantly. Although some aspects of our approach are natural, we are not aware of arguments modifying the tree decomposition as we did here in the previous literature; we expect these arguments may have more applications for designing approximation algorithms for other NP-hard problem parameterized by treewidth similar to Theorem <ref>. *Organization The paper is organized as follows. We give preliminaries in Section <ref>. A major ingredient of <Ref> will be an approximation algorithm for Maximum Independent Set parameterized by pathwidth, which we will be presented in Section <ref>. Then, the approximation algorithm for Maximum Independent Set parameterized by treewidth will be presented in Section <ref>. This will use the pathwidth case as a black box. We then conclude and present open problems in <Ref>. § PRELIMINARIES *Basic notation We refer to <cit.> for standard graph terminology. We use the standard notation – α(G) – to denote the independence number, i.e., the size of a maximum independendent set, of graph G. Throughout, for a natural number i we denote the set {1, …, i} by [i], and for two natural numbers i ≤ j we denote the set {i, i+1, …, j} by [i,j]. We use log to denote the base-2 logarithm. *Tree decompositions Given a graph G, a tree decomposition of G consists of a tree T, where each node t ∈ V(T) is associated with a subset B_t ⊆ V(G) of vertices called a bag, such that * ⋃_t ∈ V(T) B_t = V(G) * For every edge uv ∈ E(G), there must be some node t such that {u, v}⊆ B_t. * For every vertex v ∈ V(G), the bags {t: v ∈ B_t} are connected in T. The width of a tree decomposition is max_t ∈ V(T) |B_t| - 1. The treewidth of G (denoted by (G)) is the minimum number k, such that G has a tree decomposition of width k. When the input graph is clear from the context, we simply write to denote the treewidth of G. 
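For experimentation, the plain-Python sketch below checks the three defining properties of a tree decomposition and reports its width. The representation (bags as a dictionary from tree node to vertex set) and the assumption that T itself is a valid tree are our own choices, not something fixed by the text.

```python
from collections import defaultdict, deque

def is_tree_decomposition(n_vertices, graph_edges, tree_edges, bags):
    """
    Check the three defining properties for a graph on vertices 0..n_vertices-1.
    bags maps tree node -> set of graph vertices; tree_edges are edges of T,
    which is assumed to be a tree (acyclicity is not re-verified here).
    """
    # (1) the bags cover every vertex
    if set().union(*bags.values()) != set(range(n_vertices)):
        return False
    # (2) every edge of the graph lies inside some bag
    for u, v in graph_edges:
        if not any(u in B and v in B for B in bags.values()):
            return False
    # (3) for each vertex, the bags containing it induce a connected subtree of T
    adj = defaultdict(set)
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for v in range(n_vertices):
        nodes = {t for t, B in bags.items() if v in B}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            t = queue.popleft()
            for s in (adj[t] & nodes) - seen:
                seen.add(s)
                queue.append(s)
        if seen != nodes:
            return False
    return True

def decomposition_width(bags):
    """Width of the decomposition: maximum bag size minus one."""
    return max(len(B) for B in bags.values()) - 1
```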
A rooted tree decomposition is a tree decomposition where one node is assigned to be the root of the tree T. We use standard rooted-tree definitions when talking about rooted tree decomposition. The depth of a rooted tree decomposition is the depth of the tree T, i.e., the length of the longest root-leaf path. A rooted tree decomposition T is called nice if it satisfies that * Every node of T has at most 2 children. * If a node t has two children t' and t”, then t is called a join node and B_t = B_t' = B_t”. * If a node t has one child t', then either: * B_t⊂ B_t' and |B_t'| = |B_t| + 1, in which case t is a forget node, or * B_t'⊂ B_t and |B_t| = |B_t'| + 1, in which case t is an introduce node. * If a node t has no children we call it a leaf node. It is well-known that any tree decomposition can be turned into a nice tree decomposition. For every graph G on n vertices, given a tree decomposition T' of width ω, there is a nice tree decomposition T with at most 4 · n nodes and width ω that can be computed in polynomial time. It is possible to also assume the following additional property without loss of generality. Given a tree decomposition T' of width ω, there exists a nice tree decomposition of width ω and at most 4n nodes, that can be computed in polynomial time, such that for each leaf node t ∈ V(T), there exists a vertex v ∈ B_t that appears in exactly one bag, i.e., the bag B_t itself. We use <Ref> to compute a nice tree decomposition T. If there exists a leaf node t ∈ V(T) that does not contain such a vertex, we delete t from T. Notice that all the properties of a tree decomposition continue to hold after such a deletion. However, if after this deletion the former parent s of t in T is not a leaf, s was a join node which now has a child with the same bag as s which violates niceness. To repair this we can simply contract the edge between s and its remaining child in T. It is straightforward to verify that after this T remains nice. We can iterate the above, strictly decreasing the number of nodes of T, until T has the desired property. We will use the following well-known lemma of Bodlaender and Hagerup <cit.> to turn a tree decomposition into a logarithmic-depth tree decomposition, while increasing the width only by a factor of three. Given a tree decomposition of a graph G of width ω and having γ nodes, we can compute in polynomial time a rooted tree decomposition of G of depth O(logγ) and width at most 3ω + 2. *Path decompositions A path decomposition is a tree decomposition where the tree T is a path. The pathwidth of G is the minimum number k, such that G has a path decomposition of width k. It is denoted by (G). A nice path decomposition is a nice tree decomposition where T is a path, and the root is assigned to a degree-1 node, i.e., at one end of the path. Note that there are no join-nodes in a nice path decomposition. We observe that any path decomposition can be turned into a nice path decomposition with 2n nodes. For every graph G on n vertices, given a path decomposition P' of width ω, there is a nice path decomposition P with 2n nodes and width ω, that can be computed in polynomial time. By introducing vertices one at a time and forgetting vertices one at a time we obtain a nice path decomposition where the bag of the first node is empty, the bag of the last node is empty, and on each edge exactly one vertex is either introduced or forgotten, and therefore the path decomposition has exactly 2n edges and 2n+1 nodes. 
We can remove the first bag that is empty to get a path decomposition with exactly 2n nodes. *Maximum independent set approximation Given a function r that maps graphs to numbers greater than 1, an r-approximation algorithm for Maximum Independent Set takes as input a graph G and outputs in polynomial time an independent set in G of size at least α(G)/r(G). We usually denote any occurrence of |V(G)| in r by n. Let us now detail our assumptions on the function f in <Ref>. We assume that the approximation ratio n/f(n) of the given approximation algorithm is a non-decreasing function on n. This assumption is reasonable because if n/f(n) would be decreasing at some point, we could improve the approximation ratio by adding universal vertices to the graph; note that adding universal vertices does not change the optimal solution, but increases n. This also implies that the function f(n) grows at most linearly in n. We assume that for arbitrary fixed constant c ≥ 1, it holds that f(c · n) ∈ O(f(n)). We also assume that the function f can not decrease too much when n grows, in particular, we assume that for arbitrary fixed constant c ≥ 1 it holds that f(c · n) ∈Ω(f(n)). Moreover, we will use a basic result about finding independent sets whose size depends on the treewidth of the graph. Recall that a graph G is d-degenerate if there is always a vertex of degree at most d in any induced subgraph of G. It is known that every graph G is (G)-degenerate: Simply consider the vertex that is contained at a leaf bag and no other bag of a tree decomposition T of any induced subgraph of G as given by <Ref>. This vertex has degree at most (G). Therefore, we obtain a following trivial algorithm for approximating Maximum Independent Set parameterized by treewidth. There is a polynomial time algorithm that given a graph G on n vertices finds an independent set of size at least n/((G)+1). Iteratively assign a vertex of minimum degree to the independent set and delete its neighbors. By the aforementioned degeneracy argument, at each iteration at most (G)+1 vertices are deleted, so the number of iterations and the size of the found independent set is at least n/((G)+1). Note that the algorithm of <Ref> does not need a tree decomposition as an input. § APPROXIMATION PARAMETERIZED BY PATHWIDTH In this section, we prove a version of <Ref> where instead of a tree decomposition, the input graph is given together with a path decomposition. This will be an important ingredient for proving <Ref>. In particular, this section is devoted to the proof of the following lemma. Let f: ℕ→ℕ be a function such that there exists an n/f(n)-approximation algorithm for Maximum Independent Set, where n is the number of vertices of the input graph, and f satisfies the assumptions outlined in <Ref>. Then there exists an O(·logf()/f())-approximation algorithm for Maximum Independent Set, where is the width of a given path decomposition of the input graph. Throughout this section we will use G to denote the input graph and to denote the width of the given path decomposition of G. We denote by k = +1 the maximum size of a bag in the given decomposition. Note that by our assumptions on the function f, it holds that f(k) = Θ(f()). Let us denote = α(G). If < n/f(k), then <Ref> gives us a solution of size at least n/(G)+1≥·f(k)/(G)+1, i.e., an O((G)/f(k))-approximation, which would give the desired result by the facts that (G) ≤ and f(k) = Ω(f()). Therefore, in the rest of this section we will assume that ≥n/f(k). 
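For concreteness, the degeneracy-based greedy procedure from the observation above (the fallback we use whenever the optimum is smaller than n/f(k)) can be sketched as follows. This is an illustrative implementation, not code from the paper; the adjacency-dictionary representation is assumed.

```python
def greedy_mis(adj):
    """Greedy independent set: repeatedly take a minimum-degree vertex and delete its
    closed neighborhood. adj maps each vertex to a set of its neighbors.
    Returns an independent set of size at least n / (degeneracy + 1)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # minimum-degree vertex
        independent.add(v)
        removed = adj[v] | {v}                       # delete v and its neighbors
        for u in removed:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return independent
```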
Let P be the given path decomposition of G. By <Ref>, we can assume without loss of generality that P is a nice path decomposition and has exactly 2 n bags, which we will denote by B_1, …, B_2n in the order they occur in the path. For each v ∈ V(G), we define the length of v to be the number of bags in P that contain v, and denote the length of v by ℓ(v). In particular, ℓ(v) = |{i ∈ [1,2n] v ∈ B_i}|. Then, we partition V(G) into 2 + ⌈log f(k) ⌉ sets based on the lengths of the vertices: V_0 = {v ℓ(v) < 2k} V_i = {v ℓ(v) ∈ [k · 2^i, k ·2^i+1)}, 1 ≤ i ≤⌈log f(k) ⌉ V' = {v ℓ(v) ≥ 4k · 2^⌈log f(k) ⌉} Note that (V_0, V_1, …, V_⌈log f(k) ⌉, V') is indeed a partition of V(G). We first show that the set V', which consists of the longest vertices, can only contribute to at most half of the optimal solution. It holds that |V'| ≤/2. First, notice that ∑_v ∈ V(G)ℓ(v) ≤ 2 n k. This is because P has 2n bags, each vertex appears in ℓ(v) bags of P, and each bag of P can have at most k vertices appearing in it. Now, because for vertices v ∈ V' we have ℓ(v) ≥ 4k · 2^⌈log f(k) ⌉≥ 4k · f(k) the vertices in V' contribute at least ∑_v ∈ V'ℓ(v) ≥ 4k · f(k) · |V'| to the sum. Therefore, it holds that |V'| ≤2nk/4k · f(k)≤n/2 · f(k)≤/2, as desired. <Ref> implies that at least half of any maximum independent set in G must be in the subgraph G[V_0 ∪ V_1 ∪…∪ V_⌈log f(k) ⌉]. In the rest of this section, we will focus on the following lemma. For each i ∈ [0,⌈log f(k) ⌉], there is a O(k/f(k))-approximation algorithm for Maximum Independent Set in G[V_i]. It is easy to see how <Ref> implies <Ref>. For each such G[V_i], we invoke Lemma <ref> to obtain a O(k/f(k))-approximate solution S_i ⊆ V_i. Our algorithm returns the set S_i with the largest cardinality. Since there are at most O(log f(k)) such sets, by Lemma <ref> there must be some integer i^* for which α(G[V_i^*]) ≥Ω(/log f(k)). Therefore, the returned set must have size at least Ω(/log f(k))/O(k/f(k)) = ·Ω(f(k)/k ·log f(k)) = ·Ω(f()/·log f()). Therefore, to finish the proof of <Ref>, it remains to prove <Ref>. Recall that the bags of P are denoted by B_1, B_2,…, B_2n where B_h is the h-th bag in the order from left to right. Let L = max_v ∈ V_iℓ(v) denote the maximum length of a vertex v ∈ V_i. Recall that by our definition of V_i, it holds that if i = 0, then L < 2k, and if i > 0, then all vertices in V_i have length between L/2 and L. We partition the set V_i into sets X_r and Y_r as follows. * For each r ∈ [1, ⌊ 2n/(2L) ⌋], we define X_r = B_2L r∩ V_i. These sets contain the vertices of V_i that appear in bags B_2 L, B_4L, … and the vertices in B_2L r can never occur in the same bag as the vertices in B_2L r', for any r ≠ r', since all vertices in V_i have length at most L. Let X=⋃_r X_r. * Denote the remaining vertices by Y = V_i ∖ X. We further partition Y into sets Y_r for r ∈ [1, ⌈ 2n/(2L) ⌉], where Y_r contains the vertices v ∈ Y that occur only in the bags B_j in the interval j ∈ [2L(r-1)+1, 2Lr-1]. It follows from definitions that X ∪ Y = V_i. See <Ref> for an illustration. We prove the following claim. For all r ∈ℕ, both sets X_r and Y_r have size at most 4k. For the set X_r, there is nothing to prove since each bag contains at most k vertices. Let us consider the set Y_r. First, we observe that because vertices of Y_r occur only in the bags B_2L(r-1)+1, …, B_2Lr-1, we have that ∑_v ∈ Y_rℓ(v) ≤ k · 2L, by the argument that each bag can contribute to the length of at most k vertices. Then, we consider two cases: i>0 and i=0. 
In the case when i>0, we know that each vertex v ∈ Y_r has length at least ℓ(v) ≥ L/2. Together with <ref>, this implies that |Y_r| ≤ 4 k. In the case when i=0, we have L≤ 2k, but we do not have the lower bound on the length of vertices in V_0. In this case, we use the property that P is a nice path decomposition of G. We know that the paths of vertices in Y_r appear only in the bags B_2L(r-1)+1, …, B_2Lr-1 of P. There are 2L-1 such bags, and because P is nice, each bag either introduces a single vertex in Y_r ∪ X_r or forgets a single vertex in X_r-1∪ Y_r. Since all vertices of Y_r must be introduced in these bags, but there are only 2L-1 such bags, this implies that |Y_r| ≤ 2 L-1 ≤ 4k. Finally, notice that because there are no bags that contain vertices from both Y_r and Y_r' for r ≠ r', there are no edges between Y_r and Y_r' for r ≠ r'. Also, since X_r-1 and X_r have 2L -1 bags between them and the maximum length of a vertex is L, it follows that no vertex from X_r-1 occurs in a bag together with a vertex in X_r, and therefore there are no edges between X_r and X_r' for r ≠ r'. Therefore, a union of independent sets in Y_1, …, Y_⌈ 2n/(2L) ⌉ is an independent set in Y, and a union of independent sets in X_1, …, X_⌊ 2n/(2L) ⌋ is an independent set in X. As each graph G[X_r] and G[Y_r] has at most 4k vertices, we use the given n/f(n)-approx­imation algorithm to 4k/f(4k)-approximate maximum independent set in all of the graphs G[X_r] and G[Y_r]. We denote by X^* the union of the results in the graphs G[X_r] and by Y^* the union of the results in the graphs G[Y_r]. Note that by previous arguments, α(G[X]) = ∑_rα(G[X_r]) and α(G[Y]) = ∑_rα(G[Y_r]), and therefore X^* is a 4k/f(4k)-approximation for independent set in G[X] and Y^* is a 4k/f(4k)-approximation for independent set in G[Y]. Now, we observe that because V_i = X ∪ Y, either α(G[X]) ≥α(G[V_i])/2 or α(G[Y]) ≥α(G[V_i])/2, and therefore the larger of X^* and Y^* is a 8k/f(4k)-approximation for independent set in G[V_i]. Note that 8k/f(4k) = O(k/f(k)), which is the desired approximation ratio. § APPROXIMATION PARAMETERIZED BY TREEWIDTH In this section, we finish the proof of <Ref>. For the convenience of the reader, let us re-state <Ref> here. * Throughout this section we will use G to denote the input graph and to denote the width of the given tree decomposition of G. We denote by k = +1 the maximum size of a bag in the given tree decomposition. Recall that f(k) = Θ(f()). Let T be the given tree decomposition of G. By <Ref> we assume that T is nice, and moreover that for each leaf node t of T there exists a vertex v ∈ B_t that occurs only in the bag B_t. Let denote the size of a maximum independent set in G. Similarly to the pathwidth case in <Ref>, by using <Ref> we can assume in the rest of this section that ≥n/f(k). Let ⊆ V(T) be the set of all leaf nodes of T. If the number of leaf nodes is at least || ≥· f(k)/k, then the unique vertices in these leaf bags already give us an independent set with the desired approximation factor. Therefore, in the rest of this section we will also assume that || < · f(k)/k. With this assumption, we can invoke the following lemma with ℓ = 2 f(k). There exists a set X ⊆ V(G) of size |X| ≤ k ·||/ℓ such that for each connected component of G - X there is a rooted tree decomposition of width at most k-1 that has at most ℓ leaf nodes. Such a set X and the tree decompositions of the components can be computed in polynomial time. 
We prove the lemma constructively starting with X = ∅ and the tree decomposition T of the entire graph. We also maintain a set of tree decompositions of connected components of G-X. We iteratively remove vertices from the graph G based on the structure of T as follows. Initially, we define the set of tree decompositions we will return as = ∅, and we initially assign T'=T as the tree decomposition from which we will “chop off” pieces with at most ℓ leaf nodes into . As long as T' has more than ℓ leaves, let t^* be a node of T' such that there are at least ℓ leaf nodes in the subtree T^* of T' rooted at t^* and no descendant of t^* has the same property. We add the vertices of B_t^* to X and delete them from the graph and all bags of T'. This separates the vertices in the bags in T^* from the vertices in the bags of the rest of T' and since no descendant of t^* had more than ℓ leaves, all connected components of T^* - t^* have at most ℓ leaves. We remove T^* from T', and add all connected components of T^* - t^* into . This completes the iteration. When the process stops, vertices in G are either deleted (because they belonged to B_t^* in some iteration) or appear in some tree decomposition that was added to . By construction, each connected component of G - X has a tree decomposition that is given by a connected component in . Each of these has fewer than ℓ leaves and did not increase in width compared to T. With these observations, the following claim finishes the proof of the lemma. It holds that |X| ≤ k ·||/ℓ. In each iteration of the algorithm, the number of leaves of T' decreases by at least ℓ, because T^* has more than ℓ leaves. Hence, this process terminates after at most ||/ℓ iterations. Each such iteration adds a subset of a bag of T to X (which contains at most k vertices). Therefore, the total number of deleted vertices is at most k ·||/ℓ. We then assume to have X as in the statement of <Ref> with ℓ = 2 · f(k), and for each connected component C of G-X a tree decomposition T^C of width at most k-1 with at most 2 f(k) leaves. For a connected component C of G - X, let S_C denote a fixed maximum independent set in C. Since |X| ≤ k ·||/2 f(k)≤/2, we know that the sum of |S_C| over all connected components C of G - X is at least /2. We can distinguish two cases for a single connected component C of G - X based on whether a majority of S_C appears in bags of nodes of degree at least 3 in T^C or not. Formally, let Q denote the set of vertices that appear in the bags of nodes of degree at least 3 in T^C, i.e., Q=⋃_t has degree >2 in T^C B_t. For each component C one of the following two alternatives holds: * |S_C ∖ Q| > |S_C|/2, or * |S_C ∩ Q | ≥ |S_C|/2. For handling the first case we can observe an easy pathwidth bound for C - Q, which allows us to apply <Ref>. A path decomposition of C - Q of width at most k-1 can be computed in polynomial time given the tree decomposition T^C. A path decomposition witnessing this can easily be obtained from T^C by deleting all nodes with degree at least 3 as well as vertices in their bags from the decomposition, resulting in a disjoint union of paths all of whose bags are of size at most k. These paths can be concatenated in arbitrary order. For the second case we next give a lemma that splits up each C[Q] into O(log f(k)) many disjoint subgraphs in which every connected component has at most O(k) vertices. 
C[Q] can be divided into ℓ≤ O(logf(k)) subgraphs H_1, …, H_ℓ, such that V(C[Q]) = ⋃̇_i ∈ [ℓ] V(H_i), and for any i ∈ [ℓ], each connected component of H_i has at most 6k vertices. Such H_1, …, H_ℓ can be computed in polynomial time. Consider the tree decomposition T^C of C obtained according to <Ref>. In particular, such a tree has at most 2 f(k) leaves and therefore at most 2 f(k)-1 nodes with degree at least 3. We can replace each path between two nodes u and v of degree at least 3 in T^C by two edges incident to a shared new node whose bag consists of the union B_u ∪ B_v of the bags of u and v. In this way we obtain a tree decomposition of C[Q] with at most 4 · f(k) nodes and width at most 2k - 1. For this tree decomposition we invoke <Ref> to obtain a tree decomposition T^J of C[Q] with width at most 6k - 1 and depth ℓ∈ O(logf(k)). Now, we partition the vertices in C[Q] into H_1,…, H_ℓ where H_i contains all vertices v such that the distance between the root of T^J and the highest bag in which v appears is exactly i-1. By definition all V(H_i) are pairwise disjoint and because T^J is a tree decomposition of C[Q], the union of all V(H_i) covers V(C[Q]). Moreover each connected component of any H_i is by construction a subset of some bag of T_J and thus has at most 6k vertices as desired. With the previous lemmas in hand, we are now ready to finish the proof of <Ref> as follows. We begin by invoking <Ref>. Let be the set of connected components in G - X. For the next few paragraphs consider an arbitrary but fixed single connected component C ∈. We first use <Ref> to invoke <Ref> on C - Q to obtain an independent set S^J_C in C - Q of size at least Ω(f(k)/k ·logf(k)) ·α(C - Q). Independently we invoke <Ref> and on each of the returned graphs H_i the assumed n/f(n)-approximation for n-vertex graphs on each of its connected components. Due to their small component size for each H_i this results in an independent set S_H_i of size at least Ω(f(k)/k) ·α(H_i). Because the graphs H_i vertex-partition C[Q] and there are only O(logf(k)) many H_i, returning an S_H_i with maximum size yields an O(k log f(k)/f(k) )-approximate solution for Maximum Independent Set on C[Q]. We know that either * α(C - Q) ≥α(C)/2, or * α(C[Q]) ≥α(C)/2. Overall this implies that returning the larger of S^J_C and the maximum-size S_H_i yields an O(k logf(k)/f(k))-approximate solution for Maximum Independent Set on C. We denote the returned independent set by S_C. Our final output is the union of all S_C. Because all C are pairwise independent, the union of S_C is an independent set in G. Moreover, because ∑_C ∈α(C) ≥/2 and because of the above approximation guarantee for each S_C, we obtain the overall desired approximation guarantee. § CONCLUSION AND OPEN PROBLEMS In this paper we essentially settled the polynomial time approximability of Maximum Independent Set when parameterized by treewidth. The most relevant open problem is to extend our approach to give the improved time-approximation tradeoff result in Czumaj et al. <cit.>. The current best known algorithm gives an r-approximation in 2^/r n^O(1) time. With fine-tuning of the parameters and using the recent exponential-time approximation result of Bansal et. al. <cit.>, we believe our techniques could give an improved running time of 2^o(/r) when r is sufficiently high, e.g., r = log^Ω(1). For us, the most interesting question is perhaps when r is tiny. Can we get a 2-approximation algorithm that runs in time 2^(1/2-ϵ) n^O(1)? 
Can we prove some concrete lower bound in this regime? While the Gap-ETH lower bound of 2^tw/poly(r) (for sufficiently large r) is immediate from <cit.>, such techniques do not rule out anything when r is a small constant. A different possible direction for future research would be to formulate approximation algorithms in terms of treewidth only for the more general Maximum Weight Induced Subgraph problem studied by Czumaj et al. <cit.>.
http://arxiv.org/abs/2307.02202v1
20230705105349
On the Adversarial Robustness of Generative Autoencoders in the Latent Space
[ "Mingfei Lu", "Badong Chen" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CR" ]
1]Mingfei Lu 1]Badong Chen [1]organization=National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, addressline=, city=Xi'an, postcode=710049, state=, country=China The generative autoencoders, such as the variational autoencoders or the adversarial autoencoders, have achieved great success in lots of real-world applications, including image generation, and signal communication. However, little concern has been devoted to their robustness during practical deployment. Due to the probabilistic latent structure, variational autoencoders (VAEs) may confront problems such as a mismatch between the posterior distribution of the latent and real data manifold, or discontinuity in the posterior distribution of the latent. This leaves a back door for malicious attackers to collapse VAEs from the latent space, especially in scenarios where the encoder and decoder are used separately, such as communication and compressed sensing. In this work, we provide the first study on the adversarial robustness of generative autoencoders in the latent space. Specifically, we empirically demonstrate the latent vulnerability of popular generative autoencoders through attacks in the latent space. We also evaluate the difference between variational autoencoders and their deterministic variants and observe that the latter performs better in latent robustness. Meanwhile, we identify a potential trade-off between the adversarial robustness and the degree of the disentanglement of the latent codes. Additionally, we also verify the feasibility of improvement for the latent robustness of VAEs through adversarial training. In summary, we suggest concerning the adversarial latent robustness of the generative autoencoders, analyze several robustness-relative issues, and give some insights into a series of key challenges. generative autoencoders, adversarial robustness, latent space § INTRODUCTION As one of the most successful deep unsupervised representation learning models, variational autoencoders (VAEs) <cit.> and their deterministic variants (such as the adversarial autoencoders <cit.> and the regularized autoencoders <cit.>) have been used in many domains such as computer vision <cit.>, natural language processing <cit.>, time series <cit.>. By taking advantage of the prior distribution hypothesis and the re-parameterize trick for the latent representation, VAEs outperform the classic autoencoders that are trained by minimizing reconstruction error from two perspectives: (a) it helps to make smooth interpolations, which means VAEs can be used as generative models to sample from the latent space and make new reasonable examples with high quality <cit.>. (b) it provides more robustness against input perturbations, particularly those originating from adversarial attacks <cit.>. Nevertheless, several aspects of the traditional VAE framework prevent it from trustworthy reconstruction or generating new data. On the one hand, insufficiency of the training data may cause holes or valleys in the latent space <cit.>, as illustrated in Figure <ref>, from where sampling a latent may lead to bad or even invalid reconstruction or generation. Meanwhile, a latent from the low-density area of the prior distribution also tends to produce a sample with low quality in high probability. 
On the other hand, VAEs enforce a global structure in the latent space by fitting a prior distribution that may not match the true data manifold. This model mismatch can result in less accurate generative modeling of the data <cit.>. Note that, the above-mentioned limitations are mostly related to the latent space of generative autoencoders. In this sense, from a security perspective, the vulnerability of the generative autoencoders in the latent space may provide an easy opportunity for attackers who aim to deteriorate the reconstruction of those autoencoders (especially in a communication scenario). Moreover, in practical scenarios like communication or compressed sensing <cit.> as depicted in Figure <ref>, the encoder and decoder of an autoencoder are used separately hence the latent transmitting channel is at risk of physical interference or attack <cit.>. Motivated by the above facts, we systematically, for the first time, investigate the adversarial robustness of generative autoencoders in the latent space. The adversarial robustness of generative autoencoders has been extensively investigated <cit.>. However, most of existing studies focus on robustness against adversarial inputs, while little research has been done on the latent counterpart. G. Osada et al. propose a latent space virtual adversarial training algorithm, which injects perturbation in the latent space and aims to generate input samples with more adverse-effective regularization <cit.>. In <cit.>, Yu et al. point out that latent features in such input-perturbation-robust models are surprisingly susceptible to adversarial attacks. Through harnessing latent features, they formulate a unified ℓ_∞-norm white-box attack algorithm with a stronger adversarial effect. Park et al. introduce a single-step latent adversarial training method <cit.>, which leverages the gradients of latent representation as to the latent adversarial perturbation. It is worth note that our motivation and study in this work is totally different from  <cit.>. First, we study the adversarial robustness of generative models under an autoencoder framework, rather than a discriminative models used mostly for classification. Second, our study is not only targeted for developing an advanced adversarial training method to improve robustness. Rather, we aim to warn practitioners of the vulnerability of autoencoders in the latent space and provide several insights with respect to both variational and deterministic autoencoders (DAE). As a by product, we also demonstrated that the latent robustness of VAE models can be improved by adversarial training. We start the research with attack experiments to show the latent vulnerability on well-trained VAE models based on the MNIST , FasionMNIST , and CelebA datasets. Next, experiments are conducted to investigate the difference in adversarial latent robustness between VAEs and DAEs. This involves a key question: whether VAE or DAE is more robust to attacks and potential for safe practical applications. Another concern of ours is the relation between adversarial robustness (in latent space) and the degree of disentanglement (of latent representations). It is well-known that there exists a trade-off between the reconstruction accuracy and disentangling strength for disentangling VAE such as β-VAE <cit.> and β-TCVAE <cit.>, which motivates us to consider the (possible) existence of other trade-off factors. 
Comparison attack experiments with different β are conducted to further reveal the mystery of whether there are trade-offs among the reconstruction accuracy, disentangling strength, and latent robustness. Our contributions are summarized below: * Proposal of the adversarial robustness problem for generative autoencoders in the latent space (in Section <ref>); * Demonstration of the vulnerability to adversarial latent and the potential to promote latent robustness through adversarial training (in Section <ref>); * Investigation of the difference in latent adversarial robustness between VAEs and DAEs, and an insightful finding that deterministic autoencoders show more robustness in the latent space (in Section <ref>); * Analysis of the trade-off between latent robustness and the disentanglement of the latent representations (in Section <ref>). § PRELIMINARIES AND RELATED WORK §.§ Variational Autoencoders In their seminal work <cit.>, Kingma & Rezende et al. introduced the variational autoencoder which has attracted much research interest, and become one of the most popular generative models used so far <cit.>. The general framework of a VAE model is shown as in Figure <ref>. The encoder f_enc(x) is a network mapping a high-dimensional input representation x into a lower-dimensional (compressed) latent representation z. And the decoder f_dec(z) is a mirror network of the encoder, mapping the latent representation back to a high-dimensional output x̂. The VAE model provides a very revolutionary idea of having neural networks learn the distribution rather than the features of the data only. By applying a prior distribution hypothesis with an explicit density function for latent Z and pursuing the maximum log-likelihood for the posterior distribution of the data, they derive the variational/evidence lower bound (ELBO) and then train the model. Nevertheless, the ELBO objective ensures to minimize the reconstruction error and the data distribution hypothesis fitting error simultaneously: ℒ_ELBO = D_KL[ q( . z |x)p( z ).] - E_q( . z |x)[ log p( . x |z)]. The first term is the Kullback–Leibler divergence between the learned approximation q( . z |x) to the true posterior distribution and the prior distribution of the latent representation z, and the second term E_q( . z |x)[ log p( . x |z)] denotes the loss of reconstruction x̂ from the original input x. In view of the objective function, D_KL is a regularization term which quantifies the mismatch between the learned posterior distribution and the prior. In practice, the KL divergence can also be replaced with the maximum mean discrepancy (MMD) <cit.> and the cauchy-schwarz (CS) divergence <cit.> for more flexible latent prior, beyond just an isotropic Gaussian. Notice that the encoder does not produce a latent representation directly but the corresponding parameters for the probability density function (PDF) of the prior distribution. This is the specific and most important difference between VAEs and the traditional autoencoders. Then, there comes the re-parameterize trick z = μ + σ⊙ζ, where ζ is randomly sampled from the prior normal distribution. With such a trick, the overall framework is able to be optimized using the backward-propagation process. In this work, we conduct studies with the above original VAE model framework and two simple variants. 
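As a point of reference, the ELBO objective above together with the re-parameterization trick can be sketched in a few lines of PyTorch. This is a minimal illustrative MLP version under assumed layer sizes, an MSE reconstruction term, and a unit-Gaussian prior; it does not reproduce the convolutional models used in the experiments below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaVAE(nn.Module):
    """Minimal MLP VAE: the encoder outputs (mu, log_var), the latent is re-parameterized."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu_head = nn.Linear(h_dim, z_dim)
        self.logvar_head = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # z = mu + sigma * zeta
        return self.dec(z), mu, log_var

def elbo_loss(x, x_hat, mu, log_var):
    rec = F.mse_loss(x_hat, x, reduction="sum")                      # reconstruction term
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())  # KL(q(z|x) || N(0, I))
    return rec + kld
```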
Referring to the understanding of the VAE loss-function in <cit.> where the framework is illustrated in Figure <ref>, we replace the regularization term of Eq (<ref>) with MMD and SWT <cit.>, thus achieve the MMD-VAE and SWT-VAE. MMD is the non-parametric kernel two-sample test metric proposed by Gretton et al.. SWT (Shapiro-Wilk Test) is a parametric distributional testing method for Normal distribution <cit.>, and we use its extension proposed in <cit.>. The corresponding loss functions are: L_ELBO-MMD = MMD[ . z |x,z_s] - E_q( . z |x)[ log p( . x |z)], and L_ELBO-SWT = 1 - W( . z |x) - E_q( . z |x)[ log p( . x |z)], where z_s∼𝒩( 0,I^m × d) are random sampled from normal distribution, m denotes the batch size, d is the dimension of the latent code, and W can be calculated with the method proposed in <cit.>. The original VAE model is also known as Vanilla-VAE, the objective of which employs the KL-divergence for regularization as in Eq. (<ref>). Therefore, we use the terms “Vanilla-VAE” and “KLD-VAE” interchangeably. §.§ Deterministic Generative Autoencoders Traditional deep autoencoders tend to learn a trivial identity function and thus copy the input to the output, instead of picking up the underlying patterns and characteristics of the data distribution to generate new examples <cit.>. VAEs bring auto-encoding into the generative era with theoretical attractiveness besides a pretty framework. However, they suffer from the posterior collapse problem hence motivating many studies turning back to deterministic autoencoders. RAE <cit.> fixes the variance of the inferred Gaussian approximate posterior distribution as a hyper-parameter, and substitutes the stochastic encoder by injecting noise into the input of a deterministic decoder. Ding et al. improve the RAE and propose the SCVG to learn the variance of the approximate Gaussian posterior distribution in a semi-deterministic manner by aggregating inferred mean vectors from other connected nodes via graph convolution operation <cit.>. In <cit.>, the authors couple the VAE model with a deterministic network sharing the same structure but optimized with the reconstruction loss without regularization for latent distribution. The DD-VAE proposed in <cit.> employs a variational encoder but deterministic decoder. A family of generative models named Exemplar VAE bridges the gap between parametric and non-parametric, exemplar based generative models <cit.>. All the above works put efforts into changing the prior hypothesis and the corresponding sampling procedure for data or the framework of the original VAE, through which they achieve deterministic autoencoders. There is another way of obtaining deterministic generative autoencoders just by rethinking the way of generating latent variables or the organization of ELBO of the VAEs. Remember in mind that the ELBO objective defined in Eq.(<ref>) consists of two parts. The second term exhibits a mean squared error (MSE) with L2 regularization on μ_q(x), which helps to reduce reconstruction loss. While the first term, representing KL-divergence between the posterior data distribution and its prior one, works to fitting the data distribution. Obviously, the implementation of the KL-divergence is the real source of generative modeling ability, and there are many alternative realizations in the deterministic way to regularize the loss with data distribution fitting error as that in AAE <cit.> or ITL-AE <cit.>. 
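For illustration, the MMD regularization term above can be estimated from a mini-batch of encoded latents and a batch of prior samples as sketched below. The Gaussian kernel and its bandwidth are assumptions made here for concreteness; the paper does not fix a particular kernel in this section.

```python
import torch

def gaussian_kernel(a, b, bandwidth=1.0):
    """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))."""
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd(z, z_prior, bandwidth=1.0):
    """Biased empirical MMD^2 between encoded latents z and prior samples z_prior ~ N(0, I)."""
    k_zz = gaussian_kernel(z, z, bandwidth).mean()
    k_pp = gaussian_kernel(z_prior, z_prior, bandwidth).mean()
    k_zp = gaussian_kernel(z, z_prior, bandwidth).mean()
    return k_zz + k_pp - 2.0 * k_zp

# Usage inside the MMD-VAE loss: total_loss = rec_loss + mmd(z, torch.randn_like(z))
```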
By doing so, one can obtain deterministic or semi-deterministic autoencoders surmounting limitations of the VAEs but preserving their generative capability. We extend the two variants defined in Section <ref> to realize their deterministic counterparts: MMD-DAE and SWT-DAE for later use, motivating by the idea in <cit.>. The only thing needed is to change the way of generating latent representations for the encoder. That is, make the encoder output the latent directly instead of its mean and standard deviation [μ, σ] for the Gaussian PDF of the posterior distribution but train them with the same objective as the corresponding VAE does. §.§ Adversarial Robustness of Generative Autoencoders Adversarial robustness is one of the key problems for neural networks. It is common to generate adversarial examples by attacks to collapse VAEs or train them with adversarial examples to promote robustness. Notice that existing research on the adversarial robustness of generative autoencoders mostly focuses on the robustness of downstream classification with adversarial examples from the input space <cit.>. And studies involving the latent of an autoencoder aim to develop new methods for attack or defense by taking advantage of the latent regularization to obtain more adversarial-effect <cit.>. In this paper, we study the robustness of generative autoencoders directly from the latent space for the first time. Motivations come from real applications in communication systems where the issue of latent robustness or security arises as a problem. Because the information channel transmitting latent representations is exposed to noise interference or attackers as illustrated in Figure <ref>. We demonstrate that malicious latent can derail the decoder/generator of a generative autoencoder, and attempt to evoke research attention on their latent robustness. § ANALYSIS FOR ROBUSTNESS OF VAES IN LATENT SPACE In this section, we show the vulnerability of the VAEs in the latent space through attack experiments at first. Then, a simple methodology is presented for adversarial training to improve the latent robustness. §.§ Vulnerability of VAEs in the latent space §.§.§ Problem Proposal It is common to study the vulnerability of neural networks to adversarial samples through attack experiments. Here we make the following assumptions and then define adversarial examples of the latent representations. * Assumption 1. One can get access to the latent (codes) of the encoder-decoder model. * Assumption 2. One can get access to the decoder (needn't know the structure or functional in detail but can get the output of the decoder whenever given a specific latent code). As defined before, f_enc(·) and f_dec(·) denote the encoder and decoder of a well-trained VAE model, respectively. An un-targeted adversarial latent z^adv to the original z^0 is defined as below: {[ J( z^adv,z^0) = D( f_dec( z^adv),f_dec( z^0)); z^* = max_d( z,z^0)≤ε J( z,z^0) ]., where z^0=f_enc(x) can be encoded from an input x or directly sampled from its prior distribution, and D is some a distance or similarity measurement. Any distance, divergence metrics for a two-sample test, or a composition of them can be used to realize D. Without loss of generality, we use the mean square error in the experiments of this section, maintaining consistency with the reconstruction error term of the ELBO for the original VAE model. One can easily change it for other metrics as needed. 
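A minimal sketch of the un-targeted objective above, optimized by projected gradient ascent over the latent code, could look as follows. The ℓ∞ projection, step size, and iteration count are illustrative placeholders; the concrete PGD settings used in the experiments are specified in the next subsection.

```python
import torch
import torch.nn.functional as F

def untargeted_latent_attack(decoder, z0, eps=0.5, alpha=0.1, steps=10):
    """Maximize D(dec(z_adv), dec(z0)) subject to ||z_adv - z0||_inf <= eps (D = MSE here)."""
    with torch.no_grad():
        target = decoder(z0)            # the "correct" reconstruction (Assumption 2)
    z_adv = z0.clone().detach()
    for _ in range(steps):
        z_adv.requires_grad_(True)
        loss = F.mse_loss(decoder(z_adv), target)
        grad, = torch.autograd.grad(loss, z_adv)
        with torch.no_grad():
            z_adv = z_adv + alpha * grad.sign()           # ascent step
            z_adv = z0 + (z_adv - z0).clamp(-eps, eps)    # project back onto the eps-ball
    return z_adv.detach()
```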
Here d measures the Euclidean distance between adversarial and the original latent, and ε is a small positive number that represents a constraint on the attack intensity. The objective of a targeted adversarial latent is defined similarly: {[ J( z^adv) = D( f_dec( z^adv),x^t); z^* = min_d( z,z^0) < ε J( z ) ]. No matter whether planning to attack the decoder with a targeted example or in an un-targeted way, one should know the correct reconstruction as a prior. That's why we need the Assumption 2. Note that the objective of the attack is to collapse the decoder, which means any alternatives that deteriorate the reconstructions or make a batch of generated samples being homogeneous are effective adversarial latent z^adv. From this perspective, we can design kinds of targets for attacks such as images with all black/white pixels, with pixels randomly sampled from a prior distribution, or with the reversed color of the original reconstruction, and so on. In this section, we conduct two attack experiments to examine the latent robustness of the investigated VAE models. One is the un-targeted attack and the other is a targeted attack with all-black pixeled targets. §.§.§ Attack Experiment We will take experiments on three well-trained models, which are the Vanilla-VAE and the aforementioned MMD-VAE and SWT-VAE. Attack Method. We choose PGD <cit.> to optimize and solve Eq.(<ref>) ∼ Eq.(<ref>) for all the attack experiments in this work. {[ z_0^adv = z^0,; z_k + 1^adv = Clip_Z, ε{z_k^adv + α sign( ∇ _ZJ( Z_k,Z_k-1^adv))} ]. Unless otherwise specified, parameter α is set to α=1, the maximum iteration times is k=10, and ε will be used to control the attack intensity (energy) for all scenarios. Metrics to Evaluate Robustness. When dealing with image issues, a good choice for robustness evaluation should related to the quality of the reconstructions or generations. Candidates for such image quality evaluation can be PSNR <cit.>, SSIM <cit.>, IS <cit.>, FID <cit.>, LPIPS <cit.> and so on. In view of their excellent and reliable performance, we mainly take SSIM and LPIPS to score the image reconstruction quality in the current and future experiments. The SSIM performs quite better than the MSE and PSNR in discriminating structural content in images. While the LPIPS is more effective to account for the nuances of human perception. It is implemented with an ImageNet-trained deep neural network, e.g. VGG, but can also be used for other image datasets. In this work, we use the realization from the TorchMetrics package <cit.> for Pytorch to calculate all the above-mentioned metrics. From a qualitative point of view, it is the truth that a VAE model is lack of adversarial robustness if the quality of the reconstructions decreases as the attack intensity increases, i.e., the SSIM score decreases or the LPIPS score increases. Such curves can be viewed as distortion-to-distortion plots (DD-plots or DD-curves) like those in <cit.>. However, quantitative evaluation of the latent robustness of the VAEs remains an open problem. In <cit.>, they address this problem with the AUDDC (Area under Distortion–Distortion Curve, AADDC). Motivated by this, we suggest the area associated with the DD-curves (AADDC) to quantify the latent robustness. For SSIM curves that achieve better performance with larger scores, our AADDC has the same definition as the AUDDC. While for LPIPS curves that achieve better performance with smaller scores, the AADDC denotes the area above the DD-curves. Attack Results Showing Vulnerability. 
We investigate the adversarial latent robustness of models trained on MNIST, FasionMNIST, and CelebA datasets. All the models for each dataset share the same framework as shown in Figure <ref>. The encoder contains 4 convolutional layers with hidden nodes of 32,64,128 and 64, respectively. Each layer is followed by a BatchNorm2d and a LeakyReLu activation layer. The decoder is just the inverse of the encoder. The only difference is the dimension of the latent, which is set to 10, 30, and 128 for models on MNIST, FasionMNIST, and CelebA, respectively. The batch size is set to 64 for training and is limited to 8 for attack experiments. The optimizer is Adam with learning rate 1e^-3. At first, we take an experiment to attack the well-trained models in an un-targeted way. As shown in Figure <ref>, it exhibits quality deterioration and significantly different reconstructions when the attack goes strong on all three datasets. Figure <ref>, <ref>, <ref> directly support the judgment that adversarial reconstructions show a trend of increasing difference from the original ones as the attack intensity increases. But the three VAE models regularized with different terms perform quite differently. It seems that the reconstruction-quality scores are getting more and more similar among different models as the complexity of the data increases. For instance, the curves for different models on MNIST in Figure <ref> can be recognized with a clear distinction while they perform very close on FMNIST as in Figure <ref>, and in Figure <ref> for CelebA they even overlaps. Furthermore, as shown in the last three rows of Figure <ref>, reconstructions from different types of models based on the three datasets are not only differed from the original images but also qualitatively declined. The above experiment has proved that un-targeted attacks in the latent space are effective to fail the reconstruction or generation of VAE models. Next, we investigate the latent robustness of VAEs to the targeted attack. Figure <ref> presents the results of adversarial reconstructions and the corresponding DD-plots of the Vanilla-VAE under attack with all-black targets on the MNIST dataset. It can be concluded that the VAEs are indeed prone to be attacked in the latent space. The LPIPS and SSIM scores in Figure <ref> show an explicit worsen trend as attack intensity arises. As displayed in Figure <ref>, model reconstructions under attacks is deteriorating, too. And the investigated model reconstructs images with nearly all black pixels under a black-targeted attack when the intensity rises to ε=1.0. Aware that this adversarial attack is added in the latent space, so its intensity no longer represents the scale of the image pixels in the input space. And we will give a visualization of the adversarial latent in the subsequent experiment. Effective and efficient targets for attack are always deep associated with the features of the dataset. Here we have demonstrated that all-black targets are of this category for the MNIST dataset. Since the aim for us is to verify the deceptive reconstruction capability with adversarial latent generated by the targeted attack and this has been achieved, we will show no more results with other types of targets for MNIST and the other two datasets. It worth believed that targeted attacks are definitely capable of collapse the reconstruction or generation of VAEs as long as the right target and enough attack power is taken. 
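For reference, the encoder framework described above (four Conv2d layers with 32, 64, 128 and 64 channels, each followed by BatchNorm2d and LeakyReLU, producing the Gaussian parameters of the latent) might be sketched as below. The kernel size, stride, and assumed 32×32 input resolution are not specified in the paper and are chosen here only for illustration.

```python
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Sketch of the 4-layer convolutional encoder described in the text; kernel size,
    stride and the 32x32 input resolution are assumptions, not values from the paper."""
    def __init__(self, in_ch=1, z_dim=10):
        super().__init__()
        chans, layers, c = [32, 64, 128, 64], [], in_ch
        for out_c in chans:
            layers += [nn.Conv2d(c, out_c, kernel_size=3, stride=2, padding=1),
                       nn.BatchNorm2d(out_c), nn.LeakyReLU()]
            c = out_c
        self.features = nn.Sequential(*layers, nn.Flatten())
        feat_dim = 64 * 2 * 2          # for 32x32 inputs downsampled four times
        self.mu = nn.Linear(feat_dim, z_dim)
        self.log_var = nn.Linear(feat_dim, z_dim)

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.log_var(h)
```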
The DD-plots based on SSIM and LPIPS scores in Figure <ref> qualitatively demonstrate the vulnerability of VAE models in the latent space. However, an quantitative metric of the latent robustness is usually necessary when evaluating the difference among several models. For instance, the image quality score statistics for three different VAE models under attack are illustrated in Figure <ref>, <ref>, <ref> and Figure <ref>, how can we tell the quantitative difference among them? This is just the motivation for us to propose the statistical AADDC. As presented in Table <ref>, we compute the AADDC scores for each model under attack with all black targets on three datasets, with the help of which we can provide a quantitative evaluation on the adversarial latent robustness of the investigated models. If judge them from the latent robust perspective, we get an intuitive conclusion that SWT-VAE performs the best on the MNIST dataset, KLD-VAE the second and MMD-VAE the worst. When calculating the statistical AADDC for the LPIPS curve, a base line (the red dotted) is required to set above them as plotted in the upper sub-figure of Figure <ref>, and similarly a base line under the SSIM curve is set in the lower sub-figure. The associated AADDC scores are then obtained by integrating the area between the measurement curve and the base line. Visualization of Latent Intervention. A straightforward question arises what the adversarial latent looks like and how far away it is from the original one. Unlike adversarial examples for input images, adversarial latent cannot be visualized in an intuitive way. Consider a mini-batch of latent codes Z ∈ R^m × d, where m is the batch size and d is the dimension of the latent, and then we compute the dimension-wise average latent absolute as below: {[ E(| Z_j|) = 1/m∑_i = 1^m | z_ij|; E(| Z_j^adv|) = 1/m∑_i = 1^m | z_ij^adv|; E(| δ _j|) = 1/m∑_i = 1^m | z_ij^adv - z_ij| ].,j = 1,2, ⋯ ,d. With this mean-absolute difference visualized in Figure <ref>∼Figure <ref>, it is shown that an effective attack can be realized with small modification in latent representation. The mean absolute variations are tiny as shown in Figure <ref> at attack intensity ϵ=0.5 while the effectiveness of attacks at this intensity is still significant as shown in Figure <ref> and Figure <ref>. Although the mean absolute latent difference between the adversarial and original are significant at intensity ε=1.0 for all-black targeted attack on the Vanilla-VAE with the MNIST dataset, we have obtained adversarial reconstructions with almost all-black pixels as shown in the bottom row of Figure <ref>. Aware that the all-black pixeled target may be too harsh a choice for attack on the MNIST dataset. It's obvious that the dimension-wise visualization of the latent is impractical when the dimensionality is too high. Thus, we use the t-SNE method <cit.> to map the latent representations to two-dimensional variables for the FMNIST and CelebA dataset, and then visualize them in Figure <ref>. Overall, the following phenomenon can be identified from the figures: (a) The difference between adversarial and the original latent is tiny on FMNIST dataset under both types of attacks. (b) The scattered range of the adversarial latent for both datasets seems to be smaller than the original, which may imply that the attacks make the generated/reconstructed samples less diverse. §.§ Latent Adversarial Training Autoencoder In this section, we show the ability of latent robustness promotion by adversarial training. 
Consider adding an adversarial training loop to a VAE model. After the regular training procedure at each mini-batch, the decoder is optimized for given iteration times with the loss of adversarial reconstructions from original samples, thus achieving more robustness. The pseudo-code is shown as in Algorithm <ref>. We use the PGD attack to generate adversarial latent codes in an un-targeted way, and the SSIM is taken to measure the distance, D as defined in Eq (<ref>), between original reconstructions and adversarial ones. We conduct attack experiments to compare the robustness difference between the original-trained and adversarial-trained models. The results are presented in Figure <ref>. The LPIPS curves encourage us to believe that the model with adversarial training performs significantly more robust under attacks than the regular one. But the SSIM curves of the adversarial-trained model only outperform the regular-trained one at the first phase and then give a worse performance as shown in Figure <ref>. This is because the model was trained with all adversarial latent generated under attacks with an intensity of ε=0.05. As a result, the adversarial-trained model performs more robust under attacks with small or similar intensities but is powerless against attacks with too much bigger intensities. Anyway, this experiment has shown the potential of promotion on latent robustness through adversarial training. It can be expected that there are many other effective adversarial training methods to obtain latent robust VAE models, and we leave it for future work. § DEEPER ANALYSIS ON LATENT ROBUSTNESS §.§ Comparison of Latent Robustness between VAEs and DAEs Conventional autoencoders are of deterministic structure, and the VAE pioneered a variational functional structure. Thanks to this structure, VAEs not only achieve higher robustness to input space perturbations but are also able to generate new reasonable examples. Despite high expectations from the day it was developed, VAE is still very difficult to be used in practical applications. In recent years, many scholars have turned their attention back to autoencoders with deterministic structure <cit.>. Perhaps analyzing and understanding the difference between VAEs and DAEs from the perspective of latent robustness could give us new insights. Whether the VAEs or DAEs are more robust can be observed through attack experiments on them and then evaluate their latent robustness performance. To conduct a fair comparison, we will take experiments on two types of models that share the same framework except the way to generalize the latent codes, that is, the variational and deterministic ways, respectively. Consequently, the DAEs with the same structure as mentioned in section <ref> can help to realize this plan. What we need to do is choose the same regularization in loss functions for the corresponding VAE and DAE model pairs. Experiment settings for attack are the same as that in Section <ref>, and the results under black targets on the MNIST dataset are depicted in Figure <ref>. Both the LPIPS and SSIM-based DD-curves imply that deterministic autoencoders are more robust than variational autoencoders. We also take another two attack experiments based on FasionMNIST and CelebA datasets. The DD-curves are similar to those in Figure <ref>, so we do not plot them here but the statistical AADDC scores are presented in Table <ref>. 
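To make the "same framework except for the way the latent is generated" comparison explicit, the only architectural difference between a VAE and its DAE counterpart can be isolated in the latent head, as in the following illustrative sketch; the feature dimension of the shared convolutional trunk is an assumption.

```python
import torch
import torch.nn as nn

class LatentHead(nn.Module):
    """Variational head: z = mu + sigma * zeta with zeta ~ N(0, I).
    Deterministic head: the encoder outputs z directly, while the training loss keeps
    the same regularization term (e.g., MMD or SWT) as its VAE counterpart."""
    def __init__(self, feat_dim=256, z_dim=10, variational=True):
        super().__init__()
        self.variational = variational
        self.mu = nn.Linear(feat_dim, z_dim)
        self.log_var = nn.Linear(feat_dim, z_dim) if variational else None

    def forward(self, h):
        if self.variational:
            mu, log_var = self.mu(h), self.log_var(h)
            return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.mu(h)   # deterministic: the latent code is used as-is by the decoder
```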
All the statistics give us an insight that deterministic autoencoders perform more robustly in the latent space than variational autoencoders. In this sense, our results further justify another potential advantage of DAE over VAE in terms of latent adversarial robustness. In addition, the robustness differences between VAEs and DAEs tend to decline as the increase of latent dimensionality in view of AADDC. For instance, the dimensionality of z for models on MNIST, FMNIST, and CelebA is set to be 10, 30, and 128, respectively, and the measure gap of the AADDC between VAEs and DAEs seems to be narrowed as it grows. This is quite similar to what happens in the robustness comparison of different VAEs in Figure <ref>, <ref>, <ref>. §.§ Relation between Latent Robustness and Disentanglement Disentangled representation is one of the key pursuits for machine learning <cit.>. A model with disentangling ability can obtain semantic features from the original high-dimensional data. It helps to achieve the interpretability of the representation network. One can also generate new samples or change existing ones toward a demanded style by manipulating some specific dimensional features of the semantic disentangled latent. It is a common sense that there is a trade-off between the reconstruction accuracy and disentanglement. This implies that once we try to improve the disentangling effect of the encoded latent representations, the decoding reconstruction error increases. Additionally, researchers have found that β-TCVAE with larger β is more robust to adversarial input <cit.>. Are there trade-offs among the reconstruction loss, the disentanglement and the latent robustness? To answer the above question, we conduct experiments to attack the β-TCVAE <cit.>. As shown in Figure <ref>, the LPIPS (SSIM) scores grow faster (slower) with a larger weight for the TC term β, which controls the disentangling strength. Though not monotonic as the LPIPS and SSIM, the FID and PSNR scores show a similar trend. Now, we may conclude that the disentangling strength of the latent representation in VAEs does damage the latent adversarial robustness. Besides the DD-plots, we present the statistical AADDS scores in Tab. <ref>, from which a deteriorating trend can be easily found for latent robustness with the increasing disentangle strength β. § CONCLUSION AND DISCUSSION This empirical study investigates robustness issues about the latent space for generative autoencoders. We verify the vulnerability of the variational autoencoders by attacking them from the latent space. Experiments on three types of VAE models trained on MNIST, FasionMNIST, and CelebA datasets show that one can mislead the decoders to reconstruct images quite different from the original or even completely invalid. We also develop the adversarial latent training framework and achieve more robust VAEs. Furthermore, experiments are conducted for latent robust comparison between variational autoencoders and deterministic autoencoders. The results give us a new insight into that the DAEs are more robust in the latent space than VAEs. Finally, we discuss the relationship between disentanglement and the latent robustness of the β-TC-VAE models. The finding is that the promotion of disentanglement may lead decline in latent robustness. To wrap up, we explore several points related to the latent robustness of VAEs, giving certain explanations and insights. We see two important directions for further research. 
The first is a theoretical analysis of the relationship between latent robustness and generation diversity for VAEs: latent-robust generative autoencoders tend to produce more homogenized samples, so there may be a trade-off between latent robustness and generation diversity. The second is an extension of the investigation of adversarial latent robustness to other areas of artificial intelligence, such as natural language processing. The experiments in this paper are all conducted on autoencoders trained on computer vision datasets, but the framework and methodology extend readily to networks in other application domains.
http://arxiv.org/abs/2307.02687v1
20230705230853
On time-periodic solutions to an interaction problem between compressible viscous fluids and viscoelastic beams
[ "Ondřej Kreml", "Václav Mácha", "Šárka Nečasová", "Srđan Trifunović" ]
math.AP
[ "math.AP", "74F10 (Primary), 35B10, 76N06 (Secondary)" ]
In this paper, we study a nonlinear fluid-structure interaction problem between a viscoelastic beam and a compressible viscous fluid. The beam is immersed in the fluid which fills a two-dimensional rectangular domain with periodic boundary conditions. Under the effect of periodic forces acting on the beam and the fluid, at least one time-periodic weak solution is constructed which has a bounded energy and a fixed prescribed mass. Keywords and phrases: fluid-structure interaction, compressible viscous fluid, viscoelastic beam, time-periodic solutions AMS Mathematical Subject classification (2020): 74F10 (Primary), 35B10, 76N06 (Secondary) § THE MODEL Let L,H,T>0 and define Γ:= (0,L), Ω = (0,L)× (-H,H). We denote the horizontal variable by x and the vertical variable by z. The fluid fills the domain Ω and is described by its velocity u:(0,T)×Ω→ℝ^2 and density ρ:(0,T)×Ω→ℝ, which are periodic in both the x and the z direction. The beam is immersed in the fluid and its vertical displacement is given as η:(0,T)×Γ→ℝ, while its graph is denoted by Γ^η(t) := { (x,η(t,x)) : x∈Γ}. In order to work on a fixed domain Ω (note that η does not necessarily have values in [-H,H]), let us define a z-periodic version of η, η̂(t,x):=η(t,x)-2n(t,x)H, where n(t,x)∈ℤ is uniquely determined by the requirement η(t,x)-2n(t,x)H∈ [-H,H). Its graph Γ̂^η(t) is shown in Figure <ref>. The time-space cylinders corresponding to our problem will be denoted as Q_T:=(0,T)×Ω, Γ_T:=(0,T)×Γ. The governing equations for our coupled fluid-structure interaction problem read as follows: The viscoelastic beam equation on Γ_T: η_tt+ η_xxxx - η_txx=-S^ηf_fl·𝐞_2 + f. Here f denotes a given external time-periodic force acting on the viscoelastic beam and f_fl is the force with which the fluid acts on the beam. Moreover, S^η=√(1+|η_x|^2) is the Jacobian of the transformation from Eulerian to Lagrangian coordinates of the beam (i.e. from Γ^η to Γ). The compressible Navier-Stokes equations on ⋃_t∈(0,T){t}×(Ω∖Γ̂^η(t)): ∂_t (ρ𝐮) + ∇· (ρ𝐮⊗𝐮) = -∇ p(ρ) +∇·𝕊(∇u)+ρF, ∂_t ρ + ∇· (ρ𝐮) = 0, where we set the pressure p for simplicity to be p(ρ)=ρ^γ, the viscous stress tensor 𝕊 is given by the Newton rheological law 𝕊(∇u):=μ( ∇u + ∇^τu- ∇·u𝕀) + ζ∇·u𝕀, μ,ζ >0, and F is a given time-periodic force acting on the fluid. The fluid-structure coupling (kinematic and dynamic, resp.)
on Γ_T: η_t (t,x) 𝐞_2 = 𝐮(t,x,η̂(t,x)), f_fl(t,x) = [[(-p(ρ)𝕀+𝕊(∇u))]](t,x,η̂(t,x)) ν^η(t,x), where ν^η=(-η_x,1)/√(1+|η_x|^2) denotes the normal vector on Γ^η facing upwards and [[A]](·,z):= lim_ε→ 0^+(A(·,z-ε)-A(·,z+ε)) represents the jump of the quantity A in the vertical direction. The beam boundary conditions: η is periodic in x and η(t, x)= 0, (t,x)∈ (0,T)×{0,L}. Fluid spatial periodicity: ρ,u are periodic in the x and z directions. Time periodicity: ρ,u, η are periodic in time. § WEAK SOLUTION AND MAIN RESULT The nature of the studied problem enables us to work with two equivalent formulations of the problem. In the original formulation, the domain Ω is fixed and the viscoelastic beam appears inside the domain Ω. However, we may use the z-periodicity of the problem to formulate it on the moving domain Ω^η(t) filled with the fluid, where the top and the bottom of the domain are given by the viscoelastic beam. For a given η(t,x) we introduce an equivalent fluid domain and the corresponding time-space cylinder Ω^η(t):={(x,z): x∈(0,L), η(t,x)<z<η(t,x)+2H }, Q_T^η:=⋃_t∈ (0,T){t}×Ω^η(t), both domains are shown in Figure <ref>. For a set[Here, S represents one of the sets (0,T), Γ, Ω or some of their products.] S=(a_1,a_1+L_1)×…× (a_n,a_n+L_n) where L_1,...,L_n>0 and n∈{1,2,3}, we introduce the spaces of differentiable periodic functions for k ∈ℕ_0 ∪{∞} C_#^k(S):={f ∈ C^k(ℝ^n): f(x_1,…,x_n) = f(x_1+L_1,…,x_n) =...=f(x_1,…,x_n+L_n) for all (x_1,…,x_n)∈ℝ^n }. We define Lebesgue and Sobolev function spaces for any p,q∈[1,∞], k ∈ℕ_0 ∪{∞} as closures in the respective norms W_#^k,p(S):=C_#^∞(S)^·_W^k,p(S). In order to accommodate the boundary conditions (<ref>) we further introduce the spaces C^k_#,0(Γ):={φ∈ C^k_#(Γ): φ(0) = 0 }, C^k_#,0(Γ_T):={φ∈ C^k_#(Γ_T): φ(t,0) = 0 for all t }, for k ∈ℕ_0 ∪{∞}, and the corresponding closure W_#,0^k,p(Γ):=C_#,0^∞(Γ)^·_W^k,p(Γ). Finally, we define L_#^p(0,T;W_#^1,q(Ω)):={f∈ L_#^p(0,T;L_#^q(Ω)): ∇ f∈ L_#^p(0,T;L_#^q(Ω))}, W_#^1,p(0,T;L_#^q(Γ)):={f∈ L_#^p(0,T;L_#^q(Γ)): ∂_t f∈ L_#^p(0,T;L_#^q(Γ))}. As usual, H^k denotes the Sobolev space W^k,2. For a function f∈ C_#^1(Ω) and η∈ C^1_#,0(Γ), we can define the Lagrangian trace on Γ̂^η as γ_|Γ̂^η f(x):=f(x,η̂(x)) and then extend it to a linear and continuous operator γ_|Γ̂^η: H^1_#(Ω)→ H^1/2_#(Γ). Here H^1/2 denotes the Sobolev-Slobodetskii space. Finally, we will denote the two-dimensional space variable y = (x,z). [Weak solution] We say that ρ∈ L_#^∞(0,T; L_#^γ(Ω)), u∈ L_#^2(0,T; H_#^1(Ω)) and η∈ W_#^1,∞(0,T; L^2_#(Γ))∩ L_#^∞(0,T; H_#^2(Γ))∩ H_#^1(0,T; H_#,0^1(Γ)) is a weak solution to (<ref>)-(<ref>) if: * The kinematic coupling γ_|Γ̂^ηu = η_t e_2 holds on Γ_T. * The renormalized continuity equation ∫_Q_Tρ B(ρ)( ∂_t φ +u·∇φ)y t =∫_Q_T b(ρ)(∇·u) φy t holds for all functions φ∈ C_#^∞(Q_T) and any b∈ L^∞ (0,∞) ∩ C[0,∞) such that b(0)=0 with B(ρ)=B(1)+∫_1^ρb(z)/z^2dz. * The coupled momentum equation ∫_Q_Tρu·∂_t φy t + ∫_Q_T(ρu⊗u):∇φy t +∫_Q_Tρ^γ (∇·φ) y t - ∫_Q_T𝕊( ∇u): ∇φy t +∫_Γ_Tη_t ψ_t x t - ∫_Γ_Tη_xxψ_xx x t- ∫_Γ_Tη_txψ_x x t = -∫_Γ_T fψ x t - ∫_Q_TρF·φy t holds for all φ∈ C_#^∞(Q_T) and all ψ∈ C^∞_#,0(Γ_T) such that φ (t,x,η̂(t,x))=ψ(t,x)e_2 on Γ_T. We note that the choice b(ρ) = 0 in (<ref>) recovers the standard weak formulation of the continuity equation. Our main result reads as follows. Let H,L,T,m_0>0 be given and let γ > 1. Let f∈ L_#^2(Γ_T) and F∈ L_#^2(0,T; L_#^∞(Ω)).
Then, there exists at least one weak solution to (<ref>)-(<ref>) in the sense of Definition <ref> such that ∫_Ωρ(t)y=m_0 for almost all t∈ (0,T) and the energy inequality -∫_Q_Tϕ_t(1/2ρ|u|^2 + 1/γ-1ρ^γ) y t- ∫_Γ_Tϕ_t(1/2 |η_t |^2 + 1/2 |η_xx |^2)(t) x t +∫_0^T ∫_Ωϕ𝕊(∇u):∇uy t +∫_0^T ∫_Γϕ | η_tx|^2 x t ≤∫_0^T ∫_Γϕ fη_t x t + ∫_0^T ∫_Ωϕρu·Fy t holds for all ϕ∈ C_#^∞(0,T), ϕ≥ 0. Moreover, sup_t ∈ (0,T)[ ∫_Ω( 1/2ρ |u|^2 + 1/γ-1ρ^γ) y + ∫_Γ(1/2|η_t|^2 + 1/2 |η_xx|^2) x](t) +∫_Q_T𝕊(∇u):∇uy t + ∫_Γ_T|η_tx|^2 x t ≤ C(f,F,Ω,m_0). The proof of this theorem is based on a four-level approximation scheme. Following the approach from <cit.> (see also <cit.>), we decouple the coupled momentum equation into the fluid momentum equation and the structure momentum equation by penalizing the kinematic coupling condition (<ref>). This allows us to deal with these equations separately. Then, we expand the fluid velocity and the structure displacement in finite time-space bases, as was done in <cit.> (note that this is in contrast with the fixed-point approach used in <cit.>). Finally, as is standard in the theory of compressible Navier-Stokes equations, artificial diffusion is added to the fluid continuity equation and an artificial pressure is added to the fluid momentum equation. Several other terms are also added for technical reasons. In order to obtain a weak solution, four limit passages are performed, each based on estimates that differ significantly from limit to limit due to their high sensitivity to the approximation parameters. Unlike the initial value problem, we need to additionally ensure that an energy inequality of the form (<ref>) is satisfied at each approximation level to obtain some important estimates, and for this we need to prove the convergence of the structure kinetic and elastic energies in each of the limits. This part is based on improved structure displacement estimates from <cit.>, adapted to our framework similarly to <cit.>. Throughout the proof, we will work with formulations of the problem both on Ω and on Ω^η(t). As both the fluid velocity u and the density ρ can be represented on Ω^η(t) equivalently, we keep the same notation for u and ρ whenever we shift to the domain Ω^η(t). Let us point out that u is continuous on Γ̂^η(t), so u_W^1,p(Ω^η(t))= u_W^1,p(Ω) for any p∈ [1,∞], while ρ may have a jump on Γ̂^η(t), so we use ρ_L^p(Ω^η(t))= ρ_L^p(Ω) for p∈ [1,∞] only. § DISCUSSION AND LITERATURE OVERVIEW The mathematical theory of interaction problems between incompressible viscous fluids and thin elastic structures (plates or shells) started with the results of Beirao da Veiga <cit.> and Grandmont et al. <cit.>, and has continued to develop over the last two decades; see <cit.> for the existence of weak solutions, <cit.> for the existence of strong solutions and <cit.> for uniqueness. The theory of compressible viscous fluids interacting with plates and shells, on the other hand, started quite recently with the result of Schwarzacher and Breit <cit.>, and continued with <cit.>, where a weak solution was obtained for an interaction between a compressible viscous fluid and a nonlinear thermoelastic plate. Local in time regular solutions were constructed in <cit.>, while the weak-strong uniqueness for such problems was studied in <cit.>.
In the case of heat-conducting fluids, the interaction with an elastic plate was considered in <cit.>, where a weak solution satisfying the energy equality was constructed, and the interaction with a viscoelastic plate was considered in <cit.>, where a strong solution with maximal regularity was constructed. The interaction of heat-conducting fluids and thermoelastic shells with heat exchange was studied in <cit.>, where a weak solution was constructed. The case of a mixture interacting with an elastic structure was studied in <cit.>. A semigroup approach to the well-posedness of the problem of interaction of a linearized compressible fluid with an elastic boundary was presented in <cit.>. Finally, local in time regular solutions to interaction problems between 3D elastic solids and fluids were obtained in <cit.>, while weak solutions were constructed in <cit.>. We also refer the reader to the very recent result <cit.>, where such a problem with allowed contact was studied. Despite all this activity, little attention has been given to time-periodic solutions, or more precisely, to the question of when a fluid-structure interaction model exhibits periodic behaviour under periodic forcing. This question is of considerable importance, since many such systems naturally show periodic behaviour. For example, the heartbeat and the air flow through the trachea are both periodic. It is therefore natural to ask under which conditions such models can be expected to behave periodically. This was first studied by Casanova for an interaction problem between a viscoelastic beam and an incompressible fluid <cit.> in the framework of strong solutions. Quite recently, Schwarzacher and Mîndrilǎ studied the interaction of a linear Koiter shell with an incompressible viscous fluid and obtained the existence of a weak solution for a closed rigid boundary with a no-slip condition in <cit.> and with a dynamic pressure boundary condition in <cit.>. Finally, concerning the purely fluid system, time-periodic weak solutions to the compressible Navier-Stokes system on a fixed domain were constructed in <cit.> for isentropic flows and in <cit.> for the full Navier-Stokes-Fourier system. The main goal of this paper is to tackle this issue in the case when the fluid is compressible. This brings many challenges which do not exist in the incompressible case. The main challenge in the theory of compressible viscous fluids is dealing with the pressure, and our case is no different. The pressure estimates based on the Bogovskii operator are very sensitive to the shape of the domain (and thus to deformations of the beam) and to many other factors, including the dimension. This directly results in the limitations of our result: the fluid is two-dimensional, the beam is viscoelastic, and the fluid domain is periodic in the horizontal and vertical directions, which a priori excludes contact for the beam. The paper is organized as follows. In Section 4 we present a way to obtain a priori estimates assuming the solution is sufficiently smooth. This procedure is split into several steps. In Section 5 we present the approximation scheme used in the proof of Theorem <ref> and prove the existence of a solution to the approximated system. In Section 6 we pass to the limit in the number of time basis functions m →∞ and present uniform estimates for the arising solution independent of n. In Section 7 we pass to the limit in the number of spatial basis functions n →∞, deduce uniform bounds independent of ε and introduce the coupled momentum equation.
In Section 8, we perform the limit with the penalization and artificial density diffusion parameter → 0 and deduce uniform bounds independent of δ. Finally, in Section 9 we pass to the limit with δ→ 0, thus removing the artificial pressure term and finishing the proof of Theorem <ref>. § A PRIORI ESTIMATES FOR SMOOTH SOLUTIONS Before we start, let us introduce the energy associated to the studied system as E(t) := ∫_Ω(1/2ρ|u|^2 + 1/γ-1ρ^γ)(t)y + ∫_Γ(1/2 |η_t |^2 + 1/2 |η_xx |^2)(t) x and we emphasize that replacing Ω with Ω^η(t) yields the same quantity, see Remark <ref>. Further, we denote ℰ := sup_(0,T) E. The goal of this section is to show that smooth solutions to the problem (<ref>)-(<ref>) satisfies the inequality (<ref>). This will serve as base in the forthcoming sections, where approximate problems with similar properties will be studied. We note that since we assume in this section that the solution is smooth, we are allowed to consider unbounded functions b in (<ref>). §.§ Part I - estimates of ∇u and η_tx In order to obtain the estimates, we sum up (<ref>) with b(ρ)=ρ^γ and φ=1, (<ref>) with b(ρ)=0 and φ=1/2|u|^2 and (<ref>) with (φ,ψ)=(u,η_t) to obtain ∫_Q_T𝕊(∇u):∇uy t + ∫_Γ_T |η_tx|^2 x t=∫_Γ_Tfη_t x t+∫_Q_Tρu·Fy t and thus ∫_Q_T𝕊(∇u):∇uy t+ c(L)η_t _L^2(0,T;H^1(Γ))^2 ≤∫_Q_T𝕊(∇u):∇uy t + ∫_Γ_T |η_tx|^2 x t = ∫_Γ_Tfη_t x t +∫_Q_Tρu·Fy t ≤f_L^2(Γ_T)η_t _L^2(Γ_T) + ρ_L^∞(0,T; L^p(Ω))u_L^2(0,T; L^q(Ω))F_L^2(0,T;L^∞(Ω)) ≤ C(f,L) + c(L)/2η_t _L^2(0,T; H^1(Γ))^2+C(F)ρ_L^∞(0,T; L^p(Ω))u_L^2(0,T; L^q(Ω)) for any p>1 and q=p/p-1 by the Poincaré inequality for η. We have just deduced that ∫_Q_T𝕊(∇u):∇uy t + η_t _L^2(0,T;H^1(Γ))^2 ≤ C+Cρ_L^∞(0,T; L^p(Ω))u_L^2(0,T;L^q(Ω)). From here onward, we omit the dependence of constants on Ω,f,F, since they are given and do not depend on functions ρ,u,η. Next, we shift to the moving domain Ω^η(t) given in (<ref>). We have η_t e_2_L^2(0,T; H^1(Ω^η(t))) = 2H η_t _L^2(0,T; H^1(Γ)). Due to the kinematic coupling, we have that u-ηe_2=0 on Γ^η(t) and Γ^η(t)+2H, so by using the Korn identity on Ω^η(t) ∇u - ∇(η_t e_2)_L^2(Q_T^η)^2 + ∇·(u-η_te_2)_L^2(Q_T^η)^2 = 2𝔻(u -η_te_2)_L^2(Q_T^η)^2 ≤ C𝕊(∇u-∇(η_te_2)_L^2(Q_T^η)^2 ≤ C ( ∫_Q_T^η𝕊(∇u):∇uy t + η_t _L^2(0,T;H^1(Γ))^2 ), where C only depends on μ,ζ. The Poincaré inequality yields u-η_te_2_H^1(Ω^η(t))^2 ≤ C∇u-∇(η_te_2)_L^2(Ω^η(t))^2. Note that the constant C is independent of η – this follows directly from the proof of the inequality for the steady domain <cit.>. We use η_t_L^∞(Γ)≤ C η_t_H^1(Γ) and u-η_te_2=0 on Γ^η(t)∪Γ^η(t)+2H to conclude u_L^2(0,T; L^q(Ω^η(t)))^2≤ 2u-η_te_2_L^2(0,T; L^q(Ω^η(t)))^2+2η_te_2 _L^2(0,T; L^q(Ω^η(t)))^2 ≤ Cu-η_te_2_L^2(0,T; H^1(Ω^η(t)))^2+ Cη_te_2 _L^2(0,T; H^1(Ω^η(t)))^2 ≤ C ∫_Q_T^η𝕊(∇u):∇uy t + Cη_t_L^2(0,T; H^1(Γ))^2 for any 1<q<∞. We set κ := min{1/20, (γ-1)/5γ, 1/5(γ-1)}, and p = p(κ) := 2γ^2/2γ^2 - κ(γ-1), so we have θ:=γ(p-1)/p(γ-1) = κ/2γ. Then for any 1 < p < p we have for some θ < θ ρ_L^∞(0,T; L^p(Ω^η(t)))≤ρ_L^∞(0,T; L^1(Ω^η(t)))^1-θρ_L^∞(0,T; L^γ(Ω^η(t)))^θ≤ C m_0^1-θℰ^γθ≤ C(1 + ℰ^κ/2). Since ρ_L^∞(0,T; L^p(Ω^η(t)))=ρ_L^∞(0,T; L^p(Ω)), ∫_Q_T^η𝕊(∇u):∇uy t = ∫_Q_T𝕊(∇u):∇uy t, the inequalities (<ref>), (<ref>) and (<ref>) yield u_L^2(0,T;H^1(Ω))^2+ η_t _L^2(0,T;H^1(Γ))^2 ≤ C(κ)(1+ℰ^κ), for the original domain, and consequently u_L^2(0,T;L^q(Ω))^2 ≤ C(κ,q)(1+ℰ^κ) for all q > 1. 
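For the reader's convenience, we record the elementary algebra behind the identity θ = κ/(2γ) used above. For 1<p<γ, interpolation of Lebesgue norms gives ρ_L^p≤ρ_L^1^1-θρ_L^γ^θ with 1/p = (1-θ) + θ/γ, that is, θ = γ(p-1)/(p(γ-1)), and θ is increasing in p. For the particular choice p = 2γ^2/(2γ^2 - κ(γ-1)) one computes p-1 = κ(γ-1)/(2γ^2 - κ(γ-1)), hence (p-1)/p = κ(γ-1)/(2γ^2) and therefore θ = γ(p-1)/(p(γ-1)) = κ/(2γ), which is exactly the identity used above. In particular, any smaller exponent 1<p<2γ^2/(2γ^2 - κ(γ-1)) corresponds to a strictly smaller θ, which is how the interpolation is applied in the estimates of the density.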
§.§ Part II - circular estimates In order to deduce the energy inequality, we sum up (<ref>) with b(ρ)=ρ^γ and φ=χ_[s,t], (<ref>) with b(ρ)=0 and φ=χ_[s,t]1/2|u|^2 and (<ref>) with (φ,ψ)=(χ_[s,t]u,χ_[s,t]η_t) to obtain E(t) +∫_s^t∫_Ω𝕊(∇u):∇uyτ + ∫_s^t∫_Γ |η_tx|^2 x τ = E(s) + ∫_s^t ∫_Γ f η_t x τ+ ∫_s^t∫_Ωρu·Fyτ ≤ E(s)+C(κ)(1+ℰ^κ) ≤ E(s)+C(κ)+κℰ by (<ref>), (<ref>), (<ref>) and the Young inequality. We integrate again over (0,T) with respect to variable s and then we take a supremum in the variable t over (0,T) on the left hand side to obtain ℰ≤ C_0(1+∫_0^T E(s) s). The constant C_0 depends on the choice of κ, however we recall that κ is already fixed. Our goal in the remaining part of the estimates is to show ∫_0^T E(s) s ≤δ_0ℰ + C(δ_0) for some δ_0∈ (0,1/C_0). §.§ Part III - estimate of η_xx In this section we need the following interpolation inequality. Let g ∈ H^1(0,T;L^2(Γ)) ∩ L^2(0,T;H^1(Γ)). Then for any α∈ (0,1) it holds g ∈ H^α(0,T;H^1-α(Γ)) and there exists a constant C > 0 independent of g such that g_H^α(0,T;H^1-α(Γ))≤ C g_H^1(0,T;L^2(Γ))^αg_L^2(0,T;H^1(Γ))^1-α. First, note that g can easily be extended to ℝ^2 (also denoted as g) so that g_H^1(ℝ;L^2(ℝ))≤ C g_H^1(0,T;L^2(Γ)), g_L^2(ℝ;H^1(ℝ))≤ C g_L^2(0,T;H^1(Γ)). Denote as ℱ_t, ℱ_x and ℱ_t,x the Fourier transform w.r.t. variables t and x and both t,x, respectively. One has: g_H^α(ℝ;H^1-α(ℝ))^2 ≤ C∫_ℝ (1+σ^2)^α || ℱ_t(g)||_H^1-α(ℝ)^2  dσ ≤ C∫_ℝ (1+σ^2)^α∫_ℝ (1+ξ^2)^1-α |ℱ_x(ℱ_t(g))|^2  dξ dσ = C∫_ℝ^2(1+σ^2)^α (1+ξ^2)^1-α |ℱ_t,x (g)|^2   dξ dσ ≤ C(∫_ℝ^2(1+σ^2) |ℱ_t,x (g)|^2   dξ dσ)^2α(∫_ℝ^2(1+ξ^2) |ℱ_t,x (g)|^2   dξ dσ)^2(1-α) = Cg_H^1(ℝ;L^2(ℝ))^2αg_L^2(ℝ;H^1(ℝ))^2(1-α), where we used Hölder's inequality with indexes p=1/α and q=1/1-α. We use test functions (φ, ψ) = (ηe_2,η) in (<ref>), we observe that ∇·(ηe_2) = 0 and ∫_Γ_Tη_txη_x x t= 1/2∫_Γ_T (η_x^2)_t x t = 0. Consequently, η_xx^2_L^2(Γ_T) = ∫_Γ_T| η_xx |^2 x t = ∫_Q_Tρu·η_t e_2y t + ∫_Q_Tρu⊗u :∇(ηe_2)y t - ∫_Q_T𝕊(∇u):∇ (ηe_2)y t +∫_Q_Tρηe_2·Fy t + ∫_Γ_T |η_t |^2 x t +∫_Γ_T f η x t. We fix 1<p<p, denote q = p/p-1 and estimate the terms on the right hand side as follows. First, |∫_Q_Tρu·η_t e_2y t| ≤ Cρ_L^∞(0,T; L^p(Ω))u_L^2(0,T;L^q(Ω))η_t_L^2(0,T;L^∞(Γ))≤ C(κ)(1+ℰ^3κ/2) by using Sobolev embedding, (<ref>), (<ref>) and (<ref>). In order to estimate the convective term, we utilize the following estimate η_x_L^∞(0,T;L^3q(Γ))≤ Cη_x_H^1/2+δ(0,T;H^1/2-δ(Γ))≤ Cη_x_H^1(0,T; L^2(Γ))^1/2+δη_x_L^2(0,T; H^1(Γ))^1/2-δ ≤ C(η_x_L^2(0,T; L^2(Γ))^1/2+δ+η_tx_L^2(0,T; L^2(Γ))^1/2+δ)η_x_L^2(0,T; H^1(Γ))^1/2-δ =C η_x_L^2(0,T; H^1(Γ))+Cη_tx_L^2(0,T; L^2(Γ))^1/2+δη_x_L^2(0,T; H^1(Γ))^1/2-δ ≤ C(η_x_L^2(0,T; H^1(Γ)) + η_tx_L^2(0,T; L^2(Γ))). Here δ>0 is sufficiently small, we have used Sobolev embedding, Lemma <ref> and the Young inequality for exponents (1/2+δ)^-1 and (1/2-δ)^-1. We use this estimate to write |∫_Q_Tρu⊗u :∇(ηe_2)y t | ≤ Cρ_L^∞(0,T; L^p(Ω)u_L^2(0,T;L^3q(Ω))^2 η_x_L^∞(0,T;L^3q(Γ)) ≤ C(κ)(1+ℰ^3κ/2 ) (η_x_L^2(0,T; H^1(Γ)) + η_tx_L^2(0,T; L^2(Γ))) ≤ C(κ)( 1+ℰ^3κ) + 1/8η_xx^2_L^2(Γ_T), where we have used again (<ref>), (<ref>), (<ref>), and the Young inequality. The viscous term is estimated by |∫_Q_T𝕊(∇u):∇ (ηe_2)y t| ≤ C𝕊(∇u)_L^2(Q_T)η_x _L^2(0,T;L^2(Γ)) ≤ C𝕊(∇u)_L^2(Q_T)^2+1/8η_xx_L^2(Γ_T)^2 ≤ C(κ)( 1+ℰ^κ) + 1/8η_xx_L^2(Γ_T)^2 using (<ref>). We also use (<ref>) directly to estimate ∫_Γ_T |η_t|^2 x t ≤ C(κ)( 1+ℰ^κ). 
Finally, |∫_Q_Tρηe_2·Fy t| ≤ Cρ_L^∞(0,T; L^1(Ω))η_L^2(0,T;L^∞(Γ))F_L^2(0,T;L^∞(Ω))≤ C + 1/8η_xx_L^2(Γ_T)^2 and |∫_Γ_T f η x t | ≤f_L^∞(Γ_T)η_L^1(Γ_T)≤ C + 1/8η_xx_L^2(Γ_T)^2 by using the Poincaré inequality twice together with the boundary condition (<ref>). All the estimates together with (<ref>) yield ∫_Γ_T| η_xx |^2 x t≤ C(κ)(1+ℰ^3κ). §.§ Part IV - density/pressure estimates Denote the Bogovskii operator as ℬ_Ω: L_0^p(Ω) → W_0^1,p(Ω). This operator satisfies ∇·ℬ_Ω[f] = f, where L_0^p(Ω):={f∈ L^p(Ω): ∫_Ω f = 0} and W^1,p_0(Ω):={f∈ W^1,p(Ω): f_∂Ω=0}. Moreover, ℬ_Ω[f]_W^1,p(Ω)≤ C f_L^p(Ω). Throughout the rest of this section, we will repeatedly use the following estimate. For 0<α<1/2, we have ℬ_Ω[ρ^α-∫_Ωρ^αy]_L^∞(Ω)≤ Cℬ_Ω[ρ^α-∫_Ωρ^αy]_W^1,1/α(Ω)≤ C ρ^α_L^1/α(Ω) =C m_0^α. We cannot use ℬ_Ω[ρ^α-∫_Ωρ^α] as a test function φ in (<ref>) since its trace on Γ^η is not regular enough in general. Therefore, we split the procedure into estimates near the viscoelastic structure and estimates in the interior of the fluid domain. To this end we fix 0<h<H/2 and we emphasize that constants appearing in the calculations below may depend on h. We shift to the moving domain Ω^η(t) and we deal with the interior estimates first. Note that the function ℬ_Ω[ρ^α-∫_Ωρ^αy] shifted to Ω^η(t) does not vanish on its boundary Γ^η(t) and Γ^η(t)+2H. For that reason, we define a cut-off function ϕ_h(t,x,z):=z-η(t,x)/h, for η(t,x)<z<η(t,x)+h, 1, for η(t,x)+h<z<η(t,x)+2H-h, 2H+η(t,x)-z/h, for η(t,x)+2H-h<z<η(t,x)+2H, and φ_h:=ϕ_hℬ_Ω[ρ^α-∫_Ωρ^αy], where 0<α:=min{2/5,γ-1/2} is fixed from now on. We emphasize that this choice of α ensures α < 1/2, so we can use the estimate (<ref>). Moreover due to (<ref>) it holds 3/2κ(γ-1) < α < γ - 1-κγ, which will be important later. We test the coupled momentum equation (<ref>) by (φ_h,0) to obtain ∫_Q_T^ηρ^γ+αϕ_hy t = ∫_Q_T^ηρ^γ(∫_Ω^η(t)ρ^α(t)y)ϕ_hy t - ∫_Q_T^ηρ^γ(ℬ_Ω[ρ^α-∫_Ωρ^αy] ·∇ϕ_h )y t - ∫_Q_T^ηρu·∂_t φ_h y t - ∫_Q_T^ηρu⊗u : ∇φ_h y t + ∫_Q_T^η𝕊(∇u): ∇φ_hy t - ∫_Q_T^ηρF·φ_hy t. We proceed to bound the terms on the right-hand side. Notice that ∫_Ω^η(t)ρ^α(t)y≤(∫_Ω^η(t)ρ(t)y)^α |Ω^η(t)|^1-α≤ Cm_0^α and therefore ∫_Q_T^ηρ^γ(∫_Ω^η(t)ρ^α(t)y)ϕ_h y t≤ Cℰ m_0^α. Moreover, |∫_Q_T^ηρ^γℬ_Ω[ρ^α-∫_Ωρ^αy] ·∇ϕ_h |y t ≤ C ∫_0^T ρ^γ_L^1(Ω^η(t))ℬ_Ω[ρ^α-∫_Ωρ^α x]_L^∞(Ω) (1 + η_x _L^∞(Γ)) t ≤ C ρ^γ_L^∞(0,T;L^1(Ω^η(t)))m^α(1 + η_L^2(0,T;H^2(Γ))) ≤ C(κ)(ℰ^1+3κ/2). In order to estimate the third term on the right hand side of (<ref>), we fix 1 < p < p and q > 1 such that 1/γ+1/q+1/p=1. Since the Bogovskii operator commutes with the derivative with respect to time, we deduce ∂_t φ_h = ϕ_h ∂_t ℬ_Ω[ρ^α-∫_Ω^η(t)ρ^αy] + ∂_t ϕ_h ℬ_Ω[ρ^α-∫_Ω^η(t)ρ^αy] = ϕ_h ℬ_Ω[∂_t ( ρ^α-∫_Ω^η(t)ρ^αy)] +∂_t ϕ_h ℬ_Ω[ρ^α-∫_Ω^η(t)ρ^αy]. The continuity equation implies ∂_t ρ^α = - ∇· (ρ^αu) + (1 - α)ρ^α∇·u which is used to estimate ℬ_Ω[∂_t ρ^α-∂_t∫_Ω^η(t)ρ^αy]_L^2(0,T;L^p(Ω^η(t))) = ℬ_Ω[ ∇· ( ρ^αu) + (α-1)ρ^α∇·u - (α-1)(∫_Ω^η(t)ρ^α∇·uy)] _L^2(0,T;L^p(Ω^η(t))) ≤ρ^αu_L^2(0,T;L^p(Ω^η(t))) + C ℬ_Ω[ρ^α∇·u] _L^2(0,T;L^p(Ω^η(t))) ≤ρ^αu_L^2(0,T;L^p(Ω^η(t))) + C ρ^α∇·u_L^2(0,T;L^r(Ω^η(t))) ≤ρ^α_L^∞(0,T;L^γ/α(Ω^η(t)))u_L^2(0,T;L^pγ/γ-α p(Ω^η(t))) + C ρ^α_L^∞(0,T;L^γ/α(Ω^η(t)))∇·u_L^2(Q_T^η) ≤ C(κ)(1+ℰ^α/γ+κ/2), where r = max{1,2p/2+p}. Since ∂_t ϕ_h = -1/hη_t on the set where it is not zero, it holds that |∫_Q_T^ηρu·∂_t φ_hy t| ≤ρ_L^∞(0,T;L^γ(Ω^η(t)))u_L^2(0,T;L^q(Ω^η(t)))∂_t φ_h _L^2(0,T;L^p(Ω^η(t))) ≤ C(κ)(1+ ℰ^1/γ+κ/2) ( ϕ_h_L^∞(Q_T^η)ℰ^α/γ+κ/2 + η_t_L^2(0,T;L^p(Γ)) m_0^α) ≤ C(κ)(1+ ℰ^1/γ+α/γ+κ). 
We continue with the fourth term on the right hand side of (<ref>). Here we take q = 2γ/γ-1-α and deduce |∫_Q_T^ηρu⊗u : ∇φ_hy t| ≤ρ_L^∞(0,T;L^γ(Ω^η(t)))u_L^2(0,T;L^q(Ω^η(t)))^2 ∇φ_h_L^∞(0,T;L^γ/α(Ω^η(t))) ≤ C(κ)ℰ^1/γ+κ( ∇ℬ_Ω[ρ^α-∫_Ωρ^αy]_L^∞(0,T;L^γ/α(Ω^η(t))) + ∇ϕ_h_L^∞(0,T;L^γ/α(Ω^η(t))) m_0^α) ≤ C(κ)(1+ℰ^1/γ+κ)( ρ^α_L^∞(0,T;L^γ/α(Ω^η(t))) + 1 + η_x _L^∞(0,T;L^γ/α(Γ) )) ≤ C(κ)(1+ℰ^1/γ+κ)( ρ^α_L^∞(0,T;L^γ/α(Ω^η(t))) + 1+η_x_L^2(0,T; H^1(Γ)) + η_tx_L^2(0,T; L^2(Γ))) ≤ C(κ)(1+ℰ^1/γ+κ)(1+ℰ^α/γ+ ℰ^3κ/2) ≤ C(κ)(1 + ℰ^1+α/γ+κ+ ℰ^1/γ+5κ/2) by (<ref>) and (<ref>). The elliptic term satisfies |∫_Q_T^η𝕊(∇u): ∇φ_hy t| ≤𝕊(∇u)_L^2(0,T;L^2(Ω^η(t)))∇φ_h_L^2(0,T;L^γ/α(Ω^η(t))) ≤ C(κ)(1+ℰ^κ/2)( ∇ℬ_Ω[ρ^α-∫_Ωρ^αy]_L^2(0,T;L^γ/α(Ω^η(t))) + ∇ϕ_h_L^2(0,T;L^γ/α(Ω^η(t))) m_0^α) ≤ C(κ)(1+ℰ^κ/2)( ρ^α_L^∞(0,T;L^γ/α(Ω^η(t))) + (1+ η_x _L^2(0,T;L^γ/α(Γ) )) ) ≤ C(κ)(1+ℰ^κ/2)( 1+ ρ^α_L^∞(0,T;L^γ/α(Ω^η(t))) + η_xx_L^2(0,T;L^2(Γ) )) ≤ C(κ)(1+ℰ^α/γ+κ/2 + ℰ^2κ). Finally, |∫_Q_T^ηρF·φ_hy t| ≤ Cρ_L^∞(0,T;L^1(Ω^η(t)))F_L^2(0,T;L^∞(Ω^η(t)))φ_h_L^∞(Q_T^η)≤ Cm_0^1+α≤ C. We observe that due to (<ref>) and (<ref>) the largest power of ℰ in all of the above estimates is ℰ^1+3κ/2. We combine these estimates to get ∫_0^T ∫_{η+h<z<η+2H-h}ρ^γ+αy t≤∫_Q_T^ηρ^γ+αϕ_h y t≤ C(κ)(1+ℰ^1+3κ/2), which then gives us by the interpolation of Lebesgue spaces (∫_0^T ∫_{η+h<z<η+2H-h}ρ^γy t)^1/γ ≤( ∫_0^T ∫_{η+h<z<η+2H-h}ρ^γ+αy t)^θ/γ+α( ∫_0^T ∫_{η+h<z<η+2H-h}ρy t)^1-θ ≤ C(κ)(1+ℰ^1+3κ/2)^θ/γ+αm_0^1-θ, where θ=(γ-1)(γ+α)/(γ+α-1)γ. The choice of κ and α which satisfy (<ref>) and (<ref>) ensures that (1+3κ/2)γθ/γ + α = (1+3κ/2)γ-1/γ+α-1 <1. We define κ':= 1 - (1+3κ/2)γ-1/γ+α-1 which yields ∫_0^T ∫_{η+h<z<η+2H-h}ρ^γy t ≤ C(κ)(1+ℰ^1-κ'). Next, we deal with the near boundary estimates. Recall that we have fixed 0<h<H/2. This time we define φ_h(t,x,z):= z-η(t,x), for η(t,x)<z<η(t,x)+h, -h/H-h(z-(η(t,x)+h))+h, for η(t,x)+h<z<η(t,x)+2H-h, z-(η(t,x)+2H), for η(t,x)+2H-h<z<η(t,x)+2H. Note that for fixed (t,x), φ_h(t,x,z) is piecewise linear in the z variable with slope equal to 1 near the boundary of the domain. We choose (φ,ψ)=(φ_he_2,0) as test functions in (<ref>) to obtain ∫_0^T ∫_{η<z<η+h}∪{η+2H-h<z<η+2H}ρ^γy t = h/H-h∫_0^T ∫_{η+h<z<η+2H-h}ρ^γy t-∫_Q_T^ηρu·∂_t (φ_he_2)y t - ∫_Q_T^ηρu⊗u : ∇ (φ_he_2) y t + ∫_Q_T^η𝕊(∇u): ∇ (φ_he_2)y t - ∫_Q_T^ηρF·(φ_he_2)y t . We use (<ref>) to bound the first term on the right hand side. In order to bound the remaining terms, we use similar estimates as in the case of the interior estimates. In fact, the estimates are now more simple as there are no terms with the Bogovskii operator and the derivatives act directly on the function φ_h and consequently on η. Therefore we obtain ∫_0^T ∫_{η<z<η+h}∪{η+2H-h<z<η+2H}ρ^γy t ≤ C(κ)(1+ℰ^1-κ')+C(κ)(1+ ℰ^1/γ+α/γ+κ) +C(κ)(1 + ℰ^1+α/γ+κ+ ℰ^1/γ+5κ/2) +C(κ)(1+ℰ^α/γ+κ/2 + ℰ^2κ) ≤ C(κ)(1+ℰ^1-κ”), where κ” := min{κ',1-κ-1+α/γ,1-1/γ - 5κ/2}. The conditions (<ref>) and (<ref>) ensure that κ” > 0. We sum up (<ref>) and (<ref>) and we go back to Ω to finally deduce ∫_Q_Tρ^γy t ≤ C(κ)(1+ℰ^1-κ”), where κ and κ” are related through (<ref>). §.§ Part V - closing the estimates We notice that for q = 2γ/γ-1 ∫_Q_Tρ |u|^2y t≤ C ρ_L^∞(0,T;L^γ(Ω))u_L^2(0,T;L^q(Ω))^2 ≤ C(κ)(1+ℰ^1/γ+κ). Since 1/γ + κ < 1-κ” we finally obtain by previous estimates ∫_0^T E(s) s ≤ C(κ)(1+ℰ^1-κ”) ≤ C(δ_0)+δ_0 ℰ for any δ_0 > 0. This together with (<ref>) yields ℰ≤ C_0(1 + ∫_0^T E(s) s) ≤ C_0(1 + δ_0ℰ + C(δ_0)) and, consequently, ℰ≤ C, where C depends on f,𝐅,m_0,L,H, h and the choice of κ. 
However, we can choose h = H/4, and the choice of κ depends only on the value of γ so the constant C in the end depends only on f,𝐅,m_0,γ, L and H, i.e. the given data and parameters of the problem. § APPROXIMATE DECOUPLED PROBLEM We introduce the orthogonal basis of L_#^2(0,T) denoted by {τ_i(t)}_i∈ℕ∪{0}, more precisely we set for k ∈ℕ∪{0} τ_2k(t)=cos(2π kt/T), τ_2k+1(t)=sin(2π kt/T). We denote by {s_i(x)}_i∈ℕ the orthogonal basis of H^1_#,0(Γ)∩ H_#^2(Γ) and by {f_i(x,z)}_i∈ℕ the orthogonal basis of H_#^1(Ω). We define finite-dimensional spaces 𝒫_n,m^str := span{s_i(x)τ_j(t)}_1≤ i≤ n, 0≤ j≤ 2m, 𝒫_n,m^fl := span{f_i(x,z)τ_j(t)}_1≤ i≤ n,0≤ j≤ 2m. We fix m,n ∈ℕ, we introduce parameters > 0 and δ > 0, and we fix a ≥ 5. Here, denotes the artificial diffusion in the continuity equation, but also denotes the penalization parameter between the trace of the fluid velocity field on the viscoelastic beam and the velocity of the beam itself. The parameter δ then denotes an artificial pressure coefficient δρ^a in the momentum equation and it appears in other artificial terms which help us to get good estimates at the beginning of the proof but have to disappear from the equations later. We are ready to present the approximate decoupled and penalized problem which is the starting point of our existence proof. We fix β∈ (0,1), our goal is to find ρ∈ C_#^0,β(0,T; C_#^2,β(Ω))∩ C_#^1,β(0,T; C_#^0,β(Ω)), u∈𝒫_n,m^fl and η∈𝒫_n,m^str which satisfy the following identities. * The structure momentum equation ∫_Γ_Tη_t ψ_t x t - ∫_Γ_Tη_xxψ_xx x t -∫_Γ_Tη_txψ_x x t -∫_Γ_Tη_t - v·e_2/εψ x t=-∫_Γ_Tfψ x t holds for all ψ∈𝒫_n,m^str, where v=γ_|Γ̂^ηu. * The damped continuity equation ∂_t ρ + ∇· (ρu) -εΔρ +ερ=ε M, complemented with periodic boundary conditions for ρ holds in the classical sense in Ω, where M = m_0/|Ω|. * The fluid momentum equation δ∫_Q_Tu·∂_tφy t + ∫_Q_Tρu·∂_t φy t + ∫_Q_Tρu⊗u:∇φy t + ∫_Q_T (ρ^γ+δρ^a) ∇·φy t - ∫_Q_T𝕊(∇u):∇φy t -δ∫_Q_T |u|^2 u·φy t-ε∫_Q_T∇ρ⊗φ:∇uy t + ε/2∫_Q_T(M-ρ)u·φy t -∫_Γ_Tv-η_te_2 /ε·ψ x t=-∫_Q_TρF_δ·φy t, holds for all φ∈𝒫_n,m^fl, where ψ=γ_|Γ̂^ηφ and v=γ_|Γ̂^ηu. Here F_δ denotes a smooth approximation of F. §.§ Uniform estimates We derive the uniform estimates for solutions to the approximate problem (<ref>)-(<ref>). We choose ψ=η_t in (<ref>), multiply (<ref>) with γ/γ-1ρ^γ-1, then δ a/a-1ρ^a-1 and 1/2|u|^2 and finally choose φ=u in (<ref>), and then sum up these identities to obtain ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t + εγ∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t +εδ a/a-1∫_Q_Tρ^ay t + 1/ε∫_Γ_T |v - η_te_2|^2 x t = ∫_Γ_T fη_t x t + ∫_Q_Tρu·F_δy t+ ε∫_Q_T Mγ/γ-1ρ^γ-1y t+εδ∫_Q_T Ma/a-1ρ^a-1y t ≤f_L^2(Γ_T)η_t_L^2(Γ_T)+ Cρ_L^a(Q_T)u_L^4(Q_T)F_δ_L^∞(Q_T) + εγ/4(γ-1)ρ_L^γ(Q_T)^γ+ εδ a/4(a-1)ρ_L^a(Q_T)^a+C(ε,δ) ≤ C(ε,δ)+1/2∫_Γ_T|η_tx |^2 x t + εγ/4(γ-1)ρ_L^γ(Q_T)^γ+εδ a/2(a-1)ρ_L^a(Q_T)^a+δ/2u_L^4(Q_T)^4, where we used ρ_L^a(Q_T)u_L^4(Q_T)F_δ_L^∞(Q_T)≤ Cρ_L^a(Q_T)u_L^4(Q_T) ≤εδ a/4(a-1)ρ_L^a(Q_T)^a+δ/2u_L^4(Q_T)^4+ C(ε,δ) which follows from the Young inequality. Some terms on the right hand side of (<ref>) might be absorbed in the left hand side and thus we deduce ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t+ εγ∫_Q_Tρ^γ-2|∇ρ|^2y t+ εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t+εδ a/a-1∫_Q_Tρ^ay t + 1/ε∫_Γ_T |v - η_te_2|^2 x t ≤ C(ε,δ). Next, we integrate (<ref>) over Ω to deduce d/dt∫_Ωρ(t)y + ∫_Ωρ(t)y = m_0, which yields the only time-periodic solution ∫_Ωρ(t)y = m_0. 
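Let us briefly spell out the elementary argument behind the last claim. Writing m(t):=∫_Ωρ(t)y for the total mass, integrating the damped continuity equation over the spatially periodic domain Ω makes the convective and diffusive terms vanish, and since ε∫_Ω My = ε m_0 we are left with the linear ODE m'(t)+ε m(t)=ε m_0. Its general solution is m(t)=m_0+(m(0)-m_0)e^-ε t, and the requirement of T-periodicity m(0)=m(T) forces (m(0)-m_0)(1-e^-ε T)=0; since ε,T>0 this gives m(0)=m_0 and hence m(t)=m_0 for all t, which is the stated identity.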
Further estimates of density are deduced by the L^p - L^q theory for parabolic equations applied to the continuity equation (<ref>). To this end, we estimate the term ∇· (ρu) = ρ∇·u + u·∇ρ in L^p(0,T,L^q(Ω)) using the information we already have. The term ρ∇·u is easy, as we have bounds for ρ∈ L^a(Q_T) and ∇u∈ L^2(Q_T). For the other term we use the bound u∈ L^4(Q_T) and ∇ρ∈ L^2(Q_T), where the latter follows from a straightforward manipulation with the continuity equation. Hence, we end up with ∂_t ρ_L^p(0,T;L^q(Ω))+Δρ_L^p(0,T;L^q(Ω))≤ C(,δ) for some p,q∈ (1,2), more specifically one can take p=q=4/3. Finally, we choose ψ=η in (<ref>) to obtain ∫_Γ_T |η_xx |^2 x t = 1/ε∫_Γ_Tv·e_2 η x t+∫_Γ_T |η_t |^2 x t+ ∫_Γ_T fη x t ≤ C(ε,δ)+1/2∫_Γ_T |η_xx |^2 x t. To sum up, we have the following set of estimates independent of m,n ∈ℕ. η_tx_L^2(Γ_T) ≤ C(ε,δ), u_L^4(Q_T) ≤ C(ε,δ), u_L^2(0,T;H^1(Ω)) ≤ C(ε,δ), u_L^2(0,T;L^p(Ω)) ≤ C(ε,δ,p), for any p∈(1,∞), ρ_L^a(Q_T) ≤ C(ε,δ), ∂_t ρ_L^p(0,T;L^q(Ω))+Δρ_L^p(0,T;L^q(Ω)) ≤ C(ε,δ,p,q), for some p,q∈(1,2), η_L^2(0,T;H^2(Γ)) ≤ C(ε,δ). §.§ Solution to the approximate problem Assume f∈ L^2_#(Γ_T), ũ∈𝒫_n,m^fl, and η̃∈𝒫_n,m^str are given and let ṽ=γ_|Γ̂^η̃ũ (or equivalently ṽ(t,x)=ũ(t,x,η̃(t,x))). Then, the following problem ∫_Γ_Tη_ttψ x t + ∫_Γ_Tη_xxψ_xx x t +∫_Γ_Tη_txψ_x x t +∫_Γ_Tη_t - ṽ·e_2/εψ x t=∫_Γ_Tfψ x t for all ψ∈𝒫_n,m^str and all t ∈ (0,T) has a unique solution η∈𝒫_n,m^str. Moreover, the mapping (ũ,η̃)↦η is compact from 𝒫_n,m^fl×𝒫_n,m^str to 𝒫_n,m^str. The idea is to solve (<ref>) in η_t instead of η. Note that, due to time periodicity of η, function η_t must be mean-value free in time and therefore cannot contain the constant function in time from the time basis. Therefore, we define S_0 = 𝒫^str_n,0 = span{s_i(x)}_1≤ i≤ n and S:=(𝒫_n,m^str∖ S_0, ||·||_L^2(Γ_T)) and the mappings B:S× S→ℝ and a:S→ℝ as B(u,v):= ∫_Γ_T u_t v x t+ ∫_Γ_T U_xx v_xx x t+ ∫_Γ_T u_x v_x x t+ ∫_Γ_Tu/εv x t, a(v) = ∫_Γ_T f v x t+ ∫_Γ_Tṽ·e_2/εv x t where U(t,x):=∫_0^t u(s,x) s. Then, our problem can be formulated as finding η_t=u ∈ S such that B(u,v)=a(v) for all v∈ S. Obviously, B is bi-linear and a is bounded and linear. Moreover, by the equivalence of norms in finite basis 𝒫_n,m^str, one has B(u,v)≤ C ||u||_L^2(Γ_T) ||v||_L^2(Γ_T). Finally, due to time-periodicity, one has B(u,u) = ||u_x||_L^2(Γ_T)^2 + 1/ε||u||_L^2(Γ_T)^2 ≥ C ||u||_L^2(Γ_T)^2. Therefore, the solution η_t=u∈ S follows directly by Lax-Milgram Lemma. Since ∫_0^t η_t(s,x) d s in general does not belong to the space S due to integrals of τ_2k+1(t), we find η in the form η(t,x) = P_S(∫_0^t η_t(s,x)ds) + G(x), where P_S is a projection from 𝒫^str_n,m onto the space S and G(x)∈ S_0 is a solution to the elliptic equation - ∫_Γ_T G_xxψ_xx dx + ∫_Γ_Tṽ·e_2/εψ dx=-∫_Γ_Tfψ dx for all ψ∈ S_0. The continuity of mapping (ũ,η̃)↦η is a direct consequence of linearity of the equation. (<cit.>) Let ũ∈𝒫_n,m^fl. Then, there exists a unique solution ρ to the following problem ∂_t ρ + ∇· (ρũ) -εΔρ +ερ=ε M. Moreover, ρ∈ C_#^∞(0,T;W_#^2,p(Ω)) for any p∈ (1,∞), the mapping ũ↦ρ is continuous and compact from 𝒫_n,m^fl to W_#^1,p(Q_T) and ρ≥ 0. Let ũ∈𝒫_n,m^fl, η̃∈𝒫_n,m^str and ρ∈ C_#^∞(0,T;W_#^2,p(Ω)). Then, there exists a solution u∈𝒫_n,m^fl of δ∫_Q_Tu·∂_t φy t+∫_Q_Tρũ·∂_t φy t + ∫_Q_Tρũ⊗ũ:∇φy t + ∫_Q_T (ρ^γ+δρ^a) ∇·φy t - ∫_Q_T𝕊(∇u):∇φy t - δ∫_Q_T|u|^2 u·φy t -ε∫_Q_T∇ρ⊗φ:∇ũy t + ε/2∫_Q_T(M-ρ)ũ·φy t -∫_Γ_Tv- η̃_te_2/ε·ψ x t=-∫_Q_TρF_δ·φy t, for all φ∈𝒫_n,m^fl, where ψ = γ_|Γ̂^η̃φ and v = γ_|Γ̂^η̃u. 
Moreover, the mapping (ρ,ũ,η̃)↦u is continuous from W^1,p_#(Q_T)×𝒫_n,m^fl×𝒫_n,m^str to 𝒫_n,m^fl. The existence of solution is straightforward. Indeed, (<ref>) may be rewritten as A u = RHS where Au =𝒫( δu_t - ∇·𝕊(∇u) + δ |u|^2 u + 1/εv) where 𝒫 denotes the projection to 𝒫^fl_n,m and RHS contains all the other terms. The operator A is a coercive operator on 𝒫_n,m^fl and the classical result then yields that A is also surjective – we refer to <cit.>. To prove the continuity, let ρ_1,ρ_2∈ C_#^∞(0,T;W_#^2,p(Ω)), ũ_1,ũ_2∈𝒫_n,m^fl and η̃_1,η̃_̃2̃∈𝒫_n,m^str be given, and let u_1, u_2∈𝒫_n,m^fl be the corresponding solutions. Denote v_i = γ_|Γ̂^η̃_̃ĩu_i for i=1,2. We take the difference of the equation for u_1 tested with φ = (u_1-u_2) and the equation for u_2 tested with φ = (u_1-u_2). We emphasize that even though the test functions φ in both equations are the same, the corresponding ψ are different in both equations, as they are traces of φ on different curves η̃_i. Since 1/4 |u_1-u_2|^4 ≤ (|u_1|^2u_1 - |u_2|^2u_2)·(u_1-u_2) we get ∫_Q_T𝕊(∇u_1 - ∇u_2):∇(u_1 - u_2)y t+δ/4∫_Q_T|u_1-u_2|^4y t+1/ε∫_Γ_T |v_1-v_2|^2 x t ≤∫_Q_T (ρ_1 ũ_1 - ρ_2ũ_2) ·∂_t(u_1 - u_2)y t+∫_Q_T (ρ_1 ũ_1⊗ũ_1 - ρ_2 ũ_2⊗ũ_2): ∇ (u_1-u_2)y t + ∫_Q_T (ρ_1^γ -ρ_2^γ +δρ_1^a-δρ_2^a) ∇·(u_1-u_2)y t- ε∫_Q_T(∇ρ_1- ∇ρ_2)⊗(u_1 - u_2):∇ũ_1y t +ε∫_Q_T∇ρ_2⊗(u_1 - u_2):∇ (ũ_2-ũ_1)y t+ ε/2∫_Q_TM(ũ_1 -ũ_2) ·(u_1-u_2) y t - ε/2∫_Q_T(ρ_1 - ρ_2)ũ_2·(u_1-u_2)y t - ε/2∫_Q_Tρ_1(ũ_1-ũ_2)·(u_1-u_2)y t +∫_Q_T(ρ_1 - ρ_2)F_δ· (u_1-u_2)y t -1/ε∫_Γ_T (γ_|Γ̂^η̃_̃1̃u_1 - γ_|Γ̂^η̃_̃2̃u_2) · (γ_|Γ̂^η̃_̃2̃u_2 - γ_|Γ̂^η̃_̃1̃u_2) x t - 1/ε∫_Γ_Tγ_|Γ̂^η̃_̃2̃u_2 · (γ_|Γ̂^η̃_̃2̃(u_2-u_1) - γ_|Γ̂^η̃_̃1̃(u_2-u_1)) x t +1/ε∫_Γ_T(η̃_1t- η̃_2t)e_2 ·γ_|Γ̂^η̃_̃1̃(u_1 - u_2) x t +1/ε∫_Γ_Tη̃_2te_2 · (γ_|Γ̂^η̃_̃1̃(u_1 - u_2)-γ_|Γ̂^η̃_̃2̃(u_1 - u_2)) x t where we used that ∫_Γ_Tγ_|Γ̂^η̃_̃1̃u_1 ·γ_|Γ̂^η̃_̃1̃(u_1 -u_2) x t - ∫_Γ_Tγ_|Γ̂^η̃_̃2̃u_2 ·γ_|Γ̂^η̃_̃2̃(u_1 -u_2) x t = ∫_Γ_Tγ_|Γ̂^η̃_̃1̃u_1 · (γ_|Γ̂^η̃_̃1̃u_1 -γ_|Γ̂^η̃_̃2̃u_2) x t - ∫_Γ_Tγ_|Γ̂^η̃_̃2̃u_2 ·( γ_|Γ̂^η̃_̃1̃u_1 - γ_|Γ̂^η̃_̃2̃u_2) x t +∫_Γ_Tγ_|Γ̂^η̃_̃1̃u_1 · (γ_|Γ̂^η̃_̃2̃u_2 - γ_|Γ̂^η̃_̃1̃u_2) x t - ∫_Γ_Tγ_|Γ̂^η̃_̃2̃u_2 · (γ_|Γ̂^η̃_̃2̃u_1 - γ_|Γ̂^η̃_̃1̃u_1) x t = ∫_Γ_T|γ_|Γ̂^η̃_̃1̃u_1 -γ_|Γ̂^η̃_̃2̃u_2|^2_=|v_1-v_2|^2 x t + ∫_Γ_T (γ_|Γ̂^η̃_̃1̃u_1 - γ_|Γ̂^η̃_̃2̃u_2) · (γ_|Γ̂^η̃_̃2̃u_2 - γ_|Γ̂^η̃_̃1̃u_2) x t + ∫_Γ_Tγ_|Γ̂^η̃_̃2̃u_2 · (γ_|Γ̂^η̃_̃2̃(u_2-u_1) - γ_|Γ̂^η̃_̃1̃(u_2-u_1)) x t. The convective term is treated as follows ∫_Q_T (ρ_1 ũ_1⊗ũ_1 - ρ_2 ũ_2⊗ũ_2): ∇ (u_1-u_2)y t ≤ C ∫_Q_T (|ρ_1 - ρ_2|^2 + |ũ_1-ũ_2|^2)y t + c∫_Q_T|∇(u_1-u_2)|^2y t where c is taken small enough to absorb the term into the left hand side using the Korn inequality and C depends on the functions and m,n,ε,δ and c. The remaining terms on Q_T are estimated in a similar fashion. The most involved boundary term is the following 1/ε∫_Γ_Tγ_|Γ̂^η_2u_2 · (γ_|Γ̂^η_2(u_2-u_1) - γ_|Γ̂^η_1(u_2-u_1)) x t ≤ C ∫_Γ_T ( η̃_1 - η̃_2) ∂_zu_1 - ∂_zu_2_C(Q_T) x t ≤ C∫_Γ_T |η̃_1 - η̃_2|^2 x t + δ/16∫_Q_T |u_1 - u_2|^2y t, by the equivalence of norms in a finite basis, where we have also used γ_|Γ̂^η_2(u_2-u_1)(t,x) - γ_|Γ̂^η_1(u_2-u_1)(t,x) = (u_2-u_1)(t,x,η̃_1(t,x)) -(u_2-u_1)(t,x,η̃_2(t,x)) =(η̃_1(t,x) - η̃_2(t,x)) ∂_z(u_2-u_1)(t,x,θη̃_1(t,x)+(1-θ)η̃_2(t,x) ), θ∈ (0,1), which follows by the mean value theorem. 
We estimate the other terms similarly and we end up with ∫_Q_T𝕊(∇u_1 - ∇u_2):∇(u_1 - u_2)y t+δ∫_Q_T|u_1-u_2|^4y t+1/ε∫_Γ_T |v_1-v_2|^2 x t ≤ C∫_Q_T|∇ρ_1 -∇ρ_2|^2y t + C∫_Q_T|ρ_1 -ρ_2|^2y t + C∫_Q_T|ũ_1 -ũ_2|^2y t + C∫_Γ_T |η̃_1 - η̃_2|^2 x t + C∫_Γ_T |η̃_1t - η̃_2t|^2 x t, so the solution mapping is continuous. There exists a solution (ρ,u,η) to the approximate problem (<ref>)-(<ref>). We define an operator 𝒯:[ 𝒫_n,m^str×𝒫_n,m^fl→𝒫_n,m^str×𝒫_n,m^fl,; (η̃,ũ) ↦ (η,u), ] where η =η(ũ,η̃) is obtained in Lemma <ref>, ρ =ρ(ũ) is obtained in Lemma <ref> and u=u(ρ,ũ,η̃) is the solution obtained in Lemma <ref>. As a consequence of these lemmas, mapping 𝒯 is continuous and it is compact. It remains to show that the set {(η̃,ũ)∈𝒫_n,m^str×𝒫_n,m^fl: λ𝒯(η̃,ũ)=(η̃,ũ), λ∈ [0,1] } is bounded. We denote (η,u)= 𝒯(η̃,ũ) and emphasize that points from (<ref>) satisfy λ(η,u) = (η̃,ũ). We test (<ref>) by φ = ũ = λu and (<ref>) by ψ = η_t. Recalling ρ = ρ(ũ) and making similar calculations as in (<ref>) we obtain λ∫_Q_T𝕊(∇ u):∇ uy t + λδ∫_Q_T | u|^4y t + ∫_Γ_T |η_tx|^2 x t + ε∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t + εδ a/a-1∫_Q_Tρ^ay t + λ/2∫_Γ_T |v-λη_t|^2 x t + λ/2∫_Γ_T |v|^2 x t + 1/2∫_Γ_T |λv·e_2-η_t|^2 x t + 1/2∫_Γ_T |η_t|^2 x t = ∫_Γ_T f η_t x t + λ∫_Q_Tρ F_δ· uy t + ε M γ/γ-1∫_Q_Tρ^γ-1y t + εδ M a/a-1∫_Q_Tρ^a-1y t + λ^3/2 ε∫_Γ_T |η_t|^2 x t + λ^2/2 ε∫_Γ_T |v·e_2|^2 x t where v = γ_|Γ̂^η̃u. The first four terms on the right hand side can be dealt with as in (<ref>). The last two terms can be easily absorbed to the left hand side as λ≤ 1. We obtain λ∫_Q_T𝕊(∇ u):∇ uy t + λδ∫_Q_T | u|^4y t + ∫_Γ_T |η_tx|^2 x t ≤ C which provides by multiplying with suitable powers of λ ∫_Q_T𝕊(∇ũ):ũy t + δ∫_Q_T |ũ|^4y t + ∫_Γ_T |η̃_tx|^2 x t ≤ C, hence the set (<ref>) is bounded. The desired claim then follows by the Schaeffer fixed point theorem. Finally, since ρ is a solution to (<ref>), classical theory of parabolic equations implies Hölder regularity of ρ. § TIME BASIS LIMIT M→∞ Denote the approximate solution obtained in previous section as (ρ_m,u_m,η_m). One obtains from (<ref>) and (<ref>) that ∂_t u_m is bounded by a constant independent from m in L^1(0,T; span{f_i}_1≤ i≤ n). This means that u_m is bounded in L^∞(0,T; span{f_i}_1≤ i≤ n), so one can again estimate ∂_t u_m in a better space L_#^p(0,T; span{f_i}_1≤ i≤ n), for any p<∞. Similarly, the equation (<ref>) implies ∂_ttη_m∈ L^p_#(0,T; span{s_i}_1≤ i≤ n) for any p<∞. This together with (<ref>) allow us to pass to the limit m→∞ in most terms in the system (<ref>)-(<ref>). The following lemma allows us to pass to the limit in the trace terms. Let u_m u weakly in L^2_#(0,T;H^1_#(Ω)) and let η_m η weakly in L^∞_#(0,T;H^2_#(Γ)) and in H^1_#(0,T;H^1_#,0(Γ)). Then ∫_Γ_Tu_m(t,x,η_m(t,x)) ·ψ(t,x) x t →∫_Γ_Tu(t,x,η(t,x)) ·ψ(t,x) x t for all ψ∈ C^∞_#,0(Γ_T). Denote ũ_m(t,x,z) = u_m(t,x,z+η_m(t,x)). The Sobolev embedding theorem implies (η_m)_x is bounded in L^∞(Γ_t) and therefore ũ_m is bounded in L^2_#(0,T;H^1_#(Ω)). We extract a subsequence converging to some U weakly in L^2_#(0,T;H^1_#(Ω)). Our aim is to identify the limit as U(t,x,z) = ũ(t,x,z) := u(t,x,z+η(t,x)). Denote w_m := u_m - u. We have (ũ_m - ũ)(t,x,z) = w_m(t,x,z+η_m(t,x)) + u(t,x,z+η_m(t,x)) - u(t,x,z+η(t,x)) Fix φ∈ C^∞_#(Q_T). Then ∫_Q_Tw_m(t,x,z+η_m(t,x)) ·φ(t,x,z)y t = ∫_Q_Tw_m(t,x,z) ·φ(t,x,z-η_m(t,x))y t, where w_m converges weakly in L^2_#(0,T;H^1_#(Ω)) to zero and φ(t,x,z-η_m(t,x)) converges strongly in, say, L^2_#(Q_T) to φ(t,x,z-η(t,x)), since η_m →η uniformly in Γ_T. 
The same property implies also u(t,x,z+η_m(t,x)) - u(t,x,z+η(t,x)) → 0 a.e. in Q_T. This proves that ũ_m ũ weakly in L^2_#(0,T;H^1_#(Ω)) and the claim of the Lemma follows. We pass to the limit m →∞ in (<ref>)-(<ref>). We denote by (ρ,u,η) the limit of (ρ_m,u_m,η_m). The tripple (ρ,u, η) fulfills ρ∈ W_#^1,p(0,T; L_#^q(Ω))∩ L_#^p(0,T; W_#^2,q(Ω)), p,q∈ (1,2), u∈ W_#^1,p(0,T; span{f_i}_1≤ i≤ n), p<∞, η∈ W_#^2,p(0,T; span{s_i}_1≤ i≤ n), p<∞. The structure momentum equation ∫_Γ_Tη_t ψ_t x t - ∫_Γ_Tη_xxψ_xx x t -∫_Γ_Tη_txψ_x x t -∫_Γ_Tη_t - v·e_2/εψ x t=-∫_Γ_Tfψ x t holds for all ψ∈ C_#^∞(0,T; span{s_i}_1≤ i≤ n). The damped continuity equation ∂_t ρ + ∇· (ρu) -εΔρ +ερ=ε M, holds almost everywhere in Q_T. The fluid momentum equation δ∫_Q_Tu·∂_tφy t + ∫_Q_Tρu·∂_t φy t + ∫_Q_Tρu⊗u:∇φy t + ∫_Q_T (ρ^γ+δρ^a) ∇·φy t - ∫_Q_T𝕊(∇u):∇φy t -δ∫_Q_T |u|^2u·φy t-ε∫_Q_T∇ρ⊗φ:∇uy t + ε/2∫_Q_T(M-ρ)u·φy t -∫_Γ_Tv-η_te_2 /ε·ψ x t=-∫_Q_TρF_δ·φy t holds for all φ∈ C_#^∞(0,T; span{f_i}_1≤ i≤ n), where ψ=γ_|Γ̂^ηφ and v=γ_|Γ̂^ηu in both (<ref>) and (<ref>). §.§ Uniform estimates independent of n First, we take ϕ∈ C^∞_#(0,T) and choose ψ=ϕη_t in (<ref>), then multiply (<ref>) with γ/γ-1ϕρ^γ-1, then δ a/a-1ϕρ^a-1 and 1/2ϕ|u|^2 and finally choose φ=ϕu in (<ref>), and then sum up these identities to obtain - ∫_0^Tϕ_t(t) E_δ(t) t +∫_Q_Tϕ𝕊(∇u):∇uy t +δ∫_Q_Tϕ|u|^4y t+∫_Γ_Tϕ|η_tx |^2 x t + εγ∫_Q_Tϕρ^γ-2|∇ρ|^2y t+ εγ/γ-1∫_Q_Tϕρ^γy t + εδ a∫_Q_Tϕρ^a-2|∇ρ|^2y t +εδ a/a-1∫_Q_Tϕρ^ay t + 1/ε∫_Γ_Tϕ|v - η_t e_2|^2 x t = = ∫_Γ_Tϕ fη_t x t + ∫_Q_Tϕρu·F_δy t+ ε∫_Q_T Mγ/γ-1ϕρ^γ-1y t+εδ∫_Q_T Ma/a-1ϕρ^a-1y t where E_δ(t):=∫_Ω( 1/2ρ|u|^2+δ/2|u|^2+ 1/γ-1ρ^γ+δ/a-1ρ^a)(t)y +∫_Γ( 1/2|η_t|^2+1/2|η_xx|^2)(t) x. Choose ϕ=1 to get ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t + εγ∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t+εδ a/a-1∫_Q_Tρ^ay t + 1/ε∫_Γ_T |v - η_t e_2|^2 x t = =∫_Γ_T fη_t x t + ∫_Q_Tρu·F_δy t+ ε∫_Q_T Mγ/γ-1ρ^γ-1y t+εδ∫_Q_T Ma/a-1ρ^a-1y t. We deduce similarly to (<ref>) ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t + εγ∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t+εδ a/a-1∫_Q_Tρ^ay t + 1/ε∫_Γ_T |v - η_t e_2|^2 x t ≤ C(ε,δ). Next, we take a sequence of ϕ_k →χ_[s,t], we integrate over (0,T) w.r.t. s and take a supremum over t to deduce sup_t∈ (0,T) E_δ(t) ≤1/T∫_0^T E_δ(s) s + ∫_Γ_T | fη_t | x τ + ∫_Q_T | ρu·F_δ | yτ+ ε M∫_Q_T( γ/γ-1ρ^γ-1 + δ a/a-1ρ^a-1)yτ. The last four terms can be bounded as in (<ref>). Moreover, (<ref>) implies ∫_Q_T( 1/2ρ|u|^2+δ/2|u|^2+ 1/γ-1ρ^γ+δ/a-1ρ^a )y t+ ∫_Γ_T1/2|η_t|^2 x t ≤ C(ε,δ). We choose ψ=η in (<ref>) to obtain ∫_Γ_T |η_xx|^2 ≤ C(ε,δ). Thus, (<ref>) and previous estimates yield sup_t∈ (0,T) E_δ(t) ≤ C(ε,δ). We showed that (<ref>) still holds and moreover we have additional bounds independent of n ∈ℕ from (<ref>), namely η_xx_L^∞(0,T;L^2(Γ)) ≤ C(ε,δ), η_t_L^∞(0,T;L^2(Γ)) ≤ C(ε,δ), u_L^∞(0,T;L^2(Ω)) ≤ C(ε,δ), √(ρ)u_L^∞(0,T;L^2(Ω)) ≤ C(ε,δ), ρ_L^∞(0,T;L^a(Ω)) ≤ C(ε,δ). § SPATIAL BASIS LIMIT N→∞ Denote the solution obtained in previous section as (ρ_n,u_n,η_n). The uniform bounds (<ref>) and (<ref>) give rise to convergences ρ_n ρ weakly^* in L^∞_#(0,T;L^a_#(Ω)) and weakly in W^1,p_#(0,T;L^q_#(Ω)) ∩ L^p_#(0,T;W^2,q_#(Ω)), u_n u weakly^* in L^∞_#(0,T;L^2_#(Ω)) and weakly in L^2_#(0,T;H^1_#(Ω)), η_n η weakly^* in L^∞_#(0,T;H^2_#(Γ)) and weakly in H^1_#(0,T;H^1_#,0(Γ)), for some p,q ∈ (1,2). Our goal now is to pass to the limit n→∞ in (<ref>), (<ref>), (<ref>) and (<ref>). 
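Before passing to the limit, let us record, since the same device is used again below, how bounds of the type sup_t∈ (0,T) E_δ(t) ≤ 1/T∫_0^T E_δ(s) s + … are extracted from the ϕ-weighted energy inequality. For almost every 0<s<t<T one chooses nonnegative ϕ_k∈ C^∞_#(0,T) with ϕ_k→χ_[s,t], so that -∫_0^Tϕ_k' E_δ→ E_δ(t)-E_δ(s); dropping the nonnegative dissipation terms on the left-hand side then yields E_δ(t) ≤ E_δ(s) + R, where R collects the right-hand side terms integrated over the whole period and taken in absolute value. Averaging this inequality in s over (0,T) and finally taking the supremum over t gives the claimed bound.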
§.§ Limit in the structure momentum equation First, (<ref>) is a linear equation and thus the weak convergence is sufficient to claim ∫_Γ_Tη_t ψ_t x t - ∫_Γ_Tη_xxψ_xx x t -∫_Γ_Tη_txψ_x x t -∫_Γ_Tη_t - v·e_2/εψ x t=-∫_Γ_Tfψ x t, for all ψ∈ C^∞_#,0(Γ_T). We have ∂_ttη_n_L^2(0,T; (H^2_#,0(Γ))^*)≤ C(ε,δ) due to (<ref>). This together with ∂_tη_n_L^2_#(0,T; H^1(Γ))≤ C(ε,δ) imply that ∂_tη_n →∂_t η strongly in L^2_#(Γ_T). We choose ψ = η_n in (<ref>) and ψ=η in (<ref>) and we compare these two identities to conclude ∫_Γ_T |∂_xxη_n|^2 x t →∫_Γ_T |∂_xxη|^2 x t. §.§ Limit in the continuity equation We proceed to a limit in the continuity equation. Estimates (<ref>) and (<ref>) yield that (upon passing to a suitable subsequence) ∂_t ρ + ∇· (ρu) -εΔρ +ερ=ε M almost everywhere in Q_T. We multiply (<ref>) by ρ_n, integrate the resulting equation over Q_T and we pass to the limit n→∞. We compare the result with (<ref>) multiplied by ρ and integrated over Q_T. We deduce ∫_Q_T|∇ρ_n|^2y t →∫_Q_T|∇ρ|^2y t so ∇ρ_n →∇ρ strongly in L^2(Q_T). §.§ Limit in the fluid momentum equation We start with the observation that bounds (<ref>) allow to bound ρu in L^20/9(Q_T), which in turn implies ∇ρ_n_L^20/9(Q_T)≤ C(,δ). Consequently, we use (<ref>), to obtain ∂_t((δ+ρ_n)u_n)_(L^20_#(0,T; W^2,p_#(Ω)))^*≤ C(ε,δ) for some p > 2. Moreover, uniform bounds yield (δ+ρ_n)u_n_L^∞(0,T; L^2a/a+1(Ω))≤ C(ε,δ) and we infer (δ+ρ_n)u_n_L^∞_#(0,T; (W^s,2_#(Ω))^*)≤ C(ε,δ) for some s<1. This however means that (δ+ρ_n)u_n → (δ+ρ)u strongly in L^∞_#(0,T; (W^s',2_#(Ω))^*) for some s<s'<1, and consequently by the weak convergence u_n⇀u in L^2_#(0,T;H^1_#(Ω)) (ρ_n+δ)u_n ⊗u_n ⇀ (ρ+δ)u⊗u weakly in L^p_#(Q_T) for some p>1. Since 0≤ρ_n/ρ_n+δ<1 and ρ_n →ρ a.e. in Q_T, one concludes that ρ_n/ρ_n+δ→ρ/ρ+δ in L^q_#(Q_T) for any q∈ [1,∞) so ρ_n/ρ_n+δ(ρ_n+δ)u_n ⊗u_n=ρ_nu_n ⊗u_n ⇀ρu⊗u in L^1_#(Q_T). The weak convergence u_n⇀u in L^2_#(0,T;H^1_#(Ω)) and the strong convergence of ∇ρ_n in L^2_#(Q_T) obtained in (<ref>) yield ∫_Q_T∇ρ_n⊗φ:∇u_ny t →∫_Q_T∇ρ⊗φ:∇uy t, for any φ∈ C_#^∞(Q_T). The remaining terms are dealt with in a straightforward fashion by means of uniform bounds and Lemma <ref> is used to pass to the limit in the trace term. Therefore, when we let n →∞ in (<ref>) we end up with δ∫_Q_Tu·∂_tφy t + ∫_Q_Tρu·∂_t φy t + ∫_Q_Tρu⊗u:∇φy t + ∫_Q_T (ρ^γ+δρ^a) ∇·φy t - ∫_Q_T𝕊(∇u):∇φy t -δ∫_Q_T |u|^2u·φy t-ε∫_Q_T∇ρ⊗φ:∇uy t + ε/2∫_Q_T(M-ρ)u·φy t -∫_Γ_Tv-η_te_2 /ε·ψ x t=∫_Q_TρF_δ·φy t, for all φ∈ C_#^∞(Q_T) and ψ∈ C_#^∞(Γ_T) such that φ (t,x,η̂(t,x))=ψ(t,x) on Γ_T, where v = γ_|Γ̂^ηu. §.§ Limit in the energy inequality The information gathered above is clearly sufficient to pass to the limit in all terms on the right hand side of (<ref>). In order to pass to the limit on the left hand side we first note that (<ref>), (<ref>) together with (<ref>) and the information about the sequence of densities allows us to pass to the limit in the first term on the left hand side of (<ref>). Finally, we assume that ϕ∈ C^∞_#(0,T) satisfies moreover ϕ≥ 0 and we use weak lower semicontinuity of convex functions to deduce that in the limit, (<ref>) holds as an inequality - ∫_0^Tϕ_t(t) E_δ(t) t +∫_Q_Tϕ𝕊(∇u):∇uy t+δ∫_Q_Tϕ|u|^4 y t + ∫_Γ_Tϕ|η_tx |^2 x t + εγ∫_Q_Tϕρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tϕρ^γy t+ εδ a∫_Q_Tϕρ^a-2|∇ρ|^2y t +εδ a/a-1∫_Q_Tϕρ^ay t + 1/ε∫_Γ_Tϕ|v - η_t e_2|^2 x t ≤∫_Γ_Tϕ fη_t x t + ∫_Q_Tϕρu·F_δy t+ ε∫_Q_T Mγ/γ-1ϕρ^γ-1y t+εδ∫_Q_T Ma/a-1ϕρ^a-1y t where E_δ is defined by (<ref>). 
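The weak lower semicontinuity invoked above can be checked by hand for the quadratic terms. If g_n⇀ g weakly in L^2(Γ_T) and ϕ∈ C^∞_#(0,T) with ϕ≥ 0, then ∫_Γ_Tϕ |g|^2 x t = lim_n∫_Γ_Tϕ g_n g x t ≤liminf_n(∫_Γ_Tϕ |g_n|^2 x t)^1/2(∫_Γ_Tϕ |g|^2 x t)^1/2 by the Cauchy-Schwarz inequality with the nonnegative weight ϕ, and hence ∫_Γ_Tϕ |g|^2 x t ≤liminf_n∫_Γ_Tϕ |g_n|^2 x t. Applied to g_n = (η_n)_tx this handles the term ∫_Γ_Tϕ|η_tx|^2 x t; the terms ∫_Q_Tϕ𝕊(∇u_n):∇u_ny t and δ∫_Q_Tϕ|u_n|^4y t are treated in the same way, using that 𝕊(∇u):∇u is a nonnegative quadratic form in ∇u and that |·|^4 is convex.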
§.§ Uniform bounds independent of ε We use the energy inequality (<ref>) to deduce estimates of (ρ,u,η) independent of . We start by taking ϕ = 1 in (<ref>) to get ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t + εγ∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t+εδ a/a-1∫_Q_Tρ^ay t + 1/ε∫_Γ_T |v - η_te_2|^2 x t ≤∫_Γ_T fη_t x t + ∫_Q_Tρu·F_δy t+ ε∫_Q_T Mγ/γ-1ρ^γ-1y t+εδ∫_Q_T Ma/a-1ρ^a-1y t. The estimates here need to be more delicate than in the previous section as we no longer have directly information about the density independent of on the left hand side of (<ref>). Therefore we introduce (recall (<ref>)) ℰ_δ :=sup_t∈ (0,T) E_δ(t). We take ϕ→χ_[s,t] in (<ref>), we integrate over (0,T) with respect to s and finally we take the supremum over t to get ℰ_δ≤1/T∫_0^T E_δ(s) s+ ∫_Γ_T fη_t x t + ∫_Q_Tρu·F_δy t + ε∫_Q_T Mγ/γ-1ρ^γ-1y t+εδ∫_Q_T Ma/a-1ρ^a-1y t. Our goal is therefore to bound the terms on the right-hand sides of (<ref>) and (<ref>). The first, third and fourth terms on the right-hand side of (<ref>) can be absorbed as in (<ref>). The second term has to be estimated in a different way. Let p > 1 be small and let q = p/p-1. We have ∫_Q_Tρu·F_δy t≤ C ρ_L^∞(0,T; L^p(Ω))u_L^2(0,T;L^q(Ω))≤ C ρ_L^∞(0,T; L^p(Ω))u_L^2(0,T;H^1(Ω)) ≤ C(s,δ)(1+ ℰ_δ^s) + δ/2( ∫_Q_T𝕊(∇u):∇uy t+ ∫_Q_T|u|^4y t ) for s>0 as small as we want, where we interpolated L^p between L^1 and L^a. Provided δ<1, these terms can be absorbed so it leads to ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t + εγ∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t + εδ a∫_Q_Tρ^a-2|∇ρ|^2y t+εδ a/a-1∫_Q_Tρ^ay t + 1/ε∫_Γ_T |v - η_te_2|^2 x t ≤ C(s,δ)(1+ ℰ_δ^s). The last four terms on the right hand side of (<ref>) are treated the same way, hence it remains to show ∫_0^T E_δ(s) s ≤ C(1+ℰ_δ^β) for some β < 1. We observe that ∫_Q_T1/2(ρ + δ)|u|^2y t + ∫_Γ_T1/2 |η_t|^2 x t ≤ C(s,δ)(1 + ℰ_δ^s/2 + s/a + ℰ_δ^s). We multiply (<ref>) by ρ and integrate over Q_T to get ε∫(ρ^2+|∇ρ|^2)y t = ∫_Q_T -1/2ρ^2 ∇·uy t + ∫_Q_T M ρy t ≤(∫_Q_Tρ^4y t)^1/2u_L^2(0,T; H^1(Ω))+C ≤(∫_Q_Tρ^ay t)^2/a C(s,δ)(1+ ℰ_δ^s) ≤ C(s,δ)(1+ ℰ_δ^s+2/a). Next, we choose ψ = η in (<ref>) and sum up the resulting equation with (<ref>) with the choice φ=ηe_2. Most of the calculations can be done in the same way as in Section <ref>, however we need to estimate several additional terms multiplied by approximation parameters, namely |∫_Q_Tδu·η_te_2y t| ≤ C(δ)u_L^4(Q_T)η_t_L^2(0,T;L^∞(Γ))≤ C(s,δ)(1+ℰ_δ^3/4 s), |∫_Q_Tδ|u|^2u·ηe_2y t| ≤ C(δ)u^3_L^4(Q_T)η_L^4(Γ_T)≤ C(s,δ)(1+ℰ_δ^3/4 s)(η_x_L^2(Γ_T) + η_t_L^2(Γ_T)) ≤ C(s,δ)(1+ℰ_δ^3/2 s) + 1/16η_xx^2_L^2(Γ_T), |ε/2∫_Q_T(M-ρ)u·ηe_2y t| ≤ C(δ)(1+ρ_L^∞(0,T;L^p(Ω)))u_L^2(0,T;L^q(Ω))η_L^2(0,T;L^∞(Γ)) ≤ C(s,δ)(1+ℰ_δ^2s) + 1/16η_xx^2_L^2(Γ_T), and |ε∫_Q_T∇ρ⊗(ηe_2):∇uy t| ≤ C(δ)√(ε)∇ρ_L^2(Q_T)∇u_L^2(Q_T)η_L^∞(Γ_T) ≤ C(s,δ)(1+ℰ_δ^2s+2/a) + 1/16η_xx^2_L^2(Γ_T). Eventually we end up with the estimate ∫_Γ_T |η_xx|^2 x t≤ C(s,δ)(1+ ℰ_δ^s'), for some 0<s'<1. It remains to show ∫_Ω(1/γ-1ρ^γ+δ/a-1ρ^a)y t ≤ C(s,δ)(1+ ℰ_δ^s”), for some 0<s”<1 similarly to Section <ref>. To this end we use φ_h defined in (<ref>) as a test function in (<ref>). As above in the estimate of second spatial derivatives of η, we obtain four more terms to estimate. The term δu·∂_tφ_h is handled similarly as ρu·∂_tφ_h. The remaining three additional terms are easy to handle due to the estimate φ_h_L^∞(Q_T)≤ℬ_Ω[ρ^α-∫_Ωρ^α x ]_L^∞(Q_T)≤ C which follows from (<ref>). 
Therefore |∫_Q_Tδ|u|^2u·φ_hy t| ≤ C(δ)u^3_L^4(Q_T)≤ C(s,δ)(1+ℰ_δ^3/4 s), |ε/2∫_Q_T(M-ρ)u·φ_hy t| ≤ C(δ)(1+ρ_L^∞(0,T;L^p(Ω)))u_L^2(0,T;L^q(Ω))≤ C(s,δ)(1+ℰ_δ^s), and |ε∫_Q_T∇ρ⊗φ_h:∇uy t| ≤ C(δ)√(ε)∇ρ_L^2(Q_T)∇u_L^2(Q_T)≤ C(s,δ)(1+ℰ_δ^s+1/a). In the second part of this procedure we use the test function φ = φ_he_2 in (<ref>) with φ_h defined in (<ref>). The estimates are again either similar to those in Section <ref> or to those presented above and we recover (<ref>). This however means that (<ref>) is proved which yields ℰ_δ≤ C(δ), and ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t + εγ∫_Q_Tρ^γ-2|∇ρ|^2y t + εγ/γ-1∫_Q_Tρ^γy t+ ε aδ∫_Q_Tρ^a-2|∇ρ|^2y t+εδ a/a-1∫_Q_Tρ^a+1y t + 1/ε∫_Γ_T |v - η_te_2|^2 x t ≤ C(δ). §.§ Coupled back momentum equation We sum up the momentum equation (<ref>) for test functions (φ,ψ) and the structure momentum equation (<ref>) for test function ψ. This way the penalization terms get cancelled and we obtain that (ρ,u,η) satisfy the coupled momentum equation δ∫_Q_Tu·∂_tφy t + ∫_Q_Tρu·∂_t φy t + ∫_Q_Tρu⊗u:∇φy t + ∫_Q_T (ρ^γ+δρ^a) ∇·φy t - ∫_Q_T𝕊(∇u):∇φy t-δ∫_Q_T |u|^2u·φy t-ε∫_Q_T∇ρ⊗φ:∇uy t+ ε/2∫_Q_T(M-ρ)u·φy t - ∫_Γ_Tη_t ψ_t x t- ∫_Γ_Tη_xxψ_xx x t -∫_Γ_Tη_txψ_x x t = -∫_Γ_Tfψ x t-∫_Q_TρF_δ·φy t, which holds for all φ∈ C_#^∞(Q_T) and ψ∈ C_#,0^∞(Γ_T) such that φ (t,x,η̂(t,x))=ψ(t,x)e_2 on Γ_T. Note however, that at this point, the problem is still not fully coupled since we cannot ensure that η_te_2 = γ_|Γ^ηu. §.§ Improved estimate of η_xx The following approach comes from <cit.>, where the improved regularity of displacement was obtained for the interaction problem between an incompressible viscous fluid and a nonlinear Koiter shell (see also <cit.> for the compressible counterpart). We start with introducing the notation D_h^s[η] defined as D_h^s[η](x):= η(t,x+h) - η(t,x)/|h|^s-1h, s>0, h ∈. The idea is to take s < 1/4 and test the coupled momentum equation (<ref>) with a suitable test function to obtain an estimate on ∫_Γ_T|D_h^s[η_xx]|^2 x t independent on h < h_0 for some h_0 > 0. The integration by parts formula for D^s_h holds for periodic functions, i.e. ∫_ΓD^s_h[u](x)v(x) x = -∫_Γ u(x) D^s_-h[v](x) x for all periodic u,v such that the integrals are finite. We set ψ_h(t,x) = D^s_-h[D^s_h[η(t,x)]] - 1/|h|^2s(η(t,-h)+η(t,h)) =: ψ_1,h(t,x) - ψ_2,h(t) and use (ψ_he_2,ψ_h) as a test function couple in (<ref>) (note that this is an admissible test function because ψ_h(t,0) = 0). This gives rise to -∫_Γ_Tη_xx (ψ_h)_xx x t = RHS, so by taking into account that (ψ_h)_xx=D^s_-h[D^s_h[η(t,x)_xx]] which implies ∫_Γ_T|D_h^s[η_xx(t,x)]|^2 x t= -∫_Γ_Tη_xx (ψ_h)_xx x t, the proof will follow once we show that RHS is bounded. First, note that D^s_-h[D^s_h [η_t]]_L^p(Γ)≤ C η_tx_L^2(Γ), D^s_-h[D^s_h [η_x]]_L^p(Γ)≤ C η_xx_L^2(Γ), for any p>1 and s<1/4 by embedding theorems (see <cit.> and <cit.>). Moreover, since ||η_tx||_L^2(Γ_T)≤ C(δ), we get η_t ∈ L^2(0,T;C^1/2(Γ)) and thus η_t(t,± h)/|h|^1/2 = η_t(t,± h) - η_t(t,0)/|h|^1/2∈ L^2(0,T) with its L^2-norm bounded by C(δ). This means that for s < 1/4 it holds ψ_2,t∈ L^2(0,T) and ||ψ_2,t||_L^2(0,T)≤ C(δ). This combined with (<ref>) implies D^s_-h[D^s_h [(ψ_h)_t]]_L^2(0,T; L^p(Γ))≤ C η_tx_L^2(0,T;L^2(Γ))≤ C(δ) , while (<ref>) implies D^s_-h[D^s_h [(ψ_h)_x]]_L^∞(0,T; L^p(Γ))≤ C η_xx_L^∞(0,T;L^2(Γ))≤ C(δ), for any p>1 and s<1/4. 
Finally, since ||η_xx||_L^∞(0,T;L^2(Γ))≤ C(δ) a simple first order Taylor expansion of η yields ψ_2(t) ≤ C(δ)|h|^1-2s≤ C(δ), so D^s_-h[D^s_h [(ψ_h)]]_L^∞(Γ_T)≤ C ( η_xx_L^∞(0,T;L^2(Γ)) + ||ψ_2,h||_L^∞(0,T)) ≤ C(δ). Now, we are ready to show that the arising terms are bounded. First, the bounds of the terms involving time derivatives of ψ_h are bounded as follows |∫_Q_Tρu· (∂_t ψ_h e_2)y t | ≤ C||ρ||_L^∞(0,T; L^γ(Ω)) ||u||_L^2(0,T; L^p(Ω))D^s_-h[D^s_h [(ψ_h)_t]]_L^2(0,T; L^p(Γ))≤ C(δ) for p=2γ/γ-1 by (<ref>), and δ|∫_Q_Tu· (∂_t ψ_h e_2)y t | ≤ Cδ^3/4 ||δ^1/4u||_L^4(Q_T)D^s_-h[D^s_h [(ψ_h)_t]]_L^2(Γ_T)≤ C(δ), |∫_Γ_Tη_t (ψ_h)_t x t | ≤ || η_t||_L^2(Γ_T)D^s_-h[D^s_h [(ψ_h)_t]]_L^2(Γ_T)≤ C(δ), by (<ref>) and uniform bounds. Next, the pressure term vanishes since ∇· (D^s_-h[D^s_h[η]](x) e_2) = 0. The remaining terms all include at most one spatial derivative on ψ_h. Let us bound only the most "difficult" terms: |ε∫_Q_T∇ρ⊗(ψ_h e_2):∇uy t | ≤√(ε) ||√(ε)∇ρ||_L^2(Q_T) ||ψ_h||_L^∞(Γ_T) ||∇u||_L^2(Q_T)≤ C(δ) by (<ref>), and | ∫_Q_Tρu⊗u:∇φ_h y t| ≤ C||ρ||_L^∞(0,T; L^γ(Ω)) ||u||_L^2(0,T; L^p(Ω))^2 ||(ψ_h)_x||_L^∞(0,T; L^p(Ω)) for p=3γ/γ-1, by (<ref>). The remaining terms are bounded in a similar fashion, so we conclude ∫_Γ_T|D_h^s[η_xx]|^2 ≤ C(δ) and as a direct consequence of imbedding and uniform bound on η in L^2(0,T; H^2(Γ)), one finally obtains || η||_L^2(0,T; H^2+s(Γ))≤ C(δ) for any s<1/4. § LIMIT Ε→ 0 Denote the solutions obtained in previous section as (ρ_ε,u_ε,η_ε). The uniform bounds (<ref>) and (<ref>) give rise to the following weak convergencies ρ_ερ weakly^* in L^∞_#(0,T;L^a_#(Ω)), u_εu weakly in L^2_#(0,T;H^1_#(Ω)), η_εη weakly^* in L^∞_#(0,T;H^2_#(Γ)) and weakly in H^1_#(0,T;H^1_#,0(Γ)). We pass to the limit in the equations (<ref>), (<ref>) and the energy inequality (<ref>). §.§ Limit in the continuity equation We use nowadays standard arguments for the continuity equation to get ρ_→ρ in C_w([0,T];L^a(Ω)) and therefore ρ_u_ρu weakly in L^∞(0,T;L^2a/a+1(Ω)). Moreover, due to (<ref>) and (<ref>) we have ∇ρ_→ 0 in L^2(Q_T). We conclude that the limiting functions ρ and u satisfy the continuity equation in the weak sense, i.e. ∫_Q_Tρ (∂_t φ +u·∇φ)y t=0 for all φ∈ C^∞_#(Q_T). Since ρ∈ L_#^∞(0,T; L_#^a(Ω)) and a≥ 2 we further get that the renormalized continuity equation is satisfied by ρ and u, i.e. ∫_Q_Tρ B(ρ)( ∂_t φ +u·∇φ)y t =∫_Q_T b(ρ)(∇·u) φy t for all functions φ∈ C_#^∞(Q_T) and any b∈ L^∞ (0,∞) ∩ C[0,∞) such that b(0)=0 with B(ρ)=B(1)+∫_1^ρb(z)/z^2dz, see i.e. <cit.>. §.§ Limit in the coupled momentum equation The limit in the equation (<ref>) is more involved. The terms integrated over Γ_T are linear and their limits are straightforward. Regarding the terms integrated over Q_T, we start similarly as in Section <ref>, deduce from the continuity equation that ∇ρ__L^20/9(Q_T)≤ C(δ) and we use this information to estimate ∂_t((δ+ρ_)u_) _(L^20_#(0,T,W^2,p_#(Ω)))^*≤ C(δ). The continuity equation implies a similar estimate for the time derivative of the density, namely ∂_tρ__(L^10/3_#(0,T,W^1,2_#(Ω)))^*≤ C(δ). Using this information and the fact that the sequence of velocities is bounded in L^4(Q_T) we get in particular that |∫_Q_T∂_tρ_u_·φy t| ≤ C(δ) for any φ∈ L^20_#(0,T,W^2,p_#(Ω)). Therefore we obtain δ∂_tu__(L^20_#(0,T,W^2,p_#(Ω)))^*≤(δ+ρ_) ∂_tu__(L^20_#(0,T,W^2,p_#(Ω)))^* ≤∂_t((δ+ρ_)u_) _(L^20_#(0,T,W^2,p_#(Ω)))^* + u_∂_tρ__(L^20_#(0,T,W^2,p_#(Ω)))^*≤ C(δ). This bound together with the Aubin-Lions lemma is enough to pass to the limit in the term δ∫_Q_T |u|^2u·φy t. 
We also obtain similar convergences as in (<ref>) and (<ref>), where we combine the latter with the fact that u_⊗u_→u⊗u in L^p(Q_T) for some p > 1 to pass to the limit in the convective term. The only remaining term without properly identified limit is the pressure term. Regarding this term, we first observe that when deriving (<ref>), we proved that ρ_ε^a has a better than L^1 integrability in the interior of the domain Q_T^η. However, it is still possible that {ρ_ε}_ε>0 might generate some concentrations near the elastic boundary. We define φ_h^ε(t,x,z):=z-η_ε(t,x)/h, for η_ε(t,x)<z<η_ε(t,x)+h, -1/H-h(z-(η_ε(t,x)+h))+1, for η_ε(t,x)+h<z<η_ε(t,x)+2H-h, z-(η_ε(t,x)+2H)/h, for η_ε(t,x)+2H-h<z<η_ε(t,x)+2H. We choose φ=φ_h^εe_2 in (<ref>) (with ψ=0) and we compute similarly as in (<ref>) to get ∫_0^T ∫_{η<z<η+h}∪{η+2H-h<z<η+2H} (ρ_ε^γ+δρ_ε^a)y t ≤ C(δ) h^s, for some s>0. Indeed, to obtain this kind of estimate it is enough to observe that all arising terms have better than L^1 integrability in the space variable. Here we in particular use once again (<ref>). Estimate (<ref>) means that the sequence {ρ_ε^γ+δρ_ε^a}_ε>0 is uniformly integrable so there exists its weak limit in L^1(Q_T) denoted as p_δ(ρ). In order to identify p_δ(ρ), one can use the standard approach on compact subsets of Q_T^η based on convergence of effective viscous flux, renormalized continuity equation and monotonicity argument (see <cit.>) in order to conclude that ρ_ε→ρ, a.e. in Q_T. This is enough to identify p_δ(ρ) as ρ^γ+δρ^a. Finally, let us point out that the kinematic coupling ∂_tηe_2 = γ_|Γ^ηu is recovered due to the bound (<ref>). We have proved that the limit functions (ρ,u,η) satisfy ∫_Q_T(δ + ρ) u·∂_t φy t + ∫_Q_T(ρu⊗u):∇φy t +∫_Q_T (ρ^γ+δρ^a) (∇·φ)y t - ∫_Q_T𝕊( ∇u): ∇φy t +δ∫_Q_T|u|^2u·φy t+∫_Γ_Tη_t ψ_t x t - ∫_Γ_Tη_xxψ_xx x t- _Γ_Tη_txψ_x x t = -∫_Γ_T fψ x t - ∫_Q_TρF_δ·φy t for all φ∈ C_#^∞(Q_T) and ψ∈ C^∞_#,0(Γ_T) such that φ (t,x,η̂(t,x))=ψ(t,x)e_2 on Γ_T. §.§ Limit in the energy inequality Our aim here is to pass to the limit in (<ref>), where ϕ∈ C^∞_#(0,T), ϕ≥ 0. First, it is easy to pass to the limit on the right hand side, in particular the last two terms converge to zero. On the left hand side we simply discard the penalization term 1/ε∫_Γ_Tϕ|v_ - (η_)_t e_2|^2 x t, because it is obviously non-negative. We apply the same argument for the terms εγ∫_Q_Tϕρ_^γ-2|∇ρ_|^2y t + εδ a∫_Q_Tϕρ_^a-2|∇ρ_|^2y t. The uniform bounds (<ref>) and (<ref>) imply that εγ/γ-1∫_Q_Tϕρ_^γy t + εδ a/a-1∫_Q_Tϕρ_^a→ 0y t. Next, we use the weak lower semicontinuity of convex functions to pass to the limit in the terms ∫_Q_Tϕ𝕊(∇u_):∇u_y t +δ∫_Q_Tϕ|u_|^4y t+∫_Γ_Tϕ|(η_)_tx |^2 x t. It remains to identify the limit of the first term in (<ref>), namely ∫_0^Tϕ_t(t) E_δ(t) t = ∫_Q_T( 1/2ρ_|u_|^2+δ/2|u_|^2+ 1/γ-1ρ_^γ+δ/a-1ρ_^a)ϕ_ty t+∫_Γ_T( 1/2|(η_)_t|^2+1/2|(η_)_xx|^2)ϕ_t x t. We use the same arguments as when passing to the limit in the convective term in the coupled momentum equation to obtain 1/2∫_Q_T(δ|u_ε|^2+ ρ_|u_ε|^2)ϕ_t(t)y t→1/2∫_Q_T(δ|u|^2+ρ|u|^2)ϕ_t(t)y t. Moreover, the a.e. convergence of {ρ_ε}_ε>0 and equiintegrability of {ρ_ε^a}_ε>0, imply ∫_Q_T(1/γ-1ρ_ε^γ+δ/a-1ρ_ε^a)ϕ_t(t)y t→∫_Q_T(1/γ-1ρ^γ+δ/a-1ρ^a)ϕ_t(t)y t. The bound on ∂_txη_ε in L^2(Γ_T) and (<ref>) imply ∂_xxη_ε→∂_xxη strongly in L^2(Γ_T) so 1/2∫_Γ_T |∂_xxη_ε|^2ϕ_t(t) x t →1/2∫_Γ_T|∂_xxη|^2ϕ_t(t) x t. It only remains to prove the convergence of the term involving the square of the time derivative of η_. 
First, we choose (φ,ψ) = (η_εe_2,η_ε) in (<ref>) and (φ,ψ) = (ηe_2,η) in (<ref>) and we compare the two identities to conclude that ∫_Q_T(δ+ρ_ε)u_ε·∂_t η_εe_2y t + ∫_Γ_T |∂_t η_ε|^2 x t →∫_Q_T(δ+ρ)u·∂_t ηe_2y t + ∫_Γ_T |∂_t η|^2 x t. Moreover, the strong convergence of (δ+ρ_ε)u_ε→ (δ+ρ)u in L^2(0,T; H^-1/2(Ω^η(t))) and the weak convergence of u_ε - Ext[v_ε] to u - η_t e_2 in L^2(0,T; H_0^1/2(Ω^η(t))) where Ext[v_ε](t,x,z) = v_ε(t,x) imply ∫_Q_T(δ+ρ_ε)u_ε·(u_ε - ∂_t η_εe_2)y t = ∫_Q_T(δ+ρ_ε)u_ε· (u_ε - Ext[v_ε])y t + ∫_Q_T(δ+ρ_ε)u_ε·(Ext[v_ε]-∂_t η_εe_2)_→ 0y t →∫_Q_T(δ+ρ)u· (u - η_te_2)y t. We sum up (<ref>) and (<ref>) and by (<ref>) we deduce 1/2∫_Γ_T|∂_t η_ε|^2ϕ_t(t) x t →1/2∫_Γ_T|∂_t η|^2ϕ_t(t) x t. Thus, (ρ,u,η) satisfies - ∫_0^Tϕ_t(t) E_δ(t) t +∫_Q_Tϕ𝕊(∇u):∇uy t+δ∫_Q_Tϕ|u|^4y t+∫_Γ_Tϕ|η_tx |^2 x t ≤∫_Γ_Tϕ fη_t x t + ∫_Q_Tϕρu·F_δy t for all ϕ∈ C^∞_#(0,T), ϕ≥ 0. §.§ Estimates independent of δ At this point, one can adjust the calculations from Section <ref> to take into account terms with δ in (<ref>) in order to deduce estimates independent of δ. We only list main changes with respect to Section <ref> here. The starting point is the energy inequality (<ref>), where we first use test function ϕ = 1 and follow Section <ref> to get δu_L^4(Q_T)^4 + u_L^2(0,T;H^1(Ω))^2+ η_t _L^2(0,T;H^1(Γ))^2 ≤ C(κ)(1+ℰ_δ^κ). Next, using the notation for E_δ(t) and ℰ_δ introduced in (<ref>) and (<ref>) respectively, we take a sequence of test functions ϕ_k →χ_[s,t], pass to the limit with k →∞ and using calculations of Section <ref> we get ℰ_δ≤ C_0(1+∫_0^T E_δ(s) s). All terms are handled similarly to their counterparts in Section <ref>, there are however two additional terms with respect to (<ref>). These are treated as follows δ|∫_Q_T |u|^2u·ηe_2y t| ≤δu_L^4(Q_T)^3η_L^4(Γ_T) ≤ C(κ)(1+ℰ_δ^3κ/4)(η_t_L^2(0,T;L^4(Γ)) + η_L^2(0,T;L^4(Γ))) ≤ C(κ)(1+ℰ_δ^3κ/2) + 1/8η_xx_L^2(Γ_T)^2, and δ|∫_Q_Tu·η_t e_2y t| ≤ Cu_L^2(0,T;L^q(Ω^η(t)))η_t_L^2(0,T;L^∞(Γ))≤ C(κ)(1+ℰ_δ^κ). Eventually we recover ∫_Γ_T| η_xx |^2 x t ≤ C(κ)(1+ℰ_δ^3κ). Finally, (<ref>) contain the additional term δ∫_Q_T^ηρ^a+αy t on the left hand side and four more terms on the right hand side. Two terms arise from the δρ^a in the pressure and these terms are estimated exactly as in (<ref>) and (<ref>). Next, similarly as in (<ref>) δ|∫_Q_T^ηu·∂_t φ_hy t| ≤ C(κ)(1+ ℰ_δ^α/γ+κ), and δ|∫_Q_T^η |u|^2u·φ_hy t| ≤ C(1+ ℰ_δ^3/4) We then continue as in Section <ref> and end with (<ref>) and thanks to the choice of parameters α, κ we get (<ref>). We want a similar bound also for δρ^a, however we can not use the same combination of parameters α and κ, because the inequality (<ref>) might not hold if γ is replaced by a. Therefore, we next set κ̅ := 1/5(a-1) and α̅ := 2/5, repeat the calculations of Sections <ref>-<ref> and Section <ref> in order to deduce δ∫_0^T ∫_{η+h<z<η+2H-h}ρ^a+α̅y t≤δ∫_Q_T^ηρ^a+α̅ϕ_hy t ≤ C(κ̅)(1+ℰ_δ^1+3κ̅/2). By interpolation δ∫_0^T ∫_{η+h<z<η+2H-h}ρ^ay t ≤ C(κ̅)(1+ℰ_δ^1-κ̅'), where κ̅':= 1 - (1+3κ̅/2)a-1/a+α̅-1. We continue with estimates of the pressure near the boundary using the function (<ref>). Again, we encounter some additional terms in equation (<ref>). To be more precise, terms δρ^a appear both on the left hand side and in the first term on the right hand side. The left hand side provides the information we seek, while the term on the right hand side is bounded using (<ref>). The integrals of δu·∂_t(φ_he_2) and δ|u|^2u·(φ_he_2) yield the powers ℰ_δ^κ and ℰ_δ^3/4, respectively. 
Hence, we conclude that there exists κ” > 0 such that ∫_Q_Tρ^γ + δρ^a y t≤ C(1+ℰ^1-κ”). Finally, in Section <ref> we estimate δ∫_Q_T |u|^2 by (<ref>) and we obtain ℰ_δ≤ C, ∫_Q_T𝕊(∇u):∇uy t+δ∫_Q_T|u|^4y t+∫_Γ_T|η_tx |^2 x t≤ C. Similarly to Section <ref>, we obtain η_L^2(0,T; H^2+s(Γ))^2≤ C, for some s>0. § LIMIT Δ→ 0 Denote the solution obtained in previous section as (ρ_δ,u_δ,η_δ). The goal is to pass to the limit δ→ 0 to conclude that the limiting functions (ρ,u,η) represent a weak solution in the sense of Definition <ref>. The uniform estimates deduced in Section <ref> give rise to the following convergencies ρ_δ ρ weakly^* in L^∞_#(0,T;L^γ_#(Ω)), u_δ u weakly in L^2_#(0,T;H^1_#(Ω)), η_δ η weakly^* in L^∞_#(0,T;H^2_#(Γ)) and weakly in H^1_#(0,T;H^1_#,0(Γ)). §.§ Limit in the continuity equation We employ standard arguments from the existence theory of weak solutions to the compressible Navier-Stokes equations (see i.e. <cit.>) to deduce that functions ρ and u satisfy the continuity equation in the weak sense, i.e. ∫_Q_Tρ (∂_t φ +u·∇φ) x t=0 for all φ∈ C^∞_#(Q_T). The validity of the renormalized continuity equation remains open at this moment since ρ may not possess enough regularity to use a direct argument. §.§ Limit in the coupled momentum equation First, the kinematic coupling u(t,x,η̂(t,x)) = η_t(t,x)e_2 is recovered using Lemma <ref>. Our aim is to pass with δ to zero in (<ref>). Once again, the terms integrated over Γ_T are linear and therefore their limits are straightforward. Estimates (<ref>) are enough to identify 0 as a limit of terms ∫_Q_Tδu·∂_tφy t and ∫_Q_Tδ|u|^2u·φy t. The limit in the last term on the right hand side is easy. In the remaining terms we follow the existence theory of weak solutions to the compressible Navier-Stokes equations and the main task is to deduce the limit in the pressure term, which is closely related to the validity of the renormalized continuity equation. Both issues are solved by means of the effective viscous flux identity and boundedness of the oscillations defect measure. We get the pointwise convergence of ρ_δ→ρ a.e. in Q_T and thus recover both (<ref>) and (<ref>). §.§ Limit in the energy inequality Finally we need to pass to the limit in (<ref>) in order to prove (<ref>). The limits of the terms on the right hand side are simple. On the left hand side we simply discard the term δ∫_Q_Tϕ |u|^4 since it is surely nonnegative and for the second and fourth term on the left hand since we use lower semicontinuity of convex functions. Therefore it remains to deal with the first term on the left hand side. First, the kinetic energy term is treated the same way as the convective term in the coupled momentum equation. Next, it is easy to use (<ref>) to pass to zero in the term containing δ|u|^2. Pointwise convergence of densities allows us to pass to the limit in the pressure terms of E_δ. Improved estimate (<ref>) allows us to pass to the limit in the last term of E_δ, while a similar procedure as in (<ref>)-(<ref>) provides necessary information to pass to the limit in the term |η_t|^2 of E_δ. Thus we recover (<ref>). The validity of (<ref>) follows from the calculations in Section <ref> with the starting point being the energy inequality (<ref>). Acknowledgments: The work of O. K., V. M. and Š. N. was supported by Praemium Academiae of Š. Nečasová and by the Czech Science Foundation (GAČR) through project GA22-01591S. The Institute of Mathematics, Czech Academy of Sciences, is supported by RVO:67985840. 99 AbelsLiu1 H. Abels and Y. 
Liu: On a fluid-structure interaction problem for plaque growth: cylindrical domain. J. Differential Equations 345, 334–400, 2022. AbelsLiu2 H. Abels and Y. Liu: On a fluid–structure interaction problem for plaque growth. Nonlinearity 36, 537–583, 2022. AdFo R. Adams and J. Fournier: Sobolev Spaces second edition. Pure and Applied Mathematics (Amsterdam), 140; Elsevier/Academic Press, Amsterdam, 2003 Avalos1 G. Avalos, I. L. Lasiecka and R. Triggiani: Higher regularity of a coupled parabolic-hyperbolic fluid-structure interactive system. Georgian Math. J. 15, 403–437, 2008. Avalos2 G. Avalos, P. G. Geredeli and J. T. Webster: Semigroup well-posedness of a linearized, compressible fluid with an elastic boundary. Discrete Contin. Dyn. Syst. Ser. B 23, 1267–1295, 2018. veiga H. Beirão da Veiga: On the existence of strong solutions to a coupled fluid-structure evolution problem. J. Math. Fluid Mech. 6, 21–52, 2004. benevsova2020variational B. Benešová, M. Kampschulte and S. Schwarzacher: A variational approach to hyperbolic evolutions and fluid-structure interactions. J. Eur. Math. Soc. (2023), published online first, doi:10.4171/JEMS/1353. BG17 M. Boulakia and S. Guerrero: On the interaction problem between a compressible fluid and a Saint-Venant Kirchhoff elastic structure. Adv. Differ. 22, 1–48, 2017. breit2021compressible D. Breit, M. Kampschulte and S. Schwarzacher: Compressible fluids interacting with 3D visco-elastic bulk solids. Preprint, arXiv:2108.03042. Breit D. Breit and S. Schwarzacher: Compressible fluids interacting with a linear-elastic shell. Arch. Rational Mech. Anal. 228, 495–562, 2018. Breit2 D. Breit and S. Schwarzacher: Navier-Stokes-Fourier fluids interacting with elastic shells. To appear in Annali della Scoula normale superiore de Pisa, Classe di scienze, doi: 10.2422/2036-2145.202105_090 CanicMuha S. Čanić and B. Muha: Fluid-structure interaction between an incompressible, viscous 3D fluid and an elastic shell with nonlinear Koiter membrane energy. Interfaces Free Bound. 17, 465–495, 2015. muhacanic2 S. Čanić and B. Muha: Existence of a weak solution to a fluid-elastic structure interaction problem with the Navier-slip boundary condition. J. Differential Equations 260, 8550–8589, 2016. casanova J. Casanova: Existence of Time-periodic Strong Solutions to a Fluid-Structure System. Discrete Contin. Dyn. Syst. 39, no. 4, 3291–3313, 2019. grandmont3 A. Chambolle, B. Desjardins, M. J. Esteban and C. Grandmont: Existence of weak solutions for the unsteady interaction of a viscous fluid with an elastic plate. J. Math. Fluid Mech. 7, 368–404, 2005. CS05 D. Coutand and S. Shkoller: Motion of an elastic solid inside an incompressible viscous fluid. Arch. Ration. Mech. Anal 176, no. 1, 25–102, 2005. CS06 D. Coutand and S. Shkoller: The interaction between quasilinear elastodynamics and the Navier-Stokes equations. Arch. Ration. Mech. Anal. 179, 303–352, 2006. FNPS E. Feireisl, Š. Matušů-Nečasová, H. Petzeltová and I. Straškraba: On the Motion of a Viscous Compressible Fluid Driven by a Time-Periodic External Force. Arch. Rational Mech. Anal. 149, no. 1, 69–96, 1999. FMNP E. Feireisl, P. B. Mucha, A. Novotný and M. Pokorný: Time-periodic Solutions to the Full Navier-Stokes-Fourier System. Arch. Rational Mech. Anal. 204, no. 3, 745–786, 2012. FeNobook E. Feireisl and A. Novotný: Singular limits in thermodynamics of viscous fluids, 2^nd edition. Advances in Mathematical Fluid Mechanics. Birkhäuser/Springer, Cham, 2017. grandmont4 C. 
Grandmont: Existence of weak solutions for the unsteady interaction of a viscous fluid with an elastic plate. SIAM J. Math. Anal. 40, no. 2, 716–737, 2008. grandmont2 C. Grandmont and M. Hillairet: Existence of global strong solutions to a beam-fluid interaction system. Arch. Rational Mech. Anal. 220, 1283–1333, 2016. grandmont1 C. Grandmont, M.  Hillairet and J. Lequeurre: Existence of local strong solutions to fluid-beam and fluid-rod interaction systems. Ann. Inst. H. Poincaré Anal. Non Linéaire 36, no. 4, 1105–1149, 2019. grandmont5 C. Grandmont, M. Lukáčová-Medviďová, Š. Nečasová: Mathematical and numerical analysis of some FSI problems. Fluid-structure interaction and biomedical applications, 1–77. Adv. Math. Fluid Mech. Birkhäuser/Springer, Basel, 2014 continuous G. Guidoboni, M. Guidorzi, M. Padula, Continuous Dependence on Initial Data in Fluid–Structure Motions. J. Math. Fluid Mech. 14, 1–-32,  2012. KMN M. Kalousek, S. Mitra and Š. Nečasová: Existence of weak solution for a compressible multicomponent fluid structure interaction problem. Preprint, arXiv:2301.11216. contact M. Kampschulte, B. Muha and S. Trifunović: Global weak solutions to a 3D/3D fluid-structure interaction problem including possible contacts, Preprint, arxiv:2304.11809. unrestricted M.  Kampschulte, S.  Schwarzacher, G.  Sperone: Unrestricted deformations of thin elastic structures interacting with fluids. J. Math. Pures. Appl. 173, 96–148, 2023. KT12 I.  Kukavica and A. Tuffaha: Regularity of solutions to a free boundary problem of fluid-structure interaction. Indiana Univ. Math. J. 61, 1817–1859, 2012. LeRu14 D. Lengeler and M. Růžička: Weak solutions for an incompressible Newtonian fluid interacting with a Koiter type shell, Arch. Rational Mech. Anal. 211, 205–255, 2014. Leq J. Lequeurre: Existence of strong solutions for a system coupling the Navier-Stokes equations and a damped wave equation. J. Math. Fluid Mech. 15, 2, 249–271, 2013 Leq1 J. Lequeurre: Existence of strong solutions to a fluid-structure system. SIAM J. Math. Anal. 43, 1, 389–410, 2011. MaMuNeRoTr V. Mácha, B. Muha, Š. Nečasová, A. Roy and S. Trifunović: Existence of a weak solution to a nonlinear fluid-structure interaction problem with heat exchange. Comm. Partial Differential Equations 47, 1591–1635, 2022. MR4189724 D. Maity, J.-P. Raymond and A. Roy: Maximal-in-time existence and uniqueness of strong solution of a 3D fluid-structure interaction model. SIAM J. Math. Anal. 52, no. 6, 6338–6378, 2020. RoyMaity D. Maity, A. Roy and T. Takahashi: Existence of strong solutions for a system of interaction between a compressible viscous fluid and a wave equation. Nonlinearity 34, 2659–2687, 2021. MaityTakahahi D. Maity and T. Takahashi: Existence and uniqueness of strong solutions for the system of interaction between a compressible Navier-Stokes-Fourier fluid and a damped plate equation. Nonlinear Anal. Real World Appl. 59, Paper No. 103267, 2021. MS C. Mîndrilă and S. Schwarzacher: Time-periodic weak solutions for an incompressible Newtonian fluid interacting with an elastic plate. SIAM J. Math. Anal. 54, no. 4, 4139–4162, 2022. MS1 C. Mîndrilă and S. Schwarzacher: Time-periodic weak solutions for the interaction of an incompressible fluid with a linear Koiter type shell under dynamic pressure boundary conditions. Preprint, arXiv:2303.13625. sourav S. Mitra: Local existence of strong solutions for a fluid-structure interaction model. J. Math. Fluid Mech. 22, Paper No. 60, 2020. MuhaSch B. Muha and S. 
Schwarzacher: Existence and regularity for weak solutions for a fluid interacting with a non-linear shell in 3D. Annales de l'Institut Henri Poincaré C 39, No. 6, 1369–1412, 2023. RV14 J.-P. Raymond and M. Vanninathan: A fluid-structure model coupling the Navier-Stokes equations and the Lamé system. J. Math. Pures Appl. 102, 546–596, 2014. rouba T. Roubíček: Nonlinear Partial Differential Equations with Applications. International Series of Numerical Mathematics 13 Birkhäuser Verlag, Basel, 2005. Sebastian S. Schwarzacher and M. Sroczinski: Weak-strong uniqueness for an elastic plate interacting with the Navier-Stokes equation. SIAM J. Math. Anal. 54, no. 4, 4104–4138, 2022. simon J. Simon: Sobolev, Besov and Nikolskii Fractional Spaces: Imbeddings and Comparisons for Vector Valued Spaces on an Interval. Annali di Matematica pura ed applicata (IV), Vol LCVII, 117–148, 1990. triebel H. Triebel: Theory of function spaces. Monographs in Mathematics, 100. Birkhäuser Verlag, Basel, 2006. Tr S. Trifunović: Compressible fluids interacting with plates: regularity and weak-strong uniqueness. J. Math. Fluid Mech. 25, no. 1, Paper No. 13, 28 pp, 2023. Tri1 S. Trifunović and Y. G. Wang: On the interaction problem between a compressible viscous fluid and a nonlinear thermoelastic plate. accepted for publication in SIAM J. Math. Anal. trwa S. Trifunović and Y. G. Wang: Existence of a weak solution to the fluid-structure interaction problem in 3D. J. Differential Equations 268, 1495–1531, 2020.
Delft University of Technology Delft The Netherlands b.nasrulin@tudelft.nl Delft University of Technology Delft The Netherlands g.ishmaev@tudelft.nl Delft University of Technology Delft The Netherlands j.decouchant@tudelft.nl Delft University of Technology Delft The Netherlands peer2peer@gmail.com Possible manipulation of user transactions by miners in a permissionless blockchain systems is a growing concern. This problem is a pervasive and systemic issue, known as Miner Extractable Value (MEV), incurs highs costs on users of decentralised applications. Furthermore, transaction manipulations create other issues in blockchain systems such as congestion, higher fees, and system instability. Detecting transaction manipulations is difficult, even though it is known that they originate from the pre-consensus phase of transaction selection for a block building, at the base layer of blockchain protocols. In this paper we summarize known transaction manipulation attacks. We then present , an accountable base layer protocol specifically designed to detect and mitigate transaction manipulations. is built around accurate detection of transaction manipulations and assignment of blame at the granularity of a single mining node. forces miners to log all the transactions they receive into a secure mempool data structure and to process them in a verifiable manner. Overall, quickly and efficiently detects reordering, injection or censorship attempts. Our performance evaluation shows that is also practical and only introduces a marginal performance overhead. LØ: An Accountable Mempool for MEV Resistance Johan Pouwelse August 1, 2023 ============================================= plain plain § INTRODUCTION Enabled by blockchain technologies, Decentralised Finance (DeFi) tools and mechanisms have generated a lot of interest as building blocks for novel digital markets, both in terms of practical applications amounting to over 80 billion USD total value locked at the moment of writing, and in terms of significant research interest <cit.>. Furthermore, these tools enable monetization mechanisms for the new paradigm of Web3 development, providing alternatives to monopolistic centralised digital platforms. Decentralised exchanges, lending markets, derivatives, and other products built on permissionless blockchains are just some examples of these novel financial applications. However, these developments, are undermined by unresolved issues of transaction manipulations, such as censorship, injection, and re-ordering of transactions, at the expense of application users at underlying layers of blockchain protocols. This problem led to the notion of Miner Extractable Value (MEV)[Sometimes also referred to as Blockchain Extractable Value, or Maximum Extractable Value.], which refers to the maximum revenue a miner can obtain from benign or manipulative transaction selection for block production <cit.>. This problem is a pervasive and systemic issue at large scale as exemplified by the Ethereum blockchain, where MEV transaction manipulations have generated over 320 USD million of revenue for bots and miners <cit.>. Furthermore, over 90% of the blocks produced on Ethereum contain MEV transactions <cit.>. Such manipulations not only undermine users' trust, but also induce systemic issues like congestion, inflated fees, and system instability <cit.>. We argue that the root cause of MEV is a lack of accountability at the base layer of permissionless blockchain protocols, sometimes referred to as 'dark forest' <cit.>. 
By base layer, we refer to the processing steps that happen before consensus has to be reached on a block, such as sharing pending transactions (recorded in the mempool) with other miners and assembling them into blocks. In contrast to what happens at the consensus layer, at the base layer miners are expected to act as trusted parties. As such, a miner that creates a new block can arbitrarily select the transactions from its mempool. In practice, miners can therefore arbitrarily censor, inject or reorder transactions <cit.>. While this problem has received certain attention in the context of MEV mitigation tools, there are no comprehensive solutions preventing these types of transaction manipulations <cit.>. Most of the proposed solutions in this category focus on the application layer and on consensus layer mitigation tools <cit.>. Many of these tools do not prevent MEV attacks but rather aim to mitigate them. The most well known approach, Proposer Builder Separation (PBS) <cit.>, is implemented with the Flashbots middleware on Ethereum and does not prevent MEV, but only the redistribution of its associated revenues. Some proposed theoretical solutions, such as fair ordering consensus protocols <cit.>, prevent transaction manipulations. However, these algorithms assume permissioned settings and small network sizes, and require important modifications of the blockchain consensus layer. As transaction manipulations arise from the lack of accountability at the base layer of blockchain protocols, we argue that comprehensive mitigation of MEV requires addressing trust assumptions at this particular layer. To address them, we design , an accountable mempool protocol. In miners become accountable for the process of transaction selection and ordering. As new transactions are propagated among miners, they exchange and record commitments on the content of their mempools with each other. New transactions are shared in bundles, and commitment is recorded on a whole transaction bundle. This provides a local partial ordering of transactions. Our system is based on pairwise commitments that are exchanged during a mempool reconciliation phase, which is executed before the consensus protocol. This allows miners to witness each others' transaction selection and commit to a particular order and set of transaction that they will use for block generation. Therefore, ensures that any transaction manipulation, such as transaction censorship, injection and reordering, can be detected and proven by a correct node. Our system is agnostic to a specific type of consensus protocol in a permissionless blockchain system. It can be seamlessly integrated with existing blockchain solutions, as a relatively simple modification of a Peer-to-Peer (P2P) protocols that propagates transactions and blocks. In addition, it does not require any additional cryptographic setups, and it does not impose a significant performance overhead. We leverage Minisketch data structure for the reconciliation of mempools to implement bandwidth-efficient commitments  <cit.>. This paper makes the following contributions: ∙ We identify key types of transaction manipulation attack primitives at the base layer. We propose a new taxonomy based on these attack primitives that can grasp all potential MEV attacks. We discuss the stages of the transaction processing pipeline that allow for these manipulations by miners. (<ref>). 
∙ After describing our system model (<ref>), we provide an overview of , an accountable base layer protocol specifically designed to prevent transaction manipulations. is built around accurate detection of transaction manipulations and assignment of blame at the granularity of a single mining node. We discuss specific policies targeted at the detection of different MEV manipulations in (<ref>). ∙ We detail how detects transaction manipulation attacks and potential mechanisms for the enforcement of these policies (<ref>). We further discuss possible attacks against accountability in . ∙ We present our performance evaluation, which demonstrates that is practical. It is both bandwidth and memory efficient. For example, it only requires up to 10 MB of additional storage for a network of 10,000 nodes and a workload of 20 transactions per second. At the same time, it is at least four times more efficient than the classical flooding-based mempool exchanges (<ref>). § TRANSACTION MANIPULATIONS AT THE BASE LAYER We distinguish the base layer of a blockchain system from its consensus layer. In the complete life-cycle of a transaction from its creation to its inclusion in a blockchain, the base layer corresponds to the steps that precedes the block consensus phase as illustrated in Fig. <ref>. These steps include the creation of the transaction and its initial sharing, its inclusion in the mempools, the reconciliation of the mempools between miners, and the inclusion of the transaction in a candidate block. We emphasize that the block-building phase, where a miner selects transactions that it includes in a candidate block, is a pre-consensus phase. Indeed, while sometimes block building is described as part of blockchain protocols, it is strictly speaking not a part of the consensus mechanism as blocks can be produced offline, as illustrated by PBS in Ethereum and selfish mining in Bitcoin <cit.>. We further distinguish the base layer from the network layer of blockchain protocols, as the latter is required in all transaction processing phases, including during consensus. The base layer typically provides much lower guarantees against misbehaving nodes than the consensus layer. Miners only conduct checks on the validity and priority of transactions (which is related to miners fee) and add it to a local pool of unconfirmed transactions referred to as the 'mempool' <cit.>. However, miners are considered to be trusted parties with regard to the selection, withholding, and ordering of transactions <cit.>. Therefore, all phases of the transaction life-cycle that precede consensus allow transaction manipulations. §.§ Transaction Manipulation Primitives We consider practical attacks that include reordering of transactions by miners. These attacks have been observed in practical settings and described in academic works that relate to MEV <cit.>. In practice, these attacks combine different types of transaction ordering manipulations. A common taxonomy of MEV attacks is application-specific and depends on the source of attack revenue. Well known attack types include sandwich-attacks, front running, back running, which are associated with decentralized exchanges, sniping, which is associated with Non Fungible Token auctions, and liquidations, which are associated with collateralized loan protocols. This taxonomy evolves as new MEV attacks rapidly emerge with new applications. In this paper we consider a different taxonomy focusing on specific attack primitives on the base layer. 
These primitives allow a broad range of MEV, either on their own or in combination. Namely: censorship, injection, and re-ordering of transactions. Censorship. Censorship is the ability of a miner to delay or ignore new transactions. Censorship can enable different financially motivated MEV attacks, such as sniping, executed alone or in combination with other primitives. For example, when receiving transactions for a bid in Non Fungible Token auctions, a faulty miner can censor competing transactions to become the auction winner. This censorship mechanism can take place either during the mempool inclusion phase, or during the block inclusion phase. Mempool Censorship. Faulty miners can ignore transactions received from some other nodes, and exclude their valid transactions from their mempool. We assume that a faulty miner either provides a fake transaction reception acknowledgment, or does not acknowledge it at all. This type of attack enables censorship at the level of a mempool <cit.>, and facilitates transaction manipulation based on front-running. Blockspace censorship. Faulty miners can exclude valid transactions from blocks, even after acknowledging their reception and including it in the mempool. This enables transaction censorship at the level of blockspace. Injections. We assume that honest miners include transactions received from other nodes in new blocks in a deterministic order. Honest miners can also add their own new transactions, under the assumption that updated mempool commitment is shared with other nodes and acknowledged. Faulty miners inject new transactions in blocks in an arbitrary manner, without prior sharing of the updated mempool and without acknowledgements. This type of attack can result in certain types of transaction manipulations such as front-running, sandwich, back-running <cit.>. Reordering. Faulty miners can reorder transactions in a mempool and in a block in a way that deviates from a protocol and violates expectations of other nodes. Reordering is different from an injection attack, since a faulty miner does not add new transactions itself, but manipulates the order of transaction received from other nodes. §.§ Transaction Processing Stages Attacks can happen at different stages of the transaction life-cycle. We model the processing of a transaction in a generic blockchain system in Fig. <ref>. This processing happens in four stages: (I) initial transaction sharing, (II) mempool reconciliation, (III) block building and (IV) block settlement. In the following we describe each stage, and discuss the corresponding attacks that enable transaction manipulations such as MEV. Stage I. Initial transaction sharing. A transaction is first created at the client side. The client signs the transaction with its private key. The transaction contains all the required context to be processed by miners, such as signature, UTxO address, execution commands, transaction fee, etc. The client shares the transaction with a subset of peers that it personally knows or whose identity is publicly known (step 1). The peers receive the transaction and attempt to prevalidate it (step 2). Our system is agnostic in respect to the choice of specific consensus protocol which will define the requirements for transaction prevalidation. For example, successful prevalidation of a transaction may require: valid signature from a client, sufficient amount of in a client account, and inclusion of a sufficient transaction processing fee. 
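To make the prevalidation step concrete, the following is a minimal sketch of such checks in Python; the transaction fields, the helper callables and the fee threshold are illustrative assumptions rather than part of any specific client or protocol discussed here.

```python
from dataclasses import dataclass

MIN_FEE = 1  # illustrative threshold; a real node derives this from its local policy

@dataclass
class Transaction:
    sender: str        # client identifier (e.g., address or public key)
    payload: bytes     # execution commands / UTxO references, treated as opaque here
    fee: int           # processing fee offered to miners
    signature: bytes   # client signature over the transaction content

def prevalidate(tx: Transaction, verify_signature, get_balance) -> bool:
    """Return True if the transaction may enter the local mempool.

    verify_signature(sender, payload, signature) and get_balance(sender) are
    placeholders for the node's cryptographic and state back-ends.
    """
    if not verify_signature(tx.sender, tx.payload, tx.signature):
        return False   # invalid client signature
    if get_balance(tx.sender) < tx.fee:
        return False   # client cannot cover the offered fee
    if tx.fee < MIN_FEE:
        return False   # fee below the local acceptance threshold
    return True
```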
Miners that successfully prevalidate a transaction insert it in their local mempool storage. Optionally, miners might respond to the client with the transaction status, to acknowledge inclusion of a transaction in a mempool (step 3). Also optionally, client can query a miner to get an acknowledging of transaction inclusion in a mempool. A malicious peer can censor the transaction at the point of prevalidation, without adding it to the mempool performing Exclusion From Mempool. For example, a peer can exclude a client based on its id, e.g. all transaction originating from a specific address. At the same time, client and peer can collude to include an invalid transaction into a mempool. Stage II. Mempool reconciliation. At this stage peers share their transaction mempools (step 1). Typically, a mempool exchange is implemented to first share the transaction ids, and only later selectively share the transaction content of the corresponding ids. Once a miner receives the transaction content, it prevalidates the transaction (step 2), similarly to stage I. In theory, this stage allows miner to converge to the same transaction set for any peer-to-peer network. Unfortunately, in practice, there is no guarantee that miners will converge. Client can be partially or completely excluded from learning particular transactions when communicating with malicious peers. Moreover, miners can inconsistently exchange their mempools. Finally, without a requirement for the mempool reconciliation, malicious miner can exclude or include any transaction without being detected by other miners. Different types of Injection attacks and Exclusion attacks can be performed by faulty miners at that stage. For example, a malicious miner receiving a high-fee transaction can withhold it from sharing with other nodes in order to include it in own block later. Stage III. Block building. Upon creating a block, a miner populates it based on information stored in its local mempool data (step 1). For each block, the miner selects a subset of transactions to fill up the blockspace (step 2). The selected transactions are included in the block in a specific order chosen by the miner (step 3). A final block contains additional metadata, like signature, nonce, or timestamp (step 4). Most of the reported MEV is happening at the stage of block building. Indeed, miners can freely select, exclude, or order transactions to maximize their profit, performing Order manipulation and Blockspace censorship. Stage IV. Block settlement. is agnostic to the specific consensus process to finalize the blocks. We model miner selection as a random process, where a selected miner build its block and sends it to other miners. The attacks on this stage are extensively discussed in previous works. The most discussed manipulations include block withholding, block reordering and equivocation attacks. We consider the accountability on this stage out of scope. Our solution can be combined with other solutions addressing the manipulations on this stage, such as Polygraph <cit.>. § SYSTEM MODEL This section describes our system model, which is the classical one for blockchain protocols . The mining nodes (miners) belong in a set Π = {p_1,p_2, …} and communicate with each other by exchanging messages over the network. We assume that each miner is equipped with a cryptographic key pair, and is uniquely identified by its public key. Nodes have access to a cryptographic signature scheme and messages are authenticated. Communication Overlay. 
Nodes form an undirected communication graph that is not assumed to be fully connected. Nodes are free to unilaterally add or drop local connections. Nodes are able to leave and later rejoin the network. Nodes share messages to their overlay neighbors through their direct connections. We use notation N_i to refer to the neighbors of a node p_i, i.e., the nodes that are currently directly connected with it. Bootstrap and Peer Discovery. We assume that nodes that join the system are able to contact bootstrap nodes that facilitate node discovery. When (re)joining the network, each correct node requests a set of known active nodes from the bootstrap nodes. The bootstrap nodes are correct, i.e., they serve all nodes and unbiasedly propose a node from a set of locally known. As a result the nodes operate in one network. Continuous sampling. Correct nodes continuously sample the network through a discovery procedure. is build on top of Byzantine resilient uniform sampling algorithm <cit.>. Malicious nodes can delay the discovery, however, it is guaranteed that correct node will eventually be able to communicate. Types of Nodes. In different consensus protocols nodes participating in block creation can be called validators, proposers, builders, etc. Here we only consider the role of block creator and refer to the nodes that create blocks as miners. For the sake of simplicity we do not consider light clients, which our model can trivially cover without modifications. Miners can create new transactions, and they can also propose new blocks with ordered transaction to be included in the blockchain. All nodes maintain a list of unconfirmed transactions (mempool) and exchange it with other nodes in the network through messages. §.§ Attacker Model In our network, each node is either correct or faulty. Correct nodes adhere to the reference protocol without data tampering and generate valid messages. Faulty nodes, on the other hand, can deviate arbitrarily from the reference protocol. We assume that a faulty miner can execute any of the transaction manipulations we previously described: censoring transactions, injecting new transactions out-of-order, or deviating from the canonical transaction order <cit.>. These attacks can be carried out by a faulty miner in a naive way by sending the same message (e.g., a reordered set of transactions) to all neighboring nodes, or they can attempt to evade detection of manipulations by equivocating, i.e., sending conflicting messages to different nodes. §.§ Accountability We consider the standard accountability property for distributed systems and protocols <cit.>. We define accountability as the ability to detect transaction manipulations and assign blame at the granularity of a single mining node. In asynchronous environments, an adversary can try to evade detection as it is challenging to distinguish between a misbehaving node that deliberately ignores requests and a slow node. To circumvent this difficulty, we divide blames into two types: suspicions and exposures. An exposure is a verifiable proof of misbehavior, while a suspicion is a lack of response to a request. We consider two desirable properties of accountability: Accuracy: (1) Temporal. No correct node is perpetually suspected by a correct node, and (2) No false-positives. No correct node is exposed as misbehaving by other nodes. Completeness: (1) Suspicion completeness. Every misbehaving node that ignores requests is perpetually suspected by all correct nodes. (2) Exposure completeness. 
Given an exposure message on node p_i, every correct node exposes node p_i as misbehaving. § : ACCOUNTABLE BASE LAYER   In this section we present  which achieves accountability at the base layer. Specifically, is implemented as a modification of mempool reconcilation and block building stages. §.§ New Explicit Policies at the Base Layer This section introduces , our accountable base layer protocol for permissionless blockchains. improves over the `vanilla' mempool reconciliation and block building protocols of permissionless blockchains (stages 2 and 3 of Fig. <ref>). To enable accountability we require to modify some currently implicit or ill-defined polices at the base layer. Our observation is that current implementations of blockchain systems use implicit policies that significantly complicate the detection of transaction manipulations. First, a transaction censorship is not possible to attribute to a miner given an unreliable transaction relay. Every miner has its own relaying policy, and even perfectly correctly behaving nodes may choose not to relay anything at all. Second, miners can build a block with any transactions from the mempool, or even inject new transactions during the block creation. Third, there is no `canonical order' inside a block, allowing for any type of reordering. Instead of these ill-defined policies we propose three alternative explicit policies to enable the detection of any transaction manipulations, as presented in Table <ref>. In a nutshell, introduces three new explicit policies: Inclusion of All Transactions, Transaction Selection in Received Order, and Verifiable Canonical Order in a Block. Transaction manipulations are detected as violations of our explicit policies during the mempool reconciliation, or when inspecting the content of a block. Inclusion of All Transactions. Each miner includes all valid transactions it encountered during the system run in its locally maintained append-only transactions set. Once two nodes are connected they directly exchange their known transactions. The transaction exchange is implemented as a sequence of set reconcilations. The miners exchange multiple transactions in one transaction bundle. This allows two nodes to efficiently obtain the transactions they are missing and as a result end up with the same transaction sets. The key ability of is that after a successful round of reconciliation both correct nodes are ensured to have a common set of observed transactions. To ensure that none of the transactions is censored and all processed in the same way miners keep all valid transactions they encounter. Miners commit to be able to reveal all transactions they know about, if necessary. Transaction Selection in Received Order. During the reconciliation process, each miner commits on the order it received a transaction bundle from another miner. To mitigate any out-of-order injections, the miners are required to process the transactions following their insertion order in their mempool. As miners learn and commit on their mempool transactions, the transactions are then naturally ordered according to the order with which they were received. Verifiable Canonical Order in a Block. Transactions that are inserted into a newly created block are selected according to a deterministic process. In more details, committed transaction bundles are first assembled following sequential order. The order inside a bundle is then pseudo-random: transactions are shuffled using a known shuffling algorithm and an order seed value. 
The order seed value is based on the hash of the last created block. §.§ Mempool Reconciliation The mempool reconciliation process (cf. <ref>) forces miners to correctly share the transactions they accepted into their mempool. In practice, 's mempool reconciliation uses two techniques: (i) anti-entropy gossip reconciliations <cit.>; and (ii) signed commitments <cit.>. Nodes maintain a mempool of all pending transactions and keep a record of all valid transactions they have ever received. Nodes reconcile their mempools to disseminate transactions throughout the system and generate commitments that are exchanged during mempool reconciliations. These commitments cover not only the transactions in the current mempool, but all valid transactions ever received by a node at the time of reconciliation. Mempool reconciliation serves two purposes: (1) it allows miners to learn about new transactions from their neighbors; and (2) it ensures that miners commit to a specific transaction partial order during reconciliation. This partial order must be maintained during block creation. Miners mutually commit to the order by first exchanging a commitment. Miners are inherently motivated to receive transactions from other miners. However, they only disclose the transactions after their counterpart has committed to a specific order of transactions. Reconciliation Algorithm. In Algorithm <ref> we provide 's pseudocode for a miner p_i ∈Π. Periodically, miners require their neighbours to commit for new transactions by sending them a request for a new commitment (line 4-10). While the request is pending, the node is suspected. We refer to C_i as a commitment for the set of transactions included by miner p_i. At the same time, the commitment serves as a cryptographic checksum of included mempool transactions. During the reconciliation process, nodes first exchange a signed commitment C. After receiving the commitments of their neighbours, nodes calculate their transaction set differences with them (line 14). Since the commitment is signed, it can later be used as a proof of inclusion of transactions—any receiver can use the commitment C_j as verifiable evidence that node p_j should have included transactions in its mempool. Our mempool reconciliation between a miner p_i and miner p_j works in two phases. In the first phase, miner p_i sends to miner p_j a request to commit to new set of transactions (line 8). A peer p_j receives the request and responds either with its new C_j that already includes all transactions (line 18), or with a new commitment fixing locally the order of transactions Δ C_ij, i.e., a promise to apply them immediately after all known local transactions C_j (line 16). In the second phase, miner p_i sends all the transactions corresponding to the Δ C_ij to peer p_j (lines 23-25). All miners store at least the last received commitments from their overlay neighbors (line 13). On receiving a checksum C it is first validated against previously received set C (line 19-21). The set C is grow-only and keeps all the transactions committed by the node. If C is inconsistent against the previously reported messages C, the evidence of the faulty behavior is shared with other nodes (line 21). This inconsistency could happen for example when a faulty node is trying to hide a previously reported message or does not report a message received from other nodes. Example. Fig. <ref> illustrates a possible mempool reconciliation. Nodes A, B, and C first exchange transaction commitments. 
Note that commitments can also be received indirectly, but this scenario is not included in Fig. <ref> for simplicity. Node A sends a request, along with the mempool commitment C_A, to node B. Node B reconciles commitment C_A with its own C_B and promises to include node A's missing transactions immediately after all transactions C_B. Node A promptly sends the missing transaction 2 to node B. Shortly afterward, node C reconciles with node B in a similar manner. However, this time, node B promises to include transactions of node C only after the transactions 1,3,4,2. Let's assume that later, node B creates a new block, possibly because it is elected as a consensus leader. Node B must then select all transactions in the order of the commitment it made, which is 1,3,4,2,5,6. Implementation Details. employs Minisketch and Bloom Clocks to implement the mempool reconciliation protocol efficiently. A commitment in this context includes both the miner's Bloom Clock and Minisketch. These data structures serve two primary purposes: (1) they identify inconsistencies with the digests shared in previous rounds, and (2) they facilitate set reconciliation to identify a miner's unknown transactions. A Minisketch is a data structure proposed for the bandwidth-optimized exchange of transaction sets between nodes in the Bitcoin network <cit.>. Initially proposed for the reconciliation of mempool data, it can also be used to optimize block propagation. In this protocol, a sketch serves as a "set checksum". The primary advantage of Minisketch is its ability to reconcile quickly and accurately. However, it has a downside: the requirement to decode the reconciled Minisketch, which can fail. In such cases, we repeat the process by dividing the set in half and sending two sketches. A Bloom Clock is a space-efficient, probabilistic data structure used for the partial ordering of events in distributed systems <cit.>. uses Bloom Clocks to swiftly detect inconsistencies between two sets. In rare cases, when a Bloom Clock fails to detect an inconsistency due to collisions, we resort to a hash checksum. We employ Bloom Clocks to speed up the verification of inconsistencies between two sets. The pairwise commitment scheme ensures that miners are committed to all transactions they discover according to the order with which they are received. §.§ Block Building To avoid manipulations during the block building stage, we slightly modify the `vanilla' block building process with our new policies. The modified block-building process is shown in Fig. <ref>. Transaction Selection. Peers select all transactions they encounter during the mempool reconcilation phase and that are included in the mempool (step 1). Miners must verify these transactions. The transactions that are not valid are not included in the block. The transactions that have fees lower than some threshold are not included in the block, and are rejected (step 2). Transaction Ordering. The selected transactions are ordered in a verifiable canonical way (step 3). Recall from the mempool reconciliation process that transactions are partially ordered with the commitments order as the commitments define the order between transaction bundles. We also define a deterministic pseudo-random order function inside each of the bundle. We use a hash of previous block as a seed for the intra-bundle order function. Block Inspection. Next, a block is created (step 4) and shared with the network. 
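As a rough illustration of the selection and ordering policy just described (steps 1 to 3), the sketch below assembles a block body from committed transaction bundles. The bundle and transaction representations, the use of SHA-256 and Python's built-in shuffle are stand-ins chosen for the example; they are assumptions, not LØ's reference implementation.

```python
import hashlib
import random

def canonical_order(bundles, prev_block_hash: bytes):
    """Order transactions deterministically: bundles follow their commitment
    order, and transactions inside each bundle are shuffled pseudo-randomly,
    seeded by the hash of the previous block."""
    seed = int.from_bytes(hashlib.sha256(prev_block_hash).digest(), "big")
    rng = random.Random(seed)              # stands in for an agreed shuffling algorithm
    ordered = []
    for bundle in bundles:                 # sequential inter-bundle order
        txs = list(bundle)
        rng.shuffle(txs)                   # deterministic intra-bundle shuffle
        ordered.extend(txs)
    return ordered

def build_block(bundles, prev_block_hash: bytes, is_valid, min_fee=1):
    """Select every committed transaction that is valid and pays enough fee
    (steps 1-2), then place the selection in canonical order (step 3).
    Each transaction is assumed to be a dict with at least a 'fee' key."""
    selected = [[tx for tx in bundle if is_valid(tx) and tx["fee"] >= min_fee]
                for bundle in bundles]
    return canonical_order(selected, prev_block_hash)
```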
Given the mempool commitments, any node can verify the produced block by inspecting its content with respect to the reference protocol (step 5). Note that block inspection is a separate process than block validation, and does not affect the block inclusion into the chain. Any violation exposes the block creator (step 6), by comparing the block content with the known commitments. Our protocol is agnostic to the specific punishment mechanisms, but we discuss some options in Section <ref>. During the block building process, miners select and order transactions deterministically. § DEALING WITH ATTACKS This section presents an analysis of various attacks and discusses how the integration of detection mechanisms and a broad spectrum of enforcement tools can counter them. §.§ Detection of Transaction Manipulations Every node utilizes a block inspection module to detect violations. Nodes are required to disclose all their known transactions and they must consistently disclose each commitment or they run the risk of being identified as faulty. An inconsistency is detected when comparing two commitments, provided both sets contain at least one transaction. Nodes are obligated to respond to commitment requests. Failure to do so results in an eventual fault suspicion by every correct miner in the network. Reconciliation messages and proposed blocks are validated against the protocol rules. Violations, such as censorship of particular transactions, commitment inconsistencies, or message tampering, can then be identified. Evidence of faulty behavior is disseminated across the network by correct miners. Countering Attacks during Mempool Reconciliation. Every node involved in a mempool reconciliation retains a signed commitment acquired from other nodes, which can be used to identify faulty nodes. Sufficient interaction with correct nodes in the network makes it virtually impossible for a node to manipulate its mempool and not be detected. The mempool reconciliation process thus ensures reliable detection of injection and mempool censorship attacks. A misbehaving miner attempting a front-running attack, for example, may inject a new transaction out-of-order. However, this attack is swiftly detected as the injected transaction would be inconsistent with previous commitments. Enhancing Detection Resilience. After a mempool reconciliation between two miners, they can mutually detect each other's violations. Throughout the operation of the system, miners collect commitments from all their overlay neighbors. Consequently, an overlay neighbor can detect a violation. However, if an overlay neighbor is offline, it cannot broadcast the exposure message to other miners. To enhance resilience, miners share between each other a sample of the last commitments they received. This allows other non-neighbouring miners to also detect violations. Countering Attacks during Block Building. The order function ensures that order manipulation attacks can be detected, as any block where the transaction order deviates from the canonical one will be detected. Similarly, a block-space censorship attack is detected as a deviation from the selection function rules. §.§ Suspicion and Misbehavior Sharing provides guarantees that violation of block production rules can be reliably detected by other nodes in the block inspection process and that misbehaving node will be exposed. Our accountability mechanism provided in that consists of suspicions, equivocation detection, and exposure. Suspicions. 
The Accountability Mechanism incorporates liveness checks and propagates transaction commitments between nodes through indirect paths. If a node does not respond to transaction requests before a timeout, it is suspected by the requester. The requester may resend the request multiple times before suspecting the node. Correct nodes retain all pending requests. If a node is suspected, the requester broadcasts the suspected requestee's identity to other nodes, along with information on pending requests and the requestee's last known commitments. A node may retrieve pending requests after a partition or a crash. Once it publicly responds to all pending requests, no correct node will suspect it. In Fig. <ref>, node B has an earlier commitment (C_A, n) from node A. Node C has the latest commitment (C_A, n+1) from node A. Node B sends a request for a commitment on a particular transaction τ from node A, but does not receive a response. After a timeout, node B suspects node A and broadcasts a suspect status along with the latest commitment (C_A, n) from A that is available to B to its neighbors, in this case, to node C. Equivocation Detection. A consistency check occurs when a node is suspected. Commitments are append-only sets and thus follow chronological order. When a node has two commitments from a neighbor, it can easily detect any inconsistency between the previous commitment n and the latest commitment n+1 using its bloom clock. Nodes can receive commitments from other nodes both directly and indirectly. Consider an example of suspicion and consistency check in Fig. <ref>. Node C receives two commitments originating from node A, i.e., commitment (C_A, n+1) from node B, and (C_A, n+1) from node A. Node B has tried to get a commitment on transaction τ from A and suspects A because of the high response delay. Node C will check whether (C_A, n) and (C_A, n+1) are consistent with each other. * If these commitments are inconsistent, node C exposes A as a misbehaving node. * If (C_A, n) and (C_A, n+1) are consistent and (C_A, n+1) already includes a commitment on a transaction τ, then node C will share the latest commitment (C_A, n+1) with B. * If (C_A, n) and (C_A, n+1) are consistent but (C_A, n+1) does not include a commitment on τ, then C will send a request for commitment on τ to C and suspect C. Any mempool counterpart can submit a proof of misbehavior showing inconsistency between a mempool commitment and a produced block. §.§ Possible MEV Prevention Mechanisms Reliable detection and blame assignment allow for MEV mitigation through the enforcement of policies. The choice of specific enforcement mechanisms depends on the consensus protocol. Given that is agnostic to the particular consensus algorithm used, a detailed analysis of specific enforcement mechanisms is beyond the scope of this paper. For instance, in Proof-of-Stake (PoS) consensus algorithms, various slashing strategies can be applied to misbehaving nodes <cit.>. Since validating nodes in PoS must invest a certain amount of funds to become validators, slashing of stake incurs a financial loss. For consensus algorithms based on the reputation of validating nodes, slashing of reputation can equivalently serve as a penalization mechanism <cit.>. Misbehaving nodes can also be penalized at the network layer level, such as temporary disconnection from the network <cit.>. 
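All of these enforcement options hinge on being able to demonstrate, with transferable evidence, that two commitments signed by the same miner are inconsistent. A much-simplified, set-based stand-in for that check is sketched below; the real protocol compares Bloom clocks and Minisketch digests, and the commitment structure shown here is an assumption made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    miner_id: str
    round: int
    tx_ids: frozenset   # all transaction ids the miner has committed to so far

def is_consistent(older: Commitment, newer: Commitment) -> bool:
    """Append-only rule: a later commitment must contain every transaction
    id of an earlier one and may not roll the round counter back."""
    return older.round <= newer.round and older.tx_ids <= newer.tx_ids

def check_pair(older: Commitment, newer: Commitment):
    """Return an exposure record if the pair proves misbehavior, else None.
    In the actual protocol both commitments carry the miner's signature,
    which is what turns such a pair into evidence any node can verify."""
    if older.miner_id != newer.miner_id:
        raise ValueError("commitments must originate from the same miner")
    if is_consistent(older, newer):
        return None
    return {
        "accused": older.miner_id,
        "evidence": (older, newer),
        "dropped_tx_ids": sorted(older.tx_ids - newer.tx_ids),
    }
```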
In addition to penalizing misbehaving miners, detection allows the implementation of mechanisms for the rejection of blocks that deviate from the canonical transaction order <cit.>. However, this latter approach imposes significant trade-offs on the modification of the consensus protocol. §.§ Addressing Accountability Attacks In our model, we assume that miners are incentivized to learn about more transactions. This assumption aligns with empirical observations, as miner profitability correlates with their ability to discover new transactions <cit.>. In our system, miners only discover transaction content after exchanging commitments. Hence, by learning about new transactions, miners commit themselves to mempool commitments. However, a potential loophole exists for malicious miners. A miner could conspire with an accomplice who does not interact with correct nodes to create a block for them using a manipulated transaction order. This attack is depicted in Fig.<ref>. A malicious miner can transfer a transaction, denoted as Tx, to another colluding miner or to a Sybil miner under their control. Since a colluding miner has not exchanged commitments with the originator of Tx, it can attempt to reorder or inject transactions, and propose an alternative block. However, this type of attack is impractical due to several reasons: * Colluding miners can only front-run or back-run an entire original transaction bundle. Any attempt to inject, censor, or reorder transactions within a transaction bundle is eventually detectable by correct miners. This significantly restricts the attack granularity, a crucial factor for MEV profitability. * Colluding miners or Sybils cannot respond to queries from honest miners to evade commitments. They can only learn about new transactions via malicious nodes acting as a bridge. However, such a non-responding set of colluding miners is eventually detected and suspected. * Colluding miners or Sybil miners must have a high probability of becoming the consensus leader to include a specific transaction. To increase this success rate, a substantial set of colluding miners or Sybils is required, which is costly considering the initial investment and the absence of profits from honest protocol participation. Finally, to further mitigate the attack, one option is to require sufficient Proof-of-Interaction during block creation. Specifically, the block creator must also include signatures from a sufficient number of miners (based on mining power or stake), thereby proving recent interaction with them. § EVALUATION This section presents our evaluation of focusing on its resilience against malicious nodes and the impact of such nodes on detection. We also discuss the overhead associated with . §.§ Experimental setup was evaluated experimentally on a national research cluster <cit.>. Each server in the cluster is equipped with an Intel Xeon E5-2630 CPU with 24 physical cores operating at 2.4 GHz, hyper-threading enabled, and 128 GiB of main memory. The servers are interconnected via a Gigabit Ethernet network. was implemented in Python. We emulated realistic network latencies using netem[See <https://www.linux.org/docs/man8/tc-netem.html>] and incorporated ping statistics from 32 cities worldwide from the WonderNetwork dataset <cit.>. Each miner was assigned to a city in a round-robin manner. 
Unless otherwise stated, the parameters for the reported experiment were set as follows: The experiment was conducted with 10,000 nodes, generating a workload of 20 transactions per second, with each transaction being 250 bytes in size. The transactions were injected into our system based on a realistic dataset of Ethereum transactions <cit.>. Each experiment was repeated 10 times, and the average result of these runs is reported. We constructed a connected topology where each node had eight outgoing connections and up to 125 incoming connections, in line with the default Bitcoin parameters. Every node attempted to reconcile with three random neighbors every second. The request timeout was set to 1 second. If a request was not fulfilled within this time, it was resent three times, after which the node was suspected of being faulty. The Minisketch size was set to 1,000 bytes, sufficient to reconcile a set difference of up to 100 transactions, allowing the Minisketch to fit into a single UDP packet. If reconciliation failed, all transactions were divided into two subsets, and the process was repeated with two sketches. The size of Bloom-Clocks was fixed at 32 cells (i.e., 68 bytes in total). §.§ Resilience We assess the impact of colluding censoring miners on the network, specifically focusing on their effect on the convergence of correct nodes. In this scenario, malicious miners attempt to prevent correct nodes from learning about transactions, commitments, exposure, and suspicion messages. All malicious miners are assumed to be interconnected. For these experiments, we ensure that the correct nodes remain connected via some path in the network by initially running an unbiased sampling algorithm <cit.>. Fig. <ref> illustrates the time required for all correct nodes to converge, depending on the number of faulty nodes in the network. The presence of faulty nodes marginally increases the time needed for all correct nodes to learn about the exposure message, extending it to 6-7 seconds after the first miner detects and creates the message. We also demonstrate how our system can detect faulty nodes that ignore requests. We report the time until every correct node suspects all faulty nodes (Fig. <ref>, `Suspicion'). As expected, the time until all faulty nodes are suspected is longer than the time required for nodes to discover an exposure message, as the nodes need to submit a request and wait for it to timeout. §.§ Transaction Latency We report the time necessary for miners to discover a transaction and include it in their mempool. The latency distribution is reported in Fig. <ref>. It appears that all nodes learn about the transaction after contacting 5 to 6 nodes. On average, a transaction is discovered by a node in 1.14 s. To demonstrate the effects of our new policies on block building, i.e., selecting transactions in order, we simulate a block creation process at randomly selected miners with an average block time of 12 s, which is the block time in Ethereum. We report the average time it takes for a transaction to be included in a block in Fig.<ref>. We compare the policy for block creation described in Section <ref> (`Natural' ordering) with the policy that is currently widely used in public blockchains, i.e., creating a block with the highest-fee transactions of the mempool (referred to as Highest Fee'). The average transaction latency for the 'Natural' ordering is 3 seconds, while it is around 7-8 seconds for the 'Highest Fee' strategy. 
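The two block-building policies being compared can be summarized by the short sketch below; the transaction tuple layout and the block size limit are placeholders, and a real builder would additionally enforce gas limits and transaction validity.

```python
from typing import List, NamedTuple

class Tx(NamedTuple):
    tx_id: str
    fee: int          # fee offered by the sender
    arrival_seq: int  # position in the order the miner received it

def build_block_natural(mempool: List[Tx], max_txs: int) -> List[Tx]:
    """'Natural' ordering: include transactions in the order they were received."""
    return sorted(mempool, key=lambda tx: tx.arrival_seq)[:max_txs]

def build_block_highest_fee(mempool: List[Tx], max_txs: int) -> List[Tx]:
    """'Highest Fee': greedily pick the most profitable transactions first,
    which lets low-fee transactions starve and widens the latency spread."""
    return sorted(mempool, key=lambda tx: tx.fee, reverse=True)[:max_txs]
```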
Furthermore, we observe that the 'Highest Fee' strategy exhibits a wide spread along the axes, with many low-fee transactions experiencing very high latency. In contrast, our protocol orders transactions according to the order in which they were received by miners, which leads to transactions being processed sequentially and increases fairness. §.§ Protocol Overhead §.§.§ Bandwidth We benchmark our protocol against two baseline protocols: 'Flood' and 'PeerReview'. 'Flood' is a traditional mempool exchange protocol where miners initially send a 'Mempool' message containing a list of hashes of the transactions currently in their mempool. The receiving miner compares these hashes against its known transaction IDs and requests any missing transactions. We also compare to 'PeerReview', a generic accountability protocol that could be used to monitor censorship attempts by miners <cit.>. Every miner maintains an additional log for each received message. For each miner, we assign 8 witnesses. Periodically, each miner fetches the logs of the miners it witnesses and checks for any injection (commission) or censorship (omission). The comparison is reported in Fig. <ref>. Note that we omit the bandwidth overhead for sharing transactions, as it is the same for all three protocols. Our protocol is the most bandwidth-efficient of the three, incurring 20 times less bandwidth overhead than PeerReview. §.§.§ Memory and CPU Overhead The overhead for encoding and decoding Minisketch scales linearly with the size of the set difference <cit.>. Minisketch computes a set difference with 1,000 items in 10 seconds. To optimize the usage of the sketch, we hash-partition the mempool space into subsets, as described in <cit.>. Each time reconciliation fails, the node divides the mempool in half and sends an additional Minisketch for each partition. As a result of this optimization, we encode and decode all sketches required for a set difference of 1,000 items in less than 100 ms. We report the average number of sketch reconciliations per minute per node depending on the workload in Fig. <ref>. Our protocol requires only a small additional memory overhead at each node to store the commitments of all its neighbors. The size of a commitment depends on the workload. For example, for a workload of 120 transactions per minute, the commitment size is 1.17 KB, while for a workload of 24,000 transactions per minute, the total size of commitments can reach up to 9.36 KB. Even if a miner stores the commitments of all 10,000 nodes, it would only require 87 MB. § RELATED WORK The problem of MEV has attracted a considerable amount of research <cit.>. Different MEV mitigation mechanisms can be categorized according to the layer at which they are implemented: the application, consensus, or base layer. §.§ MEV Mitigation at the Application Layer Decentralized exchange (DEX) aggregators such as Cowswap implement batch auctions where orders are placed off-chain and are not immediately executed, but rather collected and aggregated to be settled in batches <cit.>. This approach is tied to a specific application and is thus limited to specific types of MEV attacks (front-running and sandwich attacks). A2MM is a DEX design that atomically performs optimal routing and arbitrage among the considered AMMs, minimizing subsequent arbitrage transactions <cit.>. §.§ MEV Mitigation at the Consensus Layer Proposer-builder separation (PBS) is a proposal aiming at MEV minimization <cit.>. The latest iteration of this mechanism, MEV-Boost, is implemented as a middleware. 
It enables private communication channels between clients creating new transactions and validating nodes. However, this approach has significant trust assumptions, such as relays not reordering or censoring transactions, which empirically do not hold <cit.>. Pre-ordering solutions aim to separate transaction ordering from execution to ensure 'fair' ordering. The Helix consensus protocol <cit.> guarantees random selection and ordering of transactions in blocks, relying on a randomness beacon within the consensus protocol. Aequitas <cit.> provides guarantees on transaction ordering within a block, but assumes a permissioned environment and introduces significant communication overhead. Pompe <cit.> is a Byzantine ordered consensus (BOC) protocol that outputs a transaction t and a sequence number s for ordering t. Wendy <cit.> describes ordering protocols for permissioned systems. Enforcing relative order requires building a dependency graph to prevent transactions from being included in a block before their dependencies <cit.>. Enforcing fair-ordering is more resource-intensive than enforcing our accountability properties and not practical in a permissionless setting. Heimbach and Wattenhoffer propose encrypting transaction content, ordering it, and revealing its content only after it has been ordered <cit.>. This approach is implemented by Fino, which integrates MEV protection into a BFT protocol in the partial synchrony model with a DAG transport protocol <cit.>. Lyra <cit.>, a Byzantine ordered consensus protocol, also uses a commit-reveal scheme and relies on Verifiable Secret Sharing (VSS). The encrypt-commit-reveal scheme is more resource-intensive than our accountable approach and requires additional trust assumptions to ensure that encrypted transactions are always revealed. §.§ MEV Mitigation at the Base Layer Secret Mempools hide the content of a transaction so that it cannot be censored, reordered, etc. F3B is a generic approach for online transaction encryption based on a commit-and-reveal architecture <cit.>. Ferveo is a protocol for Mempool Privacy on BFT consensus blockchains <cit.>. Both of these solutions assume permissioned settings. ZeroMEV is an existing MEV mitigation solution implemented on the base layer <cit.>. This solution is Ethereum-specific and implemented on the basis of Geth software fork as a validator execution client. It orders transactions based on timestamps with local FIFO order. However, this solution does not provide any accountability and requires strong trust assumption as it relies on the altruism of a validator. § CONCLUSION We introduced , an accountable base layer for permissionless blockchains. It is consensus-protocol agnostic and provides detection guarantees for various MEV attacks. mandates that both correct and faulty miners log all received transactions into a secure mempool data structure and exchange and record commitments on their mempool content. Any inconsistency, such as transaction withholding or equivocation, is exposed during a mempool reconciliation process with a correct miner. To ensure the exposure of faulty miners, simply requires correct miners to be interconnected through a network path. We outlined the transaction manipulation attacks associated with MEV that miners might execute and mapped different attack types to the relevant stages of a transaction’s lifecycle within the protocol. Our performance evaluation demonstrates the practicality of . 
It is bandwidth and memory efficient, using only 10 MB with 10,000 miners and a workload of 20 transactions per second. Moreover, it is at least four times more bandwidth efficient than classical flooding-based mempool exchanges and processes transactions with higher fairness.
http://arxiv.org/abs/2307.00362v2
20230701151922
Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees) with Many or Few Leaves
[ "Emmanuel Sam", "Benjamin Bergougnoux", "Petr A. Golovach", "Nello Blaser" ]
cs.DS
[ "cs.DS" ]
Kernelization for Finding Lineal Topologies with Many or Few Leaves E. Sam, B. Bergougnoux, P. Golovach, N. Blaser Department of Informatics, University of Bergen, Norway {emmanuel.sam,petr.golovach, nello.blaser}@uib.no Institute of Informatics, University of Warsaw, Poland benjamin.bergougnoux@mimuw.edu.pl Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees) with Many or Few LeavesThe research leading to these results has received funding from the Research Council of Norway via the projects (PCPC) (grant no. 274526) and BWCA (grant no. 314528). Emmanuel Sam10000-0001-7756-0901 Benjamin Bergougnoux20000-0002-6270-3663 Petr A. Golovach 10000-0002-2619-2990 Nello Blaser 10000-0001-9489-1657 ======================================================================================================================================================================================================================================================================= For a given graph G, a depth-first search (DFS) tree T of G is an r-rooted spanning tree such that every edge of G is either an edge of T or is between a descendant and an ancestor in T. A graph G together with a DFS tree is called a lineal topology 𝒯 = (G, r, T). Sam et al. (2023) initiated study of the parameterized complexity of the Min-LLT and Max-LLT problems which ask, given a graph G and an integer k≥ 0, whether G has a DFS tree with at most k and at least k leaves, respectively. Particularly, they showed that for the dual parameterization, where the tasks are to find DFS trees with at least n-k and at most n-k leaves, respectively, these problems are fixed-parameter tractable when parameterized by k. However, the proofs were based on Courcelle's theorem, thereby making the running times a tower of exponentials. We prove that both problems admit polynomial kernels with (k^3) vertices. In particular, this implies FPT algorithms running in k^(k)· n^O(1) time. We achieve these results by making use of a (k)-sized vertex cover structure associated with each problem. This also allows us to demonstrate polynomial kernels for Min-LLT and Max-LLT for the structural parameterization by the vertex cover number. § INTRODUCTION Depth-first search (DFS) is a well-known fundamental technique for visiting the vertices and exploring the edges of a graph <cit.>. For a given connected undirected graph with vertex set V(G) and edge set E(G), DFS explores E(G) by always choosing an edge incident to the most recently discovered vertex that still has unexplored edges. A selected edge, either leads to a new vertex or a vertex already discovered by the search. The set of edges that lead to a new vertex during the DFS define an r-rooted spanning tree T of G, called a depth-first spanning (DFS) tree, where r is the vertex from which the search started. This tree T has the property that each edge that is not in T connects an ancestor and a descendant of T. All rooted spanning trees of a finite graph with this property, irrespective of how they are computed, such as a Hamiltonian path, are generalized as trémaux trees <cit.>. Given a graph G and a DFS tree T rooted at a vertex r ∈ V(G), it is easy to see that the family 𝒯 of subsets of E(G) induced by the vertices in all subtrees of T with the same root r as T constitute a topology on E(G). For this reason, the triple (G, T, r) has been referred to as the lineal topology (LT) of G in <cit.>. 
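As a quick illustration of this definition, the following sketch checks whether a given rooted spanning tree is a DFS (trémaux) tree, i.e., that every non-tree edge joins a vertex to one of its ancestors. The adjacency-dictionary and parent-map encoding is only one possible representation, not anything prescribed by the paper.

```python
def is_dfs_tree(adj, parent, root):
    """adj: dict vertex -> set of neighbours of an undirected graph G.
    parent: dict vertex -> its parent in the rooted spanning tree T (root maps to None).
    Returns True iff T is a DFS (tremaux) tree of G rooted at `root`."""
    # Record the depth of every vertex so ancestor tests are cheap.
    depth, changed = {root: 0}, True
    while changed:                          # simple relaxation; adequate for a sketch
        changed = False
        for v, p in parent.items():
            if v not in depth and p in depth:
                depth[v] = depth[p] + 1
                changed = True

    def is_ancestor(u, v):                  # is u an ancestor of v in T?
        while v is not None and depth.get(v, -1) >= depth[u]:
            if v == u:
                return True
            v = parent[v]
        return False

    for u in adj:
        for v in adj[u]:
            tree_edge = parent.get(u) == v or parent.get(v) == u
            if not tree_edge and not (is_ancestor(u, v) or is_ancestor(v, u)):
                return False                # a non-tree edge joins two incomparable vertices
    return True
```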
Many existing applications of DFS and DFS trees — such as planarity testing and embedding <cit.>, finding connected and biconnected components of undirected graphs <cit.>, bipartite matching <cit.>, and graph layout <cit.> — only require one to find an arbitrary DFS tree of the given graph, which can be done in time O(n + m), where n and m are the number of vertices and edges of the graph. An application of a DFS tree, noted by Fellows et al. <cit.>, that calls for a DFS tree with minimum height is the use of DFS trees to structure the search space of backtracking algorithms for solving constraint satisfaction problems <cit.>. This motivated the authors to study the complexity of finding DFS trees of a graph G that optimize or near-optimize the maximum length or minimum length of the root-to-leaf paths in the DFS trees of G. They showed that the related decision problems are NP-complete and do not admit a polynomial-time absolute approximation algorithm unless P = NP. In this paper, we look at the Minimum Leafy LT (Min-LLT) and Maximum Leafy LT (Max-LLT) problems introduced by Sam et al. <cit.>. Given a graph G and an integer k≥ 0, Min-LLT and Max-LLT ask whether G has a DFS tree with at most k and at least k leaves, respectively. These two problems are related to the well-known NP-complete Minimum Leaf Spanning Tree (Min-LST) and Maximum Leaf Spanning Tree (Max-LST) <cit.>. Sam et al. <cit.> proved that Min-LLT and Max-LLT are NP-hard. Moreover, they proved that when parameterized by k, Min-LLT is para-NP-hard and Max-LLT is W[1]-hard. They also considered the dual parameterizations, namely, Dual Min-LLT and Dual Max-LLT, where the tasks are to find DFS trees with at least n-k and at most n-k leaves, respectively. They proved that Dual Min-LLT and Dual Max-LLT are both FPT parameterized by k. These FPT algorithms are, however, based on Courcelle's theorem <cit.>, which relates the expressibility of a graph property in monadic second order (MSO) logic to the existence of an algorithm that solves the problem in FPT-time with respect to treewidth <cit.>. As a by-product, their running times have a high exponential dependence on the treewidth and the length of the MSO formula expressing the property. §.§ Our Results We prove that Min-LLT and Max-LLT admit polynomial kernels when parameterized by the vertex cover number of the given graph. Formally, we prove the following theorem. Min-LLT and Max-LLT admit kernels with (τ^3) vertices when parameterized by the vertex cover number τ of the input graph. Based on these kernels, we show that Dual Min-LLT, and Dual Max-LLT admit polynomial kernels parameterized by k. Dual Min-LLT and Dual Max-LLT admit kernels with (k^3) vertices. This last result follows from a win-win situation as either (1) the input graph has a large vertex cover in terms of k and, consequently, both problems are trivially solvable or (2) the input graph has a small vertex cover, and we can use Theorem <ref>. Finally, we use our polynomial kernels to prove that Dual Min-LLT, and Dual Max-LLT admit FPT algorithms parameterized by k with low exponential dependency. Dual Min-LLT and Dual Max-LLT can be solved in k^(k)· n^(1) time. As the previously known FPT algorithm for each of these problems was based on Courcelle's theorem, our algorithms are the first FPT-algorithms constructed explicitly. §.§ Related Results Lu and Ravi <cit.> proved that the Min-LST, problem has no constant factor approximation unless P = NP. 
From a parameterization point of view, Prieto et al.<cit.> showed that this problem is W[P]-hard parameterized by the solution size k. The Max-LST problem is, however, FPT parameterized by k and has been studied extensively <cit.>. Dual Min-LLT is related to the well-studied k-Internal Spanning Tree problem <cit.>, which asks to decide whether a given graph admits a spanning tree with at most n-k leaves (or at least k internal vertices). Prieto et al.<cit.> were the first to show that the natural parameterized version of k-Internal Spanning Tree has a ^*(2^klogk)-time FPT algorithm and a (k^3)-vertex kernel. Later, the kernel was improved to (k^2), (3k), and (2k) by Prieto et al., Fomin et al.<cit.>, and Li et al. <cit.> respectively. The latter authors also gave what is now the fastest FPT algorithm for k-Internal Spanning Tree, which runs in ^*(4^k) time. An independency tree (IT) is a variant of a spanning tree whose leaves correspond to an independent set in the given graph. Given a connected graph on n ≥ 3, G has no IT if it has no DFS tree in which the leaves and the root are pairwise nonadjacent in G <cit.>. From a parameterization point of view, the Min Leaf IT (Internal) and Max Leaf IT (Internal) problems <cit.>, which ask, given a graph G and an integer k ≥ 0, whether G has an IT with at least k and at most k internal vertices, respectively, are related to Dual Min-LLT and Dual Max-LLT, respectively. Casel et al. <cit.> showed that, when parameterized by k, Min Leaf IT (Internal) has an ^*(4^k)-time algorithm and a 2k vertex kernel. They also proved that Max Leaf IT (Internal) parameterized by k has a ^*(18^k)-time algorithm and a (k2^k)-vertex kernel, but no polynomial kernel unless the polynomial hierarchy collapses to the third level. Their techniques, however, do not consider the properties of a DFS tree and, therefore, do not work for our problems. §.§ Organization of the paper Section <ref> contains basic terminologies relevant to graphs, DFS trees, and parameterized complexity necessary to understand the paper. In section <ref>, we first prove a lemma about how, given a graph G and a vertex cover of G, the internal vertices of any spanning tree of G relate to the given vertex cover. We then use this lemma to demonstrate a polynomial kernel for Min-LLT and Max-LLT for the structural parameterization by the vertex cover number of the graph. This is followed by the kernelization algorithms for Dual Min-LLT and Dual Max-LLT parameterized by k. In section <ref>, we devise FPT algorithms for Dual Min-LLT and Dual Max-LLT based on their polynomial kernels. Finally, we conclude the paper in section <ref> with remarks concerning future studies. § PRELIMINARIES We consider only simple finite graphs. We use V(G) and E(G) to denote the sets of vertices and edges, respectively, of a graph G. For a graph G, we denote the number of vertices |V(G)| and the number of edges |E(G)| of G by n and m, respectively, if this does not create confusion. For any vertex v ∈ V(G), the set N_G(v) denotes the neighbors of v in G and N_G[v] denotes its closed neighborhood N_G(v) ∪{v} in G. For a set of vertices X⊆ V, N_G(X)=(⋃_v∈ XN_G(v))∖ X. We omit the G in the subscript if the graph is clear from the context. For a vertex v, its degree is d_G(v)=|N_G(v)|. Given any two graphs G_1=(V_1, E_1) and G_2=(V_2, E_2), if V_1 ⊆ V_2 and E_1 ⊆ E_2 then G_1 is a subgraph of G_2, denoted by G_1 ⊆ G_2. 
If G_1 contains all the edges uv ∈ E_2 with u,v ∈ V_1, then we say G_1 is an induced subgraph of G_2, or V_1 induces G_1 in G_2, denoted by G[V_1]. If G_1 is such that it contains every vertex of G_2, i.e., if V_1 = V_2 then G_1 is a spanning subgraph of G_2. Given a set of vertices X ⊆ V(G), we express the induced subgraph G[V(G)∖ X] as G-X. If X={x}, we write V(G)∖ x instead of V(G)∖{x} and G-x instead of G-{x}. Given a graph G, a set of vertices S ⊆ V(G) is a vertex cover of G if, for every edge uv ∈ E(G), either u ∈ S or v ∈ S; the vertex cover number of G, denoted by τ(G), is the minimum size of a vertex cover. A set Y ⊆ V(G) is called an independent set, if for every vertex pair u, v ∈ Y, uv ∉ E(G). A matching M in a given graph G is a set of edges, no two of which share common vertices. A pendant vertex is a vertex with degree one. For definitions of basic tree terminologies including root, child, parent, ancestor, and descendant, we refer the reader to <cit.>. Given a graph G, we denote a spanning tree of G rooted at a vertex r ∈ V(G) by (T,r). When there is no ambiguity, we simply use T instead of (T,r). For a rooted tree T, a vertex v is a leaf if it has no descendants and v is an internal vertex if otherwise. A spanning tree T with a root r is a DFS tree rooted in r if for very every edge uv∈ E(G), either uv∈ E(T), or v is a descendant of u in T, or u is a descendant of v in T. Equivalently, T is a DFS tree if it can be produced by the classical depth-first search (DFS) algorithm <cit.>. We say that a path P in a rooted tree T is a root-to-leaf path if one of its end-vertices is the root and the other is a leaf of T. Now we review some important concepts of Parameterized complexity (PC) relevant to the work reported herein. For more details about PC, we refer the reader to <cit.>. Let Σ be a fixed finite alphabet. A parameterized problem is a language P ⊆Σ^∗×. Given an instance (x,k) ∈Σ^∗× of a parameterized problem, k ∈ is called the parameter, and the task is to determine whether (x,k) belongs to P. A parameterized problem P is classified as fixed-parameter tractable (FPT) if there exists an algorithm that answers the question (x,k) ∈ P? in time f(k)· poly(|x|), where f: → is a computable function. A kernelization algorithm, or simply a kernel, for a parameterized problem P is a function ϕ that maps an instance (x,k) of P to an instance (x',k') of P such that the following properties are satisfied: * (x,k) ∈ P if and only if (x',k') ∈ P, * k'+|x'| ≤ g(k) for some computable function g:→, and * ϕ is computable in time polynomial in |x| and k. If the upper-bound g(·) of the kernel (Property <ref>) is polynomial (linear) in terms of the parameter k, then we say that P admits a polynomial (linear) kernel. It is common to write a kernelization algorithm as a series of reduction rules. A reduction rule is a polynomial-time algorithm that transform an instance (x,k) to an equivalent instance (x',k') such that Property  <ref> is fulfilled. Property <ref> is referred to as the safeness or correctness of the rule. § KERNELIZATION In this section, we demonstrate polynomial kernels for Dual Min-LLT and Dual Max-LLT. But first, we show that Min-LLT and Max-LLT admit polynomial kernels when parameterized by the vertex cover number of the input graph. The following simple lemma is crucial for our kernelization algorithms. Let G be a connected graph and let S be a vertex cover of G. 
Then every rooted spanning tree T of G has at most 2|S| internal vertices and at most |S| internal vertices are not in S. Let T be a rooted spanning tree tree of G with a set of internal vertices X. For every vertex v of T, we denote by (v) the set of its childred in T. For each internal vertex v of T, we have (v)≠∅ and if v∉ S, then (v)⊆ S because S is a vertex cover of G. Moreover, for any distinct internal vertices u and v of T, (u)∩(v)=∅. Given X∖ S= {v_1,…,v_t}, we deduce that (v_1),…,(v_t) are pairwise disjoint and non-empty subsets of S. We conclude that |X∖ S|≤ |S| and |X|≤ 2 |S|. We also use the following folklore observation. The set of internal vertices of any DFS tree T of a connected graph G is a vertex cover of G. To see the claim, it is sufficient to observe that any leaf of a DFS tree T is adjacent in G only to its ancestors, that is, to internal vertices. We use Lemma <ref> to show that, given a vertex cover, we can reduce the size of the input graph for both Min-LLT and Max-LLT. There is a polynomial-time algorithm that, given a connected graph G together with a vertex cover S of size s, outputs a graph G' with at most s^2(s-1)+3s vertices such that for every integer t≥ 0, G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. Let G be a connected graph and let S be a vertex cover of G of size s. As the lemma is trivial if s=0, we assume that s≥ 1. Denote I=V(G)∖ S; note that I is an independent set. We apply the following two reduction rules to reduce the size of G. The first rule reduces the number of pendant vertices. To describe the rule, denote by (v) for v∈ S the set of pendant vertices of I adjacent to v. RuleRule [bth] v∈ S |(v)| > 2delete all but two vertices in (v) from G 1() To see that Rule <ref> is safe, denote by G' the graph obtained from G by the application of the rule. Notice that for every v∈ S, at most one vertex of (v) is the root and the other vertices are leaves that are children of v in any rooted spanning tree T of G. Let T be a DFS tree of G rooted in r with t internal vertices. Because for every v∈ S, the vertices of (v) have the same neighborhood in G and Rule <ref> does not delete all the vertices of (v), we can assume without loss of generality that r∈ V(G'). Let T'=T[V(G')]. Because the deleted vertices are leaves of T, we have that T' is a tree and, moreover, T' is a DFS tree of G' rooted in r. Clearly, each internal vertex of T' is an internal vertex of T. Let v∈ S be a vertex such that |(v)|>2. Then v has a pendant neighbor u≠ r in G' and u should be a child of v in T'. Thus, v is an internal vertex of T'. This implies that every leaf v of T' is not adjacent to any vertex of V(G)∖ V(G') in G. Hence, v is a leaf of T. Because the deleted vertices are leaves of T, we obtain that a vertex v∈ V(G) is an internal vertex of T if and only if v is an internal vertex of T'. Then T and T' have the same number of internal vertices. For the opposite direction, let T' be a DFS tree of G' rooted in r with t internal vertices. We construct the tree T from T' by adding each deleted vertex u as a leaf to T': if u∈ V(G)∖ V(G'), then u∈(v) for some v∈ S and we add u as a leaf child of v. Because the deleted vertices are pendants, we have that T is a DFS tree of G. Observe that each internal vertex of T' remains internal in T. 
In the same way as above, we observe that a vertex v∈ S with |(v)|>2 cannot be a leaf of T', because v has a pendant neighbor in G' distinct from r that should be a child of v. Hence, every leaf v of T' is not adjacent to any vertex of V(G)∖ V(G') in G and, therefore, is a leaf of T. Since the deleted vertices are leaves of T, we obtain that a vertex v∈ V(G) is an internal vertex of T if and only if v is an internal vertex of T'. Thus, T and T' have the same number of internal vertices. This concludes the safeness proof. The next rule is used to reduce the number of nonpendant vertices of I. For each pair of vertices u, v ∈ S, we use common neighbor of u and v to refer to a vertex w ∈ I that is adjacent to both u and v and denote by W_uv the set of common neighbors of u and v. Rule <ref> is based on the observation that if the size of W_uv for any vertex pair u,v ∈ S is at least 2s+1, then it follows from Lemma <ref> that every spanning tree T contains at most s internal vertices and at least s+1 leaves from W_uv. We prove that it is enough to keep at most 2s vertices from W_uv for each u,v∈ S. pairs {u,v} of distinct vertices of SLabel max{|W_uv|,2s} vertices in W_uv Delete the unlabeled vertices of I with at least two neighbors in S from G. 2() To show that Rule <ref> is safe, let x∈ I be a vertex with at least two neighbors in S which is not labeled by Rule <ref>. Let G'=G-x. We claim that G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. We use the following auxiliary claim, the proof of which can be found in Appendix <ref>. (i) For any DFS tree T of G, the vertices of N_G(x) are vertices of a root-to-leaf path of T. (ii) For any DFS tree T' of G', the vertices of N_G(x) are vertices of a root-to-leaf path of T'. (iii) For any DFS tree T' of G', every vertex of N_G(x) is an internal vertex of T'. We use Claim <ref> to show the following property. If G has a DFS tree with t internal vertices, then G has a DFS tree T with t internal vertices such that x is a leaf of T. Let T be a DFS tree of G with a root r that has exactly t internal vertices. We prove that if x is an internal vertex of T, then T can be modified in such a way that x would become a leaf. Observe that by Claim <ref> (i), x has a unique child v in T. We have two cases depending on whether x=r or has a parent u. Suppose first that x=r. By Claim <ref>, the neighbors of x in G are vertices of some root-to-leaf path of T. Let u be the neighbor of x at maximum distance from r in T. Because d_G(x)≥ 2, u≠ v. Since x is not labeled by Rule <ref>, |W_uv|>2s. By Lemma <ref>, there are at least s+1 vertices W_uv that are leaves of T. These leaves have their parents in S which has size s. By the pigeonhole principle, there are distinct leaves w,w'∈ W_uv with the same parent. We rearrange T by making w a root with the unique child v and making x a leaf with the parent u. Denote by T' the obtained tree. Because x is adjacent to u and some of its ancestors in T and w is adjacent only to some of its ancestors in T, we conclude that T' is a feasible DFS tree. Notice that w which was a leaf of T became an internal vertex of T' and x that was an internal vertex is now a leaf. Because x is a leaf of T', we have that T”=T'-x is a DFS tree of G' rooted in w. By Claim <ref> (iii), u is an internal vertex of T”. This implies that u is an internal vertex of both T and T'. Since the parent of w in T has w'≠ w as a child, we also have that w is an internal vertex of both T and T'. 
Therefore, T and T' have the same number of internal vertices. This proves that G has a DFS tree T' with t internal vertices such that x is a leaf of T'. Assume now that x has a parent u in T. By Claim <ref>, the neighbors of x in G are vertices of some root-to-leaf path of T. Denote by v' be the neighbor of x at maximum distance from r in T; it may happen that v'=v. As x is not labeled by Rule <ref>, |W_uv|>2s. Then by Lemma <ref>, there are at least s+1 vertices W_uv that are leaves of T. These leaves have their parents in S which has size s. By the pigeonhole principle, there are distinct leaves w,w'∈ W_uv with the same parent. We rearrange T by making w a child of u and a parent of v and making x a leaf with the parent v'. Denote by T' the obtained tree. Because x is adjacent to v' and some of its ancestors in T and w is adjacent only to some of its ancestors in T, including u and v, we have that T' is a feasible DFS tree. Notice that w was a leaf of T and is now an internal vertex of T', while x was an internal vertex in T and is now a leaf in T'. Because x is a leaf of T', we have that T”=T'-x is a DFS tree of G' rooted in w. By Claim <ref> (iii), v' is an internal vertex of T”. Therefore, v' is an internal vertex of both T and T'. Since the parent of w in T has w'≠ w as a child, we also have that w is an internal vertex of both T and T'. Thus, T and T' have the same number of internal vertices. We obtain that G has a DFS tree T' with t vertices such that x is a leaf of T'. This concludes the proof. Now we are ready to proceed with the proof that G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. For the forward direction, let T be a DFS tree of G with t internal vertices. By Claim <ref>, we can assume that x is a leaf of T. Let T'=T-x. Because x is a leaf of T, T' is a DFS tree of G'. Let u be the parent of x in T. Because u is adjacent to x in G, we have that u is an internal vertex of T' by Claim <ref> (iii). This means that the number of internal vertices of T and T' is the same, that is, G' has a DFS tree with t vertices. For the opposite direction, let T' be a DFS tree of G' with t internal vertices with a root r. By Claim <ref> (ii), the neighbors of x in G are vertices of some root-to-leaf path in T'. Let v be the neighbor of x at maximum distance from r in T'. We construct T by making x a leaf with the parent v. Because x is adjacent in G only to v and some of its ancestors in T', T is a DFS tree. By Claim <ref>(iii), v is an internal vertex of T'. Therefore, T' and T have the same set of internal vertices. We obtain that G has a DFS tree with t vertices. This concludes the proof of our claim. Recall that G' was obtained from G by deleting a single unlabeled vertex x∈ I of degree at least two. Applying the claim that G has a DFS tree with exactly t internal vertices if and only if G'=G-x has a DFS tree with exactly t internal vertices inductively for unlabeled vertices of I of degree at least two, we obtain that Rule <ref> is safe. Denote now by G' the graph obtained from G by the application of Rules <ref> and <ref>. Because both rules are safe, for any integer t≥ 0, G has a DFS tree with exactly t internal vertices if and only if G' has a DFS tree with exactly t internal vertices. Because of Rule <ref>, G'-S has at most 2s pendant vertices. Rule <ref> guarantees that G'-S has at most 2ss2=s^2(s-1) vertices of degree at least two. Then the total number of vertices of G' is at most s^2(s-1)+2s+s=s^2(s-1)+3s. 
It is straightforward to see that Rule <ref> can be applied in (sn) time and Rule <ref> can be applied in (s^2n) time. Therefore, the algorithm is polynomial. This concludes the proof. As a direct consequence of Lemma <ref> we obtain that Min-LLT and Max-LLT admit polynomial kernels when parameterized by the vertex cover number of the input graph. We are ready to prove our kernels parameterized by vertex cover. We show the theorem for Min-LLT; the arguments for Max-LLT are almost identical. Recall that the task of Min-LLT is to decide, given a graph G and an integer k≥ 0, whether G has a DFS tree with at most k leaves. Equivalently, we can ask whether G has a DFS tree with at least |V(G)|-k internal vertices. Let (G,k) be an instance of Min-LLT. We assume that G is connected as, otherwise, (G,k) is a no-instance and we can return a trivial no-instance of Min-LLT of constant size. First, we find a vertex cover S of G. For this, we apply a folklore approximation algorithm (see, e.g., <cit.>) that greedily finds an inclusion-maximal matching M in G and takes the set S of endpoints of the edges of M. It is well-known that |S|≤ 2τ. Then we apply the algorithm from Lemma <ref>. Let G' be the output graph. By Lemma <ref>, G' has (τ^3) vertices. We set k'=k-|V(G)|+|V(G')| and return the instance (G',k') of Min-LLT. Suppose that G has a DFS tree with at most k leaves. Then G has a DFS tree with t≥ |V(G)|-k internal vertices. By Lemma <ref>, G' also has a DFS tree with t internal vertices. Then G' has a DFS tree with |V(G')|-t≤ |V(G')|-(|V(G)|-k)=k' leaves. For the opposite direction, assume that G' has a DFS tree with at most k' leaves. Then G' has a DFS tree with t≥ |V(G')|-k'=|V(G)|-k internal vertices. By Lemma <ref>, G has a DFS tree with t internal vertices and, therefore, G has a DFS tree with at most k leaves. Because S can be constructed in linear time and the algorithm from Lemma <ref> is polynomial, the overall running time is polynomial. This concludes the proof. Now we demonstrate a polynomial kernel for Dual Min-LLT. Dual Min-LLT admits a kernel with (k^3) vertices. Recall that the task of Dual Min-DLL is to verify, given a graph G and an integer k≥ 0, whether G has a DFS tree with at most n-k leaves. Equivalently, the task is to check whether G has a DFS tree with at least k internal vertices. Let (G,k) be an instance of Dual Min-LLT. If G is disconnected, then (G,k) is a no-instance and we return a trivial no-instance of Dual Min-DLL of constant size. From now, we assume that G is connected. We select an arbitrary vertex r of G and run the DFS algorithm from this vertex. The algorithm produces a DFS tree T. Let S be the set of internal vertices of T. If |S|≥ k, then we conclude that (G,k) is a yes-instance. Then the kernelization algorithm returns a trivial yes-instance of Dual Min-LLT of constant size and stops. Assume that this is not the case and |S|≤ k-1. By Observation <ref>, we have that S is a vertex cover of G of size s≤ k-1. We use S to call the algorithm from Lemma <ref>. Let G' be a graph produced by the algorithm. By Lemma <ref>, G' has (k^3) vertices. Our kernelization algorithm returns (G',k) and stops. To see correctness, it is sufficient to observe that by Lemma <ref>, for any integer t≥ k, G has a DFS tree with t internal vertices if and only if G' has a DFS tree with t internal vertices. Because the DFS algorithm runs in linear time (see, e.g., <cit.>) and the algorithm from Lemma <ref> is polynomial, the overall running time is polynomial. This completes the proof. 
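A compact sketch of the first phase of this kernelization is given below: it builds one DFS tree, answers "yes" immediately if that tree already has at least k internal vertices, and otherwise hands the internal vertices to the reduction of Lemma 2 as the vertex cover S. The reduction rules themselves (Rules 1 and 2) are not reproduced here, and the adjacency-dictionary representation is an assumption of the sketch.

```python
def dfs_parent_map(adj, root):
    """Standard iterative DFS on adjacency dict `adj`; returns {vertex: parent} of the DFS tree."""
    parent = {root: None}
    stack = [(root, iter(adj[root]))]
    while stack:
        u, it = stack[-1]
        for v in it:
            if v not in parent:
                parent[v] = u
                stack.append((v, iter(adj[v])))
                break
        else:
            stack.pop()
    return parent

def dual_min_llt_preprocess(adj, k):
    """Does `adj` have a DFS tree with at least k internal vertices (i.e. at most n-k leaves)?
    Returns 'yes', 'no', or the vertex cover S to feed into the reduction of Lemma 2."""
    root = next(iter(adj))
    parent = dfs_parent_map(adj, root)
    if len(parent) != len(adj):
        return "no"                          # disconnected graphs have no spanning tree at all
    internal = {p for p in parent.values() if p is not None}
    if len(internal) >= k:
        return "yes"                         # the DFS tree just built is already a witness
    # Otherwise |internal| <= k-1 and, since the internal vertices of a DFS tree form a
    # vertex cover, the rules of Lemma 2 shrink the instance to O(k^3) vertices.
    return internal
```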
We use similar arguments to prove the following theorem in Appendix <ref>. Dual Max-LLT admits a kernel with (k^3) vertices. Theorems <ref> and <ref> implies Theorem <ref>. § FPT ALGORITHMS In this section, we give algorithms that solve Dual Min-LLT and Dual Max-LLT in FPT time using the kernels given in the previous section. Our algorithms are brute force algorithms which guess internal vertices. Recall that the standard DFS algorithm <cit.> outputs a labeled spanning tree. More formally, given an n-vertex graph and a root vertex r, the algorithm outputs a DFS tree T rooted in r and assigns to the vertices of G distinct labels d[v] from {1,…,n} giving the order in which the vertices were discovered by the algorithm. Thus, the algorithm outputs a linear ordering of vertices. Given an ordering v_1,…,v_n of V(G), we say that a DFS tree T respects the ordering if T is produced by the DFS algorithm in such a way that d[v_i]=i for every i∈{1,…,n}. Observe that for an ordering of the vertices of G, there is a unique way to run the DFS algorithm to obtain T respecting the ordering. This gives us the following observation. It can be decided in linear time, given an ordering v_1,…,v_n of the vertices of a graph G, whether G has a DFS tree respecting the ordering. Furthermore, if such a tree T exists, it is unique and can be constructed in linear time. Let G be a graph and let r∈ V(G). For a tree T⊆ G with r∈ V(T), we say that T is extendable to a DFS tree rooted in r, if there is a DFS tree T' of G rooted in r such that T is a subtree of T'. We call T' an extension of T. The definition of a DFS tree immediately gives us the following necessary and sufficient conditions for the extendability of T. Let G be a graph with r∈ V(G) and let T⊆ G be a tree containing r. Then T is extendable to a DFS tree rooted in r if and only if (i) T is a DFS tree rooted in r of G[V(T)], (ii) for every connected component C of G-V(T), the vertices of N_G(V(C)) are vertices of a root-to-leaf path of T. Note that (i) and (ii) can be verified in polynomial (in fact, linear) time. We need the following variants of Observation <ref> for special extensions in our algorithms. Let G be a graph with r∈ V(G) and let T⊆ G be a tree containing r. Then T is extendable to a DFS tree rooted in r with an extension T' such that the vertices of V(T) are internal vertices of T' if and only if (i) T is a DFS tree rooted in r of G[V(T)], (ii) for every connected component C of G-V(T), the vertices of N_G(V(C)) are vertices of a root-to-leaf path of T, (iii) for every leaf v of T, there is u∈ V(G)∖ V(T) that is adjacent to v. Let G be a graph with r∈ V(G) and let T⊆ G be a tree containing r. Then T is extendable to a DFS tree rooted in r with an extension T' such that the vertices of L=V(G)∖ V(T) are leaves of T' if and only if (i) T is a DFS tree rooted in r of G[V(T)], (ii) L is an independent set, (iii) for every v∈ L, the vertices of N_G(v) are vertices of a root-to-leaf path of T. Now, we are ready to describe our algorithms. For the proof of Lemma <ref>, see Appendix <ref>. Dual Min-LLT and Dual Max-LLT can be solved in n^(k) time. Combining Lemma <ref> and Theorem <ref> implies Theorem <ref> by providing k^(k)· n^(1) time algorithms for the dual problems. § CONCLUSION We have shown that Dual Min-LLT and Dual Max-LLT admit kernels with (k^3) vertices and can be solved in k^(k)· n^(1) time. A natural question is whether the problems have linear kernels, such as for k-Internal Spanning Tree <cit.>. 
Another question is whether the problems can be solved by single-exponential FPT algorithms. As a byproduct of our kernelization algorithms for Dual Min-LLT and Dual Max-LLT, we also proved that Min-LLT and Max-LLT admit polynomial kernels for the structural parameterization by the vertex cover number. It is natural to wonder whether polynomial kernels exist for other structural parameterizations. In particular, it could be interesting to consider the parameterization by the feedback vertex number, i.e., by the minimum size of a vertex set X such that G-X is a forest. §.§.§ Acknowledgements We acknowledge support from the Research Council of Norway grant “Parameterized Complexity for Practical Computing (PCPC)” (NFR, no. 274526) and “Beyond Worst-Case Analysis in Algorithms (BWCA)” (NFR, no. 314528). splncs04 § PROOF OF CLAIM <REF> IN THE PROOF OF LEMMA <REF> We show (i) by contradiction. Assume that there are u,v∈ N_G(x) such that the lowest common ancestor w of these vertices is distinct from u and v. Because x is not labeled by Rule <ref>, |W_uv|>2s. Hence, by Lemma <ref>, there is a vertex z∈ W_uv such that z is a leaf of T. However, any leaf in a DFS tree of T can be adjacent only to its ancestors in T. This contradiction proves the claim. We use exactly the same arguments to prove (ii) by replacing T by T' and observing that S is a vertex cover of G'. To show (iii), let T' be a DFS tree with a root r. By (ii), there is a leaf y such that the vertices of N_G(x) are vertices of the (r,y)-path in T'. Observe that y may be not unique. We prove that y∉ N_G(x). For the sake of contradiction, assume that x and y are adjacent. Because d_G(x)≥ 2, x has a neighbor u≠ x. Because x is not labeled by Rule <ref>, |W_uy|>2s. By Lemma <ref>, we obtain that there is v∈ W_uy that is a leaf of T'. We have that vy∈ E(G') but two leaves of a DFS tree cannot be adjacent; a contradiction. This proves that y∉ N_G(x) and concludes the proof of the claim. § PROOF OF THEOREM <REF> The aim of Dual Max-LLT is to decide, given a graph G and an integer k≥ 0, whether G has a DFS tree with at least n-k leaves. This is equivalent to asking whether G has a DFS tree with at most k internal vertices. Let (G,k) be an instance of Dual Max-LLT. If G is disconnected, then (G,k) is a no-instance, and we return a trivial no-instance of Dual Max-DLL of constant size. From now, we assume that G is connected. If T is a DFS tree, then the set of internal vertices of T is a vertex cover of G by Observation <ref>. Hence, if G has a DFS tree with at most k internal vertices, then τ(G)≤ k. We approximate τ(G) by selecting greedily an inclusion-maximal matching M in G (see, e.g., <cit.>). If |M|>k, then we conclude that τ(G)>k and return a trivial no-instance of Dual Max-DLL of constant size. Assume that this is not the case. Then we take S as the set of endpoints of the edges of M and observe that S is a vertex cover of size at most 2k. We call the algorithm from Lemma <ref> for G and S, which outputs a graph G' with (k^3) vertices. The kernelization algorithm returns the instance (G',k) of Dual Max-DLL and stops. To see the correctness, note that by Lemma <ref>, for any nonnegative integer t≤ k, G has a DFS tree with t internal vertices if and only if G' has a DFS tree with t internal vertices. Because M can be constructed in linear time and the algorithm from Lemma <ref> is polynomial, the overall running time is polynomial. This completes the proof. 
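The folklore approximation invoked in this proof (and in the proof of Theorem 1) is short enough to state as code: the endpoints of a greedily chosen inclusion-maximal matching form a vertex cover of size at most 2τ(G). The edge-list interface is just one convenient encoding.

```python
def matching_vertex_cover(edges):
    """Endpoints of a greedy inclusion-maximal matching: a vertex cover of size <= 2*tau(G).
    `edges` is any iterable of pairs (u, v) of an undirected graph."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered, so it joins the matching
            cover.update((u, v))
    return cover

# In the proof above: if the matching has more than k edges (|cover| > 2k), then tau(G) > k and
# the Dual Max-LLT instance is a no-instance; otherwise |cover| <= 2k and Lemma 2 applies.
```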
§ PROOF OF LEMMA <REF> IN SECTION <REF> First, we give an algorithm for Dual Min-LLT. Let (G,k) be an instance of the problem. If G is disconnected, then (G,k) is a no-instance. Assume that this is not the case. Also, we have a trivial no-instance if n≤ k and we assume that n≥ k. Recall that the equivalent task of Dual Min-LLT is to decide, given a graph G and an integer k, whether G has a DFS tree with at least k internal vertices. We guess a set S of k internal vertices containing a root of a solution DFS tree T forming a subtree T'=T[S]. To guess T' and S, we apply Observation <ref> using the fact that T' should be a DFS tree of G[S]. Formally, we consider all k-tuples (v_1,…,v_k) of distinct vertices of G. For each k-tuple, we check whether there is a DFS tree T' of G[S], where S={v_1,…,v_k}, respecting the ordering v_1,…,v_k using Observation <ref>. If such a tree T' exists, we use Observation <ref> to check whether T' has an extension T such that the vertices of S are internal vertices of T. If we find such a k-tuple, we conclude that (G,k) is a yes-instance of Dual Min-LLT. Otherwise, if we fail to find T' and a required extension for all k-tuples, we conclude that (G,k) is a no-instance of Dual Min-LLT. The correctness of the algorithm immediately follows from Observations <ref> and <ref>. Because we have at most n^k k-tuples of vertices, we obtain that the overall running time is n^(k). We use a similar strategy for Dual Max-LLT. Recall that now the task is to decide whether a graph G has a DFS tree with at most k internal vertices. Let (G,k) be an instance of the problem. As above, we can assume that G is connected. Also, if n≤ k, then (G,k) is a yes-instance and we can assume that n>k. We guess a set S of k vertices containing a root and the internal vertices of a solution DFS tree T and a subtree T'=T[S]. For this, we consider all k-tuples (v_1,…,v_k) of distinct vertices of G. For each k-tuple, we check whether there is a DFS tree T' of G[S], where S={v_1,…,v_k}, respecting the ordering v_1,…,v_k using Observation <ref>. If such a tree T' exists, we use Observation <ref> to check whether T' has an extension T such that the vertices of V(G)∖ S are leaves of T. If we find such a k-tuple, we conclude that (G,k) is a yes-instance of Dual Max-LLT. Otherwise, if we fail to find T' and a required extension for all k-tuples, we conclude that (G,k) is a no-instance of Dual Max-LLT. Observations <ref> and <ref> imply correctness, and the overall running time is n^(k). This concludes the proof.
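A direct, unoptimized rendering of this brute force is sketched below. The function `respects_ordering` implements the check of Observation 2 (a DFS forced to discover vertices in a given order can be continued in only one way), and the enumeration yields the candidate pairs (S, T'); the extension tests of Observations 4 and 5, which need the root-to-leaf path bookkeeping, are deliberately omitted, so this is a sketch of the guessing step only.

```python
from itertools import permutations

def respects_ordering(adj, order):
    """Observation 2: there is at most one DFS tree of the graph `adj` that discovers the
    vertices exactly in `order`; return its parent map, or None if no such DFS run exists."""
    parent = {order[0]: None}
    path = [order[0]]                      # current root-to-"active vertex" path of the DFS
    for v in order[1:]:
        # Backtrack while the deepest vertex on the path has no undiscovered neighbour.
        while path and all(w in parent for w in adj[path[-1]]):
            path.pop()
        if not path or v not in adj[path[-1]]:
            return None                    # the forced next vertex cannot be discovered here
        parent[v] = path[-1]
        path.append(v)
    return parent

def candidate_internal_sets(adj, k):
    """Enumerate the n^O(k) guesses used in the proof of Lemma 3: ordered k-tuples whose
    induced subgraph admits a DFS tree respecting the tuple's order.  A full solver would
    additionally check the extension conditions of Observations 4 and 5 on each candidate."""
    for tup in permutations(adj, k):
        sub = {u: [v for v in adj[u] if v in tup] for u in tup}
        tree = respects_ordering(sub, list(tup))
        if tree is not None:
            yield set(tup), tree
```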
http://arxiv.org/abs/2307.01816v1
20230704164209
Over-the-Counter Market Making via Reinforcement Learning
[ "Zhou Fang", "Haiqing Xu" ]
q-fin.TR
[ "q-fin.TR" ]
Over-the-Counter Market Making via Reinforcement Learning Zhou Fang Haiqing Xu August 1, 2023 ========================================================= The over-the-counter (OTC) market is characterized by a unique feature that allows market makers to adjust bid-ask spreads based on order size. However, this flexibility introduces complexity, transforming the market-making problem into a high-dimensional stochastic control problem that presents significant challenges. To address this, this paper proposes an innovative solution utilizing reinforcement learning techniques to tackle the OTC market-making problem. By assuming a linear inverse relationship between market order arrival intensity and bid-ask spreads, we demonstrate that the optimal policy for bid-ask spreads follows a Gaussian distribution. We apply two reinforcement learning algorithms to conduct a numerical analysis, revealing the resulting return distribution and bid-ask spreads under different time and inventory levels. § INTRODUCTION Market makers play a crucial role as liquidity providers in diverse financial markets, adapting their liquidity provision strategies to suit the specific characteristics of each market. High-frequency trading (HFT) firms primarily engage in market-making activities, which are widely recognized for their contributions to stabilizing and enhancing the efficiency of financial markets. OTC markets encompass a wide range of financial instruments, such as foreign exchange (FX), bonds, and stocks, that are not traded on formal exchanges for various reasons. In an OTC market, a specific asset class market typically consists of multiple dealer-to-client (D2C) platforms, sometimes supplemented by a dealer-to-dealer (D2D) network, as observed in the case of FX markets. Market makers play a crucial role in the OTC market by providing liquidity to clients through the bid-ask prices that they offer for buying and selling. These bid-ask prices are adjusted based on the order size. Given the inherent volatility of financial markets, market makers aim to maintain manageable inventories and achieve this by either adjusting their bid-ask price quotes or utilizing external avenues, such as the D2D network, to manage inventory levels. The study of market making originated in the 1980s with seminal works such as <cit.> and <cit.>. A significant resurgence of interest in market making occurred with the publication of <cit.>, which sparked a wave of subsequent literature in this area. Subsequent works such as <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> introduced more sophisticated structures and features, including alpha signals, order flows, and minimum resting times, into market-making models. However, these papers primarily focused on single-dimensional market-making problems and did not explicitly address the challenges posed by multi-dimensional market making. The curse of dimensionality makes the multi-dimensional market-making problem significantly more intricate. Only recently have a series of papers, such as <cit.>, <cit.>, <cit.>, and <cit.>, begun to tackle this problem from a purely mathematical perspective. In parallel, machine learning techniques have also been employed to approach market making in several papers, including <cit.>, <cit.>, <cit.>, and <cit.>. 
However, these papers tend to adopt relatively simple models and are primarily focused on demonstrating the application of machine-learning techniques to market-making, rather than delving into the complexities of addressing intricate market-making issues through machine-learning methods. In this research paper, we present a novel framework based on reinforcement learning to address the intricate challenges of the multi-dimensional market-making problem encountered by over-the-counter (OTC) market makers. Our approach centers on the utilization of a stochastic policy, which enables the market maker to determine bid-ask spreads. The application of stochastic policies in financial mathematics has gained recognition, as evidenced by prior works such as <cit.> and <cit.>, which leverage stochastic policies to tackle portfolio management problems. Notably, employing a stochastic policy offers several advantages, including enhanced robustness and the ability to balance exploration and exploitation, as elucidated in the notable contributions of <cit.> and <cit.>, who propose a unified framework encompassing policy evaluation and policy gradient techniques, building upon the earlier aforementioned works. Within the scope of this paper, we specifically focus on the scenario where market order arrivals follow a Poisson process, with the intensity of arrivals being inversely proportional to bid-ask spreads. Under this assumption, we demonstrate that the optimal policy for bid-ask spread determination can be modeled as a Gaussian distribution. To validate the effectiveness of our proposed framework, we conduct extensive numerical experiments using simulated data, showcasing various performance metrics and bid-ask spread outcomes for specific inventory levels. The subsequent sections of this paper are structured as follows. Section 2 provides a comprehensive overview of the model setup, elucidating the key components and variables employed in our analysis. In Section 3, we derive the Hamilton-Jacobi-Bellman (HJB) equation and establish the optimal policy's formulation, shedding light on the fundamental principles guiding our approach. Moving forward, Section 4 presents proof of the policy improvement theorem, which serves as the cornerstone of our primary method for approximating the optimal policy. To validate the efficacy and practicality of our proposed approach, Section 5 presents a detailed analysis of the numerical results obtained through extensive experimentation. In this section, we compare the performance of our method against a traditional actor-critic algorithm, providing insightful comparisons and contrasting the bid-ask spreads policy under specific factors such as inventory level, and asset reference prices. § MODEL This paper delves into the intricate realm of stochastic control problems encountered by OTC market makers, who are confronted with the challenging task of setting diverse bid-ask quotes tailored to orders of varying sizes. Furthermore, these market makers possess the flexibility to externalize their inventory to fellow market participants, thereby introducing an additional dimension of strategic decision-making. In the context of our study, we focus on a relatively brief trading period denoted as [0, T] that typically spans several hours or a single trading day. To streamline the problem and facilitate analysis, we adopt the assumption that the volatility of the underlying asset remains constant throughout the entire trading period. 
This assumption allows us to concentrate on the dynamics of the asset's reference prices, also known as mid-prices, which are defined as follows: dS_t/S_t = σ dW_t In our model, we characterize the occurrence of buy and sell order executions of size z_k as Poisson processes. Specifically, we denote the number of buy order executions as N_t^+(k) and the number of sell order executions as N_t^-(k). The intensities of these Poisson processes are denoted as λ^+(k) and λ^-(k), respectively. In this paper, we simplify the setting by assuming that the market maker does not externalize its inventory. As a result, the dynamics of the inventory can be described as follows: dq_t = ∑_k = 1^K z_k(dN_t^+(k) - dN_t^-(k) ) in this paper, we assume a simple setting, which is the market orders arrival intensity is linearly inverse to bid-ask spreads, shown as follows, λ_t^± (k) = A_k - B_kϵ_t^a,b(k) Let ϵ_t = (ϵ_t^b, a(k))_k=1^K represent the bid-ask spreads posted by the market maker at time t. The probability density function for posting spreads ϵ_t is denoted as π(ϵ_t | t, S, q). If the market maker chooses to post the spreads ϵ_t at time t, the wealth exhibits the following dynamics: dX_t = ∑_k = 1^K z_k [ ϵ_t^b(k)dN_t^+(k) + ϵ_t^a(k)dN_t^-(k)] + d(q_t S_t) § MARKET-MAKING IN OTC MARKETS §.§ Hamilton-Jacobian-Bellman Equation In the context of our study, let us consider a policy π that guides the market maker's actions. We denote the corresponding inventory process under policy π as q^π_t, where its initial condition at time t is specified as S_t = S and q^π_t = q. The central objective of our market maker is to optimize the expected profit by effectively managing inventory risk and promoting exploration through the utilization of a stochastic policy. To quantitatively assess the level of exploration, we employ the cross-entropy of the stochastic policy, which offers insights into the policy's capacity for exploration and exploitation. Furthermore, to discourage excessive inventory holdings over the trading period, we introduce a penalty term based on the average square of the inventory. 
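The model dynamics above (mid-price, order arrivals, inventory, and wealth) are straightforward to simulate, which can serve as a sanity check on the ingredients entering the value function defined next. The sketch below discretizes the trading period with an Euler scheme and thins the Poisson arrivals with intensities λ_t^±(k) = A_k − B_k ε_t^{b,a}(k); all numerical values (σ, A, B, the constant spreads, the single order size) are placeholders, not calibrated parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (not from the paper): a single order size and one trading day T = 1.
sigma, T, n_steps = 0.02, 1.0, 10_000
dt = T / n_steps
A, B, z = 50.0, 40.0, 1.0            # arrival intensity lambda = A - B * spread, order size z
eps_bid = eps_ask = 0.5              # constant spreads, just to exercise the dynamics

S, q, cash = 100.0, 0.0, 0.0         # mid-price, inventory, cash account
for _ in range(n_steps):
    # Mid-price follows dS_t / S_t = sigma dW_t (driftless geometric Brownian motion).
    S *= 1.0 + sigma * np.sqrt(dt) * rng.standard_normal()
    # Bernoulli thinning of the Poisson arrivals over a small step dt.
    lam_buy = max(A - B * eps_bid, 0.0)   # our bid is hit: inventory increases by z
    lam_sell = max(A - B * eps_ask, 0.0)  # our ask is lifted: inventory decreases by z
    if rng.random() < lam_buy * dt:
        q += z
        cash -= z * (S - eps_bid)         # buy z units at the bid S - eps_bid
    if rng.random() < lam_sell * dt:
        q -= z
        cash += z * (S + eps_ask)         # sell z units at the ask S + eps_ask

print(f"terminal mark-to-market wealth: {cash + q * S:.2f}, inventory: {q:.1f}")
```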
By combining these elements, we define the value function under policy π as follows: V^π(t, S, q) =𝔼[ ∫_t^T ∫_ϵ_u[ ∑_k = 1^K z_k[ ϵ_u^b(k)dN_u^+(k) + ϵ_u^a(k)dN_u^-(k)] + d(q_u S_u) ] π (ϵ_u | u, S_u, q_u^π) dϵ_u - γ∫_t^T ∫_ϵ_uπ(ϵ_u | u, S_u, q_u^π) logπ(ϵ_u | u, S_u, q_u^π) dϵ_u du - δ∫_t^T q_u^2 du | S_t = S, q_t^π = q ] = 𝔼[ ∫_t^T ∫_ϵ_u∑_k = 1^K[ z_k ( S_u + ϵ_u^b(k) ) dN_u^+(k) - z_k( S_u - ϵ_u^a(k) ) dN_u^-(k) ] π(ϵ_u | u, S_u, q_u^π) dϵ_u - γ∫_t^T ∫_ϵ_uπ(ϵ_u | u, S_u, q_u^π) logπ(ϵ_u | u, S_u, q_u^π) dϵ_u du - δ∫_t^T q_u^2 du | S_t = S, q_t^π = q ] then the value function under the optimal policy is V(t, S, q) = πmax𝔼[ ∫_t^T ∫_ϵ_u∑_k = 1^K[ z_k ( S_u + ϵ_u^b(k) ) dN_u^+(k) - z_k( S_u - ϵ_u^a(k) ) dN_u^-(k) ] π(ϵ_u | u, S_u, q_u^π) dϵ_u - γ∫_t^T ∫_ϵ_uπ(ϵ_u | u, S_u, q_u^π) logπ(ϵ_u | u, S_u, q_u^π) dϵ_u du - δ∫_t^T q_u^2 du | S_t = S, q_t^π = q ] = πmax𝔼[ ∫_t^t + Δ t∫_ϵ_u∑_k = 1^K[ z_k ( S_u + ϵ_u^b(k) ) dN_u^+(k) - z_k( S_u - ϵ_u^a(k) ) dN_u^-(k) ] π(ϵ_u | u, S_u, q_u^π) dϵ_u - δ∫_t^t+Δ t q_u^2 du - γ∫_t^t + Δ t∫_ϵ_uπ(ϵ_u | u, S_u, q_u^π) logπ(ϵ_u | u, S_u, q_u^π) dϵ_u du + V(t + Δ t, S_t + Δ S_t, q_t + Δ q_t^π) | S_t = S, q_t^π = q ] = πmax{∫_ϵ_t∑_k = 1^K[ z_k ( S_t + ϵ_t^b(k) ) λ_t^+(k) - z_k( S_t - ϵ_t^a(k) ) dN_t^-(k) ] π(ϵ_t | t, S_t, q_t^π) dϵ_t Δ t - δ q_t^2Δ t - γ∫_ϵ_tπ(ϵ_t | t, S_t, q_t^π) logπ(ϵ_t | t, S_t, q_t^π) dϵ_t Δ t + 𝔼[ V(t + Δ t, S_t + Δ S_t, q_t^π + Δ q_t^π) | S_t = S, q_t^π = q ] } to make notation simplier, denote ℒV(t, S_t, q_t) as ℒV(t, S_t, q_t) = V(t, S_t, q_t) + (∂_t V(t, S_t, q_t) + 1/2σ^2 ∂_SS V(t, S_t, q_t)) Δ t + σ∂_S V(t, S_t, q_t) dW_t since dS_t = σ S_t dW_t, and dq_t = ∑_k z_k(dN_t^+(k) - dN_t^-(k)), by the Ito formula, we have the following, V(t + Δ t, S_t + Δ S_t, q_t + Δ q_t) = V(t + Δ t, S_t + Δ S_t, q_t) ∏_k (1 - dN_t^+(k))(1 - dN_t^-(k)) + ∑_k[V(t + Δ t, S_t + Δ S_t, q_t + z_k) dN_t^+(k) + V(t + Δ t, S_t + Δ S_t, q_t - z_k) dN_t^-(k)] = ℒV(t, S_t, q_t) ∏_k (1 - dN_t^+(k))(1 - dN_t^-(k)) + ∑_kℒV(t, S_t, q_t + z_k)dN_t^+(k) + ℒV(t, S_t, q_t - z_k) dN_t^-(k) it is crucial to note that the previous derivation of the Ito formula is based on the assumption that the inventory process is defined as dq_t = ∑_k z_k(dN_t^+(k) - dN_t^-(k)). It is essential to emphasize that the intensities of the Poisson processes are determined by the quoted bid-ask spreads. As a result, the inventory process in the aforementioned Ito formula assumes that the bid-ask spreads are already determined. Hence, when calculating the conditional expectation 𝔼[V(t+Δ t, S_t + Δ S_t, q_t^π + Δ q_t^π) | S_t = S, q_t^π = q], it becomes necessary to consider and average over all possible scenarios. 
Consequently, the conditional expectation can be expressed as follows, taking into account the various scenarios 𝔼[ V(t+Δ t, S_t + Δ S_t, q_t^π + Δ q_t^π) | S_t = S, q_t^π = q ] = V(t, S_, q) + ∫_ϵ_tπ(ϵ_t | t, S, q)[ - ∑_k(λ_t^+(k)+ λ_t^-(k)) V(t, S, q) + ∂_t V(t, S, q) + 1/2σ^2 ∂_SS V(t, S, q) + ∑_k[λ_t^+(k) V(t, S, q + z_k) + λ_t^-(k) V(t, S, q - z_k) ] ] d ϵ_t Δ t then the Hamilton-Jacobian-Bellman equation is πmax{∫_ϵ_t∑_k[λ_t^+(k) V(t, S, q + z_k) + λ_t^-(k) V(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V(t, S, q) ] π(ϵ_t | t, S, q)dϵ_t + ∫_ϵ_t∑_k = 1^N[ z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k) ) ] π(ϵ_t | t, S, q) dϵ_t - γ∫_ϵ_tπ(ϵ_t | t, S, q) logπ(ϵ_t | t, S, q) dϵ_t } - δ q_t^2 + ∂_t V(t, S, q) + 1/2σ^2 ∂_SS V(t, S, q) = 0 §.§ Optimal Stochastic Policy To determine the maximizer for the quantity inside the max bracket of the HJB equation, we utilize the calculus of variations 0 = ∫_ϵ_t∑_k[λ_t^+(k) V(t, S, q + z_k) + λ_t^-(k) V(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V(t, S, q) ] δπ dϵ_t + ∫_ϵ_t∑_k = 1^N[ z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k) ) ] δπ dϵ_t - γ∫_ϵ_tπδπ/π dϵ_t - γ∫_ϵ_tδπlogπ d ϵ_t since π is probability density distribution, then ∫_ϵ_tδπ d ϵ_t = 0 the equation (10) becomes 0 = ∫_ϵ_tδπ (∑_k[λ_t^+(k) V(t, S, q + z_k) + λ_t^-(k) V(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V(t, S_t, q_t) + z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k) ) ] - γ (δπ) logπ dϵ_t ) d ϵ_t the quantity inside the bracket above is a constant C = ∑_k[λ_t^+(k) V(t, S, q + z_k) + λ_t^-(k) V(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V(t, S, q) + z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k) ) ] - γlogπ to simplify the notations, let ℋ_k^+(t, S, q, π) = V^π(t, S, q + z_k) - V^π(t, S, q) + z_k S ℋ_k^-(t, S, q, π) = V^π(t, S, q - z_k) - V^π(t, S, q) - z_k S under the optimal policy π^*, there is ℋ_k^+(t, S, q) = V(t, S, q + z_k) - V(t, S, q) + z_k S ℋ_k^-(t, S, q) = V(t, S, q - z_k) - V(t, S, q) - z_k S then the optimal stochastic policy is π^* (ϵ_t | t, S, q) ∝exp{1/γ∑_k (A_k - B_k ϵ_t^a,b(k)) (z_k ϵ_t^a,b(k) + ℋ_k^±(t, S, q)) } ∝∏_k exp{ -z_kB_k/γ[ ϵ_t^a,b(k) - A_k/2B_k + ℋ_k^±(t, S, q)/2z_k]^2 } ∝∏_k 𝒩(ϵ_t^a,b| A_k/2B_k - ℋ_k^±(t, S, q)/2z_k, γ/2z_kB_k) hence, it is evident that the optimal policy corresponds to a multi-dimensional Gaussian distribution. To streamline the notation, let us introduce the following notation μ(t, S, q, π) = (A_1/2B_1 - ℋ_1^±(t, S, q, π)/2z_1, ..., A_N/2B_N - ℋ_N^±(t, S, q, π)/2z_N) Σ = [ γ/2z_1B_1 ; γ/2z_1B_1 ; ⋱ ; γ/2z_kB_k ; γ/2z_kB_k ] then the optimal policy becomes π^* ∼𝒩(·| μ(t, S, q, π^*), Σ) § POLICY IMPROVEMENT THEOREM Given any π, let the new policy π_new to be π_new∼𝒩 (·| μ(t, S, q, π), Σ ) then the following inequality holds V^π(t, S, q) ≤ V^π_new(t, S, q) Let q_t^π_new denote the inventory process under the policy π_new, with an initial condition at time t of q_t^π_new = q, and S_t = S. 
Applying the Ito formula and averaging over all possible scenarios, we obtain the following expression V^π(t, S, q) = 𝔼[V^π(s, S_s, q_s^π_new) + ∫_t^s ∫_ϵ_uπ_new(ϵ_u | u, S_u, q_u^π_new) V^π(u, S_u, q_u^π_new)∑_k [λ_u^+(k) + λ_u^-(k)] d ϵ_u du -∫_t^s ∫_ϵ_uπ_new(ϵ_u | u, S_u, q_u^π_new)∑_k[V^π(u, S_u, q_u^π_new + z_k) λ_u^+(k) + V^π(u, S_u, q_u^π_new - z_k) λ_u^-(k) ] dϵ_u du - ∫_s^t ( ∂_t V^π(u, S_u, q_u^π_new) + 1/2σ^2 ∂_SS V^π (u, S_u, q_u^π_new) ) du | S_t = S, q_t^π_new = q ] since at time t, under policy π, the following equality holds ∫_ϵ_t∑_k[λ_t^+(k) V^π(t, S, q + z_k) + λ_t^-(k) V^π(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V^π(t, S, q) ] π(ϵ_t | t, S, q)dϵ_t + ∫_ϵ_t∑_k = 1^N[ z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k)) ] π(ϵ_t | t, S, q) dϵ_t - γ∫_ϵ_tπ(ϵ_t | t, S, q) logπ(ϵ_t | t, S, q) dϵ_t - δ q_t^2 + ∂_t V^π(t, S, q) + 1/2σ^2 ∂_SS V^π(t, S, q) = 0 for the policy π_new, given its construction, and employing the same calculus of variation arguments as in equations (10) - (13), we can conclude that π_new maximizes the following quantity πmax{∫_ϵ_t∑_k = 1^N[ z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k) ) ] π(ϵ_t | t, S, q) dϵ_t + ∫_ϵ_t∑_k[λ_t^+(k) V^π(t, S, q + z_k) + λ_t^-(k) V^π(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V^π(t, S, q) ] π(ϵ_t | t, S, q)dϵ_t - γ∫_ϵ_tπ(ϵ_t | t, S, q) logπ(ϵ_t | t, S, q) dϵ_t } which results in the following inequality ∫_ϵ_t∑_k[λ_t^+(k) V^π(t, S, q + z_k) + λ_t^-(k) V^π(t, S, q - z_k) - (λ_t^+(k)+ λ_t^-(k)) V^π(t, S, q) ] π_new(ϵ_t | t, S, q)dϵ_t + ∫_ϵ_t∑_k = 1^N[ z_k λ_t^+(k)( S + ϵ_t^b(k) ) - z_k λ_t^-(k)( S - ϵ_t^a(k) ) ] π_new(ϵ_t | t, S, q) dϵ_t - γ∫_ϵ_tπ_new(ϵ_t | t, S, q) logπ_new(ϵ_t | t, S, q) dϵ_t - δ q_t^2 + ∂_t V^π(t, S, q) + 1/2σ^2 ∂_SS V^π(t, S, q) ≥ 0 then equation (23) yields V^π(t, S, q) ≤𝔼[ ∫_t^s∫_ϵ_t∑_k = 1^N[ z_k λ_u^+(k)( S_u + ϵ_u^b(k) ) - z_k λ_u^-(k)( S_u - ϵ_u^a(k) ) ] π_new(ϵ_u | u, S_u, q_u^π_new) dϵ_u du - δ∫_t^s q_u^2 du - γ∫_t^s ∫_ϵ_uπ_new(ϵ_u | u, S_u, q_u^π_new) logπ_new(ϵ_u | u, S_u, q_u^π_new) dϵ_u du + V^π(s, S_s, q_s^π_new) | S_t = S, q_t^π_new = q ] by setting s = T, we obtain V^π(T, S_T, q_T^π_new) = V^π_new(T, S_T, q_T^π_new). Substituting this into equation (75), there is V^π(t, S, q) ≤𝔼[ ∫_t^T∫_ϵ_t∑_k = 1^N[ z_k λ_u^+(k)( S_u + ϵ_u^b(k) ) - z_k λ_u^-(k)( S_u - ϵ_u^a(k) ) ] π_new(ϵ_u | u, S_u, q_u^π_new) dϵ_u du - δ∫_t^T q_u^2 du - γ∫_t^T ∫_ϵ_uπ_new(ϵ_u | u, S_u, q_u^π_new) logπ_new(ϵ_u | u, S_u, q_u^π_new) dϵ_u du + V^π(T, S_T, q_T^π_new) | S_t = S, q_t^π_new = q ] ≤ V^π_new (t, S, q) § REINFORCEMENT LEARNING ALGORITHM In this section, we introduce two reinforcement learning algorithms. The first algorithm is based on the policy improvement theorem established earlier, and it utilizes a single neural network to model the value function. On the other hand, the second algorithm follows the actor-critic approach, where separate neural networks are employed to model the policy and the value function. This actor-critic algorithm offers the advantage of better control over the range of bid-ask spreads and typically requires less training time compared to the first algorithm §.§ Policy Iteration Algorithm Given a policy π, and q_t^π is inventory process under policy π. 
Let the initial condition at time t to be S_t = S, q_t^π = q, the value function under policy π is V^π(t, S, q) = 𝔼[ ∫_t^s ∫_ϵ_u∑_k = 1^N[ z_k ( S_u + ϵ_u^b(k) ) dN_u^+(k) - z_k( S_u - ϵ_u^a(k) ) dN_u^-(k) ] π(ϵ_u | u, S_u, q_u^π) dϵ_u - δ∫_t^s q_u^2 du - γ∫_t^s ∫_ϵ_uπ(ϵ_u | u, S_u, q_u^π) logπ(ϵ_u | u, S_u, q_u^π) dϵ_u du + V(s, S_s, q_s^π) | S_t = S, q_t^π = q ] then there is 𝔼[ 1/s-t∫_t^s ∫_ϵ_u∑_k = 1^N[ z_k ( S_u + ϵ_u^b(k) ) dN_u^+(k) - z_k( S_u - ϵ_u^a(k) ) dN_u^-(k) ] π(ϵ_u | u, S_u, q_u^π) dϵ_u - δ/s - t∫_t^s q_u^2 du - γ/s-t∫_t^s ∫_ϵ_uπ(ϵ_u | u, S_u, q_u^π) logπ(ϵ_u | u, S_u, q_u^π) dϵ_u du + V^π(s, S_s, q_s^π) - V^π(t, S_t, q_t^π)/s-t | S_t = S, q_t^π = q ] = 0 By taking the limit as s approaches t and parametrizing the value function V^π_θ, we can define the temporal difference error as follows δ_t^θ = s → tlim𝔼[ V_θ^π(s, S_s, q_s^π) - V_θ^π(t, S_t, q_t^π)/s-t | S_t = S, q_t^π = q ] - γ∫_ϵ_tπ(ϵ_t | t, S, q) logπ(ϵ_t | t, S, q) dϵ_t + ∫_ϵ_t∑_k = 1^N[ z_k ( S + ϵ_t^b(k) ) dN_t^+(k) - z_k( S - ϵ_t^a(k) ) dN_t^-(k) ] π(ϵ_t | t, S, q) dϵ_t - δ q_t^2 using the Monte Carlo method to generate a set of sample paths 𝒟 = {(t_i, S_i^d, q_i^d, Δ N_t_i^+, Δ N_t_i^-)_i = 1^T}_d = 1^D, then the loss to be minimized is ML(θ) = 1/2∑_𝒟∑_i ( V^π_θ (t_i + 1, S_t_i + 1^d, q_t_i + 1^d) - V_θ^π(t_i, S_t_i^d, q_t_i^d)/Δ t - γ∫_ϵ_t_iπ(ϵ_t_i | t_i, S_t_i^d, q_t_i^d) logπ(ϵ_t_i | t_i, S_t_i^d, q_t_i^d) dϵ_t_i ∫_ϵ_t_i∑_k = 1^N[ z_k ( S_t_i^d + ϵ_t_i^b(k) ) dN_t_i^+(k) - z_k( S_t_i^d - ϵ_t_i^a(k) ) dN_t_i^-(k) ] π(ϵ_t_i | t_i, S_t_i^d, q_t_i^d) dϵ_t_i - δ q_t_i^2 )^2 Δ t = 1/2∑_𝒟∑_i ( V^π_θ (t_i + 1, S_t_i + 1^d, q_t_i + 1^d) - V_θ^π(t_i, S_t_i^d, q_t_i^d)/Δ t - γ( N(1 + log 2π) + ∑_k logγ/2z_kB_k) + ∑_k z_k S_t_i^d (Δ N_t_i^+(k) - Δ N_t_i^-(k)) - δ q_t_i^2 + ∑_k ((A_k/2B_k - ℋ^+_θ(t_i, S_t_i, q_t_i, π)/2z_k) Δ N^+_t_i(k) + (A_k/2B_k - ℋ^-_θ(t_i, S_t_i, q_t_i, π)/2z_k) Δ N^-_t_i(k) ) )^2 Δ t The following is the policy iteration algorithm §.§ Actor-Critic Algorithm Given our knowledge that the optimal policy follows a Gaussian distribution, we can consider the following model. We introduce a neural network V^θ(t, S, q) with input (t, S, q) and outputting a scalar value. Additionally, we model the mean of the policy using a neural network, denoted as π^ϕ = 𝒩 ( M^ϕ(t, S, q), Σ). When generating a sample path denoted by 𝒟 = (t_i, S_t_i, q_t_i, ϵ_t_i, Δ N_t_i^+, Δ N_t_i^-)_i = 0^T, we can consider the temporal-difference error for the value function, which is defined as δ^θ _t_i = r_t_i + V^θ(t_i + 1, S_t_i + 1, q_t_i + 1) - V^θ(t_i, S_t_i, q_t_i) = (z_1 Δ N_t_i^+(1), z_1 Δ N_t_i^-(1) , ..., z_N Δ N_t_i^+(N) ,z_N Δ N_t_i^-(N) ) M^ϕ(t, S, q) + [ q_t_i+1S_t_i+1 - q_t_iS_t_i - δ q_t_i^2 - γ( N(1 + log 2π) + 1/2log | Σ | ) ] Δ t + V^θ(t_i + 1, S_t_i + 1, q_t_i + 1) - V^θ(t_i, S_t_i, q_t_i) The critic loss is L(θ) = 1/2∑_i = 0^T - 1 (δ_t_i^θ)^2 then the critic loss's gradient is ∇_θ L(θ) = ∑_i = 0^T - 1δ_t_i^θ∇_θδ_t_i^θ the policy gradient is ∇_ϕ J(ϕ) = ∑_i = 0^T - 1δ_t_i^θ∇_ϕlogπ^ϕ(ϵ_t_i |t_i, S_t_i, q_t_i) § NUMERICAL RESULTS During our numerical analysis, we established specific parameter values to facilitate the evaluation. The trading window was set to T = 1, with a trading interval of dt = 0.01. The volatility of the mid-price was set at σ = 0.05, the exploration coefficient was γ = 0.01, and the penalty coefficient for inventory was δ = 0.01. 
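As a brief aside before completing the list of experimental parameters, the temporal-difference updates defined above can be illustrated with a short sketch. This is a minimal, self-contained illustration rather than the authors' implementation: the placeholder multilayer perceptrons stand in for the networks described below, the learning rates are arbitrary, and the one-step reward (spread revenue, inventory revaluation, entropy bonus and inventory penalty) is assumed to be computed outside the function. The covariance is the diagonal matrix with entries γ/(2 z_k B_k) obtained earlier.

```python
import torch
import torch.nn as nn

z = torch.tensor([10., 20., 30., 40., 50., 60.])
B = torch.tensor([1., 1., 1., 1., 1., 1.])
gamma_exp = 0.01                                   # exploration coefficient

class MLPHead(nn.Module):
    """Placeholder network mapping (t, S, q) to an output vector."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, t, S, q):
        return self.net(torch.stack([t, S, q], dim=-1))

value_net = MLPHead(1)                      # critic V^theta(t, S, q)
mean_net = MLPHead(2 * len(z))              # actor mean M^phi: one bid and one ask spread per tier
sigma_sq = gamma_exp / (2 * torch.cat([z, z]) * torch.cat([B, B]))   # Sigma = diag(gamma / (2 z_k B_k))
opt_critic = torch.optim.Adam(value_net.parameters(), lr=1e-3)
opt_actor = torch.optim.Adam(mean_net.parameters(), lr=1e-3)

def actor_critic_step(t, S, q, eps, reward, t1, S1, q1):
    """One update from a transition (t, S, q) -> (t1, S1, q1) with sampled spreads `eps`."""
    dist = torch.distributions.Normal(mean_net(t, S, q), sigma_sq.sqrt())
    log_prob = dist.log_prob(eps).sum(-1)
    v, v_next = value_net(t, S, q).squeeze(-1), value_net(t1, S1, q1).squeeze(-1)
    td_error = reward + v_next - v                       # delta^theta
    critic_loss = 0.5 * td_error.pow(2).sum()            # gradient is sum of delta * grad(delta)
    opt_critic.zero_grad(); critic_loss.backward(retain_graph=True); opt_critic.step()
    actor_loss = -(td_error.detach() * log_prob).sum()   # ascend delta * grad(log pi)
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    return td_error.detach()
```

The tier-level order sizes and the intensity coefficients used in the actual experiments are listed next.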
Additionally, we defined the order size tiers and the coefficient of market order arrival intensity as follows z = [10, 20, 30, 40, 50, 60] A = [20, 18, 15, 12, 10, 8] B = [1, 1, 1, 1, 1, 1] Our investigation starts with numerical studies on the policy improvement algorithm. The algorithm initiates by initializing two neural networks: one dedicated to capturing the bid-ask policy and another designed to approximate the corresponding value function. Through multiple epochs of training, the bid-ask policy network undergoes refinement. Subsequently, it is replaced with the newly trained value network as prescribed in the policy iteration algorithm. To determine the most suitable neural network structure for the policy and value functions, we employ three different architectures: Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and CNN with a residual block. These network architectures are utilized as initial models for the policy and value functions. To evaluate the approximation capabilities of these neural networks with respect to the true value function under a specific policy, we compute the martingale loss, as defined in Equation (32), using 10 randomly generated samples per epoch. The value function network is trained for a total of 50 epochs. Figures 1-3 display the loss functions associated with the various neural network architectures, with the loss range appropriately adjusted for visualization purposes. Furthermore, Figure 4 presents a comparative analysis of these three loss plots, with a common constant utilized to normalize the loss range. It is essential to emphasize that the constant values employed for division differ across the four figures to ensure accurate representation and comparison. The loss plots presented above clearly indicate that the CNN with a residual block demonstrates a slightly higher level of approximation capability compared to the CNN without a residual block. Additionally, both CNN architectures outperform the MLP in terms of their ability to approximate the value function. Based on these findings, we have made the decision to utilize the CNN with a residual block for the subsequent numerical studies, as it exhibits the most promising performance in terms of accuracy and approximation. To model the value function in the policy iteration algorithm and actor-critic algorithm, we utilize a convolutional neural network (CNN) with a residual block. The inputs to the neural network are the variables (t, S, q), and the output is a scalar representing the predicted value. In the actor-critic algorithm, we employ the same neural network structure to model the mean of the Gaussian policy. However, we modify the output layer to produce a vector representing the mean of the bid-ask spreads. The CNN architecture consists of 1-dimensional convolutional layers with identical settings. Each convolutional layer has an input channel of 1 and an output channel of 2, with a kernel size of 3 and stride and padding set to 1. The residual block is composed of two convolutional layers with Rectified Linear Unit (ReLU) as the activation function. The overall structure of the neural network involves mapping the inputs to a higher-dimensional space using a linear layer, which in our case has a dimension of 128. This is followed by two residual blocks and four linear layers with varying dimensions. The activation function used throughout the network is ReLU. 
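The architecture just described can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the channel bookkeeping inside the residual block and the sizes of the four final linear layers are not fully specified in the text, so the choices below (a second convolution mapping back to one channel so that the skip connection type-checks, and 64/32/16 hidden units) are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Two 1-D convolutions (kernel 3, stride 1, padding 1) with ReLU and a skip connection."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 2, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv1d(2, 1, kernel_size=3, stride=1, padding=1)  # back to 1 channel (assumption)
        self.relu = nn.ReLU()
    def forward(self, x):                     # x: (batch, 1, width)
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)

class CriticNet(nn.Module):
    """(t, S, q) -> value: linear lift to width 128, two residual blocks,
    then four linear layers (hidden sizes are placeholders)."""
    def __init__(self, in_dim=3, width=128, out_dim=1):
        super().__init__()
        self.lift = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(ResidualBlock1D(), ResidualBlock1D())
        self.head = nn.Sequential(nn.Linear(width, 64), nn.ReLU(),
                                  nn.Linear(64, 32), nn.ReLU(),
                                  nn.Linear(32, 16), nn.ReLU(),
                                  nn.Linear(16, out_dim))
    def forward(self, t, S, q):
        x = torch.relu(self.lift(torch.stack([t, S, q], dim=-1)))  # (batch, 128)
        x = self.blocks(x.unsqueeze(1)).squeeze(1)                 # treat features as a 1-D signal
        return self.head(x)

value_net = CriticNet()                    # scalar value function
policy_mean_net = CriticNet(out_dim=12)    # actor head: 2K spread means for K = 6 tiers
```

The actor of the actor-critic algorithm reuses the same backbone and only changes the output dimension, as noted above.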
By employing this CNN architecture with a residual block, we aim to capture complex patterns and relationships in the input data, allowing for more accurate predictions of the value function and bid-ask spreads. In our numerical analysis, we conduct a total of five policy iterations, following the prescribed steps of the policy iteration algorithm at each iteration. Figures 5-9 showcase the return distributions obtained at each policy iteration. To evaluate the performance of each iteration, we generate a dataset of 100 samples and compute the corresponding returns, which are visualized as histograms. To complement the discrete histograms, we employ Gaussian kernel density estimation (KDE), which yields a smooth, continuous description of the observed returns and enables a more detailed analysis of their characteristics. Examining the return distributions across the five policy iterations provides insight into the evolution of the policy and its impact on the overall returns. Figure 10 compares the KDE plots of the return distributions across the five policy iterations, showing how the distributions change as the policy iteration procedure progresses. Figure 11 focuses on the KDE plots of the return distributions from the first and fifth policy iterations, giving a more detailed view of how the return distribution evolves over the course of the algorithm. From Figure 11, we observe that the overall return distribution improves as the policy iteration procedure progresses, suggesting that the algorithm is effective in refining the policy and enhancing returns. It is important to acknowledge that the observed returns exhibit unusually high values, and one potential explanation is the unrealistic magnitude of the bid-ask spreads. To provide further insight into this matter, Figures 12 and 13 show the bid-ask spreads at time t = 0 for tier 1 and tier 2 securities, respectively, across different reference prices and inventory levels. Furthermore, Figures 14 and 15 illustrate the bid and ask prices for all tiers at time t = 0, with a reference price of S = 1 and an inventory level of q = 50; these fit the intuition that larger orders receive narrower bid-ask spreads, and they contribute to our understanding of OTC market making. We analyzed this phenomenon carefully. One of the primary reasons the policy iteration algorithm yields inflated bid-ask spreads lies in the assumption that the arrival intensity of market orders depends linearly on the bid-ask spreads; this modeling assumption becomes unrealistic when the spreads are excessively large. Consequently, the algorithm can still operate, albeit with potential complications arising from negative bidding prices. 
Such issues can have profound implications. Moreover, the policy iteration algorithm has limited control over the bid-ask spreads, owing to the approximation capabilities of the neural networks and the small number of inputs. In contrast, the actor-critic algorithm addresses this challenge by modelling the mean of the bid-ask policy with a neural network, which allows better control over the spreads: suitable activation functions constrain the proposed spreads within a reasonable range. In addition, the actor-critic algorithm trains faster than the policy iteration algorithm. These benefits make the actor-critic algorithm a promising approach for bid-ask spread control in practice. Figures 16 and 17 depict the critic loss and policy loss, respectively, as defined in the actor-critic algorithm. Figures 18-20 show the bid-ask spreads at time t = 0 for tiers 1, 2, and 3, respectively. These figures highlight the improved realism and coherence of the bid-ask spreads compared to the policy iteration results. Notably, the bid spreads tend to increase as the inventory level rises, while the ask spreads decrease accordingly. This behavior aligns with the goal of effectively managing inventory and reflects the dynamics of a more realistic market-making strategy. Figure 21 displays the return distribution obtained with the actor-critic algorithm. Although the returns do not reach the exceptional levels observed for the policy iteration algorithm, they align more closely with what a realistic market-making strategy should deliver. Despite occasional negative returns, the overall profitability of the strategy is evident, which further supports the actor-critic algorithm as a practical and profitable approach to market-making. § CONCLUSION In summary, this paper introduces a reinforcement learning framework tailored to the multi-dimensional market-making problem in OTC markets. By utilizing a stochastic policy and modelling the relationship between market order arrivals and bid-ask spreads, we demonstrate the effectiveness of the proposed approach through extensive numerical experiments. In future research, we plan to move beyond the assumptions made in the present study by exploring non-parametric methods, which offer greater flexibility and adaptability to real market data. Such methods should allow us to capture the complexities of actual market dynamics, refine our understanding of the market-making problem, and produce more nuanced and robust results.
http://arxiv.org/abs/2307.03089v1
20230706160143
Volumetric Occupancy Detection: A Comparative Analysis of Mapping Algorithms
[ "Manuel Gomes", "Miguel Oliveira", "Vítor Santos" ]
cs.RO
[ "cs.RO" ]
Date of current version July 6, 2023. [1]Intelligent System Associate Laboratory (LASI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal [2]Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal This work was supported by the Project Augmented Humanity [POCI-01-0247-FEDER-046103], financed by Portugal 2020, under the Competitiveness and Internationalization Operational Program, the Lisbon Regional Operational Program, and by the European Regional Development Fund. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Corresponding author: Manuel Gomes (e-mail: manuelgomes@ua.pt). Despite the growing interest in innovative functionalities for collaborative robotics, volumetric detection remains indispensable for ensuring basic security. However, there is a lack of widely used volumetric detection frameworks specifically tailored to this domain, and existing evaluation metrics primarily focus on time and memory efficiency. To bridge this gap, the authors present a detailed comparison based on a simulation environment, ground truth extraction, and automated calculation of evaluation metrics. This enables the evaluation of state-of-the-art volumetric mapping algorithms, namely OctoMap, SkiMap, and Voxblox, providing valuable insights through qualitative and quantitative analyses. The study not only compares the different frameworks but also explores various parameters within each framework, offering additional insights into their performance. Volumetric Detection; Human-robot Collaboration; ROS; OctoMap; SkiMap; Voxblox; Volumetric Occupancy Detection: A Comparative Analysis of Mapping Algorithms Manuel Gomes1,2, Miguel Oliveira1,2, and Vítor Santos1,2 Member, IEEE ============================================================================ § INTRODUCTION The field of human-robot collaboration is experiencing rapid advancements, leading to the introduction of new functionalities and interfaces. One prominent area of development involves the use of visual interfaces for gesture recognition. Researchers have explored various approaches, including those utilizing RGB <cit.>, depth <cit.> or stereo cameras <cit.> to capture and interpret human gestures. Another avenue of exploration is the use of audio-based functionalities. Some researchers have focused on transforming speech into text <cit.> or detecting emotions from speech <cit.>. Combining both visual and audio cues, certain authors have investigated the fusion of images and audio for speech <cit.> and emotion recognition <cit.>. In addition to visual and audio interfaces, electromyography interfaces have emerged as a promising approach for gesture recognition in human-robot collaboration. These interfaces measure muscle activity signals to detect and interpret human gestures <cit.>. Furthermore, advanced functionalities in human-robot collaboration encompass the handover of objects between humans and machines, for which researchers have proposed innovative strategies and techniques to facilitate smooth and efficient object transfer <cit.>. 
While these functionalities and interfaces are intriguing from a research perspective, they are not indispensable for collaboration to take place. An essential functionality that underpins collaboration scenarios, however, is volumetric detection. Volumetric detection provides a crucial security feature by enabling real-time detection of foreign objects within a workspace, including their current pose. While 1-D or 2-D barriers can only detect the entrance of foreign objects, volumetric detection creates an environment conducive to collaboration, enhancing safety and efficiency in collaborative settings. Currently, there is a lack of widely adopted volumetric detection frameworks specifically designed for human-robot collaboration. Mapping frameworks serve as the foundation for most volumetric detection systems, since volumetric detection is itself a prerequisite for mapping. However, the evaluation of mapping frameworks often lacks comprehensive and insightful metrics, with performance assessments primarily focused on time and memory efficiency indicators <cit.>. This paper aims to address the aforementioned gaps in research. The objectives of this study are to: * Create a simulation environment capable of testing the capabilities of various volumetric detection frameworks; * Establish a methodology to extract ground truth data from the simulation environment; * Apply evaluation metrics and automate their calculation to enable a comprehensive evaluation; * Provide insights into the performance of well-known mapping algorithms, namely OctoMap, SkiMap, and Voxblox, and identify optimal parameter settings for these frameworks. The article is organized in five sections to address these objectives. Introduction (<ref>) presents the purpose of this research and highlights the contributions of the authors. Related Work (<ref>) provides a detailed description of OctoMap, SkiMap, and Voxblox, including their potential advantages and disadvantages. Proposed Approach (<ref>) outlines the testing environment, the ground truth extraction methodology, and the evaluation metrics. Results (<ref>) describes the experimental methodology and presents extensive qualitative and quantitative results. Finally, Conclusions (<ref>) summarizes the approach and its advantages, while also delineating avenues for future research and development. § RELATED WORK This section provides an overview of three open-source mapping frameworks, namely OctoMap, SkiMap, and Voxblox. These frameworks have played a crucial role in advancing the field of 3D mapping, enabling robots and autonomous systems to navigate and operate efficiently and accurately in complex spatial environments. OctoMap is a mapping framework that utilizes a tree-based representation known as an octree to provide high flexibility in terms of mapped area and resolution <cit.>. Octrees are hierarchical data structures for spatial subdivision in three dimensions <cit.>. Each octree consists of nodes that represent cubic volumes of space. In OctoMap, this tree is dynamically updated in real time based on sensor measurements. The tree begins with a single root node representing the entire space, which is then recursively subdivided into eight child nodes until a predefined minimum voxel size is reached, thereby discretizing the three-dimensional space into a hierarchical grid. An example of an octree representation is depicted in <ref>. 
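To make the recursive subdivision concrete, the following sketch shows a minimal octree that inserts a point by descending from the root cube and splitting nodes until the minimum voxel size is reached. It is an illustration of the data structure only, not OctoMap's implementation, and it stores a plain Boolean state per leaf; the probabilistic occupancy model, pruning, and raycasting that OctoMap adds on top are discussed next.

```python
class OctreeNode:
    """Cubic volume centred at `center` with edge length `size`."""
    def __init__(self, center, size):
        self.center = center          # (x, y, z) of the cube centre
        self.size = size              # edge length of the cube
        self.children = None          # None for a leaf, else a list of 8 child nodes
        self.occupied = False         # Boolean occupancy (no probabilities in this sketch)

    def _child_index(self, p):
        # one bit per axis: which octant of the cube contains point p
        return (p[0] > self.center[0]) | ((p[1] > self.center[1]) << 1) \
            | ((p[2] > self.center[2]) << 2)

    def _split(self):
        half, quarter = self.size / 2.0, self.size / 4.0
        self.children = []
        for i in range(8):
            offset = tuple(quarter if (i >> axis) & 1 else -quarter for axis in range(3))
            center = tuple(c + o for c, o in zip(self.center, offset))
            self.children.append(OctreeNode(center, half))

    def insert(self, p, min_voxel_size):
        """Mark the leaf containing point p as occupied, splitting nodes as needed."""
        if self.size <= min_voxel_size:
            self.occupied = True
            return
        if self.children is None:
            self._split()
        self.children[self._child_index(p)].insert(p, min_voxel_size)

# Example: a 4 m cube discretised down to 0.1 m voxels
root = OctreeNode(center=(0.0, 0.0, 0.0), size=4.0)
root.insert((0.35, -1.2, 0.8), min_voxel_size=0.1)
```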
The basic form of an octree can be used to model a Boolean property, where each node can have two states: occupied volume or free volume. In the case of volumetric detection, a node is initialized if a certain volume is occupied. However, this approach creates ambiguity by failing to differentiate between unknown and free voxels. To address this issue, OctoMap explicitly represents free volumes in the tree by employing raycasting. This technique assumes the volumes between the sensor and the measured endpoint as free space. A visual depiction of raycasting is presented in <ref>. By adopting this representation, the framework achieves a compact environment representation by pruning all children of the same node if they have the same state. In scenarios involving sensor noise and dynamic environments, the Boolean tree representation is inadequate. Therefore, OctoMap incorporates a probabilistic model to represent occupancy. In this model, each node is associated with a probability value indicating the likelihood of a voxel being occupied. A threshold is often applied to classify a voxel as occupied or free based on its probability value. OctoMap adopts a holistic approach by considering the complete unfolding of a scene, rather than relying solely on time snapshots. Consequently, it incorporates multiple observations to determine the present state of a voxel. For instance, when a voxel has been observed as unoccupied in k instances, a minimum of k observations of occupancy is required to transition it from an unoccupied to an occupied state. This characteristic strengthens the framework's resilience to sensor noise, facilitates temporal scene analysis, and enables sensor fusion. Nevertheless, in dynamic environments, this behavior becomes undesirable as it hinders the algorithm's responsiveness. To address this issue, OctoMap proposes a clamping update policy that establishes upper and lower bounds on the number of observations needed to change the state. In summary, OctoMap provides a powerful framework for 3D occupancy mapping, offering efficient storage, real-time updates, and probabilistic representation. Its octree-based structure enables accurate modeling of the environment, making it a valuable tool for robots and autonomous systems operating in three-dimensional spaces. SkiMap is a mapping framework, akin to OctoMap, that employs a tree of SkipLists as its underlying data structure <cit.>. SkipLists are layered probabilistic data structures <cit.>. The first layer comprises an ordered linked list, and each subsequent layer is a subset of the layer below it, encompassing a fraction of the elements. This design reduces the computational complexity associated with random access. In SkiMap, the tree of SkipLists consists of multiple levels, where the first SkipList tracks quantized x coordinates, representing depth level 1, and the individual elements in this SkipList are referred to as xNodes. Each xNode itself corresponds to a SkipList that tracks quantized y coordinates, representing depth level 2, with its elements named yNodes. Finally, each yNode corresponds to a SkipList of zNodes, representing the actual voxels and serving as containers for user data. As a result of this data structure, voxel coordinates can be obtained by traversing predecessors, obviating the need to store coordinates within the containers alongside user data. This tree of SkipLists is better visualized in <ref>. 
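The nested x→y→z indexing can be sketched with ordinary dictionaries standing in for SkipLists; true SkipLists would provide the probabilistic fast random access described above, so the sketch below is a structural illustration only, not SkiMap's implementation. The leaf containers hold only user data, here an integer weight of the kind discussed next, and the blanket decay step is a deliberate simplification.

```python
from collections import defaultdict

class VoxelMap:
    """x -> y -> z nested index of voxels, SkiMap-style (dicts stand in for SkipLists)."""
    def __init__(self, resolution=0.05):
        self.res = resolution
        # xNodes -> yNodes -> zNodes; each zNode holds user data (here, an integer weight)
        self.grid = defaultdict(lambda: defaultdict(dict))

    def _key(self, value):
        return int(round(value / self.res))     # quantized coordinate

    def integrate_point(self, x, y, z, increment=1):
        """Increase the weight of the voxel containing (x, y, z)."""
        kx, ky, kz = self._key(x), self._key(y), self._key(z)
        self.grid[kx][ky][kz] = self.grid[kx][ky].get(kz, 0) + increment

    def decay(self, decrement=1):
        """Decrease all weights (no detection this cycle), dropping empty voxels."""
        for kx in list(self.grid):
            for ky in list(self.grid[kx]):
                for kz in list(self.grid[kx][ky]):
                    w = self.grid[kx][ky][kz] - decrement
                    if w > 0:
                        self.grid[kx][ky][kz] = w
                    else:
                        del self.grid[kx][ky][kz]
                if not self.grid[kx][ky]:
                    del self.grid[kx][ky]
            if not self.grid[kx]:
                del self.grid[kx]

    def occupied_voxels(self, min_weight=1):
        """Yield centre coordinates of voxels whose weight reaches the threshold."""
        for kx, ys in self.grid.items():
            for ky, zs in ys.items():
                for kz, w in zs.items():
                    if w >= min_weight:
                        yield (kx * self.res, ky * self.res, kz * self.res)
```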
To classify voxels as occupied or free, each voxel within the tree of SkipLists possesses a dynamically defined weight. When a detection occurs within a voxel, its weight increases; when no detection is present, it decreases. If the weight surpasses a predefined threshold known as the minimum voxel weight, the voxel is classified as occupied. The authors of SkiMap argue that their methodology leads to reduced runtime and memory usage compared to OctoMap. Voxblox is a mapping framework similar to the previously discussed OctoMap and SkiMap <cit.>. Originally developed for micro aerial vehicles operating in unstructured and unexplored environments, Voxblox provides a fast and flexible local mapping solution. The framework is based on the construction of Euclidean Signed Distance Fields (ESDFs) derived from Truncated Signed Distance Fields (TSDFs). Both ESDFs and TSDFs build upon Signed Distance Fields (SDFs), which are voxel grids dividing the 3D space in which each voxel is assigned a projective distance label representing the distance between the voxel and the nearest detected surface. Positive and negative values indicate whether the voxel is located outside or inside the surface, respectively. In a TSDF, a truncation distance limits the stored distances, optimizing memory usage and processing speed by focusing on the relevant parts of the environment. A TSDF representation is presented in <ref>. In contrast, ESDFs employ the Euclidean distance between the voxel and the nearest obstacle instead of the projective distance, which makes them particularly useful for navigation purposes. Voxblox calculates the TSDF by raycasting and averaging the newly measured projective distances into the existing voxels. For each detection, a weighting function is applied, which decreases quadratically with the distance from the sensor to the surface and linearly with the distance from the voxel to the surface. Subsequently, Voxblox derives the ESDF from the TSDF using a method based on occupancy maps <cit.>. This method utilizes wavefronts, i.e., propagating waves that update distances from a start voxel to its neighbors; updated voxels are added to the wavefront queue for further propagation. Voxblox employs two wavefronts: a raise wavefront and a lower wavefront. The raise wavefront is triggered when the new distance value of a voxel from the TSDF exceeds the value previously stored in the corresponding ESDF voxel. The voxel and its children are then invalidated and added to the raise queue, and the wavefront propagates until no voxels with invalidated parents remain. Conversely, the lower wavefront starts when a new occupied voxel enters the map or when a previously observed voxel decreases its value. Based on the current voxel, its neighbors, and their respective distances, the distances of neighboring voxels are updated; propagation terminates when there are no remaining voxels whose distances could decrease based on their neighbors. The authors of Voxblox argue that constructing a TSDF is faster than building an OctoMap, and that constructing an ESDF from a TSDF is faster than building it from an occupancy map. § PROPOSED APPROACH The frameworks described in the previous section are usually assessed using only basic metrics, such as the time needed to integrate measurements into the voxel grid or memory usage <cit.>. While these metrics are valuable indicators of efficiency, they are insufficient for assessing the quality of the volumetric representation. 
The existing body of research lacks a comprehensive measurement approach that adequately assesses the alignment between the outputs of frameworks and the actual objects being detected. Consequently, the authors of this study have undertaken the development of a tool specifically designed to carry out such evaluation. The subsequent sections will provide a detailed description of this tool. §.§ Testing Environment The development of a simulation-based testing environment by the authors was a strategic choice, primarily driven by the objective of acquiring ground truth values, which cannot be feasibly obtained in a real environment. Although this decision effectively eliminated the challenges associated with hardware-related issues encountered in testing with physical sensors, its primary motivation remains the acquisition of accurate reference data. Gazebo[<https://gazebosim.org>], an open-source robotics simulator, is used to construct their simulation environment. It was selected due to its attributes, including a highly accurate physics engine that faithfully replicates object behavior. Furthermore, Gazebo has the capability to simulate multiple sensors concurrently, allowing for the introduction of sensor noise. A noteworthy feature of Gazebo is its capacity to animate static models, referred to as actors, which plays a vital role in creating realistic environments. Moreover, Gazebo's integration with the ROS is particularly significant. ROS, an open-source framework, offers a comprehensive set of software libraries and tools that are instrumental in the development, control, and integration of robotic systems. This integration is important because all the algorithms employed in this study are ROS-integrated. This seamless integration significantly enhances the ease-of-use and compatibility of the simulation environment with the algorithms, ensuring an efficient workflow. The simulated environment was based on LARCC, a collaborative cell equipped with three LiDARs, one RGB-D camera, and three RGB cameras <cit.>. This real-life setup can be observed in <ref>. The environment includes four sensors suitable for volumetric detection: the three LiDARs and the RGB-D camera. The dimensions of the cell are 4 meters in length, 2.8 meters in width, and 2.29 meters in height. The purpose of this environment is to test the ability of occupancy algorithms to promptly detect a person entering the cell. To simulate this scenario accurately, a Gazebo actor representing a person was spawned in the middle of the cell. The actor moves in an elliptical pattern along the center of the cell, with a semi-major axis of 2 meters aligned with the length of the cell and a semi-minor axis of 1 meter aligned with the width. The actor moves in this pattern two times, totaling into an episode of around 96 seconds. In the center of this elliptical movement, a rectangular plate measuring 0.8 meters in length, 0.6 meters in width, and 1 meter in height was placed. This plate serves to evaluate the algorithms' performance when the moving object is partially occluded. The environment can be seen in detail in <ref>. §.§ Data Filtering and Retrieval The testing environment produces a point cloud for each range-finder sensor. To conduct an evaluation, it is essential to obtain the ground truth data that correspond to the actor. The fusion of point clouds from multiple sensors is a prerequisite for accurate evaluation. 
To achieve this, an approximate time filter is implemented[<https://wiki.ros.org/message_filters/ApproximateTime>]. This filter organizes the ROS messages into a queue for each sensor. When all four sensors have a point cloud message in the queue, the filter retrieves the latest message from each sensor and then clears the queue. To enhance the efficiency of the testing framework, the authors downsample the point cloud acquired from the RGB-D camera, since its high density would otherwise lead to computationally intensive operations; a mean pooling algorithm is employed for this purpose. To obtain the ground truth, the LinkStates message provided by Gazebo is utilized[<https://docs.ros.org/en/jade/api/gazebo_msgs/html/msg/LinkStates.html>]. The LinkStates message contains real-time pose information for each rigid body (link) present in the simulation. The actor consists of various links, such as "Head", "RightForeArm", "LowerBack" and "LeftUpLeg", among others. To determine the connections between these links, an automated analysis of the original mesh file (.dae) of the actor is performed. For each connection (joint), a three-dimensional volume that encapsulates the corresponding link is created. For instance, a cylinder with a radius of 0.1 meters is defined between the shoulder and the elbow. A more complex example involves a rectangular prism with a width of 0.2 meters and a length of 0.3 meters between the Spine and the LowerBack, with one of the prism's axes aligned with the line connecting the LeftHip and the RightHip. With these volumes established, point-in-polyhedron algorithms are used to extract the actor points from the original point clouds. For a cylinder with radius r, the algorithm verifies whether a point, represented by the coordinate vector q⃗, lies between the top and bottom faces of the cylinder, whose centers are represented by the coordinate vectors p⃗_1 and p⃗_2, respectively. Additionally, the algorithm checks whether the point lies within the curved surface of the cylinder by verifying that the distance between the point and the cylinder's axis is smaller than the radius r. These operations correspond to <ref>, <ref>, and <ref>. (q⃗-p⃗_1)·(p⃗_2-p⃗_1) ≥ 0 (q⃗-p⃗_2)·(p⃗_2-p⃗_1) ≤ 0 |(q⃗-p⃗_1)×(p⃗_2-p⃗_1)|/|p⃗_2-p⃗_1|≤ r For a rectangular prism with length l, width w, and height h, the normal vectors of its faces are computed: n⃗_l, n⃗_w, and n⃗_h denote the normal vectors along the length, width, and height of the prism, respectively. To determine whether a point q⃗ lies within the prism, <ref> is used. The point's coordinate vector is taken relative to the prism's center c⃗, and the dot product of this vector with each of the normal vectors is computed, which measures how far the point is from the center in that direction. If the result exceeds half of the corresponding dimension of the prism, the point is considered outside the prism. |(q⃗ - c⃗)·n⃗_α| ≤α/2, α∈{l,w,h} With this method, the actor points can be extracted, as seen in <ref>. However, a challenge arises from the representational mismatch between these points and the voxels retrieved from the mapping framework. 
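Before turning to how that mismatch is handled, the two membership tests above can be written compactly. The following is a sketch of the stated conditions rather than the authors' implementation; the joint dimensions are the ones quoted in the text, while the example coordinates and the prism height are illustrative.

```python
import numpy as np

def point_in_cylinder(q, p1, p2, r):
    """True if point q lies in the finite cylinder with axis p1 -> p2 and radius r."""
    q, p1, p2 = map(np.asarray, (q, p1, p2))
    axis = p2 - p1
    if np.dot(q - p1, axis) < 0:          # below the bottom face
        return False
    if np.dot(q - p2, axis) > 0:          # above the top face
        return False
    dist_to_axis = np.linalg.norm(np.cross(q - p1, axis)) / np.linalg.norm(axis)
    return dist_to_axis <= r

def point_in_prism(q, c, n_l, n_w, n_h, l, w, h):
    """True if q lies in the rectangular prism centred at c with unit face normals
    n_l, n_w, n_h and dimensions l, w, h."""
    q, c = np.asarray(q), np.asarray(c)
    for n, dim in ((n_l, l), (n_w, w), (n_h, h)):
        if abs(np.dot(q - c, np.asarray(n))) > dim / 2.0:
            return False
    return True

# Example with the joint dimensions quoted in the text (coordinates are illustrative)
shoulder, elbow = np.array([0.0, 0.0, 1.4]), np.array([0.0, 0.3, 1.4])
print(point_in_cylinder([0.05, 0.1, 1.42], shoulder, elbow, r=0.1))   # True
print(point_in_prism([0.05, 0.0, 1.0], c=[0.0, 0.0, 1.0],
                     n_l=[1, 0, 0], n_w=[0, 1, 0], n_h=[0, 0, 1],
                     l=0.3, w=0.2, h=0.4))                            # True
```

The representational mismatch noted above is addressed next.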
To address this issue, a voxel grid was constructed with matching resolution and pose as the voxel grid defined by the framework. In this new grid, occupied voxels represent the locations of the actor points, therefore being the ground truth data. §.§ Evaluation Metrics The assessment of the effectiveness of the framework will be predicated upon several performance metrics, namely Precision (P), Recall (R), and F1 Score (F_1). The inclusion of F_2 and F_3 scores are also presented, as these scores place a higher emphasis on recall in relation to precision. This emphasis proves valuable, particularly in contexts involving security scenarios, such as volumetric detection in a factory setting. These metrics are defined by <ref>, <ref>, and <ref>, in which TP, FP, and FN are true positives, false positives, and false negatives, respectively. In <ref>, the β is 1 in F_1 score, 2 in F_2 score and 3 in F_3 score. P = TP/TP+FP R = TP/TP+FN F_β = (1+β^2)·P · R/(β^2 · P) + R To establish TP, FP and FN values within the context of this work, it is necessary to define relevant and retrieved elements. Relevant elements refer to the ground truth voxels that have been previously extracted, as described in <ref>. Retrieved elements, on the other hand, correspond to the output of the frameworks in the form of voxels. Ensuring the retrieved elements are extracted after relevant elements is crucial. To accomplish this temporal alignment, an approximate time filter, resembling the one described in <ref>, is implemented. This filter organizes the output messages from the mapping framework into a queue, arranging them in ascending order based on their timestamps. It identifies the most recent timestamp among the fused point cloud messages. Consequently, it retrieves the earliest message from the mapping framework queue that is received after point cloud messages latest timestamp. With both the relevant and retrieved elements established, TP can be identified as voxels that are classified as occupied by both the framework's and the ground truth's voxel grid. FP present a greater challenge, as the framework's voxel grid encompasses the entire scene rather than solely focusing on the actor. To address this, the convex hull of the actor points is obtained, and any retrieved element within this convex hull that is not a TP is classified as FP. Conversely, FN are voxels that are identified as occupied by the ground truth but classified as unoccupied by the frameworks. By utilizing these definitions, the aforementioned metrics can be calculated to quantitatively assess the performance of the framework. These definitions can be visualized in <ref>. § RESULTS This section presents the experimental evaluation and results of the OctoMap, SkiMap, and Voxblox frameworks. It encompasses both qualitative and quantitative analyses to comprehensively assess the capabilities and limitations of these frameworks in capturing and representing the dynamic environment. The experimental infrastructure is described, followed by the qualitative results of the default parameter settings for each framework. Finally, the quantitative results, including precision, recall, and F scores, are presented and analyzed, offering valuable insights into the performance of OctoMap, SkiMap, and Voxblox. §.§ Experimental Methodology In order to standardize the experiments, a ROS tool named bag files was utilized[<https://wiki.ros.org/Bags>]. 
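The metric computation described in this subsection reduces to set operations on voxel keys. The sketch below is not the authors' tool: it assumes that the ground truth and the retrieved map are already expressed as sets of integer voxel indices on the same grid, uses a Delaunay triangulation of the actor points as a convenient convex-hull membership test, and takes the voxel centre at (index + 0.5) × resolution, which is an assumption about the grid origin.

```python
import numpy as np
from scipy.spatial import Delaunay

def evaluate_frame(gt_voxels, retrieved_voxels, actor_points, resolution, betas=(1, 2, 3)):
    """Precision, recall and F-beta scores for one frame.

    gt_voxels / retrieved_voxels: sets of integer voxel keys (ix, iy, iz);
    actor_points: Nx3 array used to build the convex hull that clips the
    retrieved voxels when counting false positives."""
    hull = Delaunay(np.asarray(actor_points))       # convex hull of the actor points

    def inside_hull(key):
        center = (np.asarray(key) + 0.5) * resolution
        return hull.find_simplex(center) >= 0

    tp = gt_voxels & retrieved_voxels
    fp = {v for v in (retrieved_voxels - gt_voxels) if inside_hull(v)}
    fn = gt_voxels - retrieved_voxels

    precision = len(tp) / max(len(tp) + len(fp), 1)
    recall = len(tp) / max(len(tp) + len(fn), 1)
    scores = {}
    for b in betas:
        denom = b**2 * precision + recall
        scores[f"F{b}"] = (1 + b**2) * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, scores
```

Returning to the experimental methodology, the bag-file mechanism used to standardize the runs is described next.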
These files are capable of capturing all the ROS messages, such as sensor pointclouds, within a specific time frame. Subsequently, they allow for the replaying of the recorded messages at the original recording rate. By recording the environment described in <ref>, it ensures consistency and minimizes variability between experiments. Bag files offer the functionality to adjust the data reproduction speed, enabling both deceleration and acceleration. This feature proves particularly useful in this study, as slowing down the bag file playback allows the mapping framework sufficient time to integrate the map. This ensures that slower frameworks are not disadvantaged. It is important to note that the objective of this study is not primarily focused on measuring efficiency. The experiments conducted not only involve comparing the default versions of the frameworks with each other but also exploring different parameters within each framework. Some parameters exhibit variations across frameworks. These parameters are: * Minimum Voxel Length — This parameter represents the length of the smallest voxel into which the algorithm can divide the environment. * Hit Probability — It denotes the probability of the algorithm increasing the occupied probability of a voxel that has a detection. * Miss Probability — Probability of the algorithm reducing the occupied probability assigned to a voxel lacking a detection. * Minimum Voxel Weight — It indicates the minimum weight required for a voxel to be classified as occupied. * Truncation Distance — This value represents the maximum distance considered for TSDF calculations in Voxblox. * Constant Weight — When this boolean parameter is active, calculated weights remain constant irrespective of the distance between the point and the sensor. The group of parameters and their default values can be summarized in <ref>. §.§ Qualitative Results In this subsection, the authors aim to provide qualitative results of the frameworks utilizing their default parameters. All figures are consistently captured at the same moment in time and scene position, specifically when the actor passes for the second time between the middle plate and the right leg of the gantry. Commencing with OctoMap, as depicted in <ref>, the actor's representation is clearly discernible, and the entire scene is well-defined. However, a limitation arises wherein certain voxels that previously indicated occupancy encounter difficulty in accurately depicting emptiness, particularly those corresponding to the upper side of the actor when it traverses behind the plate. This phenomenon arises from the properties of raycasting, as the rays that intersect with these voxels do not encounter any objects behind them from the perspective of the sensors. Consequently, the affected voxels cannot be effectively cleared, leading to a disparity between their actual empty state and their visual representation. <ref> illustrates the qualitative results of SkiMap. Interestingly, the issue observed in OctoMap is magnified in this framework, as there appears to be minimal to no occupied voxel erasure. This occurrence can be attributed to the fact that SkiMap was not specifically designed with volumetric detection in mind and, consequently, is ill-equipped to handle such scenarios. The framework's limitations in effectively handling and updating voxel occupancy representations in dynamic environments become more apparent, further highlighting its lack of readiness for addressing these specific types of situations. 
<ref> presents the qualitative results of Voxblox. It is evident that this framework encounters significant challenges in detecting moving actors within the scene. This difficulty can potentially be attributed to the adoption of a high truncation distance, which leads to the framework observing a larger volume encompassing the actor. As a consequence, the larger volume primarily consists of empty space, resulting in a comparatively greater distance value assigned to the voxel and subsequently classifying it as empty. A more comprehensive explanation of this phenomenon can be found in <ref>, where detailed quantitative results are elaborated upon. §.§ Quantitative Results In this subsection, the authors present the quantitative results of precision, recall, and F scores obtained from evaluating different variations of OctoMap, SkiMap, and Voxblox. These metrics provide objective measures of the frameworks' performance in capturing and representing the dynamic environment. The analysis aims to provide deeper insights into the strengths and limitations of each framework, facilitating a comprehensive understanding of their respective capabilities. Starting with OctoMap, the effect of the minimum voxel length can be seen in <ref>. The precision consistently improves with higher lengths, possibly because false positives are generally situated in close proximity to true positives. By enlarging the voxels, this proximity is now encompassed within the true positive voxels, thereby enhancing precision. The change in recall, however, is less pronounced and reaches its peak at 0.1 meters. Generally, recall is smaller than precision due to occlusion in the testing environment, where the front half of the actor obstructs the back half, resulting in a considerable number of false negatives. Smaller lengths yield smaller recall as a smaller volume of the actor is covered by occupied voxels. Similarly, higher lengths also yield smaller recall because certain false negatives persist while the overall voxel count decreases, thereby elevating the false negatives to true positives ratio. Thus, the authors propose that an optimal recall point is reached in this environment, estimated to be around 0.1 meters. The authors further conclude that a length of 0.150 meters excels in applications with lower security requirements, while a length of 0.1 meter is ideal for scenarios prioritizing security. The impact of OctoMap hit probability is demonstrated in <ref> of this study. Notably, higher probabilities yield improved precision and recall measures, with precision values remaining constant for probabilities of 0.7 and 1. This trend can be attributed to the increased sensitivity of the framework to changes as the probability is increased, thereby reducing the occurrences of false negatives and false positives. Conversely, decreasing the probability results in an inaccurate representation of the environment, leading to an augmented number of false negatives and false positives. Based on these findings, the authors strongly advocate for a hit probability of 1 across all scenarios. The impact of OctoMap miss probability is illustrated in <ref> of this investigation. Upon examination, it is observed that increasing the miss probability to 0.7 leads to a reduction in both precision and recall values. Conversely, when the probability fluctuates within the range of 0.2 to 0.4, the results remain relatively consistent. 
The decline in precision and recall can be attributed to a substantial decrease in the weight assigned to voxels, rendering them significantly less likely to be classified as occupied, even in the presence of detections. Consequently, this amplifies the number of false negatives and diminishes the count of true positives. Furthermore, this imbalance results in a larger number of false positives compared to true positives. Hence, for scenarios resembling the one described, the authors endorse employing a low miss percentage. The impact of the minimum voxel length on SkiMap is demonstrated in <ref> of this study. The observed behavior is analogous to that observed in OctoMap, whereby precision increases as the voxel length enlarges. Additionally, the recall reaches its peak at approximately 0.075 meters in length. Based on these findings, the authors further infer that a voxel length of 0.150 meters is particularly well-suited for applications with lower security requirements. Alternatively, voxel lengths of 0.075 or 0.1 meters are recommended for scenarios that prioritize security considerations. The impact of the minimum voxel weight on SkiMap is depicted in <ref> of this investigation. It is observed that as the minimum weight increases, precision values also increase, while recall values decrease. This phenomenon can be attributed to the fact that higher minimum weights result in detections occurring only in voxels that exhibit a high level of certainty in terms of occupancy. Consequently, the number of false positives is reduced, but the number of false negatives is amplified. However, when evaluating the F scores, it is evident that a minimum weight of 1 yields the most optimal performance for this particular scenario. The impact of the minimum voxel length on Voxblox is presented in <ref>, highlighting similarities and notable distinctions from the findings of OctoMap and SkiMap. While some similarities in behavior are observed, a striking difference is identified. Specifically, for voxel lengths smaller than 0.150 meters, a significant decrease in recall is observed. This phenomenon can be attributed to the relationship between the truncation distance and the minimum voxel length, which will also be discussed in detail in the discussion of <ref>. The authors conducted an analysis and found that achieving a ratio of voxel size per truncation distance of at least 1.5 leads to an increase in recall, despite Voxblox's recommended ratio of 0.5. The authors speculate that this improvement occurs because as the ratio increases, the framework incorporates points that are farther away from the voxel, consequently yielding a larger average distance value for that voxel. This effect may generate distance values that fall below the occupied threshold, resulting in false negatives. The impact of Voxblox truncation distance is elaborated upon in <ref>, providing additional support for the authors' previous argument regarding the importance of achieving a ratio of at least 1.5. The table clearly demonstrates that significantly better results are obtained when the ratio reaches 2, in comparison to ratios of 1 or 0.5, which is the recommended value. These findings emphasize the significance of appropriately adjusting the truncation distance to establish a favorable balance between voxel size and the distance of influence. 
By ensuring a higher ratio, the algorithm selectively incorporates points within close proximity to the voxel, effectively mitigating the introduction of extraneous signals originating from distant points. Consequently, this approach engenders more precise and dependable distance values, thereby optimizing the performance of Voxblox within the given contextual parameters. The impact of the Voxblox constant weight is illustrated in <ref>. Notably, the utilization of a constant weight leads to a decline in performance compared to Voxblox's default behavior. This is primarily attributed to the fact that the constant weight approach overlooks crucial parameters that are typically considered by Voxblox, including the distance from the sensor to the voxel and the distance of detection to the center of the voxel. By neglecting these important factors, the algorithm's performance is compromised, resulting in suboptimal outcomes. It is evident that the dynamic and context-aware weight assignment employed by Voxblox provides more accurate and reliable results compared to the simplified constant weight approach. § CONCLUSIONS This paper presented a novel test environment designed to evaluate the performance of volumetric detection algorithms. The proposed environment utilizes simulation techniques to capture ground truth data and compares them to the detections obtained, enabling the calculation of precision, recall, and F scores. To capture the ground truth data, the test environment leverages Gazebo and its LinkStates messages to track the location of each actor joint in real time. For each joint, a polyhedron that encompasses the entire joint is defined. Subsequently, point in polyhedron algorithms are applied to calculate the actor points inside each joint, which are then used to construct a voxel grid correspondent to the ground truth data. This ground truth is compared to the voxel grid generated by the frameworks under evaluation, allowing for the determination of true positives, false positives, and false negatives. This approach fills a gap in the field by providing a standardized method for evaluating the precision and recall of volumetric detection algorithms. Typically, existing approaches only compare integration time and memory usage among frameworks, neglecting other important aspects. The development of the innovative test environment and performance metrics proved essential in providing previously unknown insights into the performance of various volumetric detection frameworks. The conducted tests involved systematically varying parameters within each framework to identify an optimal configuration. The results were evaluated using precision, recall, and F scores, supplemented by a qualitative assessment of each framework's default settings, enabling a more comprehensive analysis. Based on these insights, it can be concluded that OctoMap, with its default parameters, is the most suitable framework for volumetric detection in a cell environment. While SkiMap exhibited marginally superior quantitative outcomes, the qualitative evaluation highlighted its limitation in accurately removing occupied voxels that are actually empty. This issue arises as a significant portion of the voxels traversed by the actor continue to be labeled as occupied in SkiMap. Future work should focus on expanding the volume of analysis to encompass the entire trajectory of the actor during the experiment. 
This would enhance the compatibility between quantitative and qualitative analyses and address the aforementioned issue with SkiMap. Additionally, further tests of the test environment should be conducted in different scenarios and with a wider range of frameworks to ensure comprehensive evaluation and validation. [ < g r a p h i c s > ]Manuel Gomes received an Integrated Master's Degree in Mechanical Engineering from the University of Aveiro in 2022. Their M.Sc. dissertation focused on sensor calibration in robotic vehicles. Currently pursuing a Ph.D. in Mechanical Engineering at the University of Aveiro, the author is actively involved in the AUGMANITY project, a EU project, working on collaborative robots. [ < g r a p h i c s > ]Miguel Oliveira received the bachelor's and M.Sc. degrees from the University of Aveiro, Aveiro, Portugal, in 2004 and 2007, respectively, and the Ph.D. degree in mechanical engineering (specialization in robotics, on the topic of autonomous driving systems), in 2013. From 2013 to 2017, he was an Active Researcher with the Institute of Electronics and Telematics Engineering of Aveiro, Aveiro, and with the Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal, where he participated in several EU-funded projects, as well as national projects. He is currently an Assistant Professor with the Department of Mechanical Engineering, University of Aveiro, where he was also the Director of the Masters in Automation Engineering. He has supervised more than 20 M.Sc. students. He authored over 20 journal publications, from 2015 to 2020. His research interests include autonomous driving, visual object recognition in open-ended domains, multi-modal sensor fusion, computer vision, and the calibration of robotic systems. [ < g r a p h i c s > ]Vítor Santos obtained a 5-year degree in Electronics Engineering and Telecommunications in 1989, at the University of Aveiro, Portugal, where he later obtained a PhD in Electrical Engineering in 1995, and the Habilitation in Mechanical Engineering in 2018. He was awarded fellowships for research in mobile robotics during 1990-1994 at the Joint Research Center, Italy. He is currently Associate Professor at the University of Aveiro where he lectures several courses on robotics, autonomous vehicles and computer vision, and has carried out research activity on mobile robotics, autonomous driving, advanced perception and humanoid robotics, also in public and privately funded projects. He has supervised and co-supervised more than 100 students in Masters, PhD and Postdoc programs, and coordinated the creation of two University degrees in the field of Automation at the University of Aveiro. He founded the ATLAS project for mobile robot competition that achieved 6 first prizes in the annual Autonomous Driving competition, and has coordinated the development of ATLASCAR, the first real car with autonomous navigation capabilities in Portugal. He was involved in the organization of several conferences, workshops and special sessions in national and international events, including being the General Chair of the IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC2021. He is one of the founders of the Portuguese Robotics Open in 2001 and co-founder of the Portuguese Society of Robotics in 2006.
http://arxiv.org/abs/2307.01262v1
20230703180004
Eccentric debris disc morphologies II: Surface brightness variations from overlapping orbits in narrow eccentric discs
[ "Joshua B. Lovell", "Elliot M. Lynch" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR" ]
firstpage–lastpage [ Emanuele Berti August 1, 2023 ================== We present Paper II of the Eccentric Debris Disc Morphologies series to explore the effects that significant free and forced eccentricities have on high-resolution millimetre-wavelength observations of debris discs, motivated by recent ALMA images of HD53143’s disc. In this work, we explore the effects of free eccentricity, and by varying disc fractional widths and observational resolutions, show for a range of narrow eccentric discs, orbital overlaps result in dust emission distributions that have either one or two radial peaks at apocentre and/or pericentre. The narrowest discs contain two radial peaks, whereas the broadest discs contain just one radial peak. For fixed eccentricities, as fractional disc widths are increased, we show that these peaks merge first at apocentre (producing apocentre glow), and then at pericentre (producing pericentre glow). Our work thus demonstrates that apocentre/pericentre glows in models with constant free and forced eccentricities can be both width and resolution dependent at millimetre wavelengths, challenging the classical assertion that apocentre/pericentre glows are purely wavelength dependent. We discuss future high-resolution observations that can distinguish between competing interpretations of underlying debris disc eccentricity distributions. circumstellar matter - submillimetre: planetary systems - celestial mechanics § INTRODUCTION Debris discs are a relatively common type of circumstellar disc, with instruments such as Spitzer and Herschel detecting these towards about 20–30% of main-sequence stars <cit.>. Their presence indicates that belts of planetesimals have formed and are evolving via collisional erosion, producing observable dust grains which trace the orbits of their parent bodies. The observed surface brightness distributions of debris disc dust can be used to infer the distribution of planetesimals and planets within planetary systems. For example, as planets sweep up and/or scatter nearby planetesimals, these can result in disc cavities, the structure of which can be used to constrain the type of planet/s responsible <cit.>, important for understanding the population of planets on wide orbits in a parameter space far outside of current planet-detection capabilities. In addition, bodies orbiting nearby to eccentric planets <cit.> can have their own orbits shaped as a result of three-body dynamics between the star and planet, and individual planetesimals <cit.>. This effect has been interpreted as the origin of the eccentric debris disc structure observed in the discs of Fomalhaut, HR 4796, HD202628, HD38206, and most recently HD53143, <cit.>. The classical paradigm in debris discs was developed in the context of low-resolution data which often captured disc ansae within single beams. This picture found that eccentric debris disc emission structures could be largely understood as a function of wavelength. <cit.> demonstrated that at mid-to-far-infrared wavelengths, eccentric debris discs exhibit `pericentre glow', whereby the warmest dust (closest to the star) produces enhanced emission relative to the coolest dust (furthest from the star). The predictions of this theory match Keck and Herschel observations of discs <cit.>. Conversely, at longer submillimetre/millimetre wavelengths observations are more sensitive to the dust mass under the beam. 
By modelling 1-dimensional eccentric loops (with a line density), <cit.> concluded that slower apocentre dust velocities can `pile-up' dust preferentially at apocentre. The predictions of this 1-D theory appear to match ALMA observations of debris discs such as those of Fomalhaut and HD53143 which both have peak ansae emission towards the direction of their imaged apocentres <cit.>, i.e., these exhibit `apocentre glow'. More recently, <cit.> modelled three-dimensional debris disc density structures with a forced eccentricity component and eccentricity gradients, and (in the long-wavelength regime, defined as λ≫λ_⋆ := hc/k_B T_⋆√(2a/R_⋆)) made three important conclusions. Firstly, in discs with constant eccentricity, orbits are more radially bunched up at pericentre and spread out at apocentre, resulting in millimetre-wavelength pericentre glow. Secondly, that only in the unresolved limit (i.e., disc emission remains observationally separated from the stellar emission; Δ a ≪ B ≪ a, for discs with widths/extents Δ a and semi-major axes a, imaged with beams with angular extent B) are the results of the classical line-density theory recovered, with apocentre-enhancements achieved in poorly resolved, large radii, narrow discs. Conversely, at higher-resolution, pericentre brightnesses are enhanced, where observations are more sensitive to the densely packed dust orbits at pericentre. Finally, it was shown that disc eccentricity gradients can further enhance pericentre glow if these rise with a or apocentre glow if these fall with a. Since the publication of <cit.> new eccentric disc observations have been reported <cit.>. By modelling the disc of HD53143 with free and forced eccentricity components, this new work concludes that HD53143 hosts amongst the most eccentric discs ever observed, finding best-fit parameter estimates of e_p=0.11 and e_f=0.21 respectively (for the free (e_p) and forced (e_f) eccentricity components). This disc shows evidence of an apocentre brightness enhancement, at a level close to the predictions of <cit.>, despite being observed at a relatively high-resolution with ALMA, i.e., a beam with a resolution of ∼1.0”. There are key differences however between the models of <cit.> and <cit.> which complicate their direct comparison that we explore in this paper. In this work, Paper II of the Eccentric Debris Disc Morphologies series <cit.>, we likewise model the emission of debris discs with a 3-dimensional density distribution (imaged on 2D grids). Here we present results from new models of dust orbits parametrised with both free and forced eccentric components, and explore the impact of disc widths on disc brightness morphologies, for the restricted case of a face-on disc with the same parameters derived for HD53143 <cit.>. We demonstrate in these models that apocentre enhancements are strongly resolution and disc width dependent, and suggest that a more natural explanation for discs exhibiting strong apocentre glows that these are instead driven by brightness enhancements due to having falling eccentricity gradients, i.e. ∂ e/ ∂ a < 0. As in our previous work, we use the terms pericentre glow and apocentre glow to refer to the apse in which the peak surface brightness appears, though note that inclinations can alter the precise peak emission location. For this reason, we conduct our later analyses with face-on models. In Section <ref> we present our model setup, in <ref> we discuss the implications of our models, and present our main conclusions in <ref>. 
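As a concrete check of the long-wavelength condition introduced above, λ_⋆ can be evaluated for the stellar and disc parameters adopted in the next section (T_⋆ = 5250 K, R_⋆ = 1 R_⊙, a ≈ 90 au); using those values here, and Python for the arithmetic, is simply an illustrative choice.

```python
import numpy as np

h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants
R_sun, au = 6.957e8, 1.495978707e11                     # metres

T_star = 5250.0        # stellar temperature adopted later in the text [K]
R_star = 1.0 * R_sun   # stellar radius adopted later in the text
a = 90.1 * au          # peak emission semi-major axis

# lambda_star = (h c / k_B T_star) * sqrt(2 a / R_star)
lam_star = (h * c / (k_B * T_star)) * np.sqrt(2.0 * a / R_star)
print(f"lambda_star ~ {lam_star * 1e3:.2f} mm")  # ~0.5 mm for these inputs
```

For these numbers λ_⋆ ≈ 0.5 mm, so 1.3 mm ALMA observations of such a disc sit longward of λ_⋆, the regime considered above.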
§ A PARAMETRIC MODEL OF A DISC WITH FREE AND FORCED ECCENTRICITY §.§ Model setup The modelling methodology we adopt here follows the same prescription as our Paper I, with a modified surface density distribution that now includes the free eccentricity. We developed a single non-dynamical, parametric disc model with a top-hat distribution of dust semi-major axes (with user-defined parameters as discussed below) and imaged this with the RADMC-3D <cit.> package. To account for the free eccentricity component, we superimpose multiple families of orbits (with each family corresponding to our constant eccentricity model from Paper I). Each family includes a contribution from a free eccentricity component, e_p, with a fixed argument of pericentre for the free eccentricity, ω_p. The resulting dust-density is obtained by summing over families which uniformly sample ω_p. We adopt this prescription here for N=16,000 orientations of ω_p, which we tested to ensure sufficient numerical convergence on our image grids (with sizes of 2000x2000 pixels, and a pixel scale of 0.01”, or 0.18 au at the distance to HD53143). In this section we continue by exploring how this new model setup manifests at millimetre wavelengths. We define a series of parameters required to model optically thin debris disc dust emission within the RADMC-3D framework, which unless otherwise explicitly stated, remain constant throughout, with values consistent with M22. These include the observational wavelength (λ_obs, fixed at 1.3 mm), the distance (d, fixed at 18.3 pc), the peak emission semi-major axis (a_0, fixed at 90.1 au), the disc width (Δ a, variable between 2% and 50% of a_0), the vertical aspect ratio (h, fixed at 4%), the forced eccentricity (e_f, fixed at 0.21), the free eccentricity (e_p, fixed at 0.11) the argument of pericentre of the forced eccentricity (ω_f, fixed at either 112.8^∘ for the inclined model, or 90^∘ for the face-on model, where ω_f is defined as the anticlockwise angle from North), the inclination (i, fixed at either 65.6^∘, or face-on, i.e., 0^∘), the position angle (PA, from North, anti-clockwise, fixed at 156.4^∘), the phase offsets in RA and Dec (δ_RA and δ_Dec, fixed at 0.07” and 0.04” respectively), the minimum and maximum grain sizes (D_min and D_max, respectively set by the blowout size of a ∼solar-type star, and by neglecting emission from larger grains than ALMA wavelengths at values of 0.9 μm and 1.0 cm), the dust grain density (ρ, fixed at 2.7 g cm^-3), and the grain size power-law distribution <cit.>. We fixed the total dust mass (M_dust) at 0.05 M_⊕. Although this parameter was not modelled by M22, since this only acts as a total brightness scaling factor in the optically thin limit, and our investigation only considers fractional brightness ratios, we are free to set this parameter to anything as long as the disc remains optically thin. The dust temperature in our models is determined by the stellar temperature (T_⋆) and stellar radius (R_⋆) which define a template <cit.> stellar spectra (in all instances in this work these were fixed as 5250K and 1.0 R_⊙ respectively), and the stellar mass (M_⋆) was fixed as 0.9 M_⊙. In all models we scale the flux of the star to F_star=50 μJy (consistent with the stellar flux modelled by M22), and fix the origin of the image coordinate system at the star's location. Our models have a vertical Gaussian density distribution, defined by the vertical aspect ratio, h=H/r, where H is the absolute vertical height of dust at a radius r in the disc. 
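The orbit-family superposition just outlined can be illustrated, for the face-on in-plane structure only (the vertical Gaussian profile described above is omitted), with the following Monte Carlo sketch; the number of families, the particle sampling, the grid choices, and the random seed are our assumptions and are not meant to reproduce the RADMC-3D model inputs.

```python
import numpy as np

def disc_density(a0=90.1, frac_width=0.22, e_f=0.21, e_p=0.11, omega_f=np.pi / 2,
                 n_families=64, n_per_family=20000, ngrid=400, rmax=160.0, seed=1):
    """Face-on surface density from superposed orbit families: each family adds a
    free-eccentricity vector of fixed magnitude e_p and pericentre argument omega_p
    to the common forced vector (e_f, omega_f)."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((ngrid, ngrid))
    edges = np.linspace(-rmax, rmax, ngrid + 1)
    for omega_p in np.linspace(0.0, 2.0 * np.pi, n_families, endpoint=False):
        ex = e_f * np.cos(omega_f) + e_p * np.cos(omega_p)   # total eccentricity vector
        ey = e_f * np.sin(omega_f) + e_p * np.sin(omega_p)
        e, omega = np.hypot(ex, ey), np.arctan2(ey, ex)
        a = rng.uniform(a0 * (1 - frac_width / 2), a0 * (1 + frac_width / 2), n_per_family)
        M = rng.uniform(0.0, 2.0 * np.pi, n_per_family)      # mean anomalies
        E = M.copy()
        for _ in range(25):                                   # fixed-point solve of Kepler's equation
            E = M + e * np.sin(E)
        f = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                             np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
        r = a * (1 - e * np.cos(E))
        x, y = r * np.cos(f + omega), r * np.sin(f + omega)
        H, _, _ = np.histogram2d(x, y, bins=[edges, edges])
        grid += H
    return grid

density = disc_density()   # 2D map built from the overlapping eccentric orbits
```

A density map built this way can be used to explore how the double-peaked radial profiles discussed in the following section merge as the fractional width grows.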
This same parameterisation has been applied previously to model sub-mm ALMA observations <cit.>, as well as in Paper I and is consistent with both the expected physical distribution of dust above and below the disc mid-plane, and the work of M22. Our models are projected on to 2-D grids in (r,ϕ) space, for which r=0.0” corresponds to the origin (the location of the star). § DISC MODELLING: RESULTS AND DISCUSSION §.§ Model verification and comparison To verify our model behaviour as correct, we successfully reproduced the global disc emission structure of HD53143, as imaged and modelled by M22. In doing so, we demonstrate that our models, now including free eccentricity are correctly reproducing the emission of an inclined disc with both free and forced eccentricities, and can accurately predict the structures associated with more general disc scenarios (i.e., those with different parameters, such as i, PA and Δ a). We show in Fig. <ref> our model imaged at 0.04”, 0.25”, 1.0” and 2.5”, of which the 0.04” resolution corresponds to an imaged `full resolution' model of M22 (within ALMA's resolution capabilities) and to progressively lower resolution data, all of which can be achieved with sufficiently deep ALMA observations. This correctly reproduces the disc's morphology (i.e., an inclined, 90.1 au narrow ring with a large central cavity, and star offset from its centre), and importantly for our later investigation, apocentre glow (i.e., a brightness enhancement on the north-east disc ansa) in all four resolutions, and at the same level as HD53143's at 1.0” resolution. Determining the best-fit parameters of our model is beyond the scope of our investigation, since the above has shown that using identical parameters to the 10^7 particle method of M22, we can accurately reproduce the model of HD53143 with a parametric model, which we verified by producing a model image based on the same parametrisation but instead using 10^7 particles to sample the dust distribution (i.e, consistent with the approach of MM22). As noted by <cit.>, particulate models of discs imaged at sub-arcsecond resolution require many millions of particles to minimise computational shot-noise error to far below the errors induced by observations (an issue significantly compounded for ever-higher resolution models, which we discuss in Appendix <ref>). In verifying our parametric model, we found that our parametric approach thus has a significant computational advantage versus particulate models at high-resolution, where numerical convergence is achieved much faster due to two factors. Firstly, due to the increasingly higher number of particles required to reduce model noise to sufficiently low levels (see Appendix <ref>). Secondly, that our parametric model derives solutions for complete orbit families rather than individual particles, which greatly reduces the number of calculations needed to accurately define the density of dust in a circumstellar disc. §.§ Ring width and resolution: the origin of free-eccentricity induced brightness enhancements The main aim of our investigation is to determine how disc surface brightness variations can arise at millimetre wavelengths (e.g., such as apocentre and pericentre glows) in systems which have both free and forced eccentricities. For this, we use the example of HD53143 given this is the most eccentric millimetre-resolved disc yet observed and as such makes this disc the most straightforward to visualise how ring widths and free/forced eccentricities interact. 
We note that there is otherwise nothing special about our choice of this debris disc. We start with an identical model to that presented in <ref> and alter this by i) inclining this to a face-on orientation, ii) rotating the position angle horizontally east-west, and iii) re-producing and imaging the model for a range of narrow fractional widths (Δ a/a_0) from 5% to 50%. This range of widths was selected to ensure that we sampled all plausibly `narrow' discs (typically defined as those having fractional widths below 50%), and down to the lower limit at 5%, below which we deem the model behaviour too noisy for reasonable analysis of disk structures. Observations of debris discs have found there to be a wide range of widths, including broad discs <cit.>, and discs as narrow as Fomalhaut <cit.>. Here we focus on narrow discs exclusively. We present in Fig. <ref> a subset of these models; all shown at high resolution (0.04”) for discs with fractional widths of 5%, 15%, 22% and 30% (noting that 22% is the MM22-determined fractional width of HD53143). This figure demonstrates an important observational point: as the fractional width increases at a fixed (high) resolution, the surface brightness varies through the disc, e.g., with the initial apocentre glow (in the narrow discs) altered to a pericentre glow (in the broader disc) at millimetre wavelengths. Instead of the millimetre-wavelength apocentre glow being an effect due (solely) to resolution and wavelength, here we show that this is the result of overlapping dust orbits for dust belts modelled with both free and forced eccentricity components. In the radial profiles shown in Fig. <ref>, it can be seen that these images exhibit emission profiles that, when most narrow, are double peaked, consistent with the theoretical models of <cit.> and <cit.> (and from their associated images, that azimuthally, these radial minima are present throughout the disc). However, due to the different physical widths of discs as measured at their apocentre and pericentre directions (for fixed eccentricities), the gap between the double-peaked apocentre emission reduces before the gap on the pericentre side (e.g., for this model, for 15% width discs, the apocentre radial peaks now overlap, whereas the pericentre ansa remains double-peaked). This has the effect that for optically thin, eccentric discs with sufficiently narrow widths, the surface density of dust (and thus emission brightness) on the apocentre side rises more sharply than the pericentre side. At a fractional width of 22% the pericentre ansa remains double-peaked, though with a smaller separation. However by 30%, the double-peaked pericentre emission has ceased, and pericentre dust orbits now overlap constructively, producing a sharp increase in the pericentre emission brightness. On the apocentre however, dust orbits are instead being radially spread out, reducing the apocentre dust surface density. This has the effect, that (for face-on discs with all of HD53143's other parameters held fixed, and fixed 0.04” resolution), fractional width increases above a given threshold width induce pericentre glows at millimetre wavelengths. We generalise this picture in Fig. <ref> by showing all of our modelling results between fractional disc widths of 5–50%, and at all four resolutions. 
This plot shows the peak-apocentre-to-peak-pericentre brightness ratio, f_a/p, as a function of fractional width, demonstrating where the brightness enhancement regimes operate for a face-on disc with parameters otherwise consistent with HD53143. Initially, in all cases, the peak brightness of the narrowest discs appear on the apocentre side, although at the start of this scenario, the number of overlapping dust orbits on the apocentre side is low (with a correspondingly smaller brightness enhancement). As the orbital overlap of eccentric dust enhances more rapidly on the apocentre side than the pericentre side however, this disc side sees a sharp brightness enhancement over the pericentre side, peaking approximately in the region 15–25% for sub-arcsecond resolutions, before this rapidly falls, and by 30% may only achieve pericentre brightness enhancement. We note here that the best-fit M22 value of 22% sits inside this 15–25% apocentre-enhancement range. We also show on Fig. <ref> the predictions of the brightness ratio with the models of <cit.> (black-dashed line) and the models of Paper I for a constant eccentricity disc with only fixed forced eccentricity (convolved with a 1.0” circular Gaussian beam, black-solid line, `L&L22') for millimetre wavelengths. We note that only in the unresolved limit does the <cit.> predicted f_a/p∼√((1+e)/(1-e)) remain valid, under the assumption that e is dominated by the forced (constant) eccentricity component. On the other hand, we show that the behaviour of our constant eccentricity Paper I model follows broadly the same profile as the R=1.0” free and forced component model, predicting apocentre glow in poorly-resolved narrow discs, and pericentre glow otherwise. We note two additional points. Firstly, that in the limit of very low fractional widths, our constant eccentricity `L&L22' model converges on the <cit.> result as our disc widths become unresolved. Secondly, that our model under-predicts the apocentre enhancement expected of models that include free eccentricity (by between 5–10%, for the shown fractional widths) but as disc widths increase, our L&L22 model accurately tends to the same value as the models with both free and forced components. All of our 2D disc model images are provided as Supplementary Information. The apocentre glow origin in M22's HD53143 debris disc model is the result of overlapping eccentric dust orbits, for which a disc with a narrow fractional width and high free and forced eccentricities induces surface density distributions with a single radial peak at apocentre, and two radial peaks at pericentre <cit.>. We note therefore that to achieve apocentre glow at a level matching the observations of HD53143 (i.e., approximately 15%, MM22) may require significant fine-tuning of the disc width, given the results we present here. We have shown that to achieve 10–20% apocentre brightness enhancements at 1.0” resolution (consistent with the M22 observations) models are forced into the region of parameter space where eccentric dust orbits overlap to induce a single peak at apocentre and two peaks at pericentre, only possible with disc widths in the range 15%<Δ a/a < 25%, for fixed (high) free eccentricity. This description may not be a true reflection of what is going on physically however, since the disc width could lie outside of this width range. 
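For orientation, the unresolved-limit line-density prediction quoted above can be evaluated directly; taking the eccentricity to be dominated by the forced component, e ≈ e_f = 0.21 (our substitution, following the assumption stated above),

f_a/p ≈ √((1+e)/(1-e)) = √(1.21/0.79) ≈ 1.24,

i.e. an apocentre enhancement of roughly 24% in that limit, which is the level towards which the poorly resolved, very narrow models tend.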
In Paper I we showed that observable apocentre dust enhancements are induced by falling eccentricity gradients (e.g., consistent with forcing from an eccentric planet internal to the disc). This may be a more natural explanation for the origin of HD53143's apocentre glow, since such an internal eccentric perturber/s can provide the force necessary to shape the disc's eccentric structure and apocentre glow (over its ≳Gyr lifetime) without requiring such stringent conditions on the disc width. As such, higher-resolution ALMA observations that resolve HD53143's disc width could provide sufficient data to model the disc's underlying eccentricity distribution and determine whether or not this is consistent with either i) constant eccentricity parameters, or ii) internal planet-forcing. §.§ Implications for inferred planetary architectures Our results demonstrate that based on the width of an eccentric disc with free and forced eccentricity, the non-overlapping orbits of (radially double-peaked) eccentric dust in very narrow discs can result in local radial emission minima over all azimuthal angles at high angular resolution (see model Δ a/a=5% in Fig. <ref>). This is analogous to the sub-structure observed in protoplanetary discs ascribed to planet-carving <cit.> and more recently in optically thin debris discs observed with annular emission minima <cit.>. Whilst further detailed work should be conducted to understand the implications for debris discs more generally (accounting for different disc radii, widths and eccentricities), our results show that structures that are readily ascribed to embedded planet-carving may be instead consistent with dynamical expectations from discs with free eccentricities (i.e., in the absence of embedded planets). § CONCLUSIONS In this work we have developed models of HD53143, to demonstrate the effect of free and forced eccentricities on debris disc morphologies. This work is a direct extension to, and supports the findings of our previous study (Paper I) of <cit.> by demonstrating that interpretations of eccentric disc structures based on line density models remain valid only in a very limited set of circumstances, which are increasingly unlikely to be met as debris disc observations become better resolved. In this study we conclude: * in general, by including constant free and forced eccentricities in disc models, at millimetre wavelengths discs may exhibit surface brightness variations that result in either pericentre or apocentre glows at millimetre wavelengths; * that pericentre/apocentre glows in such models are a result of preferential overlapping eccentric dust orbits (with uniformly distributed arguments of pericentre of the free eccentricity) i.e., not from the pile-up of lower-velocity dust (i.e. apocentre hang-time); * that in the case of HD53143, new observations and detailed modelling is needed to determine the origin of its apocentre glow (which may either be due to a falling eccentricity profile, or a narrow ring with (constant) free and forced eccentricity); * that the structures induced by free and forced eccentricity models may induce observational signatures in debris discs, consistent with those ascribed to planet carving. § ACKNOWLEDGEMENTS We thank Grant Kennedy for useful discussions on eccentric disc modelling and comments on an earlier paper draft, and the referee for constructive suggestions that improved both the clarity and quality of our paper. J. B. 
Lovell thanks the STFC (through a postgraduate studentship) and the Institute of Astronomy, University of Cambridge, for partly funding this work through a Summer Research Assistantship, and the Smithsonian Institute for funding via a Submillimeter Array (SMA) Fellowship. E. M. Lynch thanks the Science and Technologies Facilities Council (STFC) for funding this work through a STFC studentship, and the European Research Council (ERC). This research was supported by STFC through the grant ST/P000673/1 and the ERC through the CoG project PODCAST No 864965. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 823823. § DATA AVAILABILITY All model files and surface density structure code will be made available via JBL's github: <https://astrojlovell.github.io>. mnras § PARTICLE MODEL ERROR ANALYSIS In this Appendix we briefly investigate one complication introduced by modelling high-resolution discs with particle models, by measuring the differences between images produced by models based on a dust surface density distribution and those produced when this surface density distribution is sampled with a finite number of particles. For illustrative purposes we considered only a single circular Gaussian disc model (i.e., e_p=0 and e_f=0, with disc parameters all otherwise fixed to those presented in <ref>) with a 20% fractional width (Δ a /a), and sampled either 10^4, 10^5, 10^6, 10^7, 10^8, or 10^9 particles (in the particle Gaussian sampling method). We ran these dust density distributions through the same imaging pipeline described in <ref>. All model images were then convolved with either 0.04”, 0.25”, 1.0” and 2.5” Gaussian beams, and for each resolution scale, we subtracted each particle model image from the Gaussian function model image to obtain model residual maps. Finally, we then measured the rms within a region defined by the ±3σ width of the Gaussian function (i.e., with σ=Δ a / 2√(2ln2)). We show in Fig. <ref> the outcome of the measured residual rms values (i.e., the model error) as a function of particle number. Firstly, we demonstrate that for increasing particle number, the rms errors fall following the relationship RMS ∝ 1/√(N_particles) as expected when the error is dominated by the sample variance. Therefore for ever higher-resolution observations, the number of particles needed to accurately model debris disks increases. This has important observational implications for future measurements and modelling exercises, e.g., for the ALMA Large Program ARKS (Marino et al. 2023, in preparation), or for HD53143. ARKS is scheduled to resolve the structure of 18 debris discs on scales ranging from 0.03-0.8”, and achieve per-beam SNRs in the range of 5–10. Thus, for particle-based models to accurately model these systems, i.e., producing modelling errors ≳50× smaller than observational errors, these will require rms errors of 0.1–0.2%. Based on Fig. <ref>, for an approximate average ARKS resolution of 0.25”, this will require models with ≳10^9 particles. In the case of HD53143, which was observed with approximately an average per-beam SNR of 7 (across the complete disk extent), resolving this with a factor of 4 resolution improvement (i.e., from the existing 1.0” MM22 resolution to 0.25”), our investigation implies that ≳10^8 particles would then be needed to ensure that observational errors dominate modelling errors (an order of magnitude larger than used in MM22). 
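The 1/√N behaviour measured above implies a simple extrapolation rule for the particle budgets quoted in this appendix; the reference numbers below are placeholders chosen only to illustrate the scaling, not values taken from our runs.

```python
def required_particles(n_ref: float, rms_ref: float, rms_target: float) -> float:
    """Sample-variance scaling: rms error ~ 1/sqrt(N), so halving the error
    requires four times as many particles."""
    return n_ref * (rms_ref / rms_target) ** 2

# Illustrative only: if 1e7 particles gave a 1% residual rms at some resolution,
# reaching a 0.1% rms would require ~1e9 particles.
print(f"{required_particles(1e7, 1e-2, 1e-3):.1e}")
```

This quadratic cost in the target error is what drives the ≳10^8–10^9 particle estimates given above.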
In summary, we have shown that as higher-resolutions are achieved observationally, the computational cost of modelling these with existing particle-based models increases approximately quadratically. This incentivises adopting methods that scale better with increased resolution. § SUPPLEMENTARY MATERIAL In our Supplementary Material we provide 2D images of the model suite used to interpret the brightness enhancements in <ref>, and in the online version a `.gif' movie of each row presented in this grid. All of these models will be made publicly available via <https://astrojlovell.github.io>.
http://arxiv.org/abs/2307.02634v1
20230705200812
Radio WISSH: tuning on the most luminous quasars in the Universe
[ "Gabriele Bruni", "Javier Moldón", "Enrico Piconcelli", "Francesca Panessa", "Miguel Pérez-Torres", "Manuela Bischetti", "Chiara Feruglio", "Giustina Vietri", "Cristian Vignali", "Luca Zappacosta", "Ivano Saccheo" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
In the past years, the results obtained by the WISSH quasar project have provided a novel general picture of the distinctive multi-band properties of hyper-luminous (L_bol>10^47 erg/s) quasars at high redshift (z∼2-4), unveiling interesting relations among active galactic nuclei, winds, and the interstellar medium in these powerful sources at cosmic noon. Since 2022, we have been performing a systematic and statistically-significant VLA study of the radio properties of WISSH. We carried out high-resolution VLA observations aiming at: 1) identifying young radio sources from the broad-band spectral shape of these objects; 2) sampling an unexplored high redshift/high luminosity regime, tracking possible evolutionary effects on the radio-loud/radio-quiet dichotomy; 3) quantifying orientation effects on the observed winds/outflows properties. § INTRODUCTION Hyper-luminous QSOs (HyLQSOs, i.e. L_bol>10^47 erg/s), powered by the most massive, highly-accreting supermassive black holes (SMBHs, i.e. M_BH > 10^9 M_⊙), are ideal targets to probe the assembly of giant galaxies (and, likely, the cradles of proto-clusters). Following the current consensus view on SMBH-host galaxy co-evolution, the huge amount of energy released by highly-accreting SMBHs in HyLQSOs is able to strongly affect the evolution of the host galaxy by heating and expelling the interstellar medium (ISM) (the so-called “AGN feedback" mechanism, see e.g. <cit.>, <cit.> for a review). The systematic study of HyLQSOs has faced significant challenges due to their low number density and the low fluxes resulting from their distance. A significant improvement in our understanding of the properties of the accreting SMBH, the nuclear thermal and non-thermal emission components, and the multiphase winds and multiphase ISM (and their interplay) in HyLQSOs can only be achieved by investigating all these aspects in a large sample of HyLQSOs. This requires an approach markedly different from traditional observing programs, which target a specific frequency band and focus on sparse sources to study a particular aspect of the HyLQSO phenomenon. This highlights the necessity of building large samples of HyLQSOs with extensive multi-band coverage from radio to X-rays. The additional information coming from the radio band can provide fundamental inputs on the presence of jets, on their interplay with winds, more generally on the presence of a possible young radio phase at cosmic noon, and on the long-standing questions about the radio-loud/radio-quiet dichotomy across cosmic epochs. §.§ Probing the brightest end of the AGN luminosity function with WISSH The WISSH quasar project can be regarded as a multi-band effort in the study of HyLQSOs, as demonstrated by the number of publications since 2017 dealing with their central engine, outflows/feedback, and host galaxy properties, e.g. <cit.>. The aim of the WISSH project is to establish a reference sample of HyLQSOs at cosmic noon to investigate their nuclear properties and the AGN feedback mechanism on a sound statistical basis. The sample consists of 85 broad-line Type 1, radio-quiet AGN at z ∼ 2–4.5 drawn from SDSS-DR7 and selected by the WISE All Sky Survey with flux F_22μ m> 3 mJy (see <cit.> and references therein).
Accordingly, the WISSH quasars turn out to be among the most luminous AGN known in the Universe, with L_bol > 2 × 10^47 erg/s. In this contribution, we briefly summarize the first results of our radio campaign on the WISSH QSOs, aimed at characterizing the radio emission of these objects at cosmic noon. § A RADIO CHARACTERIZATION OF THE WISSH SAMPLE During 2022, we carried out a deep, high-resolution VLA survey of the WISSH sample in A configuration in the 2-8 GHz range. We covered ∼90% of the sample at 2-4 GHz and ∼75% at 4-8 GHz, probing physical scales between 2 and 5 kpc. Our strategy was to reach a sensitivity threshold of ∼50 μJy, well below that of past or current radio surveys. Indeed, a first radio study of the WISSH sample carried out by <cit.> showed that, cross-correlating with the FIRST survey at 1.4 GHz (<cit.>), only ∼20% of the objects show a detection at a ∼500 μJy sensitivity threshold, and a compact morphology (<40 kpc at the median redshift of the sample). The recent release of the first epoch of the VLASS survey at 3 GHz - at a sensitivity similar to the FIRST one (RMS∼120 μJy/beam) - allowed us to confirm this detection rate. The estimated radio loudness (R=f_6cm/f_4400Å) is lower than 10 for all except two objects, for which R=47 and 290. Given these premises, going deeper in terms of physical scale and sensitivity appeared to be key to unveiling the radio properties of the sample. The main goals of our VLA campaign are the following: * Quantify the fraction of young radio sources: by compiling the L_1.4 GHz vs linear size (LS) diagram, we will be able to compare with the fraction of young radio sources found for heavily obscured quasars (<cit.>), investigating how the different evolutionary stages influence the radio phase. The collected VLA data can be complemented with LOFAR measurements from the LoTSS DR2 survey (<cit.>) - where, among the 34 sources in the footprint, 32 were detected - allowing us to extend the frequency coverage down to 0.15 GHz. The overall radio spectrum will allow us to quantify the fraction of peaked sources in the 0.15-12 GHz range, possibly confirming the fraction of young radio sources estimated from the L_1.4 GHz vs LS diagram. * Probe an unexplored high redshift/high luminosity regime of active galaxies: at the median redshift of WISSH (z=3.33), and with an RMS of ∼10 μJy/beam at 3 GHz, it is possible to probe radio powers down to ∼7×10^23 W/Hz at 3-sigma significance. Thanks to the availability of optical and X-ray (Chandra program ongoing) luminosities, a distribution of radio loudness can be obtained and compared to those of other surveys in order to test a possible evolution scenario of the radio loudness (<cit.>). This can provide clues on long-standing questions about the radio-loud/radio-quiet dichotomy. * Test orientation effects on the observed winds/outflows properties: in quasars, the spectral index of the optically-thin part of the radio jet spectrum can be used as an indicator of the jet orientation (<cit.>), suggesting a near-to-polar line of sight for values >–0.5 and an equatorial one for values <–0.5 (S_ν∝ν^α). This information can be compared with the outflow orientation estimates from <cit.>, and with the presence of nuclear winds (BAL) from <cit.>.
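To make the luminosity threshold quoted in the second goal concrete, the monochromatic radio power corresponding to a flux-density limit can be estimated from the standard K-corrected relation L_ν = 4π d_L^2 S_ν (1+z)^-(1+α), with S_ν∝ν^α; the cosmology, the spectral index α = -0.7, and the 3σ flux limit used in the sketch below are our illustrative assumptions, so the output only indicates the order of magnitude.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import Planck18

def radio_power(flux_density, z, alpha=-0.7):
    """K-corrected monochromatic luminosity for an assumed power-law spectrum."""
    d_L = Planck18.luminosity_distance(z)
    L = 4.0 * np.pi * d_L**2 * flux_density / (1.0 + z) ** (1.0 + alpha)
    return L.to(u.W / u.Hz)

# 3-sigma limit for an RMS of ~10 microJy/beam, at the WISSH median redshift z = 3.33
print(radio_power(30 * u.uJy, 3.33))   # of order 10^24 W/Hz for these assumptions
```

The survey depth therefore reaches powers within an order of magnitude of the classical 10^23 W/Hz boundary discussed in the next subsection.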
§.§ First results: detection rates and morphologies At an RMS of ∼10 μJy/beam, about 80% of the observed sample was detected at 3 GHz - reaching a redshift of 4.3 - implying an estimated radio power >10^23 W/Hz (classical threshold between radio-quiet and radio-loud AGN in the local Universe, <cit.>). This suggests that, at cosmic noon, most of the hyperluminous AGN like the WISSH ones could host jets. The implication of this result are wide-reaching, from the evolutionary effects on the radio-quiet/radio-loud dichotomy, to the contribution of jets in the QSOs feedback budget at large redshifts, and jets launching at this luminosity regime. The very fact that most of sources lie below the VLASS detection threshold (see Fig. <ref>, left panel) highlights how important deep observations are to perform population study at this high redshift, allowing to reach the completeness needed to draw conclusions on the radio phase evolution in AGN. Three objects showed a resolved morphology at 3 GHz, and more could arise from higher frequency observations. They show a symmetric morphology centered on the optical position of the host, with a projected linear extension of about 30 kpc (see Fig. <ref>, right panel). This could suggest that these sources are radio galaxies at cosmic noons, but more analysis, including spectral index estimates, is necessary to claim this. Observations are still ongoing, and will be concluded at the end of the current VLA semester, completing the multi-frequency radio view on the WISSH sample. §.§ The radio phase across cosmic epochs Recently, <cit.> performed a VLA study of heavily obscured quasars. The sample was selected in the ultra-luminous regime (L_bol∼10^11.7-14.2L_⊙) at z∼0.4-3, and extremely red in the WISE mid-infrared/optical band, along with a detection of bright, unresolved radio emission from the NVSS. Thanks to high-resolution VLA observations at 10 GHz, they found radio luminosities and linear extents similar to young radio sources (Gigahertz Peaked Spectrum, GPS, and Compact Steep Spectrum, CSS, sources). In a subsequent paper (<cit.>), they built the radio spectra by adding data from surveys, and confirmed the presence of a high fraction of young radio source. Although both the <cit.> and WISSH quasar samples are selected to allow the direct observation of AGN feedback in action, they are complementary in terms of quasar evolutionary stages. Indeed, according to <cit.>, the obscured quasars in <cit.> represent the initial heavily dust-enshrouded phase associated with rapid SMBH growth and star formation triggered by multiple galaxy encounters, while optically-bright objects like WISSH ones are undergoing the “blow-out” phase, which is characterized by powerful QSO-driven outflows blowing away the nuclear dust cocoon and part of the cold gas reservoir in the host galaxy. The same kind of study performed by <cit.> and <cit.>, once realized on the WISSH sample of hyper-luminous broad-line quasars, will not only provide unprecedented information on the radio phase at cosmic noon, but also shed light on the possible link between the launching mechanism of nuclear winds and radio jets. [Ballo et al. (2012)]2012A A...545A..66B Ballo, L., Heras, F. J. H., Barcons, X., et al. 2012, A&A, 545, A66 [Becker et al. (1995)]1995ApJ...450..559B Becker, R. H., White, R. L., & Helfand, D. J. 1995, ApJ, 450, 559 [Bischetti et al. (2017)]2017A A...598A.122B Bischetti, M., Piconcelli, E., Vietri, G., et al. 2017, A&A, 598, A122 [Bischetti et al. 
(2018)]2018A A...617A..82B Bischetti, M., Piconcelli, E., Feruglio, C., et al. 2018, A&A, 617, A82 [Bischetti et al. (2021)]2021A A...645A..33B Bischetti, M., Feruglio, C., Piconcelli, E., et al. 2021, A&A, 645, A33 [Bruni et al. (2019)]2019A A...630A.111B Bruni, G., Piconcelli, E., Misawa, T., et al. 2019, A&A, 630, A111 [Condon (1992)]1992ARA A..30..575C Condon, J. J. 1992, ARAA, 30, 575 [Duras et al. (2017)]2017A A...604A..67D Duras, F., Bongiorno, A., Piconcelli, E., et al. 2017, A&A, 604, A67 [Fabian (2012)]2012ARA A..50..455F Fabian, A. C. 2012, ARAA, 50, 455 [Hopkins et al. (2008)]2008ApJS..175..390H Hopkins, P. F., Cox, T. J., Kereš, D., et al. 2008, ApJS, 175, 390 [Martocchia et al. (2017)]2017A A...608A..51M Martocchia, S., Piconcelli, E., Zappacosta, L., et al. 2017, A&A, 608, A51 [Morganti (2017)]2017FrASS...4...42M Morganti, R. 2017, Frontiers in Astronomy and Space Sciences, 4, 42 [Orr & Browne (1982)]1982MNRAS.200.1067O Orr, M. J. L. & Browne, I. W. A. 1982, MNRAS, 200, 1067 [Patil et al. (2020)]2020ApJ...896...18P Patil, P., Nyland, K., Whittle, M., et al. 2020, ApJ, 896, 18 [Patil et al. (2022)]2022ApJ...934...26P Patil, P., Whittle, M., Nyland, K., et al. 2022, ApJ, 934, 26 [Saccheo et al.(2023)]2023A A...671A..34S Saccheo, I., Bongiorno, A., Piconcelli, E., et al. 2023, A&A, 671, A34 [Shimwell et al.(2022)]2022A A...659A...1S Shimwell, T. W., Hardcastle, M. J., Tasse, C., et al. 2022, A&A, 659, A1 [Travascio et al. (2020)]2020A A...635A.157T Travascio, A., Zappacosta, L., Cantalupo, S., et al. 2020, A&A, 635, A157 [Vietri et al. (2018)]2018A A...617A..81V Vietri, G., Piconcelli, E., Bischetti, M., et al. 2018, A&A, 617, A81 [Vietri et al. (2022)]2022A A...668A..87V Vietri, G., Misawa, T., Piconcelli, E., et al. 2022, A&A, 668, A87 [Zappacosta et al. (2020)]2020A A...635L...5Z Zappacosta, L., Piconcelli, E., Giustini, M., et al. 2020, A&A, 635, L5
http://arxiv.org/abs/2307.02451v1
20230705172341
On the heat capacity of quantum hard sphere fluid
[ "Sergei Stishov" ]
cond-mat.other
[ "cond-mat.other" ]
stishovsm@lebedev.ru P. N. Lebedev Physical Institute, Leninsky pr., 53, 119991 Moscow, Russia The thermodynamic properties of the Boltzmann hard sphere system are discussed. It is found that the zero-point energy decreases with temperature so slowly that it turns out to be an almost constant addition to the classical value. As a result, the heat capacity of the system differs little from the classical value of 3/2 k everywhere except for the narrow region of low temperatures, where the heat capacity drops to zero. The predicted linear temperature contribution to the heat capacity, as in an ideal Fermi gas, is clearly detected in the quantum hard sphere system at the lowest temperatures. On the heat capacity of quantum hard sphere fluid S.M. Stishov August 1, 2023 ================================================= § INTRODUCTION At sufficiently high temperatures, or in systems with a strong repulsive interaction, when particle exchanges are practically impossible, the effects of Bose and Fermi statistics can be neglected. However, the system may still be quantum mechanical due to "diffraction effects" associated with the wave nature of the particles. Moreover, the effects of quantum statistics decay exponentially with increasing temperature, while the "diffraction effects" disappear as an inverse power of temperature as T→∞. Thus, in the quantum system of hard spheres there is a significant temperature range where the effects of quantum statistics play only a minor role <cit.>. In what follows we therefore discuss the energy and heat capacity of the Boltzmann quantum hard sphere fluid. § DISCUSSION AND RESULTS The system of classical hard spheres is the simplest non-trivial system, with an interaction of the form (Fig. <ref>): Φ(r)=0 for r>σ, Φ(r)=∞ for r<σ. However, in contrast to the classical system of hard spheres, in the quantum case an interparticle repulsion arises from the uncertainty principle, which ensures the existence of a “restoring” force for long-wavelength acoustic deformations <cit.>. The hard sphere model has been widely used to describe strongly interacting systems. Let us recall the van der Waals theory of critical phenomena, in which the interparticle repulsive interaction is described by the hard sphere potential. Subsequently, much effort was expended on developing a theory of fluids using the hard sphere model as a zeroth-order approximation in the framework of perturbation theory <cit.>. The quantum model of hard spheres has been used in the analysis of the behavior of quantum systems with short-range interactions, in particular helium <cit.>. We now turn our attention to one particular study of the thermodynamic properties of quantum hard spheres published many years ago in Ref. <cit.>. A surprising result of this study was the claim of a Fermi-like linear temperature dependence of the heat capacity of the system, C_v ∝ T, which arises from “the physical exclusion of interpenetration rather than statistics” <cit.>. But the real physics of this situation cannot be described in such a simple way. Indeed, in a dense hard sphere system the particles are confined in a kind of cage formed by the neighboring particles. The energy of a particle should then be quantized. But because of the irregular shapes of the cages in a hard sphere fluid, the corresponding energy levels should be different for each particular cage. Curiously, calculations <cit.> of the specific heat of a quantum particle in a box do not show a linear behavior at low temperature.
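The last remark can be verified with a short numerical sketch: for a single particle in a box with levels E_n = n^2 E_1, the canonical heat capacity computed from energy fluctuations is exponentially suppressed at low temperature rather than linear in T (the level cutoff and the temperature grid below are our choices).

```python
import numpy as np

def c_v_particle_in_box(t_over_e1, n_levels=2000):
    """Heat capacity (in units of k) of one particle in a 1D box, E_n = n^2 E_1,
    from the fluctuation formula C_v = (<E^2> - <E>^2) / (k T)^2."""
    n = np.arange(1, n_levels + 1)
    e = n.astype(float) ** 2                          # energies in units of E_1
    w = np.exp(-np.outer(1.0 / t_over_e1, e))         # Boltzmann weights, shape (n_T, n_levels)
    z = w.sum(axis=1)
    e_mean = (w * e).sum(axis=1) / z
    e2_mean = (w * e**2).sum(axis=1) / z
    return (e2_mean - e_mean**2) / t_over_e1**2

temps = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 50.0])    # in units of E_1/k
print(c_v_particle_in_box(temps))                     # exponentially small at low T, ~1/2 at high T
```

The activation-like suppression below the first level spacing illustrates the point made above: a single confining cage with a discrete spectrum does not by itself yield a linear low-temperature term.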
The cited study <cit.> can be validated against Monte Carlo calculations of the thermodynamic properties of the quantum hard sphere system, carried out in Ref. <cit.>. The author of Ref. <cit.> presented values of the dimensionless energy E/kT of the fluid state of the system as a function of the reduced density ρ^*=ρσ^3 (σ is the sphere diameter) along lines of constant λ^*, where λ^*=h/(2π mkTσ^2)^1/2 is the ratio of the thermal de Broglie wavelength to the diameter of the hard sphere. For the analysis, we select the results of the energy calculations at density ρ^*=0.3, which cover the largest range of reduced de Broglie wavelengths λ^*. The corresponding data are shown in Fig. <ref>. As can be seen from Fig. 2, the calculated data clearly extrapolate at λ^*→ 0 to the classical value E/kT=1.5, which verifies the calculated results. Note that the total energy of quantum hard spheres includes only the kinetic energy of the translational motion of the particles and the zero-point energy associated with the uncertainty principle. The approximation formula describing the numerical data <cit.> has the form: E/kT=1.5 + 1.5645(λ^*)^2.1169. Substituting the numerical values into the expression for λ^* (σ=3.5 Å, m=28.0134 a.u. <cit.>), we obtain for the energy and heat capacity: E=1.5T+1.395 T^-0.06, C_v=1.5-0.084T^-1.006. Note that, as follows from relation (4), C_v vanishes at a small but finite temperature equal to 5.7 × 10^-2, which is obviously a result of computational errors and approximations. This mismatch is corrected where needed. Quite surprising results follow from expressions (3) and (4). The zero-point energy decreases so slowly with temperature that it turns out to be an almost constant addition to the classical value (Fig. <ref>). The behavior of the quantum contribution to the energy of a system of hard spheres (Fig. <ref>) confirms the conclusion of Ref. <cit.> that, contrary to naive expectations, quantum effects turn out to be very important even when the thermal de Broglie wavelength is only a small fraction of the hard sphere diameter. Due to the mentioned specifics of the quantum contribution, the heat capacity of the system differs little from the classical value of 3/2 k everywhere except for the narrow region of low temperatures, where the heat capacity of the system drops to zero (Fig. 4). The low-temperature part of the heat capacity of the quantum system of hard spheres is depicted in Fig. <ref>. As can be seen, the dependence C_v(T) clearly contains a low-temperature linear component. A finite value of the derivative dC_v/dT at the origin, as occurs in the case of the Fermi gas, clearly supports this conclusion. Fig. <ref> illustrates this point well. It should be recalled that the linear temperature dependence of the heat capacity of a Fermi gas arises only at T/T_f<<1, where T_f is the Fermi temperature. At higher temperatures the behavior of the heat capacity is essentially nonlinear (see Ref. <cit.>). The same situation is expected in our case, and a linear behavior of the heat capacity can be seen at T/ε<<1, where ε is some energy barrier preventing particles from moving freely. One may conclude from Fig. <ref> that ε ∼ 10^-2 K. In this connection it is instructive to analyze Fig. <ref>, where four C_v(T) curves are displayed, describing the heat capacity as a function of temperature for the ideal Bose and Fermi gases, the quantum Boltzmann hard sphere fluid, and the Debye solid.
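Before comparing the different statistics, the route from the Monte Carlo fit to expressions (3) and (4) can be retraced numerically; the sketch below evaluates λ^*(T) for the quoted σ and m, inserts it into the fitted E/kT, and differentiates the result with finite differences (the temperature grid and SI constants are our choices, and the fit is not meaningful below T ≈ 6 × 10^-2 K, where C_v crosses zero as noted above).

```python
import numpy as np

h, k = 6.62607015e-34, 1.380649e-23      # Planck and Boltzmann constants [SI]
m = 28.0134 * 1.66053906660e-27          # particle mass quoted in the text [kg]
sigma = 3.5e-10                          # hard sphere diameter [m]

def lam_star(T):
    """Reduced de Broglie wavelength lambda* = h / (2 pi m k T sigma^2)^(1/2)."""
    return h / np.sqrt(2.0 * np.pi * m * k * T * sigma**2)

def energy_over_k(T):
    """E/k from the fit E/kT = 1.5 + 1.5645 (lambda*)^2.1169."""
    return T * (1.5 + 1.5645 * lam_star(T) ** 2.1169)

T = np.linspace(0.07, 2.0, 2000)         # temperatures in kelvin
C_v = np.gradient(energy_over_k(T), T)   # heat capacity in units of k
print(C_v[::400])                        # climbs from near zero towards the classical 3/2
```

Within rounding of the fitted coefficients, this reproduces the nearly constant quantum addition to the energy and the sharp low-temperature drop of C_v described above.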
One can see in Fig.<ref> that the curves exhibit different behavior in the vicinity of zero temperature certainly as a result of different nature of excitations responsible for the heat capacity (single particle or collective). Probably just single particle character of thermal excitations in the hard sphere fluid and ideal Fermi gas defines their linear dependence of heat capacity on temperature. The distinct similarity of heat capacity curves of ideal Fermi gas and hard sphere fluid is illustrated in Fig.<ref>. § CONCLUSION Heat capacity behavior as a function of temperature of the Boltzmann quantum hard sphere fluid was derived from the Quantum Monte Carlo calculations, which appeared to be quite similar to that of the ideal Fermi gas. We suggest that the reason of this similarity lies in the specifics of single particle nature of excitations responsible for heat capacity characteristics in the both media. § ACKNOWLEGMENT Author appreciates A.M. Belemuk advice on the matter of the Fermi gas properties and expresses his gratitude to A.E. Petrova for some calculations. 99 runge K.J. Runge, G.V. Chester, Phys.Rev.B 38 , 135 (1988) barker J.A. Barker, D. Henderson, J.Chem.Phys. 47, 2856 (1967) Hansen J-P Hansen, D. Levesque, D. Schiff, Phys.Rev. A 3, 776 (1971) kalos M.H. Kalos, D. Levesque, L. Verlet, Phys.Rev. A 9, 2178 (1974) Cole R.K. Cole, Jr. Phys.Rev. 155, 114 (1967) Ros H.B. Rosenstock, Am.J.Phys.,30,38 (1962) sese1 L.M. Sesé, J.Chem.Phys. 136, 244504 (2012) bha R.K. Bhaduri, W wan Dijk, M.K. Srivastava, Europ.J.Phys. 27, 1323 (2006) Pat R.K. Pathria, Paul D. Beale, Statistitical Mechanics, Third Edition, Elsevier (2011)
http://arxiv.org/abs/2307.00921v1
20230703104446
Influence of the Anderson transition on thermoelectric energy conversion in disordered electronic systems
[ "I. Khomchenko", "H. Ouerdane", "G. Benenti" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.dis-nn" ]
Corresponding author: ikhomchenko@uninsubria.it Center for Digital Engineering, Skolkovo Institute of Science and Technology, 30 Bolshoy Boulevard, bld. 1, 121205 Moscow, Russia Center for Nonlinear and Complex Systems, Dipartimento di Scienza e Alta Tecnologia, Università degli Studi dell’Insubria, via Valleggio 11, 22100 Como, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, 20133 Milano, Italy H.Ouerdane@skoltech.ru Center for Digital Engineering, Skolkovo Institute of Science and Technology, 30 Bolshoy Boulevard, bld. 1, 121205 Moscow, Russia giuliano.benenti@uninsubria.it Center for Nonlinear and Complex Systems, Dipartimento di Scienza e Alta Tecnologia, Università degli Studi dell’Insubria, via Valleggio 11, 22100 Como, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, 20133 Milano, Italy NEST, Istituto Nanoscienze-CNR, I-56126 Pisa, Italy So far, the efficiency of thermoelectric energy conversion remains low compared to traditional technologies, such as coal or nuclear power. This low efficiency can be explained by connecting the thermoelastic properties of the electronic working fluid to its transport properties. Such a connection also shows that operating close to electronic phase transitions can be an efficient way to boost thermoelectric energy conversion. In this paper, we analyze the thermoelectric efficiency close to the metal-insulator Anderson transition. Our results reveal the direct link between the thermoelectric and thermoelastic properties of Anderson-type systems. Moreover, the role of the conductivity critical exponent in the thermoelectric energy conversion is analyzed. Finally, we show that relatively large values of the thermoelectric figure of merit may be obtained in the vicinity of the Anderson transition. Influence of the Anderson Transition on Thermoelectric Energy Conversion in Disordered Electronic Systems Giuliano Benenti August 1, 2023 ========================================================================================================= § INTRODUCTION Thermoelectric conversion performance is usually determined using a combination of three transport coefficients: the electrical conductivity σ, the thermal conductivity κ, and the Seebeck coefficient α. This combination is known as the thermoelectric figure of merit zT <cit.>: zT = α^2σ/κ T, where T is the average temperature across the system. As both the phonons of the crystal lattice and the charge carriers (usually electrons) contribute to thermal transport, κ can be written as κ = κ_e + κ_ph. The range of applications of thermoelectric technology would be significantly extended, provided zT exceeds a value of at least 4 <cit.>. This is a formidable challenge, due to the interdependence of the transport coefficients, ruled by phenomenological laws such as the Wiedemann–Franz law connecting the thermal and electrical conductivities <cit.>. To establish an upper bound for zT under given working conditions, it is enough to consider the thermoelectric figure of merit of the conduction electron gas alone, z_eT, which disregards the lattice thermal conductivity κ_ph <cit.>: z_eT = α^2σ/κ_e T > zT. Coupling between heat and electrical transport, as was underlined by Apertet et al. <cit.>, results in a convective process, namely a heat flow associated with the displacement of charge carriers.
The convective part of the heat flow, which adds to the conductive part due to electrons and phonons under open-circuit conditions, can be enhanced near the critical temperature for an electronic transition <cit.>. In the literature, thermoelectric conversion has been discussed for the superconducting <cit.> and the metal-insulator Anderson <cit.> transitions. To address the conversion efficiency optimization problem, it is instructive to analyze the thermodynamic properties of the conduction electron gas, which is the actual working fluid of thermoelectric devices. The Seebeck coefficient, given by the ratio of the gradients of two intensive variables, the electrochemical potential μ and the temperature T, α = -∇μ/(q∇ T) <cit.>, has a thermostatic counterpart, α_th = -dμ/dT, which derives from the Gibbs-Duhem equation. The quantity α_th can also be written using the definitions of the thermoelastic coefficients of the electron gas and the Maxwell relations, and a thermodynamic figure of merit can be defined from the calculation of the isentropic expansion factor <cit.>: Z_thT = β^2/(χ_T C_N) T, where β is the analogue of the thermal dilatation coefficient, χ_T is the analogue of the isothermal compressibility, and C_N is the analogue of the specific heat at constant volume. Definitions are given further below. As discussed in <cit.>, driving the electron gas close to a phase transition yields a significant increase of the isentropic expansion factor, which boosts the energy conversion efficiency. In the latter works, thermally driven effects, namely fluctuating Cooper pairs and nematic fluctuations, were considered in 2D systems and thin films. Other effects such as disorder can influence the thermodynamic and transport properties of the electron gas: in a disordered system, the charge-carrier states at a given energy can either be localized or delocalized depending on the disorder strength. In this work, we analyze the effects of the transition from delocalized to localized states, or Anderson metal-insulator transition, on the thermodynamic properties of the electron gas and its ability to perform an efficient energy conversion in the vicinity of the critical point. Indeed, one may expect the Seebeck coefficient to increase drastically as the system is driven away from its metallic phase, since the entropy per carrier increases. Thermoelectric conversion near the Anderson transition has been studied in <cit.>, but the link between thermoelastic and transport properties has not yet been considered, whereas it has been studied for the metal-to-superconductor phase transition <cit.>. The paper is organized as follows. In the next two sections, for completeness and clarity, we give a brief recap of the basic ingredients of our approach: the transport coefficients and the thermoelastic coefficients. We then focus on the Anderson transition, detailing the assumptions and parameters we use for our model. We present and discuss our numerical results in the subsequent section, and we end the paper with concluding remarks. § TRANSPORT COEFFICIENTS The standard approach to calculate σ, α, and κ_e is to relate these transport coefficients to Onsager's kinetic coefficients L_ij (i,j=1,2), in the framework of linear non-equilibrium thermodynamics <cit.>: σ = e^2 L_11/T, α = L_12/(qT L_11), κ_e = (1/T^2)[L_22 - L_12 L_21/L_11], where q is the electron charge.
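The algebra that maps the kinetic coefficients L_ij onto σ, α, κ_e, and finally onto z_eT, is compact enough to write down directly. A minimal Python sketch is given below; the numerical values of the L_ij would come from the integrals introduced next, and the sign convention for the carrier charge q is an assumption of the sketch.

```python
def transport_from_onsager(L11, L12, L21, L22, T, q=-1.602176634e-19):
    """Map the Onsager kinetic coefficients onto sigma, alpha, kappa_e and z_e*T."""
    sigma   = q**2 * L11 / T                    # electrical conductivity (q**2 = e**2)
    alpha   = L12 / (q * T * L11)               # Seebeck coefficient
    kappa_e = (L22 - L12 * L21 / L11) / T**2    # electronic thermal conductivity
    zeT     = alpha**2 * sigma * T / kappa_e    # figure of merit of the electron gas
    return sigma, alpha, kappa_e, zeT
```

Note that z_eT reduces to L_12 L_21/(L_11 L_22 - L_12 L_21), so it is independent of the units chosen for the L_ij.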
To compute the Onsager coefficients L_ij, we use the Boltzmann equation in the relaxation time approximation <cit.>: L_11 = T∫_0^∞Σ(E) (-∂ f/∂ E) dE, L_12 = L_21 = T∫_0^∞ (E - μ)Σ(E) (-∂ f/∂ E) dE, L_22 = T∫_0^∞ (E - μ)^2Σ(E) (-∂ f/∂ E) dE, where f={exp[(E-μ)/k_BT] + 1 }^-1 is the Fermi distribution function and μ is the electrochemical potential introduced above. The transport distribution function is Σ(E) = τ(E)v^2(E)g(E), with τ(E) the electron relaxation time, v(E) the electron group velocity, and g(E) the density of states. Note that the relaxation time depends on the model of the system and also varies with the type of collisions <cit.>. § THERMOELASTIC COEFFICIENTS The thermodynamics of the noninteracting electron gas is very similar to that of its classical counterpart. Using the correspondence between the volume V and the number of electrons N, and between the pressure P and the chemical potential μ, namely V ⟶ N and -P ⟶μ, one can define analogous coefficients for the electron gas, β, χ_T, C_N (already introduced in Eq. (<ref>)), and C_μ <cit.>, where C_μ is the analogue of the specific heat at constant pressure. Following the same approach as for the transport coefficients, we relate the thermoelastic coefficients to the distribution function <cit.>. The analogue of the isothermal compressibility is: χ_T N=∫_0^∞ g(E)(-∂ f/∂ E) dE; the analogue of the thermal dilatation coefficient reads: β N=1/T∫_0^∞ g(E)(E-μ)(-∂ f/∂ E) dE; and the specific heat at constant electrochemical potential is given by: C_μ N=1/T∫_0^∞ g(E)(E-μ)^2(-∂ f/∂ E) dE. The specific heat at constant particle number C_N and that at constant electrochemical potential C_μ are connected via the Maxwell relation C_μ = C_N + β^2 T/χ_T = C_N (1 + β^2/χ_T C_NT). § ANDERSON TRANSITION Anderson developed the concept of localized and extended states with a simple theoretical model: if a quantum-mechanical system is sufficiently disordered (e.g., a semiconductor with lattice defects or impurities) and at sufficiently low carrier density, diffusion cannot take place, thus entailing wave-function localization <cit.>. The model assumes a distribution of sites occupied by particles, which may be random or regular in three-dimensional space. The basic characteristic of the probability distribution is its width W. When a particle occupies the site j, it has energy E_j. This energy is a stochastic variable with a probability distribution P(E), while the interaction potential V_jk between the sites j and k is not a stochastic variable <cit.>. In the three-dimensional Anderson model, a mobility edge E_c separates localized states (for E < E_c) from extended states (for E > E_c). The densities of states g(E) near the mobility edge are given by g_hom(E) = const, g_inh(E) = const |E-E_c|^(d-y)/y, where d is the system's dimension and y a scaling parameter, and the expression for the electrical conductivity at zero temperature reads <cit.>: σ(T=0,E) = {[ A (E-E_c)^(d-2)/y, E ≥ E_c,; 0, E ≤ E_c, ]. where A is a proportionality constant. Moreover, the electrical conductivity at zero temperature is related to the transport distribution function as σ(T=0, E)=2e^2Σ(E) <cit.>. We are interested here in the case d=3 because of the absence of quantum diffusion in one and two dimensions <cit.>; the reason is that in lower dimensions there is no fixed point, i.e., no mobility edge E_c. Before concluding this section, we make one remark about the possible values of y.
According to Wegner, the theory is valid for values of y in the range 0<y<d <cit.>. §.§ Parameters of the Model We consider the electron gas at the metal-insulator transition <cit.>. The calculations below are provided for the homogeneous and inhomogeneous ensembles, using Eqs. (<ref>) and (<ref>) with g_3D(E_F) =2.45 · 10^24 J^-1·m^-3 at the Fermi level. For convenience, let us denote 1/y as x, which is also known as the critical exponent. We investigate the influence of this parameter on the transport coefficients, so we show the dependencies for several values of x. The constant A in Eq. (<ref>) was chosen as A=10^22 Ohm^-1· m^-1· J^-x <cit.> to match the typical values of the electrical conductivity. As for the thermodynamic figure of merit Z_thT, we include both the homogeneous ensemble, for which x=1/3, and the inhomogeneous ensemble, since their densities of states are given by different equations. Note that the exact value of x is not known, notwithstanding the several numerical, analytical, and experimental methods used to determine it <cit.>. For example, MacKinnon provides the value x=1.54, which lies in the range of typical values 0.5<x<2 <cit.>. The temperature-dependent chemical potential of the three-dimensional electron gas reads <cit.>: μ(T) = E_F{[ 1 + a_1t + a_2t^2 + a_3t^3+a_4t^4, 0 ≤ t ≤ t^*,; t ln(b_+ t^-3/2)- t ln(1 - b_+ (2t)^-3/2), t ≥ t^*, ]. where t=T/T_F, with T_F=E_F/k_B the Fermi temperature, and t^*=1.36. The coefficient b_+ is given by b_+ = a_+/Γ(3/2), with a_+=2/3 and Γ the gamma function; a_1=0.016, a_2=-0.957, a_3=-0.293, and a_4=0.209 <cit.>. In our calculations, T_F=42.3 K for the three-dimensional electron gas, whose concentration n was taken as n=10^18 cm^-3. The mobility edge is set to E_c(T) = 0 eV, based on the simplified model of Wegner <cit.>. § RESULTS AND DISCUSSION As we study the effect of disorder on thermoelectric energy conversion, several values of the critical exponent are considered. Our model includes extended and localized states, with a smaller proportion of the latter since the mobility edge is E_c =0. As for the extended states, their influence is significant because, for a system with weak enough disorder, extended states may exist and contribute to the conductivity, which is finite even at zero temperature <cit.>. Figure <ref> illustrates the electron-gas figure of merit z_eT and the thermodynamic figure of merit Z_thT as functions of the temperature T. To compute z_eT, we first calculated the transport coefficients κ_e, σ, and α by numerically integrating Eqs. (<ref>)-(<ref>) with the transmission function Σ(E) = σ(T=0,E)/(2e^2) <cit.>, where σ(T=0,E) is given by Eq. (<ref>). As regards the thermodynamic figure of merit Z_thT, we integrated Eqs. (<ref>,<ref>,<ref>) using the densities of states g(E) given in Eqs. (<ref>) and (<ref>). Both figures of merit grow steadily over the whole temperature range. Importantly, both z_e and Z_th are of comparable magnitude and increase as the system is driven close to the phase transition, implying that the increase of the electron-gas isentropic expansion factor fosters the desired behavior of the transport parameters. The figure of merit Z_th is slightly larger than z_e for x=0.8 and x=1.1, although the ranges of their values are similar. At relatively small temperatures (T ≲ 150 K), the larger the critical exponent, the larger the figures of merit, while at higher temperatures (T ≳ 150 K), the dependence is reversed. There is no fundamental difference between the homogeneous and the inhomogeneous ensembles, i.e., the shape of the curves is the same in both cases.
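For readers who want to reproduce the qualitative trends just described, the sketch below strings together the ingredients of the model: the piecewise chemical potential μ(T), the Anderson transmission function Σ(E) = σ(T=0,E)/(2e^2) with E_c = 0, and the Fermi-window integrals for the Onsager coefficients, from which z_eT follows. The constants are those quoted above; the temperature grid, the integration cutoff, and the sign convention for the carrier charge are illustrative assumptions, so this is a sketch rather than the exact code used for the figures.

```python
import numpy as np
from math import gamma

kB = 1.380649e-23        # J/K
e  = 1.602176634e-19     # C
TF = 42.3                # Fermi temperature (K), from the model parameters
EF = kB * TF             # Fermi energy (J)
Ec = 0.0                 # mobility edge
A  = 1.0e22              # prefactor of sigma(T=0,E) in Ohm^-1 m^-1 J^-x, as quoted

# piecewise chemical potential mu(T) of the 3D electron gas (see the expression above)
a1, a2, a3, a4 = 0.016, -0.957, -0.293, 0.209
b_plus, t_star = (2.0 / 3.0) / gamma(1.5), 1.36

def mu_of_T(T):
    t = T / TF
    if t <= t_star:
        return EF * (1 + a1*t + a2*t**2 + a3*t**3 + a4*t**4)
    return EF * (t*np.log(b_plus * t**-1.5) - t*np.log(1 - b_plus * (2*t)**-1.5))

def fermi_window(E, mu, T):
    """-df/dE for the Fermi distribution, written in an overflow-safe form."""
    x = np.clip((E - mu) / (kB * T), -60.0, 60.0)
    return 1.0 / (4.0 * kB * T * np.cosh(x / 2.0)**2)

def zeT(T, x):
    """Electron-gas figure of merit z_e*T for critical exponent x."""
    mu = mu_of_T(T)
    E = np.linspace(Ec, max(mu, Ec) + 40 * kB * T, 8000)        # integration grid
    Sigma = A * np.where(E > Ec, (E - Ec)**x, 0.0) / (2 * e**2)
    w = fermi_window(E, mu, T)
    L11 = T * np.trapz(Sigma * w, E)
    L12 = T * np.trapz((E - mu) * Sigma * w, E)
    L22 = T * np.trapz((E - mu)**2 * Sigma * w, E)
    sigma   = e**2 * L11 / T
    alpha   = L12 / (-e * T * L11)            # q = -e assumed for electrons
    kappa_e = (L22 - L12**2 / L11) / T**2
    return alpha**2 * sigma * T / kappa_e     # equivalently L12^2 / (L11*L22 - L12^2)

for T in (50.0, 100.0, 150.0, 200.0):
    print(f"T = {T:5.1f} K   z_eT(x=1/3) = {zeT(T, 1/3):6.2f}   z_eT(x=1.1) = {zeT(T, 1.1):6.2f}")
```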
To understand how the thermoelastic properties are reflected in the transport properties, we show the correlation between the figures of merit z_eT and Z_thT in Fig. <ref>. Depending on the value of the critical exponent, the correlation between the figures of merit varies from a straight line at x=0.5 to a power-law behavior for other values of x. We underline that z_eT is a monotonically growing function of Z_thT, regardless of the value of the critical exponent. This means that we can estimate how efficient energy conversion can be in the best-case scenario (that is, neglecting phonons) on purely thermodynamic grounds, by studying the behavior of the figure of merit Z_thT rather than z_eT. Next, we show how efficient the energy conversion is near the transition temperature and over the whole temperature range. To characterize the efficiency of this process, we introduce the thermodynamic efficiency η_max = (√(γ)-1)/(√(γ)+1) η_C, where γ = C_μ/C_N is the analogue of the classical isentropic expansion factor and η_C is the Carnot efficiency. At T≈ 50 K, where k_B T≈μ-E_c and thermoelectric transport is affected by the non-analytical behavior of the transmission function Σ(E) (see Eq. (<ref>)), η_max varies from around 0.28η_C to roughly 0.4η_C as the critical exponent x varies from 1/3 to 1.1. For higher temperatures (T ≈ 150 K), we can reach a performance of η_max≈ 0.5η_C. Of course, such a high efficiency is only an upper bound, as phonons become more relevant with increasing temperature. The values reported in Table <ref> show how η_max/η_C varies as a function of the critical exponent x. Another essential issue is the influence of the mobility edge E_c on the thermoelectric properties near the Anderson transition. For simplicity, we considered E_c=0, although one can choose a temperature-dependent mobility edge of the form E_c(T) = (0.6+E_F - 4 · 10^-4T) eV, as in <cit.>. Such a choice may help to study the interplay between the localized and extended states and might increase the electron-gas figure of merit z_eT. In <cit.>, several values of E_c were considered and high values of the figure of merit zT were obtained. A more realistic model of the mobility edge, and how it may affect the energy conversion at the metal-insulator transition, deserves more attention and is beyond the scope of the present work. § CONCLUSION We have connected the thermoelectric and thermoelastic properties of the three-dimensional Anderson model, discussing the effect of different types of electron distributions (the homogeneous and the inhomogeneous ensembles) as well as the role of the critical exponent. In contrast with the sharp enhancement of thermoelectric conversion close to the superconducting phase transition <cit.>, our results show a smooth, monotonically growing dependence of the thermoelectric figure of merit on temperature. Indeed, in single-particle models a sharp energy dependence of the transmission function is required to obtain large thermoelectric efficiencies. On the other hand, in the Anderson model the dependence of the transmission function Σ(E) on energy is non-analytical at the mobility edge E_c (see Eq. (<ref>)), but less sharp than the more thermoelectrically efficient boxcar-shaped transmission functions <cit.>.
From a general perspective, our results confirm the validity of the thermodynamic approach as a useful and physically intuitive way to estimate the ideal thermoelectric performance of the working fluid, neglecting the detrimental effect of phonons. Such an approach naturally suggests the consideration of electronic phase transitions as a means to boost thermoelectric efficiency <cit.>.
http://arxiv.org/abs/2307.02806v1
20230706065010
A Singular-value-based Marker for the Detection of Atrial Fibrillation Using High-resolution Electrograms and Multi-lead ECG
[ "Hanie Moghaddasi", "Richard C. Hendriks", "Borbala Hunyadi", "Paul Knops", "Mathijs S van Schie", "Natasja M. S. de Groot", "Alle-Jan van der Veen" ]
eess.SP
[ "eess.SP" ]
A Singular-value-based Marker for the Detection of Atrial Fibrillation Using High-resolution Electrograms and Multi-lead ECG August 1, 2023 ====================================================================== Objective: The severity of atrial fibrillation (AF) can be assessed from intra-operative epicardial measurements (high-resolution electrograms), using metrics such as conduction block (CB) and continuous conduction delay and block (cCDCB). These features capture differences in conduction velocity and wavefront propagation. However, they do not clearly differentiate patients with various degrees of AF while they are in sinus rhythm (SR), and complementary features are needed. In this work, we focus on the morphology of the action potentials and derive features to detect variations in the atrial potential waveforms. Methods: We show that the spatial variation of the atrial potential morphology during a single beat may be described by changes in the singular values of the epicardial measurement matrix. The method is non-parametric and requires little preprocessing. A corresponding singular value map points at areas subject to fractionation and block. Further, we developed an experiment in which we simultaneously measured electrograms (EGMs) and a multi-lead ECG. Results: The captured data showed that the normalized singular values of the heartbeats during AF are higher than during SR, and that this difference is more pronounced for the (non-invasive) ECG data than for the EGM data, if the electrodes are positioned at favorable locations. Conclusion: Overall, the singular-value-based features are a useful indicator to detect and evaluate AF. Significance: The proposed method might be beneficial for identifying electropathological regions in the tissue without estimating the local activation time. Keywords: Atrial fibrillation, high-resolution electrograms, multi-lead body surface potentials, rank analysis, singular value decomposition § INTRODUCTION Atrial fibrillation (AF) is the most prevalent and persistent cardiac tachyarrhythmia. In the electrocardiogram (ECG), AF is characterized by fibrillatory waves and irregular RR intervals. Various mechanisms have been proposed to underlie AF, including multiple wavelets, rotors, re-entrant activity, endo-epicardial breakthrough waves, and ectopic foci <cit.>, but the precise pathological mechanisms causing AF in the individual AF patient are as yet unknown. High-resolution electrograms (EGMs) are used to understand the electropathological process of AF in more detail. From such measurements, the electrical propagation in the heart and the conduction velocity are assessed by the local activation time (LAT) and derived features, such as conduction block (CB) and continuous conduction delay and block (cCDCB). This analysis is influenced by the accuracy of the LAT estimation. Furthermore, these features assessed during sinus rhythm do not sufficiently differentiate between the various stages of AF development <cit.>. In some cases, the electropathology of atrial tissue can also be linked to the morphology of the observed signals. The R/S ratio of single potentials (SPs) has been shown to be useful for assessing the severity of conduction inhomogeneity <cit.>. This is complementary to the LAT analysis: a wavefront (to compare the various wavefronts visually, see Fig. <ref>) could propagate normally, even when the underlying APs are abnormal. In our previous work, we have demonstrated that the development stages of AF might manifest themselves as variations in atrial potential waveforms <cit.>.
In this paper, we go one step deeper and study a measurement data matrix formed from an array of unipolar EGMs. This matrix is preprocessed such that it becomes insensitive to differences in LAT but remains sensitive to spatial differences in AP morphology. We then look at the singular values of this matrix. A simple cardiac signal model demonstrates that if all cells beneath the electrodes generate the same action potential (AP), and the propagation follows a flat wavefront, then the data matrix has rank 1: only a single singular value is nonzero. Furthermore, with a slightly more elaborate signal model, we will demonstrate that, in case of more complex underlying physiology, such as differences in AP morphology or abnormal wavefront propagation, the data matrix has a higher rank, which can be detected by an increase in the second singular value. Thus, our hypothesis is that the normalized second singular value is a useful feature to detect and classify degrees of AF. This is tested on two types of clinical data. First, we study this feature on measured intraoperative unipolar EGMs obtained from patients with induced AF. Comparing SR and AF data, we found that the normalized singular values are significantly higher during AF than during SR, and this allows us to discriminate between SR and AF. Next, we also study singular values for data collected using a (non-invasive) multi-lead ECG. In particular, we have designed a sub-vest to monitor the body surface potentials (BSPs) at 15 leads simultaneously, combined with EGM mapping at specific epicardial locations during minimally invasive surgery. This vest was designed by the authors because a standard 12-lead ECG cannot be acquired simultaneously during open-heart surgery. The acquired data allow us to study the singular values of the low-resolution BSP data for exactly the same heartbeats as the high-resolution EGMs. The results show that, for specific placements of the electrodes, the BSP shows even more significant changes in singular values than the EGM, presumably because the latter only measures a small area of the atrial surface. For EGM data acquired on a sufficiently large rectangular grid, the normalized second singular value can also be computed from overlapping submatrices of 3× 3 electrodes. Looking at the submatrices helps us to locate the abnormal electropathological regions in the tissue. This allows us to construct a novel map that carries information complementary to the traditional activation map. The results show that this map highlights areas of double potentials, simultaneous presence of multiple AP morphologies, and block. These are often associated with AF. We propose this map as a useful tool, complementary to the use of activation maps. A related eigenvalue analysis (and corresponding map) was proposed by Riccio et al. <cit.>. As a preprocessing step, their method requires time-aligning the time-domain electrogram traces. The estimation of the local activation times is an additional step, which is also based on an underlying model in which a single activation time is estimated for each trace. This is problematic at electrodes that see a double potential. In contrast, our proposed method requires only little preprocessing. The rest of this paper is structured as follows. In Section <ref>, we introduce our method, including the notation and the action potential and electrogram models, and analyze the singular values for various scenarios, such as one or more signal morphologies and one or more wavefronts.
In Section <ref> we demonstrate the proposed approach on simulated data and two types of clinical data. In Section <ref>, we discuss the results. Finally, conclusions are drawn in Section <ref>. § METHODS AND ALGORITHMS §.§ Notation In this paper, scalars are denoted by normal lowercase letters, vectors by bold lowercase letters, and matrices by bold uppercase letters. For matrices, |·| is the element-wise absolute value, and (·)^H denotes the Hermitian (conjugate) transpose. §.§ Action potential and electrogram model An action potential is generated by a sequence of voltage changes across the membrane of a cell. Various mathematical models have been proposed to describe the AP in atrial myocytes and pacemaker cells. In particular, the total ionic current in human atrial myocytes can be computed from the Courtemanche model <cit.>, while for pacemaker cells, which have funny currents, the ionic current is governed by the Fabbri et al. model <cit.>. [The Courtemanche model was used to produce the simulated data for our analysis. By varying parameters, this model can generate a variety of AP morphologies. More elaborate computer models of electrograms have been developed in <cit.>, and these could be used to improve our analysis.] Next, a reaction-diffusion equation models the AP propagation in a 2D tissue, described by the interaction of three currents: the transmembrane current, the stimulus current, and the ionic current. In models with uniform parameters, the resulting APs are the same for all cells, and a simple data model to describe this is d_c(t) = a_c s(t-τ_c), c = 1, ⋯, N_c, where d_c(t) is the AP (voltage) of the cth cell, a_c is a positive real amplitude, s(t) is the reference AP, and τ_c is the time delay between a reference cell and the cth cell. N_c denotes the total number of cells in the 2D tissue model. Here, “cell” does not refer to a physical cell, but rather to a space-discretized grid point representing a collection of physical cells. The delays τ_c are the LATs. These are related to each other via wavefronts, which follow from the selected diffusion-reaction model, the underlying conductivity tensors, and an initial stimulus scenario <cit.>. Moving one level up, the electrogram as measured on the epicardium is modeled by a collection of M electrodes assumed to be placed at a constant height above the 2D tissue. Each electrode measures a weighted sum of the action potentials from all cells on the 2D tissue. The mth electrode signal (voltage) d_m(t) at location (x_m,y_m), at a constant height z_0 above the 2D tissue, can then be modeled by a space-discretized equation <cit.> as d_m(t)= ∑_c=1^N_c a_m,c d_c(t), m = 1, ⋯, M, with a_m,c = a/√((x_c-x_m)^2+(y_c-y_m)^2+z^2_0). Here, the weight a_m,c describes the instantaneous gain from cell c to electrode m, using an inverse relation to distance, and a is a constant scale parameter (electrode gain). The electrode in this model formulation is very small and is treated as a point electrode. §.§ Data stacking and processing Returning to the cell model (<ref>), we first apply a Fourier transform: let d_c(ω)=∫_-∞^+∞ d_c(t)e^-jω t dt, where the argument ω indicates the frequency-domain representation. We take N samples in the frequency domain, ω∈{ω_1,ω_2,⋯, ω_N} [in practice, we would sample in the time domain and use the FFT]. The N_c× N complex samples are stacked into a matrix D = [d_c(ω_n)]_c,n ∈ ℂ^N_c× N. For the electrode signals d_m(t), we can do similar processing, resulting in a matrix that we also denote by D, but that now has size M× N.
As motivated later, we drop the phase by taking the element-wise absolute value, i.e., we replace D by |D| and, with a slight abuse of notation, keep writing D for the resulting matrix of magnitude spectra. Finally, we compute the singular value decomposition (SVD) of D as D = UΣ V^H, where U and V are unitary matrices containing the left and right singular vectors, and Σ is a diagonal matrix containing the singular values {σ_1,⋯,σ_N}, sorted in descending order. The singular values are indicative of the numerical rank of the matrix, and they give important information on the complexity of the matrix. Next, we analyze these singular values for several cases of interest. §.§ Singular value analysis §.§.§ Cell level, single AP morphology We start at the cell level and assume, as in (<ref>), that under healthy conditions all cells generate APs with the same morphology. In the frequency domain, (<ref>) gives d_c(ω) = a_c e^-jωτ_c s(ω). After discarding the phase by taking the absolute value and stacking the magnitude spectra into the matrix D, we observe that D = a s^T, where a = [a_1,⋯,a_N_c]^T and s^T = [|s(ω_1)|,⋯,|s(ω_N)|]. This model shows that D has rank 1: only one singular value is nonzero. This important property is achieved by discarding the phase (which contains the effect of the LATs) and by the assumption that all cells have the same AP morphology. Unfortunately, some information on the morphology is lost, since we also drop the phase of s(ω). Since the LATs do not play a role after taking the absolute value, it does not matter whether the AP model describes an SR scenario (τ_c organized in a single wavefront) or an AF scenario (τ_c organized in multiple wavefronts, or highly unstructured). §.§.§ Cell level, two different AP morphologies As a second case, we consider a scenario where cells take one out of two morphologies, s_1(t) or s_2(t). The data model for D then becomes D = A S = a_1 s_1^T + a_2 s_2^T, where A = [a_1 a_2] and S = [s_1 s_2]^T. Entries of a_1 are zero if the corresponding cell is of the second type, and likewise, entries of a_2 are zero if a cell is of the first type. Thus, the columns of A are complementary and trivially orthogonal. Since by assumption s_1 ≠ s_2, the matrix D has rank 2, and only two singular values are nonzero. These two singular values are determined by two parameters: * The difference between s_1 and s_2, as expressed by their cross-correlation. If the difference is small, then the second singular value will be small. * The number of cells in group 1 versus the number of cells in group 2: this determines the ratio between the norms of a_1 and a_2. If the cells are predominantly in one group, then the second singular value will be small. The effect could be calculated in closed form, but is more easily appreciated from a simulation. Referring to Fig. <ref> A), two signal morphologies for the action potential are used: the unmarked blue one, and one of the numbered signals. The signals are derived from the Courtemanche model for a human atrial myocyte, and distinct morphologies are obtained by modifying the parameters of the calcium current. A collection of cells is simulated with equal a_c, random τ_c, and a specified fraction assigned to each of the two morphologies. In both cases, the signals are scaled to have equal l_2 norm. The resulting singular values are shown in Panels B and C (where σ_1 is normalized to 1 and we zoom in on the range between 0 and 0.4). In Fig. <ref> B), the fractions of cells in the two groups are equal, while in Fig. <ref> C), the fractions of cells in the two groups have a 1:80 ratio. In Panel B, it is seen that the second singular value increases if the second signal is more different.
In Panel C, it is seen that the differences are more subtle if there is a significant imbalance in the number of cells between the two groups. This result extends to more than two different AP morphologies: although the number of terms in (<ref>) increases, the columns {s_i} tend to be nearly parallel, and at some point the singular values will not increase by much. §.§.§ Electrogram with single AP morphology, flat wavefront Let us now consider the electrode signals. For a single AP morphology s(t), we obtain from (<ref>) d_m(ω) = ∑_c=1^N_c a_m,c e^-jωτ_c s(ω). Due to the summation over the cells in (<ref>), taking the absolute value |d_m(ω)| will not have the desired effect of removing the phase factors e^-jωτ_c. As a consequence, the resulting matrix D constructed from the d_m(ω) will generally have full rank. Let us make the simplifying assumption that the atrial wave travels in a single flat wavefront (i.e., a plane wave), and that the coefficients a_m,c are spatially invariant except perhaps for an electrode gain, as in (<ref>). In that case, the phase factors average to ∑_c=1^N_c a_m,c e^-jωτ_c =: a_m e^-jωτ_m, where τ_m is the delay for the cell under electrode m. Under this condition, we can again write D = a s^T, and only one singular value will be nonzero. A proof of this claim is given in the Appendix; the proof shows that it does not matter whether the electrodes are placed on a grid or more randomly. The condition of a single flat wavefront describes the situation of a heart in sinus rhythm (SR), with the activating source sufficiently far away from the electrode. §.§.§ Electrogram with single AP morphology, curved or multiple wavefronts If the wavefront under an electrode is sufficiently curved, then (<ref>) does not hold. As a consequence, more singular values will be nonzero. It is hard to analyze this more quantitatively, but if the delay differences are small, then this effect is not expected to be very strong. A curved wavefront occurs if the activating source is close to the electrode, or in case of a nearby focal activation. The effect is shown in Fig. <ref> A), where we compare two regions (i.e., two D-matrices), each with 9 electrodes. The tissue is activated in the top-left corner. For Location 1, where the activation wavefront is strongly curved, the second singular value is higher than for Location 2, where the activation wavefront is almost flat. It is also seen that more than two singular values are raised. A stronger effect is expected in case an electrode sees multiple wavefronts. This significantly destroys the symmetry that was a condition for arriving at a rank-1 model. This case relates to the occurrence of fractionation or double potentials, and therefore is associated with AF. Fig. <ref> B) shows the effect. The electrodes in Location 2 see a single wavefront, while the electrodes in Location 1 are above a block and see two wavefronts with clearly different LATs. In the latter case, the singular values are substantially raised. §.§.§ Electrogram with multiple AP morphologies If we have cells with two types of morphologies, then (as before) the rank of D is increased. If the wavefront is still flat, the presence of two cell types will result in rank 2, but for larger numbers the rank will probably be harder to judge. If the wavefronts are curved or we have multiple wavefronts, then the number of nonzero singular values will increase as well. The effect is shown in Fig. <ref> C), where two signal morphologies are used.
Cells activated from the top-left corner use one type of morphology, and cells activated from the top-right corner use a second type. Location 2 has a curved wavefront but only a single signal morphology, which is similar to Fig. <ref> A). Location 1 is a region where the two wavefronts collide and two morphologies are present; this further raises the second singular value. In summary, for the matrix D derived from the EGM, we expect a low rank (only one large singular value) in case there is only one AP morphology and the wavefront is flat, a raised second singular value in case there are multiple AP morphologies and/or the wavefront is curved, and a strongly raised second singular value if some electrodes see multiple wavefronts with clearly different LATs. This suggests that the ratio of the second singular value to the first one (σ_2/σ_1) could be a useful feature for detecting and classifying AF. A further advantage of this feature is that it is derived directly from the data, without relying on the prior estimation of LATs. The examples further show that the maximal σ_2/σ_1 ratio that we can expect is about 0.25. §.§ Definition of a σ_2 map In the presentation of the method, we defined a data matrix obtained from the electrode array. In principle, the matrix could encompass the entire array. The examples presented in Fig. <ref> used subsets of 8 or 9 electrodes, which showed that the normalized singular values vary depending on the location. In locations with a curved wavefront or with multiple wavefronts, the normalized σ_2 is higher than in locations with a flat wavefront and a single AP morphology. If we collect all electrodes in the data matrix, then the location-specific information is averaged, and the differences between the singular values become smaller and harder to detect. Therefore, to analyze EGM array data with electrodes arranged in a rectangular grid, we propose to use a “σ_2 map”, a location-dependent map. We use 3× 3 subsets of the electrode array, and for each subset construct the matrix D and compute the normalized σ_2 value. This gives one pixel in a σ_2 map, located at the center of the subset. The subset is shifted to cover the entire rectangular array, resulting in the σ_2 map. As an example, Fig. <ref> shows a simulation where we compare a σ_2 map for a tissue with a homogeneous conductivity (left part) to one for a tissue with a conduction block (right part). A rectangular electrode array with 32×32 electrodes is placed within the area denoted by the red dashed line. The tissue is activated from the top left and a single AP morphology is used. For the homogeneous tissue, all 3× 3 subsets result in normalized σ_2 values of less than 0.05. For the tissue with a block, the normalized σ_2 is larger than 0.05 around the block, and the block is easily recognized in the map. The corresponding time-domain signals (e.g., at locations 3 and 4) show double potentials, while in the homogeneous area (e.g., at location 1) we see a single potential. Thus, the σ_2 map can rapidly point out the inhomogeneities and blocks in the tissue. The advantage of this method is that LAT estimation and analysis are not required for detecting the blocks. §.§ Data To evaluate the method's performance and reliability in Sec. <ref>, we use simulated and clinical data. The clinical data are part of the Halt & Reverse study, approved by the medical ethical committee (MEC 2014-393), Erasmus University Medical Center, Rotterdam, the Netherlands. §.§.§ Simulation data generation We simulated a 2D tissue of 200×200 cells.
The distance between the cells was 0.1 mm. We considered two conductivity maps. In the first simulated tissue, we have taken into account a homogeneous tissue where the conductivity is constant throughout the tissue. For the second simulated tissue, two different conductivities have been used: specific cells have a constant conductivity of c_1=1, whereas the others have a constant conductivity of c_2=0.01. The AP signals of the cells are generated at a resolution of 1 kHz, using the Courtemanche model as implemented in <cit.>. We generated APs with two different morphologies, which are shown in Fig. <ref> as AP1 and AP2. For visualization purposes, activation maps are generated by detecting the activation time as the instant when a cell crosses a threshold of -40 mV during the depolarization phase of the AP. To activate the tissue, two wavefront directions have been used. The first wavefront originates from the top-left corner, and the second is from the top-right corner. The electrogram signals were observed by a rectangular electrode array of 10×10 electrodes with inter-electrode distance of 2 mm, at a constant height of z_0 = 1 mm from the tissue. To increase the resolution in the σ_2 map analysis, we increased the number of electrodes in the electrode array to 32× 32 electrodes with an inter-electrode distance of 0.5 mm. §.§.§ EGM data collection High-resolution epicardial unipolar EGM data was collected at the Erasmus Medical Center (EMC) during open-heart surgery on patients without a history of AF, as described in more detail in <cit.>. Fig. <ref> A) shows the standardized 9 recording locations; at each location, a recording consists of 5s of SR followed by 10s of induced AF. The electrograms were recorded using a rectangular electrode array with 8× 24 electrodes where the inter-electrode distance was 2 mm and the electrode diameter was 0.45 mm. The signals were amplified, filtered to a frequency range between 0.5 and 400 Hz, sampled at a rate of 1 kHz with resolution of 16 bits, and stored. One lead was used to record the ECG. During data analysis, we filtered the signals using a Butterworth band-pass filter in the frequency range between 0.33 Hz and 30 Hz <cit.>. The Pan-Tompkins R-peak detection method <cit.> was used on the ECG lead to segment the atrial activity of each EGM. To select the atrial activity, we used a fixed window with a length of 260 ms and select the interval between 320 ms and 60 ms before the R-peak <cit.>. This resulted in N=130 frequency-domain samples per beat. For the EGM data analysis, we included five patients without a history of AF. We pre-screened the available heartbeats using these exclusion criteria: 1) electrically silent heartbeats; 2) heartbeats where the fibrillatory waves are absent in the fixed window. As a result, the number of patients per location varies between 3 and 5, where between 2 and 23 heartbeats are included per patient. In total, 189 and 395 heartbeats are used for SR and AF episodes, respectively. More details about the number of heartbeats per location are reported in Table <ref>. §.§.§ Body Surface Potential data collection To be able to simultaneously measure high-resolution epicardial EGMs and multi-lead body surface potentials (BSPs), we designed a novel sub-vest to record the BSPs during minimally invasive surgery. We placed the 15 electrodes of the vest at the locations indicated by the circles in Fig. 
<ref>, where black circles denote the electrodes on the front and red circles denote the electrodes on the back of the patient. This design was motivated as follows. First, the area highlighted in the faded color, called the sterile field, is inaccessible during minimally invasive surgery. Second, the atrial activity is a focus of this investigation. Since the atrial activity is generated during the depolarization phase, we covered an area that captures this phase, i.e., optimized for a heart axis between -30 and +90 degrees. Additionally, to capture the atrial activity from the back of the patient, we positioned three electrodes close to the atrium on the back. For practical reasons, we had to limit the total number of leads. We used 15 prewired disposable electrodes from Nissha Medical Technologies (NMT), with the code CLARAVUE 4009839C, which are attached to the patient before starting the surgery. A 6-7 cm incision called an auxiliary port is made in the third or fourth intercostal space to perform the surgery. After positioning the electrode array on the right atrium at three locations, marked RA1, RA2, and RA3 in Fig. <ref>, high-resolution electrograms and multi-lead BSPs were measured simultaneously. We acquired measurements from one male patient without a history of AF. The patient underwent mitral valve prolapse (MVP) surgery. The left ventricular ejection fraction (LVEF) was normal, and the body mass index (BMI) was 26.5. We recorded for 30 s during SR and 30 s during an induced AF episode. A similar filtering and segmentation approach was applied to the BSP measurements. § RESULTS From Sec. <ref>, the hypothesis is that the normalized σ_2 value is a useful indicator to detect and classify cases of AF. This is tested on simulated data and on two types of clinical data. We used the entire data matrix to demonstrate the general characteristics of AF through the normalized singular values, while using submatrices to locate the electropathological areas allowed us to learn more about the spatial distribution of the AF substrate in the tissue. §.§ Simulation results §.§.§ Setup We compare three cases. In the first case, we generated a homogeneously conducting 2D tissue in which the AP is initiated from the top-left corner and propagates throughout the tissue. The activation map and electrograms at a few selected electrodes are shown in Fig. <ref>. As can be seen, the electrograms have similar morphologies, with a single deflection (a so-called single potential). For the second case, we generated one wavefront with two different AP morphologies. The activation map and selected electrograms are shown in Fig. <ref>; the areas with two different morphologies are indicated by rectangles. Fractionated potentials can be seen in the electrograms. For the third case, we generated two wavefronts, activated from the top-left and top-right corners of the simulated tissue. The conductivity was homogeneous, and the two wavefronts collided on the center axis. The cells at the left and the right generated different AP morphologies, AP1 and AP2 (Fig. <ref>). Fig. <ref> shows the activation map and example electrograms. Some electrograms are fractionated, and this is more pronounced at the locations of wave collisions. §.§.§ Results The three scenarios are first tested at the cell level. The results are shown in Fig. <ref> A). The figure shows that the normalized singular values of the D-matrix are increased only by the variety of AP morphologies, not by the number of wavefronts.
This is because, at the cell level, the D-matrix is not sensitive to the activation times. However, at the epicardial level (Fig. <ref> B), it is seen that the normalized singular values of the D-matrix are increased in all cases. For the case with one wavefront and one AP morphology, this is because the wavefront is curved. For the other cases, it is a combination of the curved wavefronts and the variation in the AP morphology. Also with a single AP morphology, the fact that there are two colliding wavefronts increases the singular values. These effects are apparently not additive: the first scenario gives a normalized σ_2 value of 0.15, while the other scenarios result in 0.24. §.§ Clinical results We applied the proposed method, based on the normalized singular values, to the clinical EGM data as well. Fig. <ref> shows, as faded blue and red lines, the normalized singular values for each heartbeat during the SR and AF episodes per location. The averages over the heartbeats are shown in bold blue and red. It is evident that the singular values are higher during AF heartbeats than during SR heartbeats. Arrows show the range of normalized σ_2 values, which is smaller for SR than for AF. The distribution of the normalized σ_2 over the heartbeats is shown more clearly in Fig. <ref>. It can be seen that there is a significant difference between the normalized σ_2 during SR and the normalized σ_2 during AF (P-value < 0.001 for all mapping locations). These results show that the normalized σ_2 is a helpful feature to discriminate between SR and AF. The threshold to separate the distributions appears to be location-specific. §.§ Combined EGM and Body Surface Potential data results For the combined EGM and BSP data, we investigated the singular values of the D-matrix during SR and induced AF for a single patient without a history of AF. Fig. <ref> shows a segment of one lead of the measured data, where the patient was in AF for 5 s and then the rhythm spontaneously returned to normal sinus rhythm. The EGM and BSP data were measured simultaneously. The normalized singular values for the SR and AF heartbeats for both the EGMs and the multi-lead BSPs are shown in Fig. <ref>, using the data at location RA1. The average singular values across the heartbeats are shown in bold blue and red for the SR and AF beats, respectively. It can be seen that the singular values of the heartbeats during AF are higher than during SR. This difference is somewhat more pronounced for the BSP data than for the EGM data: the average normalized σ_2 is 0.29 during AF and 0.19 during SR. To study this in more detail, we subdivided the BSP data to investigate the location dependency of the singular values. Referring to Fig. <ref> and Fig. <ref>, we took 5 electrodes in the anterior plane, denoted by the green box (location 1), and 5 electrodes in the posterior plane, denoted by the orange box (location 2). At location 1, the average singular values of the AF beats are substantially higher than those of the SR beats (σ_2 of 0.35 vs. 0.22). In contrast, at location 2, the average singular values of the SR and AF beats are similar (σ_2 of 0.15). This demonstrates that the presence of abnormal regions is not visible in all electrodes, and a selected subset might show a stronger response than the full data matrix. The EGM data at location RA1 can be studied in more detail using the activation map and the σ_2 map. A single heartbeat during SR is shown in Fig. <ref> A.
The wave propagation starts from the bottom-left corner, denoted in grey in the activation map. The wavefront propagation, as analyzed by a qualified physician, is indicated by the black arrow. The wavefronts appear to be flat (linear) in most of the map. Using overlapping 3×3 electrode subsets (e.g., the area indicated by the pink square), we construct the σ_2 map shown in Fig. <ref> B). Generally, the normalized σ_2 is less than 0.1 (blue color), except for a few regions. This demonstrates that, in the areas with a flat wavefront, the rank-1 approximation of Section <ref> holds. However, at location 1, the normalized σ_2 is about 0.25 (yellow color). The corresponding EGM in Fig. <ref> C) shows that location 1 has a double potential. This abnormality is not seen in the activation map. In other words, while the LATs remain normal, the σ_2 map visually indicates areas with altered morphology. The activation map for a single heartbeat during the induced AF episode is shown in Fig. <ref> A). It can be seen that the tissue under the electrode array starts to be activated at multiple locations (bottom left and top left). Furthermore, some areas exhibit conduction block (CB), as marked by the bold black lines. A CB is declared when the LAT difference between two adjacent electrodes is greater than or equal to 12 ms <cit.>. The σ_2 map for the same heartbeat is shown in Fig. <ref> B). In the area with CB around locations 1, 2, and 3, we observe that σ_2 ≥ 0.2 (orange/yellow color), while in the areas where the wave propagates normally, σ_2 is less than 0.1 (blue color). Looking at the corresponding EGM examples in Fig. <ref> C, at locations 1-3 we observe signals with a double potential followed by a single potential. This demonstrates that double-potential regions can be detected by the σ_2 map. Further, at location 4, the activation map shows normal wave propagation, while σ_2 is greater than 0.2 (orange color). The EGMs at location 4 show a single potential followed by a double potential. Thus, the σ_2 map can point at double potentials in some regions even when these are not visible in the activation map. Conversely, the activation map shows some CB (black lines) above location 4, while the σ_2 map gives no trigger at this location. § DISCUSSION Summarizing the results, we have shown that the normalized σ_2 of the D-matrix formed from subsets of electrode data is sensitive to curved wavefronts, conduction blocks, and variations in AP morphology. These changes can be detected in EGMs and also in multi-lead ECGs, if the electrodes are positioned at favorable locations. Further, we have seen that the σ_2 map is a useful tool for detecting such changes, complementary to the use of activation maps. An increased normalized σ_2 is often related to the occurrence of double potentials or the simultaneous presence of multiple AP morphologies in the considered subset of electrodes. These are often associated with AF. In the clinical data, we showed that the heartbeats annotated as AF always scored higher than heartbeats in SR. If we assume that AF initiation and progression can be modeled by a variation in the morphology of the APs, then the normalized σ_2 can be a useful feature to detect such changes. The array data can be processed for each individual heartbeat, and by tracking the normalized σ_2, we can efficiently monitor the evolution of a patient over time.
In preliminary research, we have shown that variations of this feature are able to discriminate between paroxysmal (short-lasting) and persistent (long-lasting) AF with an accuracy of 78.42% <cit.>. Related work has been done by Riccio et al. <cit.>, who developed “eigenvalue dominance ratio” maps, which are based on the singular values of the data matrix (or, equivalently, the eigenvalues of the associated sample covariance matrix) constructed from unipolar (catheter) electrograms. This data matrix contains the time-domain traces of each electrode. The method requires time-aligning these traces by estimating the local activation time in each trace, which is done via an iterative process that maximizes the cross-correlations of the traces. After proper time alignment, the method detects the similarity of the AP morphologies, with the goal of detecting fibrotic areas. The main distinctions with our work are that (i) it requires time alignment, which is not always easy to achieve and relies on an underlying parametric model that does not account for fractionation; and (ii) it works in the time domain rather than on the amplitude spectra, and hence includes more phase information. This makes the methods not directly comparable. In our proposed method, we take the element-wise absolute values of the Fourier spectra. This removes time-delay effects and allows us to study changes in the dispersion of the morphology without estimating the LATs. At the same time, the LAT (or the phase of the D-matrix) can be regarded as independent, complementary information. It thus makes sense to look at both the activation map and the σ_2 map together. §.§ Limitations and future work The proposed method computes the Fourier spectra of the measured signals and takes the absolute value. This step suppresses half the information present in these signals. In particular, the suppressed phase carries all the information on the local activation times. Thus, the proposed σ_2 map carries information independent from the traditional activation maps. Future research should address the integration of these two feature maps, and relate them to the hidden electropathological parameters of the tissue, such as the conductivity. Our study has shown that there are clear differences in normalized σ_2 between SR and AF-type array measurements. However, the results in Fig. <ref> show that it is hard to propose a fixed threshold to distinguish between these two cases. Such a threshold would vary between 0.2 and 0.25 depending on the location and other factors, such as the height of the electrodes above the tissue (z_0). Similarly, the BSP data (Fig. <ref>) have shown that at some locations no difference is found. Thus, array placement is an issue that needs further study. We have shown σ_2 maps based on 3× 3 electrodes and a single beat. These highlight locations that deserve further attention. Alternatively, we have also shown plots (Fig. <ref> and Fig. <ref>) where the normalized σ_2 of the entire array is computed (in a sense, averaging over space), and we could average that over multiple beats. That allows us to compress a larger data set into a single feature. An open question is whether this averaging will dilute the differences between SR and AF such that this feature is no longer sufficiently discriminative. For σ_2 maps such as those in Figs. <ref> and <ref>, we have shown that areas of increased σ_2 correspond to EGMs with double potentials or related irregularities. We did not demonstrate the reverse, namely that all irregular EGMs are highlighted in the σ_2 map.
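To make the procedure described in the Methods section concrete, a compact reference implementation of the per-beat feature and of the sliding 3×3 σ_2 map is sketched below. The electrode-grid dimensions and the 260 ms window at 1 kHz mirror the data description; band-pass filtering and R-peak-based beat segmentation are assumed to have been done beforehand, the row-major electrode ordering is an assumption of the sketch, and the random test signal is purely illustrative.

```python
import numpy as np

def normalized_sigma2(signals):
    """signals: (n_electrodes, n_samples) array holding one windowed beat.
    Returns sigma_2/sigma_1 of the matrix of magnitude spectra."""
    D = np.abs(np.fft.rfft(signals, axis=1))    # element-wise |FFT| removes the LAT phase
    s = np.linalg.svd(D, compute_uv=False)      # singular values, sorted in descending order
    return s[1] / s[0]

def sigma2_map(beat, rows, cols):
    """beat: (rows*cols, n_samples) array for a rectangular grid (row-major ordering assumed).
    Slides an overlapping 3x3 electrode window; returns a (rows-2, cols-2) map."""
    grid = beat.reshape(rows, cols, -1)
    out = np.empty((rows - 2, cols - 2))
    for i in range(rows - 2):
        for j in range(cols - 2):
            out[i, j] = normalized_sigma2(grid[i:i+3, j:j+3].reshape(9, -1))
    return out

# illustrative usage: 8 x 24 electrode grid, 260 ms atrial window at 1 kHz
rng = np.random.default_rng(0)
rows, cols, n_samples = 8, 24, 260
fake_beat = rng.standard_normal((rows * cols, n_samples))
print("whole-array sigma2/sigma1:", round(normalized_sigma2(fake_beat), 3))
print("sigma2 map shape:", sigma2_map(fake_beat, rows, cols).shape)
```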
§ CONCLUSION In this paper, we developed a method for analyzing EGM and multi-lead ECG data. The method is non-parametric and requires little preprocessing. We have shown that the singular values of the processed data matrix give information on the inhomogeneity of the AP morphologies, and that the related σ_2 map points at areas subject to fractionation and block. The method gives a clear distinction between heartbeats in SR and AF. Further, experiments using simultaneous EGM and multi-lead ECG measurements showed that the singular values of the heartbeats during AF are higher than during SR, and that this difference is more pronounced for the ECG data than for the EGM data, if the electrodes are positioned at favorable locations. Overall, our results show that the proposed singular-value-based features can be a useful indicator to evaluate AF. § ACKNOWLEDGEMENTS This research was funded in part by the Medical Delta Cardiac Arrhythmia Lab (CAL), The Netherlands. The authors would like to thank Dr. Frans B.S. Oei, cardio-thoracic surgeon at the Erasmus Medical Center (EMC), for his valuable suggestions on the cardio-thoracic measurements. §.§ Proof of the claim in Section <ref> We prove this for a continuous cell distribution in 1D space. For a traveling plane wave, the cell voltage as a function of position x is c(x,t) = s(t-x/v), where v is the propagation velocity. (Equivalently, the propagation delays τ are a linear function of position x.) For simplicity, we assume the cells have equal gains, normalized to 1. In the frequency domain this then becomes c̃(x,ω) = S(ω) e^-j ω x/v. The spatial response of an electrode centered at location 0 is some function f(x); the example in (<ref>) would in this context be f(x) = a/|x|. Assuming the response is linear and space-invariant, the electrode voltage measured at a location y is the convolution integral m(y,t) = ∫ c(x,t) f(y-x) dx = ∫ c(y-x,t) f(x) dx. In the frequency domain, this becomes m̃(y,ω) = ∫ S(ω) e^-j ω (y-x)/v f(x) dx = S(ω) e^-jω y/v∫ e^jω x/v f(x) dx =: S(ω) e^-jω y/v I(ω). Taking the absolute value gives |m̃(y,ω)| = |S(ω) I(ω)|, which is not a function of the position y anymore. Sampling over y and ω, the corresponding measurement matrix will be rank 1. Generally, D will have rank 1 if we are able to factorize |m̃(y,ω)| as |m̃(y,ω)|=A(y)B(ω). This also shows that we can permit the electrodes to have different gains (as long as they are frequency-independent): this results in a different factor A(y) but does not increase the rank. Generalizing the derivation, we observe that if the cells have unequal gains a(x), i.e., c(x,t) = a(x) s(t-x/v), we will have rank 1 only if we can factorize a(y-x) into separate factors depending on y and x. It follows that a(x) must be of the form a(x) = e^α x. For small α, this can be linearized to a(x) = 1 + α x. Thus, we can permit a small gain gradient over the cells. If the wavefront is curved, the delays are a nonlinear function of x, e.g., τ = (x + a/x)/v. It is easily seen that in this case |m̃(y,ω)| does not factorize, so that D will have rank higher than 1.
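The claim can also be checked numerically. The sketch below simulates a 1D fibre of cells whose delayed action potentials are summed into nine electrode traces through the 1/distance spatial response; for a plane wavefront (delays linear in position) the magnitude-spectrum matrix is numerically close to rank 1, whereas a focal (curved) wavefront clearly raises the second singular value. The Gaussian-shaped action potential, the geometry, and the velocities are arbitrary illustrative choices, not the Courtemanche model or the clinical setup.

```python
import numpy as np

fs, n = 1000, 2048
t = np.arange(n) / fs
ap = lambda tt: np.exp(-((tt - 0.5) / 0.01)**2)       # toy Gaussian action potential

x_cells = np.linspace(-4.0, 4.0, 800)                 # 1D cell positions (arbitrary units)
y_elec  = np.linspace(-0.5, 0.5, 9)                   # electrode positions along the fibre
z0, v   = 0.1, 10.0                                   # electrode height and propagation speed

def sigma_ratio(delays):
    """Build m(y,t) = sum_x f(y-x) s(t - tau_x), take |FFT|, return sigma_2/sigma_1."""
    cells  = np.stack([ap(t - d) for d in delays])                              # (n_cells, n)
    f      = 1.0 / np.sqrt((y_elec[:, None] - x_cells[None, :])**2 + z0**2)     # spatial response
    traces = f @ cells                                                          # (n_elec, n)
    D      = np.abs(np.fft.rfft(traces, axis=1))
    s      = np.linalg.svd(D, compute_uv=False)
    return s[1] / s[0]

flat   = (x_cells + 4.0) / v                    # plane wave: delays linear in position
curved = np.sqrt(x_cells**2 + 0.3**2) / v       # nearby focal source: delays nonlinear in position

print("flat wavefront   sigma2/sigma1 =", round(sigma_ratio(flat), 3))    # close to zero (rank ~ 1)
print("curved wavefront sigma2/sigma1 =", round(sigma_ratio(curved), 3))  # clearly raised
```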
guillem2016presence María S Guillem et al., "Presence and stability of rotors in atrial fibrillation: evidence and therapeutic implications", Cardiovascular Research, 109, 4, 480–492, 2016, Oxford University Press van2021conduction Willemijn FB van der Does et al., "Conduction disorders during sinus rhythm in relation to atrial fibrillation persistence", Journal of Clinical Medicine, 10, 13, 2846, 2021, Multidisciplinary Digital Publishing Institute lanters2017spatial Eva AH Lanters et al., "Spatial distribution of conduction disorders during sinus rhythm", International Journal of Cardiology, 249, 220–225, 2017, ye2021signal Ziliang Ye, Mathijs S van Schie, and Natasja MS de Groot, "Signal fingerprinting as a novel diagnostic tool to identify conduction inhomogeneity", Frontiers in Physiology, 12, 652128, 2021 van2020classification Mathijs S van Schie et al., "Classification of sinus rhythm single potential morphology in patients with mitral valve disease", EP Europace, 22, 10, 1509–1519, 2020 moghaddasi2022novel Hanie Moghaddasi et al., "Novel Rank-based Features of Atrial Potentials for the Classification Between Paroxysmal and Persistent Atrial Fibrillation", Computing in Cardiology (CinC), vol. 498, pp. 1–4, 2022, IEEE riccio2022atrial Jennifer Riccio et al., "Atrial fibrosis identification with unipolar electrogram eigenvalue distribution analysis in multi-electrode arrays", Medical & Biological Engineering & Computing, 60, 11, 3091–3112, 2022, Springer courtemanche1998ionic Marc Courtemanche, Rafael J Ramirez, and Stanley Nattel, "Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model", American Journal of Physiology-Heart and Circulatory Physiology, 275, 1, H301–H321, 1998 fabbri2017computational Alan Fabbri et al., "Computational analysis of the human sinus node action potential: model development and effects of mutations", The Journal of Physiology, 595, 7, 2365–2396, 2017, Wiley Online Library virag2002study Nathalie Virag et al., "Study of atrial arrhythmias in a computer model based on magnetic resonance images of human atria", Chaos: An Interdisciplinary Journal of Nonlinear Science, 12, 3, 754–763, 2002 jacquemet2006analysis Vincent Jacquemet et al., "Analysis of electrocardiograms during atrial fibrillation", IEEE Engineering in Medicine and Biology Magazine, 25, 6, 79–88, 2006, IEEE abdi2019compact Bahareh Abdi et al., "A compact matrix model for atrial electrograms for tissue conductivity estimation", Computers in Biology and Medicine, 107, 284–291, 2019 jordi J. W. de Vries et al., "Estimation of Cardiac Fibre Direction Based on Activation Maps", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 code J. W. de Vries, "Courtemanche et al. 
model MATLAB code", https://sps.ewi.tudelft.nl/Repository/repitem.php?id=44&ti=16, 2022 yaksh2015novel Ameeta Yaksh et al., "A novel intra-operative, high-resolution atrial mapping approach", Journal of Interventional Cardiac Electrophysiology, 44, 3, 221–225, 2015, moghaddasi2022classification Hanie Moghaddasi et al., "Classification of De novo post-operative and persistent atrial fibrillation using multi-channel ECG recordings", Computers in Biology and Medicine, 143, 105270, 2022 chang2010arrhythmia Kang-Ming Chang, "Arrhythmia ECG noise reduction by ensemble empirical mode decomposition", Sensors, 10, 6, 6063–6080, 2010 moghaddasi2021tensor Hanie Moghaddasi et al., "Tensor-based Detection of Paroxysmal and Persistent Atrial Fibrillation from Multi-channel ECG", 2020 28th European Signal Processing Conference (EUSIPCO), 1155–1159, 2021, IEEE pan1985real Jiapu Pan and Willis J Tompkins, "A real-time QRS detection algorithm", IEEE Transactions on Biomedical Engineering, 3, 230–236, 1985 IEEEtran
http://arxiv.org/abs/2307.02180v1
20230705101851
Runtime Repeated Recursion Unfolding: A Just-In-Time Online Program Optimization That Can Achieve Super-Linear Speedup
[ "Thom Fruehwirth" ]
cs.PL
[ "cs.PL", "cs.CC", "cs.PF", "cs.SC" ]
Runtime Repeated Recursion Unfolding: A Just-In-Time Online Program Optimization That Can Achieve Super-Linear Speedup Thom Frühwirth University of Ulm, Germany thom.fruehwirth@uni-ulm.de August 1, 2023 ====================================================================================================================== We introduce a just-in-time runtime program transformation based on repeated recursion unfolding. Our online polyvariant program specialization generates several versions of a recursion differentiated by the minimal number of recursive steps covered. The base case of the recursion is ignored in our technique. Our method is presented here on the basis of linear direct recursive rules. When a recursive call is encountered at runtime, first an unfolder creates specializations of the associated recursive rule on-the-fly and then an interpreter applies these rules to the call. Our approach reduces the number of recursive rule applications to its logarithm at the expense of introducing a logarithmic number of unfolded rules. Each rule is applied at most once. We prove correctness of our technique and determine its worst-case time complexity. For recursions that solve tractable problems and have enough unfoldings that can be sufficiently simplified, we prove a super-linear speedup theorem, i.e. speedup by more than a constant factor. The simplification is problem-specific and has to be provided at compile-time. In the best case, the complexity of the given recursion is reduced to that of its first recursive step. We have implemented the necessary unfolder and meta-interpreter for runtime repeated recursion unfolding in Constraint Handling Rules (CHR) with just five rules. We illustrate the feasibility of our approach with complexity results and benchmarks for several classical algorithms. The runtime improvement quickly reaches several orders of magnitude. Keywords Just-In-Time Program Transformation, Runtime Program Optimization, Online Polyvariant Program Specialization, Repeated Recursion Unfolding, Super-Linear Speedup, Recursion, Meta-Interpreter, Speedup Theorem, Time Complexity, Tractable Problems. § INTRODUCTION Specializing a program means transforming it with respect to constraints which restrict its possible executions. Often, the constraints concern the input. Polyvariant program specialization is the generation of specialized versions of a program according to different constraints. In the context of rule-based programming, unfolding is a program transformation that basically replaces a call in the body (right-hand side) of a rule with the body of a rule whose head (left-hand side) is matched by the call. Repeated recursion unfolding <cit.> first unfolds a given recursive rule with itself and simplifies it. This results in a specialized recursive rule that covers two recursive steps instead of one. It continues to unfold the last unfolded recursive rule with itself. Each unfolding doubles the number of recursive steps covered by the unfolded rule. In this article, we extend the method to an online program optimization and give an implementation of the necessary unfolder and interpreter. Provided a super-linear speedup is possible, performing the unfolding at runtime is the only way it can be realized for arbitrary calls.
The given call determines how far the unfolding proceeds and how many rules are generated. [Summation] Consider the following simple recursive program written in abstract syntax of the programming language Constraint Handling Rules (CHR). It recursively adds all numbers from 1 to n. Rule b covers the base case and rule r covers the recursive case. b: sum(N,S) N=1 | S=1 r: sum(N,S) N>1 | sum(N-1,S1), S:=N+S1 Head sum(N,S), guard (e.g. N=1) and body of a rule are separated by the symbols and |, respectively. Upper case letters stand for variables. When a call matches the head of a rule and the guard condition holds, the body of the rule is executed. Unfolding the recursive rule with a copy of itself and simplifying the resulting rule gives r_1: sum(N,S) N>2 | sum(N-2,S1'), S := 2*N-1+S1'. Note that this rule r_1 cannot replace the original recursive rule because it only applies in case N>2. It behaves like applying the original rule r twice. So we only need about half as many recursive steps as with the original rule alone. Because the arithmetic computation is simplified, we can also expect to halve the runtime. We can now unfold rule r_1 with itself: r_2: sum(N,S) N>4 | sum(N-4,S1), S := 4*N-6+S1 This rule results in fourfold speedup. We can continue this process, doubling the speed each time. The most unfolded rule should cover as many recursive steps of the call as possible but not more. For example, for N=4 we will unfold till rule r_1 with N>2, for N=5 we will unfold till rule r_2 with N>4, for N=50 we will unfold till rule r_5 with N>32. As we have just seen, our method requires unfolding on-the-fly because the number of unfoldings depends on the current call. We do not want to modify the given program at runtime. Therefore we also introduce a simple interpreter for the unfolded rules. This meta-interpreter[A meta-interpreter interprets a program written in its own implementation language.] tries and applies each unfolded rule at most once starting with the given call and the most unfolded rule. With sufficient simplification of the unfolded rules (as in the example), a super-linear speedup in runtime can be achieved. The time complexity is reduced. Overview and Contributions of the Paper In this paper, we assume a recursive rule with linear direct recursion and concentrate on tractable problems, i.e. those with polynomial worst-case time complexity. We will use summation as our running example. The next section recalls syntax and semantics of the CHR programming language. Section <ref> defines our program transformation method of runtime repeated recursion unfolding with simplification and proves it correct. We also show that there is a straightforward optimal rule application strategy and prove it sound and complete. Section <ref> introduces the implementation of the unfolder and meta-interpreter to perform repeated recursion unfolding at runtime. It consists of just five rules. Section <ref> discusses the worst-case time complexity of our unfolder and meta-interpreter in relation to the given recursion. Our implementation has little overhead, runtime mainly depends on the given recursive rule and its unfolding scheme. Section <ref> proves that the optimal rule application strategy for unfolded recursions results in super-linear speedup for tractable problems in case of sufficient simplification. Section <ref> contains the experimental evaluation of our technique on three examples, summation, list reversal and sorting. 
We analyse their time complexity and compare it with the result of benchmarks. Our examples confirm that sufficient simplification and thus super-linear speedup are indeed possible. The benchmarks quickly show an improvement of runtime by several orders of magnitude. Section <ref> discusses related work and Section <ref> discusses limitations and possible improvements of our approach. Finally, we end with conclusions and future work. § PRELIMINARIES We recall the abstract syntax and the equivalence-based abstract operational semantics of CHR (Constraint Handling Rules) <cit.> in this section. §.§ Abstract Syntax of CHR The CHR language is based on the abstract concept of constraints. Constraints are relations, distinguished predicates of first-order predicate logic. There are two kinds of constraints: built-ins (built-in constraints) and user-defined (CHR) constraints which are defined by the rules in a CHR program. Built-ins can be used as tests in the guard as well as for auxiliary computations in the body of a rule. There are at least the built-in constraints and (denoting inconsistency), syntactical equality = over terms including lists and the usual relations over arithmetic expressions. When CHR is embedded into a host language, built-ins can be host language statements. A program is a finite set of rules. A (generalized simplification) rule is of the form r: H ⇔ C | B, where r is an optional name (a unique identifier) of a rule. The head H is a conjunction of user-defined constraints, the optional guard C is a conjunction of built-ins, and the body B is a goal. The local variables of a rule are those not occurring in the head of the rule. A goal is a conjunction of built-in and user-defined constraints. A call is either an atomic constraint in a rule body or a given constraint. A linear direct recursive rule has exactly one call that has the same constraint symbol as the single head constraint. (Possibly empty) conjunctions of constraints are denoted by upper-case letters in definitions, lemmas and theorems. Conjunctions are understood as multisets of their atomic conjuncts. We often use simple commas to denote logical conjunction to avoid clutter. §.§ Abstract Operational Semantics of CHR Computations in CHR are sequences of rule applications. The operational semantics of CHR is given by a state transition system. It relies on a structural equivalence between states that abstracts away from technical details in a transition <cit.>. In CHR, states are goals. State equivalence treats built-ins semantically and user-defined constraints syntactically. Basically, two states are equivalent if their built-ins are logically equivalent (imply each other) and their user-defined constraints form syntactically equivalent multisets in this context. For example, if X and Y are not local variables, X=<Y Y=<X c(X,Y) ≡ X=Y c(X,X) ≢X=Y c(X,X) c(X,X). Let be a (decidable) constraint theory for the built-ins. A copy (fresh variant, renaming) of an expression (state or rule) is obtained by uniformly replacing its variables by new variables. We then say that the variables have been renamed apart. <cit.> Let C_i be the built-ins, let B_i denote user-defined constraints, and let V be a set of variables. Variables of a state that do not occur in V are called local variables of the state. 
Two states S_1 = (C_1 B_1) and S_2 = (C_2 B_2) with local variables x̅ and y̅ that have been renamed apart are equivalent, written S_1 ≡_ V S_2, if and only if ∀ (C_1 →∃y̅ ((B_1 = B_2) C_2)) ∀ (C_2 →∃x̅ ((B_1 = B_2) C_1)) Note that this definitions implies ∀ (∃x̅ (B_1 C_1) ↔∃y̅ (B_2 C_2). It also makes sure that there is a one-to-one correspondence between user-defined constraints as enforced by B_1 = B_2. B_1 and B_2 are considered as multisets, i.e. their conjuncts are pairwise syntactically equivalent. Furthermore, state equivalence allows one to be agnostic about local variables. It allows their renaming. They can be removed if logical equivalence is maintained. Occurrences of local variables can be substituted by other terms if logical equivalence is maintained. These properties have been proven in <cit.>. An example illustrates these properties of state equivalence and the effect of the non-local variables V: X=Y c(X,Y) ≡_{X} c(X,X) X=Y c(X,Y) ≢_{X,Y} c(X,X). Using this state equivalence, the abstract CHR semantics is defined by a single transition (computation step) between states. It defines the application of a rule. If the source state can be made equivalent to a state that contains the head and the guard of a copy of a rule, then we can apply the rule by replacing the head by the body in the state. Any state that is equivalent to this target state is also in the transition relation. A CHR transition (computation step) S ↦_r T is defined as follows, where S is called source state and T is called target state: S ≡_ V (H C G) ≢ (r : H ⇔ C | B) (C B G) ≡_ V T S ↦_r T where the rule (r : H ⇔ C | B) is a copy (renaming, fresh variant) of a rule from a given program P such that its local variables do not occur in G. The goal G is called context of the rule application. It remains unchanged. It may be empty. A computation (derivation) of a query (given goal, call) S with variables V in a program 𝒫 is a connected sequence S_i ↦_r_i S_i+1 beginning with the query S as initial state S_0 and either ending in a final state (answer, result) S_n or otherwise not terminating (diverging). The relation ↦^* denotes the reflexive and transitive closure of ↦. For convenience, we may drop the reference to the rules from the transitions. We may also drop V from the equivalence if it is clear from the context. [Summation, Contd.] Recall the rules for summation with sum/2: b: sum(N,S) N=1 | S=1 r: sum(N,S) N>1 | sum(N-1,S1), S:=N+S1 Then a computation for the query sum(3,R) proceeds as follows. sum(3,R) ≡_{R} sum(N',S'), N'>1, N'=3, S'=R ↦_r N'>1, sum(N'-1,S1'), S':=N'+S1', N'=3, S'=R ≡_{R} sum(3-1,S1'), R:=3+S1' ↦_r sum(2-1,S1'), S1:=2+S1', R:=3+S1 ↦_b S1'=1, S1:=2+S1', R:=3+S1 ≡_{R}  R=6 § RUNTIME REPEATED RECURSION UNFOLDING We recall a definition of rule unfolding in CHR. Next we define simplification inside rule bodies. Then we have all the ingredients necessary to introduce runtime repeated recursion unfolding and show its correctness. We also prove some useful lemmas. We also show that there is a straightforward optimal rule application strategy and prove it sound and complete. We will need the standard notions of substitutions, matching and instances. A substitution is a mapping function from variables to terms θ: V→ T, written in postfix notation, such that domain of θ, the set dom(θ) = {X | Xθ≠ X}, is finite. When a substitution is applied to a goal, it is applied to all variables in the goal. 
If A=Bθ, where B is a goal, we say that A is an instance of B, A matches B, and that B is instantiated. §.§ Rule Unfolding For unfolding of rules in CHR, we follow the definition and proofs of <cit.>. In this paper we rewrite their definition of unfolding in terms of generalized simplification rules. This simplifies the definition and is sufficient for our purposes. To define unfolding, we need the following notation. For a goal A, let vars(A) denote the set of variables in A. Set difference C_1 = C_2 ∖ C_3 for conjunctions of built-ins is defined as C_1 = {c ∈ C_2 |𝒞𝒯C_3 → c}. In words, C_1 does not contain the built-in constraints from C_2 that are implied by C_3. (based on Def. 8 <cit.>) Let P be a CHR program and let r, v ∈ P be two rules whose variables have been renamed apart [ r: H ⇔ C | D B G; v: H' ⇔ C' | B', ] where D is the conjunction of the built-ins in the body of r. Then we define the unfolding of rule r with rule v 𝑢𝑛𝑓𝑜𝑙𝑑(r,v) = r' as follows. Let θ be a substitution such that dom(θ) ⊆ vars(H'). Let C”θ = C'θ∖ (C ∧ D). If 𝒞𝒯∃ (C ∧ D) with 𝒞𝒯∀ ((C ∧ D) → G=H'θ), vars(C”θ) ∩ vars(H'θ) ⊆ vars(H) and 𝒞𝒯∃ (C ∧ C”θ), then the unfolded rule r' is r': H ⇔ C C”θ | D B G=H' B', If a goal G in the body of rule r matches the head H' of a rule v, unfolding replaces G by the body of rule v together with G=H' to obtain a new rule r'. We also add to its guard C an instance of a part of the guard of rule v. This part C” contains the non-redundant built-ins of C' (they are not implied by the built-ins in the rule r). Note that for a correct unfolding according to the above definition, three conditions have to be met. The chosen substitution must make H' equivalent to the matching G in the context of the built-ins of the given rule. Under this substitution, the common variables of H' and C” must already occur in H, and the guard of the unfolded rule must be consistent. If these conditions are violated, unfolding cannot take place and no unfolded rule is produced. Correctness of unfolding means that we can safely add the unfolded rule to a program while preserving its semantics. The original program and the one with the unfolded rule added are operationally equivalent. Given a CHR program with rules r and v and their unfolding resulting in rule r' = 𝑢𝑛𝑓𝑜𝑙𝑑(r,v) and a computation with a transition that applies the unfolded rule G ↦_r' G'. Then there exists an equivalent computation where we replace the transition by transitions without the unfolded rule G ↦^* G'. Proof Correctness of unfolding is proven in Corollary 1 <cit.>. In other words, a correctly unfolded rule is always redundant (but of course its application is expected to improve efficiency). Given the rules r and v and their unfolding resulting in rule r' = 𝑢𝑛𝑓𝑜𝑙𝑑(r,v) and any goal G with a transition with the unfolded rule G ↦_r' G”, then there exist transitions with the original rules either of the form G ↦_r G' ↦_v G” G ↦_r G”≡. Proof The lemma corresponds to Proposition 6 in the appendix of <cit.>, where the proof can be found. [Summation, contd.] We unfold the recursive rule for summation with (a copy of) itself: r: sum(N,S) N>1 | S := N+S1, sum(N-1,S1) v: sum(N',S') N'>1 | S' := N'+S1', sum(N'-1,S1'). Then the unfolded rule is r_1 : sum(N,S) N>1, N-1>1 | S:=N+S1, sum(N-1,S1)=sum(N',S'), S':=N'+S1', sum(N'-1,S1'). Unfolding is possible since its three conditions are met. 
First, sum(N-1,S1) is an instance of sum(N',S'), more precisely (N>1, S := N+S1) → sum(N-1,S1)=sum(N',S')θ, where the substitution θ maps N' to N-1 and S' to S1. Second, vars(N-1>1) ∩ vars(sum(N-1,S1)) ⊆ vars(sum(N,S)) holds since {N}∩{N,S1}⊆{N,S}. Third, the new guard N>1, N-1>1 is satisfiable. Obviously we can simplify the built-ins of the guard and the body of this rule, and we will define this kind of simplification next. §.§ Rule Simplification Speedup crucially depends on the amount of simplification that is possible in the unfolded rules. We want to replace built-ins by semantically equivalent ones that can be executed more efficiently. We define a suitable notion of rule simplification and prove it correct. In this subsection, we basically follow <cit.>. Given a rule r of the form r: H ⇔ C | D B, where D are the built-ins and B are the user-defined constraints in the body of the rule. We define 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(r) = (H' ⇔ C' | D' ∖ C' B') (H C) ≡_ V (H' C') (C D B) ≡_ V (D' B'), where C' and D' are the built-ins and H' and B' are the user-defined constraints and where V = vars(H) ∪ vars(H'). In the given rule, we replace head and guard, and the body, respectively, by simpler yet state equivalent goals. The choice of V allows us to remove local variables if possible, i.e those that occur only in the guard or body of the rule. We temporarily add the guard C when we simplify the body to ensure correctness and improve the simplification. For correctness we have to show that the same transitions S ↦ T are possible with rule r and rule 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(r). (Theorem 1 of <cit.>) Let r = (H ⇔ C | D B) be a rule and let s = (H' ⇔ C' | D' ∖ C' B') be the simplified rule 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(r). For any state S and variables V, S ↦_r T iff S ↦_s T. Proof According to the definition of a CHR transition (Def. <ref>) and of rule simplification 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(r) (Def. <ref>), we know that S ↦_r T S ≡_ V (H C G) ≢ (C D B G) ≡_ V T S ↦_s T S ≡_ V (H' C' G') ≢ (C' D' ∖ C' B' G') ≡_ V T (H C) ≡_ V' (H' C') (C D B) ≡_ V' (D' B'). Note that (C' D' ∖ C') is just (C' D'). It suffices to show that S ↦_r T implies S ↦_s T, since the implication in the other direction is symmetric and can be shown in the same way. Hence we have to show that there exists a goal G' such that S ≡ (H C G) ≡_ V (H' C' G') (H C) ≡_ V' (H' C') and T ≡ (C D B G) ≡_ V (C' D' ∖ C' B' G') (C D B) ≡_ V' (D' B'). We choose G' = C G. The main part of the proof reasons on the first-order logic formulas resulting from applying the definition of state equivalence (Def. <ref>) to the above equivalences. The full proof can be found in appendix A of the full version of <cit.>. We conclude this subsection by simplification of the unfolded rule of our running example. [Summation, contd.] Recall the unfolded rule sum(N,S) N>1, N-1>1 | S:=N+S1, sum(N-1,S1)=sum(N',S'), S':=N'+S1', sum(N'-1,S1'). For the head and guard we have that sum(N,S), N>1, N-1>1 ≡_{S,N} sum(N,S), N>2. For the body we have that N>1, N-1>1, S := N+S1, sum(N-1,S1)=sum(N',S'), S' := N'+S1', sum(N'-1,S1') ≡_{S,N} N>2, S := 2*N-1+S1', sum(N-2,S1'). Thus the unfolded rule can be simplified into the rule sum(N,S) N>2 | S := 2*N-1+S1', sum(N-2,S1'). §.§ Runtime Repeated Recursion Unfolding We can now define our novel method of runtime repeated recursion unfolding based on rule unfolding and rule simplification. We prove it correct by showing the redundancy of unfolded recursive rules and their termination. 
On the way, we will also prove lemmas about the number of recursive steps covered and the number of rules generated. In our method, we start from a call (query) for a CHR constraint defined by a recursive rule. We unfold the recursive rule with itself and simplify it. Then we unfold the resulting rule. We repeat this process as long as the resulting rules are applicable to the query. In this paper, we assume linear direct recursion. Let r be a recursive rule and G be a goal. Let 𝑢𝑛𝑓𝑜𝑙𝑑(r) = 𝑢𝑛𝑓𝑜𝑙𝑑(r,r). The runtime repeated recursion unfolding of a recursive rule r with goal G and with rule simplification is a maximal sequence of rules r_0, r_1, … where r_0 = r r_i+1 = 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(𝑢𝑛𝑓𝑜𝑙𝑑(r_i)) G ↦_r_i+1 G', (i≥ 0) The definition describes the repetition of the following step to produce the desired sequence of more and more unfolded rules: We unfold and simplify the current unfolded rule r_i. If the unfolding is possible and if the resulting rule r_i+1 is applicable to the query G (as expressed by G ↦_r_i+1 G'), we add the new rule to the sequence and continue with it. [Summation, contd.] Consider a query sum(10,R). Recall the unfolded simplified rule r_1 = sum(N,S) N>2 | S := 2*N-1+S1, sum(N-2,S1). Since sum(10,R) ↦_r_1 10=N, N>2,…, we repeat the unfolding: 𝑢𝑛𝑓𝑜𝑙𝑑(r_1) = sum(N,S) N>2, N-2>2 | S := 2*N-1+S1, sum(N-2,S1)=sum(N',S'), S' := 2*N'-1+S1', sum(N'-2,S1'). The unfolded rule can be simplified into the rule 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(𝑢𝑛𝑓𝑜𝑙𝑑(r_1)) = r_2 = sum(N,S) N>4 | S := 4*N-6+S1', sum(N-4,S1'). The rule r_2 is applicable to the goal. Further recursion unfolding results in rules with guards N>8 and then N>16. To the latter rule, the goal sum(10,R) is not applicable anymore. Hence runtime repeated recursion unfolding stops. The rules for the goal sum(10,R) are therefore (more unfolded rules come first): r_3 = sum(N,S) N>8 | S := 8*N-28+S1, sum(N-8,S1) r_2 = sum(N,S) N>4 | S := 4*N-6+S1, sum(N-4,S1) r_1 = sum(N,S) N>2 | S := 2*N-1+S1, sum(N-2,S1) r = r_0 = sum(N,S) N>1 | S := N+S1, sum(N-1,S1) b = sum(N,S) N=1 | S=1. Note that to the goal sum(10,R) we can apply any of the recursive rules. The most efficient way is to start with the first, most unfolded rule. It covers more recursive steps of the original recursive rule than any other rule. We will formalize such optimal rule applications in the next section. We now prove some useful properties of runtime repeated recursion unfolding. Unfolded recursive rules are redundant, their addition to the program does not change its semantics. As is the case for any unfolded rule, their computations can also be performed with the original rule. Assume a runtime repeated recursion unfolding of a recursive rule r with goal G. It results in a sequence of rules r_0, …, r_i, … where r=r_0, i≥ 0. Then for any goal B with a transition B ↦_r_i+1 B” there exist transitions either of the form B ↦_r_i B' ↦_r_i B” B ↦_r_i B”≡. Proof This claim follows immediately from (Lemma <ref>). One computation step (transition) with an unfolded rule corresponds to two computation steps with the rule that was unfolded (if no inconsistency is involved). So each unfolded rule doubles the number of recursive steps of the original rule that it covers. Assume runtime repeated recursion unfolding of a recursive rule r with goal G. It results in a sequence of rules r_0, …, r_i, … where r=r_0, i≥ 0. If G ↦_r_i G' G' ≢, then there exists a sequence of 2^i transitions with rule r G ↦_r G_1 …↦_r G_2^i G' ≡ G_2^i. 
Proof By correctness of rule simplification (Theorem <ref>), rule r and its simplification 𝑠𝑖𝑚𝑝𝑙𝑖𝑓𝑦(r) admit equivalent transitions. We can therefore ignore the application of rule simplification (cf. Definition <ref>) in this proof. We will use induction over the rule index j (i ≥ j ≥ 0), going from the largest unfolded rule r_i to the original rule r_0. We actually prove a more general result: that with rule r_j we need 2^i-j transitions. We first consider the base case. Our claim holds trivially for j=i resulting in 2^0, i.e. one transition with rule r_i. For the induction argument, we assume for rule r_j+1 we need 2^i-(j+1) transitions. Then for rule r_j we claim to need twice as many, 2^i-j transitions. This can be shown by replacing each transition B ↦_r_j+1 B” by the two transitions B ↦_r_j B' ↦_r_j B” according to (Lemma <ref>). The lemma also admits another possible replacement B ↦_r_i B”≡. But all states in any computation starting with G and ending in G' ≡ G_2^i are different from because G' ≢ and no transition is possible from a state . So the replacement involving is not possible. Thus for j=0, i.e. rule r_0=r, we need 2^i transitions for one transition with rule r_i. Hence rule r_i covers 2^i recursive steps of the original recursive rule r with goal G if the computation does not end in a state . For the upcoming lemmas, we define when a goal G takes n recursive steps with the original recursion. Given a goal G with a recursive rule r. Let n be the maximum number of transitions starting from the query G that only involve applications of the given recursive rule r G ↦_r G_1 …↦_r G_n↦̸_r. If the computation is finite and terminates, then we call n the recursion depth of goal G with rule r. We can unfold rules as long as the number of recursive steps they cover does not exceed n. This gives us a limit on the number of rules that we can generate. Given a goal G with a recursive rule r that has recursion depth n and ends in a state G_n≢. Then repeated recursion unfolding will generate k rules such that 2^k ≤ n. Hence, k ≤⌊ log_2(n) ⌋. Proof By contradiction: Assume repeated recursion unfolding generates a rule r_k such that 2^k > n. According to Lemma <ref> rule r_k allows for a transition with G that is equivalent to 2^k transitions with the original recursive rule r. But the maximum number of transitions possible with r is just n. Note that less rules than ⌊ log_2(n) ⌋ may be generated because (further) unfolding is not possible if its three conditions are not met. Given a goal G with a recursive rule r that has recursion depth n and ends in a state G_n≢. Then the runtime repeated recursion unfolding of r with G terminates. Proof Direct consequence of Lemma <ref>. We give two simple examples for nontermination. [Nontermination] The goal p(0) does not terminate with the recursive rule: r: p(N) N≠1 | p(N-1). Now we perform runtime repeated recursion unfolding of this rule with goal p(0). Then the unfolded and simplified rule is r_1: p(N) N≠1,N≠2 | p(N-2). The guard consists of the simplified guards of the two recursive steps with the original rule. Since the resulting rule is applicable to the goal p(0), our unfolding can proceed. Each unfolding adds an inequality to the guard, but the guards will always admit N=0. Therefore, runtime repeated recursion unfolding does not terminate as well. The next example shows that the condition that the resulting state is not is necessary. We use a variation of the rule above. 
[Nontermination with ] The goal p(0) terminates in a state when applying the recursive rule r. r: p(N) N≠1 | N<0, p(N-1). This is because the body built-in N<0 is inconsistent with N=0 for the goal p(0). Now we perform runtime repeated recursion unfolding of this rule with the goal p(0). The unfolded and simplified rule is r_1: p(N) N≠1,N≠2 | N<0, p(N-2). Since the resulting rule is applicable to the goal p(0), unfolding can proceed forever. It does not terminate even though the computation with the original rule r terminated. Of course, for the goal p(0) any computation with any unfolded rule will lead to . Based on the lemmas proven, we can now directly show correctness of our method. Given a goal G with a recursive rule r that has recursion depth n and ends in a state G_n≢. Then the runtime repeated recursion unfolding of rule r with goal G terminates and generates redundant unfolded rules. Proof The claim is a direct consequence of termination proven in Lemma <ref> and the redundancy of unfolded recursive rules proven in Lemma <ref>. §.§ Optimal Rule Applications An unfolded rule covers twice as many recursion steps than the given rule. When we apply a more unfolded rule, we cover more recursive steps with a single rule application. Based on this observation we introduce a rule application strategy where we try to apply more unfolded rules first. Furthermore each unfolded rule is tried only once and is applied at most once. We prove our optimal rule application strategy sound and complete. Given a recursive rule r=r_0 with a goal G that has k rules r_0, r_1, …, r_k-1, r_k from runtime repeated recursion unfolding. Let the notation G ↦^opt_r G' be shorthand for G ↦_r G' or otherwise G ≡ G'. Then the optimal rule application strategy is as follows: G ↦^opt_r_k G_k ↦^opt_r_k-1 G_k-1… G_2 ↦^opt_r_1 G_1 ↦^opt_r_0 G_0. As a result of this strategy, to the query G we apply the most unfolded rule r_k exactly once[We know the application is possible since otherwise the unfolding would not have taken place.]. In the remaining computation, no matter if a rule r_i (i < k) was applied or not, we next try to apply rule r_i-1 until i=0. We first show soundness of this computation strategy. Computations with optimal rule applications correspond to computations with the original rule only. Given a recursive rule r=r_0 with a goal G that has k rules from runtime repeated recursion unfolding. Then for a computation for goal G with optimal rule applications there exists a computation for G only using the original recursive rule r that ends in an equivalent state. Proof In such a computation, by Lemma <ref>, we can replace a transition with rule r_i+1 (0 ≤ i<k) by transitions with only rule r_i. Furthermore the resulting states of these computations are equivalent. Thus we can repeat this process of replacement until all transitions only involve rule r and the computation will end in an equivalent state. We have seen that a transition with an unfolded rule can replace transitions with the original rule. The other direction is not necessarily true. The unfolded rule may not be applicable because the guard of the unfolded rule may come out stricter than necessary. For our optimal rule application strategy to be complete, we require that unfolding generates all rules with the following property: If a rule can perform two recursive transitions for a goal, then its unfolded rule is also applicable to the goal. 
Given a recursive rule r=r_0 with a goal G with recursion depth n that has k = ⌊ log_2(n) ⌋ rules from runtime repeated recursion unfolding, where for any rule r_i (0 ≤ i<k) and any goal B with transitions B ↦_r_i B' ↦_r_i B” there exists a transition B ↦_r_i+1 B”. Then for any computation for G with rule r with recursion depth n there exists a computation for G with optimal rule applications that ends in an equivalent state. Proof We start from a computation only using rule r. According to the condition in the claim, we can replace the first two transitions with rule r by one transition with rule r_1 without changing the resulting state. We repeat this for the remaining pairs of subsequent transitions. We get a computation with transitions using rule r_1 ending in at most one transition with rule r. With rule r_1 we start from the first transition again and repeat this process of replacing two transitions by one of rule r_2. We continue going from rule r_i to rule r_i+1 until i+1=k. But now we have a computation that applies each rule from r_k, r_k-1, …, r_1, r_0 at most once and in the given order of the rules. So this computation is one with optimal rule applications. [Summation, contd.] Recall that the rules for sum/2 are: r_3 = sum(N,S) N>8 | S := 8*N-28+S1, sum(N-8,S1) r_2 = sum(N,S) N>4 | S := 4*N-6+S1, sum(N-4,S1) r_1 = sum(N,S) N>2 | S := 2*N-1+S1, sum(N-2,S1) r = r_0 = sum(N,S) N>1 | S := N+S1, sum(N-1,S1) b = sum(N,S) N=1 | S=1. An computation with optimal rule applications for the goal sum(10,R) is: sum(10,R) ↦_r_3 10=N, N>8, R=S, S := 8*N-28+S1, sum(N-8,S1) ≡_{R} R := 52+S1, sum(2,S1) ↦_r_0 R := 52+S1, 2=N', N'>1, S1=S', S' := N'+S1', sum(N'-1,S1') ≡_{R} R := 54+S1', sum(1,S1') ↦_b R := 55 § IMPLEMENTATION OF RUNTIME REPEATED RECURSION UNFOLDING We introduce the implementation of our runtime program transformation. At compile-time, the rules for the given recursive constraint are replaced by a call to the unfolder that contains these rules and then to the meta-interpreter that interprets the unfolded rules. At runtime, the unfolder repeatedly unfolds a recursive rule as long as it is applicable to a given goal using a predefined unfolding scheme. Then the meta-interpreter applies the resulting unfolded rules according to the optimal rule application strategy. We use CHR embedded in Prolog. Such sequential CHR systems execute the constraints in a goal from left to right and apply rules top-down according to their textual order in the program. A user-defined (CHR) constraint in a goal can be understood as a procedure call that goes through the rules of the program. If it (and possibly other constraints from the current goal) match the head of a rule, a copy of the rule is instantiated according to the matching. If the guard check of the rule copy holds, then the rule is applicable. For application, the matched constraints are replaced by the body of the rule copy and execution continues with the calls in the body. This behavior has been formalized in the so-called refined semantics which is a proven concretization of the abstract operational semantics <cit.>. For our implementation, we use CHR embedded in Prolog. In the following code in concrete syntax, =/2, copy_term/2 and call/1 are standard built-ins of Prolog. The syntactic equality =/2 tries to unify its arguments, i.e. making them syntactically identical by instantiating their variables appropriately. The built-in copy_term/2 produces a copy (variant, renaming) of the given term with new fresh variables. 
The Prolog meta-call call/1 executes its argument as a goal. It works for Prolog built-ins and CHR constraints. Our implementation with CHR in SWI Prolog together with the examples and benchmarking code is available online at <https://sp2.informatik.uni-ulm.de/fruehwirth/rrru.pl>. §.§ Unfolder Implementation The unfolder is implemented as a recursive CHR constraint unf/3. It repeatedly unfolds and simplifies a recursive rule as long as it is applicable to a goal. In unf(G,Rs,URs), the first argument G is the goal and Rs is a list of rules. URs is the resulting list of unfolded rules. We assume that in the goal G the input arguments are given and the output arguments are variables. Initially, the list Rs consists of the recursive rule followed by one or more rules for the base cases of the recursion. Consider the code below. The comment in the first line gives the arguments of unf/3 where a + sign means input and a - sign means output. A variable that occurs only once in a CHR rule has a name that starts with an underscore character. In a recursive step of unf/3, the first rule element in the list Rs is unfolded and added in front of Rs. In the base case of the recursion, the final resulting list of unfolded and given rules is returned in URs. We explain the recursive rule for unf/3 in detail now. We check if the rule R in the list is applicable to the query (call, goal) G. The guard check is performed by getting (using =/2) and copying the relevant parts (head and guard) of rule R, unifying the copied head with the goal (all with copy_term/2) and then executing the instantiated guard copy with call/1. The copies will not be needed after that. If the guard check succeeds, we unfold the current rule R with itself and and simplify it using simp_unf/2 and add the resulting rule UR to the rule list in the recursive call of unf/3. Note that we unfold the given general rule, not the instance of the rule stemming from the query. The Prolog predicate simp_unf/2 implements the unfolding scheme. It takes the current rule and computes its simplified unfolding. For defining simp_unf/2, we use a rule template which is a suitable generalisation of the given recursive rule and its simplified unfoldings. The original rule and its unfolded rules are then instances of the template. Some variables in the template term represent parameters for the instance. The parameters will be bound at runtime. The parameters for the unfolded rule will be computed from the parameters of the current rule. When the guard check has failed, the base case of unf/3 returns the rules that have been accumulated in the rule list as the result list in the third argument (with the exception of the first rule to which the goal was not applicable). To simplify the implementation, the body of the rules in the lists syntactically always consists of three conjuncts of goals: the constraints before the recursive goal, the recursive goal and the constraints after the recursive goal. If there are no such constraints (or no recursive goal in the base case), we use the built-in true to denote an empty conjunct. The following example clarifies the above remarks on the implementation. [Summation, contd.] We show how we implement unfold and simplify with sim_unf/2 for the summation example. We abbreviate sum to its first letter s to avoid clutter in the code. The rule template for sum is where the variables V and W are parameters that stand for integers. 
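Reading the unfolded rules given earlier, every rule of the family is fixed by the parameter pair (V,W): its guard is A>V, its body computes C is V*A-W+D and recurses on A-V, and unfolding maps V to 2*V and, consistently with the parameter values 0, 1, 6, 28 of the rules r_0 to r_3 above, W to 2*W+V*V. The following Python sketch of this scheme and of the largest-rule-first application strategy is meant as an illustration only, not as a rendering of the paper's Prolog clauses; all names in it are ours.

def unfold_rules(n):
    # Analogue of unf/3 for summation: collect the (V, W) pairs of the specialised
    # rules r_0, r_1, ... as long as their guard N > V holds for the query value n.
    # Rule r_i behaves like 2**i steps of the original rule: S = V*N - W + S1 on N - V.
    rules, v, w = [], 1, 0                  # r_0 is the original rule (V=1, W=0): S = N + S1
    while n > v:
        rules.append((v, w))
        v, w = 2 * v, 2 * w + v * v         # simp_unf analogue: V -> 2V, W -> 2W + V*V (reconstructed)
    return rules[::-1]                      # most unfolded rule first

def interpret(n, rules):
    # Analogue of mip/2: try every rule at most once, most unfolded rule first.
    s = 0
    for v, w in rules:
        if n > v:                           # guard check
            s += v * n - w                  # arithmetic of the rule body
            n -= v                          # recursive call on N - V
    return s + 1                            # base case sum(1,1)

n = 100
print(interpret(n, unfold_rules(n)), n * (n + 1) // 2)   # both print 5050

Called with n = 100, the sketch generates the parameter pairs (1,0), (2,1), (4,6), (8,28), (16,120), (32,496), (64,2016) and returns 5050 after applying only four of the seven rules, mirroring how unf/3 builds the rule list and mip/2 consumes it.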
Its instance for the original recursive rule is The implementation of the unfolding scheme for summation is accomplished by the following Prolog clause for simp_unf/2. For a goal s(100,S) the unfolder is called with It will return the following rules in the list URs: §.§ Meta-Interpreter Implementation We implement the optimal rule application strategy with the help of a specialized meta-interpreter for CHR. Our meta-interpreter handles the recursive calls, any other goal will be handled by the underlying CHR implementation. To a recursive goal, the meta-interpreter tries to apply the unfolded rules produced by the unfolder and applies them at most once. The meta-interpreter is called with mip(G,Rs), where G is the given recursive goal and Rs is the list of rules from the unfolder unf/3. We now discuss the three rules of our meta-interpreter. * In the first rule, the base case is reached since the recursive goal has been reduced to true. * The second meta-interpreter rule tries to apply the rule R in the rule list to the current goal G. It copies the rule, unifies the copied head with the goal and then checks if the guard C holds with a meta-call. If so, the rule is applied. The conjunct before the recursive goal B is directly executed with a meta-call. Next, the recursive goal G1 is handled with a recursive call to the meta-interpreter using the remainder of the rule list. Finally the conjunct after the recursive goal D is directly executed with a meta-call. * Otherwise the first rule from the rule list was not applicable, and then the last meta-interpreter rule recursively continues with the remaining rules in the list. This ensures that each unfolded rule is tried and applied at most once in accordance with the optimal rule application strategy. §.§ Recursive Constraint Implementation In order to enable runtime repeated recursion unfolding, at compile-time, the rules for the given recursive constraint r/n are replaced by a call to the unfolder unf/3 that contains these rules and then to the meta-interpreter mip/2 that interprets the unfolded rules. We replace according to the rule template named rec_unfold where X1,...,Xn are different variables and OriginalRules is the list of the given original rules that defined the recursive constraint. [Summation, contd.] For the summation example, the rec_unfold rule instance is as follows: § TIME COMPLEXITY OF THE IMPLEMENTATION For the worst-case time complexity of our implementation of runtime repeated recursion unfolding, we have to consider the recursion in the original rule, and the recursions in the unfolder as well as meta-interpreter. We parametrize the time complexity by the number of recursive steps with the original rule. From the runtime of the recursive step we can derive the time complexity of the recursion. Our time complexity considerations are based on the following realizable assumptions for the Prolog built-ins: Matching, unification and copying take constant time for given terms and quasi-linear time in the size of the involved terms in general. A Prolog meta-call has the same time complexity as directly executing its goal argument. In the following code for the unfolder and meta-interpreter, the comments indicate the time complexity of each non-recursive goal in the bodies of the rules. A comment with symbol * in front indicates a non-recursive goal whose execution dominates the complexity of a recursive step. 
For a constraint goal with symbol c let the function |c(...)| denote the runtime of executing the constraint with the given arguments. §.§ Time Complexity of the Original Rule The time complexity of the original rule is straightforward to derive. The worst-case time complexity r(n) of taking n recursive steps with the given recursive rule r can be derived from the recurrence equation r(n) = b(n) + r(n-1), where b(n) is the runtime of the recursive step in the body of rule r. Proof The recurrence follows directly from the structure of the linear direct recursive rule r. §.§ Time Complexity of the Unfolder For the unfolder we can derive the time complexity of its rules as follows: The runtime of the rule for the base case is constant. The runtime of a recursive step mainly depends on the time for copying head and guard, for guard checking, and for unfolding and simplification of the current rule. that takes n recursive steps with the original recursive rule r. Given a terminating recursive constraint goal R with a recursive rule r that has recursion depth n. Then we can derive the worst-case time complexity unf(n) of the unfolder unf/3 for goal R with rule r from the recurrence equation unf(n) = c(n) + unf(n/2), where the function c computes the runtime of a recursive step of the unfolder as follows: c(m) = where R is the unfolded rule (H <=> Co | B) that covers ⌊ log_2(m) ⌋ recursive steps of the original recursive rule r. Proof As we can see from the recursive rule in the implementation of the unfolder, the non-constant contributions to the runtime of a recursive step with unf/3 consist of * |copy_term((H<=>Co),(G<=>C))|, the time for copying head and guard of the rule R and unifying the copied head with the call G * |call(C)|, the time for executing the instantiated guard copy of rule R * |simp_unf(R,UR)|, the time for unfolding and simplifying rule R Thus the function c is correctly defined. Furthermore, the recurrence halves n. We show that this is correct. By Lemma <ref> we know that k unfolded rules will be returned by the unfolder such that 2^k ≤ n. In each recursive step, the unfolder doubles the number of recursive steps covered by the currently unfolded rule and the number will not exceed n. Thus the complexity of generating these rules is the sum of c(2^i) with 0 ≤ i ≤ k. On the other hand, the recurrence halves n in each recursive step. This results in the sum of c(n / 2^j) with 0 ≤ j ≤ log_2(n). But then for each c(2^i) we have a corresponding c(n / 2^j) with j=k-i such that 2^i ≤ n / 2^j since 2^k ≤ n. Therefore the recurrence provides an upper bound on the time complexity of the rules. §.§ Time Complexity of the Meta-Interpreter In the following code for the meta-interpreter, again comments indicate the runtime of each goal. The second rule of the meta-interpreter applies a rule from the list to the current goal. It dominates the complexity. Its runtime is dominated by the time needed for copying the rule and for the meta-calls of the guard and of the two body conjuncts of the rule. The runtime of the other two rules is constant. The resulting recurrence for complexity and its proof are analogous to the one for the unfolder. Given a recursive constraint goal R with a recursive rule r that has recursion depth n. 
Then we can derive the worst-case time complexity mip(n) of the meta-interpreter mip/2 for goal R with rule r from the recurrence equation mip(n) = d(n) + mip(n/2), where the function d computes the runtime of the recursive step as follows: d(m) = where R is the unfolded rule that covers ⌊ log_2(m) ⌋ recursive steps of the original recursive rule r. Proof The non-constant contributions to the runtime of a recursive step in the meta-interpreter with a rule application consist of * |copy_term(R,(G<=>C|B,G1,D))|, the time for copying the rule R and unifying the copied head with the call G * |call(C)|, the time for executing the instantiated guard copy of rule R * |call(B)| and |call(D)|, the time for executing the instantiated body conjunct copies of rule R Thus the function d is correctly defined. Furthermore, the recurrence halves n. We show that this is correct. The unfolder returned k unfolded rules with 2^k ≤ n (cf. Lemma <ref>). These rules are ordered such that the more unfolded rules come first. In each recursive step, the meta-interpreter tries to apply the current unfolded rule once and then proceeds to the next one. Rule r_i covers 2^i recursive steps of the original rule r. Thus the complexity of applying these rules is the sum of d(2^i) with 0 ≤ i ≤ k. The remainder of this proof is analogous to the one for the unfolder: Since the recurrence halves n in each recursive step, it results in the sum of d(n / 2^j) with 0 ≤ j ≤ log_2(n). But then for each d(2^i) we have a corresponding d(n / 2^j) with j=k-i such that 2^i ≤ n / 2^j since 2^k ≤ n. Therefore the recurrence provides an upper bound on the time complexity of the rules. Note that d(n) has about the same time complexity as directly executing the rule (but possibly without optimizations), since the overhead of meta-calls is assumed to be constant and only the cost of copying the rule is added. §.§ Time Complexity of Runtime Repeated Recursion Unfolding We now can establish the worst-case time complexity of the recursive constraint under runtime repeated recursion unfolding. Recall that the original rules for the recursive constraint are replaced by the following rule that calls the unfolder and then the meta-interpreter. Given runtime repeated recursion unfolding for the rules of a recursive constraint R and the time complexities c(n) and d(n) for a recursive step of the unfolder and the meta-interpreter, respectively. Then the worst-case time complexity u(n) of the instance of rule rec_unfold for R can be derived from the recurrence equation u(n) = c(n) + d(n) + u(n/2). Proof Clearly u(n) = unf(n) + mip(n) according to rule rec_unfold. Recall the recurrence equations for the worst-case time complexity of the unfolder and meta-interpreter: unf(n) = c(n) + unf(n/2) , mip(n) = d(n) + mip(n/2) . Both recurrences follow the same recursion scheme, so the claimed recurrence for u(n) follows by induction. § SUPER-LINEAR SPEEDUP THEOREM We now compare the worst-case time complexity of the given original recursion and with that of runtime repeated recursion unfolding. As we will show, with effective and sufficient simplification of the unfolded rules, we can achieve a super-linear speedup for algorithms that have polynomial time complexity. An unfolded rule r_i covers 2^i recursive steps of the given rule. In the best case, the worst-case time complexity for generating (by unfolding) and interpreting the recursive step of the most unfolded rule is the same as of the first recursive step with the given rule. 
Given a recursive constraint goal R with a recursive rule r that has recursion depth n. Then in runtime repeated recursion unfolding, we have best-case simplification if O(c(n) + d(n)) = O(b(n)). As we will prove, we can already achieve a super-linear speedup if the complexity for the recursive step of the most unfolded rule is better than that of the complete recursion (all recursive steps) with the given rule. Given a recursive constraint goal R with a recursive rule r that has recursion depth n. Then in runtime repeated recursion unfolding, we have sufficient simplification if O(c(n) + d(n)) ⊂ O(n b(n)) = O(r(n)). We will consider three broad time complexity classes of tractable algorithms: polynomial, polylogarithmic and polylog-polynomial functions. A polylogarithmic function in n is a polynomial in the logarithm of n, i.e. its monomials are of the form a_k log(n)^k, k≥0. Note that any polylogarithmic function grows more slowly than n^j for any positive exponent j. In particular, the polylogarithmic complexity class is sub-linear, i.e. O(log(n)^k) ⊂ O(n). Polylog-polynomial functions are polynomials multiplied by a polylogarithmic function, i.e. the resulting monomials are of the form n^j log(n)^k. In the following super-linear speedup theorem we show how much both sufficient and best-case simplification improve the time complexity of the given recursion for these complexity classes. Given a query with n recursive steps with the original recursive rule. Assume runtime repeated recursion unfolding with best-case or sufficient simplification and with completeness of optimal rule applications (cf. Theorem <ref>). Then for polynomial time complexity classes we have a super-linear speedup according to Table <ref> for best-case simplification and Table <ref> for sufficient simplification[In the tables we save space by not repeating columns with identical entries.]. In Table <ref> the last column for u(n) gives the highest polylog-polynomial complexity that still achieves a super-linear speedup. The parameters i, j, k and n are natural numbers. Proof The results in Table <ref> and Table <ref> can be proven by solving the recurrences for r(n) and u(n). The runtime for the computation of the original recursion r(n) is clearly bounded by n b(n). These complexity bounds correspond to the ones given in Table <ref> and Table <ref>. For u(n) the complexity is bounded by log_2(n) (c(n)+d(n)). This bound holds for the polylogarithmic complexity classes. For u(n) with polynomial or polylog-polynomial complexity in the recursive step, the complexity bound in the table is tighter, u(n) = O(c(n)+d(n)). We have to prove these remaining two cases. Our proof for polylog-polynomial complexity also covers the polynomial complexity case by allowing j to be 0: we have that c(n)+d(n) = n^k log_2(n)^j, n ≥ 2, k ≥ 1, j ≥ 0. We prove the solution u(n) = 2 n^k log_2(n)^j. For showing upper bounds it suffices to show that the left-hand side is at least as large as the right-hand side of the recurrence relation. Then u(n) = 2 n^k log_2(n)^j = n^k log_2(n)^j + n^k log_2(n)^j = n^k log_2(n)^j + 2^k (n/2)^k log_2(n)^j ≥ n^k log_2(n)^j + 2 (n/2)^k log_2(n/2)^j = (c(n)+d(n)) + u(n/2). With best-case simplification, for linear, polynomial and polylog-polynomial time complexity classes, a super-linear speedup by the factor O(n) is possible; for constant and polylogarithmic complexity classes, a super-linear speedup of O(n/log_2(n)). We already have a super-linear speedup with sufficient simplification.
For linear and polynomial time complexity classes, a super-linear speedup by the factor O(n/log_2(n)^i) for a given i ≥ 0 is at least possible, for constant, polylogarithmic and polylog-polynomial complexity classes, a super-linear speedup of at least O(log_2(n)). § EXPERIMENTAL EVALUATION: EXAMPLES WITH BENCHMARKS Our examples will demonstrate that sufficient simplification and thus super-linear speedup are indeed possible. The time complexity is effectively reduced when applying runtime repeated recursion unfolding. The benchmarks quickly show an improvement of runtime by several orders of magnitude. Because the improvement is so dramatic, we can only benchmark small inputs with the original recursion and have to benchmark larger inputs with runtime recursion unfolding. In our experiments, we used the CHR system in SWI Prolog Version 6.2.1 running on an Apple Mac mini 2018 with Intel Core i5 8GB RAM and OS-X 10.14.6. We use default settings for SWI Prolog (including stack sizes), except for the command line option -O which compiles arithmetic expressions. During multiple runs of the same benchmarks we observed a jitter in timings of at most 5%. §.§ Summation Example, Contd. We have already unfolded and simplified the recursive rule for summation in Section <ref>, Example <ref>. We introduced the implementation in concrete syntax in Section <ref>, Example <ref>. We now derive estimates for the time complexities for our summation example and then compare them to benchmark results. We will predict and observe a super-linear speedup. §.§.§ Complexity Our example deals with arithmetic built-ins. SWI Prolog uses the GNU multiple precision arithmetic library (GMP), where integer arithmetic is unbounded. Comparison and addition have logarithmic worst-case time complexity in the numbers involved, while naive multiplication is quadratic in the logarithm. A variety of multiplication algorithms are used in GMP to optimize performance. If one multiplies with a power of 2, the complexity can be reduced to logarithmic. This is the case in our example. We have confirmed this with some benchmarks in SWI Prolog. Original Recursion The rule for the original recursion for summation in template form is The recursion depth n for summation corresponds to the input number A. The most costly arithmetic operation in the recursive step is adding A and D to compute C. The number D is the result of the recursive call, i.e. the sum so far, and is bounded by the square of A, i.e. n^2 (which one can prove by simple induction). So the time complexity of a recursive step b(n) is O(log(n^2))=O(2 log(n))=O(log(n)). Hence the worst-case time complexity for the original recursion r(n) is O(n log(n)) according to Lemma <ref> and Theorem <ref>. Unfolder Recall that in Lemma <ref> the complexity of a recursive step of the unfolder is defined by c(n) = O() and recall the predicate sim_unf/2 for summation s/2 Consider the definition of c(n). Copying head and guard of an unfolded summation rule and checking its guard with call/1 involves the parameter V. Because of the guard A>V, the value of V is bounded by A, i.e. n. So this contributes a runtime at worst logarithmic in n. For the complexity of sim_unf/2, consider the given rule template. The input is A and the parameters are V and W. All variables are positive integers. For the complexity we need bounds on their values. The result of the summation C and of the recursive call D are bounded by n^2. The product V*A is also bounded by n^2. 
Due to the equation C is V*A-W+D, the parameter W is hence bounded by 2 n^2. The body of the clause for sim_unf/2 contains Vl is 2*V. Since the first value for V in the original recursion is 1, V must be a power of 2. Overall, the clause body contains an addition and three multiplications that always involve a power of 2 (2 or V*V). So the time complexity of all arithmetic operations is logarithmic in the values involved. Since all values are positive and bounded by 2 n^2, we arrive at a worst-case time complexity of O(log(2 n^2))=O(log(n)) for rule unfolding and simplification with sim_unf/2. Hence the time complexity for a recursive step of the unfolder c(n) is O(log(n)). Meta-Interpreter Recall the complexity of a recursive step of the meta-interpreter according to Lemma <ref> d(n) = O() and recall that the template for unfolded summation rules is Copying an unfolded summation rule can be done in logarithmic time. As for executing the guard and the non-recursive goals of the body with call/1 each, we have a comparison, subtractions, an addition and a multiplication in the rule. The multiplication is with V, a power of 2. All values of the variables involved are bounded by 2 n^2. So the time complexity for a recursive step of the meta-interpreter d(n) is O(log(n)). Complexity of Runtime Repeated Recursion Unfolding The complexity for c(n) and d(n) is the same as for b(n), namely O(log(n)). We therefore have best-case simplification. The overall time complexity for runtime recursion unfolding for the given recursive constraint as defined by rule rec_unfold is O(log(n)^2) according to Theorem <ref>, Table <ref>. So with repeated recursion unfolding the complexity is reduced from O(n log(n)) to O(log(n)^2), clearly indicating a super-linear speedup. §.§.§ Benchmarks Table <ref> shows benchmarks results for the summation example. Times are given in milliseconds. Experiments that show a runtime of less than 10 milliseconds are the averages of 1000 runs. The benchmarks confirm the super-linear speedup. Original Recursion In each subsequent table entry, we double the input number. The runtime roughly doubles. This is in line with the expected log-linear time complexity O(n log(n)): since the numbers are small, addition is fast and the runtime is dominated by the linear time recursion. For larger numbers, the original recursion runs out of local stack. Unfolder and Meta-Interpreter For runtime repeated recursion unfolding of our summation example, we give the time needed for the unfolding, the time needed for the execution with the meta-interpreter, and the sum of these timings (column 'Sum Time'). Because our method has lower time complexity, it is already 5000 times faster than the original recursion for n=2^21. Hence we start from 2^25 and in each subsequent table entry, we square the input number instead of just doubling it. The runtimes of the unfolder and meta-interpreter are similar. For each squaring of the input number, the their runtimes more than double. The benchmarks results obtained are consistent with the expected complexity of O(log(n)^2), e.g. 0.0000002 log_2(n)^2 + 0.002 log_2(n) for the unfolder. Comparing Recursion Depths 2^i and 2^i+1 In the meta-interpreter, each of the unfolded rules will be tried by matching its head and checking its guard, but not all rules will be necessarily applied. This may lead to the seemingly counterintuitive behavior that a larger query runs faster than a smaller one. For this reason, we compare timings for values of n of the form 2^i and 2^i+1. 
Input numbers of the form 2^i+1 will need exactly one application of the most unfolded rule r_i to reach the base case, because the following recursive call has the input number computed by B is A-V which is 2^i+1-2^i, i.e. 1. For numbers of the form 2^i however, all unfolded rules are applied. In this case, the most unfolded rule is r_i-1 (not r_i), yielding a recursive call with input 2^i - 2^(i-1), i.e. 2^(i-1). To this call, the next less unfolded rule r_i-2 applies and so on. As a consequence, the runtime of the meta-interpreter is roughly halved when going from a query with number 2^i to 2^i+1. The timings for the unfolder stay about the same, because only one more rule is generated for 2^i+1 (e.g. n=2^1600+1 generates 1601 rules). §.§ List Reversal Example The classical program reverses a given list in a naive way. It takes the first element of the list, reverses its remainder and adds the element to the end of the reversed list. The CHR constraint r(A,B) holds if list B is the reversal of list A. r(E, D) ⇔ E=[C|A] | r(A, B), a(B, [C], D) r(E, D) ⇔ E=[] | D=[]. We use Prolog notation for lists. In the following, all variables stand for lists except for variable names starting with letter C that denote list elements. Note that [C|A] stands for a list with first element C and remaining list A. The built-in a(X,Y,Z) appends (concatenates) two lists X and Y into a third list Z. Its runtime is linear in the length (number of elements) of the first list. §.§.§ Runtime Repeated Recursion Unfolding Our aim is to find the appropriate rule template for the repeated unfolding of the recursive rule with itself. Unfolding We start with unfolding the original recursive rule with a copy of itself: r(E, D) ⇔ E=[C|A] | r(A, B), a(B, [C], D) r(E', D') ⇔ E'=[C'|A'] | r(A', B'), a(B', [C'], D'). The unfolding substitutes E' by A in the guard and produces r(E, D) ⇔ E=[C|A], A=[C'|A'] | r(A, B)=r(E', D'), r(A', B'), a(B', [C'], D'), a(B, [C], D). This unfolding is correct because its three conditions are satisfied (cf. Def. <ref>). First, r(A,B) is an instance of r(E',D'). The second condition requires vars(A=[C'|A']) ∩ vars(r(A,B)) ⊆ vars(r(E,D)), i.e. {A}⊆ vars(r(E,D)). This will hold if we consider the guard: since r(E, D) ⇔ E=[C|A] ≡ r([C|A], D) ⇔ E=[C|A], we can replace E by [C|A] and then {A}⊆ vars(r([C|A],D)). Third and finally, the guard E=[C|A], A=[C'|A'] is satisfiable. Simplification Now we proceed with rule simplification for unfolded rules (Definition <ref>). We simplify the head and guard by eliminating the local variable A. r(E, D), E=[C|A], A=[C'|A'] ≡_{E,D} r(E, D), E=[C,C'|A']. For the body we first simplify by eliminating the local variables A, E' and D'. E=[C|A], A=[C'|A'], r(A, B)=r(E', D'), r(A', B'), a(B', [C'], D'), a(B, [C], D) ≡_{E,D} E=[C,C'|A'], r(A', B'), a(B', [C'], B), a(B, [C], D) The insight for best-case simplification is that we can merge the two calls to constraint a/3 into one if we concatenate their second arguments [C'] and [C]. E=[C,C'|A'], r(A', B'), a(B', [C'], B), a(B, [C], D) ≡_{E,D} E=[C,C'|A'], r(A', B'), a(B', [C',C], D). Generalisation We can simplify two append constraints of the form a(F, C, D), a(D, A, B), where the list C is sufficiently known, into a(F,E,B), where E is the result of computing a(C,A,E) during simplification while unfolding. This kind of simplification gives rise to a rule template of the following form r(E, D) ⇔ E=[C_1,…,C_m|A'] | r(A', B'), a(B', [C_m,…,C_1], D). We call [C_1,…,C_m|A'] an open list, because it ends in the list variable A'. 
The open list has size m because can match any list with at least m elements. The m elements C_1,…,C_m are called element variables. Note that these element variables occur in reversed order in the list in the second argument of a/3 in the rule body. §.§.§ Implementation We use concrete syntax now and the Prolog built-in append/3 for a/3. Unfolding with Simplification The unfolding scheme for list reversal is implemented with the following Prolog clause for simp_unf/2. All variables stand for lists. During unfolding, in the given rule template, the variable E in the guard will be instantiated with an open list ending in the variable C. The list F in append/3 then consists of the element variables of E in reversed order. In the unfolded rule template, the number of elements in these two lists is doubled and their relationship of reversal is maintained. The doubling is achieved by copying the guard list E together with its end variable C and list F twice. In the first copy, the guard list El ends in Cc. In the second copy, list Ec ends in Cl from the recursive call in the unfolded rule template. The variable Cc is unified with Ec from the second copy, thus doubling the number of element variables in El. In this way, we have constructed a guard list El with twice as many element variables that ends in Cl. Finally, the lists resulting from copying F twice, Fc1 and Fc2, are concatenated in their reversed order by executing append/3 in the body of the clause during unfolding. The result is the new reversed list Fl in append/3 in the unfolded rule template. Recursive Constraint For list reversal, the rec_unfold rule is as follows: The list in the second argument of unf/3 contains the original recursive rule and the rule for the base case in appropriate template form. Unfolded Rules The rules that are returned by the unfolder unf/3 for a query with 16 to 31 list elements are We see here an increase in rule size. With each unfolding, the rule size almost doubles because the number of elements in the lists double. For a query with n list elements, we unfold ⌊ log(n) ⌋ times. So the list in the most unfolded rule has not more than n elements. Therefore the size of all unfolded rules taken together will be proportional to n. Note that this does not increase overall space complexity, since the corresponding input list has n elements. §.§.§ Complexity We now derive estimates for the time complexities. Original Recursion With the original rule we have n recursive steps for an input list of length n. The guard of the rule can be checked in constant time. In the body, append/3 goes through the list in its first argument. The time needed is linear in the length of this list, which is at most n. So we have b(n)=O(n). This results in the well-known quadratic complexity O(n^2) of naive list reversal. Unfolder We now consider a recursive step of the unfolder. During unfolding, we copy, unify and concatenate lists whose size doubles with each unfolding. Copying head and guard of an unfolded rule as well as checking its guard has a runtime at worst linear in the size of the open list. In simp_unf we copy and concatenate these lists. The sizes of the lists in a rule are bounded by the length n of the input list. The worst-case time complexity of a recursive step in the unfolder is therefore c(n) = O(n). Meta-Interpreter Recall the template for an unfolded rule of list reversal: The sizes of the lists in the rule are bounded by the length n of the input list. 
In the meta-interpreter, copying an unfolded rule and checking its guard is linear in the size of the open list E. The time for concatenation with append/3 is linear in the length of the list D. The runtime complexity of a recursive step in the meta-interpreter is therefore d(n) = O(n). Complexity of Runtime Repeated Recursion Unfolding According to the recurrence equations and Theorem <ref>, this gives linear complexity O(n) in the input list length for the unfolder as well as the meta-interpreter and for both of them together. So with repeated recursion unfolding the complexity is reduced from O(n^2) to O(n), clearly indicating a super-linear speedup due to best-case simplification. §.§.§ Benchmarks Table <ref> shows benchmark results for the list reversal example. The list sizes n are powers of 2. Times are in seconds. A time measurement of 0.0n means that it was below 0.01 but more than zero. The experiments confirm the super-linear speedup using runtime repeated recursion unfolding. Original Recursion For the original recursion, the benchmarks indicate a complexity consistent with the expected O(n^2). Doubling the list size increases the runtime by a factor of about four. Unfolder and Meta-Interpreter All measured runtimes are consistent with a linear complexity O(n). For list size n = 2^13, runtime repeated recursion unfolding is already two orders of magnitude faster than the original recursion. A list with half a million elements can be reversed in half a second. Comparing Recursion Depths 2^i-1 and 2^i To complete the picture, we give timings for list lengths n of the form 2^i and their predecessor numbers 2^i-1. For n=2^i we will need exactly one application of the most unfolded rule r_i to get a recursive call with the empty list, which is the base case. So we are done in one recursive step. But note that still the guards of all unfolded rules will be checked. For n=2^i-1 however, the most unfolded rule is r_i-1, which results in a recursive call with list length (2^i - 1) - 2^(i-1), i.e. 2^(i-1) - 1. To this call, the unfolded rule r_i-2 applies and so on, until all rules have been applied. In the meta-interpreter, the runtime of applying all unfolded rules (case of n=2^i-1) is less than that of applying just the next larger unfolded rule (which has twice the size and complexity) (case of n=2^i). The unfolder takes several times longer than the meta-interpreter. Going from 2^i-1 to 2^i, the unfolder generates one more rule and the time spent doubles. Overall, going from 2^i-1 to 2^i almost doubles the total runtime. §.§ Sorting Example The classical insertion sort program sorts the numbers given in a list in ascending order: s(L, S) ⇔ L=[A|L_1] | s(L_1, S_1), i(A, S_1, S) s([], S) ⇔ S=[]. The built-in i(A, S_1, S) inserts a number A into the sorted list S_1 such that the resulting list S is sorted too. §.§.§ Runtime Repeated Recursion Unfolding Again we first have to find and define an appropriate rule template with sufficient simplification to improve on the runtime. Unfolding Unfolding the recursive rule of s/2 results in the rule s(L, S) ⇔ L=[A,A_1|L_2] | s(L_2, S_2), i(A_1, S_2, S_1), i(A, S_1, S). The number A_1 is inserted into the already sorted list S_2; then the number A is inserted into the resulting list S_1. Repeating this unfolding scheme does not lead to any significant performance improvements, since we just generate more and more insertions. Simplification In the above rule, we can more efficiently insert both numbers A_1 and A during a single traversal of the list S_2. 
We first insert the smaller number and then continue traversing the sorted list to insert the larger number. Since we get more and more insertions with each unfolding, we will actually have to insert more and more numbers in this way, and they have to be pre-sorted. To implement this behavior, we use a built-in m(S_1,S_2,S_3) instead of insertions. It merges the sorted lists S_1 and S_2 into a sorted list S_3. In the above rule, we first order A and A_1 by putting them into a sorted list before they are merged with list S_2. For the necessary ordering we will also use m/3. We replace the built-ins in the body of the rule i(A_1, S_2, S_1), i(A, S_1, S) by the semantically equivalent m([A], [A_1], S_0), m(S_0, S_2, S). The simplified unfolded rule for sorting is now s(L, S) ⇔ L=[A,A_1|L_2] | m([A], [A_1], S_0), s(L_2, S_2), m(S_0, S_2, S). The merging before the recursive call pre-sorts single numbers into a sorted list. The merging after the recursive call merges this list into the sorted list returned by the recursive call. Now let us unfold this simplified rule with itself. The resulting rule is s(L, S) ⇔ L=[A,A_1,A_2,A_3|L_3] | m([A], [A_1], S_0), m([A_2], [A_3], S_1), s(L_3, S_3), m(S_1, S_3, S_2), m(S_0, S_2, S). We now generate more and more mergings. Generalisation Note that after the recursive call, we merge the list of two elements S_1 into the already sorted list S_3 and the resulting list S_2 is in turn merged with the two elements of the list S_0. We can improve the runtime if we rearrange the mergings so that we merge lists that are about the same length. We merge S_1 and S_0 first, move this merging before the recursive call and merge its result with S_3 after the recursive call: s(L,S) ⇔ L=[A,A_1,A_2,A_3|L_3] | m([A],[A_1],S_0), m([A_2],[A_3],S_1), m(S_1,S_0,S_4), s(L_3, S_3), m(S_4, S_3, S). In this way we have almost halved the runtime by avoiding the generation and traversal of the intermediate sorted list S_2. The introduction of mergings is the essential idea for the simplification of the unfolded rules. It gives rise to the rule template s(L,S) ⇔ L=[A,A_1,…,A_m|L_1] | 𝑀𝑒𝑟𝑔𝑖𝑛𝑔𝑠, s(L_1, S_1), m(S_0,S_1,S). The placeholder 𝑀𝑒𝑟𝑔𝑖𝑛𝑔𝑠 stands for the mergings of A,A_1,…,A_m that result in the sorted list S_0. §.§.§ Implementation We now implement the unfolding and the recursive constraint for sorting. Unfolding with Simplification Relying on the rule template, the unfolding scheme is defined by the following clause. We copy the input rule twice onto instances of the rule template to simulate the unfolding of the recursive call. In the first copy, the recursive call is s(L1,S1). We directly use it as the head of the second copy of the given rule. The resulting unfolded rule is composed of the head of the first copy s(L,S), of the guard of the first copy L=AL, of the mergings MG1 and MG2 before the recursive call of the two copies together with m(S3,S4,S0), and the new merging after the recursive call m(S0,S2,S). The built-in clean/2 removes superfluous true constraints in the resulting mergings. (The constraints stem from the original recursive clause and would proliferate otherwise.) Finally, the resulting guard is completed by executing the guard of the second copy L1=AL1 at unfolding time. This will double the size of the open list AL which ends in L1. Recursive Constraint For the sorting example, the rec_unfold rule is as follows: We write the original recursive clause also in simplified form using merge/3 instead of insert/3. 
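The concrete listings for the rec_unfold rule and for this simplified original clause are not reproduced here. Purely as an illustration, the following is a minimal sketch in SWI-Prolog CHR syntax of what the simplified original recursive clause and base case could look like; the constraint name s/2 follows the template form above, while the helper merge_ord/3 (a plain ordered-list merge standing in for the merge/3 predicate mentioned in the text) and the exact shape of the clauses are our assumptions, not the paper's verbatim code.

:- use_module(library(chr)).
:- chr_constraint s/2.

% Recursive case: split off the first element A, sort the remainder,
% then merge the sorted singleton [A] into the sorted remainder
% (merging replaces insertion, as described above).
s(L, S) <=> L = [A|L1] | s(L1, S1), merge_ord([A], S1, S).
% Base case: the empty list is already sorted.
s(L, S) <=> L = [] | S = [].

% Ordered merge of two sorted lists (assumed helper predicate).
merge_ord([], Ys, Ys) :- !.
merge_ord(Xs, [], Xs) :- !.
merge_ord([X|Xs], [Y|Ys], [X|Zs]) :- X =< Y, !, merge_ord(Xs, [Y|Ys], Zs).
merge_ord([X|Xs], [Y|Ys], [Y|Zs]) :- merge_ord([X|Xs], Ys, Zs).

A query such as ?- s([3,1,2], S). would then bind S to [1,2,3]; the unfolded rules discussed next are obtained from exactly this kind of clause.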
Unfolded Rules The first few rules that are returned by the unfolder with an appropriate query are As with list reversal, the rule size roughly doubles with each unfolding, but again this does not increase the space complexity. §.§.§ Complexity We derive estimates for the time complexities. Original Recursion The recursion depth is determined by the number of elements n in the input list of the given query. In the original recursion we have n recursive steps. In each step, insert/3 at worst goes through a list of length n. This results in the well-known quadratic complexity O(n^2) of insertion sort. Unfolder We now consider runtime repeated recursion unfolding. The rule size, the number of mergings and the size of the lists in the rule double with each unfolding. The guard check in the unfolder involves such a list, copying it and trying to unify it with the input list of the query. In simp_unf/2 we basically copy the rule twice and the auxiliary built-in clean/3 goes once through the mergings. The runtime complexity c(n) of a recursive step in the unfolder is therefore linear in the input list length n, O(n). Meta-Interpreter For the meta-interpreter, copying an unfolded rule and checking its guard is linear in the size of the open input list. In the rule body, there are n calls to merge/3 for an open input list of size n. These mergings dominate the complexity. The runtime of mergings is determined by the sum of the lengths of their input lists. The mergings of the singleton lists involve the n input list elements. The mergings of the resulting two-element lists also involve all n list elements. Indeed, the mergings of all lists of the same length always involve all n input list elements. The lists double their lengths until all elements are merged before the recursive call. So we have a number of different list lengths that is logarithmic in n. Overall, this results in a log-linear complexity for the mergings before the recursive call. After the recursive call, a list of length n is merged with the list resulting from the recursive call. The latter list cannot be larger than the former, because otherwise a more unfolded rule would have been applicable. In conclusion, the runtime complexity d(n) of a recursive step in the meta-interpreter is therefore log-linear in the input list length n, O(n log_2(n)). Complexity of Runtime Repeated Recursion Unfolding The solution of the associated recurrence equation in accordance with Theorem <ref>, Table <ref> maintain the log-linear complexity O(n log_2(n)) in the input list length n for the unfolder and the meta-interpreter together. Note that the unfolder itself has a lower, linear complexity. So with repeated recursion unfolding the complexity is reduced from O(n^2) to O(n log_2(n)), clearly indicating a super-linear speedup, in this example due to sufficient simplification. §.§.§ Benchmarks Table <ref> shows benchmarks results for the sorting example. Times are in seconds. The benchmarks are performed with random permutations of integers from 1 to n. The individual runtimes show little variation, but are faster with already sorted lists, be they in ascending or descending order. They confirm the super-linear speedup. Original Recursion The experiments for the original version of insertion sort indicate a complexity that is indeed quadratic O(n^2). Doubling the list length increases the runtime by a factor of four. Unfolder and Meta-Interpreter The runtimes of the unfolder are consistent with a linear complexity O(n). 
The meta-interpreter timings are consistent with a log-linear complexity O(n log(n)). The generation of all rules in the unfolder takes less time than applying one or more rules in the meta-interpreter. Comparing Recursion Depths 2^i-1 and 2^i Going from input list length 2^i-1 to 2^i, the unfolder generates one more rule. It has twice the size of the previous rule. And indeed the runtime for the unfolder almost doubles. Going from list length 2^i-1 to 2^i, the meta-interpreter applies all unfolded rules in the first case but only the next more unfolded rule in the second case. In both cases, all rules are tried by checking their guard. The runtime increases somewhat when going from 2^i-1 to 2^i. § RELATED WORK Program transformation to improve efficiency is usually concerned with a strategy for combining unfolding and folding to replace code (for an overview see <cit.>). The transformations are typically performed offline, at compile-time. Program transformation for specific aims and applications is abundant in logic programming in general <cit.> and in CHR in particular <cit.>. General methods exist for unfolding <cit.> (which we have adapted for this paper), for specializing rules with regard to a specific given query <cit.>, and for optimizations induced by confluence <cit.>. More recently, <cit.> uses program transformation implemented in CHR on constraint logic programs that verify properties of imperative programs. Polyvariant program specialization is the generation of specialized versions of a program according to different constraints that restrict its execution. It is extensively used in the context of Constrained Horn Clauses (CHCs) that can represent a wide range of programming languages <cit.>. The online specialization algorithm of <cit.> uses constrained facts derived from a program to drive the program specialization based on unfolding and folding operations. Repeated recursion unfolding can be considered a polyvariant program specialization. The basic idea of the approach was introduced in <cit.>. But there the rules were transformed at compile time. Because of this, super-linear speedup could only be achieved for calls that did not exceed a given fixed number of recursion steps. For larger calls, the speedup deteriorated to a constant factor. Here we revised and extended the approach for just-in-time (JIT) online execution. We introduced an unfolding scheme and a specialized meta-interpreter so that super-linear speedup can be achieved on-the-fly at runtime for any recursive call. Our technique relies solely on unfolding and simplifying the recursive step again and again. It ignores the base case of the recursion. We add redundant rules this way but never remove any. We never fold a rule, but we simplify rule bodies. Super-linear speedups are rare and mostly concern parallel programs. Our technique applies to sequential programs. This also holds for work based on supercompilation for functional programming languages like Refal and Haskell. In advanced cases of this offline program transformation (<cit.> and <cit.>) generalisation while unfolding increases the chance for folding and can achieve super-linear speedup on some examples. In contrast, our approach is straightforward as it does not involve generalisation or folding and works online at runtime. However, it requires problem-specific simplification. § DISCUSSION We discuss some issues and limitations of runtime repeated recursion unfolding and suggest some possible improvements as well. 
Rule Simplification Our technique hinges on sufficient simplification of the recursive step resulting from unfolding. This simplification has to be provided at compile-time. It requires some insight into the given problem and cannot be fully automated (but mathematical software tools and theorem provers might help). If the simplification is not sufficient, no super-linear speedup can be achieved. The question arises if we can characterize those problems where our approach leads to super-linear speedup. As with any other program transformation technique, we cannot expect a simple answer. Clearly, an algorithm implementation that is already optimal cannot be further improved. For a simple example, a search for the minimum of an unordered list has to go through all elements of the list. We cannot improve the time complexity of the linear direct recursion that performs this traversal. Any algorithm that keeps intermediate results of recursive steps will also be hard to optimize. For a simple example, this applies to a recursion that squares each number in a given list. This observation also applies to the unfolder and the meta-interpreter that implement our approach. They recursively generate and use a list of unfolded rules, and each rule is potentially needed. Space Complexity Another issue concerns the space requirements of our approach. We generate a number of rules that is logarithmic in the recursion depth of the given query. In our examples we saw an increase in rule size. With each unfolding, the rule size roughly doubled. In effect, the size of all unfolded rules taken together is proportional to the size of the input number for the summation example and to the length of the input list for reversal and sorting. Hence we saw no increase in space complexity. In general however, we cannot rule out code explosion in our approach. Limited Unfolding First, rule unfolding in CHR has some conditions and may not be possible at all. Second, repeated unfolding may not produce enough rules to allow for optimal rule applications. So far, we have not observed these problems in practice. If they should occur, then we think they could be tackled with a more liberal definition of unfolding in CHR. Possible Improvements Note that unfolded rules are generic and can be reused for any later call, improving the efficiency further. As for the implementation, the following optimizations come to mind: The unfolder and the meta-interpreter can be specialized for a given recursive rule using standard partial evaluation techniques, which typically lead to an additional constant factor speedup. The unfolder and the meta-interpreter are currently head-recursive; the implementation could be made tail-recursive. § CONCLUSIONS AND FUTURE WORK We have given a formal definition of runtime repeated recursion unfolding with simplification and proven its correctness. Our method reduces the number of recursive rule applications to its logarithm at the cost of introducing a logarithmic number of unfolded rules. We provided an implementation of our approach in five rules, comprising the unfolder and the meta-interpreter. We proved a super-linear speedup theorem for tractable problems provided the necessary unfolding with best-case or sufficient simplification is possible. The result relies on a straightforward optimal rule application strategy that we proved sound and complete. In the best case, the complexity of the given recursion is reduced to that of its first recursive step. 
We showed with benchmarks on three classical examples that the super-linear speedup indeed holds in practice. It quickly reaches several orders of magnitude. For each example, we had to develop a specific rule simplification scheme based on rule templates. Table <ref> summarizes our estimated and observed time complexity results for our examples. They feature typical complexities of tractable algorithms and reduce the time complexity by a factor of n or n / log(n). Summation and list reversal are examples for best-case simplification. The sorting example has sufficient simplification with different complexities for the unfolder and meta-interpreter. Future work In this paper, we assumed a recursive rule with linear direct recursion expressing problems in the polynomial complexity class. We want to extend our technique to mutual recursion as well as multiple recursive rules. This would also allow us to express and improve exponential algorithms. We defined and implemented repeated recursion unfolding using the rule-based language CHR, but we think our approach can be applied to other rule-based languages and mainstream programming languages as well. For the implementation, meta-programming features may not be necessary if the interpreter is specialized with regard to the given recursion so that the meta-calls go away. It might already be an advantage that the number of recursive steps is reduced to its logarithm by our approach. It should also be possible to apply our technique to loops instead of recursion. Acknowledgements. This research work was initiated during the sabbatical of the author in the summer semester of 2020.
http://arxiv.org/abs/2307.01222v1
20230702183132
The minmin coalition number in graphs
[ "Davood Bakhshesh", "Michael A. Henning" ]
math.CO
[ "math.CO", "cs.DM" ]
The minmin coalition number in graphs ^1Davood Bakhshesh and ^2Michael A. HenningResearch supported in part by the University of Johannesburg and the South African National Research Foundation ^1Department of Computer Science University of Bojnord Bojnord, Iran Email: d.bakhshesh@ub.ac.ir ^2Department of Mathematics and Applied Mathematics University of Johannesburg Auckland Park, 2006 South Africa Email: mahenning@uj.ac.za =========================================================================================================================================================================================================================================================================================================================================================================================================================== A set S of vertices in a graph G is a dominating set if every vertex of V(G) ∖ S is adjacent to a vertex in S. A coalition in G consists of two disjoint sets of vertices X and Y of G, neither of which is a dominating set but whose union X ∪ Y is a dominating set of G. Such sets X and Y form a coalition in G. A coalition partition, abbreviated c-partition, in G is a partition 𝒳 = {X_1,…,X_k} of the vertex set V(G) of G such that for all i ∈ [k], each set X_i ∈𝒳 satisfies one of the following two conditions: (1) X_i is a dominating set of G with a single vertex, or (2) X_i forms a coalition with some other set X_j ∈𝒳. Let A = {A_1,…,A_r} and B= {B_1,…, B_s} be two partitions of V(G). Partition B is a refinement of partition A if every set B_i ∈ B is either equal to, or a proper subset of, some set A_j ∈ A. Further if A B, then B is a proper refinement of A. Partition A is a minimal c-partition if it is not a proper refinement of another c-partition. Haynes et al. [AKCE Int. J. Graphs Combin. 17 (2020), no. 2, 653–659] defined the minmin coalition number c_min(G) of G to equal the minimum order of a minimal c-partition of G. We show that 2 ≤ c_min(G) ≤ n, and we characterize graphs G of order n satisfying c_min(G) = n. A polynomial-time algorithm is given to determine if c_min(G)=2 for a given graph G. A necessary and sufficient condition for a graph G to satisfy c_min(G) ≥ 3 is given, and a characterization of graphs G with minimum degree 2 and c_min(G)= 4 is provided. Keywords: Coalition number; Domination number; Coalition partition AMS subject classification: 05C69 § INTRODUCTION A set S of vertices in a graph G is a dominating set if every vertex in V(G) ∖ S is adjacent to a vertex in S. The domination number γ(G) of G is the minimum cardinality of a dominating set of G. If A,B ⊆ S, then set A dominates the set B if every vertex b ∈ B belongs to A or is adjacent to a vertex of A. The study of domination in graphs is an active area of research in graph theory. A thorough treatment of this topic can be found in recent so-called “domination books” <cit.>. For graph theory notation and terminology, we generally follow <cit.>. Specifically, let G be a graph with vertex set V(G) and edge set E(G), and of order n(G) = |V(G)| and size m(G) = |E(G)|. Two adjacent vertices in G are neighbors. The open neighborhood of a vertex v in G is N_G(v) = {u ∈ V uv ∈ E} and the closed neighborhood of v is N_G[v] = {v}∪ N_G(v). We denote the degree of v in G by _G(v), and so _G(v) = |N_G(v)|. The minimum and maximum degrees in G are denoted by δ(G) and Δ(G), respectively. An isolated vertex in G is a vertex of degree 0 in G. A graph is isolate-free if it contains no isolated vertex. 
A vertex v is a universal vertex, also called a full vertex in the literature, if N_G[v] = V(G), that is, _G(v) = n(G) - 1. If the graph G is clear from the context, we simply write V, E, n, m, (v), N(v), and N[v] rather than V(G), E(G), n(G), m(G), _G(v), N_G(v), and N_G[v], respectively. We denote a path and cycle on n vertices by P_n and C_n, respectively, and we denote a complete graph on n vertices by K_n. A complete bipartite graph with partite sets of cardinalities r and s we denote by K_r,s. A star is a complete bipartite graph K_1,s where s ≥ 2. A nontrivial tree is a tree of order at least 2. A partition of a set is a grouping of its elements into non-empty subsets, in such a way that every element of the set is included in exactly one subset. A coalition in a graph G consists of two disjoint sets of vertices X and Y of G, neither of which is a dominating set but whose union X ∪ Y is a dominating set of G. Such sets X and Y form a coalition in G. A coalition partition, called a c-partition, in G is a partition Ψ = {V_1,…,V_k} of V(G) such that for all i ∈ [k], the set V_i is either a singleton dominating set or forms a coalition with another set V_j for some j, where j ∈ [k] ∖{i}. The coalition number, C(G), in G equals the maximum order k of a c-partition of G. Coalitions in graphs were introduced and first studied by Haynes, Hedetniemi, Hedetniemi, McRae, and Mohan <cit.>, and have subsequently been studied, for example, in <cit.>. Their research primarily focused on examining coalition numbers in trees and cycles. In <cit.>, they established upper bounds on the coalition number of a graph in terms of its minimum and maximum degree. Bakhshesh et al. in <cit.> characterized graphs G of order n with δ(G) ≥ 1 and C(G) = n. They also identified all trees T of order n with C(T) = n-1. In <cit.>, Haynes et al. introduced the refinement of a coalition partition and defined a minimal coalition partition. Let A = {A_1,…,A_r} and B= {B_1,…, B_s} be two partitions of V(G). Partition B is a refinement of partition A, denoted A≤ B, if every set B_i ∈ B is either equal to, or a proper subset of, some set A_j ∈ A. Further if A B, then B is a proper refinement of A, denoted A< B. The following observation follows from the definition of a proper refinement of a coalition partition. (<cit.>) If Ψ = {V_1,…,V_k} is a c-partition of a graph G, and there exist two sets V_i and V_j whose union V_i ∪ V_j is not a dominating set, then the partition Ψ' formed from Ψ by replacing V_i and V_j with the union V_i ∪ V_j is a c-partition of G and Ψ is a proper refinement of Ψ'. A c-partition A is a minimal c-partition in G if it is not a proper refinement of any other c-partition in G. In <cit.>, Haynes et al. defined the minmin coalition number c_min(G) of G to equal the minimum order of a minimal c-partition of G. A minimal c-partition of G of cardinality c_min(G) is called a c_min-partition of G. Haynes et al. <cit.> posed the following open problem. (<cit.>) What can you say about c_min(G)? This paper addresses Problem <ref>. We proceed as follows. In Section <ref>, we present lower and upper bounds on the minmin coalition number and we prove that if G is a graph of order n, then 2 ≤ c_min(G) ≤ n, and these bounds are sharp. In Section <ref>, we characterize graphs G of order n satisfying c_min(G) = n. In Section <ref>, we give a comprehensive description of graphs G satisfying c_min(G)=k where 2 ≤ k ≤ 4. Additionally, we present a polynomial-time algorithm to determine if c_min(G)=2 for a given graph G. 
If G is an isolate-free graph that does not contain a universal vertex and has minimum degree 1, then we show that c_min(G)=2. A characterization of graphs G with minimum degree 2 and c_min(G) = 4 is provided. § BOUNDS ON THE MINMIN COALITION NUMBER In this section, we present lower and upper bounds on the minmin coalition number. We prove firstly that the minmin coalition number is bounded above by the cardinality of an arbitrary c-partition of G. If G is a graph and X is a c-partition of G, then c_min(G) ≤ | X|. Let X be an arbitrary c-partition of G. If X is a minimal c-partition of G, then c_min(G) ≤ | X|. Now, assume that X is not a minimal c-partition. By repeated applications of Observation <ref>, there exists a minimal c-partition P of G with P < X. Hence, c_min(G) ≤ | P| < | X|.  As an immediate consequence of Proposition <ref>, we have the following upper bound on the minmin coalition number. If G is a graph, then c_min(G) ≤ C(G). We determine next a relationship between the minmin coalition number of a graph and the graph obtained from it by removing the universal vertices. If G is a graph that is not a complete graph and contains k universal vertices, then c_min(G) = c_min(G') + k, where G' is obtained from G by removing the universal vertices. Let G be a graph that is not a complete graph and contains k universal vertices, and let X be a c_min-partition of G. Thus, X is a minimal c-partition of G and | X| = c_min(G). If u is a universal vertex, then every c-partition of G contains the set {u}. In particular, {u}∈ X. Let X' be obtained from X by removing all sets {u}, where u is a universal vertex of G. The resulting partition X' is a c-partition of G'. Thus by Proposition <ref> we infer that c_min(G')≤ | X'| = | X|-k = c_min(G) - k. To prove the reverse inequality, let Y' be a c_min-partition of G'. Thus, Y' is a minimal c-partition of G' and | Y'| = c_min(G'). Adding all sets {u} to Y', where u is a universal vertex of G, yields a partition Y that is a c-partition of G. By Proposition <ref> we infer that c_min(G) ≤ | Y| = | Y'|+k = c_min(G') + k. Consequently, c_min(G) = c_min(G')+k.  We next establish lower and upper bounds on the minmin coalition number of a graph. If G is a graph of order n ≥ 2, then 2 ≤ c_min(G) ≤ n, and these bounds are sharp. Let G be a graph of order n ≥ 2. By Corollary <ref>, c_min(G) ≤ C(G). Since C(G) ≤ n, the upper bound c_min(G) ≤ n trivially holds. To prove the lower bound on c_min(G), let Y be a c_min-partition of G. If Y contains a set of cardinality 1, then such a set is a singleton dominating set of G. In this case, we infer from the lower bound on the order of G that | Y| ≥ 2. If every set in Y has cardinality at least 2, then every set in Y forms a coalition with some other set in the c-partition, implying once again that | Y| ≥ 2. Hence in both cases, c_min(G) = | Y| ≥ 2. To prove the sharpness of lower bound, consider, for example, when G is a path v_1v_2 … v_n of order n ≥ 4. The partition X = {{v_1,v_2}, {v_3,…,v_n}} is a c-partition of G that is not a proper refinement of any other c-partition in G, implying that X is a minimal c-partition. Therefore, 2 ≤ c_min(G) ≤ | X| = 2, and so c_min(G) = 2. To prove the sharpness of upper bound, consider, for example, a complete bipartite graph G with partite sets V_1, V_2, …, V_k where k ≥ 2 and |V_i| = 2 for all i ∈ [k]. Since γ(G)=2 and every subset of V(G) of cardinality 2 is a dominating set of G, we infer that every c-partition of G contains only singleton sets. Hence, c_min(G)=n.  
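To make the lower-bound sharpness example above fully concrete, here is the smallest case n = 4 worked out explicitly; this is only a check of the construction already given, not an additional result.

P_4 = v_1 v_2 v_3 v_4 with X = {{v_1,v_2}, {v_3,v_4}}:
N[v_4] = {v_3,v_4}, so {v_1,v_2} does not dominate v_4 and is not a dominating set;
N[v_1] = {v_1,v_2}, so {v_3,v_4} does not dominate v_1 and is not a dominating set;
{v_1,v_2} ∪ {v_3,v_4} = V(P_4) is a dominating set, so the two sets form a coalition and X is a c-partition.
The only partition of which X is a proper refinement is {V(P_4)}, which is not a c-partition, so X is a minimal c-partition and c_min(P_4) = 2.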
§ GRAPHS WITH LARGE MINMIN COALITION NUMBER In this section, we characterize graphs G of order n satisfying c_min(G) = n. For this purpose, we define a family M of graphs G that are generated in the following recursive manner. We begin by including the graphs K_1, K_2, and K̅_2 in M. If H is a graph already present in M, then we add the two graphs K_1+H and K̅_2+H to M, where the join F + G of two graphs F and G is the graph formed from disjoint copies of F and G by joining every vertex of F to every vertex of G. As an illustration, the cycle C_4 = K̅_2 + K̅_2 belongs to the family M, and so C_4 ∈ M. The graph K̅_2 + C_4 illustrated on the left hand drawing of Figure <ref> (where here H = C_4) therefore belongs to the family M. Moreover the graph K_1 + C_4 belongs to the family M, implying that the graph K_1 + H illustrated in the right hand drawing of Figure <ref> (where here H = K_1 + C_4) therefore belongs to the family M. If G ∈ M has order n ≥ 3, then G is a connected graph and c_min(G) = n. Let G ∈ M have order n ≥ 3. By construction of graphs in the family M, the graph G is connected. Let V={v_1,…,v_n}. We proceed by induction on n ≥ 3 to show that the property (⋆) below holds in the graph G: (⋆) Every two distinct vertices that are not universal vertices form a dominating set of G. Suppose that n = 3. In this case, either G = K_1 + K_2 = K_3 or G = K_1 + K̅_2 = P_3. If G = K_3, then every vertex of G is a universal vertex. If G = P_3, then G has two vertices that are not universal and these two vertices form a dominating set of G. Hence if n=3, then property (⋆) holds. Suppose that n = 4. Thus, G = K_1 + H where H ∈{P_3,K_3} or G = K̅_2 + H where H ∈{K_2,K̅_2}. Hence, G ∈{K_4,K_4 - e,C_4} and property (⋆) holds in the graph G. This establishes the base case when n = 3 and n = 4. Suppose that n ≥ 5 and that if G' ∈ M has order n' where 3 ≤ n' < n, then property (⋆) holds in the graph G'. Let G ∈ M have order n. Since G ∈ M, either G = K_1 + G' or G = K̅_2 + G' for some graph G' ∈ M. Suppose firstly that G = K_1 + G' for some graph G' ∈ M of order n'. Necessarily, n' = n-1. Let v be the vertex added to G' to construct G, and so v is a universal vertex of G. Every universal vertex in G' is also a universal vertex in G. Let x and y be two arbitrary vertices of G that are not universal vertices of G. Both x and y belong to G' and neither x nor y is a universal vertex of G'. Applying the inductive hypothesis to G', the set {x,y} forms a dominating set of G', and therefore also of G since both x and y are adjacent to the universal vertex v of G. Hence in this case when G = K_1 + G' for some graph G' ∈ M, property (⋆) holds in the graph G. Suppose secondly that G = K̅_2 + G' for some graph G' ∈ M of order n'. Necessarily, n' = n-2. Let v_1 and v_2 be the two vertices added to G' to construct G, and so v_i is adjacent to every vertex of G except for the vertex v_3-i for i ∈ [2]. Every universal vertex in G' is also a universal vertex in G. Let x and y be two arbitrary vertices of G that are not universal vertices of G. If both x and y belong to G', then neither x nor y is a universal vertex of G', and so by the inductive hypothesis, the set {x,y} forms a dominating set of G', and therefore also of G. If exactly one of x and y belongs to G', then renaming vertices if necessary we may assume that x = v_1 and y ∈ V(G'). In this case, since x dominates the set V(G') and the vertex y is adjacent to v_2, the set {x,y} once again forms a dominating set of G. 
Finally, if {x,y} = {v_1,v_2}, then {x,y} again forms a dominating set of G. Hence in this case when G = K̅_2 + G' for some graph G' ∈ M, property (⋆) holds in the graph G. Since property (⋆) holds in the graph G, if X is an arbitrary c-partition of G, then every set in the partition X is a singleton set, and so | X| = n, implying that c_min(G)=n.  We are now in a position to characterize graphs G of order n satisfying c_min(G) = n. If G is a connected graph of order n ≥ 3, then c_min(G) = n if and only if G ∈ M. Let G be a connected graph of order n ≥ 3. If G ∈ M, then by Proposition <ref>, c_min(G) = n. Hence it suffices for us to prove that if c_min(G) = n, then G ∈ M. We prove by induction on n ≥ 3 that G ∈ M. If n = 3, then since G is a connected graph either G = P_3 or G = K_3. In both cases, G ∈ M. This establishes the base case. Suppose that n ≥ 4 and that if G' is a connected graph of order n' where 3 ≤ n' < n and c_min(G') = n', then G' ∈ M. Let G be a connected graph of order n satisfying c_min(G) = n, and let V={v_1,…,v_n}. Thus, G contains a unique c-partition, namely the partition in which every set is a singleton set. In particular, G contains a unique c_min-partition, namely the partition Ψ = {{v_1}, {v_2}, …, {v_n}}. Since the c_min-partition Ψ is not a proper refinement of any other c-partition in G, by Proposition <ref> if v_i and v_j are distinct vertices that are not universal vertices of G, then the set {v_i,v_j} is a dominating set of G for all i,j ∈ [n]. If G = K_n, then by repeated applications of the join operation K_1 + H where H ∈ M, the graph G ∈ M, noting that K_2 ∈ M. Hence, we may assume that G ≠ K_n, for otherwise the desired result is immediate. In particular, G contains at least two vertices that are not universal vertices. Let v be an arbitrary vertex of G that is not a universal vertex, and consider the set S_v = V ∖ N_G[v]. Thus, the set S_v consists of all vertices of G different from v that are not adjacent to v. By supposition, |S_v| ≥ 1. Suppose that |S_v| ≥ 2, and let {x,y}⊆ S_v. Since neither x nor y is adjacent to v, the vertices x and y are not universal vertices of G. By our earlier observations, the set {x,y} is therefore a dominating set of G. However, the vertex v is not dominated by the set {x,y}, a contradiction. Hence, |S_v| = 1. Let S_v = {v'}. Let u be an arbitrary neighbor of v, and so u ∈ N_G(v). If u is a universal vertex, then u is adjacent to v'. If u is not a universal vertex, then by our earlier observations, the set {u,v} is a dominating set of G. In particular, this implies that the vertex u is adjacent to v'. Hence, the vertex v' is adjacent to every vertex in N_G(v). Thus, G = K̅_2 + G' where G' = G - {v,v'}. Let G' have order n', and so n' = n - 2. If G' is a complete graph K_n-2, then G' ∈ M, implying that G ∈ M. Hence we may assume that G' contains at least two vertices that are not universal vertices in G'. Let x and y be two distinct vertices that are not universal vertices of G'. Since x and y are not universal vertices of G, by our earlier observations the set {x,y} is a dominating set of G. This in turn implies that {x,y} is a dominating set of G'. This is true for every two distinct vertices that are not universal vertices of G'. Hence, c_min(G') = n'. If n' = 2, then either G' = K_2, in which case G = K_4 - e, or G' = K̅_2, in which case G = C_4. In both cases, G ∈ M. Hence, we may assume that n' ≥ 3, for otherwise the desired result follows. 
Thus by our earlier properties of the graph G', we infer that G' is a connected graph. As observed earlier, c_min(G') = n'. Applying the inductive hypothesis to G', we infer that G' ∈ M. This in turn implies that G = K̅_2 + G' ∈ M.  § GRAPHS WITH SMALL MINMIN COALITION NUMBER In this section, we study graphs with small minmin coalition number. We characterize graphs G with no universal vertex satisfying c_min(G) = 2. We present a necessary and sufficient condition for a graph G to satisfy c_min(G) ≥ 3, and a necessary and sufficient condition for a graph G to satisfy c_min(G) ≥ 4. We first prove a necessary condition for a graph G to satisfy c_min(G) = 2. If G is an isolate-free graph with δ(G) = 1 that does not contain a universal vertex, then c_min(G)=2. Let G be an isolate-free graph with δ(G) = 1 that does not contain a universal vertex. Suppose, to the contrary, that c_min(G) ≠ 2. Hence by Theorem <ref>, c_min(G)≥ 3. In particular, this implies that G has order at least 3. Let x be a vertex of degree 1 in G, and let y be its only neighbor. We now consider the partition Ψ = (V_1,V_2) of V(G) into sets V_1 = {x,y} and V_2 = V(G) ∖ V_1. Since the vertex x is not dominated by V_2, the set V_2 is not a dominating set of G. By supposition the graph G does not contain a universal vertex. In particular, the vertex y is not a universal vertex of G, implying that the set V_1 is not a dominating set of G. Hence, the sets V_1 and V_2 form a coalition in G. Thus, Ψ is a c-partition of G, implying by Proposition <ref> that c_min(G) ≤ |Ψ| = 2, a contradiction.  As a consequence of Lemma <ref>, we can determine the minmin coalition number of a tree. If T is a nontrivial tree, then c_min(T)=2, unless T is a star T ≅ K_1,r where r ≥ 2. Let T be a nontrivial tree. Thus, T has order n ≥ 2. If n = 2, then c_min(T)=2. Hence we may assume that n ≥ 3, for otherwise the desired result is immediate. If T does not contain a universal vertex, then we immediately infer from Lemma <ref> that c_min(T)=2. Hence we may further assume that T contains a universal vertex. Thus, T is a star K_1,n-1. By our earlier assumptions, n ≥ 3. If Ψ is an arbitrary c-partition of T, then the universal vertex, v say, of T forms a singleton set {v} in Ψ. Since n ≥ 3, no leaf of T is a dominating set of T. Since the set of leaves in T forms a dominating set of T, the set of leaves cannot be a set in the c-partition Ψ, implying that Ψ contains at least two sets different from the singleton set {v}. Therefore, |Ψ| ≥ 3. Since this is true for every c-partition of T, we infer that c_min(T) ≥ 3. On the other hand, if (V_1,V_2) is an arbitrary partition of the set of leaves of T, then the partition X = (V_1,V_2,V_3) where V_3 = {v} is a c-partition of T, and so by Proposition <ref> c_min(T) ≤ | X| = 3. Consequently, c_min(T) = 3.  We present next a necessary and sufficient condition for a graph G to satisfy c_min(G) ≥ 3. If G is a graph that does not contain a universal vertex, then c_min(G) ≥ 3 if and only if for every vertex v ∈ V, the set N[v] is a dominating set of G. Let G be a graph that does not contain a universal vertex. Suppose firstly that c_min(G) ≥ 3. Let v be an arbitrary vertex of G, and let S = N_G[v]. Since the vertex v is not adjacent to any vertex in V ∖ S, the set V ∖ S is not a dominating set of G. If S is not a dominating set of G, then {S,V ∖ S} is a coalition partition of G, and so by Proposition <ref> c_min(G) ≤ 2, a contradiction. Hence, S is a dominating set of G. 
Conversely, suppose that the set N[v] is a dominating set of G for every vertex v ∈ V. We show that c_min(G) ≥ 3. Suppose, to the contrary, that c_min(G)<3. By Theorem <ref>, this implies that c_min(G)=2. Let X = {A, B} be a c_min-partition of G. Since G has no universal vertex, neither set A nor B is a dominating set of G. Let u be a vertex of G not dominated by the set A, and so A ∩ N[u] = ∅. Since B = V ∖ A, we have N[u] ⊆ B. By supposition the set N[u] is a dominating set of G. Since the property of being a dominating set is superhereditary, the set B is therefore a dominating set of G, a contradiction. Hence, c_min(G) ≥ 3, as desired.  As an immediate consequence of Theorem <ref> and Theorem <ref>, we have the following characterization of graphs with no universal vertex satisfying c_min(G) = 2. If G is a graph that does not contain a universal vertex, then c_min(G) = 2 if and only if there exists a vertex v ∈ V such that N[v] is not a dominating set of G. As an application of Corollary <ref>, consider the Heawood graph and the Petersen graph illustrated in Figure <ref>(a) and (b), respectively. Since neither graph has a vertex whose closed neighborhood is a dominating set, we infer from Corollary <ref> that the minmin coalition number is equal to 2 for both the Heawood graph and the Petersen graph. As a further application of Corollary <ref>, we can determine the minmin coalition number of a cycle. The minmin coalition number of a cycle C_n is given by following closed formula. c_min(C_n) = {[ 2 if  n≥ 6; 3 if  n=3   or  n=5.; 4 if  n=4. ]. If n ≥ 6, then by Corollary <ref> we have c_min(C_n) = 2. For n ≤ 5, Theorem <ref> implies that c_min(C_n) ≥ 3. When n=3, the only c-partition of C_3 consists of three singleton sets which implies that c_min(C_3) = 3. When n=4, we note that every subset of vertices of C_4 with at least two vertices is a dominating set, implying that the only c-partition of C_4 consists of four singleton sets, and so c_min(C_4) = 4. Suppose that n=5 and consider the 5-cycle v_1 v_2 v_3 v_4 v_5 v_1. The partition {{c_1}, {c_2,c_3},{c_4,c_5}}, for example, is a minimal c-partition of C_5, implying that c_min(C_5) = 3.  We remark that Theorem <ref> yields a polynomial time algorithm to determine whether a graph G with no universal vertex satisfies c_min(G)=2 or c_min(G) 2. We recursively examine all the vertices of G one by one. If there exists a vertex v ∈ V such that the set N[v] is not a dominating set of G, then we infer by Theorem <ref> that c_min(G)=2. If no such vertex exists, then we infer that c_min(G) 2. Given a set of vertices of G, one can easily check in polynomial time whether the set is a dominating set of G or not. Therefore, the total time required to determine whether c_min(G) = 2 is polynomial. This yields the following result. There exists a polynomial time algorithm to determine whether for a given graph G with no universal vertices the equation c_min(G)=2 holds or not. We present next a necessary and sufficient condition for a graph G to satisfy c_min(G) ≥ 4. If G is a graph that does not contain a universal vertex, then c_min(G) ≥ 4 if and only if for every vertex v ∈ V and every partition {P,Q} of N[v], at least one of P or Q is a dominating set of G. Let G be a graph that does not contain a universal vertex. Suppose firstly that c_min(G) ≥ 4. Let v be an arbitrary vertex of G, and let {P,Q} be an arbitrary partition of N[v]. Thus, P ∅, Q ∅ and P ∪ Q = N[v]. Suppose that P = {v}, and so Q=N(v). 
By Theorem <ref>, N[v] is a dominating set of G, implying that Q is a dominating set, yielding the desired result. Analogously, if Q = {v}, then the desired result follows. Hence we may assume that P {v} and Q{v}. Renaming the sets P and Q if necessary, we may assume that v ∈ P. Let R = V ∖ (P ∪ Q). Since R contains no neighbors of v, it is not a dominating set of G. We now consider the partition X = {P, Q, R} of V. Suppose that neither P nor Q is a dominating set of G. Since c_min(G) 2, by Theorem <ref> the set P ∪ Q is a dominating set. Additionally, P ∪ R is a dominating set of G noting that the vertex v dominates all vertices in Q. Therefore, X is a c-partition of G, which implies that c_min(G) ≤ 3, a contradiction. Hence, at least one of P or Q is a dominating set. Conversely, suppose that for every vertex v ∈ V and every partition {P,Q} of N[v], at least one of P or Q is a dominating set of G. We show that c_min(G) ≥ 4. Suppose, to the contrary, that c_min(G) < 4. Hence by Theorem <ref>, c_min(G) can only be equal to 2 or 3. If c_min(G)=2, then by Corollary <ref> there exists a vertex v ∈ V such that N[v] is not a dominating set of G. However in this case for every possible partition {P,Q} of N[v], neither P nor Q is a dominating set of G, a contradiction. Therefore, c_min(G) = 3. Let X = {A, B, C} be a c_min-partition of G. Thus, X is a minimal c-partition of G and | X| = 3. Since G has no universal vertex, we note that none of the sets A, B and C is a dominating set of G. Let u be a vertex of G not dominated by the set A, and so A ∩ N[u] = ∅. Since B ∪ C = V ∖ A, we therefore have N[u] ⊆ B ∪ C. Suppose that B∩ N[u] = ∅, implying that N[u] ⊆ C. By Theorem <ref> and our assumption that c_min(G) 2, the set N[u] is a dominating set. Since the property of being a dominating set is superhereditary, the set C is therefore a dominating set of G, a contradiction. Hence, B ∩ N[u] ∅. Analogously, C ∩ N[u] ∅. Thus letting P=B∩ N[u] and Q=C∩ N[u], we have that {P,Q} is a partition of N[u]. Hence by supposition, at least one of P or Q is a dominating set of G. This in turn implies that at least one of B or C is a dominating set of G, a contradiction. Hence, c_min(G) ≥ 4.  Let F be the family of graphs G with vertex set V = {v,x,y}∪ U, where (v) = 2 and N(v)={x,y}, and where the vertices x and y are not adjacent but are both adjacent to all vertices of U as illustrated in Figure <ref>. Furthermore, the subgraph G[U] induced by U contains any number of edges, including the possibility of no edges. As an application of Theorem <ref>, we have the following result. If G is a graph with δ(G) = 2 that does not contain a universal vertex, then c_min(G) = 4 if and only if G∈ F. Let G be a graph with δ(G) = 2 that does not contain a universal vertex. Let v be a vertex of degree 2 in G, and let N(v)={x,y}. Further, let U = V ∖ N[v]. Suppose firstly that c_min(G) = 4. Let P={v,x} and Q={y}. By Theorem <ref>, at least one of P and Q is a dominating set of G. Since G has no universal vertex, the set Q is not a dominating set of G, implying that P is necessarily a dominating set of G. This in turn implies that x is adjacent to all vertices of U. Since G has no universal vertex, the vertex x is therefore not adjacent to the vertex y. Interchanging the roles of x and y, identical arguments show that the vertex y is adjacent to all vertices of U. Hence, G ∈ F. Conversely, suppose that G∈ F. We adopt the notation used in the definition of the family F, and so V = {v,x,y}∪ U, where N_G(v) = {x,y}. 
We show firstly that c_min(G) ≥ 3. Suppose, to the contrary, that c_min(G)=2. Let X = {A, B} be a c_min-partition of G. Since G has no universal vertex, neither set A nor B is a dominating set of G. If {x,y}⊆ A or {x,y}⊆ B, then since {x,y} is a dominating set of G, at least one of A or B is a dominating set of G, a contradiction. Hence, |A∩{x,y}|=1 and |B∩{x,y}|=1. Renaming the sets A and B if necessary, we may assume that x ∈ A and y ∈ B, and that |A|≥ 2. Let w be a vertex in A different from x. Thus either w=v or w∈ U. In both cases, the set A is a dominating set of G, a contradiction. Hence, c_min(G) ≥ 3. We show next that c_min(G) ≥ 4. Suppose, to the contrary, that c_min(G)=3. Let X = {A, B,C} be a c_min-partition of G. Since G has no universal vertex, none of the sets A, B or C is a dominating set of G. We therefore infer that the set {x} forms a singleton set in X, as does the set {y}. Renaming the sets A, B and C we may assume that A = {x} and B = {y}, and so C = V ∖{x,y}. However, such a set C is a dominating set of G, a contradiction. Hence, c_min(G) ≥ 4. The partition Ψ = ({v},{x},{y},U) is a c-partition of G, implying by Proposition <ref> that c_min(G) ≤ 4. Consequently, c_min(G) = 4.  § CONCLUDING REMARKS In this paper we address an open problem posed by Haynes, Hedetniemi, Hedetniemi, McRae, and Mohan <cit.> to study the minmin coalition number c_min(G) of a graph G. We show that if G is a graph of order n, then 2 ≤ c_min(G)≤ n. Among other results, we characterized graphs G of order n satisfying c_min(G) = n and we provided polynomial time algorithm to determine if c_min(G) = 2. It would be intriguing to investigate the open question of whether a polynomial time algorithm exists to determine larger values of c_min(G). 99 bakhcoal D. Bakhshesh, M. A. Henning, and D. Pradhan, On the coalition number of trees. Bull. Malays. Math. Sci. Soc. 46 (2023), no. 3, Paper No. 95. coal0 T. W. Haynes, J. Hedetniemi, S. T. Hedetniemi, A. A. McRae, and R. Mohan, Introduction to coalitions in graphs. AKCE Int. J. Graphs Combin. 17 (2020), no. 2, 653–659. coal2 T. W. Haynes, J. T. Hedetniemi, S. T. Hedetniemi, A. A. McRae, and R. Mohan, Upper bounds on the coalition number. Australas. J. Combin. 80 (2021), 442–453. coal3 T. W. Haynes, J. T. Hedetniemi, S. T. Hedetniemi, A. A. McRae, and R. Mohan, Coalition graphs. Commun. Comb. Optim. 8 (2023), no. 2, 423–430. coal1 T. W. Haynes, J. T. Hedetniemi, S. T. Hedetniemi, A. A. McRae, and R. Mohan, Coalition graphs of paths, cycles, and trees. To appear in Discuss. Math. Graph Theory. HaHeHe-20 T. W. Haynes, S. T. Hedetniemi, and M. A. Henning (eds), Topics in Domination in Graphs. Series: Developments in Mathematics, Vol. 64, Springer, Cham, 2020. viii + 545 pp. HaHeHe-21 T. W. Haynes, S. T. Hedetniemi, and M. A. Henning (eds), Structures of Domination in Graphs. Series: Developments in Mathematics, Vol. 66, Springer, Cham, 2021. viii + 536 pp. HaHeHe-23 T. W. Haynes, S. T. Hedetniemi, and M. A. Henning, Domination in Graphs: Core Concepts Series: Springer Monographs in Mathematics, Springer, Cham, 2023. xx + 644 pp. HeYe-book M. A. Henning and A. Yeo, Total domination in graphs. Series: Springer Monographs in Mathematics, Springer, Cham, New York, 2013. xiv + 178 pp.
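As a companion to the corollary and the polynomial-time test described before the concluding remarks above, the check for c_min(G) = 2 is simple to implement. The following is a minimal Python sketch using networkx (the helper names are ours, not from the paper); it assumes, as in the corollary, that the input graph has no universal vertex, and it reproduces the cycle values established for C_6 and C_4 above.

```python
import networkx as nx

def is_dominating(G, S):
    """True if every vertex of G is in S or has a neighbor in S."""
    dominated = set(S)
    for v in S:
        dominated.update(G.neighbors(v))
    return dominated == set(G.nodes)

def cmin_equals_two(G):
    """Test of the corollary: for a graph with no universal vertex,
    c_min(G) = 2 iff some closed neighborhood N[v] is not a dominating set."""
    assert all(G.degree(v) < G.number_of_nodes() - 1 for v in G.nodes), \
        "the test presupposes that G has no universal vertex"
    return any(not is_dominating(G, set(G.neighbors(v)) | {v}) for v in G.nodes)

print(cmin_equals_two(nx.heawood_graph()))  # True  -> c_min = 2
print(cmin_equals_two(nx.cycle_graph(6)))   # True  -> c_min(C_6) = 2
print(cmin_equals_two(nx.cycle_graph(4)))   # False -> c_min(C_4) is not 2 (it is 4)
```

Each closed-neighborhood check costs at most O(n + m) time, so the whole test is polynomial, as claimed above.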
http://arxiv.org/abs/2307.02943v1
20230706121924
Stochastic Approximation for Expectation Objective and Expectation Inequality-Constrained Nonconvex Optimization
[ "Francisco Facchinei", "Vyacheslav Kungurtsev" ]
math.OC
[ "math.OC" ]
[ Aristides Gionis August 1, 2023 ==================== § INTRODUCTION In this paper we consider the constrained optimization problem, [ min_x∈ℝ^n F(x),; s.t. C(x)≤ 0, ] where F:ℝ^n→ℝ and C:ℝ^n→ℝ^m are (generally nonconvex) continuously differentiable. We assume that this is a stochastic optimization problem wherein for all x, F(x) and C(x) are defined to be expectations of functions that depend on random variables ξ and ζ, respectively, defined on a probability space (Ω^f×Ω^c,Σ^f×Σ^c,P_x), i.e., F(x) = 𝔼_P_x[ f(x,ξ) ], and C(x) = 𝔼_P_x[ c(x,ζ) ] In the sequel we discard, in the notation, the stochastic dependence on P_x. We do not assume any functional form in regards to the dependence of f and c on ξ and ζ. We do allow for dependence between ξ and ζ, in general, however. Thus each noisy function evaluation involves sampling σ∈Σ^f×Σ^c from the product σ-algebra on the sample space Ω^f×Ω^c based on the probability measure P_x. As standard for stochastic optimization, this framework is appropriate for large scale instances of learning, where data cannot be stored entirely in memory and mini batch samples must be taken to compute problem information used to calculate the iterate update at each iteration. For instance, a model can be trained on some data while satisfying some maximal loss on another set of data. Alternatively, the optimization problem can define some engineering process that is inherently stochastic and the functions represent its operational performance on some criteria. We assume, as is standard, that the second moments of the uncertain problem functions are bounded, [ ∃ M, s.t. ∀ x∈ℝ^n, 𝔼[∇ f(x,ξ)^2]≤ M, 𝔼[∇ c(x,ζ)^2]≤ M, 𝔼[c(x,ζ)^2]≤ M,; ∃ T,M̅, s.t., ∀ t≤ T,x∈ℝ^n, 𝔼[e^t∇ f(x,ξ)]≤M̅, 𝔼[e^t∇ c(x,ζ)]≤M̅, 𝔼[e^tc(x,ζ)]≤M̅, ] We are interested in developing a stochastic approximation algorithm for solving (<ref>). The algorithm and convergence theory will be based upon the method and Ghost penalty framework presented in <cit.>. There it was shown, among other results, that a modified sequential convex approximation algorithm with a diminishing step-size converges asymptotically to a stationary point of the underlying nonconvex constrained optimization problem. Classically, all the literature on stochastic approximation for solving (<ref>) considers convex constraints for which there is a simple projection operator, including standard texts on stochastic approximation <cit.>, or cases where the indicator of the constraint has a proximal operation that can be easily computed in closed form <cit.>. Otherwise, a classic work with functional constraints is <cit.> and more recently these are considered in <cit.> which considers stochastic approximation (SA) in the case where all the problem functions are convex, and develops and studies a penalty algorithm for solving the problem. More recently <cit.> appeared, considering tailored algorithms for (<ref>) when F(x) and C(x) are convex. Closer to our work, <cit.> presents an SQP method that solves (<ref>) with deterministic but nonlinear, and thus nonconvex, equality constraints. The paper <cit.> considers the more general inequality case, presenting an active set SQP method for solving such problems. Finally, the paper <cit.> is the only one, to the best of our knowledge, that studies stochastic constraints specifically and has theoretical guarantees. 
The work presents a proximal point framework that, similarly to this work, solves a series of strongly convex subproblems, with asymptotic guarantees of convergence to a KKT point under conditions of recursive feasibility. In this paper we intend to advance the state of the art by considering stochastic nonconvex constraints, while using the framework developed in <cit.> in order to construct a simple diminishing step-size method that does not require the computation of penalty parameters. We are able to show almost sure convergence of the iterates to an appropriate stationary point, while using a step direction with zero bias and finite variance (as opposed to asymptotically vanishing variance) relative to a desired deterministic direction. There exists work on directly solving (<ref>) with Sample Average Approximation (SAA). For instance, see <cit.> and <cit.>. The works <cit.> and <cit.> extend this line of work to precise guarantees for nonconvex and convex constraints under problem assumptions of heavier tails. SAA and SA are two separate, complementary tools for solving stochastic optimization problems. Given the reliance of SAA on the law of large numbers and the bias of the estimated solution, however, SA has in practice been used more often in big data settings; witness, e.g., the popularity of stochastic gradient and its variants for training neural networks. Another challenge with SAA, as pointed out in <cit.>, is that, typically, for a sequence of SAA solutions to converge to a specific local (or the global) solution of (<ref>) as the number of samples grows large, the corresponding local (or global) solution of the deterministic problem must be obtained. This is, of course, computationally intractable for nonconvex optimization in high variable dimensions (and it is impossible to target a specific local solution a priori). In practice, we seek a stationary point or local minimizer. Of course, SAA is still a valuable tool for certain classes of problems, and in this paper we utilize SAA to solve a particular subproblem that acts as an unbiased estimator within the SA iteration. § ALGORITHM AND CONVERGENCE In this section we consider a general iterative algorithm for solving (<ref>). Consider that for a current iterate x^ν, a subproblem as originally described in <cit.> is given by, [ min_d ∇ F(x^ν)^T d+τ/2d^2,; s.t. C(x^ν)+∇ C(x^ν)^T d≤κ(x^ν)e; d_∞≤β ] with, correspondingly, κ(x^ν) := (1-λ) max_i∈{1,...,m}{C_i(x^ν)_+}+λmin_d{max_i {(C_i(x^ν)+∇ C_i(x^ν)^T d)_+} , d_∞≤ρ} where 0<λ<1, 0<ρ<β and a_+=max(0,a). This has the effect of expanding the feasible region of the subproblem such that it is always feasible. Define the solution to (<ref>) as d(x^ν). This is a standard successive convex approximation subproblem in which the feasible region is expanded based on ideas from <cit.>. We will also make use of the following quantity, θ(x) ≜max_i {C_i(x)_+} - κ(x) = λ(max_i {C_i(x)_+} - min_d {max_i {C̃_i(d; x)_+ } | d_∞≤ρ}) We shall make the following assumption on F(x) and C(x): F(x) is Lipschitz continuously differentiable with constant L_∇ F and, for all i, C_i(x) is Lipschitz continuously differentiable with constant L_∇ C_i. We also recall a useful result, <cit.> If the eMFCQ condition holds at every x∈ℝ^n, there exists a positive constant θ such that for all x,y, it holds that, d(x)-d(y)≤θx-y^1/2 Of course, in the general case we cannot evaluate ∇ F(x^ν), etc.
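To make the construction above concrete, the subproblem and the relaxation level κ(x^ν) can be assembled directly from ∇ F(x^ν), C(x^ν) and the constraint Jacobian. The following is a minimal Python sketch using cvxpy, offered purely for illustration: the function names and the choice of cvxpy are ours (the numerical experiments later in the paper solve the subproblems with OSQP), and the inputs gradF, C, Jc stand for ∇ F(x^ν), C(x^ν) and ∇ C(x^ν)^T.

```python
import numpy as np
import cvxpy as cp

def kappa(C, Jc, lam=0.5, rho=0.8):
    # kappa(x) = (1-lam) * max_i C_i(x)_+ + lam * min_{||d||_inf <= rho} max_i (C_i(x) + (Jc d)_i)_+
    d = cp.Variable(Jc.shape[1])
    inner = cp.Problem(cp.Minimize(cp.max(cp.pos(C + Jc @ d))),
                       [cp.norm(d, "inf") <= rho])
    inner.solve()
    return (1.0 - lam) * np.maximum(C, 0.0).max() + lam * inner.value

def subproblem_direction(gradF, C, Jc, tau=1.0, beta=10.0, lam=0.5, rho=0.8):
    # d(x) = argmin_d gradF^T d + (tau/2)||d||^2
    #        s.t.  C + Jc d <= kappa(x) e,  ||d||_inf <= beta
    d = cp.Variable(Jc.shape[1])
    prob = cp.Problem(
        cp.Minimize(gradF @ d + 0.5 * tau * cp.sum_squares(d)),
        [C + Jc @ d <= kappa(C, Jc, lam, rho), cp.norm(d, "inf") <= beta],
    )
    prob.solve()
    return d.value
```

By construction the relaxed constraint level is attainable, so the quadratic program is always feasible, which is exactly the role of κ(x^ν).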
Instead, we perform a stochastic approximation (SA) algorithm for iteratively solving (<ref>) by taking, at each iteration, an appropriate sample that forms a noisy unbiased estimate of d(x^ν). In the context of constrained optimization, computing such an estimate is not trivial, however. In the subsequent section, we will describe a procedure that in fact manages to compute an appropriate vector. In this section, assume that it is possible to do so, in particular, assume that for each k, we compute a stochastic quantity d̃(x^ν) such that 𝔼[d̃(x^ν)] = d(x^ν). We consider the simple algorithm defined to be the following, * Choose x^1, γ^ν satisfying, ∑_ν=1^∞γ^ν = ∞, ∑_ν=1^∞(γ^ν)^2 <∞ x^1∈ℝ^n and γ^1<1. * For iteration ν=1,2,... * Let d̃(x^ν) be an unbiased estimate of the solution d(x^ν) to (<ref>) * Set x^ν+1=x^ν+γ^νd̃(x^ν) * Repeat §.§ Convergence In this section we proceed with the proof of asymptotic convergence with probability 1. The argument will partially resemble the proof of <cit.>. Consider the sequences, x^ν+1=x^ν+γ^νd̃( x^ν), ν≥ 1, x̃^0=x_0 which is the stochastic process generated by the algorithm defined above. We assume that the variance of the noise of d̃(x^ν) as an estimate of d(x^ν) is uniformly bounded, allowing us to write the expression for x^ν in terms of a stochastic approximation iteration, x^ν+1 = x^ν+ γ^ν(d( x^ν)+M_ν) where M_ν is the noise term. Consider the sample space Ψ of all possible values of {M_ν}_ν=1,...,∞. Defining a cylinder Ψ^ν of possible values up until ν, we define the filtration σ-algebras ℱ_ν=σ(Ψ^ν) with ℱ_-1=∅ for completeness, satisfying, ℱ_-1⊂ℱ_0⊂ℱ_1⊂ ... ⊂ℱ_ν⊂ .... Under this construction, given j∈ℤ, relative to ℱ_ν, the iterate x^ν-j is deterministic for j≤ 1 and stochastic for j>1. We let ℱ = ⋃_ν≥ -1ℱ_ν, which is countable, and thus well-defined as a σ-algebra in its own right. Finally, the set ℱ_∞ = ⋂_k=-1^∞⋃_ν≥ k(ℱ_ν∖ℱ_ν-1) is now the set of tail-events for the sequence ℱ_ν. We can then consider each element of the sequence of realizations {M_ν} as arising from a sampling of the probability space (Ψ,ℱ_ν,P^M), i.e., each iteration is to be considered as a computation of d(x^ν) followed by a sampling M_ν from the σ-algebra ℱ_ν defined on Ψ under the probability measure P^M. We make the following assumption on d̃(x^ν). It holds that d̃(x^ν)=d(x^ν)+M_ν satisfies, * 𝔼[M_ν|ℱ_ν] = 0, * 𝔼[M_ν|ℱ_ν^2]≤σ^2 Defining an appropriate procedure to satisfy these conditions will be the focus of the subsequent section. We will now prove convergence using the Ghost penalty framework as in <cit.>. In order to do so, we must make an assumption on the boundedness of the iterates. It holds that x^ν lie in a bounded compact set almost surely This could be considered a strong assumption, however, entire books of SA considering the unconstrained case have been written with this assumption throughout <cit.> and in the unconstrained case, proving the condition has only been recently done <cit.> and even then with a deterministic gradient bound. Assume that MFCQ holds at for all x∈ℝ^n as well as Assumption <ref>. Then, almost surely, any limit point x̂ of x^ν, with d̃(x^ν) satisfying Assumption <ref>, satisfies d(x̂)=0 and x̂ satisfies the KKT conditions. Consider the ideal step d(x^ν) which is the solution to the subproblem given in (<ref>). Since the MFCQ holds for all x^ν, we have by <cit.> that d(x^ν) is a KKT point of (<ref>). Thus it holds that, for any realization in ℱ and all ν, 0 ∈∇ F(x^ν) +τ d(x^ν)+ ∇ C(x^ν) μ^ν + N_β𝔹^n_∞(d(x^ν)). 
Therefore, we have for some ℵ^ν∈ N_β𝔹^n_∞(d(x^ν)), [ τd(x^ν)^2+∇ F(x^ν) d(x^ν) = -(∇ C(x^ν) μ^ν) d(x^ν)-(ℵ^ν) d(x^ν); ≤μ^ν [C(x^ν) - κ(x^ν) e] ≤μ^ν [max_i{C_i(x^ν)_+} - κ(x^ν)] e ] where we used the constraints of (<ref>). Recalling the definition of θ(x^ν) (<ref>), we have, ∇ F(x^ν) d(x^ν) ≤ -τd(x^ν)^2 + θ(x^ν) μ^ν e. Let us now consider the nonsmooth (ghost) penalty function W(x;ε) ≜ f(x) + 1/εmax_i {g_i(x)_+}, with a positive penalty parameter ε. This function plays a key role in the subsequent convergence analysis although it does not appear anywhere in the algorithm itself. In particular, we start with taking expectations with respect to ℱ_ν to define the decrease in W(x^ν;ε) conditional on the iterates up to x^ν, using the Descent Lemma and Lipschitz continuity of the problem derivatives, [ 𝔼[W(x^ν + 1;ε)|ℱ_ν] - W(x^ν;ε); = 𝔼[F(x^ν + γ^νd̃(x^ν))|ℱ_ν] - F(x^ν); + 1/ε[𝔼[max_i{C_i(x^ν+ γ^νd̃(x^ν))_+}|ℱ_ν] - max_i {C_i(x^ν)_+}]; ≤ γ^ν∇ F(x^ν)𝔼[d̃(x^ν)|ℱ_ν] + (γ^ν)^2 L_∇ F/2𝔼[d̃(x^ν)^2|ℱ_ν]; + 1/ε[max_i {(C_i(x^ν) + γ^ν∇ C_i(x^ν)𝔼[d̃(x^ν)|ℱ_ν])_+}; - max_i{C_i(x^ν)_+} + (γ^ν)^2 max_i{L_∇ C_i}/2𝔼[d̃(x^ν)^2|ℱ_ν]]; (a)= γ^ν∇ F(x^ν) d(x^ν) + (γ^ν)^2 L_∇ f/2𝔼[d̃(x^ν)^2|ℱ_ν]; + 1/ε[max_i {(C_i(x^ν) + γ^ν∇ C_i(x^ν) d(x^ν))_+}; - max_i{C_i(x^ν)_+} + (γ^ν)^2 max_i{L_∇ C_i}/2𝔼[d̃(x^ν)^2|ℱ_ν]]; (b)≤ γ^ν∇ F(x^ν) d(x^ν) + 1/ε[max_i {(1 - γ^ν) C_i(x^ν)_+ + γ^νκ(x^ν)} - max_i{C_i(x^ν)_+}]; + (γ^ν)^2/2 (L_∇ F + max_i{L_∇ C_i}/ε) 𝔼[d̃(x^ν)^2|ℱ_ν]; ≤ γ^ν∇ F(x^ν) d(x^ν) - γ^ν/ε θ(x^ν) + (γ^ν)^2/2 (L_∇ F + max_i{L_∇ C_i}/ε) 𝔼[d̃(x^ν)^2|ℱ_ν], ] where in (a) we used that 𝔼[d̃(x^ν)|ℱ_ν] = d(x^ν) and (b) uses the constraint C(x^ν)+∇ C(x^ν) d(x^ν)≤κ(x^ν) e on d(x^ν) and we used the definition of θ(x^ν) for the last inequality. Furthermore, we observe that [ ∇ F(x^ν) d(x^ν) - 1/ε θ(x^ν) ≤ - τd(x^ν)^2 + θ(x^ν) μ^ν e - 1/ε θ(x^ν) ≤ - τd(x^ν)^2 + (m μ^ν_∞ - 1/ε) θ(x^ν), ] where the first inequality is entailed by (<ref>). By (<ref>), for any fixed x^ν and for any η∈ (0, 1], there exists ε̅^ν > 0 such that ∇ F(x^ν) d(x^ν) - 1/ε θ(x^ν) ≤ - ητd(x^ν)^2 ∀ε∈ (0, ε̅^ν]. Now suppose that (<ref>) does not hold uniformly for every x^ν, that is η∈ (0,1], and a subsequence {x^ν}_𝒩 exists, where 𝒩⊆{0, 1,2, …} such that we can construct a corresponding subsequence {ε^ν}_𝒩∈ℝ_+ with ε^ν↓ 0 on 𝒩 and ∇ F(x^ν) d(x^ν) - 1/ε^ν θ(x^ν) > -ητd(x^ν)^2 for every ν∈𝒩. For (<ref>) to hold, relying on (<ref>), the multipliers' subsequence {μ^ν}_𝒩 must be unbounded. Now by the MFCQ assumption we reach a contradition. So we continue by considering that  (<ref>) holds uniformly for every x^ν. And so, [ 𝔼[W(x^ν + 1; ε̃)|ℱ_ν] - W(x^ν; ε̃) ≤ - γ^νη c d(x^ν)^2 + (γ^ν)^2/2 (L_∇ F + max_i{L_∇ C_i}/ε̃) 𝔼[d̃(x^ν)^2|ℱ_ν]; ≤ -γ^ν[η c - γ^ν (L_∇ F + max_i{L_∇ C_i}/ε̃)] d(x^ν)^2; +(γ^ν)^2 (L_∇ F + max_i{L_∇ C_i}/ε̃)𝔼[M_ν^2|ℱ_ν] ] where we used the triangle inequality to split d̃(x^ν)^2=d(x^ν)+M_ν^2≤ (d(x^ν)+M_ν)^2≤ 2d(x^ν)^2+2M_ν^2 Thus, for ν≥ν̅ sufficiently large, there exists ω such that, 𝔼[W(x^ν+1; ε̃)|ℱ_ν] - W(x^ν; ε̃) ≤ - ωγ^νd(x^ν)^2+(γ^ν)^2 C_M σ^2. with C_M=(L_∇ f + max_i{L_∇ g_i}/ε̃)>0. Now, given that ∑_ν=1^∞ (γ^ν)^2 <∞, we can apply the Super-martingale Convergence Theorem (for instance, Theorem 1 in <cit.>), from which we can conclude that, lim_ν∑_t=ν̅^νγ^td(x^t)^2<+∞. and that W(x^ν; ε̃) converges almost surely. Thus, for almost every realization it holds that lim inf_ν→∞d(x^ν)=0. As lim inf_ν→∞d(x^ν)=0 is an event in ℱ_∞, we can denote the set 𝒟⊆ℱ_∞ as the probability one set for which this is the case. 
Recalling the definition of θ (<ref>), assume that we are in 𝒟 and consider any sequence realized by ℱ whose tail events lie in 𝒟. Taking the limit on a subsequence 𝒩 such that d(x^ν)𝒩→ 0, we have ∇ C(x^ν)_∞d(x^ν)𝒩→ 0 and so θ(x^ν) 𝒩→ 0. Now let x̂ be a cluster point of subsequence {x^ν}_𝒩. Since θ(x^ν) 𝒩→ 0 implies κ (x̂) = max_i {g_i(x̂)_+}, by the MFCQ, <cit.> implies that x̂ is a KKT point for (<ref>). Specifically, taking the limit in (<ref>), we obtain by the KKT multipliers' boundedness and outer semicontinuity property of the normal cone mapping N_β𝔹^n_∞(∙), - ∇ F(x̂) - ∇ C(x̂) ξ̂∈ N_β𝔹^n_∞(0) = {0}, with ξ̂∈ N_ℝ^m_-(C(x̂) - κ(x̂) e) = N_ℝ^m_-(C(x̂)) and where the first equality follows from <cit.>. In turn, x̂ is a KKT point for problem (<ref>). Consider the set of tail events 𝒟^0⊆𝒟⊆ℱ_∞ for which it holds that lim sup_ν→∞d(x^ν)>0. Then, for any sequence {x^ν} determined by ℱ̅⊆ℱ whose tail events lie in 𝒟^0, there exists δ>0 such that d(x^ν)>δ and d(x^ν)<δ/2 for infinitely many νs. Therefore, there is an infinite subset of indices N such that, for each ν∈ N, and some i_ν>ν, the following relations hold: d(x^ν)<δ/2,d(x^i_ν)>δ and, if i_ν > ν + 1, δ/2≤d(x^j)≤δ,ν<j<i_ν. Hence, for all ν∈ N, we can write [ δ/2 < d(x^i_ν)-d(x^ν) ≤ d(x^i_ν) - d(x^ν)(a)≤θx^i_ν-x^ν^1/2; (b)≤ θ[∑_t=ν^i_ν-1γ^t (d(x^t)+M_t)]^1/2(c)≤θδ^(1/2)(∑_t=ν^i_ν-1γ^t)^1/2+θ(∑_t=ν^i_ν-1γ^tM_t)^1/2, ] where (a) is due to Proposition <ref>, (b) comes from the triangle inequality and the updating rule of the algorithm and in (c) we used (<ref>). By (<ref>) we have _ν→∞lim inf θ (δ)^1/2(∑_t=ν^i_ν-1γ^t)^1/2+θ(∑_t=ν^i_ν-1γ^tM_t)^1/2>δ/2. We prove next that (<ref>) is in contradiction with the convergence of {W(x^ν;ε̃)} for any ε̃∈ (0, ε̅]. To this end, we first show that d(x^ν)≥δ/4, for sufficiently large ν∈ N. Reasoning as in (<ref>), we have d(x^ν+1) - d(x^ν)≤θx^ν+1-x^ν^1/2≤θ (γ^ν)^1/2 (d(x^ν)+M_ν)^1/2, for any given ν. By Assumption <ref> part 2, for any M however large, there exists a 0<α(M)<1 sufficiently small such that ℙ[M^ν≥ M|ℱ_k-1]≤α(M). Thus by construction of ν, since d(x^ν+1)≥δ/2, if d(x^ν)≤δ/4, we have, assuming M^ν≤ M, δ/4 ≤θ (γ^ν)^1/2 (δ/4+M_ν)^1/2≤θ (γ^ν)^1/2 (δ/4+M)^1/2 which is impossible for ν≥ν̅ for ν̅ large enough. Thus M^ν > M for such ν≥ν̅. However, given that α(M)>0 and arbitrary, it must hold with probability one that M^ν≤ M occurs infinitely often, reaching a contradiction. Thus it holds that for some ν̅, d(x^ν)≥δ/4 for ν≥ν̅ and ν∈𝒩. Now (<ref>) implies that there exists θ̅ and ν̃ such that, ∑_t=ν^i_ν-1γ^t≥θ̅. for ν≥ν̃. However, lim_ν→∞∑_t=1^νγ^t d(x^t)^2 ≥∑_ν∈𝒩∑_t=ν^i_ν-1γ^t d(x^t)^2 ≥∑_ν∈𝒩, ν≥max{ν̅,ν̃}[θ̅δ^2/16] →∞ which contradicts (<ref>). Thus it is impossible for a sequence in ℱ̅ to have lim sup_ν→∞d(x^ν)≥δ for any δ and limd(x^ν)→ 0 with probability one. § COMPUTING AN UNBIASED ESTIMATE OF THE DESIRED STEP In this Section we present a method of computing the unbiased estimate d̃(x^ν) as needed for the convergence theory above. One could consider naively computing a one-sample stochastic estimate of (<ref>), i.e., by, at each iteration ν taking a single sample (ξ^ν,ζ^ν) and solving [ min_d ∇ f(x^ν,ξ^ν)^T d+τ/2d^2,; s.t. c(x^ν,ζ^ν)+∇ c(x,ζ^ν)^T d≤κ(x,ξ^ν,ζ^ν)e; d_∞≤β ] to obtain d(x^ν;(ξ^ν,ζ^ν)) as an estimate for d(x^ν), with κ(x,ξ^ν,ζ^ν) as in (<ref>) but with C_i(x^ν) replaced by ∇ c_i(x^ν,ζ^ν), however in general this will not be an unbiased estimate of d(x^ν). 
This is because d(x^ν) is a complicated nonlinear function of the noise of the component parts of (<ref>), whereas in standard SA, the noise can be modeled as being additive and unbiased with respect to the gradient. In particular, we expect that generally 𝔼[d(x^ν;(ξ^ν,ζ^ν))]≠ d(x^k). Instead we consider a particular Monte Carlo technique for estimating the solution of the deterministic subproblem defining d(x^ν) which has been shown to define an unbiased estimate of this solution. Specifically, we use the work of <cit.>, based originally on <cit.>, which presents an unbiased Monte Carlo method for estimating α = r(𝔼(X)), for any function r and random variable X. In this case r(·)=d(x^ν), with X_ν=(∇ f(x^ν,ξ^ν),c(x^ν,ζ^ν),∇ c(x,ζ^ν)) the set of random variables. Thus we generically define d(x^ν,X_ν) as the solution to (<ref>) for a sampled X_ν, and denote multiple samples j∈{1,...,J} by X_ν,j and d(x^ν,{X_ν,j}) as the solution to the SAA subproblem, [ min_d 1/J∑_j=1^J ∇ f(x^ν,ξ^ν,j)^T d+τ/2d^2,; s.t. 1/J∑_j=1^J c(x^ν,ζ^ν,j)+1/J∑_j=1^J∇ c(x^ν,ζ^ν,j)^T d≤κ(x^ν,{ξ^ν,j},{ζ^ν,j})e; d_∞≤β ] where now [ κ(x^ν,{ξ^ν,j},{ζ^ν,j}):=(1-λ) max_i∈{1,...,m}{1/J∑_j=1^J c_i(x^ν,ζ^ν,j)_+}; + λmin_d{max_i {(1/J∑_j=1^J c_i(x^ν,ζ^ν,j)+1/J∑_j=1^J∇ c_i(x^ν,ζ^ν,j)^T d)_+}, d_∞≤ρ} ] The estimate d̃(x^ν) we compute is defined as follows, [ d̃(x^ν) = Δ_N/p(N)+d(x^ν;X_ν), where,; Δ_N = d(x^ν;S(2^N+1)/2^N+1)-1/2{ d(x^k;S_O(2^N)/2^N)+d(x^k;S_E(2^N)/2^N)}; where S(n) denotes n samples of X_ν and; S_E(n),S_O(n) denote even and odd numbers of samples, i.e., S_E(n) denotes 2n samples of X_ν; p(n) = P(N=n) and N is a geometric random variable ] Thus to perform an estimate of d(x^ν), one samples N from a geometric distribution with some parameter p_α, then creates a set of 2^N+2+1 samples as required to calculate the quantities Δ_N and the four subproblem computations d(x^ν,{X_ξ,ζ}) used to obtain d̃(x^ν). In the rest of this section, we show that this d̃(x^ν) satisfies the properties needed for convergence in Section <ref>, in particular Assumption <ref> on the unbiasedness and uniformly bounded variance of the estimate. §.§ Deterministic Constraints Case In the case where the constraints are entirely deterministic, i.e., C(x)=c(x), then in <cit.>, under a strong but standard collection of assumptions, it was shown that the estimate d̃(x^ν) defined above is an unbiased estimate for d(x^ν) with bounded variance. In this section we review the results and note the required assumptions for our context in order to ensure a uniform bounded variance across ν. In the subsequent subsection we proceed to consider the more general scenario. We now state the assumptions necessary for the unbiased estimation to hold in this case. Denote the feasible region at iteration ν, F_ν = {d: C(x^ν)+∇ C(x)^T d≤κ(x^ν)e, d_∞≤β} We state the assumptions as presented in <cit.>, discarding those that automatically hold for the structure of (<ref>). We are left with the following necessary conditions, It holds that, * There exist δ_0>0 and σ^2_F>0 such that for |t|≤δ_0 and for all x^ν, sup_d∈ F_ν𝔼_ξ[e^t (∇ f(x^ν,ξ)^T d-∇ F(x^ν)^T d)] ≤ e^σ_F^2 t^2/2 * The linear independence constraint qualification holds for (<ref>) at d(x^ν) for all ν. * Strict complementarity holds for (<ref>) at d(x^ν) for all ν. 
from which we can deduce,  <cit.> The solution of problem (<ref>) with a deterministic constraint (i.e., c(x^ν,ζ)=C(x^ν) and ∇ c(x^ν,ζ)=∇ C(x^ν) w.p.1) under Assumptions <ref> satisfies, for some σ^2, 𝔼[d̃(x^ν)]=d(x^ν), Var(d̃(x^ν))<σ^2 for all ν, and the computational complexity for computing d̃(x^ν) is bounded in expectation. Along with the above assumptions we need to verify the condition: * There exists a locally bounded measurable function ι:Ω→ℝ^+ and γ,δ>0 such that |∇ f(x^ν,ξ)^T d+τ/2d^2-∇ f(x^ν,ξ)^T d'-τ/2d'^2| ≤ι(ξ)d-d'^γ for all feasible d and d' with d-d'≤δ, and ι(ξ) has a finite moment generating function in a neighborhood of the origin. Note that in the original, only a single optimization problem was under consideration, whereas here, since we need a uniformly bounded variance SA error term, ι(ξ) should be independent of x^ν. Indeed, let γ=1 and ι(ξ)=| sup_x∈ℝ^n∇ f(x,ξ)+δ|. Given (<ref>), we have that the moment generation function 𝔼[e^tι(ξ)] is finite. Next we must also show the condition, * There exists δ'_0, t>0 and M̅_f such that, sup_d-d(x^ν)≤δ'_0𝔼[e^t∇_x f(x^ν,ξ)+τ d] ≤M̅_F for all ν. Again, this follows from (<ref>) and the boundedness of {x^ν}. §.§ General Case In this section, we prove the equivalent of <cit.> for the case of stochastic constraints, i.e., the original problem with C(x)=𝔼[c(x,ζ)]. We take the base point for the subproblems, x as given in this section and for ease of exposition drop the dependence on the iteration ν. Thus all expectations will be implicitly with respect to (ξ^ν,ζ^ν) conditional on the σ-algebra ℱ_ν. However, we will take care that all constants and bounds shall be defined to be uniform across the iterations ν and thus hold across all x^ν, noting that we shall make use of Assumption <ref> in order to do so. Consider a set of stochastic realizations {ξ_i,ζ_i}_i=1,...,2^N, and now the following subproblem solutions, * κ_* as the quantity defined in (<ref>). * κ_N as, [ κ^N:=(1-λ) max_i∈{1,...,m}{1/2^N∑_j=1^2^N c_i(x,ζ_j)_+}; + λmin_d{max_i {(1/2^N∑_j=1^2^N c_i(x,ζ_j)+1/2^N∑_j=1^2^N∇ c_i(x,ζ_j)^T d)_+}, d_∞≤ρ} ] * (d_*,μ_*) as the solution to (<ref>), i.e. d(x^k) with the corresponding multiplier vector. * (d_*^N,μ_*^N) as the solution to, if it exists, [ min_d ∇ F(x)^T d+τ/2d^2,; s.t. C(x)+∇ C(x)^T d≤κ^N e; d_∞≤β ] * (d^+_ϵ,μ^+_ϵ) as the solution to, if it exists, [ min_d ∇ F(x)^T d+τ/2d^2,; s.t. C(x)+∇ C(x)^T d≤(κ^N+ϵ) e; d_∞≤β ] with feasible set ℱ^+_ϵ. * (d^-_ϵ,μ^-_ϵ) as the solution to, if it exists, [ min_d ∇ F(x)^T d+τ/2d^2,; s.t. C(x)+∇ C(x)^T d≤(κ^N-ϵ) e; d_∞≤β ] with feasible set ℱ^-_ϵ. * (d^c_N,μ^c_N) the solution to, [ min_d ∇ F(x)^T d+τ/2d^2,; s.t. 1/2^N∑_j=1^2^N c(x,ζ_j)+1/2^N∑_j=1^2^N∇ c(x,ζ_j)^T d≤κ^N e; d_∞≤β ] with feasible set ℱ^c (which is always nonempty). * (d_N,μ_N) the solution to, [ min_d 1/2^N∑_j=1^2^N∇ f(x,ζ_j)^T d+τ/2d^2,; s.t. 1/2^N∑_j=1^2^N c(x,ζ_j)+1/2^N∑_j=1^2^N∇ c(x,ζ_j)^T d≤κ^N e; d_∞≤β ] with feasible set ℱ^N (which is always nonempty). We now state the assumptions needed for the main results in this section. To this end we define a generic stochastically dependent feasible set, F({ζ_i};x) = {d: ∑_i c(x,ζ)+∑_i ∇ c(x,ζ)^T d≤κ(x,{ζ_i})e, d_∞≤β, } For this we adapt the conditions in Assumption <ref> to apply across all realizations of the feasible constraint region. Furthermore we add some assumptions as introduced for the study of SAA of expectation constraints in <cit.>. We drop the dependence on the iteration index k. It holds that, * There exist δ_0>0 and σ^2>0 such that for |t|≤δ_0, for a.e. 
{ζ_i}_i∈ℕ, sup_d∈ F({ζ_i};x)𝔼_ξ[e^t (∇ f(x,ξ)^T d-∇ F(x)^T d)] ≤ e^σ^2 t^2/2 * The linear independence constraint qualification holds for (<ref>) at d(x,{X_ξ,ζ}) as well as d_*^N and d^c_N and their associated optimization problems defined above, for almost every {X_ξ,ζ}. * Strict complementarity holds for (<ref>) at d(x,{X_ξ,ζ}) for almost every {X_ξ,ζ}. * For all d∈ F({ζ_i};x) it holds that the moment generating function M_d(·) of ∇ c(x,ζ)^T d-∇ C(x)^T d is bounded by some M̅_c (independent of x) in a neighborhood of zero. * It holds that for ψ(d;x) = max_i {(C_i(x)+∇ C_i(x)^T d)_+} and ψ^N(d;x) = max_i {(∑_j=1^2^N c_i(x,ζ_j)+∑_j=1^2^N∇ c_i(x,ζ_j)^T d)_+}, for all x and d, M^κ_d,x(t):= lim_N→∞𝔼[e^t[ψ^N(d;x)-ψ(d;x)]] exists as an extended real number and M^κ_d,x(t)<∞ for t sufficiently close to zero. * It holds that there exists δ such that for all x, ℙ[1/2^N∑_j=1^2^Nmax_i,x∈ℝ^n∇ c_i(x,ζ_j)≥ϕ̅] ≤ e^-2^Nδ where ϕ̅≥𝔼[max_i,x∈ℝ^n∇ c_i(x,ζ_j)]. It holds that, * For any ζ there exists an integrable function ϕ such that, |∇ c(x,ζ)^T d-∇ c(x,ζ)^T d'|≤ϕ(ζ)d-d' Denote Φ:=𝔼[ϕ(ζ)]. * The moment generating function M_ϕ(·) of ϕ(ζ) is finite in a neighborhood of zero. This is clear by letting ϕ(ζ)=sup_x∈ℝ^n∇ c(x,ζ), the boundedness of {x^ν}, and <ref>. There exists α_0 and c_ϵ such that for sufficiently large N, it holds that, ℙ(d_N-d_*≥ϵ) ≤ c_ϵ e^-α_0 2^N β(ϵ) and, ℙ(μ_N-μ_*≥ϵ) ≤ c_ϵ e^-α_0 2^N β(ϵ) where β(ϵ)=β_0 ϵ^2 as ϵ→ 0 and β_0>0. All constants are uniform across major iterations ν. Let ϵ̅>0. First we claim that for any ϵ̂ we can find ϵ_κ(ϵ̂), locally quadratically varying with ϵ̂, such that ℙ(κ_*-κ_N≥ϵ̂)≤ c_κ e^-2^Nϵ_κ(ϵ̂). Note that the claim resembles <cit.> and thus we prove it accordingly. To begin with we claim that κ̂(d,ζ) = max_i{c_i(x,ζ)+∇ c_i(x,ζ)^T d} is calm with respect to d with a modulus depending on ζ uniform across x. Indeed |κ(d,ζ)-κ(d',ζ)|≤max_i ∇ c_i(x,ζ)d-d'≤ H(ζ)d-d' for measureable H(ζ) by the continuous differentiability of c(·,ζ) and the compactness of {x^ν}, and Lemma <ref>. We can now combine, 1) Assumption <ref> Part 6 2) the fact that from (<ref>) it holds that the moment generating function, 𝔼[e^tmax_i,x∈ℝ^n∇ c_i(x,ζ_j)] is finite valued for t close to zero, and, finally, that 3) the boundedness of the ∞ norm constraint, condition (<ref>) Lemma <ref>, and the independence (with respect to each other) of the samples {ζ_i} to say that lim_N→∞𝔼[e^t(κ_N-κ_*)] exists and is finite for t close to zero (see Assumption 3.1 and the subsequent statement in <cit.>), to assert that we have satisfied the conditions for <cit.> for κ(d,ζ). The final result comes from subsequently applying <cit.> whose proof simply combines <cit.> and <cit.> (which clearly applies to the optimization problem associated with κ_N and κ_*). Thus the claim has been proven. Now, by strong stability of the primal-dual solution pair of a regular strongly convex optimization problem <cit.>, it holds that for sufficiently small ϵ̂, κ_*-κ_N≤ϵ̂ implies that (d_*^N,μ_*^N) exists and d_*-d_*^N+μ_*-μ_*^N≤ Cϵ̂ for some C>0. Let ϵ̂ be small enough such that Cϵ̂≤ϵ̅/5. Again by strong stability (applying Assumption <ref> part 2) we also know that for sufficiently small ϵ, (d^+_ϵ,μ^+_ϵ) and (d^-_ϵ,μ^-_ϵ) exist and d_*^N-d^+_ϵ+μ_*^N-μ^+_ϵ≤ϵ̅/5 and d_*^N-d^-_ϵ+μ_*^N-μ^-_ϵ≤ϵ̅/5. 
But then, since the assumptions of the Proposition are satisfied by Assumption <ref> Part 5, Lemma <ref> and (<ref>), we can apply <cit.> to conclude that, ℙ{ℱ^+_ϵ_f⊆ℱ^c⊆ℱ^-_ϵ_f}≥ 1-B e^-2^Nϵ_f^2/8√(M) where B is a constant that can depend on n (i.e., it also depends on β but we can drop this dependence since the conditions of the result still hold for the problem without it, and this would only affect the constant to the effect of making it advantegeously smaller). Thus, we can make ϵ_f as small as necessary in order that, by the same perturbation arguments (again, requiring Assumption <ref> part 2), {ℱ^+_ϵ_f⊆ℱ^c⊆ℱ^-_ϵ_f} implies that (d^-_ϵ_f,μ^-_ϵ_f) and (d^+_ϵ_f,μ^+_ϵ_f) exist and d^c_N-d^-_ϵ_f+μ^c_N-μ^-_ϵ_f≤ϵ̅/5 and also d^c_N-d^+_ϵ_f+μ^c_N-μ^+_ϵ_f≤ϵ̅/5, with ϵ̅=C_fϵ_f for some C_f>0. Finally, we claim that we can use the arguments in <cit.> to bound ℙ(d_N-d^c_N≥ϵ̅/5) and ℙ(μ_N-μ^c_N≥ϵ̅/5), i.e., by, ℙ{d_N-d^c_N≥ϵ/5}+ℙ(μ_N-μ^c_N≥ϵ̅/5) ≤ c_ϵ e^-2^N α(ϵ) where α(ϵ) is locally quadratic about the origin. Indeed Assumption <ref>, part 1, implies the conclusion of <cit.>, i.e., that, ℙ[ |1/2^N∑_j=1^2^N∇ f(x,ζ_j)^T d+τ/2d^2-∇ F(x)^T d+τ/2d^2|≥α}≤ c(α)e^-2^N β(α) for all d∈ F({ζ},x), with β(α) growing locally quadratically with α. Note that this corresponds to the conclusion of <cit.> for the problem associated with d^c perturbed to d^c_N. Now, applying strong stability <cit.> instead of <cit.> yields the conclusions of <cit.> for both d_N-d^c_N and μ_N-μ^c_N. Thus we have that, if ϵ̅ is small enough, [ ℙ{d_N-d_*≥ϵ̅}≤ℙ{d_N-d^c_N≥ϵ̅/5}+ ℙ{d^c_N-d^-_ϵ_f≥ϵ̅/5 OR d^c_N-d^+_ϵ_f≥ϵ̅/5}; ℙ{d^N_*-d^-_ϵ_f≥ϵ̅/5 OR d^N_*-d^+_ϵ_f≥ϵ̅/5}+ ℙ{d_*-d^N_*≥ϵ̅/5}; ≤ c_ϵ e^-N α(ϵ̅) +c_κ e^-Nϵ_κ(ϵ̂)+2B e^-2^Nϵ_f^2/8√(M) ] and likewise for ℙ{μ_N-μ_*≥ϵ̅}. Finally, it is clear from the constructions that ϵ_κ(ϵ̂) and α(ϵ̅) depend quadratically on ϵ̅ locally around zero. The quantity d̃(x) defined in (<ref>) satisfies 𝔼[d̃(x)]=d(x) and 𝔼[d̃(x)^2]<σ for some σ independent of x. From <cit.>, in the discussion immediately prior to Section 5.3, it was shown that, N^1/2[ d_N-d_*; μ_N-μ_* ]→𝒩(0,J^-1Γ J) in distribution, where, Γ = [ Σ_f Σ_fc; Σ_fc^T Σ_c ], and, J=[ H A^T; A 0 ] where, [ ∇ L(d_*,μ_*;x,ξ) = ∇ f(x,ξ)+τ d_*-∇ c_a(x,ξ)^T [μ_*]_a - ∇ F(x)-τ d_*+[∇ C(x)]_a^T μ_*; Ψ(d_*,μ_*;x,ξ) = c_a(x,ξ)-C_a(x)-κ(x,ξ) e_a+∇ c_a(x,ξ)^T d_*-∇ C_a(x)^T d_*-κ_* e_a; Σ_f=𝔼[(∇ L(d_*,μ_*;x,ξ)) (∇ L(d_*,μ_*;x,ξ))^T],; Σ_c=𝔼[(Ψ(d_*,μ_*;x,ξ))(Ψ(d_*,μ_*;x,ξ))^T], and,; Σ_fc = 𝔼[(∇ L(d_*,μ_*;x,ξ)) (Ψ(d_*,μ_*;x,ξ))^T] ] where the subscript a denotes that only the constraints active at d_* are considered and e_a is the a-dimensional vector of ones. The nonsingularity of these matrices are given by the Assumptions. Finally the Hessian matrix is defined as H=τ I_n and A is the set of gradients associated with the active set at d^*, including the active bounds corresponding to the constraint x+d_∞≤β. Since the feasible region is closed and bounded uniformly due to the presence of the constraint d≤∞, it holds that {d_2^N:N≥ 0} is uniformly integrable. Since d_2^n→ d_* it holds that, 𝔼[d̃(x)] = ∑_n=1^∞𝔼[Δ_n]+𝔼[β_1] = lim_N→∞𝔼[d_2^N] = β_* and thus the estimate is unbiased. Now we must study the variance term and attempt to show 𝔼[d̃d̃^T]=O(2^-(1+α)N) for some α>0. To this end we follow the proof of <cit.>, first by noting that Lemma <ref> provides a moderate deviations estimate for d_2^N and μ_2^N and seek to derive one for ℵ_2^N. 
We similarly introduce the perturbed optimization problem with parameter η=(η^(1),η^(2)), [ ∇ F(x) +τ d(η)+∇ C(x) μ(η)+ℵ(η)=-η^(1),; C(x)+∇ C(x)^T d(η)≤κ(x) e-η^(2),; d(η)_∞≤β(x),; μ(η)^T (κ(x)e-C(x)-∇ C(x)^T d(η))=0,; ℵ(η)(β-d(η)_∞) = 0,; μ(η),ℵ(η)≥ 0, ] with the primal-dual solution d(η),μ(η),ℵ(η) being continuously differentiable functions of η locally close to η=0 by standard sensitivity results for strongly regular functions and Assumption <ref>, part 3 (i.e., with no active set changes). Consider the optimality conditions of the SAA problem, 0=1/2^N∑_i=1^2^N[(∇ f(x,ξ_i) +τ d_2^N)+∇ c(x,ξ_i) μ_2^N+ℵ_2^N] If we define, η̅^(1)_2^N = 1/2^N∑_i=1^2^N[∇ f(x,ξ_i)+∇ c(x,ξ_i) μ_2^N -∇ F(x)-∇ C(x) μ_2^N] we get that, ∇ F(x) +τ d_2^N+∇ C(x) μ_2^N+ℵ_2^N=-η̅^(1)_2^N Similarly, for, 1/2^N∑_i=1^2^N[c(x,ξ_i)+∇ c(x,ξ_i)^T d_2^N-κ^N e ] ≤ 0 if we set. η̅^(2)_2^N = 1/2^N∑_i=1^2^N[c(x,ξ_i)+∇ c(x,ξ_i)^T d_2^N-κ^N e -C(x)-∇ C(x)^T d_2^N+κ^* e ] we get that, C(x)+∇ C(x)^T d_2^N-κ^* e≤-η̅^(2)_2^N From the system of equations (<ref>) (<ref>), and (<ref>) we can construct a map f_2^N such that, f_2^N(d_2^N,μ_2^N)=(η̅^(1)_2^n,η̅^(2)_2^n,ℵ_2^N) and subsequently apply <cit.> to conclude that by the previous Lemma, the Large Deviations Principle for (d_2^N,μ_2^N) implies an LDP for (η̅^(1)_2^n,η̅^(2)_2^n,ℵ_2^N). Now, [ 0=1/2^N∑_i=1^2^N[∇ f(x,ξ_i) +τ d_*+∇ c(x,ξ_i) μ_*+ℵ_*]; + 1/2^N∑_i=1^2^N[∇ f(x,ξ_i) +τ d_2^n+∇ c(x,ξ_i) μ_2^n+ℵ_2^n -∇ f(x,ξ_i) -τ d_*-∇ c(x,ξ_i) μ_*-ℵ_*]; = 1/2^N∑_i=1^2^N[∇ f(x,ξ_i) +τ d_*+∇ c(x,ξ_i) μ_*+ℵ_*]; +τ(d_2^N-d_*)+ℵ_2^N-ℵ_*+1/2^N∑_i=1^2^N[∇ c(x,ξ_i) μ_2^N-∇ c(x,ξ_i) μ_*]; ⟹; τ(d_2^N-d_*)+ℵ_2^N-ℵ_*+∇ C(x) (μ_2^N-μ_*); = -1/2^N∑_i=1^2^N[∇ f(x,ξ_i) +τ d_*+∇ c(x,ξ_i) μ_*+ℵ_*.; .+ ∇ c(x,ξ_i) (μ_2^N-μ_*)-∇ C(x)(μ_2^N-μ_*) ] ] And thus, [ τ(d_2^N+1-1/2(d^O_2^N+d^E_2^N))+(ℵ_2^N+1-1/2(ℵ^O_2^N+ℵ^E_2^N))+∇ C(X) (μ_2^N+1-1/2(μ^O_2^N+μ^E_2^N)); = -1/2^N+1∑_i=1^2^N+1[∇ f(x,ξ_i)+∇ c(x,ξ_i) μ_*]; +1/2(1/2^N∑_i=1^2^N[∇ f(x,ξ^O_i)+∇ c(x,ξ^O_i) μ_*] +1/2^N∑_i=1^2^N[∇ f(x,ξ^E_i)+∇ c(x,ξ^E_i) μ_*]); -(1/2^N+1∑_i=1^2^N+1 (∇ c(x,ξ_i)-∇ C(x)) (μ_2^N+1-μ_*).; .-1/2(1/2^N∑_i=1^2^N (∇ c(x,ξ^O_i)-∇ C(x))( μ^O_2^N-μ_*)..; .. +1/2^N∑_i=1^2^N (∇ c(x,ξ^E_i)-∇ C(x))( μ^E_2^N-μ_*))) ] Using the central limit theorem, i.e., via <cit.>, we can see that the first three terms have their 𝔼[·] of the order of 2^-2^N, and the last three as well from, 𝔼[μ_2^N-μ_*] ≤𝔼[μ_2^N-μ_*I(μ_2^N-μ_*≥ 2^-ρ 2^N)] +𝔼[μ_2^N-μ_*I(μ_2^N-μ_* <2^-ρ 2^N)] for some ρ∈(0,1) and applying the previous Lemma. From this it is clear that the right hand side has a covariance that is O(2^-ρ 2^N), which finally implies that 𝔼[Δ̅_NΔ̅_N^T]=O(2^-ρ 2^N). § NUMERICAL RESULTS For these experiments we consider training with various protocols. We use a neural network with one hidden layer and sigmoid activations, where the sigmoid function is defined as, σ(x) = 1/1+e^-x We used the following parameters for our experiments: β=10, ρ=0.8, λ=0.5, τ=1 We considered two stepsize rules, γ^ν = 1/1+ν, and γ^ν+1 = γ^ν(1-ζγ^ν) as they both satisfy the stepsize summability requirements (<ref>). We found that the incremental ζ=0.001 was a good hyperparameter choice. Recall that the MNIST data set has inputs of dimension 784, and outputs a class label among the ten possible digits. In order to perform experiments with reasonable computational load, while also testing examples in a smaller quantity on larger problems, we considered capping the input dimension to one of {100,400, 784} and the hidden layer to have {50,100,200,400} neurons. 
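For concreteness, the one-hidden-layer sigmoid model and the per-sample squared loss used in the experiments below can be written as the following short sketch (array shapes and helper names are ours, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, theta):
    # x = (W1, b1, W2, b2): one hidden layer with sigmoid activations, scalar output
    W1, b1, W2, b2 = x
    return W2 @ sigmoid(W1.T @ theta + b1) + b2

def sample_loss(x, theta, y):
    # per-sample loss (1/2)(W2^T sigma(W1^T theta + b1) + b2 - y)^2,
    # used both for the objective F(x, xi) and for the constraint functions below
    return 0.5 * (predict(x, theta) - y) ** 2
```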
We considered taking the geometric random variable parameter to be N∼𝒢{0.05,0.1,0.2,0.4,0.7,0.9} For each set of parameters considered, we ran 21 trials and plotted the median and the inter-quartile range (the 25% and 75% quartiles) surrounding the median for both the objective and constraints. Given the large optimization parameter dimension, we used OSQP <cit.>, a splitting based first order QP solver, to solve the subproblems. §.§ Deterministic Convex Constraints In this case the objective is defined to be, for a simple sample of an input vector θ and label y, i.e., ξ=(θ,y) with weights W_i and biases b_i, i.e., x=(W_1,W_2,b_1,b_2), F(x,ξ) = 1/2(W^T_2σ(W^T_1 θ+b_1)+b_2 -y) We consider training with an ellipsoidal constraint on the parameters, introducing a constraint of the form, 𝐖^2/a_w^2+𝐛^2/a_b^2≤ c where we allow for a different balance on the weights and biases, i.e., a_w≠ a_b. Specifically, we used a_w=2 and a_b=1 and c=5. We present representative results in Figure <ref>. Here p_N=0.9 and we consider the loss for predicting digit label 2. It can be seen that the stochastic gradient converges quickly to minimal loss and the constraint infeasibility decreases more noisily but reliably to the threshold value. §.§ Stochastic Constraints - Training and Validation Here we consider building a model that minimizes the loss for identifying one digit on one segment of the data, while at the same time enforcing validation error of a certain bound on a separate segment of the data. In this case, we use the same loss function, i.e., the same functional form for F(x,·) and C(x,·), however, with a distinct set of samples. Formally, F(x,ξ) = 1/2(W^T_2σ(W^T_1 θ+b_1)+b_2 -y), C(x,ζ) = 1/2(W^T_2σ(W^T_1 θ'+b_1)+b_2 -y') A representative figure is shown in Figure <ref>, where now we plot the loss in log scale. In this case the training and validation are coherent, there is no overfitting and the constraint value is trained below its threshold. §.§ Stochastic Constraints - Training Constraint Here we consider building a model that minimizes the loss for classifying one digit, while at the same time exhibiting classifying a different digit to some threshold training loss. In this case, the neural network will have common parameters for the input to hidden layer, in effect learning more general features, while having separate parameters from the hidden to the output layer. Formally, given input and label, F(x,ξ) = 1/2(W^T_2,1σ(W^T_1 θ+b_1)+b_2,1 -y), C(x,ξ) = 1/2(W^T_2,2σ(W^T_1 θ+b_1)+b_2,2 -y) noting that the stochastic space, the samples, are the same, but the weights and biases of the two output activations are distinct. Figure <ref> shows a representative case, now with p_N=0.1. The objective is the training loss for the recognition of digit 5, while the constraint is evaluating the training loss for the recognition of digit 4. Note that the constraint with noise exhibits a robustness mechanism wherein the training goes beyond enforcing the specific constraint value. § CONCLUSION This paper presents the first, to the best of the authors' knowledge, work on stochastic approximation for the challenging problem of solving an optimization problem with noise in the evaluation of both the objective and constraint function. We intend for this to be an inspiration for additional algorithms for this growingly important class of problems, as both modified Algorithms, alternative (possibly biased, with the bias controlled) estimators, and other statistical criteria (risk measures, chance constraints, etc.) 
are studied.
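To collect the pieces of Sections 2 and 3 in one place, a schematic Python sketch of the overall method is given below. It is not the authors' implementation: sample_batch and solve_saa are assumed stand-ins for drawing i.i.d. samples of (ξ, ζ) and for solving the SAA subproblem on a given sample set (for instance with a QP solver such as OSQP, as in the experiments above), the default geometric parameter is arbitrary, and the sample bookkeeping for the odd/even split is simplified relative to the text.

```python
import numpy as np

def unbiased_direction(x, sample_batch, solve_saa, p_geo=0.65, rng=None):
    """Randomized multilevel estimator of the ideal direction d(x).

    N ~ Geometric(p_geo); the correction Delta_N / P(N) debiases the SAA solution.
    sample_batch(k) draws k i.i.d. samples and solve_saa(x, S) solves the SAA
    subproblem on the sample set S; both are assumed helpers, not from the paper."""
    rng = rng or np.random.default_rng()
    N = int(rng.geometric(p_geo)) - 1            # N in {0, 1, 2, ...}
    pN = p_geo * (1.0 - p_geo) ** N              # P(N = N)
    S = sample_batch(2 ** (N + 1))
    d_full = solve_saa(x, S)                     # all 2^{N+1} samples
    d_odd, d_even = solve_saa(x, S[0::2]), solve_saa(x, S[1::2])  # the two halves
    delta = d_full - 0.5 * (d_odd + d_even)
    return delta / pN + solve_saa(x, sample_batch(1))

def run_sa(x0, sample_batch, solve_saa, iters=1000):
    """Diminishing step-size stochastic approximation loop of Section 2."""
    x = np.asarray(x0, dtype=float)
    for nu in range(1, iters + 1):
        gamma = 1.0 / (1.0 + nu)                 # sum gamma = inf, sum gamma^2 < inf
        x = x + gamma * unbiased_direction(x, sample_batch, solve_saa)
    return x
```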
http://arxiv.org/abs/2307.01662v1
20230704114030
Improving Constraints on Models Addressing the Hubble Tension with CMB Delensing
[ "Joshua Ange", "Joel Meyers" ]
astro-ph.CO
[ "astro-ph.CO" ]
§ INTRODUCTION The cosmic microwave background (CMB) has been instrumental in shaping our modern understanding of the evolution of the universe. With the increasing precision of available CMB observations, temperature and polarization anisotropies have become more finely measured, leading to more precise inferences of the contents and history of the universe. The physical mechanisms driving the formation of the CMB and its anisotropies are well understood, leading to a robust prediction for the form of the angular power spectra for any specific cosmological model. Based on ever-improving measurements of the CMB anisotropies, the cosmological parameters driving these models have become more tightly constrained <cit.>. Acoustic oscillations in the primordial plasma prior to recombination imprinted a series of peaks and troughs in CMB power spectra. The expansion history and plasma properties before recombination set the physical scale of the sound horizon, the expansion history after recombination sets the distance to the surface of last scattering, and the combination of the two determines the angular scale of the sound horizon at recombination. Measurements of the CMB acoustic peak positions allow for a direct measurement of the angular size of the sound horizon at recombination. Additionally, the relative peak heights allow for an inference of the parameters that determine the scale of the sound horizon within a given cosmological model. Combining this information allows for an inference of the integrated expansion since recombination, including in particular the present expansion rate, the Hubble constant H_0 <cit.>. While this is a useful qualitative picture for how constraints are derived from CMB observations, in practice the parameter constraints for any particular cosmological model are obtained from a full likelihood analysis of the model given the CMB data. The past decade has seen an ever-growing tension in the value of H_0 as determined from local measurements <cit.> and as inferred from early universe observations like the CMB <cit.> within ΛCDM cosmology. Today, this “Hubble Tension” is driven primarily by the SH0ES (Supernovae and H_0 for the Equation of State of dark energy) collaboration's cosmic distance ladder measurements and the Planck collaboration's CMB observations <cit.>. This apparent discrepancy may be due to some unaccounted-for systematic error in one or more of the measurements, or it may point toward some physics that is left out of our current cosmological models. Many attempts at resolving the tension take the route of new physics <cit.>. In this work, we have found that implementing CMB delensing leads to significantly improved constraints on the parameters defining models aimed at resolving the Hubble tension, and these improved constraints remain valid even if the Hubble Tension does not end up directly involving new physics. Between the CMB surface of last scattering and our telescopes, large scale structure gravitationally deflects incoming photons, producing non-stationary statistics of the CMB fluctuations.
At the level of the power spectra, this lensing effect smooths acoustic peaks, making their angular scale more difficult to measure and thus makes some parameter constraints less precise than they otherwise would be <cit.>. Current measurements allow for a detection of the effects of CMB lensing at a significance of 40σ with Planck data <cit.>, with similar precision from ACT <cit.> and SPT <cit.>.For future low-noise CMB data, such as that expected from Simons Observatory <cit.> and CMB-S4 <cit.>, lensing will be measured at much higher significance, and peak smoothing will hinder some parameter constraints. Fortunately, the off-diagonal mode correlations induced by lensing allow for the reconstruction of a lensing potential map <cit.>, which can be used to reverse the effects <cit.>. Delensing temperature and polarization maps generates spectra with sharper acoustic peaks, more prominent damping tails, and other clearer features, thereby allowing for tighter parameter constraints <cit.>. Here, we provide a quantitative look at the improvements made possible by CMB delensing, specifically in regard models aimed at resolving the Hubble Tension. Many proposed solutions to the tension take the form of extensions to the ΛCDM model. By introducing new physics that modify the sound horizon, a shift can be made in the inferred value of H_0, aligning it more closely with local measurements <cit.>. We have considered three broad categories of early universe solutions to the Hubble Tension: those involving a variation of fundamental constants at early times, those involving early dark energy, and those involving self-interacting dark radiation. We generate TT, TE, EE, and ϕϕ power spectra from the best-fit values of model parameters that most ease the Hubble Tension, we implement iterative delensing to produce delensed spectra for these models, and we perform Fisher forecasts to determine the expected parameter constraints from upcoming CMB observations. We show that CMB delensing leads to significantly tighter forecasted constraints on models aimed at resolving the Hubble Tension. In Section <ref>, we outline the Hubble Tension and how H_0 is inferred from the CMB. We briefly overview how iterative delensing works and affects constraining power derived from CMB observations in Section <ref>, and we also provide the details of our forecasts there. In Section <ref>, we show the degree to which CMB delensing improves constraints across the proposed solutions. Section <ref> concludes. § THE HUBBLE TENSION There is a growing tension in the value of the current expansion rate of the universe, the Hubble Constant H_0, as inferred from early universe observations and from local measurements. The tightest local measurements utilize the Cepheid-calibrated cosmic distance ladder, and the SH0ES team provides a constraint of H_0 = 73.2 ± 1.3 km/s/Mpc <cit.>. Alternatively, early universe observations like those of the CMB can be used to infer the Hubble constant within a particular cosmological model, and the Planck collaboration gives H_0 = 67.4 ± 0.5 km/s/Mpc in ΛCDM cosmology <cit.>. To measure the Hubble constant from early universe observations, one evaluates the likelihood of parameter values given some set of measurements to match theoretical and observational findings. It is enlightening to consider qualitatively how the CMB power spectra can be used to constrain H_0 <cit.>. 
This involves calculating the sound horizon r_s^* at the CMB and its angular size θ_s^* to determine the comoving angular diameter distance to the surface of last scattering D_A^* = r_s^*/θ_s^*. The size of the comoving sound horizon at the CMB last scattering surface is r_s^* = ∫_0^t_*dt/a(t) c_s(t) = ∫_z_*^∞dz/H(z) c_s(t), where t_* and z_* refer to the time and redshift values associated with the CMB surface of last scattering. The sound horizon is dependent on the time of recombination, the sound speed in the primordial plasma c_s(t), and the Hubble parameter H(z) prior to recombination (which is dependent on the energy budget of the universe). The Fourier modes of density perturbations in the primordial plasma underwent damped and driven oscillations, and the angular size of the sound horizon θ_s^* can be directly observed from the peak spacing of the Fourier modes in CMB spectra <cit.>. The comoving angular diameter distance to the surface of last scattering can then be determined from r_s^* and θ_s^*. The comoving angular diameter distance to the surface of last scattering is given by D_A^* = ∫_0^z_*dz/H(z), thereby allowing a determination of H_0 for a given cosmological model. The tension in the H_0 inference is not restricted to the comparison of local measurements with CMB data. Constraints on H_0 derived from the combination of baryon acoustic oscillation data and measurements of the primordial light element abundances agree with the inferences from the CMB and show a tension with the SH0ES constraint <cit.>, with a recent analysis of primordial deuterium abundance and SDSS DR16 data giving H_0 = 68.3 ± 0.7 km/s/Mpc <cit.>. The basic idea of these constraints is that baryon acoustic oscillation measurements provide joint constraints on H_0, Ω_m, and ω_b, and adding the tight constraint on ω_b obtained from observing the abundance of deuterium produced during big bang nucleosynthesis allows for a precise inference of H_0. This combination of data is independent of CMB observations, indicating that the Hubble tension is very unlikely to be due to a systematic error in the CMB observations. Similarly, local measurements other than those based on Cepheid-calibrated supernovae can be used to constrain H_0 <cit.>. An approach based on the distance ladder calibrated with the tip of the red giant branch rather than with Cepheids provides a measurement of H_0 that agrees with both the early universe and SH0ES values <cit.>. Inferences of H_0 from strong lensing time delays tend to agree more closely with the SH0ES measurement, though the best fit value and size of the confidence interval depends on modeling and analysis choices <cit.>. The persistent discrepancy between local and early universe measurements of H_0 has prompted a wide range proposals for new cosmological models aimed at resolving the tension <cit.>. Proposed solutions to the tension often take the form of extensions to the ΛCDM model that alter one or more aspects of the cosmological history and shift our inference of H_0. This typically involves modifying the sound horizon of the CMB at last scattering with additional energy contributions or by modifying one of the other assumptions built into the ΛCDM model (such as the time of recombination). Such changes modify early universe dynamics and impact the inference of H_0 (see Fig. <ref>). 
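As a toy numerical illustration of this mechanism (our own simplified flat-ΛCDM sketch with a fixed last-scattering redshift, no detailed recombination physics, and extra relativistic energy ΔN_eff standing in for generic early-time new physics; it is not part of the analysis pipeline used in this paper), one can verify that shrinking the sound horizon at fixed θ_s^* pushes the inferred H_0 upward:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c = 299792.458       # speed of light in km/s
z_star = 1090.0      # approximate redshift of last scattering (held fixed here)
omega_g = 2.47e-5    # physical photon density omega_gamma h^2
omega_r = 4.15e-5    # photons plus standard neutrinos

def hubble(z, h, omega_m, dNeff=0.0):
    # flat-LCDM expansion rate in km/s/Mpc; dNeff adds extra early radiation
    omega_rad = omega_r + (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * omega_g * dNeff
    Om, Orad = omega_m / h**2, omega_rad / h**2
    return 100.0 * h * np.sqrt(Om * (1 + z) ** 3 + Orad * (1 + z) ** 4 + (1.0 - Om - Orad))

def sound_horizon(h, omega_m, omega_b, dNeff=0.0):
    # r_s^* = int_{z_*}^inf c_s(z) / H(z) dz with c_s = c / sqrt(3(1+R))
    cs = lambda z: c / np.sqrt(3.0 * (1.0 + 0.75 * omega_b / omega_g / (1.0 + z)))
    return quad(lambda z: cs(z) / hubble(z, h, omega_m, dNeff), z_star, np.inf)[0]

def theta_star(h, omega_m, omega_b, dNeff=0.0):
    # D_A^* = int_0^{z_*} c / H(z) dz (comoving, flat universe); theta_s^* = r_s^* / D_A^*
    D_A = quad(lambda z: c / hubble(z, h, omega_m, dNeff), 0.0, z_star)[0]
    return sound_horizon(h, omega_m, omega_b, dNeff) / D_A

# Fix the "observed" acoustic scale from a fiducial model, then re-infer h
# after adding extra early radiation, which shrinks r_s^*.
omega_m, omega_b = 0.143, 0.0224
theta_obs = theta_star(0.674, omega_m, omega_b)
h_new = brentq(lambda h: theta_star(h, omega_m, omega_b, dNeff=0.5) - theta_obs, 0.55, 0.95)
print(f"smaller r_s^* at fixed theta_s^* -> inferred H0 = {100.0 * h_new:.1f} km/s/Mpc")
```

The sketch is meant only to show the direction of the effect; the models considered below realize it through very different physics and are analyzed with full likelihood-level forecasts.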
These types of models can often shift the inferred value of H_0 to be closer to the value obtained by SH0ES, but the changes to the cosmological model also impart other changes to the CMB anisotropies, which leads to constraints on these proposed solutions. One possible resolution to the Hubble tension is that something is wrong with one or more of the measurements involved in obtaining the differing values of H_0 or that the uncertainties in those inferences have been underestimated. Our goal for this paper is not to assess whether the Hubble tension requires new physics or to evaluate any particular model. Rather, we demonstrate that for a broad class of models aimed at resolving the Hubble tension, CMB delensing provides a means by which we can obtain tighter constraints on the parameters defining those models. § DELENSING AND FORECASTING Gravitational deflection of CMB photons by the cosmological structure intervening between the surface of last scattering and our telescopes distorts our view of CMB anisotropies <cit.>. This deflection can be both a help and a hindrance, as it allows for a means to determine the distribution of matter, but also distorts the primary anisotropies of the CMB. Lensing remaps the CMB temperature and polarization along some direction n by a deflection angle d(n) such that T^len(n) = T^unl(n + d(n)), and similarly for the Stokes Q and U parameters defining linear polarization. Gravitational lensing modifies the statistics of CMB anisotropies, inducing correlations between fluctuations of different angluar sizes. By considering quadratic combinations of CMB fields (such as the minimum variance quadratic estimator), a map of gravitational deflection can be estimated, thereby providing a map of the integrated matter density throughout the universe <cit.>. Delensing aims to utilize this understanding of the effects of gravitational lensing from measured CMB maps (such as T^obs) to estimate lensing deflection d^obs and to reverse the effects of lensing. Iterative delensing is the process whereby one alternates lensing reconstruction and delensing, obtaining a better lensing estimate and more thoroughly delensed CMB maps at each step <cit.>. For this paper, we utilize the publicly available code [<https://github.com/selimhotinli/class_delens>] as described in Ref. <cit.> (a modified version of the Boltzmann solver  <cit.>), which computes the power spectra expected from performing an iterative delensing procedure to CMB maps. Our forecasts were carried out using the code[<https://github.com/ctrendafilova/FisherLens>], described in Ref. <cit.>. We conduct forecasts for a range of noise levels, and for all configurations, we utilized a beam size of 1.4 arcmin, sky coverage of f_sky = 0.5, and a range of 30 to 5000 in ℓ in our analysis (though we restricted the range for TT spectra to 30 to 3000 in ℓ since astrophysical foregrounds are expected to contaminate smaller scales). We take noise in polarization to be given by Δ_P = √(2)Δ_T, as is expected from fully polarized detectors. For each model, we chose fiducial values for the cosmological parameters that best reduced the Hubble Tension according to analyses of those models in the references given below. We impose a prior constraint on the optical depth to reionization of σ(τ) = 0.007. Our forecasted constraints include the effects of lensing-induced non-Gaussian power spectrum covariances as described in Refs. <cit.>. 
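Schematically, the ingredients just listed enter the forecasts as simple Fisher-matrix algebra. The sketch below is our own illustration, not the FisherLens implementation: it shows the standard white-noise-plus-Gaussian-beam spectrum for the quoted beam with Δ_P = √2 Δ_T (the 5 μK-arcmin value is an arbitrary example from the range of noise levels scanned), and how a Gaussian prior such as σ(τ) = 0.007 and an independent data set are folded in before marginalized uncertainties are read off.

```python
import numpy as np

def noise_spectrum(delta_T_uK_arcmin, fwhm_arcmin, ells):
    """White detector noise deconvolved with a Gaussian beam (in uK^2-sr)."""
    delta = np.radians(delta_T_uK_arcmin / 60.0)      # uK-arcmin -> uK-rad
    theta = np.radians(fwhm_arcmin / 60.0)            # beam FWHM in radians
    return delta**2 * np.exp(ells * (ells + 1) * theta**2 / (8.0 * np.log(2.0)))

ells = np.arange(30, 5001)
N_TT = noise_spectrum(5.0, 1.4, ells)                 # example temperature noise level
N_EE = noise_spectrum(5.0 * np.sqrt(2.0), 1.4, ells)  # Delta_P = sqrt(2) * Delta_T
# These noise curves enter the power-spectrum covariance together with f_sky = 0.5.

def add_gaussian_prior(F, i, sigma):
    """Adding a Gaussian prior sigma(p_i) adds 1/sigma^2 to the (i, i) entry."""
    Fp = F.copy()
    Fp[i, i] += 1.0 / sigma**2
    return Fp

def marginalized_errors(F):
    """1-sigma marginalized uncertainties: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# For independent data sets (e.g. the BAO information discussed below), the Fisher
# matrices simply add: F_total = F_CMB + F_BAO, with priors applied before inversion.
```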
In conducting forecasts for some models (particularly those involving varying fundamental constants), constraints for some parameters and configurations derived from delensed spectra with lensing-induced non-Gaussian covariances included were marginally smaller than those derived from delensed spectra with purely Gaussian covariances. We expect non-Gaussian covariance to weaken the constraining power <cit.>, so, in these cases, we reported the larger errors neglecting the non-Gaussian covariance. This discrepancy is likely due to our approximation of the lensing-induced non-Gaussian covariance (see the technical details in Refs. <cit.>). We expect that, since delensing acts to reduce the non-Gaussian covariance for well-measured modes, the error introduced by neglecting it is small. We leave a more complete and robust treatment of the lensing-induced non-Gaussian covariance for future work. We carry out forecasts both with and without the Baryon Acoustic Oscillation (BAO) information anticipated from DESI (Dark Energy Spectroscopic Instrument) <cit.>. We constructed a Fisher matrix for BAO observables following the prescription given in Ref. <cit.>. Forecasts with BAO included simply add this Fisher matrix to the one obtained from the CMB. As shown below, the improvement in constraints from CMB delensing is smaller, though still non-trivial, when BAO information is included. § RESULTS As the precision of both local distance ladder measurements and ΛCDM inferences have improved, and the Hubble Tension has grown more statistically significant, a number of solutions have been proposed. We consider models that were found to perform well for reducing the tension according to the survey of models described in Ref. <cit.>. In that work, a wide variety of models and modifications were investigated, involving early- and late- universe physics, and we focus on representative examples of the three broad classes of early universe solutions: varying fundamental constants, early dark energy, and self-interacting dark radiation. §.§ Varying Fundamental Constants One class of proposed models that is effective at reducing the Hubble Tension comes in the form of varying fundamental constants, particularly those affecting recombination. By positing a time-varying electron mass and fine structure constant, the energy levels of Hydrogen atoms, the Thomson scattering cross-section, and other physics of the CMB at recombination are shifted <cit.>. These changes affect the energy and temperature at which recombination occurs, indicating a strong degeneracy of those parameters and the time of recombination t_*. This leads to a change in the comoving size of the sound horizon, meaning the inference of H_0 can be made more compatible with local measurements <cit.>. In these models, it is assumed that the fundamental constants shift to their currently measured values sometime after recombination, but well before reionization. Observable impacts of this model should be visible in CMB power spectra, as a shifted time of recombination implies oscillations in the primordial plasma `froze' at a different point in their development, thereby shifting acoustic peaks (see Fig. <ref>). CMB delensing produces sharper peak locations, meaning tighter constraints for these parameters could be found. 
We define the time-varying parameters δ m_e = m_e,early/m_e,obs and δα = α_early/α_obs, where the early values of these parameters are time-independent throughout recombination but change to their current, observed values at z = 50, following the treatment in Ref. <cit.>. In Fig. <ref> we show the results for 8-parameter forecasts where we allow for varying, non-unity values of δ m_e and δα in addition to the ΛCDM parameters. From left to right, the panels show constraints for H_0, δ m_e, and δα, respectively, across a range of CMB noise levels. At decreasing noise levels, delensing provides a significant improvement in the constraints. The improvement from lensed to delensed constraints is shown in Table <ref> for various combinations of fixed parameters. In Fig. <ref>, we show the forecasted uncertainty and degeneracy of parameters of the ΛCDM + δ m_e + δα model (without BAO) before and after delensing. Delensing provides a significant tightening of constraint contours for all parameters of interest. Additionally, there is a strong degeneracy between δ m_e and H_0 which is partially broken by delensing, as can be seen in the second and fourth rows of Table <ref>. Note that in the middle panel of Fig. <ref> (and indeed for all constraints involving δ m_e or degenerate parameters), the delensed spectra give tighter constraints than the unlensed spectra. In the limit that there is perfect knowledge of the lensing spectrum, this would be senseless, but with noisy lensing reconstruction (as computed in our analysis), this is a valid result. If one considers a parameter for which the observable impact is entirely contained in the lensing (and not the primary CMB), without lensing reconstruction tighter constraints would undoubtedly be found from lensed spectra rather than unlensed spectra. With a noisy, sub-optimal lensing reconstruction, one would still find tighter constraints from the lensed spectra (and noisy reconstruction) than the unlensed spectra (and noisy reconstruction), as the poorly reconstructed components of the lensing spectrum still influence the lensed CMB data, while no information for the parameter is found in the unlensed data. Likewise, the delensed spectra should produce tighter constraints than the unlensed spectra, as only the lensing modes that are well-measured are removed. In the opposite limit, with a parameter that affects the primary CMB and not the lensing spectrum, the unlensed CMB spectra would outperform the lensed and delensed spectra regardless of reconstruction fidelity. For parameters that impact both the primary CMB and the lensing spectrum, it does not necessarily hold that the unlensed spectra error should always be smaller than the delensed spectra error, so delensing can actually produce tighter constraints than the unlensed CMB would. More generally, when considering constraints on several cosmological parameters, with varied and partially degenerate impacts on the primary and lensing spectra, delensing should always improve constraints compared to analyses using lensed spectra, and in some cases delensed spectra may provide tighter constraints than unlensed spectra, so long as the reconstructed lensing spectrum is included in the analysis. §.§.§ Spatial Curvature In Ref. <cit.>, a non-zero spatial curvature was also found to ease the Hubble Tension by shifting D_A^* for the CMB. We utilize a fiducial value of Ω_k = -8.9719 × 10^-3, as guided by Ref. <cit.>. In Fig.
<ref> we show the results for 8-parameter forecasts with varying δ m_e and Ω_k in addition to ΛCDM parameters. From left to right, the panels show constraints for H_0, δ m_e, and Ω_k respectively, as a function of noise. At decreasing noise levels, delensing provides a significant improvement in the constraints. The improvement from lensed to delensed constraints is shown in Table <ref>. Note that the model which most significantly reduced the Hubble Tension in Ref. <cit.>, ΛCDM + δ m_e + Ω_k, has an improvement in the H_0 error from delensing of 18.0%, which is roughly on par with reducing the noise of lensed observations from 10 to 1 μ K-arcmin, which would require a hundred-fold increase in the number of detector-years of observing. In Fig. <ref>, we show the forecasted uncertainty and degeneracy of parameters of this ΛCDM + δ m_e + Ω_k model (without BAO) before and after delensing. As with zero curvature models, delensing provides a significant tightening of constraint contours for all parameters of interest. §.§ Early Dark Energy Early dark energy refers to a class of models aimed at resolving the Hubble Tension by including a scalar field, initially frozen in place by Hubble friction, that performs damped oscillations around its local minimum once the Hubble parameter drops below a critical value. Physically, this describes an axion-like field that once behaved like a cosmological constant, but began to decay at some critical redshift z_c <cit.>. By increasing the Hubble parameter for a limited time in the early universe, the sound horizon and diffusion damping scale decrease, and field perturbations have additional pressure (which produces signature effects on CMB spectra, as seen in Fig. <ref>) <cit.>. By involving an increase in the expansion rate around matter/radiation equality due to this ΛCDM extension, the Hubble Tension can be somewhat alleviated without spoiling the fit to Planck measurements <cit.>. For our analysis, we utilize the extremely light field and potential of the form V(ϕ) = m^2 f^2 (1 - cos(ϕ/f))^n + V_Λ. Instead of the particle physics parameters f and m, we utilize the corresponding effective early dark energy parameters log_10(z_c), the critical redshift at which the field becomes dynamical, and f_EDE, the fractional energy contribution of the field at said redshift. We additionally include the parameter θ_i, the initial axion misalignment angle. We utilize the fiducial best-fit values of f_EDE = 0.098, log_10(z_c) = 3.63, and θ_i = 2.58 taken from Ref. <cit.> that are found to best reduce the tension. We incorporated the treatment of early dark energy from Ref. <cit.>[<https://github.com/mwt5345/class_ede>] into the delensing code described above to perform the forecasts presented here. In Fig. <ref> we show the results for ΛCDM + f_EDE + log_10(z_c) + θ_i cosmology forecasts. The percent improvement from lensed to delensed constraints is shown in Table <ref>. In Fig. <ref>, we show the forecasted uncertainty and degeneracy of parameters of the ΛCDM + f_EDE + log_10(z_c) + θ_i model (without BAO data) before and after delensing. Delensing provides a significant tightening of constraint contours for all parameters of interest. §.§ Self-Interacting Dark Radiation Some of the most well-known extensions to the ΛCDM model take the form of free-streaming massless relics (dark radiation). Such extensions are relatively well-constrained by the CMB <cit.>, but there is no physical necessity that these relics are free-streaming.
While free-streaming and self-interacting species would contribute equally to the energy budget of the universe and the CMB damping tail, only free-streaming particles induce the observed, characteristic phase shift in the anisotropies <cit.>, meaning there may be non-free-streaming particles at play that are not as well constrained. Such self-interacting species would form a relativistic fluid that changes the expansion rate during radiation domination, similarly to the early dark energy of Sec. <ref>, thereby influencing the size of the sound horizon at last scattering and shifting the inference of H_0 (see Fig. <ref>). We utilize the fiducial best-fit value of N_idr = 0.3867 from Ref. <cit.>, where N_idr is the contribution of self-interacting dark radiation to the effective neutrino number N_eff. The percent improvement from lensed to delensed constraints is shown in Table <ref>. §.§.§ Dark Radiation-Dark Matter Scattering Additionally, it is possible that this self-interacting dark radiation further interacts with dark matter. Such a cosmology is then additionally dependent on Γ_0, the present rate of momentum transfer between the dark matter and dark radiation <cit.>. In this case, we utilize the fiducial best-fit values of N_idr = 0.4290 and Γ_0=2.371×10^-8 Mpc^-1 from Ref. <cit.>. Fig. <ref> shows the results for the forecasts of ΛCDM and the 2-parameter extension of N_idr and Γ_0. In Fig. <ref>, we show the forecasted uncertainty and degeneracy of parameters of the ΛCDM + N_idr + Γ_0 model (without BAO data) before and after delensing. The improvement from lensed to delensed constraints is shown in Table <ref>. § CONCLUSION The CMB is a key tool in understanding the foundational development of our universe. With higher precision measurements of CMB temperature and polarization anisotropies, more precise constraints for ΛCDM parameters have been made with remarkable levels of confidence <cit.>. Discrepancies like the Hubble Tension may point the way toward novel physics in the early universe <cit.>, and it is valuable to test these ΛCDM extensions with observations. As more low-noise CMB data becomes available, delensing will be a valuable tool to increase the cosmological constraining power provided by our observations. In this work, we provided a quantitative look at the improvements in constraints, for models aimed at addressing the Hubble Tension, made possible by iterative delensing. We implemented three broad categories of early universe solutions (varying fundamental constants, early dark energy, and self-interacting dark radiation), estimated the effects of map-level delensing on generated power spectra, and forecasted constraints on model parameters. We demonstrated that constraints on H_0 derived from delensed spectra can be about 20% tighter than those from lensed spectra in the models we studied for upcoming CMB surveys like Simons Observatory and CMB-S4. Delensing provides significantly improved constraining power for model parameters across the board and is an analysis technique that does not require any instrumental design changes to implement with future data. CMB delensing proves an invaluable tool to glean as much information as possible about the early universe, regarding the Hubble Tension and beyond. We thank Daniel Green, Selim Hotinli, Cynthia Trendafilova, and Alexander van Engelen for helpful conversations. We would like to thank the Southern Methodist University (SMU) Physics Department for their continued support of research and foundation of education.
We would like to thank the SMU Dedman College Interdisciplinary Institute and the Hamilton and Buford families for their funding and support over the course of this project through the Hamilton Scholars Program. This work was supported by the US Department of Energy under Grant . Computational resources for this research were provided by SMU’s Center for Research Computing.
http://arxiv.org/abs/2307.01554v1
20230704080945
Characterization of the threshold for multi-range percolation on oriented trees
[ "Olivier Couronné" ]
math.PR
[ "math.PR" ]
[2020]60K35; 60J80 Université Paris Nanterre, Modal'X, FP2M, CNRS FR 2036, 200 avenue de la République 92000 Nanterre, France. olivier.couronne@parisnanterre.fr The author is supported by the Labex MME-DII funded by ANR, reference ANR-11-LBX-0023-01, and this research has been conducted within the FP2M federation (CNRS FR 2036) Characterization of the threshold for multi-range percolation on oriented trees Olivier Couronné =================================================================== We give a characterization of the percolation threshold for a multi-range model on oriented trees, as the first positive root of a polynomial, with the use of a multi-type Galton-Watson process. This gives in particular the exact value of the critical point for the model studied in <cit.> and <cit.> for k=2. § INTRODUCTION §.§ The general multi-range model We consider an oriented graph whose vertex set is that of a d-regular, rooted tree, and, for some k∈ℕ, all the edges of range between 1 and k. We fix a sequence (p_1, …, p_k) of k reals in [0, 1]. The percolation process we study is such that for each i between 1 and k, edges of range i are open with probability p_i, independently of each other. We shall describe a multi-type Galton-Watson process having exactly the same threshold. Such a Galton-Watson process is supercritical if and only if the largest eigenvalue of the transition matrix is strictly larger than one. If the p_i's are such that the percolation process associated with (p_1, …, p_k-1, 0) is subcritical, the study of the transition matrix provides us with the critical point for p_k. We shall get a polynomial that, with respect to p_k, is of degree 2^k-1, independently of the value of d. This gives a polynomial of degree 2 when k=2, and of degree 4 when k=3. There are exact expressions for their roots, but we only give the value of the critical point for k=2: For k=2 and 0≤ p_1<1/d, p_2, c = 1/(2d) + 1/(2d^2) - √((d-1)(3dp_1+d+p_1-1)) / (2d^2√(1-p_1)). We apply this formula to some values: * When d=2 and p_1=0.25, this gives p_2, c≈ 0.135643, in accordance with the inequality p_2, c>0.125 obtained in <cit.>. * When p_1=0, we get p_2, c=1/d^2, as for the classical percolation on a d^2-regular tree. * When p_1=1/d, the formula reduces to p_2, c=0, as expected. When d becomes large, with p_1<1/d, the value we obtain is equivalent to the lower bound (1-dp_1)/d^2 of <cit.>. The model considered in <cit.> and <cit.> corresponds to the case where p_2=…=p_k-1=0. Of course, for k=2, the two models are identical. §.§ Organization of the paper We describe the model of multi-range percolation in section <ref>. Then we introduce a multi-type Galton-Watson process in section <ref>, which will be equivalent to the percolation process. We use this process in section <ref> to solve the model for k=2, and indicate how to do it for k=3. Finally, in section <ref>, we place a discussion on how to obtain the percolation threshold in more general cases. § THE MULTI-RANGE PERCOLATION This section draws upon the description found in <cit.>. For an integer d≥2, define [d]={1,…,d} and V=[d]_*=⋃_0≤ n<∞[d]^n. The difference between V and [d]_* is that the set V is the set of the vertices of the graph, whereas [d]_* is seen as the set of finite sequences with elements in [d]. The set [d]^0 is a single point o, which, when seen as an element of V, we will refer to as the root of the graph. For u=(u_1,…, u_m)∈ V and v=(v_1, …, v_n)∈[d]_*, the concatenation of these two elements, as an element of V, is defined by u· v=(u_1, …, u_m, v_1,…, v_n); o· v=v; u· o=u.
Now the set of oriented edges is E=⋃_1≤ l≤ kE_l with E_l={⟨ r, r· i ⟩:r∈ V, i∈[d]^l}. The oriented graph is finally G=(V, E). In G, every vertex has out-degree d+d^2+…+d^k. The percolation model we consider on G is as follows. We fix a sequence (p_1, …, p_k) of k reals in [0, 1]. All the edges are independent of each other, and for l, 1≤ l≤ k, every edge in E_l is open with probability p_l. The law obtained is denoted by ℙ. The cluster C of the root is the set of vertices that can be reached by an oriented path from o. We focus on p_k, and define p_k, c=p_k, c(p_1, …, p_k-1):=inf{p_k: ℙ(|C|=∞)>0}. The percolation model is stochastically dominated by a branching process with an offspring distribution that is the sum of k independent binomial random variables, namely Bin(d, p_1), Bin(d^2, p_2), ..., Bin(d^k, p_k). This branching process is critical for parameters satisfying ∑_1≤ l≤ kd^lp_l=1, and so p_k, c≥(1-∑_1≤ l< k d^lp_l)/d^k. In the context of only one long range (that is, only p_1 and p_k can be non-null), the authors of <cit.> proved the much more difficult strict inequality. The present paper focuses on giving a method to obtain the numerical value of p_k, c, but apart from k=2 and perhaps, though not done here, k=3, our method does not seem to provide the strict inequality for general k and d, even in the context of <cit.>. § THE MULTI-TYPE GALTON-WATSON PROCESS The graph G is a regular d-tree. We have fixed k∈ℕ^*, and suppose p_k>0 (if that is not the case, simply decrease the value of k). A branch is a path (x_1, …, x_k) of length k on the tree such that for each i, 1≤ i<k, x_i is the parent of x_i+1. For a configuration of the percolation process, we associate to each vertex the value 1 if it is in C, that is, if there exists a path of open edges from the origin to the vertex, and 0 otherwise. We denote it by Y(x) for a vertex x of the tree. We now focus on our multi-type Galton-Watson process, and we refer to <cit.> for a detailed introduction to this topic. The space of types is {0, 1}^k∖ 0, the sequences of 0 and 1 of length k whose elements are not all null. Such a type indicates if a vertex is occupied (for 1) or vacant (for 0) in a branch. Let a be the type of a branch (x_1, …, x_k). The vertex x_k has, on the tree, d children, each one of them having the same probability of being occupied, a probability entirely determined by the type a. Take for x_k+1 arbitrarily one of the d children of x_k. The branch (x_2, x_3, …, x_k, x_k+1) will then be, if not entirely null, a child of (x_1, …, x_k), and the first k-1 elements of the type of the new branch are entirely determined. Hence, a type a=(a_1, …, a_k) can have children of at most two different types: * a'_0=(a_2, a_3, …, a_k, 0) * a'_1=(a_2, a_3, …, a_k, 1) We get a'_1, that is to say Y(x_k+1)=1, when at least one edge connecting an occupied x_i with x_k+1 is open. Otherwise we get a'_0. The probability that the new branch (x_2, x_3, …, x_k, x_k+1) is of type a'_1 is entirely determined by the type a of the previous branch, and the same goes for the probability that the new branch is of type a'_0. We multiply each of these probabilities by d to get the expected numbers of children of type a'_1 and of type a'_0, and this entirely determines the multi-type Galton-Watson process. We denote by M the corresponding matrix. The initial individual of the Galton-Watson process is (0, …, 0, 1). From any type (and we recall that they contain at least one 1), one can attain the type (1, 0, …, 0) by closing the right number of edges.
From the type (1, 0, …, 0), we can obtain the type (0, …, 0, 1) as p_k>0. Since the type (0, …, 0, 1) is considered as the type of the origin, all the types of the successive children are all in the same irreducible component of the matrix M. This little aside allows us to consider cases such as (p_1, …, p_6)=(0,0.1,0,0.1,0,0.1), but of course one can always impose that the set of i's associated with non-null p_i has only 1 for a common divisor. From now on, we consider only the states in this irreducible component, and change M accordingly if needed. The Galton-Watson process we obtain is just another description of the multi-range percolation process, so the thresholds are exactly the same. § ENTIRELY SOLVABLE CASES Here we consider either k=2, or k=3 with p_2=0. §.§ A formula when k=2 The set of types consists of (1,1), (1,0) and (0,1). The transition matrix M of the Galton-Watson tree, with rows and columns ordered as (1,1), (1,0), (0,1), reads row by row: from (1,1): d(p_1+p_2-p_1p_2), d(1-p_1)(1-p_2), 0; from (1,0): 0, 0, dp_2; from (0,1): dp_1, d(1-p_1), 0. When d and p_1 are considered fixed, with dp_1<1, the critical value p_2, c of p_2 has to be such that the largest eigenvalue of M is 1, and this implies that det(M-I_3)=0. This determinant is a polynomial of degree two in p_2, whose roots are 1/(2d) + 1/(2d^2) ± √((d-1)(3dp_1+d+p_1-1)) / (2d^2√(1-p_1)). When p_2=0, the largest eigenvalue of M is dp_1<1. This eigenvalue is increasing in p_2, by coupling arguments for example, so p_2, c is the first positive root of the polynomial. Using p_1<1/d, one can obtain that the third term is strictly less than 1/(2d), and so the first positive root is the one with the minus sign. This is exactly Theorem <ref>. §.§ The case k=3 with p_2=0 As the transition matrix is relatively sparse, with at most two non-null elements for each line, we express M line-by-line as follows: * (1,1,0): (1,0,0) with expectation d(1-p_3), (1,0,1) with expectation dp_3 * (1,0,0): (0,0,1) with expectation dp_3 * (1,1,1): (1,1,0) with expectation d(1-p_1)(1-p_3), (1,1,1) with expectation d(p_1+p_3-p_1p_3) * (1,0,1): (0,1,1) with expectation d(p_1+p_3-p_1p_3), (0,1,0) with expectation d(1-p_1)(1-p_3) * (0,1,1): (1,1,0) with expectation d(1-p_1), (1,1,1) with expectation dp_1 * (0,0,1): (0,1,1) with expectation dp_1, (0,1,0) with expectation d(1-p_1) * (0,1,0): (1,0,0) with expectation d. For the last three lines, the expectations do not use p_3. The polynomial det(M-I_7) is of degree 4, which makes it solvable, albeit not easily. For d=2 and p_1=0.25, we obtain p_3, c≈ 0.073780, to compare with p_3, c>0.0625 of <cit.>. In the case k=3 and p_2>0, the matrix has almost the same sparsity (just the last line has a second term), and the determinant is a polynomial of degree 4, thus exactly solvable. We nevertheless refrain from writing the matrix in this case. § CHARACTERIZATION OF THE THRESHOLD We can develop an algorithm that, once we have fixed k and (p_1, …, p_k-1), expresses the coefficients of the matrix M as polynomials of degree zero or one in p_k. More precisely, for each type beginning with 0, the probabilities do not depend on p_k, and the corresponding lines in M have only constants (with respect to p_k). For the types beginning with 1, the probabilities are polynomials of degree one. Then we have two methods: * Develop det(I-M) and get a polynomial of degree 2^k-1 in p_k. As M has at most two non-null elements in each line, we should get this polynomial in at most an order of 2^k operations. Then, for k not too large, mathematical solvers allow us to find the smallest positive root.
* Iteratively multiply a vector X, initialized with only 1's, by M, and divide at each step by the largest component obtained. This largest component converges to the largest eigenvalue of M. On the one hand, we then try to get the largest p_k such that the largest eigenvalue is smaller than 1, and this provides a lower bound for p_k,c. On the other hand, we seek the smallest p_k such that the largest eigenvalue is strictly larger than 1, and this provides an upper bound for p_k,c.
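As a quick numerical illustration of both approaches (a sketch only; the variable names and the bisection helper are ours, not taken from the paper), the snippet below checks the closed-form k=2 threshold of the theorem above against the power-iteration estimate of the largest eigenvalue of M:

import numpy as np

def p2_critical(d, p1):
    # Closed-form first positive root of det(M - I_3) = 0 in p_2 (k = 2 case).
    return (1.0 / (2 * d) + 1.0 / (2 * d**2)
            - np.sqrt((d - 1) * (3 * d * p1 + d + p1 - 1)) / (2 * d**2 * np.sqrt(1 - p1)))

def largest_eigenvalue(M, n_iter=500):
    # Power iteration as described above: start from the all-ones vector and
    # renormalize by the largest component at each step.
    x = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        y = M @ x
        lam, x = y.max(), y / y.max()
    return lam

def M_k2(p2, d, p1):
    # Mean-offspring matrix for k = 2, rows and columns ordered (1,1), (1,0), (0,1).
    return np.array([[d * (p1 + p2 - p1 * p2), d * (1 - p1) * (1 - p2), 0.0],
                     [0.0,                     0.0,                     d * p2],
                     [d * p1,                  d * (1 - p1),            0.0]])

def threshold_by_bisection(d, p1, n_steps=50):
    # Bracket p_{2,c} by the sign of (largest eigenvalue - 1), as in the second method.
    lo, hi = 0.0, 1.0
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        if largest_eigenvalue(M_k2(mid, d, p1)) > 1.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(p2_critical(2, 0.25))             # ~0.135643, the value quoted in the introduction
print(threshold_by_bisection(2, 0.25))  # agrees with the closed form
print(p2_critical(2, 0.0))              # 1/d^2 = 0.25, the classical tree value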
http://arxiv.org/abs/2307.03354v1
20230707022618
Token-Level Serialized Output Training for Joint Streaming ASR and ST Leveraging Textual Alignments
[ "Sara Papi", "Peidong Wan", "Junkun Chen", "Jian Xue", "Jinyu Li", "Yashesh Gaur" ]
cs.CL
[ "cs.CL", "cs.SD", "eess.AS" ]
Token-Level Serialized Output Training for Joint Streaming ASR and ST Leveraging Textual Alignments Sara Papi, Peidong Wan, Junkun Chen, Jian Xue, Jinyu Li, Yashesh Gaur =================================================================== In real-world applications, users often require both translations and transcriptions of speech to enhance their comprehension, particularly in streaming scenarios where incremental generation is necessary. This paper introduces a streaming Transformer-Transducer that jointly generates automatic speech recognition (ASR) and speech translation (ST) outputs using a single decoder. To produce ASR and ST content effectively with minimal latency, we propose a joint token-level serialized output training method that interleaves source and target words by leveraging an off-the-shelf textual aligner. Experiments in monolingual (it-en) and multilingual ({de,es,it}-en) settings demonstrate that our approach achieves the best quality-latency balance. With an average ASR latency of 1s and ST latency of 1.3s, our model shows no degradation or even improves output quality compared to separate ASR and ST models, yielding an average improvement of 1.1 WER and 0.4 BLEU in the multilingual case. automatic speech recognition, speech translation, streaming, serialized output training § INTRODUCTION In many real-world applications such as lectures and dialogues, automatic speech recognition (ASR) and translation (ST) are often both required to help the user understand the spoken content <cit.>. For instance, a person can have partial knowledge of the uttered language and a good knowledge of the translation language, therefore consulting the translation only when the transcription is not fully comprehended <cit.>. Moreover, the consistency between transcriptions and translations represents a desirable property for speech applications <cit.>, and having access to both source and target texts is also particularly useful for explainable AI <cit.>.
Despite these requests and the several research efforts towards developing systems that are able to produce both outputs <cit.>, little research has focused on the streaming scenario <cit.> where these outputs have to be generated while incrementally receiving additional speech content. In particular, only Weller et al., 2021 <cit.> proposed a unified-decoder solution for real-time applications that, however, leverages a fully attention-based encoder-decoder (AED) architecture <cit.>, which is theoretically not well suited for the streaming scenario <cit.>, and adopts the re-translation approach <cit.>, which is well-known to be affected by the flickering problem <cit.>. Recently, Wang et al. 2023 <cit.> proposed a streaming language-agnostic multilingual speech recognition and translation model using neural transducers (LAMASSU), which is capable of generating both ASR and ST results. More specifically, LAMASSU with a unified prediction and joint network (LAMASSU-UNI) uses language identification (LID) information to replace the start-of-sentence token. However, in order to perform ASR and ST simultaneously, LAMASSU requires two decoder instances. In this paper, we introduce the first streaming Transformer-Transducer (T-T) <cit.> able to jointly generate both transcriptions and translations using a single decoder (Figure <ref>). To effectively learn how to produce the interleaved ASR and ST words, we propose a joint token-level serialized output training (tSOT) <cit.> method that leverages an off-the-shelf neural textual aligner to build the training data without any additional costs. Monolingual (it-en) and multilingual ({de,es,it}-en) experiments demonstrate the effectiveness of our proposed alignment-based joint tSOT model, achieving the best quality-latency trade-off across languages. With an average latency of 1s for ASR and 1.3s for ST, our model not only improves the output quality compared to separate ASR and ST models, resulting in an average improvement of 1.1 WER and 0.4 BLEU in the multilingual case, but also enables a more interpretable ST, assisted by the corresponding generated ASR outputs. Furthermore, the ability of our system to consolidate multiple tasks and languages into a single model significantly reduces the number of required systems (from 6 to 1 in the multilingual case), thus moving towards a more environmentally-friendly AI (Green AI) approach <cit.>. § RELATED WORKS The SOT <cit.> method was initially introduced for non-streaming overlapped ASR and later extended to its token-level version for the streaming multi-talker scenario <cit.> and distant conversational ASR <cit.>. Recently, Omachi et al., 2023 <cit.> proposed a similar approach for explainable and streaming ST by incorporating interleaved post-editing annotations into the target text but exhibiting a very high latency (more than 5 seconds).[The maximum acceptable latency limit is set between 2 and 3 seconds from most works on simultaneous interpretation <cit.>.] In the streaming scenario, only Weller et al., 2021 <cit.> proposed a unified decoder for generating both ASR and ST outputs based on an AED architecture and adopting re-translation. Their framework is completely different from what we propose in this paper, since our model is Transducer-based, thus having a different architecture that naturally implements the streaming capabilities. 
The original encoder of the Transducer model <cit.> was composed of LSTM layers, which were later replaced by Transformer layers due to their improved performance <cit.>. Extensive research has been conducted on the T-T model for ASR <cit.>, with a particular focus on the streaming scenario <cit.>. Although the adoption of the T-T model has been previously proposed for the streaming ST task <cit.>, including extensions to multilingual settings <cit.> and architectural modifications <cit.>, our paper is the first introducing a streaming single encoder-single decoder T-T model that can jointly produce ASR and ST outputs with minimal latency. Furthermore, we explore the application of the tSOT method to jointly generate ASR and ST outputs, which has not been previously investigated in prior work. § JOINT TSOT BASED ON TEXTUAL ALIGNMENTS §.§ Joint tSOT In this section, we provide a detailed explanation of the joint version of the tSOT method. To emit both transcriptions and translations given the input speech, we serialize the ASR and ST references into a single token sequence. Specifically, we introduce two special symbols ⟨ asr ⟩ and ⟨ st ⟩ to represent the task change (the transition between ASR and ST output) and concatenate the reference labels by inserting them between utterances (either at the sentence level or within specific words). For instance, given the transcription reference 𝐫_𝐚𝐬𝐫=[r_asr_1, r_asr_2, ..., r_asr_m] and the translation reference 𝐫_𝐬𝐭=[r_st_1, r_st_2, ..., r_st_n], where m≤len(𝐫_𝐚𝐬𝐫) and n≤len(𝐫_𝐬𝐭), the corresponding joint tSOT reference is: 𝐫_ 𝐭𝐒𝐎𝐓=[⟨ asr ⟩,r_asr_1, r_asr_2, ..., r_asr_m,⟨ st ⟩,r_st_1, r_st_2, ..., r_st_n] If the transcription and translation utterances are divided into chunks (composed of a single or even multiple words), the concatenation process is repeated until m=len(𝐫_𝐚𝐬𝐫) and n=len(𝐫_𝐬𝐭) to obtain the final 𝐫_ 𝐭𝐒𝐎𝐓. Note that ⟨ asr ⟩ and ⟨ st ⟩ are not considered as special tokens during training: they are added directly to the vocabulary and considered as all the other tokens in the loss computation. §.§ Textual alignment-based joint tSOT In proposing the AED architecture for the ASR and ST joint decoding, Weller et al., 2021 <cit.> introduced a method for interleaving transcript and translation words, controlled by the parameter γ. In particular, the next interleaved word is a transcription word if: (1.0 - γ) * (1+count_asr)>γ * (1+count_st) where count_asr and count_st represent the count of ASR and ST words generated in the target text up to that point. The authors explored different scenarios, including corner cases such as γ=0.0, where all the transcription words are generated first, followed by all the translation words (hereinafter, INTER 0.0), and γ=1.0, where all the translation words are followed by all the transcription words (hereinafter INTER 1.0). However, these corner cases are not actually streaming for one of the two tasks, as INTER 0.0 is not streaming for ST and INTER 1.0 is not streaming for ASR. For this reason, the authors proposed to alternate one ASR word and one ST word (hereinafter, INTER 0.5), thus realizing a streaming model for both tasks.[The authors also provided results for γ=0.3, showing consistently inferior performance compared to the other strategies. We also tried to interleave more than one word at a time when adopting INTER 0.5 but it led to significantly worse results.] 
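A small sketch of how such serialized targets can be built from parallel word lists (illustrative Python, not the paper's data-preparation code; the alignment-based INTER ALIGN variant additionally requires an external word aligner and is described below):

def serialize_tsot(asr_words, st_words, strategy="inter_0.5",
                   asr_tok="⟨asr⟩", st_tok="⟨st⟩"):
    # Build a joint tSOT reference by concatenating ASR and ST chunks and
    # inserting the task-change tokens whenever the task switches.
    if strategy == "inter_0.0":      # all transcription words, then all translation words
        chunks = [("asr", asr_words), ("st", st_words)]
    elif strategy == "inter_1.0":    # all translation words, then all transcription words
        chunks = [("st", st_words), ("asr", asr_words)]
    else:                            # inter_0.5: alternate one ASR word and one ST word
        chunks = [c for a, s in zip(asr_words, st_words) for c in (("asr", [a]), ("st", [s]))]
        chunks += [("asr", asr_words[len(st_words):]), ("st", st_words[len(asr_words):])]
    out, last = [], None
    for task, words in chunks:
        if not words:
            continue
        if task != last:
            out.append(asr_tok if task == "asr" else st_tok)
            last = task
        out.extend(words)
    return out

print(serialize_tsot(["Ich", "brauche", "das", "wirklich"], ["I", "really", "need", "it"]))
# ['⟨asr⟩', 'Ich', '⟨st⟩', 'I', '⟨asr⟩', 'brauche', '⟨st⟩', 'really', '⟨asr⟩', 'das', ...]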
The switch between the two tasks is controlled by a language token, determined from learned embeddings that are summed with the word embeddings during training and predicted at test time. In our approach, we first integrate the interleaving method into the tSOT training by removing the need for learned embeddings. We replace them with specific ASR and ST tokens, as explained in Section <ref>, setting ⟨ asr ⟩=#ASR# and ⟨ st ⟩=#ST#. An example of tSOT INTER 0.0, 1.0, and 0.5 is shown in Table <ref>. Second, we introduce a new method for interleaving ASR and ST words based on a semantically-motivated approach. We leverage an off-the-shelf neural textual aligner <cit.> to predict the alignment between transcription and translation texts. Then, letting again 𝐫_𝐚𝐬𝐫={r_asr_1,...,r_asr_m} be the transcription reference and 𝐫_𝐬𝐭={r_st_1,...,r_st_n} be the translation reference, we build the alignment-based interleaving (hereinafter, INTER ALIGN) by applying the following rules: * If a transcription word r_asr_i and a translation word r_st_j are uniquely aligned, they are interleaved following INTER 0.5: ⇒ 𝐫_ 𝐭𝐒𝐎𝐓 += #ASR#, r_asr_i, #ST#, r_st_j * If k consecutive transcription words r_asr_i, r_asr_i+1,..., r_asr_i+k-1 are aligned with the same translation word r_st_j, we interleave them together as a single word (valid also in the opposite case): ⇒ 𝐫_ 𝐭𝐒𝐎𝐓 += #ASR#, r_asr_i,r_asr_i+1,...,r_asr_i+k-1, #ST#, r_st_j * If a transcription word r_asr_i is aligned with a translation word r_st_a that appears consecutively after the current translation word r_st_j, but r_asr_i is not also aligned with r_st_j, we consider all the words r_st_j,...,r_st_a for the interleaving (the condition must also be satisfied in the reverse direction): ⇒ 𝐫_ 𝐭𝐒𝐎𝐓 += #ASR#, r_asr_i, #ST#, r_st_j,...,r_st_a * If a transcription word r_asr_miss appears consecutively after r_asr_i and is not aligned with any translation words r_st_j,...,r_st_n, the word is included in the subsequent interleaving sequence (valid also in the opposite case): ⇒ 𝐫_ 𝐭𝐒𝐎𝐓 += #ASR#, r_asr_i, #ST#, r_st_j,..., #ASR#, r_asr_miss,... * If no 𝐫_𝐚𝐬𝐫 or 𝐫_𝐬𝐭 words are left, we concatenate together all the remaining words of, respectively, 𝐫_𝐬𝐭 or 𝐫_𝐚𝐬𝐫: ⇒ 𝐫_ 𝐭𝐒𝐎𝐓 += #ST#, r_st_j,...,r_st_n or 𝐫_ 𝐭𝐒𝐎𝐓 += #ASR#, r_asr_i,...,r_asr_m With the transcription and translation example in Table <ref>, we obtain the alignment shown in Figure <ref>. Its corresponding INTER ALIGN output is shown in the last row of Table <ref>. In particular, since Ich (ASR) and I (ST) are uniquely aligned, they are interleaved in the INTER 0.5 fashion. But, since brauche (ASR) is aligned with need (ST), and really (ST) is aligned with wirklich (ASR), the entire ASR block composed of brauche das wirklich is inserted before the corresponding ST words really need it. § EXPERIMENTAL SETTINGS We adopt a streaming T-T architecture <cit.> with 24 Transformer layers for the encoder, 6 LSTM layers for the predictor and 2 feed-forward layers for the joiner. The Transformer encoder has 8 attention heads, the embedding dimension is 512 and the feed-forward units are 4096. We use a chunk size of 1 second with 18 left chunks. The LSTM predictor has 1024 hidden units, as do the feed-forward layers of the joiner. Dropout is set to 0.1. We use 80-dimensional log-mel filterbanks as features, which are sampled every 10 milliseconds. Before feeding them to the Transformer encoders, we process the features with 2 layers of CNN with stride 2 and kernel size of (3, 3), with an overall input compression of 4.
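For reference, the architecture hyperparameters listed above can be summarized in a single configuration block (a plain restatement for readability; the key names are ours, not those of the actual training framework):

# Streaming Transformer-Transducer configuration used in the experiments (as stated above).
TT_CONFIG = {
    "encoder": {"type": "transformer", "layers": 24, "attention_heads": 8,
                "embedding_dim": 512, "feed_forward_dim": 4096,
                "chunk_size_s": 1.0, "left_chunks": 18},
    "predictor": {"type": "lstm", "layers": 6, "hidden_units": 1024},
    "joiner": {"feed_forward_layers": 2, "hidden_units": 1024},
    "frontend": {"features": "log-mel filterbank", "num_bins": 80, "frame_shift_ms": 10,
                 "cnn_layers": 2, "cnn_stride": 2, "cnn_kernel": (3, 3), "subsampling": 4},
    "dropout": 0.1,
}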
Our experiments are performed using 1k hours of proprietary data for each language (German, Italian, Spanish to English) and the models are tested on the CoVoST2 dataset <cit.>. AdamW <cit.> is used as optimizer with the RNN-T loss <cit.>. The training steps are 6.4M for the joint tSOT models and 3.2M for the separate ASR and ST models.[We noticed that a longer training of 6.4M steps does not improve or even degrades the performance.] Checkpoints are saved every 320k steps. The learning rate is set to 3e-4 with Noam scheduler, 800k warm-up steps and linear decay. The vocabulary is based on SentencePiece <cit.> and has dimension 4k for all the monolingual models, all the separate ASR and ST models, and the multilingual source ST model (since the target is always English). For the multilingual (source) joint tSOT and ASR models, the vocabulary size is set to 8k. Coverage is always set to 1.0. We use 16 NVIDIA V100 GPUs with 32GB of RAM for all the training and a batch size of 350k. We select the last checkpoint for inference, which is then converted to open neural network exchange (ONNX) format and compressed. The beam size of the beam search is set to 7. We report WER for the ASR output quality and BLEU[sacreBLEU <cit.> version 2.3.1] for the ST output quality. Latency is measured in milliseconds (ms) with the length-adaptive average lagging (LAAL) <cit.>, which is derived from the speech adaptation <cit.> of the average lagging (AL) metric <cit.>, incorporating the capability to handle predictions longer than the reference. § RESULTS §.§ Monolingual Results Table <ref> presents the results of the Italian monolingual ASR, ST and joint tSOT models. First, we observe how effective is the joint tSOT compared to training separate ASR and ST models. With the only exception of the ASR task for INTER 1.0 and the ST task for INTER 0.0, the joint tSOT models always outperform the separate ASR and ST architectures with improvements ranging from 0.63 to 1.18 WER while maintaining the same latency for ASR, and from 0.64 to 2.79 BLEU with also an average latency reduction of 312ms for ST. Therefore, the obtained results indicate the joint tSOT as a very promising approach. Moreover, the high latency shown by INTER 1.0 for ASR (over 3.5s) and INTER 0.0 for ST (approximately 3s) was expected since, for these two approaches, only one of the two modalities is actually streaming (as also already discussed in Section <ref>). Second, in contrast to Weller et al., 2021 <cit.>, we notice that INTER 0.5 achieves the best WER result instead of INTER 1.0 while, in accordance with them, the best BLEU is obtained by INTER 0.0. The lowest latency is achieved by INTER 0.5 and INTER ALIGN for ASR, and by INTER ALIGN for ST with a very large margin (between 150 and 1600ms of latency reduction). Considering both output quality and latency, the overall best result (underlined in Table <ref>) is obtained by INTER 0.5 for ASR, closely followed by INTER ALIGN, and INTER ALIGN for ST. Therefore, in the monolingual setting, INTER ALIGN emerges as the optimal model for jointly performing the ASR and ST tasks. §.§ Multilingual Results We extend our analysis to the multilingual setting by incorporating two additional source languages: Spanish, an Italic/Romance language with subject-verb-object (SVO) ordering similar to Italian, and German, a Germanic language with subject-object-verb (SOV) ordering <cit.>. In Table <ref>, we compare the joint tSOT methods with both monolingual and multilingual ASR and ST models. 
Looking at the results of the separate ASR and ST models, we observe a significant improvement going from monolingual to multilingual, particularly for Italian and German ASR (with an improvement of, respectively, 1.29 and 2.35 WER) and for all languages in ST (with an average BLEU improvement of 3.52). Consistent with the findings from the monolingual experiments, our joint tSOT methods outperform the monolingual and multilingual separate ASR and ST models considering both the output quality and the latency. While INTER 1.0 achieves the highest BLEU scores across all languages, it also exhibits the highest, hence worst, WER. In contrast, no clear trend emerges for the best WER results. Regarding latency, the INTER ALIGN method consistently achieves the lowest, hence best, LAAL, with an average of 1s for ASR and 1.3s for ST. Balancing both quality and latency, the overall best results are obtained by the INTER ALIGN method, with the only exception of German ASR, where the WER of INTER 0.5 is slightly better. In conclusion, the joint tSOT method, and in particular the INTER ALIGN approach, proves to be the most effective solution for jointly generating ASR and ST outputs, delivering high-quality results with minimal latency. The results show that the joint tSOT INTER ALIGN achieves significant improvements compared to the separate multilingual ASR and ST models, with an average reduction of 1.1 WER and gain of 0.4 BLEU across all languages, while maintaining comparable or even slightly lower latency (approximately 200ms average reduction). These findings highlight the efficiency of our proposed approach, which consolidates both ASR and ST functionalities into a single model. §.§ Interpretable ASR and ST Results To examine the relationship between the ASR and ST outputs obtained by our joint tSOT models, we conducted a manual analysis of the generated texts. We focused on the Italian to English language pair and selected the joint tSOT INTER ALIGN model, as it proved to be the best option for the streaming scenario. Representative examples extracted from the CoVoST 2 test set are shown in Table <ref>. The first example shows how a wrong transcription of the verb martirizzare (en: martyr) as the verb utilizzato (en: use/utilized) leads to a wrong translation having the same meaning as the wrong transcription. Additionally, Example 2 shows how an omission in the transcription also leads to the same omission in the translation (it: finali/en: finals instead of it: semifinali/en: semifinals). Examples 3 and 4 present another interesting phenomenon related to the wrong recognition of named entities and terminology. It has been previously demonstrated that failures in named entity recognition often produce the insertion of a completely different name or a common noun instead of the correct named entity <cit.>. In fact, Example 3 shows how the name Marine is incorrectly recognized as Marianne, and this affects both the transcription and the translation. In Example 4, instead, the term Joule is also misrecognized, but as the common word già, presumably because these two words have assonance in Italian. As a consequence, the ST output is affected by the prediction of a wrong ASR word but, differently from Example 1, the translation does not reflect the meaning of the wrong word già but is completely random.
Lastly, in Example 5, we observe that stanza dei forestieri (en: guest room) is literally translated by using out-of-context terms, where chamber is generated instead of room due to both concepts being expressed by the same Italian word stanza. Therefore, by analyzing the transcriptions and translations jointly produced by our joint tSOT model, we can better identify and understand the root causes of mistranslations, leading to a more interpretable output. This highlights the potential of our method to leverage the generated transcription to enable explainable ST. § CONCLUSIONS This paper introduced the first streaming Transformer-Transducer able to jointly generate both automatic speech recognition and translation outputs using a single decoder. To effectively produce transcription and translation words without increased latency, we proposed a joint serialized output training method that leverages an off-the-shelf neural text aligner to build the data without any additional costs. Monolingual (it-en) and multilingual ({de,es,it}-en) experiments proved that our proposed approach not only better balances the quality and latency constraints of the streaming scenario, with an average latency of 1s for ASR and 1.3s for ST, but also outperforms separate ASR and ST models by an average of 1.1 WER and 0.4 BLEU in the multilingual case. Moreover, it promotes a more explainable ST, by exploiting the ASR outputs to better understand the root cause of mistranslations, and Green AI, by significantly reducing the number of required systems.
http://arxiv.org/abs/2307.00456v1
20230702023457
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data
[ "Xinzhe Li", "Ming Liu", "Shang Gao" ]
cs.CL
[ "cs.CL" ]
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data Xinzhe Li, Ming Liu, Shang Gao ========================================================== This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution. Specifically, building on the work of <cit.>, we extend their bi-level optimization approach to generate unlearnable text using a gradient-based search technique. However, although effective, this approach faces practical limitations, including the requirement of batches of instances and model architecture knowledge that is not readily accessible to ordinary users with limited access to their own data. Furthermore, even with semantic-preserving constraints, unlearnable noise can alter the text's semantics. To address these challenges, we extract simple patterns from unlearnable text produced by bi-level optimization and demonstrate that the data remains unlearnable for unknown models. Additionally, these patterns are not instance- or dataset-specific, allowing users to readily apply them to text classification and question-answering tasks, even if only a small proportion of users implement them on their public content. We also open-source codes to generate unlearnable text and assess unlearnable noise to benefit the public and future studies. § INTRODUCTION With the increase in the prevalence of deep learning, public data has become more frequently used for developing predictive models. However, the use of unauthorized public data, such as tweets, raises ethical concerns. Furthermore, it is considered even more unethical to charge the public for services based on these models. In addition to the ethical concerns, our research can help address privacy issues associated with the development of sensitive applications that impede public privacy. For instance, facial recognition systems can recognize individuals even when they are on the street <cit.>. To prevent deep learning models from exploiting textual content and potentially predicting private information such as sentiments on sensitive topics <cit.>, political affiliations <cit.>, age, and gender of users <cit.>, we propose making text unlearnable. While <cit.> proposed a process to make images unlearnable, our work extends this idea to generate unlearnable text using a gradient-based search approach. In our study, we investigate the performance of error-minimization modifications for text unlearning in three tasks: sentiment analysis, topic classification, and question answering. Sentiment analysis and topic classification can reveal users' interests, such as political leaning, while question answering can extract information from users' text. Due to data accessibility limitations and privacy concerns, we conduct our experiments on open data that is commonly used for academic purposes. Our contributions include the adaptation of the bi-level optimization formulation from <cit.> to text, and the development of a search procedure to modify text for (inner) error minimization. Our results show the efficacy of error-minimization modifications in making text unlearnable for all three tasks. However, the optimization process is impractical in real-world scenarios. Therefore, we extract two synthetic patterns from error-min modifications: label hints for text classification and an answer hint for question answering. These patterns can make text unlearnable and can be applied to any individual text without requiring a computationally expensive algorithm.
We also consider the effectiveness of these synthetic patterns in real-world scenarios. Our results show that they can be effective on models with different network architectures and training paradigms, including training from scratch and the pretrain-then-fine-tune paradigm. Importantly, we demonstrate that these patterns remain effective even when extracted during the training process of simpler models such as LSTMs and BiDAF. Moreover, they remain effective even when only a portion of users use them, and can be applied to one of the classes, which can be helpful in making one specific sensitive class unlearnable. § BACKGROUND In this section, we will conduct an analysis of the existing privacy protection methods designed to safeguard against training deep learning models. We will then proceed to explicate the bi-level optimization approach adopted in this study to generate unlearnable images. In the subsequent section, we will demonstrate the generalizability of this method to generate unlearnable text. §.§ Privacy Protection The development of deep learning models with public data has raised concerns about privacy leakage. Several research directions have been proposed to address this concern. Differentially-private techniques <cit.> have been suggested as a solution to prevent the memorization of user-specific information during the training process. However, the application of such techniques requires users to trust those who collect their data. Another proposed approach is machine unlearning <cit.>, which aims to remove the training impact of specific samples provided by users after the models have successfully learned from the data. Protection of textual messages against unauthorized neural natural language processing (NLP) models is critical. In particular, statistical features learned by these models can lead to the extraction of private information by hackers <cit.>, since DNNs can memorize private information such as names and addresses in training data. This paper concentrates on user-end solutions for privacy protection, exploring noise-addition approaches against unauthorized NLP models. While several noise-addition approaches have been proposed by the computer vision community against facial recognition models <cit.>, to the best of our knowledge, no similar work has been conducted in the NLP community. §.§ Formulating the Unlearnable Objective as a Bi-level Optimization Problem Consider a training set 𝒟={(x, y)_i}_i=1^N, where the i-th instance consists of a text x and its true label y for classification. A DNN f(θ), where θ denotes the parameters of the model f, maps the input space 𝕏 to the output space 𝕐. The training objective is to minimize the loss function ℒ: min_θ 𝐄_(x, y) ∼𝒟[ℒ(f(x), y)]. Min-min Optimization by <cit.>. <cit.> nested the unlearnable objective within the training objective (Equation <ref>) to formulate a bi-level optimization problem: min_θ 𝐄_(x+η, y) ∼𝒟[min_η ℒ(f(x+η), y)], where a pixel-wise vector η∈ℛ^C × H × W is optimized to minimize ℒ, and where C, H, W are the numbers of channels, height and width of the images, respectively. They solved the outer objective with the common training routine, i.e., the gradient descent algorithm to iteratively optimize the model parameters θ: θ_t+1 = θ_t - γ∇_θ_tℒ, where γ is the learning rate.
For the inner objective, they nested another iterative process of projected gradient descent (PGD) <cit.> to optimize the noise η (error-min noise) for each training sample (sample-wise noise) or each class (class-wise noise), which is a common routine to solve bi-level optimizations <cit.>. Equation <ref> shows the one-step update: η_t+1 = η_t - ε·sgn(∇_η_tℒ(x_t)), where they obtain perturbed images via the element-wise addition x=x+η, and ε·sgn(·) enforces an ℓ_∞-norm bound on the noise. We detail the whole min-min optimization in Algorithm <ref>. Unlike the original process, for computational efficiency we add an exit condition: the process stops when the evaluation metrics on test sets are unchanged, which indicates the noise's effectiveness. [We would use accuracy for text classification tasks and F1 scores for question answering.] To generate unlearnable text, we replace step <ref> with a loss approximation search procedure, as demonstrated in the next section. § ADAPTATION TO TEXT In this section, we first formulate noise as discrete text modifications, in contrast to pixel-wise vectors for images. To adapt Algorithm <ref> to text modifications, we use a search procedure (Algorithm <ref>) to replace the PGD optimization steps. §.§ Text Modifications Unlike images, a textual input x consists of a sequence of words w_1, w_2, ..., w_T, where T is the number of words. A vocabulary V consists of all the words. Therefore, we define noise as substituting the word w_p ∈ x indexed by the position p with a word s ∈ V, denoted as η=(p, s). However, there are two problems: 1) The discrete operation (p, s) is not differentiable: since the noise η for images is continuous, it is differentiable and can be optimized via gradient descent, but we cannot use gradient descent to optimize (p, s); 2) Modifying a single token may change the semantics of text (e.g., "I love you" to "I the you"), while a simple ℓ_∞ norm on noise for an image can make it imperceptible. §.§ A Search Procedure To solve the first problem, we approximate the loss change for all possible substitutions and search for a substitute word causing the lowest loss. Specifically, each word w can be transformed into a dense vector e_w via a matrix 𝐄∈ℛ^n × m parameterized in a DNN f(θ), where n is the size of a vocabulary V and m is the size of each embedding vector. We measure the loss change of substituting a word w_p with another word s ∈ V by the inner product of e_s and the gradient of the loss w.r.t. e_w (∇_e_wℒ(x, y)): min_s e_s^T∇_e_wℒ(x, y) The first-order approximation approach has been used for adversarial attacks <cit.> with different implementations. For semantic preservation, we select the modified word s from semantically similar words for each substitution. Following the setting of <cit.> for generating adversarial candidates, we calculate the cosine similarity between w and s and select candidate words within a threshold. We discuss the setting of the hyperparameters in Appendix <ref>. Besides, we only consider one modification (p, s) for a text. For question answering, we exclude positions in answer spans. Implementation. To search for a (p, s) to minimize the training loss, we acquire the gradients for all the positions of the original example by one forward and backward pass, i.e., ∇_xℒ=∇_e_w_1ℒ, ..., ∇_e_w_Tℒ. Instead of searching over the vocabulary for each w_p, we efficiently approximate the loss changes for all the candidates (P, S) by one matrix multiplication as Equation <ref>. We discuss the approximation errors in Appendix <ref>.
𝐀 = ∇_xℒ^T𝐄, where ∇_xℒ∈ℛ^T × m, and embedding matrix 𝐄∈ℛ^n × m, We then rank all the candidates according to the approximation scores 𝐀∈T × n and select the one with the lowest score satisfying the constraints. Algorithm <ref> demonstrates the process of searching for a optimal (p*, s*) for an instance (x, y) at one iteration. § EXPERIMENTAL SETTINGS This section will first introduce all our experiment's tasks, datasets, and models. We then demonstrate essential factors for generating unlearnable modifications. §.§ Tasks and Datasets Text classification. A neural network f(θ) takes a text x and outputs a probability distribution over the output space Pr(Ŷ| x) after normalizing by the Softmax function, i.e., Pr(Ŷ| x) = Softmax(f(x)). ℒ is defined as a negative log likelihood of Pr(y| x, θ) or a cross entropy between Pr(Ŷ| x) and one-hot representation of the true label y. We choose two datasets to simulate real-world scenarios to identify users' sentiments and interests, each with training, validation, and test sets. * SST2: It contains movie reviews from the Stanford Sentiment Treebank (SST) dataset. Each sentence is labelled as either positive or negative sentiment. <cit.> * AG-News: This dataset divides news articles into four classes: world, sports, business, and sci/tech. It involves 10,800 training samples, 12,000 validation samples, and 7,600 test samples. It works as a proxy task to detect users' interests. Question answering. Given a passage of text p and a question q, models aim to extract a correct answer span a from p. Given x=(p, q), f(θ) will output probability distributions for both the beginning and ending positions of the answer span a, denoting as Pr_start and Pr_end. The loss ℒ is calculated by adding negative log likelihoods of Pr_start and Pr_end. We aim to prevent QA models from learning the passage when we maintain correct answers in the passage. We use the Stanford Question Answering Dataset (SQuAD) v1.1 dataset <cit.>, which contains more than 100,000 question-answer pairs based on about 500 articles. Since the SQuAD test set is unavailable, we use the validation/test splits from <cit.> derived from the original validation set. §.§ Models To generate error-min modifications, we use LSTMs <cit.> (∼ 3.8M parameters) for all the text classification tasks and Bidirectional Attention Flow (BiDAF) model <cit.> (∼ 2.5M parameters) for question answering. Specifically, BiDAF uses one bidirectional LSTM to represent each context and question respectively and applies an attention mechanism to generate two question-aware context representations with a dimension of H, where H is the hidden size. A linear layer parameterized by a matrix M^H × 2, followed by a softmax function, transforms them into the probability distributions Pr_start and Pr_end respectively. We use the 300-dimensional GloVe word vectors <cit.> for the above models. To answer whether we can make text unlearnable when fine-tuning powerful pretrained language models, we evaluate BERTBASE with 110M parameters <cit.> for text classification and RoBERTaBASE with 125M parameters <cit.> for question answering. In contrast to BiDAF, RoBERTa is pretrained to support a pair of sequences as inputs by concatenating them with a special token. §.§ Computational Considerations Generating modifications by the min-min optimization is computationally expensive. 
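The candidate-scoring step just described (one backward pass plus a single matrix multiplication per instance) is the main cost of each search iteration. A minimal sketch in PyTorch, assuming a model that accepts input embeddings directly and returns the training loss (names and interface are ours, not the released code):

import torch

def substitution_scores(model, embedding_matrix, input_embeds, label):
    # Approximate loss change for every (position, word) substitution:
    # a T-by-n score matrix obtained with a single matrix multiplication (Eq. above).
    input_embeds = input_embeds.clone().requires_grad_(True)    # shape (T, m)
    loss = model(input_embeds, label)                           # scalar training loss
    grad = torch.autograd.grad(loss, input_embeds)[0]           # shape (T, m)
    return grad @ embedding_matrix.t()                          # shape (T, n)

# Rank (p, s) pairs by their score: lower means a larger estimated loss decrease.
# The chosen substitution is the lowest-scoring pair that also passes the
# cosine-similarity constraint (text classification) or lies outside answer spans (QA).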
Due to limited computational resources, we down-sample the training sets for AG-News and SQuAD to validate the min-min optimization, i.e., we use the first 3,200 articles and their categories from AG-News and 1,000 question-answer pairs from the SQuAD training set. However, we construct the vocabulary on the whole training data to avoid out-of-vocabulary words when evaluating on test data. Note that this number of SQuAD examples is not large enough to train a good QA model. However, we can still evaluate the effectiveness of the min-min optimization by comparing model performance on clean and modified data. Even so, we find that Algorithm <ref> runs much slower on AG-News and SQuAD than on SST2, since it is harder to find substitute words that satisfy the similarity constraint. We therefore do not apply the constraint to AG-News and SQuAD. Since the texts in these two datasets are much longer (19 words for SST2, 43 for AG-News, and more than 100 for SQuAD), substituting one word is unlikely to change the semantics of a text. [Even so, running Algorithm <ref> once to generate one set of error-min modifications costs around 4 hours for AG-News and more than 10 hours for SQuAD with an RTX 3080 (16GB).] § EFFECTIVENESS OF MIN-MIN OPTIMIZATION In this section, we report the effectiveness of modifications generated via the min-min optimization and further analyze why min-min modifications are effective. §.§ Experimental Results The min-min optimization generates several sets of error-min modifications (S_0, P_0), ..., (S_i, P_i), ..., (S_N, P_N) at different training checkpoints (see step <ref> in Algorithm <ref>). For example, Error-min-i=(S_i, P_i) is generated by Algorithm <ref> after M × i training steps and is applied to the next M training steps (see step <ref> in Algorithm <ref>) until (S_i+1, P_i+1) is generated. Error-min-N=(S_N, P_N) is the final output of the min-min optimization. We not only answer whether the final min-min modifications (Error-min-N) can make text unlearnable but also evaluate whether the other sets of error-min modifications (e.g., Error-min-i) are effective. Specifically, we apply each set of error-min modifications to the clean training data and optimize neural networks on the modified training data. We then follow the strategy from <cit.> to measure metrics on test samples during different training epochs. The min-min optimization over the LSTM on SST2 generates three sets of error-min modifications (i.e., N=3), while it generates two sets for AG-News and SQuAD. All the results in Figure <ref> demonstrate that the Error-min-0 modifications effectively make text unlearnable. They are even more effective than the last error-min modifications for SST2 and AG-News. This suggests that the bi-level optimization may be unnecessary for generating effective modifications, and that one-step error minimization on randomly initialized DNNs can already generate effective modifications. §.§ Analysis Exploring why Error-min-0 appears more effective, we find in this section that there exist simple, explicit patterns which correlate with the task-specific outputs (i.e., labels for text classification or answers for QA) and make text unlearnable. Specifically, we first investigate whether the substitute words in each set of error-min modifications correlate with labels. We divide all the substitute words for each class into bags of words (label-wise BOWs) and calculate the average Jaccard similarity between each pair of BOWs as Equation <ref>. 
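For concreteness, this label-wise similarity can be computed in a few lines of Python. The sketch below is our own illustration and assumes each BOW is available as a plain set of substitute words; averaging over label pairs is our normalization convention:

from itertools import combinations

def average_jaccard(label_bows):
    # label_bows: dict mapping each label to the set of substitute words
    # collected from one set of error-min modifications.
    pairs = list(combinations(label_bows.values(), 2))
    sims = [len(a & b) / len(a | b) for a, b in pairs if a | b]
    return sum(sims) / len(sims) if sims else 0.0

# Toy example with two hypothetical label-wise BOWs:
bows = {"positive": {"lovely", "award-winning"}, "negative": {"boring", "lovely"}}
print(average_jaccard(bows))  # 1/3: one shared word out of three distinct words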
Table <ref> shows that effective modifications (e.g., Error-min-0) present low similarity, which indicates that label-wise patterns may make text unlearnable. Average Similarity = ∑_i=1^K∑_j=i+1^K|BOW_i ∩BOW_j|/|BOW_i ∪BOW_j| where K is the number of classes/labels. We also find little sample-wise feature in each label-wise BOW. Specifically, we calculate the probabilities over all the substitute words. For example, Pr_BOW_0(w) denotes the probability that the word w appears amongst all the samples with the label indexed by 0. We then rank the probabilities in descending order and cumulate the probabilities for the top 5 words. Figure <ref> shows that we only need five words to make most of the examples unlearnable. We then investigate the distribution of positions P. We calculate the relative position p_rel for each sample by dividing each position p (the index of the modified word) by the length of the sentence x. Extremely, p_rel=0 when modifying the first word, while p_rel=1 if the last word is modified. Figure <ref> shows that text tends to be modified at the end. We also find a simple pattern in the error-min modifications for SQuAD: 1) all the positions are identified within the one-word distance of the answers. 2) Similar to text classification, the top 5 substitute words modify 98% of 1000 samples. Therefore, we can reasonably hypothesize that the min-min optimization would generate noise with task-specific patterns to make text unlearnable, e.g., words correlating to labels for text classification or words to indicate the positions of answers for QA. § MANUALLY GENERATING SIMPLE PATTERNS In this section, we test the effectiveness of synthetic patterns according to the previous findings since it is difficult to use the min-min optimization in reality. First, it assumes that users can access model architectures and the whole training data (or at least a batch of instances). In real life, users can only access their portion of data and publish one instance (e.g., a tweet) once at a time. Besides, generating modifications with the min-min optimization is very computationally expensive. Hence we construct synthetic patterns, including class-wise symbols (label hints) for text classification and a symbol surrounding the answer spans (answer hints) for question answering. Another benefit is that inserting such symbols maintains semantics without complicated constraints. To show that the patterns can be generalized to other network architectures, we evaluate them by fine-tuning two popular pretrained transformers: BERT for text classification and RoBERTa for question answering. Figure <ref> shows that these hints can effectively prevent DNNs from comprehending the text. Surprisingly, class-wise symbols are effective at any position (the beginning/middle/end). Although we show experimental results with characters (e.g., "a", "b") as the hints, we can also achieve the same outcome by inserting an exclamation mark ("!") and an at sign ("@") at the end of positive and negative reviews respectively as label hints, which makes such patterns more imperceptible (See Appendix <ref> for examples). The patterns' effectiveness when only partial training instances can be modified. Since it may not be possible to let all users add the patterns, we explore their effectiveness when applying such patterns to partial training data. We randomly select a certain percent of training instances (𝒟_partial) and apply unlearnable patterns on them (𝒟_unlearn). 
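As an illustration of how such patterns might be applied to only a fraction of the data, consider the following sketch. It uses the exclamation-mark/at-sign label hints mentioned above and appends them at the end of each selected review; the data format and the helper name are hypothetical:

import random

# Class-wise label hints used for the movie-review experiments described above:
# an exclamation mark for positive reviews and an at sign for negative ones.
LABEL_HINTS = {"positive": "!", "negative": "@"}

def make_partially_unlearnable(dataset, fraction=0.5, seed=0):
    # dataset: list of (text, label) pairs; the randomly selected subset
    # (D_unlearn) gets the label hint appended, the rest stays clean.
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(dataset)), int(fraction * len(dataset))))
    modified = []
    for i, (text, label) in enumerate(dataset):
        if i in chosen:
            text = text + " " + LABEL_HINTS[label]   # insertion at the end
        modified.append((text, label))
    return modified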
To show the effectiveness of unlearnable patterns, we calculate the change in the test accuracy after adding 𝒟_unlearn into the training process. For comparison, we report the result by adding 𝒟_partial. As shown in Table <ref>, models rarely learn useful information from 𝒟_unlearn compared to 𝒟_partial. Can we only make one class of examples unlearnable? We select one class in AG-News (i.e., the "World" category) and insert a symbol ("a") only on instances belonging to the "World" class. A BERT model fine-tuned on such a dataset shows low accuracy on the test instances belonging to the "World" class (0.015) and high accuracy on others (0.93). Henceforth, users can make a sensitive class of data unlearnable by agreeing on a class-specific symbol. §.§ Why Do Simple Patterns Make Text Unlearnable? We consider simple patterns as biased features. Without any biased feature, the gradient descent algorithm would optimize θ to approximate the conditional probability Pr(y| x) by minimizing empirical errors of any training instance. When we embed a simple biased feature b into x, the DNN would first learn Pr(y| b). Many previous works <cit.> have found that deep learning tends to learn superficial patterns. As shown in our experiments, once the model learns such Pr(y| b), models have difficulty exploiting the semantics of the input x during the latter training process since the performance on test data does not improve. This property coincides with shortcuts found in question answering <cit.>. An unlearnable state. We assume that there exists an unlearnable state when models confidently correlate b with model outputs, i.e., Pr(y|b) ≈ 1, which would lead to ℒ≈ 0 for any input x with b. Correspondingly, the forward pass would generate zero gradients to update the model during the backward pass. Since the model has no update according to the data, we can ensure that there is no information leakage. We verify this by tracing gradient norms during fine-tuning BERT on synthetic patterns. Figure <ref> shows that the unlearnable state appears at about 250 iterations, where the model stops updating parameters. The same phenomenon occurs during training LSTM on error-min modifications (see Appendix <ref>). § CONCLUSION By adapting min-min optimization, we develop an approach to expose vulnerabilities of deep learning to make text unlearnable. To overcome the limitation of requiring knowledge of models and training data, we extract simple patterns (e.g., label hints and answer hints) from the min-min optimization to make text unlearnable. Although our experiment explores patterns for text classification and question-answering tasks, the pipeline potentially works for any NLP task. Reproducibility. To ensure the effectiveness of unlearnable modifications, we slightly tuned the training hyperparameters to achieve well-trained models, such as setting maximum gradient norms and early stopping according to validation sets. We open-source codes with configuration files, which contain hyperparameters regarding model architectures (e.g., the number of layers), batching (e.g., data sampling), and training setups (e,g., learning rate). Since these files are configurable in JSON format, future works can easily reproduce and extend the experiments. § LIMITATIONS The main concern is that debiased techniques may remove simple biased features. However, to our knowledge, most debiased techniques <cit.> can only remove biases across a concept subspace (e.g., the bias direction for gender) in the embedding space. 
Another setup of data debiasing, e.g., <cit.>, requires hypothesized biases to train biased models and is limited to tasks with known hypothesized biases (e.g., lexical overlap for NLI). Also, they remove biased examples rather than identify biased symbols (e.g., label hints). However, we still expect future works to consider other complicated patterns beyond symbol insertions or word substitution. acl_natbib § THE CHANGE OF GRADIENT NORMS Figure <ref> shows gradient norms with error-min modifications and further proves the argument. The set of the Error-min-0 modifications with label-wise patterns (see Table <ref>) has almost zero gradients during training. It even has a small gradient update in the first few steps. It may be because the randomly initialized models can easily learn class-wise patterns, while BERT has to overcome its pretrained priors. § HYPERPARAMETER SETTING The interval of optimizing the error-min noise M. If M is too small, the test accuracy after another M iterations easily plateaus due to insufficient model update, which causes the early stop of the min-min process. On the other hand, a large interval will linearly increase the computational complexity. Specifically, since we use modifications for batches of instances in the next M training iterations, error-min optimization needs to be run for M × B instances, where B is the batch size. Hence, we set M=30 for text classification tasks and a smaller M (10) for SQuAD because of a larger batch size and longer sequence lengths to train SQuAD models. The threshold of cosine similarity. We set the threshold to 0.5 to follow the work <cit.> for generating adversarial noise. The effect of the threshold: Increasing the threshold can help find more semantically similar words (even synonyms), as specified in <cit.>. For example, when we use this threshold, the word "award-winning" is identified to replace "charming". However, by increasing the threshold to 0.9, the substitute word becomes "lovely". However, Algorithm <ref> runs much slower by denying most of the high-ranked candidates and leads to noise that is hard to make data unlearnable. Also, it stops us from deriving general unlearnable patterns via qualitative analysis of substitute words. For example, the cumulative probabilities in Table <ref> would be smaller due to more varying substitution sets. § ERRORS OF APPROXIMATING LOSS CHANGES Generally, in our experiment, Equation <ref> can always approximate the loss change in a correct direction, in our case, leading to the decrease of the actual loss. Specifically, the errors of the approximate loss change depend on the state of the models (the outcome of the outer minimization). For example, the results (the loss on the original SST2 training instances/the loss on the modified instances/the approximate loss change) for a randomly initialized LSTM would be 0.6931/0.6833/-0.0004, while, at the other extreme, the results for the LSTM checkpoint which has converged on our label hint are 0.4457/0.0782/-0.0012 or 0.4905/0.0714/-0.0379.
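As a rough, self-contained illustration of how actual and approximate loss changes can be compared, the toy sketch below uses a random mean-of-embeddings logistic model rather than the LSTM used in our experiments; the convention of subtracting the original embedding in the first-order term is our assumption:

import numpy as np

rng = np.random.default_rng(0)
n_vocab, dim = 50, 8
E = rng.normal(size=(n_vocab, dim))          # embedding matrix
w = rng.normal(size=dim)                     # classifier weights (logistic model)

def loss(tokens, y):
    h = E[tokens].mean(axis=0)               # mean-of-embeddings sentence vector
    p = 1.0 / (1.0 + np.exp(-(h @ w)))
    return -np.log(p if y == 1 else 1.0 - p)

def grad_wrt_embedding(tokens, y):
    h = E[tokens].mean(axis=0)
    p = 1.0 / (1.0 + np.exp(-(h @ w)))
    return (p - y) * w / len(tokens)         # d loss / d e_{w_p}, same at every position

tokens, y, p_pos, s_new = [3, 7, 11, 20, 42], 1, 2, 17
g = grad_wrt_embedding(tokens, y)
approx = (E[s_new] - E[tokens[p_pos]]) @ g   # first-order estimate of the loss change
modified = tokens.copy(); modified[p_pos] = s_new
actual = loss(modified, y) - loss(tokens, y)
print(f"approx dloss = {approx:.4f}, actual dloss = {actual:.4f}")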
http://arxiv.org/abs/2307.03154v1
20230706173014
Reversible Non-Volatile Electronic Switching in a Near Room Temperature van der Waals Ferromagnet
[ "Han Wu", "Lei Chen", "Paul Malinowski", "Jianwei Huang", "Qinwen Deng", "Kirsty Scott", "Bo Gyu Jang", "Jacob P. C. Ruff", "Yu He", "Xiang Chen", "Chaowei Hu", "Ziqin Yue", "Ji Seop Oh", "Xiaokun Teng", "Yucheng Guo", "Mason Klemm", "Chuqiao Shi", "Yue Shi", "Chandan Setty", "Tyler Werner", "Makoto Hashimoto", "Donghui Lu", "T. Yilmaz", "Elio Vescovo", "Sung-Kwan Mo", "Alexei Fedorov", "Jonathan Denlinger", "Yaofeng Xie", "Bin Gao", "Junichiro Kono", "Pengcheng Dai", "Yimo Han", "Xiaodong Xu", "Robert J. Birgeneau", "Jian-Xin Zhu", "Eduardo H. da Silva Neto", "Liang Wu", "Jiun-Haw Chu", "Qimiao Si", "Ming Yi" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA Theoretical Division and Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, NM, USA Cornell High Energy Synchrotron Source, Cornell University, Ithaca, NY 14853, USA Department of Applied Physics, Yale University, New Haven, Connecticut 06511, USA Department of Physics, University of California, Berkeley, Berkeley, California 94720, USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics, University of California, Berkeley, Berkeley, California 94720, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Materials Science and NanoEngineering, Rice University, Houston, TX, 77005, USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Applied Physics, Yale University, New Haven, Connecticut 06511, USA Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA National Synchrotron Light Source II, Brookhaven National Lab, Upton, New York 11973, USA National Synchrotron Light Source II, Brookhaven National Lab, Upton, New York 11973, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Materials Science and NanoEngineering, Rice University, Houston, TX, 77005, USA Departments of Electrical and Computer Engineering, Rice University, Houston, TX, 77005, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Department of Materials Science and NanoEngineering, Rice University, Houston, TX, 77005, USA Department of Physics, 
University of Washington, Seattle, Washington 98195, USA Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA Department of Physics, University of California, Berkeley, Berkeley, California 94720, USA Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA Department of Materials Science and Engineering, University of California, Berkeley, USA Theoretical Division and Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, NM, USA Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA Department of Physics, University of Washington, Seattle, Washington 98195, USA Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA mingyi@rice.edu Department of Physics and Astronomy and Rice Center for Quantum Materials, Rice University, Houston, TX, 77005 USA Reversible Non-Volatile Electronic Switching in a Near Room Temperature van der Waals Ferromagnet Ming Yi August 1, 2023 ================================================================================================= § ABSTRACT The ability to reversibly toggle between two distinct states in a non-volatile method is important for information storage applications. Such devices have been realized for phase-change materials, which utilizes local heating methods to toggle between a crystalline and an amorphous state with distinct electrical properties <cit.>. To expand such kind of switching between two topologically distinct phases requires non-volatile switching between two crystalline phases with distinct symmetries. Here we report the observation of reversible and non-volatile switching between two stable and closely-related crystal structures with remarkably distinct electronic structures in the near room temperature van der Waals ferromagnet Fe_5-δGeTe_2. From a combination of characterization techniques we show that the switching is enabled by the ordering and disordering of an Fe site vacancy that results in distinct crystalline symmetries of the two phases that can be controlled by a thermal annealing and quenching method. Furthermore, from symmetry analysis as well as first principle calculations, we provide understanding of the key distinction in the observed electronic structures of the two phases: topological nodal lines compatible with the preserved global inversion symmetry in the site-disordered phase, and flat bands resulting from quantum destructive interference on a bipartite crystaline lattice formed by the presence of the site order as well as the lifting of the topological degeneracy due to the broken inversion symmetry in the site-ordered phase. Our work not only reveals a rich variety of quantum phases emergent in the metallic van der Waals ferromagnets due to the presence of site ordering, but also demonstrates the potential of these highly tunable two-dimensional magnets for memory and spintronics applications. § MAIN Materials that can toggle between two states with distinct properties are important for information storage technology. Phase-change materials, for example, have been widely used for rewriteable optical data storage <cit.>. The key advantage is that the two phases are controlled by a non-volatile process, which is realized via a transient laser pulse that locally heats and changes the crystal structure, either resulting in a crystalline state or a quenched amorphous state. 
2D van der Waals (vdW) materials is another class of material family whose properties are highly tunable, such as by electrostatic doping, optical illumination, or strain <cit.>. They are valued not only for their versatile tunability but also the low dimensionality that allows exotic properties to arise due to quantum confinement. The advent of the concept of topology adds the potential to realize switching devices that go beyond resistive or optical readouts. As topology is often distinguished by crystalline symmetries, switching between two topologically distinct states can be realized via tuning knobs that change symmetries <cit.>, most often achieved with a structural transition. However, such tuning knobs typically modulate temperature, electrostatic doping, strain, field, or pressure, all difficult to achieve in a non-volatile method. Here in this work, we demonstrate non-volatile reversible switching of two closely related crystal structural phases in the vdW ferromagnet Fe_5GeTe_2 via an annealing and quenching procedure. Fe_5GeTe_2 belongs to a class of Fe-based metallic vdW ferromagnets that exhibits relatively high Curie temperatures (T_C = 275 K to 330 K in the bulk limit) <cit.>. Different from other widely studied 2D ferromagnets such as CrI_3 and Cr_2X_2Te_6 (X = Ge, Si) <cit.>, Fe_5GeTe_2 is air-stable and metallic, hence has been considered a top candidate for spintronics applications <cit.>. The two phases share similar overall crystal structure but differ only in the ordering or disordering of an Fe vacancy site occupation that results in distinct crystalline symmetries. Second harmonic generation (SHG) measurements show the site-disordered phase to exhibit global inversion symmetry while the site-ordered phase breaks inversion symmetry, with intensity differing by a factor of 30. Remarkably, the electronic structures in the two phases are qualitatively distinct, as observed by angle-resolved photoemission spectroscopy (ARPES). From a combination of symmetry analysis and first principle calculations, we also provide a understanding of the key features of the observed electronic structure. In the site-disordered phase, we observe topological nodal lines that are compatible with the preserved global inversion symmetry, while in the site-ordered phase, we observe the lifting of the topological degeneracy due to the broken inversion symmetry as well as flat bands that are compatible with the quantum destructive interference of a bipartite crystalline lattice formed by the site-order. Our work not only demonstrates the exciting potential of using site order in the Fe-based 2D materials as a novel tuning knob to engineer and control correlated topological phases, but also reveals the potential of this class of 2D materials as a novel type of phase-change materials for non-volatile spintronics, memory or non-linear optical applications. §.§ Reversible switching of two distinct electronic structures Fe_5GeTe_2 belongs to a larger class of Fe-based metallic ferromagnets, Fe_nGeTe_2 (n=3 to 5) <cit.>, and is known to have a unique partially occupied split site <cit.>. The crystal lattice of Fe_5GeTe_2 is rhombohedral (space group R3m, No. 166) <cit.>. The crystal structure consists of an ABC-stacking of the vdW slabs (Fig. <ref>a). Each slab consists of Fe and Ge sites sandwiched between layers of Te. In addition, each slab consists of three distinct Fe sites, marked as Fe(1), Fe(2), and Fe(3) in Fig. <ref>a. 
While Fe(2) and Fe(3) sites are fully occupied, Fe(1) sites are known to be split-sites where for each up-down pair within a single slab, they are either occupied in the up or down site <cit.>. This choice of either the up or down site for Fe(1) pushes the Ge sites to also occupy a split site, where the site farther away from the occupied Fe(1) site is preferred. The choice of either occupying the up or down Fe(1) sites can be uncorrelated spatially or form an order depending on the rate at which the crystals are formed from growth <cit.>. In particular, the occupancy of the Fe(1) sites could form an up-down-down (UDD) or down-up-up (DUU) pattern, resulting in a √(3)×√(3) superstructure <cit.>. This ordered occupancy is favored when the crystals are quenched from above a structural transition identified by previous literature as T_HT = 550 K while the random distribution is favored with slow cooling <cit.>. For simplicity, we refer to the uncorrelated phase the site-disordered phase and the ordered phase the site-ordered phase. The ordering of the Fe(1) sites plays a crucial role in modifying the global symmetry of the crystal. In the site-disordered phase, the global inversion symmetry is preserved. This can be seen in Fig. <ref>a, where the inversion centers of each vdW slab is between the Ge split sites. In the site-ordered phase, the inversion symmetry is broken by the Fe(1) sites <cit.>. Such symmetry breaking has profound impact on the electronic structure, and as we will demonstrate, is the key to the tunability. To probe such an effect, we carried out ARPES measurements on crystals that were prepared in the two thermal methods. The measured Fermi surface (FS) of the slow-cooled crystals (Fig. <ref>g) and the quenched crystals (Fig. <ref>h) under the same measurement conditions are drastically different. In particular, the quenched crystals exhibit small pockets at the K points of the BZ, which are absent in the slow-cooled crystals. Instead, the slow-cooled crystals exhibit additional large pockets centered at the Γ point. As we will show in detail in each of the two subsequent sections, the band dispersions leading to these FSs are significantly different, belonging to distinct topologically non-trivial phases. Before we discuss the electronic structure in depth, we first demonstrate the reversible non-volatile switching of these two phases. To confirm that it is the last thermal cooling step that dictates the electronic phase, we performed the following test (see Fig. S8 in the SI). First, we prepared the crystals by quenching them from above T_HT down to room temperature. Then we cut a crystal into halves and annealed a half piece to the metastable phase above T_HT and slowly cooled it back down to room temperature while leaving the other half untreated (Fig. <ref>b). The half pieces are then measured by ARPES. The electronic structure of the two halves are observed to be distinct, with the original quenched half identical to that shown in Fig. <ref>h, while the annealed and slow-cooled half identical to that presented in Fig. <ref>g. We have also checked the reverse process, which is to start with a crystal that was first formed via slow-cooling to room temperature, cut it in half, and annealing one half to above T_HT and then quenched in water (Fig. <ref>c). The subsequent ARPES measurement on the two halves again show the contrasting electronic structures, with the re-quenched half showing electronic structure identical to that in Fig. 
<ref>h and the original slow-cooled half identical to Fig. <ref>g. This procedure demonstrates that the key for the distinct electronic structures is the cooling rate in the final thermal treatment from above T_HT. Hence we have demonstrated that there are two stable phases with drastically distinct electronic structures that can be reversibly switched in a non-volatile method. §.§ Fe(1) site ordering as origin for distinct electronic phases As reported, there are three types of possible variations of the Fe_5-δGeTe_2 single crystals: Fe deficiency (δ), stacking faults, and the formation of the  Fe(1) site order <cit.>. Since each pair of half crystals used above originate from the same original piece, the above procedure also rules out any difference in Fe deficiency as a potential cause for the difference in the electronic structure. Furthermore, we can also rule out the vdW stacking faults as a possible cause of the distinct electronic structure. From our transmission electron microscopy (TEM) images on the two types of crystals (see Fig. S1 in the SI), we do not observe any regular appearance of stacking faults in either the slow-cooled or quenched crystals. Both crystals exhibit ABC stacking, with occasional stacking faults between the vdW layers. Such rare occurrence cannot constitute a qualitative electronic structure distinction between the two types of crystals. Therefore we are left with the appearance of the  Fe(1) site order as the likely cause of the dichotomy of the electronic structure. First from single crystal x-ray diffraction (XRD) measurements, while the diffraction peaks corresponding to the √(3)×√(3) order are observed in the two types of crystals, their intensity relative to the Bragg peaks is reduced in the slow-cooled samples compared to those measured on a quenched crystal (see Extended Data Fig. S3). This suggests that while both site-disordered and site-ordered regions exist in the slow-cooled crystals, the population of the site-ordered regions is smaller. To further confirm this, we carried out STM measurements on both quenched and slow-cooled crystals, revealing regions with a √(3)×√(3) superlattice with both UDD and DUU ordering of Fe(1) occupation sites (Fig. <ref>e-f), consistent with previous STM reports on Fe_5-δGeTe_2 <cit.>. While the field of view (on the order of 1 μm^2) of our STM measurements on quenched crystals showed only √(3)×√(3) ordered regions, similar measurements on slow-cooled crystals also showed disordered regions without the √(3)×√(3) superlattices. Interestingly, in slow-cooled crystals, these disordered regions dominate the field-of-view, surrounding small domains of √(3)×√(3) superlattices (see SI Extended Data Fig. S2), consistent with the XRD results. The existence of regions with √(3)×√(3) order in the two types of crystals revealed by STM is further confirmed by SHG measurement. We carried out polarization-dependent SHG measurements at 5 K on the two types of crystals. The quenched crystals reveal a 30 times stronger SHG signal compared to that of the slow-cooled crystals (Fig. <ref>i,j), note that to observe the tiny SHG signal (less than 10. c.p.s.) in the slow-cooled sample, a incident power of 4 mW with a 50 X objective is needed, which is just below the damage threshold, requiring a photon counter. 
As the SHG signal is contributed by the electric dipole (ED), I_i^ED(2ω) ∝|Σ_jkχ_ijk^EDE_j(ω)E_k(ω) |^2, where k and ω are the wavevector and frequency of incident beam respectively, χ is the nonlinear susceptibility tensor and i, j,k,l are Cartesian coordinate indices, it is a sensitive probe of the presence of inversion-symmetry-breaking. On one hand, for the quenched crystals, the clear presence of inversion symmetry breaking is consistent with the formation of the  order. On the other hand, in the slow-cooled crystals dominated by regions with random Fe(1) site occupancy, the electric dipole contribution to SHG would be forbidden due to the preserved global inversion symmetry while only a smaller electric quadrupole (EQ) SHG contribution following the three-fold rotational symmetry would be allowed, I_i^EQ(2ω)∝| Σ_jklχ_ijkl^EQk_jE_k(ω)E_l(ω) |^2 under normal incidence. The much enhanced SHG signal in the quenched crystals is consistent with both the STM and XRD observations. Hence, we associate the electronic structure measured on the quenched crystals to that of the Fe(1) site-ordered phase and that measured on the slow-cooled crystals to that of the Fe(1) site-disordered phase. As we will demonstrate subsequently in the discussion section, the crystal symmetries for the site-disordered and site-ordered phases are highly compatible with the topological band dispersions that we observe from ARPES. §.§ Nodal lines in the site-disordered phase Next, we present in detail the key features in the measured electronic structure of the site-disordered phase achieved from slow-cooling the crystals. The FS in the ferromagnetically ordered state is shown in Fig. <ref>b, consisting of several circular Fermi pockets centered at the BZ center and elliptical pockets surrounding the K̅- M̅- K̅'̅ BZ boundaries, as highlighted by white dashed lines. To understand these features, we measured the electronic dispersions along the high symmetry direction of M̅-K̅-Γ̅-K̅ (Fig. <ref>e-f). First, we observe a number of hole bands centered at the Γ̅ point, giving rise to the observed circular Fermi pockets. Interestingly, near the K̅ point, we also observe a band crossing near -0.18 eV. The crossing can be better visualized from the energy distribution curves (EDC) as well as second energy derivatives of the raw spectra (Fig. <ref>g). The nature of the crossing can be further demonstrated from a series of cuts of the measured band dispersions close to the K̅ point. In Fig. <ref>d, both horizontal cuts (cuts 1 to 5) and vertical cuts (cuts 6 to 10) in the crossing region reveal two bands that cross near the K̅ point and become gapped away from K̅. Having demonstrated the band crossing at the K̅ points in the in-plane direction, we also examine the dispersion along the out-of-plane direction (k_z) by varying the photon energy. As the inter-layer interactions in vdW materials are quite weak, we do not observe strong variation along k_z (see SI Extended Data Fig. S6). For a range of photon energies that probes a range much beyond that of a single BZ along k_z, we always observe the crossing near the K̅ point (Fig. <ref>h), hence the in-plane nodal crossing takes the form of nodal lines along the out-of-plane direction. Taking these findings together, our ARPES data reveal the existence of nodal lines along the BZ boundaries. As these nodal lines are observed in a ferromagnetic phase, the time-reversal symmetry is broken and hence the spin degree of freedom is quenched, giving rise to two-fold degenerate lines. 
§.§ Flat bands in the site-ordered phase Having presented the existence of the nodal lines in the site-disordered phase, we now focus on the observed electronic structure of the site-ordered phase. Figure <ref> summarizes the measured electronic structure of quenched crystals. Instead of the elliptical Fermi pockets surrounding the K̅-M̅ direction resulting from the Dirac crossing at K̅ points in the slow-cooled crystals, the quenched crystals exhibit circular pockets at the K̅ points. This distinction can be further seen from dispersions measured along the high symmetry direction M̅-K̅-Γ̅-K̅. In stark contrast to that measured for the site-disordered phase (Fig. <ref>), the site-ordered crystals show electron bands at the K̅ points with clear band bottoms and no band crossings, and hence the absence of the nodal lines observed in the site-disordered crystals. More interestingly, three flat bands are observed in the site-ordered crystals that are not observed in the site-disordered crystals. We first illustrate them along the K̅-Γ̅-K̅-M̅ direction, captured in measurements under both linear horizontal (LH) and linear vertical (LV) polarizations (Fig. <ref>a-b). The location of the flat bands can be identified as peaks in the integrated EDCs from both polarizations, at , -0.2 eV, and -0.6 eV. Beyond the high symmetry direction, the flat bands are observed to persist across a large region of the BZ. We illustrate this from five cuts measured across the in-plane BZ (Fig. <ref>d-e). The flat band near  could be clearly seen along the K̅-M̅- K̅ direction as shown on cut 1. When the Γ̅ point is approached from cut2 to cut5, the flat band near  shifts to above  and could no longer be observed. The second and third flat bands located at -0.2 eV and -0.6 eV are flat throughout the BZ except where they hybridize with the dispersive bands near the Γ̅ point. We note that this hybridization indicates that these flat dispersions are intrinsic to the crystal and cannot be due to disorders or impurities that would otherwise form momentum-independent states that do not interact with intrinsic band structure. Furthermore, we carried out photon energy-dependent measurements, where the flat bands are observed to persist across  (see SI), consistent with the 2D nature of the vdW materials. §.§ Topology for the distinct electronic phases The drastically distinct electronic structures of the two types of crystals, with one exhibiting two-fold nodal lines and the other flat dispersions, belong to distinct topological states. Here we show that they can be understood from the symmetries dictated by the site-disordered or site-ordered phases, respectively. We first discuss the case of the site-disordered phase where we observe nodal lines at the K points. For a single vdW slab with 50% occupation of the Fe(1) sites, the crystalline symmetry belongs to the centrosymmetric space group P3̅1m (No. 164). Here, the crystal has both two-fold rotational symmetry about the y axis (C_2y) (Fig. <ref>b) and three-fold rotational symmetry about the z axis (C_3z) (Fig. <ref>c), similar to the case of graphene. The momentum point K (K') is invariant under these two symmetry operations and allows the existence of a 2D irreducible representation. 
In the ferromagnetic phase where time-reversal symmetry is broken, the spin-polarized bands in the ferromagnetic state can be regarded as spinless states and would cross at the K (K') points, where the two-fold degeneracy comes from the orbital degree of freedom (see SI for a discussion of the orbitals), leading to a symmetry-enforced ferromagnetic Dirac crossing. To demonstrate this, we built an effective tight binding model considering the different Fe 3d orbitals and show that such a crossing is indeed protected at the K (K') point (see SI for a full discussion of the tight-binding model). When we incorporate spin-orbit coupling (SOC), (see SI), in general, the SOC can be expressed as H_SO=λ_SO L· S, where L and S are the angular and spin momenta, respectively. In a ferromagnetic system, S∼⟨ S⟩ plays the role of an effective Zeeman splitting field in the orbital basis. Since the direction of the magnetic moment is along the z-direction <cit.>, which is parallel to the direction of the orbital angular momentum, the SOC would lift the two-fold degeneracy at K and K'. The band structure calculation from the tight binding model with SOC is illustrated in Fig. <ref>d. This is consistent with our experimental observation of the crossing at K points except that the gap due to SOC is not resolved in the experiment due to the energy resolution. The appearance of this degeneracy at K points is similar to that reported in the related Fe_3GeTe_2, where the topological nodal lines are theoretically identified to give rise to a large anomalous Hall effect <cit.>, but difficult to resolve in the ARPES measured dispersions. Here in Fe_5GeTe_2, they are clearly observed. Having understood the single layer case, we now consider the bulk system of the site-disordered phase. In a simple hypothetical AAA stacking scenario, the hopping along the z direction extends the original 2D hexagonal BZ into a 3D hexagonal prism and would extend the topological crossings at K and K' to nodal lines along the K-H direction. This is protected by a combination of C_3z and PT symmetries. For the real ABC stacking of the layers, the BZ changes from a hexagonal prism into the BZ of a rhombohedral space group (Fig. <ref>e). Due to the ABC stacking of the layers, the K-H direction is no longer a high symmetry line of the BZ. Instead, the topological crossings at each k_z plane shift away from the K and K' points, forming helical nodal lines that wind around K-H, where the magnitude of the shift is proportional to the strength of the interlayer hopping, similar to helical nodal lines reported in other ABC-stacked materials including Fe_3Sn_2 <cit.>. Here in Fe_5GeTe_2, due to the weak vdW interlayer coupling, the in-plane deviation of the crossing from the K-H line is too small to be experimentally resolved. Hence we cannot directly observe the winding but only observe nodal lines near K-H. In addition to tight-binding calculations, we also carried out density functional theory (DFT) calculations to check for the symmetry-enforced crossings (see SI Extended Data Fig. S4a-c). To demonstrate the importance of the globally preserved inversion symmetry of the random Fe(1) occupation, we carried out the following comparison. First, we calculated the band structure for the Fe(1) sites all occupying the up sites (UUU). The inverted case is the structure with Fe(1) sites all occupying the down sites (DDD). The average of the two from directly overlapping the UUU and DDD band structures would give an average stoichiometry of Fe_5GeTe_2. 
We note that while such a structure does not exist in the crystal, it mimics the site-disordered phase except that it lacks inversion symmetry. To directly compare this calculation with an inversion symmetric structure, we also calculated the band structure of a crystal structure with both up and down sites fully occupied, giving a stoichiometry of Fe_6GeTe_2. By comparing calculations without and with SOC, only the inversion symmetric Fe_6GeTe_2 shows band crossings at the K point that open up a gap with the inclusion of SOC, demonstrating the symmetry-enforced nature of the topological nodal lines. The UUU and DDD band structures do not exhibit such a band crossing, confirming that the presence of global inversion symmetry is consistent with, and also required for, the observed topological nodal lines. Finally, we discuss the site-ordered phase. Consistent with the inversion symmetry breaking observed by SHG, we no longer observe the topological crossings at the K point. Such inversion symmetry breaking is consistent with the √(3)×√(3) order caused by the Fe(1) site ordering. Interestingly, for such DUU occupation order of the Fe(1) site (Fig. <ref>f), the shortest bond occurs between the Fe(1) sublayer and the adjacent Fe(3) sublayer <cit.>. The in-plane projection of these two sublayers forms a clover unit pattern, with the center being the missing Fe(1) site, as shown in Fig. <ref>g-h. Considering only the nearest neighbor hopping, t_1, which is between the red Fe(1) sites and yellow Fe(3) sites, the lattice manifests as a bipartite crystalline lattice (BCL), in which the sites are divided into two sublattices with different numbers of atoms (Fig. <ref>h). BCLs are predicted to be a generic platform to realize destructive interference of the electronic wavefunction and thereby lead to flat bands <cit.>, but such flat bands have never been directly observed in bulk materials. To see this clearly, we consider the Hamiltonian for the single-orbital clover lattice with nearest neighbor hopping H(k) = [ 0_2×2 ℋ_k^†; ℋ_k 0_3×3 ], where ℋ_k is the hopping matrix between the two sublattices (see the explicit form in Eq. <ref> in the methods). Given that ℋ_k is a 3×2 rectangular matrix, the Hamiltonian contains at least 3-2=1 zero mode for all k. The band structure, obtained by diagonalizing the Hamiltonian, is shown in Fig. <ref>j. The corresponding localized wavefunction for the flat band is ψ(k_x, k_y) = [ 0, 0, e^k_y/3 - e^-2k_y/3, e^-k_x/2√(3) - k_y/6 - e^k_x/√(3) + k_y/3, e^k_x/2√(3) - k_y/6 - e^-k_x/√(3) + k_y/3 ], leading to a real-space Wannier function as shown in Fig. <ref>h. The Wannier amplitude is identically zero on the red sites because of the destructive interference effect. This is confirmed explicitly by an effective tight-binding model for the clover lattice (see the full description in the SI), which demonstrates that the Wannier amplitude vanishes on the red sites, leading to a flat band (Fig. <ref>j). When SOC is incorporated, the flat band gains dispersion, acquires a finite Chern number, and becomes topologically non-trivial. The consideration of different orbital groups is also provided in the methods. Such destructive-interference-induced flat bands have been discussed in kagome <cit.> and pyrochlore <cit.> lattices. Here in quenched Fe_5GeTe_2, they are directly the result of the geometrically frustrated lattice formed by the Fe(1) occupation site ordering enabled by the quenching process. 
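The zero-mode counting argument can be verified numerically in a few lines. The sketch below assembles the bipartite Hamiltonian from a hypothetical 3×2 Bloch hopping block (the actual ℋ_k of the clover lattice is given in the methods and is not reproduced here); for any such block, the 5×5 Hamiltonian has at least one exactly flat zero-energy band:

import numpy as np

def bipartite_hamiltonian(hk):
    # Assemble H(k) = [[0_{2x2}, hk^dagger], [hk, 0_{3x3}]] for a 3x2 block hk.
    H = np.zeros((5, 5), dtype=complex)
    H[:2, 2:] = hk.conj().T
    H[2:, :2] = hk
    return H

def hopping_block(kx, ky, t1=1.0):
    # Hypothetical nearest-neighbour Bloch phases; NOT the actual clover-lattice
    # matrix, just an arbitrary 3x2 example to illustrate the rank argument.
    phases = np.exp(1j * np.array([[kx, ky], [kx + ky, -kx], [-ky, kx - ky]]))
    return -t1 * phases

for kx, ky in [(0.0, 0.0), (0.7, -1.3), (2.1, 0.4)]:
    evals = np.linalg.eigvalsh(bipartite_hamiltonian(hopping_block(kx, ky)))
    # rank H = 2*rank(hk) <= 4, so the 5x5 matrix has at least one zero eigenvalue.
    print(f"k=({kx:+.1f},{ky:+.1f})  min |E| = {np.min(np.abs(evals)):.2e}")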
We carried out DFT calculations to check for the BCL flat bands associated with the clover lattice (see SI Extended Data Fig. S4d). To mimic the site-disordered phase, we overlap the band structures for the UUU and DDD structures and compare it to the band structure calculated with the site-ordered phase with the  order. A direct comparison of the two shows that no flat bands are observed in the UUU+DDD calculation but flat bands are observed for the site-ordered phase. We can also unfold the band structure of the  order back to the original unfolded BZ to compare more directly with the observed dispersions, and find reasonable agreement (see SI Extended Data Fig. S5). Projection of the density of states unto the different Fe sites also show that the peaks corresponding to the flat bands have large contributions from the Fe(1) and Fe(3) sites that form the clover lattice. This demonstrates that the flat dispersions that we observe only in the site-ordered phase and not in the site-disordered phase are associated with the clover unit that only forms with the Fe(1) site order, and is a direct result of the quantum destructive interference of the bipartite crystalline lattice. §.§ Discussion Taking all experimental and theoretical evidence presented together, we have demonstrated the reversible switching of two remarkably distinct electronic structures ascribed to two closely-related crystalline phases via a non-volatile thermal process in Fe_5GeTe_2. The capability is enabled by the Fe(1) site ordering that changes the crystal symmetries leading to distinct topological characteristics. On one hand, the random occupation of the Fe(1) sites leads to global inversion symmetry that allows symmetry-enforced topological nodal lines, which are observed to be lifted when the inversion-symmetry breaking order forms. On the other hand, the formation of the site order creates a bipartite crystalline lattice that localizes electronic states to form flat bands, which are observed to be destroyed with the breaking of the site order. Our findings indicate that the Fe_5GeTe_2 system is a rich system for probing and understanding topology in the correlated regime. As Fe_5GeTe_2 is known to exhibit high Curie temperature, it would be interesting to compare the magnetic properties of the two phases, including manipulation of the topological nodal lines in the site-disordered phase and the role of the topological flat bands for magnetism in the site-ordered phase. Fe_5GeTe_2 is also an interesting system to probe from the order-disorder perspective. As our STM results show that the slow-cooled samples exhibit domains of ordered regions, it would be interesting to understand how the domains form and propagate in the cooling process as a function of cooling rate, especially given that Fe_5GeTe_2 behaves counterintuitively in that the ordered phase is preferred via quenching. Such studies would benefit from the vast expertise developed for probing and understanding order-disorder formation in other quantum materials <cit.>. Finally, the non-volatile switch our work exemplifies promises versatile settings to apply a novel design principle, viz to utilize the cooperation of crystalline symmetry and strong correlations to produce new correlated topological materials <cit.>. Aside from fundamental physics, our work also indicates that Fe_5GeTe_2 has great potential for applications. Skyrmions have recently been reported in this class of Fe-based vdW ferromagnets <cit.>. 
As skyrmions are stablized by Dzyaloshinsky–Moriya interaction, which is only allowed when inversion symmetry is broken, there has been debates on how to understand the appearance of skyrmions in these seemingly centrosymmetric crystals. In the case of Fe_3GeTe_2, this has been explained via random Fe deficiencies that on average occur asymmetrically in the crystal <cit.>. In the case of (Fe,Co)_5GeTe_2, this is ascribed to AA' stacking of the vdW layers <cit.>. Here we show that the two phases have clean distinction on inversion symmetry via the site-ordering process, hence provides a platform to potentially control skyrmion formation. The process by which we demonstrate the switching–heating and cooling all above room temperature–is similar to that already commercially used for phase-change materials such as Ge_2Sb_2Te_5 <cit.>. Different from phase-change materials, we only need to surpass a submelting temperature where the Fe(1) sites are mobilized instead of having to achieve the melting and crystallization temperature. Techniques such as local laser heating can be explored for spatial writing of the two phases especially given that the overall crystal structures are compatible. This is in contrast to some vacancy ordered materials such as K_xFe_2-ySe_2 where the metallic regions are structurally unstable and only appears as microstructures amidst the insulating vacancy ordered phases <cit.>. The heating and quenching process that we utilize is non-volatile and above room temperature, which is advantageous compared from those controls that require the presence of field, strain, pressure, or current. Nevertheless, modifying the Fe(1) sites and their vacancies appears to have lower energy barrier than re-crystallization, suggesting that electrical current, photo illumination or other commonly utilized switching methodologies could also be explored for this 2D vdW material. Finally, the concept of using vacancy order-disorder to realize distinct topological phases goes beyond Fe_5GeTe_2. Phase change via order-disorder has been explored extensively for realizing switches based on electrical or optical properties. Here we demonstrate the concept that vacancy order can be utilized to change the crystalline symmetries of two otherwise energetically similar ground states with dramatically distinct consequences on their topological character. A large base of quantum materials are known to exhibit vacancies or site disorder. The consideration of the symmetries of these phases may open up new routes towards realizing exotic topological phases as well as novel spintronics applications. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request. § ACKNOWLEDGMENTS The authors acknowledge insightful discussions with Kai Sun, Luis Balicas, and Alex Frano. This research used resources of the Advanced Light Source, the Stanford Synchrotron Radiation Lightsource, and the National Synchrotron Light Source-II, all U.S. Department Of Energy (DOE) Office of Science User Facilities under contract Nos. DE-AC02-05CH11231, AC02-76SF00515 and No. DE-SC0012704, respectively. Rice ARPES work is supported by the U.S. DOE grant No. DE-SC0021421 and the Gordon and Betty Moore Foundation’s EPiQS Initiative through grant no. GBMF9470. The theory work at Rice is primarily supported by the U.S. DOE, BES, under Award No. DE-SC0018197 (L.C., symmetry analysis), by the AFOSR under Grant No. 
FA9550-21-1-0356 (C.S., electronic structure construction), and by the Robert A. Welch Foundation Grant No. C-1411 (Q.S.). Work at Los Alamos was carried out under the auspices of the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) under Contract No. 89233218CNA000001, and was supported by LANL LDRD Program, UC Laboratory Fees Research Program (Grant Number: FR-20-653926), and in part by the Center for Integrated Nanotechnologies, a DOE BES user facility. The development of the SHG photon counter is supported by the Army Research Office and was accomplished under grant no. W911NF-19-1-0342. The sample exfoliation is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-22-1-0449. Q.D. is supported by the NSF EPM program under grant no. DMR-2213891. L.W. acknowledges the support by the Air Force Office of Scientific Research under award no. FA9550-22-1-0410. TEM study is supported by Welch Foundation (C-2065-20210327). The authors acknowledge the use of the Electron Microscopy Center at Rice. The work at LBL and UC Berkeley was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (Quantum Materials program KC2202). Research conducted at the Center for High-Energy X-ray Sciences (CHEXS) is supported by the National Science Foundation (BIO, ENG and MPS Directorates) under award DMR-1829070. Materials synthesis at UW was supported as part of Programmable Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under award DE-SC0019443 § AUTHOR CONTRIBUTIONS The project was initiated and organized by M.Y. The single crystals were grown by P.M., Y.S., X.C., Y.H, C.H, X.X and J.C. The ARPES measurements and analyses were carried out by H.W., J.W.H., J.S.O., R.J.B. and M.Y. with the help of D.H.L., M.H., S.-K.M., A.F., J.D., T.Y. and E.V. The tight binding model and symmetry analyses were proposed and carried out by L.C., C.S. and Q.S. The first principle calculations were carried out by B.G.J. and J.Z. The SHG were carried out by Q.D. and L.W. The STM measurements were measured by K.S. and E.dsN. The x-ray diffraction was done by J.R. The TEM were measured by C.S. and Y.-M.H. The sample annealing and quenching process and characterization were carried out by P.M., J.C., X.C., Y.X., B.G., X.T., M.K., H.W. and P.D. The manuscript was written by H.W. and M.Y. and contributed by all the authors. § COMPETING INTERESTS The authors declare no competing interests. naturemag
http://arxiv.org/abs/2307.00745v1
20230703042336
The contribution of Coulomb interaction to elastic pp and pp̅ scattering in holographic QCD
[ "Yu-Peng Zhang", "Xun Chen", "Xiao-Hua Li", "Akira Watanabe" ]
hep-ph
[ "hep-ph" ]
School of Nuclear Science and Technology, University of South China, Hengyang 421001, China chenxunhep@qq.com School of Nuclear Science and Technology, University of South China, Hengyang 421001, China lixiaohuaphysics@126.com School of Nuclear Science and Technology, University of South China, Hengyang 421001, China watanabe@usc.edu.cn School of Mathematics and Physics, University of South China, Hengyang 421001, China The differential cross sections of elastic proton-proton (pp) and proton-antiproton (pp) scattering are studied in a holographic QCD model, considering the strong and Coulomb interaction in the Regge regime. Based on previous studies of strong interactions described in terms of Pomeron and Reggeon exchange, we add the contribution of Coulomb interaction described by photon exchange. We present the momentum transfer dependence of the contribution rates for each component, especially for the Coulomb-nuclear interference, which refers to the cross term between both interactions. For the adjustable parameters for the strong interaction, we can adopt the values determined in previous studies, and there are no extra adjustable parameters that need to be determined for the Coulomb interaction. It is presented that the resulting differential cross sections are consistent with the data for pp and pp scattering. The contribution of Coulomb interaction to elastic pp and pp scattering in holographic QCD Akira Watanabe August 1, 2023 ========================================================================================== § INTRODUCTION Quantum chromodynamics (QCD) is a well-established theory of the strong interactions, and in principle, all strong interaction phenomena should be describable in terms of the fundamental QCD Lagrangian. In pp and pp scattering with high center-of-mass energy s and small momentum transfer t, some practical difficulties were encountered due to the complexity and non-perturbative nature in the soft kinematic region (also called the forward region) <cit.>. The analytical solution of non-perturbative QCD is a challenging task, and QCD is unable to directly deduce various hadronic properties. Before the establishment of QCD, there is a time-honoured theory, the Regge theory, which provided a useful framework to analyze the hadron-hadron scattering cross sections <cit.>. The Regge theory, which incorporates both Reggeon and Pomeron contributions, remains a reliable framework for describing the total cross sections of hadronic scattering. The pp and pp scattering amplitudes were described by the Reggeon trajectories and the soft Pomeron <cit.>. The Regge theory is founded on the complex angular momentum analysis. The 2^++ glueball with mass is recognized to be the lightest state on the leading Pomeron trajectory, which has an intercept slightly greater than 1. The growing trend in total cross sections relative to the center-of-mass energy √(s) is attributed to Pomeron exchange. On the contrary, the exchange of the Reggeon trajectories accounts for the decreasing behavior. For decades, high energy hadron-hadron scattering has been one of the most important research topics in the high energy physics since its cross sections reflect the internal structure of the involved hadrons. Holographic QCD, a non-perturbative methodology for QCD, has been established employing the anti-de Sitter/conformal field theory (AdS/CFT) correspondence <cit.>. 
This correspondence, which establishes a connection between a 4-dimensional conformal field theory and a gravitational theory in higher-dimensional AdS space, provides us a hopeful way to investigate strongly coupled quantum field theories. Holographic QCD model has been utilized to examine the spectrum and configuration of hadrons <cit.>, achieving favorable outcomes. And, this model has also been employed to study high energy scattering processes <cit.>. A holographic QCD model has been proposed, relying on string theory, to portray the experimental data of elastic pp and pp scattering cross sections in the Regge regime <cit.>. Scattering amplitudes of hadrons within the Regge regime can be computed via exchanges involving the lightest mesons or glueballs. Subsequently, the single particle propagators are substituted with their Reggeized counterparts, which are obtained through comparison with string scattering amplitudes <cit.>. As widely recognized, the electromagnetic effect - soft photon radiation and Coulomb scattering - are an indivisible component of any strong interaction involving charged hadrons. The strong interaction is conventionally referred to as "nuclear" or "hadronic", while the electromagnetic interaction is commonly known as the "Coulomb" interaction. Sometimes these effects obstruct the detection of strong interaction phenomena but sometimes they present a distinctive source of information on important details of hadronic amplitudes. The holographic QCD is a theoretical framework that relates a 4-dimensional conformal field theory to a gravitational theory in the higher dimensional AdS space. In this theoretical framework, the Coulomb interaction is not a crucial constituent, but it is important when the scattering angle is very small (i.e., |t| is very small). In previous studies, this contribution is of importance at |t|≈0.002 GeV^2 and becomes negligible when |t|>0.01 GeV^2 <cit.>. Moreover, the combined scattering amplitude receives a third contribution reflecting the cross term with both strong and Coulomb exchanges. This term, along with the intricate nature of the scattering amplitudes, characterizes the influence of Coulomb-nuclear interference (CNI) on the differential cross-section. The experimental study of CNI in pp and pp scattering can reveal the amplitude structure of hadrons <cit.>. Regrettably, the range of scattering angles where such interference is clearly observable is relatively limited, nevertheless, analysis of the differential cross section of this interval can give us some very important information about the strong interaction. In this work, we study the elastic pp and pp scattering in the Regge regime, taking into account both the strong and Coulomb interaction. This work is an extension of previous research <cit.>, in which the strong interaction described by the Pomeron and Reggeon exchange was considered. The Pomeron exchange makes a major contribution to the cross-section of the high energy region and the Reggeon exchange gives the dominant contribution to cross sections in the lower energy region. However, the Coulomb interaction contribution needs to also be considered to describe the data in a very small momentum transfer |t| interval. In our work, the Coulomb interaction is described by the pure real photon exchange QED amplitude, and being affected by the multiphoton exchange process will result in additional phase difference αϕ. The Coulomb phase has been studied by many researchers <cit.>. 
In the present study the interference formulae of R. Cahn <cit.> was adopted for taking into account the CNI effects. We explicitly show how the both interaction contributions vary with the energy, focusing on the contribution ratios. It is presented that the resulting differential cross sections are consistent with the data in a wide kinematic region for both the pp and pp scattering. The present paper is structured as follows. The model, focusing on both interactions are outlined in Sec. <ref>. We briefly review the formalism developed in the preceding studies, and present the expressions for the total scattering amplitude and differential cross sections. In Sec. <ref>, the |t| dependence of these contributions is shown in detail, focusing on the contribution ratios. We also show the data analysis of the differential cross sections. Our conclusion with the implications of this work is given in Sec. <ref>. § MODEL SETUP §.§ Strong interaction amplitude in holographic QCD In the previous study <cit.>, the contribution of combining both Pomeron and Reggeon was considered in the Regge regime, which was described by Reggeized spin-2 glueball and vector meson, respectively. The strong interaction amplitudes can be written in the following form F_N=F_g+F_ν. The amplitude of the glueball exchange can be obtained by combining the Proton-glueball-Proton vertex <cit.> and the massive spin-2 glueball propagator <cit.>, and similarly, the amplitude of the vector meson exchange is obtained by combining the Proton-vector-Proton vertex and the vector meson propagator <cit.>. Hence, the strong interaction amplitude can be written as F_N= -i λ_g^2/8(t-m_g^2)[2 s A^2(t)(u̅_1γ^α u_3)(u̅_2γ_α u_4)+4 A^2(t) p_2^α p_1^β(u̅_1γ_α u_3)(u̅_2γ_β u_4)] +i λ_v^2/t-m_v^2η_μν(u̅_1γ^μ u_3)(u̅_2γ^ν u_4), where t=-(p_3-p_1)^2, λ_g is the Proton-glueball-Proton coupling constant, λ_v is the proton-vector-proton coupling constant, m_g is the mass of glueball, m_v is the vector meson mass. At t→0, the form factors A(0)→1. By taking the modulus and the spin averaged sum of strong interaction amplitude, the differential cross section has the following form d σ_N/d t =1/16 π s^2| F_N(s, t)|^2 =λ_g^4 s^2 A^2(t)/16 π|t-m_g^2|^2-λ_g^2λ_v^2 A^2(t) s/4 π|t-m_g^2||t-m_v^2|+λ_v^4/4 π|t-m_v^2|^2. Here, the differential cross section only contains the lightest states, in order to include higher spin states on the Pomeron and Reggeon trajectories, the Reggeized procedures are employed <cit.>. The propagator of massive spin-2 glueball is to be replaced by 1/t-m_g^2 → (α_g'/2)e^-iπα_g(t)/2Γ[-χ_g]Γ[1-α_g(t)/2]/Γ[-χ_g-1+α_g(t)/2](α_g's/2)^α_g(t)-2, where χ_g=2 α_g^'m_p^2+3/2α_g(0)-3, the dependence on χ_g is introduced. The propagator of the vector meson is to be replaced by 1t-m_v^2 → α_v' e^-iπα_v(t)/2 sin[πα_v(t)2] (α_v's)^α_v(t)-1 Γ[-α_v(t)] . The differential cross section of both Pomeron and Reggeon exchange can be obtained by replacing the factors 1/t-m_v^2 and 1/t-m_g^2. The invariant amplitude for strong interaction can be expressed as F_N(s, t)= -s λ_g^2 A^2(t) e^-i πα_g(t)/2Γ[-χ_g] Γ[1-α_g(t)/2]/Γ[α_g(t)/2-1-χ_g](α_g^' s/2)^α_g(t)-1 +2s λ_v^2α_v^' e^-i πα_v(t)/2sin[πα_v(t)/2](α_v^' s)^α_v(t)-1Γ[-α_v(t)] . In the above equation, there are seven adjustable parameters in total. Three of these parameters are related to the Pomeron exchange, i.e., the intercept α_g(0), slope α^'_g and proton-glueball coupling constant λ_g. 
For these adjustable parameters to the Pomeron exchange, we use the values given in previous work <cit.>, α_g(0)=1.084, α_g^'=0.368 GeV^-2 and λ_g=9.59 GeV^-1. For the other four adjustable parameters, the intercept α_v(0), slope α^'_v , pp scattering coupling constant λ_v^pp and pp scattering coupling constant λ_v^pp are taken from Ref.<cit.>, α_v(0)=0.444, α_v^'=0.9257 GeV^-2, λ_v^pp=7.742 GeV^-1 and λ_v^pp=16.127 GeV^-1. In the domain of strong interaction, similar to previous investigations <cit.>, we employ the proton gravitational form factors for A(t) calculated from soft wall model. §.§ Electromagnetic Form Factor In this paper we apply the AdS/QCD model to the region of small momentum transfer 0 ≤|t| ≤ 0.01 GeV^2 in which the contribution of the Coulomb interaction is not negligible. To the Coulomb interaction, we introduce the electromagnetic form factor of proton for f(t) which was derived from the authors of Ref. <cit.> by considering a Dirac field coupled to a vector field in the five-dimensional AdS space in the AdS/QCD model . We use the results obtained from the soft-wall model, in which the AdS geometry is smoothly cut off by a background dilaton field at the infrared boundary. And the final expression for form factor f(t) does not bring any adjustable parameter. Here we briefly review previous research, in which the model action is given below. S_F= ∫ d^d+1 x √(g) e^-Φ(z)(i/2Ψ̅ e_A^N Γ^A D_N Ψ. .-i/2(D_N Ψ)^†Γ^0 e_A^N Γ^A Ψ-(M+Φ(z)) Ψ̅Ψ), where e_A^N=z δ_A^N is the inverse vielbein, D_N=∂_N+1/8ω_N A B[Γ^A, Γ^B]-i V_N is the covariant derivative which ensures the action satisfies gauge invariance and diffeomorphism invariance, M is the mass of the bulk spinor. The Dirac gamma matrices are defined in such a way that they satisfy the anticommutative relation {Γ^A, Γ^B}=2 η^A B. The background dilaton field is given by Φ(z)=κ^2 z^2, and the right and left spinor are defined as Ψ_R,L=(1/2)(1±γ^5)Ψ. By imposing appropriate boundary conditions, the normalizable wave function can be expressed as Ψ_L^(n)(z)=1/κ^α-1√(2 Γ(n+1)/Γ(α+n+1))ξ^α L_n^(α)(ξ), Ψ_R^(n)(z)=√(n+α)/κ^α-1√(2 Γ(n+1)/Γ(α+n+1))ξ^α-(1 / 2) L_n^(α-1)(ξ), where α=M+1/2 and κ=0.35 GeV. The correct large momentum scale for the proton electromagnetic form factor is given when M=3/2. The present investigation solely focuses on the ground state of the proton, and the parameter z_0 is determined with the proton mass. The electromagnetic current matrix element can be generally expressed in terms of two independent form factors ⟨ p_3,s_3|J^μ(0)|p_1,s_1⟩ =u(p_3,s_3)(f_1(Q)γ^μ +f_2(Q)iσ^μνq_ν/2m_n)u(p_1,s_1), where q=p_3-p_1 and Q^2=-q^2. The invariant functions are given by C_1(Q)=∫ dze^-ΦV(Q,z)2z^2M(Ψ_L^2(z)+Ψ_R^2(z)), C_2(Q)=∫ dze^-Φ∂_z V(Q,z)/2z^2M-1(Ψ_L^2(z)-Ψ_R^2(z)), C_3(Q)=∫ dze^-Φ2m_n V(Q,z)/z^2M-1 Ψ_L(z)Ψ_R(z). For the soft-wall model, the bulk-to-boundary propagator of the vector field is written as <cit.> V(Q,z) =Γ(1+a)U(a,0;ξ) =a∫_0^1dxx^a-1exp(-x1-xξ), where a=Q^2/(4κ^2), and ξ=κ^2z^2. The electric and magnetic form factor for the proton can be obtained by G_E(Q)=C_1(Q)+η_pC_2(Q)-τη_pC_3(Q), G_M(Q)=C_1(Q)+η_pC_2(Q)+η_pC_3(Q), where η_p=0.224,τ=Q^2/4 m_p^2. And the effect of both form factors can be described by the effective electromagnetic form factor squared G_e f f^2(t)=1/1+τ[G_E^2(t)+τ G_M^2(t)]. §.§ Coulomb interaction amplitude In studies of pp and pp scattering, the Coulomb interaction becomes exceedingly prominent as the momentum transfer t approaches zero. 
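Before turning to the Coulomb amplitude itself, the soft-wall form-factor expressions quoted above can be evaluated numerically. The following Python sketch is only an illustration (not the authors' code): it uses κ = 0.35 GeV and M = 3/2 (hence α = 2), keeps only the ground-state (n = 0) modes, and evaluates V(Q,z) through the closed form Γ(1+a)U(a,0;ξ) with SciPy's hyperu. The powers of z and the factors of 1/2 in the integrands follow my reading of the expressions above and should be checked against the original reference; the only sanity check performed here is that the charge normalization G_E(0) ≈ 1 is reproduced.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

kappa = 0.35          # GeV, soft-wall scale
M     = 1.5           # bulk spinor mass parameter, M = 3/2
alpha = M + 0.5       # = 2 for the proton
m_p   = 0.938         # GeV, proton mass (used in C_3 and in tau)
eta_p = 0.224

def xi(z):
    return kappa**2 * z**2

# ground-state (n = 0) normalizable modes; L_0^(a)(x) = 1
def psi_L(z):
    return (1.0 / kappa**(alpha - 1)) * np.sqrt(2 * gamma(1.0) / gamma(alpha + 1.0)) * xi(z)**alpha

def psi_R(z):
    return (np.sqrt(alpha) / kappa**(alpha - 1)) * np.sqrt(2 * gamma(1.0) / gamma(alpha + 1.0)) * xi(z)**(alpha - 0.5)

def V(Q, z):
    """Bulk-to-boundary propagator V(Q,z) = Gamma(1+a) U(a,0;xi), with a = Q^2/(4 kappa^2)."""
    a = Q**2 / (4.0 * kappa**2)
    return gamma(1.0 + a) * hyperu(a, 0.0, xi(z))

def dV_dz(Q, z, h=1e-4):
    return (V(Q, z + h) - V(Q, z - h)) / (2.0 * h)

def C1(Q):
    f = lambda z: np.exp(-xi(z)) * V(Q, z) / (2.0 * z**(2 * M)) * (psi_L(z)**2 + psi_R(z)**2)
    return quad(f, 1e-8, np.inf, limit=200)[0]

def C2(Q):
    f = lambda z: np.exp(-xi(z)) * dV_dz(Q, z) / (2.0 * z**(2 * M - 1)) * (psi_L(z)**2 - psi_R(z)**2)
    return quad(f, 1e-8, np.inf, limit=200)[0]

def C3(Q):
    f = lambda z: np.exp(-xi(z)) * 2.0 * m_p * V(Q, z) / z**(2 * M - 1) * psi_L(z) * psi_R(z)
    return quad(f, 1e-8, np.inf, limit=200)[0]

def G_eff2(t):
    """Effective electromagnetic form factor squared at spacelike t (|t| = Q^2)."""
    Q, tau = np.sqrt(abs(t)), abs(t) / (4.0 * m_p**2)
    GE = C1(Q) + eta_p * C2(Q) - tau * eta_p * C3(Q)
    GM = C1(Q) + eta_p * C2(Q) + eta_p * C3(Q)
    return (GE**2 + tau * GM**2) / (1.0 + tau)

print(C1(0.0) + eta_p * C2(0.0))   # charge normalization check: should come out ~1
print(G_eff2(-0.005))              # |t| = 0.005 GeV^2
```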
Typically, the standard way to represent the whole scattering amplitude was expressible in the form <cit.> F_tot=F_N+e^i αϕ F_C. The lowest order one-photon-exchange Coulomb amplitude for point-like charges is F_C(s,t)=∓8πα s|t|, where α is the fine structure constant. The negative (positive) sign corresponds to the scattering of particles possessing identical (opposing) charges. In Eq. (<ref>), the amplitudes of strong interaction and Coulomb interaction are bound mutually with the help of the additional phase difference αϕ(s, t) which was the result of the possibility of multiphoton exchange processes. The Coulomb phase ϕ was calculated first by Bethe with the WKB approach in potential theory and derived the following form <cit.> ϕ=2 ln(1.06 /|𝐤_1| b Θ), where |𝐤_1| is the c.m. momentum, b is the range of the strong-interaction forces, and Θ is the c.m. scattering angle. Similar results were obtained by these authors <cit.> using the potential model West and Yennie (WY) re-examined the interference between Coulomb interaction and strong interaction in terms of Feynman diagrams <cit.> ϕ_W-Y=∓[ln (B|t| / 2)+γ+O(B|t|)], where γ is the Euler constant. The upper (lower) sign corresponds to the scattering of p p (p̅ p). B is the t-independent diffractive slope of the strong interaction amplitude and is associated with the center of mass energy √(s), generally defined as B(s, t)=lim _t → 0d[ln(d σ_N / d t)]/d t. R. Cahn <cit.> gives a more precise calculation based on the above which accounts for the details of the electromagnetic form factor under the assumption that |t|→0, and obtained a general expression for the phase ϕ_C a h n= -[ln(B|t|/2)+γ+C], C=ln(1+8/B Λ^2)+(4|t| / Λ^2) ln(4|t| / Λ^2)+2|t| / Λ^2, where Λ^2=0.71 GeV^2. In the present work, we will determine Λ^2 by comparing the data with the electromagnetic form factor which we previously introduced. The main difference from the result of WY is a shift of the Coulomb amplitude due to the form factor’s influence on the phase. Compared to WY, the Coulomb phase calculated with Cahn's shown a better fit to the experimental data <cit.>. Furthermore, as noted in Ref. <cit.>, the form of the Coulomb phase proposed by WY contradicts the general properties of analyticity in the t-channel. And from Ref. <cit.>, we can know that the theoretical assumptions of the WY model were inconsistent with experimental data. In addition, Nurushev <cit.> and Kopeliovich <cit.> derived the Coulomb phase in a large range of momentum transfer, and the results were similar to the calculation of Cahn. Considering the kinematic range of this work, we choose Cahn's calculation for the Coulomb phase. By relating to the strong interaction amplitude, we give the trend of t-slope B with the center of mass energy √(s) for pp and pp scattering, as shown in Fig. <ref>. It can be observed that the t-slope B exhibits a consistent increasing trend, and the difference in t-slope values between pp scattering and pp scattering is negligible when the energy √(s) is greater than 100 GeV. According to Ref. <cit.>, it is evident that the contribution of Reggeon and its cross term with Pomeron to the differential cross section in strong interactions can be disregarded when the energy √(s) is greater than 100 GeV. When the energy √(s) is below 100 GeV, there are some differences in the B-values of pp and pp scattering due to the varying parameters involved in the Reggeon exchange. 
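The phase formulas quoted above are straightforward to evaluate. Below is a small illustrative helper (not taken from the paper's code) implementing the West–Yennie and Cahn expressions, with the diffractive slope B and the scale Λ² treated as external inputs; the value of B in the example is purely illustrative, and the overall sign should be matched to the process (pp vs. pp̄) as discussed in the text.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329
ALPHA_EM    = 1.0 / 137.035999      # fine-structure constant

def phi_west_yennie(abs_t, B):
    """phi_WY = -[ln(B|t|/2) + gamma_E] for pp (flip the overall sign for p pbar)."""
    return -(np.log(B * abs_t / 2.0) + EULER_GAMMA)

def phi_cahn(abs_t, B, Lambda2=0.71):
    """Cahn's phase: phi = -[ln(B|t|/2) + gamma_E + C], with the form-factor correction C."""
    C = (np.log(1.0 + 8.0 / (B * Lambda2))
         + (4.0 * abs_t / Lambda2) * np.log(4.0 * abs_t / Lambda2)
         + 2.0 * abs_t / Lambda2)
    return -(np.log(B * abs_t / 2.0) + EULER_GAMMA + C)

# example: |t| = 0.002 GeV^2 with an illustrative slope B = 13 GeV^-2 (not a fitted value)
abs_t, B = 0.002, 13.0
print(ALPHA_EM * phi_west_yennie(abs_t, B), ALPHA_EM * phi_cahn(abs_t, B, Lambda2=0.69))
```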
Following the introduction of the proton electromagnetic form factor, the Coulomb interaction amplitude can be written as F_C(s, t)=-8 πα s/|t| G_e f f^2(t). Then we obtain the total scattering amplitude that includes both the Coulomb and strong interaction. F_tot= -e^i αϕ8 πα s/|t| G_e f f^2(t)-s λ_g^2 A^2(t) e^-i πα_g(t)/2Γ[-χ_g] Γ[1-α_g(t)/2]/Γ[α_g(t)/2-1-χ_g](α_g^' s/2)^α_g(t)-1 +s λ_v^2α_v^' e^-i πα_v(t)/2sin[πα_v(t)/2](α_v^' s)^α_v(t)-1Γ[-α_v(t)], and the total differential cross section is given by d σ_tot/d t=1/16 π s^2| F_tot(s, t)|^2. § NUMERICAL RESULTS §.§ Contribution ratios of the Coulomb and strong interaction Here we present t-dependence of contributions of the strong interaction, Coulomb interaction and the cross term in the present model. We numerically evaluate the contribution of these three items to the total differential cross section for the pp and pp scattering, respectively. Considering the applicability of our present model and taking into account the range of Coulomb interaction, we decide to focus on the kinematic region, where 10 G e V≤√(s)≤ 13 T e V and 0 ≤|t| ≤ 0.01 G e V^2. Focusing on these kinematic ranges, we display the t-dependence of the ratios for pp scattering in Fig. <ref>, and the ratios for pp scattering in Fig. <ref>. The Coulomb interaction contribution decreases with |t|, and it is opposite for the strong interaction contribution. Due to the electric charge of the proton, the contribution of the cross term occasionally exhibits a negative value. Regardless of whether the contribution of the cross term is positive or negative, with the variation of t, there will be either a maximum or minimum value when the contribution of the Coulomb and strong interaction are approximately equal. In the case of pp scattering, as the energy √(s) increases, the contribution of the cross term undergoes a transition from positive to negative values while continuously decreasing. For pp scattering, the trend is precisely opposite to pp scattering. As the energy √(s) increases, the contribution of the cross term transitions from negative values to gradually become positive values, while continuously increasing. §.§ Differential cross section In the preceding section, we obtained the parametrized form of the differential cross section, derived from the characterization of the total scattering amplitude. In the strong interaction section, seven adjustable parameters are given in the expression for the invariant amplitude, and there is no additional adjustable parameter in the form factors and Coulomb interaction amplitude. For these seven parameters, as previously stated, we use the values obtained in the previous work <cit.>. By calculating the contribution to the Coulomb amplitude, the Coulomb contribution and the cross term contribution are almost non-existent at |t| = 0.05, as shown in Fig. <ref>, we use the Λ^2 obtained by matching with the electromagnetic form factor G_e f f^2(t) in the range of 0 ≤|t| ≤ 0.05G e V^2. By utilizing the Scipy package of Python, one obtains, for both the pp and pp scattering, Λ^2=0.69 GeV^2. We present our results of the differential cross section for pp and pp scattering to demonstrate the correction of scattering by Coulomb interaction, focusing on the Regge regime. In the kinematic range being considered, the Coulomb interactions cannot be neglected in the extremely small range of |t|, and the cross term between the two interactions also have a crucial role in the model. 
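Putting the pieces together, the sketch below (an illustration, not the fitting code behind the figures) assembles the Reggeized strong amplitude, the Coulomb amplitude and Cahn's phase into dσ/dt, and splits it into the pure strong, pure Coulomb and cross-term (CNI) contributions whose ratios are discussed above. Linear trajectories α_i(t) = α_i(0) + α_i′ t are assumed, the numerical parameters are the pp values quoted earlier, the slope B is obtained numerically from the strong amplitude, and both A(t) and G_eff²(t) are left as user-supplied callables: the dipole-like placeholders below are not the soft-wall results and are there only so the script runs.

```python
import numpy as np
from scipy.special import gamma as Gamma

# parameter values quoted in the text (pp case)
alpha_g0, alpha_gp, lam_g = 1.084, 0.368, 9.59     # Pomeron intercept, slope [GeV^-2], coupling [GeV^-1]
alpha_v0, alpha_vp, lam_v = 0.444, 0.9257, 7.742   # Reggeon intercept, slope [GeV^-2], pp coupling [GeV^-1]
m_p, ALPHA_EM, EULER_GAMMA = 0.938, 1.0 / 137.035999, 0.5772156649
chi_g = 2.0 * alpha_gp * m_p**2 + 1.5 * alpha_g0 - 3.0

def A2(t):       # placeholder for the gravitational form factor squared (use the soft-wall A(t)^2 in practice)
    return (1.0 - t) ** -4

def G_eff2(t):   # placeholder for the effective EM form factor squared (use the soft-wall result in practice)
    return (1.0 + abs(t) / 0.71) ** -4

def F_strong(s, t):
    a_g = alpha_g0 + alpha_gp * t            # linear trajectories (assumption)
    a_v = alpha_v0 + alpha_vp * t
    pomeron = (-s * lam_g**2 * A2(t) * np.exp(-0.5j * np.pi * a_g)
               * Gamma(-chi_g) * Gamma(1.0 - 0.5 * a_g) / Gamma(0.5 * a_g - 1.0 - chi_g)
               * (0.5 * alpha_gp * s)**(a_g - 1.0))
    reggeon = (2.0 * s * lam_v**2 * alpha_vp * np.exp(-0.5j * np.pi * a_v)
               * np.sin(0.5 * np.pi * a_v) * (alpha_vp * s)**(a_v - 1.0) * Gamma(-a_v))
    return pomeron + reggeon

def F_coulomb(s, t):
    return -8.0 * np.pi * ALPHA_EM * s / abs(t) * G_eff2(t)     # pp; opposite sign for p pbar

def slope_B(s, t0=-1e-3, h=1e-4):
    """Numerical diffractive slope B = d ln(dsigma_N/dt)/dt evaluated near t -> 0."""
    lnds = lambda t: np.log(abs(F_strong(s, t))**2)
    return (lnds(t0 + h) - lnds(t0 - h)) / (2.0 * h)

def phi_cahn(abs_t, B, Lambda2=0.69):
    C = np.log(1.0 + 8.0 / (B * Lambda2)) + (4.0 * abs_t / Lambda2) * np.log(4.0 * abs_t / Lambda2) + 2.0 * abs_t / Lambda2
    return -(np.log(B * abs_t / 2.0) + EULER_GAMMA + C)

def dsigma_dt(s, t):
    """Return the total dsigma/dt and its (strong, Coulomb, interference) pieces."""
    FN, FC = F_strong(s, t), F_coulomb(s, t)
    phase = np.exp(1j * ALPHA_EM * phi_cahn(abs(t), slope_B(s)))
    norm = 1.0 / (16.0 * np.pi * s**2)
    total = norm * abs(FN + phase * FC)**2
    return total, (norm * abs(FN)**2, norm * abs(FC)**2, norm * 2.0 * np.real(phase * FC * np.conj(FN)))

s = 52.8**2   # e.g. sqrt(s) = 52.8 GeV
for abs_t in (0.001, 0.002, 0.005, 0.01):
    tot, (r_n, r_c, r_x) = dsigma_dt(s, -abs_t)
    print(f"|t|={abs_t}: strong {r_n/tot:.2f}, Coulomb {r_c/tot:.2f}, cross {r_x/tot:.2f}")
```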
By combining Coulomb with strong interaction, we plot the differential cross section with fixed center-of-mass energy √(s). The experimental data that we are using for the pp scattering are taken from Refs. <cit.>. The results of pp scattering for the kinematic range of 10 GeV<√(s)<30 GeV are shown in Fig. <ref>. Based on the analysis of the data, it is evident that our calculations are alignment with the overarching trends. The results of p p scattering for 30 GeV<√(s)<13 TeV are displayed in Fig. <ref>. This figure presented herein encompasses a substantial range for √(s). Notably, our model provides an accurate description of the corresponding data. The experimental data we are using for the p p scattering are taken from Refs. <cit.>. The quantity of available data for p p scattering is notably lower when compared to that of p p scattering. Specifically, at the GeV scale, our collection of data sets for p p scattering consists of a mere four instances, and no data has been gathered at the TeV scale. The results are presented in Fig. <ref>. Although the amount of data collected for pp scattering is limited and may lead to some loss of credibility, it is undeniable that we have still obtained an excellent consistency between our calculated results and the experimental data. § CONCLUSION We have investigated the differential cross sections of p p and p p scattering with the incorporation of Coulomb interaction, within the framework of holographic QCD. In our model setup, the strong interactions are considered to be represented by the exchange of Pomeron and Reggeon in the Regge regime. The Pomeron and Reggeon exchanges are described by the Reggeized spin-2 glueball and vector meson propagators, respectively. By combining the proton-vector meson and proton-glueball couplings with those propagators, we have obtained the strong interaction amplitude. The Coulomb interaction amplitude is represented by the lowest-order point-like charge amplitude and takes into account the influence of the electromagnetic form factor. According to Bethe <cit.>, the complete amplitude of a scattering process is typically expressed as a superposition of two distinct contributions – the Coulomb interaction amplitude and the strong interaction amplitude. These two amplitudes are connected by the coulomb phase factor that serves to mutually bind them together. We have adopted the classic Coulomb phase Eq. (<ref>), which has taken into account the form factor of proton. There are several parameters in our model, but all of which can be fixed with the values obtained in previous works <cit.>. Apart from these parameters, the introduction of form factors and Coulomb interactions in the model will not introduce any additional parameters. As demonstrated in this work, the total scattering amplitude obtained by combining strong interaction and Coulomb interaction can well describe the physical process of pp and pp scattering which reproduces quite well the data in the interference region without any additional parameters. The differential cross section results we provided have been found to be in excellent agreement with experimental data for both pp and pp scattering. This work is supported by the National Natural Science Foundation of China Grants under No. 12175100, the Natural Science Foundation of Hunan Province of China under Grants No. 2022JJ40344, and the Research Foundation of Education Bureau of Hunan Province, China (Grant No. 21B0402). And we would like to thank Zhi-Bo Liu for his assistance in programming.
http://arxiv.org/abs/2307.02010v2
20230705034315
ZJU ReLER Submission for EPIC-KITCHEN Challenge 2023: Semi-Supervised Video Object Segmentation
[ "Jiahao Li", "Yuanyou Xu", "Zongxin Yang", "Yi Yang", "Yueting Zhuang" ]
cs.CV
[ "cs.CV" ]
ZJU ReLER Submission for EPIC-KITCHEN Challenge 2023: Semi-Supervised Video Object Segmentation Jiahao Li, Yuanyou Xu, Zongxin Yang, Yi Yang, Yueting Zhuang ReLER, CCAI, Zhejiang University {xljh,yoxu,yangzongxin,yangyics,yzhuang}@zju.edu.cn =================================================================================================================================================================================== The Associating Objects with Transformers (AOT) framework has exhibited exceptional performance in a wide range of complex scenarios for video object segmentation <cit.>. In this study, we introduce MSDeAOT, a variant of the AOT series that incorporates transformers at multiple feature scales. Leveraging the hierarchical Gated Propagation Module (GPM), MSDeAOT efficiently propagates object masks from previous frames to the current frame using a feature scale with a stride of 16. Additionally, we employ GPM in a more refined feature scale with a stride of 8, leading to improved accuracy in detecting and tracking small objects. Through the implementation of test-time augmentations and model ensemble techniques, we achieve the top-ranking position in the EPIC-KITCHEN VISOR Semi-supervised Video Object Segmentation Challenge. § INTRODUCTION Video object segmentation is a vital task in computer vision that involves segmenting and isolating specific objects of interest in each frame of a video sequence. This task aims to provide pixel-level masks or contours delineating the target object's boundaries in each frame. Semi-supervised Video Object Segmentation (SVOS) has garnered significant attention in recent years, with a particular emphasis on learning-based methodologies <cit.>. In this context, a prevailing approach involves pixel matching across frames to extract crucial insights regarding the target objects. Notably, PLM <cit.> emerges as a pioneering matching-based SVOS method. By performing multi-scale matching between the previous and target frames, PLM successfully segments the target object. To address the challenge of ambiguous backgrounds, the method specifically focuses on the region surrounding the object in the previous frame. FEELVOS <cit.> leverages both global and local matching based on pixel-wise embeddings to facilitate information transfer across frames. These matching results are further integrated with semantic features to predict the final segmentation outcome. CFBI(+) <cit.> employs the first frame and the previous frame as references, treating foreground and background regions with equal importance, implicitly enhancing the discriminability of encoded features. These pioneering techniques contribute to the ongoing development of SVOS, offering valuable insights and potential avenues for further exploration in the field. AOT <cit.> stands as another prominent contribution in this field. Unlike previous methods which segment multiple objects one by one and merge results by post ensemble, AOT presents an innovative identification mechanism that encompasses the encoding, matching, and decoding of multiple objects. By incorporating transformers, AOT effectively associates objects across frames, fostering a deeper exploration and exploitation of inter-object relationships. Building upon this foundation, DeAOT <cit.>, a variant of AOT, employs a hierarchical Gated Propagation Module (GPM) to independently propagate object-agnostic and object-specific embeddings from previous frames to the current frame. 
This novel approach effectively preserves object-agnostic visual information within the deep transformer layers. Although the aforementioned methods have demonstrated remarkable performance on conventional datasets such as YouTube-VOS <cit.> and DAVIS <cit.>, there remain several challenges that demand attention when dealing with egocentric VISOR <cit.>. 1) Egocentric videos often exhibit rapid camera movements, posing a significant obstacle. 2) In the VISOR dataset, frames are not uniformly sampled at a fixed frame rate, and the time intervals between frames can span up to 1 or 2 seconds. 3) Frequent object-hand interactions in egocentric scenarios introduce issues such as occlusion and motion blur, necessitating effective handling strategies. To address above issues, we propose MSDeAOT, another AOT-based VOS model. MSDeAOT leverages the GPM <cit.> to propagate object masks from previous frames to the current frame. Specifically, we employ GPM in 2 feature scales with strides of 16 and 8, respectively. This multi-scale strategy effectively addresses the aforementioned challenges, enabling MSDeAOT to achieve 𝒥&ℱ score of 89.0% in EPIC-KITCHENS VISOR Semi-supervised Video Object Segmentation Challenge. § METHOD In this section, we present our main method in detail. We begin by providing a brief overview of the DeAOT model to familiarize the reader with the foundational concepts. Subsequently, we delve into the architecture design of our novel MSDeAOT model, outlining its key features and advancements. By elucidating these aspects, we aim to offer a comprehensive understanding of our approach and its contributions. §.§ Revisiting DeAOT DeAOT <cit.>, an adaptation of AOT, preserves the fundamental IDentification mechanism of AOT <cit.>. Consequently, DeAOT exhibits the ability to effectively handle multiple objects concurrently. Diverging from AOT, which consolidates the visual (object-agnostic) and ID (object-specific) embeddings within a shared embedding space, DeAOT adopts a decoupled approach. This entails employing separate propagation processes for each embedding type while maintaining shared attention maps. By decoupling these embeddings, DeAOT achieves enhanced flexibility and discriminative power in handling object representations. The Gated Propagation Module (GPM) plays a pivotal role within the DeAOT framework. In contrast to the LSTT block utilized by AOT, which leverages multi-head attention for propagation, GPM employs a Gated Propagation Function to seamlessly fuse and propagate both object-agnostic and object-specific embeddings. Additionally, GPM employs single-head attention to match objects and effectively propagate information across frames. By incorporating these distinctive mechanisms, GPM enhances the overall performance of the DeAOT model, enabling more accurate and efficient object segmentation. §.§ Multi-Scale DeAOT The whole architecture of MSAOT as shown in <ref>b follows an encoder-decoder design similar to classical segmentation networks like U-Net. The encoder consists of multiple blocks that down-sample the input feature maps, yielding features at different scales. These encoder blocks provide multi-scale features that are crucial for accurate object tracking and segmentation. In the decoder, unlike the FPN module employed in DeAOT ( <ref>a), the Gated Propagation Module (GPM) is integrated with multiple decoder blocks to establish the multi-scale stages of MSDeAOT. 
Each scale's feature maps from the encoder are fed into the corresponding stage, where the GPM module takes charge of matching the current frame with memory frames and aggregating mask information from the memory frames. The decoder blocks then decode this information. This innovative design of multi-scale stages brings notable benefits. It effectively harnesses the potential of feature maps at different scales, in contrast to the FPN module used in DeAOT, where multi-scale feature maps solely serve as shortcut connections for residual structures. Specifically, in DeAOT, only the feature maps at the smallest scale are utilized for matching across memory frames using the GPM module. In contrast, MSDeAOT comprehensively engages feature maps from multiple scales during the matching process, thereby enhancing performance and enabling finer details of objects to be captured. § IMPLEMENTATION DETAILS In MSDeAOT, we employ ResNet-50 <cit.> and Swin Transformer-Base <cit.> as the backbones for the encoder. While ResNet-50 offers a lightweight option, Swin Transformer-Base achieves superior performance. For the decoder, the MSDeAOT model incorporates GPM modules in multiple stages. Specifically, we set the number of layers in the GPM to 2 for the 16× scale stage and 1 for the 8× scale stage. To save computational resources, we exclude the 4× scale feature maps and instead duplicate the 16× scale feature maps twice to form the feature pyramid. The training process comprises two phases, following the AOT framework. In the initial phase, we pre-train the model using synthetic video sequences generated from static image datasets <cit.> by randomly applying multiple image augmentations <cit.>. In the subsequent phase, we train the model on the train and val sets of the VISOR dataset <cit.>, incorporating random video augmentations <cit.>. During MSDeAOT training, we employ 8 Tesla V100 GPUs with a batch size of 16. For pre-training, we use an initial learning rate of 4 × 10^-4 for 100,000 steps. For main training, the initial learning rate is set to 2 × 10^-4, and the training steps are 100,000. The learning rate gradually decays to 1 × 10^-5 using a polynomial decay schedule <cit.>. § EPIC-KITCHENS CHALLENGE: SEMI-SUPERVISED VIDEO OBJECT SEGMENTATION §.§ Ablation study on VISOR val set We train our MSDeAOT on two backbones, ResNet-50 <cit.> and Swin Transformer-Base <cit.>, and evaluate them on the VISOR val set. As shown in <ref>, SwinB-MSDeAOT achieves better performance than R50-MSDeAOT, with a 𝒥&ℱ score of 85.6. As for training data, we train R50-DeAOTL and R50-MSDeAOT on multiple datasets and evaluate them on VISOR val set. As shown in <ref>, R50-MSDeAOT trained on VISOR achieves the best performance, with a 𝒥&ℱ score of 84.0 and more training data has no improvement. §.§ Model Ensemble To achieve the best performance on hidden test, we train 3 models, , SwinB-DeAOTL <cit.>, R50-MSDeAOT, and SwinB-MSDeAOT on VISOR train and val set. We then ensemble the predictions of these 3 models to obtain the final results. As for test-time augmentations, both multi-scale test and flip test are used. The scales are 1.2×, 1.3×, 1.4× and each scale includes non-flipped and flipped test. Logits Meaning. During the inference process of each model, we refrain from storing the masks themselves. Instead, we retain the logits, which represent the probabilities of each pixel belonging to a specific object. Once inference is completed across multiple models, we acquire multiple sets of logits. 
These sets are subsequently weighted, averaged, and evaluated. The final label is determined by selecting the set with the highest probability value, thus consolidating the collective predictions from the ensemble of models. However, since saving logits requires a large amount of disk space, we can only apply two sets of logits for the ensemble. Mask Voting. First, we adopt a multi-model approach to perform inference, wherein each model produces a mask for an image. As a result, each pixel is associated with multiple labels. To determine the ultimate label for each pixel, we employ a weighted voting scheme that aggregates the labels. This process ensures the derivation of a final label that captures the collective information from the multiple models. As saving masks requires much less disk space than saving logits, we can apply more results from different models for the ensemble. § CONCLUSION In this paper, we propose MSDeAOT, a variant of AOT, for semi-supervised video object segmentation. MSDeAOT leverages the hierarchical Gated Propagation Module (GPM) at multiple feature scales to independently propagate object-agnostic and object-specific embeddings from previous frames to the current frame. MSDeAOT achieves remarkable performance on the EPIC-KITCHENS VISOR Semi-supervised Video Object Segmentation Challenge, with a 𝒥&ℱ of 89.0% on the test set.
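As an illustration of the two ensembling strategies described in the "Logits Meaning" and "Mask Voting" paragraphs above, here is a minimal NumPy sketch; the array shapes, class count and weights are hypothetical, and the actual pipeline may differ.

```python
import numpy as np

def ensemble_logits(logits_list, weights):
    """Weighted average of per-model logits of shape (C, H, W); returns a (H, W) label map."""
    stacked = np.stack(logits_list)                                  # (num_models, C, H, W)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1, 1)
    averaged = (stacked * w).sum(axis=0) / w.sum()
    return averaged.argmax(axis=0)

def ensemble_mask_voting(mask_list, weights, num_classes):
    """Weighted per-pixel vote over integer label maps of shape (H, W)."""
    h, w = mask_list[0].shape
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    votes = np.zeros((num_classes, h, w))
    for mask, weight in zip(mask_list, weights):
        votes[mask, rows, cols] += weight        # add this model's weight to its predicted class
    return votes.argmax(axis=0)

# toy usage with two hypothetical models and three labels (background + two objects)
logits = [np.random.rand(3, 4, 4) for _ in range(2)]
masks  = [lg.argmax(axis=0) for lg in logits]
print(ensemble_logits(logits, weights=[0.6, 0.4]))
print(ensemble_mask_voting(masks, weights=[0.6, 0.4], num_classes=3))
```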
http://arxiv.org/abs/2307.01870v1
20230704182859
Exploring Non-Verbal Predicates in Semantic Role Labeling: Challenges and Opportunities
[ "Riccardo Orlando", "Simone Conia", "Roberto Navigli" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Exploring Non-Verbal Predicates in Semantic Role Labeling: Challenges and Opportunities Riccardo Orlando, Simone Conia, Roberto Navigli ===================================================== Although we have witnessed impressive progress in Semantic Role Labeling (SRL), most of the research in the area is carried out assuming that the majority of predicates are verbs. Conversely, predicates can also be expressed using other parts of speech, e.g., nouns and adjectives. However, non-verbal predicates appear in the benchmarks we commonly use to measure progress in SRL less frequently than in some real-world settings – newspaper headlines, dialogues, and tweets, among others. In this paper, we put forward a new PropBank dataset which boasts wide coverage of multiple predicate types. Thanks to it, we demonstrate empirically that standard benchmarks do not provide an accurate picture of the current situation in SRL and that state-of-the-art systems are still incapable of transferring knowledge across different predicate types. Having observed these issues, we also present a novel, manually-annotated challenge set designed to give equal importance to verbal, nominal, and adjectival predicate-argument structures. We use such a dataset to investigate whether we can leverage different linguistic resources to promote knowledge transfer. In conclusion, we claim that SRL is far from “solved”, and its integration with other semantic tasks might enable significant improvements in the future, especially for the long tail of non-verbal predicates, thereby facilitating further research on SRL for non-verbal predicates. We release our software and datasets at <https://github.com/sapienzanlp/exploring-srl>. *Equal contribution. § INTRODUCTION Over the years, Semantic Role Labeling <cit.> – the task of identifying the semantic relations between predicates and their arguments – has attracted continued interest. Enticed by the prospect of acquiring one of the ingredients that might enable Natural Language Understanding <cit.>, the research community has striven to overcome numerous challenges in SRL. As a consequence, not only have automatic systems achieved impressive results on complex benchmarks <cit.>, such as CoNLL-2005 <cit.>, CoNLL-2008 <cit.>, CoNLL-2009 <cit.>, and CoNLL-2012 <cit.>, but SRL has also been successfully leveraged to benefit a wide array of downstream tasks in Natural Language Processing and also Computer Vision, including Machine Translation <cit.>, Summarization <cit.>, Situation Recognition <cit.>, and Video Understanding <cit.>, among others. Notwithstanding the achievements of previous work, we argue that there is still much to be done before the research community can claim SRL is even close to being “solved”. One of the simplest yet erroneous assumptions about SRL is that all predicates – or at least the majority of them – are verbs. Quite the contrary, predicates often manifest themselves as nouns, adjectives, and adverbs. For example, in the sentence “Sensational robbery at the bank during the night: two suspects on the loose!”, the word robbery is a predicate, as it denotes an action, and its arguments are sensational (attribute of the robbery), at the bank (location), during the night (time), and two suspects (agents). We highlight two potential issues in the above example. First, an SRL system that analyzes only verbal predicates cannot identify the nominal event in the sentence and, in turn, its semantic constituents.
Second, nominal events like those expressed in the above sentence are far from rare, being commonly found in several settings, such as newspaper headlines, blog titles, short messages, tweets, and dialogues. Perhaps surprisingly, there is limited work on non-verbal predicates, mostly focused on transferring “knowledge” about verbal predicates to nominal ones <cit.>. The scarcity of studies on non-verbal predicates might be explained by the way in which current datasets for SRL are designed, as they focus primarily on verbal predicates <cit.>. Therefore, any progress on non-verbal predicates is often overshadowed by the predominance of verbal instances, resulting in an incomplete picture of the actual situation. The issue is also exacerbated by the fact that, oftentimes, benchmark results are taken at face value. Instead, carrying out in-depth analyses is fundamental, as neural networks have been found to learn patterns that are different from those of humans, especially in semantic tasks <cit.>. In this paper, we perform a reality check and explore non-verbal predicates in English SRL. More specifically, our contributions are as follows: * We provide an empirical demonstration that state-of-the-art systems are not capable of generalizing from verbal to nominal and adjectival predicate-argument structures (PAS) in PropBank-based SRL; * We investigate whether other PAS inventories – namely, FrameNet, VerbNet, and VerbAtlas – are better suited for transferring learned patterns across predicate types; * We introduce a novel, manually-annotated challenge set to evaluate current and future SRL systems on verbal, nominal, and adjectival PAS; * We analyze possible directions and strategies for prospective work on non-verbal SRL. § CHALLENGES As mentioned above, relying on standard benchmarks does not allow us to properly evaluate the performance of state-of-the-art systems on non-verbal SRL. Cases in point are the CoNLL Shared Tasks: CoNLL-2005 covers only verbal predicates; CoNLL-2009 includes verbal and nominal predicates but makes it difficult to compare them, as they belong to two different inventories, PropBank and NomBank, respectively; CoNLL-2012 and its revision in OntoNotes 5.0 <cit.> do not cover adjectival predicates. Therefore, identifying unaddressed challenges, especially in non-verbal SRL, is far from trivial. Introducing and . Since OntoNotes 5.0 – the largest gold evaluation framework for PropBank-based SRL – does not comprehensively evaluate different predicate types, we collect the example sentences provided with each predicate in PropBank 3 <cit.> to create a new evaluation benchmark, named . This allows us to build a “controlled” benchmark, the first on which we can evaluate the performance of PropBank-based SRL on verbal, nominal, and adjectival PAS. In Table <ref> we report statistics on the coverage of CoNLL-2009, OntoNotes 5.0 and in terms of unique framesets (rightmost column), where the considerably higher frameset coverage of is evident. Compared to its alternatives, covers 7481 unique PropBank framesets against 2490 framesets covered in the OntoNotes test set and 2427 in CoNLL-2009. Moreover, when comparing to OntoNotes, the number of unique framesets used in verbal predicate occurrences is more than double (5465 vs. 2215), whereas it is almost double for nominal occurrences (1384 vs. 782). Adjectival occurrences are essentially missing in OntoNotes (with 3 unique framesets only), while covers 1599. 
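As a rough illustration of how a collection of frame-file examples like the one described above can be harvested, the sketch below walks the PropBank 3 frame XML files. The tag and attribute names (frameset/predicate/roleset/example/text) reflect one common layout of those files and may need to be adapted to the exact release; this is not the authors' extraction script.

```python
import glob
import xml.etree.ElementTree as ET

def collect_propbank_examples(frames_dir):
    """Harvest (lemma, roleset_id, example_text) triples from PropBank frame files.

    Assumes the common <frameset>/<predicate>/<roleset>/<example>/<text> layout;
    adjust tag/attribute names to the specific PropBank 3 release being used.
    """
    examples = []
    for path in glob.glob(f"{frames_dir}/*.xml"):
        root = ET.parse(path).getroot()
        for predicate in root.iter("predicate"):
            lemma = predicate.get("lemma")
            for roleset in predicate.iter("roleset"):
                roleset_id = roleset.get("id")
                for example in roleset.iter("example"):
                    text = example.findtext("text")
                    if text and text.strip():
                        examples.append((lemma, roleset_id, text.strip()))
    return examples

if __name__ == "__main__":
    rows = collect_propbank_examples("propbank-frames/frames")   # hypothetical local path
    print(len(rows), rows[:2])
```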
We remark that the same PropBank frameset can be used to annotate predicate occurrences from different parts of speech, which explains why the total number of unique framesets does not correspond to the sum of framesets used for verbal, nominal and adjectival predicate occurrences (second, third and fourth column of Table <ref>). Given its considerably higher coverage, also enables a solid evaluation of an SRL system on over 4000 predicate senses that are not included in OntoNotes 5.0; we call this more challenging testbed . We report statistics on in the last row of Table <ref>. Cross-type knowledge transfer. Now that we have wide-coverage multi-type SRL datasets, we can test the ability of SRL systems to generalize across types. The main objective of our experiments here is to empirically demonstrate that: i) “knowledge transfer” between predicate types is an unaddressed challenge, and ii) this problem is not apparent in OntoNotes, but becomes evident from and . To prove these points, we take CN-22 – a state-of-the-art system <cit.> – and study its behavior when trained on the entire OntoNotes (CN-22_ verbs+nouns), only on its verbal structures (CN-22_ verbs), or only on its nominal structures (CN-22_ nouns). The results on the test set of OntoNotes, shown in Table <ref>, represent the first evidence that even a state-of-the-art SRL system is affected by limited generalization capabilities across predicate types. Indeed, the performance of CN-22_ verbs drops significantly when evaluated on nominal PAS, from 84.7 to 16.4 points in F1 score on argument labeling, and that of CN-22_ nouns drops analogously when evaluated on verbal instances, from 72.8 to 11.2 on argument labeling. One could observe that CN-22_ verbs+nouns, jointly trained on verbal and nominal instances, seems to solve the cross-type transfer problem. However, this is true only because the OntoNotes test set does not feature adjectival structures. Indeed, it is very clear from the results on our and that the performance of CN-22_ verbs+nouns does not improve on adjectival PAS compared to CN-22_ verbs (only +0.5% on and +0.2% on for argument labeling). Therefore, we can derive that joint learning on two predicate types (i.e. the verbal and nominal ones) does not provide breakthrough improvements on a third predicate type (i.e. the adjectival one). We stress that, in this case, we cannot simply rely on jointly training CN-22 on verbal, nominal, and adjectival instances as, to our knowledge, no training dataset includes adjectival PAS for PropBank-based SRL. § OPPORTUNITIES In the previous Section, our experiments show that zero-shot knowledge transfer across predicate types is still challenging. We argue that this problem is caused by two main factors. First, PropBank was not designed to aid cross-type knowledge transfer, e.g., the nominal predicate theft.01 is not linked to its verbal equivalent steal.01. Second, recent SRL systems might have limited capability for recognizing common patterns across different predicate types. We conduct an initial investigation of these aspects and discuss some opportunities for improving non-verbal SRL. The role of the linguistic resource. While PropBank might not be the ideal resource for non-verbal SRL, other inventories – based on different linguistic theories – may provide features that could be helpful to aid knowledge transfer between predicate types. 
After all, previous studies have already shown that language models leverage different hidden layers depending on the linguistic resource used for SRL <cit.>. Here, instead, we take the opportunity to study if there is an inventory whose theoretical principles can aid the generalization capability of an existing SRL system on unseen patterns. We thus evaluate empirically the differences between four different inventories, namely, PropBank, FrameNet <cit.>, VerbNet <cit.>, and VerbAtlas <cit.>.[Appendix <ref> provides an overview of the inventories.] To do this, we create , a multi-inventory benchmark made up of the subset of OntoNotes from SemLink 2.0 <cit.>, whose predicates and arguments are annotated with PropBank, FrameNet, and VerbNet. We also include VerbAtlas annotations thanks to the inter-resource mapping between VerbNet, WordNet, and VerbAtlas.[Appendix <ref> provides further details on our mapping procedure.] For each of these inventories, includes a training, a validation, and a test set with 7336, 816, and 906 sentences, respectively. While we stress that this experimental setting is severely limited since it assumes that all resources can be mapped to each other 1-to-1, it provides a controlled environment for a fair, direct comparison. To study the impact of the inventory, we evaluate our SRL system on each of the linguistic inventories in (CN-22_ PropBank, CN-22_ FrameNet, CN-22_ VerbNet, and CN-22_ VerbAtlas). The results in Table <ref> testify that the linguistic resource of choice plays a role in the results. In particular, we can observe a relative error rate reduction of 38% in predicate sense disambiguation (from 97.9 to 98.7) and 13% in argument labeling (from 88.1 to 89.7) when using VerbAtlas instead of PropBank. This result indicates that higher-level semantic abstractions, such as semantics-based clusters, as available in VerbAtlas thanks to its organization of frames as verbal synset groupings, and cross-predicate role semantics, as adopted in VerbNet and also VerbAtlas, can help a system generalize better on unseen patterns. . While our multi-inventory SemLink-based dataset provides a preliminary indication of the role of a linguistic inventory, it only includes verbal predicates. To further validate the preliminary results obtained on our multi-inventory SemLink-based dataset, we create a small challenge test set for verbal, nominal, and adjectival SRL, manually annotated with parallel labels for PropBank, the most popular inventory, and VerbAtlas, the most promising inventory (cf. Table <ref>). This new test set is particularly challenging, as it features only PAS that do not appear in OntoNotes. Therefore, makes it possible to measure the capability of an SRL system to generalize i) across predicate types, and ii) on the long tail of predicate senses. To construct , we randomly selected a total of 288 sentences – 96 sentences for each predicate type – from . We then asked three expert annotators to independently annotate each sentence with predicate senses and their semantic roles. The annotation process was carried out in two phases: first, each person annotated each sentence independently, resulting in a disagreement of 32%; then, the annotators discussed and resolved their disagreements, if possible, reducing them to 6%. Overall, includes 1898 predicate-argument pairs. As we can see from Table <ref>, confirms our preliminary experiments, macroscopically magnifying the differences between PropBank and VerbAtlas. 
First, we observe that VerbAtlas is significantly better in predicate sense disambiguation for verbal instances (49.5 vs. 14.5 in F1 score) but worse for nominal and adjectival ones (22.2 vs. 17.7 and 27.7 vs. 13.5, respectively). This is mainly because VerbAtlas was not designed for non-verbal SRL and, therefore, it does not provide a lemma-to-sense dictionary to restrict the possible frames of nominal and adjectival predicates. Second, VerbAtlas significantly outperforms PropBank on argument labeling of verbs (47.0 vs. 5.5 in F1 score), nouns (44.2 vs. 2.1), and adjectives (36.8 vs. 10.8). We argue that this is largely due to the adoption in VerbAtlas of cross-frame semantic roles that are coherent across frames, which allows the system to leverage other predicates seen at training time with similar structures. Leveraging Word Sense Disambiguation. Finally, we carry out a preliminary exploration of possible directions that could aid non-verbal SRL in the future. While SRL research has not dealt with non-verbal semantics, other areas have investigated semantics for different parts of speech, and one of these is Word Sense Disambiguation (WSD). More specifically, WSD is the task of assigning the most appropriate sense to a word in context according to a predefined sense inventory <cit.>. It is easy to notice how this task resembles predicate sense disambiguation in SRL, the only difference being that WSD is not limited to predicates, as it aims to disambiguate every content word. Therefore, we believe that WSD is an interesting candidate to explore whether a different disambiguation task can help to improve the generalization capability of an existing SRL system on , i.e., on predicate-argument structures that the SRL system did not see at training time. To investigate the effect of WSD on SRL, we start by leveraging the fact that VerbAtlas frames are clusters of WordNet synsets. Therefore, we map each synset predicted by AMuSE-WSD <cit.>,[<https://nlp.uniroma1.it/amuse-wsd/>] a state-of-the-art off-the-shelf WSD system, to a VerbAtlas frame, and compare them to the prediction of our SRL system. Table <ref> shows the performance of AMuSE-WSD on predicate sense disambiguation (WSD_baseline). Interestingly, we observe that a simple WSD baseline can strongly outperform an SRL system when training data is scarce. Indeed, AMuSE-WSD surpasses CN-22_ SemLink in each predicate type (46.7 vs 6.2, 32.7 vs 6.2, 3.8 vs 3.1, for verbs, nouns and adjectives, respectively), and CN-22_ OntoNotes in nominal predicates, with an overall improvement of +5.7 (31.7 vs 26.0) over the best performing SRL system. Most interestingly, if we employ an oracle to pick the best prediction between the WSD baseline and our best SRL system, we notice a further improvement (41.5% vs. 26.0%), demonstrating that current state-of-the-art SRL systems can still benefit from explicit lexical semantics. We hypothesize that tighter integration of the two tasks may lead to even better improvements in generalization capabilities. § CONCLUSION AND FUTURE WORK In this paper, we carried out a reality check and demonstrated that, despite impressive results on standard benchmarks by state-of-the-art systems, SRL is still far from “solved”. Indeed, thanks to a carefully-designed set of experiments and the introduction of novel, manually-curated, wide-coverage benchmarks, we showed that current SRL systems possess inadequate capabilities for transferring knowledge between predicate types. 
Our analyses pointed out that we can address this limitation by working in two directions: leveraging the intrinsic characteristic of frameset resources, including semantics-based clusters and cross-predicate role semantics, and tighter integration of other semantics-based tasks, such as Word Sense Disambiguation, into SRL. We hope our work will be a stepping stone for innovative research on high-performance SRL systems for non-verbal predicate-argument structures, a problem that still needs extensive investigation. For this reason, we release our software and datasets at <https://github.com/sapienzanlp/exploring-srl>. § LIMITATIONS Part of our analyses and experiments is based on our dataset, which provides parallel annotations for PropBank, FrameNet, VerbNet, and VerbAtlas. We take the opportunity to remark that this is a constrained setting, as these resources cannot be mapped 1-to-1 without losing information. As such, this setting may not provide the full picture of how these resources compare against each other. However, we also believe that a setting like this can at least provide an intuitive idea of the role of a linguistic resource in cross-inventory generalization. Creating novel benchmarks that can better compare the role of different linguistic resources is certainly a direction for future work that may provide novel insights into verbal and non-verbal SRL. Another limitation of our work is the small size of . Even though contains only about 300 sentences, it features almost 2000 predicate-argument pairs, and this is a number that is sufficient to show the inability of a current state-of-the-art system to generalize across predicate types. We acknowledge that a larger benchmark may have provided further insights. However, we also note that, in our case, increasing the number of annotations would hardly have brought us to a different conclusion, especially given the large differences in performance among the model configurations that we evaluated. Finally, we stress that our experiments on integrating a simple WSD baseline into an SRL system do not provide a definitive answer on whether more complex integrations may lead to improved results. Instead, our intention is to support the claim that SRL is still far from being “solved”, as knowledge from other tasks can still hypothetically bring benefits to an existing SRL system, especially when the size of the training data is small. § ETHICS STATEMENT We release all the new datasets we produce under an open license. However, some of the datasets mentioned and used in our paper are not openly available, e.g., CoNLL-2009 and OntoNotes 5.0. We acknowledge the fact that such datasets may become unavailable at a later moment, as their distribution is not under our control. § ACKNOWLEDGMENTS 0.1 < g r a p h i c s > 0.70 The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme. 0.1 < g r a p h i c s > The last author gratefully acknowledges the support of the PNRR MUR project PE0000013-FAIR. acl_natbib § INVENTORIES In this paper, we evaluate empirically how SRL systems are influenced by the different linguistic inventories employed. We tested four popular inventories, namely PropBank, FrameNet, VerbNet, and VerbAtlas. Each of these inventories features different characteristics, which we summarize briefly here. 
PropBank PropBank <cit.> enumerates the senses of each predicate lemma, e.g., eat.01, eat.02, etc., and defines semantic roles (Arg0-Arg5) that are specific to each predicate sense, e.g., the meaning of Arg2 in eat.01 differs from that of eat.02. FrameNet FrameNet <cit.> groups predicates that evoke similar actions in semantic frames, e.g., the frame Ingestion includes eating, feeding, devouring, among others; each frame can have frame-specific roles, e.g., Ingestor and Ingestible. VerbNet VerbNet <cit.> defines classes of verbs with similar syntactic patterns, e.g., eating and drinking belong to Eat-39.1-1; all verb classes share a set of thematic roles, e.g., Agent and Patient. VerbAtlas VerbAtlas <cit.> clusters WordNet <cit.> synsets into coarse-grained frames, similar to FrameNet, and adopts a common set of thematic roles for all frames, similar to VerbNet. § In this Section, we provide further details on the construction process of . We leverage the data distributed as part of SemLink 2.0 <cit.>, which includes instances from OntoNotes 5.0 annotated with PropBank, FrameNet, and VerbNet. We select the subset of the instances that have a corresponding annotation in all three inventories. In addition, we also include VerbAtlas annotations through the inter-resource mapping between VerbNet, WordNet, and VerbAtlas. To convert the predicate senses, we employ the mapping from VerbNet to WordNet included in the Unified Verb Index (UVI)[<https://uvi.colorado.edu/>] project: since a VerbAtlas frame is a cluster of WordNet synsets, we associate a VerbNet class with a VerbAtlas frame through their corresponding synset. Additionally, we also extend the VerbAtlas annotations to include argument roles. Given that both VerbNet and VerbAtlas adopt a similar set of thematic roles, we manually map all the VerbNet roles to their corresponding VerbAtlas ones and convert the argument annotations accordingly. § MAPPING NOUNS TO VERBATLAS FRAMES Since VerbAtlas was originally designed only as a verbal inventory, its frames contain only verbal WordNet synsets. To expand its coverage and include nominal predicates, we propose a method for deriving nominal predicates from the verbal ones already included. The method leverages WordNet <cit.>, a lexical database that contains a wealth of information about word senses and their relationships. Specifically, we use the “hypernym” and “derivationally related forms” relations in WordNet to identify nominal word senses that are semantically related to a verbal predicate in VerbAtlas. Informally, to be included in our expanded version of VerbAtlas, a nominal word sense must meet the following criteria: * It must have a “hypernym” that belongs to the top-100 most frequent nominal senses related to event.n.01, i.e., event as in “something that happens at a given place and time”. * It must be semantically related – “derivationally related forms” related – to a verbal predicate included in a VerbAtlas frame. This approach allows us to identify a large number of nominal word senses that are semantically related to a verbal predicate in VerbAtlas. Therefore, we assign these nominal word senses to the same VerbAtlas frame as their related verbal predicates. In total, we are able to cluster 5334 nominal word senses, significantly expanding the coverage of VerbAtlas to include both verbal and nominal predicates. We release this mapping together with the rest of our software and datasets. 
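A rough re-implementation of the noun-expansion heuristic above could look like the following NLTK sketch (the WordNet data must be downloaded first). The "top-100 most frequent nominal senses related to event.n.01" filter is approximated here by a user-supplied set of allowed hypernyms, the verb-synset-to-frame dictionary is assumed to come from the VerbAtlas release, and the frame label in the toy example is purely hypothetical.

```python
from nltk.corpus import wordnet as wn    # requires: import nltk; nltk.download("wordnet")

EVENT = wn.synset("event.n.01")

def is_eventive(noun_synset, allowed_hypernyms):
    """Criterion 1: the noun sense has a hypernym inside the allowed event-related set."""
    return any(h in allowed_hypernyms for h in noun_synset.closure(lambda s: s.hypernyms()))

def related_verb_synsets(noun_synset):
    """Criterion 2: verb senses reachable via 'derivationally related forms'."""
    verbs = set()
    for lemma in noun_synset.lemmas():
        for rel in lemma.derivationally_related_forms():
            if rel.synset().pos() == wn.VERB:
                verbs.add(rel.synset())
    return verbs

def map_nouns_to_frames(verb_synset2frame, allowed_hypernyms):
    """verb_synset2frame: {verb Synset -> VerbAtlas frame id}, assumed to come from the resource."""
    noun2frame = {}
    for noun in wn.all_synsets(pos=wn.NOUN):
        if is_eventive(noun, allowed_hypernyms):
            for verb in related_verb_synsets(noun):
                if verb in verb_synset2frame:
                    noun2frame[noun] = verb_synset2frame[verb]
                    break
    return noun2frame

# toy check on a single noun: "robbery" should inherit the frame of "rob", if WordNet links them
toy_frames = {wn.synset("rob.v.01"): "STEAL"}          # hypothetical frame label
noun = wn.synset("robbery.n.01")
if is_eventive(noun, {EVENT}):                         # {EVENT} is a crude stand-in for the top-100 event senses
    print([toy_frames.get(v) for v in related_verb_synsets(noun)])
```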
§ MAPPING ADJECTIVES TO VERBATLAS FRAMES We follow a similar strategy to also include adjectival predicates in VerbAtlas. This time, we rely on the “pertainyms”, “similar to”, and “derivationally related forms” relations to connect adjectival word senses in WordNet to VerbAtlas frames. More specifically, we include each adjectival word sense that satisfies at least one of the following conditions: * It must be “derivationally related” or “pertaining” to a noun or verb sense that is already included in VerbAtlas; * It must be “similar to” another word sense that is in turn “derivationally related” to a predicate in VerbAtlas. We then assign these adjectival word senses to the same VerbAtlas frame as their related verbal and nominal predicates. As a result, we are able to include 2968 adjectival predicates in VerbAtlas. We release this mapping together with the rest of our software and datasets. § LICENSE We release our data under the Creative Commons Attribution Share-Alike (CC-BY-SA) license.
http://arxiv.org/abs/2307.02576v1
20230705181815
Measuring the hot ICM velocity structure function using XMM-Newton observations
[ "Efrain Gatuzz", "R. Mohapatra", "C. Federrath", "J. S. Sanders", "A. Liu", "S. A. Walker", "C. Pinto" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO", "astro-ph.GA" ]
It has been shown that the gas velocities within the intracluster medium (ICM) can be measured by applying a novel XMM-Newton EPIC-pn energy-scale calibration, which uses the instrumental Cu Kα line as a reference for the line emission. Using this technique, we have measured the velocity distribution of the ICM for clusters involving AGN feedback and sloshing of the plasma within the gravitational well (Virgo and Centaurus) and a relaxed one (Ophiuchus). We present a detailed study of the kinematics of the hot ICM for these systems. First, we compute the velocity probability distribution functions (PDFs) from the velocity maps. We find that for all sources the PDF follows a normal distribution, with a hint of a multimodal distribution in the case of Ophiuchus. Then, we compute the velocity structure function (VSF) for all sources in order to study the variation with scale as well as the nature of turbulence in the ICM. We measure a turbulence driving scale of ∼ 10-20 kpc for the Virgo cluster, while the Ophiuchus cluster VSF reflects the absence of strong interaction between the ICM and a powerful Active Galactic Nucleus (AGN) at such spatial scales. For the former, we compute a dissipation time larger than the jet activity cycle, thus indicating that a more efficient heating process than turbulence is required to reach equilibrium. This is the first time that the VSF of the hot ICM has been computed using direct velocity measurements from X-ray astronomical observations. X-rays: galaxies: clusters – galaxies: clusters: general – galaxies: clusters: intracluster medium – galaxies: clusters: individual: Virgo, Ophiuchus, Centaurus § INTRODUCTION Measuring the velocity structure of the ICM is important in order to constrain the different heating mechanisms that have been proposed to transfer energy from active galactic nuclei (AGN) back into the ICM <cit.>. In addition to energetics, turbulent motions also contribute to non-thermal pressure support, particularly at large radii, and affect cluster mass estimates obtained assuming hydrostatic equilibrium <cit.>. They play a role in the transport of metals within the ICM, due to the uplift and sloshing of metals by AGN outflows <cit.>. The turbulent velocity structure is also an excellent probe of the microphysics of the ICM, such as viscosity and conductivity <cit.>. In addition, measuring velocities should directly probe the sloshing of gas in cold fronts, which can persist for several Gyr <cit.>. Simulations indicate that the ICM should contain turbulent and bulk flow motions, due to mergers with subcomponents and other clusters <cit.>. The inflation of bubbles and the action of relativistic jets from the central AGN also likely generate motions of a few hundred km/s <cit.>. Furthermore, merging substructures can generate relative bulk motions of several hundred km/s due to perturbations in the ICM <cit.>. Overall, there is a close connection between the ICM physical state and the velocity power spectra <cit.>. Despite its importance, the velocity structure of the ICM remains poorly constrained observationally.
Direct measurements of random and bulk motions in the ICM using Fe-K emission lines were obtained by the Hitomi observatory, revealing low levels of turbulence near the Perseus cluster core despite the obvious impact of the AGN and its jets on the surrounding ICM <cit.>. <cit.> examined several clusters with Suzaku, although systematic errors from the Suzaku calibration were likely around 300 km/s and its PSF was large. Low turbulence motion is also measured from line broadening and resonant scattering, with velocities between 100-300 km/s and limited to the cluster core <cit.>. Indirect estimates of the level of turbulent velocities have been obtained from X-ray brightness fluctuations <cit.> and thermal Sunyaev-Zeldovich fluctuations <cit.>. However, these methods are not based on direct velocity measurements and are highly model-dependent. <cit.> present a novel technique that consists of using instrumental X-ray lines seen in the spectra of the XMM-Newton EPIC-pn detector to calibrate the absolute energy scale of the detector to better than 100 km/s at Fe-K. Using this technique, direct measurements of the bulk ICM velocity distribution have been done in multiple systems, including the Perseus and Coma clusters <cit.>, the Virgo cluster <cit.>, the Centaurus cluster <cit.> and the Ophiuchus cluster <cit.>. Velocity structure functions (VSFs) and spatial power spectra constitute useful diagnostic tools to study the turbulence motions in a medium (e.g., ISM or ICM), since they represent the variation of velocity with scale <cit.>. Recent observational studies have used such diagnostic to study the interstellar medium <cit.> and intergalactic medium <cit.> velocity structure. <cit.> studied turbulent velocities of ICM using optical data of atomic filaments in several nearby clusters. They analyzed the first-order structure functions of line-of-sights (LOS) velocity (VSF_1^LOS,obs) and found them to be steeper than expected from Kolmogorov turbulence theory <cit.>. They also found that the driving scale of turbulence in their sample of clusters is proportional to the size of AGN-driven bubbles. Such measurements have led to numerical studies of VSFs and velocity power spectra in similar multiphase ICM environments <cit.>. More recently, <cit.> carried out a thorough study of the VSFs for both the hot and cold ICM phases, including the effect of projection using different weightings along the LOS. In this work, we study the nature of the ICM within the Virgo, Centaurus and Ophiuchus clusters by measuring their VSF using direct velocity measurements obtained with the XMM-Newton EPIC-pn detector. The outline of this paper is as follows. In Section <ref> we describe the data reduction and fitting process. In Section <ref> we analyze the velocity probability distribution functions. The analysis of the VSFs is shown in Section <ref>. A detailed discussion of the results is shown in Section <ref>, while the conclusions and summary are included in Section <ref>. Throughout this paper we assumed a ΛCDM cosmology with Ω_m = 0.3, Ω_Λ = 0.7, and H_0 = 70 km s^-1 Mpc^-1. § XMM-NEWTON DATA REDUCTION The XMM-Newton European Photon Imaging Camera <cit.> observations are the same as we used in <cit.> and we followed the same data reduction process. Spectra were reduced with the Science Analysis System (SAS[<https://www.cosmos.esa.int/web/xmm-newton/sas>], version 19.1.0). First, we processed each observation with the epchain SAS tool. 
We used only single-pixel events (PATTERN==0) while bad time intervals were filtered from flares applying a 1.0 cts/s rate threshold. In order to avoid bad pixels or regions close to CCD edges we filtered the data using FLAG==0. Following the work done in <cit.>, we used updated calibration files, which allows to obtain velocity measurements down to 100 km/s at Fe-K by using the background X-ray lines identified in the spectra of the detector as references for the absolute energy scale. Identification of point sources was performed using the SAS task edetect_chain, with a likelihood parameter det_ml > 10. The point sources were excluded from the subsequent analysis, including the AGN in the Virgo cluster core (i.e., a central circular region with a diameter D=58 ). We made spectral maps of the clusters using the contour binning algorithm <cit.>. We created regions applying a geometrical constraint factor of 1.7, to prevent bins becoming too elongated. We masked out the point sources during binning. We performed the analysis using the maps binned with a signal-to-noise ratio of 75. While these maps have the disadvantage of producing a non-smoothly varying map, compared to those analyzed in <cit.>, the advantage is that the regions are statistically independent. We analyze the spectra with the xspec spectral fitting package (version 12.11.1[<https://heasarc.gsfc.nasa.gov/xanadu/xspec/>]) using cash statistics <cit.>. For each source, we followed the spectral fitting described in <cit.> which we will describe briefly. We model the cluster gas emission with an apec model. In the case of Centaurus, we model the spectra with a log-distribution of temperatures (lognorm model) in order to account for the ICM multi-temperature component within the system <cit.>. In order to account for the Galactic absorption we included a tbabs component <cit.>. The free parameters in the model are the redshift, metallicity, temperature, logσ (i.e., for the lognorm model) and normalization. Finally, we included Cu-Kα, Cu-Kβ, Ni-Kα and Zn-Kα emission lines to model the instrumental background. Figure <ref> shows the resulting velocity map from the contour binning process. The line shifts are with respect to each system analyzed (i.e., not with respect to us). Overall, these maps are similar to the codependent velocity maps described in our previous reports. For example, the Virgo cluster displays a redshifted gas along the west direction near the cluster center, while a blueshifted gas along the east direction is seen, a distribution shown in <cit.>. The Centaurus cluster, on the other hand, shows mainly a blueshifted gas, with larger velocities around the south-west direction, similar to the structure found in <cit.>. Finally, a redshifted-to-blueshifted interface with very large velocities can be identified in the Ophiuchus cluster velocity map in the east direction from the central core, a feature that was also identified in <cit.>. § VELOCITY PROBABILITY DISTRIBUTION FUNCTIONS Figure <ref> shows the velocity probability distribution functions (PDFs) computed from the velocity maps, weighted by area. For each PDF, we compute the Shapiro-Wilk <cit.> and D'Agostino and Pearson's <cit.> normality tests to determine if the data set is well modeled by a Gaussian[Both tests are included in the scipy.stats package.]. We found that for all sources the distribution follows a normal distribution (i.e., the p-value is larger than α=0.05 level). 
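As an illustration of the statistical checks just described, the following is a minimal sketch using scipy.stats; the velocity and area arrays are synthetic stand-ins for the per-region values extracted from the spectral maps, not the actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for the per-region quantities of one velocity map:
# best-fit line-of-sight velocity (km/s) and region area (pixels).
velocities = rng.normal(loc=0.0, scale=300.0, size=80)
areas = rng.integers(200, 2000, size=80)

# Area-weighted sample: repeat each region's velocity in proportion to its area,
# mimicking the area weighting applied to the PDFs.
repeats = np.rint(areas / areas.min()).astype(int)
weighted_sample = np.repeat(velocities, repeats)

# Shapiro-Wilk and D'Agostino-Pearson normality tests at the alpha = 0.05 level.
alpha = 0.05
for name, test in [("Shapiro-Wilk", stats.shapiro),
                   ("D'Agostino-Pearson", stats.normaltest)]:
    statistic, p_value = test(weighted_sample)
    verdict = "consistent with" if p_value > alpha else "rejects"
    print(f"{name}: p = {p_value:.3f} ({verdict} a normal distribution)")

# Best-fit Gaussian parameters for the area-weighted PDF.
mu, sigma = stats.norm.fit(weighted_sample)
print(f"Gaussian fit: mu = {mu:.1f} km/s, sigma = {sigma:.1f} km/s")
```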
Table <ref> shows the best-fit parameters obtained for a Gaussian model fitted to each PDF. In the case of Ophiuchus, there are hints of a multimodal probability distribution function; however, the sample of points in each mode is not large enough to perform a normality test. Simulations predict a Gaussian velocity probability distribution function for the ICM such as the one observed in Figure <ref> <cit.>. <cit.> predicted a σ value in the ∼ 200-400 km/s range for the hot ICM phase, with indications of isotropy, while Hitomi measured up to σ∼220 km/s. However, these measurements were obtained only for the Perseus cluster. Large velocities can be observed due to the presence of substructures, subgroups and/or strong merger shocks <cit.>. <cit.> found large velocities (>1000 km/s) for some regions within the Perseus cluster and the Coma cluster. In the latter case, they are associated with subgroups in the system. For the Ophiuchus cluster, the large velocities found along the east direction from the cluster center (see Figure <ref>) coincide with a sharp surface-brightness edge, which can be an indication of merger activity <cit.>. A detailed analysis of possible systematics carried out by <cit.> led to the conclusion that these velocity patterns are significant and reliable. It could be that the merger direction within this source has a large line-of-sight component, so that two separated clumps of gas are not observed in the image. Previous results obtained from X-ray, radio and optical observations are consistent with Ophiuchus being a merger <cit.>. Also, large velocities have been found using optical observations for cluster members even for distances <150 kpc <cit.>. Future missions such as the X-ray Imaging and Spectroscopy Mission <cit.>, the Line Emission Mapper <cit.> or Athena <cit.> will provide more direct evidence to test this interpretation. § VELOCITY STRUCTURE FUNCTIONS We compute the first-order structure function by taking the weighted average of the difference between the line-of-sight velocities (v) of two points separated by r. Mathematically, we define it as: δ v (r) = [∑_𝐱 w(𝐱+𝐞_1 r, 𝐱) |v(𝐱+𝐞_1 r)-v(𝐱)|] / [∑_𝐱 w(𝐱+𝐞_1 r, 𝐱)], where 𝐱 denotes the position of any point in the dataset and 𝐞_1 denotes a unit vector in any direction. We bin δ v into logarithmically-spaced bins of separation r. We have used three different weighting functions for our analysis: w_const=const, w_area=area_{𝐱+𝐞_1 r}+area_{𝐱}, w_err=[(v_err)^2_{𝐱+𝐞_1 r}+(v_err)^2_{𝐱}]^{-1/2}. Here area denotes the total number of pixels in a binned region (see Figure <ref>). We use w_const for the data presented in Figures <ref>, <ref> and <ref>. We show the effects of choosing different weighting functions in Figure <ref>. Figure <ref> shows the first-order VSFs computed for Virgo (blue points), Centaurus (green points) and Ophiuchus (yellow points). Because of the large uncertainties, we limit the analysis to the first-order VSF. A broad power-law slope can be identified for all sources (red dotted line), confirming that the gas motion is turbulent. While a slope of the VSF ∼1/3 is consistent with the expectation of classical Kolmogorov turbulence for an incompressible fluid, it has been shown that the steepening of the VSF may be due to projection effects <cit.>[Using idealised turbulence simulations, <cit.> show that projection along the line-of-sight generally leads to steepening of the VSF. However, the degree of steepening decreases with increasing clumpiness of the emitting medium, see their Fig. 7.
In the absence of a systematic study on the steepening of the VSF with clumpiness, we refrain from guessing the true 3D slope from the values obtained for the projected slope.]. For Virgo cluster a flattening is observed, thus indicating a driving scale of ∼15 kpc. Such flattening has been shown in magnetohydrodynamical simulations of AGN jet feedback <cit.>. In the case of Ophiuchus, such flattening is not observed. This is expected given that the influence of AGN feedback is minimal for this system. There are hints of flattening on very large scales (∼ 250) kpc. <cit.> reported the discovery of a very large bubble of radius ∼ 230 kpc. However, the analysis of systematic effects shows that such flattening may be artificial (see Section <ref>). Finally, the smallest scale accessible in our analysis is limited by the Full Width at Half Maximum (FWHM) of the effective point-spread-function (PSF). Recent work by <cit.> suggests that the steepening of the VSF could also partially be due to the total PSF, which tends to smooth out velocity differences on turbulence-driving scales much smaller than the scales we have derived in our study. Additionally, we require a minimum of 1000 counts in the 4-9 keV energy range to accurately determine the redshift of the Fe K-complex. Consequently, we are extracting spectra from regions considerably larger than the PSF. § DISCUSSION The VSFs indicate a driving scale of turbulence of ∼ 10-20 kpc for the Virgo cluster. Moreover, such a driving scale is expected given that bubbles have typical sizes of ∼ 5-20 kpc <cit.>. We can estimate the dissipation time, which is a few times the eddy turnover time t_ℓ≃ l/v_ℓ, where ℓ is the scale and v_ℓ is the velocity at that scale. For the Virgo we take v_ℓ≃ 350 km/s, which gives t_ℓ(10 kpc)≈ 28 Myr and a dissipation time of t_diss(10 kpc)>40 Myr. The period of AGN outburst is t_AGN≈ 12 Myr for the Virgo cluster <cit.> . Thus, the dissipation time is longer than the jet activity cycle, therefore the turbulence can transfer only a small fraction of the AGN power to heat the ICM. This implies that more efficient heating processes in addition to turbulence are required to reach equilibrium <cit.>. However, we note that in both cases t_ℓ and t_AGN are highly uncertain. The driving scale obtained for Virgo is larger than that obtained for the cold gas by <cit.> near the cluster core. In that sense, <cit.> have shown that the cold- and hot-phase velocities are uncorrelated at scales close to the driving scale. In the case of the Ophiuchus cluster the AGN itself only displays weak, point-like radio emission. While <cit.> report to have discovered a large cavity to the southeast of the cluster, <cit.> did not find changes in the metallicities or temperatures for regions inside and outside that region. Figures <ref> and <ref> show comparisons between the VSF of the hot ICM obtained for Virgo and Centaurus (red circles) from <cit.> and <cit.>, respectively, using MUSE data of Hα filaments (i.e., the cold ICM). The Hα velocities inferred on the largest scale seem to match the velocities inferred from the X-ray observations on small scales. This may imply multiple or different driving scales for the hot and cold gas, since the flattening occurs at different separations (∼2 kpc for the cold gas and ∼10 kpc for the hot gas). However, the smaller field of view of the observations analyzed by <cit.> could also affect the overall shape of the curve on large scales (a few kpc). 
In that sense, future observations are crucial to better understand the link between both environments. For example, Hα measurements on larger scales may show additional energy injection scales while better high-resolution X-ray VSFs will provide insights about additional breaks on smaller scales. We also compare our measurements with the velocities inferred from X-ray brightness fluctuations in <cit.> for Virgo and <cit.> for Centaurus in Figures <ref> and <ref>, resepectively. We find that the two measurements differ by roughly a factor of 2–4 (note the large errors in our data on scales ≲10 kpc, as well as our measurement uncertainty of ∼100 km/s). In addition to the above, further differences could be due to the following reasons: (1) The region analyzed by <cit.> is very small in comparison with our analysis; (2) In the case of Virgo, unlike us, <cit.> exclude the jet-arm regions from their calculations <cit.>, which are associated with larger brightness (and possibly velocity) fluctuations; (3) stratified turbulence simulations have shown that the ratio between density and velocity fluctuations that they use depends on the strength of stratification of the ICM <cit.>. It increases with increasing stratification and saturates for strongly stratified turbulence. Since both <cit.> and <cit.> use the value of this ratio in the limit of strong stratification <cit.>, they may under-estimate the amplitude of turbulent velocities when the stratification is weaker. In that sense, recent works have estimated the expected scatter for the proportionality coefficient <cit.>. §.§ Systematic effects §.§.§ Effect of weighting function The velocity maps obtained for these systems are not equally spaced and therefore there is no pixel-velocity one-to-one relation (see Figure <ref>). In order to account for such effects, we have computed the VSF by weighting each region with its area in pixels units. The top panel in Figure <ref> shows the results. We note that when including the area weighting, the flattening of the Virgo and Centaurus cluster VSFs is less pronounced. We perform a further test on the impact of systematics by weighting the curves including the uncertainties of each velocity measurement (see Figure <ref>). The bottom panel in Figure <ref> shows the VSFs obtained after weighting with the uncertainties. We note that the flattening of the curves is more noticeable. For the Ophiuchus cluster, the flattening shown at large distances in Figure <ref> is no longer present. These results indicate that the area weighting is more sensitive to large-scale variations, while the error weighting is more sensitive on small scale. Figure <ref> shows the distribution of pair separations in Centaurus, Ophiuchus and Virgo. Pair separations for the Ophiuchus cluster are much larger in comparison with the other sources. It is also clear that our analysis covers intermediate to large spatial scales compared to Hα studies. §.§.§ Effect of S/N cutoff For calculating the VSFs in section <ref>, we have used a signal-to-noise (S/N) filter of 1.0 on the independent velocity dataset. In Fig. <ref> we show the effect of choosing S/N filter to be 0.5 (upper panel) and 1.5 (lower panel), respectively. A lower S/N filter gives us a much larger number of pairs of points per radial separation bin and smoother variations in the VSF with separation. However, these VSFs show larger propagation error (error propagated from δ v/v measurement). 
On the other hand, our VSFs with S/N≥1.5 show smaller errorbars but larger scatter and suffer from low number statistics (larger Poisson error), since the number of pairs per separation bin is greatly reduced. § CONCLUSIONS AND SUMMARY We have analyzed the velocity structure functions (VSFs) of the hot ICM within the Virgo, Centaurus and Ophiuchus clusters of galaxies. This is the first time such velocity structures are measured for the hot gas using direct velocity measurements from X-ray astronomical observations. Line-of-sight velocities were measured using the technique developed by <cit.> to calibrate the absolute energy scale of the XMM-Newton EPIC-pn detector. Here we briefly summarize our findings. * We made spectral maps of the clusters using the contour binning algorithm. These maps provide velocity measurements for statistically independent regions. * We computed the velocity PDFs from the velocity maps. We applied a normality test and found that for all sources the PDF follows a normal distribution, as predicted by simulations. In the case of Ophiuchus, there are hints for a multimodal distribution. * We have computed the VSFs for all sources. For the Virgo cluster we found a driving scale of the turbulence of ∼ 10-20 kpc. For the Ophiuchus cluster, the VSF obtained reflects the absence of strong interactions between the ICM and a powerful AGN at such spatial scales. * We have found that the dissipation time is larger than the jet activity cycle, thus indicating that an additional process besides turbulence is required to reach equilibrium. That is, more efficient heating processes are required to reach equilibrium in addition to turbulence. § ACKNOWLEDGEMENTS The authors thank Irina Zhuravleva, Yuan Li and Shalini Ganguly for sharing data for Figures <ref> and <ref>. This work was supported by the Deutsche Zentrum für Luft- und Raumfahrt (DLR) under the Verbundforschung programme (Messung von Schwapp-, Verschmelzungs- und Rückkopplungsgeschwindigkeiten in Galaxienhaufen). This work is based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. This research was carried out on the High Performance Computing resources of the cobra cluster at the Max Planck Computing and Data Facility (MPCDF) in Garching operated by the Max Planck Society (MPG). C.F. acknowledges funding by the Australian Research Council (Future Fellowship FT180100495 and Discovery Projects DP230102280), and the Australia-Germany Joint Research Cooperation Scheme (UA-DAAD). A.L. acknowledges financial support from the European Research Council (ERC) Consolidator Grant under the European Union's Horizon 2020 research and innovation programme (grant agreement CoG DarkQuest No 101002585). §.§ Data availability The observations analyzed in this article are available in the XMM-Newton Science Archive (XSA[<http://xmm.esac.esa.int/xsa/>]). mnras
http://arxiv.org/abs/2307.01302v1
20230703191248
Primitive Automata that are Synchronizing
[ "Igor Rystsov", "Marek Szykuła" ]
cs.FL
[ "cs.FL" ]
A deterministic finite (semi)automaton is primitive if its transition monoid (semigroup) acting on the set of states has no non-trivial congruences. It is synchronizing if it contains a constant map (transformation). In analogy to synchronizing groups, we study the possibility of characterizing automata that are synchronizing whenever they are primitive. We prove that the implication holds for several classes of automata. In particular, we show it for automata whose every letter induces either a permutation or a semiconstant transformation (an idempotent with one point of contraction) unless all letters are of the first type. We propose and discuss two conjectures about possible more general characterizations. § INTRODUCTION We consider deterministic finite semiautomata (shortly called automata) and the properties of their transition monoids. An automaton is synchronizing if it admits a reset word, which is a word such that after reading it, the automaton is left in one known state, regardless of the initial state. In the transition monoid of the automaton (or the semigroup acting on the set of states), this corresponds to the existence of a constant map, which is induced by a reset word. The theory of synchronizing automata is most famous due to the conjecture, which says that every synchronizing automaton with n states admits a reset word of length at most (n-1)^2 <cit.>. This longstanding open problem from 1969 motivated researchers to develop a vast number of results. The currently best general upper bound is cubic in n <cit.>. Most of the research was collected in the recent survey <cit.>; see also the older ones <cit.>. Here, we consider the possibilities of relating the primitivity of the transition monoid with its synchronizability. Both these properties often appear together in the literature. We define the primitivity in analogy to that in the context of permutation groups, which means that there is no non-trivial congruence on the set of states preserved by the action of letters. This is often useful since we can then construct a smaller quotient automaton, where a state represents a class in the original one. So such congruences are used to derive results, in particular, upper bounds on the length of the shortest reset words in particular cases <cit.>. There are also some bounds stated for primitive synchronizing automata <cit.> (primitive automata are called there simple) and other results relating the primitivity and the synchronizability of automata <cit.>. A closely related problem concerning permutation groups was the subject of extensive research <cit.>. The problem was to characterize when a primitive permutation group, after adding one non-permutational transformation, results in a synchronizing semigroup. The transformations with this property have been successfully characterized. However, the whole study concerns only the case when the group is primitive, which is a strong condition from the automata point of view. In many cases, the group contained in the transition monoid of an automaton is not only non-primitive but non-transitive or even trivial (e.g., if the automaton has no permutational letters). This question is naturally generalized to semigroups, where the non-permutational transformations also contribute to the primitiveness of the transition monoid. Hence, our specific research question is the following: under what additional condition(s) is every primitive automaton synchronizing?
Additionally, if, in some cases, primitivity implies synchronizability, then sometimes we could relax necessary conditions, e.g., where an automaton needs to be both primitive and synchronizing <cit.>, or we could obtain a synchronizing automaton when needed by ensuring its primitivity instead of other properties, e.g., we could think about auxiliary constructions that need to be synchronizing such as the induced automata <cit.>. §.§ Our contribution We propose criteria that presumably are sufficient to imply the synchronizability from the primitivity of an automaton. We discuss two variants of the conjecture (weak and strong) referring to the shape of non-permutational transformations induced by the letters. We show that they cannot be (much) relaxed and provide experimental support. Based on the literature results, we show that the implication holds in several cases. In particular, we prove that it holds for automata with permutational and semiconstant letters, which are a generalization of automata with simple idempotents <cit.>. This class is a restricted case covered by our conjecture in the strong variant. Note: The weak variant of our conjecture in a stronger form has been recently solved by Mikhail Volkov <cit.>, together with several new results concerning our problem. § PRELIMINARIES An automaton A is a triple (Q, Σ, δ), where Q is a finite set of n elements called states, Σ is a finite non-empty set of letters, and δ Q ×Σ is a totally-defined transition function. The transition function is naturally extended to words (finite sequences of elements from Σ) in Σ^*. Given a word w ∈Σ^*, let δ(w) denote its induced transformation, which is a function (transformation, map) δ(w) Q → Q defined by δ(w)(q) = δ(q,w). For a set T of maps Q → Q, the transformation monoid (Q, M) generated by T and acting on Q is denoted by ⟨ T⟩, where M is the set of all transformations that can be obtained from maps from T by composition. For an automaton A = (Q, Σ, δ), its associated transition monoid is ⟨{δ(a) | a ∈Σ}⟩ = (Q,M), where M is the set of all maps δ(w) for every word w ∈Σ^*. In the literature, there are many names for a transformation monoid, e.g., transformation semigroup, operand <cit.>, polygon <cit.>; and it is also an algebra with unary operations <cit.>. We use the right-hand side convention and denote by (q)f the image of the state q ∈ Q under the action of f Q → Q. For a subset of states S ⊆ Q, the image of the map f we denote by (S) f = {(q)f | q ∈ S}. The cardinality |(Q) f| is the rank of f. The deficiency (or co-rank) of f is n-|(Q) f|. Maps of rank n (equivalently, of deficiency 0) are permutations (bijections on Q). A transformation monoid (Q,M) is transitive (or strongly connected) if for every two states s,t ∈ Q, there is a transformation f ∈ M such that (s)f = t. If a transformation monoid contains a map of rank 1, then the monoid is called synchronizing. A pair of different states s,t ∈ Q are called compressible if there is a transformation f ∈ M such that |({s,t})f| = 1; then f compresses the pair. It is well known that a transformation monoid is synchronizing if and only if every pair of states is compressible <cit.>. We transfer the above terminology from the transition monoid to the automaton and from transformations to words and letters that induce them. Thus, a synchronizing automaton is one that admits a word of rank 1. Such a word is also a reset word. §.§ Relations and Primitivity Let (Q, M) be a transformation monoid. 
Then the action of M on Q naturally continues on the square Q × Q by components: (s, t)f = ((s)f, (t)f), for each f ∈ M, s,t ∈ Q . For a binary relation ρ⊆ Q × Q, we put: (ρ)M = {(s,t)f | (s,t) ∈ρ, f ∈ M} . A relation ρ is called invariant for (Q, M) if (ρ)M = ρ. If ρ is additionally an equivalence relation, then it is a congruence of (Q, M). The equivalence classes of this congruence are called blocks. A relation ρ is trivial if it is the identity relation (diagonal) _Q = {(q,q) | q ∈ Q} or the total relation (square) _Q = Q × Q. A transformation monoid is primitive if it does not have any non-trivial congruence. An automaton is primitive if its transition monoid is primitive. Letters of deficiency 0, whose induced transformation is a permutation are called permutational, and these transformations generate a permutation group acting on Q and contained in the transition monoid. § THE CONJECTURE ON PRIMITIVITY IMPLYING SYNCHRONIZABILITY Every automaton with a letter of certain deficiency and a primitive group generated by the permutational letters is synchronising <cit.>. This holds for deficiencies 1, 2, n-3, and n-4, and also for other deficiencies if the letter's map additionally has some properties. Hence, it is natural to use deficiency as an additional condition for our question. Our best guess for a general conjecture is the following: Every primitive automaton with permutational letters and letters of deficiency 1 is synchronizing unless all letters are permutational. The class of these automata contains, in particular, almost-group automata <cit.>, which have only one non-permutational letter of deficiency 1, and it is known that at random they are synchronizing with high probability. First, we note that the conjecture does not hold in the reverse direction. There also exist almost-group automata that are synchronizing and non-primitive. Both automata from <ref> are strongly connected, non-primitive, and synchronizing, though they have only permutational letters and letters of deficiency 1. The automaton on the right is almost-group. The automata are synchronizing, since the words ab and a^2ba^3 are reset respectively for the automaton on the left and on the right. The automata are non-primitive, since the equivalence relation with classes {q_0,q_1}, {q_2} is a non-trivial congruence for the automaton on the left and the equivalence relation with classes {q_0}, {q_1}, {q_2}, {q_3,q_4} is a non-trivial congruence for the automaton on the right. §.§ Known Cases By some results from the literature, we know that the conjecture holds in certain cases. A sink state q_0 ∈ Q is such that (q_0)M = {q_0}. An automaton is 0-transitive if it has a unique sink q_0 and (q)M = Q for all q ∈ Q ∖{q_0}; in other words, there is a sink reachable from every state and the other states form one strongly connected component. For n ≥ 3, a primitive automaton (Q,Σ,δ) is either strongly connected or 0-transitive. It follows that it would be enough to show the conjecture for strongly connected automata, as 0-transitive automata are synchronized in q_0. An automaton is called aperiodic if its transition monoid does not have any non-trivial subgroups; in other words, in there is no transformation that acts cyclically on some subset of states of size ≥ 2. From the following statement, we get that a primitive aperiodic automaton is synchronizing: An aperiodic automaton (Q,Σ,δ) is synchronizing if and only if it has a state q ∈ Q that is reachable from all the states, i.e., q ∈ (p)M for all p ∈ Q. 
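The compressibility criterion recalled in the Preliminaries makes checks of this kind easy to mechanize. Below is a minimal sketch (not the enumeration software used later for the experimental support): letters are given as tuples, where letter[i] is the image of state i, and the two-letter automaton at the end is an illustrative toy example, not one of the automata from the figures.

```python
from collections import deque
from itertools import combinations

def is_synchronizing(n_states, letters):
    """An automaton is synchronizing iff every pair of states is compressible,
    i.e., some word maps the pair to a single state."""
    def compressible(p, q):
        seen = {frozenset((p, q))}
        queue = deque([(p, q)])
        while queue:
            s, t = queue.popleft()
            for letter in letters:
                u, v = letter[s], letter[t]
                if u == v:
                    return True
                pair = frozenset((u, v))
                if pair not in seen:
                    seen.add(pair)
                    queue.append((u, v))
        return False

    return all(compressible(p, q) for p, q in combinations(range(n_states), 2))

# Toy example on 3 states: letter a is the cyclic permutation (0 1 2),
# letter b is the unitary transformation 1 -> 0 (deficiency 1).
a = (1, 2, 0)
b = (0, 0, 2)
print(is_synchronizing(3, [a, b]))  # True; e.g. the word b a b is a reset word
```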
Another case is automata with a prime number of states that also contains a full cyclic permutation in their transition monoid. It follows by an old result of Pin <cit.>, stating that a circular automaton, i.e., with a letter that induces one cycle on all the states, with a prime number of states is synchronizing whenever it has any non-permutational letter. An automaton with a prime number of states n that also contains a full cyclic permutation in its transition monoid is primitive and it is synchronizing if and only if it has any non-permutational letter. The permutation group contained in the automaton's transition monoid is transitive, and it is known that every transitive group of a prime degree is primitive <cit.>. The statement that the automaton is synchronizing if it contains any non-permutational letter has been proved in <cit.>. §.§ Strong Variant We note that restricting deficiency is essential. Examples of primitive non-synchronizing automata can be easily found already if we admit letters of deficiency 2. Both automata from <ref> are primitive and non-synchronizing. Consider the automaton on the left. Observe that every pair of states can be mapped to {q_0,q_1}. If there is a non-trivial congruence on the set of states, so some two states q_i, q_j are in one block, then so q_0, q_1 are in one block. But then also q_0 and each q_i for i=1,…,4 are in one block by the action of b, thus the congruence is trivial. Also, the automaton is non-synchronizing, because the pairs {q_0,q_i} for i=1,…,4, {q_1,q_3}, and {q_2,q_4} are not compressible. Consider the automaton on the right. Observe that every pair of states can be mapped to {q_0,q_3}, and this pair can be mapped to each {q_i,q_3} for i ∈{0,1,2,4}. Thus, as before, the automaton is primitive. Also, the pairs {q_i,q_3} for i ∈{0,1,2,4}, {q_0,q_4}, and {q_1,q_2} are not compressible, thus the automaton is non-synchronizing. However, we can still slightly relax the condition and propose a stronger conjecture. For this, we refer to the notion of components and cycles of a transformation, which are defined on the digraph D(t) in which from every state q there is exactly one outgoing arc (q,(q)t). Then, a component of a transformation t is a weakly connected component of D(t), which always contains exactly one cycle, which can be a loop. The strongly connected components of D(t) are precisely its cycles. Every primitive automaton where each letter: * is permutational, * has deficiency 1, or * its transformation has one component with a cycle of size ≤ 2 and the other components being cycles, is synchronizing, unless all letters are permutational. Both automata from <ref> satisfy the conditions of <ref> except that they are non-primitive; in particular, their letter a meets condition (3). On the other hand, the examples from <ref> show that we cannot strengthen the conjecture more: These automata do not satisfy the conditions due to the transformation of the letter a; in the left automaton, it has two components that are not cycles, and in the right automaton, the cycle in the unique component has size 3. §.§ Experimental Support By the automata enumeration method of <cit.>, we have verified both conjectures for automata with a small number of states and a small alphabet. <ref> summarizes the cases. The computation of the binary case with n=11 (only the weak variant) took about 81 hours of one CPU core. 
We also tried further strengthening of <ref> and then it was easy to find many counterexamples already for n ≤ 5 (for instance, as in <ref>). § PRIMITIVE AUTOMATA WITH SEMICONSTANT LETTERS A map f is semiconstant if there is a subset S ⊆ Q such that (S)f = {r}, for some state r ∈ Q, and (q)f = q for every q ∈ Q ∖ S. We denote it by (S → r). The special cases of a semiconstant map are: * the identity map, which can be denoted by (∅→ r) for any r ∈ Q; * constant maps, which can be denoted by (Q → r) for r ∈ Q; * unitary (or simple idempotent) maps, denoted by ({q}→ r), where q ≠ r. We denote it simpler by (q → r). We transfer this terminology to letters that induce these transformations. An automaton A = (Q,Σ,δ) where every letter is permutational or semiconstant is a PSc-automaton. If additionally, every letter of the automaton is permutational or unitary, then it is a PU-automaton. In the literature, the latter was often called an automaton with simple idempotents <cit.>, whereas the names unitary and semiconstant appear in <cit.>. The later terminology is adopted as it offers and distinguish both types of letters that we consider here. In this section, we prove that every primitive PSc-automaton is synchronizing. We first show this for PU-automata and then generalize to the wider class. §.§ Closures of Relations Before we consider PU-automata, we state some basic properties of relations that will be used. Let (Q,M) be a transformation monoid. It is easy to see that the union and the intersection of invariant relations are also invariant (for (Q,M)). Since the identity relation _Q = {(q,q) | q ∈ Q} and the inverse relation ρ^-1 = {(q,p) | (p,q) ∈ρ} of an invariant ρ relation are also invariant, we get the following statement: If ρ is an invariant relation for (Q,M), then its symmetric closure _Q ∪ρ∪ρ^-1 is also invariant for (Q,M). For a relation ρ, we assign the oriented graph Γ(ρ) = (Q,ρ), called shortly the orgraph of ρ. For each pair of different states (s,t) ∈ρ, there is an arc (directed edge) in the orgraph from s to t, and for (s,s) ∈ρ, there is a loop. When considering orgraphs, we use the usual notions from graph theory such as a path, a simple path, a route (walk), and a cycle. Denote by ρ^* the transitive closure of ρ – this is the reachability relation of the orgraph Γ(ρ), i.e., (s,t) ∈ρ^* if and only if there is a (possibly empty) path from s to t in Γ(ρ). If ρ is an invariant relation for (Q,M), then also ρ^* is invariant. Let (s,t) ∈ρ^* and let P=(s=s_1,…,s_k=t) be a path in the orgraph Γ(ρ) such that (s_i,s_i+1) ∈ρ for all 1 ≤ i < k. Consider a map f ∈ M and the image (P)f = ((s_1)f,…,(s_k)f). Then for all 1 ≤ i < k, ((s_i)f,(s_i+1)f) ∈ρ since ρ is invariant. Thus, (P)f is a route in Γ(ρ), which implies that ((s)f,(t)f) ∈ρ^*. Denote by ϵ[ρ] the equivalence closure of ρ, i.e., the smallest equivalence relation containing ρ. This can be obtained by taking the reflexive, symmetric, and transitive closures: ϵ[ρ] = (_Q ∪ ρ ∪ ρ^-1)^* . The equivalence classes of ϵ[ρ] are the weakly connected components of the orgraph Γ(ρ). From Propositions <ref> and <ref>, we get: If ρ is an invariant relation of (Q,M), then its equivalence closure ϵ[ρ] is a congruence of (Q,M). For a binary relation ρ, we have: ϵ[ρ] = ϵ[ρ^*]. The direction ⊆ follows from the trivial inclusion ρ⊆ρ^*, and the other direction ⊇ follows from ρ^* ⊆ϵ[ρ] = (_Q ∪ρ∪ρ^-1)^* by definition. 
The following will be a useful property of relations: A binary relation ρ is cyclic if its transitive closure is symmetric: ρ^* = (ρ^*)^-1. A binary relation ρ is cyclic if and only if one of the following equivalent conditions are satisfied: * ρ^-1⊆ρ^*. * Γ(ρ) is a union of (not necessarily disjoint) cycles. * Every weakly connected component of Γ(ρ) is strongly connected. If ρ is cyclic, then ρ^-1⊆ (ρ^*)^-1 = ρ^* (1). If for each arc (s,t) ∈ρ, we have (t,s) ∈ρ^* (1), so there is a simple path P from s to t in Γ(ρ), then the path (s,t) P is a cycle containing (s,t) (2). From (2), every s and t from one weakly connected component are in a cycle, thus they are in the same strongly connected component (3). From (3), for every (s,t) ∈ρ^*, there is also a path from t to s, thus we have (t,s) ∈ρ^*, and vice versa, thus the relation is cyclic. The transitive closure of a cyclic relation ρ is an equivalence relation: ρ^* = ϵ[ρ] . By <ref>, we have ρ^* ⊆ϵ[ρ^*] = ϵ[ρ]. Consider the other direction. The identity relation _Q and ρ are trivially contained in ρ^*. From <ref>(1), we also have ρ^-1⊆ρ^*. Thus, ϵ[ρ] ⊆ρ^*, and the equality follows. §.§ Transition Semigroups of Automata with Permutational and Unitary Letters Let A = (Q,Σ,δ) be a strongly connected PU-automaton, and let P and U be respectively the set of the permutations and the set of unitary transformations induced by its letters. The transition monoid of this automaton is (Q,⟨ P ∪ U⟩). Consider the submonoid G = (Q,⟨ P⟩), which is a permutation group on Q. Note that the reachability relation in G is an equivalence relation, i.e., each weakly connected component is strongly connected, which is a class in this relation called an orbit. We denote by (s)G the orbit of G that contains s ∈ Q: (s)G = {(s)g | g ∈ G}. The action of the permutation group G continues on the square Q × Q. The (strongly) connected components of G on Q × Q are called orbitals. Denote by (s,t)G the orbital of G that contains (s,t), i.e., (s,t)G = {(s,t)g | g ∈ G}. Note that each orbital is an invariant relation of G. Two pairs of states are G-equivalent if they belong to the same orbital of G. We define the unitary relation δ(U) as follows: δ(U) = ⋃_(s → t) ∈ U (s,t) . The group closure of the idempotent relation is defined by the formula: π = (δ(U)) G = ⋃_(s,t) ∈δ(U) (s,t)G . Since π is a union of orbitals of G, it is an invariant relation for G. A pair of states (s,t) ∈π is called internal if both states are in the same orbit of G. The pair is external if s and t are in different orbits of the group. We split π into disjoint π_int containing the internal pairs and π_ext containing the external pairs. The relations π_int and π_ext are invariant for G. Since π is invariant, for every (s,t) ∈π_int, we have ((s)g,(t)g) ∈π. If (s,t) ∈π_int, thus s and t are in the same orbit of G, then we know that also (s)g and (t)g are in the same orbit, hence we have an internal arc: ((s)g,(t)g) ∈π_int. Similarly, if (s,t) ∈π_ext, thus s and t are in different orbits of G, then (s)g and (t)g are in different orbits, hence we have an external arc: ((s)g,(t)g) ∈π_ext. The following statement in other terms has been used in the theory of permutation groups <cit.>. The relation π_int is cyclic. Consider the orgraph Γ(π_int) = (Q,π_int) and an arc (s,t) ∈π_int. Since s and t are in the same orbit of G, there is a permutation g ∈ G such that (s)g = t. 
Denote by k ≥ 1 the minimum number such that (t)g^k = s, and consider the sequence of pairs C = ((s,t)g^i i ≤ i ≤ k), where g^0 is the identity map (the unit of G). As π_int is invariant for G, all the arcs from C are in Γ(π_int) thus C is a cycle in Γ(π_int). It follows that every arc from Γ(π_int) is in a cycle, thus Γ(π_int) is a union of cycles and by <ref>, π_int is cyclic. Consider the orgraph Γ(π_ext) and let (s,t) ∈π_ext. Then the pair (s,t) is located between the orbits (s)G and (t)G of the group G. Consider the orbital (s,t)G of the pair (s,t) and its orbital orgraph Γ(s,t) = ((s)G ∪ (t)(G), (s,t)G). It follows from formula (<ref>) and Lemma <ref> that Γ(s,t) is a subgraph of Γ(π_ext). For an external pair (s,t) ∈π_ext, the orbital orgraph Γ(s,t) is a bipartite graph with parts (s)G and (t)G such that from each state in (s)G goes out an arc to a state in the orbit (t)G. For each state s' ∈ (s)G, we can find a permutation g ∈ G such that (s',t') = (s,t)g is the outgoing arc from s' to a state t' ∈ (t)G. For every s,t ∈ Q, there is a path in the orgraph Γ(π_ext) from s to a state in the orbit (t)G. Since we assume that A is strongly connected, there is a path of transitions e_1,…,e_k from the state s to a state in (t)G, i.e., for each e_i = (s_i,t_i) there is either a permutational letter inducing g ∈ P such that (s_i)g = t_i or the unitary transition (s_i → t_i) ∈ U. In the former case, e_i is an internal arc from π_int, and in the latter case, it can be either an internal or an external arc from π. We remove all internal arcs, obtaining a sequence of external arcs e_i_1, …, e_i_m only, which is a subsequence of e_1,…,e_k. If this is the empty sequence (m=0), then s ∈ (t)G, so the lemma holds. Otherwise, for the first arc e_i_1 = (s_i_1,t_i_1), we have s_i_1 in the orbit (s)G and t_i_1 in another orbit. By <ref> applied for e_i_1, we can find a pair (s,s_1) ∈π_ext that is G-equivalent to e_i_1, thus where s_1 is a state from (t_i_1) G. There may be several such arcs, and we can choose any. Then for e_i_2, we choose an arc (s_1,s_2) ∈π_ext in the same way, and so on till e_i_m. As the result, we obtain a route in the orgraph Γ(π_ext) that starts from s, follows the same orbits as the path e_1,…,e_k, and ends in a state from (t)G. The relation π_ext is cyclic. Consider the orgraph Γ(π_ext) = (Q,π_ext) and an arc (s_1,t_1) ∈π_ext. We are going to show that (s_1,t_1) ∈π_ext is in some cycle in Γ(π_ext), thus the lemma follows by <ref>. By <ref>, there is a path P_1 in the orgraph Γ(π_ext) from t_1 to some state s_2 ∈ (s_1)G. If s_1 = s_2, then we have found the cycle (s_1,t_1) P_1, which contains (s_1,t_1). Otherwise, we choose a state t_2 ∈ (t_1)G such that the arc (s_2,t_2) ∈π_ext is G-equivalent to (s_1,t_1), and we again choose a path P_2 from t_2 to some state s_3 ∈ (s_1)G. We continue this process, which stops when the found path P_k ends in a state s_k+1∈ (s_1)G that has been encountered before as an s_i for some i ≤ k. As the size of (s_1)G is finite, this finally happens. Now, (s_i,t_i) P_i … (s_k,t_k) P_k is a closed route containing (s_i,t_i) ∈π_ext. Let X be the path separated from the route P_i … (s_k,t_k) P_k by removing all cycles so that every state occurs at most once. Then C = (s_i,t_i) X is a cycle containing the external arc (s_i,t_i), which is G-equivalent to (s_1,t_1). Let g ∈ G be such that (s_1,t_1) = (s_i,t_i)g. Then the cycle C' = (C)g is contained in Γ(π_ext) (as π_ext is invariant for G) and contains (s_1,t_1). The relation π = π_int∪π_ext is cyclic. 
We recall the known characterization of synchronizing automata with permutational and unitary letters. In the literature, the orgraph Γ(π) is often called the Rystsov graph of an automaton <cit.>. A strongly connected PU-automaton is synchronizing if and only if the orgraph Γ(π) is strongly connected. We have now all ingredients for the main result of this section. A primitive PU-automaton with at least one unitary letter is synchronizing. By <ref>, it is enough to consider a strongly connected primitive U-automaton. Suppose that such an automaton is not synchronizing, thus by <ref>, the orgraph Γ(π) is not strongly connected. Then, for the transitive closure π^* of π, we obtain the strict inclusion: π^* ⊊ Q × Q. Since the automaton has at least one unitary letter, the relation π is non-empty and thus ⊊π^*. Since π is cyclic (<ref>), from <ref>, we get π^* = ϵ[π] so it is an equivalence relation. As π is invariant for G (<ref>), from <ref>, the equivalence closure ϵ[π] is a congruence of the automaton's transition monoid (Q,⟨ P ∪ U⟩). Thus, ϵ[π] is a non-trivial congruence of (Q,⟨ P ∪ U⟩), so the automaton is non-primitive. §.§ Generalization to Semiconstant Letters The generalization follows by a decomposition of semiconstant transformations into unitary transformations. If A is an PSc-automaton, then by A^Sc→U we denote the PU-automaton obtained from A in the following way: every letter that induces a semiconstant transformation ({s_1,…,s_k}→ r), where the states s_i are pairwise distinct and different from r, is replaced with k fresh letters inducing the unitary transformations (s_1 → r), …, (s_k → r), respectively. Let A be a PSc-automaton. Then: * If A is primitive then also A^Sc→U is primitive. * A is synchronizing if and only if A^Sc→U is synchronizing. A semiconstant transformation ({s_1,…,s_k}→ r) can be generated as the composition of the k unitary transformations that replace it: (s_1 → r)·…·(s_k → r) = ({s_1,…,s_k}→ r) . Hence, the transition monoid of A is a submonoid of A^Sc→U. This implies (1) and the ⇒ direction of (2). For the ⇐ direction of (2), denote A = (Q, Σ_A, δ_A) and B = (Q, Σ_B, δ_B) = A^Sc→U. We show by induction on k that every pair of states {p,q} which is compressible in B with a word of length k is compressible in A. For k=0 it is trivial. Assume the statement for k and consider a pair {p,q} which is compressible in B with a word w = a u of length k+1, where a ∈Σ_B and u ∈Σ^*_B. If a is permutational, then let τ(a) = a, and the following equation holds trivially: {δ_B(p,a),δ_B(q,a)} = {δ_A(p,τ(a)),δ_A(q,τ(a))} Then the statement follows from the induction hypothesis since u of length k compresses the pair from (<ref>) in B. Otherwise, a induces a unitary transformation (s → r). If s ∉{p,q}, then δ_B({p,q},a) = {p,q}, so again u of length k compresses the pair. Without loss of generality, suppose s = p. Then let τ(a) be a semiconstant letter that induces (S → r), where s=p ∈ S. If q ∈ S ∪{r}, then τ(a) compresses {p,q}, so we are done. Otherwise, q ∉ S ∪{r}, thus (<ref>) holds again. It follows that if B is synchronizing, thus all pairs of states are compressible, then also they are in A, so A is synchronizing. A primitive PSc-automaton is synchronizing, unless all its letters are permutational. If A is a primitive PSc-automaton, then by <ref>(1), A^Sc→U is primitive, so by <ref> A^Sc→U is synchronizing. Then by <ref>(2), A is also synchronizing.
http://arxiv.org/abs/2307.01631v1
20230704102723
The River Model of Gravitational Collapse
[ "Soumya Chakrabarti" ]
gr-qc
[ "gr-qc", "hep-th", "math-ph", "math.MP" ]
soumya.chakrabarti@vit.ac.in School of Advanced Sciences, Vellore Institute of Technology, Tiruvalam Rd, Katpadi, Vellore, Tamil Nadu 632014, India We show that the transformation of a time-evolving spherically symmetric metric tensor into a Painlevé-Gullstrand-Lemaître form brings forth a few curious consequences. The time evolution describes a non-singular gravitational collapse, leading to a bounce and dispersal of all the clustered matter, or a wormhole geometry for certain initial conditions. The null convergence condition is violated only at the onset of bounce or the wormhole formation. As an example, the requirements to develop a Simpson-Visser wormhole/regular black-hole geometry are discussed. The solution can be regarded as a new time-evolving twin of sonic dumb holes found in analog gravity. The River Model of Gravitational Collapse Soumya Chakrabarti August 1, 2023 The modern idea of gravitational physics is based on an intuitive interpretation of the laws of nature and a few paradoxes. General Theory of Relativity (GR) provides a way to address them based on a geometric description. Some of these paradoxes have developed into popular research problems over the years and they can be classified depending on their origin and basic motivations. There are problems which do not necessarily require a gravitational environment, for example, the study of topological defects evolving from the residues of cosmological phase transitions <cit.>. They can carry signatures of an early cosmic expansion history. Focus must equally be given to some problems strongly related to the background gravitational environment, for example, the dynamics of a collapsing stellar matter distribution after the death of a star. It is widely believed that such a gravitational collapse will produce a zero proper volume singularity which will probably remain hidden behind a null surface, known as the horizon <cit.>. The formation of a horizon indicates a black hole from which information cannot escape, at least classically. It is a natural intuition to imagine that near a zero proper volume, quantum effects will generate some modifications and lead to phenomena like Hawking radiation <cit.>. However, no such complete model of gravitational collapse has been proposed to date. The formation of a horizon itself remains a debatable issue <cit.> and has led to a number of proposals and counter-proposals, the most remarkable amongst all being the `cosmic censorship conjecture'. Once again, a quantum correction can perhaps provide a better understanding of how a horizon develops; however, any such correction is based on a quantum field theory that has found success only on very small scales. No successful analogue system with horizons has been constructed in the lab so far that can test quantum corrections on an appropriate scale or provide any alternative notion. In a sense, this lack of explanation leaves an open end and demands new perspectives. It might be beneficial for relativists if the mathematics of classical gravitational collapse did not necessarily generate singularities for generic spacetime geometries. However, the few known models of non-singular collapse either rely on exotic matter distributions <cit.> or quantum corrections <cit.>, or suffer from a lack of physical motivation. In this letter we try to provide an escape clause, based on the concept of an analog black hole.
We propose that the collapse of a stellar distribution can lead to non-singular outcomes if it has a time-evolving analogue black-hole structure. An analogue black hole, first proposed by Unruh <cit.>, is written using a static spherically symmetric solution of Einstein field equations in Painlevé-Gullstrand-Lemaître (PGL) form <cit.>. We construct a time-evolving PGL geometry to describe a gravitational collapse of sufficiently massive stellar distributions. There is no curvature singularity, no formation of a horizon, and therefore the dichotomy related to cosmic censorship is avoided. The geometry is better explained by a sonic flow separated into a downstream region and an upstream region. These two regions are characterized by supersonic and subsonic speeds of the flow, respectively (shown in Fig. <ref>). A fish flowing along the supersonic downstream can never send a sound signal to a second fish flowing along the upstream. In effect, the boundary of the two regions works like a sonic event horizon. Since the upstream region does not receive any sound across the horizon, this region acts like a `sonic dumb hole', an analogue black hole. If a river consists of irrotational fluid (density ρ, pressure p = p(ρ), velocity v) of negligible viscosity, it should obey the Navier-Stokes equations ∇× v = 0  , ∂ρ/∂ t + ∇· (ρv) = 0, ∂v/∂ t + v·∇v = -1/ρ∇p - ∇Φ. In a general relativistic scenario, Φ is equivalent to the gravitational potential. For fluctuations of the background flow as in ρ = ρ_0 + δρ and v = v_0 + ∇ϕ, the velocity fluctuation ϕ obeys <cit.> ∇_μ∇^μϕ = 1/√(-g)∂_μ ( √(-g) g^μν∂_νϕ ) = 0. The background fluctuations correspond to a motion written using the metric ds^2 = (c_s^2 - v_0^2) dt^2 + 2 v_0 dt dr - dr^2 - r^2 dΩ^2. This is the standard PGL metric describing a stationary, spherically symmetric black hole. A horizon develops when the fluid velocity is equal to the sound speed, i.e., v_0 = c_s. The analogy becomes fascinating once the Schwarzschild metric is written in this form ds^2 = dt_eff^2 - (dr + v_0 dt_eff)^2 - r^2 dΩ^2. v_0 turns out to be the Newtonian escape velocity for a spherical object of mass M, v_0 = (2GM/r)^1/2. An object that starts to fall radially inward, freely from infinity towards the black hole, records time as t_eff. The usual Schwarzschild time t is related to this time by t_eff = t + 2 (r r_s)^1/2 + r_s ln| (r^1/2 - r_s^1/2)/(r^1/2 + r_s^1/2)|. The PGL analog of a stationary black hole can be called a `River Model' (the name first coined in <cit.>). It comes from the picture of space flowing with the Newtonian escape velocity, radially inwards, through a flat background. An object defined in this metric is moving along with this flow, obeying the laws of special relativity. There is an event horizon whenever the infall velocity is equal to the speed of light. Beyond this horizon all objects are carried away towards the central singularity with an infall velocity greater than the speed of light. The illustration in Fig. <ref> is done by comparing a couple of fishes in this current with photons. The upstream region is the exterior region where the `photon-fishes' can still move against the flow. However, in the downstream region/interior, the inward flow is too fast (greater than the speed of light!) for any fish/photon to make a cross-over into the upstream and inevitably, they should fall towards a central singularity. Compared to the standard Newtonian picture contemplated in the works of Michell <cit.>, the river model is a non-conservative narrative.
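The relation between the Schwarzschild time and t_eff can be checked symbolically. Differentiating the t_eff(r) relation above gives dt_eff = dt + [v_0/(1-v_0^2)] dr, the same g'(r) = ζ/(1-ζ^2) used later for the exterior matching. The following minimal sympy sketch (with c = G = 1, r_s = 2GM, and the coordinate differentials treated as formal symbols) verifies that this substitution turns the PGL line element into the standard Schwarzschild form; it is an illustration of the algebra, not part of the original derivation.

```python
import sympy as sp

r, rs, dt, dr = sp.symbols('r r_s dt dr', positive=True)

v0 = sp.sqrt(rs / r)                 # Newtonian escape velocity, v_0 = (r_s/r)^(1/2)
dt_eff = dt + v0 / (1 - v0**2) * dr  # dt_eff = dt + g'(r) dr

# PGL line element (angular part omitted): ds^2 = dt_eff^2 - (dr + v_0 dt_eff)^2
ds2_pgl = dt_eff**2 - (dr + v0 * dt_eff)**2

# Standard Schwarzschild line element (angular part omitted)
ds2_schw = (1 - rs / r) * dt**2 - dr**2 / (1 - rs / r)

# Expected to reduce to 0, confirming the two forms describe the same geometry.
print(sp.simplify(sp.expand(ds2_pgl - ds2_schw)))
```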
The geometry of a PGL metric has generated some limited curiosity from time to time within the community of relativists <cit.>. However, Unruh's construction <cit.> is undoubtedly the most important of all as it inspired the foundation of `analog gravity' <cit.>. In an analog gravity framework one works with a fluid flowing with pre-assigned velocity and simulates dynamical evolutions in a general relativistic spacetimes. It remains one of the few ways to explore gravity experimentally near quantum scales, using sonic analogs. It has been discussed before that to admit a river analog (or a PGL form) a stationary black hole metric must be spatially flat at any fixed time, up to a conformal factor <cit.>. This conformal factor includes the velocity of the river flowing through the background flat space. However, these claims apply only for a static or stationary metric. It is not possible to construct a model of stellar collapse using this metric unless one can find a time-evolving analogue of the PGL metric, which, till date has never been prescribed. We construct such a metric and argue that it describes a `river model' of gravitational collapse. For a generic spherically symmetric metric we derive the transformation equations which the metric components should obey in order to have a PGL form. We find an exact solution, albeit a special case but nonsingular, which allows the collapse to either go on for an infinite time or forces a bounce and dispersal. We further discuss that for some reasonable initial radial profiles of the collapsing distribution the time-evolving metric components satisfy a Wormhole throat condition. In other words, a river-frame gravitational collapse can evolve into a Wormhole geometry. We can write a general spacetime metric to describe a spherical star as ds^2_- = A^2 dt^2 - B^2 dr_c^2 + r_c^2 C^2 dΩ^2. The interior of the spherical star contains a fluid with local anisotropy and heat flux, therefore, T_αβ=(ρ+p_t)u_αu_β-p_tg_αβ+ (p_r-p_t)χ_αχ_β+q_αu_β+q_βu_α. ρ, p_t and p_r are density, tangential and radial pressure and q^α=(0,q,0,0) is the radial heat flux. The four-velocity and the unit four-vector in radial direction follow usual normalizations. We introduce the transformation r_cC = r, such that dr_c = 1/C(dr-r_cdC) = 1/C{ dr - r_c(Ċdt + C'dr)}. A dot represents derivative with respect to t and a prime is derivative with respect to r. The transformation allows us to write the spherical metric in the following form ds^2_- = (A^2 - r^2B^2Ċ^2/C^4)dt^2 + (2rB^2Ċ/C^3 - 2r^2B^2ĊC'/C^4) dtdr - (B^2/C^2 - 2rB^2C'/C^3 + r^2B^2C'^2/C^4)dr^2 - r^2dΩ^2. Comparing the above metric with a generic PGL form ds^2 = (1-ζ^2)dt^2 ± 2ζ drdt - dr^2 - r^2 dΩ^2, the original metric components and ζ(r,t) should satisfy the following set of differential equations, ± 2ζ = 2rB^2Ċ/C^3 - 2r^2B^2ĊC'/C^4, 1-ζ^2 = A^2 - r^2B^2Ċ^2/C^4, B^2/C^2 - 2rB^2C'/C^3 + r^2B^2C'^2/C^4 = 1. We find a particular exact solution of these equations as follows A(r,t)^2 = [1 + α(r)^2β^2{(1 ±4r^5/2/α(r)^1/2) - (1 ± 2r^5/2/α(r)^1/2)^2(1-r/(α(r)^1/2 + 2r^1/2)^2) }][δ - g T(t)^n ], B(r,t)^2 = r α(r) T(t)^2 e^± 4 f(r), C(r,t)^2 = r^2 T(t)^2e^± 4 f(r),     f(r) = ∫r^3/2/α(r)^1/2dr, ζ(r,t) = ∓ 2α(r)^1/2β r^5/2(δ - gT^n)^1/2, T(t) = (δ/g)^1/n[1 - tanh {δ n^2/4(β t - C_1)^2}]^1/n. β, δ and g are positive parameters. α(r) is a positive but otherwise arbitrary function of r. 
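The qualitative behaviour of this solution is governed by the time factor T(t). The following minimal sketch evaluates the expression for T(t) given above for both signs of n; the parameter values β = δ = g = 1, C_1 = 0 and n = ±2 are illustrative choices only, not values quoted in the text.

```python
import numpy as np

# Illustrative parameter values only; the solution requires beta, delta, g > 0.
beta, delta, g, C1 = 1.0, 1.0, 1.0, 0.0

def T(t, n):
    """Time factor T(t) of the exact solution quoted above."""
    bracket = 1.0 - np.tanh(delta * n**2 / 4.0 * (beta * t - C1)**2)
    return (delta / g) ** (1.0 / n) * bracket ** (1.0 / n)

print("    t    T(t), n=+2    T(t), n=-2")
for ti in np.linspace(-4.0, 4.0, 9):
    print(f"{ti:5.1f}  {T(ti, 2.0):12.4e}  {T(ti, -2.0):12.4e}")

# For n > 0 the factor T(t) decays monotonically towards zero for t > C1/beta,
# i.e. the areal radius shrinks without ever reaching zero at finite time;
# for n < 0 it has a finite minimum at t = C1/beta and grows again afterwards.
```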
For a constant α(r) = α_0, the function f(r) is simplified and the the metric coefficients B and C are written as B(r,t)^2 = α_0 r T(t)^2 e^±8/5α_0^-1/2r^5/2, C(r,t)^2 = r^2 T(t)^2 e^±8/5α_0^-1/2r^5/2. It is evident form Eq. (<ref>) that for a real metric function ζ, (δ - gT^n) > 0. Moreover, Eq. (<ref>) indicates that the radius of the two-sphere C(r,t)^2 can only be zero when the hyperbolic tangent function tends to 1. Solutions for both n > 0 and n < 0 are allowed, however, they are of different physical nature. A n > 0 case shows a forever collapsing spherical star reaching a zero proper volume only at t →∞. On the other hand, a n < 0 case is a collapse and bounce case, without any formation of zero proper volume singularity. We assume the exterior region of the collapsing star to be a Schwarzschild metric ds^2 = (1-ζ^2)dt_s^2 - dr^2/1-ζ^2 - r^2 dΩ^2  , ζ = ±√(R/r),   R = 2m. Taking t_s = t + g(r) the metric can be transformed into ds^2 = (1-ζ^2)dt^2 ± 2ζ drdt - dr^2 - r^2 dΩ^2, which is a PGL-compatible form, provided g^' = ±ζ/1-ζ^2 ,  g = ∓ R ( 2√(r/R) + ln√(r)-√(R)/√(r)-√(R)). Now, both the interior and the exterior of the collapsing sphere are in a single metric form Eq. (<ref>) with a single metric function ζ = {[ ∓ 2α^1/2β r^5/2(δ - gT^n)^1/2,; - √(R/r). ]. We refer to this metric and coordinate transformations as a generalized PGL form. The interior of such a collapsing star can describe some interesting geometric features. To explore this we construct an embedding geometry for a generic metric of pattern ds^2 = T(t)^2 [A(r)^2dt^2 - B(r)^2dr^2 - r^2dΩ^2], of which the generalized PGL metric is a special case. On a the spatial slice of constant t and θ = π/2 the metric looks like dl^2 = B^2T^2dr^2 + r^2T^2dϕ^2. dl^2 is the metric on a surface of revolution ρ = ρ(z) embedded in a three-dimensional space with an Euclidean metric dl^2 = dz^2 + dρ^2 + ρ^2 dϕ^2, where z, ρ and ϕ are cylindrical coordinates. Comparing with Eq. (<ref>) ρ^2 = r^2T^2,    dz^2 + dρ^2 = B^2T^2dr^2. For a constant t, dρ = T dr and dρ/dz = 1/(B^2-1)^1/2,   d^2ρ/dz^2 = -B'B/T(B^2-1)^2. This formulation leads us to check if a Wormhole throat condition is satisfied during the collapse or not. A throat is a two-dimensional space-like surface having the projected shape of a sphere, located at a certain minima r = r_w <cit.>. On an embedding diagram, this sphere of r = r_w is represented by a circle of radius ρ on the surface of revolution. An usually tube-shaped wormhole can have a throat acting as pathway between two universes. Radial null rays converge and become parallel at a wormhole throat before eventually diverging on the other side. Naturally, at the throat the radius of the circle ρ(z) has a minimum. Conditions for a formation of this minimum is dρ/dz|_r_w = 0,   d^2ρ/dz^2|_r_w > 0. Using Eqs. (<ref>), (<ref>) and (<ref>), for a constant α = α_0, the first wormhole throat condition Eq. (<ref>) becomes dρ/dz|_r_w = 1/α_0 r T(t)^2 e^±8/5α_0^-1/2r^5/2 - 1|_r_w. It can only be zero at r_w→∞, which means a wormhole is never developed in finite time. However, for the more generic case α = α(r), dρ/dz|_r_w = 1/α(r) T(t)^2 e^± 4 f(r) - 1|_r_w, and there is a real possibility that the first wormhole condition is satisfied for a real and finite value of r. It depends on the functional form of k(r) or more precisely, α(r). We demonstrate this through an example. Recently it has been proved that a wide class of Wormhole solutions can develop out of a gravitational collapse of imperfect fluid <cit.>. 
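The difference between the constant and the r-dependent cases can be made concrete with a few lines of code. In the sketch below the two choices of α are arbitrary illustrations (they are not taken from the paper): a constant α_0 = 2 and a hypothetical α(r) that diverges at r = 0.5. T(t)^2 is frozen to 1 at one instant and only the upper sign of e^{±4f(r)} is used; only the first throat condition is examined.

```python
import numpy as np
from scipy.integrate import quad

T2 = 1.0                                     # freeze T(t)^2 = 1 at one instant

def f(r, alpha):
    """f(r) = integral of x^(3/2) / alpha(x)^(1/2) from 0 to r (upper sign branch)."""
    return quad(lambda x: x**1.5 / np.sqrt(alpha(x)), 0.0, r)[0]

def slope(r, alpha):
    """Embedding slope drho/dz = 1 / (alpha(r) T^2 e^{4 f(r)} - 1), as written above."""
    return 1.0 / (alpha(r) * T2 * np.exp(4.0 * f(r, alpha)) - 1.0)

alpha_const = lambda r: 2.0                        # constant alpha_0
alpha_pole  = lambda r: 2.0 / (1.0 - 2.0 * r)**2   # hypothetical, diverges at r = 0.5

for r in (0.10, 0.30, 0.45, 0.49):
    print(f"r = {r:4.2f}   constant alpha: {slope(r, alpha_const):.3e}"
          f"   diverging alpha: {slope(r, alpha_pole):.3e}")

# With the constant alpha the slope stays of order one over this range and only
# tends to zero asymptotically, whereas the diverging alpha(r) drives it to zero
# as r -> 0.5, i.e. the first throat condition can be met at a finite radius,
# as argued in the text.
```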
One such example is a Simpson-Visser metric, which, for different values of a parameter a can represent different geometric structures (a Schwarzschild metric for a = 0, a traversable wormhole for a > 2m, one-way wormhole for a = 2m; see <cit.> for more detailed discussions). In a coordinate range r∈(r_w,+∞), where r_w is the wormhole throat the metric can be written as ds^2 = (1-2m/r)dt^2 - dr^2/(1 - a^2/r^2)(1-2m/r) - r^2 dΩ^2. If we expect the river model of gravitational collapse to produce a Simpson-Visser wormhole, the metrics should be comparable on the spatial slice of constant t and θ = π/2, i.e., coefficients of g_11 should match, leading to α'/α + 1/r± 2r^3/2α^-1/2 + 2a^2/r^3/(1-a^2/r^2) - 2m/r^2/(1-2m/r). If a = 0, the collapse should end up in a Schwarzschild black hole. In this limit, it is possible to solve Eq. (<ref>) analytically and write α(r)|_BH = 1/36 (2m-r)^2[36 m^6 r-12 m^5 r^2-59 m^4 r^3 +34 m^3 r^4 + 21 m^2 r^5 - 20 m r^6 + 4 r^7 + 12 m^11/2 r^3/2 √(2 m-r/m) sin^-1(√(r)/√(2)√(m)) - 72 m^13/2√(r)√(2m-r/m) sin^-1(√(r)/√(2)√(m)) + 72 c_1 m^7/2√(2 m-r/m)√(2m-r) sin^-1(√(r)/√(2)√(m)) - 72 c_1 m^3 √(r)√(2m-r) + 12 c_1 m^2 r^3/2 √(2m-r) + 60 c_1 m r^5/2√(2m-r) - 24 c_1 r^7/2√(2m-r) + 72 c_1^2 m - 36 c_1^2 r + 60 m^9/2 r^5/2√(2 m-r/m) sin^-1(√(r)/√(2)√(m)) -24 m^7/2 r^7/2√(2m-r/m) sin^-1(√(r)/√(2)√(m)) + 72 m^7 sin^-1(√(r)/√(2)√(m))^2 -36 m^6 r sin^-1(√(r)/√(2)√(m))^2] For all a ≠ 0, Eq. (<ref>) is solved numerically to show the required form of α(r) for which the collapse ends up forming a wormhole. For the two signs of e^± 4 f(r) and different values of mass parameter m the numerical solutions vary slightly. However, they produce a qualitatively similar evolution for different values of a. We plot the two evolutions in Fig. <ref>. To discuss the nature of matter distribution within the collapsing star, we write the nonzero components of the Einstein tensor as G^0_ 0 = -2ζζ^'/r - ζ^2/r^2,   G^1_ 1 = -2ζζ^'/r - ζ^2/r^2 - 2ζ̇/r, G^1_ 0 = 2ζζ̇/r,    G^2_ 2 = G^3_ 3 = -ζ̇+2ζζ^'/r - ζ̇^' - ζζ^'' - ζ^' 2. The stress-energy tensor come into the setup via the field equations G^α_ β = -8π GT^α_ β. Continuity of the energy density can be ensured through the requirement ρ = T^0_ 0 = {[ 0, exterior,; 3αβ^2/π G r^3 [δ - g T(t)^n], interior, ]. Moreover, to ensure a continuity of ζ as in Eq. (<ref>) across the fluid surface r^3/2 + 1/(2β)^1/2(R/α)^1/4 (δ - gT^n)^-1/4 = 0. For a collapsing spherical star that starts shrinking somewhere in negative times and reaches its end state around t ∼ 0 we approximate Eq. (<ref>) by keeping only the terms linear in t and expanding in series. The approximation leads to a simplified continuity equation r^3/2 + 5C_1R^1/4/16β^3/2(αδ)^1/4{ t-2/5β C_1(δ n^2 C_1^2/4 - 5)} = 0. Using the metric Eq. (<ref>), it can be proved that the geodesic equation for a zero energy falling particle in the Schwarzschild exterior region is r^3/2 + 3√(R)/2 (t-t_0) = 0. This is easily comparable with Eq. (<ref>). If α(r) and the parameters satisfy the following constraints R = 0.002 C_1^4/αδβ^6 ,  t_0 = 2/5β C_1(δ n^2 C_1^2/4 - 5), they match exactly. Therefore, in this parameter range a freely falling particle will hover at the infalling surface of the spherical star. If there is a stress in the surface layer, the internal pressure of the fluid can be balanced and the star is stable. Due to a surface tension the surface can fall faster than a freely falling particle. 
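For a ≠ 0 the matching relation above (read as set equal to zero) is an ordinary differential equation for α(r) that can be integrated with standard tools, along the lines of the numerical solution mentioned in the text. The sketch below is only a minimal illustration: the values m = 1, a = 2.5 (so that a > 2m), the lower sign choice, the starting radius and the initial value α = 1 are all assumptions, not parameters quoted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters and initial data; the sign of the 2 r^{3/2} alpha^{-1/2}
# term is taken as the lower (minus) branch.
m, a, sign = 1.0, 2.5, -1.0

def rhs(r, y):
    alpha = y[0]
    dalpha = -alpha * (1.0 / r
                       + sign * 2.0 * r**1.5 / np.sqrt(alpha)
                       + (2.0 * a**2 / r**3) / (1.0 - a**2 / r**2)
                       - (2.0 * m / r**2) / (1.0 - 2.0 * m / r))
    return [dalpha]

# Integrate outwards, starting safely outside both r = a and r = 2m.
sol = solve_ivp(rhs, (3.0, 10.0), [1.0], dense_output=True, rtol=1e-8)
for r in (3.0, 5.0, 8.0, 10.0):
    print(f"r = {r:5.1f}   alpha(r) = {sol.sol(r)[0]:.4e}")
```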
Around the time the star reaches its minimum accessible volume, irrespective of an onset of it bounce/wormhole throat formation or a forever collapse, the surface tension should become negligible. As a result the system can behave like an idealized pressureless fluid. It is expected under normal circumstances that the energy-momentum tensor components will satisfy certain energy conditions during the gravitational collapse and maintain a notion of `locally positive energy density'. We focus in particular, on the Null Energy Condition (NEC) which has a more generic root in the Null Convergence Condition <cit.>. It has an algebraic form written as NEC →|ρ + p_r|- 2 | q|≥ 0. We plot the NEC as a function of time, for different values of r, i.e., distance from the center of the sphere. r = 1 is taken as the boundary of the star. Fig. <ref> shows the evolution of NEC with time as the collapsing fluid bounces and disperses away all clustered matter. There is a clear violation of NEC at the critical point as the bounce starts. For this plot the analytical solution of α(r) as in Eq. (<ref>) is used. In Fig. <ref>, we plot the NEC profile as the collapsing sphere develops a wormhole throat, using numerical solution of Eq. (<ref>). It is seen that not only the NEC is violated, there is a curious onset of periodicity during the wormhole throat formation. In summary, this letter provides an alternative picture of gravitational collapse using the unconventional set of coordinates, namely, the generalized PGL metric. We name it a river model, owing to the illustration of space flowing with Newtonian escape velocity, much like a river, through flat background. While the static PGL metric is known for almost a century, we construct a new time-evolving PGL geometry describing gravitational collapse of a spherical stellar body. We prove that a spherically symmetric metric should satisfy a set of transformation equations in order to have a PGL form and find a unique, exact solution to this set. This solution can describe three possible non-singular outcomes, depending on initial conditions. The first two are either a collapse for infinite time without any zero proper volume, or a bounce and eventual dispersal of all the clustered matter. It is curious to find that there is also a clear third alternative, where the collapsing body forms a wormhole throat at some finite value of the radial coordinate. This is proved by exploring the interior geometry on an Euclidean metric embedded in a three-dimensional space. As an example, we discuss the requirement for the formation of a Simpson-Visser wormhole geometry, which falls within a wider class of static geometries behaving as Black Hole mimickers <cit.>. Therefore, the consequence of a gravitational collapse in the river-frame is quite intriguing; it implies that just a simple requirement to allow a consistent river metric/PGL form ensures a resolution of the singularity problem even within the realms of classical general relativity. The original static metric and the dynamic counterpart, both are motivated from the analogy that a sonic barrier in fluid dynamics can behave like a general relativistic event horizon <cit.>. The static geometry has remarkably led to laboratory simulations of sonic black holes in some very realistic states of matter, such as Bose-Einstein Condensates <cit.>. 
It is our optimism that the time-evolving twin will also be of immense use in laboratory simulations of gravitational collapse, which is perhaps the only phenomena through which a classical stellar body can evolve naturally towards a quantum scale. Furthermore, using the exact metric prescribed in this letter, simulating the sonic analogue of a wormhole (not-so-dumb hole) for the very first time, also seems plausible for a reasonably well-defined fluid flow. 99 topo R. Durrer, M. Kunz and A. Melchiorri, Phys. Rept. 364 : 1 (2002). os B. Datt, Z. Phys. 108, 314 (1938) ; J. R. Oppenheimer and H. S. Snyder, Phys. Rev. 56, 455 (1939). hawking S. W. Hawking, Nature 248, 30 (1974); Comm. Math. Phys. 43, 199 (1975) ; S. Giovanazzi, Phys. Rev. Lett. 94, 061302 (2005) ; J. Sonner and A. G. Green, Phys. Rev. Lett. 109, 091601 (2012) ; E. E. Flanagan, Phys. Rev. Lett. 127, 041301 (2021) ; I. Agullo, A. J. Brady and D. Kranas, Phys. Rev. Lett. 128, 091301 (2022). penrose R. Penrose, Phys. Rev. Lett. 14, 57 (1965) ; R. Penrose, Nuovo Cimento Rivista Serie 1 (1969). censor T. Crisford and J. E. Santos, Phys. Rev. Lett. 118, 181101 (2017) ; A. Bonanno, B. Koch and A. Platania, Class. Quant. Grav. 34, 095012 (2017) ; W. E. East, Phys. Rev. Lett. 122, 231103 (2019) ; F. Corelli, M. de Amicis, T. Ikeda and P. Pani, Phys. Rev. Lett. 130, 091501 (2023). branden R. Brandenberger, L. Heisenberg and J. Robnik, JHEP, 2021, 90 (2021). hayward S. A. Hayward, Phys. Rev. Lett. 96, 031103 (2006). bojo M. Bojowald, R. Goswami, R. Maartens and P. Singh, Phys. Rev. Lett. 95, 091302 (2005). unruh W. G. Unruh, Phys. Rev. Lett. 46, 1351 (1981). painleve A. Gullstrand, Arkiv. Mat. Astron. Fys. 16(8), 1 (1922). gulstrand P. Painleve, C. R. Acad. Sci. (Paris) 173, 677 (1921). hamilton A. J. S. Hamilton and J. P. Lisle, Am. J. Phys. 76 : 519 (2008). michell J. Michell, Phil. Trans. Roy. Soc. London 74, 35 (1784). review M. Visser, Class. Quant. Grav. 15, 1767 (1998) ; R. Schutzhold and W. G. Unruh, Phys. Rev. Lett. 95, 031301 (2005) ; H. Lu, J. Mei and C. N. Pope, Phys. Rev. Lett. 103, 091301 (2009) ; Z. Liu and M. Tegmark, Phys. Rev. Lett. 128, 180201 (2022). unruh1 W. G. Unruh, Phys. Rev. D 14, 1351 (1981). barcelo C. Barcelo, S. Liberati and M. Visser, gr-qc/0505065. others S. Liberati, M. Visser and S. Weinfurtner, Phys. Rev. Lett. 96, 151301 (2006) ; S. Weinfurtner, E. W. Tedford, M. C. J. Penrice, W. G. Unruh and G. A. Lawrence, Phys. Rev. Lett. 106, 021302 (2011) ; G. Krein, G. Menezes and N. F. Svaiter, Phys. Rev. Lett. 105, 131301 (2010) ; T. Torres, S. Patrick, M. Richartz and S. Weinfurtner, Phys. Rev. Lett. 125, 011301 (2020) ; S. Patrick, H. Goodhew, C. Gooding and S. Weinfurtner, Phys. Rev. Lett. 126, 041105 (2021). garat A. Garat and R. H. Price, Phys. Rev. D 61, 124011 (2000). doran C. Doran, Phys. Rev. D 61, 067503 (2000). throat A. Einstein and N. Rosen, Ann. Phys. (N.Y.) 2, 242 (1935) ; M. S. Morris and K. S. Thorne, Am. J. Phys. 56, 395 (1988) ; H. Ellis, J. Math. Phys. (N.Y.) 14, 104 (1973) ; K. A. Bronnikov, Acta Phys. Pol. B 4, 251 (1973) ; S. Capozziello, F. S. N. Lobo, and J. P. Mimoso, Phys. Rev. D 91, 124019 (2015). scsk S. Chakrabarti and S. Kar, Phys. Rev. D 104, 024071 (2021). simpson A. Simpson and M. Visser, J. Cosmol. Astropart. Phys. 02 042 (2019). ncc C. W. Misner and J. A. Wheeler, Ann. Phys. (N.Y.) 2, 525 (1957) ; C. A. Kolassis, N. O. Santos, and D. Tsoubelis, Class. Quant. Gravit. 5, 1329 (1988). mimic R. Shaikh, Mon. Not. Roy. Astron. Soc., 523(1), 375 (2023). lahav O. Lahav, A. Itah, A. 
Blumkin, C. Gordon, S. Rinott, A. Zayats and J. Steinhauer, Phys. Rev. Lett. 105, 240401 (2010).
http://arxiv.org/abs/2307.01120v1
20230703154915
musif: a Python package for symbolic music feature extraction
[ "Ana Llorens", "Federico Simonetta", "Martín Serrano", "Álvaro Torrente" ]
cs.SD
[ "cs.SD", "cs.MM", "eess.AS" ]
Design, fabrication, and characterization of electrostatic comb-drive actuators for nanoelectromechanical silicon photonics Søren Stobbe August 1, 2023 =========================================================================================================================== In this work, we introduce , a Python package that facilitates the automatic extraction of features from symbolic music scores. The package includes the implementation of a large number of features, which have been developed by a team of experts in musicology, music theory, statistics, and computer science. Additionally, the package allows for the easy creation of custom features using commonly available Python libraries. is primarily geared towards processing high-quality musicological data encoded in MusicXML format, but also supports other formats commonly used in music information retrieval tasks, including MIDI, MEI, Kern, and others. We provide comprehensive documentation and tutorials to aid in the extension of the framework and to facilitate the introduction of new and inexperienced users to its usage. § INTRODUCTION The abstraction represented in music scores, which are symbolic representations of music, has been shown to be highly relevant for both cognitive and musicological studies. In cognitive studies, the abstraction process used by human music cognition to categorize sound is important to understand how we identify and perceive different musical aspects, such as timbres, pitches, durations, and rhythms <cit.>. In musicological studies, the abstraction represented in music scores is important as it provides a direct source of information to understand how the music was constructed. Throughout history, these aspects have been encoded in different forms, with common Western music notation being the most widely used in the Western world for centuries. Therefore, music notation is considered of paramount importance in the field of musicology. In the field of sound and music computing, however, research has primarily focused on analyzing music in the audio domain, while other modalities such as images and scores have received less attention <cit.>. Researchers interested in applying machine learning methods to the analysis of music scores will likely seek methods for representing them in a suitable way. In the context of modern deep learning and machine learning, two main approaches have emerged: feature learning <cit.> and feature extraction <cit.>. Feature learning – or representation learning – involves using algorithms to learn the features from the data in a way that is optimal to the specific statistical inference problem and is mainly applied with Neural Networks <cit.>; feature extraction, instead, involves the computation of generic and hand-crafted features, needing further successive steps such as feature selection and dimensionality reduction. Both approaches have their own set of advantages and disadvantages and the choice of which approach to use will depend on the specific task and the available data. Here we focus on the latter exclusively. Feature extraction has widely been used in various machine learning tasks and has been partially successful in music computing <cit.>. However, a major drawback is the effort and time required to craft useful features for a specific task. To address this issue, researchers have previously proposed software tools that assist in extracting features from music, such as audio files and scores. 
Additionally, with the advancement of modern computer languages such as Python and JavaScript, the implementation of new features has become easier and more accessible. Musicologists may also resort to feature extraction, especially in the context of the so-called corpus studies. In fact, existing software for symbolic music feature extraction – e.g. jSymbolic <cit.> – was partly designed to help musicologists obtain the data they required in a fast and accurate way. This is especially important because the computation of the features could hardly be achieved by the manual work of musicologists, who, as of today, devote time to manual annotations such as harmony <cit.> and cadence <cit.>. Examples of such feature-driven, computational musicology can be found in studies of musical form <cit.>, harmony <cit.>, and compositional styles <cit.>, among others. In this work, we introduce a software tool named , which offers a comprehensive collection of features that are extracted from various file formats. The tool is designed to be easily extensible using the Python programming language and is specifically tailored for 18th-century opera arias, although it has been tested on a variety of other repertoires, including Renaissance and Pop music. Furthermore, in contrast to previous software <cit.>, is developed with a focus on musicological studies and is thus geared towards high-quality music datasets, addressing the issue of limited data availability that is commonly encountered in feature learning methods. To aid in its usage, is accompanied by detailed software documentation [<https://musif.didone.eu>]. This documentation provides adequate information for both novice and advanced users, enabling them to take full advantage of the tool and add new features and file formats as needed. The project is developed using open source methods and adopts GitHub to manage issues and pull requests, as well as to distribute the source code[<https://github.com/DIDONEproject/musif>]. § DESIGN PRINCIPLES The development of the was guided by four key design principles. The foremost principle was the ability to customize and extend the framework to meet the user's specific requirements. This includes the capability to alter the feature extraction process by introducing new features coded by the user and by modifying the existing pipeline. The second principle that guided the development was to ensure the usability of the software by individuals with minimal technical expertise, with musicologists as the primary target audience. This principle mainly entailed providing a user-friendly interface for the entire feature extraction process, with default settings that are deemed optimal. Additionally, comprehensive documentation was produced to aid novice users in understanding the feature extraction process of symbolic music. As musicologists were identified as the primary target audience, special attention was paid to the file types supported by the system. Specifically, an effort was made to find a combination of file formats that were both easy to create and able to represent musicological annotations, which could be used as sources for feature extraction. The final principle that underpins the entire structure of is its suitability for big data analysis. Specifically, measures were taken to ensure that the framework was computationally efficient on commercially available computers. 
§ IMPLEMENTATION [frame=single,framesep=10pt,breaklines,breakafter=d,fontsize=]python from musif.extract.extract import FeaturesExtractor from musif.process.processor import DataProcessor features = FeaturesExtractor( # here we use `None`, but it could be the path to a YAML file containing # specifications None, # the options below override the YAML file if it is provided xml_dir="data_notation", musescore_dir="data_harmony", basic_modules=["scoring"], features=["core", "ambitus", "interval", "tempo", "density", "texture", "lyrics", "scale", "key", "dynamics", "rhythm"] ).extract() # For the DataProcessor, the arguments are the extracted table and the path to a YAML # file # As before, the YAML file can be overridden by variadic arguments processed_features = DataProcessor(features, None).process().data # the output is a pandas DataFrame! Example of usage feature extraction with default options and stock features §.§ General pipeline The implementation of is mainly based on music21 <cit.> and methodically divided into two primary stages, both of which are highly configurable. Fig. <ref> shows a flowchart of the general pipeline. The initial stage pertains to the actual extraction of features, during which a substantial number of features are derived from the data. Among these features, some are solely designed for the calculation of “second-order” features, which are derived from the primary ones. For instance, the number of notes on a score may not hold inherent significance, but it acquires meaning when considered in relation to the total length of the score. Therefore, an additional operation is required to compute the ratio between the number of notes and features that denote the total duration of the score, such as the total number of beats. As a result of this, certain "first-order" features may not be relevant for the specific task at hand. To address this issue, we have implemented an additional step that we refer to as “post-processing”. In this stage, certain “first-order” features are eliminated, while others are aggregated according to the user specifications. For example, to lower the overall number of features and attain a more succinct representation, the user may choose to aggregate features that originate from similar instruments, such as strings, by utilizing statistical measures such as the mean, the variance, and other statistical moments. Another crucial task accomplished during post-processing is the standardization of representation for missing data, such as NaN values or empty strings. The aforementioned two steps correspond to two Python objects, namely, the and the . Both of these objects take as input an extensible configuration, which can be expressed in various ways, namely variadic python arguments in the class constructor and/or a YAML file. The configuration of the object includes the path to the data, the features that should be extracted, the paths or objects containing custom features, and other similar requirements. For its part, the configuration of the object offers the flexibility to specify the columns that should be aggregated or removed, as well as the columns in which NaN values should be replaced with a default value, such as zero. The outcome of the entire process is a tabular representation, with one column per feature and one row per musical score. Optionally, scores can be analyzed using moving windows, in which case the output table will have one row for each window. 
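To make this tabular output concrete before turning to the window options, the following sketch shows the kind of post-processing described above applied to the resulting pandas DataFrame. It is not part of the package itself, and all column names are hypothetical placeholders used only for illustration.

```python
import pandas as pd

# Hypothetical extraction output: one row per score, one column per feature.
df = pd.DataFrame({
    "FileName":         ["aria_01.xml", "aria_02.xml"],
    "vnI_Notes":        [812, 640],      # placeholder per-part note counts
    "vnII_Notes":       [790, 615],
    "Measures":         [120, 96],
    "vnI_MeanInterval": [2.1, None],     # a missing value to be standardised
})

# "Second-order" feature: relate a raw count to the length of the piece.
df["vnI_NotesPerMeasure"] = df["vnI_Notes"] / df["Measures"]

# Aggregate similar parts (here the two violins) into one family-level column.
df["str_Notes_mean"] = df[["vnI_Notes", "vnII_Notes"]].mean(axis=1)

# Standardise missing data, as the post-processing step does on selected columns.
df["vnI_MeanInterval"] = df["vnI_MeanInterval"].fillna(0.0)

print(df.drop(columns=["vnI_Notes", "vnII_Notes"]))
```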
When using windows, the window size and overlap can be specified as the number of measures, as shown in Fig. <ref>. A sample code that demonstrates the usage of the tool is provided in Listing <ref>. §.§ File formats Given that our primary objective was to develop a software tool for musicological applications, it was imperative to support file formats that are easily usable in musicological analysis. As such, we carefully considered file formats such as MusicXML, MEI, and IEEE 1599. These file formats can represent common Western music notation with a high degree of detail and have been utilized for both musicological and MIR tasks. However, it was determined that only MusicXML is fully supported by user-end graphical interfaces. The requirement for users to possess both musicological training and the ability to effectively utilize advanced software for editing large XML files is a rare combination, and, as such, it was not deemed a viable inclusion in the design of the system. Moreover, certain features implemented by are derived from functional, Roman-numeral harmonic analysis, which cannot be represented in the standard format of MusicXML. To solve this issue, we have adopted the MuseScore file format, in line with previous works in this field <cit.>. Overall, the recommended file formats for the system are MusicXML for notation parsing and MuseScore for harmonic annotations. However, if only MuseScore files are available, the software can be utilized to generate the necessary MusicXML files. Additionally, alternative file formats may be employed in place of MusicXML by leveraging the library for parsing notation files. This approach supports a comprehensive array of file formats. Furthermore, any file format supported by MuseScore can be utilized through automatic conversion to MusicXML. This pipeline is particularly recommended for extracting features from MIDI files. However, the parsing approach adopted in this system may be relatively slow when working with a large number of files. To mitigate this issue, a caching system has been implemented in order to save to disk any property, function, or method result that originates from objects. This approach has been tested and has demonstrated a significant improvement in processing speed, with a 2 to 3 times increase in speed observed when cached files are used. This caching system is particularly useful when designing or debugging feature extractions on a large number of files, as it allows for more efficient and expedient processing. §.§ Customization To facilitate customization of the feature extraction process, three main tools are available. These tools allow for more flexibility and precision in the feature extraction process, enabling users to tailor the process to their specific needs and requirements. These tools are further described in the subsequent list. * Custom features: The user can add custom features by developing two simple functions: one to extract features from each individual part in the score, and another to extract features from the entire score. This second function can optionally utilize the features extracted from the individual parts. Additionally, the user can specify the extraction order and feature dependencies, allowing for the use of previously extracted features in the computation of newer features. The implementation of these custom features can be easily accomplished using the Python library. * Hooks: Hooks are user-provided functions that are called at specific stages of the extraction process. 
In the current version of , only one type of hook is possible, namely just after the parsing of the input files is completed and just before the caching mechanism is initialized. The user can provide a list of functions that accept the parsed score as input and that are run before the caching mechanism is initialized. When using the cached files, these hooks will no longer be run. This hook is particularly useful for modifying the input scores before caching, such as deleting or modifying unsupported notation elements from objects, thus mitigating the constraints of the caching mechanism, which only allows read-only operations on the scores. * Python mechanisms: The Python programming language offers a range of advanced methods for modifying and extending existing software. As is fully implemented in pure Python, these methods are fully applicable. They include, but are not limited to, class inheritance, method and property overriding, and type casting. § STOCK FEATURES is distributed with a wide variety of features already implemented. These sets of features can be selected for extraction using the 's constructor arguments – see Listing <ref> –, while the can be utilized for further refining the desired features. Each set corresponds to a specific Python sub-package. The total number of features varies based on the instrumentation used in the score and is usually between 500 feature values for simple monophonic scores and more than 10,000 feature values for orchestral scores. In this presentation, we will provide a brief summary of each of these modules. For those who wish to carefully select features, more detailed information can be found in the online documentation, including pre-made Python regular expressions that can be used to easily select the desired features. In general, all the features were designed to be meaningful for musicologists and music theorists, giving value to studies attempting to explain statistical results on the basis of the features. The modular structure of the features also allows researchers to conveniently focus their analysis on only certain aspects of the music. Here, we will use the word sound to refer to a specific timbre – e.g. violin – which can be repeated multiple times in the score – e.g. violin I and violin II. Moreover, we will use family to refer to a family of instruments – e.g. strings, voices, brasses, and so on. The stock feature modules available in are as follows: * Core: These features are essential for the identification of music scores and for subsequent elaboration. They are always required and include the total number of measures and notes, as well as the number of measures containing notes and their averages for each sound or part and for each family and/or score. Other examples of such features include the filename of the score, the time signature, and the key signature. * Scoring: This module computes features that are related to the instrumentation and voices used in the score. Examples of features in this module include the instruments, families, and parts present in the score, as well as the number of parts for each instrument and family in the score. This module can be used to get a better understanding of the orchestration used in the composition. * Key: This module computes features that are related to the key signature and tonality, i.e., the key, of the piece. Examples of features in this module include the Krumhansl-Schmuckler tonality estimation <cit.>, the key signature, and the mode (major or minor). 
This module allows for analyzing the underlying tonal system used in the composition. * Tempo: This module computes features that are related to the tempo marking on the score. It should be noted that since some features depend on the terminology used by the composer for the tempo indication, some of these features may not be reliable for all repertoires. In fact, as the composers' marking need not be expressed quantitatively – it is actually more typical in some repertoires to have just a verbal indication – the numerical values extracted by ultimately depend on the BPM value given during the engraving process, if available. * Density: These features relate the number of notes with respect to the total number of measures, as well as with respect to the total number of measures that contain sound, for a single part, sound, or family. This module provides insights into the density of the sound in the composition and allows comparing the activity level of different parts or families in the score. * Harmony: This is one of the largest feature modules; it computes features based on the harmonic annotations provided in the MuseScore files according to a previous standard <cit.>. Examples of these features include the number of harmonic annotations, the number of chords performing the tonic, dominant, and sub-dominant functions, the harmonic rhythm – i.e. the rate of harmonic changes in relation to the number of beats or measures –, as well as features related to modulations annotated in the MuseScore files. This module can be used to get a better understanding of the harmonic structure of the composition and to analyze the harmonic progressions used in the composition. * Rhythm: This module computes features related to the note durations and to particular rhythmic figures, such as dotted and double-dotted rhythms. Examples of features in this module include the average note duration and the frequency of particular rhythmic figures. This module analyzes the rhythmic structure of the composition and the rhythmic patterns used in it. * Scale: This module computes features related to specific melodic degrees with respect to the main key of the score, as computed in the key module, and to the local key, as provided in the MuseScore harmonic annotations. Examples of features in this module include the frequency of specific scale degrees in a given part. * Dynamics: This module computes features related to the distribution of dynamic markings across the score, by assigning numerical values to each dynamic marking according to their corresponding intensity. As is the case with tempo, the specific numerical value of a given dynamic marking is assigned during the engraving process, with some software assigning default values that the engraver may need to modify depending on the notation conventions. Similarly to other features, this module may not be completely generalizable to some repertoires, as the interpretation of dynamic markings can vary across different compositions and styles, or even be completely absent. Examples of features in this module include the frequency of specific dynamic markings, the average dynamic level, and the distribution of dynamic markings across the score. This module extracts information about the expressivity of the composition and analyzes the use of dynamic contrasts in it. * Ambitus: This module computes the ambitus, or melodic range, of the piece in semitones, for the whole piece as well as for each individual part, sound, or family. 
It also computes the lowest and highest pitches and the note names thereof. * Melody: This module computes an extensive number of features related to the distribution and types of melodic intervals for each part, voice, sound, and family. This is the largest set of features within . Examples of features in this module include the frequency of specific interval types, the distribution of interval sizes, and the proportion of ascending and descending intervals. This module provides insights into the melodic structure of the composition by analyzing the use of specific intervals in it. * Lyrics: This module considers the alignment between lyrics, if available, and the notes and computes features related to their distribution. Examples of features in this module include the total number of syllables in each vocal part, the average number of notes per syllable, and the proportion of measures that contain notes for each vocal part in the score. This module can facilitate a more profound comprehension of the relationships between lyrics and music in the composition. * Texture: This module computes the ratio of the number of notes between two parts, considering all the possible pairs of parts. This feature can provide insight into the relative density and activity level of different parts in the score and can be used to analyze the texture of the composition. § DISCUSSION AND FUTURE WORKS This work presents the module to the scientific community as a tool for the extraction of features from symbolic music scores. It is designed with a focus on extensibility and customization, while also providing good defaults for the novice user and supporting musicologically-curated datasets. The module is implemented in Python, and it provides a comprehensive set of features covering various aspects of music scores, including harmony, rhythm, melody, and many more. The modular structure of the makes it easy to use and customize according to the user's needs. In comparison to existing software such as  <cit.> and  <cit.>, offers a significantly larger number of features, approximately 2 times larger. Additionally, computes features based on pure MIDI encoding, with only 2 features based on the MEI format. This is an essential aspect for musicological studies as MIDI, although commonly used in the MIP field, is not capable of representing various characteristics of music notation, such as alterations, key signatures, rhythmic and dynamic annotations, chords, and lyrics. already implements several features based on its powerful parsing engine, which allows it to take full advantage of MusicXML, MEI, and Kern features. However, expands upon this set of computable features while remaining completely based on and allowing the automatic extraction of features at the window level. Furthermore, it includes a caching system that allows for improved performance during the feature extraction process. This caching system saves the results of computations to disk, reducing the need to perform the same calculations multiple times, thus making the extraction process more efficient. Thus, provides a more extensive set of features while being highly performant in its extraction process, making it a valuable tool for researchers in the field of music information retrieval and musicology. While this paper describes the release of 1.0, we are aware that there is wide room to improve further, making it faster, more general, usable, and accurate. 
Specifically, we want to improve three aspects of the software: * Data visualization: we want to provide the user with tools that help visualize the data that musif extracts; this aspect would be particularly useful for preliminary analysis. * Repertoire: As of now, musif has been tested on other types of corpora covering different music styles, including EWLD <cit.>, the Humdrum database <cit.>, piano scores and performances <cit.>, and masses from the Renaissance <cit.>. It has additionally been utilized on an in-house corpus of more than 1600 opera arias; for this reason, most of the design choices and of the implemented features target that repertoire. We want to make musif more powerful and efficient for other repertoires too. * More numerical features: Although musif already provides a wide set of musical features, we are sure that many other features could be defined and included in musif, empowering both musicological analysis and data science studies. We also plan to study in more depth the comparison between the existing tools for music feature extraction, including benchmarks and test performances. While we continue working on these paths, we hope that musif can be a valuable tool for the Sound and Music Computing community and welcome any suggestions or contributions to the software. We encourage the community to use and test musif and provide feedback so that we can continue to improve and develop it further. It is our goal to make musif a widely used and reliable tool for MIP and musicology research. This work is a result of the Didone Project <cit.>, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, Grant agreement No. 788986. It has also been conducted with funding from Spain's Ministry of Science and Innovation (https://doi.org/10.13039/501100011033, grant IJC2020-043969-I/AEI/10.13039/501100011033).
http://arxiv.org/abs/2307.01471v2
20230704043936
On Hofstadter's G-sequence
[ "Michel Dekking" ]
math.CO
[ "math.CO", "Primary 05A17, Secondary 68R15" ]
plain theoremTheorem corollary[theorem]Corollary lemma[theorem]Lemma proposition[theorem]Proposition definition definition[theorem]Definition example[theorem]Example conjecture[theorem]Conjecture remark remark[theorem]Remark 1cmOn Hofstadter's G-sequence 1cm F. M. Dekking CWI Amsterdam and Delft University of Technology Faculty EEMCS P.O. Box 5031 2600 GA Delft The Netherlands mailto:F.M.Dekking@math.tudelft.nlF.M.Dekking@math.tudelft.nl .2 in We characterize the entries of Hofstadter's G-sequence in terms of the lower and upper Wythoff sequences. This can be used to give a short and comprehensive proof of the equality of Hofstadter's G-sequence and the sequence of averages of the swapped Wythoff sequences. In a second part we give some new results that hold when one replaces the golden mean by other quadratic algebraic numbers. § INTRODUCTION Hofstadter's G-sequence G is defined by G(1)=1, G(n)=n-G(G(n-1))) for n≥ 2. It was proved in 1988, independently in the two articles <cit.> that there is a simple expression for Hofstadter's G-sequence as a slow Beatty sequence, given by G(n) = ⌊ (n+1)γ⌋, where γ:=(√(5)-1)/2, the small golden mean. The terminology `slow Beatty sequence' comes from the paper <cit.> by Kimberling and Stolarsky. From this paper we copy the following useful result. [Kimberling and Stolarsky] Suppose that σ in (0, 1) is irrational, and let s(n) = ⌊ nσ⌋. Let a be the sequence of numbers n such that s(n + 1) = s(n), and b the sequence of those n such that s(n + 1) = s(n) + 1. Then a is the Beatty sequence of 1/(1-σ), and b is the Beatty sequence of 1/σ. § HOFSTADTER AND WYTHOFF Let φ:=(1+√(5))/2 be the golden mean. The lower and upper Wythoff sequences are given by L(n)=⌊ n φ⌋ and U(n)=⌊ n φ^2 ⌋ for n≥ 1. The Hofstadter G-sequence satisfies G(L(n))=n, G(U(n))=L(n), for all n≥ 1. The lower Wythoff sequence L satisfies by definition nφ = L(n)+ε_n ⇒φ = (L(n)+ε_n)/n, for some ε_n with 0<ε_n<1. Since φγ=1, this leads to G(L(n))= ⌊ (L(n)+1)γ⌋ = ⌊L(n)+1/φ⌋= ⌊n(L(n)+1)/L(n)+ε_n⌋=⌊ n+δ_n ⌋, with δ_n=n(1-ε_n)/L(n)+ε_n. Since obviously n<L(n)+ε_n, we have 0<δ_n<1, and we conclude that G(L(n))=n. We turn to the second equation. The upper Wythoff sequence U satisfies by definition nφ^2 = U(n)+ε_n', for some ε_n' with 0<ε_n'<1. Since φγ=1, this leads to G(U(n))= ⌊ (U(n)+1)γ⌋ = ⌊U(n/φ+γ⌋= ⌊nφ^2-ε_n'/φ+γ⌋= ⌊ nφ+(1-ε_n')γ⌋=L(n), since obviously 0<(1-ε_n')γ<1. We next turn our attention to sequence A002251, described as: Start with the nonnegative integers; then swap L(k) and U(k) for all k ≥ 1, where L and U are the lower and upper Wythoff sequences. This means that this sequence, which we call W, satisfies W(L(n)) = U(n), W(U(n) )= L(n) for all n≥ 1. Regretfully, the sequence W has been given offset 0 in OEIS. One of the unpleasant consequences of the useless addition of 0 is that sequence A073869 is not a clean Cesaro average of A002251. Another unpleasant consequence is that A073869 is basically a copy of A019444. The sequence W has the remarkable property that the sum of the first n+1 terms is divisible by n+1. This leads to the sequence A073869, defined as A073869(n) = ∑_i=0^n W(i)/(n+1). The following theorem is a conjecture by Amarnath Murthy in <cit.>, but is proved in the long paper <cit.>. We give a new short proof below. The averaged Wythoff swap sequence W is equal to Hofstadter's G-sequence. The result holds for n=0,1. It suffices therefore to consider the sequence of differences. 
Subtracting G(n-1)=∑_i=0^n-1 W(i)/n from G(n)=∑_i=0^n W(i)/(n+1), we see that we have to prove (n+1)G(n)-nG(n-1)=W(n). But we know that there are only two possibilities for the recursion from G(n-1) to G(n). Therefore Equation (<ref>) turns into the following two equations. G(n)=G(n-1) ⇒ G(n) = W(n), G(n)= G(n-1)+1 ⇒ G(n) = W(n)-n. It is not clear how to prove these equalities directly. However, we can exploit Theorem <ref>. According to this theorem with σ=γ, G(n)=G(n-1) ⇔∃ M such that n=U(M), G(n)= G(n-1)+1 ⇔∃ M such that n=L(M) . So we first have to prove that n=U(M) implies G(n) = W(n). This holds indeed by an application of Theorem <ref> and Equation (<ref>): G(n)=G(U(M)=L(M)=W(U(M))=W(n). Similarly, for the second case n=L(M): G(n)=G(L(M))=M=U(M)-L(M)=W(L(M))-L(M)=W(n)-n. Here we applied U(M)=L(M)+M for M≥ 1, a direct consequence of φ^2M=(φ+1)M. In the comments of A073869 there is a scatterplot by N.J.A.Sloane—c.f. Figure <ref>. The points have a nice symmetric distribution around the line y=x, since the points consists of all pairs (L(n),U(n)) and (U(n),L(n)) for n=1,2,…. (Ignoring (0,0).) Apparently the points are almost lying on two lines. What are the equations of these lines? This is answered by the following proposition. Let W be the Wythoff swap sequence. Then for all n≥ 1 W(U(n))=⌊γ U(n)⌋, W(L(n))=⌊φ L(n) ⌋ +1. From Equation (<ref>) and Equation (<ref>) we see that W(n) = G(n), if G(n)=G(n-1), W(n) = G(n)+n if G(n)= G(n-1)+1. Since G(n)=⌊ (n+1)γ⌋ by Equation (<ref>), it follows from Equation (<ref>) that W(U(M))=⌊ U(M)γ⌋. Since all M=1,2… will occur, this gives the first half of the proposition. For the second half of the proposition we do the following computation under the assumption that n=L(M): G(n)+n = G(n-1) + n +1 = ⌊ nγ⌋ +n +1 = ⌊ n(γ+1) ⌋ +1 = ⌊ nφ⌋ +1. Now Equation (<ref>) gives that W(L(M))=⌊φ L(M) ⌋ +1. Simple applications of Theorem <ref> prove the conjectures in A090908 (Terms a(k) of A073869 for which a(k)=a(k+1).), and A090909 (Terms a(k) of A073869 for which a(k-1), a(k) and a(k+1) are distinct.). It also proves the conjectured values of sequence A293688. The results in this section can all be proved automatically by Walnut: see the paper <cit.>. § GENERALIZATIONS There is a lot of literature on generalizations of Hofstadter's recursion G(n)=n-G(n-1)). In most cases there is no simple description of the sequences that are generated by such recursions. An exception is the recursion V(n)=V(n-V(n-1))+V(n-V(n-4)) analysed in <cit.>. The sequence with initial values 1,1,1,1 generated by this recursion is sequence A063882. Allouche and Shallit prove in <cit.> that the `frequencies' of this sequence can be generated by an automaton. See the recent paper <cit.> for more results on this type of Hofstadter's recursions, known as Hofstadter Q-sequences. We consider the paper <cit.>, that gives a direct generalization of Hofstadter's G-sequence. [Celaya and Ruskey] Let k≥ 1, and let γ = [0; k, k, k, …]. Assume H(n) = 0 for n < k, and for n ≥ k, let H(n) = n-k+1 - (∑_i=1^k-1 H(n-i))- H(H(n-k). Then for n≥ 1, H(n) = ⌊γ(n+1)⌋. As an example, we take the case k=2. In that case γ=√(2)-1, the small silver mean. The recursion for what we call the Hofstadter Pell sequence is H(n)=n-1-H(n-1)-H(H(n-2)). Here Theorem <ref> gives that (H(n))=⌊γ(n+1)⌋=0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 6, 6, 7, 7, 7, 8, 8, 9, 9, 9,10,10,…. This is sequence A097508 in OEIS. Let 1/γ=1+√(2) and 1/(1-γ)=1+1/2√(2) form the Beatty pair given by Theorem <ref>. 
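Before turning to the associated Beatty sequences, the k = 2 case can be checked directly. The short script below (a sanity check added here, not part of the original paper) builds the Hofstadter Pell sequence from the recursion just stated and compares it with the slow Beatty sequence ⌊γ(n+1)⌋, evaluated exactly with integer arithmetic via ⌊(n+1)√2⌋ = isqrt(2(n+1)²).

```python
from math import isqrt

N = 2000
H = [0, 0]                       # H(0) = H(1) = 0 for the k = 2 recursion
for n in range(2, N + 1):
    H.append(n - 1 - H[n - 1] - H[H[n - 2]])

# Closed form floor((n + 1) * (sqrt(2) - 1)), computed exactly with isqrt.
slow_beatty = [isqrt(2 * (n + 1) ** 2) - (n + 1) for n in range(N + 1)]

assert H == slow_beatty
print(H[:24])   # 0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, ...  (A097508)
```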
Let L^ P=(⌊ n(1+√(2))⌋) and U^ P=(⌊ n(1+1/2√(2))⌋) be the associated Beatty sequences. One has L^ P=A003151, and U^ P=A003152. The following version of Theorem <ref> holds, where R is the slow Beatty sequence A049472 given by R(n)=⌊1/2√(2)n⌋. The Hofstadter Pell sequence H satisfies H(L^ P(n))=n, H(U^ P(n))=R(n), for all n≥ 1. The proof of Theorem <ref> is very similar to the proof of Theorem <ref>, based on the relation γ(1+√(2))=1. The sequence with L^ P and U^ P swapped is A109250 =2, 1, 4, 3, 7, 9, 5, 12, 6, 14, 16, 8, 19, 10, 21, 11, 24, 26…. Apparently there is nothing comparable to the averaging phenomenon that occurred in the golden mean case. See A078474, and in particular A286389 for two generalizations of Hofstadter's recursion, with conjectured expressions similar to Equation (<ref>). For the recursion a(n)=n-⌊1/2 a(a(n-1))⌋ given in A138466 it is proved by Benoit Cloitre that (a(n)) satisfies Equation (<ref>) with γ=√(3)-1. For generalizations of this see A138467. § ACKNOWLEDGMENT I thank Jean-Paul Allouche for useful remarks. Thanks are also due to an anonymous referee for pointing out an important reference. plain 8 AllSha J.-P. Allouche and J. Shallit, A variant of Hofstadter’s Q-sequence and finite automata, J. Aust. Math. Soc. 93 (2012), 1-–8. doi:10.1017/S1446788713000074 Avdip M. Avdispahić and F. Zejnulahi, An integer sequence with a divisibility property, Fibonacci Quarterly 8 (2020), 321–333. Balamohan B. Balamohan, A. Kuznetsov, and S. Tanny, On the behavior of a variant of Hofstadter’s q-sequence, J. Integer Seq. 10 (2007), Article 07.7.1. Celaya M. Celaya and F. Ruskey, Morphic words and nested recurrence relations, arxiv 1307.0153 (Jun 29 2013), [math.CO] Gault D. Gault and M. Clint, “Curiouser and curiouser" said Alice. Further reflections on an interesting recursive function, Internat. J. Computer Math. 26 (1988), 35–43. Granville V. Granville and J.-P. Rasson, A strange recursive relation, J. Number Theory 30 (1988), no. 2, 238–241. kimbstol C. Kimberling and K. B. Stolarsky, Slow Beatty sequences, devious convergence, and partitional divergence, Amer. Math. Monthly 123 (2016), 267–273. Fox N. Fox, Connecting slow solutions to nested recurrences with linear recurrent sequences, J. Difference Equ. Appl. 10 (2022), 1–34. Walnut J. Shallit, Proving properties of some greedily-defined integer recurrences via automata theory, preprint. oeis N. J. A. Sloane et al., On-Line Encyclopedia of Integer Sequences, electronically available at <https://oeis.org>, 2023. Venk B. J. Venkatachala, A curious bijection on natural numbers, J. Integer Seq. 12 (2009) 09.8.1. 2020 Mathematics Subject Classification: Primary 05A17, Secondary 68R15 Keywords: Hofstadter's G-sequence, Wythoff sequences.
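As a closing numerical check for this entry (added here, not part of the original paper), the sketch below verifies the two central identities discussed above: the closed form G(n) = ⌊(n+1)γ⌋ for Hofstadter's G-sequence and the theorem that the running averages of the swapped Wythoff sequence reproduce G. All floors are evaluated exactly with integer arithmetic to avoid floating-point issues.

```python
from math import isqrt

N = 3000
lower = lambda n: (n + isqrt(5 * n * n)) // 2        # L(n) = floor(n*phi), exact
L = [lower(n) for n in range(N + 1)]
U = [L[n] + n for n in range(N + 1)]                 # U(n) = L(n) + n

# G from its recursion G(n) = n - G(G(n-1)), with the convention G(0) = 0.
G = [0, 1]
for n in range(2, N + 1):
    G.append(n - G[G[n - 1]])
closed = [(isqrt(5 * (n + 1) ** 2) - (n + 1)) // 2 for n in range(N + 1)]
assert G == closed                                   # G(n) = floor((n+1)*gamma)

# Wythoff swap sequence (A002251, offset 0): swap the entries at L(k) and U(k).
W = [0] * (N + 1)
k = 1
while L[k] <= N:
    W[L[k]] = U[k]
    if U[k] <= N:
        W[U[k]] = L[k]
    k += 1

# Main theorem: the running averages of W reproduce Hofstadter's G-sequence.
s = 0
for n in range(N + 1):
    s += W[n]
    assert s % (n + 1) == 0 and s // (n + 1) == G[n]
print("averaged Wythoff swap = Hofstadter G, verified for n <=", N)
```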
http://arxiv.org/abs/2307.01455v1
20230704031835
A Pulsed Muon Source Based on a High-Repetition-Rate Electron Accelerator
[ "Meng Lv", "Jiangtao Wang", "Kim Siang Khaw" ]
physics.acc-ph
[ "physics.acc-ph", "hep-ex" ]
meng.lv@sjtu.edu.cn jiangtao_wang@sjtu.edu.cn kimsiang84@sjtu.edu.cn Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, China Muons have established a unique and pivotal role in both fundamental physics and applied sciences. Given that a typical muon experiment spans roughly ten muon lifetimes, the optimal muon source should operate at around 50 kHz in pulsed mode. However, existing muon facilities operate in either the 25-50 Hz pulsed mode or continuous beam (DC) mode, which results in low-duty cycles for various muon experiments. As a result, precision muon physics with continuous muon beam has been limited by statistical uncertainty. In this study, we investigate the potential of a high-repetition-rate pulsed electron beam at the Shanghai SHINE facility to serve as a muon source driver. SHINE houses an 8-GeV CW superconducting RF linac, with a 1 MHz bunch rate and 100 pC bunch charge. Following X-ray production, the electron beam is deflected downstream of the undulators and absorbed in a beam dump. Using Geant4 Monte Carlo simulations, we estimated the yield of the muon beam to be approximately 10^3μ^±/bunch. This type of muon beam could be instrumental in a broad range of muon experiments, including muon lifetime measurement, a search for muonium to anti-muonium conversion, and the muon spin spectroscopy. A Pulsed Muon Source Based on a High-Repetition-Rate Electron Accelerator Kim Siang Khaw August 1, 2023 ========================================================================= § INTRODUCTION Research in fundamental physics and applied science using muons has gained substantial interests in recent years. A long-standing discrepancy between theoretical predictions <cit.> and experimental measurements <cit.> regarding the muon's magnetic moment strongly suggests the presence of physics beyond the Standard Model of particle physics (for a recent review, see <cit.>). Techniques involving muon spectroscopy, such as muon spin rotation and muon-induced X-ray emission <cit.>, have catalyzed advancements in superconductivity, magnetism, and the elemental analysis of archaeological artifacts. A common feature of existing and planned facilities is their reliance on high-power proton accelerators. Most of them operate in a multipurpose mode, where experiments with muons, neutrons, and pions are conducted simultaneously. These facilities currently operate either in a pulsed mode (25-50 Hz, e.g., J-PARC in Japan <cit.>, and ISIS in the UK <cit.>) or a continuous (DC) mode (e.g., PSI in Switzerland <cit.>, TRIUMF in Canada <cit.>). This is also true for the five new muon facilities currently under study at CSNS <cit.>, HIAF/CiADS <cit.>, RAON <cit.>, and FNAL <cit.>. Given that a typical muon experiment spans roughly ten muon lifetimes, current operating modes result in low-duty cycles for various muon experiments. For instance, precision muon physics with continuous muon beam has been limited by statistical uncertainty. Recently, several authors have noted that the optimal muon source for experiments such as the muon spin rotation (µSR) <cit.>, muon electric dipole moment <cit.>, and muonium to anti-muonium conversion <cit.> operates in pulsed mode with a repetition rate of several tens of kHz. A non-scaling fixed-field alternating gradient (FFAG) proton accelerator technology with a frequency of a few kHz <cit.> has been proposed for this purpose, but is still under development <cit.>. 
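The roughly 50 kHz figure quoted above follows from a simple duty-cycle estimate, sketched below as a rough back-of-the-envelope check (not taken from the cited references): ten muon lifetimes set the useful measurement window per pulse, and the optimal repetition rate is approximately its inverse.

```python
TAU_MU = 2.197e-6        # muon lifetime in seconds

window = 10 * TAU_MU     # a typical measurement spans roughly ten lifetimes
optimal_rate = 1.0 / window

print(f"measurement window per pulse : {window * 1e6:.1f} us")
print(f"back-to-back pulse rate      : {optimal_rate / 1e3:.0f} kHz")   # ~45-50 kHz
```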
The proton beam to be delivered to Fermilab's Mu2e experiment <cit.> has a proton bunch repetition rate of 0.59 MHz, achieved by resonantly extracting the proton bunch from a delivery ring. However, it is dedicated to the Mu2e experiment. Recent work at ORNL's SNS aimed to extract 50 kHz proton pulses for µSR applications, employing laser neutralization on a hydrogen ion beam <cit.>. Apart from the proton beam, an electron beam could also be used to drive a muon source. Recently, Nagamine et al. proposed using a 300 MeV, 10 µA electron accelerator for a µSR facility <cit.>, predicting a potential yield of 8 × 10^3μ^+/s taking into account acceptance effects and transportation losses. A feasibility study for a muon beam using the Hall-A beam dump at the Continuous Electron Beam Accelerator Facility (CEBAF) <cit.> in Jefferson Lab has also been conducted <cit.>. With a target electron number of about 10^22/year, the anticipated muon flux is approximately 2×10^15/year. The latest advancements in laser wakefield acceleration (LWFA) technology have compactified electron accelerators to mere meters rather than kilometers. The development of such compact muon sources has been extensively studied recently <cit.>, in anticipation of the imminent availability of high-repetition-rate femtosecond multi-PW lasers. In this article, we explore the possibility of utilizing a high repetition rate electron beam at SHINE <cit.> in Shanghai to power a muon source. We present a detailed analysis of the anticipated beam intensity for muons and other secondary particles for two distinct target configurations. Potential applications of these muon beams in the field of fundamental physics as well as applied science will also be discussed. § MUON PRODUCTION BY ELECTRON Muon beams are typically produced as a tertiary beam within a proton accelerator complex. This production process is driven by a high-intensity proton beam hitting on a graphite target <cit.>. This results in the production of pions via strong nuclear interactions. Bending magnets are then used to extract these pions from the target area. As they travel through a lengthy decay channel, these pions gradually decay into muons, which are subsequently delivered to muon experiment zones. A particular category of muon, known as a surface muon, can be selected by ensuring the beam momentum is approximately 28 MeV/c. This specific muon beam originates from the pion decaying near the target's surface and is typically nearly 100% polarized due to the parity-violating weak decay of the pion. Contrarily, the muon production scheme for an electron-driver varies significantly. Muons can be created as a tertiary beam via photo-nuclear process. The photo-nuclear process entails the production of a real photon via the bremsstrahlung process, which is then followed by pion production through photo-excitation of the nucleus. Muons can also be created via the Bethe-Heitler process, which uniquely does not proceed through pion production and decay. The relative sizes of the cross sections are approximately in the ratio 1000:1 for photo-nuclear and pair production processes respectively, for electrons in the GeV scale <cit.>. § SHANGHAI SHINE FACILITY The Shanghai High repetition rate XFEL and Extreme light facility (SHINE) <cit.>, currently under construction at Shanghai's Zhangjiang High Technology Park (see Fig. <ref>), is a fourth-generation light source. This facility houses a 8-GeV CW superconducting RF linac, which delivers an 8-GeV bunched electron beam. 
The beam offers a repetition rate of up to 1 MHz and a bunch charge of 100 pC, culminating in an average current of 100 µA. SHINE is equipped with three undulator lines, capable of producing hard X-rays of up to 25 keV. Following X-ray production, each electron beam is diverted and routed towards a beam dump <cit.>. Interactions between the electron beam and either the beam dump or a thin target preceding the beam dump could generate muons and other secondary particles. § SIMULATION SETUP In order to simulate the production of muons, positrons, and pions resulting from electron interactions with the target, we employed the package <cit.>, which is based on the Geant4 toolkit <cit.>. We modified the physics list to use the FTFP_BERT model for hadronic processes. The initial phase of our investigation into the electron-beam-driven muon source focused on optimizing the muon production target. Based on prior study <cit.>, we selected tungsten as our target material and designed it in a cylindrical shape. The target was optimized to maximize muon yield at a distance of 5 cm from the center of the target, as illustrated in Fig. <ref>. We found that the dimensions yielding the highest surface muon production for a tungsten target are a thickness of 30 mm and a radius of 6 mm. The energy and angular distribution of the muons are presented in Fig. <ref>. Muons generated from the photo-nuclear process exhibited a lower energy and a broader angular spread compared to those created from the pair-production process. This is consistent with our expectations since the former arise from pion decays. The yield from the photo-nuclear process is approximately 1.4 × 10^4 per bunch, while the yield from the pair production process is around 6.9 × 10^3 per bunch. Furthermore, we observed two distinct peaks in the momentum distribution, as depicted in Fig. <ref>(top). These peaks each correspond to the decays of pions and kaons, respectively, and have close to 100% polarization, suitable for µSR application. The yield of the surface muon at 4 MeV is approximately 1,400 per bunch, resulting in an intense rate of 1.4 × 10^8/s for a 100 kHz operation. Surface muons resulting from kaon decay present a unique prospect for probing extremely dense materials, owing to their higher penetration power <cit.>. In a scenario that minimizes modifications to the SHINE facility, we examined an alternative target configuration: the SHINE beam dump <cit.>. As shown in Fig. <ref>, the SHINE beam dump is cylindrical and composed of aluminum and copper, with a radius of 16 cm and a total length of 115 cm. Given the tiny emittance of the SHINE linac's electron beam, we approximated it as a pencil beam to simplify our model. To analyze the muon yield at various points around the beam dump, we installed five virtual detection planes: front, left side, right side, back left side, and back right side, as depicted in Fig. <ref>. As anticipated, a substantial rate of muon and pion beams could still be detected from the beam dump. The energy distribution of the muons and pions is presented in Fig. <ref>, and the yield for each particle species is summarized in Tab <ref>. § CONCLUSION In this study, we analyzed the feasibility of leveraging the high-repetition-rate electron beam at Shanghai's SHINE facility to generate a muon source. Through our simulations, we projected a muon yield of approximately 10^4μ^±/bunch using a cylindrical tungsten target with a thickness of 30 mm and a radius of 6 mm. 
In the beam dump configuration, we anticipate a yield of 10^3μ^±/bunch or even greater from all sides of the SHINE beam dump. Such a high-repetition-rate muon beam holds promise for diverse muon experiments, from muon lifetime measurements <cit.> to the search for muonium-to-anti-muonium conversion <cit.>, as well as the application in muon spin spectroscopy studies <cit.>. A dedicated study on the beam extraction will be performed as the next step of our research. § ACKNOWLEDGEMENTS We would like to thank Guanghong Wang, Wenzhen Xu, Jianhui Chen, and Dong Wang from Shanghai Advanced Research Institute for useful discussions regarding the feasibility of developing a muon beam line using SHINE beam dumps. This work is supported by the Shanghai Pilot Program for Basic Research (21TQ1400221). 99 Aoyama:2020ynm T. Aoyama et al., Phys. Rept. 887 (2020), 1-166. doi:10.1016/j.physrep.2020.07.006 Muong-2:2006rrc G. W. Bennett et al. [Muon g-2], Phys. Rev. D 73 (2006), 072003. doi:10.1103/PhysRevD.73.072003 Muong-2:2021ojo B. Abi et al. [Muon g-2], Phys. Rev. Lett. 126 (2021) no.14, 141801. doi:10.1103/PhysRevLett.126.141801 Keshavarzi:2021eqa A. Keshavarzi, K. S. Khaw and T. Yoshioka, Nucl. Phys. B 975 (2022), 115675. doi:10.1016/j.nuclphysb.2022.115675 Hillier:2022nat A. D. Hillier et al., Nat Rev Methods Primers (2022) 2, 4. doi:10.1038/s43586-021-00089-0 Miyake:2014zra Y. Miyake et al., J. Phys. Conf. Ser. 551 (2014) no.1, 012061. doi:10.1088/1742-6596/551/1/012061 Hillier:2018wkl A. D. Hillier et al., JPS Conf. Proc. 21 (2018), 011055. doi:10.7566/JPSCP.21.011055 Grillenberger:2021kyv J. Grillenberger, C. Baumgarten and M. Seidel, SciPost Phys. Proc. 5 (2021), 002. doi:10.21468/SciPostPhysProc.5.002 Marshall:1991wb G. M. Marshall, Z. Phys. C 56 (1992), S226-S232. TRI-PP-91-69. Vassilopoulos:2022ali N. Vassilopoulos [EMuS project], PoS NuFact2021 (2022), 110. doi:10.22323/1.402.0110 Cai:2019feg H. J. Cai, L. Chen, L. Yang and S. Zhang, in Proc. 10th Int. Particle Accelerator Conf. (IPAC’19), Melbourne, Australia, May. 2019, pp. 3673-3675. doi:10.18429/JACoW-IPAC2019-THPGW041 Won:2014gja E. Won, JPS Conf. Proc. 2 (2014), 010110. doi:10.7566/JPSCP.2.010110 Gatto:2022olb C. Gatto et al., [arXiv:2212.04897 [physics.ins-det]]. Cywinski:2009zz R. Cywinski et al., Physica B 404 (2009), 1024-1027. doi:10.1016/j.physb.2008.11.203 Adelmann:2010zz A. Adelmann, K. Kirch, C. J. G. Onderwater and T. Schietinger, J. Phys. G 37 (2010), 085001. doi:10.1088/0954-3899/37/8/085001 Adelmann:2021udj A. Adelmann et al., [arXiv:2102.08838 [hep-ex]]. Willmann:2021boq L. Willmann and K. Jungmann, SciPost Phys. Proc. 5 (2021), 009. doi:10.21468/SciPostPhysProc.5.009 Kuno:2000wn Y. Kuno, Nucl. Instrum. Meth. A 451 (2000), 233-243. doi:10.1016/S0168-9002(00)00550-7 Kuno:2005zz Y. Kuno, Conf. Proc. C 0505161 (2005), 29-33. doi:10.1109/pac.2005.1590351 Seidel:2021jyz M. Seidel, [arXiv:2105.04477 [physics.acc-ph]]. Mu2e:2014fns L. Bartoszek et al. [Mu2e], [arXiv:1501.05241 [physics.ins-det]]. Liu:2020hcu Y. Liu, A. Rakhman, C. D. Long, Y. Liu and T. J. Williams, Nucl. Instrum. Meth. A 962 (2020), 163706 doi:10.1016/j.nima.2020.163706 Nagamine:2009zz K. Nagamine, H. Miyadera, A. Jason and R. Seki, Physica B 404 (2009), 1020-1023. doi:10.1016/j.physb.2008.11.231 Dudek:2012vr J. Dudek et al., Eur. Phys. J. A 48 (2012), 187. doi:10.1140/epja/i2012-12187-1 Fulci:2023hmx A. Fulci, Nucl. Instrum. Meth. A 1046 (2023), 167514. doi:10.1016/j.nima.2022.167514 Titov:2009cr A. I. Titov, B. Kampfer and H. Takabe, Phys. Rev. ST Accel. 
Beams 12 (2009), 111301. doi:10.1103/PhysRevSTAB.12.111301 Dreesen:2014mt W. Dreesen et al., 2014 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), Seattle, WA, USA, 2014, pp. 1-6. doi: 10.1109/NSSMIC.2014.7431088 Rao:2018njj B. S. Rao, J. H. Jeon, H. T. Kim and C. H. Nam, Plasma Phys. Control. Fusion 60 (2018), 095002. doi:10.1088/1361-6587/aacdea Zhao:2018lcl Z. Zhao, D. Wang, Z. H. Yang and L. Yin, in Proc. 38th International Free Electron Laser Conference (FEL'17), Geneva, Switzerland, Aug. 2017, pp. 182-184. doi:10.18429/JACoW-FEL2017-MOP055 Bungau:2014rxa A. Bungau, R. Cywinski, C. Bungau, P. King and J. S. Lord, Phys. Rev. ST Accel. Beams 17 (2014) no.3, 034701. doi:10.1103/PhysRevSTAB.17.034701 Berg:2015wna F. Berg et al., Phys. Rev. Accel. Beams 19 (2016) no.2, 024701. doi:10.1103/PhysRevAccelBeams.19.024701 Cook:2016sfz S. Cook et al., Phys. Rev. Accel. Beams 20 (2017) no.3, 030101. doi:10.1103/PhysRevAccelBeams.20.030101 Blomqvist:1976mq I. Blomqvist, P. Janecek, G. G. Jonsson, H. Dinter, K. Tesch, N. Freed and P. Ostrander, Phys. Rev. C 15 (1977), 988-1001. doi:10.1103/PhysRevC.15.988 Xu:2020bd Y. Xu et al., Radiation Protection 2020, 40(6): 510-515. Sedlak:2012 K. Sedlak, R. Scheuermann, T. Shiroka, A. Stoykov, A.R. Raselli and A. Amato, Phys. Proc. 30 (2012) 61. doi:10.1016/j.phpro.2012.04.040 GEANT4:2002zbu S. Agostinelli et al. [GEANT4], Nucl. Instrum. Meth. A 506 (2003), 250-303. doi:10.1016/S0168-9002(03)01368-8 Grinenko:2023 V. Grinenko, private communication, May. 2023. Kanda:2022too S. Kanda, PoS NuFact2021 (2022), 215. doi:10.22323/1.402.0215 Bai:2022sxq A. Y. Bai et al., [arXiv:2203.11406 [hep-ph]]. Li:2023gxn Q. Li et al., J. Phys. Conf. Ser. 2462 (2023) no.1, 012022. doi:10.1088/1742-6596/2462/1/012022
http://arxiv.org/abs/2307.03372v1
20230707034052
Triangle singularity in the $J/\psi \to \gamma \bar{p} \Delta$ decay
[ "Ke Wang", "Rong Li", "Bo-Chao Liu" ]
hep-ph
[ "hep-ph" ]
wangke563@qq.com rongliphy@xjtu.edu.cn liubc@xjtu.edu.cn MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, School of Physics, Xi'an Jiaotong University, Xi'an 710049, China. Institute of Theoretical Physics, Xi'an Jiaotong University, Xi'an 710049, China. In this work, we study the role of the triangle singularity in the J/ψ→γp̅Δ decay. We find that through a triangle mechanism, involving a triangle loop composed of ω, π and p, this decay may develop a triangle singularity and produce a visible peak in the invariant mass M_γΔ around 1.73 GeV with a width of 0.02 GeV. Such a triangle mechanism may also cause significant spin effects on the final Δ, which can be detected by measuring its spin density matrix elements. Our calculations show that the branching ratio due to the triangle mechanism is Br(J/ψ→γp̅Δ,Δ→π p)=1.058× 10^-6. Hopefully, this reaction can be investigated at BESIII and future experiments, e.g., the Super Tau-Charm Facility, and the narrow width of the induced structure, the moving TS position and the distinct features of the spin density matrix elements of the Δ may serve as signals for the triangle singularity mechanism. Triangle singularity in the J/ψ→γp̅Δ decay Bo-Chao Liu ========================================================== § INTRODUCTION The triangle singularity (TS), one kind of kinematical singularity in the scattering amplitude, was first studied by Landau in 1959 <cit.>. Later, the corresponding physical picture of the special kinematic conditions needed to produce a TS, known as the Coleman-Norton theorem, was described in Ref. <cit.>. Specifically, for the decay process A → B+C proceeding through a triangle loop composed of internal particles 1, 2 and 3, particle A first decays into particles 1 and 2, then particle 1 decays into particle 3 and B, and finally particles 2 and 3 merge into particle C. The TS occurs in the amplitude only when these sub-processes take place in a classical manner, which corresponds to the case that all three intermediate particles are on shell simultaneously and their three-momenta are collinear in the rest frame of particle A. Besides, particle 3 must move fast enough to catch up with particle 2 and merge into particle C.
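The following Python sketch illustrates these conditions numerically for the ω–π–p loop studied in this work. Assuming PDG masses and realizing the classical picture by letting the on-shell ω emit the π antiparallel to its momentum in the γΔ (i.e., ωp) rest frame, one can scan M_γΔ for the point at which the collinear πp pair reconstructs the Δ mass. This is only a kinematical check of the quoted ~1.73 GeV position, not the loop-integral calculation presented below, and the implementation details (choice of π^0 mass, antiparallel-emission criterion) are our own assumptions.

import numpy as np

# Classical (Coleman-Norton) picture of the TS for the omega-pi-p loop: at the
# singularity, the on-shell pi emitted backwards from the on-shell omega catches
# the on-shell proton and the collinear pair fuses into an on-shell Delta.
# Masses in GeV (PDG values); kinematics check only, not the loop integral.
M_OMEGA, M_PI0, M_P, M_DELTA = 0.78266, 0.13498, 0.93827, 1.232

def pi_p_invariant_mass(m_gamma_delta):
    """pi-p invariant mass with all loop particles on shell and collinear,
    the pi being emitted antiparallel to the omega in the omega rest frame."""
    s = m_gamma_delta**2
    # omega and p back to back in the (gamma Delta) = (omega p) rest frame
    e_w = (s + M_OMEGA**2 - M_P**2) / (2.0 * m_gamma_delta)
    if e_w <= M_OMEGA:
        return np.nan
    q = np.sqrt(e_w**2 - M_OMEGA**2)
    e_p = m_gamma_delta - e_w
    # omega -> gamma pi in the omega rest frame, pi emitted towards the proton
    k = (M_OMEGA**2 - M_PI0**2) / (2.0 * M_OMEGA)
    e_pi_rest = np.sqrt(k**2 + M_PI0**2)
    beta, gamma = q / e_w, e_w / M_OMEGA
    e_pi = gamma * (e_pi_rest - beta * k)
    p_pi = gamma * (-k + beta * e_pi_rest)   # signed momentum along the omega direction
    # (one can also check v_pi > v_p here, i.e. the pion can catch the proton)
    return np.sqrt((e_pi + e_p)**2 - (p_pi - q)**2)

# scan M_gammaDelta and locate where the reconstructed pi-p mass equals m_Delta
grid = np.linspace(1.722, 1.80, 20000)
masses = np.array([pi_p_invariant_mass(m) for m in grid])
i = np.nanargmin(np.abs(masses - M_DELTA))
print(f"TS expected near M_gammaDelta = {grid[i]:.4f} GeV")   # ~1.73 GeV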
In recent years, TS has attracted a lot of attentions of researchers and has been suggested to play an essential role for understanding the nature of some observed structures and clarifying some important puzzles<cit.>. For example, the abnormally large isospin-breaking effects observed in J/ψ→γη(1405) →γπ^0 f_0(980) can be understood by considering the TS mechanism originating from the K^*K̅K loop<cit.>. The band around 1.4 GeV on the π^0ϕ distribution in Dalitz plot for the isospin-breaking decay J/ψ→ηπ^0ϕ can also be explained by the TS mechanism<cit.>. Furthermore, some exotic states observed recently in experiments, e.g. Z_c<cit.>, X(2900)<cit.> and T^+_cc<cit.>, have been argued to involve TS mechanism. For a comprehensive review of these topics, we refer to Ref.<cit.>. Although TS mechanism may be essential for understanding those interesting and important experimental phenomena, further studies are still needed to investigate its physical effects and find ways to identify its contribution in experiments. It is well known that TS mechanism can cause an enhancement in the invariant mass spectrum of final particles, which has been the main focus of previous studies<cit.>. However, since TS and resonances can induce similar structures in the invariant mass spectrum, it raises the question on how to distinguish these two mechanisms. One possible way is to change the kinematic conditions that are necessary for TS mechanism<cit.>. The structure should disappear for the TS model but not for the resonance model when changing the kinematic conditions. Although this method is feasible in principle, it changes the conditions of the original experiment and may introduce other ambiguities, e.g., the change of relative strength of various contributions due to varying kinematic conditions. Therefore, a better method would be one that can distinguish these two mechanisms without changing the experiment conditions. In our recent work<cit.>, we suggest that in some cases TS mechanism may cause significant spin effects, which offers an alternative way to verify TS mechanism and thus deserves further studies. In this work, we propose that in the radiative decay process J/ψ→γp̅Δ(1232) the TS mechanism, through the triangle loop involving ω, π and p as shown in Fig.<ref>, may play an important role. In this process, the couplings of the three vertices J/ψ→ pp̅ω, ω→γπ and π p →Δ(denoting the Δ(1232) hereafter) involved in the loop are relatively strong<cit.>. Furthermore, the small width of the intermediate states in the loop may also enhance the triangle loop contribution and can produce a relatively narrow peak in the γΔ invariant mass spectrum at the position of the TS. At the same time, as argued in our previous work such a TS mechanism may also cause significant spin effects. The physical picture behind this expectation is quite simple. When incident particles are moving along some fixed direction, the produced intermediate state may have spin alignment due to angular moment conservation. For example, considering the Δ resonance produced in π N elastic scattering process in the center of mass frame, the spin projection on the z-axis of the produced Δ can only be ±1/2 if we take z-axis along the beam direction. Therefore, the spin of the Δ is aligned and the angular distribution of its decay products is anisotropic. The spin status of the Δ can be described by the spin density matrix elements(SDMEs) and measured through the analysis of the angular distribution of the Δ→π N decay. 
In the J/ψ→γp̅Δ process through the triangle diagram, according to the Coleman-Norton theorem, the π and N in the loop should move along the direction of the momentum of the Δ at TS in the γΔ rest frame. It means, if we consider the helicity states of the Δ, i.e. choosing the quantization axis along the direction of the momentum of the Δ, the helicity should be ±1/2 similar as the case of the Δ production in the π N elastic scattering process mentioned above. In other words, the special kinematic conditions required by the TS constrain the helicity of the Δ in the γΔ rest frame, which is absent for other mechanisms. Therefore, if the TS mechanism indeed plays an important role in this reaction, we expect a peak structure in the γΔ invariant mass spectrum and the production of the Δ with helicity ±1/2 should be enhanced near TS. This paper is organized as follows. In Sec.<ref>, we present the theoretical framework and amplitudes for the reaction J/ψ→γp̅Δ. In Sec.<ref>, we show the numerical results and discuss their implications. Finally, we summarize our findings and conclusions in Sec.<ref>. § MODEL AND INGREDIENTS In this work, we shall introduce the TS mechanism in the radiative decay process J/ψ→γp̅Δ within an effective Lagrangian approach. The Feynman diagram for the process that may produce TS is shown in Fig.<ref>. In this process, the J/ψ first decays into pp̅ω, then ω decays to a photon and a π meson. In the γΔ rest frame, if the π meson travels along the momentum of the proton produced in J/ψ decay and moves faster than it, the π may catch up with the proton and they can finally merge into the final Δ. According to the results in Ref.<cit.>, TS exists in this decay process only when the special kinematic conditions are satisfied. Using the method in Ref.<cit.>, if we adopt the nominal masses in PDG<cit.> for the involved particles in Fig.<ref>, it turns out that the TS should occur at M_γΔ=1.731GeV. To calculate the decay amplitude for the Feynman diagram in Fig.<ref>, we need the Lagrangian densities for the various vertices. For the J/ψ→ pp̅ω vertex, we adopt a contact interaction _ψω NN̅ = g_c N̅ψ^μω_μ N, where g_c is the coupling constant and can be determined through the J/ψ→ pp̅ω partial decay width in PDG<cit.>. Note that up to now there is no evidence that resonance productions play an important role in the J/ψ→ p p̅ω decay. For the ωγπ and Δπ N vertices, we adopt the effective Lagrangians<cit.>. _ωγπ = e g_ωγπ/m_ωε^μναβ∂_μω_ν∂_α A_βπ, _Δπ N = g_Δπ N/m_πΔ̅^μ(τ⃗·∂_μπ⃗) N+h.c., where A represents the photon field and e is taken as √(4π/137). The coupling constants g_ωγπ and g_Δπ N appearing in the above Lagrangian densities can be determined through the corresponding partial decay width using Γ_ω→πγ = e^2 g^2_ωγπ/12π|p_π|^3/m^2_ω, Γ_Δ→π N = g_Δπ N^2/12 πE_N+m_N/m_Δ m_π^2|p_π|^3, where |p_π| and E_N denote the magnitude of the three momentum of the π and the nucleon energy in the rest frame of the mother particles, respectively. The obtained coupling constants are listed in Table <ref>. With the above Lagrangian densities for various vertices, we can straightforwardly obtain the amplitude for the triangle loop diagram in Fig.<ref> as ^T = -ie g_c g_ωγπ g_Δπ N/m_π m_ωu̅^μ_Δε^ν_ψε^*α_γ∫d^4q/(2π)^4 p_π,μ G^1/2(p_p) ϵ_βρλα p^β_ω G^1,ρ_ν(p_ω) p^λ_γ G^0(p_π) F(p_π) v_p̅ ≡ g u̅^μ_Δε^ν_ψε^*α_γ_μνα v_p̅, where u_Δ,v_p̅, ε_γ and ε_ψ are the spin functions of the Δ, p̅, photon and J/ψ, respectively. 
G^Js denote the propagators of the intermediate particles with spin J, which are defined as<cit.> G^0(q) = i/q^2-m^2, G^1_μν(q) = -i(g_μν-q_μ q_ν/m^2)/q^2-m^2, G^1/2(q) = i(q+m)/q^2-m^2, where q and m are the four momentum and the mass of the intermediate state. In the above amplitude, we have introduced a monopole form factor F(p_π) for the intermediate π meson in order to make the loop integral convergent, which is taken as<cit.> F(p_π) = m^2_π-Λ^2_π/p^2_π-Λ^2_π. Here we note that near TS the off-shell effects of the intermediate states in the loop are small, so we do not need to consider the form factors for other particles. Furthermore, the possible problem of an artificial pole introduced by the form factor should not be worried here as discussed in Ref.<cit.>. The cutoff Λ_π can be determined through an empirical formula Λ_π=m_π+αΛ_QCD<cit.>, where α is a dimensionless free parameter and Λ_QCD =0.22 GeV is the scale parameter of QCD. The α is usually taken to be about unity, and in this work we take α=1 in the calculations. For the quasi three-body decay process, i.e. ignoring the decay of the Δ, the invariant mass distribution of the γΔ system can be obtained through the following formula<cit.> dΓ/d M_γΔ = 4m_N m_Δ/(2 π)^5 2^4 m^2_ψ|p_p̅| |p_γ^*|/3∫dΩ_p̅dΩ_γ^*∑_spin|^T|^2 , where the quantities with or without * represent that they are defined in the center of mass frame of the γΔ system or the J/ψ rest frame, respectively. To further consider the influences of the finite width effects of the Δ due to the Δ decay as shown in Fig.<ref>, we follow the approach used in Ref.<cit.> by introducing a mass distribution function for the Δ in Eq.(<ref>). Then we obtain dΓ/d M_γπ N = ∫4m_N M_π N/(2 π)^5 2^4 m^2_ψdΩ_p̅dΩ_γ^* dM_π N^2 |𝐩_p̅| |𝐩_γ^*|/3π × m_ΔΓ_Δ·∑_spin| ^T|^2/(M_π N^2 - m^2_Δ)^2 + (m_ΔΓ_Δ)^2, where M_π N stands for the invariant mass of its decay products π N or the varying mass of the Δ. In this work, we will also discuss the spin effects due to the triangle singularity as studied in Ref.<cit.>. Here we shall study the SDMEs of the Δ, which will be calculated in the quasi three-body decay process with taking the M_Δ at some fixed values and using the formula presented above. We shall consider the helicity states of the Δ in the c.m. frame of the γΔ system. The spin density matrix element ρ_λλ' of the Δ as a function of the γΔ invariant mass in the γΔ rest frame is defined as: ρ_λλ'(m_γΔ) = ∫ dΩ_p̅ dΩ_γ^* ∑_spin'^T_λ^T*_λ'/∫ dΩ_p̅ dΩ_γ^*∑_spin|^T|^2, where ∑ ' represents the summing of all the spins apart from the Δ's, and λ and λ' are the helicities of the final Δ. In this work, we will concentrate on the observable P_Δ defined as P_Δ = ρ_11-ρ_33/ρ_11+ρ_33, where ρ_11 and ρ_33 are the diagonal SDMEs of the Δ and corresponding to the probability of finding the Δ in the helicity 1/2 and 3/2, respectively. Therefore, the P_Δ describes the asymmetry of the probabilities of the Δ having the helicities 1/2 and 3/2. Here we want to study the M_γΔ dependence of the P_Δ, so the angular dependence has been integrated(see Eq.<ref>). According to the definition, the value of the P_Δ can vary from -1 to 1. If TS mechanism dominates this reaction, we expect the P_Δ should approach 1 near TS. The ρ^Δ_33 can be extracted from the angular distribution of its decay products, i.e. π or N, in its rest frame through<cit.> W(cosθ) = 1/4[ ( 1+4ρ_33) + ( 3-12ρ_33) cos^2θ], and the ρ_11 can be deduced from the relation ρ_11+ρ_33=1/2. 
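As a toy illustration of how ρ_33 and P_Δ could be extracted in practice, the Python sketch below generates cosθ samples from W(cosθ) and inverts the distribution through its second moment, ⟨cos²θ⟩ = 7/15 − (8/15)ρ_33, which follows directly from the form of W(cosθ) given above together with ρ_11 + ρ_33 = 1/2; the chosen true value ρ_33 = 0.05 and the sample size are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

def sample_costheta(rho33, n):
    """Accept-reject sampling of cos(theta) from W(cos theta) defined above."""
    w = lambda c: 0.25 * ((1 + 4 * rho33) + (3 - 12 * rho33) * c**2)
    wmax = max(w(0.0), w(1.0))
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0, size=n)
        keep = rng.uniform(0.0, wmax, size=n) < w(c)
        out.extend(c[keep])
    return np.array(out[:n])

rho33_true = 0.05                      # strong helicity-1/2 dominance, as expected near the TS
cos_t = sample_costheta(rho33_true, 200_000)

# moment estimator implied by W(cos theta): <cos^2 theta> = 7/15 - (8/15) rho_33
rho33_est = (7.0 - 15.0 * np.mean(cos_t**2)) / 8.0
P_delta = 1.0 - 4.0 * rho33_est        # = (rho_11 - rho_33)/(rho_11 + rho_33) with rho_11 = 1/2 - rho_33
print(f"rho_33 = {rho33_est:.3f} (true {rho33_true}),  P_Delta = {P_delta:.3f}")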
§ RESULTS AND DISCUSSION In this section, we shall study the TS mechanism in the reaction J/ψ→γp̅Δ and discuss its effects on both the invariant mass spectrums of final particles and the P_Δ. With using the package LoopTools<cit.>, the loop integral in Eq.(<ref>) can be evaluated numerically. Through Eq.(<ref>), we can obtain the distribution of the differential decay width versus the invariant mass M_γΔ by taking M_Δ=1.182, 1.232 and 1.282 GeV individually. The corresponding results are depicted in Fig.<ref>. As can be seen in the figure, the position of the peak caused by triangle singularity depends on the adopted mass of the Δ. Therefore, by selecting the events in different region of the M_π N, the peak position in the invariant mass spectrum will change if TS mechanism indeed plays an important role here. As discussed in Ref.<cit.>, the moving peak observed here is mainly attributed to the reason that the position of TS is determined by kinematic conditions and dependent on the invariant mass of the external particles of the triangle loop. Following the method in Ref.<cit.>, by adopting the value of M_Δ from 1.081 to 1.286 GeV, the position of the TS in M_γΔ can vary from 1.721 to 2.159 GeV. In fact, there are two kinds of singularities which are relevant here<cit.>. One is the normal two-body threshold cusp (TBTC), and the other is the TS. In the case of M_Δ=1.182 GeV(the red dashed line in <ref>), the small bump around 1.73 GeV is caused by the TBTC. While, in other cases there is only one peak structure since the TS and TBTC are close to each other and their effects are overlapped. Here it is also worth noting that the width of the structure is rather narrow(∼20 MeV), which is mainly ascribed to the narrow width of the intermediate states in the loop. The feature of the moving peak and the rather narrow width of the peak structure caused by the TS mechanism therefore offer the clues for identifying the TS mechanism in experiment. Since the Δ is unstable and has a relatively large width, it is also necessary to further discuss the effects of its finite width on the invariant spectrum. Based on the differential mass distribution formula in Eq.(<ref>), we present the mass distribution as a function of M_γπ N in Fig.<ref> with considering the finite width effect explicitly. It can be found that with including the width effects of the Δ the peak structure become wider due to an average of the effects of the moving TS. While, even in this case the width of the structure is only about 30 MeV, which is significantly smaller than the width of the N^* or Δ^* in this energy region and makes it distinguishable from ordinary resonance contributions. We can also calculate the decay branching ratio of J/ψ→γp̅Δ using Eq.(<ref>) with adopting m_Δ=1.232 GeV, and we obtain Br( J/ψ→γp̅Δ) = 1.506 × 10^-6. When futher considering the finite width of the Δ with taking Γ_Δ=0.117 GeV, the decay branching ratio can be obtained through Eq.(<ref>), then we get Br( J/ψ→γp̅Δ(→π N) ) = 1.058 × 10^-6. The production rate of this decay is within the measurable range at BESIII and also suitable to be explored at the Super Tau-Charm Facility. Next, let's focus on the spin effects induced by the TS mechanism on the Δ. According to the Coleman-Norton theorem<cit.>, TS occurs when the triangle loop process depicted in Fig.<ref> takes place in a classical manner. 
Specifically, in the rest frame of the γΔ system, if the internal ω, π and p are on-shell simultaneously, their three-momenta are collinear, and the π moves in the same direction as the proton and can catch up with it to fuse to the Δ, then the TS develops. Therefore, at TS the final Δ is predominantly produced by the intermediate π and proton moving in the same direction as the final Δ in the γΔ rest frame. In such a special condition, the Δ should be exclusively produced with helicity ±1/2. To understand this result, it is helpful to consider the π p elastic scattering in s-channel in the center of mass frame. In this process, even if the spin of the initial nucleon is unpolarized, the spin of the intermediate resonance is necessarily aligned when the spin of the intermediate resonance is larger than 1/2[In the center of mass frame, if we take z axis along the direction of the momentum of the initial proton, the magnetic quantum number of the z component of orbital angular momentum has to be zero due to the fact that the momenta of the π and p are along z axis. Therefore, by taking the spin quantization axis along z axis, the spin projection along z axis of the intermediate state can only be ±1/2 due to angular momentum conservation along z axis. For resonances with spin larger than 1/2, it means that its spin is aligned.]. In the J/ψ→γp̅Δ decay, since helicity is invariant under a boost from the Δ rest frame to the γΔ rest frame, the above arguments also hold in the γΔ rest frame. On the other hand, when the special kinematic conditions are not satisfied, i.e. departing the postion of the TS, the helicity of the Δ will not necessarily be ±1/2 anymore. These expectations can be verified by a numerical calculation of the P_Δ defined above. In Fig.<ref>, we show the P_Δ versus the M_γΔ with taking the mass of the Δ as 1.182, 1.232 and 1.282 GeV, respectively. As can be seen from the figures, the P_Δ peaks appear at the corresponding TS positions in accordance with the expectations using the various Δ mass. Here, we want to note that such a M_γΔ dependence is quite distinct from the expectation of a simple resonance model, since in resonance model the M_γΔ dependence mainly comes from the denominator of the resonance propagator and should be canceled in calculating the ratio in Eq.(<ref>). Therefore, the spin observable P_Δ can be used to verify whether the structure in invariant mass spectrum is caused by TS or a resonance. It is also interesting to notice that in the M_Δ=1.182 GeV case(red dashed line in Fig.<ref>) there is a small bump at about M_γΔ=1.72 GeV, corresponding to the pω threshold, in the P_Δ distribution. As explained in Ref.<cit.>, at pω threshold the production of the Δ with the helicities ±1/2 is also enhanced due to the kinematic condition. For the other cases, there is no such a structure due to the closeness of the pω threshold and the TS. When considering Δ decay, we expect the peak structure of the P_Δ should still exist but with a larger width. However, by selecting the events in different M_π N regions the phenomena discussed above should be observed in experiments. Finally, when taking into account the Δ decay, the decay process J/ψ→γp̅Δ(→π^0 p/π^+ n) through the TS mechanism involves the π^0 p→π^0 p or π^0 p→π^+ n scattering as a subprocess. According to Schmid theorem<cit.>, in the π^0 p→π^0 p case the contribution of the triangle loop diagram may be negligible compared to the corresponding tree level diagram. 
However, Ref.<cit.> demonstrates that Schmid theorem holds strictly only in the limit Γ_ω→ 0. Furthermore, by making a cut of the invariant mass M_γπ in the final states it can also reduce the contribution of the tree diagram<cit.>. In practice, it can also avoid the effects due to the Schmid theorem in this decay by choosing π^+ n as the final state in experiment. Therefore, we expect the main features of the TS mechanism predicted in this work should still be observable after considering the Schmid theorem. § SUMMARY In this work, we investigate the triangle singularity developed in the J/ψ→γp̅Δ process, where ω, π and p compose the internal triangle loop. According to our results, the TS mechanism may induce a structure with a width of 0.02∼ 0.03 GeV in the γΔ invariant mass spectrum. We find the position of the TS is dependent on the M_Δ or the invariant mass of the final π N. By adopting the value of M_Δ ranging from 1.081 to 1.286 GeV, the position of the TS in M_γΔ can vary from 1.721 to 2.159 GeV. Therefore, by performing a cut of the invariant mass of the final π N the TS and the corresponding peak in the M_γΔ distribution should be shifted accordingly. If the TS mechanism indeed plays an important role, we also expect that the spin observable P_Δ should take a relatively large value and have a peak versus the invariant mass M_γΔ near the TS. The predicted decay branching ratio for this process is Br( J/ψ→γp̅Δ(→π N) ) = 1.058 × 10^-6, which should be accessible at BESIII and future super Tau-Charm factory. We acknowledge the support from the National Natural Science Foundation of China under Grants No.U1832160, the Natural Science Foundation of Shaanxi Province under Grant No.2019JM-025, and the Fundamental Research Funds for the Central Universities. 99 Landau:1959fi L. D. Landau, Nucl. Phys. 13, no.1, 181-192 (1959), doi:10.1016/B978-0-08-010586-4.50103-6. Coleman:1965xm S. Coleman and R. E. Norton, Nuovo Cim. 38, 438-442 (1965), doi:10.1007/BF02750472. Guo:2019qcn F. K. Guo, Phys. Rev. Lett. 122, no.20, 202002 (2019), doi:10.1103/PhysRevLett.122.202002. Sakai:2020ucu S. Sakai, E. Oset and F. K. Guo, Phys. Rev. D 101, no.5, 054030 (2020), doi:10.1103/PhysRevD.101.054030. Molina:2020kyu R. Molina and E. Oset, Eur. Phys. J. C 80, no.5, 451 (2020), doi:10.1140/epjc/s10052-020-8014-7. Sakai:2020crh S. Sakai, H. J. Jing and F. K. Guo, Phys. Rev. D 102, no.11, 114041 (2020), doi:10.1103/PhysRevD.102.114041. Yan:2022eiy M. J. Yan, Y. H. Ge and X. H. Liu, Phys. Rev. D 106, no.11, 114002 (2022), doi:10.1103/PhysRevD.106.114002. Wu:2011yx J. J. Wu, X. H. Liu, Q. Zhao and B. S. Zou, Phys. Rev. Lett. 108, 081803 (2012), doi:10.1103/PhysRevLett.108.081803. Aceti:2012dj F. Aceti, W. H. Liang, E. Oset, J. J. Wu and B. S. Zou, Phys. Rev. D 86, 114007 (2012), doi:10.1103/PhysRevD.86.114007. Wu:2012pg X. G. Wu, J. J. Wu, Q. Zhao and B. S. Zou, Phys. Rev. D 87, no.1, 014023 (2013), doi:10.1103/PhysRevD.87.014023. Achasov:2015uua N. N. Achasov, A. A. Kozhevnikov and G. N. Shestakov, Phys. Rev. D 92, no.3, 036003 (2015), doi:10.1103/PhysRevD.92.036003. Du:2019idk M. C. Du and Q. Zhao, Phys. Rev. D 100, no.3, 036005 (2019), doi:10.1103/PhysRevD.100.036005. Liang:2019yir W. H. Liang, S. Sakai, J. J. Xie and E. Oset, EPJ Web Conf. 199, 04008 (2019), doi:10.1051/epjconf/201919904008. Jing:2019cbw H. J. Jing, S. Sakai, F. K. Guo and B. S. Zou, Phys. Rev. D 100 (2019) no.11, 114010, doi:10.1103/PhysRevD.100.114010. Wang:2013cya Q. Wang, C. Hanhart and Q. Zhao, Phys. Rev. Lett. 
111, no.13, 132003 (2013), doi:10.1103/PhysRevLett.111.132003. Liu:2013vfa X. H. Liu and G. Li, Phys. Rev. D 88, 014013 (2013), doi:10.1103/PhysRevD.88.014013. Nakamura:2019btl S. X. Nakamura and K. Tsushima, Phys. Rev. D 100, no.5, 051502 (2019), doi:10.1103/PhysRevD.100.051502. Liu:2020orv X. H. Liu, M. J. Yan, H. W. Ke, G. Li and J. J. Xie, Eur. Phys. J. C 80, no.12, 1178 (2020), doi:10.1140/epjc/s10052-020-08762-6. Braaten:2022elw E. Braaten, L. P. He, K. Ingles and J. Jiang, Phys. Rev. D 106, no.3, 034033 (2022), doi:10.1103/PhysRevD.106.034033. Achasov:2022onn N. N. Achasov and G. N. Shestakov, Phys. Rev. D 105, no.9, 096038 (2022), doi:10.1103/PhysRevD.105.096038. Guo:2019twa F. K. Guo, X. H. Liu and S. Sakai, Prog. Part. Nucl. Phys. 112, 103757 (2020), doi:10.1016/j.ppnp.2020.103757. Liu:2015taa X. H. Liu, M. Oka and Q. Zhao, Phys. Lett. B 753, 297-302 (2016), doi:10.1016/j.physletb.2015.12.027. Huang:2021olv Q. Huang and J. J. Wu, Phys. Rev. D 104, no.11, 116003 (2021), doi:10.1103/PhysRevD.104.116003. Szczepaniak:2015hya A. P. Szczepaniak, Phys. Lett. B 757, 61-64 (2016), doi:10.1016/j.physletb.2016.03.064. Guo:2016bkl F. K. Guo, U. G. Meißner, J. Nieves and Z. Yang, Eur. Phys. J. A 52, no.10, 318 (2016), doi:10.1140/epja/i2016-16318-4. Wang:2016dtb E. Wang, J. J. Xie, W. H. Liang, F. K. Guo and E. Oset, Phys. Rev. C 95, no.1, 015205 (2017), doi:10.1103/PhysRevC.95.015205. Xie:2016lvs J. J. Xie, L. S. Geng and E. Oset, Phys. Rev. D 95, no.3, 034004 (2017), doi:10.1103/PhysRevD.95.034004. Liang:2017ijf W. H. Liang, S. Sakai, J. J. Xie and E. Oset, Chin. Phys. C 42, no.4, 044101 (2018), doi:10.1088/1674-1137/42/4/044101. Pavao:2017kcr R. Pavao, S. Sakai and E. Oset, Eur. Phys. J. C 77, no.9, 599 (2017), doi:10.1140/epjc/s10052-017-5169-y. Roca:2017bvy L. Roca and E. Oset, Phys. Rev. C 95, no.6, 065211 (2017), doi:10.1103/PhysRevC.95.065211. Debastiani:2017dlz V. R. Debastiani, S. Sakai and E. Oset, Phys. Rev. C 96, no.2, 025201 (2017), doi:10.1103/PhysRevC.96.025201. Xie:2017mbe J. J. Xie and F. K. Guo, Phys. Lett. B 774, 108-113 (2017), doi:10.1016/j.physletb.2017.09.060. Bayar:2017svj M. Bayar, R. Pavao, S. Sakai and E. Oset, Phys. Rev. C 97, no.3, 035203 (2018), doi:10.1103/PhysRevC.97.035203. Sakai:2017hpg S. Sakai, E. Oset and A. Ramos, Eur. Phys. J. A 54, no.1, 10 (2018), doi:10.1140/epja/i2018-12450-5. Dai:2018hqb L. R. Dai, R. Pavao, S. Sakai and E. Oset, Phys. Rev. D 97, no.11, 116004 (2018), doi:10.1103/PhysRevD.97.116004. Nakamura:2019emd S. X. Nakamura, Phys. Rev. D 100, no.1, 011504 (2019), doi:10.1103/PhysRevD.100.011504. Liang:2019jtr W. H. Liang, H. X. Chen, E. Oset and E. Wang, Eur. Phys. J. C 79, no.5, 411 (2019), doi:10.1140/epjc/s10052-019-6928-8. Liu:2019dqc X. H. Liu, G. Li, J. J. Xie and Q. Zhao, Phys. Rev. D 100, no.5, 054006 (2019), doi:10.1103/PhysRevD.100.054006. Sakai:2020fjh S. Sakai, Phys. Rev. D 101, no.7, 074041 (2020), doi:10.1103/PhysRevD.101.074041. Shen:2020gpw C. W. Shen, H. J. Jing, F. K. Guo and J. J. Wu, Symmetry 12, no.10, 1611 (2020), doi:10.3390/sym12101611. Huang:2020kxf Q. Huang, C. W. Shen and J. J. Wu, Phys. Rev. D 103, no.1, 016014 (2021), doi:10.1103/PhysRevD.103.016014. Luo:2021hyy X. Luo, D. He, Y. Xie and H. Sun, Phys. Rev. D 104, no.7, 074016 (2021), doi:10.1103/PhysRevD.104.074016. Wang:2022wdm K. Wang, S. F. Chen and B. C. Liu, Phys. Rev. D 106, no.9, 094032 (2022), doi:10.1103/PhysRevD.106.094032. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], PTEP 2022, 083C01 (2022), doi:10.1093/ptep/ptac097. 
Bayar:2016ftu M. Bayar, F. Aceti, F. K. Guo and E. Oset, Phys. Rev. D 94, no.7, 074039 (2016), doi:10.1103/PhysRevD.94.074039. Lu:2014yba Q. F. Lü, R. Wang, J. J. Xie, X. R. Chen and D. M. Li, Phys. Rev. C 91, no.3, 035204 (2015), doi:10.1103/PhysRevC.91.035204. Zhao:2019syt C. G. Zhao, G. Y. Wang, G. N. Li, E. Wang and D. M. Li, Phys. Rev. D 99, no.11, 114014 (2019), doi:10.1103/PhysRevD.99.114014. Xie:2014zga J. J. Xie, J. J. Wu and B. S. Zou, Phys. Rev. C 90, no.5, 055204 (2014), doi:10.1103/PhysRevC.90.055204. Fan:2019lwc J. Q. Fan, S. F. Chen and B. C. Liu, Phys. Rev. C 99, no.2, 025203 (2019), doi:10.1103/PhysRevC.99.025203. Chen:2020szc S. F. Chen and B. C. Liu, Phys. Rev. C 102, no.2, 025202 (2020), doi:10.1103/PhysRevC.102.025202. Xie:2013db J. J. Xie and B. C. Liu, Phys. Rev. C 87, no.4, 045210 (2013), doi:10.1103/PhysRevC.87.045210. Liu:2017vij B. C. Liu and S. F. Chen, Eur. Phys. J. A 53, no.3, 39 (2017), doi:10.1140/epja/i2017-12229-2. Liu:2017acq B. C. Liu and S. F. Chen, Phys. Rev. C 96, no.5, 054001 (2017), doi:10.1103/PhysRevC.96.054001. Chen:2020zzz S. F. Chen and B. C. Liu, Chin. Phys. C 44, no.3, 034107 (2020), doi:10.1088/1674-1137/44/3/034107. Ling:2021lmq X. Z. Ling, J. X. Lu, M. Z. Liu and L. S. Geng, Phys. Rev. D 104, no.7, 074022 (2021), doi:10.1103/PhysRevD.104.074022. Xiao:2018kfx C. J. Xiao, D. Y. Chen, Y. B. Dong, W. Zuo and T. Matsuki, Phys. Rev. D 99, no.7, 074003 (2019), doi:10.1103/PhysRevD.99.074003. Thomas:1973uh D. W. Thomas, A. Engler, H. E. Fisk and R. W. Kraemer, Nucl. Phys. B 56, 15-45 (1973), doi:10.1016/0550-3213(73)90217-4. Hahn:2000jm T. Hahn, Nucl. Phys. B Proc. Suppl. 89, 231-236 (2000), doi:10.1016/S0920-5632(00)00848-3. Schmid:1967ojm C. Schmid, Phys. Rev. 154, no.5, 1363 (1967), doi:10.1103/PhysRev.154.1363. Debastiani:2018xoi V. R. Debastiani, S. Sakai and E. Oset, Eur. Phys. J. C 79, no.1, 69 (2019), doi:10.1140/epjc/s10052-019-6558-1.
http://arxiv.org/abs/2307.00223v1
20230701044509
Modeling of uniflagellated bacterial locomotion in unbounded fluid and near a no-slip plane surface
[ "Vahid Nourian", "Henry Shum" ]
physics.bio-ph
[ "physics.bio-ph", "cond-mat.soft", "physics.flu-dyn" ]
APS/123-QED vnourian@uwaterloo.ca henry.shum@uwaterloo.ca Department of Applied Mathematics, University of Waterloo, Waterloo, ON, N2L 3G1, Canada The accumulation of swimming bacteria near surfaces may lead to biological processes such as biofilm formation and wound infection. Previous experimental observations of Vibrio alginolyticus showed an interesting correlation between the bacterial entrapment near surfaces and the concentration of NaCl in the swimming medium. At higher concentrations of the ions, V. alginolyticus in the puller mode (with flagella in front of the body) tends to stay close to the surface whereas in the pusher mode (with flagella behind the body) it is more likely to escape from the surface. Motivated by these observations, we numerically investigate the locomotion of a uniflagellated model bacterium in unbounded fluid and near a planar surface. In our elastohydrodynamic model, the boundary integral technique and Kirchhoff rod model are employed respectively to calculate the hydrodynamic forces on the swimmer and model the elastic deformations of the flagellum consisting of a short, flexible hook and a long, relatively stiff filament. Our numerical results demonstrate that hydrodynamic interactions between the model bacterium and the solid wall cause the puller type to be attracted to the surface, whereas the pusher type is either repelled from or attracted to the surface depending on the flagellum and hook stiffness, the ion concentration (which determines the motor torque), and the cell body aspect ratio. We also show that large deformations of a very flexible hook can lead to an abrupt reorientation of the cell when the bacterium encounters a solid surface. These research findings can be used not only in understanding uniflagellated bacterial behavior but also in designing bacteria-mimicking micro-robots with biomedical and environmental monitoring applications. Modeling of uniflagellated bacterial locomotion in unbounded fluid and near a no-slip plane surface Henry Shum August 1, 2023 =================================================================================================== § INTRODUCTION Bacterial habitats are usually confined by solid boundaries. The hydrodynamic interactions between the bacteria living in aqueous media and the boundaries could significantly affect the swimming properties of the bacteria. Depending on the morphology and mode of motility of the species, the bacteria might be hydrodynamically trapped close to boundaries <cit.>. Such entrapment of swimming bacteria near surfaces may facilitate some biological processes such as biofilm formation, which is a major problem in many industries and biomedical sectors <cit.>. The search for practical solutions against biofilm formation requires a deep understanding of the pre-formation stages, including the hydrodynamic interactions between the bacteria and the contacted surfaces. Monotrichous bacteria, such as Vibrio alginolyticus, have a single flexible flagellum typically protruding from one pole of the cell body. The flagellar filament is approximately helical and is connected to the cell body via a very flexible and short hook. Peritrichous bacteria have multiple flagella distributed around the cell body. During steady swimming, the flagella generally form a single bundle behind the cell body <cit.>. 
In these bacteria, the flexibility of the hook to bending motion allows it to act as a universal joint, transmitting torque from the motor to the filament even when the bundling causes the filament to be at an angle of 90^∘ to the axis of the motor <cit.>. In contrast, the role of flexibility of the hook and flagellar filament during steady swimming is less evident for monotrichous bacteria. Indeed, some theoretical studies that model the locomotion of monotrichous bacteria assume that the hook and flagellum are rigid. With such models, Giacché et al. and Shum et al. demonstrated that bacteria can be trapped near no-slip surfaces and exhibit stable circular trajectories parallel to the surfaces <cit.>. Their results indicated that the tendency of bacteria to swim close to surfaces and their stable distances from the surfaces strongly depend on the cell body's shape and the flagellum length. It was also shown that the bacteria are still attracted to a surface and exhibit stable periodic orbits when they swim between two parallel surfaces or at the corner of a rectangular channel <cit.>. Even though these results are qualitatively consistent with the experimental observations for steady swimming, the assumption of a rigid filament rotating about a fixed axis means that such models cannot determine how flexural rigidity or swimming speed affects the behavior of the bacterium near surfaces. Moreover, the bacterial motor is not always steady, and the dynamic behavior that occurs when the motor speed changes is not fully captured by rigid flagellum models. For instance, previous experimental observations of V. alginolyticus show that the bacteria can swim in both forward (pushing) and backward (pulling) directions by switching the direction of the bacterial motor rotation while maintaining the same handedness of the helical flagellum <cit.>. The backward swimming speed is approximately 40% greater than the forward swimming speed <cit.>. Wu et al. <cit.> found that V. alginolyticus can be entrapped near a surface in both puller and pusher modes and that the entrapment behavior strongly depends on the cells' swimming speeds; the cell concentration near surfaces decreases with swimming speed for pushers and increases with swimming speed for pullers. It has also been observed that the cell body of V. alginolyticus sometimes abruptly reorientates when the motor switches from backward to forward swimming, a phenomenon known as flicking <cit.>. To study the impacts of hook deformation on monotrichous bacterial motility, Shum and Gaffney extended the model by assuming that a rigid flagellum is connected to a spheroidal rigid cell body via a flexible and naturally straight hook <cit.>. Unlike the most common rigid-flagellum models in which the flagellum shape is described with an amplitude growing factor (see <cit.>), they assumed that the filament is purely helical and connects tangentially to the hook, which bends during swimming to align the axes of the cell body and helical flagellum. They found that steady swimming of the bacteria near a surface is very sensitive to hook rigidity and the shapes of the cell body and flagellum. Even far from surfaces, effective swimming is possible only within a bounded range of hook stiffnesses scaled by the applied motor torque and hook length. In particular, an instability occurs when the motor torque exceeds a threshold. The instability causes the hook to bend and the flagellum to become misaligned with the cell body axis. Son et al.
<cit.> showed that a buckling instability was indeed involved in the flick of V. alginolyticus. Other theoretical studies of the role of hook flexibility identified a transition from straight to helical swimming as the rigidity decreases <cit.>. Jabbarzadeh and Fu <cit.> showed with simulations that a buckling instability of the hook, in conjunction with a flexible filament, can lead to flicking motion similar to those observed experimentally. The instability of the hook is also important for bundle formation in peritrichous bacteria, as shown by Riley et al. <cit.>. Park et al. numerically studied a flexible helical filament driven by rotations at one end <cit.> and demonstrated three dynamical states for the filament: stable twirling, unstable whirling, and stable overwhirling. The state that emerges is determined by physical properties such as the rotation frequency and the stiffness of the filament. To model the flicking behavior observed in V. alginolyticus, they assumed that the flagellar motor reversal causes the hook to temporarily relax into an unloaded state with a lower bending modulus, thereby increasing its susceptibility to buckling. In a subsequent study, a spheroidal cell body was included in the model and critical thresholds for the hook bending moduli and the rotation frequency for buckling instabilities were reported <cit.>. By assuming that the hook is in a relaxed mode for a short period of time, they investigated the effects of the hook stiffness and the rotation frequency on the buckling angles. In another study, Park et al. numerically simulated the locomotion of a uni-flagellated bacterium with a rigid cell body and a flexible flagellum close to a planar surface <cit.>. They studied the influences of geometrical parameters, rotation frequency, and flagellum stiffness on the swimming properties of the model bacterium near the surfaces. The sensitivity of the bacterial motion to the stiffness of the hook and filament equivalently manifests as sensitivity to the motor torque driving the flagellum. The modelling studies summarized above assumed either a prescribed motor torque or a prescribed rotation rate of the flagellum. It is known, however, that the torque generated by the bacterial motor follows a characteristic trend with rotation speed <cit.>; approximately constant torque is produced from 0 Hz up to a plateau knee frequency beyond which the torque drops linearly to zero. For V. alginolyticus, the motor is driven by Na^+ ions and Sowa et al. <cit.> demonstrated that the torque–speed curve is shifted with the concentration of NaCl in the swimming medium such that higher torques are produced at higher concentrations. In any case, the observed torque and motor rotation speed is determined by the intersection of the motor torque–speed curve and the load line, which describes the linear dependence of the hydrodynamic torque on the filament with the motor speed. The slope of the load line can change as the hook or filament deform; hence, the torque and motor speed are generally time dependent. In the present study, we use an elastohydrodynamic model to simulate the locomotion of a uniflagellated bacterium with a flexible hook and flagellum. A Regularized Stokes Formulation (RSF) <cit.> is accompanied by a Boundary Element Method (BEM) <cit.> to model the hydrodynamic interactions among the bacterium components and a no-slip wall. 
Furthermore, we assume that the flagellum and hook are inextensible and unshearable and follow a discretization of the Kirchhoff rod model <cit.>. It is expected that the hook's stiffness, the rest configuration, and the form of connection between the filament and the cell body considerably affect bacterial behavior. These effects and their importance on bacterial locomotion are further investigated in the first part of our work. We shed light on different aspects of locomotion of V. alginolyticus in bulk fluid and near a planar surface. Instead of applying a constant torque or constant rotational frequency to the motor, we assume that the flagellar motor follows the torque–speed curve obtained experimentally for V. alginolyticus at three levels of NaCl concentrations (3, 10, and 50 mM). Entrapment of three different strains of V. alginolyticus near surfaces is experimentally studied by Wu et al. <cit.>. Investigating the behavior of V. alginolyticus near a surface, in either puller or pusher modes, and different levels of NaCl concentrations is another aim of the present numerical study. Finally, the importance of the hook and flagellum flexibility and the cell body aspect ratio in near-surface entrapment of the uniflagellated bacteria is investigated. § MODELLING AND METHODS In this study, the uniflagellated model bacterium consists of a cell body, a flexible helical filament, and a very flexible straight or helical hook connecting the filament to a pole of the cell body. The dimensions of the model bacterium follow the experimental measurements reported by Son et al. for V. alginolyticus <cit.>. As illustrated in Fig. <ref>, the shape of the cell body is a spherocylinder, i.e., a cylinder capped by a hemisphere at each end. A small gap is considered between the cell body and the hook to avoid singularities in the numerical scheme. The position and configuration of the model bacterium are described by three reference frames, namely, the motor-fixed frame {e⃗_1^(M),e⃗_2^(M),e⃗_3^(M)}, body-fixed frame {e⃗_1^(B),e⃗_2^(B),e⃗_3^(B)}, and global frame {X⃗,Y⃗,Z⃗}. We initialize all simulations with the hook and filament in their rest configurations. In the reference frame of the flagellar filament, the curve describing the shape of the filament in the rest configuration is given by: Λ⃗(ξ)=ξe⃗_1^(F) +Ξ(ξ)[cos(2π/pξ)e⃗_2^(F)+sin(2π/pξ)e⃗_3^(F)]. The variable ξ parameterizes the distance along the axis of the right-handed helix with 0⩽ξ⩽ L_F. In this study, we compare two functional forms for Ξ(ξ): a constant Ξ(ξ) = a, which describes a pure helical filament, and one with a growing factor k_E such that Ξ(ξ)=a(1-e^-(k_Eξ)^2). In these equations, a and p represent the helix maximum amplitude and pitch, respectively. In the studied configurations, we use a hook of length 0.02l that is either straight or helical at rest and tangentially connected to the starting point of the filament. For the pure helical filament, the tangent constraint requires that the frame {e⃗_1^F),e⃗_2^(F),e⃗_3^(F)} rotate with respect to the motor fixed frame {e⃗_1^(M),e⃗_2^(M),e⃗_3^(M)}. Therefore, the axis of the filament in the initial and rest configurations does not align with the cell axis. All the lengths in the model bacterium are non-dimensionalized by the averaged cell body radius R=0.81 µm; this value is the radius of an equivalent sphere with the same volume as the cell body. Dimensions and mechanical properties of the model bacterium are given in Table <ref>. 
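For concreteness, the short Python sketch below evaluates the rest-configuration centerline of Eq. (<ref>) for both choices of Ξ(ξ), the pure helix and the amplitude-growth form; the numerical values of a, p, L_F and k_E used here are placeholders for illustration only and are not the entries of Table <ref>.

import numpy as np

# Rest-configuration centerline Lambda(xi): pure helix (Xi = a) versus the
# amplitude-growth form Xi(xi) = a * (1 - exp(-(k_E xi)^2)).
a, p, L_F, k_E = 0.2, 1.5, 5.0, 2.0      # amplitude, pitch, axial length, growth factor (illustrative)

def centerline(xi, grow=True):
    amp = a * (1.0 - np.exp(-(k_E * xi)**2)) if grow else a * np.ones_like(xi)
    phase = 2.0 * np.pi * xi / p
    # columns: components along e_1^(F), e_2^(F), e_3^(F)
    return np.column_stack([xi, amp * np.cos(phase), amp * np.sin(phase)])

xi = np.linspace(0.0, L_F, 400)
helix, grown = centerline(xi, grow=False), centerline(xi, grow=True)

# the grown form starts on the helix axis (convenient for a tangential hook joint),
# whereas the pure helix starts off-axis at radius a
print("start points:", helix[0], grown[0])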
In this study, motor torques, flagellum, and hook stiffnesses are non-dimensionalized with the maximum motor torque T_Max=3.8 pN µm <cit.> in V. alginolyticus. We describe the relative stiffnesses of the hook and flagellum k_ϱ(ϱ = H, F) as: k_ϱ=(EI)_ϱ/T_MaxR, where E is Young’s modulus of the material and I represents the moment of inertia of the cross sections of the hook and flagellum. §.§.§ Hydrodynamic interactions The Reynolds number associated with V. alginolyticus motility in unbounded pure water is ≈ 10^-4. In this Reynolds number, the inertia term in the Navier–Stokes equations is negligible and the flow field u⃗ is described by the incompressible Stokes equations - ∇ p+μΔu⃗+F⃗_b=0⃗, ∇·u⃗=0, where p is the fluid pressure, μ is the fluid viscosity, and F⃗_b is the body force field. In our formulation, the surface of the spherocylindrical cell body B is treated as a rigid no-slip boundary for the fluid domain whereas the flagellar hook and filament exert regularized forces and torques along their centerlines, contributing to F⃗_b. In a Lagrangian description, a point on the cell body surface B is represented by S⃗(θ,ϕ,t), where ϕ and θ are material coordinates on the cell's surface, and t is time. The elastic flagellum (including both the hook and the filament sections), which rotates and deforms in time (t), is represented by a three-dimensional space curve Γ(t), a point along which is denoted by γ⃗(s,t), where s is the arclength. Following these descriptions, the body force term F⃗_b due to the flagellum is expressed as: F⃗_⃗b⃗(x⃗,t)= ∫_Γf⃗_F(s,t)ψ_ϵ(x⃗-γ⃗(s,t)) ds + 1/2∇×∫_Γn⃗(s,t)Φ_ϵ(x⃗-γ⃗(s,t))ds, where f⃗_F and n⃗ are respectively the force per unit length and torque per unit length applied to the fluid along the flagellum and the evaluation point x⃗ may be anywhere in the fluid including on the flagellum. The regularized stokes formulation <cit.> is used here to reduce the slender flagellum to a one-dimensional distribution for computational efficiency, while approximately retaining a finite effective radius of the flagellum. The regularizing blob functions for the force and torque density are defined as ψ_ϵ(x⃗) =15ϵ^4/8π (‖x⃗‖^2+ϵ^2)^7/2, Φ_ϵ(x⃗) =3ϵ^2/4π (‖x⃗‖^2+ϵ^2)^5/2, where we set ϵ=d/2 to represent the effective flagellum radius. The velocity field due to the stress distribution f⃗_B on the cell body and the flagellum force and torque distributions in the presence of a no-slip plane wall at z=0 can be expressed in the form u⃗(x⃗,t)= ∮_BU⃗_s(f⃗_B,r⃗_B,r⃗̂⃗_B,0) dA + ∫_Γ[U⃗_s(f⃗_F,r⃗_F,r⃗̂⃗_F,ϵ)+U⃗_r(n⃗,r⃗_F,r⃗̂⃗_F,ϵ)] ds, where U⃗_s and U⃗_r are respectively the velocities of the (regularized) stokeslet and rotlet including the image system for the no-slip wall. See Appendix <ref> for the definitions of U⃗_s and U⃗_r. The angular velocity field is obtained from the curl of the velocity field and is expressed in an analogous form, namely, w⃗(x⃗,t)= ∮_BW⃗_s(f⃗_B,r⃗_B,r⃗̂⃗_B,0) dA + ∫_Γ[W⃗_s(f⃗_F,r⃗_F,r⃗̂⃗_F,ϵ)+W⃗_r(n⃗,r⃗_F,r⃗̂⃗_F,ϵ)] ds where W⃗_s and W⃗_r are angular velocities of the regularized stokeslet and rotlet including the image system, defined in Appendix <ref>. To evaluate the first integrals in. Eqs. <ref> and <ref>), the cell body surface B is discretized into N_B curved triangular elements. For this purpose, six nodes are required to construct an element: three nodes are the vertices and a node at the middle of each edge (see Fig. <ref>). We employ Gauss-Legendre quadrature method with 12 Gauss points to evaluate the integrals over the elements. 
Thus the surface of each element is mapped into a right-angle isosceles flat triangle before the integration. The stokeslets (red points in Fig. <ref>) are distributed according to Gauss-Legendre abscissas over each element, and their directions and magnitudes are approximated by using cardinal interpolation functions and interpolating the nodal force densities at the evaluation points (blue points) <cit.>. Unlike the flagellum, which we treat as a one-dimensional object, the contribution from the cell body is a weakly singular surface integral that can be computed numerically without regularization, following the scheme presented by Pozrikidis <cit.>. The hook and flagellum are discretized into N_H and N_F connected equal-length straight segments. We use eight Gauss points on each segment to evaluate the integral. We also approximate the stokeslets/rotlets (red points in Fig. <ref>) from the nodal force/torque densities at the evaluation points (blue points), by employing a second-order polynomial interpolation function. In the following, we let N_P_F=2(N_F+N_H)+1 denote the number of evaluation points on the flagellum and N_P_B indicate the number of evaluation points on the cell body. We apply the presented scheme to Eqs. (<ref>, <ref>). By satisfying the no-slip boundary condition on the swimmer, a linear relationship between the translational and angular velocities (i.e. u⃗_1,,u⃗_N_P_B+N_P_F,w⃗_1,,w⃗_N_P_F) and the nodal force and torque densities at the evaluation points (i.e., f⃗_1,, f⃗_N_P_B+N_P_F,n⃗_1,, n⃗_N_P_F) is constructed. §.§ Kinematics Since the cell body in our model is rigid, the translational velocity of each node on the cell body's surface can be written in terms of the translational U⃗^(B) and angular Ω⃗^(B) velocities of the cell body. Furthermore, the flagellar segments are only allowed to rotate with respect to the adjacent segments in the proposed model; thus the translational and angular velocities of each node on the flagellum can be expressed in terms of the relative angular velocities of the segments ω⃗_s^n, U⃗^(B) and Ω⃗^(B). Note that ω⃗_s^n represents the angular velocity of the nth segment with respect to the (n-1)st one. This approach treats the filament as inextensible, which Jabbarzadeh & Fu <cit.> showed could be much more computationally efficient than allowing extensibility while having an insignificant impact on the shape of the rotationally driven filament. The overall translational velocity of any given point X⃗^E on the swimmer is written as: U⃗(X⃗^E) = U⃗^(B)+Ω⃗^(B)×X⃗^E, X⃗^E∈on cell body, U⃗^(B)+Ω⃗^(B)×X⃗^E+∑_n=1^mω⃗_s^n×X⃗_rel^n, X⃗^E∈ mth segment, where X⃗_rel^n=X⃗^E-X⃗^(M)-γ⃗^n-1/2, n=1,...,N_H+N_F; γ⃗^n-1/2 is the position vector of the nth joint of the flagellum on the motor fixed frame, and X⃗^(M) denotes the position of the motor (flagellum base) on the cell body (see Fig. <ref>). The angular velocity of any given point X⃗^E on the flagellum is the sum of the relative angular velocities of the preceding segments and the cell body's angular velocity as: w⃗(X⃗^E) = Ω⃗^(B)+∑_n=1^mω⃗_s^n,X⃗^E∈ mth segment. Finally, Eqs. (<ref>,<ref>) are written in form of a system of linear equations to represent the translational and angular velocities of all evaluation points in terms of the unknowns ω⃗_s^n, U⃗^(B) and Ω⃗^(B) <cit.>. §.§ Elasticity In this study, the standard Kirchhoff rod model is employed to simulate the deformations of the hook and the flagellum. 
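Before detailing the elastic model further, the kinematic bookkeeping just described can be summarised in a short sketch. It works entirely in the laboratory frame and takes the joint positions and the cell-body reference point as explicit inputs, which is a simplifying assumption relative to the motor-frame conventions used above.

```python
import numpy as np

def point_velocities(X_E, m, U_B, Omega_B, omega_s, joint_pos, X_ref):
    """Translational and angular velocity of a point X_E on the m-th segment.

    The point inherits the rigid-body motion of the cell (U_B, Omega_B about
    the reference point X_ref) plus the relative angular velocities
    omega_s[0..m-1] of all preceding segments, each acting about its joint
    position joint_pos[n]. Setting m=0 recovers a point on the rigid cell body.
    """
    X_E = np.asarray(X_E, float)
    U = np.asarray(U_B, float) + np.cross(Omega_B, X_E - np.asarray(X_ref, float))
    w = np.asarray(Omega_B, float).copy()
    for n in range(m):
        U += np.cross(omega_s[n], X_E - joint_pos[n])
        w += omega_s[n]
    return U, w

# Tiny example: a point on the 2nd segment of a swimmer translating along x
# while the first two segments spin about z (all values are placeholders).
U, w = point_velocities(X_E=[1.0, 0.2, 0.0], m=2,
                        U_B=[0.1, 0.0, 0.0], Omega_B=[0.0, 0.0, 0.0],
                        omega_s=np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.5]]),
                        joint_pos=np.array([[0.8, 0.0, 0.0], [0.9, 0.0, 0.0]]),
                        X_ref=[0.0, 0.0, 0.0])
```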
Since the stretching and shearing of the flagellum are negligible in comparison with the bending, we assume that the hook and the filament are inextensible, unshearable, and only allowed to bend and twist. The center line of the flagellum at the initial and rest configurations is represented by the space curve γ⃗(s,0) (equivalent to Λ⃗(ξ) in Eq. <ref> on the global frame). To describe the orientation of the material points in the cross-section of the flagellum, a right-handed orthonormal frame {D⃗_1(s,t),D⃗_2(s,t),D⃗_3(s,t)} is introduced. For simplicity, it is assumed that D⃗_3(s,t) is always tangent to the curve γ⃗(s,t) i.e. D⃗_3(s,t)=γ⃗'(s,t). Let κ⃗(s,t)=(κ_1,κ_2,κ_3) denote the twist vector at point s and time t. Then, following the linear theory of the elasticity, the internal moments N⃗(s,t) transmitted along the flagellum can be estimated as <cit.>: N⃗(s,t)= EI[(κ_1(s,t)-κ̂_̂1̂(s))D⃗_1(s,t)+(κ_2(s,t)-κ̂_̂2̂(s))D⃗_2(s,t) + Υ(κ_3(s,t)-κ̂_̂3̂(s))D⃗_3(s,t)], where κ⃗̂⃗(s) represents the twist vector at the rest, and Υ=GJ/EI is the ratio of the twisting stiffness GJ to the bending stiffness EI. Here, it is assumed that the flagellar filament and the hook are isotropic, homogeneous, and Υ=1 <cit.>. We discretize the hook and filament into N_H and N_F segments, respectively, by introducing uniform intervals Δs_H=l_H/N_H and Δs_F=(l-l_H)/N_F of the Lagrangian variable s. The length of the hook is l_H and the total length of the flagellum (hook and filament) is l. In our model, the triads D⃗_î^n (n = 1, 2, ..., N_H+N_F, î = 1, 2, 3), which are updated over the time as the segments rotate, represents the orientation of the nth segment of the flagellum (see Figs. <ref> and <ref>). The segment with index n=N_H is the last segment of the hook and the index n=N_H+1 represents the first segment of the filament. Since the segments on the hook are identical in length, the principal square root of the rotation matrix M^n that maps the triad {D⃗_î^n} to the triad {D⃗_î^n+1} is used to describe the orientation at the joint between neighboring segments, as shown in Fig. <ref>). A similar approach is used for the segments on the filament, M^n=∑_î=1^3D⃗_î^n+1(D⃗_î^n)^T, D⃗_î^n+1/2=√(M^n)D⃗_î^n. By discretizing Eq. (<ref>) and following the scheme used by Lim et al. <cit.>, the internal moments at the flagellum joints are estimated by: N_î^n+1/2=E_ϱI_ϱ(D⃗_ĵ^n+1-D⃗_ĵ^n/Δs_ϱ·D⃗_k̂^n+1/2-κ̂_î^n+1/2), N⃗^n+1/2=∑_î=1^3N_î^n+1/2D⃗_î^n+1/2, where subscript ϱ=H, F distinguishes the hook from the filament (ϱ=H for segment indices n=1,...,N_H and ϱ=F for n=N_H+1,...,N_H+N_F-1); (î,ĵ,k̂) is any cyclic permutation of (1,2,3); N⃗^n+1/2 is the internal moment transmitted from nth to (n+1)st segment; κ̂_î^n+1/2 represents the twist vector's îth component in the rest configuration, and n=1,...,N_H+N_F is the segment number. Above, N⃗^1/2 denotes the internal moment transmitted from the rotor to the first segment of the hook. In the present scheme, the magnitude and direction of N⃗^1/2 are estimated by employing a sub-iterative method to satisfy the Kirchhoff rod model and impose the motor torque. See <cit.> for the details about this method. §.§ Steric repulsive force When the bacterium swims so close to the wall, the cell body and the flagellum are susceptible to touching the surface. Any collision causes the model to break, therefore we apply steric repulsive forces between the components and the wall at a short-range distance to keep them away. 
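The half-step triads and discrete internal moments introduced above can be sketched in a few lines. Here the principal square root of the rotation matrix is taken through a rotation-vector representation (via scipy); the stiffness, segment length, and rest-twist values in the example are placeholders, and the isotropic case Υ=1 is assumed as in the text.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def joint_moment(D_n, D_np1, ds, EI, kappa_rest=(0.0, 0.0, 0.0)):
    """Discrete internal moment transmitted across the joint between
    segments n and n+1 of the Kirchhoff rod (isotropic case, GJ = EI).

    D_n and D_np1 are 3x3 arrays whose rows are the director triads
    {D_1, D_2, D_3} of the two segments, expressed in the lab frame.
    """
    # Rotation taking the triad of segment n to that of segment n+1,
    # and its principal square root via half of the rotation vector.
    M = D_np1.T @ D_n
    half = Rotation.from_rotvec(0.5 * Rotation.from_matrix(M).as_rotvec())
    D_half = half.apply(D_n)                 # rows are D_i^{n+1/2}

    N_vec = np.zeros(3)
    for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):   # cyclic (i, j, k)
        N_i = EI * ((D_np1[j] - D_n[j]) / ds @ D_half[k] - kappa_rest[i])
        N_vec += N_i * D_half[i]
    return N_vec

# Placeholder example: two triads differing by a small twist about D_3;
# the result is approximately EI * (twist per unit length) along D_3.
D_n = np.eye(3)
D_np1 = Rotation.from_rotvec([0.0, 0.0, 0.05]).apply(np.eye(3))
print(joint_moment(D_n, D_np1, ds=0.05, EI=1.0))
```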
A truncated Lennard-Jones potential is employed in our model to calculate the potential energy and then the corresponding repulsive force between the wall and the evaluation points on the swimmer <cit.>. Specifically, we calculate the magnitude of the repulsive force between the ith evaluation point and the wall by finding the derivative of the Lennard-Jones potential (U_LJ^i(h^i)) with respect to the vertical distance of the point from the surface (h^i). Then, this force is applied to the point in the direction of the surface's normal vector. U_LJ^i(h^i)=F_sσ/6[(σ/h^i)^12-(σ/h^i)^6]H(2^1/6σ-h^i), F⃗_rep^i=-dU_LJ^i(h^i)/dh^ie⃗_3, where i=1,...,N_P_B+N_P_F. We use a Heaviside step function H to deactivate the repulsive force when the distance to the wall is more than the defined threshold 2^1/6σ. In these equations, σ is the cut-off distance, which determines the kind of interaction that occurs, and F_s is the repulsion strength. Our tests indicate that the magnitude of the repulsion strength does not have a significant impact on the locomotion of the bacterium as long as it is large enough to avoid collisions. For this reason, we choose a moderate value for F_s that guarantees no collisions. Moreover, we choose 2^1/6σ to be somewhat greater than the effective radius of the flagellum (ϵ_F) to ensure that the flagellum does not touch the wall. §.§ Torque and force balance equations By neglecting inertia and assuming that there is no gravity acting on the bacterium, the model bacterium is torque- and force-free <cit.>. In the total force balance Eq. (<ref>), the sum of the steric repulsive forces and the integrals of viscous force densities over the elements (B^n) and along the segments (Γ^n) is set to zero. ∑_n=1^N_B∫_B^nf⃗_BdA_n+∑_n=1^N_H+N_F∫_Γ^nf⃗_Fds_n+∑_i=1^N_P_B+N_P_FF⃗_rep^i=0. Likewise, the total torque about the center of the cell body is zero according to the torque balance equation ∑_n=1^N_B∫_B^n(S⃗-X⃗^(B))×f⃗_BdA_n+∑_n=1^N_H+N_F∫_Γ^nn⃗ds_n + ∑_n=1^N_H+N_F∫_Γ^n(X⃗^(M)+γ⃗-X⃗^(B))×f⃗_Fds_n+∑_i=1^N_P_B+N_P_FT⃗_rep^i=0, where T⃗_rep^i is the torque applied to the center of the cell body due to the steric repulsive force at the ith evaluation point. The first integral in Eq. (<ref>) represents the hydrodynamic torques induced by the viscous force densities on the cell body and the last two integrals describe the total torque densities on the flagellum and the torque induced by the viscous force densities, respectively. To complete the system of equations, we balance the viscous torques about each joint of the flagellum with the transmitted internal moment estimated by Eq. (<ref>): ∑_n=m^N_H+N_F[ ∫_Γ^n(γ⃗-γ⃗^m-1/2)×f⃗_Fds_n+∫_Γ^nn⃗ds_n +T⃗_rep^2n+T⃗_rep^2n+1]+N⃗^m-1/2=0, where m=1,...,N_H+N_F, and T⃗_rep^2n and T⃗_rep^2n+1 are the torques applied to the mth joint of the flagellum due to the steric repulsive forces at the middle and end of the nth segment. In fact, the torque balance Eq. (<ref>) is written for all the joints of the flagellum, thus N_H+N_F equations are obtained in total. By employing the Gauss-Legendre quadrature method, Eqs. (<ref>-<ref>) are expressed in terms of the nodal force and torque densities at the evaluation points (i.e., f⃗_1,...,f⃗_N_P_B+N_P_F, n⃗_1,...,n⃗_N_P_F). §.§ Overview As already stated, the motor torque in our model is adjusted dynamically according to the rotation frequency of the flagellum. In this regard, the torque–speed curves in V.
alginolyticus at three different concentrations of NaCl <cit.> are non-dimensionalized and employed here to apply a proper torque at a given concentration and motor frequency. If we split each torque–speed curve into two pieces, high torque–low speed and low torque–high speed, and suppose that the relationship is linear in each piece, a piecewise function can be constructed to relate the motor torque to its rotation frequency. The two lines intersect at the crossover point and the motor torque at a given frequency is the minimum of the two linear functions at that frequency. Specifically, for the three levels of NaCl concentration, we model the torque–frequency relationships as: T_H(ν) =min{-1.203ν+1,-25.197ν+2.543}, T_M(ν) =min{-1.691ν+0.789,-24.572ν+1.562}, T_L(ν) =min{-1.071ν+0.551,-33.079ν+1.164}, where ν represents the dimensionless motor frequency (revolutions per unit time), and T_H, T_M, and T_L denote the dimensionless motor torques at NaCl concentrations of 50, 10, and 3 mM, respectively. The equivalent experimental data were unavailable for motors running in reverse. For these cases, we assume that the torque–speed relationships are the same but with opposite signs. As shown in Fig. <ref>, the axial direction e⃗_1^(M) indicates the rotor orientation, and therefore is fixed with respect to the cell body, whereas the orientations e⃗_2^(M),e⃗_3^(M) change in time with the rotation of the rotor. To actuate the flagellum complex, the projection of the internal moment (at the joint connecting the hook to the rotor) onto e⃗_1^(M) is set equal to the motor torque at each time step, i.e.: N⃗^1/2·e⃗_1^(M)=T_i(ν), i=L,M,H. In this equation, e⃗_1^(M) is known and T_i(ν) is obtained from Eq. (<ref>). A sub-iterative method (see the details in <cit.>) is employed here to solve Eq. (<ref>) for N⃗^1/2. Eqs. (<ref>-<ref>, <ref>, <ref>-<ref>) together constitute a system of linear equations in which the components of the nodal force and torque densities at the evaluation points, the angular velocities of the segments, and the angular and translational velocities of the cell body are unknowns. In this study, a linear solver in Matlab is employed to solve the equations and determine the unknowns. Based on the obtained velocities, the configuration of the model bacterium is updated accordingly. Combining a Kirchhoff rod model with a boundary element method leads to a stiff set of ODEs. Using general implicit schemes to solve these equations is computationally expensive. Instead, an explicit multirate time integration scheme is employed in this study. The original scheme, suggested by Bouzarth et al. <cit.>, includes spectral deferred corrections that we do not implement in our study. In this approach, the angular and translational velocities of the cell body and the nodal force densities on the cell body are updated on a coarse time step Δ t_coarse while the angular velocities of the flagellar segments and the nodal force and torque densities on the flagellum are updated on a fine time step Δ t_fine. This procedure considerably reduces the computational time <cit.>. § RESULTS §.§ Unbounded fluid We first investigate the locomotion of the model bacterium in free space. We compare the puller and pusher modes in terms of swimming speed and then explore the impacts of the shape of the hook and the model for the hook-flagellum transition on the swimming properties of the model bacterium. Except where otherwise noted, all simulations use a naturally straight hook and a purely helical shape for the flagellar filament.
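For concreteness, the piecewise torque–speed law used throughout the simulations can be written out in code with the coefficients quoted above. The linear "load line" used to locate the steady operating point is only an illustrative stand-in for the flagellum's hydrodynamic torque–speed relation; its coefficient is a placeholder, not a fitted value.

```python
from scipy.optimize import brentq

# Dimensionless motor torque as the minimum of two linear branches,
# with the coefficients quoted in the text for 50, 10, and 3 mM NaCl.
MOTOR_CURVES = {
    "high":   ((-1.203, 1.0),   (-25.197, 2.543)),
    "medium": ((-1.691, 0.789), (-24.572, 1.562)),
    "low":    ((-1.071, 0.551), (-33.079, 1.164)),
}

def motor_torque(nu, level="medium"):
    (a1, b1), (a2, b2) = MOTOR_CURVES[level]
    return min(a1 * nu + b1, a2 * nu + b2)

def operating_point(load_coeff, level="medium"):
    """Steady motor frequency where the motor curve meets a linear load
    torque T_load = load_coeff * nu (an assumed stand-in for the flagellum)."""
    nu_star = brentq(lambda nu: motor_torque(nu, level) - load_coeff * nu, 0.0, 1.0)
    return nu_star, motor_torque(nu_star, level)

print(operating_point(load_coeff=5.0))   # placeholder load coefficient
```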
§.§.§ Swimming speed To calculate the average swimming speed of the model bacterium in the unbounded space, we follow Higdon's formulation <cit.> given as: U=(Ω⃗^(B)-ω⃗_s^0) · U⃗^(B)/||Ω⃗^(B)-ω⃗_s^0||, where ω⃗_s^0=2πνe⃗_1^(M) is the angular velocity of the rotor. This formula is valid for motions that are constant in the motor-fixed frame, which would be the case once the flagellum reaches a steady shape and simply rotates about the motor axis. We apply constant motor torques, ranging from 0 to 1 in dimensionless units, and measure the steady swimming speeds in the puller and pusher modes. As shown in Fig. <ref>A, the swimming speed is approximately linear with the motor torque and it is slightly higher for the pusher than the puller at a given motor torque. Interestingly, for a given motor torque and mode, the swimming speed is almost independent of the flagellum stiffness k_F as long as the rotation of the flagellum is stable (when the flagellar rotation is unstable, changes in stiffness become important). Since the rotation of the flagellum becomes unstable at lower stiffnesses and/or higher motor torques, there are no data points for the puller with k_F = 0.5 and T = 1, the pusher with k_F=1, T=1, or the pusher with k_F=0.5, T≥0.6. It is worth stating that the pusher flagellum exhibits overwhirling rotation <cit.> and the puller flagellum bends toward the cell body and tends to wrap around it <cit.> in the mentioned exceptional cases. For the same set of simulations with the motor torque varying from 0 to 1, we also track the rotation speeds of the flagellar motor (2πν) for the different stiffnesses of the flagellum and swimming modes. The obtained results indicate that the flagellum rotates slightly faster in the pusher than the puller at a given motor torque, as depicted in Fig. <ref>B. Equivalently, for a fixed motor speed, the puller requires a larger motor torque than the pusher. A closer inspection of the filament shape indicates that the average curvature of the filament in the puller mode is slightly smaller than in the pusher mode. In other words, the puller filament has a slightly larger amplitude and/or pitch than the pusher one if we assume that the filament maintains an approximately helical shape during the rotation. This is consistent with simulation results of Park et al. <cit.>, who reported that the helical pitch and radius are smaller than their resting values during pushing motion and larger during pulling motion. Consequently, we find that the pusher flagellum rotates faster at a given motor torque because it has lower hydrodynamic resistance to axial rotations. The steady shapes of the rotating pusher and puller flagella on the body fixed-frame and over one period of the motor rotation are displayed in Fig. <ref>B. Unlike the swimming speed, the rotation frequency is affected by the stiffness of the filament. For pushers, increasing the flagellum stiffness decreases the motor speed at constant torque. For pullers, the opposite correlation is observed. The slopes of the curves in Fig. <ref>B indicate the effective rotational drag coefficient of the flagellum depends on the flagellum stiffness. In contrast, the slopes of the curves in Fig. <ref>A are found to be relatively insensitive to the flagellum stiffness. This is an interesting observation because even though the flagellum deforms at higher torques and the rotational drag coefficient changes, the swimming speed maintains a linear relationship with the torque. 
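The speed measure used in this subsection is simple enough to state in a couple of lines; the numerical inputs in the example are arbitrary placeholders rather than simulation output.

```python
import numpy as np

def average_swimming_speed(U_B, Omega_B, omega_rotor):
    """Higdon-type average speed: component of the body velocity along the
    net rotation axis, U = (Omega_B - omega_rotor) . U_B / |Omega_B - omega_rotor|."""
    axis = np.asarray(Omega_B, float) - np.asarray(omega_rotor, float)
    return float(np.dot(axis, U_B) / np.linalg.norm(axis))

# Placeholder values: body translating along x while rotor and body
# counter-rotate about x.
print(average_swimming_speed(U_B=[0.02, 0.001, 0.0],
                             Omega_B=[0.4, 0.0, 0.0],
                             omega_rotor=[-6.0, 0.0, 0.0]))
```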
The torque–speed curves are characteristics of the flagellar motor and are independent of the properties of the attached flagellum. These curves at three concentrations of NaCl are depicted in Fig. <ref>B. The intersection of the flagellum's torque–speed curves (colored lines) and the motor's torque–speed curves [Eq. (<ref>)] represents the motor's torque and speed during steady swimming. For a given NaCl concentration, Fig. <ref>B shows that pullers have a higher motor torque and lower motor speed than pushers with the same flagellum stiffness. In most of the remaining simulations, the instantaneous motor torque and frequency is computed at each time step to lie on the motor's torque–speed curve while simultaneously matching the hydrodynamic torque due to the rotation of the flagellum. In unbounded space, the shape of the flagellum generally stabilizes so the point of intersection converges to the equilibrium point presented in Fig. <ref>B. Any change in the motor load, due to the bacterium approaching a solid surface, for instance, alters the equilibrium point on the curve. Accounting for this behavior in our model is important because we can accurately study the locomotion of the bacteria in various swimming environments where changes in motor torque could be significant. §.§.§ Comparison of hook and flagellum shapes The hook, which acts as a universal joint to transmit the torque from the rotor to the flagellar filament, could be straight or helical <cit.> in the rest configuration. Motivated by this difference, we numerically study the impacts of the hook shape on the stable motor torque and the swimming speed in an unbounded fluid with a medium concentration of NaCl. The actual molecular structure of the flagellar filament is uniform along its length, thus a purely helical shape for the filament can be expected when it is stationary. In rigid models, the filament shape is usually described with an amplitude envelope growth rate k_E to align the flagellum's axis with the cell body's axis, as in Eq. <ref> with the second functional form for Ξ. Such an assumption is widely used in the literature but its effect on the swimming properties has not been quantitatively compared with the purely helical filament. To conduct this comparison, three model bacteria A, B, and C with different filament and hook configurations are taken into account. In the first configuration (A), the hook is straight and the rest shape of the filament is purely helical. In the second model bacterium (B), we assume that the hook is straight and the helical filament's rest shape is described using an amplitude growing rate k_E. In the third one (C), the hook's rest shape is helical with the same properties as the filament, and the purely helical filament is tangentially connected to the hook, as shown in Fig. <ref>. The obtained results indicate that in configuration B, the model bacterium reaches steady-state quickly and its steady motor torque is the smallest among the cases in the puller and pusher modes (see Tab. <ref>). The transient time in our study is defined as the dimensionless time it takes for the speed calculated by (<ref>) to stabilize to within 2% of its steady constant value. This represents the time required to relax to the steady swimming configuration from the initial conditions, which are at equilibrium in the absence of motor torques. In configurations A and C, the flagellum and the cell body's long axes are not initially aligned, thus longer transient times are obtained for these cases. 
After a few rotations of the flagellum, the angle between those axes eventually decreases and the swimming properties become steady. Closer inspection indicates that the flagellum curve remains stationary with respect to the motor-fixed frame during the steady state. As shown in Fig. <ref>, the axis of the flagellum does not align with the motor axis in configurations A and C, therefore off-axis rotation of the flagellum results in slightly different swimming properties. Even though the rotation speed of the pusher flagellum in configuration B is the highest among the cases, its swimming speed is the lowest, because the average amplitude of the filament in this configuration is smaller than in the others. In this regard, the pusher-type model bacterium with a helical hook has the highest swimming speed whereas it has the lowest motor speed. Moreover, the variation of the swimming speeds in the puller modes is small, but the differences in the steady motor torque are relatively considerable. This is an interesting observation because the model bacterium with configuration B swims as fast as the other model bacteria by applying a smaller torque to the flagellum. §.§ Near a surface Motivated by the behavior of V. alginolyticus, we mainly focus on the tendency of a uniflagellated model bacterium to a planar surface in this section. Specifically, the impacts of the swimming modes, NaCl concentration, flagellar filament/hook stiffness, initial condition, and the cell body aspect ratio on the boundary accumulating behavior and escaping angles of the model bacterium are investigated. §.§.§ Effects of concentration of sodium chloride V. alginolyticus utilizes a Na^+-driven flagellar motor to rotate the flagellum complex in the CW and CCW directions. The availability of sodium chloride in the swimming medium limits the performance of the motor. For this reason, the torque–speed relationship varies with the sodium chloride concentration, as expressed in Eq. <ref> and plotted in Fig. <ref>B. The swimming trajectories in Fig. <ref> demonstrate that the model bacterium in the pusher mode tends to escape from the surface at all three concentrations of NaCl. The escaping angle α_e increases significantly with sodium concentration; at low concentrations, the bacterium swims almost parallel to the surface (α_e≈ 1^∘). These results are consistent with the experimental observation of V. alginolyticus in which higher concentrations of cells are observed near the surface under lower concentrations of Na^+ ions in the swimming medium <cit.>. This trend can be explained by the dipolar structure of the flow field generated by a pusher bacterium. When such a swimmer is approximately parallel to a wall, the image flow field due to the no-slip boundary pulls the swimmer towards the wall. The hydrodynamic attraction is strongest at the centre, around the hook, causing the hook to bend and the cell body to point away from the wall. The result is that the bacterium tends to swim away from the wall. At higher concentrations of NaCl, the hydrodynamic stresses are larger so the hook bends more. In our simulations, we found that the maximum angles between the cell body's long axis and the surface were 14.05^∘, 8.79^∘ and 7.01^∘, respectively, in decreasing order of NaCl concentrations. As displayed in Fig. <ref>, the model bacterium in the puller mode is attracted to the surface, regardless of the Na^+ concentration. 
Our numerical results show that the model bacterium moves on orbits of higher curvature at higher concentrations of NaCl. The cell body is almost parallel to the surface in all cases but becomes more parallel at higher concentrations; the mean angle between the cell body's long axis and the surface at high, medium, and low concentrations of NaCl are 2.66^∘, 3.32^∘ and 4.11^∘, respectively. This trend with NaCl concentration is consistent with the hydrodynamic effects of the force dipole image system in the boundary. For pullers, the boundary-induced velocity pushes the hook away from the wall, reducing the inclination of the cell body away from the wall. Fig. <ref> represents the variation of the motor torque as the model bacterium swims next to the surface. After a brief transition period from the initial condition to a quasi-steady swimming configuration, the mean value of the motor torque does not change significantly as the model bacterium swims toward or escapes from the surface. However, the motor torque oscillates with each revolution of the flagellum due to variations in the hydrodynamic and steric interactions with the wall. The amplitude of the motor torque oscillations is largest when the flagellum is close to the surface. In all cases the mean motor torque is higher in the puller mode, which indicates that the motor speed is lower in the puller mode. This is consistent with the results discussed for free space swimming in Section <ref> and is enhanced by the fact that pullers tend to swim closer to the surface, where the increased drag reduces the motor speed. §.§.§ Sensitivity to initial conditions We showed that the escaping angle of the pusher-mode model bacterium varies with the concentration of sodium chloride. To ensure that this near-surface behavior is independent of the initial distance and orientation, we compare the swimming trajectories of the model bacterium when it is initially placed in different distances (H_0=1.1,3) and angles (α_0=15^∘,45^∘) from the surface. As shown in Fig. <ref>, the model bacteria mainly remain near the surface in the lowest concentration of NaCl (green trajectories), and conversely, they exhibit a weak tendency to remain longer near the surface in the highest concentration of NaCl (see red trajectories). The escaping angles are quantitatively compared in Tab. <ref>. These results clearly illustrate that regardless of the initial condition, pusher-mode bacteria strongly tend to swim close to the surface in the lower concentrations of the ions. Such a correlation between the concentration and the tendency to mainly move next to the surface is consistent with the experimental observations of Wu et al. <cit.>. The obtained results in Tab. <ref> also demonstrate a meaningful correlation between the escaping angle and the initial distance and attack angle. In this respect, the bacterium escapes from the surface with a larger angle as it is initially placed closer to the surface and/or approaches the surface with a larger attack angle. Comparing the obtained angles indicate that the dependency of the escaping angle to the initial condition is notable in the high concentration of NaCl, and it is fairly negligible in the medium and low concentrations. §.§.§ Effects of hook and filament flexibility The hydrodynamic interactions between the uniflagellated bacteria and a planar surface have already been studied well when the flagellum is assumed to be a single rigid helix <cit.>. 
This simplification is adopted in many studies, but the flexibility of the hook and the flagellum could impact attributes such as the mean swimming speed and the orientations of the cell body and the flagellum with respect to the surface. Therefore, hook and flagellum flexibility could change the boundary accumulating behavior of the bacteria. To fill this research gap and better understand the behavior of different flagellated microorganisms near surfaces, we study the locomotion of the model bacterium near the surface when different stiffnesses are assigned to the flagellar filament and the hook. As shown in Fig. <ref>, decreasing the rigidity of the filament and/or the hook helps the pusher-mode model bacterium to escape from the surface more easily. When the cell body is pushed near the surface, the viscous torque tends to tilt the cell body upwards. If the filament and hook are stiff, then the cell body cannot rotate because the flagellum is obstructed by the wall. When the filament or hook is more flexible, they bend easily and allow the cell body to rotate away from the wall and escape. Motivated by the obtained results, which indicate that the escaping angle decreases with increasing flagellum or hook rigidity, we compare the behavior of two model bacteria with a flexible and a rigid flagellum, respectively. For both models, we define the flagellum helical shape with an amplitude envelope k_E to align the flagellum with the cell body axes. This shape is used instead of the purely helical filament so that the axis of the rigid flagellum is aligned with the axis of the cell body, as required for effective propulsion. For the flexible model, we use the stiffness k_F = 3.23 and do not consider a separate hook structure. Interestingly, our simulations (Fig. <ref>) show that the model bacterium with a rigid flagellum is entrapped by the surface whereas the bacterium with a flexible flagellum escapes from the surface with a small escaping angle. The simulations are continued beyond the trajectories shown to ensure that the bacteria escape or remain at the surface as described. These simulations demonstrate that the flexibility of the flagellum in pusher-mode bacteria likely facilitates the escape from surfaces. Comparing the trajectories in the puller mode (Fig. <ref>) shows that the model bacterium moves on smaller circular orbits when it has a more flexible hook or flagellum. Furthermore, the radius of the orbits changes more with the filament stiffness than with the hook stiffness. Calculating the stable orientation of the cell body demonstrates that the long axis of the cell body becomes more parallel to the surface as the hook or the filament gets stiffer. In our simulations, this angle varies from 6.4^∘ to 3.3^∘, depending on the stiffnesses. As in the pusher mode, the cell body tilts upwards more freely when the filament and hook are more flexible; hence the cell body makes a larger angle with respect to the surface. We note that the sensitivity of the path curvature to flagellar stiffness contrasts with results by Park et al. <cit.>, who found that the circular orbits of pusher bacteria were unaffected by the prescribed motor frequency (which should have the same effect as varying the stiffness of the flagellum). §.§.§ Hook instability The hook in uniflagellated bacteria is very flexible and easily becomes unstable (buckled) if it is subjected to a load greater than a critical value.
When bacteria swim toward a boundary, the load on the hook deviates from the steady state load in unbounded fluid due to hydrodynamic interactions with the boundary. These fluctuations in viscous forces could make the hook much more susceptible to becoming unstable. In this section, we prescribe a constant motor torque T=1 and choose the hook's stiffness so that the hook is stable but close to the free-space critical value for pushers, i.e., the threshold rigidity below which the hook becomes unstable in an unbounded fluid. By performing simulations with different hook stiffnesses, the critical rigidity of the hook is determined to be k_H≈0.105. We vary the hook rigidity starting at a minimum value k_H=0.106 and study the locomotion of the pusher mode model bacterium near a surface. The other model parameters are as listed in Tab. <ref>. As shown in Fig. <ref>, the trajectories are distinct from the gradually escaping paths and entrapped circular orbits typically observed. Instead, the cell body undergoes rapid reorientation and swims away from the wall with a large escape angle (for k_H≥ 0.110), or even becomes trapped with the bent hook against the wall and both the cell body and the flagellum pointing away from the wall (for k_H = 0.106). In the latter case, the cell body spins and the flagellum precesses around the cell body but there is no appreciable net swimming motion either away from or parallel to the wall. The large deformations of the hook indicate that the load on the hook exceeds the critical value for buckling when the swimmer is near a wall. If the relative hook stiffness is high enough, the hook only buckles transiently before returning to a stable swimming shape but if the relative hook stiffness is low, then the hook does not recover from the buckled state. Interestingly, the spinning motion at k_H = 0.106 is reminiscent of experimental observations of V. fischeri intermittently pausing, wiggling, and changing directions when confined between parallel plates <cit.>. Also, the abrupt changes in direction in the cases with higher relative hook stiffness are similar to the flick behavior of V. alginolyticus, which is due to a dynamic instability that occurs when the motor switches from reverse to forward swimming <cit.>. The hook instability near walls that we see in our simulations could provide a mechanism for tumbling without any change in motor torque or direction. §.§.§ Effects of cell body aspect ratio Our results thus far have shown that the model pusher bacterium (with aspect ratio α_cell=2.5) escapes from the surface regardless of the different flagellum stiffness, NaCl concentrations, and the initial conditions chosen in this study. Previous numerical investigations have shown that decreasing the aspect ratio of the cell body increases the tendency of pusher-mode bacteria to be hydrodynamically trapped near the surfaces <cit.>. To illustrate the importance of the cell body aspect ratio in the entrapment of pusher-mode bacteria, we reduce the cell body's aspect ratio from 2.5 to 2.25 and 1.75. As expected and shown in Fig. <ref>, the escaping angle of the model bacterium decreases when the aspect ratio becomes 2.25. Further reduction of the aspect ratio (to α_cell=1.75), causes the bacterium to be entrapped at the surface. Our simulations demonstrate that independent of the initial distance from the surface, the bacterium reaches a unique stable distance H_c=1.71 from the surface when the aspect ratio is α_cell=1.75. 
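The aspect-ratio comparison above is easiest to interpret at fixed cell volume, i.e. at fixed equivalent-sphere radius R. The helper below computes the spherocylinder dimensions under that assumption, treating the aspect ratio as total length over width; this reading of α_cell is our assumption and should be checked against the original geometry definitions.

```python
def spherocylinder_dimensions(alpha_cell, R_eq=0.81):
    """Cell radius and total length (same units as R_eq) of a spherocylinder
    with aspect ratio alpha_cell = total length / width, whose volume equals
    that of a sphere of radius R_eq."""
    # V = pi r^2 L_cyl + (4/3) pi r^3 with L_cyl = 2 r (alpha - 1)
    # equated to (4/3) pi R_eq^3 gives r = R_eq / (1.5 alpha - 0.5)^(1/3).
    r = R_eq / (1.5 * alpha_cell - 0.5) ** (1.0 / 3.0)
    return r, 2.0 * r * alpha_cell

for alpha in (2.5, 2.25, 1.75):
    r, L = spherocylinder_dimensions(alpha)
    print(f"alpha={alpha}: radius {r:.3f} um, length {L:.3f} um")
```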
Having found that, for α_cell=1.75, the model bacterium with a flexible flagellum and hook is attracted to the surface, we next consider the motion under higher concentrations of NaCl to see whether this qualitatively affects the behavior near boundaries. Interestingly, the model bacterium escapes from the surface when the concentration increases from medium to high, as shown in Fig. <ref>. This result is consistent with experimental evidence that the concentration of ions changes the boundary accumulating behavior of pusher-mode bacteria; specifically, they tend to escape from the surfaces at higher concentrations of NaCl. § DISCUSSION AND CONCLUSION The main aim of this study is to model and analyze the near-surface motion of uniflagellated bacteria with a flexible hook and filament using parameters appropriate for V. alginolyticus. Unlike other modelling studies <cit.>, we employ empirical relationships between the flagellar motor torque and its frequency to account for the load-dependent motor activity. Using a straight hook and purely helical filament shape, we show that the swimming speed for a fixed motor torque does not change significantly with the flagellum stiffness (over the tested range). When the characteristic motor torque–speed relationship is applied, however, the motor torque increases with flagellar stiffness for pushers and has the opposite trend for pullers; these effects are most pronounced at high NaCl concentrations. This suggests that changes in the propulsive efficiency of the deformed flagellum are compensated by changes in the motor torque and speed. The implication is that a model that assumes constant torque would predict no change in swimming speed with stiffness whereas a model that assumes constant motor speed would predict higher swimming speeds for pushers and lower swimming speeds for pullers at higher flagellar stiffnesses. Accounting for the motor torque–speed relationship, the trend with flagellar stiffness is as in the constant motor speed model but to a lesser degree. At medium NaCl concentrations and high flagellar stiffnesses, we found that the motor torque and frequency are approximately the same for pushers and pullers but pushers swim around 8% faster than pullers. This trend is sensitive to the model used for the hook and filament shape, however. If the flagellum shape is instead defined with a growing amplitude, then pullers are around 3% faster than pushers, which is similar to the findings of Park et al. <cit.> using constant motor speed and the same growing amplitude model for the flagellum. Experimental observations have shown that the accumulation of V. alginolyticus near surfaces changes with the concentration of sodium chloride in the swimming medium. Depending on the swimming mode (pusher or puller), the relationship between the ion concentration and the tendency to swim near the surfaces could be direct or inverse <cit.>. We confirm that changing the ion concentration, assuming this only shifts the motor torque–speed curve, affects the near-surface behavior of bacteria. In particular, for certain geometries and mechanical properties of the model bacterium, the pusher-mode swimmer is attracted to surfaces at low ion concentrations and escapes at high concentrations. In this regard, comparing the escaping angles of the pusher-mode model bacterium in different concentrations of NaCl shows an inverse relationship between these parameters.
Further investigation indicates that this conclusion is independent of the initial conditions of the bacteria. Our results in the puller mode show that for a model bacterium that is attracted to surfaces at all concentrations of NaCl, variations in the ion concentration impacts the radius of the circular orbits and the stable orientation of the cell body with respect to the surface. In particular, the model bacterium tends to move on smaller circular paths in higher concentrations of NaCl. Our simulations of V. alginolyticus swimming in puller mode exhibit circular orbits of radius R_c≈17.5 µm when converted to dimensional units; this is comparable to the experimental measurements (R_c≈10-15.5 µm) of Wu et al. <cit.>. In addition to V. alginolyticus, the uniflagellated bacterium Caulobacter crescentus has been experimentally observed swimming forwards and backwards near surfaces <cit.>. It was reported that this bacterium spends much less time close to the surface when swimming in pusher mode compared with swimming in puller mode. Circular orbits were only observed in backward swimming cells. These observations are consistent with our numerical results using parameters for V. alginolyticus. We note that the flexibility of the hook and filament (as long as they are in a stable state) facilitates the escaping from the surface by allowing the cell body to tilt upward more freely. Our simulations illustrate that the flexibility of the flagellum may change a pusher-mode model bacterium state from boundary accumulating to boundary escaping, for example. In general, it seems that there is an inverse relationship between the cell body's long axis angle with the surface and the hook or filament's relative stiffness in either puller or pusher modes. Higher viscous forces applied to the flagellum and the cell body as the bacterium swims near a surface may cause the hook to become unstable. This kind of instability may lead to a different form of entrapment near surfaces in which the cell body spins on the spot with the hook bent and closest to the surface. Transient instability of the hook due to proximity to a surface results in an abrupt and large change in orientation of the cell, similar to a flick. These two passive behaviors, separated by a small change in relative hook stiffness, have opposite consequences; the former case keeps the cell pressed against the surface whereas the latter quickly scatters the cell away from the surface. Interestingly, it has been reported that tumbling is suppressed near solid surfaces for the peritrichous bacterium Escherichia coli <cit.>. The effects of surfaces on bacteria are, evidently, highly dependent on the mode of motility. The simulations show that the pusher-mode bacterium with a flexible (but far from the threshold for instabilities) hook and filament is entrapped by the flat surface when the cell body has a small aspect ratio. This conclusion is consistent with the results of Park et al. <cit.>. The transition from escaping state to entrapment state in a specific aspect ratio of the cell body is already well studied for the bacteria with rigid flagellum <cit.>. Here, comparing the near-surface behavior of a rigid flagellum model with that of a flexible flagellum model demonstrates that the flexibility of the flagellum can affect the threshold of the cell body aspect ratio for surface entrapment. 
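Since lengths are scaled by R = 0.81 µm and torques by T_Max = 3.8 pN µm, converting between simulation and laboratory units is a one-liner. The orbit radius in the example is the dimensional value quoted above; the bending-stiffness value in the last line is purely illustrative.

```python
R_UM = 0.81          # length scale, micrometres
T_MAX = 3.8          # torque scale, pN·um

def orbit_radius_dimensionless(Rc_um):
    """Dimensionless circular-orbit radius from a value in micrometres."""
    return Rc_um / R_UM

def relative_stiffness(EI_pN_um2):
    """k = EI / (T_Max * R) for a bending stiffness given in pN·um^2."""
    return EI_pN_um2 / (T_MAX * R_UM)

print(orbit_radius_dimensionless(17.5))   # ~21.6 in units of the body radius
print(relative_stiffness(10.0))           # illustrative EI value only
```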
To sum up, whereas many investigations of uniflagellated bacterial locomotion are carried assume that the bacterial flagellum is rigid, our results clearly demonstrate that the hook and flagellum flexibility may change the behavior of the bacterium, especially near a planar surface. For example, flexibility may change the bacteria's behavior from boundary accumulating to boundary escaping or cause them to be locally entrapped near the surfaces. We expect that accurately accounting for hook and filament flexibility is also necessary for modelling bacteria interacting with each other and in other confined geometries. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number RGPIN-2018-04418]. Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéro de référence RGPIN-2018-04418]. § REGULARIZED STOKESLET AND ROTLET Consider a regularized point force f⃗ and torque n⃗ applied at X⃗ = (X,Y,h) and the associated image point X⃗̂⃗ = (X,Y,-h) due to a no-slip wall at Z=0. Defining the displacement vectors r⃗=x⃗-X⃗ and r⃗̂⃗=x⃗-X⃗̂⃗ to the evaluation point x⃗, the translational and angular velocities of the regularized stokeslet and rotlet used in Eqs. (<ref>) and (<ref>) are given by <cit.>: U⃗_s (f⃗,r⃗,r⃗̂⃗,ϵ)=1/8πμ{[f⃗ J_1(r,ϵ)+(f⃗·r⃗)r⃗ J_2(r,ϵ)] -[f⃗ J_1(r̂,ϵ)+(f⃗·r⃗̂⃗)r⃗̂⃗ J_2(r̂,ϵ)] -h^2[(b⃗·r⃗̂⃗)r⃗̂⃗ K_2(r̂,ϵ) +b⃗ K_1(r̂,ϵ)]+2h[(b⃗·e⃗_3)r⃗̂⃗ J_2(r̂,ϵ) +(r⃗̂⃗·b⃗)e⃗_3(J_3(r̂,ϵ) -J_2(r̂,ϵ))+(r⃗̂⃗·e⃗_3)b⃗ J_2(r̂,ϵ)+1/2(r⃗̂⃗·e⃗_3)(r⃗̂⃗·b⃗)r⃗̂⃗ K_2(r̂,ϵ)] +2h J_3(r̂,ϵ)(m⃗×r⃗̂⃗)}, U⃗_r (n⃗,r⃗,r⃗̂⃗,ϵ)=1/8πμ{1/2[P(r,ϵ)(n⃗×r⃗)- P(r̂,ϵ)(n⃗×r⃗̂⃗)] +h[p⃗ K_1(r̂,ϵ)+(p⃗·r⃗̂⃗)r⃗̂⃗ K_2(r̂,ϵ) ]-[[(p⃗·r⃗̂⃗)e⃗_3 +(e⃗_3 ·r⃗̂⃗)p⃗]J_3(r̂,ϵ)+(e⃗_3 ·r⃗̂⃗)(p⃗·r⃗̂⃗)r⃗̂⃗K_2(r̂,ϵ)] -J_3(r̂,ϵ)(q⃗×r⃗̂⃗)+h^2 J_4(r̂,ϵ)(n⃗×r⃗̂⃗) -h[p⃗ J_3(r̂,ϵ) +(e⃗_3 ·r⃗̂⃗)(n⃗×r⃗̂⃗) J_4(r̂,ϵ)]}, W⃗_s (f⃗,r⃗,r⃗̂⃗,ϵ)=1/8πμ{1/2[P(r,ϵ)(f⃗×r⃗)- P(r̂,ϵ)(f⃗×r⃗̂⃗)] +h^2 J_4(r̂,ϵ)(b⃗×r⃗̂⃗)+h[K_2(r̂,ϵ)-J_4(r̂,ϵ)](b⃗·r⃗̂⃗)(e⃗_3 ×r⃗̂⃗) +h[m⃗[r̂^2 J_4(r̂,ϵ)+2J_3(r̂,ϵ)]-J_4(r̂,ϵ)(m⃗·r⃗̂⃗)r⃗̂⃗] +h P(r̂,ϵ)m⃗}, W⃗_r (n⃗,r⃗,r⃗̂⃗,ϵ)=1/8πμ{-1/4[K_3(r,ϵ)n⃗+K_4(r,ϵ)(n⃗·r⃗)r⃗ -K_3(r̂,ϵ)n⃗-K_4(r̂,ϵ)(n⃗·r⃗̂⃗)r⃗̂⃗]+(p⃗·r⃗̂⃗)(e⃗_3 ×r⃗̂⃗)+1/2[J_4(r̂,ϵ) -K_2(r̂,ϵ)](e⃗_3 ·r⃗̂⃗)(p⃗×r⃗̂⃗)-h J_4(r̂,ϵ)(p⃗×r⃗̂⃗)-1/2[r̂^2 J_4(r̂,ϵ) +2J_3(r̂,ϵ)]q⃗+1/2J_4(r̂,ϵ)(q⃗·r⃗̂⃗)r⃗̂⃗-h/2[(e⃗_3 ·r⃗̂⃗)n⃗[r̂^2 J_5(r̂,ϵ) +3J_4(r̂,ϵ)]-J_4(r̂,ϵ)[(n⃗·e⃗_3)r⃗̂⃗+(p⃗×r⃗̂⃗)] -J_5(r̂,ϵ)(e⃗_3 ·r⃗̂⃗)(n⃗·r⃗̂⃗)r⃗̂⃗]+h^2/2[n⃗[2J_4(r̂,ϵ)+r̂^2 J_5(r̂,ϵ)] -J_5(r̂)(n⃗·r⃗̂⃗)r⃗̂⃗] }, where b⃗ =2(f⃗·e⃗_3)e⃗_3 - f⃗, m⃗ =f⃗×e⃗_3, p⃗ =n⃗×e⃗_3, q⃗ =n⃗ - (n⃗·e⃗_3)e⃗_3, J_1(r,ϵ) =2ϵ^2+r^2/ (r^2+ϵ^2)^3/2, J_2(r,ϵ) =1/(r^2+ϵ^2)^3/2, J_3(r,ϵ) =-3ϵ^2/(r^2+ϵ^2)^5/2, J_4(r,ϵ) =15ϵ^2/(r^2+ϵ^2)^7/2, J_5(r,ϵ) =-105ϵ^2/(r^2+ϵ^2)^9/2, P(r,ϵ) =5ϵ^2+2r^2/ (r^2+ϵ^2)^5/2, K_1(r,ϵ) =-10ϵ^4+7ϵ^2r^2+2r^4/ (r^2+ϵ^2)^7/2, K_2(r,ϵ) =-21ϵ^2-6r^2/ (r^2+ϵ^2)^7/2, K_3(r,ϵ) =-4ϵ^2+2r^2/ (r^2+ϵ^2)^5/2, K_4(r,ϵ) =-6/ (r^2+ϵ^2)^5/2. * 48 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Wu et al.(2018)Wu, Hsiao, and Woon]wu2018 author author K.-T. Wu, author Y.-T. Hsiao, and author W.-Y. 
Woon, title title Entrapment of pusher and puller bacteria near a solid surface, @noop journal journal Physical Review E volume 98, pages 052407 (year 2018)NoStop [Shum et al.(2010)Shum, Gaffney, and Smith]shum2010modelling author author H. Shum, author E. Gaffney, and author D. Smith, title title Modelling bacterial behaviour close to a no-slip plane boundary: the influence of bacterial geometry, @noop journal journal Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences volume 466, pages 1725 (year 2010)NoStop [Park et al.(2019a)Park, Kim, and Lim]park2019flagellated author author Y. Park, author Y. Kim, and author S. Lim, title title Flagellated bacteria swim in circles near a rigid wall, @noop journal journal Physical Review E volume 100, pages 063112 (year 2019a)NoStop [Schierholz and Beuth(2001)]schierholz2001implant author author J. Schierholz and author J. Beuth, title title Implant infections: a haven for opportunistic bacteria, @noop journal journal Journal of Hospital Infection volume 49, pages 87 (year 2001)NoStop [Conrad(2012)]conrad2012physics author author J. C. Conrad, title title Physics of bacterial near-surface motility using flagella and type iv pili: implications for biofilm formation, @noop journal journal Research in microbiology volume 163, pages 619 (year 2012)NoStop [Bixler and Bhushan(2012)]bixler_biofouling_2012 author author G. D. Bixler and author B. Bhushan, title title Biofouling: lessons from nature, https://doi.org/10.1098/rsta.2011.0502 journal journal Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences volume 370, pages 2381 (year 2012), note publisher: Royal SocietyNoStop [Berg and Anderson(1973)]berg_bacteria_1973 author author H. C. Berg and author R. A. Anderson, title title Bacteria Swim by Rotating their Flagellar Filaments, https://doi.org/10.1038/245380a0 journal journal Nature volume 245, pages 380 (year 1973), note number: 5425 Publisher: Nature Publishing GroupNoStop [Brown et al.(2012)Brown, Steel, Silvestrin, Wilkinson, Delalez, Lumb, Obara, Armitage, and Berry]brown_flagellar_2012 author author M. T. Brown, author B. C. Steel, author C. Silvestrin, author D. A. Wilkinson, author N. J. Delalez, author C. N. Lumb, author B. Obara, author J. P. Armitage, and author R. M. Berry, title title Flagellar Hook Flexibility Is Essential for Bundle Formation in Swimming Escherichia coli Cells, https://doi.org/10.1128/JB.00209-12 journal journal J. Bacteriol. volume 194, pages 3495 (year 2012)NoStop [Giacché et al.(2010)Giacché, Ishikawa, and Yamaguchi]giacche2010hydrodynamic author author D. Giacché, author T. Ishikawa, and author T. Yamaguchi, title title Hydrodynamic entrapment of bacteria swimming near a solid surface, @noop journal journal Physical Review E volume 82, pages 056309 (year 2010)NoStop [Shum and Gaffney(2015a)]shum2015Parallel author author H. Shum and author E. A. Gaffney, title title Hydrodynamic analysis of flagellated bacteria swimming near one and between two no-slip plane boundaries, @noop journal journal Physical Review E volume 91, pages 033012 (year 2015a)NoStop [Shum and Gaffney(2015b)]shum2015rectangular author author H. Shum and author E. A. Gaffney, title title Hydrodynamic analysis of flagellated bacteria swimming in corners of rectangular channels, @noop journal journal Physical review E volume 92, pages 063016 (year 2015b)NoStop [Homma et al.(1996)Homma, Oota, Kojima, Kawagishi, and Imae]homma_chemotactic_1996 author author M. Homma, author H. 
Oota, author S. Kojima, author I. Kawagishi, and author Y. Imae, title title Chemotactic responses to an attractant and a repellent by the polar and lateral flagellar systems of Vibrio alginolyticus, https://doi.org/10.1099/13500872-142-10-2777 journal journal Microbiol. volume 142, pages 2777 (year 1996), note publisher: Microbiology Society,NoStop [Magariyama et al.(2001)Magariyama, Masuda, Takano, Ohtani, and Kudo]magariyama_difference_2001 author author Y. Magariyama, author S.-y. Masuda, author Y. Takano, author T. Ohtani, and author S. Kudo, title title Difference between forward and backward swimming speeds of the single polar-flagellated bacterium, Vibrio alginolyticus, https://doi.org/10.1111/j.1574-6968.2001.tb10970.x journal journal FEMS Microbiol. Lett. volume 205, pages 343 (year 2001)NoStop [Xie et al.(2011)Xie, Altindal, Chattopadhyay, and Wu]xie_bacterial_2011 author author L. Xie, author T. Altindal, author S. Chattopadhyay, and author X.-L. Wu, title title Bacterial Flagellum as a Propeller and as a Rudder for Efficient Chemotaxis, https://doi.org/10.1073/pnas.1011953108 journal journal Proc. Natl. Acad. Sci. U.S.A. volume 108, pages 2246 (year 2011)NoStop [Shum and Gaffney(2012)]shum2012effects author author H. Shum and author E. Gaffney, title title The effects of flagellar hook compliance on motility of monotrichous bacteria: A modeling study, @noop journal journal Physics of Fluids volume 24, pages 061901 (year 2012)NoStop [Ramia et al.(1993)Ramia, Tullock, and Phan-Thien]ramia1993role author author M. Ramia, author D. Tullock, and author N. Phan-Thien, title title The role of hydrodynamic interaction in the locomotion of microorganisms, @noop journal journal Biophysical journal volume 65, pages 755 (year 1993)NoStop [Son et al.(2013)Son, Guasto, and Stocker]son2013 author author K. Son, author J. S. Guasto, and author R. Stocker, title title Bacteria can exploit a flagellar buckling instability to change direction, @noop journal journal Nature physics volume 9, pages 494 (year 2013)NoStop [Nguyen and Graham(2017)]nguyen_buckling_2017 author author F. T. M. Nguyen and author M. D. Graham, title title Buckling Instabilities and Complex Trajectories in a Simple Model of Uniflagellar Bacteria, https://doi.org/10.1016/j.bpj.2016.12.051 journal journal Biophys. J. volume 112, pages 1010 (year 2017)NoStop [Zou et al.(2021)Zou, Lough, and Spagnolie]zou_helical_2021 author author Z. Zou, author W. Lough, and author S. Spagnolie, title title Helical trajectories of swimming cells with a flexible flagellar hook, https://doi.org/10.1103/PhysRevFluids.6.103102 journal journal Phys. Rev. Fluids volume 6, pages 103102 (year 2021), note publisher: American Physical SocietyNoStop [Jabbarzadeh and Fu(2018)]jabbarzadeh_dynamic_2018 author author M. Jabbarzadeh and author H. C. Fu, title title Dynamic instability in the hook-flagellum system that triggers bacterial flicks, https://doi.org/10.1103/PhysRevE.97.012402 journal journal Phys. Rev. E volume 97, pages 012402 (year 2018)NoStop [Riley et al.(2018)Riley, Das, and Lauga]riley_swimming_2018 author author E. E. Riley, author D. Das, and author E. Lauga, title title Swimming of peritrichous bacteria is enabled by an elastohydrodynamic instability, https://doi.org/10.1038/s41598-018-28319-8 journal journal Sci Rep volume 8, pages 10728 (year 2018)NoStop [Park et al.(2017)Park, Kim, Ko, and Lim]park2017instabilities author author Y. Park, author Y. Kim, author W. Ko, and author S. 
http://arxiv.org/abs/2307.02551v1
20230705180007
Semidefinite programming relaxations for quantum correlations
[ "Armin Tavakoli", "Alejandro Pozas-Kerstjens", "Peter Brown", "Mateus Araújo" ]
quant-ph
[ "quant-ph" ]
Physics Department, Lund University, Box 118, 22100 Lund, Sweden Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM), Nicolás Cabrera 13-15, 28049 Madrid, Spain Télécom Paris - LTCI, Inria, Institut Polytechnique de Paris, 91120 Palaiseau, France Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria Semidefinite programs are convex optimisation problems involving a linear objective function and a domain of positive semidefinite matrices. Over the last two decades, they have become an indispensable tool in quantum information science. Many otherwise intractable fundamental and applied problems can be successfully approached by means of relaxation to a semidefinite program. Here, we review such methodology in the context of quantum correlations. We discuss how the core idea of semidefinite relaxations can be adapted for a variety of research topics in quantum correlations, including nonlocality, quantum communication, quantum networks, entanglement, and quantum cryptography. Semidefinite programming relaxations for quantum correlations Mateus Araújo August 1, 2023 ============================================================= § INTRODUCTION Understanding and explaining the correlations observed in nature is a central task for any scientific theory. For quantum mechanics, the study of correlations has a crucial role in both its concepts and its applications. It broadly concerns the foundations of quantum theory, quantum information science and nowadays also the emerging quantum technologies. Although quantum correlations is an umbrella term, under which many different physical scenarios are accommodated, it establishes a common focus on the investigation of probability distributions describing physical events. Naturally, the various expansive topics focused on quantum correlations have warranted review articles of their own and we refer to them for specific in-depth discussions; see e.g. <cit.> for nonlocality, <cit.> for entanglement, <cit.> for contextuality, <cit.> for quantum communication and <cit.> for quantum cryptography. Studies of quantum correlations take place in a given scenario, or experiment, where events can influence each other according to some causal structure and the influences are potentially subject to various physical limitations. For example, this can be a Bell experiment where two parties act outside each other's light cones but are nevertheless connected through a pre-shared entangled state. Another example is a communication scenario where the channel connecting the sender to the receiver only supports a given number of bits per use. The fundamental challenge is to characterise the set of correlations predicted by quantum theory. This applies directly to a variety of basic questions, e.g. determining the largest violation of a Bell inequality or the largest quantum-over-classical advantage in a communication task, but also indirectly to e.g. benchmarking a desirable property of quantum devices or computing the secret key rate in a quantum cryptographic scheme. Unfortunately, the characterisation of quantum correlations is typically difficult and can only be solved exactly, by analytical means, in a handful of convenient special cases. Therefore, it is of pivotal interest to find other, more practically viable, methods for characterising quantum correlations. 
Over the past two decades, semidefinite programs (SDPs) have emerged as an efficient and broadly useful tool for investigating quantum theory in general, and quantum correlations in particular. An SDP is an optimisation task in which a linear objective function is maximised over a cone of positive-semidefinite (PSD) matrices subject to linear constraints. They rose to prominence in convex optimisation theory in the early 1990s through the development of efficient interior-point evaluation methods; see <cit.> for a review. In the context of quantum correlations, SDPs roughly correspond to optimising a quantifier of correlations over the set of quantum operations and they are today an indispensable tool for the field. However, it is frequently the case that quantum correlation problems cannot directly be cast as SDPs, thus impeding a straightforward solution. Sometimes the reason is that the correlation function simply is not convex (see Section <ref> for some relevant examples). More often, the reason is that the full set of quantum operations cannot be described as a single collection of PSD matrices. For example, while an optimisation over qubit states or qubit measurements individually is compatible with the SDP framework, the joint optimisation over both is not. Nevertheless, SDPs offer a powerful path out of such difficulties because they can be employed to approximate solutions that would otherwise remain obscure. Specifically, more sophisticated quantum correlation problems, that are not immediately solvable by an SDP, can still be approached through sequences of increasingly precise relaxations, each of which is itself an SDP (see Fig. <ref>). In this way, one can obtain approximations that are accurate enough for practical purposes and sometimes even exact. From the methodology perspective, these SDP relaxation methods have attained a prominent role in quantum information science. Their success derives in part from the fact that today there exist powerful, practical and easily accessible algorithms for their evaluation, and partly from the fact that they offer a single methodology that pertains to most forms of quantum correlation, even though the physics underpinning experiments can be vastly different. The purpose of this article is to review SDP relaxation methods for quantum correlations. We discuss how this methodology can be adapted for a variety of conceptual and applied problems. In the remainder of Section <ref>, we introduce the basics of semidefinite programming and some of the main correlation scenarios. In Section <ref>, we present a general framework for semidefinite relaxation hierarchies that can be applied to many of the later, physically motivated, considerations. Sections <ref> and <ref> discuss SDP relaxation methods in the context of entanglement theory and nonlocality, respectively, including device-independent applications. Section <ref> focuses on correlations from quantum communication and their applications. Section <ref> concerns SDP methods for evaluating the performance of protocols in random number generation and quantum key distribution. Section <ref> focuses on networks comprised of independent sources of entanglement and discusses SDP methods for assessing their nonlocality. Section <ref> gives an overview of some related topics where SDP relaxations are prominent. Finally, Section <ref> provides a concluding outlook. A guide to free and publicly available SDP solvers and relevant quantum information software packages is found in Appendix <ref>. 
§.§ Primals and duals We begin with a basic introduction to semidefinite programming, referring the reader to relevant books and review articles, e.g. <cit.>, for in-depth discussions. In particular, for their use in quantum correlations, see the recent book <cit.>, which offers a didactic approach to SDP using basic quantum information tasks, and <cit.>, which focuses on the mathematical foundations of SDP. A semidefinite program is an optimisation problem in which a linear objective function is optimised over a convex domain consisting of the intersection of a cone of PSD matrices with hyperplanes and half-spaces. In general this can be written as max_X ⟨ C, X⟩ ⟨ A_i, X ⟩ = b_i ∀ i, X ≽ 0, where C, X, and the A_i are Hermitian matrices, b is a real vector, and ⟨·,·⟩ denotes the inner product. In addition to the above linear equality constraints, SDPs can also include linear inequality constraints. These can always be converted into linear equality constraints, as appearing in Eq. (<ref>), by introducing additional parameters known as slack variables. SDPs are generalisations of the more elementary linear programs (LPs) for which the PSD constraint is replaced by an element-wise positivity constraint. This is achieved by restricting the matrix X to be diagonal. It is well known that LPs can be efficiently evaluated using interior-point methods <cit.> and such methods also generalise to the case of SDPs <cit.>. To every SDP of the form of Eq. (<ref>), one can associate another SDP of the form min_y ⟨ b, y ⟩ ∑_i A_i y_i ≽ C, where the optimisation is now over the real vector y. This is known as the dual SDP corresponding to the primal SDP in Eq. (<ref>). Every feasible point of the dual SDP gives a value ⟨ b, y ⟩ that is an upper bound on the optimal value of the primal SDP. Thus, also every feasible point of the primal SDP gives a value ⟨ C, X ⟩ that is a lower bound on the optimal value of the dual SDP. This relation is known as weak duality. A fundamental question is whether the bounds provided by weak duality can be turned into equality, i.e. when does the optimal value of the primal in Eq. (<ref>) coincide with the optimal value of the dual in Eq. (<ref>)? When they are equal we say that strong duality holds. Strong duality always holds for LPs but in general not for SDPs. However, a sufficient condition for strong duality is that the primal or the dual SDP is strictly feasible <cit.>: in the primal formulation (<ref>) this means that there exists X^* ≻ 0 such that ⟨ A_i, X^* ⟩ = b_i for all i, and in the dual formulation (<ref>) that there exists y^* such that ∑_i A_i y^*_i ≻ C. If one wants to numerically solve an SDP one needs, in addition, that the optimal values of the primal and dual problems are attained, i.e., that there exist finite X and y that produce the optimal values. A sufficient but not necessary condition for their existence is that both the primal and dual problems are strictly feasible <cit.>. §.§ Correlation scenarios and quantum theory This section provides a brief introduction to some often studied quantum correlation scenarios. We first discuss scenarios based on entanglement and then scenarios featuring communication. The presentation is geared towards highlighting the relevance of LPs and SDPs. §.§.§ Entanglement-based scenarios The standard scenario for investigating quantum correlations harvested from the shares of a bipartite state is illustrated in Fig. <ref>. 
A source emits a pair of particles in some state ρ_AB that is shared between two parties, Alice and Bob. Formally, a state is a PSD operator ρ_AB≽ 0 of unit trace (ρ_AB)=1. Alice and Bob can independently select classical inputs, x and y, respectively from alphabets of finite size X and Y, and perform corresponding quantum measurements on their systems A and B. In general, a quantum measurement with N possible outcomes is represented by a positive operator-valued measure (POVM), i.e. a set of PSD operators {E_i}_i=1^N that sums to identity: E_i≽ 0 and ∑_i=1^N E_i=𝕀. These conditions ensure the positivity and normalisation of probabilities, respectively. We write the measurements of Alice and Bob as POVMs {A_a|x} and {B_b|y} respectively, where a and b denote their respective outcomes. The probability distribution of their outcomes, for a specific choice of inputs, is given by Born's rule, p(a,b|x,y)=((A_a|x⊗ B_b|y)ρ_AB). This conditional probability distribution is interchangeably referred to as the distribution or the correlations. Such entanglement-based correlations are often studied in three different scenarios, namely those of entanglement, steering and nonlocality: Entanglement.— A bipartite quantum state is called separable if it can be written as a probabilistic mixture of states individually prepared by Alice and Bob, namely <cit.> ρ_AB=∑_λ p(λ) ϕ_λ⊗φ_λ, where p(λ) is a probability distribution and ϕ_λ and φ_λ are arbitrary quantum states of Alice and Bob, respectively. Importantly, some bipartite states cannot be decomposed in this way, and are called entangled. For an in-depth discussion of entanglement, we refer to <cit.>. Assume that Alice and Bob, as in Fig. <ref>, perform known quantum measurements on some unknown shared state ρ_AB. Can we determine if the state is separable or entangled? This is done by inspecting the correlations in Eq. (<ref>). One approach is that both Alice and Bob perform a set of tomographically complete local measurements; most famously exemplified by a complete set of mutually unbiased bases <cit.> or a symmetric, informationally complete POVM <cit.>. Then they can reconstruct the density matrix ρ_AB and try to decide its separability through some analytical criterion. Unfortunately, the separability problem is known to be NP-hard[More precisely, it is NP-hard to decide whether a quantum state is ϵ-close to the set of separable states when ϵ is an inverse polynomial of the dimension.] <cit.> and a necessary and sufficient criterion is only known for qubit-qubit or qubit-qutrit systems. This is the well-known positive partial transpose (PPT) criterion <cit.>, which more generally is a necessary condition for separability in all dimensions: a bipartite system is entangled if ρ_AB^T_A 0, where T_A denotes transposition on system A. However, many entangled states go undetected by this criterion <cit.>. The PPT criterion can be used to quantify the amount of entanglement in a quantum state, for example by computing how much of the maximally mixed state needs to be mixed with ρ_AB to make the resulting state PPT. This is known as the random robustness of entanglement with respect to the PPT criterion, and can be computed via a simple SDP[The random robustness is then given by -t d^2.] <cit.>: max_t t ρ_AB^T_A≽ t 𝕀. 
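As a concrete illustration of this SDP, the following minimal sketch evaluates it for a noisy two-qubit singlet. The use of Python with numpy and cvxpy (and an SDP-capable solver such as SCS) is our own assumption and is not prescribed by the text; since the state is plain data here, its partial transpose is written out explicitly.

import numpy as np
import cvxpy as cp

# Two-qubit noisy singlet rho = v |psi-><psi-| + (1 - v) I/4.
v = 0.5
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = v * np.outer(psi, psi) + (1 - v) * np.eye(4) / 4

# Partial transpose on A of the (constant) state: reshape to (a, b, a', b') and swap a <-> a'.
rho_TA = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

# SDP: maximise t subject to rho^{T_A} >= t * identity.
t = cp.Variable()
psd_constraint = rho_TA - t * np.eye(4) >> 0
prob = cp.Problem(cp.Maximize(t), [psd_constraint])
prob.solve()

print("optimal t:", t.value)                             # negative exactly when the PPT test detects entanglement
print("random robustness (PPT) = -t d^2:", -4 * t.value)
# The dual variable of the PSD constraint is related to the entanglement witness of the dual SDP (W = Y^{T_A}).
print("dual variable Y:\n", psd_constraint.dual_value)

For visibilities v > 1/3 the optimal t should come out negative, signalling entanglement, and the dual variable ties in with the witness formulation of the dual SDP discussed next.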
Note that since having non-positive partial transposition is not a necessary condition for a state to be entangled, this SDP is a relaxation of the separability problem, and the random robustness it computes is only a lower bound on the actual amount of entanglement in ρ_AB. This is a simple example of the fundamental idea behind the methods explored in this review: in order to tackle an intractable problem, one finds partial conditions for its solution that are tractable to compute and provide bounds on the quantity of interest. Ideally, one should find a sequence of tighter and tighter partial conditions that in the infinite limit correspond exactly to the problem one is trying to solve. In Section <ref> we will see how this can be done for the separability problem. It is also useful to consider the dual of the SDP in Eq. (<ref>), namely min_W (Wρ_AB) (W) = 1, W^T_A≽ 0 . The Hermitian operator W is known as an entanglement witness <cit.>. It is relevant because measuring it is sufficient for detecting and quantifying entanglement <cit.>. In particular, one does not need to perform full tomography of the quantum state, which is often impractical as the number of required measurements grows rapidly with the dimension of the state. In order to measure the entanglement witness, one would need to decompose it in the form W=∑_a,b,x,yc_abxy A_a|x⊗ B_b|y for some real coefficients c_abxy and some POVMs for Alice and Bob. Such a decomposition requires in general much fewer measurements than tomography, so a witness allows to detect entanglement from partial knowledge of the quantum state. It is important to emphasise that entanglement witnesses do not come only from the partial transposition criterion. In principle, for any entangled state ρ_AB one can construct a witness W such that (Wρ_AB) < 0, but that for any separable state σ_AB it holds that (W σ_AB) ≥ 0. The construction of the witness operator is, however, not straightforward. Witness methods can sometimes detect entangled states even using just two local measurement bases, see e.g. <cit.>. A common approach is to construct entanglement witnesses through the estimation of the fidelity between the state prepared in the laboratory and a pure target state <cit.>. While this method is practical for particular types of entanglement, see e.g. <cit.>, it fails to detect the entanglement of most states <cit.>. Independently of using the density matrix or partial knowledge of it, determining whether a state is separable or entangled is difficult. Steering.— By performing measurements on her share of a suitable entangled state and keeping track of the outcome a, Alice can remotely prepare any ensemble of states for Bob <cit.>. The discussion of how entanglement allows one system to influence (or steer) another system traces back to Schrödinger's remarks <cit.> on the historical debate about “spooky action at a distance” <cit.>. Consider again the situation in Fig. <ref> but this time we ask whether Bob can know that Alice is quantumly steering his system. The set of states remotely prepared by Alice for Bob, when her outcome is made publicly known, along with the probabilities of her outcomes, is described by a set of subnormalised states of Bob, ϱ_a|x=_A((A_a|x⊗𝕀) ρ). This set is known as an assemblage. The assemblage can be modelled without a quantum influence from Alice to Bob if there exists a local hidden state decomposition <cit.>, namely ϱ_a|x=∑_λ p(λ) p(a|x,λ) σ_λ, for some probabilities p(λ) and p(a|x,λ), and quantum states σ_λ. 
One can interpret this as a source probabilistically generating the pair (λ,σ_λ), sending the former to Alice, who then classically decides her output, and delivering the latter to Bob. If no model of the form of Eq. (<ref>) is possible, then we say that the assemblage demonstrates steering and that consequently ρ_AB is steerable. For in-depth reviews of steering, we refer to <cit.>. Deciding the steerability of an assemblage is an SDP. To see this, note that for a given number of inputs and outputs for Alice, there are finitely many functions r mapping x to a. Indexing them by λ, we can define deterministic distributions D(a|x,λ)=δ_r_λ(x),a. A strictly feasible formulation of the SDP is max_{σ̃_λ},t t ϱ_a|x=∑_λσ̃_λ D(a|x,λ) ∀ a, x, σ̃_λ≽ t𝕀∀ λ. A local hidden state model is possible if and only if the optimal value of Eq. (<ref>) is nonnegative. Notice that normalisation of the assemblage is implicitly imposed by the equality constraint. Moreover, it is interesting to consider the SDP dual to Eq. (<ref>), min_{W_a,x} ∑_a,x(W_a,xϱ_a|x) ∑_a,x,λ(W_a,x)D(a|x,λ) =1, ∑_a,xW_a,xD(a|x,λ)≽ 0 ∀ λ. The first constraint is a normalisation for the dual variables {W_a,x} and the second constraint ensures that all local hidden state models return nonnegative values. Thus, if the assemblage demonstrates steering, the dual gives us an inequality, ∑_a,x(W_a,xϱ_a|x)≥ 0, which is satisfied by all local hidden state models and violated in particular by the assemblage {ϱ_a|x} but also potentially by some other steerable assemblages. Indeed, the inequality (<ref>) may be viewed as the steering equivalent of an entanglement witness (recall Eq. (<ref>)), i.e. a steering witness. However, it is important to note that steering is a stronger notion than entanglement because it is established only from inspecting the assemblage, i.e. Bob's measurements are assumed to be characterised whereas Alice's measurements need not even obey quantum theory. Nonlocality.— Bell's theorem <cit.> proclaims that there exist quantum correlations (<ref>) that cannot be modelled in any theory respecting local causality[A discussion of the historical debate on the interpretation of Bell's theorem can be found in <cit.>.] <cit.>. Such a theory, known as a local (hidden variable, LHV) model, assigns the outcomes of Alice and Bob based on their respective inputs and a shared classical common cause λ. A local model for their correlation takes the form p(a,b|x,y)=∑_λ p(λ) p(a|x,λ)p(b|y,λ). Correlations admitting such a decomposition are called local whereas those that do not are called nonlocal. For an in-depth discussion of nonlocality, we refer to <cit.>. The response functions p(a|x,λ) and p(b|y,λ) can be written as probabilistic combinations of deterministic distributions but, in analogy with the case of the local hidden state model, any randomness can be absorbed into p(λ). Thus, without loss of generality, we can focus on deterministic response functions and their convex combinations enabled by the shared common cause. Geometrically, the set of local correlations forms a convex polytope <cit.>. Deciding whether a given distribution p(a,b|x,y) is local can therefore be cast as an LP, max_{p(λ)},t t ∑_λ p(λ) D(a|x,λ)D(b|y,λ)=p(a,b|x,y), p(λ)≥ t ∀ λ, in order to model where the cardinality of λ is the total number of deterministic distributions. The correlations are local if and only if the optimal value of Eq. (<ref>) is nonnegative. 
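To make the locality LP concrete, here is a minimal sketch (again assuming numpy and cvxpy, our own choice of tooling) that enumerates the sixteen deterministic strategies of the scenario with two binary inputs and outputs per party and tests a target distribution. As an illustrative target we take p(a,b|x,y) = (1 + (-1)^(a+b+xy)/sqrt(2))/4, the correlations of a singlet measured at the quantum optimum of the CHSH expression discussed below; since this target is nonlocal, the LP should return a negative optimum.

import itertools
import numpy as np
import cvxpy as cp

# Illustrative target: p(a,b|x,y) = (1 + (-1)^(a+b+xy)/sqrt(2)) / 4  (singlet correlations).
def p_target(a, b, x, y):
    return (1 + (-1) ** (a + b + x * y) / np.sqrt(2)) / 4

# Deterministic local strategies: a = f(x) and b = g(y) with f, g : {0,1} -> {0,1}.
funcs = list(itertools.product([0, 1], repeat=2))      # the 4 functions for each party
n_lam = len(funcs) ** 2                                # 16 values of the hidden variable

p_lam = cp.Variable(n_lam)
t = cp.Variable()
constraints = [p_lam >= t * np.ones(n_lam)]
for a, b, x, y in itertools.product([0, 1], repeat=4):
    model = sum(p_lam[i * len(funcs) + j]
                for i, f in enumerate(funcs) for j, g in enumerate(funcs)
                if f[x] == a and g[y] == b)
    constraints.append(model == p_target(a, b, x, y))   # normalisation of p_lam is implied

prob = cp.Problem(cp.Maximize(t), constraints)
prob.solve()
print("optimal t:", t.value)    # a negative value certifies that the target is nonlocal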
As before, it is also interesting to consider the dual LP, min_{c_abxy} ∑_a,b,x,y c_abxyp(a,b|x,y) ∑_λ,a,b,x,y c_abxyD(a|x,λ)D(b|y,λ)=1, ∑_a,b,x,yc_abxy D(a|x,λ)D(b|y,λ)≥ 0 ∀ λ. This is clearly reminiscent of the steering dual in Eq. (<ref>). The first constraint normalises the coefficients {c_abxy} and the second constraint ensures that if p(a,b|x,y) is local then the value of the dual is nonnegative. Hence it implies the inequality ∑_a,b,x,yc_abxyp(a,b|x,y)≥ 0, which is satisfied by all local distributions and violated by some nonlocal distributions, in particular the target distribution p(a,b|x,y) whenever it is nonlocal. This may be seen as the nonlocality equivalent of an entanglement and steering witness, but inequalities like Eq. (<ref>) are more well known under the name Bell inequalities. The violation of a Bell inequality in quantum theory is the strongest sense of entanglement certification, as it requires no assumptions on the measurements of Alice or Bob. The most famous and widely used Bell inequality is the Clauser-Horne-Shimony-Holt (CHSH) inequality <cit.>. It applies to the simplest scenario in which nonlocality is possible, namely when Alice and Bob have two inputs each (x,y∈{0,1}) and two possible outcomes each (a,b∈{0,1}). The CHSH inequality reads S_CHSH≡∑_a,b,x,y (-1)^a+b+xyp(a,b|x,y)≤ 2, and a quantum model based on a singlet state and particular pairs of anticommuting qubit measurements can achieve the violation S_CHSH=2√(2). This is the maximum violation achievable with quantum systems <cit.>. More generally, one can employ a simple optimisation heuristic known as a seesaw <cit.> to search for the largest quantum violation of any given Bell inequality. The main observation is that for a fixed state and fixed measurements on (say) Bob's side, the optimal value of a Bell parameter is a linear function of Alice's measurements and thus can be evaluated as the SDP[When the outcomes are binary, this is just an eigenvalue problem and hence does not require an SDP formulation.] <cit.> max_{A_a|x} ∑_a,b,x,yc_abxy(A_a|x⊗ B_b|yρ_AB) ∑_a A_a|x=𝕀 ∀ x, A_a|x≽ 0 ∀ a, x. Given the optimised POVMs of Alice, an analogous SDP then evaluates the optimal value of the Bell parameter over Bob's POVMs. Then, given the optimised POVMs of Alice and Bob, the optimal state can be obtained as an eigenvector with maximal eigenvalue of the operator[As any maximal eigenvalue problem, this can also be formulated as an SDP: computing the maximum of (ρ𝒮) such that ρ≥ 0 and (ρ) = 1.] 𝒮 = ∑_a,b,x,yc_abxy A_a|x⊗ B_b|y. One then starts with a random choice for the state and measurements, and iterates the three optimizations until the value of the Bell parameter converges. From any starting point it will converge monotonically to a local optimum, but one cannot guarantee it will reach the global optimum, i.e. the largest quantum value. Nevertheless, this heuristic is very useful in practice, and when repeated with several different starting points it often does find the optimal quantum model. Notably, the routine can be reduced to only two optimisations per iteration. This is achieved by considering the measurements of just one party and the ensemble of sub-normalised states remotely prepared by the other party. Optimisation over the latter can also be cast as the SDP max_{ϱ_a|x} ∑_a,b,x,yc_abxy(ϱ_a|xB_b|y) ∑_aϱ_a|x=∑_aϱ_a|x' ∀ x, x', ∑_a (ϱ_a|x)=1 ∀ x, ϱ_a|x≽ 0 ∀ a, x. Quantum theory is not necessary for defining the concept of nonlocality. 
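Before turning to no-signaling correlations, the seesaw heuristic just described can be sketched in a few lines. The implementation below is our own illustration, assuming numpy and cvxpy; it restricts to real measurement operators, which suffices to reach the CHSH optimum, and alternates the SDP over one party's POVMs with an eigenvector update of the state. Depending on the random starting point it typically converges to a value close to 2*sqrt(2) = 2.828.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
c = lambda a, b, x, y: (-1) ** (a + b + x * y)        # CHSH coefficients c_abxy
SWAP = np.eye(4)[[0, 2, 1, 3]]                        # swaps the two qubits

def trace_out_B(m):                                   # partial trace over the second qubit
    return m.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def optimise_first_party(rho, fixed):
    """SDP over the first party's POVMs for a fixed two-qubit state rho and fixed POVMs fixed[o, s]."""
    A = {(a, x): cp.Variable((2, 2), symmetric=True) for a in (0, 1) for x in (0, 1)}
    cons = [A[0, x] + A[1, x] == np.eye(2) for x in (0, 1)]
    cons += [A[a, x] >> 0 for a in (0, 1) for x in (0, 1)]
    obj = 0
    for a in (0, 1):
        for x in (0, 1):
            # Tr[(A_{a|x} (x) B_{b|y}) rho] = Tr[A_{a|x} Tr_B((I (x) B_{b|y}) rho)]
            eff = sum(c(a, b, x, y) * trace_out_B(np.kron(np.eye(2), fixed[b, y]) @ rho)
                      for b in (0, 1) for y in (0, 1))
            obj = obj + cp.trace(A[a, x] @ eff)
    cp.Problem(cp.Maximize(obj), cons).solve()
    return {k: v.value for k, v in A.items()}

# Random real projective measurements for Bob and a random real pure state to start from.
B = {}
for y in (0, 1):
    vec = rng.normal(size=2); vec /= np.linalg.norm(vec)
    B[0, y], B[1, y] = np.outer(vec, vec), np.eye(2) - np.outer(vec, vec)
w = rng.normal(size=4); w /= np.linalg.norm(w)
rho = np.outer(w, w)

for _ in range(15):
    A = optimise_first_party(rho, B)                  # Alice's SDP step
    B = optimise_first_party(SWAP @ rho @ SWAP, A)    # Bob's step, via the swapped state
    S = sum(c(a, b, x, y) * np.kron(A[a, x], B[b, y])
            for a in (0, 1) for b in (0, 1) for x in (0, 1) for y in (0, 1))
    evals, evecs = np.linalg.eigh(S)
    rho = np.outer(evecs[:, -1], evecs[:, -1])        # best state: top eigenvector of S

print("CHSH value after the seesaw:", evals[-1])      # typically about 2.828

The swap of the state in Bob's step simply reuses the same routine for the second party, exploiting the symmetry of the CHSH coefficients under exchanging the two parties.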
One just needs probability distributions to obey the principle of no-signaling. No-signaling is the assumption that the one party's outcome cannot depend on the input of the other, which can be physically justified e.g. through space-like separation of the parties. This is formalised as ∑_b p(a,b|x,y)=p(a|x) ∀ a, x, y, ∑_a p(a,b|x,y)=p(b|y) ∀ b, x, y. Since these conditions are linear, the set of all distributions satisfying them (known as no-signaling correlations) can be characterised by LP. Crucially though, such is not the case for quantum correlations as these are known to be more constrained than no-signaling correlations. For instance, no-signaling correlations can achieve the higher-than-quantum CHSH violation of S_CHSH=4 <cit.>. §.§.§ Communication-based scenarios An important family of scenarios is those in which physical systems are not shared, but communicated from some parties to others. The simplest communication scenario is known as the prepare-and-measure scenario and it is illustrated in Fig. <ref>. A sender, Alice, privately selects an input x and encodes it into a message that is sent over a communication channel to a receiver, Bob, who privately selects an input y and performs an associated decoding to receive an outcome b. In a quantum model, the message is described by a quantum state, i.e. ρ_x, and the measurements by POVMs {M_b|y}. The quantum correlations established are given by Born's rule, p(b|x,y)=(ρ_x M_b|y). In contrast, a classical model describes the messages as distinguishable, and can without loss of generality be assigned integer values, but potentially also mixed via classical randomness. Adopting the notations of quantum models, such classical messages are written ρ_x=∑_m p(m|x)|m⟩⟨m| for some conditional message distribution p(m|x). Since classical models admit no superpositions, all classical measurements are restricted to the same basis, namely M_b|y=∑_mp(b|y,m) |m⟩⟨m|. Moreover, it is common to consider also a shared classical cause, λ, between Alice and Bob. Following Born's rule, classical correlations then take the form p(b|x,y)=∑_m,λ p(λ)p(m|x,λ)p(b|y,m,λ). Any correlation that does not admit such a model is called nonclassical. In order for the correlations to be interesting, a restricting assumption must be introduced. Otherwise Alice can always send x to Bob, who can then output b according to any desired p(b|x,y). Typically, the restriction is put on the channel connecting the parties. For this purpose, various approaches have been proposed, all closely linked to SDP techniques. We discuss them in Section <ref>. Here, we exemplify the most well-studied case, namely when the Hilbert space dimension of the message is assumed, or equivalently for classical models, when the cardinality of the message alphabet is known. For a classical model with a message alphabet of size d, the set of correlations in Eq. (<ref>) can be described by an LP <cit.>. In analogy with discussions in the previous section, any randomness in the encoding function p(m|x,λ) and decoding function p(b|y,m,λ) can be absorbed into p(λ). Since d is fixed, there are only finitely many different encoding and decoding functions and they may be enumerated by λ. The LP for deciding whether a given p(b|x,y) admits a classical model based on a d-dimensional message is max_{p(λ)},t t ∑_λ p(λ)∑_m=1^dD(m|x,λ)D(b|y,m,λ)=p(b|x,y), p(λ)≥ t ∀ λ, where the normalisation of p(λ) is implicit in the equality constraint. 
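As an illustration of this LP, the sketch below (assuming numpy and cvxpy; the particular preparations and measurements are our own, purely illustrative choice) builds a target distribution from three qubit preparations measured in the sigma_z and sigma_x bases, enumerates all deterministic one-bit encodings and decodings, and lets the LP decide whether a classical bit assisted by shared randomness reproduces it.

import itertools
import numpy as np
import cvxpy as cp

# Target distribution from a qubit strategy: three preparations on the x-z great circle
# of the Bloch sphere, measured in the sigma_z (y = 0) and sigma_x (y = 1) bases.
sz = np.array([[1, 0], [0, -1]]); sx = np.array([[0, 1], [1, 0]])
bloch = lambda th: (np.eye(2) + np.cos(th) * sz + np.sin(th) * sx) / 2
states = [bloch(2 * np.pi * k / 3) for k in range(3)]
meas = [[(np.eye(2) + s) / 2, (np.eye(2) - s) / 2] for s in (sz, sx)]   # meas[y][b]
p = np.array([[[np.trace(states[x] @ meas[y][b]).real for b in (0, 1)]
               for y in (0, 1)] for x in range(3)])                     # p[x, y, b]

# Deterministic one-bit strategies: an encoding m = e(x) and a decoding b = d(y, m).
encodings = list(itertools.product([0, 1], repeat=3))                   # e indexed by x
decodings = list(itertools.product([0, 1], repeat=4))                   # d indexed by 2*y + m
strategies = list(itertools.product(encodings, decodings))              # 128 in total

q = cp.Variable(len(strategies))
t = cp.Variable()
cons = [q >= t * np.ones(len(strategies))]
for x in range(3):
    for y in (0, 1):
        for b in (0, 1):
            model = sum(q[i] for i, (e, d) in enumerate(strategies) if d[2 * y + e[x]] == b)
            cons.append(model == p[x, y, b])
prob = cp.Problem(cp.Maximize(t), cons)
prob.solve()
print("optimal t:", t.value)   # nonnegative exactly when a classical one-bit model exists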
For reasons analogous to the discussion of local models, the dual of this LP, when p(b|x,y) is nonclassical, provides a hyperplane in the space of correlations which separates it from the classical polytope, i.e. an inequality of the form S≡∑_b,x,yc_bxyp(b|x,y)≥ 0, for some coefficients {c_bxy}, satisfied by all classical models based on d-dimensional messages but violated by the target nonclassical distribution. Interestingly, it is known that correlations obtained from d-dimensional quantum systems can violate the limitations of d-dimensional classical messages. The earliest example, based on comparing a bit message against a qubit message, appeared in <cit.> and was later re-discovered in <cit.>. To see that it is possible, consider that Alice holds two bits, x=x_0x_1∈{0,1}^2 and Bob holds one bit, y∈{0,1}, and that Bob is asked to output the value of Alice's yth bit. However, Alice may only send one (qu)bit to Bob. Classically, one can convince oneself that, on average, the success probability can be no larger than p_suc≡1/8∑_x_0,x_1,yp(b=x_y|x_0,x_1,y)≤ 3/4. This is achieved by Alice sending x_0 and Bob outputting b=x_0 irrespective of y. In contrast, a quantum model can achieve p_suc=1/2(1+1/√(2)) by having Bob measure the Pauli observables σ_X and σ_Z while Alice communicates the qubit states with Bloch vectors ((-1)^x_0,0,(-1)^x_1)/√(2). Such quantum communication advantages are also known to exist for any value of d <cit.>. If we are given an inequality of the form of Eq. (<ref>) and asked to violate it in quantum theory, one can use a seesaw heuristic to numerically search for the optimal value of S, in analogy with the case of Bell inequalities <cit.>. In the prepare-and-measure scenario, the optimisation of S becomes an SDP when the states ρ_x are fixed, and a simple set of eigenvalue problems when the measurements M_b|y are fixed. Specifically, for fixed states the problem becomes[When the outcomes are binary, this too is an eigenvalue problem and hence does not require an SDP formulation.] max_{M_b|y} ∑_b,x,yc_bxy(ρ_xM_b|y) ∑_b M_b|y=𝕀 ∀ y, M_b|y≽ 0, and for fixed measurements it reduces to computing the eigenvectors with maximal eigenvalue of the operators 𝒮_x = ∑_b,yc_bxy M_b|y for each x. Thus, by starting from a randomised initial set of states, one can run the SDP in Eq. (<ref>) and use the returned measurements to compute the optimal states from Eq. (<ref>). The process is iterated until the value converges. §.§ Overview of semidefinite relaxation hierarchies In the previous section we have seen how some classical correlation sets can be characterised via LPs and how SDPs facilitate some quantum correlation problems. However, the characterisation of the set of quantum correlations in most scenarios cannot be achieved with a single SDP, but rather requires SDP relaxation hierarchies. These relaxation hierarchies and their applications are a major focus of the upcoming sections. Here, we provide in Table <ref> an overview of SDP relaxation hierarchies encountered in the study of quantum correlations, the scenario to which they apply, their convergence properties, their main domain of application and the section in this article where they are further discussed. The overview is not comprehensive, as there are also other correlation scenarios where such techniques apply and some of them are discussed in Section <ref>. 
Furthermore, the hierarchies are not unique; there can be several different SDP hierarchies addressing the same problem, as is the case for instance in the two final rows of the table. Whether an SDP hierarchy converges to the targeted set of correlations is an interesting question, but it can come with noteworthy subtleties. For instance, as we will see later, in Bell nonlocality the tensor product structure of the Hilbert space is relaxed to a single-system commutation condition. Several hierarchies converge to this latter characterisation, which is known to be a strict relaxation of the bipartite tensor-product structure when considering infinite-dimensional systems <cit.>. Importantly, even if a hierarchy converges to the quantum set, but also when it does not, what is often of practical interest is how fast useful correlation bounds can be obtained, since it is commonly the case that one cannot evaluate more than a few levels of relaxation. § SEMIDEFINITE RELAXATIONS FOR POLYNOMIAL OPTIMISATION In this section we review the mathematical preliminaries for some of the SDP relaxation methods used in the subsequent sections of this review. A crucial fact about SDPs is that they can be used to approximate solutions to optimisation problems that themselves are not SDPs. That is, some optimisation problems can be relaxed into a sequence, or hierarchy, of increasingly more complex SDPs, each providing a more accurate bound on the solution than the previous. One particular example of this is polynomial optimisation, which can be relaxed to a sequence of SDPs via the so-called moment approach, or its dual, known as sum-of-squares programming. Considering the various semidefinite programming relaxations discussed in this review, many of them fall into this framework of semidefinite relaxations for polynomial optimisation. In such cases the original problem can be either viewed as (or closely approximated by) some polynomial optimisation problem which can then be transformed into an SDP hierarchy by the aforementioned methods. In fact, polynomial optimisation is at the core of many of the results discussed in all of the remaining sections. In light of this we will now dedicate some time to give an overview of the SDP relaxations of such optimisation problems. §.§ Commutative polynomial optimisation Consider the following optimisation problem max_{x_j} f(x_1, …, x_n) g_i(x_1, …, x_n) ≥ 0 ∀ i, where f and g_i are all polynomials in the variables x_1,…, x_n ∈ℝ. This type of problems is known as a (commutative) polynomial optimisation problem. Apart from the applications discussed in this review, this family of optimisation problems has found applications in control theory <cit.>, probability theory <cit.> and machine learning <cit.>. However, polynomial optimisation is known to be NP-Hard <cit.>. The moment and sum of square hierarchies, first proposed in <cit.> and <cit.>, offer a recipe to formulate a sequence of SDPs that, under mild conditions, will converge to the optimal value of Eq. (<ref>). We will now describe both hierarchies at a high level and refer interested readers to the survey article of <cit.> for a more precise treatment. §.§.§ Moment matrix approach The moment matrix approach, commonly known as the Lasserre hierarchy, relaxes Eq. (<ref>) into a sequence of SDPs. In the following we will describe how these relaxations can be constructed and towards this goal we must first introduce some notation. 
A monomial is any product of the variables {x_j}_j and the length of a monomial denotes the number of terms in the product, e.g., x_1x_3^2 has length 3 and x_4x_5x_6^3 has length 5. We define the constant 1 to have length 0. For k ∈ℕ, let 𝒮_k denote the set of monomials with length no larger than k. For a feasible point x = (x_1, …, x_n) of the problem (<ref>) let us define its moment matrix of level k, G^k, to be a matrix indexed by monomials in 𝒮_k whose element in position (u,v) is given by G^k(u,v) = u(x) v(x), where u,v ∈𝒮_k . One crucial feature of moment matrices is that they are necessarily positive semidefinite, as G^k = (∑_u u(x) |u⟩)(∑_u u(x) |u⟩)^†. Furthermore, the value of any polynomial of degree no larger than 2 k can be evaluated at the point x by an appropriate linear combination of the elements of G^k. Thus, by taking k large enough, the value of f(x_1,…,x_n) can be reconstructed from the moment matrix. In addition to G^k, for each g_i appearing in the constraints of Eq. (<ref>) we introduce a localising moment matrix of level , denoted G^k_i_g_i, which will act as a relaxation of the constraint g_i(x) ≥ 0. This new matrix is indexed by elements of 𝒮_k_i and the element at index (u,v) is given by G^k_i_g_i(u,v) = u(x) v(x) g_i(x) . One natural choice of k_i is ⌊ k - (g_i)/2⌋, since this ensures that the polynomial u(x) v(x) g_i(x) is of a degree small enough to be expressed as a linear combination of the elements of the original moment matrix G^k. We will assume in the remainder of this section that k_i is chosen this way but we refer the reader to the remark in Section <ref> for further discussion on choices of indexing sets. Finally, for any feasible point (x_1,…,x_n) of Eq. (<ref>) one again necessarily has that G^k_i_g_i≽ 0. The core idea of the Lasserre hierarchy is that, instead of directly optimising Eq. (<ref>), for each level k one can optimise over all PSD matrices that satisfy the same constraints as the level-k moment matrices of a feasible point of Eq. (<ref>). When taking k large enough so that all the polynomials in the problem can be expressed as linear combinations of the moment matrix elements, e.g., f(x) = ∑_u,v ∈𝒮_k c_uv u(x)v(x) for some coefficients c_uv∈ℝ, then one arrives at the semidefinite program max ∑_u,v ∈𝒮_k c_uv G^k(u, v) G^k ≽ 0, G^k_i_g_i≽ 0 ∀ i, where there are many implicit equality constraints relating the elements of G_g_i^k_i matrices to linear combinations of the elements of G^k. Additionally there are other constraints based on the construction of the moment matrices, e.g., G^k(uw,v) = G^k(u, wv) for all monomials u, v, w such that uw,wv∈ S_k as well as the normalisation constraint G^k(1,1) = 1. Note that, as every feasible point of Eq. (<ref>) defines a feasible point of Eq. (<ref>), this new optimisation problem is a relaxation and its optimal value constitutes an upper bound on the optimal value of Eq. (<ref>). Furthermore, <cit.> proved that under certain conditions the sequence of optimal values of Eq. (<ref>) indexed by the relaxation level k will converge to the optimal value of Eq. (<ref>). Note however that the size of the SDPs grows rapidly[The moment matrix of level k is of size |𝒮_k|× |𝒮_k| with |𝒮_k| = (k+n-1)!/(n-1)! k!.] with k. Nevertheless, in many practical problems of interest it has been observed that small relaxation levels can give accurate, and sometimes tight, bounds. 
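As a minimal numerical illustration of the level-1 relaxation, independent of the worked example that follows, consider the toy problem (of our own choosing) of maximising x_1 x_2 subject to 1 - x_1^2 ≥ 0 and 1 - x_2^2 ≥ 0, whose true optimum is 1 at x_1 = x_2 = ±1. The sketch below, assuming cvxpy, optimises directly over a 3×3 PSD moment matrix indexed by the monomials {1, x_1, x_2}; for this toy problem the first level is already tight.

import cvxpy as cp

# Level-1 moment matrix G indexed by the monomials (1, x1, x2):
#   G[0,0] = 1, G[0,1] ~ x1, G[0,2] ~ x2, G[1,1] ~ x1^2, G[1,2] ~ x1*x2, G[2,2] ~ x2^2.
G = cp.Variable((3, 3), PSD=True)        # PSD=True also enforces symmetry
constraints = [G[0, 0] == 1,             # moment of the constant monomial
               1 - G[1, 1] >= 0,         # localising constraint for 1 - x1^2 >= 0 (level 0)
               1 - G[2, 2] >= 0]         # localising constraint for 1 - x2^2 >= 0 (level 0)
prob = cp.Problem(cp.Maximize(G[1, 2]), constraints)
prob.solve()
print("level-1 moment bound:", prob.value)   # approximately 1, tight for this toy problem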
To better illustrate this method let us demonstrate its use on the following problem max x_2^2 - x_1 x_2 - x_2 x_1 - x_1^2 ≥ 0, x_2 - x_2^2 ≥ 0. The monomial set for level k=1 is 𝒮_1 = {1, x_1, x_2}. The corresponding relaxation of Eq. (<ref>) at this level is the SDP max y_22 - y_12 - y_02 s.t. [ 1 y_01 y_02; y_11 y_12; y_22 ]≽ 0, y_01 - y_11≥ 0, y_02 - y_22≥ 0, which has an optimal value of 0.125. The monomial set for the second level, k=2, is 𝒮_2 = {1, x_1,x_2,x_1^2, x_1x_2, x_2^2}, and the corresponding relaxation is max y_05 - y_04 - y_02 s.t. [ 1 y_01 y_02 y_03 y_04 y_05; y_03 y_04 y_13 y_14 y_15; y_05 y_14 y_15 y_25; y_33 y_34 y_35; y_35 y_45; y_55 ]≽ 0, [ y_01 - y_03 y_03 - y_13 y_04 - y_14; y_13 - y_33 y_14 - y_34; y_15 - y_35 ]≽ 0, [ y_02 - y_05 y_04 - y_15 y_05 - y_25; y_14 - y_35 y_15 - y_45; y_25 - y_55 ]≽ 0 . At this level one can now see how the entries of the localising moment matrices are linear combinations of the elements of the original moment matrix. If we solve the above example numerically we find that it gives an objective value of 0.000021. In particular, as we increase the relaxation level the objective values converge towards the optimal value of the original problem, which is 0 and is achieved when x_1 = 0 and x_2=1. §.§.§ Sum of squares approach The dual problems to the moment matrix relaxations also have an interesting interpretation in terms of optimising over sum-of-squares (SOS) polynomials <cit.> (see also the survey of <cit.> for a discussion on the duality of the two approaches). A polynomial p(x) is an SOS polynomial if it can be written as p(x) = ∑_i r_i(x)^2 for some polynomials r_i(x). Note that an SOS polynomial is necessarily nonnegative, i.e., p(x) ≥ 0 ∀ x. We can therefore upper bound our original problem, given in Eq. (<ref>), by the SOS problem min λ s.t. λ - f(x) = s_0(x) + ∑_i s_i(x) g_i(x), s_j ∈SOS ∀ j, λ∈ℝ, where the optimisation is over λ and SOS polynomials s_j. Notice that whenever we have an x such that g_i(x) ≥ 0 for every i (i.e., x is a feasible point of Eq. (<ref>)) we know that the right-hand side of the equality constraint must be nonnegative and hence f(x) ≤λ. Therefore this dual problem gives an upper bound on the maximum of f(x). Like the original problem (<ref>), this is not necessarily an easy problem to solve. Nevertheless one can again relax it to a hierarchy of SDPs. The key idea is to notice that for any SOS polynomial p(x) one can always write it in the form p(x) = w^T M w where M is a PSD matrix and w is a vector of monomials. Thus, one can obtain a hierarchy of relaxations by bounding the length of the monomials in the vector w. Let SOS_k be the set of all the SOS polynomials generated when w is the vector of all the monomials in 𝒮_k. Then we have the following hierarchy of relaxations for k ∈ℕ. min λ s.t. λ - f(x) = s_0(x) + ∑_i s_i(x) g_i(x), s_i ∈SOS_2 k - (g_i) ∀ i, λ∈ℝ, where (g_0) = 0. This gives a sequence of SDP relaxations for Eq. (<ref>). Moreover, the SDPs in Eq. (<ref>) are precisely the dual SDPs of the moment matrix relaxations of Eq. (<ref>). By solving these SDPs it is possible to extract an SOS decomposition of λ - f(x), which gives a certificate that f(x) ≤λ whenever g_i(x) ≥ 0 ∀ i. For instance, solving the level-1 relaxation of our previous example, given in Eq. (<ref>), we find that for λ = 1/8 we can write 1/8 - x_2^2 + x_1 x_2 + x_2 as 1/2(1/2 - x_1 -x_2)^2 + 1/2(x_1 - x_1^2) + 3/2(x_2 - x_2^2) . 
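The certificate above can be checked symbolically: recall that the example asks for the maximum of x_2^2 - x_1 x_2 - x_2 under x_1 ≥ x_1^2 and x_2 ≥ x_2^2, so the decomposition should reproduce λ - f(x) = 1/8 - x_2^2 + x_1 x_2 + x_2. A minimal sketch using sympy (an assumption about available tooling) confirms the identity.

from sympy import symbols, Rational, expand

x1, x2 = symbols('x1 x2')
half = Rational(1, 2)

certificate = (half * (half - x1 - x2) ** 2
               + half * (x1 - x1 ** 2)
               + Rational(3, 2) * (x2 - x2 ** 2))
target = Rational(1, 8) - x2 ** 2 + x1 * x2 + x2      # lambda - f(x) with lambda = 1/8

print(expand(certificate - target))                   # prints 0, confirming the identity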
Whenever the constraints x_1 ≥ x_1^2 and x_2 ≥ x_2^2 are satisfied the above polynomial is nonnegative and hence, as it is equal to 1/8 - x_2^2 + x_1 x_2 + x_2 it must be that x_2^2 - x_1 x_2 - x_2 ≤1/8. This is an analytical proof of the upper bound which can be extracted from the numerics. §.§ Noncommutative polynomial optimisation The polynomial optimisation problems of the previous section can also be extended to the setting wherein the variables do not commute. Given some Hilbert space ℋ we can now consider polynomials of bounded operators X_1, …, X_n on ℋ. In particular, consider the following optimisation problem max (ρ f(X_1, …, X_n)) s.t. (ρ h_i(X_1, …, X_n)) ≥ 0 ∀ i, g_j(X_1,…,X_n) ≽ 0 ∀ j, (ρ) = 1, ρ≽ 0, where the optimisation is over all Hilbert spaces ℋ, all states ρ on ℋ and all bounded operators X_1,…, X_n on ℋ, and the polynomials f, h_i and g_j are all Hermitian – although the variables X_1, … X_n need not necessarily be Hermitian. This noncommutative generalisation of Eq. (<ref>) rather naturally captures many problems in quantum theory and, as we shall see in later sections, it forms the basis for characterising nonlocal correlations (see Section <ref>), communication correlations (see Sections <ref> and <ref>), computing key rates in cryptography (see Section <ref>) and characterising network correlations (see Section <ref>). As in the commutative case, this problem is in general very difficult to solve. Indeed, the noncommutative setting is a generalisation of the former and hence inherits its complexity. Nevertheless, <cit.> showed that relatively natural extensions of the moment and sum-of-squares hierarchies can be derived that lead to a hierarchy of SDPs that (under mild conditions[A sufficient condition for convergence is that the constraints of the problem imply a bound on the operator norm of feasible points (X_1,…,X_n). Following the formulation in <cit.>, one should be able to determine some constant C such that C^2 - ∑_i=1^n X_i^† X_i ≽ 0 for all feasible points (X_1,…,X_n). For example if X_i are all projectors then we can take C=√(n).]) will converge to the optimal value of Eq. (<ref>). §.§.§ Moment matrix approach Following the previous section closely, a monomial is any product of the operators X_1, …, X_n and its length is the number of elements in the product. We define the length of the identity operator to be 0. For k ∈ℕ let 𝒮_k denote the set of monomials of length no larger than k, noting that if a variable X_i is not Hermitian then we also include its adjoint X_i^† in the set of variables generating the monomials in 𝒮_k. For any feasible point (ℋ, ρ, X_1,…, X_n) of the problem it is possible to define a moment matrix, Γ^k, of level k which is a matrix indexed by elements of 𝒮_k and whose (M,N) entry for M, N ∈𝒮_k is given by Γ^k(M,N) = (ρ M^† N). As in the commutative case, this moment matrix is necessarily PSD as for any vector |w⟩ we have ⟨w|Γ^k |w⟩ = (ρ R^† R) ≥ 0, where R = ∑_N ∈𝒮_k⟨N|w⟩ N. Note that for any polynomial p(X) of degree no larger than 2k in the variables X_1,…, X_n we have that (ρ p(X)) is a linear combination of the elements of Γ^k. For each polynomial g_i appearing in the constraints of Eq. (<ref>) we also introduce a localising moment matrix of level k_i, denoted Γ^k_i_g_i, whose (M,N) entry is Γ_g_i^k_i(M, N) = (ρ M^† g_i(X) N). As in the commutative case a natural choice of k_i is ⌊ k - (g_i)/2⌋ to ensure that all elements of Γ_g_i^k_i can be expressed as linear combinations of elements of Γ^k. 
Note that if g_i(X) is PSD then its corresponding moment matrix is also PSD. As in the case of the Lasserre hierarchy, it is possible to relax the problem (<ref>) to a hierarchy of SDPs by optimising over semidefinite matrices that resemble moment matrices and localising moment matrices of level k. In particular if (f), (h_i) ≤ 2k, we can write f(X) = ∑_M,N ∈𝒮_k f_MN M^† N and h_i(X) = ∑_M,N ∈𝒮_k h^i_MN M^† N where f_MN, h^i_MN∈ℂ. Then for k∈ℕ such that (f), (h_i) ≤ 2k we define the level-k relaxation of Eq. (<ref>) to be the SDP max ∑_M,N ∈𝒮_k f_MNΓ^k(M,N) s.t. ∑_M,N h^i_MNΓ^k(M,N) ≥ 0 ∀ i, Γ^k_j_g_j≽ 0 ∀ j, Γ^k ≽ 0 . As in the case of the Lasserre hierarchy, there are many implicit equality constraints in the above SDP, e.g., Γ^k(A, BC) = Γ(B^†A, C), and the normalisation condition Γ^k(𝕀,𝕀) = 1. Let us take a look at a noncommutative extension of the example we introduced in the previous subsection (see problem (<ref>)). Suppose that X_1 and X_2 are now Hermitian operators, and that we are interested in solving the following problem max [ρ (X_2^2 - 1/2 X_1 X_2 - 1/2 X_2 X_1 - X_2)] s.t. X_1 - X_1^2 ≽ 0, X_2 - X_2^2 ≽ 0, (ρ) =1, ρ≽ 0 . Note that if we were to add the condition [X_1,X_2]=0, then the optimal value of the problem would coincide with that of Eq. (<ref>). Applying the moment matrix relaxations to this problem we find that both level 1 and level 2 give the same value of 1/8. In this case we find that the hierarchy has converged already at level 1, and the optimal value of 1/8 is different to the optimal value of the commutative problem (which was 0). This value is achieved by the qubit state ρ = |0⟩⟨0| together with the projectors X_1 = 1/4[ 1 √(3); √(3) 3 ], X_2 = 1/4[ 1 -√(3); -√(3) 3 ] . §.§.§ Sum of squares approach In the same spirit as Section <ref>, the dual problem to the moment matrix approach can be seen as an optimisation over SOS polynomials, in this case with noncommuting variables. Given a polynomial of operators p(X_1, …, X_n) we say that p is a sum of squares if it can be written in the form p(X_1,…, X_n) = ∑_i r_i^†(X_1,…,X_n) r_i(X_1,…, X_n) for some polynomials r_i. It is evident that SOS polynomials are necessarily PSD. It is thus possible to find an upper bound on the problem (<ref>) by instead solving the problem 3min λ s.t. λ - f(X) = s_0(X) + ∑_i ν_i h_i(X) + ∑_i,j r^†_ij(X) g_j(X) r_ij(X), ν_i ≥ 0 ∀ i, s_0 ∈SOS, λ∈ℝ, where the optimisation is over λ, the nonnegative real numbers ν_i, the sum-of-squares polynomial s_0, and arbitrary polynomials r_ij(X). Given a feasible point of the problem (<ref>) and any quantum state ρ, it is clear that if g_j(X) ≽ 0 and if (ρ h_i(X)) ≥ 0 then we must have λ≥(ρ f(X)). Therefore any feasible point of Eq. (<ref>) provides an upper bound on the optimal value of Eq. (<ref>). This SOS optimisation can furthermore be relaxed to a hierarchy of SDPs. To see this note first that a polynomial p(X) is a sum of squares if and only if there exists a PSD matrix M such that p(X) = w^† M w, where w is a vector of monomials. Thus, by considering vectors w whose entries are monomials up to degree k (i.e., elements of 𝒮_k), one optimises over SOS polynomials up to degree 2k and the constraint s_0 ∈SOS is relaxed to the SDP constraint s_0 ∈SOS_2k. The real variables λ and ν_i all appear linearly in the problem and are therefore valid variables for an SDP problem. Finally we have the terms of the form ∑_i r_i^†(X) g(X) r_i(X). This is similar to an SOS polynomial, ∑_i r_i^†(X) r_i(X), except that it is centered around a polynomial g(X). 
Like in the case of an SOS polynomial, for a bounded degree of r_i this quantity can be rewritten as a PSD matrix M with its entries multiplied by g(X), creating a new matrix M_g that satisfies w^† M_g w = ∑_i r_i^†(X) g(X) r_i(X). Thus this term can also be reinterpreted as a PSD condition. Following the notation for SOS polynomials we denote the set of g-centered SOS polynomials of degree up to d by SOS^g_d. For each k ∈ℕ large enough, one arrives at the following hierarchy of semidefinite programming relaxations for Eq. (<ref>) min λ s.t. λ - f(X) = s_0(X) + ∑_i ν_i h_i(X) + ∑_j s_j(X), ν_i ≥ 0 ∀ i, s_0 ∈SOS_2k, s_j ∈SOS^g_j_2k - (g_j) ∀ j, λ∈ℝ. By relaxing the noncommutative problem (<ref>) to level 1 of the SOS hierarchy we find that the polynomial 1/8 - X_2^2 + 1/2 (X_1X_2 + X_2X_1) + X_2 can be written as 1/2(1/2 - X_1 - X_2)^†(1/2 - X_1 - X_2) + 1/2(X_1 - X_1^2) + 3/2(X_2 - X_2^2), which provides an analytical proof that for any Hermitian operators (X_1,X_2) that satisfy X_1-X_1^2 ≽ 0 and X_2-X_2^2 ≽ 0 we must have that (ρ (X_2^2 - 1/2 (X_1 X_2 + X_2 X_1) - X_2))≤1/8. Throughout this section we have repeatedly used a monomial indexing which was chosen up to some degree k. In both the moment and SOS approach this k defines the index of the SDP hierarchy. It is important to note however that it is not necessary to construct a hierarchy with these sets and in general indexing by any set of monomials 𝒮 will lead to a valid semidefinite relaxation of the problem. Such constructions can lead to more accurate bounds with less computational resources or to interesting physical constraints <cit.>. Note that this also applies to the indexing sets of the localising moment matrices. § ENTANGLEMENT Entangled states are fundamental in quantum information science. In this section we discuss the use of SDP relaxation methods for detecting and quantifying entanglement. §.§ Doherty-Parrilo-Spedalieri hierarchy Recall that a bipartite state is separable when it can be written in the form given by Eq. (<ref>). Otherwise, it is said to be entangled. This leads to an elementary question: is a given bipartite density matrix separable or entangled? While a general solution is very challenging <cit.>, the problem can be solved through a converging hierarchy of semidefinite relaxations of the set of separable states known as the Doherty-Parrilo-Spedalieri (DPS) hierarchy <cit.>. Consider a quantum state ρ_AB∈ℋ_A ⊗ℋ_B. If the state is separable, then for any positive integer n we can construct a symmetric extension of this quantum state, that is, a quantum state ρ_n ∈ℋ_A ⊗ℋ_B^⊗ n that is invariant under permutation of the subsystems in ℋ_B^⊗ n, and such that taking the partial trace over the additional subsystems recovers ρ_AB. For a separable state written as in Eq. (<ref>), such an extension is given by ρ_n^sep=∑_λ p(λ) ϕ_λ⊗φ^⊗ n_λ. The main idea behind the DPS hierarchy is that this does not hold for entangled states: for any fixed entangled ρ_AB there is a threshold n_0 such that symmetric extensions with n>n_0 do not exist. Testing whether such a symmetric extension exists can be cast as an SDP, and therefore this gives a complete SDP hierarchy for testing entanglement. However, this test can quickly become computationally demanding. Two more ideas can be used to make it more tractable.
The first is combining the test for symmetric extensions with the PPT criterion, described in Section <ref>: we add the requirement that the extension ρ_n must have positive partial transposition across all possible bipartitions[Note that because of the symmetry of ρ_n only n partial transpositions need to be considered, instead of the 2^n possible ones.]. This is satisfied by ρ_n^sep. The second idea is to make use of the symmetry of ρ_n^sep in order to reduce the size of the problem[Symmetrisation techniques are useful for a wide class of SDPs and will be discussed more in Section <ref>.]. The key observation is that ρ_n^sep is invariant not only under the permutation of ℋ_B^⊗ n, but satisfies a stronger condition[To see that this is stronger, consider the state |ψ^-⟩ = 1/√(2)(|01⟩-|10⟩). It is not symmetric, as SWAP|ψ^-⟩ = -|ψ^-⟩, but it is permutation invariant, as SWAP|ψ^-⟩⟨ψ^-|SWAP = |ψ^-⟩⟨ψ^-|. ] known as Bose symmetry, that is, ρ_n^sep = (𝕀_A ⊗ P_B )ρ_n^sep = ρ_n^sep(𝕀_A ⊗ P_B) for any permutation P. This implies that we can require additionally that ρ_n^sep belongs to the symmetric subspace (over the copies of B), which has dimension s_n = d_B + n -1n, as opposed to the dimension d_B^n of the whole space. Let then V be an isometry from ℂ^s_n to the symmetric subspace of ℋ_B^⊗ n. With all the pieces in place, we can state the DPS SDP: max_σ,t t (σ) = 1, σ≽ t𝕀 ρ_n = (𝕀_A ⊗ V_B) σ(𝕀_A ⊗ V_B^†) ρ_AB = _B_2,…,B_n(ρ_n) ρ_n^T_B_1… T_B_k≽ t𝕀 ∀ k The variable t has been introduced to make the problem strictly feasible as in Section <ref>. The dimension of σ is d_A s_n, which for fixed d_B increases exponentially fast with n, reflecting the fact that determining separability is an NP-hard problem. It is possible to compute convergence bounds on the DPS hierarchy, i.e., until which n does one need to go in order to test whether a given quantum state is ϵ-close to the set of separable states <cit.>. From the dual of the DPS hierarchy one can in principle obtain an entanglement witness for any entangled state. Moreover, the dual of the DPS hierarchy can also be interpreted as a commutative sum-of-squares hierarchy <cit.>. The hierarchy collapses at the first level if d_Ad_B ≤ 6, as in this case the PPT criterion is necessary and sufficient for determining whether a state is entangled <cit.>. A natural question is then whether it also collapses at a finite level for larger dimensions. Surprisingly, the answer is negative, and moreover one can show that no single SDP can characterise separability in these dimensions <cit.>. It is possible, however, to solve a weaker problem with a single, albeit very large, SDP: optimizing linear functionals over the set of separable of states <cit.>. While the DPS hierarchy gives converging outer SDP relaxations of the separable set, it is also possible to construct converging inner relaxations of the same set <cit.>. These relaxations closely follow the ideas of the DPS relaxations and are based on the observation that small linear perturbations can destroy the entanglement of states with n-fold Bose symmetric extension. However, it differs in the fact that the resulting set of SDPs is not a hierarchy, as the next criterion is not always strictly stronger than the previous. §.§ Bipartite entanglement In this subsection we review the application of SDP methods to the simplest entanglement scenario, namely that of entanglement between two systems. 
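As a concrete illustration connecting the DPS construction of the previous subsection to the two-qubit case, the sketch below (assuming numpy and cvxpy; for two qubits the PPT criterion alone is already conclusive, so this serves only to exhibit the construction) implements the n = 2 level with the symmetric-subspace isometry and the partial-trace and partial-transpose maps written out explicitly. For a noisy singlet with visibility above 1/3 the optimum should come out negative.

import numpy as np
import cvxpy as cp

# Test state: noisy two-qubit singlet rho = v |psi-><psi-| + (1 - v) I/4 (entangled for v > 1/3).
v = 0.8
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = v * np.outer(psi, psi) + (1 - v) * np.eye(4) / 4

# Isometry from C^3 to the symmetric subspace of B1 B2: columns |00>, (|01>+|10>)/sqrt(2), |11>.
V = np.zeros((4, 3))
V[0, 0] = 1.0
V[1, 1] = V[2, 1] = 1 / np.sqrt(2)
V[3, 2] = 1.0
W = np.kron(np.eye(2), V)                     # embeds A (x) C^3 into A (x) B1 (x) B2

def partial_transpose(X, dims, sys):
    """Transpose subsystem `sys` of X (array or cvxpy expression) with subsystem dimensions `dims`."""
    before = int(np.prod(dims[:sys])); d = dims[sys]; after = int(np.prod(dims[sys + 1:]))
    out = 0
    for m in range(d):
        for n in range(d):
            E = np.zeros((d, d)); E[m, n] = 1.0
            P = np.kron(np.kron(np.eye(before), E), np.eye(after))
            out = out + P @ X @ P
    return out

def trace_out_last_qubit(X):                  # Tr_{B2} of an 8x8 expression
    bras = [np.kron(np.eye(4), np.eye(2)[k:k + 1]) for k in range(2)]
    return sum(K @ X @ K.T for K in bras)

# Level n = 2 of the DPS hierarchy (a real sigma suffices for this real test state).
sigma = cp.Variable((6, 6), symmetric=True)
t = cp.Variable()
rho2 = W @ sigma @ W.T                        # Bose-symmetric extension on A (x) B1 (x) B2
constraints = [cp.trace(sigma) == 1,          # implied by the marginal constraint, kept to mirror the text
               sigma - t * np.eye(6) >> 0,
               trace_out_last_qubit(rho2) == rho,
               partial_transpose(rho2, [2, 2, 2], 1) - t * np.eye(8) >> 0,   # transpose B1
               partial_transpose(rho2, [2, 4], 1) - t * np.eye(8) >> 0]      # transpose B1 B2
prob = cp.Problem(cp.Maximize(t), constraints)
prob.solve()
print("optimal t:", t.value)   # negative: no PPT symmetric extension at this level, hence entangled

In higher local dimensions, where PPT alone is inconclusive, the same construction with larger n is what gives the hierarchy its power, at the cost of the exponential growth in size noted above.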
§.§.§ Quantifying entanglement Once a quantum state is known to be entangled, a natural question is to quantify its entanglement; see e.g. the review <cit.>. In the standard paradigm, where the parties can perform local operations assisted by classical communication (LOCC) and have access to asymptotically many copies of the state, it is natural to consider conversion rates between a given state and the maximally entangled state as quantifiers of entanglement. Two important quantifiers are the distillable entanglement, E_D, and the entanglement cost, E_C. The distillable entanglement addresses the largest rate, R, at which one can convert, by means of LOCC, a given bipartite state ρ_AB into a d-dimensional maximally entangled state ϕ^+_d <cit.>; E_D(ρ_AB)= sup R lim_n→∞inf_ℒℒ(ρ_AB^⊗ n)-ϕ^+_2^⌊ nR⌋_1 =0, where O_1=√(O^† O) is the trace norm and ℒ is the set of LOCC operations. This is equivalent to asking how many copies of a maximally entangled qubit pair that can be extracted asymptotically from ρ_AB. While this definition may appear somewhat arbitrary, in the asymptotic setting many alternative definitions turn out to be equivalent to it <cit.>. The entanglement cost is the smallest rate of maximally entangled states required to convert them into a given state by means of LOCC; E_C(ρ_AB)= inf R lim_n→∞inf_ℒρ_AB^⊗ n-ℒ(ϕ^+_2^⌊ nR⌋)_1 =0. This definition remains unaltered by changing the distance measure <cit.>. In general E_D≠ E_C <cit.> and, in fact, a large class of entanglement measures can be shown to be bounded from above by E_C and from below by E_D <cit.>. Thus, computing these quantities is of particular interest. Unfortunately, due the difficulty of characterising ℒ <cit.>, such computations are very hard <cit.>, but they can be efficiently bounded using SDP methods. A frequently used entanglement measure is the logarithmic negativity of entanglement <cit.>. It is defined as E_𝒩=logρ^T_B_1 and it bounds the distillable entanglement as E_D(ρ)≤ E_𝒩(ρ). It can be computed as the following SDP, ρ^T_B_1=min_σ_± (σ_+) + (σ_-) ρ^T_B = σ_+-σ_- σ_±≽ 0. To see the connection between the trace norm and SDP, note that every Hermitian operator, O, can be written as O=σ_+-σ_- for some PSD operators σ_±. Consider that we are given one copy of a non-maximally entangled state and we want to distil a state with as large a fidelity with the maximally entangled state as possible. By relaxing the LOCC paradigm to the (technically more convenient) superset of global operations that preserve PPT, the fidelity can be bounded by an SDP <cit.>. However, this bound is not additive <cit.>. Therefore, once we move into the many-copy regime, the size of the SDP grows with the number of copies n, making it unwieldy for the asymptotic limit n→∞. Notably, in this LOCC-to-PPT relaxed setting, the irreversibility of entanglement (i.e. E_D≠ E_C) still persists as shown through SDP in <cit.>; see also <cit.>. An alternative upper bound on E_D is reported in <cit.> which is fully additive under tensor products, thus resolving the limit issue, and computable by SDP. It is given by E_W=log W(ρ_AB) where W(ρ_AB)= min σ^T_B_AB_1 σ_AB≽ρ_AB. This is bounded from below by the bound in <cit.> and from above by the logarithmic negativity. In fact, once LOCC is relaxed to global PPT-preserving operations, the entanglement cost can be computed exactly. It is obtained from log(E_κ) where E_κ corresponds to the following SDP <cit.>, E_κ(ρ_AB)= min (S) -S^T_B≼ρ_AB^T_B≼ S^T_B. 
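A minimal sketch of the simplest of these quantifiers, the trace-norm SDP behind the logarithmic negativity, is given below (assuming numpy and cvxpy, and taking the logarithm base 2, which is a conventional but not text-prescribed choice); the eigenvalue-based evaluation of the trace norm is printed alongside as a sanity check.

import numpy as np
import cvxpy as cp

# Noisy two-qubit singlet rho = v |psi-><psi-| + (1 - v) I/4.
v = 0.8
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = v * np.outer(psi, psi) + (1 - v) * np.eye(4) / 4

# Partial transpose on B of the (constant) state: reshape to (a, b, a', b') and swap b <-> b'.
rho_TB = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# ||rho^{T_B}||_1 as an SDP: minimise tr(sig_p) + tr(sig_m) with rho^{T_B} = sig_p - sig_m, sig_{p,m} >= 0.
sig_p = cp.Variable((4, 4), PSD=True)
sig_m = cp.Variable((4, 4), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(sig_p) + cp.trace(sig_m)),
                  [sig_p - sig_m == rho_TB])
prob.solve()

print("trace norm of rho^{T_B}:", prob.value)                  # exceeds 1 whenever v > 1/3
print("logarithmic negativity  :", np.log2(prob.value))
print("eigenvalue check        :", np.abs(np.linalg.eigvalsh(rho_TB)).sum())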
While the asymptotic setting is conceptually interesting, a more applied approach often considers imperfect conversions between states using finitely many copies. In this so-called one-shot setting, SDP methods have been used for bounding the rate of entanglement distillation for a given degree of error <cit.>. This has been considered using many different relaxations of LOCC which admit either LP or SDP formulations <cit.>. In <cit.> SDPs are used for entanglement distillation under realistic limitations on the number of copies, error and exchange of messages in LOCC, including also the setting in which success is only probabilistic. Another interesting entanglement measure is the squashed entanglement <cit.>. It has several desirable properties: it is fully additive under tensor products, it obeys a simple entanglement monogamy relation <cit.> and it is faithful, i.e. it is non-zero if and only if the state is entangled <cit.>. The definition draws inspiration from quantum key distribution by considering the smallest possible quantum mutual information between Alice and Bob upon conditioning on a third, “eavesdropper”, system E with which the state may be correlated. The squashed entanglement is defined as E_sq(ρ_AB)= min_ρ_ABE 1/2 I(A:B|E) s.t. ρ_AB= tr_E(ρ_ABE), where the quantum conditional mutual information can be given in terms of the conditional von Neumann entropy as I(A:B|E)=H(A|E)-H(A|BE). While it is NP-hard to compute <cit.>, it can be bounded from below by means of a hierarchy of SDPs <cit.>. This hierarchy draws inspiration from the methods presented in Section <ref> for giving variational bounds on the von Neumann entropy. It is unknown whether the hierarchy converges to E_sq but non-trivial lower bounds can be obtained already at the first level for particular states. A complementary class of entanglement measures is based on convex roof constructions. This means that one considers every possible decomposition of a mixed state, ρ=∑_i p_i |ψ_i⟩⟨ψ_i|, and evaluates the minimal average entanglement of the pure states in the decomposition, i.e. E_roof(ρ)=min∑_i p_i E(|ψ_i⟩), for some pure-state entanglement measure E. Examples of this are the entanglement of formation <cit.> and geometric measures of entanglement <cit.>. In <cit.> it is shown that convex roofs of polynomial entanglement measures can be viewed as separability problems. An illustrative example is the linear entropy, E(|ψ_AB⟩)=1-tr(ρ^2_A). By observing that E(|ψ_AB⟩)=2 tr(ℙ^asym_AA'|ψ⟩⟨ψ|_AB⊗|ψ⟩⟨ψ|_A'B'), where ℙ^asym is the projector onto the antisymmetric subspace, the convex roof of the linear entropy can be written as E_roof(ρ)=min 2 tr(ℙ^asymσ), where the minimisation runs over states of the form σ=∑_i p_i |ψ_i⟩⟨ψ_i|_AB⊗|ψ_i⟩⟨ψ_i|_A'B'. Any such state σ is separable with respect to AB|A'B', symmetric under swapping these systems, and its marginal is tr_A'B'(σ)=ρ. Thus, by relaxing separability to e.g. PPT, E_roof can be bounded through an SDP. Finally, we mention that the SDP-based discussion of entanglement quantification and conversion can be extended to many other quantum resource theories, e.g. fidelity distillation of basis-coherence (instead of entanglement as the resource) under incoherent operations (instead of LOCC as the free operation) <cit.>. Similarly, conversion rates can be addressed by SDPs for resource theories of Gaussian states under Gaussian operations <cit.>, basis-coherent states <cit.>, entanglement in complex versus real Hilbert spaces <cit.> and asymmetry of states under group actions <cit.>.
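Returning to the convex-roof relaxation described above, the following is a minimal sketch (Python with cvxpy) of the PPT-relaxed lower bound on the convex roof of the linear entropy for a two-qubit example. The subsystem ordering (A, A', B, B'), the explicit factor of two compensating for the convention ℙ^asym=(𝕀-SWAP)/2, and the Bell-state test case are our own illustrative choices.

```python
import numpy as np
import cvxpy as cp

# Target: lower bound the convex roof of the linear entropy 1 - tr(rho_A^2).
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # Bell state, linear entropy 1/2
rho = np.outer(phi, phi)

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
P_asym = (np.eye(4) - SWAP) / 2                # antisymmetric projector on A A'
obj_op = np.kron(P_asym, np.eye(4))            # subsystem ordering (A, A', B, B')
Sw = np.kron(SWAP, SWAP)                       # exchanges AB <-> A'B'
dims = (2, 2, 2, 2)

sigma = cp.Variable((16, 16), hermitian=True)
marg_AB = cp.partial_trace(cp.partial_trace(sigma, dims, axis=3), (2, 2, 2), axis=1)
ppt = cp.partial_transpose(cp.partial_transpose(sigma, dims, axis=1), dims, axis=3)

constraints = [sigma >> 0, marg_AB == rho, Sw @ sigma @ Sw == sigma, ppt >> 0]
# Factor 2 because 1 - tr(rho_A^2) = 2 tr(P_asym rho_A (x) rho_A) with P_asym = (I - SWAP)/2.
prob = cp.Problem(cp.Minimize(2 * cp.real(cp.trace(obj_op @ sigma))), constraints)
prob.solve()
print(prob.value)    # close to 0.5 for the Bell state
```

For a pure input state the marginal constraint forces σ to be the product of two copies of the state, so the relaxation returns the linear entropy exactly (here 1/2); for mixed states it gives a lower bound on the convex roof.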
§.§.§ Detecting the entanglement dimension Suppose that a bipartite state with local dimension d is certified to be entangled. Does the preparation of the state truly require one to generate entanglement between d degrees of freedom? For pure states, this idea of an entanglement dimension is formalised in the Schmidt rank of a state |ψ⟩. Every pure bipartite state, up to local unitaries, admits a Schmidt decomposition, |ψ⟩=∑_i=1^s λ_i|i,i⟩, for some real and normalised, nonnegative coefficients {λ_i}. The Schmidt rank is the number of non-zero terms (1≤ s≤ d) in the Schmidt decomposition. For mixed states this concept is extended to the Schmidt number. Let ρ=∑_ip_i |ψ_i⟩⟨ψ_i| be some decomposition. The Schmidt number is the largest Schmidt rank of the pure states {|ψ_i⟩} minimised over all possible decompositions of ρ <cit.>. One way to witness the Schmidt number is based on the range of ρ. If the range of ρ is not spanned by pure states that have Schmidt rank at most s then ρ must have Schmidt number at least s+1. However, verifying that the range cannot be spanned by such states is not easy in general. In <cit.> it is shown that the more general question of whether a given subspace of pure quantum states contains any product states (or states with Schmidt rank s) can be addressed by means of a hierarchy of linear programs. This method exploits elementary properties of local antisymmetric projections applied to tensor products of the basis elements of the considered space. Every entangled subspace is detected at some finite level of this hierarchy and it is more efficient to compute in comparison to SDP-based approaches. In analogy with entanglement witnesses, a relevant endeavour is to witness the Schmidt number from partial information about the state ρ_AB, i.e. to find an observable O such that (Oρ)≤α holds for all states with Schmidt number at most s but is violated for at least one state with a larger Schmidt number. Determining the value of α for any given O can be related to a separability problem in a larger Hilbert space <cit.>. Specifically α = max_σ s^2(O_AB⊗|ϕ^+_s⟩⟨ϕ^+_s|_A'B'σ_AA'BB'), where the four-partite state σ is separable with respect to the bipartition AA'|BB'. This is a useful connection because it, among other things, allows us to use known SDP-compatible relaxations of separability to compute bounds on α. However, it has the drawback that the dimension of global Hilbert space scales as (ds)^2 and thus evaluating a bound for a larger Schmidt number becomes more demanding. This type of approach, based on auxiliary spaces A'B', can also be used to address the Schmidt number of ρ_AB directly, without a witness observable, via a hierarchy of SDPs that naturally generalises the DPS construction <cit.>. One treats the density matrix σ_AA'BB' as a variable and imposes that ρ_AB=sΠ_A'B'^†σΠ_A'B' and that σ has a k-symmetric extension that is PPT in the sense of DPS. Notably, constructions of this sort, which connect Schmidt number witnessing to separability problems, can also be leveraged to certify higher-dimensional entanglement in the steering scenario <cit.>. Furthermore, one can systematically search for adaptive Schmidt number witness protocols, that use one-way LOCC from Alice to Bob. This has been proposed in a hypothesis testing framework that aims to minimise the total probability of false positives and false negatives for the Schmidt number detection scheme <cit.>. 
To achieve this, one can employ the SDP methods of <cit.>, and in particular the dual of the DPS-type approach to Schmidt numbers, to relax the set of possible witnesses. An alternative approach to Schmidt number detection is to do away with the computational difficulty associated to the auxiliary spaces A'B' by trading it for other relaxations. For example, a Schmidt number no larger than s implies that ϕ^+_dρϕ^+_d≤s/d for every maximally entangled state ϕ^+_d <cit.>. Knowing that ρ is close to a particular maximally entangled state thus yields a potentially useful semidefinite relaxation of states with Schmidt number s. Another option is to use the positive but not completely positive generalised reduction map R(σ)=(σ)𝕀-1/sσ. Applied to one share of a state ρ_AB with Schmidt number s it still returns a valid quantum state <cit.>, which constitutes a semidefinite constraint on ρ_AB. Either of these conditions can be incorporated into an SDP, now of size only d^2, for computing an upper bound on an arbitrary linear witness. An iterative SDP-based algorithm that constructs Schmidt number witnesses by leveraging this type of ideas appears in <cit.>. Further, we note that SDP relaxation methods are used in many other contexts of entanglement detection. This includes, for example, the construction of entanglement witnesses from random measurements in both discrete <cit.> and continuous variables <cit.>, the evaluation of perturbations to known entanglement witnesses due to small systematic measurement errors <cit.>, unifying semidefinite criteria for entanglement detection via covariance matrices <cit.> and the problem of determining the smallest number of product states required to decompose a separable state <cit.>. §.§ Multipartite entanglement When considering states of more than two subsystems, one must deal both with an exponentially growing Hilbert space dimension and with an increasing number of qualitatively different entanglement configurations. SDP methods can be useful in both these regards. §.§.§ Entanglement detection Multipartite systems are said to be entangled if they are not fully separable, i.e.,  when they cannot be expressed as convex combinations of individual states held by each of the parties. Fully separable states of N subsystems take the form ρ=∑_λ p(λ) ψ_λ^(1)⊗…⊗ψ_λ^(N). It is possible to extend the original, bipartite, DPS hierarchy discussed in Section <ref> to the multipartite case <cit.> and thereby decide the multipartite separability problem via SDP in the limit of large levels in the hierarchy. Essentially, one considers symmetric extensions of the form of Eqs. (<ref>-<ref>) for all but one of the parties. Considering also dual problem leads to witnesses of multipartite entanglement <cit.>. Due to the aforementioned increase in computational cost, a naive use of this approach is limited in practice to the study of small multipartite systems, both in the number of constituents and in their dimension. A way to circumvent this problem is by limiting the state space by considering representations of multipartite states in the form of tensor networks <cit.>. This approach is used for detecting both, entanglement and nonlocality, in systems composed of hundreds of particles. Another approach is to use all of the symmetries that arise when considering the existence of symmetric extensions of the global state. 
<cit.> finds hierarchies that are efficient in time and space requirements, that allow to detect entanglement from two-body marginals in systems of hundreds of particles, and that can be used even for infinite systems with appropriate symmetries. The approach followed in <cit.>, namely formulating entanglement detection as asking whether given marginals are consistent with a joint separable state, is an instance of the quantum marginal problem, that we will review in the following section. It is also possible to construct SDP relaxations of the set of separable states from the interior. <cit.> develops a seesaw-like method in which single-system state spaces are approximated with a polytope. By considering larger polytopes, one obtains better inner relaxations of separability at the price of computing more demanding SDPs. This was for instance used to compute bounds on visibilities and robustness measures against full separability for systems up to five qubits or three qutrits. SDP hierarchies have also been proposed for deciding the full separability of specific states. One example is multipartite Werner states. These states have the defining property that they are invariant under the action of any n-fold unitary U^⊗ n. For such states it is possible, using representation theory <cit.>, to provide a characterisation that does not depend on the dimension, and which can be tested via Lasserre's hierarchy or via SDP hierarchies for trace polynomials <cit.>. These hierarchies give, therefore, entanglement witnesses that are valid independently of the local dimensions. Another class of examples are pure product states. These can be characterised in terms of suitable degree-3 polynomials in commuting variables <cit.>. Thus, optimisations under the set of multipartite pure product states can be solved via the Lasserre SDP hierarchy. This approach has been followed for computing entanglement measures for three- and four-qubit states. The multipartite separability problem has also been formulated as an instance of the truncated moment problem <cit.> (see also <cit.> for an application to the separability of quantum channels). This problem consists in obtaining a probability measure that reproduces some finite number of observed moments, and can be solved via SDP <cit.>. In the context of entanglement, this translates into determining whether there exists a separable quantum state that reproduces some observed expectation values. <cit.>, building on the results of <cit.>, addresses this problem by developing an NPA-like hierarchy of matrices that are all PSD if the observed expectation values can be reproduced by a separable state. This gives a tool for detecting many-body entanglement, that recovers the covariance matrix criterion of <cit.> and the spin-squeezing inequalities of <cit.> at concrete finite levels of the hierarchy. However, this tool fails to address finer notions of entanglement (i.e., failure of k-separability for k>2). Multipartite entanglement detection has also been formulated in terms of adaptive strategies <cit.>, which can be formulated in terms of Lasserre-like SDP hierarchies <cit.>. According to Eq. (<ref>), a state is entangled already if two particles are entangled, even if all the rest remain in a separable state. A stronger requirement is called genuine multipartite entanglement (GME). A state is GME if it is not a classical mixture of states σ_τ that are separable with respect to some bipartition τ of the particle labels {1,…,N}, i.e. 
if no model exists of the form ρ=∑_τ p(τ) σ_τ. Here, τ ranges over all the N2 bipartitions. While some simple witnesses of GME can be systematically constructed without SDPs (see e.g. <cit.>), SDP methods offer a powerful approach for reasonably small particle numbers. A sufficient condition for GME is obtained from replacing the separable states σ_τ with quantum states ω_τ which are PPT with respect to the bipartition τ. Then, by defining the subnormalised operators ω̃_τ=p(τ)ω_τ and adding the normalisation condition ∑_τ(ω̃_τ)=1, one obtains an SDP relaxation of GME <cit.>. If no such decomposition of ρ is found, SDP duality allows the construction of an inequality that witnesses GME. This method has been found to be practical for detecting GME in systems up to around seven qubits. The procedure above can be generalised to any positive map that acts only on one element of a bipartition <cit.>: namely, one relaxes each state σ_τ=∑_i p_i σ^(i)_τ_1⊗σ^(i)_τ_2 by another state, σ_Λ_τ, that satisfies Λ_τ_1⊗𝕀_τ_2[σ_Λ_τ]≽ 0 for a given positive map Λ_τ. This has the apparent disadvantage that one would need potentially to run over all possible Λ_τ in order to prove that a state admits such a decomposition, but it turns out that, in practice, simple maps such as the transposition map as above, the Choi map, or the Hall-Breuer map <cit.>, allow to identify large families of GME states. This connection between separable states and positive maps is also exploited in order to build witnesses of GME from witnesses of bipartite entanglement <cit.>. Moreover, this connection has also been used in linear algebra, where the Lasserre hierarchy is used for checking whether linear maps and matrices are positive and separable, respectively <cit.>. §.§.§ Quantum marginal problems The quantum marginal problem (QMP) asks whether there exists a global entangled state that is compatible with a given collection of few-body states. That is, given a collection of quantum states {ρ_i}_i=1^I, each supported in a set of quantum systems K_i⊂[1,…,N] (with, in general, K_i∩ K_j≠∅), the QMP asks whether a joint state ρ∈ℋ_1⊗⋯⊗ℋ_N exists that satisfies ρ_i=Tr_∖ K_i(ρ) for all i, where _∖ K_i denotes the partial trace over all systems except K_i. The QMP is very naturally cast as an SDP <cit.>, since it only involves positive operators (the marginal quantum states) and linear constraints between them. However, solving these SDPs is in general expensive due to the exponential growth of its size with N and the dimension of the subsystems. In fact, in terms of computational complexity, the QMP is a QMA-complete problem <cit.>. Roughly speaking, QMA is quantum computing counterpart of the complexity class NP (see <cit.>). Particular cases, nevertheless, are tractable or admit tractable relaxations. One such particular case is that where the global state is pure, i.e., ρ=|ψ⟩⟨ψ|. In this case, the QMP can be connected to a separability problem. This restriction makes the problem no longer an SDP, since the requirement of having a global pure state introduces a nonlinear constraint ρ^2=ρ. <cit.> overcomes this issue by considering a symmetric extension of the complete global state, where pure states can be characterised by the restriction Tr(S_ABρ⊗ρ)=1, the operator S_AB denoting the swap operator S_AB=∑_i,j|ij⟩⟨ji| (note that Tr(S_ABρ⊗σ)=Tr(ρσ)). Then, the separability of ρ⊗ρ can be relaxed via the DPS hierarchy to an SDP. This procedure is generalised in <cit.>, which formulates the compatibility problem in terms of spectra. 
Namely, rather than asking for a joint state that reproduces some given reduced density matrices, <cit.> asks whether there exists a joint state such that the spectra of marginals coincides with some given set. Working with spectra instead of density matrices allows to exploit symmetries in order to reduce the computational load of the problem. This approach, moreover, produces witnesses of incompatibility for arbitrary local dimensions. A case where the characterisation can be done exactly in terms of an efficient SDP is that of states that are invariant under permutation of parties <cit.>. In such a case, first, the symmetries reduce greatly the number of marginals: namely, there is only one possible marginal for each number of subsystems. Thus it suffices to only consider the problem of the compatibility with a single marginal. Moreover, the number of parameters required for the description of the joint state is also very small. This allows <cit.> to give necessary and sufficient conditions for the QMP as a single, tractable SDP for systems composed of up to 128 particles. The compatibility problem has also been formulated in terms of quantum channels. On one hand, <cit.> considers the problem of whether a global broadcasting channel Φ_A→ B_1… B_N exists that has a given set of channels Φ_A→ B_i as marginals. This problem can be connected to a state compatibility problem via the Choi-Jamiołkowski isomorphism <cit.>, allowing to use the methods outlined above. On the other, <cit.> considers the more general problem of whether a global evolution is compatible with a set of local dynamics, giving a measure of robustness that can be computed exactly via SDP. A problem related to the QMP is determining, from a set of marginals of a joint state, properties of other marginals. For instance, one may ask whether, given some entangled states that are marginals of an unknown joint state, whether the remaining marginals must be entangled as well. This problem can be addressed via entanglement witnesses whose optimisation can be cast as SDPs <cit.>. The QMP has also been tackled via tools from the study of nonlocality <cit.>, that are the subject of the next section. § QUANTUM NONLOCALITY In this section we discuss SDP relaxation methods for quantum nonlocality and their applications to quantum information. §.§ The Navascués-Pironio-Acín hierarchy A fundamental question in Bell nonlocality is to characterise the set of distributions that are predicted by quantum theory in a Bell scenario (recall Fig. <ref>) with a given number of inputs (X,Y) and outputs (N, M). This corresponds to deciding whether for any given distribution p(a,b|x,y) there exists a bipartite quantum state of any dimension, |ψ⟩, and local measurements for Alice and Bob, {A_a|x}_a,x and {B_b|y}_b,y, respectively, such that the distribution can be written in the form[Note that since there is no restriction on the dimension, we can without loss of generality assume the quantum state to be pure and the measurements to be projective.] p(a,b|x,y)= ⟨ψ| A_a|x⊗ B_b|y|ψ⟩. In contrast to the set of correlations associated to local models (see Section <ref>), the set of quantum correlations for arbitrary input/output scenarios, denoted by 𝒬, admits in general no simple and useful characterisation: checking membership is known to be undecidable <cit.>. Importantly, however, 𝒬 may be approximated by a sequence of supersets {𝒬_k}_k=1^∞ in such a way that 𝒬_1 ⊇𝒬_2 ⊇…⊇𝒬_∞⊇𝒬, where the membership of p in 𝒬_k can be decided by an SDP. 
Thus, if there exists some k for which a hypothesised distribution p fails to be a member of 𝒬_k, it follows that no quantum model for p exists. This hierarchy of outer SDP relaxations to the quantum set of Bell nonlocal correlations is known as the Navascués-Pironio-Acín (NPA) hierarchy <cit.>. In the limit of the sequence, k→∞, the NPA hierarchy converges to a set of distributions 𝒬_∞. This set corresponds to quantum models in which the tensor product, demarcating the separation of parties, is relaxed to the commutation condition [A_a|x,B_b|y]=0 <cit.>. This corresponds to changing Eq. (<ref>) to p(a,b|x,y)= ⟨ψ| A_a|xB_b|y|ψ⟩. In finite dimensions, the commutation condition is equivalent to the tensor product, and thus 𝒬_∞=𝒬. In infinite dimensions, the conditions are inequivalent <cit.> and, moreover, a strict separation exists <cit.>. Therefore, the NPA hierarchy in general does not converge to 𝒬. However, for a specific distribution, in a specific input/output scenario, it sometimes is the case that there exists some finite k for which the membership of p in 𝒬_k is necessary and sufficient for a quantum model. The NPA hierarchy is a noncommutative polynomial relaxation hierarchy of the type discussed in section <ref>. Define a list of all linearly independent measurement operators in the Bell scenario, L={𝕀, 𝐀,𝐁} where 𝐀=(A_1|1,…,A_N-1|X) and 𝐁=(B_1|1,…,B_M-1|Y). Then let 𝒮_k be the set of all products of length at most k of the operators appearing in L. Note that one is free to also include some but not all products of a given length; in such cases one speaks of intermediate hierarchy levels, which are often useful in practice (see the remark in Section <ref>). We associate to level k an |𝒮_k|× |𝒮_k| moment matrix whose rows and columns are indexed by elements u,v∈𝒮_k, Γ(u,v)=ψu^† vψ, for some unknown state |ψ⟩. This moment matrix encodes constraints from quantum theory, namely that Γ(𝕀,𝕀) = 1, Γ(u,v) = Γ(s,t) whenever u^† v = s^† t, and Γ(u,v) = 0 whenever u^† v = 0. This implies for example that Γ(B_b|y,B_b'|yA_a|x) = δ_b,b'Γ(A_a|x,B_b|y), as we are assuming without loss of generality that the measurements are projective. Additionally, we want to constrain the moment matrix to reproduce the probability distribution in question, which requires Γ(A_a|x,B_b|y)=p(a,b|x,y), Γ(A_a|x,𝕀)=p_A(a|x) and Γ(𝕀,B_b|y) = p_B(b|y). Following Section <ref>, the key observation is that a necessary condition for the existence of a quantum model for p is that the remaining free variables comprising the moment matrix, i.e. all variables not fixed by p and the additional equality constraints, can be chosen such that Γ≽ 0. Finally, while Γ will in general be a complex-valued matrix, one can without loss of generality restrict it to be real valued. The reason is that if Γ is feasible then so is its complex conjugate Γ^*, and because of linearity also the real-valued matrix (Γ+Γ^*)/2 is feasible. For example, the first level of the hierarchy (k=1) corresponds to the moment matrix Γ = 𝕀 𝐀 𝐁 𝕀 1 P_A P_B 𝐀^† P_A^T S_A P_AB 𝐁^† P_B^T P_AB^T S_B . This matrix has size , where P_A = [p_A(1|1), p_A(2|1),…,p_A(N-1|X)] and P_B = [p_B(1|1), p_B(2|1),…,p_B(M-1|Y)] are Alice's and Bob's marginal probabilities, P_AB^(ax)(by)=⟨ψ|A_a|xB_b|y|ψ⟩=p(a,b|x,y) is the table of joint probabilities, and S_A^(ax)(a'x')=⟨ψ|A_a|xA_a'|x'|ψ⟩ and S_B^(by)(b'y')=⟨ψ|B_b|yB_b'|y'|ψ⟩ are the matrices of second moments of Alice and Bob. 
Thus, the sub-matrices P_A, P_B and P_AB are completely fixed by p whereas the matrices S_A and S_B are entirely comprised of unknown variables except on their diagonals. If they can be completed such that Γ≽ 0, then p∈𝒬_1. This SDP is, however, in general not strictly feasible, which sometimes causes numerical difficulties when checking for membership in 𝒬_k <cit.>. A straightforward variation is always strictly feasible (see Appendix <ref>) : instead of constraining Γ to reproduce a particular probability distribution, one leaves those terms as free variables, and optimises a Bell functional over them. This allows one to compute bounds on the optimal quantum violation of a Bell inequality. The generic Bell functional in Eq. (<ref>) can be expressed in terms of the moment matrix as ∑_a,b,x,yc_abxyΓ(A_a|x,B_b|y). In our example for 𝒬_1, this would correspond to P_A, P_B, P_AB not being fixed matrices but instead comprised of free variables, a linear combination of which is the objective function of the SDP. It is also interesting to consider the dual of this SDP. As the primal can be considered a particular case of noncommutative polynomial optimisation, the dual can be considered a particular case of optimisation over SOS polynomials. Let then y be a vector of all the free variables in the moment matrix, and write Γ = Γ_0 + ∑_i y_i Γ_i. The primal SDP is thus given by max_y ⟨ b, y ⟩ Γ_0 + ∑_i y_i Γ_i ≽ 0, where b encodes the Bell functional in question. The dual SDP is then given by min_X ⟨Γ_0, X⟩ ⟨Γ_i, X⟩ = -b_i, X ≽ 0. Any feasible solution to the dual then gives an SOS proof that the optimal quantum violation is bounded above by ⟨Γ_0, X⟩. To see this let w be the vector of all monomials in 𝒮_k (ordered using the same ordering as the primal problem), then S = w^† X w = w^† Q^† Q w is an SOS polynomial where we have written X=Q^† Q as X is PSD. In this notation Qw is a vector of polynomials P_i such that S = ∑_i P_i^† P_i. Finally, if Γ is the level k moment matrix for any feasible quantum model then we have that ⟨Γ_0, X ⟩ - ⟨ b, y ⟩ = ⟨Γ, X ⟩ = (ρ S) ≥ 0 . Besides proving bounds on the optimal violation, this is also useful for self-testing, as we shall see in Section <ref>. Lastly, note that the NPA hierarchy applies equally well to scenarios that feature more than two parties. §.§.§ Macroscopic Locality & Almost-Quantum Correlations There is a large body of research aiming to identify the physical principles that constrain quantum correlations. Some of these correspond to certain low levels of the NPA hierarchy. One such principle is Macroscopic Locality <cit.>, which stipulates that classicality must re-emerge in the macroscopic limit of a Bell experiment. Consider that the source in the Bell scenario does not emit one pair of particles but N independent and identical pairs. When Alice and Bob perform their measurements, they will respectively direct the incoming beam of particles onto their detectors, causing them all to fire, but with different detection rates. Assuming that intensity fluctuations in the beam can be detected to the order √(N), one can define the intensity fluctuation around the mean as I_u=1/√(N)∑_i=1^N (d^u_i-p(u)) where u indicates the input/output pair for either Alice or Bob and d^u_i is the indicator function for whether particle number i impinged on the corresponding detector. 
When N→∞, the central limit theorem implies that Alice and Bob will observe a Gaussian intensity fluctuation with vanishing mean and covariance matrix Γ_uv=I_uI_v=1/N∑_i,j=1^N d^u_i d^v_j. Recalling that the particle pairs are identical and independent one has that Γ_uv=d^u_1 d^v_1. Using the fact that the mean and the covariance matrix completely characterise the Gaussian, and the latter is always PSD, one can show that Macroscopic Locality is characterised by 𝒬_1, i.e. the existence of a matrix of the form of Eq. (<ref>) that is PSD <cit.>. However, macroscopically local correlations are insufficiently restrictive to capture the limitations of quantum theory. From a physical point of view, this follows, for instance, from the fact that such correlations violate <cit.> the principle of Information Causality which quantum theory is known to obey <cit.>. Alternatively, this same follows immediately from the fact that in general 𝒬_1 ≠𝒬. A more precise constraint on the quantum set is known as almost-quantum correlations <cit.>. These correlations satisfy several established elementary principles[However, almost-quantum correlations distinguish between measurements that are mathematically well-defined and measurements that are physically allowed, i.e. they violate the so-called no-restriction hypothesis <cit.>.] in addition to Macroscopic Locality, namely no trivial communication complexity <cit.>, no advantage for nonlocal computation <cit.> and local orthogonality <cit.>. Almost-quantum correlations are those that can be written in the form p(a,b|x,y)=ψÃ_a|xB̃_b|yψ where ∑_a Ã_a|x=∑_b B̃_b|y=𝕀 (normalisation) and A_a|xB_b|y|ψ⟩=B_b|yA_a|x|ψ⟩ for all a, x, b, and y. Interestingly, this natural relaxation of quantum theory admits a simple SDP characterisation that is equivalent to choosing a monomial indexing set 𝒮_1+AB={𝕀,𝐀,𝐁,𝐀×𝐁} in the NPA hierarchy <cit.>. This is a level that is intermediate between k=1 and k=2 and it is often referred to as level “1+AB” (see the remark in Section <ref>). §.§.§ Tsirelson bounds A natural application of the NPA hierarchy is to compute bounds on the optimal quantum violation of Bell inequalities. This value is known as the Tsirelson bound, named after Boris Tsirelson's analytical derivation <cit.> of the maximal quantum violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality <cit.>. However, with the exception of particularly convenient families of Bell inequalities (some examples are found for instance in <cit.>) the Tsirelson bound is too difficult to determine analytically. Therefore, the NPA hierarchy provides a practically viable approach to bounding it. Many concrete instances of Bell inequalities have been analysed by means of the NPA hierarchy, for example in the context of noise tolerance of nonlocality <cit.>, many outcomes in Bell tests <cit.>, multipartite Bell tests <cit.>, Bell tests with additional constraints <cit.>, nonlocal games with conflicting party interests <cit.> and the detection loophole of Bell experiments <cit.>. Interestingly, for a simple but large class of Bell inequalities, the Tsirelson bound is guaranteed <cit.> to coincide with the bound associated to the first level of the NPA hierarchy (i.e. Macroscopic Locality, 𝒬_1). These Bell tests have binary outputs for Alice and Bob, and correspond to Bell functionals of the form ∑_x,y c_xyA_xB_y where c_xy are arbitrary real coefficients and A_x≡ A_1|x-A_2|x and B_y≡ B_1|y-B_2|y are observables for Alice and Bob. 
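For functionals of this correlator form, the 𝒬_1 bound can be evaluated with a very small moment-matrix SDP. The sketch below (Python with cvxpy; the compact 5×5 parametrisation in terms of the ±1-valued observables, rather than the projector-based moment matrix shown earlier, is a simplification made here) recovers the value 2√2 for CHSH.

```python
import numpy as np
import cvxpy as cp

# Moment matrix over the monomial list {I, A1, A2, B1, B2} for +/-1-valued
# observables: unit diagonal, all other entries are unknown (real) moments.
G = cp.Variable((5, 5), symmetric=True)
chsh = G[1, 3] + G[1, 4] + G[2, 3] - G[2, 4]   # <A1B1> + <A1B2> + <A2B1> - <A2B2>

constraints = [G >> 0] + [G[i, i] == 1 for i in range(5)]
prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve()
print(prob.value, 2 * np.sqrt(2))   # the level-1 bound reproduces 2*sqrt(2)
```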
The quantum model corresponding to the value associated to 𝒬_1 is known as the Tsirelson construction. This construction stipulates that for any set of dichotomic quantum observables one can find unit vectors u⃗_x,v⃗_y∈ℝ^X+Y such that ⟨A_xB_y⟩=u⃗^T_x v⃗_y, and conversely for any unit vectors u⃗_x,v⃗_y∈ℝ^n one can find observables A_x and B_y such that ⟨A_xB_y⟩_ψ=u⃗^T_x v⃗_y, where ψ is a maximally entangled state of dimension 2^⌈n/2⌉ <cit.>. This construction also provides a connection between Tsirelson bounds and the Grothendieck constant <cit.>. The link between Tsirelson's construction and SDPs has been used to derive the Tsirelson bound for the Braunstein-Caves Bell inequalities <cit.>, and analyses based on 𝒬_1 have yielded quantum Bell inequalities (i.e. inequalities satisfied by all quantum nonlocal correlations) for dichotomic observables <cit.>. Notably, for the simplest scenario with binary inputs and outputs, a quantum Bell inequality in the spirit of CHSH, which gives a complete characterisation of the two-point correlators, was known well before the advent of SDP relaxation methods <cit.>. However, an approach based already on 𝒬_1 leads to more accurate characterisations which now also take the marginal probabilities into account. An example of such a 𝒬_1-based quantum Bell inequality in the spirit of CHSH is arcsin D_11+arcsin D_12+arcsin D_21-arcsin D_22≤π, where D_xy=(⟨A_xB_y⟩-⟨A_x⟩⟨B_y⟩)/√((1-⟨A_x⟩^2)(1-⟨B_y⟩^2)). However, these inequalities are still not tight, as there exist quantum correlations in the binary input/output scenario that satisfy Eq. (<ref>) but are not members of 𝒬_2 <cit.>. Moreover, it is interesting to note that some more general classes of Bell inequalities, including some with many outputs, can be efficiently approximated by SDP methods without the need for hierarchies <cit.>. An illuminating application of the NPA hierarchy is to the Bell inequality known as I_3322 <cit.>. It pertains to the second simplest Bell scenario: it is the only non-trivial facet of the local polytope for the bipartite Bell scenario with three inputs and two outputs (other than a lifting[Every Bell inequality that is a facet for a given number of inputs and outputs can be lifted to a facet when the number of parties, inputs or outputs is increased <cit.>.] of CHSH). It can be written as I_3322 = -p_11 - p_22 - p_12 - p_21 - p_13 - p_31 + p_23 + p_32 + p^A_1 + p^B_1 ≤ 1, where we write p_xy = p(1,1|x,y). While the Tsirelson bound of the simplest Bell inequality, namely CHSH, is straightforward to obtain analytically, the opposite is the case for I_3322. If one restricts to qubit systems, the maximal violation is I_3322=5/4, but the seesaw heuristic (discussed at the end of Section <ref>) has revealed that larger violations are possible by employing higher-dimensional systems. In fact, it is conjectured that the Tsirelson bound is saturated only by an infinite-dimensional quantum state <cit.>. Upper bounds on the Tsirelson bound of I_3322 have been computed via the NPA hierarchy. These are illustrated in Table <ref>. Relaxations up to 𝒬_3 were evaluated in <cit.>, then <cit.> computed 𝒬_4, and <cit.> evaluated 𝒬_5. It is seen that the computational requirements scale rapidly in the relaxation level, as is typically the case with SDP relaxation hierarchies, whereas the bounds rapidly converge. The results for 𝒬_4 and 𝒬_5 are identical up to at least 17 digits, and match the best known quantum violation of I_3322 to within 10^-16 <cit.>.
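The local bound of 1 quoted above can be confirmed by brute force, since the local polytope is spanned by deterministic strategies: it suffices to enumerate the 2^3 × 2^3 = 64 output assignments. A short sketch in plain Python (the encoding of strategies as indicator vectors is ours):

```python
import itertools

def I3322(alpha, beta):
    """Evaluate the I_3322 functional on a local deterministic strategy.
    alpha[x-1], beta[y-1] are indicators for outputting '1' on inputs x, y = 1, 2, 3."""
    p = lambda x, y: alpha[x - 1] * beta[y - 1]          # p(1,1|x,y)
    return (-p(1, 1) - p(2, 2) - p(1, 2) - p(2, 1) - p(1, 3) - p(3, 1)
            + p(2, 3) + p(3, 2) + alpha[0] + beta[0])

best = max(I3322(a, b)
           for a in itertools.product([0, 1], repeat=3)
           for b in itertools.product([0, 1], repeat=3))
print(best)   # local bound: 1
```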
§.§ Device-independent certification Device-independent quantum information is the study of quantum information protocols executed under minimal assumptions <cit.>. This typically amounts to assuming only the validity of quantum mechanics in otherwise uncharacterised experiments, or sometimes even just the no-signaling principle. Here, we consider the former assumption and discuss how SDP relaxation methods for quantum nonlocality can be employed to device-independently certify properties of the underlying quantum systems. §.§.§ Self-testing Self-testing is a sophisticated form of quantum certification where, ideally, Alice and Bob are able to pinpoint their shared quantum state and measurements only from examining their nonlocal correlations <cit.>. Naturally, these cannot be precisely deduced because quantum correlations in Bell scenarios are invariant under collective changes of reference frame. Hence they can at best be determined up to local transformations that leave the correlations invariant. Such transformations are those that preserve inner products, and are called isometries[An isometry can be seen as a linear map that consists of a possible appending of additional degrees of freedom to the system followed by a unitary transformation. They are defined as linear operators V satisfying V^† V = 𝕀.]. A simple example is that from any correlations achieving the Tsirelson bound of the CHSH inequality one can deduce that the state is a singlet up to local isometries <cit.>. For a review of self-testing, we refer the reader to <cit.>. SDP techniques offer a powerful approach to self-testing. To showcase this, let us first define an operator ℬ=β_Q𝕀 - ∑_a,b,x,yc_abxyA_a|x⊗ B_b|y. The second term is the Bell operator associated to a quantum model for a generic Bell functional. β_Q denotes the Tsirelson bound of that Bell inequality, i.e. the maximal quantum value of the Bell parameter. Hence, one can think of Eq. (<ref>) as a shifted Bell operator tailored such that ψℬψ≥ 0 for all quantum states, i.e. the operator is PSD. Assume now that we are able to find a decomposition of ℬ as a sum-of-squares of some operators {P_l}, ℬ=∑_l P_l^† P_l, where P_l are some polynomials of the local measurement operators {A_a|x} and {B_b|y}. Witnessing the Tsirelson bound, namely ψℬψ=0, therefore implies that P_l|ψ⟩=0 for all l. These relations can be very useful for deducing properties of the local measurements <cit.>. Take for example the CHSH inequality, for which finding an SOS decomposition for Eq. (<ref>) is particularly simple: working with the observables instead of the POVM elements, one can choose P_1=A_1+A_2/√(2)-B_1 and P_2=A_1-A_2/√(2)-B_2. Following the given procedure, one can deduce that {A_1,A_2}|ψ⟩={B_1,B_2}|ψ⟩=0, i.e. the local measurements must anticommute on the support of the state. One can then take this further and leverage these relations together with a well-chosen local isometry to deduce also the shared state. The key component in this discussion is to first find the Tsirelson bound β_Q and then find an SOS decomposition of the form of Eq. (<ref>). By considering a sufficiently high level of the NPA hierarchy, one can often recover β_Q. To verify that the bound returned by the NPA hierarchy is optimal, one can for example match it with an explicit quantum strategy for the Bell test. Then, an SOS decomposition can also be extracted. As we have seen in Section <ref>, such SOS decompositions correspond to the dual of a noncommutative polynomial optimisation problem. 
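The CHSH decomposition just quoted is easy to check numerically: for any ±1-valued observables of the two parties, 2√2 𝕀 minus the CHSH Bell operator equals (P_1^† P_1 + P_2^† P_2)/√2, where the overall factor 1/√2 is the normalisation left implicit above. A small numpy sanity check with randomly drawn qubit observables (our own illustration):

```python
import numpy as np

def rand_obs(rng):
    """A random +/-1-valued qubit observable n . sigma."""
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1, -1])
    return n[0] * sx + n[1] * sy + n[2] * sz

rng = np.random.default_rng(0)
I2 = np.eye(2)
A1, A2, B1, B2 = (rand_obs(rng) for _ in range(4))

# Tensor the observables so that Alice's and Bob's operators commute.
A1, A2 = np.kron(A1, I2), np.kron(A2, I2)
B1, B2 = np.kron(I2, B1), np.kron(I2, B2)

chsh = A1 @ B1 + A1 @ B2 + A2 @ B1 - A2 @ B2
P1 = (A1 + A2) / np.sqrt(2) - B1
P2 = (A1 - A2) / np.sqrt(2) - B2
sos = (P1.conj().T @ P1 + P2.conj().T @ P2) / np.sqrt(2)

# The shifted Bell operator equals the sum of squares (up to numerical error).
print(np.allclose(2 * np.sqrt(2) * np.eye(4) - chsh, sos))
```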
By considering both primals and duals <cit.> of the NPA hierarchy, one can systematically approach the problem. Indeed, this can even be done analytically for some Bell inequalities by identifying a suitable relaxation level <cit.>. In particular, explicit SOS decompositions have been reported for the tilted CHSH inequalities <cit.>, a three party facet Bell inequality without a quantum violation <cit.>, multi-outcome generalisations of the CHSH inequality <cit.>, chained Bell inequalities <cit.> and Bell inequalities tailored for graph states <cit.>. Notably, however, this SDP approach is not in general guaranteed to lead to a self-testing statement for both states and measurements. When the Bell inequality violation is non-maximal, self-testing is also possible, albeit with other techniques. A useful approach employs SDP methods to place a lower bound on the fidelity of the state with the ideal state that would have been certified had the Tsirelson bound been reached. This is achieved using the swap method <cit.>. The main idea can be illustrated for the case of CHSH. Since CHSH targets a two-qubit state in the registers A and B, we can introduce qubit ancillas A' and B' into which the parties aim to swap their state. To do this, they need to individually use the swap operator. For Alice the swap operator can be written as S_AA'=WVW, where W=𝕀⊗|0⟩⟨0|+σ_X⊗|1⟩⟨1| and V=|0⟩⟨0|⊗𝕀+|1⟩⟨1|⊗σ_X are CNOT gates. Naturally, one cannot assume the specific operation on system A because of the device-independent picture, but can instead try to emulate the swap operator by using an operation that on A only depends on Alice's measurements. While emulation is not unique and less immediate approaches can enhance the results, a concrete example is instructive. Knowing that the local measurements are ideally qubits and anticommuting, it is reasonable to target a correspondence of A_1 and B_1 to σ_Z and A_2 and B_2 to σ_X. The optimal maximally entangled state is then a local rotation of the singlet |ψ^-_target⟩. Hence, the emulated swap operator corresponds to W=𝕀⊗|0⟩⟨0|+A_2⊗|1⟩⟨1| and V=𝕀+A_1/2⊗𝕀+𝕀-A_1/2⊗σ_X for Alice and analogously for Bob. The swapped state is ρ'_A'B'=_AB(Sψ_AB⊗|0⟩⟨0|_A'⊗|0⟩⟨0|_B'S^†), where S=S_AA'⊗ S_BB'. Here, ρ'_A'B' is a 4×4 matrix whose entries are linear combinations of moments (recall Eq. (<ref>)). Its fidelity with the target state, F=ψ^-_targetρ'ψ^-_target, is therefore also a linear combination of moments. Thus, if we relax the quantum set of correlations into a moment matrix problem à la NPA, we can view the fidelity as a linear objective and thus obtain a robust self-testing bound via SDP. This applies also to other Bell inequalities and to other constructions of the swap operator. The SDP relaxation for the fidelity, at the k-th level of the NPA hierarchy, becomes min_Γ_k F(Γ_k) ∑_a,b,x,y c_abxyΓ_k(A_a|x,B_b|y)=β, Γ_k≽ 0, where β≤β_Q is the witnessed Bell parameter. A practically useful relaxation typically requires selected monomials from different levels (i.e. an intermediate level, recall the remark in Section <ref>) in order to ensure that all moments appearing in F also appear in the moment matrix. An important subtlety is that for some Bell scenarios and choices for emulating the swap operator, the operation may cease to be unitary in general. This can be remedied <cit.> by the introduction of localising matrices in the SDP relaxation (<ref>). 
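The building blocks of the swap method are straightforward to verify directly: the composition WVW of the two CNOTs is the two-qubit swap, and for the ideal assignments A_1=σ_Z, A_2=σ_X the emulated gates reduce to genuine CNOTs, so the emulated swap becomes the real one. A short numpy check (our own illustration):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sz = np.diag([1, -1])
p0, p1 = np.diag([1, 0]), np.diag([0, 1])   # |0><0|, |1><1|

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Trusted construction: S_AA' = W V W with the two CNOTs given in the text.
W = np.kron(I2, p0) + np.kron(sx, p1)       # CNOT controlled on the ancilla A'
V = np.kron(p0, I2) + np.kron(p1, sx)       # CNOT controlled on A
print(np.allclose(W @ V @ W, SWAP))         # True

# Device-independent emulation: replace sigma_Z, sigma_X on A by A1, A2.
A1, A2 = sz, sx                             # ideal anticommuting measurements
W_dev = np.kron(I2, p0) + np.kron(A2, p1)
V_dev = np.kron((I2 + A1) / 2, I2) + np.kron((I2 - A1) / 2, sx)
print(np.allclose(W_dev @ V_dev @ W_dev, SWAP))   # reduces to the real swap
```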
In the literature, the swap method has been applied to noisy self-testing of partially entangled two-qubit states <cit.>, three-dimensional states <cit.>, the three-qubit W state <cit.>, four-qubit GHZ and cluster states <cit.> and symmetric three-qubit states <cit.>. §.§.§ Entanglement dimension Hilbert space dimension roughly represents the number of controlled degrees of freedom in a physical system. It is therefore unsurprising that it plays a significant role in quantum nonlocality: by creating entanglement in higher dimensions one can potentially increase the magnitude of a Bell inequality violation, and sometimes quantum correlations even necessitate infinite dimensions <cit.>. We now discuss different approaches to characterising the set of quantum nonlocal correlations when states and measurements are limited to a fixed dimension d. Dimensionally restricted quantum nonlocality can be linked to the separability problem of quantum states <cit.>. To see the connection, let Alice and Bob share a two-qubit (d=2) state ρ_AB on which they perform basis projections {|a_x⟩⟨a_x|,𝕀-|a_x⟩⟨a_x|} and {|b_y⟩⟨b_y|,𝕀-|b_y⟩⟨b_y|} respectively. One can now shift the status of these operators from measurement projections to ancillary states. To do that, we append the main registers A and B with additional X- and Y-qubit registers respectively, A_1,…,A_X and B_1,…,B_Y, and define the (2+X+Y)-qubit state σ=ρ_AB⊗_x=1^X |a_x⟩⟨a_x|⊗_y=1^Y |b_y⟩⟨b_y|. The quantum correlations can be recovered by Alice (Bob) applying operators G_a,x (H_b,y), that act as the identity on all ancillary registers except A_x (B_y), which instead are swapped with system A (B) via the two-qubit swap operator S for outcome a = 1, and respectively its complement 𝕀 - S for outcome a = 2. This yields p(a,b|x,y)=(σ G_a,x⊗ H_b,y). The optimum of any Bell parameter for qubits is then converted into a type of entanglement witnessing problem where one must evaluate the optimum of (σℬ) for a known Bell operator, ℬ=∑_a,b,x,yc_abxyG_a,x⊗ H_b,y, over a multiqubit state σ that is separable with respect to the partition AB|A_1|…|A_X|B_1|…|B_Y. As discussed in Section <ref>, this problem can be addressed systematically by the DPS hierarchy. This method has also been extended to scenarios with more outcomes and parties. However, it is useful mainly when the number of inputs is small owing to the increasing number of subsystems in σ. It can be found more efficient when only some parties have a restricted dimension, since the other parties then can be treated as in the NPA hierarchy. Another way to link fixed-dimensional quantum nonlocality problems to the separability problem is proposed in <cit.>. This method has the advantage of coming with good bounds on the convergence rate of the resulting SDP relaxations, and the disadvantage of poor performance in practice. In the particular case of free games, i.e., nonlocal games where the probability distribution over the inputs is a product between a distribution for Alice and another for Bob, the complexity of computing an ϵ-close approximation to the d-dimensional Tsirelson bound scales polynomially in the input size and quasi-polynomially in the output size. In the general case the complexity is still quasi-polynomial in the output size, but becomes exponential in the input size. The problem can also be approached without connecting it to a separability problem and instead adding dimension constraints directly to the NPA moment matrix. 
This may consist in identifying operator equalities that hold only up to dimension d. For example, the identity [X_1,[X_2,X_3]^2]=0 holds for all complex square matrices of dimension d ≤ 2. However, this is difficult to do in practice as a complete set of operator equalities is not known for d ≥ 3. A handy alternative is instead to implicitly capture the constraints associated to a dimensional restriction, on the level of the NPA moment matrix, by employing numerical sampling in the d-dimensional space <cit.>. See Section <ref> for more details. This method is known to converge to the quantum set of correlations and it is often useful for practical purposes when problems are not too large <cit.>. §.§.§ Entanglement certification Quantum nonlocality implies entanglement and therefore allows for device-independent entanglement certification. This is an inference of entanglement without any modelling of the experimental measurement apparatus. Fundamentally, this black-box approach to entanglement comes at the cost of some entangled states not being detectable <cit.>, although this can be at least partly remedied by considering more complicated nonlocality experiments, see e.g. <cit.>. Nevertheless, a variety of interesting entangled states can still be certified and SDPs offer a powerful path for that purpose. Local measurements performed on an n-partite fully separable quantum system always yield local correlations. Hence, one can certify entanglement device-independently for a given correlation p by evaluating the LP in Eq. (<ref>) that checks its membership to the local polytope. However, this is demanding because the number of variables in the LP scales exponentially in the number of parties and inputs, and polynomially in the number of outputs. Using simplex methods for linear programming, states of up to n=7 qubits have been certified in <cit.>. This is increased to n=11 qubits in <cit.> by adopting a matrix-free approach for interior-point solvers, which reduces memory requirements <cit.>. However, one can go further by considering SDP relaxations of the local polytope. A key observation is that any local correlation between n parties can be obtained from locally commuting measurements on a quantum state. Take the fully separable state ρ=∑_λ p(λ)|λ⟩⟨λ|^⊗ n and let the x_l-th POVM of the l-th party be M_x_l^a_l=∑_μ D(a_l| x_l, μ) |μ⟩⟨μ|, where D(a_l|x_l,μ) is a deterministic distribution. All these POVMs commute. Evaluating the Born rule, one finds the generic local model given in Eq. (<ref>). Thus, a sufficient condition for nonlocality is that p fails some level of the NPA hierarchy under the extra constraint that all local measurements commute. The latter appears in the form of additional equality constraints between the elements of the NPA moment matrix of Eq. (<ref>) that effectively turn it into a Lasserre hierarchy (recall Section <ref>). Using the second level relaxation[Note that local commutation is a trivial constraint at the first level of relaxation, i.e. the associated correlation set is still 𝒬_1.] of the commuting NPA hierarchy, nonlocality has been reported[For some states one requires small additions to the second level but these can be independent of the number of qubits.] for W states, GHZ states, and graph states, reaching up to n=29 qubits <cit.>. A similar approach can also be used to certify entanglement in the steering scenario. 
On Alice's side one imposes commutation in the NPA relaxation whereas on Bob's side one replaces the unknown measurements with known measurements and uses their algebraic relations to further constraint the moment matrix <cit.>. A natural next question is how one may device-independently quantify entanglement. The main intuition is that a stronger violation of a Bell inequality ought to require stronger forms of entanglement. It is possible to address this question through a reinterpretation of the NPA hierarchy due to <cit.>. Consider that we apply local completely positive maps, Λ_A:ℋ_A→ℋ_A' and Λ_B: ℋ_B→ℋ_B', to a bipartite state ρ_AB. Their action can be represented in terms of unnormalised Kraus operators, i.e. a set of operators {K_i,A}_i and {K_i,B}_i. Now, let us put Alice's and Bob's local POVM elements in lists 𝐀={𝕀, A_1|1,…, A_N|X} and 𝐁={𝕀, B_1|1,…, B_M|Y} respectively. Then, we define Kraus operators of Alice as K_i,A=∑_l_1,…, l_k|l_1 … l_k⟩⟨i|𝐀_l_1…𝐀_l_k and analogously for Bob, where the index k is the level of the hierarchy. The moment matrix, Γ=Λ_A⊗Λ_B[ρ] becomes Γ=∑_r̅,l̅∑_s̅,k̅(ρ𝒜_r̅,l̅⊗ℬ_s̅,k̅)|l̅,k̅⟩⟨r̅,s̅|, where l̅=(l_1,…,l_k) and similarly for k̅, r̅ and s̅, and where 𝒜_r̅,l̅=(𝐀_r_1…𝐀_r_k)^†𝐀_l_1…𝐀_l_k and similarly for ℬ_s̅,k̅. This moment matrix would be the same as that of the NPA hierarchy if instead of local completely positive maps we had opted for global completely positive maps. The advantage of this formulation is that it comes with an explicit bipartition on the level of the moment matrix. For instance, this allows for imposing the PPT constraint, which is needed for device-independent entanglement quantification via the entanglement negativity measure. The negativity, N(ρ_AB), is defined as the sum of the negative eigenvalues of ρ_AB^T_A, which can itself be cast as an SDP, see Eq. (<ref>). On the level of the moment matrix, the SDP becomes: N(ρ_AB)=min(χ_-) such that ρ=χ_+-χ_- where (χ_±)^T_A≽ 0. Thus, the negativity can be bounded by employing the above SDP relaxation for both the operators χ_±, imposing their PPT property on Γ and noting that the objective function is simply an element of the moment matrix. The idea of building a moment matrix featuring a bipartition can also be extended to the steering scenario <cit.>. This allows for one-side device-independent quantification of entanglement, which has also been considered for highly symmetric multi-outcome scenarios <cit.>. While the negativity is relevant as one possible quantifier of bipartite entanglement, other approaches are needed for multipartite entanglement. If we have a multipartite quantum state, a natural question is to ask for the smallest cluster of subsystems that must be entangled in order to model the correlations p. The smallest number of entangled subsystems required, D, is known as the entanglement depth <cit.>. Nonlocality alone gives only a device-independent certificate that some entanglement is present, i.e. that D≥ 2. It was first noted in <cit.> that the NPA hierarchy with suitably imposed local measurement-commutation relations can be used to detect a maximal entanglement depth of D=3 in a three-partite system. For systems of more particles, one can employ the hierarchy of Moroder et al. to relax separability across a given bisection of the subsystems to a PPT condition and use that to bound the entanglement depth <cit.>. 
Furthermore, by restricting to few-body correlators that are symmetric under permutations of parties, one can formulate a hierarchy of SDP relaxations of the corresponding party-permutation invariant local polytope that benefits considerably from the imposed symmetry. This is showcased in <cit.>, where the local polytope is first relaxed to a semi-algebraic set, and then it is leveraged that such sets can be relaxed to SDPs <cit.>. The advantage of this approach is that the number of subsystems is featured as an explicit parameter in the SDP and does not impact the size of the moment matrix. In this way, one can obtain party-permutation invariant Bell inequalities for any number of parties via the dual SDP. Permutation-invariant Bell expressions based on two-body correlators have been used to build device-independent witnesses of entanglement depth by using relaxations to PPT conditions <cit.>. §.§.§ Joint measurability It has been known since the early development of quantum theory that the values of some sets of measurements cannot be simultaneously known, e.g., position and momentum <cit.>. This fundamental feature of quantum theory, known as measurement incompatibility, has been at the forefront of significant research and development within quantum theory <cit.>. Formally, given a collection of POVMs {A_a|x}_a indexed by some x, we say the collection is compatible or jointly measurable if there exists a parent POVM, {M_λ}_λ, and a conditional probability distribution, p(a|x,λ), such that A_a|x = ∑_λ p(a|x,λ) M_λ for all a and x. Operationally, the statistics of compatible measurements can be simulated by measuring the “parent” measurement {M_λ}_λ and then post-processing the results. If such a decomposition does not exist then we say that the measurements are incompatible. It is well known that incompatible measurements are necessary for Bell nonlocality <cit.>, although not sufficient <cit.>. However, when a party in a Bell test has access to more than two measurements it is possible that some subsets of their measurements are compatible whilst the entire set of measurements remains incompatible. A collection of subsets for which the measurements are compatible is referred to as a compatibility structure. In <cit.> the authors investigate how different compatibility structures can be device-independently ruled out by large enough Bell inequality violations. From the perspective of nonlocality, if the behaviour is restricted to the subsets of inputs for which the measurements are compatible then it necessarily becomes local. Thus, by combining linear programming constraints for compatible subsets (see Eq. (<ref>)) with the NPA hierarchy, it is possible to explore relaxations of the sets of correlations with different compatibility structures and in turn find Bell-like inequalities that rule out different structures in a device-independent manner. Once the presence of measurement incompatibility has been device-independently detected, a natural follow-up question is whether the degree of incompatibility can be quantified. One such measure of incompatibility is the so-called incompatibility robustness <cit.>. For a collection of measurements {A_a|x}_a,x it is defined as min t s.t. {1/(1+t)(A_a|x + t N_a|x)}_a,x are compatible for some collection of POVMs {N_a|x}_a, with t≥ 0, which roughly captures the amount of noise that one needs to add to the measurements A_a|x in order to make them compatible. There are many other such measures of incompatibility and often these measures can themselves be expressed as SDPs <cit.>.
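Before quantifying incompatibility, note that the defining decomposition above translates directly into a feasibility SDP: since any randomness in p(a|x,λ) can be absorbed into the parent POVM, one may take the post-processings to be deterministic. The sketch below (Python with cvxpy) tests two noisy qubit measurements; the specific observables, the noise parameter and the quoted compatibility threshold 1/√2 are illustrative choices, not part of the general statement.

```python
import itertools
import numpy as np
import cvxpy as cp

# Noisy sigma_Z and sigma_X qubit measurements: A_{0|x} = (I + eta * sigma)/2.
# This particular pair is jointly measurable iff eta <= 1/sqrt(2) ~ 0.707.
eta = 0.75
I2 = np.eye(2)
sz = np.diag([1, -1])
sx = np.array([[0, 1], [1, 0]])
A = {(0, 0): (I2 + eta * sz) / 2, (1, 0): (I2 - eta * sz) / 2,
     (0, 1): (I2 + eta * sx) / 2, (1, 1): (I2 - eta * sx) / 2}

# Deterministic post-processings lambda = (output for x=0, output for x=1).
lams = list(itertools.product([0, 1], repeat=2))
G = {lam: cp.Variable((2, 2), hermitian=True) for lam in lams}

constraints = [G[lam] >> 0 for lam in lams]
constraints.append(sum(G[lam] for lam in lams) == I2)          # parent is a POVM
for (a, x) in A:                                               # correct marginals
    constraints.append(sum(G[lam] for lam in lams if lam[x] == a) == A[(a, x)])

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("jointly measurable" if prob.status == "optimal" else "incompatible")
```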
Among these measures, the incompatibility robustness can be computed by the SDP min_G_λ (1/d)∑_λ tr(G_λ) s.t. ∑_λ D(a|x,λ) G_λ ≽ A_a|x ∀ a, x, ∑_λ G_λ = ((1/d)∑_λ tr(G_λ)) 𝕀, G_λ ≽ 0, whose optimal value equals 1+t for the robustness t defined above, and where D(a|x,λ) are deterministic distributions (recall Eq. (<ref>)). In <cit.> the authors relate the incompatibility problem to a steering problem to show that the incompatibility robustness can be lower bounded by a steering robustness quantity, which is an analogous quantity for quantifying steerability. A hierarchy of device-independent SDP lower bounds on the latter quantity can then be derived, giving a method to compute device-independent lower bounds on incompatibility. Several additional lower bounds on the incompatibility robustness in terms of other robustness quantities were provided in <cit.>, e.g. the consistent nonlocal robustness, which can similarly be turned into device-independent lower bounds on the incompatibility robustness using the NPA hierarchy. This work, which surveys many robustness measures and their computability via SDPs, also introduces a new quantity called the consistent steering robustness, which provides tighter lower bounds on the incompatibility robustness than the standard robustness of steering. By combining this with moment matrix techniques developed in <cit.>, even stronger device-independent bounds on the incompatibility robustness can be obtained, which in some cases can even be shown to be tight <cit.>. Later, a general method to obtain device-independent bounds on SDP-representable incompatibility measures was presented in <cit.>. By avoiding proxy quantities, this provides a much stronger characterisation of the incompatibility robustness and can even be shown to be tight for the correlations achieving the Tsirelson bounds of the tilted CHSH inequalities. § QUANTUM COMMUNICATION In this section, we discuss quantum correlations in the prepare-and-measure scenario, introduced in Section <ref>, in which Alice prepares messages and Bob measures them. Such correlations have been studied for several different types of communication, and SDP relaxations play a central role in their characterisation and applications. §.§ Channel capacities We provide a very brief overview of channel coding before proceeding to the role of SDPs in the topic. In information theory, a paradigmatic task is to encode a message, send it over a channel, and then reliably decode it. Specifically, the sender selects a message from an alphabet of size M and encodes it into a codeword consisting of n letters, where n is called the block-length. Each letter is then sent over the channel, Λ, which in the classical case can be represented as a conditional probability distribution p_Λ(y|x) mapping the input x to the output y. Since the channel is noisy, it outputs a distorted codeword, which the receiver must decode into the original message, see Fig. <ref>. The seminal work of Claude Shannon <cit.> showed how to address the efficiency of the communication when the distributions of the letters in the codeword are independent and identical. The key idea of Shannon was to allow a small error probability in the decoding, which then tends to zero in the limit of large n. In this setting, when a memoryless noisy classical channel is used asymptotically many times, it is natural to consider the largest rate, R=(log_2 M)/n, i.e. the ratio of the number of message bits to the number of channel uses, at which information can reliably be transmitted.
This rate, 𝒞_cc(Λ), is called the capacity of the channel and Shannon proved that it is given by the largest single-copy mutual information between the channel's input and output <cit.>, 𝒞_cc(Λ)=max_{p_x} I(X;Y), where I(X;Y)=H(X)+H(Y)-H(X,Y), and H denotes the (Shannon) entropy. A natural endeavour is to extend this type of question to scenarios with quantum resources; see e.g.  <cit.> for a thorough discussion and <cit.> for a brief overview. The Holevo-Schumacher-Westmoreland theorem <cit.> generalises Eq. (<ref>) to the scenario where the channel instead is quantum, i.e. the message is encoded in a quantum state. The classical capacity of a quantum channel is given by 𝒞_cq(Λ)=lim_n→∞χ(Λ^⊗ n)/n where χ(Λ)=max_{p_x, ρ_x} H(∑_x p_x Λ(ρ_x))-∑_x p_x H(Λ(ρ_x)) is called the Holevo capacity of the channel and the maximisation is taken over all input ensembles to the channel. In contrast to the classical case, the possibility of entanglement in the quantum codeword implies that the Holevo capacity is not additive[The failure of additivity for the Holevo capacity implies that several other entropic quantities also are not additive <cit.>.] <cit.> and hence Eq. (<ref>) cannot in general be reduced to a single-letter formula, i.e. a closed expression based on a single use of Λ, but exceptions are known for convenient special cases; see e.g. <cit.>. Non-additivity makes the computation of 𝒞_cq(Λ) very difficult. A contrasting situation is when the sender and receiver additionally are allowed to share entanglement. Then, the resulting entanglement-assisted classical capacity of the quantum channel admits an elegant single-letter formula similar to Eq. (<ref>), but instead of maximising the classical mutual information one maximises the quantum mutual information of the bipartite state (ℐ⊗Λ)[ρ_AB] over all entangled states ρ_AB <cit.>. Another natural scenario is when the message itself is a quantum state. One then speaks of quantum capacities. In a general picture, it is possible to characterise the performance of a classical or quantum protocol in terms of the triplet (R,n,ϵ), where ϵ is the error tolerated in the decoding. This error is favourably represented in terms of the fidelity between the maximally entangled state and the state obtained by sending half of it through the communication scheme. The central question is then to characterise the set (R,n,ϵ) that is achievable for a given quantum channel. In the asymptotic setting (n→∞) and independent channel uses, the quantum capacity of the quantum channel, 𝒞_qq(Λ), is the largest rate R at which the error tends to zero. It is given by the Lloyd-Shor-Devetak theorem <cit.> in terms of the largest coherent information[The coherent information is defined as I_coh(ρ_AB)=H(ρ_B)-H(ρ_AB).] when optimised over all bipartite input states after half of it is passed through the channel. However, this quantity must be regularised, i.e. one must take a many-copy limit analogous to that in Eq. (<ref>), because the coherent information is not additive. This makes the capacity very hard to compute. §.§.§ Classical capacities In the conventional setting, the decoding errors are tolerated as long as they vanish in the limit of large block-length. A stricter approach, in which errors are exactly zero for any n, is called zero-error coding[This is not only of independent interest but the zero-error capacity is also relevant for how rapidly the error tends to zero for an increasing block-length in the standard capacity <cit.>.] <cit.>.
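Before turning to zero-error capacities, note that the single-letter formula for 𝒞_cc above is easy to evaluate numerically in simple cases. The short script below is our own illustration (the binary symmetric channel and the grid search are assumptions, not part of the cited works); it maximises I(X;Y) over the input distribution and recovers the textbook value 1-h(f).

```python
# Illustration of C_cc = max_{p_x} I(X;Y) for a binary symmetric channel with
# flip probability f. Here I(X;Y) is computed as H(Y) - H(Y|X), which equals
# the expression H(X) + H(Y) - H(X,Y) used in the text.
import numpy as np

def h(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information(px0, f):
    """I(X;Y) for a BSC with input distribution (px0, 1-px0)."""
    py0 = px0 * (1 - f) + (1 - px0) * f   # output distribution
    return h(py0) - h(f)                  # H(Y) - H(Y|X)

f = 0.1
grid = np.linspace(0.0, 1.0, 2001)
capacity = max(mutual_information(p, f) for p in grid)
print(capacity, 1 - h(f))  # both ~0.531 bits per channel use, at uniform input
```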
In zero-error capacity problems, one needs only to consider whether two distinct messages could be confused with each other after they are sent through the channel. Therefore, if the channel is used only once, one can represent the zero-error problem as a confusability graph where each vertex represents a message and each edge represents the possibility that two messages can be mapped into the same output. The one-shot zero-error capacity is given by the size of the largest independent set of vertices in the confusability graph. For larger block-length, one must consider the strong power of the graph. However, computing the independence number is NP-hard <cit.>. In the celebrated work <cit.>, it was shown that the independence number of a graph, G, can be upper bounded via the so-called Lovász theta function, which admits an SDP formulation ϑ(G)=max Tr(X E)  s.t.  X_ij=0 if i and j are connected, Tr(X)=1, X≽ 0, where E is the all-ones matrix, E_ij=1. The Lovász theta function has the important property that it factors under strong products of graphs, which allows one to address the asymptotic limit for zero-error coding. It has been proven that this SDP also bounds the entanglement-assisted zero-error classical capacity <cit.>. Bounds of this sort are important because it is known that the one-shot zero-error classical capacity can be increased by means of shared entanglement <cit.>. In fact, this is connected to proofs of the Kochen-Specker theorem, which in turn can be represented in the language of graph theory <cit.>. Entanglement-assisted advantages are also possible for asymptotic zero-error coding. Perhaps surprisingly, the zero-error capacity can sometimes even equal the classical capacity of a quantum channel (<ref>) <cit.>. In contrast, the zero-error problem becomes simpler if the sender and receiver are permitted to share general bipartite no-signaling correlations. The capacity can then be computed via an LP which corresponds to the fractional packing number of the confusability graph <cit.>. This can also be generalised to parties that share quantum no-signaling correlations[Quantum no-signaling correlations are completely positive and trace-preserving bipartite linear maps that forbid signaling of classical information in either direction. ]: in the one-shot setting, the capacity is given by an SDP and sometimes an SDP can also be formulated for the asymptotic capacity. If quantum no-signaling correlations are permitted, the Lovász theta function corresponds to the smallest zero-error classical capacity of any channel associated to the same confusability graph <cit.>. A related problem considers a one-shot classical channel and a given number of messages and then addresses the largest average success probability with which they can be communicated through the channel. The answer can only be approximated in polynomial time up to a factor (1-1/e), and obtaining a better approximation is NP-hard <cit.>. An upper bound is nevertheless known <cit.>. Interestingly, this bound admits a nice physical interpretation as it is equivalent to the optimal success probability obtained from assisting the classical channel with bipartite no-signaling correlations <cit.>. As made intuitive from this connection, the solution can be bounded by means of relaxation to an LP. This classical information problem can also be considered in the quantum case.
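The Lovász theta SDP stated above is also straightforward to evaluate with off-the-shelf solvers. The following sketch (ours; it assumes cvxpy and takes the 5-cycle as an example graph) computes ϑ(C_5), whose independence number is 2 while ϑ(C_5)=√5≈2.236.

```python
# Hedged sketch of the Lovasz theta SDP on the 5-cycle C5.
import numpy as np
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]     # confusability graph C5

X = cp.Variable((n, n), symmetric=True)
E = np.ones((n, n))                              # all-ones matrix

constraints = [X >> 0, cp.trace(X) == 1]
constraints += [X[i, j] == 0 for (i, j) in edges]  # X_ij = 0 on edges

theta = cp.Problem(cp.Maximize(cp.trace(X @ E)), constraints)
theta.solve()
print(theta.value, np.sqrt(5))                   # both ~2.236
```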
For a given block-length and a given tolerance for the error, upper bounds on the optimal transmission rate over a general quantum channel (also when assisted by entanglement) can be obtained by means of SDP by relating the problem to hypothesis testing relative entropies <cit.>. These bounds were later made tighter in <cit.>, applying also to the entanglement-assisted case, and used to give SDP upper bounds on the classical capacity of the qubit amplitude-damping channel. For the large block-length limit, namely the asymptotic channel coding scenario, it was shown in <cit.> how to compute a sequence of upper bounds on 𝒞_cq via a hierarchy of SDP-tailored Rényi divergences <cit.>, but its convergence to 𝒞_cq is presently not known. These SDPs are rendered considerably more efficient by invoking symmetry properties, in the spirit of the discussion in Section <ref>. SDP methods in this vein can sometimes also be used to obtain strong converse bounds[The strong converse property means that the error probability tends exponentially to one if the rate exceeds the bound. This property is known for some quantum channels; see e.g. <cit.>.] on the classical capacity <cit.>. An alternative strong converse bound is obtained from the quantum reverse Shannon theorem <cit.>: if one sends messages at a rate above the entanglement-assisted classical capacity then the error tends to one exponentially in n. As highlighted in <cit.>, this capacity can be approximated by SDP due to the possibility of establishing SDP bounds on the quantum relative entropy <cit.>. §.§.§ Quantum capacities In general, it is difficult to determine capacities in a computable way. A mathematically tractable framework is to view the encoding and decoding operations as a single global, bipartite, operation Z_AA'BB' where A and B are input systems and A' and B' are output systems <cit.>; see Fig. <ref>. Naturally, B in practice depends on A', since system A' is sent through the channel. One can think of Z as linearly transforming the channel Λ into a new channel, i.e. a so-called super operator <cit.>. To it, we associate a Choi matrix J_Z=d_Ad_B'(ℐ⊗ Z)(ϕ^+), where ℐ is the identity channel. In order for the operation Z_AA'BB' to be completely positive and trace-preserving, the Choi matrix must be PSD and satisfy Tr_A'B'(J_Z)=𝕀_AB. Moreover, it must be no-signaling from receiver to sender, which corresponds to Tr_B'(J_Z)=(1/d_B) Tr_BB'(J_Z)⊗𝕀_B; note that the first factor is the Choi matrix of the sender's transformation. In addition, if the sender and receiver do not share post-classical resources, we need to ensure that they are not entangled with each other. A handy relaxation of this constraint is to impose that Z is PPT-preserving, i.e. every PPT state remains PPT after being sent through Z, which translates into J_Z^T_AA'≽ 0. The key observation here is that all these constraints are either linear or semidefinite in the variable J_Z. Now, to address the one-shot quantum capacity of Λ we consider the fidelity between the maximally entangled state and the state obtained from sending half of it through the communication scheme. The fidelity is particularly convenient because it too can be written in terms of the Choi matrix; F=Tr(J_Z(J_Λ^T ⊗ϕ^+)). Putting it all together, <cit.> obtains an SDP bound on the one-shot quantum capacity which applies both when the quantum channel is unassisted (relaxation to no-signaling and PPT-preserving) and when it is entanglement-assisted (relaxation to no-signaling only).
In <cit.> this relaxation is extended, by means of connection to a separability problem and using symmetric extensions, into a hierarchy of SDPs which converges to the fidelity F. The idea to bound quantum capacities by means of relaxation to PPT-preserving and/or no-signaling codes was further explored in <cit.>, where SDP bounds were given for the one-shot bounded-error capacity. Complementarily, and in a similar spirit, it was shown in <cit.> how to determine an SDP upper bound on the largest rate R for a fixed number of channel uses n and fixed error ϵ. In the asymptotic setting, deciding whether a given number of quantum states can be sent over many channel uses with zero-error is known to be QMA-complete <cit.>. However, <cit.> showed that in the same way as the SDP-computable Lovász theta function in Eq. (<ref>) can bound the classical problem, the quantum capacity can be bounded via an analogous SDP quantity. Moreover, in the context of strong converse bounds on the asymptotic quantum capacity, one finds good use of SDPs. It is known that taking the diamond norm[The diamond norm is defined by ‖Λ‖_♢=max_ρ_AB‖(ℐ⊗Λ)(ρ_AB)‖_1.] of the channel after applying a transposition map, T, constitutes a strong converse bound. Specifically, if the rate exceeds log‖Λ∘ T‖_♢ then the error tends to one in the number of channel uses <cit.>. Importantly, the diamond norm can be computed efficiently by SDP <cit.>. Another strong converse SDP bound was provided in <cit.>. This bound is stronger than the one based on the diamond norm but weaker than another known bound, based on the so-called Rains information of the channel, which is however not straightforward to compute <cit.>. The SDP bound is given by log max_ρ_A,F_AB Tr(J_Λ F_AB)  s.t.  -ρ_A⊗𝕀≼ F_AB^T_B≼ρ_A⊗𝕀, Tr(ρ_A)=1, F_AB,ρ_A≽ 0, and it is additive under tensor products of channels. In fact, this is closely related to the entanglement measure E_W discussed around Eq. (<ref>) and it can be interpreted as the largest value of E_W taken over all purifications of ρ_A when half of the purification is sent through the channel. In <cit.> it is shown how to systematically compute bounds on several different classical and quantum channel capacities via SDP. These capacities are evaluated from a single use of the channel (no regularisation needed), they are computable for general channels and they admit a strong converse. This method relies on a quantity known as the geometric Rényi divergence, which has many convenient properties, for example additivity under tensor products and a chain rule <cit.>. §.§ Dimension constraints A natural quantifier of communication is the dimension of the alphabet of the message sent from Alice to Bob. Classically, Alice's message is selected from a d-valued alphabet {1,…,d}, whereas in the quantum case, the message is a state ρ_x selected from a d-dimensional Hilbert space. While Holevo's theorem <cit.> ensures that such quantum systems cannot be used to transmit a message more efficiently than the corresponding classical systems, it is well-known that there are many other communication tasks in which quantum messages provide an advantage over classical messages, see e.g. <cit.>. A central question is to characterise the set of quantum correlations, 𝒬, that can be generated between Alice and Bob for a fixed number of inputs (X and Y respectively) and a fixed number of outputs for Bob (N), when Alice sends a d-dimensional state to Bob (recall Section <ref>). These correlations are given by the Born rule in Eq. (<ref>).
Commonly, this problem is investigated while granting Alice and Bob unbounded shared randomness. This renders 𝒬 convex and the task particularly suitable to SDP methods. It should be noted that the same problem without shared randomness deals with a non-convex correlation set, which leads to very different quantum predictions; see e.g. <cit.>. Bounds on 𝒬 are not only important for determining the non-classicality enabled by quantum theory, but they also serve as an important tool for quantum information applications. SDP relaxations of 𝒬 are useful for this purpose, in particular since conventional analytical bounds on 𝒬 are only possible in rare special cases, see e.g. <cit.>. §.§.§ Bounding the quantum set SDP relaxation hierarchies can be constructed to bound the set of quantum correlations in the prepare-and-measure scenario for arbitrary input/output alphabets and arbitrary dimensions. They are typically based on tracial moments, see e.g. <cit.>, on which constraints specific to a d-dimensional Hilbert space must be imposed. To this end, let L={𝕀, ρ,𝐌}, where ρ=(ρ_1,…,ρ_X) and 𝐌=(M_1|1,…,M_N|Y), be the list of all operators appearing in the quantum prepare-and-measure scenario. We define 𝒮_k as a set of monomials over L of length k. Recall from Remark <ref> that it is interesting to consider subsets of all the monomials with a given length. An |𝒮_k|× |𝒮_k| moment matrix can be constructed whose entries are given by Γ(u,v)= Tr(u^† v), for u,v∈𝒮_k. The moment matrix inherits many constraints from quantum theory. Firstly, normalisation implies Γ(ρ_x,𝕀)=1. Secondly, one may without loss of generality restrict to pure states, i.e. ρ_x^2=ρ_x. Thirdly, under the restriction of projective measurements[In general, one must consider also POVMs when fixing the dimension.], it holds that M_b|yM_b'|y=M_b|yδ_b,b'. Fourthly, the moment matrix contains elements that are equal to the probabilities observed between Alice and Bob, namely Γ(ρ_x,M_b|y)=p(b|x,y). In addition, the cyclicity of the trace implies a number of additional equalities between the moments, e.g. Γ(ρ_x, M_b|yρ_x)=Γ(ρ_x^2,M_b|y)=Γ(ρ_x,M_b|y)=p(b|x,y). Lastly, by an argument analogous to that in Eq. (<ref>), the moment matrix must be PSD. The key issue is how to add constraints to Γ that are specific to the dimension d. One option is to identify polynomial operator identities or inequalities that pertain only to dimension d. However, such relations are typically unknown and finding them is hard. Navascués and Vértesi (NV) <cit.> propose a solution by employing numerical sampling to construct a basis of moment matrices. The prescription is to randomly sample the states and measurements from the d-dimensional Hilbert space and then compute a moment matrix sample, Γ^(1), which will automatically satisfy all the above constraints. The sampling procedure is repeated until a moment matrix sample is found to be linearly dependent on all the previous samples. This can be quickly checked by vectorising the samples, arranging them in a matrix and computing its rank. Then the process is truncated and the collected samples {Γ^(1),…,Γ^(m)} are certain to span[The probability that the m samples do not span the full space but nevertheless the next sample is found to be linearly dependent is essentially zero.] a relaxation of the subspace of moment matrices compatible with dimension d. In order to preserve normalisation, namely Tr(𝕀)=d, the final moment matrix becomes an affine combination of the samples, Γ = ∑_i=1^m γ_i Γ^(i), where ∑_i=1^m γ_i=1.
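A rough sketch of this sampling step for qubits is given below. It is our own illustration rather than the implementation used in the cited works; the small monomial list and the use of a single rank-one measurement operator are simplifying assumptions.

```python
# Hedged sketch of the NV sampling step for d=2: random pure states and a
# rank-1 measurement operator are drawn, each draw yields a moment-matrix
# sample Tr(u^dag v); sampling stops once a new sample is linearly dependent
# on the previously collected ones.
import numpy as np

rng = np.random.default_rng(0)
d = 2

def random_pure_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def moment_matrix_sample():
    # Simplified monomial list {identity, rho_1, rho_2, M}
    ops = [np.eye(d), random_pure_state(d), random_pure_state(d), random_pure_state(d)]
    return np.array([[np.trace(u.conj().T @ v) for v in ops] for u in ops])

samples = []
while True:
    G = moment_matrix_sample()
    stack = np.array([s.flatten() for s in samples + [G]])
    if samples and np.linalg.matrix_rank(stack) == len(samples):
        break  # new sample is linearly dependent: the collected set spans the space
    samples.append(G)

print(len(samples), "moment-matrix samples span the d=2 feasible subspace")
```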
The coefficients {γ_i} serve as the SDP variables in the necessary condition for the existence of a d-dimensional quantum model for p(b|x,y), namely that it is possible to find Γ≽ 0 such that Γ(ρ_x,M_b|y)=p(b|x,y). Note that by relaxing the latter condition, one can equally well employ the NV hierarchy to bound the extremal quantum value of an arbitrary linear objective function ∑_b,x,y c_bxyp(b|x,y) characterised by some real coefficients c_bxy. It is presently unknown whether this hierarchy converges to 𝒬 in its asymptotic limit. Notably, one can also incorporate POVMs by explicitly performing a Neumark dilation in the measurements, albeit at the price of employing a larger dimension. Alternatively, one can sample directly from the set of POVMs, but that requires the use of localising matrices to enforce the bounds M_b|y≽ 0 and ∑_b=1^N-1 M_b|y≼𝕀. For scenarios with a reasonably small number of inputs and outputs, or a fairly low dimension, the NV hierarchy is an effective tool; see examples in <cit.>. However, for middle-sized problems it becomes less handy. The first reason is that the size of the moment matrix, for a fixed level of relaxation, scales polynomially in the size of L. Secondly, the number of SDP variables, m, increases rapidly with any one of the parameters (X,Y,N,d)[The memory required in a single iteration of a typical primal-dual solver scales quadratically in both m and |𝒮| while the CPU time scales cubically in both m and |𝒮|.]. Thirdly, one typically obtains better bounds on 𝒬 by separately considering different rank combinations for the projective measurements and then selecting the best bound, but the number of combinations increases quickly with (N,Y,d). In <cit.>, an alternative SDP hierarchy is developed which applies to bounding correlations obtained from systems that can be represented nearly as d-dimensional. This permits analysis of correlations of systems that approximate e.g. qubits to any desirable extent, and in the special case in which the approximation is exact it provides bounds on 𝒬. The main idea is to supplement the set L with an additional operator V which is meant to emulate the d-dimensional identity projection. Therefore, it is given the properties V^2=V and Tr(V)=d. One can then proceed with building the moment matrix as described below Eq. (<ref>). This method circumvents sampling, immediately takes POVMs into account and its computational requirements are constant in the dimension parameter. The main drawback is that the hierarchy does not converge to the quantum set, because it inherently relaxes dimension-restricted communication to communication that on average has dimension d; such systems have been studied independently in the context of entanglement using SDP relaxations <cit.>. In some concrete instances, this can lead to worse bounds on linear objective functions. §.§.§ Applications Bounds on 𝒬, obtained by means of SDP relaxations, are broadly useful. An evident application is determining upper bounds on the magnitude of quantum advantages in useful communication tasks. An important class of examples are quantum random access codes, in which Bob aims to randomly access a piece of information in a larger database held by Alice <cit.>. Direct application of the NV hierarchy has given tight upper bounds in low dimensions <cit.> and lower bounds have been obtained via SDPs in seesaw when the channel is noisy <cit.>.
The former can be made vastly more efficient by exploiting symmetries inherent to the problem in order to reduce the complexity of the SDP; see e.g. <cit.>. Such symmetry methods are further discussed in Section <ref>. In other classes of communication tasks, inspired by the high-dimensional Bell inequalities of <cit.>, SDP relaxations of 𝒬 can showcase dimensional thresholds, i.e. a critical dimension above which the optimal quantum strategy qualitatively changes <cit.>. Moreover, suitably chosen linear objective functions over 𝒬 can be linked to the long-standing problem of determining the number of mutually unbiased bases in dimension six. The hypothesis that no more than three such bases exist could, in principle and if true, be proven through a sufficiently precise SDP relaxation of 𝒬 based on chaining of quantum random access codes <cit.>. There are also other SDP-based approaches to this problem; some that take the route of nonlocality <cit.> and others that consider the existence of so-called Gröbner bases <cit.>. Nevertheless, due to the computational complexity, the problem presently remains open. A complementary consideration is to, in a given dimension, bound the optimal quantum violation of facet inequalities for the polytope of classical prepare-and-measure correlations in a given input/output scenario <cit.>. From another perspective, such bounds can be seen as device-independent tests of quantum dimensions, which is a task where Alice and Bob aim to certify a lower bound on the dimension of their quantum channel without assuming any model for their preparation and measurement devices <cit.>. Applications also pertain to semi-device-independent quantum information processing, i.e. practically motivated protocols performed solely under the assumption that the communicated quantum state is of a limited dimension. Self-testing protocols have been developed in such settings, based on the prepare-and-measure scenario. Drawing inspiration from the swap method for noisy self-testing discussed in Section <ref>, one can for instance employ the NV hierarchy to bound the average fidelity of Alice's qubit preparations with the ensemble used in the paradigmatic BB84 quantum key distribution protocol <cit.>. This type of self-testing has also been extended to quantum instruments in three-partite communication scenarios, featuring a sender, a transformer and a receiver, both with <cit.> and without <cit.> SDPs. In <cit.> it is shown how the NV hierarchy can be extended to such prepare-transform-measure scenarios. It is also possible to certify qualitative properties. For instance, by restricting sampling to real-valued Hilbert spaces, the NV hierarchy has been used to test complex-valued quantum operations in a given dimension <cit.>. Furthermore, a particularly natural use for dimension-restricted systems is to certify non-projective quantum measurements, i.e. measurements that cannot be simulated with standard projective measurements and classical randomness <cit.>. The reason is that such measurements cannot be certified when ancillary degrees of freedom are available due to the possibility of Neumark dilations <cit.>. The NV hierarchy has been used to certify non-projective measurements of dimension two <cit.>, four <cit.> and up to six <cit.>. All these works rely on suitable witness constructions, but that is notably not essential for the SDP relaxation method to work. 
By analogous means, SDP methods have enabled certification of non-projective measurements in Bell scenarios under the assumption of a limited entanglement dimension <cit.>. In addition, both the SDP seesaw methods for probing 𝒬 (recall Section <ref>) and the NV hierarchy have been applied to determine the output rate of semi-device-independent quantum random number generators <cit.>. However, many actual implementations of dimension-restricted quantum systems, such as spontaneous parametric down-conversion sources of single photons or weak coherent pulses, only nearly represent a proper dimension-restricted system. This can be leveraged to hack a semi-device-independent protocol. Using the SDP relaxation hierarchy of <cit.>, originally discussed in <ref> only for standard dimension constraints, the small inaccuracies of such “almost qudit systems” can be taken into account when constructing quantum information protocols. The deviations from the dimension assumption can be quantified and incorporated as a linear inequality constraint on suitable elements of the corresponding moment matrix. §.§.§ Entanglement-assisted communication In the previous section, the parties in the prepare-and-measure scenario communicated quantum states while sharing classical randomness. In this section, we discuss the prepare-and-measure scenario when the parties additionally share an entangled state. The entanglement-assisted prepare-and-measure scenario is illustrated in Fig. <ref>. Consider a situation where Alice communicates classical messages of dimension d to Bob. Such correlations are interesting because they can boost the performance of classical messages in various communication tasks <cit.>. The source emits an entangled state Φ of local dimension D. Given x, Alice transforms her share into a d-valued classical message. Hence, she applies a d-outcome POVM {N_a|x}_a=1^d and upon receiving the outcome a she sends the classical state |a⟩⟨a| to Bob. Averaged over the probability p(a|x), Bob's total state thus becomes ∑_a |a⟩⟨a|⊗Tr_A((N_a|x⊗𝕀)Φ^AB), where the second factor is the unnormalised state of Bob's quantum system conditioned on Alice's outcome. In the most general situation, Bob can now first read the received message and then use the value a to choose a POVM {M_b|y,a} with which he measures his share of the entangled state. The correlations become p(b|x,y)=∑_a=1^d Tr((N_a|x⊗ M_b|y,a)Φ^AB). This can be interpreted as marginalised Bell correlations with signaling from Alice to Bob and can also immediately be extended to non-identity classical channels connecting the parties. In the most general case, the entanglement dimension D is unrestricted and Bob may adapt his measurement to the incoming message. Then, one may bound the set of quantum correlations from the exterior, when the alphabet size for the inputs and outputs is fixed, by a converging SDP relaxation hierarchy à la NPA <cit.>, which was discussed in Section <ref>. An alternative SDP relaxation hierarchy for this type of problem appears in <cit.>, which at the first level is more constraining than the NPA approach due to the possibility of imposing that all elements in the moment matrix are non-negative <cit.>. Furthermore, when the entanglement dimension is known, one may instead employ SDP relaxations in the spirit of dimension-restricted NPA, as discussed in Section <ref>. Another interesting situation arises when one insists that Bob does not adapt his measurement to the message, i.e.
that a Bell test is performed first and only afterwards does classical communication take place. Correlations from such non-adaptive strategies can also be bounded by SDP relaxations <cit.>, specifically by imposing the commutation relation [M_b|y,a,M_b'|y,a'] = 0 ∀ y, b, b', a, a' in the NPA-type matrix. When the messages are d-dimensional quantum systems, it is well known from the dense coding protocol that stronger correlations are possible <cit.>. For quantum messages, Alice applies a quantum channel Λ_x^A→ C which maps the incoming D-dimensional system to a d-dimensional message state that is sent to Bob. The message space is denoted C. The total state held by Bob becomes τ_x^CB=(Λ_x^A→ C⊗ℐ_B)[Φ_AB], to which he applies a POVM {M^CB_b|y}. The resulting correlations become p(b|x,y)=Tr(τ_x^CB M^CB_b|y). The only non-trivial constraint on the total state is no-signaling, namely τ_x^B=τ^B. In a given dimension, this allows for alternating convex search methods to be used for exploring the correlation set. In particular, for a maximally entangled two-qubit state the entanglement-assisted correlation set is equivalent to the correlations achievable in an unassisted prepare-and-measure scenario when Alice sends real-valued four-dimensional systems <cit.>. This permits SDP outer bounds via the NV hierarchy restricted to real Hilbert spaces. More generally, when D is unknown and when any quantum channel is used between Alice and Bob, outer bounds can be obtained by a convergent hierarchy of SDPs <cit.>. This hierarchy is based on an explicit Kraus operator parameterisation of the quantum message space. It can be seen as a variation of the NPA hierarchy and therefore its convergence properties are inherited. An important caveat is that this SDP hierarchy scales more rapidly than the adaptation of the NPA hierarchy to the classical case: the number of operators used to build the SDP matrix scales quadratically in d, as compared to linearly for the classical case. This quickly makes implementation demanding, although it may be possible to circumvent the issue via symmetrisation methods; see Section <ref>. It is therefore relevant that an alternative, unrelated, SDP hierarchy can be applied to efficiently obtain bounds. This is based on the concept of informationally restricted correlations <cit.>, see Section <ref>, which relies on moment matrices with a size independent of d <cit.>. While, in the entanglement-assisted scenario, this hierarchy does not converge to the quantum set, it can give useful and even tight bounds in specific cases. Notably, when Alice and Bob are connected by an identity channel and entanglement is a free resource, the correlation set for quantum messages is identical to the correlation set for classical messages of twice as many bits <cit.>. Consequently, one can still use the SDP hierarchy for classical messages to address the scenario with quantum messages. The SDP methods for the entanglement-assisted prepare-and-measure scenario have found several applications. For instance, in order to fully capture the spirit of a device-independent test of classical or quantum dimensions, i.e. to place a lower bound on the dimension of a message without making assumptions about the internal working of the involved devices, one must allow for the possibility that the preparation and measurement devices share a potentially unrestricted amount of entanglement. In <cit.>, the SDP relaxations are used to make known dimension witnesses <cit.> robust to entangled devices.
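As a toy illustration of the quantum-message correlations p(b|x,y)=Tr(τ_x^CB M^CB_b|y), the script below (our own example; Bob uses a single Bell measurement, so the index y is dropped) reproduces the dense-coding protocol mentioned above, in which one qubit plus a maximally entangled pair perfectly transmits one of four messages.

```python
# Hedged illustration: dense coding as entanglement-assisted correlations with
# a qubit message. Alice encodes x in {0,1,2,3} by a Pauli on her half of a
# maximally entangled pair; Bob's Bell measurement recovers x with certainty.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Z, X @ Z]

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
Phi = np.outer(phi_plus, phi_plus.conj())          # shared state Phi^AB

# Bell-basis projectors (Bob's joint measurement on message + his half)
bell_states = [np.kron(P, I2) @ phi_plus for P in paulis]
M = [np.outer(b, b.conj()) for b in bell_states]

for x, P in enumerate(paulis):
    U = np.kron(P, I2)                             # Alice's encoding channel
    tau_x = U @ Phi @ U.conj().T                   # state tau_x held by Bob
    probs = [np.real(np.trace(tau_x @ M_b)) for M_b in M]
    print(x, np.round(probs, 3))                   # p(b|x) = delta_{b,x}
```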
Furthermore, bounds on the quantum correlations have enabled a number of quantum resource inequalities. For instance, it has been shown that protocols in which Bob adapts his setting to a classical message are in general more powerful than the non-adaptive protocols, and that this distinction is crucial when using entanglement and one bit of classical communication to simulate correlations obtained from an unassisted qubit in the prepare-and-measure scenario, depending on whether it is measured with projective measurements or POVMs <cit.>. For example, non-adaptive protocols, based on Bell inequality violations followed by classical communication, are known to improve the task of random access coding <cit.>. Using adaptive measurements and higher-dimensional entanglement can yield larger quantum advantages <cit.>. Moreover, while it is well-known that entanglement cannot increase the capacity of a classical channel, the same is not true in general when the capacity is considered in the non-asymptotic setting <cit.>. For some noisy classical channels, the advantage of entanglement can even be linked to the CHSH inequality <cit.> and SDP relaxations showcase that such strategies are in fact optimal <cit.>. §.§.§ Teleportation Entanglement-assisted communication can also be considered when the inputs themselves are quantum states, rather than classical symbols. The most famous instance of such a scenario is teleportation, where a quantum state is sent by means of a shared maximally entangled state and classical communication <cit.>. However, if the state is not maximally entangled, then the teleportation channel will not flawlessly simulate the quantum identity channel. The traditional approach to quantifying the ability of an entangled state to perform teleportation is via the average fidelity of the target state and the teleported state <cit.>. A more general approach to teleportation is proposed in <cit.>, where a verifier supplies a given number of states ψ_x to Alice and asks her to teleport them to Bob. Alice applies a POVM A_a^VA to ψ_x and her share of the entangled state ρ_AB. The resulting unnormalised states of Bob are σ_a|ψ_x=Tr_V(A_a^VB(ψ_x⊗𝕀^B)), where A_a^VB=Tr_A((A_a^VA⊗𝕀^B)(𝕀^V⊗ρ_AB)). However, if the state is separable, then this simplifies to separable operators, A_a^VB=∑_λ p_λ A_a|λ^V⊗φ_λ^B, where A^V_a|λ=Tr_A(A_a^VA(𝕀^V⊗ρ^A_λ)). Notice also that completeness of {A_a^VA}_a implies ∑_a A_a^VB=𝕀^V⊗ρ^B. We can then quantify the amount of white noise that must be added to a given set {σ_a|ψ_x}_a,x in order to model it classically, min t  s.t.  σ_a|ψ_x/(1+t)+(t/(1+t)) 𝕀^B/(dN)=Tr_V(A_a^VB(ψ_x⊗𝕀^B)), ∑_a A_a^VB=𝕀^V ⊗(ρ^B/(1+t)+(t/(1+t)) 𝕀^B/d), A_a^VB∈SEP ∀ a, where N is the number of outcomes for Alice and d is the dimension of system B. If the solution has t>0, there is no classical teleportation model. By relaxing the set of separable operators to a semidefinite constraint, for example PPT, the above becomes an SDP criterion for classicality of teleportation. A resource theory for this type of teleportation, where SDPs again are relevant, was developed in <cit.>. Ideal teleportation can be seen as simulating a noiseless quantum channel using entanglement and classical communication. In <cit.>, SDP methods are developed for bounding various forms of simulation errors for how well the teleportation channel approximates the noiseless quantum channel. A key component for this analysis is to use SDP relaxations of the set of one-way LOCC channels, i.e.
relaxations of procedures where Alice measures locally and sends her outcome to Bob who then performs a local channel. To this end, as in Fig. <ref>, we view the actions of Alice and Bob as a single bipartite channel Λ_AB→ A'B'. Every one-way LOCC channel of this form preserves PPT states, which is an SDP condition on the level of the Choi matrix associated to the channel <cit.>, so imposing PPT preservation yields a relaxation of one-way LOCC. Alternatively, it is possible to relax one-way LOCC by imposing that Λ_AB→ A'B' is k-extendible <cit.>[An alternative definition of k-extendible channels and their relevance to SDP appears in <cit.>.]. This concept is analogous to the constraints appearing in the DPS hierarchy. It means that one can associate another channel, 𝒩_AB_1… B_k→ A'B'_1… B'_k, which is invariant under permutations of Bob's inputs and outputs, and such that if all but one of Bob's systems are discarded Λ_AB→ A'B' is recovered. These two relaxations can be combined into a single SDP relaxation of one-way LOCC. The former type of relaxation has also been used to address simulation errors when both Alice and Bob want to teleport states to each other <cit.>. A related task is known as port-based teleportation <cit.>. Here, Bob does not need to perform a correcting quantum channel upon receiving Alice's outcome. To achieve this, Alice and Bob share n copies of the maximally entangled state and Alice jointly measures her input state and all her n shares and sends the outcome to Bob. The outcome tells Bob in which share he can find the teleported state. Optimisation of protocols of this type has been cast as SDP <cit.>. §.§ Distinguishability constraints It is often interesting to benchmark quantum communication not by its dimension, but instead by another property that is either physically or conceptually well-motivated. This pertains partly to understanding the conditions under which quantum correlations go beyond classical limits and partly to building useful protocols for semi-device-independent quantum information processing, where deductions are made under weak and reasonable physical assumptions. Many different frameworks have been proposed, all based on the general idea of limiting the distinguishability of the states sent from Alice to Bob. What they have in common is that SDPs are typically crucial for their analysis. Here we briefly survey the main idea of each of these frameworks from an SDP perspective. Some of the cryptographic applications of these SDP methods are surveyed in Section <ref>. A natural experimental setting is that Alice knows what state, |ψ_x⟩, she is trying to prepare for Bob. However, since she does not have flawless control of her lab, she ends up preparing another state ρ_x which is close but not identical to ψ_x. The accuracy of her preparation can be quantified by the fidelity F_x=⟨ψ_x|ρ_x|ψ_x⟩. Alice can either measure this quantity in her lab or estimate it from an error model, and denote by ϵ_x the deviation in the fidelity from the ideal unit result. The communication between Alice and Bob is then based only on the assumption that Alice can control her state preparation up to an accuracy ϵ_x <cit.>. The key observation for characterising such correlations, based on quantitative distrust, is that Uhlmann's theorem <cit.> allows one to replace the mixed state ρ by a purification |ϕ⟩ such that the fidelity is preserved.
Since N pure states span at most an N-dimensional space, the correlations can be thought of as arising in a dimension-restricted space, to which the NV hierarchy applies as described in Section <ref>. One can therefore use the ideas of the NV hierarchy, but now extending the operator list to also include all the target states {ψ_x}. The fidelity constraints can then be imposed as additional linear inequality constraints, F_x=Γ(ρ_x,ψ_x)≥ 1-ϵ_x, on the moment matrix. This was used in <cit.> to, for instance, certify collections of non-classical measurements as a function of ϵ. An alternative approach limits the distinguishability, not with respect to a target state but with respect to the collection of states prepared by Alice. In <cit.> it was considered that a set of Z pure bipartite states, {|ψ_z⟩}, are distributed between Alice and Bob. The Gram matrix of the states is known, i.e. all pairs of overlaps λ_ij=⟨ψ_i|ψ_j⟩ are fixed. The correlations then become p(a,b|x,y,z)=⟨ψ_z|A_a|x⊗ B_b|y|ψ_z⟩. To bound the correlation set via an SDP hierarchy, consider a set of monomials 𝒮 consisting of products of the global projective measurements A_a|x⊗𝕀 and 𝕀⊗ B_b|y. Define the |𝒮|Z× |𝒮|Z moment matrix Γ=∑_i,j=1^Z G^ij⊗|i⟩⟨j|, where for each pair (i,j) we define the matrix G^ij(u,v)=⟨ψ_i|u^† v|ψ_j⟩ for u,v∈𝒮. The standard properties of projective quantum measurements and the known Gram matrix imply constraints on Γ. In particular, one recovers the probabilities as G^zz(A_a|x,B_b|y)=p(a,b|x,y,z) and the overlaps as ∑_a G^ij(A_a|x,𝕀)=λ_ij. This, combined with Γ≽ 0, gives an SDP relaxation which in the limit of large relaxation level converges to the quantum set of correlations. In the special case of just two pure states, the Gram matrix trivialises to a single non-trivial entry and the correlation set can then be characterised completely with a single SDP <cit.>. This two-state case was for example used in <cit.> to certify a genuine three-outcome measurement. Furthermore, SDP relaxations based on the Gram matrix have been used to compute upper bounds on quantum state discrimination problems for optical modes with arbitrary commutation relations limited by a fixed average photon number <cit.>. A conceptually motivated framework for the prepare-and-measure scenario is to generalise dimension restrictions to information restrictions <cit.>. The main idea is to consider the cost of creating correlations in terms of the amount of knowledge that Alice must make available about her input random variable X. The classical information carried by Alice's ensemble, ℰ={p_x,ρ_x}, is defined as the difference between the min-entropy before and after Bob has received the communication, I(ℰ)= H_min(X)-H_min(X|B). Here, the first term is determined by the largest probability in Alice's prior, H_min(X)=-log max_x p_x. The conditional min-entropy has an elegant operational interpretation in terms of minimal error quantum state discrimination <cit.>: if P_g is the largest average probability of state discrimination of the ensemble ℰ then H_min(X|B)=-log P_g. The state discrimination task can be written as an SDP, P_g = max_{M_x} ∑_x p_x Tr(ρ_x M_x)  s.t.  ∑_x M_x=𝕀 and M_x≽ 0. Notably, other natural communication tasks closely related to quantum state discrimination can also be expressed as SDPs. Examples of this are the quantum guesswork, where one aims to minimise the number of guesses needed to learn x <cit.>, and quantum state exclusion, where one aims to rule out that Alice selected a subset of her input alphabet <cit.>.
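The state-discrimination SDP above is simple to implement directly. The sketch below is our own illustration (cvxpy assumed; the two example states |0⟩ and |+⟩ with equal priors are arbitrary choices) and checks the SDP value against the Helstrom bound for two states.

```python
# Hedged sketch of the minimum-error discrimination SDP
# P_g = max sum_x p_x Tr(rho_x M_x), checked against the Helstrom bound
# P_g = 1/2 (1 + || p_0 rho_0 - p_1 rho_1 ||_1) for two qubit states.
import numpy as np
import cvxpy as cp

def projector(v):
    v = np.array(v, dtype=complex)
    return np.outer(v, v.conj()) / np.vdot(v, v)

p = [0.5, 0.5]
rho = [projector([1, 0]), projector([1, 1])]      # |0> and |+>
d = 2

M = [cp.Variable((d, d), hermitian=True) for _ in rho]
constraints = [m >> 0 for m in M] + [sum(M) == np.eye(d)]
objective = cp.Maximize(cp.real(sum(p[x] * cp.trace(M[x] @ rho[x]) for x in range(2))))
prob = cp.Problem(objective, constraints)
prob.solve()

helstrom = 0.5 * (1 + np.sum(np.abs(np.linalg.eigvalsh(p[0] * rho[0] - p[1] * rho[1]))))
print(prob.value, helstrom)   # both ~0.854 for these two states
```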
The central question then becomes to determine the relationship between the correlations p(b|x,y) and the information I(ℰ) (or the guessing probability P_g). In <cit.>, convex programming methods are developed for analysing both informationally restricted classical and quantum correlations. The former are fully characterised by an LP and the latter can be bounded by an SDP hierarchy. The key step for constructing the hierarchy is to introduce an auxiliary operator, σ, when building the moment matrix. This auxiliary operator comes from the SDP dual to Eq. (<ref>), P_g = min_σ Tr(σ)  s.t.  σ≽ p_x ρ_x ∀ x. Note that strong duality holds. Therefore, the properties that Tr(σ)≤ P_g and that σ≽ p_x ρ_x are built into the moment matrix. The former is just the linear constraint Γ(σ,𝕀)≤ P_g. The latter are semidefinite constraints which can be imposed through localising matrices (recall Section <ref>). Moreover, for informationally restricted correlations, one cannot restrict to pure states without loss of generality. Taking this into account requires localising matrices also for imposing that ρ_x is a valid state, namely ρ_x-ρ_x^2≽ 0. Beyond its own domain, the SDP tools for informationally restricted quantum correlations can be applied to problems in quantum contextuality <cit.> and entanglement-assisted correlations with classical or quantum messages <cit.>. For such ends, it has the convenient property that the complexity of the SDP is independent of the amount of information considered. However, the convergence of the hierarchy to the quantum set is presently unknown. A practical approach to quantum communication in the prepare-and-measure scenario is based on limiting the energy in the message from Alice to Bob. In <cit.> it was proposed to limit the non-vacuum component of a weak coherent pulse through an upper bound of the form ⟨𝕀-|0⟩⟨0|⟩_ρ_x≤ω_x. When Alice has two preparations, it was shown that the set of energy-restricted quantum correlations can be mapped into a qubit problem, which in turn permits complete characterisation in terms of a single SDP <cit.>. § RANDOMNESS AND QUANTUM KEY DISTRIBUTION Quantum cryptography offers a means to execute cryptographic tasks with information-theoretic security, with randomness generation and quantum key distribution (QKD) being the most well-studied primitives in this domain. In this section we describe how SDP hierarchies can be used to quantify randomness and compute rates of QKD protocols, referring the reader to <cit.> for more in-depth reviews on the topic. A QKD protocol consists of two parties, Alice and Bob, who want to establish a shared random string that is unknown to any potential adversary. To do this they execute a procedure which generates some classical-classical-quantum system, ρ_ABE, where A and B are classical systems held by Alice and Bob respectively and E is some quantum system held by a potential adversary. From this system they can then try to post-process the classical systems A and B to produce random strings K_A = K_B that are uncorrelated with E. In order to assess the security and performance of such a protocol one needs to compute (or at least lower bound) its asymptotic rate,[It is also possible to compute non-asymptotic rates from the asymptotic rates even against non-IID adversaries <cit.>.] i.e., the number of secret key bits generated in each round of the protocol as the number of rounds tends to infinity.
For example, for QKD with one-way error correction, against an adversary who applies the same attack in each round of the protocol independently of the other rounds (an IID adversary), the asymptotic rate is given by the Devetak-Winter bound <cit.>, min_ρ_ABE∈𝒮 H(A|E) - H(A|B), where H(X|Y):= H(XY) - H(Y) with H(X) := -Tr(ρ_X log_2 ρ_X) being the von Neumann entropy, and the minimisation is over the set 𝒮 of all classical-classical-quantum states ρ_ABE that are compatible with the protocol. Therefore the exact set 𝒮 depends on the protocol used and the statistics observed. The asymmetry in the Devetak-Winter bound comes from the restriction to protocols with one-way error correction, i.e., all the error correction is sent by Alice to Bob. It is possible to interpret the second term H(A|B) as approximately the rate of bits that Alice must send to Bob for him to successfully correct his raw key to be equal to hers. Note that, as A and B are observed by Alice and Bob, one can estimate H(A|B) directly from the statistics of the protocol. Thus, the main task remaining is to bound from below min_ρ_ABE∈𝒮 H(A|E). There are two main difficulties to overcome in order to compute bounds on Eq. (<ref>). Firstly, the objective function is a nonlinear function of ρ_ABE, and secondly one needs to characterise the set 𝒮 of possible states ρ_ABE output by the protocol when E is a system unknown to Alice and Bob. The latter depends significantly on the security model of the protocol and so, in the following, we will consider the different approaches suited to the different security models. It is further possible to consider different adversaries. For example, one could make a restriction to classical adversaries, which enforces E to be a classical system, and hence Eve cannot be entangled with the initial systems of Alice and Bob. §.§ Device-independent approach In the device-independent security model, pioneered by the ideas of <cit.> and <cit.>, Alice and Bob each have an untrusted device which they use to generate nonlocal correlations (see Fig. <ref>). It is assumed, without loss of generality, that the devices produce their outcomes given their inputs by measuring some projective measurements {A_a|x}_a,x, {B_b|y}_b,y on some bipartite state ρ_Q_AQ_B (see Eq. (<ref>)), where the subindex Q indicates that the system is quantum in contrast with the classical systems A and B that hold the measurement outcomes of Alice and Bob. In this security model the source ρ_Q_AQ_B is not trusted, and hence there may exist an adversarial party holding a system E that is potentially entangled with the systems Q_A and Q_B. In a device-independent protocol, Alice and Bob check that the correlations p(a,b|x,y) generated by their devices satisfy some linear constraints, ∑_a,b,x,yr_abxyi p(a,b|x,y) ≥ω_i ∀ i, where r_abxyi, ω_i ∈ℝ are specified by the protocol. They may, for instance, check that their devices achieve some sufficiently high average CHSH violation. Thus the set of states that one is required to optimise over in Eq. (<ref>) is exactly the set of post-measurement states whose statistics are compatible with the constraints of Eq. (<ref>) imposed by the protocol. Using the NPA hierarchy it is in principle possible to relax the existence of such quantum systems satisfying Eq. (<ref>) to a hierarchy of SDPs as was detailed in Section <ref>. What then remains is to convert the objective function H(A|E) into something that can be expressed within the framework of noncommutative polynomial optimisation.
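For orientation, in the special case of the CHSH-based protocol a closed-form lower bound on H(A|E) against collective attacks is known from the device-independent QKD literature. The short script below simply evaluates that known bound inside the Devetak-Winter rate; it is quoted from the literature rather than derived here, and the specific noise values are arbitrary examples.

```python
# Reference point (not derived in this review): Devetak-Winter rate using the
# known analytic bound H(A|E) >= 1 - h(1/2 + sqrt((S/2)^2 - 1)/2) for the
# CHSH-based protocol, with H(A|B) = h(Q) for quantum bit error rate Q.
import numpy as np

def h(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def devetak_winter_rate(S, Q):
    if S <= 2:               # no Bell violation: the bound is trivial
        return 0.0
    H_A_given_E = 1 - h(0.5 + 0.5 * np.sqrt((S / 2) ** 2 - 1))
    return max(0.0, H_A_given_E - h(Q))

print(devetak_winter_rate(2 * np.sqrt(2), 0.0))   # ideal case: 1 bit per round
print(devetak_winter_rate(2.6, 0.05))             # noisy example
```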
§.§.§ Bounding the min-entropy For a classical-quantum state, ρ_XY = ∑_x |x⟩⟨x|⊗ρ_Y(x), the conditional min-entropy of X given Y is defined as H_min(X|Y) := - log_2 P_g(X|Y), where P_g(X|Y) := max_{M_x}_x∑_x Tr(M_x ρ_Y(x)), and the maximisation is over all POVMs {M_x}_x on system Y. Operationally, the quantity P_g(X|Y) corresponds to the maximum probability with which someone who has access to system Y can guess the value of system X and hence P_g is referred to as the guessing probability <cit.>. This operational interpretation also implies that the min-entropy rates for a classical adversary coincide with those of a quantum adversary: Eve effectively creates a classical system upon measuring, which implies the existence of a classical strategy achieving the same min-entropy bound. Using the fact that H ≥ H_min one can immediately get lower bounds on rates by lower bounding the min-entropy, or equivalently upper bounding the guessing probability. In particular, when Alice inputs X=x we have P_g(A|E) = max_{M_a}∑_a Tr((A_a|x⊗𝕀⊗ M_a) ρ_Q_AQ_BE), where {M_a} is now some POVM given to the adversary (which can be assumed to be projective) <cit.>. By applying the tools of noncommutative polynomial optimisation the corresponding rate optimisation can be relaxed to a hierarchy of SDPs whose moment matrices are generated by the monomials {𝕀}∪{A_a|x}∪{B_b|y}∪{M_a} and the k-th level relaxation is given by max ∑_a Γ^k(A_a|x, M_a)  s.t.  ∑_a,b,x,y r_abxyi Γ^k(A_a|x, B_b|y) ≥ω_i ∀ i, Γ^k ≽ 0, where, as usual, we have not explicitly specified all the constraints present in Eq. (<ref>), e.g., those coming from projectivity, commutativity, orthogonality and normalisation amongst others. Taking -log_2 of any solution to Eq. (<ref>) will therefore allow one to lower bound the rates of device-independent QKD or device-independent randomness generation protocols. By tracing out the E system one can increase the efficiency of these relaxations in terms of the dimension of the SDP <cit.> as Eve's operators are removed from the relaxation and subsequently the size of the moment matrix is reduced. In particular, one can view Eve's measurement, upon obtaining the outcome c, as preparing the subnormalised state ρ_Q_AQ_B(c) = Tr_E((𝕀_Q_AQ_B⊗ M_c)ρ_Q_AQ_BE) for Alice and Bob, which satisfies a normalisation condition ∑_c ρ_Q_AQ_B(c) = ρ_Q_AQ_B. Thus it is possible to create a relaxation for each of the states ρ_Q_AQ_B(c), leading to several smaller moment matrix blocks instead of one large moment matrix for the state ρ_Q_AQ_B. In particular, one can write the k-th level relaxation instead as max ∑_a Γ^k_a(A_a|x, 𝕀)  s.t.  ∑_a,b,x,y,c r_abxyi Γ^k_c(A_a|x, B_b|y) ≥ω_i ∀ i, ∑_cΓ^k_c(𝕀,𝕀) = 1, Γ^k_c ≽ 0 ∀ c, where again we have omitted many implicit constraints. The SDP bounds on H_min have been used extensively to analyse the device-independent randomness generated from different Bell inequalities in the presence of noise <cit.>, from non-inequality settings <cit.>, in the presence of leakage <cit.>, from post-selected events <cit.>, from PPT states <cit.> and from partially entangled states <cit.>. The efficiency of the SDPs allows them to be used to help optimise the experimental design to maximise randomness <cit.> and through the dual it is possible to extract functions on which the experimental parameters can be optimised <cit.>.
The dual also provides a function on the space of correlations that lower bounds the certifiable device-independent randomness, which can then be used to create full security proofs of the corresponding protocols <cit.>. Using these SDP relaxations it is also possible to verify that a four-outcome POVM can be used to produce two bits of device-independent randomness using a maximally entangled qubit pair <cit.> and similar advantages from non-projective measurements also appear in systems with higher dimensions <cit.>. The technique can also be applied to more exotic correlation scenarios like sequential measurements <cit.> to show robust generation of more randomness than would be possible with just a single projective measurement or within the instrumental causal structure <cit.>. It was also used together with analytical investigations to demonstrate that a sequence of non-projective measurements can be used to generate unbounded amounts of randomness from a single maximally entangled qubit pair <cit.>. §.§.§ Bounding the von Neumann entropy The min-entropy approach provides a simple method to lower bound the rates of protocols. However, secure asymptotic and non-asymptotic rates are often expressed in terms of the von Neumann entropy <cit.>, which in general is larger than the min-entropy. Thus, in order to find tighter lower bounds on the rate of a protocol it is necessary to find a way to lower bound the von Neumann entropy more accurately. In <cit.> the authors use duality relations of entropies to remove Eve from the problem, following a similar approach taken in <cit.> to view the measurements of Alice and Bob through the lens of an isometry. After rephrasing the problem in terms of only Alice and Bob, they provide a lower-bounding ansatz, which after applying a Golden-Thompson inequality <cit.>, they express as a noncommutative polynomial optimisation problem which can in turn be relaxed to a hierarchy of SDPs. In <cit.> the authors introduce a sequence of conditional Rényi entropies[Rényi entropies are families of entropic quantities that (typically) interpolate between the von Neumann entropy and the min-entropy. For an in-depth overview we refer the reader to <cit.>.], that are all lower bounds on H(A|E). Each of these entropies is defined in terms of a solution to an SDP which emerges from the SDP representability of the matrix geometric mean <cit.>. Similar to the min-entropy, these Rényi entropies can be each used to give a hierarchy of SDPs that lower bound the rates. In <cit.> the method was used to derive improved device-independent QKD rates in settings with more inputs and outputs. Whilst the above two methods improve over the min-entropy, it is not clear that either can compute tight lower bounds on the von Neumann entropy. In <cit.> a sequence of variational forms that converge to the von Neumann entropy from below was introduced. Let m ∈ℕ and let t_1,…,t_m and w_1,…,w_m be the nodes and weights of an m-point Gauss-Radau quadrature rule[A Gaussian quadrature rule approximates an integral ∫_a^b f(x) dx by a finite sum ∑_i w_i f(t_i) where w_i are referred to as weights and t_i as nodes. We refer the reader to <cit.> for further details.] on [0,1] with t_m=1 <cit.>. 
Then for each m ∈ℕ the following noncommutative polynomial optimisation problem gives a lower bound on the rate inf H(A|E), min c_m + ∑_i=1^m-1 w_i/(t_i log 2) ∑_a Tr{ρ_Q_AQ_BE[A_a|x (Z_a,i + Z_a,i^† + (1-t_i) Z_a,i^† Z_a,i ) + t_i Z_a,iZ_a,i^†] }  s.t.  ∑_a,b,x,y r_abxyi Tr( ρ_Q_AQ_BE A_a|x B_b|y) ≥ω_i ∀ i, Z^†_a,i Z_a,i≼α_i 𝕀, [A_a|x, B_b|y] = [A_a|x, Z_c,i] = [B_b|y, Z_c,i] = 0, where α_i = (3/2) max{1/t_i, 1/(1-t_i)}, c_m = ∑_i=1^m-1 w_i/(t_i log 2), and the operators generating the noncommutative polynomial optimisation are {A_a|x}∪{B_b|y}∪{Z_c,i, Z_c,i^†}. This can then be relaxed to an SDP using the techniques detailed in Section <ref>. In particular, the core of the relaxation consists of a moment matrix generated by the monomials {𝕀}∪{A_a|x}∪{B_b|y}∪{Z_c,i, Z^†_c,i}. It is worth noting that the Z_c,i operators are not Hermitian and hence their adjoint must also be included in the generating set. The construction leads to a double hierarchy, namely, a hierarchy of variational bounds indexed by m, and for each of these bounds, an SDP relaxation hierarchy. For applications presented in <cit.>, a value of m=8 or m=12 was typically used alongside various tricks to speed up the computations. This method has been shown to recover the known tight bounds on rate curves. It has been used to compute randomness generation rates in mistrustful settings <cit.>, i.e., when Alice does not trust Bob, as well as for assessing the optimal randomness certifiable in the binary input and binary output scenario <cit.>. When Alice and Bob only have binary inputs and outputs, the analysis of the rate can be reduced to the analysis of qubit systems through the use of Jordan's lemma <cit.>. For certain Bell inequalities it is then possible to solve the entropy optimisation analytically <cit.>. In <cit.> a hybrid analytical-numerical approach is introduced for binary settings. It is shown that after reducing to qubit systems, it is possible to use further symmetries and properties of entropies to express the rate of the problem as an analytical function of some unknown correlators. These correlators can then be bounded by a commutative polynomial optimisation of just a few variables. This can then be relaxed to a hierarchy of SDPs using the Lasserre hierarchy (recall Section <ref>). The advantage over the techniques discussed previously is that the numerical optimisations are significantly smaller (and hence faster). Furthermore, it can achieve tight bounds in certain cases. A similar hybrid approach is taken in <cit.>, where the authors analyse key rates achievable in binary input and binary output settings when the key is extracted from different input settings. After reducing to qubit systems, they reformulate the problem as a triplet of nested optimisations, with the innermost optimisation being an SDP arising from the SDP formulation of the trace norm, ‖K‖_1 = min (1/2)Tr(X+Y)  s.t.  [ X K; K^† Y ]≽ 0, where K is any square matrix <cit.>. §.§.§ Beyond entropy optimisations So far we have focused on randomness generation and QKD with one-way error correction, the rates of which both require solving a minimisation of some entropy. When moving beyond these protocols the relevant figure of merit will often change. Nevertheless, SDP relaxation techniques remain readily applicable to these new settings. Two-way error correction in QKD, i.e. when both Alice and Bob can communicate in the error correction step of the protocol, is known as advantage distillation.
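Returning briefly to the trace-norm SDP quoted above, it is easy to verify numerically. The sketch below (ours; cvxpy assumed, with the block matrix encoded as a single PSD variable whose off-diagonal block is fixed to K) checks the SDP value against the sum of singular values of a random matrix.

```python
# Hedged sketch of ||K||_1 = min (1/2) Tr(X + Y) s.t. [[X, K], [K^dag, Y]] >= 0,
# implemented via one PSD block variable W = [[X, K], [K^dag, Y]].
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 3
K = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

W = cp.Variable((2 * n, 2 * n), hermitian=True)
constraints = [W >> 0, W[:n, n:] == K]            # fix the off-diagonal block
objective = cp.Minimize(0.5 * cp.real(cp.trace(W[:n, :n]) + cp.trace(W[n:, n:])))

prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, np.linalg.norm(K, "nuc"))       # both equal the trace norm of K
```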
<cit.> showed that a sufficient condition for secret key generation in this binary outcome protocol is min F(ρ_E|00, ρ_E|11) > √(ϵ/(1-ϵ)), where F(ρ,σ) := ‖√(ρ)√(σ)‖_1 is the fidelity, ϵ is the probability that Alice and Bob's outcomes disagree on the key-generating inputs x, y, i.e., ϵ := ∑_a≠ b p(a,b|x,y), and ρ_E|ab = (1/p(ab|xy)) Tr_Q_AQ_B[(M_a|x⊗ N_b|y⊗𝕀) ρ_Q_AQ_BE] is the marginal state of Eve conditioned on Alice receiving outcome a and Bob receiving outcome b. Various approaches to compute the minimisation of the fidelity have been proposed. In <cit.> the fidelity was lower bounded by a guessing probability and the resulting optimisation could be tackled using techniques introduced in <cit.>. In <cit.> it was shown that one can apply a fidelity-preserving measurement to the quantum states to reduce the objective function to just a function of probabilities; then, by using techniques similar to those introduced in <cit.>, arbitrarily tight bounds on the fidelity can be computed directly using noncommutative polynomial optimisation. A stronger sufficient condition was introduced in <cit.>, based on the Chernoff divergence Q(ρ,σ) := min_0 < s < 1 Tr(ρ^s σ^(1-s)), and indirect lower bounds were analysed using a guessing probability lower bound and the resulting SDP relaxations, similar to <cit.>. In contrast to QKD, wherein Alice and Bob trust each other, mistrustful cryptography aims to execute a cryptographic task between two agents who do not trust each other. The relevant figures of merit for these protocols are then the probabilities that each agent can cheat. Bit commitment is one such protocol, in which Alice commits a bit to Bob. After commitment Alice should not be able to modify the bit and Bob should only be able to learn the bit when Alice chooses to reveal it. In <cit.> a device-independent bit-commitment protocol based on the CHSH game was introduced and it was shown that the probability that Alice can cheat can be bounded via a noncommutative polynomial optimisation problem similar to a guessing probability problem. Similar SDP relaxations were also derived for the cheating probabilities of Alice and Bob in an XOR oblivious transfer protocol based on the magic square game <cit.>. §.§ Device-dependent approach The opposite of the device-independent approach is having a full characterisation of the honest parties' devices involved in a protocol. In this device-dependent scenario, Alice and Bob measure some fixed, known POVMs {A_a|x}_a,x and {B_b|y}_b,y, on some bipartite state ρ_Q_AQ_B. As before, the source is not trusted, and thus no assumptions on ρ_Q_AQ_B are made, except that it must be compatible with the statistics measured by Alice and Bob. In particular, an adversarial party holds a quantum system, E, that is potentially entangled with the systems Q_A and Q_B, with the global state denoted by ρ_Q_AQ_B E. The condition that ρ_Q_AQ_B E is compatible with the measured statistics is then expressed as Tr[(W_i ⊗𝕀_E )ρ_Q_AQ_B E] = ω_i ∀ i, where W_i = ∑_a,b,x,y c_abxyi A_a|x⊗ B_b|y is specified by the protocol and ω_i is given by the measured statistics. For example, one can choose {A_a|x}_a,x and {B_b|y}_b,y to be mutually unbiased bases, and W_i to compute the probability that Alice and Bob get equal results when measuring in the same bases <cit.>. Equation (<ref>) is a simple SDP constraint, so in order to compute the key rate one just needs to convert the objective function H(A|E) from Eq. (<ref>) into an expression that can be optimised via SDPs.
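As a small illustration of this setup — our own toy example, not taken from the cited works — the following numpy snippet builds the operators W_i and the corresponding statistics ω_i for the two mutually unbiased qubit bases Z and X, evaluated on an isotropic two-qubit state.

import numpy as np

ket0, ket1 = np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
proj = lambda v: v @ v.conj().T

# A_{a|x} = B_{a|x}: projective measurements in the Z (x = 0) and X (x = 1) bases
A = [[proj(ket0), proj(ket1)], [proj(ketp), proj(ketm)]]
B = A

# W_x computes the probability that Alice and Bob obtain equal outcomes in basis x
W = [sum(np.kron(A[x][a], B[x][a]) for a in range(2)) for x in range(2)]

phi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)   # maximally entangled pair
v = 0.9
rho = v * proj(phi) + (1 - v) * np.eye(4) / 4                     # isotropic state

omega = [np.real(np.trace(Wx @ rho)) for Wx in W]
print(omega)   # [0.95, 0.95], i.e. v + (1 - v)/2 in both bases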
§.§.§ Bounding the min-entropy As in the device-independent case, it is possible to lower bound the von Neumann entropy by the min-entropy, and compute the latter from the guessing probability given by Eq. (<ref>). This equation can be linearised in the optimisation variables by absorbing the {M_c}_c into ρ_Q_AQ_BE, that is, defining ρ_Q_AQ_B(c) = Tr_E ((𝕀⊗𝕀⊗ M_c) ρ_Q_AQ_BE), so that ρ_Q_AQ_B = ∑_c ρ_Q_AQ_B(c). Computing the guessing probability is done by the following SDP: max_{ρ_Q_AQ_B(a)} ∑_a Tr((A_a|x⊗𝕀) ρ_Q_AQ_B(a)) s.t. ∑_a Tr(W_iρ_Q_AQ_B(a)) = ω_i ∀ i, ∑_a Tr(ρ_Q_AQ_B(a)) = 1, ρ_Q_AQ_B(a) ≽ 0 ∀ a. This was used in <cit.> to demonstrate improved noise tolerances for QKD using high dimensional systems. The min-entropy is, however, a rather loose lower bound on the von Neumann entropy, so the key rates computed in this way are unnecessarily pessimistic. The main advantage of this technique is its simplicity. §.§.§ Bounding the von Neumann entropy The variational forms converging to the von Neumann entropy introduced by <cit.> can also be applied to compute the key rate in the device-dependent case <cit.>. Having trusted measurement devices drastically simplifies the problem: for a given matrix element on Alice and Bob's side, one builds an NPA hierarchy for Eve together with the quantum state, resulting in a block matrix SDP as done in <cit.>. This NPA-type hierarchy converges at the first level because there are no commutation relations to enforce, and thus for fixed m one has a single SDP. It is then possible to lower bound inf H(A|E) by min_σ,{ζ^a_i,η^a_i,θ^a_i}_a,i c_m + ∑_i=1^m∑_a=0^n-1 w_i/(t_i log 2) Tr[(A_a|x⊗𝕀_B)(ζ^a_i + (ζ^a_i)^† + (1-t_i)η^a_i) + t_iθ^a_i] s.t. Tr(σ) = 1, Tr(W_k σ) = ω_k ∀ k, Γ^1_a,i := [ σ ζ^a_i; (ζ^a_i)^† η^a_i ]≽ 0, Γ^2_a,i := [ σ (ζ^a_i)^†; ζ^a_i θ^a_i ]≽ 0 ∀ a, i, where c_m = ∑_i=1^m w_i/(t_i log 2), and w_i,t_i are the weights and nodes of the Gauss-Radau quadrature as in Section <ref>. The block matrices Γ^1_a,i,Γ^2_a,i are the single level of the NPA hierarchy necessary for convergence. Note the similarity with Eq. (<ref>): the variables ζ^a_i correspond to Z_a,i, η^a_i corresponds to Z_a,i^† Z_a,i, and θ^a_i corresponds to Z_a,i Z_a,i^†. Alternative methods to compute key rates for QKD that are not based on SDP relaxations also exist <cit.>. As shown in <cit.>, the H(A|E) term in the key rate can be rewritten as D(𝒵(ρ_Q_AQ_B)‖𝒵(𝒢(ρ_Q_AQ_B))), where D(ρ‖σ) := Tr( ρ(log_2(ρ)-log_2(σ))) is the relative entropy and 𝒵 and 𝒢 are quantum channels. This rewriting achieves two things: it removes Eve from the problem, and it results in an objective function that is convex in the variable ρ_Q_AQ_B. Thus the problem becomes a convex optimisation that can be solved via dedicated algorithms <cit.>. §.§ Semi-device-independent approach Semi-device-independent (SDI) protocols offer a tradeoff between the high security of device-independence and the ease of implementation of device-dependent protocols. Ideally, one looks to add one or more easily verifiable assumptions that enable the resulting protocol to be implemented in a significantly simpler manner. Often these protocols reduce to a prepare-and-measure scenario similar to those discussed in Section <ref>, wherein Alice randomly prepares one of the states {ρ_x}_x and sends it to a measurement device (Bob) who performs one of several measurements to generate the data that becomes the source of randomness or secret key. Different assumptions can then be placed on the source device and the measurement device to generate different SDI protocols.
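Before discussing these SDI constructions in detail, it is instructive to see how the device-dependent guessing-probability SDP above looks in code. The following cvxpy sketch is again our own illustration; it reuses the measurements A, the operators W and the statistics omega from the earlier MUB snippet, with the key generated from the input x = 0.

import numpy as np
import cvxpy as cp

# Sub-normalised states rho_{Q_A Q_B}(a), one per guess a of Eve
rhos = [cp.Variable((4, 4), hermitian=True) for _ in range(2)]

objective = cp.Maximize(cp.real(sum(cp.trace(np.kron(A[0][a], np.eye(2)) @ rhos[a])
                                    for a in range(2))))
constraints = [r >> 0 for r in rhos]
constraints += [cp.real(sum(cp.trace(Wx @ r) for r in rhos)) == om
                for Wx, om in zip(W, omega)]
constraints += [cp.real(sum(cp.trace(r) for r in rhos)) == 1]

prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, -np.log2(prob.value))   # guessing probability and the min-entropy bound on H(A|E)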
Interestingly, if one has an SDP relaxation of the set of correlations achievable within this setting, then one can often optimise entropies over these sets to bound the rates of randomness generation protocols. For several assumptions these SDP relaxations of the correlation sets have already been discussed in Section <ref>. The first work to introduce SDI cryptography considers a QKD scheme in the prepare-and-measure setting that assumes that the quantum states prepared by Alice and sent to Bob are qubits <cit.>. The analysis uses analytical results about dimension witnesses, but is limited to an ideal protocol. Later, SDI protocols for randomness generation based on dimension bounds were introduced <cit.>. In <cit.> the authors use the dimension restricted correlations hierarchy of <cit.> (see also Section <ref>) to compute a lower bound on the min-entropy that can be certified from devices that achieve some minimal success probability for a quantum random access code. The issue with an upper bound on the dimension of a quantum system is that it is difficult to physically justify, hence works then looked for protocols that rely on other assumptions that are easier to verify. <cit.> analyses SDI protocols for randomness generation under assumptions of bounded average and maximal energy of the quantum states sent from Alice to Bob, and a full security proof against classical adversaries is given. The resulting correlation set has an SDP representation (see Section <ref>) and the authors then analyse the randomness generated against a classical adversary. To achieve this they have to deal with the nonlinearity of ∑_x p(x) h(E_x) where p(x) is the input distribution, h(x) := - x log_2 x - (1-x) log_2(1-x) is the binary entropy and E_x is some observed quantity in the protocol. As h is concave, one can define a sequence of piecewise linear lower bounds on h that approximates h arbitrarily well in the limit. This leads in <cit.> to a hierarchy of SDP bounds on the rates based on this approximation. SDI randomness amplification protocols, i.e. SDI randomness generation with a partially trusted seed, can also be performed under the assumption of energy bounds <cit.>. A similar approach was also taken in <cit.>, where it is shown that the formalism developed in <cit.> can also be applied to assumptions on the spacetime symmetries of the protocol, which can further be treated independently of quantum theory. If the source is limited to the preparation of two states then the energy bounds give a physical justification for a minimal overlap |⟨ψ_x|ψ_y⟩| > 0 in the states prepared by the source. A number of works have also analysed SDI randomness generation directly under the assumption of a minimal overlap between the states {|ψ_x⟩}_x sent by the source to the measurement device. In <cit.> it is noted that any attempt to unambiguously discriminate {|ψ_0⟩, |ψ_1⟩} must contain some randomness in whether or not the discrimination is conclusive. An SDP based on H_min is formulated to bound the randomness generated with respect to an adversary who may share classical correlations with the measurement device. This SDP can also be generalised to more inputs and outputs to achieve higher randomness generation rates <cit.>. In <cit.> the minimal overlap assumption is used to show that quantum devices can generate more randomness than noncontextual devices that obey an analogous assumption, and an SDP to compute the randomness for noncontextual devices is also presented. 
Note that, as these works assume the preparation of pure states, they are able to restrict the analysis to the finite-dimensional subspace spanned by those states. As a result of this, the computation of the rates can be directly written as a single SDP. An SDI randomness generation protocol under which all overlaps ⟨ψ_x|ψ_y⟩ are known is presented in <cit.>. This work also analyses the randomness generated against a quantum adversary who has access to the quantum channel between the source and the measurement device and can use this to create entanglement between themselves and the measured states. By adapting the SDP relaxation method of <cit.> (see Eq. (<ref>)) one can then compute a hierarchy of min-entropy bounds against the adversary. A similar SDI randomness generation protocol is also presented in <cit.> in which the source is fully characterised but the quantum channel and Bob's measurement device is untrusted. Using also the techniques of <cit.> an SDP relaxation of min-entropy bounds is given and a full security analysis is provided. An alternative bounded overlap condition is presented in <cit.>, where it is assumed that the source prepares states that are close to some fixed set of target states (see Section <ref>). The randomness generated by the measurement device with respect to an adversary who shares classical correlations with both the source and the measurement device is then analysed using the min-entropy and SDP relaxations of the underlying correlation set of the scenario. An alternative to the various overlap assumptions is given in <cit.> wherein a bound on the information transmitted through the prepare-and-measure channel in assumed (see also Section <ref>). The certifiable randomness is evaluated using the min-entropy for various bounds on the transmitted information and comparisons to the dimension bounded protocols are also given. The setting of measurement-device-independent (MDI) QKD offers stronger security than device-dependent QKD as well as a longer distance <cit.>. In <cit.> an SDP method to bound the phase error rate of a variety of protocols is derived which can in turn be used to compute their rates via the Shor-Preskill formula <cit.>. Other works in the setting of uncharacterised measurement devices have investigated randomness generated in steering scenarios <cit.> and with trusted quantum inputs <cit.>. § CORRELATIONS IN NETWORKS A line of research in quantum correlations takes the study of entanglement-based correlations beyond the traditional Bell-type scenarios and into the domain of networks. Networks are composed of a number of parties that are connected to each other through multiple independent sources that each emit physical systems. A party can then perform measurements on the shares received from several different sources, see e.g. Fig. <ref>. The framework of networks is not just the appropriate one to analyze long-distance entanglement-based and communication scenarios, but it has also provided new insights into the fundamentals of quantum theory <cit.>. Characterising classical, quantum or no-signaling correlations in networks is a major theory challenge. The origin of the difficulty is that the presence of multiple independent sources renders the correlation sets non-convex and therefore one cannot rely on more standard tools from convex optimisation theory. For this reason, relaxation hierarchies have become a useful way to approach correlations in networks. 
In this section we provide brief introductions to these methods and their applications, referring to <cit.> for in-depth discussions of these and other aspects of correlations in networks. §.§ Inflation methods Inflation is a general framework for characterising correlations in causal structures in general and networks in particular, via SDP or LP relaxations. The main idea of inflation is to substitute the original network and its non-convex constraints for a larger network, created by copies of the sources and parties of the original network, in which the constraints are relaxed to linear symmetry constraints. An illustration is shown in Fig. <ref>, where the problem of characterising the probability distributions p(a,b,c|x,y,z) compatible with the triangle scenario in Fig. <ref> is relaxed by the characterisation of distributions p_inf({a^i,j},{b^k,l},{c^m,n}|{x^i,j},{y^k,l},{z^m,n}) compatible with one of the inflations in Figs. <ref>fig:cinf-fig:ginf, where the superindices denote the particular copies of the sources that are used to produce a particular value. Thus, one trades a simple but technically challenging problem for one that can be solved by standard methods in a more complicated network. The construction of the inflated network depends strongly on the physical model underlying the network. The three main models of interest are (i) classical models, corresponding to associating each source to an independent local variable, (ii) quantum models, in which each source is associated to an independent entangled quantum state, and (iii) NSI models, where each source is independently associated to a general nonlocal resource required only to respect the no-signaling principle. The key difference in the construction of the inflations stems from the fact that the former case allows free copying of information, whereas the other two do not. We proceed now to discuss the basics of the three types of inflation. §.§.§ Classical inflation In classical networks, the sources can be described by random variables and the measurement devices can without loss of generality be seen as deterministic functions of the classical variables received by a particular party. Since classical information can be copied, an inflated network may feature not only copies of the sources and measurement devices, but also of the concrete values of the local variables distributed by the sources <cit.>. An example of a classical inflation is presented in Fig. <ref> for the triangle-shaped network of Fig. <ref> where the parties' outputs are labelled a, b, and c respectively. Since the sources and measurement devices are copies of those in the original network, the correlations p_inf seen in the inflation must satisfy the symmetries p_inf( {a^i,j},{b^k,l},{c^m,n}) =p_inf({a^π(i),π'(j)},{b^π'(k),π”(l)},{c^π”(m),π(n)}), for independent permutations π, π', π” of the different copies of the sources. Moreover, marginals of p_inf over parties that reproduce (parts of) the original network can be directly associated to the probability distribution p_orig in the original network, p_inf(Π_i {a^i,i,b^i,i,c^i,i})=Π_i p_orig(a^i,i,b^i,i,c^i,i). For instance, one of such constraints in the inflation in Fig. <ref> is p_inf(a^1,1,a^2,2,b^1,1,b^2,2,c^1,1,c^2,2)=p_orig(a^1,1,b^1,1,c^1,1)p_orig(a^2,2,b^2,2,c^2,2). It is important to note that the constraints (<ref>) can only be imposed in feasibility problems, where p_orig is given and thus the right-hand side is a number. 
When optimising over the set of distributions compatible with a given inflation, p_orig are also variables, and thus the linear constraints that can be imposed are just p_inf(a^i,i,b^i,i,c^i,i)= p_orig(a^i,i,b^i,i,c^i,i) ∀ i. Also a third type of constraint is relevant for feasibility problems <cit.>. These are constraints on marginals that factorise, and some of the factors can be associated to p_orig. An example, also illustrated in the inflation of Fig. <ref>, is p_inf( a^1,1,a^2,2,b^1,1,b^1,2,c^1,1,c^2,1) =p_orig(a^2,2)p_inf(a^1,1,b^1,1,b^1,2,c^1,1,c^2,1). Since all the discussed constraints are linear for a given p_orig, the task of finding a probability distribution p_inf compatible with the same constraints can be cast as a linear program. Interestingly, the inflation technique <cit.> provides a complete solution to the characterisation of classical network correlations. This was shown in <cit.> by identifying a sequence of inflation tests that in the limit of large inflation converges to the set of local correlations associated with the original network. This sequence is constructed such that the n-th test features n copies of each of the sources of the original network. The inflation illustrated in Fig. <ref> corresponds to the second step in this sequence of converging tests for the case of the triangle network. Despite guaranteeing convergence in the asymptotic limit, one must bear in mind that the complexity of the hierarchy of LPs (measured by the number of elements in the probability distribution p_inf) grows in n as N^n^r, where N is the number of outcomes for a party and r is the amount of sources that send states to a party. Inflation has become a standard tool in the analysis of network nonlocality and it has many times been employed in the depicted triangle network. For a specific inflation, the set of compatible no-input binary-outcome correlations was completely characterised <cit.>. This characterisation was later found not to admit quantum violations, but other Bell-like inequalities for the triangle scenario, with four outcomes per party, were found that do admit noise-robust quantum violations <cit.>. Classical inflation has also been used to show that a shared random bit cannot be realised in a classical network <cit.>, and to find certificates of more genuine quantum nonlocality in the four-output triangle scenario by studying the dual of an inflation LP <cit.>. Inflation methods have also enabled examples of nonlocality that use a smaller number of outputs <cit.>. Moreover, outside the domain of networks, it has been used to determine equivalences between causal structures that produce the same correlations <cit.>. §.§.§ Quantum inflation In quantum networks, the sources distribute entangled states and the parties perform quantum measurements on the subsystems at their disposal. In contrast to the classical case, quantum theory must respect the no-cloning theorem <cit.>, which prevents valid inflations from copying the individual subsystems produced in the quantum sources and distributing them between additional parties. This limits the inflations allowed for a network. Taking again as example the triangle scenario of Fig. <ref>, Born's rule gives the quantum correlations p_Q^(a,b,c| x,y,z) = (A_a|x⊗ B_b|y⊗ C_c|z·ρ_AB⊗ρ_BC⊗ρ_CA), in analogy with Eq. (<ref>). Quantum inflation <cit.> relaxes the fact that the global state in the scenario, ρ_AB⊗ρ_BC⊗ρ_CA, has a tensor-product form. 
It does so via a gedankenexperiment similar to that of the previous section: if states and operators satisfying Eq. (<ref>) exist, then one can have multiple copies of them, and thus there exist (at least) one state ρ and operators A_a|x^i_CA,i_AB, B_b|y^i_AB,i_BC and C_c|z^i_BC,i_CA (where the new superindices indicate the copy of the corresponding source is acted upon) that satisfy the analogues of Eqs. (<ref>) and (<ref>) at the level of Born's rule, namely [ρ· Q({A_a|x^i,j},{B_b|y^k,l},{C_c|z^m,n})] = [ρ· Q({A_a|x^π(i),π'(j)},{B_b|y^π'(k),π”(l)},{C_c|z^π”(m),π(n)})], for any polynomial Q of the operators, and [ρ·⊗^n_i=1(A_a_i|x_i^i,i ⊗ B_b_i|y_i^i,i⊗ C_c_i|z_i^i,i)] = Π^n_i=1 p_Q^(a_i,b_i,c_i|x_i,y_i,z_i), for any n and any independent permutations π, π' and π” of the copies of a same source. Now, one can develop a hierarchy in n, which will denote the amount of copies of each source in the inflation network (Fig. <ref> contains the n=2 quantum inflation of the triangle scenario). For each n the characterisation of the states and operators that satisfy Eqs. (<ref>) and (<ref>) has the same form of that discussed in Section <ref> with some more linear constraints at the level of expectation values, and thus can be approximated by the NPA hierarchy. Thus, the implementation of quantum inflation comprises two different hierarchies. First, that of inflations increasing the number of copies of the sources in the network. Then, for each inflation, there is an NPA-like hierarchy to characterise the correlations compatible with such inflation. The latter hierarchy is known to converge to the set of quantum correlations corresponding to a relaxation of tensor product structure to commutation structure. For the former, it is not yet known whether the hierarchy where step n represents the inflation with n copies of each source (defined in <cit.>) converges, except in the particular case of the bilocality network (see Fig. <ref> in Section <ref>) <cit.>. There exists another, provably convergent, SDP sequence for quantum network nonlocality, namely that of <cit.>. However, this is not a hierarchy in the standard sense because it is not monotonic: failing a particular SDP test does not imply that the subsequent SDP tests will fail too. <cit.> outlines with examples various families of applications of quantum inflation. These include certifying that distributions are impossible to generate in a concrete quantum network, optimizing over distributions that can be generated in a quantum network, extracting polynomial witnesses of incompatibility, and a concrete practical example bounding the information that an eavesdropper can obtain in cryptographic scenarios involving quantum repeaters. Notably, additional commutation constraints can be added to the quantum inflation SDPs in order to constrain the resulting correlations to be classical. These SDPs can be seen as semidefinite relaxations of the LPs of the classical inflation hierarchy of the previous section, where one can trade off computational power for accuracy. §.§.§ No-signaling and independence Correlations in networks can be characterised subject only to minimal physical constraints, namely only by the independence of the sources and by the no-signaling principle. While noting that also other tools apply to this task (; see also Section <ref>), inflation methods present a systematic hierarchy approach for the purpose. 
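Both the classical and the quantum inflation hierarchies just described are implemented in the Inflation package listed in the implementation guide. As a rough sketch of its use — the class and method names below follow our recollection of the package's documented interface and may differ between versions — one can test whether the so-called W distribution admits a realisation in the quantum triangle network:

import numpy as np
from inflation import InflationProblem, InflationSDP   # API names assumed; check the package docs

# Triangle network: three independent sources, each shared by two of the three parties
triangle = InflationProblem(dag={"rho_AB": ["A", "B"],
                                 "rho_BC": ["B", "C"],
                                 "rho_AC": ["A", "C"]},
                            outcomes_per_party=[2, 2, 2],
                            settings_per_party=[1, 1, 1],
                            inflation_level_per_source=[2, 2, 2])

sdp = InflationSDP(triangle)
sdp.generate_relaxation("npa2")          # NPA-type level used for the chosen inflation

# W distribution: p(a,b,c) = 1/3 on the permutations of (1,0,0)
p_W = np.zeros((2, 2, 2, 1, 1, 1))
for a, b, c in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    p_W[a, b, c, 0, 0, 0] = 1 / 3

sdp.set_distribution(p_W)
sdp.solve()
print(sdp.status)

An infeasible status at a given inflation and relaxation level certifies that the distribution cannot be produced in the quantum triangle network; a feasible status is, as for any relaxation, inconclusive.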
The principles for NSI inflation were already put forward in the original work <cit.>: not only physical systems cannot be cloned but also the compatibility relations between the measurements that are performed on the physical states distributed are not characterised. This, in practice, means that measurement devices receive only one copy of each relevant system. These two requirements constrain significantly the set of allowed inflations (see, e.g., Fig. <ref>). The characterisation of correlations compatible with NSI inflations can be formulated in terms of a single linear program for each inflation. NSI inflations (also known as non-fanout inflations in the literature, see, e.g., <cit.>) have been explicitly used in the context of extending the role of the no-signaling principle to networks, in situations where the parties do not have a choice of measurements to perform on their systems <cit.>, and to demonstrations of nonlocality in the simplest scenario in the triangle network, namely that in which all the parties do not have inputs and produce binary outcomes <cit.>. This has led to the definition of the analogous of a Popescu-Rohrlich box <cit.> for network correlations, based on <cit.>. Moreover, the agnosticity of the physical model has been used as a theoretical basis for proposing a definition of genuine n-partite nonlocality based on the idea that correlations cannot be simulated in any network using global classical randomness and nonlocal resources shared between n-1 parties <cit.>. However, in contrast with the classical and quantum versions of inflation, it is not true that the steps in the NSI inflation hierarchy describe sets of correlations that are contained in those corresponding to lower levels, although the behavior observed in practice is that of monotonically improving bounds. Currently, how to define network correlations only subject to the existence of independent sources and no-signaling is a complicated matter <cit.>, and in fact NSI inflations have been proposed as such a definition that is physically well-motivated <cit.>. It is interesting to note that the various inflation techniques can be combined in order to address correlations in networks where different sources distribute systems of different nature. This is especially easy in the case of classical and NSI inflation, since both are naturally formulated in terms of linear programs. For example, such hybrid inflations are useful tools for tests of full network nonlocality <cit.>, where one aims to certify that every source in a network must uphold some degree of nonlocality in order to model observed correlations. However, hybrid networks are still mostly terra incognita. §.§.§ Entanglement in networks A natural question when studying quantum networks is what sort of entangled states can be produced in a given network. Indeed, this question has received considerable attention recently <cit.>. Taking again as illustration the triangle network of Fig. <ref>, if the sources distribute quantum states σ∈ℋ_A”B', μ∈ℋ_B”C' and τ∈ℋ_C”A', and the parties perform local operations characterised as CPTP maps Ω_P:B(ℋ_P'⊗ℋ_P”)→B(ℋ_P), all states that can be produced in it take the form ρ^=[Ω_A⊗Ω_B⊗Ω_C](σ⊗μ⊗τ). While the individual characterisation of any of the components of the expression above can be cast as an SDP (recall, e.g., the seesaw procedure in Eqs. (<ref>), (<ref>)), the complete characterisation of ρ^ cannot. 
The problem of characterising the quantum states that can be produced in quantum networks is addressed in <cit.> via SDP relaxations based on inflation. In a similar spirit to that described in Section <ref>, these relaxations transform the conditions on the independence of the sources in the network into symmetry constraints in more complicated networks, created using copies of the original sources and operations. The main difference is that, while the symmetry constraints were enforced at the level of expectation values of operators when studying nonlocality in Section <ref>, when studying network entanglement the constraints are enforced at the level of the quantum state in the inflation and its marginals. The fact that the state must be a PSD operator allows the problem to be phrased as a single SDP for a fixed inflation. In <cit.>, these relaxations are used to bound the maximum fidelities of known multipartite states with network realisations, which are later interpreted as witnesses of genuine network entanglement. Similarly, <cit.> provides no-go theorems regarding the preparation of cluster and graph states in networks. §.§ Other SDP methods in network correlations In addition to inflation, in some specific scenarios there exist other methods for characterising network correlations. These range from analytic methods to alternative LP and SDP relaxations. Below we review some of the latter. §.§.§ Relaxations of factorisation Particles that never interacted can become entangled via the seminal process of entanglement swapping <cit.>. The simplest entanglement-swapping scenario is that in which two parties, Alice and Charlie, each share a bipartite physical system with a central party, Bob, that performs entangled operations on the two systems received <cit.> (see Fig. <ref>). Recently this setting has been employed for showing that real Hilbert spaces have less predictive power than the complex Hilbert spaces postulated by quantum theory <cit.>. The associated entanglement swapping scenario assumes that the two sources share no entanglement but may be classically correlated. SDP relaxation methods are then employed, based on the PPT constraints compatible with the interpretation of the NPA hierarchy discussed in Section <ref>, to bound the predictive power of real quantum models. However, it is also relevant to consider entanglement swapping when the sources are also classically uncorrelated. Then, any correlation between Alice and Charlie must be mediated by Bob, i.e. two-body expectation values factor as ⟨A_x C_z⟩ = ⟨A_x⟩⟨C_z⟩, breaking the convexity of the sets of relevant correlations. This was the first network scenario considered in the literature and for which specific network Bell inequalities were developed <cit.>. <cit.> presents SDP hierarchies that relax the non-convex sets of probability distributions generated in networks where some parties are not connected to others. These hierarchies are modifications of the NPA hierarchy discussed in Section <ref>. The main idea of the modification is a scalar extension: the rows and columns of the moment matrices are labelled not just by operators, but also by sub-normalised operators that denote products of an actual normalised operator in the problem and a (possibly unknown) expectation value.
The elements in the moment matrices that are generated from these sets of operators will contain variables that represent products of expectation values, and can be associated via linear constraints to other variables upon which one wishes to impose factorisation. Take, as an illustration, the entanglement swapping scenario with Alice and Charlie performing two dichotomic measurements each, {A_0,A_1} and {C_0,C_1}, respectively, and the moment matrix generated by the set of operators {𝕀, A_0A_1, C_0C_1, ⟨A_0A_1⟩_ρ𝕀}: Γ = [ 1 v_1 v_2 v_3; · 1 v_4 v_5; · · 1 v_6; · · · v_7 ], where the rows are labelled by the monomials 𝕀, (A_0A_1)^†, (C_0C_1)^† and ⟨A_0A_1⟩_ρ^*𝕀, the columns by 𝕀, A_0A_1, C_0C_1 and ⟨A_0A_1⟩_ρ𝕀, and the omitted lower-triangular entries are fixed by hermiticity. Per the standard NPA prescription, we have that v_1 = v_3 because they both evaluate to ⟨A_0A_1⟩_ρ, and that v_5 = v_7 because both evaluate to |⟨A_0A_1⟩_ρ|^2. Moreover, and importantly, in the entanglement swapping scenario it holds that, regardless of the particular operators and quantum state, ⟨A_0A_1C_0C_1⟩_ρ = ⟨A_0A_1⟩_ρ⟨C_0C_1⟩_ρ. This constraint is enforced in Eq. (<ref>) by setting the linear constraint v_4 = v_6. Since all constraints between the variables in Eq. (<ref>) are linear, the set of distributions admitting a PSD Γ can be characterised via SDP. A hierarchy can then be generated by taking the levels of the associated NPA hierarchy and, for each of them, adding as many new columns as needed to impose at least one linear constraint per element of Γ that should factorise. For the entanglement swapping scenario, the hierarchy constructed in such a way converges to the desired set of quantum distributions <cit.>. However, the proof technique used there is difficult to generalise to more complicated networks <cit.>, where the SDP method still applies. The method has been used, for instance, to show that networks can activate the usefulness of measurement devices for detecting network nonlocality <cit.> and to provide quantum bounds on Bell inequalities tailored for the entanglement swapping scenario <cit.>. §.§.§ Tests for network topology SDP relaxations can also be used to rule out a hypothesised causal structure, i.e. a network constellation, connecting a given number of measuring parties. In some situations it is fairly easy to detect correlations that could not have been produced in a concrete network. A simple illustration is the network in Fig. <ref>. As discussed earlier, if one considers the marginal distribution of Alice and Charlie, the resulting correlations must factor, since there is no connection between the two parties. Thus, any non-factoring correlations between Alice and Charlie are impossible to generate in the bilocal network. While non-linear, these constraints can be linearised, for instance, by working with the entropies of the variables instead of the probabilities <cit.>. However, in other networks like the triangle network of Fig. <ref>, such simple criteria do not exist because all parties are connected to all sources. <cit.>, building on a similar characterisation for correlations admitting local models in networks <cit.>, finds simple criteria that allow one to discern whether correlations do not admit a realisation in a particular network, independently of the physical model of the systems distributed by the sources. The characterisation is based on the covariance matrix of the variables representing the outcomes of the measurements performed by the parties. Covariance matrices are inherently PSD.
The important realisation is that the network structure imposes a decomposition of the covariance matrix in block matrices that are individually PSD as they determine the correlations established by each of the sources. This is, for each source there is one PSD block matrix, that contains nonzero elements only in the rows and columns associated to the parties that receive systems from that particular source. This leads to a very simple and efficient way of characterising the correlations that can be generated in different networks via SDP, which has been found to be connected to the characterisation of block coherence of quantum states <cit.>. However, as discussed in <cit.>, this characterisation is not tight, in the sense that there exist alternative methods that better approximate the set of relevant correlations in some situations. § FURTHER TOPICS AND METHODS In this section we collect additional topics where SDP relaxations are relevant and methods for reducing the computational load of SDP. §.§ Classical models for quantum correlations For some entangled states, the outcome statistics from performing arbitrary local measurements in Bell-type experiments can be simulated by models based on local variables. Consequently, entanglement and nonlocality are two distinct phenomena <cit.>. The seminal example is Werner's model showing that the state ρ_v=v|ψ^-⟩⟨ψ^-|+1-v/4𝕀, where ψ^- is the singlet state, admits an LHV model for v≤1/2 even though the state is entangled for any v>1/3 <cit.>. The critical v up to which the ρ_v admits an LHV model for all projective measurements turns out to be equal to the Grothendieck constant of order 3 <cit.>. General methods are known for deciding whether a given entangled state admits an LHV model <cit.>. These methods are based on the idea of first choosing a finite collection of measurements and determining via LP how much white noise must be added to the state for there to be an LHV model of the resulting distribution. Next, one may add sufficient white noise in the measurement space so that all the quantum measurements can be represented as classical post-processings of the measurements in the selected set. In other words, the measurement space is shrunk until it is contained in the convex hull of the selected measurement set. It then follows that for any such noisy measurement, the distribution must admit an LHV. In a final step, one can pass the noise in the measurement space to the state space and obtain a generic LHV model for the final noisy state. The bottleneck here is that one must choose a large measurement set to obtain a good approximation of the quantum measurement space. This means solving an accordingly large LP or SDP. To circumvent this, one may instead employ an oracle-based method known as Gilbert's algorithm <cit.> which allows one to approximate the distance between a point and a convex set in real space. In <cit.>, this algorithm is used together with a simple heuristic for the oracle to implement a polyhedral approximation of the Bloch sphere based on 625 measurements and thereby obtain an LHV model for ρ_v up to v≈ 0.6829. Another option is to use instead of Gilbert's algorithm the more general Frank-Wolfe algorithm <cit.>. This has further improved the LHV threshold to v≈ 0.6875 <cit.>. Notably, this work also provided an improved upper bound at v≈ 0.6961 using 97 local measurement setting. In fact, methods of this sort work also for building local hidden state models (recall Eq. (<ref>)) for entangled states. 
The main difference is that one runs an SDP to check for the steerability of the assemblage, instead of the LP for membership to the local polytope. However, when restricting to two-qubit entanglement, one can exploit geometric arguments and use increasingly large polytope circumscription and inscriptions of the Bloch sphere in order to determine bounds on the steerability of a state for infinitely many measurement settings, which can be evaluated via LP <cit.>. The idea of shrinking the quantum measurement space so that it can be inscribed in a polytope whose vertices are finitely many measurements can also be applied in other settings. For instance, a similar approach shows that there exists sets of incompatible measurements that can never be used to violate a Bell inequality <cit.>. This extends earlier SDP arguments that were restricted to sets of projective measurements for the uncharacterised party <cit.>. Moreover, this type of approach can also be used in the prepare-and-measure scenario to determine whether the outcome statistics associated to performing arbitrary measurements on an ensemble of qubit states admits a classical model based on bits <cit.>. Using Gilbert's algorithm with up to 70 measurements, non-trivial bounds have been established on the critical detection efficiency needed to violate classical constraints in the qubit prepare-and-measure scenario <cit.>. §.§ Generalised Bell scenarios A large portion of this review has focused on scenarios where all the parties share parts of a same quantum state, that are known as Bell scenarios. These scenarios were generalised in Section <ref> to account for multiple independent sources distributing systems between different collections of parties. There exist further generalisations of the Bell scenario, that are relevant in randomness generation and in entanglement certification, and that can be characterised via SDP. The first generalisation is that to sequential scenarios where, instead of performing only one measurement at every round, the parties perform sequences of measurements in the systems received <cit.>. These scenarios are conceptually interesting, also in the context of randomness generation, since it is possible to extract more randomness from a given state when performing sequences of measurements <cit.>. It is possible to modify the NPA hierarchy (recall Section <ref>) in order to characterise the correlations that can be produced in these scenarios <cit.>. This is achieved by considering operators that represent strings of outcomes, and imposing that these operators satisfy “no-signaling to the past” (i.e., that the measurement operators that define the first k measurements do not depend on the n-k remaining inputs, since these occur later in the sequence), which are linear constraints admitted in SDPs. The result is a convergent hierarchy, that has been used to certify local randomness beyond two bits and for investigating monogamy properties of nonlocality. The second generalisation that we discuss is that known as broadcast scenarios, where the systems sent to one or several of the parties are passed through channels that prepare new systems, and the outputs are distributed to multiple new parties that measure them <cit.>. When the channel prepares quantum systems, the correlations in the scenario can be characterised with a slight modification of the NPA hierarchy. This scenario has found particular interest in the activation of nonlocality. 
With a quantum model, it allows one to certify in a device-independent way the entanglement of Werner states in almost all the range where they are known to be entangled <cit.>. §.§ Bounding ground state energies A central problem in the study of many-body systems is computing or bounding the ground state energy of the system, i.e. finding the minimal eigenvalue of its corresponding Hamiltonian. This problem is known to be computationally hard, in particular it is in general QMA-complete <cit.>. Thus, computationally tractable relaxations have been sought and in particular several SDP approaches have been developed. The structure of the problem naturally lends itself to a treatment in terms of noncommutative polynomial optimisation. In particular, the problem takes the form min Tr(ρ H) where H can be written as some polynomial of local operators. Thus, lower bounds can be obtained from SDP relaxation techniques similar to those presented in Section <ref> <cit.>. Interestingly, an apparent numerical paradox can be observed when performing these computations for bosonic systems <cit.>. Convergence of the semidefinite hierarchies for noncommutative polynomial optimisation problems is only proven when the operators are bounded <cit.>. Therefore, for problems involving the bosonic creation (annihilation) operators a_i^† (a_i) the standard proof of convergence does not hold. In fact, it can be shown that for Hamiltonians in this setting the hierarchy collapses at level 1: higher levels give no improvement over level 1, and the optimal value at level 1 is not equal to the optimal value of the original problem. Nevertheless, when performing the computations numerically one can sometimes observe improving lower bounds that converge to the actual solution. This apparent paradox is due to the finite precision of numerical computations, which implies that the solver is actually solving a slightly perturbed problem. Mathematically, the set of SOS polynomials is dense in the set of positive polynomials generated by the ladder operators. It is worth noting that a similar numerical paradox appeared in the setting of commutative polynomial optimisation <cit.> and it has a similar resolution <cit.>. Another approach to obtaining SDP relaxations for the ground state energy problem has been proposed in <cit.>. There, the problem of computing the ground-state energy of a translation-invariant Hamiltonian with identical nearest-neighbour interactions on each pair of neighbouring sites of an infinite chain is considered. Formally the problem can be stated as min Tr( H ρ^(2)) s.t. Tr(ρ^(2)) = 1, ρ^(2)≽ 0, ρ^(2)←ψ_TI, where ρ^(m) denotes the density matrix corresponding to an m-body marginal and ρ^(2)←ψ_TI is the constraint that ρ^(2) is a two-body marginal of some translation-invariant state of the entire chain. This latter constraint is equivalent to a quantum marginal problem, asking whether there exists a global state consistent with the marginal states (see Section <ref>). It can for instance be relaxed to the existence of all m-body marginals up to some finite m_max∈ℕ, i.e. a collection of partial trace constraints ρ^(2)←ρ^(3)←…←ρ^(m_max). This results in a hierarchy of SDP relaxations; however, the size of the SDPs grows exponentially in the number of sites considered. The core idea of <cit.> is to apply compression maps that retain the useful aspects of the marginal constraints whilst reducing the dimension of the variables significantly.
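As a minimal illustration of the lowest level of this relaxation — our own sketch, using only the m_max = 3 marginal constraint and none of the compression maps just described, and assuming a recent cvxpy with the partial_trace atom — the following snippet certifies a lower bound on the energy per bond of the infinite spin-1/2 Heisenberg chain.

import numpy as np
import cvxpy as cp

# Two-site Heisenberg interaction h = (XX + YY + ZZ)/4 on a pair of qubits
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
h = np.real(sum(np.kron(P, P) for P in (X, Y, Z))) / 4

rho3 = cp.Variable((8, 8), hermitian=True)            # relaxed three-body marginal rho^(3)
rho12 = cp.partial_trace(rho3, [2, 2, 2], axis=2)     # marginal of sites (1, 2)
rho23 = cp.partial_trace(rho3, [2, 2, 2], axis=0)     # marginal of sites (2, 3)

# Translation invariance at the level of the two-body marginal, plus normalisation and positivity
constraints = [rho3 >> 0, cp.trace(rho3) == 1, rho12 == rho23]
bound = cp.Problem(cp.Minimize(cp.real(cp.trace(h @ rho12))), constraints).solve()
print(bound)   # certified lower bound on the energy per bond; the exact value is 1/4 - ln 2 ~ -0.443

Larger m_max (more sites in the marginal) tightens the bound at an exponentially growing cost, which is precisely the blow-up that the compression maps of the cited work are designed to tame.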
§.§ Rank-constrained optimisation Several problems of interest in classical and quantum information theory can be formulated as an optimisation problem that includes a constraint in the rank of a matrix. These include optimisation over pure quantum states, Max-Cut <cit.>, matrix completion <cit.>, compressed sensing quantum state tomography <cit.>, detection of unfaithful entanglement <cit.>, and many others <cit.>. The problem of optimising under rank constraints is in general NP-hard, and as such it is usually solved via heuristics or approximations. It is possible, however, to formulate it as an SDP hierarchy similar to the DPS hierarchy discussed in Section <ref> by reformulating it as a separability problem <cit.>. This allows one to obtain a sequence of global bounds to the problem that converge to the optimal value. The idea is that a state ρ of dimension d has rank at most k if and only if it is the partial trace of a pure state |ϕ⟩∈ℂ^d ⊗ℂ^k. The set of such states is hard to characterise, but it can be handled by first noticing that one is only interested in its convex hull, and second by noticing that the convex hull is the partial trace over the last two subsystems of a state σ∈𝒟(ℂ^d ⊗ℂ^k ⊗ℂ^d ⊗ℂ^k) that respects the constraints of being separable over the bipartition (12|34), invariant under a SWAP over the same bipartition, and recovering the initial ρ through the appropriate partial trace. Exploiting these, one can use the DPS hierarchy to characterise the separability constraint, obtaining the SDP hierarchy for rank-constrained optimisation. Note that although the idea we have explained here is in terms of quantum states, it also applies to bound ranks of general matrices. §.§ Quantum contextuality Quantum theory cannot be modelled with hidden variables that are both deterministic and non-contextual[This means that each projective measurement is assigned a definite value that is independent of other compatible measurements performed simultaneously.] <cit.>. This is known as contextuality <cit.>. Contextuality scenarios can be cast in the language of graph theory, where each input/output tuple (event) is associated to a vertex and an edge is drawn between two vertices if and only if the events can be distinguished by jointly measurable observables. While many different contextuality tests can be associated to the same graph, both the non-contextual hidden variable and quantum bound associated to a given graph can be bounded in terms of the graph's independence number and the Lovász theta function (recall Eq. (<ref>)), respectively <cit.>. These quantities are computable by LP and SDP, respectively. More generally, it is possible to adapt the NPA hierarchy to arbitrary tests of contextuality by leveraging that compatible projective measurements commute, which add constraints to the moment matrix <cit.>. Also a more operational notion of contextuality has been proposed, which is not specific to quantum theory and not limited to projective measurements <cit.>. Two preparations (resp. measurements) are considered indistinguishable if they cannot be distinguished by any measurement (resp. preparation) allowed in the theory. Such are said to belong to the same context and therefore assigned the same realist representation. When operationally indistinguishable preparations give rise to statistics that do not admit such a realist model, the theory is said to be contextual. This test can be cast as an LP, see e.g. <cit.>. 
The set of preparation contextual quantum correlations can be bounded by hierarchies of SDPs. Two different hierarchies have been proposed. One leverages the idea and SDP methods of informationally restricted correlations <cit.> reviewed in Section <ref>, by interpreting the indistinguishability of two preparations as the impossibility of accessing any information about which preparation was selected <cit.>. The other relies on using unitaries in the monomial representation, and connecting them to POVMs via the fact that every 0≼ M ≼𝕀 can be written in terms of a unitary, M=𝕀/2+U+U^†/4 <cit.>. Both methods require extensive use of localising matrices to deal with mixed states and non-projective measurements. Notably, these ideas also enable the addition of measurement contextuality. The the convergence of either hierarchy to the quantum set remains unknown. Furthermore, in experimental tests of this type of contextuality, it is naturally not the case that the relevant lab preparations are precisely indistinguishable, also when the measurements used to probe their distinguishability are a small subset of the whole measurement space. Upon accepting the latter limitation, the former issue can be resolved by means of LP by leveraging the linearity of the operational theory to postprocess the lab data into new data that corresponds to exactly indistinguishable preparations <cit.>. Simplified variants of this approach have also been used for qutrit-based contextuality tests <cit.>. §.§ Symmetrisation methods Many of the most interesting problems in quantum information exhibit a degree of symmetry. Exploiting them can lead to vast computational advantages: turning an intractable problem into a tractable one, or even making it simple enough to allow for an analytical solution. Symmetries have been fruitfully applied to several problems: for example polynomial optimisation <cit.>, nonlocal correlations <cit.>, quantum communication <cit.>, mutually unbiased bases <cit.>, port-based teleportation <cit.>, unitary inversion, transposition, and conjugation <cit.>, rank-constrained optimisation <cit.>, and measurement incompatibility <cit.>. The fundamental idea behind symmetrisation techniques is that if both the objective function and the feasible set of an SDP are invariant under transformation of the variable X by some function f, one can exploit this symmetry to eliminate redundant variables and block diagonalise X. Both these reductions can drastically simplify the problem. To be more precise, consider again an SDP in the primal form of Eq. (<ref>). Assume that there exists a function f such that ⟨ C, f(X)⟩ = ⟨ C, X⟩, and furthermore that if X is a feasible point, that is, ⟨ A_i,X⟩ = b_i ∀ i and X ≽ 0, then f(X) is also a feasible point. Then for all feasible X and λ∈ [0,1] the point g(X,λ) = λ X + (1-λ) f(X) will be feasible and attain the same value of the objective, which follows from linearity and convexity. Assume also that there exists λ^* such that f(g(X,λ^*)) = g(X,λ^*), so that g(X,λ^*) is a projection of X into a fixed point of f. Then one can add the constraint f(X) = X to the SDP in Eq. (<ref>) without loss of generality. This is, it is possible to rewrite Eq. (<ref>) as max_X ⟨ C, X⟩ ⟨ A_i, X⟩ = b_i ∀ i, f(X) = X, X ≽ 0. To see why, consider a feasible (or optimal) point X^* for the SDP in Eq. (<ref>). From the argument above g(X^*,λ^*) will be also feasible for the original SDP, with the same value of the objective. 
Since by assumption it is a fixed point of f, it is also feasible for the SDP in Eq. (<ref>). Symmetrising an SDP then boils down to identifying f, the projection onto the fixed point subspace, and using the constraint f(X) = X to simplify the problem. The theory for doing so is particularly simple and well-developed when f is a group action <cit.>, so we shall present it here, while noting that more general techniques exist <cit.>. Let then G be a group, and ρ a representation of the group, that is, a function g ↦ρ_g such that for all g,h ∈ G we have ρ_gh = ρ_g ρ_h. Here we are only going to consider unitary representations, which are those that ρ_g^-1 = ρ_g^†. The group then acts on the SDP variable as X ↦ρ_g X ρ_g^†. We say that the SDP is invariant under this group action if for all g we have that ⟨ C, ρ_g X ρ_g^†⟩ = ⟨ C, X⟩ and ⟨ A_i, X⟩ = b_i implies ⟨ A_i, ρ_g X ρ_g^†⟩ = b_i for all i. Note that we do not need to consider whether ρ_g X ρ_g^†≽ 0 for X ≽ 0, as this is always the case. The projection onto the fixed point subspace is then given by the group average[In the case of an infinite but compact group it is given by ∫_G μ(g) ρ_g X ρ_g^†, where μ is the Haar measure on G.], X = 1/|G|∑_g∈ Gρ_g X ρ_g^†, which can be easily verified to satisfy X = ρ_g Xρ_g^† for all g, so we can add that as a constraint to the SDP. This constraint allows not only to eliminate redundant variables, but also to block diagonalise X using Schur's lemma. The main idea is that a group representation can be decomposed as a direct sum of the irreducible representations with their multiplicities, so there exists a unitary matrix V such that for all g V ρ_g V^† = ⊕_i 𝕀_n_i⊗ρ_g^i, where ρ_g^i is the i-th irreducible representation with dimension d_i and multiplicity n_i. Now the constraint X = ρ_g Xρ_g^† is equivalent to [X, ρ_g] = 0, which implies that the same V also block diagonalises X: V X V^† = ⊕_i X^i ⊗𝕀_d_i, where X^i is a Hermitian matrix of dimension n_i. Computing V can be challenging. For small problems it can be computed analytically by computer algebra systems such as GAP <cit.>. In the particular cases where the representation in question is the tensor product of n unitaries of dimension d, the permutations between n tensor factors of dimension d, or a combination of them, the Schur-Weyl duality can be used to give an explicit construction of V <cit.>. In general, though, it can only be computed numerically by software such as RepLAB <cit.>. To illustrate how symmetrisation works, let us consider a simple SDP: min_x_1,x_2 x_1 + x_2 X = [ 2 x_1 1; x_1 2 x_2; 1 x_2 2 ]≽ 0 This SDP is invariant under permutation of the first and third rows and columns of X. Since this permutation is its own inverse, the underlying group is the symmetric group over two elements, G = {e,p}, where e is the identity and p^2 = e. The group representation we need is then ρ_e = 𝕀 and ρ_p = [ 0 0 1; 0 1 0; 1 0 0 ]. First we eliminate variables using the group average: X = 1/2(ρ_e X ρ_e^† + ρ_p X ρ_p^†) = [ 2 y 1; y 2 y; 1 y 2 ], where y = (x_1+x_2)/2 is now the sole variable of the SDP. To do the block diagonlisation, we note the that symmetric group over two elements has only two irreducible representations, 1 and -1. The representation we are using consists of two copies of 1 and one copy of -1, and the unitary that block diagonalises it is V = 1/√(2)[ 1 0 -1; 0 √(2) 0; 1 0 1 ], with which Vρ_p V^† = (1 ⊗ -1) ⊕ (𝕀_2 ⊗ 1). 
The same unitary block diagonalises X as V X V^† = (X^1 ⊗ 1) ⊕ (X^2 ⊗ 1), where X^1 = 1 and X^2 = [ 2 y √(2); y√(2) 3 ]. All in all, the SDP was simplified to min_y 2y X = [ 1 0 0; 0 2 y√(2); 0 y√(2) 3 ]≽ 0 It can now be solved as an eigenvalue problem of a 2× 2 matrix, and the optimal solution is -2√(3). § CONCLUSIONS Quantum theory promises many advantages in information-processing tasks. However, in general, characterizing the correlations established by quantum systems is very demanding, both at the complexity theory level and in practice. In this review we have shown that many questions related to quantum correlations can be written as, or relaxed to, semidefinite programming problems. This has enabled researchers to obtain approximate or exact solutions to many problems regarding entanglement, nonlocality, quantum communication, quantum networks and cryptography, among others. For this reason, semidefinite programming has become a central tool in the field. § IMPLEMENTATION GUIDE In this appendix we discuss publicly available computer code packages for SDP relaxation hierarchies addressing various physics problems. We also discuss different SDP solvers and programming languages. SDP solvers require the problem to be input in a standard form, roughly similar to Eqs. (<ref>) and (<ref>), but with details that vary with the specific solver. This can be quite cumbersome for more complex problems. To get around this, it is common to use modellers, that allow much more flexible forms of input, and automatically translate them to the format required by the solvers. The available modellers are: * YALMIP - open source, written in MATLAB/Octave <cit.>. * CVX - proprietary, written in MATLAB <cit.>. * PICOS - open source, written in Python <cit.>. * CVXPY - open source, written in Python <cit.>. * JuMP - open source, written in Julia <cit.>. There is a large number of solvers available. Here we will mention only a few notable ones: * SeDuMi - open source, bindings for MATLAB/Octave. Can handle complex numbers natively <cit.>. * MOSEK - proprietary, bindings for C, C++, Java, MATLAB, .NET, Python, and R. Fast, parallelised implementation <cit.>. * SCS - open source, bindings for C, C++, Python, Julia, R, MATLAB, and Ruby. Uses a first-order method in order to handle large-scale problems <cit.>. * SDPA - open source, bindings for C, C++, and MATLAB. The variants SDPA-GMP, SDPA-QD, and SDPA-DD can solve problems with high or arbitrary precision <cit.>. * Hypatia - open source, bindings for Julia. Can handle complex numbers natively, solve problems with arbitrary precision, and supports a wide variety of cones other than the SDP one <cit.>. There are also several software packages that implement some of the SDP relaxations discussed here. A few notable ones are: * QETLAB - open source, written in MATLAB. Works with CVX. Implements several of the algorithms discussed here, including the DPS and NPA hierarchies <cit.>. * Ncpol2sdpa - open source, written in Python. Works with SDPA and MOSEK. Does commutative and noncommutative polynomial optimisation, focusing on NPA-type problems <cit.>. * GloptiPoly - open source, written in MATLAB, works with YALMIP. Does commutative polynomial optimisation <cit.>. * SOSTOOLS - open source, written in MATLAB. Does commutative polynomial optimisation <cit.>. * NCSOStools - open source, written in MATLAB. Does noncommutative polynomial optimisation <cit.>. * Inflation - open source, written in Python, works with MOSEK. 
It implements quantum inflation for quantum and classical correlations <cit.>. * RepLAB - open source, written in MATLAB/Octave. Does numerical representation theory, for the purpose of symmetrisation <cit.>. § STRICT FEASIBILITY In this appendix we prove that the unconstrained NPA hierarchy is strictly feasible, by explicitly constructing a positive definite feasible point. We thank Miguel Navascués for providing this proof. Instead of the usual basis of projectors to represent Alice's and Bob's measurements, {A_a|x}_a,x and {B_b|y}_b,y, we shall instead use the unitary basis of generalized observables, defined as A_x = ∑_a=0^N-1ω_N^a A_a|x, B_y = ∑_b=0^M-1ω_M^b B_b|y, where ω_N=e^-i2π/N and ω_M=e^-i2π/M. The conditions that {A_a|x}_a,x and {B_b|y}_b,y are sets of projectors that sum to identity are mapped into the condition that A_x and B_y are unitary and that their N-th and M-th powers evaluate to identity, this is, A_xA_x^† = A_x^† A_x = 𝕀, B_yB_y^† = B_y^† B_y = 𝕀, A_x^N = B_y^M = 𝕀. Note that this transformation from projectors to unitaries is reversible. The inverse operation is given by A_a|x = 1/N∑_a'=0^N-1ω^aa'_N A_x^a', B_b|y = 1/N∑_b'=0^M-1ω^bb'_M B_y^b'. Now, consider the set 𝒜 of inequivalent sequences of products of elements in {A_x}_x, that is, monomials that are not equivalent under relations (<ref>). Define then an infinite-dimensional, countable Hilbert space ℋ_A, with a canonical orthonormal basis whose elements are labeled by monomials of A_1,…,A_n. That is, ℋ_A={|a⟩:a∈𝒜}, with ⟨a|a'⟩ = δ_a,a'. Define now the operators {π_A(A_x)}_x ∈ B(ℋ_A) through their action on this orthonormal basis as follows: π_A(A_x)|a⟩=|A_xa⟩, ∀ a∈𝒜. For each x, π_A(A_x) is a unitary operator satisfying π(A_x)^N=𝕀. The representation π_A is known in operator algebras as the left regular representation of {A_x}_x, given the relations (<ref>). Doing the analogous construction from Bob, we can now define the moment matrix Γ_ab,a'b'=⟨ψ|(π_A(a)^†π_A(a')⊗π_B(b)^†π_B(b'))|ψ⟩ a,a'∈𝒜,b,b'∈ℬ, where |ψ⟩=|1⟩_A⊗|1⟩_B. From the definition of the constructions we have that Γ_ab,a'b' = ⟨1|π_A(a)^†π_A(a')|1⟩⟨1|π_B(b)^†π_B(b')|1⟩ = ⟨a|a'⟩⟨b|b'⟩ =δ_a,a'δ_b,b', so Γ = 𝕀, which is positive definite, and the NPA hierarchy without constraints is strictly feasible, as we wanted to show. Note that from this construction one can also obtain a positive-definite feasible moment matrix in the usual basis of projectors. Using Eqs. (<ref>, <ref>) one constructs the linear transformation C that takes monomials from the unitary basis to the projector basis. The desired moment matrix is then Γ̃ = CΓ C^† = C C^†. We thank Carlos de Gois, Cyril Branciard, Denis Rosset, Felix Huber, Jef Pauwels, Marco Túlio Quintino, Miguel Navascués, Omar Fawzi, Otfried Günhe, Sander Gribling, Ties Ohst, and Valerio Scarani for comments. A.T. is supported by the Wenner-Gren Foundation and by the Knut and Alice Wallenberg Foundation through the Wallenberg Center for Quantum Technology (WACQT). A.P.-K. is supported by the Spanish Ministry of Science and Innovation MCIN/AEI/10.13039/501100011033 (CEX2019-000904-S and PID2020-113523GB-I00), the Spanish Ministry of Economic Affairs and Digital Transformation (project QUANTUM ENIA, as part of the Recovery, Transformation and Resilience Plan, funded by EU program NextGenerationEU), Comunidad de Madrid (QUITEMAD-CM P2018/TCS-4342), Universidad Complutense de Madrid (FEI-EU-22-06), and the CSIC Quantum Technologies Platform PTI-001. P.B. 
acknowledges funding from the European Union's Horizon Europe research and innovation programme under the project “Quantum Secure Networks Partnership” (QSNP, grant agreement No. 101114043). M.A. acknowledges funding from the FWF stand-alone project P 35509-N.
http://arxiv.org/abs/2307.01652v1
20230704111842
Polynomial removal lemma for ordered matchings
[ "Lior Gishboliner", "Borna Šimić" ]
math.CO
[ "math.CO" ]
Polynomial removal lemma for ordered matchings Lior Gishboliner, Borna Šimić August 1, 2023 ========================================================================================================================================== We prove that for every ordered matching H on t vertices, if an ordered n-vertex graph G is ε-far from being H-free, then G contains poly(ε) n^t copies of H. This proves a special case of a conjecture of Tomon and the first author. We also generalize this statement to uniform hypergraphs. § INTRODUCTION The graph removal lemma is a fundamental result in extremal graph theory, stating that for every fixed graph H and ε > 0, if an n-vertex graph G is ε-far from being H-free, in the sense that ε n^2 edges must be deleted in order to turn G into an H-free graph, then G contains at least δ n^|V(H)| copies of H, where δ = δ(H,ε) > 0. This was proved in a seminal work of Ruzsa and Szemerédi <cit.>. The removal lemma was subsequently generalized to many other combinatorial structures, notably induced subgraphs <cit.>, hypergraphs <cit.> and ordered graphs <cit.>. Removal lemmas are also closely related to graph property testing in the dense graph model, where they correspond to testing algorithms with constant query complexity, see the book <cit.>. A drawback of the known proofs of the removal lemma (and its many generalizations) is that all such proofs rely on Szemerédi's regularity lemma <cit.> or a generalization thereof. This results in weak quantitative bounds; for example, for the graph removal lemma stated above, the best known bound <cit.> is that 1/δ≤tower(O(log 1/ε)), where tower(x) is a tower of x exponents. This situation has led to research on the problem of characterizing the cases where the removal lemma has polynomial bounds, namely, where δ depends polynomially on ε. By now there are many works of this type <cit.>. Here we focus on ordered graphs. In an important work <cit.>, Alon, Ben-Eliezer and Fischer proved an ordered analogue of the graph removal lemma. They further asked to study cases where the ordered removal lemma has polynomial bounds. Addressing this question, Tomon and the first author <cit.> characterized the ordered graphs H for which the induced H-removal lemma has polynomial bounds. They also studied the non-induced case, and conjectured that the (non-induced) H-removal lemma has polynomial bounds if and only if the core of H is an (ordered) forest. As observed in <cit.>, to prove this conjecture it suffices to show that the H-removal lemma has polynomial bounds for every ordered forest H. Here we make progress on this conjecture by proving it in the case that H is an ordered matching. We also generalize this to s-uniform hypergraphs, s ≥ 3. An (ordered) n-vertex s-uniform hypergraph G is said to be ε-far from being H-free if one has to delete at least ε n^s edges to turn G into an H-free hypergraph. Our main result is as follows. For every t ≥ s ≥ 2, there exists C = C(t) such that the following holds. Let G be an ordered s-uniform hypergraph on n vertices, and let H be an ordered s-uniform matching on t vertices. For every ε > 0, if G is ε-far from H-freeness then G contains at least (ε/C)^C n^t copies of H. Our proof of Theorem <ref> shows that one can take C(t) = O(t^2). It would be interesting to improve this to C(H) = O(t), which would be tight, as shown by taking the random (binomial) ordered s-uniform hypergraph H_s(n,ε). 
It would also be interesting to prove a polynomial removal lemma for other families of ordered forests. § PROOF OF THEOREM <REF> Assume G is as in the statement of the theorem, with vertex set [n]. Firstly, we partition [n] into k := 1/ε intervals I_1,…,I_k of length ε n each, and delete all edges with at least two vertices inside one of these intervals. Let G_0 ⊆ G be the resulting hypergraph. This step deletes less than 1/εε n 2·n-2 s-2 < ε/2 n^s edges (as all such edges are counted, and some more than once), so G_0 is still ε/2-far from being H-free. Set γ := ε/4t (recall that t = |V(H)|). For each 1 ≤ℓ≤ k, we define t nested partitions of I_ℓ as follows. Set 𝒥_ℓ,1 = {I_ℓ}. For j = 2,…,t and for each J ∈𝒥_ℓ,j-1, split J into intervals of length γ|J| and add these intervals to 𝒥_ℓ,j. Note that for each 1 ≤ j ≤ t, 𝒥_ℓ,j forms a partition of I_ℓ into intervals of size γ^j-1|I_ℓ|. Put 𝒥_ℓ = ⋃_j = 1^t 𝒥_ℓ,j. For each v ∈ I_ℓ and 1 ≤ j ≤ t, it will be convenient to denote by J_j(v) the interval in 𝒥_ℓ,j containing v, so that v ∈ J_t(v) ⊆ J_t-1(v) ⊆…⊆ J_1(v) = I_ℓ. Set β := 2γ = ε/2t. We now perform a sequence of k cleaning steps and define k corresponding hypergraphs G_0 ⊇ G_1 …⊇ G_k, where G_ℓ is the hypergraph obtained after the ℓth cleaning step (1 ≤ℓ≤ k). At step ℓ we clean with respect to the interval I_ℓ, as follows: For every choice of s-1 vertices v_1 < v_2 < … < v_s-1 outside of I_ℓ, and for every interval J ∈𝒥_ℓ, let L_ℓ(v_1, v_2, … v_s-1, J) denote the leftmost β|J| vertices w ∈ J such that {v_1,…,v_s-1,w}∈ E(G_ℓ-1), if there are at least β |J| such vertices, and else let L_ℓ(v_1, v_2, … v_s-1, J) be the set of all such vertices. Delete all edges in L_ℓ(v_1, v_2, … v_s-1, J). Let G_ℓ be resulting hypergraph. By definition, for every given (s-1)-set vs-1, and interval J ∈𝒥_ℓ, this deletes at most β|J| edges. Since the intervals in 𝒥_ℓ,j form a partition of I_ℓ (for every 1 ≤ j ≤ t), we delete at most β|I_ℓ| edges when considering these intervals. Summing over 1 ≤ j ≤ t, this gives a total of at most tβ|I_ℓ| edge deletions for each of the less than n^s-1 choices of v_1,…,v_s-1, which adds up to a total of less than ∑_ℓ = 1^k tβ n^s-1 |I_ℓ| = tβ n^s = ε/2n^s edge deletions to obtain G_k from G_0. We then have that G_k still contains a copy of H, as G_0 is ε/2-far from being H-free. The key property guaranteed by the cleaning procedure is the following. Let 1 ≤ℓ≤ k and 1 ≤ m ≤ t, and let w_1 < … < w_m in I_ℓ and v_i, j∈ [n] ∖ I_ℓ, where 1 ≤ i ≤ m and 1 ≤ j ≤ s-1, such that {v_i, 1, v_i, 2, … v_i, s-1 ,w_i}∈ E(G_ℓ) for i = 1,…,m. Then there are at least ( ε/4t)^m(t+1)n^m choices for vertices w'_1 < … < w'_m in I_ℓ such that {v_i, 1, v_i, 2, … v_i, s-1 ,w'_i}∈ E(G_ℓ-1) for i = 1,…,m. Set L_i := L_ℓ(v_i, 1, v_i, 2, …, v_i, s-1 ,J_i(w_i)) ∖ J_i+1(w_i) for 1 ≤ i < m, and L_m := L_ℓ(v_m, 1, v_m, 2, …, v_m, s-1,J_m(w_m)). For every 1 ≤ i ≤ m and w'_i ∈ L_i, it holds that {v_i, 1, v_i, 2, … v_i, s-1 ,w'_i}∈ E(G_ℓ-1), by the definition of the set L_ℓ(v_i, 1, v_i, 2, …, v_i, s-1 ,J_i(w_i)). L_1 < … < L_m. Let 1 ≤ i < m. The elements of L_i ⊆ L_ℓ(v_i, 1, v_i, 2, …, v_i, s-1,J_i(w_i)) are to the left of w_i, because L_ℓ(v_i, 1, v_i, 2, …, v_i, s-1,J_i(w_i)) is a set of leftmost vertices w ∈ J_i(w_i) satisfying {v_i, 1, v_i, 2, … v_i, s-1,w}∈ E(G_ℓ-1), and these edges are deleted when obtaining G_ℓ from G_ℓ-1, while the edge {v_i, 1, v_i, 2, … v_i, s-1 ,w_i} is still present in G_ℓ. So L_i < w_i. Moreover, L_i is disjoint from J_i+1(w_i) by definition. 
It follows that L_i < J_i+1(w_i); indeed, for each w ∈ L_i, we have J_i+1(w) ≤ J_i+1(w_i) as w < w_i, and also J_i+1(w) ≠ J_i+1(w_i) because w ∉ J_i+1(w_i), hence w < J_i+1(w_i). This gives that L_i < J_i+1(w_i) ≤ J_i+1(w_i+1), and therefore L_i < L_i+1, as L_i+1⊆ J_i+1(w_i+1). |L_i| ≥(/4t)^t+1 n for all 1 ≤ i ≤ m. Recall that for every w ∈ I_ℓ, we have |J_t(w)| = γ |J_t-1(w)| = … = γ^t-1|J_1(w)| = γ^t-1|I_ℓ| = γ^t-1ε n > (ε/4t)^tn. Now, observe that |L_ℓ(v_i, 1, v_i, 2, …, v_i, s-1, J_i(w_i))| > β |J_i(w_i)|, because otherwise we would have deleted all edges of the form {v_i, 1, v_i, 2, …, v_i, s-1,w} with w ∈ J_i(w_i) when obtaining G_ℓ from G_ℓ-1, but the edge {v_i, 1, v_i, 2, …, v_i, s-1,w_i} is still present in G_ℓ. However, |J_i+1(w_i)| = γ |J_i(w_i)| = β/2|J_i(w_i)|, so by the definition of L_i we have |L_i| ≥ |L_ℓ(v_i, 1, v_i, 2, …, v_i, s-1, J_i(w_i))| - |J_i+1(w_i)| ≥β/2|J_i(w_i)| ≥(ε/4t)^t+1n. As we saw above, for every 1 ≤ i ≤ m and w'_i ∈ L_i, it holds that {v_i, 1, v_i, 2, … v_i, s-1 ,w'_i}∈ E(G_ℓ-1) so the lemma follows by Claims <ref> and <ref>. Recall that G_k contains a copy of H; denote it H_k. We can now use this initial copy to construct the required number of distinct H-copies in G_0. For 1 ≤ i ≤ k, let m_i be the number of vertices of H_k in interval I_i. For convenience, put δ := (ε/4t)^t+1. For every ℓ = k,…,0, there are at least (δ n)^m_ℓ+1 + … + m_k copies of H in G_ℓ which have m_i vertices in I_i for every 1 ≤ i ≤ k, and have the same vertices as H_k in I_1 ∪…∪ I_ℓ. The proof is by reverse induction on ℓ. The base case ℓ = k is trivial. So let ℓ < k, and suppose we found a set ℋ_ℓ of at least (δ n)^m_ℓ+1 + … + m_k copies of H in G_ℓ which have m_i vertices in I_i for every 1 ≤ i ≤ k, and have the same vertices as H_k in I_1 ∪…∪ I_ℓ. If m_ℓ = 0 then there is nothing to prove, so suppose that m_ℓ+1≥ 1. Fix any H_ℓ∈ℋ_ℓ. Note that every edge of H_ℓ touching I_ℓ has exactly one vertex in I_ℓ, by the definition of G_0. Namely, every such edge is of the form {v_1,…,v_s-1,w} with w ∈ I_ℓ and v_1,…,v_s-1∈ [n] ∖ I_ℓ. Let w_1 < … < w_m_ℓ be the vertices of H_ℓ in I_ℓ. By Lemma <ref>, we can replace w_1,…,w_m_ℓ in (δ n)^m_ℓ ways to obtain new copies of H. Doing this for each H_ℓ∈ℋ_ℓ gives the required (δ n)^m_ℓ|ℋ_ℓ| ≥ (δ n)^m_ℓ + … + m_k distinct (as the H_ℓ themselves differ on vertices we do not affect) copies of H, completing the induction step. For ℓ = 0, Lemma <ref> gives (δ n)^m_1 + … + m_k = (δ n)^t = (ε/4t)^t(t+1) n^t copies of H in G_0 (and so in G), as required. This proves the theorem. siam
http://arxiv.org/abs/2307.00615v1
20230702165101
An urn model for opinion propagation on networks
[ "Andrew Melchionna" ]
math.PR
[ "math.PR", "90B15, 91D30, 35R02" ]
An urn model for opinion propagation on networks Andrew Melchionna August 1, 2023 ================================================ We consider a coupled Pólya's urn scheme for social dynamics on networks. Agents hold continuum-valued opinions on a two-state issue and randomly converse with their neighbors on a graph, agreeing on one of the two states. The probability of agreeing on a given state is a simple function of both of agents' opinions, with higher importance given to agents who have participated in more conversations. Opinions are then updated based on the results of the conversation. We show that this system is governed by a discrete version of the stochastic heat equation, and prove that the system reaches a consensus of opinion. § INTRODUCTION §.§ Statement of Problem and Result Let G = (𝒱,ℰ) be a simple, connected graph, with each vertex i ∈𝒱 representing an individual agent. In our model of opinion propagation, agents discuss an issue with their neighbors, each conversation resulting randomly in either an agreement on state U or an agreement on state V. If two learners agree on state U or V, both of the learners increase their propensity to prefer state U or V, respectively, in the future. We make this precise in the following discussion. For every vertex i ∈𝒱 and timestep t ∈ℕ∪{0}, let the weights (u^i_t,v^i_t) ≥ 0 represent the propensities of vertex i for U and V, respectively, at time t. For ease of notation, we write (u⃗_t, v⃗_t), where u⃗_t, v⃗_t ∈ℝ^𝒱 have components (u_t^i)_i ∈𝒱 and (v_t^i)_i ∈𝒱. For convenience, we define the total weight of vertex i and the fraction of that total weight stored in state U to be g_t^i := u_t^i + v_t^i x_t^i := u_t^i/g_t^i, respectively. We consolidate notation with g⃗_t and x⃗_t, similarly to the above. We enforce the initial conditions (u⃗_0, v⃗_0) to be such that u_0^i + v_0^i =: g_0^i > 0 for all i, and we define γ⃗_t to be a vector with γ⃗_t^i := 1/g_t^i for later convenience. The dynamics are as follows: at every timestep t ≥ 1 choose a random edge e = (i,j) ∈ℰ. Increment (only) each the two g values: g_t^i = g_t-1^i + 1 g_t^j = g_t-1^j + 1 with all other g_t^k = g_t-1^k unchanged for k ∉{i,j}. Define: p^e_t := u_t^i + u_t^j/g^i_t + g^j_t = x_t^i g_t^i + x_t^j g_t^j/g^i_t + g^j_t, as the pooled opinion of agents i and j, and let p^e_t-1 give the probability of i and j agreeing on state U at time t, given that edge e is chosen at time t. If the chosen i and j agree on state U, increment each of their u values: u_t^i = u_t-1^i + 1 u_t^j = u_t-1^j + 1. If they agree on opinion V, do not alter the u-values. All other u_t^k = u_t-1^k for k ∉{i,j} remain unchanged regardless of the outcome of the conversation along edge e. We show that the dynamics of the system are governed by a discrete, stochastic version of the heat equation, with an "influence matrix" L driving the propagation of opinions. The influence matrix acts like the graph Laplacian, but gives higher weight to vertices which have high degree, which have more conversations on average and therefore develop strong opinions more rapidly. Similarly to the graph Laplacian, the influence matrix has right-eigenvector 1⃗ (the |𝒱|-dimensional vector with each component equal to 1); let a_t be the coordinate of x⃗_t corresponding to 1⃗ with respect to a fixed, generalized eigenbasis of L (discussed below). We will refer to a_t as the consensus coordinate. 
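Since the update rule is fully specified, it is straightforward to simulate. The sketch below is illustrative only and not taken from the paper: the graph is a path on five vertices, and the initial weights g_0^i = 2, u_0^i = 1, the horizon T, and the seed are arbitrary choices. It runs the dynamics and prints the final opinion vector, whose components should be close to a common value for large T.

```python
import random

def simulate(edges, g0, u0, T, seed=0):
    """Coupled Polya urns on a graph; returns the final opinion vector x_T^i = u_T^i / g_T^i."""
    rng = random.Random(seed)
    g, u = dict(g0), dict(u0)
    for _ in range(T):
        i, j = rng.choice(edges)              # edge chosen to host this conversation
        p = (u[i] + u[j]) / (g[i] + g[j])     # pooled opinion p^e_{t-1}
        g[i] += 1                             # both total weights increase
        g[j] += 1
        if rng.random() < p:                  # agreement on state U with probability p
            u[i] += 1
            u[j] += 1
    return {v: u[v] / g[v] for v in g}

# Path graph on five vertices with initial weights g_0^i = 2, u_0^i = 1 (so x_0^i = 1/2)
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
g0 = {v: 2.0 for v in range(1, 6)}
u0 = {v: 1.0 for v in range(1, 6)}
print(simulate(edges, g0, u0, T=200_000))     # components should be close to a common value
```

Different seeds should give different limiting values, consistent with the consensus limit in the theorem below being a random scalar.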
The goal of this paper is to prove the following theorem, which states that a consensus of opinion is reached in the long-time limit. There exists a random scalar 0 ≤ a_∞≤ 1 such that 𝔼[x⃗_t - a_∞1⃗^2] 0 §.§ Related Work A similar class of frameworks for opinion propagation, called voter models, also feature randomly selected pairs of agents exchanging opinions. For example, in the Deffuant model, pairs of neighbors interact only when their opinions are within some threshold of one another, with consensus and/or polarization being driven by threshold size (<cit.>). Another example of a voter model is the Hegselmann-Krause model, in which an agent is randomly selected to have their opinion replaced by some determinstic function of their neighbors' opinions (<cit.>). The model presented in this paper could perhaps be considered a stochastic voter model (stochastic in the sense that outcomes of conversations are random). A unique property of this model, however, is the pooled-experience nature of conversations, resulting in influences between agents which are random and dynamic, but which tend towards a graph-dependent object (the influence matrix). It should also be noted that this model features continuous opinions (x_t^i ∈ [0,1]) with discrete actions (agents agree on either U or V); different combinations of opinion and action spaces are featured throughout the literature. This model can also be compared to the DeGroot model for learning, in which updates are made according to some constant 'trust matrix' T: x⃗_t+1 = T x⃗_t (<cit.>). The trust matrix can represent how much each agent trusts their neighbors as well as themself, giving a weighted average of their neighbors' beliefs and their own prior opinions. Other, similar models of opinion propagation have been studied, considering the effects of agents' self-confidence and network topology on long-term behavior (<cit.>). Yet another related class of models for opinion propagation are 'probabilistic fuzzy models' which include agents' perceptions of some exogenous, albeit 'fuzzy' (the exact state is unclear) variables (<cit.>). We finally note that much of the literature on opinion propagation focuses on simulation-based studies, while rigorous proofs are less common. §.§ Outline of Paper The rest of the paper proceeds as follows. In Section 2, we derive the fact that the behavior of x_t is governed by a discrete-time stochastic heat equation, and give some important properties of the (stochastic) Laplacian operator driving the diffusion. In Section 3, we prove convergence of the consensus coordinate of x⃗_t, and in section 4, we prove the decay of x⃗_t - a_t 1⃗ (the disagreement component). In Section 5, we give a proof of Theorem <ref>, and in Section 6, we provide a conjecture that may lead to future work. § STOCHASTIC HEAT EQUATION §.§ Preliminaries At each timestep, an edge is randomly selected to host a 'conversation' between its two vertices. The following heuristic is equivalent and useful: let all edges have conversations, and uniformly at random select one edge to actually contribute to the dynamics. Let Ω_t = {ω_t^e}_e ∈ℰ∈{0,1}^ℰ be the results of all conversations occuring at timestep t, with ω_t^e = 1 if opinion U is agreed on, and 0 otherwise. Similarly, let ψ_t ∈ℰ be the edge chosen at time t, and let S_t^e = 1_{ψ_t = e}. Define the filtrations ℋ_1 ⊂ℋ_2 ⊂ℋ_3 ⊂ ... 𝒢_1 ⊂𝒢_2 ⊂𝒢_3 ⊂ ... 
ℱ_1 ⊂ℱ_2 ⊂ℱ_3 ⊂ ..., where ℋ_t = σ(Ω_t,Ω_t-1,...,Ω_1) corresponds to the information received up to and immediately after discussions in the t^th round, 𝒢_t = σ(ψ_t, ψ_t-1,...,ψ_1) corresponds to the information received given all of the chosen edges up to and including time t, and let ℱ_t = σ(ℋ_t, 𝒢_t,...,ℋ_1, 𝒢_1). Note that Ω_t ∈ m(ℋ_t), but Ω_t+1∉ m(ℋ_t). Since ψ_t does not care about the previous edges chosen or the results of any concurrent or previous conversations, we let σ(ψ_t) be independent of σ(ℱ_t-1∪ℋ_t). Furthermore, for e ≠ f, we let ω_t^e and ω_t^f be conditionally independent given ℱ_t-1 (they are not fully independent, since they are both affected by the history of conversations over the network). Define the full sample space and sigma-algebra to be Ω×Ψ := ({U,V}^ℰ)^ℕ×ℰ^ℕ ℱ :=σ( ∪_t ∈ℕℱ_t). Using the notation established above, we have the following update rule for g and u: g_t+1^i - g_t^i := ∑_e ↣ i S_t+1^e u_t+1^i-u_t^i := ∑_e ↣ i S_t+1^e ω^e_t+1, where e↣ i means that edge e is incident to vertex i, and we are summing over all such edges. Decompose ω_t^e into a ℱ_t-1-measurable random variable and a mean-0 σ(ℱ_t-1, ℋ_t)-measurable fluctuation: ω_t^e = p_t-1^e + w̃_t^e, so that w̃_t^e = 1 - p_t-1^e with probability p_t-1^e -p_t-1^e with probability 1 - p_t-1^e. Let w_t^i := ∑_e ↣ iw̃_t^e S_t^e. From here forward, we will use the notation 𝔼_t[Z] := 𝔼[Z|ℱ_t] to represent conditional expectation with respect to the sigma-algebra ℱ_t. Note that 𝔼_t-1[w_t^i] = ∑_e ↣ i𝔼[ w̃_t^e S_t^e|ℱ_t-1 ] = ∑_e ↣ i𝔼[ 𝔼[ w̃_t^e S_t^e|σ(ℱ_t-1∪ℋ_t ) ]|ℱ_t-1 ] =∑_e ↣ i𝔼[ w̃_t^e 𝔼[ S_t^e|σ(ℱ_t-1∪ℋ_t) ]|ℱ_t-1 ] = 1/|ℰ|∑_e ↣ i𝔼[ w̃_t^e|ℱ_t-1 ] = 0, where in the second, third, and fourth equalities we've used the tower property, 'taken out what was known', and used that S^e_t ⊥σ(ℱ_t-1∪ℋ_t), respectively. For later convenience, we present here a consolidated list of definitions of important quantities, and the earliest sigma-algebra ℱ with respect to which they are measurable: [Important Quantities] * Total weight: g_t^i ∈ mℱ_t, γ_t^i = 1/g_t^i * Weight on opinion U: u_t^i ∈ mℱ_t * Proportion of weight on U: x_t^i = u_t^i/g_t^i∈ m ℱ_t * Initial conditions: u_0^i,g_0^i > 0 * Mutual weight on U: p_t^e = u_t^i+u_t^j/g_t^i + g_t^j∈ mℱ_t, where e = (i,j) * Mean-0 fluctuation of conversation result: w̃_t^e∈ mℱ_t * Result of conversation: ω_t^e = p_t-1^e + w̃_t^e∈ mℱ_t * Edge to play: ψ_t ∈ m ℱ_t, S_t^e = 1_{ψ_t = e}∈ mℱ_t We now define a Hadamard (elementwise) product between a vector and a matrix. Unless otherwise noted, the symbol · will refer to the Euclidean norm for vectors, and the operator norm between Euclidean vector spaces for matrices. We carry this convention through the end of the paper. [Hadamard Product] The left-Hadamard product between an m-dimensional row vector b and (m × n) matrix A is a (m × n) matrix with entries given as follows: (b ∘_L A)^ij := b^i A^ij. Similarly, the right-Hadamard product between an n-dimensional column vector and (m × n) matrix A is an (m × n) matrix with entries as follows: (A ∘_R b)^ij = A^ij b^j. We will omit subscripts L and R when it is clear from the context what is meant. It can readily be shown that the Euclidean norms are sub-multiplicative with respect to the right-Hadamard product. For an m× n matrix A and an n-column vector b, A ∘ b≤Ab. and similarly for left-products. Another important property of Hadamard multiplication is its associativity with matrix multiplication. 
For an (m × n) matrix A_1, an n × p matrix A_2, and an n-column vector b, A_1 (b^T ∘_L A_2) = (A_1 ∘_R b) A_2. §.§ Deriving the Stochastic Heat Equation Fix an arbitrary vertex i ∈ V. We now consider the quantity u_t+1^i-u_t^i, which represents the increase in the propensity of vertex i to play move u after timestep t+1. u_t+1^i - u_t^i = ∑_e ↣ i S^e_t+1ω_t+1^e= ∑_e ↣ iS^e_t+1( p_t^e + w̃_t+1^e) We use the equation above to write down the change in x^i between timesteps t and t+1: x_t+1^i-x_t^i = u_t+1^i/g_t+1^i - u_t^i/g_t^i = 1/g_t+1^i( u^i_t+1-u^i_t - g^i_t+1-g^i_t/g^i_tu^i_t ) = 1/g_t+1^i( ∑_e ↣ iS^e_t+1( p_t^e + w̃_t+1^e) - (g^i_t+1-g^i_t) x^i_t) = w_t+1^i/g_t+1^i +1/g_t+1^i( [∑_j∼ i S_t+1^ijx_t^i g_t^i + g_t^j x_t^j/g_t^i + g_t^j] - (g^i_t+1-g^i_t) x^i_t) = 1/g_t+1^i( w_t+1^i + (L_t x⃗_t)^i ), where L_t is defined as follows: [The Diffusion Matrix] The diffusion matrix L_t∈ mℱ_t+1 is a |𝒱| × |𝒱| matrix with entries: L_t^ij = S_t+1^ijg_t^j/g_t^i + g_t^j i ≠ j, i ∼ j -∑_j ∼ iS_t+1^ijg_t^j/g_t^i + g_t^j i = j 0 else We also define Λ_t ∈ m ℱ_t+1 to be a |𝒱| × |𝒱| matrix as follows: Λ_t = I + γ_t+1^T ∘ L_t. Note that L_t will have exactly four non-zero entries, and takes the following form: [ 0 ⋯ ⋯ ⋯ ⋯ ⋯ 0; ⋮ ⋱ ⋯ ⋯ ⋯ ⋯ ⋮; ⋮ ⋯ -a ⋯ a ⋯ ⋮; ⋮ ⋯ ⋯ ⋱ ⋯ ⋯ ⋮; ⋮ ⋯ b ⋯ -b ⋯ ⋮; ⋮ ⋯ ⋯ ⋯ ⋯ ⋱ ⋮; 0 ⋯ ⋯ ⋯ ⋯ ⋯ 0 ] for some a,b > 0. Although L_t is sparse, its expectation given the previous timestep, 𝔼_t[L_t], is worthy of mention. It represents the aggregate effects after many rounds of conversations: 𝔼_t[L_t]^ij = 1/ℰg_t^j/g_t^i + g_t^j i ≠ j, i ∼ j -1/ℰ∑_j ∼ ig_t^j/g_t^i + g_t^j i = j 0 else. We also note that each g_t^i - g_0^i is a binomial random variable with mean equal to t d_i/E, where d_i is the degree of vertex i. We thus expect the leading order terms of 𝔼_t[L_t] to look like 1/ℰ times the following influence matrix, a graph dependent constant, defined below. BREAK [Influence Matrix] The influence matrix L is a |𝒱| × |𝒱| matrix with entries: L^ij = d_j/d_i/d_i+d_j i ≠ j, i ∼ j -∑_j ∼ id_j/d_i/d_i+d_j i = j 0 else. We also define A_t to be a |𝒱| × |𝒱| matrix as follows: A_t = I + 1/tL The influence matrix corresponds to the graph Laplacian matrix for the weighted, directed graph ℐ(G) := (𝒱,ℐ(ℰ)), where (i,j) ∈ℰ (i,j),(j,i) ∈ℐ(ℰ), and the edge weight from j to i is defined to be L^ij (see Figure 1). Note that edge weights from j to i are high when d_j is large relative to d_i. We think of j as having more 'influence' than i in this case. With these definitions in place, we present the Stochastic Heat Equation (abbreviated SHE), derived above: We present the differential form of the Stochastic Heat Equation (SHE): ∂_t x_t = γ_t^T ∘ (L_t-1x⃗_t-1 + W_t), and its solution: x⃗_t = ∑_j=0^t [Π_k=j^t-1Λ_k ] (γ_j^T ∘ W_j) where ∂_t x⃗_t := x⃗_t - x⃗_t-1 and W⃗_0 := u⃗_0. Throughout the paper, we will use the convention that Π- products of matrices have older matrices to the right, for example: Π_k=1^t Λ_k = Λ_k Λ_k-1···Λ_2 Λ_1. As intuition may suggest, the steady-state solution to the above heat equation is consensus: all x_t^i will converge to the same (random) constant. At the heart of this idea is the Perron-Frobenius Theorem, which says that the eigenvector 1⃗ of I + L which represents consensus has strictly dominant eigenvalue 1. We first state the Perron-Frobenius theorem for nonnegative matrices (Lemma <ref>), along with another necessary technical ingredient (Lemma <ref>). 
<cit.> Let M be a square, nonnegative, irreducible, primitive matrix (i.e., there exists k >0 such that M^k > 0 elementwise) with spectral radius ρ. Then the following hold: * ρ is an algebraically simple eigenvalue of M, and the corresponding normalized eigenvector v⃗ is unique and positive * Any nonnegative eigenvecor of M is a multiple of v⃗ * All other eigenvalues of M have absolute value strictly smaller than ρ <cit.> Let M be an n× n matrix, and define Γ(M) to be a digraph with vertex set 𝒱 = {1,...,n} and directed edge set ℰ = {(i,j) : M_ij≠ 0 }. If Γ(M) is strongly connected, and every vertex i of Γ(M) has a self-loop, then M is primitive. Having stated the above two ingredients, we now apply Perron-Frobenius to our system in the lemma below. For all k ≥ 1, 1 is a simple eigenvalue of A_k := I + 1/k L. Furthermore, there exists 0<λ<1 such that for all k ≥ 1, and for all eigenvalues λ^(k)≠ 1 of A_k: |λ^(k)| ≤ 1- λ/k. First notice that, for each row i and for all times t, ∑_j L^ij = ∑_j (γ⃗_t+1^T ∘ L_t)^ij=0 . From this it immediately follows that 0 is an eigenvalue of both matrices with corresponding right-eigenvector 1⃗ := [ 1; ·; ·; ·; 1 ], and thus that 1⃗ is a right-eigenvector of A_k with eigenvalue 1. Next, label the eigenvalues μ_i of L such that μ_1 = 0. Notice that for any k ≥ 1, the eigenvalues λ_i^(k) of A_k are given by λ_i^(k) = 1 + μ_i/k, numbered such that λ_1^(k) = 1 for all k. It remains to show that 1 is a simple eigenvalue of A_k and the bound given above. We invoke the Perron-Frobenius theorem for irreducible non-negative matrices on A_k := I + 1/k L. A_k is nonnegative since it is clear that all off-diagonal elements are nonnegative, and for all i, L^ii = -∑_j ∼ i(j)/(i)/(i) + (j) > -∑_j ∼ i1/(i) = -1. This gives that A_k^ii > 0 and thus that A_k is nonnegative. In order to show that A_k is irreducible, we consider its associated weighted digraph Γ(A_k), which has vertex set V, a complete edge set V × V, and weights W: V × V →ℝ_≥ 0. By the definition of L, we have that for all i ∼ j in the original graph, there are edges with non-zero weights flowing from i to j and from j to i. Since the original graph V is connected, this implies that the weighted digraph associated to A_k is strongly connected, giving that A_k is irreducible. Also note that since the diagonal elements of A_k are all strictly positive, each vertex in Γ(A_k) has a self-loop, and thus A_k is primitive by Lemma <ref>. Thus A_k satisfies the assumptions of the Lemma <ref>. Since the eigenvector 1̂ has components which are all positive, Perron-Frobenius gives that associated eigenvalue 1 of A_k is simple, that the spectral radius of A_k is 1, and that all other eigenvalues of A_k have modulus strictly less than 1. Let λ represent the spectral gap of A_1 (unless the spectral gap is 1, in which case we can arbitrarily set λ = 1/2): λ := 1-max_i>1|λ^(1)_i| if max_i>1|λ^(1)_i| ≠ 0 1/2 else This gives that, for all k≥ 1 and i > 1, |λ_i^(k)| = |1 + μ_i/k| = 1/k |μ_i + k| ≤1/k|λ_i^(1)| + k-1/k≤k-λ/k = 1-λ/k. From here forward, we let λ represent the number guaranteed by the above lemma. The next lemma shows that L is similar to a symmetric matrix and hence is diagonalizable, which simplifies the long-time analysis involving products of L. L is diagonalizable, and can be written L = PDP^-1, where the first column of P is 1⃗, and D_11 = 0. Let E be the diagonal matrix with diagonal elements equal to the degree of each vertex: E^ij = d_i i = j 0 i ≠ j . 
Note that E has strictly positive entries on the diagonal and is therefore invertible with (E^-1)^ij = 1/d_i i = j 0 i ≠ j . Then note that ELE^-1 is symmetric, because (ELE^-1)^ij = ∑_k,ℓ E^ik L^k ℓ (E^-1)^ℓ j =d_i/d_j L^ij. Now, by definition of L: if i j, then (ELE^-1)^ij =(ELE^-1)^ji = 0, and if i ∼ j, then (ELE^-1)^ij = 1/d_i + d_j = (ELE^-1)^ji. Thus (ELE^-1) is symmetric and therefore diagonalizable. Since L is similar to a diagonalizable matrix, it is itself diagonalizable. From here forward, we fix P and D as given in Lemma <ref>. The above two lemmas make a powerful combination, in the following sense. Note that the solution to the SHE (Proposition <ref>) involves a product of the Λ matrices: Π_k = j^t-1Λ_k. In the discussion below, we show that this large product can be approximated by the following product of constant matrices: Π_k = j^t-1 A_k, which can in turn is similar to a product of diagonal matrices: Π_k = j^t-1 D_k, where D_k = P^-1A_k P. Now, while the first entry of each of the D_k is 1 (corresponding to consensus), the other entries are bounded by 1- λ/k (due to Lemma <ref>). The last ingredient of this section is an application of the theory of gamma functions, due to Gautschi, which shows that while these eigenvalues approach 1 from below as k →∞, the approach is slow enough for their product to approach 0. <cit.> For 0<s<1: x^1-s < Γ(x+1)/Γ(x+s) < (x+1)^1-s For all 1 ≤ j ≤ t and for 0<λ < 1, (j-1/t+1)^λ≤Π_k=j^t (1 - λ/k) ≤(j/t)^λ We write Π_k=j^t (1 - λ/k) = Π_k=j^t (k-λ)/Π_k=j^t (k) = Γ(j)/Γ(j-λ)Γ(t+1-λ)/Γ(t+1), and apply Gautschi's inequality (Lemma <ref>). § CONVERGENCE OF THE CONSENSUS COORDINATE Let p⃗ be the first row of P^-1, i.e. the left-eigenvector of L with eigenvalue 0, and let a_t:= p · x_t be the coordinate corresponding to p⃗ in the eigenbasis expansion of L (where the eigenbasis is given by the columns of P). The goal of this section is to show the following lemma. There exists a random constant a_∞ such that a_t → a_∞ in ℒ^2. We decompose a_t as follows: a_t = a_0 + ∑_j = 0^t-1 (a_j+1-a_j) = a_0 + p⃗·∑_j = 0^t-1 (x⃗_j+1-x⃗_j) = a_0 + p⃗·∑_j = 0^t-1γ⃗^T_j+1∘ L_j x⃗_j + γ^T_j+1∘w⃗_j+1 =a_0 + p⃗·∑_j = 0^t-1 (γ⃗^T_j+1∘ L_j - 1/j+1L)x⃗_j + 1/j+1L x⃗_j + γ⃗^T_j+1∘w⃗_j+1 =a_0 + p⃗·∑_j = 0^t-1 (γ⃗^T_j+1∘ L_j - 1/j+1L)x⃗_j + γ⃗^T_j+1∘w⃗_j+1 where we've used the SHE update in the third line, and the fact that p⃗ is a left 0-eigenvector of L in the fifth. Now, there are two main differences between the dampened diffusion matrix γ⃗^T_j+1∘ L_j and the dampened influence matrix 1/j+1 L. The first is that the diffusion matrix only involves a random edge, while the influence matrix considers all edges. The second is that the g_t are random functions of the ψ variables, while L is a constant. We separate out these two differences by adding and subtracting 𝔼_j[γ⃗^T_j+1∘ L_j]: a_t =a_0 + m_t + s_t, where m_t := p⃗·∑_j = 0^t-1 ( γ^T_j+1∘ L_j -𝔼_j[ γ^T_j+1∘ L_j]) x⃗_j + γ⃗^T_j+1∘w⃗_j+1 s_t := p⃗·∑_j = 0^t-1Δ_j x⃗_j Δ_j := 𝔼_j[γ⃗^T_j+1∘ L_j] - 1/j+1 L We consider each of s_t (which stands for 'small') and m_t (which stands for martingale) separately; in order to show Lemma <ref>, it suffices to show that each of s_t and m_t converge in ℒ^2. While s_t is nonzero due to the randomness of g_t, we show that each term is small in expectation and therefore that the sum is convergent, while m_t is shown to be a martingale, on which we will invoke the martingale convergence theorem. 
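The spectral facts about L invoked in this argument (zero row sums, a simple zero eigenvalue with right-eigenvector 1⃗, similarity to a symmetric matrix via E, and a positive spectral gap) can be checked numerically on a small example. The sketch below is illustrative only: it assumes numpy and uses the path graph on five vertices, the same graph that appears in Figure 3 later.

```python
import numpy as np

# Path graph on five vertices: degrees d = (1, 2, 2, 2, 1)
adj = np.array([[0, 1, 0, 0, 0],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
d = adj.sum(axis=1)
n = len(d)

# Influence matrix: L^{ij} = (d_j / d_i) / (d_i + d_j) for i ~ j, row sums forced to zero
L = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if adj[i, j]:
            L[i, j] = (d[j] / d[i]) / (d[i] + d[j])
    L[i, i] = -L[i].sum()

print(np.allclose(L @ np.ones(n), 0))     # 1 is a right 0-eigenvector
E = np.diag(d)
S = E @ L @ np.linalg.inv(E)
print(np.allclose(S, S.T))                # E L E^{-1} is symmetric, so L is diagonalizable
mu = np.sort(np.linalg.eigvals(L).real)
print(mu)                                 # one eigenvalue equal to 0, the rest strictly negative
print(1 - np.max(np.abs(1 + mu[:-1])))    # spectral gap lambda of A_1 = I + L
```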
Before proceeding, we state a useful lemma which allows us to rigorously pass from sums to integrals: Let f(k) be nonnegative on [t_1,t_2], non-decreasing on [t_1,x] and non-increasing on [x,t_2] for some x ∈ [t_1,t_2]. Then ∑_k=t_1^t_2 f(k) ≤∫_t_1^t_2 f(k) dk + 2 f(x) In the below, we take sums with lower endpoint strictly greater than upper endpoint to be 0. We have: ∑_k=t_1^t_2 f(k) ≤ 2 f(x) + ∑_k=t_1^⌊ x ⌋ -1 f(k) + ∑_k= ⌈ x ⌉ + 1^t_2 f(k) ≤∫_t_1^⌊ x ⌋ f(k) + ∫_⌈ x ⌉^t_2 f(k) + 2 f(x) ≤∫_t_1^t_2 f(k) dk + 2 f(x). §.§ s_t: Fluctuations of g⃗_t Recall that 𝔼_t [γ⃗_t+1^T ∘ L_t ]^ij= 1/E(g^i_t + 1)g^j_t/g^i_t + g^j_t i ∼ j -∑_k ∼ i1/E(g^i_t + 1)g^k_t/g^i_t + g^k_t i=j 0 i j. Now, the random variable g_t^i is equal to g_0^i plus a binomial random variable resulting from t trials with probability d_i/E of success for each trial. Thus we expect each g_t^i to grow like d_i/Et, with standard deviation proportional to √(t). This gives the heuristic that 𝔼[Δ_j] = O(t^-3/2). This idea is supported by the following concentration inequality for the binomial random variable, which can be used to show that the probability of g_t^i - g_0^i deviating from its mean by t^1/2 + ϵ is exponentially small in t. <cit.> Let B∼Bin(n,p) be a binomial random variable, and let a>0. Then ℙ(|B-np| > a) < 2 exp(-2a^2/n). The above statement serves as the main tool for showing that Δ_j is indeed small. In particular, we use prove the following Lemma, which will be used to show that s_t converges in ℒ^2. From here forward, we use the notation f(t) ≲ g(t) to mean that there exists a constant c, independent of t, such that f(t) ≤ c g(t). For sufficiently large s, t: 𝔼[Δ_sΔ_t] ≲1/(st)^5/4+ s/t^5/4exp(- 2/|ℰ|^2s^1/2) +stexp(- 2/|ℰ|^2t^1/2) Define C_t := {| |ℰ| g_t^i- g_0^i/t - d_i |≤1/t^1/4 : ∀ i ∈𝒱} δ_t^i := |ℰ|g_t^i- g_0^i/t + |ℰ| g_0^i/t - d_i, and note that |ℰ| g_0^i/t - d_i ≤δ_t^i ≤ |ℰ| (g_0^i + 1)- d_i almost surely. Note also that using a = t^3/4/|ℰ| in Lemma <ref> produces ℙ(C^c_t) < 2|𝒱|exp(- 2/|ℰ|^2t^1/2). Now, for fixed i ∼ j, we have that: (t+1) Δ_t^ij = t+1/|ℰ|(g^i_t+1)g_t^j/g_t^i + g_t^j - d_j/d_i/d_i +d_j = 1+1/t/(δ^i_t + d_i+|ℰ|/t)δ_t^j + d_j/δ_t^j + d_j + δ_t^i + d_i - d_j/d_i(d_i + d_j) = (1+1/t)(d_i + d_j)(δ_t^j + d_j)d_i-d_j (δ^i_t + d_i+|ℰ|/t)(δ_t^j + d_j + δ_t^i + d_i) /d_i(d_i + d_j)(δ^i_t + d_i+|ℰ|/t)(δ_t^j + d_j + δ_t^i + d_i) Almost surely: (t+1) |Δ_t^ij| ≤c_1 |δ_t^i| + c_2 |δ_t^j| + c_3 1/t/(d_i + δ_t^i)(d_j + δ_t^j) for some t-independent constants c_1,c_2,c_3. Further, on C_t, |δ_t^i| ≲1/t^1/4 for all i. So, on C_t for sufficiently large t, |Δ_t^ij|≲1/t^5/4. It's also easy to see that, almost surely (in particular, on C_t^c), |Δ_t^ij| ≲ t Further, since Δ_t is row-stochastic for all t, we can drop the requirement that i ∼ j for the above two inequalities on |Δ^ij_t| (perhaps at the cost of a larger constant). Now, for s ≠ t, 𝔼 [Δ_s_maxΔ_t_max] = 𝔼 [max_i,j,k,ℓ|Δ^ij_s Δ^k ℓ_t||C_s ∩ C_t] ℙ(C_s ∩ C_t) + 𝔼 [max_i,j,k,ℓ|Δ^ij_s Δ^k ℓ_t||C_s^c ∩ C_t ] ℙ(C_s^c ∩ C_t) + 𝔼 [max_i,j,k,ℓ|Δ^ij_s Δ^k ℓ_t||C_t^c] ℙ(C_t^c) ≲1/(st)^5/4+ s/t^5/4exp(- 2/ℰ^2s^1/2) + stexp(- 2/ℰ^2t^1/2), where A_max := max_i,j |A^ij|. Finally, note that Δ_t≲Δ_t_max. This gives the desired result for s ≠ t. When s = t, we have: 𝔼 [max_i,j|Δ^ij_t|^2] ≤𝔼 [max_i,j|Δ^ij_t|^2|| C_t] ℙ(C_t) + 𝔼 [max_i,j|Δ^ij_t|^2|C_t^c ] ℙ(C_t^c) ≲1/t^5/2 + t^2 exp(-2/|ℰ|t^1/2), and again we use that Δ_t≲Δ_t_max. This allows us to prove the desired convergence of s_t. s_t converges in ℒ^2. It suffices to show Cauchy in ℒ^2, i.e. 
that for any ϵ > 0, there exists T such that for all t_1,t_2 > T, 𝔼[(s_t_1-s_t_2)^2] ≤ϵ. Note that (s_t_2 - s_t_1)^2 ≲∑_j,k = t_1^t_2-1Δ_j_2 Δ_k _2 ≲∑_j=t_1^t_2-1∑_k = t_1^j Δ_j_2 Δ_k _2 Taking expectations and using the lemma, 𝔼 [(s_t_2 - s_t_1 )^2] ≲∑_j=t_1^t_2-1∑_k = t_1^j 1/(jk)^5/4+ k/j^5/4exp(- 2/|ℰ|^2k^1/2) +jkexp(- 2/|ℰ|^2j^1/2). It's now clear, for example from Lemma <ref>, that the lemma follows. §.§ m_t: Martingale Convergence The goal of the subsection is to prove that m_t converges. We begin by stating the ℒ^2 martingale convergence theorem without proof. <cit.> Let y_t be a martingale with y_t ∈ℒ^2 for all t. Further assume that sup_t y_t_ℒ^2 < ∞. Then y_t converges in ℒ^2. m_t converges in ℒ^2. We first show that m_t is a martingale. It is clearly an adpated process. Next, consider 𝔼_t-1 [m_t - m_t-1] = p⃗·𝔼_t-1[ γ⃗^T_t∘ L_t-1 - 𝔼_t-1[γ⃗^T_t∘ L_t-1]]x⃗_t-1 +p⃗·𝔼_t-1[γ⃗^T_t∘w⃗_t] . The first term is clearly 0. We next note that: 𝔼_t-1[( γ⃗^T_t∘w⃗_t)^i] = ∑_e ↣ i𝔼[ 1/g_t^iw̃_t^e S_t^e | ℱ_t-1] = ∑_e ↣ i𝔼[ 1/g_t^i S_t^e 𝔼 [W̃_t^e | σ(ℱ_t-1, σ(ψ_t))]| ℱ_t-1] = 0. Lastly, note that m_t = a_t-a_0-s_t, so that m_t_ℒ^2≤a_t_ℒ^2+a_0_ℒ^2+s_t_ℒ^2. a_0 is a constant, a_t is a.s. bounded by virtue of 0 ≤ x_t ≤ 1, and s_t_ℒ^2 is bounded since s_t converges in ℒ^2. Thus m_t is bounded in ℒ^2, proving the theorem. § DECAY OF DISAGREEMENT The goal of this section is to show that the component of x⃗_t corresponding to any differing opinions converges to 0. Let z⃗_t := x⃗_t - a_t 1⃗ represent this component of the opinion vector. We would like to show that 𝔼[z⃗_t^2] → 0 §.§ Preliminary Discussion We develop our approach to a proof as follows. With Q := diag(0,1,...,1), it's clear that z⃗_t = PQP^-1x⃗_t, where P is the matrix of eigenvectors of L. Further, using the sum-product solution of the SHE from Proposition <ref>, we have that z⃗_t = ∑_j=0^t PQP^-1 [Π_k=j^t-1Λ_k ] (γ⃗_j^T ∘w⃗_j). The intuition for why z⃗_t is small is as follows: at each past timestep j ≤ t, a random 'blip' γ⃗_j^T ∘w⃗_j was introduced. In subsequent time steps k ≥ j, this blip was smoothed by repeated application of the Λ_k matrices. Now, as argued in the previous section (see Lemma <ref>), 𝔼[Λ_k] ≈ A_k. Using PQP^-1 to project out the Perron-Frobenius eigenvalue 1 of A_k (corresponding to consensus), we get eigenvalues whose products decay sufficiently rapidly. So, sufficiently old blips are dampened by products of small eigenvalues with many factors, while newer blips will be small because the vector norm of γ⃗_j is expected to decrease as j increases. An issue with the above heuristic, however, is that random draws of γ⃗_k+1^T ∘ L_k are not close to 1/kL (even though they approximately agree in expectation). This is circumvented by noting that the γ⃗^T_k+1∘ L_k are Cesàro-summable with limit proportional to L: we expand the product Π_k=j^t-1Λ_k = Π_k=j^t-1 (I + γ⃗_k+1^T ∘ L_k), show that the leading order terms (i.e. those linear in the dampened diffusion matrix) are proportional to L due to a law of large numbers effect, and show that the lower order terms decay sufficiently rapidly because they have many factors of γ⃗. More precisely: we group the t-j factors in the product Π_k=j^t-1Λ_k into subgroups of size τ := ⌈ t^1/4⌉. This τ is large enough for the law of large numbers to kick in (allowing us to replace the group's average of the γ⃗_k+1^T ∘ L_k with a matrix proportional to L), but small enough so that there are enough factors of L for the decay of the product of the non-dominant eigenvalues to be severe. 
Note that j needs to be sufficiently small so that we have enough factors of Λ to work with. With this in mind, we will separate the sum defining z_t into j ≤ j_0 and j > j_0 (for a value of j_0 to be specified later). The j ≤ j_0 sum witnesses PQP^-1 [Π_k=j^t-1Λ_k ] to have sufficiently small operator norm, while the j > j_0 sum is small because we expect γ⃗_j to be small at such late values of j. This heuristic is illustrated in Figure 2. For fixed t, define r to be the remainder of t divided by τ, define j_0 := r + τ^2, and let H_k,t represent the aggregate effects of the Λ factors from the τ-window indexed by k. That is, for 1 ≤ k ≤t-r/τ: H_k,t = [Π_j=r+(k-1)τ+1^r+kτΛ_j ] so that, for sufficiently large t and j ≤ j_0, Π_k = j^t Λ_k = [Π_k = τ + 1^t-r/τ H_k,t] [Π_k = j^j_0Λ_k ] §.§ Good and Bad Events The above intuition only holds on 'good events' where the long-term randomness of the ψ variables is close to expectation. In particular, we use this assumption when we assume γ⃗_j to be small for large j, and that the Cesàro mean of γ⃗_k+1^T ∘ L_k is roughly proportional to L. For the rest of the paper we fix ϵ≪1/2, and for τ + 1 ≤ k ≤t-r/τ, we define these good events as follows: A_k,t = { |g_s^i - g_0^i - d_i/Ek τ | ≤ (k τ)^1/2 + ϵ : ∀ i ∈𝒱, ∀ s ∈{(k-1)τ + r + 1,..., k τ+r}} B_k,t = {|(∑_s = (k-1) τ+r+1^k τ+r S^e_s ) - 1/Eτ| ≤τ^1/2 + ϵ : ∀ e ∈ℰ} E_t = ∩_k = τ + 1^t-r/τ (A_k,t∩ B_k,t). A_k,t corresponds to the event that, for all s in the τ-window indexed by k, g_s - g_0 is close to the expectation of g at the point k τ (which lies in the window). The event B_k,t represents that, within the τ window indexed by k, the amount of conversations each edge hosts is close to its expectation. E_t is the intersection of the A and B events for all windows τ + 1 ≤ k ≤t-r/τ. We first establish that the union of the bad events have exponentially small probability in t. For ϵ < 1/2, there exist positive constants c_1 and c_2 such that, for sufficiently large t, ℙ(E_t^c) ≤ c_1 exp(-c_2 τ^2 ϵ). ℙ(E_t^c) = ℙ((∩_k = k_min^k_max(A_k,t∩ B_k,t))^c) = ℙ((∪_k = k_min^k_max(A^c_k,t∪ B^c_k,t))) ≤∑_k=k_min^k_max( ℙ(A^c_k,t)+ℙ(B^c_k,t)), where we've invoked a union bound in the last line. Now, for all t≥ 0, k ≥ 1: ℙ(B_k,t^c) ≤ 2|ℰ| exp(-2 τ^2 ϵ). This follows directly from Lemma <ref>, with union bound. Similarly, for t sufficiently large and k ≥τ: ℙ(A^c_k,t) ≤ 4 |𝒱| exp(-1/4 (k τ)^2 ϵ) ≤ 4 |𝒱| exp(-1/4τ^4 ϵ). The proof of this claim is as follows: Note that A_k,t = { g_s_0^i - g_0^i - d_i/ℰk τ≥ -(k τ)^1/2 + ϵ : ∀ i ∈𝒱}∩{ g_s_1^i - g_0^i - d_i/ℰk τ≤ (k τ)^1/2 + ϵ : ∀ i ∈𝒱} where s_0 and s_1 represent the endpoints for a particular τ-window: s_0 = (k-1)τ + r + 1 and s_1 = kτ + r. Now, we have that ℙ({ g_s_0^i - g_0^i - d_i/ℰk τ≥ -(k τ)^1/2 + ϵ : ∀ i ∈𝒱}^c) ≤ |𝒱|ℙ({ g_s_0^i - g_0^i - d_i/ℰk τ≤ -(k τ)^1/2 + ϵ}) =|𝒱| ℙ({ g_s_0^i - g_0^i - d_i/ℰs_0 ≤ -(k τ)^1/2 + ϵ + d_i/ℰ(τ-r-1)}) ) ≤|𝒱| ℙ({ g_s_0^i - g_0^i - d_i/ℰs_0 ≤ -1/2(k τ)^1/2 + ϵ}) ≤ 2 |𝒱| exp(-1/4 (k τ)^2 ϵ) Similarly, for the second event, ℙ({ g_s_1^i - g_0^i - d_i/Ek τ≤ (k τ)^1/2 + ϵ : ∀ i ∈𝒱}^c) ≤ | 𝒱|ℙ({ g_s_1^i - g_0^i - d_i/ℰk τ≥ (k τ)^1/2 + ϵ}) =|𝒱| ℙ({ g_s_1^i - g_0^i - d_i/ℰs_1 ≥ (k τ)^1/2 + ϵ - d_i/ℰr}) ) ≤ |𝒱| ℙ({ g_s_1^i - g_0^i - d_i/ℰs_1 ≥1/2 (k τ)^1/2 + ϵ}) ) ≤ 2 |𝒱| exp(-1/4 (k τ)^2 ϵ). This concludes the proof of the above claim. 
We now finish by noting that ( ℙ(A^c_k,t)+ℙ(B^c_k,t)) ≤ 4(|𝒱|+|ℰ|)exp(-1/4τ^2 ϵ), so that, for sufficiently large t: ℙ(E_t^c) ≤∑_k=τ+1^t-r/τ( ℙ(A^c_k,t)+ℙ(B^c_k,t)) ≤ 4(|𝒱|+|ℰ|)t/τexp(-1/4τ^2 ϵ) ≤ 4(|𝒱|+|ℰ|) exp(-1/8τ^2 ϵ) §.§ Law of Large Numbers for Iterated Diffusion We now show that, on good events, H_k,t (representing the time-evolution over the τ-window indexed by k) window is close to A_k. We begin by analyzing the leading-order terms in the product. The following lemma shows that, on good events, the dampened Laplacian matrices are Cesàro-summable, with average close to the influence matrix. Fix ϵ≪1/2. There exists a constant c such that, for t sufficiently large, k ≥τ and on A_k,t∩ B_k,t: (∑_j = (k-1)τ+z+1^k τ+zγ⃗_j+1^T ∘ L_j) - 1/k L ≤c/k τ^1/2-ϵ Fix vertices i ∼ℓ, and consider outcomes on A_k,t∩ B_k,t only. In the below, the constant c may change from line to line, but will never depend on t or k. ( ∑_j = (k-1)τ+z+1^k τ+zγ_j+1^T ∘ L_j)^iℓ = ∑_j = (k-1)τ+r+1^k τ+r1/g_j+1^i S_j+1^i ℓg_j^ℓ/g_j^i + g_j^ℓ ≤1/d_i/|ℰ|k τ - (kτ)^1/2 + ϵ+g_0^id_ℓ/|ℰ|k τ + (kτ)^1/2 + ϵ+g_0^ℓ/d_ℓ+d_i/|ℰ|k τ - 2(kτ)^1/2 + ϵ+g_0^i+g_0^ℓ∑_j = (k-1)τ+r+1^k τ+r S_j+1^i ℓ ≤1/d_i/|ℰ|k τ - (kτ)^1/2 + ϵ+g_0^id_ℓ/|ℰ|k τ + (kτ)^1/2 + ϵ+g_0^ℓ/d_ℓ+d_i/|ℰ|k τ - 2(kτ)^1/2 + ϵ+g_0^i+g_0^ℓ(τ/|ℰ| + τ^1/2+ϵ +1) ≤|ℰ|/k τ (L^i ℓ + c/(k τ)^1/2-ϵ)(τ/|ℰ| + τ^1/2+ϵ + 1 ) = (L^i ℓ + c/(k τ)^1/2-ϵ)(1/k + c/kτ^1/2-ϵ) ≤1/kL^i ℓ + c/k τ^1/2-ϵ. The opposite-direction inequality can be proven similarly, giving that |(∑_j = (k-1)τ+r+1^k τ+rγ⃗_j+1^T ∘ L_j)^iℓ - 1/kL^i ℓ| ≤c/k τ^1/2-ϵ. From this the lemma easily follows. We now use the above lemma to show that the H product matrix over the k^th τ-window is indeed close to A_k (sub-leading order terms included). Define the difference Θ_k,t := H_k,t - A_k Fix ϵ≪1/2. There exists a constant c such that, for sufficiently large t, k ≥τ and on A_k,t∩ B_k,t: Θ_k,t≤c/kτ^1/2-ϵ The matrix H_k,t = Π_j=(k-1)τ + r + 1^k τ + r (I + γ⃗^T_j+1∘ L_j) has τ factors in the product. It can be expanded as a sum: H_k,t = ∑_n=0^τ h_k,t,n where h_k,t,n collects the terms in the expansion with exactly n factors of the dampened laplacian matrix γ⃗^T ∘ L. Now, on good events, and for all s_0 ≤ j ≤ s_1 and for all i ∈𝒱, we have that g_s^i ≳ k τ. It's also to see that, almost surely, we have that L_j≲ 1. Then, using the submultiplicativity of the operator norm with respect to the Hadamard product, we have that, on good events, γ⃗_j+1^T ∘ L_j≤c/k τ for some constant c. So, collecting all terms with n such matrices as factors in the binomial expansion (there are τn of them), we have that h_k,t,n≤τn(c/k τ)^n ≤1/n!(c/k)^n So, using the previous LEMMA (which says that h_k,t,0+h_k,t,1 - A_k≤c'/k τ^1/2-ϵ for some constant c': H_k,t- A_k ≤c'/k τ^1/2-ϵ + ∑_n=2^τ1/n!(c/k)^n ≤c'/k τ^1/2-ϵ + (c/k)^2∑_n=0^τ(c/k)^n ≤c'/k τ^1/2-ϵ + (c/k)^2/1-c/k ≤c'/k τ^1/2-ϵ + 2c^2/k τ^1/2-ϵ = c' + 2c^2/k τ^1/2-ϵ, where we've used that τ is large, for example, enough to have c/τ≤1/2, and that k ≥τ. §.§ Decay of Operator Norm Now, recall that Π_k = j^t Λ_k = [Π_k = τ + 1^t-r/τ H_k,t] [Π_k = j^ j_0Λ_k ]. The late (k > j_0) Λ_k matrices in the product are encapsulated in the H matrices, while we 'chop off' the early (k ≤ j_0) Λ_k matrices. We remove them because t-j may not be divisible by τ (and thus that we cannot successfully partition all Λ into groups of equal size). Note, however, that in the above decomposition, we chop off more than the remainder of t-j divided by τ; this is for later convenience. 
The next Lemma guarantees that these extra, 'loose' factors of Λ have bounded norm. We present a straightforward proof which makes use of some simple matrix calculations. It can be noted, however, that this lemma can also be proven by noting that a discrete dynamical system driven by the Λ matrices (with no random blips W) represent a version of the heat equation where the only randomness is in the edge selection, rather than in the outcome in the conversation, and long-term solutions must be bounded. For all 0 ≤ j ≤ t, Π_k = j^tΛ_k ≤√(|𝒱|) almost surely. We first aim to prove that Λ_k is nonnegative. The offdiagonal elements are obviously nonnegative, so we focus only on the diagonal. Let k be arbitrary. For any i, Λ_k^ii = 1 -1/g_k+1^i∑_j ∼ iS_k+1^ijg_k^j/g_k^i + g_k^j. Now, if S_k+1^ij = 0 for all j ∼ i, then it's clear that Λ_k^ii = 1. Otherwise, let S_k+1^ij = 1 for some j ∼ i. Note that this will be the only nonzero term in the sum. In this case, we are guaranteed that g_k+1^i = g_k^i + 1 > 1. So: Λ_k^ii = 1 -1/g_k+1^ig_k^j/g_k^i + g_k^j > 1 -g_k^j/g_k^i + g_k^j > 0. This gives that Λ_k is nonnegative. Fix j,t as above, arbitrary. Note that A_j,t := Π_k = j^tΛ_k is row stochastic, as it is the product of row stochastic matrices. It's also nonnegative. So, let v⃗ be an aribtrary unit vector. Note that for all j, |v^j| ≤ 1 so: |(A_j,tv)_i| = |∑_k (A_j,t)^ik v^k| ≤∑_j (A_j,t)^ik = 1. where we've used that A is nonnegative. Thus for arbitrary unit vector, A_j,tv^2 = ∑_k ((A_j,tv)^k)^2 ≤∑_k 1= |𝒱|. This proves that, for arbitrary 0 ≤ j < t, almost surely, A_j,t≤√(|𝒱|). Before tackling the main lemma of this section (Lemma <ref>), we note the useful fact that for a square matrix A, the ℓ_2 operator norm is equivalent to the max of the vector norms of the rows. Let A be an n×n matrix, and let A^i represent the ith row of A. We then have the following two inequalities, for arbitrary 1 ≤ i ≤ n: A^i ≤A A ≤√(n)max_jA^j Note that for 1× n matrices, the matrix norm coincides with the vector norm. Let 1≤ i ≤ n be arbitrary. We prove the first inequality. If A^i = 0⃗, we are done. Otherwise, define the vector x⃗ to have components x^j := A^ij/A^i. Then we have A≥Ax⃗≥A^i, where the second inequality follows because the i^th entry of A x⃗ is equal to A^i · x = A^i. Next we prove the second inequality. Let x⃗ be an arbitrary unit vector. We have Ax⃗^2 = ∑_j (A^j ·x⃗)^2 ≤∑_jA^j^2 ≤ n max_j A^j^2. Taking the square root of both sides, we have the desired inequality. We now add the main ingredient in the proof of Lemma 5.1, which says that for sufficiently small j and on good events, the product of diffusion matrices (with consensus projected out) decays with t. There exists α > 0 such that, on E_t, and for all j ≤ j_0 := r + τ^2, Q P^-1[Π_k=j^tΛ_k] P≲1/t^α, Define k_min := τ + 1, k_max = t-r/τ, Θ'_k,t := P^-1Θ_k,t P, and D_k = P^-1A_k P (the diagonal matrix consisting of eigenvalues of A_k). Now, [Π_k = k_min^k_max H_k,t] [Π_k = j^ j_0Λ_k ]. Q P^-1[Π_k=j^tΛ_k] P ≤Q P^-1 [Π_k=j_0+1^tΛ_k]P P^-1 [Π_k=j^j_0Λ_k]P ≲Q P^-1 [Π_k=j_0+1^tΛ_k]P = Q P^-1 [Π_k=k_min^k_maxH_k,t]P = Q P^-1 [Π_k=k_min^k_max(A_k + Θ_k,t)]P = Q [Π_k=k_min^k_max(D_k + Θ'_k,t)] ≲max_i ≠ 1 [Π_k=k_min^k_max(D_k + Θ'_k,t)]^i = max_i ≠ 1R^i_k_max,k_min,t where we've used Lemma <ref> in the second to last line and we've defined R_k_1,k_2,t := Π_k=k_1^k_2(D_k + Θ'_k,t). 
We have, for a constant c, for ϵ < 1/2, and for all i ≠ 1 (dropping primes on Theta for ease of notation), R_k_min,k_max,t^i = Θ_k_max,t^i R_k_min,k_max-1,t + |λ^(k_max)|R_k_min,k_max-1,t^i R_k_min,k_max,t^i ≤c/k_maxτ^1/2-ϵ + (1 - λ/k_max)R_k_min,k_max-1,t^i, where λ^(k)≠ 1 is an eigenvalue of A_k, and we've used Lemma <ref>, Lemma <ref>, and Lemma <ref>. By iterating the above, we obtain R_k_min,k_max,t^i ≤Π_k=k_min^k_max (1-λ/k) +c/τ^1/2-ϵ∑_j=k_min^k_max1/jΠ_k = j+1^k_max (1-λ/k) ≤(k_min/k_max)^λ + c/τ^1/2-ϵ∑_j=k_min^k_max1/j(j+1/k_max)^λ ≤(k_min/k_max)^λ + c/k_max^λτ^1/2-ϵ∑_j=k_min^k_max j^λ-1 ≤(k_min/k_max)^λ+ c/τ^1/2-ϵ ≲1/t^λ/2 + 1/t^1/8-ϵ/4, where in the second and fourth inequalities, we used Lemma <ref> and Lemma <ref>, respectively, and the value of c can change from line to line. Setting ϵ = 1/4 (for example) concludes the proof of the lemma. Our final ingredient is the summability of 𝔼[γ⃗_t^2]. The sum ∑_t=0^∞𝔼[γ⃗_t^2] converges. For sufficiently large t: 𝔼[γ⃗_t^2] = 𝔼[γ⃗_t^2 |A_t,t-r/τ ]ℙ(A_t,t-r/τ) + 𝔼[γ⃗_t^2 |A_tt-r/τ^c ]ℙ(A_t,t-r/τ^c) ≲𝔼[γ⃗_t^2 |A_t,t-r/τ ]+ℙ(A_t,t-r/τ^c) ≲1/t^2 + exp(-1/4τ^4 ϵ) ≲1/t^2 §.§ Proof of Lemma 4.1 In the proof of Lemma 4.1, we make use of the following simple comparison between a nonnegative random variable's conditional and total expectation: Let X be an almost-surely nonnegative random variable with 𝔼[X] < ∞, and let E be an event with ℙ(E) ≥1/2. Then 𝔼(X|E) ≤ 2 𝔼[X] 𝔼[X|E] = 1/ℙ[E](𝔼[X] - 𝔼[X|E^c]ℙ(E^c)) ≤ 2 𝔼[X]. We aim to show that 𝔼[z⃗_t+1^2] → 0, where z⃗_t+1 = ∑_j = 0^t+1 P Q P^-1[Π_k=j^tΛ_k ] γ⃗^T_j ∘w⃗_j. Expanding the square: z⃗_t+1^2 = ∑_j = 0^t+1P Q P^-1[Π_k=j^tΛ_k ] γ⃗^T_j ∘w⃗_j^2 + 2 ∑_0 ≤ j_1 < j_2 ≤ t+1⟨ P Q P^-1[Π_k_1=j_1^tΛ_k_1] γ⃗^T_j_1∘w⃗_j_1 , P Q P^-1[Π_k_2=j_2^tΛ_j_2] γ⃗^T_j_2∘w⃗_j_2⟩ We now take the expectation of the cross-terms. For 0 ≤ j_1 < j_2 ≤ t+1: 𝔼[ ⟨ P Q P^-1[Π_k_1=j_1^tΛ_k_1] γ⃗^T_j_1∘w⃗_j_1 , P Q P^-1[Π_k_2=j_2^tΛ_j_2] γ⃗^T_j_2∘w⃗_j_2⟩] =𝔼[𝔼[⟨ P Q P^-1[Π_k_1=j_1^tΛ_k_1] γ⃗^T_j_1∘w⃗_j_1 , P Q P^-1[Π_k_2=j_2^tΛ_j_2] γ⃗^T_j_2∘w⃗_j_2⟩|σ(ℱ_j_2-1,𝒢_t+1)] ] =𝔼[( P Q P^-1[Π_k_1=j_1^tΛ_k_1] γ⃗^T_j_1∘w⃗_j_1)^TP Q P^-1[Π_k_2=j_2^tΛ_j_2] ( γ⃗^T_j_2∘𝔼[w⃗_j_2|σ(ℱ_j_2-1,𝒢_t+1) ] = 0, where we've used independence of σ(ψ_t) and σ(ℱ_t-1,σ(Ω_t)) as well as the fact that 𝔼_t-1[w⃗_t]=0. We now deal with the expectation of the 'diagonal' elements: 𝔼[z⃗_t^2] = ∑_j = 0^t+1P Q P^-1[Π_k=j^tΛ_k ] γ⃗^T_j ∘w⃗_j^2 ≲∑_j = 0^t+1𝔼 [P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 ] = ∑_j=0^ r+τ^2 𝔼[ P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 ] + ∑_j = r+τ^2 + 1^t+1𝔼[P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 ] We show that each of the above two terms goes to zero. Let α > 0 be the number guaranteed by Lemma <ref>. The first term: ∑_j = 0^r+τ^2𝔼[P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 ] ≲∑_j = 0^r+τ^2𝔼[P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 |E_t] + 𝔼[ P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 |E_t^c] ℙ(E_t^c) ≲∑_j = 0^r+τ^21/t^α𝔼[ γ⃗_j^2 |E_t] +ℙ(E_t^c) ≲∑_j = 0^r+τ^21/t^α𝔼[ γ⃗_j^2 ] +ℙ(E_t^c) ≲1/t^α + (r+τ^2) exp(-c τ^2ϵ) → 0, where in the second inequality we used Lemmas <ref> and <ref>, in the third inequality we used Lemma <ref>, and in the fourth inequality we used Lemmas <ref> and <ref>. And in the second term of the expansion of the diagonal sum: ∑_j = r + τ^2 + 1^t+1𝔼[P Q P^-1[Π_k=j^tΛ_k ]^2 γ⃗_j^2 ] ≲∑_j = r + τ^2 + 1^t+1𝔼[ γ⃗_j^2 ], where we've used Lemma <ref>. The right-hand side goes to 0 By Lemma <ref>, since the lower bound of the sum goes to ∞. § PROOF OF THEOREM Let a_∞ be the limit of a_t = p⃗·x⃗_t, established in Lemma <ref>. 
Using the triangle inequality, we have: 𝔼[x⃗_t - a_∞1⃗^2] = 𝔼[x⃗_t -a_t 1⃗ + a_t 1⃗ - a_∞1⃗^2] ≤𝔼[ a_t 1⃗ - a_∞1⃗^2] +𝔼[x⃗_t -a_t 1⃗^2] + 2𝔼[x⃗_t -a_t 1⃗ a_t 1⃗ - a_∞1⃗] . We now show that each term goes to 0. In the first term, we have that a_t 1⃗ - a_∞1⃗^2 = (a_t - a_∞)^2 1⃗^2, the expectation of which goes to 0 by virtue of Lemma <ref>. Similarly, the second term goes to 0 due to Lemma <ref>. To see that the last term goes to 0, note that x⃗_t -a_t 1⃗ is almost surely bounded, so that 𝔼[x⃗_t -a_t 1⃗ a_t 1⃗ - a_∞1⃗] ≲𝔼[ a_t 1⃗ - a_∞1⃗] = 1⃗𝔼[| a_t - a_∞|] ≲ a_t - a_∞_ℒ^1. Finally, since a_t → a_∞ in ℒ^2, convergence also holds in ℒ^1, so that this term goes to 0 as well. § FUTURE WORK Future work might consider the rate of convergence, for example of the disagreement component z⃗_t to 0. Simulations inspire the following conjecture: 𝔼[z⃗_t^2] ≲1/t^2 λ λ≤1/2 1/t λ > 1/2 . In the case of parallel updates (i.e. all edges converse with all of their neighbors simultaneously in each time step), the above conjecture can be proven readily using the techniques from Lemma <ref>. With the appropriate choice of τ (t), bounds on the decay rate can be proven for the present case (though these bounds seem loser than what simulation demonstrates). This discussion has been omitted because the bounds do not seem empirically tight, and the choice of τ (t) = ⌈ t^1/4⌉ is convenient. Figure 3 shows the decay of disagreement, averaged over 1000 runs for the interval graph I_5 = (𝒱,ℰ), where 𝒱 = {1,2,3,4,5}, and (i,j) ∈ℰ if and only if |i-j| = 1. § ACKNOWLEDGMENT The author thanks Lionel Levine for his guidance throughout this research, particularly for his advice concerning the law of large numbers approach in Section 4. 4 r0 Deffuant, Guillaume, et al. (2000). Mixing Beliefs among Interacting Agents. Advances in Complex Systems, vol. 03, no. 01n04, pp. 87–98. r4 Hegselmann, Rainer, and Ulrich Krause. (2005). Opinion Dynamics Driven by Various Ways of Averaging. Computational Economics, vol. 25, no. 4, pp. 381–405. r10 Degroot, Morris H. (1974). Reaching a Consensus. Journal of the American Statistical Association, vol. 69, no. 345, 1974, pp. 118–121. r1 Banerjee, Abhijit, et al. (2021). Naïve Learning with Uninformed Agents. American Economic Review, vol. 111, no. 11, pp. 3540–3574. r2 Ding, Zhaogang, et al. (2019). Consensus Reaching in Social Network DeGroot Model: The Roles of the Self-Confidence and Node Degree. Information Sciences, vol. 486, pp. 62–72. r3 Gao, Yue, et al. (2020). The Dynamics of Two-State Public Opinion Propagation on Signed Networks. Journal of Systems Science and Complexity, vol. 34, no. 1, pp. 251–264. r5Li, Yun, and Jiakun Wang. (2021). Cross-Network Propagation Model of Public Opinion Information and Its Control in Coupled Double-Layer Online Social Networks. Aslib Journal of Information Management, vol. 74, no. 2, pp. 354–376. r6 Martins, Andre C. (2008). Continuous Opinions and Discrete Actions in Opinion Dynamics Problems. International Journal of Modern Physics C, vol. 19, no. 04, pp. 617–624. r7 Mohammadinejad, Amir, et al. (2018). Opiu: Opinion Propagation in Online Social Networks Using Influential Users Impact. 2018 IEEE International Conference on Communications (ICC). r8 Prasetya, Hafizh A., and Tsuyoshi Murata. (2020). A Model of Opinion and Propagation Structure Polarization in Social Media. Computational Social Networks, vol. 7, no. 1. r9 Ureña, Raquel, et al. (2018). A New Influence Based Network for Opinion Propagation in Social Network Based Scenarios. 
Procedia Computer Science, vol. 139, pp. 329–337. r11 Bashari, Masoud, and Mohammad-R. Akbarzadeh-T. (2023). Theoretical Development of a Probabilistic Fuzzy Model for Opinion Formation in Social Networks. Fuzzy Sets and Systems, vol. 454, pp. 125–148. r13 Lemmens, Bas, and Roger D. Nussbaum. (2012). Nonlinear Perron-Frobenius Theory. Cambridge University Press. r14 Cairns, Hannah. (2021). Perron’s Theorem in an Hour Taylor and Francis Online. r15 Hogben, Leslie. (2016). Handbook of Linear Algebra. CRC Press/Taylor and Francis Group. r16 §5.6 Inequalities. Digital Library of Mathematical Functions, dlmf.nist.gov/5.6. r17 Alon, Noga, and Joel H. Spencer. (2016). The Probabilistic Method. Wiley. r18 Williams, David. (2020). Probability with Martingales. Cambridge University Press.
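As a supplement to the Future Work discussion above, here is a minimal simulation sketch of the disagreement decay on the interval graph I_5. It assumes a simplified randomized-gossip update (one uniformly random edge averages its endpoints per step) rather than the paper's exact noisy update rule, so it only illustrates how a decay curve like the one in Figure 3 can be estimated; all function names are ours.

```python
import numpy as np

# Minimal sketch: randomized gossip averaging on the interval graph I_5
# (path on 5 vertices). Each step, one uniformly random edge is chosen and
# its endpoints average their opinions. This is a simplified stand-in for
# the update rule analyzed above, used only to estimate the decay of the
# disagreement ||x_t - consensus||^2 over many runs.

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]  # I_5 with 0-indexed vertices

def run_once(T, rng):
    x = rng.uniform(0.0, 1.0, size=5)    # random initial opinions
    consensus = x.mean()                 # preserved by pairwise averaging
    out = np.empty(T)
    for t in range(T):
        i, j = EDGES[rng.integers(len(EDGES))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
        out[t] = np.sum((x - consensus) ** 2)
    return out

def mean_disagreement(T=2000, runs=1000, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([run_once(T, rng) for _ in range(runs)], axis=0)

if __name__ == "__main__":
    curve = mean_disagreement()
    for t in (10, 100, 1000, 1999):
        print(f"t={t:4d}  E[disagreement] ~ {curve[t]:.3e}")
```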
http://arxiv.org/abs/2307.02905v1
20230706103724
Decomposing the Origin of TeV-PeV Emission from the Galactic Plane: Implications of Multi-messenger Observations
[ "Ke Fang", "Kohta Murase" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA", "hep-ph" ]
0000-0002-5387-8138]Ke Fang Department of Physics, Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI, 53706 0000-0002-5358-5642]Kohta Murase Department of Physics; Department of Astronomy & Astrophysics; Center for Multimessenger Astrophysics, Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA Center for Gravitational Physics and Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, Kyoto 606-8502, Japan High-energy neutrino and γ-ray emission has been observed from the Galactic plane, which may come from individual sources and/or diffuse cosmic rays. We evaluate the contribution of these two components through the multi-messenger connection between neutrinos and γ rays in hadronic interactions. We derive maximum fluxes of neutrino emission from the Galactic plane using γ-ray catalogs, including 4FGL, HGPS, 3HWC, and 1LHAASO, and measurements of the Galactic diffuse emission by Tibet ASγ and LHAASO. We find that depending on model templates, the diffuse emission is brighter than the sum of resolved sources when excluding promising leptonic sources such as pulsars, pulsar wind nebulae, and TeV halos. Our result indicates that the Galactic neutrino emission observed by the IceCube Collaboration may be dominated by the Galactic diffuse emission or unresolved γ-ray sources. Future observations of neutrino telescopes and air-shower γ-ray experiments in the Southern hemisphere are needed to accurately disentangle the source and diffuse emission of the Milky Way. § INTRODUCTION High-energy neutrinos from the Galactic plane (GP) may come from two components of the Galaxy: the cosmic-ray sea and individual sources. The cosmic-ray sea is a smooth and steady distribution of cosmic rays that emerge from accelerators and propagate in the Galactic magnetic field. Protons and nuclei at TeV to PeV energies may be confined in the Galactic magnetic field for 0.1 to a few million years and lose their initial directions. They collide with gas in the interstellar medium (ISM) and produce charged and neutral pions, which decay into neutrinos and γ rays, respectively. These secondary particles form the Galactic diffuse emission (GDE). In addition to hadronic cosmic rays, a lower flux of cosmic-ray electrons may also up-scatter the interstellar radiation field and the cosmic microwave background (CMB) to γ rays. Above 10 TeV, electrons have a cooling time of t_e∼ 64 (E_e / 10 TeV)^-1 kyr due to the inverse Compton radiation, and propagate for a distance d ∼ (D t_e)^1/2 = 0.3 (E_e / 10 TeV)^-0.33 kpc, where D ≈ 3× 10^28 (R / 3 GV)^1/3 cm^2 s^-1 is the diffusion coefficient assuming Kolmogorov turbulence and R ≡ E/Ze is the rigidity of a particle with energy E and charge number Z. Therefore, electrons above tens of TeV cannot travel too far away from the sources where they were produced. GDE in γ rays has been measured by the Fermi Large Area Telescope (LAT) between 100 MeV and 1 TeV over the full sky <cit.>. Above 1 TeV, the GDE from several regions in the Northern sky has been measured by air shower γ-ray experiments, including ARGO-YBJ at 0.35-2 TeV <cit.>, Tibet ASγ Observatory at 100-1000 TeV <cit.>, HAWC Observatory at 0.3-100 TeV <cit.>, and the Large High Altitude Air Shower Observatory (LHAASO) at 10-1000 TeV <cit.>. 
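As a quick sanity check of the scalings quoted above, the short script below evaluates the cooling time t_e ≈ 64 (E_e/10 TeV)^-1 kyr together with the Kolmogorov diffusion coefficient D ≈ 3×10^28 (R/3 GV)^(1/3) cm² s^-1 and reproduces the ≈0.3 kpc propagation distance at 10 TeV. This is only an illustrative sketch; the unit constants and helper names are ours.

```python
# Sketch: evaluate the electron cooling-time and propagation-distance scalings
# quoted above, reproducing the ~0.3 kpc estimate at E_e = 10 TeV.
KPC_CM = 3.086e21   # cm per kpc
KYR_S = 3.156e10    # seconds per kyr

def cooling_time_s(E_e_TeV):
    """t_e ~ 64 (E_e / 10 TeV)^-1 kyr (inverse-Compton dominated)."""
    return 64.0 * (E_e_TeV / 10.0) ** -1 * KYR_S

def diffusion_coeff_cm2_s(rigidity_GV):
    """D ~ 3e28 (R / 3 GV)^(1/3) cm^2 s^-1, Kolmogorov scaling."""
    return 3e28 * (rigidity_GV / 3.0) ** (1.0 / 3.0)

def propagation_distance_kpc(E_e_TeV):
    # For electrons the rigidity in GV is numerically ~ the energy in GeV.
    D = diffusion_coeff_cm2_s(E_e_TeV * 1e3)
    return (D * cooling_time_s(E_e_TeV)) ** 0.5 / KPC_CM

for E in (10.0, 100.0):
    print(f"E_e = {E:5.0f} TeV -> d ~ {propagation_distance_kpc(E):.2f} kpc")
```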
High-energy neutrinos and γ rays may also be produced by individual sources harbored in the Milky Way. About two hundred Galactic γ-ray sources have been observed above 1 TeV [http://tevcat.uchicago.edu]. Which sources among them are hadronic emitters, and hence neutrino sources, remains a major question <cit.>. One of the challenges arises from the fact that the pion decay and inverse Compton radiation may yield similar spectra. Only a handful sources show promising features of hadronic γ-ray emission, such as the star formation region at the Galactic center <cit.> and the supernova remnant G106.3+2.7 <cit.>. To date, no Galactic neutrino sources have been identified. In addition to resolved sources, unresolved sources may also contribute to emission from the GP. These unresolved sources may be counted toward GDE in measurements despite that they do not have a diffuse nature. The luminosity function of TeV sources is poorly known due to the limited number of sources and the complications related to TeV catalog creations. Based on 32 sources with flux above 10% Crab from the H.E.S.S. Galactic plane survey (HGPS), the cumulative log N-log S distribution of integral flux above 1 TeV is derived to follow a power law with a slope of -1.3± 0.2 <cit.>. The distribution is flatter below 10% although the measurement is limited by the completeness of the sample. The detection of Galactic neutrinos has been anticipated for decades <cit.>. Whether the Galactic contribution dominates the full-sky neutrino flux was first debated at the time of IceCube's discovery of high-energy cosmic neutrinos <cit.>. Using the multi-messenger connection and diffuse TeV γ-ray data mainly from CASA-MIA and KASKADE, <cit.> showed that the all-sky neutrino flux mostly originates from extragalactic sources. <cit.> derived the upper limit on the Galactic neutrino flux based on the GP observation by Tibet ASγ, and argued that the 100 TeV emission may come from either the GDE or the sum of discrete sources. Lately, the IceCube Collaboration reported evidence for neutrinos from the GP <cit.>. The observed flux level is consistent with the prediction of <cit.>. An important task in understanding the GP is to disentangle the contribution of individual sources from the truly diffuse emission. This is crucial to understanding the PeVatrons in the Milky Way and the leptonic contribution to the TeV-PeV γ-ray sky. While detecting individual Galactic neutrino sources would be the ultimate solution to this problem, in this paper we take a first step in understanding the source contribution to the neutrino GDE via a multi-messenger approach. Specifically, we constrain the neutrino flux of individual sources using γ-ray catalogs and compare it to the GDE measured by IceCube or derived from γ-ray observations. Unlike extragalactic neutrino sources, Galactic neutrino sources are likely optically thin to TeV γ-rays given their relatively low infrared fluxes. γ-ray emission can be made by either electrons or protons and nuclei whereas high-energy neutrinos can only come from the latter. The γ-ray flux of Galactic sources therefore provide an upper limit on the neutrino flux from individual sources. We describe the TeV-PeV γ-ray observations of the GP in Section <ref>, including the source catalogs and GDE observations in Section <ref> and <ref>, respectively. 
By converting the differential γ-ray flux to neutrino flux assuming that they are simultaneously produced by protons and nuclei, we constrain the high-energy neutrino emission by sources and compare that to the GDE in Section <ref>. We conclude and discuss the caveats of the work in Section <ref>. § TEV-PEV GAMMA-RAY OBSERVATIONS In this section, we describe the γ-ray catalogs and GDE observations to be used for the deviation of high-energy neutrino fluxes. Figure <ref> summarizes the sky regions observed by various experiments. We overlay the neutral hydrogen (HI) emission from the HI 4-PI Survey <cit.>, since the pionic GDE is dominated by cosmic-ray interaction with the HI gas. §.§ Source Catalogs We summarize the sky regions and energy ranges of various γ-ray source catalogs in Table <ref> in Appendix <ref>. Below we describe the usage of each of them. HGPS: 78 sources are reported by the H.E.S.S. Galactic plane survey (HGPS), which is a decade-long observation of the H.E.S.S. telescope with nearly 2700 h of data covering the inner GP <cit.>. One source, HESS J1943+213, is likely an extragalactic object and is removed from our analysis. For each of the remaining sources, we use the flux at the pivot energy and spectral index reported by the catalog found by assuming a power-law spectral model to derive the differential flux between 1 and 30 TeV. The right end of the energy range is chosen based on the lower limit of the maximum energy of the sources. The 77 Galactic sources include 12 pulsar wind nebulae (PWN), 8 shell-type supernova remnant (SNR), 8 composite SNR (where the emission can come from either the shell or the interior nebula), 3 γ-ray binaries, and 47 sources without firmly identified associations, including 35 with possible associations in source catalogs and 11 with no associations. We account for a systematic uncertainty of 30% for the flux. A systematic uncertainty for the spectral index, which is estimated to be an absolute value of 0.2, is not included. 3HWC: 65 sources are reported by the Third HAWC Catalog (3HWC) based on blind searches across HAWC's FOV using 1523 days of data <cit.>. Two of them, Mrk 421 and Mrk 501, are extragalactic and removed for the list, yielding a total of 63 Galactic sources. Based on the spectral index and differential flux at a pivot energy of 7 TeV, we calculate the flux of the sources in 3HWC between 1 and 49 TeV. This energy range is within an energy range that contributes to 75% of the observed significance for most sources. The differential flux of 3HWC is obtained by assuming a pointlike morphology. An extended source may be associated with multiple point sources. The inaccuracy in the source extension barely impact this work since the sum of the flux of point sources reasonably estimates the flux of an extended source. Our calculation includes the systematic uncertainties of the spectral models of the 3HWC sources, which are at the level of 30%. 1LHAASO: 90 sources with extension <2^∘ are reported by the first LHAASO catalog (1LHAASO), including 43 sources that are detected at >4 σ above 100 TeV <cit.>. We exclude the following sources that are likely of extragalactic origin: 1LHAASO J1104+3810, 1LHAASO J1219+2915, 1LHAASO J1653+3943, 1LHAASO J1727+5016, and 1LHAASO J2346+5138. For the remaining sources that are detected, we compute the spectrum following a power law dN/dE = N_0 (E/E_0)^-Γ between E_ min and E_ max, with E_0 = 3 TeV, E_ min=1 TeV, E_ max = 25 TeV for WCDA and E_0 = 50 TeV, E_ min=25 TeV, E_ max = 200 TeV for KM2A. 
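For illustration, the snippet below shows how a differential power-law spectrum dN/dE = N_0 (E/E_0)^-Γ is evaluated and integrated in energy over [E_min, E_max] following the WCDA and KM2A conventions quoted above. The normalizations and spectral indices used here are placeholders, not catalog values.

```python
# Sketch: band-integrated energy flux of a power law dN/dE = N0 * (E/E0)**(-gamma)
# over [Emin, Emax] (closed form, valid for gamma != 2). Pivot energies and energy
# ranges follow the 1LHAASO convention quoted above; N0 and gamma are placeholders.

def band_energy_flux(N0, E0, gamma, Emin, Emax):
    """Integral of E * dN/dE over [Emin, Emax], in TeV cm^-2 s^-1."""
    return N0 * E0 ** gamma * (Emax ** (2 - gamma) - Emin ** (2 - gamma)) / (2 - gamma)

wcda = band_energy_flux(N0=1e-13, E0=3.0, gamma=2.5, Emin=1.0, Emax=25.0)
km2a = band_energy_flux(N0=1e-16, E0=50.0, gamma=3.0, Emin=25.0, Emax=200.0)
print(f"WCDA band energy flux ~ {wcda:.2e} TeV cm^-2 s^-1")
print(f"KM2A band energy flux ~ {km2a:.2e} TeV cm^-2 s^-1")
```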
We include systematic uncertainty of 7% on KM2A flux and ^+8%_-24% on WCDA flux. An absolute uncertanity of 0.02 on spectral index of KM2A measurement is not included. Sources that only have upper limits on flux are not included. 4FGL: Between 50 MeV and 1 TeV, the fourth Fermi Large Area Telescope catalog (4FGL) reports 6659 sources based 12 years of Fermi-LAT data <cit.>. We count both “identified" and “associated" source classes, yielding a total of 539 Galactic sources that can be decomposed into the following groups with corresponding designators: 1) 257 pulsars, including 137 young (`PSR' and `psr') and 120 millisecond pulsars (`MSP'), 2) 20 PWNe (`PWN' and `pwn'), 3) 43 SNRs (`SNR' and `snr') 4) composite SNRs (`spp'), 5) 5 star-forming regions (`SNR' and `sfr'), 6) 26 binaries (`HMB', `hmb', `LMB', `lmb', `BIN', `bin'), 7) 4 novae (`NOV'), 8) 35 globular clusters (`glc'), and 9) Galactic center (`GC'). For each source, we evaluate the differential flux between 0.1 and 1 TeV based on the parameters for the reported , which can be a power law, log-parabola, or power law with a super exponential cutoff. The errors of the fluxes include systematic uncertainties associated with the detector effective area and Galactic interstellar emission model. §.§ Galactic Diffuse Emission The GDE measurements by various air shower γ-ray observatories are summarized in Table <ref> and described below. ARGO-YBJ measured the GDE by subtracting a background map from the event map <cit.>. Known sources from the TeVCat were excluded using a 4^∘× 4^∘ / cos(b) mask, where b is the latitude. Faint sources were not masked but expected to contribute to 2.5%. Tibet ASγ detected the GDE at 5.9 σ by comparing the number of γ-ray-like events from the on region, defined as |b|<10^∘, and the off region, |b|>20^∘. By identifying γ-ray-like events within 0.5^∘ of TeVCat sources, <cit.> concludes that the fractional source contribution to the diffuse component within |b|<5^∘ is 13% above 100 TeV. The events above 398 TeV are likely of a diffuse origin since they neither have accompanying signal at lower energies nor come from directions within ∼ 0.5^∘ of known sources. The error bars in the top panels of Figure <ref> correspond to 1 σ statistical error. In addition, a systematic error of 30% is expected due to the uncertainty of absolute energy scale <cit.>. LHAASO detected the GDE from the inner and outer GP at 29.1 σ and 12.7 σ <cit.>. Sources detected by KM2A and additional known sources in TeVCat are masked with a Gaussian width that is 2.5 times of the quadratic sum of the point spread function (PSF) of the detector and the source extension. The contribution from remaining resolved sources is estimated to be <10%. The GDE flux of the inner Galaxy measured by LHAASO is lower than that of Tibet ASγ as a result of their more and larger source masks. In addition, the innermost Galactic disk at 15^∘≲ l ≲ 90^∘ and |b|≲ 1.5^∘ is mostly masked in the study of <cit.>, which could have caused an underestimate of the average GDE in that region. <cit.> found that the flux of the GDE of the inner Galaxy (15^∘<l < 125^∘ and |b|≲ 5^∘) would increase by 61% when not apply any masking. Fermi-LAT: We use the Galactic interstellar emission model (GIEM) for the 4FGL catalog analysis <cit.> to evaluate the GDE flux [<https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html>]. 
We note that the GDE is contributed by both the interstellar emission and unresolved sources, though the fraction of the latter is at percentage level above 10 GeV <cit.>. The GIEM is a linear combination of emission components including the π^0 decay from hadronic cosmic rays interacting with HI gas and molecular hydrogen traced by the CO emission, as well as dark gas, inverse Compton on the interstellar radiation field, and large structures such as the Fermi Bubbles. The parameters of the model were obtained by fitting to the Pass 8 data. We approximate the model uncertainty with the systematic uncertainty of the Pass 8 data on the effective area [<https://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html>]. <cit.> compared γ-ray emission models to the Fermi-LAT data from the two sky regions observed by Tibet ASγ. They conclude that the total flux is dominated by the π^0 decay of the diffuse cosmic rays at 100-300 GeV, with <10% contributed by resolved and unresolved sources, inverse Compton and bremsstrahlung radiation from cosmic-ray electrons, and the isotropic γ-ray background. We therefore use the total flux of the Fermi-LAT data from <cit.> as an approximate of the GDE flux in these two regions. §.§ GDE vs Source Emission in the γ-Ray Sky Figure <ref> contrasts the intensites of the γ-ray emission by resolved sources and the GDE from three sky regions, from inner Galaxy to outer Galaxy: (1) Tibet region A, 25^∘ < l < 100^ ∘, |b| < 5^∘; (2) Tibet region B, 50^∘ < l < 200^ ∘, |b| < 5^∘; (3) LHAASO outer Galaxy, 125^∘ < l < 235^ ∘, |b| < 5^∘. The shaded bands correspond to the sum of sources in the corresponding sky regions. When summing the sources, we add up the flux linearly and the uncertainties in quadrature for error propagation. For the total flux computed using sources from HGPS, 3HWC, and 1LHAASO catalogs, systematic errors are added with the statistical errors of the flux sum in quadrature, respectively. Figure <ref> suggests that the GDE is comparable to source emission in the inner Galaxy but may dominate over the source emission in the outer Galaxy. § NEUTRINO EMISSION Based on the γ-ray observations in Section <ref>, we derive the upper limit on the Galactic neutrino flux expected from resolved sources and GDE. The connection between γ-ray and neutrino emission through hadronic processes in the Galaxy is studied in <cit.> and summarized in Appendix <ref>. Since none of the TeV γ-ray experiments covers the full sky, we can only estimate the neutrino emission from the GP using the portion of the plane measured by the γ-ray detectors, under the assumption that the unobserved region has a similar emissivity distribution as the observed region. Details regarding this deviation are described in Appendix <ref>. The neutrino flux expected from all resolved Galactic γ-ray sources and the GDE is shown in Figure <ref> in the Appendix. Some classes of γ-ray sources show clear signatures of leptonic emission. For example, the broadband spectral energy distribution of the Crab nebula can be well described by the synchrotron and inverse Compton emission of relativistic electrons <cit.>. A systematic study of the population of pulsar wind nebulae (PWNe) in the HGPS catalog suggests that TeV emission by the population can be consistently explained by energetic leptons <cit.>. TeV halos around middle-aged pulsars are a new phenomenon found by air shower detectors <cit.>. They are much more extended than PWNe, where the electron–positron plasma is confined by the ambient medium. 
The sizes of TeV halos can usually be explained by the cooling of electrons in the CMB, suggesting that they are also likely of the leptonic origin. Motivated by these facts, we exclude sources in 4FGL and HGPS that are classified as pulsars or PWNe. We exclude 3HWC sources that are coincident with these TeV halo candidate pulsars (in Table 4 of ). For the 1LHAASO catalog, we remove the sources associated with pulsars (in Table 3 of ). In addition, we exclude 1LHAASO J1831-1007u^* and 1LHAASO J0703+1405, which are TeV halo candidates that are removed from the 3HWC. Figure <ref> presents the neutrino flux of resolved γ-ray sources that are not associated with pulsars. The neutrino GDE flux is derived using the γ-ray GDE observations listed in Section <ref>. The red band in Figure <ref> indicates the full-sky GDE derived using the LHAASO observations in both inner and outer Galaxy by assuming that cosmic-ray density follows the SNR distribution described by equation <ref>. We also overlay the prediction of <cit.> based on the Tibet ASγ measurement. The grey band presents the IceCube measurement of the GDE using the π^0 template <cit.>. Figure <ref> shows that in an optimistic scenario where all non-pulsar sources are hadronic emitters, the neutrino emission by the sources could be comparable to the GDE at 1-10 TeV. Above ∼30 TeV, the neutrino emission from the GP is dominated by the truly diffuse component or unresolved sources that have not been detected by any of γ-ray observations. Given that a significant fraction of the remaining sources are still promising leptonic emitters, such as composite SNRs (e.g., ) and γ-ray binaries/microquasars (e.g., ), the neutrino emission of the GP is likely dominated by the emission of diffuse cosmic rays or unresolved sources that are not accounted for. In this sense, it is also intriguing to see that the sum of unresolved hypernova remnants (HNRs) <cit.> can match the Galactic neutrino flux allowed by the Tibet ASγ. Figures <ref> and <ref> suggest that the spectrum of neutrino emission from the GP due to resolved sources is slightly harder than that arises from the GDE, if the π^0 template is correct. This implies the importance of discriminating model templates used in Galactic neutrino searches <cit.>. Figure <ref> further compares theoretical models with the derived and measured neutrino GDE. Measuring both the neutrino spectrum and flux of the GP at 1-10 TeV can help separate these two components. At 10 TeV, the source flux derived from the HGPS catalog is a few times higher than that from the 1LHAASO and 3HWC catalogs. The sensitivities of the HGPS and 3HWC are comparable <cit.>. Comparison of the GP observed by H.E.S.S. and HAWC at 10^∘ < l < 60^∘ found similar integrated fluxes above 1 TeV <cit.>. As the HGPS covers only a small range of latitudes (|b| < 3^∘), the relatively high neutrino flux derived from the HGPS catalog is probably due to the fact that the SNR model (equation <ref>) used for the conversion does not sufficiently describe the clustering of γ-ray sources in the inner Galaxy. Furthermore, more than half of the HGPS region is in the Southern sky, which is not accessible to LHAASO and HAWC (see Figure <ref>). Future air shower γ-ray facilities in the Southern sky are needed to fully understand the difference. § DISCUSSION AND CONCLUSIONS We evaluated the GDE and high-energy neutrino flux from astrophysical sources of the Milky Way based on the latest γ-ray observations. 
Since the TeV-PeV γ-ray observations are ground-based and partial-sky, the maximum flux of neutrino emission from the entire GP is derived based on models of the source distribution in the Galaxy. When calculating the neutrino emission by sources, we removed sources classified as pulsars, PWNe, and TeV halos which are promising leptonic sources. We found that the contribution from known γ-ray sources is likely lower than the GDE by at least a factor of ∼2 in the neutrino sky. The identification and measurement of Galactic neutrino or γ-ray sources involve a separation of the GDE component. A small fraction of the source flux could arise from the GDE and the isotropic emission <cit.>. This would further lower the source contribution and support our conclusion. We have assumed that γ-ray emission of pulsars, PWNe, and TeV halos mostly come from relativistic electrons and positrons. High-energy neutrinos could be emitted by fast-spinning newborn pulsars, although the birth rate of such sources in the local Universe is relatively low <cit.>. Our results confirmed the previous findings that the Galactic contribution is subdominant in the all-sky neutrino flux <cit.>. Although our conclusion is not directly applied to quasi-isotropic emission, this has also been constrained by not only Fermi-LAT but also TeV-PeV γ-ray observations <cit.>. Upcoming neutrino telescopes such as KM3Net, Baikal-GVD and IceCube-Gen2 <cit.> may resolve individual Galactic sources and disentangle the source emission and GDE. Future air shower γ-ray experiments in the Southern hemisphere such as the Southern Wide-field Gamma-ray Observatory <cit.> are also crucial to understanding the emission of the entire GP. The work of K.F. is supported by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation. K.F. acknowledges support from National Science Foundation (PHY-2110821, PHY-2238916) and from NASA through the Fermi Guest Investigator Program (80NSSC22K1584). The work of K.M. is supported by the NSF grants No. AST-1908689, No. AST-2108466 and No. AST-2108467, and KAKENHI No. 20H01901 and No. 20H05852. § TABLE SUMMARY OF GAMMA-RAY OBSERVATIONS OF THE GALAXY Table <ref> and <ref> summarize the γ-ray observations of Galactic sources and GDE, respectively. lccccccc Summary of sky regions observed by γ-ray experiments for source catalogs. 5 0pt Experiment Catalog 2cSky regions Energy [TeV] Reference Fermi-LAT 4FGL all-sky 10^-4-1 <cit.> H.E.S.S. Galactic Plane survey 250^∘≤ l ≤ 65^∘ |b|≤ 3^∘ >1 <cit.> HAWC 3HWC -26^∘ < δ < 64^∘ 0^∘ < α < 360^∘ 7 <cit.> LHAASO 1LHAASO -20^∘ < δ < 80^∘ 0^∘ < α < 360^∘ >1 <cit.> lccccccc Summary of GDE measurements by γ-ray experiments. 
6 0pt Experiment Observation 2cSky regions Energy [TeV] Reference ARGO-YBJ GDE region A 25^∘≤ l ≤ 100^∘ |b|≤ 5^∘ 0.35-2 <cit.> Tibet ASγ GDE region A 25^∘≤ l ≤ 100^∘ |b|≤ 5^∘ 100-1000 <cit.> GDE region B 50^∘≤ l ≤ 200^∘ |b|≤ 5^∘ 100-1000 <cit.> LHAASO GDE inner Galaxy 15^∘ < l < 125^∘ |b| ≤ 5^∘ 10-1000 <cit.> GDE outer Galaxy 125^∘ < l < 235^∘ |b| ≤ 5^∘ 10-1000 <cit.> § MULTIMESSENGER CONNECTION As in <cit.>, we derive the upper limit on the neutrino flux of a sky region from the γ-ray measurements through the following relation: E_ν^2 F_ν^Ω≈3/2.(E_γ^2 F_γ^Ω)|_E_γ = 2E_ν × ∫ ds ∫cosb db∫ dl n_s (s, b, l)/∫ ds ∫cosb db ∫ dl n_s P_γ, surv(E_γ = 2E_ν, s, b, l), where F_ν^Ω and F_γ^Ω are the all-flavor neutrino flux and γ-ray flux produced by hadronic cosmic rays from a sky region, either as GDE or source emission. The factor to the right hand side of the equation scales the emissivity of the sky regions by accounting for the attenuation of γ-rays due to propagation in the ISM. In particular, P_γ, surv is the probability for a photon to survive from the pair production along a line-of-sight s in the direction of Galactic longitude l and latitude b, P_γ, surv(E_γ, x⃗_0, x⃗_ ob) = exp(-τ_γγ(E_γ, x⃗_0,x⃗_ ob)), and τ_γγ is the optical depth to a photon with energy E_γ when traveling from its initial position x⃗_0 to the observer at x⃗_ ob computed using the CMB and the interstellar radiation field model of <cit.>. The integrant n_s is the number density of γ-ray and neutrino emitters at position (s, b, l). It is equivalent to the source density, n_s = n_ CR, in the case of source emission and proportional to the product of the cosmic ray (n_ CR) and gas and molecular densities n_N, n_s ∝ n_ CRn_N, in the case of GDE. We approximate n_N with the HI gas density based on the model of <cit.>. When the effective attenuation factor at the right hand side of equation <ref> is 1, the equation returns to the usual form of equation 2 of <cit.>. § CONVERSION BETWEEN SKY REGIONS We derive the neutrino emission of the entire GP from partial-sky observations under the assumption that the unobserved region has a similar emissivity distribution as the observed region. This is done using equation <ref> but integrating over different sky regions for neutrinos, Ω_ν, and γ-ray, Ω_γ. When converting source emission, we take Ω_ν = 4π and Ω_γ of various source catalogs and assume that sources follow the spatial distribution of supernova remnants (SNR). n_ CR∝(r/R_⊙)^ζexp[-η(r-R_⊙/R_⊙)-|z|/z_g]. where R_⊙=8.5 kpc is the solar distance from the GC and the following parameter values are adopted, ζ=1.09, η = 3.87 <cit.> and z_g=0.083 kpc <cit.>. § NEUTRINO EMISSION FROM ALL SOURCES Figure <ref> contrasts the fluxes of the neutrinos expected from all resolved sources in the Galaxy and the GDE. Since the conversion is based on an optimistic assumption that all γ-ray emission is produced by cosmic-ray protons and nuclei in astrophysical sources, the resulted fluxes should be treated as upper limits.
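To make the conversion in Appendix B concrete, the sketch below applies the zeroth-order multimessenger scaling E_ν² F_ν ≈ (3/2) (E_γ² F_γ) evaluated at E_γ = 2 E_ν, with the line-of-sight attenuation factor set to unity (optically thin limit). The example γ-ray spectrum is a placeholder power law, not a measured flux, and the function names are ours.

```python
# Sketch: zeroth-order gamma-ray -> neutrino conversion from Appendix B,
# E_nu^2 F_nu ~ (3/2) * (E_gamma^2 F_gamma) at E_gamma = 2 E_nu, with the
# attenuation factor set to 1. The gamma-ray spectrum below is a placeholder
# power law standing in for a measured GDE or source flux.

def egamma2_fgamma(E_gamma_TeV, A=1e-8, index=2.7):
    """Placeholder E^2 dN/dE in GeV cm^-2 s^-1 sr^-1 at energy E [TeV]."""
    return A * (E_gamma_TeV / 1.0) ** (2.0 - index)

def enu2_fnu(E_nu_TeV):
    """All-flavor E_nu^2 F_nu implied by a purely hadronic gamma-ray flux."""
    return 1.5 * egamma2_fgamma(2.0 * E_nu_TeV)

for E_nu in (1.0, 10.0, 100.0):
    print(f"E_nu = {E_nu:6.1f} TeV -> E^2 F ~ {enu2_fnu(E_nu):.2e} GeV cm^-2 s^-1 sr^-1")
```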
http://arxiv.org/abs/2307.02759v1
20230706034440
Knowledge Graph Self-Supervised Rationalization for Recommendation
[ "Yuhao Yang", "Chao Huang", "Lianghao Xia", "Chunzhen Huang" ]
cs.IR
[ "cs.IR", "cs.AI" ]
<ccs2012> <concept> <concept_id>10002951.10003317.10003347.10003350</concept_id> <concept_desc>Information systems Recommender systems</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Recommender systems Knowledge Graph Self-Supervised Rationalization for Recommendation]Knowledge Graph Self-Supervised Rationalization for Recommendation University of Hong Kong yuhao-yang@outlook.com Chao Huang is the corresponding author. University of Hong Kong chaohuang75@gmail.com University of Hong Kong aka_xia@foxmail.com Wechat, Tencent chunzhuang@tencent.com In this paper, we introduce a new self-supervised rationalization method, called , for knowledge-aware recommender systems. To effectively identify informative knowledge connections, we propose an attentive knowledge rationalization mechanism that generates rational scores for knowledge triplets. With these scores,  integrates generative and contrastive self-supervised tasks for recommendation through rational masking. To highlight rationales in the knowledge graph, we design a novel generative task in the form of masking-reconstructing. By masking important knowledge with high rational scores,  is trained to rebuild and highlight useful knowledge connections that serve as rationales. To further rationalize the effect of collaborative interactions on knowledge graph learning, we introduce a contrastive learning task that aligns signals from knowledge and user-item interaction views. To ensure noise-resistant contrasting, potential noisy edges in both graphs judged by the rational scores are masked. Extensive experiments on three real-world datasets demonstrate that  outperforms state-of-the-art methods. We also provide the implementation codes for our approach at <https://github.com/HKUDS/KGRec>. [ Chunzhen Huang August 1, 2023 ================== § INTRODUCTION With the rise of information overload, recommender systems have become a critical tool to help users discover relevant items of interest <cit.>. Among the leading paradigms in this field is collaborative filtering (CF), which assumes that users with similar interactions share similar interests in items <cit.>. CF has proven to be effective in a wide range of applications and has driven significant advances in the field of recommender systems. In recent years, collaborative filtering (CF) frameworks have undergone significant improvements with the introduction of neural networks and latent embedding for users and items, leading to effective enhancements for traditional matrix factorization methods (<cit.>). Moreover, novel models that integrate variational autoencoders, attention mechanisms, and graph neural networks have further increased the performance of CF (<cit.>). However, the sparsity of user-item interactions fundamentally limits the scope of performance improvement. To address this issue, incorporating a knowledge graph (KG) as a rich information network for items has gained traction in collaborative filtering, leading to knowledge graph-enhanced recommendation. The exploration of knowledge graph-enhanced recommendation begins with embedding-based methods and path-based methods. Specifically, some studies <cit.> incorporate transition-based knowledge graph embedding, such as TransR <cit.>, into item embedding to enrich user and item modeling. Other studies <cit.> focus on extracting semantically meaningful meta-paths from the KG and perform complex modeling of users and items along these meta-paths. 
To unify embedding-based and path-based methods in a mutually beneficial manner, recent research has adopted powerful graph neural networks (GNNs) to capture multi-hop high-order information through propagation and aggregation on the KG. These state-of-the-art solutions include <cit.>. Although knowledge graphs have proven effective for improving recommendation systems, they can also introduce noise and sparsity issues, leading to sub-optimal performances <cit.>. To address these issues, recent studies propose using contrastive learning (CL) for better knowledge-aware recommendation. For example, KGCL <cit.> applies stochastic graph augmentation on the KG and performs CL to address noisy entity and long-tail problems in the KG. <cit.> design a cross-view CL paradigm between the KG and user-item graph to improve KG representation learning with real labels from recommendation. However, we argue that these methods adopt either simple random augmentation or intuitive cross-view information, failing to consider the important latent rationales between the KG and recommendation task. Figure <ref> presents the distribution of attention scores of knowledge triplets in KGAT on the left, and a motivating case on the right that illustrates the rationales in the KG emphasized by CF signals. The distribution of attention scores in the KGAT model shows that only a small proportion of knowledge triplets have high attention scores and are thus highly contributive to recommendation as rationales. The remaining knowledge triplets exhibit a long tail of low scores in the distribution and are less informative in the network. To better understand the relationship between KG and CF signals, we provide an example of an e-commerce platform where users often purchase diving glasses and underwater cameras together. To make accurate predictions, the connections with common semantics “Sports/Diving” will be highlighted in the KG. Thus, for the underwater cameras, the knowledge “Photography” and “Digital Cam” will be less important compared to “Sports Cam”. This highlights the importance of identifying and emphasizing relevant rationales in the KG to improve recommendation performance. In order to achieve accurate and effective knowledge graph-based recommendations, it is important to explicitly model the rationales behind the user preference learning. To address this challenge, we propose a new knowledge graph-enhanced recommender system, called to leverage attentive knowledge rationalization to generate task-related rational scores for knowledge triplets. proposes a self-supervised rationale-aware masking mechanism to extract useful rationales from the KG, by adaptively masking knowledge triplets with higher rational scores. By forcing to learn to reconstruct these important connections, we highlight task-related knowledge rationales. We also align the rational semantics between the KG signals and the Collaborative Filtering (CF) signals via a knowledge-aware contrasting mechanism. This is achieved by filtering out low-scored knowledge that may be potential noise by masking during graph augmentation for contrastive learning. Finally, we inject the rational scores into the knowledge aggregation for the recommendation task, enabling knowledge rational scores to be learned tightly from the CF labels. 
In summary, we make the following contributions in this paper: * We unify generative and contrastive self-supervised learning for knowledge graph-enhanced recommender systems, which enables the distillation of the useful knowledge connections within the knowledge graph for recommendation and align them in a noise-free and rationale-aware manner. * Our proposed rationale-aware masking mechanism allows us to identify and highlight the most important and relevant information within the knowledge graph, while suppressing potential noise or irrelevant knowledge graph connections. * To validate the effectiveness of our proposed model, , we conduct extensive experiments on three real-world datasets. Evaluation results provide strong evidence that our proposed model achieves superior performance compared with existing knowledge-aware recommender systems. § PRELIMINARIES We begin by introducing the concepts that will be used in our paper and formally defining the KG-enhanced recommendation task. User-Item Interaction Graph. In a typical recommendation scenario, we have a set of users, denoted by 𝒰, and a set of items, denoted by 𝒱. Let u∈𝒰 and v∈𝒱 represent a single user and item, respectively. We construct a binary graph 𝒢_u = (u, y_uv, v) to denote the collaborative signals between users and items, with y_uv=1 if user u interacted with item v, and vice versa. Knowledge Graph. We represent real-world knowledge about items with a heterogeneous graph consisting of triplets, denoted by 𝒢_k = (h,r,t). h,t∈ℰ are knowledge entities, and r∈ℛ represents the semantic relation connecting them, such as (author, wrote, book). It is important to note that the item set is a proper subset of the entity set, 𝒱⊂ℰ. This allows us to model the complex relationships between items and entities in the KG. Task Formulation. Our KG-aware recommendation task can be formally described as follows: given a user-item interaction graph, denoted by 𝒢_u, and a knowledge graph, denoted by 𝒢_k, our goal is to learn a recommender model, denoted by ℱ(u,v|𝒢_u,𝒢_k,Θ), where ℱ represents the model architecture with learnable parameters Θ. The output of the model is a value in the range [0,1] that indicates the likelihood of user u interacting with item v. § METHODOLOGY In this section, we introduce detailed technical design of our proposed . The overall framework is present in Figure <ref>. §.§ Rationale Discovery for Knowledge Graph To automatically distill essential semantics for recommendation from the complex knowledge graph, we propose a rationale weighting function that learns the probability of knowledge triplets being the underlying rationale for collaborative interactions. This rationale function weighs each knowledge triplet based on a learnable graph attention mechanism. Inspired by the heterogeneous graph transformer (HGT) <cit.>, which discriminates the importance of heterogeneous relations, we implement the rationale weighting function f(h,r,t) as follows: f(h,r,t) = 𝐞_h𝐖^Q ·(𝐞_t𝐖^K ⊙𝐞_r)^/√(d), Here, 𝐞_h, 𝐞_r, and 𝐞_t are embeddings for the head, relation, and tail entities, respectively. The trainable weights for attention, 𝐖^Q and 𝐖^K, have dimensions of ℝ^d× d, where d is the hidden dimensionality. To model the relational context, we use the element-wise product between the relation r and the tail entity t, which corresponds to the rotation of the entity embedding 𝐞_t to the latent space of relation r <cit.>. 
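To make the rationale weighting concrete, the following is a hedged PyTorch sketch of f(h,r,t) = e_h W^Q · (e_t W^K ⊙ e_r)^⊤ / √d for a batch of knowledge triplets; the tensor shapes and variable names are ours and do not correspond to the released implementation.

```python
import torch

# Sketch of the rationale weighting f(h, r, t) = e_h W_Q . (e_t W_K ⊙ e_r) / sqrt(d)
# for a batch of knowledge triplets. Shapes and names are illustrative only.
d = 64
W_Q = torch.nn.Linear(d, d, bias=False)
W_K = torch.nn.Linear(d, d, bias=False)

def rationale_score(e_h, e_r, e_t):
    """e_h, e_r, e_t: (num_triplets, d) embeddings of head, relation, tail."""
    q = W_Q(e_h)                            # (T, d)
    k = W_K(e_t) * e_r                      # relational context via element-wise product
    return (q * k).sum(dim=-1) / d ** 0.5   # (T,) one rationale score per triplet

# Toy usage: score 5 random triplets.
e_h, e_r, e_t = (torch.randn(5, d) for _ in range(3))
print(rationale_score(e_h, e_r, e_t))
```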
The rationale score f(h,r,t) of a knowledge triplet indicates its importance in assisting user preference, as learned by the model and guided by the labels from the recommendation task. To ensure comparability of rationale scores across neighbors of the same head entity, we normalize the scores by the number of neighbors 𝒩_h using the following softmax function: ω(h,r,t) = exp(f(h,r,t))/∑_(h,r^',t^')∈𝒩_hexp(f(h,r^',t^')). §.§ Rationale-aware Heterogeneous Knowledge Aggregation A complex KG often contains a large number of real-world knowledge triplets with heterogeneous nature. Inspired by previous works such as <cit.>, we design an aggregation layer for the knowledge graph that reflects the relational heterogeneity of knowledge triplets. In particular, we focus on the rationales of knowledge triplets, which enable dynamic weighting considering the importance of neighbor entities. To build the knowledge aggregator, we inject the relational context into the embeddings of the neighboring entities, weighting them with the knowledge rationale scores. 𝐞_h^(l) = 1/|𝒩_h|∑_(h,r,t)∈𝒩_hω(h,r,t)𝐞_r⊙𝐞_t^(l-1), where l denotes the layer of the aggregator, and 𝒩_h ⊆𝒢_k is the node-centric sub-graph of first-order neighbors. To inject relational context, we use the same element-wise product as in Equation <ref> to bridge the gap between aggregation and rationale weighting. By performing such aggregation across the entire knowledge graph, we carefully consider the contextual relationships between knowledge entities and weight neighbor information for the head entity according to normalized rationale scores. It's worth noting that items are a subset of knowledge entities. Therefore, we obtain knowledge-aware item representations by aggregating paths 𝐞_v←𝐞_t_1←⋯←𝐞_t_n on the KG using Equation <ref>. To model collaborative signals between users and items, we take into account the role of users in the interaction graph 𝒢_u. This allows us to generate user embeddings by aggregating the embeddings of the neighboring items in the user-item interaction graph. Specifically, we use a neighbor aggregation method to obtain the user embedding with the following formulas: 𝐞_u^(l) = 1/|𝒩_u|∑_i∈𝒩_u𝐞_v^(l-1), where 𝐞_u and 𝐞_v represent the user embedding and item embedding, respectively. It's important to note that an item v is equivalent to a certain entity h,t in the knowledge graph. We can define the final representations of users and entities as the summation of aggregated embeddings from different layers: 𝐞_h = f_k(𝒢_k;h) = ∑_l^L𝐞_h^(l); 𝐞_u = f_u(𝒢_u;u) = ∑_l^L𝐞_u^(l), L denotes the number of graph aggregation layers, and f_*(· , ·) is the function that generates user or entity representations based on the input graph 𝒢_u or 𝒢_k, and certain instances u or h. §.§ Knowledge-aware Masked Autoencoder §.§.§ Rationale Masking Mechanism As related works have revealed <cit.>, noisy or irrelevant connections between entities in knowledge graphs can lead to suboptimal representation learning. This issue can be particularly problematic in knowledge-aware recommendation systems, where user and item representations are further interrupted by KG noises <cit.>, resulting in inaccurate recommendation results. To eliminate the noise effect in the KG and distill informative signals that benefit the recommendation task, we propose to highlight important knowledge triplets with high rationale scores, as learned in Equation <ref>. 
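A minimal sketch of one rationale-aware aggregation layer defined above: for each triplet (h,r,t), the relation-rotated message e_r ⊙ e_t is weighted by the per-head softmax ω(h,r,t) and averaged into the head entity. The sparse edge-index layout and helper names are assumptions; a production implementation would use a numerically stable segment softmax.

```python
import torch

# Sketch of one rationale-aware aggregation layer: 1/|N_h| * Σ ω(h,r,t) e_r ⊙ e_t.
# `head`, `rel`, `tail` are LongTensors of shape (num_triplets,); names are ours.
def aggregate(entity_emb, rel_emb, head, rel, tail, scores):
    num_ent = entity_emb.size(0)
    exp_s = scores.exp()                                        # unnormalized weights
    denom = torch.zeros(num_ent).index_add_(0, head, exp_s)     # Σ over N_h
    omega = exp_s / denom[head]                                 # ω(h, r, t)
    msg = omega.unsqueeze(-1) * rel_emb[rel] * entity_emb[tail]
    out = torch.zeros_like(entity_emb).index_add_(0, head, msg)
    deg = torch.zeros(num_ent).index_add_(0, head, torch.ones_like(exp_s))
    return out / deg.clamp(min=1.0).unsqueeze(-1)               # average over |N_h|
```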
Recent studies on masked autoencoders <cit.> have demonstrated the effectiveness of this approach in enabling models to acquire useful implicit semantics by masking important information during the reconstruction of missing knowledge. Building on these findings, we have designed a generative self-supervised learning task that follows a masking-and-reconstructing approach. During each training step, we mask a batch of knowledge triplets in the KG and reconstruct these relational edges towards a generative self-supervised objective. Additionally, we ensure that the masked triplets have globally high rationale scores, meaning that we mask knowledge that is important for the recommendation task and force the model to learn to reconstruct these connections to highlight useful knowledge for encoding user preference. To obtain a global measure of the rationale importance of knowledge triplets, we design a criterion. In Equation <ref>, ω(h,r,t) reflects the local importance of the triplet among all edges to the same head entity h. However, the degree of the head entity can influence the value of ω, making it difficult to compare the importance of triplets across the entire KG. To address this issue, we adjust ω(h,r,t) by multiplying it with the number of head entity neighbors |𝒩_h|. This modification ensures that the importance of the triplet is weighted by the number of connections of the head entity, rather than just its degree. The updated equation is: γ(h,r,t) = |𝒩_h|·ω(h,r,t) = |𝒩_h|·exp(f(h,r,t))/∑_(h,r^',t^')∈𝒩_hexp(f(h,r^',t^')). The motivation behind this criterion is to identify the most valuable knowledge triplets across the entire KG. By using the rationale score after softmax, we can determine the relative proportion of a knowledge triplet among its head entity neighbors 𝒩_h. We multiply the rationale score with the number of head entity neighbors |𝒩_h|, which makes it globally comparable. By using this approach, we can select the most valuable knowledge triplets across the entire KG based on the value of γ(h,r,t). To improve sampling robustness, we add Gumbel noise <cit.> to the learned rationale scores. γ(h,r,t) = γ(h,r,t)-log(-log(ϵ)); ϵ∼Uniform(0,1), where ϵ is a random variable sampled from uniform distribution. Then, we generate a set of masked knowledge triplets by selecting the top k-highest rational scores in the KG: ℳ_k = {(h,r,t)|γ(h,r,t)∈topk(Γ;k_m)}, where Γ represents the distribution of all γ(h,r,t). Finally, to create an augmented knowledge graph, denoted by 𝒢_k^m, we remove the edges ℳ_k with low rationale scores from the original knowledge graph 𝒢_k. In other words, 𝒢_k^m is obtained by subtracting the set of edges ℳ_k from the set of edges in 𝒢_k, represented by 𝒢_k ∖ℳ_k. §.§.§ Reconstructing with Relation-aware Objective In order to enable our model to recover crucial knowledge in a self-supervised way, we provide the model with entity embeddings created from the augmented graph 𝒢_k^m, and train the model to reconnect the masked knowledge edges. Therefore, we begin by applying rationale-aware knowledge aggregation, as outlined in Equation <ref>, on 𝒢_k^m to produce entity embeddings, in which k_m rationale edges have been removed. 𝐞_h = f_k(𝒢_k^m;h); 𝐞_t = f_k(𝒢_k^m;t), The function f_k(·) is the aggregation function on the knowledge graph, as defined in Equation <ref>. 
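For clarity, here is a small sketch of the rationale masking step described above: the globally comparable scores γ(h,r,t) = |N_h|·ω(h,r,t) are perturbed with Gumbel noise and the top-k_m triplets are selected for masking and later reconstruction. The function name and tensor layout are illustrative only.

```python
import torch

# Sketch: Gumbel-perturbed top-k selection of knowledge triplets to mask.
# `gamma` holds the globally comparable rationale scores |N_h| * ω(h,r,t)
# for all triplets; `k_m` is the masking size (e.g. 512).
def rationale_mask(gamma, k_m):
    eps = torch.rand_like(gamma).clamp(min=1e-10)
    noisy = gamma - torch.log(-torch.log(eps))     # add Gumbel(0, 1) noise
    masked_idx = torch.topk(noisy, k_m).indices    # triplets to hide and reconstruct
    keep = torch.ones_like(gamma, dtype=torch.bool)
    keep[masked_idx] = False
    return masked_idx, keep   # aggregate on kept edges; reconstruct masked ones
```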
At this point, the knowledge triplets with significant rationale scores, denoted by ℳ_k, which were not visible during the aggregation stage, can be used as self-supervision labels for reconstruction. Given the rich relational heterogeneity in the knowledge graph, we aim to reconstruct the important rational connections under relational contexts. To achieve this, we minimize the following dot-product log-loss for the label triplets, with σ(·) representing the sigmoid activation function: ℒ_m = ∑_(h,r,t)∈ℳ_k-log(σ( 𝐞_h^·(𝐞_t⊙𝐞_r))). §.§ Knowledge Rationale-aware Contrasting §.§.§ Rationale-aware Graph Augmentation As explained earlier, the hierarchical rationales for knowledge triplets are derived from the connection between the knowledge graph and user-involved recommendation labels. In order to further enhance the interpretability of the knowledge rationalization modules, we draw inspiration from previous works <cit.>. Specifically, we propose to align the representations of the knowledge graph with collaborative filtering signals, which allows us to explicitly model cross-view rationales. To construct debiased contrastive views, we begin by identifying and removing weakly task-related edges that could potentially introduce noise in both graphs. Regarding the knowledge graph, it is worth noting that knowledge triplets with lower rationale scores tend to have less impact on the recommendation task. Consequently, we aim to improve the quality of the graph by removing the noisy triplets. This augmentation process ensures that the remaining triplets are more informative and have a higher rationale score. By doing so, we can enhance the performance of our model and better capture the underlying relationships between the entities in the graph. 𝒮_k = {(h,r,t)|γ(h,r,t)∈topk(-Γ;ρ_k)}; 𝒢_k^c = 𝒢_k ∖𝒮_k, In Equation <ref>, we introduced the knowledge attentive scores γ and Γ, which are computed with the addition of Gumbel noise. Here, Γ represents the distribution of all γ values. By taking the negative of Γ, denoted as -Γ, we can use the top-k function to calculate the least-k values. The hyperparameter ρ_k controls the dropout ratio during training. We also introduce the augmented knowledge graph 𝒢_k^c, which is debiased from noise with lower rationale scores. In addition to the knowledge graph, we also aim to improve the quality of the u-i interaction graph by removing noisy interactions that are not conducive to cross-view alignment. Specifically, we want to retain interaction edges that clearly reflect the user's interests and can better guide knowledge graph rationalization through cross-view contrasting. Given that the semantics of item embeddings can be influenced by their linked knowledge in the KG, we propose to weight each interaction edge by considering the rationales of the knowledge triplets connected to the item. This approach allows us to better reflect the noise associated with each interaction edge. To implement this, we calculate the mean value of the rationale scores for all the knowledge triplets linked to the item. This mean value is then used as a weight for the corresponding interaction edge, which helps to distinguish between informative and noisy interactions. ϕ_v = mean({γ(h,r,t)|h=v∨ t=v}). A lower ϕ_v value implies that the knowledge entities neighboring an item in the KG are relatively less contributive to the recommendation task, which can lead to bias in the item representation. 
To address this issue, we filter our interaction edges using the ϕ_v score and augment the graph with only the informative interactions. To avoid overfitting on user and item representations, we adopt a multinomial distribution sampling strategy <cit.> to derive more randomized samples for edge dropout. This approach helps to ensure that the model is not overly reliant on a specific set of interactions and can generalize well to new data. Formally, the process can be defined as follows: ϕ_v^' = expϕ_v/∑_v expϕ_v; 𝒮_u∼multinomialNR(Φ^';ρ_u), After calculating the ϕ_v score for each item v, which represents the mean value of the rationale scores for all the knowledge triplets linked to the item, we apply softmax to obtain a probability distribution ϕ^' over all items. The resulting distribution Φ^' is used to sample a subset of items without replacement using the multinomial distribution sampling method, denoted as multinomialNR(·;·). Here, ρ_u denotes the size of the sampled candidates. By following the previous definitions, we can generate the augmented u-i graph as the difference between the original u-i graph 𝒢_u and the set of sampled interactions 𝒮_u, 𝒢_u^c=𝒢_u∖𝒮_u. §.§.§ Contrastive Learning with Cross-View Rationales. With the augmented knowledge graph and u-i graph, we use pre-defined aggregators to capture the view-specific node representations for items as the contrastive embeddings. For the u-i interaction view, we utilize the state-of-the-art LightGCN <cit.> module to iteratively capture high-order information on 𝒢_u^c. 𝐱_u^(l) = ∑_v∈𝒩_u𝐱_v^(l-1)/√(|𝒩_u||𝒩_v|); 𝐱_v^(l) = ∑_u∈𝒩_v𝐱_u^(l-1)/√(|𝒩_u||𝒩_v|). We obtain the final contrastive embeddings for items in the u-i view by summing up the representations from all layers of the LightGCN module. For the augmented knowledge graph, we use a rationale-aware knowledge aggregation mechanism to generate the knowledge-view item representations, which take into account the rationale scores associated with the knowledge triplets. 𝐱_v^k = f_k(𝒢_k^c; v). It is important to note that the contrastive embeddings 𝐱_i^u and 𝐱_i^k are from different representation spaces, namely the collaborative relational signals and knowledge graph signals. We feed them into two different MLPs to map them into the same latent space. 𝐳_v^* = σ(𝐱_v^*𝐖_1^* + 𝐛_1^*)^𝐖_2^* + 𝐛_2^*, where the notation * ∈u,k denotes view-specific representations, namely 𝐳_v^u and 𝐳_v^k. The learnable weights and bias denoted as 𝐖 and 𝐛. By doing so, we can effectively capture the complementary information from both views. To ensure the alignment of cross-view item representations, we adopt a contrastive objective. To avoid over-fitting and eliminate the false-negative effect, as inspired by <cit.>, we modify the widely used InfoNCE <cit.> loss by specifying one random sample for each view as the negative. Formally, we define our contrastive loss as: ℒ_c = ∑_v∈𝒱-logexp(s(𝐳_v^u, 𝐳_v^k)/τ)/∑_j∈{v, v^',v^''}(exp(s(𝐳_v^u, 𝐳_v^k)/τ)+exp(s(𝐳_j^u, 𝐳_v^k)/τ)), In the contrastive loss, v^' and v^'' are stochastically sampled negative candidates for item v. The similarity measurement s(·) is set to the cosine similarity of normalized vectors. The temperature hyperparameter τ controls the hardness of the contrastive goal <cit.>. §.§ Model Learning and Discussion For the main recommendation task, we use the dot product between the user and item representations as the prediction, which is denoted as ŷ_uv = 𝐞_u^𝐞_v. 
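Returning briefly to the contrastive objective above, the following is a simplified sketch of the modified InfoNCE loss with one randomly sampled negative item per view; the inputs are assumed to be the MLP-projected item embeddings z^u and z^k, and the variable names are ours.

```python
import torch
import torch.nn.functional as F

# Simplified sketch of the cross-view contrastive loss: each item's interaction-view
# embedding is aligned with its KG-view embedding, and one randomly permuted item per
# view serves as the only negatives. Names and the permutation scheme are illustrative.
def cross_view_cl_loss(z_ui, z_kg, tau=0.2):
    z_ui, z_kg = F.normalize(z_ui, dim=-1), F.normalize(z_kg, dim=-1)
    pos = (z_ui * z_kg).sum(-1) / tau                   # s(z_v^u, z_v^k) / τ
    p1, p2 = torch.randperm(z_ui.size(0)), torch.randperm(z_ui.size(0))
    neg_u = (z_ui[p1] * z_kg).sum(-1) / tau             # random u-i view negative
    neg_k = (z_ui * z_kg[p2]).sum(-1) / tau             # random KG view negative
    return -(pos - torch.logsumexp(torch.stack([pos, neg_u, neg_k]), dim=0)).mean()
```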
To optimize the model parameters, we adopt the widely used Bayesian Personalized Ranking (BPR) loss to optimize the model parameters as follows: ℒ_rec = ∑_(u,v,j)∈𝒟-logσ(ŷ_uv-ŷ_uj), In the BPR loss, we use the training instances 𝒟 = (u,v,j), where v is the ground-truth and j is a randomly sampled negative interaction. It is worth noting that we continue to use the entity embeddings 𝐞_v from the masked graph 𝒢_k^m for the recommendation task, rather than performing aggregation on the original knowledge graph again. This is because the masked triplets are generally of small size (512) compared to the whole graph (millions), and this trick can greatly improve the training efficiency while affecting the representation learning only minimally. Moreover, according to <cit.>, this setting can increase the difficulty of the main task learning and improve the optimization effect. To optimize all three loss functions, we use a joint learning approach with the following overall loss function: ℒ = ℒ_rec + λ_1ℒ_m + λ_2ℒ_c, where λ_1 and λ_2 represent the weight of the mask-and-reconstruction and cross-view contrastive learning tasks, respectively. We omit the notation of L2 regularization terms for brevity. §.§.§ Connection to Alignment and Uniformity. Investigating the alignment and uniformity merits of learned representations by the generative and contrastive tasks is important for providing fundamental support for the proposed  method. Following <cit.>, the mathematical definitions for the uniformity and alignment of learned vector representations are presented below: ℒ_align 𝔼_(x,y)∼ p^+[𝐱-𝐲_2^α]; α>0 ℒ_uni log𝔼_(x,y)i.i.d.∼ p[exp(-γ𝐱-𝐲^2_2)]; γ=1/2σ^2, where the exponent α of the Euclidean distance controls the degree to which the learning algorithm focuses on aligning the learned representations with the positive labels. The distribution p of training data and p^+ distribution of positive labels are used to compute the expected value of the alignment loss. We first prove that the rational masking-reconstructing task is an explicit alignment for features. According to the generative loss in Equation <ref>, the optimization of ℒ_m equals to: min𝔼_(x,y)∼ p_r^+[∑_x,ylog(σ(𝐱^𝐲))], where the variable 𝐱 corresponds to the feature vector of the head entity 𝐞_h, while the variable 𝐲 corresponds to the element-wise product of the feature vectors of the tail entity and relation, given by 𝐞_r⊙𝐞_t. The distribution p_r^+ represents the set of knowledge rationales with high rational scores. Note that: 𝐱-𝐲_2^α = (2-2·𝐱^𝐲)^α/2. Since the generative loss function ℒ_m is in the form of an alignment loss, as defined in Equation <ref>, minimizing it leads to the alignment of the masked rationale knowledge triplets. We can further show that the contrastive loss in Equation <ref> reflects the alignment and uniformity properties. Considering that: ℒ_c = 𝔼_(x,y)∼ p_c^+ {x_i^-}_i=1^2∼ p_c[-1/τ𝐱^𝐲 + log(exp(𝐱^𝐲/τ) + ∑_iexp(𝐱_𝐢^-^𝐱/τ) ) ] ≥𝔼_x∼ p_c {x_i^-}_i=1^2∼ p_c[ -1/τ + log( exp(1/τ) + ∑_iexp(𝐱_𝐢^-^𝐱/τ) ) ] The positive pair in Equation <ref> is denoted as 𝐱,𝐲, and the negative samples are denoted as 𝐱^- for brevity. The set of random negative samples x_i^-_i=1^2∼ p_c in Equation <ref> is drawn from the distribution p_c of cross-view item representations. As a result, the lower bound of the contrastive loss function ℒ_c in Equation <ref> is satisfied only if the embeddings 𝐱,𝐲 are perfectly aligned, i.e., 𝐱^𝐲=1, which is equivalent to the definition of alignment in Equation <ref>. 
If the embeddings satisfy the perfect alignment condition, the optimization of ℒ_c simplifies to a degenerate form. min𝔼_x∼ p_c {x_i^-}_i=1^2∼ p_c[log(∑_iexp(𝐱_𝐢^-^𝐱/τ))], The alignment and uniformity properties in the generative loss function ℒ_m and the contrastive loss function ℒ_c can benefit representation learning by ensuring that positive pairs are in agreement and that random instances are pushed as negatives. In addition, the proposed knowledge rationalization improves the sampling distribution to be rational and noise-resistant, instead of using a random distribution as in the original forms. By exploiting rationales in the KG, we empower the alignment property with rationality-aware positive pairing ability, which provides better gradients for model learning. Additionally, for cross-view rationales, we remove potential noise to build a noise-free distribution, which eliminates the effect of false negative pairing and improves the contrastive effectiveness. Overall, our  is able to derive better alignment and uniformity compared to stochastic methods, which can lead to improved representation for more accurate recommendations. § EVALUATION In this section, we conduct experiments to answer several research questions related to the proposed  framework. * RQ1: Can  outperform state-of-the-art baseline models of different types in terms of recommendation performance? * RQ2: How do the key designs in  contribute to its overall performance, and what is its sensitivity to hyperparameters? * RQ3: What benefits does  bring to tackling task-specific challenges such as cold-start and long-tail item recommendation? * RQ4: Can  derive interpretability with rational scores? §.§ Experimental Setup §.§.§ Dataset To ensure a diverse and representative evaluation, we use three distinct datasets that reflect real-life scenarios: Last-FM for music recommendations, MIND for news recommendations, and Alibaba-iFashion for shopping recommendations. We preprocess the datasets using the commonly adopted 10-Core approach to filter out users and items with less than 10 occurrences. To construct the knowledge graphs, we employ different methods for each dataset. For Last-FM, we map the items to Freebase entities and extract knowledge triplets, following the techniques used in <cit.> and <cit.>. For MIND, we collect the knowledge graph from Wikidata[https://query.wikidata.org/] using the representative entities in the original data, following the approach proposed in <cit.>. For Alibaba-iFashion, we manually construct the knowledge graph using category information as knowledge, as done in <cit.>. Table <ref> summarizes the statistics of user-item interactions and knowledge graphs for three evaluation datasets. §.§.§ Evaluation Protocols To ensure fair evaluation, we employ the full-rank setting and divide our dataset into three parts: 70% for training, 10% for hyperparameter tuning, and 20% for testing. We measure the performance of our proposed  using the Recall@N and NDCG@N metrics, with N set to 20 for top-N recommendations. We implement  using PyTorch and compare its performance with various baseline models using official or third-party code. To optimize the performance of , we conduct a hyperparameter search for the masking size, keeping proportion for contrastive learning, and temperature value. Specifically, we explore values of masking size from the range of {128,256,512,1024}, keeping proportion ρ_k and ρ_u from {0.4,0.5,0.6,0.7,0.8}, and temperature value from the range of {0.1,⋯,1.0}. 
The number of GNN layers is set to 2 for all graph-based methods. §.§.§ Baseline Models To verify the effectiveness of our proposed design, we conduct benchmark evaluations between our model and various baseline models from different research lines. General Collaborative Filtering Methods. * BPR <cit.> is a matrix factorization method that uses a pairwise ranking loss based on implicit feedback. * NeuMF <cit.> incorporates MLPs into matrix factorization to learn enriched user and item feature interactions. * GC-MC <cit.> considers recommendation as a link prediction problem on the user-item graph and proposes a graph auto-encoder framework for matrix completion. * LightGCN <cit.> is a state-of-the-art recommendation method based on graph neural networks (GNNs), which improves performance by removing activation and feature transformation. * SGL <cit.> introduces a self-supervised learning paradigm to GNN-based recommendation by using stochastic augmentation on the user-item graph based on the InfoNCE objective. Embedding-based Knowledge-aware Recommenders. * CKE <cit.> is an embedding-based KG recommender that leverages TransR <cit.> to enrich item representations by training on structural knowledge, thereby enhancing collaborative filtering. * KTUP <cit.> trains TransH <cit.> using preference-injected CF and enables mutual complementation between CF and KG signals. GNN-based Knowledge Graph-enhanced Recommenders. * KGNN-LS <cit.> considers user preferences towards different knowledge triplets in graph convolution and introduces label smoothing as regularization to force similar user preference weights between nearby items in the KG. * KGCN <cit.> aggregates knowledge for item representations by considering high-order information with GNN and uses preferences from user embeddings as weights. * KGAT <cit.> introduces the concept of a collaborative KG to apply attentive aggregation on the joint user-item-entity graph, with attention scores reflecting the importance of knowledge triplets. * KGIN <cit.> is a state-of-the-art method that models user intents for relations and employs relational path-aware aggregation to effectively capture rich information on the knowledge graph. Self-Supervised Knowledge-aware Recommenders. * MCCLK <cit.> performs contrastive learning in a hierarchical manner for data augmentation, so as to consider structural information for the user-item-entity graph. * KGCL <cit.> introduces graph contrastive learning for KGs to reduce potential knowledge noise. KG contrastive signals are further used to guide the user preference learning. §.§ RQ1: Overall Performance Comparison We report the performance of all the methods on three datasets in Table <ref>. Based on the results, we make the following observations: * The proposed model consistently outperforms all baseline models on both metrics and all three datasets. This can be attributed to three factors. First, by using rational masking and reconstruction, it is able to capture knowledge information that is truly useful for the recommendation task. Second, it is equipped with rational cross-view contrastive learning on augmented, noise-free graphs, which allows for better exploitation of the latent relatedness between KG and CF signals. Third, the knowledge aggregation layer is weighted by knowledge rational scores to reflect the different importance of knowledge triplets.
Additionally, the superior results on datasets with vastly different statistics suggest that the proposed knowledge rationalization mechanism can automatically discover useful knowledge related to downstream tasks, regardless of the data characteristics. * On the three datasets, there is no consistent winner among the baseline models. Contrastive learning-based methods (MCCLK and KGCL) are not always better than non-self-supervised methods (KGIN). This may be due to the limitations of random graph augmentation or intuitive handcrafted cross-view pairing, which may fail to discover truly useful KG information from the contrastive views for encoding the interests of users. * GNN-based knowledge-aware recommenders can consistently outperform embedding-based models. This advantage is due to GNNs' ability to capture more complex and higher-order information on the KG, compared to the linear transition-based modeling adopted by embedding-based models. * The introduction of knowledge graphs does not always lead to better performance in recommendation systems. For instance, methods such as CKE and KTUP typically perform worse than non-KG methods like LightGCN and SGL. Even KGNN-LS and KGCN cannot consistently outperform SGL in some metrics. This effect is more noticeable when the dataset has a complex KG and sparse interactions. We suggest that some KG-aware recommenders struggle to effectively model complex relational paths and mitigate noise in the KG, resulting in suboptimal KG representation learning and worse performance. On the other hand, LightGCN and SGL focus more on resolving the sparsity problem of user-item interactions with self-supervision signals. §.§ RQ2: Ablation Study §.§.§ Key Module Ablation In this study, we investigate the effectiveness of key modules in our proposed framework from the perspectives of the designed rational masked autoencoding and contrastive learning for recommendation. To compare with the original method, we built four model variants, including: * w/o MAE: removing the generative SSL task of rationale-aware knowledge graph masking and reconstruction. * w/o Rationale-M: replacing the rationale knowledge masking with random masking while keeping the masking size unchanged. * w/o CL: disabling the cross-view contrastive learning task. * w/o Rationale-Aug: replacing the rational graph augmentation with random masking while keeping the masking size unchanged. We report the results of the ablation study in Table <ref> and make the following observations: i) The proposed rationale knowledge masking and reconstruction contributes the most to performance enhancement. This demonstrates that mask&reconstruction is an effective strategy for exploiting highly useful knowledge triplets for recommendation. ii) The rational masking mechanism for both reconstruction and contrastive learning can further improve performance by selecting valuable information and dropping less informative knowledge. iii) The contrastive learning is also beneficial for performance. However, we observed that adding non-rationale augmented graph contrastive learning on the MIND dataset can hurt performance. This indicates that simple intuitive cross-view contrasting is not always effective due to noise in the graph. §.§.§ Sensitivity to Key Hyperparameters We present the results and discussion of the parameter study in Appendix <ref>.
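As a concrete reference for the evaluation protocol above and for the group-wise analyses in the next subsection, the snippet below sketches Recall@20, NDCG@20, and a helper that buckets users by interaction count. This is an illustrative NumPy sketch under our own naming assumptions (including the bin edges, which the paper does not report), not the evaluation code released with the paper.

```python
import numpy as np

def recall_at_k(ranked_items, ground_truth, k=20):
    """Fraction of the user's held-out items that appear in the top-k ranking."""
    hits = len(set(ranked_items[:k]) & set(ground_truth))
    return hits / max(len(ground_truth), 1)

def ndcg_at_k(ranked_items, ground_truth, k=20):
    """NDCG@k with binary relevance."""
    gt = set(ground_truth)
    dcg = sum(1.0 / np.log2(i + 2) for i, it in enumerate(ranked_items[:k]) if it in gt)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(gt), k)))
    return dcg / idcg if idcg > 0 else 0.0

def group_users_by_interactions(user_counts, bins=(5, 10, 20, 50)):
    """Assign each user to one of five groups by interaction count, as in the
    cold-start analysis; the bin edges here are placeholder assumptions."""
    return {u: int(np.digitize(c, bins)) for u, c in user_counts.items()}
```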
§.§ RQ3: Model Benefits Investigation §.§.§ Cold-start Recommendation We conduct a study to evaluate the effectiveness of our model in addressing the common cold-start problem in recommendation systems. We divided users in the Alibaba-iFashion dataset into five groups based on the number of interactions, with smaller group numbers indicating stronger cold-start effects. We then separately tested the performance of our model and several strong baselines in each group and reported the results in Figure <ref>. Our findings demonstrate that our model outperforms the baseline methods in all cold-start groups, indicating its effectiveness in addressing the cold-start problem for a diverse range of users. This can be attributed to the design of the rationale knowledge masked autoencoding and rationale-based cross-view contrastive learning, which highlight useful knowledge for representation learning and contrast cross-view signals. Therefore, our model can effectively alleviate the cold-start issue. §.§.§ Long-tail Item Recommendation We investigate whether our model can improve representation learning for long-tail items. We counted the occurrence of each item and divided all users into five groups based on the average sparsity degree of the items they interacted with. The results are reported in Figure <ref>. Our findings demonstrate that our model consistently outperforms baseline models across the different groups, indicating its effectiveness in addressing data scarcity problems. This can be attributed to the design of rationale mining, which allows the model to better leverage external knowledge and improve representation learning for long-tail items. §.§.§ Recommendation with a small proportion of KG We evaluate the model's capacity to highlight important task-related connections in the knowledge graph. Specifically, we tested the recommendation performance of our model and baseline models under partial knowledge graphs with different keeping ratios ranging from 40% to 70%. We randomly selected a proportion of knowledge triplets from the original KG in the Last-FM and Alibaba-iFashion datasets for knowledge aggregation, and the results are reported in Figure <ref>. Our findings demonstrate that our model can still maintain considerable performance (>95% on Last-FM and >90% on Alibaba-iFashion) with only a small portion of the KG. Compared to baseline models, it shows minimal performance degradation in all cases. This can be attributed to the rationale knowledge masking and reconstruction mechanism, which can effectively distill useful knowledge from the given portion of the KG. Additionally, the cross-view contrastive learning can enhance KG learning with CF signals to alleviate the absence of some knowledge. Overall, the results further validate the rationality of the proposed design. §.§ RQ4: Model Explainability Study We discuss the explainability of the results generated by our model in Appendix <ref>, which provides insights into how it incorporates the KG and rationale knowledge to enhance recommendation. § RELATED WORK §.§ Knowledge-aware Recommender Systems Knowledge graphs are valuable sources of side information for item representation learning and user modeling in recommender systems. Currently, knowledge-aware recommendation methods can be generally categorized into three groups: embedding-based methods, path-based methods, and GNN-based methods. i) Embedding-based methods <cit.> incorporate knowledge graph entity embeddings into user and item representations to enhance recommendation learning.
For example, CKE <cit.> proposes to integrate the modeling of different types of side information for items with collaborative filtering. It encodes a knowledge graph with the transitive KG completion method TransR <cit.> as part of item representations. ii) Path-based methods <cit.> focus on exploiting the rich semantics in relational meta-paths on the KG. For instance, KPRN <cit.> adopts an LSTM to model the extracted meta-paths and aggregates user preference along each path by fully-connected layers. iii) GNN-based methods <cit.> extend GNNs to model the KG and use the learned representations for recommendation. For example, KGAT <cit.> proposes to use a graph attention mechanism to propagate user and item embeddings on the KG, and then apply a multi-layer perceptron to produce the final recommendation score. The line of GNN-based knowledge-aware recommenders <cit.> aims to unify the two paradigms and combine their strengths. GNNs have a powerful ability to capture high-order information, making them effective at extracting useful information from the KG. KGCN <cit.> samples a fixed number of neighbors as the receptive field to aggregate item representations on the KG. KGAT <cit.> leverages graph attention networks (GATs) to weight the knowledge aggregation on the KG by considering the different importance of knowledge neighbors. KGIN <cit.> further considers user latents towards different relations in the KG and injects relational embedding in the aggregation layer to improve performance. GNN-based methods are currently the state-of-the-art solutions due to their ability to exploit rich semantics from the graph and their considerable efficiency. §.§ Self-Supervised Recommendation Incorporating self-supervised learning (SSL) techniques into recommender systems has become a new trend in the research community to address inherent data sparsity problems by leveraging additional supervision signals from raw data <cit.>. Existing studies have explored various SSL techniques for different recommendation tasks. For large-scale industry applications, <cit.> introduces contrastive learning in the two-tower architecture for feature augmentation with the proposed correlated feature masking strategy. SGL <cit.> applies graph contrastive learning to graph collaborative filtering using random augmentation on graphs such as node dropout, edge dropout, and random walk to generate contrastive views and enforce agreement with InfoNCE loss. For sequential recommendation, S3Rec <cit.> aims to augment the sequence itself by masking and adopts the contrast between augmented sequences as an auxiliary task. For social recommendation, MHCN <cit.> performs contrastive learning between user embedding and its social embedding extracted from a sub-hypergraph of the social network. For multi-modal recommender systems, MMSSL <cit.> aims to provide a universal solution for capturing both modality-specific collaborative effects and cross-modality interaction dependencies, allowing for more accurate recommendations. KGCL <cit.> develops graph contrastive learning on the KG to alleviate noise and long-tail problems, while also leveraging additional signals from KG agreement to guide user/item representation learning. MCCLK <cit.> employ cross-view contrastive learning between the KG and interaction graph to mitigate sparse supervision signals. However, we argue that these methods do not sufficiently consider the rationales embedded in the KG. 
By explicitly rationalizing knowledge triplets for recommendation, our method achieves a significant performance improvement compared to these methods. § CONCLUSION In this paper, we presented a novel graph self-supervised rationalization method for knowledge-aware recommendation. Our motivation is rooted in the hierarchical rationality of knowledge triplets. We build our method on attentive knowledge rationalization to weight knowledge triplets, and introduce a novel rational masking and reconstruction module to emphasize rational knowledge. The rational scores were further used to facilitate the knowledge-aware cross-view contrastive learning, where low-scored, less informative knowledge was filtered out as noise. The results of extensive experiments validate the advantages of our method against state-of-the-art solutions. In future work, we will explore more complex methods for knowledge graph rationalization, such as graph structure learning and graph sparsification. This direction can potentially provide more insights into the underlying knowledge graph structure. § APPENDIX §.§ Sensitivity to Key Hyperparameters In this study, we investigate the sensitivity of our model to changes in key hyperparameters, including the masking size k_m, the keep ratio for CL graph augmentation ρ, and the temperature for CL τ. Our analysis reveals that the optimal hyperparameter settings are highly dependent on the characteristics of the underlying data. Specifically, we found that a masking size of 512 is ideal for MIND and Alibaba-iFashion, while 256 is optimal for Last-FM. Moreover, a CL keep ratio of 0.5 is the best choice for Last-FM and Alibaba-iFashion, while a temperature of 0.1 is recommended for MIND, 0.3 for Alibaba-iFashion, and 0.9 for Last-FM. We hypothesize that this difference in optimal temperature is due to the sparsity of the datasets, with denser datasets requiring higher temperatures to avoid false-negative samples. We suggest tuning the masking size and CL keep ratio in the ranges of [128,512] and [0.4,0.6], respectively, as a good starting point for tuning hyperparameters on other datasets. Although the model is relatively robust to small changes in hyperparameters, selecting the optimal settings is still critical for achieving the best performance. §.§ Explainability Study In this section, we examine the interpretability of the model's recommendation results through case studies on knowledge rationalization. Specifically, we group news items in the MIND dataset by their preset categories and obtain the learned knowledge rationale scores for triplets connected to items within the same category. To provide an interpretable perspective, we calculate the average of rationale scores by triplet sets of the same relation r and present the cases in Table <ref>. We select cases from five popular news categories, namely sports, newspolitics, travel, finance, and tv-celebrity. For each category, we showcase two of the relations with the highest average global rationale scores of their associated triplets. Our analysis reveals that the model is capable of effectively capturing the impact of user interests on the KG as rationales. For instance, in the realm of sports news, users tend to focus on league categories and specific teams, and as such, these two types of relations in the knowledge graph are rationalized by the labels of user preferences.
Similarly, the case of newspolitics demonstrates that users' political news preferences often have a strong partisan orientation, and they are also concerned with the positions of political figures. These examples highlight the explainability of our design. By explicitly modeling the hierarchical rationality in the knowledge graph, our approach can differentiate task rationales that reflect user interests. Moreover, the masking-reconstructing mechanism and cross-view rationale contrastive learning techniques help to emphasize and strengthen the rationale connections. This not only enhances the model's interpretability but also improves its performance by leveraging user preferences to make more accurate predictions. In summary, the rationalized knowledge graph and the proposed architecture provide a robust framework for personalized recommendation that considers user preferences and interests in a structured and transparent manner.
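To illustrate how such a case study can be assembled, the snippet below sketches the per-relation averaging of rationale scores within a news category, as described above. The data layout (a list of (head, relation, tail, score) triplets and an item-to-category map) is our own assumption for illustration, not the paper's data format.

```python
from collections import defaultdict

def average_rationale_by_relation(scored_triplets, item_category, category):
    """Average learned rationale scores per relation, restricted to triplets
    whose head entity is an item of the given news category."""
    sums, counts = defaultdict(float), defaultdict(int)
    for head, relation, tail, score in scored_triplets:
        if item_category.get(head) == category:
            sums[relation] += score
            counts[relation] += 1
    averages = {r: sums[r] / counts[r] for r in sums}
    # Report the two relations with the highest average score, as in the case study.
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:2]
```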
http://arxiv.org/abs/2307.02299v1
20230705135811
Why can big.bi be changed to bi.gbi? A mathematical model of syllabification and articulatory synthesis
[ "Frédéric Berthommier" ]
eess.AS
[ "eess.AS", "cs.SD" ]
A simplified model of articulatory synthesis involving four stages is presented. The planning of articulatory gestures is based on syllable graphs with arcs and nodes that are implemented in a complex representation. This was first motivated by a reduction in the many-to-one relationship between articulatory parameters and formant space. This allows for consistent trajectory planning and computation of articulation dynamics with coordination and selection operators. The flow of articulatory parameters is derived from these graphs with four equations. Many assertions of Articulatory Phonology have been abandoned. This framework is adapted to synthesis using VLAM (a Maeda's model) and simulations are performed with syllables including main vowels and the plosives /b,d,g/ only. The model is able to describe consonant-vowel coarticulation, articulation of consonant clusters, and verbal transformations are seen as transitions of the syllable graph structure. Index Terms: articulatory speech synthesis, syllables, articulatory phonology, verbal transformations, consonant clusters § INTRODUCTION Articulatory synthesis is a tool for understanding phonological processes because it involves the definition of articulatory gestures followed by the synthesis of real speech sounds. Its goal is to specify the complete pathway between linguistic structures and the physical world of vocal tract movement and acoustics. Articulatory phonology (AP) <cit.> combined with task dynamics (TD) <cit.> is the reference model in this field. The two main steps are the planning of the gestural score from a phonetic string and the conversion of this plan into the movement of an articulatory model. The gestural score includes the activation period of each articulator and the phasing which constitutes the coordination. Despite its rigorous design, TD has only recently been implemented for VCV synthesis <cit.> with similar results as those discussed here. The complexity of this implementation stems from the distinction between the status of vowels and consonants. Vowels are spatial configurations of the vocal tract that are articulator configurations while consonants are dynamically defined with constriction goal variables. On the other hand, it has long been shown by Carré et al. <cit.> that vowel space emerges from the exploitation of an acoustic tube and that simple rules allow for the synthesis of VCV syllables <cit.>. This theory was physically grounded and extended by Story and Bunton <cit.> to a tube model having more sections. This was constructed from the human vocal tract and it has a more precise locus of constriction but we distinguish it because it is controlled section by section to calculate the area function (see <cit.> for a review about articulatory models). The gap between tube and articulatory models is resolved by the current approach in which consonants and vowels have a unified articulatory-acoustic representation. Schwartz et al. <cit.> have shown with a Maeda's model (named VLAM hereafter) that the plosives /b,d,g/ have their formants localized around the vowel space.
The first step in this approach is to establish a bijection between the articulatory space of this model and the vowel space, which allows us to define trajectories between points. Thus, a trajectory drawn at this level has an acoustic image which is a modulation of formants. This direct path between planning and acoustic effects greatly simplifies control; in practice, the small set of parameters of the model can be adjusted manually by ear. This model differs from other approaches in several ways. First of all, by looking for mathematical relationships as AP/TD does, this approach is the opposite of the large simulations undertaken to find the same type of articulatory trajectories for syllable modeling <cit.>. It meets the demand for lower energy consumption, and the mathematical relationships are valuable in themselves. By giving a structural point of view, they might constrain research on the biological foundations of language much more than any AI approach <cit.>. The vocal tract model is minimalist and provides just the features necessary to produce syllables with main vowels and /b,d,g/. It has a jaw, unlike the tube models. VLAM formants are obtained with a classical transmission line method <cit.>. The source model is also rudimentary, and the only post-processing applied at the output of the synthesis is a multiplication by an envelope depending on the syllable structure. This framework is supposed to be adaptable to different VT geometries and other phonemes. Coarticulation mechanisms are revisited, and a separability between articulators assigned to vowel trajectories and those assigned to consonant trajectories has been found, as with the tube models in the line opened by <cit.>. We propose to plan syllable and word production in a bottom-up manner using graphs and to interpret verbal transformations as transformations of these graphs. These graphs contain all the information for making the selection and coordination of the 7 VLAM articulators (see Tilsen's discussion <cit.> on the concepts of selection and coordination in the context of AP). Syllabic segmentation is not well understood and it is a hot topic currently investigated with DNNs <cit.>. To understand syllabification, the present paper offers a new generative formalism which reflects production processes without much reference to their biological implementation (as <cit.>) but physically well grounded (as <cit.>).
The setting of the model is based on the average Ω and the range Ψ_1 of each parameter (here given in VLAM) plus a fixed angle Ψ_2 (Table <ref>). The value of each articulatory parameter P_i, i=1..7 is computed independently for a given point (ρ_V,θ_V) of the complex domain. The coordination between the P_i is provided by the product of the same complex conjugate ρ_V e^-iθ_V with each fixed complex value Ψ_i of the VLAM: P_i-Ω_i = Re[Ψ_i ρ_V e^-iθ_V], P-Ω = Re[Ψρ_V e^-iθ_V] = ρ_V Ψ_1 cos(Ψ_2 - θ_V). Let us remark that the coordination function is a simple cosine. Consistently, the Ψ_2 angles are set to have the angles of the corner vowels /iau/ at θ={5π/3,π,π/3} and ρ=1. This is obtained by assigning a cardinal vowel to each articulator such that P_i-Ω_i=Ψ_1i for this articulator when θ=Ψ_2i. To cover the vowel space and display the whole F1-F2-F3 surface, we choose ρ_V ∈[0,1] and θ_V ∈[0,2π[. An extension to a crown coding for consonants is made with ρ_C ∈[0,1.2] and θ_C ∈[0,2π[. This surface is shown in Fig. <ref> c. Trajectories for vowels and consonants are planned in the complex plane by forming arcs between 2 points (ρ_1,θ_1) and (ρ_2,θ_2): z(t) = (1-ρ(t)) ρ_1 e^iθ_1 + ρ(t) ρ_2 e^i(θ_2 + (ν/K)θ(t)), where t varies between 0 and nT, ρ(t)=cos(θ(t)/2) determines the velocity profile, and ν=±1 and K are shape parameters. When K is large (K=30 for vowel arcs and K=10 for consonant arcs), the trajectories become straight lines (see Figure <ref> a) and we recognize Öhman's equation <cit.>. These parameters are adapted to plan formant trajectories and not to reproduce the real dynamics of the articulators as in TD. There are two orientations of these arcs with origin a and arrival b: nT_1 with θ(t)=π t/nT, a=2, b=1, or nT_2 with θ(t)=π(t/nT - 1), a=1, b=2. The number of periods n is important to fix the duration of the vowel arcs as equal to that of the consonant branches. It is the number of consonants plus one (see below). At each time t, the set of parameters is coordinated with Eq. <ref>, but the modulation is obtained piecemeal via the chaining of time periods of duration nT without discontinuity. Within each period, the trajectories of the parameters are the real part of the product of the complex column vector Ψ and the complex line vector z̅(t), giving matrices of dimension 7×nT that are concatenated: P(t) - Ω = Re[Ψz̅(t)] = (1-ρ(t)) ρ_1 Ψ_1 cos(Ψ_2 - θ_1) + ρ(t) ρ_2 Ψ_1 cos(Ψ_2 - θ_2 - (ν/K)θ(t)). The trajectories during the vocalic segments and pauses are defined by the previous equation, whereas the superposition of vocalic and consonant trajectories (named a superimposed segment) is obtained by adding a selection process splitting the set of 7 components into two parts coordinated separately but synchronized. The coarticulation between vowels and consonants is due to the superposition of 2 branches having the same duration nT together with the same departure and arrival points, which are always vowels (V_o,V_e): P(t) - Ω = Re[(S_v⊙Ψ)z̅_v(t) + (S_c⊙Ψ)z̅_c(t)], where S_v and S_c are the two exclusive selection vectors composed of zeros and ones for selected articulators with S_v+S_c=1 (a column vector of 7 ones). These are multiplied with the complex vector Ψ with a Hadamard product. These vectors depend on the consonant(s), thus avoiding any weighting process. Mathematically, the main effect of superposition is to increase the planning dimension from 2 (1 complex number) to 4 (2 complex numbers). At each instant, two sets of articulators are coordinated separately.
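To make the planning equations concrete, the sketch below implements the coordination function, the arc trajectories, and the vowel-consonant superposition in NumPy. The Ω, Ψ_1, Ψ_2 values are placeholder assumptions (the actual VLAM settings are given in the paper's table), and the function names are ours; this is an illustration of the four equations, not the author's code.

```python
import numpy as np

# Placeholder VLAM-like settings: mean, range, and fixed angle per articulator (7 of them).
OMEGA = np.zeros(7)                                        # averages (illustrative)
PSI1 = np.ones(7)                                          # ranges (illustrative)
PSI2 = np.linspace(0.0, 2 * np.pi, 7, endpoint=False)      # fixed angles (illustrative)
PSI = PSI1 * np.exp(1j * PSI2)                             # complex articulator constants

def coordinate(z):
    """Eqs. (1)-(2): articulatory parameters for a planning point z = rho * exp(i*theta)."""
    return OMEGA + np.real(PSI * np.conj(z))

def arc(rho1, theta1, rho2, theta2, n_samples, nu=1.0, K=30.0, orientation=2):
    """Eq. (3): planned arc between points 1 and 2 in the complex plane.
    orientation=1: theta(t) = pi*t/nT (origin is point 2); orientation=2: origin is point 1."""
    s = np.linspace(0.0, 1.0, n_samples)
    th = np.pi * s if orientation == 1 else np.pi * (s - 1.0)
    r = np.cos(th / 2.0)                                    # velocity profile rho(t)
    return (1.0 - r) * rho1 * np.exp(1j * theta1) + r * rho2 * np.exp(1j * (theta2 + (nu / K) * th))

def superimpose(z_vowel, z_cons, S_c):
    """Eq. (4): split the 7 articulators into a consonant set (S_c) and a vowel set (1 - S_c),
    coordinate each set with its own planning trajectory, and sum the real parts."""
    S_c = np.asarray(S_c, dtype=float)
    S_v = 1.0 - S_c
    traj_v = np.real(np.outer(S_v * PSI, np.conj(z_vowel)))
    traj_c = np.real(np.outer(S_c * PSI, np.conj(z_cons)))
    return OMEGA[:, None] + traj_v + traj_c                 # shape (7, n_samples)

# Example: parameters of a back-vowel-like point, and a vowel arc towards /i/ (theta = 5*pi/3).
P_point = coordinate(0.8 * np.exp(1j * np.pi))
z_vowel_arc = arc(0.5, 5 * np.pi / 3, 1.0, 5 * np.pi / 3, 200, K=30.0)
```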
This increase in planning dimension is observable in the formant space: while the trajectory of a diphthong remains inside the surface defined by the coordination function, the trajectories of the superimposed segments leave it (see Figure <ref> c). §.§ Assembling syllable graphs Syllabification is modeled by graphs embedding all the information necessary for the selection and coordination of articulators. These graphs describe nodes and arcs from which the previous equations compute a continuous planning and then a continuous flow of parameters, even in silent periods. This is a kind of micro-grammar of the temporal encoding of syllable gestures, as proposed by Gafos <cit.>. This is generative, and syllable graphs are concatenated in a coherent way to make words, which are separated by pauses. The articulatory model is then permanently animated. This is in contrast to AP, which uses the event-based concept of activation vs. inactivation. This requires some clock mechanism (the coupling of oscillators in AP) to define the beginning and end of periods of activation. In contrast, timing is considered here as an inherent property of the model structure (as proposed by Fowler <cit.>) and the whole duration is externally controlled by a single parameter T. This determines the formant trajectories, and the superimposed envelope structure can occlude part of this information. Nevertheless, the envelope adds temporal information and the lip movements remain visible for guiding the perceptual recovery of the produced syllabic structures <cit.>. The graph G=(V,E) of a syllable is a concatenation of superimposed segments and purely vocalic segments (e.g. diphthongs). Each superimposed segment has origin and end nodes which are the vowels V_o and V_e (i.e. without selection, because all articulators are coordinated together for vowels). These vowels are not always present in the phonetic chain and must be added as nodes, as we will see, to complete the graph. When they are absent, they can be far from the neutral vowel (with ρ=0 and P=Ω): they have the same θ as the closest vowel but ρ is multiplied by specific coefficients δ_o and δ_e (which will be set later). Thus, we have V_o=(δ_o·ρ_V,θ_V) and V_e=(δ_e·ρ_V,θ_V). The value of δ_o determines the degree of anticipation of visible vowel-related lip movements before the onset of the consonant. Let us explicitly define a CV graph. This one has 3 nodes V={V_o,C,V} having locations in the complex plane L={(δ_o·ρ_V,θ_V),(ρ_C,θ_C),(ρ_V,θ_V)} and 4 arcs E={(V_o,C),(C,V),(V_e,V),(V,V)} having temporal properties T_p={T_1,T_2,2T_2,T}, the last one being a stationary point of duration T. In this case, P-Ω = Re[Ψρ_V e^-iθ_V]. These joint properties are necessary to compute the two trajectories z_v(t) and z_c(t) at the planning level. Then, the superposition is applied from the selection properties associated with each arc, S={S_c,S_c,S_v,1}, the last term being neutral (without selection) because this is a stationary vowel. The superposition between vowel and consonant branches is a coproduction mechanism allowing for coarticulation similar to that promoted by <cit.> (see <cit.> for a review of coarticulation). It removes any weighting of consonant articulation over the vowel as in <cit.> and its implementation remains as simple as in <cit.>. For a CVC, the construction is symmetric thanks to the end vowel V_e defined with its free centralization coefficient δ_e.
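Before turning to the CVC and cluster graphs, the CV graph just defined can be written down concretely. Continuing the sketch started above (it reuses the arc, superimpose, OMEGA and PSI helpers defined there), the dictionary layout, the branch durations, and the vowel/consonant locations below are illustrative assumptions rather than the paper's exact settings; the /b/-like location and the selection S_c={1,2,6} follow the values reported later in the simulations.

```python
import numpy as np

# A CV graph: nodes with complex-plane locations (rho, theta) and arcs with durations in samples.
cv_graph = {
    "nodes": {
        "V_o": (0.5 * 1.0, 5 * np.pi / 3),   # delta_o * rho_V, theta_V (anticipated vowel /i/)
        "C":   (1.0, np.pi / 3),             # consonant location on the crown (/b/-like)
        "V":   (1.0, 5 * np.pi / 3),         # target vowel /i/
    },
}

def unroll_cv(graph, S_c, n=100):
    """Parameter flow of a CV syllable: the consonant branch V_o -> C -> V is superimposed
    on the vowel branch V_o -> V, then a stationary vowel segment is appended."""
    nd = graph["nodes"]
    z_c = np.concatenate([
        arc(*nd["V_o"], *nd["C"], n, K=10.0),     # consonant branch, first arc
        arc(*nd["C"], *nd["V"], n, K=10.0),       # consonant branch, second arc
    ])
    z_v = arc(*nd["V_o"], *nd["V"], 2 * n, K=30.0)          # vowel branch over the same window
    seg1 = superimpose(z_v, z_c, S_c)                        # coarticulated CV segment
    z_stat = np.full(n, nd["V"][0] * np.exp(1j * nd["V"][1]))
    seg2 = OMEGA[:, None] + np.real(np.outer(PSI, np.conj(z_stat)))   # stationary vowel, no selection
    return np.concatenate([seg1, seg2], axis=1)              # shape: (7, total samples)

P_flow = unroll_cv(cv_graph, S_c=[1, 1, 0, 0, 0, 1, 0])      # S_c = {1,2,6}, as used for /b/
```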
The value of δ_e may depend on the language, and it is well known that French tends to produce a schwa-like vocoid attached to the second consonant <cit.>. It is modeled with a low δ_e=0.5 to be salient and with an intermediate δ_e=0.7 to be audible. This is an important new property because the structure of a CVC now has two equivalent superimposed segments. Contrary to the AP proposal, there is no synchronous vs. sequential implementation of the consonants depending on their onset or coda position. A degree of asymmetry can also be added by varying the period T as a prosodic feature. This feature is flexible, but the superimposed segments need strict synchronization, and lengthening could occur only during V before its coarticulation with C. The graph has 5 nodes V={V_o,C_1,V,C_2,V_e} with locations L={(δ_o·ρ_V,θ_V),(ρ_C1,θ_C1),(ρ_V,θ_V),(ρ_C2,θ_C2),(δ_e·ρ_V,θ_V)}. The rest is easy to deduce from the CV graph according to the symmetry. The consonant clusters have a graph relying on the sequencing of several consonants in the same branch. The vocalic anchor point V_o is set as previously with the same anticipation property (e.g. lip protrusion can appear before the first consonant onset). This timing differs from AP, in which the beginning of the vowel is between the two consonants. A CCV has only 4 nodes V={V_o,C_1,C_2,V} with locations L={(δ_o·ρ_V,θ_V),(ρ_C1,θ_C1),(ρ_C2,θ_C2),(ρ_V,θ_V)}, which can be defined jointly for the two consonants (this is another useful feature), and 5 arcs E={(V_o,C_1),(C_1,C_2),(C_2,V),(V_e,V),(V,V)} having timing properties T_p={T_1,T_2,T_2,3T_2,T}. The same selection vector is now applied 3 times with S={S_c,S_c,S_c,S_v,1}, and it is defined specifically for each consonant pair. Despite a similar graph structure, the coarticulation between consonants is more complex because it involves interactions at both levels (location of nodes and selection). The concatenation of syllables to make words is achieved by the '.' operator, or it is implicit as for V.CV. There are 3 cases {xV.Cx, xC.Cx, xC.Vx} to be treated. These are based on the assignation of the anchoring vowels depending on the context: {V=V_o, V_e=V_o, V_e=V}. Conversely, the pause is defined as the lack of concatenation and coarticulation between two syllables. This is consistently defined as a diphthong transition between the previous V_e and the next V_o with an arbitrary duration T_p. These two rules combined have a direct consequence on the syllabification of a succession of syllables VC. There is a transition (which is a verbal transformation) when the time pressure cancels T_p, because two consecutive syllables become concatenated with V_e=V (the third case seen above, with δ_e=1). At this point there is no difference between the representations of x.VC.VC.x and xV.CV.Cx, because everything becomes symmetric (δ_o=1 according to the first case above). This does not involve a supplementary temporal organizer such as the phase vs. antiphase coupling mechanism of AP, nor a synchronous vs. sequential opposition between onset and coda consonants. This mechanism is also much simpler than that of <cit.>. Here, the gain of switching from VC to CV is a cancellation of T_p associated to a verbal transformation represented by graph transitions in Figure <ref> A. The classical transformation of /ib/ into /bi/ is given in the supplement. § SIMULATIONS We give several examples in which the model successfully represents human syllable production.
The goal is not to reliably estimate the articulatory parameters because the generated trajectories aim to straightly produce formant modulations and output speech. Moreover, playing with the shape parameters ν and K indicates there is a great flexibility of trajectory shapes. The dynamics (determined by ρ(t) in Eq. <ref>) is somewhat more important but the simulations showed that the key for having correct consonant percepts is the location of nodes in the complex plane combined with the choice of the selection vectors. The model has been tuned for the consonants /b,d,g/ and we found that /b/ is easy to reach with (ρ_b=1,θ_b=π/3) and a selection vector S_c=(1,1,0,0,0,1,0)^ T noted with the indices of ones S_c={1,2,6} hereafter. This means that not only lips and jaw are involved for /b/ but that the tongue body parameter is also engaged (see Fig. <ref> b). The tongue body is making an unexpected front/back movement during /b/ which corresponds to the trough effect <cit.>. In the continuation of the previous explanation about the verbal transformation, when /i.bi/ is pronounced after the cancellation of T_p, the tongue does not stop to move through the consonant. In simulation, the tongue movement as well as the acoustic traces are due to the fact that the placement of the /b/ in the complex plane is the same as for the /u/ vowel. When the selection process is applied, the tongue body follows the z_c(t) trajectory towards /u/ while the vowel articulators S_v={3,4,5,7} are driven by a fairly constant z_v(t) around /i/. The frequency domain trajectory is composite and this is the reason why the locus of /b/ is shifted towards /i/. This raises the question of the degree of compatibility of the model with the locus equations <cit.>. The tuning of /b,d,g/ has been realised by hearing at first and this led to find the conditions of coarticulation of /d/ and /g/ with all vowels. For the /g/ this is well known that it is velar for back vowels and palatal for front vowels. This led to the locations (ρ_g_v=1.2,θ_g_v=π/3) and (ρ_g_p=1.1,θ_g_p=23π/12) combined with the selection vector S_c={1,2,3,4} involving tongue and jaw. The location of /d/ depends on the following vowel around (ρ_d=1.2,θ_d=3π/2) and /d/ has same selection vector S_c={1,2,3,4}. The Figure <ref> is constructed by taking the F2 values 30 ms after the onset for the 8 vowels and δ_o=0.5. This is similar to the classical observation but with a downward shift of the velar /g/ which could be due to geometric differences between the VLAM and the human vocal tract. The tuning of the six pair clusters of /b,d,g/ follows the same principle, but there is a special grouping of articulators when /b/ is involved in the pair. In this case S_c={1,2,3,6} and otherwise this is S_c={1,2,3,4}. When involved, the /g/ is systematically palatal. The position of the /d/ slightly depends on the vowel as previously. Some languages systematically use stop pairs but if statistics are available for articulatory data (see <cit.>), there is no description of formant trajectories. This is thus difficult to evaluate our simulations with this criterion. We must admit this is challenging to form all pairs with all vowels in onset and coda position and some confusions persist after this tuning. § CONCLUSION ABOUT THE TITLE Finally, we simulate the verbal transformation from big.bi to bi.gbi of the title as an overview of the model properties. 
The graphs of /big/ and /bi/ are concatenated with δ_o=δ_e=0.7 to insert, as in <cit.>, an audible schwa-like vocoid between the two consecutive consonants (Figure <ref> B). When δ_o=δ_e tends to 1, the graph reorganization can occur because two successive superimposed segments of duration 2×2T anchored to the same vowel V_o=V_e=V can fuse into a single one of duration 3T. First, the gain of consonant clustering is a reduction of one period T. The graph complexity is correspondingly reduced, with a decrease in the number of nodes (through the loss of V) as well as in the number of arcs (Figure <ref>). Second, gestures are reorganized because the selection process switches from two successive vectors S_c={1,2,3,4} for /g/ and S_c={1,2,6} for /b/ to a single vector S_c={1,2,3,6} for /gb/. This is clearly visible in Figure <ref>, where there is only one gesture for /gb/. Following Sato et al.'s claim <cit.> of a production constraint in perceptual verbal transformation, syllabification prefers the most compact form, bi.gbi instead of big.bi.
http://arxiv.org/abs/2307.00407v1
20230701184134
WavePaint: Resource-efficient Token-mixer for Self-supervised Inpainting
[ "Pranav Jeevan", "Dharshan Sampath Kumar", "Amit Sethi" ]
cs.CV
[ "cs.CV", "cs.AI", "I.2.10; I.4.0; I.4.4; I.4.3; I.4.5; I.4.1; I.4.2; I.4.6; I.4.7;\n I.4.8; I.4.9; I.4.10; I.2.10; I.5.1; I.5.2; I.5.4" ]
Image inpainting, which refers to the synthesis of missing regions in an image, can help restore occluded or degraded areas and also serve as a precursor task for self-supervision. The current state-of-the-art models for image inpainting are computationally heavy as they are based on transformer or CNN backbones that are trained in adversarial or diffusion settings. This paper diverges from vision transformers by using a computationally-efficient WaveMix-based fully convolutional architecture – WavePaint. It uses a 2D-discrete wavelet transform (DWT) for spatial and multi-resolution token-mixing along with convolutional layers. The proposed model outperforms the current state-of-the-art models for image inpainting on reconstruction quality while also using less than half the parameter count and considerably lower training and evaluation times. Our model even outperforms current GAN-based architectures on the CelebA-HQ dataset without using an adversarially trainable discriminator. Our work suggests that neural architectures that are modeled after natural image priors require fewer parameters and computations to achieve generalization comparable to transformers. § INTRODUCTION Image inpainting refers to the process of filling of missing parts of an image (blemishes, holes, and other defects) realistically to match the available context, thereby restoring a degraded image. It requires implicitly modeling large scale structures in natural images and an ability to perform image synthesis. State-of-the-art inpainting models are based on deep neural networks trained in a self-supervised and adversarial manner by automatically generating training samples from large image datasets by randomly masking parts of the image. Some image reconstruction tasks, such as inpainting with large masks, require networks to have large effective receptive fields <cit.>. Convolutional neural networks (CNN) require deep architectures (a large number of layers) for increasing the receptive fields. On the other hand, using self-attention to access all the pixels of an image right from the first layer gives transformers large receptive fields. However, the quadratic complexity with respect to sequence length (number of patches) introduces an enormous computational burden on transformers. Moreover, transformers require larger training data than CNNs since they lack the inductive bias of spatial equivariance. The search for efficient models that can mix global spatial information while retaining the inductive bias of CNNs has led to the development of token-mixing models such as PoolFormer <cit.>, ConvMixer <cit.> and WaveMix <cit.>, which use pooling, depth-wise convolutions and 2D-discrete wavelet transform (2D-DWT), respectively. These alternatives consume a fraction of the resources compared to transformers to achieve competitive generalization in tasks such as classification and segmentation. The performance of these models on image generation or restoration tasks has not been evaluated. Our model is a neural architecture that is inspired by WaveMix <cit.> and ConvMixer <cit.>. We investigated the application of the WaveMix architectural framework to the task of image inpainting with suitable adaptations to the previously proposed architectures.
This choice is motivated by the success of WaveMix in approaching the state-of-the-art (SOTA) for different datasets on the task of parameter-efficient image classification and segmentation by modeling additional inductive priors of images, such as scale invariance. Specifically, we have worked on large mask inpainting, where the mask occludes a substantial and non-trivial part of the image, but its shape is known. We have not worked on blind mask inpainting, where the model does not see the mask. Sending the mask to the model is necessary in the large-mask setting for the model to know where the mask is and where to fill information. Our contributions are summarized below: * We present WavePaint, a token-mixing network modeled after natural image priors that can perform image inpainting. The network is based on the recently proposed WaveMix architecture, which uses 2D-discrete wavelet transform for spatial token-mixing. We also employ depth-wise convolution in our network for additional token-mixing. The presence of spatial token-mixing enables the model to have faster receptive field expansion compared to CNNs, which helps in better image reconstruction through access to the global context of the image. * The use of a parameter-free 2D-DWT and parameter-efficient depth-wise convolution helps WavePaint reconstruct images without the need for a large number of model parameters. WavePaint with 5M parameters can outperform much larger models such as LaMa (27M) and CoModGAN (109M) <cit.> on the CelebA-HQ dataset across multiple mask sizes. It is able to achieve these results consuming fewer resources and less time. * WavePaint does not need adversarial or diffusion-based training routines, which are slow. The ability of wavelet token mixing to generate realistic images from masked ones shows that we can develop more efficient neural networks for image generation. * Complicated multi-stage models have been proposed that generate intermediate predictions which are further processed to restore the missing parts <cit.>. Our model reconstructs the image using a simple single-stage network. * We show that utilizing natural image priors in neural architectural design may be the way forward to avoid large computational costs and training datasets. § RELATED WORKS Mask-Aware Dynamic Filtering (MADF) <cit.> uses an encoder-decoder framework to learn multi-scale features for missing regions in the encoding phase. It adopts Point-wise Normalization (PN) in the decoding phase by considering the statistical nature of features at masked points. It does not use adversarial training with a discriminator. §.§ Generative Adversarial Networks Co-ModGAN <cit.> is a GAN model which introduces variability into the generated outputs by integrating input image-conditional and unconditional generators. It combines an unconditional style vector with an input-conditioned style vector through a linear transformation into a single modulated output. The conditional vector is obtained from an encoder network, and the unconditional vector is obtained by passing a noise vector through a pre-trained FCN, as done in StyleGAN <cit.>. Finally, this combined output is passed through the decoder to generate the output. Image completion with transformer (ICT) <cit.> is a transformer-CNN hybrid model that uses transformers to model the long-range relationships in images to recover pluralistic coherent structures together with coarse textures, and uses a CNN for texture replenishment.
Mask-Aware Transformer <cit.> uses multi-head contextual attention for long-range dependency modeling by exploiting valid tokens indicated by a dynamic mask for directly processing high-resolution images. It also proposed a modified transformer block to increase the stability of large mask training. LaMa <cit.> uses Fast Fourier Convolution (FFC) blocks to understand the local and global context of an image. The use of FFC helps in having an image-wide receptive field. The use of FFC can be considered as a token-mixing operation similar to WaveMix, where the fast Fourier transform is used for spatial token-mixing. It also uses a high-receptive-field perceptual loss and large training masks. WaveFill <cit.> uses 2D-DWT to decompose images into multiple frequency bands and fills the missing regions in each frequency band separately. It applies an L1 reconstruction loss to the decomposed low-frequency bands and an adversarial loss to the high-frequency bands to mitigate inter-frequency conflicts, and also uses a normalization scheme to align multi-frequency features. §.§ Diffusion Models Diffusion models use a T-fold pass through a fixed network to go from completely random noise to a coherent and contextually consistent image. Even though the overall performance of diffusion models is excellent, the training and inference processes are extremely time-consuming. Latent diffusion model (LDM) <cit.> works on a lower-dimensional feature space rather than the image space to address the time-consuming nature of diffusion training. The model uses an encoder-decoder architecture with the slow diffusion step at the neck of the chain to speed up the entire network. RePaint <cit.> is a denoising diffusion probabilistic model (DDPM) based inpainting approach which employs a pretrained unconditional DDPM as the generative prior. It only alters the reverse diffusion iterations by sampling the unmasked area to condition the generation process. Additionally, they perform re-sampling on the generated output at each step, by noising and successively de-noising a fixed number of times, in order to get a coherent image. Thus, the model produces high-quality and diverse output images for any masked image. § WAVEPAINT ARCHITECTURAL FRAMEWORK Observing the success of WaveMix and ConvMixer, which use 2D-DWT and depth-wise convolutions respectively for parameter-efficient token-mixing, we have created a neural architecture that can inpaint masked images using these token-mixing operations. The ability of these token-mixers to impart rapid receptive field expansion from the initial layers itself helps the model grasp the global context faster than conventional CNN-based networks. Unlike other popular models for image inpainting that use diffusion or adversarial training, our model has a simple single-network architecture and can perform well without the need for a discriminator network. §.§ Overall architecture The input image x∈ℝ^H× W × 3 is masked by a binary mask m∈ℝ^H× W × 1 that is generated from a mask generator. The masked image is denoted as x⊕ m. The mask m is concatenated with the masked image x⊕ m, resulting in a 4-channel input x̂∈ℝ^H× W × 4 that is passed to the model as shown in Figure <ref>. The network consists of a series of M Wave modules which process the input x̂ and give the output ŷ∈ℝ^H× W × 3. ŷ is multiplied by the inverted binary mask 1 - m to hide the unmasked areas of the output and retain only the inpainted parts predicted by the model.
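A minimal sketch of this outer masking-and-composition step is given below in PyTorch. The mask convention (m = 1 on known pixels, 0 on holes) and the function name are our own assumptions for illustration, not the released WavePaint code; each Wave module is assumed to take the 3-channel tensor and the mask, concatenating them internally as described in the next subsection.

```python
import torch

def wavepaint_forward(wave_modules, x, m):
    """x: (B, 3, H, W) image; m: (B, 1, H, W) binary mask, assumed 1 = known pixel, 0 = hole.
    wave_modules: a sequence of Wave modules, each callable as module(tensor, mask)."""
    masked = x * m                       # masked input image (holes zeroed out)
    y = masked
    for module in wave_modules:          # series of M Wave modules
        y = module(y, m)                 # each module concatenates the mask internally
    return masked + y * (1.0 - m)        # keep known pixels, use the model output only in the holes
```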
The masked output is then added back to the masked input image, which fills in the unmasked areas and creates the final inpainted image y∈ℝ^H× W × 3. This ensures that the model only fills the masked areas and does not change the pixel information of the unmasked parts. §.§ Wave Modules Proper inpainting requires global context information of the image. WaveMix has shown rapid expansion of receptive fields from very early layers <cit.>. We therefore use 4 WaveMix blocks in series in each of the Wave modules to process the image and get global context. This is further aided by a depth-wise convolution layer, which helps with spatial token-mixing at high parameter-efficiency <cit.>. Denoting the input and output tensors of the Wave module by x̂_in and x̂_out, respectively; the convolution operations by c_1 and c_2 and their respective trainable parameter sets by θ_1 and θ_2; the series of WaveMix blocks by WB; DepthConv by DC; the Decoder by D; concatenation along the channel dimension by ⊕; and point-wise addition by +, the operations inside a Wave module can be expressed using the following equations: x̂_0 = x̂_in⊕ m ; x̂_0∈ℝ^H× W × 4, x̂_1 = c_1(x̂_0, θ_1) ; x̂_1∈ℝ^H/2× W/2 × C, x̂_2 = WB(x̂_1) ; x̂_2∈ℝ^H/2× W/2 × C, x̂_3 = DC(x̂_2) ; x̂_3∈ℝ^H/2× W/2 × C, x̂_4 = x̂_3⊕x̂_1 ; x̂_4∈ℝ^H/2× W/2 × 2C, x̂_5 = D(x̂_4) ; x̂_5∈ℝ^H× W × C/2, x̂_6 = x̂_5⊕x̂_in ; x̂_6∈ℝ^H× W × (C/2+3), x̂_7 = c_2(x̂_6, θ_2) ; x̂_7∈ℝ^H× W × 3, x̂_out = x̂_7 + x̂_in ; x̂_out∈ℝ^H× W × 3. Each Wave module receives the input x̂_in∈ℝ^H× W × 3 and the mask m, which are concatenated to create x̂_0 (<ref>). x̂_0 is sent to a convolution layer c_1 that reduces its feature resolution by half and increases the channel dimension to C (<ref>). This feature map x̂_1 is sent to a series of 4 WaveMix blocks for token-mixing (<ref>). The output from the WaveMix blocks, x̂_2, is further passed through a DepthConv module where the feature maps undergo further spatial token-mixing from the depth-wise convolution (<ref>). A skip connection from c_1 is concatenated with the output of the DepthConv module, x̂_3, which increases the channel dimension of x̂_4 to 2C (<ref>). This output is further passed through a Decoder network which increases the resolution of the feature maps back to the original resolution (<ref>). The Decoder layer also reduces the number of channels to C/2, and the feature maps x̂_5 are again concatenated with the input x̂_in (<ref>). The output after concatenation, x̂_6, is then passed to a final convolution layer c_2 to generate the output x̂_7 (<ref>). A residual connection is also provided from the input for ease of gradient flow (<ref>), and the resultant feature maps x̂_out are the final output of the Wave module. §.§ WaveMix Blocks The WaveMix block <cit.> is the fundamental building block of the WaveMix architecture, which allows multi-resolution token-mixing of information using 2D-DWT. This helps in a rapid expansion of the receptive field. It also reduces the computational burden because the 2D-DWT decreases the input resolution by half, so further processing by the multi-layer perceptron (MLP) is faster and cheaper. The DWT helps in lowering the number of model parameters significantly, as it lacks any parameters, while promoting global context understanding even on a shallow network. We have used the WaveMix block with one level of 2D-DWT using the Haar wavelet. Details of the operations inside the WaveMix block are given in <cit.>. §.§ DepthConv DepthConv employs a depth-wise convolution operation followed by a GELU non-linearity and batch-normalization, as shown in Figure <ref>.
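The data flow of the nine equations above can be sketched in PyTorch as follows. The WaveMix block internals are abbreviated to a placeholder (they follow the WaveMix paper), and layer hyperparameters such as the kernel sizes of c_1 and c_2 are illustrative assumptions rather than the exact WavePaint configuration.

```python
import torch
import torch.nn as nn

class WaveModule(nn.Module):
    """One Wave module: concatenate the mask, downsample, token-mix, upsample, residual output."""
    def __init__(self, C=128, wavemix_block=None, num_blocks=4):
        super().__init__()
        self.c1 = nn.Conv2d(4, C, kernel_size=3, stride=2, padding=1)          # halves H and W
        blocks = [wavemix_block() for _ in range(num_blocks)] if wavemix_block else [nn.Identity()]
        self.wavemix = nn.Sequential(*blocks)                                   # 4 WaveMix blocks (placeholder)
        self.depthconv = nn.Sequential(                                         # DepthConv: depth-wise conv + GELU + BN
            nn.Conv2d(C, C, kernel_size=5, padding=2, groups=C),
            nn.GELU(),
            nn.BatchNorm2d(C),
        )
        self.decoder = nn.Sequential(                                           # Decoder: upsample, 2C -> C/2 channels
            nn.ConvTranspose2d(2 * C, C // 2, kernel_size=2, stride=2),
            nn.BatchNorm2d(C // 2),
        )
        self.c2 = nn.Conv2d(C // 2 + 3, 3, kernel_size=3, padding=1)

    def forward(self, x_in, m):
        x0 = torch.cat([x_in, m], dim=1)        # concatenate the mask (4 channels)
        x1 = self.c1(x0)                        # downsample to H/2 x W/2 with C channels
        x2 = self.wavemix(x1)                   # WaveMix token-mixing
        x3 = self.depthconv(x2)                 # depth-wise spatial token-mixing
        x4 = torch.cat([x3, x1], dim=1)         # skip connection from c1 (2C channels)
        x5 = self.decoder(x4)                   # back to H x W with C/2 channels
        x6 = torch.cat([x5, x_in], dim=1)       # concatenate the module input
        x7 = self.c2(x6)                        # project back to 3 channels
        return x7 + x_in                        # residual output of the Wave module
```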
In DepthConv, we use a depth-wise convolution with a kernel size of 5, which is smaller than the kernel size used in ConvMixer models. This was done to further decrease the parameter count. §.§ Decoder The Decoder module is used to up-sample the feature maps back to the original input resolution of the Wave module. It comprises a transposed convolution layer followed by batch-normalization. The transposed convolution layer also reduces the number of channels by a factor of 4, from 2C to C/2. § EXPERIMENTS AND RESULTS §.§ Datasets, Loss Function and Metrics We use the CelebA-HQ <cit.> and ImageNet <cit.> datasets (under MIT Licenses) for our experiments. We use images of size 256×256 for CelebA-HQ and 224×224 for ImageNet experiments. Validation was performed on the entire validation set of each dataset. We followed the same mask generation policy employed in LaMa <cit.> and used their settings to generate narrow, medium and wide masks. We took the same 26,000 train images and 2,000 test images from CelebA-HQ that LaMa used for its CelebA-HQ experiments. Learned perceptual image patch similarity (LPIPS) <cit.> and Fréchet inception distance (FID) <cit.> are reported as metrics since L1 and L2 distances are not enough to compare inpainted images with large masks, where multiple natural completions are possible. Inference throughput on a single GPU is reported in frames/sec (FPS). We used a hybrid loss L_hybrid to optimize the model parameters. Since we did not employ a discriminator for adversarial training, no adversarial loss was used. We used a weighted sum of L_1 (mean absolute error), L_2 (mean square error) and L_LPIPS as shown below: L_hybrid = (1 - α)L_1 + αL_2 + L_LPIPS. §.§ Implementation details Due to limited computational resources, the maximum number of training epochs was set to 300 for CelebA-HQ and 50 for ImageNet experiments. All experiments were run on a single 80 GB Nvidia A100 GPU. We used the AdamW optimizer (α = 0.001, β_1 = 0.9, β_2=0.999, ϵ = 10^-8) with a weight decay of 0.01 during the initial epochs and then used SGD (stochastic gradient descent) with a learning rate of 0.001 and momentum = 0.9 during the final 50 epochs <cit.>. We used the maximum batch size that could be accommodated on a single GPU for our experiments. We used an embedding dimension (C) of 128 in all the Wave modules. Each Wave module has 4 WaveMix blocks unless otherwise specified. §.§ Results and Discussion §.§.§ Quantitative Results We compare our models with the other state-of-the-art baselines as shown in Table <ref> for the CelebA-HQ dataset. We compare the performance of WavePaint across narrow, medium and wide masks. WavePaint consistently outperforms most of the other models on a variety of mask configurations. It has to be noted that most of the other models have a much larger parameter count and employ adversarial training using a discriminator. Since WavePaint does not employ a discriminator, it is lightweight and can be trained faster than GANs and diffusion models. We could not compare WavePaint with the latest diffusion models such as RePaint <cit.> because diffusion is a much slower process of image generation and we were constrained in computational resources. RePaint <cit.> had reported that the quantitative results of LaMa <cit.> are better than those of RePaint in wide and narrow mask inpainting on the ImageNet and CelebA-HQ datasets. Since LaMa was a resource-efficient model for inpainting, we compared WavePaint with LaMa <cit.> in Table <ref> to analyse its resource-efficiency.
We see that WavePaint requires less than one-fifth of the parameters of LaMa to outperform it on the FID metric. WavePaint is also ∼ 3 × faster than LaMa in both inference and training speed and utilizes less than half the GPU memory consumed by LaMa. Our results clearly show that WavePaint is more resource- and parameter-efficient than LaMa. The high resource-efficiency of WavePaint can be attributed to the efficient token-mixing of the WaveMix blocks, which process the image at a lower resolution owing to the lossless downsampling property of the 2D-DWT. The quantitative performance of WavePaint with different hyperparameters on the CelebA-HQ and ImageNet datasets is shown in Table <ref>. Table <ref> shows the performance of WavePaint when the WaveMix blocks use multi-level 2D-DWT. Using higher levels of DWT can improve the performance of the model due to the exponential increase in receptive field. §.§ Qualitative Results The images generated by WavePaint on the ImageNet dataset are shown in Figure <ref>. We can see that WavePaint completes textures and missing details by continuing lines and filling in structure. The images generated by WavePaint for wide, medium and narrow masks are shown in Figure <ref>, Figure <ref> and Figure <ref>, respectively. WavePaint can fill in missing details of facial features, colour, texture, eyes and eyebrows even when major parts of the image are masked. § ABLATION STUDIES Multiple ablation experiments were conducted to optimize the network hyper-parameters and understand the utility of the network components. Table <ref> shows the performance of WavePaint with 8 WaveMix blocks arranged in different numbers of modules. The results show that having fewer modules with a larger number of WaveMix blocks each is more parameter-efficient but results in poorer performance. When we decrease the number of WaveMix blocks in each module and increase the number of modules, the model becomes larger with a higher parameter count. Modules with 4 WaveMix blocks each retain parameter-efficiency without degrading performance. Removing the DepthConv block from WavePaint degrades the FID score by 38% while increasing the training and inference throughput by 14%. Since depth-wise convolution is a highly parameter-efficient operation, its removal reduces the number of parameters by less than 1%. Therefore, adding a DepthConv block in each module is beneficial for the network, as it aids the WaveMix block with further spatial token-mixing. § CONCLUSION AND FUTURE WORK This paper proposes using multi-level 2D-DWT token-mixing for the less explored task of image inpainting. The performance of the proposed model on the CelebA-HQ dataset is comparable to much larger models and to those that use adversarial training. Moreover, our model uses only a fraction of the parameters, consumes less GPU RAM and is multiple times faster in training and inference compared to other models such as LaMa <cit.>. A possible direction of future work is to develop resource-efficient image generation models using WavePaint trained in an adversarial or diffusion setting. Thus, this paper points to the potential of using token-mixing as an alternative to vision transformers and CNNs for resource-efficient image inpainting, without the need for slower and more complex training procedures such as adversarial or diffusion training. The faster receptive-field expansion, which makes global context information available early, can help these models perform image reconstruction on par with transformers.
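As a concrete illustration of the hybrid training objective described in the experiments section, the sketch below assembles L_hybrid = (1 - α)L_1 + αL_2 + L_LPIPS in PyTorch. This is a minimal reconstruction rather than the released WavePaint code; the use of the third-party lpips package, the choice of its backbone network and the value of α are assumptions.

import torch
import torch.nn.functional as F
import lpips  # assumed third-party package providing a pretrained LPIPS metric

# Pretrained perceptual metric; the 'alex' backbone is an assumption.
lpips_fn = lpips.LPIPS(net="alex")

def hybrid_loss(pred, target, alpha=0.5):
    # Weighted sum of L1, L2 and LPIPS terms: (1 - alpha)*L1 + alpha*L2 + L_LPIPS
    l1 = F.l1_loss(pred, target)
    l2 = F.mse_loss(pred, target)
    # LPIPS expects inputs roughly in [-1, 1]; rescale images given in [0, 1]
    perceptual = lpips_fn(pred * 2 - 1, target * 2 - 1).mean()
    return (1 - alpha) * l1 + alpha * l2 + perceptual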
http://arxiv.org/abs/2307.00775v1
20230703063838
Laplace Method for calculate the Determinant of cubic-matrix of order 2 and order 3
[ "Orgest Zaka", "Armend Salihu" ]
math.GM
[ "math.GM" ]
Laplace Method for Determinant of cubic-matrix, of orders 2 and 3]Laplace Method for calculate the Determinant of cubic-matrix of order 2 and order 3 Orgest ZAKA]Orgest ZAKA Orgest ZAKA: Department of Mathematics-Informatics, Faculty of Economy and Agribusiness, Agricultural University of Tirana, Tirana, Albania ozaka@ubt.edu.al, gertizaka@yahoo.com Armend Salihu]Armend Salihu Armend Salihu: Department of Computer Science, Faculty of Contemporary Sciences, South East European University, Tetovo, North Macedonia ar.salihu@gmail.com [2010]15-XX; 15Axx; 15A15; 11Cxx; 65Fxx; 11C20; 65F40 In this paper, in continuation of our work, on the determinants of cubic -matrix of order 2 and order 3, we have analyzed the possibilities of developing the concept of determinant of cubic-matrix with three indexes, studying the possibility of their calculation according the Laplace expansion method's. We have noted that the concept of permutation expansion which is used for square determinants, as well as the concept of Laplace expansion method used for square and rectangular determinants, also can be utilized to be used for this new concept of 3D Determinants. In this paper we proved that the Laplace expansion method's is also valid for cubic-matrix of order 2 and order 3, these results are given clearly and with detailed proofs, they are also accompanied by illustrative examples. We also give an algorithmic presentation for the Laplace expansion method's. [ [ August 1, 2023 ================== § INTRODUCTION Based on the determinant of 2D square matrices <cit.>, as well as determinant of rectangular matrices <cit.> we have come to the idea of developing the concept of determinant of 3D cubic matrices, also in paper <cit.> we have studied and proved some basic properties related to the determinant of cubic-matrix of order 2 and 3. In this paper, we study the properties of the determinants of the cubic-matrix of order 2 and 3, related to the Laplace expansion method, our concept is based on permutation expansion method. Encouraged by geometric intuition, in this paper we are trying to give an idea and visualize the meaning of the determinants for the cubic-matrix. Our early research mainly lies between geometry, algebra, matrix theory, etc., (see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). This paper is continuation of the ideas that arise based on previous researches of 3D matrix ring with element from any whatever field F see <cit.>, but here we study the case when the field F is the field of real numbers ℝ also is continuation of our research <cit.> related to the study of the properties of determinants for cubic-matrix of order 2 and 3. In this paper we follow a different method from the calculation of determinants of 3D matrix, which is studied in <cit.>. In contrast to the meaning of the determinant as a multi-scalar studied in <cit.>, in this paper we give a new definition, for the determinant of the 3D-cubic-matrix, which is a real-number. In the papers <cit.>, have been studied in detail, properties for 3D-matrix, therefore, those studied properties are also valid for 3D-cubic-Matrix. Our point in this paper is to provide a concept of determinant of 3D matrices. Our concept is based on permutation method used in regular square matrices, also based on the Laplace method which is used for calculating 2D square determinants <cit.>. 
§ PRELIMINARIES §.§ 3D Matrix The following is definition of 3D matrices provided by Zaka in 2017 (see <cit.>): 3-dimensional m× n× p matrix will call, a matrix which has: m-horizontal layers (analogous to m-rows), n-vertical page (analogue with n- columns in the usual matrices) and p-vertical layers (p-1 of which are hidden). The set of these matrix’s the write how: M_m × n × p ( F)={a_i,j,k|a_i,j,k∈ F- field ∀ i=1,m; j=1,n; k=1,p} In the following is presented the determinant of 3D-cubic matrices, as well several properties which are adopted from 2D square determinants. §.§ Cubic-Matrix of Order 2 and 3 and their Determinants A cubic-matrix A_n × n × n for n=2,3, …, called "cubic-matrix of order n". For n=1 we have that the cubic-matrix of order 1 is an element of F. Let us now consider the set of cubic-matrix of order n, for n=2 or n=3, with elements from a field F (so when cubic-matrix of order n, there are: n-vertical pages, n-horizontal layers and n-vertical layers). From <cit.> we have that, the addition of 3D-matrix stands also for cubic-matrix of order 2 and 3. Also, the set of cubic-matrix of order 2 and 3 forms an commutative group (Abelian Group) related to 3Dmatrix addition. §.§ Determinants of Cubic-Matrix of Order 2 and 3 In paper <cit.>, we will define and describe the meaning of the determinants of cubic-matrix of order 2 and order 3, with elements from a field F. Recall that a cubic-matrix A_n × n × n for n=2,3, …, called "cubic-matrix of order n". For n=1 we have that the cubic-matrix of order 1 is an element of F. Let us now consider the set of cubic-matrix of order n, with elements from a field F (so when cubic-matrix of order n, there are: n-vertical pages, n-horizontal layers and n-vertical layers), ℳ_n(F)={ A_n × n × n=(a_ijk)_n × n × n | a_ijk∈ F, ∀ i=1̅,̅n̅; j=1̅,̅n̅; k=1̅,̅n̅} In this paper, we define the determinant of cubic-matrix as a element from this field, so the map, : ℳ_n(F) → F ∀ A ∈ℳ_n(F) ↦(A) ∈ F Below we give two definitions, how we will calculate the determinant of the cubic-matrix of order 2 and order 3. Let A ∈ℳ_2(F) be a 2 × 2 × 2, with elements from a field F. A_2 × 2× 2= [ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] Determinant of this cubic-matrix, we called, [A_2 × 2× 2]=[ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ]=a_111· a_222 - a_112· a_221 - a_121· a_212 + a_122· a_211 The follow example is case where cubic-matrix, is with elements from the number field ℝ. Let's have the cubic-matrix, with element in number field ℝ, [A_2 × 2× 2]=[ . 4 -3 -1 5 | -2 4 -7 3 ] then according to the definition <ref>, we calculate the Determinant of this cubic-matrix, and have, [A_2 × 2× 2]=[ . 4 -3 -1 5 | -2 4 -7 3 ] =4· 3 - (-2)· 5 - (-3)· (-7) + 4· (-1) = 12 - (-10) - 21 + (-4) = 12 + 10 - 21 - 4 = -3. We are trying to expand the meaning of the determinant of cubic-matrix, for order 3 (so when cubic-matrix, there are: 3-vertical pages, 3-horizontal layers and 3-vertical layers). Let A ∈ℳ_3(F) be a 3 × 3 × 3 cubic-matrix with element from a field F, A_3 × 3× 3= [ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] . Determinant of this cubic-matrix, we called, [A_3 × 3× 3]= [ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. 
] =a_111· a_222· a_333 - a_111· a_232· a_323 - a_111· a_223· a_332 + a_111· a_233· a_322 -a_112· a_221· a_333 + a_112· a_223· a_331 + a_112· a_231· a_323 - a_112· a_233· a_321 +a_113· a_221· a_332 - a_113· a_222· a_331 - a_113· a_231· a_322 + a_113· a_232· a_321 -a_121· a_212· a_333 + a_121· a_213· a_332 + a_121· a_232· a_313 - a_121· a_233· a_312 +a_122· a_211· a_333 - a_122· a_213· a_331 - a_122· a_231· a_313 + a_122· a_233· a_311 -a_123· a_211· a_332 + a_123· a_212· a_331 + a_123· a_231· a_312 - a_123· a_232· a_311 +a_131· a_212· a_323 - a_131· a_213· a_322 - a_131· a_222· a_313 + a_131· a_223· a_312 -a_132· a_211· a_323 + a_132· a_213· a_321 + a_132· a_221· a_313 - a_132· a_223· a_311 +a_133· a_211· a_322 - a_133· a_212· a_321 - a_133· a_221· a_312 + a_133· a_222· a_311 The follow example is case where cubic-matrix, is with elements from the number field ℝ. Let's have the cubic-matrix of order 3, with element from number field (field of real numbers) ℝ, [A_3 × 3× 3]= [ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ]. Then, we calculate the Determinant of this cubic-matrix following the Definition <ref>, and have that, [A_3 × 3× 3]= [ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] =3 · 0 · 3 - 3 · 3 · 4 - 3 · 1 · 5 + 3 · 2 · 2 - (-2) · 5 · 3 + (-2) · 1 · (-2) + (-2) · (- 1) · 4 -(-2) · 2 · 3 + 5 · 5 · 5 - 5 · 0 · (-2) - 5 · (-1) · 2 + 5 · 3 · 3 - 0 · (-3) · 3 + 0 · 3 · 5 +0 · 3 · 0 - 0 · 2 · (-3) + 4 · 2 · 3 - 4 · 3 · (-2) - 4 · (-1) · 0 + 4 · 2 · 0 - 1 · 2 · 5 +1 · (-3) · (-2) + 1 · (-1) · (-3) - 1 · 3 · 0 + (-4) · (-3) · 4 - (-4) · 3 · 2 - (-4) · 0 · 0 +(-4) · 1 · (-3) - 0 · 2 · 4 + 0 · 3 · 3 + 0 · 5 · 0 - 0 · 1 · 0 + 0 · 2 · 2 - 0 · (-3) · 3 - 0 · 5 · (-3) + 0 · 0 · 0 =0 -36 - 15 + 12 + 30 + 4 + 8 + 12 + 125 + 0 + 10 + 45 + 0 + 0 + 0 + 0 + 24 + 24 + 0 + 0 - 10 + 6+ 3 - 0 + 48 + 24 + 0 + 12 - 0+ 0 + 0 - 0 + 0 + 0 + 0 + 0 = 326 § MINORS AND CO-FACTORS OF CUBIC-MATRIX OF ORDER 2 AND 3 In this section we will present the meaning of Minors and co-factors for cubic-matrix of order 2 and order 3. §.§ Minors of cubic-matrix Let us start by defining minors. Let A_n be a n× n × n cubic-matrix (with n ≥ 2). Denote by A_ijk the entry of cubic-matrix A at the intersection of the i-th horizontal layers, j-th vertical pages and k-th vertical layers. The minor of A_ijk is the determinant of the sub-cubic-matrix obtained from A by deleting its i-th horizontal layer, j-vertical page and k-vertical layer. We now illustrate the definition with an example. Let's have the cubic-matrix of order 3, with element from number field (field of real numbers) ℝ, A_3 × 3× 3= [ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] . Take the entry A_111=3, The sub-cubic-matrix obtained by deleting the first-horizontal layer, first-vertical page and first-vertical layer is, [ . 0 3 2 5 | 1 2 4 3 ] . Thus, the minor of A_111 is M_111=[ . 0 3 2 5 | 1 2 4 3 ] =0 · 3 - 1· 5 -3 · 4 + 2 · 2 = -5 - 12 + 4 = -13 . Take the entry A_123=1, The sub-cubic-matrix obtained by deleting the first-horizontal layer, 2-vertical page and 3-vertical layer is, [ . 2 -1 0 -2 | -3 3 -3 5 ] . Thus, the minor of A_123 is M_123=[ . 2 -1 0 -2 | -3 3 -3 5 ] =2 · 5 - (-3) · (-2) - (-1) · (-3) + 3 · 0 = 10 - 6 - 3 + 0 = 1 . §.§ Co-factors of cubic-matrix of order 2 and 3 A co-factor is a minor whose sign may have been changed depending on the location of the respective matrix entry. Let A_n be a n× n × n cubic-matrix (with n ≥ 2). Denote by M_ijk the minor of an entry A_ijk. 
The co-factor of A_ijk is C_ijk=(-1)^i+j+k· M_ijk. As an example, the pattern of sign changes (-1)^i+j+k of a cubic-matrix of order 3 is [ . - + - + - + - + - | + - + - + - + - + | - + - + - + - + - . ] . Let's have the cubic-matrix of order 3, with element from number field (field of real numbers) ℝ, A_3 × 3× 3= [ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] . Take the entry A_111=3. The minor of A_111 is M_111=[ . 0 3 2 5 | 1 2 4 3 ] = -13 and its cofactor is C_111=(-1)^1+1+1· M_111=- M_111 = - (-13) = 13. Take the entry A_123=1. Thus, the minor of A_123 is M_123=[ . 2 -1 0 -2 | -3 3 -3 5 ] = 1 and its co-factor is C_123=(-1)^1+2+3· M_123=M_123 = 1. § LAPLACE EXPANSION FOR DETERMINANTS OF CUBIC-MATRIX OF ORDER 2 AND 3 We are now ready to present the Laplace expansion. Following the Laplace expansion method for 2D square-matrix, we are conjecturing this method for 3D cubic-matrix, [backgroundcolor=green!15] Laplace Expansion If we have A a cubic-matrix of order 2 or 3. Denote by C_ijk the co-factor of an entry A_ijk. Then: L_1 For any horizontal layer i, the following 'horizontal layer' expansion holds: (A)=∑_jkA_ijk· C_ijk. L_2 For any 'vertical page' j , the following 'vertical page' expansion holds: (A)=∑_ikA_ijk· C_ijk. L_3 For any 'vertical layer' k, the following 'vertical page' expansion holds: (A)=∑_ijA_ijk· C_ijk. §.§ Laplace Expansion for determinants of cubic-matrix of order 2 Below we prove that this method is valid for calculating the determinants of the cubic-matrix of order 2. Let A be a cubic-matrix of order 2, A=[ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ]. The determinant of this cubic-matrix is invariant into expansion of three "ways" to Laplace expansion. We will prove all three expansion type, L_1,L_2,L_3. (L_1): For any horizontal layer i (i=1,2), the following 'horizontal layer' expansion holds: (A)=∑_jkA_ijk· C_ijk. Really, if we take i=1, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 2, which we described above, we have: [A_2 × 2× 2] = [ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] =a_111·[ a_222 ] - a_121·[ a_212 ] - a_112·[ a_221 ] + a_122·[ a_211 ] =a_111· a_222 - a_112· a_221 - a_121· a_212 + a_122· a_211. We see that this result is the same as the result of Definition <ref>. Now similarly we take i=2, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 2, which we described above, we have: [A_2 × 2× 2] = [ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] =a_211·[ a_122 ] - a_221·[ a_112 ] - a_212·[ a_121 ] + a_222·[ a_111 ] =a_211· a_122 - a_221· a_112 - a_212· a_121 + a_222· a_111. We see that this result is the same as the result of Definition <ref>. (L_2): for any 'vertical page' j , the following 'vertical page' expansion holds: (A)=∑_ikA_ijk· C_ijk. Really, if we take j=1, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 2, which we described above, we have: [A_2 × 2× 2] = [ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] =a_111·[ a_222 ] + a_211·[ a_122 ] - a_112·[ a_221 ] - a_212·[ a_121 ] =a_111· a_222 + a_211· a_122 - a_112· a_221 + a_212· a_121. We see that this result is the same as the result of Definition <ref>. Now similarly we take j=2, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 2, which we described above, we have: [A_2 × 2× 2] = [ . 
a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] =-a_121·[ a_212 ] - a_221·[ a_112 ] + a_122·[ a_211 ] + a_222·[ a_111 ] =-a_121· a_212 - a_221· a_112 + a_122· a_211 + a_222· a_111. We see that this result is the same as the result of Definition <ref>. (L_3): For any 'vertical layer' k , the following 'vertical layer' expansion holds: (A)=∑_ikA_ijk· C_ijk. Really, if we take k=1, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 2, which we described above, we have: [A_2 × 2× 2] = [ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] =a_111·[ a_222 ] - a_121·[ a_212 ] + a_211·[ a_122 ] - a_221·[ a_112 ] =a_111· a_222 - a_121· a_212 + a_211· a_122 - a_221· a_112. We see that this result is the same as the result of Definition <ref>. Now similarly we take k=2, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 2, which we described above, we have: [A_2 × 2× 2] = [ . a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222 ] =- a_112·[ a_221 ] + a_122·[ a_221 ] - a_212·[ a_121 ] + a_222·[ a_111 ] =-a_112· a_221 + a_122· a_221 - a_212· a_121 + a_222· a_111. We see that this result is the same as the result of Definition <ref>. The follow example is case where cubic-matrix of second order, is with elements from the number field ℝ. Let's have the cubic-matrix, with element in number field ℝ, A_2 × 2× 2= [ . 4 -3 -1 5 | -2 4 -7 3 ] then according to the Theorem <ref>, we calculate the Determinant of this cubic-matrix, and have, [A_2 × 2× 2]=[ . 4 -3 -1 5 | -2 4 -7 3 ] For i=1, we have: [A_2 × 2× 2] = [ . 4 -3 -1 5 | -2 4 -7 3 ] =4 ·[ 3 ] - (-3) ·[ -7 ] - (-2) ·[ 5 ] + 4 ·[ -1 ] =4· 3 - (-3)· (-7) - (-2)· 5 + 4· (-1)=-3. We see that this result is the same as the result of Example <ref>. For i=2, we have: [A_2 × 2× 2] = [ . 4 -3 -1 5 | -2 4 -7 3 ] =-1 ·[ 4 ] - 5 ·[ -2 ] - (-7) ·[ -3 ] + 3 ·[ 4 ] =-1· 4 - 5· (-2) - (-7)· (-3) + 3· 4 = -3. We see that this result is the same as the result of Example <ref>. For j=1, we have: [A_2 × 2× 2] = [ . 4 -3 -1 5 | -2 4 -7 3 ] =4 ·[ 3 ] + (-1) ·[ 4 ] - (-2) ·[ 5 ] - (-7) ·[ -3 ] =4· 3 + (-1)· 4 - (-2)· 5 - (-7)· (-3) = -3. We see that this result is the same as the result of Example <ref>. For j=2, we have: [A_2 × 2× 2] = [ . 4 -3 -1 5 | -2 4 -7 3 ] =-(-3) ·[ -7 ] - 5 ·[ -2 ] + 4 ·[ -1 ] + 3 ·[ 4 ] =3· (-7) - 5· (-2) - 4· (-1) + 3· 4 = -3. We see that this result is the same as the result of Example <ref>. For k=1, we have: [A_2 × 2× 2] = [ . 4 -3 -1 5 | -2 4 -7 3 ] =4 ·[ 3 ] - (-3) ·[ -7 ] + (-1) ·[ 4 ] - 5 ·[ -2 ] =4· 3 - (-3)· (-7) + (-1)· 4 - 5· (-2) = -3. We see that this result is the same as the result of Example <ref>. For k=2, we have: [A_2 × 2× 2] = [ . 4 -3 -1 5 | -2 4 -7 3 ] =- (-2) ·[ 5 ] + 4 ·[ 5 ] - (-7) ·[ -3 ] + 3 ·[ 4 ] =-(-2)· 5 + 4· 5 - (-7)· (-3) + 3· 4 = -3. We see that this result is the same as the result of Example <ref>. §.§ Laplace Expansion for determinants of cubic-matrix of order 3 Below we prove that this method is valid for calculating the determinants of the cubic-matrix of order 3. Let A be a cubic-matrix of order 3, A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] The determinant of this cubic-matrix is invariant into expansion of three "ways" to Laplace expansion. We will prove all three expansion type, L_1,L_2,L_3 also for third order. 
(L_1): For any horizontal layer i (i=1,2), the following 'horizontal layer' expansion holds: (A)=∑_jkA_ijk· C_ijk. Really, if we take i=1, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_111·[ . a_222 a_232 a_322 a_332| a_223 a_233 a_323 a_333 ] - a_121·[ . a_212 a_232 a_312 a_332| a_213 a_233 a_313 a_333 ] + a_131·[ . a_212 a_222 a_312 a_322| a_213 a_223 a_313 a_323 ] - a_112·[ . a_221 a_231 a_321 a_331| a_223 a_233 a_323 a_333 ] + a_122·[ . a_211 a_231 a_311 a_331| a_213 a_233 a_313 a_333 ] - a_132·[ . a_211 a_221 a_311 a_321| a_213 a_223 a_313 a_323 ] + a_113·[ . a_221 a_231 a_321 a_331| a_222 a_232 a_322 a_332 ] - a_123·[ . a_211 a_231 a_311 a_331| a_212 a_232 a_312 a_332 ] + a_133·[ . a_211 a_221 a_311 a_321| a_212 a_222 a_312 a_322 ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take i=2, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_211·[ . a_122 a_132 a_322 a_332| a_123 a_133 a_323 a_333 ] - a_221·[ . a_112 a_132 a_312 a_332| a_113 a_133 a_313 a_333 ] + a_231·[ a_112 a_122 a_312 a_322| a_113 a_123 a_313 a_323. ] - a_212·[ a_121 a_131 a_321 a_331| a_123 a_133 a_323 a_333. ] + a_222·[ a_111 a_131 a_311 a_331| a_113 a_133 a_313 a_333. ] - a_232·[ a_111 a_121 a_311 a_321| a_113 a_123 a_313 a_323. ] + a_113·[ a_221 a_231 a_321 a_331| a_222 a_232 a_322 a_332. ] - a_123·[ a_211 a_231 a_311 a_331| a_212 a_232 a_312 a_332. ] + a_133·[ a_211 a_221 a_311 a_321| a_212 a_222 a_312 a_322. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take i=3, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_311·[ a_122 a_132 a_222 a_232| a_123 a_133 a_223 a_233. ] - a_221·[ a_112 a_132 a_212 a_232| a_113 a_133 a_213 a_233. ] + a_331·[ a_112 a_122 a_212 a_222| a_113 a_123 a_213 a_223. ] - a_312·[ a_121 a_131 a_221 a_231| a_123 a_133 a_223 a_233. ] + a_322·[ a_111 a_131 a_211 a_231| a_113 a_133 a_213 a_233. ] - a_332·[ a_111 a_121 a_211 a_221| a_113 a_123 a_213 a_223. ] + a_313·[ a_221 a_231 a_221 a_231| a_222 a_232 a_222 a_232. ] - a_323·[ a_211 a_231 a_211 a_231| a_212 a_232 a_212 a_232. ] + a_333·[ a_211 a_221 a_211 a_221| a_212 a_222 a_212 a_222. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take j=1, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_111·[ a_222 a_232 a_322 a_332| a_223 a_233 a_323 a_333. 
] - a_211·[ a_122 a_132 a_322 a_332| a_123 a_133 a_323 a_333. ] + a_311·[ a_122 a_132 a_222 a_232| a_123 a_133 a_223 a_233. ] - a_112·[ a_221 a_231 a_321 a_331| a_223 a_233 a_323 a_333. ] + a_212·[ a_121 a_131 a_321 a_331| a_123 a_133 a_323 a_333. ] - a_312·[ a_121 a_131 a_221 a_231| a_123 a_133 a_223 a_233. ] + a_113·[ a_221 a_231 a_321 a_331| a_222 a_232 a_322 a_332. ] - a_213·[ a_121 a_131 a_321 a_331| a_122 a_132 a_322 a_332. ] + a_313·[ a_121 a_131 a_221 a_231| a_122 a_132 a_222 a_232. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take j=2, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_121·[ a_212 a_232 a_312 a_332| a_213 a_233 a_313 a_333. ] - a_221·[ a_112 a_132 a_312 a_332| a_113 a_133 a_313 a_333. ] + a_321·[ a_112 a_132 a_212 a_232| a_113 a_133 a_213 a_233. ] - a_122·[ a_211 a_231 a_311 a_331| a_213 a_233 a_313 a_333. ] + a_222·[ a_111 a_131 a_311 a_331| a_113 a_133 a_313 a_333. ] - a_322·[ a_111 a_131 a_211 a_231| a_113 a_133 a_213 a_233. ] + a_123·[ a_211 a_231 a_311 a_331| a_212 a_232 a_312 a_332. ] - a_223·[ a_111 a_131 a_311 a_331| a_112 a_132 a_312 a_332. ] + a_323·[ a_111 a_131 a_211 a_231| a_112 a_132 a_212 a_232. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take j=3, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_131·[ a_212 a_222 a_312 a_322| a_213 a_223 a_313 a_323. ] - a_231·[ a_112 a_122 a_312 a_322| a_113 a_123 a_313 a_323. ] + a_331·[ a_112 a_122 a_212 a_222| a_113 a_123 a_213 a_223. ] - a_132·[ a_211 a_221 a_311 a_321| a_213 a_223 a_313 a_323. ] + a_232·[ a_111 a_121 a_311 a_321| a_113 a_123 a_313 a_323. ] - a_332·[ a_111 a_121 a_211 a_221| a_113 a_123 a_213 a_223. ] + a_133·[ a_211 a_221 a_311 a_321| a_212 a_222 a_312 a_322. ] - a_233·[ a_111 a_121 a_311 a_321| a_112 a_122 a_312 a_322. ] + a_333·[ a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take k=1, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_111·[ a_222 a_232 a_322 a_332| a_223 a_233 a_323 a_333. ] - a_121·[ a_212 a_232 a_312 a_332| a_213 a_233 a_313 a_333. ] + a_131·[ a_212 a_222 a_312 a_322| a_213 a_223 a_313 a_323. ] - a_211·[ a_122 a_132 a_322 a_332| a_123 a_133 a_323 a_333. ] + a_221·[ a_112 a_132 a_312 a_332| a_113 a_133 a_313 a_333. ] - a_231·[ a_112 a_122 a_312 a_322| a_113 a_123 a_313 a_323. ] + a_311·[ a_122 a_132 a_222 a_232| a_223 a_233 a_323 a_333. ] - a_321·[ a_112 a_132 a_212 a_232| a_123 a_133 a_213 a_233. ] + a_331·[ a_112 a_122 a_212 a_222| a_113 a_123 a_213 a_223. 
] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take k=2, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_112·[ a_221 a_231 a_321 a_331| a_223 a_233 a_323 a_333. ] - a_122·[ a_211 a_231 a_311 a_331| a_213 a_233 a_313 a_333. ] + a_132·[ a_211 a_221 a_311 a_321| a_213 a_223 a_313 a_323. ] - a_212·[ a_121 a_131 a_321 a_331| a_123 a_133 a_323 a_333. ] + a_222·[ a_111 a_131 a_311 a_331| a_113 a_133 a_313 a_333. ] - a_232·[ a_111 a_121 a_311 a_321| a_113 a_123 a_313 a_323. ] + a_312·[ a_121 a_131 a_221 a_231| a_223 a_233 a_323 a_333. ] - a_322·[ a_111 a_131 a_211 a_231| a_123 a_133 a_213 a_233. ] + a_332·[ a_111 a_121 a_211 a_221| a_113 a_123 a_213 a_223. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. If we take k=3, and we consider the meaning of the minors and co-factors for the cubic-matrix of order 3, which we described above, we have: A=[ . a_111 a_121 a_131 a_211 a_221 a_231 a_311 a_321 a_331| a_112 a_122 a_132 a_212 a_222 a_232 a_312 a_322 a_332| a_113 a_123 a_133 a_213 a_223 a_233 a_313 a_323 a_333. ] = a_113·[ a_221 a_231 a_321 a_331| a_222 a_232 a_322 a_332. ] - a_123·[ a_211 a_231 a_311 a_331| a_212 a_232 a_312 a_332. ] + a_133·[ a_211 a_221 a_311 a_321| a_212 a_222 a_312 a_322. ] - a_213·[ a_121 a_131 a_321 a_331| a_122 a_132 a_322 a_332. ] + a_223·[ a_111 a_131 a_311 a_331| a_112 a_132 a_312 a_332. ] - a_233·[ a_111 a_121 a_311 a_321| a_112 a_122 a_312 a_322. ] + a_313·[ a_121 a_131 a_221 a_231| a_222 a_232 a_322 a_332. ] - a_323·[ a_111 a_131 a_211 a_231| a_122 a_132 a_212 a_232. ] + a_333·[ a_111 a_121 a_211 a_221| a_112 a_122 a_212 a_222. ] After expanding further the above determinant based on Theorem <ref>, we see that this result is the same as the result of Definition <ref>. The follow example is case where cubic-matrix of third order, is with elements from the number field ℝ. Let's have the cubic-matrix, with element in number field ℝ, A_3 × 3× 3=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] then according to the theorem Theorem <ref>, we calculate the Determinant of this cubic-matrix, and have, [A_3 × 3× 3]=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] For i=1, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 3 ·[ . 0 3 2 5 | 1 2 4 3 ] - 0 ·[ . -3 3 -3 5 | 3 2 0 3 ] + (-4) ·[ . -3 0 -3 2 | 3 1 0 4 ] - (-2) ·[ . 5 -1 3 -2 | 1 2 4 3 ] + 4 ·[ . 2 -1 0 -2 | 3 2 0 3 ] - 0 ·[ . 2 5 0 3 | 3 1 0 4 ] + 5 ·[ . 5 -1 3 -2 | 0 3 2 5 ] - 1 ·[ . 2 -1 0 -2 | -3 3 -3 5 ] + 0 ·[ . 2 5 0 3 | -3 0 -3 2 ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For i=2, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 2 ·[ . 4 0 2 5 | 1 0 4 3 ] - 5 ·[ . -2 0 -3 5 | 5 0 0 3 ] + (-1) ·[ -2 4 -3 2 | 5 1 0 4 . ] - (-3) ·[ 0 -4 3 -2 | 1 0 4 3 . ] + 0 ·[ 3 -4 0 -2 | 5 0 0 3 . ] - 3 ·[ 3 0 0 3 | 5 1 0 4 . ] + 5 ·[ 5 -1 3 -2 | 0 3 2 5 . ] - 1 ·[ 2 -1 0 -2 | -3 3 -3 5 . ] + 0 ·[ 2 5 0 3 | -3 0 -3 2 . ] = 326. 
After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For i=3, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 0 ·[ 4 0 0 3 | 1 0 1 2 . ] - 5 ·[ -2 0 -3 3 | 5 0 3 2 . ] + (-2) ·[ -2 4 -3 0 | 5 1 3 1 . ] - (-3) ·[ 0 -4 5 -1 | 1 0 1 2 . ] + 2 ·[ 3 -4 2 -1 | 5 0 3 2 . ] - 5 ·[ 3 0 2 5 | 5 1 3 1 . ] + 0 ·[ 5 -1 5 -1 | 0 3 0 3 . ] - 4 ·[ 2 -1 2 -1 | -3 3 -3 3 . ] + 3 ·[ 2 5 2 5 | -3 0 -3 0 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For j=1, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 3 ·[ 0 3 2 5 | 1 2 4 3 . ] - 2 ·[ 4 0 2 5 | 1 0 4 3 . ] + 0 ·[ 4 0 0 3 | 1 0 1 2 . ] - (-2) ·[ 5 -1 3 -2 | 1 2 4 3 . ] + (-3) ·[ 0 -4 3 -2 | 1 0 4 3 . ] - (-3) ·[ 0 -4 5 -1 | 1 0 1 2 . ] + 5 ·[ 5 -1 3 -2 | 0 3 2 5 . ] - 3 ·[ 0 -4 3 -2 | 4 0 2 5 . ] + 0 ·[ 0 -4 5 -1 | 4 0 0 3 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For j=2, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 0 ·[ -3 3 -3 5 | 3 2 0 3 . ] - 5 ·[ -2 0 -3 5 | 5 0 0 3 . ] + 3 ·[ -2 0 -3 3 | 5 0 3 2 . ] - 4 ·[ 2 -1 0 -2 | 3 2 0 3 . ] + 0 ·[ 3 -4 0 -2 | 5 0 0 3 . ] - 2 ·[ 3 -4 2 -1 | 5 0 3 2 . ] + 1 ·[ 2 -1 0 -2 | -3 3 -3 5 . ] - 1 ·[ 3 -4 0 -2 | -2 0 -3 5 . ] + 4 ·[ 3 -4 2 -1 | -2 0 -3 3 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For j=3, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = (-4) ·[ -3 0 -3 2 | 3 1 0 4 . ] - (-1) ·[ -2 4 -3 2 | 5 1 0 4 . ] + (-2) ·[ -2 4 -3 0 | 5 1 3 1 . ] - 0 ·[ 2 5 0 3 | 3 1 0 4 . ] + 3 ·[ 3 0 0 3 | 5 1 0 4 . ] - 5 ·[ 3 0 2 5 | 5 1 3 1 . ] + 0 ·[ 2 5 0 3 | -3 0 -3 2 . ] - 2 ·[ 3 0 0 3 | -2 4 -3 2 . ] + 3 ·[ 3 0 2 5 | -2 4 -3 0 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For k=1, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 3 ·[ 0 3 2 5 | 1 2 4 3 . ] - 0 ·[ -3 3 -3 5 | 3 2 0 3 . ] + (-4) ·[ -3 0 -3 2 | 3 1 0 4 . ] - 2 ·[ 4 0 2 5 | 1 0 4 3 . ] + 5 ·[ -2 0 -3 5 | 5 0 0 3 . ] - (-1) ·[ -2 4 -3 2 | 5 1 0 4 . ] + 0 ·[ 4 0 0 3 | 1 2 4 3 . ] - 3 ·[ -2 0 -3 3 | 1 0 3 2 . ] + (-2) ·[ -2 4 -3 0 | 5 1 3 1 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For k=2, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = (-2) ·[ 5 -1 3 -2 | 1 2 4 3 . ] - 4 ·[ 2 -1 0 -2 | 3 2 0 3 . ] + 0 ·[ 2 5 0 3 | 3 1 0 4 . ] - (-3) ·[ 0 -4 3 -2 | 1 0 4 3 . ] + 0 ·[ 3 -4 0 -2 | 5 0 0 3 . ] - 3 ·[ 3 0 0 3 | 5 1 0 4 . ] + (-3) ·[ 0 -4 5 -1 | 1 2 4 3 . ] - 2 ·[ 3 -4 2 -1 | 1 0 3 2 . ] + 5 ·[ 3 0 2 5 | 5 1 3 1 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. For k=3, we have: A=[ . 3 0 -4 2 5 -1 0 3 -2 | -2 4 0 -3 0 3 -3 2 5 | 5 1 0 3 1 2 0 4 3 . ] = 5 ·[ 5 -1 3 -2 | 0 3 2 5 . ] - 1 ·[ 2 -1 0 -2 | -3 3 -3 5 . ] + 0 ·[ 2 5 0 3 | -3 0 -3 2 . ] - 3 ·[ 0 -4 3 -2 | 4 0 2 5 . 
] + 1 ·[ 3 -4 0 -2 | -2 0 -3 5 . ] - 2 ·[ 3 0 0 3 | -2 4 -3 2 . ] + 0 ·[ 0 -4 5 -1 | 0 3 2 5 . ] - 4 ·[ 3 -4 2 -1 | 4 0 -3 3 . ] + 3 ·[ 3 0 2 5 | -2 4 -3 0 . ] = 326. After expanding further the minors of above determinant based on Theorem <ref>, we see that this result is the same as the result of Example <ref>. From Theorem <ref> and Theorem <ref>, we have true the following Theorem, The Laplace Expansion for Determinant calculation, applies to cubic-matrix of order 2 and cubic matrix of order 3 §.§ Algorithmics implementation of Determinants for cubic-matrix of order 2 and 3 In paper <cit.> we have presented the pseudo-code of algorithm based on the permutation expansion method as presented in Definition 1. In the following we have also presented the pseudo-code of algorithm based on the Laplace method as presented in <ref>. []1.3pt P 1: Laplace method for determinants of cubic matrices of order 2 and 3 []1.3pt Step 1: Determine the order of determinant: [m,n,o] = size(A); Step 2: Checking if 3D matrix is cubic: if m  ∼ n; m ∼= o; n ∼= o; disp('A is not square, cannot calculate the determinant') d = 0; return end Step 3: Checking if 3D matrix is higher than the 3rd order: if m > 3; disp('A is higher than the third order, hence can not be calculated.') d = 0; return end Step 4: Initialize d=0; Step 5: Handling base case. if m == 1 d = A; return end Step 6: Select which plan we shall use to expand determinant: Horizontal Layer: x1 = 1 or 2 or 3; or Vertical Layer: x2 = 1 or 2 or 3; or Vertical page: x3 = 1 or 2 or 3; Step 7: Calculate 3D determinant of order 2 and 3 based on Laplace methodology: Create loop from 1 to 2 or 3 (Depending on the order of cubic matrix): Create loop from 1 to 2 or 3 (Depending on the order of cubic matrix): If horizontal layer is selected: d=d+(-1)^∧(1+x1+i+j)∗ A(x1,i,j)∗ det_3DLaplace(A([1:x1-1 x1+1:m],[1:i-1 i+1:n],[1:j-1 j+1:m])); end If vertical layer is selected: d=d+(-1)^∧(1+i+x2+j)∗ A(i,x2,j)∗ det_3DLaplace(A([1:i-1 i+1:m],[1:x2-1 x2+1:n],[1:j-1 j+1:m])); end If vertical page is selected: d=d+(-1)^∧(1+i+j+x3)∗ A(j,i,x3)∗ det_3DLaplace(A([1:i-1 i+1:m],[1:j-1 j+1:n],[1:x3-1 x3+1:m])); end end end Step 8: Return the result of 3D determinant. []1.3pt § DECLARATIONS Funding: No Funding. Authors' contributions: The contribution of the authors is equal. Data availability statements: This manuscript does not report data. Conflict of Interest Statement: There is no conflict of interest with any funder. 99 SalihuZaka1 Armend Salihu and Orgest Zaka, (2023). The Determinant of Cubic-Matrix of order 2 and order 3: Some basic Properties and Algorithms. ArXiv: https://arxiv.org/abs/2306.13336 Salihu1 A. Salihu, H. Snopce, A. Luma and J. Ajdari, "Optimization of Dodgson's Condensation Method for Rectangular determinant Calculations", Advanced Mathematical Models and Applications, vol. 7, no. 3, pp. 264-274, 2022. http://jomardpublishing.com/UploadFiles/Files/journals/AMMAV1N1/V7N3/Salihu_et_al.pdf. PetersZakaDyckAM Peters, J.F., Zaka, O. Dyck fundamental group on arcwise-connected polygon cycles. Afr. Mat.. 34, 31 (2023), https://doi.org/10.1007/s13370-023-01067-3 ZakaDilauto Zaka, O. Dilations of line in itself as the automorphism of the skew-field constructed over in the same line in Desargues affine plane. Applied Mathematical Sciences. 13, 231-237 (2019) ZakaFilipi2016Zaka, O., Filipi, K. The transform of a line of Desargues affine plane in an additive group of its points. Int. J. Of Current Research. 
8, 34983-34990 (2016) FilipiZakaJusufiFilipi, K., Zaka, O., Jusufi, A. The construction of a corp in the set of points in a line of Desargues affine plane. Matematicki Bilten. 43, 1-23 (2019), ISSN 0351-336X (print), ISSN 1857–9914 (online) ZakaCollineations Zaka, O. A description of collineations-groups of an affine plane. Libertas Mathematica (N.S.). 37, 81-96 (2017), ISSN print: 0278 – 5307, ISSN online: 2182 – 567X, MR3828328 ZakaVertex Zaka, O. Three Vertex and Parallelograms in the Affine Plane: Similarity and Addition Abelian Groups of Similarly n-Vertexes in the Desargues Affine Plane. Mathematical Modelling And Applications. 3, 9-15 (2018), http://doi:10.11648/j.mma.20180301.12 ZakaThesisPhd Zaka, O. Contribution to Reports of Some Algebraic Structures with Affine Plane Geometry and Applications. (Polytechnic University of Tirana,Tirana, Albania,2016), supervisor: K. Filipi, vii+113pp. ZakaPetersIso Orgest Zaka and James F. Peters. Isomorphic-dilations of the skew-fields constructed over parallel lines in the Desargues affine plane. Balkan J. Geom. Appl.. 25, 141-157 (2020), www.mathem.pub.ro/bjga/v25n1/B25-1zk-ZBG89.pdf ZakaPetersOrder Orgest Zaka and James Francis Peters. Ordered line and skew-fields in the Desargues affine plane. Balkan J. Geom. Appl.. 26, 141-156 (2021), www.mathem.pub.ro/bjga/v26n1/B26-1zb-ZBP43.pdf ZakaMohammedSF O. Zaka and M. A. Mohammed, "Skew-field of trace-preserving endomorphisms, of translation group in affine plane", Proyecciones (Antofagasta, On line), vol. 39, no. 4, pp. 823-850, Jul. 2020. https://doi.org/10.22199/issn.0717-6279-2020-04-0052 ZakaMohammedEndo O. Zaka and M. A. Mohammed, "The endomorphisms algebra of translations group and associative unitary ring of trace-preserving endomorphisms in affine plane", Proyecciones (Antofagasta, On line), vol. 39, no. 4, pp. 821-834, Jul. 2020. https://doi.org/10.22199/issn.0717-6279-2020-04-0051 Salihu2 A. Salihu, H. Snopce, A. Luma and J. Ajdari, "Comparison of time complexity growth for different methods/algorithms for rectangular determinant calculations", ICRTEC 2023 - Proceedings: IEEE International Conference on Recent Trends in Electronics and Communication: Upcoming Technologies for Smart Systems. https://doi.org/10.1109/ICRTEC56977.2023.10111874. Salihu3 A. Salihu, H. Snopce, J. Ajdari and A. Luma, "Generalization of Dodgson’s condensation method for calculating determinant of rectangular matrices", International Conference on Electrical, Computer and Energy Technologies (ICECET). https://doi.org/10.1109/ICECET55527.2022.9873054. Salihu4 A. Salihu, H. Snopce, A. Luma and J. Ajdari, "Time Complexity Analysis for Cullis/Radic and Dodgson’s Generalized/Modified Method for Rectangular Determinants Calculations", International Journal of Computers and Their Applications, vol. 29, no. 4, pp. 236-246, 2022. http://isca-hq.org/Documents/Journal/Archive/2022/2022volume2904/2022volume290403.pdf. Salihu5 A. Salihu and F. Marevci, "Chio's-like Method for Calculating the Rectangular (non-square) Determinants: Computer Algorithm Interpretation and Comparison", European Journal of Pure and Applied Mathematics, vol. 14, no. 2, pp. 431-450, 2021. https://doi.org/10.29020/nybg.ejpam.v14i2.3920. Salihu6 A. Salihu and F. Marevci, "Determinants Order Decrease/Increase for k Orders, Interpretation with Computer Algorithms and Comparison", International Journal of Mathematics and Computer Science, vol. 14, no. 2, pp. 501-518, 2021. http://ijmcs.future-in-tech.net/14.2/R-Marecvi-Salihu.pdf. Salihu7 A. Salihu, A. 
Jusufi and F. Salihu, "Comparison of Computer Execution Time of Cornice Determinant Calculation", International Journal of Mathematics and Computer Science, vol. 14, pp. 9-16, 2019. http://ijmcs.future-in-tech.net/14.1/R-Salihu2.pdf. Salihu8 A. Salihu, "A modern modification of Gjonbalaj-Salihu cornice determinant, transformation to semi-diagonal determinant", International Journal of Mathematics and Computer Science, vol. 13, pp. 1330138, 2018. http://ijmcs.future-in-tech.net/13.2/R-Salihu.pdf. zaka3DmatrixRing ZAKA, O. (2017) 3D Matrix Ring with a “Common” Multiplication. Open Access Library Journal, 4, 1-11. doi: http://dx.doi.org/10.4236/oalib.1103593. zaka3DGLnnp Zaka, Orgest, The general linear group of degree n for 3D matrices GL(n;n;p;F). Libertas Mathematica, New Series. Lib. Math. (N.S.) 39, No. 1, 13–30 (2019; Zbl 1451.15007) ArtinM Artin, M. (1991) Algebra. Prentice Hall, Upper Saddle River. BretscherO Bretscher, O. (2005) Linear Algebra with Applications. 3rd Edition, Prentice Hall, Upper Saddle River Schneide-Barker Schneide, H. and Barker, G.P. (1973) Matrices and Linear Algebra (Dover Books on Mathematics). 2nd Revised Edition. DPoole David Poole: Linear Algebra. A Modern Introduction. Cengage Learning 2005, ISBN 0-534-99845-3, pp. 265–267 HERose Harvey E. Rose: Linear Algebra. A Pure Mathematical Approach. Springer 2002, ISBN 3-7643-6905-1, pp. 57–60 Lang Lang, S. (1987) Linear Algebra. Springer-Verlag, Berlin, New York. Amiri-etal Amiri, M., Fathy, M., Bayat, M., Generalization of some determinantal identities for non-square matrices based on Radic's definition, TWMS J. Pure Appl. Math. 1, no. 2 (2010), 163–175. Radic1 Radić, M., A definition of determinant of rectangular matrix, Glas. Mat. Ser. III 1(21) (1966), 17–22. Radic2 Radić, M., About a determinant of rectangular 2 × n matrix and its geometric interpretation, Beitr¨age Algebra Geom. 46, no. 2 (2005), 321–349 MAKAREWICZetal Anna Makarewicz, Piotr Pikuta, and Dominik Szalkowski. "Properties of the determinant of a rectangular matrix." Annales Universitatis Mariae Curie-Skłodowska, sectio A – Mathematica 68.1 (2014): null. <http://eudml.org/doc/289812>. Milne-Thomson Milne-Thomson, L. (1941). Determinant Expansions. The Mathematical Gazette, 25(265), 130-135. doi:10.2307/3607371
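As a cross-check of the Laplace expansion summarised in Algorithm P 1 above, the following short Python sketch expands the determinant of a cubic-matrix recursively along the first horizontal layer. It is an independent illustration rather than the authors' MATLAB code; the sign (-1)^(1+i+j+k) (written with 0-based indices below) follows the expansions actually used in the worked examples, and the order-2 example of the paper evaluates to -3 as expected; the order-3 example can be checked the same way.

import numpy as np

def det_cubic(A):
    # Determinant of an n x n x n cubic-matrix (n <= 3) by Laplace-style expansion
    # along the first horizontal layer; with 0-based indices the 1-based sign
    # (-1)^(1+i+j+k) becomes (-1)^(i+j+k).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:                        # a cubic-matrix of order 1 is a scalar
        return A[0, 0, 0]
    total, i = 0.0, 0                 # expand along the first horizontal layer
    for j in range(n):
        for k in range(n):
            minor = np.delete(np.delete(np.delete(A, i, 0), j, 1), k, 2)
            total += (-1) ** (i + j + k) * A[i, j, k] * det_cubic(minor)
    return total

# Order-2 example from the paper (expected value: -3)
A2 = np.zeros((2, 2, 2))
A2[:, :, 0] = [[4, -3], [-1, 5]]      # first vertical layer
A2[:, :, 1] = [[-2, 4], [-7, 3]]      # second vertical layer
print(det_cubic(A2))                  # -3.0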
http://arxiv.org/abs/2307.02442v1
20230705171248
Robotic Sonographer: Autonomous Robotic Ultrasound using Domain Expertise in Bayesian Optimization
[ "Deepak Raina", "SH Chandrashekhara", "Richard Voyles", "Juan Wachs", "Subir Kumar Saha" ]
cs.RO
[ "cs.RO" ]
Robotic Sonographer: Autonomous Robotic Ultrasound using Domain Expertise in Bayesian Optimization Deepak Raina^12*, SH Chandrashekhara^3, Richard Voyles^2, Juan Wachs^2, Subir Kumar Saha^1 This work was supported in part by SERB (India) - OVDF Award No. SB/S9/Z-03/2017-VIII; PMRF - IIT Delhi under Ref. F.No.35-5/2017-TS.I:PMRF; National Science Foundation (NSF) USA under Grant #2140612; Daniel C. Lewis Professorship and PU-IUPUI Seed Grant. ^1Indian Institute of Technology (IIT), Delhi, India ({deepak.raina, saha}@mech.iitd.ac.in); ^2Purdue University (PU), Indiana, USA ({draina, rvoyles, jpwachs}@purdue.edu); ^3All India Institute of Medical Sciences (AIIMS), Delhi, India (drchandruradioaiims@gmail.com). ^*Corresponding author is Deepak Raina August 1, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Ultrasound is a vital imaging modality utilized for a variety of diagnostic and interventional procedures. However, an expert sonographer is required to make accurate maneuvers of the probe over the human body while making sense of the ultrasound images for diagnostic purposes. This procedure requires a substantial amount of training and up to a few years of experience. In this paper, we propose an autonomous robotic ultrasound system that uses Bayesian Optimization (BO) in combination with the domain expertise to predict and effectively scan the regions where diagnostic quality ultrasound images can be acquired. The quality map, which is a distribution of image quality in a scanning region, is estimated using Gaussian process in BO. This relies on a prior quality map modeled using expert's demonstration of the high-quality probing maneuvers. The ultrasound image quality feedback is provided to BO, which is estimated using a deep convolution neural network model. This model was previously trained on database of images labelled for diagnostic quality by expert radiologists. Experiments on three different urinary bladder phantoms validated that the proposed autonomous ultrasound system can acquire ultrasound images for diagnostic purposes with a probing position and force accuracy of 98.7% and 97.8%, respectively. § INTRODUCTION Ultrasound is the most frequently used imaging modality for diagnostic and surgical interventions due to its low cost, non-ionizing nature, portability and real-time feedback. Ultrasound offers several advantages over other imaging modalities, like Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), however, the diagnosis by ultrasound is a highly operator-dependent modality <cit.>. This is because of the skills required for manual control of the probe and quality assessment of acquired images. Sonographers employ both directed as well as random explorations strategies to search for diagnostic-quality images. 
The ultrasound probe is moved within the region of interest through hand maneuvers initially and fine adjustments to the probe's translational and rotational motion later. These maneuvers also include the safe and precise adjustment of the pressure through the probe while simultaneously analyzing the quality of acquired images. Such an intricate procedure requires a great deal of skill, focus, experience and manual effort from sonographers. In rural settings, skilled sonographers availability is limited <cit.>, and alternative solutions are required. In order to reduce the burden on experts, a Robotic Ultrasound System (RUS) is introduced. RUS consists of a dexterous robotic arm and an ultrasound machine with its probe attached to the end effector of the robot, as shown in Fig. <ref>. RUS can help ensure the accuracy, safety and consistency of the ultrasound procedures. Recently, in order to address the aforementioned needs, several telerobotic or human-assisted ultrasound systems have been proposed <cit.>. Compared to these systems, a fully automated ultrasound system offers various potential benefits, including shorter procedure time, a shorter learning curve, minimal communication delays and a reduced cognitive load <cit.>. However, there are key challenges for effective autonomous RUS. One of the most important challenge has to do with the hand motions for ultrasound images acquisition. Such images exhibit considerable inter- and intra-subject variability and the image quality is highly dependent on the precise position, orientation and pressure of the ultrasound probe. With incorrect probe maneuvers, the resulting image presents noise, artifacts, blurred boundaries and poor visibility, thereby making it unacceptable for diagnosis. Sonographers rely on visual and haptic feedback, anatomical information, and diagnostic expertise from prior medical education to rapidly acquire the high-quality images. Therefore, the RUS must locate the regions with acceptable diagnostic image quality for inter- and intra-patient procedures in the fewest exploration steps. In this paper, we present an autonomous robotic ultrasound system that uses the domain-expertise in Bayesian Optimization (BO)-based search to scan the anatomical regions for acquiring diagnostic quality ultrasound images, thereby eliminating the need to thoroughly scan the entire region. The key contributions of our work are as follows: * We proposed a prior in BO, gleaned from the expert's demonstration of high image quality probing poses, termed as expert's prior. BO then estimates the region's unknown image quality as a semi-parametric Gaussian process model with expert's prior. * A novel image quality metric is proposed, trained using a dataset of ultrasound images labelled for diagnostic quality by expert radiologists, which provides image feedback of the region to the BO. * We experimentally validated using three urinary bladder phantoms requiring different probing maneuvers for acquiring high image quality. The results show that our systems consistently and autonomously acquire high-quality ultrasound images in all phantoms. We believe that the use of BO combined with domain expertise to perform autonomous ultrasound scanning will lead to less reliance on expert availability and a wider application in remote and underserved populations. §.§ Related Work Autonomous Robotic Ultrasound Systems: In recent years, a range of autonomous robotic ultrasound systems has been proposed to minimize human intervention. 
Earlier works used image features for ultrasound image-based visual servoing <cit.>. Later, various systems used pixel-based confidence map methods <cit.> and segmentation of structures for optimizing the probe poses and forces <cit.>. However, these image feature- and pixel-based approaches are modality specific, computationally expensive and do not consider the significance of diagnostic aspects. Hennersperger et al. <cit.> developed the autonomous system using the pre-operative MRI scan, however, MRI is quite expensive to acquire. Ma et al. <cit.> proposed autonomous lung scanning by localizing the target region using RGB-D sensor data. However, the system used only force feedback and did not rely on ultrasound image feedback, thereby limiting its diagnostic accuracy. Recently, Li et al. <cit.> proposed a deep Reinforcement Learning (RL) framework to control the probe for spinal ultrasound, incorporating image quality optimization into the reward formulation. However, the success of these systems is limited to phantoms and patients whose data was included during training. Moreover, deploying RL in medical systems is quite challenging, as it requires vast amount of physical interaction with the human body and poses safety and ethical concerns. In contrast to these systems, the proposed autonomous ultrasound system narrows down the area to be scanned using BO, eliminating the need to thoroughly scan the entire region. We further propose using domain expertise gleaned from the experts in the form of BO prior and image quality metrics, in order to acquire diagnostic-quality ultrasound images. Bayesian Optimization for Medical Robots: Due to the fast optimization capability, BO has been adopted for safety-critical robotic medical procedures, such as autonomous robotic palpation <cit.>, semi-autonomous surgical robot <cit.>, controller tuning of hip exoskeletons <cit.> and autonomous robotic ultrasound <cit.>. Our work is a non-trivial extension to the work by Goel et al. <cit.>. They proposed using BO for autonomous ultrasound utilizing segmentation of the vessel in the ultrasound image as feedback to the BO for scanning the region with high vessel density. They used hybrid position-force control to move the robot in (x,y) plane while maintaining constant force along the z-direction to the point of contact. In contrast, our work suggests two technical improvements to enhance the practicality of this approach. First, we recommend using a deep learning model that generates quality scores for ultrasound images as feedback to the BO instead of relying on a segmented mask of the tissue or structure. The latter approach can be very time-consuming and labor-intensive for experts as they would need to annotate anatomical structures' boundaries, taking into account the ultrasound image noise and variability due to machine settings, probe pressure, and patient anatomy. Second, we expand the capabilities of the BO by enabling it to search for the optimal scanning region along the (x,y,z)-axis. Notably, the z-axis is under variable force control to account for varying physiological conditions <cit.>. Domain Expertise in BO: BO can utilize the expert's knowledge in the form of priors (beliefs) that the expert (practitioner) has on the potential location of the optimum. Such techniques have been mostly used for hyper-parameter tuning of image and text datasets <cit.>, open-source machine learning datasets <cit.> and robot simulation experiments <cit.>. 
A few recent works have utilized expert's knowledge in the form of prior for medical robots <cit.>. Ayvali et al. <cit.> propose robotic palpation to detect tissue abnormalities using BO. They modified the acquisition function of BO, whose value peaks at the user-provided locations. Zhu et al. <cit.> proposed an autonomous robotic auscultation system for locating the optimal sound quality location using BO. They used visual registration of the patient to locate the anatomical landmarks for obtaining a prior observation model. Inspired by these works, we propose BO for autonomous ultrasound leveraging a prior quality map gleaned from expert's demonstrations. § METHODOLOGY The pipeline of the autonomous robotic ultrasound system is shown in Fig. <ref>. In the offline phase, the expert will demonstrate the potential probing poses to acquire the diagnostic quality images. This demonstrated data would be used to build a prior quality map, which encodes prior anatomical approximation about expected image quality. We also built a dataset of urinary bladder ultrasound images of humans and phantoms with labelled image qualities and trained a deep learning model for image quality assessment metrics. In the online phase, we used BO to select the probe poses to find the optimal ultrasound image quality utilizing both the prior map and quality metric gleaned from the domain expertise. §.§ Bayesian optimization formulation We use BO to search adaptively for probing poses that yield a high-quality ultrasound image within a specified anatomical region. Let A be the region of interest on the human body enclosing the anatomical structure, then the objective of BO is to solve: max_p∈ A q(ℐ(p)) where q(ℐ(p)) denotes the quality score of ultrasound Image ℐ at probe pose p. The BO will compute the probabilistic estimate of the unknown quality map q(ℐ(p)) across the human body using the domain expertise in the form of prior and image quality metric. An acquisition function is optimized to yield the new probing pose. Once the new observation is found, the estimate is re-fitted to the data and the process is repeated till the termination criteria is reached, which is either the maximum reasonable iteration N_max or the estimated quality score threshold required for adequate diagnosis. The overall algorithm is outlined in Algorithm 1. §.§.§ Expert's prior A common estimator used in BO is Gaussian Process (GP) model, which defines an unknown function f by assigning a probe pose p a random variable f(p), which jointly represent a Gaussian. A GP for unknown function f is defined by the mean function μ(·) and covariance or kernel function κ(·,·). Given the function value estimates f̅ = [f(p_1), ⋯, f(p_n)] at probe poses p̅ = [p_1, ⋯, p_n], GP regression can predict the function f at new probe pose p^* as the Gaussian distribution and is given by: 𝒫(f(p^*)|p^*,p̅, f̅) = 𝒩(kK^-1f̅, κ(p^*, p^*) - kK^-1k^T) where, k = [ κ(p_*, p_1) ⋯ κ(p_*, p_n) ] K = [ κ(p_1, p_1) ⋯ κ(p_1, p_n); ⋮ ⋱ ⋮; κ(p_n, p_1) ⋯ κ(p_n, p_n) ] We opted to use a combination of two kernel functions, namely the radial basis function and white noise function, as their combination improved estimations for structures present in ultrasound images <cit.>. The formulation of the kernel is: κ(p_i, p_j) = σ_r exp(-||p_i - p_j||^2/2l^2) + σ_w I where σ_r is the overall variance, l is the length-scale, σ_w is the variance of noise and I is the identity matrix. We further denote the set of image qualities as q̅ = [q_1, ⋯, q_n]. 
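To make the estimation step concrete, the sketch below fits a Gaussian process with the radial-basis plus white-noise kernel described above to a few (probe pose, image quality) observations using scikit-learn, and queries the posterior at a candidate pose. It is an illustrative sketch only: the poses, quality values and hyper-parameter values are invented, and the expert's-prior mean and the acquisition step discussed next are omitted.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

# Toy observations: probe poses p = (x, y, z) and image quality scores q in [0, 1]
P = np.array([[0.00, 0.00, 0.01],
              [0.02, 0.01, 0.02],
              [0.04, 0.03, 0.01],
              [0.01, 0.05, 0.03]])
q = np.array([0.35, 0.55, 0.80, 0.40])

# sigma_r * exp(-||p_i - p_j||^2 / (2 l^2)) + sigma_w * I, as in the kernel above
kernel = ConstantKernel(1.0) * RBF(length_scale=0.02) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(P, q)

# Posterior mean and standard deviation at a candidate probe pose p*,
# which the acquisition function would consume to pick the next pose
mu, sigma = gp.predict(np.array([[0.03, 0.02, 0.02]]), return_std=True)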
In the GP, we propose using prior knowledge gleaned from expert demonstrations to reduce exploration and to capture how variations in probe pose affect the magnitude of the ultrasound image quality for different human anatomies. Inspired by the work in <cit.>, we formulate the GP as a semi-parametric GP model, with its prior ℰ(θ) modeled as a Gaussian process with latent parameters θ representing the mean μ_θ and covariance function κ. The parameters θ are initially inferred from observed probe poses and ultrasound image qualities, which the expert provides by maneuvering the probe to poses of potentially optimal image quality across different subjects. During online BO, θ is inferred from the history of points (p̅, q̅) and the prior ℰ(θ) with Maximum A Posteriori (MAP) estimation, using an L-BFGS solver: θ^* = arg max_θℒ (θ| p̅, q̅) ℰ(θ) where ℒ (θ| p̅, q̅) = ∏ℙ (q_i| μ_θ(p_i), K) is the likelihood function and ℙ(·) denotes the probability density function of the Gaussian distribution 𝒩(q_i| μ_θ(p_i), K). Since the GP models the residual function f(p) with respect to the prior, we subtract the prior from the image quality as f(p_i) = q_i - μ_θ(p_i) before re-estimating the GP. §.§.§ Acquisition Function In each iteration of BO, the next probe pose at which to observe the image quality is determined using an acquisition function. We use Expected Improvement (EI), the most commonly used acquisition function. If the posterior mean and variance of the GP at pose p are μ_f̅(p) and σ_f̅^2(p), then EI can be formulated as: EI(p) = (μ_f̅(p) - f^+ - ξ)Φ(Z) + σ_f̅(p)ϕ(Z) if σ_f̅(p) > 0, and EI(p) = 0 if σ_f̅(p) = 0, where Z = (μ_f̅(p) - f^+ - ξ)/σ_f̅(p) if σ_f̅(p) > 0 and Z = 0 otherwise; Φ and ϕ are the cumulative distribution and probability density functions of the standard normal distribution, respectively, and f^+ is the best observed quality so far. The parameter ξ in eq. (<ref>) governs the amount of exploration during optimization; a higher ξ value leads to more exploration (less exploitation). §.§ Expert's ultrasound image quality metric §.§.§ Dataset We used two datasets of Urinary Bladder (UB) ultrasound images. One was collected during the in-vivo trials of our in-house developed Telerobotic Ultrasound System <cit.> at the All India Institute of Medical Sciences (AIIMS), Delhi, India. The AIIMS ethics committee approved this study under IEC-855/04.09.2020, RP-16/2020. The other dataset was collected from a UB phantom. A total of 2016 real and 2016 phantom images were collected. The ground-truth quality of each image is the average integer score of labels assigned by three expert radiologists, each with 15 years of experience in abdominal radiology. Each label is an integer score between 1 and 5, based on an internationally prescribed generalized 5-level absolute assessment scale <cit.>. A score of 1 indicates no appearance of the urinary bladder, and a score of 5 indicates a clear depiction of the urinary bladder with distinct boundaries and acceptable artifacts, corresponding to high diagnostic accuracy. A subpar-quality image (score 2 to 4) contains noise or motion artifacts, blurring, or indistinct boundaries that obscure the posterior or anterior sections of the urinary bladder. We later normalized the quality scores to the range 0-1 for standardized comparison with other quality estimation methods. §.§.§ Feature extraction Ultrasound image quality assessment requires rich feature extraction to classify images that are highly variable in appearance and differ substantially in image quality, as shown in Fig. <ref>.
In recent work, Song et al. <cit.> proposed a bilinear Convolutional Neural Network (CNN) for fine-grained classification of breast ultrasound image quality. We propose a technical enhancement of this work for analyzing urinary bladder ultrasound images, in which the bladder appears at multiple scales and shapes (refer to Fig. <ref> for sample images) owing to inter- and intra-subject anatomical variability and to varying probe poses and forces. Thus, it is also essential to analyze images at multiple scales. Recently, Basu et al. <cit.> proposed combining multi-scale and second-order capabilities for detecting gall bladder cancer. Taking inspiration from these works, we propose a deep CNN-based quality assessment model. The base network is a Residual Network (ResNet50) <cit.> combined with a multi-scale, bilinear-pooling classifier, as shown in Fig. <ref>. We use group convolution kernels on equal-width splits of the feature volume in place of the 3 × 3 convolution kernel in the bottleneck layer of ResNet50. If 𝒳∈ℝ^H × W × N represents the feature volume with height H, width W and number of channels N, then the operation of the multi-scale block can be represented by the following equations: 𝒴_1 = 𝒳_1, 𝒴_2 = C_1(𝒳_2), 𝒴_3 = C_2(𝒴_2 + 𝒳_3), 𝒴_4 = C_3(𝒴_3 + 𝒳_4), where 𝒳_i ∈ℝ^H × W × N/4. Each split 𝒳_i is first combined with the output of the previous split, 𝒴_i-1, and then fed to the 3 × 3 convolutional kernel C_i to produce an output 𝒴_i. After the image passes through 16 multi-scale blocks, the feature volume 𝒳∈ℝ^H × W × N is passed through a 1 × 1 convolution block to reduce it to 𝒳∈ℝ^H × W × N^'. It is then reshaped to a matrix 𝒳∈ℝ^M × N^', where M = H × W. A bilinear pooling is then applied as: ℬ = 1/N^' (𝒳𝒳^T) + ϵI, ℬ ← sign(ℬ)√(|ℬ|), ℬ ← ℬ/||ℬ||_2, where eq. (<ref>) computes the outer product of the feature volume and eqs. (<ref>) perform the element-wise signed square root followed by the l_2 normalisation of the matrix ℬ. Finally, the feature map is flattened and a fully connected layer returns the ultrasound image quality score. §.§ Robot control The robot controller moves the probe to the new pose p = [x, y, z] given by BO, where (x, y) is under position control and z is under force (f_z) control. For the safety of the phantoms, the force limit is set to 20 N <cit.>. The orientation of the probe is kept normal to the point of contact. Hybrid position-force control is used to control the robot. After the search, the robot can execute the top probe poses with maximum image quality. § RESULTS AND DISCUSSIONS §.§ Experimental setup We conducted the experiments on the laboratory setup of the Robotic Ultrasound System at Purdue University, USA, consisting of a 7-DoF Sawyer collaborative robotic arm (Rethink Robotics, Germany) with a Micro Convex MC10-5R10S-3 transducer attached to its end-effector. The US image is captured by a Telemed Ultrasound machine (Telemed Medical Systems, Italy) and transferred to a laptop. The ultrasound was performed on a urinary bladder phantom (YourDesignMedical, USA). We customized this phantom with 0.39-inch-thick (subject to manual cutting error) rectangular layers of ballistic gel to approximately represent patient bodies with physiological differences. Thus, we present results for three phantoms, termed P0, P1 and P2, having 0, 1 and 2 layers, respectively, as shown in Fig. <ref>. The BO and image quality model have been implemented in Python 3.8 and PyTorch 1.11.
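Since the quality model is implemented in PyTorch, the bilinear-pooling step described above can be sketched as follows; the tensor sizes, the ϵ value, and the exact normalization convention are illustrative assumptions, and the multi-scale backbone that produces the feature volume is omitted.

```python
import torch

def bilinear_pool(feat: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Bilinear pooling of a reduced feature volume of shape (batch, N', H, W).

    Mirrors the three steps in the text: scaled outer product of the reshaped
    feature matrix, element-wise signed square root, then l2 normalization.
    """
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)                        # (batch, N', M) with M = H*W
    pooled = torch.bmm(x, x.transpose(1, 2)) / c         # outer product scaled by 1/N'
    pooled = pooled + eps * torch.eye(c, device=feat.device)
    pooled = torch.sign(pooled) * torch.sqrt(torch.abs(pooled))  # signed square root
    pooled = pooled.flatten(1)
    pooled = pooled / pooled.norm(dim=1, keepdim=True).clamp_min(eps)  # l2 normalization
    return pooled                                        # input to the fully connected layer

# Example with a dummy 1x1-reduced feature volume: N' = 64 channels on a 7x7 grid.
features = torch.randn(2, 64, 7, 7)
print(bilinear_pool(features).shape)                     # torch.Size([2, 4096])
```

The flattened, normalized vector is what the final fully connected layer consumes to produce the quality score.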
ROS is used to integrate and establish communication among all components of the setup. For the BO in Algorithm 1, we used ξ=0.1, N_max=50, and A ∈ ((0,0.15)m, (0,0.15)m, (8-20)N) for (x,y,f_z). The prior ℰ(θ) is modeled with a GP fitted to 10 potential probing poses and the corresponding image qualities. §.§ Performance of quality assessment model We trained the ultrasound image quality assessment model, explained in Section <ref>, using Categorical Cross Entropy (CCE) as the loss function. We split the dataset into training and test sets in a 90:10 ratio. We also used a transfer-learning approach <cit.>, initializing the proposed model with weights pre-trained on ImageNet. Stochastic gradient descent was used as the optimizer, with a learning rate of 0.005, momentum of 0.9 and weight decay of 0.0005. The input image size is 224 × 224, the batch size is 16, and the network is trained for 100 epochs. The results in Table <ref> show that the proposed model (ResNet50+MS+BP) achieves an increase in accuracy of 3.01% on the test set when compared with the ResNet50+BP model proposed in <cit.>. §.§ Comparing different BO strategies To analyze the effectiveness of the proposed methodology, we compared BO with a zero prior against BO with the proposed expert prior. We evaluated these search strategies with two types of image feedback: the mean of the segmented mask of the bladder in the ultrasound image (q_S), as used in <cit.>, and the proposed ultrasound image quality metric learned from expert ratings (q_E). For segmentation, we used a U-net-based segmentation model proposed in <cit.>. Further, each feedback strategy was compared over different search spaces: first considering probe motion along the x- and y-axes of the phantom, and second along the x-, y- and z-axes, where the z-axis is under force control (f_z). The estimated quality maps obtained using these strategies for P0 are shown in Fig. <ref>, where red indicates high-quality regions and blue indicates low-quality regions. The black dots on the maps represent the probe positions queried on the phantom during the optimization. The first column in Fig. <ref> shows the quality map obtained by moving the probe uniformly over the phantom, which is taken as the approximate ground-truth quality map. For both quality types, the ground truth was obtained using the approximate desired force (f_d) of 14 N, 16 N and 18 N for P0, P1 and P2, respectively, which gives the best image quality in these phantoms. We present results for 3 cases to illustrate the effect of searching with the appropriate force in these phantoms: (i) f_z < f_d, with f_z constant and equal to f_d-4; (ii) f_z = f_d; and (iii) f_z variable. We compared the quality maps of these strategies quantitatively using three metrics: (i) sum of the quality differences of the top n points, (ii) top quality, and (iii) Zero Normalized Cross Correlation (ZNCC), as shown in Table <ref>. The numbers in the table are the metric values averaged over the 3 tests on each phantom. These metrics are computed with respect to the approximate ground truth for the phantom. The sum of the differences over the top n points compares the quality of the images acquired at the n highest quality values, the top quality compares the highest image quality score, and the ZNCC evaluates the overall similarity of the quality map acquired during the search.
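As an illustration of these comparison metrics, a possible implementation of the ZNCC and the top-n quality difference in NumPy is sketched below; the map resolution and the value of n are treated as assumptions, since the exact settings are not restated here.

```python
import numpy as np

def zncc(estimated_map: np.ndarray, ground_truth_map: np.ndarray) -> float:
    """Zero-normalized cross-correlation between two quality maps (1.0 = identical shape)."""
    a = (estimated_map - estimated_map.mean()) / estimated_map.std()
    b = (ground_truth_map - ground_truth_map.mean()) / ground_truth_map.std()
    return float(np.mean(a * b))

def top_n_quality_difference(estimated_q: np.ndarray, ground_truth_q: np.ndarray, n: int = 5) -> float:
    """Sum of differences between the n highest quality values of the two maps (0 = perfect)."""
    top_est = np.sort(estimated_q.ravel())[-n:]
    top_gt = np.sort(ground_truth_q.ravel())[-n:]
    return float(np.sum(np.abs(top_gt - top_est)))

# Toy example on a 20x20 quality grid.
rng = np.random.default_rng(0)
gt = rng.random((20, 20))
est = gt + 0.05 * rng.standard_normal((20, 20))
print("ZNCC:", zncc(est, gt), " top-5 difference:", top_n_quality_difference(est, gt))
```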
Quality-difference values close to 0, together with top-quality and ZNCC values close to 1, indicate a better estimate of the quality map. Quality maps in Fig. <ref> with fewer scattered probe points (less exploration) and more points in the high-quality (red) region represent a better search strategy. From the results in Fig. <ref> and Table <ref>, we find that BO using the segmented image as the quality score in (x,y) space with f_z ≤ f_d is too exploratory (low ZNCC), with many points spread over the low-quality region of the phantom. However, the quality maps obtained using the expert-derived image quality metric show less exploration, with most of the probe positions in the high-quality region of the phantom. Owing to noise and shadows in the ultrasound image, the segmentation results are prone to errors, resulting in a large number of probe evaluations in low-quality regions, whereas the expert image quality score, which is based on a holistic assessment of the image, keeps the focus on anatomical structures rather than being distracted by noise. The search strategies using f_z<f_d could not find the high-quality region and instead converged to a local maximum rather than the global maximum. However, with f_z=f_d, the high-quality regions were found. When the quality region is searched using f_z as a variable in BO with zero prior, the quality maps and top quality scores show that the high-quality regions can be located with a varying force as well, which is essential for in-human ultrasound procedures. However, the search is quite exploratory, with low ZNCC values of 0.733 and 0.821 for quality q_S and q_E, respectively. When the expert prior is used, all BO strategies improve significantly, including the search over the three variables (x,y,f_z). The number of exploration steps in BO usually increases as the search-space dimension increases. Nevertheless, BO with the expert prior reported a top quality of 0.910 with a ZNCC score of 0.889, which are 9.6% and 7.6% higher, respectively, than BO with zero prior. §.§ Validating the convergence of probe positions and forces Since our study involves phantom experiments, the approximate probe positions and forces that yield the best-quality images are known. The search strategy should converge to these approximate probe poses and forces to acquire high-quality images. The proposed strategy reached the desired probe position with an average accuracy of 98.73% across all phantoms. To examine the convergence of force, we compared the probe forces explored by the different BO search strategies, as shown in Fig. <ref>. The proposed formulation of BO using the expert prior and image quality metric achieved mean-value accuracies of 99.28%, 98.25%, and 96.11% for P0, P1, and P2, respectively. In comparison, the other BO search strategies, using a zero prior and segmentation-based quality maps (q_S), showed significant errors in the mean values and larger standard deviations, owing to noise in the image feedback and the inability to adapt to the profile of the scanning region. § CONCLUSION We proposed an autonomous Robotic Ultrasound System (RUS) that performs ultrasound scans according to clinical protocols. We used Bayesian Optimization (BO) to search for high-quality regions, leveraging domain expertise in the form of a prior quality map and an ultrasound image quality metric. The prior map is gleaned from expert demonstrations of potential high-quality probing maneuvers.
A novel image quality metric was learned from an expert-labelled dataset of ultrasound images. Experiments on three phantoms validated that incorporating domain expertise into BO effectively improves system performance, resulting in the acquisition of diagnostic-quality ultrasound images while adapting to the desired probing maneuvers. Since the phantom results are promising, in future work we plan to validate the system's capability in an in-vivo study using our RUS in India <cit.>. We also plan to expand the BO search space from [x,y,f_z] to include [roll,pitch,yaw] in order to orient the probe for scanning patients with complex physiological conditions.
http://arxiv.org/abs/2307.01008v1
20230703134111
Dyson-Schwinger equations in zero dimensions and polynomial approximations
[ "Carl M. Bender", "Christos Karapoulitidis", "S. P. Klevansky" ]
math-ph
[ "math-ph", "hep-th", "math.MP", "quant-ph" ]
cmb@wustl.edu christos.karapoulitidis@stud.uni-heidelberg.de spk@physik.uni-heidelberg.de ^aDepartment of Physics, Washington University, St. Louis, Missouri 63130, USA ^bInstitut für Theoretische Physik, Universität Heidelberg, 69120 Heidelberg, Germany The Dyson-Schwinger (DS) equations for a quantum field theory in D-dimensional space-time are an infinite sequence of coupled integro-differential equations that are satisfied exactly by the Green's functions of the field theory. This sequence of equations is underdetermined because if the infinite sequence of DS equations is truncated to a finite sequence, there are always more Green's functions than equations. An approach to this problem is to close the finite system by setting the highest Green's function(s) to zero. One can examine the accuracy of this procedure in D=0 because in this special case the DS equations are just a sequence of coupled polynomial equations whose roots are the Green's functions. For the closed system one can calculate the roots and compare them with the exact values of the Green's functions. This procedure raises a general mathematical question: When do the roots of a sequence of polynomial approximants to a function converge to the exact roots of that function? Some roots of the polynomial approximants may (i) converge to the exact roots of the function, or (ii) approach the exact roots at first and then veer away, or (iii) converge to limiting values that are unequal to the exact roots. In this study five field-theory models in D=0 are examined, Hermitian ϕ^4 and ϕ^6 theories and non-Hermitian iϕ^3, -ϕ^4, and -i ϕ^5 theories. In all cases the sequences of roots converge to limits that differ by a few percent from the exact answers. Sophisticated asymptotic techniques are devised that increase the accuracy to one part in 10^7. Part of this work appears in abbreviated form in Phys. Rev. Lett. 130, 101602 (2023). Dyson-Schwinger equations in zero dimensions and polynomial approximations S. P. Klevansky^b ========================================================================== § INTRODUCTION In a recent Letter <cit.> we examined the effectiveness of the Dyson-Schwinger (DS) equations to calculate the Green's functions for both Hermitian and -symmetric quantum field theories. This letter presents in compact form our studies of five zero-dimensional models: Hermitian ϕ^4 and ϕ^6 and non-Hermitian iϕ^3, -ϕ^4, and -iϕ^5 theories. Field theories in D=0 are useful because the Green's functions are already known exactly and the DS equations are polynomial equations in the Green's functions, so one can evaluate the accuracy of the truncation scheme used to close the infinite system of coupled DS equations. The current paper presents the detailed results of this study <cit.>. The advantage of studying zero-dimensional field theory is that we can reduce a very difficult problem – that of solving the DS equations for the Green's functions of a field theory – to the generic problem of finding the roots of a polynomial equation. The polynomial depends on the choice of field theory and also on the scheme that is used to solve the infinite tower of DS equations. To construct this polynomial we first truncate the infinite sequence of DS equations to a finite set consisting of the first N coupled polynomial equations. This finite system is underdetermined because there are always more Green's functions than equations. 
Next, we set all but the first N Green's functions to zero and solve the resulting determined coupled polynomial system. This polynomial system is triangular so it is easy to eliminate successively all but the lowest Green's function, which then satisfies the Nth degree polynomial equation P_N(x)=0. This kind of iterative approach in which we take more and more DS equations is common in field theory: One begins with a leading approximation and then constructs a sequence of approximations that one hopes will approach the exact answer. If we knew the underlying function that the sequence of polynomial approximants P_N(x) represents, we could use standard techniques such as Newton's method to determine the roots. However, for difficult problems in physics, as is the case here, the polynomial P_N(x) is only an approximate consequence of the DS equations. We are led to ask, Do the roots of the polynomial approximation at each order lead to the correct solution, and what is the nature of the convergence (if it exists)? There are several possibilities: (i) The accuracy of the roots of the polynomial approximation P_N(x) improve as N→∞ (that is, as one includes more DS equations), and some or all of the roots converge to the correct answer; (ii) The roots of P_N(x) at first approach the correct answer, but then diverge away from it. The former behavior is characteristic of Taylor expansions, where, if the sequence of approximants converges, it converges to the right answer. The latter behavior is characteristic of asymptotic series. Both (i) and (ii) reflect the usual behavior of series approximations. There is also a third possibility: (iii) The roots of P_N(x) converge as N→∞, but they converge to the wrong answer, that is, to a number that may be close to the exact answer but is not the correct answer. This means that the procedure may be used to gain an approximate understanding of the physics but that the accuracy of the result is limited. It is rather unusual for a sequence of approximants to behave in this way. Our expectation in solving the D=0 field-theoretic models for the Green's functions was that increasing the number of DS equations would lead to increasing accuracy in our results if we use the unbiased procedure of truncating the DS equations by setting higher Green's functions to zero rather than guessing the behavior of the higher Green's functions. However, this is not the case: The unbiased truncation procedure does not lead to convergence to the correct value for the Green's function as we go to higher orders. Rather, we observe the third possibility (iii). This discovery holds for both Hermitian and non-Hermitian theories. The only truncation strategy that appears to work (for both kinds of theories) is to find the asymptotic behavior of the Green's function in the limit of large order of truncation; that is, to find the asymptotic behavior of the nth Green's function for large n. Finding this asymptotic behavior is nontrivial. However, if this is done, we find that order-by-order in the asymptotic approximation, the roots of the polynomials rapidly get closer to the exact values of the Green's functions. One objective of our study was to search for differences in the convergence behavior of the DS equations for Hermitian and non-Hermitian field theories. There are subtle differences in the convergence behavior: Hermitian theories display a monotone behavior while non-Hermitian theories have an oscillatory behavior. This paper is organized as follows. In Sec. 
<ref> we use a parabolic cylinder function to illustrate the difficulties with calculating the zeros of a function from polynomial approximations to that function. The question is whether the sequence of polynomials obtained from a Taylor series or from an asymptotic series can approximate the zeros of the parabolic cylinder function. This problem is interesting because, like the DS equations, the polynomial sequences have infinitely many roots while the function being approximated only has a finite number of roots. In Sec. <ref> we show how to derive the DS equations for a general quantum field theory. From the lowest-order calculations of the Hermitian ϕ^4 and non-Hermitian -ϕ^4 theories in D=1, we quantify the errors that arise and motivate the need for examining higher-order truncations of the DS equations. We then study the Hermitian ϕ^4 and ϕ^6 and the non-Hermitian iϕ^3, -ϕ^4, and -iϕ^5 quantum field theories in D=0 dimensions. We begin with the Hermitian quartic theory ϕ^4 in Sec. <ref> and progress to the non-Hermitian cubic theory iϕ^3 in Sec. <ref>, the non-Hermitian quartic theory -ϕ^4 in Sec. <ref>, a quintic theory in Sec. <ref>, and a sextic theory ϕ^6 in Sec. <ref>. Conclusions are presented in Sec. <ref>. § EXAMPLE: ZEROS OF A PARABOLIC CYLINDER FUNCTION To illustrate the nature of polynomial approximations, we attempt to calculate the zeros of the parabolic cylinder function D_3.5(x). This function satisfies the time-independent Schrödinger equation for the quantum harmonic oscillator, -f''(x)+(x^2/4-4)f(x)=0, and is uniquely determined by the initial conditions D_3.5(0)=2^7/4√(π)/Γ(-5/4)  and  D_3.5'(0)=-2^9/4√(π)/Γ(-7/4). Note that f(x)=D_3.5(x) is not an eigenfunction (and 4 is not an eigenvalue) because, as Fig. <ref> shows, while f(x) vanishes as x→∞, f(x) blows up as x→-∞. The four real zeros of f(x), as shown in Fig. <ref>, are located at -3.04735...,  -1.19090...,  0.39183...,  2.04542... . There are no other zeros in the complex-x plane. One way to find these zeros of (<ref>) is to (i) expand f(x) in a Taylor series, (ii) truncate this series to obtain a polynomial, and (iii) find the roots of the polynomial. The 2N-term Taylor series for D_3.5(x) has the form D_3.5(x) = D_3.5(0)∑_n=0^N-1x^2na_n/(2n)! + D_3.5'(0)∑_n=0^N-1 x^2n+1b_n/(2n+1)!. For even powers of x, a_0=1, a_1=-4, and a_n=-4a_n-1+(1/2)(n-1)(2n-3) a_n-2; for odd powers of x, b_0=1, b_1=-4, and b_n=-4b_n-1+(1/2)(n-1)(2n-1) b_n-2. The Taylor series (<ref>) has an infinite radius of convergence but many terms are required to obtain accurate approximations to the zeros of D_3.5(x). In Fig. <ref> we plot the zeros of the 9th-degree Taylor polynomial. A 17th-degree Taylor polynomial gives slightly better approximations to the zeros, as we see in Fig. <ref>. Figures <ref> and <ref> display the roots of 25th-degree and 33rd-degree Taylor polynomials. As expected, the real zeros continue to approach the exact zeros of the parabolic cylinder function and the spurious zeros continue to move slowly outward as the degree of the Taylor polynomial increases. Why is such a high-degree Taylor polynomial required to provide accurate approximations to the four zeros of the parabolic cylinder function? The answer is that, as shown in Fig. <ref>, the parabolic cylinder function behaves differently on the positive-real and the negative-real axes; it decays exponentially like exp(-x^2/4) on the positive-real axis but grows exponentially like exp(x^2/4) on the negative-real axis.
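The recursion above is easy to check numerically. The sketch below (a minimal NumPy illustration; the degree-33 truncation matches the largest case shown in the figures) builds the Taylor polynomial from the recursion and extracts its real roots.

```python
import numpy as np
from math import gamma, pi, sqrt, factorial

N = 17                           # keep 2N terms, i.e. a polynomial of degree 33
a = [1.0, -4.0]
b = [1.0, -4.0]
for n in range(2, N):            # recursions quoted above (note the factor 1/2)
    a.append(-4*a[n-1] + 0.5*(n-1)*(2*n-3)*a[n-2])
    b.append(-4*b[n-1] + 0.5*(n-1)*(2*n-1)*b[n-2])

D0  = sqrt(pi) * 2**(7/4) / gamma(-5/4)    # D_{3.5}(0)
D0p = -sqrt(pi) * 2**(9/4) / gamma(-7/4)   # D'_{3.5}(0)

c = np.zeros(2*N)                # coefficients of sum_k c_k x^k
for n in range(N):
    c[2*n]   = D0  * a[n] / factorial(2*n)
    c[2*n+1] = D0p * b[n] / factorial(2*n+1)

roots = np.roots(c[::-1])        # np.roots expects highest-degree coefficient first
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-8)
# The real roots approach -3.047..., -1.191..., 0.392..., 2.045...,
# possibly accompanied by spurious real roots farther out.
print(real_roots)
```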
However, as the Taylor series converges everywhere in the complex plane, it is difficult for the Taylor polynomials to provide accurate approximations on both the positive and the negative axes. Asymptotic series do not suffer from this problem because such series are not valid in all directions in the complex plane. Their validity is limited to wedge-shaped regions called Stokes sectors. The asymptotic series representation for D_3.5(x) is D_3.5(x) ∼ e^-x^2/4 x^7/2∑_n=0^∞ x^-2nc_n/2^n n! (|x|→∞, -3π/4< arg x<3π/4), where c_n=(-1)^nΓ(9/2)Γ(2n-7/2)/π. This asymptotic series is valid in a Stokes sector of angular opening 270^∘ that includes the positive-real axis but not the negative-real axis. Thus, if we factor off the leading asymptotic behavior to obtain a polynomial, this polynomial will not give useful information about the negative zeros. Although the asymptotic series is valid as |x|→∞, early terms provide good approximations to the positive zeros. The positive-real roots (x=0.59521 and x=2.04530) of the five-term polynomial (1+α x^2+β x^4+γ x^6 +δ x^8) are already quite accurate (see Fig. <ref>); the second root is accurate to one part in 20,000. As the degree of the polynomial obtained from the asymptotic series increases, the ring of spurious zeros expands. For the ten-term polynomial this ring expands past the smaller of the two positive zeros, but there is still a very good approximation to the larger positive root (see Fig. <ref>). For the fifteen-term polynomial, this ring expands past the second positive zero and is no longer directly useful (see Fig. <ref>). Summation techniques such as Padé approximation give even better accuracy, but we do not discuss this here. Without using summation techniques, the accuracy of an asymptotic-series approximation typically increases as we include more terms until it reaches an optimal level, and then it decreases. This is illustrated in Fig. <ref>, which shows the value of the root of the asymptotic-series polynomial near 2 as a function of the number of terms in the polynomial. Note that the root oscillates about the exact zero of the parabolic cylinder function. Optimal accuracy is attained for the 6-term polynomial, after which the accuracy decreases rapidly. To summarize these findings, if we use a Taylor expansion to determine the roots of the parabolic cylinder function, we find more roots than the function actually has, and their number increases with the order of the expansion. Most of these spurious roots are complex, but there is at least one real spurious root. To distinguish between actual and spurious roots one can use the criterion of stability; that is, one can argue that the spurious roots move outward in the complex plane while the positions of the actual roots stabilize as the order of the expansion increases. Finally, the order of approximation that is required to obtain an accurate result is high, which is unfortunate. We emphasize that the coefficients of the Taylor expansion remain unchanged as we go to higher order. This is not the case for the polynomials associated with the DS equations. The asymptotic series approach also has advantages and disadvantages. Its region of validity is limited to the interior of a Stokes sector and not the entire complex plane. Thus, the number of roots that it can possibly find is also limited. However, in its region of validity, the convergence is fast and requires only a few terms.
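The quoted positive roots can be verified directly from the series above; the following sketch truncates the asymptotic factor at five terms (the five-term case discussed in the text), converts it to a polynomial in x^2, and extracts the positive roots.

```python
import numpy as np
from math import gamma, pi, factorial

# Coefficients of the truncated asymptotic factor  sum_{n<N} c_n x^{-2n}/(2^n n!),
# with c_n = (-1)^n Gamma(9/2) Gamma(2n - 7/2) / pi as in the series above.
def term(n):
    return (-1)**n * gamma(9/2) * gamma(2*n - 7/2) / (pi * 2**n * factorial(n))

N = 5
coeffs = [term(n) for n in range(N)]

# Multiplying by x^{2(N-1)} gives, in the variable y = x^2, a polynomial of degree N-1.
poly_in_y = np.polynomial.Polynomial(coeffs[::-1])
y_roots = poly_in_y.roots()
x_roots = sorted(np.sqrt(y.real) for y in y_roots if abs(y.imag) < 1e-10 and y.real > 0)
print(x_roots)   # approximately [0.595..., 2.0453...]; the exact zero is 2.04542...
```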
Like the Taylor expansion, the asymptotic series also has many other roots in the complex plane that are spurious. Evidently, without prior knowledge of a function, it may be difficult to determine from a polynomial expansion of that function which roots are close to the actual roots and which roots are spurious. In the following sections, we restrict our analysis to quantum field theories in D=0 because we can find analytic solutions. This allows us to investigate the systematics of finding the correct roots from the polynomial DS equations. § DERIVATION OF DS EQUATIONS The objective in quantum field theory is to calculate the Green's functions γ_n(x_1, x_2, ..., x_n), which are defined as vacuum-expectation values of time-ordered products of the field ϕ(x): γ_n(x_1,x_2,... x_n)≡⟨ 0| T{ϕ(x_1) ϕ(x_2)...ϕ(x_n)}|0⟩. These Green's functions are then combined into structures called cumulants that give the connected Green's functions G_n(x_1,x_2,... x_n). The connected Green's functions are correlation functions that contain the physical content (energy spectrum, scattering amplitudes) of the quantum field theory. In principle, the program is first to solve the field equations (which are partial differential equations like the classical equations of fluid mechanics) for the quantum field ϕ(x) and then to calculate the vacuum expectation values of products of the fields directly. It is advantageous to calculate the connected Green's functions G_n, rather than the nonconnected Green's functions γ_n because this eliminates the problem of vacuum divergences. As a consequence of translation invariance, each disconnected contribution to γ_n introduces an additional factor of the spacetime volume V, which is an infinite quantity when D>0. The difficulty in quantum field theory is that the field ϕ(x) is an operator-valued distribution rather than a function. Free fields obey linear differential equations but interacting fields obey nonlinear differential equations. (The field equation for a gϕ^4 quantum field theory contains a cubic term.) Unfortunately, products of fields are singular and require great care to define them properly. An early approach to this difficulty was to calculate Green's functions in terms of Feynman diagrams. This perturbative procedure (in powers of the coupling constant g) avoids high-level mathematical analysis and reduces the problem to the evaluation of integrals. Indeed, in the early days of quantum field theory one view was that one could simply define a field theory as nothing but a set of Feynman rules and thereby avoid technical mathematical problems <cit.>. However, Feynman perturbation theory has its own mathematical difficulties: First, individual terms in the graphical expansion may be infinite and must be renormalized to remove the infinities. Second, the resulting renormalized perturbation series is divergent and may not be easily summable. Third, nonperturbative effects are difficult or even impossible to obtain by using perturbative graphical methods alone. Dyson and Schwinger developed another technique for calculating the Green's functions that requires only c-number functional analysis (differential and integral equations), so one need not be concerned about operators, Hilbert spaces, and other mathematical issues <cit.>. In principle, one can use this technique to obtain the nonperturbative as well as the perturbative behavior of Green's functions. 
The procedure is to (i) construct an infinite system of coupled equations called Dyson-Schwinger (DS) equations that is satisfied exactly by the connected Green's functions, and then (ii) truncate the infinite set of equations to a finite closed system of coupled equations that can be solved to provide approximations to the first few connected Green's functions. To be precise, the DS equations are an infinite triangular system of coupled equations obeyed by the connected Green's functions G_n. Each new equation introduces additional Green's functions so a truncation of the system always contains more Green's functions than equations and the truncated system is underdetermined. An unbiased solution strategy is to close the truncated system by setting the highest Green's function (or Green's functions) to zero. The system can then be solved by successive elimination. The question investigated here is whether this procedure gives increasingly accurate approximations to the Green's functions as the size of the truncated system increases. We also examine the differences between Hermitian and non-Hermitian theories. We will see below that the accuracy of a first-order calculation of G_2 is significantly higher for a one-dimensional Hermitian ϕ^4 theory than for a non-Hermitian -symmetric -ϕ^4 theory. The DS equations for a quantum field theory can be derived directly from the Euclidean functional integral Z[J]=∫ϕ exp[∫ dx{-[ϕ(x)]+J(x)ϕ(x)}], where is the Lagrangian and J is a c-number external source. Here, Z[0] is the Euclidean partition function and ⟨0_+|0_-⟩_J≡ Z[J] represents the vacuum-persistence amplitude; that is, the probability amplitude for the ground state in the far past to remain in the ground state in the far future despite the action of the external source J. The vacuum-persistence functional is a generating function for the Green's functions. If we take n functional derivatives of Z[J] with respect to J and then set J≡0, we obtain the n-point Green's function γ_n: γ_n(x_1, ... x_n)=δ/δ J(x_1)...δ/δ J(x_n)Z[J]|_J≡0. And, if we take n functional derivatives of log(Z[J]) with respect to J and set J≡0, we obtain the connected n-point Green's function: G_n(x_1, ... x_n)=δ/δ J(x_1) ...δ/δ J(x_n)log(Z[J])|_J≡0. §.§ Example: Hermitian quartic theory in D=1 For a Hermitian massless ϕ^4 theory in one-dimensional spacetime, we begin with the Euclidean functional integral Z[J]=∫ Dϕ e^-∫ dt, where =ϕ̇^2+ gϕ^4-Jϕ (g>0). The field equation for this theory is -ϕ̈(t)+gϕ^3(t)-J(t)=0. We take the vacuum expectation value of the field equation and divide by Z[J]: -G̈_1(t)+gγ_3(t,t,t)/Z[J]=J(t), where G_1(t) and γ_3(t,t,t) are functionals of J. To obtain the DS equations for the connected Green's functions we eliminate the nonconnected Green's function γ_3 in (<ref>) in favor of connected Green's functions. We functionally differentiate the equation γ_1(t)= ⟨ 0|ϕ(t)|0⟩=Z[J]G_1(t) repeatedly with respect to J(t): γ_2(t,t)=⟨ 0|ϕ^2(t)|0⟩=Z[J]G_2(t,t)+Z[J]G_1^2(t), γ_3(t,t,t) = ⟨ 0|ϕ^3(t)|0⟩ = Z[J]G_3(t,t,t)+3Z[J]G_1(t)G_2(t,t) +Z[J]G_1^3(t). We then divide this equation by Z[J] and use the result to eliminate γ_3 in (<ref>): -G̈_1(t)+g[G_3(t,t,t)+3G_1(t)G_2(t,t) +G_1^3(t)]=J(t). This is the key equation; the entire set of DS equations is obtained from (<ref>) by repeated differentiation with respect to J and setting J≡0. To get the first DS equation we set J≡0 in (<ref>). This restores translation invariance, so G_1 is a constant and G̈_1=0. 
Parity invariance implies that all odd-numbered Green's functions vanish. Thus, the first DS equation becomes trivial: 0=0. To get the second DS equation we functionally differentiate (<ref>) once with respect to J(s), set J≡0, and drop all odd-numbered Green's functions: -G̈_2(s-t)+M^2 G_2(s-t)+gG_4(s,t,t,t)=δ(s-t), where the renormalized mass is M^2=3gG_2(0). We cannot solve (<ref>) because it is one equation in two unknowns, G_2 and G_4. As stated above, each new DS equation introduces one new unknown Green's function: The third DS equation is trivial but the fourth contains G_6, the fifth is trivial but the sixth contains G_8, and so on. To proceed, we simply set G_4=0 in (<ref>). To solve the resulting equation we take a Fourier transform to get (p^2+M^2) G̃_2(p)=1. Thus, the two-point connected Green's function in momentum space is G̃_2(p)=1/(p^2+M^2). Taking the inverse transform, we get G_2(t)=e^-M|t|/(2M), so G_2(0)=1/ (2M). Inserting G_2(0) into (<ref>) gives a cubic equation for the renormalized mass whose solution for g=1 is M=(3/2)^1/3=1.145.... To check the accuracy of this result we note that the renormalized mass is the energy of the lowest excitation above the ground state. For this model (massless quantum anharmonic oscillator) the exact answer is M=E_1-E_0=1.088.... Thus, the DS result is 5.2% high, which is not bad for a leading-order truncation. §.§ Example: -symmetric quartic theory in D=1 We obtain a non-Hermitian -symmetric massless ϕ^4 theory in D=1 if g in (<ref>) is negative. In this case the Green's functions are not parity symmetric, so the odd-n Green's functions do not vanish. The first DS equation is not trivial, 3G_2(0)+G_1^2=0, where we have divided by the common factor G_1. Following the procedure in the example above, the second DS equation leads to two more equations M^2=3g[G_1^2+G_2(0)], G_2(0)=1/(2M). We set g=-1 and solve the three equations above for the renormalized mass: M=3^1/3=1.442... . The exact value of M obtained by solving the Schrödinger equation for the -symmetric quantum-mechanical Hamiltonian H= p^2- x^4 is M=E_1-E_0=1.796.... Thus, the result in (<ref>) is 19.7% low. The two examples above raise the following question: Does the accuracy improve if we perform higher-level truncations of the DS equations? In general, this is not an easy question to answer because higher-order truncations of the DS equations lead to nonlinear integral equations, which require detailed numerical analysis. However, we can solve the DS equations in very high order to study the convergence in zero spacetime dimensions. In the next sections we examine this question in detail for the D=0 Hermitian gϕ^4 (g>0) theory, the D=0 non-Hermitian iϕ^3 theory, the D=0 non-Hermitian gϕ^4 (g<0) theory, the D=0 non-Hermitian -iϕ^5 theory, and the D=0 Hermitian ϕ^6 theory. § D=0 HERMITIAN QUARTIC THEORY In zero-dimensional spacetime the functional integral (<ref>) becomes the ordinary integral Z[J]=∫_-∞^∞ dϕ e^-(ϕ), where (ϕ)=ϕ^4-Jϕ and we have set g=1. The connected two-point Green's function is an ordinary integral, which we evaluate exactly: G_2 = ∫_-∞^∞ dϕ ϕ^2 e^-ϕ^4/4/ ∫_-∞^∞ dϕ e^-ϕ^4/4 = 2Γ()/Γ() = 0.675 978... . The theory defined in (<ref>) has parity invariance when J=0, so all odd Green's functions vanish, G_1=G_3=G_5= ... =0 and the first nontrivial DS equation is G_4=-3G_2^2+1. If we truncate this equation by setting G_4=0 and solve the resulting equation 3G_2^2=1, we get the approximate numerical result G_2=1/√(3)=0.577 350.... 
In comparison with (<ref>) this result is 14.6% low. Let us include more DS equations: The first four are G_4 = -3G_2^2+1, G_6 = -12G_2G_4 - 6 G_2^3, G_8 = -18G_2G_6-30G_4^2-60G_2^2G_4, G_10 = -24G_2G_8-168G_4G_6-126G_2^2G_6-420G_2G_4^2, and the next six are G_12 = -30G_2G_10-360G_4G_8-216G_2^2G_8-378G_6^2 -3024G_2G_4G_6, G_14 = -36G_2G_12-660G_4G_10-330G_2^2G_10 -2376G_6G_8-7920G_2G_4G_8-8316G_2G_6^2 -41580G_3G_5G_6-27720G_4^2G_6, G_16 = -42G_2G_14-1092G_4G_12-468G_2^2G_12 -6006G_6G_10-17160G_2G_4G_10-5148G_8^2 -61776G_2G_6G_8-102960G_4^2G_8-216216G_4G_6^2, G_18 = -48G_2G_16-1680G_4G_14-630G_2^2G_14 -13104G_6G_12-32760G_2G_4G_12-34320G_8G_10 -180180G_2G_6G_10-300300G_4^2G_10 -154440G_2G_8^2 -2162160G_4G_6G_8-756756G_6^3, G_20 = -54G_2G_18-2448G_4G_16-816G_2^2G_16 -25704G_6G_14-57120G_2G_4G_14-95472G_8G_12 -445536G_2G_6G_12-742560G_4^2G_12-72930G_10^2 -7001280G_4G_8^2-1166880G_2G_8G_10 -8168160G_4G_6G_10-14702688G_6^2G_8 -17153136G_6G_7^2, G_22 = -60G_2G_20-3420G_4G_18-1026G_2^2G_18 -46512G_6G_16-93024G_2G_4G_16 -232560G_8G_14-976752G_2G_6G_14 -1627920G_4^2G_14-503880G_10G_12 -3627936G_2G_8G_12 -25395552G_4G_6G_12 -2771340G_2G_10^2-66512160G_4G_8G_10 -69837768G_6^2G_10-119721888G_6G_8^2. Because the DS equations (<ref>) are exact, we can find the precise values of all G_2n sequentially by substituting the exact value of G_2 from (<ref>) into (<ref>). The results are given in Table <ref>. Observe that the G_2n alternate in sign as n increases, a feature that is not immediately evident from the structure of the equations in (<ref>). Close examination of the terms contributing to a given G_2n reveals that all terms are of similar size, so it is not easy to identify a dominant contribution. The oscillation in sign of G_2n as n increases differs from the behavior of the disconnected Green's functions, γ_2n=∫_-∞^∞ d ϕϕ^2ne^-ϕ^4/4=2^n-1/2Γ (2n+14), all of which from (<ref>) are positive. The first eleven numerical values are also given in Table <ref>. It is possible to check the expressions for the connected Green's functions in (<ref>) using an alternative, independent method. We calculate the G_2n directly from a generating function w(x), which in this case is possible, since we know γ_2n explicitly: we can write down the generating function for G_2n in terms of it, w(x)=ln[1+1/2!γ_2/γ_0x^2+1/4!γ_4/γ_0x^4+1/6!γ_6/γ_0x^6+…], expand this in a Taylor series, and identify the G_2n as (2n!)× the coefficient of x^2n. One easily finds that the coefficient of x^2 is Γ( 34)/Γ(14), so that one recovers the value of G_2 given in (<ref>). The next five Green's functions calculated in this way are G_4 = 1-12 Γ( 3 4)^2/Γ( 14)^2, G_6 = -24Γ( 3 4)/Γ(1 4)+240 Γ( 3 4)^3/Γ(1 4)^3 , G_8 = -30 + 1 344Γ( 3 4)^2 /Γ(1 4)^2- 10 080 Γ( 3 4)^4/Γ(1 4)^4, G_10 = 4 632 Γ( 3 4)/Γ( 14)- 120 960 Γ( 3 4)^3/Γ( 14)^3 + 725 760Γ( 3 4)^5/Γ( 14)^5. G_12 = 9 120 - 877 536 Γ( 3 4)^2/Γ( 14)^2 + 15 966 720 Γ( 3 4)^4/Γ( 14)^4 - 79 833 600Γ( 3 4)^6/Γ( 14)^6, with the expansion of the logarithm leading to alternating signs of the terms contributing to each G_2n. A numerical evaluation of (<ref>) confirms the exact values given in Table <ref>. The analytic relationships among the G_2n, as derived from the DS equations (<ref>), can be easily confirmed. This calculation confirms the DS equations, but (<ref>) does not lend itself to an asymptotic analysis because the sign of G_2n as evaluated from these expressions is determined by a delicate cancellation of terms having different signs. 
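As a quick numerical cross-check of the two routes just described (the DS chain seeded with the exact G_2 versus the gamma-function expressions from the generating function), one might run the following short sketch; it uses only values quoted above.

```python
from math import gamma

rho = gamma(3/4) / gamma(1/4)
G2 = 2 * rho                       # exact G_2 = 2 Gamma(3/4)/Gamma(1/4)

# Dyson-Schwinger chain, seeded with the exact G_2:
G4_ds = -3*G2**2 + 1
G6_ds = -12*G2*G4_ds - 6*G2**3

# Closed forms obtained from the generating function w(x):
G4_gen = 1 - 12*rho**2
G6_gen = -24*rho + 240*rho**3

print(f"G_2 = {G2:.9f}")
print(f"G_4: DS chain {G4_ds:.9f}  vs  generating function {G4_gen:.9f}")
print(f"G_6: DS chain {G6_ds:.9f}  vs  generating function {G6_gen:.9f}")
```

Both routes agree to machine precision, as they must, since the two expressions are algebraically identical.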
The alternation in signs of the G_2n occurs because these functions are cumulants, reflecting only connected terms and therefore requiring subtractions. From the DS equations (<ref>) it is not at all obvious that the signs are oscillating. §.§ Approximate solutions As a rule, we do not know the exact solutions to the DS equations, and must therefore employ approximate methods of solution. The system of DS equations (<ref>) is not closed. Rather it is triangular, and the number of unknowns is always one more than the number of equations. A standard unbiased procedure is to define a truncation scheme in which as a first approximation G_4=G_6=G_8= ... = 0; the next level of approximation is reached by setting G_6=G_8= G_10= ...=0, the next by setting G_8=G_10= G_12=G_14= ...=0, and so on. To do this efficiently, we reorganize (<ref>). We eliminate G_4 by substituting the first equation into the second, we eliminate G_6 by substituting the first two equations into the third, and so on. Continuing this way, we obtain an expression for G_2n as an nth degree polynomial in G_2 only. We denote the monic form of these polynomials (where the highest power of x is 1) as P_n(G_2). The first ten such polynomials are P_2(x) = x^2-13, P_3(x) = x^3-25x, P_4(x) = x^4-815x^2+121, P_5(x) = x^5-23x^3+1931890x P_6(x) = x^6-45x^4+2771575x^2-7610395, P_7(x) = x^7-1415x^5+3611350x^3-853861x, P_8(x) = x^8-1615x^6+356945x^4-47579210135125x^2 +12291091475, P_9(x) = x^9-65x^7+5291050x^5-13583160875x^3 +84135291929727800x, P_10(x) = x^10-43x^8+613945x^6-92464675675 x^4 +3658792328930875x^2-32372186642225, P_11(x) = x^11-2215x^9+76679450x^7 -190319921375x^5 +1304611935638815000x^3-6155980974996239500x. Truncating the DS equations (<ref>) is equivalent to finding the zeros of these polynomials. We list the nonnegative zeros below (negative zeros are excluded because G_2=M^-2, where M is the renormalized mass). The first seven sets of zeros are zero of P_2:  0.577350, zeros of P_3:  0.0,   0.632456, zeros of P_4:  0.336742,  0.648026, zeros of P_5:  0.0,  0.488357,  0.654350, zeros of P_6:  0.232147,  0.560220,  0.657466, zeros of P_7:  0.0,  0.376821,  0.597310,  0.659212, zeros of P_8:  0.176270,  0.466447,  0.618098,   0.660287, and the next three sets of zeros are zeros of P_9:  0.0,  0.302770,  0.523189,  0.630624,   0.660997, zeros of P_10:  0.141830,  0.392352,  0.560204,   0.638652,  0.661493, zeros of P_11:  0.0,  0.251866,  0.456057,  0.585125,   0.644070,  0.661853. The roots up to n=80 are plotted in Fig. <ref>. Note that all roots are real and nondegenerate, and range from 0 up to just below the exact value of G_2 in (<ref>). If we did not already know the exact value G_2, we could not guess which root gives the best approximation to G_2. However, with increasing truncation order, the roots become more dense at the upper end of the range, so we would conjecture that the largest root gives the best approximation. Unfortunately, while the accuracy improves monotonically with the order of the truncation, it improves slowly; the largest root of P(x) is still 1.85% below the exact value. Using Richardson extrapolation, we can determine the value to which the largest root converges <cit.>: G_2=0.663 488…. Thus, the limiting value of the sequence of roots does not converge to the true value G_2=0.675 978… <cit.>. To understand this discrepancy, we examine the large-n asymptotic behavior of the G_2n in detail in the following subsection. 
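The truncation-and-elimination procedure described above is easy to reproduce symbolically. The following SymPy sketch uses the first few DS relations quoted earlier, sets the highest retained Green's function to zero, and prints the largest positive root at each order; the values can be compared with the zeros of P_2 through P_5 listed above and with the exact G_2 = 0.675 978... .

```python
import sympy as sp

G2 = sp.symbols('G2')

# DS relations from the text (g = 1), each in terms of lower Green's functions.
G4  = -3*G2**2 + 1
G6  = -12*G2*G4 - 6*G2**3
G8  = -18*G2*G6 - 30*G4**2 - 60*G2**2*G4
G10 = -24*G2*G8 - 168*G4*G6 - 126*G2**2*G6 - 420*G2*G4**2

for label, top in [("G4", G4), ("G6", G6), ("G8", G8), ("G10", G10)]:
    # Unbiased truncation: set the highest retained Green's function to zero.
    roots = sp.Poly(sp.expand(top), G2).nroots()
    positive = [sp.re(r) for r in roots if abs(sp.im(r)) < 1e-12 and sp.re(r) > 0]
    print(f"truncate {label} = 0 : largest positive root =", max(positive))
    # expected: 0.577350, 0.632456, 0.648026, 0.654350 (zeros of P_2, P_3, P_4, P_5)

print("exact G_2 =", sp.N(2*sp.gamma(sp.Rational(3, 4))/sp.gamma(sp.Rational(1, 4)), 9))
```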
Figure <ref> also shows that the zeros of successive polynomials P_n(x) interlace. This interlacing behavior might suggest that the polynomials P_n(x) form an orthogonal set with respect to some weight function, but this conjecture is false. Nevertheless, these polynomials do have interesting properties. In particular, there are relatively simple formulas for the polynomial coefficients: The coefficient of x^n, the highest power of x in P_n(x), is 1 (these are monic polynomials) and the formula for the coefficient of x^n-2, the second highest power of x, is - n  (n>2). The coefficient of x^n-4 is 1/2!()^2[n^2-227/84n] (n>4), the coefficient of x^n-6 is -1/3!()^3[n^3-227/28n^2 +31453/2002n] (n>6), and the coefficient of x^n-8 is 1/4!()^4[n^4-227/14n^3 +28505063/336336n^2-404875283/2858856n] (n>8). §.§ Large-n behavior of the Green's functions G_2n The question is whether it is valid to truncate the DS equations (<ref>) by replacing G_2n with zero. To answer this question we look at the asymptotic behavior of G_2n for large n. We have shown both numerically and analytically <cit.> that the asymptotic behavior of G_2n is G_2n∼ 2r^2n(-1)^n+1(2n-1)! (n→∞), where r=0.409 505 7... . To obtain this result analytically we substitute G_2n=(-1)^n+1(2n-1)! g_2n, which is suggested by the numerical result in (<ref>), and we define a generating function u(x) for the numbers g_2n: u(x)≡ x g_2+x^3 g_4+x^5 g_6 +... . This generating function obeys the second-order nonlinear differential equation u”(x)=3u'(x)u(x)-u^3(x)-x, subject to the initial conditions u(0)=0 and u'(0)=G_2=2Γ()/Γ ()=0.675 978 240 067 285... . The substitution u(x)=-y'(x)/y(x) then gives the third-order linear differential equation y”'(x)=xy(x), which is a higher-order generalization of the Airy equation y”(x)=xy(x). The function y(x) satisfies the initial conditions y(0)=1, y'(0)=0, and y”(0)=-G_2 =-0.675 978 240 067 285... . The exact solution y(x) satisfying these boundary conditions is found by taking a cosine transform: y(x)=2√(2)/Γ(1/4)∫_0^∞ dt cos(xt) e^-t^4/4. When y(x) passes through 0, u(x) becomes infinite, so the value of x at which y(x)=0 determines the radius of convergence of the series (<ref>) for the generating function. We find that u(x) passes through 0 at x=± 2.441 968.... Therefore, r=1/x=0.409 506..., which confirms the numerical results in (<ref>). The asymptotic behavior in (<ref>) is surprising; it shows that the connected Green's functions G_2n grow much faster with increasing n than the nonconnected Green's functions γ_2n which are given exactly for all n in (<ref>). One might not expect G_2n to grow faster than γ_2n because we obtain the connected Green's function by subtracting the disconnected parts from γ_2n. Surprisingly, subtracting disconnected parts makes the absolute values of the connected Green's functions larger and not smaller with increasing n. Even more remarkable is that neglecting the huge quantity G_2n on the left side of the truncated DS equations (<ref>) still leads to a reasonably accurate result for G_2, as Fig. <ref> shows. This accuracy improves with increasing n. We can begin to understand this heuristically by observing that while the term on the left side is very big, the terms on the right side are of roughly comparable size because the coefficients are also big. The numerical technique of Legendre interpolation provides a helpful analogy. 
Given a set of n data points x_1, ..., x_n at which we measure a function f(x), f(x_1)=f_1, ..., f(x_n)=f_n, Legendre interpolation fits this data by constructing a polynomial P_n-1(x) of degree n-1 that passes exactly through the value f(x_k) at x=x_k for all 1≤ k≤ n. There is a simple formula for this polynomial. However, this construction has a serious problem; while the constructed polynomial passes exactly through the data points, between data points the polynomial exhibits wild oscillations where it becomes alternately large and positive and large and negative. This reveals a fundamental instability associated with high-degree polynomials. This instability is associated with the inherent stiffness of polynomials <cit.>. If there are many data points, it is much better to use a least-squares polynomial approximation, which passes close to, but not exactly through the input data points. (This explains why cubic splines are used to approximate functions rather than, say, octic splines.) It is precisely the instability associated with the stiffness of high-degree polynomials that allows the DS approach to give reasonably accurate results! If we use the exact values of the Green's functions on the right side of the DS equations (<ref>), we obtain the exact value of the Green's function on the left side, which is a huge number. However, changing the Green's functions on the right side of (<ref>) very slightly by replacing the exact values by the approximate values of the lower Green's functions now gives 0, instead of G_2n. Padé approximation does not improve the calculation of G_2 from the DS polynomials in (<ref>). One might anticipate that Padé techniques would be useful because the coefficients of successive powers of x alternate in sign. The approach would be to divide all odd-numbered polynomials by x and then to replace x^2 in each polynomial by y. If one does this for P_11, for example, one can then calculate the [1,4], [2,3], [3,2], and [4,1] approximants. Unfortunately, the zeros of these approximants are not near the exact value of G_2^2, and such an attempt to improve the accuracy of the DS equations fails. Why does this approach fail? Padé approximation accelerates the convergence of a truncated series even if the series diverges. However, unlike the infinite Taylor series expansion of the parabolic cylinder function in (<ref>) where the coefficients of powers of x remain the same as the order is increased, the coefficients in the DS equations change from order to order. Other approaches, such as assuming a value of G_2n estimated from G_2n-2 converge to a limit that is very slightly closer to the correct one, but which is still not correct, and thus also fail. One approach does give excellent numerical results: If the left side of the DS equations is approximated by the asymptotic approximation (<ref>), G_2 reaches an accuracy of seven decimal places in only six steps. (See Fig. <ref>.) At n=7, we have G_2=0.675 978 218… in comparison with the exact result G_2, exact=0.675 978 240… . While we have gained six orders of magnitude in precision, the result in Fig. <ref> is not exact. This is because (<ref>) is only a leading-order asymptotic approximation. Higher-order asymptotic approximations for G_2n will improve this impressive numerical result even further. 
This suggests that the DS equations can be used to provide extremely accurate solutions for the Green's functions, even when D>0, but these equations must be supplemented by including the large-n asymptotic behavior of the Green's function G_2n. This asymptotic behavior cannot be determined from the DS equations; it must be obtained from a large-n asymptotic approximation to the integral representing the Green's function. § D=0 NON-HERMITIAN CUBIC THEORY This section considers the cubic massless non-Hermitian -symmetric Lagrangian = igϕ^3. For (<ref>) the connected one-point Green's function is G_1=∫ dx xexp(-ix^3/3)/∫ dx exp(-ix^3/3), where we take g=1. The path of integration lies inside a -symmetric pair pair of Stokes sectors. These integrals can be evaluated exactly: G_1=-i3^1/3Γ()/Γ()=-0.729 011 13... i. The DS equations for the Lagrangian (<ref>) are simpler than those in (<ref>) for the Hermitian quartic theory. The first 19 DS equations are given by G_2 = -G_1^2, G_3 = -2G_1G_2-i, G_4 = -2G_2^2-2G_1G_3, G_5 = -6G_2G_3-2G_1G_4, G_6 = -6G_3^2-8G_2G_4-2G_1G_5, G_7 = -20G_3G_4-10 G_2G_5-2G_1G_6, G_8 = -20G_4^2-30G_3G_5-12G_2G_6-2G_1G_7, G_9 = -70G_4G_5-42G_3G_6-14G_2G_7-2G_1G_8, G_10 = -70G_5^2-112G_4G_6-56G_3G_7-16G_2G_8 -2G_1G_9, G_11 = -252G_5G_6 -168G_4G_7-72G_3G_8 -18G_2G_9 -2G_1G_10, G_12 = -252G_6^2-420G_5G_7-240G_4G_8 -90G_3G_9-20G_2G_10-2G_1G_11, G_13 = -924G_6G_7-660G_5G_8-330G_4G_9-110G_3G_10 -22G_2G_11 -2G_1G_12, G_14 = -924G_7^2-1584G_6G_8-990G_5G_9-440G_4G_10 -132G_3G_11-24G_2G_12-2G_1G_13, G_15 = -3432G_7G_8-2574G_6G_9-1430G_5G_10 -572G_4G_11-156G_3G_12-26G_2G_13-2G_1G_14, G_16 = -3432G_8^2 - 6006 G_7G_9 - 4004G_6G_10 -2002G_5G_11-728G_4G_12-182G_3G_13 -28G_2G_14-2G_1G_15, G_17 = -12870G_8G_9-10010G_7G_10-6006G_6G_11 -2730G_5G_12-910G_4G_13 -210G_3G_14 -30G_2G_15 -2G_1G_16, G_18 = -12870G_9^2-22880G_8G_10-16016G_7G_11 -8736G_6G_12-3640G_5G_13-1120G_4G_14 -240G_3G_15 -32G_2G_16 -2G_1G_17, G_19 = -48620G_9G_10 -38896G_8G_11-24752G_7G_12 -12376G_6G_13 -4760G_5G_14 -1360G_4G_15 -272G_3G_16-34G_2G_17-2G_1G_18, G_20 = -48620G_10^2-87516G_9G_11-63648G_8G_12 -37128G_7G_13-17136G_6G_14 -6120G_5G_15 -1632G_4G_16-306G_3G_17-36G_2G_18-2G_1G_19. The coefficients in these equations can be checked easily; the sum of the coefficients on the right side of each equation is an increasing power of 2. For example, for G_8 the sum of the coefficients is 20+30+12+2=2^6, and for G_9 the sum is 70+42+14+2=2^7. As in Sec. <ref>, we again use the unbiased truncation scheme of setting higher-order Green's functions to zero. We obtain the leading approximation to G_1 by substituting the first of these equations into the second and truncating by setting G_3=G_4=…=0. The resulting cubic equation G_1^3= i has three solutions, and we choose the solution that is consistent with symmetry: G_1=-2^-1/3i=-0.793 700 53... i. This result differs by 8.9% from the exact value of G_1 in (<ref>). However, the accuracy improves if we include more DS equations: We close the system by using the first equation to eliminate G_2, the second to eliminate G_3, and so on. The result is that the right side of the G_n equation becomes a polynomial of degree n in the variable G_1, and we truncate the system by setting the left side to zero and finding the roots of this polynomial. At first, the roots consistent with symmetry that are obtained with this procedure seem to approach the exact value of G_1 in (<ref>) but unlike the roots for the Hermitian quartic theory, where the approach is monotone (see Fig. 
<ref>), the approach here is oscillatory at first: For the n=4 truncation the closest root is -0.693 361 27... i, which is smaller in magnitude than the exact value of G_1, and for n=5 the closest root is -0.746 900 79... i, which is larger in magnitude than the exact value. This pattern seems to persist: For n=6 the closest root is -0.712 564 55... i and for n=7 the closest root is -0.739 871 08... i. However, for n=8 this pattern breaks: The closest root is -0.712 368 70... i, which is smaller in magnitude than the exact value, but is a slightly worse approximation than the n=6 root. The departure from the oscillatory convergence pattern at n=8 signals a new behavior. The closest root for n=9 is G_1=-0.738 595 46... i, which is slightly better than the n=7 root, but for n=10 we observe a qualitative change in the character of the approximants. The polynomial associated with G_10 is G_10=40(9072 G_1^10-7560iG_1^7-1881G_1^4+119 iG_1). If we truncate by setting the right side to zero and ignore the trivial root at 0, we see that all nontrivial roots come in triplets located at the vertices of equilateral triangles. The roots that are closest to the exact value of G_1, which lies on the negative-imaginary axis, are not pure imaginary. Rather, there is a pair of roots close to and on either side of the negative-imaginary axis at -0.717 367 67... i± 0.016 050 677... . For higher truncations we find an accumulation of roots near the exact negative-imaginary value in (<ref>), but arranged in a ring around this exact value. We have solved the DS equations up to the 200th truncation and we plot the solutions as dots in the complex plane in Fig. <ref>. We seek solutions that are near the negative-imaginary axis for two reasons: First, symmetry requires that G_1 be negative imaginary. Second, the first equation in (<ref>), G_2=-G_1^2, shows that otherwise G_2 will not be positive; the second Green's function must be positive because G_2=M^ -2, where M is the renormalized mass. A blow-up of the ring structure on the negative-imaginary axis is shown in Fig. <ref> for the solution to the n=200 polynomial only. This emphasizes that the roots on the ring are not approaching the exact value of G_1 shown in Fig. <ref> as n increases, but rather are just becoming dense on the ring. The three-fold symmetry of the roots in Fig. <ref> arises because the monic polynomial equations that come from solving successively truncated DS equations contain only powers of x^3 (after we exclude the trivial roots at 0): P_3n(x)=x^3n+C_1 x^3n-3+C_2 x^3n-6+... +C_n. Five such polynomials (with factors of i excluded) are P_3 = x^3+12, P_6 = x^6+12x^3+120, P_9 = x^9+34x^6+87560x^3+1160, P_12 = x^12+x^9+93280x^6+13336x^3+78800, P_15 = x^15+54x^12+47x^9+19168x^6+ 569096726720x^3+19856. Like the coefficients of the polynomials in (<ref>), the coefficients C_k in (<ref>) have a fairly simple structure: C_1(n) = 11!()^1 n (n>1), C_2(n) = 12!()^2(n^2- 4735n) (n>2), C_3(n) = 13!()^3 (n^3 -47· 335n^2+13435n) (n>3), C_4(n) = 14!()^4 (n^4-47· 635n^3 +253871225n^2 -1471121175175n)  (n>4). §.§ Asymptotic behavior of G_n for large n In Sec. <ref> we investigated the large-n asymptotic behavior of the Green's functions in order to study the validity of the truncation procedure for the quartic Hermitian theory. We repeat this analysis for the non-Hermitian cubic theory. The DS equations (<ref>) and (<ref>) determine the exact values of the G_n. These are listed in Table <ref>. 
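The entries of such a table are straightforward to regenerate: the sketch below simply iterates the DS equations listed above, seeded with the exact G_1, using plain Python complex arithmetic (only the first few equations are reproduced here).

```python
from math import gamma

# Exact G_1 for the cubic theory (g = 1), as quoted in the text.
G1 = -1j * 3**(1/3) * gamma(2/3) / gamma(1/3)

# First few DS equations of the cubic theory, used as exact recursions.
G2 = -G1**2
G3 = -2*G1*G2 - 1j
G4 = -2*G2**2 - 2*G1*G3
G5 = -6*G2*G3 - 2*G1*G4
G6 = -6*G3**2 - 8*G2*G4 - 2*G1*G5
G7 = -20*G3*G4 - 10*G2*G5 - 2*G1*G6
G8 = -20*G4**2 - 30*G3*G5 - 12*G2*G6 - 2*G1*G7

# Odd-n Green's functions come out purely imaginary and even-n ones real.
for n, G in enumerate([G1, G2, G3, G4, G5, G6, G7, G8], start=1):
    print(f"G_{n} =", G)
```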
Applying Richardson extrapolation to the entries in Table <ref>, we find that the asymptotic behavior of G_n for large n (including the overall multiplicative constant) is G_n ∼ -(n-1)! r^n (-i)^n (n→∞), where r=0.427 696 347 707... . This asymptotic behavior is confirmed analytically in Ref. <cit.>. The derivation goes as follows. We define g_p ≡ -i^p G_p/(p-1)! and express the DS equations for the Green's functions G_n in compact form as a recursion relation: g_p = 1/(p-1) ∑_{k=1}^{p-1} g_k g_{p-k} + (1/2)δ_{p,3} (p≥2). We then multiply by (p-1)x^p to get (p-1) g_p x^p = ∑_{k=1}^{p-1} (g_k x^k)(g_{p-k} x^{p-k}) + x^3 δ_{p,3}, and rewrite the left side as x (d/dx)(x^p g_p) - x^p g_p. Next, we sum in p from 2 to ∞ and define the generating function f(x): f(x) ≡ ∑_{p=1}^∞ x^p g_p. This generating function satisfies the Riccati equation xf'(x)-f(x)=f^2(x)+x^3. We linearize this equation by substituting f(x)=-xu'(x)/u(x) and f'(x)=x[u'(x)]^2/[u(x)]^2-u'(x)/u(x)-xu''(x)/u(x) into the Riccati equation. Four terms cancel and we get u''(x)=-xu(x). This is an Airy equation of negative argument whose general solution is u(x)= a Ai(-x)+b Bi(-x), where a and b are arbitrary constants. Thus, f(x)=x [a Ai'(-x) + b Bi'(-x)]/[a Ai(-x) + b Bi(-x)]. To determine the constants a and b, we note that f'(0)=g_1=-3^{1/3}Γ(2/3)/Γ(1/3)= -0.729 011 132 947... . Hence, -3^{1/3}Γ(2/3)/Γ(1/3) = [a Ai'(0) + b Bi'(0)]/[a Ai(0) + b Bi(0)]. We then substitute Ai(0) = 3^{-2/3}/Γ(2/3), Ai'(0) = -3^{-1/3}/Γ(1/3), Bi(0) = 3^{-1/6}/Γ(2/3), Bi'(0) = 3^{1/6}/Γ(1/3), cancel the Gamma functions, and obtain -1=(-a+b√(3))/(a+b√(3)). Thus, a is arbitrary and b=0, so f(x) = x Ai'(-x)/Ai(-x). The generating function f(x) is a power series, and it blows up when the denominator in this equation is zero. This happens first when x=2.338 107 410 459..., which is the radius of convergence of the series. The inverse of this number is precisely the value of r in (<ref>). Once again, we are faced with justifying the truncation needed to solve the system of DS equations, and we repeat the argument in Sec. <ref>. As before, the unbiased truncation gives a slowly converging sequence of approximants that does not converge to the exact value of G_1. The novelty here is that, if we use the asymptotic expression (<ref>) as the basis of the truncation, an entirely new root, which is extremely close to the exact value of G_1, appears inside the tight loop of roots in the complex plane, as shown in Fig. <ref>. This figure gives a comparison of the n=200 evaluation using this asymptotic approximation (red) and the unbiased truncation (blue). The blue and red loops are almost the same size, but the new root agrees with the exact value of G_1 to seven decimal places. However, corresponding new roots also appear in the loops at the ends of the other two propellers. The condition of global 𝒫𝒯 symmetry does not exclude these roots because the entire constellation of zeros is 𝒫𝒯 symmetric. To exclude these spurious zeros we can impose the condition that G_2 be positive (spectral positivity). We do so by using the first DS equation in (<ref>). To see more clearly the effect of including the asymptotic behavior of G_n in the truncation scheme, we plot the absolute values of the solutions along the negative-imaginary axis for n ranging from 1 to 200 in Fig. <ref>. As we see in Fig. <ref>, there are solutions which are both larger and smaller (in absolute value) than the exact solution. Thus, we do not observe a monotonic behavior of the roots for increasing n. However, the isolated root inside the loop in Fig. <ref> is indistinguishable from the exact solution (red line).
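As a numerical cross-check of this derivation (an addition of ours; scipy's ai_zeros routine supplies the Airy zero), one can generate the exact g_p from the recursion above and compare their growth with the predicted rate r:

# Sketch: growth rate of the cubic-theory Green's functions versus the Airy-zero prediction.
import numpy as np
from scipy.special import ai_zeros

# Ai(-x) first vanishes at x = 2.338107...; r is the reciprocal of that number.
first_zero_of_Ai = ai_zeros(1)[0][0]        # -2.3381074104...
r = -1.0 / first_zero_of_Ai
print("r =", r)                              # 0.4276963477...

# Exact reduced Green's functions g_p = -i^p G_p/(p-1)! from the recursion relation above.
P = 30
g = np.zeros(P + 2)
g[1] = -0.729011132947                       # g_1 = f'(0), quoted above
for p in range(2, P + 2):
    g[p] = sum(g[k] * g[p - k] for k in range(1, p)) / (p - 1) + (0.5 if p == 3 else 0.0)

# Since g_p ~ r^p for large p, successive ratios g_{p+1}/g_p should approach r.
for p in (10, 20, 30):
    print(p, g[p + 1] / g[p])

The ratios settle near 0.4277, consistent with the asymptotic formula and with the radius of convergence found above.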
§ D=0 NON-HERMITIAN QUARTIC THEORY To understand more broadly the behavior of our truncation schemes, we consider next the quartic Lagrangian ℒ = -(1/4)gϕ^4, which defines a non-Hermitian massless 𝒫𝒯-symmetric theory in zero-dimensional spacetime. The connected one-point Green's function for this Lagrangian is G_1=∫ dx x exp(gx^4/4)/∫ dx exp(gx^4/4), where the paths of integration lie inside a 𝒫𝒯-symmetric pair of Stokes sectors of angular opening π/4 centered about -π/4 and -3π/4 in the lower-half complex-x plane. Without loss of generality, we set g=1 and evaluate these integrals exactly: G_1=-2i√(π)/Γ(1/4)=-0.977 741 07... i. The first eight DS equations for this theory are G_3 = -G_1^3-3G_1G_2, G_4 = -3G_1G_3-3G_2^2-3G_1^2G_2-1, G_5 = -3G_1G_4-9G_2G_3-3G_1^2G_3-6G_1G_2^2, G_6 = -3G_1G_5-12G_2G_4-3G_1^2G_4-9G_3^2 -18G_1G_2G_3-6G_2^3, G_7 = -3G_1G_6-15G_2G_5-3G_1^2G_5-30G_3G_4 -24G_1G_2G_4-18G_1G_3^2-36G_2^2G_3, G_8 = -3G_1G_7-18G_2G_6-3G_1^2G_6-45G_3G_5 -30G_1G_2G_5-30G_4^2-60G_1G_3G_4 -60G_2^2G_4-90G_2G_3^2, G_9 = -3G_1G_8-21G_2G_7-3G_1^2G_7-63G_3G_6 -36G_1G_2G_6-105G_4G_5-90G_1G_3G_5 -90G_2^2G_5-60G_1G_4^2-360G_2G_3G_4-90G_3^3, G_10 = -3G_1G_9-24G_2G_8-3G_1^2G_8-84G_3G_7 -42G_1G_2G_7-168G_4G_6-126G_1G_3G_6 -126G_2^2G_6-105G_5^2-210G_1G_4G_5 -630G_2G_3G_5-420G_2G_4^2-630G_3^2G_4. The unbiased approach to solving these equations consists of fixing n and then using successive linear elimination to obtain polynomial equations to be solved numerically for the lowest Green's functions. However, the procedure is more difficult than for the Hermitian quartic theory in (<ref>) or the non-Hermitian cubic theory in (<ref>) because this elimination process concludes with two polynomials containing not one but two Green's functions, G_1 and G_2. That is, we obtain a coupled pair of polynomial equations to solve for G_1 and G_2 rather than one polynomial equation in one Green's function. For example, the leading truncation (n=4) consists of eliminating G_3 in the second DS equation by substituting the first DS equation into it. We then truncate by setting G_3=G_4=G_5=…=0 and solve the resulting pair of simultaneous equations. This leads to G_1^4=3/2, and the 𝒫𝒯-symmetric solution in the lower-half plane is G_1=-i(3/2)^{1/4}=-1.106 681 92... i. This result has an error of 13.2% in comparison with the exact value of G_1 in (<ref>). For larger values of n the procedure for solving the pair of polynomial equations is tedious: We multiply each equation by an expression that makes the coefficient of the highest power of G_1 (or G_2) the same and then subtract the two equations to eliminate this highest-power term. We repeat this process until one of the equations becomes linear in G_2. We solve this equation for G_2 and eliminate it algebraically from the other equation. This gives a high-degree polynomial equation for G_1 that we can finally solve numerically. The problem with this procedure is that each multiplication introduces spurious roots. However, we find that the final polynomial in powers of G_1 factors into two polynomials; the roots of one factor are all spurious, while the roots of the other factor, which is a polynomial in powers of G_1^4, solve the original pair of equations. The number of roots increases rapidly with n, and all roots come in quartets that lie at the vertices of squares in the complex plane. All (nonspurious) roots up to n=33 are displayed in Fig. <ref>.
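The leading truncation just described can be checked directly. The sketch below is our own illustration (sympy and the variable names are our choices); it solves the truncated pair for G_1 and G_2 and keeps the PT-symmetric root:

# Sketch: leading (n=4) truncation of the non-Hermitian quartic DS system.
import sympy as sp

G1, G2 = sp.symbols('G1 G2')

G3 = -G1**3 - 3*G1*G2                                  # first DS equation above
eq1 = sp.Eq(G3, 0)                                     # truncation: G_3 = 0
eq2 = sp.Eq(-3*G1*G3 - 3*G2**2 - 3*G1**2*G2 - 1, 0)    # second DS equation with G_4 = 0

for g1, g2 in sp.solve([eq1, eq2], [G1, G2]):
    g1c = complex(g1)
    if abs(g1c.real) < 1e-12 and g1c.imag < 0:         # PT-symmetric branch
        print("G_1 =", g1c, "  G_2 =", complex(g2))

The surviving root reproduces G_1 = -i(3/2)^{1/4} ≈ -1.10668 i, and the truncated approximation to G_2 on this branch is positive (≈ 0.408), as spectral positivity requires.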
The symmetry of the Lagrangian (<ref>) requires that G_1 be a negative-imaginary number. Since we must solve coupled equations for G_1 and G_2, we require the exact value of G_2 in order to calculate the exact values of all of the Green's functions from (<ref>). The exact value of G_2 is G_2=4π/Γ^2()-2Γ()/Γ()=0.279 999 35... . Then, using G_1 in (<ref>) and G_2 in (<ref>) we obtain the results given in Table <ref>. §.§ Asymptotic behavior of G_n for large n Inspection of Table <ref> shows that the exact values of the odd (even) Green's functions oscillate in sign as n increases, and that the odd Green's functions are imaginary, while the even ones are real. Applying Richardson extrapolation to the entries in Table <ref>, we find that the asymptotic behavior of G_n for large n is G_n∼-(n-1)! (-i)^n r^n (n→∞), where r=0.34640.... (The overall multiplicative constant in the asymptotic behavior is exactly 1.) This result is similar to the behavior in (<ref>) for the Green's functions of the non-Hermitian cubic theory. § D=0 NON-HERMITIAN QUINTIC THEORY Next, we analyze the quintic -symmetric Lagrangian =-15igϕ^5. The one-point Green's function is given by G_1=∫ dx xexp(gx^5/5)/∫ dx exp(gx^5/5). Choosing -symmetric Stokes wedges in the negative half-plane and setting g=1, we get the exact value G_1=-1.078 653… . The first three DS equations that one obtains are G_4 = -G_1^4 -6G_2G_1^2 -4G_3G_1 - 3G_2^2 G_5 = -4G_2^3G_2 -12 G_1G_2^2 -6G_1^3G_3 -10G_2G_3 -4G_1G_4 + i G_6 = -12 G_1^2G_2^2 -4 G_1^2G_3 - 12 G_2^3 - 36 G_1G_2G_3 - 6 G_1^2G_4 - 10 G_3^2 - 14 G_2G_4 - 4 G_1G_5. The first equation for G_4 contains three unknowns, G_1, G_2 and G_3, so setting G_4=G_5=…=0 as a first unbiased truncation means that we must solve three coupled equations. At the next truncation G_40, but all higher G_n=0. We therefore eliminate G_4 in terms of G_1, G_2, and G_3, and must solve the next set of three equations. Thus, the solution is complicated. Figure <ref> gives a plot of the roots in the complex plane up to n=11. These roots exhibit five-fold symmetry. We observe ten concentrations of roots. One can understand this as follows: Associated with the Lagrangian (<ref>) are five Stokes sectors that define the regions of convergence of the integral (<ref>) in the complex plane (Stokes sectors). Thus, there are ten possible distinct paths of integration in the complex plane, each of which lead to different values of G_1. Aside from the imaginary value in (<ref>), there is another imaginary -symmetric solution from -symmetric (left-right symmetric) integration in the upper-half plane that gives G_1=0.4120… i. The other possible complex values of the integral for G_1 are ± 0.392…+0.127… i, ± 0.242…-0.333… i, ± 0.634…+0.872… i, and ± 1.025 …-0.333… i. These ten values for G_1 are plotted as red squares on Fig. <ref>, and correspond to the dense regions of solutions. This feature is a general one: for the quartic and cubic systems discussed in the previous sections, an analysis of all possible paths of integration in each case results in all possible solutions of the Green's functions being represented in the complex plane. § D=0 HERMITIAN SEXTIC THEORY Here we consider the zero-dimensional model described by the massless Lagrangian =1/6 gϕ^6. The first two Green's functions are given by G_1 = ∫ dϕϕ e^-ϕ^6/6/ ∫ dϕ e^-ϕ^6/6 G_2 = ∫ dϕϕ^2e^-ϕ^6/6/ ∫ dϕ e^-ϕ^6/6, where g=1. 
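For the Hermitian choice of contour along the real axis (one of the possible paths discussed next), these moment integrals can be evaluated by straightforward quadrature. The short sketch below is our own illustration and uses scipy:

# Sketch: real-axis evaluation of G_1 and G_2 for the sextic theory with g = 1.
import numpy as np
from scipy.integrate import quad

def weight(x):
    return np.exp(-x**6 / 6.0)

Z, _  = quad(weight, -np.inf, np.inf)
G1, _ = quad(lambda x: x * weight(x), -np.inf, np.inf)
G2, _ = quad(lambda x: x**2 * weight(x), -np.inf, np.inf)
print("G_1 =", G1 / Z)    # 0 by parity
print("G_2 =", G2 / Z)    # ≈ 0.578616519

Reducing the two moments to Gamma functions (our evaluation, stated here only as a cross-check) gives the same number as 6^{1/3}Γ(1/2)/Γ(1/6) ≈ 0.5786165, in agreement with the exact value quoted below.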
The paths of integration must be specified, and there are six distinct regions of convergence bounded by the Stokes lines at π/12 + nπ/6, so there are 15 possible combinations of integration paths. Thus, there are 15 different theories associated with the Lagrangian (<ref>). The first DS equation is complicated and contains five unknowns: G_5 = -G_1^5-10G_1^3G_2-10G_1^2G_3-15G_1G_2^2-5G_1G_4-10G_2G_3, and, in general, a complete solution of the DS equations would require that we solve four coupled polynomial equations at each truncation level. To reduce the calculational complexity, we restrict the system to be parity symmetric, so that all odd Green's functions vanish. The Hermitian sextic theory has an integration path along the real axis. The exact value of G_2, obtained by integrating (<ref>) along this path, is G_2=0.578 616 519…. However, there are two other choices of integration path that also respect parity symmetry; these give rise to the values G_2=-0.2893…±0.5010… i. Imposing parity symmetry on the first DS equation (<ref>) gives the trivial equation 0=0. The first five (nontrivial) DS equations link the even-n Green's functions to others of higher order: G_6 = -15G_2^3-15G_2G_4+1, G_8 = -60G_2^4 -165G_2^2G_4 -35G_4^2-25G_2G_6, G_10 = -120G_2^5-1200G_2^3G_4-1150G_2G_4^2-365G_2^2G_6 -205G_4G_6, G_12 = -4200G_2^4G_4-16800G_2^2G_4^2-4550G_4^3 -3360G_2^3G_6-8470G_2G_4G_6-455G_6^2 -645G_2^2G_8-460G_4G_8-45G_2G_10, G_14 = -100800G_2^3G_4^2-168000G_2G_4^3-15120G_2^4G_6 -151200G_2^2G_4G_6-76020G_4^2G_6-23310G_2G_6^2 -7200G_2^3G_8-22680G_2G_4G_8-2910G_6G_8 -1005G_2^2G_10-875G_4G_10-55G_2G_12. As before, we truncate the DS equations by taking at each step a pair of successive equations and setting the highest-order connected Green's functions to zero. This constitutes a truncation of order n. The results for G_2 up to n=30 (where we solve the equations G_64=G_66=0) are shown in Fig. <ref>. As in the quartic case, the roots converge monotonically to points near the three exact values of G_2 in (<ref>) and (<ref>). Richardson extrapolation gives the limiting values of the truncated sequences, and these limiting values differ from the exact values by 6% (see Fig. <ref>). § CONCLUSIONS In this paper we have studied the effectiveness of the DS equations as a way to calculate the Green's functions of a quantum field theory. We have examined the DS equations for zero-dimensional field theories only because in this case we can evaluate the integral representations of the Green's functions exactly and then compare these exact results with the approximants provided by the DS equations. We find that while the Green's functions exactly satisfy the infinite system of coupled DS equations, the DS equations alone cannot be used to obtain accurate results for the Green's functions. The reason for this is that the Green's functions are expressed in terms of moments of the functional integral that specifies the partition function Z of the quantum field theory. However, the DS equations are derived by functional differentiation of the partition function. While differentiation preserves local information, the global information in the functional integral, which is required to specify the Green's functions uniquely, is lost. As a trivial example, consider the function f(x) = (1/4)x^3 + 2/x (1 ≤ x ≤ 2). We may differentiate f(x) once to obtain a differential equation satisfied by f(x): f'(x) + f(x)/x = x^2.
However, while this equation describes the local behavior of f(x) at each point x, we have lost the global boundary data needed to recover the original function f(x): The general solution to this differential equation, f(x) = (1/4)x^3 + C/x, contains an arbitrary constant. However, if we specify the boundary data f(2) = 3, this determines that C=2 and we have recovered f(x) in (<ref>) (a short worked sketch of this example is given below). As explained in Sec. <ref>, deriving the DS equations involves a somewhat more complicated differentiation process. However, the resulting coupled infinite system of DS equations is so complicated that it obscures the simple fact that in the differentiation process some information has been lost. For example, the functional-integral representation of the partition function exists because the path of functional integration terminates as |ϕ|→∞ inside a pair of Stokes sectors in complex-ϕ space (ϕ is the integration variable). Because there are many possible pairs of sectors that give a convergent functional integral, when we solve the DS equations we find all possible solutions to the DS equations corresponding to all possible pairs of Stokes sectors, some corresponding to Hermitian theories and others corresponding to non-Hermitian theories (both 𝒫𝒯-symmetric and non-𝒫𝒯-symmetric). For instance, in Fig. <ref> there are 10 concentrations of roots corresponding to the ten possible paths of integration for a quintic field theory. The DS equations weight each of these theories equally. There is even more loss of information than this. As we have shown, solving the DS equations is a two-step process. First, we truncate the infinite triangular system of DS equations, but when we do so, the resulting finite system always has more Green's functions than equations and is therefore indeterminate. Next, to obtain a closed system of coupled equations we perform a further truncation in which we set the highest Green's functions to 0. (In this paper we call this truncation procedure unbiased.) There are other truncation possibilities as well, but in all cases we find that as we include more and more DS equations, the solutions do not converge to the already known exact values of the Green's functions. Nevertheless, a remarkable feature of the unbiased approach is that for all five theories studied in this paper, as we include more DS equations, the approximate Green's functions actually converge to limiting values, and these limiting values are fairly accurate, lying several percent off from the exact values for all of the Green's functions for all of the theories corresponding to the possible pairs of Stokes sectors, as discussed above. Finally, we have found a successful way to insert the missing information back into the DS equations. Instead of using the unbiased ansatz of setting the higher unknown Green's functions to zero, we replace G_n by its asymptotic behavior for large n. This procedure gives new and extremely accurate numerical results for the Green's functions (many decimal places). However, it does not eliminate all of the spurious theories associated with different pairs of Stokes sectors; this can only be done by imposing additional external conditions on the DS equations, such as spectral positivity. The use of the asymptotic behavior of G_n for large n suggests a new and interesting general mathematical problem that has not been studied previously in this context, namely, finding the asymptotic behavior of many-legged Green's functions in higher-dimensional field theories.
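Returning to the toy example above: the loss and recovery of the boundary data can be reproduced in a few lines. This sketch is ours (sympy and the names are our choices), not part of the original argument:

# Sketch: differentiating f(x) = x^3/4 + 2/x loses the boundary data; dsolve
# returns a one-parameter family, and only f(2) = 3 restores the original function.
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.Function('f')

ode = sp.Eq(f(x).diff(x) + f(x)/x, x**2)
print(sp.dsolve(ode, f(x)))                    # f(x) = C1/x + x**3/4
print(sp.dsolve(ode, f(x), ics={f(2): 3}))     # f(x) = 2/x + x**3/4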
One last remark: A simple way to force the DS equations to give sequences of approximants that converge to the exact values of the Green's functions is to require that the Green's functions all have formal weak-coupling expansions in powers of a coupling constant. This approach has been known for a long time <cit.>. To illustrate this idea we return to the trivial differential-equation example above. We can demand that the solution to the differential equation (<ref>) be entire; that is, that the solution f(x) have a convergent Taylor-series representation. This condition uniquely determines the unknown constant C in (<ref>): C=0. Unfortunately, it does not recover the original function f(x) in (<ref>), which is singular at the origin. Similarly, if we require that all Green's functions have weak-coupling expansions, we immediately exclude the possibility of using the DS equations to calculate Green's functions having nonperturbative behavior. Indeed, if we ignore the possibility of nonperturbative behavior, there is no reason to consider the DS equations at all because Feynman diagrams give the perturbative representations of Green's functions. We thank D. Hook for assistance with some numerical calculations. CMB thanks the Alexander von Humboldt and Simons Foundations, and the UK Engineering and Physical Sciences Research Council for financial support. 99 R1 C. M. Bender, C. Karapoulitidis, and S. P. Klevansky, Underdetermined Dyson-Schwinger equations, Phys. Rev. Lett. 130, 101602 (2023). R2 Early low-dimensional studies of the DS equations may be found in C. M. Bender, G. S. Guralnik, R. W. Keener, and K. Olaussen, Numerical study of truncated Green's function equations, Phys. Rev. D 14, 2590 (1976); C. M. Bender, K. A. Milton, and V. M. Savage, Solution of Schwinger-Dyson Equations for PT-Symmetric Quantum Field Theory, Phys. Rev. D 62, 085001 (2000); C. M. Bender and S. P. Klevansky, Families of particles with different masses in -symmetric quantum field theory, Phys. Rev. Lett. 105, 031601 (2010). R3 We thank S. Coleman for a lengthy discussion of this history. R4 F. J. Dyson, The S matrix in quantum electrodynamics, Phys. Rev. 75, 1736 (1949). R5 J. Schwinger, On the Green's functions of quantized fields I, Proc. Nat. Acad. Sci. 37, 452 (1951). R6 J. Schwinger, On the Green's functions of quantized fields II, Proc. Nat. Acad. Sci. 37, 455 (1951). R7 See discussion of Richardson extrapolation in C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers (Springer-Verlag, New York, 1999), Chap. 8. R8 See discussion of Wilkinson polynomials in C. M. Bender and S. A. Orszag, Ibid., Chap. 7. R9 C. M. Bender, F. Cooper, and L. M. Simmons, Jr., Nonunique solution to the Schwinger-Dyson equations, Phys. Rev. D 39, 2343 (1989). This idea has been rediscovered very recently by W. Li, Taming Dyson-Schwinger equations with null states, arXiv: 2303.10978 and Solving anharmonic oscillator with null states, arXiv: 2305.15992.
http://arxiv.org/abs/2307.01131v1
20230703161042
Passive Query-Recovery Attack Against Secure Conjunctive Keyword Search Schemes
[ "Marco Dijkslag", "Marc Damie", "Florian Hahn", "Andreas Peter" ]
cs.CR
[ "cs.CR" ]
M. Dijkslag et al. Passive query-recovery attack against secure CKWS schemes University of Twente, Enschede, The Netherlands m.dijkslag@alumnus.utwente.nl, {f.w.hahn,a.peter}@utwente.nl University of Oldenburg, Oldenburg, Germany andreas.peter@uni-oldenburg.de Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, Lille, France marc.damie@inria.fr Passive query-recovery attack against secure conjunctive keyword search schemes Marco Dijkslag1 Marc Damie3 Florian Hahn1 Andreas Peter1,2 August 1, 2023 =============================================================================== While storing documents on the cloud can be attractive, the question remains whether cloud providers can be trusted with storing private documents. Even if trusted, data breaches are ubiquitous. To prevent information leakage one can store documents encrypted. If encrypted under traditional schemes, one loses the ability to perform simple operations over the documents, such as searching through them. Searchable encryption schemes were proposed allowing some search functionality while documents remain encrypted. Orthogonally, research is done to find attacks that exploit search and access pattern leakage that most efficient schemes have. One type of such an attack is the ability to recover plaintext queries. Passive query-recovery attacks on single-keyword search schemes have been proposed in literature, however, conjunctive keyword search has not been considered, although keyword searches with two or three keywords appear more frequently in online searches. We introduce a generic extension strategy for existing passive query-recovery attacks against single-keyword search schemes and explore its applicability for the attack presented by Damie et al. (USENIX Security '21). While the original attack achieves up to a recovery rate of 85% against single-keyword search schemes for an attacker without exact background knowledge, our experiments show that the generic extension to conjunctive queries comes with a significant performance decrease achieving recovery rates of at most 32%. Assuming a stronger attacker with partial knowledge of the indexed document set boosts the recovery rate to 85% for conjunctive keyword queries with two keywords and achieves similar recovery rates as previous attacks by Cash et al. (CCS '15) and Islam et al. (NDSS '12) in the same setting for single-keyword search schemes. § INTRODUCTION With increasing number of enterprises storing their documents in the cloud the question arises how to cope with storing sensitive documents on the cloud without the cloud provider learning information about the stored documents or information being leaked when a data breach occurs. One solution for this problem would be to encrypt the documents to hide its contents to the cloud provider. However, this prevents users from using the (often available) computational resources cloud providers offer, since searching through the documents is no longer possible without first downloading and decrypting it. Searchable symmetric encryption schemes can be a solution to this problem that offer constructions for search functionalities over encrypted documents. The first practical solution towards searchable encryption has been proposed by Song et al. <cit.>. Proposers of searchable encryption schemes need to find a trade-off in efficiency, security, and functionality. 
With this trade-off in terms of security comes information leakage such as possible search pattern leakage (revealing which queries concerned the same underlying, but unknown, keyword) and access pattern leakage (revealing the identifiers of all documents matching the search query). Most of the efficient searchable encryption schemes that allow for keyword search leak information in the access pattern for efficiency. Searchable encryption is an active line of research for finding efficient schemes that allow for search in encrypted documents with well-defined security in terms of a leakage function. Orthogonally, research is performed on finding attacks against proposed searchable encryption schemes. One such type of attack is a query-recovery attack, i.e. the ability for an adversary to recover the plaintexts from performed queries. In general two kinds of query-recovery attacks exist: (1) a passive attack where an adversary only has access to the information leaked by a scheme and (2) an active attack in which an adversary is able to inject tailored documents into the to-be-searched dataset. Active query-recovery attacks on conjunctive keyword search do exist <cit.> which are described as an extension on the proposed single-keyword search attack. Currently, all existing passive query-recovery attacks against searchable symmetric encryption that allow for keyword searches only focuses on single-keyword search schemes. However, these attacks do not reflect a realistic scenario, since single-keyword searches are limited and statistics show that the number of keywords used by people online in the US peaks at two keywords <cit.>. Also, three keyword searches are still more frequent than searches for a single keyword. The frequency of searches using seven or more keywords becomes negligible. Note that the recovery of conjunctive keyword queries is more difficult with respect to the recovery of single-keyword queries using similar vocabulary sizes. This difficulty stems from the fact that the space for keyword conjunctions is combinatorial in the number of conjunction terms compared to single-keywords, therefore an attacker needs to consider more possible candidates of keyword conjunctions for each observed query. In this work, we explore a passive query-recovery attack against secure conjunctive keyword search (CKWS) schemes. We propose a generic extension strategy for query-recovery attacks against single-keyword search to recover conjunctive queries using the same attack. Our extension strategy is based on the use of trapdoors created from a keyword-conjunction set as a generalization of trapdoors created from single-keywords. Replacing keywords with keyword conjunction sets. Our attack is static and does also work on forward and backward private schemes (<cit.>). We introduce an adaptation of the query-recovery attack proposed by Damie et al. <cit.> to achieve keyword conjunction recovery. We explore the applicability of the attack in two setups: (1) a similar-documents attack, where the attacker only has access to a set of documents that is similar, but otherwise different, from the indexed documents and (2) a known-documents attack, where the attacker has (partial) knowledge of the indexed documents. In both setups it is assumed the attacker knows the keyword conjunctions for a small set of queries a priori. We experimentally show that our attack can work for a relatively small vocabulary size (500) in an attack setup allowing only conjunctive keyword search using 2 keywords. 
However, we show that in an attack setup using similar-documents the attack performs poorly unless many known queries are assumed to be part of the attacker's knowledge. Furthermore, we demonstrate limitations of our generic extension posed by the combinatorial complexity increase for larger conjunctions. § RELATED WORK Most attacks against searchable symmetric encryption that have been described in the literature are query-recovery attacks. Islam et al. <cit.> were the first to propose a passive query-recovery attack in which they are exploiting the access pattern leakage, i.e. leaked document identifiers from observed queries. In their attack, the adversary needs to know all the documents indexed on the server to be successful. They introduced the idea of computing (word-word and trapdoor-trapdoor) co-occurrences to attack SSE. This idea being reused by other the passive attacks. The attack works by finding the closest mapping between the word-word co-occurrence matrix and trapdoor-trapdoor co-occurrence matrix in which they use meta heuristic simulated annealing. Also, the attack requires a number of known queries to work, i.e. trapdoors from which the attacker knows the underlying plaintext value. Cash et al. <cit.> proposed another passive query-recovery attack. Their attack first exploits that keywords with high frequency have unique keyword document counts to initialize their set of known queries. Then for keywords that do not have a unique keyword document occurrence count they construct a co-occurrence matrix of their known documents and observed queries, similar to Islam et al. They try to recover more queries by constructing for every unknown query their candidate set (i.e. keywords having the same document occurrence count) and remove candidates from the set that do not have the same co-occurrence with a known query in the known queries set. If after iterating over every known query only one candidate is left, the last candidate is appended to the known queries set. This process is repeated for all unknown queries until the set of known queries stops increasing. Both <cit.> rely on the attacker knowing a large part of the indexed documents, where the count attack performs better than the attack by Islam et al. However, their query recovery rate roughly only increases when the attacker knows at least 80% of the indexed documents. The query-recovery attack proposed by Pouliot et al. <cit.> uses weighted graph matching where the attacker needs to find mapping of keyword graph G and trapdoor graph H. The attack achieves recovery rates above 90% when the attacker knows the entire set of indexed documents, but fails as similar-documents attack unless having a smaller set of documents and vocabulary size. Also, the runtime of the attack increases rapidly, where for a vocabulary size of 500 the attack runs in less than one hour, whereas it takes more than 16 hours for a vocabulary size of 1000. The attack in <cit.> has a runtime of a maximum of 14 hours, whereas attacks from <cit.> run in seconds. Ning et al. <cit.> introduced a query-recovery attack that works when the attacker knows a percentage of the indexed documents. Keywords and trapdoors are represented as a binary string where the i-th bit is 1 if the keyword (resp. trapdoor) occurs in document i. Recovery is done by converting the bit strings to integers, where it is considered that a keyword corresponds to a trapdoor if they have the same integer value. The proposed attack outperforms the attack by Cash et al. 
<cit.>, where in their scenario <cit.> achieves a recovery rate of roughly 28% and their proposed attack around 56% when the attacker knows 80% of the indexed documents. However, they do not report a recovery rate for an attacker having knowledge of more than 80% of the indexed documents. Blackstone et al. <cit.> proposed a "sub-graph" attack requiring much less known documents to be successful and also works on co-occurrence hiding schemes. Their experiments show that an attacker only needs to know 20% of the indexed documents to succeed in her attack.. In <cit.>, Damie et al. proposed their refined score attack that works in a setting where the attacker only knows a similar, but otherwise different and non-indexed, set of documents for query-recovery. A mathematical formalization of the similarity is proposed in their paper. In <cit.> they showed that both the attack proposed by Islam et al. <cit.> and their proposed count attack do not work using similar documents. In <cit.>, the query-recovery attack uses similar techniques as used by <cit.>, i.e. constructing co-occurrence matrices from the document set known by the attacker and a trapdoor-trapdoor co-occurrence matrix from the assumed access pattern leakage. By starting with a few known (keyword, trapdoor)-pairs their attack iteratively recovers queries where previous recovered queries with high confidence scores are added to the set of known queries. Using this approach their attack reaches recovery rates around 85%. Other types of attacks. Zhang et al. <cit.> proposed an effective active document injection attack to recover keywords. Furthermore, they proposed an extension of their attack to a conjunctive keyword search setting which was experimentally verified for queries with 3 keywords. In <cit.>, Poddar et al. proposed several attacks that uses volume pattern as auxiliary information in combination with the attacker's ability to replay queries and inject documents. Moreover, they also gave an extension of their attack for queries with conjunctive keywords which is based on the extension from <cit.> using a document injection approach. Liu et al. <cit.> proposed a query-recovery attack which makes use of the search pattern leakage as auxiliary information. In particular, they exploit the query frequency. However, they simulated their queries by applying Gaussian noise to keyword search frequency from Google Trends[<https://trends.google.com/trends>] because of the lack of a query dataset. The attacker has access to the original frequencies. Another attack introduced by Oya and Kerschbaum <cit.> combines both volume information derived from the access pattern leakage and query frequency information derived from the search pattern leakage as auxiliary information. Conjunctive keyword search schemes. Passive query-recovery attacks against single-keyword search schemes already work for some conjunctive keyword search schemes where the server performs search for each individual keyword in a query independently and returns the intersection of document identifiers of each single-keyword search, i.e. leaking the full access pattern for each individual keyword in the conjunction. However, these attacks cannot be applied on conjunctive keyword search schemes with less or common access pattern leakage, where common refers to the scheme only leaking the document identifiers for the documents containing all keywords from a conjunctive keyword query. 
Hence, in this work we explore one extension strategy for conjunctive keywords that can be applied to most passive query-recovery attacks against single-keyword search using only common access pattern leakage. <cit.> both proposed such a conjunctive keyword search scheme that returns the intersection of document identifiers for each individual keyword in a conjunctive keyword query, thus leaking the full access pattern. However, we would like to emphasize that in this scenario only an honest-but-curious server that is able to observe the result set for each intermediate keyword can be considered an attacker, since an eavesdropper on the communication channel would not be able to observe the document identifiers for each intermediate single-keyword search. Furthermore, it should be noted that both schemes also offer more functionality than conjunctive keyword search alone. Where <cit.> allows for phrase searches and <cit.> offers result set verifiability and index updatability. Other proposed conjunctive keyword search schemes exist (<cit.>). However, all of them leak at least the common access pattern, where <cit.> have more than common access pattern leakage. To the best of our knowledge there do not exist efficient conjunctive keyword search schemes that have no access pattern leakage. § PRELIMINARIES We first introduce some notations that are used throughout this work. Let document set 𝒟 consist of documents {D_1, ..., D_n}. Let keyword set 𝒲 consist of keywords {w_1, ..., w_m}. Document D_i consists of keywords that form a subset of keyword set 𝒲. Let id(D_i) = i return the identifier for document D_i. We denote x ∈ D_i if keyword x (∈𝒲) occurs in document D_i. A summary of all notations and their meaning used throughout this work is given in Table <ref>. §.§ Searchable symmetric encryption A searchable encryption scheme allows a user to search in encrypted documents and is often described in a client-server setting. The client can search through encrypted documents stored on the server, without the server learning information about the plaintext documents. Often a searchable encryption scheme can be divided in four algorithms: * 𝖪𝖾𝗒𝖦𝖾𝗇(1^k): takes security parameter k and outputs a secret key K. * 𝖡𝗎𝗂𝗅𝖽𝖨𝗇𝖽𝖾𝗑(K, 𝒟): takes document set 𝒟 and secret key K and produces an (inverted) index I. * 𝖳𝗋𝖺𝗉𝖽𝗈𝗈𝗋(K, q): takes query q and secret key K and outputs a trapdoor td_q. * 𝖲𝖾𝖺𝗋𝖼𝗁(I, _q): takes trapdoor _q and index I and outputs the documents that match with query q. In single-keyword search schemes q corresponds to a keyword w, whereas in conjunctive keyword search schemes q would correspond to a query for documents containing d keywords, i.e., the conjunction of keywords w_1 ∧ ... ∧ w_d of keywords w_1, ..., w_d. Then, _q would correspond to the conjunction of d keywords. §.§ Considered conjunctive keyword search model We assume a fixed number of keywords (d) that are allowed to be searched for in a conjunctive keyword search. For instance if d = 2, only trapdoors with 2 distinct keywords are allowed. We denote such a fixed-d scheme as secure d-conjunctive keyword search scheme. For simplicity, we assume a fixed number of d distinct keywords, however one could consider d as a maximum number of keywords in the conjunctive search by reusing the same keyword for non-used keyword entries in the conjunction. For instance, when d = 2, kw ∧ kw for the same keyword kw would be equivalent to a single-keyword search for kw. 
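To make this interface concrete, the toy sketch below (ours; plaintext only, with no encryption, and all names are our own) mimics BuildIndex and Search for a d-conjunctive scheme and shows the common access pattern an observer sees, namely the identifiers of the documents containing all keywords of the conjunction:

# Toy sketch (no encryption): the search interface of a d-conjunctive keyword
# scheme, returning only the common access pattern.
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: set_of_keywords} -> inverted index {keyword: set_of_doc_ids}."""
    index = defaultdict(set)
    for doc_id, kws in docs.items():
        for kw in kws:
            index[kw].add(doc_id)
    return index

def search(index, ckw):
    """ckw: frozenset of d keywords; returns ids of documents matching the whole conjunction."""
    postings = [index.get(kw, set()) for kw in ckw]
    return set.intersection(*postings) if postings else set()

docs = {1: {"cloud", "storage", "cost"},
        2: {"cloud", "search", "index"},
        3: {"search", "index", "cost"}}
index = build_index(docs)
print(search(index, frozenset({"search", "index"})))   # {2, 3}: the leaked result set

In a real scheme the index and documents are encrypted and the keyword conjunction is replaced by a trapdoor; the sketch only illustrates which result set is revealed per query.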
We consider to be the set of d different keywords that are used to construct a trapdoor (_). For instance, if we consider a conjunctive keyword search scheme that allows search for d = 3 conjunctive keywords, we would create a keyword set for every possible combination of 3 keywords, where _1 = {kw_1, kw_2, kw_3}. [Note: d = 1 refers to a single-keyword search scheme.] First, in the 𝖡𝗎𝗂𝗅𝖽𝖨𝗇𝖽𝖾𝗑 algorithm, the client encrypts every document in the document set locally. Then creates an encrypted index of the document set (locally). Given a trapdoor td_ckw, the server can find the documents containing keywords in ckw using such a created index. The encrypted document set and index are then uploaded by the client to the server. Although in literature different methods for constructing such an index were proposed, here we do not fix which index is used. We only require the model to have at least common access pattern leakage, where common refers to the scheme only leaking the document identifiers for the documents containing all keywords in a conjunctive keyword query. All conjunctive search schemes described in Section <ref> leak at least the common access pattern. The client can search documents by constructing trapdoors. The client constructs a trapdoor by picking d keywords she wants to search for. In our model, she constructs a trapdoor using the function _q = 𝖳𝗋𝖺𝗉𝖽𝗈𝗈𝗋(K, _i = {kw_1, ..., kw_d}), for the keywords she wants to search for. By sending the trapdoor _q to the server, the server responds with a set of document identifiers R__q for documents that contain all keywords in _i. §.§ Attacker model Like in <cit.>, we consider two types of passive attackers which both can observe trapdoors sent by a user and its response including the document identifiers. The first type of attacker is an honest-but-curious server. The server is considered to be an honest entity meaning it follows the protocol. Hence, it always returns the correct result for each query. However, such curious server tries to learn as much information as possible using the scheme leakage. Secondly, we consider an eavesdropper that is able to observe pairs of trapdoor and document identifiers from the communication channel between client and server as an attacker. For both attackers an observation_i is a tuple (_q, R__q) considering conjunctive keyword queries where trapdoor corresponds to d conjunctive keywords. §.§ Attacker knowledge It is assumed the attacker knows the number of keywords d that are allowed to construct trapdoors. Moreover, it can be assumed that an honest-but-curious attacker knows the byte size of the stored documents and the number of documents stored (e.g. from the index). However, an eavesdropper does not. In that case we make use of the proposed formula by <cit.> that approximates the number of documents stored on the server (n_) derived from the attacker's knowledge. We consider two types of attack setups, i.e. a similar-documents attack setup where the attacker has access to a set of similar documents (as formalized in <cit.>) and a known-documents attack setup where the attacker has (partial) knowledge of the documents stored on the server. Similar-documents attack. In our similar-documents attack we assume the attacker has a document set that is ϵ-similar to the real indexed document set . However, we assume ϵ-similarity (as formalized in <cit.>) over the possible keyword conjunctions rather than keywords, where smaller ϵ means more similar. Also, ∩ = ∅, thus do not have overlapping documents. 
Known-documents attack. Like in <cit.>, for our known-documents attack setup we assume that the attacker has a p-known document set , where 0 < p ≤ 1 defines the known-documents rate. Meaning, the attacker knows a fraction p of the real indexed document set stored on the server. It should be noted that a similar-documents attack can be considered more realistic than a known-documents attack as discussed by Damie et al. <cit.>. Since a known-documents attack will most likely only be possible on a data breach, whereas documents that are only similar to the actual indexed documents maybe even publicly available. Moreover, the user could remove the leaked documents that are used in a known-documents attack from the index. The assumption that the attacker knows (a subset of) the documents stored on the server is rather strong, but is based on what is done in previous work <cit.>. § CKWS-ADAPTED REFINED SCORE ATTACK In this section we describe our conjunctive keyword search (CKWS) adaptation of the refined score attack. Our adaptation builds upon the score attacks that were introduced by Damie et al. <cit.>. We have chosen to use their query-recovery attack against single-keyword search schemes, since it is, to the best of our knowledge, the most accurate similar-documents attack that has been described yet. Furthermore, the matching algorithm used in their attack only has a runtime of 20 seconds while considering a vocabulary size of 4000 keywords. Since the space of possible queries increases combinatorial, we have to consider many possible keyword conjunctions and thus faster runtimes is desired. Moreover, their attack can use either known documents or similar documents as adversary's knowledge. We describe how one can transform their query-recovery attack to an attack on conjunctive keyword search schemes, i.e. considering the (abstract) secure d-conjunctive keyword search scheme described in Section <ref>, using similar terminology as in <cit.>. In addition, the code for the score attacks has been made publicly available online by Damie et al. This allowed us to verify their results first before adapting it to our conjunctive keyword setting. §.§ Score attacks Damie et al. <cit.> first propose the score attack based on the idea of ranking potential keyword-trapdoor mappings according to a score function. To run the score attack an attacker calculates the word-word co-occurrence matrix from its auxiliary document set and constructs a trapdoor-trapdoor co-occurrence matrix from observed queries and their result sets. Assuming some known queries, the attacker removes the columns from both matrices that do not occur in their set of known queries (i.e. word-trapdoor pairs) to obtain so-called sub-matrices. Then for every (observed) trapdoor, it goes through all possible keywords extracted from the auxiliary document set and returns the keyword for which their score function is maximized. Secondly, their proposed refined score attack builds upon previously described score attack. Instead of returning a prediction for all trapdoors, they define a certainty function for each prediction and only keep the RefSpeed best predictions according to this certainty function. These predictions are then added to the set of known queries and the attacker recomputes the co-occurrence sub-matrices. This procedure is repeated until there are no predictions left to make, i.e. no unknown queries left. 
§.§ Generic extension In short, our generic extension proposes to replace single keywords with keyword conjunction sets. The extension consists of five steps, highlighted by the next five subsections to adapt a passive query-recovery attack against single-keyword search to conjunctive keyword search, i.e. attacks that try to find a mapping between co-occurrences of keywords and trapdoors to recover queries. We describe our extension in a similar-documents attack setup using 𝒟_, but the same steps can be taken in a known-documents attack setup using 𝒟_p- as the attacker's auxiliary document set. Extract vocabulary. First, the attacker extracts keywords from the set of documents 𝒟_ to vocabulary 𝒲_. As in query-recovery attacks on single-keyword search <cit.>, we also assume that the keyword extraction method used by the attacker is the same as the one used by the user when she created the encrypted index. Construct set of possible keyword conjunctions. The attacker creates the set of all possible keyword conjunctions 𝒦_ = {_i ∈𝒫(𝒲_) | |_i| = d}, where m_ = |𝒦_| = v_ d and 𝒫(X) denotes the power set of set X. Compute co-occurrence matrix for keyword conjunctions. From 𝒟_ and derived keyword conjunctions set 𝒦_ the attacker creates the m_× m_ matrix ID_. Here ID_[i,j] = 1 if the i-th document in 𝒟_ contains the keywords that are in keyword conjunction _j and is otherwise 0. Then the attacker computes the - co-occurrence matrix C_ = ID^T_· ID_·1/n_. [A^T denotes the transpose of matrix A.] Compute the trapdoor-trapdoor co-occurrence matrix. We define 𝒬 = {_1, ..., _l} to be the set of observed queries by the attacker containing trapdoors that have been queried by the user. These trapdoors were created by the user from keyword conjunctions in 𝒦_ = {_i ∈𝒫(𝒲_) | |_i| = d}. Let R_ = {id(D) | (∈𝒦_) ∧ ( = 𝖳𝗋𝖺𝗉𝖽𝗈𝗈𝗋(K, )) ∧ (D ∈𝒟_) ∧∀_kw_t ∈ (kw_t ∈ D) } be the set of document identifiers that were observed by the attacker for trapdoor . Then we define the set of document identifiers DocumentIDs = ⋃_∈𝒬 R_ of size s, where s ≤ n_. Similar to the construction of the matrix ID_, we construct s × l trapdoor-document matrix ID_, where ID_[i,j] = 1 if i-th document identifier occurs in R__j (and td_j refers to j-th trapdoor from 𝒬). Otherwise, ID_[i,j] = 0. Then trapdoor-trapdoor co-occurrence matrix C_ = ID^T_· ID_·1/n_. Apply attack. The last step is to apply a passive query-recovery attack using the set of keyword conjunctions and the co-occurrence matrices. §.§ Transform key steps of refined score attack As in <cit.>, our attack also requires the attacker to have knowledge of a set of known queries. However, our set of known queries is slightly different because of the keyword conjunctions. In a similar-documents attack setup our set of known queries KnownQ = { (_i, _) | (_i ∈𝒦_∩𝒦_) (_∈𝒬) (_ = 𝖳𝗋𝖺𝗉𝖽𝗈𝗈𝗋(K, _i) }. For our known-documents attack setup, KnownQ is similarly defined by replacing 𝒦_ with 𝒦_. We recall key steps in the score attack w.r.t. the projection of the keyword-keyword co-occurrence and trapdoor-trapdoor co-occurrence matrix to sub-matrices using the set of known queries. These steps are important because they are different for our CKWS-adapted refined score attack. In short, the projection is done by only keeping the columns of known queries in C_ and C_. Our goal is to generate sub-matrices C^s_ and C^s_ from C_ and C_ respectively. We describe the projection step for C_ using 𝒦_, but the same holds for 𝒦_. Recall that 𝒦_ = {_1, ..., _m_}. 
We define pos(), which returns the position of ∈𝒦_similar. That is, pos(_i) = i. Similarly, pos(td) returns the position of in 𝒬 = {_1, ..., _l}. Let C_ = [ , c⃗_i, ]_i ∈ [m_] be the m_× m_ co-occurrence matrix, where the column vector c⃗_i denotes its i-th column. Then the m_× k sub-matrix C^s_ = [ , c⃗_pos(_j), ]_(_j, _j) ∈ KnownQ, where c⃗_pos(_j) is the pos(_j)-th column vector of C_. Let C_ = [ , u⃗_i , ]_i ∈ [l] be the l × l trapdoor-trapdoor co-occurrence matrix, where the column vector u⃗_i denotes its i-th column. Then l × k sub-matrix C^s_ can be constructed as follows: C^s_ = [ , u⃗_pos(_j), ]_(_j, _j) ∈ KnownQ, where u_pos(_j) is the pos(_j)-th column vector of C_. Superscript s emphasizes that C^s_ and C^s_ are sub-matrices of C_ and C_ respectively. Also, we denote C^s_[_i] to be the i-th row vector for keyword conjunction set _i and C^s_[_j] to be the j-th row vector for trapdoor _j, where |C^s_[_i]| = |C^s_[_j]| = k. Additionally, we revise the scoring algorithm for which the score is higher if a trapdoor corresponds to a certain keyword conjunction, i.e. the distance between two vectors C^s_[_j] and C^s_[_i] is small. Using keyword conjunctions the score function is defined as: Score(_j, _i) = -ln(||C^s_[_i] - C^s_[_j]||), for all _i ∈𝒦_ (or 𝒦_) and all _j ∈𝒬, where ln(·) is the natural log and ||·|| is a vector-norm (e.g. L2 norm). §.§ Revised algorithm We substitute C^s_kw for C^s_ in <cit.> to transform the refined score attack to the CKWS-adapted refined score attack. Algorithm <ref> contains its pseudocode, where a step is highlighted blue if it is different from the refined score attack proposed by Damie et al. <cit.>. Note that this algorithm is described using 𝒦_, but also works for 𝒦_ as input. One iteration of the algorithm can be defined by the three key phases. First remove known queries from the observed queries set 𝒬. Secondly, find the best scoring keyword conjunction candidate for each unknown query and compute the certainty of this candidate. Using keyword conjunctions the certainty of a keyword conjunction candidate _i for trapdoor is defined by: Certainty(, _i) = Score(, _i) - max_j ≠ i Score(, _j) Using this definition the certainty of a correct match of keyword conjunction with a trapdoor is higher when the score of the match is much higher than all other possible candidate scores. The algorithm defines a notion of refinement speed (RefSpeed) which defines the number of most certain predictions that will be added each iteration of the algorithm to the set of known queries. Which describes the third and last key step of an iteration, i.e. adding the most certain predictions to the known queries and recompute sub-matrices C^s_ and C^s_. Thereafter, either start a new iteration or stop the algorithm if the number of unknown queries is less than RefSpeed. §.§ Complexity As in <cit.>, a higher refinement speed will result in a faster runtime, but less accurate predictions. However, due to our use of keyword conjunctions the number of candidates for a trapdoor increases for larger d. Therefore, the runtime of the CKWS-adapted refined score attack grows combinatorial. The time complexity of the attack is given by 𝒪(f(v) + g(v)), where f(v) = v!/d!(v - d)!· (d - 1) corresponds to the time complexity of the generic extension, where we assume multiplying two vectors takes constant time. Further, g(v) = |𝒬|/RefSpeed· |𝒬| ·v!/d!(v - d)!· k is the time complexity of the attack. For both f and g, input v is either v_ or v_ depending on the attack setup. 
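A minimal sketch of these key steps is given below. It is our own illustration (numpy, and helper names such as build_cooccurrence and refine_once are ours); the implementation released by the authors batches these matrix operations on a GPU and re-runs the projection after every refinement round, which the sketch omits:

# Sketch: conjunction co-occurrence matrices, the score and certainty rules,
# and one refinement step keeping the RefSpeed most certain predictions.
from itertools import combinations
import numpy as np

def build_cooccurrence(doc_keyword_sets, conjunctions):
    """Rows of ID are documents, columns are keyword conjunctions; returns C = ID^T ID / n."""
    n = len(doc_keyword_sets)
    ID = np.array([[1.0 if ckw <= kws else 0.0 for ckw in conjunctions]
                   for kws in doc_keyword_sets])
    return ID.T @ ID / n

def scores(C_conj, C_td, known_pairs):
    """known_pairs: (conjunction_index, trapdoor_index) pairs; returns the score matrix."""
    conj_cols = [i for i, _ in known_pairs]
    td_cols = [j for _, j in known_pairs]
    Cs_conj, Cs_td = C_conj[:, conj_cols], C_td[:, td_cols]   # projection onto known queries
    dist = np.linalg.norm(Cs_td[:, None, :] - Cs_conj[None, :, :], axis=2)
    return -np.log(dist + 1e-12)                              # small offset avoids log(0)

def refine_once(C_conj, C_td, known_pairs, ref_speed=1):
    S = scores(C_conj, C_td, known_pairs)
    order = np.argsort(S, axis=1)
    best, runner_up = order[:, -1], order[:, -2]
    rows = np.arange(S.shape[0])
    certainty = S[rows, best] - S[rows, runner_up]            # best score minus runner-up
    keep = np.argsort(certainty)[-ref_speed:]                 # most certain predictions
    return [(int(best[td]), int(td)) for td in keep]

# Tiny example: vocabulary of 4 keywords, d = 2, trapdoors built from the same corpus.
docs = [{"a", "b", "c"}, {"a", "c", "d"}, {"b", "c", "d"}, {"a", "b", "d"}]
conjs = [frozenset(c) for c in combinations("abcd", 2)]
C_conj = build_cooccurrence(docs, conjs)
C_td = C_conj.copy()                                          # idealized: identical leakage
known = [(0, 0), (1, 1)]                                      # two known (conjunction, trapdoor) pairs
print(refine_once(C_conj, C_td, known, ref_speed=2))          # two correctly matched pairs

Unlike Algorithm <ref>, the sketch does not remove the already-known trapdoors from the candidate pool or add the kept predictions back into KnownQ before recomputing the sub-matrices; it only shows one scoring and selection pass.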
Besides the increase in runtime, having d > 1 also the space complexity of the algorithm increases faster relative to the vocabulary size. Since co-occurrence matrix C_ in the similar-documents attack setup is m_× m_, in terms of vocabulary size is v_!/d!(v_ - d)!×v_!/d!(v_ - d)! thus increasing faster with larger v_. This increase in time and space complexity led us to first further optimize the revised algorithm for our implementations. Moreover, we use a GPU to decrease runtimes through computing expensive matrix operations on it. § EXPERIMENTS §.§ Setup Documents. As described previously, in our experiments we simulate our attack using the publicly available Enron email document set introduced by Klimt & Yang <cit.>. We chose this document set since this one is also used in most attack papers requiring a set of documents. Similarly, we constructed the same corpus of emails from the folder _sent_mail which results in a set of 30109 documents. Keyword extraction. We extract keywords from solely the contents of the emails in the dataset, i.e. we do not consider email addresses or email subjects to be part of the document set. For keyword extraction we use the Porter Stemmer algorithm <cit.> to obtain stemmed words, moreover we remove stop words in the English language like 'the' or 'a'. Using this method results in a total of 62976 unique keywords in our entire considered document set. Number of keywords in conjunction. Throughout our experiments we fix d, i.e. the number of keywords allowed in one conjunction, to either 1, 2 or 3. This means that no mixture of number of keywords is allowed in search. For instance, when the d = 3 only queries with 3 distinct keywords are allowed, i.e. queries that contain either 1 or 2 keywords are not allowed. Testing environment. We implemented the attack on an Ubuntu 20.04 server with Intel Xeon 20-core processor (64 bits, 2.2 GHz), 512 GB of memory, and NVIDIA Tesla P100 GPU (16GB). We used Python 3.7 and the Tensorflow library <cit.> to accelerate matrix operations on a GPU.[Our code is available at <https://github.com/marcowindt/passive-ckws-attack>] Limitations. Running experiments with larger vocabulary sizes requires a lot of memory, since a vocabulary size of 150 and d = 2 means a document-keyword-conjunction matrix size of 18065 × 11175 (already 1.5 GiB) and a maximum co-occurrence matrix size of 11175 × 11175 (0.9 GiB) which both have to fit in the memory of the GPU for fast calculations. Therefore, having similar vocabulary sizes as used in the score attack is unrealistic in our generic extension strategy setting without having sufficient resources. However, we propose an extrapolation strategy to have approximate results for larger vocabularies. §.§ Results In our experiments where similar-documents are used as the attacker's knowledge, we use the same ratio in similar (40%) and real (60%) documents as in <cit.>. Similar to <cit.>, we define the accuracy to be the number of correct predictions divided by the number of unknown queries excluding the initial known queries, i.e. the accuracy = |CorrectPredictions(unknownQ)|/|𝒬| - |KnownQ|. If not specified otherwise, each accuracy result corresponds to the average accuracy over 50 experiments. Also, the vocabulary used in experiments is always created from the most frequently occurring keywords in the document set. From this vocabulary the keyword conjunctions set is generated. In each experiment it is assumed the attacker has observed 15% of queries that can be performed by the user, i.e. 
|𝒬| = 0.15 · m_, where queries are sampled u.a.r. from 𝒦_ to construct trapdoors. Result extrapolation. Figure <ref> shows the accuracy of the score attack from <cit.> where the attacker has access to similar-documents for varying vocabulary size and d = 2. We show these results to highlight that we can extrapolate the accuracy of the attack in a similar-documents setting closely, where the extrapolation is depicted by the dashed line and measured results are the solid line. We obtain this extrapolation by first transforming the accuracies using the 𝗅𝗈𝗀𝗂𝗍[𝗅𝗈𝗀𝗂𝗍(x) = 𝗅𝗈𝗀(x/1 - x)] function. Using this transformation, we obtain a space in which we seem to have a linear relationship such that 𝗅𝗈𝗀𝗂𝗍(acc) = b · v_ + a. We then perform a linear regression to obtain these coefficients using our experimental results. Lastly, we use the inverse 𝗅𝗈𝗀𝗂𝗍 function to transform it back to the original scale. We make use of this extrapolation where running experiments becomes infeasible (i.e. experiments with d = 2 and v_ > 500) to extrapolate the accuracy for larger vocabulary sizes. In our linear regressions, we do not provide the coefficient of determination R^2 and the p-value since they are based on the assumption that results are independent which is not true in our experiments as they all use the same document set. Hence, these values should not be used to evaluate the quality of the model even if they are high (e.g. R^2 ≈ 0.95 in Figure 1) but the linear regression is still valid. Although there may exist more precise extrapolation techniques, our intention is to have a simple yet realistic approximation of the accuracy for larger vocabularies for the sake of our discussion. Frequency of keyword conjunctions. Figure <ref> shows the frequency of a keyword conjunction occurring in 𝒟_ for d ∈{1, 2, 3}, where keyword conjunction rank is lowest for the most frequent keyword conjunction. We observe the behavior of using keyword conjunctions instead of a single-keyword, i.e. the frequency of the most frequent keyword conjunction becomes smaller with higher d and the frequency of the least frequent keyword conjunction reaches almost zero. This is to be expected, since the larger vocabulary size the higher the probability that certain keywords from a keyword conjunction do not appear in any document together, i.e. considering the vocabulary is generated with the most frequent keywords first. Note however, that the frequency for rank between 200 and 3600 part is higher for d = 2 relative to d = 1, which is due to the fact that obtaining 4000 keyword conjunctions requires a smaller vocabulary size of 90 for d = 2, and it is still the case that the most frequent keywords occur together. Nevertheless, the same does not hold for d = 3 relative to d = 2, where we actually observe a decrease in keyword conjunction frequency. Here it already is the case that the most frequent keywords used to create a keyword conjunction of 3 keywords do not have to necessarily occur together in a document. CKWS-adapted refined score attack using similar-documents. Figure <ref> shows the accuracy of the CKWS-adapted refined score attack using similar-documents with d = 2 and varying vocabulary size. Also, the plot shows an extrapolation of the accuracies for vocabulary sizes larger than 130 (and smaller than 50). From the extrapolation of the accuracies for varying vocabulary sizes we clearly see a rapid decrease in accuracy with larger vocabulary sizes. 
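The extrapolation described under "Result extrapolation" above amounts to a linear fit in logit space. The sketch below is ours and uses placeholder accuracy values rather than the measured ones:

# Sketch of the logit-space extrapolation; the accuracies below are
# illustrative placeholders, not measured results.
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

v = np.array([50, 70, 90, 110, 130])             # vocabulary sizes (illustrative)
acc = np.array([0.80, 0.72, 0.65, 0.58, 0.50])   # measured accuracies (illustrative)

b, a = np.polyfit(v, logit(acc), 1)              # logit(acc) ≈ b*v + a
for v_new in (300, 500):
    print(v_new, inv_logit(b * v_new + a))       # extrapolated accuracy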
We conclude that with 30 known queries we can still reach a reasonable recovery rate above 50% for vocabulary sizes of 300 to 400 keywords. However, these results are far from the single-keyword search setup presented in <cit.>, which achieves up to 85% recovery rate for a vocabulary size of 1000. In <cit.>, the authors discussed how the 'quality' of a known query influences the accuracy: a known query is more qualitative if the underlying keyword occurs more frequently. We recall that in the CKWS-adapted setting this is a way to reduce the number of known queries needed. The lower the rank of a keyword conjunction in Figure <ref>, the more qualitative the query for that keyword conjunction is considered.

Figure <ref> shows the accuracy of the CKWS-adapted refined score attack using similar-documents with d = 2 and a varying number of known queries. The plot shows that the standard deviation of the accuracy, assuming 5 or 10 known queries, is relatively high compared to the standard deviation for 15, 30, or 60 known queries. For 5 known queries the standard deviation is 0.15, which is at least 3 times higher than the standard deviation for 15 known queries (≈ 0.05). The accuracy increases and the standard deviation decreases with a higher number of known queries, since it becomes more likely to pick more qualitative queries (u.a.r.). This also explains the noisy behavior of the accuracy in the plot.

CKWS-adapted refined score attack using p-known-documents. Since we have shown in Section <ref> that the CKWS-adapted refined score attack provides only limited scaling for d > 1, we explore how well the attack performs when known-documents are assumed as the attacker's knowledge. Figure <ref> shows the accuracy of the attack using known-documents with known-documents rates varying over 0.05 ≤ p ≤ 0.8 in steps of 0.05. We observe that with 10 initial known queries (|KnownQ| = 10) the attack reaches higher accuracies at lower known-documents rates than with |KnownQ| = 5. Also, for known-documents rates p ≥ 0.7 the accuracy of the attack becomes constant and reaches nearly 100% for both 5 and 10 known queries. We do note that a vocabulary size of v_ = 130 is a rather limited setting; in the next section we explore the attack using known-documents with larger vocabularies.

CKWS-adapted refined score attack using 0.7-known-documents. In the previous experiment with varying known-documents rates we observed that the accuracy of the attack using known-documents reaches nearly 100% at a known-documents rate of p = 0.7 for both 5 and 10 known queries. Here we explore the accuracy of the attack with the known-documents rate fixed to p = 0.7 and vocabulary sizes of 250 and 500. Figure <ref> shows a bar plot of both results, with error bars describing the standard deviation of the accuracy over 50 experiments. We observe that for a vocabulary size of 250 the difference between an attack using 5 known queries and one using 10 known queries is small, and the standard deviation in both settings is small. However, for the 500-keyword setting we clearly see a decrease in accuracy when using 5 known queries, together with a large standard deviation, whereas with 10 known queries the attack still reaches above 93% accuracy with a small standard deviation. We do note that in this case the attacker has a great advantage, since it knows at least 70% of the whole indexed dataset and 10 known queries.
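For completeness, here is a small sketch of the accuracy metric defined earlier and of how it is aggregated over repeated runs; the run_attack() driver is a hypothetical stand-in for the actual experiment code, not part of our released implementation.

```python
# Minimal sketch of accuracy = |correct predictions on unknown queries| / (|Q| - |KnownQ|),
# averaged over repeated experiments (50 in our setup).
import numpy as np

def attack_accuracy(predictions, ground_truth, known_queries):
    """predictions/ground_truth map trapdoors to keyword conjunctions."""
    unknown = [td for td in ground_truth if td not in known_queries]
    correct = sum(1 for td in unknown if predictions.get(td) == ground_truth[td])
    return correct / len(unknown)

accs = []
for _ in range(50):
    preds, truth, known = run_attack()  # hypothetical experiment driver
    accs.append(attack_accuracy(preds, truth, known))
print(np.mean(accs), np.std(accs))
```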
In comparison, previous passive query-recovery attacks <cit.> on single-keyword search did not exceed 40% accuracy assuming a known-documents rate of 0.8.

Runtime and memory usage. Figure <ref> shows the average runtime of the attack using known-documents over 50 repetitions as a function of v_ for d = 2. We observe that the runtime is high even for fairly small vocabulary sizes, which is to be expected given the time complexity described in Section <ref>. We only show the runtime of the attack using known-documents; the runtime of the attack using similar-documents is comparable. Although our runtime could further benefit from using multiple GPUs, and our code is written to allow this, we found that using two GPUs does not necessarily speed up the attack due to the large overhead. The overall memory usage is dominated by the size of the co-occurrence matrices C_ and C_. We can therefore express the main memory usage of the attack through the size of these two matrices as a function of the vocabulary size and the number of observed queries. In our experiments we always assume the attacker observes |𝒬| = 0.15 · m_ queries. As a result, an accurate estimate of the bytes used by one experiment is given by numberOfBytes(v_, d) = 2 · (0.15 + 0.15^2) · (v_!/(d!(v_ - d)!))^2 · sizeof(float), where sizeof(·) returns the number of bytes used by the system to store a certain data type. Filling in v_ = 500 and d = 2 and using a 64-bit float gives numberOfBytes(500, 2) ≈ 40 GiB, whereas the GPU used in our experiments fits at most 16 GB, meaning that batching intermediate results is already required.

§ DISCUSSION

Runtime. Although requiring large co-occurrence matrices for the extended refined score attack is cumbersome, if the adversary has sufficient memory resources these large matrices will not be her only concern. Her main concern will be the runtime of the attack: without parallelizing our attack over multiple GPUs it is difficult to run for vocabulary sizes > 500 and becomes infeasible for vocabulary sizes > 1000, even though the added time complexity of our extension strategy is relatively small.

Observed queries. Furthermore, the question arises whether it is realistic for an attacker to observe 15% of all possible queries. With only single-keyword search we believe this can be achieved. However, with d = 2 the number of keyword conjunctions to be observed is large, i.e. 0.15 · v_!/(d!(v_ - d)!). Although a smaller percentage could be considered more realistic and would even decrease the runtime of the attack, a larger |𝒬| is still desirable, since it results in better estimators for prediction and thus higher accuracies.

Query distribution. In our experiments we only sampled queries using a uniform distribution. However, this is likely unrealistic for keyword conjunctions, since certain keywords might be more likely to be used together in a query whereas other possible conjunctions might not be queried at all. Knowing whether certain keywords are more likely to be searched for in conjunction would decrease the complexity of the attack, since one could then consider only the most likely keyword conjunctions.

Countermeasure. Previous query-recovery attacks on single-keyword search also describe countermeasures against their attacks. In our work we focus on the question of whether a generic extension is possible. Because of our generic extension strategy, the countermeasures tested in <cit.> remain applicable, but we did not explore them.
Also, most of the introduced countermeasures do not actually leak less information; rather, they make the leakage unusable by the attack proposed in the corresponding work (e.g. by adding false positives to the result set).

Generic extension. Although we adapted the refined score attack of <cit.> to the conjunctive keyword setting because it performs well with low runtimes for single keywords, our generic extension strategy using keyword conjunction sets is also valid for other attacks (<cit.>) and even other types of attacks (e.g. attacks using query frequency <cit.>). However, we expect similar runtime issues due to the large query space. Blackstone et al. <cit.> describe a particular algorithm using cross-filtering that could be helpful for an attack specifically targeting conjunctive keyword search.

§ CONCLUSION

In this work we presented a generic extension strategy to adapt any passive query-recovery attack to a conjunctive keyword search setting. We specifically explored its applicability by adapting the refined score attack proposed by Damie et al. <cit.> to conjunctive keyword search. It is the first study of passive query-recovery attacks in the conjunctive keyword search setting. We showed that our attack using documents that are similar to, but otherwise different from, the indexed documents on the server only achieves an accuracy of 32% as an attack on conjunctive keyword search. However, the adapted attack using known-documents still performs well with a low number of known queries and a vocabulary size of 500, and achieves a recovery rate similar to previous passive query-recovery attacks <cit.> against single-keyword search. Further, we discussed that the time complexity of the adapted attack grows combinatorially with the number of keywords in the conjunctive search query. Also, the storage required to perform the attack is dominated by the size of the co-occurrence matrices computed from the attacker's knowledge, which also grows combinatorially.
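As a rough illustration of this combinatorial growth in storage, the following sketch evaluates the byte estimate derived in the Discussion (with the same 15% observed-query fraction); for v_ = 500 and d = 2 with 64-bit floats it reproduces the ≈ 40 GiB figure quoted there. The function name mirrors the formula in the text but is otherwise our own illustrative code.

```python
# Minimal sketch of numberOfBytes(v, d) = 2 * (0.15 + 0.15^2) * C(v, d)^2 * sizeof(float),
# the memory dominated by the two co-occurrence matrices.
from math import comb

def number_of_bytes(v, d, observed_fraction=0.15, float_bytes=8):
    m = comb(v, d)  # number of possible keyword conjunctions for vocabulary size v
    return 2 * (observed_fraction + observed_fraction**2) * m * m * float_bytes

print(number_of_bytes(500, 2) / 2**30)  # ~40 GiB for v = 500, d = 2, 64-bit floats
```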
http://arxiv.org/abs/2307.01273v1
20230703180337
EIGER IV: The cool 10$^4$K circumgalactic environment of high-$z$ galaxies reveals remarkably efficient IGM enrichment
[ "Rongmon Bordoloi", "Robert A. Simcoe", "Jorryt Matthee", "Daichi Kashino", "Ruari Mackenzie", "Simon J. Lilly", "Anna-Christina Eilers", "Bin Liu", "David DePalma", "Minghao Yue", "Rohan P. Naidu" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
Corresponding author: Rongmon Bordoloi (rbordol@ncsu.edu)

Authors: Rongmon Bordoloi (Department of Physics, North Carolina State University, Raleigh, NC 27695, USA); Robert A. Simcoe (MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Ave., Cambridge, MA 02139, USA); Jorryt Matthee (Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich, Switzerland); Daichi Kashino (Institute for Advanced Research, Nagoya University, Nagoya 464-8601, Japan; Department of Physics, Graduate School of Science, Nagoya University, Nagoya 464-8602, Japan); Ruari Mackenzie (Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich, Switzerland); Simon J. Lilly (Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich, Switzerland); Anna-Christina Eilers (MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Ave., Cambridge, MA 02139, USA); Bin Liu (Department of Physics, North Carolina State University, Raleigh, NC 27695, USA); David DePalma (MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Ave., Cambridge, MA 02139, USA); Minghao Yue (MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Ave., Cambridge, MA 02139, USA); Rohan P. Naidu (NASA Hubble Fellow; MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Ave., Cambridge, MA 02139, USA)

We report new observations of the cool diffuse gas around 29 galaxies at 2.3<z<6.3, using deep JWST/NIRCam slitless grism spectroscopy around the sightline to the quasar J0100+2802. The galaxies span a stellar mass range of 7.1 ≤ log M_*/M_sun ≤ 10.7 and star-formation rates of -0.1 < log SFR/ < 2.3. We find host galaxies for seven absorption systems within 300 kpc of the quasar sightline. The radial absorption profile falls off sharply with radius, with most of the absorption extending out to 2-3 R_200 of the host galaxies. Six out of seven absorption systems are detected around galaxies with log M_*/M_sun > 9. The absorption kinematics are shifted from the systemic redshifts of the host galaxies, with a median absolute velocity of ≈135 and a standard deviation of ≈85 . The high kinematic offsets and large radial separations (R > 1.3 R_200) suggest that five out of the seven absorption systems are not gravitationally bound to the galaxies. In contrast, most cool circumgalactic media at z<1 are gravitationally bound. The high incidence of unbound gas in this work suggests that towards the end of reionization, galaxy halos are in a state of remarkable disequilibrium and are highly efficient in enriching the intergalactic medium. The two strongest absorption systems are detected at z∼ 4.22 and 4.5, the former associated with a merging galaxy system and the latter associated with three kinematically close galaxies. Both these galaxies reside in local galaxy over-densities, indicating the presence of cool absorption in two “proto-groups" at z>4.

§ INTRODUCTION

The commissioning of the JWST has ushered in a new era for spectroscopy of galaxies and intergalactic matter near the Epoch of Reionization (EoR) <cit.>. Strong rest frame optical lines (e.g., H-α, [OIII]) are finally observable at z ≳ 3.5, which, combined with JWST's efficient spectroscopic modes, has enabled large scale spectroscopic surveys of galaxies at the EoR <cit.>.
By performing carefully constructed spectroscopic experiments, in which the galaxy fields also contain bright high-z quasars, one can extend the study of galaxies to characterize their gaseous halos <cit.>. Deep ground based near infra-red (NIR) spectroscopy of these quasars often reveals intervening metal absorption line systems associated with galaxies along the line of sight: a signature of the diffuse baryonic reservoir of gas around galaxies <cit.>. These cosmic ecosystems fuel the growth of stellar mass in galaxies and serve as reservoirs of gas recycling. At z<2, such cosmic ecosystems have been successfully characterized as the circumgalactic medium (CGM) around galaxies <cit.>. Over the last two decades, large galaxy and quasar surveys have enabled detailed characterization of the CGM, establishing it as a ubiquitous reservoir of diffuse gas around galaxies <cit.>. In the last 7 Gyr of the history of the Universe (z<1), comparison of absorption line systems observed in the spectra of bright background quasars or galaxies with their host galaxy populations has revolutionized our understanding of the late-time CGM and its role in galaxy formation (see <cit.> for a detailed review). These studies have revealed that both highly ionized metals (traced by , ) and low ionized metals (traced by ) show strong trends with increasing galaxy star-formation rates and stellar masses <cit.>. Cool circumgalactic gas traced by shows a strong dependence on morphology and orientation, tracing outflows vs. inflows <cit.>. A diffuse warm-hot intra-group medium is also detected around groups of galaxies <cit.>. CGM gas appears bimodal in metallicity, with a portion tracing enriched outflows (∼20-100% solar) and a distinct metal-poor component (∼5% solar) that may resemble the long-sought “cold accretion” entering galaxies from the IGM <cit.>. Most crucially, the mass and metal census of the CGM gas at z ∼ 0.2 suggests that the CGM may host a large share of galactic baryons, with the CGM mass content outweighing the total stellar mass content of galaxies <cit.>, and that it hosts a massive reservoir of galactic metals, with galaxies having ejected at least as much metal mass as they have retained <cit.>. At these redshifts, most of the CGM gas is bound to the dark matter halo of its host galaxy, suggesting that most of the gas will be recycled back to the ISM <cit.>. At z∼ 2, the CGM contains a large metal reservoir, but more of this gas is kinematically consistent with not being bound to the host galaxy's dark matter halo than in its z<1 counterparts <cit.>. Observations of lensed QSOs and spatially extended lensed arcs reveal that individual CGM gas clouds are small and show large variations even within a single halo <cit.>. A consensus has emerged that the CGM is a ubiquitous feature of galaxies from z ∼ 3 to z ∼ 0 <cit.>. Within the first Gyr of cosmic history, the rest-frame UV transitions that are instrumental in characterizing the low-z CGM are redshifted into the near infrared (NIR). Moreover, any transition blueward of Lyα will not be observable owing to the intervening neutral IGM. Therefore, heavy element absorption systems in high-z quasar spectra provide our only access to the chemical enrichment and ionization taking place in this environment. NIR spectroscopy of QSOs at z > 6 (e.g., <cit.>) has pushed these investigations to within the first Gyr after the Big Bang.
These studies find that the number density evolution of strong absorption systems traces the cosmic star-formation history of the Universe, whereas the weaker systems show no evolution out to z ∼ 6 <cit.>. The incidence of detection of low ionization species (e.g., , ) remains high even at high-z, whereas the incidence of detection of highly ionized species (e.g., , ) drops off sharply beyond z ∼ 5.7 <cit.>. This might be owing to a change in the UV background in the early Universe, or it may reflect some fundamental change in galaxy properties. Therefore, identifying the galaxies associated with these absorbers at high-z may give crucial insight into the latter stages of the EoR. Prior to JWST, it was prohibitive to conduct detailed galaxy surveys to identify the host galaxies of these absorbers. However, initial detections of host galaxies suggest a strong association between the absorption lines detected at high-z and their host galaxies. Recently, two works reported the presence of galaxy overdensities near a strong absorption system at z ∼ 5.72: four Lyα emitting galaxies (LAEs) <cit.> and two [CII] 158 μm emitting galaxies <cit.>. Additionally, another host galaxy, associated with a z=5.9 OI absorption system and identified using the [CII] 158 μm line, was reported <cit.>. Cross correlation of LAE galaxies from deep MUSE observations also suggests a link between strong absorbers and bright LAE galaxies out to z∼ 4 <cit.>. These promising early results suggest that a systematic multi-wavelength galaxy survey is warranted to study the CGM host galaxies at z > 4, where detailed galaxy properties can be studied. In this work we present the first CGM measurements traced by absorption around 29 (2<z<6) galaxies in the EIGER survey (Emission-line Galaxies and Intergalactic Gas in the Epoch of Reionization; <cit.>). We focus on the first observations around the hyper-luminous z = 6.33 quasar J0100+2802. This paper is organized as follows. In Section 2 we describe the observations and summarize the survey strategy. In Section 3 we describe the measurements of the galaxy properties and the CGM absorber properties. In Section 4 we describe the results. In Section 5 we present the summary and discussion of the results. Throughout this paper we follow a flat ΛCDM cosmology with H_0 = 67.7 km s^-1 Mpc^-1, Ω_M = 0.31, and Ω_Λ = 0.69 <cit.>. All magnitudes are listed in the AB system. Unless stated otherwise, all distances are quoted in units of physical kpc.

§ OBSERVATIONS

§.§ The EIGER Survey

The EIGER survey is a 126.5 hour JWST GTO (PID:1243, PI: S. J. Lilly) program that performs NIRCam wide field slitless spectroscopy (WFSS) around six extra-galactic fields, each centered on a hyper-luminous (6 ≲ z ≲ 7) quasar. We refer the reader to <cit.> for a detailed description of the survey design rationale and data reduction methods. Below we briefly summarize different aspects of the observations.

§.§ NIRCam observations of J0100+2802

In this work, we focus on metal absorption in the vicinity of galaxies detected in the z=6.33 quasar field of J0100+2802. JWST/NIRCam WFSS spectroscopy is performed with the F356W filter using the reverse grisms (GRISMR), which both disperse the spectra horizontally on the NIRCam sensors but with opposite parity. The spectral resolution of the observations is R ∼ 1500. Simultaneously with the spectroscopy, F115W and F200W imaging of the field is obtained. Direct and out-of-field imaging in the F356W filter is performed to cover the full spectroscopic field of view <cit.>.
A four pointing mosaic strategy is employed, that ensures that the central 40× 40 has the maximum depth of ∼ 35 kilo-seconds. The total spectroscopic field of view around J0100+2802 is ∼ 25.9 arcmin^2, and the total exposure time ranges from 8-35 kilo-seconds. In this work we focus only on the central ≈ 4.6 arcmin^2 of the field, which is covered by both the NIRCam modules A and B (with reversed dispersion directions). This enables us to accurately identify single emission line objects and measure their redshifts <cit.>. NIRCam imaging data is reduced as described in <cit.> using pipeline (v1.8.2). Additional post-processing steps are performed to mask strong cosmic ray hits following <cit.>. Astrometry is calibrated by aligning known stars from the Gaia Data Release 2 catalog <cit.>. Several known artefacts (e.g., stray-light features, 1/f noise, residual sky background) are subtracted <cit.> before obtaining a final co-added image with pixel sizes of 0.03 /pixel. These final co-added images are used to perform aperture-matched photometry with in dual mode. The F356W image is used as the detection image. All images are convolved to match the point spread function of the F356W image. Kron aperture magnitudes are measured and photometric uncertainties are estimated by measuring random blank sky variations for apertures of different sizes, scaled to the local variance propagated by the pipeline (see, ). NIRCam WFSS data reduction is performed using a combination of pipeline (v1.7.0) and custom in-house tools as described in detail in <cit.> and <cit.>. To summarize, each individual exposure is processed with the step in the pipeline and assigned a WCS with the step. The frames are flat fielded and additional 1/f noise and sky background variation are removed by subtracting the median flux of each column to create the science frames. A continuum map is created by using a running median filter along the dispersion direction. The median filter kernel size is adaptive and has a hole in the center to ensure that it does not over subtract the emission lines. This continuum map is subtracted from each science frame to create an emission line map for each exposure. We stress that the continuum subtraction process does not rely on the source position or any trace model. We refer the reader to <cit.> for detailed description of this process. For each object detected in F356W imaging, a 2D spectrum is extracted based on [<https://github.com/npirzkal/GRISMCONF>] with the V4 trace models[<https://github.com/npirzkal/GRISM_NIRCAM>]. We perform additional pixel-level correction to the trace models to optimize the extraction based on our own empirical calibration using spectra of bright stars. Individual exposures are divided by the relevant sensitivity curve, rectified for small curvature, and re-sampled onto a common observed wavelength grid (3 μ m ≤λ≤ 4 μ m, with pixel size of 9.75 Å). For each individual module, these exposures are co-added with sigma clipping to produce the final 2D spectrum of a galaxy. For a given object position, we extract one independent spectra from each module. This results in two independent spectra obtained from the two NIRCam modules for each source. Since these two spectra have reverse dispersion directions, only emission lines that are truly coming from the object of interest will appear at the same observed wavelength in both the spectra. All other lines will shift in wavelength and/or disappear completely. 
This is the primary diagnostic to remove unrelated emission lines that are coming from other objects. Figure <ref>, top panels show the continuum subtracted 2D emission line spectra of three galaxies at z∼2.3, 4.2 and 6.3, respectively. Common emission lines are independently detected on both modules, further verifying their robustness. §.§ HST observations of J0100+2802 We obtain HST/ACS imaging of the J0100+2802 field in F850LP, F775W, and F606W filters respectively (HST PID: 15085, 13605). The total exposure time for these observations is 24,450 seconds. We query MAST for the individual flc.fits exposures, which are corrected for charge transfer inefficiency but have not been re-sampled. We align the individual exposures to the NIRCam F356W mosaic, and drizzle the images to the common pixel grid of the NIRCam mosaics using <cit.>. The routine masks cosmic rays, performs median blotting and matches the sky of each exposure before calculating the median co-added image. To create PSF-matched images for precise multi-band photometry we calculate matching kernels, using PSFs derived from <cit.> and re-sample to the 0.03 pixel scale. §.§ Ground Based spectroscopy of Quasar J0100+2802 Deep ground based optical and NIR spectroscopic observations are performed on the z= 6.33 quasar J0100+2802 using both the Magellan/FIRE and the VLT/X-shooter instruments. The target has been observed for a total of 16.8 hours with 5.8 hours of Magellan/FIRE (PI: Simcoe) and 11 hours of VLT/X-shooter (program ID: 096.A-0095; PI: Pettini) observations, respectively. Additionally, high-resolution (R≈ 50,000), Keck/HIRES observations are performed on J0100+2802, to cover the optical (0.86 μ m ≤λ≤ 9.9μ m) part of the spectrum <cit.>. The total integration time for HIRES observations are 3.8 and 3 hours, respectively in two different grating setups. These observations are self consistently reduced with the data reduction pipeline <cit.>, and a final co-added, flux calibrated spectra is produced for both the instruments. We refer the reader to <cit.> for a detailed description of the data reduction method. The final spectra have signal to noise ratios (SNR) of ∼ 86/112 per resolution element at rest frame wavelength of 1300 Å, of the FIRE and X-shooter spectrum, respectively. § METHODS Here we describe how emission line galaxy properties are measured and how absorption lines are identified and analyzed. §.§ Galaxy Redshifts We create a candidate emission line galaxy list using the following two criteria. We search for emission line objects (with SNR per module > 7) in the central ≈ 4.6 arcmin^2 of the EIGER footprint, which is covered by both the NIRCam grism modules (A and B). We further restrict our search to objects whose spectrum contains a verified emission line that would plausibly be at a redshift within Δ z ≈ 0.02 of the identified intervening metal absorption lines. For each identified object, we extract 2D galaxy spectra as described in Section 2.2. Additionally, we look for higher redshift galaxies near the quasar identified with [OIII]/H-β emission lines <cit.>. This yields a sample of 127 objects within ≈ 108 arcsec of the J0100+2802 quasar. Monte Carlo simulations of injecting and recovering synthetic emission lines in the central ≈ 4.6 arcmin^2 of the J0100+28 field, yield a spectroscopic completeness of 50% for emission line flux of 1.6× 10^-18 erg s^-1 cm^-2, and 90% for emission line flux of 2.8× 10^-18 erg s^-1 cm^-2, respectively (Mackenzie et al. in prep). 
Each spectrum is individually inspected using a custom python API <cit.>, which is used to independently extract an 1D spectrum for each module. We identify individual emission lines and fit a Gaussian profile to measure the redshift of the galaxy. Photo-z posterior distribution function for each object is also inspected to identify any foreground contaminating object. For the same object, it is crucial to inspect the spectra from both modules to identify and mask out any contaminating feature from other galaxies. Since the two spectra are extracted from the two NIRCam modules with reversed dispersion directions, only “real" lines associated with the extracted object will appear consistently at the same wavelength in the two modules. After visually inspecting each object, a confidence class is assigned to each object with 0 being lowest confidence and 3 being the highest confidence. In this work, we only consider objects with a confidence class of 1.5 or higher. This creates a total sample of 87 galaxies with 0.4 < z < 6.8. Figure <ref>, left panel shows the spatial distribution of these galaxies within 300 physical kpc of the J0100+2802 quasar sightline. Each individual galaxy is color coded to reflect its spectroscopic redshift. Independent redshift measurements from both modules yield a typical redshift accuracy of ≈ 140 for each galaxy. Since we focus only on the CGM host of the absorbers along the J0100+2802 sightline, we further select only galaxies within 300 kpc from the J0100+2802 quasar sightline and at a redshift lower than the quasar (2.3 < z < 6.33). The lower redshift limit is chosen to match the lowest redshift absorber detected along this line of sight. This yields a final sample of 29 galaxies. Figure <ref>, right panel shows the redshift and impact parameter distribution of this final sample of galaxies with a mean redshift of ⟨ z ⟩ = 4.5478 ± 0.201. Throughout the rest of the paper, we will only focus on this sample of galaxies. The gray shaded regions (Figure <ref>, right panel) mark the redshift ranges where no strong galaxy rest frame optical emission lines shift into the observed wavelength range of EIGER NIRCam/grism spectroscopy. Our survey is most sensitive to the three redshift windows, 2.3<z<2.7 (using HeI+[SIII]+Pa-γ lines), 4<z<5.1 (using H-α+[SII] lines), and 5.3<z<7 (using [OIII]+H-β lines). Future, ground based spectroscopic follow-ups or additional JWST spectroscopy with different gratings will enable us to cover these missing redshift ranges. We note that since our galaxy search is explicitly within Δ z ≈ 0.02 (Δ v ≈±6000 ) of the identified absorption lines, the galaxy sample is not selected blindly without any knowledge of the absorption systems. This “galaxy centric" <cit.> approach is essential to characterize the covering fraction of the CGM gas or to measure the total metal mass budget of the CGM <cit.>. Owing to challenges of identifying single emission line galaxies, and in mitigating contamination from other sources, we restrict this work to focus only on galaxies within Δ z ≈ 0.02 of identified absorption line systems. This search window is large enough that we can still detect galaxies not associated with these absorption systems. However, this work does not search for all galaxies outside the selection window and does not attempt to quantifying the absorption covering fraction and metal mass budget around the EIGER galaxies. A complete “galaxy centric" analysis of the CGM of the full EIGER survey will be presented in a future work. 
§.§ Emission line measurement and SED fitting

We follow the procedure introduced in <cit.> to measure the emission line flux of the strongest lines in a galaxy spectrum (e.g., H-α, He-10830, [OIII]) from the grism emission line observations, summarized as follows. We start with the 2D emission line spectra (top panels, Figure <ref>) and select a spectral region within ±50 Å of the emission line of interest in the rest frame. We collapse this emission line in the spectral direction and fit the spatial profile with single or multiple Gaussian profiles. This spatial profile is used to optimally extract a 1D continuum-filtered spectrum for the galaxy. We follow <cit.> and re-scale the noise levels of the 2D emission line spectrum by evaluating the standard deviation of empty sky pixels and setting it equal to the mean noise level of our 1D spectrum. This procedure is performed independently on each module. The bottom panels of Figure <ref> show three representative optimally extracted 1D spectra of galaxies at z ≈ 2.3, 4.2 and 6.3, respectively. The strong emission line features in each spectrum are marked. We use these optimally extracted 1D spectra to fit the emission lines of interest. We fit the emission lines with Gaussian profiles (between 1 and 3, depending on complexity) and measure their total line flux. These line fluxes are used, along with the photometric data, to perform spectral energy distribution (SED) fits at the spectroscopic redshift of the galaxy. We use broad-band photometry from three HST (F606W, F775W, F850LP) and three JWST (F115W, F200W, F356W) images (see Sections 2.2, 2.3). We use the SED fitting code <cit.> to perform fits to these six photometric measurements along with the F356W grism line fluxes. Following <cit.>, we assume a 5% error on the spectro-photometric calibration of the observations. The fits model the total stellar mass, gas-phase and stellar metallicity, the star-formation history of the galaxy, dust attenuation and the ionization parameter. We assume a <cit.> initial mass function (IMF) and use the MIST isochrone models <cit.>. We use a delayed-τ star-formation history model and apply a dust attenuation correction following <cit.>. Figure <ref> shows the stellar mass and star-formation rate (SFR) estimates for these galaxies from the SED fits. Each circle represents a galaxy and is color coded as a function of its spectroscopic redshift. The estimated stellar masses span four decades in range (7.1 ≤ log M_*/M_⊙ ≤ 10.6), and the star-formation rate estimates span almost two dex. Most of the high-z (z>5) galaxies are of lower stellar mass (log M_*/M_⊙ <9.2). We estimate the halo mass of the galaxies using the abundance matching relation from <cit.>. In this work, we quantify the virial radius of a galaxy as R_200, the radius within which the mean halo mass density is 200 times the critical density of the Universe. We write it as R_200^3 = G M_halo / (100 H^2(z)), where M_halo is the halo mass, G is the universal constant of gravitation and H(z) is the Hubble parameter at the redshift of interest. The uncertainties on the stellar masses and the abundance matching relation are propagated through to the halo mass and virial radius estimates. The galaxy properties are presented in Table <ref>. We create false-color JWST/NIRCam (F115W, F200W, F356W) RGB images of each galaxy presented in this work. Each image is a 5×5 stamp and is presented in Figure <ref>.
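To illustrate the virial-radius definition above, here is a minimal sketch of the R_200 calculation for the adopted flat ΛCDM cosmology; the halo mass in the example call is a hypothetical value, not one of the abundance-matched estimates in Table <ref>.

```python
# Minimal sketch of R_200^3 = G * M_halo / (100 * H(z)^2), i.e. the radius
# enclosing a mean density of 200x the critical density of the Universe.
import astropy.units as u
from astropy import constants as const
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.31)  # cosmology adopted in this paper

def r200(m_halo, z):
    """Virial radius R_200 for a halo of mass m_halo at redshift z."""
    r_cubed = const.G * m_halo / (100.0 * cosmo.H(z) ** 2)
    return (r_cubed ** (1.0 / 3.0)).to(u.kpc)

# Hypothetical example halo mass; yields a few tens of kpc at z ~ 4.5,
# comparable to the R_vir values listed in Table 1.
print(r200(10 ** 11.5 * u.Msun, z=4.5))
```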
§.§ Absorption line measurements We visually inspect the spectra of the quasar J0100 obtained from FIRE, X-shooter and HIRES instruments and search for intervening absorption line systems. We use a python-based API from the package <cit.> to identify and tabulate these systems. Along the J0100+2802 quasar line of sight, 22 unique intervening absorption line systems are identified within 2.3 < z < 6.33. These lines include strong absorption systems traced by , , , , , , , , , transitions. Among these, there are 16 unique absorption systems between 2.3 < z < 6.14. In this paper we will focus primarily on the hosts of absorption line systems within the redshift windows of 2.3<z<2.7, 4<z<5.1, and 5.3<z<6.3 (see Figure <ref>). This redshift range corresponds to the observer frame wavelength range with optical galaxy emission lines covered by EIGER NIRCam/grism spectroscopy. We use a semi-automated framework to measure the absorption line strengths and kinematics associated with each identified foreground galaxy as follows: we first shift the reduced final quasar spectrum to the rest-frame of the foreground galaxy, using the spectroscopic redshift of the galaxy as described in the previous section. We focus on the common atomic absorption lines at predictable observed frame wavelength ranges. We quantify a detected absorption system to be associated with a host galaxy if it is within 300 physical kpc of the J0100+2802 quasar sightline and within ±400 of the systemic redshift of the galaxy. Our emission line search criterion (galaxy emission line SNR >7 per module), can detect galaxies at log M_*/M_⊙≈ 7.1 (Figure <ref>). However, it is possible that some faint emission line galaxies are missed in this search. We adopt a conservative approach and only focus on the reliably detected emission line galaxies in this work. The search for fainter (lower SNR) emission line galaxies would be carried out in a future work incorporating all the six quasar fields of the EIGER survey (Bordoloi et al. in prep). We extract short slices of quasar spectra around ±600 of the systemic redshift of the galaxy, for each line of interest. These lines include , , C IV, Si IV etc. We continuum normalize each slice using a multi-ordered Legendre polynomial and measure the rest frame equivalent width and apparent optical depth (AOD) column density of each transition. We visually inspect each transition to confirm its presence, and set the velocity range for AOD column density integration. We use the identified absorption line list to minimize contamination from other intervening absorption line systems, and when setting the velocity integration range. We attempt to identify every detection feature within ±600 of the lines of interest. Most such features are not associated with the host galaxy and are positively identified to be associated with other intervening absorption line systems. We pay particular attention to the identified absorption doublet and ensure that the AOD ratios between the doublet range from 2:1 to 1:1 and require that the absorption profiles are aligned within 200 from other detected metal absorption lines in that system. Additionally, we fit Voigt profiles to both the absorption doublet, to quantify the kinematic component structure and to estimate column densities for severely blended lines. This is a crucial step, as the Voigt profile fitting improves on the AOD column density measurement using information about line shape, and location to constrain the fit. 
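As an illustration of these two measurements, the following is a minimal sketch of a rest-frame equivalent width integral and an apparent optical depth (AOD) column density integral for a continuum-normalized absorption profile. It assumes the standard AOD relation of Savage & Sembach (1991); the oscillator strength and rest wavelength of the transition (e.g. Mg II 2796) would be taken from an atomic line list rather than hard-coded here, and the arrays are placeholders for extracted spectral slices.

```python
# Minimal sketch of rest-frame equivalent width and AOD column density
# measurements on a continuum-normalized flux array. Wavelengths in
# Angstrom, velocities in km/s.
import numpy as np

def rest_equivalent_width(wave_obs, flux_norm, z_abs):
    """W_r = integral of (1 - F/F_c) d(lambda_obs) / (1 + z_abs)."""
    return np.trapz(1.0 - flux_norm, wave_obs) / (1.0 + z_abs)

def aod_column_density(vel, flux_norm, f_osc, lambda0):
    """Apparent optical depth column density in cm^-2 (Savage & Sembach 1991)."""
    tau = np.log(1.0 / np.clip(flux_norm, 1e-5, None))  # guard against zero flux
    return 3.768e14 * np.trapz(tau, vel) / (f_osc * lambda0)
```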
Further, for several saturated absorption features line saturation is taken into account, as the line spread function is included in these fits. We use a python-based Bayesian Markov chain Monte Carlo (MCMC) Voigt profile fitting toolbox [<https://github.com/rongmon/rbvfit>] to perform simultaneous fits to the absorption doublet. This approach fits the column density (N), Doppler b parameter and velocity offset v of each component simultaneously for each doublet. We assume flat priors on each of these parameters with reasonable physical bounds. The number of components and the initial guesses of the velocity offsets are obtained via visual inspection of the data. The fitting procedure generates posterior distributions for the model parameters. We choose the median of each distribution as the best fit model parameter and the 16^th and 84^th percentiles as the lower and upper bounds on the best fit parameters. One advantage of using a Bayesian MCMC approach over a frequentist χ-squared minimization approach is that it yields marginalized posterior distributions of the fitted parameters. This results in accurate column density estimates, even if the Doppler b parameters are simultaneously not well constrained in the moderate resolution spectra used in this work. The absorption systems presented in this work and the best fit Voigt profile parameters are reported in Table <ref> and Figure <ref>.

§ RESULTS

In this section we present the variation of absorption strength with galaxy properties, the radial profiles, and the absorber kinematics.

§.§ Distribution of absorption around galaxies

We first characterize the spatial extent of absorption around the EIGER galaxies. We examine how the rest frame equivalent width (W_MgII2796) varies as a function of impact parameter (R) and the normalized virial radius (R/R_200) around the EIGER galaxies. Figure <ref> shows the 1-D radial absorption profile around 29 galaxies as a function of R (left panel) and R/R_200 (right panel), respectively. Seven unique absorption systems associated with host galaxies are detected, and the galaxy at the closest impact parameter to each absorber is assigned as the host galaxy (gray circles). No absorption is detected around 12 galaxies; these are marked with downward arrows, showing the 2-σ limit on the non-detection. Seven of the galaxies associated with non-detections are at R<200 kpc, suggesting that absorption is patchy at z>4. All galaxies are color coded as a function of their redshifts. In particular, for the absorption systems at z ∼ 4, multiple galaxies are detected within ±400 of the absorber redshift. All these associated galaxies are also presented in Figure <ref>. We discuss the environments of the absorbers in the next section. The error bar on the x-axis (right panel) denotes the uncertainty on the R_200 estimates. Focusing on the closest galaxies (gray circles), two immediate observations stand out in Figure <ref>, left panel. The strongest absorbers are detected at close impact parameters (R < 100 kpc) from their host galaxies. Further, several absorbers are detected at high impact parameters (R> 150 kpc). These absorbers lie at higher impact parameters than the typical radial profile for absorption systems observed at z<1 (gray shaded region, ). However, this trend does not take into consideration that these galaxies all have different masses and virial radii, or the possibility that even fainter galaxies below our detection limit exist at closer distances.
False-color JWST/NIRCam F115W/F200W/F356W broad band images of each galaxy are presented in Figure <ref>. Individual galaxies exhibit diverse morphologies. Galaxies at z<3 typically show well-formed disks and prominent inner bulges. Most of the z>3 galaxies exhibit complex morphologies, with several showing tidally disturbed features and individual knots, indicating either mergers or discrete star-formation events along the galaxy disk. In particular, the stamp of galaxy EIGER-01-10308 stands out (Figure <ref>) as a merging galaxy with tidal streams clearly visible within 2 arcsec of the galaxy. This galaxy is associated with the strongest absorption system reported in this work (Appendix <ref>), and we discuss this system in detail in the next section. As a fraction of their inferred virial radii (Figure <ref>, right panel), most absorption extends out to ∼2-3 R/R_200 of the host galaxies. This spatial extent is much larger than typically observed around galaxies at z<1 <cit.>. The absorption strength also shows a steep decline with R/R_200. We quantify this radial fall off with a power law fit as log W_MgII2796 = (2.403 ± 0.046) × (R/R_200)^(-0.52 ± 0.034). The best fit power law with its 68% confidence interval is presented as the dashed line with the gray shaded region in Figure <ref>, right panel. We note that 71^+13_-19% (5/7) of the absorbers are detected outside the inferred virial radii of the host galaxies. Figure <ref> presents the variation of absorption as a function of stellar mass (left panel) and SFR (right panel), respectively. The symbols are color coded to be consistent with Figure <ref>. We note that 86^+9_-18% (6/7) of the detected absorption systems are associated with higher stellar mass galaxies (log M_*/M_⊙ >9), with the strongest absorption systems associated with the highest stellar mass systems. The two highest redshift absorption systems are associated with lower mass log M_*/M_sun≈ 7.1 (z∼ 5.33) and log M_*/M_sun≈ 9.1 (z∼ 6.01) galaxies, respectively. Further, the strongest absorption systems are associated with the galaxies with the highest SFR. These results suggest a correlation between the absorption strength of these strong high-z absorption systems and the star-formation activity of their host galaxies. These trends are similar to what is observed for strong absorption around z∼ 1 galaxies <cit.>. A unique facet of these high-z absorption systems is that the majority of them lie beyond the inferred virial radii of their host galaxies. At these distances the absorbing gas may not be bound to the gravitational potential of the host galaxies. We explore this in the next section.
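For reference, here is a minimal sketch of how such a power-law radial profile can be fit; the R/R_200 and equivalent-width arrays below are placeholders standing in for the measurements shown in the figure, not the actual data points.

```python
# Minimal sketch of the power-law fit log10(W_2796 / mA) = A * (R / R_200)^B.
import numpy as np
from scipy.optimize import curve_fit

def radial_profile(x, amp, slope):
    """x = R / R_200; returns log10 of the rest-frame equivalent width."""
    return amp * x ** slope

x_data = np.array([0.5, 0.6, 1.4, 2.0, 2.5, 2.8, 3.0])     # hypothetical R/R_200
logw_data = np.array([3.3, 3.0, 2.9, 1.5, 1.5, 1.5, 1.5])   # hypothetical log10 W [mA]

popt, pcov = curve_fit(radial_profile, x_data, logw_data, p0=[2.4, -0.5])
perr = np.sqrt(np.diag(pcov))
print(popt, perr)  # compare with A = 2.403 +/- 0.046, B = -0.52 +/- 0.034
```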
The symbols are color coded to show the column density of each component. The vertical range bars show the velocity range over which the equivalent width of the system is calculated. They are effectively the full width at zero optical depth of each absorption system. Both the thermal and bulk motion associated with the absorption systems are incorporated within the full velocity widths and therefore represents the maximum projected velocity of the absorption systems. The horizontal range bars represent the uncertainty in estimating R/R_200. Figure <ref>, right panel shows the distribution of absorption components relative to the host galaxy redshift. The distribution of absorption components show a large velocity spread from -400 to 300 . The absorption component velocities are offset from the systemic zero velocities with a median absolute velocity of 135 and a standard deviation of 85 , respectively. This is different than what is observed at the CGM of low-z galaxies, where most of the CGM absorption systems cluster around the systemic zero velocity of their host galaxies and their velocities are almost always consistent with being less than the associated virial velocities <cit.>. We further investigate if the absorption detected around EIGER galaxies are consistent with being bound to the dark matter halos of the host galaxies in Figure <ref>. We present the absorption component velocities normalized to the escape velocity associated with the host galaxy at that impact parameter as a function of R/R_200. The horizontal error bars represent the uncertainty associated with the inferred R_200. The vertical range bars denote the velocity range over which equivalent width of the system is calculated (normalized to the escape velocity of the system). It is clearly seen that 5 out of 7 absorption systems are detected at R> 1.3R_200. Two absorption components have velocities higher than the projected escape velocities of these systems. This suggests that these absorption systems are not consistent with being bound to the dark matter halos of the host galaxies. Only two absorption systems (galaxies: EIGER-01-10308, EIGER-01-09950) are at R < R_200, and their component velocities are less than escape velocities associated at these impact parameters. Only these two z∼ 4 systems are consistent with being bound to the dark matter halo of their host galaxies. These two systems are also associated with higher galaxy over-densities around them (Figure <ref>). Looking at both the absorption component kinematics and the R/R_200 distribution of these absorption systems, we conclude that the CGM kinematics at high-z is significantly different than what is observed at z < 1. At low-z bulk of the CGM absorption systems are consistent with being bound to the dark matter halos of their host galaxies, unlike the CGM of EIGER galaxies. This suggests an evolution in CGM gas kinematics as a galaxy evolves from early Universe to today. In the early Universe, the CGM gas could easily escape from individual galaxy halos and chemically enrich the IGM around galaxies. But as the galaxies became larger at low-z, the CGM becomes increasingly bound to the host galaxies. We finally explore the environment around the seven EIGER CGM host galaxies and quantify if these are isolated host galaxies or if they have companion galaxies. Figure <ref> show the impact parameter to each galaxy at the systemic redshift of the galaxy noted in each panel. 
We plot all galaxies within 300 kpc from the J0100+2802 sightline, and within ±600 of the host galaxies. Each galaxy is color coded as a function of its stellar mass. The vertical range bars show the associated R_200 of each galaxy. We describe the environment of each system below. EIGER-01-10308: This galaxy at z∼ 4.22, is a tidally disturbed merging system. The tidal streams and different components of the merger can be clearly seen in high-resolution JWST imaging (Appendix <ref>, panel a). The merging system has an integrated stellar mass of log M_*/M_sun = 10.17, and a SFR ∼ 70 . We extract individual NIRCam spectra of the two smaller merging components (Appendix <ref>, panel b) and compute their individual redshifts. These two components are at ∼ -23 and -80 from the main galaxy, respectively. These individual components are marked as gold stars in Figure <ref>, top left panel. The galaxy EIGER-01-10308 resides in a local over-density and there are seven additional galaxies within 200 kpc from the quasar sightline. The galaxies are within 220 kpc of each other and have a velocity dispersion of 228 . This large over-density of galaxies at close physical and kinematic separation may be part of a proto-group at ⟨ z ⟩≈ 4.2234. The additional seven galaxies are at higher impact parameter than EIGER-01-10308 and at R > R_200. The absorption associated with EIGER-01-10308 show complex kinematics with six distinct individual absorption components identified with a velocity spread of ≈ 500 (Figure <ref>). Absorption is also detected in , , and transitions (Appendix <ref>, panel c). The galaxy is at low impact parameter with R/R_200 <1 and the velocity of the absorption components are less than the escape velocity associated at this impact parameter. This system is one of the two absorption systems in this work, that has absorption kinematics consistent with being bound to the dark matter halo of the host galaxy. EIGER-01-09950: At z∼ 4.5, there are three additional galaxies within 300 kpc of this galaxy (Appendix <ref>). The galaxy EIGER-01-09950 has a stellar mass of log M_*/M_sun = 9.88, and a SFR ∼ 29 , respectively. There are two other galaxies at very close kinematic separations: EIGER-01-09078 at a separation of 16 kpc from EIGER-01-09950 and a velocity offset of 368 , and galaxy EIGER-01-08811, at a separation of 22 kpc from EIGER-01-09950 and a velocity offset of 122 . Both galaxies EIGER-01-09078 and EIGER-01-08811 are 40 and 50 kpc away from the quasar sightline J0100+2802. A third more massive galaxy (EIGER-01-06193) is detected at 235 kpc away from EIGER-01-09950 and at a velocity separation of 232 . Galaxy EIGER-01-06193 is at an impact parameter of 260 kpc from the quasar sightline J0100+2802. These galaxies all lie close to each other (within 232 kpc and 166 ) and could form a proto-group at z∼ 4.5192. The absorption profile is again complex for this system (Figure <ref>), with five identified distinct absorption components. The absorption spans a velocity range of ≈ 350 . We also detect absorption in , , and transitions in this system (Appendix <ref>). The strongest absorption component is offset from the systemic redshift galaxy EIGER-01-09950 by ∼ 95 , and kinematically lines up with the nearby galaxy EIGER-01-09078. But EIGER-01-09078 has a much lower stellar masses (see Table <ref>), and the impact parameter to it is higher than the inferred virial radius associated with it (Figure <ref>). 
Since the galaxy EIGER-01-09950 is the closest galaxy to the line of sight, and kinematically the projected velocity is lower than the escape velocity of the host galaxy, we conclude that the absorption is consistent with being bound to the dark matter halo of the host galaxy. EIGER-01-10424: At z∼ 4.64, a faint galaxy is detected at an impact parameter of 72 kpc from the J0100+2802 quasar line of sight. This galaxy has a stellar mass of log M_*/M_sun = 9.18, and a SFR ∼ 13 , respectively. The galaxy is next to a bright z∼ 1 foreground galaxy detected in Pa-α emission (see Figure <ref>). Ground based MUSE spectrum of the bright foreground galaxy show [OII] emission doublet, confirming it as a low-z galaxy. There are two faint emission components detected for the target galaxy, suggesting that it is a merging system at z∼ 4.64, however, owing to the position angle of the NIRCam grism spectra, the two components cannot be spatially resolved. We note that since the emission line redshift for this system is estimated from a single H-α emission line, it is possible that this emission is associated with the foreground bright galaxy at z∼ 1. However, ground based seeing does not allow us to check for [OII] emission associated with the fainter components in the MUSE datacube. The presence of metal absorption and the emission lines in the NIRCam grism spectra has led us to conclude that EIGER-01-10424 is the host galaxy for absorption detected at z∼ 4.6. This system only shows weak absorption doublet, kinematically offset from the host galaxy's systemic redshift by ∼ -350 (Figure <ref>), and beyond the inferred virial radius of the host galaxy (Figure <ref>). We therefore conclude that the CGM absorption is not consistent with being bound to the host galaxy. Galaxies EIGER-01-06898, EIGER-01-20300: These galaxies are associated with absorption at z∼ 6.015 and 5.33, respectively. EIGER-01-06898 is at an impact parameter of 201 kpc and has a stellar mass of log M_*/M_sun = 9.1, and a SFR ∼ 8.4 , respectively. EIGER-01-20300 is at an impact parameter of 19 kpc and has a stellar mass of log M_*/M_sun = 7.1, and a SFR ∼ 1.1 , respectively. Both these galaxies are “isolated", in the sense that no companion galaxy within ±600 and 300 kpc of them is detected. Both galaxies are at an impact parameter beyond the R_200 radii of their host galaxies (Figure <ref>). For EIGER-01-06898, we detect and absorption offset from the systemic redshift of the galaxy by ≈ -136 . Around EIGER-01-20300, we detect , , and absorption, kinematically offset from the systemic redshift of the host galaxy by ≈ 120 . We will report and quantify the high-ionization absorption profiles in an upcoming publication (Simcoe et al. in prep). For both these systems, the absorption is kinematically offset from the systemic galaxy redshift and is detected beyond the inferred virial radii of their corresponding host galaxies. We therefore conclude that the absorption detected around these galaxies is not bound to the host galaxy's dark matter halo. Galaxies EIGER-01-06569, EIGER-01-09351: These two galaxies are associated with absorption at z∼ 2.67 and 2.31, respectively. The galaxy 6569 is at an impact parameter of 270 kpc and has a stellar mass of log M_*/M_sun = 10.17, and a SFR ∼ 75 , respectively. EIGER-01-09351 is at an impact parameter of 172 kpc and has a stellar mass of log M_*/M_sun = 10.64, and a SFR ∼ 75 , respectively. 
In both these systems, absorption doublet is detected as any line with bluer rest frame wavelength will be in the Ly-α forest and not observable along this high-z quasar sightline. Both these systems are weak absorption systems (Figure <ref>) and are beyond the inferred virial radii of their host galaxies. In both cases, the absorption is consistent with not being bound to the dark matter halos of the host galaxies. In all the detected systems, only two galaxies EIGER-01-10308 and EIGER-01-09950 have associated absorption consistent with being bound to the host galaxy dark matter halos. In both cases, there is a galaxy overdensity suggesting that these galaxies reside in two galaxy proto-groups. In all other cases, where no galaxy overdensity is seen, most of the absorption is consistent with not being bound to the dark matter halo of their host galaxies. This is significantly different than the cool CGM detected at z<1, where most of the CGM gas is consistent with being bound to their host galaxies. § DISCUSSION AND CONCLUSIONS The commissioning of JWST has opened a new discovery space to study the circumgalactic medium of high-z galaxies. In this work, we present the deep NIR (3.5μm) WFSS JWST NIRCam spectroscopic observations of the z∼6.33 quasar field J0100+2802 from the EIGER survey to characterize the cool CGM (traced by absorption) around 29 2.3<z<6.3 galaxies. The JWST WFSS spectroscopy is accompanied by deep JWST/NIR and HST/Optical broad band imaging and deep ground based high resolution spectroscopy of the quasar. This work builds on the initial EIGER survey papers that characterized the properties of a large sample of [OIII] emitting galaxies at z=5.33–6.93 <cit.>. Our main conclusions are summarized as follows: * Using JWST/NIRCam 3.5μm grism spectroscopy, we discover 29 galaxies within 300 kpc from the quasar sightline J0100+2802 in three redshift windows 2.3 <z<2.7, 4<z<5.1, and 5.3<z<6.3, respectively. Accurate spectroscopic redshifts are measured using strong rest frame optical emission lines (e.g., [SIII], He-I 10830, Pa-γ, H-α, H-β, [OIII]). * The galaxies span a stellar mass range of 7.1 ≤log M_*/M_sun≤ 10.7, and exhibit strong correlation between star-formation rates and stellar mass of the galaxies. All the galaxies presented in this work are star-forming. * Galaxies identified show a diverse morphology, from tidally disturbed mergers, to well-formed disks. Most of the z>3 galaxies show complex morphology of either several clumps or tidally disturbed features. * We identify the CGM host galaxies of seven absorption systems within an impact parameter of 300 kpc. Identifying the closest galaxy to the quasar line of sight as the host, we find that strongest absorption is detected at close impact parameters (R< 100 kpc). The absorption strength drops off as a function of galactocentric radii from the host galaxies, characterized by a power law fall off. This radial fall off is slightly shallower than the radial absorption profile observed for z<1 galaxies. * There are 12 galaxies within 300 kpc, for which no absorption is detected at a mean detection threshold of 10-20 mÅ. This shows that at high-z, cool CGM traced by absorption is patchy. * The absorption radial profile normalized to the host galaxy virial radius (R/R_200), show a steep decline with impact parameter. The radial profile is quantified as a power law fit: log W_MgII2796 = (2.403 ± 0.046) × (R/R_200)^-0.52 ± 0.034. Most of the CGM host galaxies are detected at R <2-3 R_200. 
* 71^+13_-19% (5/7) of the absorption systems are detected outside the virial radii of their host galaxies. Two absorption systems are detected within the virial radii of their host galaxies, and both these galaxies reside in local galaxy overdensities. * 86^+9_-18% (6/7) of the absorption systems are detected around host galaxies with log M_*/M_sun >9, with strongest absorption systems associated with the highest stellar mass systems. The two z>6 absorption system are associated with lower mass log M_*/M_sun≈7 and log M_*/M_sun≈9 galaxies, respectively. Similarly, strongest absorption systems are also associated with the most star-forming galaxies. * The absorption kinematics is not symmetrically clustered around the systemic zero velocity of their host galaxies. The absorption components velocities have a large velocity spread (from -400 to 300 ) around the systemic redshift of the host galaxies. The absorption components show a median absolute velocity of 135 and a standard deviation of 85 . * Five out of the seven absorption systems are associated with host galaxies at R>1.3R_200. Moreover, two absorption components show projected velocities higher than the escape velocity of the host galaxies. We conclude that five out of seven absorption systems have cool CGM gas, consistent with being unbound to their host dark matter halos. * We highlight the CGM around two particular absorption systems (z∼ 4.2 and 4.5) because they are associated with host galaxies at R < R_200, and with absorption gas kinematics consistent with being bound to the dark matter halos of the host galaxies. Both these z∼ 4.22 and z∼ 4.5 absorption systems exhibit complex kinematics spanning ≈ 500 and 350 , respectively. * The absorption system at z∼ 4.22 is associated with a morphologically disturbed merging galaxy with three distinct merging components within 80 of each other. This galaxy is within a local galaxy over-density where seven additional galaxies are observed within 200 kpc of the quasar sightline and within ±500 of the host galaxy. These galaxies might be part of a galaxy proto-group at z∼ 4.22. * The absorption system at z∼ 4.5 is associated with a galaxy with two close kinematic companions within 16-22 kpc of the host galaxy. Both these companion galaxies are within velocity separation of < 370 from the host galaxy. A third massive companion galaxy is detected 235 kpc from the CGM host galaxy. These galaxies might be part of a galaxy proto-group at z∼ 4.5. * The two strongest and kinematically most complex absorption systems (at z∼ 4.22 and z∼ 4.5) are both part of two local galaxy over-densities. The absorption detected in these systems may be part of the intra-group gas associated with these two “proto-group" galaxies at high-z. In summary, we present the first results characterizing the cool CGM around 2.3<z<6.3 galaxies in the first field of the EIGER survey. We examine CGM hosts of seven absorption systems and find that most of the high-z absorption is not consistent with being bound to the dark matter halos of the host galaxies. This is in contrast to what is seen for CGM of z<1 galaxies <cit.>. In particular, extensive HST/COS CGM surveys of z<0.2, L* and sub-L* galaxies show that at low-z most of the CGM is kinematically consistent with being bound to their host galaxies. These differences arise owing to a combination of lower gravitational potential of high-z galaxies and a much higher Hubble parameter in the earlier Universe. 
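To make the bound/unbound comparison and the quoted radial profile concrete, the following is a rough numerical sketch (not the paper's procedure). It assumes a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3, a top-hat M200–R200 relation, and a point-mass escape speed at R200; all function names and parameter values are illustrative assumptions.

import numpy as np

# Mean radial profile quoted above: log10 W_r(MgII 2796) [mAA] = 2.403 * (R/R200)^(-0.52)
def log_w_mgii(r_over_r200, amp=2.403, slope=-0.52):
    return amp * r_over_r200**slope

# Rough bound/unbound check: compare a component's velocity offset with the escape speed at R200.
# Top-hat halo assumed (M200 = 100 H(z)^2 R200^3 / G, hence V200 = 10 H(z) R200) and a point-mass
# escape speed v_esc(R200) = sqrt(2) V200; an NFW halo would give a somewhat larger v_esc.
def v_escape_r200(r200_kpc, z, h0=70.0, om=0.3, ol=0.7):   # flat LCDM assumed
    hz = h0 * np.sqrt(om * (1.0 + z)**3 + ol)              # H(z) in km/s/Mpc
    v200 = 10.0 * hz * (r200_kpc / 1.0e3)                  # km/s
    return np.sqrt(2.0) * v200

# Example: EIGER-01-10424 (z ~ 4.65, R200 ~ 37 kpc, impact parameter 72 kpc, |dv| ~ 350 km/s)
print(10**log_w_mgii(72.0 / 37.0))   # mean-relation W_r in mAA at R ~ 2 R200 (individual systems scatter)
print(v_escape_r200(37.0, 4.65))     # a few hundred km/s; the observed ~350 km/s offset exceeds it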
These findings indicate that the galaxies in the early Universe were much more efficient in distributing metals produced in stars out of galaxies and chemically enrich their IGM. Such chemically enriched gas could be deposited to nearby galaxies at a later time, providing fuel for next generation of stars in them. These results reinforce the power of JWST/NIRCam grism observations to efficiently conduct high-z galaxy spectroscopy campaigns. By combining the high fidelity JWST spectroscopic campaign with deep group based NIR spectroscopy of z>6 quasars, we demonstrate an efficient program design to census the cool CGM around high-z galaxies in the EIGER survey. In an upcoming paper, we will focus on detailed properties of the CGM of high-z and absorption systems (Simcoe et al in prep.). We will further extend this work to the full six quasar fields of the EIGER survey (Bordoloi et al in prep) to better quantify the CGM-galaxy correlation in the EIGER survey. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program # 1243. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. This work has been supported by JSPS KAKENHI Grant Number JP21K13956 (DK). RS acknowledges support from NASA award number HST-GO-15085.001. ccccccccc 9 0pt EIGER galaxy properties along with absorption measurements IDa galaxy galaxy z_sysb Rc log M_* R_vird log SFR W_re α [J2000] δ [J2000] [pkpc] M_sun [pkpc] M_sunyr^-1 [mÅ] EIGER-01-09351 01:00:14.38 28:02:16.42 2.3087±0.0001 171.9 10.64^+0.08_-0.14 133^+18_-15 1.88^+0.11_-0.09 34±3 EIGER-01-09138 01:00:14.28 28:02:14.86 2.3537±0.0001 168.2 9.87^+0.25_-0.16 87^+11_-9 1.76^+0.04_-0.06 >27 EIGER-01-08219 01:00:14.72 28:02:18.91 2.3571±0.0000 198.0 9.64^+0.22_-0.20 79^+9_-9 1.96^+0.05_-0.06 >8 EIGER-01-06569 01:00:14.31 28:02:54.32 2.6783±0.0001 270.4 10.17^+0.21_-0.36 92^+17_-12 1.87^+0.20_-0.14 29±6 EIGER-01-06330 01:00:14.76 28:02:45.49 4.2156±0.0003 210.8 8.56^+0.20_-0.22 31^+4_-3 0.18^+0.12_-0.09 >4 EIGER-01-07514 01:00:13.80 28:02:44.01 4.2208±0.0002 145.7 9.00^+0.23_-0.22 38^+5_-4 0.85^+0.06_-0.05 2118±52 EIGER-01-07870 01:00:14.00 28:02:34.76 4.2220±0.0003 109.8 8.75^+0.15_-0.18 34^+4_-4 0.36^+0.12_-0.11 2119±52 EIGER-01-07143 01:00:14.47 28:02:37.92 4.2234±0.0001 158.1 9.16^+0.14_-0.20 40^+5_-4 1.03^+0.05_-0.04 2127±52 EIGER-01-08125 01:00:13.89 28:02:33.98 4.2236±0.0002 98.6 8.88^+0.17_-0.39 36^+5_-4 0.53^+0.10_-0.09 2116±52 EIGER-01-07015 01:00:14.47 28:02:38.32 4.2237±0.0004 159.4 8.52^+0.19_-0.20 30^+4_-3 0.72^+0.05_-0.06 2129±53 EIGER-01-18550 01:00:11.50 28:02:26.27 4.2256±0.0000 139.0 9.09^+0.19_-0.22 39^+5_-5 0.83^+0.08_-0.08 2118±53 EIGER-01-10308 01:00:12.71 28:02:29.24 4.2263±0.0003 36.3 10.11^+0.16_-0.17 62^+7_-6 1.84^+0.08_-0.08 2122±52 EIGER-01-07059 01:00:14.30 28:02:42.43 4.2272±0.0006 165.0 8.81^+0.14_-0.26 35^+5_-4 0.41^+0.12_-0.10 2121±52 EIGER-01-19137 01:00:11.88 28:02:27.81 4.4245±0.0001 103.3 9.17^+0.19_-0.14 39^+4_-4 1.83^+0.09_-0.07 >5 EIGER-01-09950 01:00:13.28 28:02:28.54 4.5159±0.0001 30.4 9.88^+0.15_-0.20 53^+6_-5 1.46^+0.11_-0.10 891±30 EIGER-01-08811 01:00:13.53 28:02:28.89 4.5182±0.0007 50.9 8.30^+0.17_-0.27 26^+4_-3 -0.05^+0.12_-0.08 902±30 EIGER-01-06193 
01:00:15.92 28:02:28.58 4.5202±0.0003 260.3 10.70^+0.03_-0.03 120^+79_-26 2.15^+0.06_-0.05 890±30 EIGER-01-09078 01:00:13.45 28:02:27.48 4.5227±0.0001 40.3 8.79^+0.25_-0.35 32^+5_-5 0.61^+0.07_-0.07 878±30 EIGER-01-08183 01:00:12.91 28:02:50.56 4.5466±0.0002 166.4 8.24^+0.26_-0.35 25^+4_-4 0.64^+0.08_-0.06 >20 EIGER-01-10424 01:00:13.07 28:02:36.60 4.6491±0.0003 71.7 9.18^+0.18_-0.18 37^+4_-4 1.11^+0.07_-0.07 109±4 EIGER-01-16490 01:00:09.55 28:02:26.67 4.9422±0.0005 295.7 9.00^+0.17_-0.17 32^+4_-3 0.90^+0.07_-0.06 >9 EIGER-01-20300 01:00:12.88 28:02:23.44 5.3364±0.0021 18.8 7.10^+0.17_-0.10 12^+1_-1 0.05^+0.42_-0.18 167±3 EIGER-01-19002 01:00:10.51 28:02:51.33 5.9079±0.0023 246.0 8.30^+0.20_-0.28 19^+3_-2 0.21^+0.17_-0.12 >76 EIGER-01-06027 01:00:15.74 28:02:30.31 5.9400±0.0023 213.7 7.83^+0.43_-0.37 15^+3_-3 0.58^+0.37_-0.19 >9 EIGER-01-08200 01:00:14.47 28:02:19.32 5.9417±0.0004 119.6 8.55^+0.26_-0.28 21^+3_-3 0.98^+0.14_-0.13 >9 EIGER-01-06898 01:00:15.58 28:02:19.91 6.0154±0.0001 200.7 9.10^+0.19_-0.24 27^+4_-3 0.93^+0.23_-0.19 46±6 EIGER-01-07979 01:00:13.96 28:02:32.78 6.1883±0.0024 82.4 7.70^+0.24_-0.18 14^+2_-2 0.36^+0.34_-0.09 >14 EIGER-01-16842 01:00:09.74 28:02:29.50 6.2051±0.0024 249.3 8.13^+0.34_-0.29 17^+3_-2 0.69^+0.20_-0.14 >19 EIGER-01-10430 01:00:12.79 28:02:25.55 6.3288±0.0002 16.9 7.84^+0.11_-0.09 14^+1_-1 1.36^+0.29_-0.19 >6 aGalaxies within 300 kpc of the quasar J0100+2802. bUncertainties listed are line centroiding errors. cImpact parameter in physical kpc. dVirial radius in kpc. e rest frame equivalent widths. Limits on W_r are 2σ. ccccc 5 0pt Voigt profile fit parameters for absorption ID z_sys log N/cm^-2 b [kms^-1] v [kms^-1] EIGER-01-09351 2.3087 11.84^+0.04_-0.05 40^+4.7_-4.3 -48^+4.2_-3.9 – – 11.81^+0.04_-0.04 11^+1.7_-1.7 17^+0.9_-0.9 EIGER-01-06569 2.6783 11.71^+0.17_-0.21 31^+19.9_-27.5 -93^+10.1_-12.5 EIGER-01-10308 4.2263 12.70^+0.04_-0.05 20^+2.0_-2.0 -278^+2.1_-2.2 – – 13.28^+0.03_-0.02 16^+2.8_-1.9 -233^+1.4_-1.0 – – 13.48^+0.41_-0.30 25^+12.1_-12.2 -172^+20.4_-10.6 – – 14.05^+0.10_-0.19 22^+1.7_-6.5 -134^+4.4_-3.6 – – 12.78^+0.01_-0.02 36^+1.7_-2.1 -29^+1.5_-1.1 – – 13.50^+0.01_-0.01 16^+0.2_-0.2 127^+0.2_-0.2 EIGER-01-09950 4.5159 12.52^+0.01_-0.01 14^+1.0_-1.0 44^+0.5_-0.5 – – 13.29^+0.02_-0.02 9^+0.3_-0.3 95^+0.3_-0.3 – – 12.25^+0.15_-0.13 27^+12.9_-12.3 134^+5.3_-9.2 – – 12.90^+0.02_-0.02 15^+1.7_-1.7 211^+0.8_-0.8 – – 13.04^+0.01_-0.01 17^+0.8_-0.8 258^+0.6_-0.6 EIGER-01-10424 4.6491 12.55^+0.01_-0.01 12^+0.8_-0.8 -310^+0.4_-0.4 EIGER-01-20300 5.3364 12.11^+0.10_-0.10 25^+5.4_-4.6 95^+6.0_-4.8 – – 12.41^+0.05_-0.11 9^+1.8_-1.6 126^+1.2_-1.8 – – 11.87^+0.28_-0.07 13^+12.7_-6.9 158^+2.2_-12.6 EIGER-01-06898 6.0154 12.43^+0.12_-0.07 4^+0.7_-1.3 -136^+0.8_-1.2 JWST (NIRCam), Magellan (FIRE), VLT (X-shooter), Keck (HIRES) astropy <cit.>, Cloudy <cit.>, Source Extractor <cit.> rbcodes <cit.> aasjournal In this section we present the JWST false color stamps, NIRCam WFSS spectra and ground based absorption spectroscopy of the absorption systems at z∼ 4.22 (Figure <ref>) and z∼ 4.5 (Figure <ref>), respectively.
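To connect the Voigt-profile parameters tabulated above (log N, b, v) to the rest-frame equivalent widths quoted in the galaxy table, here is a hedged sketch of the standard conversion for a pure Doppler (Gaussian-core) MgII 2796 line. The oscillator strength and rest wavelength are standard literature values not quoted in the text, damping wings are neglected, and the authors' actual fits (made with rbcodes) used full Voigt profiles, so this is an approximation for the weak components only.

import numpy as np

LAM0_A, F_OSC = 2796.35, 0.6155   # MgII 2796 rest wavelength [AA] and oscillator strength (literature values)
C_KMS = 2.998e5

def tau_doppler(v_kms, logN, b_kms, v0_kms):
    # Central optical depth of a Doppler core: tau0 = 1.497e-15 * N[cm^-2] * f * lambda[AA] / b[km/s]
    tau0 = 1.497e-15 * 10**logN * F_OSC * LAM0_A / b_kms
    return tau0 * np.exp(-((v_kms - v0_kms) / b_kms)**2)

def rest_ew_mA(components, vgrid=np.linspace(-600.0, 600.0, 12001)):
    # W_r = integral of (1 - exp(-tau)) d(lambda), summed over all velocity components
    tau = np.zeros_like(vgrid)
    for logN, b, v0 in components:
        tau += tau_doppler(vgrid, logN, b, v0)
    dv = vgrid[1] - vgrid[0]
    return 1.0e3 * np.sum(1.0 - np.exp(-tau)) * dv * LAM0_A / C_KMS   # mAA

# Example: the single component listed for EIGER-01-10424 (logN = 12.55, b = 12 km/s, v = -310 km/s)
print(rest_ew_mA([(12.55, 12.0, -310.0)]))   # of order 1e2 mAA, comparable to the tabulated W_r ~ 109 mAA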
http://arxiv.org/abs/2307.00325v1
20230701123604
A System for Differentiation of Schizophrenia and Bipolar Disorder based on rsfMRI
[ "Daniela Janeva", "Stefan Krsteski", "Matea Tashkovska", "Nikola Jovanovski", "Tomislav Kartalov", "Dimitar Taskovski", "Zoran Ivanovski", "Branislav Gerazov" ]
eess.SP
[ "eess.SP" ]
A System for Differentiation of Schizophrenia and Bipolar Disorder based on rsfMRI Daniela Janeva, Stefan Krsteski, Matea Tashkovska, Nikola Jovanovski, Tomislav Kartalov, Dimitar Taskovski, Zoran Ivanovski, and Branislav Gerazov Faculty of Electrical Engineering and Information Technologies, Ss. Cyril and Methodius University, Skopje, Macedonia ==================================================================================================================================================================================================================================================================================================================================================================== Schizophrenia and bipolar disorder are debilitating psychiatric illnesses that can be challenging to diagnose accurately. The similarities between the diseases make it difficult to differentiate between them using traditional diagnostic tools. Recently, resting-state functional magnetic resonance imaging (rsfMRI) has emerged as a promising tool for the diagnosis of psychiatric disorders. This paper presents several methods for differentiating schizophrenia and bipolar disorder based on features extracted from rsfMRI data. The system that achieved the best results, uses 1D Convolutional Neural Networks to analyze patterns of Intrinsic Connectivity time courses obtained from rsfMRI and potentially identify biomarkers that distinguish between the two disorders. We evaluate the system's performance on a large dataset of patients with schizophrenia and bipolar disorder and demonstrate that the system achieves a 0.7078 Area Under Curve (AUC) score in differentiating patients with these disorders. Our results suggest that rsfMRI-based classification systems have great potential for improving the accuracy of psychiatric diagnoses and may ultimately lead to more effective treatments for patients with this disorder. Schizophrenia, Bipolar disorder, resting-state Functional Magnetic Resonance Imaging (rsfMRI), 1D Convolutional Neural Networks, biomedical engineering, AUC; § INTRODUCTION Schizophrenia and bipolar disorder are two of the most challenging psychiatric illnesses, affecting millions of people worldwide. Schizophrenia is a severe mental disorder characterized by a wide range of symptoms, including delusions, hallucinations, disorganized thinking, and abnormal behaviors <cit.>. On the other hand, bipolar disorder is a mood disorder characterized by recurrent episodes of mania and depression <cit.>. Schizophrenia and bipolar disorder are chronic illnesses that can severely impact an individual's daily life and functioning <cit.>. The symptoms of these disorders can be distressing and debilitating, making it difficult for patients to maintain relationships, work, or engage in everyday activities. Unfortunately, accurate diagnosis of these disorders is often delayed or missed, resulting in inappropriate or ineffective treatment. While the two disorders have distinct clinical features, they also share some similarities in terms of symptoms and genetic risk factors. Both disorders register problems in cognitive achievements reporting deficits in visuospatial performance as a precursor of both disorders. This overlap has led some researchers to suggest that the two disorders may be part of a broader spectrum of mental illnesses that share underlying genetic and environmental risk factors <cit.>. 
It is essential to distinguish between schizophrenia and bipolar disorder because although they share some common symptoms, they require different treatments. Misdiagnosis or delayed diagnosis can lead to inappropriate or ineffective treatments, resulting in poor outcomes for patients. For example, antipsychotic medications, which are typically used to treat schizophrenia, may exacerbate symptoms of mania in bipolar disorder <cit.>. Conversely, mood stabilizers and antidepressants, typically used to treat bipolar disorder, may not be effective for treating symptoms of schizophrenia <cit.>. In recent years, the development of new techniques for brain imaging has led to significant advances in the diagnosis and treatment of schizophrenia and bipolar disorder. Resting-state functional magnetic resonance imaging (rsfMRI) has emerged as a promising tool for understanding the underlying neural mechanisms of these disorders. RsfMRI measures brain activity by detecting changes in blood flow to different regions of the brain during periods of rest. Studies have shown that there are distinct patterns of brain activity associated with schizophrenia and bipolar disorder, and these patterns can be used to differentiate between the two disorders <cit.>, <cit.>. In this paper, we present methods for differentiating schizophrenia and bipolar disorder based on rsfMRI data. We apply machine learning algorithms to analyze patterns of resting-state brain activity and evaluate the models' performances on a large dataset of patients with schizophrenia and bipolar disorder. The proposed system was submitted to the IEEE Signal Processing Cup (SPC) 2023. <cit.>. § DATASET The dataset used in this work was provided by the Brain Space Initiative for the IEEE SPC <cit.>. It consists of features extracted from the rsfMRI data of individuals with Schizophrenia and Bipolar disorder. The dataset was obtained by using 105 intrinsic connectivity network (ICN) time courses derived from a multi-spatial-scale spatially constrained ICA approach and their functional network connectivity (FNC). The provided features were extracted using the following steps <cit.>: * Quality control was applied to identify high-quality data. * Each subject's rsfMRI data were preprocessed using a common procedure, including rigid body motion correction, slice timing correction, and distortion correction. * Preprocessed subject data were registered into a common space, resampled to 3 mm^3 isotropic voxels and spatially smoothed using a Gaussian kernel with a 6 mm full width at half-maximum (FWHM). * A multi-spatial-scale template of 105 ICNs obtained from 100k+ subjects was used and a constrained ICA approach to obtain subject-specific ICN time courses. * To calculate FNC, ICN time courses were cleaned using a common standard and FNC is estimated by calculating the Pearson correlation between each pair of ICN time courses resulting in one FNC matrix for each individual. The training dataset consists of ICN and FCN features for 471 individuals and the test set contains the features of 315 individuals. An additional test set was withheld to evaluate the submitted models in the IEEE SPC. § METHODS We applied different methods for the differentiation of the two diagnostic groups for the FCN and ICN features according to their nature. For the differentiation of the FCN features between diagnostic groups, we applied statistical methods for feature selection and machine learning algorithms for binary classifications. 
For the classification of ICNs, we applied digital signal processing techniques as well as machine learning techniques for feature extraction and binary classification. With each of the trained models, we predicted the labels of the test set using soft probability scores, which we used to evaluate the models' AUC scores for the IEEE SPC. §.§ Intrinsic Connectivity Network Intrinsic connectivity networks are a set of brain networks, defined based on the intrinsic functional connectivity of different brain regions that are identified using fMRI <cit.>. They are thought to reflect the underlying organization of the brain which is helpful for understanding brain function <cit.>. ICNs are mainly identified using techniques such as independent component analysis (ICA), which separates the fMRI data into independent components that correspond to different functional networks <cit.>, <cit.>. Research shows alterations in ICNs in various neuropsychiatric disorders, emphasizing their importance for understanding the neural mechanisms underlying those conditions <cit.>. ICNs provide a powerful tool for investigating the functional organization of the brain and its implications for cognition and behavior <cit.>, which are altered processes in patients with schizophrenia and bipolar disorder. §.§.§ Preprocessing To compensate for the varying ICN time course lengths provided by the IEEE SPC organizers, we padded shorter components with zeros to match the maximum ICN length of the dataset. We used a filter bank to obtain signals in three frequency sub-bands: low, mid, and high. The low band contains frequencies between 0.01 - 0.3 Hz, the mid band contains frequencies between 0.3 - 0.7 Hz, and the high band frequencies are between 0.7 - 0.99 Hz. The filter bank comprises three bandpass Butterworth IIR filters of order 6. In Fig. <ref> and Fig. <ref> we show a sample ICN component of an individual with bipolar disorder (BP) and an individual with schizophrenia (SZ) in the filtered frequency sub-bands. §.§.§ Spectrogram For obtaining the time-frequency representation of the ICN time courses, we applied the Short Time Fourier Transformation algorithm (STFT) using a sliding Tukey window of length 22. We then calculate the power spectrogram. By stacking the spectrograms of each ICN, we create a volumetric representation for the individual with dimensions 12 × 11 × 105 where the x-axis represents the frequency components, the y-axis the time components, and the z-axis the number of ICNs as shown in Fig. <ref>. §.§.§ Scalogram For obtaining a 2D representation of the ICNs with a better time-frequency resolution, we applied the Continuous Wavelet transformation (CWT). CWT is a formal tool that provides an overcomplete representation of a signal by “daughter” wavelets which are scaled and translated copies of the finite-length oscillating waveforms known as the “mother wavelet”. The wavelet analysis provides not only accurate frequency information but at the same time it provides information for accurate time localization of the frequency components. This property makes the wavelet transformation highly applicable for the analysis of signals which are characterized by the occurrence of transient events. We generated scalograms of the ICN components using the Morlet wavelet and 50 scales. We stacked the scalograms to create 3D information with dimensions 49×234×105 where the x-axis represents the scales, the y-axis represents the time courses and the z-axis represents the number of ICNs as shown in Fig <ref>. 
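The preprocessing chain described above (zero-padding, three-band Butterworth filter bank, length-22 Tukey-window STFT spectrograms, and Morlet scalograms stacked over the 105 ICNs) can be sketched as follows. This is a minimal illustration, not the authors' code: the ICN sampling rate is not stated, so band edges are treated as normalized frequencies (fractions of Nyquist); the Tukey shape parameter 0.25 and the use of PyWavelets for the CWT are likewise assumptions.

import numpy as np
from scipy import signal
import pywt   # PyWavelets; an assumed tool choice for the Morlet CWT

def bandpass_bank(icn, bands=((0.01, 0.3), (0.3, 0.7), (0.7, 0.99)), order=6):
    # Band edges as fractions of Nyquist (assumption, since the sampling rate is not given)
    out = []
    for lo, hi in bands:
        sos = signal.butter(order, [lo, hi], btype="bandpass", output="sos")
        out.append(signal.sosfiltfilt(sos, icn))
    return out   # [low, mid, high] sub-band versions of one ICN time course

def spectrogram_stack(icns, nperseg=22):
    # STFT with a length-22 Tukey window, then power spectrogram, stacked over ICNs -> (freq, time, 105)
    stack = []
    for icn in icns:
        _, _, z = signal.stft(icn, window=("tukey", 0.25), nperseg=nperseg)
        stack.append(np.abs(z) ** 2)
    return np.stack(stack, axis=-1)

def scalogram_stack(icns, n_scales=50):
    # Morlet continuous wavelet transform over n_scales scales, stacked over ICNs -> (scales, time, 105)
    scales = np.arange(1, n_scales + 1)
    stack = [np.abs(pywt.cwt(icn, scales, "morl")[0]) for icn in icns]
    return np.stack(stack, axis=-1)

# Usage sketch: zero-pad every ICN time course to the longest length in the dataset, e.g.
# icn_padded = np.pad(icn, (0, T_max - icn.size)), then pass the (105, T_max) array to the helpers above.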
§.§.§ Classification For the classification of raw ICNs, and filtered ICNs in different bandwidths, we used 1D Convolutional Neural Networks (CNNs). The architecture of the model is shown in Fig <ref>. The outputs of each convolution layer are passed through the ReLU activation function <cit.>. For the classification of the 3D stack of scalograms and spectrograms, we used the 3D CNN model shown in Fig. <ref>. We used a simpler architecture, because of memory and computational constraints. We trained both models using Adam <cit.> in 100 epochs with a batch size of 32. To prevent overfitting, we used Early Stopping with patience of 20 Epochs based on the binary cross entropy loss function <cit.>. To evaluate the performance of our models we used the AUC score <cit.> on 20% of the training dataset. §.§ Functional Connectivity Network Functional connectivity networks refer to patterns of synchronous activity between different brain regions that are thought to underlie specific cognitive functions <cit.>. These networks are typically identified using functional magnetic resonance imaging (fMRI) and other neuroimaging techniques that can measure the correlation between the activity of different brain regions. Resting-state networks are commonly studied in the absence of any explicit task or stimulus. The study of functional connectivity networks has important implications for understanding the neural basis of complex cognitive processes and for identifying biomarkers of neurological and psychiatric disorders <cit.>. The IEEE SPC Dataset FCNs are generated by calculating Pearson correlation coefficients of the ICN time courses which results in a symmetrical matrix as shown in Fig. <ref>. The lower triangular matrix is flattened and provided in the IEEE SPC dataset in the form of a vector with dimensions 1 × 5460. §.§.§ Feature selection We normalized the vector values in the range of 0 to 1 and applied the chi-square test <cit.>, for feature selection. We selected the 20 best features that explain most of the variability in the dataset. §.§.§ Classification We applied different machine-learning algorithms for binary classification of the raw ICN features in the train set, including: Logistic Regression (LR), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), Gaussian Naive Bayes (GNB), K-Nearest Neighbours (KNN). For hyperparameters tuning, we did a Grid Search with 5-fold cross-validation <cit.>. We applied the same methodology for training the machine learning algorithms using only the selected features. §.§ Evaluation For the evaluation of the models, the AUC score was used in the IEEE SPC. We used the best models to predict the soft probability score of the labels of the test set provided in the competition dataset. The soft probability score indicates the confidence the model has in its prediction and it is used to calculate the AUC score. For the competition, the AUC was averaged for the public test set, i.e. the Public AUC score, and the withheld test set, i.e. the private AUC Score. § RESULTS The AUC scores from our CNN models for the filtered raw ICNs, and stacked ICN spectrograms and scalograms are shown in Table <ref>. We can see that the best score is obtained by 1D CNN model trained on raw ICN timecourses. The filtered signal in the range of 0.3 - 0.7 Hz achieved a better AUC score compared to the other bandwidths. Classification of scalograms produced better AUC score, compared to the classification of spectrograms. 
However, the extracted features from the ICN components achieved slightly lower AUC scores overall. The AUC scores from the classical machine learning models for all of the FCN features and for the selected FCN features are shown in Tables <ref> and <ref>. We can see that the best score is obtained by the Linear Discriminant Analysis algorithm. Classification with all features resulted in slightly better overall performance than using only the top 20 features; however, the smaller subset of the most relevant features yielded comparable outcomes. § CONCLUSION The differentiation between schizophrenia and bipolar disorder is an important task for accurate diagnosis and proper treatment of patients. We proposed a number of systems based on rsfMRI features for achieving this objective. The best-performing model, a 1D CNN trained on raw ICN time courses, achieved the highest AUC score. The FCNs showed potential, but their classification achieved slightly lower AUC scores than the raw ICN time courses. Overall, the results are promising, but further work is needed to improve the differentiation between schizophrenia and bipolar disorder. Future work will include combining the ICN and FCN features and applying more advanced classification algorithms such as NN, LSTM, GRU, RNN, and transformers.
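To make the FCN branch of the Methods reproducible in outline, here is a minimal scikit-learn sketch of the pipeline described above: Pearson-correlation FNC features (105 ICNs give 105·104/2 = 5460 lower-triangle entries), min-max scaling to [0, 1], chi-square selection of the 20 best features, and an LDA classifier tuned by 5-fold grid search and scored with AUC. The hyperparameter grid, the 80/20 hold-out and all function names are illustrative assumptions; the paper does not specify them.

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline

def fnc_features(icn_timecourses):
    # Pearson-correlation FNC of the (105, T) ICN array, flattened to the 5460 lower-triangle entries
    corr = np.corrcoef(icn_timecourses)
    return corr[np.tril_indices_from(corr, k=-1)]

# X: (n_subjects, 5460) FNC vectors, y: 0/1 diagnostic labels (schizophrenia vs. bipolar disorder)
def fit_fnc_classifier(X, y, k_best=20):
    pipe = Pipeline([
        ("scale", MinMaxScaler()),                 # map features to [0, 1] so chi2 is applicable
        ("select", SelectKBest(chi2, k=k_best)),
        ("clf", LinearDiscriminantAnalysis()),
    ])
    grid = {"clf__solver": ["svd", "lsqr"]}        # illustrative grid; the paper's exact grid is not given
    search = GridSearchCV(pipe, grid, cv=5, scoring="roc_auc")
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    search.fit(X_tr, y_tr)
    auc = roc_auc_score(y_val, search.predict_proba(X_val)[:, 1])   # soft probability scores, as in the paper
    return search.best_estimator_, auc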
http://arxiv.org/abs/2307.01890v1
20230704192959
On the Internal Structure of Relativistic Jets with Zero Velocity Along the Axis
[ "V. S. Beskin", "F. A. Kniazev", "K. Chatterjee" ]
astro-ph.HE
[ "astro-ph.HE" ]
On the Internal Structure of Relativistic Jets with Zero Velocity Along the Axis V. S. Beskin, F. A. Kniazev, K. Chatterjee ==================================================================================== The present work is devoted to the analysis of the internal structure of relativistic jets under the condition that the velocity of the plasma flow at the jet axis vanishes. It is shown that in spite of the seemingly fundamental difference in the formulation of the problem at the axis, the key properties of the internal structure of such relativistic jets remain the same as for nonzero velocity along the axis. In both cases, at a sufficiently low ambient pressure, a dense core appears near the axis, the radius of which is close to the size of the light cylinder. galaxies: active, galaxies: jets § INTRODUCTION The significant progress of radio interferometry at long baselines makes it possible to directly explore the internal structure of relativistic jets from active galactic nuclei <cit.>, which are visible manifestations of their activity at an early stage of evolution <cit.>. Such detailed observational studies allow us to test the numerous predictions of the theory of strongly magnetised outflows that have been developed since the 1970s <cit.>. The main conclusions of these theoretical papers, discussed in several reviews and monographs <cit.>, were later confirmed by numerical simulations of jets from accreting black holes <cit.>. One of these theoretical predictions repeatedly confirmed by numerical simulations is the existence of a universal asymptotic behaviour for the Lorentz factor of an outflow γ = ϖ/R_ L, where ϖ is the distance from the rotation axis, and R_ L = c/Ω is the radius of the light cylinder (Ω is the angular velocity of the central engine). As another example, one can mention the presence of a central dense cylindrical core with the radius r_ core = u_ in R_ L, where u_ in is the four-velocity of the flow along the rotation axis. This result was first obtained analytically <cit.> and later confirmed numerically <cit.>. As shown in Figure <ref>, this core is formed over long enough distances z > z_ cr from the central engine when the transverse dimension of the jet r_ jet becomes larger than r_ cr = (u_ in σ_ M)^1/2 R_ L. Accordingly, the poloidal magnetic field at this distance B_ cr = B_ p(z_ cr) becomes equal to B_ cr = B_ L/(σ_ M u_ in). Here σ_ M is the Michel magnetisation parameter, and B_ L is the magnetic field on the light cylinder near the origin (see formal definitions below). It is necessary to emphasise that relation (<ref>) was also verified for non-relativistic flows, i.e. for u_ in ≪ c <cit.>. We emphasise that, as was already known in the late 1990s, the internal structure of relativistic jets is very sensitive to the behaviour of the Grad-Shafranov (GS) solution near the axis <cit.>. The difficulty of solving the GS equations in this region proved to be the stumbling block that did not allow us to link together the various asymptotic solutions obtained. Only after the work by <cit.> did it become clear that the central core exists only for sufficiently low ambient medium pressure P_ ext < P_ cr (i.e., at sufficiently large distances from the central engine), where P_ cr = B_ cr^2/8π. For larger ambient pressures P_ ext > P_ cr (i.e. at small distances z < z_ cr), the poloidal magnetic field remains practically constant within the whole jet.
As a result, depending on the ambient pressure P_ ext < P_ cr, the poloidal magnetic field B_ p outside the central core has the form B_ p∝ϖ^-α with 0 < α < 1. At the same time, however, the magnetic field in the core itself does not differ significantly from the value B_ cr. In this case, the jet remains magnetically dominated till the distance from the origin when the external pressure drops to P_ ext≈ P_ eq = B_ eq^2/8π, where B_ eq = σ_ M^-2B_ L. At lower ambient pressures, the flow becomes particle-dominated. Here, however, one important remark should be made. This model explicitly assumed that the flow velocity along the jet axis itself does not vanish. In fact, relation (<ref>) that, in the non-relativistic regime (i.e. u_ in→ 0), the core radius as well. Thus, in the non-relativistic regime, r_ core→ 0 when u_ in tends to zero. Thus, the very existence of the central core is called into question. Whether this result remains valid if the flow velocity vanishes on the jet axis has not been considered in detail up to now. It must be said that the very assumption that the velocity on the jet axis is not equal to zero still had some grounds. It is based on the model of plasma generation in the vacuum region near the black hole surface <cit.>, which is equivalent to the so-called “outer gap” in the magnetosphere of radio pulsars. In this case, the value u_ in arises as a natural boundary condition for the Grad-Shafranov equation <cit.>, which ultimately leads to the existence of a central core. On the other hand, there is also support for outflow models with zero velocity along the jet axis. For example, this occurs when the only mechanism of plasma acceleration is via electromagnetic forces (the Poynting vector flux on the jet axis is equal to zero). This point of view can also be supported by the pioneering work of <cit.>, who introduced the notion of a stagnation point, i.e. the region of the base of the flow where the velocity is zero. It was further shown that the hydrodynamical motion in a strongly magnetised flow is completely determined by the electric drift; the motion along the magnetic field lines can be neglected <cit.>. Despite the fact that this result concerns only the asymptotically far region ϖ≫ R_ L, it began to be used inside the light cylinder as well (see e.g. ). Finally, the zero velocity along the jet axis was reproduced in recent numerical simulation <cit.>. As was already emphasised, since the results of (<ref>)–(<ref>) discussed above were obtained under the assumption of a non-zero velocity along the jet axis, it is important to discuss the question of whether such an internal structure of relativistic jets is preserved under the assumption u(0) = 0. The present work is devoted precisely to this issue. As will be shown, the seemingly fundamental difference in the formulation of the problem does not change the key properties of the internal structure of relativistic jets. Moreover, relations r_ core≈ R_ L and B_ cr≈σ_ M^-1 B_ L also remain valid. The paper is organised as follows. In Section <ref>, we formulate the basic equation describing cylindrical cold magnetised flow. Section <ref> is devoted to the analysis of singular points. In the problem considered here, this is the rotation axis, as well as the Alfvénic surface near the light cylinder. Finally, in Section <ref>, we formulate the main results of our consideration. 
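To make the finite-velocity scalings quoted in the Introduction concrete, and to see why the u_in → 0 limit is problematic, here is a minimal numerical sketch. The parameter values and function names are arbitrary illustrations, not taken from the paper.

import numpy as np

def core_scalings(u_in, sigma_M, R_L=1.0, B_L=1.0):
    # Standard finite-velocity estimates (in units of R_L and B_L):
    #   r_core = u_in * R_L,  r_cr = sqrt(u_in * sigma_M) * R_L,  B_cr = B_L / (sigma_M * u_in)
    return u_in * R_L, np.sqrt(u_in * sigma_M) * R_L, B_L / (sigma_M * u_in)

sigma_M = 30.0                        # illustrative magnetisation, matching the value used in the Discussion
for u_in in (1.0, 0.3, 0.1, 0.01):    # arbitrary trial values of the axial four-velocity
    r_core, r_cr, B_cr = core_scalings(u_in, sigma_M)
    print(f"u_in={u_in:5.2f}:  r_core={r_core:5.2f} R_L   r_cr={r_cr:5.2f} R_L   B_cr={B_cr:6.3f} B_L")
# As u_in -> 0 the estimated r_core -> 0 and B_cr diverges, which is why the u_in = 0 case
# requires the separate treatment developed in this paper.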
§ BASIC EQUATIONS Below we use the language developed by <cit.>: all 3D vectors correspond to physical quantities measured by Zero Angular Momentum Observers (which in our case, i.e. far from the central black hole, coincides with the usual cylindrical reference frame). Further, it should be immediately noted that our task is not devoted to the construction of a global solution. It is dedicated to the area far beyond the plasma generation region. Therefore, the region of plasma generation participates in our analysis indirectly through the integrals of motion, which we will try to choose in the most reasonable way. Besides, as was shown by <cit.>, one can consider strongly collimated jet as a sequence of cylindrical flows. This makes it possible to explore their internal structure by analyzing not the second-order Grad-Shafranov equation, but two first-order ordinary differential equations for magnetic flux Ψ(ϖ) and poloidal Alfvénic Mach number M(ϖ) <cit.> M^2 = 4πμη^2/n. Here, n is the number density in the comoving reference frame and μ is relativistic enthalpy. Accordingly, η is the particle-to-magnetic flux ratio determined from relation n u_ p = ηB_ p, which is constant along magnetic field lines: η = η(Ψ). Finally, by definition, in the cylindrical geometry B_z = 1/2 πϖ dΨ/ dϖ, B_φ = -2I/cϖ. Here I is the total electric current within the magnetic tube Ψ = const. The first equation is the relativistic Bernoulli equation u_ p^2 = γ^2 - u_φ^2 - 1, where u_ p and u_φ are the poloidal and toroidal components of the 4-velocity u respectively. It can be rewritten in the form <cit.> M^4/64π^4 ϖ^2( dΨ/ dϖ)^2 = K/ϖ^2 A^2 - μ^2η^2. Here A = 1 - Ω_ F^2 ϖ^2/c^2- M^2 is the Alfvénic factor where the so-called field angular velocity Ω_ F = Ω_ F(Ψ) is constant on the magnetic surfaces (Ω_ F = Ω near ”the central engine”), K= ϖ^2(e')^2(A- M^2)+ M^4 ϖ^2E^2- M^4L^2c^2, and by definition, e^'(Ψ) = E(Ψ) - Ω_ F(Ψ)L(Ψ). Remember that Bernoulli integral E = E(Ψ) and the angular momentum flux L = L(Ψ) E(Ψ) = γμη c^2 + Ω_ FI/2π, L(Ψ) = ϖ u_φμη c + I/2π, together with the angular velocity Ω_ F(Ψ) are also integrals of motion. In this case, the current I, the Lorentz factor γ, and the toroidal four-velocity u_φ are expressed as follows I/2π = L-Ω_ Fϖ^2E/c^2/1-Ω_ F^2ϖ^2/c^2- M^2, γ = 1/μη (E-Ω_ FL) - E M^2/1-Ω_ F^2r^2/c^2- M^2, u_φ = 1/μη c ϖ (E-Ω_ FL) Ω_ Fϖ^2/c^2-L M^2/1-Ω_ F^2r^2/c^2- M^2. The second equation determines the Mach number M for a cold flow (the sound speed c_ s = 0 and the relativistic enthalpy μ = m_ pc^2 = const) and is given by <cit.>: [(e')^2/μ^2η^2c^4-1+Ω_ F^2 ϖ^2/c^2] d M^2/ dϖ = M^6L^2/A ϖ^3 μ^2η^2c^2 +Ω_ F^2 ϖ M^2/c^2[2 - (e')^2/Aμ^2η^2c^4] + M^2 e'/μ^2η^2c^4 dΨ/ dϖ de'/ dΨ + M^2 ϖ^2/2c^2 dΨ/ dϖ dΩ_ F^2/ dΨ - M^2 (1-Ω_ F^2 ϖ^2/c^2) dΨ/ dϖ1/η dη/ dΨ. Let us now define the integrals of motion in a convenient form. In contrast to the basic assumption on the finite velocity along the jet axis discussed earlier, we must now, following (<ref>), set η(Ψ) → 0 as Ψ→ 0. At the same time, thanks to the definitions (<ref>)–(<ref>), it is convenient to express the invariant e^'(Ψ) (<ref>) in terms of the flux ratio η(Ψ) and an additional function ε(Ψ) (e^')^2 = μ^2η^2(Ψ)c^4 - μ^2η^2(Ψ)c^4ε(Ψ). As can be seen from relations (<ref>)–(<ref>), the value of ε vanishes for zero flow velocity. Therefore, it turns out to be convenient in the analysis of the problem under consideration. In particular, the function ε(Ψ) cannot have an arbitrary form. We clarify this issue a little later. 
Besides, following <cit.>, we set L(Ψ) = Ω_0Ψ/4 π^2√(1 - Ψ/Ψ_ tot), Ω_ F(Ψ) = Ω_0√(1 - Ψ/Ψ_ tot). Such definitions ensure the closure of the longitudinal electric current within the jet. Further, thanks to (<ref>) and (<ref>), we have E(Ψ) = Ω_ F(Ψ)L(Ψ) + μη(Ψ)c^2[1 - ε(Ψ)]^1/2. Finally, due to our main assumption η(0) = 0, the fourth integral η(Ψ), in the limit Ψ→ 0, can be written as η(Ψ) = η_0(Ψ/Ψ_ tot)^β, where β > 0. Below, for simplicity, we assume that the relation (<ref>) is valid for any value of Ψ. Introducing the dimensionless variables x = Ω_0ϖ/c, y = Ψ/Ψ_ tot, one can rewrite Eqns. (<ref>) and (<ref>) as dy/ dx = η(y)x/σ_ M |A| M^2[ f(x,y)[1 - ω^2(y)x^2 - 2 M^2]///. . - M^4 ε(y) + 4 σ_ M M^4ω(y) l(y)/η(y) [1 - ε(y)]^1/2. . + 4σ_ M^2 M^4 [ω^2(y)x^2 - 1] l^2(y)/x^2η^2(y)]^1/2, f(x,y)/ M^2 d M^2/ dx = 4 σ_ M^2 M^4/A x^3 l^2(y)/η^2(y) + xω^2(y) - xω^2(y)/A[f(x,y) + M^2] - 1/2 dε(y)/ dy dy/ dx + 1/2 x^2 dω^2(y)/ dy dy/ dx + 1/2 f(x,y)/η^2(y) dη^2(y)/ dy dy/ dx. Here σ_ M = Ω_0^2Ψ_ tot/8 π^2 μη_0c^2 is the Michel magnetisation parameter already mentioned above, η(Ψ) = η_0η(y), and now A = 1 - ω^2(y) x^2 - M^2. Further, we introduce new important function f(x,y) = ω^2(y)x^2 - ε(y). Finally, despite the fact that according to (<ref>), (<ref>) and (<ref>), we have l(y) = y(1 - y)^1/2, ω(y) = (1 - y)^1/2, and η(y) = y^β, we have kept their literal expressions in Eqns. (<ref>) and (<ref>). § SINGULAR POINTS §.§ Rotation axis Before integrating Eqns. (<ref>)–(<ref>), let us discuss their behaviour for x → 0. This helps us with numerical integration as well. Below we assume that poloidal magnetic field B_z and the number density n are finite at the rotation axis. Then, due to definition (<ref>), M^2 → 0 if η→ 0. Storing now only the leading terms (and grouping the similar ones), we obtain . dy/ dx = η(y)x/σ_ M M^2 f^1/2, f^3/2η(y)/ M^2 d/ dx[ M^2/f^1/2η(y)] + (f + M^2)x = 4 σ_ M^2 M^4y^2/x^3η^2(y). As one can see, the function f(x,y) plays the primary role in determining the behaviour of the solution near the rotation axis, and thus, the function ε(y) should be introduced. In order to understand the functional form of ε(y) for our problem statement, let us suppose that the magnetic field is regular at x → 0. In this case, it is convenient to introduce the dimensionless magnetic field b = B_z/B_ L, where B_ L is the magnetic field on the light cylinder near the origin and can be determined from the condition Ψ_ tot = π R_ L^2 B_ L. It gives b(x) = 1/2x dy/ dx. In particular, denoting b_0=b(0), we get for x → 0 y(x) ≈ b_0x^2. It is clear that in what follows we will be interested in the case b_0≪ 1, because the light cylinder must contain only a small part of the total magnetic flux as the size of the jet is much larger than the light cylinder. Further, according to (<ref>)–(<ref>), we have for v ≪ c, e^'(Ψ)/μη(Ψ)c^2 = γ - Ω_ Fϖ/cu_φ = 1 + 1/2 v_ p^2/c^2 + 1/2 v_φ^2/c^2 - Ω_ Fϖ/c v_φ/c. Comparing this expression with the definition (<ref>), we obtain ε(y) = 2Ω_ Fϖ/c v_φ/c - v_ p^2/c^2 - v_φ^2/c^2, and thus, according to (<ref>), we have f(x, y) = (v_φ - Ω_ Fϖ)^2/c^2 + v_ p^2/c^2. However, as is well-known (see, e.g., ), relation (<ref>) gives v_φ→Ω_ Fϖ for ϖ→ 0. Thus, f(x,y) → v_ p^2/c^2 as x → 0. Using now definitions (<ref>) and (<ref>), we return to relation (<ref>). This result is certainly an important confirmation of the consistency of our approach. Moreover, it allows us to use relation (<ref>) as a definition of ε(y) for y → 0. 
Together with (<ref>), it gives ε(y) = y/b_0 - 4 η^2(y) b_0^2 M_0^4 σ_ M^2. Here we introduce one more parameter M_0^2 = 4 πμη_0^2/n_0, specifying the particle number density on the rotation axis n_0 = n(0). Relation (<ref>) immediately allows us to make two important conclusions. Indeed, since ε(y) is only a function of y, it cannot depend on such parameters as the magnetic field b_0 and the number density n_0 on any particular slice. This becomes possible only if the conditions η(y) = y^1/2 and 1/b_0 - 4 b_0^2 M_0^4 σ_ M^2 = C, where C = const., are met. The first of them fixes the behaviour of the function η(y) for y → 0. As was already stressed, in what follows we assume that condition (<ref>) is valid for all values of y. As for relation (<ref>), we must now consider it as a connection between the magnetic field b_0 and the number density n_0 on the jet axis. Further, for estimates we can set C = 0, so that 2 b_0 M_0^2 σ_ M≈ b_0^-1/2≫ 1. Returning now to Eqns. (<ref>)–(<ref>) in the limit x → 0, let us rewrite them in the form β̃(x) = 1/2 b_0σ_ M η(y)f^1/2/ M^2(x), -1/β̃(x) dβ̃(x)/ dx + (1 + ζ/β̃^2(x)) x = x/β̃^2(x). Here ζ = 1/4 b_0^2 M_0^2 σ_ M^2 = 4 πμ n_0/B_0^2≪ 1, and β̃(x) = b(x)/b_0 so that β(0) = 1. As one can see, Eqn. (<ref>) is regular at x → 0. It describes the change of the magnetic field. Actually, it depends on only one parameter ζ (<ref>), which is small due to condition b_0≪ 1. This confirms our assumption that the magnetic field remains finite at x → 0. As for Eqn. (<ref>), it can be now used to determine M^2(x) in the limit x → 0. It finally gives M^2(x) ≈ b_0 M_0^2 x^2. In Figure <ref>, we show the change in y(x)/(b_0x) ≈ x and M(x) ∝ x for small x, obtained as an exact solution of Eqns. (<ref>)–(<ref>) using boundary conditions y(x_0) = b_0x_0^2 and M^2(x_0) = b_0 M_0^2 x_0^2 for x_0 = 0.01. As one can see, the exact solution is in full agreement with the analytical estimates (<ref>) and (<ref>). §.§ Alfvénic surface Before proceeding to a discussion of the general structure of a poloidal magnetic field outside the light cylinder, it is necessary to discuss the critical conditions on the Alfvénic surface A = 0. As for the fast magnetosonic surface, there is no singularity on it in the cylindrical geometry considered here <cit.>. This well-known effect is similar to the shift of the singularity into the modified fast magnetosonic surface in the self-similar <cit.> solution. For cylindrical geometry, this singularity shifts to infinity. As for the critical condition on the Alfvénic surface, it is more convenient to find it from the numerator of relation (<ref>): e^'(Ψ_ A) = E(Ψ_ A) M^2(r_ A). Here all the quantities are to be taken at the Alfvénic point, so that Ψ_ A = Ψ(r_ A). It is easy to check that, in this case, the regularity conditions in relations (<ref>) and (<ref>), as well as in our basic equations (<ref>)–(<ref>), are automatically fulfilled. Note now that for the strongly magnetised flow (M^2 ≪ 1) under discussion, the Alfvénic surface is located near the light cylinder: r_ A≈ R_ L (i.e. x_ A≈ 1). Using the dimensionless variables (<ref>)–(<ref>) introduced above, one can rewrite the critical condition (<ref>) as 2 σ_ M M_0^2 y η(y) = 1. Taking into account relations (<ref>) and (<ref>) as well as under condition x_ A≈ 1, we finally obtain 4 σ_ M^2 M_0^4 ≈ b_0^-3. As we see, condition (<ref>) is in accordance with relation (<ref>) for C≈ 0. 
Therefore, we will not dwell on the problem of passing the critical surface in detail and will immediately proceed to the analysis of the solution for r > R_ L (or x > 1). § DISCUSSION AND CONCLUSION In Figure <ref>, we show solutions of general equations (<ref>)–(<ref>) for dimensionless magnetic field b(x). We carry out the integration from the region of a singular point with boundary conditions corresponding to the asymptotic solutions (<ref>) and (<ref>) for x = 1. For this reason, the main control parameter is the magnetic field b_0 on the jet axis. The jet size r_ jet is determined from the condition Ψ(r_ jet) = Ψ_ tot. As one can see, despite the fact that the velocity at the axis vanishes, in general, there is complete qualitative agreement with the results obtained under the assumption of a finite flow velocity near the axis (see, e.g., ). The poloidal magnetic field B_z remains practically constant within the light cylinder. As for the structure of the magnetic field outside the light cylinder, it depends on the magnetic field b_0 on the jet axis. For sufficiently large values of b_0, longitudinal magnetic field remains essentially uniform (B_ z≈ const). But for small values of b_0, a central core begins to form near the jet axis, the size of which, however, does not tend to zero, as might be expected according to (<ref>). In all cases, its size remains on the order of the radius of the light cylinder: r_ core≈ R_ L, Additionally, there is a quantitative agreement if the expression (<ref>) is corrected to B_ cr≈B_ L/σ_ M. For σ_ M = 30 shown in Figure <ref>, expression (<ref>) results in b_0 = 0.03 for the critical magnetic field. As one can see, this is exactly what takes place. Finally, as shown in Figure <ref>, the universal asymptotic behavior γ≈ x is also reproduced with good accuracy outside the light cylinder. On the other hand, we found one significant difference between the commonly considered case η(y) ≈ const and the case η(y) = y^1/2 considered in this paper. As shown in Figure <ref>, particle number density n_ lab = n γ in the laboratory reference frame remains almost constant outside the central core. This difference, however, can easily be explained. Indeed, according to definition (<ref>), the number density in the comoving reference frame can be written as n = 4 πμη^2/ M^2. Further, far from the light cylinder (Ω_ F^2ϖ^2/c^2 ≫ 1), but in the region of a strongly magnetized flow (M^2 ≪Ω_ F^2ϖ^2/c^2), Lorentz factor γ according to (<ref>) has the form γ≈ M^2 E c^2/μηΩ_ F^2ϖ^2. Using now relations (<ref>)–(<ref>) to determine Bernoully integral E, we finally obtain n_ lab≈ηΨ/πϖ^2. As a result, at a constant η and in the region of existence of the central core, when magnetic flux Ψ grows slowly than ϖ^2, the number density n_ lab is to decrease with increasing distance ϖ from the axis. On the other hand, in the case η = y^1/2, depending on the behavior of the solution Ψ = Ψ(x), both an increase and a decrease in the number density n_ lab with distance x from the axis are possible. Here, however, it should be noted that such behavior takes place only if the relation η = y^1/2 remains valid up to the jet boundary. If this dependence takes place only at x → 0, and at x ∼ 1 we have η≈ const, then the number density n_ lab is to decrease with the distance from the axis. Moreover, our analytical results are in excellent agreement with the above-mentioned results of numerical simulations of <cit.>. 
First, Figure <ref> shows that jets in numerical simulations exhibit the dependence η(y) ∝ y^1/2 (<ref>), surprisingly matching the relation (<ref>). Here, different curves correspond to different distances from “the central engine”, confirming that η(y) is indeed an integral of motion. We emphasise that the value of the integral η(Ψ), like all other integrals of motion, was not set initially, as is done in analytical calculations, but emerged self-consistently as a result of evolving a time-dependent numerical simulation. Second, as shown in Figure <ref>, the dependence of the dimensionless poloidal magnetic field b(x) on the dimensionless distance to the axis x = ϖ/R_ L at different distances from the origin in the simulation also reproduces well the structure of the poloidal field shown in Figure <ref>. As far as the number density distribution is concerned, it is determined by the magnetic field strength in the form of the so-called density floors <cit.>, which does not allow us to determine it with sufficient accuracy. Therefore, we do not present here the results of numerical simulation concerning the quantity n_ lab. Thus, we can state with confidence that the appearance of a central core at sufficiently large distances from “the central engine” does not depend on the plasma flow velocity near the jet axis. In all cases, at a sufficiently low ambient pressure, a dense core appears near the axis, the radius of which is close to the size of the light cylinder. Outside the central core, both the poloidal magnetic field and the plasma number density decrease with a power-law behaviour. Finally, our results hold important implications for the jet structure and velocities at distances far from the black hole, relevant for interpreting observed jet morphologies and widths, as well as the transverse jet velocity stratification measured in AGN jets <cit.>. Indeed, the presence of a central core region and a low-velocity region at the jet axis was also seen in global semianalytical work <cit.>. As we show, once the central core forms at distances z>z_cr from the black hole and the poloidal magnetic field in the jet becomes of the order of B_cr, the jet becomes susceptible to magnetic pinch and kink instabilities. This result is verified in 2D and 3D numerical simulations <cit.>. Thus, we suggest that when a central core appears, the observed width of the jet will be determined precisely by the magnetically dominated inner jet region, and not by the geometric width of the jet. § DATA AVAILABILITY The data underlying this work will be shared on reasonable request to the corresponding author. § ACKNOWLEDGEMENTS We thank Anna Chashkina and Alexander Tchekhovskoy for useful discussions. This work was partially supported by the National Research Center Kurchatov Institute (Order No. 85 dated 03.20.23). KC is supported by the Black Hole Initiative at Harvard University, which is funded by grants from the Gordon and Betty Moore Foundation, John Templeton Foundation and the Black Hole PIRE program (NSF grant OISE-1743747).
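As a closing numerical cross-check of the relations quoted above (B_cr ≈ B_L/σ_M and the Alfvén-point condition 4σ_M²M_0⁴ ≈ b_0⁻³), the following short sketch reproduces the b_0 ≈ 0.03 value cited in the Discussion for σ_M = 30; the script and variable names are illustrative only and do not come from the paper.

import numpy as np

sigma_M = 30.0                              # magnetisation used for the figures discussed above
b0 = 1.0 / sigma_M                          # from B_cr ~ B_L / sigma_M; the text quotes b0 = 0.03 for sigma_M = 30
M0 = (b0**-3 / (4.0 * sigma_M**2))**0.25    # from the Alfven-point condition 4 sigma_M^2 M0^4 ~ b0^-3
print(f"b0 ~ {b0:.3f}")                     # ~0.033, i.e. the critical axial field in units of B_L
print(f"M0 ~ {M0:.2f}  (M0^2 ~ {M0**2:.2f})")
# r_core ~ R_L = c/Omega holds independently of these particular numbers, which is the paper's main point.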
http://arxiv.org/abs/2307.02976v1
20230706132213
Future developments in ground-based gamma-ray astronomy
[ "Ulisses Barres de Almeida", "Martin Tluczykont" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.HE" ]
Future developments in ground-based gamma-ray astronomy Ulisses Barres de Almeida Brazilian Center for Physics Research (CBPF), Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, Brazil. ulisses@cbpf.br Martin Tluczykont Institute of Experimental Physics, University of Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany. martin.tluczykont@physik.uni-hamburg.de * Ulisses Barres de Almeida corresponding author and Martin Tluczykont August 1, 2023 ======================================================================== Ground-based γ-ray astronomy is a powerful tool to study cosmic-ray physics, providing a diagnostic of the high-energy processes at work in the most extreme astrophysical accelerators of the universe. Ground-based γ-ray detectors apply a number of experimental techniques to measure the products of air showers induced by the primary γ-rays over a wide energy range, from about 30 GeV to few PeV. These are based either on the measurement of the atmospheric Cherenkov light induced by the air showers, or the direct detection of the shower's secondary particles at ground level. Thanks to the recent development of new and highly sensitive ground-based γ-ray detectors, important scientific results are emerging which motivate new experimental proposals, at various stages of implementation. In this chapter we will present the current expectations for future experiments in the field. Keywords Instrumentation and Methods for Astrophysics; Ground-based Gamma-ray Astronomy; Air Cherenkov Technique; Ground Particle Arrays § INTRODUCTION After several decades of incremental attempts, and following the rapid developments witnessed since the early years of this Century, the field of ground-based γ-ray astronomy has reached the status of 'real astronomy', consolidating a unique observational window into the sky which spans 5 decades in energy, from a few tens of GeV to the PeV, and several decades of typical flux difference between the low- and high-energy ends of the spectrum. The application of different types of techniques and technologies is required to probe this vast range of parameters, all based on the detection of the secondary products of the γ ray-initiated extensive air showers (EAS). The main instrumental constraints are the requirement for large detection areas and the capability to efficiently suppress the much stronger background of cosmic ray (CR) protons. The two experimental approaches available are the air-Cherenkov technique, which observes the optical Cherenkov photons generated by shower particles traversing the atmosphere, and the particle detector arrays, which directly detect (sample) the secondary particles of the shower front at ground, and traditionally operate at higher energies than the air-Cherenkov instruments. The maturity achieved in the field results from the fact that the principal observatories of both classes have assembled a set of instrumental characteristics which proved critical in fulfilling the potentialities of ground-based observations. This means, in general, large array areas with sizes bigger than the shower footprint, and with a dense instrumental coverage, as well as powerful CR suppression factors, approaching the range of 10^-5, meaning 1 residual CR in 10,000, at least in the most performing ranges of operation of each technique. These achievements are the reason we can now speak of real astronomy at very- and ultra-high energies (VHE, 0.3-300 TeV, and UHE, above 300 TeV, respectively). 
They have enabled instruments to produce skymaps with resolutions superior to ∼ 0.1^∘, to construct well-sampled source spectra spanning various decades in energy, up to several 100s of TeV, as well as explore flux variability with light-curves on sub-hour timescales, and as short as minutes for the brightest transients. Today, ground-based γ-ray astronomy is achieving the combined mark of nearly 300 detected sources. The different approaches between air-Cherenkov instruments, in particular imaging air-Cherenkov telescopes (IACTs), and ground particle arrays, highlight complementarities that go beyond the typical observational energy ranges, and stress the importance of operating both types of instruments contemporaneously and in synergy. In this sense, to secure an adequate global latitude-longitude coverage with both experimental approaches is a main goal of the field for the near future. From the point of view of ground particle arrays, this means the installation of the first-ever instrument of its kind in the Southern Hemisphere, as is being targeted by a number of proposals from various groups. Improving background rejection, especially at the extremes of the energy range, is another fundamental challenge. In the (sub-)PeV domain, LHAASO recently demonstrated that near-background free operations (i.e., CR suppression factors close to ∼ 10^-5) are required to effectively probe PeVatron accelerators <cit.>. To achieve that in more cost-effective ways will depend on novel solutions for a km^2-scale array with large muon detection area, and is a major challenge for future Southern Hemisphere particle arrays. Solutions using hybrid detector setups based on a combination of air-Cherenkov imaging and timing, and particle detection, such as realized by TAIGA, are also investigating a novel, cost-effective way to probe the VHE to UHE energy domain. These CR suppression factors at the highest energies are also pursued by IACTs and will, for example, benefit from the improved imaging and gamma-ray PSF of the Cherenkov Telescope Array (CTA), as well as the advanced analysis possibilities that are opened by such improvements. The frontier towards the lowest energies – that is, to reach below several tens of GeV for IACTs, and to approach the 100 GeV threshold for ground particle arrays – is another primary goal targeted by future proposals. The science motivation is focused on transient and extra-galactic sources, especially in the context of multi-messenger astrophysics and the follow-up of gravitational wave and neutrino event counterparts. For wide-field particle arrays, this will require building high fill-factor (> 50%) instruments at even higher altitudes, approaching the 5 km a.s.l., and would enable to build effective VHE gamma-ray monitoring and trigger instruments that bridge the gap with satellite-based facilities. For IACTs, various technical developments ongoing in the context of CTA suggest that the O(10 GeV) threshold is within reach. To achieve significant improvements in the angular resolution of ground-based gamma-ray observatories is an additional and essential goal, that would enhance science synergies not only between the VHE and UHE domains, but also with multi-band astronomical instrumentation. High fill-factor ground particle arrays with improved timing resolution are required for that, as well as large stereoscopic IACT arrays with highly-pixelated cameras and improved optics, as proposed for CTA. 
Finally, a key issue for the technological developments on all fronts is to keep production, deployment and maintenance costs as low as possible, so as to render denser, larger-scale arrays viable. In this chapter we will present current expectations for future experiments in the field. For major endeavours that have started operations recently, or are at an advanced stage of development, such as LHAASO <cit.> and CTA <cit.>, see chapters "Current Particle Detector Arrays in Gamma-ray Astronomy" and "The Cherenkov Telescope Array (CTA)", respectively. Likewise, ongoing upgrades of current experiments with γ-ray observation capabilities, such as the scintillator arrays CARPET-3 <cit.>, at the site of the Baksan Neutrino Observatory, in Russia, and GRAPES-3 <cit.>, located in Ooty, India, will not be discussed here, and the reader is again referred to the chapter "Current Particle Detector Arrays in Gamma-ray Astronomy" for more details on these techniques. For completeness, these experiments are nevertheless presented in Table <ref> and Figure <ref>, which summarise the instrumental characteristics and geographic distribution of selected current and planned facilities. After a brief overview of the experimental techniques, aimed at building a common background for the ensuing discussions, we will present in detail the efforts towards the Siberian cosmic- and γ-ray detector TAIGA, followed by the proposals for the installation of a ground-based particle array detector in the Southern Hemisphere, and finally the proposals for future IACTs beyond CTA. The criterion driving our choice of which experimental proposals to present was their appearance in the 2019 and 2021 editions of the International Cosmic Ray Conference (ICRC). The text reflects the status of the field as of the end of 2022, when this article was compiled. §.§ Overview of Techniques Extensive Air Showers Cosmic rays and gamma rays initiate relativistic particle cascades in the Earth's atmosphere. The energy of the primary particle is transferred via high-energy interactions (e.g. pair production and bremsstrahlung) to secondary cascade particles. These particles also create Cherenkov light which can be detected on the ground, either by measuring the density of the Cherenkov light distribution (wave-front sampling technique) or by imaging the Cherenkov light emitted from the cascade. The number of secondary particles grows until a critical energy per particle is reached, below which ionization losses become dominant. Such particle cascades are called extensive air showers (EAS), and are the tool provided by nature to ground-based gamma-ray astronomers. One of the main tasks in experimental gamma-ray astronomy is the separation between gamma rays and the dominant hadronic background. While the development of gamma-ray-induced EAS can be modeled mainly on the basis of the pair production and bremsstrahlung processes, the development of hadronic EAS is governed by the strong interaction. Due to the greater transverse momentum transferred to pions in strong interactions, hadronic EAS are wider (e.g. <cit.>) and can also have pronounced sub-structures, due to electromagnetic sub-showers initiated by gamma-rays from the decay of neutral pions. Furthermore, the shower maximum of an air shower (correlated with the first interaction) depends on the nature of the EAS-initiating particle. In general, hadronic EAS develop their maximum deeper in the atmosphere than gamma-ray showers of similar energies.
This leads to observable differences in the structure of the collected Cherenkov light or secondary particles on the ground, and to differences in the image shape between gamma-rays and hadrons. With a sufficiently high energy of the primary particle, the EAS can reach the ground before the secondary particles lose all their energy by ionization. Particle detectors on the observation level, usually placed at high altitude, can then be used to measure the secondary EAS particle distributions and arrival times on the ground. Another approach is to measure the Cherenkov light emitted by the secondary EAS particles, which is much less attenuated by the atmosphere and can reach down to lower altitudes. By measuring the air Chrenkov yield of the EAS, atmospheric Cherenkov detectors can record the arrival times of the shower front and measure the Cherenkov light lateral density profile. In the case of imaging telescopes, the most successful ground-based gamma-ray technique, the entire longitudinal profile of the EAS can be imaged. Finally, fluorescence or radio measurements are also used for EAS observations. In the following, we briefly introduce the main concepts behind the particle sampling and air Cherenkov techniques, both being the methods implemented by the experiments presented here. For a detailed description of detection principles in ground-based γ-ray astronomy, see chapter "How to Detect Gamma-rays from the Ground: An Introduction to the Detection Concepts". b [b] 0pt9cm Characteristics of proposed ground-based γ-ray facilities. Acronyms stand for Kilometer-square array (KM2A), Large sized telescope (LST), Mid sized telescope (MST), Muon detector (MD), Proportional counter (PRC), Small sized telescope (SST), Surface detector (SD), Underground detector (UD), Water Cherenkov detector (WCD), Wide-field Cherenkov Telescope (WFCT). 35cm0pt Observatory Category Status Location Technology Array Area Unit Spacing Photosensor Muon Detection Ref. ALPACA Particle Sampler 1/4 array (2021) 4.7 km a.s.l. Scintillator SD SD array = 82,800 m^2 SD = 15 m (SD) fast-timing PMT WCD array  <cit.> 1/2 array (2022) Chacaltaya Water Cherenkov UD MD array = 5,400 m^2 8×896 m^2 MD clus. (MD) 20" PMT Underground CARPET-3 Particle Sampler Upgrade (2022) 1.7 km a.s.l. Scintillator SD/UD 200 m^2 Dense carpet 6" PMT (FEU-49) Scintillator  <cit.> Baksan 600 m^2 Continuous 6" PMT (FEU-49) Underground CoMET Particle Sampler R&D Phase 5.1 km a.s.l. Water Cherenkov 20,000 m^2 WCD ≈ 4 m^b (WCD) 8" PMT Surface Array  <cit.> Air Cherenkov Andes^a Timing Array ACT ≈ 7 m^b (ACT) 8×3" PMT. Scintillator layer GRAPES-3 Particle Sampler Upgrade 2.2 km a.s.l. Scintillator SD 25,000 m^2 8 m 2" PMT Proportional Ctr.  <cit.> (Ongoing, MD) Ooty Proportional Ctr. UD 560 m^2 16 × 35 m^2 mod. 6 m × 0.01m^2 tubes Underground Particle Sampler Water Cherenkov (SD) ≈ 80,000 m^2 3×WCDA pools 1.5"+8" PMT^c LHAASO Particle Sampler Completed 4.4 km a.s.l. Scintillator (SD) ≈ 1 km^2 15 m 1.5" PMT. Underground  <cit.> Particle Sampler (2021) (Mt. Haizi). Water Cherenkov (UD) ≈ 1 km^2 30 m 8" PMT (WCD array) Air Cherenkov Calorimetry 256 deg^2 FoV 18 WFCT 32times32 1.2" SiPM STACEX Particle Sampler R&D Phase > 4 km a.s.l. Resistive Plate Chamber ≈ 22,000 m^2 Dense carpet 8" PMT WCD array  <cit.> Andes^a Water Cherenkov O(90% fill-factor) Underground SWGO Particle Sampler R&D Phase > 4.4 km a.s.l. 
Water Cherenkov Inner ≈ 80,000 m^2 Inner ∼ 4 m 8" PMT WCD array^d  <cit.> Andes^a Outer ≈ 1 km^2 Outer ∼ 16+ m Air Cherenkov Timing array O(100 m) 8"/10" PMT TAIGA Air Cherenkov Pilot (1km^2) Siberia^a Air-Imaging up to 10 km^2 up to 600 m 3/4" PMT / SiPM  <cit.> Particle sampler Scintillator SD/UD ≈ 230 m 1.2" PMT Underground ASTRI Air Cherenkov Construction 2.4 km a.s.l. Imaging. ≈ 40,000 m^2 200 m 0.2^∘ pixel-size SiPM  <cit.> (est. 2024) Tenerife (dual-mirror) 4 m tel. FoV ∼ 10^∘ CTA Air Cherenkov Construction La Palma Imaging ≈ 1 km^2, CTA-N variable 0.1^∘ PMT (LST, MST)  <cit.> (est. 2027) Paranal (multiple sizes) ≈ 10 km^2, CTA-S 0.2^∘ SiPM (SST) MACE Air Cherenkov Commissioning 4.3 km a.s.l. Imaging Mono Telescope 0.12^∘ pixel-size PMT  <cit.> Hanle 21 m tel. 1088 pixels * ^aEvaluation for site of next stage currently ongoing. * ^bRefers to average unit distances, which are arranged in regular clusters. * ^cIn a later configuration, WCDA-2 and WCDA-3 was equipped with 3"+20" PMTs, while WCD-1 remained with the orignal 1.5"+8" PMTs-configuration. * ^dOptions under investigation include double-layer WCD units <cit.> or multi-PMT WCD units <cit.>. Particle Detector Arrays Particle sampling arrays measure the secondary air shower particles that reach the observation level. Many techniques can be applied to such purpose. The electrons and muons from an air shower are typically detected via their scintillation light produced by the ionization in scintillator detectors, or Cherenkov light produced, e.g. in water and ice. The produced light is detected using photomultiplier tubes (PMTs). The light sensitive PMTs are connected to their respective light-producing medium in some light-tight container. From these basic working principles it results that the detector unit is insensitive to daylight and can be operated continuously, with up to 100 % duty-cycle (as opposed to the ∼ 15% of IACTs). A further advantage of particle arrays is a comparatively large field of view of typically 1 sr (as compared to 0.024 sr for a wide angle IACT with 10^∘ field of view). The large field of view and continuous operation duty cycle gives particle sampling arrays an advantage for extended sky surveys or uninterrupted monitoring of sources over extended periods, without limitations due to daytime. Particle arrays use the arrival times and number of particles to reconstruct arrival direction, shower core impact position and energy. Limitations for the directional and energy reconstruction accuracies are the shower-to-shower fluctuations, and the width (arrival time width) of the particle shower front. This basic concept of sampling the shower front translates into two fundamental requirements, common to any technology applied: that of large active array areas (i.e., extended arrays with a good fraction of instrumented surface) and of high altitude installation sites, both necessary to achieve a satisfactory shower reconstruction and overall performance. Such detectors also have typically higher energy thresholds (in comparison to air-Cherenkov experiments), since only the most energetic showers penetrate deep enough in the atmosphere to produce measurable signals from charged particles or secondary high-energy photons at ground level. The capability to discriminate between γ- and CR-induced air showers is another fundamental element of the technique, essential to achieve good sensitivity. 
Above several TeV, γ/hadron discrimination can be greatly improved by exploiting the low muon content of γ-ray-induced air showers, using muon detection as a veto to suppress the CR background. Placing particle detectors below ground or equipping them with shielding allows a measurement of the muon component. This can also be used for a determination of the nature of the primary particle (cosmic-ray composition). At lower energies, cosmic-ray showers are muon-poor, and the muon cuts cease to be effective, so that γ/hadron discrimination must be based on the distribution of particles at observation level. With a dense enough sampling of the footprint of the air shower (high array fill-factors), the comparatively irregular shape on the ground can then be used to separate hadrons from the more smoothly distributed particles in a γ-ray-induced air shower. For more on the particle detector arrays and water-Cherenkov technique see chapter "Particle Detector Arrays and Water Cherenkov Technique". Air Cherenkov Technique Air Cherenkov detectors measure the Cherenkov photons emitted from the secondary EAS particles over the whole shower development, effectively using the Earth's atmosphere as a calorimeter, and achieve peak sensitivity between circa 100 GeV and a few tens of TeV. The advantage of the air-Cherenkov method is that the light can be detected over the full shower development, providing an effective calorimetric measurement of the energy deposited in the EAS. Due to the large number of photons emitted, of O(10^5) for a 1 TeV γ-ray, energy resolutions of the order of 10% are typically achieved. The air-Cherenkov pulses are short close to the shower core (of the order of 10 ns), allowing a good angular resolution to be achieved over a wide energy range. Furthermore, most of the emitted light in the optical range (mainly blue) reaches the ground with only little absorption, so that the energy threshold is lower compared to the particle detection technique, where the air shower must have sufficient energy for the charged EAS particles to reach the observation level. Thanks to this low energy threshold, now approaching a few tens of GeV (compared to a few hundred GeV for particle arrays), one advantage of air-Cherenkov instruments is the ability to respond to transient alerts and follow up variable sources. They nevertheless have a very limited duty cycle, operating only during dark time. The air-Cherenkov wave-front sampling technique was introduced by pioneering experiments such as THEMISTOCLE <cit.> or AIROBICC <cit.>, among others. This technique provides an angle-integrating measurement of the light density on the ground, and the arrival times at individual detector stations, and is therefore also referred to as a timing technique. It is the basis for the next step taken by the TAIGA Collaboration, with the introduction of a hybrid reconstruction of air-shower data, using both IACTs and HiSCORE stations, in order to access the VHE and UHE γ-ray regime with cost-effective instrumentation deployed over large detector areas. In contrast to the angle-integrating HiSCORE stations, IACTs provide an actual angular image of the air shower. At the core of the success of the technique is the efficacy of the imaging analysis in reconstructing the γ-ray shower and providing excellent hadron rejection <cit.>. The Imaging Air Cherenkov technique was established with the first detection of the Crab Nebula by Whipple in 1989 <cit.>, and subsequent detections of several objects by, among others, Whipple, HEGRA and CAT.
A further important innovation was the introduction of the stereoscopic observation technique by HEGRA, which is the widely used approach today. Here, multiple telescopes are arranged within an area about the size of the Cherenkov light pool in order to provide multiple simultaneous images of a same EAS event, from various viewing directions. Thanks to the use of stereoscopy, IACTs can achieve better shower core reconstruction, and as a consequence, better angular resolution (≲ 0.1^∘) and hadron rejection than achievable in monoscopic observation mode using a single IACT. With the advent of the third generation of IACT experiments H.E.S.S., MAGIC and VERITAS, IACTs were established as the instrument of choice in the energy domain from 100 GeV to several TeV. Thanks to their excellent sensitivity, which roughly scales with the number of telescopes in the array, IACT arrays are also good timing instruments. Among the key design characteristics of an IACT are its very large aperture, and mirror area, typically consisting of large (100+ m^2) tesselated mirrors, which allow to collect as many Cherenkov photons as possible, and defining in turn the γ-ray energy threshold of the instrument. The telescope's large FoV, ∼ 3^∘-10^∘, is also necessary to fully contain the Cherenkov images of the shower, a few degrees in angular extension, and usually offset by ∼ degree from the source position. This implies also the use of large, meter-sized cameras placed in the focal plane, and equipped with photomultiplier tubes (PMT). A good imaging of the air-shower requires a fine pixelation of the photosensitive area, meaning an array of hundreds or thousands of pixels with sizes of a fraction of a degree.This setup allows to image the fast air-Cherenkov pulses of EAS with great efficiency. For details on the air Cherenkov technique, see chapter "The Air-Cherenkov Technique". § TAIGA - GAMMA-RAY AND COSMIC RAY ASTROPHYSICS IN SIBERIA §.§ The Tunka site The Tunka site is located in the Tunka valley in Siberia (51^∘48'35” N, 103^∘04'02” E) at an altitude of 675 m a.s.l.. At these latitudes the temperatures during winter times can reach down to -50^∘C. Astronomical observations are not possible during the short summer nights, mainly due to frequently arising thunderstorms, which put the instruments at risk, and only deployment operations are therefore concentrated on the summer months. The Tunka site is hosting several experiments, some of which were initiated decades ago and continue to operate to this date, making up the valley's large complex. Tunka-133 is the final stage of a cosmic ray experiment which has evolved from a smaller version in the early nineties, up to the current size of 175 optical stations distributed over an area of 3 km^2 <cit.>. The principle of operation of Tunka-133 is a measurement of the air Cherenkov light pulses from extended air showers (EAS) on the ground, using photomultiplier tubes (PMTs) pointing to the zenith. These measurements can only take place during dark clear nights. Each Tunka-133 optical station consists of a hemispherical PMT (20 cm cathode diameter, EMI 9359, Hamamatsu R1408) inside a metal cylinder. The dynamic range is increased to 3×10^4, using one dynode and one anode readout channel. The Tunka-133 array is organized in clusters of 7 hexagonally arranged detector stations, with a distance between stations of 85 m. 
Tunka-133 consist of a core of 133 stations (19 clusters) covering an area of 1 km^2, and 6 additional outer clusters, placed at a distance of 700-1000 m from the center of the array, resulting in a total array area of 3 km^2. An array of scintillation detectors, Tunka-Grande, was installed in 2015, allowing to measure the muon-component of EASs <cit.>. Typically, hadronic air showers contain a factor 30 more muons than a gamma-ray air shower. Therefore, a measurement of the muon component can be very efficient for gamma-hadron separation. At the same time, this approach requires a large muon-active to total array area ratio, also referred to as filling factor. The Tunka-Grande detector component was the first step towards a particle detection array for TAIGA, as will be described below. A radio extension, Tunka-Rex <cit.>, was part of the experiment until 2019, measuring the radio emission from EASs. Tunka-Rex consisted of 63 radio antennae, distributed over 3 km^2 and was operated in coincidence with the Tunka-133 and Tunka-Grande arrays. As opposed to Cherenkov light measurements, the radio and particle detection techniques are not restricted to darktime. Finally, an optical telescope with 400 mm diameter is operated on the Tunka site as part of the MASTER Global Network of Robot Telescopes <cit.>. The Tunka site offers valuable infrastructure and scientific environment for the development of the TAIGA experiment. The existing experience with different detection techniques and their operation under extreme conditions during the Siberian winter is of great benefit. §.§ Experimental Concept The Tunka Advanced Instrument for Cosmic ray and Gamma Ray Astronomy (TAIGA) is a hybrid detector concept for gamma-ray astronomy in the energy range from few TeV to several 100s of TeV, and for cosmic ray physics above 100 TeV. TAIGA emerged from a collaboration between University of Hamburg (UHH) and the Moscow state University (MSU), starting in 2009. At that time, the HiSCORE experiment only existed as a concept study under the name of SCORE <cit.>. Due to the similar experimental approach of Tunka-133 and the excellent Tunka-site infrastructure, an agreement between MSU and UHH led to the first HiSCORE prototypes being deployed in the Tunka-valley in 2010. At the same time, first studies for a combination with IACTs were started. These activities eventually developed into the TAIGA collaboration, which today consists of 15 different institutions from Russia and Germany. TAIGA has deployed 120 HiSCORE stations, 2 imaging air Cherenkov telescopes (IACTs), and a first cluster of TAIGA-Muon detectors. All three components of the TAIGA experiment – the air-Cherenkov timing array TAIGA-HiSCORE, the air-Cherenkov imaging telescopes TAIGA-IACT, and the particle detector array TAIGA-Muon – measure the EAS on the ground, exploring both the secondary air shower particles, and the atmospheric Cherenkov photons to reconstruct the EAS properties. The first component of the TAIGA detector complex was the wide-angle large-area wave-front sampling timing-array HiSCORE. The HiSCORE timing-array is based on the concept outlined in <cit.>. This component started in 2010 as a stand alone concept, aiming at a cost-efficient coverage of very large detector areas in order to account for the steeply falling fluxes (power-law) with rising γ-ray energies. 
However, early on it became clear that while being a cost-efficient approach to cover large areas and achieve good core location reconstruction, and angular and energy resolutions, the HiSCORE concept alone suffers from poor γ/hadron separation power below 100 TeV. Therefore, a combination with classical imaging air-Cherenkov Telescopes (IACTs), which provide good γ/hadron separation by using the EAS image shape, was envisaged. In order to cover large areas using IACTs in stereoscopic mode it is nevertheless necessary to position the telescopes at distances not more than 100-300 m apart from each other. This is a significant limitation when aiming for very high energies, as it implies a large number of telescopes, and a large number of channels (PMTs & electronics) per instrumented km^2. Another option is to use the IACTs in monoscopic mode, placing them up to 600 m apart from each other. While a standalone monoscopic IACT does not provide the same γ/hadron separation quality as stereoscopic systems, it was shown <cit.> that a hybrid event reconstruction using the TAIGA-IACTs together with TAIGA-HiSCORE can achieve, at the same time, good γ/hadron separation and very large effective areas. Later on, an additional particle detector for the measurement of the muon component at higher energies was introduced, resulting in an improved hadron rejection at the high energy end. A general layout of the TAIGA-HiSCORE and TAIGA-IACT components on the Tunka site is shown in Figure <ref>. In the following, the individual components of TAIGA are introduced, followed by a description of the hybrid reconstruction concept, and an overview of early results from the TAIGA pilot array. §.§ TAIGA-HiSCORE Station and array design The TAIGA-HiSCORE array currently consists of 120 angle-integrating air-Cherenkov timing detector stations, distributed over an area of approximately 1 km^2. The stations are arranged in an offset grid with distances between stations of 75 m to 150 m, as illustrated in Figure <ref>. The array is organized in 4 clusters comprising about 30 detector stations each, indicated by the differently coloured stations in the Figure. A cluster is an organisational unit with all stations of a cluster connected to a central cluster controller. An individual station consists of 4 photomultiplier tubes (PMTs) equipped with a segmented Winston Cone. Both 8" and 10" PMTs from ElectronTubes and Hamamatsu are used. The Winston cones are built from light-weight reflective foil segments (Alanod 4300UP), and serve to reduce the background from stray light as well as to increase the light sensitive area to about 0.5 m^2 per station. At the altitude of the Tunka site, the resulting energy threshold is of 40 TeV for gamma-rays, when using at least 3 stations for reconstruction. The full opening angle of each cone is 60^∘ wide, resulting in an effective field of view of 0.6 sr (taking into account that the radial acceptance drops towards the edge of the FoV). All stations are mounted on a steel construction which allows to tilt the optical axis along the north-south direction, therewith increasing the area of the sky that can be covered. Tilting all stations to the south increases the total field of view covered. Tilting all stations to the north concentrates the available observation time to a smaller total FoV, but with much deeper exposure <cit.>. 
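As a rough cross-check of the station field of view quoted above, the geometric solid angle of a cone with a 60^∘ full opening angle can be compared with the stated effective value of 0.6 sr. The short Python sketch below does this; the linear radial-acceptance falloff used here is an invented, purely illustrative assumption, not the measured acceptance curve of the Winston cones.

import numpy as np

def cone_solid_angle(full_opening_deg):
    # Geometric solid angle of a cone with the given full opening angle
    half = np.radians(full_opening_deg / 2.0)
    return 2.0 * np.pi * (1.0 - np.cos(half))

def effective_solid_angle(full_opening_deg, edge_acceptance):
    # Weight dOmega = 2*pi*sin(theta)*dtheta with a toy acceptance that falls
    # linearly from 1 on-axis to edge_acceptance at the edge of the cone
    half = np.radians(full_opening_deg / 2.0)
    theta = np.linspace(0.0, half, 5000)
    acc = 1.0 - (1.0 - edge_acceptance) * theta / half
    integrand = 2.0 * np.pi * np.sin(theta) * acc
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta)))

print(f"geometric solid angle (60 deg cone) : {cone_solid_angle(60.0):.2f} sr")
print(f"with toy acceptance falloff to 60%  : {effective_solid_angle(60.0, 0.6):.2f} sr")
print("quoted effective field of view      : ~0.6 sr")

Tilting the stations, as described above, does not change this per-station solid angle; it only shifts the region of the sky covered at a given time.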
Currently, the HiSCORE stations are tilted by 25^∘ to the south, in order to improve the total exposure on the Crab Nebula during the pilot phase of the TAIGA array. Data acquisition and slow control electronics One of the challenges of the TAIGA experiment is its location at high geographic latitude in Siberia, with harsh, cold winters. The four cones of each station are insulated with foam and covered by a plexiglas window, which is equipped with a heating wire to prevent fogging. The PMTs, placed at the base of the Winston cones, are equipped with divider bases, providing a nominal gain of 10^4 using 6 dynode stages. In order to increase the dynamic range of the signal, the PMT anode and 5th dynode are read out. The station metal box is equipped with a motorized lid, for protection against daylight and bad weather. A separate, heated box, is installed next to each station and contains the readout electronics and parts of the slow control and monitoring system. The micro-controller based slow control system operates the lid motors, monitoring of lid motor and heating currents, control of plexiglas window heating, the high voltage (HV) control, monitoring of anode current, and implements safeguards in case of too high currents, or breaking daylight. The PMT-station and electronics box are connected to the central DAQ via optical fibres for slow-control, readout, and distribution of a time synchronization signal. Additionally, a radio connection (XBeePro) is used for the heating controller of the electronics box. The DAQ system is not switched on before a temperature of 15^∘C is reached inside the box. The DAQ system of the HiSCORE array is illustrated in Figure <ref>. The anode signals from the four PMTs are connected to an analog summator and splitter board. The board output is an analog sum, used for triggering, and the four anode signals are then sent to the readout. The dynode signals are directly connected to the readout. Triggering on the sum of four PMT anode signals reduces noise fluctuations, and therewith the threshold by a factor of 2. If the analog sum exceeds a given threshold, all anode and dynode signals are read out using a DRS 4 (Domino Ring Sampler <cit.>) based readout board at a sampling frequency of 2 GHz. A 9th DRS 4 channel is sampling a fiber-distributed 100 MHz clock, allowing a relative time-synchronization between different HiSCORE stations. Of the 120 HiSCORE stations, 20 are additionally equipped with the WhiteRabbit ethernet-based time synchronization system which is used to cross check the DRS 4-based time-synchronization <cit.>. It was shown that a relative timing accuracy of 0.2 ns over the full array is thus achieved <cit.>, which fulfills the requirement of a sub-ns time-resolution, necessary in order to reach a good angular resolution <cit.>. A horizontal light source deflected into the stations is used for the measurement of the relative time-delays between stations. An estimation of the stability of the used time-synchronization systems resulted in an RMS value of less than 0.5 ns for both systems <cit.>. The detector stations are connected by optical fibre to their corresponding cluster center system. The cluster centers are in turn connected to the central DAQ of the array. Further details on the DAQ system and electronics components used can be found in <cit.>. Data Reconstruction Each HiSCORE station triggers independently. The event-building is done as a pre-processing step of the reconstruction at a later stage. 
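The DRS 4 readout described above delivers waveform samples at 2 GHz, from which a few pulse parameters per station are later derived. A generic sketch of extracting the amplitude, half-rise time and FWHM from such a sampled pulse is shown below; the pulse shape, noise level and all numbers are toy values chosen for illustration, not TAIGA data or code.

import numpy as np

def pulse_parameters(t_ns, v):
    # Extract amplitude, half-rise time and FWHM from one sampled pulse
    v = np.asarray(v, float)
    i_peak = int(np.argmax(v))
    amplitude = v[i_peak]
    half = amplitude / 2.0
    # leading-edge half crossing, linear interpolation between samples
    i0 = np.where(v[:i_peak + 1] <= half)[0][-1]
    t_rise = np.interp(half, [v[i0], v[i0 + 1]], [t_ns[i0], t_ns[i0 + 1]])
    # trailing-edge half crossing for the FWHM
    i1 = i_peak + np.where(v[i_peak:] <= half)[0][0]
    t_fall = np.interp(half, [v[i1], v[i1 - 1]], [t_ns[i1], t_ns[i1 - 1]])
    return amplitude, t_rise, t_fall - t_rise

# Toy pulse sampled at 2 GHz (0.5 ns steps): smooth asymmetric shape plus noise
t = np.arange(0.0, 60.0, 0.5)
rng = np.random.default_rng(2)
pulse = 120.0 * np.exp(-0.5 * (np.log(np.maximum(t, 1e-3) / 20.0) / 0.25) ** 2)
wave = pulse + rng.normal(0.0, 1.0, t.size)

amp, t_half_rise, fwhm = pulse_parameters(t, wave)
print(f"amplitude ~ {amp:.0f} (a.u.), half-rise time ~ {t_half_rise:.1f} ns, "
      f"FWHM ~ {fwhm:.1f} ns")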
The total amplitude, the time of the event (half-rise time) and the signal width (full width at half maximum) are extracted for each detector. Based on these parameters, the air shower arrival direction, core impact position, and energy are reconstructed using methods developed earlier for Tunka-133 <cit.>, or for HiSCORE itself <cit.>. In a first reconstruction step, a 0th-order core position is estimated as the center-of-gravity of the station amplitudes. Furthermore, a 0th-order arrival direction is reconstructed using a fit of a plane to the arrival times. More precise core and angular reconstruction is achieved using models for the light distribution (light density function, LDF, and amplitude density function, ADF) and for the arrival time distribution on the ground. Figure <ref> shows the distribution of amplitudes on the ground with the corresponding ADF fit as a function of core impact distance, and the distribution of arrival times at the stations with the corresponding arrival time function fit. The EAS energy is reconstructed using the value of the LDF at a fixed distance from the shower core (typically 200 m). Using these methods, a core impact resolution of 35 m at threshold (only a few stations used for fitting) and of better than 10 m at higher station multiplicities can be achieved. As mentioned above, the angular resolution is better than 0.1^∘. Finally, the energy resolution is of the order of 10%, which is a typical value for air Cherenkov experiments. The EAS maximum can be reconstructed using the slope of the LDF, the signal widths, the signal rise times, or an arrival time fit. However, the overall gamma-hadron separation of HiSCORE alone is poor at the threshold, and only reaches acceptable levels at several 100s of TeV (see, e.g. <cit.>). During the commissioning phase of the HiSCORE engineering array (first 28 stations), a serendipitous discovery of a man-made light source was made <cit.>. Unexpectedly strong increases of the trigger rate during ≈ 1 s intervals were found in the data set of the 2015-2016 season. This effect was found to be due to a fast-moving source close to zenith, which could be associated with the CATS-LIDAR (Cloud Aerosol Transport System) <cit.>. The signal could be reconstructed using a plane-wave fit and appeared as a point-like source for the HiSCORE array. The CATS-LIDAR was detected for 11 more passages in the subsequent observing season and provided a unique calibration tool for TAIGA-HiSCORE. During one of these passages, the robotic optical MASTER telescope was used to image the track of the source. These measurements were used to verify the absolute pointing of the TAIGA-HiSCORE array, and to re-calibrate the relative time offsets between HiSCORE stations as well as their individual time jitter. The passages were also used to estimate the angular resolution by comparison with the reconstruction of subsets of the array (chessboard method). This analysis resulted in a 0.1^∘ angular resolution for plane-wave events. Further details on this analysis can be found in <cit.>. The chessboard method had also been used previously to verify the TAIGA-HiSCORE performance in comparisons between data and MC simulations <cit.>. Monte Carlo Simulations and Array Performance Simulations of air showers were performed with the CORSIKA package <cit.>. The detector simulation is done using an adaptation <cit.> of the sim_telarray package <cit.> and a custom simulation chain <cit.>.
Both simulation chains implement the full detector response, including (see, e.g. <cit.>) Winston cone acceptance (based on ray-tracing simulations), atmospheric photon scattering <cit.>, wavelength-dependent PMT quantum efficiency, photoelectron collection efficiency, PMT signal pulse shape, station trigger, and PMT afterpulsing. Monte Carlo simulations have shown that a sub-ns time-synchronization is required in order to reach the desired angular resolution of the order of 0.1^∘. Figure <ref> shows the expected angular resolution using the reconstruction methods described above, assuming different relative time resolutions. This predicted angular resolution could be verified using the above-mentioned analysis of the CATS-LIDAR, as well as through comparisons of real data to MC simulations using the chessboard method <cit.>. The angular resolution is a function of the number of triggered stations. When using small subsets of stations, the resulting mismatch, α, between the reconstructed direction from both subsets is correlated to the angular resolution of the array. When using the same subsets in both data and MC simulations, the simulations can be verified, thus confirming the simulated angular resolution for the full array (shown in Figure <ref>), which goes down to 0.1^∘ at energies above several 10s of TeV. The chessboard comparison is illustrated in Figure <ref>, showing the mismatch angle α for a study done using the 28-station HiSCORE engineering array <cit.> (see also <cit.>). The trigger rate of individual HiSCORE stations depends on the discriminator threshold used. The threshold was set to a value corresponding to 250 photoelectrons (p.e.), limiting the trigger rate to less than 20 Hz. Simulations of this trigger setup and comparison to real data yield an energy threshold of 50 TeV. This threshold might be further reduced in the future, provided a higher station trigger rate of the 120 HiSCORE stations can be handled by the DAQ system, possibly implementing online filtering methods. §.§ TAIGA-IACT The IACT technique and TAIGA The driving idea of TAIGA is to access the energy regime up to several 100 TeV. At these large energies, the amount of Cherenkov light produced in an air shower is very large, and small-size telescopes with diameters of 4 m are sufficient. With a wide field of view of almost 10^∘ diameter, air showers can be imaged up to core impact distances of around 500 m from the telescope. When placing the TAIGA-IACTs 600 m apart from each other, 4 IACTs are enough to cover an area of more than 1 km^2. This is the opposite approach of classical stereoscopic systems, where the maximum distance between IACTs (100 m to 300 m, depending on the energy range) is dictated by the condition to cover each EAS with at least two telescopes. TAIGA-IACT design Today, three IACTs are in operation in TAIGA. Figure <ref> shows the first two TAIGA-IACTs. The telescope dishes are built following the Davies-Cotton design, and are mounted on an elevation and azimuthal axis (alt-az). A mirror dish consists of 34 hexagonal mirrors with an effective diameter of slightly more than 60 cm each, yielding a total effective light-collection area of more than 10 m^2. The focal length of the mirror dish is 4.75 m, with an f/d of 1.1. The alt-az axes are driven by a Phytron hybrid stepper motor, equipped with 17-bit shaft encoders and stop switches. The telescope pointing is monitored and corrected using a sky-CCD camera system, imaging known bright stars. 
Measurements of currents induced by stars drifting through the camera (tracking switched-off) were used to determine an absolute pointing accuracy of 0.02^∘ <cit.> (also see <cit.>). A Cherenkov light camera is installed at the focus of the mirror dish. The first IACT camera consists of 560 XP1911 PMTs, with a cathode diameter of 19 mm. The second camera consists of 595 PMTs of the same type. The PMTs are equipped with Winston cone light funnels, fabricated as a single plane. Each Winston cone covers the full reflector diameter. With an angular diameter of 0.36^∘ per camera pixel (PMT+cone), the full camera has a field of view of 9.6^∘. The camera body was especially designed with the harsh environmental conditions in mind, with insulated walls, a temperature control system, and a thick, 1.5 cm, plexiglas entrance window in front of the Winston cone plane. A camera lid protects the PMTs during daylight. Each camera is organized in clusters of up to 28 PMTs (see Figure <ref>). The cameras have a field of view of about 9.6^∘. The high voltage is supplied in groups of 7 PMTs (up to 4 groups per cluster), selected with similar gains. Each cluster includes HV control and current monitoring components. Each cluster is read out with a board based on a 64-channel front-end ASIC, MAROC 3 <cit.>. The trigger condition requires at least 2 pixels above a programmable threshold within one cluster. A central camera controller, based on an FPGA Xilinx Spartan-6, is used to generate the global camera trigger and event time stamps, to manage the settings and readout of the MAROC 3 boards, and for data transmission to the central DAQ. The Central Controller includes a local clock, operated at a fiber-distributed 100 MHz frequency from the DAQ center, which is synchronized with all HiSCORE stations. For further details on the IACT design and electronics, see <cit.>. Event reconstruction When using a single IACT for gamma-ray observations, the gamma-hadron separation typically relies on a set of cuts, e.g. supercuts <cit.>, based on different image parameters, such as width or length, first introduced by <cit.>, and so-called Hillas parameters. Mostly, two types of parameters can be used to cut on the direction of the events. The first parameter is the α-angle, which is the angle between the major axis of the air shower image and the position of the observed source. The image major axis is pointing towards the direction of the EAS event. Therefore, in case of a γ-ray excess from the source position, an enhancement of events is expected at small values of the α angle. When using a single IACT, the classical approach is to use the angle α instead of a true directional information. However, it is also possible to directly reconstruct the actual direction θ using the disp-parameter. This parameter is based on the ratio of image width, w, over image length, l. Using MC simulations, a relation between w/l and the absolute distance from the center of gravity of the image to the position of the event direction along the image major axis inside the camera is generated. MC simulations for TAIGA show that this method can reach an angular resolution better than 0.2^∘ in the energy range from few TeV to 10 TeV. While the goal of TAIGA is to implement a hybrid reconstruction using the IACTs together with the HiSCORE timing array, and the muon counters, an important step towards such a reconstruction is to verify the function of the IACTs using Monte Carlo (MC) simulations and observations of known sources. 
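Before turning to the simulations, the image parameters used in such single-telescope analyses can be made concrete: width, length and orientation follow from the first and second moments of the pixel amplitude distribution. The generic sketch below is not the TAIGA reconstruction code, and the disp coefficient at the end is invented purely to illustrate the idea of mapping width/length to a displacement along the major axis.

import numpy as np

def image_moments(x, y, amp):
    # Hillas-style parameters from pixel positions (deg) and amplitudes (p.e.)
    w = np.asarray(amp, float) / np.sum(amp)
    cx, cy = np.sum(w * x), np.sum(w * y)          # image centre of gravity
    dx, dy = x - cx, y - cy
    cov = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    eigval, eigvec = np.linalg.eigh(cov)           # ascending eigenvalues
    width, length = np.sqrt(eigval)
    psi = np.degrees(np.arctan2(eigvec[1, 1], eigvec[0, 1]))
    return cx, cy, width, length, psi, float(np.sum(amp))

# Toy elongated image, offset from the camera centre
rng = np.random.default_rng(0)
pix = rng.normal(size=(600, 2)) * [0.30, 0.08] + [1.0, 0.5]
cx, cy, width, length, psi, size = image_moments(pix[:, 0], pix[:, 1],
                                                 np.ones(len(pix)))
print(f"c.o.g. = ({cx:.2f}, {cy:.2f}) deg, width = {width:.3f} deg, "
      f"length = {length:.3f} deg, psi = {psi:.1f} deg, size = {size:.0f} p.e.")

# Illustrative disp-style estimate (coefficient invented for this sketch)
disp = 1.2 * (1.0 - width / length)
print(f"disp estimate: {disp:.2f} deg along the major axis from the c.o.g.")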
Simulations of the TAIGA-IACTs are realized in two different MC chains. Both chains are using CORSIKA for air shower simulation <cit.>. The detector simulation is done using an adaptation of the sim_telarray package <cit.> (also see <cit.>), using a custom simulation chain developed within TAIGA <cit.>. First comparisons show a good agreement between data and MC. Figure <ref> shows the image's width parameter, obtained from the second moment of the image pixel distribution for background data of the first TAIGA-IACT, compared to the simulated width for hadrons and γ-rays. Before implementing a full hybrid analysis the functionality of the TAIGA-IACTs must be demonstrated using real data. The Crab Nebula is the standard candle of TeV γ-ray astronomy. Data on the Crab Nebula were taken during the commissioning phase of the first TAIGA-IACT and during subsequent observation seasons. A first analysis was based on 40 h of good weather quality data under stable instrument conditions. Using a standard Hillas analysis with a cut on the α angle yielded an excess of 164 γ-ray events at a significance level of 6.3σ <cit.>. This result was confirmed by different groups using two independent reconstruction chains. An analysis <cit.> using a MC simulation trained random forest algorithm based on a larger dataset of 80 h, and using the disp parameter to reconstruct the direction of the primary gamma-rays, resulted in a larger significance (8 σ level) and a clear signal in the on-source (i.e. centered around the Crab Nebula) region at small values of the angular distance θ, as shown in Figure <ref>. The average distribution of 10 off-source regions used for background estimation essentially remains flat towards low values of θ^2. The signal obtained from the Crab Nebula confirms the expected angular resolution at the current energy threshold of the first TAIGA-IACT <cit.>. A drawback of the current TAIGA-IACTs is the large difference in field of view between the telescopes and TAIGA-HiSCORE. Only about 4% of HiSCORE events fall into the FoV of the telescopes. Furthermore, the usage of Silicon-PMTs (SiPMs) has several advantages over the usage of standard PMTs, such as operation under full-moon conditions, no degradation due to high levels of background light, compact design, and low voltage and power consumption. A small imaging telescope (SIT) prototype concept <cit.> addresses these points, based on a Schmidt optical system and a SiPM camera, providing a FoV of 20^∘ in diameter. However, this study is in the prototyping stage, and a future use in the experiment is not decided. §.§ TAIGA-Muon An efficient γ/hadron separation is possible using the muon component of air showers at energies above 100 TeV, when using an instrumented muon detector area of 0.2-0.3% of the total TAIGA array (filling-factor). The goal is to build a TAIGA-muon array <cit.> of up to 3,000 m^2. In a first step, the Tunka-grande array consists of 19 scintillator stations. Each station has a surface and an underground component. The surface component is built from 12, 80x80 cm^2, scintillator tiles inside a protective steel-hut construction. The scintillator tiles were previously part of the EAS-TOP and KASCADE-Grande arrays. The underground component consists of 8 such detectors at a depth of 1.5 m below ground. The stations are located at about 20 m from individual Tunka-133 clusters, with distances between Tunka-Grande stations of 200 m. 
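On/off-source comparisons such as the θ^2 analysis above are conventionally quantified with the likelihood-ratio significance of Li & Ma (1983, Eq. 17). A short, generic implementation follows; the event counts are invented for illustration and are not the TAIGA Crab numbers.

import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    # Eq. (17) of Li & Ma (1983); alpha = ratio of on- to off-source exposure
    n_on, n_off = float(n_on), float(n_off)
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# Illustrative example: one on-source region vs. 10 equal off-source regions
n_on, n_off, alpha = 360, 2000, 1.0 / 10.0
print(f"excess       : {n_on - alpha * n_off:.0f} events")
print(f"significance : {li_ma_significance(n_on, n_off, alpha):.1f} sigma")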
Based on the same concept, the TAIGA-Muon array <cit.> will consist of detector stations equipped with 16 counters, each consisting of 100x100 cm^2 scintillator detector tiles <cit.>. Four wavelength-shifting light guides are used to guide the light of a counter to an EU-85-4 PMT. A TAIGA-Muon station will consist of 8 surface and 8 underground counters, yielding an area of 2×7.52 m^2. §.§ Hybrid imaging-timing concept For a single IACT, the disp-parameter yields an estimate of the position of the event direction along the image major axis. However, this is only possible for a subset of events, because of an ambiguity between two possible positions on either side of the elliptical shower image. While higher camera resolutions can also help, stereoscopic IACT systems primarily resolve this ambiguity by using the intersection of the major image axes of at least two IACTs. The same method is also used to reconstruct the EAS core impact position (with a projection into the observation plane) - see Table <ref>. Stereoscopic IACT systems achieve angular and core position resolutions of typically 0.1^∘, and better than 20 m, respectively, on an event-by-event basis. The knowledge of the core position is key to an enhanced γ/hadron separation, based on the scaling of the image parameters (mostly width and length) with the values expected from simulations, using the reconstructed core position, zenith angle and total image amplitude (size), measured in photo-electrons (p.e.). The caveat of stereoscopic observations is that at least two IACTs need to be located within the Cherenkov light pool of the EAS event, thus small distances between IACTs are required (typically 100-300 m, depending on the energy). Furthermore, stereoscopic events at large distances from the two IACTs are usually discarded due to the very small angles between the intersecting image major axes (small stereo-angle). Even though these requirements result in a smaller effective area than the sum of the effective areas of the individual IACTs - or, expressed differently, in a large required number of channels per instrumented km^2 - the superior point-source sensitivity obtained from the very good angular resolution and γ/hadron separation quality has so far made stereoscopy the method of choice in air-Cherenkov astronomy. The hybrid approach used within the TAIGA experiment is based on the idea of taking advantage of the full available effective area of a single IACT. This is achieved by placing the telescopes far apart from each other, so that most EASs will only trigger one single IACT, and several HiSCORE stations. In order to compensate for the limitations of monoscopic IACT reconstruction, the telescopes are then combined with the HiSCORE timing array. While in the stereoscopic technique the EAS direction and core position are reconstructed using two IACTs, TAIGA uses the direction and core position as reconstructed by HiSCORE. The principle is illustrated in Figure <ref>. The corresponding instrument response is also shown. It can be seen that the major image axis of the IACT projected to the observation level actually points in the direction of the core position reconstructed by HiSCORE. The latter is close to the simulated EAS core. As illustrated in Figure <ref> for the angular resolution, the angular and core position resolutions of HiSCORE towards higher energies, as obtained from Monte Carlo simulations, are comparable to those of stereoscopic IACT systems <cit.>.
The simulations were verified using data from a 9-station engineering array <cit.>. Using the core position reconstructed by HiSCORE, the zenith angle of the observation, and the image size measured by the TAIGA-IACT, the image width w can be scaled by its MC-expected value w_MC, thus obtaining the hybrid scaled width parameter (hscw): hscw = w/w_MC(core, size, zenith). Figure <ref> shows the maximum quality factor of a cut on hscw, defined as the ratio of the γ-ray efficiency to the square root of the hadron efficiency of the cut, as a function of core distance for different energy ranges. While for energies in the range from 20 to 38 TeV the quality factor drops beyond core distances of 100 m, the best quality factor for higher energies is reached at larger distances. A sweet spot, where the quality is optimized for all energy ranges considered, lies at core distances of about 250 – 300 m. Therefore, a distance between the positions of two TAIGA-IACTs of 500 – 600 m is considered to be the best solution for the operation of the hybrid array. Integrating over all energies, and using a core distance cut of 250 m (corresponding to a distance of 500 m between IACTs), yields a very good overall quality factor of 4.6 for γ/hadron separation using only the hscw parameter. These results were obtained assuming a core impact resolution as obtained from simulations of the HiSCORE array <cit.>. It can be expected that a hybrid reconstruction, based on a combined fit of HiSCORE stations and IACTs, will further improve the resolution, and thus the γ/hadron separation. Additional improvement of the γ/hadron separation can be obtained from other parameters such as the hybrid scaled length, the air-shower maximum reconstructed with HiSCORE, or a fully combined hybrid fit. Overall, an average energy-integrated quality factor better than 5 can realistically be expected. So far, only events with a HiSCORE station multiplicity of at least 3 were considered for reconstruction. However, two-station events, or even single-station events, might in principle be reconstructed, provided an IACT also triggers on the corresponding event. While the reconstruction quality will not reach the same level as achieved for higher-energy events with higher station multiplicities, recovering such events will provide additional statistics in the threshold energy range of the HiSCORE array, around 10 TeV. MC simulations yield a potential increase of more than 50% in statistics at these energies when including these events. While this is a special event class, so far not considered in the performance studies of TAIGA, it will be exploited in the future as part of the different types of event classes listed below. * single IACT * single IACT + 1-2 HiSCORE stations * single IACT + 3 or more HiSCORE stations * N IACTs + M HiSCORE stations * N IACTs + M HiSCORE stations + K TAIGA-Muon stations at E > 100 TeV The first event class will provide source monitoring in the range starting at the energy threshold of the IACTs, at a few TeV. The second class can enhance the performance of the single-IACT analysis in the energy range around 10 TeV. The third event class is aimed at the hybrid operation mode. Here, data from an IACT with core impact distances of up to 300 m can be reconstructed, with an energy resolution of 10-20 %, an angular resolution better than 0.2^∘ over the full energy range from a few TeV to a few 100 TeV, and a γ/hadron separation with a quality factor of the order of 5 above 10 TeV.
Event class number four will additionally provide stereoscopic events, which can be used to cross-check the hybrid reconstruction. In the current phase of the TAIGA experiment, the IACTs are placed closer together on purpose, in order to increase the fraction of stereo-events available for such cross-checks. Finally, when adding the information on the muon content of the EAS as measured by TAIGA-Muon, the gamma-hadron separation at energies above 100 TeV will be enhanced. TAIGA Sensitivity Sensitivities were estimated both for a pure HiSCORE setup <cit.> and for the hybrid Cherenkov technique <cit.>. Here, the difficulty lies in the fact that the two detector components have different fields of view, different energy thresholds, and different operation modes. While the HiSCORE stations with their wide field of view of 60^∘ in diameter are operated in scanning mode, oriented in a fixed direction[The optical axes of the stations are tilted to the south or to the north for longer periods of time (years), therewith accessing different parts of the sky], the IACTs with a field of view of 10^∘ in diameter are steered to point at any position accessible from the observation site. Therefore, many sources will receive exposure time throughout the year with HiSCORE-only data, while some selected sources will also be observed with the TAIGA-IACTs. Furthermore, the TAIGA-Muon detectors will be available at any time observations take place with HiSCORE or the IACTs. Additionally, the TAIGA-Muon detectors will also operate alone during daytime. With this in mind, it is clear that an estimation of the point-source sensitivity strongly depends on the part of the detector considered, and a straightforward comparison with other experiments is difficult. Figure <ref> shows the estimated sensitivity for the TAIGA air-Cherenkov detectors (TAIGA-HiSCORE and TAIGA-IACT). At the low-energy end, the sensitivity is dominated by the IACTs. No other detector component of TAIGA is sensitive at a few TeV. Here, the first event class listed above will be collected. In the energy range beyond 10 TeV, TAIGA-HiSCORE starts to contribute to the reconstruction. Here, one- or two-station events in combination with an IACT can be used. With rising energy, the number of stations will increase to more than 3 (event class 3). This is the main event class for TAIGA. In this range, the station multiplicity starts to be high enough to provide good angular and core position resolution, allowing a hybrid timing-imaging reconstruction. Eventually the EAS will be large enough to also trigger a second IACT (event class 4). Such events can be used to cross-calibrate the hybrid event reconstruction by means of the classical stereoscopic technique. This is possible using the current setup, where the distances between the IACTs were chosen to be much smaller than 600 m. However, in the future, the distances between two IACTs will be increased to about 600 m, resulting in very few stereoscopic events at very high energies. Above 100 TeV, the TAIGA-Muon component becomes relevant, since it will allow the muon content of the EAS to be measured, providing an additional hadron-tagging possibility. This improved hadron rejection was not taken into account in the sensitivity curve shown here. TAIGA-Muon is also operational in stand-alone mode during daytime, serving as a cosmic-ray detector. TAIGA will also allow morphological studies with a good angular resolution (∼ 0.1^∘) and spectral energy reconstruction down to 10% relative energy resolution.
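Before moving to the outlook, the hybrid scaled width and the associated quality factor introduced above can be illustrated with a toy calculation: scale each measured width by a MC-expected value, apply a cut, and evaluate Q = ε_γ/√(ε_h). Apart from these definitions, everything below (the lookup function, the distributions, the cut value) is invented for the sketch; the Q ≈ 4.6 quoted in the text comes from the full TAIGA simulations, not from this toy.

import numpy as np

rng = np.random.default_rng(42)

def w_mc_lookup(core_m, log_size, cos_zenith):
    # Stand-in for the MC-expected width w_MC(core, size, zenith);
    # the functional form and coefficients are purely illustrative
    return 0.10 + 4e-4 * core_m + 0.01 * log_size / cos_zenith

n = 50_000
core = rng.uniform(50.0, 300.0, n)
log_size = rng.uniform(2.0, 4.0, n)
cos_zen = np.cos(np.radians(rng.uniform(0.0, 40.0, n)))
w_mc = w_mc_lookup(core, log_size, cos_zen)

# Toy measured widths: gamma images scatter around w_MC, hadron images are wider
w_gamma = w_mc * rng.normal(1.0, 0.15, n)
w_hadron = w_mc * rng.normal(1.8, 0.5, n)
hscw_gamma = w_gamma / w_mc
hscw_hadron = w_hadron / w_mc

cut = 1.2                                  # keep events with hscw below the cut
eff_gamma = np.mean(hscw_gamma < cut)
eff_hadron = np.mean(hscw_hadron < cut)
print(f"gamma efficiency {eff_gamma:.2f}, hadron efficiency {eff_hadron:.3f}, "
      f"Q = {eff_gamma / np.sqrt(eff_hadron):.1f}")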
§.§ Outlook After the individual detector components have been verified, the next step is the implementation of a full hybrid reconstruction and a proof-of-principle of the hybrid concept within 2023. The hybrid sensitivity of the current 1 km^2 TAIGA array will reach down to a few × 10^-13 erg cm^-2 s^-1. In the future, a large 10 km^2 array with of the order of 1,000 HiSCORE stations and 25 IACTs is envisaged. Due to limitations of the Tunka site (total available area, weather quality), such a future site will be located elsewhere. Such a future array will achieve a sensitivity of better than 10^-13 erg cm^-2 s^-1, at the same time allowing morphological studies and good spectral reconstruction. A modified design combining TAIGA-IACTs with the smaller-sized SIT design described above might be envisaged. Another possibility is to use variable HiSCORE station distances in a graded design, which was shown to increase the effective area for a given number of stations while keeping a good angular reconstruction performance <cit.>. § SOUTHERN-HEMISPHERE EAS ARRAY PROPOSALS Until recently, the capabilities for effectively measuring γ-rays in the ultra-high-energy domain were relatively limited. Low γ-ray fluxes, coupled with high background levels from cosmic rays, make this a difficult regime for ground-based γ-ray astronomy, especially considering the technical challenges of achieving good γ/hadron separation. The perspective has nevertheless changed with the successful mapping of the Northern Hemisphere sky by HAWC above several TeV <cit.> and the first firm PeVatron discoveries by LHAASO <cit.>, which demonstrated that carefully designed particle array detectors, capable of circumventing the concealing backgrounds, can be effective instruments for UHE γ-ray astronomy. As all experiments in this energy domain have historically been located in the Northern Hemisphere, there is now considerable interest in developing facilities capable of measuring UHE γ-rays in the South, from where an extended view of the Milky Way, and access to the Galactic Center, promise many anticipated discoveries. Additionally, in view of the new era of multi-messenger astronomy, and considering the potential of a wide-field array for monitoring the VHE transient sky, there are also strong reasons to try and reach better sensitivities at lower energies, below the current threshold of several hundred GeV. EAS detectors are now well established, and have been proven ideal for spectral and morphological measurements of γ-ray signals above several tens of TeV. Future proposals are pushing the scientific frontiers of the technique, addressing a number of challenges driven by the need to achieve an improved sampling of the shower front as compared to previous experiments, and a wider dynamic range, at reasonable cost. When designing such facilities, elevations of circa 4 km a.s.l. are sufficient for the highest energies, as UHE showers reach their maximum development at an altitude of roughly 4.3 km for primary energies E_0 around 1 PeV, with the altitude of the maximum decreasing only very slowly (double-logarithmically) with E_0. Higher sites could still represent an advantage if the goal is to lower the energy threshold, as the number of particles at ground level increases severalfold between 4 and 5 km a.s.l. for showers ≲ 1 TeV <cit.>. Higher altitudes can also contribute to improved energy and angular resolutions, as shower fluctuations are smaller at lower atmospheric depths, particularly near shower maximum.
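The ~4.3 km figure can be checked with a back-of-the-envelope estimate: take the depth of maximum of an electromagnetic shower as X_max ≈ X_0 ln(E_0/E_c), with radiation length X_0 ≈ 37 g/cm^2 and critical energy E_c ≈ 85 MeV, and convert depth to altitude with a simple isothermal atmosphere. Both approximations are ours and only indicative; the sketch below is not taken from any of the cited design studies.

import numpy as np

X0_RAD = 37.0     # g/cm^2, radiation length of air
E_CRIT = 0.085    # GeV, critical energy in air
X_SEA = 1030.0    # g/cm^2, vertical atmospheric depth at sea level
H_SCALE = 8.4     # km, isothermal scale height (approximation)

def xmax_depth(e0_gev):
    # Approximate depth of maximum of a gamma-ray shower (vertical incidence)
    return X0_RAD * np.log(e0_gev / E_CRIT)

def altitude_at_depth(x_gcm2):
    # Altitude at which the vertical atmospheric depth equals x (isothermal model)
    return H_SCALE * np.log(X_SEA / x_gcm2)

for e0_gev in (1e2, 1e3, 1e4, 1e5, 1e6):          # 100 GeV to 1 PeV
    x = xmax_depth(e0_gev)
    print(f"E0 = {e0_gev:9.0f} GeV   X_max ~ {x:4.0f} g/cm^2   "
          f"altitude of maximum ~ {altitude_at_depth(x):4.1f} km")

For a 1 PeV primary this gives an altitude of the shower maximum of about 4.5 km, consistent with sites around 4 km a.s.l. being sufficient at the highest energies, while TeV showers peak much higher in the atmosphere, which is why a lower energy threshold pushes the designs towards ~5 km sites.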
To fully exploit the benefits of high altitudes, and achieve the lowest possible energy detection thresholds, it is necessary that the spacing between detector units is reduced so to increase the array active area. To this purpose, a dense (> 50% fill factor) array core, large enough to at least contain the shower footprint (≳ 10^4 m^2), is to be applied. Such a large, dense array, installed at high altitude, can potentially achieve very good angular (< 0.2^∘) and energy resolutions (< 20 %) above few TeV. An extension of the dynamic range towards the PeV would depend on complementing the design with an outer array O(10^6 m^2). In the Southern Hemisphere, the Andes are the only suitable region for the installation of such arrays, given the availability of high-altitude plateaus. Looking into the different experimental options available, the water Cherenkov technique carries the advantage of being sensitive to the more numerous secondary γ-rays, which results in an improved sampling of the shower front, especially at larger distances from the shower core, where the ratio of secondary γ-rays to electrons further increases. This results in a potentially better angular resolution reconstruction of individual showers, by increasing the sampling rate of secondary particles and improving shape determination of the shower front[As will be shown, some experimental setups place a thin sheet of lead above conventional particle counters (e.g., scintillators) to yield additional signal from the conversion of secondary γ-rays into electrons that can be measured <cit.>.]. Having a good angular resolution can also improve sensitivity, helping achieving a more favourable signal-to-noise ratio within the angular range of the point spread function. For that, in addition to a good sampling and shower core determination, accurate timing of the shower front at each detector unit is essential. In the following we will briefly present some of the current projects for Southern-Hemisphere ground-based gamma-ray detectors. The emphasis will be on the techniques proposed and how they differentiate with respect to the basic design considerations and the technology adopted. Although at very different stages of development and maturity, we find it useful to summarise and compare, whenever possible, the future performance expectations and potential of each proposal to advance the technology and measurement techniques. §.§ Southern Wide-Field Gamma-Ray Observatory, SWGO The Southern Wide-Field Gamma-ray Observatory (SWGO) Collaboration was founded in July 2019 for the planning and design of a wide-field ground-based γ-ray observatory in South America. After the successful experience of the HAWC <cit.> water Cherenkov array, a few proposals emerged for the construction of the first Southern-Hemisphere installation of the kind, motivated by the vast scientific potential behind a continuous very-high energy survey of the southern sky <cit.>. The initiatives were based upon the common concept of a dense, large-area, and high-altitude EAS array, which would significantly increase the VHE sensitivity over HAWC, especially towards the lower energies, below several hundred GeV. The SWGO Collaboration finally resulted from a joint effort between members of the SGSO Alliance[See the SGSO White-Paper here: <https://arxiv.org/pdf/1902.08429.pdf>.] 
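Before turning to the individual proposals, the link between shower-front timing and angular resolution noted above can be made concrete with a toy plane-wave reconstruction: stations on a grid record arrival times with a given Gaussian jitter, and a least-squares plane fit returns the arrival direction. The array layout, jitter values and perfectly flat shower front are all simplifications chosen for this sketch; real arrays do considerably worse because of shower-front curvature and thickness, shower-to-shower fluctuations and the limited number of triggered stations near threshold, but the linear scaling of the direction error with the timing jitter is the point being illustrated.

import numpy as np

C_LIGHT = 0.2998   # m/ns

def plane_fit_direction(x, y, t):
    # Least-squares fit of a plane wave to station arrival times;
    # returns (zenith, azimuth) in degrees
    A = np.column_stack([np.ones_like(x), x, y])
    t0, a, b = np.linalg.lstsq(A, t, rcond=None)[0]
    sin_zen = np.clip(C_LIGHT * np.hypot(a, b), 0.0, 1.0)
    return np.degrees(np.arcsin(sin_zen)), np.degrees(np.arctan2(b, a))

def angular_distance_deg(z1, a1, z2, a2):
    z1, a1, z2, a2 = map(np.radians, (z1, a1, z2, a2))
    cosd = np.cos(z1) * np.cos(z2) + np.sin(z1) * np.sin(z2) * np.cos(a1 - a2)
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

# Toy array: 10 x 10 stations on a 100 m grid (roughly 1 km^2), flat shower front
xs, ys = [g.ravel() for g in np.meshgrid(np.arange(10) * 100.0,
                                         np.arange(10) * 100.0)]
zen_true, azi_true = 25.0, 140.0
d = np.array([np.sin(np.radians(zen_true)) * np.cos(np.radians(azi_true)),
              np.sin(np.radians(zen_true)) * np.sin(np.radians(azi_true))])

rng = np.random.default_rng(7)
for jitter_ns in (0.2, 1.0, 5.0):
    errors = []
    for _ in range(300):
        t = (d[0] * xs + d[1] * ys) / C_LIGHT + rng.normal(0.0, jitter_ns, xs.size)
        zen, azi = plane_fit_direction(xs, ys, t)
        errors.append(angular_distance_deg(zen, azi, zen_true, azi_true))
    print(f"timing jitter {jitter_ns:4.1f} ns -> median direction error "
          f"{np.median(errors):.3f} deg")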
and the LATTES Collaboration[Named after the Brazilian physicist Cesar Lattes, co-discoverer of the pion in 1947 <cit.>, and advanced mainly by groups in Brazil, Italy and Portugal, as well as the Czech Republic.] <cit.>. This initial proposal focused on a baseline design <cit.> that would increase the observatory effective area over HAWC, and lower its detection energy threshold, by a combination of high array fill-factor (well-above 50%, within a core area ∼ 10^4-5 m^2) and higher altitude installation site (close to 5 km a.s.l.). Improved background rejection power by the efficient identification of muons at individual detector units is a central design goal. The concept also considers surrounding the central detector by a larger sparse array to provide higher energy sensitivity for the observation of PeVatrons. The Observatory Concept Drawing from these ideas, the baseline concept for the SWGO observatory was defined as: * a ground-level particle detector array with close to 100% duty cycle and order steradian field of view, to be installed in South America above 4.4 km a.s.l, between latitudes -30^∘ and -15^∘. * to cover a wide energy range, from 100s GeV to 100s TeV, and possibly extending up to the PeV scale. * based primarily on water Cherenkov detector units, consisting on a high fill-factor core with area considerably larger than HAWC, and significantly better sensitivity, surrounded by a low-density outer array. With respect to the development stage of the project, the SWGO Collaboration is in the R&D Phase, which aims to deliver a detailed proposal that will guide the construction and operations of the future γ-ray observatory. The R&D Phase is expected to be concluded in 2024, along with a final choice of the installation site, so that construction and operations could start as early as 2026. With a strong international collaboration of nearly 200 scientists from 14 member countries,[<www.swgo.org>] and a number of associated researchers from around the world, SWGO is the first global proposal for an EAS array in the Southern Hemisphere, complementing both LHAASO and CTA as the next-generation of ground-based gamma-ray observatories. The SWGO R&D programme is based on a Baseline Configuration which serves as reference for the array design and detector technology options to be investigated. Its main characteristics are described in Table <ref>. It consists of a core array of circa 5,700 water tanks, spaced in a grid with 4 m gaps between units, and an outer array of at least 800 detectors, with an inter-unit spacing of 16 m. As far as the WCD units are concerned, the same baseline design is considered for the core and the outer arrays, and two major design options are under study. The first consists of a double-layered (2.5 m height top; 0.5 m height bottom) cylindrical tank, with a diameter of 3.8 m <cit.>. Calorimetry of the shower electromagnetic component is done in the upper WCD layer, whereas the bottom layer is primarily for muon tagging. Each layer is equipped with a single, large-area photo-multiplier tube (PMT) placed at the center. Deployment in a lake for improved shielding from lateral penetrating particles is being investigated. The second option is a multi-PMT shallow-WCD tank with a diameter of 3.8 m and 1.75 m height, which aims to explore the asymmetric illumination of three upward-facing PMTs to identify individual muons <cit.>. 
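The baseline configuration just described can be turned into a quick order-of-magnitude cross-check of the core area, fill factor and water budget. The sketch below assumes that the 4 m figure quoted above is the grid pitch (centre-to-centre spacing) rather than the clear gap between tanks, and that the full double-layer depth is filled with water; both are simplifying assumptions, so the output is indicative only.

import math

# Order-of-magnitude sketch of the SWGO baseline core (illustrative assumptions).
N_CORE = 5700          # core WCD units in the baseline configuration
DIAMETER = 3.8         # tank diameter [m]
PITCH = 4.0            # assumed centre-to-centre grid spacing [m]
DEPTH = 2.5 + 0.5      # double-layer water depth [m] (top + bottom layers)

unit_footprint = math.pi * (DIAMETER / 2.0) ** 2     # [m^2]
cell_area = PITCH ** 2                               # grid cell per unit [m^2]

fill_factor = unit_footprint / cell_area
core_area = N_CORE * cell_area                       # instrumented core [m^2]
water_per_unit = unit_footprint * DEPTH              # [m^3]
core_water = water_per_unit * N_CORE                 # core array only [m^3]

print(f"fill factor      ~ {fill_factor:.0%}")       # ~70%, i.e. well above 50%
print(f"core area        ~ {core_area:.1e} m^2")     # ~10^5 m^2
print(f"water per unit   ~ {water_per_unit:.0f} m^3")
print(f"core water total ~ {core_water:.1e} m^3")    # order 10^5 m^3
# The site-selection studies quote a water requirement of order 10^5 m^3,
# in line with this rough estimate; actual unit designs and depths may differ.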
The key science topics which the SWGO Collaboration aims to address, some of which are unique to southern-hemisphere installations, include: * At the lowest energies, < 1 TeV, focus is on transient sources, exploring the wide-FoV and near-continuous duty cycle of the observatory to work as a monitoring and trigger instrument complementary to CTA. The principal science targets are Active Galactic Nuclei (AGN) and Gamma-ray Bursts (GRB), both of which are candidate multi-messenger counterparts of VHE neutrinos and gravitational waves, respectively. * At the other extreme of the energy range, > 100 TeV, science questions are dominated by the search for PeVatrons, putative sources responsible for the acceleration of knee cosmic-ray particles. * Access to the Galactic Center and Halo offer the possibility of in-depth searches for Dark Matter signals up to ∼ 100 TeV, constraining therefore the entire energy range of WIMP models. * The study of Galactic diffuse emission, and extended sources, such as PWNe and TeV Halos, are an important target which will benefit from improved angular resolution at energies above several 10s TeV. * Finally, efficient single-muon detection capability will allow precise measurement of the muon content in hadronic showers, opening the way for mass-resolved cosmic-ray studies from 10s TeV to the PeV scale. In the following we will detail some of the array configuration and experimental detector work underway for the observatory design. The Array Configuration Evaluation The Observatory layout will consist of a compact core surrounded by a sparse array. The SWGO Collaboration is currently investigating the array configuration options (see Figure <ref>) that impact performance on the basis of a predefined set of quantitative science benchmarks <cit.>. The basic layout parameters to be investigated are the total array area and fill-factor, as well as the site elevation, which will primarily impact the energy detection threshold. In assessing the array sensitivity, the main performance figures will be the effective area[Defined as the geometrical array area convolved with the detection efficiency.], the γ/hadron discrimination efficiency, and the angular resolution, all considered over a target energy range. Ultimately, these parameters will reflect the fraction of the shower energy registered by the array, and in consequence the amount of information available for shower reconstruction (see e.g. <cit.>). The main array configuration trade-offs (at a fixed cost) will likely play out between the low energies (< 1 TeV), mainly driven by the site elevation, fill factor, and detector unit threshold, and the high energies (> 100 TeV), driven by the overall array area and background rejection efficiency. In addition to the array layout, the design of the individual WCD stations (including geometry, size and choice of electronics and photosensors) will determine the energy threshold to secondary particles, dynamic range and timing accuracy of the detector units, and will be discussed further ahead. The ability to discriminate between γ- and cosmic ray-initiated air showers is another fundamental aspect of the design. For the low energies, a dense, and sufficiently large core array is fundamental, in order to achieve good sampling of the entire shower front, of O(10^4 m^2). 
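How the three performance figures mentioned above (effective area, γ/hadron discrimination efficiency and angular resolution) combine into point-source sensitivity can be illustrated with a toy, background-limited significance estimate. All rates below are placeholders chosen for illustration and are not SWGO performance numbers.

import math

# Toy figure of merit: how effective area, angular resolution and hadron
# rejection combine into detection significance (background-limited regime).
def significance(a_eff_m2, ang_res_deg, eps_rej, hours,
                 gamma_rate_per_m2_h=1e-6, cr_rate_per_m2_h_deg2=5e-3):
    """Crude S = N_gamma / sqrt(N_bkg) for a point source.

    gamma_rate_per_m2_h   : photon rate per m^2 per hour above threshold (placeholder)
    cr_rate_per_m2_h_deg2 : cosmic-ray rate per m^2 per hour per deg^2 (placeholder)
    eps_rej               : fraction of cosmic rays surviving gamma/hadron cuts
    """
    n_gamma = gamma_rate_per_m2_h * a_eff_m2 * hours
    n_bkg = cr_rate_per_m2_h_deg2 * a_eff_m2 * hours * ang_res_deg ** 2 * eps_rej
    return n_gamma / math.sqrt(n_bkg)

base = significance(a_eff_m2=8e4, ang_res_deg=0.3, eps_rej=1e-3, hours=2000)
better_psf = significance(a_eff_m2=8e4, ang_res_deg=0.15, eps_rej=1e-3, hours=2000)
better_rej = significance(a_eff_m2=8e4, ang_res_deg=0.3, eps_rej=1e-4, hours=2000)

print(f"baseline                    : {base:5.1f} sigma")
print(f"halving the angular res.    : {better_psf:5.1f} sigma  (x2 gain)")
print(f"10x better hadron rejection : {better_rej:5.1f} sigma  (x3.2 gain)")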
At these low energies, background discrimination is mostly based on the different ground patterns of active stations produced by hadronic and γ-induced showers, exploring the differences between the lateral distribution of particles with respect to the shower core position. Critical design factors for the low energies are the amount of signal collected, and the massively increasing background shower rates, which require a suitable trigger strategy to guarantee that an effective reduction of the energy detection threshold is achievable. Large fluctuations in the development and ground signal of low-energy showers also play a critical role, typically degrading the energy resolution, and altitude is a critical parameter. The angular resolution of an EAS array depends mainly on the array size and the temporal resolution of single stations, which is typically a few ns for the WCD stations considered for SWGO. A high fill-factor core will also impact the precision that can be achieved in the shower core reconstruction. In the low energy regime, the sensitivity of the WCD stations to all shower particles, including secondary photons, which are ∼ 10× more numerous than the charged e^-e^+ pairs, represents an important design advantage, which contributes to improving angular resolution. The energy resolution, obtained from fitting the lateral signal distributions at ground as a function of shower core distance, also benefits from all-particle sensitivity. Nevertheless, because of greater shower fluctuations, it is inevitable that the angular and energy resolutions will degrade significantly at low energies. A worse angular resolution will have, in turn, a negative effect on the signal-to-background ratio N_γ/√(N_CR), and consequently on sensitivity[The number of background events scales as N_bkg(E) ∝ Δθ^2 ϵ_rej N_CR(E), where Δθ is the angular resolution, ϵ_rej the fraction of cosmic rays surviving the γ/hadron cuts, and N_CR(E) the cosmic-ray rate; both Δθ and ϵ_rej degrade severely below ∼ 1 TeV.]. For the highest energy γ-rays, a sparse outer array is a cost-effective way of improving effective area, as the particle density in energetic showers allows for a good reconstruction with fewer stations spread over a larger area (fill factors of a few %). The outer array also plays an important role in increasing the muon-sensitive area, as these particles tend to have higher transverse momentum and spread to larger (> 150 m) distances from the shower core, over an area ∼ 10^5 m^2. The critical array design factor is therefore to define the minimal density of stations needed for a good sampling of the shower front and to guarantee the desired γ-hadron discrimination, shower core localisation, and energy resolution. The fact that cosmic-ray showers are about 4 orders of magnitude more abundant than γ-rays at these high energies implies that hadron rejection levels of 10^-4 or higher are required. Fortunately, the number of active stations hit by muons will be high enough so that excellent γ-hadron separation can be achieved, provided that a high muon detection efficiency, as well as the required array areas, are available. In principle, the large amount of electromagnetic energy deposited in the stations (mostly from secondary high-energy photons from π^0 decays in proton events) can also be exploited for hadronic rejection. The requirements for γ-hadron discrimination, observatory sensitivity and energy/angular resolution will ultimately drive the trade-off (at fixed cost) between array density and total area covered for the outer array.
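The role of the muon-sensitive area in reaching such rejection levels can be made concrete with a toy Poisson estimate, in which a hadronic shower survives the γ/hadron cut only if it leaves no detected muon. The muon densities, detection efficiency and area below are placeholders, not simulated values for any specific configuration.

import math

# Toy hadron-rejection estimate from muon counting: if a proton shower delivers
# on average <N_mu> muons into the muon-sensitive area, the chance of it being
# tagged as "muon-poor" (gamma-like) is roughly the Poisson probability of
# detecting zero muons.  All inputs are placeholders.
muon_sensitive_area = 4000.0       # [m^2] total muon detection area (assumed)
muon_eff = 0.9                     # per-muon detection efficiency (assumed)

for rho_mu in (5e-4, 1e-3, 2e-3, 5e-3):   # muon density at the array [m^-2]
    mean_mu = rho_mu * muon_sensitive_area * muon_eff
    print(f"muon density {rho_mu:.0e} /m^2 -> <N_mu> = {mean_mu:5.1f} "
          f"-> P(no muon) = {math.exp(-mean_mu):.1e}")
# Reaching the ~1e-4 rejection required above calls for <N_mu> of order 9-10,
# which sets the muon-sensitive area needed for a given muon density.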
Current simulation work is being conducted for evaluating the performance of various array configurations and detector choices <cit.>. Figure <ref> shows the phase-space under exploration in the R&D Phase, bracketed by the array options and detector unit designs under consideration. The performance baseline is set by the minimum configuration described in Table <ref> – array configuration A1 in Figure <ref>. In general terms, a lower γ-ray energy detection threshold can be achieved by reducing the individual unit threshold and deployment at higher elevation sites. Improvements in angular resolution and background rejection will result in overall sensitivity gains. The higher energy enhancement indicated above 100 TeV will be driven by the size of the outer array and background rejection efficiency, which shall scale with the total muon detection area available. The optimisation work is carried out for a same observatory location and magnetic field, at predefined altitudes between 4.1-5.2 km, as well as for a fixed estimated total cost. The Detector Design Options The individual detector units define the accuracy of local measurements of the arrival time and energy density of the shower particles, as well as the capability for single-muon identification, and will directly impact the overall array performance. The large scale of the observatory, and the altitude and remoteness of the installation site, imply the need for little or no maintenance as a major design goal. Water scarcity and environmental concerns also present important constraints. A number of technological options are under investigation for the individual WCD units, as presented in Figure <ref> <cit.>. In particular, two major mechanical concepts are being considered for the construction of the core detector array: bladders installed in surface tanks, which could be made either of metal as in HAWC <cit.>, or rotomolded plastic as in the Pierre Auger Observatory <cit.>; and floating bladders directly deployed in a natural lake <cit.> or an artificial pool. Regarding the common design elements, the requirement of an improved sub-TeV sensitivity implies the need to lower the energy threshold for detection of secondary shower particles (average energy ∼ 10 MeV). For this, the use of reflective liners (tyvek internal layer) within the water volume enhances the number of detectable particles per air shower, improving trigger and energy reconstruction at lower energies. The presence of upward-facing PMTs are important for accurate timing within the individual unit, by measuring the direct Cherenkov light from particles entering the water volume, and inter-cell timing should be such as to preserve temporal accuracy. To achieve precision shower core reconstruction, not only the array fill factor, but also a compact detector unit size is important, as this increases the granularity of the array and the unit sensitive area, i.e. photon-sensitive area with respect to that of the detector unit footprint. The unit aspect ratio should also be optimised, constrained by the Cherenkov angle in water (∼ 41^∘), with the requirement to maximise direct illumination of the PMTs at the base. One possible design solution under study is the double-layered WCD  <cit.>, with a γ-hadron separation strategy based on the use of vertical segmentation to identify energetic muons (typically few GeV) that penetrate the bottom layer. 
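The geometric constraint set by the Cherenkov angle on the unit aspect ratio can be sketched in a few lines, assuming a vertical, relativistic particle entering at the edge of the tank and a single photosensor at the centre of the base; this is an illustration of the argument above, not a design prescription.

import math

# Geometry linking the water Cherenkov angle to the tank aspect ratio.
n_water = 1.33                                      # refractive index of water (approx.)
theta_c = math.degrees(math.acos(1.0 / n_water))    # Cherenkov angle for beta ~ 1
print(f"Cherenkov angle in water ~ {theta_c:.0f} deg")        # ~41 deg

radius = 3.8 / 2.0        # e.g. a 3.8 m diameter unit [m]
min_depth = radius / math.tan(math.radians(theta_c))
print(f"depth for direct light from the rim to reach a central base PMT"
      f" ~ {min_depth:.1f} m")
# This is of the same order as the ~2.5 m upper layer of the double-layered
# design discussed above; narrower or deeper units relax the constraint at the
# cost of granularity or water volume.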
The bottom layer of such a double-layered unit could, in addition, provide a larger unit dynamic range, by enabling calorimetry even when the top layer PMT saturates in the case of energetic showers and high particle density. Another option under consideration is the use of shallow, multi-channel WCDs <cit.>. In this case, the design trade-off is between the reduced amount of water and the number of photo-sensors – facing upwards at the bottom of the tank – which would distinguish muons from electromagnetic particles based on the charge asymmetry between the PMTs, and rise times of the signals. Here, the asymmetry for muons is expected to be larger, as they traverse the entire water volume. Both options are in principle compatible with any of the mechanical implementations proposed. Adaptations may need to be considered in both designs for the outer array, in order to reduce scaling costs and avoid losses in the muon tagging efficiency. Concerning photosensors, large photo-active effective areas (of which vacuum PMTs provide the best relative cost) are needed to achieve a single photo-electron threshold and lower the unit energy threshold, but electronics should be designed to avoid introducing bias in the shower reconstruction due to saturated detectors near the shower core. Nanosecond inter-unit timing accuracy and the ability to deal with high particle rates, especially at the highest altitudes, are other requirements to be taken into account for the readout and trigger electronics design. Simulations are ongoing <cit.> to evaluate the performance of the single WCD unit options with respect to size, aspect ratio, and PMT configurations, as well as the reflective material for the inner walls. Novel analysis strategies are also being considered <cit.>. The parameters taken into account to compare simulated unit performance are: the number of photo-electrons produced, the time resolution of the measurement of the first photon, and the detection efficiency, measured as the fraction of particles entering the tank that produce a signal above threshold. Prototyping efforts are currently concentrated on the most critical items, aiming at collecting sufficient information to choose between the candidate options in Figure <ref>. Reliability of electronics and maintenance requirements at high-altitude sites is one such example. The robustness and easy deployment of the detector units is another issue under consideration, as well as bladder production, deployment and stability, which are especially critical for a lake-based solution. As a final note, the Southern-Hemisphere location of SWGO will complement the reach of northern wide-field facilities such as LHAASO for all-sky coverage, and will allow the synergies with CTA to be fully exploited. In order to optimise the overlap with LHAASO and to maximise the exposure to Galactic sources, and in particular the Galactic Centre (δ = -28.9^∘), SWGO is planned for installation in a latitude range between -15^∘ and -30^∘. That, in conjunction with the altitude constraints, for which a site above 4.4 km a.s.l. is preferred, leaves the South American Andes as the only viable option. Preliminary searches have found several candidates in Argentina, Bolivia, Chile and Peru (see Table <ref>), each of which has specific strengths and matches some array or detector technology options better than others. Overall, one of the main challenges for a final selection is water availability, of which an estimated ∼ 10^5 m^3 will be required <cit.>.
§.§ An Andean large-area particle detector for γ-rays - the ALPACA Experiment The Andes Large area PArticle detector for Cosmic ray physics and Astronomy experiment, ALPACA <cit.>, will be devoted to the continuous observation and study of γ-ray signals from PeV cosmic-ray accelerators (PeVatrons) in the Galactic plane and the Galactic center. At a more advanced stage of development than other Southern Hemisphere proposals, it will be the first experiment to explore the Southern Hemisphere sub-PeV γ-ray sky. ALPACA will be a high duty-cycle, wide-field-of-view observatory sensitive to γ-rays above several tens of TeV, and capable of efficient particle identification and background rejection in the UHE range thanks to an underground muon detector array. The ALPACA Collaboration[<www.alpaca-experiment.org>] is an international project launched between Bolivia and Japan in 2016, and led by the Institute for Cosmic Ray Research (ICRR) of the University of Tokyo. It includes several member institutions from Mexico as well. The air shower array is currently being constructed at a high-altitude plateau, 4,740 m a.s.l., near the Chacaltaya mountain in Bolivia, a historical site for cosmic-ray research in South America, on the outskirts of La Paz. At such optimal altitudes the EAS of sub-PeV γ-rays have reached their maximum development. The plateau has a flat area of over 500 m × 500 m, ideal for the installation of an extended array, as well as basic infrastructure (road, electricity) accessible nearby. The water source for the WCD units is currently under study, with a promising location identified within 1 km from the site, where small lakes are present. Alternatively, underground water is available at 50 m depth. Truck transportation from the nearby town of El Alto is also possible, at a reasonable cost <cit.>. The ALPACA array (Figure <ref>) consists of two main components: the surface air shower detector array (SD), for energy reconstruction and timing of the shower front, and an underground muon detector array (MD). The presence of a dedicated muon array to select the muon-poor γ-induced showers greatly improves the sensitivity of the observatory, as validated by the similar concept applied in the Tibet-ASγ experiment <cit.>. Each SD unit consists of a 1 m^2 active area, 5 cm thick plastic scintillator viewed by a fast-timing PMT, with a 5 mm thick lead plate on top. The lead is used to convert the EAS secondary γ-rays into pairs of e^- and e^+ (Rossi transition effect) detectable by the scintillators. The MDs are water Cherenkov detectors placed 2 m underground, providing shielding equivalent to 16 radiation lengths against the shower's electromagnetic component. An individual MD is a cluster of 16 cells, each with a 56 m^2 area and 1.5 m water depth, viewed by a large 20" PMT placed on top. A muon > 1 GeV can penetrate the soil shielding and produce a clean signal of 24 photo-electrons on average in the WCD PMTs. The pure muon signals allow for a 99.9% hadron rejection, while retaining 80% of the γ-ray signals above 100 TeV. See Table <ref> for a detailed description of the array. Regarding the status of the project, the collaboration is now constructing a prototype array, ALPAQUITA, covering 1/4 of the full array area; it is expected to start operations in 2022, as soon as the first underground muon detector (MD) cluster is installed <cit.>.
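The impact of such a muon cut can be summarised with the standard quality factor Q = ε_γ/√(ε_h), evaluated here with the efficiencies quoted above; this is a minimal sketch, and the actual figures are energy dependent.

import math

# Quality factor of the ALPACA muon cut, using the efficiencies quoted above:
# ~80% of >100 TeV gamma rays are retained while only ~0.1% of hadronic
# showers survive the cut.
eps_gamma = 0.80
eps_hadron = 1.0 - 0.999            # 99.9% hadron rejection

q = eps_gamma / math.sqrt(eps_hadron)
print(f"Q factor of the muon cut ~ {q:.0f}")    # ~25
# In the background-limited regime, reaching the same significance without the
# cut would require roughly Q^2 (~640) times more exposure.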
Shortly after the start of ALPAQUITA operations, an extension to cover the full array area is scheduled (half ALPACA), with half the surface detector (SD) density and three additional underground MD clusters <cit.>. The full ALPACA array is expected to be completed by the middle of the decade. The configurations of the array at each stage are given in Table <ref>. As shown in Table <ref>, the expected full-array angular resolution (50% containment) at 100 TeV is 0.2^∘ (0.25^∘ for half ALPACA <cit.>), similar to that achieved by Tibet-ASγ <cit.>. Simulations based on extrapolations of H.E.S.S. measurements of the Galactic Center <cit.> estimate that ALPACA should be able to detect > 100 TeV γ-rays from the GC after about 1-1.5 years of observations. Beyond ALPACA, the Collaboration plans a future extension towards a km^2 array (Mega ALPACA) to achieve sensitivity in the PeV energy range. The future extension should follow the same technology as currently applied. ALPAQUITA The prototype array ALPAQUITA <cit.> is currently under construction, with its array of 97 SD units completed by 2022. The surface array density of ALPAQUITA is the same as for the full array, with an inter-unit spacing of 15 m. Despite its reduced size of 18,450 m^2, ALPAQUITA is not intended as an engineering prototype only, as its expected sensitivity is enough to detect a few interesting sources. The array should become operational once construction of the first 900 m^2 MD cluster is finalised, towards the end of 2022. With a simulated energy resolution of ∼ 21%, an angular resolution of ∼ 0.27^∘ (at 100 TeV; 68% containment), and an expected detection area of 12,600 m^2 above 30 TeV (showers at the edge of the array are not recorded) – and thanks to the effective background rejection power of the underground muon array – ALPAQUITA is expected to detect five Southern-Hemisphere sources seen by H.E.S.S., as an early validation for the first year of the experiment. In the case of the PeVatron candidate HESS J1702-420A <cit.>, an unassociated hard-spectrum source seen by H.E.S.S. beyond 50 TeV, ALPAQUITA should be able to extend spectral measurements beyond 300 TeV, if no cut-off is present. §.§ The Cosmic Multiperspective Event Tracker (CoMET) Project The Cosmic Multiperspective Event Tracker[<https://alto-gamma-ray-observatory.org>] (CoMET) <cit.> is a project for a wide-field-of-view atmospheric shower array working in the VHE regime. The proposal is based on a hybrid design combining a ∼ 160 m diameter array of particle detector units (called ALTO) with atmospheric Cherenkov light collectors (CLiC stations), inspired by the HiSCORE design <cit.>. The array is to be placed at high altitude, over 5 km a.s.l., to increase the capability to detect γ-rays down to 200 GeV. The main goal and key innovative aspect of the hybrid approach is to optimise the EAS sampling technique by improving shower reconstruction during the dark periods when the CLiC stations will be active, for better energy resolution and shower localisation. This is a key design element to achieve the proposed scientific objectives of CoMET, that is, the study of soft-spectrum extragalactic γ-ray sources, such as Active Galactic Nuclei (AGN) and Gamma-ray Bursts (GRBs). The project is currently in its R&D Phase, and the prototypes of both the particle and atmospheric Cherenkov light detectors are under test at Linnaeus University, in Sweden. Concurrent simulations of the full detector response and shower reconstruction capability are underway.
The experiment aims to install 1242 ALTO particle detector units, distributed in 207 clusters of 6 units over a circular area of ∼ 20,000 m^2. The particle array is complemented by 414 CLiC stations placed on top of the ALTO units, at a density of 2 atmospheric Cherenkov stations per particle array cluster (see Figure <ref>). The two components of the experiment provide independent, but complementary information. The CLiC detectors are not intended for the array trigger, which will be based on the coincidence of WCD detectors. Shower core reconstruction is done by modelling the lateral distribution function, as measured by the WCD stations. The timing from the WCD signals is used to model the shower front and reconstruct arrival direction. The reduced size of the tanks and the close-packed hexagonal design are important factors in achieving a fine sampling of the air-shower footprint at ground. Shower development information from the CLiC stations can be used to further improve γ-ray source localisation during dark time. Simulations show that adding the CLiC information to the ALTO-only analysis yields a 10% better angular resolution in the full energy range from 200 GeV to 100 TeV, and a 30% better energy resolution around 1 TeV, with 12% improved background suppression and negligible loss of γ-ray-like events. This is shown to have a particular impact on improving detector sensitivity below 10 TeV <cit.>. The event analysis and γ-hadron separation strategies are described in <cit.>. Below we give a quick outline of the individual CoMET detection units. ALTO Stations Each ALTO unit <cit.> consists of a WCD, a 2.5 m high and 4.15 m hexagonal-shaped tank filled with ∼ 25 m^3 of water, positioned over a 25 cm concrete slab. Underneath this structure, a cylindrical liquid Scintillator Detector (SD), filled with Linear Alkyl Benzene (LAB), is placed (see Figure <ref>). The WCDs are primarily used for the detection of the secondary particles in the cascade, while the SDs are conceived for muon counting, with the concrete slab acting as an absorber to prevent the electromagnetic component from reaching the scintillator. The inner walls of the WCD are blackened to improve the timing accuracy from the PMT signal, thus helping to reconstruct shower direction. The use of closely-packed small tanks, combined with more precise electronics and time-stamping, is an important proposed improvement with respect to current WCD designs, providing a fine-grained view of the shower particles at ground, for good arrival direction reconstruction and background rejection. The application of SDs also improves background rejection by muon-vetoing of cosmic-ray showers. CLIC Stations The CLiC stations <cit.> are inspired by the HiSCORE wide-FoV technology used in the TAIGA experiment <cit.>, and consist of an array of 8 × 3" PMTs; this is different from the original HiSCORE setup, which uses four channels consisting of large Winston cones coupled to 8" PMTs, and aims to improve the signal-to-noise ratio for low-energy atmospheric showers. Each CLiC 3" PMT is coupled to a 14 cm Winston cone light guide, providing a final 0.1 m^2 collection area per detector station, and an angular cut-off of 30^∘. A UV-pass filter is applied to filter out the night sky background (NSB) and ambient light. The 16 channels from the two CLiC stations within a single ALTO cluster are finally summed into a single signal.
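A quick geometric cross-check of the CLiC collectors can be made with the ideal three-dimensional concentrator relation C_max = 1/sin^2θ_cut. The 3" photocathode diameter assumed below is an illustrative value, and real Winston cones do not reach the ideal limit, so the numbers are only indicative.

import math

# Geometry of a CLiC station: eight 14 cm Winston cones feeding 3" PMTs.
# The photocathode diameter is an assumed value for illustration.
n_cones = 8
d_in = 0.14            # cone entrance diameter [m]
d_pmt = 0.076          # ~3 inch photocathode diameter [m] (assumed)

collection_area = n_cones * math.pi * (d_in / 2.0) ** 2
concentration = (d_in / d_pmt) ** 2                   # geometric concentration
theta_cut = math.degrees(math.asin(1.0 / math.sqrt(concentration)))

print(f"collection area per station ~ {collection_area:.2f} m^2")  # ~0.12 m^2
print(f"geometric concentration     ~ {concentration:.1f}")
print(f"ideal angular cut-off       ~ {theta_cut:.0f} deg")
# Both values are close to the ~0.1 m^2 collection area and ~30 deg angular
# cut-off quoted above for the CLiC stations.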
Detailed simulations of event detection and analysis <cit.> indicate an expected peak performance angular resolution of ∼ 0.15^∘ at 20 TeV, which remains < 1^∘ at energies as low as 300 GeV. Peak point flux sensitivity of < 10^-11 erg.cm^-2.s^-1 after 300 hours for near-zenith events is achieved around 5-10 TeV. Towards the low-energies the expected sensitivity is ∼ 10^-10 erg.cm^-2.s^-1 at 300 GeV. This implies that the TeV-bright GRB 190114C could be detected within 30 min, a similar timescale than that needed to reach a 5-σ detection of PKS 2155-304 in flaring state <cit.>. Further R&D studies are underway, focusing particularly on improving the sensitivity < 600 GeV by loosening the number of WCD stations needed for signal trigger and analysis. Prototyping efforts for achieving an improved timing accuracy through the CLiC stations is also underway. §.§ RPC-based proposals Resistive Plate Chambers are among the types of detectors applied in particle sampling experiments, and have most notably provided the basis for the ground-based detector ARGO-YBJ, operational between 2007 and 2013 in Tibet, at an altitude of 4,300 m <cit.>. They are characterised by very high particle detection efficiency and timing accuracy. RPCs consist of a thin gas volume, where primary ionisation and avalanche multiplication occurs, sandwiched within a pair of resistive plates, where copper strips are placed to read out the signal generated inside the gas gap. The front-end electronics is embodied in the strip panels and the whole is sealed in mechanical support panels (see Figure <ref>). In the case of ARGO-YBJ, the active elements consisted exclusively of RPCs, arranged as a full coverage (92% fill factor) central carpet, 75×75 m^2 in area, and surrounded by a partially instrumented ring to improve the reconstruction of events with the core falling outside the carpet area. RPC-based detectors have some attractive features, which represent key instrumental strengths with respect to other approaches. From one side, the very dense sampling achievable with the RPC carpet can enable operation down to low energy thresholds, with very good position resolution, typically ∼ cm. Additionally, the high-granularity of the read-out of the RPC carpet can be exploited for very good energy and angular resolutions, and the flexible digital/charge read-out schemes opens the possibility for operation over a very wide dynamic range. Nevertheless, the intrinsic challenges of operating a gas-based detector at remote locations, and the associated costs of the technology, have traditionally limited its applicability and the achievable effective areas. In this sense, low gas consumption and simple electronics are desirable elements for an autonomous operation design that can balance the cost and power consumption constraints. Background discrimination capability, which is based on the study of the shower > 40 m away from the core, has also been limited by the small array sizes applied to this date in RPC-based experiments such as ARGO-YBJ, and consequently sensitivity has been historically underachieving. These factors are among the motivations for the use of RPCs in a hybrid approach. 
The STACEX concept, which is currently the most relevant proposal to the continued development of RPC technology as a viable tool for ground-based γ-ray astronomy aims to improve the sensitivity of RPC-based arrays through such a hybrid approach using WCDs[An earlier proposal for a design combining RPC units and WCD tanks was the LATTES <cit.> project, which was nevertheless discontinued]. The STACEX Concept STACEX <cit.> is the concept for a high-altitude, hybrid detector composed of RPCs and water Cherenkov stations, which aims to achieve good shower reconstruction and γ/hadron separation over a very large dynamic range, from around 100 GeV up to 10 PeV. The team involved in developing the proposal is formed by scientists from the Italian National Institutes for Nuclear Physics (INFN) and for Astrophysics (INAF), responsible for previously operating ARGO-YBJ. The RPC detector proposed for STACEX is therefore similar to that used in ARGO-YBJ, but incorporates some evolution from the previous design, such as operation in avalanche, instead of streamer mode, as well as thinner electrode plates and new front-end electronics adequate to the avalanche mode operations. The STACEX concept is based on a full-coverage (∼ 90%) RPC carpet core, 150×150 m^2 in area, 4× that of ARGO-YBJ. An additional 0.5 mm lead layer would be added on top of the RPCs to improve the quality of the temporal profile by exploiting the conversion of the secondary γ-rays. The arrangement is complemented by a dense muon detector array placed below the carpet, constituted by WCD tanks buried under 2.5 m of soil, akin to the muon-array of LHAASO-KM2A. The performance target aims at a very good energy (< 20%) and angular (better than 0.2^∘) resolutions above 10 TeV. Further expansion of the dynamic range beyond 100 TeV could be achieved by the addition of a few % active area scintillator-based outer array, covering a total area of 0.5 km^2 around the core. Together with the capability of detecting clear muon signals with buried WCDs, the concept could guarantee a high-efficiency rejection of cosmic-rays and multi-PeV sensitivity. As currently proposed, the carpet would follow the same design used in ARGO-YBJ, of an array of bakelite-based RPCs. The proposed detector has a modular structure, with clusters of 5.7×7.6 m^2, made of 12 RPCs of 2.85×1.23 m^2 each. The high-granularity read out is made by 80 external strips per chamber (for a total of 570,216 strips), defining a spatial pixel of 6.75×61.80 cm^2, logically organised in 10 independent pads of 55.6×61.8 cm^2 (for a total of 71,277 pads), which are the temporal pixels of the detector. In order to extend the dynamic range to the PeV, each chamber is equipped with two large size pads (139×123 cm^2) to collect the total charge of the particles hitting the detector <cit.>. Figure <ref> illustrates the structure of the RPC read-out panels, and shows the expected sensitivity of a STACEX-like array of 22,000 m^2 in area. The event selection criteria for the analysis required minimum trigger of 20 strips on the carpet and a reconstructed core position inside an area of 600×600 m^2 centered on the detector. Core reconstruction by the RPC carpet has a resolution of 20 m at 100 GeV, down to 2 m at 100 TeV. The lowest energy bin reconstructed for a strip multiplicity between 20 to 40 strips, is ∼ 100 GeV, with 50% resolution <cit.>. 
Background rejection in the STACEX concept relies on the underground WCD array to reject charged CR showers based on their muon content, and the efficacy of the approach depends both on the size and coverage of the muon array and the muon detector efficiency. Figure <ref> presents the sensitivity for two distinct muon array layouts, with continuous (22,000 m^2) and partial (3,600 m^2) core coverage. It is clear that the size of the muon detector has a significant impact in sensitivity, and that a large muon array is needed to reduce the sampling fluctuations of hadron-initiated showers. A large overall detector is also important to exploit the pattern of energy deposition in the surface detector, away from the shower core (≳ 40 m), for rejection of hadron-initiated showers. Preliminary studies suggest that the hybrid approach could reach the background-free regime, with a background rejection level of 3×10^-4 above few 10s of TeV, and nearly 100% γ survival rate, thanks to the very dense sampling achieved by the dense core (RPC carpet plus muon array). As already mentioned, such performance could be expanded beyond 100 TeV provided that a suitable extension of the array is available to reach the required photon statistics <cit.>. Up until 2021, the STACEX proposal had conducted only preliminary simulation studies, which nevertheless suggested that RPCs could be a suitable technology for a wide-field γ-ray detector in the Southern Hemisphere, provided that a sensitive muon array is operated in conjunction. § FUTURE IMAGING ATMOSPHERIC CHERENKOV EXPERIMENTS The experimental context of ground-based gamma-ray astronomy today is dominated by the imminent start of construction of the Cherenkov Telescope Array (CTA), complemented, at the UHE regime, by the significant advances brought by LHAASO from the side of the EAS arrays. One of the principal technical advantages of IACTs over the particle samplers discussed in previous sections are their excellent γ/hadron discrimination capability, resulting from the Cherenkov-light imaging of the complete shower development in the atmosphere. Another is the very large effective area, even for a single telescope, which corresponds to the size of the Cherenkov light pool produced by the showers at ground, ∼ 10^5 m^2. That said, above 100 TeV, the imaging atmospheric Cherenkov technique loses part of its competitiveness, and the shower sampling arrays stand out as the most viable approach, being able to cover the ∼km^2 ground areas and steradian angular fields required at a lower cost, while operating with a much more favourable duty cycle. Looking into the future of the air-Cherenkov experiments, a few crucial instrumental frontiers have been identified: * larger arrays for improved sensitivity at multi-TeV energies and short timescales; * enhanced angular resolution for morphological studies of extended sources and improved isotropic cosmic-ray background suppression; * lower energy threshold, towards few tens of GeV, for extra-galactic and variable / multi-messenger science; * wider field of view, for the conduction of surveys, study of extended sources and diffuse emission, as well as the search for transient / serendipitous phenomena All these goals will be addressed by CTA, the next-generation ground-based gamma-ray observatory, which will deliver an order of magnitude improvement in all performance parameters with respect to current instruments. 
But even in the context of CTA, other IACT experiments are being proposed, focusing on the development of particular instrumental frontiers and aiming to provide scientific contributions in specific areas. §.§ The CTA Context An ideal IACT array uses many telescopes to densely sample the air-Cherenkov light pool, determining the shower properties with great precision, and enhancing the capability to separate the hadronic background to achieve excellent sensitivity. With a planned full array of over 100 IACTs, deployed over a multi-km^2 area in two sites on both hemispheres, CTA will be the definitive observatory for VHE astronomy in the coming decades, reaching a point-source sensitivity of 0.1% of the Crab Nebula within 50 hours of integration time. Furthermore, by combining different sizes of telescopes and covering a large ground area, CTA will expand the performance of current facilities over a very broad range, 4 decades in energy. CTA will distinguish itself in the network of ground-based γ-ray instruments by partly functioning as an open observatory to the scientific community, in a field where facilities were traditionally run as closed experiments. For more on the Cherenkov Telescope Array, see chapter "The Cherenkov Telescope Array (CTA): a worldwide endeavour for the next level of ground-based gamma-ray astronomy". In the context of the technological R&D for CTA, a number of telescope and camera prototypes have been developed by various groups over the past decade. Among them, the ASTRI design by INAF (the Italian National Astrophysical Institute) advanced a pioneering Schwarzschild-Couder configuration which, beyond working as a prototype for the CTA small-sized telescope, will form an independent "pathfinder" Mini-Array, whose operation will precede that of the full CTA array. Scientifically, it will complement current facilities by significantly extending the energy reach of Northern Hemisphere IACTs before CTA is fully operational. It will also allow the early exploration of important synergies with LHAASO over an overlapping band of several tens of TeV. The case of the ASTRI Mini-Array demonstrates that there still exists a meaningful role to be played by novel IACT facilities in the era of CTA, which can operate as experiments focused on specific science goals, and complement the capabilities of CTA in terms of temporal, geographical, and spatial coverage. They can also expand some of CTA's technological frontiers, and serve as test beds for future developments in the field. Another relevant case is MACE, the recently installed 21 m diameter Indian IACT. MACE is a state-of-the-art instrument and the latest step in the long development of ground-based γ-ray astronomy in India. Its most distinguishing features are its geographical location (the easternmost IACT in the world) and high installation site – at 4,270 m a.s.l., it is the highest IACT ever built. The idea of installing a high-altitude IACT aims at pushing the observational threshold to the lowest possible energies, down to a few tens of GeV. As the density of Cherenkov light from the air shower increases monotonically with elevation, the installation of an IACT at very high altitude allows the detection energy threshold to be directly reduced. But the observation of low-energy showers introduces experimental challenges of its own.
On the one hand, very-low-energy showers of ∼ 10 GeV emit Cherenkov radiation mostly in the first generations of secondary electrons (which are above the ∼ MeV threshold for Cherenkov light production) and not throughout the cascade development, as is the case for higher-energy primaries. This means that the Cherenkov image is less regular and more susceptible to fluctuations, which has an important impact, increasing both the PSF and the energy bias and resolution at the lowest energies. The high altitude also affects the reconstruction capabilities for higher-energy showers, since the shower must be fully contained in the atmosphere above the telescope for the Cherenkov emission to provide an effective calorimetric measurement. On the other hand, the high altitude results in a non-negligible reduction of the amount of light produced by hadronic showers, which penetrate deeper into the atmosphere, favouring background rejection. Experimentally, the energy range from 10 to 100 GeV constitutes a challenging and traditionally less explored spectral region, bridging the satellite and ground-based observational regimes, and approaching the few-GeV theoretical limit for air-Cherenkov observations. Thanks to the combination of higher source fluxes and the large collection areas at ground, the science potential in this energy regime is strongly focused on variable phenomena, the instrument serving as an ideal "transient explorer". §.§ ASTRI The ASTRI project is a planned observatory to be installed at the Teide Observatory, in Tenerife, dedicated to the study of the VHE γ-ray sky in the range from a few TeV to 100 TeV and beyond. The innovative telescope technology is based on the ASTRI-Horn design <cit.>, developed by INAF within the context of CTA, as a prototype 4-m diameter small-sized telescope (SST) (see Figure <ref>). The ASTRI-Horn[In honour of the Italian-Jewish astronomer Guido Horn D'Arturo, who pioneered the use of segmented primary mirrors in astronomy <cit.>.] prototype is currently installed and operational at the INAF observing station at Mt. Etna, in Sicily. It is the first dual-mirror optical configuration Cherenkov telescope ever deployed, and among the first to use Silicon Photo-Multipliers (SiPM) as detectors. The ASTRI prototype achieved first light in December 2018, with the detection of the Crab Nebula above 3.5 TeV <cit.>. The optical design is the main technological innovation of the ASTRI-Horn telescope. Typically, IACT experiments use tessellated, single-mirror systems based either on the Davies-Cotton <cit.> or parabolic <cit.> configurations. Although convenient for IACT astronomy, where isochronicity of the signal is important, single-mirror designs have a limited field of view and significant off-axis aberration, and also imply the use of bulky cameras placed at the focal plane, due to the resulting large plate-scale. Dual-mirror configurations, such as the Schwarzschild-Couder (SC) aplanatic design proposed by <cit.> and pioneered in the ASTRI-Horn telescope, make it possible to implement a large field of view, up to 10^∘ in diameter, in a more compact instrument with a consequently much smaller plate-scale, thus preserving a good angular resolution throughout the entire FoV. As a result, the dual-mirror design enables the correction of aberrations at large field angles.
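The connection between focal length, plate scale and pixel size that drives these design choices amounts to a two-line calculation; the sketch below uses the 2.15 m focal length and 7 mm SiPM pixels adopted for the ASTRI-Horn design, which are detailed in the next paragraph, purely as a cross-check.

import math

# Plate scale and pixel field of view of a short-focal-length dual-mirror design.
focal_length_mm = 2150.0      # effective focal length [mm]
pixel_mm = 7.0                # SiPM pixel linear size [mm]

plate_scale = focal_length_mm * math.radians(1.0)   # mm per degree at the focal plane
pixel_fov = pixel_mm / plate_scale                   # degrees per pixel

print(f"plate scale ~ {plate_scale:.1f} mm/deg")     # ~37.5 mm/deg
print(f"pixel FoV   ~ {pixel_fov:.2f} deg")          # ~0.19 deg
# A single-mirror telescope of similar aperture needs a several times longer
# focal length for comparable imaging, hence a proportionally larger and
# bulkier camera for the same field of view.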
The primary mirror of the ASTRI-Horn telescope has a diameter of 4.3 m and is composed of 18 hexagonal tiles, while the secondary reflector consists of a 1.8 m diameter monolithic hemispherical mirror with a 2.2 m radius of curvature. The configuration results in a focal length of 2.15 m (f/0.5) and allows a full FoV larger than 10^∘ to be covered. The camera plate scale is small, 37.5 mm/^∘. Such optical properties represent a significant step forward in the design of Cherenkov telescopes, resulting in compact cameras and the possibility of using small SiPM pixels (7 mm linear dimension in this case) as an alternative to the traditional PMTs. The SiPM cameras of the ASTRI-Horn telescope also follow a novel, curved focal-plane camera design, with very fast read-out electronics <cit.>. The application of SiPM cameras in IACTs has already been successfully demonstrated by the FACT telescope <cit.>. The science commissioning phase of the ASTRI-Horn telescope started in 2020, and is ongoing, with hardware improvements on the optical system and camera scheduled to reach the nominal configuration. The goal is to move towards the second phase of the ASTRI project, represented by the development of a Mini-Array composed of nine dual-mirror IACTs, to be installed at the Teide Observatory, in Tenerife, Canary Islands. The ASTRI Mini-Array The goal of the ASTRI Mini-Array <cit.>, to be jointly operated by INAF and the Instituto de Astrofisica de Canarias (IAC), will be to carry out observations of γ-ray sources up to 200 TeV, significantly extending the energy range explored by current IACTs. It aims to exploit the large FoV of the dual-mirror telescope design to discover serendipitous transient sources, simultaneously perform deep exposures of a number of selected targets and fields, as well as to study large extended objects such as Supernova Remnants (SNR) and Pulsar-Wind Nebulae (PWN). In fact, the excellent angular resolution of the observatory, between 0.05^∘ and 0.15^∘, means it will be able to study in unprecedented detail the morphology of such sources above 10 TeV. Given the relatively high energy threshold of the observatory, of 2 TeV, and the fact that it will operate in a context defined by the start of operations of CTA, the observational schedule of the ASTRI Mini-Array will focus on deep exposures of selected, science-driven targets <cit.>. Many relevant synergies with the EAS arrays HAWC and LHAASO, continuously surveying the northern sky in the VHE-UHE regimes, are also expected. Initially conceived as a pathfinder array for CTA-South at Paranal, in Chile, the ASTRI Mini-Array is now an independent project, with a specific set of science goals, and is expected to begin operations in 2024. Once operational, it will be the most sensitive among current IACT arrays above ∼ 10 TeV, thus complementing CTA in performing unprecedented detailed observations of the ∼ 10-200 TeV γ-ray sky.
As such, the ASTRI Mini-Array will devote the first years of its observing programme to deep and ultra-deep observations (200-500 hour exposures) of core science topics, aiming to address a few relevant open questions, such as <cit.>: * testing the "hadron-beam" scenario of blazar emission, and the connection between blazar jets and UHECRs; * probing the EBL IR component through observations above 10 TeV; * performing deep probes of the Galactic Center at high zenith angles, exploring its excellent angular resolution to identify PeV particle accelerators; * performing good angular resolution observations of LHAASO-discovered PeVatron signals, to study their morphology below 100 TeV and attempt an unequivocal identification of their astrophysical counterparts. The final layout of the 9-telescope Mini-Array is shown in Figure <ref>. It will consist of an NE-SW elongated asymmetrical layout with a median spacing among telescopes of ∼ 200 m, near-optimal for operations > 10 TeV. Figure <ref> shows the expected differential sensitivity curve of the ASTRI Mini-Array <cit.>. It can be seen that for the deep (200 h) and ultra-deep (500 h) exposures of the core science programme <cit.>, the Mini-Array will achieve exceptional sensitivities above 10 TeV (higher than CTA for 50 h exposures), and will in any case be more sensitive than current IACTs above a few TeV, for typical integration times of 50 hours. The on-axis angular resolution (68% containment radius) will be as good as 0.05^∘ in the range from a few TeV to 100 TeV, with an energy resolution of the order of 10% for this same range. The ASTRI Mini-Array will thus be a significant complement to both present-day and next-generation observatories in the Northern Hemisphere. §.§ MACE MACE (Major Atmospheric Cherenkov Experiment) is the future Indian telescope dedicated to ground-based γ-ray astronomy. It represents a continuation in the regional development of the field after two decades of activities of its predecessor instrument, the TACTIC array  <cit.>. The telescope has been recently installed, and is under commissioning <cit.> at the Himalayan Gamma ray Observatory (HIGRO), in Hanle, Northern India (32.8^∘ N, 78.9^∘ E). Its geographical location fills an important longitudinal gap among current and future IACT instruments in the Northern Hemisphere. At a site 4,270 m in altitude, MACE will be the highest existing IACT in the world. Although MACE is a single-telescope facility, it closely follows an early proposal for a high-altitude IACT observatory <cit.> and puts forward the case for the installation of a low-energy stereoscopic array at high altitude. An image of the MACE telescope is shown in Figure <ref>. The instrument has an alt-azimuth mount, with a quasi-parabolic light collecting surface of 21 m in diameter, comprised of 356 independent mirror panels of 1 m × 1 m each, and 25 m focal length. Such a large optical reflector is a basic design element, allowing to collect as much Cherenkov light as possible from the weak low-energy showers and effectively lower the observational energy threshold of the instrument. Each panel is in turn composed by four spherical aluminum honeycomb mirror facets of 0.5 m × 0.5 m (in a total of 1424), with graded individual focal lengths between 25 m and 26.5 m, from the centre to the periphery of the reflective surface, in order to guarantee a minimum on-axis spot size at the focal length of the telescope. 
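The reflector numbers quoted above are straightforward to cross-check by multiplying panel and facet counts by their areas; the small difference with respect to the effective collection area given in the next paragraph is here attributed to gaps between facets and minor shadowing, which is our reading rather than an official breakdown.

# Cross-check of the MACE reflector geometry from the numbers quoted above.
n_panels = 356
panel_area = 1.0 * 1.0        # [m^2] per 1 m x 1 m panel
n_facets = 1424
facet_area = 0.5 * 0.5        # [m^2] per 0.5 m x 0.5 m facet

print(f"area from panels : {n_panels * panel_area:.0f} m^2")   # 356 m^2
print(f"area from facets : {n_facets * facet_area:.0f} m^2")   # 356 m^2
# The quoted effective mirror collection area (~346 m^2) is slightly smaller,
# consistent with small gaps between facets and minor shadowing.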
The reflectance of these mirror facets exceeds 85% in the wavelength range of interest for Cherenkov light, between 280 and 700 nm. The mirror facets are aligned by an active mirror control system. The total mirror collection area of the telescope is ∼ 346 m^2. The quasi-parabolic design is chosen to reduce the optical aberrations of the large reflector surface. For such a high-altitude instrument, the camera can be more compact, and achieve the necessary angular resolution for imaging with a relatively smaller number of pixels. The MACE telescope camera is placed at the focal plane of the reflective surface and has a modular structure, with 68 modules with in-built digital signal processing electronics, each consisting of 16 photo-multiplier tubes (pixels) of 38 mm in diameter. All of the 1088 PMTs are fitted with hexagonal parabolic light guides, each with an angular size of 0.125^∘, for an effective coverage of the entire surface area of the camera, and a total field of view of ∼ 4.3^∘× 4.0^∘. Along with the large mirror area, such a multi-channel, high-resolution camera is essential for good imaging of the higher-granularity[Defined as the average angular separation between points of emission of Cherenkov light during the development of extensive air showers] low-energy Cherenkov images at high altitude, as needed for a good γ-hadron separation. Only the 36 inner modules (576 pixels) will be used for the event trigger, with a field of view of 2.6^∘× 3.0^∘. A trigger configuration of 4 close-cluster nearest-neighbour pixels is implemented in the MACE hardware. For a large telescope aperture as in MACE, such a trigger strategy is relevant in order to effectively suppress accidental events from the night-sky background (NSB) and thus allow the instrument to fully exploit the increase in the number of photo-electrons registered per shower that results from the high-altitude installation. As pointed out earlier, the very high altitude site of MACE brings along two important advantages to be exploited in reducing the observational threshold and improving low-energy sensitivity, as it both reduces the absorption of the shower Cherenkov light by the atmosphere, and increases the density of the Cherenkov photons at ground (to ∼ 1 ph/cm^2 for a 10 GeV γ-ray shower). This geometrical increase in the Cherenkov photon density due to the altitude is also more pronounced for γ-ray than for hadronic showers, which are more penetrating, and weighs favourably on the trigger probability of signal over background events, especially below 100 GeV. As a result, MACE can achieve a very low γ-ray trigger threshold of about 20 GeV in the low-zenith-angle range < 40^∘, leading to an excellent single-telescope integral sensitivity for energies below ∼ 150 GeV <cit.>. The 50 h integral sensitivity of the telescope above the analysis threshold of ∼ 30 GeV is estimated at 2.7% of the Crab <cit.> (see Figure <ref>). The expected energy resolution of the instrument is about 40% below ∼ 50 GeV, improving to ∼ 20% around 1 TeV; the angular resolution similarly improves from ∼ 0.21^∘ below 50 GeV to ∼ 0.06^∘ at a few TeV. The dynamic range of operations of MACE will be between ∼ 30 GeV and 10 TeV <cit.>. Owing to its high-altitude site and large collection area, MACE will be focused on the study of sources in the 20-100 GeV energy region, mostly unexplored by ground-based instruments, with a science case strongly centred around the observation of bright flaring objects such as AGNs.
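The gain from the 4-pixel topological trigger against accidental night-sky-background coincidences can be illustrated with the standard chance-coincidence estimate, R_acc ≈ N_comb n R^n τ^(n-1) for n channels firing at single-pixel rate R within a gate τ. The single-pixel rate, gate width and number of pixel combinations below are placeholders, not MACE hardware parameters.

# Toy estimate of accidental NSB coincidences for an n-fold pixel trigger.
# All input values are placeholders for illustration.

def accidental_rate(n_fold, single_rate_hz, gate_s, n_combinations):
    """Chance n-fold coincidence rate for independent channels."""
    return n_combinations * n_fold * single_rate_hz ** n_fold * gate_s ** (n_fold - 1)

single_rate = 2e6     # single-pixel NSB rate above threshold [Hz] (assumed)
gate = 5e-9           # coincidence gate [s] (assumed)
combos = 2000         # number of close-cluster pixel patterns (assumed)

for n in (2, 3, 4):
    rate = accidental_rate(n, single_rate, gate, combos)
    print(f"{n}-fold accidental rate ~ {rate:.1e} Hz")
# Raising the multiplicity from 2 to 4 suppresses chance triggers by several
# orders of magnitude, allowing a low pixel threshold without being swamped
# by the NSB.

Together with the large mirror area, this kind of topological trigger is what keeps the hardware threshold near ∼ 20 GeV and the analysis threshold near ∼ 30 GeV, and hence the science focus on bright, variable sources.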
Here, thanks to its eastern location, it can valuably complement CTA by extending the monitoring coverage capabilities. The very-low energy threshold of MACE, as well as its peak differential sensitivity around 100 GeV, will provide it with a unique overlapping range with satellite observatories such as Fermi-LAT. This should allow to fill some gaps with respect to other IACTs in the spectral and temporal studies of certain classes of sources < 100 GeV, such as in the observation of distant extragalactic sources for EBL measurements, and on the observation of the GeV-TeV component of pulsars, profiting of the much larger collection area of ground-based instruments in comparison to satellites. Since the conclusion of its installation at the end of 2020, MACE has been under commissioning, performing trial runs for testing the performance of different telescope components <cit.>, and first light is expected soon. § OUTLOOK The current (third) generation of IACT instruments (HESS, MAGIC and VERITAS) has brought ground-based γ-ray astronomy to maturity, improving sensitivity in the TeV range by over an order of magnitude with respect to previous instruments, while lowering the observational threshold to under 100 GeV. As a result, the number of known TeV sources has dramatically increased in the past 20 years. Among the technical ingredients behind these achievements were the perfecting of the stereoscopic imaging technique, and the construction of large reflectors, with over 100 m^2 in area, as well as cameras with a large FoV and fine-pixel sizes ≲ 0.2^∘, which permitted a good resolution of shower features. The imminent coming online of CTA will represent an even larger step in evolution and the inauguration of a new era of astrophysical research for this now fully-established field of astronomy. The past decade has also seen the flourishing of ground-breaking particle arrays such as HAWC and, more recently LHAASO, which together detected over 50 sources above several tens of TeV, finally opening up the UHE astronomical window. In this case, the installation of large arrays (much larger than the shower footprint) at high altitudes, above 4 km a.s.l., with dense instrumented areas that permitted a good calorimetric measurement of the showers and timing of the shower front, were crucial, along with the large muon effective detection areas, which enabled an excellent γ-hadron separation. As described in this chapter, the next steps will centre around hybrid arrays, as in the case of TAIGA, which is focusing on cost-effective ways to instrument very large areas while retaining at the same time a good reconstruction quality and γ-hadron separation power. Another frontier is the expansion of the wide-field coverage towards the Southern Hemisphere, a still unexplored region of the sky for ground particle arrays. Here, SWGO figures as the principal proposal, bringing together all crucial elements such as a very large array area for the consolidation of the (sub-)PeV observational window, the expansion of the energy reach towards lower energies, and the improvement of shower reconstruction, aiming particularly at a better angular resolution. The plans for new instrumentation are at various stages of maturity, but the emerging scene for the field within this decade is that of a well-established global network of ground-based γ-ray instruments, with the potential to work in close synergy to extract the most from the various complementary observational techniques available. 
Together, these instruments will not only be at the forefront of high-energy astrophysics and astro-particle physics, but will be an essential ingredient of the nascent multi-messenger astronomy. In summary, the field of ground-based γ-ray astronomy is very different from even a decade ago, where discussions revolved around expectations on the number and classes of sources available for detection by the instruments of that time. Today, the scientific implications of the field have clearly taken centre-stage and are the drivers of the next steps, and we can expect an even greater progress for the field within the next ten years. Overall, the instrumental prospects have never been richer, with concepts for new facilities being presented throughout the energy domain and across the globe, and evoke a well-known quote by Pierre Theillard de Chardin: "The history of science can be summarised as the development of ever more perfect eyes in a world where there is always more to see." § CROSS-REFERENCES * How to detect Gamma-rays from ground: an introduction to the detection concepts * The development of ground-based Gamma-ray astronomy: a historical overview of the pioneering experiments * Detecting gamma-rays with high resolution and moderate field of view: the air Cherenkov technique * Detecting gamma-rays with moderate resolution and large field of view: Particle detector arrays and water Cherenkov technique * Current particle detector arrays in gamma-ray astronomy * The Cherenkov Telescope array (CTA): a worldwide endeavor for the next level of ground- based gamma-ray astronomy 99. HAWC17 Abeysekara AU, Albert A, Alfaro R et al (The HAWC Colaboration) (2017) Observation of the Crab Nebula with the HAWC Gamma-Ray Observatory. Astrophys. J. 843:39 HESSGC Abramowski A, et al. (The H.E.S.S. Collaboration) (2016) Acceleration of petaelectronvolt protons in the Galactic Centre. Nature 531:476 Actis2011 CTA Consortium, M. Actis, G. Agnetta, F. Aharonian, A. Akhperjanian, et al. (2011) Design concepts for Aharonian01 Aharonian FA, Konopelko AK, Völk HJ, Quintana, H (2000) 5@5 - a 5 GeV energy threshold array of imaging atmospheric Cherenkov telescopes at 5 km altitude. Astrop. Phys. 15:335 HAWC20 Albert A, Alfaro R, Alvarez C et al (The HAWC Colaboration) (2020) 3HWC: The Third HAWC Catalog of Very-high-energy Gamma-Ray Sources. Astrophys. J. 905:76 Allekotte2008 Allekotte I, Barbosa AF, Bauleo P, et al. (2008) The surface detector system of the Pierre Auger Observatory. Nucl. Inst. Meth. Phys. 586:409 Amenomori90 Amenomori M, Nanjo H, Hotta N et al. (1990) Development and performance test of a prototype air shower array for search for γ-ray point sources in the very high energy region. NIM A 288:619-631 Amenomori21 Amenomori M, Bao YW, Bi XJ et al (The Tibet ASγ Collaboration) (2021) First Detection of sub-PeV Diffuse Gamma Rays from the Galactic Disk: Evidence for Ubiquitous Galactic Cosmic Rays beyond PeV Energies. Phys. Rev. Lett. 126:141101 Anderhub13 Anderhub H, Backes M, Biland A et al (2013) Design and operation of FACT – the first G-APD Cherenkov telescope. J. Instrum. 8:P06008 AntonelliICRC21 Antonelli LA, for the ASTRI Project (2021) The ASTRI Mini-Array at Teide Observatory. PoS(ICRC2021) 395:897, doi: 10.22323/1.395.0897 LATTES18 Assis P, Barres de Almeida U, Blanco A et al (The LATTES Collaboration) (2018) Design and expected performance of a novel hybrid detector for very-high-energy gamma-ray astrophysics. Astropart. Phys. 
99:34–42 MercedesWCD Assis P, Bakalová A, Barres de Almeida U, et al (2022) The Mercedes water Cherenkov detector, EPJ C 82:899. Astapov2019 Astapov I, et al. (2019), Scintillation detectors for the TAIGA experiment, Nucl. Instrum. Meth. A 936:254 Astapov2022a Astapov I, Awad K A, Blank M et al. (2022), Optimization studies of the TAIGA-Muon scintillation detectors array, to be submitted to JINST Astapov2022b Astapov I, Awad K A, Bezyazeekov P A et al. (2022), Detection of the Crab Nebula under extreme conditions with the first TAIGA IACT, in preparation Baillon1993 Baillon, P., Behr, L., Danagoulian, S., et al. (1993), Astropart.Phys., 1, Issue 4, 341-355. BarresICRC21 Barres de Almeida U, Giacinti G, Longo F, et al. on behalf of the SWGO Collaboration (2021) Benchmarking the Science for the Southern Wide-Field Gamma-ray Observatory (SWGO). PoS(ICRC2021) 395:893, doi: 10.22323/1.395.0893 Bastieri05 Bastieri D, Agguiaro D, Arnold J et al., for the MAGIC Collaboration (2005) The mirrors for the MAGIC telescope. Proc. 29th ICRC Conference 5:283-286 Bernlohr2008 K. Bernlöhr, Astroparticle Physics 30, 149 (2008). Berezhnev2011 Berezhnev S. F., Besson D., Budnev N. M. et al. (2012), The Tunka-133 EAS Cherenkov light array: Status of 2011. Nucl.Instr.Meth. A 692:98 Bisconti2022 Bisconti F, Chiavassa A (2022) Study of water Cherenkov detector design for ground-based gamma-ray experiments. E-print arXiv:2205.02148 Blank2021 Blank M, Tluczykont M, Kuotb Awad A, et al. (2021) in Proc. of ICRC 2021 PoS(ICRC2021)757 Blank2023 Blank, M (2022), PhD thesis, University of Hamburg, in preparation Blank2023MNRAS Blank M, Tluczykont M, Porelli A, Mirzoyan R, Wischnewski R, et al. (2023), Detection of the Crab Nebula using a random forest analysis of the first TAIGA IACT data, Monthly Notices of the Royal Astronomical Society, 2023;, stad276, https://doi.org/10.1093/mnras/stad276 Blin2012 Blin S, Barillon P (2012) MAROC-3 datasheet, https://www.ge.infn.it/musico/DownloadFiles/Maroc/datasheet_MAROC3_V7.pdf Budnev2017 Budnev N et al. (2017), TAIGA experiment: present status and perspectives, JINST 12 C08018 Budnev2020 Budnev N et al (2020), TAIGA - an advanced hybrid detector complex for astroparticle physics and high energy gamma-ray astronomy in the Tunka valley. JINST 15:C09031, Institute of Physics (the “Institute”) and IOP Publishing Limited 2019. Bezyazeekov2015 Bezyazeekov P A, et al., Measurement of cosmic-ray air showers with the Tunka Radio Extension (Tunka-Rex), Nucl.Instrum.Meth. A 802 (2015) 89-96. BiscontiICRC21 Bisconti F, Chiavassa A, on behalf of the SWGO Collaboration (2021) Study of water Cherenkov detector designs for the SWGO experiment. PoS(ICRC2021) 395:895, doi: 10.22323/1.395.0895 Borwankar20 Borwankar C, Sharma M, Bhatt N, et al (2020) Estimation of expected performance for the MACE γ-ray telescope in low zenith angle range. Nucl. Instrum. Methods Phys.A 953:163182 ZhenCao21Nat Cao Z (2021) An ultra-high-energy γ-ray telescope at 4,410 m. Nat. Astron. 5:849 ZhenCao21PeV Cao Z, Aharonian F et al. (The LHAASO Collaboration) (2021) Ultrahigh-energy photons up to 1.4 petaelectronvolts from 12 γ-ray Galactic sources. Nature 594:33–36 Catalano18 Catalano O, Capalbi M, Gargano C et al. (2018) The ASTRI camera for the Cherenkov Telescope Array. Proc. SPIE 10702:37, doi: 10.1117/12.2314984 Chernov2020 Chernov D, Astapov I, Bezyazeekov P et al. 
(TAIGA, Collaboration) (2020) Development of a novel wide-angle gamma-ray imaging air Cherenkov telescope with SiPM-based camera for the TAIGA hybrid installation JINST 15, C09062 Conceicao21 Conceição R, Gonzáles B S, Guillén A et al. (2021) Muon identification in a compact single-layered water Cherenkov detector and gamma/hadron discrimination using machine learning techniques. EPJ C 81:542, doi: 10.1140/epjc/s10052-021-09312-4 Conceicao22 Conceição R, Gibilisco L, Pimenta M, Tomé B (2022), Gamma/hadron discrimination at high energies through the azimuthal fluctuations of the particle distributions at ground. E-print arXiv:2204.12337. Davies57 Davier JM, Cotton ES (1957) J. Sol. Eng. Trans. ASME 1:16 deNaurois-Mazin de Naurois M, Mazin D (2015) Ground-based detectors in very-high-energy gamma-ray astronomy. C R Phys 16:610–627 DiSciascio14 Di Sciascio G (2014) Main physics results of the ARGO-YBJ experiment. Int. J. Mod. Phys. D 23:1430019 DiSciascioICRC19 Di Sciascio G, Camarri P, Santonico R, et al. (2019) STACEX: RPC-based detector for a multi-messenger observatory in the Southern Hemisphere. PoS(ICRC2019) 358:660, doi: 10.22323/1.358.0660 DoroICRC21 Doro M, Moraes A, Santander M, et al. on behalf of the SWGO Collaboration (2021) The search for high altitude sites in South America for the SWGO detector. PoS(ICRC2021) 395:689, doi: 10.22323/1.395.0689 Dubus2013 G. Dubus et al. (2013), Surveys with the Cherenkov Telescope Array. Astr.Part.Phys., 43:317 drs4 https://www.psi.ch/drs/ Epimakhov2015 Epimakhov S (2015), Exploring cosmic ray origins with ground-based EAS arrays Tunka and HiSCORE, Dissertation, University of Hamburg, https://ediss.sub.uni-hamburg.de/handle/ediss/6482 GiuntiICRC21 Giunti L, Khélifi B, Kosack R, Térrier R, for the H.E.S.S. Collaboration (2021) Evidence of 100 TeV γ-ray emission from HESS J1702-420: A new PeVatron candidate. PoS(ICRC2021) 395:793, doi: 10.22323/1.395.0793 GoksuICRC21 Goksu H, Hofmann W, on behalf of the SWGO Collaboration (2021) Lake Deployment of Southern Wide-field Gamma-ray Observatory (SWGO) Detector Units. PoS(ICRC2021) 395:708, doi: 10.22323/1.395.0708 Gress2017 Gress O, Astapov I, Budnev N et al. (2017), The wide-aperture gamma-ray telescope TAIGA-HiSCORE in the Tunka Valley: Design, composition and commissioning. Nuclear Instruments and Methods in Physics Research A, 845:367-372 doi 10.1016/j.nima.2016.08.031 Grinyuk2020 Grinyuk, A., Postnikov, E. & Sveshnikova, L. Monte Carlo Simulation of the TAIGA Hybrid Gamma-Ray Experiment. Phys. Atom. Nuclei 83, 262–267 (2020), https://doi.org/10.1134/S106377882002012X Hampf2009 Hampf D, Tluczykont M, Horns, D (2009) Event reconstruction with the proposed large area Cherenkov air shower detector, In Proc. of 31st ICRC, Lodz, Poland, arXiv:0909.0663 Hampf2010 Hampf D, Tluczykont M, Horns D (2010) Simulation of the expected performance for the proposed gamma-ray detector HiSCORE PoS(Texas 2010)245 Hampf2013 Hampf D, Tluczykont M, Horns D (2013), Event reconstruction techniques for the wide-angle air Cherenkov detector HiSCORE. Nucl.Instr.Meth. A 712:137 Heck1998 Heck D, Knapp J, Capdevielle J N, et al. Corsika, 1998. http://www-ik.fzk.de/corsika/ Holder21 Holder J (2021) Atmospheric Cherenkov Gamma-ray Telescopes. In: Burrows D (ed) The WSPC Handbook of Astronomical Instrumentation, World Scientific, Singapore, p 117–136 Hofmann21 Hofmann W (2021) On angular resolution limits for air shower arrays. Astropart. Phys. 
123:102479 Hillas1985 Hillas A M (1985) Cerenkov Light Images of EAS Produced by Primary Gamma Rays and by Nuclei, In Proc. of ICRC1985, 1985ICRC....3..445H Hillas1996 Hillas A M (1996) Differences between Gamma-Ray and Hadronic Showers, Space Science Reviews, 75(1-2):17-30 Horn36 Horn-D'Arturo G (1936) Primi esperimenti con lo specchio a tasselli. Mem. Soc. Astron. It. 9:133 Karle1995 Karle, A., Merck, M., Plaga, R., et al. (1995), Astropart.Phys., 3, 321-347 Kato21 Kato S, Condori C A H, de la Fuente E, et al. (2021) Detectability of southern gamma-ray sources beyond 100 TeV with ALPAQUITA, the prototype experiment of ALPACA. Exp. Astron., 52:85-107. Kneizys1996 Kneizys F, Abreu L, Anderson G, et al. (1996), The modtran 2/3 report and lowtran 7 model, 1996. Edited by: Abreu, L.W., Anderson, G.P. Knodlseder20 Knödlseder J (2020) The Cherekov Telescope Array, In Proc. of 16th Rencontres du Vietnam (TMEX2020), arXiv:2004.09213 Kornilov2012 Kornilov V G, Lipunov V M, Gorbovskoy E S et al. (2021) Robotic optical telescopes global network MASTER II. Equipment, structure, algorithms. Exp Astron 33:173–196. https://doi.org/10.1007/s10686-011-9280-z Kudzhaev21 Kudzhaev A U, Dzhappuev D D, Afashokov Yu Z et al (2021) The Carpet-3 EAS Array for Investigation of γ-Radiation with Energy E >100 TeV. Phys At Nucl 84:1030–1036 Kunwar22 Kunwar S, Goksu H, Hinton J et al (2022) A Double Layered Water Cherenkov Detector Array for Gamma-Ray Astronomy. E-print arXiv:2209.09305. MezekICRC21 Kukec Mezek G, on behalf of the CoMET Collaboration (2021) The CoMET multiperspective event tracker for wide field-of-view gamma-ray astronomy. PoS(ICRC2021) 395:905, doi: 10.22323/1.395.0905 Kunnas2017 Kunnas M (2017) Studies of the performance of an IACT system for the TAIGA array, PhD thesis Univ. of Hamburg, https://ediss.sub.uni-hamburg.de/handle/ediss/7582 Lattes Lattes CMG, Muirhead H, Occhialini GPS, Powell CF (1947) Processes involving charged mesons. Nature 159:694–697 Lombardi20 Lombardi S, Catalano O, Scuderi LA, et al (2020) First detection of the Crab Nebula at TeV energies with a Cherenkov telescope in a dual-mirror Schwarzschild-Couder configuration: the ASTRI-Horn telescope. Astron. & Astrophys. 634:A22 LombardiICRC21 Lombardi S, Antonelli LA, Bigongiari C, et al., for the ASTRI Project (2021) Performance of the ASTRI Mini-Array at the Observatorio del Teide. PoS(ICRC2021) 395:884, doi: 10.22323/1.395.0884 Lubsandorzhiev2019 Lubsandorzhiev N (2019), The TAIGA-IACT camera: construction, calibration, performance. PoS(ICRC2019) 358:730 Meyer2010 Meyer M, Horns D, and Zechlin H S (2010), The Crab Nebula as a standard candle in very high-energy astrophysics, Astr.Astrophys. 523:A2 MohantyICRC21 Mohanty P, Ahmad S, Chakraborty M, et al. (2021) Highlights from the GRAPES-3 experiment. PoS(ICRC2021) 395:003, doi: 10.22323/1.395.0003 Mostafa17 Mostafá M, Benzvi S, Schoorlemmer H, Schüssler F, on behalf of the HAWC Collaboration (2017) On the scientific motivation for a wide field-of-view TeV gamma-ray observatory in the Southern Hemisphere. PoS(ICRC2017) 301:851, doi: 10.22323/1.301.0851 Monkhoev2017 Monkhoev R D, et al (2017) The Tunka-Grande Experiment, JINST 12:C06019 Onuchin1992 A. Onuchin et al. (1992), The aerogel Cherenkov counters with wavelength shifters and phototubes Nucl.Instrum.MethodsPhys.Res.A Vol 315:517-520. Pintore20 Pintor F, Giuliani A, Belfiore A et al. (2020) Scientific prospects for a mini-array of ASTRI telescopes: A γ-ray TeV data challenge. JHEAp 26:83–94 Porelli2015 Porelli A et al. 
(2015), Timing calibration and directional reconstruction for Tunka-HiSCORE, J. Phys.: Conf. Ser. 632:012041, https://iopscience.iop.org/article/10.1088/1742-6596/632/1/012041 Porelli2020 Porelli A (2020) TAIGA-HiSCORE: a new wide-angle air Cherenkov detector for multi-TeV gamma-astronomy and cosmic ray physics, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, doi http://dx.doi.org/10.18452/21610 PorelliICRC21 Porelli A (2021) TAIGA-Observatory: First 5 years of operation of the HiSCORE Air-Cerenkov Array. PoS(ICRC2021) 395:877, doi: 10.22323/1.395.0877 Postnikov2019 Postnikov E.B., Astapov I.I., Bezyazeekov P.A. et al. Monte Carlo Simulation of the TAIGA Experiment. Bull. Russ. Acad. Sci. Phys. 83, 955–958 (2019). https://doi.org/10.3103/S1062873819080331 Sinnis2005 Sinnis G, (2005) HAWC: A Next Generation VHE All-Sky Telescope. In F. A. Aharonian, H. J. Völk, and D. Horns, editors, Heidelberg Gamma-Ray Symposium, volume 745 of AIP, page 234, 2005. supercuts Punch M, Akerlof C W, Cawley M F (1991) Proceedings of the 22nd International Cosmic Ray Conference. 11-23 August, 1991. Dublin, Ireland. Volume 1, Contributed Papers, OG Sessions 1-5. Dublin: The Institute for Advanced Studies, 1991., p.464 Prosin2014 Prosin V V, et al. (2014), Nucl. Instr. Meth. A, 756:94 RodriguezICRC21 Rodriguez-Fernandez G, Bigongiari C, Bulgarelli A et al. (2021) STACEX: RPC-based detector for a multi-messenger observatory in the Southern Hemisphere. PoS(ICRC2021) 395:715, doi: 10.22323/1.395.0715 Sako09 Sako TK, Kawata K, Ohnishi M et al. (2009) Exploration of a 100 TeV gamma-ray northern sky using the Tibet air-shower array combined with an underground water-Cherenkov muon-detector array. Astropart. Phys. 32:177–184 SakoICRC21 Sako T, on behalf of the ALPACA Collaboration (2021) Current status of ALPACA for exploring sub-PeV gamma-ray sky in Bolivia. PoS(ICRC2021) 395:733, doi: 10.22323/1.395.0733 Harm17 Schoorlemmer H, López-Coto R, Hinton J (2017) Baseline Design for a Next Generation Wide-Field-of-View Very-High-Energy Gamma-Ray Observatory. PoS(ICRC2017) 301:819, doi: 10.22323/1.301.0819 HarmICRC21 Schoorlemmer H, Conceiçcão R, Smith AJ, on behalf of the SWGO Collaboration (2021) Simulating the performance of the Southern Wide-view Gamma-ray Observatory. PoS(ICRC2021) 395:903, doi: 10.22323/1.395.0903 Senniappan21 Senniappan M, Becherini Y, Punch M, et al (2021) Signal extraction in atmospheric shower arrays designed for 200 GeV–50 TeV γ-ray astronomy. J. Inst. 16:P07050 SenniappanICRC21 Senniappan M, Becherini Y, Punch M, et al (2021) Expected performance of the ALTO particle detector array designed for 200 GeV - 50 TeV gamma-ray astronomy. PoS(ICRC2021) 395:761, doi: 10.22323/1.395.0761 Sharma17 Sharma M, Chinmay B, Bhatt N, et al (2017) Sensitivity estimate of the MACE gamma ray telescope. Nucl. Instrum. Methods Phys.A 851:125–131 Singh21 Singh KK, Yadav KK (2021) 20 Years of Indian Gamma Ray Astronomy Using Imaging Cherenkov Telescopes and Road Ahead. Universe 7:96 Tluczykont2009 Tluczykont M, Kneiske T, Hampf D, Horns D (2009) Gamma-ray and cosmic ray astrophysics from 10 TeV to 1 EeV with a large-area (>10 km^2) air-shower detector, In Proc. of 31st ICRC, Lodz, Poland, arXiv:0909.0445 T2014 Tluczykont M., Hampf D., Horns, D. et al. (2014) The concept for gamma-ray and cosmic-ray astrophysics beyond 10 Astropart. Phys. 56:42 Tluczykont2017 Tluczykont M, Budnev N, Astapov I et al. (2017), The TAIGA timing array HiSCORE - first results. 
EPJ Web of Conferences 136:03008, doi 10.1051/epjconf/201713603008 Tluczykont2021 Tluczykont M, et al. (2021) The TAIGA Experiment, The Sixteenth Marcel Grossmann Meeting. February 2023, 3324-3342, doi:10.1142/9789811269776_0274 Vassiliev07 Vassiliev V, Fegan S, Brousseau P (2007) Wide field aplanatic two-mirror telescopes for ground-based γ-ray astronomy. Astropart. Phys. 28:10 VercelloneICRC21 Vercellone S, for the ASTRI Project (2021) The ASTRI Mini-Array Core Science Program. PoS(ICRC2021) 395:896, doi: 10.22323/1.395.0896 Weekes1989 Weekes T, Cawley M F, Fegan D J et al. (1989) Observation of TeV Gamma Rays from the Crab Nebula Using the Atmospheric Cerenkov Imaging Technique. Astrophys. J. 342:379 WernerICRC21 Werner F, Nellen L, on behalf of the SWGO Collaboration (2021) Technological options for the Southern Wide-field Gamma-ray Observatory (SWGO) and current design status. PoS(ICRC2021) 395:714, doi: 10.22323/1.395.0714 Wischnewski2017 Wischnewski R, et al (2017) TAIGA-HiSCORE detection of the CATS-Lidar on the ISS as fast moving point source in 35th International Cosmic Ray Conference PoS(ICRC2017)301 YadavICRC21 Yadav KK, on behalf of the HIGRO Collaboration (2021) Status update of the MACE Gamma-ray telescoper. PoS(ICRC2021) 395:756, doi: 10.22323/1.395.0756 Yashin2016 Yashin I (2016), Imaging Camera and Hardware of TAIGA-IACT Project, PoS ICRC2015(2016):986. YokoeICRC21 Yokoe Y (2021), on behalf of the ALPACA Collaboration (2021) Half ALPACA and its sensitivity to sub-PeV gamma rays from the Galactic Center. PoS(ICRC2021) 395:899, doi: 10.22323/1.395.0899 ZaninICRC21 Zanin R, on behalf of the CTA Observatory (2021), Cherenkov Telescope Array: the World's largest VHE gamma-ray observatory, PoS ICRC2021 395:005 Zhurov2019 Zhurov D (2019), Performance of the TAIGA-IACT telescope pointing system, PoS ICRC2019 358:833 catslidar http://cats.gsfc.nasa.gov/
http://arxiv.org/abs/2307.03139v1
20230706170711
Fixed-magnetization Ising model with a slowly varying magnetic field
[ "Yacine Aoun", "Sébastien Ott", "Yvan Velenik" ]
math.PR
[ "math.PR", "cond-mat.stat-mech", "math-ph", "math.MP" ]
=1 Figures/ showonlyrefs ctrlcst symbol=κ csts symbol=C normal symbol=c plain theoremTheorem[section] lemma[theorem]Lemma proposition[theorem]Proposition corollary[theorem]Corollary conjecture[theorem]Conjecture remarkRemark[section] propertiesProperties claimClaim definition definitionDefinition[section] obsObservation Obs Section de Mathématiques, Université de Genève, Rue du Conseil-Général 7-9, 1205 Genève, Switzerland Yacine.Aoun@unige.ch Département de Mathématiques, Université de Fribourg, Chemin du Musée 23, 1700 Fribourg, Switzerland ott.sebast@gmail.com Section de Mathématiques, Université de Genève, Rue du Conseil-Général 7-9, 1205 Genève, Switzerland Yvan.Velenik@unige.ch The motivation for this paper is the analysis of the fixed-density Ising lattice gas in the presence of a gravitational field. This is a seen as a particular instance of an Ising model with a slowly varying magnetic field in the fixed magnetization ensemble. We first characterize the typical magnetization profiles in the regime in which the contribution of the magnetic field competes with the bulk energy term. We then discuss in more detail the particular case of a gravitational field and the arising interfacial phenomena. In particular, we identify the macroscopic profile and propose several conjectures concerning the interface appearing in the phase coexistence regime. The latter are supported by explicit computations in an effective model. Finally, we state some conjectures concerning equilibrium crystal shapes in the presence of a gravitational field, when the latter contributes to the energy only to surface order. Fixed-magnetization Ising model with a slowly varying magnetic field Yvan Velenik August 1, 2023 ==================================================================== § INTRODUCTION The analysis of spatially inhomogeneous systems has a long history in theoretical physics and chemistry, see for instance <cit.>. In contrast, the specific equilibrium properties of spatially inhomogeneous lattice systems have been surprisingly little studied in the mathematical physics literature compared to the homogeneous ones. The inhomogeneous nature stems from the interaction <cit.> or from the external potential <cit.>. The present paper is the first in a planned series devoted to various aspects of the latter. In the present work, we consider an Ising model, in a fixed-magnetization ensemble, that is subject to an inhomogeneous magnetic field that varies sufficiently slowly (in a technical sense made precise below). We derive, using thermodynamic arguments (similar to what has been done in <cit.>) and large deviations arguments, the geometry of the typical magnetization profile. We then focus on a specific example: the Ising lattice gas (at fixed density) in the presence of a slowly varying gravitational field. In this case, we make several conjectures concerning the behavior of the typical magnetization and corresponding interfaces by studying a much simpler effective model that we heuristically motivate. The present paper is intended to have a rather expository character, presenting basic results with soft arguments and making predictions about various relevant phenomena that arise. It will be completed by a series of papers investigating in more detail these (and related) problems. The paper is organized as follows: in Section <ref>, we introduce the Ising model with a slowly varying magnetic field. In Section <ref>, we recall basic results on thermodynamical quantities that are used in our arguments. 
In Section <ref>, we analyze the geometry of typical magnetization profiles. Finally, in Section <ref>, we study the particular example of a gravitational field. We discuss two relevant scaling for this problem. In the first case, the gravitational field has an impact on the thermodynamical properties, resulting in particular in a height-dependent density profile, possibly including an interface. We make a precise conjecture about the latter's scaling limit, which we motivate by establishing the corresponding claim in a simple effective model. Finally, we briefly discuss the second relevant scaling, in which the gravitational field only impacts surface phenomena. In particular, we describe its effect on the macroscopic geometry of phase separation, postponing a detailed analysis of this problem to future work. § ISING MODEL IN A SLOWLY VARYING MAGNETIC FIELD Let Λ_N {0,…, N-1}^d and set Ω_N {-1,1}^Λ_N. Consider the Ising model in the box Λ_N with Hamiltonian H_0(σ) -∑_{i,j}⊂Λ_N i-j = 1σ_iσ_j, ∀σ∈Ω_N. where · is the l_1-norm on ^d. In this work, our goal is to analyze the effect of a slowly varying inhomogeneous magnetic field on the model. Namely we consider a sequence h_N:Λ_N→ of magnetic fields. We say that h_N is slowly varying if there are two sequences a_N∈_+ and b_N∈_+ such that * a_N∞ and a_N/N 0, * b_N 0, * for any N and any x,y∈Λ_N, |x-y|<a_N |h_N(x)- h_N(y)|<b_N. To simplify the exposition, we will assume that N/a_N is an integer (the adaptation to deal with other cases is straightforward). Denote Γ_N = {0,a_N,2a_N,…,N-a_N}^d. It would be possible to allow discontinuities, as long as their measure in the continuum limit vanishes. The Hamiltonian of our model then takes the form H_h_N(σ) H_0(σ) - ∑_i∈Λ_N h_N(i) σ_i, ∀σ∈Ω_N. Let us denote by M_N(σ) ∑_i∈Λ_Nσ_i the total magnetization in the box Λ_N and by _N M_N(σ)σ∈Ω_N the set of all possible values of the magnetization in the box Λ_N. Fix M∈𝖬𝖺𝗀_N and let Ω_N,Mσ∈Ω_NM_N(σ)=M . The canonical ensemble at inverse temperature β∈_≥ 0 associated to the magnetization M is described by the probability measure on Ω_N,M defined by μ_N,β,,M(σ) 1/Z_N,β,,M e^-β H_(σ), Z_N,β,,M∑_σ∈Ω_N,M e^-β H_(σ). To simplify notations, we will make the following abuse of notation: for m∈ [-1,1], we will write Z_N,β,,m (and Z_N,β,m^, see below) for Z_N,β,,K_N(m) (and Z_N,β,K_N(m)^ respectively) where K_N(m) ∈𝖬𝖺𝗀_N is an (arbitrary) sequence such that N^-dK_N(m)→ m. As the results do not depend on the sequence, we allow ourselves this slight abuse and hope it will not confuse the reader. The same will be used for the corresponding measures. § THERMODYNAMIC QUANTITIES IN HOMOGENEOUS SYSTEMS We will use some classical results about large deviations and ensemble equivalence for the Ising model with a constant magnetic field. Define the Grand Canonical and Canonical partition functions Z_N,β,h^∑_σ∈Ω_Nexp[ -β( H_0(σ) - h M_N(σ) ) ], Z_N,β,M^∑_σ∈Ω_N,M e^-β H_0(σ) . We will need a number of standard results. Let K_N∈_N be any sequence such that N^-dK_N→ m. The pressure _β(h) lim_N→∞1/β N^dlog Z_N,β,h^, and the free energy _β(m) -lim_N→∞1/β N^dlog Z_N,β,K_N^, are well defined for all β≥ 0, h∈, m∈ [-1,1] and are convex functions of their argument. Moreover, for any pair of sequences K_N^-≤ K_N^+ such that N^-d K_N^±→ m, -lim_N→∞1/β N^dlog∑_K=K_N^-^K_N^+ Z_N,β,K^ = _β(m). See for instance <cit.>. For all β≤, _β is differentiable on and its derivative m_β(h) d/dh_β(h) is (strictly) increasing and continuous on . 
For all β>, _β is differentiable on ^* and its derivative m_β(h) d/dh_β(h) is (strictly) increasing and continuous on ^*. Moreover, m_β^* lim_h→ 0^+ m_β(h) = - lim_h→ 0^- m_β(h) > 0. For the statement about the case β= and h=0, see <cit.>. For the remaining statements, see, for instance, Chapter 3 in <cit.>. In fact, when h≠ 0 or β<, _β is analytic; see <cit.>. For any β≥ 0, _β and _β are related by _β(m) = sup_h∈ (hm - _β(h)), ∀m∈[-1,1], _β(h) = sup_m∈[-1,1](m h - _β(m)), ∀h∈. Moreover, whenever the pressure is differentiable (that is, when h≠ 0 or β≤), the supremum in (<ref>) is attained at m=m_β(h). When β> and h=0, the set of points at which the supremum is attained is the phase-coexistence interval [-m_β^*,m_β^*]. See, for instance, Theorem 4.13 and Section 4.8.3 in <cit.>. § TYPICAL MESOSCOPIC MAGNETIZATION PROFILES Our main results will be stated in terms of mesoscopic magnetization profiles, which we define now. A mesoscopic magnetization profile (or just profile for simplicity) in Λ_N is a function q_N:Γ_N → [-1,1]. The profile q_N is said to be compatible with the magnetization density m if 1/|Γ_N|∑_x∈Γ_N q_N(x) = m. We denote this compatibility relation by q_N∼ m and write _N(m) = {q_N:Γ_N → [-1,1]: q_N∼ m }. The first step in our analysis is the following observation. Let h_N be slowly varying and m∈ [-1,1]. Then, 1/N^dlog Z_N,β,h_N,m≥sup_q_N∈_N(m)1/|Γ_N|∑_x∈Γ_N(h_N(x)q_N(x) - _β(q_N(x))) + _N(1). Let q_N∈_N(m). Let q̃_N∈_N(m) be such that, for all x∈Γ_N, σ∈Ω_a_NM_a_N(σ) = q̃_N(x) a_N^d≠∅ and q_N(x) - q̃_N(x) a_N^d ≤ 2. (The existence of such a profile is easy to establish: start by setting q̌_N(x) = ⌊ q_N(x) a_N^d ⌋ for each x∈Γ_N. By construction, ∑_x∈Γ_Nq̌_N(x) ∈(mN^d - 2Γ_N, mN^d]. One then simply flip one minus spin in different cells until the resulting total magnetization is (the implicit allowed total magnetization approximating) mN^d.) We get a lower bound on Z_N,β,h_N,m by restricting the summation to configurations σ∼q̃_N, by which we mean that ∑_i∈ x+Λ_a_Nσ_i = q̃_N(x) a_N^d. Note that, for any x∈Γ_N, i∈ x+Λ_a_N implies |h_N(i)-h_N(x)|<b_N, as |i-j|< a_N implies |h_N(i)-h_N(j)|< b_N by (<ref>). This implies that, for any σ∼q̃_N, ∏_i∈ x+Λ_a_N e^h_N(i)σ_i≥ e^-b_N a_N^d e^h_N(x) q̃_N(x) a_N^d. This yields Z_N,β,h_N,m≥ e^-c_dβ |Γ_N|a_N^d-1 e^-b_N N^d∏_x∈Γ_N Z_a_N,β,q̃_N(x)^ e^h_N(x) q̃_N(x) a_N^d. Taking the log and dividing by N^d gives the result by Theorem <ref>. Lemma <ref> gives rise to a variational problem, namely maximize Ψ_N(q) 1/|Γ_N|∑_x∈Γ_N(h_N(x)q_x - _β(q_N(x))) over all q∈_N(m). It turns out that the latter can be solved explicitly. To avoid some pathologies (and because the cases m=± 1 are trivial), we assume from now on that m∈ (-1,1). Let us introduce the following notations: 2(h_N) {h_N(x), x∈Γ_N}, ∀ h_N:Γ_N→, Γ_N[h] x∈Γ_Nh_N(x) = h, ∀ h∈. (Of course, Γ_N[h]=∅ if h∉(h_N).) We are now going to construct explicitly a maximizer q^*_N of Ψ_N on _N(m). For h∈∖(h_N), we write (h) Γ_N^-1∑_x∈Γ_N m_β(h_N(x) - h) and set h̅suph∈∖(h_N)(h)≥ m. Note that h̅∈, since lim_h→ -∞(h) = 1 > m > -1 = lim_h→ +∞(h). Since, by Theorem <ref>, the function h↦ m_β(h) is (strictly) increasing on and continuous on ^*, it follows that the function h ↦(h) is (strictly) decreasing and continuous on ∖(h_N). Furthermore, ∀ h∈(h_N), lim_h' → h^-(h') - lim_h' → h^+(h') = 2m_β^*Γ_N[h]/Γ_N . In particular, (h̅) = m if and only if h̅∉(Γ_N). We define the mesoscopic magnetization profile q^*_N:Γ_N→ [-1,1] by q^*_N(x) m_β(h_N(x)-h̅) if x∉Γ_N[h̅], m_β^* + Γ_N/Γ_N[h̅](m - (h̅)) if x∈Γ_N[h̅]. 
Note that this is well defined, since (<ref>) implies that m_β^* + Γ_N/Γ_N[h̅](m - (h̅)) ∈ [-m_β^*,m_β^*]. This should not be surprising, since h_N(x)-h̅=0 when x∈Γ_N[h̅] and the system should therefore be in the phase coexistence regime (in the corresponding cell). It is straightforward to check that q^*_N∈_N(m). Let us ensure that q^*_N is indeed a maximizer of Ψ_N. Let m∈ (-1,1). q^*_N is a maximizer of Ψ_N on _N(m). Moreover, all other maximizers in _N(m) differ from q^*_N only on Γ_N[h̅]. Let q_N∈_N(m). Then, Ψ_N(q_N) = h̅ m + 1/|Γ_N|∑_x∈Γ_N((h_N(x)-h̅)q_N(x) - _β(q_N(x))) ≤h̅ m + 1/|Γ_N|∑_x∈Γ_N_β(h_N(x)-h̅) = h̅ m + 1/|Γ_N|∑_x∈Γ_N((h_N(x)-h̅)q^*_N(x) - _β(q^*_N(x))) = Ψ_N(q^*_N), where the inequality and the identity in the third line follow from Theorem <ref> (since q^*_N(x) = m_β(h_N(x)-h̅) when x∉Γ_N[h̅] and q^*_N(x)∈ [-m_β^*,m_β^*] when x∈Γ_N[h̅]). Let us now consider another maximizer q'_N∈_N(m). The considerations above show that changing the value of q^*_N(x) at any x∉Γ_N[h̅] necessarily (strictly) decreases Ψ_N. Therefore, the only x∈Γ_N at which q'_N(x) can differ from q^*_N(x) belong to Γ_N[h̅]. (Note that, in general, Ψ_N indeed admits several maximizers: they correspond to all possible choices of q'_N(x), x∈Γ_N[h̅], satisfying both ∑_x∈Γ_N[h̅] q'_N(x) = ∑_x∈Γ_N[h̅] q^*_N(x) and q'_N(x)∈ [-m_β^*, m_β^*] for all x∈Γ_N[h̅].) Lemmas <ref> and <ref> provide a lower bound on the partition function of the system that can be used to prove concentration of typical profiles on q^*_N (in the L^1 norm). Let us write 𝗆_N(x) a_N^-d∑_i∈ x+Λ_σ_i for the empirical magnetization density in the cell indexed by x∈Γ_N. Let m∈ (-1,1). Consider a sequence of slowly varying magnetic fields h_N such that lim_N→∞Γ_N[h]/Γ_N = 0 for all h∈(h_N). Then, for all ϵ>0, there exist c>0 and N_0 such that μ_N,β,h_N,m( ∑_x∈Γ_N𝗆_N(x) - q^*_N(x)≥ϵΓ_N) ≤ e^-c N^d, for all N≥ N_0. Let us denote by σ∈{-1,1}^Λ_N∑_x∈Γ_N𝗆_N(x) - q^*_N(x)≥ϵΓ_N the event under consideration. When occurs, there exist at least 1/4ϵΓ_N vertices x∈Γ_N such that 𝗆_N(x) - q^*_N(x)≥1/2ϵ. Indeed, were it not the case, we would have ∑_x∈Γ_Nq^*_N(x) - 𝗆_N(x) < 2 ·14ϵΓ_N + 12ϵ·Γ_N = ϵΓ_N. We choose N_0 large enough to ensure that Γ_N[h̅] < 1/8ϵΓ_N. We thus have μ_N,β,h_N,m() ≤∑_b⊂Γ_N∖Γ_N[h̅] b≥1/8ϵΓ_Nμ_N,β,h_N,m(∀ x∈ b, 𝗆_N(x) - q^*_N(x)≥12ϵ). As follows from the proof of Lemma <ref> and strict convexity of f_β outside the coexistence interval, there exists c=c(ϵ) >0 such that q_N(x) - q^*_N(x)≥12ϵ h_N(x)q_N(x) - _β(q_N(x)) ≤ h_N(x)q^*_N(x) - _β(q^*_N(x)) - c. Proceeding as in the proof of Lemma <ref>, we get, for any admissible (that is, that can be realized as an empirical profile) q_N∈_N(m) such that q_N(x) - q^*_N(x)≥12ϵ for all x∈ b, μ_N,β,h_N,m(𝗆=q_N) ≤1/Z_N,β,h_N,m e^Ψ_N(q_N) N^d + (N^d)≤ e^(Ψ_N(q_N)-Ψ_N(q^*_N)) N^d + (N^d)≤ e^-1/2 c a_N^d b, where we used Lemmas <ref> and <ref> for the second inequality, and (<ref>) for the last one (since (<ref>) implies that Ψ_N(q_N)≤Ψ_N(q^*_N) - ca_N^db). Since the number of admissible profiles q is equal to (a_N^d+1)^Γ_N = e^(N^d), we conclude that μ_N,β,h_N,m() ≤ e^(N^d)∑_b⊂Γ_N b≥1/4ϵΓ_N e^-1/2 c a_N^d b≤ e^-1/10cϵ N^d. § THE CASE OF A GRAVITATIONAL FIELD In this section, we consider one particular instance of the general framework described in Section <ref>: the case of a linearly growing magnetic field. 
Although we will stick to the magnetic language for simplicity, the physical interpretation is more natural in the lattice gas interpretation, since this linearly growing magnetic field can be interpreted as a gravitational field acting on the particles of the gas. Let m∈ (-1,1), let g_N be a sequence of positive real numbers and let m_N∈𝖬𝖺𝗀_N be a sequence converging to m. In this section, we consider a magnetic field given by h_N(i) g_N i_d, ∀ i=(i_1,…,i_d)∈Λ_N. For the reason explained above, we will refer to this particular form of the magnetic field as a gravitational field. We are going to consider two different scalings for the intensity g_N of the gravitational field. We will be mostly interested in the case g_N = g/N, which is the one relevant if one wants the field to affect the local thermodynamic properties of the system (in particular the value of local magnetization density as a function of the height); this will be discussed in Section <ref>. We will also briefly discuss the physically very relevant case of a gravitational field of the form g_N = g/N^2, for which the field has no effect on the local thermodynamic properties, but affects the macroscopic geometry of phase separation; this will be briefly discussed in Section <ref>, although a detailed analysis is postponed to a future work. For simplicity of exposition, we will describe only the limiting magnetization profile, that is, we will consider the function q^*:[0,1)^d→ [0,1] given by q^*(x) lim_N→∞ q_N^*(⌊ Nx/a_N ⌋ a_N), ∀ x∈ [0,1)^d. §.§ 1/N scaling: density profiles In this subsection, we focus on the case g_N=g/N. It is straightforward to check that this indeed corresponds to a slowly varying magnetic field in the sense of the previous sections, with b_N=g a_N/N and a_N an arbitrary sequence of positive numbers that tends to infinity in such a way that a_N=(N). Note that, with this particular normalization, the contribution of the gravitational term is of order N^d and thus competes with the interaction energy. This is thus the relevant scaling if one wants the gravitational field to have a nontrivial effect on the local density as the height changes. Two typical configurations are given in Figure <ref>, one above the critical temperature, one below. Although both clearly show a decrease of the density with the height, the second one exhibits a much more discontinuous behavior, manifested by a clear interface separating a liquid phase from a gas phase. This can be easily understood, at the level of mesoscopic density profiles, using the results of Section <ref>. §.§.§ The magnetization profile in the continuum limit Since h_N(Nx) = g x_d, for any x=(x_1,…,x_d)∈ [0,1)^d, lim_N→∞Γ_N^-1∑_x∈Γ_N m_β(h_N(x) - h) = ∫_[0,1]^d m_β(gx_d - h) x = ∫_0^1 m_β(gs - h) s = (_β(g-h) - _β(-h))/g. where we used Theorem <ref> in the last equality. Therefore, h̅ = suph∈_β(g-h) - _β(-h) ≥ mg, which implies, by strict convexity of the pressure _β, that h̅ is the unique solution to the equation _β(g-h) - _β(-h) = mg. When β≤ or when β> and h̅/g ∉ [0,1), the limiting macroscopic profile q^*(x) = m_β( gx_d - h̅ ) is an analytic function of the height x_d∈ [0,1). When β> and h̅/g ∈ [0,1), the limiting profile is only well defined for x∈[0,1)^d such that x_d≠h̅/g and is still given by the same expression. In particular, it is still analytic in x_d on [0,1)∖{h̅/g}, but is discontinuous when x_d=h̅/g, which is the height at which the interface between the two phases is located. Moreover, the jump at the discontinuity is of size 2m_β^*. 
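The construction of h̅ and of the limiting profile can be illustrated numerically. The following short Python sketch (an illustration added here, not part of the analysis above) uses the Curie–Weiss (mean-field) magnetization curve as a toy stand-in for the unknown m_β of the lattice model, solves the equation (_β(g-h)-_β(-h))/g = m for h̅ by bisection, using that the left-hand side equals the column average ∫_0^1 m_β(gs-h) ds, and then evaluates q^*(x_d) = m_β(g x_d - h̅). All parameter values are purely illustrative; for β above the mean-field critical point the profile indeed exhibits a jump of size 2m_β^* at height x_d = h̅/g.
import numpy as np

beta, g, m_target = 1.5, 2.0, -0.2       # illustrative values; the mean-field critical point is beta = 1

def m_mf(h, n_iter=300):
    # stable branch of the Curie-Weiss magnetization, m = tanh(beta*(m+h)); works on arrays
    h = np.asarray(h, dtype=float)
    m = np.sign(h)
    for _ in range(n_iter):
        m = np.tanh(beta * (m + h))
    return m

def column_average(hbar, n_grid=2000):
    # approximates (p_beta(g-hbar) - p_beta(-hbar))/g = integral_0^1 m_beta(g*s - hbar) ds
    s = (np.arange(n_grid) + 0.5) / n_grid
    return m_mf(g * s - hbar).mean()

# column_average(h) is decreasing in h, so a bisection locates hbar with column_average(hbar) = m_target
lo, hi = -g - 5.0, g + 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if column_average(mid) > m_target:
        lo = mid
    else:
        hi = mid
hbar = 0.5 * (lo + hi)

x_d = np.linspace(0.0, 1.0, 11)
profile = m_mf(g * x_d - hbar)
print("hbar =", round(float(hbar), 3), " interface expected at x_d =", round(float(hbar / g), 3))
print(np.round(profile, 3))   # increasing in x_d, with a jump of size ~2*m_beta^* at x_d = hbar/g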
Of course, since there is no explicit expression for _β (except in the trivial one-dimensional case or in perturbative regimes), one cannot say much more. One example in which one can determine the height of the interface is when m=0. Indeed, _β being an even function, one concludes that h̅ = g/2 and thus that the interface is located at height 1/2, as expected. 
§.§ 1/N scaling: interface fluctuations in two dimensions In the continuum limit, the interface is given by a horizontal straight line. It would be of interest to understand its fluctuations for finite values of N. Such an analysis might be doable by extending the techniques developed in <cit.> for a similar, albeit somewhat simpler, problem; we hope to come back to this issue in a future work. Here, we will only analyze a toy model for this interface that allows one to conjecture both the size of the typical fluctuations of the interface and its scaling limit. To motivate the effective model, we go through the standard “approximation by random walk” of an Ising interface. To simplify the discussion, we only consider the canonical ensemble with 0 total magnetization. The probability of a given interface γ contains three contributions: 1) the weight of the “free interface” (without constraint on the total magnetization and without magnetic field), 2) the global magnetization constraint, 3) the magnetic field. We start with the following approximations. * The global magnetization constraint is replaced by |γ_+| = |γ_-| where γ_+,γ_- are the parts of the box above and below γ. This is “justified” by the symmetry of the system and the fact that the system relaxes quickly away from the interface. * Taking as reference the flat interface, and supposing fast relaxation away from the interface, the contribution of the magnetic field to the effective action is of the form -2m_β^*g/N∑_x∈γ∑_k=1^|x_2| k “=” -c(β, g)/N∑_x∈γ (x_2)^2 (in each column, the effect of having the interface at height t>0 “forces” the spins below t to be in the minus phase, and similarly for t<0). * Lastly, the free interface can be approximated by a (directed) random walk bridge with exponential tails on the steps (this has been justified rigorously in several instances, see for example <cit.>). With these approximations (and further replacing the path of the directed random walk by the space-time trajectory of a standard one-dimensional random walk), one ends up with a random walk subject to a penalty c/N∑_i=1^N S_i^2 and to the global constraint ∑_i=1^N S_i =0. The particular random walk model under consideration should not affect the scaling limit as long as the random walk kernel has sufficiently many moments (two should be enough; see <cit.> for similar considerations). We therefore study the simplest model we can think of: the one with Gaussian increments. We are going to discuss three different regimes, leading to the following conjectures on the planar Ising model with Dobrushin boundary condition. In the canonical ensemble with m=0 and no magnetic field, the diffusively-rescaled interface converges to a Brownian bridge conditioned on having signed area 0. In the canonical ensemble with m=0 and magnetic field given by (<ref>), the interface, rescaled by N^1/2 horizontally and N^1/4 vertically, converges to a stationary Ornstein–Uhlenbeck process. The same scaling limit occurs in the grand canonical ensemble. We hope to return to these conjectures in future works. 
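Before turning to the rigorous statements, the conjectured N^1/4 size of the fluctuations can be checked numerically on the Gaussian effective model, written here directly in the form of the massive Gaussian chain studied in the next subsection (mass m = g/√n, Dirichlet boundary conditions), with the zero-area constraint imposed by exact Gaussian conditioning. The sketch below is a minimal Python illustration with an arbitrary choice of g; the last printed column should be roughly constant and close to (√2 g)^{-1/2}.
import numpy as np

rng = np.random.default_rng(0)
g = 1.0                                      # illustrative field strength

def midpoint_std(n, n_samples=2000):
    # Massive Gaussian chain: precision Q_ii = 1 + m^2, Q_{i,i+1} = -1/2, Dirichlet b.c., m = g/sqrt(n)
    m2 = (g / np.sqrt(n)) ** 2
    Q = (np.diag(np.full(n, 1.0 + m2))
         + np.diag(np.full(n - 1, -0.5), 1)
         + np.diag(np.full(n - 1, -0.5), -1))
    L = np.linalg.cholesky(Q)
    z = rng.standard_normal((n, n_samples))
    phi = np.linalg.solve(L.T, z).T          # samples with covariance Q^{-1}
    c = np.linalg.solve(Q, np.ones(n))       # Cov(X, phi_i), where X = sum_i phi_i is the signed area
    X = phi.sum(axis=1)
    phi_cond = phi - np.outer(X / c.sum(), c)   # exact conditioning on {X = 0}
    return phi_cond[:, n // 2].std()

for n in (100, 400, 1600):
    s = midpoint_std(n)
    print(n, round(float(s), 3), round(float(s / n ** 0.25), 3))   # last column roughly constant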
In the meantime, we establish the corresponding claims for the Gaussian random walk model discussed above; see Section <ref> and Theorem <ref> respectively. §.§.§ Preliminaries: covariance structure Our computations will rely on the following classical property of Gaussian random vectors. Let X=(X_1,X_2) be a Gaussian vector (both X_1,X_2 being vectors) with covariance and mean Σ = [ Σ_11 Σ_12; Σ_21 Σ_22 ], μ = [ μ_1; μ_2 ]. Then, if Σ_22 is invertible, X_1 conditioned on X_2=v is a Gaussian vector with covariance and mean Σ̃ = Σ_11 -Σ_12Σ_22^-1Σ_21, μ̃ = μ_1+ Σ_12Σ_22^-1(v-μ_2). Let us denote the system size by n; we'll keep it implicit in this section. We are interested in the massive 1D Gaussian chain with Dirichlet boundary condition, with distribution dP_m(φ_0,φ_1,…, φ_n+1) ∝δ(φ_0)δ(φ_n+1)e^-1/4∑_i=1^n+1(φ_i-φ_i-1)^2 -m^2/2∑_i=1^n+1φ_i^2 dφ, where δ denotes the Dirac mass at 0. This is a Gaussian vector, let G_m(i,j) _m(φ_iφ_j). When m>0, P_m is simply the law of the massive GFF on conditioned on {φ_0=φ_n+1 = 0}. The covariances of the massive GFF are given by (see <cit.>) G̃_m(i,j) = e^-ν_m |i-j|/sinh(ν_m), where ν_m ln(1+m^2 +√(2m^2 + m^4)). Using (<ref>) and (<ref>), straightforward computations lead to the following explicit expression for G_m(i,j). When m>0, for any 1≤ i≤ j ≤ n, G_m(i,j) = 2sinh(ν_m i)sinh(ν_m (n+1-j))/sinh(ν_m)sinh(ν_m (n+1)). When m=0, for any 1≤ i≤ j ≤ n, G(i,j) = 2 i(1-j/n+1). From this one deduces the following asymptotics, useful when the two vertices are far from the boundary. For any m>0, for any 1≤ i≤ j ≤ n, (1-e^-2ν_m i)(1-e^-2ν_m(n+1-j))e^-ν_m (j-i)/sinh(ν_m)≤ G_m(i,j) ≤ (1-e^-2ν_m(n+1))^-1e^-ν_m (j-i)/sinh(ν_m). Use 2sinh(x)= e^x(1-e^-2x) and Lemma <ref>. The next step is to compute the covariance of the height variables φ_i with the signed area X∑_i=1^nφ_i. This will be used to impose the canonical constraint later on. Observe that X is a centered Gaussian random variable. When m>0, one has _m(Xφ_i) = ∑_j=1^n G_m(i,j), _m(X^2) = ∑_i,j=1^n G_m(i,j). When m=0, one has _0(Xφ_i) = i(n+1-i), _0(X^2) = n(n+1)(n+2)/6. Writing the definition of X and summing the expressions for the covariances obtained in Lemma <ref>, the claims follow from the identities ∑_k=1^n k = n(n+1)/2 and ∑_k=1^n k^2 = n(n+1)(2n+1)/6. §.§.§ Scaling limit: massless case We first consider the case m=0. Consider the process (t∈ [0,1]), W̃_t^n (1-{tn})φ_⌊ tn ⌋ + {tn}φ_⌈ tn ⌉, where {x} x-⌊ x⌋ denotes the fractional part of x. Define its rescaled version by W^n_t 1/√(n)W̃_t^n. In the “Grand-canonical” case, the usual invariance principle of random walk bridge towards Brownian bridge gives the convergence of W_t^(n). The covariances can be read directly from Lemma <ref>: for 0≤ t_1≤ t_2≤ 1, _0(W_t_1^nW_t_2^n) = 2t_1(1-t_2) + _n(1). In particular, the diffusivity constant is √(2). In the “Canonical” case, the condition on the area is “macroscopic” and we end up with a Brownian bridge conditioned on having integral 0 (see <cit.> for a construction of this process). As this is not the main focus of the paper, we only identify the covariance structure. Finite-dimensional moments and tightness follow in a standard way (the arguments are close to those of the next section). For any 0≤ t_1≤ t_2≤ 1, lim_n→∞_0(W_t_1W_t_2 X=0) = 2t_1(1-t_2)- 6t_1t_2(1-t_1)(1-t_2). Using (<ref>), it follows from Lemmas <ref> and <ref> that, for any 1≤ i≤ j≤ n, _0(φ_iφ_j X=0) = G(i,j) - _0(φ_i X) _0(φ_j X) /_0(X^2) = 2i(n+1-j)/n+1 - 6ij(n+1-i)(n+1-j)/n(n+1)(n+2). The claim now easily follows. 
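The covariance formula of the last proposition is easy to verify numerically at finite n. The following Python sketch (added purely as an illustration) builds the massless covariance G(i,j) = 2 min(i,j)(n+1-max(i,j))/(n+1) from the Lemma above, applies the Gaussian conditioning formula, and compares the rescaled conditioned covariance with the limit 2t_1(1-t_2) - 6t_1t_2(1-t_1)(1-t_2); the choice of n and of the times is arbitrary.
import numpy as np

n = 2000
idx = np.arange(1, n + 1)
I, J = np.meshgrid(idx, idx, indexing="ij")
G = 2.0 * np.minimum(I, J) * (n + 1 - np.maximum(I, J)) / (n + 1)   # massless covariance (Lemma above)

cX = G.sum(axis=1)                 # Cov(X, phi_i) = i*(n+1-i)
varX = G.sum()                     # Var(X) = n*(n+1)*(n+2)/6

def cov_W_cond(t1, t2):
    # E[ W_{t1} W_{t2} | X = 0 ] with W_t = phi_{floor(t*n)} / sqrt(n), via Gaussian conditioning
    i, j = int(t1 * n), int(t2 * n)
    return (G[i - 1, j - 1] - cX[i - 1] * cX[j - 1] / varX) / n

for t1, t2 in [(0.25, 0.5), (0.3, 0.8), (0.5, 0.5)]:
    limit = 2 * t1 * (1 - t2) - 6 * t1 * t2 * (1 - t1) * (1 - t2)
    print(t1, t2, round(float(cov_W_cond(t1, t2)), 4), round(limit, 4))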
§.§.§ Scaling limit: massive case. We now turn to the more interesting case m = m_n = g/√(n). For simplicity, we assume n to be even. In this case, the natural scaling of n^1/4 for the field and n^1/2 for the space naturally gives a process indexed by (as will be seen when computing the covariances). In order to be able to zoom in, define for t∈, W̃_t^n (1-{tn^1/2})φ_n/2 + ⌊ tn^1/2⌋ + {tn^1/2}φ_n/2 +⌈ tn^1/2⌉, and W_t^n1/n^1/4W̃_t^n. The goal is to prove the following statement. Consider either the “grand canonical” (φ∼ P_m_n) or the canonical (φ∼P_m_n( · X=0)) setting. In both cases, W_t^n converges weakly, as n→∞, to the stationary Ornstein–Uhlenbeck process with parameters θ = √(2) g,σ^2 = 1. §.§.§ “Grand-canonical” case The proof follows the standard path: we first identify the covariance structure, then we prove convergence of finite-dimensional marginals, and we finally conclude using tightness. Let b∈ (12,1). Then, uniformly over n^b≤ i≤ j≤ n-n^b, G_m_n(i,j) = √(n) e^-√(2)g (j-i)/√(n)/√(2) g(1+(n^-1/2) ). In particular, for any t_1≤ t_2∈, lim_n→∞_m_n(W_t_1^n W_t_2^n) = 1/√(2) ge^-√(2)g(t_2-t_1), where the convergence is uniform over compact sets. Let b,i,j be as in the statement. First, ν_m = ln(1+m^2 + √(2)m√(1+m^2/2)) = √(2)m + (m^3). Lemma <ref> yields, uniformly over n^b ≤ i≤ j ≤ n-n^b, G_m_n(i,j) = e^-ν_m_n (j-i)/sinh(ν_m_n)(1+(e^-cn^b-1/2)), for some c=c(g)>0. Now, as sinh(x) = x + (x^3), e^-ν_m_n (j - i) = e^(n^-1/2) e^-√(2)g (j-i)/√(n), sinh(ν_m_n) = √(2) g/√(n)(1+(1/n)). Therefore, G_m_n(i,j) = √(n) e^-√(2)g (j-i)/√(n)/√(2) g(1+(n^-1/2) ), which is the first part of the claim. The second part follows immediately using the definition of W^n. Let N≥ 0 be an integer. Let t_1, t_2, …, t_N∈. Then, for any λ_1,…,λ_N∈, lim_n→∞_m_n(e^∑_k=1^N λ_k W^n_t_k) = exp(-1/2√(2)g∑_k,l=1^N λ_kλ_l e^-√(2)g|t_l-t_k| ), the convergence being uniform over compact sets. The claim follows from the fact that φ is a Gaussian vector, the definition of W^n, and Lemma <ref>. Having proved the convergence of finite-dimensional distributions, We only have to establish tightness in order to conclude the proof of the first item in Theorem <ref>. Tightness will follow from a simple moment bound on the gradients on the field. For any m≥ 0, any n≥ 1 and any 1≤ i≤ j ≤ n, _m(|φ_i -φ_j|^4) ≤ 12|i-j|^2. In particular, for any t_1≤ t_2, _m(|W_t_1^n -W_t_2^n|^4) ≤ 12|t_1-t_2|^2. Let i,j be as in the statement. Since φ_i-φ_j is a Gaussian random variable, _m(|φ_i -φ_j|^4) = 3_m(|φ_i -φ_j|^2)^2. Now, _m(|φ_i -φ_j|^2) is decreasing in m^2 (can be seen by differentiating with respect to m^2 and using Wick rule). So, _m(|φ_i -φ_j|^2) ≤_0(|φ_i -φ_j|^2) = 2(j-i)/n+1(i + (n+1-j))≤ 2 (j-i), by Lemma <ref>, where the inequality uses i-j≤ 0. The second claim follows from the first one and the definition of W^n. Tightness of (W_t^n)_t∈ [-M,M], n≥ 1 for any M>0 is then implied by the tightness of W_0^n, n≥ 1 (which follows from Lemma <ref>) and the above Lemma; see <cit.>. §.§.§ “Canonical” case The proof is the same as in the “grand canonical” setting, once we get control over the covariances of the conditioned process. Let m_n=g/√(n). Let M>0 and b∈(12,1). Then, uniformly over n^b ≤ i≤ j ≤ n-n^b with |i-j|≤ Mn^1/2, _m_n(φ_iφ_j X=0) = G_m_n(i,j)(1+(n^-1/2)). In particular, for any t_1≤ t_2∈, lim_n→∞_m_n(W_t_1^n W_t_2^n X=0) = 1/√(2) ge^-√(2)g(t_2-t_1), the convergence being uniform over compact sets. By (<ref>), for any m>0, _m(φ_iφ_j X=0) = G_m(i,j) - _m(Xφ_i)_m(Xφ_j)/_m(X^2). 
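The convergence of the rescaled covariance to the Ornstein–Uhlenbeck covariance e^{-√2 g|t_2-t_1|}/(√2 g) can likewise be checked directly from the closed formula for G_m. The short Python sketch below (an added illustration, with arbitrary g and n) evaluates n^{-1/2} G_{m_n}(i,j) at i = n/2 + ⌊t_1√n⌋, j = n/2 + ⌊t_2√n⌋ and compares it with the limiting expression; the agreement improves as n grows, consistently with the (n^{-1/2}) correction in the Lemma.
import numpy as np

g, n = 1.0, 10000
m = g / np.sqrt(n)
nu = np.log(1.0 + m**2 + np.sqrt(2.0 * m**2 + m**4))

def G_m(i, j):
    # closed-form covariance of the massive chain with Dirichlet boundary conditions, for i <= j
    return 2.0 * np.sinh(nu * i) * np.sinh(nu * (n + 1 - j)) / (np.sinh(nu) * np.sinh(nu * (n + 1)))

def cov_W(t1, t2):
    # E[ W^n_{t1} W^n_{t2} ] with W^n_t = n^{-1/4} * phi_{n/2 + floor(t*sqrt(n))}
    i = n // 2 + int(t1 * np.sqrt(n))
    j = n // 2 + int(t2 * np.sqrt(n))
    i, j = min(i, j), max(i, j)
    return G_m(i, j) / np.sqrt(n)

for t1, t2 in [(0.0, 0.0), (-1.0, 0.5), (0.0, 2.0)]:
    limit = np.exp(-np.sqrt(2.0) * g * abs(t2 - t1)) / (np.sqrt(2.0) * g)
    print(t1, t2, round(float(cov_W(t1, t2)), 4), round(float(limit), 4))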
To get the claim, we only need to lower bound the above expression (as the ratio in the right-hand side is non-negative). Using Lemmas <ref> and <ref>, we obtain the bounds (valid for n large enough, b>1/2 and n^b≤ i≤ j≤ n-n^b) _m_n(Xφ_i)≤2/sinh(ν_m_n)∑_k=1^n e^-ν_m_n |i-k|, _m_n(Xφ_j)≤2/sinh(ν_m_n)∑_k=1^n e^-ν_m_n |j-k|, G_m_n(i,j)≥e^-ν_m_n |i-j|/2sinh(ν_m_n), _m_n(X^2)≥1/2sinh(ν_m_n)∑_k,l=n^b^n-n^b e^-ν_m_n|k-l|. From these, the fact that ν_m_n is of order n^-1/2 and the bound ∑_k=1^n e^-ν|k-i|≤2/1-e^-ν, we obtain _m_n(Xφ_i)_m_n(Xφ_j)/G_m_n(i,j)_m_n(X^2)≤(n^-1/2) e^ν_m_n|i-j|, which implies the claim, as ν_m_n|i-j| = (1) by assumption. §.§ 1/N^2 scaling: equilibrium crystal shapes Let us now turn our attention to another physically relevant scaling of the gravitational field: g_N = g/N^2. Again, this corresponds to a slowly varying magnetic field with b_N=ga_N/N^2 and a_N an arbitrary sequence of positive numbers that tends to infinity in such a way that a_N=(N). Under such a scaling, the contribution of the gravitational term to the energy is (N^d-1) and thus cannot compete with the interaction energy term H_0. In fact, it is of precisely the same order as the contribution originating from the boundary condition. It is thus important, in this section, to specify more precisely which boundary condition is used (in the previous sections, we used the free boundary condition for convenience, since the latter played no role at large scales). Given a set Λ⋐, the Hamiltonian of the Ising model in Λ with boundary condition η∈{-1,1}^ is the function on {-1,1}^Λ defined by H_Λ^η(σ) -∑_{i,j}⊂Λ_N i-j = 1σ_iσ_j - ∑_i∈Λ_N, j∈^d∖Λ_N i-j=1σ_iη_j. We will denote the corresponding (grand canonical) partition function by Z_Λ,β^η. §.§.§ Surface tension The central quantity needed to describe the macroscopic geometry of phase separation is the surface tension, which we briefly introduce now. Let n⃗ be a unit vector in . We consider the box Δ_N [-N/2,N/2]^d∩ and the boundary condition defined by η^n⃗_j 1 if j·n⃗≥ 0 and -1 otherwise. The boundary condition η^n⃗ forces the presence of an interface through the system. In a continuum limit (that is, when rescaling everything by a factor 1/N), this interface converges to the intersection Π^n⃗ of the rescaled box [-12,12]^d with the plane passing through 0 and with normal n⃗; see Figure <ref>. The presence of this interface contributes a correction -τ_β(n⃗)^(d-1)(Π^n⃗)N^d-1 + (N^d-1) to the pressure of the model, where τ_β(n⃗) is called the surface tension (in direction n⃗) and ^(k)(A) is the k-dimensional Hausdorff measure of the subset A⊂. In other words, the surface tension is defined by τ_β(n⃗) -lim_N→∞1/^(d-1)(Π^n⃗)N^d-1logZ^n⃗_Δ_N,β/Z_Δ_N,β^+ , where Z^n⃗_Δ_N,β and Z_Δ_N,β^+ are the partition functions of the (grand canonical) Ising model in Δ_N with boundary conditions η^n⃗ and η^+≡ 1 respectively. A proof of existence of the limit can be found in <cit.>. §.§.§ Phase separation in the absence of a gravitational field We first recall the phenomenology in the case g=0. So, in this section, we consider the canonical Ising model in the box Λ_N with magnetization density m, boundary condition η^- ≡ -1 and no gravitational field (g=0). In this case, as long as m<m^*_β, phase separation occurs: typical configurations contain a macroscopic droplet of one phase immersed in the other phase; see Figure <ref>. 
Moreover, the shape of the droplet becomes deterministic in the continuum limit, where it is given by the solution _β^m of the following variational problem[Of course, to really make sense, one should impose some regularity on the sets V; we won't go into this here and refer instead to the review <cit.>.]: minimize ∫_∂ Vτ_β(n⃗_s) ^(d-1)_s over all subsets V⊂[0,1]^d with volume ^(d)(V)=(m_β^* + m)/(2m_β^*), where n⃗_s is the outer unit normal at s. In the absence of the constraint that V⊂ [0,1]^d, the (unique up to translation) solution of this variational problem is given by the Wulff shape <cit.>, that is, the dilation of the convex body ⋂_n⃗∈^d-1x∈x·n⃗≤τ_β(n⃗) having the required volume. In particular, this yields the shape of the droplet when m is close enough to -m_β^* that a suitable translate of this solution fits inside the box [0,1]^d. For larger values of m, the solution can still be determined explicitly, at least in dimension 2 <cit.>, but we won't need this for our discussion here. A rigorous derivation of this variational problem from a probabilistic analysis of the Ising model at any β> and in any dimension d≥ 2 was developed in the 1990s (in the two-dimensional setting) and the early 2000s (for higher dimensions). Stated slightly informally, it has been proved that[Note that, while the actual statements in dimensions 3 and higher are indeed formulated in terms of mesoscopic profiles, much more precise statements exist in dimension 2. We refer to the review <cit.> and references therein.], for any β>, ∀ϵ>0, lim_N→∞sup_V_*μ^-_N,β,0,m( ∑_x∈Γ_N𝗆_N(x) - q_V_*(x)≤ϵΓ_N) = 1, where the supremum is taken of all minimizers V_* of the variational problem and, for each x∈Γ_N, q_V_*(x) -m^*_β if 1/N x ∈ V_*, -m^*_β otherwise. §.§.§ Phase separation in the presence of a gravitational field Let us now consider the effect of a gravitational field of intensity g_N=g/N^2 on the macroscopic geometry. In this case, typical configurations still exhibit phase separation, but the shape of the droplet is modified by the presence of the field; see Figure <ref>. Of course, that the gravitational term influences the macroscopic geometry is not surprising: after all, phase separation is a surface-order phenomenon and therefore the (N^d-1) contribution to the energy coming from the presence of the gravitational field is competing with the cost associated to the phase-separation interface. We plan to establish the following conjecture in future work. Let g_N=g/N^2 and h_N be as in (<ref>). For any β>, ∀ϵ>0, lim_N→∞sup_V_*μ^-_N,β,h_N,m( ∑_x∈Γ_N𝗆_N(x) - q_V_*(x)≤ϵΓ_N) = 1, where the supremum is taken of all minimizers V_* of the following variational problem: minimize ∫_∂ Vτ_β(n⃗_s) ^(d-1)_s + 2m_β^*g∫_V x_d x over all subsets V⊂[0,1]^d with ^(d)(V)=(m_β^* + m)/(2m_β^*). The solution to this variational problem is known when d=2 and the constraint V⊂ [0,1]^2 is dropped (or when m is sufficiently close to -m_β^* for this solution to fit inside the box); see <cit.>. Let us note that the higher-dimensional problem is still unsolved in general. plain
http://arxiv.org/abs/2307.02395v1
20230705161300
Collision integral with momentum-dependent potentials and its impact on pion production in heavy-ion collisions
[ "Natsumi Ikeno", "Akira Ono" ]
nucl-th
[ "nucl-th", "nucl-ex" ]
Department of Agricultural, Life and Environmental Sciences, Tottori University, Tottori 680-8551, Japan RIKEN Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan Cyclotron Institute, Texas A&M University, College Station, Texas 77843, USA Department of Physics, Tohoku University, Sendai 980-8578, Japan RIKEN Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan The momentum dependence of the nucleon mean-field potential in a wide momentum range can be an important factor to determine the Δ resonance and pion production in intermediate-energy heavy-ion collisions. In particular, in neutron-rich systems such as ^132Sn+^124Sn collisions, we need to carefully treat the momentum dependence because the neutron and proton potentials can have different momentum dependence, as characterized at low momenta by effective masses. In the present work, we rigorously calculate the collision terms of NN ↔ N Δ and Δ↔ N π processes with the precise conservation of energy and momentum under the presence of momentum-dependent potentials for the initial and final particles of the process. The potentials affect not only the threshold condition for the process but also the cross section in general as a function of the momenta of the initial particles, which is treated in a natural way in the present work. Calculations are performed by combining the nucleon dynamics obtained by the antisymmetrized molecular dynamics (AMD) model with a newly developed transport code which we call sJAM. The calculated results for central ^132Sn+^124Sn collisions at 270 MeV/nucleon clearly show that the momentum dependence of the neutron and proton potentials has a significant impact on the NN → N Δ process, and this information is strongly reflected in the charged pion ratio (π^-/π^+). We also investigate the effects of the high-density symmetry energy and the isovector part of the potential of Δ resonances on pion production, which we find are relatively small compared to the effect of the momentum dependence of the neutron and proton potentials. Collision integral with momentum-dependent potentials and its impact on pion production in heavy-ion collisions Akira Ono August 1, 2023 =============================================================================================================== § INTRODUCTION Heavy-ion collisions provide useful systems for studying nuclear matter under various conditions of temperature and density. In particular, in collisions of neutron-rich nuclei, they allow us to access insights into the equation of state (EOS) of isospin-asymmetric nuclear matter at high density, where the numbers of protons and neutrons are unbalanced <cit.>. The study of the EOS has recently attracted attention as a combined effort involving nuclear theory, experimental nuclear physics, and astrophysics. For example, by combining information from terrestrial and astrophysical observations, it has been reported that the properties of neutron-rich dense matter are constrained in the density range explored in neutron stars <cit.>. Constraints from heavy-ion collision experiments play an important role there. In the incident energy range from several hundred MeV to several GeV per nucleon, experiments of the heavy-ion collisions have been carried out and information on EOS has been deduced to some extent from observables such as the collective flow and the kaon and pion production, e.g., by the analyses of the Au + Au collision data taken at GSI <cit.>. 
Recently, at the RI Beam Factory (RIBF) in RIKEN, an experiment of collisions of Sn isotopes was performed at 270 MeV/nucleon by the SπRIT collaboration, which allows us to study the systems of various isospin asymmetries. They reported the pion observables <cit.> and the nucleon observables of the light fragments <cit.>. In particular, the charged pion ratio (π^-/π^+) is believed to be one of the good observables to probe the symmetry energy at high density <cit.>. The slope of the symmetry energy is determined to be 42 < L< 117 MeV from the charged pion spectra at high transverse momenta <cit.> by an analysis with one of the transport models, dcQMD <cit.>. Transport models are used as the main method to obtain physics information from heavy-ion collisions by solving the time evolution of the collision reactions <cit.>. However, there are some ambiguities in the model ingredients and numerical implementations. The Transport Model Evaluation Project (TMEP) has been underway to resolve the uncertainties among the transport model predictions <cit.>. One of the projects was the pion production prediction of Ref. <cit.>, where a significant discrepancy was found between the transport model predictions and the experimental data for the charged pion multiplicities and charged pion ratio in ^132Sn+^124Sn, ^112Sn+^124Sn, and ^108Sn+^112Sn collisions. Most theoretical predictions including the AMD+JAM model <cit.> underestimated the π^-/π^+ ratio. One of the reasons for this discrepancy is considered to be the lack of potentials for the nucleons and Δ resonances in the collision terms. The momentum dependence of the neutron and proton mean-field potentials in isospin-asymmetric systems is one of the important aspect of the nuclear interaction that affects various phenomena in nuclear physics and astrophysics, see e.g. Refs. <cit.> for reviews. The nucleon collective flow in heavy-ion reactions is known to be strongly sensitive to the momentum dependence of the isoscalar part of the potential <cit.>, and also shown to be affected by the momentum dependence of the isovector part of the potential, i.e., the effective mass splitting at high isospin asymmetries <cit.>. The n/p spectral ratio is also predicted to be sensitive to the effective mass splitting <cit.> and some information has been obtained from experimental data <cit.>. Note that the above effects of the momentum-dependent potentials are essentially caused by the single-particle motion in the mean field. The collision term in transport theory is as important as the mean-field propagation term. In fact, when the system reaches thermal equilibrium, the collision term determines the properties such as the EOS and the chemical composition. For example, careful treatments of the NN↔ NΔ and Δ↔ Nπ processes are necessary for a correct description of a mixture of nucleons, Δ resonances and pions in a box as shown in Ref. <cit.>, where transport models were compared in the case without mean-field potentials. When potentials are present, the collision term needs to incorporate potentials at least to guarantee the correct description of equilibrium reflecting mean-field interaction. First of all, the presence of potentials affects the threshold condition for the process particularly when the potentials are momentum dependent and/or the particle species change from the initial to the final state, e.g., in the NN→ NΔ process. 
In an isospin-asymmetric environment where the neutron and proton potentials are different, the threshold condition depends on the isospin channel, e.g., whether nn→ pΔ^- or pp→ nΔ^++, which requires an extended treatment in transport models. A few transport models consider the threshold effect in heavy-ion collision calculations <cit.> and its importance has also been demonstrated in box calculations <cit.>. Furthermore, a related question is how the presence of potentials modifies the cross section above the threshold as a function of the momenta of the colliding particles, as investigated by Ref. <cit.> for isospin symmetric systems and by Ref. <cit.> for asymmetric nuclear matter in the framework of the one-boson exchange model. A fully consistent incorporation of the potential-dependent cross sections in transport calculations is still a challenging problem. In the present work, we theoretically study the Δ resonance and pion productions in the ^132Sn+^124Sn collision at E/A =270 MeV by taking into account momentum-dependent mean-field potentials in the collision term. For this, we develop a transport model (AMD+sJAM) to properly treat the potentials, e.g., for the NN ↔ NΔ and Δ↔ N π processes. This is an extension of the previous model AMD+JAM <cit.> by Ikeno, Ono, Nara, and Ohnishi in which the antisymmetrized molecular dynamics (AMD) <cit.> was combined with a hadronic cascade model (JAM) whose collision term was formulated for particles in vacuum <cit.>. We will see that by the extension for potentials the results are improved drastically and the high π^-/π^+ ratio can be explained by the AMD+sJAM model, depending on the momentum dependence of the neutron and proton mean-field potentials. This paper is organized as follows. In Sec. <ref>, we explain our choice of the nuclear interaction and the nucleon and Δ potentials. In Sec. <ref>, we formulate the collisions under the presence of potentials, especially for NN ↔ N Δ and Δ↔ N π processes. In Sec. <ref>, as an example, we discuss how the NN → N Δ cross sections in nuclear matter are affected by momentum-dependent potentials. In Sec. <ref>, we introduce the AMD+sJAM transport model, in which the above formulation for the collision term is applied to a newly developed code sJAM. In Sec. <ref>, we show the results of the pion observables in the ^132Sn+^124Sn collision at the incident energy of E/A =270 MeV within the AMD+sJAM model. We will see a strong impact of the momentum dependence of the neutron and proton potentials on the pion productions. We also investigate the effects of the high-density symmetry energy and the isovector part of the potential of the Δ resonances on the pion productions. A summary is given in Sec. <ref>. § POTENTIALS §.§ Energy density and potentials in system with nucleons only When only nucleons are present in the system, our model is based on the interaction energy density expressed as ℰ_int(r) = ∑_αβ { U^t_0_αβρ_α(r) ρ_β (r) + U^t_3_αβρ_α(r) ρ_β (r) [ρ(𝐫)]^γ + U^τ_αβτ̃_α (r) ρ_β (r) + U^∇_αβ∇ρ_α(r) ∇ρ_β(r) }, which is similar to the Skyrme energy density functional but the spin–orbit term is not included in our calculations. Each single-particle state is assumed to be a product of the spatial part and the spin–isospin part χ_α, with the spin–isospin label α (or β) =p↑, p↓, n↑ and n↓. The densities ρ_α(r) and τ̃_α(r) in Eq. 
(<ref>) are defined by using the one-body Wigner distribution function f_α(r,p) as ρ_α(r) =∫dp/(2πħ)^3f_α(r,p), τ̃_α(r) =∫dp/(2πħ)^3[p-p̅(r)]^2/1+[p-p̅(r)]^2/Λ_md^2 f_α(r,p), with p̅(r)=1/∑_αρ_α(r)∑_α∫dp/(2πħ)^3pf_α(r,p). Here τ̃_α(r) is a kind of the kinetic energy density but a cut-off parameter Λ_md has been introduced following Ref. <cit.>, which will be important for the high-momentum behavior of the mean field. The coefficients U^t_0_αβ, U^t_3_αβ, U^τ_αβ, and U^∇_αβ in Eq. (<ref>) are related to the Skyrme parameters by U^t_0_αβ = 1/2t_0 ⟨αβ | (1+x_0P_σ) | αβ - βα⟩, U^t_3_αβ = 1/12 t_3 ⟨αβ | (1+x_3P_σ) | αβ - βα⟩, U^τ_αβ = 1/4 t_1 ⟨αβ | (1+x_1P_σ) | αβ - βα⟩ + 1/4t_2 ⟨αβ |(1+x_2P_σ)  | αβ + βα⟩, U^∇_αβ = 3/16t_1 ⟨αβ | (1+x_1P_σ) | αβ - βα⟩ - 1/16t_2 ⟨αβ |(1+x_2P_σ)  | αβ + βα⟩ , where P_σ is the spin exchange operator. In the case of Λ_md=∞, our interaction is equivalent to the Skyrme-type interaction v_ij =t_0(1+x_0P_σ)δ(𝐫) +12t_1(1+x_1P_σ) [δ(𝐫)𝐤^2 +𝐤^2δ(𝐫)] +t_2(1+x_2P_σ) 𝐤·δ(𝐫)𝐤 +16t_3(1+x_3P_σ)[ρ(𝐫_i)]^γδ(𝐫), where 𝐫=𝐫_i-𝐫_j and 𝐤=1/2ħ(𝐩_i-𝐩_j). Employing the AMD model <cit.>, we solve the time evolution of a many-nucleon system by directly using the energy density functional [Eq. (<ref>)] together with the other parameters, e.g., for the two-nucleon collision term. However, for some purposes, it is useful to consider the corresponding momentum-dependent mean-field potential which is obtained by U_α(r,p)=(2πħ)^3δ/δ f_α(r,p)∫ℰ_int(r)dr. In the case of Eq. (<ref>), we have U_α(r,p) =A_α(r)[p-p̅(r)]^2/1+[p-p̅(r)]^2/Λ_md^2 +C̃_α(r), with A_α(r)=∑_β U^τ_αβρ_β(r) and C̃_α(r) = ∑_β{ 2U^t_0_αβρ_β(r) + 2 U^t_3_αβρ_β(r) [ρ(𝐫)]^γ + U^τ_αβτ̃_β(r) - 2 U^∇_αβ∇^2 ρ_β(r) } + ( ∑_α' β' U^t_3_α' β'ρ_α'(r) ρ_β' (r) ) γ [ρ(𝐫)]^γ-1. In the above, the term originating from ∂τ̃_α(r)/∂p̅(r) has been ignored, which is justified when ∫dp'/(2πħ)^3p'-p̅(r)/[1+(p'-p̅(r))^2/Λ_md^2]^2 f_α(r,p')≈ 0. An example of the momentum dependent potential U(p)=U_α(r,p) is shown in Fig. <ref> for the zero-temperature symmetric nuclear matter at the saturation density ρ=ρ_0=0.16 fm^-3. Here, we took the SLy4 parameter set <cit.> to determine the coefficients in Eq. (<ref>). The blue dot–dashed curve shows the case of Λ_md=∞, in which the momentum dependence is simply quadratic in p as a direct consequence of the Skyrme interaction of Eq. (<ref>), and the curvature is related to the effective nucleon mass m^*≈ 0.70 m_N in the SLy4 parametrization. This quadratic momentum dependence is too strong compared to the solid points which show the energy or momentum dependence of the optical potential derived from the global fit of the proton–nucleus elastic scattering data by Hama et al. <cit.>. On the other hand, when we choose the parameter Λ_md/ħ=5.0 fm^-1, the momentum dependence of U(p) is weakened as shown by the red dashed line in Fig. <ref>, and it is now similar to the potential by Hama. For the present study of heavy-ion collisions at 270 MeV/nucleon, in particular for the production of Δ resonances and pions, we expect that a suitable description of U(p) at p≳ 500 MeV/c is important. Therefore, in all calculations in this paper, we take Λ_md/ħ=5.0 fm^-1. To formulate the collision term in Sec. <ref>, we use a relativistic framework in which the nucleon single-particle energy is written with the scalar and vector potentials (self-energies) Σ_a(r) ≡ (Σ_a^s(r), Σ_a^0(r), Σ_a(r)) as E_a(r, p) = √((m_N + Σ_a^s(r))^2 + (p- Σ_a(r))^2 ) + Σ_a^0 (r). 
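As a rough numerical illustration of the regulated momentum dependence discussed above, the following sketch (not part of the AMD code) evaluates U(p)-U(0) = A p^2/(1+p^2/Lambda_md^2) for symmetric matter at rest (pbar = 0). The strength A is fixed from the quoted effective mass m* = 0.70 m_N through the relation m* = (m_N^-1 + 2A)^-1 given below, with m_N = 938 MeV and hbar*c = 197.327 MeV fm assumed; the constant term C-tilde drops out of the difference.

# Minimal numerical sketch (not the AMD implementation): momentum dependence of
# U(p) - U(0) = A p^2 / (1 + p^2/Lambda_md^2) for symmetric matter at rest (pbar = 0),
# comparing the unregulated Skyrme case (Lambda_md -> infinity) with Lambda_md/hbar = 5 fm^-1.
# Assumed inputs: m_N = 938 MeV, m*/m_N = 0.70 (SLy4 at rho_0), and the effective-mass
# relation m* = (1/m_N + 2A)^(-1) used to fix the strength A.
import numpy as np

HBARC = 197.327          # MeV fm
M_N   = 938.0            # MeV
MSTAR = 0.70 * M_N       # effective mass at rho_0 (SLy4)

A = 0.5 * (1.0 / MSTAR - 1.0 / M_N)   # MeV^-1, from m* = (1/m_N + 2A)^-1

def dU(p, lambda_md_fm=5.0):
    """U(p) - U(0) in MeV for momentum p in MeV/c; lambda_md_fm = Lambda_md/hbar in fm^-1,
    or None for the unregulated quadratic (Lambda_md -> infinity) case."""
    if lambda_md_fm is None:
        return A * p**2
    lam = lambda_md_fm * HBARC        # Lambda_md in MeV/c
    return A * p**2 / (1.0 + (p / lam)**2)

for p in (300.0, 500.0, 1000.0):
    print(f"p = {p:6.0f} MeV/c : quadratic {dU(p, None):6.1f} MeV, "
          f"regulated {dU(p, 5.0):6.1f} MeV")

With these inputs the regulated form is roughly half of the unregulated quadratic one at p of about 1 GeV/c, in line with the weakening of the momentum dependence described above.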
We assume here that the distribution does not depend on the direction of the spin, and thus the index a takes p (proton) and n (neutron). The relations such as f_p=f_p↑+f_p↓ and U_p=U_p↑=U_p↓ should be understood implicitly. The scalar and vector potentials Σ_a(r) can be determined from the potential U_a(r,p) of Eq. (<ref>) by following the same prescription used in Ref. <cit.>. We require the equivalence p^2/2m_N+A_a(p-p̅)^2+C̃_a +m_N ≈√((m_N+Σ_a^s)^2+(p-Σ_a)^2)+Σ_a^0 to hold at low momenta up to the order of (p-Σ_a)^2. From this condition, the scalar potential is determined by Σ^s_a=m^*_a - m_N with the nucleon effective mass m^*_a= (m_N^-1 + 2A_a)^-1, and the vector potential is derived as Σ_a =4A_a m^*_ap̅, Σ^0_a =C̃_a-Σ^s_a +A_ap̅^2 -8m^*_a A_a^2p̅^2. However, when p̅ρ_b can be identified with the current J_b defined by J_b(r)=∫dp/(2πħ)^3pf_b(r,p), we may use an alternative formula for the vector potential Σ_a = 2m^*_a∑_b U^τ_abJ_b, Σ^0_a = C_a-Σ^s_a-Σ_a^2/2m^*_a, where C_a is the same as C̃_α but τ̃_β(r) in Eq. (<ref>) is replaced by the so-called kinetic energy density τ_b(r)=∫dp/(2πħ)^3p^2f_b(r,p). In Fig. <ref>, the red dashed curve shows a relativistic version of the potential U_rel(p)=√((m_N+Σ^s)^2+p^2)+Σ^0-√(m_N^2+p^2) for the zero-temperature symmetric nuclear matter at ρ=ρ_0. The scalar and vector potentials are determined from the SLy4 parameters as described above. We find that the momentum dependence of U_rel(p) is similar to the empirical optical potential by Hama et al. <cit.> (solid points) and also consistent with the nonrelativistic one with Λ_md/ħ=5.0 fm^-1 (red dashed curve). In the present study, we investigate how the results depend on the effective interaction by comparing three cases of the energy density functional which are labeled as `SLy4', `SLy4:L108', and `SkM*'. In all cases, we set Λ_md/ħ=5.0 fm^-1 to modify the momentum dependence. The `SLy4' functional is based on the Skyrme SLy4 force of Ref. <cit.>, for which the corresponding nuclear-matter incompressibility is K=230 MeV at the saturation density ρ_0=0.160 fm^-3. The nuclear symmetry energy at the saturation density ρ_0 is S_0=32.0 MeV with the slope parameter L=46 MeV (called `asy-soft' or soft symmetry energy). The `SLy4:L108' functional is based on a Skyrme parameter set which is obtained by modifying the x_3 and x_0 parameters of the SLy4 interaction to have a stiff symmetry energy with L=108 MeV <cit.> (called `asy-stiff' or stiff symmetry energy) without changing S_0 and the properties of the symmetric nuclear matter. The `SkM*' functional is based on the SkM* parameter set of Ref. <cit.>, which corresponds to K=217 MeV, S_0=30.0 MeV and L=46 (`asy-soft') of the nuclear matter at the saturation density ρ_0=0.160 fm^-3. In the symmetric nuclear matter at ρ_0, nucleons have an effective mass m^*=0.70 m_N for SLy4 and SLy4:L108, and m^*=0.79 m_N for SkM*. By the solid lines in Fig. <ref>, we show the neutron and proton potentials U_n(p) and U_p(p) as a function of the momentum, after converting them to the relativistic form of Eq. (<ref>). (The Δ potentials shown by the dashed lines are to be discussed later in Sec. <ref>.) For the zero-temperature asymmetric nuclear matter with δ=(ρ_n-ρ_p)/(ρ_n+ρ_p)=0.2, the potentials based on SLy4:L108 (top), SLy4 (middle) and SkM* (bottom) are shown at the density ρ=ρ_0 (left) and ρ=2ρ_0 (right) in the left part (a) of Fig. <ref>. Evidently, in these asymmetric cases, the momentum dependence of U_n is different from that of U_p. 
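As a consistency check of the conversion to scalar and vector self-energies, the following sketch (with the same assumed inputs as in the previous sketch: m_N = 938 MeV, m*/m_N = 0.70) compares U_rel(p) - U_rel(0) with the regulated nonrelativistic form. For matter at rest the spatial vector self-energy vanishes and Sigma^0 only adds a constant, so both drop out of the difference.

# Sketch only: compare the momentum dependence of the relativized potential
# U_rel(p) = sqrt((m_N+Sigma_s)^2+p^2) + Sigma_0 - sqrt(m_N^2+p^2) with the
# regulated nonrelativistic form of the previous sketch. For matter at rest
# (J = 0) the spatial vector self-energy vanishes and Sigma_s = m* - m_N;
# Sigma_0 only adds a constant, so U_rel(p) - U_rel(0) is independent of it.
import numpy as np

M_N, MSTAR = 938.0, 0.70 * 938.0          # MeV (m*/m_N = 0.70 assumed)
SIGMA_S = MSTAR - M_N                      # scalar self-energy

def dU_rel(p):
    """U_rel(p) - U_rel(0) in MeV for momentum p in MeV/c."""
    kinetic_eff  = np.sqrt((M_N + SIGMA_S)**2 + p**2) - (M_N + SIGMA_S)
    kinetic_free = np.sqrt(M_N**2 + p**2) - M_N
    return kinetic_eff - kinetic_free

A = 0.5 * (1.0 / MSTAR - 1.0 / M_N)        # MeV^-1, same strength as before
lam = 5.0 * 197.327                         # Lambda_md in MeV/c
for p in (300.0, 500.0, 1000.0):
    nonrel = A * p**2 / (1.0 + (p / lam)**2)
    print(f"p = {p:6.0f} MeV/c : U_rel shift {dU_rel(p):6.1f} MeV, "
          f"nonrelativistic (Lambda_md/hbar = 5/fm) {nonrel:6.1f} MeV")

The two momentum dependences agree to within a few MeV up to about 1 GeV/c, consistent with the statement above that U_rel(p) is similar to the nonrelativistic potential with Lambda_md/hbar = 5.0 fm^-1.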
At ρ_0, the neutron–proton effective mass difference is Δ m^*_np=m^*_n-m^*_p= -34.4 MeV= -0.18 m_N δ for both SLy4 and SLy4:L108, while it is Δ m^*_np= 61.5 MeV = 0.33 m_N δ for SkM*. It should be noted that U_n and U_p at ρ=ρ_0 for SLy4 are identical to U_n and U_p for SLy4:L108, respectively. When the density is raised to 2ρ_0 (right panels) where the symmetry energy is larger in SLy4:L108 than in SLy4, the neutron potential U_n (or the proton potential U_p) in SLy4:L108 is shifted upwards (or downwards) compared to that in SLy4. Thus, the gap between U_n and U_p is related to the symmetry energy. In the SkM* case, the momentum dependence of U_n is weak compared to that of U_p, i.e. m^*_n<m^*_p, and consequently U_n(p) becomes even lower than U_p(p) at high momenta, p>650 MeV/c for ρ_0 and p >500 MeV/c for 2ρ_0. We will see later in Sec. <ref> how these behaviors of U_n and U_p affect the nucleon dynamics in heavy-ion collisions and the production of Δ resonances. In the right part (b) of Fig. <ref>, the neutron and proton potentials are shown for a high-temperature nuclear matter, in the same way as in Fig. <ref> (a) for the T=0 case. Here the potentials are shown for the kinetic energy density τ_b=3m_NT_Boltzρ_b with T_Boltz=60 MeV, which may be close to the situation in heavy-ion collisions studied in this paper. The potentials are generally higher than in the T=0 case because of their τ_b dependence. However, the qualitative behaviors observed at T=0 are preserved even at this high temperature. §.§ Δ potentials The potential for the Δ resonance in nuclei has been studied by the theoretical analyses of the pion-nucleus, photon-nucleus, and electron-nucleus scatterings for decades (see, e.g., Ref. <cit.>). For example, the potential depth of the Δ resonance was reported to be about -30 MeV in a nucleus <cit.>, and later, to be about -23 ρ/ρ_0 MeV <cit.> and -33 ρ/ρ_0 MeV <cit.> where ρ_0 is the central density. All these potentials for the Δ resonance in nuclei show less binding compared to nucleon potentials. On the other hand, there is still some room for debate on that topic <cit.>. Also, the Δ potential seems to play an important role in the neutron star studies <cit.>. In transport model calculations for heavy-ion collisions in the literature, the Δ potentials U_Δ (or Σ_Δ) are often linked with the nucleon potentials U_N (or Σ_N) by linear combinations such as U_Δ^-=U_n, U_Δ^0=2/3U_n+1/3U_p, U_Δ^+=1/3U_n+2/3U_p, and U_Δ^++=U_p <cit.>. In this case, when one varies U_N to study the sensitivity to the nuclear interaction such as the symmetry energy, observables will change not only through the change of U_N but also through that of U_Δ. Then the results have to be interpreted carefully, considering that the link of U_Δ to e.g. the nuclear symmetry energy has not been established theoretically. In contrast, in the present work, we treat U_N and U_Δ as independent variables, i.e., we do not change the parameters for U_Δ when U_N is varied, similarly to the work by Cozma el al. <cit.>. We write the single-particle energy of a Δ resonance in a relativistic form E(r,p)=√((m_Δ+Σ^s_Δ(r))^2+(p-Σ_Δ(r))^2)+Σ^0_Δ(r) by using the scalar and vector potentials (Σ_Δ^s, Σ_Δ^0, Σ_Δ). The vacuum mass m_Δ is distributed according to the spectral function of the resonance such as of a Breit–Wigner form. 
In this article, we treat each component of the potential Σ_Δ = (Σ_Δ^s, Σ_Δ^0, Σ_Δ) as consisting of the isoscalar part Σ_ is and the isovector part Σ_ iv as Σ_Δ^- = Σ_ is + 32Σ_ iv, Σ_Δ^0 = Σ_ is + 12Σ_ iv, Σ_Δ^+ = Σ_ is - 12Σ_ iv, Σ_Δ^++ = Σ_ is - 32Σ_ iv. The isoscalar part Σ_ is = (Σ_ is^s, Σ_ is^0, Σ_ is) is chosen as Σ^s_ is = 1/2(Σ_n^s + Σ_p^s)_SkM*, Σ^0_ is = 1/2 (Σ_n^0 + Σ_p^0)_SkM* + α_ρ^Δρ/ρ_0 + α_τ^Δτ/τ_0 , Σ_ is = α_ρ^ΔJ/ρ_0, which is based on the nucleon potential in the SkM* parametrization, regardless of the actual nucleon potential that we choose from SLy4, SLy4:L108 and SkM* as explained in Sec. <ref>. Assuming that the Δ potential is less attractive than the nucleon potential, we add repulsive terms in Σ^0_is that linearly depends on the density ρ=ρ_p↑+ρ_p↓+ρ_n↑+ρ_n↓ and the kinetic energy density τ=τ_p↑+τ_p↓+τ_n↑+τ_n↓. A corresponding term is included in Σ_is with J = J_p↑ + J_p↓ + J_n↑ + J_n↓. We will adjust the parameters α_ρ^Δ and α_τ^Δ to reproduce the experimental data for the overall pion multiplicity in Sec. <ref>. The kinetic energy density is normalized by τ_0=3/5p_F^2ρ_0 with p_F= (3/2π^2 ρ_0)^1/3. As for the isovector part of the Δ potential Σ_ iv = (Σ_ iv^s, Σ_ iv^0, Σ_ iv), we use the neutron–proton potential difference in the SkM* parametrization as Σ^s_ iv = 13γ^Δ (Σ_n^s - Σ_p^s)_SkM* , Σ^0_ iv = 13γ^Δ (Σ_n^0 - Σ_p^0)_SkM* , Σ_ iv = 0, where we have introduced a parameter γ^Δ to vary the isospin splitting of the Δ potentials. The case of γ^Δ =1 corresponds to the relation chosen by Refs. <cit.> in which Σ_Δ^- - Σ_Δ^++ = Σ_n - Σ_p. On the other hand, the case of γ^Δ =3 is another option with a large isospin splitting taken by Refs. <cit.> in which Σ_Δ^- - Σ_Δ^++ = 3(Σ_n - Σ_p). The Δ potentials in asymmetric nuclear matter are indicated in Fig. <ref> with the dotted lines, for the choice of the parameters α^Δ_ρ=15 MeV, α^Δ_τ=15 MeV and γ^Δ=1. In our study, we use this parameter choice as the default setting when we investigate how the momentum dependence of the nucleon potential affects the pion observables. § CROSS SECTION UNDER POTENTIALS In this section, we formulate the cross sections and the resonance decay rates under the presence of potentials. In particular, the inelastic processes NN↔ NΔ and Δ↔ Nπ are important in the present study. In general, let us consider the cross section σ(p_1,p_2) of a reaction channel which is a function of the canonical momenta p_1 and p_2 of the particles in the initial state. This function may be modified in a medium for three reasons. First, the matrix element of the process may change in the medium through the modification of intermediate states. Second, the phase space factor for the final state will change mainly because the momenta of the final particles depend on the potentials through energy conservation. In particular, the potentials affect the condition on p_1 and p_2 for the process to be energetically possible, which is often called the threshold effect in the literature <cit.>. Third, the cross section includes the inverse of the flux of the initial particles, which is also affected by the momentum dependence of the potentials through the dispersion relation. In the present work, we carefully treat the last two sources of the potential effect in σ(p_1,p_2). In heavy-ion collision calculations, we will calculate the cross section at every chance of collisions using the current values of potentials of the initial and final particles. In this section, we use the natural units, ħ=c=1. 
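The charge-state decomposition introduced in the previous subsection can be summarized in a few lines. The sketch below (with placeholder numbers for Sigma_is and Sigma_n - Sigma_p, not fitted values) only illustrates how the parameter gamma^Delta controls the splitting between Delta^- and Delta^++.

# Illustrative sketch of the isospin decomposition of the Delta potentials given
# above: Sigma_Delta = Sigma_is + c * Sigma_iv with c = +3/2, +1/2, -1/2, -3/2 for
# Delta^-, Delta^0, Delta^+, Delta^++, and Sigma_iv = (gamma_Delta/3)*(Sigma_n - Sigma_p).
# The numbers used for Sigma_is and (Sigma_n - Sigma_p) are placeholders chosen only
# to show how gamma_Delta controls the splitting; they are not the fitted values.

COEFF = {"Delta-": +1.5, "Delta0": +0.5, "Delta+": -0.5, "Delta++": -1.5}

def delta_potentials(sigma_is, sigma_n_minus_p, gamma_delta):
    """Return a dict of the Delta potentials (MeV) for the four charge states."""
    sigma_iv = (gamma_delta / 3.0) * sigma_n_minus_p
    return {state: sigma_is + c * sigma_iv for state, c in COEFF.items()}

# Example: Sigma_is = -20 MeV, Sigma_n - Sigma_p = +12 MeV (placeholders)
for gamma in (1.0, 3.0):
    pots = delta_potentials(-20.0, 12.0, gamma)
    split = pots["Delta-"] - pots["Delta++"]
    print(f"gamma_Delta = {gamma:.0f}:", {k: round(v, 1) for k, v in pots.items()},
          f" splitting Delta- minus Delta++ = {split:.1f} MeV")

For gamma^Delta = 1 the splitting Sigma_Delta^- - Sigma_Delta^++ equals Sigma_n - Sigma_p, and for gamma^Delta = 3 it is three times larger, as stated above.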
§.§ General binary process First, we consider a scattering process 1 + 2 → 3 + 4 occurring around a point r at a time t in a heavy-ion collision. This process is assumed to change the momenta of the participating two particles under the constraint of the energy and momentum conservation. We generally consider an inelastic process in which the particle species (3 and 4) in the final state may be different from those (1 and 2) in the initial state. A resonance is treated by randomly assigning the mass m according to the spectral function A(m). We here consider the case in which the particle 4 is a resonance. The case of a stable particle with a mass M corresponds to a δ function as the spectral function, i.e., A(m) = (2π) δ(m-M). The transition probability per unit time and unit volume, to produce a resonance particle with a mass between m_4 and m_4 + dm_4, can be expressed by using a Lorentz invariant matrix element as dW/dVdt =A(m_4)dm_4/2πd^3p_3/(2π)^32E_3^*d^3p_4/(2π)^32E_4^* |⟨p_3p_4|ℳ|p_1 p_2 ⟩_Σ|^2 × 2πδ(E_3 + E_4- E_1- E_2) × (2π)^3δ^3(p_3 + p_4 - p_1 - p_2), where the energies of the particles i = 1,2,3 and 4 depend on the scalar and vector potentials Σ _i= (Σ_i^s, Σ_i^0, Σ_i) as E_i = E_i^* +Σ_i^0, with the effective mass and the kinetic energy and momentum m^*_i = m_i +Σ_i^s, E_i^* = √(m_i^*2+p^*2_i), p_i^* =p_i -Σ_i . For this invariant transition rate, it is convenient to perform the integration over the final momenta in the `out' frame which is defined by the Lorentz transformation with the velocity <cit.> β_out= p^*_1+p^*_2 + Σ_1 + Σ_2 -Σ_3-Σ_4 /E^*_1+E^*_2 + Σ_1^0 + Σ_2^0 -Σ_3^0 -Σ_4^0 . After integrating the transition probability over p_4 and changing the integration variable from p_3 to p^*_3 = (p^*_f, Ω^*_f), we obtain dW/dVdt = ∫p^*2_f dp^*_f dΩ_f^*/(2π)^3 4 E_3^* E_4^* |ℳ|_Σ^2 A(m_4)dm_4/2π × 2πδ(E_3(p^*_f) + E_4(p^*_f) - E_1- E_2 ), and thus dW/dVdt= |ℳ|_Σ^2/π[ p_f^*2/4 v_f E_3^* E_4^*]_outA(m_4)dm_4/2πdΩ_f^*/4π where the quantity in [...]_ out needs to be evaluated in the `out' frame, and v_ f is the relative velocity of the final particles in that frame, v_f =p_f^*/E_3^*+p_f^*/E_4^*. Dividing the transition rate dW/dVdt by the flux times the density v_i (2E^*_1) (2E^*_2) of the initial state in the `in' frame defined by the velocity β_in= p^*_1+p^*_2 /E^*_1+E^*_2, the cross section is written as dσ = |ℳ|_Σ^2/π[ 1/4 v_i E_1^* E_2^*]_in[ p_f^*2/4 v_f E_3^* E_4^*]_outA(m_4)dm_4/2πdΩ_f^*/4π with the initial relative velocity v_i =p_i^*/E_1^*+p_i^*/E_2^* in the `in' frame where p^*_1 = -p^*_2 and |p^*_1| = |p^*_2| = p^*_i. In the present work, we essentially assume that the matrix element |ℳ|_Σ^2 = |⟨p_3p_4|ℳ|p_1 p_2 ⟩_Σ|^2 does not depend much on the presence of the other particles near the colliding two particles. However, since the invariant matrix element is defined for the plane waves normalized as ⟨p|p'⟩_Σ=2E^*(p)(2π)^3δ^3(p-p') depending on the potential Σ, we choose to relate |ℳ|_Σ^2 to the matrix element |ℳ|^2_Σ=0 in the vacuum by |ℳ|^2_Σ = [ E_1^* E_2^*/ω̃_1 ω̃_2 ]_in[ E_3^* E_4^*/ω̃_3 ω̃_4 ]_out |ℳ|^2_Σ=0 with ω̃_i = √(m^2_i + p^*2_i ) (i=1,2,3,4). The cross section under the potential is finally written as dσ = f_in f_out|ℳ|^2_Σ=0/16πs̃[p_f^*]_out/[p_i^*]_inA(m_4)dm_4/2πdΩ_f^*/4π, with s̃ = [ ω̃_1 + ω̃_2 ]_in [ ω̃_3 + ω̃_4 ]_out, f_in = [ 1/ω̃_1 + 1/ω̃_2]_in/[ 1/E^*_1 + 1/E^*_2]_in, f_out = [ 1/ω̃_3 + 1/ω̃_4]_out/[ 1/E^*_3 + 1/E^*_4]_out. 
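To make the role of the kinematic factors concrete, the following simplified sketch evaluates f_in f_out [p_f^*]_out/[p_i^*]_in for the special case in which the spatial vector self-energies vanish and the two initial particles collide back to back in the matter frame, so that the 'in' and 'out' frames coincide with it and energy conservation fixes sqrt(s^*_out) = E_1^* + E_2^* + Sigma_1^0 + Sigma_2^0 - Sigma_3^0 - Sigma_4^0. The masses and self-energies used below are placeholders, not values obtained from the functionals of Sec. II, and the matrix element and the 1/(16 pi s-tilde) factor are omitted.

# Simplified numerical sketch (vector self-energies set to zero, initial particles
# back to back in the matter frame, so the 'in' and 'out' frames coincide with it)
# of the kinematic factor  f_in * f_out * [p_f*]_out / [p_i*]_in  defined above.
# Masses and self-energies below are placeholders, not fitted values.
import numpy as np

def two_body_momentum(sqrt_s, m3, m4):
    """Final relative momentum for given invariant mass; 0 if below threshold."""
    s = sqrt_s**2
    val = (s - (m3 + m4)**2) * (s - (m3 - m4)**2) / (4.0 * s)
    return np.sqrt(val) if val > 0.0 else 0.0

def kinematic_factor(p_i, m, mstar, sigma0):
    """m, mstar, sigma0: tuples (1,2,3,4) of vacuum masses, effective masses and
    time-like self-energies in MeV; p_i: initial momentum in MeV/c."""
    E1s, E2s = np.sqrt(mstar[0]**2 + p_i**2), np.sqrt(mstar[1]**2 + p_i**2)
    sqrt_s_out = E1s + E2s + sigma0[0] + sigma0[1] - sigma0[2] - sigma0[3]
    p_f = two_body_momentum(sqrt_s_out, mstar[2], mstar[3])
    if p_f == 0.0:
        return 0.0                      # below the in-medium threshold
    om = [np.sqrt(m[i]**2 + q**2) for i, q in enumerate((p_i, p_i, p_f, p_f))]
    Es = [E1s, E2s, np.sqrt(mstar[2]**2 + p_f**2), np.sqrt(mstar[3]**2 + p_f**2)]
    f_in  = (1/om[0] + 1/om[1]) / (1/Es[0] + 1/Es[1])
    f_out = (1/om[2] + 1/om[3]) / (1/Es[2] + 1/Es[3])
    return f_in * f_out * p_f / p_i

mN, mDelta = 938.0, 1232.0
free   = kinematic_factor(800.0, (mN,)*2 + (mN, mDelta), (mN,)*2 + (mN, mDelta), (0.0,)*4)
medium = kinematic_factor(800.0, (mN,)*2 + (mN, mDelta),
                          (0.7*mN, 0.7*mN, 0.7*mN, 0.9*mDelta),   # placeholder m*
                          (250.0, 250.0, 250.0, 100.0))           # placeholder Sigma^0 (MeV)
print(f"free factor = {free:.3f},  in-medium factor = {medium:.3f}")

The routine returns zero when sqrt(s^*_out) < m_3^* + m_4^*, which is the in-medium threshold condition discussed below.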
For a given initial condition for p_1 and p_2, the cross section depends on the potentials (Σ_i^s, Σ_i^0, Σ_i) for the particles in the initial and final states through the phase space factor f_in f_out [p_f^*]_out/ [p_i^*]_in, which we calculate precisely at every chance of collisions. In particular, the final momentum is obtained by [p_f^*]_out= √([s^*_out-(m^*_3+m^*_4)^2][s^*_out-(m^*_3-m^*_4)^2]/4s^*_out), where s^*_out=(E^*_3+E^*_4)^2-(p^*_3+p^*_4)^2 is determined by the energy and momentum conservation E^*_3+E^*_4 = E^*_1+E^*_2+Σ^0_1+Σ^0_2-Σ^0_3-Σ^0_4, p^*_3+p^*_4 = p_1+p_2-Σ_3-Σ_4. The condition [p^*_f]_out=0 determines the threshold. When the final state includes a resonance, the threshold can be defined as a function of the resonance mass m_4. §.§ The NN → N Δ process For the NN → N Δ process in free space, we assume isotropic scattering and use a parametrization of the matrix element |ℳ|^2_Σ=0/16π s = BΓ_Δ^2/(s-M_Δ^2)^2+sΓ_Δ^2, which is the same form as adopted in the UrQMD model <cit.> but we take B =64400 mb GeV^2, Γ_Δ=0.118 GeV and M_Δ=1.232 GeV <cit.>. The dependence on s is moderate in the region of our interest including near the threshold. Therefore, we can allow some arbitrariness in s at which the matrix element should be evaluated when the potentials are present. Considering that the matrix element is essentially a function of the momenta rather than the energies, we choose s = s̃_NN=[ω̃_1+ω̃_2]_in^2=4(m_N^2+p_N^*2) with p^*_N = [p^*2_i]_in being the kinetic momentum of a nucleon in the rest frame of the NN system. Then, from Eq. (<ref>), we write the cross section as σ_NN → NΔ = C_NN NΔ f_in f_out(|ℳ|^2_Σ=0/16π s)_s=s̃_NN ×[p_f^*]_out/[p_i^*]_inA_Δ(m)dm/2π, where m is the vacuum mass of Δ and C_NN NΔ is the isospin Clebsh-Gordan factor C_NNNΔ=3/4 1/4 0 . The spectral function of the Δ resonance is parametrized as A_Δ(m) = 4 m^2 Γ_Δ(m)/(m^2 - M_Δ^2)^2+ m^2Γ_Δ(m)^2, where the total width Γ_Δ(m) is determined below in Sec. <ref>, depending again on the potentials in the initial and final states of the Δ→ Nπ process. Under the presence of potentials, the Δ decay width in principle depends on the momentum of Δ, which is determined by the scattering angle in the NN→ NΔ process. In the present study, we ignore this dependence by using Γ_Δ(m) evaluated for Δ at rest in the `out' frame. When there are several reaction channels starting with the same initial channel, e.g., p+n→ p+Δ^0 and p+n→ n+Δ^+, the threshold is channel dependent in general because the potentials in the final particles depend on the channel. Furthermore, the width Γ_Δ(m) and therefore the spectral function A_Δ(m) also depend on the channel, e.g., whether Δ=Δ^0 or Δ^+. We will correctly treat such cases, while such channel dependence is often treated approximately in other transport models, e.g., Ref. <cit.>. §.§ The NΔ→ NN process The inverse process NΔ→ NN is described by the matrix element that is related to that of the NN → NΔ process by g_NN |ℳ_NN → NΔ|^2 = g_NΔ |ℳ_NΔ→ NN|^2, where g_NN and g_NΔ are the spin degeneracy factors, g_NN = 4 and g_NΔ=8. Therefore, we have σ_NΔ→ NN = g_NN/g_NΔC_NN NΔ/1 + δ_NN f_in f_out(|ℳ|^2_Σ=0/16π s)_s=s̃_NN[p_f^*]_out/[p_i^*]_in, in which the factor 1/(1 + δ_NN) takes into account the limitation in angle integral for a final state with identical particles. §.§ The Δ→ Nπ process For a decay process of a particle to two particles 1 → 3+4, the decay rate in the rest frame of the decaying particle is Γ = |ℳ|^2_Σ/π1/2 m^*_1[ p^*2_f/4 v_f E_3^*E_4^*]_out. 
We relate the matrix element in the presence of potentials Σ to that in the free space by |ℳ|_Σ^2 =m_1^*/m_1[ E_3^*E_4^*/ω̃_3ω̃_4]_out |ℳ|^2_Σ=0, so that Γ = f_out|ℳ|^2_Σ=0/8 πs̃ [ p^*_f ]_out with s̃ = m_1 [ ω̃_3 + ω̃_4]_out. For the Δ→ N π process, several parametrizations were studied by Weil <cit.>, among which the present work uses the form by Manley et al. <cit.> that corresponds to a choice of the matrix element for the p-wave decay as |ℳ|^2_Σ=0/8πs̃ = M_0 Γ_0/m_Δ p_0([p^*_f]_out/p_0)^2 p_0^2 + Λ^2 / [p^*_f]_out^2+ Λ^2 , where m_Δ is the vacuum mass of the decaying Δ and the constant parameters are M_0=1.232 GeV, Γ_0 =0.118 GeV, Λ = 1 fm^-1, and p_0=√([M_0^2-(m_N+m_π)^2][M_0^2-(m_N-m_π)^2]/4M_0^2). Considering the isospin Clebsh-Gordan factor, the Δ decay width in the presence of potentials is Γ_Δ→ N π(m_Δ) = C_Δ N π f_outM_0 Γ_0/m_Δ([p^*_f]_out/p_0)^3 p_0^2 + Λ^2 / [p^*_f]_out^2+ Λ^2 , with C_Δ Nπ= 1 2/3 1/3 0 . Starting with the same Δ mass m_Δ, the final momentum [p^*_f]_out and therefore the decay width depend on the decay channel, e.g., whether Δ^0→ n+π^0 or Δ^0→ p+π^-, not only due to the Clebsh-Gordan factor but also because of the different potentials of particles in different final channels. As for the total width, in the spectral function A_Δ(m) of Eq. (<ref>), we use Γ_Δ(m) = Γ_sp^Δρ/ρ_0 + ∑Γ_Δ→ N π(m), where the parameter Γ_sp^Δ determines the Δ spreading width, which was introduced by Ref. <cit.> to take into account the in-medium broadening effect of the Δ resonances due to the absorption and rescattering processes such as Δ N → NN and Δ N →Δ N. In the present work, we take Γ_sp^Δ = 60 MeV as a default setting. The sum of the second term is over the possible isospin channels of of the decay to N + π. §.§ The N π→Δ process The cross section for 3 + 4 → 1 is dσ = [ 1/4 v_i E^*_3 E^*_4]_in |ℳ|^2_ΣA(m_1)dm_1/2π2πδ(E_1 - E_3 - E_4)/2E^*_1. By integrating this over m_1, we have σ = f_in|ℳ|^2_Σ=0/8πs̃π/[p_i^*]_in A(m_1), where m_1 is determined by the energy conservation. For the N π→Δ process, the matrix element is related to that of the inverse process by g_Nπ |ℳ_Nπ→Δ|^2 = g_Δ |ℳ_Δ→ Nπ|^2 with the spin degeneracy factors g_Nπ = 2 and g_Δ=4, and therefore the cross section is related to the decay rate as σ_Nπ→Δ = g_Δ/g_Nππ/[p_i^*]^2_inΓ_Δ→ N π(m_Δ) A(m_Δ). § THE NN → N Δ CROSS SECTIONS IN ASYMMETRIC NUCLEAR MATTER In this section, we give discussions on some examples of the cross sections in the nuclear matter to understand the features of the NN→ NΔ cross sections under the presence of potentials. The cross sections shown here are calculated in the same formalism as in the heavy-ion collision simulations in Sec. <ref>. In the left part (a) of Fig. <ref>, we show the NN → N Δ cross sections for different channels of the Δ production under the presence of potentials. The initial two nucleons with momenta ±p_N are placed in the nuclear matter with the isospin asymmetry δ = 0.20 and the temperature T=0. The nucleon potentials are chosen as described in Sec. <ref> for the three cases based on the Skyrme parametrizations SLy4:L108 (top), SLy4 (middle) and SkM* (bottom). As for the Δ potential of Eq. (<ref>), this figure shows the case when the isoscalar part is taken as Σ_is=1/2(Σ_n+Σ_p)_SkM* with additional repulsive terms (α_ρ^Δ = 15 MeV and α_τ^Δ = 15 MeV). For the isovector part Σ_ iv, two cases of the isospin splitting parameter are shown for γ^Δ =1 (solid lines) and γ^Δ=3 (thin dotted lines). The spreading width of Δ is taken into account with Γ_sp^Δ=60 MeV in Eq. (<ref>). 
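As an illustration of the width and spectral function introduced above, the following vacuum-limit sketch evaluates A_Delta(m) with the Manley-type Delta -> N pi width, taking f_out -> 1 and the free N pi momentum for [p_f^*]_out, and using the fact that the Clebsch-Gordan factors of the two N pi charge channels sum to unity for each Delta charge state. The case without spreading width is compared with the default Gamma_sp^Delta = 60 MeV at rho = rho_0; an average pion mass of 138 MeV is assumed.

# Vacuum-limit sketch of the Delta spectral function A_Delta(m) with the Manley-type
# Delta -> N pi width quoted above (f_out -> 1, [p_f*]_out -> free N pi momentum; the
# Clebsch-Gordan factors of the two N pi charge channels sum to 1 for each Delta state).
# It compares Gamma_sp = 0 with the default in-medium spreading width Gamma_sp = 60 MeV
# at rho = rho_0. An average pion mass of 138 MeV is assumed.
import numpy as np

HBARC = 197.327                      # MeV fm
M_N, M_PI = 938.0, 138.0             # MeV (average pion mass assumed)
M0, GAMMA0, LAM = 1232.0, 118.0, 1.0 * HBARC   # MeV, MeV, MeV/c (Lambda = 1 fm^-1)

def p_npi(m):
    """Free N pi relative momentum for a Delta of mass m (0 below N pi threshold)."""
    val = (m**2 - (M_N + M_PI)**2) * (m**2 - (M_N - M_PI)**2) / (4.0 * m**2)
    return np.sqrt(val) if val > 0.0 else 0.0

P0 = p_npi(M0)

def gamma_npi(m):
    q = p_npi(m)
    return GAMMA0 * (M0 / m) * (q / P0)**3 * (P0**2 + LAM**2) / (q**2 + LAM**2)

def spectral(m, gamma_sp=0.0):
    g = gamma_sp + gamma_npi(m)
    return 4.0 * m**2 * g / ((m**2 - M0**2)**2 + m**2 * g**2)

for m in (1120.0, 1232.0, 1350.0):
    print(f"m = {m:6.0f} MeV : A (free) = {spectral(m):.2e},  "
          f"A (Gamma_sp = 60 MeV) = {spectral(m, 60.0):.2e}  [1/MeV]")

The spreading term mainly lifts the low-mass tail of A_Delta(m), which is consistent with the enhancement of the low-momentum pion yield discussed later in Sec. VI.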
Note that the cross sections are shown here as functions of √(s̃)=s̃_NN^1/2=2(m_N^2+p_N^2)^1/2 which is a direct function of p_N=|p_N| without dependence on potentials. The effect of the isospin asymmetry (δ=0.2) is evident in the cross sections of different isospin channels of NN→ NΔ in Fig. <ref> (a), in particular in the difference between nn→ pΔ^- (red) and pp→ nΔ^++ (blue). This channel dependence is relatively small in the SkM* case of the nucleon potentials compared to the SLy4 and SLy4:L108 cases at the density ρ=0.16 fm^-3 in the left column. When the density is raised to ρ=0.32 fm^-3 in the right column, the channel dependence is particularly large in the SLy4:L108 case, and the weak channel dependence in SkM* is further weakened or even inverted. The same behaviors are observed under a high temperature condition as shown in the right part (b) of Fig. <ref>. When the cross sections in different channels are compared at the same √(s̃), i.e., at the same initial nucleon momentum p_N, the channel dependence may be understood based on the most important factor ϵ^*≡(s^*_out)^1/2-(m_3^*+m_4^*) in Eq. (<ref>) for [p^*_f]_out, which we can write in the present case of nuclear matter as ϵ^* =√(m_1^*2+p_N^2)+√(m_2^*2+p_N^2) +Σ_1^0+Σ_2^0-Σ_3^0-Σ_4^0-m_3^*-m_4^* = ϵ_free(m_4)+Δ U(p_N) with ϵ_free (m_4) =2√(m_N^2+p_N^2)-m_N-m_4, Δ U(p_N) = U_1(p_N)+U_2(p_N)-U_3(0)-U_4(0). Here, U_i(p) (i=1,2,3) are the momentum dependent nucleon potentials defined by Eq. (<ref>) and U_4(0) is that of Δ at zero momentum. It should be noted that the initial nucleon momentum p_N, at which U_1 and U_2 are evaluated in Δ U(p_N), has to be high (p_N≳ 500 MeV/c) for a production of Δ, while the potential U_3 of the final nucleon is evaluated at p=0. When we can ignore the isospin splitting in U_4 for Δ, we can understand the channel dependence of cross sections in Fig. <ref> based on the difference between U_1(p_N)+U_2(p_N) and U_3(0), which we can read from Fig. <ref>. For example, let us compare the nn→ pΔ^- and pp→ nΔ^++ processes. In the case of SLy4:L108, Fig. <ref> shows U_n(p)>U_p(p) at all momenta and therefore 2U_n(p_N)-U_p(0) for nn→ pΔ^- is greater than 2U_p(p_N)-U_n(0) for pp→ nΔ^++, which can explain the strong channel dependence in Fig. <ref>. The SLy4 case is identical to the SLy4:L108 case at ρ=0.16 fm^-3, while at a higher density ρ=0.32 fm^-3 the neutron and proton potentials are similar U_n(p)≈ U_p(p), which makes the channel dependence weak in the cross sections at the high density. In the case of SkM*, the channel dependence of the cross sections is relatively weak or even inverted compared to the SLy4 case because U_n(p_N)≈ U_p(p_N) or U_n(p_N)<U_p(p_N) at high momenta while U_n(0)>U_p(0) at zero momentum. We can appreciate the effect of the isovector part Σ_iv of the Δ potential by comparing the γ^Δ=3 case of the isospin splitting parameter (thin dotted line) to the γ^Δ=0 case (solid line) in each channel. In the case of Fig. <ref> (a) at zero temperature, a strong splitting (γ^Δ=3) results in a weakening of the channel dependence under most of these asymmetric conditions. We may understand this because the splitting now enters in U_4(0) in Δ U(p_N) of Eq. (<ref>). In contrast, in the case of Fig. <ref> (b) at a high temperature, we find that the isospin splitting of the Δ potential influences the cross sections only weakly. This may be partly because the isospin splitting between U_Δ at the high temperature (Fig. <ref> (b)) is smaller than that at zero temperature (Fig. 
<ref> (a)) when compared at the same splitting factor γ^Δ. The condition ϵ^*=0 determines the threshold for the production of Δ at a vacuum mass m_4, and thus the threshold momentum p_N,th(m_4) or s̃_th(m_4) in each channel can be defined as a function of m_4. This threshold naturally depends on the channel through the isospin dependence of the potentials of nucleons and Δ in Δ U(p_N). To the authors' knowledge, this kind of threshold effect was argued by Fermini et al. <cit.> and by Cozma <cit.>. On the other hand, the minimum value of p_N or s̃ for the Δ production that can be read for each channel from Fig. <ref> is p_N,th(m_min) or s̃_th(m_min), which is the threshold to produce Δ at the minimum mass m_min. In our framework, the minimum mass is determined by the condition [p_f^*]_out=0 for the decay process Δ→ N+π, i.e., m_min+U_Δ(0)=m_N+U_N(0)+m_π, where U_N(0) is for the nucleon after the decay of Δ. Using this relation, the threshold condition ϵ^*=0 with m_4=m_min and U_4(0)=U_Δ(0) is now obtained as 2√(m_N^2+[p_N,th(m_min)]^2)-2m_N-m_π + U_1(p_N)+U_2(p_N)-U_3(0)-U_N(0)=0. Therefore, the threshold, p_N,th(m_min) or s̃_th(m_min), depends on nucleon potentials but does not depend on the choice of the Δ potential. This kind of threshold was considered by the authors of Refs. <cit.>. In our case, by a careful look at Fig. <ref>, we can confirm that the threshold of each channel does not depend on the choice of the parameter γ^Δ for the isospin splitting of the Δ potential. This also implies that the shift of the threshold, such as an assumption like σ(√(s̃))=σ_free(√(s̃)-const.), is not sufficient to express the effects of potentials in the cross sections, as also can be found in the results of Refs. <cit.> with the one-boson exchange model at the zero temperature for the symmetric nuclear matter <cit.> and the asymmetric nuclear matter <cit.>. § SIMULATION OF THE HEAVY-ION COLLISIONS IN THE AMD+SJAM MODEL In the present work, we first solve the dynamics of neutrons and protons by antisymmetrized molecular dynamics (AMD) <cit.>. AMD describes the dynamics of a many-nucleon system by the time evolution of a Slater determinant of Gaussian wave packets. We use the nuclear effective energy density functional given by Eq. (<ref>) in Sec. <ref>. The AMD model has some advantages in that antisymmetrization is treated accurately and cluster correlations can be taken into account by extending the two-nucleon collision process <cit.>. When two nucleons N_1 and N_2 collide, we consider the process N_1+N_2+B_1+B_2→ C_1+C_2 , where each of the scattered nucleons N_j (j=1,2) may form a cluster C_j (up to α cluster) with a spectator particle B_j (nucleon or cluster). This includes the special cases in which both or one of B_1 and B_2 is empty, e.g., N_1+N_2→ N_1+N_2 and N_1+N_2+B_1→ C_1+N_2. The cross section of the process to form clusters (C_1,C_2) is given by dσ(C_1,C_2)/dΩ= P(C_1,C_2,p_f,Ω) p_i/v_ip_f/v_f |M|^2 p_f/p_i, where p_i and v_i are the initial relative momentum and the velocity between the colliding nucleons N_1 and N_2. The relative momentum vector after the momentum transfer between them is denoted by (p_f,Ω), and p_f is determined to conserve the energy E of the system which includes the adopted effective interaction. The velocity factor v_f=∂ E/∂ p_f as a function of p_f also depends on the effective interaction. The overlap probability factor for cluster formation, P(C_1,C_2,p_f,Ω), is defined by considering the non-orthogonality between the states of different configurations (Refs. 
<cit.> and <cit.>). The matrix element |M|^2 for the two-nucleon scattering is directly related to the assumed in-medium cross sections σ_NN. We may express it as |M|^2=(2/m_N)^2 dσ_NN/dΩ where the right-hand side is evaluated at an average of p_i and p_f. The observables of light fragments in the SπRIT experimental data have been analyzed by AMD calculations <cit.>. In the present work, we use the same nucleon calculations as in the analysis of Ref. <cit.>. In the AMD model, however, Δ resonances and pions have not been incorporated. Considering the small pion multiplicity in heavy-ion collisions of our interest, we can still use the nucleon dynamics calculated by AMD, by regarding Δ and pion production as perturbation. In the AMD+JAM model <cit.> by Ikeno, Ono, Nara, and Ohnishi, the nucleon dynamics was solved by AMD and then reactions related to pions and Δ resonances were handled by a hadronic cascade model (JAM). The JAM model is a reliable hadron transport model developed by Nara, Otuka, Ohnishi, Niita, and Chiba <cit.>. The cascade method by JAM has a feature that the sequence of the collision and decay processes in a many-particle system is handled precisely in the order of the time at which each process should take place. This feature is advantageous in avoiding unphysical dependence on the computational time step parameter, as we found in a comparison of transport models in Ref. <cit.> for pion production in a box. However, in the AMD+JAM calculations, the potentials were not taken into account in the processes related to Δ resonances and pions. For a precise treatment of potentials in the collision term, we have developed a new transport code sJAM. This code precisely follows the cascade mode of the JAM code <cit.> in the case of the presence of only nucleons, Δ resonances and pions without any effect of potentials. The sJAM code now takes into account the potentials that affect the cross sections and decay rates as formulated in Sec. <ref>. The potentials also affect the propagation in sJAM through the dispersion relation. However, the force acting on a Δ resonance is ignored assuming that the momentum change is small during a short time between a production of Δ and its decay or absorption. The electromagnetic force acting on charged pions is taken into account in sJAM. In the practical AMD+sJAM calculations, the information on the nucleon dynamics calculated by AMD is sent to the sJAM calculation in the form of a list of test particles (r_1,p_1), (r_2,p_2), …, (r_A,p_A) at every time step of 1 fm/c. These test particles are generated randomly by the method of Ref. <cit.> according to the Wigner distribution function f_α(r,p) in the AMD calculation. In addition, to allow the calculation with potentials in sJAM, the information on the densities at the positions of test particles, ρ_n(r_i), ρ_p(r_i), J_n(r_i), J_p(r_i), τ_n(r_i) and τ_p(r_i) for i=1,2,…, A, are sent from AMD to sJAM. Using these densities, the potentials Σ_a^s(r_i), Σ_a^0(r_i) and Σ_a(r_i) are calculated either in AMD or sJAM. Then, as formulated in Sec. <ref>, the cross sections and resonance decay rates are calculated at every chance of collisions and decays. For this purpose, we need to know not only the potential that the particle i currently feels but also the potentials that it may feel when it is changed to other species a=n,p,Δ^-,Δ^0,Δ^+ and Δ^++ in the final channels of inelastic processes. 
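The interface described above can be pictured schematically as follows. The sketch is purely illustrative: the names, the data layout and the simple linear-density placeholder potentials are not those of the actual AMD or sJAM codes; it only shows the idea that each test particle carries its local densities so that the self-energies can be evaluated for any candidate species.

# Schematic sketch (not the actual AMD/sJAM interface) of the data handed over at
# every 1 fm/c step: each test particle carries its phase-space coordinates and the
# local densities evaluated at its position, from which the self-energies can be
# computed for any candidate species (n, p, Delta^-, Delta^0, Delta^+, Delta^++).
# The simple linear-density potentials used below are placeholders for illustration.
from dataclasses import dataclass

RHO0 = 0.16  # fm^-3

@dataclass
class TestParticle:
    r: tuple          # position (fm)
    p: tuple          # momentum (MeV/c)
    rho_n: float      # local neutron density (fm^-3)
    rho_p: float      # local proton density  (fm^-3)

def sigma0_for_species(tp: TestParticle, species: str) -> float:
    """Placeholder time-like self-energy (MeV) the particle would feel as `species`."""
    rho = tp.rho_n + tp.rho_p
    delta = (tp.rho_n - tp.rho_p) / max(rho, 1e-9)
    iso = {"n": +1.0, "p": -1.0, "Delta-": +1.5, "Delta0": +0.5,
           "Delta+": -0.5, "Delta++": -1.5}[species]
    u_is = -50.0 * rho / RHO0                 # placeholder isoscalar part
    u_iv = 30.0 * delta * iso * rho / RHO0    # placeholder isovector part
    return u_is + u_iv

tp = TestParticle(r=(1.0, 0.0, 0.0), p=(600.0, 0.0, 0.0), rho_n=0.10, rho_p=0.08)
for sp in ("n", "p", "Delta-", "Delta++"):
    print(f"{sp:8s}: Sigma^0 ~ {sigma0_for_species(tp, sp):6.1f} MeV")

In the actual calculation the self-energies are of course obtained from the functionals of Sec. II rather than from such a linear parametrization.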
Thus, the NN ↔ NΔ and Δ↔ N π processes are calculated in sJAM under the presence of potentials with a precise treatment of energy conservation. The elastic NN collisions are also considered in sJAM, but the nucleon information is updated at every time step by AMD. The Pauli blocking factor for the nucleon(s) in the final state of NN↔ NΔ and Δ→ Nπ processes is determined by using the Wigner function f(r_i,p_i') calculated precisely for the many-nucleon Slater determinant in AMD. This is the best among the methods investigated in Ref. <cit.>. However, in the evaluation of the width parameter Γ_Δ(m) in Eq. (<ref>) for the spectral function A_Δ(m), the Pauli blocking in the Δ→ Nπ final states is ignored. Figure <ref> shows the Δ production in the calculation of ^132Sn+^124Sn collisions at E/A=270 MeV and for the impact parameter range b<3 fm. For the production of Δ^-, Δ^0, Δ^+ and Δ^++, the numbers of NN → N Δ reactions per event are shown in each panel as a function of √(s̃)=√(s̃_NN) [Eq. (<ref>)]. The three cases of nucleon interaction are shown for the cases based on SLy4:L108 (top panel), SLy4 (middle) and SkM* (bottom). In all cases, the Δ production peaks between √(s̃)=2.05 and 2.10 GeV, from which we can appreciate what part in √(s̃) is important in the example of NN→ NΔ cross sections shown in Fig. <ref>. We can find that the differences in the Δ production between the three cases of nuclear interaction are well associated with those in the NN→ NΔ cross sections, in terms of the channel dependence and the absolute value. By integrating the distribution, we notice that the Δ production occurs only a few times per event in this heavy-ion collision system. § PION OBSERVABLES §.§ Pion spectra We calculate the pion production in ^132Sn+^124Sn collision at E/A=270 MeV for the impact parameter range b<3 fm which corresponds to the SπRIT experimental data published in Ref. <cit.>. In Figs. <ref>, <ref> and <ref>, we show calculated pion observables in the three cases of nucleon interaction based on SLy4, SLy4:L108 and SkM*, respectively. Note that the momentum dependence is modified by a parameter Λ_md/ħ=5.0 fm^-1 in the AMD calculation and the nucleon potential is converted to a relativistic form in sJAM (see Sec. <ref>). As explained in Sec. <ref>, the Δ potential, which is parametrized independently of the choice of the nucleon interaction, includes additional repulsive terms with parameters α_ρ^Δ = 15 MeV and α_τ^Δ = 15 MeV, and an isovector term with the parameter γ^Δ = 1. In these figures, the lines in the bottom left panel show the calculated spectra dN/dp_T of charged pions (π^- and π^+) emitted to forward angles θ_c.m.<90^∘, as a function of the transverse momentum p_T, in comparison with the experimental data of Ref. <cit.> shown by points. The top left panel shows the π^-/π^+ ratio of these spectra. The bottom right panel shows an integral of the spectrum N(>p_T) = ∫_p_T^∞dN/dp_T(p_T')dp_T', that is the number of pions emitted with a transverse momentum greater than p_T. This quantity is useful to argue the high momentum part of the spectra with a good estimation of the statistical accuracy. The top right column shows the π^-/π^+ ratio of these integrated spectra. By comparing Fig. <ref> (SLy4-based) and Fig. <ref> (SLy4:L108-based), we can argue the effect of the density dependence of the symmetry energy. However, the difference is not large in the pion yields and spectra between these cases of L=46 and 108 MeV. We will discuss this point in detail in the next subsection. 
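The integrated spectrum N(>p_T) defined above and the corresponding pi^-/pi^+ ratio are obtained by a simple cumulative integration. The sketch below uses toy exponential spectra as stand-ins for the calculated dN/dp_T; the yields and slopes are placeholders, not the AMD+sJAM results.

# Small sketch of the integrated spectrum N(>p_T) defined above and the pi-/pi+ ratio
# built from it. A toy exponential dN/dp_T is used as a stand-in for the calculated
# spectra; the yields and slopes below are placeholders, not the AMD+sJAM results.
import numpy as np

p_T = np.linspace(0.0, 400.0, 401)                    # MeV/c grid
dN_piminus = 2.0e-2 * np.exp(-p_T / 90.0)             # toy dN/dp_T in 1/(MeV/c)
dN_piplus  = 1.0e-2 * np.exp(-p_T / 110.0)

def n_above(spectrum, grid, p_cut):
    """N(>p_T): trapezoidal integral of dN/dp_T from p_cut upward."""
    mask = grid >= p_cut
    s, g = spectrum[mask], grid[mask]
    return 0.5 * np.sum((s[1:] + s[:-1]) * np.diff(g))

for p_cut in (0.0, 100.0, 200.0):
    ratio = n_above(dN_piminus, p_T, p_cut) / n_above(dN_piplus, p_T, p_cut)
    print(f"p_T > {p_cut:5.0f} MeV/c : N(pi-) = {n_above(dN_piminus, p_T, p_cut):.3f}, "
          f"pi-/pi+ = {ratio:.2f}")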
The calculated pion results in these cases are similar to the experimental data. In particular, the π^-/π^+ ratio is higher than the experimental data, in contrast to the much smaller π^-/π^+ ratio predicted by AMD+JAM in Refs. <cit.> and in Ref. <cit.>, where the potentials were not taken into account in NN↔ NΔ and Δ↔ Nπ processes. The large π^-/π^+ ratio in the present calculation can be roughly associated with the strong channel dependence of the NN→ NΔ cross sections illustrated for nuclear matter in the upper two panels of Fig. <ref>, which is further related to the behavior of the momentum dependence of the neutron and proton potentials in Fig. <ref> (see Sec. <ref>). On the other hand, by comparing Fig. <ref> (SkM*-based) to Fig. <ref> (SLy4-based), we can appreciate the effect of the momentum dependence of the neutron and proton potentials (U_n and U_p) under a common condition on the symmetry energy (L=46 MeV). As illustrated in Sec. <ref> for nuclear matter, the channel dependence of NN→ NΔ cross sections is weak or even inverted in the case based on SkM* (bottom panels of Fig. <ref>) due to the weak momentum dependence of U_n compared to that of U_p (bottom panels of Fig. <ref>). This can explain the small π^-/π^+ ratio in the case of SkM* compared to the SLy4 case. The overall pion yield is small in the case based on SkM* compared to the cases based on SLy4 and SLy4:L108, in particular at low pion momenta. This can be associated with the relatively small NN→ NΔ cross sections in nuclear matter in the case of SkM* (see Fig. <ref>), which is likely a consequence of a relatively weak momentum dependence of the isoscalar nucleon potential in this case (see Fig. <ref>). However, the overall pion yield will change when the assumed potential for Δ is varied in the present framework, which nevertheless does not strongly affect the π^-/π^+ ratio (see Sec. <ref>). §.§ Link from nucleon dynamics to pion observables One of the original aims of studying charged pion production in heavy-ion collisions has been to probe the symmetry energy at high densities, expecting that pions originate from energetic nucleon-nucleon collisions occurring in the high density region, as predicted by transport model simulations <cit.>. For this, it is essential to confirm how the nucleon dynamics in heavy-ion collisions, in particular the neutron-to-proton ratio N/Z in the high density region, is reflected in the final π^-/π^+ ratio, after the processes of Δ production and others. In the work of Refs. <cit.>, we showed that the final π^-/π^+ ratio is indeed correlated to the N/Z ratio in the high-density and high-momentum phase-space region, which the nucleon dynamics determines depending on the high-density symmetry energy. However, this study did not take into account the effects of potentials in the processes related to Δ and pions. Here, we repeat the same analysis of Ref. <cit.>, for the present calculation with potentials in NN↔ NΔ and Δ↔ Nπ processes. The panel (a) of Fig. <ref> shows the time evolution of the squared neutron-to-proton ratio (N/Z)^2_ρ>ρ_0 in the high density region as a function of time, for the three cases of the nucleon interaction by the three lines. The high density region is defined in each event as the interior of the sphere defined by ρ(r)>ρ_0, with ρ(r) being the average density on the sphere of the radius r in the center-of-mass frame of the system. 
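The event-by-event selection just described can be sketched as follows: the angle-averaged density profile rho(r) is estimated in radial shells around the center of mass, and the neutrons and protons inside the sphere where rho(r) > rho_0 are counted. The toy Gaussian event and the radial binning are placeholders for the AMD test-particle output, and the additional high-momentum selection discussed next is not included.

# Schematic sketch of the high-density selection described above: from the test
# nucleons of one event, estimate the angle-averaged density rho(r) in radial shells
# (c.m. frame) and compute N/Z inside the sphere where rho(r) > rho_0. The random
# toy event and the shell binning are placeholders for the AMD output.
import numpy as np

RHO0 = 0.16                                   # fm^-3
rng = np.random.default_rng(0)

# toy event: 150 neutrons and 106 protons, neutrons slightly more spread out
r_n = rng.normal(scale=3.4, size=(150, 3))    # positions in fm
r_p = rng.normal(scale=3.2, size=(106, 3))

def nz_high_density(r_n, r_p, dr=1.0, r_max=10.0):
    edges = np.arange(0.0, r_max + dr, dr)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rad_n, rad_p = np.linalg.norm(r_n, axis=1), np.linalg.norm(r_p, axis=1)
    cn, _ = np.histogram(rad_n, bins=edges)
    cp, _ = np.histogram(rad_p, bins=edges)
    rho = (cn + cp) / shell_vol                 # angle-averaged density profile
    dense = np.nonzero(rho > RHO0)[0]
    if dense.size == 0:
        return 0, 0
    r_cut = edges[dense.max() + 1]              # radius where rho(r) drops below rho_0
    return int((rad_n < r_cut).sum()), int((rad_p < r_cut).sum())

N, Z = nz_high_density(r_n, r_p)
if Z > 0:
    print(f"high-density region: N = {N}, Z = {Z}, (N/Z)^2 = {(N / Z)**2:.2f}")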
The effect of the symmetry energy is clearly seen by comparing the asy-soft cases (SLy4- and SkM*-based) and the asy-stiff case (SLy4:L108-based) in the time interval t=15–25 fm/c around which the maximum density is reached. The panel (b) of Fig. <ref> shows the ratio (N/Z)^2_ρ>ρ_0,HM in high-density and high-momentum region, for which the nucleons in the high density region of ρ(r)>ρ_0 are further selected by the high momentum condition |p - p_ rad| > p_ cut. We take the same condition as in Ref. <cit.>, i.e., p_ cut =480 MeV/c is chosen and the radial flow p_ rad=p_rad(r)r/r is subtracted with p_rad(r) being the radial momentum component averaged for the nucleons on the sphere of the radius r. As we have already seen in Refs. <cit.>, the ratio (N/Z)_ρ>ρ_0,HM increases compared to (N/Z)_ρ>ρ_0 when the high momentum region is selected, and the symmetry energy dependence is somewhat enhanced between the SLy4 and SLy4:L108 cases. Furthermore, we can now find a strong effect of the momentum dependence of the neutron and proton potentials (U_n and U_p) in the behavior in the SkM* case, where (N/Z)^2_ρ>ρ_0,HM increases most drastically from (N/Z)^2_ρ>ρ_0 compared to the other cases. This can be understood from U_n and U_p in the high momentum region of Fig. <ref>. Due to a relatively weak momentum dependence of U_n, its value at a high momentum in the SkM* case is lower than that in the SLy4 case. Therefore, high-momentum neutrons are favored and thus (N/Z)^2_ρ>ρ_0,HM goes up in the SkM* case compared to the SLy4 case. Since high-momentum nucleons are responsible to Δ excitation, we might expect some relation between (N/Z)^2_ρ>ρ_0,HM and the Δ production by NN→ NΔ. The panel (c) of Fig. <ref> shows the ratio between the reaction rates of nn→ pΔ^- and pp→ nΔ^++ (labeled as Δ^- / Δ^++) as a function of time. We recall that Δ^-/Δ^+ was closely related to (N/Z)^2_ρ>ρ_0,HM when potentials were not taken into account for the Δ production in Refs. <cit.>. This is no longer the case when potentials are considered. The Δ^-/ Δ^++ ratio in the SLy4- and SLy4:L108-based cases increases significantly from (N/Z)^2_ρ>ρ_0,HM. This increase is the strongest in the SLy4:L108 case. On the other hand, in the SkM*-based case, the Δ^-/Δ^++ ratio becomes lower than (N/Z)^2_ρ>ρ_0,HM. These drastically different ways of change can be understood again based on the momentum dependence of U_n and U_p shown in Fig. <ref>, from which we have understood the channel dependence of NN→ NΔ cross sections in asymmetric nuclear matter (see Sec. <ref>). In the NN → N Δ reaction, the initial nucleons have high momenta and the final nucleon has a low momentum. Considering the nn → p Δ^- reaction, a high-momentum neutron in the SLy4 and SLy4:L108 cases more favorably turns to a low-momentum proton compared to the SkM* case, because of the difference in the momentum dependence of U_n and U_p. Nexy, when the SLy4 and SLy4:L108 cases are compared, a high-momentum neutron in the SLy4:L108 case more favorably turns to a low-momentum proton compared to the SLy4 case, because the difference between neutron and proton potentials at the high density, which is related to the symmetry energy, is larger in the SLy4:L108 case than in the SLy4 case. Thus, Δ^- production is favored in the SLy4:L108 case, and consequently the relation between the SLy4 (asy-soft) and SLy4:L108 (asy-stiff) case in (N/Z)^2 is inverted in the Δ^-/Δ^++ production ratio. 
Namely, the (N/Z)^2 ratio with the soft symmetry energy is larger than that with the stiff symmetry energy, while the Δ^-/Δ^++ ratio with the stiff symmetry energy is now larger than that with the soft symmetry energy. The inversion of the symmetry energy effect is also found in other calculations on the pion ratio π^-/π^+ <cit.> and the kaon ratio K^0/K^+ <cit.> (see, e.g., Ref. <cit.> for a review). The results on the various ratios are concisely summarized in Fig. <ref>, which is similar to a figure in Refs. <cit.>. In the first and second columns of Fig. <ref>, we show a representative (N/Z)^2 ratio which is defined in Ref. <cit.> as (N/Z)^2 =∫_0^∞ N(t)^2 dt/∫_0^∞ Z(t)^2 dt, where N(t) and Z(t) indicate the numbers of neutrons and protons as functions of time which are selected by the high density condition ρ>ρ_0 with or without imposing the high momentum condition |p-p_rad|>p_cut. In the third column of Fig. <ref>, we show the representative value of the Δ^-/Δ^++ production ratio which is defined in Ref. <cit.> as Δ^-/Δ^++= ∫_0^∞(nn → p Δ^-)dt/∫_0^∞ (pp → n Δ^++)dt, where (nn → p Δ^-) and (pp → n Δ^++) indicate the reaction rates of the Δ production as a function of time. These three representative ratios in Fig. <ref> show various effects of the symmetry energy and the momentum dependence of U_n and U_p, which we have seen in Fig. <ref> and do not repeat here. In Fig. <ref>, we can also find some information on the dependence on the isovector part of the Δ potential, by comparing two cases of the isospin splitting parameter γ^Δ=1 (solid line) and γ^Δ=3 (thin dashed line). In the present calculation, the splitting plays a minor but non-negligible role in the Δ production. As for the pion ratios, the fourth column shows the π^-/π^+ ratio calculated from all pions (p_T>0), while the fifth column shows that from the high-momentum pions selected by the transverse momentum p_T> 200 MeV/c, which corresponds to the region that Ref. <cit.> used to extract information on the symmetry energy from the SπRIT experimental data. We include here all pions emitted to both forward and backward angles. The π^-/π^+ ratio from all pions (p_T>0) is almost identical to the Δ^-/Δ^++ production ratio in all the cases, except the π^-/π^+ ratio in the asy-soft case (SLy4) slightly increases from the Δ^-/Δ^++ ratio. The reduction of the π^-/π^+ ratio for p_T>200 MeV/c is due to the effect of the Coulomb force acting on charged pions, which should be well under control in transport models <cit.>. Thus, the final pion ratio is rather simply related to the NN→ NΔ process. In summary, the impact of a change of nuclear symmetry energy (SLy4- vs SLy4:L108-based nucleon interaction) on the pion ratio is not very large and it is a consequence of different effects in the nucleon dynamics and in the NN→ NΔ process which act in opposite directions. The impact of the isospin splitting of the Δ potential can be of the same order of that of nuclear symmetry energy. Much larger is the impact of a change of the momentum dependence of the neutron and proton potentials (SLy4- vs SkM*-based). This is also a consequence of the effects in the nucleon dynamics and in the NN→ NΔ process acting in opposite directions; however, the effect in NN→ NΔ is much larger. §.§ The effect of the in-medium Δ spreading width So far, we have calculated the pion production with the default option (α_ρ^Δ=15 MeV, α_τ^Δ=15 MeV, and Γ_sp^Δ=60 MeV) which added the repulsive terms in the isoscalar part of the Δ potential and the spreading width Γ_sp^Δ. 
Here, in order to see the effect of the spreading width of Δ in the medium [see Eq. (<ref>)], we show in Fig. <ref> the pion spectra when the spreading width is turned off (Γ_sp^Δ=0). By comparing the lower left panel of Fig. <ref> with that of Fig. <ref>, we can see clearly that the spreading width affects only the low momentum part of the π^- and π^+ spectra. Namely, the spreading width works to increase the pion yield in the low momentum region. In spite of the change of the π^- and π^+ spectra, the π^-/π^+ ratio of the spectra, shown in the upper panels of Fig. <ref>, is not affected much when the spreading width is turned off. Thus, the pion ratio is almost free from the uncertainties in the in-medium spreading width. Figure. <ref> shows the various ratios when the spreading width is turned off by Γ_sp^Δ=0. The π^-/π^+ ratios here are quantitatively similar to those in Fig. <ref> where the spreading width parameter was Γ_sp^Δ=60 MeV. On the other hand, the Δ^-/Δ^++ production ratios become larger than those in Fig. <ref>. We can also see that the effect of the symmetry energy (SLy4- vs SLy4:L108-based) is small, and the effect of the difference in the momentum dependence of U_n and U_p (SLy4- vs SkM*-based) is much more significant. These trends are the same as shown in Fig. <ref>. §.§ The effect of the isoscalar part of the Δ potentials Finally, we test the robustness of the above results against the uncertainties in the isoscalar part of the Δ potential, for which we added repulsive terms with parameters α_ρ^Δ and α_τ^Δ, compared to the SkM*-based nucleon potential [see Eq. (<ref>)]. When these options are turned off by setting α_ρ^Δ=0 and α_τ^Δ=0 (and we also set Γ_sp^Δ=0 in this subsection as well as in Subsec. <ref>), the pion yield is overestimated as we can see in the lower left panel of Fig. <ref> in comparison with that of Fig. <ref>. This is naturally understood as a consequence of turning off the repulsive terms in the Δ potential. Even in this case, the π^-/π^+ ratio of the spectra, shown in the upper panels of Fig. <ref>, is not affected much by changing the parameters α_ρ^Δ and α_τ^Δ. Therefore, the result here also suggests that the pion ratio is not affected strongly by the uncertainties in the isoscalar Δ potential. Figure <ref> shows the various ratios when the repulsive terms are turned off by α_ρ^Δ=0 and α_τ^Δ=0 (and Γ_sp^Δ=0). The Δ^-/Δ^++ production ratio and the π^-/π^+ ratio here are quantitatively similar to those in Fig. <ref> where the repulsive terms were taken into account. However, the effect of the symmetry energy (SLy4- vs SLy4:L108-based) is now stronger, i.e., we find a stronger inversion of the symmetry energy effect from the (N/Z)^2_ρ>ρ_0,HM ratio to the Δ^-/Δ^++ production ratio when the repulsive terms in the Δ potential is turned off. The effect of the isospin splitting of the Δ potential (γ^Δ=1 vs γ^Δ=3) is also stronger in Fig. <ref> compared to that in Fig. <ref>. On the other hand, the effect of the difference in the momentum dependence of U_n and U_p (SLy4- vs SkM*-based) is always the most significant, which is not affected by the uncertainties in the isoscalar part of the Δ potential and the in-medium spreading width of Δ. § SUMMARY We investigated the production of Δ resonances and pions in ^132Sn+^124Sn collisions at E/A=270 MeV/nucleon within the AMD+sJAM model, in which the collision term takes into account the momentum-dependent mean-field potentials with strict conservation of energy and momentum. 
In the newly developed part sJAM of the model, the potentials for the particles in the initial and final states of a process are treated in the form of the scalar and vector self-energies for each species of particles, and the potentials affect the phase space factor for the final state and the flux factor for the initial state in a natural way. In particular, the cross section for NN→ NΔ depends on the isospin channel when the neutron and proton potentials are different in an isospin-asymmetric environment. The mass distribution or the spectral function of the Δ resonance is also determined by the potentials through the potential dependence of the Δ→ Nπ width. In particular, we focused on the effect of the different momentum dependence between the neutron and proton potentials. When the neutron potential has a strong momentum dependence compared to the proton potential in a neutron-rich environment (in the SLy4-based case), the process nn→ pΔ^- that converts two high-momentum neutrons to a low-momentum proton is favored compared to the pp→ nΔ^++ process. The tendency is opposite when the neutron potential has a weak momentum dependence compared to the proton potential (in the SkM*-based case). The result of the AMD+sJAM simulations shows that this effect of the momentum dependence appears very clearly in the Δ production, and consequently the π^-/π^+ ratio is an observable that is very sensitive to the momentum dependence of the neutron and proton potentials. The case with a strong momentum dependence of neutrons compared to protons is more consistent with the SπRIT data than the opposite case. We also investigated the effects of other ingredients. The symmetry energy L dependence (SLy4 vs. SLy4:L108) was found to have a relatively small effect on the pion ratio compared to the effect of the momentum dependence of the nucleon potentials. We carefully traced a link from the nucleon dynamics to the pion observable through the Δ production, and the symmetry energy effect in the neutron-proton ratio (N/Z) was found to be reversed in that of the Δ production rate (Δ^-/Δ^++). We confirmed that the conclusions remain the same even if the isoscalar and isovector parts of the Δ potential and the in-medium Δ spreading width are changed. We have found that the in-medium Δ spreading width affects only the low-momentum part of the pion spectrum. It is known from other transport model calculations <cit.> that the low-energy part is also sensitive to the pion potential, which the present calculation ignored. Therefore, it is desirable to obtain a full understanding of low-energy pion emission in the future by considering both the Δ spreading width and the pion potential. However, we have fortunately found that the high-momentum part of the pion spectrum is not affected by the uncertainties of these ingredients, which supports the idea that high-momentum pions are suitable for extracting physics information from experiments, as claimed in Refs. <cit.>. § ACKNOWLEDGMENTS N. I. would like to thank Che Ming Ko for valuable discussions and encouragement, Zhen Zhang for the practical information on the Hama potential, Betty Tsang and Tommy Tsang for information on the experimental data of SπRIT, and Eulogio Oset for discussing the Δ potential. We thank Yasushi Nara for the useful information on the formula of the Δ decay width. The computation was carried out at the HOKUSAI supercomputer system of RIKEN. This work was supported by JSPS KAKENHI Grant Numbers JP17K05432, JP19K14709, JP21KK0244, and JP21K03528.
http://arxiv.org/abs/2307.00978v2
20230703124959
Neighbors Map: an Efficient Atomic Descriptor for Structural Analysis
[ "Arnaud Allera", "Alexandra M. Goryaeva", "Paul Lafourcade", "Jean-Bernard Maillet", "Mihai-Cosmin Marinica" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Structural analysis Descriptor Deep Learning Molecular dynamics Crystalline structure Amorphous state paris]Arnaud Alleracorresponding arnaud.allera@cea.fr paris]Alexandra M. Goryaeva dam]Paul Lafourcade dam]Jean-Bernard Maillet paris]Mihai-Cosmin Marinicacorresponding [corresponding]Corresponding author mihai-cosmin.marinica@cea.fr [paris]Université Paris-Saclay, CEA, Service de recherche en Corrosion et Comportement des Matériaux, SRMP, F-91191, Gif-sur-Yvette, France [dam]CEA, DAM, DIF, F-91297 Arpajon, France; Université Paris-Saclay, LMCE, F-91680 Bruyères-le-Châtel, France Accurate structural analysis is essential to gain physical knowledge and understanding of atomic-scale processes in materials from atomistic simulations. However, traditional analysis methods often reach their limits when applied to crystalline systems with thermal fluctuations, defect-induced distortions, partial vitrification, etc. In order to enhance the means of structural analysis, we present a novel descriptor for encoding atomic environments into 2D images, based on a pixelated representation of graph-like architecture with weighted edge connections of neighboring atoms. This descriptor is well adapted for Convolutional Neural Networks and enables accurate structural analysis at a low computational cost. In this paper, we showcase a series of applications, including the classification of crystalline structures in distorted systems, tracking phase transformations up to the melting temperature, and analyzing liquid-to-amorphous transitions in pure metals and alloys. This work provides the foundation for robust and efficient structural analysis in materials science, opening up new possibilities for studying complex structural processes, which can not be described with traditional approaches. Neighbors Map: an Efficient Atomic Descriptor for Structural Analysis [ August 1, 2023 ===================================================================== § INTRODUCTION Structural analysis is instrumental in understanding atomic-scale processes that control the properties of materials <cit.>. Traditionally, crystal structures are analyzed at the atomic scale based on their similarity to an ideal lattice structure. This approach is conceptually simple and, therefore, commonly used for the analysis of Molecular Dynamics (MD) simulations. However, it can fail in complex cases, where the structure is significantly distorted due to the presence of defects, intense thermal fluctuations, or large deformations. Experimentally-obtained atomic coordinates, e.g., reconstructed from atom probe tomography (APT), represent one of the most complex cases for analysis because of the relatively low detection rate and low accuracy on atomic positions that are intrinsic to the method. Another challenge arises from large-scale molecular dynamics simulations as they approach the exa-scale, where efficient on-the-fly analysis methods are needed, to process the vast amount of raw data that is generated, which is too large to be fully stored for later analysis <cit.>. Traditional methods are often based on ad hoc criteria established on field expertise to identify that atoms belong to a specific structure type. 
These criteria include local density, centrosymmetry <cit.>, occupancy in Voronoi cells (Wigner-Seitz) <cit.>, detection of Burgers circuits (DXA) <cit.>, and similarity to reference crystal structures, e.g., using polyhedral template matching (PTM) <cit.> and Adaptive Common Neighbor Analysis (a-CNA) <cit.>, or bond order parameters <cit.>, among others. These geometry-based methods are computationally inexpensive and routinely used for the analysis of MD simulations. Most of them are available in widely-used software, such as OVITO <cit.>, that enables fast and convenient visualization and analysis of atomic structures. However, the common shortcoming of traditional methods is their sensitivity to atomic perturbations <cit.>. Therefore, when applied to complex systems with a high degree of noise or local strain, these methods often require further adaptation or additional processing (e.g., time averaging of atomic positions), which do not always provide a solution. More recent approaches <cit.> tend to compare atomic environments to a statistical distribution of crystalline or defective structures that is learned within a feature space spawned by atomic descriptors, in a way inspired by methods developed in the machine-learning force-field community <cit.>. Interestingly, these methods treat defects, pristine crystalline structures, and non-crystalline systems in a conceptually similar manner, with minimal assumptions about the distribution of the data. Geiger and Dellago <cit.> demonstrated the potential of neural networks (NN) trained on a database of descriptors <cit.> to efficiently classify crystal structures <cit.>. Instead of using NNs, Goryaeva et al. <cit.> employed statistical distances, within the feature space of a spectral descriptor, bispectrum SO(4) (BSO4) <cit.>. This approach introduces the notion of distortion score for atoms, which intriguingly exhibits a correlation with the local atomic energy and offers a physically-informed criterion for matching atoms to a reference structure. Leitherer et al. <cit.> used a pre-trained Bayesian NN in conjunction with a high-dimensional descriptor (SOAP <cit.>), where the Bayesian nature of the NN allows for the estimation of uncertainty. Ziletti et al. <cit.> proposed a method that represented crystals through the calculation of diffraction images, followed by the construction of a deep learning neural network model for classification. The recent work by Chung et al. <cit.> follows a similar framework, where the input structure is embedded into the feature space using Steinhardt features <cit.>. These features, a simplified version of spectral descriptors like BSO4 or SOAP, are then fed into a dense neural network classifier. The workflow, named DC3 <cit.>, improves drastically the accuracy of classification in physical processes such as solid – liquid phase transition for many crystallographic systems. Recently, the use of Graph Neural Networks (GNNs) for embedding local environments has gained significant popularity in the fields of chemistry and biology and in the field of materials science, particularly for surrogate models <cit.>. The aforementioned methods always follow a common framework, which can be divided into two main steps: (i) representing atomic data in high dimensions using a descriptor function <cit.>, and (ii) employing a machine learning (ML) or deep learning (DL) classifier model to process the high-dimensional representation of atomic coordinates. 
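Schematically, this two-step framework can be written down in a few lines; the radial-histogram feature below is only a stand-in for the descriptors cited above (Steinhardt, BSO4, SOAP, ...), and the classifier choice is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def radial_histogram_descriptor(positions, i, r_cut=5.0, n_bins=32):
    """Step (i): map the local environment of atom i to a fixed-size feature
    vector; here simply a histogram of neighbor distances within r_cut."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[(d > 1e-8) & (d < r_cut)]
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, r_cut))
    return hist.astype(float)

# Step (ii): a supervised classifier acting on the high-dimensional representation.
# X = np.stack([radial_histogram_descriptor(pos, i) for i in range(len(pos))])
# clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, labels)
# predicted = clf.predict(X_new)
```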
These state-of-the-art methods are very promising, however, they are often difficult to apply in practice, and can be computationally demanding due to their reliance on descriptor calculations in step (i), where a tradeoff between the accuracy of LAE representation and computational efficiency <cit.> is commonly required. This can be critical in cases where no powerful computation hardware is available, or in HPC applications where analysis can become a bottleneck <cit.>. Consequently, there is a pressing need for a lightweight, data-driven approach that can efficiently and accurately characterize atomic positions in various systems, from simple to highly complex cases, with minimal computational cost. In this paper, we present a novel methodology that follows the same framework, associating a descriptor and a ML/DL classification model. Illustrated in Fig. <ref>, our approach combines two key components: (i) a lightweight, computationally efficient, invariant descriptor that represents any local atomic environment (LAE) as a fixed-size image with multiple channels called , and (ii) a DL classification algorithm, with intermediate complexity based on convolutional neural networks to analyze these image representations. We address the potential limitations of the descriptor's quality by harnessing the complexity of the neural network, resulting in an efficient and stable workflow capable of accurately identifying atomic environments across a wide range of systems, from simple to highly complex cases. The descriptor is designed to remain fast to compute, making it suitable for both on-device analysis on a typical workstation, and on-the-fly feature extraction on massively parallel systems, to achieve in situ analysis of massive data flow produced by exa-scale simulations. After a detailed description of our approach, we present a series of applications that illustrate its robustness, accuracy, and versatility. These applications range from classifying common crystalline structures in distorted systems to tracking phase transformations up to the melting temperature, as well as studying the vitrification process in different materials. § METHODS §.§ Overview of existing methods for high-dimensional encoding of local atomic environments (LAEs) Motivated by the rapid progress of data-driven force field methods, notable strides have been achieved in the realm of descriptor development. Most commonly, atomic descriptors encode the local geometry of neighboring atoms using either distances and/or angles between atoms <cit.>, spectral analysis in spherical harmonics basis of LAEs <cit.>, or based on the scaling wavelets transformation <cit.>. Based on the work of Shapeev on the tensorial description of atomic coordinates <cit.>, and on the concept of similarity between pairs of atomic environments known as Smooth Overlap of Atomic Positions (SOAP) <cit.>, systematic basis were introduced, which preserve permutational and rotational symmetries by writing the total energy as a sum of atomic body-ordered terms. By employing an invariant basis constructed from atomic body-ordered polynomials, permutation-invariant polynomials were discovered <cit.>. Furthermore, through the use of spectral decomposition and a combination of radial and spherical harmonic functions, it becomes possible to introduce the Atomic Cluster Expansion (ACE) <cit.>, a systematic and possibly complete description of the local atomic environment. 
Overall, the capability of the descriptor to map LAEs into the feature space has a direct influence on the quality of embedded force fields and the classification of atomic neighborhoods. However, achieving high accuracy often entails increased computational cost, which we strive to minimize in this work. In some cases, the framework of artificial neural networks (NN) with a special design can also be used to construct an appropriate descriptor of the system <cit.>. The most employed framework is Graph Neural Networks (GNNs). Initially introduced in 2017 <cit.>, GNN became the most popular DL method for structural analysis in chemistry, biology, and drug design applications. Highly-accurate protein structure prediction with AlphaFold <cit.> has further boosted the interest of different communities in GNN. Training a GNN consists of two main steps, forming together the propagation rule which is the base of the method: (a) aggregating information from neighboring nodes, and (b) updating the nodes and/or edges messages. Notably, the aggregation process is permutation invariant. In this step (a), the input node features are multiplied with the adjacency matrix of the graph and weight matrices, similarly to dense neural networks. In step (b), the resulting product is passed through a nonlinear activation function to generate outputs for the subsequent layer. Based on various details of aggregation and transformation of the passed message, there are a plethora of graph architectures, the most popular being the graph convolutional network (GCN) <cit.> and graph attention method (GAT) <cit.>. The popularity of the GNN algorithms in materials science, chemistry, and biology is explained by the fact that the descriptor function (feature representation) is learned at the same time as the regression or classification task. Numerous methods based on invariant or equivariant descriptions of LAE were proposed since 2017. We cite the most popular: SchNet, a weighted atom-centered symmetry function and the deep tensor neural network <cit.>, ALIGNN - Atomistic Line Graph Neural Network <cit.>, Deep Graph Library (DGL)-LifeSci <cit.>, DimeNET - Directional Message Passing Neural Network <cit.>, PAINN - polarizable atom interaction neural network <cit.>, Nequip - Neural Equivariant Interatomic Potentials <cit.>, DeepMD-kit <cit.>, Allegro – a strictly local, equivariant deep neural network interatomic potential architecture <cit.>, mACE - Higher Order Equivariant Message Passing Neural Networks <cit.>. The top three most accurate force fields in chemical or materials science databases, such as MD17, DB9 <cit.> and 3BPA <cit.>, are methods based on GNNs. GNN-based approaches have demonstrated remarkable performance in accurately predicting and characterizing various chemical and materials properties, making them a prominent choice in the field. Despite the significant progress made, GNNs are still primarily utilized in the domain of chemistry and biology with a numerical cost of per atom per CPU wall time, which is a few orders of magnitude larger than the direct traditional descriptors mentioned earlier. Consequently, in the classification literature of materials science, traditional descriptors or simpler surrogate models are predominantly used as inputs for classifiers <cit.>. Our challenge is to design a lightweight descriptor that is numerically fast and even lighter than traditional descriptors, while maintaining a high level of accuracy. 
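For orientation, the aggregate-and-update propagation rule described above can be condensed into a single layer as follows; the symmetric normalization and the ReLU update correspond to the common GCN variant and are an illustrative choice, not a prescription of any of the cited architectures.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step on node features H (n_nodes x f_in):
    (a) aggregate neighbor features with the self-loop-augmented, symmetrically
        normalized adjacency matrix A (n_nodes x n_nodes),
    (b) update with a linear transform W (f_in x f_out) and a nonlinearity."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    aggregated = d_inv_sqrt @ A_hat @ d_inv_sqrt @ H  # step (a): permutation-invariant aggregation
    return np.maximum(0.0, aggregated @ W)            # step (b): update with ReLU activation
```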
We aim to achieve the best of both worlds by combining the advantages of GNNs and traditional descriptors. Using radial distances or angles between atoms as descriptors constitutes a valuable, rotation-invariant option. The introduction of 2-body or 3-body kernels, as proposed by Glielmo and De Vita <cit.>, allows for the creation of highly numerically efficient descriptors. Recent applications in machine learning force fields, such as those showcased in Refs. <cit.>, demonstrate the robustness of this approach. However, the numerical efficiency of this formalism is limited to capturing interactions up to 3-body interactions, which may result in reduced accuracy when dealing with systems in the liquid phase. On this observation, we have decided to build our descriptor solely based on interatomic distances, excluding angles or higher-order descriptions. However, we aim to retain the many-body aspect of neighborhood geometry. Once the interatomic distances have been selected to maintain rotation invariance, the issue of permutation symmetry arises. Manipulating internal coordinates to achieve a permutation-invariant representation can impact the smoothness and efficiency of the descriptors. In general, fundamental operations that preserve inversion symmetry, such as , , and , are commonly used. One potential alternative is to sort the complete list of interatomic distances. This operation can be viewed from the GNN perspective as an aggregation around a central atom. In the present work, we sort all the distances relative to specific central atoms (the detailed construction is presented in §<ref>). The concept of sorting distances has been previously utilized in <cit.> to identify variants of a cluster with a specific number of atoms. Similarly, in the context of various molecules, the sorting of 2- and 3-body kernels has been employed to address the challenge of permutation invariance <cit.>. In this case, the constructed patterns are analyzed and learned using a CNN regressor, resulting in the “k-bag” model. This model has demonstrated its ability to predict structure-energy relationships with accuracy levels that are comparable to those achieved by ab initio methods <cit.>. The use of sorting distances to achieve permutation invariance has also been proposed in the context of collective variables for free energy sampling. In this case, permutation invariant coordinates are obtained by sorting the spectrum of the adjacency matrix of a graph constructed based on the bond network surrounding a specific atom <cit.>. Furthermore, in the context of polymer chains, a descriptor was developed by considering all inter-monomer distances and sorting them using the spectrum of the corresponding covariance matrix. This descriptor was used for classifying the liquid and glassy states in polymers <cit.>. However, this approach is not applicable in materials science simulations due to the 𝒪(N^2) scaling of the descriptor size. The upcoming section will demonstrate how the use of sorted radial distances around a central atom can enable the classification of local atomic environments, while addressing the challenge of permutation invariance. §.§ High-dimensional encoding of local atomic environments in s In this section, we describe our approach to encode atomic environments in high-dimension, using a new descriptor, denoted . 
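Before detailing the construction, a two-line numerical check recalls why sorting is used: relabeling (permuting) the neighbors changes the raw distance list but not its sorted version, while rigid rotations leave the distances themselves unchanged. The toy configuration below is, of course, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
neighbors = rng.normal(size=(8, 3))               # toy local environment around the origin
d = np.linalg.norm(neighbors, axis=1)             # distances to the central atom

perm = np.roll(np.arange(8), 3)                   # an arbitrary relabeling of the neighbors
assert np.allclose(np.sort(d), np.sort(d[perm]))  # sorted distances are permutation invariant
```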
The descriptor maps the local neighborhood of a central atom into a 2D image, and provides a pixelated representation of the dense, non-directed graph with weighted edge connections of the neighboring atoms. Such image-like representation of atomic structures can be readily combined with a CNN classifier. Here, it is worth noting that there have been attempts in the literature to encode atomic environments in a format readable by a standard CNN, as demonstrated in <cit.>. However, these studies encode the raw Cartesian coordinates, and rotational invariance is then achieved through data augmentation. In our approach, we aim to directly encode rotational invariance in the descriptor function and bypass the need for data augmentation. The steps involved in constructing the current descriptor are depicted in Figure 1 and described in detail. §.§.§ Construction of a fixed-cutoff neighbor graph. We consider a system with N atoms and we denote by 𝐫_i ∈ℝ^3 the Cartesian coordinates of the i^th atom. The Euclidean distance between the atoms i and j is denoted as r_ji = | 𝐫_i -𝐫_j|. To construct the descriptor, we consider the set of neighbors of an atom j within some r_cut distance denoted by v(j)={ i | r_ji≤ r_cut, i j }. The cardinal of this set n(j) = | v(j) | is the number of neighbors of the atom j within the r_cut distance. We then assign labels to the atoms to reflect their spatial proximity to the central atom. For this, we define a bijective map α_j : v(j) →{ 1, …, n(j) } that transforms the elements of v(j) set into a sequence from 1 to n(j) such that r_j α_j(i_1)≤ r_j α_j(i_2) if α_j(i_1) < α_j(i_2). The map α_j transforms the labels of the neighbors atoms of the j^th atom, i.e. v(j) set, into numbers from 1 to n(j) such that the atom with the mapped number 1 is the closest to the central atom j, 2 is the second-nearest neighbor, and so on until n(j), which is the n(j)^th nearest neighbors of the j^th atom. This allows us to construct a graph G_j having n(j)+1 nodes, given by all the atoms from the set v(j) of neighboring atoms and the central atom j. The nodes of the graph G_j are labeled by α_j, from 1 to n(j) with the 0^th node of the graph corresponding to the atom j itself. The edges of the graph G_j are the n(j)(n(j)+1)/2 connections between all the nodes of the graph. §.§.§ Distance calculation and node selection. Further, we denote by r_j:lk the Euclidean distance from the l^th neighbor of atom j within G_j to the k^th nearest neighbor of atom l within G_j. In the particular case r_j:0k, k corresponds to the atom initially labeled as i given by α_j^-1(k) = i. Consequently, r_j:0k is equal to r_ji. Similarly, we can calculate the distance r_j:lk between the l^th neighbor of node j≡ 0 and the k^th nearest neighbor of l within G_j. From the set v(j), we select n_G-1 atoms (the first n_G-1 neighbors). This number is typically smaller than the average number of atoms ⟨ n(j)⟩ found within a distance r_cut of the atoms of the dataset, resulting in a final image where each pixel encodes one edge of the graph (see the next paragraph), but it is not a restriction. Below, we consider the two cases. In the first case, (n_G-1) ≤ n(j) for any atom j, and n_G is chosen such as n_G-1 is a power of 2 for a more efficient treatment by the CNN. For the cases when (n_G-1) > n(j) the inverse of all sorted distances r_j:l·^-1 and r_j:· k ^-1 for which l,k > n_G -1 will be attributed to a constant value. Throughout this study, we will set the constant value to zero. 
This is the case illustrated in Fig. 1. §.§.§ Construction of the 2D Image To the graph G_j, we will attribute an image of the local atomic environment of the j^th atom. From the graph G_j and n_G sorted distances r_j:l·, with l=0, …, n_G-1, we will build a matrix 𝐌_j ∈ℝ^n_G × n_G. Each line l of the matrix is given by the line vector r_j:l·. The first row is calculated as: M_1 k = g_j:1w_j:0 w_j:k/r_j:0k^γ , k = 1, …, n_G, where w_j:0 and w_j:k are weight factors assigned to nodes 0 and k in G_j, respectively. These weights can be set to a value, learned or used as hyperparameters. Their purpose is to encode or “color” various properties of the atoms, such as their chemical species. The utility of these weight factors is discussed in Sec. <ref> where a multi-components system is investigated. Each subsequent row l (1 < l ≤ n_G) of the matrix 𝐌_j corresponds to the (l-1)^th neighbor of node 0 in G_j. Its terms are calculated as: M_l k = g_j:lw_j:(l-1) w_j:(l-1)k/r_j:(l-1) k^γ , k = 1, …, n_G, where w_j:(l-1)k is the weight factor of the k^th neighbor of the (l-1)^th neighbor of node 0 in G_j. The factor g_j:l assigns a weight for the l^th line of the matrix (i.e., the l-1 neighbors of the node 0 of G_j), much like “attention coefficients” in graph convolutional neural networks with attention mechanisms. In this study, g_j:1 is set to 1, while for n_G ≥ l > 1, we use g_j:l = 1/r_j:0(l-1)^β. However, these values can also be used as hyperparameters or learned through the neural network's loss function, as in the case of graph attention networks. The treatment of multi-element systems is straightforward. By using weight factors, denoted w in Eq. <ref>, we can create multiple channels that capture various properties of each atom species. Each channel can be represented by a matrix 𝐌_j(𝐰), computed for a specific set of weight factors 𝐰 defined by the user. Following this convention, we can introduce C channels, denoted as 𝐌_j(𝐰 = 𝐰_c), where c ranges from 1 to C. This means that the neighborhood of each atom is described by C image channels, each with dimensions of n_G × n_G. The first channel is typically a purely geometric channel, where all weight factors 𝐰 are set to 1, providing no specific chemical information about the atoms. The subsequent channels can be assigned weights that are functions of properties such as mass, covalent radius, atomic number, etc. In the case of the Cu-Zr alloy studied in this paper (see Sec. <ref>), we set the number of channels C=2, and weights are set to: 𝐰_c=1 = [ w_Cu; w_Zr ]_c=1 = [ 1.0; 1.0 ], 𝐰_c=2 = [ w_Cu; w_Zr ]_c=2 = [ 1.0; 1.2 ], to create a contrast between the two different elements on the second channel, while they are treated as equivalent in the first channel. §.§ Examples of descriptor images In Fig. <ref>, we provide examples of s encoding LAEs found in some common crystal structures, namely bcc, fcc, hcp, diamond, simple cubic, and a synthetic non-cristalline system. It demonstrates the strong signature of these structures on their s, with differences that can be readily seen on the images. In structures with high symmetry (top row of Fig. <ref>), a number of interatomic distances are equal, resulting in homogeneous regions or bands on the s. Note that the decaying weight g_j:l applied on rows has the effect of diminishing the intensity towards the bottom of the image, in order to increase the importance of the top-left pixels, which encode the distances between the nearest neighbors of the central atom. 
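Before turning to the robustness of these images, the construction above can be condensed into a short NumPy sketch. It is a simplified reading of the equations (a single channel unless per-atom weights are supplied, no periodic-boundary handling, zero padding when fewer than n_G - 1 neighbors are available) and is not the reference implementation released with this paper.

```python
import numpy as np

def neighbors_map(positions, j, r_cut=5.0, n_G=32, gamma=1.0, beta=1.0, weights=None):
    """Sketch of the descriptor image for the local environment of atom j.
    positions: (N, 3) Cartesian coordinates; weights: optional per-atom
    'coloring' factors w (all ones -> purely geometric channel)."""
    w = np.ones(len(positions)) if weights is None else np.asarray(weights, dtype=float)
    d0 = np.linalg.norm(positions - positions[j], axis=1)
    order0 = [i for i in np.argsort(d0) if 0.0 < d0[i] <= r_cut][: n_G - 1]
    nodes = [j] + order0                        # graph G_j: central atom + sorted neighbors

    M = np.zeros((n_G, n_G))
    for l, a in enumerate(nodes):               # row 0: central atom; row l >= 1: its l-th neighbor
        d = np.linalg.norm(positions[nodes] - positions[a], axis=1)
        srt = [k for k in np.argsort(d) if d[k] > 0.0]   # neighbors of node a inside G_j, sorted
        g = 1.0 if l == 0 else 1.0 / d0[a] ** beta       # row weight g_l = 1 / r_{0 l}^beta
        for col, b in enumerate(srt[:n_G]):
            M[l, col] = g * w[a] * w[nodes[b]] / d[b] ** gamma
    return M
```

The multi-channel variant used later for alloys simply evaluates this function several times with different weight vectors and stacks the resulting matrices.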
The effect of Gaussian noise on atoms positions is also illustrated on Fig. <ref> (bottom row), and unsurprisingly results in noise on s too. However, visual comparison between noisy images and their pristine counterpart shows some level of similarity, and differences still appear between noised images based on different structures. Put together, these examples intuitively explain how our s can encode accurately LAEs, with some robustness to positions fluctuations that can be exploited by further analysis using a well-trained model. The choice and design of this model is the subject of the section below. §.§ Structural Analysis Model In this work, we treat structural analysis as a supervised classification task, applied to the high-dimensional representations of the LAEs. To analyze the features in images that reflect the crystal structure (see Fig. <ref>), we employ a Convolutional Neural Network (CNN) <cit.> in a classical supervised classification setting. We use images encoding atomic environments obtained from MD simulations for training, labeled according to the type of crystal structure. In order to minimize the computational cost of the analysis, and considering the clear contrast between images seen in Fig. <ref>, we intentionally give a preference to simple and efficient CNN architecture, consisting of only a few layers and closely resembling the historical LeNet <cit.> or AlexNet <cit.> networks. This design choice ensures rapid computation on a single CPU, making the analysis process accessible without specialized parallel computing hardware (i.e. multicore CPUs, GPUs). However, such platforms can be used transparently for increased throughput, as they are natively supported by popular deep learning libraries. The technical details of the CNN are presented in detail in Appendix A. § RESULTS In this section, we demonstrate the performance of our method on some challenging cases, typically encountered by the community of computational materials science. We mainly focus on the cases where none of the traditional approaches has proven fully satisfactory. §.§ Phase classification in distorted crystals Here, we demonstrate how our workflow can be applied to classify LAEs of crystalline structures that significantly deviate from the ideal structures. The accuracy of the classifier is then compared to a-CNA and PTM algorithms. The RMSD parameter for PTM is set to 0.1 in OVITO. We consider typical structures found in metals: body-centered cubic (bcc), face-centered cubic (fcc), hexagonal compact (hcp), and cubic diamond (cd) structures. While the identification of these structures in their pristine form can be performed by traditional methods with good results, the task becomes challenging in disturbed systems, especially in the case of high-temperature, noisy data, or when a fraction of the atoms of the structure are missing <cit.>. This represents an important barrier to the analysis of experimentally-obtained atomic positions, such as atom probe tomography (APT) data, which can not be treated using traditional methods of LAE analysis. These data are indeed characterized by an important noise and a detection rate in the order of 50%, meaning that half of the atoms of the structure are missing. 
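For the baseline labels used in this comparison, PTM and a-CNA can be invoked through OVITO's Python interface along the following lines; the trajectory file name is a placeholder, and the exact module and parameter names should be checked against the installed OVITO version.

```python
from ovito.io import import_file
from ovito.modifiers import (CommonNeighborAnalysisModifier,
                             PolyhedralTemplateMatchingModifier)

pipeline = import_file("dump.lammpstrj")             # placeholder trajectory file
pipeline.modifiers.append(PolyhedralTemplateMatchingModifier(rmsd_cutoff=0.1))
# alternatively: pipeline.modifiers.append(CommonNeighborAnalysisModifier())

data = pipeline.compute(frame=0)
structure_types = data.particles["Structure Type"]   # per-atom integer labels (FCC, HCP, BCC, ...)
```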
§.§.§ Computational setup and model training To assess the accuracy of our method as compared to traditional approaches in the presence of noise and vacancies in the , we build a synthetic dataset based on MD simulations of bulk bcc Fe <cit.>, fcc Al <cit.>, hcp Zr <cit.> and cd Si <cit.> using empirical potentials. Simulations consist of a temperature ramp in the NPT ensemble increasing from 100 up to T=2/3T_M, where T_M is the melting temperature, with a total simulated time of 10. Snapshots of atomic positions are sampled every 1 to constitute the database. Additionally, following Ref. <cit.>, N altered copies of each snapshot are added to the database, where a fraction x_N of randomly chosen atoms are removed. We chose {x_1, x_2 … x_6} = {0, 0.1, 0.2, …, 0.5}, yielding extremely distorted atomic environments compared to low-temperature structures. Each simulation cell contains 128 atoms and is periodic in all directions. With the 4 different classes and 6 different fractions of missing atoms, each sampled 10 times during a simulation, the database contains a total of 30,720 descriptors, each encoding a LAE. The CNN classifier has 4 classes, corresponding to each of the structures in the database. The database is split between a train and validation subset, containing respectively 80% and 20% of the samples. §.§.§ Structural analysis The overall accuracy of the classifier on the validation dataset is presented in Fig. <ref>, and compared to the result of traditional methods when applied on the same data. The accuracy of both a-CNA and PTM are excellent for nearly ideal structures, but they are plummeting when even a small fraction of atoms is missing in the structure. For the same structures, our method correctly labels highly disturbed atomic environments for the four learned crystal structures, with near-perfect accuracy and regardless of the applied noise and missing atoms, making it promising for local APT data analysis. Our model demonstrates a satisfying accuracy, comparable with more-complex descriptor-based NN classifiers <cit.>, but with much better computational performance, thanks to a more efficient descriptor and a reduced number of trainable parameters. Training is achieved in a few seconds running on a laptop's CPU, in only 10 epochs and with little hyperparameter tuning –which confirms the computational efficiency of the CNN and its ease of training. For inference, the model achieved competitive performance on limited CPU resources and, thus, can be used as a drop-in replacement for the a-CNA and PTM methods in any existing data analysis pipeline. §.§ Crystal structure identification up to the melting temperature Identifying crystal structures up to the melting point is a challenging task, as thermal noise can become dominant at temperatures typically starting from 1/2T_M to 2/3T_M, from which standard analysis methods can fail to identify crystalline structures, as we will demonstrate in this section. In crystals put under high hydrostatic pressure, the melting point is considerably increased, which can result in a range of a few thousand Kelvins where standard structural analysis is unfeasible. This is typical of iron when placed in Earth-core conditions, which constitutes an interesting model system to assess the capabilities of our analysis method. To this end, we simulate the hcp → liquid phase transformation in a large-scale system using molecular dynamics, based on a recent EAM potential which reproduces well this transformation <cit.>. 
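The vacancy-type alteration used to build the training database above (removing a randomly chosen fraction x_N of the atoms from each snapshot) amounts to a few lines; the variable names and the random seed are illustrative.

```python
import numpy as np

def remove_fraction(positions, x, rng):
    """Return a copy of a snapshot with a fraction x of randomly chosen atoms removed."""
    n = len(positions)
    keep = rng.choice(n, size=int(round((1.0 - x) * n)), replace=False)
    return positions[np.sort(keep)]

rng = np.random.default_rng(42)
# altered_snapshots = [remove_fraction(snapshot, x, rng)
#                      for x in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5)]
```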
As our method is based on supervised learning, one additional challenge for classifying simulated data is the availability of a dataset of annotated LAEs. Rather than resorting to a pre-training on synthetic data (which constitutes a valid alternative), we show that traditional structural analysis methods can be used as a source of labelled data. As our method can be trained on small datasets, we perform training on a small subset of the simulation data, using appropriate data augmentation, to label the full trajectory. §.§.§ MD simulations We heat a crystal of hcp iron up to the liquid phase and track the crystal structure evolution. To achieve this, we initially relax a crystal of hcp iron containing 108,000 atoms of dimensions 63.8× 110.5 × 104.2 , under a hydrostatic pressure of 323, to a 1e-3 force tolerance on atoms. The crystal is then heated from 100 to 8000 in the NVT ensemble at a rate 2e13, under the same applied pressure. The temperature-time curve is reported in Fig. <ref>a, showing a linear increase. In Fig <ref> (b), the potential energy curve is reported, showing a continuous increase associated with the temperature ramp, and a sharp step that is typical of first-order phase transitions. This increase in energy is associated with the collective loss of crystallinity, providing a global indication of when most of the hcp → liquid transformation occurred. §.§.§ Structural analysis In Fig <ref> (c), we represent the fraction of hcp atoms found in the cell using PTM. Note that a-CNA could be used instead of PTM, with similar results. Starting at about 4000 or roughly T_M/2, the fraction of hcp atoms detected by PTM diminishes and slowly decays to reach 0% at 7400, i.e. when the phase transformation occurs. However, the linear evolution of the potential energy between 4000 and 7400 suggests that no phase transformation occurred in this range, and that the structure remains hcp up to the high temperatures. To verify this, we set up a CNN classifier to label LAE between the hcp and liquid phase. We build a database by selecting snapshots of the system taken along the course of the simulation. In a conservative approach, the snapshots are taken in the domains where PTM accurately labeled the atoms (below 4000 and above 7400), as identified by blue symbols on Fig <ref> c). Crystalline environments identified by PTM can be used as a source of conservative (i.e., limiting the risk of mislabelling) annotations to build a training database without user annotation. While the total amount of annotated samples in the training database can be made arbitrarily large with no additional human effort, we choose to keep its size small to limit training time and demonstrate the method's efficiency. We thus select 6 snapshots along the trajectory, from which we randomly select only 2,000 atoms from each snapshot (i.e., ∼2% of the supercell), resulting in a set of 6,000 hcp and 6,000 liquid atoms. To improve the representation of noised, high-temperature crystals, without using more labelled data, we perform data augmentation by adding a small amount of noise to a fraction of the atoms. For this, we prepare an altered version of each snapshot, where a Gaussian noise d ∼𝒩(0, 0.05) is added to 33% randomly selected atoms in the snapshot. Similarly, we draw 2,000 samples from each of these altered snapshots, resulting in a final training dataset of 24,000 structures (the smallest in this paper). After training, we use the model to analyze the full trajectory, resulting in the curve shown in Fig. 
<ref> c). Contrary to PTM, our method yields a much more stable signal, and the phase transformation appears with a strong signature. The fraction of hcp atoms remains close to 100% up to 7000, where it starts to decrease slowly, and produces a sharp decrease associated with melting, in agreement with the energy curve (dashed vertical line). This result is satisfyingly outperforming traditional methods, especially considering the limited database size. It demonstrates how the use of traditional methods and data augmentation can help solve the classical challenge of labelled data availability in supervised learning, as zero user annotation was needed. These results offer an alternative to standard methods for tracking the underlying physics of this phase transition. One interesting finding is the intermediate role of the bcc phase, recently spotted <cit.>, which is a significant topic in itself but goes beyond the scope of the present investigation. §.§ Melting and vitrification of Ni Metallic melts can be solidified into a glassy state through rapid quenching down to low temperature. In monoatomic metallic systems, Nickel can be used as a model material for the study of metallic glasses  <cit.>. However, discriminating between the liquid and amorphous states, which are both non-crystalline phases, remains notoriously challenging. In this section, we examine the ability of our model to distinguish those states in MD simulations. §.§.§ MD simulations The amorphous Ni structure is obtained by the rapid cooling of liquid Ni, following the procedure described in Ref. <cit.>. The simulation cells are fully periodic and contain 6912 atoms (12×12×12 cubic fcc cell). In order to achieve vitrification of the structure, the relaxed fcc Ni undergoes the following steps, as shown in Fig. <ref> a. First, the temperature is gradually increased up to 2500 (1e12 rate) within the NPT ensemble → in order to melt the system. Then, the liquid is equilibrated at 2500 in the NVT ensemble for 100. To induce vitrification, the liquid is rapidly quenched (1e14) down to 10 within the NPT ensemble. The outcome of the simulations is summarized in Fig. <ref>. During heating up to the melting temperature, the potential energy of the system (Fig. <ref>b) gradually increases. A distinct step of energy increase near 2000 is caused by the release of the heat of fusion when melting occurs, and serves as a global indicator of the fcc → liquid first-order transformation. During a subsequent quench of the system, the temperature, and potential energy decrease, as shown in Figs. <ref>a,c. A subtle change of slope (Fig. <ref>c) near 1000 indicates a 2^nd order phase transformation from liquid to a glassy structure. This structural transformation can be also distinguished in the global system feature, such as the radial distribution function (RDF) in Fig. <ref>d, where a splitting of the second peak in the range of 0.4 to 0.5 suggests a short-range atomic reorganization into an amorphous structure <cit.>. The energy and RDF provide global indicators of the transformation of the system, however, they do not give any local, per-atom information. §.§.§ Structural analysis To provide a local structural indicator for non-crystalline systems, we use a similar approach as in Sec. <ref>. We build a database of structures representing each phase (fcc, liquid, amorphous), which will constitute the three classes of the classification model. 
As per-atom manual annotation is practically unfeasible, we label simulation snapshots based on their energy and RDF curves in domains that we consider as high confidence, i.e. below 2/3T_m for the fcc structure, from the NVT equilibration at 2500 and at 1600 during quench for the liquid, and from the lower temperature domain of the quenching for the amorphous state. These snapshots are represented by blue disk symbols in Fig. <ref>e (for the melting stage), and in Fig. <ref>f (for the quenching). A total of 24 supercells are labelled and included in the dataset (10 fcc, 9 liquid and 5 amorphous). The lower statistical representation of the amorphous phase is compensated by setting a two times larger weight on this class for loss calculation. The database used for training is then composed of a total of 165,888 representations of LAEs. A few examples of images, taken from each class of the dataset, are presented in Fig. <ref> for visual comparison. While inter-class comparison (i.e. between images of different columns in Fig <ref>) may present some similarity, intra-class comparison (i.e. row-wise) allows identification of some distinctive features, which could be picked up by the model during training. No data augmentation is performed, to avoid any risk of overlap between the distributions of liquid and glassy structures. Once trained, the CNN is used in inference to label the LAE of each individual atom of the cell in all frames, resulting in the curves shown in Fig. <ref>e,f. In the domains where structural transformations occur, the model is able to predict a smooth transition from fcc to liquid, capturing with great accuracy the melting temperature (Fig. <ref>e), and then from liquid to amorphous (Fig. <ref>f). This application demonstrates the accuracy and transferability of our approach, where 864,000 atoms were labelled based on only 24 full-cell user annotations. §.§ Vitrification of a Cu-Zr alloy In this section, we showcase the ability of our method to treat multi-component alloys and consider a second-order phase transformation from liquid to amorphous state in Cu-Zr system. §.§.§ MD simulations The database preparation consists in creating a liquid structure of the alloy, by thermalizing it in the liquid phase, and subsequently cooling it down to form an amorphous structure. Initially, we generate the liquid structure by randomly placing 16,000 Cu atoms and 16,000 Zr atoms, resulting in a Cu_50Zr_50 composition, in a cubic cell of side 85. Creating atoms at random positions, there is a risk of atoms being generated too close to one another, leading to an enormous force experienced by the atoms if relaxed with the EAM potential directly, due to its divergence at short distance. To circumvent this issue, we perform a preliminary minimization step to ensure that all atoms are located further apart than a certain cutoff distance r_c. For this purpose, we employ the “soft” pair style in LAMMPS. The pairwise interaction between any two atoms i and j, regardless of their species, is given by the potential energy function: E(r_ij) = A [ 1 + cos ( π r_ij/r_c ) ], r_ij < r_c, with r_c = 1.8 and a prefactor A = 10. The minimization process continues until the force exerted on the atoms falls below 0.1. Then, we redefine the soft pairwise interaction with a larger cutoff, r_c = 2.8, and a prefactor A = 100, and carry out 10,000 steps of molecular dynamics (MD) in the NVE ensemble, enforcing a maximum displacement of 0.1 per timestep t = 1. 
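For reference, the pre-relaxation pair interaction of the equation above can be evaluated directly; the expression matches the functional form of the LAMMPS "soft" pair style used here, with the prefactor A and cutoff r_c taking the stage-dependent values stated in the text (units follow the simulation's unit system).

```python
import numpy as np

def soft_pair_energy(r, A, r_c):
    """E(r) = A * (1 + cos(pi * r / r_c)) for r < r_c, and zero otherwise."""
    r = np.asarray(r, dtype=float)
    return np.where(r < r_c, A * (1.0 + np.cos(np.pi * r / r_c)), 0.0)

# First pre-minimization stage: r_c = 1.8, A = 10; second stage: r_c = 2.8, A = 100.
# energies = soft_pair_energy(np.linspace(0.0, 3.0, 301), A=10.0, r_c=1.8)
```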
This is done to ensure that most pairwise distances approximate r_c. After these steps, we replace the soft pairwise interaction with the EAM potential from Mendelev et al. <cit.>. To further stabilize the system, we execute 10 MD steps in the NVE ensemble, maintaining the same maximum displacement limit. We then carry out thermalization in the NPT ensemble at a constant temperature of 2000 for 100,000 steps, initializing the system with a Gaussian velocity distribution. Finally, we conduct two quenching steps, which are descending temperature ramps. The first quenching step reduces the temperature from 2000 to 1500 at a cooling rate of 1e13 to limit the simulated time dedicated to the liquid phase. The second quenching step lowers the temperature from 1500 down to 50 at a slower cooling rate of 1e12 to ensure vitrification. We collect 5 snapshots taken at intervals of 10, and throughout the quenching process, we collect snapshots of the system every 20 to constitute the database. In Fig. <ref> d), the intra- and inter-species RDF are plotted, showing differences between pairwise distances of atoms depending on their species. To account for the different chemical components, we use a two-channel , as described in Sec. <ref>, and a 2-channel CNN, with the same architecture and hyperparameters as single-channel CNNs used in other applications. The results of the MD simulations for the Cu-Zr system are outlined in Fig. <ref>. Similarly to the Ni case, we track the evolution of potential energy during the simulation, identifying the liquid-to-glass transition through an energy shift near 700 in Fig. <ref>a. This serves as a global indicator of the transition. The modification in the RDF at around 0.5 as seen in Fig. <ref>b, further corroborates the transition to an amorphous structure. However, these global indicators lack local, per-atom information and the location of the inflection point on the energy curve in Fig. <ref>b is challenging to extract precisely based solely on the structural information. Furthermore, in liquid and amorphous Cu-Zr phases, traditional LAE analysis methods such as PTM or a-CNA are not applicable, as they are strictly limited to crystalline structure identification. §.§.§ Structural analysis To perform the structural analysis in Cu-Zr, we proceed as in the Ni case, building a database of structures for each phase, now taking into consideration the liquid and amorphous phases of Cu-Zr, and encoding LAEs in two channels (see Sec. <ref>). Simulation snapshots are labeled based on energy and RDF in the high-confidence domains to construct the training set for our model. These domains are marked in Fig. <ref>c, and correspond to less than 450 for the amorphous, and above 1100 for the liquid phase. To limit the size of the dataset while preserving its diversity, n=10,000 randomly selected atoms are drawn from each snapshot of the database. Our database for training then consists of 330,000 LAEs in total (each will be encoded in two channels), with 16× n =160,000 liquid and 17× n=170,000 amorphous environments. No data augmentation is used in this case. Once trained, the CNN delivers individual labels for each atom throughout all frames, revealing the progression from liquid to amorphous phase as indicated by the curves in Fig. <ref>c. In the 400-1100 domain, the CNN demonstrates an accurate depiction of the transition at 700. Thus, our method, similar to the Ni case, showcases its precision and transferability. 
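In terms of the descriptor sketch given earlier, the two chemical channels amount to evaluating the same function twice with different per-atom weight vectors and stacking the results; the small species array below is purely illustrative.

```python
import numpy as np

species = np.array(["Cu", "Zr", "Cu", "Zr", "Cu", "Zr"])   # illustrative per-atom labels

w_channel_1 = np.ones(len(species))                  # purely geometric channel, all weights 1
w_channel_2 = np.where(species == "Cu", 1.0, 1.2)    # chemical contrast: w_Cu = 1.0, w_Zr = 1.2

# Two-channel image of atom j, reusing the neighbors_map sketch from above:
# image_j = np.stack([neighbors_map(positions, j, weights=w_channel_1),
#                     neighbors_map(positions, j, weights=w_channel_2)])   # shape (2, n_G, n_G)
```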
In conclusion, our approach successfully aids in the identification of the liquid-to-amorphous phase transition in a complex, multi-component system. § DISCUSSION The accuracy of the analysis in the applications presented above outperforms traditional methods and competes with methods based on GNNs <cit.> or on descriptors coupled with neural networks <cit.>, while maintaining much lower computational cost. Furthermore, the results should be balanced with our decision to prioritize simplicity and computational efficiency, particularly in the design of the CNN that extracts features from descriptors and performs the classification. In machine learning, classifying real data with a finite set of classes is a common challenge, as some samples might deviate from all known classes. The used CNN classifier assigns likelihood scores to each class and selects the most likely one. Low-confidence samples thus cannot be labeled as “unknown”. It is then unclear if misclassification is a result of limitations of the model, or from samples that significantly deviate from all reference structures and have an undefined environment. To handle low confidence predictions in practical applications, an “unknown” label could be assigned if the maximum value of the one-hot output vector of the CNN falls below a threshold value. It is also worth noting that there is no inherent limit to the complexity of the network that can be used to extract features from images, while we intentionally used networks with simple architectures and a few tens of thousands of parameters. The obtained computational efficiency particularly stands out compared to GNN methods, both in terms of computational load and ease to train the model, needing only tens of thousands of annotated images. More advanced networks, such as ResNets <cit.> with millions of parameters, could be applied in applications where the learning capacity of small CNNs would be limiting. This could happen for extensive training databases where differences between s are difficult to detect. This would however necessitate the use of specialized hardware to compensate for the increased computational cost. For this reason, exploring the performance of more complex models remains out of the scope of the present study. Similarly, more complex workflows where correlations between s found in the same system (e.g., for object detection or clustering), and/or between s obtained in different frames of a trajectory (time series analysis), are analyzed using NNs represent an interesting perspective for the future studies. Previously, Goryaeva et al. <cit.> have demonstrated the usefulness of spectral descriptors as embeddings for LAEs, i.e., as vectors encoding a LAE and allowing a direct comparison with other LAEs using an appropriate metric, like Mahalanobis statistical distance. Following a similar approach with our lightweight descriptor could be beneficial to enable efficient clustering, outlier detection, as well as for learning from very limited amounts of annotated samples (few-shots learning). This can be addressed with minimal modification of our approach, by learning a similarity metric between s, e.g., using a Siamese network architecture <cit.> to directly optimize the L_1 distance between low-dimensional embeddings calculated by the network (see Fig. <ref>), while taking advantage of the “semantic” high-level features extraction performed by the CNN. 
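The "unknown" labeling suggested above could be realized with a simple confidence threshold on the classifier output; the threshold value below is arbitrary and would need to be calibrated in practice.

```python
import numpy as np

def label_with_unknown(probs, threshold=0.8, unknown=-1):
    """probs: (n_atoms, n_classes) softmax outputs of the classifier.
    Atoms whose maximum class probability falls below the threshold are
    tagged as 'unknown' instead of receiving the argmax class label."""
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < threshold] = unknown
    return labels
```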
Storing low-dimensional embeddings in a database can also take several orders of magnitude less space than raw s and enable faster comparison. Adaptation of such methods is a promising perspective for further investigations. Contrary to the descriptors used for interpolating the energy landscape of atomic systems, s are tailor-made for structural analysis and do not necessarily have to be differentiable. Therefore, sorting operations can be applied for their construction (see Section <ref>). It is worth noting that efficient differentiable sorting is an open problem and some recently proposed solutions <cit.> can be considered for further development of s. The integration of the present method in HPC environments, for on-the-fly analysis or the treatment of large-scale systems, is straightforward. The computation of descriptors in a supercell containing N atoms has a complexity 𝒪(N), and can be parallelized (see Data Availability section). § CONCLUSIONS In this work, we propose a simple and efficient approach for encoding local atomic environments into 2D image fingerprints, called . The descriptor is based on a graph-like architecture with weighted edge connections of neighboring atoms. To enable accurate identification of atomic structures, the descriptor can be readily combined with image processing methods, like CNN classifiers. This workflow for structural analysis achieves an accuracy comparable to specialized neural network architectures (e.g., GNN) and spectral descriptors, while maintaining its computational cost comparable to that of traditional geometry-based methods. Structural analysis with is accessible with modest computational resources, e.g., without GPU and massive parallelism, while being scalable up to HPC workloads. The descriptor intrinsically encodes geometrical invariance, which allows training on relatively small datasets, in contrast to other NN-based methods for structural identification. Thus, a small subset of the data to analyze –possibly annotated using traditional algorithms in high-confidence regions– can be sufficient for training the model. The proposed approach is applicable to crystalline and non-crystalline structures, including multi-element systems, which are notoriously difficult to interpret both by traditional and recent ML-based methods. In perspective, the method can be further adapted for the detection and identification of defects, including extended defects and precipitates in large-scale systems. The simplicity of the enables its straightforward implementation across different frameworks, hardware platforms, and programming languages, allowing for easy integration in molecular dynamics engines and post-treatment structural analysis software. An example of a simple implementation in Python as well as of optimized code suitable for the HPC environment is given in the Data Availability section. § ACKNOWLEDGEMENTS This work was financially supported by the Cross-Disciplinary Program on Numerical Simulation of CEA, the French Alternative Energies and Atomic Energy Commission. AA, AMG, and MCM acknowledge the support from GENCI - (Jean-Zay/CINES/CCRT) computer centre under Grant No. A0130906973. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (EUROfusion Grant No. 101052200). The views and opinions expressed herein do not necessarily reflect those of the European Commission. The authors gratefully thank Dr. 
Isabelle Mouton for the insightful conversations regarding the major challenges in the structural analysis of 3D reconstructions from Atom Probe Tomography. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY A Python implementation is available on GitHub (https://github.com/ai-atoms/neighbors-maps/). This implementation was used for generating visualizations in Figs. 1 and 2. An optimized implementation, available in the Milady package (https://ai-atoms.github.io/milady-docs/, https://ai-atoms.github.io/milady/), was used for the representation of LAEs in Sec. <ref>. § CONVOLUTIONAL NEURAL NETWORK ARCHITECTURE The Convolutional Neural Network (CNN), implemented with TensorFlow Keras, is sequential with an input shape of (N, 32, 32) for N-channel images. The architecture depicted in Fig. <ref> consists of: * A 2D convolutional layer (Conv2D) with 16 filters of kernel size (5, 5) outputting a shape of (16, 32, 32) with 416 parameters. * Another Conv2D layer with 32 filters of kernel size (3, 3), adding 4640 parameters, and maintaining the output shape at (32, 32, 32). * A MaxPooling2D layer with a pool size of (2, 2) to downsample the feature maps to (32, 16, 16). * Two additional Conv2D layers, first with 16 filters of kernel size (3, 3) and 4624 parameters, reducing the output shape to (16, 16, 16), and then 16 filters with strides of (2, 2) and 2320 parameters, yielding an output shape of (16, 8, 8). * A final Conv2D layer with 32 filters, strides of (2, 2), 2080 parameters, and a (32, 4, 4) output shape. * A Flatten layer, converting the output to a 1D vector of 512 elements. * A Dropout layer for regularization with a dropout rate of 0.5. * A Dense layer with 24 units using ReLU activation, contributing 12,312 parameters, followed by a batch normalization layer. * A final Dense layer with 2 nodes for model prediction. The architecture employs ReLU activations and L2 regularization (1e-4). The model is compiled with the Adam optimizer, the Sparse Categorical Cross-entropy loss function, and accuracy as the performance metric, and is trained for 10 epochs. The model's total number of trainable parameters amounts to 26,490.
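As a cross-check of the listed output shapes and parameter counts, a possible Keras reconstruction is sketched below. The "same" padding, the (2, 2) kernel of the final convolution, and the use of logits with a from-logits loss are inferred from the stated shapes and counts rather than given explicitly in the text; the images, stated as (N, 32, 32), are assumed to be transposed to channels-last order before being fed to this model.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_classifier(n_channels=1, n_classes=2, weight_decay=1e-4):
    reg = regularizers.l2(weight_decay)
    conv = dict(padding="same", activation="relu", kernel_regularizer=reg)
    return keras.Sequential([
        keras.Input(shape=(32, 32, n_channels)),            # channels-last layout
        layers.Conv2D(16, (5, 5), **conv),                  # 416 parameters for n_channels = 1
        layers.Conv2D(32, (3, 3), **conv),                  # 4640 parameters
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), **conv),                  # 4624 parameters
        layers.Conv2D(16, (3, 3), strides=(2, 2), **conv),  # 2320 parameters
        layers.Conv2D(32, (2, 2), strides=(2, 2), **conv),  # 2080 parameters, output 4 x 4 x 32
        layers.Flatten(),                                   # 512 elements
        layers.Dropout(0.5),
        layers.Dense(24, activation="relu", kernel_regularizer=reg),   # 12,312 parameters
        layers.BatchNormalization(),
        layers.Dense(n_classes),                            # logits for the class prediction
    ])

# model = build_classifier()
# model.compile(optimizer="adam",
#               loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
#               metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.2, epochs=10)
```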
http://arxiv.org/abs/2307.02494v1
20230704061643
Comparison of Neural FEM and Neural Operator Methods for applications in Solid Mechanics
[ "Stefan Hildebrand", "Sandra Klinge" ]
cs.CE
[ "cs.CE", "cond-mat.mtrl-sci", "cs.NA", "cs.NE", "math.NA" ]
Machine Learning methods belong to the group of most up-to-date approaches for solving partial differential equations. The current work investigates two classes, Neural FEM and Neural Operator Methods, for the use in elastostatics by means of numerical experiments. The Neural Operator methods require expensive training but then allow for solving multiple boundary value problems with the same Machine Learning model. Main differences between the two classes are the computational effort and accuracy. Especially the accuracy requires more research for practical applications. § INTRODUCTION Induced by ever-rising compute power and successful applications in several domains, Artificial Intelligence (AI) systems and especially Machine Learning (ML) methods attract growing attention for advanced tasks in mechanical engineering <cit.>. This is supported by well-established and flexible Machine Learning frameworks like PyTorch <cit.> and Tensorflow <cit.>. One particular application of ML is the solution of Parameterized Partial Differential Equations (PPDE), which are traditionally solved by numerical discretization methods like Finite Element Method (FEM), Finite Difference Method (FDM), Finite Volume Method (FVM) or Boundary Element Method (BEM). Based on ML techniques, two new classes of methods arose, namely the Neural FEM and Neural Operator methods <cit.>. The aim of the current work is to compare these two classes of methods for applications in solid body mechanics. Therefor, their common representatives are applied to case studies, where the well-established FEM can serve as a benchmark. The mathematical problem to solve with either method can be described as follows: Let an arbitrary Parameterized Partial Differential Equation be given on an open domain B with piecewise smooth boundary Γ in the form: 𝒩[u(y) ; y]=0 on B, ℬ[u(y) ; y]=0 on Γ , where 𝒩 is a nonlinear operator on the domain B, ℬ an operator on Γ that determines the boundary conditions, and u(y)∈ℝ^d the solutions of the PDE. All quantities are parameterized by y∈ℝ^n. The mapping G: B ∪Γ × ℝ^n →ℝ^d, (X, y) ↦u, X∈ B ∪Γ, n, d ∈ℕ is called the solution operator of the PPDE. Neural FEM resembles a conventional FEM implementation. The artificial Neural Network (NN) approximates the solution function of a particular realization of the PPDE. Based on the conventional form of Physics-Informed Neural Networks (PINN, <cit.>), the Deep Energy Method (DEM) and competitive PINNs (cPINN) are proposed in <cit.> . All these approaches are independent from a spatial discretization of the domain B (grid-independent) and can realize high accuracies, but must be retrained for each new set of parameters. In the case of Neural Operator methods, an NN is trained to behave like the solution operator of a PPDE. Then, the network can be applied to arbitrary combinations of parameters and boundary conditions, to solve Boundary Value Problems (BVP). These methods are particularly characterized by a discretization-independent error, allowing zero-shot super resolution (training on coarse grid, inference on fine grid). In this work, the Deep Operator Network (DeepONet) <cit.> and the Fourier Neural Operator (FNO) <cit.> are studied as representatives of Neural Operator methods. Typically, Neural Operator Methods require a large amount of training data, which may need to be computed in a numerically expensive way <cit.>. 
Physics-Informed variants of Neural Operator methods address this drawback by incorporating knowledge on the underlying PDE as a regularizing mechanism in the loss function <cit.>. This can increase accuracy, generalizability, and data efficiency <cit.>. The present contribution investigates Physics-Informed DeepONet (PIDeepONet) <cit.> and Physics Informed Neural Operator (PINO) <cit.> as representatives of Physics-Informed Neural Operator methods. The paper is structured as follows. First, we give an insight in details of selected ML methods (Sections <ref> and <ref>). Then, we apply these methods to three example problems (Section <ref>). Finally, we discuss the results in comparison to the reference solution from FEM and highlight necessary steps for a future use of these NN methods in elastostatics (Section <ref>). § NEURAL NETWORK NOMENCLATURE Artificial Neural Networks are constructed as layers of neurons <cit.>. Each neuron carries out a (typically nonlinear) activation function. In case of a Fully Connected Neural Network (FCNN), each neuron receives its input as linear transformation of all outputs of the neurons on the layer before. The output ℛ^i of the i^th layer is thus calculated by: W^i = w^iℛ^i-1 + b^i ℛ^i = a^i(Θ^i, W^i) The weights w^i and biases b^i of the i^th layer, together with the parameters Θ^i of the activation functions a, form the set of parameters θ of the Neural Network. Other parameters of the network architecture (like number of layers and layer widths) and the optimizer algorithm (e.g. step width) are called hyperparameters and have to be chosen by the NN user or an outer optimization strategy. All the layers between the input an output layer are called hidden layers. The number of (hidden) layers is referred to as the network's depth, whereas the number of neurons within a layer is called the width of the layer. The whole network represents an arbitrary (continuous) mapping (Universal approximation theorem, <cit.>) ℛ: ℝ^n ↦ℝ^m from the input to the output side, with n and m the input and output layer width, respectively. To make the network approximate a mapping ℛ̃ on a subset D ⊂ℝ^n, the mapping is given indirectly by a set T_tr of training tuples t_tr^k (P^k, ℛ̃(P^k) ), P^k ∈ D, k ∈ℕ, which together form the training data set. P^k are the input samples, ℛ̃(P^k) the corresponding target outputs. A set T_te of testing tuples t_te^l, l ∈ℕ is required to check the quality of the approximation the network has learned so far. Usually, T_tr∩ T_te = ∅. In the application of a network, arbitrary sets of data within D can be the input, but the exact output is usually unknown and only approximated by the net. Conventionally, the parameters of the network are adapted by an optimization algorithm like Adam <cit.>. This algorithm minimizes the empirical risk F (alternatively called loss value) that is calculated as the output of a loss function ℒ. The latter is commonly defined as the discrepancy between the target outputs and the outputs of the network with the current parameters. A frequent choice for the loss function is Mean Square Error (MSE) for N ∈ℕ tuples t^k MSE = 1/N∑_k=1^N(ℛ̃(P^k) - ℛ(P^k) )^2 . The optimization employs (partial) derivatives of the loss function w.r.t. the parameters in the network. Machine learning networks like PyTorch therefor record all operations acting on a variable from input to output symbolically, so that fast, highly accurate derivation becomes possible. This feature is referred to as autograd <cit.>. 
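As a minimal illustration of these definitions, the PyTorch sketch below builds a small fully connected network, minimizes the MSE risk over a set of training tuples with Adam, and then uses autograd to differentiate the network output with respect to its inputs as well. The architecture, the placeholder target mapping, the learning rate, and the number of epochs are illustrative choices only.

```python
import torch

# A small FCNN R: R^1 -> R^1 with one hidden layer of width 10 and Tanh activation
model = torch.nn.Sequential(
    torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1)
)

# Training tuples (P^k, R~(P^k)); the target mapping here is only a placeholder
P = torch.linspace(-1.0, 1.0, 100).reshape(-1, 1)
target = P ** 2

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(1000):
    optimizer.zero_grad()
    loss = torch.mean((model(P) - target) ** 2)  # empirical risk F as MSE
    loss.backward()                              # autograd: dF/dtheta for all parameters
    optimizer.step()

# autograd also provides derivatives with respect to the inputs,
# which is what the residual-based losses discussed below rely on
X = torch.linspace(-1.0, 1.0, 5).reshape(-1, 1).requires_grad_(True)
u = model(X)
du_dX, = torch.autograd.grad(u.sum(), X, create_graph=True)
```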
Its use, however, is not limited to derivations w.r.t. network parameters. By default, the NN parameters are initialized randomly before the first optimizer step. This leads to varying results of effort and achieved accuracy during training. § NEURAL FEM In this class of methods, the output of the NN is chosen as the unknown function of the PDE. The loss function is then computed either from the residual of the PDE (classical Physics-informed Neural Networks (PINN) <cit.> and competitive PINN (cPINN) <cit.>), or from the potential energy if the minimum principle applies (Direct Energy Method (DEM) <cit.>, mixed DEM (mDEM) <cit.>), both incorporating the outputs of the NN. Hence, no training data set is needed. §.§ Physics Informed Neural Networks (PINN) The original PINN formulation uses an FCNN where the loss function is applied to the squared residual of the PDE on specified collocation points, <cit.>, F^PINN, B = 1/N_f∑_i=1^N_f(𝒩[u_θ](X_i^f))^2 . The boundary conditions are accounted for by an additional term in the form F^PINN, Γ = λ_b/N_b∑_i=1^N_b(ℬ[u_θ](X_i^b))^2, where λ_b is a hyperparameter that weighs the error proportions, since the propagated gradients can be of different magnitudes, thus driving the optimization procedure towards an incorrect solution <cit.>. The total empirical risk is then calculated by F^PINN(u_θ)= F^PINN, B + F^PINN, Γ . According to <cit.>, the training of PINNs fails even on very simple problems, such as the 1D convection or the reaction-diffusion equation. An analysis of the occurrence of comparable pathologies in elastostatic or elastodynamic contexts requires further research. In the survey at hand, they did not manifest. §.§.§ Deep Collocation Method (DCM) The Deep Collocation Method (DCM) <cit.> is a representative of the classical PINN, where the empirical risk is built up by the squared residual at random collocation points. The constraints are typically accounted for by additional penalty terms but not enforced by a transformation of the output data. §.§ The Deep Energy Method (DEM) The DEM was originally introduced to calculate finite deformation hyperelasticity <cit.>. This method as well as methods derived from it require only first derivatives to compute the loss function, thus reducing the numerical complexity. In return, errors are generated by the numerical integration of the energy function. The solution is sought in form of the displacement field u(X) that corresponds to the minimum total potential energy Π. This minimization can be accomplished by choosing the loss function F to calculate the total potential energy F := Π. The input of the NN are points in the reference configuration X ∈ B ∪Γ, in the domain B or on its boundary Γ. A transformation is applied to integrate the geometric boundary conditions: Let the output of the NN be given by z(θ, X). To retrieve the displacement field u_θ(X) based on the parameter set θ of the NN, the displacements on the boundary are introduced in a separate term u_g(X). Additionally, a mapping A(X) with A(X)=0 for X∈Γ is introduced. Then, the output is constructed as: u_θ(X)=u_g(X)+A(X) z(θ, X) Now, u_θ automatically fulfills the boundary constraints and a nonlinear optimization problem without constraints is obtained. This problem can be solved with an optimization procedure such as the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. 
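A minimal sketch of this loss assembly for a classical PINN/DCM is given below for the dimensionless model problem -u''(X) = f(X) with f ≡ 1 on (-1, 1) and u(-1) = 0 (cf. Example B further below). The collocation-point count, the weighting λ_b, the restriction to a single Dirichlet penalty term, and the number of L-BFGS epochs are simplifications for illustration and do not reproduce the exact implementation used in this work.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1))

def residual(X):
    """PDE residual N[u_theta](X) of the model problem -u'' = 1."""
    X = X.clone().requires_grad_(True)
    u = net(X)
    du, = torch.autograd.grad(u.sum(), X, create_graph=True)
    d2u, = torch.autograd.grad(du.sum(), X, create_graph=True)
    return -d2u - 1.0

X_f = torch.rand(100, 1) * 2.0 - 1.0   # collocation points in the domain B = (-1, 1)
X_b = torch.tensor([[-1.0]])           # Dirichlet boundary point, u(-1) = 0
lam_b = 1.0                            # weighting hyperparameter lambda_b

optimizer = torch.optim.LBFGS(net.parameters(), lr=1.0)

def closure():
    optimizer.zero_grad()
    F_B = torch.mean(residual(X_f) ** 2)          # squared residual inside the domain
    F_Gamma = lam_b * torch.mean(net(X_b) ** 2)   # penalty for the boundary operator
    loss = F_B + F_Gamma
    loss.backward()
    return loss

for epoch in range(15):
    optimizer.step(closure)
```

A Neumann condition would enter through an additional penalty on the first derivative at the corresponding boundary points, or could be removed altogether by an output transformation in the spirit of the DEM construction introduced above.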
The corresponding loss function to minimize is F^DEM(u_θ)=∫_B( W(F_θ)-b·u_θ) d V-∫_Γ_NT·u_θ d A with the elastic strain energy W, the deformation gradient F_θ = ∂(X + u_θ)/∂X, body forces b and the traction T = P·N defined as the 1^st Piola-Kirchhoff stress tensor P projection on the outward normal of Γ. The deformation gradient F_θ contains only first derivatives and is retrieved by the autograd feature of the NN framework. The described procedure is shown in Fig. <ref>. Several numerical methods to calculate an approximation of the integrals in the loss function have been suggested <cit.>. Some examples are the Monte Carlo integration and the trapezoidal rule, that we use for the examples presented in Section <ref>. An alternative variant of the same concept is the Shallow Energy Method (SEM), where the deep NN is replaced by a shallow NN with a single hidden layer, activated by Radial Basis Functions (RBF) <cit.>. An other enhancement of the basic DEM is the mixed DEM (mDEM) <cit.>, where both displacements u_θ and stresses P_θ are calculated by the NN. The deviation from the constitutive law derived from the strain energy W must then be integrated into the loss function to represent the correct material behavior F^mDEM=F^DEM+V/N_f∑_j=1^N_f||P_θ(X_j^f)-.∂ W(F_θ)/∂F|_X_j^f||_2^2 . The prescribed forces on the Neumann boundary part Γ_N can be accounted for directly by a transformation similar to Eq. (<ref>) holding for the geometric boundary conditions on the Dirichlet boundary part Γ_D. Alternatively, an additional error term can be included to penalize the squared deviation from the prescribed forces. §.§ competitive PINN (cPINN) cPINNs extend the idea of PINNs by formulating the learning problem as a zero-sum game in the style of a Generative Adversarial Network (GAN) <cit.> with a Nash equilibrium that corresponds to the analytical solution of the PDE. This avoids the use of the square of the residual, which aims to improve the learning performance. Compared to a classical PINN, cPINN introduces an additional discriminator FCNN with NN parameter set ϕ which is trained to predict errors of the PINN. Let u_θ be furthermore the output of the PINN and additionally d_ϕ=(d_ϕ^B, d_ϕ^Γ) the output of the discriminator network. Then the minimax formulation of the game is given by max _ϕmin _θ F^cPINN(u_θ, d_ϕ)=.1/N_f∑_i=1^N_f𝒩[u_θ] ·d_ϕ^B|_X_i^f+.λ_b/N_b∑_i=1^N_bℬ[u_θ] ·d_ϕ^Γ|_X_i^b . <cit.> solve this optimization problem by using the Adam based Competitive Gradient Descent (ACGD) method. § NEURAL OPERATOR METHODS In a multi-query context, where a PDE must be evaluated for a large number of parameters, classical methods are computationally intensive. This includes both the conventional FEM and the Neural FEM methods explained previously. That drawback has motivated a large body of work on model order reduction, which deals with the tradeoff towards reduced accuracy, stability, and generalizability. Learning solution operators between (infinite-dimensional) function spaces using NNs is a comparatively young field. In this class of methods, the NN approximates the solution operator of the PPDE, i.e. for a given parameter set, the NN shall output the solution of the PDE at the points of interest. Now, the objective is defined as a risk functional that takes the probability distribution χ of the parameter set y into account <cit.> F=∫_χℒ(G_θ(y), G(y)) dχ , where G is the solution operator and G_θ its approximation by the NN. 
A possibility to represent operators by means of NNs is a finite-dimensional approximation of the function spaces and interpolation of these spaces by the NN. Let e.g. the Boundary Value Problem (BVP) be given with approximations to the solutions calculated by traditional FEM on the node points. Now, the solution operator can be sought that maps the volume forces to the displacements on the node points, hence training a discrete operator <cit.>. However, this approach introduces a grid-dependency since the results are only obtained on the initially chosen node points. Alternatively, the points of interest in space can be included as input parameter into the mapping that shall be represented by the NN, u_θ: B ∪Γ×ℝ^(d+N_dof)→ℝ^N_dof , [X, 𝐲] ↦u_θ(X , 𝐲) , where 𝐲 is taken as the parameter of the displacement field u_θ. In the following, Deep Operator Network (DeepONet) and Fourier Neural Operator (FNO) are discussed. Both approaches are discretization independent and allow for small generalization errors. §.§ Deep Operator Network (DeepONet) and Physics Informed DeepONet (PIDeepONet) NN can be employed as universal approximators of continuous functions, as well as for nonlinear continuous operators <cit.> |G(y)(X)-g(y(X_1), …, y(X_m))_branch·f(X)_trunk|<ε, where f (trunk) and g (branch) can be represented by various classes of neural networks that satisfy the requirements of the classical universal approximation theorem <cit.>. It is assumed that the parameter y is known on sufficiently many grid points m. On this basis, the stacked and unstacked Deep Operator Network (DeepONet) are proposed in <cit.>. The stacked DeepONet differs from the unstacked one only in the definition of the branch networks. In the unstacked DeepONet, these are combined into one net to facilitate training (Fig. <ref>). Let y=[y(X_1), …, y(X_m)]^⊤ be the parameter function at discrete points in space. Moreover, let the output of DeepONet be G_θ(𝐲)(X). Then, the empirical risk functional for DeepONet is given by F^DeepONet=1/P N∑_i=1^N∑_j=1^P(G_θ(y_i)(X_j)-G(y_i)(X_j))^2 . Here, N is the number of realizations of the parameter input y available for training and P is the number of training data known per realization of the input function. The reference solution G(y_i) is determined by FEM-simulations or measurements. A single data point of the training data set consists of a triple of the form (y, X, u(X)). If P>1, the discrete parameter y must be repeated an appropriate number of times in the data set. For N realizations of the parameter and P evaluations per realization, the training dataset has a total of N × P entries. Despite of its simple structure, DeepONet can represent a wide range of mappings (i.e. it is very expressive) and allows to achieve small generalization errors. Furthermore, it can be applied very easily to arbitrary parametrizations. The physics informed variant PIDeepONet <cit.> extends the empirical risk functional by adding a physically motivated term ensuring the compliance with the PDE in the weak form. For this purpose, the risk functional is extended by adding the squared residuals of the (nonlinear) differential operators 𝒩, ℬ F^PIDeepONet =F^DeepONet +1/m N_y∑_i=1^N_y∑_j=1^m𝒩^2[G_θ(𝐲_i)](X_j) +1/N_b N_y∑_i=1^N_y∑_j=1^N_bℬ^2[G_θ(𝐲_i)](X_j) . In return, no FEM reference data is necessary. Eq. (<ref>) is formulated for the case, where the residual can only be evaluated at the m discretization points, where the parameters y are known. 
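The unstacked architecture can be written down in a few lines of PyTorch. The layer widths below mirror the branch [20, 100, 100] and trunk [1, 100, 100] networks used later in this study, but the snippet is only a sketch of the dot-product coupling between the two subnetworks; the activation function and all data are placeholders.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet: G_theta(y)(X) = branch(y) . trunk(X)."""

    def __init__(self, m=20, p=100, width=100):
        super().__init__()
        # branch: discretized parameter y = [y(X_1), ..., y(X_m)] -> R^p
        self.branch = nn.Sequential(nn.Linear(m, width), nn.Tanh(), nn.Linear(width, p))
        # trunk: evaluation point X -> R^p
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, p))

    def forward(self, y, X):
        b = self.branch(y)                              # (batch, p)
        t = self.trunk(X)                               # (batch, p)
        return torch.sum(b * t, dim=-1, keepdim=True)   # (batch, 1)

# one training entry is a triple (y, X, u(X)); for P > 1 evaluation points the
# discretized parameter y is simply repeated within the batch
model = DeepONet()
y = torch.rand(8, 20)               # 8 realizations of the parameter on m = 20 sensors
X = torch.rand(8, 1) * 2.0 - 1.0    # one evaluation point per realization
u_ref = torch.zeros(8, 1)           # reference displacements (placeholder for FEM data)
loss = torch.mean((model(y, X) - u_ref) ** 2)
```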
Alternatively, an extended data set can be generated which contains {y(X_1), …, y(X_m) , X_1, y(X_1), …, X_r, y(X_r)} with r the number of (randomly determined) additional gridpoints. §.§ Fourier-Neural-Operator (FNO) An FNO <cit.> represents the solution operator of a PPDE with the help of a series of Fourier Blocks. A Fourier block includes several operations as shown in Fig. <ref>: i) It applies the Fourier transform ℱ (in form of the Fast Fourier Transform, FFT) on its input v^t (X). ii) It applies a linear transform R_θ^t (parameterized by the NN parameter set θ) on the lower Fourier modes and filters out the higher modes. iii) It applies the inverse Fourier transform ℱ^-1. iv) In parallel, the Fourier block applies another linear transform W_θ^t on the input v^t (X). v) Results of both branches are summed up and forwarded to the nonlinear activation function σ. The input [X, y(X)] ∈ℝ^N_dof + d of the network is lifted up to a higher dimension d_v by a shallow (e.g. single-layer) FCNN P with linear activation function. Another FCNN Q projects the output of the last Fourier Block onto the output space ℝ^N_dof which results in the FNO output u_θ(X). The whole process is visualized in Fig. <ref> and can be described by the iterative architecture v_0(X) =P(y(X), X) v_t+1(X) =σ(W_θ^tv_t(X)+ℱ^-1[R_θ^t·ℱ[v_t]](X)) u_θ(X) =Q(v_K(X)) , where K is the number of sequential Fourier Blocks. §.§.§ Physically Informed Neural Operator (PINO) An extension of the FNO to a physically informed neural operator (PINO) <cit.> has been investigated as well. FNO computes displacements on an equidistant grid, so that a DEM-like extension is straight forward. Therefor, the same potential energy as for the DEM (see Eq. <ref>) was added to the loss function F^PINO=F^FNO+F^DEM = F^FNO+ ∫_B( W(F_θ)-b·u) d V-∫_Γ_NT·u d A . § APPLICATION TO EXAMPLES IN ELASTOSTATICS The present section compares the NN methods previously explained by means of example boundary value problems, including several 1D examples and one 2D example with two load cases. §.§ The 1D tensile bar The first example is based on the setup shown in Fig. <ref>. The bar is clamped at the left edge u(-1)=0 and loaded along its entire length with the force density f(X). A Neumann boundary condition P(1)=T is applied at the right edge. §.§.§ Example A The following energy density is considered W(F)=F^3/2-3/2 F+1/2 with F=1+u^'(X) . From Eq. (<ref>), the first Piola-Kirchhoff stress reads P=∂ W/∂ F=3/2(F^1/2-1) ⇒-∂ P/∂ X=-3/41/√(1+u^') u^''(X)=f(X) . Specifically, the force density f(X)=X is chosen and the load at the free end is set to zero: T=0 -3/41/√(1+u^') u^''(X)=X with u(-1)=0, T=0 ⇒ u^'(1)=0 . The example has the following analytical solution which will be used to validate the results obtained by the Neural FEM methods u(X) =1/135(3 X^5-40 X^3+105 X+68) u^'(X) =1/9(X^4-8 X^2+7) . §.§.§ Example B A linear elastic material is investigated: W=1/2(u^')^2 ⇒ P(X)=u^'(X) ⇒ -u^''(X)=f(X) Example B1 A single load case is analyzed in examples related to PINN and DEM. -E u^''(X)= f(X) = Q · A with u(-1)=0 and E u^'(1)=T. Here, distributed forces take the values Q = 9.395 · 10^4 Nm^-1 and T= 1.015 · 10^8 Nm. Young's modulus corresponds to steel (E = 210 · 10^9 N m^-2) and the cross section surface is A = 1  m^2. Example B2 For the Neural Operator models, the PPDE is normalized and a parameterization of the force density f as well as a parameterization of the Neumann boundary condition are studied. 
Since the boundary only consists of one point, the boundary condition can be described by a scalar π_2 for which a uniform distribution between [0,1] is assumed. The reference solutions are computed using FEniCS. The BVP is described by -∂^2 u/∂ X^2=f(X) with {[ u(-1)=0; u^'(1)=π_2 ]. . §.§ The plate – Example C The selected two-dimensional example deals with a plate made of a Neo-Hookean material with the energy density W(F)=μ/2(I_1-2-lnJ)+λ/2(J-1)^2 . I_1=tr(C) is the first invariant of the right Cauchy-Green deformation tensor C = F^T F and J=det(F) the determinant of the deformation gradient. The corresponding derivatives are ∂J/∂F=JF^-1 and ∂tr(F^⊤F)/∂F=2 F. The symbols λ and μ denote the Lamé constants. For the energy density, Eq. (<ref>), the 1st and 2nd Piola-Kirchhoff stress tensor are given by: P =∂ W/∂F=μF+(λlnJ-μ) F^-⊤ and S =F^-1·P=μI+(λlnJ-μ) C^-1 . The studied example is shown in Fig. <ref>. The plate is clamped at the left edge and the Neumann boundary conditions are set at the right edge. The components of the kinematic fields in Cartesian coordinates are given by [u]=[[ u_x; u_y ]] ⇒ [F]=[[ F_x x F_x y; F_y x F_y y ]]=[[ 1+u_x, x u_x, y; u_y, x 1+u_y, y ]] . From Eq. (<ref>), the first and second Piola-Kirchhoff stress tensor are calculated as follows [P] =[[ P_x x P_x y; P_y x P_y y ]]=μ[[ F_x x F_x y; F_y x F_y y ]]+λlnJ-μ/J[[ F_y y -F_y x; -F_x y F_x x ]] , ´ [S] =[[ S_x x S_x y; S_y x S_y y ]]=[[ μ 0; 0 μ ]]+λlnJ-μ/J^2[[ C_y y -C_x y; -C_y x C_x x ]] . Moreover, an equivalent stress is calculated as in <cit.> and used in contour plots (Section <ref>) S_E=√(0.5 ((S_x x-S_y y)^2+S_x x^2 + S_y y^2) +3 S_x y). Two load cases are investigated for the setup described. Example C1 The first load case deals with the vertical load T=-5 e_y . Example C2 The second load case is uniaxial tension with T=50 e_x. §.§ Error measure With the solution operator of the PPDE G: 𝒴→𝒮 and its NN approximation G_θ, the average relative L_2 error for the N test data sets is calculated for the Neural Operator methods <cit.> as: ϵ_rel = 1/N∑_j=1^N||G_θ(y_j)-G(y_j)||_L_2/||G(y_j)||_L_2 . With the Neural FEM, only one concrete BVP can be analyzed at once. Then, the relative L_2 error is calculated based on the solution for the displacement field ϵ_rel=||u_θ-u||_L_2/||u||_L_2 . In both cases, the determination of the relative L_2-error requires the computation of the L_2-norm which is approximated by the discrete L_2-norm. On an equidistant lattice {X_i^equi }_i=1^N, the discrete L_2-norm is calculated as ||f||_L_2, d^2=Δ V ∑_i=1^N||f(X_i^equi)||_2^2=Δ V ∑_i=1^N∑_j=1^d f_j^2(X_i^equi) , with the volume (in 2D: surface area) of each lattice unit Δ V. §.§ Numerical integration The discrete L_2-norm is based on a simple Riemann sum with error order 𝒪(Δ V). For the approximation of the risk functional, e.g., in connection with the calculation of the potential energy in the DEM, other integration methods have to be considered. Two classical methods are the Monte Carlo (MC) integration and the trapezoidal rule. The trapezoidal rule requires partitioning of the integration domain into polytopes. In the simplest case, these would be hypercubes on an equidistant grid. In <cit.>, an integration method based on the Delaunay triangulation is proposed. The trapezoidal rule still applies, where f̅_i is the average value over the i-th simplex (e.g. triangles). The two polytopes for integration in 2D are shown as examples in Fig. <ref>. 
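Before turning to the integration schemes, the error measure just introduced can be summarized in code. The sketch below evaluates the discrete relative L_2 error of a single predicted field on an equidistant 1D grid; the grid-spacing convention and the placeholder fields are assumptions, and for the operator methods the resulting value is additionally averaged over the N test realizations.

```python
import torch

def discrete_l2_norm(f_vals, dV):
    """Discrete L2 norm on an equidistant lattice: ||f||^2 = dV * sum_i |f(X_i)|^2."""
    return torch.sqrt(dV * torch.sum(f_vals ** 2))

def relative_l2_error(u_pred, u_ref, dV):
    """Relative L2 error of a predicted field against the reference solution."""
    return discrete_l2_norm(u_pred - u_ref, dV) / discrete_l2_norm(u_ref, dV)

N = 1024                                  # grid points on the bar X in [-1, 1]
X = torch.linspace(-1.0, 1.0, N)
dV = 2.0 / (N - 1)                        # one possible choice of the lattice volume
u_ref = 1.0 - X ** 2                      # placeholder reference field
u_pred = u_ref + 1e-3 * torch.randn(N)    # placeholder NN prediction
print(relative_l2_error(u_pred, u_ref, dV).item())
```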
Let V be the volume of the integration domain and f̅_i the average of f over the corners of the i-th polytope with i ∈ [1, N] and N the number of vertices. Then, the integral approximations are generally given by Eq. (<ref>a) for Monte-Carlo and Eq. (<ref>b) for the trapezoidal rule. I_MC(f) =V/N∑_i=1^N f(X_i) (a) I_T(f)=∑_i=1^Nf̅_i Δ V_i (b) . Three methods are investigated to select the grid points: equidistant grid points, pseudo-random numbers and quasi-random numbers (Latin Hypercube Sampling, LHS). Exemplarily, we compare the absolute error of the potential energies in the nonlinear 1D setup (Example A). The trapezoidal rule with 100 000 grid points is assumed as a quasi-exact comparison value. The MC integration with equidistant grid points reduces to a simple Riemann sum. The results for 100 and 1000 grid points, respectively, are summarized in Fig. <ref>. A characteristic distribution of grid points is shown in Fig. <ref>. Due to the larger integration error, uniform pseudo-random sampling is not considered further. In the following, "random" sampling always refers to quasi-random LHS. Fig. <ref> also shows that the trapezoidal rule is consistently more accurate than MC integration. §.§.§ Technical Implementation In this work, we use PyTorch (version: 1.11.0), which contains the optimizer L-BFGS that is employed in some of the investigated methods. The computations have been run on an Intel Core i5-7200U mobile processor. A mobile NVIDIA GeForce GTX 950m is used as graphics card. The parameters of the NN are always randomly initialized, so that the results underly statistical variations. §.§ Neural FEM §.§.§ PINN Example A In the numerical experiments related to Example A, the NN architecture [1,10,1] is always used. Two optimizers (L-BFGS, SGD), different numbers of collocation points (100 and 1000 points) and different computational accuracies (single precision FP32 and double precision FP64) are compared (Fig. <ref>). The SGD is repeated for 10000 epochs and the L-BFGS for 15 epochs, each with the default parameters of the methods. The discrepancy in the required number of epochs is reflected by the run time, which is about 43  s for the SGD compared to about 0.300  s for the L-BFGS. Obviously, a second-order method (such as the L-BFGS) can greatly reduce the number of necessary iterations. In each epoch, the complete data set is used (Full-Batch). Despite the same information being provided to the NN, the L-BFGS method performs better on average than the SGD. Fig. <ref> also shows that the reduction in total error relative to the number of collocation points quickly goes into saturation. The difference between N=100 and N=1000 is only about 6.500 · 10^-6 for the L-BFGS. Moreover, no significant increase of the total error measured in the relative L_2 norm is shown when computing on single precision. This can greatly reduce the computation time on commercially available graphics cards that are optimized for single precision computing. However, this needs to be confirmed in the further research for more complex problems. The best results were obtained with the tangent hyperbolic (Tanh) activation function. Other activation functions, such as the Rectified Linear Unit (ReLU) or the Exponential Linear Unit (ELU) do not converge or converge very poorly against the analytical solution of the problem. The calculated displacements for different activation functions are comparatively shown in Fig. <ref>. ReLU and ELU could not be optimized with L-BFGS. 
Therefore, only the results after optimization with Adam are shown. The second derivative of the approximation with ReLU activation is everywhere constantly zero, (except for the point at the kink). This destroys the information in the residual and a training of the network must necessarily fail. Hence, for the following studies, Tanh was always applied as the activation function. Other activation functions are not considered in the present contribution. However, in the literature, the composite function max(0, x^3) <cit.>, the Swish activation z S(β z) (where S denotes the sigmoid activation function) <cit.> and GELU <cit.> have been successfully employed. The training of PINN with SGD took about 40  s for 10 000 epochs. Example C Amongst the conventional PINN representatives, the DCM is the easiest to implement for 2D problems and thus chosen to apply to Example C (Section <ref>). Within the domain, the balance of linear momentum reads ∇·P=0, which already is a residual form for approximations of P. The Neumann boundary conditions are given by P·N=T, the Dirichlet constraints are incorporated directly by the application of a transformation on the output of the NN (Eq. (<ref>)). The architecture of the network is specified as [2,30,30,2] and on the Neumann boundary part, 900 random collocations points are chosen. 4000 collocation points are used within the body. L-BFGS with learning rate 1.0 and Line Search with Wolfe condition is applied as optimizer. Other than the DEM, the DCM does not converge towards the reference solution for load case C1. A comparison with the DEM shows that the boundary conditions are not appropriately learned by the DCM. <cit.> discuss that the training of PINNs may fail due to numerical inaccuracies if the contributions to the loss value – one portion from the residual and the other portion from the Neumann boundary part – or their gradients w.r.t to the NN parameters are in vastly different orders of magnitude. In the case of Example C, the loss value in the DCM consists of the portion from the residual with the value 0.119 and the portion from the Neumann boundary part with the value 1.121. The gradients of each contribution w.r.t. the NN parameters are relatively uniformly distributed (Fig. <ref>). Moreover, the correct boundary conditions are not learned even if the loss portion of the residual is excluded (manually set to 0). Hence, the failure of the DCM on the example cannot be explained by this kind of numerical inaccuracies. It is possible that the optimizer gets stuck in a local minimum, but this behavior needs a further investigation. §.§.§ cPINN Example A For many applications, the accuracy that can be achieved with a classical PINN is not sufficient. cPINN was developed to improve the accuracy by avoiding the squaring of the residual <cit.>. Instead, it employs Adaptive Competitive Gradient Descent (ACGD) as optimization procedure whose Python implementation is publicly available <cit.>. Furthermore, Tanh in the PINN part and ReLU in the discriminator network are chosen as the activation functions. The architecture of the PINN is chosen with 10 neurons in the hidden layer (architecture: [1,10,1] ) for comparability with the conventional PINN/ DCM. The layer width h of the discriminator, on the other hand, was varied (architecture: [1, 20, 2] vs. [1, 50, 2]). Initial experiments have shown that the output of the discriminator d_ϕ must be separated for points in the domain and on the boundary d_ϕ=(d_ϕ^B, d_ϕ^Γ). 
Therefore, the output layer contains 2 neurons. The option of separating both outputs of the discriminator into independent subnetworks is also tested. However, this did not result in any improvement. Based on these results, only the first variant with the smaller number of NN parameters is considered further. All calculations are performed with double precision. Fig. <ref> shows the results for accuracy and run times, from 100 runs with random NN parameter initializations in form of a box plot. It can be seen that the accuracy is only moderately affected by the number of collocation points. However, the width of the discriminator has a significant impact on the training result. Increasing the number of neurons in the hidden layer from 20 to 50 reduces the error by an order of magnitude. It is not entirely clear why such a large discriminator network is necessary. Moreover, the training is relatively slow, taking about 2 min (up to 3 min for 1000 collocation points) for about 6000 epochs. The improvement of up to 2 orders of magnitude reported by <cit.> cannot be demonstrated here. This may be because the pathologies related to training PINNs <cit.>, which cPINN addresses, do not arise in this simple example. On the other hand, the regularization by the residual leads to a complex energy landscape of the optimization procedure, making optimization more difficult <cit.>. Moreover, the material law sometimes causes the optimization process to abort if the network parameters have been initialized unfavorably. This problem can be solved by reducing the range for random sampling of the initial parameter values. In addition, the different weighting of the summands of the empirical risk can lead to different magnitudes of the gradients of the loss function w.r.t. the NN parameters, which can impair the NN parameter optimization <cit.>. In the present case, the gradients for the residual within the domain ∇_θ F^PINN, B are much larger than the gradients for the constraints ∇_θ F^PINN, Γ. The optimization procedure is therefore driven more strongly toward a solution that reduces the residual while allowing for deviations from the constraints. As a result, the optimization procedure converges toward a plausible solution, but one that does not satisfy the boundary conditions. For the complex architecture [1,50,50,1], the gradients of the residual and boundary portion of the loss function w.r.t. the NN parameters of the first hidden layer and the output layer are shown in Fig. <ref>. However, not much discrepancy is detected in the order of magnitude of the gradient values for both portions of the loss function. §.§.§ DEM With the architecture [1,10,1], DEM training runs on average twice as fast compared to classical PINN. For deeper networks, this fact is amplified, as will be shown in the analysis of Example C. Our own DEM implementation is based on the public source code <cit.> and is extended to allow training with Monte Carlo integration on quasi-random grid points. Tanh is used as the activation function. Example A We enforced the geometric boundary conditions by the transformation u(X)=(1+X) z_θ(X) , where z_θ(X) is the output of the NN. This way, always zero displacement is calculated at the clamped end (X = -1). A comparison of the resulting relative errors in displacements and strains as well as the run times is shown in Fig. <ref>. 
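The complete DEM workflow for Example A condenses into a short script: the output transformation u(X) = (1 + X) z_θ(X), the strain energy W(F) = F^{3/2} - 3/2 F + 1/2 with F = 1 + u', the body force f(X) = X with T = 0, trapezoidal integration on sorted support points, and 15 L-BFGS epochs. The sketch below follows this recipe; the random seed, the safeguard clamp on F, and the grid size are illustrative choices and not part of the reference implementation.

```python
import torch

torch.manual_seed(0)
z = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1))

N = 1000
X = torch.linspace(-1.0, 1.0, N).reshape(-1, 1)   # sorted support points incl. boundaries

def potential_energy():
    Xg = X.clone().requires_grad_(True)
    u = (1.0 + Xg) * z(Xg)                         # transformation enforces u(-1) = 0
    du, = torch.autograd.grad(u.sum(), Xg, create_graph=True)
    F = torch.clamp(1.0 + du, min=1e-6)            # F = 1 + u'; clamp guards against F <= 0
    W = F ** 1.5 - 1.5 * F + 0.5                   # strain energy density of Example A
    integrand = W - X * u                          # internal minus external work, f(X) = X
    return torch.trapz(integrand.squeeze(), X.squeeze())   # T = 0: no boundary term

opt = torch.optim.LBFGS(z.parameters(), lr=1.0)

def closure():
    opt.zero_grad()
    loss = potential_energy()
    loss.backward()
    return loss

for epoch in range(15):
    opt.step(closure)

# comparison with the analytical solution of Example A
u_exact = (3 * X ** 5 - 40 * X ** 3 + 105 * X + 68) / 135.0
u_pred = (1.0 + X) * z(X)
```

The clamp only becomes active for unfavourable initializations that would otherwise drive F below zero and abort the optimization, a failure mode also noted above in connection with the material law.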
Two integration methods (Monte Carlo integration and trapezoidal rule) are compared, each on different sets of randomly selected support points (100 and 1000, respectively). For the trapezoidal rule, the support points are sorted and the boundary points are explicitly considered. The optimizer is the L-BFGS with learning rate 1.000. This proves to be very efficient and approaches the solution after only 15 epochs. The results furthermore illustrate that the use of single floating point accuracy (FP32) leads to only a slight decrease of accuracy, similar as seen with the conventional PINN. However, the run time even increases with FP32, what indicates slower convergence of the optimization procedure. Example B The study of Example B reveals a pathology of the DEM that has not appeared in Example A. Its effect can be seen in Fig. <ref>. This pathology can be attributed to overfitting <cit.>, since the potential energy, unlike the squared residual, has no regularizing effect. In fact, early stopping significantly reduced the influence of overfitting. Alternatively, overfitting could be avoided by increasing the number of grid points. For 1000 grid points, hardly any overfitting occurred without a need for early stopping. Example C The approximation of the energy functional can be done analogously to the 1D example by means of different integration techniques. However, the Monte Carlo integration is the simplest option to implement. The incorporation of the boundary conditions, network architecture and optimizer algorithm as well as the load case are chosen similarly as for the DCM (Section <ref>). 10 000 collocation points within the body bulk are selected. The external energy is evaluated by using 200 random points on the right edge of the plate. The results for load case C1 and error are presented in Figs. <ref> and <ref>, respectively. The relative L_2 error in the displacements is only 0.002 and in the equivalent stress 0.053. Fig. <ref> shows that the error in the equivalent stresses is concentrated at the restraint. The stress peaks at the critical points are not completely resolved by the NN. The integration method does not cause this error, which is confirmed by a second calculation that uses trapezoidal rule. The relative L_2 errors in the case are 0.003 for displacements and 0.039 for equivalent stresses. Again, an equidistant grid with 10 000 collocation points is used. No significant improvement is obtained by the more accurate integration procedure, so the influence must be considered small. For load case C2, the same procedure with MC integration is carried out. The results and errors are presented in Figs. <ref> and <ref>, respectively. The relative L_2 errors are 0.005 and 0.019 in this case. In conclusion, the NN is able to approximate the character of the solution of the BVP. However, relatively large errors are found at the restraint. According to <cit.>, the same problems arise for a PINN trained with the squared residual. The resolution of fine features of the stress and displacement fields seem still to be a challenge for future work. §.§.§ Transfer learning (TF) One possibility to enhance the performance of Neural FEM is by means of transfer learning e.g. in case of a varying Neumann boundary condition. This is illustrated on Example B2, where π_2 is changed by only a small amount in each iteration. Then the NN trained on the previous π_2 value is already a good approximation for its subsequent value. 
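In code, this warm start amounts to copying the parameter state of the previously trained network before the optimization for the next π_2 value is started, as sketched below. The training routine is only a placeholder and the epoch numbers are assumptions chosen for illustration.

```python
import copy
import torch

def train(net, pi2, epochs):
    """Placeholder for one Neural-FEM optimization run at fixed pi_2."""
    # ... minimize the residual or energy loss of the BVP for this pi_2 ...
    return net

template = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1))
previous_state = None
trained = {}
for i, pi2 in enumerate(torch.linspace(0.0, 1.0, 101).tolist()):
    net = copy.deepcopy(template)
    if previous_state is not None:
        net.load_state_dict(previous_state)       # transfer learning: copy trained weights
    net = train(net, pi2, epochs=40 if i == 0 else 10)  # fewer epochs after a warm start
    previous_state = net.state_dict()
    trained[pi2] = net
```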
Hence, the NN parameter values can be copied to the new NN to reduce the number of required learning epochs. Applied to Example B2 (linear elastic material), the average run time could be reduced by a factor of four. Similar time savings have been documented in <cit.> in the study of plastic deformations. The relative L_2 errors are shown in Fig. <ref>. The average L_2 error is 3.145 · 10^-5, which is about an order of magnitude smaller than the error with DeepONet. The training duration is reduced from 23  s to about 5  s. §.§.§ Initialization of a conventional FE solver As indicated in the literature and the results above, relative L_2 errors of approx. 10^-4 are usually achieved. The computed solution could then be submitted as initialization to a traditional FEM solver in order to improve the accuracy. Applied to Example A, a FEniCS calculation, that is conventionally initialized with all displacements as zero, runs for four iterations. With the solution of the DEM with trapezoidal rule and 1000 collocation points as initialization values for the FEM solver, the calculation is accelerated by a factor of two, only half of the iterations until convergence are needed. Thus, Neural FEM results can be employed as a potential way to speed up an FEM simulation in settings where the Neural FEM is not yet able to completely replace the FE simulation. §.§ Neural Operators The operator methods are examined on example Example B2 (Section <ref>) – a tensile bar with clamping restraint at the left side and a free end at the right side. In this simple test case, analytical solutions are available to calculate the error for the parameterization of the Neumann boundary condition in case of fixed force density f(X) ≡ 1. Combinations of both varying the force field and the boundary conditions have not been carried out in the present work. Training and test data for the varying force field are generated by means of Gaussian Processes with squared exponential covariance (correlation length l=0.100). The free end of the bar at the right side yields the parameter π_2 = 0. For training, 1000 data sets and for the tests 100 data sets have been generated with FEniCS on an equidistant grid for X ∈ [-1,1] with 1024 grid points and quadratic shape functions. The relative L_2 errors of the FEM simulation are several orders of magnitude smaller than expected from the NN methods (displacements approx. 10^-10; strains approx. 10^-7) and should not influence the survey. One of the resulting data sets is illustrated in Fig. <ref>. §.§.§ DeepONet and PIDeepONet Numerical setup For the DeepONet, the data sets need to be preprocessed since the Neural Operator methods work with P random collocation points that change between the evaluations of the loss function, whereas the reference solutions are produced be FEM on a fix mesh with m equidistant points. Hence, the realizations of the force fields are projected onto the FE mesh. Then, the FE results are interpolated and evaluated at the random collocation points. The parametrization of Neumann boundary condition has only been investigated with DeepONet, where π_2∼ U[0,1] is assumed. The source codes for DeepONet and PIDeepONet have both been published on Github <cit.>. The architectures for the subnets were specified as [20,100,100] for the branch net and [1,100,100] for the trunk net. The branch is set up with layer width m=20. The chosen activation function are ReLU in DeepONet and Tanh in PIDeepONet. A similar architecture has also been suggested in <cit.>. 
The L-BFGS optimizer with linesearch (strong Wolfe condition) and learning rate 1.0 is applied as suggested in <cit.>, for 120 epochs. However, L-BFGS is very memory consuming, so historysize =50 (default: 100) is set. 1000 load cases are used for training and 100 for testing. In order to reduce the training effort, from the 1024 grid points only 8 are randomly chosen per load case. Hence, the training data set size reduces to 8000 data points. The error reduction of considering more grid points per load case quickly saturates so that this reduction is admissible. Approximately 50 random points can be estimated as the saturation limit for 1000 training data sets <cit.>. Training is conducted for both methods, Full-Batch and Mini-Batch (batch size: 1000) . Since PIDeepONet includes the whole information about the PPDE (similar to the PINN models) in the loss function, no reference solutions by means of FEM are necessary. In exchange, the loss functions needs to be constructed anew for each PDE. For the DeepONet, additionally an alternative loss function is investigated. It employs the relative L_2 error instead of the Mean Square Error – MSE (Eq. <ref>). Results Representative results for the displacement u and strains u' over the bar length obtained with DeepONet and PIDeepONet are shown in Fig. <ref>. Both methods match the displacements relatively well, but DeepONet has visible deviations in the strains. In particular, the non-smooth curve of the strain, which is the spatial derivative of the displacement, can be attributed to the ReLU activation function, which has a discontinuous derivative. The errors for the displacements and strains are shown in Table <ref>. The usage of the residual in the empirical risk in PIDeepONet improves the accuracy in the strains about one order of magnitude compared to DeepONet. The effect of floating point accuracy on ϵ_rel is small, similar as seen for Neural FEM. Overall, training with Full-Batch on FP32 performs best. The runtimes of models, each trained with Full-Batch and Mini-Batch (batch size 1000), are compared in Table <ref>. The run times for the DeepONet are significantly lower than for the FNO with comparable accuracies. Using the relative L_2 error instead of the MSE reduces the convergence rate, requiring more iterations and increasing the run time. The difference between the mean and median is reduced, but no significant effect on the error is found. Fig. <ref> shows the loss histories from the optimization with the DeepONet and PIDeepONet, respectively. Overall, the relative L_2 error on the test data set is smaller for Full Batch training. The difference in resulting accuracy between the two methods can be attributed to the training error alone. PIDeepONet converges significantly slower and yields worse accuracy than DeepONet. With Full-Batches, PIDeepONet even converges to a local instead of the global minimum. The poor convergence of the PIDeepONet demonstrates the significantly more complex optimization task, where the DeepONet makes use of the explicitly obtained FEM results as training data sets. Potential Energy in the loss function The results of Neural FEM (Section <ref>) and <cit.> suggest, that replacing the squared residual in the loss function of PIDeepONet by the potential energy can make the optimization problem easier to solve. Hence, such a method should be more robust and efficient and make training feasible even where training of PINNs fails. 
However, with 100 random realizations of the force field and 15 collocation points per realization as suggested in <cit.>, this method does not converge for Example B2. Influence of intialization The default initialization by PyTorch sets the weights and bias by randomly sampling from a uniform distribution. For W∈ℝ^N_k× N_k-1 and b∈ℝ^N_k, it holds: W_i j, b_i∼ U[-√(k), √(k)] with k=1/N_k-1, where N_k denotes the width of the k-th layer. <cit.> and <cit.> suggest that the convergence of the NN can be accelerated by the Glorot initialization. Let N[μ, σ^2] be a normal distribution with mean μ = 0 and variance σ^2. This yields b_i=0 and W_i j∼ N[0, σ^2] with σ=√(2/N_k+N_k-1) for the parameters. The errors of the models on single precision with Full-Batch optimization and L-BFGS are shown in Table <ref>. The error of DeepONet for the displacements becomes only slightly smaller. Also, no large difference in run times was observed. The Glorot initialization was developed mainly for deep learning applications and does not have much impact on the shallow networks (with one hidden layer) used here. Neumann boundary parameterization The (PI-)DeepONet and the FNO have been specifically designed for approximating mappings between function spaces. One advantage of the DeepONet over the FNO is that it is very easy to apply to arbitrary parameterizations. For example, the PIDeepONet can be used to parameterize the Neumann boundary. Let again be given the dimensionless problem -u^''(X)=1 with u(-1)=0, u^'(1)=π_2 . The analytical solution is given by u(X)=3/2+X-1/2 X^2+π_2(X+1) . In the following, only the scalar variable π_2 needs to be varied. The solution operator G: I →𝒮 is sought, where I=[0,1] is fixed and 𝒮 denotes the space of admissible deformations. For each of the subnetworks, a hidden layer with 50 neurons is used. Their architectures are thus given by [1,50,50]. Tanh is used as the activation function according to the experience in neural FEM. The NN is trained over 40 epochs with L-BFGS on 10 000 training data set entries, and 1000 validation data set entries. The data set is built by selecting a single random collocation point for each of the 100 realizations of π_2 (P=1) and 1000 grid points each on the interval [-1,1]. No early stopping is used. The relative L_2 error over the whole interval I is shown in Fig. <ref>. The training of the mesh took approximately 118  s. The mean relative L_2 error for the displacements is 1.695 · 10^-4 and 2.463 · 10^-4 for the strains. For the calculation of the complete test data set, the PIDeepONet took 0.033  s. This highlights the difference between the short run time of the inference and the large computational effort for the training. With L-BFGS and 1000 collocation points (to minimize the risk of overfitting) the training of the DEM on 100 realizations took about 23 s. The higher training effort of the operator model is profitable only for about 500 realizations of π_2 and more. §.§.§ FNO and PINO The FNO architecture proposed in <cit.> is applied for Example B2. Here, the hyperparameter representing the hidden layer width d_v = 64 (Section <ref>) results in 549 569 NN parameters. Secondly, a smaller architecture with d_v = 12 is set up which results in 20 885 NN parameters. This is comparable to the DeepONet architecture with 22 500 parameters. Padding was considered as suggested in <cit.>, since the considered example has non-periodic boundary conditions in the input functions. 
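For reference, a single 1D Fourier block of the form described above (spectral branch, pointwise linear branch, nonlinear activation) can be sketched as follows. The number of retained modes, the initialization scale, and the use of complex-valued weight tensors are illustrative choices along the lines of the public reference implementation rather than an exact reproduction of it.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Fourier branch: FFT, linear transform R on the lowest `modes`, inverse FFT."""

    def __init__(self, d_v, modes):
        super().__init__()
        self.modes = modes                       # must not exceed n_grid // 2 + 1
        scale = 1.0 / (d_v * d_v)
        self.R = nn.Parameter(scale * torch.randn(d_v, d_v, modes, dtype=torch.cfloat))

    def forward(self, v):                        # v: (batch, d_v, n_grid)
        v_hat = torch.fft.rfft(v)                # (batch, d_v, n_grid // 2 + 1)
        out_hat = torch.zeros_like(v_hat)
        out_hat[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", v_hat[:, :, :self.modes], self.R)
        return torch.fft.irfft(out_hat, n=v.size(-1))

class FourierBlock1d(nn.Module):
    """One Fourier block: sigma(W v + F^-1[R F[v]]) with a pointwise linear W."""

    def __init__(self, d_v=12, modes=16):
        super().__init__()
        self.spectral = SpectralConv1d(d_v, modes)
        self.W = nn.Conv1d(d_v, d_v, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, v):
        return self.act(self.W(v) + self.spectral(v))

v = torch.randn(4, 12, 1024)          # (batch, lifted width d_v, grid points)
print(FourierBlock1d()(v).shape)      # torch.Size([4, 12, 1024])
```

The lifting network P and the projection Q of the full FNO can be realized analogously as pointwise (kernel-size one) convolutions before and after a stack of such blocks.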
Gaussian Error Linear Unit (GELU) is used as the activation function, GELU(x) = x Φ(x)=x/2[1+erf(x/√(2))], where Φ(x) denotes the standard normal distribution. Adam with decreasing learning rate (initial value 0.001, reduction factor 0.5 every 50 epochs) and weight decay λ = 10^-4 is used as optimizer. The other optimizer parameters are kept as the PyTorch default values. The relative L_2 error is used for the loss function. Similar to the DeepONet, a training data set with 1000 entries and a test data set with 100 entries is used, 500 training epochs are carried out. In the present work, the spatial derivatives in the loss value for the elastic strain energy and body forces are approximated by a second order central difference method instead of employing the autograd feature. This can significantly reduce the computational effort, since the number of parameters in the NN is usually much greater than the number of grid points. Exact derivation methods are discussed in more detail in <cit.>. The results for an exemplary test load case are shown in Figs. <ref> and <ref>. The absolute errors in the displacements computed by FNO and PINO are relatively similar, but the strains at the endpoints of the bar as computed by the pure FNO show significant errors, rendering the solution practically unusable. The mean errors in the displacements and strains on the whole test data set are shown in Table <ref>. The inclusion of potential energy does not yield a significant effect on the accuracy for the displacements. The unphysical oscillations of the calculated strains at the edges of the computational domain are eliminated (Fig. <ref>). The only drawback of PINO is the discretization dependence of the numerical derivative. After training, the error is no longer constant over different discretization levels which is analyzed in Section <ref>. The errors are concentrated at the edge, which indicates that the choice of boundary conditions for the FNO might be unfavorable, although this approach should be able to map non-periodic boundary conditions when padding data arrays with zeros. For comparison, a new data set with periodic force fields and a bar clamped on both sides has been modeled. For this purpose, the data set for the 1D-Burgers problem is assumed <cit.>. The periodic initial conditions are interpreted as force fields. The results that yield the largest relative L_2 error among the test data set are shown in Fig. <ref> for the displacements and strains. The reason for the poor agreement at the boundaries cannot be attributed solely to the non-periodic boundary conditions of the input functions, although for the periodic boundary conditions, the median of the ϵ_rel error is lower for the periodic boundary conditions (2.396 · 10^-2) than for the non-periodic ones (9.278 · 10^-2). Large deviations at the boundaries of the periodic domain are still present (Fig. <ref>). These errors can be reduced by regularization (for example by means of an energy functional), as observed with the Neural FEM. The average run times per epoch for computations on CPU and GPU are shown in Table <ref>. The midrange mobile graphics in use accelerated the calculation by factor 4 for the 32 Bit floating point accuracy. For a comparable acceleration for FP64, instead of consumer graphics cards specific High Performance Computing (HPC) accelerator cards are necessary. 
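The finite-difference evaluation of the strains mentioned above can be sketched as follows. The one-sided second-order stencils at the interval ends and the placeholder displacement field (the analytical solution of Example B2 for π_2 = 0) are illustrative choices.

```python
import torch

def central_diff(u, dx):
    """Second-order finite differences of a 1D field on an equidistant grid."""
    du = torch.zeros_like(u)
    du[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)                 # central in the interior
    du[0] = (-3.0 * u[0] + 4.0 * u[1] - u[2]) / (2.0 * dx)   # one-sided at the left end
    du[-1] = (3.0 * u[-1] - 4.0 * u[-2] + u[-3]) / (2.0 * dx)
    return du

N = 1024
X = torch.linspace(-1.0, 1.0, N)
dx = float(X[1] - X[0])
u = 1.5 + X - 0.5 * X ** 2        # placeholder displacement field
strain = central_diff(u, dx)      # approximates u'(X) = 1 - X
```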
We skip a calculation of the ϵ_rel errors with double precision because no large influence on the total error can be expected, according to the results of the Neural FEM. §.§.§ Zero-shot super resolution A main feature of neural operators is the consistency of the numerical error over different discretization levels. This is manifested by a nearly constant progression of the error across the discretization level, as shown in Fig. <ref>. Therefore, zero-shot super resolution becomes possible, which means that the NN can be evaluated on a finer grid than the one that used for training. In the present work, the NNs are trained on a dataset based on FEM solutions with a grid with 1024 nodes, but can also be evaluated on finer discretization levels with the same error. The only exception is the PINO which uses a finite difference method in the optimization process in the current contribution. The reference solution for the finer discretization task is obtained from an FEM analysis with 8192 node points. This data set is cubically interpolated to all other discretizations for comparison with the results of the NNs. § SUMMARY AND OUTLOOK In this work, different NN methods have been analyzed and applied to examples from elastostatics. Specifically, a 1D tensile bar with a hyperelastic material (Example A, Section <ref>) and with a linear elastic material (Example B, Section <ref>) has been investigated. Moreover, a plate made of a Neo-Hookean material has been analyzed for two load cases, vertical loading and uniaxial tension (Example C, Section <ref>). Physics Informed Neural Networks (PINN) In the basic form of classical PINN <cit.>, the empirical risk is built from the squared residuals of the differential operators. In various works <cit.>, it has been shown that such a PINN is difficult or impossible to train even for simple examples, so alternative forms of regularization have been developed. The present work particularly studies the DEM, based on the principle of minimal potential energy, and the cPINN, based on the game-theory. The results for Example A using these three approaches (PINN, DEM, cPINN) are compared in Fig. <ref>. The average accuracies are relatively similar, but the comparatively long run time of the cPINN is disadvantageous. According to <cit.>, the relative errors can be reduced by up to 2 orders of magnitude by the cPINN, which could not be demonstrated with Example A in this work. The training of cPINN and DEM should converge in more cases than pure PINN, i.e. it is more robust, as demonstrated with Example C. Overall, the PINN performes best in Example A. However, the latter is not suitable to show the training pathologies of the PINN. Those pathologies were demonstrated only on Example C, where the training of the PINN fails, but the DEM can be applied successfully. The error measured in the L_2 norm is relatively small. But the absolute errors of the equivalent stresses for the 2D plate in the vertical load case show deviations of up to 77  Nm^-2 at the restraint, for a maximum stress of about 142  Nm^-2. Our results with Example C underline the suggestion from <cit.> that PINN and DEM as well as PIDeepONet in their present form are not able to resolve stress concentrations. Further work is necessary to find and analyze alternative approaches with improved accuracy and applicability for classical tasks in solid mechanics like the investigation of critical areas in strength analysis. 
DEM In DEM, the convergence order of the integration method does not seem to be significant after reaching a certain limit of accuracy. However, DEM holds the risk of overfitting, which must be accounted for by early stopping or a sufficient number of collocation points (support points). This topic has not been addressed in the literature up to now. An advantage from the use of the potential energy is the reduction of the order of differentiation, which also decreases the numerical effort. The run time is about a factor of six lower compared to the PINN. All studies show an intense dependency of the result from the initialization of the NN parameters. In extreme cases, the optimizations converges towards different functions. The relative L_2 error of the DEM with trapezoidal rule and 1000 collocation points results in the whole range from 4.898 · 10^-6 up to 4.277 ·^-4 for displacements and 3.774 · 10^-5 up to 3.701 · 10^-3 for strains. This indicates that the expensive training has to be conducted several times, until an acceptable ML model is found. Further improvement possibilities are Glorot initialization of the NN parameters <cit.> and pretraining <cit.>. Neural Operator Methods The Neural Operator methods have been applied to Example B. All models are calculated with single precision floating point since the investigation of DeepONet, PIDeepONet and the Neural FEM did not yield significant effects on the accuracy of the results. Furthermore, the DeepONets have been optimized with Full-Batch training. The analysis shows that the DeepONet is significantly faster than the FNO, even if calculated on a GPU. Both FNO and DeepONet can learn the solution operator of the parametric PDE, but the achieved accuracies are not sufficient in many cases. Outlook According to the results presented, the accuracy of NNs has to be improved for a reliable engineering application in elastostatics. One promising approach is a network architecture based on the Neural Attention mechanism, that is claimed to improve the accuracy of PINNs about up to 2 orders of magnitude <cit.>. A similar suggestion is made with regard to the Physics-Augmented Learning <cit.>. The sequential learning and the curriculum learning are two further approaches that might improve the learning ability of PINNs <cit.>. In future works, second order methods should be investigated for the optimization of NN parameters during learning <cit.>. The example of L-BFGS shows, that the optimization with higher order methods can be immensely accelerated. They also take profit from larger batches, which makes them more efficient in terms of data parallelism. § ABBREVIATIONS * ACGD Adaptive Competitive Gradient Descent * ANN Artificial Neural Network * BC Boundary Conditions * CGD Competitive Gradient Descent * cPINN competitive Physics Informed Neural Network * DEM Direct Energy Method * FCNN Fully Connected Neural Network * FEM Finite Element Method * FFT Fast Fourier Transformation * FNO Fourier Neural Operator * MC Monte Carlo * MSE Mean Square Error * NN Neural Network * PDE Partial Differential Equation * PINN Physics Informed Neural Network * PINO Physics Informed Neural Operator * PPDE Parametric Partial Differential Equation * SR Squared Residual * TF Transfer Learning The authors cordially thank Mr. Emre Sahin for his contribution to the present work.
http://arxiv.org/abs/2307.01301v2
20230703191045
Reliable AI: Does the Next Generation Require Quantum Computing?
[ "Aras Bacho", "Holger Boche", "Gitta Kutyniok" ]
cs.AI
[ "cs.AI", "quant-ph", "15A29, 35J05, 46N10, 68Q04, 68Q12, 68Q17, 68Q25" ]
http://arxiv.org/abs/2307.00304v1
20230701112535
Temperature-independent almost perfect photon entanglement from quantum dots via the SUPER scheme
[ "Thomas K. Bracht", "Moritz Cygorek", "Tim Seidelmann", "Vollrath Martin Axt", "Doris E. Reiter" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall" ]
t.bracht@wwu.de Entangled photon pairs are essential for quantum communication technology. They can be generated on-demand by semiconductor quantum dots, but several mechanisms are known to reduce the degree of entanglement. While some obstacles like the finite fine-structure splitting can be overcome by now, the excitation scheme itself can impair the entanglement fidelity. Here, we demonstrate that the swing-up of quantum emitter population (SUPER) scheme applied to a quantum dot in a cavity yields almost perfectly entangled photons. The entanglement degree remains robust against phonon influences even at elevated temperatures, due to the decoupling of the excitation and emission process. With this achievement, quantum dots are ready to be used as entangled photon pair sources in applications requiring high degrees of entanglement up to temperatures of about 80 K. Temperature-independent almost perfect photon entanglement from quantum dots via the SUPER scheme Doris E. Reiter August 1, 2023 ================================================================================================= § INTRODUCTION With their ability to generate entangled photons on-demand <cit.>, quantum dots offer exciting possibilities for advancing the field of quantum communication <cit.>. To harness their usefulness for quantum applications, considerable efforts have been dedicated to achieving perfect photon entanglement. The generation process in quantum dots relies on the biexciton-exciton cascade. A major obstacle to obtaining perfect entanglement is the fine-structure splitting (FSS) between the quantum dot's single exciton states <cit.>. An impaired fidelity can be quantified by the concurrence, which becomes unity only in the ideal case. The issue of the FSS has been successfully addressed by applying external fields <cit.>, by advanced quantum dot growth <cit.>, or via strain tuning <cit.>. These methods have enabled the generation of entangled photons with a remarkably high concurrence (about 97% at 5 K) <cit.>. To achieve perfect entanglement, the preparation process of the biexciton is likewise of paramount importance. The two-photon excitation (TPE) ensures the ultrafast, on-demand preparation of the biexciton. However, during the action of the TPE pulse, an optical Stark shift is induced on the exciton levels, which acts as an effective FSS. Accordingly, TPE sets a fundamental limit to the achievable concurrence <cit.>. This calls for a new scheme to excite the biexciton in an ultrafast way, yet without affecting the degree of entanglement. The recently proposed swing-up of quantum emitter population (SUPER) scheme <cit.> is a candidate to address the excitation issue. In the SUPER scheme, off-resonant excitation with two pulses is employed to address the desired state <cit.>. The off-resonant excitation induces an optical Stark shift of the energies of the target states during the excitation. In Ref. <cit.> it was shown that the combination of SUPER with a photonic cavity, as available in different geometries <cit.>, leads to improved photon properties, because emission into the cavity is suppressed during the pulse. The question remains open whether, by using SUPER, the limitation on the degree of entanglement imposed by TPE can be overcome and perfectly entangled photons can be created. Another obstacle to overcome for solid-state quantum emitters is the interaction with vibrations of the crystal lattice, in particular longitudinal acoustic (LA) phonons <cit.>.
For entangled photons, if the biexciton is initially prepared and there is no FSS, the concurrence is unaffected by LA phonons even at elevated temperatures <cit.>. As soon as this situation is broken, phonons degrade the entanglement, in particular at elevated temperatures <cit.>. In this paper, we show that entangled photons after excitation of a quantum dot with the SUPER scheme can reach 99.8% concurrence even under the influence of phonons. More remarkably, a concurrence of over 99% is maintained at increasing temperatures, up to the liquid-nitrogen temperature of 77 K. Using SUPER for entangled photon generation is therefore highly promising, even at elevated temperatures, and can for example be employed for satellite-based quantum communication <cit.>. § BACKGROUND AND MODEL Here, we give a brief summary of our model and the simulations. Details of the model and its Hamiltonian alongside the parameters used in the calculations can be found in the appendix (or SI). A sketch of the system is shown in Fig. <ref>. Our model consists of the quantum dot, modeled as a four-level system, placed inside a photonic cavity. The quantum dot is excited using a diagonally polarized external laser field, treated semi-classically. To maximize the concurrence, our calculations are performed for a quantum dot with zero FSS. The biexciton energy is reduced from twice the single-exciton energy by the biexciton binding energy (BBE), for which we take Δ_B=1meV unless stated otherwise. The cavity is set resonant to the two-photon energy, enabling two-photon emission processes <cit.>. We assume a cavity coupling strength which for the SUPER scheme yields high concurrence values, as discussed in the appendix. The coupling to LA phonons via the deformation potential, identified as the main hindering mechanism for state preparation <cit.>, is included via the standard Hamiltonian <cit.>. We further account for radiative decay that does not feed into the cavity via the rate γ and cavity losses via the rate κ. To calculate the quantum dot dynamics as well as the dynamics of the cavity photons, we use a process tensor matrix product operator (PT-MPO) method, with details outlined in Ref. <cit.>. Within PT-MPO methods <cit.> and path integral approaches <cit.> the phonon environment can be included in a numerically complete fashion. Using the PT-MPO method, we can calculate photon properties beyond the limitations inherent to the quantum regression theorem <cit.>. We will compare our results to quantum dots without a cavity, where the concurrence is calculated via the quantum dot polarizations <cit.>. Calculations without phonons are performed in QuTiP <cit.>. From the corresponding dynamics we calculate the correlation functions and the concurrence as detailed in the appendix (or SI). § CONCURRENCE OPTIMIZATION We start with the quantum dot in a cavity without phonons. All exciting laser pulses are assumed to be Gaussian with a pulse duration of σ=2.7ps (FWHM of intensity: 4.5ps). For TPE without a cavity, this pulse duration results in a concurrence of 95.1%, in agreement with previous calculations <cit.>. Interestingly, for TPE, the cavity does not enhance the concurrence, but only gives a value of 69.4%. We attribute this to the cavity-enhanced photon emission during the pulse, leading to stronger impacts of which-path information and re-excitation. In SUPER, two pulses with different detunings Δ_1,2 with respect to the exciton energy and pulse areas α_1,2 excite the system.
We fix the detuning of the less detuned pulse to Δ_1=-5meV, scan its pulse area, and numerically search for the parameters of the second pulse yielding the highest biexciton occupation. While this does not automatically optimize the concurrence, it ensures that the resulting parameters lead to a high photon yield (cf. appendix). We further consider several BBEs Δ_B, as previous studies revealed the influence of this property <cit.>. The results are shown in Fig. <ref>. We find that for Δ_B < 2meV, the concurrence reaches values above 99%. A maximum value of 99.9% is achieved for every considered α_1 when Δ_B=1meV. Out of these, we choose the parameters that achieve the highest biexciton preparation fidelity, which is at α_1=32π and, for the higher detuned pulse, Δ_2=12.96 meV and α_2=12.8 π (cf. also Tab. <ref> in the appendix). Without a cavity, these parameters give a concurrence of only 93.1% due to the which-path information induced during the pulse <cit.>. The close-to-unity concurrence for SUPER can be traced back to the decoupling of the emission during the pulse due to the Stark shifts of the biexciton-ground state transition. Since the cavity is decoupled during the excitation process, the emission sets in only after the preparation is completed. For the emitted photons, the situation comes close to an initial value problem, where the biexciton is assumed to be initially populated, disregarding the excitation process. This boosts the concurrence because in the initial value problem the situation is symmetric, without any which-path information. In fact, for a true initial value problem without FSS, the concurrence is known to be exactly one <cit.>. This decoupling is visualized in Fig. <ref>, where the dynamics of the quantum dot states and the cavity photon number are shown. For SUPER, shown in Fig. <ref>(a), the quantum dot population exhibits the typical swing-up behavior, initially transitioning to the exciton states X/Y before progressing to the biexciton state. For TPE, shown in Fig. <ref>(b), a monotonic rise of the biexciton occupation is found, while there is also a transient occupation of the exciton states. Due to the diagonal polarization and the vanishing FSS, the X and Y excitons (and also cavity photons) are always addressed equally. The population of the biexciton decays exponentially, accompanied by an additional oscillation of the occupation resulting from the QD-cavity coupling. A crucial difference between SUPER and TPE lies in the number of cavity photons N_X/Y during the preparation process, as displayed in Fig. <ref>(c). For SUPER, due to the decoupling there is minimal photon emission into the cavity during the pulses up to approximately t=15ps. After that, we see that the photon number rises, as the cavity and the ground-to-biexciton two-photon transition are resonant again. On the other hand, for TPE, where the cavity is resonant to the relevant transition at all times, the photon numbers N_X/Y already rise strongly during the pulse, resulting in the reduced concurrence <cit.>. This decoupling effect makes it possible to achieve nearly perfect entanglement of photons generated from a quantum dot. § TEMPERATURE DEPENDENCE OF THE CONCURRENCE To use quantum dots for practical applications, it is desirable that they work at elevated temperatures. However, with increasing temperature, phonon effects become more pronounced for optical excitation schemes, unless one works in the reappearance regime <cit.>.
Phonon coupling can reduce the preparation fidelity of the targeted state drastically <cit.>. As shown in Fig. <ref>(b), the final biexciton occupation decreases as a function of temperature. It is evident that the excitation using the SUPER scheme is less prone to disturbance by phonon interaction. For SUPER, the biexciton population drops approximately linearly with rising temperature, while for TPE, the population rapidly drops below 50%. Let us now turn to the concurrence. Interestingly, it has been shown that in the case of zero FSS and an initially prepared biexciton, due to the highly symmetric situation, LA phonons with pure-dephasing type coupling do not affect the concurrence <cit.>. But as soon as the excitation induces an asymmetry in the exciton energies via the Stark shift, phonons degrade the entanglement even further <cit.>. Considering the case without a cavity, we clearly see in Fig. <ref>(a) that the concurrence drops with increasing temperature. As for the populations, the concurrence drops more rapidly for TPE in comparison to SUPER. Including a cavity, TPE exhibits a decline in the concurrence as the temperature increases, in agreement with findings from previous studies <cit.> that identified phonons as being a substantial source of decoherence, leading to a reduction of the concurrence. It was also found in Ref. <cit.> that phonons cause a renormalization of the dot-cavity coupling that can improve the concurrence. We attribute the slight increase of the concurrence at around 50K to these effects. The case is quite different when applying the SUPER scheme to the quantum dot in a cavity. Here, the state preparation and the photon emission are decoupled; hence, the entanglement properties should be similar to the case of an initially prepared biexciton <cit.>. Remarkably, the concurrence remains at C>99.7% independent of the temperature. This outcome is significant, as it suggests that using this scheme, near-perfect entanglement can be achieved even at elevated temperatures. Our model, which focuses on the coupling to longitudinal acoustic phonons, should be valid to describe the physics up to temperatures of about 80 K. For higher temperatures, multiple longitudinal optical phonon couplings of discrete dot states to the continuum of wetting layer states have been shown to limit the concurrence even for initial value problems and vanishing FSS <cit.>. We also consider strongly confined quantum dots, because for weakly confined quantum dots hot exciton states can decrease the concurrence <cit.>. § CONCLUSIONS We have shown that exciting a semiconductor quantum dot in a cavity via the SUPER scheme overcomes a significant hurdle in generating perfectly entangled photons, namely the limit on the concurrence induced by the duration of the TPE excitation scheme. Our scheme delivers unprecedentedly high values of the concurrence. Strikingly, the almost perfect entanglement can be achieved over a broad parameter range and up to elevated temperatures, which paves the way for new types of applications. § ACKNOWLEDGEMENTS We acknowledge financial support from the German Research Foundation DFG through project 428026575 (AEQuDot). § SYSTEM HAMILTONIAN The quantum dot is modeled as a four-level system, which is placed inside a cavity with two cavity modes, one for each of the two orthogonal linear polarizations, X and Y.
The Hamiltonian of this system reads H_0 = ħω_x(|X⟩⟨X| + |Y⟩⟨Y|) + (2ħω_x - Δ_B)|B⟩⟨B| + (ħω_x - Δ_B/2) (a^†_X a_X + a^†_Y a_Y), where ħω_x is the energy of the exciton states, Δ_B the biexciton binding energy (BBE), and the operators a_X/Y (a^†_X/Y) destroy (create) a photon in the respective cavity mode. The quantum dot is excited via an external laser and coupled to the cavity, so the electron-light interaction is given by H_el = -ħ/2(Ω_X(t)σ^†_X + Ω_Y(t)σ^†_Y) + ħ g (a_Xσ^†_X + a_Yσ^†_Y) + h.c.. Here, g governs the strength of the coupling to the cavity, and the terms Ω_X/Y(t) describe the field of the laser used to excite the quantum dot with the respective linear polarization operators σ_S = |G⟩⟨S| + |S⟩⟨B|, where S∈{X,Y}. We assume a Gaussian-shaped pulse given by Ω(t) = α/√(2πσ^2) e^-t^2/(2σ^2) e^-iω_L t, where α is the pulse area and σ is a measure of the pulse duration, which is related to the full width at half maximum (FWHM) of the intensity by τ_FWHM = 2√(ln(2)) σ. The frequency of the laser pulse, denoted by ω_L, is connected to the detuning from the quantum dot ground-state to exciton transition by Δ = ħ(ω_L - ω_x). The state preparation in quantum dots is disturbed by the surrounding environment. At low temperatures, the influence of longitudinal acoustic phonons acts as a limiting effect; this interaction with the lattice vibrations is modeled using the pure-dephasing-type Hamiltonian H_ph= ħ∑_𝐪ω_𝐪 b_𝐪^† b_𝐪 +ħ (|X⟩⟨X| + |Y⟩⟨Y| + 2|B⟩⟨B| ) ∑_𝐪(g_𝐪 b_𝐪 + g_𝐪^* b_𝐪^†) . The operator b_𝐪^† (b_𝐪) creates (destroys) a phonon with wave vector 𝐪. The phonons are coupled to the exciton states with the coupling element g_𝐪 and follow the linear dispersion relation ω_𝐪 = c_LA q, where c_LA is the velocity of sound in the material. Because the biexciton contains two excitons, its coupling is twice as strong. We use the same material parameters as in Ref. <cit.>, with an electron confinement length (size of the quantum dot) of 5nm. The photons emitted by the quantum dot are modeled using Lindblad operators ℒ, ℒ_O,γρ = γ/2(2OρO^† - O^†Oρ-ρO^†O). For the radiative decay with rate γ, this leads to the operators ℒ_|G⟩⟨X|,γ_x, ℒ_|G⟩⟨Y|,γ_x, ℒ_|X⟩⟨B|,γ_B/2, ℒ_|Y⟩⟨B|,γ_B/2. The out-coupling of photons from the cavity with rate κ leads to the operators ℒ_a_X,κ, ℒ_a_Y,κ. § CALCULATION OF THE CONCURRENCE To quantify the degree of entanglement, we utilize the concurrence <cit.>. The concurrence is determined from the two-photon density matrix ρ^2P by evaluating the four eigenvalues λ_i of the matrix M = ρ^2P T ρ^2P^* T , where ρ^2P^* represents the complex conjugate of the two-photon density matrix and T is the anti-diagonal matrix with the elements (-1,1,1,-1). After sorting the eigenvalues in decreasing order, i.e., λ_i+1≤λ_i, the concurrence is then given by <cit.> C = max{0,√(λ_1)-√(λ_2)-√(λ_3)-√(λ_4)}. The two-photon density matrix ρ^2P is calculated using two-time correlation functions of the transition operators σ̃_X/Y, as explained in detail in Ref. <cit.> (SI). In calculations without a cavity, the transition operators correspond to the polarization operators, i.e., σ̃_X/Y=σ_X/Y. In calculations including a cavity, the transition operators correspond to the cavity photon operators, i.e., σ̃_X/Y=a_X/Y. These operators are then used in the two-time correlation functions of the form G^(2)_AB,CD(t,τ) = ⟨σ̃_A^†(t)σ̃_B^†(t+τ)σ̃_D(t+τ)σ̃_C(t) ⟩.
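For illustration, the concurrence formula above can be evaluated with a few lines of linear algebra. The sketch below is not the PT-MPO pipeline used for the results; it assumes the two-photon basis ordering {XX, XY, YX, YY} and a density matrix that has already been reconstructed from the correlation functions.

```python
import numpy as np

def concurrence(rho_2p):
    """Concurrence of a 4x4 two-photon polarisation density matrix (basis XX, XY, YX, YY)."""
    T = np.fliplr(np.diag([-1.0, 1.0, 1.0, -1.0]))        # anti-diagonal matrix (-1, 1, 1, -1)
    M = rho_2p @ T @ rho_2p.conj() @ T                     # M = rho T rho* T
    lam = np.sort(np.real(np.linalg.eigvals(M)))[::-1]     # eigenvalues in decreasing order
    lam = np.clip(lam, 0.0, None)                          # guard against small negative round-off
    s = np.sqrt(lam)
    return max(0.0, s[0] - s[1] - s[2] - s[3])

# Example: the ideal state (|XX> + |YY>)/sqrt(2) gives C = 1
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
print(concurrence(rho))   # ~1.0
```

A mixed or asymmetric ρ^2P passed to the same routine yields the reduced concurrence values discussed in the main text.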
Due to numerical limitations, calculations including phonons and the cavity consider only one photon per X/Y polarized cavity mode. In addition, the states consisting of the quantum dot's ground state and two photons per cavity mode (i.e., |G, n_X=2, n_Y=0⟩, |G,n_X=0,n_Y=2⟩) are included to ensure accurate results in the two-time correlation functions. Specifically, correlation functions of the type G^(2)_XX,XX(t,τ=0) = ⟨ a^†_X(t)a^†_X(t)a_X(t)a_X(t)⟩ would always be zero if only one photon per cavity mode was considered. By using this approach, the Hilbert space dimension is reduced to 18 (=4×2×2 +2) instead of 36 (=4×3×3) if two photons per mode were fully included. This approximation significantly reduces the computation time, as the numerical effort including phonons scales unfavorably with the dimension. We have verified in the phonon-free case, comparing calculations made with the approximation and the complete inclusion of two and three photons per cavity mode, that including more photons has only negligible effects on the population dynamics and concurrence values for the parameter regime studied in this paper. Figure <ref> illustrates the impact of the approximation for the case of TPE (same parameters as in Fig. <ref>) without phonons. Panel (a) displays the two-time correlation G^(2)_XX,XX(t,τ=0), revealing only minor deviations around t∼20ps when compared to calculations that fully include two or three photons per cavity mode. Similarly, panel (b) presents the dynamics of the cavity photon number, also demonstrating only slight deviations. § NUMERICAL OPTIMIZATION FOR SUPER PARAMETERS In Fig. <ref>, it was observed that a high concurrence exceeding C=99% could be achieved over a wide range of parameters. Here, the excitation parameters for each set were found through numerical optimization of the final biexciton occupation. Due to the computational complexity involved in calculating the concurrence values, the parameters for the SUPER scheme were optimized based on the biexciton population rather than by directly optimizing the concurrence. This approach is valuable because it automatically provides results for which a high photon rate can be expected. For each value of α_1 and Δ_B, with a fixed Δ_1=-5meV, the optimal α_2,Δ_2 were determined. The pulse areas α_1/2 were constrained to a maximum value of 35π. Figure <ref> shows the final occupation of the biexciton state using the same parameters as in Fig. <ref>. It is evident that for small pulse areas α_1, the preparation fidelity drops rapidly. This outcome is expected, as previous studies demonstrated that a high pulse area has to be used for the scheme to work as intended <cit.>. Interestingly, for intermediate biexciton binding energies (BBEs), the preparation fidelity decreased to 70-80%. Due to the complex swing-up mechanism, there is no simple, straightforward explanation for this decrease in this parameter regime. We attribute it to the different system energies for varying BBEs that influence the mixing of the dressed states, leading to a more or less optimal preparation depending on the interplay of the dressed states, as shown in Ref. <cit.>. This finding highlights that the regimes for optimal concurrence and preparation fidelity (which, in turn, affects the photon rate) may differ. In the presented case, both reach high values for small biexciton binding energies and high α_1. The parameter set that yields the highest population, here for α_1=32π, was then chosen for the further investigations.
§ IMPACT OF CAVITY COUPLING In all previous calculations involving a cavity, a constant cavity coupling strength of ħ g = 0.06meV was used alongside a constant cavity out-coupling rate of ħκ = 0.12meV. Figure <ref> illustrates the influence of the cavity coupling on the concurrence and the number of photon pairs emitted via the cavity for (a) SUPER and (b) TPE. For SUPER, a plateau-like region emerges for cavity couplings up to approximately ħ g ∼0.2meV, beyond which the concurrence gradually decreases. The number of emitted photon pairs with the same polarization (N^P_XX/YY) rises sharply, as for small cavity couplings the photons are emitted into free space before coupling to the cavity. With a smaller γ, as typically found in most quantum dots <cit.>, an even greater share of photon pairs is emitted via the cavity, so that an arbitrarily high concurrence and a high photon yield can be achieved simultaneously. As the cavity coupling strength increases, a larger portion of the emitted photons pass through the cavity. Eventually, almost the entire excitation of the quantum dot is transferred to cavity photon pairs. For very large couplings ħ g > 0.75meV, additional photons are created due to re-excitation during the laser pulse. The number of photon pairs with different polarizations (N^P_XY/YX), which are detrimental to the concurrence, rises only slowly with increasing coupling values. This behavior can largely be attributed to the decoupling of the cavity from the QD during the preparation process. When the biexciton is prepared and no emission occurs during the pulses, photons are only emitted into the |XX⟩ and |YY⟩ states. A higher coupling efficiency increases the probability of photons being emitted already during the pulse. In contrast, for TPE in a cavity, the absence of the decoupling mechanism results in a strong impact of the enhanced photon emission on the concurrence, as depicted in Fig. <ref>(b). Immediately, detrimental photon pairs are created during the preparation process, causing the concurrence to rapidly drop to zero. For cavity couplings exceeding ħ g ∼0.3meV, the number of emitted photon pairs also decreases, as the strong energy shift resulting from the dot-cavity coupling hinders efficient population transfer. § INFLUENCE OF TEMPERATURE ON POPULATION DYNAMICS Figure <ref> shows that the influence of temperature differs significantly between SUPER and TPE. Up to 77 K, the final biexciton population only slightly decreases for SUPER when compared to TPE. Involving a cavity, the concurrence remains essentially constant at 99% for SUPER, while it starts at about 69% for TPE at T=4K and decreases with rising temperature. The findings for TPE are in agreement with previous studies <cit.> that identified phonons as being a substantial source of decoherence, leading to a decrease of the concurrence. Phonons lead to a renormalization of the cavity-dot coupling, effectively weakening the interaction <cit.>. The impact of phonons on the photon output can be seen in the population dynamics shown in Fig. <ref>. Panel (c) displays the number of X/Y photons in the cavity, revealing that, when phonons are included, fewer photons are emitted into the cavity at early times compared to the phonon-free case. Additionally, the oscillations are damped. Panels (a) and (b) show the population dynamics of the dot states for SUPER and TPE, respectively, indicating that phonons disturb the process of TPE substantially more than SUPER.
http://arxiv.org/abs/2307.02998v1
20230706140005
Ultrasonic backscattering model for Rayleigh waves in polycrystals with Born and independent scattering approximations
[ "Shan Li", "Ming Huang", "Yongfeng Song", "Bo Lan", "Xiongbing Li" ]
physics.app-ph
[ "physics.app-ph" ]
Ultrasonic backscattering model for Rayleigh waves in polycrystals with Born and independent scattering approximations Shan Li^1,2 (lisa_13@foxmail.com), Ming Huang^2 (m.huang16@imperial.ac.uk), Yongfeng Song^1 (songyf_ut@csu.edu.cn), Bo Lan^2 (bo.lan@imperial.ac.uk), Xiongbing Li^1 (lixb213@csu.edu.cn) ^1 School of Traffic and Transportation Engineering, Central South University, Changsha 410075, Hunan, China ^2 Department of Mechanical Engineering, Imperial College London, Exhibition Road, London SW7 2AZ, United Kingdom This paper presents theoretical and numerical models for the backscattering of 2D Rayleigh waves in single-phase, untextured polycrystalline materials with statistically equiaxed grains. The theoretical model, based on our prior inclusion-induced Rayleigh wave scattering model and the independent scattering approximation, considers single scattering of Rayleigh-to-Rayleigh (R-R) waves. A numerical finite element model is established to accurately simulate the scattering problem and to evaluate the theoretical model. Good quantitative agreement is observed between the theoretical model and the finite element results, especially for weakly scattering materials. The agreement decreases as the anisotropy index increases, owing to the reduced applicability of the Born approximation. However, the agreement remains generally good when weak multiple scattering is involved. In addition, the R-R backscattering behaviour of 2D Rayleigh waves is similar to the longitudinal-to-longitudinal and transverse-to-transverse backscattering of bulk waves, with the former exhibiting stronger scattering. These findings establish a foundation for using Rayleigh waves in quantitative characterisation of polycrystalline materials. Keywords: Backscattering; Rayleigh waves; Born approximation; Independent scattering; Finite element; Polycrystals August 1, 2023 ================== § INTRODUCTION Rayleigh waves, when propagating on the surface of a polycrystalline material, can be scattered by the grain boundaries due to the acoustic impedance contrast caused by different alignments of crystallographic orientations of individual grains <cit.>. The backscattered waves - the portion of the scattered wave that travels back to the transducer - are sometimes called backscattered `grain noise', and for bulk waves this phenomenon has been the subject of thorough scientific investigation. For example, backscattered waves have been widely shown to characterise the material's microstructure and estimate material properties, e.g. the size of the grains <cit.>, the degree of preferred texture <cit.>, and the multiphase content <cit.>. In the past decades, several physical quantities, including the figure of merit (FOM) <cit.>, also called the backscattering coefficient <cit.>, and several theoretical models have been developed to quantify the backscattered amplitude and the intensity of the scattered energy. The theoretical models include the independent scattering model (ISM) <cit.>, the singly scattered response (SSR) <cit.> and the doubly scattered response (DSR) <cit.> for different bulk wave types. Meanwhile, based on these models, research has been performed to study materials with more complicated microstructures <cit.>.
Here, we recognise that, compared with the other models mentioned, the model describing the FOM is advantageous for characterising the microstructure of a material from a practical point of view, because it involves a simple mathematical expression and does not require consideration of the experimental conditions. While comprehensive models have been established for the scattering of bulk waves by grains, few studies have focused on the scattering of Rayleigh waves in polycrystalline materials. Zhang and Weaver studied the incoherent singly scattered field of leaky Rayleigh waves from a fluid/solid surface at the critical Rayleigh angle using the first Born approximation <cit.>. They proposed that the mean-square scattered signal level is given in terms of an integral of the spatial-spectral density (the spatial Fourier transform of the autocovariance function of the fluctuating elastic moduli). In addition to the mean-square scattered signal level, the scattering attenuation of Rayleigh waves has also been investigated. For example, Kaganova and Maradudin gave an expression for the scattering attenuation and the dispersion relationship of Rayleigh waves <cit.>. However, their theoretical model is difficult to solve, and no quantitative results have been obtained. Recently, Ryzy et al. <cit.> and Li et al. <cit.> predicted the scattering attenuation for different types of Rayleigh waves. However, these works are concerned only with the scattering attenuation. An explicit expression for the backscattered signal of Rayleigh waves has not yet been given. Given the capability of Rayleigh waves to quantitatively evaluate material properties in near-surface regions, we consider it of considerable interest to develop the theory needed to describe the backscattering behaviour of Rayleigh waves in a polycrystalline material. Recently, we gave an explicit expression for the backward scattering amplitude of R-R waves scattered by a single inclusion (a weak scatterer) based on the Born approximation <cit.>, which provides theoretical support for the study of backscattered grain noise. With the independent scattering (IS) approximation <cit.>, the total backscattering power of Rayleigh waves can be interpreted as an incoherent sum of the power scattered from each grain. Thus, an opportunity exists to employ a theoretical method, based on the Born and IS approximations, to complement our understanding of the backscattered grain noise of Rayleigh waves propagating on the plane surface of polycrystalline materials with single-phase, untextured and equiaxed grains. In addition to the theoretical methods, finite element (FE) modelling is another approach which has been widely used to investigate wave scattering behaviour in polycrystalline materials. Similar to the theoretical developments in the literature, the related works in this area have also mainly concentrated on bulk wave grain noise, with two-dimensional (2D) <cit.> and three-dimensional (3D) models <cit.> both well researched. Recently, there have also been successful applications of FE in analysing the scattering attenuation and velocity dispersion of Rayleigh waves in polycrystalline materials <cit.>.
Compared to experiments where significant limitations on the testing conditions and knowledge of the detailed materials microstructures are present, these numerical studies allow full control and knowledge of the materials, demonstrating the power of the FE method as a perfectly controlled experiment to realistically simulate Rayleigh wave scattering. Therefore, in this paper, we combine the development of new theoretical advancements with powerful FE simulations for verification purposes. In comparison to the existing studies, this work sets out to study 2D Rayleigh wave grain backscattering behaviour. To achieve this aim, this work contributes to two aspects. (1) The work develops explicit formulae for the backscattered power from single-phase, untextured, and equiaxed grains based on the Born and IS approximations, which leads to the calculation of the backscattered grain noise of Rayleigh waves. (2) We make use of the proven capability of the FE method to perform realistic simulations of backward scattering amplitudes of Rayleigh waves scattered by grains of a polycrystal. This not only allows Rayleigh wave scattering behaviour to be studied numerically as a standalone application, some of the outputs can also be input back into the theoretical model to perform numerical integration, thus allowing thorough verification of the theoretical model. The paper is organised to explain the methodology and highlight the contributions clearly: Section <ref> presents the theoretical model to explain the backward scattering behaviours of the Rayleigh wave based on the approximations. Then Sec. <ref> gives a brief introduction to an FE model which illustrates the R-R wave responses after being scattered by grains. The comparisons between the theoretical and computational results are shown in Sec. <ref>, mainly including that the measured single scattering amplitudes are scattered from single grain with different shapes and random orientation, shown in Sec. <ref>; the theoretical model is verified to evaluate the root-mean-square (rms) backward grain noise with different anisotropy index materials in Sec. <ref>. Finally, conclusions are given in Sec. <ref>. § BACKSCATTERING OF RAYLEIGH WAVES §.§ Brief review for the inclusion scattering of Rayleigh waves The interest here is studying the Rayleigh wave propagation on the smooth polycrystal plain surface. Usually, the grain can be regarded as a weak scatter with a small elastic constant perturbation. Therefore, the scattering behaviour from the single grain can be simplified to the inclusion scattering. In fact, the theoretical model related to the inclusion scattering of Rayleigh waves has been well established by our previous research <cit.>, which is developed based on the Born approximation in which the displacement fields for the scattered wave are approximated by those of the incident wave. Now, a brief overview of the inclusion backscattering theory of the R-R waves is introduced here. We consider a statistically isotropic solid as the host material with constant density ρ in the two-dimensional (2D) half-space defined by the x-z coordinates. An arbitrarily-shaped inclusion is present on the surface or subsurface of the host material. The inclusion is defined in the region V. The anisotropic property of the host material and inclusion are described by the elastic tensor C_pjkl^0 and C_pjkl^1 (𝐱_s). 
The anisotropic property difference between the inclusion and the host material is described by the elastic tensor Δ C_pjkl(𝐱_s) = C_pjkl^1 (𝐱_s)-C_pjkl^0. In addition, there is no density change caused by the inclusion. Based on the reciprocity theorem and the Born approximation, the backscattered Rayleigh wave can be written as <cit.>, u^sc_n(𝐱, ω) ≈∫_V [ -Δ C_pjkl(𝐱_s) G_ni, j(𝐱,𝐱_s, ω) u_k, l^in(𝐱_s, ω) ]  d V . Here u_k, l^in(𝐱_s, ω) is the derivative of the incident Rayleigh wave displacement, given as u_k, l^in(𝐱_s) = [d_k, l^in(z_s) + i d_k^in(z_s) k_R e^in_l] exp(i k_R 𝐞^in·𝐱_s), with d^in_k = [U_R(z_s) , 0 , W_R(z_s)], U_R(z) = w_1^L exp(-η_L z)+w_1^T exp(-η_T z), W_R(z) = w_3^L exp(-η_L z) +w_3^T exp(-η_T z), w_1^L = k_R(2 c_T^2-c_R^2) /(2 η_L c_T^2), w_1^T = - η_T /k_R, w_3^L = i(2 c_T^2-c_R^2) /(2 c_T^2), w_3^T = - i, η_L = k_R √(1-c_R^2 /c_L^2), η_T = k_R √(1-c_R^2 /c_T^2), where k_R and c_R are the wave number and phase velocity of the incident Rayleigh wave. 𝐞^in = [1,0,0] is the propagation direction of the incident Rayleigh wave. The phase velocity c_R can be calculated from <cit.>, (2-c_R^2 /c_T^2)^2-4(1-c_R^2 /c_L^2)^1/2(1-c_R^2 /c_T^2)^1/2 = 0 , where c_L and c_T are the Voigt-averaged velocities of the longitudinal and shear waves in the host material, which can be obtained from <cit.>, c_L = √( c_11^0 / ρ), c_T = √( c_44^0 / ρ), where c_11^0 and c_44^0 are the Voigt-averaged constants, given by c_11^0 = [3(c_11+c_22+c_33)+2(c_23+c_13+c_12)+4(c_44+c_55+c_66)]/15, c_44^0 = [(c_11+c_22+c_33)-(c_23+c_13+c_12)+3(c_44+c_55+c_66)]/15. The fourth-rank elastic tensor C_pjkl is written as c_ij using the Voigt index notation, where the pairs of indices are contracted to the following single values: 11→ 1, 22→ 2, 33→ 3, 23 or 32→ 4, 13 or 31→ 5 and 12 or 21→ 6. G_ni,j(𝐱,𝐱_s, ω) is the derivative of the 2D Rayleigh wave Green's function <cit.>, written as G_ni,j(𝐱,𝐱_s, ω) = A_0 {d_i,j^sc (z_s ) -i k_R e^sc_j d_i^sc (z_s ) }exp(-i k_R 𝐞^sc·𝐱_s)exp(i k_R r)/√(r) p^sc_n(z ) , with A_0 = 1/(4 P_R c_R k_R), P_R = (1/2)ρ_0 c_g∫_0 ^∞[ | U_R(z) | ^2+ | W_R(z) |^2 ] d z, p^sc_n(z ) = [ U_R(z ) ,0, W_R(z ) ] , d_i^sc(z_s) = [-U_R(z_s) , 0 , - W_R(z_s)], where r is the propagation distance, P_R represents a normalised power per unit width in the travelling wave mode, c_g is the group velocity of the Rayleigh wave, and 𝐞^sc =[-1,0,0] denotes the propagation direction of the scattered Rayleigh wave. Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) and rearranging the result, we have u^sc_n(𝐱, ω) = A (ω) exp(i k_R r)/√( r) p^sc_n(z ) , where A ( ω) is the far-field amplitude of the backscattered Rayleigh wave, expressed as <cit.> A( ω) = - A_0 ∫ M(𝐱_s) exp[i k_R (𝐞^in- 𝐞^sc)·𝐱_s] d^2 𝐱_s, with M(𝐱_s) = -k_R^2 Δ C_i1k1 (𝐱_s) J_i k^00 +i k_R Δ C_i1k3 (𝐱_s) J_i k^01 +i k_R Δ C_i3k1 (𝐱_s) J_i k^10+Δ C_i3k3 (𝐱_s)J_i k^11, J_i k^mn(z ) = (-1)^m+n∑_σ_1 = L,T∑_σ_2 = L,T(η_σ_1)^m(η_σ_2)^n w_i^σ_1 w_k^σ_2 exp[-(η_σ_1+η_σ_2)z] , where the summation over (i, k) in the above equations runs over 1 and 3, m, n = 0, 1, and no summation convention applies to repeated m, n.
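As a numerical aside, the Voigt averages and the Rayleigh dispersion relation above are straightforward to evaluate. The sketch below solves the dispersion equation for c_R by bracketed root finding; the cubic specialisation of the Voigt formulae and the aluminium-like single-crystal constants are assumptions for illustration, not values taken from this work.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed aluminium-like cubic constants (Pa) and density (kg/m^3)
c11, c12, c44, rho = 110.0e9, 59.0e9, 26.0e9, 2700.0

# Voigt averages specialised to cubic symmetry (c11 = c22 = c33, etc.)
c11_0 = (3.0 * c11 + 2.0 * c12 + 4.0 * c44) / 5.0
c44_0 = (c11 - c12 + 3.0 * c44) / 5.0

c_L = np.sqrt(c11_0 / rho)     # Voigt-averaged longitudinal velocity
c_T = np.sqrt(c44_0 / rho)     # Voigt-averaged shear velocity

def rayleigh_eq(c_R):          # left-hand side of the Rayleigh dispersion relation
    return (2.0 - c_R**2 / c_T**2) ** 2 \
        - 4.0 * np.sqrt(1.0 - c_R**2 / c_L**2) * np.sqrt(1.0 - c_R**2 / c_T**2)

# The physical Rayleigh root lies slightly below c_T (typically ~0.9 c_T for metals)
c_R = brentq(rayleigh_eq, 0.5 * c_T, 0.999 * c_T)
print(c_L, c_T, c_R)
```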
§.§ Single scattering of Rayleigh waves from microstructure Before we start with the development of the scattering behaviour of Rayleigh waves on the polycrystalline material, we make the following assumptions that: (1) the polycrystal's statistics are homogeneous and isotropic; (2) there is no orientation correlation between different grains; (3) only R-R wave scattering is considered; (4) multiple scattering phenomena are neglected, i.e., the reflection from one grain is not affected by other grains; (5) total backscattering power is an incoherent sum of the signals scattered by the individual grain in the metal (IS approximation), i.e., the phases of the individual reflection are not correlated. Based on the above-mentioned assumptions, the backscattered power can be given by <cit.>, P( ω) = ⟨ A( ω) A^*( ω)⟩, where < > denotes the ensemble average. The asterisk represents the conjugate. A( ω) denotes the R-R scattering amplitude in the Born approximation, which is given in Eq. <ref>. Substituting Eq. <ref> to Eq. <ref>, the total backscattered power can be expressed as, P( ω) = A_0^2 ∫∫Ψ(𝐱_𝐬, 𝐱) exp [-(η_σ_1 +η_σ_2) z_s-(η_σ_3 +η_σ_4) z ] exp[2 i k_R (x_s -x)] d^2 𝐱_s d^2 𝐱, where the summation of each σ_i over L and T is implied. In the equation, Ψ (𝐱_𝐬, 𝐱 ) is given by Ψ (𝐱_𝐬, 𝐱 ) = ⟨ M (𝐱_s ) M^* (𝐱 ) ⟩ = k_R^4 ⟨Δ C_i 1 k 1(𝐱_s)Δ C_α 1 γ 1(𝐱)⟩Λ_i k αγ^0 0 0 0 + i k_R^3 ⟨Δ C_i 1 k 1(𝐱_s)Δ C_α 1 γ 3(𝐱)⟩Λ_i k αγ^0 0 0 1 + i k_R^3 ⟨Δ C_i 1 k 1(𝐱_s)Δ C_α 3 γ 1(𝐱)⟩Λ_i k αγ^0 0 1 0 - k_R^2 ⟨Δ C_i 1 k 1(𝐱_s) Δ C_α 3 γ 3(𝐱_s) ⟩Λ_i k αγ^0 0 1 1 -i k_R^3 ⟨Δ C_i 1 k 3(𝐱_s) Δ C_α 1 γ 1(𝐱) ⟩Λ_i k αγ^0 1 0 0 + k_R^2 ⟨Δ C_i 1 k 3(𝐱_s)Δ C_α 1 γ 3(𝐱) ⟩Λ_i k αγ^0 1 0 1 + k_R^2 ⟨Δ C_i 1 k 3(𝐱_s) Δ C_α 3 γ 1(𝐱) ⟩Λ_i k αγ^0 1 1 0 + i k_R⟨Δ C_i 1 k 3(𝐱_s)Δ C_α 3 γ 3(𝐱)⟩Λ_i k αγ^0 1 1 1 - i k_R^3 ⟨Δ C_i 3 k 1(𝐱_s)Δ C_α 1 γ 1(𝐱)⟩Λ_i k αγ^1 0 0 0 + k_R^2 ⟨Δ C_i 3 k 1(𝐱_s)Δ C_α 1 γ 3(𝐱)⟩Λ_i k αγ^1 0 0 1 + k_R^2 ⟨Δ C_i 3 k 1(𝐱_s) Δ C_α 3 γ 1(𝐱)⟩Λ_i k αγ^1 0 1 0 +i k_R⟨Δ C_i 3 k 1(𝐱_s)Δ C_α 3 γ 3(𝐱)⟩Λ_i k αγ^1 0 1 1 - k_R^2 ⟨Δ C_i 3 k 3(𝐱_s)Δ C_α 1 γ 1(𝐱)⟩Λ_i k αγ^1 1 0 0 - i k_R⟨Δ C_i 3 k 3(𝐱_s)Δ C_α 1 γ 3(𝐱)⟩Λ_i k αγ^1 1 0 1 -i k_R⟨Δ C_i 3 k 3(𝐱_s)Δ C_α 3 γ 1(𝐱)⟩Λ_i k αγ^1 1 1 0 + ⟨Δ C_i 3 k 3(𝐱_s)Δ C_α 3 γ 3(𝐱)⟩Λ_i k αγ^1 1 1 1. Meanwhile, Λ^m n p q_i k αγ is defined as, Λ_i k αγ^m n p q = J^m n_i k (J^ p q_αγ)^* = (-1)^m+n+p+q∑_σ_1 = L,T∑_σ_2 = L,T∑_σ_3 = L,T∑_σ_4 = L,T(η_σ_1)^m(η_σ_2)^n(η_σ_3)^p(η_σ_4)^q w_i^σ_1 w_k^σ_2 w_α^σ_3* w_γ^σ_4*, for m, n, p, q = 0, 1 and no summation convention for repeated m, n, p, q. Due to the assumption of statistical homogeneity and macroscopic isotropy of the polycrystalline medium, the covariance can be factorised into tensorial and spatial parts <cit.>, ⟨Δ C_ijkl(𝐱) Δ C_αβγδ (𝐱_s)⟩ = ⟨Δ C_ijklΔ C_αβγδ⟩ W (𝐱_s - 𝐱), where ⟨Δ C_ijklΔ C_αβγδ⟩ is elastic covariance, whose detailed information can be found in Ref. <cit.>. W(𝐱_s - 𝐱) is a geometrical two-point correlation (TPC) function, describing the probability that the two points 𝐱, 𝐱_s are in the same grain. For equiaxed grains with a grain size distribution, Van Pamel et al. <cit.> has proved that W(r) can be obtained by fitting an exponential series as, W(𝐱_s - 𝐱) = ∑_ϕ=1^Φ[ A_ϕexp(- | 𝐱_s - 𝐱 |/a_ϕ)], ∑_ϕ=1^Φ A_ϕ = 1. In direct contrast to the conventional single exponential form <cit.>, this generalised TPC function has the advantage of accurately representing the actual TPC statistics of experimental <cit.> and numerical samples <cit.>. 
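As an illustration of how the generalised TPC function can be obtained in practice, the sketch below fits correlation data to a two-term exponential series with the normalisation Σ A_ϕ = 1 enforced. The synthetic data, the number of terms Φ = 2 and the initial guesses are assumptions for illustration only; in this work the fit is performed on the TPC statistics measured from the Neper models.

```python
import numpy as np
from scipy.optimize import curve_fit

def tpc_series(r, A1, a1, a2):
    A2 = 1.0 - A1                                   # enforces sum(A_phi) = 1
    return A1 * np.exp(-r / a1) + A2 * np.exp(-r / a2)

# Synthetic stand-in for measured TPC statistics W(r) (assumed numbers, in mm)
rng = np.random.default_rng(0)
r_data = np.linspace(0.0, 2.0, 60)
W_true = 0.65 * np.exp(-r_data / 0.15) + 0.35 * np.exp(-r_data / 0.35)
W_data = W_true + 0.005 * rng.standard_normal(r_data.size)

popt, _ = curve_fit(tpc_series, r_data, W_data, p0=(0.5, 0.1, 0.3))
A1, a1, a2 = popt
print(f"A_phi = ({A1:.3f}, {1 - A1:.3f}),  a_phi = ({a1:.3f}, {a2:.3f}) mm")
```

The fitted coefficients A_ϕ and a_ϕ are exactly the quantities that enter the backscattered-power expression derived next.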
The equivalent grain coefficients a_ϕ and A_ϕ are determined by best fitting the actual TPC data of the polycrystal, which is discussed in Sec.<ref>. Substituting Eqs. <ref> and <ref> to Eq. <ref>, we can get, P( ω) = A_0^2 Ψ_0∑_ϕ = 1^Φ{ A_ϕ∫∫ [ exp(- | 𝐱_s - 𝐱 |/a_ϕ) exp[-( η_σ_1 +η_σ_2) z_s-(η_σ_3 +η_σ_4) z ) ] ×exp[2 i k_R (x_s -x )] d^2 𝐱_s d^2 𝐱 ] }, with Ψ_0 = k_R^4 ⟨Δ C_i 1 k 1Δ C_α 1 γ 1⟩Λ_i k αγ^0 0 0 0 + i k_R^3 ⟨Δ C_i 1 k 1Δ C_α 1 γ 3⟩Λ_i k αγ^0 0 0 1 + i k_R^3 ⟨Δ C_i 1 k 1Δ C_α 3 γ 1⟩Λ_i k αγ^0 0 1 0 - k_R^2 ⟨Δ C_i 1 k 1Δ C_α 3 γ 3⟩Λ_i k αγ^0 0 1 1 - i k_R^3 ⟨Δ C_i 1 k 3Δ C_α 1 γ 1⟩Λ_i k αγ^0 1 0 0 + k_R^2 ⟨Δ C_i 1 k 3Δ C_α 1 γ 3⟩Λ_i k αγ^0 1 0 1 + k_R^2 ⟨Δ C_i 1 k 3Δ C_α 3 γ 1⟩Λ_i k αγ^0 1 1 0 + i k_R⟨Δ C_i 1 k 3Δ C_α 3 γ 3⟩Λ_i k αγ^0 1 1 1 - i k_R^3 ⟨Δ C_i 3 k 1Δ C_α 1 γ 1⟩Λ_i k αγ^1 0 0 0 + k_R^2 ⟨Δ C_i 3 k 1Δ C_α 1 γ 3⟩Λ_i k αγ^1 0 0 1 + k_R^2 ⟨Δ C_i 3 k 1Δ C_α 3 γ 1⟩Λ_i k αγ^1 0 1 0 +i k_R⟨Δ C_i 3 k 1Δ C_α 3 γ 3⟩Λ_i k αγ^1 0 1 1 - k_R^2 ⟨Δ C_i 3 k 3Δ C_α 1 γ 1⟩Λ_i k αγ^1 1 0 0 - i k_R⟨Δ C_i 3 k 3Δ C_α 1 γ 3⟩Λ_i k αγ^1 1 0 1 -i k_R⟨Δ C_i 3 k 3Δ C_α 3 γ 1⟩Λ_i k αγ^1 1 1 0 +⟨Δ C_i 3 k 3Δ C_α 3 γ 3⟩Λ_i k αγ^1 1 1 1. By applying the following change of variables, τ = (𝐱_s+𝐱) / 2, 𝐫 = 𝐱_s-𝐱, with the limit of (the detailed information for the transformation of the integral area in the Appendix), 0<τ_z<+∞, -∞<τ_x<+∞, -∞<r_z<+∞, -∞<r_x<+∞ . Therefore, the following equation is straightforward, P( ω) = A_0^2 Ψ_0∑_ϕ = 1^Φ{ A_ϕ∫exp(-η_M τ_z)  d^2 τ∫exp(- r / a_ϕ+2 i k_R r_x+η_N r_z )  d^2 𝐫}, where η_M = η_σ_1 +η_σ_2 +η_σ_3 +η_σ_4 and η_N = (-η_σ_1 -η_σ_2+η_σ_3 +η_σ_4) /2. We want to emphasise that, unlike the uniform bulk wave, there is an exponential energy decay for Rayleigh waves in the z- displacement (thickness direction). Thus, instead of estimating the power per unit area, the backscattered power p( ω), from the grains in the area where is the unit length multiplied by the infinite depth, should be used to assess the backscattering behaviour of Rayleigh waves propagating on the material surface. Based on this, we make a calculation for the integral, which can be followed, p( ω) = A_0^2 Ψ_0∑_ϕ = 1^Φ{ A_ϕ∫_0^∞exp(-η_M τ_z)  dτ_z∫exp(- r /a_ϕ+2 i k_R r_x+η_N r_z )  d^2 𝐫}. After further manipulation, the final equation can be written as, p( ω) = A_0^2 Ψ_0∑_ϕ = 1^Φ{ A_ϕ2 π a_ϕ^2/η_M [a_ϕ^2 (4 k_R^2 - η_N^2 ) + 1]^3/2}. The theoretical prediction of the rms backscattering amplitude of single scattering for multiple grains can be given by, A_rms( ω) = √(p( ω) / n_g), where n_g is grain density. Assuming that N is the number of grains in the active volume of a sample. Ω is the active space of the sample. n_g = N / Ω. A_rms( ω) is the backscattering amplitude of Rayleigh waves for a single scattering of multiple grains with the area (unit length × infinite depth), which is the most important result in this article. §.§ Grain noise of Rayleigh waves in a polycrystalline material Now, we consider a 2D case where one point on the upper surface (x- surface) transmits a plane Rayleigh wave and receive the backscattered and reflected signals. Then, the normalised backward grain noise N_rms in the area with unit length × infinite depth can be defined by the rms of backward amplitude from each grain, given as an approximate expression for the dimensionless ratio <cit.>, N_rms( ω)≡√(<|Γ_noise( ω) |^2> / |Γ_ref( ω) | ^2), where Γ_ref( ω) is the Fourier transform of the reference signal at angular frequency ω = 2 π f. 
Γ_noise( ω) denotes the Fourier transform of the grain noise signal on the finite time interval indicated in the area with unit length × infinite depth, which is understood to be located away from the reference echoes. In addition to the five assumptions we mentioned in Sec. <ref>, here we apply two more restrictions: (6) No attenuation is considered, which is the direct consequence of assumption (4) above which stipulates that the reflection from one grain is not affected by other grains; (7) The time window of interest is long enough to enclose the time-domain echoes produced by the backscattering of sound by all grains in some regions of the specimen. As we mentioned in assumption (5), the IS approximation states that for an incoherent summation of signals, the power of the sum equals the sum of the powers of the contributing signals. Thus, < |Γ_noise( ω) |^2> = <∑|Γ_i( ω)^2 |>, considering an echo associated with Rayleigh waves which travel directly from the transmitter to grain at position 𝐱_i (x_i,z_i) and then directly back to the receiver, the discrete Fourier transform component of this echo at frequency f may be approximated by <cit.>, Γ_i(ω,x_i,z_i) = A_i( ω) Γ_ref( ω), where A_i( ω) is the scattering amplitude from this grain, given by Eq. <ref>. Furthermore, replacing the sum over grains with an integral over the unit volume of the material, Eq. <ref> can be given as, <|Γ_noise( ω) |^2> = <∑| A_i( ω)^2 |> |Γ_ref( ω) | ^2 = p( ω) |Γ_ref( ω) |^2, where p( ω) is given by Eq. <ref>. Substitution of Eq. <ref> in Eq. <ref>, the normalised grain noise in the area with unit length × infinite depth can be further obtained as, N_rms( ω) = √(p( ω)) =√(n_g)A_rms( ω), where A_rms( ω) is written as Eq. <ref>. From Eq. <ref>, we can figure out that p( ω) is the square of FOM to represent the measurement of the inherent grain noise of the specimen of Rayleigh wave and can be also expressed as the backscattering coefficient, which can be written as FOM^2 ≡ p( ω). We have now completed our theoretical developments for the quantitative predictions of the rms of the backscattering Rayleigh wave in (Eqs. <ref> and <ref>). For the verification of these predictions, we computationally simulate the rms backscattering amplitude of single grains with random shapes and orientations and grain noise scattered by the polycrystalline material in the following section. § FINITE ELEMENT METHOD The capability of 2D FE to model the bulk wave scattering in polycrystals has been well proved in recent FE modelling papers <cit.>. Therefore, numerical validations are used in this section to verify the theoretical model. The FE method for simulating Rayleigh waves flaw scattering was implemented in our prior work <cit.>. A brief overview of the FE method is given below and several previous aspects are emphasised here. As schematically shown in Fig. <ref>, the 2D FE model is based on the x-z plane. The numerical polycrystalline models used here are constructed in the Neper program <cit.> with the Poisson Voronoi tessellations (PVT). The PVT creates uniformly random seeds in the model space of a polycrystal, with each seed being enclosed by a grain within which all points are closer to the enclosed seed than to any other <cit.>. The grains are statistically equiaxed because of the procedure of randomly placing Voronoi seeds <cit.>. 
Taking the model n20000 in Table <ref> with the PVT microstructure as an example: its dimensions d_x × d_z are 140 mm × 14 mm, the averaged grain edge size d, defined as the square root of the space area divided by the grain number <cit.>, is 0.31 mm, and the polycrystal microstructure is displayed in Fig. <ref>(a). The grain edge sizes, defined as the square root of each grain area, of the PVT grains are normally distributed <cit.> and shown in Fig. <ref>(b). The mean grain size is 0.31 mm, with a standard deviation of 8.31 × 10^-2 mm. The TPC statistics W(r) are numerically measured from the generated polycrystal models and the resulting data points are indicated in Fig. <ref>(c). To incorporate the measured statistics into the theoretical models, they are fitted into a generalised TPC function (Eq. <ref>), which is displayed in Fig. <ref>(c) as the solid curve. The fitted TPC function is treated as a scalar function and the detailed information related to a_ϕ and A_ϕ can be found in Fig.<ref>(c). We note that the fitting numbers a_ϕ and A_ϕ are obtained by scaling the fitting parameters mentioned in prior work <cit.>. Besides, some other tessellations have been widely used to generate microstructures, such as centroidal Voronoi tessellation <cit.> and Laguerre tessellation <cit.>, which are not shown in this paper because of the high computational requirements. Meanwhile, we want to underscore that the theoretical model is reliant on the TPC statistics, which means the theoretical model can be applied with a well-defined TPC function regardless of the type of tessellation model employed. Structured meshes, which have been shown to perform well with sufficiently fine discretization in modelling a grained material <cit.>, continue to be used here. The mesh size used for each model has met the two requirements for obtaining accurate simulation results: (1) at least ten elements per wavelength <cit.>; (2) at least ten elements per averaged grain size <cit.>. The elements on the bottom, left, and right sides of the model are used to define absorbing boundary conditions. The thickness of each absorbing boundary region in the boundary normal direction is chosen to be three times the wavelength of the Rayleigh wave in the host material <cit.>. The desired Rayleigh wave is generated by applying two sinusoidal time-domain signals of 90^∘ phase shift to multiple source nodes located on the top surface of the model (yellow points in Fig. <ref>). The size of the source is set to be equal to three times of centre-frequency wavelengths of the simulated Rayleigh wave, and each source node is assigned a unique amplitude, following Eq. (17) in Sarris et al. <cit.>. The simulation is solved using the GPU-accelerated Pogo program <cit.> with an explicit time-stepping scheme. A relatively large time step of Δ t = 0.9 h /c_L, satisfying the Courant-Friedrichs-Lewy condition <cit.>, is used to minimise numerical error <cit.>. The models generated for this work are summarised in Table <ref>. § RESULTS AND DISCUSSIONS §.§ rms backscattering of single grains with random shapes and orientations First, rms backscattering of single grains with random shapes and orientations is investigated. We know that the theoretical model is developed with independent scattering approximation. Therefore, the FE model, simulating the backscattering of single grains embedded in a homogeneous background material, can avoid the effect of multiple scattering, which makes the FE results represent the theoretical result better. 
To predict the rms backscattering amplitude of single grains with random shapes and orientations, a number of grains is needed. As mentioned in Sec. <ref>, the Neper program generates an aggregate of grains (`multi-grain' models), each having its own geometry. The geometrical schematic of the multiple-grain aggregate is shown in Fig. <ref>(a). In order to perform simulations with only single R-R scattering while avoiding the effect of multiple scattering (to be discussed in Sec. <ref>), we make the following simplification to the simulation model: instead of Rayleigh waves propagating on the whole polycrystalline model directly, the simulation process takes an individual grain out of the S area of the polycrystal, applies a random orientation to it, embeds it in an isotropic host material, and then simulates the Rayleigh wave propagation in this model with the single grain regarded as the sole scatterer, as shown in Fig. <ref>(b). We repeat the process within the S area until enough grains are used to make sure that the grain distribution is statistically uniform. Over the course of the FE solution, the z-displacement of the generated incident wave is monitored at a transmitting node (point T in Fig. <ref>), while that of the backscattered wave is recorded at a receiving node (point R). We emphasise that the transmitting and receiving nodes are placed far away from the source nodes and the grain, respectively, in order for the former to monitor the well-formed incident wave and for the latter to record solely the scattered Rayleigh wave in the far field. In addition, a reference signal is obtained at the receiving point using an identical but grain-free FE model, and the reference signal is subtracted from the relatively small raw signal to minimise the influence of numerical error. The backscattering amplitude A^FE(f ) from single grains with different shapes and random orientations is measured to calculate the incoherent rms average A_rms^FE( f ). A brief overview of the measurement of the backscattering amplitude and the rms averaging is given here. Figures <ref>(a) and (b) present an example to illustrate the z-displacement monitored by the transmitter and receiver in the time domain and frequency domain for the i-th grain, respectively. The signal U^i_T( t ) at the transmitting node and the corrected signal U^i_R( t ) at the receiving node are Fourier transformed into the frequency domain to obtain the spectra U^i_T( f ) and U^i_R( f ). The frequency-dependent amplitude of the backscattered Rayleigh wave is then calculated by A^i( f ) = U_R^i( f ) / U_T^i( f ) and A_rms^FE( f ) = RMS[A^i( f )] /√(l_1), where l_1 is the length of the S area. A_rms^FE( f ) will be used to evaluate the theoretical model result, A_rms( ω), in Sec. <ref>. Figure <ref>(a) shows the comparison of the FE and theoretical predictions for the aluminium material, whose detailed information is listed in Table <ref>. The x axis is the product of the wave number k_R and the average linear dimension of the grains in the `multi-grain' models, denoted by d (the detailed information about d is in Table <ref>). The FE-predicted rms backscattering amplitude (calculated by Eq. <ref>) is plotted as the coloured dots and the error bars show the 99.73% confidence interval (3 σ rule) <cit.> for the FE points, demonstrating the variation across the realisations with different crystallographic orientations. Four centre frequencies, 1, 2, 4, and 8 MHz, are used in the FE simulations to cover a large range of k_Rd.
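Before discussing the comparison in detail, the amplitude processing just described can be summarised by the following minimal sketch. The signals, the sampling interval and the length l_1 are stand-ins (the real inputs are the monitored FE displacements), and in practice only the usable bandwidth around the centre frequency would be evaluated.

```python
import numpy as np

# Stand-in signals (not FE output): incident tone burst and toy backscattered echoes
n_grains, n_samples, dt, l1 = 50, 4096, 5e-9, 20.0e-3      # assumed numbers
t = np.arange(n_samples) * dt
tone = np.sin(2 * np.pi * 2.0e6 * t) * np.exp(-((t - 2e-6) / 0.5e-6) ** 2)

rng = np.random.default_rng(1)
u_T = np.tile(tone, (n_grains, 1))                          # transmitted signals U_T^i(t)
u_R = np.array([1e-2 * rng.random() * np.roll(tone, int(rng.integers(200, 400)))
                for _ in range(n_grains)])                  # received backscatter U_R^i(t)

f = np.fft.rfftfreq(n_samples, dt)
A_i = np.abs(np.fft.rfft(u_R, axis=1) / np.fft.rfft(u_T, axis=1))   # A^i(f) = U_R^i(f)/U_T^i(f)
A_rms = np.sqrt(np.mean(A_i ** 2, axis=0)) / np.sqrt(l1)            # incoherent rms, scaled by sqrt(l_1)
```

The same spectral-ratio and rms procedure applies to the grain-noise evaluation N_rms^FE(f) discussed later, with the reference signal and the selected noise window replacing U_T^i and U_R^i.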
The theoretical prediction calculated using Eq. <ref> is plotted as the grey curve. The good agreement between the FE results and theoretical prediction shows that the theoretical model is correct and therefore it has a strong potential to predict grain noise, which will be discussed later in Sec. <ref>. Furthermore, to observe how the agreement changes with anisotropy, the comparisons for the Inconel and lithium materials which have anisotropy indices of 2.83 and 9.14 are also performed, as shown in Figs. <ref>(b) and (c). It is clearly demonstrated that the agreement between the theoretical and FE results decreases as the anisotropy index increases. Such results are reasonable because the theoretical model is developed based on the Born approximation which is expected to gradually fail with the increase of scattering intensity. In addition, it is shown in Fig. <ref> that theoretical values are always smaller than FE results in the large k_R d region, which means that the Born approximation would always result in an underestimation for the A_rms values in the large k_R d region. Meanwhile, we note that all the simulation results on high regions seem flat with fewer fluctuations than theoretical predictions. A possible reason for the difference is that simulation numbers are insufficient; however, a compromise must be made between accuracy and computational time. Now, we discuss the quantitative connection between the backscattering amplitude of Rayleigh waves with frequency. Figure <ref> demonstrates the logarithm of the rms backscattering amplitude A_rms versus the logarithm of normalised frequency k_R d for the aluminium material. Meanwhile, the comparison between the 2D R-R scattering, 2D bulk wave scattering<cit.> and 3D bulk wave scattering <cit.> including longitudinal-to-longitudinal (L-L) scattering and transverse-to-transverse (T-T) scattering, is also performed. It can be clearly seen that the quantitative relationship between the backscattering amplitude and the normalised frequency for 2D Rayleigh waves and 2D bulk waves is similar. For the Rayleigh waves, the backscattering amplitude is proportional to one and a half power of frequency for wavelengths much larger than the average grain size or comparable to the size of the average grain (k_R d < 1). For shorter wavelengths (k_R d > 10^1), the backscattering amplitude saturates and becomes independent of frequency. Moreover, the backscattering amplitude of Rayleigh waves is obviously larger than that of bulk waves(2D scattering and 3D scattering). It implies that there is a stronger scattering for 2D Rayleigh waves, which is significant for practical applications, such as, the potential application for the grain size measurement with more sensitivity. Meanwhile, it is hoped that these findings will be useful for future studies of 3D Rayleigh wave scattering and that they may lay the groundwork for developing an approach to achieve efficient 2D models which are usefully representative of 3D phenomena. §.§ Prediction of grain noise measured with plane Rayleigh wave excitation In this section, the grain noise generated with plane Rayleigh wave excitation is investigated with the numerical method, which is used to compare with the theoretical model given in Sec. <ref> (Eq. <ref>). The models shown in Table <ref> are still used in this section. All setup of simulations is similar to that we mentioned above. 
The difference from the above simulations is that the plane Rayleigh waves now propagate directly on the surface of the polycrystalline material, as shown in Fig. <ref>(a). For each case discussed in this section, five realisations with random crystallographic orientations of the model are run. An incoherent average (rms) is taken over the signals received by the receiver (point `RT' in Fig. <ref>(a)), and this rms signal is used in the following discussion. The processing of the backscattered signals is also indicated in Fig. <ref>. Figure <ref>(b1) shows the Rayleigh wave field in the model after exciting the source nodes with a signal of 1 MHz centre frequency. It can be seen that, in addition to Rayleigh wave scattering, some bulk wave scattering is also present. However, the scattered Rayleigh wave is about 100 times stronger than the scattered longitudinal wave, which means that the contribution of longitudinal waves is negligible. Moreover, the selected time range can be controlled to reduce the effect of shear-wave scattering on the final results, as discussed next. The signal containing the reference Rayleigh waves and the backscattered grain noise caused by the multiple grains is illustrated in Fig. <ref>(b2) in the time domain. The respective time/frequency-domain amplitude spectra for the reference signal and the backscattered grain noise are displayed in Figs. <ref>(c) and (d). The reference signal U_ref(t) and the backscattered grain noise U_Gs(t) are Fourier transformed into the frequency domain to obtain the spectra U_ref(f) and U_Gs(f). The frequency-dependent normalised grain noise for the j-th realisation is then calculated by N^j(f) = U_Gs^j(f) / U_ref^j(f) and N_rms^FE(f) = RMS[N^j(f)] / √(l_2), where l_2 is the length corresponding to the selected grain noise. N_rms^FE(f) will be used to evaluate the theoretical model result, N_rms(ω), in what follows. Figure <ref> shows the rms signals in the time domain under 1, 2, and 4 MHz excitation. Aluminium is considered here. To highlight the grain noise, the reference transmitted waves are clipped in the figure. It can be seen that the grain noise is independent of time at the lower frequency (1 MHz), while it decreases with time at the relatively higher frequency (4 MHz), which is an inherent indication of multiple scattering effects. Therefore, in order to reduce the bulk wave scattering and multiple scattering effects, the FE results received in an appropriate early time range are used later for comparison with the theoretical predictions. For example, in the 1 MHz case, in order to avoid the shear scattering in the very early time range (yellow rectangle) and the multiple scattering in the later time range (green rectangle), the early time range from 60 to 80 μs (pink rectangle) is used to obtain the grain noise signal. Similarly, 30–40 μs and 15–20 μs are used for 2 MHz and 4 MHz, respectively. Figure <ref>(a) compares the FE and theoretically predicted grain noise for the aluminium material. The centre frequency used in the FE simulations is in the range from 1 to 4 MHz. The grey curve shows the theoretical predictions for the aluminium material, and the coloured points are the numerical grain noise. The figure shows good agreement between the two predictions at smaller k_R d. As k_R d increases, a larger discrepancy between the theoretical prediction and the numerical results appears.
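The gating and grain-noise processing described above can be sketched in the same way. The gate limits below are the values quoted in the text; everything else (names, shapes, the NumPy-based workflow) is our assumption rather than the authors' implementation.

```python
import numpy as np

# Early-time gates (in seconds) quoted in the text for each centre frequency.
GATES = {1e6: (60e-6, 80e-6), 2e6: (30e-6, 40e-6), 4e6: (15e-6, 20e-6)}

def rms_grain_noise(u_ref, u_gs, dt, centre_freq, l2):
    """u_ref, u_gs : arrays of shape (n_realisations, n_samples) with the
    reference signals and the backscattered grain-noise signals.
    dt : FE output time step; l2 : length corresponding to the selected grain noise."""
    t = np.arange(u_gs.shape[1]) * dt
    t0, t1 = GATES[centre_freq]
    gate = (t >= t0) & (t <= t1)              # keep only the early-time window
    u_gated = np.where(gate, u_gs, 0.0)       # suppress shear and multiple-scattering contributions
    U_ref = np.fft.rfft(u_ref, axis=1)        # spectra U_ref^j(f)
    U_gs = np.fft.rfft(u_gated, axis=1)       # spectra U_Gs^j(f)
    N = np.abs(U_gs / U_ref)                  # N^j(f) per realisation j
    freq = np.fft.rfftfreq(u_gs.shape[1], d=dt)
    return freq, np.sqrt(np.mean(N ** 2, axis=0)) / np.sqrt(l2)
```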
This growing discrepancy can be explained by the gradual failure of the Born approximation and by multiple scattering, which cannot be neglected at larger k_R d, as discussed for Fig. <ref>. Meanwhile, we emphasise that when only R-R single scattering is considered for the aluminium material (as shown in Fig. <ref>(a)), the theoretical model still works well even at k_R d = 2π (i.e. d = λ_R, where λ_R is the wavelength of Rayleigh waves), whereas the case with multiple scattering loses accuracy at k_R d = 1.2. This means that multiple scattering is an important contribution to the scattering behaviour of Rayleigh waves, and it is ignored by the theoretical model. Furthermore, to observe how the agreement changes with anisotropy, comparisons are performed for the aluminium, A = 1.5, and Inconel materials, which have anisotropy indices of 1.24, 1.52, and 2.83, as shown in Fig. <ref>(b). We note that the y-axis range in (b) is three times that in (a). It is clearly demonstrated that the agreement between the theoretical and FE results decreases as the anisotropy index increases. Such results are reasonable because of the Born approximation and multiple scattering, which have been discussed above. In addition, the theoretical results always overestimate the grain noise in the larger k_R d region. As mentioned before, the Born approximation always leads to an underestimation of the backscattering amplitude; therefore, this overestimation is not caused by the Born approximation. In fact, when the Rayleigh waves propagate along the surface, the energy of the coherent wave in the FE model decreases with increasing distance from the observation point due to the effect of multiple scattering. This implies that the backscattered wave energy from grains far from the observation point decreases. As a result, the backscattered energy in different cross sections along the propagation direction is not equal, contrary to what is assumed in the IS approximation. Meanwhile, it should be emphasised that multiple scattering has a larger effect on the accuracy of the theoretical model than the Born approximation. In this section, the rms backscattering amplitudes scattered from single grains with different shapes and random orientations were discussed in Sec. <ref>, and the backward grain noise excited and captured was predicted with the theoretical model and FE approaches in Sec. <ref>. The results imply good agreement between the theoretical predictions and the FE results. The Born approximation always leads to an underestimation of the theoretical backscattering amplitude. In addition, multiple scattering has little influence on the grain noise level as long as the anisotropy factor is relatively low and the backscattering is weak. However, with a relatively high anisotropy level or a larger normalised frequency, discrepancies are observed, indicating the occurrence of multiple scattering. Meanwhile, the presence of multiple scattering makes the FE results smaller than those of the theory. We note that ignoring multiple scattering results in a larger difference between the theoretical predictions and the FE results than that caused by the Born approximation. However, the effects of multiple scattering and the Born approximation are intertwined, which makes it difficult to quantify how small k_R d should be for the theoretical model to be valid.
Therefore, only a qualitative discussion of how these two factors affect the theoretical model has been made. § CONCLUSION In this work, we developed a 2D theoretical model for Rayleigh-to-Rayleigh backscattering in untextured polycrystalline materials with equiaxed grains. The model is formulated in the frequency domain based on the Born approximation. An FE model is established to provide relatively accurate reference data for evaluating the approximations of the theoretical model. The comparison of the theoretical and FE results led to the following main conclusions: 1. Good agreement between the FE and theoretical backscattering amplitude predictions is seen in the case with only R-R single scattering. As the anisotropy index increases, the discrepancy grows as a result of the use of the Born approximation. 2. The backscattering amplitude is proportional to the 1.5 power of frequency when the wavelength is comparable to, or much larger than, the average grain size, i.e. k_R d < 1. The backscattering amplitude is independent of frequency when the wavelength is smaller than the average grain size (k_R d > 10). The quantitative relationship between the normalised frequency and the backscattering amplitude is similar to that of 2D bulk wave backscattering. 3. When weak multiple scattering is considered, the agreement between the theoretical model and the FE results remains excellent. The discrepancy seen in the highly scattering case (larger anisotropy index or wavelength smaller than the average grain size) reflects the larger effect of ignoring multiple scattering, rather than the Born approximation, on the theoretical backscattering predictions. 4. FE modelling is attractive for describing the backscattering noise behaviour because it does not involve some of the important simplifying assumptions included in the theoretical model and can therefore capture the multiple scattering that is ignored in the theory. Generally speaking, we have demonstrated the applicability of our theoretical model to evaluate the backscattering behaviour of Rayleigh waves on a polycrystalline material with single-phase, untextured, and equiaxed grains. We have employed FE simulations as perfectly controlled experiments, where the material properties and configurations are user-defined and accurate, to successfully validate the theoretical predictions. Future studies will focus on experimental verification, with the aim of utilising this mathematical model for material characterisation in practice. The potential application areas include developing a Rayleigh wave scattering model for polycrystals with elongated grains, performing the experimental inversion of grain size, evaluating the scattering attenuation of Rayleigh waves, and characterising the grain size variation in the depth direction. § ACKNOWLEDGEMENTS This work was supported by the China Scholarship Council and the National Natural Science Foundation of China (Grant No. 92060111). BL gratefully acknowledges the Imperial College Research Fellowship. BL and MH thank the NDE group at Imperial and the EPSRC grant EP/W014769/1 for their generous funding. § THE TRANSFORMATION OF THE INTEGRAL REGION WITH VARIABLE CHANGE The change of variables is written as τ = (𝐱_s + 𝐱)/2, 𝐫 = 𝐱_s - 𝐱. Then the following component relations are straightforward: τ_z = (z_s + z)/2, r_z = z_s - z, τ_x = (x_s + x)/2, r_x = x_s - x.
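A small point implicit in this change of variables (our addition, for completeness): the Jacobian of the transformation is unity, so no extra factor enters the transformed integral. For the z-components,

\[
\left|\frac{\partial(\tau_z, r_z)}{\partial(z, z_s)}\right|
= \left|\det\begin{pmatrix} 1/2 & 1/2 \\ -1 & 1 \end{pmatrix}\right| = 1,
\qquad
\mathrm{d}z\,\mathrm{d}z_s\,\mathrm{d}x\,\mathrm{d}x_s
= \mathrm{d}\tau_z\,\mathrm{d}r_z\,\mathrm{d}\tau_x\,\mathrm{d}r_x ,
\]

and likewise for the x-components.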
The limits before the variable change are expressed by 0 < z < +∞, 0 < z_s < +∞, -∞ < x < +∞, -∞ < x_s < +∞. The detailed calculation of the limits after the transformation of the integral area is shown in Fig. <ref>. It is clear that the limit change can be written as 0 < τ_z < +∞, -∞ < τ_x < +∞, -∞ < r_z < +∞, -∞ < r_x < +∞.
§ REFERENCES
[1] M. Ryzy, T. Grabec, J. A. Österreicher, M. Hettich, I. A. Veres, Measurement of coherent surface acoustic wave attenuation in polycrystalline aluminum, AIP Advances 8 (2018) 125019.
[2] T. Grabec, I. A. Veres, M. Ryzy, Surface acoustic wave attenuation in polycrystals: Numerical modeling using a statistical digital twin of an actual sample, Ultrasonics 119 (2022) 106585.
[3] F. Margetan, R. Thompson, I. Yalda-Mooshabad, Backscattered microstructural noise in ultrasonic toneburst inspections, Journal of Nondestructive Evaluation 13 (1994) 111–136.
[4] Y. Han, R. Thompson, I. Yalda, F. Margetan, A. Anderson, M. Hirao, J. Root, Effect of texture on ultrasonic backscattering coefficient in pure titanium plate, Review of Progress in Quantitative Nondestructive Evaluation: Volume 15A (1996) 1685–1691.
[5] J. H. Rose, Theory of ultrasonic backscatter from multiphase polycrystalline solids, Review of Progress in Quantitative Nondestructive Evaluation: Volumes 12A and 12B (1993) 1719–1726.
[6] F. Margetan, R. Thompson, I. Yalda-Mooshabad, Y. Han, Detectability of small flaws in advanced engine alloys, Technical Report (1993).
[7] J. H. Rose, Ultrasonic backscattering from polycrystalline aggregates using time-domain linear response theory, Review of Progress in Quantitative Nondestructive Evaluation: Volume 10B (1991) 1715–1720.
[8] J. H. Rose, Ultrasonic backscatter from microstructure, Review of Progress in Quantitative Nondestructive Evaluation: Volume 11B (1992) 1677–1684.
[9] F. Margetan, T. Gray, R. B. Thompson, A technique for quantitatively measuring microstructurally induced ultrasonic noise, Review of Progress in Quantitative Nondestructive Evaluation: Volume 10B (1991) 1721–1728.
[10] F. Margetan, R. B. Thompson, I. Yalda-Mooshabad, Modeling ultrasonic microstructural noise in titanium alloys, Review of Progress in Quantitative Nondestructive Evaluation: Volumes 12A and 12B (1993) 1735–1742.
[11] G. Ghoshal, J. A. Turner, R. L. Weaver, Wigner distribution of a transducer beam pattern within a multiple scattering formalism for heterogeneous solids, The Journal of the Acoustical Society of America 122 (2007) 2009–2021.
[12] G. Ghoshal, J. A. Turner, Diffuse ultrasonic backscatter at normal incidence through a curved interface, The Journal of the Acoustical Society of America 128 (2010) 3449–3458.
[13] P. Hu, C. M. Kube, L. W. Koester, J. A. Turner, Mode-converted diffuse ultrasonic backscatter, The Journal of the Acoustical Society of America 134 (2013) 982–990.
[14] P. Hu, J. A. Turner, Transverse-to-transverse diffuse ultrasonic scattering, The Journal of the Acoustical Society of America 142 (2017) 1112–1120.
[15] P. Hu, J. A. Turner, Contribution of double scattering in diffuse ultrasonic backscatter measurements, The Journal of the Acoustical Society of America 137 (2015) 321–334.
[16] Y. Huang, J. A. Turner, Y. Song, X. Li, Transverse-to-transverse diffuse ultrasonic double scattering, Ultrasonics 111 (2021) 106301.
[17] O. Lobkis, L. Yang, J. Li, S. Rokhlin, Ultrasonic backscattering in polycrystals with elongated single phase and duplex microstructures, Ultrasonics 52 (2012) 694–705.
[18] L. Yang, S. Rokhlin, Ultrasonic backscattering in cubic polycrystals with ellipsoidal grains and texture, Journal of Nondestructive Evaluation 32 (2013) 142–155.
[19] A. P. Arguelles, C. M. Kube, P. Hu, J. A. Turner, Mode-converted ultrasonic scattering in polycrystals with elongated grains, The Journal of the Acoustical Society of America 140 (2016) 1570–1580.
[20] Y. Zhang, R. L. Weaver, Leaky Rayleigh wave scattering from elastic media with random microstructures, The Journal of the Acoustical Society of America 99 (1996) 88–99.
[21] I. Kaganova, A. Maradudin, Surface acoustic waves on a polycrystalline substrate, Physica Scripta 1992 (1992) 104.
[22] S. Li, Y. Song, X. Li, Attenuation and dispersion of leaky Rayleigh wave in polycrystals, The Journal of the Acoustical Society of America 152 (2022) 3271–3280.
[23] S. Li, M. Huang, Y. Song, B. Lan, X. Li, Theoretical and numerical modeling of Rayleigh wave scattering by an elastic inclusion, The Journal of the Acoustical Society of America 153 (2023) 2336–2350.
[24] X. Bai, B. Tie, J.-H. Schmitt, D. Aubry, Finite element modeling of grain size effects on the ultrasonic microstructural noise backscattering in polycrystalline materials, Ultrasonics 87 (2018) 182–202.
[25] G. Ghoshal, J. A. Turner, Diffuse ultrasonic backscatter in a two-dimensional domain, Acta Mechanica 205 (2009) 35–49.
[26] Y. Liu, A. Van Pamel, P. B. Nagy, P. Cawley, Investigation of ultrasonic backscatter using three-dimensional finite element simulations, The Journal of the Acoustical Society of America 145 (2019) 1584–1595.
[27] M. Ryzy, T. Grabec, I. Veres, Finite element modeling of surface acoustic wave propagation in polycrystalline aluminium: effective phase velocity, in: Forum Acusticum, 2020, pp. 1833–1838.
[28] I. Viktrov, Rayleigh and Lamb waves: physical theory and applications, Springer New York, NY, 1967.
[29] C. M. Kube, J. A. Turner, Voigt, Reuss, Hill, and self-consistent techniques for modeling ultrasonic scattering, in: AIP Conference Proceedings, volume 1650, American Institute of Physics, 2015, pp. 926–934.
[30] R. Snieder, 3-D linearized scattering of surface waves and a formalism for surface wave holography, Geophysical Journal International 84 (1986) 581–605.
[31] K. Aki, P. G. Richards, Quantitative seismology, University Science, 2002.
[32] F. E. Stanke, G. S. Kino, A unified theory for elastic wave propagation in polycrystalline materials, The Journal of the Acoustical Society of America 75 (1984) 665–681.
[33] R. L. Weaver, Diffusivity of ultrasound in polycrystals, Journal of the Mechanics and Physics of Solids 38 (1990) 55–86.
[34] A. Van Pamel, G. Sha, M. J. Lowe, S. I. Rokhlin, Numerical and analytic modelling of elastodynamic scattering within polycrystalline materials, The Journal of the Acoustical Society of America 143 (2018) 2394–2408.
[35] C. M. Kube, J. A. Turner, Acoustic attenuation coefficients for polycrystalline materials containing crystallites of any symmetry class, The Journal of the Acoustical Society of America 137 (2015) EL476–EL482.
[36] C.-S. Man, R. Paroni, Y. Xiang, E. A. Kenik, On the geometric autocorrelation function of polycrystalline materials, Journal of Computational and Applied Mathematics 190 (2006) 200–210.
[37] R. B. Thompson, T. Gray, A model relating ultrasonic scattering measurements through liquid–solid interfaces to unbounded medium scattering amplitudes, The Journal of the Acoustical Society of America 74 (1983) 1279–1290.
[38] A. Van Pamel, C. R. Brett, P. Huthwaite, M. J. Lowe, Finite element modelling of elastic wave scattering within a polycrystalline material in two and three dimensions, The Journal of the Acoustical Society of America 138 (2015) 2326–2336.
[39] A. Van Pamel, G. Sha, S. I. Rokhlin, M. J. Lowe, Finite-element modelling of elastic wave propagation and scattering within heterogeneous media, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 473 (2017) 20160738.
[40] M. Huang, G. Sha, P. Huthwaite, S. Rokhlin, M. Lowe, Longitudinal wave attenuation in polycrystals with elongated grains: 3D numerical and analytical modeling, The Journal of the Acoustical Society of America 149 (2021) 2377–2394.
[41] R. Quey, P. Dawson, F. Barbe, Large-scale 3D random polycrystals for the finite element method: Generation, meshing and remeshing, Computer Methods in Applied Mechanics and Engineering 200 (2011) 1729–1745.
[42] M. Huang, P. Huthwaite, S. I. Rokhlin, M. J. Lowe, Finite-element and semi-analytical study of elastic wave propagation in strongly scattering polycrystals, Proceedings of the Royal Society A 478 (2022) 20210850.
[43] M. Ryzy, T. Grabec, P. Sedlák, I. A. Veres, Influence of grain morphology on ultrasonic wave attenuation in polycrystalline media with statistically equiaxed grains, The Journal of the Acoustical Society of America 143 (2018) 219–229.
[44] M. Huang, G. Sha, P. Huthwaite, S. Rokhlin, M. Lowe, Maximizing the accuracy of finite element simulation of elastic wave propagation in polycrystals, The Journal of the Acoustical Society of America 148 (2020) 1890–1910.
[45] D. P. Bourne, P. J. Kok, S. M. Roper, W. D. Spanjer, Laguerre tessellations and polycrystalline microstructures: a fast algorithm for generating grains of given volumes, Philosophical Magazine 100 (2020) 2677–2707.
[46] R. Quey, L. Renversade, Optimal polyhedral description of 3D polycrystals: Method and application to statistical and synchrotron X-ray diffraction data, Computer Methods in Applied Mechanics and Engineering 330 (2018) 308–333.
[47] P. Rajagopal, M. Drozdz, E. A. Skelton, M. J. Lowe, R. V. Craster, On the use of absorbing layers to simulate the propagation of elastic waves in unbounded isotropic media using commercially available finite element packages, NDT & E International 51 (2012) 30–40.
[48] G. Sarris, S. G. Haslinger, P. Huthwaite, P. B. Nagy, M. J. Lowe, Attenuation of Rayleigh waves due to surface roughness, The Journal of the Acoustical Society of America 149 (2021) 4298–4308.
[49] P. Huthwaite, Accelerated finite element elastodynamic simulations using the GPU, Journal of Computational Physics 257 (2014) 687–707.
[50] N. Y. Gnedin, V. A. Semenov, A. V. Kravtsov, Enforcing the Courant–Friedrichs–Lewy condition in explicitly conservative local time stepping schemes, Journal of Computational Physics 359 (2018) 93–105. arXiv:1801.03108.
[51] M. R. Spiegel, J. J. Schiller, R. A. Srinivasan, Probability and statistics: Schaum's outlines, McGraw-Hill, 2013.
[52] G. Sha, S. Rokhlin, Universal scaling of transverse wave attenuation in polycrystals, Ultrasonics 88 (2018) 84–96.
[53] M. Huang, S. I. Rokhlin, M. J. S. Lowe, Appraising scattering theories for polycrystals of any symmetry using finite elements, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 380 (2022) 20210382.
[54] Y. Guo, Effects of material microstructure and surface geometry on ultrasonic scattering and flaw detection, Ph.D. thesis, Iowa State University, 2003.
[55] J. Gubernatis, E. Domany, J. Krumhansl, Formal aspects of the theory of the scattering of ultrasound by flaws in elastic materials, Journal of Applied Physics 48 (1977) 2804–2811.
http://arxiv.org/abs/2307.01869v1
20230704182800
Ranking-Based Second Stage in Data Envelopment Analysis: An Application to Research Efficiency in Higher Education
[ "Vladimír Holý" ]
stat.AP
[ "stat.AP" ]
Ranking-Based Second Stage in Data Envelopment Analysis: An Application to Research Efficiency in Higher Education Vladimír Holý Prague University of Economics and Business Winston Churchill Square 4, 130 67 Prague 3, Czechia mailto:vladimir.holy@vse.czvladimir.holy@vse.cz Abstract: An alternative approach for the panel second stage of data envelopment analysis (DEA) is presented in this paper. Instead of efficiency scores, we propose to model rankings in the second stage using a dynamic ranking model in the score-driven framework. We argue that this approach is suitable to complement traditional panel regression as a robustness check. To demonstrate the proposed approach, we determine research efficiency in the higher education sector by examining scientific publications and analyze its relation to good governance. The proposed approach confirms positive relation to the Voice and Accountability indicator, as found by the standard panel linear regression, while suggesting caution regarding the Government Effectiveness indicator. Keywords: Two-Stage DEA, Ranking Model, Score-Driven Model, Research & Development. JEL Codes: C33, C44, I23, O32. § INTRODUCTION In operations research, data envelopment analysis (DEA) is a non-parametric method used to measure the relative efficiency of decision-making units (DMUs) that convert inputs into outputs. It compares DMUs by calculating their efficiency scores based on a set of inputs and outputs. The method has been widely applied in the fields of agriculture, education, energy, finance, government, healthcare, manufacturing, retail, sport, and transportation. In DEA research, it is common to follow the efficiency measurement with a second-stage regression analysis that uses efficiency scores as the dependent variable and includes contextual (or environmental) variables as independent variables. This approach is known as two-stage DEA. In many cases, efficiency is assessed annually, which may require a panel regression as the second-stage model to account for time-varying contextual variables. The most frequently employed panel methods for the second stage are panel linear regression (see, e.g., ) and panel Tobit regression (see, e.g., ). In linear regression, log transformations of efficiency scores are often used (see, e.g., ). Other panel methods include panel quantile regression (see, e.g., ), panel fractional regression (see, e.g., ), and panel beta regression (see, e.g., ). The standard two-stage DEA has been subject to criticism by <cit.>, <cit.>, and <cit.>. The criticisms mainly stem from three issues: (1) correlation among the estimated efficiency scores due to the complex structure of the data generating process, (2) the use of estimated efficiency scores as dependent variable instead of the true unobserved efficiency scores, and (3) the potential inseparability between the frontier production and the impact of contextual variables. These issues can significantly affect the validity of inference. When dealing with repeated assessement of efficiency, there is also the issue of temporal dependence. Nevertheless, some authors such as <cit.>, <cit.>, and <cit.> argue for the use of linear regression. For a survey on statistical approaches in nonparametric frontier models, see <cit.>. In this paper, we present an alternative approach for the panel second stage of DEA. Instead of modeling efficiency scores, we propose to model the rankings. 
In the recent literature, <cit.> developed a time series model for rankings that utilize the Plackett–Luce distribution and incorporates autoregressive and score dynamics. This model is based on the modern framework of score-driven models introduced by <cit.> and <cit.>. While <cit.> applied the model to the results of the Ice Hockey World Championships, they also suggested its potential use in the second stage of DEA. Following this call, we devote this paper to exploring the use of this dynamic ranking model in DEA. The motivation for using the score-driven dynamic ranking model in the second stage of DEA arises from the following properties: * Relevance of Rankings. Rankings preserve the important information of mutual comparison among DMUs. In certain scenarios, the primary objective of DEA may even be to obtain rankings of DMUs, in which case modeling rankings directly is more appropriate. The long-term behavior of DMUs may also be of interest, in which case the long-term ranking may have a clearer interpretation than an aggregate of efficiency scores. * Robustness to DEA Model. Consider two DEA models: the super-efficiency DEA model of <cit.> and the universal DEA model of <cit.>, both with either constant returns to scale (CRS) of <cit.> or variable returns to scale (VRS) of <cit.>. Despite producing different efficiency scores, these models generate the exact same ranking. By modeling rankings instead of efficiency scores in the second stage, any differences between these models are eliminated. An additional consideration when modeling efficiency scores is whether to use the logarithmic transformation. However, since the log transformation preserves rankings, this is not a concern when using a ranking model. * Robustness to Outliers. Outliers, in the form of extreme values of efficiency scores, can significantly influence the coefficients in a second-stage regression model. However, using rankings can mitigate this issue, as a DMU with an extremely low or high efficiency score would simply be ranked last or first, respectively. Thus, a ranking model can effectively handle such outliers. * Simple yet Powerful. The model of <cit.> is straightforward to work with. The Plackett–Luce distribution, unlike its alternatives, is available in a closed form (see ) and the dynamics are observation-driven (see ). As a result, the model can be estimated using the maximum likelihood method, and conventional Hessian-based standard errors can be used. Moreover, the model only requires a modest number of parameters, consisting of individual effects of DMUs, regression coefficients common for all DMUs, and two additional parameters controling dynamics common for all DMUs. Our approach also faces the following limitations: * Loss of Information. While using rankings instead of efficiency scores can provide robustness to DEA model and outliers (as discussed above), it also leads to loss of information. This loss can be beneficial in some scenarios, but it is still important to recognize that it occurs. One drawback of using rankings alone is that it is not possible to determine the boundary between inefficient and efficient DMUs. Efficiency scores, on the hand, provide a clear distinction between the two groups. * Different Data Generating Process. Our approach does not address the criticism of <cit.>, <cit.>, and <cit.>. Indeed, the dependence between the DMUs is not captured by the Plackett–Luce distribution, which assumes the property known as the independence from irrelevant alternatives. 
The data generating process assumed by the model of <cit.> is much simpler then the true one generated by DEA. * Absence of Ties. The model of <cit.> has a limitation in that it does not allow for rankings with ties. This means that in the second stage, we need to use a suitable DEA model that can rank all DMUs, including the efficient ones. However, this can be addressed by extending the Plackett-Luce distribution to incorporate ties, as demonstrated by <cit.>. * Sufficient Variation in Rankings. A single realization of efficiency scores is often used in a second stage regression model. A single ranking is, however, not enough for a meaningful analysis. Repeated rankings are therefore needed, which naturally take the form of panel data. Our approach is therefore suitable only when the time dimension is present. Even with repeated rankings, however, the Plackett–Luce distribution requires that for any possible partition of DMUs into two non-empty subsets, there exists at least one DMU in the second subset that is ranked higher than at least one DMU in the first subset (see ). Our approach is fundamentally different from traditional panel regressions, but it is not intended to replace them. Particularly when it suffers from the same shortcomings highlighted by <cit.>, <cit.>, and <cit.>. Instead, our approach is best used as a complement to traditional panel regressions to provide valuable insights that are not burdened by the problems specific to efficiency scores. This can be viewed as a form of robustness check, where both approaches are used to provide a more complete picture of the data. Given the controversies surrounding the second stage DEA, conducting extensive robustness checks is crucial for ensuring the reliability and validity of the results. DEA practitioners who wish to utilize the dynamic ranking model can do so easily using the R package, which offers all the necessary tools for estimation, forecasting, and simulation. As an illustration of the proposed approach, we explore the research efficiency in higher education of European Union (EU) countries through the analysis of scientific publications in 2005–2020. In the first stage, we perform DEA analysis for each year independently. We use gross domestic expenditure on R&D and the number of researchers as inputs to reflect the financial and human resources, respectively. For outputs, we use the number of publications and the number of citations to reflect the quantity and quality of scientific research, respectively. In the second stage, we investigate the influence of good governance on the research efficiency. As contextual variables, we use the six Worldwide Governance Indicators (WGI) of <cit.>, together with the gross domestic product (GDP). We perform panel linear regression analysis of efficiency scores obtained by three DEA models proposed by <cit.>, <cit.>, and <cit.>, along with the dynamic ranking model of <cit.>. All models uncover that the Voice and Accountability indicator is significantly positively correlated with research efficiency suggesting that participation in selecting the government, freedom of expression, freedom of association, and freedom of media are key factors of governance influencing research efficiency. The Government Effectiveness indicator has also positive effect, however, its significance is not confirmed by all models and this result is therefore not robust. No other significant relations are found. 
By utilizing the proposed approach in this study, we are able to assess the robustness of the relationship to the Voice and Accountability indicator. However, the results also indicate that caution is needed in interpreting the findings related to the Government Effectiveness indicator. Therefore, conducting extensive robustness checks such as this one is important to increase the reliability of the analysis and prevent misleading conclusions. The rest of the paper is structured as follows. In Section <ref>, we present three DEA models proposed by <cit.>, <cit.>, and <cit.>, which are utilized in the subsequent analysis. In Section <ref>, we present details on the dynamic ranking model of <cit.> and its estimation, along with some modifications suitable to our case. In Section <ref>, we conduct an empirical study to examine research efficiency in higher education and compare the proposed ranking approach with the traditional panel regression approach. We conclude the paper in Section <ref>. § FIRST STAGE: MEASURING EFFICIENCY The first stage of DEA involves determining the relative efficiency scores of the DMUs. The number of DMUs is denoted by N. Each DMU transforms I inputs into J outputs. Let x_ni denote the i-th input of the n-th DMU, and y_nj denote the j-th output of the n-th DMU. The matrix of inputs is denoted by X = (x_ni)_n=1,i=1^N,I, while the matrix of outputs is denoted by Y = (y_nj)_n=1,j=1^N,J. The inputs of a single DMU n are denoted by x_n = (x_n1, …, x_nI)^⊺, and the outputs of a DMU n are denoted by y_n = (y_n1, …, y_nJ)^⊺. The notation X_-n represents the inputs of every DMU but n, while Y_-n represents the outputs of every DMU but n. §.§ Basic DEA <cit.> proposed the very first DEA model, which has since become one of the most widely used DEA models to date. This model is commonly referred to as the CCR model and is based on the assumption of constant returns to scale (CRS). The efficiency scores θ^CCR_n are found for each DMU n by the following linear program: θ^CCR_n = max_u, v y_n^⊺ u subject to x_n^⊺ v ≤ 1, Y u - X v ≤ 0, u ≥ 0, v ≥ 0, where u and v are vectors of weights for the outputs and inputs, respectively. The efficiency scores for inefficient DMUs lie in [0, 1) and are equal to 1 for efficient DMUs. §.§ Super-Efficiency DEA A shortcoming of the CCR model is that it cannot differentiate between efficient DMUs, which can lead to the loss of valuable information. <cit.> proposed a super-efficiency DEA model to overcome this limitation. In this model, the DMU under evaluation is excluded from the set of benchmarks, which allows efficient DMUs to achieve scores greater than 1. The super-efficiency model with CRS (labeled as the AP model) is given by the following linear program: θ^AP_n = max_u, v y_n^⊺ u subject to x_n^⊺ v ≤ 1, Y_-n u - X_-n v ≤ 0, u ≥ 0, v ≥ 0. The efficiency scores for inefficient DMUs are the same as those obtained from the CCR model, while the scores for efficient DMUs are greater than or equal to 1.
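As an illustrative aside before turning to the universal model: the CCR and AP multiplier programs above are ordinary linear programs and can be solved with any LP solver. The sketch below is ours, not the paper's (whose computations were carried out in R); the function and variable names are placeholders, and it uses scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, n):
    """Multiplier form of the CCR model for DMU n.
    X : (N, I) matrix of inputs, Y : (N, J) matrix of outputs."""
    N, I = X.shape
    J = Y.shape[1]
    c = np.concatenate([-Y[n], np.zeros(I)])           # maximise y_n' u (minimise its negative)
    A_ub = np.vstack([
        np.concatenate([np.zeros(J), X[n]])[None, :],  # x_n' v <= 1
        np.hstack([Y, -X]),                            # Y u - X v <= 0
    ])
    b_ub = np.concatenate([[1.0], np.zeros(N)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)             # default bounds enforce u, v >= 0
    return -res.fun                                    # theta_n^CCR

# For the AP (super-efficiency) model, drop row n from the Y u - X v <= 0 block.
```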
The universal DEA model with CRS (labeled as the H model) is given by the following linear program: θ^H_n = max_δ, u, v 1 + δ subject to y_n^⊺ u ≥ 1 + δ, x_n^⊺ v ≤ 1 - δ, Y_-n u - X_-n v ≤ 0, u ≥ 0, v ≥ 0. Note that <cit.> also proposed a nonlinear DEA model based on the Chebyshev norm, to which (<ref>) is a tight approximation. The efficiency scores for inefficient DMUs lie in [0, 1), while the scores for efficient DMUs lie in [1, 2]. The universal DEA model is closely related to the super-efficiency DEA model of <cit.>. <cit.> showed that the ranking of DMUs according to θ^AP_n is the same as the ranking according to θ^H_n. However, the models are even more connected, as the efficiency scores themselves can be derived by the following transformations: θ_n^H = 2 θ_n^AP / (1 + θ_n^AP), θ_n^AP = θ_n^H / (2 - θ_n^H). Applications of the universal DEA model include <cit.>, <cit.>, and <cit.>. § SECOND STAGE: MODELING DYNAMIC RANKINGS The second stage of DEA involves identifying the factors that affect efficiency scores and measuring their impact. We assume periodic evaluation of the efficiency of the same set of DMUs at times t = 1, …, T with efficiency scores θ_t = (θ_1t, …, θ_Nt)^⊺. In this paper, we propose to model rankings of DMUs, instead of their efficiency scores as is usual in the second-stage DEA. Let R_t(n) denote the rank of a DMU n according to the efficiency scores θ_t at time t. The complete ranking at time t is then denoted by R_t = (R_t(1), …, R_t(N))^⊺. The inverse of this ranking is the ordering O_t = (O_t(1), …, O_t(N))^⊺ at time t, where O_t(r) represents the DMU with rank r at time t. We employ the dynamic ranking model of <cit.>. §.§ Plackett–Luce Distribution We assume that at each time t the ranking R_t follows the Plackett–Luce distribution proposed by <cit.> and <cit.>. In the ranking literature, it is a widely used probability distribution for random variables in the form of permutations. Each DMU n at each time t has a worth parameter w_nt ∈ ℝ reflecting its rank at time t. The probability of a higher rank increases with a higher worth parameter value. Specifically, the probability mass function is given by f(R_t | w_t) = ∏_r=1^N [ exp(w_O_t(r)t) / ∑_s=r^N exp(w_O_t(s)t) ]. In other words, a ranking is iteratively constructed by selecting the best DMU, followed by the second best, the third best, and so on. At each stage, the probability of selecting a particular DMU is proportional to the exponential of its worth parameter divided by the sum of the exponentials of the worth parameters of all DMUs that have not been selected yet. The log-likelihood function is given by ℓ(w_t | R_t) = ∑_n=1^N w_nt - ∑_r=1^N ln( ∑_s=r^N exp(w_O_t(s)t) ). The score (i.e. the gradient of the log-likelihood function) is given by ∇_n(w_t | R_t) = 1 - ∑_r=1^R_t(n) [ exp(w_nt) / ∑_s=r^N exp(w_O_t(s)t) ], n = 1, …, N. The Plackett–Luce distribution is based on Luce's choice axiom, which states that the probability of selecting one item over another from a set of items is not influenced by the presence or absence of other items in the set (see ). This property of choice is known as independence from irrelevant alternatives. Clearly, this property is not met in the case of DEA, as the addition or removal of DMUs from the set can influence the efficiency scores and even the rankings of the other DMUs. As in the case of many second-stage models, the proposed dynamic ranking model therefore does not conform to the complex data generating process of DEA efficiency scores and rankings.
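For concreteness, the probability mass function, log-likelihood, and score above translate directly into a short routine. The following NumPy sketch is ours (illustrative only, with assumed function and variable names), not the implementation used in the paper:

```python
import numpy as np

def plackett_luce_loglik_and_score(w, ranking):
    """w : (N,) worth parameters w_nt; ranking : (N,) integer ranks R_t(n) in 1..N.
    Returns the log-likelihood l(w_t | R_t) and the score (gradient) for each DMU."""
    order = np.argsort(ranking)                  # O_t(r): DMU indices from best to worst
    w_ord = w[order]
    # tail[r-1] = sum_{s=r}^{N} exp(w_{O_t(s)t}) for r = 1, ..., N
    tail = np.cumsum(np.exp(w_ord)[::-1])[::-1]
    loglik = np.sum(w) - np.sum(np.log(tail))
    # score_n = 1 - sum_{r=1}^{R_t(n)} exp(w_nt) / tail[r]
    cum_inv_tail = np.cumsum(1.0 / tail)
    score = 1.0 - np.exp(w) * cum_inv_tail[ranking - 1]
    return loglik, score
```

As a sanity check, the scores returned by this routine sum to zero, as the Plackett–Luce gradient should.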
Nevertheless, the proposed model can be a useful tool due to its simplicity when applied with caution. §.§ Regression and Dynamics We let the worth parameters linearly depend on K contextual variables and also include an autoregressive and score-driven component. The worth parameters are then given by the recursion w_nt = ω_n + ∑_k=1^K β_k z_nkt + e_nt, e_nt = φ e_nt-1 + α∇_n ( w_t-1| R_t-1), n = 1,…, N, t = 1, …, T, where ω_n are the individual effects for each DMU n, β_k are the regression parameters for the contextual variables z_nkt, φ is the autoregressive parameter, and α is the score parameter for the lagged score ∇_n ( w_t-1| R_t-1) given by (<ref>). The model corresponds to panel regression with fixed effects and dynamic error term. Note that the model is overparametrized as the probability mass function (<ref>) is invariant to the addition of a constant to all worth parameters. We therefore use standardization ∑_n=1^N ω_n = 0. Our specification differs from the model of <cit.> by introducing the separate e_nt component. Our specification is inspired by the regression with ARMA errors, while the specification of <cit.> resemble the ARMAX model. In our specification, the contextual variables influence only concurrent ranking, which is easier to interpret. Our model is also easier for numerical estimation as ω_n and φ are disconnected. The e_nt component captures dynamic effects by the autoregressive term and the lagged score. The model therefore belongs to the class of score-driven models, also known as generalized autoregressive score (GAS) models or dynamic conditional score (DCS) models, proposed by <cit.> and <cit.>. The score can be interpreted as a measure of the fit of the Plackett–Luce model to the observed rankings. A positive score indicates that a DMU n is ranked higher than what its worth parameter w_nt suggests, while a negative score suggests that it is ranked lower. A score of zero indicates that the DMU is ranked as expected according to its worth parameter. Thus, the score can be used as a correction term for the worth parameter after the ranking is observed. §.§ Maximum Likelihood Estimation The model is observation-driven and can be estimated by the maximum likelihood method. Let θ = (ω_1, …, ω_N-1, β_1, …, β_K, φ, α)' denote the vector of the N+K+1 parameters to be estimated. Note that ω_N is obtained from (<ref>) as ω_N = - ∑_n=1^N-1ω_n. The maximum likelihood estimate θ̂ is then given by θ̂∈max_θ∑_t=1^T ℓ( w_t | R_t ), where the log-likelihood ℓ( w_t | R_t ) is given by (<ref>) and w_t follow (<ref>). The problem (<ref>) can be numerically solved by any general-purpose algorithm for nonlinear optimization. Furthermore, the standard errors of the estimated parameters are computed using the empirical Hessian of the log-likelihood evaluated at θ̂. In order for the log-likelihood to have a unique maximum, it is necessary that for any possible partition of DMUs into two non-empty subsets, there exists at least one DMU in the second subset that is ranked higher than at least one DMU in the first subset (see ). This condition ensures that no DMU is always ranked first, which would result in an infinite worth parameter and violate the assumptions of maximum likelihood estimation. § EMPIRICAL STUDY Our empirical study aims to analyze research efficiency in the higher education sector by examining scientific publications on a country-level basis, with a particular focus on the EU countries between 2005 and 2020. 
Specifically, we seek to determine whether certain aspects of good governance have a positive impact on research efficiency. §.§ Relevant Studies Assessing the efficiency of research and development (R&D) is a widely studied topic in the data envelopment analysis (DEA) literature. In Table <ref>, we present a list of several relevant DEA papers and the key specifics of each study. We focus on the assessment of countries (and regions), although similar analyses can be performed at more detailed levels of institutions (see, e.g., ) and projects (see, e.g., ). Typically, studies on R&D efficiency use financial resources and human resources as the two main inputs. In terms of outputs, some studies focus on variables related to scientific publications (such as ), some on patents (such as ), while the majority consider both types of R&D-related outcomes. §.§ Input, Output, and Contextual Variables As inputs, we use the following variables: * R&D Expenditure refers to the gross domestic expenditure to R&D activities performed in the higher education sector. The unit is million purchasing power standards. <cit.> emphasize the importance of accounting for purchasing power parity when adjusting prices to ensure meaningful comparisons between countries with varying purchasing power. This variable reflects the financial resources. * Number of Researchers refers to the total number of researchers employed in the higher education sector. The unit is full-time equivalent. This variable reflects the human resources. As outputs, we use the following variables: * Number of Publications represents the number of articles, reviews, and conference papers published. This variable reflects the quantity of scientific research. * Number of Citations represents the number of citations to the published articles, reviews, and conference papers. This variable reflects the quality of scientific research. As contextual variables, we use the six Worldwide Governance Indicators (WGI), which <cit.> define in the following way: * Voice and Accountability captures perceptions of the extent to which a country's citizens are able to participate in selecting their government, as well as freedom of expression, freedom of association, and a free media. * Political Stability and Absence of Violence/Terrorism captures perceptions of the likelihood that the government will be destabilized or overthrown by unconstitutional or violent means, including politically‐motivated violence and terrorism. * Government Effectiveness captures perceptions of the quality of public services, the quality of the civil service and the degree of its independence from political pressures, the quality of policy formulation and implementation, and the credibility of the government's commitment to such policies. * Regulatory Quality captures perceptions of the ability of the government to formulate and implement sound policies and regulations that permit and promote private sector development. * Rule of Law captures perceptions of the extent to which agents have confidence in and abide by the rules of society, and in particular the quality of contract enforcement, property rights, the police, and the courts, as well as the likelihood of crime and violence. * Control of Corruption captures perceptions of the extent to which public power is exercised for private gain, including both petty and grand forms of corruption, as well as “capture” of the state by elites and private interests. 
Finally, we also include the following variable as a contextual variable: * Gross Domestic Product is used to control for the economic development of a country. To filter out the trend, we use the percentage of EU total GDP per capita based on million purchasing power standards. We therefore have I=2 input variables, J=2 output variables, and K=7 contextual variables. Similarly to <cit.>, we lag the input and contextual variables by one year, recognizing that there is typically a delay between the input variables and the corresponding output variables. §.§ Data Sample Our data sample contains all N=27 countries of EU. The outputs are taken from 2005 to 2020, while the inputs and contextual variables are taken with a one-year lag from 2004 to 2019. We therefore have T=16 time periods to analyze. The source of the R&D expenditure, the number of researchers, and the GDP is Eurostat[<https://ec.europa.eu/eurostat/data/database>]. There were 4 missing observations for the number of researchers of Greece in 2004, 2008, 2009, and 2010. We have interpolated these values using linear regression. The source of the number of documents and the number of citations is Scimago Journal & Country Rank[<https://www.scimagojr.com>]. The source of the Worldwide Governance Indicators is the World Bank[<https://info.worldbank.org/governance/wgi>]. §.§ Suitability of DEA Model In order for efficiency scores to be interpretable, several criteria need to be met. We have adopted the best practices in DEA as outlined by <cit.> and <cit.>. We begin by establishing that the process under evaluation is well-defined. Our focus is on the research output in the form of scientific publications. The two chosen output variables encompass both the quantity and quality of scientific publications. While quantity is naturally quantifiable, measuring quality can be achieved using several metrics such as the number of citations and the h-index. However, combining indices and volume measures can pose difficulties and we have therefore decided to use the number of citations for our analysis. The two primary resources for conducting research are funding and personnel, both of which are represented by the two input variables we have selected. All input and output variables are volume measures and are isotonic (i.e. increased input reduces efficiency, while increased output increases efficiency). With a total of 4 input and output variables and 27 DMUs, our DEA model possesses sufficient discriminatory power. Next, we examine the homogeneity assumption. Our set of DMUs encompasses all EU countries as of February 2020, although it should be noted that EU membership changed during the period under observation. Specifically, Romania and Bulgaria joined in 2007, and Croatia became a member in July 2013, whereas the United Kingdom departed in January 2020. Nevertheless, EU countries should be considered homogeneous in terms of research due to the harmonized policies and frameworks implemented by the European Commission, such as the European Research Area (ERA) and the Horizon Europe program. These initiatives aim to promote collaboration and standardization among EU member states, facilitating the dissemination of research findings and enhancing the overall quality of scientific output. Finally, we analyze the appropriate returns to scale. 
Note that EU countries exhibit considerable variation in size, with Germany being the most populous country at 83.17 million people and Malta being the least populous with a population of 0.51 million as of January 2020. Our focus is on the higher education sector, which is composed of (1) universities, colleges of technology, and other institutions providing formal tertiary education programmes, (2) research institutes, centres, experimental stations and clinics that have their R&D activities under the direct control of, or administered by, tertiary education institutions (see ). The scientific output of a country can be seen as the sum of outputs from these individual institutions. As a result, we assume that country size does not have a significant impact on the relative scientific output and employ the constant returns to scale (CRS) assumption in our analysis. §.§ Efficiency Scores Table <ref> reports descriptive statistics of efficiency scores and ranks. Bulgaria consistently shows high levels of efficiency across most years, which can be primarily attributed to its extremely low R&D spending, both in absolute value and relative to the number of publications, citations and even researchers. Romania is also found to be efficient in many years due to their relatively low R&D spending. Cyprus has an average of 3.10 publications per researcher, the highest among all countries, followed by Slovenia with 2.57. Moving to Western Europe, the Netherlands stands out as the country with the highest number of citations per researcher, with an average of 81.49. The final country that is ever found efficient in our sample is Luxembourg. Germany, as the largest country, dominate in absolute values of all inputs and outputs; its efficiency is, however, average. At the other end of the efficiency spectrum, we find Latvia with 0.66 publications and 9.09 citations per researcher on average, and Lithuania with 0.61 publications and 8.82 citations per researcher on average. §.§ Long-Term Ranking When conducting an analysis over multiple time periods, it can be beneficial to report the long-term behavior. This could be done by simple aggregate statistics, as we did in Table <ref>. But it is also a perfect task for our dynamic ranking model. For this purpose, we estimate the model without any contextual variables, only in the form of a stationary time series model. We can then rank DMUs according to the unconditional values of the worth parameters, which are simply equal to ω_n. This long-term or “ultimate” ranking is visualized in Figure <ref>. §.§ Panel Regression and Ranking Model We proceed to the second stage where we find relation between the efficiency scores or their associated rankings and the contextual variables. For the efficiency scores, we employ standard panel linear regression model with the robust estimation of the standard errors by the White method. As dependent variable, we use the efficiency scores obtained by the basic DEA model of <cit.> (denoted as CCR), the super-efficiency model of <cit.> (denoted as AP), and the universal DEA model of <cit.> (denoted as H). We also use the log transform of the AP efficiency scores, which are equal to the logit transform of H efficiency scores, θ^Log_nt = ln( θ^AP_nt) = - ln( 2/θ^H_nt - 1 ). Furthermore, we use the AP efficiency scores, or equivalently the H efficiency scores, to derive rankings of the DMUs, which serve as the dependent variable in our dynamic ranking model. The results of the estimated models are reported in Table <ref>. 
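To make the second-stage ranking model concrete, a minimal sketch of the filtering recursion and maximum likelihood estimation from the previous section is given below, reusing the Plackett–Luce helper sketched earlier. It is illustrative only: the paper's own estimation was carried out in R, and the parameter packing and the zero initialisation of the dynamic component e_n0 are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def negative_loglik(theta, rankings, Z):
    """theta packs (omega_1, ..., omega_{N-1}, beta_1, ..., beta_K, phi, alpha).
    rankings : (T, N) integer array of ranks; Z : (T, N, K) contextual variables."""
    T, N = rankings.shape
    K = Z.shape[2]
    omega = np.append(theta[:N - 1], -np.sum(theta[:N - 1]))  # sum-to-zero constraint
    beta, phi, alpha = theta[N - 1:N - 1 + K], theta[-2], theta[-1]
    e = np.zeros(N)                                           # assumed start: e_{n,0} = 0
    total = 0.0
    for t in range(T):
        w = omega + Z[t] @ beta + e                           # worth parameters w_{nt}
        loglik, score = plackett_luce_loglik_and_score(w, rankings[t])
        total += loglik
        e = phi * e + alpha * score                           # feeds into w_{n,t+1}
    return -total

# Estimation: theta_hat = minimize(negative_loglik, theta0, args=(rankings, Z), method="BFGS").x
```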
All panel linear regression models exhibit consistent signs of the coefficients. They also all find the Voice and Accountability indicator to be statistically significant at the 0.05 level. Furthermore, the Government Effectiveness indicator is significant according to all panel regression models but AP. All the remaining contextual variables are found insignificant by all models. The dynamic ranking model confirms the positive and significant relation to the Voice and Accountability indicator, which is consistent with the results of all panel regression models. However, regarding the Government Effectiveness indicator, the model agrees with AP and finds it to be insignificant. The Political Stability and GDP variables have opposite signs, in contrast to the panel regression models, but remain insignificant. It is important to note that while the signs and significance of coefficients can be compared between the panel regression models and the dynamic ranking model, the estimated values cannot be directly compared due to differences in the model specifications. The coefficients φ and α, which control the dynamics, both have positive values, as expected. The estimated value of 0.86 for φ suggests that the process is stationary, but with high persistence over time. §.§ Computing Environment The empirical study was performed in R. The CCR and AP DEA efficiency scores were obtained using the and functions from the package. The H efficiency scores were obtained from the AP DEA efficiency scores using transformation (<ref>). The panel regressions were estimated using the function from the package with robust inference obtained using the function from the package. The dynamic ranking model was estimated by the function from the package. All these packages are available on CRAN. §.§ Discussion of Results The results of our analysis show that the Voice and Accountability indicator has a consistently positive and significant correlation with research efficiency across all models. This indicates that factors such as participation in selecting the government, freedom of expression, freedom of association, and freedom of media, which form the Voice and Accountability indicator, play a crucial role in enhancing research efficiency. The Government Effectiveness indicator also has a positive effect on research efficiency, but its significance is not confirmed by all models. This suggests that while Government Effectiveness can enhance research efficiency, it may not be as crucial as the Voice and Accountability indicator, and the finding lacks robustness. The findings of this study can inform policy decisions and strategic planning to enhance research performance and impact, ultimately advancing knowledge and innovation in various fields. § CONCLUSION This paper has illustrated the usefulness of incorporating the dynamic ranking model of <cit.> in the second stage of DEA with an application to evaluating research efficiency in the higher education sector. The primary objective of the model is to serve as a complement to conventional second-stage models and provide a robustness check. While the dynamic ranking model may not be a perfect solution for all situations, it can still be a valuable addition to the DEA researcher's toolkit. Future research efforts should be directed towards expanding the dynamic ranking model in two ways. Firstly, the model should be able to incorporate ties, which may occur due to DEA models lacking super-efficiency.
Secondly, the model should be able to capture more complex interdependencies between DMUs, which can perhaps be achieved by employing Thurstone order statistics models based on the multivariate normal or multivariate extreme value distributions. § ACKNOWLEDGEMENTS I would like to thank Jan Zouhar for his comments. Computational resources were supplied by the project "e-Infrastruktura CZ" (e-INFRA LM2018140) provided within the program Projects of Large Research, Development and Innovations Infrastructures. § FUNDING The work on this paper was supported by the Czech Science Foundation under project 23-06139S and the personal and professional development support program of the Faculty of Informatics and Statistics, Prague University of Economics and Business. [Alvo and Yu(2014)]Alvo2014 Alvo M, Yu PLH (2014). Statistical Methods for Ranking Data. Springer, New York. ISBN 978-1-4939-1470-8. <https://doi.org/10.1007/978-1-4939-1471-5>. [Andersen and Petersen(1993)]Andersen1993 Andersen P, Petersen NC (1993). A Procedure for Ranking Efficient Units in Data Envelopment Analysis. Management Science, 39(10), 1261–1264. ISSN 0025-1909. <https://doi.org/10.2307/2632964>. [Aristovnik(2012)]Aristovnik2012 Aristovnik A (2012). The Relative Efficiency of Education and R&D Expenditures in the New EU Member States. Journal of Business Economics and Management, 13(5), 832–848. ISSN 1611-1699. <https://doi.org/10.3846/16111699.2011.620167>. [Banker et al.(2019)Banker, Natarajan, and Zhang]Banker2019 Banker R, Natarajan R, Zhang D (2019). Two-Stage Estimation of the Impact of Contextual Variables in Stochastic Frontier Production Function Models Using Data Envelopment Analysis: Second Stage OLS Versus Bootstrap Approaches. European Journal of Operational Research, 278(2), 368–384. ISSN 0377-2217. <https://doi.org/10.1016/j.ejor.2018.10.050>. [Banker et al.(1984)Banker, Charnes, and Cooper]Banker1984 Banker RD, Charnes A, Cooper WW (1984). Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis. Management Science, 30(9), 1078–1092. ISSN 0025-1909. <https://doi.org/10.1287/mnsc.30.9.1078>. [Banker and Natarajan(2008)]Banker2008 Banker RD, Natarajan R (2008). Evaluating Contextual Variables Affecting Productivity Using Data Envelopment Analysis. Operations Research, 56(1), 48–58. ISSN 0030-364X. <https://doi.org/10.1287/opre.1070.0460>. [Borozan(2018)]Borozan2018 Borozan D (2018). Technical and Total Factor Energy Efficiency of European Regions: A Two-Stage Approach. Energy, 152, 521–532. ISSN 0360-5442. <https://doi.org/10.1016/j.energy.2018.03.159>. [Charnes et al.(1978)Charnes, Cooper, and Rhodes]Charnes1978 Charnes A, Cooper WW, Rhodes E (1978). Measuring the Efficiency of Decision Making Units. European Journal of Operational Research, 2(6), 429–444. ISSN 0377-2217. <https://doi.org/10.1016/0377-2217(78)90138-8>. [Chen et al.(2011)Chen, Hu, and Yang]Chen2011 Chen CP, Hu JL, Yang CH (2011). An International Comparison of R&D Efficiency of Multiple Innovative Outputs: Role of the National Innovation System. Innovation: Management, Policy and Practice, 13(3), 341–360. ISSN 1447-9338. <https://doi.org/10.5172/impp.2011.13.3.341>. [Chen et al.(2019)Chen, Kamran, and Fan]Chen2019a Chen Q, Kamran SM, Fan H (2019). Real Estate Investment and Energy Efficiency: Evidence from China's Policy Experiment. Journal of Cleaner Production, 217, 440–447. ISSN 0959-6526. <https://doi.org/10.1016/j.jclepro.2019.01.274>.
[Cook et al.(2014)Cook, Tone, and Zhu]Cook2014 Cook WD, Tone K, Zhu J (2014). Data Envelopment Analysis: Prior to Choosing a Model. Omega, 44, 1–4. ISSN 0305-0483. <https://doi.org/10.1016/j.omega.2013.09.004>. [Cox(1981)]Cox1981 Cox DR (1981). Statistical Analysis of Time Series: Some Recent Developments. Scandinavian Journal of Statistics, 8(2), 93–108. ISSN 0303-6898. <https://doi.org/10.2307/4615819>. [Creal et al.(2013)Creal, Koopman, and Lucas]Creal2013 Creal D, Koopman SJ, Lucas A (2013). Generalized Autoregressive Score Models with Applications. Journal of Applied Econometrics, 28(5), 777–795. ISSN 0883-7252. <https://doi.org/10.1002/jae.1279>. [Cullmann et al.(2012)Cullmann, Schmidt-Ehmcke, and Zloczysti]Cullmann2012 Cullmann A, Schmidt-Ehmcke J, Zloczysti P (2012). R&D Efficiency and Barriers to Entry: A Two Stage Semi-Parametric DEA Approach. Oxford Economic Papers, 64(1), 176–196. ISSN 0030-7653. <https://doi.org/10.1093/oep/gpr015>. [Da Silva e Souza and Gomes(2015)]DaSilvaESouza2015 Da Silva e Souza G, Gomes EG (2015). Management of Agricultural Research Centers in Brazil: A DEA Application Using a Dynamic GMM Approach. European Journal of Operational Research, 240(3), 819–824. ISSN 0377-2217. <https://doi.org/10.1016/j.ejor.2014.07.027>. [Dyson et al.(2001)Dyson, Allen, Camanho, Podinovski, Sarrico, and Shale]Dyson2001 Dyson RG, Allen R, Camanho AS, Podinovski VV, Sarrico CS, Shale EA (2001). Pitfalls and Protocols in DEA. European Journal of Operational Research, 132(2), 245–259. ISSN 0377-2217. <https://doi.org/10.1016/S0377-2217(00)00149-1>. [Ekinci and Karadayi(2017)]Ekinci2017 Ekinci Y, Karadayi MA (2017). Analysis of the Research and Development Efficiencies of European Union Countries. Business & Management Studies: An International Journal, 5(1), 1–19. ISSN 2148-2586. <https://doi.org/10.15295/bmij.v5i1.97>. [Fonchamnyo and Sama(2016)]Fonchamnyo2016 Fonchamnyo DC, Sama MC (2016). Determinants of Public Spending Efficiency in Education and Health: Evidence from Selected CEMAC Sountries. Journal of Economics and Finance, 40(1), 199–210. ISSN 1055-0925. <https://doi.org/10.1007/s12197-014-9310-6>. [Frýd and Sokol(2021)]Fryd2021 Frýd L, Sokol O (2021). Relationships Between Technical Efficiency and Subsidies for Czech Farms: A Two-Stage Robust Approach. Socio-Economic Planning Sciences, 78, 101059/1–101059/9. ISSN 0038-0121. <https://doi.org/10.1016/j.seps.2021.101059>. [Han et al.(2016)Han, Asmild, and Kunc]Han2016 Han U, Asmild M, Kunc M (2016). Regional R&D Efficiency in Korea from Static and Dynamic Perspectives. Regional Studies, 50(7), 1170–1184. ISSN 0034-3404. <https://doi.org/10.1080/00343404.2014.984670>. [Harvey(2013)]Harvey2013 Harvey AC (2013). Dynamic Models for Volatility and Heavy Tails: With Applications to Financial and Economic Time Series. First Edition. Cambridge University Press, New York. ISBN 978-1-107-63002-4. <https://doi.org/10.1017/cbo9781139540933>. [Hladík(2019)]Hladik2019 Hladík M (2019). Universal Efficiency Scores in Data Envelopment Analysis Based on a Robust Approach. Expert Systems with Applications, 122, 242–252. ISSN 0957-4174. <https://doi.org/10.1016/j.eswa.2019.01.019>. [Holý(2022)]Holy2022a Holý V (2022). The Impact of Operating Environment on Efficiency of Public Libraries. Central European Journal of Operations Research, 30(1), 395–414. ISSN 1613-9178. <https://doi.org/10.1007/s10100-020-00696-4>. [Holý and Šafr(2018)]Holy2018e Holý V, Šafr K (2018). 
Are Economically Advanced Countries More Efficient in Basic and Applied Research? Central European Journal of Operations Research, 26(4), 933–950. ISSN 1435-246X. <https://doi.org/10.1007/s10100-018-0559-2>. [Holý and Zouhar(2022)]Holy2021 Holý V, Zouhar J (2022). Modelling Time-Varying Rankings with Autoregressive and Score-Driven Dynamics. Journal of the Royal Statistical Society: Series C (Applied Statistics), 71(5), 1427–1450. ISSN 0035-9254. <https://doi.org/10.1111/rssc.12584>. [Hung et al.(2009)Hung, Lee, and Tsai]Hung2009 Hung WC, Lee LC, Tsai MH (2009). An International Comparison of Relative Contributions to Academic Productivity. Scientometrics, 81(3), 703–718. ISSN 0138-9130. <https://doi.org/10.1007/s11192-008-2210-9>. [Hunter(2004)]Hunter2004 Hunter DR (2004). MM Algorithms for Generalized Bradley-Terry Models. The Annals of Statistics, 32(1), 384–406. ISSN 0090-5364. <https://doi.org/10.1214/aos/1079120141>. [Jablonsky(2016)]Jablonsky2016 Jablonsky J (2016). Efficiency Analysis in Multi-Period Systems: An Application to Performance Evaluation in Czech Higher Education. Central European Journal of Operations Research, 24(2), 283–296. ISSN 1435-246X. <https://doi.org/10.1007/s10100-015-0401-z>. [Kaufmann et al.(2011)Kaufmann, Kraay, and Mastruzzi]Kaufmann2011 Kaufmann D, Kraay A, Mastruzzi M (2011). The Worldwide Governance Indicators: Methodology and Analytical Issues. Hague Journal on the Rule of Law, 3(2), 220–246. ISSN 1876-4045. <https://doi.org/10.1017/s1876404511200046>. [Kneip et al.(2015)Kneip, Simar, and Wilson]Kneip2015 Kneip A, Simar L, Wilson PW (2015). When Bias Kills the Variance: Central Limit Theorems for DEA and FDH Efficiency Scores. Econometric Theory, 31(2), 394–422. ISSN 0266-4666. <https://doi.org/10.1017/s0266466614000413>. [Lee and Park(2005)]Lee2005 Lee H, Park Y (2005). An International Comparison of R&D Efficiency: DEA Approach. Asian Journal of Technology Innovation, 13(2), 207–222. ISSN 1976-1597. <https://doi.org/10.1080/19761597.2005.9668614>. [Lee et al.(2009)Lee, Park, and Choi]Lee2009 Lee H, Park Y, Choi H (2009). Comparative Evaluation of Performance of National R&D Programs with Heterogeneous Objectives: A DEA Approach. European Journal of Operational Research, 196(3), 847–855. ISSN 0377-2217. <https://doi.org/10.1016/j.ejor.2008.06.016>. [Luce(1959)]Luce1959 Luce RD (1959). Individual Choice Behavior: A Theoretical Analysis. First Edition. Wiley, New York. ISBN 978-0-486-44136-8. <https://books.google.com/books/about/Individual_choice_behavior.html?id=a80DAQAAIAAJ>. [Luce(1977)]Luce1977 Luce RD (1977). The Choice Axiom after Twenty Years. Journal of Mathematical Psychology, 15(3), 215–233. ISSN 0022-2496. <https://doi.org/10.1016/0022-2496(77)90032-3>. [Mamatzakis et al.(2013)Mamatzakis, Kalyvas, and Piesse]Mamatzakis2013 Mamatzakis E, Kalyvas AN, Piesse J (2013). Does Regulation in Credit, Labour and Business Matter for Bank Performance in the EU-10 Economies? International Journal of the Economics of Business, 20(3), 341–385. ISSN 1357-1516. <https://doi.org/10.1080/13571516.2013.835981>. [McDonald(2009)]McDonald2009 McDonald J (2009). Using Least Squares and Tobit in Second Stage DEA Efficiency Analyses. European Journal of Operational Research, 197(2), 792–798. ISSN 0377-2217. <https://doi.org/10.1016/j.ejor.2008.07.039>. [Moradi-Motlagh and Emrouznejad(2022)]Moradi-Motlagh2022 Moradi-Motlagh A, Emrouznejad A (2022). 
The Origins and Development of Statistical Approaches in Non-Parametric Frontier Models: A Survey of the First Two Decades of Scholarly Literature (1998-2020). Annals of Operations Research, 318(1), 713–741. ISSN 1572-9338. <https://doi.org/10.1007/s10479-022-04659-7>. [OECD(2015)]OECD2015 OECD (2015). Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and Experimental Development, The Measurement of Scientific, Technological and Innovation Activities. Technical report, Paris. <https://doi.org/10.1787/9789264239012-en>. [Pirani et al.(2018)Pirani, Zahiri, Engali, and Torabipour]Pirani2018 Pirani N, Zahiri M, Engali KA, Torabipour A (2018). Hospital Efficiency Measurement Before and After Health Sector Evolution Plan in Southwest of Iran: A DEA-Panel Data Study. Acta Informatica Medica, 26(2), 106–110. ISSN 0353-8109. <https://doi.org/10.5455/aim.2018.26.106-110>. [Plackett(1975)]Plackett1975 Plackett RL (1975). The Analysis of Permutations. Journal of the Royal Statistical Society: Series C (Applied Statistics), 24(2), 193–202. ISSN 0035-9254. <https://doi.org/10.2307/2346567>. [Poveda(2011)]Poveda2011 Poveda AC (2011). Economic Development and Growth in Colombia: An Empirical Analysis with Super-Efficiency DEA and Panel Data Models. Socio-Economic Planning Sciences, 45(4), 154–164. ISSN 0038-0121. <https://doi.org/10.1016/j.seps.2011.07.003>. [Roman(2010)]Roman2010 Roman M (2010). Regional Efficiency of Knowledge Economy in the New EU Countries: The Romanian and Bulgarian Case. Romanian Journal of Regional Science, 4(1), 33–53. ISSN 1843-8520. <https://ideas.repec.org/a/rrs/journl/v4y2010i1p33-53.html>. [Sharma and Thomas(2008)]Sharma2008 Sharma S, Thomas VJ (2008). Inter-Country R&D Efficiency Analysis: An Application of Data Envelopment Analysis. Scientometrics, 76(3), 483–501. ISSN 0138-9130. <https://doi.org/10.1007/s11192-007-1896-4>. [Simar and Wilson(2007)]Simar2007 Simar L, Wilson PW (2007). Estimation and Inference in Two-Stage, Semi-Parametric Models of Production Processes. Journal of Econometrics, 136(1), 31–64. ISSN 0304-4076. <https://doi.org/10.1016/j.jeconom.2005.07.009>. [Simar and Wilson(2011)]Simar2011 Simar L, Wilson PW (2011). Two-Stage DEA: Caveat Emptor. Journal of Productivity Analysis, 36(2), 205–218. ISSN 0895-562X. <https://doi.org/10.1007/s11123-011-0230-6>. [Song et al.(2016)Song, Zhang, Zeng, Liu, and Fang]Song2016 Song M, Zhang G, Zeng W, Liu J, Fang K (2016). Railway Transportation and Environmental Efficiency in China. Transportation Research Part D: Transport and Environment, 48, 488–498. ISSN 1361-9209. <https://doi.org/10.1016/j.trd.2015.07.003>. [Thomas et al.(2011)Thomas, Sharma, and Jain]Thomas2011 Thomas VJ, Sharma S, Jain SK (2011). Using Patents and Publications to Assess R&D Efficiency in the States of the USA. World Patent Information, 33(1), 4–10. ISSN 0172-2190. <https://doi.org/10.1016/j.wpi.2010.01.005>. [Turner et al.(2020)Turner, van Etten, Firth, and Kosmidis]Turner2020 Turner HL, van Etten J, Firth D, Kosmidis I (2020). Modelling Rankings in R: The PlackettLuce Package. Computational Statistics, 35, 1027–1057. ISSN 0943-4062. <https://doi.org/10.1007/s00180-020-00959-3>. [Zhang et al.(2018)Zhang, Sun, and Huang]Zhang2018 Zhang YJ, Sun YF, Huang J (2018). Energy Efficiency, Carbon Emission Performance, and Technology Gaps: Evidence from CDM Project Investment. Energy Policy, 115, 119–130. ISSN 0301-4215. <https://doi.org/10.1016/j.enpol.2017.12.056>.
http://arxiv.org/abs/2307.00575v1
20230702135947
Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model
[ "Runshi Tang", "Ming Yuan", "Anru R. Zhang" ]
stat.ME
[ "stat.ME", "cs.LG", "cs.NA", "math.NA", "math.ST", "stat.TH" ]
http://arxiv.org/abs/2307.00457v2
20230702023707
GenRec: Large Language Model for Generative Recommendation
[ "Jianchao Ji", "Zelong Li", "Shuyuan Xu", "Wenyue Hua", "Yingqiang Ge", "Juntao Tan", "Yongfeng Zhang" ]
cs.IR
[ "cs.IR", "cs.AI", "cs.CL", "cs.LG" ]
Rutgers University New Brunswick, NJ US jianchao.ji@rutgers.edu Rutgers University New Brunswick, NJ US zelong.li@rutgers.edu Rutgers University New Brunswick, NJ US shuyuan.xu@rutgers.edu Rutgers University New Brunswick, NJ US wenyue.hua@rutgers.edu Rutgers University New Brunswick, NJ US yingqiang.ge@rutgers.edu Rutgers University New Brunswick, NJ US juntao.tan@rutgers.edu Rutgers University New Brunswick, NJ US yongfeng.zhang@rutgers.edu In recent years, large language models (LLMs) have emerged as powerful tools for diverse natural language processing tasks. However, their potential for recommender systems under the generative recommendation paradigm remains relatively unexplored. This paper presents an innovative approach to recommendation systems using large language models (LLMs) based on text data. In this paper, we present a novel LLM for generative recommendation (GenRec) that utilizes the expressive power of the LLM to directly generate the target item to recommend, rather than calculating a ranking score for each candidate item one by one as in traditional discriminative recommendation. GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations. Our proposed approach leverages the vast knowledge encoded in large language models to accomplish recommendation tasks. We first formulate specialized prompts to enhance the ability of the LLM to comprehend recommendation tasks. Subsequently, we use these prompts to fine-tune the LLaMA backbone LLM on a dataset of user-item interactions, represented by textual data, to capture user preferences and item characteristics. Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommendation systems and offers a foundational framework for future explorations in this field. We conduct extensive experiments on benchmark datasets, and the experiments show that our GenRec achieves significantly better results on the large dataset. Code and data are open-sourced at <https://github.com/rutgerswiselab/GenRec>. GenRec: Large Language Model for Generative Recommendation Yongfeng Zhang August 1, 2023 ========================================================== § INTRODUCTION Large Language Models (LLMs) represent a particularly significant milestone in this technological evolution. These LLMs, designed to understand and generate human-like text, have revolutionized numerous applications, from search engines to chatbots, and have facilitated more natural and intuitive interactions between humans and machines. This paper seeks to explore a relatively new and promising application of these models in recommendation systems. Recommendation systems have become an integral part of our digital experience. They are the unseen force guiding us through vast amounts of data, suggesting relevant products on e-commerce websites, recommending movies on streaming platforms, and even proposing what news to read or videos to watch. The primary aim of these systems is to predict individual user preferences and enhance user experience and engagement. Traditionally, recommendation systems have been built around methods such as collaborative filtering <cit.>, content-based filtering <cit.>, and hybrid approaches <cit.>.
Collaborative filtering leverages user-item interactions, making suggestions based on patterns found in the behavior of similar users or items. On the other hand, content-based filtering uses item features to recommend items similar to those a user has previously interacted with. Hybrid methods attempt to combine the strengths of these two approaches to overcome their respective limitations. Despite the progress made with these traditional techniques, some significant challenges remain. For instance, collaborative filtering struggles with the cold start problem, where it fails to provide accurate recommendations for new users or items due to a lack of historical interaction data. Both collaborative and content-based filtering also find it hard to handle the issue of data sparsity, given that most users interact with only a small fraction of the total items available. Additionally, because of the computational complexity of processing large interaction matrices, these models often struggle to scale effectively with the growth of users and items. The integration of text-based LLMs into recommendation systems presents an exciting opportunity to address these challenges <cit.>. These models can learn and understand complex patterns in human language, which allows for a more nuanced interpretation of user preferences and a more sophisticated generation of recommendations. However, a significant number of the prevailing recommendation models are trained using user and item indexes. This approach leads to the lack of text-based information in the dataset, such as item titles and category information. In this paper, we propose a novel large language model for generative recommendation (GenRec). One of the primary benefits of the GenRec model is that it capitalizes on the rich, descriptive information inherently contained within item names, which often contain features that can be semantically analyzed, enabling a better understanding of an item's potential relevance to the user. This could potentially provide more accurate and personalized recommendations, thereby enhancing the overall user experience. We present experimental results to demonstrate the efficacy of our proposed method and compare its performance with other LLM recommendation models. The overarching aim of this paper is not only to present our findings but also to inspire further research in this area. By highlighting the potential of LLMs in enhancing generative recommender systems, we hope to encourage a more widespread adoption of these models and stimulate further innovations in this field. The key contributions of this paper can be summarized as follows: * We highlight the promising paradigm of generative recommendation, which directly generates the target item to recommend, rather than traditional discriminative recommendation, which has to calculate a ranking score for each candidate item one by one and then sort them to decide which to recommend. * We introduce a novel approach, GenRec, to enhance the generative recommendation performance by incorporating textual information into the model. * We also illustrate the efficacy of GenRec on practical recommendation tasks, underscoring its prospective abilities for a wider scope of applications. In the following parts of the paper, we discuss the related work in Section <ref>, introduce the proposed model in Section <ref>, analyze the experimental results in Section <ref>, and provide the conclusions as well as future work in Section <ref>.
§ RELATED WORK §.§ Collaborative Filtering and Content-Based Recommendation Systems Collaborative Filtering (CF) models are based on the concept of user-item interactions. Traditional CF models, such as the matrix factorization model <cit.>, focus on latent factor modeling of user-item interaction matrices. More recent advancements, like NeuMF <cit.>, have combined the merits of matrix factorization and neural networks to better capture complex user-item relationships. On the other hand, Content-Based Recommendation systems rely on the features of items to make recommendations. Early works involved simple keyword matching <cit.> or cosine similarity based on TF-IDF vectors <cit.>. More advanced methods have started to exploit deep learning techniques, like CNNs <cit.> and RNNs <cit.>, for extracting high-level features from item content. §.§ Large Language Models for Recommendation The use of large language models for recommendation systems has gained significant attention recently. These models exhibit great potential in the understanding and modeling of user-item interactions, exploiting the rich semantics and long-range dependencies present in user activity data. The pioneering work of P5 <cit.> illustrated the feasibility of formulating recommendation as a natural language task. P5 <cit.> fine-tunes the widely-used open-source T5 model <cit.> to create a unified system capable of handling various tasks. These tasks include not only recommendation ranking and retrieval but also complex functions like summary explanation. This innovative approach highlighted the versatility of large language models in handling multi-task learning in the recommendation context. However, the potential of large language models to understand and generate text-based recommendations has not been fully explored. In this paper, we propose a novel approach to text-based generative recommendation, leveraging the latest advances in large language models. We aim to address some of the limitations of previous works and push the boundaries of what is possible in the realm of recommendation systems. § METHOD The architecture of the proposed framework is illustrated in Figure <ref>. Given a user's item interaction sequence, the large language model for generative recommendation (GenRec) formats the item names with a prompt. This reformatted sequence is subsequently employed to fine-tune a Large Language Model (LLM). The adjusted LLM can then predict subsequent items the user is likely to interact with. In our paper, we select the LLaMA <cit.> language model as the backbone. However, our framework retains flexibility, allowing for seamless integration with any other LLM, thus broadening its potential usability and adaptability. §.§ Sequence Generation The initial component of GenRec is a generative function, tasked with producing various sequences that encapsulate user interests. To enhance the model's comprehension of the recommendation task, we have devised multiple prompts that facilitate sequence generation. Taking Figure <ref> as an example, we use the user's movie watching history as training data and format it into a training sequence. The sequence consists of three parts: instruction, input, and output. The instruction element outlines the specific task of movie recommendation, for which we have created several directives to enhance the LLM's comprehension of the ongoing recommendation task. The input represents the history of the user's interactions, excluding the most recent instance, and the output is the latest interaction in this record.
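To make the instruction-input-output structure concrete, the following minimal Python sketch shows how one user's viewing history could be assembled into a training record. It is illustrative only: the exact prompt wording, separators, and field names used by GenRec are not fully specified here, so these details are assumptions.

```python
import json

def build_example(history, instruction=None):
    """Format one user's interaction history into an instruction-tuning example.

    `history` is a chronologically ordered list of item names (e.g. movie titles);
    the most recent item becomes the target the model must generate.
    """
    instruction = instruction or (
        "Based on the movie viewing habits, what is the most likely movie "
        "they will select to watch next?"
    )
    return {
        "instruction": instruction,
        "input": ", ".join(history[:-1]),   # all but the latest interaction
        "output": history[-1],              # the latest interaction to predict
    }

# Example usage with a made-up viewing history.
example = build_example(["Toy Story", "Inception", "Interstellar", "Dune"])
print(json.dumps(example, indent=2))
```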
The primary task for the LLM is then to predict this final interaction accurately. Refer to Figure <ref> for an illustration. This figure represents how we utilize a user's history of watched movies as interaction data. Given the prompt, "Based on the movie viewing habits, what is the most likely movie they will select to watch next?" and the provided input, we then let GenRec forecast the subsequent output. §.§ Training Strategy In this paper, we use the LLaMA model as the backbone for the training of GenRec. The LLaMA model is pre-trained on an expansive language corpus, offering a valuable resource for our intended purpose of efficiently capturing both user interests and item content information. However, it is important to note that the GPU memory requirements for fine-tuning LLaMA, even the 7-billion-parameter version, are quite substantial. To circumvent this challenge and conserve GPU memory, we adopt the LLaMA-LoRA architecture for fine-tuning and inference tasks within the scope of this study. By this measure, we have achieved a significant reduction in the GPU memory requirements. With this optimized approach, we can fine-tune the LLaMA-LoRA model on a single GPU with a memory capacity of 24GB. However, in an effort to decrease the overall training time, we have employed a data parallel technique and leveraged multiple GPUs in the experiments. Further details about our experiments, including the implementation and results, will be shared in the following sections of this paper. § EXPERIMENTS §.§ Dataset We conduct extensive experiments on two real-world datasets from Amazon <cit.> and MovieLens <cit.>, respectively, to evaluate the performance of our proposed GenRec approach on recommendation tasks. The Amazon datasets, which record user purchase histories across a diverse range of products, were sourced from the Amazon.com platform. The MovieLens datasets comprise a large number of movie ratings and associated metadata, contributed by users of the MovieLens website over various periods. The descriptive statistics of these datasets are depicted in Table <ref>. For each user interaction sequence, the most recent item is used as the test data, the second-most recent is used as validation data, and the remaining items are used for training. §.§ Evaluation Metrics In this paper, we evaluate the performance of the model using two widely used metrics: Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG). The HR metric indicates the percentage of items recommended by the model that match those in the ground truth data. On the other hand, NDCG is employed to assess the efficacy of the recommendations when they are ranked, factoring in the relevance of the suggested items. These metrics have found wide acceptance in the evaluation of recommendation systems due to their robustness and comprehensiveness. §.§ Implementation Details The GenRec model was trained for 5 epochs using the AdamW optimizer <cit.> on four NVIDIA RTX A5000 GPUs with a batch size of 128. The peak learning rate was set to 3 × 10^-4 and the maximum input length was set to 256 tokens. A warm-up strategy was employed during training, where the learning rate was gradually increased over the first 1000 steps. §.§ Baseline Methods P5 <cit.>: The Pre-train, Personalized Prompt, and Predict Paradigm (P5) incorporates an array of templates for input and target sequences throughout the training process.
This unique approach proficiently dissolves the boundaries between different tasks, promoting a more fluid and integrated training procedure. It has showcased noteworthy performance in the domain of sequential recommendation tasks, underlining its effectiveness and applicability. §.§ Performance Comparison As we can see in Table <ref>, P5 performs better on the Amazon Toys dataset, while our GenRec performs significantly better on the MovieLens 25M dataset. This differential performance can likely be attributed to the distinct nature of the datasets. The MovieLens 25M dataset, unlike the Amazon Toys dataset, contains a richer amount of interaction information, which provides a more robust understanding of the user's preferences and behavior, thus likely leading to more accurate recommendations. Our GenRec model is designed to effectively capture both user interests and item content information, and can therefore produce more accurate and relevant recommendations. On the other hand, P5, while robust in handling sequential data, might not be as adept at leveraging this additional interaction information, resulting in relatively lower performance on the MovieLens 25M dataset. § CONCLUSION In conclusion, our work on the text-based Large Language Model for Generative Recommendation (GenRec) has revealed a novel and promising approach in the field of recommendation systems. By focusing on the semantic richness of item names as input, GenRec promises more personalized and contextually relevant recommendations. Our practical demonstrations highlight GenRec's efficacy and point towards its adaptability across a diverse range of applications. Furthermore, the flexibility of the GenRec framework facilitates integration with any Large Language Model, hence widening its sphere of potential utility. In terms of future work, there are several directions to explore. We intend to refine the generation of sequences by developing more sophisticated prompts, which could further enhance the model's understanding of recommendation tasks. Additionally, we plan to extend our research to incorporate more complex user interaction data, such as ratings or reviews, which could provide deeper insights into user behavior and preferences. A further direction would be to test GenRec's performance with different Large Language Models, investigating the possible benefits and trade-offs. Our research with GenRec thus far has shown significant promise, and we look forward to continuing to develop and refine this approach. We believe that with further investigation, GenRec could revolutionize the way recommendation systems operate, ultimately leading to more personalized and satisfying user experiences.
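For reference, the leave-one-out Hit Ratio and NDCG used in the evaluation above can be computed as in the following sketch. This is an illustrative implementation, not necessarily the exact evaluation code of the paper; it assumes that, for each user, the model provides a ranked list of generated candidate items (e.g. from beam search) and that a single held-out item serves as ground truth.

```python
import numpy as np

def hit_ratio_at_k(ranked_items, target, k):
    """1.0 if the ground-truth item appears in the top-k predictions, else 0.0."""
    return float(target in ranked_items[:k])

def ndcg_at_k(ranked_items, target, k):
    """NDCG@k with a single relevant item: 1/log2(rank+2) if it is in the top-k."""
    if target in ranked_items[:k]:
        rank = ranked_items.index(target)          # 0-based position
        return 1.0 / np.log2(rank + 2)
    return 0.0

def evaluate(predictions, ground_truth, k=10):
    """Average HR@k and NDCG@k over users.

    `predictions` maps user -> ranked list of generated items;
    `ground_truth` maps user -> held-out item (both hypothetical here).
    """
    hrs = [hit_ratio_at_k(predictions[u], ground_truth[u], k) for u in ground_truth]
    ndcgs = [ndcg_at_k(predictions[u], ground_truth[u], k) for u in ground_truth]
    return np.mean(hrs), np.mean(ndcgs)
```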
http://arxiv.org/abs/2307.00255v1
20230701073501
The "super-active" accretion phase of T CrB has ended
[ "U. Munari" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.HE" ]
Ulisse Munari (ORCID 0000-0001-6805-9664), INAF Astronomical Observatory of Padova, 36012 Asiago (VI), Italy. The symbiotic recurrent nova T CrB erupted for the second and last recorded time in 1946. Following the outburst, the accretion rate onto its WD has remained rather low with only occasional and minor flaring episodes, until in late 2014 it entered a "super-active" phase (SAP) that peaked in April 2016: the flux radiated by Balmer lines increased by two orders of magnitude, accompanied by the appearance of strong HeI, HeII, and many other emission lines. Following the sharp maximum, the intensity of the emission lines has been steadily decreasing, reaching back the pre-SAP levels by mid-2023. The end of SAP is also confirmed by the drop of the B-band brightness to pre-SAP conditions and the simultaneous re-appearance of a large-amplitude flickering. This suggests that the accretion disk has emptied of the extra material that drove the "super-active" state and has completed its transfer onto the WD, setting the stage for a new and probably imminent nova eruption. § INTRODUCTION T CrB is a very famous recurrent nova <cit.> and is also a symbiotic binary, harboring a red giant (RG) as the donor star to the massive white dwarf (WD) companion. The life-cycle of a symbiotic binary, as outlined by <cit.>, is characterized by long accretion phases interspersed with shorter periods during which the material accumulated on the surface of the WD is burned nuclearly. If the accreted shell is not electron degenerate, the burning proceeds in thermal equilibrium for decades/centuries until most of the hydrogen fuel in the shell is consumed, the burning finally quenches down, and a new long-lasting phase of accretion initiates the next cycle (examples are V4368 Sgr, HM Sge, and V1016 Cyg). When the accreted shell is instead electron degenerate, the nuclear burning proceeds explosively, resulting in a nova outburst, with most of the shell expelled in the process; the residual nuclear burning on the WD extinguishes in a few weeks/months, after which accretion resumes and a new cycle begins. In addition to T CrB, other well known symbiotic recurrent novae are RS Oph, V3890 Sgr, and V745 Sco. Traditionally, accretion in symbiotic stars has been treated as a smooth process relatively stable over long periods of time <cit.>. This approach has progressively changed in favor of a highly-episodic interpretation of the accretion process, characterized by brief periods of (very) high accretion rates in-between longer intervals spent at much lower mass-transfer rates <cit.>. <cit.> called attention to the fact that, starting in 2015, T CrB entered a "super-active" accretion phase (SAP), characterized by a much brighter accretion disk as the result of a greatly enhanced mass-flow through it and then toward the central WD. The accretion level attained during SAP largely exceeded any other experienced by T CrB since the 1946 eruption. By noting that a similar event preceded the 1946 nova outburst, <cit.> concluded that SAP is probably announcing a new and imminent eruption of T CrB, a view shared by <cit.>. § OBSERVATIONS We have been regularly recording fluxed spectra of T CrB for the last ∼35 yrs, initially with the Asiago 1.82m + B&C and, since 2006, with the Asiago 1.22m + B&C telescope.
For all the 1.22m spectra, we adopted a 300 ln/mm grating blazed at 5000 Å that, paired with a completely UV-transparent optical train and a highly UV-sensitive CCD detector (ANDOR iDus DU440A with a back-illuminated E2V 42-10 chip, 2048×512 array, and 13.5 μm pixel size), allows us to efficiently record spectra down to the ∼3100 Å atmospheric cut-off imposed by the 1000 m altitude of the telescope above sea level. Our 1.22m spectra of T CrB extend from 3200 to 7900 Å at 2.3 Å/pix dispersion. In addition to being fluxed thanks to nightly observations of spectrophotometric standard stars, their flux zero-point is fine-tuned against (nearly-)simultaneous BVR photometry, so that the flux error anywhere in the spectra rarely exceeds a few percent. This 2006-2023 set of T CrB spectra is therefore characterized by a highly stable instrumental set-up and robust IRAF calibration procedures, and constitutes an ideal sample for variability studies of spectral features over long intervals of time. A few of the spectra of T CrB considered here can be viewed in <cit.>. § THE END OF THE "SUPER ACTIVE" ACCRETION PHASE To trace the evolution of T CrB along the "super-active" accretion phase, we have measured the integrated flux of a sample of emission lines on the 2006-2023 Asiago 1.22m + B&C spectra described in the previous section. The selected lines are Hβ, HeI 5876, and HeII 4686, which are representative of low, medium, and high excitation/ionization conditions, respectively. Their absolute fluxes are plotted in Figure 1 along with the B-band lightcurve of T CrB as recorded by the ANS Collaboration. Prior to 2014, both HeI 5876 and HeII 4686 were not visible in emission, and Hβ had been present but always at rather feeble levels. During this period the B-band lightcurve is dominated by the ellipsoidal distortion of the RG, with the scatter due to the large-amplitude and ever-present flickering superimposed <cit.>. The start of the "super-active" accretion phase in late 2014 is marked by the sudden appearance in emission of HeI 5876 and HeII 4686, a corresponding rise of Hβ <cit.>, and a large increase in B-band brightness caused by the rapidly brightening accretion disk. SAP reached its maximum in April 2016, when the flux of all emission lines sharply peaked, as illustrated by Figure 1. Around this epoch, strong satellite UV and thermal radio emission were also recorded <cit.>. Following the maximum in April 2016, the flux of all emission lines has been steadily decreasing, at a faster pace for the higher excitation/ionization lines, and by mid-2023 they have returned to pre-SAP values, indicating that the "super-active" accretion phase is finally over. The B-band photometric brightness has also been quickly dropping during the last few months, while the flickering has returned to its usual large amplitude <cit.>, compared to the much reduced impact it had on the photometry collected around SAP maximum <cit.>. The disappearance of the emission lines, the drop in B-band brightness, and the return to large-amplitude flickering suggest that the accretion disk has emptied of the extra material that drove the "super-active" state and has completed its transfer onto the WD. The shell around the latter may still take a little time to cool and shrink down to favorable conditions, but the stage for a new nova outburst now appears inevitably set.
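As an aside, the integrated emission-line fluxes discussed above can, in essence, be extracted from a fluxed spectrum with a few lines of code: a local continuum is fitted on side bands and the continuum-subtracted flux is integrated across the line. The sketch below is a generic numpy illustration, not the IRAF-based procedure actually used; window boundaries and units are assumptions.

```python
import numpy as np

def line_flux(wave, flux, line_window, cont_windows):
    """Continuum-subtracted integrated flux of an emission line.

    wave, flux   : wavelength [A] and flux density [erg/s/cm2/A] of a fluxed spectrum
    line_window  : (min, max) wavelengths bracketing the emission line
    cont_windows : two (min, max) side bands used to fit a linear continuum
    """
    # Fit a straight-line continuum through the two side bands.
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        mask |= (wave >= lo) & (wave <= hi)
    slope, intercept = np.polyfit(wave[mask], flux[mask], deg=1)

    # Integrate (flux - continuum) over the line window.
    in_line = (wave >= line_window[0]) & (wave <= line_window[1])
    excess = flux[in_line] - (slope * wave[in_line] + intercept)
    return np.trapz(excess, wave[in_line])   # integrated flux in erg/s/cm2
```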
http://arxiv.org/abs/2307.01687v1
20230704123641
Thermoacoustic Stabilization of a Sequential Combustor with Ultra-low-power Nanosecond Repetitively Pulsed Discharges
[ "Bayu Dharmaputra", "Sergey Shcherbanev", "Bruno Schuermans", "Nicolas Noiray" ]
physics.flu-dyn
[ "physics.flu-dyn", "physics.app-ph" ]
bayud@ethz.ch noirayn@ethz.ch [cor1]Corresponding authors CAPS Laboratory, Department of Mechanical and Process Engineering, ETH Zürich, 8092, Zürich, Switzerland This study demonstrates the stabilization of a sequential combustor with Nanosecond Repetitively Pulsed Discharges (NRPD). A constant pressure sequential combustor offers many key advantages compared to a conventional combustor, including, in particular, a higher fuel flexibility and a wider operational range. However, thermoacoustic instabilities remain a barrier to further widening the operational range of these combustors. In the past decades, both active and passive control strategies for gas turbine combustors have been studied. Passive control strategies to suppress these instabilities, such as Helmholtz dampers, have been used in some industrial systems thanks to their simplicity in terms of implementation. Active control strategies are however not found in practical combustors, mainly due to the lack of robust actuators able to operate in harsh conditions with sufficient control authority. In this study, we demonstrate that thermoacoustic instabilities can be suppressed by using a non-equilibrium plasma produced with NRPD in a lab-scale atmospheric sequential combustor operated at 73.4 kW of thermal power. We employ continuous NRPD forcing to influence the combustion process in the sequential combustor. The two governing parameters are the pulse repetition frequency (PRF) and the plasma generator voltage. We examine the effect of both parameters on the acoustic amplitude, the NO emissions, and the flame centre of mass. We observe that, for some operating conditions, with a plasma power of 1.1 W, which is as low as 1.5× 10^-3 percent of the thermal power of the flames, the combustor can be thermoacoustically stabilized. By increasing the power of the plasma, the acoustic amplitude can be further reduced, at the cost of a very small increase of the NO emissions. However, an additional increase of the plasma power to 81 W, which is 1.1× 10^-1 percent of the flame thermal power, increases the NO emissions significantly without a significant additional reduction of the acoustic amplitude. Furthermore, for some combinations of the plasma parameters, another thermoacoustic mode of the combustor at a different frequency can become unstable. This finding motivates further research on the optimization of the plasma parameters as a function of the thermoacoustic properties of the combustor where it is applied. This study is a pioneering effort in controlling the thermoacoustic stability of turbulent flames with plasma discharges at such low power compared to the thermal power of the sequential combustor. Keywords: Thermoacoustic, Plasma Assisted Combustion, Control, Sequential Combustor § NOVELTY AND SIGNIFICANCE In this study, we demonstrate that Nanosecond Repetitively Pulsed Discharges (NRPD) can be used as an actuator to stabilize a thermoacoustically unstable sequential combustor. We suppress instabilities with a mean plasma power that is 5 orders of magnitude lower than the flames' thermal power, i.e. more than 2 orders of magnitude lower than the current state of the art. This achievement therefore opens the door for commercial application of such a technology for continuous operation.
Furthermore, we examine the impact of varying the plasma parameters, specifically the pulse repetition frequency and the generator voltage, on the flame topology, and we investigate the resulting NO emissions. § AUTHORS CONTRIBUTIONS B.D. led the experimental investigations and designed the analyses. B.D. and S.S. performed the experiments and analysed the data. N.N. conceived the research idea. B.D., S.S., B.S. and N.N. discussed the results. B.D. drafted the manuscript with the support of N.N. The final version of the manuscript has been edited and approved by B.D., S.S., B.S. and N.N. § INTRODUCTION Gas turbines have played a significant role in global energy production. However, with the increasing proportion of renewable energy production and tightening emission restrictions, gas turbines now face new challenges, in particular the need for fuel flexibility, fast ramp-up times, and a wide operational range <cit.>. One major technological breakthrough of the recent past for gas turbine applications is the constant pressure sequential combustor (CPSC) <cit.>, which significantly improves the operational range with very low pollutant emissions, as well as the fuel flexibility <cit.>. Fuel flexibility includes the ability to be supplied with blends of hydrogen and natural gas, as well as with non-conventional fuels, such as those derived from waste processes or biomass gasification <cit.>. Notably, the combustion of pure H_2 in an academic CPSC configuration has been investigated recently in <cit.>. In CPSC configurations, the hot products from the first-stage combustor are diluted with additional fresh air bypassing the first stage before the sequential fuel is injected. This combustor architecture reduces the temperature of the vitiated air flow at the inlet of the sequential stage in order to prevent too fast autoignition of the sequential fuel near the injection region in poorly mixed conditions. Ensuring in this way a sufficient mixing time of the globally lean mixture of gas and vitiated air, the exothermal reactions of the autoignition process in the sequential stage occur in well-mixed conditions, and the NO_x emissions can thus be drastically reduced. Consequently, combustion in the sequential flame relies mostly on the autoignition mechanism <cit.>. Much like traditional combustors, CPSC combustors are also prone to thermoacoustic instabilities <cit.>. Thermoacoustic instabilities are challenging problems in gas turbines for power and propulsion applications <cit.>. These instabilities can lead to high amplitude acoustic pressure oscillations, possibly reaching a large fraction of the mean pressure, which could lead to flame flashback and, in extreme cases, to structural damage <cit.>. Hence, developing technologies for controlling these instabilities is an essential task for the safety and operability of gas turbines. In the past decades, both active and passive control strategies have been investigated in academic and industrial laboratories and implemented in real engines. Nonetheless, gas turbine manufacturers have so far usually opted for passive control strategies, which are more cost-effective. Indeed, passive damping strategies have been widely studied and applied in real combustors.
For example, the nonlinear behavior of Helmholtz resonators mounted on combustion chamber walls has been investigated in a recent study in order to draw design guidelines for avoiding failures of their damping effectiveness <cit.>. Furthermore, dampers based on interconnected cavities with broadband acoustic absorption capabilities were successfully implemented in large modern gas turbines <cit.>. However, the design of these passive dampers is still challenging for the following two reasons: First, it requires costly engine testing for obtaining a relatively precise prior knowledge of the difficult-to-predict thermoacoustic instabilities, in order to tune the damper geometry for an effective reduction of the acoustic amplitude. Second, for a given volume constraint for their implementation, there is always a trade-off to find between their broadbandness for addressing multiple instability frequencies and their effectiveness at a given frequency. In contrast, active control strategies, with proper parameter tuning, can adapt to the operating conditions of the system but, so far, their implementation in real engines has been hindered by the harsh thermodynamic and thermochemical conditions and by the lack of cost-effective and mechanically-robust actuation solutions. The main challenge of implementing active control strategies is thus finding suitable actuators <cit.>. For example, loudspeaker forcing to stabilize an unstable combustor by tailoring its acoustic boundary conditions was successfully achieved in an academic configuration operated at atmospheric pressure <cit.>, but could not be applied in a real engine. Another active control strategy, based on the fast modulation of the fuel mass flow, was successfully developed about thirty years ago for liquid sprays <cit.> and natural gas <cit.>. The latter technology has even been validated in heavy-duty gas turbines and was commercialised. Nonetheless, effective modulation of the pilot gas massflow cannot be achieved beyond 500 Hz, which prevents addressing the problem of high-frequency instabilities with the corresponding valves. Furthermore, such modulation of the fuel massflow could negatively impact the pollutant emissions. In this context, the search for alternative actuators for gas turbine combustors, with high control authority, low power consumption, and minimal additional emissions, is highly relevant for increasing the fuel and operational flexibility of future gas turbines. In this work, we focus on ultra-low-power plasma actuation, which has never been implemented in industrial systems so far, and we show that it is a very promising strategy for suppressing thermoacoustic instabilities in sequential combustors without increasing NO_x emissions. Non-equilibrium plasma discharges can be used to enhance and stabilize combustion reactions through thermochemical effects <cit.>. First, they can be used to enhance the ignition of fuel mixtures by creating free radicals and other active species.
In these plasmas, the substantial difference between electron and gas temperatures (T_e≫ T_gas) results in the efficient formation of active species and radicals through direct electron impact, which can help to ignite combustible mixtures and to initiate and promote combustion reactions. This contributes to a more complete combustion process and an extension of the lean flammability limits <cit.>. Second, non-equilibrium plasma discharges can also help to stabilize the combustion process. This is because the plasma can provide a steady stream of active species to the system, which can help to maintain a stable flame. This can be particularly useful in lean or unstable combustion environments, where the flame can easily be blown off or become unstable. Owing to their strong influence on the kinetics of the reactive mixture, several studies have been performed to characterize the effect of nanosecond repetitively pulsed discharges (NRPD) on the heat release rate oscillations of acoustically-forced flames. For instance, Lacoste et al. <cit.> have studied this effect in a single-stage swirl-stabilized combustor. It was shown that NRPD significantly affect the gain and phase of the flame transfer function (FTF) and might thereby influence the thermoacoustic stability of the combustor. In <cit.>, it was shown with a laminar flame that strong heat release rate modulation can be induced by periodic series of constant-voltage NRPD with a square-wave input. The forcing mechanism was mainly attributed to the increase of the local burning velocity close to the plasma region. In a similar set-up to the one in <cit.>, Moeck et al. <cit.> have successfully demonstrated the applicability of nanosecond plasma discharges to stabilize a linearly unstable combustor with active feedback control. By using an extended Kalman filter (EKF), the instantaneous phase of the acoustic pulsation was estimated and then fed to the gate signal for the actuation of the plasma generator. The plasma power required to stabilize the combustor was around 1 percent of the flame thermal power. Furthermore, Kim et al. <cit.> have demonstrated the capability of NRPD to stabilize a combustor at realistic low-power conditions of aero-engine combustors. Nanosecond plasma discharges in pin-to-pin configuration have shown high potential as a tool for second-stage flame stabilization in constant-pressure sequential combustors, as the hot reactive mixture in the sequential burner already undergoes radical-producing chemical reactions that precede autoignition. Xiong et al. <cit.> demonstrated that NRPD could significantly shorten the auto-ignition time of a CH_4 sequential flame with low electric power and acceptable NO emissions. In a more compact laboratory-scale sequential combustor, Shcherbanev et al. <cit.> demonstrated the effectiveness of plasma discharges in igniting very lean blends of hydrogen and natural gas, which could therefore help keeping the sequential flame alive. One significant advantage of the NRPD actuation is that the electrode system implementation does not require a drastic modification of the combustor geometry and could also be considered for retrofitting existing systems.
Another advantage of NRPD is their fast response time, which enables high frequency actuation without moving components that usually cause reliability and durability issues in mechanical actuators. Indeed, non-equilibrium plasma actuation can quickly respond to fluctuations and influence the flames, improving their stability and extending their lean flammability limit <cit.>. Finally, the mean NRPD power is known to be small compared to the thermal power of the flame <cit.>. It should be noted that the mean NRPD power reported in the literature pertains only to the electrical energy that gets transferred to the system, without accounting for the electrical energy necessary for the high voltage generator. However, if we consider an optimistic scenario where an exceptionally efficient high voltage generator is feasible, including the achievement of a perfect impedance matching at the electrodes, the generator's power requirements would match the energy deposited into the system. In the case of a sporadic use of the NRPD, for flame ignition assistance or during transient operation, the electric power requirements of a NRPD system are not a major driver in the development of plasma assisted combustion technologies. Although the mean NRPD electric power reported in previous works is of the order of 1 percent of the thermal power of the flame <cit.>, it is worth mentioning that the cost-benefit analysis of a NRPD system with an electric power of 1 percent of the combustor thermal power would not be straightforward for heavy duty gas turbines, for the following two reasons. First, 1 percent is still a large penalty for gas turbine manufacturers, which struggle to gain every 0.1 percent of engine thermal efficiency during stationary operation (over the last decade, this efficiency increased toward 65 percent for combined cycle power plants by only a couple of percentage points). Second, for an H-class gas turbine exhibiting a combustor with 1 GW of thermal power, it means that, assuming linear scaling, a system of 10 MW electric power would have to be developed just for the NRPD actuation, which is technically rather challenging. In the present work, we demonstrate that, for a sequential combustor operated at atmospheric pressure, successful actuation suppressing thermoacoustic instabilities can be achieved with a mean plasma power that is about 3 orders of magnitude lower than 1 percent of the thermal power, which would be much more realistic for implementation in practice (for 1 GW of thermal power, one would need about 10 kW of mean plasma power). Therefore, as research in this area continues, it is foreseen that such NRPD technology may be implemented in future gas turbines burning green H_2 in sequential combustors for compensating the intermittency of renewable sources. However, no study has attempted to utilize plasma discharges for stabilizing a thermoacoustically unstable sequential combustor. Since NRPD could influence the flame position and help anchoring the flame in a sequential combustor <cit.>, it is natural to hypothesize that the thermoacoustic stability could be influenced as well.
This study thus aims at introducing ultra-low-power nanosecond repetitively pulsed discharges (NRPD) to thermoacoustically stabilize a sequential combustor, which has never been attempted so far. Additionally, parametric studies on the plasma repetition frequency (PRF) and the generator voltage are performed to investigate the effectiveness of the actuator in suppressing the instability, as well as the associated NO emissions. § EXPERIMENTAL SETUP The lab-scale sequential combustor setup is depicted in figure <ref>. The setup consists of a plenum, a 4 × 4 array of jet flames anchored on a so-called matrix burner, a combustion chamber with 62 × 62 mm^2 cross section, a dilution air section, a sequential burner featuring a mixing channel with 25 × 38 mm^2 cross section in which secondary fuel is injected, and a sequential (second-stage) combustion chamber equipped with a motor-driven adjustable outlet orifice. This variable outlet geometry enables an online tuning of the acoustic reflection coefficient, and thus an independent control of the thermoacoustic instabilities, which is key for validating the NRPD-based control. The first-stage combustor is fed with a mixture of natural gas and air, with the air preheated to 230 °C and supplied from the plenum, while natural gas is added in the matrix burner, which corresponds to a technically premixed first stage. The thermal power of the first-stage combustor is 35 kW with an equivalence ratio of 0.7. A piezo sensor, denoted as Mic. 1 in the figure, is placed on a flush-mounted plate to monitor the acoustic pressure pulsation inside the first-stage combustor. A mass flow of 18 g/s of dilution air at 25 °C is introduced from the dilution air port and mixes with the hot gases from the first stage. A mixture of 0.07 g/s of hydrogen and 0.6 g/s of natural gas is injected through the sequential injector. The sequential injector features an X-lobe-shaped vortex generator that imparts a rotational motion to the vitiated flow, which enhances the mixing process. The total thermal power of the two flames is 73.4 kW. A pin-to-pin electrode configuration, with an inter-electrode distance of 5 mm, is located 10.3 cm downstream of the sequential fuel injector, and a gas analyzer (ABB-EL3040) probe is placed 45 cm from the outlet of the second-stage combustor to monitor the NO emissions. Another piezo sensor is placed downstream of the sequential flame to monitor the acoustic pressure pulsation in the second combustion chamber. As indicated above, the outlet of this chamber has an adjustable orifice to control the thermoacoustic stability of the system. OH chemiluminescence is used to characterize the sequential flame, with the camera capturing a portion of the mixing channel downstream of the electrodes. The intense light emission from the plasma discharges is masked by an optical screen. The recording setup comprises a LaVision Star X high-speed CMOS camera and a LaVision HS-IRO high-speed intensifier, equipped with a 45 mm CERCO UV lens (F/1.8) and an Edmund Optics optical bandpass filter (centered at 310 nm, FWHM 10 nm). To measure the energy deposition of the plasma, a current probe and a back-current shunt are placed on the anode and cathode to measure current and voltage. The plasma generator (FID) delivers high-voltage pulses with a 2-3 ns rise time and a pulse width of 10 ns.
A mixed-signal digital oscilloscope (Tektronix MDO3104) was used to record the current and voltage signals at 1 GHz bandwidth and 5 GHz sampling rate. A Pearson fast current monitor (model 6585, 0.5 V/A into 50 Ω) was used to measure the current. During testing, four generator voltages (11, 11.7, 12.5, and 14 kV) at three pulse repetition frequencies (10, 20, 40 kHz) were used, with an additional voltage value of 13.2 kV at 40 kHz. All experiments were performed using only the negative polarity of the applied pulses. The energy deposition measurement of a single pulse for continuous NRPD at 14 kV and 10 kHz is shown in figure <ref>. Note that, due to the mismatch between the generator impedance and the plasma impedance, oscillations in voltage and current are observed. The deposited energy is computed by taking the time integral of the product of voltage and current. At this condition, the mean energy deposition is around 2 mJ per pulse. The plasma at this energy level is classified as a spark plasma <cit.>. The resulting energy depositions at all conditions are shown in figure <ref>a. Note the logarithmic scale of the y-axis: the energy deposition increases exponentially with the generator voltage. Furthermore, except at 14 kV, the energy deposition per pulse decreases at higher frequency for the same voltage. This is probably due to interference between the incoming and reflected pulses inside the high-voltage cable. No plasma was observed at 11 kV and 40 kHz, and this data point is therefore not shown in the plot. By multiplying the energy deposition by the PRF, the mean plasma power is obtained and depicted in figure <ref>b. The highest power is 81 W, which amounts to 1.1 × 10^-1 % of the thermal power of the flames. The ratio of plasma to thermal power, denoted as η_p, is indicated on the right axis of figure <ref>b.
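To make this energy bookkeeping concrete, the following sketch (Python, illustrative only) computes the deposited energy per pulse as the time integral of the product of voltage and current and the resulting mean plasma power; the function names are ours, the 5 GHz sampling rate and the 73.4 kW thermal power are taken from the text, and the example numbers merely reproduce the reported orders of magnitude rather than the actual processing chain.

```python
import numpy as np

def pulse_energy(voltage, current, fs=5e9):
    """Energy deposited by a single pulse, E = int V(t) I(t) dt.

    voltage : V(t) in volts (oscilloscope trace of one pulse)
    current : I(t) in amperes (Pearson current monitor trace)
    fs      : sampling rate in Hz (5 GHz in the experiments)
    """
    dt = 1.0 / fs
    return np.trapz(voltage * current, dx=dt)  # joules

def mean_plasma_power(energy_per_pulse, prf):
    """Mean plasma power P = E_pulse * PRF."""
    return energy_per_pulse * prf  # watts

# Example with the orders of magnitude reported in the text:
# ~2 mJ per pulse at PRF = 10 kHz gives ~20 W of mean plasma power,
# and eta_p is this power normalized by the 73.4 kW flame thermal power.
P_mean = mean_plasma_power(2e-3, 10e3)
eta_p = P_mean / 73.4e3
print(f"mean plasma power = {P_mean:.1f} W, eta_p = {eta_p:.2e}")
```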
§ RESULTS Figure <ref> shows the effects of the plasma discharges on the stability of the sequential combustor at a PRF of 10 kHz and a generator voltage of 11.7 kV. The plasma was turned on at t = 5 s and turned off at t = 35 s. In figures <ref>a and <ref>b, the acoustic pressure signals from the first- and second-stage combustion chambers are presented, along with their corresponding power spectral densities. Without plasma discharges, the combustor exhibits a strong pulsation at a frequency of 330 Hz. The probability density of the time trace bandpass filtered around 330 Hz clearly shows that the system exhibits a stable limit cycle and is thus linearly unstable. Furthermore, the power spectral density of the first microphone shows a resonance peak at 260 Hz. It is worth mentioning that the thermoacoustic instability can alternatively be suppressed by adjusting the outlet orifice area at the considered operating point; the instability is therefore not due to the intermittent ignition kernels formed in the mixing channel. Immediately following the start of the plasma actuation, a burst in the mean value of the unfiltered dynamic pressure signal is observed, which then decays within a few seconds. This burst is attributed to the rapid change of flame position induced by the start of the NRPD and to a presumed abrupt variation of the pressure drop across the sequential burner. Nevertheless, its amplitude and relaxation time do not provide quantitative information about the actual evolution of the mean pressure in the combustor, because the piezoelectric sensor's signal is high-pass filtered in the data acquisition card, which is responsible for this apparent decay. It is possible that the rise in mean pressure is due to the plasma discharges changing the flow conditions around the flame. However, the purpose of this work is the study of thermoacoustic stability control with NRPD; therefore, the frequency band near the instability frequency is of interest. The acoustic signals are bandpass filtered around the thermoacoustic peak frequency of 330 Hz and are shown in figures <ref>c and <ref>d, together with their corresponding scaled probability density functions (PDFs) in the inset. As can be seen, when the plasma is on, the combustor becomes linearly stable. In contrast, without the plasma, the PDF of the acoustic pressure exhibits a bimodal distribution, which is a typical feature of a system undergoing a limit cycle. Additionally, the first harmonic at around 660 Hz is also observed in the power spectral density. Figures <ref>a to <ref>f show the OH chemiluminescence of the sequential combustion chamber and of a portion of the burner mixing channel. Figures <ref>a to <ref>c show the OH chemiluminescence at three different time instances before plasma initiation, while figures <ref>d to <ref>f show it after plasma activation. The PRF is 10 kHz with a generator voltage of 11.7 kV. The mean intensity within the red and blue squares in figures <ref>a to <ref>f is illustrated in figure <ref>g. Figure <ref>h depicts the acoustic pressure inside the first and sequential combustion chambers. It is noteworthy that, prior to plasma actuation, the intensity fluctuates at the same frequency as the acoustic pressure. Remarkably, the thermoacoustic limit cycle in the sequential combustor can be effectively suppressed with a mean plasma power of only 1.1 W, which is about 1.5 × 10^-3 percent of the thermal power of the flame. The mean plasma power in our case is similar to that in <cit.>; however, the thermal power of the flame in our case is 300 times higher than in their case. In figure <ref>, the frequency spectra of the pressure oscillations are displayed with the PRF fixed at 10 kHz and varying generator voltages. The unstable peak corresponding to the limit cycle around 330 Hz becomes smaller, corresponding to a thermoacoustic stabilization, and shifts to higher frequencies as the voltage and energy deposition increase. This frequency shift can be indirectly attributed to a change in the mean flame position, which will be quantified later. In figure <ref>a, another peak at around 260 Hz becomes more prominent as the voltage is increased, but the Gaussian-like PDF of the acoustic pressure filtered around that peak (not shown here) indicates that it remains a resonance peak, i.e., the thermoacoustic oscillations at that frequency are linearly stable. Notably, the first microphone records a more intense peak around 260 Hz compared to the second microphone, whereas for the mode at 330 Hz almost the same amplitudes are observed with both microphones.
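As an illustration of the signal processing used in this section, the sketch below estimates the power spectral density, applies a bandpass filter around the 330 Hz mode, and builds the PDF of the filtered pressure, whose bimodal shape indicates a limit cycle; the filter type, order, and band edges are assumptions introduced here for illustration and are not necessarily those used for the figures.

```python
import numpy as np
from scipy import signal

def psd(p, fs):
    """Welch power spectral density of the acoustic pressure signal."""
    f, Pxx = signal.welch(p, fs=fs, nperseg=2**14)
    return f, Pxx

def bandpass(p, fs, f_lo=300.0, f_hi=360.0, order=4):
    """Zero-phase Butterworth bandpass filter around the 330 Hz mode."""
    sos = signal.butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, p)

def pressure_pdf(p_filt, nbins=100):
    """Histogram-based PDF of the filtered pressure: a bimodal shape points to
    a limit cycle, a Gaussian-like shape to a linearly stable operating point."""
    hist, edges = np.histogram(p_filt, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```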
Another important aspect of the actuator's performance is its ability to stabilize the system quickly, which can be expressed as a decay rate. To measure this decay, in a similar way as in <cit.>, a periodic on-off cycle of plasma actuation is applied. The plasma is turned on for 5 seconds, followed by a 5-second off period, and the entire cycle is repeated for 5 minutes. The time trace of the first microphone's bandpass-filtered acoustic signal during this process is shown in figure <ref>, with the PRF set at 10 kHz and the generator voltage at 12.5 kV. The envelope A of the acoustic pressure signal is obtained from these data by computing the analytic signal using the Hilbert transform, and it is averaged over the cycles. Figure <ref> illustrates the distribution of the pressure envelope at different time points, with the black line indicating the mean value. This procedure is repeated at generator voltages of 11.7 kV and 14 kV. The resulting decay rates at the different voltages are shown in figure <ref>, where the envelope is normalized to its value at t = 0 for better comparison. It is evident that the envelope decays faster as the voltage increases. The system takes around 30 ms to reach a quasi-steady-state value with generator voltages of 12.5 and 14 kV, and the steady-state value is lower at higher voltages. Figure <ref> displays the time evolution of the flame centre of mass at a PRF of 10 kHz and various voltages. The chemiluminescence data are vertically integrated, and the centre of mass is computed along the streamwise direction. At 11 kV, the plasma has little effect on the flame, and the fluctuation around 330 Hz is still evident. With increasing voltage, the fluctuation of the flame centre of mass decreases more rapidly, which is strongly correlated with the pressure signal. Moreover, the steady-state value of the flame centre of mass with plasma actuation decreases as the voltage is increased, and the centre of mass shifts closer to the burner outlet. This shift is due to the higher energy deposition, resulting in an increased mean plasma power that enhances the autoignition process in the mixing section of the sequential burner more effectively. Although the decay rate becomes faster and the acoustic pressure amplitude smaller as the voltage increases, NO emissions increase slightly. Figure <ref> shows the root mean square of the acoustic pressure p_rms, the NO emissions, and the flame centre of mass with respect to the generator voltage. As can be seen, NO emissions increase from around 10 ppmvd at 11 kV to around 12 ppmvd at 14 kV. It is a well-known fact that spark plasmas can produce significant NO emissions <cit.>. However, in our configuration, the flame centre of mass is shifted upstream, resulting in an increased residence time of the burnt gases, which can also be the cause of the NO increase. Furthermore, the energy deposition increases by an order of magnitude, from 0.2 mJ to 2 mJ, as depicted in figure <ref>. According to <cit.>, the plasma regime changes from glow to spark. However, NO emissions only increase by less than 1 ppmvd. Therefore, the upstream shift of the flame centre of mass is suspected to be the dominant contributor to the increase in NO emissions. The exact mechanism behind this process is not yet clear, and further investigations are needed.
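The envelope extraction and decay-rate estimation described above can be sketched as follows; the 10 s cycle length follows from the 5 s on / 5 s off actuation, whereas the fit window and the assumption of a single exponential decay are simplifications introduced here for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(p_filt):
    """Instantaneous amplitude from the analytic signal (Hilbert transform)."""
    return np.abs(hilbert(p_filt))

def cycle_averaged_envelope(p_filt, fs, period=10.0, n_cycles=30):
    """Average the envelope over repeated plasma on-off cycles.

    period : duration of one on-off cycle in seconds (5 s on + 5 s off).
    """
    n_per = int(period * fs)
    segs = [envelope(p_filt[k * n_per:(k + 1) * n_per]) for k in range(n_cycles)]
    return np.mean(segs, axis=0)

def decay_rate(env_after_on, fs, t_fit=0.03):
    """Decay rate nu from a linear fit of log(A) over the first t_fit seconds
    after plasma activation, assuming A(t) ~ A(0) * exp(-nu * t)."""
    n = int(t_fit * fs)
    t = np.arange(n) / fs
    slope, _ = np.polyfit(t, np.log(env_after_on[:n] / env_after_on[0]), 1)
    return -slope  # 1/s
```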
The same routines were performed at higher PRFs, and the resulting root-mean-square (rms) pressure p_rms and NO emission maps are shown in figure <ref>. Since a thermoacoustic peak at around 260 Hz also needs to be considered in the power spectral density of the acoustic pressure, the bandpass filter was set to span from 200 Hz to 400 Hz for the rms calculation. Examining the map, we observe that, at 11.7 kV, the plasma discharges at PRF = 10 and 20 kHz are more effective than those at PRF = 40 kHz. This is consistent with the fact that the mean plasma power obtained at 40 kHz for this voltage is lower than the one at 10 kHz or 20 kHz, as shown in figure <ref>b. At 40 kHz and 11.7 kV, the plasma is thus not strong enough to affect the system. However, at 12.5 kV, the mean plasma powers at all PRFs are of the same order of magnitude (see figure <ref>b). When the PRF is set to 40 kHz, the rms pressure increases. The same behavior is observed at 13.2 kV, but the system is stabilized again when the voltage is set to 14 kV. Because of the strong pulsation at V = 13.2 kV and PRF = 40 kHz, the NO measurement could not be performed. Nevertheless, the NO emission map clearly shows an increasing trend towards high PRF and high generator voltage, which is consistent with the findings of <cit.>, where the effect of plasma on the sequential flame position and on the NO emissions was investigated in another sequential combustor. The contour maps show that, at PRF = 10 kHz and with a pulse voltage above 11 kV, the thermoacoustic eigenmode of the combustor can be effectively stabilized without compromising the NO emissions. The dependence of the NO emissions on the flame centre of mass and on the mean plasma power for all operating points is further discussed in the subsequent paragraphs. To shed light on the peculiar behavior at PRF = 40 kHz, the power spectral densities of both microphones are shown in figure <ref>. Notably, at 12.5 kV and 13.2 kV, the mode at 260 Hz becomes self-excited and exhibits a very high amplitude of 180 dB, while the mode at 330 Hz is stabilized. The PDF of the acoustic pressure shown in figure <ref>c exhibits, in the case of repetitive pulses at 13.2 kV, a typical feature of an intermittently unstable thermoacoustic system. This observation is also confirmed by the time trace of the filtered signal. According to <cit.>, such intermittent behavior can be caused by random fluctuations of the time delay of the flame response to acoustic perturbations around the mean time delay. For time delay fluctuations that can be described by an Ornstein-Uhlenbeck process, intermittent high-amplitude bursts of oscillations occur when these fluctuations induce excursions of the system into linearly unstable conditions, and when the correlation time of the fluctuations is long enough to allow the thermoacoustic system to adapt to the random changes of stability <cit.>. In the present case, the fluctuating time history of the ignition kernels produced by the plasma could be the source of this time delay perturbation. However, to identify the exact reason for this behavior, further thermoacoustic analysis is required, which will be the subject of future investigations.
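To give a feeling for the mechanism invoked in <cit.>, the following sketch simulates time-delay fluctuations modelled as an Ornstein-Uhlenbeck process and measures how often the delay wanders into a hypothetical linearly unstable region; all numerical values (mean delay, noise intensity, correlation time, threshold) are purely illustrative and are not fitted to the present combustor.

```python
import numpy as np

def ou_time_delay(tau_mean, sigma, tau_corr, dt, n_steps, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process for the
    flame time delay: d tau = -(tau - tau_mean) / tau_corr dt + sigma dW."""
    rng = np.random.default_rng(seed)
    tau = np.empty(n_steps)
    tau[0] = tau_mean
    for k in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        tau[k] = tau[k - 1] - (tau[k - 1] - tau_mean) / tau_corr * dt + sigma * dW
    return tau

# Illustrative values only: a 2 ms mean delay with a 50 ms correlation time.
dt = 1e-4
tau = ou_time_delay(tau_mean=2e-3, sigma=2e-3, tau_corr=50e-3, dt=dt, n_steps=100_000)

# Fraction of time spent beyond a hypothetical delay threshold at which the
# system becomes linearly unstable; excursions lasting longer than the
# thermoacoustic growth time produce intermittent high-amplitude bursts.
tau_unstable = 2.5e-3
print("fraction of time in the unstable region:", np.mean(tau > tau_unstable))
```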
The OH chemiluminescence signals at 13.2 kV and PRF = 40 kHz, recorded at different time instances during a time interval when the NRPD was activated, are shown in figures <ref>a to <ref>f. When the NRPD actuation is turned on with these pulse generator settings, the thermoacoustic dynamics changes from a robust limit cycle at 330 Hz to a limit cycle at 260 Hz, and, in contrast with the PRFs of 10 kHz and 20 kHz, the thermoacoustic system is not stabilized. As can be seen in figure <ref>, OH chemiluminescence is visible inside the mixing channel after plasma actuation, due to the formation of ignition kernels induced by the NRPD. The time trace of the mean OH intensity in figure <ref>g shows that the OH chemiluminescence intensity inside the mixing channel fluctuates together with the one in the combustion chamber at a frequency of 260 Hz, which is also the frequency of the pressure pulsation. The acoustic pressure time trace shown in figure <ref>h indicates that, prior to plasma actuation, both microphones recorded similar acoustic pressure amplitudes. However, after the NRPD are applied, the acoustic pressure pulsation in the first-stage combustor is significantly higher than that in the sequential combustor. This observation is consistent with figure <ref>, which shows a 20 dB difference in power spectral density at 260 Hz between the two microphones. It appears that the thermoacoustic mode at 260 Hz is more localized inside the first combustor than in the second one. However, further investigations using a Helmholtz solver or a thermoacoustic network model will be needed to study the modes of the combustor. The plasma discharges are visualized in figure <ref>. The images are phase averaged with respect to the bandpass-filtered acoustic signal of the first microphone. During an oscillation cycle, when the acoustic pressure in the first-stage combustor reaches its maximum, the plasma bends more towards the outlet of the sequential burner, whereas at the other phase angles the discharge channels are relatively straight. The plasma bending effect is similar to the one observed in <cit.>. In that reference, the bending occurs because the inter-pulse time of the discharges is close to the convective time, but, in contrast to the present work, there was no thermoacoustic instability at the considered operating conditions. In the present work, the thermoacoustic instability leads to the synchronization of the periodic plasma channel bending with the acoustic field. Figure <ref>a shows the NO emissions of the combustor with the plasma operated at different NRPD voltages and PRFs, plotted against the relative distance between the emission probe location and the flame centre of mass. One measurement point without plasma, indicated by a red cross in figure <ref>a, is obtained by stabilizing the combustor through an adjustment of the outlet orifice. The maximum NO emission of approximately 17.65 ppmvd occurs at a generator voltage of 14 kV and a PRF of 40 kHz, with a general increasing trend observed as the relative distance increases.
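A possible implementation of the phase-averaging procedure used for these images is sketched below; it assumes that the image sequence and the bandpass-filtered pressure signal have already been synchronized and resampled to the camera frame times, which is a simplification of the actual processing.

```python
import numpy as np
from scipy.signal import hilbert

def phase_average_images(images, p_filt_at_frames, n_bins=8):
    """Phase-average an image sequence with respect to the acoustic cycle.

    images           : array of shape (n_frames, ny, nx)
    p_filt_at_frames : bandpass-filtered pressure evaluated at the frame times
    n_bins           : number of phase bins over one oscillation cycle
                       (assumes every bin contains at least one frame)
    """
    # Instantaneous phase of the acoustic oscillation from the analytic signal.
    phase = np.angle(hilbert(p_filt_at_frames))  # in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, bins) - 1, 0, n_bins - 1)
    averaged = np.stack([images[idx == b].mean(axis=0) for b in range(n_bins)])
    return averaged  # shape (n_bins, ny, nx)
```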
It is worth noting that high PRF and voltage can cause early ignition inside the sequential burner mixing channel, leading to a reduction of the mixing quality between the vitiated flow and the secondary fuel and, consequently, to an increase in NO emissions. In figure <ref>b, a weak correlation between NO emissions and mean plasma power is observed at power levels ranging from 0.1 to 10 W. However, as the mean plasma power increases to 20-81 W, a positive correlation is observed, potentially due to the residence time of the burnt gases or to plasma-generated NO. Still, as the goal of the NRPD actuation in this work is to thermoacoustically stabilize the sequential combustor, a generator voltage of 11.7 kV and a PRF of 10 kHz is the optimum, because the plasma does not lead to an increase of the NO emissions compared to the non-actuated operation. Furthermore, the present ultra-low-power NRPD-based control strategy for sequential combustors opens other possibilities, such as thermoacoustic instability control during transient operation by employing a feedback-loop-based strategy similar to <cit.>. With such a strategy, the duty cycle of the plasma could be reduced to about 50%, yielding a lower mean plasma power and possibly lower NO emissions. § CONCLUSIONS AND OUTLOOK This study demonstrates that ultra-low-power non-equilibrium plasma discharges can serve as an effective actuator to stabilize a thermoacoustically unstable sequential combustor. Indeed, the mean power of the plasma produced by the NRPD that achieves thermoacoustic stabilization can be 5 orders of magnitude lower than the flame thermal power: with a mean plasma power of 1.1 W, which is 1.5 × 10^-3 % of the flame thermal power of 73.4 kW, the thermoacoustic limit cycle in the sequential combustor was successfully stabilized. At this condition, there is practically no additional NO emission compared to the situation where the thermoacoustic mode is stabilized by changing the outlet orifice geometry of the combustor with a motor-driven water-cooled piston. However, at PRF = 40 kHz and generator voltages of 12.5 kV and 13.2 kV, another thermoacoustic mode at 260 Hz becomes self-excited. Therefore, now that an effective ultra-low-power actuator has been found for sequential combustors in the form of NRPD in the sequential burner, a significant part of the research efforts should concentrate on the development of feedback control for ensuring an optimum trade-off between global thermoacoustic stability and NO emissions during steady and transient operation. In addition, further investigations are required to understand the stabilization mechanism as well as the mode switching observed in the experiment.
Finally, demonstrating the practicality of this NRPD actuation at elevated pressures will be the next crucial step to evaluate the applicability of NRPD in real systems and to develop predictive tools for the interaction between plasma kinetics and combustion reactions in the thermochemical environment of the turbulent sequential burner. § ACKNOWLEDGEMENTS This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No [820091]).